CoRL 2020: Guaranteeing Safety of Learned Perception Modules via Measurement-Robust Control Barrier Functions

Sarah Dean, Andrew J. Taylor, Ryan K. Cosner, Benjamin Recht, Aaron D. Ames. [pdf]


Abstract:

Modern nonlinear control theory seeks to develop feedback controllers that endow systems with properties such as safety and stability. The guarantees ensured by these controllers often rely on accurate estimates of the system state for determining control actions. In practice, measurement model uncertainty can lead to error in state estimates that degrades these guarantees. In this paper, we seek to unify techniques from control theory and machine learning to synthesize controllers that achieve safety in the presence of measurement model uncertainty. We define the notion of a Measurement-Robust Control Barrier Function (MR-CBF) as a tool for determining safe control inputs when facing measurement model uncertainty. Furthermore, MR-CBFs are used to inform sampling methodologies for learning-based perception systems and quantify tolerable error in the resulting learned models. We demonstrate the efficacy of MR-CBFs in achieving safety with measurement model uncertainty on a simulated Segway system.

This is work performed in collaboration with Sarah Dean and Ben Recht (UC Berkeley) and Andrew Taylor and Aaron Ames (Caltech). It was originally published at the 2020 Conference on Robotic Learning.

The full article can be found here (https://arxiv.org/pdf/2010.16001.pdf).

Below is a simplified version of the paper.

Introduction

Safety is really important, but most of our methods for assuring safety rely on highly accurate system measurements. We explored the problem of ensuring safety despite measurement errors, and we apply our solution to a simulated Segway system with a camera-in-the-loop sensor.

Background

When designing a system we often require that the final design is "safe". But what exactly does it mean to be safe? For our purposes we define a safe set as a region of the state space where our system is considered safe. For example, an autonomous vehicle is safe if it's on the road and unsafe if it's not. More precisely, we say that a system is safe if that safe set is *invariant*.

Safety (Invariance): A system is safe if starting in the safe set implies that it will stay in the safe set.

To ensure this type of safety we consider the dynamics of the system:

$$\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x}) + \mathbf{g}(\mathbf{x})\mathbf{u}$$

and a safe set defined as the 0-superlevel set of a continuously differentiable safety function \(h\) (i.e., the set of states where \(h(\mathbf{x}) \geq 0\)).
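For concreteness, a hypothetical safety function for the Segway example discussed later could encode an angle window \(|\theta| \leq \theta_{\max}\):

$$h(\mathbf{x}) = \theta_{\max}^2 - \theta^2,$$

which is nonnegative exactly when the tilt angle is inside the window. (This particular \(h\) is an illustration, not necessarily the one used in the paper.)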

This function \(h\) is a Control Barrier Function if there exists inputs such that:

$$\dot{h}( \mathbf{x}, \mathbf{u}) \geq - \alpha(h(\mathbf{x})).$$

When this inequality holds, the system is guaranteed to be safe (Theorem 1).
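In practice this inequality is typically enforced pointwise in time by filtering a desired input through a small optimization problem (a CBF-QP). Below is a minimal Python sketch of that standard construction, not the paper's implementation; the names `f`, `g`, `h`, `grad_h`, and `alpha` are placeholders for a concrete system.

```python
import numpy as np

def cbf_safety_filter(x, u_des, f, g, h, grad_h, alpha):
    """Return the input closest to u_des (in the 2-norm) that satisfies
    the CBF condition  L_f h(x) + L_g h(x) u >= -alpha(h(x)).
    With a single affine constraint the QP has a closed-form solution."""
    Lf_h = grad_h(x) @ f(x)            # drift part of h-dot
    Lg_h = grad_h(x) @ g(x)            # input part of h-dot
    slack = Lf_h + Lg_h @ u_des + alpha(h(x))
    if slack >= 0 or np.allclose(Lg_h, 0):
        # Desired input already satisfies the condition
        # (or no input can change h-dot at this state).
        return u_des
    # Smallest correction (in the 2-norm) that restores the inequality.
    return u_des - slack * Lg_h / (Lg_h @ Lg_h)
```

A real controller would replace this closed-form step with an actual QP solver once input bounds or multiple constraints are involved.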

New Theory: Measurement-Robust Control Barrier Functions

When working with real systems we often can’t measure the state exactly, but Control Barrier Functions require the current value of \(\mathbf{x}\) to determine safe inputs. In this work we found a way of extending existing CBF theory to ensure that the system remains safe despite these measurement errors.

The CBF \(h\) is Measurement Robust if there exists a controller that satisfies the constraint:

$$\dot{h}(\hat{\mathbf{x}},\mathbf{u}) - \epsilon (\mathfrak{L}_{L_fh} + \mathfrak{L}_{\alpha \circ h} + \mathfrak{L}_{L_gh} ||\mathbf{u}||_2) \geq - \alpha ( h (\hat{\mathbf{x}}))$$

The difference between this Measurement-Robust CBF condition and the standard CBF condition is the addition of the term:

$$-\epsilon (\mathfrak{L}_{L_fh} + \mathfrak{L}_{\alpha \circ h} + \mathfrak{L}_{L_gh} ||\mathbf{u}||_2) $$

Here \(\epsilon\) represents how bad our measurement is (a bound on the state estimation error) and the \(\mathfrak{L}\) terms are Lipschitz constants that represent how “smooth” our dynamics are. Intuitively, if the dynamics are really “smooth” then small measurement errors will have little effect, but if the dynamics aren’t smooth and change really quickly then small measurement errors can result in large mispredictions of the system’s behavior.
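To make the constraint concrete, here is a small Python sketch that checks the MR-CBF inequality at a state estimate. All arguments are placeholders (the Lipschitz constants written as \(\mathfrak{L}\) in the text are passed in as `lip_*` values); this is a sketch under those assumptions, not the paper's code.

```python
import numpy as np

def mr_cbf_condition_holds(x_hat, u, Lf_h, Lg_h, h, alpha,
                           eps, lip_Lfh, lip_ah, lip_Lgh):
    """Check the MR-CBF inequality at the state *estimate* x_hat:
        L_f h + L_g h @ u - eps*(lip_Lfh + lip_ah + lip_Lgh*||u||_2)
            >= -alpha(h(x_hat)).
    eps bounds the measurement error; lip_* are the Lipschitz constants
    of the corresponding terms (assumed known for the system)."""
    h_dot_est = Lf_h(x_hat) + Lg_h(x_hat) @ np.atleast_1d(u)
    robustness = eps * (lip_Lfh + lip_ah + lip_Lgh * np.linalg.norm(u))
    return h_dot_est - robustness >= -alpha(h(x_hat))
```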

Learning for Measurement Model Uncertainty Reduction

Because the input, \(\mathbf{u}\), appears in this robustness term, it is possible that no inputs exist that ensure safety. In particular, our formulation implies that if the following condition does not hold, then it is impossible to render the system safe:

$$\epsilon \leq \max\left\{ \frac{||L_gh(\hat{\mathbf{x}})||_2}{\mathfrak{L}_{L_gh}}, \frac{L_fh(\hat{\mathbf{x}}) + \alpha(h(\hat{\mathbf{x}}))}{\mathfrak{L}_{L_fh} + \mathfrak{L}_{\alpha\circ h}} \right\}.$$

In the context of machine learning, this inequality suggests the use of a sampling scheme that will ensure that the measurement error never reaches an unsafe level.
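As a sketch of how this bound might be used, the function below evaluates the right-hand side at a state estimate; a learned perception model would then need to be trained (for example, by sampling densely enough) so that its error stays below this value everywhere in the safe set. The function names are placeholders, and the two branches correspond to relying on a large input versus relying on the drift term with \(\mathbf{u} = 0\).

```python
import numpy as np

def max_tolerable_error(x_hat, Lf_h, Lg_h, h, alpha,
                        lip_Lfh, lip_ah, lip_Lgh):
    """Largest measurement error eps for which the MR-CBF constraint can
    still be met at x_hat, following the bound above."""
    via_input = np.linalg.norm(Lg_h(x_hat)) / lip_Lgh                  # scale up ||u||
    via_drift = (Lf_h(x_hat) + alpha(h(x_hat))) / (lip_Lfh + lip_ah)   # take u = 0
    return max(via_input, via_drift)
```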

Simulation Results

Due to COVID-19 restrictions, the experiments for this work were performed exclusively in simulation, using a simulated Segway. The system was considered safe as long as it remained upright within a certain angle window.

[Figure: the simulated Segway environment]

Two types of measurement error were considered: worst-case error synthetically added to the measurement, and the error incurred when estimating the state with a learned model that predicted position from camera data.

In both cases the standard CBF controller failed to ensure safety, while the MR-CBF controller succeeded.