On The Robustness of Safe Reinforcement Learning Under Observational Perturbations
ABSTRACT
Safe reinforcement learning (RL) trains a policy to maximize the task reward
while satisfying safety constraints. While prior works focus on the performance
optimality, we find that the optimal solutions of many safe RL problems are not
robust and safe against carefully designed observational perturbations. We formally
analyze the unique properties of designing effective observational adversarial
attackers in the safe RL setting. We show that baseline adversarial attack techniques
for standard RL tasks are not always effective for safe RL and propose two new
approaches - one maximizes the cost and the other maximizes the reward. One
interesting and counter-intuitive finding is that the maximum reward attack is strong,
as it can both induce unsafe behaviors and make the attack stealthy by maintaining
the reward. We further propose a robust training framework for safe RL and
evaluate it via comprehensive experiments. This paper provides pioneering work that
investigates the safety and robustness of RL under observational attacks for future
safe RL studies. Code is available at: https://github.com/liuzuxin/safe-rl-robustness
1 INTRODUCTION
Despite the great success of deep reinforcement learning (RL) in recent years, it is still challenging
to ensure safety when deploying learned policies in the real world. Safe RL tackles the problem by solving a
constrained optimization that maximizes the task reward while satisfying safety constraints (Brunke
et al., 2021), which has been shown to be effective in learning a safe policy in many tasks (Zhao et al., 2021;
Liu et al., 2022; Sootla et al., 2022b). The success of recent safe RL approaches leverages the power
of neural networks (Srinivasan et al., 2020; Thananjeyan et al., 2021). However, it has been shown
that neural networks are vulnerable to adversarial attacks – a small perturbation of the input data may
lead to a large variance of the output (Machado et al., 2021; Pitropakis et al., 2019), which raises a
concern when deploying a neural network RL policy to safety-critical applications (Akhtar & Mian,
2018). While many recent safe RL methods with deep policies can achieve outstanding constraint
satisfaction in noise-free simulation environments, such a concern regarding their vulnerability under
adversarial perturbations has not been studied in the safe RL setting. We consider the observational
perturbations that commonly exist in the physical world, such as unavoidable sensor errors and
upstream perception inaccuracy (Zhang et al., 2020a).
Several recent works on observational robust RL have shown that deep RL agents can be attacked
via sophisticated observation perturbations, drastically decreasing their rewards (Huang et al., 2017;
Zhang et al., 2021). However, the robustness concept and adversarial training methods in standard RL
settings may not be suitable for safe RL because of an additional metric that characterizes the cost of
constraint violations (Brunke et al., 2021). The cost should be more important than the measure of
reward, since any constraint violations could be fatal and unacceptable in the real world (Berkenkamp
et al., 2017). For example, consider the autonomous vehicle navigation task where the reward is
to reach the goal as fast as possible and the safety constraint is to not collide with obstacles, then
sacrificing some reward is not comparable with violating the constraint because the latter may cause
catastrophic consequences. However, we find little research formally studying the robustness in
the safe RL setting with adversarial observation perturbations, while we believe this should be an
important aspect in the safe RL area, because a vulnerable policy under adversarial attacks cannot
be regarded as truly safe in the physical world.
We aim to address the following questions in this work: 1) How vulnerable would a learned RL agent
be under observational adversarial attacks? 2) How to design effective attackers in the safe RL setting?
3) How to obtain a robust policy that can maintain safety even under worst-case perturbations? To
answer them, we formally define the observational robust safe RL problem and discuss how to
evaluate the adversary and robustness of a safe RL policy. We also propose two strong adversarial
attacks that can induce the agent to perform unsafe behaviors and show that adversarial training can
help improve the robustness of constraint satisfaction. We summarize the contributions as follows.
1. We formally analyze the policy vulnerability in safe RL under observational corruptions, investi-
gate the observational-adversarial safe RL problem, and show that the optimal solutions of safe
RL problems are vulnerable under observational adversarial attacks.
2. We find that existing adversarial attacks focusing on minimizing agent rewards do not always work,
and propose two effective attack algorithms with theoretical justifications – one directly maximizes
the cost, and one maximizes the task reward to induce a tempting but risky policy. Surprisingly, the
maximum reward attack is very strong in inducing unsafe behaviors, both in theory and practice.
We believe this property has been overlooked because maximizing reward is the optimization goal for standard
RL, yet here it leads to risky and stealthy attacks on safety constraints.
3. We propose an adversarial training algorithm with the proposed attackers and show contraction
properties of their Bellman operators. Extensive experiments in continuous control tasks show
that our method is more robust against adversarial perturbations in terms of constraint satisfaction.
2 RELATED WORK
Safe RL. One type of approach utilizes domain knowledge of the target problem to improve the
safety of an RL agent, such as designing a safety filter (Dalal et al., 2018; Yu et al., 2022), assuming
sophisticated system dynamics model (Liu et al., 2020; Luo & Ma, 2021; Chen et al., 2021), or
incorporating expert interventions (Saunders et al., 2017; Alshiekh et al., 2018). Constrained Markov
Decision Process (CMDP) is a commonly used framework to model the safe RL problem, which can
be solved via constrained optimization techniques (Garcıa & Fernández, 2015; Gu et al., 2022; Sootla
et al., 2022a; Flet-Berliac & Basu, 2022). The Lagrangian-based method is a generic constrained
optimization algorithm to solve CMDP, which introduces additional Lagrange multipliers to penalize
constraint violations (Bhatnagar & Lakshmanan, 2012; Chow et al., 2017; As et al., 2022). The
multiplier can be optimized via gradient descent together with the policy parameters (Liang et al.,
2018; Tessler et al., 2018), and can be easily incorporated into many existing RL methods (Ray et al.,
2019). Another line of work approximates the non-convex constrained optimization problem with
low-order Taylor expansions and then obtains the dual variable via convex optimization (Yu et al.,
2019; Yang et al., 2020; Gu et al., 2021; Kim & Oh, 2022). Since the constrained optimization-based
methods are more general, we focus our discussion of safe RL on them.
Robust RL. The robustness definition in the RL context has many interpretations (Sun et al., 2021;
Moos et al., 2022; Korkmaz, 2023), including the robustness against action perturbations (Tessler
et al., 2019), reward corruptions (Wang et al., 2020; Lin et al., 2020; Eysenbach & Levine, 2021),
domain shift (Tobin et al., 2017; Muratore et al., 2018), and dynamics uncertainty (Pinto et al.,
2017; Huang et al., 2022). The works most related to ours investigate the observational robustness
of an RL agent under observational adversarial attacks (Zhang et al., 2020a; 2021; Liang et al.,
2022; Korkmaz, 2022). It has been shown that the neural network policies can be easily attacked by
adversarial observation noise and thus lead to much lower rewards than the optimal policy (Huang
et al., 2017; Kos & Song, 2017; Lin et al., 2017; Pattanaik et al., 2017). However, most of the robust
RL approaches model the attack and defense regarding the reward, while the robustness regarding
safety, i.e., constraint satisfaction for safe RL, has not been formally investigated.
as π ◦ ν := π(a|s̃) = π(a|ν(s)), as the state is first contaminated by ν and then used by the operator
π. Note that the adversary does not modify the original CMDP and true states in the environment,
but only the input of the agent. This setting mimics realistic scenarios, for instance, the adversary
could be the noise from the sensing system or the errors from the upstream perception system.
Constraint satisfaction is the top priority in safe RL, since violating constraints in safety-critical
applications can be unaffordable. In addition, the reward metric is usually used to measure the agent’s
performance in finishing a task, so significantly reducing the task reward may warn the agent of the
existence of attacks. As a result, a strong adversary in the safe RL setting aims to generate more
constraint violations while maintaining high rewards to make the attack stealthy. In contrast, existing
adversaries on standard RL aim to reduce the overall reward. Concretely, we evaluate the adversary
performance for safe RL from two perspectives:
Definition 4 (Attack effectiveness). $J_E(\nu, \pi)$ is defined as the increased cost value under the adversary: $J_E(\nu, \pi) = V_c^{\pi\circ\nu}(\mu_0) - V_c^{\pi}(\mu_0)$. An adversary $\nu$ is effective if $J_E(\nu, \pi) > 0$.
The effectiveness metric measures an adversary’s capability of attacking the safe RL agent to violate
constraints. We additionally introduce another metric to characterize the adversary’s stealthiness w.r.t.
the task reward in the safe RL setting.
Definition 5 (Reward stealthiness). $J_S(\nu, \pi)$ is defined as the increased reward value under the adversary: $J_S(\nu, \pi) = V_r^{\pi\circ\nu}(\mu_0) - V_r^{\pi}(\mu_0)$. An adversary $\nu$ is stealthy if $J_S(\nu, \pi) \ge 0$.
Note that the stealthiness concept is widely used in supervised learning (Sharif et al., 2016; Pitropakis
et al., 2019). It usually means that the adversarial attack should be covert to human eyes regarding the
input data so that it can hardly be identified (Machado et al., 2021). While the stealthiness regarding
the perturbation range is naturally satisfied based on the perturbation set definition, we introduce
another level of stealthiness in terms of the task reward in the safe RL task. In some situations, the
agent might easily detect a dramatic reward drop. A stealthier attack maintains the agent's
task reward while increasing constraint violations; see Appendix B.1 for more discussion.
In practice, the power of the adversary is usually restricted (Madry et al., 2017; Zhang et al., 2020a),
such that the perturbed observation will be limited within a pre-defined perturbation set B(s):
∀s ∈ S, ν(s) ∈ B(s). Following convention, we define the perturbation set Bpϵ (s) as the ℓp -ball
around the original observation: ∀s′ ∈ Bpϵ (s), ∥s′ − s∥p ≤ ϵ, where ϵ is the ball size.
3.3 VULNERABILITY OF AN OPTIMAL POLICY UNDER ADVERSARIAL ATTACKS
We aim to design strong adversaries that are effective in making the agent unsafe while remaining
reward stealthy. Motivated by Lemma 1, we propose the Maximum Reward (MR) attacker that
corrupts the observation by maximizing the reward value: $\nu_{MR} = \arg\max_\nu V_r^{\pi\circ\nu}(\mu_0)$.
Proposition 1. For an optimal policy $\pi^* \in \Pi$, the MR attacker is guaranteed to be reward stealthy
and effective, given a large enough perturbation set $B_p^\epsilon(s)$ such that $V_r^{\pi^*\circ\nu_{MR}} > V_r^{\pi^*}$.
The MR attacker is counter-intuitive because maximizing the reward is exactly the objective of standard RL.
This is an interesting phenomenon worth highlighting: we observe that the MR attacker effectively makes the
optimal policy unsafe while remaining stealthy regarding the reward in the safe RL setting. The proof is
given in Appendix A.1. If we enlarge the policy space from Π : S × A → [0, 1] to an augmented
space Π̄ : S × A × O → [0, 1], where O = {0, 1} is the space of indicator, we can further observe
the following important property for the optimal policy:
Lemma 2. The optimal policy $\pi^* \in \bar{\Pi}$ of a tempting safe RL problem satisfies $V_c^{\pi^*}(\mu_0) = \kappa$.
The proof is given in Appendix A.2. The definition of the augmented policy space is commonly used
in hierarchical RL and can be viewed as a subset of option-based RL (Riemer et al., 2018; Zhang
& Whiteson, 2019). Note that Lemma 2 holds in expectation rather than for a single trajectory. It
suggests that the optimal policy in a tempting safe RL problem will be vulnerable as it is on the
safety boundary, which motivates us to propose the Maximum Cost (MC) attacker that corrupts the
observation of a policy $\pi$ by maximizing the cost value: $\nu_{MC} = \arg\max_\nu V_c^{\pi\circ\nu}(\mu_0)$.
It is apparent that the MC attacker is effective against the optimal policy with a large enough
perturbation range, since we directly solve for the adversarial observation that maximizes the
constraint violations. Therefore, as long as νMC can lead to a policy that has a higher cost return than
π ∗ , it is guaranteed to be effective in making the agent violate the constraint based on Lemma 2.
Practically, given a fixed policy $\pi$ and its critics $Q_f^\pi(s,a), f \in \{r, c\}$, we obtain the corrupted
observation $\tilde{s}$ of $s$ from the MR and MC attackers by solving:
$$\nu_{MR}(s) = \arg\max_{\tilde{s}\in B_p^\epsilon(s)} \mathbb{E}_{\tilde{a}\sim\pi(a|\tilde{s})}\left[Q_r^\pi(s,\tilde{a})\right], \qquad \nu_{MC}(s) = \arg\max_{\tilde{s}\in B_p^\epsilon(s)} \mathbb{E}_{\tilde{a}\sim\pi(a|\tilde{s})}\left[Q_c^\pi(s,\tilde{a})\right] \qquad (2)$$
Suppose the policy π and the critics Q are all parametrized by differentiable models such as neural
networks, then we can back-propagate the gradient through Q and π to solve the adversarial observa-
tion s̃. This is similar to the policy optimization procedure in DDPG (Lillicrap et al., 2015), whereas
we replace the optimization domain from the policy parameter space to the observation space Bpϵ (s).
The attacker implementation details can be found in Appendix C.1.
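As a concrete illustration, the following is a minimal sketch of how Eq. (2) could be solved with projected gradient ascent on the observation, in the spirit of the DDPG-style back-propagation described above. The names (attack_observation, policy, q_critic, n_steps, lr) are illustrative placeholders and this is not the authors' exact implementation; see Appendix C.1 for their details.

```python
import torch

def attack_observation(s, policy, q_critic, eps, n_steps=10, lr=0.1):
    """Projected gradient ascent on the observation within an l-infinity ball.

    Passing the reward critic Q_r as `q_critic` yields the MR attacker and
    passing the cost critic Q_c yields the MC attacker (Eq. 2). `policy` is
    assumed to return a differentiable (reparameterized) action sample.
    """
    s = torch.as_tensor(s, dtype=torch.float32)
    s_adv = s.clone().requires_grad_(True)
    for _ in range(n_steps):
        a = policy(s_adv)                       # action sampled from pi(.|s_adv)
        q_value = q_critic(s, a).mean()         # Q is evaluated at the true state s
        grad, = torch.autograd.grad(q_value, s_adv)
        with torch.no_grad():
            s_adv += lr * grad.sign()           # FGSM-style ascent step
            s_adv.clamp_(min=s - eps, max=s + eps)  # project back onto B_eps(s)
    return s_adv.detach()
```

At evaluation time, the corrupted observation returned by such a routine is fed to the policy while the environment itself still evolves from the true state.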
3.4 THEORETICAL ANALYSIS OF ADVERSARIAL ATTACKS
Theorem 1 provides the theoretical foundation of Bellman operators that require optimal and deter-
ministic adversaries in the next section. The proof is given in Appendix A.3. We can also obtain the
upper bound of the constraint violation induced by the adversarial attack at state $s$. Denote $S_c$ as the set of unsafe
states that have non-zero cost, $S_c := \{s' \in S : c(s,a,s') > 0\}$, and $p_s$ as the maximum probability
of entering unsafe states from state $s$: $p_s = \max_a \sum_{s'\in S_c} p(s'|s,a)$.
Theorem 2 (One-step perturbation cost value bound). Suppose the optimal policy is locally L-
Lipschitz continuous at state s: DTV [π(·|s′ )∥π(·|s)] ≤ L ∥s′ − s∥p , and the perturbation set of the
adversary ν(s) is an ℓp -ball Bpϵ (s). Let Ṽcπ,ν (s) = Ea∼π(·|ν(s)),s′ ∼p(·|s,a) [c(s, a, s′ ) + γVcπ (s′ )]
denote the cost value for only perturbing state s. The upper bound of Ṽcπ,ν (s) is given by:
$$\tilde{V}_c^{\pi,\nu}(s) - V_c^\pi(s) \le 2L\epsilon\left(p_s C_m + \frac{\gamma C_m}{1-\gamma}\right). \qquad (3)$$
Note that $\tilde{V}_c^{\pi,\nu}(s) \ne V_c^\pi(\nu(s))$ because the next state $s'$ still transitions from the original state $s$,
i.e., $s' \sim p(\cdot|s,a)$ instead of $s' \sim p(\cdot|\nu(s),a)$. Theorem 2 indicates that the power of an adversary is
controlled by the policy smoothness $L$ and the perturbation range $\epsilon$. In addition, the $p_s$ term indicates
that a safe policy should keep a safe distance from unsafe states to be less susceptible to attacks.
We further derive the upper bound of constraint violation for attacks over an entire episode.
Theorem 3 (Episodic bound). Given a feasible policy π ∈ ΠκM , suppose L-Lipschitz continuity
holds globally for π, and the perturbation set is an ℓp -ball, then the following bound holds:
$$V_c^{\pi\circ\nu}(\mu_0) \le \kappa + \frac{1}{1-\gamma}\left(2L\epsilon C_m + \frac{4\gamma L\epsilon}{(1-\gamma)^2}\right)\left(\max_s p_s + \frac{\gamma}{1-\gamma}\right). \qquad (4)$$
See the proofs of Theorems 2 and 3 in Appendix A.4 and A.5. We can still observe that the maximum cost value
under perturbations is bounded by the Lipschitzness of the policy and the maximum perturbation
range $\epsilon$. The bound is tight: when $\epsilon \to 0$ (no attack) or $L \to 0$ (constant policy $\pi(\cdot|s)$ for all
states), the RHS is 0 for Eq. (3) and $\kappa$ for Eq. (4), which means the attack is ineffective.
To defend against observational attacks, we propose an adversarial training method for safe RL. We
directly optimize the policy upon the corrupted sampling trajectories τ̃ = {s0 , ã0 , s1 , ã1 , ...}, where
ãt ∼ π(a|ν(st )). We can compactly represent the adversarial safe RL objective under ν as:
$$\pi^* = \arg\max_\pi V_r^{\pi\circ\nu}(\mu_0), \quad \text{s.t. } V_c^{\pi\circ\nu}(\mu_0) \le \kappa, \ \forall \nu. \qquad (5)$$
The adversarial training objective (5) can be solved by many policy-based safe RL methods, such
as the primal-dual approach, and we show that the Bellman operator for evaluating the policy
performance under a deterministic adversary is a contraction (see Appendix A.6 for proof).
Theorem 4 (Bellman contraction). Define the Bellman policy operator $\mathcal{T}_\pi: \mathbb{R}^{|S|} \to \mathbb{R}^{|S|}$:
$$(\mathcal{T}_\pi V_f^{\pi\circ\nu})(s) = \sum_{a\in A} \pi(a|\nu(s)) \sum_{s'\in S} p(s'|s,a)\left[f(s,a,s') + \gamma V_f^{\pi\circ\nu}(s')\right], \quad f\in\{r,c\}. \qquad (6)$$
The Bellman equation can be written as Vfπ◦ν (s) = (Tπ Vfπ◦ν )(s). In addition, the operator Tπ is a
contraction under the sup-norm ∥ · ∥∞ and has a fixed point.
Theorem 4 shows that we can accurately evaluate the task performance (reward return) and the safety
performance (cost return) of a policy under one fixed deterministic adversary, which is similar to
solving a standard CMDP. The Bellman contraction property provides the theoretical justification of
adversarial training, i.e., training a safe RL agent under observational perturbed sampling trajectories.
Then the key part is selecting proper adversaries during learning, such that the trained policy is robust
and safe against any other attackers. We can easily show that performing adversarial training with
the MC or the MR attacker will enable the agent to be robust against the most effective or the most
reward stealthy perturbations, respectively (see Appendix A.6 for details).
Remark 1. Suppose a policy $\pi'$ trained under the MC attacker satisfies $V_c^{\pi'\circ\nu_{MC}}(\mu_0) \le \kappa$; then $\pi'\circ\nu$
is guaranteed to be feasible under any $B_p^\epsilon$-bounded adversarial perturbation. Similarly, suppose a
policy $\pi'$ trained under the MR attacker satisfies $V_c^{\pi'\circ\nu_{MR}}(\mu_0) \le \kappa$; then $\pi'\circ\nu$ is guaranteed to be
non-tempting under any $B_p^\epsilon$-bounded adversarial perturbation.
Remark 1 indicates that by solving the adversarial constrained optimization problem under the MC
attacker, all the feasible solutions will be safe under any bounded adversarial perturbations. It also
shows a nice property for training a robust policy, since the max operation over the reward in the safe
RL objective may lead the policy to the tempting policy class, while the adversarial training with
MR attacker can naturally keep the trained policy at a safe distance from the tempting policy class.
Practically, we observe that both MC and MR attackers can increase the robustness and safety via
adversarial training, and could be easily plugged into any on-policy safe RL algorithms, in principle.
We leave the robust training framework for off-policy safe RL methods as future work.
4.2 PRACTICAL IMPLEMENTATION
The meta adversarial training algorithm is shown in Algo. 1. We particularly adopt the primal-dual
methods (Ray et al., 2019; Stooke et al., 2020) that are widely used in the safe RL literature as the
learner; the adversarial training objective in Eq. (5) can then be converted to a min-max form by
using the Lagrange multiplier $\lambda$: $\min_{\lambda \ge 0} \max_\pi V_r^{\pi\circ\nu}(\mu_0) - \lambda\left(V_c^{\pi\circ\nu}(\mu_0) - \kappa\right)$.

Algorithm 1 Adversarial safe RL training meta algorithm
Input: Safe RL learner, Adversary scheduler
Output: Observational robust policy π
1: Initialize policy π ∈ Π and adversary ν : S → S
2: for each training epoch n = 1, ..., N do
3:   Rollout trajectories: τ̃ = {s0, ã0, ...}T, ãt ∼ π(a|ν(st))
4:   Run safe RL learner: π ← learner(τ̃, Π)
5:   Update adversary: ν ← scheduler(τ̃, π, n)
6: end for
Solving the inner maximization (primal update) via any policy optimization methods and the outer
minimization (dual update) via gradient descent iteratively yields the Lagrangian algorithm. Under
proper learning rates and bounded noise assumptions, the iterates (πn , λn ) converge to a fixed point
(a local minimum) almost surely (Tessler et al., 2018; Paternain et al., 2019).
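As a concrete illustration of the dual (multiplier) step described above, a plain gradient-ascent sketch is shown below. This is a simplification of the PID-Lagrangian update actually used (Stooke et al., 2020), and the function and argument names are illustrative.

```python
def dual_update(lmbda, episode_cost, cost_limit, lr_lambda=0.05):
    """One dual step for the Lagrangian L = V_r - lambda * (V_c - kappa).

    The multiplier increases when the estimated cost return exceeds the
    cost limit kappa, and is projected back to be non-negative otherwise.
    """
    lmbda += lr_lambda * (episode_cost - cost_limit)
    return max(lmbda, 0.0)
```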
Based on previous theoretical analysis, we adopt MC or MR as the adversary when sampling
trajectories. The scheduler aims to train the reward and cost Q-value functions for the MR and
the MC attackers, because many on-policy algorithms such as PPO do not use them. In addition, the
scheduler can update the power of the adversary based on the learning progress accordingly, since a
strong adversary at the beginning may prohibit the learner from exploring the environment and
thus corrupt the training. We gradually increase the perturbation range ϵ along with the training
epochs to adjust the adversary perturbation set Bpϵ , such that the agent will not be too conservative in
the early stage of training. A similar idea is also used in adversarial training (Salimans et al., 2016;
Arjovsky & Bottou, 2017; Gowal et al., 2018) and curriculum learning literature (Dennis et al., 2020;
Portelas et al., 2020). See more implementation details in Appendix C.3.
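A compact sketch of Algorithm 1 with such a linear ϵ curriculum is given below. Here `rollout`, `learner`, `fit_critics`, and `attacker` are placeholder callables for trajectory collection under the attacked policy, the safe RL learner (e.g., PPO-Lagrangian), the scheduler's Q-function updates, and the MC/MR attacker; they do not reflect the authors' exact interfaces (see Appendix C.3).

```python
def adversarial_training(env, policy, learner, rollout, fit_critics,
                         attacker, eps_max, n_epochs):
    """Meta adversarial training loop (Algorithm 1) with an epsilon curriculum.

    The perturbation radius grows linearly from 0 to eps_max so that a strong
    adversary does not prevent exploration in the early stage of training.
    """
    for epoch in range(n_epochs):
        eps = eps_max * (epoch + 1) / n_epochs       # scheduler: grow attack budget
        nu = lambda s: attacker(s, policy, eps)      # MC or MR adversary at current budget
        trajectories = rollout(env, policy, nu)      # sample with corrupted observations
        policy = learner(trajectories)               # primal-dual safe RL update
        fit_critics(trajectories)                    # keep Q_r / Q_c current for the attacker
    return policy
```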
5 EXPERIMENT
In this section, we aim to answer the questions raised in Sec. 1. To this end, we adopt the robot
locomotion continuous control tasks that are easy to interpret, motivated by safety, and used in
many previous works (Achiam et al., 2017; Chow et al., 2019; Zhang et al., 2020b). The simulation
environments are from a publicly available benchmark (Gronauer, 2022). We consider two tasks and
train multiple different robots (Car, Drone, Ant) for each task:
Run task. Agents are rewarded for running fast between two safety boundaries and incur costs for
constraint violations if they cross the boundaries or exceed an agent-specific velocity threshold.
The tempting policies can violate the velocity constraint to obtain more rewards.
Circle task. Agents are rewarded for running along a circle in a clockwise direction, but are
constrained to stay within a safe region whose radius is smaller than that of the target circle. The
tempting policies in this task will leave the safe region to gain more rewards.
We name each task via the Robot-Task format, for instance, Car-Run. More detailed descriptions
and video demos are available on our anonymous project website 1 . In addition, we will use the PID
PPO-Lagrangian (abbreviated as PPOL) method (Stooke et al., 2020) as the base safe RL algorithm
to fairly compare different robust training approaches, while the proposed adversarial training can
be easily used in other on-policy safe RL methods as well. The detailed hyperparameters of the
adversaries and safe RL algorithms can be found in Appendix C.
We first demonstrate the vulnerability of the optimal safe RL policies without adversarial training
and compare the performance of different adversaries. All adversaries are restricted to the same
$\ell_\infty$-norm perturbation set $B_\infty^\epsilon$. We adopt three adversary baselines, including one improved version:
Random attacker baseline. This is a simple baseline by sampling the corrupted observations
randomly within the perturbation set via a uniform distribution.
Maximum Action Difference (MAD) attacker baseline. The MAD attacker (Zhang et al., 2020a)
is designed for standard RL tasks, which is shown to be effective in decreasing a trained RL agent’s
reward return. The optimal adversarial observation is obtained by maximizing the KL-divergence
between the corrupted policy and the original policy: $\nu_{MAD}(s) = \arg\max_{\tilde{s}\in B_p^\epsilon(s)} D_{KL}\left[\pi(a|\tilde{s})\,\|\,\pi(a|s)\right]$.
Adaptive MAD (AMAD) attacker. Since the vanilla MAD attacker is not designed for safe RL,
we further improve it to an adaptive version as a stronger baseline. The motivation comes from
Lemma 2 – the optimal policy will be close to the constraint boundary, which has high risks. To
better understand this property, we introduce the discounted future state distribution $d^\pi(s)$ (Kakade,
2003), which allows us to rewrite the result in Lemma 2 as (see Appendix C.6 for derivation and
implementation details):
$$\frac{1}{1-\gamma}\int_{s\in S} d^{\pi^*}(s)\int_{a\in A}\pi^*(a|s)\int_{s'\in S} p(s'|s,a)\, c(s,a,s')\, ds'\, da\, ds = \kappa.$$
We can see that performing the MAD attack on the optimal policy $\pi^*$ in low-risk regions with small
$p(s'|s,a)c(s,a,s')$ values may not be effective. Therefore, AMAD only perturbs the observation
when the agent is within high-risk regions, which are determined by the cost value function and a
threshold $\xi$, to achieve more effective attacks:
$$\nu_{AMAD}(s) := \begin{cases} \nu_{MAD}(s), & \text{if } V_c^\pi(s) \ge \xi, \\ s, & \text{otherwise}. \end{cases}$$
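A sketch of this adaptive rule, assuming a learned cost value function `v_cost` and a MAD attacker callable `mad_attack` (both names are illustrative placeholders):

```python
def amad_attack(s, v_cost, mad_attack, threshold):
    """Adaptive MAD: only perturb observations in high-risk regions.

    The MAD perturbation is applied when the cost value V_c(s) exceeds the
    threshold xi; otherwise the true observation is returned unchanged.
    """
    return mad_attack(s) if v_cost(s) >= threshold else s
```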
Experiment setting. We evaluate the performance of all three baselines above and our MC, MR
adversaries by attacking well-trained PPO-Lagrangian policies in different tasks. The trained policies
can achieve nearly zero constraint violation costs without observational perturbations. We keep the
trained model weights and environment seeds fixed for all the attackers to ensure fair comparisons.
Experiment result. Fig. 2 shows the attack results of the 5 adversaries on PPOL-vanilla. Each
column corresponds to an environment. The first row is the episode reward and the second row is
the episode cost of constraint violations. We can see that the vanilla safe RL policies are vulnerable,
since the safety performance deteriorates (cost increases) significantly even with a small adversarial
perturbation range ϵ. Generally, we can see an increasing cost trend as the ϵ increases, except for
the MAD attacker. Although MAD can reduce the agent’s reward quite well, it fails to perform
an effective attack in increasing the cost because the reward decrease may keep the agent away
1 https://sites.google.com/view/robustsaferl/home
from high-risk regions. It is even worse than the random attacker in the Car-Circle task. The
improved AMAD attacker is a stronger baseline than MAD, as it only attacks in high-risk regions
and thus has a higher chance of entering unsafe regions to induce more constraint violations. More
comparisons between MAD and AMAD can be found in Appendix C.9. Our proposed MC and MR
attackers outperform all baseline attackers (Random, MAD and AMAD) in terms of effectiveness
by increasing the cost by a large margin in most tasks. Surprisingly, the MR attacker can achieve
even higher costs than MC and is more stealthy as it can maintain or increase the reward well, which
validates our theoretical analysis and the existence of tempting policies.
Figure 2: Reward and cost curves of all 5 attackers evaluated on well-trained vanilla PPO-Lagrangian models
w.r.t. the perturbation range ϵ. The curves are averaged over 50 episodes and 5 seeds, where the solid lines are
the mean and the shadowed areas are the standard deviation. The dashed line is the cost without perturbations.
Table 1: Evaluation results of natural performance (no attack) and under 3 attackers. Our methods are ADV-
PPOL(MC/MR). Each value is reported as: mean ± standard deviation for 50 episodes and 5 seeds. We shadow
the two lowest-cost agents under each attacker column and break ties based on rewards, excluding the failing agents
(whose natural rewards are less than 30% of PPOL-vanilla’s). We mark the failing agents with ⋆.
Each row lists the Method followed by Reward and Cost under no attack (Natural) and under the AMAD, MC, and MR attackers.
Car-Run (ϵ = 0.05):
PPOL-vanilla 560.86±1.09 0.16±0.36 559.45±2.87 3.7±7.65 624.92±16.22 184.04±0.67 625.12±15.96 184.08±0.46
PPOL-random 557.27±1.06 0.0±0.0 556.46±1.07 0.28±0.71 583.52±1.59 183.78±0.7 583.43±1.46 183.88±0.55
SA-PPOL 534.3±8.84 0.0±0.0 534.22±8.91 0.0±0.0 566.75±7.68 13.77±13.06 566.54±7.4 11.79±11.95
SA-PPOL(MC) 552.0±2.76 0.0±0.0 550.68±2.81 0.0±0.0 568.28±3.98 3.28±5.73 568.93±3.17 2.73±5.27
SA-PPOL(MR) 548.71±2.03 0.0±0.0 547.61±1.93 0.0±0.0 568.49±5.2 25.72±51.51 568.72±4.92 24.33±48.74
ADV-PPOL(MC) 505.76±9.11 0.0±0.0 503.49±9.17 0.0±0.0 552.98±3.76 0.0±0.06 549.07±8.22 0.02±0.14
ADV-PPOL(MR) 497.67±8.15 0.0±0.0 494.81±7.49 0.0±0.0 549.24±9.98 0.02±0.15 551.75±7.63 0.04±0.21
Drone-Run (ϵ = 0.025):
PPOL-vanilla 346.1±2.71 0.0±0.0 344.95±3.08 1.76±4.15 339.62±5.12 79.0±0.0 359.01±14.62 78.82±0.43
PPOL-random 342.66±0.96 0.0±0.0 357.56±19.31 31.36±35.64 265.42±3.08 0.04±0.57 317.26±29.93 33.31±19.26
SA-PPOL 338.5±2.26 0.0±0.0 358.66±32.06 33.27±34.58 313.81±163.22 52.44±28.28 264.08±168.62 42.8±22.61
SA-PPOL(MC) 223.1±22.5 0.84±1.93 210.61±28.78 0.82±1.88 251.67±31.72 22.98±16.69 262.73±29.1 21.48±16.18
*SA-PPOL(MR) 0.3±0.49 0.0±0.0 0.3±0.45 0.0±0.0 0.44±0.87 0.0±0.0 0.17±0.43 0.0±0.0
ADV-PPOL(MC) 263.24±9.67 0.0±0.0 268.66±15.34 0.0±0.0 272.34±52.35 3.0±6.5 282.36±39.84 13.48±13.8
ADV-PPOL(MR) 226.18±74.06 0.0±0.0 225.34±75.01 0.0±0.0 227.89±61.5 3.58±7.44 242.47±80.6 6.62±8.84
Ant-Run (ϵ = 0.025):
PPOL-vanilla 703.11±3.83 1.3±1.17 702.31±3.76 2.53±1.71 692.88±9.32 65.56±9.56 714.37±26.4 120.68±28.63
PPOL-random 698.39±14.76 1.34±1.39 697.56±14.38 2.02±1.47 648.88±83.55 54.52±24.27 677.95±52.34 80.96±42.04
SA-PPOL 699.7±12.1 0.66±0.82 699.48±12.19 1.02±1.11 683.03±21.1 70.54±27.69 723.52±36.33 122.69±39.75
SA-PPOL(MC) 383.21±256.58 5.71±6.34 382.32±256.39 5.46±6.03 402.83±274.66 34.28±42.18 406.31±276.04 38.5±46.93
*SA-PPOL(MR) 114.34±35.83 6.63±3.7 115.3±35.13 6.55±3.72 112.7±32.76 9.64±3.76 115.77±33.51 9.6±3.72
ADV-PPOL(MC) 615.4±2.94 0.0±0.0 614.96±2.94 0.0±0.06 674.65±12.01 2.21±1.64 675.87±20.64 5.3±3.11
ADV-PPOL(MR) 596.14±12.06 0.0±0.0 595.52±12.03 0.0±0.0 657.31±17.09 0.96±1.11 678.65±13.16 1.56±1.41
Car-Circle (ϵ = 0.05):
PPOL-vanilla 446.83±9.89 1.32±3.61 406.75±15.82 21.85±24.9 248.05±21.66 38.56±24.01 296.17±20.95 89.23±17.11
PPOL-random 429.57±10.55 0.06±1.01 442.89±11.26 41.85±12.06 289.17±30.67 70.9±23.24 313.31±25.77 95.23±13.62
SA-PPOL 435.83±10.98 0.34±1.55 430.58±10.41 7.48±15.43 295.38±88.05 126.3±33.87 468.74±12.4 94.19±11.62
SA-PPOL(MC) 439.18±10.12 0.27±1.27 352.71±53.84 0.1±0.45 311.04±41.29 91.07±16.8 450.93±20.37 87.62±17.0
SA-PPOL(MR) 419.9±34.0 0.32±1.29 411.32±36.23 0.31±1.04 317.01±72.81 99.3±22.89 421.31±67.83 83.89±15.59
ADV-PPOL(MC) 270.25±16.99 0.0±0.0 273.48±17.52 0.0±0.0 263.5±24.5 1.44±3.48 248.43±40.74 8.99±7.46
ADV-PPOL(MR) 274.69±20.5 0.0±0.0 281.73±21.43 0.0±0.0 219.29±31.25 2.21±5.7 281.12±25.89 1.66±2.52
Drone-Circle (ϵ = 0.025):
PPOL-vanilla 706.94±53.66 4.55±6.58 634.54±129.07 29.04±17.75 153.18±147.23 24.14±30.25 121.85±159.92 20.01±29.08
PPOL-random 728.62±64.07 1.2±3.75 660.72±122.6 28.3±19.42 194.63±149.35 12.9±18.63 165.13±165.01 23.3±24.58
SA-PPOL 599.56±67.56 1.71±3.39 596.98±67.66 1.93±3.81 338.85±204.86 72.83±43.65 84.2±132.76 20.43±31.26
SA-PPOL(MC) 480.34±96.61 2.7±4.12 475.21±97.68 1.25±3.44 361.46±190.63 54.71±39.13 248.74±203.66 36.68±32.19
SA-PPOL(MR) 335.99±150.18 2.8±5.55 326.73±152.64 2.66±4.85 233.8±158.16 51.79±38.98 287.92±194.92 52.39±41.26
ADV-PPOL(MC) 309.83±64.1 0.0±0.0 279.91±85.93 4.25±8.62 393.66±92.91 0.88±2.65 250.59±112.6 11.16±22.11
ADV-PPOL(MR) 358.23±40.59 0.46±2.35 360.4±42.24 0.4±3.9 289.1±90.7 6.77±9.58 363.75±74.02 2.44±5.2
Ant-Circle (ϵ = 0.025):
PPOL-vanilla 186.71±28.65 4.47±7.22 185.15±25.72 5.26±8.65 185.89±34.57 67.43±24.58 232.42±37.32 80.59±20.41
PPOL-random 140.1±25.56 3.58±7.6 143.25±17.97 4.22±8.21 139.42±27.53 35.69±26.59 155.77±32.44 54.54±28.12
SA-PPOL 197.9±27.39 3.4±8.04 196.2±32.59 4.06±8.93 198.73±32.08 76.45±27.26 246.8±40.61 82.24±20.28
*SA-PPOL(MC) 0.65±0.43 0.0±0.0 0.66±0.43 0.0±0.0 0.63±0.42 0.0±0.0 0.63±0.38 0.0±0.0
*SA-PPOL(MR) 0.63±0.41 0.0±0.0 0.63±0.41 0.0±0.0 0.58±0.44 0.0±0.0 0.64±0.44 0.0±0.0
ADV-PPOL(MC) 121.57±20.11 1.24±4.7 122.2±20.55 0.98±4.43 124.29±26.04 1.9±5.28 107.89±21.35 9.0±17.31
ADV-PPOL(MR) 123.13±19.19 0.46±2.69 121.51±19.68 0.74±3.42 110.11±25.49 5.72±10.1 128.88±20.06 3.0±7.9
Generalization to other safe RL methods. We also conduct the experiments for other types of base
safe RL algorithms, including another on-policy method FOCOPS (Zhang et al., 2020b), one off-policy
method SAC-Lagrangian (Yang et al., 2021), and one policy-gradient-free off-policy method
CVPO (Liu et al., 2022). Due to the page limit, we leave the results and detailed discussions in
Appendix C.9. In summary, all the vanilla safe RL methods suffer from the vulnerability issue – though
they are safe in noise-free environments, they are not safe anymore under strong attacks, which
validates the necessity of studying the observational robustness of safe RL agents. In addition, the
adversarial training can help to improve the robustness and make the FOCOPS agent much safer
under attacks. Therefore, the problem formulations, methods, results, and analysis can be generalized
to different safe RL approaches, hopefully attracting more attention in the safe RL community to
study the inherent connection between safety and robustness.
6 CONCLUSION
We study the observational robustness regarding constraint satisfaction for safe RL and show that
the optimal policy of tempting problems could be vulnerable. We propose two effective attackers to
induce unsafe behaviors. An interesting and surprising finding is that the maximum reward attack is as
effective as directly maximizing the cost while remaining stealthy. We further propose an adversarial
training method to increase the robustness and safety performance, and extensive experiments show
that the proposed method outperforms the robust training techniques for standard RL settings.
One limitation of this work is that the adversarial training pipeline could be expensive for real-world
RL applications because it requires attacking the behavior agents when collecting data. In addition,
the adversarial training might be unstable for high-dimensional and complex problems. Nevertheless,
our results show the existence of a previously unrecognized problem in safe RL, and we hope this
work encourages other researchers to study safety from the robustness perspective, as both safety and
robustness are important ingredients for real-world deployment.
ACKNOWLEDGMENTS
We gratefully acknowledge support from the National Science Foundation under grant CAREER
CNS-2047454.
REFERENCES
Joshua Achiam, David Held, Aviv Tamar, and Pieter Abbeel. Constrained policy optimization. In
International Conference on Machine Learning, pp. 22–31. PMLR, 2017.
Naveed Akhtar and Ajmal Mian. Threat of adversarial attacks on deep learning in computer vision:
A survey. Ieee Access, 6:14410–14430, 2018.
Mohammed Alshiekh, Roderick Bloem, Rüdiger Ehlers, Bettina Könighofer, Scott Niekum, and
Ufuk Topcu. Safe reinforcement learning via shielding. In Thirty-Second AAAI Conference on
Artificial Intelligence, 2018.
Eitan Altman. Constrained markov decision processes with total cost criteria: Lagrangian approach
and dual linear program. Mathematical methods of operations research, 48(3):387–417, 1998.
Martin Arjovsky and Léon Bottou. Towards principled methods for training generative adversarial
networks. arXiv preprint arXiv:1701.04862, 2017.
Yarden As, Ilnura Usmanova, Sebastian Curi, and Andreas Krause. Constrained policy optimization
via bayesian world models. arXiv preprint arXiv:2201.09802, 2022.
Felix Berkenkamp, Matteo Turchetta, Angela P Schoellig, and Andreas Krause. Safe model-based
reinforcement learning with stability guarantees. arXiv preprint arXiv:1705.08551, 2017.
Shalabh Bhatnagar and K Lakshmanan. An online actor–critic algorithm with function approximation
for constrained markov decision processes. Journal of Optimization Theory and Applications, 153
(3):688–708, 2012.
Lukas Brunke, Melissa Greeff, Adam W Hall, Zhaocong Yuan, Siqi Zhou, Jacopo Panerati, and
Angela P Schoellig. Safe learning in robotics: From learning-based control to safe reinforcement
learning. Annual Review of Control, Robotics, and Autonomous Systems, 5, 2021.
Baiming Chen, Zuxin Liu, Jiacheng Zhu, Mengdi Xu, Wenhao Ding, Liang Li, and Ding Zhao.
Context-aware safe reinforcement learning for non-stationary environments. In 2021 IEEE Inter-
national Conference on Robotics and Automation (ICRA), pp. 10689–10695. IEEE, 2021.
Yinlam Chow, Mohammad Ghavamzadeh, Lucas Janson, and Marco Pavone. Risk-constrained
reinforcement learning with percentile risk criteria. The Journal of Machine Learning Research,
18(1):6070–6120, 2017.
Yinlam Chow, Ofir Nachum, Aleksandra Faust, Edgar Duenez-Guzman, and Mohammad
Ghavamzadeh. Lyapunov-based safe policy optimization for continuous control. arXiv preprint
arXiv:1901.10031, 2019.
Gal Dalal, Krishnamurthy Dvijotham, Matej Vecerik, Todd Hester, Cosmin Paduraru, and Yuval
Tassa. Safe exploration in continuous action spaces. arXiv preprint arXiv:1801.08757, 2018.
Michael Dennis, Natasha Jaques, Eugene Vinitsky, Alexandre Bayen, Stuart Russell, Andrew Critch,
and Sergey Levine. Emergent complexity and zero-shot transfer via unsupervised environment
design. Advances in Neural Information Processing Systems, 33:13049–13061, 2020.
Benjamin Eysenbach and Sergey Levine. Maximum entropy rl (provably) solves some robust rl
problems. arXiv preprint arXiv:2103.06257, 2021.
Yannis Flet-Berliac and Debabrota Basu. Saac: Safe reinforcement learning as an adversarial game
of actor-critics. arXiv preprint arXiv:2204.09424, 2022.
Javier Garcıa and Fernando Fernández. A comprehensive survey on safe reinforcement learning.
Journal of Machine Learning Research, 16(1):1437–1480, 2015.
Sven Gowal, Krishnamurthy Dvijotham, Robert Stanforth, Rudy Bunel, Chongli Qin, Jonathan
Uesato, Relja Arandjelovic, Timothy Mann, and Pushmeet Kohli. On the effectiveness of interval
bound propagation for training verifiably robust models. arXiv preprint arXiv:1810.12715, 2018.
Sven Gronauer. Bullet-safety-gym: A framework for constrained reinforcement learning. 2022.
Shangding Gu, Jakub Grudzien Kuba, Munning Wen, Ruiqing Chen, Ziyan Wang, Zheng Tian,
Jun Wang, Alois Knoll, and Yaodong Yang. Multi-agent constrained policy optimisation. arXiv
preprint arXiv:2110.02793, 2021.
Shangding Gu, Long Yang, Yali Du, Guang Chen, Florian Walter, Jun Wang, Yaodong Yang, and
Alois Knoll. A review of safe reinforcement learning: Methods, theory and applications. arXiv
preprint arXiv:2205.10330, 2022.
Peide Huang, Mengdi Xu, Fei Fang, and Ding Zhao. Robust reinforcement learning as a stackelberg
game via adaptively-regularized adversarial training. arXiv preprint arXiv:2202.09514, 2022.
Sandy Huang, Nicolas Papernot, Ian Goodfellow, Yan Duan, and Pieter Abbeel. Adversarial attacks
on neural network policies. arXiv preprint arXiv:1702.02284, 2017.
Sham Machandranath Kakade. On the sample complexity of reinforcement learning. University of
London, University College London (United Kingdom), 2003.
Dohyeong Kim and Songhwai Oh. Efficient off-policy safe reinforcement learning using trust region
conditional value at risk. IEEE Robotics and Automation Letters, 7(3):7644–7651, 2022.
Ezgi Korkmaz. Deep reinforcement learning policies learn shared adversarial features across mdps.
In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pp. 7229–7238, 2022.
Ezgi Korkmaz. Adversarial robust deep reinforcement learning requires redefining robustness. arXiv
preprint arXiv:2301.07487, 2023.
Jernej Kos and Dawn Song. Delving into adversarial attacks on deep policies. arXiv preprint
arXiv:1705.06452, 2017.
Qingkai Liang, Fanyu Que, and Eytan Modiano. Accelerated primal-dual policy optimization for
safe reinforcement learning. arXiv preprint arXiv:1802.06480, 2018.
Yongyuan Liang, Yanchao Sun, Ruijie Zheng, and Furong Huang. Efficient adversarial training with-
out attacking: Worst-case-aware robust reinforcement learning. arXiv preprint arXiv:2210.05927,
2022.
Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa,
David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv
preprint arXiv:1509.02971, 2015.
Yen-Chen Lin, Zhang-Wei Hong, Yuan-Hong Liao, Meng-Li Shih, Ming-Yu Liu, and Min Sun. Tactics
of adversarial attack on deep reinforcement learning agents. arXiv preprint arXiv:1703.06748,
2017.
Zichuan Lin, Garrett Thomas, Guangwen Yang, and Tengyu Ma. Model-based adversarial meta-
reinforcement learning. Advances in Neural Information Processing Systems, 33:10161–10173,
2020.
Zuxin Liu, Hongyi Zhou, Baiming Chen, Sicheng Zhong, Martial Hebert, and Ding Zhao. Safe model-
based reinforcement learning with robust cross-entropy method. arXiv preprint arXiv:2010.07968,
2020.
Zuxin Liu, Zhepeng Cen, Vladislav Isenbaev, Wei Liu, Steven Wu, Bo Li, and Ding Zhao. Constrained
variational policy optimization for safe reinforcement learning. In International Conference on
Machine Learning, pp. 13644–13668. PMLR, 2022.
Yuping Luo and Tengyu Ma. Learning barrier certificates: Towards safe reinforcement learning with
zero training-time violations. Advances in Neural Information Processing Systems, 34, 2021.
Gabriel Resende Machado, Eugênio Silva, and Ronaldo Ribeiro Goldschmidt. Adversarial machine
learning in image classification: A survey toward the defender’s perspective. ACM Computing
Surveys (CSUR), 55(1):1–38, 2021.
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu.
Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083,
2017.
Amram Meir and Emmett Keeler. A theorem on contraction mappings. Journal of Mathematical
Analysis and Applications, 28(2):326–329, 1969.
Janosch Moos, Kay Hansel, Hany Abdulsamad, Svenja Stark, Debora Clever, and Jan Peters. Robust
reinforcement learning: A review of foundations and recent advances. Machine Learning and
Knowledge Extraction, 4(1):276–315, 2022.
Rémi Munos, Tom Stepleton, Anna Harutyunyan, and Marc G Bellemare. Safe and efficient off-policy
reinforcement learning. arXiv preprint arXiv:1606.02647, 2016.
Fabio Muratore, Felix Treede, Michael Gienger, and Jan Peters. Domain randomization for simulation-
based policy optimization with transferability assessment. In Conference on Robot Learning, pp.
700–713. PMLR, 2018.
Santiago Paternain, Luiz FO Chamon, Miguel Calvo-Fullana, and Alejandro Ribeiro. Constrained
reinforcement learning has zero duality gap. arXiv preprint arXiv:1910.13393, 2019.
Anay Pattanaik, Zhenyi Tang, Shuijing Liu, Gautham Bommannan, and Girish Chowdhary. Robust
deep reinforcement learning with adversarial attacks. arXiv preprint arXiv:1712.03632, 2017.
Lerrel Pinto, James Davidson, Rahul Sukthankar, and Abhinav Gupta. Robust adversarial rein-
forcement learning. In International Conference on Machine Learning, pp. 2817–2826. PMLR,
2017.
Nikolaos Pitropakis, Emmanouil Panaousis, Thanassis Giannetsos, Eleftherios Anastasiadis, and
George Loukas. A taxonomy and survey of attacks against machine learning. Computer Science
Review, 34:100199, 2019.
Rémy Portelas, Cédric Colas, Lilian Weng, Katja Hofmann, and Pierre-Yves Oudeyer. Automatic
curriculum learning for deep rl: A short survey. arXiv preprint arXiv:2003.04664, 2020.
Alex Ray, Joshua Achiam, and Dario Amodei. Benchmarking safe exploration in deep reinforcement
learning. arXiv preprint arXiv:1910.01708, 7, 2019.
Matthew Riemer, Miao Liu, and Gerald Tesauro. Learning abstract options. Advances in neural
information processing systems, 31, 2018.
Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen.
Improved techniques for training gans. Advances in neural information processing systems, 29,
2016.
William Saunders, Girish Sastry, Andreas Stuhlmueller, and Owain Evans. Trial without error:
Towards safe reinforcement learning via human intervention. arXiv preprint arXiv:1707.05173,
2017.
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy
optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, and Michael K Reiter. Accessorize to a crime: Real
and stealthy attacks on state-of-the-art face recognition. In Proceedings of the 2016 acm sigsac
conference on computer and communications security, pp. 1528–1540, 2016.
Aivar Sootla, Alexander I Cowen-Rivers, Taher Jafferjee, Ziyan Wang, David H Mguni, Jun Wang,
and Haitham Ammar. Sauté rl: Almost surely safe reinforcement learning using state augmentation.
In International Conference on Machine Learning, pp. 20423–20443. PMLR, 2022a.
Aivar Sootla, Alexander I Cowen-Rivers, Jun Wang, and Haitham Bou Ammar. Enhancing safe
exploration using safety state augmentation. arXiv preprint arXiv:2206.02675, 2022b.
Krishnan Srinivasan, Benjamin Eysenbach, Sehoon Ha, Jie Tan, and Chelsea Finn. Learning to be
safe: Deep rl with a safety critic. arXiv preprint arXiv:2010.14603, 2020.
Adam Stooke, Joshua Achiam, and Pieter Abbeel. Responsive safety in reinforcement learning by pid
lagrangian methods. In International Conference on Machine Learning, pp. 9133–9143. PMLR,
2020.
Yanchao Sun, Ruijie Zheng, Yongyuan Liang, and Furong Huang. Who is the strongest enemy?
towards optimal and efficient evasion attacks in deep rl. arXiv preprint arXiv:2106.05087, 2021.
Richard S Sutton, Andrew G Barto, et al. Introduction to reinforcement learning. 1998.
Chen Tessler, Daniel J Mankowitz, and Shie Mannor. Reward constrained policy optimization. arXiv
preprint arXiv:1805.11074, 2018.
Chen Tessler, Yonathan Efroni, and Shie Mannor. Action robust reinforcement learning and applica-
tions in continuous control. In International Conference on Machine Learning, pp. 6215–6224.
PMLR, 2019.
Brijen Thananjeyan, Ashwin Balakrishna, Suraj Nair, Michael Luo, Krishnan Srinivasan, Minho
Hwang, Joseph E Gonzalez, Julian Ibarz, Chelsea Finn, and Ken Goldberg. Recovery rl: Safe
reinforcement learning with learned recovery zones. IEEE Robotics and Automation Letters, 6(3):
4915–4922, 2021.
Josh Tobin, Rachel Fong, Alex Ray, Jonas Schneider, Wojciech Zaremba, and Pieter Abbeel. Domain
randomization for transferring deep neural networks from simulation to the real world. In 2017
IEEE/RSJ international conference on intelligent robots and systems (IROS), pp. 23–30. IEEE,
2017.
Jingkang Wang, Yang Liu, and Bo Li. Reinforcement learning with perturbed rewards. In Proceedings
of the AAAI Conference on Artificial Intelligence, volume 34, pp. 6202–6209, 2020.
Qisong Yang, Thiago D Simão, Simon H Tindemans, and Matthijs TJ Spaan. Wcsac: Worst-case soft
actor critic for safety-constrained reinforcement learning. In AAAI, pp. 10639–10646, 2021.
Tsung-Yen Yang, Justinian Rosca, Karthik Narasimhan, and Peter J Ramadge. Projection-based
constrained policy optimization. arXiv preprint arXiv:2010.03152, 2020.
Haonan Yu, Wei Xu, and Haichao Zhang. Towards safe reinforcement learning with a safety editor
policy. arXiv preprint arXiv:2201.12427, 2022.
Ming Yu, Zhuoran Yang, Mladen Kolar, and Zhaoran Wang. Convergent policy optimization for safe
reinforcement learning. arXiv preprint arXiv:1910.12156, 2019.
Huan Zhang, Hongge Chen, Chaowei Xiao, Bo Li, Mingyan Liu, Duane Boning, and Cho-Jui
Hsieh. Robust deep reinforcement learning against adversarial perturbations on state observations.
Advances in Neural Information Processing Systems, 33:21024–21037, 2020a.
Huan Zhang, Hongge Chen, Duane Boning, and Cho-Jui Hsieh. Robust reinforcement learning on
state observations with learned optimal adversary. arXiv preprint arXiv:2101.08452, 2021.
Shangtong Zhang and Shimon Whiteson. Dac: The double actor-critic architecture for learning
options. Advances in Neural Information Processing Systems, 32, 2019.
Yiming Zhang, Quan Vuong, and Keith Ross. First order constrained optimization in policy space.
Advances in Neural Information Processing Systems, 2020b.
Weiye Zhao, Tairan He, and Changliu Liu. Model-free safe control for zero-violation reinforcement
learning. In 5th Annual Conference on Robot Learning, 2021.
Table of Contents
A Proofs and Discussions
  A.1 Proof of Lemma 1 and Proposition 1 – infeasible tempting policies
  A.2 Proof of Lemma 2 – optimal policy's cost value
  A.3 Proof of Theorem 1 – existence of optimal deterministic MC/MR adversary
  A.4 Proof of Theorem 2 – one-step attack cost bound
  A.5 Proof of Theorem 3 – episodic attack cost bound
  A.6 Proof of Theorem 4 and Proposition 1 – Bellman contraction
B Remarks
  B.1 Remarks of the safe RL setting, stealthiness, and assumptions
  B.2 Remarks of the failure of SA-PPOL(MC/MR) baselines
C Implementation Details
  C.1 MC and MR attackers implementation
  C.2 PPO-Lagrangian algorithm
  C.3 Adversarial training full algorithm
  C.4 MAD attacker implementation
  C.5 SA-PPO-Lagrangian baseline
  C.6 Improved adaptive MAD (AMAD) attacker baseline
  C.7 Environment description
  C.8 Hyper-parameters
  C.9 More experiment results
A.1 PROOF OF LEMMA 1 AND PROPOSITION 1 – INFEASIBLE TEMPTING POLICIES

Lemma 1 indicates that all the tempting policies are infeasible: $\forall \pi \in \Pi_M^T, V_c^\pi(\mu_0) > \kappa$. We will
prove it by contradiction.

Proof. For a tempting safe RL problem $M_\Pi^\kappa$, suppose there exists a tempting policy that satisfies the constraint:
$\pi' \in \Pi_M^T$, $V_c^{\pi'}(\mu_0) \le \kappa$, i.e., $\pi' \in \Pi_M^\kappa$. Denote the optimal policy as $\pi^*$; then based on the definition of
the tempting policy, we have $V_r^{\pi'}(\mu_0) > V_r^{\pi^*}(\mu_0)$. Based on the definition of optimality, we know
that for any other feasible policy $\pi \in \Pi_M^\kappa$, we have
$$V_r^{\pi'}(\mu_0) > V_r^{\pi^*}(\mu_0) \ge V_r^{\pi}(\mu_0),$$
which indicates that $\pi'$ is the optimal policy for $M_\Pi^\kappa$. Then again, based on the definition of the tempting
policy, we would obtain
$$V_r^{\pi'}(\mu_0) > V_r^{\pi'}(\mu_0),$$
which contradicts the fact that $V_r^{\pi'}(\mu_0) = V_r^{\pi'}(\mu_0)$. Therefore, there is no tempting policy that
satisfies the constraint.
Proposition 1 suggests that as long as the MR attacker obtains a policy with a higher reward return than
the optimal policy $\pi^*$, given a large enough perturbation set $B_p^\epsilon(s)$, it is guaranteed to be reward
stealthy and effective.

Proof. The stealthiness is naturally satisfied based on the definition. The effectiveness is guaranteed
by Lemma 1. Since the corrupted policy $\pi^*\circ\nu_{MR}$ can achieve $V_r^{\pi^*\circ\nu_{MR}} > V_r^{\pi^*}$, we can conclude
that π ∗ ◦ νMR is within the tempting policy class, since it has higher reward than the optimal policy.
Then we know that it will violate the constraint based on Lemma 1, and thus the MR attacker is
effective.
A.2 PROOF OF LEMMA 2 – OPTIMAL POLICY'S COST VALUE

Lemma 2 says that in the augmented policy space $\bar{\Pi}$, the optimal policy $\pi^*$ of a tempting safe RL problem
satisfies $V_c^{\pi^*}(\mu_0) = \kappa$. It is clear that the tempting policy space and the original policy space
are subsets of the augmented policy space: $\Pi_M^T \subset \Pi \subset \bar{\Pi}$. We then prove Lemma 2 by contradiction.

Proof. Suppose the optimal policy $\pi^*(a|s,o)$ in the augmented policy space for a tempting safe RL
problem has $V_c^{\pi^*}(\mu_0) < \kappa$ and its option update function is $\pi_o^*$. Denote $\pi' \in \Pi_M^T$ as a tempting
policy. Based on Lemma 1, we know that $V_c^{\pi'}(\mu_0) > \kappa$ and $V_r^{\pi'}(\mu_0) > V_r^{\pi^*}(\mu_0)$. Then we can
compute a weight $\alpha$:
$$\alpha = \frac{\kappa - V_c^{\pi^*}(\mu_0)}{V_c^{\pi'}(\mu_0) - V_c^{\pi^*}(\mu_0)}. \qquad (8)$$
We can see that:
$$\alpha V_c^{\pi'}(\mu_0) + (1-\alpha) V_c^{\pi^*}(\mu_0) = \kappa. \qquad (9)$$
Now we consider the augmented space $\bar{\Pi}$. Since $\Pi \subseteq \bar{\Pi}$, we have $\pi^*, \pi' \in \bar{\Pi}$, and we further define
another policy $\bar{\pi}$ based on the trajectory-wise mixture of $\pi^*$ and $\pi'$ as
$$\bar{\pi}(a_t|s_t, o_t) = \begin{cases} \pi'(a_t|s_t), & \text{if } o_t = 1 \\ \pi^*(a_t|s_t, u_t), & \text{if } o_t = 0 \end{cases} \qquad (10)$$
with $o_{t+1} = o_t$, $o_0 \sim \text{Bernoulli}(\alpha)$, and the update of $u$ follows the definition of $\pi_o^*$. Therefore, the
trajectory of $\bar{\pi}$ has probability $\alpha$ of being sampled from $\pi'$ and probability $1-\alpha$ of being sampled from $\pi^*$:
$$\tau \sim \bar{\pi} := \begin{cases} \tau \sim \pi', & \text{with probability } \alpha, \\ \tau \sim \pi^*, & \text{with probability } 1-\alpha. \end{cases} \qquad (11)$$
Remark 2. The cost value function $V_c^{\pi^*}(\mu_0) = \mathbb{E}_{\tau\sim\pi}\left[\sum_{t=0}^\infty \gamma^t c_t\right]$ is based on the expectation over
sampled trajectories (expectation over episodes) rather than a single trajectory (expectation within
one episode), because for a single sampled trajectory $\tau \sim \pi$, $V_c^{\pi^*}(\tau) = \sum_{t=0}^\infty \gamma^t c_t$ may not
necessarily satisfy the constraint.
Remark 3. The proof also indicates that the range of the metric function $\mathcal{V} := \{(V_r^\pi(\mu_0), V_c^\pi(\mu_0))\}$
(shown as the blue circle in Fig. 1) is convex when we extend $\bar{\Pi}$ to a linear mixture of $\Pi$, i.e., let
$O = \{1, 2, 3, \dots\}$ and $\bar{\Pi}: S \times A \times O \to [0, 1]$. Consider $\alpha = [\alpha_1, \alpha_2, \dots]$, $\alpha_i \ge 0$, $\sum_{i=1} \alpha_i = 1$,
and $\pi = [\pi_1, \pi_2, \dots]$. We can construct a policy $\bar{\pi} \in \bar{\Pi} = \langle \alpha, \pi \rangle$:
$$\bar{\pi}(a_t|s_t, o_t) = \begin{cases} \pi_1(a_t|s_t), & \text{if } o_t = 1 \\ \pi_2(a_t|s_t), & \text{if } o_t = 2 \\ \dots \end{cases} \qquad (17)$$
A.3 PROOF OF THEOREM 1 – EXISTENCE OF OPTIMAL DETERMINISTIC MC/MR ADVERSARY

Existence. Given a fixed policy $\pi$, we first introduce two adversary MDPs, $\hat{M}_r = (S, \hat{A}, \hat{P}, \hat{R}_r, \gamma)$
for the reward maximization adversary and $\hat{M}_c = (S, \hat{A}, \hat{P}, \hat{R}_c, \gamma)$ for the cost maximization adversary,
to prove the existence of the optimal adversary. In the adversary MDPs, the adversary acts as the agent
and chooses a perturbed state as the action (i.e., $\hat{a} = \tilde{s}$) to maximize the cumulative reward $\sum \hat{R}$.
Therefore, in the adversary MDPs, the action space is $\hat{A} = S$ and $\nu(\cdot|s)$ denotes a policy distribution.
Based on the above definitions, we can also derive the transition function and reward function of the new
MDPs (Zhang et al., 2020a):
$$\hat{p}(s'|s,\hat{a}) = \sum_a \pi(a|\hat{a})\, p(s'|s,a), \qquad (20)$$
$$\hat{R}_f(s,\hat{a},s') = \begin{cases} \dfrac{\sum_a \pi(a|\hat{a})\, p(s'|s,a)\, f(s,a,s')}{\sum_a \pi(a|\hat{a})\, p(s'|s,a)}, & \hat{a} \in B_p^\epsilon(s) \\ -C, & \hat{a} \notin B_p^\epsilon(s) \end{cases}, \quad f \in \{r,c\}, \qquad (21)$$
where $\hat{a} = \tilde{s} \sim \nu(\cdot|s)$ and $C$ is a constant. Therefore, with a sufficiently large $C$, we can guarantee
that the optimal adversary $\nu^*$ will not choose a perturbed state $\hat{a}$ outside the $\ell_p$-ball of the given state $s$,
i.e., $\nu^*(\hat{a}|s) = 0, \forall \hat{a} \notin B_p^\epsilon(s)$.
According to the properties of MDPs (Sutton et al., 1998), $\hat{M}_r$ and $\hat{M}_c$ have corresponding optimal
policies $\nu_r^*$ and $\nu_c^*$, which are deterministic, assigning unit probability mass to the optimal action $\hat{a}$ for each
state.
Next, we prove that $\nu_r^* = \nu_{MR}$ and $\nu_c^* = \nu_{MC}$. Consider the value function in $\hat{M}_f, f\in\{r,c\}$; for an
adversary $\nu \in N := \{\nu \mid \nu(\hat{a}|s) = 0, \forall \hat{a} \notin B_p^\epsilon(s)\}$, we have
$$\begin{aligned}
\hat{V}_f^\nu(s) &= \mathbb{E}_{\hat{a}\sim\nu(\cdot|s),\, s'\sim\hat{p}(\cdot|s,\hat{a})}\left[\hat{R}_f(s,\hat{a},s') + \gamma \hat{V}_f^\nu(s')\right] && (22) \\
&= \sum_{\hat{a}} \nu(\hat{a}|s) \sum_{s'} \hat{p}(s'|s,\hat{a})\left[\hat{R}_f(s,\hat{a},s') + \gamma \hat{V}_f^\nu(s')\right] && (23) \\
&= \sum_{\hat{a}} \nu(\hat{a}|s) \sum_{s'} \sum_a \pi(a|\hat{a})\, p(s'|s,a) \left[\frac{\sum_a \pi(a|\hat{a})\, p(s'|s,a)\, f(s,a,s')}{\sum_a \pi(a|\hat{a})\, p(s'|s,a)} + \gamma \hat{V}_f^\nu(s')\right] && (24) \\
&= \sum_{s'} p(s'|s,a) \sum_a \pi(a|\hat{a}) \sum_{\hat{a}} \nu(\hat{a}|s)\left[f(s,a,s') + \gamma \hat{V}_f^\nu(s')\right] && (25) \\
&= \sum_{s'} p(s'|s,a) \sum_a \pi(a|\nu(s))\left[f(s,a,s') + \gamma \hat{V}_f^\nu(s')\right]. && (26)
\end{aligned}$$
Therefore, $V_f^{\pi\circ\nu}(s) = \hat{V}_f^\nu(s)$ for $\nu \in N$. Note that in the adversary MDPs $\nu_f^* \in N$ and
$$\nu_f^* = \arg\max_\nu \mathbb{E}_{a\sim\pi(\cdot|\nu(s)),\, s'\sim p(\cdot|s,a)}\left[f(s,a,s') + \gamma \hat{V}_f^\nu(s')\right]. \qquad (28)$$
We also know that $\nu_f^*$ is deterministic, so
$$\begin{aligned}
\nu_f^*(s) &= \arg\max_\nu \mathbb{E}_{a\sim\pi(\cdot|\tilde{s}),\, s'\sim p(\cdot|s,a)}\left[f(s,a,s') + \gamma \hat{V}_f^\nu(s')\right] && (29) \\
&= \arg\max_\nu \mathbb{E}_{a\sim\pi(\cdot|\tilde{s}),\, s'\sim p(\cdot|s,a)}\left[f(s,a,s') + \gamma V_f^{\pi\circ\nu}(s')\right] && (30) \\
&= \arg\max_\nu V_f^{\pi\circ\nu}(s). && (31)
\end{aligned}$$
Therefore, $\nu_r^* = \nu_{MR}$ and $\nu_c^* = \nu_{MC}$.
Optimality. We prove the optimality by contradiction. By definition, $\forall s_0 \in S$,
$$V_c^{\pi\circ\nu'}(s_0) \le V_c^{\pi\circ\nu_{MC}}(s_0). \qquad (32)$$
Suppose $\exists \nu'$ s.t. $V_c^{\pi\circ\nu'}(\mu_0) > V_c^{\pi\circ\nu_{MC}}(\mu_0)$; then there also exists $s_0 \in S$ s.t. $V_c^{\pi\circ\nu'}(s_0) >
V_c^{\pi\circ\nu_{MC}}(s_0)$, which contradicts Eq. (32). Similarly, we can prove that the property
holds for $\nu_{MR}$ by replacing $V_c^{\pi\circ\nu}$ with $V_r^{\pi\circ\nu}$. Therefore, there is no other adversary that achieves
higher attack effectiveness than $\nu_{MC}$ or higher reward stealthiness than $\nu_{MR}$.
A.4 PROOF OF THEOREM 2 – ONE-STEP ATTACK COST BOUND

We have
$$\tilde{V}_c^{\pi,\nu}(s) = \mathbb{E}_{a\sim\pi(\cdot|\nu(s)),\, s'\sim p(\cdot|s,a)}\left[c(s,a,s') + \gamma V_c^\pi(s')\right]. \qquad (33)$$
By the Bellman equation,
$$V_c^\pi(s) = \mathbb{E}_{a\sim\pi(\cdot|s),\, s'\sim p(\cdot|s,a)}\left[c(s,a,s') + \gamma V_c^\pi(s')\right]. \qquad (34)$$
For simplicity, denote $p_{sa}^{s'} = p(s'|s,a)$, and we have
$$\begin{aligned}
\tilde{V}_c^{\pi,\nu}(s) - V_c^\pi(s) &= \sum_{a\in A}\left(\pi(a|\nu(s)) - \pi(a|s)\right)\sum_{s'\in S} p_{sa}^{s'}\left(c(s,a,s') + \gamma V_c^\pi(s')\right) && (35) \\
&\le \sum_{a\in A}\left|\pi(a|\nu(s)) - \pi(a|s)\right| \max_{a\in A}\sum_{s'\in S} p_{sa}^{s'}\left(c(s,a,s') + \gamma V_c^\pi(s')\right). && (36)
\end{aligned}$$
By definition, $D_{TV}[\pi(\cdot|\nu(s))\,\|\,\pi(\cdot|s)] = \frac{1}{2}\sum_{a\in A}\left|\pi(a|\nu(s)) - \pi(a|s)\right|$, and $c(s,a,s') = 0$ for $s' \notin S_c$.
Therefore, we have
$$\begin{aligned}
\tilde{V}_c^{\pi,\nu}(s) - V_c^\pi(s) &\le 2 D_{TV}[\pi(\cdot|\nu(s))\,\|\,\pi(\cdot|s)]\, \max_{a\in A}\left(\sum_{s'\in S_c} p_{sa}^{s'}\, c(s,a,s') + \sum_{s'\in S} p_{sa}^{s'}\, \gamma V_c^\pi(s')\right) && (37) \\
&\le 2L\|\nu(s) - s\|_p\, \max_{a\in A}\left(\sum_{s'\in S_c} p_{sa}^{s'}\, C_m + \sum_{s'\in S} p_{sa}^{s'}\, \frac{\gamma C_m}{1-\gamma}\right) && (38) \\
&\le 2L\epsilon\left(p_s C_m + \frac{\gamma C_m}{1-\gamma}\right). && (39)
\end{aligned}$$
By Theorem 2,

\delta_c^{\pi \circ \nu} = \max_{s} \big| \mathbb{E}_{a \sim \pi \circ \nu} A_c^{\pi}(s,a) \big| \qquad (44)

\le \max_{s} 2 L \epsilon \Big( p_s C_m + \frac{\gamma C_m}{1-\gamma} \Big) \qquad (45)

= 2 L \epsilon C_m \Big( \max_{s} p_s + \frac{\gamma}{1-\gamma} \Big). \qquad (46)
Recall Theorem 4: the Bellman policy operator T_π is a contraction under the sup-norm ∥·∥_∞ and will converge to its fixed point. The Bellman policy operator is defined as:

(T_{\pi} V_f^{\pi \circ \nu})(s) = \sum_{a \in A} \pi(a|\nu(s)) \sum_{s' \in S} p(s'|s,a) \big[ f(s,a,s') + \gamma V_f^{\pi \circ \nu}(s') \big], \qquad f \in \{r, c\}. \qquad (51)
Proof. Denote f_{sa}^{s'} = f(s,a,s'), f ∈ {r, c}, and p_{sa}^{s'} = p(s'|s,a) for simplicity; we have:

(T_{\pi} U_f^{\pi \circ \nu})(s) - (T_{\pi} V_f^{\pi \circ \nu})(s) = \sum_{a \in A} \pi(a|\nu(s)) \sum_{s' \in S} p_{sa}^{s'} \big[ f_{sa}^{s'} + \gamma U_f^{\pi \circ \nu}(s') \big] \qquad (52)

\quad - \sum_{a \in A} \pi(a|\nu(s)) \sum_{s' \in S} p_{sa}^{s'} \big[ f_{sa}^{s'} + \gamma V_f^{\pi \circ \nu}(s') \big] \qquad (53)

= \gamma \sum_{a \in A} \pi(a|\nu(s)) \sum_{s' \in S} p_{sa}^{s'} \big[ U_f^{\pi \circ \nu}(s') - V_f^{\pi \circ \nu}(s') \big] \qquad (54)

\le \gamma \max_{s' \in S} \big| U_f^{\pi \circ \nu}(s') - V_f^{\pi \circ \nu}(s') \big| \qquad (55)

= \gamma \big\| U_f^{\pi \circ \nu} - V_f^{\pi \circ \nu} \big\|_{\infty}, \qquad (56)
Since the above holds for any state s, we have:

\max_{s} \big| (T_{\pi} U_f^{\pi \circ \nu})(s) - (T_{\pi} V_f^{\pi \circ \nu})(s) \big| \le \gamma \big\| U_f^{\pi \circ \nu} - V_f^{\pi \circ \nu} \big\|_{\infty}.
Then based on the Contraction Mapping Theorem (Meir & Keeler, 1969), we know that Tπ has a
unique fixed point Vf∗ (s), f ∈ {r, c} such that Vf∗ (s) = (Tπ Vf∗ )(s).
With the proof of the Bellman contraction, we can see why adversarial training can be performed successfully under observational attacks. Since the Bellman operator is a contraction for both the reward and the cost under adversarial attacks, we can accurately evaluate the performance of the corrupted policy in the policy evaluation phase. This is a crucial and strong guarantee for the success of adversarial training, because we cannot improve the policy without well-estimated values.
Proposition 1 states that if a trained policy π′ under the MC attacker satisfies V_c^{π′∘ν_MC}(µ_0) ≤ κ, then π′∘ν is guaranteed to be feasible under any B_p^ϵ-bounded adversarial perturbation. Similarly, if a trained policy π′ under the MR attacker satisfies V_c^{π′∘ν_MR}(µ_0) ≤ κ, then π′∘ν is guaranteed to be non-tempting under any B_p^ϵ-bounded adversarial perturbation. Before proving it, we first give the following definitions and lemmas.
Definition 6. Define the Bellman adversary effectiveness operator T_c^* : ℝ^{|S|} → ℝ^{|S|} as:

(T_c^* V_c^{\pi \circ \nu})(s) = \max_{\tilde{s} \in B_p^\epsilon(s)} \sum_{a \in A} \pi(a|\tilde{s}) \sum_{s' \in S} p(s'|s,a) \big[ c(s,a,s') + \gamma V_c^{\pi \circ \nu}(s') \big]. \qquad (57)

Definition 7. Define the Bellman adversary reward stealthiness operator T_r^* : ℝ^{|S|} → ℝ^{|S|} as:

(T_r^* V_r^{\pi \circ \nu})(s) = \max_{\tilde{s} \in B_p^\epsilon(s)} \sum_{a \in A} \pi(a|\tilde{s}) \sum_{s' \in S} p(s'|s,a) \big[ r(s,a,s') + \gamma V_r^{\pi \circ \nu}(s') \big]. \qquad (58)
Recall that Bpϵ (s) is the ℓp ball to constrain the perturbation range. The two definitions correspond to
computing the value of the most effective and the most reward-stealthy attackers, which is similar to
the Bellman optimality operator in the literature. We then show their contraction properties via the
following Lemma:
Lemma 3. The Bellman operators Tc∗ , Tr∗ are contractions under the sup-norm ∥ · ∥∞ and will
converge to their fixed points, respectively. The fixed point for Tc∗ is Vcπ◦νMC = Tc∗ Vcπ◦νMC , and the
fixed point for Tr∗ is Vrπ◦νMR = Tr∗ Vrπ◦νMR .
Proof.

(T_f^* V_f^{\pi \circ \nu_1})(s) - (T_f^* V_f^{\pi \circ \nu_2})(s) = \max_{\tilde{s} \in B_p^\epsilon(s)} \sum_{a \in A} \pi(a|\tilde{s}) \sum_{s' \in S} p_{sa}^{s'} \big[ f_{sa}^{s'} + \gamma V_f^{\pi \circ \nu_1}(s') \big] \qquad (60)

\quad - \max_{\tilde{s} \in B_p^\epsilon(s)} \sum_{a \in A} \pi(a|\tilde{s}) \sum_{s' \in S} p_{sa}^{s'} \big[ f_{sa}^{s'} + \gamma V_f^{\pi \circ \nu_2}(s') \big] \qquad (61)

= \gamma \max_{\tilde{s} \in B_p^\epsilon(s)} \sum_{a \in A} \pi(a|\tilde{s}) \sum_{s' \in S} p_{sa}^{s'} \big[ V_f^{\pi \circ \nu_1}(s') - V_f^{\pi \circ \nu_2}(s') \big] \qquad (62)

\le \gamma \max_{\tilde{s} \in B_p^\epsilon(s)} \sum_{a \in A} \pi(a|\tilde{s}) \sum_{s' \in S} p_{sa}^{s'} \big| V_f^{\pi \circ \nu_1}(s') - V_f^{\pi \circ \nu_2}(s') \big| \qquad (63)

\stackrel{\Delta}{=} \gamma \sum_{a \in A} \pi(a|\tilde{s}^*) \sum_{s' \in S} p_{sa}^{s'} \big| V_f^{\pi \circ \nu_1}(s') - V_f^{\pi \circ \nu_2}(s') \big| \qquad (64)

\le \gamma \max_{s' \in S} \big| V_f^{\pi \circ \nu_1}(s') - V_f^{\pi \circ \nu_2}(s') \big| \qquad (65)

= \gamma \big\| V_f^{\pi \circ \nu_1} - V_f^{\pi \circ \nu_2} \big\|_{\infty}, \qquad (66)

where inequality (63) comes from Lemma 4, and s̃* in Eq. (64) denotes the argmax of the RHS.
Since the above holds for any state s, we can also conclude that:

\big\| T_f^* V_f^{\pi \circ \nu_1} - T_f^* V_f^{\pi \circ \nu_2} \big\|_{\infty} \le \gamma \big\| V_f^{\pi \circ \nu_1} - V_f^{\pi \circ \nu_2} \big\|_{\infty}.
After proving the contraction, we observe that the value functions of the MC and MR adversaries, V_c^{π∘ν_MC}(s) and V_r^{π∘ν_MR}(s), are the fixed points of T_c^* and T_r^* respectively, since ν_MC and ν_MR attain the maxima over B_p^ϵ(s) in Definitions 6 and 7.
With Lemma 3 and the proof above, we can obtain the conclusions in Remark 1: if the trained policy is safe under the MC or the MR attacker, then it is guaranteed to be feasible or non-tempting, respectively, under any B_p^ϵ(s)-bounded adversarial perturbation, since no other attacker can achieve a higher cost or reward return. This provides theoretical guarantees on the safety of adversarial training under the MC and MR attackers: the adversarially trained agents are guaranteed to be safe or non-tempting under any bounded adversarial perturbation. We believe these theoretical guarantees are crucial for the success of our adversarially trained agents, because our ablation studies show that adversarial training cannot achieve the desired performance with other attackers.
B REMARKS
Safe RL setting regarding the reward and the cost. We consider safe RL problems that have separate task rewards and constraint violation costs, i.e., independent reward and cost functions. Combining the cost and the reward into a single scalar metric, which can be viewed as manually selecting Lagrange multipliers, may work in simple problems. However, it lacks interpretability – it is hard to explain what a single scalar value means – and it requires good domain knowledge of the problem, since the weights between costs and rewards must be carefully balanced, which is difficult when the task reward already contains many objectives/factors. On the other hand, separating the costs from the rewards makes it easy to monitor the safety performance and the task performance respectively, which is more interpretable and applicable to different cost constraint thresholds.
Determining the temptation status of a safe RL problem. According to Def. 1-3, the absence of tempting policies indicates a non-tempting safe RL problem, where the optimal policy has the highest reward while satisfying the constraint. For the safe deployment problem that only cares about safety after training, having no tempting policy means that the cost signal is unnecessary for training, because one can simply focus on maximizing the reward. As long as the most rewarding policies are found, the safety requirement is automatically satisfied, and thus many standard RL algorithms can solve the problem. Since safe RL methods are not required in this setting, non-tempting tasks are usually not discussed in safe RL papers, and they are also not the focus of this paper. From another perspective, since a safe RL problem is specified by the cost threshold κ, one can tune the threshold to change the temptation status. For instance, if κ > max_{s,a,s'} c(s,a,s'), then the problem is guaranteed to be non-tempting because all policies satisfy the constraint, and thus we can use standard RL methods to solve it.
Independently estimated reward and cost value functions assumption. Similar to most existing
safe RL algorithms, such as PPO-Lagrangian Ray et al. (2019); Stooke et al. (2020), CPO Achiam
et al. (2017), FOCOPS Zhang et al. (2020b), and CVPO Liu et al. (2022), we consider the policy-
based (or actor-critic-based) safe RL in this work. There are two phases for this type of approach:
policy evaluation and policy improvement. In the policy evaluation phase, the reward and cost value
functions Vrπ , Vcπ are evaluated separately. At this stage, the Bellman operators for reward and cost
values are independent. Therefore, they have contractions (Theorem 4) and will converge to their
fixed points separately. This is a commonly used treatment in safe RL papers to train the policy: first
evaluating the reward and cost values independently by Bellman equations and then optimizing the
policy based on the learned value estimations. Therefore, our theoretical analysis of robustness is
also developed under this setting.
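To make this treatment concrete, the following is a minimal tabular sketch (Python/NumPy; all quantities are synthetic and purely illustrative, not taken from our environments) of the policy evaluation phase: the reward and cost values are estimated by two independent Bellman backups under the same fixed policy, each converging to its own fixed point as guaranteed by the contraction property.

import numpy as np

n_states, n_actions, gamma = 4, 2, 0.9
rng = np.random.default_rng(0)

P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))    # P[s, a, s']
R = rng.uniform(0.0, 1.0, size=(n_states, n_actions))               # reward r(s, a)
C = rng.binomial(1, 0.2, size=(n_states, n_actions)).astype(float)  # cost c(s, a)
pi = rng.dirichlet(np.ones(n_actions), size=n_states)               # policy pi(a|s)

def evaluate(f, n_iters=500):
    # Iterate the Bellman policy operator for the signal f in {reward, cost}:
    # (T_pi V)(s) = sum_a pi(a|s) [ f(s, a) + gamma * sum_s' P(s'|s, a) V(s') ]
    V = np.zeros(n_states)
    for _ in range(n_iters):
        V = (pi * (f + gamma * P @ V)).sum(axis=1)
    return V

V_r = evaluate(R)  # reward value function, estimated on its own
V_c = evaluate(C)  # cost value function, estimated independently of V_r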
(Reward) Stealthy attack for safe RL. As discussed in Sec. 3.2, the stealthiness concept in supervised learning refers to the requirement that an adversarial attack be covert so that it cannot be easily identified. While we use the perturbation set B_p^ϵ to ensure stealthiness with respect to the observation corruption, we notice that another level of stealthiness, regarding the task reward performance, is interesting and worth discussing. In some real-world applications, task-related metrics (such as velocity, acceleration, and goal distance) are usually easy to monitor from sensors. However, safety metrics can be sparse and hard to monitor until the constraints are broken, such as colliding with obstacles or entering hazardous states, which are determined by binary indicator signals. Therefore, a dramatic drop in task-related metrics (reward) might be easily detected by the agent, while constraint violation signals could be hard to detect until catastrophic failures occur. An unstealthy attack in this scenario may decrease the reward a lot and prevent the agent from finishing the task, which can warn the agent that it is under attack and thus lead to a failed attack. On the contrary, a stealthy attack can maintain the agent's task reward so that the agent is not aware of the existence of the attack based on "good" task metrics, while performing successful attacks that lead to constraint violations. In other words, a stealthy attack should corrupt the policy to be tempted, since all tempting policies are high-rewarding yet unsafe.
Stealthiness definition of the attacks. There is an alternative definition of stealthiness that considers the magnitude of the reward change regardless of whether the reward increases or decreases. This two-sided stealthiness is stricter than the one-sided, lower-bound definition used in this paper. However, in a practical system design, people usually set a threshold on the lower bound of the task performance to determine whether the system functions properly, rather than specifying an upper bound, because it is tricky to decide what level of performance is so unusually good that it should alert the agent. For instance, an autonomous vehicle that fails to reach the destination within a certain amount of time may be identified as abnormal, while reaching the goal faster may not, since it is hard to specify a threshold for overly good performance. Therefore, increasing the reward may not attract the same attention from the agent as decreasing it by the same amount. In addition, finding a stealthy and effective attacker with minimal reward change might be a much harder problem under the two-sided definition, since there are far fewer candidate solutions and the optimization problem could be harder to formulate. We believe this is an interesting point worth investigating in the future, while we focus on the one-sided definition of stealthiness in this work.
The detailed algorithm of SA-PPOL (Zhang et al., 2020a) can be found in Appendix C.5. The basic idea can be summarized via the following equation:

\ell_{\nu}(s) = -D_{KL}\big[ \pi(\cdot|s) \,\|\, \pi_{\theta}(\cdot|\nu(s)) \big], \qquad (73)

which aims to minimize the divergence between the policy distributions at the corrupted states and at the original states. Note that we only optimize (compute gradients for) π_θ(·|ν(s)) rather than π(·|s), since we view π(·|s) as the "ground-truth" target action distribution. Adding the above KL regularizer to the original PPOL loss yields the SA-PPOL algorithm. We observe that the original SA-PPOL, which uses the MAD attacker as the adversary, can learn well in most of the tasks, though it is not safe under strong attacks. However, SA-PPOL with the MR or MC adversary often fails to learn a meaningful policy in many tasks, especially with the MR attacker. The reason is that the MR attacker aims to find high-rewarding adversarial states, while the KL loss forces the policy distribution at these high-rewarding adversarial states to match the policy distribution at the original, relatively lower-reward states. As a result, training could fail due to the wrong policy optimization direction and the suppressed exploration of high-rewarding states. Since the MC attacker can also lead to high-rewarding adversarial states due to the existence of tempting policies, we may also observe failed training with the MC attacker.
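For concreteness, the following is a minimal PyTorch sketch of the KL regularizer in Eq. (73); the Gaussian policy head and all variable names are illustrative placeholders rather than the exact networks used in our implementation.

import torch
import torch.distributions as td

def sa_kl_loss(policy, s, s_adv):
    # KL[ pi(.|s) || pi_theta(.|nu(s)) ]: the clean-state distribution is a fixed
    # target; gradients flow only through the perturbed-state branch.
    with torch.no_grad():
        mu_t, std_t = policy(s)
        target = td.Normal(mu_t, std_t)
    mu_a, std_a = policy(s_adv)
    return td.kl_divergence(target, td.Normal(mu_a, std_a)).sum(-1).mean()

# Toy usage with a linear Gaussian policy head:
net = torch.nn.Linear(3, 2)
policy = lambda x: (net(x), 0.5 * torch.ones(x.shape[0], 2))
s = torch.randn(8, 3)
loss = sa_kl_loss(policy, s, s + 0.05 * torch.randn_like(s))
loss.backward()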
C IMPLEMENTATION DETAILS
C.1 MC AND MR ATTACKERS IMPLEMENTATION
We use the gradient of the state-action value function Q(s, a) to provide the direction for updating states adversarially over K steps (Q = Q_r^π for MR and Q = Q_c^π for MC):

s^{k+1} = \mathrm{Proj}\big[ s^{k} - \eta \nabla_{s^{k}} Q(s^{0}, \pi(s^{k})) \big], \qquad k = 0, \dots, K-1, \qquad (74)

where Proj[·] is the projection onto B_p^ϵ(s^0), η is the learning rate, and s^0 is the state under attack. Since the Q-value function and the policy are parametrized by neural networks, we can backpropagate the gradient from Q_c or Q_r to s^k via π(ã|s^k), which can be solved efficiently by many optimizers such as ADAM. It is related to the Projected Gradient Descent (PGD) attack and to deterministic policy gradient methods such as DDPG and TD3 in the literature, but the optimization variables are the state perturbations rather than the policy parameters.
Note that we use the gradient of Q(s^0, π(s^k)) rather than Q(s^k, π(s^k)) to make the optimization more stable, since the Q function may not generalize well to unseen states in practice. This technique for solving adversarial attacks is also widely used in the standard RL literature and has been shown to be successful, e.g., in Zhang et al. (2020a). The implementation of the MC and MR attackers is shown in Algorithm 2. Empirically, this gradient-based method converges within a few iterations and within 10 ms, as shown in Fig. 3, which greatly improves adversarial training efficiency.
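A minimal PyTorch sketch of the update in Eq. (74) is given below; `q_fn` and `policy_mean` are hypothetical stand-ins for the learned critic (Q_c for MC, Q_r for MR) and a deterministic evaluation of the policy, an ℓ_∞ ball is used for the projection, and the `sign` argument selects whether the chosen critic is ascended or descended.

import torch

def critic_attack(q_fn, policy_mean, s0, eps=0.05, eta=0.05, K=200, sign=1.0):
    # K projected gradient steps on the state; the first argument of Q is kept
    # at the clean state s0 for stability, as described above.
    s = s0.clone()
    for _ in range(K):
        s = s.detach().requires_grad_(True)
        q = q_fn(s0, policy_mean(s)).sum()
        (grad,) = torch.autograd.grad(q, s)
        with torch.no_grad():
            s = s + sign * eta * grad
            s = s0 + torch.clamp(s - s0, -eps, eps)  # projection onto the eps-ball around s0
    return s.detach()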
The objective of PPO (clipped) has the form (Schulman et al., 2017):

\ell_{ppo} = \min\left( \frac{\pi_{\theta}(a|s)}{\pi_{\theta_k}(a|s)} A^{\pi_{\theta_k}}(s,a), \; \mathrm{clip}\left( \frac{\pi_{\theta}(a|s)}{\pi_{\theta_k}(a|s)}, 1-\epsilon, 1+\epsilon \right) A^{\pi_{\theta_k}}(s,a) \right). \qquad (75)
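A small PyTorch sketch of this clipped surrogate follows; `logp`, `logp_old`, and `adv` are illustrative tensors that would come from the current policy, the behavior policy π_θk, and the advantage estimator.

import torch

def ppo_clip_loss(logp, logp_old, adv, clip_eps=0.02):
    ratio = torch.exp(logp - logp_old)                       # pi_theta / pi_theta_k
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
    # Negated because Eq. (75) is an objective to be maximized.
    return -torch.min(ratio * adv, clipped * adv).mean()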
We use the PID Lagrangian method (Stooke et al., 2020), which addresses the oscillation and overshoot problems of Lagrangian methods. The loss of the PPO-Lagrangian has the form:

\ell_{ppol} = \frac{1}{1+\lambda} \big( \ell_{ppo} + V_r^{\pi} - \lambda V_c^{\pi} \big) \qquad (76)

The Lagrangian multiplier λ is computed by applying feedback control to V_c^π and is determined by the gains K_P, K_I, and K_D, which need to be fine-tuned.
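The sketch below illustrates one simple way such a PID-controlled multiplier can be implemented (Python); the cost threshold, the non-negativity clamping, and the exact feedback rule shown here are illustrative choices in the spirit of Stooke et al. (2020), not our exact implementation.

class PIDLagrangian:
    def __init__(self, kp=0.1, ki=0.003, kd=0.001, cost_limit=25.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.cost_limit = cost_limit
        self.integral = 0.0
        self.prev_cost = 0.0

    def update(self, ep_cost):
        # Feedback on the constraint violation: positive error means unsafe.
        error = ep_cost - self.cost_limit
        self.integral = max(self.integral + error, 0.0)
        derivative = max(ep_cost - self.prev_cost, 0.0)
        self.prev_cost = ep_cost
        lam = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(lam, 0.0)  # the multiplier must stay non-negative

# lam = PIDLagrangian().update(mean_episode_cost)  # then plug lam into Eq. (76)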
Due to the page limit, we omit some implementation details in the main content and present the full algorithm and some implementation tricks in this section. Unless otherwise stated, the critics and policies are parametrized by neural networks (NN), although we believe other parametrization forms should also work well.
Critics update. Denote ϕ_r as the parameters of the task reward critic Q_r and ϕ_c as the parameters of the constraint violation cost critic Q_c. Similar to many other off-policy algorithms (Lillicrap et al., 2015), we use a target network for each critic and the polyak smoothing trick to stabilize training. Other off-policy critic training methods, such as Retrace (Munos et al., 2016), could also be easily incorporated into the PPO-Lagrangian training framework. Denote ϕ'_r as the parameters of the target reward critic Q'_r and ϕ'_c as the parameters of the target cost critic Q'_c. Define D as the replay buffer and (s, a, s', r, c) as the state, action, next state, reward, and cost, respectively. The critics are updated by minimizing the following mean-squared Bellman error (MSBE):

\ell(\phi_r) = \mathbb{E}_{(s,a,s',r,c) \sim D} \Big[ \big( Q_r(s,a) - (r + \gamma \mathbb{E}_{a' \sim \pi} [Q'_r(s',a')]) \big)^2 \Big] \qquad (77)

\ell(\phi_c) = \mathbb{E}_{(s,a,s',r,c) \sim D} \Big[ \big( Q_c(s,a) - (c + \gamma \mathbb{E}_{a' \sim \pi} [Q'_c(s',a')]) \big)^2 \Big]. \qquad (78)
Denote α_c as the critics' learning rate; we have the following update equations:

\phi_r \leftarrow \phi_r - \alpha_c \nabla_{\phi_r} \ell(\phi_r) \qquad (79)

\phi_c \leftarrow \phi_c - \alpha_c \nabla_{\phi_c} \ell(\phi_c) \qquad (80)
Note that the original PPO-Lagrangian algorithm is an on-policy algorithm, which does not require the reward and cost critics to train the policy. We learn the critics because the MC and MR attackers require them; they are an essential module for adversarial training.
Polyak averaging for the target networks. The polyak averaging is specified by a weight parameter ρ ∈ (0, 1) and updates the parameters via:

\phi'_r = \rho \phi'_r + (1-\rho) \phi_r

\phi'_c = \rho \phi'_c + (1-\rho) \phi_c \qquad (81)

\theta' = \rho \theta' + (1-\rho) \theta.
These critic training tricks are widely adopted in many off-policy RL algorithms, such as SAC, DDPG, and TD3, and we observe that critics trained with these implementation tricks work well in practice.
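The following condensed PyTorch sketch puts Eqs. (77)-(81) together: one MSBE gradient step per critic followed by polyak averaging of the target networks. The module and optimizer names are illustrative placeholders, not our exact implementation.

import torch
import torch.nn.functional as F

def update_critics(batch, q_r, q_c, q_r_targ, q_c_targ, policy,
                   opt_r, opt_c, gamma=0.995, rho=0.995):
    s, a, s_next, r, c = batch                    # sampled from the replay buffer D
    with torch.no_grad():
        a_next = policy(s_next)                   # a' ~ pi(.|s')
        y_r = r + gamma * q_r_targ(s_next, a_next)
        y_c = c + gamma * q_c_targ(s_next, a_next)
    for q, y, opt in ((q_r, y_r, opt_r), (q_c, y_c, opt_c)):
        loss = F.mse_loss(q(s, a), y)             # MSBE, Eqs. (77)-(78)
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():                          # polyak averaging, Eq. (81)
        for targ, src in ((q_r_targ, q_r), (q_c_targ, q_c)):
            for p_t, p in zip(targ.parameters(), src.parameters()):
                p_t.mul_(rho).add_((1.0 - rho) * p)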
Then we present the full Robust PPO-Lagrangian algorithm:
The full algorithm of the MAD attacker is presented in Algorithm 4. We use the same SGLD optimizer as in Zhang et al. (2020a) to maximize the KL-divergence; that is, the objective of the MAD attacker is to maximize the divergence between the policy distributions at the original state s^0 and at the perturbed state s within B_p^ϵ(s^0). Note that we back-propagate the gradient from the corrupted state s, instead of the original state s^0, to the policy parameters θ. The full algorithm is shown below:
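As a complement to the pseudocode, here is an illustrative PyTorch sketch of the MAD idea using plain projected gradient ascent on the KL divergence (the actual algorithm uses the SGLD optimizer mentioned above; the Gaussian policy interface and all names are hypothetical).

import torch
import torch.distributions as td

def mad_attack(policy, s0, eps=0.05, eta=0.05, K=60):
    with torch.no_grad():
        mu0, std0 = policy(s0)
        clean = td.Normal(mu0, std0)              # fixed reference distribution at s0
    s = s0.clone()
    for _ in range(K):
        s = s.detach().requires_grad_(True)
        mu, std = policy(s)
        kl = td.kl_divergence(clean, td.Normal(mu, std)).sum()
        (grad,) = torch.autograd.grad(kl, s)
        with torch.no_grad():                     # ascend the KL, stay in the eps-ball
            s = s0 + torch.clamp(s + eta * grad - s0, -eps, eps)
    return s.detach()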
To motivate the design of the AMAD baseline, we denote P^π(s'|s) = ∫ p(s'|s,a) π(a|s) da as the state transition kernel and p_t^π(s) = p(s_t = s | π) as the probability of visiting state s at time t under policy π, where p_t^π(s') = ∫ P^π(s'|s) p_{t-1}^π(s) ds. Then the discounted future state distribution d^π(s) is defined as (Kakade, 2003):

d^{\pi}(s) = (1-\gamma) \sum_{t=0}^{\infty} \gamma^{t} \, p_t^{\pi}(s).
We can see that performing the MAD attack in low-risk regions with small p(s'|s,a) c(s,a,s') values may not be effective – the agent may not even be close to the safety boundary. On the other hand, perturbing π when p(s'|s,a) c(s,a,s') is large has a higher chance of causing constraint violations. Therefore, we improve MAD to the Adaptive MAD (AMAD) attacker, which only attacks the agent in high-risk regions (determined by the cost value function and a threshold ξ).
The implementation of AMAD is shown in Algorithm 6. Given a batch of states {s}_N, we compute the cost values {V_c^π(s)}_N and sort them in ascending order. We then select a certain percentile of {V_c^π(s)}_N as the threshold ξ and attack the states whose cost values exceed ξ.
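A short sketch of this selection rule is given below, reusing the `mad_attack` sketch above; `value_c` is a hypothetical cost value network, and using the (1 − ξ) quantile of the batch as the threshold is an illustrative choice.

import torch

def amad_attack(policy, value_c, states, frac=0.1, **attack_kwargs):
    v_c = value_c(states).squeeze(-1)             # cost values V_c(s) for the batch
    xi = torch.quantile(v_c, 1.0 - frac)          # threshold: only the riskiest `frac` of states
    risky = v_c > xi
    perturbed = states.clone()
    if risky.any():
        perturbed[risky] = mad_attack(policy, states[risky], **attack_kwargs)
    return perturbed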
We use the Bullet safety gym (Gronauer, 2022) environments for this set of experiments. In the Circle tasks, the goal is for an agent to move along the circumference of a circle while remaining within a safety region smaller than the radius of the circle. The reward and cost functions are defined as:

r(s) = \frac{-y v_x + x v_y}{1 + \big| \sqrt{x^2 + y^2} - r \big|} + r_{robot}(s), \qquad c(s) = \mathbb{1}(|x| > x_{lim}),

where x, y are the position of the agent on the plane, v_x, v_y are the velocities of the agent along the x and y directions, r is the radius of the circle, x_lim specifies the range of the safety region, and r_robot(s) is a robot-specific reward. For example, an ant robot gains reward if its feet do not collide with each other. In the Run tasks, the goal is for an agent to move as far as possible within the safety region and under the speed limit. The reward and cost functions are defined as:

r(s) = \sqrt{(x_{t-1} - g_x)^2 + (y_{t-1} - g_y)^2} - \sqrt{(x_t - g_x)^2 + (y_t - g_y)^2} + r_{robot}(s), \qquad c(s) = \mathbb{1}(|y| > y_{lim}) + \mathbb{1}\Big( \sqrt{v_x^2 + v_y^2} > v_{lim} \Big),

where v_lim is the speed limit and (g_x, g_y) is the position of a fictitious target. The reward is the difference between the distance to the target at the previous timestep and the distance at the current timestep.
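For reference, the reward and cost definitions above can be transcribed directly as follows (Python/NumPy; the function signatures are illustrative and the robot-specific bonus r_robot is left as an input).

import numpy as np

def circle_reward_cost(x, y, vx, vy, radius, x_lim, r_robot=0.0):
    r = (-y * vx + x * vy) / (1.0 + abs(np.hypot(x, y) - radius)) + r_robot
    c = float(abs(x) > x_lim)
    return r, c

def run_reward_cost(prev_pos, pos, goal, vx, vy, y_lim, v_lim, r_robot=0.0):
    prev_pos, pos, goal = map(np.asarray, (prev_pos, pos, goal))
    r = np.linalg.norm(prev_pos - goal) - np.linalg.norm(pos - goal) + r_robot
    c = float(abs(pos[1]) > y_lim) + float(np.hypot(vx, vy) > v_lim)
    return r, c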
In all experiments, we use Gaussian policies with mean vectors given as the outputs of neural
networks, and with variances that are separate learnable parameters. For the Car-Run experiment,
the policy networks and Q networks consist of two hidden layers with sizes of (128, 128). For
other experiments, they have two hidden layers with sizes of (256, 256). In both cases, the ReLU
activation function is used. We use a discount factor of γ = 0.995, a GAE parameter of λ_GAE = 0.97 for advantage estimation, a KL-divergence step size of δ_KL = 0.01, and a clipping coefficient of 0.02. The PID parameters for the Lagrange multiplier are K_P = 0.1, K_I = 0.003, and K_D = 0.001. The learning rate of the adversarial attackers (MAD, AMAD, MC, and MR) is 0.05. The number of optimization steps is 60 for MAD and AMAD and 200 for the MC and MR attackers. The threshold ξ for AMAD is 0.1. The complete hyperparameters used in the experiments are shown in Table 2. We choose a larger perturbation range for the Car robot tasks because they are simpler and easier to train.
All the experiments are performed on a server with an AMD EPYC 7713 64-Core Processor. For each experiment, we use 4 CPUs to train each agent, which is implemented in PyTorch; the training time varies from 4 hours (Car-Run) to 7 days (Ant-Circle). Video demos are available at: https://sites.google.com/view/robustsaferl/home
The experiments for the minimizing reward attack on our method are shown in Table 3. We can see that the minimizing reward attack has little effect on the cost, since the cost remains below the constraint violation threshold. Besides, we adopted one SOTA attack method from standard RL (MAD) as a baseline and improved it (AMAD) for the safe RL setting. The results, however, demonstrate that they do not perform well, which indicates that attacking methods and robust training methods from the standard RL setting do not necessarily perform well in the safe RL setting.
Table 3: Evaluation results under Minimum Reward attacker. Each value is reported as: mean and the difference
between the natural performance for 50 episodes and 5 seeds.
Env          Method           Natural                   MAD                           MC                            MR
                              Reward / Cost             Reward / Cost                 Reward / Cost                 Reward / Cost
Car-Circle   FOCOPS-vanilla   304.2±16.91 / 0.0±0.0     307.08±42.04 / 19.94±16.05    286.66±53.7 / 31.25±18.08     382.99±22.86 / 48.88±14.25
ϵ = 0.05     FOCOPS(MC)       268.56±44.79 / 0.0±0.0    256.05±45.26 / 0.0±0.0        284.93±45.84 / 0.97±2.99      267.37±49.75 / 0.64±1.92
             FOCOPS(MR)       305.91±18.16 / 0.0±0.0    295.86±20.02 / 0.04±0.4       264.33±25.76 / 1.64±3.62      308.62±26.33 / 0.82±1.98
Car-Run      FOCOPS-vanilla   509.47±11.7 / 0.0±0.0     494.74±11.75 / 0.95±1.32      540.23±12.56 / 27.0±17.61     539.85±11.85 / 25.1±17.29
ϵ = 0.05     FOCOPS(MC)       473.47±5.89 / 0.0±0.0     460.79±7.76 / 0.0±0.0         495.54±9.83 / 0.45±1.15       497.24±6.6 / 0.62±1.23
             FOCOPS(MR)       486.98±5.53 / 0.0±0.0     434.96±19.79 / 0.0±0.0        488.24±23.98 / 0.62±1.1       488.58±24.65 / 0.52±0.9
We evaluate the performance of the MAD and AMAD adversaries by attacking well-trained PPO-Lagrangian policies. We keep the policies' model weights fixed for all attackers. The comparison is shown in Fig. 4. We vary the attack fraction (determined by ξ) to thoroughly study the effectiveness of the AMAD attacker. We can see that the AMAD attacker is more effective, because the cost increases significantly as the perturbation grows while the reward is maintained well. This validates our hypothesis that attacking the agent in high-risk regions is more effective and stealthy.
The experiment results of trained safe RL policies under the Random and MAD attackers are shown in Table 5. The last column shows the average rewards and costs over all 5 attackers (Random, MAD, AMAD, MC, MR). Our agent (ADV-PPOL) with adversarial training is robust against all 5 attackers and achieves the lowest cost. We can also see that the AMAD attacker is more effective than MAD, since the cost under the AMAD attacker is higher than that under the MAD attacker.
Table 5: Evaluation results of natural performance (no attack) and under Random and MAD attackers. The
average column shows the average rewards and costs over all 5 attackers (Random, MAD, AMAD, MC, and
MR). Our methods are ADV-PPOL(MC/MR). Each value is reported as: mean ± standard deviation for 50
episodes and 5 seeds. We shade the two lowest-cost agents under each attacker column and break ties based on rewards, excluding the failing agents (whose natural rewards are less than 30% of PPOL-vanilla’s). We mark the failing agents with ⋆.
Table 6: Evaluation results of natural performance (no attack) and under MC and MR attackers of
SAC-Lagrangian w.r.t different entropy regularizer α. Each value is reported as: mean ± standard
deviation for 50 episodes and 5 seeds.
Env          α        Natural                     MC                            MR
                      Reward / Cost               Reward / Cost                 Reward / Cost
Car-Circle   0.1      414.43±7.99 / 1.04±2.07     342.32±17.8 / 112.53±6.92     328.5±22.06 / 43.52±18.39
ϵ = 0.05     0.01     437.12±9.83 / 0.94±1.96     309.0±60.72 / 92.53±22.04     313.58±21.1 / 35.0±15.59
             0.001    437.41±10.0 / 1.15±2.36     261.1±53.0 / 65.92±24.37      383.09±50.06 / 53.92±16.3
             0.0001   369.79±130.6 / 5.23±11.13   276.32±107.11 / 84.21±35.68   347.85±117.79 / 52.97±22.4
Car-Run      0.1      544.77±17.44 / 0.32±0.71    599.54±10.08 / 167.77±29.44   591.7±27.82 / 158.71±51.95
ϵ = 0.05     0.01     521.12±23.42 / 0.19±0.5     549.43±53.71 / 73.31±54.43    535.99±23.34 / 21.71±24.25
             0.001    516.22±47.14 / 0.47±0.95    550.29±34.78 / 90.47±48.81    546.3±54.49 / 87.25±60.46
             0.0001   434.92±136.81 / 0.0±0.0     446.44±151.74 / 41.89±44.41   452.29±119.77 / 15.16±17.7
Linearly combined MC and MR attacker. The experiment results of trained safe RL policies under a mixture of the MC and MR attackers are shown in Figure 5, and some detailed results are shown in Table 7. The mixed attacker is computed as the linear combination of the MC and MR objectives, namely w × MC + (1 − w) × MR, where w ∈ [0, 1] is the weight. Our agent (ADV-PPOL) with adversarial training is robust against the mixture attacker. However, there is no obvious trend showing which weight yields the strongest attack. In addition, we believe the practical performance depends heavily on the quality of the learned reward and cost Q functions: if the reward Q function is more robust and accurate than the cost Q function, then giving a larger weight to the reward Q should achieve better results, and vice versa.
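For completeness, the mixed objective can be obtained by reusing the illustrative `critic_attack` sketch above with a linearly combined critic, e.g.:

def mixed_attack(q_c, q_r, policy_mean, s0, w=0.5, **kwargs):
    # w * MC + (1 - w) * MR: combine the two critics before taking gradients.
    q_mix = lambda s, a: w * q_c(s, a) + (1.0 - w) * q_r(s, a)
    return critic_attack(q_mix, policy_mean, s0, **kwargs)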
Table 7: Evaluation results under different ratios of the MC and MR attackers for PPOL. Each value is reported as: mean ± standard deviation for 50 episodes and 5 seeds.
Table 8: Evaluation results of natural performance (no attack) and under MAD, MC, and MR attackers
of CVPO. Each value is reported as: mean ± standard deviation for 50 episodes and 5 seeds.
Env                     Natural                    MAD                          MC                           MR
                        Reward / Cost              Reward / Cost                Reward / Cost                Reward / Cost
Car-Circle (ϵ = 0.05)   412.17±13.02 / 0.02±0.13   236.93±72.79 / 49.32±34.01   310.64±37.37 / 98.03±25.53   329.68±77.66 / 51.52±24.65
Car-Run (ϵ = 0.05)      530.04±1.61 / 0.02±0.16    481.53±17.43 / 2.55±3.11     537.51±8.7 / 23.18±16.26     533.52±2.87 / 14.42±6.77
The experiment results of CVPO (Liu et al., 2022) are shown in Table 8. We can see that the vanilla version is not robust against adversarial attackers, since the cost is much larger after being attacked. Based on the conducted experiments on SAC-Lagrangian, FOCOPS, and CVPO, we can conclude that the vanilla versions of all of them suffer from vulnerability issues: though they are safe in noise-free environments, they are no longer safe under strong MC and MR attacks, which validates that our proposed methods and theories can be applied to the general safe RL setting.