| { | |
| "title": "Single-Trajectory Distributionally Robust Reinforcement Learning", | |
| "abstract": "To mitigate the limitation that the classical reinforcement learning (RL) framework heavily relies on identical training and test environments, Distributionally Robust RL (DRRL) has been proposed to enhance performance across a range of environments, possibly including unknown test environments.\nAs a price for robustness gain, DRRL involves optimizing over a set of distributions, which is inherently more challenging than optimizing over a fixed distribution in the non-robust case.\nExisting DRRL algorithms are either model-based or fail to learn from a single sample trajectory.\nIn this paper, we design a first fully model-free DRRL algorithm, called distributionally robust Q-learning with single trajectory (DRQ).\nWe delicately design a multi-timescale framework to fully utilize each incrementally arriving sample and directly learn the optimal distributionally robust policy without modeling the environment, thus the algorithm can be trained along a single trajectory in a model-free fashion.\nDespite the algorithm\u2019s complexity, we provide asymptotic convergence guarantees by generalizing classical stochastic approximation tools.\nComprehensive experimental results demonstrate the superior robustness and sample complexity of our proposed algorithm, compared to non-robust methods and other robust RL algorithms.", | |
| "sections": [ | |
| { | |
| "section_id": "1", | |
| "parent_section_id": null, | |
| "section_name": "Introduction", | |
| "text": "Reinforcement Learning (RL) is a machine learning paradigm for studying sequential decision problems.\nDespite considerable progress in recent years (Silver et al., 2016 ###reference_b29###; Mnih et al., 2015 ###reference_b21###; Vinyals et al., 2019 ###reference_b32###),\nRL algorithms often encounter a discrepancy between training and test environments. This discrepancy is widespread since test environments may be too complex to be perfectly represented in training, or the test environments may inherently shift from the training ones, especially in certain application scenarios, such as financial markets and robotic control.\nOverlooking the mismatch could impede the application of RL algorithms in real-world settings, given the known sensitivity of the optimal policy of the Markov Decision Process (MDP) to the model (Mannor et al., 2004 ###reference_b20###; Iyengar, 2005 ###reference_b15###).\nTo address this concern,\nDistributionally Robust RL (DRRL) (Zhou et al., 2021 ###reference_b38###; Yang et al., 2022 ###reference_b37###; Shi & Chi, ###reference_b28###; Panaganti & Kalathil, 2022 ###reference_b24###; Panaganti et al., 2022 ###reference_b25###; Ma et al., 2022 ###reference_b19###; Yang, 2018 ###reference_b36###; Abdullah et al., 2019 ###reference_b2###; Neufeld & Sester, 2022 ###reference_b22###) formulates the decision problem under the assumption that the test environment varies but remains close to the training environment.\nThe objective is to design algorithms optimizing the worst-case expected return over an ambiguity set encompassing all possible test distributions.\nEvaluating a DRRL policy necessitates deeper insight into the transition dynamics than evaluating a non-robust one, as it entails searching for the worst-case performance across all distributions within the ambiguity set.\nTherefore, most prior solutions are model-based, require the maintenance of an estimator for the entire transition model and the ambiguity set.\nSuch requirements may render these algorithms less practical in scenarios with large state-action spaces or where adequate modeling of the real environment is unfeasible.\nPrompted by this issue, we study a fully model-free DRRL algorithm in this paper, which learns the optimal DR policy without explicit environmental modeling.\nThe algorithm\u2019s distinctive feature is its capacity to learn from a single sample trajectory, representing the least demanding requirement for data collection.\nThis feature results from our innovative algorithmic framework, comprising incrementally updated estimators and a delicate approximation scheme.\nWhile most model-free non-robust RL algorithms support training in this setting\u2014contributing to their widespread use\u2014no existing work can effectively address the DRRL problem in this way.\nThe challenge arises from the fact that approximating a DR policy by learning from a single trajectory suffers from restricted control over state-action pairs and limited samples, i.e., only one sample at a time.\nAs we will demonstrate, a simple plug-in estimator using one sample, which is unbiased in the non-robust -learning algorithm, fails to approximate any robust value accurately.\nThe complexity of this task is further affirmed by the sole attempt to develop a model-free DRRL algorithm in (Liu et al., 2022 ###reference_b18###).\nIt relies on a restricted simulator assumption, enabling the algorithm to access an arbitrary number of samples from any state-action pair, thereby amassing sufficient system 
dynamics information before addressing the DRRL problem.\nRelaxing the dependence on a simulator and developing a fully model-free algorithm capable of learning from a single trajectory necessitates a delicate one-sample estimator for the DR value, carefully integrated into an algorithmic framework to eradicate bias from insufficient samples and ensure convergence to the optimal policy.\nMoreover, current solutions heavily depend on the specific divergence chosen to construct the ambiguity set and fail to bridge different divergences, underscoring the practical importance of divergence selection.\nThus a nature question arises:\nIs it possible to develop a model-free DRRL framework that can learn the optimal DR policy across different divergences using only a single sample trajectory for learning?" | |
| }, | |
| { | |
| "section_id": "1.1", | |
| "parent_section_id": "1", | |
| "section_name": "Our Contributions", | |
| "text": "In this paper, we provide a positive solution to the aforementioned question by making the following contributions:\nWe introduce a pioneering approach to construct the ambiguity set using the Cressie-Read family of -divergence. By leveraging the strong duality form of the corresponding distributionally robust reinforcement learning (DRRL) problem, we reformulate it, allowing for the learning of the optimal DR policies using misspecified MDP samples. This formulation effortlessly covers widely used divergences such as the Kullback-Leibler (KL) and divergence.\nTo address the additional nonlinearity that arises from the DR Bellman equation, which is absent in its non-robust counterpart, we develop a novel multi-timescale stochastic approximation scheme. This scheme carefully exploits the structure of the DR Bellman operator. The update of the table occurs in the slowest loop, while the other two loops are delicately designed to mitigate the bias introduced by the plug-in estimator due to the nonlinearity.\nWe instantiate our framework into a DR variant of the -learning algorithm, called distributionally robust -learning with single trajectory (DRQ). This algorithm solves discount Markov Decision Processes (MDPs) in a fully online and incremental manner. We prove the asymptotic convergence of our proposed algorithm by extending the classical two-timescale stochastic approximation framework, which may be of independent interest.\nWe conduct extensive experiments to showcase the robustness and sample efficiency of the policy learned by our proposed DR -learning algorithm.\nWe also create a deep learning version of our algorithm and compare its performance to representative online and offline (robust) reinforcement learning benchmarks on classical control tasks." | |
| }, | |
| { | |
| "section_id": "1.2", | |
| "parent_section_id": "1", | |
| "section_name": "Related Work", | |
| "text": "Robust MDPs and RL:\nThe framework of robust MDPs has been studied in several works such as Nilim & El Ghaoui (2005 ###reference_b23###); Iyengar (2005 ###reference_b15###); Wiesemann et al. (2013 ###reference_b34###); Lim et al. (2013 ###reference_b17###); Ho et al. (2021 ###reference_b14###); Goyal & Grand-Clement (2022 ###reference_b13###).\nThese works discuss the computational issues using dynamic programming with different choices of MDP formulation, as well as the choice of ambiguity set, when the transition model is known.\nRobust Reinforcement Learning (RL) (Roy et al., 2017 ###reference_b26###; Badrinath & Kalathil, 2021 ###reference_b3###; Wang & Zou, 2021 ###reference_b33###) relaxes the requirement of accessing to the transition model by simultaneously approximating to the ambiguity set as well as the optimal robust policy, using only the samples from the misspecified MDP.\nOnline Robust RL:\nExisting online robust RL algorithms including Wang & Zou (2021 ###reference_b33###); Badrinath & Kalathil (2021 ###reference_b3###); Roy et al. (2017 ###reference_b26###), highly relies on the choice of the -contamination model and could suffer over-conservatism.\nThis ambiguity set maintains linearity in their corresponding Bellman operator and thus inherits most of the desirable benefits from its non-robust counterpart.\nInstead, common distributionally robust ambiguity sets, such as KL or divergence ball, suffer from extra nonlinearity when trying to learn along a single-trajectory data, which serves as the foundamental challenge in this paper.\nDistributionally Robust RL:\nTo tackle the over-conservatism aroused by probability-agnostic -contamination ambiguity set in the aforementioned robust RL, DRRL is proposed by constructing the ambiguity set with probability-aware distance (Zhou et al., 2021 ###reference_b38###; Yang et al., 2022 ###reference_b37###; Shi & Chi, ###reference_b28###; Panaganti & Kalathil, 2022 ###reference_b24###; Panaganti et al., 2022 ###reference_b25###; Ma et al., 2022 ###reference_b19###), including KL and divergence.\nAs far as we know, most of the existing DRRL algorithms fall into the model-based fashion, which first estimate the whole transition model and then construct the ambiguity set around the model.\nThe DR value and the corresponding policy are then computed based upon them.\nTheir main focus is to understand the sample complexity of the DRRL problem in the offline RL regime, leaving the more prevalent single-trajectory setting largely unexplored." | |
| }, | |
| { | |
| "section_id": "2", | |
| "parent_section_id": null, | |
| "section_name": "Preliminary", | |
| "text": "" | |
| }, | |
| { | |
| "section_id": "2.1", | |
| "parent_section_id": "2", | |
| "section_name": "Discounted MDPs", | |
| "text": "Consider an infinite-horizon MDP where and are finite state and action spaces with cardinality and .\n is the state transition probability measure.\nHere is the set of probability measures over .\n is the reward function and is the discount factor.\nWe assume that is deterministic and bounded in .\nA stationary policy maps, for each state to a probability distribution over the action set and induce a random trajectory , with , and for .\nTo derive the policy corresponding to the value function, we define the optimal state-action function as the expected cumulative discounted rewards under the optimal policy,\n\nThe optimal state-action function is also the fixed point of the Bellman optimality equation," | |
| }, | |
| { | |
| "section_id": "2.2", | |
| "parent_section_id": "2", | |
| "section_name": "-learning", | |
| "text": "Our model-free algorithmic design relies on the -learning template, originally designed to solve the non-robust Bellman optimality equation (Equation 1 ###reference_###). -learning is a model-free reinforcement learning algorithm that uses a single sample trajectory to update the estimator for the function incrementally. Suppose at time , we draw a sample from the environment.\nThen, the algorithm updates the estimated -function following:\nHere, is a learning rate.\nThe algorithm updates the estimated function by constructing a unbiased estimator for the true value, i.e., using one sample." | |
| }, | |
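As a concrete illustration of the single-sample update above, here is a minimal tabular Q-learning sketch. It is our own illustrative code, not the paper's released implementation; the gym-like environment interface, the visit-count stepsizes, and the epsilon-greedy setting are assumptions.

```python
import numpy as np

def q_learning(env, n_states, n_actions, gamma=0.99, steps=100_000, eps=0.1):
    """Tabular Q-learning along a single trajectory (illustrative sketch)."""
    Q = np.zeros((n_states, n_actions))
    visits = np.zeros((n_states, n_actions))      # per-(s, a) visit counts for the stepsizes
    s = env.reset()                               # assumed API: returns an integer state index
    for _ in range(steps):
        a = np.random.randint(n_actions) if np.random.rand() < eps else int(np.argmax(Q[s]))
        s_next, r, done = env.step(a)             # one sample (s, a, r, s') per environment step
        visits[s, a] += 1
        alpha = 1.0 / visits[s, a]                # Robbins-Monro stepsize
        target = r + gamma * np.max(Q[s_next])    # one-sample unbiased estimate of the Bellman target
        Q[s, a] += alpha * (target - Q[s, a])
        s = env.reset() if done else s_next
    return Q
```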
| { | |
| "section_id": "2.3", | |
| "parent_section_id": "2", | |
| "section_name": "Distributionally Robust MDPs", | |
| "text": "DRRL learns an optimal policy that is robust to unknown environmental changes, where the transition model and reward function may differ in the test environment.\nTo focus on the perturbation of the transition model, we assume no pertubation to the reward function.\nOur approach adopts the notion of distributional robustness, where the true transition model is unknown but lies within an ambiguity set that contains all transition models that are close to the training environment under some probability distance .\nTo ensure computational feasibility, we construct the ambiguity set in the -rectangular manner, where for each , we define the ambiguity set as,\nWe then build the ambiguity set for the whole transition model as the Cartesian product of every -ambiguity set, i.e., .\nGiven , we define the optimal DR state-action function as the value function of the best policy to maximize the worst-case return over the ambiguity set,\nUnder the -rectangular assumption, the Bellman optimality equation has been established by Iyengar (2005 ###reference_b15###); Xu & Mannor (2010 ###reference_b35###),\nFor notation simplicity, we would ignore the superscript ." | |
| }, | |
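For intuition, the following sketch evaluates one (s, a)-rectangular DR Bellman backup (Equation 3) when the ambiguity set is replaced by an explicit finite collection of candidate next-state distributions. It is a model-based illustration only; candidate_models is a hypothetical input, and the model-free estimator that avoids enumerating models is developed in Section 3.

```python
import numpy as np

def robust_bellman_backup(r_sa, gamma, candidate_models, Q):
    """One DR Bellman backup for a fixed (s, a) pair.

    r_sa: scalar reward r(s, a).
    candidate_models: array of shape (m, n_states); each row is a candidate P(.|s, a).
    Q: array of shape (n_states, n_actions)."""
    v_next = Q.max(axis=1)                                  # max_a' Q(s', a') for every s'
    worst_case = min(float(p @ v_next) for p in candidate_models)
    return r_sa + gamma * worst_case
```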
| { | |
| "section_id": "3", | |
| "parent_section_id": null, | |
| "section_name": "Distributonally Robust -learning with Single Trajectory", | |
| "text": "This section presents a general model-free framework for DRRL.\nWe begin by instantiating the distance as Cressie-Read family of -divergence (Cressie & Read, 1984 ###reference_b9###), which is designed to recover previous common choices such as the and KL divergence.\nWe then discuss the challenges and previous solutions in solving the corresponding DRRL problem, as described in Section 3.2 ###reference_###. Finally, we present the design idea of our three-timescale framework and establish the corresponding convergence guarantee." | |
| }, | |
| { | |
| "section_id": "3.1", | |
| "parent_section_id": "3", | |
| "section_name": "Divergence Families", | |
| "text": "Previous work on DRRL has mainly focused on one or several divergences, such as KL, , and total variation (TV) divergences.\nIn contrast, we provide a unified framework that applies to a family of divergences known as the Cressie-Read family of -divergences.\nThis family is parameterized by , and for any chosen , the Cressie-Read family of -divergences is defined as\nwith .\nBased on this family, we instantiate our ambiguity set in Equation 2 ###reference_### as for some radius .\nThe Cressie-Read family of -divergence includes -divergence () and KL divergence ().\nOne key challenge in developing DRRL algorithms using the formulation in Equation 3 ###reference_### is that the expectation is taken over the ambiguity set , which is computationally intensive even with the access to the center model .\nSince we only have access to samples generated from the possibly misspecific model , estimating the expectation with respect to other models is even more challenging. While importance sampling-based techniques can achieve this, the cost of high variance is still undesirable.\nTo solve this issue, we rely on the dual reformulation of Equation 3 ###reference_###:\nFor any random variable , define with and .\nThen\nHere . Equation 4 ###reference_### shows that protecting against the distribution shift is equivalent to optimizing the tail-performance of a model, as only the value below the dual variable are taken into account.\nAnother key insight from the reformulation is that as the growth of for large becomes steeper for larger , the -divergence ball shrinks and the risk measure becomes less conservative.\nThis bridges the gap between difference divergences, whereas previous literature, including Yang et al. (2022 ###reference_b37###) and Zhou et al. (2021 ###reference_b38###), treats different divergences as separate.\nBy applying the dual reformulation, we can rewrite the Cressie-Read Bellman operator in Equation 3 ###reference_### as" | |
| }, | |
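To make the dual form concrete, the sketch below evaluates the dual objective from Equation 4 on a batch of realized values and maximizes it over eta by a simple grid search; this batch quantity is exactly the SAA estimator discussed in Section 3.2 and is therefore biased for the population robust value. The helper names, the grid, and the default k and rho are our own illustrative choices, under our reading of the Cressie-Read dual with k* = k/(k-1) and c_k(rho) = (1 + k(k-1)rho)^{1/k}.

```python
import numpy as np

def cressie_read_dual(values, eta, k, rho):
    """sigma_k(X, eta) = -c_k(rho) * E[(eta - X)_+^{k*}]^{1/k*} + eta, estimated on a batch."""
    k_star = k / (k - 1.0)
    c_k = (1.0 + k * (k - 1.0) * rho) ** (1.0 / k)
    tail = np.maximum(eta - values, 0.0) ** k_star
    return -c_k * np.mean(tail) ** (1.0 / k_star) + eta

def robust_value_estimate(values, k=2.0, rho=0.5, grid=None):
    """Grid-search approximation of sup_eta sigma_k(X, eta) over a bounded grid."""
    if grid is None:
        grid = np.linspace(values.min(), values.max() + 1.0, 200)
    return max(cressie_read_dual(values, eta, k, rho) for eta in grid)

# usage: 'values' would be max_a' Q(s', a') for next states drawn from the nominal model
vals = np.random.uniform(0.0, 1.0, size=1000)
print(robust_value_estimate(vals, k=2.0, rho=0.3))
```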
| { | |
| "section_id": "3.2", | |
| "parent_section_id": "3", | |
| "section_name": "Bias in Plug-in Estimator in Single Trajectory Setting", | |
| "text": "In this subsection, we aim to solve Equation 5 ###reference_### using single-trajectory data, which has not been addressed by previous DRRL literature.\nAs we can only observe one newly arrival sample each time, to design a online model-free DRRL algorithm, we need to approximate the expectation in Equation 5 ###reference_### using that single sample properly.\nAs mentioned in Section 2.2 ###reference_###, the design of the -learning algorithm relies on an one-sample unbiased estimator of the true Bellman operator.\nHowever, this convenience vanishes in the DR Bellman operator.\nTo illustrate this, consider plugging only one sample into the Cressie-Read Bellman operator Equation 5 ###reference_###:\nThis reduces to the non-robust Bellman operator and is obviously not an unbiased estimator for . This example reveals the inherently more challenging nature of the online DRRL problem. Whereas non-robust RL only needs to improve the expectation of the cumulative return, improving the worst-case return requires more information about the system dynamics, which seems hopeless to be obtained from only one sample and sharply contrasts with our target.\nEven with the help of batch samples, deriving an appropriate estimator for the DR Bellman operator is still nontrivial.\nConsider a standard approach to construct estimators, sample average approximation (SAA):\ngiven a batch of sample size starting from a fix state-action pair , i.e., , the SAA empirical Bellman operator is defined as:\nHere, is the empirical Cressie-Read functional defined as\nAs pointed out by Liu et al. (2022 ###reference_b18###), the SAA estimator is biased, prompting the introduction of the multilevel Monte-Carlo method (Blanchet & Glynn, 2015 ###reference_b4###). Specifically, it first obtains samples from the distribution , and then uses the simulator to draw samples . The samples are further decomposed into two parts: consists of the first samples, while contains the remaining samples. Finally, the DR term in Equation 5 ###reference_### is approximated by solving three optimization problems:\nHowever, this multilevel Monte-Carlo solution requires a large batch of samples for the same state-action pair before the next update, resulting in unbounded memory costs/computational time that are not practical.\nFurthermore, it is prohibited in the single-trajectory setting, where each step only one sample can be observed.\nOur experimental results show that simply approximating the Bellman operator with simulation data, without exploiting its structure, suffers from low data efficiency." | |
| }, | |
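The following sketch illustrates the multilevel Monte Carlo construction just described, under our reading of Blanchet & Glynn (2015): a geometric level N, a batch of 2^{N+1} simulator draws split into two halves, and a combination of three SAA solves whose correction is rescaled by the level probability. The geometric parameter and the reuse of robust_value_estimate from the earlier sketch are illustrative assumptions, not the exact construction of Liu et al. (2022).

```python
import numpy as np

# assumes robust_value_estimate(values, k, rho) from the previous sketch is in scope

def mlmc_dr_estimate(sample_next_values, k=2.0, rho=0.5, p_geo=0.5):
    """Multilevel Monte Carlo estimate of sup_eta sigma_k(X, eta) (illustrative sketch).

    sample_next_values(n): returns a numpy array of n draws of max_a' Q(s', a') from a simulator."""
    N = np.random.geometric(p_geo) - 1                 # level N = 0, 1, 2, ... with P(N=n) = p(1-p)^n
    batch = sample_next_values(2 ** (N + 1))
    first_half, second_half = batch[: 2 ** N], batch[2 ** N :]
    g_all = robust_value_estimate(batch, k, rho)       # SAA solve on the full batch
    g_halves = 0.5 * (robust_value_estimate(first_half, k, rho)
                      + robust_value_estimate(second_half, k, rho))
    correction = (g_all - g_halves) / (p_geo * (1.0 - p_geo) ** N)
    return robust_value_estimate(batch[:1], k, rho) + correction
```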
| { | |
| "section_id": "3.3", | |
| "parent_section_id": "3", | |
| "section_name": "Three-timescale Framework", | |
| "text": "The -learning is solving the nonrobust Bellman operator\u2019s fixed point in a stochastic approximation manner.\nA salient feature in the DR Bellman operator, compared with its nonrobust counterpart, is a bi-level optimization nature, i.e., jointly solving the dual parameter and the fixed point of the Bellman optimality equation.\nWe revisit the stochastic approximation view of the -learning and develop a three-timescale framework, by a faster running estimate of the optimal dual parameter, and a slower update of the table.\nTo solve Equation 5 ###reference_### using a stochastic approximation template, we iteratively update the variables and table as follows: for the -th iteration after observing a new transition sample and some learning rates ,\nAs the update of and relies on each other, we keep the learning speeds of and , i.e., and , different to stabilize the training process.\nAdditionally, due to the -rectangular assumption, is independent across different -pairs, while the table depends on each other.\nThe independent structure for allows it to be estimated more easily; so we approximate it in a faster loop, while for we update it in a slower loop." | |
| }, | |
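Schematically, and under our notation (eta_n for the dual variable at the visited pair, zeta_n and alpha_n for the stepsizes), the coupled updates described above can be written as follows. This is a sketch of the framework only; the exact one-sample gradient and target estimates are instantiated in Section 3.4.

```latex
\begin{align*}
\eta_{n+1}(s_n, a_n) &= \eta_n(s_n, a_n) + \zeta_n\, \widehat{\nabla}_\eta \sigma_k\big(\eta_n(s_n, a_n), Q_n; s_{n+1}\big), \\
Q_{n+1}(s_n, a_n)    &= (1 - \alpha_n)\, Q_n(s_n, a_n)
                        + \alpha_n \Big( r_n + \gamma\, \widehat{\sigma}_k\big(\eta_n(s_n, a_n), Q_n; s_{n+1}\big) \Big),
\end{align*}
```

where the hatted terms denote estimates of the dual gradient and the dual objective built from the newly observed transition (a gradient ascent step on the concave dual, equivalently descent on its negative), and alpha_n / zeta_n -> 0 so that eta evolves on the faster timescale.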
| { | |
| "section_id": "3.4", | |
| "parent_section_id": "3", | |
| "section_name": "Algorithmic Design", | |
| "text": "In this subsection, we further instantiate the three-timescale framework to the Cressie-Read family of -divergences.\nFirst, we compute the gradient of in Equation 5 ###reference_### with respect to .\nwhere\nDue to the nonlinearity in Equation 6 ###reference_###, the plug-in gradient estimator is in fact biased.\nThe bias arises as for a random variable , for in .\nTo address this issue, we introduce another even faster timescale to estimate and ,\nIn the medium timescale, we approximate by incrementally update the dual variable using the stochastic gradient descent method, where the true gradient computed in Equation 6 ###reference_### is approximated by:\nFinally, we update the DR function in the slowest timescale using Equation 12 ###reference_###,\nwhere is the empirical version of Equation 5 ###reference_### in the -th iteration:\nHere and are learning rates for three timescales at time , which will be specified later.\nWe summarize the ingredients into our DR -learning (DRQ) algorithm (Algorithm 1 ###reference_###), and prove the almost surely (a.s.) convergence of the algorithm as Theorem 3.3 ###reference_theorem3###.\nThe proof is deferred in Appendix C ###reference_###.\nThe estimators at the n-th step in Algorithm 1 ###reference_###, , converge to a.s. as , where and are the fixed-point of the equation , and and are the corresponding quantity under and .\nThe proof establishes that, by appropriately selecting stepsizes to prioritize frequent updates of and , followed by , and with updated at the slowest rate, the solution path of closely tracks a system of three-dimensional ordinary differential equations (ODEs) considering martingale noise.\nOur approach is to generalize the classic machinery of two-timescale stochastic approximation (Borkar, 2009 ###reference_b5###) to a three-timescale framework, and use it to analyze our proposed algorithm.\nSee Appendix B ###reference_### for the detailed proof." | |
| }, | |
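Putting the three timescales together, one DRQ update along the trajectory might look like the sketch below. The polynomial stepsize schedules, the clipping range for eta, and the small constant guarding the negative power of Z2 are illustrative assumptions rather than the exact choices of Algorithm 1.

```python
import numpy as np

def drq_step(Q, eta, Z1, Z2, s, a, r, s_next, n, gamma=0.99, k=2.0, rho=0.5):
    """One three-timescale DRQ update after observing (s, a, r, s'); n is a 1-based step counter."""
    k_star = k / (k - 1.0)
    c_k = (1.0 + k * (k - 1.0) * rho) ** (1.0 / k)
    mu, zeta, alpha = n ** -0.6, n ** -0.8, 1.0 / n        # fastest -> slowest (assumed schedules)

    x = max(eta[s, a] - np.max(Q[s_next]), 0.0)            # (eta - max_a' Q(s', a'))_+
    # fastest loop: running estimates of Z1 = E[x^{k*-1}] and Z2 = E[x^{k*}]
    Z1[s, a] += mu * (x ** (k_star - 1.0) - Z1[s, a])
    Z2[s, a] += mu * (x ** k_star - Z2[s, a])
    # medium loop: gradient ascent step on the dual objective in eta, then clip to a bounded range
    grad = 1.0 - c_k * Z1[s, a] * max(Z2[s, a], 1e-8) ** (1.0 / k_star - 1.0)
    eta[s, a] = np.clip(eta[s, a] + zeta * grad, 0.0, 1.0 / (1.0 - gamma))
    # slowest loop: DR Bellman update of the Q-table using the current dual estimates
    target = r + gamma * (-c_k * max(Z2[s, a], 0.0) ** (1.0 / k_star) + eta[s, a])
    Q[s, a] += alpha * (target - Q[s, a])
```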
| { | |
| "section_id": "4", | |
| "parent_section_id": null, | |
| "section_name": "Experiments", | |
| "text": "We demonstrate the robustness and sample complexity of our DRQ algorithm in the Cliffwalking environment (Del\u00e9tang et al., 2021 ###reference_b10###) and American put option environment (deferred in Appendix A ###reference_###).\nThese environments provide a focused perspective on the policy and enable a clear understanding of the key parameters effects.\nWe develop a deep learning version of DRQ and compare it with practical online and offline (robust) RL algorithms in classical control tasks, LunarLander and CartPole." | |
| }, | |
| { | |
| "section_id": "4.1", | |
| "parent_section_id": "4", | |
| "section_name": "Convergence and Sample Complexity", | |
| "text": "Before we begin, let us outline the key findings and messages conveyed in this subsection:\n(1) Our ambiguity set design provides substantial robustness, as demonstrated through comparisons with non-robust -learning and -contamination ambiguity sets (Wang & Zou, 2021 ###reference_b33###).\n(2) Our DRQ algorithm exhibits desirable sample complexity, significantly outperforming the multi-level Monte Carlo based DRQ algorithm proposed by Liu et al. (2022 ###reference_b18###) and comparable to the sample complexity of the model-based DRRL algorithm by Panaganti & Kalathil (2022 ###reference_b24###).\n###figure_1### ###figure_2### ###figure_3### ###figure_4### Experiment Setup:\nThe Cliffwalking task is commonly used in risk-sensitive RL research (Del\u00e9tang et al., 2021 ###reference_b10###). Compared to the Frozen Lake environment used by Panaganti & Kalathil (2022 ###reference_b24###), Cliffwalking offers a more intuitive visualization of robust policies (see Figure 1 ###reference_###). The task involves a robot navigating from an initial state of to a goal state of . At each step, the robot is affected by wind, which causes it to move in a random direction with probability . Reaching the goal state earns a reward of , while encountering a wave in the water region results in a penalty of . We train the agent in the nominal environment with for 3 million steps per run, using an -greedy exploration strategy with . We evaluate its performance in perturbed environments, varying the choices of and to demonstrate different levels of robustness.\nWe set the stepsize parameters according to Assumption B.1 ###reference_theorem1###: , , and , where the discount factor is .\n###figure_5### ###figure_6### ###figure_7### ###figure_8### Robustness:\nTo evaluate the robustness of the learned policies, we compare their cumulative returns in perturbed environments with over 100 episodes per setting.\nWe visulize the decision at each status in Figure 1 ###reference_### with different robustness level .\nIn particular, the more robust policy tends to avoid falling into the water, thus arrives to the goal state with a longer path by keeping going up before going right.\nFigure 2a ###reference_sf1### shows the return distribution for each policy. Figure 2b ###reference_sf2### displays the time taken for the policies to reach the goal, and the more robust policy tends to spend more time, which quantitatively supports our observations in Figure 1 ###reference_###. Interestingly, we find that the robust policies outperform the nonrobust one even in the nominal environment.\nFor the different \u2019s, is the best within a relatively wide range (), while is preferred in the environment of extreme pertubation ().\nThis suggests that DRRL provides a elegant trade-off for different robustness preferences.\nWe also compare our model-free DRRL algorithm with the robust RL algorithm presented in Wang & Zou (2021 ###reference_b33###), which also supports training using a single trajectory.\nThe algorithm in Wang & Zou (2021 ###reference_b33###) uses an -contamination ambiguity set.\nWe select the best value of from to and other detailed descriptions in Appendix A ###reference_###. 
In most cases, the -contamination based algorithm performs very similarly to the non-robust benchmark, and even performs worse in some cases (i.e., and ), due to its excessive conservatism.\nAs we mentioned in Section 3.1 ###reference_###, larger would render the the risk measure less conservative and thus less sensitive to the change in the ball radius , which is empirically confirmed by Figure 2c ###reference_sf3###.\n###figure_9### Sample Complexity:\nThe training curves in Figure 3 ###reference_### depict the estimated value (solid line) and the optimal robust value (dashed line) for the initial state .\nThe results indicate that the estimated value converges quickly to the optimal value, regardless of the values of and . Importantly, our DRQ algorithm achieves a similar convergence rate to the non-robust baseline (represented by the black line).\nWe further compare our algorithm with two robust baselines: the DRQ algorithm with a weak simulator proposed by Liu et al. (2022 ###reference_b18###) (referred to as Liu\u2019s), and the model-based algorithm introduced by Panaganti & Kalathil (2022 ###reference_b24###) (referred to as Model) in Figure 4 ###reference_###.\nTo ensure a fair comparison, we set the same learning rate, , for our DRQ algorithm and the -table update loop of the Liu\u2019s algorithm, as per their recommended choices.\nOur algorithm converges to the true DR value at a similar rate as the model-based algorithm, while the Liu\u2019s algorithm exhibits substantial deviation from the true value and converges relatively slowly. Our algorithm\u2019s superior sample efficiency is attributed to the utilization of first-order information to approximate optimal dual variables, whereas Liu\u2019s relies on a large amount of simulation data for an unbiased estimator.\n###figure_10###" | |
| }, | |
| { | |
| "section_id": "4.2", | |
| "parent_section_id": "4", | |
| "section_name": "Practical Implementation", | |
| "text": "We validate the practicality of our DRQ framework by implementing a practical version, called the Deep Distributionally Robust -learning (DDRQ) algorithm, based on the DQN algorithm (Mnih et al., 2015 ###reference_b21###). We apply this algorithm to two classical control tasks from the OpenAI Gym (Brockman et al., 2016 ###reference_b7###): CartPole and LunarLander.\nOur practical algorithm, denoted as Algorithm 2 ###reference_###, is a variant of Algorithm 1 ###reference_###.\nSpecifically, we adopt the Deep Q-Network (DQN) architecture (Mnih et al., 2015 ###reference_b21###) and employ two sets of neural networks as functional approximators. One set, and , serves as approximators for the function, while the other set, and , approximates the distributionally robust dual variable . To enhance training stability, we introduce a target network, , for the fast network and for the fast dual variable network .\nDue to the approximation error introduced by neural networks and to further improve sample efficiency, our practical DDRQ algorithm adopts a two-timescale update approach.\nIn this approach, our network aims to minimize the Bellman error, while the dual variable network strives to maximize the DR value defined in Equation 5 ###reference_###.\nIt\u2019s important to note that the two-timescale update approach could introduce bias in the convergence of the dual variable, and thus the dual variable may not the optimal dual variable for the primal problem.\nGiven the primal-dual structure of this DR problem, this could render an even lower target value for the network to learn.\nThis approach can be understood as a robust update strategy for our original DRRL problem, share some spirits to the optimization techniques used in other algorithms like Variational Autoencoders (VAE)(Kingma & Welling, 2013 ###reference_b16###), Proximal Policy Optimization (PPO)(Schulman et al., 2017 ###reference_b27###), and Maximum a Posteriori Policy Optimization (MPO) (Abdolmaleki et al., 2018 ###reference_b1###). Additional experimental details can be found in Appendix A.3 ###reference_###.\nTo assess the effectiveness of our DDRQ algorithm, we compare it against the RFQI algorithm (Panaganti et al., 2022 ###reference_b25###), the soft-robust RL algorithm (Derman et al., 2018 ###reference_b11###), and the non-robust DQN and FQI algorithms. This comparison encompasses representative practical (robust) reinforcement learning algorithms for both online and offline datasets.\nTo evaluate the robustness of the learned policies, we introduce action and physical environment perturbations. For action perturbation, we simulate the perturbations by varying the probability of randomly selecting an action for both CartPole and LunarLander tasks. We test with for CartPole and for LunarLander.\nRegarding physical environment perturbation in LunarLander, we decrease the power of all the main engine and side engines by the same proportions, ranging from 0 to . 
For CartPole, we reduce the \u201dforce mag\u201d parameter from to .\nWe set the same ambiguity set radius for both our DDRQ and RFQI algorithm for fair comparisons.\nFigure 5 ###reference_### illustrates how our DDRQ algorithm successfully learns robust policies across all tested tasks, achieving comparable performance to other robust counterparts such as RFQI and SR-DQN.\nConversely, the non-robust DQN and FQI algorithms fail to learn robust policies and deteriorate significantly even under slight perturbations.\nIt is worth noting that RFQI does not perform well in the LunarLander environment, despite using the official code provided by the authors. This outcome could be attributed to the restriction to their TV distance in constructing the ambiguity set, while our Creass-Read ambiguity set can be flexibily chosen to well adopted to the environment nature.\nAdditionally, the soft-robust RL algorithm requires generating data based on multiple models within the ambiguity set. This process can be excessively time-consuming, particularly in large-scale applications." | |
| }, | |
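One way to set up the two network sets and their target copies described above is sketched below in PyTorch. The layer width of 120 follows the appendix description, while the learning rates and the hard target-update scheme are illustrative assumptions, not the paper's released implementation.

```python
import copy
import torch
import torch.nn as nn

class MLP(nn.Module):
    """Two hidden layers of width 120, used for both the Q and the eta networks."""
    def __init__(self, obs_dim, n_actions, hidden=120):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_actions))

    def forward(self, x):
        return self.net(x)

def make_ddrq_nets(obs_dim, n_actions, lr_q=1e-3, lr_eta=1e-2):
    """Build the fast Q/eta networks, their target copies, and separate optimizers."""
    q_net, eta_net = MLP(obs_dim, n_actions), MLP(obs_dim, n_actions)
    q_target, eta_target = copy.deepcopy(q_net), copy.deepcopy(eta_net)
    opt_q = torch.optim.Adam(q_net.parameters(), lr=lr_q)
    opt_eta = torch.optim.Adam(eta_net.parameters(), lr=lr_eta)
    return q_net, eta_net, q_target, eta_target, opt_q, opt_eta

def sync_targets(q_net, eta_net, q_target, eta_target):
    """Hard target-network update, performed every fixed number of training steps."""
    q_target.load_state_dict(q_net.state_dict())
    eta_target.load_state_dict(eta_net.state_dict())
```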
| { | |
| "section_id": "5", | |
| "parent_section_id": null, | |
| "section_name": "Conclusion", | |
| "text": "In this paper, we introduce our DRQ algorithm, a fully model-free DRRL algorithm trained on a single trajectory.\nBy leveraging the stochastic approximation framework, we effectively tackle the joint optimization problem involving the state-action function and the DR dual variable.\nThrough an extension of the classic two-timescale stochastic approximation framework, we establish the asymptotic convergence of our algorithm to the optimal DR policy. Our extensive experimentation showcases the convergence, sample efficiency, and robustness improvements achieved by our approach, surpassing non-robust methods and other robust RL algorithms.\nOur DDRQ algorithm further validates the practicality of our algorithmic framework." | |
| } | |
| ], | |
| "appendix": [ | |
| { | |
| "section_id": "Appendix x1", | |
| "parent_section_id": null, | |
| "section_name": "Appendix", | |
| "text": "In the subsequent sections, we delve into the experimental specifics and provide the technical proofs that were not included in the primary content.\nIn Section A ###reference_###, we commence by showcasing an additional experiment on the American call option. This aligns with the convergence and sample complexity discussions from the main content. We then elucidate the intricacies of Liu\u2019s algorithm to facilitate a transparent comparison with our methodology. Lastly, we discuss the algorithmic intricacies of our DDRQ algorithm and provide details on the experiments that were previously omitted.\nIn Section B ###reference_###, to prove Theorem 3.3 ###reference_theorem3###, we begin by extending the two-timescale stochastic approximation framework to a three-timescale one. Following this, we adapt it to our algorithm, ensuring all requisite conditions are met." | |
| }, | |
| { | |
| "section_id": "Appendix 1", | |
| "parent_section_id": null, | |
| "section_name": "Appendix A Additional Experiments Details", | |
| "text": "In this section, we present additional experimental results from a simulated American put option problem (Cox et al., 1979 ###reference_b8###) that has been previously studied in robust RL literature (Zhou et al., 2021 ###reference_b38###; Tamar et al., 2014 ###reference_b30###).\nThe problem involves holding a put option in multiple stages, whose payoff depends on the price of a financial asset that follows a Bernoulli distribution.\nSpecifically, the next price at stage follows,\nwhere the and are the price up and down factors and is the probability that the price goes up. The initial price is uniformly sampled from , where is the strike price and in our simulation. The agent can take an action to exercise the option () or not exercise () at the time step . If exercising the option, the agent receives a reward and the state transits into an exit state.\nOtherwise, the price will fluctuate based on the above model and no reward will be assigned.\nMoreover we introduce a discount structure in this problem, i.e., the reward in the stage worths in stage as our algorithm is designed for discounted RL setting.\nIn our experiments, we set , , and . We limit the price in and discretize with the precision of 1 decimal place. Thus the state space size .\n###figure_11### We first demonstrate the robustness gain of our DR -learning algorithm by comparing with the non-robust -learning algorithm, and investigate the effect of different robustness levels by varying .\nEach agent is trained for steps with an -greedy exploration policy of and evaluated in perturbed environments.\nWe use the same learning rates for the three timescales in our DR -learning algorithm as in the Cliffwalking environment: , , and .\nFor the non-robust -learning we set the same learning rate as in our -update, i.e., .\nWe perturb the transition probability to the price up and down status , and evaluate each agent for episodes.\nFigure 6 ###reference_### reports the average return and one standard deviation level.\nThe non-robust -learning performs best when the price tends to decrease and the market gets more benefitial (), which benefits the return of holding an American put option.\nHowever, when the prices tend to increase and the market is riskier (), our DR -learning algorithm significantly outperforms the non-robust counterpart, demonstrating the robustness gain of our algorithm against worst-case scenarios.\n###figure_12### We present the learning curve of our DR -learning algorithm with different in Figure 7 ###reference_###.\nOur algorithm can accurately learn the DR value under different \u2019s and \u2019s within million steps.\nWe compare the sample efficiency of our algorithm with the DR -learning algorithm in Liu et al. 
(2022 ###reference_b18###) (referred to as Liu\u2019s) and the model-based algorithm in Panaganti & Kalathil (2022 ###reference_b24###) (referred to as Model).\nWe set a smaller learning rate for Liu\u2019s as .\nThe reason is setting the same learning rate for their algorithm would render a much slower convergence performance, which is not fair for comparisons.\nWe use the recommended choice for the sampling procedure in Liu algorithm.\nBoth DR -learning and Liu are trained for steps per run, while the model-based algorithm is trained for steps per run to ensure sufficient samples for convergence.\nAs shown in Figure 8 ###reference_###,\nthe model-based approach is the most sample-efficient, converging accurately to the optimal robust value with less than samples.\nOur DR -learning algorithm is slightly less efficient, using samples to converge.\nLiu algorithm is significantly less efficient, using samples to converge.\nNote that the model-based approach we compared here is to first obtain samples for each state-action pairs, and then conduct the learning procedure to learn the optimal robust value.\nIn particular, we need to specify the number of samples for each state-action pair .\nThen the total number of samples used is the sum of all these number, i.e., , whose computation manner is different from that in the model-free algorithms we used where each update requires one or a batch of new samples.\nTo ensure self-containment, we provide the pseudocode for our implemented Liu algorithm (Algorithm 3 ###reference_###) and the model-based algorithm (Algorithm 2 ###reference_###) below. These algorithms were not originally designed to solve the ambiguity set constructed by the Cressie-Read family of -divergences.\nIn this subsection, we provide the pseudo-code for the Liu algorithm, represented in Algorithm 2 ###reference_###. Our intention is to emphasize the differences in algorithmic design between their approach and ours.\nTheir algorithm, in particular, relies extensively on multi-level Monte Carlo, requiring the sampling of a batch of samples for each state-action pair. Once they estimate the Doubly Robust (DR) value for a specific state-action pair, the samples are promptly discarded and subsequently resampled from a simulator. To summarize, their algorithm exhibits significant distinctions from ours in terms of algorithmic design.\n###figure_13### In this section, we provide a comprehensive description of our Deep Distributionally Robust -learning (DDRQ) algorithm, as illustrated in Algorithm 2 ###reference_###, along with its experimental setup in the context of CaroPole and LunarLander.\nMost of the hyperparameters are set the same for both LunarLander and CartPole.\nWe choose Cressie-Read family parameter , which is indeed the ambiguity set and we set ambiguity set radius as .\nFor RFQI we also use the same for fair comparison. Our replay buffer size is set and the batch size for training is set .\nOur fast and network are update every 10 steps () and the target networks are updated every 500 steps (). The learning rate for network is and for network is .\nThe network and the network both employ a dual-layer structure, with each layer consisting of 120 dimensions.\nFor exploration scheme, we choose epsilon-greedy exploration with linearly decay epsilon with ending .\nThe remain parameters tuned for each environments are referred in Table 1 ###reference_###." | |
| }, | |
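For concreteness, a minimal version of the put option environment described above might look like the following; the up/down factors, up probability, strike, price cap, initial-price interval, and tick size are placeholders, since the exact values used in the experiments are not reproduced here.

```python
import numpy as np

class PutOptionEnv:
    """Binomial-price American put option environment (illustrative sketch).

    State: current discretized asset price. Actions: 0 = hold, 1 = exercise."""
    def __init__(self, K=50.0, c_u=1.02, c_d=0.98, p_up=0.5, price_cap=100.0, tick=0.1):
        self.K, self.c_u, self.c_d, self.p_up = K, c_u, c_d, p_up
        self.price_cap, self.tick = price_cap, tick
        self.price = None

    def reset(self):
        # initial price drawn uniformly around the strike (placeholder interval)
        self.price = self._round(np.random.uniform(0.8 * self.K, 1.2 * self.K))
        return self.price

    def step(self, action):
        if action == 1:                                    # exercise: collect the put payoff, then terminate
            return self.price, max(self.K - self.price, 0.0), True
        factor = self.c_u if np.random.rand() < self.p_up else self.c_d
        self.price = self._round(min(self.price * factor, self.price_cap))
        return self.price, 0.0, False                      # holding yields no immediate reward

    def _round(self, x):
        return round(x / self.tick) * self.tick            # discretize the price to one tick
```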
| { | |
| "section_id": "Appendix 2", | |
| "parent_section_id": null, | |
| "section_name": "Appendix B Multiple Timescale Convergence", | |
| "text": "We fix some notations that will be used in the following proof.\nFor a positive integer , denotes the set . denotes the cardinality of the set .\nWe adopt the standard asymptotic notations: for two non-negative sequences and , iff .\n is the simplex on a dimensional space, i.e., .\nFor any vector and any semi-positive matrix with , we denote .\n is Euclidean norm.\nIn this subsection, we outline the roadmap for establishing the a.s. convergence of the Algorithm 1 ###reference_###.\nFor ease of presentation, our analysis is given for the synchronous case, where every entry of the function is updated at each timestep. Extension to the asynchronous case, where only one state-action pair entry is updated at each timestep, follows Tsitsiklis (1994 ###reference_b31###).\nOur approach is to generalize the classic machinery of two-timescale stochastic approximation (Borkar, 2009 ###reference_b5###) to a three-timescale framework, and use it to analyze our proposed algorithm.\nWe rewrite the Algorithm 1 ###reference_### as\nHere, we use to represent the and jointly.\nTo echo with our algorithm, and are defined as,\nIn the update of (Equation 15 ###reference_###), and are defined as\nFinally in the update of (Equation 16 ###reference_###), and are defined as\nThe algorithm 1 ###reference_### approximates the dynamic described by the system of , and through samples along a single trajectory, with the resulting approximation error manifesting as martingale noise conditioned on some filtration and the error terms and .\nTo analyze the dynamic of algorithm 1 ###reference_###, we first obtain the continuous dynamic of , and using ordinary differential equations (ODEs) analysis.\nThe second step is to analyze the stochastic nature of the noise term and the error terms and , to ensure that they are negligible compared to the main trend of , , and , which is achieved by the following stepsizes,\nThe stepsizes satisfy\nThese stepsize schedules satisfy the standard conditions for stochastic approximation algorithms, ensuring that (1). the key quantities in gradient estimator update on the fastest timescale, (2). the dual variable for the DR problem, , update on the intermediate timescale; and (3). the table updates on the slowest timescale.\nExamples of such stepsize are and .\nNotably, the first two conditions in Condition B.1 ###reference_theorem1### ensure the martingale noise is negligible.\nThe different stepsizes for the three loops specificed by the third and fourth conditions ensures that and are sufficiently estimated with respect to the and , and these outer two loops are free from bias or noise in the stochastic approximation sense.\nUnder Condition B.1 ###reference_theorem1###, when analyzing the behavior of the , the and the can be viewed as quasi-static.\nTo study the behavior of the fastest loop, we analyze the following ODEs:\nand prove that ODEs (17 ###reference_###) a.s. converge to for proper and and some mapping .\nSimilarly, can be viewed as fixed when analyzing the behavior of , and the corresponding ODEs to understand its behavior are\nBy exploiting the dual form of the distributionally robust optimization problem, we can prove these ODEs converge to the set for some mapping and with is the set containing all the mapping from to .\nLastly, we examine the slowest timescale ODE given by\nand employ our analysis to establish the almost sure convergence of Algorithm 1 ###reference_### to the globally optimal pair .\nLet (resp. ) be nonnegative (resp. 
positive) sequences and scalars such that for all ,\nThen for ,\nFor continuous and scalars\nimplies\nConsider the stochastic approximation scheme given by\nwith the following Condition:\nis Lipschitz.\nThe sequence satisfies .\nis a martingale difference sequence with respect to the filtration , there exists such that a.s..\nThe functions satisfy as uniformly on compacts for some continuous function . In addition, the ODE\nhas the origin as its globally asymptotically stable equilibrium.\nWe then have\nUnder Condition B.4 ###reference_theorem4### to B.6 ###reference_theorem6###, we have\n a.s.\nSee Section 2.2 and 3.2 in Borkar (2009 ###reference_b5###) for the proof.\nAs the stability proofs in Section 3.2 of Borkar (2009 ###reference_b5###) are path-wise, we can apply this result to analyze multiple timescales dynamic.\nConsider the scheme\nwhere , , , are martingale difference sequences with respect to the -fields , and the form decreasing stepsize sequences.\nIt is instructive to compare the stochastic update algorithms from Equations 20 ###reference_### to 22 ###reference_### with the following o.d.e.,\nin the limit that and , .\nWe impose the following conditions, which are necessary for the a.s. convergence for each timescale itself and are commonly used in the literature of stochastic approximation algorithms, e.g., (Borkar, 2009 ###reference_b5###).\nand is -Lipschitz map for some and is bounded.\nFor and , is a martingale differeence sequence with respect to the increasing family of -fields .\nFurthermore, there exists some , such that for and ,\n, a.s..\nFor each and , has a globally asymptotically stable equilibrium , where is a -Lipschitz map for some .\nFor each , has a globally asymptotically stable equilibrium , where is a -Lipschitz map for some .\nhas a globally asymptotically stable equilibrium .\nConditions B.9 ###reference_theorem9###, B.10 ###reference_theorem10###, B.11 ###reference_theorem11### and B.12 ###reference_theorem12### are necessary for the a.s. convergence for each timescale itself.\nMoreover, Condition B.12 ###reference_theorem12### itself requires Conditions like B.9 ###reference_theorem9###, B.10 ###reference_theorem10###, B.11 ###reference_theorem11###, with an extra condition like Condition B.6 ###reference_theorem6###.\nInstead, we need to prove the boundedness for each timescale, thus the three timescales version is as follow\nThe ODE\nall have the origin as their globally asymptotically stable equilibrium for each and , where\nWe have the following results, which appears as a three timescales extension of Lemma 6.1 in Borkar (2009 ###reference_b5###) and serves as a auxiliary lemma for the our a.s. 
convergence.\nUnder the conditions B.9 ###reference_theorem9###, B.10 ###reference_theorem10###, B.11 ###reference_theorem11### and B.12 ###reference_theorem12###.\n a.s..\nRewrite Equations 21 ###reference_### and 22 ###reference_### as\nwhere , , , .\nNote that as .\nConsider them as the special case in the third extension in Section 2.2 in Borkar (2009 ###reference_b5###) and then we can conclude that converges to the internally chain transitive invariant sets of the o.d.e.,\nwhich implies that .\nRewrite Equation 22 ###reference_### again as\nwhere and .\nWe use the same extension again and can conclude that converges to the internally chain transitive invariant sets of the o.d.e.,\nThus .\n\u220e\nUnder the Condition B.9 ###reference_theorem9### to B.16 ###reference_theorem16###, .\nLet and for .\nDefine the piecewise linear continuous function where and for with any .\nLet .\nFor any , denote . Then for , we have\nWe further define as the trajectory of with .\nTaking the difference between Equation 23 ###reference_### and the Equation 24 ###reference_### we have\nWe analyze the I term. For notation simplicity we ignore the supsript .\nBy the Lipschitzness of the we have\nwhich implies\nBy Gronwall\u2019s inequality (Lemma B.3 ###reference_theorem3###), we have\nThus for all , we have\nFor any and ,\nwhere the last inequality is from the construction of .\nFinally we can conclude\nFor the III term, it converges to zero from the martingale convergence property.\nSubtracting equation 23 ###reference_### from 24 ###reference_### and take norms, we have\nDefine .\nNote that a.s. .\nLet .\nThus, above inequality becomes\nThus the above inequality becomes\nNote that and , then using the discrete Gronwall lemma (Lemma B.2 ###reference_theorem2###) we have\nFollowing the similar logic as in Lemma 1 in Borkar (2009 ###reference_b5###), we can extend the above result to the case where .\nThen using the proof of Theorem 2 of Chapter 2 in Borkar (2009 ###reference_b5###), we get a.s. and thus by Lemma B.17 ###reference_theorem17### the proof can be concluded.\n\u220e" | |
| }, | |
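The displayed statements of the two Gronwall lemmas invoked above did not survive extraction. For the reader's convenience, standard textbook forms of the discrete and continuous Gronwall inequalities (as in Borkar (2009)) are reproduced below; these are our reconstructions of the type of statement used, not necessarily the paper's exact wording.

```latex
% Discrete Gronwall inequality (cf. Lemma B.2): for nonnegative x_n, a_n and constants C, L >= 0,
\[
x_{n+1} \le C + L \sum_{m=0}^{n} a_m x_m \ \ \forall n \ge 0
\quad \Longrightarrow \quad
x_{n+1} \le C \exp\!\Big( L \sum_{m=0}^{n} a_m \Big).
\]
% Continuous Gronwall inequality (cf. Lemma B.3): for continuous x(.) and constants C, K, T >= 0,
\[
x(t) \le C + K \int_0^t x(s)\, ds \ \ \forall t \in [0, T]
\quad \Longrightarrow \quad
x(t) \le C e^{K t} \ \ \forall t \in [0, T].
\]
```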
| { | |
| "section_id": "Appendix 3", | |
| "parent_section_id": null, | |
| "section_name": "Appendix C Convergence of the DR -learning Algorithm", | |
| "text": "Before we start the proof of the DR -learning algorithm, we first introduce the following lemma.\nDenote\n. Given that , then we have .\nNote that for , .\nAlso we know that when ,\nThen we can conclude that . Moreover, as , we know , which concludes that .\n\u220e\nNote that when reward is bounded by . Thus in our case and then we denote .\nNow we are ready to prove the convergence of the DR -learning algorithm.\nFor theoretical analysis, we consider the clipping version of our DR -learning algorithm.\nWe define the filtration generated by the historical trajectory,\nIn the following analysis, we fix for a but ignore the dependence for notation simplicity.\nFollowing the roadmap in Section 3.4, we rewrite the algorithm as\nHere for theoretical analysis, we add a clipping operator and compared with the algorithm presented in the main text.\nWe first proceed by first identifying the terms in Equation 26 ###reference_### and 27 ###reference_### and studying the corresponding ODEs\nAs and is in fact irrelavant to the and , we analyze their equilibria seperately. For notation convenience, we denote .\nFor ODE 26 ###reference_### and each , it is easy to know there exists a unique global asymtotically stable equilibrium .\nSimilarly, For ODE 27 ###reference_### and each , there exists a unique global asymtotically stable equilibrium .\nSecond, and .\nNote that for any , , and .\nThus , which leads to .\nSince and for any , we have,\nwhere . Similarly, we can conclude that for some .\nNext we analyze the second loop.\nwhere\nThe global convergence point is .\nFinally we arrive to the outer loop, i.e.,\nBy using the dual form of Cressie-Read Divergence (Lemma 3.1 ###reference_theorem1###), we know that this is equivilant to\nfor ambiguity set using Cressie-Read of divergence.\nDenote and thus\nwe can rewrite the above ODE as\nFollowing , we consider its infity version, i.e., .\nThis is a contraction by Theorem 3.2 in Iyengar (2005 ###reference_b15###).\nBy the proof in Section 3.2 in Borkar & Meyn (2000 ###reference_b6###), we know the contraction can lead to the global unique equilibrium point in the ode.\nThus we finish verifying all the conditions in Section B.3 ###reference_###, which can lead to the desired result.\n\u220e" | |
| } | |
| ], | |
| "tables": { | |
| "1": { | |
| "table_html": "<figure class=\"ltx_table\" id=\"A1.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"A1.T1.12\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"A1.T1.4.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt\" id=\"A1.T1.4.4.5\">Environment</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"A1.T1.1.1.1\">Maximum Training Step \n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"A1.T1.2.2.2\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"A1.T1.3.3.3\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A1.T1.4.4.4\"></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A1.T1.8.8\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"A1.T1.8.8.5\">CartPole</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A1.T1.5.5.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A1.T1.6.6.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A1.T1.7.7.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T1.8.8.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T1.12.12\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb ltx_border_r\" id=\"A1.T1.12.12.5\">LunarLander</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"A1.T1.9.9.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"A1.T1.10.10.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"A1.T1.11.11.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A1.T1.12.12.4\"></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Different Hyperparamers between CartPole and LunarLander</figcaption>\n</figure>", | |
| "capture": "Table 1: Different Hyperparamers between CartPole and LunarLander" | |
| } | |
| }, | |
| "image_paths": { | |
| "1(a)": { | |
| "figure_path": "2301.11721v2_figure_1(a).png", | |
| "caption": "(a) Environment\nFigure 1: The Cliffwalking environment and the learned policies for different \u03c1\ud835\udf0c\\rhoitalic_\u03c1\u2019s.", | |
| "url": "http://arxiv.org/html/2301.11721v2/x1.png" | |
| }, | |
| "1(b)": { | |
| "figure_path": "2301.11721v2_figure_1(b).png", | |
| "caption": "(b) Nonrobust\nFigure 1: The Cliffwalking environment and the learned policies for different \u03c1\ud835\udf0c\\rhoitalic_\u03c1\u2019s.", | |
| "url": "http://arxiv.org/html/2301.11721v2/x2.png" | |
| }, | |
| "1(c)": { | |
| "figure_path": "2301.11721v2_figure_1(c).png", | |
| "caption": "(c) \u03c1=1.0\ud835\udf0c1.0\\rho=1.0italic_\u03c1 = 1.0\nFigure 1: The Cliffwalking environment and the learned policies for different \u03c1\ud835\udf0c\\rhoitalic_\u03c1\u2019s.", | |
| "url": "http://arxiv.org/html/2301.11721v2/x3.png" | |
| }, | |
| "1(d)": { | |
| "figure_path": "2301.11721v2_figure_1(d).png", | |
| "caption": "(d) \u03c1=1.5\ud835\udf0c1.5\\rho=1.5italic_\u03c1 = 1.5\nFigure 1: The Cliffwalking environment and the learned policies for different \u03c1\ud835\udf0c\\rhoitalic_\u03c1\u2019s.", | |
| "url": "http://arxiv.org/html/2301.11721v2/x4.png" | |
| }, | |
| "2(a)": { | |
| "figure_path": "2301.11721v2_figure_2(a).png", | |
| "caption": "(a) Return\nFigure 2: Averaged return and steps with 100 random seeds in the perturbed environments. \u03c1=0\ud835\udf0c0\\rho=0italic_\u03c1 = 0 corresponds to the non-robust Q\ud835\udc44Qitalic_Q-learning. R\ud835\udc45Ritalic_R denotes the R\ud835\udc45Ritalic_R-contamination ambiguity set.", | |
| "url": "http://arxiv.org/html/2301.11721v2/x5.png" | |
| }, | |
| "2(b)": { | |
| "figure_path": "2301.11721v2_figure_2(b).png", | |
| "caption": "(b) Episode length\nFigure 2: Averaged return and steps with 100 random seeds in the perturbed environments. \u03c1=0\ud835\udf0c0\\rho=0italic_\u03c1 = 0 corresponds to the non-robust Q\ud835\udc44Qitalic_Q-learning. R\ud835\udc45Ritalic_R denotes the R\ud835\udc45Ritalic_R-contamination ambiguity set.", | |
| "url": "http://arxiv.org/html/2301.11721v2/x6.png" | |
| }, | |
| "2(c)": { | |
| "figure_path": "2301.11721v2_figure_2(c).png", | |
| "caption": "(c) Value of various k\ud835\udc58kitalic_k and \u03c1\ud835\udf0c\\rhoitalic_\u03c1\nFigure 2: Averaged return and steps with 100 random seeds in the perturbed environments. \u03c1=0\ud835\udf0c0\\rho=0italic_\u03c1 = 0 corresponds to the non-robust Q\ud835\udc44Qitalic_Q-learning. R\ud835\udc45Ritalic_R denotes the R\ud835\udc45Ritalic_R-contamination ambiguity set.", | |
| "url": "http://arxiv.org/html/2301.11721v2/x7.png" | |
| }, | |
| "3": { | |
| "figure_path": "2301.11721v2_figure_3.png", | |
| "caption": "Figure 3: The training curves in the Cliffwalking environment. Each curve is averaged over 100 random seeds and shaded by their standard deviations. The dashed line is the optimal robust value with corresponding k\ud835\udc58kitalic_k and \u03c1\ud835\udf0c\\rhoitalic_\u03c1.", | |
| "url": "http://arxiv.org/html/2301.11721v2/x8.png" | |
| }, | |
| "4": { | |
| "figure_path": "2301.11721v2_figure_4.png", | |
| "caption": "Figure 4: Sample complexity comparisons in Cliffwalking environment with Liu\u2019s and Model-based algorithms. Each curve is averaged over 100 random seeds and shaded by their standard deviations.", | |
| "url": "http://arxiv.org/html/2301.11721v2/x9.png" | |
| }, | |
| "5": { | |
| "figure_path": "2301.11721v2_figure_5.png", | |
| "caption": "Figure 5: The return in the CartPole and LunarLander environment. Each curve is averaged over 100 random seeds and shaded by their standard deviations. AP: Action Perturbation; FMP: Force Mag Perturbation; EPP: Engines Power Perturbation.", | |
| "url": "http://arxiv.org/html/2301.11721v2/x10.png" | |
| }, | |
| "6": { | |
| "figure_path": "2301.11721v2_figure_6.png", | |
| "caption": "Figure 6: Averaged return in the American call option problem. \u03c1=0.0\ud835\udf0c0.0\\rho=0.0italic_\u03c1 = 0.0 is the non-robust Q\ud835\udc44Qitalic_Q-learning.", | |
| "url": "http://arxiv.org/html/2301.11721v2/x11.png" | |
| }, | |
| "7": { | |
| "figure_path": "2301.11721v2_figure_7.png", | |
| "caption": "Figure 7: Convergence curve of DR Q\ud835\udc44Qitalic_Q-learning algorithm to the true DR value under different \u03c1\ud835\udf0c\\rhoitalic_\u03c1\u2019s and k\ud835\udc58kitalic_k\u2019s. Each curve is averaged over 10 random seeds and shaded by their standard deviation. The dashed line is the optimal robust value with corresponding k\ud835\udc58kitalic_k and \u03c1\ud835\udf0c\\rhoitalic_\u03c1.", | |
| "url": "http://arxiv.org/html/2301.11721v2/x12.png" | |
| }, | |
| "8": { | |
| "figure_path": "2301.11721v2_figure_8.png", | |
| "caption": "Figure 8: Sample complexity comparisons in American option environment with other DRRL algorithms.\nThe dashed line is the optimal robust value with corresponding k\ud835\udc58kitalic_k and \u03c1\ud835\udf0c\\rhoitalic_\u03c1. The x\ud835\udc65xitalic_x-axis is in log10 scale. Each curve is averaged over 10 random seeds and shaded by their one standard deviation.\nThe dashed line is the optimal robust value with corresponding k\ud835\udc58kitalic_k and \u03c1\ud835\udf0c\\rhoitalic_\u03c1.", | |
| "url": "http://arxiv.org/html/2301.11721v2/x13.png" | |
| } | |
| }, | |
| "validation": true, | |
| "references": [ | |
| { | |
| "1": { | |
| "title": "Maximum a posteriori policy optimisation.", | |
| "author": "Abdolmaleki, A., Springenberg, J. T., Tassa, Y., Munos, R., Heess, N., and\nRiedmiller, M.", | |
| "venue": "arXiv preprint arXiv:1806.06920, 2018.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "2": { | |
| "title": "Wasserstein Robust Reinforcement Learning, 2019.", | |
| "author": "Abdullah, M. A., Ren, H., Ammar, H. B., Milenkovic, V., Luo, R., Zhang, M., and\nWang, J.", | |
| "venue": "URL http://arxiv.org/abs/1907.13196.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "3": { | |
| "title": "Robust reinforcement learning using least squares policy iteration\nwith provable performance guarantees.", | |
| "author": "Badrinath, K. P. and Kalathil, D.", | |
| "venue": "In International Conference on Machine Learning, pp. 511\u2013520. PMLR, 2021.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "4": { | |
| "title": "Unbiased monte carlo for optimization and functions of expectations\nvia multi-level randomization.", | |
| "author": "Blanchet, J. H. and Glynn, P. W.", | |
| "venue": "In 2015 Winter Simulation Conference (WSC), pp. 3656\u20133667.\nIEEE, 2015.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "5": { | |
| "title": "Stochastic approximation: a dynamical systems viewpoint,\nvolume 48.", | |
| "author": "Borkar, V. S.", | |
| "venue": "Springer, 2009.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "6": { | |
| "title": "The ode method for convergence of stochastic approximation and\nreinforcement learning.", | |
| "author": "Borkar, V. S. and Meyn, S. P.", | |
| "venue": "SIAM Journal on Control and Optimization, 38(2):447\u2013469, 2000.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "7": { | |
| "title": "Openai gym.", | |
| "author": "Brockman, G., Cheung, V., Pettersson, L., Schneider, J., Schulman, J., Tang,\nJ., and Zaremba, W.", | |
| "venue": "arXiv preprint arXiv:1606.01540, 2016.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "8": { | |
| "title": "Option pricing: A simplified approach.", | |
| "author": "Cox, J. C., Ross, S. A., and Rubinstein, M.", | |
| "venue": "Journal of financial Economics, 7(3):229\u2013263, 1979.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "9": { | |
| "title": "Multinomial goodness-of-fit tests.", | |
| "author": "Cressie, N. and Read, T. R.", | |
| "venue": "Journal of the Royal Statistical Society: Series B\n(Methodological), 46(3):440\u2013464, 1984.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "10": { | |
| "title": "Model-free risk-sensitive reinforcement learning.", | |
| "author": "Del\u00e9tang, G., Grau-Moya, J., Kunesch, M., Genewein, T., Brekelmans, R.,\nLegg, S., and Ortega, P. A.", | |
| "venue": "arXiv preprint arXiv:2111.02907, 2021.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "11": { | |
| "title": "Soft-robust actor-critic policy-gradient.", | |
| "author": "Derman, E., Mankowitz, D. J., Mann, T. A., and Mannor, S.", | |
| "venue": "arXiv preprint arXiv:1803.04848, 2018.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "12": { | |
| "title": "Learning models with uniform performance via distributionally robust\noptimization.", | |
| "author": "Duchi, J. C. and Namkoong, H.", | |
| "venue": "The Annals of Statistics, 49(3):1378\u20131406, 2021.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "13": { | |
| "title": "Robust markov decision processes: Beyond rectangularity.", | |
| "author": "Goyal, V. and Grand-Clement, J.", | |
| "venue": "Mathematics of Operations Research, 2022.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "14": { | |
| "title": "Partial policy iteration for l1-robust markov decision processes.", | |
| "author": "Ho, C. P., Petrik, M., and Wiesemann, W.", | |
| "venue": "J. Mach. Learn. Res., 22:275\u20131, 2021.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "15": { | |
| "title": "Robust dynamic programming.", | |
| "author": "Iyengar, G. N.", | |
| "venue": "Mathematics of Operations Research, 30(2):257\u2013280, 2005.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "16": { | |
| "title": "Auto-encoding variational bayes.", | |
| "author": "Kingma, D. P. and Welling, M.", | |
| "venue": "arXiv preprint arXiv:1312.6114, 2013.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "17": { | |
| "title": "Reinforcement learning in robust markov decision processes.", | |
| "author": "Lim, S. H., Xu, H., and Mannor, S.", | |
| "venue": "Advances in Neural Information Processing Systems, 26, 2013.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "18": { | |
| "title": "Distributionally robust -learning.", | |
| "author": "Liu, Z., Bai, Q., Blanchet, J., Dong, P., Xu, W., Zhou, Z., and Zhou, Z.", | |
| "venue": "In International Conference on Machine Learning, pp. 13623\u201313643. PMLR, 2022.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "19": { | |
| "title": "Distributionally robust offline reinforcement learning with linear\nfunction approximation.", | |
| "author": "Ma, X., Liang, Z., Xia, L., Zhang, J., Blanchet, J., Liu, M., Zhao, Q., and\nZhou, Z.", | |
| "venue": "arXiv preprint arXiv:2209.06620, 2022.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "20": { | |
| "title": "Bias and variance in value function estimation.", | |
| "author": "Mannor, S., Simester, D., Sun, P., and Tsitsiklis, J. N.", | |
| "venue": "In Proceedings of the twenty-first international conference on\nMachine learning, pp. 72, 2004.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "21": { | |
| "title": "Human-level control through deep reinforcement learning.", | |
| "author": "Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare,\nM. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., et al.", | |
| "venue": "nature, 518(7540):529\u2013533, 2015.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "22": { | |
| "title": "Robust q-learning algorithm for markov decision processes under\nwasserstein uncertainty.", | |
| "author": "Neufeld, A. and Sester, J.", | |
| "venue": "ArXiv, abs/2210.00898, 2022.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "23": { | |
| "title": "Robust control of markov decision processes with uncertain transition\nmatrices.", | |
| "author": "Nilim, A. and El Ghaoui, L.", | |
| "venue": "Operations Research, 53(5):780\u2013798, 2005.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "24": { | |
| "title": "Sample complexity of robust reinforcement learning with a generative\nmodel.", | |
| "author": "Panaganti, K. and Kalathil, D.", | |
| "venue": "In International Conference on Artificial Intelligence and\nStatistics, pp. 9582\u20139602. PMLR, 2022.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "25": { | |
| "title": "Robust reinforcement learning using offline data.", | |
| "author": "Panaganti, K., Xu, Z., Kalathil, D., and Ghavamzadeh, M.", | |
| "venue": "Advances in neural information processing systems,\n35:32211\u201332224, 2022.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "26": { | |
| "title": "Reinforcement learning under model mismatch.", | |
| "author": "Roy, A., Xu, H., and Pokutta, S.", | |
| "venue": "Advances in neural information processing systems, 30, 2017.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "27": { | |
| "title": "Proximal policy optimization algorithms.", | |
| "author": "Schulman, J., Wolski, F., Dhariwal, P., Radford, A., and Klimov, O.", | |
| "venue": "arXiv preprint arXiv:1707.06347, 2017.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "28": { | |
| "title": "Distributionally Robust Model-Based Offline Reinforcement\nLearning with Near-Optimal Sample Complexity.", | |
| "author": "Shi, L. and Chi, Y.", | |
| "venue": "URL http://arxiv.org/abs/2208.05767.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "29": { | |
| "title": "Mastering the game of go with deep neural networks and tree search.", | |
| "author": "Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Van Den Driessche,\nG., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M.,\net al.", | |
| "venue": "nature, 529(7587):484\u2013489, 2016.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "30": { | |
| "title": "Scaling up robust mdps using function approximation.", | |
| "author": "Tamar, A., Mannor, S., and Xu, H.", | |
| "venue": "In International conference on machine learning, pp. 181\u2013189. PMLR, 2014.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "31": { | |
| "title": "Asynchronous stochastic approximation and q-learning.", | |
| "author": "Tsitsiklis, J. N.", | |
| "venue": "Machine learning, 16:185\u2013202, 1994.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "32": { | |
| "title": "Grandmaster level in StarCraft II using multi-agent reinforcement\nlearning.", | |
| "author": "Vinyals, O., Babuschkin, I., Czarnecki, W. M., Mathieu, M., Dudzik, A., Chung,\nJ., Choi, D. H., Powell, R., Ewalds, T., Georgiev, P., Oh, J., Horgan, D.,\nKroiss, M., Danihelka, I., Huang, A., Sifre, L., Cai, T., Agapiou, J. P.,\nJaderberg, M., Vezhnevets, A. S., Leblond, R., Pohlen, T., Dalibard, V.,\nBudden, D., Sulsky, Y., Molloy, J., Paine, T. L., Gulcehre, C., Wang, Z.,\nPfaff, T., Wu, Y., Ring, R., Yogatama, D., W\u00fcnsch, D., McKinney, K., Smith,\nO., Schaul, T., Lillicrap, T., Kavukcuoglu, K., Hassabis, D., Apps, C., and\nSilver, D.", | |
| "venue": "575(7782):350\u2013354, 2019.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "33": { | |
| "title": "Online robust reinforcement learning with model uncertainty.", | |
| "author": "Wang, Y. and Zou, S.", | |
| "venue": "Advances in Neural Information Processing Systems,\n34:7193\u20137206, 2021.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "34": { | |
| "title": "Robust markov decision processes.", | |
| "author": "Wiesemann, W., Kuhn, D., and Rustem, B.", | |
| "venue": "Mathematics of Operations Research, 38(1):153\u2013183, 2013.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "35": { | |
| "title": "Distributionally robust markov decision processes.", | |
| "author": "Xu, H. and Mannor, S.", | |
| "venue": "Advances in Neural Information Processing Systems, 23, 2010.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "36": { | |
| "title": "Wasserstein distributionally robust stochastic control: A data-driven\napproach.", | |
| "author": "Yang, I.", | |
| "venue": "IEEE Transactions on Automatic Control, 66:3863\u20133870, 2018.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "37": { | |
| "title": "Toward theoretical understandings of robust markov decision\nprocesses: Sample complexity and asymptotics.", | |
| "author": "Yang, W., Zhang, L., and Zhang, Z.", | |
| "venue": "The Annals of Statistics, 50(6):3223\u20133248, 2022.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "38": { | |
| "title": "Finite-sample regret bound for distributionally robust offline\ntabular reinforcement learning.", | |
| "author": "Zhou, Z., Zhou, Z., Bai, Q., Qiu, L., Blanchet, J., and Glynn, P.", | |
| "venue": "In International Conference on Artificial Intelligence and\nStatistics, pp. 3331\u20133339. PMLR, 2021.", | |
| "url": null | |
| } | |
| } | |
| ], | |
| "url": "http://arxiv.org/html/2301.11721v2" | |
| } |