Reinforcement Learning

Train Hard, Fight Easy: Robust Meta Reinforcement Learning

We introduce RoML, a meta-algorithm that takes any meta-learning baseline algorithm and generates a robust version of it.

SoftTreeMax: Exponential Variance Reduction in Policy Gradient via Tree Search

Despite the popularity of policy gradient methods, they are known to suffer from large variance and high sample complexity. To mitigate this, we introduce SoftTreeMax – a generalization of softmax that takes planning into account.
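As an illustration of the idea of a softmax policy that "takes planning into account," the sketch below contrasts a standard softmax over raw logits with a softmax over depth-limited lookahead returns. This is a toy rendering under assumed tables `R` and `T` and a best-path aggregation rule, not the paper's actual SoftTreeMax operator.

```python
import numpy as np

# Toy deterministic MDP (hypothetical example, not the paper's setup).
n_states, n_actions = 4, 2
rng = np.random.default_rng(0)
R = rng.uniform(size=(n_states, n_actions))             # reward r(s, a)
T = rng.integers(n_states, size=(n_states, n_actions))  # next state s'(s, a)

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def vanilla_policy(theta, s):
    # Standard softmax policy over per-state logits theta[s].
    return softmax(theta[s])

def tree_return(s, a, depth, theta):
    # Best cumulative reward along a depth-limited lookahead tree,
    # bootstrapped with the logit at the leaf.
    if depth == 0:
        return theta[s, a]
    s2 = T[s, a]
    return R[s, a] + max(tree_return(s2, a2, depth - 1, theta)
                         for a2 in range(n_actions))

def treemax_policy(theta, s, depth=2):
    # Softmax over lookahead returns instead of raw logits: planning
    # information shapes the action distribution.
    logits = np.array([tree_return(s, a, depth, theta)
                       for a in range(n_actions)])
    return softmax(logits)

theta = np.zeros((n_states, n_actions))
print(vanilla_policy(theta, 0))  # uniform [0.5, 0.5]: raw logits carry no signal
print(treemax_policy(theta, 0))  # shaped by the lookahead rewards
```

With zero logits the vanilla policy is uninformative, while the lookahead variant already discriminates between actions via the reward tables.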

CALM: Conditional Adversarial Latent Models for Directable Virtual Characters

In this work, we present Conditional Adversarial Latent Models (CALM), an approach for generating diverse and directable behaviors for user-controlled interactive virtual characters. Using imitation learning, CALM learns a representation of movement …

Planning and Learning with Adaptive Lookahead

Some of the most powerful reinforcement learning frameworks use planning for action selection. Interestingly, their planning horizon is either fixed or determined arbitrarily by the state visitation history. Here, we expand beyond the naive fixed …

Never Worse, Mostly Better: Stable Policy Improvement in Deep Reinforcement Learning

In recent years, there has been significant progress in applying deep reinforcement learning (RL) for solving challenging problems across a wide variety of domains. Nevertheless, convergence of various methods has been shown to suffer from …

Reinforcement Learning with a Terminator

We present the problem of reinforcement learning with exogenous termination. We define the Termination Markov Decision Process (TerMDP), an extension of the MDP framework, in which episodes may be interrupted by an external non-Markovian observer. …
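A minimal sketch of what a rollout with an external, history-dependent terminator could look like. The interface (`env_step`, `policy`, `terminator`) and the toy reward-budget observer are illustrative assumptions, not the TerMDP formalism from the paper.

```python
def run_termdp_episode(env_step, policy, terminator, s0, horizon):
    # Rollout in which an external observer may interrupt the episode.
    # The terminator sees the full history, so it can be non-Markovian.
    history, s, total = [], s0, 0.0
    for _ in range(horizon):
        a = policy(s)
        s, r = env_step(s, a)
        total += r
        history.append((s, a, r))
        if terminator(history):  # exogenous, history-dependent termination
            break
    return total, len(history)

# Toy instance: +1 reward per step; the observer terminates once the
# cumulative reward exceeds a budget of 3.
env_step = lambda s, a: (s + 1, 1.0)
policy = lambda s: 0
terminator = lambda h: sum(r for _, _, r in h) > 3
print(run_termdp_episode(env_step, policy, terminator, 0, horizon=10))  # → (4.0, 4)
```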

Reinforcement Learning for Datacenter Congestion Control

We approach the task of network congestion control in datacenters using Reinforcement Learning (RL). Successful congestion control algorithms can dramatically improve latency and overall network throughput. Until today, no such learning-based …

Improve Agents without Retraining: Parallel Tree Search with Off-Policy Correction

Tree Search (TS) is crucial to some of the most influential successes in reinforcement learning. Here, we tackle two major challenges with TS that limit its usability: *distribution shift* and *scalability*. We first discover and analyze a …

Known unknowns: Learning novel concepts using exploratory reasoning-by-elimination

If you use the contents of this project, please cite our paper:

@article{hagrawal2021unknown,
  title={Known unknowns: Learning novel concepts using exploratory reasoning-by-elimination},
  author={Harsh Agrawal and Eli Meirom and Yuval Atzmon and Shie Mannor and Gal Chechik},
  journal={Uncertainty in Artificial Intelligence},
  year={2021}
}

Acting in Delayed Environments with Non-Stationary Markov Policies

The standard Markov Decision Process (MDP) formulation hinges on the assumption that an action is executed immediately after it was chosen. However, this assumption is often unrealistic and can lead to catastrophic failures in applications such as …
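The delayed-execution setting can be made concrete with a small wrapper that queues actions so each one takes effect only after a fixed number of steps. The wrapper and the toy counter environment below are illustrative assumptions, not the paper's implementation.

```python
from collections import deque

class CounterEnv:
    # Toy environment (hypothetical): the state is a running sum of
    # the actions that were actually executed.
    def __init__(self):
        self.state = 0
    def step(self, action):
        self.state += action
        return self.state, float(action)

class DelayedActionEnv:
    # Minimal sketch of delayed execution: an action chosen now is
    # executed only `delay` steps later, so the queue of pending
    # actions effectively becomes part of the state.
    def __init__(self, env, delay, default_action=0):
        self.env = env
        self.pending = deque([default_action] * delay)
    def step(self, action):
        self.pending.append(action)        # schedule the new action
        executed = self.pending.popleft()  # execute the oldest pending one
        return self.env.step(executed)

env = DelayedActionEnv(CounterEnv(), delay=2)
print(env.step(1))  # executes default 0 → (0, 0.0)
print(env.step(2))  # executes default 0 → (0, 0.0)
print(env.step(3))  # executes action 1, chosen two steps ago → (1, 1.0)
```

The agent's chosen action and the executed action diverge for the first `delay` steps, which is exactly the mismatch the standard Markov formulation cannot express.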