Name: MATTHIEU GEIST
Papers: 106
Collaborators: 174
Citations: 386
PageRank: 47.03
Referrers: 681
Referees: 771
References: 814
Title | Citations | PageRank | Year
Lazy-MDPs: Towards Interpretable RL by Learning When to Act | 0 | 0.34 | 2022
Continuous Control with Action Quantization from Demonstrations | 0 | 0.34 | 2022
A general class of surrogate functions for stable and efficient reinforcement learning | 0 | 0.34 | 2022
Generalization in Mean Field Games by Learning Master Policies | 0 | 0.34 | 2022
Implicitly Regularized RL with Implicit Q-values | 0 | 0.34 | 2022
Scaling Mean Field Games by Online Mirror Descent | 0 | 0.34 | 2022
Offline Reinforcement Learning as Anti-exploration | 0 | 0.34 | 2022
Concave Utility Reinforcement Learning: The Mean-field Game Viewpoint | 0 | 0.34 | 2022
Scalable Deep Reinforcement Learning Algorithms for Mean Field Games | 0 | 0.34 | 2022
Offline Reinforcement Learning With Pseudometric Learning | 0 | 0.34 | 2021
Adversarially Guided Actor-Critic | 0 | 0.34 | 2021
Show me the Way: Intrinsic Motivation from Demonstrations | 0 | 0.34 | 2021
How To Train Your Heron | 0 | 0.34 | 2021
Mean Field Games Flock! The Reinforcement Learning Way | 0 | 0.34 | 2021
Self-Imitation Advantage Learning | 0 | 0.34 | 2021
What Matters for On-Policy Deep Actor-Critic Methods? A Large-Scale Study | 0 | 0.34 | 2021
Hyperparameter Selection for Imitation Learning | 0 | 0.34 | 2021
Evaluation Of Prioritized Deep System Identification On A Path Following Task | 0 | 0.34 | 2021
Learning Behaviors through Physics-driven Latent Imagination | 0 | 0.34 | 2021
What Matters for Adversarial Imitation Learning? | 0 | 0.34 | 2021
Primal Wasserstein Imitation Learning | 0 | 0.34 | 2021
Fictitious Play for Mean Field Games: Continuous Time Analysis and Applications | 0 | 0.34 | 2020
On The Convergence Of Model Free Learning In Mean Field Games | 0 | 0.34 | 2020
Self-Attentional Credit Assignment for Transfer in Reinforcement Learning | 1 | 0.36 | 2020
Foolproof Cooperative Learning | 0 | 0.34 | 2020
Image-Based Place Recognition on Bucolic Environment Across Seasons From Semantic Edge Description | 0 | 0.34 | 2020
CopyCAT: Taking Control of Neural Policies with Constant Attacks | 0 | 0.34 | 2020
Leverage the Average: an Analysis of KL Regularization in Reinforcement Learning | 0 | 0.34 | 2020
Munchausen Reinforcement Learning | 0 | 0.34 | 2020
Deep Conservative Policy Iteration | 0 | 0.34 | 2019
Stable and Efficient Policy Evaluation | 1 | 0.41 | 2019
Deep Reinforcement Learning-Based Continuous Control For Multicopter Systems | 0 | 0.34 | 2019
Targeted Attacks on Deep Reinforcement Learning Agents through Adversarial Observations | 0 | 0.34 | 2019
A Theory of Regularized Markov Decision Processes | 0 | 0.34 | 2019
Learning from a Learner | 0 | 0.34 | 2019
Image-Based Text Classification using 2D Convolutional Neural Networks | 0 | 0.34 | 2019
Foolproof Cooperative Learning | 0 | 0.34 | 2019
Importance Sampling for Deep System Identification | 0 | 0.34 | 2019
A Deep Learning Approach For Privacy Preservation In Assisted Living | 1 | 0.35 | 2018
Human Activity Recognition Using Recurrent Neural Networks | 10 | 0.62 | 2018
Image-based Natural Language Understanding Using 2D Convolutional Neural Networks | 0 | 0.34 | 2018
Anderson Acceleration for Reinforcement Learning | 0 | 0.34 | 2018
Deep Representation Learning for Domain Adaptation of Semantic Image Segmentation | 0 | 0.34 | 2018
Reconstruct & Crush Network | 0 | 0.34 | 2017
Is the Bellman residual a bad proxy? | 0 | 0.34 | 2017
Bridging the Gap Between Imitation Learning and Inverse Reinforcement Learning | 7 | 0.51 | 2017
Should one minimize the expected Bellman residual or maximize the mean value? | 0 | 0.34 | 2016