Minqi Jiang
Title · Cited by · Year
Prioritized Level Replay
M Jiang, E Grefenstette, T Rocktäschel
International Conference on Machine Learning, 4940-4950, 2021
Cited by: 68 · Year: 2021
MiniHack the Planet: A Sandbox for Open-Ended Reinforcement Learning Research
M Samvelyan, R Kirk, V Kurin, J Parker-Holder, M Jiang, E Hambro, ...
NeurIPS 2021 Datasets and Benchmarks, 2021
Cited by: 36 · Year: 2021
Evolving Curricula with Regret-Based Environment Design
J Parker-Holder*, M Jiang*, M Dennis, M Samvelyan, J Foerster, ...
International Conference on Machine Learning, https://accelagent.github.io, 2022
Cited by: 26 · Year: 2022
Replay-Guided Adversarial Environment Design
M Jiang*, M Dennis*, J Parker-Holder, J Foerster, E Grefenstette, ...
NeurIPS 2021, 2021
Cited by: 26 · Year: 2021
Motion responsive user interface for realtime language translation
AJ Cuthbert, JJ Estelle, MR Hughes, S Goyal, MS Jiang
US Patent 9,355,094, 2016
Cited by: 23 · Year: 2016
WordCraft: An Environment for Benchmarking Commonsense Agents
M Jiang, J Luketina, N Nardelli, P Minervini, PHS Torr, S Whiteson, ...
Language in Reinforcement Learning Workshop at ICML 2020, 2020
Cited by: 16 · Year: 2020
Improving intrinsic exploration with language abstractions
J Mu, V Zhong, R Raileanu, M Jiang, N Goodman, T Rocktäschel, ...
NeurIPS 2022, 2022
Cited by: 15 · Year: 2022
Insights from the NeurIPS 2021 NetHack Challenge
E Hambro, S Mohanty, D Babaev, M Byeon, D Chakraborty, ...
NeurIPS 2021 Competitions and Demonstrations Track, 41-52, 2022
Cited by: 3 · Year: 2022
Grid-to-Graph: Flexible Spatial Relational Inductive Biases for Reinforcement Learning
Z Jiang, P Minervini, M Jiang, T Rocktäschel
AAMAS 2021 (Oral), 2021
Cited by: 2 · Year: 2021
Resolving causal confusion in reinforcement learning via robust exploration
C Lyle, A Zhang, M Jiang, J Pineau, Y Gal
Self-Supervision for Reinforcement Learning Workshop-ICLR 2021, 2021
Cited by: 2 · Year: 2021
Exploration via Elliptical Episodic Bonuses
M Henaff, R Raileanu, M Jiang, T Rocktäschel
NeurIPS 2022, 2022
Cited by: 1 · Year: 2022
MAESTRO: Open-Ended Environment Design for Multi-Agent Reinforcement Learning
M Samvelyan, A Khan, MD Dennis, M Jiang, J Parker-Holder, JN Foerster, ...
Deep Reinforcement Learning Workshop NeurIPS 2022, 2022
Cited by: 1 · Year: 2022
General Intelligence Requires Rethinking Exploration
M Jiang, T Rocktäschel, E Grefenstette
arXiv preprint arXiv:2211.07819, 2022
Year: 2022
GriddlyJS: A Web IDE for Reinforcement Learning
C Bamford, M Jiang, M Samvelyan, T Rocktäschel
NeurIPS 2022 Datasets and Benchmarks, 2022
Year: 2022
Grounding Aleatoric Uncertainty for Unsupervised Environment Design
M Jiang, M Dennis, J Parker-Holder, A Lupu, H Küttler, E Grefenstette, ...
NeurIPS 2022, 2022
Year: 2022
Integrating Episodic and Global Bonuses for Efficient Exploration
M Henaff, M Jiang, R Raileanu
Deep Reinforcement Learning Workshop NeurIPS 2022
A Study of Off-Policy Learning in Environments with Procedural Content Generation
A Ehrenberg, R Kirk, M Jiang, E Grefenstette, T Rocktäschel
ICLR Workshop on Agent Learning in Open-Endedness
Return Dispersion as an Estimator of Learning Potential for Prioritized Level Replay
I Korshunova, M Jiang, J Parker-Holder, T Rocktäschel, E Grefenstette
I (Still) Can't Believe It's Not Better! NeurIPS 2021 Workshop