Language models are few-shot learners TB Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, ... arXiv preprint arXiv:2005.14165, 2020 | 38710* | 2020 |
GPT-4 technical report J Achiam, S Adler, S Agarwal, L Ahmad, ... arXiv preprint arXiv:2303.08774, 2023 | 4458* | 2023 |
Adding gradient noise improves learning for very deep networks A Neelakantan, L Vilnis, QV Le, I Sutskever, L Kaiser, K Kurach, J Martens International Conference on Learning Representations Workshop (ICLR Workshop), 2015 | 613 | 2015 |
Efficient non-parametric estimation of multiple embeddings per word in vector space A Neelakantan, J Shankar, A Passos, A McCallum Conference on Empirical Methods in Natural Language Processing (EMNLP), 2014 | 609 | 2015 |
Compositional vector space models for knowledge base inference A Neelakantan, B Roth, A McCallum 2015 AAAI Spring Symposium Series, 2015 | 434* | 2015 |
Text and code embeddings by contrastive pre-training A Neelakantan, T Xu, R Puri, A Radford, JM Han, J Tworek, Q Yuan, ... arXiv preprint arXiv:2201.10005, 2022 | 345 | 2022 |
Chains of reasoning over entities, relations, and text using recurrent neural networks R Das, A Neelakantan, D Belanger, A McCallum European Chapter of the Association for Computational Linguistics (EACL), 2017 | 330 | 2016 |
Neural programmer: Inducing latent programs with gradient descent A Neelakantan, QV Le, I Sutskever International Conference on Learning Representations (ICLR), 2016 | 294 | 2015 |
Taskmaster-1: Toward a realistic and diverse dialog dataset B Byrne, K Krishnamoorthi, C Sankar, A Neelakantan, D Duckworth, ... arXiv preprint arXiv:1909.05358, 2019 | 217 | 2019 |
Learning a natural language interface with neural programmer A Neelakantan, QV Le, M Abadi, A McCallum, D Amodei International Conference on Learning Representations (ICLR), 2017 | 137 | 2016 |
Inferring Missing Entity Type Instances for Knowledge Base Completion: New Dataset and Methods A Neelakantan, MW Chang The North American Chapter of the Association for Computational Linguistics …, 2015 | 91 | 2015 |
Theory and experiments on vector quantized autoencoders A Roy, A Vaswani, A Neelakantan, N Parmar arXiv preprint arXiv:1805.11063, 2018 | 87 | 2018 |
Predicting the impact of scientific concepts using full-text features K McKeown, H Daume III, S Chaturvedi, J Paparrizos, K Thadani, P Barrio, ... Journal of the Association for Information Science and Technology 67 (11 …, 2016 | 80 | 2016 |
Trading off diversity and quality in natural language generation H Zhang, D Duckworth, D Ippolito, A Neelakantan arXiv preprint arXiv:2004.10450, 2020 | 78 | 2020 |
Learning Dictionaries for Named Entity Recognition using Minimal Supervision A Neelakantan, M Collins European Chapter of the Association for Computational Linguistics (EACL), 2014 | 57 | 2014 |
Generalizing to unseen entities and entity pairs with row-less universal schema P Verga, A Neelakantan, A McCallum European Chapter of the Association for Computational Linguistics (EACL), 2017 | 50 | 2016 |
RelNet: End-to-end Modeling of Entities & Relations T Bansal, A Neelakantan, A McCallum arXiv preprint arXiv:1706.07179, 2017 | 35 | 2017 |
Parallel scheduled sampling D Duckworth, A Neelakantan, B Goodrich, L Kaiser, S Bengio arXiv preprint arXiv:1906.04331, 2019 | 25 | 2019 |