Kelvin Guu
Senior Staff Research Scientist, Google
Verified email at google.com · Homepage
Title
Cited by
Year
Finetuned language models are zero-shot learners
J Wei, M Bosma, VY Zhao, K Guu, AW Yu, B Lester, N Du, AM Dai, QV Le
arXiv preprint arXiv:2109.01652, 2021
Cited by 3194 · 2021
Gemini: a family of highly capable multimodal models
Gemini Team, R Anil, S Borgeaud, JB Alayrac, J Yu, R Soricut, J Schalkwyk, ...
arXiv preprint arXiv:2312.11805, 2023
Cited by 2236 · 2023
Retrieval augmented language model pre-training
K Guu, K Lee, Z Tung, P Pasupat, M Chang
International conference on machine learning, 3929-3938, 2020
Cited by 1933 · 2020
Traversing Knowledge Graphs in Vector Space
K Guu, J Miller, P Liang
Empirical Methods in Natural Language Processing (EMNLP), 2015
Cited by 424 · 2015
Generating Sentences by Editing Prototypes
K Guu, TB Hashimoto, Y Oren, P Liang
Transactions of the Association for Computational Linguistics 6, 437-450, 2018
Cited by 372 · 2018
RARR: Researching and revising what language models say, using language models
L Gao, Z Dai, P Pasupat, A Chen, AT Chaganty, Y Fan, VY Zhao, N Lao, ...
arXiv preprint arXiv:2210.08726, 2022
Cited by 214 · 2022
From Language to Programs: Bridging Reinforcement Learning and Maximum Marginal Likelihood
K Guu, P Pasupat, EZ Liu, P Liang
Association for Computational Linguistics (ACL), 2017
Cited by 209 · 2017
Promptagator: Few-shot dense retrieval from 8 examples
Z Dai, VY Zhao, J Ma, Y Luan, J Ni, J Lu, A Bakalov, K Guu, KB Hall, ...
arXiv preprint arXiv:2209.11755, 2022
Cited by 191 · 2022
Reinforcement learning on web interfaces using workflow-guided exploration
EZ Liu, K Guu, P Pasupat, T Shi, P Liang
arXiv preprint arXiv:1802.08802, 2018
Cited by 191 · 2018
A Retrieve-and-Edit Framework for Predicting Structured Outputs
TB Hashimoto, K Guu, Y Oren, PS Liang
Advances in Neural Information Processing Systems, 10073-10083, 2018
Cited by 184 · 2018
Transforming question answering datasets into natural language inference datasets
D Demszky, K Guu, P Liang
arXiv preprint arXiv:1809.02922, 2018
Cited by 179 · 2018
Pretraining with contrastive sentence objectives improves discourse performance of language models
D Iter, K Guu, L Lansing, D Jurafsky
arXiv preprint arXiv:2005.10389, 2020
Cited by 87 · 2020
Towards tracing factual knowledge in language models back to the training data
E Akyürek, T Bolukbasi, F Liu, B Xiong, I Tenney, J Andreas, K Guu
arXiv preprint arXiv:2205.11482, 2022
Cited by 75 · 2022
Light timeout optimization
X Gu, K Gu, D Nulu
US Patent 8,538,596, 2013
Cited by 75 · 2013
NeurIPS 2020 EfficientQA competition: Systems, analyses and lessons learned
S Min, J Boyd-Graber, C Alberti, D Chen, E Choi, M Collins, K Guu, ...
NeurIPS 2020 Competition and Demonstration Track, 86-111, 2021
Cited by 74 · 2021
KERMIT: Generative insertion-based modeling for sequences
W Chan, N Kitaev, K Guu, M Stern, J Uszkoreit
arXiv preprint arXiv:1906.01604, 2019
Cited by 73 · 2019
Unlocking compositional generalization in pre-trained models using intermediate representations
J Herzig, P Shaw, MW Chang, K Guu, P Pasupat, Y Zhang
arXiv preprint arXiv:2104.07478, 2021
Cited by 70 · 2021
Neural data augmentation via example extrapolation
K Lee, K Guu, L He, T Dozat, HW Chung
arXiv preprint arXiv:2102.01335, 2021
Cited by 61 · 2021
Dialog inpainting: Turning documents into dialogs
Z Dai, AT Chaganty, VY Zhao, A Amini, QM Rashid, M Green, K Guu
International conference on machine learning, 4558-4586, 2022
Cited by 59 · 2022
Mapping natural language commands to web elements
P Pasupat, TS Jiang, EZ Liu, K Guu, P Liang
arXiv preprint arXiv:1808.09132, 2018
Cited by 50 · 2018
Articles 1–20