Julian Martin Eisenschlos
NLP Researcher, Google DeepMind
Verified email at google.com - Homepage
Title
Cited by
Year
TAPAS: Weakly Supervised Table Parsing via Pre-training
J Herzig, PK Nowak, T Müller, F Piccinno, JM Eisenschlos
Proceedings of ACL 2020, 2020
Cited by 486 · 2020
Gemini: a family of highly capable multimodal models
G Team, R Anil, S Borgeaud, Y Wu, JB Alayrac, J Yu, R Soricut, ...
arXiv preprint arXiv:2312.11805, 2023
Cited by 348 · 2023
Time-aware language models as temporal knowledge bases
B Dhingra, JR Cole, JM Eisenschlos, D Gillick, J Eisenstein, WW Cohen
Transactions of the Association for Computational Linguistics 10, 257-273, 2022
Cited by 159 · 2022
MultiFiT: Efficient Multi-lingual Language Model Fine-tuning
JM Eisenschlos, S Ruder, P Czapla, M Kardas, S Gugger, J Howard
Proceedings of EMNLP-IJCNLP 2019, 2019
Cited by 103 · 2019
Pix2Struct: Screenshot parsing as pretraining for visual language understanding
K Lee, M Joshi, IR Turc, H Hu, F Liu, JM Eisenschlos, U Khandelwal, ...
International Conference on Machine Learning, 18893-18912, 2023
Cited by 101 · 2023
Understanding tables with intermediate pre-training
JM Eisenschlos, S Krichene, T Müller
Findings of EMNLP 2020, 2020
Cited by 92 · 2020
Open Domain Question Answering over Tables via Dense Retrieval
J Herzig, T Müller, S Krichene, JM Eisenschlos
Proceedings of NAACL 2021, 2021
Cited by 66 · 2021
MATE: Multi-view attention for table transformer efficiency
JM Eisenschlos, M Gor, T Müller, W Cohen
Proceedings of the 2021 Conference on Empirical Methods in Natural Language …, 2021
Cited by 62 · 2021
SoftSort: A Continuous Relaxation for the argsort Operator
S Prillo, JM Eisenschlos
Proceedings of ICML 2020, 2020
Cited by 54 · 2020
DePlot: One-shot visual language reasoning by plot-to-table translation
F Liu, JM Eisenschlos, F Piccinno, S Krichene, C Pang, K Lee, M Joshi, ...
arXiv preprint arXiv:2212.10505, 2022
Cited by 35 · 2022
MatCha: Enhancing visual language pretraining with math reasoning and chart derendering
F Liu, F Piccinno, S Krichene, C Pang, K Lee, M Joshi, Y Altun, N Collier, ...
arXiv preprint arXiv:2212.09662, 2022
Cited by 29 · 2022
Fool Me Twice: Entailment from Wikipedia Gamification
JM Eisenschlos, B Dhingra, J Bulian, B Börschinger, J Boyd-Graber
Proceedings of NAACL 2021, 2021
Cited by 27 · 2021
Table-to-text generation and pre-training with TabT5
E Andrejczuk, JM Eisenschlos, F Piccinno, S Krichene, Y Altun
arXiv preprint arXiv:2210.09162, 2022
Cited by 21 · 2022
Selectively answering ambiguous questions
JR Cole, MJQ Zhang, D Gillick, JM Eisenschlos, B Dhingra, J Eisenstein
arXiv preprint arXiv:2305.14613, 2023
Cited by 15 · 2023
TAPAS at SemEval-2021 Task 9: Reasoning over tables with intermediate pre-training
T Müller, JM Eisenschlos, S Krichene
SemEval 2021, 2021
Cited by 13 · 2021
DoT: An efficient Double Transformer for NLP tasks with tables
S Krichene, T Müller, JM Eisenschlos
Findings of ACL 2021, 2021
Cited by 11 · 2021
Chain-of-table: Evolving tables in the reasoning chain for table understanding
Z Wang, H Zhang, CL Li, JM Eisenschlos, V Perot, Z Wang, L Miculicich, ...
arXiv preprint arXiv:2401.04398, 2024
Cited by 4 · 2024
Universal self-adaptive prompting
X Wan, R Sun, H Nakhost, H Dai, JM Eisenschlos, SO Arik, T Pfister
arXiv preprint arXiv:2305.14926, 2023
Cited by 4 · 2023
Leveraging data recasting to enhance tabular reasoning
A Jena, V Gupta, M Shrivastava, JM Eisenschlos
arXiv preprint arXiv:2211.12641, 2022
Cited by 4 · 2022
Do ever larger octopi still amplify reporting biases? Evidence from judgments of typical colour
F Liu, JM Eisenschlos, JR Cole, N Collier
arXiv preprint arXiv:2209.12786, 2022
Cited by 4 · 2022
Articles 1–20