Llama: Open and efficient foundation language models H Touvron, T Lavril, G Izacard, X Martinet, MA Lachaux, T Lacroix, ... arXiv preprint arXiv:2302.13971, 2023 | 14261 | 2023 |
Llama 2: Open foundation and fine-tuned chat models H Touvron, L Martin, K Stone, P Albert, A Almahairi, Y Babaei, ... arXiv preprint arXiv:2307.09288, 2023 | 13666 | 2023 |
The llama 3 herd of models A Grattafiori, A Dubey, A Jauhri, A Pandey, A Kadian, A Al-Dahle, ... arXiv preprint arXiv:2407.21783, 2024 | 3718 | 2024 |
Hypertree proof search for neural theorem proving G Lample, T Lacroix, MA Lachaux, A Rodriguez, A Hayat, T Lavril, ... Advances in neural information processing systems 35, 26337-26349, 2022 | 149 | 2022 |
Polygames: Improved zero learning T Cazenave, YC Chen, GW Chen, SY Chen, XD Chiu, J Dehos, M Elsa, ... ICGA Journal 42 (4), 244-256, 2021 | 58 | 2021 |
Worldsense: A synthetic benchmark for grounded reasoning in large language models Y Benchekroun, M Dervishi, M Ibrahim, JB Gaya, X Martinet, G Mialon, ... arXiv preprint arXiv:2311.15930, 2023 | 7 | 2023 |