Sheng Shen
Email verified at berkeley.edu - Homepage
Title · Cited by · Year
The llama 3 herd of models
A Grattafiori, A Dubey, A Jauhri, A Pandey, A Kadian, A Al-Dahle, ...
arXiv preprint arXiv:2407.21783, 2024
Cited by 3667 · 2024
Multitask prompted training enables zero-shot task generalization
V Sanh, A Webson, C Raffel, SH Bach, L Sutawika, Z Alyafeai, A Chaffin, ...
ICLR 2022, 2021
Cited by 1846 · 2021
Bloom: A 176b-parameter open-access multilingual language model
T Le Scao, A Fan, C Akiki, E Pavlick, S Ilić, D Hesslow, R Castagné, ...
Cited by 1814 · 2023
Crosslingual generalization through multitask finetuning
N Muennighoff, T Wang, L Sutawika, A Roberts, S Biderman, TL Scao, ...
ACL 2023, 2022
Cited by 732 · 2022
Q-bert: Hessian based ultra low precision quantization of bert
S Shen, Z Dong, J Ye, L Ma, Z Yao, A Gholami, MW Mahoney, K Keutzer
AAAI 2020, 2019
Cited by 627 · 2019
How Much Can CLIP Benefit Vision-and-Language Tasks?
S Shen*, LH Li*, H Tan, M Bansal, A Rohrbach, KW Chang, Z Yao, ...
ICLR 2022, 2021
Cited by 466 · 2021
Agentbench: Evaluating llms as agents
X Liu, H Yu, H Zhang, Y Xu, X Lei, H Lai, Y Gu, H Ding, K Men, K Yang, ...
arXiv preprint arXiv:2308.03688, 2023
Cited by 447* · 2023
Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers
Z Li*, E Wallace*, S Shen*, K Lin*, K Keutzer, D Klein, JE Gonzalez
ICML 2020, 2020
Cited by 345 · 2020
Llavanext: Improved reasoning, ocr, and world knowledge
H Liu, C Li, Y Li, B Li, Y Zhang, S Shen, YJ Lee
Cited by 331 · 2024
ADAHESSIAN: An Adaptive Second Order Optimizer for Machine Learning
Z Yao, A Gholami, S Shen, K Keutzer, MW Mahoney
AAAI 2021, 2020
Cited by 309 · 2020
Aligning large multimodal models with factually augmented rlhf
Z Sun*, S Shen*, S Cao*, H Liu, C Li, Y Shen, C Gan, LY Gui, YX Wang, ...
arXiv preprint arXiv:2309.14525, 2023
Cited by 279 · 2023
SqueezeLLM: Dense-and-Sparse Quantization
S Kim*, C Hooper*, A Gholami*, Z Dong, X Li, S Shen, MW Mahoney, ...
arXiv preprint arXiv:2306.07629, 2023
Cited by 201 · 2023
Poisoning Language Models During Instruction Tuning
A Wan*, E Wallace*, S Shen, D Klein
ICML 2023, 2023
Cited by 186 · 2023
Learned token pruning for transformers
S Kim*, S Shen*, D Thorsley, A Gholami, W Kwon, J Hassoun, K Keutzer
KDD 2022, 2021
Cited by 173 · 2021
Raft: Adapting language model to domain specific rag
T Zhang, SG Patil, N Jain, S Shen, M Zaharia, I Stoica, JE Gonzalez
First Conference on Language Modeling, 2024
Cited by 160 · 2024
What Language Model to Train if You Have One Million GPU Hours?
T Le Scao, T Wang, D Hesslow, L Saulnier, S Bekman, MS Bari, ...
EMNLP 2022, 2022
Cited by 122 · 2022
An annotated dataset of literary entities
D Bamman, S Popat, S Shen
NAACL 2019, 2019
Cited by 120 · 2019
K-lite: Learning transferable visual models with external knowledge
S Shen, C Li, X Hu, Y Xie, J Yang, P Zhang, A Rohrbach, Z Gan, L Wang, ...
NeurIPS 2022, 2022
Cited by 103 · 2022
Powernorm: Rethinking batch normalization in transformers
S Shen, Z Yao, A Gholami, M Mahoney, K Keutzer
ICML 2020, 2020
Cited by 100 · 2020
Scaling Vision-Language Models with Sparse Mixture of Experts
S Shen, Z Yao, C Li, T Darrell, K Keutzer, Y He
EMNLP 2023 Findings, 2023
Cited by 92 · 2023
Articles 1–20