Chufan Shi
Verified email at mails.tsinghua.edu.cn
Title · Cited by · Year
A thorough examination of decoding methods in the era of LLMs
C Shi, H Yang, D Cai, Z Zhang, Y Wang, Y Yang, W Lam
arXiv preprint arXiv:2402.06925, 2024
13 · 2024
Specialist or generalist? Instruction tuning for specific NLP tasks
C Shi, Y Su, C Yang, Y Yang, D Cai
EMNLP 2023, 2023
13* · 2023
InsCL: A data-efficient continual learning paradigm for fine-tuning large language models with instructions
Y Wang, Y Liu, C Shi, H Li, C Chen, H Lu, Y Yang
NAACL 2024, 2024
9 · 2024
Assisting language learners: Automated trans-lingual definition generation via contrastive prompt learning
H Zhang, D Li, Y Li, C Shang, C Shi, Y Jiang
ACL 2023 BEA, 2023
9 · 2023
IIGroup submissions for WMT22 word-level autocompletion task
C Yang, S Li, C Shi, Y Yang
WMT 2022, 2022
5 · 2022
Hint-enhanced in-context learning wakes large language models up for knowledge-intensive tasks
Y Wang, Q Guo, X Ni, C Shi, L Liu, H Jiang, Y Yang
ICASSP 2024, 2024
2 · 2024
LiFi: Lightweight controlled text generation with fine-grained control codes
C Shi, D Cai, Y Yang
arXiv preprint arXiv:2402.06930, 2024
2 · 2024
ChartMimic: Evaluating LMM's Cross-Modal Reasoning Capability via Chart-to-Code Generation
C Shi, C Yang, Y Liu, B Shui, J Wang, M Jing, L Xu, X Zhu, S Li, Y Zhang, ...
arXiv preprint arXiv:2406.09961, 2024
1 · 2024
Community Tour: An expandable knowledge exploration system for urban migrant children
B Shui, H Guo, H Li, C Shi, X Nie
IDC 2023, 2023
1 · 2023
ToolBeHonest: A Multi-level Hallucination Diagnostic Benchmark for Tool-Augmented Large Language Models
Y Zhang, J Chen, J Wang, Y Liu, C Yang, C Shi, X Zhu, Z Lin, H Wan, ...
arXiv preprint arXiv:2406.20015, 2024
2024
HoLLMwood: Unleashing the Creativity of Large Language Models in Screenwriting via Role Playing
J Chen, X Zhu, C Yang, C Shi, Y Xi, Y Zhang, J Wang, J Pu, R Zhang, ...
arXiv preprint arXiv:2406.11683, 2024
2024
ContextVis: Envision Contextual Learning and Interaction with Generative Models
B Shui, C Shi, Y Yang, X Nie
HCII 2024, 2024
2024
Unchosen Experts Can Contribute Too: Unleashing MoE Models' Power by Self-Contrast
C Shi, C Yang, X Zhu, J Wang, T Wu, S Li, D Cai, Y Yang, Y Meng
arXiv preprint arXiv:2405.14507, 2024
2024
Metro Re-illustrated: Incremental Generation of Stylized Paintings Using Neural Networks
B Shui, C Shi, X Nie
SIGGRAPH 2023 Posters, 2023
2023
Articles 1–14