Christopher Olah
Anthropic
Verified email at google.com · Homepage
Title · Cited by · Year
TensorFlow: Large-scale machine learning on heterogeneous systems
M Abadi, A Agarwal, P Barham, E Brevdo, Z Chen, C Citro, GS Corrado, ...
Cited by 59698* · 2015
Conditional image synthesis with auxiliary classifier GANs
A Odena, C Olah, J Shlens
International conference on machine learning, 2642-2651, 2017
Cited by 4436 · 2017
Concrete problems in AI safety
D Amodei, C Olah, J Steinhardt, P Christiano, J Schulman, D Mané
arXiv preprint arXiv:1606.06565, 2016
Cited by 3267 · 2016
Understanding LSTM Networks
C Olah
colah.github.io, 2015
Cited by 2972* · 2015
Deconvolution and Checkerboard Artifacts
A Odena, V Dumoulin, C Olah
Distill, 2016
Cited by 1979 · 2016
Training a helpful and harmless assistant with reinforcement learning from human feedback
Y Bai, A Jones, K Ndousse, A Askell, A Chen, N DasSarma, D Drain, ...
arXiv preprint arXiv:2204.05862, 2022
Cited by 1887 · 2022
Feature visualization
C Olah, A Mordvintsev, L Schubert
Distill 2 (11), e7, 2017
Cited by 1564* · 2017
Constitutional AI: Harmlessness from AI feedback
Y Bai, S Kadavath, S Kundu, A Askell, J Kernion, A Jones, A Chen, ...
arXiv preprint arXiv:2212.08073, 2022
Cited by 1404 · 2022
Inceptionism: Going deeper into neural networks
A Mordvintsev, C Olah, M Tyka
Google Research Blog 20 (14), 5, 2015
Cited by 1109* · 2015
The building blocks of interpretability
C Olah, A Satyanarayan, I Johnson, S Carter, L Schubert, K Ye, ...
Distill 3 (3), e10, 2018
Cited by 895* · 2018
A mathematical framework for transformer circuits
N Elhage, N Nanda, C Olsson, T Henighan, N Joseph, B Mann, A Askell, ...
Transformer Circuits Thread 1 (1), 12, 2021
Cited by 676* · 2021
In-context learning and induction heads
C Olsson, N Elhage, N Nanda, N Joseph, N DasSarma, T Henighan, ...
arXiv preprint arXiv:2209.11895, 2022
Cited by 651* · 2022
Document embedding with paragraph vectors
AM Dai, C Olah, QV Le
arXiv preprint arXiv:1507.07998, 2015
Cited by 589 · 2015
Red teaming language models to reduce harms: Methods, scaling behaviors, and lessons learned
D Ganguli, L Lovitt, J Kernion, A Askell, Y Bai, S Kadavath, B Mann, ...
arXiv preprint arXiv:2209.07858, 2022
Cited by 528 · 2022
Zoom in: An introduction to circuits
C Olah, N Cammarata, L Schubert, G Goh, M Petrov, S Carter
Distill 5 (3), e00024.001, 2020
Cited by 463 · 2020
A general language assistant as a laboratory for alignment
A Askell, Y Bai, A Chen, D Drain, D Ganguli, T Henighan, A Jones, ...
arXiv preprint arXiv:2112.00861, 2021
Cited by 401 · 2021
Language models (mostly) know what they know
S Kadavath, T Conerly, A Askell, T Henighan, D Drain, E Perez, ...
arXiv preprint arXiv:2207.05221, 2022
Cited by 394 · 2022
Understanding LSTM networks, 2015
C Olah
Cited by 388 · 2015
Multimodal neurons in artificial neural networks
G Goh, N Cammarata, C Voss, S Carter, M Petrov, L Schubert, A Radford, ...
Distill 6 (3), e30, 2021
Cited by 387 · 2021
Towards monosemanticity: Decomposing language models with dictionary learning
T Bricken, A Templeton, J Batson, B Chen, A Jermyn, T Conerly, N Turner, ...
Transformer Circuits Thread 2, 2023
Cited by 357 · 2023
Articles 1–20