About Me
I want to help others as much as possible. Currently, that means (1) doing AI research and (2) maximizing the expected counterfactual impact of my donations. I'm particularly concerned about the importance, neglectedness, and tractability of artificial sentience.
I'm studying for a Bachelor's degree in Computer Science at the Georgia Institute of Technology. You can find my CV here.
Research
My goal is to become a Principal Scientist of Scalable Alignment. Currently, I'm interested in research on goal misgeneralization, out-of-distribution detection, robustness, AI calibration, AI honesty, activation steering, proof-of-learning, and the science of deep learning.
I'm especially excited about OpenAI's Superalignment team. If you are hiring for similar work and think I would be a good candidate, please reach out!
Contact Information
You can contact me via email ([email protected]) or book a meeting with me on Calendly.
Feedback
If you have any feedback for me, I would love to receive it here. You can submit feedback anonymously or provide your contact information. I especially value strong, thoughtful criticism.