Many Other Resources
Curious to learn more about AI Safety? Here are several resources to get you started.
AGISF Resources
The Artificial General Intelligence Safety Fundamentals (AGISF) website has several excellent resources, including:
AGISF AI Alignment course (whose curriculum inspired our intro fellowship!)
Newsletters
AI Alignment Newsletter by Rohin Shah
ML Safety Newsletter by Dan Hendrycks
Advice
"Getting up to speed on AI alignment" recommendations by Holden Karnofsky
FAQ: Advice for AI Alignment Researchers by Rohin Shah (DeepMind)
Beneficial AI Research Career Advice by Adam Gleave (Fund for Alignment Research; PhD from the Center for Human-Compatible AI, UC Berkeley)
Other Links
A list of resources for early alignment researchers
AI Alignment Forum: a forum where experts in the field post ideas and research results on technical and theoretical approaches to alignment
LessWrong: a forum similar to the AI Alignment Forum, with substantial overlap in content, but with a broader focus on rationality and adjacent topics
A map of the AI safety field as a whole
Fun map of the AI Safety world