AI Safety and Alignment

Jeremy's thinking on AI safety, alignment problems, and the relative safety of different paths to general intelligence.

This is a synthesis page. It compiles and cross-references entries from across the wiki.

