The following was copied from AI Safety - Cause Prioritization Wiki. Feel free to edit at will.
Summary
- Importance: <importance rating>
- Tractability: <tractability rating>
- Neglectedness: <neglectedness rating>
AI Impacts is an informational site that “aims to improve our understanding of the likely impacts of human-level artificial intelligence”.
The main EA organization working in this field seems to be MIRI, which does relevant math research and sponsors forecasting projects like AI Impacts. FHI might also have more information.
Superintelligence: Paths, Dangers, Strategies by Nick Bostrom lays out a foundation for navigating scenarios where machine brains surpass human brains in general intelligence.
How large do you think the first Strong AI will be (lines of code, servers, etc.)? • /r/artificial
Artificial Intelligence • /r/artificial
Importance
FIXME
Tractability
FIXME
Neglectedness
FIXME
See also
External links
- General resources on AI safety
- Artificial General Intelligence: Coordination & Great Powers
- AI alignment landscape by Paul Christiano
- Some relevant timelines:
- Timeline of AI safety
- Timeline of Machine Intelligence Research Institute
- Timeline of Center for Applied Rationality
- Timeline of Berkeley Existential Risk Initiative
- Timeline of Future of Humanity Institute
- Timeline of Foundational Research Institute
- Timeline of OpenAI
- CarlShulman comments on How does MIRI Know it Has a Medium Probability of Success?
- Don’t Worry, Smart Machines Will Take Us With Them
- Jeff Kaufman’s posts on AI safety:
- “Looking into AI Risk”
- “Superintelligence Risk Project”
- “Conversation with Dario Amodei”
- “Conversation with Michael Littman”
- “Superintelligence Risk Project Update”
- Artificial Intelligence Index - 2018 annual report
- AI Safety Ideas