Smarter-than-human artificial intelligence could be just around the corner, with AI companies racing to build these systems as quickly as possible. Meanwhile, leading researchers have warned that superhuman AI could cause global catastrophe. A 2023 statement signed by thousands of AI experts declared that “mitigating the risk of extinction from AI should be a global priority”.
It’s a bad idea to build something smarter than you if you don’t know how to control it. We need guardrails to prevent dangerous superhuman AI – and fast.
Join us for this online workshop to learn more about the dangers of AI and the policy solutions that could keep us safe – and how to pressure our elected officials to act on this crucial problem.