Is Artificial General Intelligence too Dangerous to Build? Eliezer Yudkowsky discusses his rationale for ceasing the development of AIs more sophisticated than GPT-4
An open letter published on March 22, 2023 calls for AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4. In response, Yudkowsky argues that this proposal does not go far enough to protect us from the risks of losing control of superintelligent AI. Join us for an interactive Q&A with Yudkowsky about AI safety! Dr. Mark Bailey of National Intelligence University will moderate the discussion.