Is Artificial General Intelligence too Dangerous to Build? Eliezer Yudkowsky discusses his rationale for ceasing the development of AIs more sophisticated than GPT-4

An open letter published on March 22, 2023 calls for AI labs to immediately pause, for at least six months, the training of AI systems more powerful than GPT-4. In response, Yudkowsky argues that this proposal does not go far enough to protect us from the risk of losing control of superintelligent AI. Join us for an interactive Q&A with Yudkowsky about AI safety! Dr. Mark Bailey of National Intelligence University will moderate the discussion.

Eliezer Yudkowsky is a decision theorist from the United States and leads research at the Machine Intelligence Research Institute (MIRI). He has been working on aligning Artificial General Intelligence since 2001 and is widely regarded as a founder of the field.

Dr. Mark Bailey is the Chair of the Cyber Intelligence and Data Science Department and Co-Director of the Data Science Intelligence Center at the National Intelligence University.

🔗 Date / Time / Location

  • Wednesday, April 19th, 2023, 4:00 pm - 5:30 pm EDT
  • The Rubin and Cindy Gruber Sandbox, Wimberly Library, Boca Raton, FL

A video of the talk is available on the Center for the Future Mind YouTube page.