AI Safety at Erasmus

AI Governance Fellowship

Join us for a Fellowship that provides an in-depth overview of the challenges of AI safety and its governance.

⌚ Deadline to apply: Sunday, the 4th of May
📆 Dates: Starting the 7th of May, running for 6 weeks
📍 Location: Langeveld Building, Erasmus University Woudestein Campus

About the program:

Did you know that nearly half of AI experts warn that advanced AI systems pose an existential threat to humanity? Our discussion-based AI Safety Fellowship dives into the challenges of AI ethics, governance, and technical safety—topics that shape the future of human-AI coexistence. You’ll explore real-world risks together with your peers and learn what can be done to mitigate these risks.

The course will be based on BlueDot's AI Governance course, which you can find here: https://course.aisafetyfundamentals.com/governance


More information about the existential risks of AI

Existential risk from AI is one of the most pressing issues of our time. Without adequate policy changes, research ensuring that superintelligent AI shares our values (alignment), and safeguards ensuring safe development, this risk grows. Surveyed AI experts estimate, on average, a 3% chance that AI causes human extinction by 2100. This is far higher than most estimates of existential risk from climate change, yet climate change receives heavy investment while AI safety remains highly neglected. Our reliance on AI could also lead to other catastrophic outcomes, such as mass unemployment, malicious use of AI, or rogue AIs. You can read more about these risks here: https://www.safe.ai/ai-risk

PISE AI Safety Goals

At PISE, we want to teach students about the risks of AI and equip them with the tools to do meaningful work in the field, mainly through AI governance. Our primary goal is to foster progress in AI Safety by:

  • Running fellowships where students learn about AI safety

  • Conducting research and collaborating with Erasmus University on AI Safety initiatives