Let’s make sure AI development happens safely

AI Safety Würzburg is a student-led initiative dedicated to reducing the risks posed by advanced AI systems. We aim to empower students and academics in Würzburg to address this pressing problem. Together, we want to learn about the risks of AI, develop relevant skills, and contribute to beneficial AI outcomes.

We do so by:

  • Educating ourselves through programs, hackathons, research, and discussions.

  • Connecting with a worldwide community of people dedicated to this problem.

  • Supporting each other in making long-term plans to contribute to AI safety through our careers, engagement, or advocacy.

Every semester, we run an introductory course on AI safety based on the AI Safety Fundamentals curriculum developed by OpenAI researcher Richard Ngo. More information can be found here:

Contact AI Safety Würzburg

What we do

AI Safety

AI is advancing rapidly and brings huge potential for positive change. However, in a recent survey, 48% of AI experts estimated the risk of human extinction from AI at greater than 10% (Grace et al., 2022). AI could be used to develop bioweapons, deploy hazardous malware, or empower oppressive regimes. Companies are investing billions of dollars and racing to deploy frontier models, even though the underlying technology remains largely a black box, with no rigorous theory of how to make these systems safe. We believe there are still fundamental questions to answer and technical challenges to address to ensure that advanced AI systems benefit, rather than harm, humanity.

We are committed to reducing catastrophic risks from advanced AI systems.


If this sounds interesting to you, join us at one of our regular events.