Event details
Sep
9
Princeton AI Alignment Introduction Event
The coming decades of AI development present one of the most pressing problems of our time. At the core of this concern is the alignment problem, which could have dramatic ramifications for society.
Richard Ngo, an AI governance researcher at OpenAI, previously a research engineer at DeepMind with a PhD in machine learning from Cambridge, will give us an introduction to the alignment problem: beyond making AI systems capable of sophisticated, complex behaviors with the potential for huge impact, how do we ensure that they follow humans' intended goals, preferences, and ethical principles?
We will also share information about Princeton AI Alignment's programming for the semester, which includes introductory seminars on AI alignment and AI governance, as well as weekly paper reading groups and socials.