Jan Leike

Jan Leike is an AI alignment researcher who has worked at DeepMind and OpenAI.

Education

Leike earned his undergraduate degree in Freiburg, Germany, and a master's degree in computer science before pursuing a PhD in machine learning at the Australian National University under the supervision of Marcus Hutter.[1]

Career

Leike held a six-month postdoctoral fellowship at the Future of Humanity Institute before joining DeepMind, where he focused on empirical AI safety research[1] and collaborated with Shane Legg.[2]

OpenAI

In 2021, Leike joined OpenAI.[2] In June 2023, he and Ilya Sutskever became co-leaders of the newly announced "superalignment" project, which aimed to solve, within four years, the problem of aligning future artificial superintelligences to ensure their safety. The project's approach was to use relatively advanced AI systems to automate alignment research. At the time, Sutskever was OpenAI's Chief Scientist and Leike was its Head of Alignment.[3][2]

In May 2024, Leike announced his resignation from OpenAI, following the departures of Ilya Sutskever, Daniel Kokotajlo, and several other AI safety employees from the company. Leike wrote that "Over the past years, safety culture and processes have taken a backseat to shiny products", and that he had "gradually lost trust" in OpenAI's leadership.[4][5][6]

Recognition

Leike was featured in the 2023 TIME100 AI, Time's list of the 100 most influential people in artificial intelligence.[2] He was also included in Vox's 2023 Future Perfect 50, a list highlighting individuals working on the world's biggest problems.[7]

References

  1. "An AI safety researcher on how to become an AI safety researcher". 80,000 Hours. Retrieved 2024-05-19.
  2. "TIME100 AI 2023: Jan Leike". Time. 2023-09-07. Retrieved 2024-05-19.
  3. Leike, Jan; Sutskever, Ilya (2023-07-05). "Introducing Superalignment". OpenAI.
  4. Samuel, Sigal (2024-05-17). "'I lost trust': Why the OpenAI team in charge of safeguarding humanity imploded". Vox. Retrieved 2024-05-20.
  5. Bastian, Matthias (2024-05-18). "OpenAI's AI safety teams lost at least seven researchers in recent months". The Decoder. Retrieved 2024-05-20.
  6. Milmo, Dan (2024-05-18). "OpenAI putting 'shiny products' above safety, says departing researcher". The Observer. ISSN 0029-7712. Retrieved 2024-05-20.
  7. Piper, Kelsey (2023-11-29). "OpenAI's Jan Leike is trying to ensure superintelligent AI remains on our side". Vox. Retrieved 2024-05-19.

External links