Leopold Aschenbrenner

Born: 2001 or 2002 (age 23–24), Germany
Education: John F. Kennedy School; Columbia University
Occupation(s): AI researcher, investor
Employer: OpenAI (2023–2024)
Notable work: Situational Awareness

Leopold Aschenbrenner (born 2001 or 2002[1]) is a German artificial intelligence (AI) researcher and investor. He was part of OpenAI's "Superalignment" team before he was fired in April 2024 over an alleged information leak, a characterization Aschenbrenner disputes. He is the author of "Situational Awareness", a widely read essay on the emergence of artificial general intelligence (AGI) and the security risks it poses.[2] He is also the founder and chief investment officer (CIO) of Situational Awareness LP, a hedge fund that invests in companies developing AI technology.[3]

Early life

Aschenbrenner was born in Germany to parents who were both doctors.[4] He was educated at the John F. Kennedy School in Berlin and graduated as valedictorian from Columbia University in 2021, at age 19, with a major in economics and mathematics-statistics.[1][5][6] While at Columbia, he co-founded the university's effective altruism (EA) chapter.[4] He conducted research for the Global Priorities Institute at the University of Oxford and co-authored a 2024 working paper with Philip Trammell of Oxford. Aschenbrenner was a member of the FTX Future Fund team, an EA philanthropic initiative created by the FTX Foundation,[7] from February 2022 until he resigned shortly before FTX's bankruptcy in November of that year.[8][9]

Career

OpenAI

Aschenbrenner joined OpenAI in 2023 as a member of its "Superalignment" team, headed by Jan Leike and Ilya Sutskever, which pursued technical methods for steering and controlling AI systems smarter than humans.[10] As part of the team, he co-authored the paper "Weak-to-Strong Generalization",[11] which was presented at the 2024 International Conference on Machine Learning (ICML).[12]

In April 2023, a hacker gained access to OpenAI's internal messaging system and stole information, an incident that OpenAI did not disclose publicly.[13] Aschenbrenner subsequently wrote a memo to OpenAI's board of directors warning of possible industrial espionage by Chinese and other foreign actors and arguing that the company's security was insufficient. According to Aschenbrenner, the memo created tension between the board and the leadership over security, and he received a warning from human resources. OpenAI fired him in April 2024 over an alleged information leak, which Aschenbrenner said concerned a benign brainstorming document he had shared with three external researchers for feedback. OpenAI stated that the firing was unrelated to the security memo, while Aschenbrenner said he was told explicitly at the time that the memo was a major reason.[14][15] The Superalignment team was dissolved a month later, following the departures of other researchers from OpenAI, including Sutskever and Leike.[16]

Investment firm

After publishing "Situational Awareness" in 2024, Aschenbrenner founded Situational Awareness LP, an AI-focused hedge fund named after the essay and backed by Patrick and John Collison, Daniel Gross, and Nat Friedman.[17][18] As of 2025, the fund manages over $1.5 billion.[3]

Situational Awareness essay

In 2024, Aschenbrenner published a 165-page essay titled "Situational Awareness: The Decade Ahead".[19] Its sections predict the emergence of AGI, sketch a path from AGI to superintelligence, describe four risks to humanity, outline how humans might manage superintelligent machines, and articulate the principles of an "AGI realism". He warns in particular that the United States must defend against the use of AI technologies by countries such as Russia and China.[17] Aschenbrenner argues that by 2027 AI systems will be capable of conducting their own AI research: hundreds of millions of AGIs could then automate the field, compressing a decade of algorithmic progress into less than a year and producing a "runaway superintelligence".[20]

Personal life

As of 2025, Aschenbrenner is engaged to Avital Balwit, chief of staff to the CEO of Anthropic.[4] He lives in San Francisco.[21]

References

  1. ^ a b Nærland, Mina Hauge; Bjørkeng, Per Kristian (2024-06-24). "22-åring har satt fyr på Silicon Valley" [22-year-old has set Silicon Valley on fire]. Aftenposten (in Norwegian). ISSN 0804-3116.
  2. ^ "Introduction - SITUATIONAL AWARENESS: The Decade Ahead". situational-awareness.ai. Retrieved 2025-08-28.
  3. ^ a b Rudegeair, Peter (2025-08-10). "Billions Flow to New Hedge Funds Focused on AI-Related Bets". The Wall Street Journal. Retrieved 2025-08-11.
  4. ^ a b c Goldman, Sharon (2025-10-08). "How former OpenAI researcher Leopold Aschenbrenner turned a viral AI prophecy into profit, with a $1.5 billion hedge fund and outsize influence from Silicon Valley to D.C." Fortune. Retrieved 2025-10-09.
  5. ^ "Die Jugend forscht Landessieger 2016 stehen fest" [The 2016 Youth Research State Winners have been announced] (in German). Jugend forscht. 2016-03-16.
  6. ^ "Columbia College Announces 2021 Valedictorian and Salutatorian". Columbia College. April 9, 2021. Retrieved September 23, 2025.
  7. ^ Allen, Mike (2024-06-23). "10 takeaways: AI from now to 2034". Axios. Retrieved 2024-12-27.
  8. ^ Howcroft, Elizabeth (2023-04-06). "Collapse of FTX deprives academics of grants, stokes fears of forced repayment". Reuters.
  9. ^ Pahwa, Nitish (2023-12-13). "The Money Is Oregone". Slate. ISSN 1091-2339.
  10. ^ "Ex-OpenAI employee writes AI essay: War with China, resources and robots". heise online. 2024-07-02. Retrieved 2024-12-28.
  11. ^ Burns, C.; Izmailov, P.; Kirchner, J.H.; Baker, B.; Gao, L.; Aschenbrenner, L.; Chen, Y.; Ecoffet, A.; Joglekar, M.; Leike, J.; Sutskever, I.; Wu, J. (2023-12-14). "Weak-to-strong generalization: Eliciting strong capabilities with weak supervision". arXiv:2312.09390 [cs.CL].
  12. ^ "Oral - Weak-to-Strong Generalization: Eliciting Strong Capabilities With Weak Supervision". ICML. 2024. Retrieved October 1, 2025.
  13. ^ Metz, Cade (2024-07-04). "A Hacker Stole OpenAI Secrets, Raising Fears That China Could, Too". The New York Times. Archived from the original on 2024-12-26. Retrieved 2024-12-27.
  14. ^ Altchek, Ana. "Ex-OpenAI employee speaks out about why he was fired: 'I ruffled some feathers'". Business Insider. Retrieved 2024-12-28.
  15. ^ "Ex-OpenAI Employee Reveals Reason For Getting Fired, "Security Memo Was..."". NDTV. June 6, 2024. Retrieved 2024-12-28.
  16. ^ Field, Hayden (2024-05-17). "OpenAI dissolves team focused on long-term AI risks, less than one year after announcing it". CNBC. Retrieved 2024-12-28.
  17. ^ a b Naughton, John (2024-06-15). "How's this for a bombshell – the US must make AI its next Manhattan Project". The Observer. ISSN 0029-7712. Retrieved 2024-12-27.
  18. ^ "Post Script: Patrick Collison the Swiss dictator; McKillen-Bono whiskey lands ex-Bank of Ireland governor". www.businesspost.ie. Retrieved 2024-12-27.
  19. ^ Aschenbrenner, Leopold (June 2024). "Situational Awareness: The Decade Ahead". Situational-Awareness. Retrieved 2025-09-17.
  20. ^ Toews, Rob (2024-11-05). "AI That Can Invent AI Is Coming. Buckle Up". Forbes. Retrieved 2024-12-27.
  21. ^ Naughton, John (2024-06-15). "How's this for a bombshell – the US must make AI its next Manhattan Project". The Guardian. ISSN 0261-3077. Retrieved 2025-10-09.