
Ex-OpenAI Employee Reveals Reasons Behind His Termination: ‘It Was Totally Normal To Share Safety Ideas’


Delhi, India

OpenAI terminated Leopold Aschenbrenner for allegedly leaking information. (Photo Credit: X)


The former employee said that a brainstorming document he shared with external researchers became the reason for his dismissal.

OpenAI recently terminated Leopold Aschenbrenner, a close associate of the company’s chief scientist Ilya Sutskever, for allegedly leaking information. The exact details of the alleged leak remain unclear, but in a recent podcast with Dwarkesh Patel, Aschenbrenner said that a brainstorming document he shared with external researchers became the reason for his dismissal.

Aschenbrenner, who graduated from Columbia University at the age of 19, disclosed that the document concerned OpenAI’s safety measures, which he described as “egregiously insufficient” to protect against the theft of “key algorithmic secrets from foreign actors.”

He explained, “Sometime last year, I had written a brainstorming document on preparedness, safety, and security measures needed in the future on the path to AGI. I shared that with three external researchers for feedback. That’s it, that’s the leak.

“It was totally normal at OpenAI at the time to share safety ideas with external researchers for feedback. It happened all the time. The doc was my idea. The internal version had a reference to a future cluster, but I redacted it in the external copy,” he said.

“When I pressed them to specify what confidential information was in this document, they came back with a line about planning for AGI by 2027-2028 and not setting timelines for preparedness. I wrote an internal memo about OpenAI’s security, which I thought was egregiously insufficient to protect against the theft of model weights or key algorithmic secrets from foreign actors. I shared this memo with a few colleagues and a couple of members of leadership, who mostly said it was helpful,” Aschenbrenner added.

Aschenbrenner further explained that a few weeks after he shared the memo, a major security incident took place at OpenAI. In response, he decided to share the memo with a couple of board members, but he then received a warning from HR for doing so. According to him, HR even suggested that his concerns about possible undercover activities by the Chinese Communist Party (CCP) were racist and unnecessary.

“When I was fired, it was made very explicit that the security memo was a major reason for my being fired. They’d gone through all of my digital artifacts from my time at OpenAI and that’s when they found the leak. There were a couple of other allegations they threw in,” the former OpenAI employee stated.

However, an OpenAI spokesperson told Business Insider that the issues Aschenbrenner raised with the Board of Directors did not directly result in his termination. The spokesperson also said the company disagrees with many of his claims.

First published: June 06, 2024, 18:15 IST
Last updated: June 06, 2024, 18:43 IST