OpenAI whistleblower found dead at 26 in San Francisco apartment | TechCrunch
A former OpenAI employee, Suchir Balaji, was recently found dead in his San Francisco apartment, according to the San Francisco Office of the Chief Medical Examiner. In October, the 26-year-old AI researcher raised concerns about OpenAI breaking copyright law when he was interviewed by The New York Times.
“The Office of the Chief Medical Examiner (OCME) has identified the decedent as Suchir Balaji, 26, of San Francisco. The manner of death has been determined to be suicide,” said a spokesperson in a statement to TechCrunch. “The OCME has notified the next-of-kin and has no further comment or reports for publication at this time.”
After nearly four years at OpenAI, Balaji quit the company when he came to believe the technology would bring more harm than good to society, he told The New York Times. His main concern was the way OpenAI allegedly used copyrighted data, and he believed its practices were damaging to the internet.
“We are devastated to learn of this incredibly sad news today and our hearts go out to Suchir’s loved ones during this difficult time,” said an OpenAI spokesperson in an email to TechCrunch.
Balaji was found dead in his Buchanan Street apartment on November 26, a spokesperson for the San Francisco Police Department told TechCrunch. Officers and medics were called to his residence in the city’s Lower Haight district to perform a wellness check on the former OpenAI researcher. No evidence of foul play was found during the initial investigation, according to police.
“I was at OpenAI for nearly 4 years and worked on ChatGPT for the last 1.5 of them,” said Balaji in a tweet from October. “I initially didn’t know much about copyright, fair use, etc. but became curious after seeing all the lawsuits filed against GenAI companies. When I tried to understand the issue better, I eventually came to the conclusion that fair use seems like a pretty implausible defense for a lot of generative AI products, for the basic reason that they can create substitutes that compete with the data they’re trained on.”