OpenAI is facing mounting pressure to prove that it is not hiding AI risks after whistleblowers alleged to the US Securities and Exchange Commission (SEC) that the AI company's non-disclosure agreements had illegally silenced employees from disclosing major safety concerns to lawmakers.
In a letter to OpenAI yesterday, Senator Chuck Grassley (R-Iowa) demanded evidence that OpenAI is no longer requiring agreements that could be "stifling" its "employees from making protected disclosures to government regulators."
Specifically, Grassley asked OpenAI to produce current employment, severance, non-disparagement, and non-disclosure agreements to reassure Congress that contracts don't discourage disclosures. That's critical, Grassley said, so that it will be possible to rely on whistleblowers exposing emerging threats to help shape effective AI policies safeguarding against existential AI risks as technologies advance.
Grassley has apparently requested these records twice without any response from OpenAI, his letter said. And so far, OpenAI has not responded to the most recent request to send documents, Grassley's spokesperson, Clare Slattery, told The Washington Post.
"It's not enough to simply claim you've made 'updates,'" Grassley said in a statement provided to Ars. "The proof is in the pudding. Altman needs to provide records and responses to my oversight requests so Congress can accurately assess whether OpenAI is adequately protecting its employees and users."
In addition to requesting OpenAI's recently updated employee agreements, Grassley pushed OpenAI to be more transparent about the total number of requests it has received from employees seeking to make federal disclosures since 2023. The senator wants to know what information employees wanted to disclose to officials and whether OpenAI actually approved their requests.
Along the same lines, Grassley asked OpenAI to confirm how many investigations the SEC has opened into OpenAI since 2023.
Together, these documents would shed light on whether OpenAI employees are potentially still being silenced from making federal disclosures, what kinds of disclosures OpenAI denies, and how closely the SEC is monitoring OpenAI's seeming efforts to hide safety risks.
"It is crucial OpenAI ensure its employees can provide protected disclosures without illegal restrictions," Grassley wrote in his letter.
He has requested a response from OpenAI by August 15 so that "Congress may conduct objective and independent oversight on OpenAI's safety protocols and NDAs."
OpenAI did not immediately respond to Ars' request for comment.
On X, Altman wrote that OpenAI has taken steps to increase transparency, including "working with the US AI Safety Institute on an agreement where we would provide early access to our next foundation model so that we can work together to push forward the science of AI evaluations." He also confirmed that OpenAI wants "current and former employees to be able to raise concerns and feel comfortable doing so."
"This is crucial for any company, but for us especially and an important part of our safety plan," Altman wrote. "In May, we voided non-disparagement terms for current and former employees and provisions that gave OpenAI the right (although it was never used) to cancel vested equity. We have worked hard to make it right."
In July, whistleblowers told the SEC that OpenAI should be required to produce not just current employee contracts, but all contracts that contained a non-disclosure agreement, to ensure that OpenAI hasn't been obscuring a history or current practice of hiding AI safety risks. They want all current and former employees to be notified of any contract that included an illegal NDA, and for OpenAI to be fined for each illegal contract.