In a dramatic unfolding of internal strife, the recently released 365-page transcript of a deposition given by OpenAI co-founder Ilya Sutskever sheds light on how one of the world’s most prominent AI research companies nearly imploded. The sworn testimony, taken over nearly 10 hours in October as part of the Musk v. Altman lawsuit, exposes a chaotic period marked by leadership confusion, ideological conflict, and fundamental flaws in corporate governance.
At the heart of the turmoil was Sutskever himself, a key architect of ChatGPT and a central figure in the controversial decision to oust CEO Sam Altman in November 2023. His testimony reveals a startling level of internal disarray, in which decisions with existential consequences rested on incomplete information and emotional reasoning rather than sound judgment.
One of the most shocking claims to emerge from the deposition was the suggestion that “destroying OpenAI could be consistent with its mission.” This paradoxical statement reflects the deep philosophical divisions within the organization. OpenAI was founded to ensure that artificial general intelligence (AGI) benefits all of humanity, but that mission has often been interpreted through wildly different lenses by its leaders and board members. To some, halting or dismantling the organization was a form of safeguarding humanity from potential AI risks. To others, it was a betrayal of the very mission they sought to serve.
The deposition also highlighted serious governance issues. According to Sutskever, the board was inexperienced and ill-equipped to handle the complexities of overseeing a company at the forefront of global AI development. Rather than a diverse and seasoned panel, it was composed of individuals with limited corporate or operational experience, many of whom were deeply ideological. This lack of institutional maturity contributed to a series of rushed and poorly considered decisions.
Another contributing factor was the cult-like culture within the organization. Employees were described as intensely loyal to specific individuals rather than to the company as a whole. This created factions and echo chambers in which dissenting opinions were either ignored or suppressed. These internal dynamics made it increasingly difficult to maintain objectivity or unity when facing critical decisions.
The events leading to Altman’s abrupt firing were cloaked in secrecy and confusion. Sutskever admitted that he and others acted on allegations that were never fully substantiated; rumors and speculation were treated as credible threats, leading to a decision that nearly unraveled the company. Altman was reinstated only after immense backlash from employees, investors, and the broader tech community, but the damage to internal trust and organizational stability was significant.
Behind the scenes, a confidential 52-page document reportedly played a key role in escalating tensions. While its contents remain undisclosed to the public, it is believed to have contained a mix of concerns about Altman’s leadership, speculative risks regarding AGI development, and warnings about potential conflicts of interest. The document further polarized the board, pushing them toward a hasty and ultimately regrettable decision.
The weekend following Altman’s removal has since been described as a near-death moment for OpenAI. The overwhelming majority of the company’s roughly 770 employees signed a letter threatening to resign unless he was reinstated, investors voiced concerns about the company’s future, and key partners began reevaluating their commitments. For a moment, the very existence of OpenAI hung in the balance.
Long-standing disagreements over the direction and pace of AGI research also played a role. While some board members advocated for a cautious, safety-first approach, others—including Altman—pushed for continued innovation and deployment. This ideological rift was never adequately resolved and contributed to a volatile leadership environment.
The deposition also exposed the risks of concentrating too much authority and information in a small group. Sutskever admitted that much of the board’s decision-making relied on a narrow set of inputs, lacking a robust framework for validation or external review. This echo chamber effect amplified fears and stifled critical thinking at key moments.
In the aftermath, OpenAI has made efforts to strengthen its governance. New board members with deeper operational experience have been appointed, and there are discussions about implementing more transparent decision-making processes. However, trust within the organization remains fragile, and the scars from the crisis are still visible.
The situation at OpenAI serves as a cautionary tale for other technology companies navigating the uncharted territory of advanced AI. It underscores the importance of sound governance, clear communication, and a shared understanding of mission and values. Without these elements, even the most visionary organizations can veer dangerously off course.
Looking forward, OpenAI faces a dual challenge: regaining internal cohesion and restoring public confidence. The company must ensure that its leadership structure is resilient enough to withstand future disagreements without resorting to drastic, destabilizing actions. Transparency and accountability must become central tenets, especially as the stakes of AGI development continue to rise.
The events also raise broader questions about how mission-driven organizations should be governed. Can a nonprofit parent balance its idealism with the practical demands of operating a cutting-edge tech firm? How should ethical disagreements be resolved when they concern issues that are still largely theoretical but potentially world-altering?
Furthermore, the story invites scrutiny of the personalities at the helm. Sutskever’s own transformation—from a founding member and chief scientist to someone who voted to remove the CEO—illustrates the intense personal and professional pressures that come with stewarding one of the most consequential technologies of our time.
Ultimately, the deposition offers a rare, raw glimpse into the human side of technological innovation. It shows that even in companies built on logic and algorithms, decisions are still made by people—flawed, emotional, and often overwhelmed by the weight of their responsibilities. The future of AI may be written in code, but its trajectory will continue to be shaped by the very human dynamics of power, fear, and ambition.