Anthropic Trust Adds Novartis CEO to Board, Cementing Safety-First Governance

Anthropic’s Long-Term Benefit Trust has appointed Novartis chief executive Vas Narasimhan to the company’s board of directors, marking the first time a leader from the pharmaceutical sector has joined the AI firm’s governing body. With this move, directors chosen by the Trust hold three of the board’s seven seats, the largest single bloc and the strongest expression yet of a governance mechanism written into Anthropic’s founding documents.

Narasimhan officially joined the board on April 14, 2026. His arrival is a turning point in how Anthropic pairs its long-term safety commitments with direct oversight power. The board now consists of Dario Amodei, Daniela Amodei, Yasmin Razavi, Jay Kreps, Reed Hastings, Chris Liddell, and Vas Narasimhan, with Trust-appointed members Kreps, Hastings, and Narasimhan forming the board’s largest voting bloc.

Who Is Vas Narasimhan and Why He Matters for Anthropic

Narasimhan is a physician-scientist with nearly two decades of experience in one of the most heavily regulated industries in the world. At Novartis, he has overseen the development and regulatory approval of more than 35 new medicines and vaccines, guiding them from research through clinical trials and global approval processes.

For Anthropic, which is increasingly positioning its Claude models as tools for healthcare, life sciences, and other regulated domains, that background is strategically important. Narasimhan’s track record is not just about running a large company; it’s about shepherding powerful technologies from the lab into high-stakes real-world environments without losing sight of safety, compliance, and patient welfare.

On social media, Narasimhan has emphasized that in healthcare AI, speed cannot be the sole priority. He has argued that the critical questions are how systems are designed, governed, and applied – a perspective that dovetails with Anthropic’s focus on safety, alignment, and responsible deployment.

Daniela Amodei highlighted this alignment, noting that Narasimhan brings “something rare” to the board: direct responsibility for dozens of breakthrough therapies in one of the strictest regulatory environments on earth. For Anthropic, which is working to deploy increasingly capable AI systems at scale, his experience is a real-world template for “how to move fast without breaking everything.”

What the Anthropic Long-Term Benefit Trust Actually Is

The Long-Term Benefit Trust is not a conventional corporate shareholder. It is a separate legal entity created specifically to embed Anthropic’s public-benefit and safety mission into the company’s long-term governance.

Key features of the Trust include:
– It holds a special class of Anthropic stock whose sole function is to elect board members.
– Trustees do not own Anthropic equity, do not draw a salary from Anthropic, and are not selected by private investors or common shareholders.
– Instead, trustees are chosen by one another, creating a self-perpetuating body designed to be insulated from short-term market pressure.

The current trustees are:
– Neil “Buddy” Shah, from the Clinton Health Access Initiative
– Richard Fontaine, from the Center for a New American Security
– Mariano-Florentino Cuéllar, from the Carnegie Endowment for International Peace

The Trust’s formal mandate is to ensure that Anthropic balances financial performance with its public-interest mission to develop AI responsibly. In practice, that means appointing directors who can push back if short-term profit incentives begin to dominate decisions about model deployment, safety standards, or the societal impact of Anthropic’s technology.

Shah has said the Trust was explicitly looking for someone who had managed groundbreaking science in an environment where oversight, risk management, and patient outcomes are non-negotiable. Narasimhan, who has guided novel therapies through global regulators, fits that design brief.

A Structural Shift: Trust Appointees Gain Board Control

With Narasimhan’s appointment, the Trust has now placed three of seven directors:
– Jay Kreps
– Reed Hastings
– Vas Narasimhan

This trio is now the board’s largest bloc, giving the Trust’s priorities “structural weight” for the first time. The Trust can no longer be dismissed as a symbolic or purely advisory body: it now effectively shapes, and can potentially veto, strategic decisions that conflict with Anthropic’s long-term benefit goals.

This governance architecture is unusual in high-growth tech, where boards are often dominated by founders and major investors whose primary accountability is to equity value. At Anthropic, the board is now balanced – and in some respects tilted – toward directors mandated to treat safety and societal impact as core, not optional, concerns.

Why Appoint a Pharma CEO Now? The Healthcare and Life Sciences Pivot

The timing is not coincidental. Anthropic has rapidly expanded into healthcare and life sciences use cases over the past year and a half:
– In October 2025, the company launched Claude for Life Sciences, tailored for scientific research, drug discovery, and complex biomedical workflows.
– In January 2026, it rolled out Claude for Healthcare, built with HIPAA-ready infrastructure and tools for clinicians, healthcare systems, and regulatory tasks.

Anthropic is not positioning Claude as a casual assistant in these domains. It is targeting high-stakes environments: clinical decision support, trial design assistance, regulatory documentation, and scientific analysis. These use cases sit inside dense webs of regulation, liability, and ethics.

The company has also inked partnerships with major pharmaceutical and biotech players including Eli Lilly, Novo Nordisk, and Genmab. These collaborations are focused on shortening drug development cycles, improving trial design, and accelerating R&D workflows through AI-powered analysis and simulation.

In this context, adding a sitting pharma CEO brings:
– Firsthand experience working with regulators across multiple regions
– Practical knowledge of risk management in clinical and commercial settings
– A direct understanding of how to integrate advanced technologies into critical workflows without eroding trust or safety

As Anthropic’s models begin to influence decisions in healthcare systems, trial sites, and research labs, that kind of experience is less a “nice to have” and more a governance necessity.

IPO Pressure and Investor Scrutiny

Anthropic’s business has scaled at a pace that puts its governance under a spotlight. The company’s annualized revenue has surpassed 30 billion dollars, up from 9 billion at the end of 2025, driven by demand for Claude models across enterprise customers.

The company is reportedly weighing a public offering at a potential valuation of around 380 billion dollars. At those numbers, board composition is not a formality; it is a signal to public-market investors, regulators, and major institutional customers about how the company will behave once it is accountable to shareholders and quarterly expectations.

A board dominated by founders and traditional venture capital investors can raise concerns about whether long-term safety commitments will survive post-IPO market pressure. By contrast, a board anchored by Trust-appointed directors – with a prominent pharma CEO among them – sends a different message: that Anthropic is deliberately hard-wiring safety and public benefit into its corporate structure before it lists.

Why the Trust-Led Board Matters for AI Safety

The move carries implications far beyond healthcare. AI companies increasingly claim to prioritize safety, but those commitments often live in strategy decks rather than governance structures. By giving the Trust real control, Anthropic is attempting to institutionalize:
– Independent oversight on how and where powerful models are deployed
– A buffer against purely short-term revenue incentives in high-risk applications
– A board-level counterweight to an “optimize at all costs” culture

For AI used in critical areas – from healthcare and infrastructure to finance and national security – the trade-offs between speed, capability, and safety are rarely straightforward. A board with expertise in navigating those trade-offs in other regulated sectors is more likely to ask the right questions:
– What failure modes have been considered for this deployment?
– How are users and regulators supposed to verify and audit system behavior?
– What conditions should trigger rollback, suspension, or redesign of a product?

Narasimhan’s presence increases the likelihood that these questions will be framed in terms familiar to regulators and institutional partners, not only in terms of engineering ambition.

Strategic Signaling to Regulated Industries

The appointment also sends a targeted signal to buyers in heavily regulated sectors. For hospitals, insurers, pharma companies, and health systems, adopting frontier AI is not just a technical or financial decision – it is a reputational, legal, and ethical one.

By bringing on a sitting pharma CEO and giving Trust appointees control, Anthropic is telling these customers:
– It understands the environment they operate in.
– It is willing to align how it is governed with how they are regulated.
– It is not framing safety purely as marketing, but as part of its internal power structure.

In practice, this may make it easier for risk-averse institutions to justify deeper, higher-value partnerships with Anthropic, particularly in areas like clinical trial optimization, pharmacovigilance, and medical knowledge synthesis where the consequences of error are severe.

Lessons From Pharma for AI Governance

Pharmaceutical companies spend years and billions of dollars moving a single therapy from discovery to approval. Along the way, every step is stress-tested: from dosing and side effects to manufacturing quality and post-market surveillance. That process is slow, imperfect, and often criticized, but it has built a culture of structured risk management.

Translating that mindset to AI could mean:
– More rigorous pre-deployment testing and “real-world evidence” collection
– Clearer phase-like rollouts (from limited pilots to broader access)
– Stronger post-deployment monitoring for unexpected harms
– Predefined escalation and recall mechanisms when systems misbehave in high-stakes settings

Narasimhan’s background suggests that Anthropic is not only interested in building powerful models for healthcare; it is looking to import some of the discipline that has allowed life sciences companies to operate under intense regulatory oversight for decades.

A Model for Future AI Corporate Structures

Anthropic’s experiment with a Long-Term Benefit Trust and a Trust-led board may become a reference point for other AI firms wrestling with how to govern systems that could have pervasive social consequences. As models grow more capable, questions about who actually controls deployment decisions will intensify.

The company is effectively arguing that:
– Voluntary safety policies are not enough; they should be backed by binding governance.
– Public benefit and shareholder value can be co-equal priorities if the corporate structure is designed that way from the outset.
– External voices with deep experience in other high-risk, regulator-heavy industries should sit at the top table, not just on advisory councils.

If Anthropic’s approach proves compatible with rapid growth and a successful IPO, it may pressure other leading AI labs to adopt similar structures, or at least to give independent safety-oriented bodies more than symbolic influence.

Beyond Symbolism: Governance That Matches the Story

The addition of a pharma CEO to a board now anchored by Trust appointees closes a gap between Anthropic’s rhetoric and its legal reality. The company has long presented itself as a safety-first AI lab, but until now, that positioning has not been fully reflected in who actually held structural power at the board level.

For a firm preparing to enter public markets and deepen relationships in tightly regulated industries, this alignment is deliberate. Anthropic’s governance framework now more closely matches its stated mission: building and deploying powerful AI systems in a way that prioritizes long-term benefit, not just near-term returns.

Narasimhan’s appointment is therefore more than just another high-profile board pick. It marks the moment when Anthropic’s Long-Term Benefit Trust moves from design feature to active force – and when the company begins testing whether a safety-centric governance model can coexist with the pressures of being one of the most valuable AI companies in the world.