Google is throwing its weight behind a multibillion‑dollar AI infrastructure push in Texas, backing a massive data center campus leased by Anthropic as competition for compute in the United States intensifies.
The facility, operated by Nexus Data Centers, is expected to deliver roughly 500 megawatts of power capacity by the end of 2026 in its first phase, with potential expansion to as much as 7.7 gigawatts across the U.S. over time. The initial build in Texas alone is projected to cost more than $5 billion, according to people familiar with the project.
Google’s role goes beyond simply providing cloud services. The company is expected to extend construction loans to support the buildout, while a syndicate of banks is vying to organize additional financing by mid‑year. Early‑stage debt was already secured from Eagle Point, helping Nexus move from planning into active construction.
Anthropic has signed a long‑term lease on the sprawling 2,800‑acre site, where groundwork is already underway. The campus is being designed as a high‑density AI hub, aimed at hosting thousands of specialized accelerators and tailored for large‑scale model training and inference. With its initial 500 MW target and enormous expansion headroom, the project underscores how AI leaders are pivoting to secure dedicated, long‑lived infrastructure rather than competing solely for shared cloud capacity.
This facility is also the latest expression of the deepening strategic partnership between Google and Anthropic. In October 2025, Anthropic announced it would significantly scale up its use of Google Cloud hardware and services, including a plan to tap into as many as 1 million Tensor Processing Units (TPUs) over time to train and run Claude models. By anchoring a major physical campus with a close cloud collaborator, Anthropic is effectively locking in a stable foundation for its next generations of AI systems.
The Texas location is not just about cheap land and access to fiber. The campus sits near major natural gas pipelines, giving the operator the option to install on‑site gas turbines as a dedicated power source. That could reduce exposure to grid instability, outages, and local capacity constraints, which become critical issues when AI clusters require continuous, high‑reliability power. Hybrid models that combine grid power with on‑site generation are increasingly attractive as data centers scale into the hundreds of megawatts and beyond.
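As a rough illustration of how such a hybrid arrangement works, the sketch below implements a simple dispatch heuristic that serves load from the grid first and falls back to on‑site turbines during a shortfall. The function, its parameters, and the example figures are hypothetical and are not details of the Nexus campus design.

```python
# Minimal sketch of a hybrid power dispatch heuristic for an AI campus.
# Purely illustrative; the capacities and the grid/turbine split are assumptions,
# not details of the Texas project.

def dispatch(load_mw: float, grid_available_mw: float, turbine_capacity_mw: float) -> dict:
    """Decide how much load to serve from the grid vs. on-site gas turbines."""
    from_grid = min(load_mw, grid_available_mw)
    remaining = load_mw - from_grid
    from_turbines = min(remaining, turbine_capacity_mw)
    unserved = remaining - from_turbines  # anything left would force load shedding
    return {"grid": from_grid, "turbines": from_turbines, "unserved": unserved}

# Example: a 500 MW cluster during a grid capacity shortfall.
print(dispatch(load_mw=500, grid_available_mw=350, turbine_capacity_mw=200))
# -> {'grid': 350, 'turbines': 150, 'unserved': 0}
```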
Google’s backing highlights a broader shift in the AI arms race: the bottleneck is no longer only about chips. Control over land, power, cooling, and financing for hyperscale data centers has become just as important as access to GPUs or TPUs. The Texas project illustrates how large AI developers and the biggest cloud providers are knitting together long‑term infrastructure alliances to ensure they can keep scaling models as demand for AI services accelerates.
At the same time, Anthropic is dealing with a very different kind of challenge in Washington. The company is locked in a legal dispute with the U.S. Department of Defense over the military use of its AI models and the safeguards governing that use. The confrontation escalated when the Pentagon moved to label Anthropic a “supply‑chain risk,” a designation that could severely limit government contracts and influence how other agencies view the company.
A federal judge in San Francisco has temporarily blocked that move. Judge Rita Lin issued an order preventing the Pentagon from formally designating Anthropic as a supply‑chain threat while the case proceeds, saying the government’s actions appeared punitive rather than grounded in clear security concerns. She described the Pentagon’s approach as “arbitrary,” according to court reporting, and signaled that the government had not yet justified such a sweeping label.
The ruling does not require the Department of Defense to continue using Anthropic’s tools. The military is still free to pause, curtail, or terminate its operational deployments of Claude and related systems. However, the decision freezes more expansive sanctions that could have damaged Anthropic’s broader reputation or discouraged other public‑sector partners from working with the company.
The dispute traces back to a fundamental disagreement over how far Anthropic should relax its AI safety constraints for military missions. The company has drawn a hard line around allowing its systems to support surveillance operations and autonomous weapons development. According to accounts of the conflict, the Pentagon pushed for looser restrictions on certain use cases, especially around intelligence workflows and targeting‑related analysis, while Anthropic insisted on maintaining robust guardrails.
Complicating matters further, U.S. military units reportedly used Anthropic’s Claude AI during operations tied to strikes on Iran. That deployment thrust Anthropic into the center of two fast‑moving global debates at once: the geopolitics of military AI and the industrial race to construct the massive infrastructure needed to power next‑generation models. It also sharpened questions about whether AI developers should exert direct control over how their models are used in armed conflict, even after a license or contract is signed.
For Anthropic, the timing is especially sensitive. On one front, the company is working with Google to build out one of the largest dedicated AI campuses in the country, a project aimed at supporting safer, more reliable, and more capable versions of Claude. On another, it is trying to convince regulators and courts that its refusal to relax safety standards for military applications is a principled stance, not a breach of trust or a security risk.
The Texas buildout also illustrates how AI infrastructure is becoming a strategic asset for both technology companies and governments. A campus capable of scaling to gigawatt‑level power consumption effectively becomes a national‑level facility, tying into regional energy markets, local employment, and critical‑infrastructure planning. As more AI workloads move from experimentation to core economic and defense functions, control over such campuses is likely to be viewed through a geopolitical lens, not just a commercial one.
From a technical standpoint, a 500 MW facility dedicated largely to AI workloads signals a new era for data centers. Traditional enterprise data centers and cloud regions rarely approached this scale. AI‑heavy campuses require specialized cooling systems, advanced power distribution, and close coordination with power utilities or fuel suppliers. They also tend to be located where land is available for expansion and where regulators are receptive to large‑scale industrial power usage. Texas, with its energy resources and relatively permissive regulatory environment, fits that profile.
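To make that scale concrete, here is a back‑of‑the‑envelope sketch of how many accelerators a 500 MW campus might host. The per‑accelerator power draw and the PUE figure are illustrative assumptions, not specifications of the Texas facility.

```python
# Back-of-envelope sketch: how many accelerators might a 500 MW AI campus host?
# All figures below are illustrative assumptions, not project specifications.

CAMPUS_POWER_MW = 500          # first-phase capacity reported for the Texas site
PUE = 1.2                      # assumed power usage effectiveness (cooling and overhead)
WATTS_PER_ACCELERATOR = 1500   # assumed draw per accelerator incl. host CPU, memory, networking

it_power_watts = CAMPUS_POWER_MW * 1_000_000 / PUE
accelerators = it_power_watts / WATTS_PER_ACCELERATOR

print(f"IT power available: {it_power_watts / 1e6:.0f} MW")
print(f"Rough accelerator count: {accelerators:,.0f}")
# Under these assumptions, roughly 280,000 accelerators: the kind of scale
# that separates an AI campus from a traditional enterprise data center.
```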
However, this growth raises environmental and policy questions that are only beginning to be addressed. AI‑dedicated data centers can consume as much power as small cities. Developers, operators, and policymakers are under growing pressure to show how these facilities will integrate renewable energy, improve efficiency, and mitigate strain on local grids. Proximity to gas pipelines can provide resiliency and cost advantages, but it also ties AI infrastructure more closely to fossil fuel usage unless offset by parallel investments in low‑carbon power.
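For a sense of what "as much power as small cities" means, the short sketch below converts a continuous 500 MW load into household equivalents and annual energy. The average household load is an assumed figure based on typical U.S. consumption, not a measurement tied to this facility.

```python
# Illustrative comparison: continuous draw of a 500 MW campus vs. residential demand.
# The ~1.2 kW average household load is an assumption based on typical U.S. figures.

CAMPUS_POWER_MW = 500
AVG_HOUSEHOLD_LOAD_KW = 1.2    # assumed average continuous load per U.S. household

households_equivalent = CAMPUS_POWER_MW * 1000 / AVG_HOUSEHOLD_LOAD_KW
annual_energy_twh = CAMPUS_POWER_MW * 8760 / 1_000_000  # MW * hours/year -> TWh

print(f"Equivalent households: {households_equivalent:,.0f}")   # ~417,000 households
print(f"Annual energy at full load: {annual_energy_twh:.1f} TWh")
```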
For Google, the Texas project reinforces its positioning as not just a cloud provider, but a full‑stack infrastructure partner for leading AI companies. By extending financial support for physical campuses, offering proprietary chips such as TPUs, and delivering managed services on top, Google is building an ecosystem that makes it harder for major AI labs to walk away. In return, Google secures anchor customers whose workloads can fill its hardware fleets and justify continued investment in next‑generation accelerators.
Anthropic, for its part, gains a predictable and scalable foundation for Claude’s evolution. The company’s roadmap depends on training models with trillions of parameters and serving them to tens of millions of users in near real time. That kind of scale is only sustainable with a clear line of sight to enough power, racks, and specialized compute, something that ad hoc leasing and short‑term cloud capacity alone cannot guarantee.
Looking ahead, the combination of legal scrutiny, military interest, and mega‑scale infrastructure will likely shape how AI companies position themselves. Those emphasizing safety and governance, as Anthropic does, may increasingly seek partners and infrastructure arrangements that allow them to enforce use‑case restrictions more directly. That could include physical isolation of certain clusters, stricter access controls, or contractual terms tying use to specific ethical guidelines.
At the same time, governments are unlikely to step back from pursuing AI capabilities they see as essential to national security. The clash between Anthropic and the Pentagon could become a test case for how far a private AI lab can go in dictating the boundaries of military use, and how courts interpret attempts by agencies to retaliate when those boundaries are not relaxed.
In that broader context, Google’s backing of the Texas AI data center is more than a financial move. It signals that the largest tech platforms expect the demand for secure, high‑performance AI infrastructure to continue exploding, even as political and ethical debates around AI in warfare, surveillance, and critical systems intensify. The firms that can navigate both the physical buildout of compute and the contested terrain of regulation and public trust are likely to define the next chapter of the AI industry.

