TSMC taps wind power as AI chip demand soars, Taiwan feels energy crunch
TSMC is increasing investments in wind power to meet surging energy demands from AI chip manufacturing as Taiwan faces an energy crunch. The move reflects the semiconductor industry's need for reliable renewable energy sources to sustain high-volume production of computing chips powering AI systems.
SpaceX may spend up to $119B on ‘Terafab’ chip factory in Texas
SpaceX is planning to invest up to $119 billion in a massive semiconductor manufacturing facility called "Terafab" in Texas to build next-generation chips and advanced computing systems in-house. The project would be a vertically integrated facility enabling SpaceX to reduce dependence on external chip suppliers for its Starship and Starlink operations.
Higher usage limits for Claude and a compute deal with SpaceX
Anthropic has announced higher usage limits for Claude and a compute partnership with SpaceX. The deal aims to expand Claude's capacity and access to computational resources, enabling Anthropic to scale its models faster.
Silicon Valley bets $200M on AI data centers floating in the ocean
Panthalassa is testing a novel approach to AI infrastructure by deploying floating computing nodes in the Pacific Ocean, with a planned launch in 2026 and $200M in backing. The approach targets AI's massive cooling and power demands, a critical bottleneck as data center electricity consumption scales, by using ocean water for thermal management.
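To see why ocean water is attractive for thermal management, a back-of-envelope heat balance helps. The sketch below uses the steady-state relation Q = m·cp·ΔT with entirely hypothetical figures (node power, allowed temperature rise); these are not Panthalassa's published specifications.

```python
# Toy heat-balance sketch: how much seawater flow a floating data
# center node would need to reject its waste heat. All numbers are
# illustrative assumptions, not Panthalassa's specs.

def seawater_flow_kg_per_s(it_load_w: float, delta_t_c: float,
                           cp_j_per_kg_c: float = 3990.0) -> float:
    """Mass flow from the steady-state heat balance Q = m * cp * dT.
    cp ~3990 J/(kg.C) is a typical specific heat for seawater."""
    return it_load_w / (cp_j_per_kg_c * delta_t_c)

# Example: a hypothetical 1 MW node, letting seawater warm by 5 C.
flow = seawater_flow_kg_per_s(1_000_000, 5.0)
print(round(flow, 1))  # ~50.1 kg/s of seawater
```

The point of the arithmetic: even a megawatt-class node needs only tens of kilograms of seawater per second, a trivial flow for an ocean platform, whereas a land site must build chillers or evaporative towers to shed the same heat.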
OpenAI published technical details on how it rebuilt its WebRTC stack to deliver low-latency, real-time voice AI at global scale with seamless conversational turn-taking. The write-up covers the infrastructure and optimization challenges of supporting high-volume, responsive voice applications, and this upgrade underpins OpenAI's voice capabilities for products like Voice Mode.
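One core problem in conversational turn-taking is deciding when the user has finished speaking. Production systems, presumably including OpenAI's, use learned voice-activity and end-of-turn models; the toy sketch below only illustrates the shape of the problem with a simple energy threshold and hypothetical parameters.

```python
# Toy end-of-turn detector: the kind of decision a real-time voice
# stack must make on every audio frame. The threshold and silence
# window are hypothetical; real systems use learned models.

def end_of_turn(frame_energies, threshold=0.02, silence_frames=15):
    """Return the frame index where the turn ends: the first moment
    `silence_frames` consecutive frames fall below `threshold`,
    or None if the speaker never yields."""
    quiet = 0
    for i, energy in enumerate(frame_energies):
        quiet = quiet + 1 if energy < threshold else 0
        if quiet >= silence_frames:
            return i - silence_frames + 1  # turn ended when silence began
    return None

# Speech (high energy) followed by sustained silence.
frames = [0.3] * 40 + [0.005] * 20
print(end_of_turn(frames))  # 40
```

The latency trade-off lives in `silence_frames`: too short and the system interrupts mid-pause, too long and responses feel sluggish, which is why end-of-turn detection is a central tuning target for low-latency voice.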
Pentagon inks deals with Nvidia, Microsoft, and AWS to deploy AI on classified networks
The Pentagon has signed contracts with Nvidia, Microsoft, and AWS to deploy AI systems on classified military networks. The agreements reflect the Department of Defense's push to diversify its AI vendor portfolio following tensions with Anthropic over model usage policies.
Inexpensive seafloor-hopping submersibles could stoke deep-sea science—and mining
NOAA's research vessel Rainier is deploying inexpensive seafloor-hopping submersibles to map over 8,000 square nautical miles of the Pacific Ocean for critical mineral deposits. The autonomous vehicles represent a shift toward lower-cost deep-sea exploration that could accelerate scientific discovery while raising environmental concerns about deep-sea mining.
As AI models grow more capable, evaluating their performance has become computationally expensive, creating a new constraint on model development. The cost and complexity of comprehensive evaluation is now limiting how quickly companies can iterate and deploy new models.
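A rough cost model shows why evaluation has become a constraint. The sketch below multiplies suite size by tokens per item across a few hypothetical benchmark suites; every figure (item counts, token lengths, price per million tokens) is an illustrative assumption, not any lab's real numbers.

```python
# Back-of-envelope sketch of why comprehensive evals are expensive.
# All figures below are hypothetical illustrations.

def eval_cost_usd(suites, price_per_m_tokens=10.0):
    """Total inference cost for one full eval sweep.
    `suites` maps suite name -> (num_items, tokens_per_item)."""
    total_tokens = sum(n * t for n, t in suites.values())
    return total_tokens / 1e6 * price_per_m_tokens

suites = {
    "knowledge": (14_000, 2_000),    # short Q&A items
    "coding":    (5_000, 30_000),    # long generations plus retries
    "agentic":   (2_000, 200_000),   # multi-step tool-use rollouts
}
cost = eval_cost_usd(suites)
print(f"${cost:,.0f} per candidate checkpoint")  # $5,780 per candidate checkpoint
```

Under these assumptions, agentic rollouts dominate despite having the fewest items, and the total repeats for every candidate checkpoint, which is how evaluation cost ends up gating iteration speed.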
Building the compute infrastructure for the Intelligence Age
OpenAI is scaling Stargate, its massive compute infrastructure project, to build data center capacity aimed at powering AGI development and meeting surging demand for AI training and inference.
DeepInfra has been added to Hugging Face's inference provider ecosystem, expanding access to AI model serving infrastructure. This integration allows developers to run models via DeepInfra's platform directly through Hugging Face's tools.
Rural American communities are increasingly opposing AI data center development, citing environmental impact, energy consumption, and local disruption. The conflict reflects a broader tension between surging demand for AI infrastructure and local resistance to large-scale industrial projects in less densely populated areas.
Red Hat’s OpenClaw maintainer just made enterprise OpenClaw deployments a lot safer
Tank OS containerizes OpenClaw AI agents to improve reliability and safety for enterprise deployments, particularly for managing fleets of agents in production environments.
Choco, a food distribution platform, integrated OpenAI APIs to automate workflows and improve productivity across its supply chain operations. The case study demonstrates practical deployment of AI agents to solve enterprise logistics challenges.