Anthropic raises Claude Code usage limits, credits new deal with SpaceX
Anthropic has expanded Claude Code usage limits following a new commercial deal with SpaceX, continuing a trend of enterprise partnerships that includes previous agreements with Microsoft and Amazon. The expanded access to Claude's coding capabilities is aimed at driving adoption among enterprise customers.
Higher usage limits for Claude and a compute deal with SpaceX
Anthropic has announced higher usage limits for Claude alongside a compute partnership with SpaceX. The deal gives Anthropic access to additional computational resources, allowing it to expand Claude's capacity and scale the model faster.
Anthropic's Claude Managed Agents can now "dream," sort of
Anthropic has added a "dreams" feature to Claude Managed Agents, allowing extended reasoning and planning beyond standard responses. In addition, Pro and Max users of Claude Code will see their 5-hour usage limits doubled, improving access to extended coding sessions.
Anthropic and OpenAI are both launching joint ventures for enterprise AI services
Anthropic and OpenAI have each formed partnerships with asset managers to accelerate enterprise AI product distribution and go-to-market efforts. These joint ventures signal both companies' intent to deepen their penetration in the high-value enterprise segment.
DeepClaude – Claude Code agent loop with DeepSeek V4 Pro, 17x cheaper
DeepClaude implements Claude's agentic code loop using DeepSeek V4 Pro as a cost-effective alternative, achieving 17x cheaper API calls while maintaining comparable performance. The open-source project demonstrates that smaller, optimized models can replace expensive LLM agents for code generation and debugging tasks.
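The item above hinges on the agent-loop pattern: the model proposes a tool call, a local executor runs it, and the result is fed back until the model declares the task finished. A minimal sketch of that loop, with stub functions standing in for the real model and tool APIs (neither DeepClaude's nor Anthropic's actual interfaces are shown; every name here is illustrative):

```python
# Minimal agent-loop sketch: model proposes actions, an executor runs them,
# and results are appended to the conversation until the model finishes.
# All names are hypothetical; no real API is depicted.

def stub_model(history):
    """Stand-in for a chat-completion call (e.g. to DeepSeek V4 Pro)."""
    # Propose one tool call first, then signal completion.
    if not any(msg["role"] == "tool" for msg in history):
        return {"action": "run", "command": "echo hello"}
    return {"action": "finish", "answer": "done"}

def run_tool(command):
    """Stand-in for a sandboxed executor; just echoes the command back."""
    return f"ran: {command}"

def agent_loop(model, task, max_steps=8):
    """Drive the model/tool exchange until it finishes or the budget runs out."""
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        step = model(history)
        if step["action"] == "finish":
            return step["answer"]
        result = run_tool(step["command"])
        history.append({"role": "tool", "content": result})
    return None  # step budget exhausted

print(agent_loop(stub_model, "build the project"))
```

Swapping the model behind `stub_model` is what makes a cheaper backend drop-in possible: the loop, tool schema, and conversation state stay the same regardless of which provider answers.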
Kimi K2.6 just beat Claude, GPT-5.5, and Gemini in a coding challenge
Kimi K2.6, an open-weights model developed in China, outperformed Claude, GPT-5.5, and Gemini in a competitive coding challenge. The result demonstrates that open-weights models can match or exceed proprietary frontier models on specific technical benchmarks.
GPT-5.5 matches heavily hyped Mythos Preview in new cybersecurity tests
OpenAI's GPT-5.5 matched the cybersecurity performance of Anthropic's heavily promoted Mythos Preview in new benchmarks, suggesting Mythos' capabilities are not uniquely advanced. The results indicate that state-of-the-art models across companies are converging on similar threat-detection abilities rather than one model showing decisive superiority.
Pentagon strikes classified AI deals with OpenAI, Google, and Nvidia — but not Anthropic
The Pentagon has approved classified AI contracts with OpenAI, Google, Microsoft, Amazon, Nvidia, xAI, and Reflection, while notably excluding Anthropic over supply-chain risk concerns. This marks an expansion of the DoD's classified AI partnerships beyond its existing agreements with OpenAI and xAI.
Sources: Anthropic potential $900B+ valuation round could happen within 2 weeks
Anthropic is reportedly asking investors to submit allocations for a new funding round within 48 hours, with sources suggesting a potential valuation exceeding $900 billion. The round could close within the next two weeks.
Claude Code refuses requests or charges extra if your commits mention "OpenClaw"
Claude Code reportedly refuses requests or charges extra fees when a user's commits mention "OpenClaw," an apparent competitor project. The incident raised concerns about Anthropic enforcing commercial preferences through its model's behavior, drawing over 700 comments on Hacker News.
Claude.ai and API outage has been resolved
Anthropic's Claude.ai and API services experienced an outage that has since been resolved. The incident affected both the web interface and API access for users.
How AI companies use fear to their advantage
A BBC article examines how AI companies leverage fear narratives about existential risks and advanced capabilities to shape policy, regulatory discussions, and public perception in their favor. The strategy serves corporate interests by influencing regulation and building justifications for market dominance.
Mendral cuts LLM infrastructure costs by switching to Claude Opus
Mendral reports cutting its LLM infrastructure costs by adopting Claude Opus, Anthropic's frontier model. The shift suggests that advanced models can offer better efficiency and economics than previous solutions.
Anthropic launches new Claude capabilities for creative work
Anthropic released a new feature set for Claude designed to support creative work, enabling users to apply the model to writing, brainstorming, and other generative tasks. The announcement highlights Claude's applications in creative domains and reflects ongoing work to expand its utility beyond technical use cases.
Google expands Pentagon’s access to its AI after Anthropic’s refusal
Google signed a new contract to expand the Pentagon's access to its AI systems following Anthropic's public refusal to allow DoD use of Claude for domestic mass surveillance and autonomous weapons. The move highlights a divergence in how major AI labs approach military and defense applications.
Claude.ai outage draws heavy discussion on Hacker News
Claude.ai experienced an outage affecting user access to Anthropic's Claude AI service. The incident drew significant community attention on Hacker News, with 115 comments and 143 upvotes indicating widespread impact on users.
Claude can now plug directly into Photoshop, Blender, and Ableton
Anthropic launched Claude connectors for Adobe Creative Cloud, Blender, Ableton, and other creative tools, enabling the AI to retrieve data from and execute actions directly within these applications. This expands Claude's utility in creative workflows following the recent launch of Claude Design.
Who owns the code Claude writes?
An analysis examines the legal ownership of code generated by Claude, Anthropic's AI assistant, raising questions about intellectual property rights and liability for AI-generated code. The piece explores whether developers, Anthropic, or neither party holds a claim to the output, and what that means for commercial use.
AI systems uncover unknown vulnerabilities at DARPA's AI Cyber Challenge
Teams at DARPA's AI Cyber Challenge demonstrated AI systems scanning 54 million lines of code, finding not only deliberately injected bugs but also previously unknown vulnerabilities. The competition highlights the emerging ability of AI models like Claude to identify software security flaws at scale.