Taylor Swift is stepping up the legal war on AI copycats
Taylor Swift filed trademark applications to protect two spoken phrases—"Hey, it's Taylor Swift" and "Hey, it's Taylor"—as audio marks, escalating her legal fight against AI voice imitations. The move reflects broader celebrity concerns about synthetic voice generation, though how enforceable such marks would be against AI-generated deepfakes remains uncertain.
Google expands Pentagon’s access to its AI after Anthropic’s refusal
Google signed a new contract to expand the Pentagon's access to its AI systems following Anthropic's public refusal to allow DoD use of Claude for domestic mass surveillance and autonomous weapons. The move highlights a divergence in how major AI labs approach military and defense applications.
AI systems find unknown vulnerabilities in 54M lines of code at DARPA challenge
Teams at DARPA's AI Cyber Challenge demonstrated AI systems scanning 54 million lines of code, uncovering not only injected bugs but also previously unknown vulnerabilities. The competition highlights the emerging capability of AI models like Claude to identify software security flaws at scale.
OpenAI details its layered approach to community safety in ChatGPT
OpenAI outlined its approach to community safety in ChatGPT through model safeguards, misuse detection systems, policy enforcement, and partnerships with safety experts. The approach reflects OpenAI's layered strategy for preventing harmful outputs and abuse of its platform.
4TB of voice samples just stolen from 40k AI contractors at Mercor
Mercor, a platform connecting AI contractors with AI labs, suffered a data breach exposing 4TB of voice samples from approximately 40,000 contractors. The incident highlights security vulnerabilities in AI training data pipelines and in contractor platforms that handle sensitive biometric information.