Disrupting malicious uses of AI | February 2026
Our latest threat report examines how malicious actors combine AI models with websites and social platforms—and what it means for detection and defense.
Continue with this story
Follow the same topic through connected articles, entity pages, and active story threads.
Bluesky leans into AI with Attie, an app for building custom feeds
Bluesky’s new app Attie uses AI to help people build custom feeds on the open social networking protocol atproto.
Stanford study outlines dangers of asking AI chatbots for personal advice
While there’s been plenty of debate about AI sycophancy, a new study by Stanford computer scientists attempts to measure how harmful that tendency might be.
The first 40 months of the AI era
More from OpenAI News
Fresh reporting and follow-up coverage from the same newsroom.
STADLER reshapes knowledge work at a 230-year-old company
Learn how STADLER uses ChatGPT to transform knowledge work, saving time and boosting productivity for its 650 employees.
Inside our approach to the Model Spec
Learn how OpenAI’s Model Spec serves as a public framework for model behavior, balancing safety, user freedom, and accountability as AI systems advance.
Introducing the OpenAI Safety Bug Bounty program
OpenAI launches a Safety Bug Bounty program to identify AI abuse and safety risks, including agentic vulnerabilities, prompt injection, and data exfiltration.
Helping developers build safer AI experiences for teens
OpenAI releases prompt-based teen safety policies for developers using gpt-oss-safeguard, helping moderate age-specific risks in AI systems.