Tens of millions of users around the world have reacted strongly after OpenAI agreed to let the U.S. Department of Defense use its AI models on a classified government network.
In the days following the announcement, a website tracking user departures reported that roughly 1.5 million people had cancelled their ChatGPT subscriptions or left the platform in protest.
The decision triggered a surge in negative sentiment online as users expressed concern about the ethical implications of military use of artificial intelligence.
Many critics said they were uncomfortable with an AI tool being linked to government defense work, arguing that such partnerships could open the door to military decision-making, surveillance, or other applications they found morally troubling. A boycott movement spread across social media under slogans like “Quit GPT.”
As users departed, rival AI platforms such as Anthropic’s Claude chatbot drew fresh attention, with download numbers rising on app store charts as some people switched services.
The company’s chief executive, Sam Altman, acknowledged the controversy and said the firm was reviewing parts of the agreement and adding safeguards, including explicit language barring the use of its systems in domestic surveillance or autonomous weapons deployments.
The backlash has also sparked protests at OpenAI’s headquarters and intensified debate over ethical limits for AI technologies.
While the reported departures represent a small fraction of ChatGPT’s total global user base, which numbers in the hundreds of millions, the episode highlights how user trust and public opinion can shape the direction of technology platforms and their partnerships.