AI companies face scrutiny over ethical practices and claims of doing good
7 articles · Updated · The New York Times · Apr 26
The opinion piece highlights founders like Sam Altman, Elon Musk, and Dario Amodei, and references OpenAI and Anthropic as central actors in the debate.
It argues that despite founders' initial altruistic intentions, AI companies are pushed by profit motives into conflicts over what 'doing good' actually means, raising concerns about unchecked data use and environmental impact.
The article ties the industry's philosophy to the effective altruism movement and figures like Sam Bankman-Fried, and contends that regulation is needed to guide AI companies' ethical behavior amid fears of long-term risks to humanity.
As AI's ethical promises clash with commercial reality, can 'doing good' survive inside a for-profit company?
OpenAI's leadership crisis revealed deep rifts over safety. How stable are these trillion-dollar AI giants?
As companies face Pentagon blacklists and boycotts, who should decide AI's role in the military?
With AI data centers consuming vast energy and water, is its environmental cost becoming too high?
With the EU and US taking different regulatory paths, can global AI governance ever be truly effective?
AI models are trained on copyrighted data. Will new laws force companies to finally pay for it?
The 2026 AI Crisis: Military Standoffs, $1.5 Billion Copyright Settlements, and Regulatory Battles
Overview
In 2026, a major conflict arose when the U.S. Department of Defense demanded unrestricted military use of Anthropic's AI model Claude, including for autonomous weapons and surveillance, but Anthropic refused to remove ethical safeguards. Despite a Pentagon compromise offer, the standoff led to Anthropic being labeled a national security risk, sparking industry concern. Meanwhile, OpenAI chose to collaborate with the military, triggering public backlash and shifting user preferences.

Regulatory efforts intensified as the SEC cracked down on AI fraud, while states like New York introduced their own AI laws, creating a fragmented compliance landscape. Landmark legal rulings, including a $1.5 billion Anthropic copyright settlement, pushed AI firms toward licensing and ethical data use.

Corporations and academia increasingly embraced ethics as a strategic priority, yet systemic risks and geopolitical rivalries complicated global AI governance, highlighting urgent challenges for unified oversight and responsible innovation.