One of the architects of Anthropic’s artificial intelligence (AI) philosophy argued that intentional discrimination could be a way to combat stigmas around race and gender.
In a 2023 paper co-authored with several other AI researchers, Amanda Askell, a philosopher hired by Anthropic to develop its AI’s moral compass, argued that companies might benefit from a kind of overcorrection against stereotypes.
But, the paper explained, that would require human input on how to modify its answers.
“Larger models can over-correct, especially as the amount of [human input] training increases. This may be desirable in certain contexts, such as those in which decisions attempt to correct for historical injustices against marginalized groups, if doing so is in accordance with local laws,” Askell wrote.

Pages from the Anthropic website and the company’s logos are displayed on a computer screen in New York on Feb. 26, 2026. (Patrick Sison/AP Photo)
The comment referred to an experiment on how Anthropic’s models dealt with the race of students.
“In the discrimination experiment, the 175B parameter model discriminates against Black versus White students by 3% in the Q condition and discriminates in favor of Black students by 7% in the Q+IF+CoT condition,” the paper notes, referring to one AI trained without human corrections and a second one trained with the help of input.
Askell was joined by four other authors: Deep Ganguli, Nicholas Schiefer, Thomas Liao and Kamilė Lukošiūtė.
The paper’s contents have surfaced as AI companies increasingly wrestle with the ethics their models are trained on — the presuppositions and moral determinations that inform their outputs. It also highlights the challenges engineers face in training models on human content while simultaneously trying to leave behind certain human behaviors.
The question of ethics has forced Anthropic in particular into the spotlight in recent weeks.
The company made headlines earlier this year for clashing with the Department of War over restrictions that prevent its technology from being deployed to conduct lethal operations.

A federal judge blocked the Trump administration from banning AI firm Anthropic from Department of War use, sparking debate over courts’ role in national security. The image shows Anthropic CEO Dario Amodei and War Secretary Pete Hegseth. (Samyukta Lakshmi/Bloomberg via Getty Images; Eugene Hoshiko/Pool via Reuters)
It also comes as Anthropic decided to withhold its latest model, Mythos, citing fears that the model proved too effective at finding cyber vulnerabilities that could wreak havoc in the hands of hackers.
Amid questions of AI application, Anthropic has marketed its flagship AI, Claude, as the “ethical” AI choice.
“Our central aim is for Claude to be a good, wise and virtuous agent, exhibiting skill, judgment (sic), nuance and sensitivity in handling real-world decision-making,” Claude’s constitution reads.
To get a better sense of what that means in practice, companies like Anthropic have turned to researchers like Askell.
On her website, Askell described her role as refining the way an AI thinks.
“I’m a philosopher working on finetuning and AI alignment at Anthropic. My team trains models to be more honest and to have good character traits and works on developing new finetuning techniques so that our interventions can scale to more capable models,” Askell wrote.
She previously held a similar position at OpenAI, the maker of ChatGPT, focusing on AI safety.
The 2023 paper, written two years after she joined Anthropic, noted that encountering discrimination in AI models shouldn’t come as a surprise.
“In some ways, our findings are unsurprising. Language models are trained on text generated by humans, and this text presumably includes many examples of humans exhibiting harmful stereotypes and discrimination,” the paper reads.
But the paper noted that AI models seem able to adjust their outputs even without being given a definition of what discrimination means.

The U.S. military reportedly used Anthropic’s AI tool Claude during the operation that captured Venezuelan leader Nicolás Maduro. Claude handled most of the operation autonomously, triggering thousands of requests and generating detailed documentation of the attack for future use. (Kurt “CyberGuy” Knutsson)
“Our results are surprising in that they show we can steer models to avoid bias and discrimination by requesting an unbiased or non-discriminatory response in natural language,” the paper states.
Askell and Anthropic did not immediately respond to a request for comment from Fox News Digital.
