The Free Beacon

Biden Admin To Drop Half a Million on Artificial Intelligence That Detects Microaggressions on Social Media

The Biden administration is set to dole out more than $550,000 in grants to develop an artificial intelligence model that can automatically detect and suppress microaggressions on social media, government spending records show.

The award, funded through President Joe Biden’s $1.9 trillion American Rescue Plan, was granted to researchers at the University of Washington in March to develop technologies that could be used to protect online users from discriminatory language. The researchers have already received $132,000 and expect total government funding to reach $550,436 over the next five years.

The researchers are developing machine-learning models that can analyze social media posts to detect implicit bias and microaggressions, commonly defined as slights that cause offense to members of marginalized groups. It’s a broad category, but past research conducted by the lead researcher on the University of Washington project suggests something as tame as praising meritocracy could be considered a microaggression.

The Biden administration’s funding of the research comes as the White House faces growing accusations that it seeks to suppress free speech online. Biden last month suggested there should be an investigation into Tesla CEO Elon Musk’s acquisition of Twitter after the billionaire declared the social media app would pursue a “free speech” agenda. Internal Twitter communications Musk released this month also revealed a prolonged relationship between the FBI and Twitter employees, with the agency playing a regular role in the platform’s content moderation.

Judicial Watch president Tom Fitton likened the Biden administration’s funding of the artificial intelligence research to the Chinese Communist Party’s efforts to “censor speech unapproved by the state.” For the Biden administration, Fitton said, the research is a “project to make it easier for their leftist allies to censor speech.”

A spokesman for the National Science Foundation, which issued the research grant, rebuffed criticism of the project, which he said “does not attempt to hamper free speech.” The project, the spokesman said, creates “automated ways of identifying biases in speech” and addresses the biases of human content moderators.

The research’s description doesn’t give examples of what comments would qualify as microaggressions—though it acknowledges they can be unconscious and unintentional. The project is led by computer science professor Yulia Tsvetkov, who has authored studies that suggest the artificial intelligence model might identify and suppress language many would consider inoffensive, such as comments praising the concept of meritocracy.

Tsvetkov coauthored a 2019 study titled “Finding Microaggressions in the Wild.”


