
Google Curates Data for AI: Whistleblower

A Google whistleblower claims that Google and other tech companies can use AI to restrict the flow of information on the internet.

Zach Vorhies, a former Google employee, became concerned over how the company was curating data to create AI biased toward leftist or social justice values.

“AI is a product of the data that gets fed into it,” Vorhies, a former Google employee turned whistleblower, said on a Jan. 5 episode of EpochTV’s “Crossroads” program.

“If you want to create an AI that’s got social justice values  … you’re going to only feed it information that confirms that bias. So by biasing the information, you can bias the AI,” Vorhies explained.

“You can’t have an AI that collects the full breadth of information and then becomes biased, despite the fact that the information is unbiased.”
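Vorhies’s point can be illustrated with a toy model: if a text classifier’s training set contains only one viewpoint on a topic, the model has no counterexamples to learn from. The following sketch, written in Python with scikit-learn, uses invented posts and labels to show the general principle; it does not represent any Google system.

```python
# Toy illustration: a classifier trained on curated, one-sided data
# inherits that bias. All example data below is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Curated training set: every post about "the policy" is labeled
# "approve", so the model never sees a dissenting example.
texts = [
    "the policy is fair and just",
    "the policy helps everyone",
    "the policy promotes equity",
    "unrelated post about baking bread",
    "another post about baking cakes",
]
labels = ["approve", "approve", "approve", "neutral", "neutral"]

vectorizer = CountVectorizer()
model = MultinomialNB().fit(vectorizer.fit_transform(texts), labels)

# Even a critical post about the policy is classified "approve",
# because the curated data contained no counterexamples.
test = vectorizer.transform(["the policy is harmful and unjust"])
print(model.predict(test))  # -> ['approve']
```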

A man walks in front of the Tencent headquarters in the Nanshan district of Shenzhen, Guangdong Province, China, on September 2, 2022. (David Kirton/Reuters)

AI Talkback Causes Trouble

In 2017, Chinese tech giant Tencent shut down an AI chat service after it began to criticize the Chinese Communist Party.

Tencent, the video game developer and owner of WeChat, had offered its users a free service that allowed them to chat with an artificial intelligence character. The chatbots, Little Bing and Baby Q, could talk on a variety of topics and grew smarter as they interacted with users, according to a report by the Japanese public broadcaster NHK World.

When a user posted the message “Hurray for the Communist Party,” Tencent’s chatbot replied, “Are you sure you want to hurray to such a corrupt and incompetent [political system]?” according to the report.

When a user asked the AI program about Chinese leader Xi Jinping’s “Chinese Dream” slogan, the AI wrote back that the dream meant “immigrating to the United States.”

A smartphone in front of the Microsoft logo, July 26, 2021. (Dado Ruvic/Reuters)

Another example of unexpected AI behavior involves Tay, a chatbot created by Microsoft to entertain 18- to 24-year-olds in the United States.

Tay was launched in 2016 and was supposed to learn from the users it talked with. However, after Twitter trolls exploited its learning abilities, Tay started making offensive and vulgar comments. Microsoft shut the chatbot down after only 16 hours.

Vorhies believes the Tay incident was an intelligence operation intended to spur the creation of machine learning (ML) fairness research in academia and at Google.

What Is Machine Learning Fairness?

ML fairness, as applied by Google, is a system that uses artificial intelligence to censor information processed by the company’s main products such as Google Search, Google News, and YouTube, Vorhies said.

Vorhies explained that the system classifies all data on the platform to determine which information should be amplified and which should be suppressed.

He said that machine learning fairness means that what is available online constantly changes, so the results returned for a query may differ from those returned for the same query earlier.
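Vorhies did not describe the internal mechanics, so the sketch below is only a generic illustration of the amplify/suppress behavior he alleges; the labels, multipliers, and posts are all invented.

```python
# Hypothetical sketch of label-driven amplification and suppression.
# This is not Google's actual system; every value here is invented.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    base_score: float  # relevance score from an upstream ranker
    label: str         # label assigned by a classifier or reviewer

# Invented policy table mapping labels to ranking multipliers.
POLICY = {
    "authoritative": 2.0,  # amplified
    "neutral": 1.0,        # untouched
    "blacklisted": 0.1,    # de-amplified
}

def adjusted_score(post: Post) -> float:
    """Scale the relevance score by the label's policy multiplier."""
    return post.base_score * POLICY.get(post.label, 1.0)

posts = [
    Post("mainstream coverage of the election", 0.80, "authoritative"),
    Post("recipe for sourdough bread",          0.80, "neutral"),
    Post("independent blog on the election",    0.80, "blacklisted"),
]

# Identical relevance, very different visibility once labels apply.
for p in sorted(posts, key=adjusted_score, reverse=True):
    print(f"{adjusted_score(p):.2f}  {p.text}")
```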

If a user searches for a neutral topic, such as baking, the system will give the person more information about baking, Vorhies said. But if the user searches for politically sensitive or blacklisted content, the system will “try not to give [the user] more of that content” and will provide alternative content instead.

A tech company that uses machine learning fairness “can shift that Overton window to the left,” Vorhies said. “Then people like us are essentially programmed by it.” The Overton window refers to the range of policies considered acceptable in public discourse at a given moment.

Experts in machine learning note that data gathered from the real world reflects biases that exist in society, so systems that use such data as-is could produce unfair results.
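That concern is measurable before any model is trained. A minimal sketch, using made-up records, that checks whether a positive label is distributed evenly across groups in the raw data:

```python
# Measuring label imbalance in raw data (all numbers invented).
# A model trained on this data as-is would likely reproduce the gap.
records = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 1}, {"group": "A", "approved": 0},
    {"group": "B", "approved": 1}, {"group": "B", "approved": 0},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]

rates = {}
for r in records:
    g = rates.setdefault(r["group"], [0, 0])  # [positives, total]
    g[0] += r["approved"]
    g[1] += 1

for group, (pos, total) in rates.items():
    print(f"group {group}: approval rate {pos / total:.0%}")
# group A: 75%, group B: 25% -> the dataset itself encodes a disparity.
```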

An illustration photo shows a smartphone and a laptop displaying the Google website on Dec. 14, 2020. (Laurie Dieffembacq/Belga Mag/AFP via Getty Images)

Accuracy May Be Problematic

Because AI systems draw on existing data from the real world to make decisions, “an accurate machine learning model may learn or even amplify problematic pre-existing biases in the data based on race, gender, religion or other characteristics,” Google states on its ai.google website under “Responsible AI practices.”

“The risk is that any unfairness in such systems can also have a wide-scale impact. Thus, as the impact of AI increases across sectors and societies, it is critical to work towards systems that are fair and inclusive for all,” the site states.

To illustrate the point, Google gives an example of how machine learning should look from a fairness perspective: an app that helps kids select age-appropriate books from a library that carries both adult and children’s books.

If the app selects an adult book for a child, the child may be exposed to adult content and parents may be upset. However, according to the company’s inclusive ML guide, flagging children’s books that contain LGBT themes as inappropriate is also “problematic.”

The goal of fairness in machine learning is “to understand and prevent unjust or prejudicial treatment of people related to race, income, sexual orientation, religion, gender, and other characteristics historically associated with discrimination and marginalization, when and where they manifest in algorithmic systems or algorithmically aided decision-making,” Google states in its inclusive ML guide.

Sara Robinson, a staff developer relations engineer at Google, addressed the topic in an article on Google’s cloud website. Robinson described fairness in machine learning as the process of understanding biases introduced by the data and ensuring that the model “provides equitable predictions across all demographic groups.”

“While accuracy is one metric for evaluating the accuracy of a machine learning model, fairness gives us a way to understand the practical implications of deploying the model in a real-world situation,” Robinson stated.
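Robinson’s phrase “equitable predictions across all demographic groups” corresponds to fairness metrics such as demographic parity, which compares positive-prediction rates between groups. A minimal sketch with invented predictions, not Google’s tooling:

```python
# Demographic parity check: do positive predictions occur at similar
# rates across groups? Predictions and group tags are invented.
predictions = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical model outputs
groups      = ["A", "A", "A", "A", "B", "B", "B", "B"]

def positive_rate(preds, grps, target):
    selected = [p for p, g in zip(preds, grps) if g == target]
    return sum(selected) / len(selected)

rate_a = positive_rate(predictions, groups, "A")
rate_b = positive_rate(predictions, groups, "B")
print(f"group A: {rate_a:.0%}, group B: {rate_b:.0%}")
print(f"parity gap: {abs(rate_a - rate_b):.0%}")  # 0% = perfect parity
```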

How AI Censorship Works

“Censoring is super expensive. You literally have to go through all the pieces of information that you have, and curate it,” said Vorhies, a former senior engineer at Google and YouTube.

If the Federal Bureau of Investigation (FBI) flags a social media account, the social media company puts it on a “blacklist” that then gets fed to the AI, Vorhies said. Keywords are crucial because “the AI likes to make decisions when it has labels on things.”
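One simple way such labels could be attached, consistent with the keyword mechanism Vorhies describes, is plain keyword matching. The sketch below is a hypothetical reconstruction; the terms and label names are invented:

```python
# Hypothetical sketch: turning a curated keyword blacklist into labels
# a downstream AI can act on. Terms and labels are invented.
BLACKLIST = {"bannedterm1", "bannedterm2"}  # curated by staff

def label_post(text: str) -> str:
    """Attach a label so the ranking system has something to act on."""
    words = set(text.lower().split())
    return "blacklisted" if words & BLACKLIST else "unlabeled"

print(label_post("a post mentioning bannedterm1"))  # -> blacklisted
print(label_post("a post about gardening"))         # -> unlabeled
```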

Labeling, or sorting data into categories, makes machine learning easier. The AI in a self-driving car, for example, uses labels to distinguish between people, cars, streets, and the sky, labeling key characteristics of objects and comparing them. Labeling can be done manually or with software.

AI suppresses a person on social media based on data labels curated by the company’s staff, Vorhies explained. The AI then decides whether the person’s posts are allowed to trend or will be de-amplified.

Vorhies, who worked at YouTube from 2016 to 2019, claimed the platform used similar practices.

YouTube, a Google subsidiary, had something similar to a “dashboard of classifications that were being generated by their machine learning fairness” system, according to the whistleblower. He explained that the AI used a person’s history and current content to determine how to label them.

“Then someone sitting in the back room—I don’t know who this was—was doing the knobs of what is allowed to get amplified, based upon [their] personal interests.”

Psychological Warfare

Google’s search engine considers mainstream media authoritative and boosts content accordingly, Vorhies said. “These mainstream, leftist organizations are ranked within Google as having the highest authoritative value.”

A search for information about a local election is one example: “the first five links [in the search results] are going to be what the mainstream media has to say about that,” Vorhies said. “So they can redefine reality.”
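The ranking effect Vorhies describes can be modeled as a per-source authority weight multiplied into a relevance score. The domains and weights below are invented for illustration and do not reflect Google’s actual values:

```python
# Hypothetical authority-weighted search ranking (all values invented).
AUTHORITY = {
    "mainstream-outlet.example": 3.0,
    "local-paper.example": 1.0,
    "independent-blog.example": 0.5,
}

results = [  # (url, raw relevance to the query)
    ("local-paper.example/election-coverage",       0.90),
    ("mainstream-outlet.example/election-coverage", 0.70),
    ("independent-blog.example/election-analysis",  0.95),
]

def score(result):
    url, relevance = result
    domain = url.split("/")[0]
    return relevance * AUTHORITY.get(domain, 1.0)

# Despite lower raw relevance, the high-authority outlet ranks first.
for url, rel in sorted(results, key=score, reverse=True):
    print(f"{score((url, rel)):.2f}  {url}")
```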

If Wikipedia changes its stance on a matter and begins to call it a “conspiracy theory and not real,” people will be confused about what to think, Vorhies said. He described this as psychological warfare and influence operations directly targeting people’s minds.

The Epoch Times reached out to Google for comment.

