AI News: Use of AI in Online Content Moderation

Compare Theum vs Playment

The best way to find out which app fits your needs is to examine them side by side.

When comparing products, weigh their respective strengths and note their differences to get a clearer picture of what each offers.

Likewise, remember to assess secondary factors such as security, backups, ease of use, and customer service.

Google’s Hate Speech Detection A.I. Has a Racial Bias Problem

A Google-created tool that uses artificial intelligence to police hate speech in online comments on sites like the New York Times is racially biased, according to a new study.

Another tool, released in 2018, that matched people's selfies with popular artworks inadvertently correlated the faces of African Americans with artwork depicting slaves, “perhaps because of an overreliance on Western art,” journalist Vauhini Vara wrote in Fortune.

Although such AI can automate tasks and reduce the load on human moderators, it often fails to understand context, for example whether a biting joke will strike one person as funny and another as upsetting.

After inspecting the datasets, the researchers noticed that the human annotators often labeled tweets commonly associated with African-American vernacular as offensive or hateful, despite the phrases being typical of that dialect.

These phrases might contain certain words that other social groups may find offensive, like the N-word, “ass,” or “bitch.” The researchers then used the Twitter data to train a neural network—software that learns—to recognize offensive or hateful phrases.
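To make that training step concrete, here is a minimal sketch of how such a classifier might be built. Everything here is illustrative: the file labeled_tweets.csv, its text and label columns, and the small feed-forward network stand in for the researchers' actual data and architecture, which the article does not specify.

```python
# Minimal sketch: train a classifier to flag offensive tweets.
# Hypothetical input: "labeled_tweets.csv" with columns "text" and
# "label" (1 = offensive/hateful, 0 = not), standing in for the
# annotated Twitter datasets described in the article.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

df = pd.read_csv("labeled_tweets.csv")  # hypothetical file

# Turn raw tweet text into TF-IDF features.
vectorizer = TfidfVectorizer(max_features=20_000, ngram_range=(1, 2))
X = vectorizer.fit_transform(df["text"])
y = df["label"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

# A small feed-forward neural network, a stand-in for whatever
# architecture the researchers actually used.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=50, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.3f}")
```

The key point the study makes is upstream of this code: if the annotations in the CSV are biased, any model trained on them inherits that bias.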

“We really welcome this kind of research,” Keyserling said. “We are constantly attending these conferences and try to support the research community.” The researchers also conducted a small, related experiment to learn more about how human annotators label data.

Sap speculated that these crowdsourced workers found the N-word to be less offensive when used by African Americans, but that it is “probably more offensive if it is said by a white person.” In the case of the original labeled datasets, human annotators “just didn’t know the context,” he said.
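A rough sketch of the kind of audit that surfaces such an annotation disparity: group the annotated tweets by dialect and compare how often each group's tweets were labeled offensive. The file name and column names (annotated_tweets.csv, dialect, label) are hypothetical.

```python
# Sketch: compare annotation rates across dialect groups.
# Hypothetical input: "annotated_tweets.csv" with one row per tweet and
# columns "text", "dialect" (e.g., "aae" vs. "general"), and "label"
# (1 if annotators marked the tweet offensive, 0 otherwise).
import pandas as pd

df = pd.read_csv("annotated_tweets.csv")  # hypothetical file

# Fraction of each dialect group's tweets that annotators marked offensive.
rates = df.groupby("dialect")["label"].mean()
print(rates)
# A markedly higher rate for the "aae" group than for the others would
# reflect the labeling disparity the study describes.
```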

However, Keyserling noted that “context really matters” and “that’s absolutely something we are paying attention to when structuring our data.” In a follow-up email, Keyserling pointed to a technique Jigsaw uses during the data training process to mitigate bias: so-called model cards, which document how a machine-learning model should be used and what ethical considerations apply.
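For illustration, here is a loose sketch of the kind of information a model card records. The field names are illustrative, not Jigsaw's actual format; the published template comes from the “Model Cards for Model Reporting” paper by Mitchell et al.

```python
# Illustrative sketch of a model card as a plain data structure.
# Field names are hypothetical, not Jigsaw's actual format.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    model_name: str
    intended_use: str
    out_of_scope_uses: list[str] = field(default_factory=list)
    training_data: str = ""
    known_limitations: list[str] = field(default_factory=list)
    ethical_considerations: list[str] = field(default_factory=list)

card = ModelCard(
    model_name="toxicity-classifier-demo",
    intended_use="Assist human moderators by ranking comments for review.",
    out_of_scope_uses=["Fully automated removal of user content"],
    training_data="Crowdsourced toxicity annotations on public comments.",
    known_limitations=["May over-flag dialects underrepresented in training data"],
    ethical_considerations=["Audit flag rates across demographic and dialect groups"],
)
print(card.intended_use)
```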


Blog | Spectrum

In his second day of congressional testimony, Facebook founder, CEO, and chairman Mark Zuckerberg continued his effort to convince regulators and, more importantly, 2.2 billion users that tech's leading social media company is worthy of consumer and institutional trust.