Leaks from Facebook whistleblower Frances Haugen suggest that the company's problems with extremism are particularly dire in some areas. Documents Haugen provided to The New York Times, The Wall Street Journal and other outlets suggest that Facebook is aware that it fostered serious misinformation and violence in India. Apparently, the social network did not devote enough resources to deal with the spread of harmful material in the populous country and did not respond with enough action when tensions flared.
A case study from early 2021 indicated that much of the harmful content from groups like Rashtriya Swayamsevak Sangh and Bajrang Dal was not flagged on Facebook or WhatsApp due to a lack of the technical capability needed to detect content written in Bengali and Hindi. At the same time, Facebook reportedly declined to mark the RSS for removal due to "political sensitivities," and Bajrang Dal (linked to Prime Minister Modi's party) had not been touched despite an internal Facebook call to take down its material. The company also had a whitelist of politicians exempt from fact-checking.
Facebook was still struggling to combat hate speech as recently as five months ago, according to the leaked documents. And like an earlier test in the US, the research showed how quickly Facebook's recommendation engine surfaced toxic content. A dummy account that followed Facebook's recommendations for three weeks was subjected to an "almost constant barrage" of divisive nationalism, misinformation and violence.
As with previous scoops, Facebook said the leaks didn't tell the whole story. Spokesperson Andy Stone argued that the data was incomplete and didn't account for the third-party fact-checkers it relies on widely outside the US. He added that Facebook had invested heavily in hate speech detection technology in languages such as English, Bengali and Hindi, and that the company was continuing to improve that technology.
The social media firm followed this by posting a more extensive defense of its practices. It argued that it had an "industry-leading process" for reviewing and prioritizing countries with a high risk of violence every six months. It noted that teams considered long-term issues and history alongside current events and dependence on its apps. The company added that it was engaging with local communities, improving technology and continually "refining" its policies.
However, the response didn't directly address some of the concerns. India is Facebook's largest individual market, with 340 million people using its services, yet 87 percent of Facebook's misinformation budget is focused on the United States. Even with third-party fact-checkers on the job, that suggests India isn't getting a commensurate amount of attention. Facebook also didn't follow up on concerns that it was tiptoeing around certain individuals and groups beyond a previous statement that it enforced its policies without regard to position or association. In other words, it's not clear Facebook's problems with misinformation and violence will improve in the near future.