      Brazil riots trigger widespread content bans on Facebook, YouTube

      news.movim.eu / ArsTechnica · Tuesday, 10 January, 2023 - 17:44 · 1 minute

    A view of a broken window after supporters of Brazil's former President Jair Bolsonaro participated in an anti-democratic riot at Planalto Palace in Brasilia, Brazil on January 9, 2023. (credit: Anadolu Agency / Contributor | Anadolu)

    Claiming “election interference” in Brazil, thousands of rioters on Sunday broke into government buildings in the nation’s capital, Brasília. The rioters relied on social media and messaging apps to coordinate their attacks and evade government detection, The New York Times reported, following a “digital playbook” similar to the one used by those involved in the United States Capitol attack on January 6, 2021. Now, social media platforms like Facebook and YouTube have begun removing content praising the most recent attacks, Reuters reported, marking this latest anti-democratic uprising as another sensitive event requiring widespread content removal.

    Disinformation researchers told the Times that Twitter and Telegram played a central role for those organizing the attacks, but Meta's apps Facebook and WhatsApp were also used. Twitter has not responded to reports, but a Meta spokesperson told Ars and a Telegram spokesperson told Reuters that their companies have been cooperating with Brazilian authorities to stop the spread of content that could incite further violence. Both platforms confirmed an uptick in content moderation efforts starting before the election took place, with many popular social media platforms seemingly bracing for the riots after failing to quickly remove calls to violence during the US Capitol attacks.

    “In advance of the election, we designated Brazil as a temporary high-risk location and have been removing content calling for people to take up arms or forcibly invade Congress, the Presidential palace, and other federal buildings,” a Meta spokesperson told Ars. “We're also designating this as a violating event, which means we will remove content that supports or praises these actions.”

      YouTube algorithm pushed election fraud claims to Trump supporters, report says

      news.movim.eu / ArsTechnica · Friday, 2 September, 2022 - 19:20 · 1 minute

    (credit: Nathan Howard / Stringer | Getty Images News)

    For years, researchers have suggested that content-recommendation algorithms aren't the cause of online echo chambers, which are more likely due to users actively seeking out content that aligns with their beliefs. This week, researchers at New York University's Center for Social Media and Politics showed results from a YouTube experiment that happened to be conducted just as election fraud claims were being raised in fall 2020. They say their results provide an important caveat to prior research by showing evidence that in 2020, YouTube's algorithm was responsible for "disproportionately" recommending election fraud content to users more "skeptical of the election's legitimacy to begin with."

    A coauthor of the study, Vanderbilt University political scientist James Bisbee, told The Verge that even though participants were recommended a low number of election denial videos (at most 12 out of the hundreds of videos they clicked on), the algorithm recommended three times as many of them to people predisposed to buy into the conspiracy as to people who were not. "The more susceptible you are to these types of narratives about the election... the more you would be recommended content about that narrative," Bisbee said.

    YouTube spokesperson Elena Hernandez told Ars that the report from Bisbee's team "doesn't accurately represent how our systems work." Hernandez said that "YouTube doesn't allow or recommend videos that advance false claims that widespread fraud, errors, or glitches occurred in the 2020 US presidential election," and that YouTube's "most viewed and recommended videos and channels related to elections are from authoritative sources, like news channels."
