      Biden administration: We never coerced Big Tech into suppressing speech

      news.movim.eu / ArsTechnica · Thursday, 10 August, 2023 - 21:45

    Today, three conservative-leaning judges on the 5th US Circuit Court of Appeals heard oral arguments on whether to lift an injunction that restricts the Biden administration from communicating with social media platforms and requesting content takedowns.

    The appeal followed a July 4 order from a district court, which found that the Biden administration had coerced platforms into censoring Louisiana and Missouri officials whose posts were deemed to be spreading COVID-19 misinformation.

    Arguing for the Biden administration, attorney Daniel Bentele Hahs Tenny requested that the injunction either be reversed or stayed for an additional 10 days "in case the solicitor general wishes to pursue Supreme Court review."

      X sues hate speech researchers whose “scare campaign” spooked Twitter advertisers

      news.movim.eu / ArsTechnica · Tuesday, 1 August, 2023 - 18:29 · 1 minute

    As Twitter continues its rebrand as X, it looks like Elon Musk hopes to quash any claims that the platform under its new name is allowing rampant hate speech to fester. Yesterday, X Corp sued a nonprofit, the Center for Countering Digital Hate (CCDH), for allegedly "actively working to assert false and misleading claims" regarding spiking levels of hate speech on X and successfully "encouraging advertisers to pause investment on the platform," Twitter's blog said.

    In its complaint, X Corp. claims that CCDH's reports have cost it an estimated tens of millions of dollars in advertising revenue. The company said it's aware of "at least eight" specific organizations, including large multinational corporations, that "immediately paused their advertising spend on X based on CCDH’s reports and articles." X also claimed that "at least five" companies "paused their plans for future advertising spend" and that three companies decided not to reactivate campaigns, all allegedly basing their decisions to stop spending on CCDH's reporting.

    X is alleging that CCDH is being secretly funded by foreign governments and X competitors to lob this attack on the platform, as well as claiming that CCDH is actively working to censor opposing viewpoints on the platform. Here, X is echoing statements of US Senator Josh Hawley (R-Mo.), who accused the CCDH of being a "foreign dark money group" in 2021—following a CCDH report on 12 social media accounts responsible for 65 percent of COVID-19 vaccine misinformation, Fox Business reported.

      Fake Pentagon “explosion” photo sows confusion on Twitter

      news.movim.eu / ArsTechnica · Tuesday, 23 May, 2023 - 21:01 · 1 minute

    A fake AI-generated image of an "explosion" near the Pentagon that went viral on Twitter. (credit: Twitter)

    On Monday, a tweeted AI-generated image suggesting a large explosion at the Pentagon caused brief confusion, reportedly including a small drop in the stock market. It originated from a verified Twitter account named "Bloomberg Feed," unaffiliated with the well-known Bloomberg media company, and was quickly exposed as a hoax. Before it was debunked, however, large accounts such as Russia Today had already spread the misinformation, The Washington Post reported.

    The fake image depicted a large plume of black smoke alongside a building vaguely reminiscent of the Pentagon, with the tweet reading "Large Explosion near The Pentagon Complex in Washington D.C. — Inital Report." Local authorities confirmed that the image was not an accurate representation of the Pentagon, and with its blurry fence bars and building columns, it looks like a fairly sloppy AI-generated image created by a model like Stable Diffusion.

    Before Twitter suspended the false Bloomberg account, it had tweeted 224,000 times and reached fewer than 1,000 followers, according to the Post, but it's unclear who ran it or the motives behind sharing the false image. In addition to Bloomberg Feed, other accounts that shared the false report include “Walter Bloomberg” and “Breaking Market News," both unaffiliated with the real Bloomberg organization.

      Rewarding accuracy gets people to spot more misinformation

      news.movim.eu / ArsTechnica · Friday, 10 March, 2023 - 23:22 · 1 minute

    Piecing together why so many people are willing to share misinformation online is a major focus among behavioral scientists. It's easy to think partisanship is driving it all—people will simply share things that make their side look good or their opponents look bad. But the reality is a bit more complicated. Studies have indicated that many people don't seem to carefully evaluate links for accuracy, and that partisanship may be secondary to the rush of getting a lot of likes on social media. Given that, it's not clear what induces users to stop sharing things that a small bit of checking would show to be untrue.

    So, a team of researchers tried the obvious: We'll give you money if you stop and evaluate a story's accuracy. The work shows that small payments and even minimal rewards boost the accuracy of people's evaluations of stories. Nearly all of that effect comes from people recognizing as factually accurate stories that don't favor their own political stance. While the cash boosted conservatives' accuracy more, they were so far behind liberals in judging accuracy that a substantial gap remains.

    Money for accuracy

    The basic outline of the new experiments is pretty simple: get a bunch of people, ask them about their political leanings, and then show them a bunch of headlines as they would appear on a social media site such as Facebook. The headlines were rated based on their accuracy (i.e., whether they were true or misinformation) and whether they would be more favorable to liberals or conservatives.

      YouTuber must pay $40K in attorneys’ fees for daft “reverse censorship” suit

      news.movim.eu / ArsTechnica · Friday, 10 March, 2023 - 20:24

    A YouTuber, Marshall Daniels—who has posted far-right-leaning videos under the name “Young Pharaoh” since 2015—tried to argue that YouTube violated his First Amendment rights by removing two videos discussing George Floyd and COVID-19. Years later, Daniels owes YouTube nearly $40,000 in attorneys’ fees for filing a frivolous lawsuit against YouTube owner Alphabet, Inc.

    A United States magistrate judge in California, Virginia K. DeMarchi, ordered Daniels to pay YouTube $38,576 for asserting a First Amendment claim that “clearly lacked merit and was frivolous from the outset.” YouTube said this figure is a conservative estimate that likely understates the fees it paid defending against the meritless claim.

    Notably, Daniels never argued that the fees Alphabet was seeking were excessive or would be burdensome. In making this rare decision in favor of the defendant, DeMarchi had to consider Daniels’ financial circumstances. In his court filings, Daniels described himself as “a fledgling individual consumer” but also told the court that he made more than $180,000 in the year before he filed his complaint. DeMarchi ruled that the fees would not be a burden to Daniels.

      Twitter hit with EU yellow card for lack of transparency on disinformation

      news.movim.eu / ArsTechnica · Thursday, 9 February, 2023 - 16:43 · 1 minute

    The European Commission, which is tasked with tackling disinformation online, this week expressed disappointment that Twitter has failed to provide required data that all other major platforms submitted. Now Twitter has been hit with a "yellow card," Reuters reported, and could be subjected to fines if the platform doesn’t fully comply with European Union commitments by this June.

    “We must have more transparency and cannot rely on the online platforms alone for the quality of information,” the commission’s vice president of values and transparency, Věra Jourová, said in a press release. “They need to be independently verifiable. I am disappointed to see that Twitter['s] report lags behind others, and I expect a more serious commitment to their obligations.”

    Earlier this month, the EU’s commissioner for the internal market, Thierry Breton, met with Twitter CEO Elon Musk to ensure that Musk understood what was expected of Twitter under the EU’s new Digital Services Act (DSA). After their meeting, Musk tweeted that the EU’s “goals of transparency, accountability & accuracy of information are aligned” with Twitter’s goals. But he also indicated that Twitter would rely on Community Notes, which lets users add context to potentially misleading tweets, to satisfy DSA requirements on stopping the spread of misinformation and disinformation. That process seems to be the commission’s issue with Twitter’s unsatisfactory report.

      For Facebook addicts, clicking is more important than facts or ideology

      news.movim.eu / ArsTechnica · Monday, 23 January, 2023 - 18:27 · 1 minute

    It's fair to say that, once the pandemic started, sharing misinformation on social media took on an added, potentially fatal edge. Inaccurate information about the risks posed by the virus, the efficacy of masks, and the safety of vaccines put people at risk of preventable death. Yet despite the dangers of misinformation, it continues to run rampant on many social media sites, with moderation and policy often struggling to keep up.

    If we're going to take any measures to address this—something it's not clear that social media services are interested in doing—then we have to understand why sharing misinformation is so appealing to people. An earlier study indicated that people care about making sure that what they share is accurate, but in many cases they fail to check. A new study elaborates on that by getting into why this disconnect develops: For many users, clicking "share" becomes a habit, something they pursue without any real thought.

    How vices become habits

    People find plenty of reasons to post misinformation that have nothing to do with whether they mistakenly believe the information is accurate. The misinformation could make their opponents, political or otherwise, look bad. Alternatively, it could signal to their allies that they're on the same side or part of the same cultural group. But the initial experiments described here suggest that this sort of biased sharing doesn't explain a significant amount of misinformation sharing.

      Facebook approves ads calling for children’s deaths in Brazil, test finds

      news.movim.eu / ArsTechnica · Thursday, 19 January, 2023 - 16:36

    Brazilian President Luiz Inácio Lula da Silva kisses a child onstage at the end of a speech to supporters. (credit: Horacio Villalobos / Contributor | Corbis News)

    “Unearth all the rats that have seized power and shoot them,” read an ad approved by Facebook just days after a mob violently stormed government buildings in Brazil’s capital.

    That violence was fueled by false election interference claims, mirroring the attacks in the United States on January 6, 2021. Previously, Facebook owner Meta said it was dedicated to blocking content designed to incite more post-election violence in Brazil. Yet today, the human rights organization Global Witness published results of a test showing that Meta is seemingly still accepting ads that do exactly that.

    Global Witness submitted 16 ads to Facebook, with some calling on people to storm government buildings, others describing the election as stolen, and some even calling for the deaths of children whose parents voted for Brazil’s new president, Luiz Inácio Lula da Silva. Facebook approved all but two of the ads, a result that Global Witness digital threats campaigner Rosie Sharpe said proves Facebook is not doing enough to enforce its own ad policies restricting such violent content.

      Brazil riots trigger widespread content bans on Facebook, YouTube

      news.movim.eu / ArsTechnica · Tuesday, 10 January, 2023 - 17:44 · 1 minute

    A view of a broken window after supporters of Brazil's former President Jair Bolsonaro participated in an anti-democratic riot at Planalto Palace in Brasília, Brazil, on January 9, 2023. (credit: Anadolu Agency / Contributor | Anadolu)

    Claiming “election interference” in Brazil, thousands of rioters on Sunday broke into government buildings in the nation’s capital, Brasília. The rioters relied on social media and messaging apps to coordinate their attacks and evade government detection, The New York Times reported, following a similar “digital playbook” as those involved in the United States Capitol attack on January 6, 2021. Now, social media platforms like Facebook and YouTube have begun removing content praising the most recent attacks, Reuters reported, marking this latest anti-democratic uprising as another sensitive event requiring widespread content removal.

    Disinformation researchers told the Times that Twitter and Telegram played a central role for those involved with organizing the attacks, but Meta apps Facebook and WhatsApp were also used. Twitter has not responded to reports, but a Meta spokesperson told Ars and a Telegram spokesperson told Reuters that the companies have been cooperating with Brazilian authorities to stop content from spreading that could incite further violence. Both digital platforms confirmed an uptick in content moderation efforts starting before the election took place—with many popular social media platforms seemingly bracing for the riots after failing to quickly remove calls to violence during the US Capitol attacks.

    “In advance of the election, we designated Brazil as a temporary high-risk location and have been removing content calling for people to take up arms or forcibly invade Congress, the Presidential palace, and other federal buildings,” a Meta spokesperson told Ars. “We're also designating this as a violating event, which means we will remove content that supports or praises these actions.”
