      Teen boys use AI to make fake nudes of classmates, sparking police probe

      news.movim.eu / ArsTechnica · Thursday, 2 November, 2023 - 20:30

    Westfield High School in Westfield, NJ, in 2020. (credit: Icon Sportswire)

    This October, boys at Westfield High School in New Jersey started acting "weird," the Wall Street Journal reported. It took four days for the school to find out that the boys had been using AI image generators to create and share fake nude photos of female classmates. Police are now investigating the incident, but they're apparently working in the dark: they currently have no access to the images that could help them trace the source.

    According to an email from Westfield High School principal Mary Asfendis that the WSJ reviewed, the school "believed" the images had been deleted and were no longer circulating among students.

    It remains unclear how many students were harmed. Citing student confidentiality, a Westfield Public Schools spokesperson declined to tell the WSJ the total number of students involved or how many, if any, had been disciplined. The school has not confirmed whether faculty reviewed the images; it seemingly notified the female students allegedly targeted only after boys claiming to have seen the images identified them.

      Elon Musk’s X fined $380K over “serious” child safety concerns, watchdog says

      news.movim.eu / ArsTechnica · Monday, 16 October, 2023 - 19:08 · 1 minute

    Today, X (formerly known as Twitter) became the first platform fined under Australia's Online Safety Act. The fine comes after X failed to respond to more than a dozen key questions from Australia's eSafety Commissioner, Julie Inman Grant, who was seeking clarity on how effectively X detects and mitigates the harms of child exploitation and grooming on the platform.

    In a press release, Inman Grant said that X was given 28 days to either appeal the decision or pay the approximately $380,000 fine. While the fine seems small, the reputational ding could further hurt X's chances of persuading advertisers to increase spending on the platform, Reuters suggested. And any failure to comply or respond could trigger even more fines—with X potentially on the hook for as much as $493,402 daily for alleged non-compliance dating back to March 2023, The Guardian reported. That could quickly add up to tens of millions if X misses the Australian regulator's deadline.
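
    For a rough sense of how that daily penalty scales, here is a back-of-envelope sketch (an illustration only, not eSafety's actual penalty formula; the day counts are hypothetical, and the only source figure is the $493,402 daily maximum reported by The Guardian):

        # Back-of-envelope illustration of how the reported daily penalty accumulates.
        # The $493,402 figure comes from the article; the day counts are hypothetical.
        DAILY_PENALTY = 493_402

        for days in (30, 60, 90):
            print(f"{days} days of non-compliance -> ${days * DAILY_PENALTY:,}")

        # Output:
        # 30 days of non-compliance -> $14,802,060
        # 60 days of non-compliance -> $29,604,120
        # 90 days of non-compliance -> $44,406,180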

    “If they choose not to pay, it’s open to eSafety to take other action or to seek a civil penalty through the courts,” Inman Grant told the Sydney Morning Herald. “We’re talking about some of the most heinous crimes playing out on these platforms, committed against innocent children.”

      AI-generated child sex imagery has every US attorney general calling for action

      news.movim.eu / ArsTechnica · Wednesday, 6 September, 2023 - 21:48 · 1 minute

    A photo of the US Capitol in Washington, DC. (credit: Getty Images)

    On Wednesday, attorneys general from all 50 US states and four territories sent a letter to Congress urging lawmakers to establish an expert commission to study how generative AI can be used to exploit children through child sexual abuse material (CSAM). They also called for expanding existing laws against CSAM to explicitly cover AI-generated materials.

    "As Attorneys General of our respective States and territories, we have a deep and grave concern for the safety of the children within our respective jurisdictions," the letter reads. "And while Internet crimes against children are already being actively prosecuted, we are concerned that AI is creating a new frontier for abuse that makes such prosecution more difficult."

    In particular, open-source image synthesis technologies such as Stable Diffusion make it easy to create AI-generated pornography, and a large community has formed around tools and add-ons that enhance this ability. Because these AI models are openly available and often run locally, there are sometimes no guardrails preventing someone from creating sexualized images of children, and that has rung alarm bells among the nation's top prosecutors. (It's worth noting that Midjourney, DALL-E, and Adobe Firefly all have built-in filters that bar the creation of pornographic content.)

      Apple details reasons to abandon CSAM-scanning tool, more controversy ensues

      news.movim.eu / ArsTechnica · Saturday, 2 September, 2023 - 10:33 · 1 minute

    Apple logo obscured by foliage. (credit: Leonardo Munoz/Getty)

    In December, Apple said that it was killing an effort to design a privacy-preserving iCloud photo scanning tool for detecting child sexual abuse material (CSAM) on the platform. Originally announced in August 2021, the project had been controversial since its inception. Apple first paused it that September in response to concerns from digital rights groups and researchers that such a tool would inevitably be abused and exploited to compromise the privacy and security of all iCloud users. This week, a new child safety group known as Heat Initiative told Apple that it is organizing a campaign to demand that the company “detect, report, and remove” child sexual abuse material from iCloud and offer more tools for users to report CSAM to the company.

    Today, in a rare move, Apple responded to Heat Initiative, outlining its reasons for abandoning the development of its iCloud CSAM scanning feature and instead focusing on a set of on-device tools and resources for users known collectively as “Communication Safety” features. The company's response to Heat Initiative, which Apple shared with WIRED this morning, offers a rare look not just at its rationale for pivoting to Communication Safety, but at its broader views on creating mechanisms to circumvent user privacy protections, such as encryption, to monitor data. This stance is relevant to the encryption debate more broadly, especially as countries like the United Kingdom weigh passing laws that would require tech companies to be able to access user data to comply with law enforcement requests.

    “Child sexual abuse material is abhorrent and we are committed to breaking the chain of coercion and influence that makes children susceptible to it,” Erik Neuenschwander, Apple's director of user privacy and child safety, wrote in the company's response to Heat Initiative. He added, though, that after collaborating with an array of privacy and security researchers, digital rights groups, and child safety advocates, the company concluded that it could not proceed with development of a CSAM-scanning mechanism, even one built specifically to preserve privacy.

      Musk stiffing Google could unleash yet more abuse on Twitter, report says

      news.movim.eu / ArsTechnica · Monday, 12 June, 2023 - 17:03

    In what might be another blow to the stability of Twitter's trust and safety efforts, the company has allegedly stopped paying for Google Cloud and Amazon Web Services (AWS), which host tools that support the platform's safety measures, Platformer reported this weekend.

    According to Platformer, Twitter relies on Google Cloud to host services "related to fighting spam, removing child sexual abuse material, and protecting accounts, among other things." That contract is up for renewal at the end of this month after being negotiated and signed prior to Elon Musk's takeover. Since "at least" March, Twitter has been pushing to renegotiate the contract ahead of renewal—unsurprisingly seeking to lower costs, Platformer reported.

    But it's now unclear whether the companies will agree on new terms in time or whether Musk already intends to cancel the contract. Platformer reported that Twitter is rushing to transition services off the Google Cloud Platform and seemingly plans to drop the contract amid failed negotiations.

      Damning probes find Instagram is key link connecting pedophile rings

      news.movim.eu / ArsTechnica · Thursday, 8 June, 2023 - 15:22

    Instagram has emerged as the most important platform for buyers and sellers of underage sex content, according to investigations from the Wall Street Journal, Stanford Internet Observatory, and the University of Massachusetts Amherst (UMass) Rescue Lab.

    While other platforms play a role in processing payments and delivering content, Instagram is where hundreds of thousands—and perhaps millions—of users search explicit hashtags to uncover illegal "menus" of content that can then be commissioned. Content on offer includes disturbing imagery of children self-harming, "incest toddlers," and minors performing sex acts with animals, as well as opportunities for buyers to arrange illicit meetups with children, the Journal reported.

    Because the child sexual abuse material (CSAM) itself is not hosted on Instagram, platform owner Meta has a harder time detecting and removing these users. Researchers found that even when Meta's trust and safety team does ban users, its efforts are "directly undercut" by Instagram's recommendation system, which allows the networks to quickly reassemble under "backup" accounts that are usually listed in the bios of the original accounts precisely so they can survive bans.

      Reddit cracked down on revenge porn, creepshots with twofold spike in permabans

      news.movim.eu / ArsTechnica · Wednesday, 29 March, 2023 - 18:20

    A year after Reddit updated its policy on non-consensual intimate image (NCII) sharing—a category that includes everything from revenge porn to voyeurism and accidental nip slips—the social media platform has announced that it has gotten much better at detecting and removing this kind of content. Reddit has also launched a transparency center where users can more easily assess Reddit's ongoing efforts to make the platform safer.

    According to Reddit’s 2022 Transparency Report—which tracks various “ongoing efforts to keep Reddit safe, healthy, and real”—last year Reddit removed much more NCII than it did in 2021. The latest report shows that Reddit removed 473 percent more subreddits and permanently suspended 244 percent more user accounts found to be violating community guidelines by sharing non-consensual intimate media. Previously, Reddit labeled NCII as "involuntary pornography," and the 2022 report still uses that label, reporting that a total of 187,258 posts were removed. That total includes non-consensual AI-generated deepfakes, also known as “lookalike” pornography.
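
    A note on reading those figures: "473 percent more" means roughly 5.7 times the 2021 count, not 473 percent of it. A minimal sketch, using a hypothetical 2021 baseline since the report cites growth percentages rather than raw counts for both years:

        # "X percent more" multiplies the baseline by (1 + X/100).
        def percent_more(baseline: float, pct_more: float) -> float:
            return baseline * (1 + pct_more / 100)

        # Hypothetical 2021 baseline of 1,000 subreddit removals (assumption):
        print(percent_more(1_000, 473))  # -> 5730.0, about 5.7x the baseline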

    “It’s likely this increase is primarily reflective of our updated policies and increased effectiveness in detecting and removing non-consensual intimate media from Reddit,” the transparency report said.

      Twitter suspended 400K for child abuse content but only reported 8K to police

      news.movim.eu / ArsTechnica · Monday, 6 February, 2023 - 20:01

    Last week, Twitter Safety tweeted that the platform is now “moving faster than ever” to remove child sexual abuse materials (CSAM). It seems, however, that’s not entirely accurate. Child safety advocates told The New York Times that after Elon Musk took over, Twitter started taking twice as long to remove CSAM flagged by various organizations.

    The platform has since improved and is now removing CSAM almost as fast as it was before Musk’s takeover—responding to reports in less than two days—The Times reported. But there still seem to be issues with its CSAM reporting system that continue to delay response times. In one concerning case, a Canadian organization spent a week notifying Twitter daily—as the illegal imagery of a victim younger than 10 spread unchecked—before Twitter finally removed the content.

    "From our standpoint, every minute that that content's up, it's re-victimizing that child," Gavin Portnoy, vice president of communications for the National Center for Missing and Exploited Children (NCMEC), told Ars. "That's concerning to us."

      Former Trump official led feds to Telegram group livestreaming child abuse

      news.movim.eu / ArsTechnica · Friday, 3 February, 2023 - 19:24

    Recently unsealed Cook County court documents reveal how federal investigators in 2020 gained access to encrypted Telegram messages to uncover “a cross-country network of people sexually exploiting children.”

    The Chicago Sun-Times reported that Homeland Security Investigations (HSI) agents based in Arizona launched “Operation Swipe Left” in 2020 to investigate claims of kidnapping, livestreaming of child abuse, and production and distribution of child sexual abuse materials (CSAM). That investigation led to criminal charges against at least 17 people. Most of the defendants lived in Arizona, but others charged were residents of Illinois, Wisconsin, Washington, DC, California, and South Africa. Ten children were rescued, including four who were actively being abused at the time of the rescue. The youngest victim identified was 6 months old, and the oldest was 17 years old.

    Telegram became a preferred tool for the defendants in this investigation, many of whom believed that police could never access their encrypted messages. At least one federal prosecutor told a judge that authorities never would have gained access on their own; however, one of the defendants, Adam Hageman, “fully cooperated” with investigators and granted them access to the offending Telegram groups through his account.
