
      Snapchat isn’t liable for connecting 12-year-old to convicted sex offenders

      news.movim.eu / ArsTechnica · Thursday, 22 February - 19:56

    (credit: Bloomberg / Contributor | Bloomberg)

    A judge has dismissed a complaint from a parent and guardian of a girl, now 15, who was sexually assaulted when she was 12 years old after Snapchat recommended that she connect with convicted sex offenders.

    According to the court filing, the abuse that the girl, C.O., experienced on Snapchat happened soon after she signed up for the app in 2019. Through its "Quick Add" feature, Snapchat "directed her" to connect with "a registered sex offender using the profile name JASONMORGAN5660." After a little more than a week on the app, C.O. was bombarded with inappropriate images and subjected to sextortion and threats before the adult user pressured her to meet up, then raped her. Cops arrested the adult user the next day, resulting in his incarceration, but his Snapchat account remained active for three years despite reports of harassment, the complaint alleged.

    Two years later, at 14, C.O. connected with another convicted sex offender on Snapchat, a former police officer who offered to give C.O. a ride to school and then sexually assaulted her. The second offender is also currently incarcerated, the judge's opinion noted.



      Backdoors that let cops decrypt messages violate human rights, EU court says

      news.movim.eu / ArsTechnica · Wednesday, 14 February - 19:49

    Building of the European Court of Human Rights in Strasbourg, France. (credit: SilvanBachmann | iStock / Getty Images Plus)

    The European Court of Human Rights (ECHR) has ruled that weakening end-to-end encryption disproportionately risks undermining human rights. The international court's decision could disrupt the European Commission's proposed plans to require email and messaging service providers to create backdoors that would allow law enforcement to easily decrypt users' messages.

    This ruling came after Russia's intelligence agency, the Federal Security Service (FSS), began requiring Telegram in 2017 to share users' encrypted messages to deter "terrorism-related activities," ECHR's ruling said. A Russian Telegram user alleged that FSS's requirement violated his rights to a private life and private communications, as well as the rights of all Telegram users.

    The Telegram user, apparently disturbed by the requirement, moved to block the required disclosures after Telegram refused to comply with an FSS order to decrypt messages from six users suspected of terrorism. According to Telegram, "it was technically impossible to provide the authorities with encryption keys associated with specific users," and therefore, "any disclosure of encryption keys" would affect the "privacy of the correspondence of all Telegram users," the ECHR's ruling said.



      Cops bogged down by flood of fake AI child sex images, report says

      news.movim.eu / ArsTechnica · Wednesday, 31 January - 22:08 · 1 minute

    (credit: SB Arts Media | iStock / Getty Images Plus)

    Law enforcement is continuing to warn that a "flood" of AI-generated fake child sex images is making it harder to investigate real crimes against abused children, The New York Times reported.

    Last year, after researchers uncovered thousands of realistic but fake AI child sex images online, every attorney general across the US quickly called on Congress to set up a committee to squash the problem. But so far, Congress has moved slowly, while only a few states have specifically banned AI-generated non-consensual intimate imagery. Meanwhile, law enforcement continues to struggle to figure out how to confront bad actors found to be creating and sharing images that, for now, largely exist in a legal gray zone.

    “Creating sexually explicit images of children through the use of artificial intelligence is a particularly heinous form of online exploitation,” Steve Grocki, the chief of the Justice Department’s child exploitation and obscenity section, told The Times. Experts told The Washington Post in 2023 that risks of realistic but fake images spreading included normalizing child sexual exploitation, luring more children into harm's way, and making it harder for law enforcement to find actual children being harmed.



      Zuckerberg says sorry for Meta harming kids—but rejects payments to families

      news.movim.eu / ArsTechnica · Wednesday, 31 January - 18:50

    Mark Zuckerberg discussed Meta's approaches to child safety at the Senate Judiciary Committee hearing on January 31, 2024.

    During a Senate Judiciary Committee hearing weighing child safety solutions on social media, Meta CEO Mark Zuckerberg stopped to apologize to families of children who committed suicide or experienced mental health issues after using Facebook and Instagram.

    "I’m sorry for everything you have all been through," Zuckerberg told families. "No one should go through the things that your families have suffered, and this is why we invest so much, and we are going to continue doing industry-wide efforts to make sure no one has to go through the things your families have had to suffer."

    This was seemingly the first time that Zuckerberg had personally apologized to families. It happened after Senator Josh Hawley (R-Mo.) asked Zuckerberg if he had ever apologized and suggested that the Meta CEO personally set up a compensation fund to help the families get counseling.



      Child abusers are covering their tracks with better use of crypto

      news.movim.eu / ArsTechnica · Friday, 12 January - 14:47

    Silhouette of a child. (credit: Naufal MQ via Getty Images)

    For those who trade in child sexual exploitation images and videos in the darkest recesses of the Internet, cryptocurrency has been both a powerful tool and a treacherous one. Bitcoin, for instance, has allowed denizens of that criminal underground to buy and sell their wares with no involvement from a bank or payment processor that might reveal their activities to law enforcement. But the public and surprisingly traceable transactions recorded in Bitcoin's blockchain have sometimes led financial investigators directly to pedophiles’ doorsteps.

    Now, after years of evolution in that grim cat-and-mouse game, new evidence suggests that online vendors of what was once commonly called “child porn” are learning to use cryptocurrency with significantly more skill and stealth—and that it's helping them survive longer in the Internet's most abusive industry.



      Teen boys use AI to make fake nudes of classmates, sparking police probe

      news.movim.eu / ArsTechnica · Thursday, 2 November - 20:30

    Westfield High School in Westfield, NJ, in 2020. (credit: Icon Sportswire / Contributor | Icon Sportswire)

    This October, boys at Westfield High School in New Jersey started acting "weird," the Wall Street Journal reported. It took four days before the school found out that the boys had been using AI image generators to create and share fake nude photos of female classmates. Now, police are investigating the incident, but they're apparently working in the dark because they currently have no access to the images to help them trace the source.

    According to an email that the WSJ reviewed from Westfield High School principal Mary Asfendis, the school "believed" that the images had been deleted and were no longer in circulation among students.

    It remains unclear how many students were harmed. A Westfield Public Schools spokesperson cited student confidentiality when declining to tell the WSJ the total number of students involved or how many, if any, had been disciplined. The school has not confirmed whether faculty reviewed the images, and it seemingly only notified the allegedly targeted female students once they had been identified by boys claiming to have seen the images.



      Elon Musk’s X fined $380K over “serious” child safety concerns, watchdog says

      news.movim.eu / ArsTechnica · Monday, 16 October - 19:08 · 1 minute

    (credit: Chesnot / Contributor | Getty Images Europe)

    Today, X (formerly known as Twitter) became the first platform fined under Australia's Online Safety Act. The fine comes after X failed to respond to more than a dozen key questions from Australia's eSafety Commissioner Julie Inman Grant, who was seeking clarity on how effectively X detects and mitigates harms of child exploitation and grooming on the platform.

    In a press release, Inman Grant said that X was given 28 days to either appeal the decision or pay the approximately $380,000 fine. While the fine seems small, the reputational ding could further hurt X's chances of persuading advertisers to increase spending on the platform, Reuters suggested. And any failure to comply or respond could trigger even more fines—with X potentially on the hook for as much as $493,402 daily for alleged non-compliance dating back to March 2023, The Guardian reported. That could quickly add up to tens of millions if X misses the Australian regulator's deadline.

    “If they choose not to pay, it’s open to eSafety to take other action or to seek a civil penalty through the courts,” Inman Grant told the Sydney Morning Herald. “We’re talking about some of the most heinous crimes playing out on these platforms, committed against innocent children.”



      AI-generated child sex imagery has every US attorney general calling for action

      news.movim.eu / ArsTechnica · Wednesday, 6 September, 2023 - 21:48 · 1 minute

    A photo of the US Capitol in Washington, DC. (credit: Getty Images)

    On Wednesday, American attorneys general from all 50 states and four territories sent a letter to Congress urging lawmakers to establish an expert commission to study how generative AI can be used to exploit children through child sexual abuse material (CSAM). They also called for expanding existing laws against CSAM to explicitly cover AI-generated materials.

    "As Attorneys General of our respective States and territories, we have a deep and grave concern for the safety of the children within our respective jurisdictions," the letter reads. "And while Internet crimes against children are already being actively prosecuted, we are concerned that AI is creating a new frontier for abuse that makes such prosecution more difficult."

    In particular, open source image synthesis technologies such as Stable Diffusion allow the creation of AI-generated pornography with ease, and a large community has formed around tools and add-ons that enhance this ability. Since these AI models are openly available and often run locally, there are sometimes no guardrails preventing someone from creating sexualized images of children, and that has rung alarm bells among the nation's top prosecutors. (It's worth noting that Midjourney, DALL-E, and Adobe Firefly all have built-in filters that bar the creation of pornographic content.)



      Apple details reasons to abandon CSAM-scanning tool, more controversy ensues

      news.movim.eu / ArsTechnica · Saturday, 2 September, 2023 - 10:33 · 1 minute

    Apple logo obscured by foliage. (credit: Leonardo Munoz/Getty)

    In December, Apple said that it was killing an effort to design a privacy-preserving iCloud photo scanning tool for detecting child sexual abuse material (CSAM) on the platform. Originally announced in August 2021, the project had been controversial since its inception. Apple first paused it that September in response to concerns from digital rights groups and researchers that such a tool would inevitably be abused and exploited to compromise the privacy and security of all iCloud users. This week, a new child safety group known as Heat Initiative told Apple that it is organizing a campaign to demand that the company “detect, report, and remove” child sexual abuse material from iCloud and offer more tools for users to report CSAM to the company.


    Today, in a rare move, Apple responded to Heat Initiative, outlining its reasons for abandoning the development of its iCloud CSAM scanning feature and instead focusing on a set of on-device tools and resources for users known collectively as “Communication Safety” features. The company's response to Heat Initiative, which Apple shared with WIRED this morning, offers a rare look not just at its rationale for pivoting to Communication Safety, but at its broader views on creating mechanisms to circumvent user privacy protections, such as encryption, to monitor data. This stance is relevant to the encryption debate more broadly, especially as countries like the United Kingdom weigh passing laws that would require tech companies to be able to access user data to comply with law enforcement requests.

    “Child sexual abuse material is abhorrent and we are committed to breaking the chain of coercion and influence that makes children susceptible to it,” Erik Neuenschwander, Apple's director of user privacy and child safety, wrote in the company's response to Heat Initiative. He added, though, that after collaborating with an array of privacy and security researchers, digital rights groups, and child safety advocates, the company concluded that it could not proceed with development of a CSAM-scanning mechanism, even one built specifically to preserve privacy.
