      UK lawmakers vote to jail tech execs who fail to protect kids online

      news.movim.eu / ArsTechnica · Tuesday, 17 January, 2023 - 16:16 · 1 minute

    (Image credit: ilkercelik | E+)

    The United Kingdom wants to become the safest place for children to grow up online. Many UK lawmakers have argued that the only way to guarantee that future is to criminalize tech leaders whose platforms knowingly fail to protect children. Today, the UK House of Commons reached a deal to appease those lawmakers, Reuters reports, with Prime Minister Rishi Sunak’s government agreeing to modify the Online Safety Bill to ensure its passage. It now appears that tech company executives found to be "deliberately" exposing children to harmful content could soon risk steep fines and jail time of up to two years.

    The agreement was reached during the bill's remaining stages before a vote in the House of Commons. The bill now moves on to review by the House of Lords, where the BBC reports it will “face a lengthy journey.” Sunak says he will add the new terms before the bill reaches the Lords, where lawmakers will have further opportunities to amend the wording.

    Reports say that tech executives responsible for platforms hosting user-generated content would only be liable if they fail to take “proportionate measures” to prevent exposing children to harmful content, such as materials featuring child sexual abuse, child abuse, eating disorders, and self-harm. To avoid jail time and fines of up to 10 percent of a company's global revenue, tech companies can take measures such as adding age verification, providing parental controls, and policing content.

      Tweets glorifying self-harm have grown 500% since October, report says

      news.movim.eu / ArsTechnica · Tuesday, 30 August, 2022 - 17:50 · 1 minute

    (Image credit: NurPhoto / Contributor | NurPhoto)

    Even though Twitter's terms of service explicitly ban posts glorifying self-harm and media depicting "visible wounds," independent researchers report that Twitter all too often appears to look the other way. Researchers from the Network Contagion Research Institute (NCRI) estimate there are "certainly" thousands, and possibly "hundreds of thousands," of users regularly violating these terms without any enforcement by Twitter. The result of Twitter's alleged inaction: Since October, posts using self-harm hashtags have seen "prolific growth."

    According to reports, Twitter was publicly alerted to issues with self-harm content moderation as early as last October. That's when a UK charity dedicated to children's digital rights, 5Rights, reported to a UK regulator that there was a major problem with Twitter's algorithmic recommendation system. 5Rights' research found that Twitter's algorithm "was steering accounts with child-aged avatars searching the words 'self-harm' to Twitter users who were sharing photographs and videos of cutting themselves."

    In October, Twitter told the Financial Times that "It is against the Twitter rules to promote, glorify, or encourage suicide and self-harm. Our number-one priority is the safety of the people who use our service. If tweets are in violation of our rules on suicide and self-harm and glorification of violence, we take decisive and appropriate enforcement action."
