Latest Twitter tweak to test what happens when users downvote replies
news.movim.eu / ArsTechnica · Friday, 4 February - 16:45
Twitter, long known for its slow and careful evolution of its core product, the tweet, is rolling out a worldwide test that would allow users to downvote replies, a feature that could significantly change how the service works.
The concept of downvoting posts and comments has been a staple of the Internet for decades, appearing on sites such as Slashdot, Reddit, and Ars Technica. The concept is simple—users who take issue with a post can vote it down.
“We learned a lot about the types of replies you don’t find relevant and we’re expanding this test—more of you on web and soon iOS and Android will have the option to use reply downvoting,” Twitter said in a tweet.
Bill proposes algorithm-free option on Big Tech platforms, may portend bigger steps
news.movim.eu / ArsTechnica · Tuesday, 9 November, 2021 - 21:31
A bipartisan group of lawmakers in the House of Representatives introduced a bill that would force social media platforms to allow people to use the site without algorithms that filter or prioritize the content that users see. The bill joins a similar act proposed in the Senate, and together, the bills suggest that lawmaker animus toward social media companies isn’t going away.
“Consumers should have the option to engage with Internet platforms without being manipulated by secret algorithms driven by user-specific data,” Rep. Ken Buck (R-Colo.) said in a statement to Ars. Buck introduced the bill with three cosponsors, Reps. David Cicilline (D-R.I.), Lori Trahan (D-Mass.), and Burgess Owens (R-Utah).
“Facebook and other dominant platforms manipulate their users through opaque algorithms that prioritize growth and profit over everything else,” Rep. Cicilline said in a statement. “And due to these platforms’ monopoly power and dominance, users are stuck with few alternatives to this exploitative business model, whether it is in their social media feed, on paid advertisements, or in their search results.”
After tagging people for 10 years, Facebook to stop most uses of facial recognition
news.movim.eu / ArsTechnica · Tuesday, 2 November, 2021 - 21:58
Facebook introduced facial recognition in 2010, allowing users to automatically tag people in photos. The feature was intended to ease photo sharing by eliminating a tedious task for users. But over the years, facial recognition became a headache for the company itself—it drew regulatory scrutiny along with lawsuits and fines that have cost the company hundreds of millions of dollars.
Today, Facebook (which recently renamed itself Meta) announced that it would be shutting down its facial recognition system and deleting the facial recognition templates of more than 1 billion people.
The change, while significant, doesn't mean that Facebook is forswearing the technology entirely. "Looking ahead, we still see facial recognition technology as a powerful tool, for example, for people needing to verify their identity, or to prevent fraud and impersonation," said Jerome Pesenti, Facebook/Meta's vice president of artificial intelligence. "We believe facial recognition can help for products like these with privacy, transparency and control in place, so you decide if and how your face is used. We will continue working on these technologies and engaging outside experts."
Algorithms shouldn’t be protected by Section 230, Facebook whistleblower tells Senate
news.movim.eu / ArsTechnica · Wednesday, 6 October, 2021 - 15:39
Facebook whistleblower Frances Haugen testified before a Senate panel yesterday, recommending a slate of changes to rein in the company, including a Section 230 overhaul that would hold the social media giant responsible for its algorithms that promote content based on the engagement it receives in users' news feeds.
“If we had appropriate oversight, or if we reformed [Section] 230 to make Facebook responsible for the consequences of their intentional ranking decisions, I think they would get rid of engagement-based ranking,” Haugen said. “Because it is causing teenagers to be exposed to more anorexia content, it is pulling families apart, and in places like Ethiopia, it’s literally fanning ethnic violence.”
Haugen made sure to distinguish between user-generated content and Facebook’s algorithms, which prioritize the content in news feeds and drive engagement. She suggested that Facebook should not be responsible for content that users post on its platforms but that it should be held liable once its algorithms begin making decisions about which content people see.
A new formula may help Black patients’ access to kidney care
news.movim.eu / ArsTechnica · Saturday, 25 September, 2021 - 11:22
For decades, doctors and hospitals saw kidney patients differently based on their race. A standard equation for estimating kidney function applied a correction for Black patients that made their health appear rosier, inhibiting access to transplants and other treatments.
On Thursday, a task force assembled by two leading kidney care societies said the practice is unfair and should end.
The group, a collaboration between the National Kidney Foundation and the American Society of Nephrology, recommended use of a new formula that does not factor in a patient’s race. In a statement, Paul Palevsky, the foundation’s president, urged “all laboratories and health care systems nationwide to adopt this new approach as rapidly as possible.” That call is significant because recommendations and guidelines from professional medical societies play a powerful role in shaping how specialists care for patients.
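For context, the race “correction” at issue appears in the widely used 2009 CKD-EPI creatinine equation; the coefficients below are reproduced from the general medical literature, not from the article itself:

```latex
\mathrm{eGFR} = 141 \times \min\!\left(\frac{S_{cr}}{\kappa}, 1\right)^{\alpha}
\times \max\!\left(\frac{S_{cr}}{\kappa}, 1\right)^{-1.209}
\times 0.993^{\mathrm{Age}}
\times 1.018\ [\text{if female}]
\times 1.159\ [\text{if Black}]
```

where $S_{cr}$ is serum creatinine in mg/dL, $\kappa$ is 0.7 for women and 0.9 for men, and $\alpha$ is $-0.329$ for women and $-0.411$ for men. The final multiplier raised estimated kidney function for Black patients by about 16 percent; the task force’s recommended 2021 refit of the equation drops that race term entirely.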
California Senate passes warehouse workers bill, taking aim at Amazon
news.movim.eu / ArsTechnica · Saturday, 11 September, 2021 - 10:00
Warehouse workers in California are one step closer to being able to pee in peace. Yesterday, the state Senate voted 26-11 to pass AB 701, a bill aimed squarely at Amazon and other warehousing companies that track worker productivity. The bill would prevent employers from counting health and safety law compliance—and yes, bathroom breaks—against warehouse workers’ productive time, which is increasingly governed by algorithms. The bill, which organizers call the first in the nation to address the future of algorithmic work, is now en route to Governor Gavin Newsom’s desk for signature.
Although some observers expect Newsom to sign the bill given his record on other pro-worker legislation, such as AB 5, he has thus far remained mum on AB 701. When asked about his intentions, Newsom’s office demurred, saying only, “The bill will be evaluated on its merits when it reaches the governor’s desk.” (The governor is currently fending off a recall election, which takes place September 14.)
AB 701’s passage came as welcome news to advocates like Yesenia Barrera, a former seasonal Amazon worker who traveled to Sacramento to campaign for the bill, helping stage a mock assembly line on the steps of the state capitol. Barrera staffed the company’s Rialto, California, fulfillment center for five months until her termination in 2019. When she was hired, she didn’t realize the rigidity of the productivity system or the extent of Amazon’s camera- and barcode-based employee tracking matrix. She assumed only slackers got fired.
Now that machines can learn, can they unlearn?
news.movim.eu / ArsTechnica · Saturday, 21 August, 2021 - 10:55
Companies of all kinds use machine learning to analyze people’s desires, dislikes, or faces. Some researchers are now asking a different question: How can we make machines forget?
A nascent area of computer science dubbed machine unlearning seeks ways to induce selective amnesia in artificial intelligence software. The goal is to remove all trace of a particular person or data point from a machine learning system, without affecting its performance.
If made practical, the concept could give people more control over their data and the value derived from it. Although users can already ask some companies to delete personal data, they are generally in the dark about what algorithms their information helped tune or train. Machine unlearning could make it possible for a person to withdraw both their data and a company’s ability to profit from it.
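One concrete line of work toward this goal is “exact unlearning” via sharding, popularized by the SISA framework: split the training data into shards, train an isolated model per shard, and aggregate their predictions, so that deleting one record only requires retraining the single shard that held it. The toy class below is an illustrative sketch (per-shard mean “models” stand in for real learners; all names are hypothetical, not from the article):

```python
class ShardedMeanModel:
    """Toy SISA-style ensemble: one trivial model (a mean) per data shard.

    Unlearning a training point retrains only the shard that contained it,
    rather than the whole ensemble.
    """

    def __init__(self, n_shards=4):
        self.n_shards = n_shards
        self.shards = [[] for _ in range(n_shards)]  # raw data per shard
        self.means = [0.0] * n_shards                # per-shard "models"

    def _shard_of(self, x):
        # Deterministic assignment of a point to a shard.
        return hash(x) % self.n_shards

    def _retrain(self, i):
        # Retrain a single shard's model from its remaining data.
        s = self.shards[i]
        self.means[i] = sum(s) / len(s) if s else 0.0

    def fit(self, data):
        for x in data:
            self.shards[self._shard_of(x)].append(x)
        for i in range(self.n_shards):
            self._retrain(i)

    def predict(self):
        # Aggregate: average the models of non-empty shards.
        active = [m for m, s in zip(self.means, self.shards) if s]
        return sum(active) / len(active) if active else 0.0

    def unlearn(self, x):
        # Remove the point and retrain only its shard.
        i = self._shard_of(x)
        self.shards[i].remove(x)
        self._retrain(i)
```

Because each shard is trained in isolation, a model that has `unlearn`-ed a point is bit-for-bit the model you would get by retraining from scratch without that point, which is exactly the guarantee exact unlearning aims for, at the cost of ensemble-style aggregation.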