
      AI cannot be used to deny health care coverage, feds clarify to insurers

      news.movim.eu / ArsTechnica · Thursday, 8 February - 23:31 · 1 minute

    [Image: A nursing home resident is pushed along a corridor by a nurse. (credit: Getty | Marijan Murat)]

    Health insurance companies cannot use algorithms or artificial intelligence to determine care or deny coverage to members on Medicare Advantage plans, the Centers for Medicare & Medicaid Services (CMS) clarified in a memo sent to all Medicare Advantage insurers.

    The memo—formatted like an FAQ on Medicare Advantage (MA) plan rules—comes just months after patients filed lawsuits claiming that UnitedHealth and Humana have been using a deeply flawed, AI-powered tool to deny care to elderly patients on MA plans. The lawsuits, which seek class-action status, center on the same AI tool, called nH Predict, used by both insurers and developed by NaviHealth, a UnitedHealth subsidiary.

    According to the lawsuits, nH Predict produces draconian estimates for how long a patient will need post-acute care in facilities like skilled nursing homes and rehabilitation centers after an acute injury, illness, or event, like a fall or a stroke. NaviHealth employees face discipline for deviating from the estimates, even though they often don't match prescribing physicians' recommendations or Medicare coverage rules. For instance, while MA plans typically provide up to 100 days of covered care in a nursing home after a three-day hospital stay, the lawsuits allege that under nH Predict, patients on UnitedHealth's MA plans rarely stay in nursing homes more than 14 days before receiving payment denials.



      Twitter posts the code it claims determines which tweets people see, and why

      news.movim.eu / ArsTechnica · Friday, 31 March, 2023 - 22:24

    [Image: Section of Twitter's source code, displayed at an angle.]

    Twitter has made good on one of CEO Elon Musk's many promises, posting on Friday afternoon what it claims is the code for its tweet recommendation algorithm on GitHub.

    The code, posted under the GNU Affero General Public License v3.0, contains numerous insights into which factors make a tweet more or less likely to show up in users' timelines.

    In a blog post accompanying the code release, Twitter's engineering team (under no particular byline) notes that the system for determining the "top Tweets that ultimately show up on your device's For You timeline" is "composed of many interconnected services and jobs." Each time a Twitter home screen is refreshed, Twitter pulls "the best 1,500 Tweets from a pool of hundreds of millions," the post states.
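
    That description maps onto a familiar two-stage recommender pattern: retrieve a bounded candidate set, then score and rank it. Below is a minimal sketch of that pattern in Python; the candidate source, the engagement features, and the weights are illustrative assumptions, not Twitter's actual services or model.

        from dataclasses import dataclass

        @dataclass
        class Tweet:
            tweet_id: int
            author_followed: bool  # in-network vs. out-of-network (assumed signal)
            likes: int
            replies: int

        def fetch_candidates(user_id: int, limit: int = 1500) -> list[Tweet]:
            """Stand-in for the retrieval stage that narrows 'hundreds of
            millions' of tweets to a bounded working set; stubbed here."""
            pool = [
                Tweet(1, author_followed=True, likes=12, replies=3),
                Tweet(2, author_followed=False, likes=900, replies=40),
                Tweet(3, author_followed=True, likes=2, replies=0),
            ]
            return pool[:limit]

        def score(tweet: Tweet) -> float:
            # Toy engagement-weighted score; the real ranker is a learned model.
            network_boost = 1.0 if tweet.author_followed else 0.5
            return network_boost * (tweet.likes + 2.0 * tweet.replies)

        def build_for_you(user_id: int, page_size: int = 2) -> list[Tweet]:
            # Stage 1: retrieve candidates. Stage 2: rank and truncate.
            candidates = fetch_candidates(user_id)
            return sorted(candidates, key=score, reverse=True)[:page_size]

        if __name__ == "__main__":
            for tweet in build_for_you(user_id=42):
                print(tweet.tweet_id, score(tweet))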



      Report: More Twitter drama after Slack shutdown; employees play hooky

      news.movim.eu / ArsTechnica · Friday, 24 February, 2023 - 21:31

    [Image credit: Anadolu Agency / Contributor | Anadolu]

    On Wednesday and Thursday, Twitter’s internal Slack channels were suddenly shut down. Platformer reported that the company manually shut the service off. Before that was confirmed, a Twitter employee posting on the anonymous workplace chat app Blind speculated that Twitter might instead have lost access because it had stopped paying its Slack bills.

    Whatever the reason for Twitter’s decision to remove Slack access, it resulted in a very unproductive workday for some Twitter employees, who were suddenly unable to communicate, Platformer reported. At the same time that employees lost Slack access, they also couldn’t access Jira, the tracking software that Platformer said engineers use to ship code and monitor progress on new features. Rather than being equipped to go “hardcore,” some decided to just take the day off. Other employees took two days off.

    Apparently, Twitter told employees that the Slack channel was down for “routine maintenance,” but a Slack employee told Platformer that was “bullshit.”



      DOJ probes AI tool that’s allegedly biased against families with disabilities

      news.movim.eu / ArsTechnica · Tuesday, 31 January, 2023 - 19:50 · 1 minute

    [Image credit: d3sign | Moment]

    Since 2016, social workers in a Pennsylvania county have relied on an algorithm to help them determine which child welfare calls warrant further investigation. Now, the Justice Department is reportedly scrutinizing the controversial family-screening tool over concerns that its use may violate the Americans with Disabilities Act by allegedly discriminating against families with disabilities, including families with mental health issues, The Associated Press reported.

    Three anonymous sources broke their confidentiality agreements with the Justice Department, confirming to AP that civil rights attorneys have been fielding complaints since last fall and have grown increasingly concerned about alleged biases built into the Allegheny County Family Screening Tool. While the full scope of the Justice Department’s scrutiny is unknown, the Civil Rights Division is seemingly interested in learning more about how the data-driven tool could be hardening historical, systemic biases against people with disabilities.

    The county describes its predictive risk-modeling tool as a preferred resource for reducing human error, with social workers benefiting from the algorithm’s rapid analysis of “hundreds of data elements for each person involved in an allegation of child maltreatment.” That includes “data points tied to disabilities in children, parents, and other members of local households,” Allegheny County told AP. Those data points contribute to an overall risk score that helps determine whether a child should be removed from their home.
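
    As a rough illustration of the pattern the county describes (many data elements per person feeding a single risk score that informs a screening decision), here is a weighted-sum sketch in Python. The feature names, weights, and threshold are invented for the example; the county's actual model has not been published in this form. The sketch also shows the mechanical source of the concern: any positive weight on a disability-linked data element raises a family's overall score.

        # Invented features and weights, for illustration only.
        FEATURE_WEIGHTS = {
            "prior_referrals": 0.8,
            "public_benefits_records": 0.3,
            "household_size": 0.1,
            "behavioral_health_records": 0.5,  # a disability-linked data point of the kind at issue
        }

        def risk_score(record: dict[str, float]) -> float:
            # Weighted sum over whichever data elements are present for a household.
            return sum(FEATURE_WEIGHTS[key] * value
                       for key, value in record.items() if key in FEATURE_WEIGHTS)

        def flag_for_investigation(record: dict[str, float], threshold: float = 2.0) -> bool:
            # A score at or above the threshold suggests the call warrants follow-up.
            return risk_score(record) >= threshold

        if __name__ == "__main__":
            household = {"prior_referrals": 2, "behavioral_health_records": 1}
            print(risk_score(household))              # 2.1
            print(flag_for_investigation(household))  # True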



      Algorithms quietly run the city of DC—and maybe your hometown

      news.movim.eu / ArsTechnica · Sunday, 6 November, 2022 - 12:39

    [Image credit: Dmitry Marchenko/Getty Images]

    Washington, DC, is the home base of the most powerful government on earth. It’s also home to 690,000 people—and 29 obscure algorithms that shape their lives. City agencies use automation to screen housing applicants, predict criminal recidivism, identify food assistance fraud, determine whether a high schooler is likely to drop out, inform sentencing decisions for young people, and handle many other tasks.

    That snapshot of semiautomated urban life comes from a new report from the Electronic Privacy Information Center (EPIC). The nonprofit spent 14 months investigating the city’s use of algorithms and found they were used across 20 agencies, with more than a third deployed in policing or criminal justice. For many systems, city agencies would not provide full details of how their technology worked or was used. The project team concluded that the city is likely using still more algorithms it was not able to uncover.


    The findings are notable beyond DC because they add to the evidence that many cities have quietly put bureaucratic algorithms to work across their departments, where they can contribute to decisions that affect citizens’ lives.



      Section 230 shields TikTok in child’s “Blackout Challenge” death lawsuit

      news.movim.eu / ArsTechnica · Thursday, 27 October, 2022 - 19:05

    [Image credit: Anadolu Agency / Contributor | Anadolu Agency]

    As lawsuits continue piling up against social media platforms for allegedly causing harms to children, a Pennsylvania court has ruled that TikTok is not liable in one case where a 10-year-old named Nylah Anderson died after attempting to complete a “Blackout Challenge” she discovered on her “For You” page.

    The challenge recommends that users choke themselves until they pass out, and Nylah’s mother, Tawainna Anderson, initially claimed that TikTok’s defective algorithm was responsible for knowingly feeding the deadly video to her child. The mother hoped that Section 230 protections under the Communications Decency Act—which grant social platforms immunity for content published by third parties—would not apply in the case, but ultimately, the judge found that TikTok was immune.

    TikTok’s “algorithm was a way to bring the Challenge to the attention of those likely to be most interested in it,” Judge Paul Diamond wrote in a memorandum before issuing his order. “In thus promoting the work of others, Defendants published that work—exactly the activity Section 230 shields from liability. The wisdom of conferring such immunity is something properly taken up with Congress, not the courts.”



      Experts debate the ethics of LinkedIn’s algorithm experiments on 20M users

      news.movim.eu / ArsTechnica · Monday, 26 September, 2022 - 22:06 · 1 minute

    [Image credit: Bloomberg / Contributor | Bloomberg]

    This month, LinkedIn researchers revealed in Science that the company spent five years quietly researching more than 20 million users. By tweaking the professional networking platform's algorithm, researchers were trying to determine through A/B testing whether users end up with more job opportunities when they connect with known acquaintances or complete strangers.

    To weigh the strength of connections between users as weak or strong, acquaintance or stranger, the researchers analyzed factors like the number of messages exchanged and the number of mutual friends shared, gauging how those factors changed over time after users connected on the social media platform. The researchers' discovery confirmed what they describe in the study as "one of the most influential social theories of the past century" about job mobility: the weaker users' ties, the better their job mobility. While LinkedIn says these results will lead to algorithm changes that recommend more relevant connections to job seekers through "People You May Know" (PYMK), The New York Times reported that ethics experts said the study "raised questions about industry transparency and research oversight."
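
    Here is a minimal sketch of the two mechanics the study turns on: deterministic A/B bucketing of users, and a tie-strength heuristic built from signals like message counts and mutual connections. The signal names, weights, and cutoff are assumptions for illustration, not LinkedIn's actual implementation.

        import hashlib

        def ab_bucket(user_id: str, experiment: str = "pymk_weak_ties") -> str:
            # Deterministic assignment: the same user always lands in the same arm.
            digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
            return "more_weak_ties" if int(digest, 16) % 2 == 0 else "more_strong_ties"

        def tie_strength(messages_exchanged: int, mutual_connections: int) -> float:
            # Toy score: more back-and-forth and more shared contacts = stronger tie.
            return 1.0 * messages_exchanged + 0.5 * mutual_connections

        def classify_tie(messages_exchanged: int, mutual_connections: int,
                         weak_cutoff: float = 5.0) -> str:
            score = tie_strength(messages_exchanged, mutual_connections)
            return "weak" if score < weak_cutoff else "strong"

        if __name__ == "__main__":
            print(ab_bucket("user-123"))                                     # stable arm
            print(classify_tie(messages_exchanged=1, mutual_connections=2))  # "weak"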

    Among experts' biggest concerns was that none of those millions of users LinkedIn analyzed were directly informed they were participating in the study—which "could have affected some people's livelihoods," NYT's report suggested.
