• chevron_right

      Newspapers Sue OpenAI for Copyright Infringement and ‘Fake News’ Hallucinations

      news.movim.eu / TorrentFreak · Wednesday, 1 May - 12:10 · 3 minutes

    Starting last year, various rightsholders have filed lawsuits against companies that develop AI models.

    The list of complainants includes record labels, book authors, visual artists, a chip maker, and news publications. These rightsholders all object to the presumed use of their work without proper compensation.

    Keeping pace with the constant stream of legal paperwork is a challenge, but a complaint filed at a New York federal court yesterday deserves to be highlighted. In this case, eight major news publications are suing OpenAI and Microsoft for copyright infringement.

    U.S. Newspapers Sue OpenAI and Microsoft

    The New York Daily News, Chicago Tribune, Orlando Sentinel, Sun-Sentinel, Mercury News, Denver Post, Pioneer Press, and Orange County Register claim that the AI companies used their publications to train and develop ChatGPT models without obtaining permission.

    In addition, ChatGPT can recall large parts of their copyright-protected articles, which effectively bypasses their paywalls. This has a direct effect on the newspapers’ revenues, they argue.

    “Defendants are taking the Publishers’ work with impunity and are using the Publishers’ journalism to create GenAI products that undermine the Publishers’ core businesses by retransmitting ‘their content’—in some cases verbatim from the Publishers’ paywalled websites—to their readers.”

    Training On and Reproducing Copyrighted Articles

    The complaint alleges that the newspapers’ articles are prominent parts of the training material for OpenAI’s models. GPT-3, for example, has 175 billion parameters and was trained on the ‘WebText2’ and ‘Common Crawl’ datasets, both of which contain material owned by the plaintiffs.

    This alleged unauthorized use remains ongoing, the newspapers claim, and it will likely continue in the future.

    “On information and belief, Microsoft and OpenAI are currently or will imminently commence making additional copies of the Publishers’ Works to train and/or fine-tune the next generation GPT-5 LLM,” the complaint adds.

    The plaintiffs show that ChatGPT can reproduce content from copyrighted news articles when prompted. In addition, third-party services in the OpenAI store are specifically marketed to bypass their paywalls, they say.

    These tools include a custom GPT called “Remove Paywall” and another called “News Summarizer”, which promises to “save on subscription costs” and “skip paywalls just using the link text or URL.”


    OpenAI and Microsoft have previously argued that the use of copyrighted works to train their models falls under fair use. They have also pointed to the lack of specific copyright infringements by third parties.

    This lawsuit is likely to trigger similar defenses, but copyright infringement allegations are just part of the newspapers’ complaint.

    ‘Fake News Hallucinations’

    The newspapers are not only concerned by the unauthorized use of their works; they also allege that the AI tools cause commercial and competitive injury by spreading false claims.

    The plaintiffs cite various examples where ChatGPT allegedly links dubious news reporting to their newspapers.

    “As if plagiarizing the Publishers’ work were not enough, Defendants’ products are often subject to ‘hallucinations’ where those products malign the Publishers’ credibility by falsely attributing inaccurate reporting to the Publishers’ newspapers.

    “Beyond just profiting from the theft of the Publishers’ content, Defendants are actively tarnishing the newspapers’ reputations and spreading dangerous disinformation.”

    One example is the spurious claim that disinfectants can cure Covid. While many newspapers reported on these claims, they didn’t endorse them.


    These hallucinations dilute and injure the reputation of the newspapers, the complaint alleges. This claim comes on top of the various copyright infringement accusations for which they request compensation.

    Ultimately, the newspapers are not against Artificial Intelligence, but they do want OpenAI and Microsoft to pay for the content they use and, ideally, ensure that their reputations are not harmed in the process.

    “This lawsuit is about how Microsoft and OpenAI are not entitled to use copyrighted newspaper content to build their new trillion-dollar enterprises, without paying for that content.

    “As this lawsuit will demonstrate, Defendants must both obtain the Publishers’ consent to use their content and pay fair value for such use,” the newspapers conclude.

    A copy of the complaint, filed by the newspapers at the U.S. District Court for the Southern District of New York, is available here (pdf)

    From: TF, for the latest news on copyright battles, piracy and more.

    • chevron_right

      Here’s your chance to own a decommissioned US government supercomputer

      news.movim.eu / ArsTechnica · Tuesday, 30 April - 21:52

    A photo of the Cheyenne supercomputer, which is now up for auction. (credit: US General Services Administration)

    On Tuesday, the US General Services Administration began an auction for the decommissioned Cheyenne supercomputer, located in Cheyenne, Wyoming. The 5.34-petaflop supercomputer ranked as the 20th most powerful in the world at the time of its installation in 2016. Bidding started at $2,500, but its price currently stands at $27,643, with the reserve not yet met.

    The supercomputer, which officially operated between January 12, 2017, and December 31, 2023, at the NCAR-Wyoming Supercomputing Center, was a powerful and energy-efficient system that significantly advanced atmospheric and Earth system sciences research.

    "In its lifetime, Cheyenne delivered over 7 billion core-hours, served over 4,400 users, and supported nearly 1,300 NSF awards," writes the University Corporation for Atmospheric Research (UCAR) on its official Cheyenne information page. "It played a key role in education, supporting more than 80 university courses and training events. Nearly 1,000 projects were awarded for early-career graduate students and postdocs. Perhaps most tellingly, Cheyenne-powered research generated over 4,500 peer-review publications, dissertations and theses, and other works."

    Read 5 remaining paragraphs | Comments

    • chevron_right

      Mysterious “gpt2-chatbot” AI model appears suddenly, confuses experts

      news.movim.eu / ArsTechnica · Tuesday, 30 April - 19:31

    Robot fortune teller hand and crystal ball (credit: Getty Images)

    On Sunday, word began to spread on social media about a new mystery chatbot named "gpt2-chatbot" that appeared in the LMSYS Chatbot Arena. Some people speculate that it may be a secret test version of OpenAI's upcoming GPT-4.5 or GPT-5 large language model (LLM). The paid version of ChatGPT is currently powered by GPT-4 Turbo.

    Currently, the new model is only available through the Chatbot Arena website, and in a limited way. In the site's "side-by-side" arena mode, where users can purposely select the model, gpt2-chatbot has a rate limit of eight queries per day, dramatically limiting people's ability to test it in detail.

    So far, gpt2-chatbot has inspired plenty of rumors online, including that it could be the stealth launch of a test version of GPT-4.5 or even GPT-5, or perhaps a new version of 2019's GPT-2 that has been trained using new techniques. We reached out to OpenAI for comment but did not receive a response by press time. On Monday evening, OpenAI CEO Sam Altman seemingly dropped a hint by tweeting, "i do have a soft spot for gpt2."

    Read 14 remaining paragraphs | Comments

    • chevron_right

      Apple poaches AI experts from Google, creates secretive European AI lab

      news.movim.eu / ArsTechnica · Tuesday, 30 April - 14:16

    Apple has been tight-lipped about its AI plans, but industry insiders suggest the company is focused on deploying generative AI on its mobile devices. (credit: FT montage/Getty Images)

    Apple has poached dozens of artificial intelligence experts from Google and has created a secretive European laboratory in Zurich, as the tech giant builds a team to battle rivals in developing new AI models and products.

    According to a Financial Times analysis of hundreds of LinkedIn profiles as well as public job postings and research papers, the $2.7 trillion company has undertaken a hiring spree over recent years to expand its global AI and machine learning team.

    The iPhone maker has particularly targeted workers from Google, attracting at least 36 specialists from its rival since it poached John Giannandrea to be its top AI executive in 2018.

    Read 28 remaining paragraphs | Comments

    • chevron_right

      Critics question tech-heavy lineup of new Homeland Security AI safety board

      news.movim.eu / ArsTechnica · Monday, 29 April - 20:15 · 1 minute

    A modified photo of a 1956 scientist carefully bottling (credit: Benj Edwards | Getty Images)

    On Friday, the US Department of Homeland Security announced the formation of an Artificial Intelligence Safety and Security Board that consists of 22 members pulled from the tech industry, government, academia, and civil rights organizations. But given the nebulous nature of the term "AI," which can apply to a broad spectrum of computer technology, it's unclear if this group will even be able to agree on what exactly they are safeguarding us from.

    President Biden directed DHS Secretary Alejandro Mayorkas to establish the board, which will meet for the first time in early May and subsequently on a quarterly basis.

    The fundamental assumption posed by the board's existence, and reflected in Biden's AI executive order from October, is that AI is an inherently risky technology and that American citizens and businesses need to be protected from its misuse. Along those lines, the goal of the group is to help guard against foreign adversaries using AI to disrupt US infrastructure; develop recommendations to ensure the safe adoption of AI tech into transportation, energy, and Internet services; foster cross-sector collaboration between government and businesses; and create a forum where AI leaders can share information on AI security risks with the DHS.

    Read 13 remaining paragraphs | Comments

    • chevron_right

      Customers say Meta’s ad-buying AI blows through budgets in a matter of hours

      news.movim.eu / ArsTechnica · Monday, 29 April - 18:23 · 1 minute

    AI is here to terminate your bank account. (credit: Carolco Pictures)

    Give the AI access to your credit card, they said. It'll be fine, they said. Users of Meta's ad platform who followed that advice have been getting burned by an AI-powered ad purchasing system, according to The Verge. The idea was to use a Meta-developed AI to automatically set up ads and spend your ad budget, saving you the hassle of making decisions about your ad campaign. Apparently, the AI funnels money to Meta a little too well: customers say it burns through what should be daily ad budgets in a matter of hours, inflating costs as much as 10-fold.

    The AI-powered software in question is the "Advantage+ Shopping Campaign." The system is supposed to automate a lot of ad setup for you, mixing and matching various creative elements and audience targets. The power of AI-powered advertising (Google has a similar product) is that the ad platform can get instant feedback on its generated ads via click-through rates. You give it a few guard rails, and it can try hundreds or thousands of combinations to find the most clickable ad at a speed and efficiency no human could match. That's the theory, anyway.

    The Verge spoke to "several marketers and businesses" with similar stories of being hit by an AI-powered spending spree once they let Meta's system take over a campaign. The description of one account says the AI "had blown through roughly 75 percent of the daily ad budgets for both clients in under a couple of hours" and that "the ads’ CPMs, or cost per impressions, were roughly 10 times higher than normal." Meanwhile, the revenue earned from those AI-powered ads was "nearly zero." The report says, "Small businesses have seen their ad dollars get wiped out and wasted as a result, and some have said the bouts of overspending are driving them from Meta’s platforms."

    Read 3 remaining paragraphs | Comments

    • chevron_right

      Cisco Joins Microsoft, IBM in Vatican Pledge For Ethical AI Use and Development

      pubsub.blastersklan.com / slashdot · Sunday, 28 April - 02:03

    An anonymous reader shared this report from the Associated Press: Tech giant Cisco Systems on Wednesday joined Microsoft and IBM in signing onto a Vatican-sponsored pledge to ensure artificial intelligence is developed and used ethically and to benefit the common good...

    The pledge outlines key pillars of ethical and responsible use of AI. It emphasizes that AI systems must be designed, used and regulated to serve and protect the dignity of all human beings, without discrimination, and their environments. It highlights principles of transparency, inclusion, responsibility, impartiality and security as necessary to guide all AI developments.

    The document was unveiled and signed at a Vatican conference on Feb. 28, 2020... Pope Francis has called for an international treaty to ensure AI is developed and used ethically, devoting his annual peace message this year to the topic.

    Read more of this story at Slashdot.

      slashdot.org /story/24/04/27/2134208/cisco-joins-microsoft-ibm-in-vatican-pledge-for-ethical-ai-use-and-development

    • chevron_right

      A School Principal Was Framed With an AI-Generated Rant

      pubsub.blastersklan.com / slashdot · Saturday, 27 April - 21:58 · 1 minute

    "A former high school athletic director was arrested Thursday morning," reports CBS News, "after allegedly using artificial intelligence to impersonate the school principal in a recording..."

    One-time Pikesville High School employee Dazhon Darien is facing charges that include theft, stalking, disruption of school operations and retaliation against a witness. Investigators determined he faked principal Eric Eiswert's voice and circulated the audio on social media in January. Darien's nickname, DJ, was among the names mentioned in the audio clips he allegedly faked, according to the Baltimore County State's Attorney's Office.

    Baltimore County detectives say Darien created the recording as retaliation against Eiswert, who had launched an investigation into the potential mishandling of school funds, Baltimore County Police Chief Robert McCullough said on Thursday. Eiswert's voice, which police and AI experts believe was simulated, made disparaging comments toward Black students and the surrounding Jewish community. The audio was widely circulated on social media.

    The article notes that after the faked recording circulated on social media, the principal "was temporarily removed from the school, and waves of hate-filled messages circulated on social media, while the school received numerous phone calls." The suspect had actually used the school's network multiple times to perform online searches for OpenAI tools, "which police linked to paid OpenAI accounts."

    Read more of this story at Slashdot.

      yro.slashdot.org /story/24/04/27/1831204/a-school-principal-was-framed-with-an-ai-generated-rant

    • chevron_right

      OpenAI's Sam Altman and Other Tech Leaders To Serve on AI Safety Board

      pubsub.blastersklan.com / slashdot · Friday, 26 April - 13:18 · 1 minute

    Sam Altman of OpenAI and the chief executives of Nvidia, Microsoft and Alphabet are among technology-industry leaders joining a new federal advisory board focused on the secure use of AI within U.S. critical infrastructure, in the Biden administration's latest effort to fill a regulatory vacuum over the rapidly proliferating technology.

    From a report: The Artificial Intelligence Safety and Security Board is part of a government push to protect the economy, public health and vital industries from being harmed by AI-powered threats, U.S. officials said. Working with the Department of Homeland Security, it will develop recommendations for power-grid operators, transportation-service providers and manufacturing plants, among others, on how to use AI while bulletproofing their systems against potential disruptions that could be caused by advances in the technology.

    In addition to Nvidia's Jensen Huang, Microsoft's Satya Nadella, Alphabet's Sundar Pichai and other leaders in AI and technology, the panel of nearly two dozen consists of academics, civil-rights leaders and top executives at companies that work within a federally recognized critical-infrastructure sector, including Kathy Warden, chief executive of Northrop Grumman, and Delta Air Lines Chief Executive Ed Bastian. Other members are public officials, such as Maryland Gov. Wes Moore and Seattle Mayor Bruce Harrell, both Democrats.

    Read more of this story at Slashdot.

      slashdot.org /story/24/04/26/1249215/openais-sam-altman-and-other-tech-leaders-to-serve-on-ai-safety-board