
      OpenAI accuses NYT of hacking ChatGPT to set up copyright suit

      news.movim.eu / ArsTechnica · Yesterday - 21:58

    (credit: Busà Photography | Moment Unreleased)

    OpenAI is now boldly claiming that The New York Times "paid someone to hack OpenAI’s products" like ChatGPT to "set up" a lawsuit against the leading AI maker.

    In a court filing Monday, OpenAI alleged that "100 examples in which some version of OpenAI’s GPT-4 model supposedly generated several paragraphs of Times content as outputs in response to user prompts" do not reflect how normal people use ChatGPT.

    Instead, it allegedly took The Times "tens of thousands of attempts to generate" these supposedly "highly anomalous results" by "targeting and exploiting a bug" that OpenAI claims it is now "committed to addressing."



      OpenAI claims New York Times ‘hacked’ ChatGPT to build copyright lawsuit

      news.movim.eu / TheGuardian · Yesterday - 19:30

    In a filing Monday, OpenAI claims a ‘hired gun’ took ‘tens of thousands of attempts to generate the highly anomalous results’

    OpenAI has asked a federal judge to dismiss parts of the New York Times’ copyright lawsuit against it, arguing that the newspaper “hacked” its chatbot ChatGPT and other artificial intelligence systems to generate misleading evidence for the case.

    OpenAI said in a filing in Manhattan federal court on Monday that the Times caused the technology to reproduce its material through “deceptive prompts that blatantly violate OpenAI’s terms of use”.


      Cops called after parents get tricked by AI-generated images of Wonka-like event

      news.movim.eu / ArsTechnica · Yesterday - 18:02 · 1 minute

    A photo of "Willy's Chocolate Experience" (inset), which did not match the AI-generated promises shown in the background. (credit: Stuart Sinclair)

    On Saturday, event organizers shut down a Glasgow-based "Willy's Chocolate Experience" after customers complained that the unofficial Wonka-inspired event, which took place in a sparsely decorated venue, did not match the lush AI-generated images listed on its official website (archive here ). According to Sky News, police were called to the event, and "advice was given."

    "What an absolute shambles of an event," wrote Stuart Sinclair on Facebook after paying 35 pounds per ticket for himself and his kids. "Took 2 minutes to get through to then see a queue of people surrounding the guy running it complaining ... The kids received 2 jelly babies and a quarter of a can of Barrs limeade."

    The Willy's Chocolate Experience website, which promises "a journey filled with wondrous creations and enchanting surprises at every turn," features five AI-generated images (likely created with OpenAI's DALL-E 3 ) that evoke a candy-filled fantasy wonderland inspired by the Willy Wonka universe and the recent Wonka film. But in reality, Sinclair was met with a nearly empty location with a few underwhelming decorations and a tiny bouncy castle. In one photo shared by Sinclair, a rainbow arch leads to a single yellow gummy bear and gum drop sitting on a bare concrete floor.



      OpenAI: ‘The New York Times Paid Someone to Hack Us’

      news.movim.eu / TorrentFreak · Yesterday - 15:15 · 4 minutes

    In recent months, rightsholders of all kinds have filed lawsuits against companies that develop AI models.

    The list includes record labels, individual authors, visual artists, and more recently the New York Times . These rightsholders all object to the presumed use of their work without proper compensation.

    A few hours ago, OpenAI and Microsoft responded to the New York Times complaint, asking the federal court to dismiss several key claims. The defendants also fire back with some rather damning allegations of their own.

    OpenAI’s motion strikes directly at the heart of the Times’ case, putting the newspaper’s truthfulness in doubt. The notion that ChatGPT can be used as a substitute for a newspaper subscription is overblown, the defendants counter.

    “In the real world, people do not use ChatGPT or any other OpenAI product for that purpose. Nor could they. In the ordinary course, one cannot use ChatGPT to serve up Times articles at will,” the motion to dismiss reads.

    ‘NYT Paid Someone to Hack OpenAI’?

    In its complaint, the Times presented evidence that OpenAI’s GPT-4 model supposedly generated several paragraphs that matched content from its articles. That is not the full truth, OpenAI counters, suggesting that the newspaper crossed a line by hacking OpenAI products.

    “The allegations in the Times’s complaint do not meet its famously rigorous journalistic standards. The truth, which will come out in the course of this case, is that the Times paid someone to hack OpenAI’s products,” the motion to dismiss explains.


    OpenAI believes that it took tens of thousands of attempts to get ChatGPT to produce the controversial output that’s the basis of this lawsuit. This is not how normal people interact with its service, it notes.

    It also shared some additional details on how this alleged ‘hack’ was carried out by the third party.

    “They were able to do so only by targeting and exploiting a bug […] by using deceptive prompts that blatantly violate OpenAI’s terms of use. And even then, they had to feed the tool portions of the very articles they sought to elicit verbatim passages of, virtually all of which already appear on multiple public websites.”

    ‘Hired Guns Don’t Stop Evolving Technology’

    The OpenAI defendants continue their motion to dismiss by noting that AI is yet another technological evolution that will change the world, including journalism. They point out that several publishers openly support this progress.

    For example, OpenAI has signed partnerships with prominent news industry outlets, including the Associated Press and Axel Springer. Smaller journalistic outlets are on board as well, and some plan to use AI innovations to their benefit.

    The Times has no such agreement and is using this lawsuit to seek proper compensation for the use of its work. OpenAI, however, dismisses the suggestion that its activities threaten journalism as overblown, or even fiction.

    “The Times’s suggestion that the contrived attacks of its hired gun show that the Fourth Estate is somehow imperiled by this technology is pure fiction. So too is its implication that the public en masse might mimic its agent’s aberrant activity,” the defense writes.

    Fair Use

    None of the above addresses the copyright infringement allegations directly. However, OpenAI stresses that its use of third-party texts should fall under fair use. That applies to this case, and also to many other AI-related lawsuits, it argues.

    This fair use defense has yet to be tested in court and will in great part determine the future of OpenAI and other AI technologies going forward.

    To make its point, OpenAI aptly compares its use of third-party works to established practice in the journalistic realm. Newspapers, for example, are allowed to report on stories that were investigated and first reported by other journalists, as the Times regularly does.

    “Established copyright doctrine will dictate that the Times cannot prevent AI models from acquiring knowledge about facts, any more than another news organization can prevent the Times itself from re-reporting stories it had no role in investigating,” OpenAI writes.

    The fair use defense will eventually be argued in detail when the case is heard on its merits. With the current motion to dismiss, OpenAI merely aims to limit the scope of the case.

    Among other things, the defense argues that several of the copyright allegations are time-barred. In addition, the DMCA claim, the misappropriation claim, and the contributory infringement claim either fail or fall short.

    A copy of the motion to dismiss is available here (pdf) . TorrentFreak broke this story, but other journalists are welcome to use it. A link would be much appreciated, of course, but we won’t sue anyone over it.

    TorrentFreak asked the Times for a response to the ‘hack’ allegations but the company didn’t immediately respond.

    From: TF , for the latest news on copyright battles, piracy and more.


      These robots work entirely on their own, without human help

      news.movim.eu / JournalDuGeek · 2 days ago - 08:00


    1X Technologies, a Norwegian company backed by OpenAI, has posted a video of its Eve robots at work in a warehouse. The integration of humanoid robots into industrial and domestic settings is becoming an increasingly concrete prospect!

      OpenAI’s new video generation tool could learn a lot from babies | John Naughton

      news.movim.eu / TheGuardian · 4 days ago - 16:00

    The footage put together by Sora looks swish, but closer examination reveals it doesn’t understand physical reality

    “First text, then images, now OpenAI has a model for generating videos ,” screamed Mashable the other day. The makers of ChatGPT and Dall-E had just announced Sora , a text-to-video diffusion model. Cue excited commentary all over the web about what will doubtless become known as T2V, covering the usual spectrum – from “Does this mark the end of [insert threatened activity here]?” to “meh” and everything in between.

    Sora (the name is Japanese for “sky”) is not the first T2V tool, but it looks more sophisticated than earlier efforts like Meta’s Make-a-Video AI . It can turn a brief text description into a detailed, high-definition film clip up to a minute long. For example, the prompt “A cat waking up its sleeping owner, demanding breakfast. The owner tries to ignore the cat, but the cat tries new tactics, and finally, the owner pulls out his secret stash of treats from underneath the pillow to hold off the cat a little longer,” produces a slick video clip that would go viral on any social network.


      Tyler Perry puts $800 million studio expansion on hold because of OpenAI’s Sora

      news.movim.eu / ArsTechnica · 5 days ago - 16:42 · 1 minute

    Tyler Perry in 2022.

    Enlarge / Tyler Perry in 2022. (credit: Getty Images )

    In an interview with The Hollywood Reporter published Thursday, filmmaker Tyler Perry spoke about his concerns related to the impact of AI video synthesis on entertainment industry jobs. In particular, he revealed that he has suspended a planned $800 million expansion of his production studio after seeing what OpenAI's recently announced AI video generator Sora can do.

    "I have been watching AI very closely," Perry said in the interview. "I was in the middle of, and have been planning for the last four years... an $800 million expansion at the studio, which would’ve increased the backlot a tremendous size—we were adding 12 more soundstages. All of that is currently and indefinitely on hold because of Sora and what I’m seeing. I had gotten word over the last year or so that this was coming, but I had no idea until I saw recently the demonstrations of what it’s able to do. It’s shocking to me."

    OpenAI, the company behind ChatGPT, revealed a preview of Sora's capabilities last week. Sora is a text-to-video synthesis model, and it uses a neural network—previously trained on video examples—that can take written descriptions of a scene and turn them into high-definition video clips up to 60 seconds long. Sora caused shock in the tech world because it appeared to dramatically surpass other AI video generators in capability. It seems that similar shock also rippled into adjacent professional fields. "Being told that it can do all of these things is one thing, but actually seeing the capabilities, it was mind-blowing," Perry said in the interview.



      Stability announces Stable Diffusion 3, a next-gen AI image generator

      news.movim.eu / ArsTechnica · 6 days ago - 21:28 · 1 minute

    Stable Diffusion 3 generation with the prompt: studio photograph closeup of a chameleon over a black background. (credit: Stability AI)

    On Thursday, Stability AI announced Stable Diffusion 3, an open-weights next-generation image-synthesis model. Like its predecessors, it reportedly generates detailed, multi-subject images, with improved quality and more accurate text rendering. The brief announcement was not accompanied by a public demo, but Stability is opening up a waitlist today for those who would like to try it.

    Stability says that its Stable Diffusion 3 family of models (which take text descriptions called "prompts" and turn them into matching images) ranges in size from 800 million to 8 billion parameters. The size range allows different versions of the model to run locally on a variety of devices, from smartphones to servers. Parameter count roughly corresponds to model capability in terms of how much detail it can generate, but larger models also require more VRAM on GPU accelerators to run.
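    As a rough illustration of the parameter-count-to-VRAM relationship described above (a back-of-the-envelope sketch, not anything from Stability's announcement; the function name and numbers-per-byte assumptions are ours), the memory needed just to store a model's weights scales linearly with parameter count and with the precision the weights are stored in:

    ```python
    def weight_memory_gb(num_params: int, bytes_per_param: int = 2) -> float:
        """Estimate GPU memory needed just to hold a model's weights.

        bytes_per_param: 2 for fp16/bf16, 4 for fp32, 1 for 8-bit quantized.
        This ignores activations, intermediate buffers, and framework
        overhead, which add significantly on top of the weights themselves.
        """
        return num_params * bytes_per_param / 1024**3


    # The Stable Diffusion 3 family reportedly spans 0.8B to 8B parameters.
    for params in (800_000_000, 8_000_000_000):
        gb = weight_memory_gb(params)  # fp16 weights
        print(f"{params / 1e9:.1f}B params -> ~{gb:.1f} GB of VRAM for weights")
    ```

    By this estimate, the smallest model's fp16 weights fit in roughly 1.5 GB (plausible for a phone or low-end GPU), while the largest needs around 15 GB for weights alone, which is why the larger variants are aimed at server-class accelerators.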

    Since 2022, we've seen Stability launch a progression of AI image-generation models: Stable Diffusion 1.4, 1.5, 2.0, 2.1, XL, XL Turbo, and now 3. Stability has made a name for itself by providing a more open alternative to proprietary image-synthesis models like OpenAI's DALL-E 3, though not without controversy over the use of copyrighted training data, bias, and the potential for abuse. (This has led to lawsuits that remain unresolved.) Stable Diffusion models have been open-weights and source-available, which means the models can be run locally and fine-tuned to change their outputs.



      Google’s hidden AI diversity prompts lead to outcry over historically inaccurate images

      news.movim.eu / ArsTechnica · 6 days ago - 16:43

    Generations from Gemini AI from the prompt, "Paint me a historically accurate depiction of a medieval British king." (credit: @stratejake / X)

    On Thursday morning, Google announced it was pausing its Gemini AI image-synthesis feature in response to criticism that the tool was inserting diversity into its images in a historically inaccurate way, such as depicting multi-racial Nazis and medieval British kings with unlikely nationalities.

    "We're already working to address recent issues with Gemini's image generation feature. While we do this, we're going to pause the image generation of people and will re-release an improved version soon," wrote Google in a statement Thursday morning.

    As more people on X began to pile on Google for being " woke ," the Gemini generations inspired conspiracy theories that Google was purposely discriminating against white people and offering revisionist history to serve political goals. Beyond that angle, as The Verge points out , some of these inaccurate depictions "were essentially erasing the history of race and gender discrimination."
