      Nvidia imagines upending 3D modeling

      news.movim.eu / Numerama · 6 days ago - 11:23

    Will we all be 3D modelers tomorrow? Nvidia has unveiled a generative AI called LATTE3D that turns text into 3D representations in an instant. The demonstration focused on objects and animals, but the tool could generate just about anything in 3D.

      AMD promises big upscaling improvements and a future-proof API in FSR 3.1

      news.movim.eu / ArsTechnica · 7 days ago - 17:20

    Last summer, AMD debuted the latest version of its FidelityFX Super Resolution (FSR) upscaling technology. While version 2.x focused mostly on making lower-resolution images look better at higher resolutions, version 3.0 focused on AMD's "Fluid Motion Frames," which attempt to boost FPS by generating interpolated frames to insert between the ones that your GPU is actually rendering.
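
    FSR's actual upscaler is a proprietary temporal algorithm that combines motion vectors and history from previous frames. As a purely illustrative sketch of the underlying idea (render at a lower internal resolution, then resample the result up to the display resolution), the snippet below performs a plain bilinear resize in NumPy; the resolutions and function name are invented for illustration and do not reflect AMD's API or method.

```python
# Illustrative only: FSR itself is a temporal upscaler with motion vectors,
# sharpening, and history rejection. This shows the bare idea of rendering at
# a lower resolution and resampling to the display resolution (plain bilinear).
import numpy as np

def bilinear_upscale(frame: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Resample an (H, W, 3) image to (out_h, out_w, 3) with bilinear filtering."""
    in_h, in_w, _ = frame.shape
    # Map each output pixel center back to source coordinates.
    ys = (np.arange(out_h) + 0.5) * in_h / out_h - 0.5
    xs = (np.arange(out_w) + 0.5) * in_w / out_w - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, in_h - 1)
    x0 = np.clip(np.floor(xs).astype(int), 0, in_w - 1)
    y1 = np.clip(y0 + 1, 0, in_h - 1)
    x1 = np.clip(x0 + 1, 0, in_w - 1)
    wy = np.clip(ys - y0, 0.0, 1.0)[:, None, None]   # vertical blend weights
    wx = np.clip(xs - x0, 0.0, 1.0)[None, :, None]   # horizontal blend weights
    top = frame[y0][:, x0] * (1 - wx) + frame[y0][:, x1] * wx
    bot = frame[y1][:, x0] * (1 - wx) + frame[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

# e.g. render internally at 1280x720, present at 2560x1440
low_res = np.random.rand(720, 1280, 3).astype(np.float32)
upscaled = bilinear_upscale(low_res, 1440, 2560)
```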

    Today, the company is announcing FSR 3.1, which, among other changes, decouples the upscaling improvements in FSR 3.x from the Fluid Motion Frames feature. FSR 3.1 will be available "later this year" in games whose developers choose to implement it.

    Fluid Motion Frames and Nvidia's equivalent DLSS Frame Generation usually work best when a game is already running at a high frame rate, and even then can be more prone to mistakes and odd visual artifacts than regular FSR or DLSS upscaling. FSR 3.0 was an all-or-nothing proposition, but version 3.1 should let you pick and choose what features you want to enable.
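
    The actual frame-generation pipelines (AMD's Fluid Motion Frames, Nvidia's DLSS Frame Generation) synthesize in-between frames from motion vectors and optical flow. As a rough illustration of why the technique is more fragile than upscaling, the sketch below fabricates an intermediate frame by simply blending two rendered frames; this naive blend is precisely the kind of shortcut that produces ghosting on fast motion, and it is not how either vendor's implementation works.

```python
# Naive illustration only: real frame generation interpolates along motion
# vectors / optical flow rather than blending pixels in place. A plain blend
# still conveys the idea of inserting a synthesized frame between two rendered
# ones, and why artifacts appear wherever objects move between those frames.
import numpy as np

def fake_intermediate_frame(prev_frame: np.ndarray, next_frame: np.ndarray,
                            t: float = 0.5) -> np.ndarray:
    """Fabricate a frame to display between two rendered frames (linear blend)."""
    return (1.0 - t) * prev_frame + t * next_frame

def interleave(rendered_frames):
    """Yield rendered frames with one generated frame inserted between each pair."""
    for prev_frame, next_frame in zip(rendered_frames, rendered_frames[1:]):
        yield prev_frame                                        # real frame
        yield fake_intermediate_frame(prev_frame, next_frame)   # generated frame
    yield rendered_frames[-1]

# For N rendered frames this finite example displays 2N - 1 frames; in steady
# state the displayed frame rate roughly doubles.
rendered = [np.random.rand(1440, 2560, 3).astype(np.float32) for _ in range(4)]
displayed = list(interleave(rendered))
assert len(displayed) == 2 * len(rendered) - 1
```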

      Nvidia announces “moonshot” to create embodied human-level AI in robot form

      news.movim.eu / ArsTechnica · Wednesday, 20 March - 20:21 · 1 minute

    An illustration of a humanoid robot created by Nvidia. (credit: Nvidia)

    In sci-fi films, the rise of humanlike artificial intelligence often comes hand in hand with a physical platform, such as an android or robot. While the most advanced AI language models so far seem mostly like disembodied voices echoing from an anonymous data center, they might not remain that way for long. Companies like Google, Figure, Microsoft, Tesla, and Boston Dynamics, among others, are working toward giving AI models a body. This is called "embodiment," and AI chipmaker Nvidia wants to accelerate the process.

    "Building foundation models for general humanoid robots is one of the most exciting problems to solve in AI today," said Nvidia CEO Jensen Huang in a statement. Huang spent a portion of Nvidia's annual GTC conference keynote on Monday going over Nvidia's robotics efforts. "The next generation of robotics will likely be humanoid robotics," Huang said. "We now have the necessary technology to imagine generalized human robotics."

    To that end, Nvidia announced Project GR00T, a general-purpose foundation model for humanoid robots. Nvidia hopes that GR00T, itself a type of AI model (the name stands for "Generalist Robot 00 Technology" but sounds a lot like a famous Marvel character), will serve as an AI mind for robots, enabling them to learn skills and solve various tasks on the fly. In a tweet, Nvidia researcher Linxi "Jim" Fan called the project "our moonshot to solve embodied AGI in the physical world."

      Nvidia unveils Blackwell B200, the “world’s most powerful chip” designed for AI

      news.movim.eu / ArsTechnica · Tuesday, 19 March - 15:27 · 1 minute

    The GB200 "superchip" covered with a fanciful blue explosion that suggests computational power bursting forth from within; the chip does not actually glow blue in reality. (credit: Nvidia / Benj Edwards)

    On Monday, Nvidia unveiled the Blackwell B200 tensor core chip—the company's most powerful single-chip GPU, with 208 billion transistors—which Nvidia claims can reduce AI inference operating costs (such as running ChatGPT) and energy consumption by up to 25 times compared to the H100. The company also unveiled the GB200, a "superchip" that combines two B200 chips and a Grace CPU for even more performance.

    The news came as part of Nvidia's annual GTC conference, which is taking place this week at the San Jose Convention Center. Nvidia CEO Jensen Huang delivered the keynote Monday afternoon. "We need bigger GPUs," Huang said during his keynote. The Blackwell platform will allow the training of trillion-parameter AI models that will make today's generative AI models look rudimentary in comparison, he said. For reference, OpenAI's GPT-3, launched in 2020, included 175 billion parameters. Parameter count is a rough indicator of AI model complexity.
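
    Parameter count also translates roughly into memory footprint, which is part of why "bigger GPUs" matter. The back-of-envelope sketch below uses assumed bytes-per-parameter figures and counts weights only (no activations, KV caches, or optimizer state); the numbers are not Nvidia's.

```python
# Back-of-envelope only: assumed precisions, weight storage alone.
def weight_memory_gb(num_params: float, bytes_per_param: float) -> float:
    """Approximate memory needed just to hold the model weights, in gigabytes."""
    return num_params * bytes_per_param / 1e9

for name, params in [("GPT-3 (175B params)", 175e9), ("1T-param model", 1e12)]:
    for precision, nbytes in [("FP16", 2), ("FP8", 1)]:
        print(f"{name:>20} @ {precision}: ~{weight_memory_gb(params, nbytes):,.0f} GB")

# GPT-3's weights at FP16 come to roughly 350 GB; a trillion-parameter model is
# roughly 2,000 GB, far beyond any single GPU, hence sharding across many chips.
```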

    Nvidia named the Blackwell architecture after David Harold Blackwell, a mathematician who specialized in game theory and statistics and was the first Black scholar inducted into the National Academy of Sciences. The platform introduces six technologies for accelerated computing, including a second-generation Transformer Engine, fifth-generation NVLink, RAS Engine, secure AI capabilities, and a decompression engine for accelerated database queries.

      With Blackwell, Nvidia improves a factor critical to the future of AI

      news.movim.eu / Numerama · Tuesday, 19 March - 14:13

    At its GTC conference, Nvidia lifted the veil on the Blackwell B200, a new GPU it presents as a "superchip." With 208 billion transistors and lower energy consumption, the Blackwell chip is the new weapon of choice for players in generative artificial intelligence.

      AI: Nvidia strikes a massive blow with its new Blackwell GPUs

      news.movim.eu / JournalDuGeek · Tuesday, 19 March - 12:26

    The green giant already held undisputed dominance over AI-focused hardware; it is now poised to crush the competition with a new generation of GPUs boasting staggering performance.

      NVIDIA hit with a lawsuit over its generative AI

      news.movim.eu / JournalDuGeek · Tuesday, 12 March - 10:44

    The lawsuits keep piling up and look much alike, especially where generative AI is concerned.

      Nvidia sued over AI training data as copyright clashes continue

      news.movim.eu / ArsTechnica · Monday, 11 March - 16:35

    Book authors are suing Nvidia, alleging that the chipmaker's AI platform NeMo—used to power customized chatbots—was trained on a controversial dataset that illegally copied and distributed their books without their consent.

    In a proposed class action, novelists Abdi Nazemian (Like a Love Story), Brian Keene (Ghost Walk), and Stewart O’Nan (Last Night at the Lobster) argued that Nvidia should pay damages and destroy all copies of the Books3 dataset used to power NeMo large language models (LLMs).

    The Books3 dataset, novelists argued, copied "all of Bibliotek," a shadow library of approximately 196,640 pirated books. Initially shared through the AI community Hugging Face, the Books3 dataset today "is defunct and no longer accessible due to reported copyright infringement," the Hugging Face website says.

      Authors Sue NVIDIA for Training AI on Pirated Books

      news.movim.eu / TorrentFreak · Monday, 11 March - 13:17 · 2 minutes

    Starting last year, various rightsholders have filed lawsuits against companies that develop AI models.

    The list of complainants includes record labels, book authors, visual artists, even the New York Times. These rightsholders all object to the presumed use of their work without proper compensation.

    “Books3”

    Many of the lawsuits filed by book authors come with a clear piracy angle. The cases allege that tech companies, including Meta, Microsoft, and OpenAI, used the controversial ‘Books3’ dataset to train their models.

    Books3 was created in 2020 by AI researcher Shawn Presser, who scraped the library of ‘pirate’ site Bibliotik. The dataset was broadly shared online and added to other databases, including ‘The Pile’, an AI training dataset compiled by EleutherAI.

    After pushback from rightsholders and anti-piracy outfits, Books3 was taken offline over copyright concerns. However, for many of the companies that allegedly trained their AI models on it, there are still some legal repercussions to sort out.

    Authors Sue NVIDIA for Copyright Infringement

    On Friday, American authors Abdi Nazemian, Brian Keene, and Stewart O’Nan joined the barrage of legal action with a copyright infringement lawsuit against NVIDIA. The company, whose market cap exceeds $2 trillion, is mostly known for its GPUs and related software and services, but also has its own AI models.

    In a concise class action complaint, filed at a California federal court, the authors allege that NVIDIA used the Books3 dataset to train its NeMo Megatron language models. The models are hosted on Hugging Face, where the documentation states that they were trained on EleutherAI’s ‘The Pile’ dataset, which includes the pirated books.

    Putting two and two together, the plaintiffs conclude that NVIDIA’s models were trained on pirated books, including theirs, without their permission.

    “NVIDIA has admitted training its NeMo Megatron models on a copy of The Pile dataset. Therefore, NVIDIA necessarily also trained its NeMo Megatron models on a copy of Books3, because Books3 is part of The Pile,” the complaint reads.

    “Certain books written by Plaintiffs are part of Books3 — including the Infringed Works — and thus NVIDIA necessarily trained its NeMo Megatron models on one or more copies of the Infringed Works, thereby directly infringing the copyrights of the Plaintiffs.”

    Direct Infringement Damages

    Relying on the same logic, the authors accuse the company of direct copyright infringement, noting that NVIDIA copied their books to use them for AI training purposes. Through the lawsuit, the rightsholders demand compensation in the form of actual or statutory damages.

    The class action lawsuit includes three authors thus far, but more may be added to the case as it progresses. NVIDIA has yet to respond to the allegations, but in light of similar cases it will likely oppose the claims and/or argue a fair-use defense.

    Last month, OpenAI managed to ‘defeat’ several copyright infringement claims from book authors in a somewhat related “Books3” lawsuit. However, the California federal court didn't review the direct copyright infringement claims in that case, which will be argued in detail at a later stage.

    A copy of the class action complaint against NVIDIA, filed by the authors in a California federal court, is available here (pdf)
