
      Nvidia CEO calls for “Sovereign AI” as his firm overtakes Amazon in market value

      news.movim.eu / ArsTechnica · Tuesday, 13 February - 16:41

    The Nvidia logo on a blue background with an American flag. (credit: Nvidia / Benj Edwards)

    On Monday, Nvidia CEO Jensen Huang said that every country should control its own AI infrastructure so it can protect its culture, Reuters reports. He called this concept "Sovereign AI," which an Nvidia blog post defined as each country owning "the production of their own intelligence."

    Huang made the announcement in a discussion with the UAE's Minister of AI, Omar Al Olama, during the World Governments Summit in Dubai. "It codifies your culture, your society’s intelligence, your common sense, your history—you own your own data," Huang told Al Olama.

    The World Governments Summit organization defines itself as "a global, neutral, non-profit organization dedicated to shaping the future of governments." Its annual event attracts over 4,000 delegates from 150 countries, according to Nvidia. It's hosted in the United Arab Emirates, a collection of absolute monarchies with no democratically elected institutions.

    Read 5 remaining paragraphs | Comments


      Nvidia RTX 4080 Super review: All you need to know is that it’s cheaper than a 4080

      news.movim.eu / ArsTechnica · Wednesday, 31 January - 14:00 · 1 minute

    Nvidia's new RTX 4080 Super is technically faster than the regular 4080, but, by an order of magnitude, the most interesting thing about it is that, at its launch price of $999, it's $200 cheaper than the original 4080. I am going to write more after this sentence, but that's basically the review. You're welcome to keep reading, and I would appreciate it if you would, but truly there is only one number you need to know, and it is "$200."

    All three of these Super cards—the 4070 Super, the 4070 Ti Super, and now the 4080 Super—are mild correctives for a GPU generation that has been more expensive than its predecessors and also, in relative terms, less of a performance boost. The difference is that where the 4070 Super and 4070 Ti Super try to earn their existing price tags by boosting performance, the 4080 Super focuses on lowering its price to be more in line with where its competition is.

    Yes, it's marginally faster than the original 4080, but its best feature is a price drop from $1,199 to a still high, but more reasonable, $999. What it doesn't do is attempt to close the gap between the 4080 series and the 4090, a card that still significantly outruns any other consumer GPU that AMD or Nvidia offers. But if you have a big budget, want something that's still head-and-shoulders above the entire RTX 30-series, and don't want to deal with the 4090's currently inflated pricing, the 4080 Super is much more appealing than the regular 4080, even if it is basically the same GPU with a new name.

    Read 8 remaining paragraphs | Comments


      Ryzen 8000G review: An integrated GPU that can beat a graphics card, for a price

      news.movim.eu / ArsTechnica · Monday, 29 January - 19:50

    The most interesting thing about AMD's Ryzen 7 8700G CPU is the Radeon 780M GPU that's attached to it. (credit: Andrew Cunningham)

    Put me on the short list of people who can get excited about the humble, much-derided integrated GPU.

    Yes, most of them are afterthoughts, designed for office desktops and laptops that will spend most of their lives rendering 2D images to a single monitor. But when integrated graphics push forward, it can open up possibilities for people who want to play games but can only afford a cheap desktop (or who have to make do with whatever their parents will pay for, which was the big limiter on my PC gaming experience as a kid).

    That, plus an unrelated but accordant interest in building small mini-ITX-based desktops, has kept me interested in AMD’s G-series Ryzen desktop chips (which it sometimes calls “APUs,” to distinguish them from the Ryzen CPUs). And the Ryzen 8000G chips are a big upgrade from the 5000G series that immediately preceded them (this makes sense, because as we all know the number 8 immediately follows the number 5).

    Read 37 remaining paragraphs | Comments


      Review: Nvidia’s RTX 4070 Ti Super is better, but I still don’t know who it’s for

      news.movim.eu / ArsTechnica · Thursday, 25 January - 12:30

    Of all of Nvidia's current-generation GPU launches, there hasn't been one that's been quite as weird as the case of the "GeForce RTX 4080 12GB."

    It was the third and slowest of the graphics cards Nvidia announced at the onset of the RTX 40-series, and at first blush it just sounded like a version of the second-fastest RTX 4080 but with less RAM. But spec sheets and Nvidia's own performance estimates showed that there was a deceptively huge performance gap between the two 4080 cards, enough that calling them both "4080" could have led to confusion and upset among buyers.

    Taking the hint, Nvidia reversed course, "unlaunching" the 4080 12GB because it was "not named right." This decision came late enough in the launch process that a whole bunch of existing packaging had to be trashed and new BIOSes with the new GPU name had to be flashed to the cards before they could be sold.

    Read 15 remaining paragraphs | Comments


      Review: Radeon 7600 XT offers peace of mind via lots of RAM, remains a midrange GPU

      news.movim.eu / ArsTechnica · Wednesday, 24 January - 14:00 · 1 minute

    We don't need a long intro for this one: AMD's new Radeon RX 7600 XT is almost exactly the same as last year's RX 7600, but with a mild bump to the GPU's clock speed and 16GB of memory instead of 8GB. It also costs $329 instead of $269, the current MSRP (and current street price) for the regular RX 7600.

    It's a card with a pretty narrow target audience: people who are worried about buying a GPU with 8GB of memory, but who aren't worried enough about future-proofing or RAM requirements to buy a more powerful GPU. It's priced reasonably well, at least—$60 is a lot to pay for extra memory, but $329 was the MSRP for the Radeon RX 6600 back in 2021. If you want more memory in a current-generation card, you otherwise generally need to jump up into the $450 range (for the 12GB RX 7700 XT or the 16GB RTX 4060 Ti) or beyond.

    |  | RX 7700 XT | RX 7600 | RX 7600 XT | RX 6600 | RX 6600 XT | RX 6650 XT | RX 6750 XT |
    | --- | --- | --- | --- | --- | --- | --- | --- |
    | Compute units (Stream processors) | 54 (3,456) | 32 (2,048) | 32 (2,048) | 28 (1,792) | 32 (2,048) | 32 (2,048) | 40 (2,560) |
    | Boost Clock | 2,544 MHz | 2,600 MHz | 2,760 MHz | 2,490 MHz | 2,589 MHz | 2,635 MHz | 2,600 MHz |
    | Memory Bus Width | 192-bit | 128-bit | 128-bit | 128-bit | 128-bit | 128-bit | 192-bit |
    | Memory Clock | 2,250 MHz | 2,250 MHz | 2,250 MHz | 1,750 MHz | 2,000 MHz | 2,190 MHz | 2,250 MHz |
    | Memory size | 12GB GDDR6 | 8GB GDDR6 | 16GB GDDR6 | 8GB GDDR6 | 8GB GDDR6 | 8GB GDDR6 | 12GB GDDR6 |
    | Total board power (TBP) | 245 W | 165 W | 190 W | 132 W | 160 W | 180 W | 250 W |

    The fact of the matter is that this is the same silicon we've already seen. The clock speed bumps do provide a small across-the-board performance uplift, and the impact of the extra RAM does become apparent in a few of our tests. But the card doesn't fundamentally alter the AMD-vs-Nvidia-vs-Intel dynamic in the $300-ish graphics card market, though it addresses a couple of the regular RX 7600's most glaring weaknesses.

    Read 10 remaining paragraphs | Comments


      They’re not cheap, but Nvidia’s new Super GPUs are a step in the right direction

      news.movim.eu / ArsTechnica · Monday, 8 January - 16:30

    Nvidia's latest GPUs, apparently dropping out of hyperspace. (credit: Nvidia)

    If there’s been one consistent criticism of Nvidia’s RTX 40-series graphics cards, it’s been pricing. All of Nvidia’s product tiers have seen their prices creep up over the last few years, but cards like the 4090 raised prices to new heights, while lower-end models like the 4060 and 4060 Ti kept pricing the same but didn’t improve performance much.

    Today, Nvidia is sprucing up its 4070 and 4080 tiers with a mid-generation “Super” refresh that at least partially addresses some of these pricing problems. Like older Super GPUs, the 4070 Super, 4070 Ti Super, and 4080 Super use the same architecture and support all the same features as their non-Super versions, but with bumped specs and tweaked prices that might make them more appealing to people who skipped the originals.

    The 4070 Super will launch first, on January 17th, for $599. The $799 RTX 4070 Ti Super launches on January 24th, and the $999 4080 Super follows on January 31st.

    Read 6 remaining paragraphs | Comments


      $329 Radeon 7600 XT brings 16GB of memory to AMD’s latest midrange GPU

      news.movim.eu / ArsTechnica · Monday, 8 January - 15:30 · 1 minute

    The new Radeon RX 7600 XT mostly just adds extra memory, though clock speeds and power requirements have also increased somewhat. (credit: AMD)

    Graphics card buyers seem to have a lot of anxiety about buying a GPU with enough memory installed, even in midrange graphics cards that aren't otherwise equipped to play games at super-high resolutions. And while this anxiety tends to be a bit overblown—lots of first- and third-party testing of cards like the GeForce RTX 4060 Ti shows that just a handful of games benefit when all you do is boost GPU memory from 8GB to 16GB—there's still a market for less-expensive GPUs with big pools of memory.

    That's the apparent impetus behind AMD's sole GPU announcement from its slate of CES news today: the $329 Radeon RX 7600 XT, a version of last year's $269 RX 7600 with twice as much memory, slightly higher clock speeds, and higher power use to go with it.

    |  | RX 7700 XT | RX 7600 | RX 7600 XT | RX 6600 | RX 6600 XT | RX 6650 XT | RX 6750 XT |
    | --- | --- | --- | --- | --- | --- | --- | --- |
    | Compute units (Stream processors) | 54 (3,456) | 32 (2,048) | 32 (2,048) | 28 (1,792) | 32 (2,048) | 32 (2,048) | 40 (2,560) |
    | Boost Clock | 2,544 MHz | 2,600 MHz | 2,760 MHz | 2,490 MHz | 2,589 MHz | 2,635 MHz | 2,600 MHz |
    | Memory Bus Width | 192-bit | 128-bit | 128-bit | 128-bit | 128-bit | 128-bit | 192-bit |
    | Memory Clock | 2,250 MHz | 2,250 MHz | 2,250 MHz | 1,750 MHz | 2,000 MHz | 2,190 MHz | 2,250 MHz |
    | Memory size | 12GB GDDR6 | 8GB GDDR6 | 16GB GDDR6 | 8GB GDDR6 | 8GB GDDR6 | 8GB GDDR6 | 12GB GDDR6 |
    | Total board power (TBP) | 245 W | 165 W | 190 W | 132 W | 160 W | 180 W | 250 W |

    The core specifications of the 7600 XT remain the same as the regular 7600: 32 of AMD's compute units (CUs) based on the RDNA3 GPU architecture, and the same memory clock speed attached to the same 128-bit memory bus. But RAM has been boosted from 8GB to 16GB, and the GPU's clock speeds have been boosted a little, ensuring that the card runs games a little faster than the regular 7600, even in games that don't care about the extra memory.
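    Those two unchanged figures, the memory clock and the 128-bit bus, together determine peak memory bandwidth, which is why the 7600 XT's extra capacity doesn't come with extra speed. A rough back-of-the-envelope sketch (assuming GDDR6's usual 8 transfers per pin per listed memory clock, which matches the 18 Gbps effective rate AMD quotes for these cards; the function name is ours, not AMD's):

    ```python
    # Rough sketch: peak memory bandwidth from the spec table's figures.
    # Assumes GDDR6 moves 8 bits per pin per listed memory clock cycle
    # (2,250 MHz x 8 = 18 Gbps effective per pin); an illustration, not
    # an official formula.

    def peak_bandwidth_gbs(mem_clock_mhz: float, bus_width_bits: int,
                           transfers_per_clock: int = 8) -> float:
        """Peak memory bandwidth in GB/s."""
        effective_gbps_per_pin = mem_clock_mhz * transfers_per_clock / 1000
        return effective_gbps_per_pin * bus_width_bits / 8  # bits -> bytes

    # RX 7600 and RX 7600 XT: same 2,250 MHz clock, same 128-bit bus,
    # so identical peak bandwidth despite the doubled capacity.
    print(peak_bandwidth_gbs(2250, 128))  # 288.0 GB/s
    # RX 7700 XT: same clock on a wider 192-bit bus.
    print(peak_bandwidth_gbs(2250, 192))  # 432.0 GB/s
    ```

    By this sketch, more RAM on the same bus means more capacity at the same speed, which fits the card's "peace of mind, not performance" positioning.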

    Read 3 remaining paragraphs | Comments


      2023 was the year that GPUs stood still

      news.movim.eu / ArsTechnica · Thursday, 28 December - 11:28 · 1 minute

    (credit: Andrew Cunningham)

    In many ways, 2023 was a long-awaited return to normalcy for people who build their own gaming and/or workstation PCs. For the entire year, most mainstream components have been available at or a little under their official retail prices, making it possible to build all kinds of PCs at relatively reasonable prices without worrying about restocks or waiting for discounts. It was a welcome continuation of some GPU trends that started in 2022. Nvidia, AMD, and Intel could release a new GPU, and you could consistently buy that GPU for roughly what it was supposed to cost.

    That's where we get into how frustrating 2023 was for GPU buyers, though. Cards like the GeForce RTX 4090 and Radeon RX 7900 series launched in late 2022 and boosted performance beyond what any last-generation cards could achieve. But 2023's midrange GPU launches were less ambitious. Not only did they offer the performance of a last-generation GPU, but most of them did it for around the same price as the last-gen GPUs whose performance they matched.

    The midrange runs in place

    Not every midrange GPU launch will get us a GTX 1060—a card that was roughly 50 percent faster than its immediate predecessor and that beat the previous-generation GTX 980 despite costing just a bit over half as much money. But even if your expectations were low, this year's midrange GPU launches have been underwhelming.

    Read 22 remaining paragraphs | Comments


      Nvidia introduces the H200, an AI-crunching monster GPU that may speed up ChatGPT

      news.movim.eu / ArsTechnica · Monday, 13 November - 21:44 · 1 minute

    The Nvidia H200 GPU covered with a fanciful blue explosion that figuratively represents raw compute power bursting forth in a glowing flurry. (credit: Nvidia | Benj Edwards)

    On Monday, Nvidia announced the HGX H200 Tensor Core GPU, which utilizes the Hopper architecture to accelerate AI applications. It's a follow-up to the H100 GPU, released last year and previously Nvidia's most powerful AI GPU. If widely deployed, it could lead to far more powerful AI models—and faster response times for existing ones like ChatGPT—in the near future.

    According to experts, lack of computing power (often called "compute") has been a major bottleneck of AI progress this past year, hindering deployments of existing AI models and slowing the development of new ones. Shortages of powerful GPUs that accelerate AI models are largely to blame. One way to alleviate the compute bottleneck is to make more chips, but you can also make AI chips more powerful. That second approach may make the H200 an attractive product for cloud providers.

    What's the H200 good for? Despite the "G" in the "GPU" name, data center GPUs like this typically aren't for graphics. GPUs are ideal for AI applications because they perform vast numbers of parallel matrix multiplications, which are necessary for neural networks to function. They are essential in the training portion of building an AI model and the "inference" portion, where people feed inputs into an AI model and it returns results.
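    As a toy illustration of that inference step (plain NumPy on a CPU standing in for the GPU, with layer sizes invented for the example): a single neural-network layer boils down to one matrix multiplication plus a nonlinearity, and it's the matmul's many independent multiply-accumulates that a GPU runs in parallel.

    ```python
    import numpy as np

    # One "inference" pass through a single dense layer: inputs go in,
    # a matrix multiplication and a nonlinearity produce outputs. Each
    # of the 32 x 256 output values is an independent dot product, which
    # is exactly the kind of work a GPU parallelizes across its cores.
    rng = np.random.default_rng(0)

    batch = rng.standard_normal((32, 512))     # 32 inputs, 512 features each
    weights = rng.standard_normal((512, 256))  # layer with 256 output units
    bias = np.zeros(256)

    activations = np.maximum(batch @ weights + bias, 0)  # ReLU(x @ W + b)
    print(activations.shape)  # (32, 256)
    ```

    Training runs the same multiplications (plus their gradients) over and over across enormous datasets, which is why raw matmul throughput and memory bandwidth are the numbers that matter for chips like the H200.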

    Read 7 remaining paragraphs | Comments