
      We are currently testing the Nvidia RTX 4090—let us show you its heft

      news.movim.eu / ArsTechnica · Wednesday, 5 October, 2022 - 23:26 · 1 minute

    The Nvidia RTX 4090 founders edition. If you can't tell, those lines are drawn on, though the heft of this $1,599 card might convince you that they're a reflection of real-world motion blur upon opening this massive box. (credit: Sam Machkovech)

    It's a busy time in the Ars Technica GPU testing salt mines (not to be confused with the mining that GPUs used to be known for). After wrapping up our take on the Intel Arc A700 series, we went right back to testing a GPU that we've had for a few days now: the Nvidia RTX 4090.

    This beast of a GPU, provided by Nvidia to Ars Technica for review purposes, is priced well out of the average consumer range, even for a product category where the average price keeps creeping upward. Though we're not allowed to disclose anything about our testing as of press time, our upcoming coverage will reflect this GPU's $1,599-and-up reality. In the meantime, we thought an unboxing of Nvidia's "founders edition" of the 4090 would begin telling the story of exactly who this GPU might not be for.

    On paper, the Nvidia RTX 4090 is poised to blow past its Nvidia predecessors, with specs that handily surpass early 2022's overkill RTX 3090 Ti. The 4090 comes packed with approximately 50 percent more CUDA cores and 25 to 33 percent higher counts in other significant categories, particularly cores dedicated to tensor and ray-tracing calculations (which have also been updated for Nvidia's new 5 nm process). However, one spec carries over unchanged from the 3090 and 3090 Ti: the VRAM type and capacity (once again, 24GB of GDDR6X RAM).


      Intel A770, A750 review: We are this close to recommending these GPUs

      news.movim.eu / ArsTechnica · Wednesday, 5 October, 2022 - 13:00 · 1 minute

    We took our handsome pair of new Arc A700-series GPUs out for some glamour shots. While minding standard static-related protocols, of course. (credit: Sam Machkovech)

    What's it like owning a brand-new Intel Arc A700-series graphics card? Is it the show-stopping clapback against Nvidia that wallet-pinched PC gamers have been dreaming of? Is it an absolute mess of unoptimized hardware and software? Does it play video games?

    That last question is easy to answer: yes, and pretty well. Intel now has a series of GPUs entering the PC gaming market just in time for a few major industry trends to play out: some easing in the supply chain, some crashes in cryptocurrency markets, and more GPUs being sold near their originally announced MSRPs. If those factors continue to move in consumer-friendly directions, it will mean that people might actually get to buy and enjoy the best parts of Intel’s new A700-series graphics cards. (Sadly, limited stock remains a concern in modern GPU reviews. Without firm answers from Intel on how many units it's making, we’re left wondering what kind of Arc GPU sell-outs to expect until further notice.)

    While this is a fantastic first-generation stab at an established market, it’s still a first-generation stab. In great news, Intel is taking the GPU market seriously with how its Arc A770 (starting at $329) and Arc A750 (starting at $289) cards are architected. Their best results come from modern and future rendering APIs, and in those gaming scenarios, their power and performance exceed their price points.


      The rest of Intel Arc’s A700-series GPU prices: A750 lands Oct. 12 below $300

      news.movim.eu / ArsTechnica · Thursday, 29 September, 2022 - 21:01 · 1 minute

    Intel arrives at a crucial sub-$300 price for its mid-range GPU option. But will that price prove worthwhile once weighed against its performance? (credit: Intel)

    Intel's highest-end graphics card lineup is approaching its retail launch, and that means we're getting more answers to crucial market questions of prices, launch dates, performance, and availability. Today, Intel answered more of those A700-series GPU questions, and they're paired with claims that every card in the Arc A700 series punches back at Nvidia's 18-month-old RTX 3060.

    After announcing a $329 price for its A770 GPU earlier this week, Intel clarified that the company would launch three A700 series products on October 12: the aforementioned Arc A770 for $329, which sports 8GB of GDDR6 memory; an additional Arc A770 Limited Edition for $349, which jumps up to 16GB of GDDR6 at slightly higher memory bandwidth and otherwise sports identical specs; and the slightly weaker A750 Limited Edition for $289.

    If you missed the memo on that sub-$300 GPU when it was previously announced, the A750 LE is essentially a binned version of the A770's chipset with 87.5 percent of the shading units and ray tracing (RT) units turned on, along with an ever-so-slightly downclocked boost clock (2.05 GHz, compared to 2.1 GHz on both A770 models).
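
    To put those numbers together: a back-of-the-envelope calculation, using only the figures quoted above and ignoring memory bandwidth and real-world scaling, suggests where the A750 LE should land relative to the A770 on paper.

```python
# Back-of-the-envelope A750 LE vs. A770 estimate, using only the figures
# quoted above; real games will not scale this cleanly.
a750_units_fraction = 0.875          # 87.5% of the A770's shading/RT units
a750_boost_ghz, a770_boost_ghz = 2.05, 2.1

relative_throughput = a750_units_fraction * (a750_boost_ghz / a770_boost_ghz)
relative_price = 289 / 329           # A750 LE vs. the 8GB A770

print(f"A750 LE theoretical throughput: {relative_throughput:.1%} of A770")  # ~85.4%
print(f"A750 LE price: {relative_price:.1%} of A770")                        # ~87.8%
```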


      Intel: “Moore’s law is not dead” as Arc A770 GPU is priced at $329

      news.movim.eu / ArsTechnica · Tuesday, 27 September, 2022 - 17:40

    The Arc A770 GPU, coming from Intel on October 12, starting at $329. (credit: Intel)

    One week after Nvidia moved forward with some of its highest graphics card prices, Intel emerged with splashy news: a price for its 2022 graphics cards that lands a bit closer to Earth.

    Intel CEO Pat Gelsinger took the keynote stage on Tuesday at the latest Intel Innovation event to confirm a starting price and release date for the upcoming Arc A770 GPU: $329 on October 12.

    That price comes well below last week's highest-end Nvidia GPU prices but is meant to more closely correlate with existing GPUs from AMD and Nvidia in the $300 range. Crucially, Intel claims that its A770, the highest-end product from the company's first wave of graphics cards, will compare to or even exceed the Nvidia RTX 3060 Ti, which debuted in late 2020 at $399 and continues to stick to that price point at most marketplaces.


      Nvidia’s powerful H100 GPU will ship in October

      news.movim.eu / ArsTechnica · Tuesday, 20 September, 2022 - 16:22

    A press handout showing the Nvidia H100 Hopper GPU and its applications. (credit: Nvidia)

    At today's GTC conference keynote, Nvidia announced that its H100 Tensor Core GPU is in full production and that tech partners such as Dell, Lenovo, Cisco, Atos, Fujitsu, GIGABYTE, Hewlett Packard Enterprise, and Supermicro will begin shipping products built around the H100 next month.

    The H100, part of the "Hopper" architecture, is the most powerful AI-focused GPU Nvidia has ever made, surpassing its previous high-end chip, the A100. The H100 includes 80 billion transistors and a special "Transformer Engine" to accelerate machine learning tasks. It also supports Nvidia NVLink, which links GPUs together to multiply performance.

    According to Nvidia's press release, the H100 also delivers efficiency benefits, offering the same performance as the A100 with 3.5 times better energy efficiency, 3 times lower cost of ownership, and 5 times fewer server nodes.


      Nvidia’s Ada Lovelace GPU generation: $1,599 for RTX 4090, $899 and up for 4080

      news.movim.eu / ArsTechnica · Tuesday, 20 September, 2022 - 15:43 · 1 minute

    Time to bust out the checkbook again, GPU lovers. The RTX 4090 is here (and it's not alone). (credit: Nvidia)

    After weeks of teases, Nvidia's newest computer graphics cards, the "Ada Lovelace" generation of RTX 4000 GPUs, are here. Nvidia CEO Jensen Huang debuted two new models on Tuesday: the RTX 4090, which will start at a whopping $1,599, and the RTX 4080, which will launch in two configurations.

    The pricier card, slated to launch on October 12, occupies the same highest-end category as Nvidia's 2020 megaton RTX 3090 (previously designated by the company as its "Titan" product). The 4090's increase in physical size will demand three slots on your PC build of choice. The specs are indicative of a highest-end GPU: 16,384 CUDA cores (up from the 3090's 10,496 CUDA cores) and a 2.52 GHz boost clock (up from 1.695 GHz on the 3090). Despite the improvements, the card still performs within the same 450 W power envelope as the 3090 Ti. Its RAM allocation will remain at 24GB of GDDR6X memory.
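
    Taken at face value, those raw specs imply a hefty theoretical ceiling. Here is a quick sanity check on the numbers quoted above, and nothing more; real frame rates scale far less cleanly than cores times clock:

```python
# Rough 4090 vs. 3090 comparison from the raw specs quoted above.
# This is a naive cores-times-clock ceiling, not a performance prediction.
cuda_4090, cuda_3090 = 16_384, 10_496
boost_4090_ghz, boost_3090_ghz = 2.52, 1.695

print(f"CUDA cores:  +{cuda_4090 / cuda_3090 - 1:.0%}")           # ~ +56%
print(f"Boost clock: +{boost_4090_ghz / boost_3090_ghz - 1:.0%}")  # ~ +49%

ceiling = (cuda_4090 * boost_4090_ghz) / (cuda_3090 * boost_3090_ghz)
print(f"Naive throughput ceiling: x{ceiling:.2f}")                 # ~ x2.32
```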

    This jump in performance is fueled in part by Nvidia's long-rumored move to TSMC's "4N" process, a new generation of 5 nm chips that provides a massive efficiency gain over the previous Ampere generation's 8 nm process.


      Nvidia’s flagship AI chip reportedly 4.5x faster than the previous champ

      news.movim.eu / ArsTechnica · Friday, 9 September, 2022 - 20:01

    A press photo of the Nvidia H100 Tensor Core GPU. (credit: Nvidia)

    Nvidia announced yesterday that its upcoming H100 "Hopper" Tensor Core GPU set new performance records during its debut in the industry-standard MLPerf benchmarks, delivering results up to 4.5 times faster than the A100, which is currently Nvidia's fastest production AI chip.

    The MLPerf benchmarks (technically called "MLPerf™ Inference 2.1") measure "inference" workloads, which demonstrate how well a chip can apply a previously trained machine learning model to new data. A group of industry firms known as MLCommons developed the MLPerf benchmarks in 2018 to deliver a standardized metric for conveying machine learning performance to potential customers.

    In particular, the H100 did well in the BERT-Large benchmark, which measures natural language processing performance using the BERT model developed by Google. Nvidia credits this particular result to the Hopper architecture's Transformer Engine, which specifically accelerates the training of transformer models. This means that the H100 could accelerate future natural language models similar to OpenAI's GPT-3, which can compose written works in many different styles and hold conversational chats.
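
    For a concrete sense of what an "inference" measurement looks like, here is a minimal sketch in the same spirit, timing BERT forward passes with PyTorch and Hugging Face's transformers library (both our choices for illustration; the official MLPerf harness is far more rigorous, with fixed scenarios, datasets, and accuracy targets):

```python
# Minimal BERT inference-throughput timing -- a toy illustration,
# NOT the official MLPerf Inference harness.
import time
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-large-uncased").eval().to(device)

batch_size = 8
batch = tokenizer(["The GPU market is [MASK]."] * batch_size,
                  return_tensors="pt", padding=True).to(device)

with torch.no_grad():
    for _ in range(3):                  # warmup passes
        model(**batch)
    if device == "cuda":
        torch.cuda.synchronize()        # flush queued GPU work before timing
    start = time.perf_counter()
    runs = 20
    for _ in range(runs):
        model(**batch)
    if device == "cuda":
        torch.cuda.synchronize()
    elapsed = time.perf_counter() - start

print(f"{runs * batch_size / elapsed:.1f} sequences/second")
```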


      As cryptocurrency tumbles, prices for new and used GPUs continue to fall

      news.movim.eu / ArsTechnica · Friday, 17 June, 2022 - 18:26 · 1 minute

    AMD's Radeon RX 6800 and 6800 XT. (credit: Sam Machkovech)

    Cryptocurrency has had a rough year. Bitcoin has fallen by more than 50 percent since the start of the year, from nearly $48,000 in January to just over $20,000 as of publication. Celsius, a major cryptocurrency "bank," suspended withdrawals earlier this week, and the Coinbase crypto exchange announced a round of layoffs this past Tuesday after pausing hiring last month.

    It may be small comfort to anyone who wanted to work at Coinbase or spent hard-earned money on an ugly picture of an ape because a celebrity told them to, but there's some good news for PC builders and gamers in all of this. As tracked by Tom's Hardware, prices for new and used graphics cards continue to fall, coming down from their peak prices in late 2021 and early 2022. For weeks, it has generally been possible to go to Amazon, Newegg, or Best Buy and buy current-generation GPUs for prices that would have seemed like bargains six months or a year ago, and pricing for used GPUs has fallen further.

    As Tom's Hardware reports, most mid-range Nvidia GeForce RTX 3000-series cards are still selling at or slightly over their manufacturer-suggested retail prices—the 3050, 3060, and 3070 series are all still in high demand. But top-end 3080 Ti, 3090, and 3090 Ti GPUs are all selling below their (admittedly astronomical) MSRPs right now, as are almost all of AMD's Radeon RX 6000 series cards.


      How to measure your computer's temperature

      news.movim.eu / Numerama · Friday, 17 June, 2022 - 16:11

    Knowing your computer's temperature requires installing a dedicated program, whether you're on PC or Mac. Doing so lets you check whether the machine is running too hot and, if necessary, consider steps to bring its temperature down. [Read more]
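
    As one concrete illustration of the kind of reading such software exposes, here is a minimal sketch using Python's psutil package (our choice for illustration, not a tool the article recommends); it works mainly on Linux, while macOS and Windows generally need the dedicated utilities the article covers:

```python
# Minimal temperature readout via psutil. Exposed sensors vary by
# platform; on macOS and Windows the function is typically unavailable,
# so we fall back to an empty result.
import psutil

read_temps = getattr(psutil, "sensors_temperatures", lambda: {})
temps = read_temps()
if not temps:
    print("No temperature sensors exposed on this platform.")
for chip, sensors in temps.items():
    for s in sensors:
        print(f"{chip}/{s.label or 'temp'}: {s.current} °C")
```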
