      GeForce RTX 4060 review: Not thrilling, but a super-efficient $299 workhorse

      news.movim.eu / ArsTechnica · Wednesday, 28 June, 2023 - 13:00

    PNY's take on the basic $299 version of the Nvidia GeForce RTX 4060. (credit: Andrew Cunningham)

    Nvidia's GeForce 1060, 2060, and 3060 graphics cards are some of the most widely used GPUs in all of PC gaming. Four of Steam's top five GPUs are 60-series cards, and the only one that isn't is an even lower-end GTX 1650.

    All of this is to say that, despite all the fanfare for high-end products like the RTX 4090, the new GeForce RTX 4060 is Nvidia's most important Ada Lovelace-based GPU. History suggests that it will become a baseline for game developers to aim for and the go-to recommendation for most entry-level-to-mainstream PC gaming builds.

    The RTX 4060, which launches this week starting at $299, is mostly up to the task. It's faster and considerably more power efficient than the 3060 it replaces, and it doesn't come with the same generation-over-generation price hike as the higher-end Lovelace GPUs. It's also a solid value compared to the 4060 Ti, typically delivering between 80 and 90 percent of the 4060 Ti's performance for 75 percent of the money.
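    As a quick check on that value claim, the arithmetic below uses the $299 and $399 launch prices and the 80-to-90-percent relative-performance range cited in this review; it is an illustrative sketch, not a benchmark.

```python
# Rough value comparison between the RTX 4060 ($299) and RTX 4060 Ti ($399).
# The relative-performance range is the 80-90 percent figure cited above,
# not measured frame rates.
rtx_4060_price = 299
rtx_4060_ti_price = 399

for relative_perf in (0.80, 0.90):
    perf_per_dollar_4060 = relative_perf / rtx_4060_price
    perf_per_dollar_4060_ti = 1.0 / rtx_4060_ti_price
    advantage = perf_per_dollar_4060 / perf_per_dollar_4060_ti
    print(f"At {relative_perf:.0%} of the Ti's performance, "
          f"the 4060 offers {advantage:.2f}x the performance per dollar.")
# Prints roughly 1.07x and 1.20x, consistent with the "solid value" framing.
```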

      Four-person dev team gets Apple’s M-series GPU working in Linux

      news.movim.eu / ArsTechnica · Wednesday, 7 December, 2022 - 17:46 · 2 minutes

    SuperTuxKart running on an Asahi Linux system, with Debian logo in terminal

    Has any game been more associated with proof of concept than SuperTuxKart? It's the "Hello World" of 3D racing. (credit: Asahi Linux)

    For the brave people running Linux on Apple Silicon, patience has paid off. GPU drivers that provide desktop hardware acceleration are now available in Asahi Linux, unleashing more of the M-series chips' power.

    It has taken roughly two years to reach this alpha-stage OpenGL driver, but the foundational groundwork should result in faster progress ahead, write project leads Alyssa Rosenzweig and Asahi Lina. In the meantime, the drivers are “good enough to run a smooth desktop experience and some games.”

    The drivers offer non-conformance-tested OpenGL 2.1 and OpenGL ES 2.0 support for all M-series Apple devices. That’s enough for desktop environments and older games running at 60 frames per second at 4K. But the next target is Vulkan support. OpenGL work is being done “with Vulkan in mind,” Lina writes, but some OpenGL support was needed to get desktops working first. There's a lot more you can read about the interplay between OpenGL, Vulkan, and Zink in Asahi's blog post.

    For a while now, Asahi Linux has been making do with software-rendered desktops, but M-series chips are fast enough that they feel almost native (and sometimes faster than other desktops on ARM hardware). And while the Asahi project is relatively new, some core bits of Apple's silicon are backward compatible with known and supported devices, like the original iPhone. And Asahi's work is intended to move upstream, helping other distributions get up and running on Apple's hardware.

    The team of developers includes three core members—Rosenzweig, Lina, and Dougall Johnson—plus Ella Stanforth, who works on Vulkan drivers and future reuse. The developers note that their work stands "on the shoulders of FOSS giants." That includes the NIR backend, the Direct Rendering Manager in the Linux kernel, and the Gallium3D API inside the open source Mesa drivers, which themselves build on 30 years of OpenGL work.

    Installing the new drivers requires running a bleeding-edge kernel, Mesa drivers, and a Wayland-based desktop. The team welcomes bug reports, but not of the "this specific app isn't working" variety. Their blog post details how and where to submit reports about certain kinds of GPU-specific issues.
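    For readers wondering what those requirements look like in practice, here is a minimal sketch of the kind of pre-flight check involved. It only mirrors the three prerequisites above; any version thresholds you might add on top are your own assumptions, not the project's official minimums.

```python
# Hypothetical pre-flight check for the Asahi GPU drivers: report the running
# kernel, whether the session is Wayland, and the Mesa/OpenGL version string.
# This only inspects the environment; it is not an installer.
import os
import platform
import subprocess

def kernel_release() -> str:
    # Kernel release string as reported by the running system.
    return platform.release()

def running_wayland() -> bool:
    return os.environ.get("XDG_SESSION_TYPE") == "wayland"

def opengl_version() -> str:
    # `glxinfo -B` ships with mesa-utils; fall back gracefully if absent.
    try:
        out = subprocess.run(["glxinfo", "-B"], capture_output=True, text=True)
        for line in out.stdout.splitlines():
            if "OpenGL version string" in line:
                return line.split(":", 1)[1].strip()
    except FileNotFoundError:
        pass
    return "unknown (glxinfo not installed)"

if __name__ == "__main__":
    print("kernel:", kernel_release())
    print("wayland session:", running_wayland())
    print("OpenGL:", opengl_version())
```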

      Hungry for AI? New supercomputer contains 16 dinner-plate-size chips

      news.movim.eu / ArsTechnica · Monday, 14 November, 2022 - 19:16

    The Cerebras Andromeda, a 13.5 million core AI supercomputer. (credit: Cerebras)

    On Monday, Cerebras Systems unveiled its 13.5 million core Andromeda AI supercomputer for deep learning, reports Reuters. According to Cerebras, Andromeda delivers over 1 exaflop (1 quintillion operations per second) of AI computational power at 16-bit half precision.

    The Andromeda is itself a cluster of 16 Cerebras CS-2 computers linked together. Each CS-2 contains one Wafer Scale Engine chip (often called "WSE-2"), which is currently the largest silicon chip ever made, at about 8.5 inches square and packed with 2.6 trillion transistors organized into 850,000 cores.
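    A quick back-of-the-envelope check shows how the cluster-level figures follow from the per-chip ones; the numbers below are the ones cited in this story.

```python
# Sanity-check the cluster-level numbers from the per-chip WSE-2 specs above.
cs2_systems = 16
cores_per_wse2 = 850_000
cluster_fp16_ops = 1e18  # "over 1 exaflop" at 16-bit half precision

total_cores = cs2_systems * cores_per_wse2
print(f"total cores: {total_cores:,}")  # 13,600,000 -- in line with the ~13.5 million figure
print(f"ops/sec per core: {cluster_fp16_ops / total_cores:.2e}")  # roughly 7.4e+10
```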

    Cerebras built Andromeda at a data center in Santa Clara, California, for $35 million. It's tuned for applications like large language models and has already been in use for academic and commercial work. "Andromeda delivers near-perfect scaling via simple data parallelism across GPT-class large language models, including GPT-3, GPT-J and GPT-NeoX," writes Cerebras in a press release.

      AMD: the first RDNA 3 GPUs arrive to challenge the RTX 4000 series

      news.movim.eu / JournalDuGeek · Friday, 4 November, 2022 - 18:00

    Nvidia had better watch out, because competition is raging in the graphics card segment.

      RTX 4090 review: Spend at least $1,599 for Nvidia’s biggest bargain in years

      news.movim.eu / ArsTechnica · Tuesday, 11 October, 2022 - 13:00

    The Nvidia RTX 4090 founders edition. If you can't tell, those lines are drawn on, though the heft of this $1,599 product might convince you that they're a reflection of real-world motion blur upon opening this massive box. (credit: Sam Machkovech)

    The Nvidia RTX 4090 makes me laugh.

    Part of that is due to its size. When a standalone GPU is as large as a modern video gaming console—it's nearly identical in total volume to the Xbox Series S and more than double the size of a Nintendo Switch—it's hard not to laugh incredulously at the thing. None of Nvidia's highest-end "reference" GPUs, previously branded as "Titan" models, have ever been so massive, and things only get more ludicrous when you move beyond Nvidia's "Founders Edition" and check out AIB options from third-party partners. (We haven't tested any models other than the 4090 FE yet.)

    After figuring out how to safely mount and run power to the RTX 4090, however, the laughs become decidedly different. You're going to consistently laugh with, not at, the RTX 4090, either in joy or excited disbelief.

      We are currently testing the Nvidia RTX 4090—let us show you its heft

      news.movim.eu / ArsTechnica · Wednesday, 5 October, 2022 - 23:26 · 1 minute

    The Nvidia RTX 4090 founders edition. If you can't tell, those lines are drawn on, though the heft of this $1,599 product might convince you that they're a reflection of real-world motion blur upon opening this massive box. (credit: Sam Machkovech)

    It's a busy time in the Ars Technica GPU testing salt mines (not to be confused with the mining that GPUs used to be known for). After wrapping up our take on the Intel Arc A700 series, we went right back to testing a GPU that we've had for a few days now: the Nvidia RTX 4090.

    This beast of a GPU, provided by Nvidia to Ars Technica for review purposes, is priced well out of the average consumer range, even for a product category where the average price keeps creeping upward. Though we're not allowed to disclose anything about our testing as of press time, our upcoming coverage will reflect this GPU's $1,599-and-up reality. In the meantime, we thought an unboxing of Nvidia's "founders edition" of the 4090 would begin telling the story of exactly who this GPU might not be for.

    On paper, the Nvidia RTX 4090 is poised to blow past its Nvidia predecessors, with specs that handily surpass early 2022's overkill RTX 3090 Ti product. The 4090 comes packed with approximately 50 percent more CUDA cores and between 25 and 33 percent higher counts in other significant categories, particularly cores dedicated to tensor and ray-tracing calculations (which are also updated to new specs for Nvidia's new 5 nm process). However, one spec from the 3090 and 3090 Ti remains identical: its VRAM type and capacity (once again, 24GB of GDDR6X RAM).
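    For the CUDA core claim specifically, the uplift can be checked against the publicly listed core counts for both cards; this small sketch leaves the tensor and RT core figures out rather than guessing at them.

```python
# Generation-over-generation uplift using publicly listed CUDA core counts
# and VRAM capacities for the RTX 3090 Ti and RTX 4090.
specs = {
    "RTX 3090 Ti": {"cuda_cores": 10_752, "vram_gb": 24},
    "RTX 4090":    {"cuda_cores": 16_384, "vram_gb": 24},
}

old, new = specs["RTX 3090 Ti"], specs["RTX 4090"]
cuda_uplift = new["cuda_cores"] / old["cuda_cores"] - 1
print(f"CUDA cores: +{cuda_uplift:.0%}")  # roughly +52 percent, i.e. "approximately 50 percent more"
print(f"VRAM: {old['vram_gb']}GB -> {new['vram_gb']}GB of GDDR6X (unchanged)")
```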

      Intel A770, A750 review: We are this close to recommending these GPUs

      news.movim.eu / ArsTechnica · Wednesday, 5 October, 2022 - 13:00 · 1 minute

    We took our handsome pair of new Arc A700-series GPUs out for some glamour shots. While minding standard static-related protocols, of course. (credit: Sam Machkovech)

    What's it like owning a brand-new Intel Arc A700-series graphics card? Is it the show-stopping clapback against Nvidia that wallet-pinched PC gamers have been dreaming of? Is it an absolute mess of unoptimized hardware and software? Does it play video games?

    That last question is easy to answer: yes, and pretty well. Intel now has a series of GPUs entering the PC gaming market just in time for a few major industry trends to play out: some easing in the supply chain, some crashes in cryptocurrency markets, and more GPUs being sold near their originally announced MSRPs. If those factors continue to move in consumer-friendly directions, it will mean that people might actually get to buy and enjoy the best parts of Intel’s new A700-series graphics cards. (Sadly, limited stock remains a concern in modern GPU reviews. Without firm answers from Intel on how many units it's making, we’re left wondering what kind of Arc GPU sell-outs to expect until further notice.)

    While this is a fantastic first-generation stab at an established market, it’s still a first-generation stab. In great news, Intel is taking the GPU market seriously with how its Arc A770 (starting at $329) and Arc A750 (starting at $289) cards are architected. Their best results come in games built on modern and future rendering APIs, and in those gaming scenarios, their power and performance exceed their price points.

      The rest of Intel Arc’s A700-series GPU prices: A750 lands Oct. 12 below $300

      news.movim.eu / ArsTechnica · Thursday, 29 September, 2022 - 21:01 · 1 minute

    Intel arrives at a crucial sub-$300 price for its medium-end GPU option. But will that bear out as a worthwhile price compared to its performance? (credit: Intel)

    Intel's highest-end graphics card lineup is approaching its retail launch, and that means we're getting more answers to crucial market questions of prices, launch dates, performance, and availability. Today, Intel answered more of those A700-series GPU questions, and they're paired with claims that every card in the Arc A700 series punches back at Nvidia's 18-month-old RTX 3060.

    After announcing a $329 price for its A770 GPU earlier this week, Intel clarified that the company would launch three A700 series products on October 12: the aforementioned Arc A770 for $329, which sports 8GB of GDDR6 memory; an additional Arc A770 Limited Edition for $349, which jumps up to 16GB of GDDR6 at slightly higher memory bandwidth and otherwise sports identical specs; and the slightly weaker A750 Limited Edition for $289.
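    One purely illustrative way to compare those three price points is cost per gigabyte of VRAM. Note that the A750 LE's 8GB capacity is Intel's published spec rather than something stated above, and VRAM is only one ingredient in GPU value.

```python
# Cost per gigabyte of GDDR6 across the three announced A700-series cards.
# Prices are the launch prices above; the A750 LE's 8GB capacity is Intel's
# published spec. Illustrative only -- VRAM is one factor among many.
cards = {
    "Arc A770 (8GB)":     (329, 8),
    "Arc A770 LE (16GB)": (349, 16),
    "Arc A750 LE (8GB)":  (289, 8),
}
for name, (price_usd, vram_gb) in cards.items():
    print(f"{name}: ${price_usd / vram_gb:.2f} per GB")
# Roughly $41, $22, and $36 per GB, respectively.
```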

    If you missed the memo on that sub-$300 GPU when it was previously announced, the A750 LE is essentially a binned version of the A770's chipset with 87.5 percent of the shading units and ray tracing (RT) units turned on, along with an ever-so-slightly downclocked boost clock (2.05 GHz, compared to 2.1 GHz on both A770 models).
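    That 87.5 percent figure falls straight out of the unit counts: Intel lists 32 Xe cores on the A770 and 28 on the A750, and the clock gap is similarly small. A quick check, using those publicly listed counts and the boost clocks above:

```python
# The A750 LE as a binned A770: 28 of 32 Xe cores enabled, plus a slightly
# lower boost clock. Xe core counts are Intel's publicly listed figures.
a770_xe_cores, a750_xe_cores = 32, 28
a770_boost_ghz, a750_boost_ghz = 2.1, 2.05

enabled_fraction = a750_xe_cores / a770_xe_cores
clock_deficit = 1 - a750_boost_ghz / a770_boost_ghz
print(f"enabled units: {enabled_fraction:.1%}")     # 87.5%
print(f"boost clock deficit: {clock_deficit:.1%}")  # about 2.4%
```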

      Intel: “Moore’s law is not dead” as Arc A770 GPU is priced at $329

      news.movim.eu / ArsTechnica · Tuesday, 27 September, 2022 - 17:40

    The Arc A770 GPU, coming from Intel on October 12, starting at $329. (credit: Intel)

    One week after Nvidia moved forward with some of its highest graphics card prices, Intel emerged with splashy news: a price for its 2022 graphics cards that lands a bit closer to Earth.

    Intel CEO Pat Gelsinger took the keynote stage on Tuesday at the latest Intel Innovation event to confirm a starting price and release date for the upcoming Arc A770 GPU: $329 on October 12.

    That price comes well below last week's highest-end Nvidia GPU prices but is meant to more closely correlate with existing GPUs from AMD and Nvidia in the $300 range. Crucially, Intel claims that its A770, the highest-end product from the company's first wave of graphics cards, will compare favorably to, or even exceed, the Nvidia RTX 3060 Ti, which debuted last year at $399 and continues to stick to that price point at most marketplaces.
