
      Intel’s $180 Arc A580 aims for budget gaming builds, but it’s a hard sell

      news.movim.eu / ArsTechnica · Tuesday, 10 October, 2023 - 17:28 · 1 minute

    Intel's Alchemist GPU silicon, the heart of the Arc A750, A770, and now, the A580. (credit: Intel)

    Intel's Arc GPUs aren't bad for what they are, but a relatively late launch and driver problems meant that the company had to curtail its ambitions quite a bit. Early leaks and rumors that suggested a GeForce RTX 3080 Ti or RTX 3070 level of performance for the top-end Arc card never panned out, and the best Arc cards can usually only compete with $300-and-under midrange GPUs from AMD and Nvidia.

    Today Intel is quietly releasing another GPU into that same midrange milieu, the Arc A580. Priced starting at $179, the card aims to compete with lower-end last-gen GPUs like the Nvidia GeForce RTX 3050 or AMD Radeon RX 6600, cards currently available for around $200 that aim to provide a solid 1080p gaming experience (though sometimes with a setting or two turned down for newer and more demanding games).

    The A580 is based on the same Alchemist silicon as the Arc A750 and A770, but with just 24 of the Xe graphics cores enabled, instead of 28 for the A750 and 32 for the A770. It does keep the same 256-bit memory bus as those higher-end cards, attached to a serviceable-for-the-price 8GB pool of GDDR6 RAM. Reviews from outlets like Tom's Hardware generally show the A580 beating the RTX 3050 and RX 6600 in most games but falling a little short of the RTX 3060 and RX 7600 (to say nothing of the RTX 4060, which beats the Arc A750 and A770 in most games).
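    For a rough sense of where the A580 sits, here is a minimal TypeScript sketch comparing enabled Xe-core counts alone. Clock speeds, drivers, and memory all matter at least as much, so treat this as scale-setting, not a performance prediction:

```ts
// Relative scale of the Alchemist lineup by enabled Xe cores only.
// Real-world performance also depends on clocks, drivers, and memory.
const xeCores: Record<string, number> = { A580: 24, A750: 28, A770: 32 };

for (const [card, cores] of Object.entries(xeCores)) {
  const pctOfA770 = ((100 * cores) / xeCores.A770).toFixed(0);
  console.log(`${card}: ${cores} Xe cores (~${pctOfA770}% of the A770)`);
}
// A580: 24 Xe cores (~75% of the A770)
// A750: 28 Xe cores (~88% of the A770)
// A770: 32 Xe cores (~100% of the A770)
```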



      Tired of shortages, OpenAI considers making its own AI chips

      news.movim.eu / ArsTechnica · Monday, 9 October, 2023 - 16:17 · 1 minute

    A glowing OpenAI logo on a blue background. (credit: OpenAI / Benj Edwards)

    OpenAI, the creator of ChatGPT and DALL-E 3 generative AI products, is exploring the possibility of manufacturing its own AI accelerator chips, according to Reuters. Citing anonymous sources, the Reuters report indicates that OpenAI is considering the option due to a shortage of specialized AI GPU chips and the high costs associated with running them.

    OpenAI has been evaluating various options to address this issue, including potentially acquiring a chipmaking company and working more closely with other chip manufacturers like Nvidia. Currently, the AI firm has not made a final decision, but the discussions have been ongoing since at least last year. Nvidia dominates the AI chip market, holding more than 80 percent of the global share for processors best suited for AI applications. OpenAI CEO Sam Altman has publicly expressed his concerns over the scarcity and cost of these chips.

    The hardware situation is said to be a top priority for OpenAI, as the company currently relies on a massive supercomputer built by Microsoft, one of its largest backers. The supercomputer uses 10,000 Nvidia graphics processing units (GPUs), according to Reuters. Running ChatGPT comes with significant costs, with each query costing approximately 4 cents, according to Bernstein analyst Stacy Rasgon. If queries grow to even a tenth of the scale of Google search, the initial investment in GPUs would be around $48.1 billion, with annual maintenance costs at about $16 billion.
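    Those figures are easy to sanity-check with back-of-the-envelope arithmetic. The sketch below uses illustrative inputs; Rasgon's exact assumptions aren't spelled out in the report, and the Google query volume is a commonly cited rough figure, not an official one:

```ts
// Back-of-the-envelope check on the Bernstein-style math.
// All inputs are illustrative assumptions, not figures from the report.
const googleQueriesPerDay = 8.5e9; // rough, commonly cited estimate
const fractionOfGoogle = 0.1;      // "a tenth of the scale of Google search"
const costPerQueryUSD = 0.04;      // ~4 cents per query, per Rasgon

const queriesPerDay = googleQueriesPerDay * fractionOfGoogle;
const annualCostUSD = queriesPerDay * costPerQueryUSD * 365;

console.log(`queries/day: ${queriesPerDay.toExponential(1)}`);
console.log(`annual serving cost: ~$${(annualCostUSD / 1e9).toFixed(1)} billion`);
// ~$12.4 billion/year with these inputs -- the same order of magnitude as
// the ~$16 billion annual figure quoted above.
```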



      GPUs from all major suppliers are vulnerable to new pixel-stealing attack

      news.movim.eu / ArsTechnica · Tuesday, 26 September, 2023 - 17:40 · 1 minute


    GPUs from all six of the major suppliers are vulnerable to a newly discovered attack that allows malicious websites to read the usernames, passwords, and other sensitive visual data displayed by other websites, researchers have demonstrated in a paper published Tuesday.

    The cross-origin attack allows a malicious website from one domain—say, example.com—to effectively read the pixels displayed by a website from a different domain, such as example.org. Attackers can then reconstruct them in a way that allows them to view the words or images displayed by the latter site. This leakage violates a critical security principle that forms one of the most fundamental security boundaries safeguarding the Internet. Known as the same-origin policy, it mandates that content hosted on one website domain be isolated from all other website domains.
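    If the same-origin policy is unfamiliar, it's easy to see in action from any page's developer console. A minimal TypeScript/DOM sketch, with example.org standing in for an arbitrary cross-origin site:

```ts
// Demonstrating the same-origin policy from a page on another domain.
const frame = document.createElement("iframe");
frame.src = "https://example.org/"; // a different origin than this page

frame.addEventListener("load", () => {
  try {
    // For a cross-origin frame, contentDocument is null...
    console.log(frame.contentDocument ?? "blocked: document not readable");
    // ...and touching contentWindow.document throws a SecurityError.
    void frame.contentWindow!.document;
  } catch (err) {
    console.log("same-origin policy blocked access:", err);
  }
});
document.body.appendChild(frame);
```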

    Optimizing bandwidth at a cost

    GPU.zip, as the proof-of-concept attack has been named, starts with a malicious website that places a link to the webpage it wants to read inside of an iframe, a common HTML element that allows sites to embed ads, images, or other content hosted on other websites. Normally, the same-origin policy prevents either site from inspecting the source code, content, or final visual product of the other. The researchers found that the data compression that both integrated and discrete GPUs use to improve performance acts as a side channel that they can abuse to bypass the restriction and steal pixels one by one.
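    The full attack pipeline in the paper is elaborate, but the primitive underneath is simple to state: compression makes rendering time depend on pixel content, and a page can time its own rendering. Below is a heavily simplified, hypothetical sketch of that timing primitive only, not the actual GPU.zip code (which layers SVG filters to amplify individual pixels):

```ts
// Simplified sketch: stack filters over a cross-origin iframe, then time
// frames. On affected GPUs, uniform (compressible) pixels render faster
// than noisy (incompressible) ones; GPU.zip turns that gap into pixel data.
const frame = document.createElement("iframe");
frame.src = "https://example.org/target"; // hypothetical victim page
frame.style.filter = Array(20).fill("contrast(150%) blur(1px)").join(" ");
document.body.appendChild(frame);

function averageFrameTime(samples: number): Promise<number> {
  return new Promise((resolve) => {
    const deltas: number[] = [];
    let last = performance.now();
    const tick = (now: number) => {
      deltas.push(now - last);
      last = now;
      if (deltas.length >= samples) {
        resolve(deltas.reduce((a, b) => a + b, 0) / deltas.length);
      } else {
        requestAnimationFrame(tick);
      }
    };
    requestAnimationFrame(tick);
  });
}

averageFrameTime(100).then((ms) => console.log(`avg frame: ${ms.toFixed(2)} ms`));
```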



      Do Intel’s new graphics drivers actually overclock its low-end GPUs?

      news.movim.eu / ArsTechnica · Tuesday, 22 August, 2023 - 15:43 · 1 minute

    Intel's latest Arc GPU drivers do come with a firmware update, but contrary to most reports, it's not an "overclock." (credit: Intel)

    When we write about Intel's Arc GPUs, we're typically paying the most attention to the A750 and A770 because they're the cards that perform well enough that you might actually put them in an entry-level-to-midrange gaming desktop. But there's one other Arc graphics card of note: the lowly Arc A380, which snuck into some stores a few months before either high-end Arc card was released.

    With its eight Xe cores (down from 32 in the A770), 96-bit memory interface, and 6GB of RAM, the Arc A380 has been (in my case, literally) nothing to write home about. It's an entry-level graphics card that competes reasonably well with aging, low-end cards like Nvidia's GeForce GTX 1650 and AMD's Radeon RX 6400, and its hardware-accelerated AV1 video encoding support makes it mildly interesting for people who work with video. It's one of the better GPUs you can get for $100, its current street price, but that's not saying much.

    But Intel's latest graphics drivers provided an update specifically for the A380 that seems notable because of how rare it is: the 31.0.101.4644 driver package released last week also includes a firmware update for A380 cards that seems to boost their base clock speed from 2,000 MHz up to 2,150 MHz. That's a 7.5 percent increase, supposedly being provided for free to all A380 owners with a simple firmware update. At least, it would be if it were an actual increase in the card's peak clock speed, which it isn't.
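    The arithmetic behind that headline number, for what it's worth, is just the change in the advertised base clock:

```ts
// The claimed "overclock": a higher advertised base clock, not a change
// to the boost clocks the card actually sustains under load.
const oldBaseMHz = 2000;
const newBaseMHz = 2150;
const pctIncrease = (100 * (newBaseMHz - oldBaseMHz)) / oldBaseMHz;
console.log(`${pctIncrease.toFixed(1)}% higher base clock`); // 7.5%
```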



      New Intel GPU drivers help address one of Arc’s biggest remaining weak points

      news.movim.eu / ArsTechnica · Friday, 18 August, 2023 - 13:00 · 1 minute

    Intel is playing up the cumulative performance improvements for DirectX 11 games since its Arc GPUs launched almost a year ago. (credit: Intel)

    When they launched last fall, Intel's drivers for its Arc dedicated graphics cards were in rough shape. The company's messaging at the time—and for months beforehand—was something along the lines of, "We're aware, and we're working on it."

    I tend to be skeptical of these kinds of "we'll fix it in post" promises; you should buy products based on what they do now and not what the manufacturer promises they will one day be able to do, especially for something like consumer graphics cards where there are plenty of alternatives. But credit where it's due, Intel has put quite a bit of work into improving its drivers in the year or so since the first Arc cards launched.

    Today the company rounded up a collection of improvements made to its DirectX 11 drivers since launch, pointing to a set of games that now run about 19 percent faster on average than they did last October. Though Arc's performance in modern DirectX 12 and Vulkan games has always been good for the price, older APIs like DirectX 9 and 11 have been particular weak points compared to competing cards like the Nvidia GeForce RTX 4060 and 3060 series and the AMD Radeon RX 7600 and 6600 series.
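    Intel doesn't say here exactly how that 19 percent average is computed. A geometric mean of per-game FPS ratios is the conventional way to average speedups, sketched below with made-up numbers:

```ts
// Geometric mean of per-game speedup ratios. The ratios here are
// hypothetical placeholders, not Intel's actual per-game data.
const speedups = [1.12, 1.27, 1.08, 1.33, 1.16];
const geomean = Math.exp(
  speedups.map(Math.log).reduce((a, b) => a + b, 0) / speedups.length
);
console.log(`average uplift: ~${((geomean - 1) * 100).toFixed(1)}%`); // ~18.8%
```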



      Getting AAA games working in Linux sometimes requires concealing your GPU

      news.movim.eu / ArsTechnica · Wednesday, 9 August, 2023 - 17:57 · 1 minute

    There are some energies you should not tap for sorcery, something both Hogwarts students and Hogwarts Legacy installs running under Linux should know. (credit: Warner Bros. Games)

    Linux gaming's march toward being a real, actual thing has taken serious strides lately, due in large part to Valve's Proton-powered Steam Play efforts. Being Linux, there are still some quirks to figure out. One of them involves games trying to make use of Intel's upscaling tools.

    Intel's Arc series GPUs are interesting, in many senses of the word. They offer the best implementation of Intel's image reconstruction system, XeSS, similar to Nvidia's DLSS and AMD's FSR. XeSS, like its counterparts, utilizes machine learning to fill in the pixel gaps on anti-aliased objects and scenes. The results are sometimes clear, sometimes a bit fuzzy if you pay close attention. In our review of Intel's A770 and A750 GPUs in late 2022, we noted that cross-compatibility between all three systems could be in the works.
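    At a high level, all three upscalers do the same thing: render fewer pixels, then reconstruct the rest. A quick sketch of the pixel math, using typical per-axis scale factors (the exact factors vary by vendor and quality mode):

```ts
// Internal render resolution for a given output size and per-axis scale.
// Scale factors are typical of quality-through-performance modes; exact
// values differ between XeSS, DLSS, and FSR.
function internalResolution(outW: number, outH: number, scale: number) {
  return { w: Math.round(outW / scale), h: Math.round(outH / scale) };
}

for (const scale of [1.5, 1.7, 2.0]) {
  const { w, h } = internalResolution(3840, 2160, scale);
  const saved = 100 * (1 - (w * h) / (3840 * 2160));
  console.log(`${scale}x: render ${w}x${h} (~${saved.toFixed(0)}% fewer pixels)`);
}
```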

    That kind of easy swapping doesn't work when a game is running on a customized version of the WINE Windows-on-Linux compatibility layer, which translates Direct3D graphics calls to Vulkan while the game prods to see whether it, too, can make use of Intel's graphics boost. As noted by Phoronix, Intel developers contributing to the open source Mesa graphics project added the ability to hide an Intel GPU from the Vulkan Linux driver.
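    The actual change is a driver-level option inside Mesa, but the idea is simple to express. A conceptual sketch over a mock device list (0x8086 is Intel's PCI vendor ID; the real option name and plumbing live in Mesa's Vulkan driver, not application code):

```ts
// Conceptual sketch only: filter an Intel GPU out of device enumeration
// so a game probing for XeSS support never sees it. Mock data throughout.
interface PhysicalDevice {
  name: string;
  vendorID: number; // 0x8086 = Intel's PCI vendor ID
}

function enumerateDevices(
  all: PhysicalDevice[],
  hideIntel: boolean
): PhysicalDevice[] {
  return hideIntel ? all.filter((d) => d.vendorID !== 0x8086) : all;
}

const system: PhysicalDevice[] = [
  { name: "Intel Arc A770", vendorID: 0x8086 },
  { name: "llvmpipe (software renderer)", vendorID: 0x10005 }, // Mesa's ID
];
console.log(enumerateDevices(system, true)); // the Arc card is hidden
```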



      Rumors and retail listings point to the return of actual mid-range GPUs

      news.movim.eu / ArsTechnica · Thursday, 11 May, 2023 - 16:44 · 1 minute

    Nvidia's RTX 4080 and 4070 could finally be getting some more reasonably priced relatives. (credit: Andrew Cunningham)

    There are two kinds of GPUs you can buy right now if you want to build or upgrade a gaming PC: affordable but old ones and new but expensive ones. Both Nvidia and AMD have been leaning on older products, sometimes with price cuts, to fill the very large gaps in the middle and low ends of their current lineups. But a slowly building buzz of rumors and leaks suggests things should change before long.

    A source speaking to VideoCardz.com says there are three GeForce RTX 4060-series GPUs coming in the next couple of months, starting with an 8GB version of the 4060 Ti that could be announced as soon as next week and released by the end of the month. A 16GB version of the 4060 Ti and an 8GB version of the 4060 could be announced at the same time but launch at some point in July (Nvidia used the same simultaneous-announcement, staggered-release strategy for the 4090 and 4080 series).

    It's not surprising that the 4060 Ti looks like a big step down from the recently released RTX 4070—4,352 CUDA cores instead of 5,888, a 128-bit memory bus instead of 192-bit, 8GB instead of 12GB. But it also looks less-than-promising as a step up from 2020's RTX 3060 Ti, which used a 256-bit memory bus, 4,864 CUDA cores, and the same amount of RAM. Extra cache memory, higher clock speeds, and the updated Ada Lovelace architecture should all make the 4060 Ti faster than the 3060 Ti in the end, but it may not be a huge generational leap.
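    Laying the listed specs side by side makes the trade-off plainer. A small sketch (listed specs only; clocks, cache, and architectural changes aren't captured here):

```ts
// Listed specs only; clocks, cache, and architecture aren't captured.
const cards: Record<string, { cudaCores: number; busBits: number; vramGB: number }> = {
  "RTX 3060 Ti": { cudaCores: 4864, busBits: 256, vramGB: 8 },
  "RTX 4060 Ti": { cudaCores: 4352, busBits: 128, vramGB: 8 },
  "RTX 4070": { cudaCores: 5888, busBits: 192, vramGB: 12 },
};

const base = cards["RTX 3060 Ti"];
for (const [name, c] of Object.entries(cards)) {
  const pct = ((100 * c.cudaCores) / base.cudaCores).toFixed(0);
  console.log(
    `${name}: ${c.cudaCores} cores (${pct}% of 3060 Ti), ${c.busBits}-bit bus, ${c.vramGB}GB`
  );
}
// The 4060 Ti fields ~89% of the 3060 Ti's cores on half its bus width.
```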



      If 80% of Nvidia 40-series owners turn on DLSS, what’s going on with the others?

      news.movim.eu / ArsTechnica · Friday, 14 April, 2023 - 18:34

    Buying one of these Nvidia cards is a big commitment, both in dollars and case space. Most people who buy them do turn on DLSS and ray tracing, according to Nvidia. So ... what's going on with the folks who don't? (credit: Andrew Cunningham)

    As part of its push for the RTX 4070, its new $600 entry point into the Ada Lovelace GPU series, Nvidia has some statistics that, depending on how you look at them, are either completely baffling or entirely believable.

    In a blog post and in press materials sent out before the 4070's debut, Nvidia offers stats pulled from "millions of RTX gamers who played RTX capable games" in February 2023. They show that:

    • 83 percent of 40-series gamers "turn RT on" (ray tracing)
    • 56 percent of 30-series gamers
    • 43 percent of 20-series gamers

    As for DLSS, Nvidia's AI-accelerated upscaling and frame-generation tool for games that support it, Nvidia writes that 79 percent of 40 series, 71 percent of 30 series, and 68 percent of 20 series owners turned the feature on.



      2022 in GPUs: The shortage ends, but higher prices seem here to stay

      news.movim.eu / ArsTechnica · Tuesday, 27 December, 2022 - 14:25 · 1 minute

    From left to right and largest to smallest: GeForce RTX 4080 (which is the same physical size as the RTX 4090), Radeon RX 7900 XTX, and Radeon RX 7900 XT. (credit: Andrew Cunningham)

    In 2021, the biggest story about GPUs was that you mostly just couldn't buy them, not without paying scalper-inflated prices on eBay or learning to navigate a maze of stock-tracking websites or Discords.

    The good news is that the stock situation improved a lot in 2022. A cryptocurrency crash and a falloff in PC sales reduced the demand for GPUs, which in turn made them less profitable for scalpers, which in turn improved the stock situation. It's currently possible to visit an online store and buy many GPUs for an amount that at least gets kind-of-sort-of close to their original list price.

    We also saw lots of new GPU launches in 2022. The year started off less-than-great with the launch of 1080p-focused, price-inflated cards like Nvidia's RTX 3050 and AMD's uninspiringly mediocre RX 6500 XT. But by the end of the year, we received Nvidia's hugely expensive but hugely powerful RTX 4090 and RTX 4080 cards, AMD's less-monstrous but still competitive RX 7900 series, and Intel's flawed but price-conscious Arc A770 and A750 cards.
