Consumer-oriented GPUs, better known as gaming GPUs, are distributed worldwide in large numbers. They are not only easy to obtain, but also far cheaper than alternatives such as workstation GPUs, which makes them the natural choice for individual users.
An often overlooked aspect of these cards is their value to the developer community: because they are so accessible, anyone, regardless of location, can buy an AMD Radeon or NVIDIA GeForce GPU and start working with it. The current generations, however, are designed in a way that has lost sight of this role to some extent.
Raja Koduri reiterates this point and argues that technology giants like AMD and Intel need to rethink their approach to consumer GPUs. He stresses that PC developers regard this class of hardware as essential to their work. Because software stacks such as AMD’s ROCm and Intel’s SYCL are being developed in a way that sidelines PC GPUs, these developers could be missing out on a lot. Koduri believes NVIDIA, and increasingly AMD, are in a much better position than Intel in this regard. Ironically, this has hindered adoption of Intel’s consumer GPUs by the developer community, since those same developers want the best of both worlds: strong gaming and strong AI capabilities.
Yes. Developers on PC GPUs are the key enablers to DC GPU success. So all the dev tools need to work flawlessly on PC GPUs. Currently this is largely true with Geforce. Radeons definitely got better these past 6 months and they are showing increased commitment to PC developer .… https://t.co/9r1uhzksZg
— Raja Koduri (@RajaXg) February 18, 2024
Raja Koduri sees the current AI boom as the main reason why GPU manufacturers are concentrating on AI accelerators. Existing and future resources are geared toward serving the data-center AI audience rather than general users. Although workarounds exist, such as the recently surfaced ZLUDA, which allows applications built against NVIDIA’s CUDA libraries to run on the ROCm stack, it is evident that modern software stacks are not as open as their “open-source” label suggests when it comes to running well on all classes of GPUs for individual users.
This really captures my sentiment with @IntelAI. Please, please, please put some effort marketing the @IntelGraphics consumer products to ML enthusiasts—they are the people who will help build up your datacenter market share.https://t.co/07oSmGtCe8
— Eric Hallahan (@EricHallahan) February 16, 2024
Recently, NVIDIA released TensorRT-LLM support for its consumer GPUs, while AMD opened up ROCm support for a specific series of its Radeon cards. For the average gamer this is not a worrying development but a welcome one, as modern AI capabilities and software stacks will be essential in the coming age of AI PCs. Developers, however, may have to rethink using a consumer GPU in the future unless manufacturers change the way the software ecosystem evolves.
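For developers weighing a consumer card, the practical question is simply whether their ML stack sees the GPU at all. The following is a minimal sketch, assuming a PyTorch build with either CUDA or ROCm support is installed; PyTorch exposes both backends through the same torch.cuda interface, with torch.version.hip set only on ROCm builds.

```python
# Minimal sketch: probe which GPU backend a PyTorch install can see on a
# consumer card. Assumes a PyTorch wheel built for either CUDA (GeForce)
# or ROCm (Radeon); both are exposed through the same torch.cuda API.
import torch

def describe_gpu_backend() -> str:
    if not torch.cuda.is_available():
        return "No supported GPU backend found (CPU only)."
    # ROCm builds of PyTorch report a HIP version string; CUDA builds report None here.
    if torch.version.hip:
        backend = f"ROCm/HIP {torch.version.hip}"
    else:
        backend = f"CUDA {torch.version.cuda}"
    device_name = torch.cuda.get_device_name(0)
    return f"{device_name} via {backend}"

if __name__ == "__main__":
    print(describe_gpu_backend())
```

On a GeForce card this would typically report the CUDA version, on an officially supported Radeon the HIP version, while an unsupported consumer Radeon falls through to the CPU-only branch, which illustrates the gap discussed above.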
Source: @RajaXg