Google’s Gemma is optimized for NVIDIA GPUs

Google’s new AI model Gemma was recently unveiled. So far, so exciting. NVIDIA, however, has announced that it worked with Google to optimize the model for NVIDIA GPUs. And that is where it gets interesting.

Google Gemma is a state-of-the-art, lightweight open language model, available in variants with 2 and 7 billion parameters that can run almost anywhere. That lowers costs and speeds up innovation for domain-specific use cases.
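To give a sense of how accessible the smaller variant is, here is a minimal sketch that loads a 2-billion-parameter Gemma checkpoint with the Hugging Face transformers library and runs it on a single NVIDIA GPU. The model ID google/gemma-2b and the prompt are assumptions for illustration, and this is the plain PyTorch path, not the optimized TensorRT-LLM route described below.

```python
# Minimal sketch: run a 2B Gemma checkpoint on a single NVIDIA GPU
# (plain PyTorch/transformers baseline, not the TensorRT-LLM optimized path).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b"  # assumed Hugging Face model ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # small enough to fit into consumer RTX VRAM
).to("cuda")

inputs = tokenizer(
    "Explain retrieval-augmented generation in one sentence.",
    return_tensors="pt",
).to("cuda")

outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```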

The development teams of both companies have worked closely together to further accelerate Gemma, which builds on the same research and technology as Google’s Gemini models, using NVIDIA’s TensorRT-LLM. This open-source library optimizes large language model inference on NVIDIA GPUs in the data center, in the cloud, and on PCs with NVIDIA RTX GPUs. It lets developers target the more than 100 million RTX GPUs already installed in high-performance AI PCs worldwide.
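For orientation, serving Gemma through TensorRT-LLM can look roughly like the sketch below, assuming the high-level Python LLM API that newer TensorRT-LLM releases expose; the model ID and prompt are placeholders, and depending on the installed version the checkpoint may first need to be converted and built into an engine with the scripts shipped alongside the library.

```python
# Rough sketch of the high-level TensorRT-LLM Python API (newer releases);
# the model ID and prompt are placeholders, not values from the article.
from tensorrt_llm import LLM, SamplingParams

# Builds or loads a TensorRT engine for the local GPU from the checkpoint.
llm = LLM(model="google/gemma-2b")

sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
outputs = llm.generate(["What is Gemma?"], sampling_params)

for output in outputs:
    print(output.outputs[0].text)
```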

Gemma can also be run on NVIDIA GPUs in the cloud, including Google Cloud’s A3 instances, which are based on the H100 Tensor Core GPU. Later this year, Google will additionally deploy NVIDIA’s H200 Tensor Core GPUs there, with 141 GB of HBM3e memory delivering 4.8 terabytes per second. Enterprise developers can then draw on NVIDIA’s broad ecosystem, including NVIDIA AI Enterprise with the NeMo framework and TensorRT-LLM, to fine-tune Gemma and deploy the optimized model in their production applications.
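A full NeMo walkthrough is beyond the scope of this news piece, but to give an idea of what “fine-tune Gemma” can involve in practice, the sketch below performs a small LoRA fine-tune with the Hugging Face transformers and peft libraries rather than NeMo; the dataset, hyperparameters and target module names are assumptions chosen purely for illustration.

```python
# Sketch of parameter-efficient fine-tuning for Gemma with LoRA
# (Hugging Face transformers + peft used here in place of the NeMo workflow;
# dataset, hyperparameters and module names are illustrative assumptions).
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_id = "google/gemma-2b"  # assumed model ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Attach small trainable LoRA adapters to the attention projections.
lora = LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM",
                  target_modules=["q_proj", "k_proj", "v_proj", "o_proj"])
model = get_peft_model(model, lora)

# Any small domain-specific text corpus works; this public set is just an example.
dataset = load_dataset("Abirate/english_quotes", split="train[:200]")
dataset = dataset.map(
    lambda x: tokenizer(x["quote"], truncation=True, max_length=128),
    remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gemma-lora", per_device_train_batch_size=2,
                           num_train_epochs=1, learning_rate=2e-4, bf16=True),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("gemma-lora")  # saves only the small adapter weights
```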

Soon, Gemma will also be supported in NVIDIA’s “Chat with RTX” tech demo, which combines Retrieval-Augmented Generation (RAG) with TensorRT-LLM to provide generative AI capabilities on local, RTX-powered Windows PCs. There, users can personalize a chatbot with their own data. Anyone who wants to try it can find the download on NVIDIA’s website.
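Chat with RTX itself is a packaged Windows application, but the retrieval-augmented generation pattern it uses is easy to sketch: find the local text snippets most relevant to a question and prepend them to the prompt before the language model answers. The toy example below illustrates that pattern with a TF-IDF retriever over placeholder documents; it is not Chat with RTX code, and generate_answer is a stand-in for whatever local model (for example Gemma served via TensorRT-LLM) produces the reply.

```python
# Toy illustration of the RAG pattern behind tools like Chat with RTX:
# retrieve relevant local text, then feed it to a local LLM as context.
# The documents and generate_answer() are placeholders, not real components.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [  # stand-ins for the user's local notes and files
    "Gemma is a family of lightweight open language models from Google.",
    "TensorRT-LLM accelerates LLM inference on NVIDIA GPUs.",
    "Chat with RTX runs a personalized chatbot locally on RTX PCs.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the question (TF-IDF cosine)."""
    vectorizer = TfidfVectorizer()
    doc_vecs = vectorizer.fit_transform(documents)
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, doc_vecs)[0]
    top = scores.argsort()[::-1][:k]
    return [documents[i] for i in top]

def generate_answer(prompt: str) -> str:
    """Placeholder for a local model call (e.g. Gemma via TensorRT-LLM)."""
    return f"[model reply to a prompt of {len(prompt)} characters]"

question = "What does Chat with RTX do?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(generate_answer(prompt))
```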

Source: NVIDIA

About the author

Hoang Minh Le
