
NVIDIA brings generative AI to enterprises with cloud services for building large-scale language and visual models

To accelerate enterprise adoption of generative AI, NVIDIA today unveiled a suite of cloud services that enable enterprises to build, refine and deploy custom large language models and generative AI models trained on their own proprietary data and designed for their industry-specific tasks. Getty Images, Morningstar, Quantiphi and Shutterstock are among the companies that will create and use AI models, applications and services with the new NVIDIA AI Foundations services, which span language, images, video and 3D.

Enterprises can leverage the NVIDIA NeMo language service and the NVIDIA Picasso image, video and 3D service to build proprietary, industry-specific, generative AI applications for intelligent chat and customer support, professional content production, digital simulation and more. In addition, NVIDIA today also announced new models for the NVIDIA BioNeMo cloud service for biology.

“Generative AI is driving the rapid adoption of AI and reinventing countless industries,” said Jensen Huang, founder and CEO of NVIDIA. “NVIDIA AI Foundations lets companies customize foundation models with their own data to generate humanity’s most valuable resources – intelligence and creativity.”

Helping enterprises develop custom generative AI applications

The NeMo and Picasso services run on the NVIDIA DGX Cloud, accessible through a browser. Developers can use the models offered on each service through simple application programming interfaces (APIs). Once models are deployed, organizations can run inference workloads at scale using NVIDIA AI Foundations cloud services. Each cloud service includes six elements: pre-trained models, frameworks for data processing, vector databases and personalization, optimized inference engines, APIs, and support from NVIDIA experts to help companies customize models for their unique use cases.
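To make the “simple APIs” point concrete, here is a minimal sketch of what a script-driven inference call against such a hosted service could look like. The endpoint URL, authentication header and response fields are illustrative assumptions only, not the documented NeMo or Picasso API.

# Minimal sketch of calling a hosted LLM inference endpoint over REST.
# The URL, header and payload fields below are hypothetical placeholders,
# not the actual NVIDIA AI Foundations service API.
import requests

API_URL = "https://api.example.com/v1/completions"   # hypothetical endpoint
API_KEY = "YOUR_API_KEY"                              # issued by the service provider

def generate(prompt: str, max_tokens: int = 200) -> str:
    """Send a prompt to the hosted model and return the generated text."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": prompt, "max_tokens": max_tokens},
        timeout=30,
    )
    response.raise_for_status()
    # Assumed response shape: {"text": "..."}
    return response.json()["text"]

if __name__ == "__main__":
    print(generate("Summarize our Q3 support tickets about shipping delays."))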

NeMo service enables companies to quickly adapt foundation language models

The NeMo cloud service enables developers to make large language models (LLMs) more relevant to enterprises by defining focus areas, adding domain-specific knowledge, and teaching functional skills. Models of various sizes available in the service – from 8 billion to 530 billion parameters – are regularly updated with additional training data. This gives organizations a range of options for building applications that meet their requirements for speed, accuracy and task complexity.
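As an illustration of what adapting a foundation model with domain-specific knowledge can mean in practice, the sketch below attaches small trainable LoRA adapters to a frozen base model using the open-source Hugging Face transformers and peft libraries. This is a generic stand-in for the idea, not the NeMo cloud service itself; the model name and hyperparameters are arbitrary.

# Parameter-efficient customization sketch (LoRA) with open-source libraries.
# NOT the NeMo cloud service API, just the underlying idea: keep the large
# pre-trained weights frozen and train only small adapter matrices.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model_name = "gpt2"  # small stand-in for a much larger foundation model

tokenizer = AutoTokenizer.from_pretrained(base_model_name)  # would tokenize the domain corpus
model = AutoModelForCausalLM.from_pretrained(base_model_name)

# Inject low-rank adapters into the attention layers; only these are trained,
# which is what makes customizing very large models practical.
lora_config = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a tiny fraction of weights are trainable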

With the information-retrieval capabilities included in the NeMo service, customers can augment LLMs with their real-time proprietary data. This allows companies to adapt models to enable precise generative AI applications for market intelligence, enterprise search, chatbots, customer service and more. Morningstar, a leading provider of independent investment insights, is also working with NeMo to explore advanced intelligence services.
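The information-retrieval workflow described above is essentially retrieval-augmented generation: relevant proprietary documents are fetched at query time and prepended to the prompt. Below is a minimal sketch of that pattern, with a simple TF-IDF retriever from scikit-learn standing in for the service’s embedding model and vector database.

# Retrieval-augmented generation sketch. TF-IDF stands in for the platform's
# embedding model and vector database; in production those components would
# be provided by the service.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Q3 revenue grew 12% year over year, driven by the cloud segment.",
    "The board approved a new share buyback program in September.",
    "Support ticket volume spiked after the v2.1 firmware release.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, doc_vectors)[0]
    top = scores.argsort()[::-1][:k]
    return [documents[i] for i in top]

query = "How did the cloud business perform last quarter?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
# prompt would now be sent to the hosted LLM, e.g. the hypothetical generate(prompt) above
print(prompt)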

“Large language models give us the ability to collect meaningful data from highly complex structured and unstructured content at greater scale, while prioritizing data quality and speed,” said Shariq Ahmad, head of data collection technology at Morningstar. “Our quality framework includes a human-in-the-loop process that feeds into model tuning to ensure we are producing increasingly high-quality content. Morningstar is using NeMo in its data collection research and development to explore how LLMs can scan and synthesize information from sources such as financial documents to quickly extract market intelligence.”

Quantiphi, an AI-centric digital engineering firm and one of NVIDIA’s service partners, is working with NeMo to develop a modular generative AI solution. The offering, called baioniq, will enable companies to create customized LLMs equipped with up-to-date information to increase the productivity of knowledge workers.

NVIDIA Picasso service accelerates simulation and creative design for image, video, and 3D

NVIDIA Picasso is a cloud service for building and deploying generative AI-powered image, video, and 3D applications with advanced text-to-image, text-to-video, and text-to-3D capabilities to increase productivity for creativity, design, and digital simulation through simple cloud APIs. Software publishers, service providers, and enterprises can use Picasso to train NVIDIA Edify foundation models on their proprietary data and build applications that use natural text prompts to quickly create and customize visual content for hundreds of use cases, such as product design, digital twins, storytelling, and character creation.

To build custom applications, organizations can also start with Picasso’s Edify models, which are pre-trained with fully licensed data. Additionally, they can use Picasso to optimize and run their own generative AI models. Leading visual content companies are already working with NVIDIA to create custom models with Picasso services and increase productivity for creative professionals.
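As with the language service, an application integrating such an image service would typically send a text prompt to a cloud endpoint and receive encoded image data back. The sketch below is a hypothetical illustration; the URL, parameters and base64 response field are assumptions, not the documented Picasso API.

# Hypothetical sketch of a text-to-image request against a hosted generative
# image service. Endpoint, parameters and response format are placeholders.
import base64
import requests

API_URL = "https://api.example.com/v1/images/generate"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"

def text_to_image(prompt: str, out_path: str = "output.png") -> str:
    """Request an image for the prompt and write the decoded PNG to disk."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": prompt, "width": 1024, "height": 1024},
        timeout=120,
    )
    response.raise_for_status()
    # Assumed response shape: {"image_base64": "..."}
    image_bytes = base64.b64decode(response.json()["image_base64"])
    with open(out_path, "wb") as f:
        f.write(image_bytes)
    return out_path

if __name__ == "__main__":
    print(text_to_image("Product render of a matte-black wireless headset, studio lighting"))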

What still reads like the future to many today has long since arrived in the present. The unsettling part is that outsiders will not (be able to) consciously perceive this upheaval. It is also a huge change for companies to leverage third-party innovations to create something new themselves. Let’s hope nobody loses control along the way. You certainly don’t have to be afraid of these techniques, but rather of those who use them. And we will certainly have to question many things more critically in the future. Or does anyone really want to consume only artificially generated texts or images? I certainly don’t. At least not exclusively.

 


About the author

Igor Wallossek

Editor-in-chief and namesake of igor'sLAB, the content successor to Tom's Hardware Germany, whose license was returned in June 2019 in order to better meet the quality demands of web content and the challenges of new media such as YouTube with its own channel.

Computer nerd since 1983, audio freak since 1979 and pretty much open to anything with a plug or battery for over 50 years.

