Google and NVIDIA Unveil PaliGemma and Gemma 2, Revolutionising AI-Powered Applications

NVIDIA and Google have announced three new collaborations to enhance AI-powered applications. They are optimizing two new models, Gemma 2 and PaliGemma, for improved performance and efficiency. Gemma 2 is designed for a broad range of use cases, while PaliGemma is an open-vision language model for tasks like image captioning and object detection. Both models will be offered with NVIDIA NIM inference microservices for easy deployment. Google also announced that RAPIDS cuDF, a GPU dataframe library, is now supported on Google Colab, accelerating data analytics. Lastly, a Firebase Genkit collaboration will allow developers to integrate AI models into their apps.

NVIDIA and Google DeepMind Collaborate on Large Language Model Innovation

NVIDIA and Google have announced a partnership aimed at driving innovation in the large language models that power generative AI. These models, which handle multiple types of data such as text, images, and sound, are becoming increasingly common. However, their development and deployment remain challenging. Developers need a way to quickly try out and evaluate models to determine the best fit for their use case, and then optimize the chosen model for cost-effective, high-performance deployment.

To facilitate the creation of AI-powered applications with world-class performance, NVIDIA and Google have announced three new collaborations. The first of these collaborations involves the optimization of two new models introduced by Google: Gemma 2 and PaliGemma. These models are built from the same research and technology used to create the Gemini models, with each model focusing on a specific area.

Gemma 2 and PaliGemma: The New Models

Gemma 2 is the successor to the Gemma models and is designed for a broad range of use cases. It features a brand-new architecture built for breakthrough performance and efficiency. PaliGemma, on the other hand, is an open vision-language model (VLM) inspired by PaLI-3. Built on open components, including the SigLIP vision model and the Gemma language model, it targets vision-language tasks such as image and short-video captioning, visual question answering, understanding text in images, object detection, and object segmentation, and is designed for class-leading fine-tuning performance across that range of tasks.
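
As a concrete illustration of the kind of vision-language task PaliGemma handles, the minimal sketch below captions a single image using the Hugging Face Transformers library. The checkpoint name, image URL, and prompt format are assumptions for illustration and are not taken from the announcement.

```python
# Minimal sketch: English image captioning with a PaliGemma checkpoint via
# Hugging Face Transformers. The checkpoint name and image URL are assumptions.
import requests
from PIL import Image
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

model_id = "google/paligemma-3b-mix-224"  # assumed checkpoint name
processor = AutoProcessor.from_pretrained(model_id)
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id)

image = Image.open(requests.get("https://example.com/cat.jpg", stream=True).raw)
prompt = "caption en"  # PaliGemma-style task prefix for English captioning

inputs = processor(text=prompt, images=image, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=30)

# Decode only the newly generated tokens, skipping the prompt.
prompt_len = inputs["input_ids"].shape[-1]
print(processor.decode(output[0][prompt_len:], skip_special_tokens=True))
```

The same model and processor calls can be reused for the other tasks listed above by changing the task prefix in the prompt.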

NVIDIA NIM Inference Microservices Support

Both Gemma 2 and PaliGemma will be offered with NVIDIA NIM inference microservices, which are part of the NVIDIA AI Enterprise software platform. This platform simplifies the deployment of AI models at scale. NIM support for the two new models is available from the API catalog, starting with PaliGemma. They will soon be released as containers on NVIDIA NGC and GitHub.
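
For developers who want to try the hosted endpoints before pulling containers, the sketch below sends a chat request to a Gemma 2 model through the API catalog's OpenAI-compatible interface. The base URL, model name, and the NVIDIA_API_KEY environment variable are assumptions for illustration; consult the catalog entry for the exact values.

```python
# Minimal sketch: querying a Gemma 2 endpoint from the NVIDIA API catalog
# through an OpenAI-compatible interface. The base URL, model name, and the
# NVIDIA_API_KEY environment variable are assumptions for illustration.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # assumed catalog endpoint
    api_key=os.environ["NVIDIA_API_KEY"],            # assumed key variable
)

response = client.chat.completions.create(
    model="google/gemma-2-9b-it",  # assumed catalog model name
    messages=[{"role": "user", "content": "Explain what a vision-language model does."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```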

Accelerated Data Analytics on Google Colab

Google also announced that RAPIDS cuDF, an open-source GPU dataframe library, is now supported by default on Google Colab, a popular developer platform for data scientists. With RAPIDS cuDF, developers using Google Colab can speed up exploratory analysis and production data pipelines. RAPIDS cuDF addresses slow operations on large datasets by transparently accelerating pandas code on the GPU where possible and falling back to standard pandas on the CPU where it is not.
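
In a Colab notebook cell, enabling the accelerator is a one-line step before importing pandas, as in the minimal sketch below; the example data is made up for illustration.

```python
# Minimal sketch of cudf.pandas in a Colab notebook cell: load the accelerator
# before importing pandas, then write ordinary pandas code. Operations run on
# the GPU where cuDF supports them and fall back to CPU pandas otherwise.
%load_ext cudf.pandas

import pandas as pd

# Made-up example data for illustration.
df = pd.DataFrame({
    "sensor": ["a", "b", "a", "c", "b"] * 200_000,
    "reading": range(1_000_000),
})

# Standard pandas groupby; with cudf.pandas loaded it executes on the GPU.
print(df.groupby("sensor")["reading"].mean())
```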

Google and NVIDIA also announced a Firebase Genkit collaboration that enables app developers to easily integrate generative AI models, such as the new family of Gemma models, into their web and mobile applications. This collaboration lets developers deliver custom content, provide semantic search, and answer questions. Developers can start workstreams on local RTX GPUs before moving their work seamlessly to Google Cloud infrastructure.

In conclusion, NVIDIA and Google Cloud are collaborating in multiple domains to propel AI forward. From the upcoming Grace Blackwell-powered DGX Cloud platform and JAX framework support, to bringing the NVIDIA NeMo framework to Google Kubernetes Engine, the companies’ full-stack partnership expands the possibilities of what customers can do with AI using NVIDIA technologies on Google Cloud.
