Google Cloud and NVIDIA Unveil AI Infrastructure for Global Customers, Boosting Generative AI Development
Google Cloud and NVIDIA have announced new AI infrastructure and software for customers to build and deploy large models for generative AI and speed up data science workloads. Google Cloud CEO Thomas Kurian and NVIDIA CEO Jensen Huang discussed the partnership, which aims to bring machine learning services to large AI customers worldwide. The new hardware and software integrations use the same NVIDIA technologies used by Google DeepMind and Google research teams. Google’s framework for building large language models, PaxML, is now optimised for NVIDIA accelerated computing. Google DeepMind and other Google researchers are among the first to use PaxML with NVIDIA GPUs for research.

“Our expanded collaboration with Google Cloud will help developers accelerate their work with infrastructure, software and services that supercharge energy efficiency and reduce costs.”

NVIDIA CEO Jensen Huang

Accelerated Computing and Generative AI

The collaboration between Google Cloud and NVIDIA comes at a crucial juncture, where accelerated computing and generative AI are converging to speed innovation at an unprecedented pace. The expanded partnership will help developers accelerate their work with infrastructure, software, and services that improve energy efficiency and reduce costs. Google Cloud has a long history of innovating in AI to foster and accelerate progress for its customers. Many of Google’s products are built and served on NVIDIA GPUs, and many customers are seeking out NVIDIA accelerated computing to power efficient development of large language models (LLMs) to advance generative AI.

NVIDIA Integrations for AI and Data Science Development

Google’s framework for building massive LLMs, PaxML, is now optimised for NVIDIA accelerated computing. Originally designed to span multiple Google TPU accelerator slices, PaxML now allows developers to use NVIDIA H100 and A100 Tensor Core GPUs for advanced and fully configurable experimentation and scale. A GPU-optimized PaxML container is now available in the NVIDIA NGC software catalogue. Google DeepMind and other Google researchers are among the first to use PaxML with NVIDIA GPUs for exploratory research.

Availability of NVIDIA-Optimized Container for PaxML

The NVIDIA-optimized container for PaxML is now available on the NVIDIA NGC container registry. This will be accessible to researchers, startups, and enterprises worldwide that are building the next generation of AI-powered applications.

Google’s Integration of Serverless Spark with NVIDIA GPUs

Google has integrated serverless Spark with NVIDIA GPUs through Google’s Dataproc service. This will help data scientists speed up Apache Spark workloads to prepare data for AI development. These new integrations are the latest in a long history of collaboration between NVIDIA and Google, spanning both hardware and software.

“We’re at an inflection point where accelerated computing and generative AI have come together to speed innovation at an unprecedented pace,”

NVIDIA CEO Jensen Huang

Quick Summary

“Google Cloud has a long history of innovating in AI to foster and speed innovation for our customers,” Kurian said. “Many of Google’s products are built and served on NVIDIA GPUs, and many of our customers are seeking out NVIDIA accelerated computing to power efficient development of LLMs to advance generative AI.”

Google Cloud and NVIDIA have announced a partnership to provide new AI infrastructure and software, enabling customers to build and deploy large-scale generative AI models and expedite data science workloads. Google’s framework for building large language models, PaxML, is now optimised for NVIDIA accelerated computing, aiding developers in advanced experimentation and scaling, while the integration of serverless Spark with NVIDIA GPUs speeds up data preparation for AI development.

  • Google Cloud and NVIDIA have announced a new partnership to provide AI infrastructure and software for customers to build and deploy large-scale models for generative AI and speed up data science workloads.
  • The collaboration aims to bring end-to-end machine learning services to some of the world’s largest AI customers, including the ability to run AI supercomputers with Google Cloud offerings built on NVIDIA technologies.
  • The new hardware and software integrations use the same NVIDIA technologies that have been used by Google DeepMind and Google research teams over the past two years.
  • Google’s framework for building large language models (LLMs), PaxML, is now optimised for NVIDIA accelerated computing. This allows developers to use NVIDIA H100 and A100 Tensor Core GPUs for advanced experimentation and scale.
  • Google DeepMind and other Google researchers are among the first to use PaxML with NVIDIA GPUs for exploratory research.
  • The companies also announced Google’s integration of serverless Spark with NVIDIA GPUs through Google’s Dataproc service, which will help data scientists speed up Apache Spark workloads to prepare data for AI development.
  • The NVIDIA-optimised container for PaxML is now available on the NVIDIA NGC container registry to researchers, startups, and enterprises worldwide.
