A Versatile Family of Models for Various Use Cases
PaLM 2 is not only more capable but also faster and more efficient than previous models. It comes in four sizes: Gecko, Otter, Bison, and Unicorn, making it easy to deploy for a wide range of use cases. Gecko, the smallest model, is lightweight enough to run on mobile devices and fast enough for interactive on-device applications, even when offline. This range of sizes means PaLM 2 can be fine-tuned to support entire classes of products, helping more people in more ways.
Powering Over 25 Google Products and Features
PaLM 2 powers over 25 new products and features announced at Google I/O. Its improved multilingual capabilities enable Bard's expansion to new languages and power a recently announced coding update. Workspace features in Gmail, Google Docs, and Google Sheets tap into PaLM 2's capabilities to help users work more efficiently. Additionally, specialized versions of PaLM 2, such as Med-PaLM 2 and Sec-PaLM, are being developed for medical and cybersecurity use cases, respectively.
Advancements in Medical and Cybersecurity Applications
Med-PaLM 2, trained with medical knowledge by health research teams, can answer questions and summarize insights from dense medical texts. It achieves state-of-the-art results on medical competency benchmarks and is the first large language model to perform at an “expert” level on U.S. Medical Licensing Exam-style questions. Google plans to add multimodal capabilities so it can synthesize information from sources like X-rays and mammograms to improve patient outcomes. Sec-PaLM, meanwhile, is a specialized version of PaLM 2 trained on security use cases that has the potential to transform cybersecurity analysis. Available through Google Cloud, it uses AI to help analyze and explain the behavior of potentially malicious scripts and to better detect threats to people and organizations.
The Future of AI: Gemini and Beyond
Google is already working on Gemini, its next model, designed to be multimodal, highly efficient at tool and API integrations, and built to enable future innovations like memory and planning. Gemini is still in training but is already exhibiting multimodal capabilities not seen in prior models. Once fine-tuned and rigorously tested for safety, Gemini will be available in a range of sizes and capabilities, like PaLM 2, so it can be deployed across different products, applications, and devices for everyone's benefit. Google's Brain and DeepMind research teams are also being combined into a single unit to accelerate progress in AI and responsibly pave the way for the next generation of AI models.