Google DeepMind Launches Project Genie Prototype To Create Model Worlds

Google DeepMind is today launching Project Genie, an experimental research prototype that lets Google AI Ultra subscribers in the U.S. build and explore interactive worlds. Built on Genie 3, the system moves beyond static 3D environments by generating paths and interactions in real time as users navigate and remix creations built from text and image prompts. “Unlike explorable experiences in static 3D snapshots, Genie 3 generates the path ahead in real time as you move and interact with the world,” the company explains. The launch marks a step towards Google DeepMind’s broader ambition of building Artificial General Intelligence (AGI) by simulating the complexities of real-world environments, and opens new avenues for applications ranging from robotics to storytelling.

Genie 3 World Model Enables Real-Time Interactive Environments

The core of Project Genie revolves around three capabilities: world sketching, exploration, and remixing. Users initiate creation by prompting with text and images, building “a living, expanding environment” and defining their mode of exploration, from walking to flying. A feature called “World Sketching” integrates with Nano Banana Pro, allowing image previews and fine-tuning before entering the generated world. Once inside, the system generates the path ahead in real time as users navigate, adjusting the camera based on their actions.

Existing worlds can also be remixed, building on previous prompts or drawing inspiration from a curated gallery. “Building on our model research with trusted testers,” the company says, it is now expanding access to gather user insights. While acknowledging current limitations, including occasional inconsistencies in realism and character control and a 60-second generation limit, the team is actively working on improvements, with a future goal of broader accessibility.

Project Genie Prototype: World Sketching & Character Control

Before entering the generated space, users also define their character’s perspective, first- or third-person, which dictates how the experience unfolds. Project Genie additionally enables “World Remixing,” letting users build upon existing prompts or explore curated worlds for inspiration. The prototype isn’t without limitations, however: generated worlds “might not look completely true-to-life or always adhere closely to prompts or images,” and character control can be imperfect, occasionally suffering from latency. While some features previewed in August, such as promptable events, are not yet implemented, Google DeepMind intends to expand access and continue improving the experience, stating it is “excited to share this prototype…to better understand how people will use world models.”

Core Capabilities: Exploration and World Remixing

Project Genie doesn’t simply render pre-built environments; it actively constructs them around the user in real-time, a key distinction from “explorable experiences in static 3D snapshots.” This dynamic creation hinges on Genie 3, a general-purpose world model developed by Google DeepMind to simulate environmental dynamics and predict how actions impact them—a crucial step toward achieving Artificial General Intelligence (AGI). Central to this immersive capability is “World Sketching,” allowing users to initiate creation with text prompts and images. This feature integrates with Nano Banana Pro, enabling previewing and fine-tuning of the envisioned environment before full immersion.

Users aren’t limited to creating from scratch; existing worlds can be remixed, “building on top of their prompts” for novel interpretations and inspiration. Diego Rivas and Elliott Breece, Product Managers at Google DeepMind, emphasize the potential for diverse applications, ranging from robotics and animation to exploring historical settings. The prototype currently has limitations in realism and character control, but Google aims to expand access and improve these aspects. The initial rollout, beginning January 29, 2026, is limited to Google AI Ultra subscribers in the U.S. aged 18 and over, with plans for broader availability.

Limitations of Genie 3 & Future Development Roadmap

Despite the impressive capabilities of Project Genie, powered by Genie 3, the current prototype has discernible limitations, which Google DeepMind acknowledges as areas for ongoing refinement. Generated worlds may not always look true-to-life or adhere closely to prompts and images. Character control isn’t yet seamless either, with users potentially experiencing “higher latency in control,” impacting real-time responsiveness within the generated worlds. The system is also constrained by a 60-second generation limit, restricting the length of explorable experiences. Several features previewed in August, including “promptable events that change the world as you explore it,” are not yet integrated into this initial release, indicating a phased rollout of functionality.

The team is actively gathering data from Google AI Ultra subscribers in the U.S. (18+) to better understand user behavior and prioritize future development. “Building on the work we have been doing with trusted testers,” Google DeepMind intends to expand access beyond the initial cohort, with plans to reach more territories in the coming months. The ultimate goal, they state, is to “make these experiences and technology accessible to more users,” signifying a commitment to democratizing access to this emerging technology and its potential for both AI research and generative media.

Unlike explorable experiences in static 3D snapshots, Genie 3 generates the path ahead in real time as you move and interact with the world.

Google DeepMind
Quantum News

