MIT and Meta Develop PlatoNeRF: A Shadow-Based 3D Modelling System for Hidden Objects

Researchers from MIT and Meta have developed a computer vision technique called PlatoNeRF that uses shadows to create 3D models of scenes, including areas blocked from view. The method combines lidar technology and machine learning to generate accurate reconstructions of 3D geometry.

This could improve the safety of autonomous vehicles, make AR/VR headsets more efficient, and help warehouse robots work faster. The research was led by Tzofi Klinghoffer, an MIT graduate student, and also involved Ramesh Raskar, an associate professor at MIT, and Rakesh Ranjan, a director of AI research at Meta Reality Labs, among others.

PlatoNeRF: A New Approach to 3D Scene Reconstruction

Researchers from MIT and Meta have developed a novel computer vision technique that could change how autonomous vehicles, AR/VR headsets, and warehouse robots operate. The method, named PlatoNeRF, creates physically accurate 3D models of an entire scene, including areas blocked from view, using images from a single camera position.

The technique uses shadows to determine what lies in obstructed portions of the scene, an approach that could improve the safety of autonomous vehicles and the efficiency of AR/VR headsets.

The name PlatoNeRF is inspired by Plato’s allegory of the cave, a passage from the Greek philosopher’s “Republic” in which prisoners chained in a cave discern the reality of the outside world from shadows cast on the cave wall.

The researchers combined lidar (light detection and ranging) technology with machine learning to generate more accurate reconstructions of 3D geometry than some existing AI techniques. PlatoNeRF also excels at smoothly reconstructing scenes in which shadows are hard to see, such as those with high ambient light or dark backgrounds.

The Science Behind PlatoNeRF

The researchers built on existing approaches using a new sensing modality called single-photon lidar. A lidar maps a 3D scene by emitting pulses of light and measuring how long each pulse takes to bounce back to the sensor.
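The basic time-of-flight principle behind lidar ranging can be sketched in a few lines. This is a minimal illustration of the physics described above, not the authors' implementation; the function name and example timing are assumptions for demonstration.

```python
# Illustrative sketch of single-bounce lidar ranging (not the paper's code).
C = 299_792_458.0  # speed of light in a vacuum, m/s


def depth_from_tof(round_trip_time_s: float) -> float:
    """Distance to a surface from a direct (single-bounce) lidar return.

    The pulse travels to the surface and back, so the one-way
    distance is half the total path length traveled at light speed.
    """
    return C * round_trip_time_s / 2.0


# A return arriving after roughly 66.7 nanoseconds corresponds
# to a surface about 10 meters away.
print(depth_from_tof(66.7e-9))
```

Single-photon lidar refines this picture by time-stamping individual photon arrivals, which is what makes the much weaker second-bounce returns described below measurable at all.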

Because single-photon lidars can detect individual photons, they provide higher-resolution data. The researchers used a single-photon lidar to illuminate a target point in the scene. Some light bounces off that point and returns directly to the sensor. However, most of the light scatters and bounces off other objects before returning to the sensor. PlatoNeRF relies on these second bounces of light.

By calculating how long it takes light to bounce twice, PlatoNeRF captures additional scene information, including depth. The second bounce also carries information about shadows. The system traces the secondary rays of light, those that bounce off the target point to other points in the scene, to determine which points lie in shadow due to an absence of light. Based on the location of these shadows, PlatoNeRF can infer the geometry of hidden objects.
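The two-bounce geometry above can be sketched concretely. In this minimal, assumed setup (point coordinates, a co-located emitter and sensor, and the tolerance are all illustrative, not from the paper), the travel time for a sensor-target-secondary-sensor path follows from the path length, and a candidate point whose predicted return never shows up in the measurements can be flagged as shadowed.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s


def two_bounce_time(sensor, target, secondary):
    """Travel time (s) for the path sensor -> target -> secondary -> sensor.

    Illustrative geometry only; PlatoNeRF trains a neural model on
    such multibounce measurements rather than checking them directly.
    """
    sensor, target, secondary = map(np.asarray, (sensor, target, secondary))
    path_length = (np.linalg.norm(target - sensor)
                   + np.linalg.norm(secondary - target)
                   + np.linalg.norm(sensor - secondary))
    return path_length / C


def in_shadow(measured_times, expected_time, tol=1e-10):
    """A candidate point is shadowed from the illuminated target if no
    measured return arrives near the time its geometry predicts."""
    return not any(abs(t - expected_time) < tol for t in measured_times)


# Sensor at the origin, illuminated target 5 m ahead, candidate point
# 2 m to the side of the target.
t = two_bounce_time([0, 0, 0], [0, 0, 5], [2, 0, 5])
print(in_shadow([], t))   # no return at the predicted time: shadowed
print(in_shadow([t], t))  # return observed: visible from the target
```

Repeating this test over many candidate points carves out the shadowed regions whose boundaries constrain the hidden geometry.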

The Role of Machine Learning in PlatoNeRF

The crucial component of PlatoNeRF is the combination of multibounce lidar with a special type of machine-learning model known as a neural radiance field (NeRF). A NeRF encodes the geometry of a scene into the weights of a neural network, which gives the model a strong ability to interpolate, or estimate, novel views of a scene. Combined with multibounce lidar, this interpolation ability yields highly accurate scene reconstructions.
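The core NeRF idea, a network whose weights implicitly store scene geometry and can be queried at any 3D point, can be illustrated with a tiny untrained model. The architecture, layer sizes, and class name below are assumptions for demonstration only, far smaller than a real NeRF and unrelated to PlatoNeRF's actual network.

```python
import numpy as np


def positional_encoding(x, num_freqs=4):
    """Lift coordinates with sin/cos at octave frequencies, as NeRFs do,
    so the network can represent high-frequency geometry."""
    feats = [x]
    for k in range(num_freqs):
        feats.append(np.sin((2.0 ** k) * np.pi * x))
        feats.append(np.cos((2.0 ** k) * np.pi * x))
    return np.concatenate(feats, axis=-1)


class TinyDensityField:
    """Toy NeRF-style field: weights map a 3D point to a density value."""

    def __init__(self, hidden=32, num_freqs=4, seed=0):
        rng = np.random.default_rng(seed)
        in_dim = 3 * (1 + 2 * num_freqs)  # raw coords + sin/cos features
        self.num_freqs = num_freqs
        self.w1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.w2 = rng.normal(0.0, 0.1, (hidden, 1))

    def density(self, points):
        """Query density at an (N, 3) array of points; a softplus
        keeps the output non-negative, as density must be."""
        h = np.maximum(positional_encoding(points, self.num_freqs) @ self.w1, 0.0)
        return np.log1p(np.exp(h @ self.w2))


field = TinyDensityField()
print(field.density(np.zeros((2, 3))).shape)  # (2, 1)
```

Training would fit these weights so that rendered measurements match the observed ones; in PlatoNeRF the supervision comes from multibounce lidar returns rather than color images alone.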

The researchers compared PlatoNeRF to two standard alternative methods, one that uses only lidar and another that uses only a NeRF with a color image. They found that their method outperformed both techniques, especially when the lidar sensor had lower resolution. This makes the approach more practical to deploy in the real world, where lower-resolution sensors are common in commercial devices.

Future Directions for PlatoNeRF

The researchers are interested in exploring how tracking more than two bounces of light could improve scene reconstructions. They also want to apply more deep learning techniques and combine PlatoNeRF with color image measurements to capture texture information. The work shows how clever algorithms can enable extraordinary capabilities when combined with ordinary sensors, including the lidar systems that many of us now carry in our pockets.

Quantum News

As the Official Quantum Dog (or hound), my role is to dig out the latest nuggets of quantum goodness. There is so much happening right now in technology, whether AI or the march of the robots, but quantum occupies a special space. Quite literally a special space: a Hilbert space, in fact! Here I try to provide some of the news that might be considered breaking news in the quantum computing space.
