Researchers at the University of Maryland are making strides in the field of 3D scene reconstruction using eye reflections. Their work builds on Neural Radiance Fields (NeRF), an AI technology that can reconstruct environments from 2D photos. While the eye-reflection approach is still in its early stages of development, it offers a glimpse into a technology that has the potential to reveal an environment from a series of simple portrait photos.
To achieve this, the team captured the subtle reflections of light in human eyes across consecutive images from a single sensor. They started with high-resolution images taken from a fixed camera position, capturing a person looking towards the camera. They then isolated the reflections in the eyes and estimated where the eyes were looking in each photo.
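The geometry behind this step is the ordinary law of reflection: treating the cornea as a roughly spherical mirror, each camera ray that hits the eye bounces back out into the scene, and those bounced rays are what get traced into the radiance field. The sketch below illustrates that geometry only; the function names are illustrative, the spherical-cornea model is a simplification, and the 7.8 mm radius is just the commonly cited average corneal curvature, not a value from the paper.

```python
import numpy as np

def reflect(d, n):
    """Reflect ray direction d about unit surface normal n (law of reflection)."""
    n = n / np.linalg.norm(n)
    return d - 2.0 * np.dot(d, n) * n

def corneal_reflection(ray_origin, ray_dir, eye_center, cornea_radius=7.8e-3):
    """Intersect a camera ray with a spherical cornea and return the hit point
    plus the reflected ray direction that would be traced into the scene.
    Returns None if the ray misses the eye. Units are meters."""
    d = ray_dir / np.linalg.norm(ray_dir)
    oc = ray_origin - eye_center
    # quadratic for |oc + t*d| = cornea_radius (leading coefficient is 1)
    b = 2.0 * np.dot(d, oc)
    c = np.dot(oc, oc) - cornea_radius ** 2
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # ray misses the cornea
    t = (-b - np.sqrt(disc)) / 2.0  # nearest intersection
    hit = ray_origin + t * d
    normal = (hit - eye_center) / cornea_radius
    return hit, reflect(d, normal)
```

A camera looking straight at the eye's center, for example, gets its ray reflected straight back at itself, while off-axis rays fan out across the surrounding scene, which is why a small eye can mirror a wide field of view.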
The results of their experiments show a reasonably discernible reconstruction of the person’s environment in a controlled setting. The team also generated a more impressive dreamlike scene using a synthetic eye. However, when they attempted to model eye reflections from Miley Cyrus and Lady Gaga music videos, the resulting reconstructions were vague blobs that the researchers could only guess were an LED grid and a camera on a tripod. This illustrates the current limitations of the technology when applied to real-world scenarios.
The researchers overcame several obstacles to reconstruct even these crude, fuzzy scenes. The cornea introduces inherent noise, making it challenging to separate the reflected light from the complex texture of the iris. To address this, the team introduced cornea pose optimization and iris texture decomposition during training, along with a radial texture regularization loss that encourages smoother iris textures and sharpens the recovered reflected scenery.
Despite these advancements, significant barriers still exist. The current real-world results were obtained in a laboratory setup with controlled conditions, such as zooming in on a person’s face, using area lights to illuminate the scene, and deliberate head movement. Unconstrained settings like video conferencing with natural head movement present challenges due to lower sensor resolution, reduced dynamic range, and motion blur. Additionally, the team acknowledges that their assumptions about iris texture may be too simplistic to apply broadly, as real eyes rotate more widely than in the controlled setting of their experiments.
Nevertheless, the team considers their progress a milestone that can inspire future breakthroughs. They hope to encourage further exploration of unexpected visual signals that can reveal information about the world around us, expanding the horizons of 3D scene reconstruction. Although more advanced versions of this technology could raise privacy concerns, the current version can only vaguely make out objects like a Kirby doll under the most ideal conditions.
In conclusion, researchers at the University of Maryland are pushing the boundaries of 3D scene reconstruction by exploiting eye reflections. Building on Neural Radiance Fields, their work reconstructs environments from 2D photos by analyzing the subtle reflections of light captured in human eyes. The technology is still in its early stages, and challenges such as inherent corneal noise and unconstrained real-world conditions remain, but the team's promising results in controlled settings mark an important step toward exploring unexpected visual signals for a broader understanding of the world around us.