NVIDIA Instant NeRF: NVIDIA Research Turns 2D Photos Into 3D Scenes in the Blink of an AI

Post any material related to the future technology of laser scanning.
Post Reply
User avatar
Jason Warren
Posts: 4007
Joined: Thu Aug 16, 2007 9:21 am
Full Name: Jason Warren
Company Details: Laser Scanning Forum Ltd
Company Position Title: Co-Founder
Country: UK
Skype Name: jason_warren
Linkedin Profile: No
Location: Retford, UK
Has thanked: 280 times
Been thanked: 168 times

NVIDIA Instant NeRF: NVIDIA Research Turns 2D Photos Into 3D Scenes in the Blink of an AI

Post by Jason Warren »


NVIDIA Developer

When the first instant photo was taken 75 years ago with a Polaroid camera, it was groundbreaking to rapidly capture the 3D world in a realistic 2D image. Today, AI researchers are working on the opposite: turning a collection of still images into a digital 3D scene in a matter of seconds with the implementation of neural radiance fields (NeRFs).


#GTC22 #AI #neuralradiance #neuralgraphics #rendering #deeplearning
NeRF, Deep Learning, Neural Radiance Fields, Neural Graphics, Rendering

Instantly Create Infinite Viewpoints from NeRF!



In this video we took a series of shots using a @DJI Mavic 2 Pro and generated a NeRF visualization using @NVIDIA Developer's Instant NGP NeRFs.

Image Acquisition Time: 10 minutes
Camera Pose Estimation: 5 minutes
NeRF Training: 30 Seconds

About EveryPoint:
EveryPoint is a platform that software developers can use to model and understand spaces and objects from imagery. Currently, you can use the service to generate models from iPhone captures, drone imagery, and installed cameras.

EveryPoint is the technology powering https://www.stockpilereports.com.
Jason Warren

Dedicated to 3D Laser Scanning
V.I.P Member
Posts: 1128
Joined: Mon Jan 04, 2010 7:51 pm
Full Name: Jed Frechette
Company Details: Lidar Guys
Company Position Title: CEO and Lidar Supervisor
Country: USA
Linkedin Profile: Yes
Location: Albuquerque, NM
Has thanked: 43 times
Been thanked: 143 times

Re: NVIDIA Instant NeRF: NVIDIA Research Turns 2D Photos Into 3D Scenes in the Blink of an AI

Post by jedfrechette »

NeRFs are interesting; also see the much larger-scale results Waymo has published:


From my (very) limited understanding, though, it is a little hard to see the application for most of the types of things users on this forum are interested in. As far as I can tell, once you've trained the network, you essentially have a black-box function that, given a camera pose, will render a new image for you from that viewpoint.
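To make that "black-box function" idea concrete, here is a minimal toy sketch of the volume-rendering step a NeRF performs at render time: march along a camera ray, query the learned field for density and colour at each sample, and alpha-composite the results. This is not NVIDIA's Instant NGP code; `toy_field` is a hypothetical stand-in (a hard-coded red sphere) for the trained network, just to show the mechanics.

```python
import numpy as np

def toy_field(points):
    """Stand-in for a trained NeRF MLP: returns (density, rgb) per 3D point.
    Here, a uniform red unit sphere; a real NeRF *learns* this mapping."""
    dist = np.linalg.norm(points, axis=-1)
    sigma = np.where(dist < 1.0, 5.0, 0.0)            # volume density
    rgb = np.tile([1.0, 0.0, 0.0], (len(points), 1))  # constant red colour
    return sigma, rgb

def render_ray(origin, direction, near=0.0, far=4.0, n_samples=64):
    """Volume-render one ray by alpha-compositing samples along it."""
    t = np.linspace(near, far, n_samples)
    pts = origin + t[:, None] * direction
    sigma, rgb = toy_field(pts)
    delta = np.diff(t, append=far)                    # segment lengths
    alpha = 1.0 - np.exp(-sigma * delta)              # per-sample opacity
    # Transmittance: how much light survives to reach each sample.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = trans * alpha
    return (weights[:, None] * rgb).sum(axis=0)       # composited pixel colour

# A ray looking down +z from in front of the sphere hits red.
color = render_ray(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]))
```

The key point for this discussion: the only thing the function ever exposes is a rendered colour per ray; the "geometry" lives implicitly in the network weights, which is why pulling a survey-grade point cloud or mesh out of it is not straightforward.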

A NeRF isn't reconstructing the scene in the way most of us tend to think of it. There is no underlying physically based representation of the geometry, lighting, or material properties of the objects in the scene (like you might get from photogrammetry), and there's no obvious way to extract that kind of information from the results. Essentially, the NeRF function is just saying, "I don't have any data for these pixels, but based on these other images I've seen, I think they should look like this..."

I think the second video you linked from EveryPoint is actually a really good example of the limitations of this approach. The areas of the renders where the NeRF has data from actual photos look reasonably plausible, but as soon as the new camera starts showing areas that weren't sampled very well, the NeRF is just making things up that have no relation to the physical scene.
Post Reply

Return to “Future Technology”