The NeRF Method for Creating 3D Models from 2D Photographs

Advances in creating 3D models from 2D photographs are getting downright amazing. This month a team of computer vision researchers from UC Berkeley, UC San Diego, and Google Research showed off their NeRF technique (that's Neural Radiance Fields) for "view synthesis" on a variety of objects captured as 2D images, and the level of detail extracted is astonishing.

Their research paper is here, and they've posted the code to GitHub.
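The core idea behind NeRF is simple to state: a neural network maps a 3D position (plus viewing direction) to a color and a volume density, and a pixel is rendered by sampling that field along the camera ray and compositing the samples with classic volume rendering. The sketch below is a toy NumPy illustration of that data flow, not the authors' implementation; the MLP here has random weights, the viewing-direction input is omitted for brevity, and all names (`toy_field`, `volume_render`, etc.) are hypothetical.

```python
import numpy as np

def positional_encoding(x, num_freqs=10):
    """Encode each coordinate as [sin(2^k * pi * x), cos(2^k * pi * x)], k = 0..num_freqs-1."""
    freqs = (2.0 ** np.arange(num_freqs)) * np.pi
    angles = x[..., None] * freqs                          # (..., dim, num_freqs)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*x.shape[:-1], -1)                  # (..., dim * 2 * num_freqs)

def volume_render(rgb, sigma, deltas):
    """Composite per-sample colors/densities along one ray into a single pixel color."""
    alpha = 1.0 - np.exp(-sigma * deltas)                  # opacity of each ray segment
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1] + 1e-10]))  # transmittance
    weights = trans * alpha
    return (weights[:, None] * rgb).sum(axis=0)

# Toy stand-in for the trained radiance field F_theta(x) -> (rgb, sigma);
# in the real method this network is optimized from the input photographs.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(60, 64))
W2 = rng.normal(size=(64, 4))

def toy_field(points):
    h = np.tanh(positional_encoding(points) @ W1)
    out = h @ W2
    rgb = 1.0 / (1.0 + np.exp(-out[:, :3]))                # colors in [0, 1]
    sigma = np.log1p(np.exp(out[:, 3]))                    # non-negative density
    return rgb, sigma

# Render one camera ray: sample 3D points along it, query the field, composite.
origin, direction = np.zeros(3), np.array([0.0, 0.0, 1.0])
t = np.linspace(2.0, 6.0, 64)                              # depths along the ray
points = origin + t[:, None] * direction
rgb, sigma = toy_field(points)
deltas = np.append(np.diff(t), 1e10)                       # spacing between samples
print("rendered pixel color:", volume_render(rgb, sigma, deltas))
```

Repeating this for every pixel of a virtual camera gives a rendered image, and training simply compares those rendered pixels against the real photographs.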

See Also:

We’re Getting Closer to Creating 3D Models from Single 2D Photographs
