rxven. Posted June 19, 2023

Neural radiance fields (NeRFs) are advanced machine learning techniques that can generate three-dimensional (3D) representations of objects or environments from two-dimensional (2D) images. Because these techniques can model complex real-world environments realistically and in detail, they could greatly support robotics research.

"Recently, members of my lab, the Stanford Multi-robot Systems Lab, have been excited about exploring applications of Neural Radiance Fields (NeRFs) in robotics, but we found that right now there isn't an easy way to use these methods with an actual robot, so it's impossible to do any real experiments with them," Javier Yu, the first author of the paper, told Tech Xplore. "Since the tools didn't exist, we decided to build them ourselves, and out of that engineering push to see how NeRFs work on robots we got a nice tool that we think will be useful to a lot of folks in the robotics community."

NeRFs are sophisticated techniques based on artificial neural networks that were first introduced by the computer graphics research community. They essentially create detailed maps of the world by training a neural network to reconstruct the 3D geometry and color of a scene captured in photographs or 2D images.

"The problem of mapping from images is one that we in the robotics community have been working on for a long time, and NeRFs offer a new perspective on how to approach it," Yu explained. "Typically, NeRFs are trained in an offline fashion where all of the images are gathered ahead of time, and then the NeRF of the scene is trained all at once. In robotics, however, we want to use the NeRF directly for tasks like navigation, and so the NeRF is not useful if we only get it when we arrive at our destination. Instead, we want to build the NeRF incrementally (online) as the robot explores its environment. This is exactly the problem that NerfBridge solves."
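To make the core idea concrete (this is a generic illustration of NeRF-style rendering, not code from NerfBridge): a NeRF represents a scene as a learned function from 3D position and view direction to color and volume density, and renders an image by compositing density-weighted color samples along each camera ray. A minimal numpy sketch of that compositing step, using made-up sample values along a single hypothetical ray:

```python
import numpy as np

def composite_ray(colors, densities, deltas):
    """Alpha-composite color/density samples along one camera ray (NeRF-style).

    colors:    (N, 3) RGB at each sample point, values in [0, 1]
    densities: (N,)   volume density (sigma) at each sample
    deltas:    (N,)   distance between consecutive samples
    Returns the rendered RGB color for the ray as a (3,) array.
    """
    # Opacity of each segment: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance: probability the ray reaches sample i unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    # Each sample contributes in proportion to (reached) * (absorbed here)
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)

# Hypothetical samples along one ray: a faint red sample in front of a
# nearly opaque green one
colors = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])
densities = np.array([0.5, 50.0])
deltas = np.array([0.1, 0.1])
print(composite_ray(colors, densities, deltas))
```

In a full NeRF, the colors and densities come from a neural network queried at sampled points, and training minimizes the difference between rendered rays and the pixels of the input images; an online system like the one described above would feed newly captured images into that training loop as the robot moves.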