PhD position on Deep View Synthesis for VR Video

Christian Richardt, University of Bath

The goal of this project is to capture and reconstruct the visual appearance of dynamic real-world environments to enable more immersive virtual reality video experiences.

State-of-the-art VR video approaches (e.g. Anderson et al., 2016) produce stereoscopic 360° video, which comprises separate 360° videos for the left and right eye (like 3D movies, but in 360°). These videos can, for example, be viewed on YouTube using a VR headset such as Google Cardboard or Daydream. Unfortunately, such videos only allow viewers to look in different directions; they do not respond to head motion such as moving left/right, forwards/backwards or up/down. Truly immersive VR video, on the other hand, requires freedom of motion in all six degrees of freedom (‘6-DoF’), so that viewers see the correct view of an environment regardless of where they are (3 DoF) and where they are looking (+3 DoF).
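To make the 3-DoF vs. 6-DoF distinction concrete, the sketch below (purely illustrative; the class and function names are hypothetical, not part of any project codebase) shows a 6-DoF viewer pose and how a renderer would use it. An orientation-only 360° player effectively ignores the translational part, whereas a 6-DoF renderer applies both, which is what produces motion parallax when the head moves.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ViewerPose:
    """A 6-DoF head pose: where the viewer is, and where they are looking."""
    position: np.ndarray     # 3-DoF translation (x, y, z), e.g. in metres
    orientation: np.ndarray  # 3-DoF rotation, stored as a 3x3 rotation matrix

def view_matrix(pose: ViewerPose, use_translation: bool = True) -> np.ndarray:
    """World-to-camera transform for rendering the view at this pose.

    With use_translation=False this behaves like a 3-DoF (stereoscopic 360°)
    player that only reacts to head rotation; with use_translation=True it
    also reacts to head position, i.e. full 6-DoF viewing.
    """
    view = np.eye(4)
    view[:3, :3] = pose.orientation.T
    if use_translation:
        view[:3, 3] = -pose.orientation.T @ pose.position
    return view
```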

This project aims to develop novel-view synthesis techniques using deep learning that can produce high-quality, temporally coherent, time-varying VR video of dynamic real-world environments from one or more standard or 360° video cameras. In particular, the goal is to convincingly reconstruct the visual dynamics of the real world, such as moving people, animals or plants, so that the reconstructed dynamic geometry can serve as the foundation for a novel video-based rendering approach that synthesises visually plausible novel views with six degrees of freedom for the specific head position and orientation of a viewer in VR. This experience will provide correct motion parallax and depth perception to the viewer (as in Luo et al., 2018), giving a strong sense of realism and immersion.
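A classical building block behind such video-based rendering is depth-based reprojection: given per-pixel depth for a captured frame, each pixel can be warped into the viewer's current viewpoint, which is what yields motion parallax. The sketch below is a generic outline under simple pinhole-camera assumptions, not the project's actual method; a learned approach would additionally refine the depth, handle dynamic content over time, and fill disoccluded regions.

```python
import numpy as np

def reproject(depth, K, R_src2dst, t_src2dst):
    """Forward-warp source-view pixels into a novel (destination) view.

    depth      : (H, W) per-pixel depth of the source view
    K          : (3, 3) camera intrinsics (assumed shared by both views)
    R_src2dst  : (3, 3) rotation from source to destination camera
    t_src2dst  : (3,)   translation from source to destination camera

    Returns an (H, W, 2) array of pixel coordinates where each source pixel
    lands in the novel view; a renderer would splat or sample colours there.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x HW

    # Back-project source pixels into 3D using the estimated depth.
    rays = np.linalg.inv(K) @ pix              # 3 x HW, rays through each pixel
    points = rays * depth.reshape(1, -1)       # 3 x HW, points in source camera

    # Transform into the novel camera and project back to the image plane.
    points_dst = R_src2dst @ points + t_src2dst[:, None]
    proj = K @ points_dst
    return (proj[:2] / proj[2:]).T.reshape(H, W, 2)
```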

To find out more about this position, the application procedure, the project and/or our group, please contact Christian Richardt (c.richardt@bath.ac.uk).