
Immersive Media Technologies and AR/VR

Contact: Prof. Dr. Sebastian Knorr

Contact details:

Tel.: +49 3641 205 739

E-Mail: sebastian.knorr@eah-jena.de

Web: Homepage

Overview

360-degree video, also called live-action virtual reality (VR), is one of the latest and most powerful trends in immersive media, with increasing potential for the coming decades. In particular, head-mounted display (HMD) technology such as the HTC Vive, Oculus Rift and Samsung Gear VR is maturing and entering professional and consumer markets. On the capture side, devices such as Facebook's Surround 360 camera, the Nokia Ozo and the Google Odyssey are among the latest technologies for capturing 360-degree video in stereoscopic 3D (S3D).

However, capturing 360-degree video is not an easy task, as many physical limitations need to be overcome, especially when capturing and post-processing in S3D. In general, these limitations result in artifacts which cause visual discomfort when the content is watched with an HMD. The artifacts can be divided into three categories: binocular rivalry issues, conflicts of depth cues, and artifacts which occur in both monocular and stereoscopic 360-degree content production. Issues in the first two categories have been investigated for standard S3D content, e.g. for cinema screens and 3D TV. The third category consists of typical artifacts which only occur in the multi-camera systems used for panorama capturing. As native S3D 360-degree video production is still very error-prone, especially with respect to binocular rivalry issues, many high-end S3D productions are shot in monoscopic 360-degree video and post-converted to S3D.

Within the project QualityVR, we are working on video analysis tools to detect, assess and partly correct artifacts which occur in stereoscopic 360-degree video production, in particular conflicts of depth cues and binocular rivalry issues.
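To give a flavor of what such an analysis tool can look for, the sketch below flags vertical disparity between the two eye views, one common binocular rivalry issue. It is a minimal illustration using OpenCV feature matching; the file names, the choice of detector and the 2-pixel threshold are assumptions for illustration, not the project's actual pipeline.

```python
# Minimal sketch: flag vertical disparity (a binocular rivalry cue)
# between the left and right views of a stereoscopic 360-degree frame.
# File names, detector and threshold are illustrative assumptions.
import cv2
import numpy as np

def vertical_disparity_check(left_path, right_path, max_px=2.0):
    left = cv2.imread(left_path, cv2.IMREAD_GRAYSCALE)
    right = cv2.imread(right_path, cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=2000)
    kp_l, des_l = orb.detectAndCompute(left, None)
    kp_r, des_r = orb.detectAndCompute(right, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_l, des_r)

    # Vertical offset of each matched feature pair; in a well-aligned
    # stereo pair this should be close to zero everywhere.
    dy = np.array([kp_l[m.queryIdx].pt[1] - kp_r[m.trainIdx].pt[1]
                   for m in matches])
    median_dy = float(np.median(np.abs(dy)))
    return median_dy, median_dy > max_px

median_dy, flagged = vertical_disparity_check("left_eye.png", "right_eye.png")
print(f"median |vertical disparity|: {median_dy:.2f} px, flagged: {flagged}")
```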

Contact: Prof. Dr. Sebastian Knorr

Overview

Methods of storytelling in cinema have well-established conventions that have been built up over the course of its history and the development of the format. In 360° film, many of the techniques that form part of this cinematic language or visual narrative are not easily applied, or are not applicable at all, due to the nature of the format, i.e. the content is not contained within the borders of a screen. In this paper, we analyze how end-users view 360° video in the presence of directional cues and evaluate whether they are able to follow the actual story of narrative 360° films. We first let filmmakers create an intended scan-path, the so-called director's cut, by setting position markers in the equirectangular representation of the omnidirectional content for eight short 360° films. Alongside this, the filmmakers provided additional information regarding directional cues and plot points. Then, we performed a subjective test with 20 participants watching the films with a head-mounted display and recorded the center position of the viewports. The resulting scan-paths of the participants are then compared against the director's cut using different scan-path similarity measures. In order to better visualize the similarity between the scan-paths, we introduce a new metric which measures and visualizes the viewport overlap between the participants' scan-paths and the director's cut. Finally, the entire dataset, i.e. the director's cuts including the directional cues and plot points as well as the scan-paths of the test subjects, is publicly available with this paper.
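To illustrate the idea behind a viewport overlap measure, the sketch below compares two scan-paths given as per-frame viewport centers in equirectangular coordinates. It is a simplified proxy, assuming circular viewports and a 90° field of view; the paper's actual metric and data format may differ.

```python
# Simplified proxy for viewport overlap between two scan-paths, given
# as per-frame viewport centers (longitude, latitude in degrees).
# Treats viewports as circular and maps the angular distance between
# centers, relative to the field of view, to an overlap score.
import numpy as np

def angular_distance(lon1, lat1, lon2, lat2):
    """Great-circle angle (degrees) between two viewport centers."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dlon = np.radians(lon1 - lon2)
    cos_d = np.sin(p1) * np.sin(p2) + np.cos(p1) * np.cos(p2) * np.cos(dlon)
    return np.degrees(np.arccos(np.clip(cos_d, -1.0, 1.0)))

def viewport_overlap(path_a, path_b, fov=90.0):
    """Per-frame overlap in [0, 1]: 1 = identical centers,
    0 = centers further apart than the field of view."""
    d = angular_distance(path_a[:, 0], path_a[:, 1],
                         path_b[:, 0], path_b[:, 1])
    return np.clip(1.0 - d / fov, 0.0, 1.0)

# Example: a participant drifting away from the director's cut.
director = np.array([[0.0, 0.0], [10.0, 0.0], [20.0, 0.0]])
viewer = np.array([[0.0, 0.0], [40.0, 5.0], [120.0, 10.0]])
print(viewport_overlap(viewer, director))  # approx. [1.0, 0.66, 0.0]
```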

Downloads

Dataset

CVMP Paper

Contact: Prof. Dr. Sebastian Knorr

Overview

We introduce a novel interactive depth map creation approach for image sequences which uses depth scribbles as input at user-defined keyframes. These scribbled depth values are then propagated within the keyframes and across the entire sequence using a 3-dimensional geodesic distance transform (3D-GDT). To further improve the depth estimation of the intermediate frames, we make use of a convolutional neural network (CNN) in an unconventional manner. Our process is based on online learning, which allows us to train a disposable network for each sequence individually, using the user-generated depth at keyframes along with the corresponding RGB images as training pairs. Thus, we actually take advantage of one of the most common issues in deep learning: over-fitting. Furthermore, we integrated this approach into a professional interactive depth map creation application and compared our results against the state of the art in interactive depth map creation.
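The sketch below illustrates the per-sequence online learning idea in PyTorch: a small, disposable CNN is deliberately over-fitted on the keyframe (RGB, depth) training pairs and then used to predict depth for the intermediate frames. The network architecture, loss and iteration count are assumptions for illustration, not the published design.

```python
# Illustrative sketch of per-sequence "online learning": over-fit a
# small, disposable CNN on one sequence's keyframe (RGB, depth) pairs,
# then predict depth for the in-between frames of the same sequence.
import torch
import torch.nn as nn

class TinyDepthNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),  # depth in [0, 1]
        )

    def forward(self, rgb):
        return self.net(rgb)

def fit_sequence(keyframe_rgb, keyframe_depth, iters=500):
    """keyframe_rgb: (K, 3, H, W), keyframe_depth: (K, 1, H, W)."""
    model = TinyDepthNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.L1Loss()
    for _ in range(iters):
        opt.zero_grad()
        loss = loss_fn(model(keyframe_rgb), keyframe_depth)
        loss.backward()
        opt.step()
    return model  # discarded after this sequence

# model = fit_sequence(rgb_keyframes, depth_keyframes)
# depth_mid = model(intermediate_rgb)  # depth for in-between frames
```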

Paper

DeepStereoBrush: Interactive Depth Map Creation

Contact: Prof. Dr. Sebastian Knorr

Overview

Light fields capture all light rays passing through a given volume of space. Compared to traditional 2D imaging systems, which capture only the spatial intensity of light rays, 4D light fields also contain the angular direction of the rays. This additional information enables multiple applications, such as reconstructing the 3D geometry of a scene, creating new images from virtual points of view, or changing the focus of an image after it has been captured. Light fields are also a growing topic of interest in the VR/AR community.
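As an example of the post-capture refocusing mentioned above, the following sketch implements classic shift-and-add refocusing over a 4D light field. The array layout and the slope parameterization are assumptions for illustration.

```python
# Shift-and-add refocusing over a 4D light field L[u, v, y, x]
# (angular coordinates u, v; spatial coordinates y, x). The slope
# parameter selects which depth plane appears in focus.
import numpy as np

def refocus(lightfield, slope):
    """lightfield: float array (U, V, Y, X). Returns a (Y, X) image
    refocused on the plane selected by `slope` (pixels of shift per
    unit of angular offset from the central view)."""
    U, V, Y, X = lightfield.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((Y, X), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # Shift each sub-aperture view toward the chosen focal
            # plane, then average; misfocused depths blur out.
            # np.roll wraps at the borders, acceptable for a sketch.
            dy = int(round(slope * (u - cu)))
            dx = int(round(slope * (v - cv)))
            out += np.roll(lightfield[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)

# focused_near = refocus(L, slope=+1.5)   # hypothetical slope values
# focused_far  = refocus(L, slope=-0.5)
```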

This project aims to capture 360-degree panorama light fields by mounting a DSLR camera on a rotating platform and stitching together the multiple images obtained. The panorama light fields can then be used in our own applications, including refocusing, depth estimation and light field rendering. Rendering 360-degree light fields in an HMD is of particular interest. Extra care is expected in choosing and designing the scenes to be captured, from both a scientific and an artistic point of view.
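A minimal sketch of the capture geometry this setup implies: rotating a camera mounted at a fixed radius yields one viewpoint per angular step, and the resulting poses parameterize the panorama light field. The radius and step count below are placeholder values, not the project's actual configuration.

```python
# Capture poses for a camera mounted at radius r on a rotating
# platform: one outward-facing shot per angular step. Radius and
# step count are illustrative placeholders.
import numpy as np

def capture_poses(radius_m=0.15, n_steps=72):
    """Returns camera centers (x, y) in meters and the yaw angle
    (radians) of each outward-facing shot."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_steps, endpoint=False)
    centers = np.stack([radius_m * np.cos(angles),
                        radius_m * np.sin(angles)], axis=1)
    return centers, angles  # one pose per captured image

centers, yaws = capture_poses()
print(f"{len(centers)} shots, first center: {centers[0]}, yaw: {yaws[0]} rad")
```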

Contact: Prof. Dr. Sebastian Knorr

Overview

Colour transfer is an important pre-processing step in many applications, including stereo vision, surface reconstruction and image stitching. It can also be applied to images and videos as a post-processing step to create interesting special effects or to change their tone and feel. While many software tools are available to professionals for editing the colours and tone of an image, bringing this type of technology into the hands of everyday users, with an interface that is intuitive and easy to use, has generated a lot of interest in recent years.

One approach often used for colour transfer is to let the user provide a reference image which has the desired colour distribution, and use it to transfer that colour feel to the target image. This allows the user to generate the desired result without any further manual interaction.

In our project, the main focus is colour transfer from a reference image to a 3D point cloud, and colour transfer between two 3D point clouds captured under different lighting conditions.
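As a simple baseline for this kind of reference-based transfer, the sketch below matches the per-channel colour statistics of a target point cloud to those of a reference, in the spirit of Reinhard et al.'s global colour transfer (applied here directly in RGB for simplicity; Reinhard et al. work in a decorrelated colour space, and the project's actual method is not specified here).

```python
# Baseline global colour transfer between point clouds: shift the
# target's per-channel mean and standard deviation to match the
# reference's (Reinhard-style statistics matching, done in RGB here).
import numpy as np

def transfer_colours(target_rgb, reference_rgb, eps=1e-8):
    """target_rgb, reference_rgb: (N, 3) float arrays in [0, 1].
    Returns target colours moved toward the reference's statistics."""
    t_mean, t_std = target_rgb.mean(axis=0), target_rgb.std(axis=0)
    r_mean, r_std = reference_rgb.mean(axis=0), reference_rgb.std(axis=0)
    out = (target_rgb - t_mean) / (t_std + eps) * r_std + r_mean
    return np.clip(out, 0.0, 1.0)

# Hypothetical usage: relight a night-time scan with day-time colours.
# cloud_relit_rgb = transfer_colours(cloud_night_rgb, cloud_day_rgb)
```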

Contact: Prof. Dr. Sebastian Knorr