How to do occlusion in augmented reality


Occlusion happens when an object nearer the viewer obscures objects farther away. In AR, occlusion must be reproduced correctly on video see-through displays, whether handheld panels or head-mounted viewers, so that virtual content appears behind real objects where it should. In this post we will see how to do occlusion in augmented reality.
Occlusion handling falls into two categories: occlusion detection and occlusion management. Occlusion detection is a method used to assess whether any virtual objects overlap real objects. When an overlap is observed, the occlusion is managed: the system optimizes rendering by not drawing what is not visible.
There are two main approaches to this problem: model-based and depth-based.
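At its core, occlusion management is a per-pixel depth test: draw a virtual fragment only where it is nearer to the camera than the real scene. The sketch below illustrates this idea with plain numpy arrays; the function and variable names are illustrative, not taken from any particular AR SDK.

```python
import numpy as np

def composite(camera_rgb, virtual_rgb, real_depth, virtual_depth):
    """Overlay virtual content on the camera image, hiding virtual pixels
    that are occluded by the real scene (i.e. farther than the real depth)."""
    visible = virtual_depth < real_depth      # per-pixel occlusion test
    out = camera_rgb.copy()
    out[visible] = virtual_rgb[visible]
    return out

# Toy 2x2 example: the virtual object is nearer only in the top-left pixel.
cam = np.zeros((2, 2, 3))                     # black camera image
virt = np.ones((2, 2, 3))                     # white virtual object
real_d = np.array([[2.0, 0.5], [0.5, 0.5]])   # real-scene depth (metres)
virt_d = np.full((2, 2), 1.0)                 # virtual-object depth
out = composite(cam, virt, real_d, virt_d)
print(out[0, 0])  # → [1. 1. 1.]  (virtual pixel shows through)
```

Everything else in an occlusion pipeline is, in effect, about obtaining a good `real_depth` for this test, by modeling the scene in advance or by estimating depth on the fly.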

Model-based Approach

Model-based methods usually need only one camera. When the real environment is simple and fully known, a model-based approach is used: complete 3D models of the real objects are obtained and used to resolve occlusion. An early example is the GRASP system, an interior design system. In medical applications, the AR Image-Guided Surgery System (ARGUS) was introduced; this technology aids operations by integrating simulated medical images with live video. Occlusion detection between real and virtual objects is implemented in this framework.

Depth-based Approach

A depth-based solution attempts to resolve occlusion by an unknown real object, often known as dynamic occlusion handling. In this situation, the shape, size, and location of the occluding object are unknown.

Occlusion in AR is one of the main elements for enhancing the visualization of the AR scene: it realistically improves the sense of presence of virtual objects. This post describes methods for handling occlusion in AR systems. Several researchers have investigated various methods to solve the occlusion problem, but the aforementioned approaches in most cases solve only part of it. Seamless integration between real and virtual objects, particularly occlusion, has remained a critical challenge in AR applications.
ARCore's Depth API uses a depth-from-motion algorithm to construct a depth map of the environment and enable the detection of occlusions.
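Depth maps estimated from motion are typically noisy near object boundaries, so rather than the hard nearer/farther test, renderers often feather the occlusion mask across the depth crossover. The sketch below shows this generic idea; it is not ARCore's actual API, and the feather width is an assumed parameter.

```python
import numpy as np

def soft_occlusion_alpha(real_depth, virtual_depth, feather=0.1):
    """Per-pixel alpha for the virtual layer: ~1 where the virtual object is
    clearly in front, ~0 where clearly behind, with a smooth ramp of width
    `feather` (metres) across the crossover to hide depth-map noise."""
    diff = real_depth - virtual_depth         # positive => virtual is in front
    return np.clip(diff / feather + 0.5, 0.0, 1.0)

# Clearly in front -> fully visible; exactly at the same depth -> half blended.
print(soft_occlusion_alpha(np.array([2.0]), np.array([1.0])))  # → [1.]
print(soft_occlusion_alpha(np.array([1.0]), np.array([1.0])))  # → [0.5]
```

The resulting alpha is used to blend the virtual color over the camera image instead of the binary mask from a plain depth test.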


Indeed, earlier this year Apple added support for human occlusion in ARKit 3, along with other advanced features, but only on devices that have a TrueDepth camera and at least an A12 Bionic processor. New sensors and more powerful devices will help overcome these obstacles in the near future. Three sequences illustrate the effectiveness of our approach: the Stanislaus sequence, the Cow sequence and the Loria sequence. Each sequence demonstrates our algorithm's ability to handle occlusions in different circumstances.

AR systems and occlusion in augmented reality

We want to show that our algorithm is efficient even in cases that were considered hard to recover and rebuild in 3D. We also implemented a new approach to solving occlusion in AR tasks. The main idea is that, with moderate user engagement, occluding boundaries can be detected finely. One of the key strengths of our algorithm is its ability to deal with uncertainty in the measured motion between two frames. By carefully choosing key frames, our approach tends to be more convenient and reliable than other methods currently in use.

Current AR systems track only sparse geometric features and do not measure depth at every pixel. This is why most AR effects are pure overlays that real objects can never occlude. We have a new algorithm that propagates depth to every pixel in near real time. The produced depth maps are spatially smooth but show sharp discontinuities at depth edges. This enables AR effects that can fully interact with, and be occluded by, the real scene. Our algorithm takes a video and a sparse SLAM reconstruction as input. It begins by estimating soft depth edges from the gradient of the optical flow. Because optical flow is unstable close to occlusions, the resulting depth edges are filtered with a new reliability measure. We then localize the depth edges and align them with the image edges. Finally, the propagated depth is optimized to be smooth, while discontinuities are encouraged at the recovered edges.

We present results for several examples in this domain and demonstrate a range of efficient occlusion-aware AR video effects. To test our algorithm quantitatively, we characterize the properties that make depth maps suitable for AR applications and present novel evaluation metrics that measure how well these properties are achieved.

How to do occlusion in augmented reality: Depth Lab information

SLAM. The camera trajectory and the scene geometry of the video stream are computed with simultaneous localization and mapping (SLAM) algorithms. This is related to structure from motion, but a fundamental difference is that SLAM techniques are developed primarily for video sequences and are optimized for real-time applications. In AR applications, SLAM algorithms are usually used for tracking. However, most methods track only a set of individual image features, resulting in a sparse scene representation and restricting AR effects to pure overlays.
The algorithm takes three inputs: (1) a series of video frames, usually captured with a mobile phone camera, (2) the camera parameters for every frame, and (3) a sparse depth annotation (at least for some frames).
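The sparse depth annotation in (3) typically comes from projecting the SLAM point cloud into each frame using the camera parameters from (2). A minimal sketch, assuming the points are already expressed in the camera's coordinate frame and a standard pinhole intrinsic matrix K:

```python
import numpy as np

def project_points(points_cam, K):
    """Project 3D points (camera frame, z > 0) into the image, returning
    (u, v) pixel positions and their depths - the sparse depth samples."""
    z = points_cam[:, 2]
    uvw = (K @ points_cam.T).T            # homogeneous image coordinates
    uv = uvw[:, :2] / uvw[:, 2:3]         # perspective divide
    return uv, z

# A point straight ahead of the camera lands at the principal point (cx, cy).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
uv, depth = project_points(np.array([[0.0, 0.0, 2.0]]), K)
print(uv[0], depth[0])  # → [320. 240.] 2.0
```

Each projected point contributes one known depth value at one pixel; all remaining pixels are filled in by the densification step described below.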

Our algorithm consists of three steps:
(1) Estimate soft depth edges: First, "soft" edges that are not yet precisely located are estimated from the gradient of the optical flow fields. As optical flow is unstable close to occlusions, we compute flow fields for a subsequent and a previous frame and fuse the resulting depth edges, observing that the flow gradients are reliable only where the flow vectors differ.
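The core of step (1) can be sketched as follows: the soft edge strength is the magnitude of the optical-flow gradient, which is large where neighbouring pixels move differently (motion parallax at a depth discontinuity). The flow arrays here are synthetic stand-ins; computing the flow itself is a separate step.

```python
import numpy as np

def soft_depth_edges(flow_u, flow_v):
    """Soft (unlocalized) depth-edge strength: the per-pixel magnitude of
    the optical-flow gradient across both flow components."""
    du_y, du_x = np.gradient(flow_u)
    dv_y, dv_x = np.gradient(flow_v)
    return np.sqrt(du_x**2 + du_y**2 + dv_x**2 + dv_y**2)

# Synthetic flow: the left half moves 2 px, the right half is static,
# so the edge strength peaks at the boundary between the halves.
u = np.zeros((4, 8)); u[:, :4] = 2.0
v = np.zeros((4, 8))
edges = soft_depth_edges(u, v)
```

A simple way to fuse the forward- and backward-flow edges mentioned above would be an element-wise minimum, keeping only edges supported by both directions; the paper's actual fusion also uses the reliability test it describes.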

(2) Localize depth edges: Next, we localize the edges with a modified version of the Canny edge detector [Canny 1986]. This step thins the edges and aligns them with the image gradient so that they are placed exactly on the image boundaries. Hysteresis is used to prevent fluctuations in areas with weak responses.
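The hysteresis part of step (2) can be sketched on its own, independent of the full Canny pipeline: weak edge responses survive only if they connect to a strong response, which suppresses flickering in low-confidence regions. The thresholds and the 4-connected growth below are illustrative choices, not the paper's exact parameters.

```python
import numpy as np

def hysteresis(strength, lo, hi, iters=20):
    """Keep weak edge pixels (> lo) only if connected to a strong one (> hi)."""
    strong = strength > hi
    weak = strength > lo
    keep = strong.copy()
    for _ in range(iters):                  # grow strong edges through weak pixels
        grown = keep.copy()
        grown[1:, :] |= keep[:-1, :]        # 4-connected dilation
        grown[:-1, :] |= keep[1:, :]
        grown[:, 1:] |= keep[:, :-1]
        grown[:, :-1] |= keep[:, 1:]
        keep = grown & weak                 # never grow past weak pixels
    return keep

# A weak chain attached to a strong pixel survives; an isolated weak pixel
# (separated by a sub-threshold gap) is discarded.
s = np.array([[0.9, 0.4, 0.4, 0.0, 0.4]])
result = hysteresis(s, lo=0.3, hi=0.8)
```

A production implementation would use a proper flood fill rather than a fixed number of dilation passes, but the behaviour is the same on short chains.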

(3) Densification: Finally, we propagate the sparse input depth to every pixel by solving a Poisson problem. The data term encourages the result to approximate the sparse input depth and fosters temporal continuity; the smoothness term encourages sharp discontinuities at the localized depth edges and smooth results elsewhere.


Results and evaluation

In practice, we applied our approach to pre-captured video sequences, since this makes debugging and reproducing results simpler, even though it is ultimately intended for real-time use, e.g. in a smartphone viewer. We captured and processed a range of video sequences with a Google Pixel 2 smartphone at full HD resolution (1920 x 1080 pixels). The videos feature a variety of objects and scenes, often including objects that are hard to reconstruct with conventional multi-view stereo algorithms, such as moving objects, water, reflective surfaces, and thin structures. In the supplementary materials we include, as a web page for easy inspection, all input videos, final depth maps, and the intermediate results: SLAM points, soft depth edges, and localized depth edges.

Evaluation Metrics

To evaluate our method quantitatively and objectively compare it with the baseline algorithms below, we propose evaluation metrics that measure the extent to which these aims are achieved: (1) occlusion error, which penalizes depth edges that are not sharp, and (2) texture error, which penalizes depth that is not smooth at texture edges.

Conclusion about how to do occlusion in augmented reality

We have presented a simple algorithm that spreads a sparse depth source, such as SLAM points, to all remaining pixels of a video sequence. The resulting dense depth video is spatiotemporally smooth, except at depth edges, where it shows sharp discontinuities. These characteristics make our algorithm useful for AR video effects. Because the depth maps have no holes, effects can fully interact with the geometry of the scene and can, for instance, be occluded by real objects.

Hope you like this post about how to do occlusion in augmented reality.

