Bringing Photoreal CG to Real-Time AR/VR

Introduction

For many years, photorealistic computer graphics (CG) were limited to offline rendering due to the immense computational power required. However, recent advances in real-time rendering technology have made it possible to achieve near-photoreal visuals in augmented reality (AR) and virtual reality (VR) applications. As an AR/VR developer, I am excited by the new creative possibilities this enables. In this article, I will provide an in-depth look at the techniques and technologies bringing photoreal CG to real-time AR/VR.

Enabling Technologies

Several key technologies have paved the way for photoreal real-time rendering in AR/VR:

Powerful Mobile GPUs

  • Modern mobile system-on-chips (SoCs) contain extremely powerful GPUs optimized for parallel processing and floating point calculations needed for 3D graphics. For example, the Qualcomm Snapdragon 865’s Adreno 650 GPU delivers up to 25% faster graphics rendering compared to prior generations.
  • These mobile GPUs are reaching performance levels comparable to desktop GPUs of just a few years ago, providing the horsepower needed for photoreal graphics.

Efficient 3D Engines

  • 3D engines like Unity and Unreal Engine 4 are optimized for real-time rendering, and newer versions adopt efficiencies such as:
  • Vulkan API: a low-overhead graphics API that minimizes CPU bottlenecks.
  • Scriptable render pipelines: customizable rendering workflows tuned for optimal performance.
  • Prefab architectures: scene objects and assets can be easily reused, reducing duplication.

Real-Time Ray Tracing

  • Ray tracing accurately simulates how light rays interact with virtual objects. It is resource-intensive, but Nvidia RTX GPUs include dedicated cores that accelerate these calculations.
  • When combined with techniques like denoising, ray tracing enables photoreal lighting, shadows, and reflections in real-time AR/VR apps.
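At the heart of ray tracing is a ray-primitive intersection test, which RTX-style hardware accelerates. As a rough illustration (a minimal sketch, not how a GPU actually implements it), here is the classic ray-sphere intersection in Python:

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the nearest positive hit distance along a ray, or None on a miss.
    Solves |o + t*d - c|^2 = r^2 for t, assuming d is unit length."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c  # discriminant of the quadratic in t
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0  # nearer of the two roots
    return t if t > 0 else None

# A ray fired down -z from the origin hits a unit sphere at z = -5
# at distance 4 (the sphere's near surface sits at z = -4).
t = ray_sphere_hit((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0)
```

A real renderer fires millions of such rays per frame against acceleration structures (BVHs) rather than testing primitives one by one, which is exactly the workload the dedicated RT cores offload.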

Foveated Rendering

  • Eye tracking in AR/VR headsets allows foveated rendering, which reduces rendering quality in the peripheral vision while maintaining full detail in the foveal region the user is looking at.
  • This matches how human vision works and significantly cuts the rendering workload without sacrificing perceived quality.
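The idea can be sketched as a shading-rate function of distance from the tracked gaze point. This is a simplified illustration (real implementations use hardware variable-rate shading and perceptually tuned falloff curves; the radii here are arbitrary):

```python
import math

def shading_rate(px, py, gaze_x, gaze_y, fovea_radius=200.0):
    """Pick a shading rate for a pixel based on its distance (in pixels)
    from the gaze point: 1 = shade every pixel, N = one shade per NxN block."""
    dist = math.hypot(px - gaze_x, py - gaze_y)
    if dist <= fovea_radius:
        return 1  # foveal region: full quality
    elif dist <= 2 * fovea_radius:
        return 2  # near periphery: quarter the shading work
    else:
        return 4  # far periphery: one sixteenth the shading work
```

Because peripheral pixels vastly outnumber foveal ones at VR resolutions, even this coarse three-zone scheme can cut pixel-shading cost dramatically.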

Photoreal Rendering Techniques

With those enablers in place, AR/VR developers can leverage a variety of graphics techniques to achieve photorealism:

Physically-Based Rendering (PBR)

  • PBR materials use real-world physical parameters such as base color, metallic, roughness, and reflectance to render light interactions accurately.
  • AR/VR apps can use digital material databases or photograph real materials to produce convincingly realistic appearances.
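Two building blocks of most PBR shaders are the Fresnel term and a microfacet normal distribution. The following Python sketch shows the widely used Schlick Fresnel approximation and the GGX distribution (shader code in practice lives in HLSL/GLSL; this is only the math):

```python
import math

def fresnel_schlick(cos_theta, f0):
    """Schlick approximation of Fresnel reflectance: how much light a surface
    reflects at a given view angle. f0 is the reflectance at normal incidence
    (~0.04 for common dielectrics, higher and tinted for metals)."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

def ggx_ndf(n_dot_h, roughness):
    """GGX/Trowbridge-Reitz normal distribution: the density of microfacets
    aligned with the half-vector, using the common alpha = roughness^2 remap."""
    a2 = roughness ** 4
    denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom * denom)
```

Note how reflectance climbs to 1.0 at grazing angles (`cos_theta` near 0) regardless of `f0` — the Fresnel effect that gives PBR materials their realistic edge highlights.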

High Dynamic Range Lighting

  • HDR lighting covers a range of luminosity far beyond standard digital or film formats, better mimicking real-world scenes.
  • Real-time HDR lighting in AR visuals or VR scenes heightens realism through more accurate light bloom, falloff, and reflections.
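Displaying an HDR frame on a standard display requires a tone-mapping pass that compresses unbounded luminance into displayable range. A minimal sketch using the classic Reinhard operator (engines typically use more elaborate filmic curves such as ACES):

```python
def reinhard(hdr_luminance, exposure=1.0):
    """Map an HDR luminance value (0..infinity) into [0, 1) for display.
    Bright values compress smoothly toward 1 instead of clipping harshly."""
    v = hdr_luminance * exposure
    return v / (1.0 + v)

# A very bright highlight (100x reference white) maps near, but below, 1.0,
# preserving gradation where simple clamping would blow out to pure white.
highlight = reinhard(100.0)
```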

Volumetric Effects

  • Volumetric fog, smoke, and particle effects rendered with ray-marching algorithms add crucial depth and atmosphere to AR/VR environments.
  • These computationally intensive effects are made feasible in real-time by optimizations like lightmap volumes.
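Ray marching works by stepping a ray through the volume and accumulating absorption at each sample. A minimal sketch of the transmittance loop (real engines sample 3D noise textures and add in-scattered light; here the density field is just a function argument):

```python
import math

def march_fog(density_at, start, end, steps=64):
    """Accumulate transmittance along a ray through a participating medium
    using fixed-size steps and Beer-Lambert absorption.
    density_at(t) returns the fog density at distance t along the ray."""
    dt = (end - start) / steps
    transmittance = 1.0
    for i in range(steps):
        t = start + (i + 0.5) * dt          # sample at the step's midpoint
        transmittance *= math.exp(-density_at(t) * dt)
    return transmittance  # 1.0 = perfectly clear, near 0.0 = fully fogged

# Uniform fog of density 0.1 over 10 units: transmittance = e^-1 (~0.37)
clear_fraction = march_fog(lambda t: 0.1, 0.0, 10.0)
```

The per-pixel loop is what makes volumetrics expensive, and why optimizations like precomputed lightmap volumes and low-resolution marching with upsampling matter for real-time budgets.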

AI-Enhanced Assets

  • Photoreal CG assets like characters and objects can be generated with deep learning systems like Nvidia Drive and AI.Reverie.
  • AR/VR developers can integrate these AI-created assets into scenes rather than manually creating and tweaking all virtual content.

Case Study: Photoreal Digital Humans

One major application of photoreal real-time graphics is creating digital humans for VR. Let’s look at an example:

  • Epic Games demonstrated a digital human “MetaHuman” in Unreal Engine with incredible visual fidelity including lifelike skin, eyes, hair, and expressions animated in real-time.
  • This involved high resolution facial scanning and rebuilding the actor’s likeness in 3D with anatomical modeling and rigging.
  • Photoreal appearance was achieved by replicating subsurface scattering in skin using a multi-layer shader. Fine details like pores and wrinkles were added procedurally.
  • The virtual character renders in real time on desktop GPUs through HDR lighting, volumetric lightmaps, and optimized bone/joint rigs.
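One cheap building block often used alongside full subsurface-scattering shaders is "wrap" lighting, which lets light bleed past the shadow terminator the way it does through translucent skin. This is a generic approximation sketched in Python, not Epic's actual multi-layer shader:

```python
def wrap_diffuse(n_dot_l, wrap=0.5):
    """Diffuse term with 'wrap' lighting: instead of cutting off at
    n_dot_l = 0, light wraps around the terminator by the wrap amount,
    softening shadow edges to mimic subsurface light transport."""
    return max((n_dot_l + wrap) / (1.0 + wrap), 0.0)

# At the terminator (n_dot_l = 0) standard Lambert shading is fully dark,
# but wrapped shading still receives a third of full intensity.
terminator = wrap_diffuse(0.0)
```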

This glimpse of the future shows the exciting potential of CG humans driven by emerging photoreal real-time graphics.

The Future

Real-time photoreal CG will open new frontiers in AR and VR spanning entertainment, simulation, training, manufacturing, and more. Here are some promising directions:

  • More immersive VR worlds: Users will feel an enhanced sense of presence in VR with believable real-time graphics.
  • Seamless AR integrations: As AR objects become photoreal, they will blend flawlessly into real environments.
  • Digital humans: We are nearing the point where real-time CGI humans are indistinguishable from reality.
  • On-device workflows: Photoreal graphics on mobile chips will enable complex workflows from design to simulation completely on AR/VR devices.

Of course, major challenges like occlusion handling, transparent objects, global illumination, and physics interactions remain. But with rapid advancements driven by consumer technology, photoreal real-time CG is closer than ever before. The next decade will be defined by creators leveraging these tools to build immersive worlds that are visually indistinguishable from reality.
