NVIDIA has been a behemoth in the gaming PC business for a long time. With every generation of graphics card it releases come new features that both developers and gamers can take advantage of, and the current shift from GTX to RTX is no different. NVIDIA’s newest generation introduces some incredible features which we’ll be covering here, but it’s the real-time ray tracing that’s got me excited. You might be asking why? Let me explain.

Here’s a little background on ray tracing and its uses so far outside of games. Ray tracing itself isn’t a new technique; it’s been used for lighting, shadows, and reflections in hundreds of non-real-time rendering applications, from animated movies by big studios such as DreamWorks and Pixar to CGI effects and pre-rendered cutscenes. But in all of those uses, the render time of a single frame is astronomical. I remember using ray tracing to spice up a 3D animation in college, and I chiefly remember it taking literally hours to render a single frame on my Intel i7 and GTX 780 pairing. The difference in lighting and reflection quality was the difference between a low grade and a high grade on my work, purely because of how advanced the technology was and the visual impact it had on my finished project. The idea that we can now render 60 frames in a single second, all with ray tracing on in a video game, while the game still runs physics and VFX simulations, blows my mind.
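
If you’re wondering where all that render time goes, here’s a rough, stripped-down C++ sketch of the single operation a ray tracer repeats over and over: testing a ray against a piece of geometry. The scene and numbers are made up purely for illustration, but even this toy version has to run once per pixel of a 1080p frame before a single bounce, shadow, or reflection ray is considered.

```cpp
#include <cmath>
#include <cstdio>

// Minimal vector type, just enough for the sketch.
struct Vec3 {
    double x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    double dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
};

// Distance along a (normalised) ray to the nearest hit on a sphere,
// or a negative value if the ray misses. This tiny test is the kernel
// a ray tracer repeats for every pixel, bounce, and shadow ray.
double raySphere(const Vec3& origin, const Vec3& dir, const Vec3& centre, double radius) {
    Vec3 oc = origin - centre;
    double b = 2.0 * oc.dot(dir);
    double c = oc.dot(oc) - radius * radius;
    double disc = b * b - 4.0 * c;
    if (disc < 0.0) return -1.0;             // miss
    return (-b - std::sqrt(disc)) / 2.0;     // nearest intersection
}

int main() {
    // One primary ray per pixel of a single 1080p frame: over two million
    // intersection tests against just one sphere, before any shadows,
    // reflections, or extra samples per pixel are even considered.
    Vec3 sphereCentre{0.0, 0.0, -5.0};
    long hits = 0;
    for (int y = 0; y < 1080; ++y) {
        for (int x = 0; x < 1920; ++x) {
            Vec3 dir{(x - 960) / 960.0, (y - 540) / 540.0, -1.0};
            double len = std::sqrt(dir.dot(dir));
            dir = {dir.x / len, dir.y / len, dir.z / len};
            if (raySphere({0.0, 0.0, 0.0}, dir, sphereCentre, 1.0) > 0.0) ++hits;
        }
    }
    std::printf("%ld of %d primary rays hit the sphere\n", hits, 1920 * 1080);
    return 0;
}
```

Multiply that by shadow rays, reflection bounces, and many samples per pixel for soft lighting, and it’s easy to see where those hours per frame went, and why dedicated hardware makes such a difference.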

You might say, “But we’ve got reflections, lights, and shadows in games already, what’s the big deal?” It’s how those elements are created currently versus what ray tracing does differently that makes this RTX feature so special. Until now, game developers would “box capture” reflections, “bake” shadows from lights, and set scene elements up in a static way; not much would happen on the fly. The reflections you see in most games are static in some way or another: an invisible bounding box captures the light sources within an area and ‘fake’ reflects them onto any reflective surface. The same principle applies to shadow maps, where before a player is even in the game the shadows are worked out and baked into the world for static objects such as walls and other fixed geometry, leaving only select things to be rendered in real time.
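
To make that “box capture” idea a bit more concrete, here’s a rough C++ sketch of how an engine might pick a pre-baked reflection for a surface at runtime. The structure and names are invented for the example rather than taken from any real engine, but the key limitation is the point: if a surface falls outside every capture volume, it simply gets no reflection at all.

```cpp
#include <cstdio>
#include <string>
#include <vector>

// A pre-baked "box capture": a cubemap rendered offline for one fixed
// region of the level. Names and layout here are invented for the example.
struct ReflectionProbe {
    std::string cubemapName;     // texture captured before the game ships
    double minX, minY, minZ;     // bounding box of the region it covers
    double maxX, maxY, maxZ;
};

// Returns the baked capture covering a point, or nullptr if the surface
// falls outside every capture volume and so gets no reflection at all.
const ReflectionProbe* findProbe(const std::vector<ReflectionProbe>& probes,
                                 double x, double y, double z) {
    for (const auto& p : probes) {
        if (x >= p.minX && x <= p.maxX &&
            y >= p.minY && y <= p.maxY &&
            z >= p.minZ && z <= p.maxZ)
            return &p;
    }
    return nullptr;
}

int main() {
    std::vector<ReflectionProbe> probes = {
        {"street_corner_baked.cube", -10.0, 0.0, -10.0, 10.0, 5.0, 10.0},
    };
    // A puddle inside the captured region reflects; one a street over does not.
    const ReflectionProbe* a = findProbe(probes, 2.0, 0.1, 3.0);
    const ReflectionProbe* b = findProbe(probes, 40.0, 0.1, 3.0);
    std::printf("puddle A reflects: %s\n", a ? a->cubemapName.c_str() : "(nothing)");
    std::printf("puddle B reflects: %s\n", b ? b->cubemapName.c_str() : "(nothing)");
    return 0;
}
```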

This means it would be easy for one rainy level’s puddle to have reflections and a shadow from a nearby street lamp, while the next rainy level leaves those reflections absent and even those shadows removed. This could happen for a number of reasons: the puddle might sit outside the bounding box for reflection capture, or the light might be set not to cast shadows. With ray tracing occurring in real time, however, anything tagged as reflective, anywhere in a game world, is capable of reflecting and refracting its surroundings, and anything acting as a light source can cast real-time rays and form shadows on the environment, simulating real-life light and shadow behaviour.
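
For contrast, here’s a similarly stripped-down sketch of the real-time approach: instead of looking anything up from baked data, a ray traced renderer fires a ‘shadow ray’ from the surface point towards each light and checks whether anything is in the way. The scene is just a list of spheres for brevity, and everything here is illustrative rather than how any particular engine does it.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 {
    double x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    double dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
    double length() const { return std::sqrt(dot(*this)); }
    Vec3 normalised() const { double l = length(); return {x / l, y / l, z / l}; }
};

struct Sphere { Vec3 centre; double radius; };

// Same toy ray/sphere test as the earlier sketch (dir must be normalised);
// a negative result means the ray misses.
double raySphere(const Vec3& origin, const Vec3& dir, const Sphere& s) {
    Vec3 oc = origin - s.centre;
    double b = 2.0 * oc.dot(dir);
    double c = oc.dot(oc) - s.radius * s.radius;
    double disc = b * b - 4.0 * c;
    return disc < 0.0 ? -1.0 : (-b - std::sqrt(disc)) / 2.0;
}

// A point is in shadow if anything sits between it and the light. No baking,
// no bounding boxes: move the light or the geometry and the answer changes
// on the very next frame.
bool inShadow(const Vec3& point, const Vec3& light, const std::vector<Sphere>& scene) {
    Vec3 toLight = light - point;
    double distToLight = toLight.length();
    Vec3 dir = toLight.normalised();
    for (const auto& s : scene) {
        double t = raySphere(point, dir, s);
        if (t > 1e-4 && t < distToLight) return true;   // an occluder is in the way
    }
    return false;
}

int main() {
    std::vector<Sphere> scene = {{{0.0, 2.0, 0.0}, 1.0}};   // a sphere floating above the ground
    Vec3 light{0.0, 10.0, 0.0};
    std::printf("ground point under the sphere: %s\n",
                inShadow({0.0, 0.0, 0.0}, light, scene) ? "in shadow" : "lit");
    std::printf("ground point off to the side:  %s\n",
                inShadow({5.0, 0.0, 0.0}, light, scene) ? "in shadow" : "lit");
    return 0;
}
```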

[Image: NVIDIA GeForce RTX Battlefield V screenshot]

Make no mistake, getting an NVIDIA RTX card won’t make every game look like real life (although you’ll likely be able to run anything you throw at it at max settings for a while, as even the RTX 2060 is comparable to a Pascal-based GTX 1070 Ti). But this technology is still a HUGE step forward in how realistically a game can be rendered. NVIDIA’s RTX cards come with hardware acceleration for these features: dedicated RT cores and Tensor cores are part of the Turing architecture’s genetic makeup, and they’re what make NVIDIA RTX so special. Now we must wait for game developers to begin implementing the real-time ray tracing feature set into their games, though a handful of these games are already out there.
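
For the developer-minded, on Windows this hardware is reached through Microsoft’s DirectX Raytracing (DXR) API, and a game can ask at runtime whether the player’s GPU and driver actually expose it. Below is a minimal, Windows-only sketch of that check (MSVC, Windows 10 SDK); it’s illustrative rather than production code.

```cpp
// Windows-only: build with MSVC against the Windows 10 SDK.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    // Create a D3D12 device on the default adapter.
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device)))) {
        std::printf("No Direct3D 12 device available.\n");
        return 1;
    }

    // Ray tracing support is reported through the OPTIONS5 feature block.
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &options5, sizeof(options5)))) {
        std::printf("Could not query ray tracing support.\n");
        return 1;
    }

    if (options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0)
        std::printf("DirectX Raytracing is supported on this GPU and driver.\n");
    else
        std::printf("DirectX Raytracing is not supported here.\n");
    return 0;
}
```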

The new features that RTX offers don’t stop there, though. DLSS (Deep Learning Super Sampling) is another great stride from NVIDIA, although one we’re likely to see less support for than real-time ray tracing, at least for the time being. DLSS is a new AI-driven form of anti-aliasing, edge smoothing, and scene reconstruction, one which doesn’t massively tank framerate like traditional high-end AA; it actually has the potential to boost framerate if implemented correctly. DLSS samples the surrounding pixels and creates the same edge-smoothing effect you’d expect from high-end AA such as TAA (Temporal Anti-Aliasing) without the performance hit, because the dedicated Tensor cores do all the heavy lifting in the background rather than the GPU’s main shader resources.
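
For a sense of what DLSS is being compared against, here’s a massively simplified C++ sketch of the temporal accumulation idea at the heart of TAA: blend each new frame into a running history so jagged, flickering edges average out over time. Real TAA also reprojects that history with motion vectors and clamps it to avoid ghosting; none of that is shown here, and the pixel values are made up.

```cpp
#include <cstdio>
#include <vector>

// Blend the newly rendered frame into an accumulated history buffer.
// Over several frames this smooths out jagged edges and shimmer.
void temporalAccumulate(const std::vector<float>& current,
                        std::vector<float>& history, float blend) {
    for (size_t i = 0; i < current.size(); ++i)
        history[i] = history[i] * (1.0f - blend) + current[i] * blend;
}

int main() {
    // Four pixels along a jagged edge. The "rendered" frame flickers between
    // two stair-stepped patterns; after a few blended frames the history
    // settles on smoothed in-between values instead of hard 0/1 steps.
    std::vector<float> history = {0.0f, 0.0f, 1.0f, 1.0f};
    for (int frame = 0; frame < 16; ++frame) {
        std::vector<float> current = (frame % 2 == 0)
            ? std::vector<float>{0.0f, 0.0f, 1.0f, 1.0f}
            : std::vector<float>{0.0f, 1.0f, 1.0f, 1.0f};
        temporalAccumulate(current, history, 0.1f);
    }
    std::printf("smoothed edge: %.2f %.2f %.2f %.2f\n",
                history[0], history[1], history[2], history[3]);
    return 0;
}
```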

DLSS also requires fewer samples of the image it’s smoothing, because the AI ‘predicts’ what each edge needs to look like. This means you can have high-end AA for the same performance hit low-end AA would give you. It doesn’t stop there, either: those Tensor cores are also capable of supersampling the image, effectively giving a higher internal resolution (like turning resolution/render scaling up to 200% in games like Fortnite or Overwatch) without the massive performance sacrifice traditionally associated with supersampling. This all sounds perfect on paper, but there’s a catch, and it’s the reason we’re probably not going to see DLSS often just yet: it requires knowledge of neural networking, and not just that, but NVIDIA’s particular approach to neural networking, deep learning, and AI, and how to integrate all of that into a game.
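
To put some rough numbers on why brute-force supersampling hurts, here’s a quick bit of arithmetic: shading work grows with the square of the render scale, so 200% render scale means roughly four times the pixels every frame. The 1440p target below is just an example figure.

```cpp
#include <cstdio>

int main() {
    // Shading work scales with the square of the render scale; the pixel
    // counts below are exact, the 1440p output target is just an example.
    const long long baseW = 2560, baseH = 1440;
    const double scales[] = {1.0, 1.5, 2.0};   // 100%, 150%, 200% render scale
    for (double s : scales) {
        long long w = static_cast<long long>(baseW * s);
        long long h = static_cast<long long>(baseH * s);
        long long pixels = w * h;
        std::printf("render scale %.0f%%: %lldx%lld = %lld pixels per frame (%.2fx the work)\n",
                    s * 100.0, w, h, pixels,
                    static_cast<double>(pixels) / static_cast<double>(baseW * baseH));
    }
    return 0;
}
```

DLSS’s pitch is that the Tensor cores give you that higher effective resolution without paying this quadratic cost in shader work, which is exactly where the potential framerate boost comes from.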

In short, there’s no simple on/off switch for DLSS yet. Implementing it means additional work ranging from moderate to highly difficult, and currently that work might outweigh the end result for all but the biggest and most dedicated development teams. That’s typically how new technology goes: adoption is slow while the investment cost is high, then integration becomes easier and cheaper, and more developers pick it up. Final Fantasy XV already has an implementation of DLSS, and it definitely has a positive effect on overall performance and image quality.


Truly, we’re entering a new stage in game development and real-time rendering history, which explains NVIDIA’s shift from GTX to RTX branding: the feature set is so big and the architectural improvements so numerous that branding this line of cards GTX would be a disservice to these new developments. The raw performance increases of RTX are very real too: 4K 60fps is becoming the norm with cards like the RTX 2080 Ti, and even the mid-tier RTX 2060 matches the previous generation’s high-end GPUs. But the new features introduced by RTX are the real shining star of this line-up. I hope to see them widely adopted by AAA development studios within the coming year, both in new releases and as patches for recently released games.

What games do you feel would benefit from real time ray tracing and DLSS? Sound off in the comments below.
