Many technologies had to mature before modern VR could become the practical product that it is today. We needed smaller and lighter screens, for one thing. Electronic components had to shrink down by many orders of magnitude, and overall computer performance needed to increase an incredible amount as well.

However, the one area of computer technology that sits at the center of VR is 3D graphical rendering – the ability of a computer system to calculate and then draw images in three dimensions.

It’s Been a Long Road

The earliest 3D graphics were truly primitive: simple wireframe objects rendered at a handful of frames per second. As time went by, those wireframes were filled in with flat, single-colored polygons and became more solid and believable. If you play 3D games from the early '90s you'll see this simple flat shading in action. The first Star Fox game for the Super Nintendo Entertainment System looks crude today, but at the time those flat-shaded polygons were quite impressive.

Star Fox is an important milestone in the development of modern 3D graphics for one key reason: those true 3D graphics were far beyond the hardware built into the SNES. So a special hardware solution was built into the game cartridge itself. Called the Super FX chip, it is possibly the world's first example of a dedicated GPU, or graphics processing unit.

It demonstrated the stark difference between using a general-purpose CPU (central processing unit) and a GPU. CPUs are designed to execute just about any command that you can express in binary code, but that flexibility comes with serious efficiency compromises. Specifically, CPUs aren't very good at the massively parallel floating-point math that underlies complex graphical rendering.

PC games that use a software renderer running on the CPU inevitably perform poorly, look awful, and tie up the CPU so it has little left over for other aspects of the software such as game AI and physics. When dedicated add-in cards came to the PC, things started changing more quickly. Each year the cards became faster and faster, and they gained dedicated hardware features that accelerated the raw calculations behind 3D graphics.

CPUs vs GPUs

One key architectural difference is that CPUs have a small number of complex processing units, or “cores”, to process information. These days it’s pretty normal for a typical computer to have four to eight cores, which means such a CPU can process four to eight streams of instructions at the same time. The first CPUs only had one core! So you can imagine why CPUs weren’t too great at graphics.

GPUs have a lot more cores than CPUs. Like, a LOT more. Low-end GPUs today might have 200-300 cores, with high-end cards boasting nearly 4000. These are not the big, fat, complex processor cores of traditional CPUs, however. These are relatively simple, small processors. They only handle basic instructions for rendering, but they do it at high speed and in a massively parallel fashion.
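
To make that concrete, here is a minimal sketch in plain C++ (no GPU API involved) of why rendering parallelizes so well: every pixel can be shaded independently of every other pixel, so the work splits cleanly across however many cores are available. The shadePixel function and the image size are made-up stand-ins for illustration.

```cpp
// Toy illustration (not a real renderer): shading each pixel is an
// independent task, which is why thousands of simple GPU cores help.
// Here we fake the "shading" on the CPU and split the rows across threads.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <thread>
#include <vector>

constexpr int kWidth = 1920, kHeight = 1080;

float shadePixel(int x, int y) {
    // Stand-in for per-pixel math (lighting, texturing, etc.).
    return std::sin(x * 0.01f) * std::cos(y * 0.01f);
}

int main() {
    std::vector<float> frame(kWidth * kHeight);
    unsigned cores = std::max(1u, std::thread::hardware_concurrency()); // e.g. 4-8 on a CPU
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < cores; ++t) {
        workers.emplace_back([&, t] {
            // Each worker shades its own slice of rows; no pixel depends
            // on any other, so the work scales with the number of cores.
            for (int y = static_cast<int>(t); y < kHeight; y += static_cast<int>(cores))
                for (int x = 0; x < kWidth; ++x)
                    frame[y * kWidth + x] = shadePixel(x, y);
        });
    }
    for (auto& w : workers) w.join();
    std::printf("Shaded %d pixels using %u CPU threads\n", kWidth * kHeight, cores);
    return 0;
}
```

A GPU does the same kind of split, just across thousands of much simpler cores instead of a handful of complex ones.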

The State of Graphics Today

There are two main sides to the graphics puzzle. One is pre-rendered graphics and the other is real-time graphics.

With pre-rendered graphics, the visuals are drawn over a long period of time, usually taking minutes or hours to render a single frame or picture. The computer-generated visuals you see in films are pre-rendered. It’s a way to get the best quality 3D graphics possible with current hardware and the time available. For big-budget CG films, a render “farm” may work for hours, days, or even months to create the final polished imagery.

With real-time graphics, the graphical hardware has to render enough frames every second to ensure that we perceive smooth motion. Modern video game consoles usually target a frame rate somewhere between 30 and 60 frames per second. For PC gamers, 60 frames per second is often considered the minimum target. VR requires at least a sustained 90 frames per second to maintain immersion.
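
Those targets translate directly into a per-frame time budget. A quick back-of-the-envelope calculation, sketched below in C++, shows how little time the hardware actually gets for each frame:

```cpp
// Frame-time budget: a renderer must finish all of its work for a frame
// within 1000 ms divided by the target frame rate.
#include <cstdio>

int main() {
    const int targets[] = {30, 60, 90}; // console, PC, and VR targets from above
    for (int fps : targets) {
        std::printf("%d fps -> %.1f ms per frame\n", fps, 1000.0 / fps);
    }
    return 0;
}
```

At 90 frames per second the budget works out to roughly 11 milliseconds per frame, which is a big part of what makes VR rendering so demanding.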

When it comes to pre-rendered graphics today, it’s safe to say that we have achieved complete “photorealism” in many types of imagery. In other words, the generated image is indistinguishable from an image that was taken from real life. Photoreal faces still elude us, although some people have come pretty close. When it comes to inanimate objects, we’ve basically done it. Cars, sandwiches, landscapes, and more now look so real it’s unlikely the average person could tell they were fake. Animals are nearly there, and on a case-by-case basis I think some are now rendered photoreal. For example, the lion from the first Narnia film was nearly spot-on, and that was years ago.

For real-time graphics today, photorealism is still quite a bit off. Video games like The Witcher 3 or Battlefield 4 are absolutely gorgeous, and current PC VR is likewise incredibly realistic, but very little of it would fool a human into thinking it was anything but CG imagery. What’s interesting is that we have been able to achieve VR presence without complete photorealism. It seems that a feeling of presence and solidity does not require visuals that are optically identical to reality.

Photorealism Still Matters

That doesn’t mean the future of graphics should give up on photorealism. There are degrees of simulation, and just because we have achieved the first truly convincing level doesn’t mean we should stop. After all, presence is possible even though we do not yet generally simulate things like touch or temperature, and that doesn’t mean we shouldn’t bother with those either!

Another reason I think photorealism is worth pursuing is something known as the uncanny valley. That term refers to the unsettling feeling we get from human likenesses that are almost, but not quite, realistic. And we still can’t do convincing human faces. Just look at a movie like The Polar Express. Its attempts at recreating Tom Hanks’ face end up being more horrifying than endearing. It’s one of the reasons companies like Pixar don’t bother going for realistic-looking characters. Cartoony designs avoid the creepy, corpse-like look of the uncanny valley.

The Graphics of Tomorrow

The most straightforward prediction we can make about today’s graphics technology is that it will continue to become faster and more powerful. In other words, GPUs will keep getting more capable.

That means packing in even more cores and shrinking the individual transistors even further. The current top-of-the-line GPUs from Nvidia use a 14 or 16 nanometer transistor process. As far as we know, that’s getting towards the lower limit of how small these tiny circuits can be: once transistors shrink past a certain point, their features become too thin to reliably contain the electrons, which start to leak through.

There are quite a few ideas for extending this limit somewhat, but GPU companies are already talking about using multiple GPU chips that act as one. Currently it’s possible to get multiple GPUs to share the graphics load, but it’s a bit of a kludge. They don’t really work together. Instead, they might render alternate frames or use some other way to sync up their independent processing. They can’t pool their memory bandwidth or RAM either: when you have two 4GB graphics cards working together, you still only have 4GB of usable video memory in total, since each GPU needs its own local copy of the data. It’s not very efficient. What’s being proposed now is that these chips work together in such a way that it all becomes one pool of GPU power. DirectX 12 has also made good inroads on this front, leaving the door open for multiple GPUs to work together even if they aren’t exactly the same model.
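
To illustrate why the current approach is inefficient, here is a toy simulation in C++ (not real driver code, and the sizes are stand-ins) of alternate frame rendering: each simulated GPU keeps its own full copy of the scene data and simply takes every other frame, which is why two 4GB cards don’t behave like one 8GB card.

```cpp
// Sketch of "alternate frame rendering" (AFR), the usual multi-GPU kludge:
// each GPU holds its own full copy of the scene and simply takes every
// other frame, so the memory is duplicated rather than pooled.
#include <cstdio>
#include <vector>

struct GpuSim {
    int id;
    std::vector<char> localCopyOfSceneData; // duplicated on every GPU
};

int main() {
    const std::size_t sceneBytes = 64; // stand-in for gigabytes of assets
    GpuSim gpus[2] = {{0, std::vector<char>(sceneBytes)},
                      {1, std::vector<char>(sceneBytes)}};

    for (int frame = 0; frame < 6; ++frame) {
        GpuSim& gpu = gpus[frame % 2]; // even frames -> GPU 0, odd -> GPU 1
        std::printf("frame %d rendered by GPU %d (using its own %zu-byte copy)\n",
                    frame, gpu.id, gpu.localCopyOfSceneData.size());
    }
    return 0;
}
```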

More Than Raw Speed

While increasing raw power will help, reaching a level of graphical rendering that is indistinguishable from reality will require radically different ways of generating those scenes. Much of the realism we perceive comes from the physics of light, yet the way that most computer graphics are generated has nothing to do with the way photons work in real life.

The basic rendering method used for even the most cutting-edge 3D graphics today is still fundamentally the same one used by the first PCs that ran DOS. It’s called “rasterization”, and it describes the mathematical process of taking a 3D scene and converting it into a 2D grid of pixels. You can think of rasterization as painting a scene front-to-back, calculating which things obscure other things. Rasterization isn’t great at producing realistic lighting, so graphics experts have devised all sorts of workarounds to create “fake” lighting simulations. Play a modern video game and it’s clear they’ve done a fantastic job, but the average person can still tell a game screenshot from a real photo.
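
For readers who want to see the idea in code, below is a bare-bones C++ sketch of rasterization under heavy simplifying assumptions: a single triangle that has already been projected to screen space, a character grid standing in for the framebuffer, and a depth buffer so nearer fragments win out over farther ones. It is a toy, nothing like a production rasterizer.

```cpp
// Minimal rasterization sketch: for each pixel, test whether it lies inside
// the (already projected) triangle, and keep only the nearest fragment.
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

// Signed area test: which side of edge (a->b) does point (px, py) lie on?
float edge(const Vec3& a, const Vec3& b, float px, float py) {
    return (px - a.x) * (b.y - a.y) - (py - a.y) * (b.x - a.x);
}

int main() {
    const int W = 16, H = 8;
    std::vector<float> depth(W * H, 1e30f);   // depth buffer: start "infinitely far"
    std::vector<char>  color(W * H, '.');     // 1 "pixel" = 1 character

    // One screen-space triangle (z is kept only for the depth test).
    Vec3 tri[3] = {{1, 1, 5}, {14, 2, 5}, {7, 7, 5}};

    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            float px = x + 0.5f, py = y + 0.5f;
            float w0 = edge(tri[1], tri[2], px, py);
            float w1 = edge(tri[2], tri[0], px, py);
            float w2 = edge(tri[0], tri[1], px, py);
            // Inside if the pixel is on the same side of all three edges
            // (the sign depends on winding order, so accept either).
            bool inside = (w0 >= 0 && w1 >= 0 && w2 >= 0) ||
                          (w0 <= 0 && w1 <= 0 && w2 <= 0);
            float z = tri[0].z; // flat depth for simplicity
            if (inside && z < depth[y * W + x]) {   // nearer fragment wins
                depth[y * W + x] = z;
                color[y * W + x] = '#';
            }
        }
    }

    for (int y = 0; y < H; ++y)
        std::printf("%.*s\n", W, &color[y * W]);
    return 0;
}
```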

Pre-rendered graphics usually use a very different method called “ray tracing”. In this rendering method, rays of simulated light are shot out from the virtual camera, and the path of each ray is traced as it bounces around the scene, producing shadows, transparency, and nuanced color. Ray-traced scenes made with modern render farms are often almost impossible to tell apart from photos.
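
For contrast, here is an equally tiny C++ sketch of the ray-tracing idea: shoot one ray per pixel from a virtual camera and shade whatever it hits. A real ray tracer follows each ray through many bounces to get shadows, reflections, and color bleeding; this one intersects a single hard-coded sphere, purely to show the shape of the algorithm.

```cpp
// Tiny ray-tracing sketch: one primary ray per pixel, one sphere, ASCII output.
#include <cmath>
#include <cstdio>

struct Vec { float x, y, z; };
Vec sub(Vec a, Vec b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
float dot(Vec a, Vec b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Does a ray from `origin` along `dir` hit the sphere? If so, return the hit distance in t.
bool hitSphere(Vec origin, Vec dir, Vec center, float radius, float& t) {
    Vec oc = sub(origin, center);
    float a = dot(dir, dir);
    float b = 2.0f * dot(oc, dir);
    float c = dot(oc, oc) - radius * radius;
    float disc = b * b - 4 * a * c;
    if (disc < 0) return false;
    t = (-b - std::sqrt(disc)) / (2 * a);
    return t > 0;
}

int main() {
    const int W = 40, H = 20;
    Vec camera = {0, 0, 0};
    Vec sphere = {0, 0, -3};

    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            // One primary ray per pixel, aimed through a simple image plane.
            Vec dir = {(x - W / 2.0f) / W, (H / 2.0f - y) / H, -1.0f};
            float t;
            if (hitSphere(camera, dir, sphere, 1.0f, t)) {
                // Shade by how directly the surface faces the camera.
                Vec hit = {camera.x + t * dir.x, camera.y + t * dir.y, camera.z + t * dir.z};
                Vec n = sub(hit, sphere);
                float facing = -dot(n, dir) /
                               (std::sqrt(dot(n, n)) * std::sqrt(dot(dir, dir)));
                std::putchar(facing > 0.8f ? '@' : facing > 0.5f ? '+' : '.');
            } else {
                std::putchar(' ');
            }
        }
        std::putchar('\n');
    }
    return 0;
}
```

Even in this stripped-down form, notice how much arithmetic happens per pixel; multiply that by many rays and many bounces per pixel and it becomes clear why ray tracing is usually reserved for offline rendering.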

The problem is that ray tracing is so computation-heavy that no current GPUs can do it in real time, at least not for complex scenes. So one of the big future leaps we’ll see in VR will come when graphics hardware is powerful enough to perform ray tracing in real time.