Everyone experiences fear in their lives. Fear is both natural and necessary for our survival. It drives us to avoid danger and helps us stay alive. Fear is an evolved part of our nature. We are descended from ancestors that had a healthy level of fear and therefore weren’t eaten by some saber-toothed prehistoric monstrosity. So, although fear is a negative subjective feeling, it’s very good for us as a whole!
Even so, in some cases fear can become a problem in itself. When fear reaches the point where it prevents someone from living a normal life or doing all the things they really want to do, then it can be considered a disorder. For example, if you have a severe fear of flying and avoid airplanes at all cost, then you may be missing work or travel opportunities. If you’re afraid of swimming you might never learn how, or might have to avoid important social events.
Debilitating fear of some specific thing or experience is referred to as a “phobia” in psychology, after the personification of fear from Greek mythology.
There are, of course, degrees of such fears. For example, people with severe arachnophobia (a fear of spiders) might get a panic attack just by seeing a picture of a spider or hearing someone talk about them. Others may be OK with the abstract discussion of spiders, but run in fear if even a small harmless house spider is in the room with them.
Many people who have a phobia learn to adapt their lives in order to live with it. However, advances in psychological therapy and our knowledge of how the human brain works have made it possible to help people get over even severely debilitating phobias using specialized therapies.
One of the most effective methods of helping people get over their phobias comes from a branch of psychology known as cognitive psychology. Cognitive psychologists study mental processes, treating the brain a little like a computer that can be “programmed” to one extent or another. A more specialized type of therapy was born from cognitive psychology and another, older branch known as behaviorism. This therapy is known as cognitive behavioral therapy, or “CBT”. The specific CBT technique that can help people suffering from phobias is known as “exposure” therapy.
The idea behind exposure therapy is actually pretty simple. When you are faced with something that makes you very afraid, the natural response is to get away from it as quickly as possible. When you’ve put a safe distance between yourself and the frightening thing you want to escape, the fear goes away. Unfortunately this does nothing to solve the root of your problem – an irrational fear. In exposure therapy you are very slowly exposed to the source of your fear, first at low levels and then with increasing intensity. At each level of exposure the idea is to let your brain, body, and mind get used to that level of presence. We all “habituate” to even extreme circumstances eventually, if given a chance. The point of exposure therapy is to give the person a chance to habituate instead of just avoiding the issue.
While phobias are the most well-known target of exposure therapy, it’s actually useful for quite a few different problems. People who experience debilitating panic attacks can learn to deal with their trigger in a safe environment. If you have social anxiety or just general anxiety, it also works. OCD sufferers have found relief in exposure therapy too, and people who suffer from post traumatic stress disorder are prime candidates.
VR and exposure therapy seem like they are made for each other, but of course you don’t need VR to practice the method. This therapy model has been around far longer than VR has been a feasible way to deliver it. Traditionally, there are three main channels through which exposure therapy is conducted.
With “in vivo” exposure, people are exposed to the real deal. If you’re afraid of spiders the therapist might bring out a real spider in a terrarium, progressing eventually to handling said spider near the end of therapy. Then there’s “imaginal” exposure. This is where the therapist guides the person through a detailed and vivid mental reconstruction of what caused their problem. By deliberately imagining things, people, places, and experiences, the person in question can become used to the idea of dealing with problematic thoughts. Finally, “Interoceptive” exposure happens when the therapist gets the client to cause particular physical sensations that trigger their negative responses. People who have panic attacks often get triggered by their thumping hearts. By making these people exercise to lift their heart rates, the sufferer can learn that an elevated heart rate by itself isn’t dangerous.
In terms of pace, there are three main ways exposure is delivered. You can do a graded exposure where the person looking for help puts together a list of things related to their issue and then ranks them from not particularly scary to “help I’m going to crap my pants and die”. So they might start out with a picture of a spider, then a video, then a real one in a tank, and so on. “Flooding” is when the therapist flips that ranking and starts with the most difficult trigger, which is pretty rough! In other cases they might systematically desensitize the person by mixing exposure and relaxation exercises.
Each specific mix of methods is unique to each case, which is why this therapy is best done under the supervision of a trained mental health professional.
So what is VR’s role in all of this? Well, as you can imagine, a technology that can make people feel they are in a situation that they really aren’t in is incredibly useful. There are plenty of situations where in vivo exposure isn’t practical. It might be too dangerous, too expensive, or simply too much trouble to recreate the real deal.
Virtual Reality also elevates one specific aspect of exposure therapy that’s very important – control. The therapist has complete control over everything the client sees and hears. He or she can dial the intensity of exposure up or down as needed, or stop things when they get out of hand and conjure up experiences that would be impossible by other means.
CBT as a treatment for phobias has been around since the 1950s. Obviously this is quite a long time before VR was even remotely a thing. Still, once VR became more than just an idea it didn’t take CBT practitioners long to figure out that it could play a role in exposure therapy.
Already in the 90s, when VR flopped commercially for the first time, it found some respite within universities and specialized therapy practices. The first paper that detailed how effective VR could be for the treatment of phobias was published in 1995 by Barbara Rothbaum and Larry Hodges. Since then it’s become a fairly normal practice to use VR in the therapeutic setting, and companies that create specialized therapeutic VR software exist to serve this market.
Until recently, these therapeutic VR products had been expensive and specialized. There really wasn’t off-the-shelf hardware you could buy to build your own therapy setup, and whatever was available was all clunky and terribly uncomfortable. Now, the best VR experience you can have essentially comes from the top consumer headsets. Products such as the Oculus Rift and HTC Vive offer amazingly lifelike experiences when coupled with standard high-performance hardware. Yes, AAA consumer VR may be expensive from the perspective of normal users. But if you’ve been paying tens of thousands of dollars for proprietary VR therapy systems, it’s a real revolution.
Mobile VR, too, has a big place in this new generation of therapeutic experiences. Even something like a Gear VR is superior to the systems that have been used in the past for VR therapy. The processing power in a typical smartphone is hundreds of times that of a high-end computer from the 90s.
Mobile VR brings with it the possibility of sending your clients home with a way to complete their own treatments outside the confines of therapy. Think about how useful this would be for someone who suffers from anxiety; they could put on a guided meditation app when they start feeling a little tense. Mobile VR also provides the opportunity to treat people whose phobias prevent them from leaving home, although, of course, a laptop and tethered HMD is also an option.
Then again, easy access to VR and a flood of dubious treatment applications might turn out to be problems in themselves.
First of all, it’s not a good idea to attempt mental health treatment entirely without the supervision of a mental health professional. Yes, it is sadly true that many people in the world don’t have access to the sorts of help that they need, but unsupervised treatment can do more harm than good.
I don’t think people who suffer from general anxiety can get into much trouble with a meditation application, but if you have a severe phobia there’s no telling what consequences the wrong trigger at the wrong time can have. After all, there’s a whole methodology of constructing personalized exposure regimens for patients.
Because VR hasn’t really been easy to get into public hands until recently, there isn’t really any regulation of applications making therapeutic claims. Perhaps in the future such apps might need approval from a sort of VR therapy FDA. Only time will tell.
There’s one project in particular that I want to highlight. Phobos is an application platform that its developer hopes will one day provide an affordable and scientifically valid way to get targeted VR therapy for the treatment of fear and anxiety into the hands of the masses. The project has been going since at least 2014, when it ran an Indiegogo campaign. Unfortunately, the campaign fell far short of its goal, but the project still seems to be alive.
What makes this project different from the many junk “phobia” apps on mainstream app stores is that all the clinical elements of Phobos are validated by actual mental health professionals. Right now, the team behind the platform is working on getting people to actually experience anxiety and fear triggers in VR in a controlled way. When they’ve got that bit down they’ll move on to expanding Phobos to more general cognitive processes such as memory, attention, and perception. They also plan to make use of AR as well as VR.
Because the source code for Phobos is completely free, institutions can use it as they please for mental health purposes, as long as they can provide their own commercial assets (since those can’t be included in the free code).
Most people will experience some sort of mental health issue at some point in their lives. VR provides a way to reach more people in a more effective way like never before. Who knows? Perhaps one day you’ll have an AI therapist who’ll meet and treat you in VR. For now we can marvel at how far we’ve come since those early days of 90s VR with blocky spiders and simple heights. Now you can count the hairs on a tarantula's back. If you want to, that is.
Of course, as with all things VR it might end up being mixed-reality that takes over this therapeutic role in the future. A mixed-reality snake or spider might be much more believable when it’s walking over YOUR coffee table. That’s even before we get into the possibilities that advanced haptic technologies can bring to the party. Imagine feeling the air whipping by as you skydive in VR, or the squeeze of a snake held in your hand. The future of VR therapy still has a long way to go, but at the same time it seems brighter than ever.
We are all very excited about virtual reality. It’s an amazing experience backed by amazing technology. But when the excitement has died down and everyone is used to VR, what are we actually meant to use it for?
Of course, VR gaming is the application everyone thinks of first. It’s also likely to be around for as long as VR. The gaming industry is so big that even relatively niche parts of it can survive with relative ease. At the moment VR gaming may not be mainstream, but it certainly isn’t niche either. The main problem with VR gaming being the main industry supporting VR is the expense inherent to VR gaming systems. Even the low end of AAA VR is pretty pricey; the Sony PSVR costs about as much as a PS4 console, which means that only a small fringe of truly hardcore gamers is going to invest in VR. As VR hardware costs go down this might change. As I write this, however, only one in sixty PS4 owners has a PSVR. If we look at PC VR, the vast majority of gaming PC owners don’t have a machine capable of handling VR, and of those who do, an even smaller subset own HMDs.
If not gaming, then what? Virtual reality has some excellent educational and training applications. In fact, I personally find the educational potential of VR to be one of its most exciting possibilities. Education is a huge industry, worth billions, but it will be a while before VR devices can scale in a cost-efficient way to broadly apply to learning. For now, science fields and industries that benefit from hands-on experiential training are pouncing on VR, but we are a long way from every student packing an HMD in their kit.
Productivity is another intriguing possibility for virtual reality. One of my favorite Oculus apps is a virtual desktop program I found on Steam. Basically, it puts you in a 3D room with your choice of virtual monitors. Big ones, small ones – there’s quite a bit of flexibility.
For someone like me who doesn’t have to look at the keyboard to type, it provides a focused work space that’s also quiet and, above all, private. The threat here is not so much to VR as from its cousin, AR. I really believe that physical screens are going the way of the dodo in the future, but it will be AR that makes that happen, not VR.
There is, however, one area of VR that is set to be its most popular segment. Billions have been spent on it already and things haven’t even started yet. Social VR might just be the killer app that VR needs to become mainstream.
Just like social media, social VR includes VR experiences that are explicitly designed to facilitate people interacting with each other. This could be virtual chat rooms, parties, community games, and anything else you can think of.
Think of social media such as Facebook and Twitter, and chat programs like Hangouts or Skype – the same uses, but translated into a VR environment. The industry believes strongly in the social appeal of VR versus chatting via text or video. So much so that social media giant Facebook bought out Oculus for between two and three billion dollars, depending on who you ask. That’s a huge endorsement of the technology as a new way for people to get connected. But why do they care so much?
We all need somebody to love. That’s the way Jefferson Airplane put it, back in the day, and it’s true. Human beings all have a need for social interaction. It’s our monkey troupe ancestry that drives that need for contact and interaction. We like to stick our noses in each other’s business. People are always dynamically forming groups and building intricate social hierarchies.
We communicate on multiple levels at the same time. Spoken language, as complex as it is, only makes up a fraction of the total communication bandwidth of in-person communication. Body language, vocal tone, facial expressions, and other physical aspects of communication all intersect to generate the total channel of information we send over to the other person in the conversation.
In the early days of telecommunication, that was stripped down to the bare minimum. Telegrams were short, terse text messages.
Tell John STOP
Grandma sick STOP
Come quick STOP
It’s not exactly a flowery speech, but being able to send those (expensive!) words clear across the country in minutes was a revolution. Then came telephones, where you could hear the other person’s voice; hear emotion and feel much closer to them.
Digital communication over the internet also started off as text-based. With internet relay chat (IRC) we could talk to anyone in the world, as long as it was in text. The popularity of text chat also exposed the limits of text-only communication.
It’s very difficult to convey sarcasm in text, for example. So people have invented new text elements that do the job. For example, you may follow a sarcastic comment with “/s” so that everyone knows you aren’t serious. It’s also why we use emojis, those little happy or sad faces. They started out as emoticons, cheeky repurposings of ASCII symbols, but now with modern smartphones we have elaborate emojis that can even be animated.
Video chat programs like Skype have brought the face and voice together. It’s much more like really sitting across from the person you are speaking to. Just 10 years ago this was a Sci-Fi concept. I still remember how blown away I was by my first video call. Now I Skype with people on a daily basis like it’s nothing.
None of these communication methods really make it feel as if the other person is sharing the same space with you. A feeling of presence is what’s been missing from our telecommunication methods from the very beginning. No matter how sharp the picture, how clear the audio, or how short the latency, it always feels like there is a large distance between you and everyone else in the conversation. When you hang out on Facebook with your pals, it doesn’t feel as good as sharing a couch and watching a movie together. In social VR applications your friends are there. They may be represented by avatars, but if done well it can feel like you really are hanging out together.
In a world where people experience the paradox of being connected, but feel socially isolated, social VR is set to explode in popularity.
That’s the theory, but how long before we can actually try it? The good news is if you have a smartphone and a mobile HMD you can try social VR right now. Many of these programs don’t even require the HMD. It’s a concession to the many mobile users who may not have VR HMDs yet. You can still traverse the virtual social scene using just your phone.
Here are some of the most prominent social VR apps that are either coming soon in beta or already available for play. In general these are usually free (ad-supported) and, as long as you have a fairly modern phone, should run pretty well.
Facebook Spaces is a key part of the plan that started with the purchase of Oculus. Zuckerberg and friends clearly see the Facebookites of the future lounging around its Spaces virtual world. The other part of the puzzle is their Oculus Go headset – a standalone VR headset announced mid-2017 that costs only $200. It doesn’t need a phone or computer, but just exists as a standalone mobile VR solution. That gives Facebook a platform both in the hardware and software sense.
Spaces lets those without VR gear participate with those who are wearing a headset. You can start a live broadcast in VR and others will be able to watch it on tablets or phones. Conversely, you can receive Facebook Messenger video calls in VR, with the feed appearing as a floating screen. Spaces also lets people who are hanging out together in VR call up and discuss the media on their Facebook account. So you could have a chat about the photos and videos that have been accumulating over the years.
In a stroke of pure genius, you can also create new pictures with selfies of your avatars doing various activities in the VR world. Spaces also includes activities where you can create VR media – you can collaboratively draw 3D objects in the shared space and generally mess about with your buddies.
One thing that has to be noted about Facebook Spaces is that it currently only runs on the Oculus Rift and requires the Touch Controllers. So this is not something everyone can just jump into right now.
AltSpace VR is probably one of the best-known social VR applications out there. The company went through a rough financial patch in 2017 and went as far as announcing that it would be closing down, but in the nick of time good old Microsoft swooped in with its massive vault of cash and bought them. It’s a good thing too, because it would be a real pity to lose one of the best-put-together social VR apps today.
AltSpace is available on quite a few HMDs, although, sadly, the Google Cardboard is not one of them. If you have a Google Daydream, HTC Vive, Oculus Rift, or Gear VR you can hop right into the AltSpace world right now. If you don’t have any of those, then you can still try it out using “2D mode” on your phone. AltSpace is an intuitive and comfortable experience. The spaces are nice to be in and it’s easy to just walk around and chat with people who are hanging around. AltSpace is at the top of my list as a recommendation for people to try.
Oculus Rooms is another Gear VR application and is much as you’d expect, based on the name. It provides a social space where you can hang out with other people. It’s possible to do all sorts of shared activities, such as watching films together or playing games. It also acts as a sort of lobby for other multiplayer Gear VR apps, where you can decide what to play together and then launch the software from there.
There’s a fair bit of personalization in Rooms. You can decorate and style your room to your satisfaction before inviting anyone over. That’s quite a cool feature. Apps such as AltSpace just let you hang out in a premade space, so it always feels public. Rooms is designed in such a way that it feels like inviting people to your home. For that reason I think it is quite special. It also helps that it is one of the best looking social VR apps on mobile!
Before all this social VR malarkey, there was Second Life. If you have somehow never heard of Second Life, it’s a shared 3D virtual world that people visit using their plain old computers. Second Life is still running to this very day and has become popular as a virtual space for everything from hanging out with friends to launching products and giving lectures.
The times are moving on, though, and the people behind Second Life, Linden Lab, are ready for a modern stab at social virtual reality. That’s coming in the form of “Sansar”, a new high-end social VR platform for which Linden has huge plans. Sansar is about community-driven social VR creations. It’s meant to be a virtual space where Linden provides the tools and the public makes stuff with them. They will let you import assets from most of the major 3D modeling packages and straight up sell assets in an online store. Sansar is also going to provide a way for people to monetize their content, much in the same way that people earn a living making content for YouTube.
Social VR looks like it really could be the next big thing that makes VR a permanent fixture in daily life, but that doesn’t mean it will all be good news. “Normal” social media has already been incredibly disruptive to society. Facebook has been implicated in certain types of depression, as one example. It turns out that people are very careful to curate how they portray themselves on social media. They only show the good things that are happening. Consequently it looks as if everyone in your social circle is living a much better life than you. After all, YOU know about all the not-so-nice things that happen in your life. They happen to everyone, but Facebook facilitates the creation of false realities. The end result is that people feel worse about themselves.
There’s also the specter of cyber bullying. Driven by anonymity and the fact that a significant percentage of the population is always going to be made up of, well, douchebags, there’s no reason to think that we won’t see it in VR. People can already feel severely victimized just with text and pictures; imagine what will happen when you can also feel the presence of a cyber bully. It’s not as if social media companies like Facebook are not aware of this issue, but I still haven’t really seen how they’ll address it effectively.
Apart from that, we might also see people develop an unhealthy obsession with social VR. It happens all the time, whether it’s TV, video games, or even books. Some will take it to unhealthy extremes.
Let’s not get mired in negativity, however. The internet might be the poster-child for technological loneliness, but through VR we might be able to bring back a true sense of community.
I remember the times I spent on fan forums when I was a teen. We organized real life meetups whenever we could, but even just a text-based forum was already an amazing way to find your people and spend time with them. A presence-inducing VR system would have transformed that experience and I, for one, am glad that we are the generation that gets to experience this technology first.
For most people, virtual reality is mainly a visual idea. It’s a way to beam realistic pictures to your eyes so that you feel like you’re really inside the virtual environment. However, as I’ve outlined in various other articles on this site, VR is actually an intricate dance of various factors that work together to craft the illusion of virtual presence.
One of the key factors in creating a truly convincing and immersive VR experience is having audio that syncs up with what your brain expects to hear. It may not seem like a big deal, but you’ve surely noticed that the real world does not sound like a movie soundtrack played over a pair of stereo headphones. Your brain uses the slight time delay between a sound reaching one ear and then the other to figure out, accurately and almost instantly, where that sound is relative to you. That, along with other cues, is how you can tell if a sound is coming from above, behind, or below you.
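To get a feel for just how small this timing cue is, here’s a rough back-of-the-envelope sketch in Python. The ear spacing is an assumed typical value, and the simple sine model is my own illustration, not how any particular audio engine actually computes it:

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air at room temperature
EAR_DISTANCE = 0.21      # metres between the ears (an assumed typical value)

def interaural_time_difference(azimuth_degrees):
    """Rough arrival-time difference between the two ears, in seconds.
    Azimuth 0 means straight ahead, 90 means directly to one side.
    This simple sine model ignores the head's actual shape."""
    azimuth = math.radians(azimuth_degrees)
    return (EAR_DISTANCE / SPEED_OF_SOUND) * math.sin(azimuth)

# A sound directly to one side arrives only ~0.6 ms earlier at the near ear,
# yet the brain picks up on that difference effortlessly.
delay_ms = interaural_time_difference(90) * 1000
```

That sub-millisecond difference is the raw material your auditory system works with for left-right localization.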
Of course, your brain has access to a bunch of other sources of information too, such as how the sound changes as you move your head, and visual cues as well. Most of this auditory processing happens on a subconscious level so we aren’t aware of it, but just about anyone can tell that the sound we typically get from a recording played back on a pair of headphones sounds “fake” compared to real-world audio.
One approach that has been put to use for truly simulating where a sound is coming from is the multi-speaker surround-sound setup. If you are surrounded by speakers then it’s relatively easy to make it sound like, for example, there’s a bird chirping above and behind your right-hand side. Unfortunately, multi-speaker audio systems are expensive and complex. Home users typically have a system with five, or perhaps seven, speakers. It’s also not terribly practical for VR use, since you’ll also hear all the other sounds that are not part of the VR simulation.
Still, when you think about it, you only have two “microphones” – in the form of your ears. So surely there must be a way to simulate the positional audio signal that your brain can interpret and perceive as a real sound coming from a real location in the space around you. Audio engineers have been working on this problem for a long time and it has given rise to a special recording technique known as “binaural” recording.
Binaural recording is a recording meant for two ears. That may seem like a sort of dumb way to put it, since we use our two ears to listen to everything, after all. What I mean, however, is that a binaural recording is meant to simulate what two ears would have heard had they been present at the time of the recording.
In other words, this is not simply two channels of audio, which is all stereo is. When a stereo track is mixed, all the recording engineer really has control over is how loud and prominent each element of the multitrack recording is in each channel. If a sound is perfectly balanced then you should hear it sort of in the “middle” of your head. The more the sound is pushed to the left or right speaker, in terms of volume balance, the more you’ll hear it on your left or right side. Traditional stereo mixing and recording can’t really create the right sound to convince you of a sound’s origin in a spatial sense.
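To make the volume-balance trick concrete, here’s a minimal sketch of the “constant power” pan law commonly used in mixing (the function name and range mapping are my own illustration):

```python
import math

def constant_power_pan(sample, position):
    """Place a mono sample in the stereo field purely by volume balance.
    position runs from -1.0 (hard left) to +1.0 (hard right).
    Note this is all plain stereo can do: left-right placement, nothing more."""
    angle = (position + 1.0) * math.pi / 4.0  # map [-1, 1] onto [0, pi/2]
    return sample * math.cos(angle), sample * math.sin(angle)

left, right = constant_power_pan(1.0, 0.0)   # dead centre: equal gain per side
hard_left = constant_power_pan(1.0, -1.0)    # all signal in the left channel
```

Notice there is no timing or filtering involved at all, only relative loudness, which is exactly why stereo alone can’t put a sound above or behind you.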
That’s mainly because what you hear coming into your ears is actually a complex acoustic product. For example, you don’t hear sound just as it enters your ear holes. Instead you also hear sound transmitted through your skull, where it all mixes together. This is one of the reasons why we often don’t recognize ourselves on a recording. We hear our own voices as a mix of internal and external sound that’s not reproduced on a recording at all.
The shape of our ears also plays an important role. The ear shell is a complex acoustic funnel. If you’ve ever been in an auditorium or an opera house you may have noticed the complex acoustic panels they use to reflect and redirect sound. All the curves and angles of your outer ear do a similar job. This means that your brain expects to hear sounds that have been put through this part of the process when figuring out where a sound is coming from. The problem is that when you are wearing headphones these natural acoustics of your hearing system are partly bypassed. The sound is pumped directly into your ear canal, where you hear it as an undiluted stream of sound.
Traditional binaural recording aims to recreate the acoustics of the head and ears that your inner ear and brain expect, by literally using a dummy head with microphones where your ears would be. This way the distance between the “ears” is correct and the model ear shells do similar things to the sound as your own ears would.
So if we wanted to make it sound as if you were really standing in the middle of a live band, or a nature scene, you just have to set up the real-world situation and then plonk your recording dummy head in the spot where you want the listener to feel they are when they play back the recording.
There’s just one catch – the positions of the audio sources are all fixed relative to your dummy head recording system. So that’s not much use for us in VR, because we want to move our heads around and have the sound stay where it’s supposed to be, relative to our heads.
A virtual reality world is dynamic by nature. When it comes to the audio that you hear, this means you’re listening not to a single recording but to many small recordings, and sometimes even completely generated (i.e. “made up”) sounds.
Let’s say there’s a virtual bee flying around your head. Unless the bee takes a pre-recorded path around your head, you’ll need some way of making the positional audio work, without having the luxury of perfectly controlling the position of your head as you listen. In other words, you need to generate or modify the sound in such a way that things in the VR world not only sound as if they are where they seem to be, but that their position in audio space changes correctly as the position of your virtual ears changes.
Different VR experience providers each have their own “secret sauce” algorithm and software development kit aimed at convincing us that VR sounds really do come from where they appear to. Regardless of how they achieve the end result, the basics are bound to be the same. These systems have to take three things into account when generating true spatial sound.
First, they have to simulate the difference in the time the sound takes to reach one ear versus the other. So if a virtual pin drops in the room, the computer has to work out exactly when the sound waves would reach each part of the room. If your ear occupies that part of space, it should hear the correct sound at the right time.
Secondly, they have to take into account that each ear should hear the sound at a slightly different volume. After all, the ear facing the sound gets more sound energy than the other.
The last part of the simulation has to reproduce the “spectral filtering” performed by the outer ear. All this refers to is how the shape of the ear highlights or eliminates parts of the sound. That way the audio sounds the way the brain expects it to after passing into the ear canal.
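Putting the first two cues together, a toy positional-audio renderer might look something like the sketch below. This is pure Python with assumed values for head width and sample rate; real engines also perform the spectral-filtering step using measured head-related transfer functions, plus room reflections:

```python
import math

SAMPLE_RATE = 44100       # samples per second (assumed)
SPEED_OF_SOUND = 343.0    # m/s
EAR_DISTANCE = 0.21       # metres, an assumed typical head width

def spatialize(mono, azimuth_degrees):
    """Crude positional-audio sketch: apply the interaural time
    difference (cue 1) and a level difference (cue 2) to a mono signal.
    Returns (left_channel, right_channel) as lists of samples."""
    az = math.radians(azimuth_degrees)
    # Cue 1: delay the far ear by the extra travel time, in whole samples.
    itd = (EAR_DISTANCE / SPEED_OF_SOUND) * math.sin(abs(az))
    delay = int(round(itd * SAMPLE_RATE))
    # Cue 2: attenuate the far ear (0.6 is an arbitrary illustrative gain).
    near = [s * 1.0 for s in mono] + [0.0] * delay
    far = [0.0] * delay + [s * 0.6 for s in mono]
    if azimuth_degrees >= 0:      # source on the right: right ear is near
        return far, near          # (left, right)
    return near, far

# A short click placed hard right: the right channel fires immediately,
# the left channel is delayed and quieter.
left, right = spatialize([1.0, 0.5, 0.25], 90)
```

The third cue, spectral filtering, would slot in as a per-ear filter over these channels; that is where the per-person HRTF data comes in.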
So in the end a spatial audio system is a simulator that takes recorded sound and then simulates how that audio would travel through a room, interact with various surfaces, and then enter your virtual ears. It’s not hard to imagine what such a system has to do, but it’s a technical act of genius. We pay a lot of attention to the (deservedly) hard work that has gone on in the graphics department, but I think the revolution in true spatial audio deserves just as much praise. All this research and these interlocking technological solutions come together in an intricate dance just so you can enjoy an aural experience. Hats off to them, I say.
You’ve probably heard the term “mixed reality” come up in conversation about both VR and AR. As you may know, mixed reality refers to a modern, more sophisticated form of augmented reality that combines a suite of environmental sensors, wearable computer hardware, and sophisticated software. A mixed reality system can include full VR, full AR, or other weird in-between blends, such as a sort of reverse AR where real-world objects are displayed in the VR world.
Microsoft, the same people who make Windows and made Bill Gates incredibly rich, are at the forefront of this new revolution in augmented reality. Their HoloLens headset made waves when it was first demoed to the public and is still one of the most impressive mixed reality devices you can buy. Of course, to actually own a HoloLens you’ll have to part with a small fortune, since it’s mainly aimed at enterprise clients at this point.
Microsoft has never really been a company that wants to corner the hardware market. They’re all about creating dominant platforms and then inviting everyone else in. One of the main reasons that Windows is such a success is that Microsoft never tried to keep it away from everyone else. Apple didn’t want their operating system running on anything but an Apple, but Windows ran on everything, which meant more people wanted to make software for it. It wasn’t necessarily the best operating system, but it was the most useful one thanks to how widespread it became.
Windows Mixed Reality is Microsoft trying to create the same sort of open platform, but this time for the purpose of creating a standard framework for developers to use for their software products. Microsoft provides guidelines on hardware specifications as well as software that helps to interpret a lot of the complicated aspects of mixed reality. This allows developers to concentrate on creating their unique content instead of having to reinvent the mixed reality wheel every time.
This sort of standardization is both necessary and common in the tech world. That doesn’t mean that Windows Mixed Reality will be the ultimate winner and become the main standard for the technology, but it makes sense for Microsoft to be the one taking the lead here. Apple is on a similar path with their ARKit developer framework; AR will be built into their future phones and other devices from the start.
Microsoft has won at this game before, though. Back in the early days of 3D accelerators there were a number of competing graphics APIs that standardized programming for graphics hardware from different manufacturers. In the bad old days you had to write a piece of software for a specific piece of hardware, but with an API you and the hardware makers just make sure you comply with the standard and, voilà! – everything works with everything, at least in theory.
Microsoft’s DirectX won out against OpenGL and Glide in the first big standards war in the graphics world. Today DirectX is still dominant, but it’s being challenged again by new standards and it’s not a foregone conclusion that Microsoft will win again.
Anyway, that’s getting off the topic at hand. So let’s look at the physical manifestation of Windows Mixed Reality.
So far there have been two third-party manufacturers that are bringing Windows Mixed Reality products to market. Acer is releasing a Developer Edition headset, and Lenovo (which bought out IBM’s laptop division) is releasing the Lenovo Explorer, which can also be bought with motion controllers.
Both of these headsets share some common features. The most obvious feature is the pair of front-facing cameras. These lenses are set very widely apart, giving the whole thing a rather comical look. You’ll also notice that both have a sort of “flip up” front, which means you can easily switch between the virtual world and the real one without taking the whole thing off – something that Sony has already done with its PSVR.
Since they are both built according to the same general guidelines, there’s not that much to differentiate the headsets, but the Acer unit is the one that developers will be building their experiences on. So let’s look at its specifications as a sort of base standard for these headsets.
The Acer HMD has two very high-resolution LCD screens. Each LCD has 1440x1440 pixels, which you’ll notice makes them perfectly square. That’s actually a good approach, since single panel VR HMDs get split into two squares anyway and a lot of the dual 1080p models end up wasting some of that real-estate. This dual-panel setup provides a 95-degree horizontal field of view. This is roughly in line with premium VR headsets and certainly a lot better than the HoloLens, which is based on a fancy but still immature projection system. Just as with the Oculus, the refresh rate of the panels in the Acer HMD is 90 Hz and it is tethered with a four meter cable.
One major difference between this headset and something like the Oculus or Vive is the fact that it uses “inside-out” motion tracking. The cameras that track motion to provide the full six degrees of freedom are the ones mounted on the headset itself. This is actually an amazing development, because it means that room-scale VR is possible without the need for external sensors.
These Windows Mixed Reality headsets are set to sell at around $350 apiece, which is an incredibly low starting price. That’s almost half as much as the launch price of the consumer Oculus Rift. It’s no surprise Oculus has been slashing the price of its HMD recently.
These headsets are quality devices with cutting-edge technology and yet they are being sold for an absolute song. Honestly, these HMDs went on my shopping list the very moment they were announced.
When the Oculus and Vive originally launched, the minimum computer requirements were quite shocking to a lot of people. Since then the requirements have softened a bit and the GPU technology has become a lot cheaper to boot. The minimum requirements for these Windows headsets are basically the same as for other premium VR, but in a very smart move there’s a second, lower set of requirements for productivity-type VR and AR.
If you have a computer with an integrated GPU equivalent to the Intel HD Graphics 620, the Windows HMD will clock itself down to 60Hz but still work for Windows itself and the more basic applications. That’s a killer feature for a product that’s supposed to be the real deal mainstream VR solution.
It’s very early days for this new platform, but it looks as if Microsoft has thought of everything that’s holding mainstream AR, VR, and MR back. The only real criticism I have so far, based on the tech demos I’ve seen, is that the inside-out tracking can be a little janky. But that’s with beta developer demos, so we’ll see if it’s an inherent issue.
Other than that, I think they are onto a winner here.
They say that the eyes are the windows to the soul. I don’t know about that, but it is true that our eyes are an important part of how we express ourselves and communicate. For one thing, we can tell where another human being is looking. It seems mundane, but humans have an incredible ability to determine the gaze of another person.
This is not something that many other animals can do. For example, dogs can follow the gaze of a human being, but cats can’t. Why? Because we’ve selectively bred dogs over millennia to care about what we are looking at. Cats have had no need to develop that ability, so Mittens literally could not care less about where you are looking.
Our gaze represents an entirely separate channel of communication. We are sending out and receiving valuable information on this channel all the time. So why not make use of it in a digital context?
That’s exactly what scientists and engineers have been doing for ages. They have used specialized camera sensors to measure the position of the eye and figure out where it is looking. The most common type of tracking is known as “pupil center corneal reflection”. Basically, near-infrared light is shined at the eye and then a special high-resolution camera geared at picking up that wavelength of light sees the reflection off your cornea and calculates the position of your pupil.
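As a rough illustration of that last step, here’s a sketch of the kind of calibration-based mapping such a system might use: fit a small polynomial from the pupil-minus-glint vector to on-screen coordinates using least squares. The feature set and the calibration procedure here are simplified assumptions, not any vendor’s actual algorithm.

```python
import numpy as np

def fit_gaze_mapping(pupil_glint_vecs, screen_points):
    """Fit a second-order polynomial mapping from the pupil-minus-glint
    vector (x, y) to on-screen gaze coordinates via least squares.
    Calibration data would come from the user fixating known targets."""
    x, y = pupil_glint_vecs[:, 0], pupil_glint_vecs[:, 1]
    # One row of features per calibration sample: [1, x, y, x*y, x^2, y^2]
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    coeffs, *_ = np.linalg.lstsq(A, screen_points, rcond=None)
    return coeffs

def estimate_gaze(coeffs, vec):
    """Map a new pupil-minus-glint vector to a screen position."""
    x, y = vec
    features = np.array([1.0, x, y, x * y, x**2, y**2])
    return features @ coeffs
```

After a short calibration routine, every new camera frame just needs the pupil and glint positions extracted and pushed through `estimate_gaze` to get a gaze point.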
There are lots of different applications for this technology. For example, for people who design computer interfaces it can be incredibly useful to know where on-screen the user is looking. Where is the eye naturally drawn? Does the user take a long time to deal with something visually? There’s a good chance that just about every major app or operating system that you’ve used has benefited from eye-tracking as part of its design.
Eye-tracking technology has also been a major tool for market researchers, psychologists, and medical researchers. It’s helped us figure out mental processes, attitudes, and more. So you might be asking, what relevance does that have to VR? You’ll soon see.
When you currently use a VR HMD such as the Oculus or a Gear VR, your “gaze” is essentially the center of the viewport. Some applications use this to interact with the world. You can rest the reticle on a particular object and then make it do something by clicking a button or simply waiting.
The problem with this is that just because the viewport is centered on a spot doesn’t mean that you are actually looking at it. Our eyes and head move independently of each other, so the way that current VR systems determine gaze almost makes it feel as if the user is wearing a set of horse blinkers.
It’s not just about accuracy either. Imagine what a VR developer could do if they knew exactly what the user was looking at. For one thing, other characters in the VR world could be programmed to respond to your gaze in a more natural way. Are you looking past them at an object in the distance? They might turn to follow your gaze. Stare at another character’s eyes for too long and you might make them uncomfortable and provoke them.
Apart from making the virtual world react in a more lifelike way, eye tracking can actually make the virtual world look better. You see, normally we render everything on the screen at full detail, since we don’t know where your eye will happen to fall. If we can track your eye movement with speed and accuracy then we can deliver rendering resources on the fly to sharpen up the parts of the scene that are in focus. Alternatively, you could use this to keep the apparent detail the same as we’re used to today, but reduce the horsepower you need to drive it. Either way, it’s a win-win situation.
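To make that idea concrete, here’s a toy sketch of how a renderer might pick a coarser shading rate the further a pixel sits from the tracked gaze point. The thresholds and the pixels-per-degree figure are illustrative assumptions, not values from any real headset or SDK.

```python
import math

def shading_rate(pixel_angle_deg):
    """Pick a shading rate based on angular distance from the gaze
    point. The region boundaries here are made-up example values."""
    if pixel_angle_deg < 5.0:      # foveal region: full detail
        return 1                   # shade every pixel
    elif pixel_angle_deg < 20.0:   # near periphery
        return 2                   # shade every 2x2 block
    else:                          # far periphery
        return 4                   # shade every 4x4 block

def angle_from_gaze(px, py, gx, gy, pixels_per_degree=15.0):
    """Approximate angular distance of a pixel from the gaze point,
    assuming a fixed (hypothetical) pixels-per-degree density."""
    return math.hypot(px - gx, py - gy) / pixels_per_degree
```

A real foveated renderer does this on the GPU with hardware support, but the principle is the same: full resolution only where the fovea lands, cheap approximation everywhere else.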
So are there any VR HMDs in the real world that can actually do this? It turns out that the answer is “yes”. A company called Fove has been the pioneer in this area. Their HMD has already been out in the wild for a while as I write this. It’s a pretty good HMD overall actually, with pretty decent specifications. A little bulkier than some, but it’s not far away from HMDs we already know in terms of size and weight. The eye tracking is apparently pretty good, and for a first-generation product it’s an amazing start.
The problem is that most people who are going to get a VR headset already have one without eye-tracking built in. So it’s a good thing that veteran eye-tracking company Tobii is planning to provide custom modification to just about any VR headset out there. Tobii has been the world-leading name in eye tracking for as long as I can remember, and is set to be a strong challenger to Fove.
The Tobii site detailing its technology as applied to VR lays out pretty much the same advantages I discussed above, including “foveated rendering” – rendering sharp graphics only in the spots where the fovea is focused. They also say it will make for much better 3D effects.
There’s no doubt that we should all be keeping our eye on VR eye-tracking tech.
Without a doubt, Valve’s Steam digital game store is the king of PC gaming platforms. For many people, Steam and PC gaming are one and the same, with competitors like Good Old Games, Origin, and uPlay fighting over the scraps.
Valve is also the company behind the HTC Vive platform, so Steam is built from the ground up to work with that headset as well as supporting the Oculus Rift. While you might think that the Oculus store is the best place to get great VR experiences, Steam is where the best VR games live because of its wide user base and relatively easy access for developers.
If you have a premium VR headset and you don’t have a Steam account, then you’re missing out big time. VR games created by independent studios, in particular, are pushing innovation in the VR space. The problem is that Steam is also absolutely filled with junk titles, and sifting through the mud to find the gems can be a taxing experience. So here’s my personal list of the best VR games on Steam.
Have you watched the movie Gravity? Did you also find the stomach-churning perpetual fall through cold and deadly outer space terrifying? Then why not play a VR game that essentially replicates that experience, except this time it’s YOU cooped up in the space suit.
ADR1FT is a survival game set in space. The game takes place in zero-gravity and you can move in any direction – there is no “up” or “down” in space, after all. To survive you need to keep breathing, which means finding a steady supply of oxygen. Run out of air before you find a new tank and it’s bye-bye. Standing between you and that wonderful air supply are a series of survival puzzles. So yeah, it’s basically an unlicensed game version of Gravity.
Reviews of the game have been admittedly lukewarm, but from my point of view this is a beautiful, unique, and thrilling VR experience. The game is also pretty cheap even when it’s not on special. The game was developed for the Rift, but there’s a special Vive version that makes use of its unique motion controls. There’s a PSVR version in the works too, but as usual PC is the place to get the best experience.
One of the first 3D space games I ever played was called Elite. I played it on an ancient 80386 machine that I received as a hand-me-down from my dad’s friend. The game was originally released in 1984 and is one of the first examples of procedural generation of game content, with every planet a unique mix of parameters.
Many games have aped the formula that Elite started all those years ago. Today we have Elite: Dangerous – a massive online multiplayer game where it’s you, your ship, and the whole galaxy. Elite: Dangerous is a killer app for VR and was designed from the ground up to be an epic cockpit-based experience. Since this is an online-only game, it’s constantly being updated; this is one of the best examples of a VR game that takes you away from reality and really makes you feel as if you’re there. This is the epic experience you dream of when you buy a VR setup.
At first glance, House of the Dying Sun might seem very similar to Elite: Dangerous. This game is, however, very short in comparison, and a tight and contained experience. This is no grand online adventure; it’s a story told through intense, tactical dogfights in the depths of space. While there may not be much meat on its bones, House of the Dying Sun is an incredible space combat sim and you’ll grit your teeth as frequently as you’ll die in the virtual world.
In this game you aren’t the hero but rather a member of an elite corps of personal guards for a brutal dictator, and ordered to hunt down enemies of the empire at any cost. Playing this game on my Oculus Rift, I loved the sense of weight and solidity my ship had. I could feel the claustrophobia of my cockpit glass and the awe of gigantic planets, stars, and asteroids. This is one to get, even if it’s short-lived.
Superhot is an incredibly unique game that takes the idea of “bullet time” to an extreme outcome. In Superhot, time does not move until you do. This is great, but as you stand there in the middle of the room, several assailants are just waiting for you to unpause things so that they can murder you in one of several imaginative ways. It’s an impossible and deadly situation, but you have all the time in the world to plan your solution for dealing with them. Think it through carefully, because once the clock starts ticking again there’s no going back – you have to execute your death-defying move perfectly, since it only takes one shot or hit to end your game and boot you back to the start of the level again. Superhot is one of the most creative and exhilarating uses of VR I’ve seen so far.
What could be better after a long day of slaving away in your cubicle than coming home, strapping on your VR system, and simulating your job all over again? Well, that’s not really what Job Simulator is about. It’s actually a satirical game about a world where robots have taken all the jobs and people don’t actually have to work. So you get to visit a job “museum” and find out what it was like to work back in the old days. Except, the ones who made this museum got it all wrong, leading to hilarious “jobs” that never actually existed.
You can take on a number of “jobs” such as store clerk or auto mechanic. Job Simulator is, in the end, really just a very expanded tech demo for the touch controllers. The good news is that it’s a fun and hilarious tech demo that perfectly demonstrates how to do VR motion controls the right way. It’s well worth the small asking price to add this to your library.
Before I even bought my Oculus Rift I have to admit to spending more time than could be healthy playing Euro Truck Simulator 2. I have no idea why this game is so good. It’s literally just driving a truck full of cargo from point A to point B. That being said, the trucks are beautifully detailed and the driving experience is Zen-like.
Thousands and thousands of other fans agree; this game is one of the most positively-reviewed on all of Steam. The VR functionality of the game is still in beta, but it’s already thought of as one of the best VR driving simulators you can buy on Steam. It’s not a racing game, but a unique way to experience what it’s like to be a trucker carting your precious cargo around. You have to obey the traffic rules and, depending on the difficulty level, some maneuvers such as reversing your trailer into a loading bay can be brutally difficult.
Everyone should play Euro Truck Simulator.
Doom is one of the most iconic video games of all time. The original game, which came out decades ago, established the first-person 3D shooter as the premier video game genre. Today’s games such as Call of Duty and Halo all owe a debt to this pioneering title.
The legendary studio id Software tried to reboot and revitalize the franchise some years ago with Doom 3, but although that game was revolutionary in terms of graphics, it failed to capture the essence of what made Doom great in the first place. It went from a pulse-pounding, visceral “monster closet” shooter to a slow-paced survival horror game. Not a good look for the franchise.
In 2016, id finally atoned for their sins with the release of Doom 2016. This finally brought the Doom formula into the modern age and is an adrenaline-fueled, heavy metal, demon-filled rollercoaster that rightfully received heaps of praise.
DOOM VFR is a spinoff of Doom 2016 and reimagines that experience for virtual reality. It is in fact a sequel where you play a character other than the Marine from Doom 2016. You’re an AI that can possess robots, tasked with cleaning up the remaining demons and getting the station back on track. The game has been designed from the ground up for VR, so there’s no run-and-gun. Instead you can instantly teleport from one spot to the next – neatly solving the movement issue and making sense according to the in-game story.
I’ve only seen some hands-on previews and videos of the game, but based on those previews it’s already clear this is going to be something owners of the HTC Vive will want to have. It only works with the HTC hardware and was designed for that system's unique room-scale tracking. There’s no word on an Oculus version at this point.
Oh boy – is this a dream come true for me. Since I was a little kid I wanted nothing more than to sit in a chair on the bridge of the Starship Enterprise. Yes, I’m a Trekker, and thanks to virtual reality there’s finally a way to experience what it’s like to sit in that big chair.
Unfortunately for older Trek fans, this game is based on the rebooted nuTrek, which has been a little divisive in the fandom, to say the least. So the particular bridge that you get to visit is that of the lens-flare-heavy JJ Abrams variety, although the lens flares themselves don’t make an appearance in the game. There are four bridge positions that can be taken by four players. If you’re the captain then you get to relay orders to the other three players.
It may not be the Enterprise of our youths, but it’s currently the only official VR Star Trek game out there. There have been other non-VR games such as Artemis Bridge Simulator and Spaceteam that have tried to replicate it, but this is for the time being the real deal. If you want the most authentic Star Trek VR experience, this is it.
The horror genre is one that has taken to the VR medium with great gusto. There are plenty of VR horror apps on various platforms, most of which are short-lived and end up relying on cheap carnival thrills.
The Resident Evil franchise has been a horror stalwart for decades and it’s a brand that we all know well. Resident Evil 7 VR is also the very first AAA horror game that proves VR is a dead-serious platform for video games. If you have the money for a constant supply of fresh underwear, this is a game that has to be a part of your VR library.
OK, iRacing is not really a game. It’s a pure simulator. Not only that, but you can’t buy it as a one-time purchase; you need to pay a monthly subscription. It’s hard as nails and it’s not very consumer-friendly with its technical details, and yet there’s a hardcore worldwide following who wouldn’t trade it for anything else.
iRacing aims to recreate real professional racing as accurately as possible, from the physics of it to how it feels in real life. The game supports a wide array of racing peripherals, including some truly bonkers hydraulic rigs that will shake you in your seat as you corner.
The addition of VR support was an inevitability, and in many ways it’s how iRacing should always have been experienced. iRacing is, when taken as a whole, a ludicrously-expensive hobby to have, although it’s much cheaper (and safer!) than actually racing. If you eat, sleep, and breathe racing sims and have deep pockets, then this is the ultimate state of the art.
Like any new technology that’s getting hype, things are changing on a daily basis when it comes to VR. A lot of people who have never given VR a second thought are now chomping at the bit to get their hands on VR hardware.
Don’t think the business world hasn’t noticed, either. The market is being flooded by products, often from less-than-stellar sources, capitalizing on the VR craze. It’s going to take a while for the market to stabilize and for sanity to prevail. Just think back to when the first iPhone was released and the smartphone revolution really got going. Millions of people who couldn’t afford Apple’s premium device went out and bought bad copies of it. Today the smartphone market is pretty mature and almost any mid- to high-end phone isn’t going to disappoint.
VR has a long way to go before it reaches that point of maturity, but in the meantime you still want in on the action. So here I’m going to lay out the VR market in the simplest way I can, so that you can make sure you’re getting exactly what you expect when the time comes to splash the cash.
The first thing you have to accept is that good VR is expensive. When we’re talking about the best consumer experiences, the stuff that people are raving about, then the amounts are in the thousands of dollars. Just the high-end headsets are liable to set you back almost a thousand bucks. The good news is that these prices are going down constantly and the lower-end hardware of tomorrow is going to be as fast as the top-end stuff of today.
The bottom line is that if a product seems a lot cheaper than it should be, it’s a good idea to do your research before making a purchase. There are three main classes of VR product and I’m going to talk about each of them in turn next.
In terms of sheer numbers this is the most common VR product today. Basically, it's merely a plastic casing you slip your phone into. Inside the case there’s a divider and two lenses; one for each eye.
You run VR apps on the phone, which then provides a split screen image. All the VR graphics processing and motion tracking is done using your phone’s internals. Most of these products are, literally, just a shell. This means that your phone’s specifications and screen resolution are major factors in the quality of the VR experience. That doesn’t mean the nature of the case is irrelevant.
For one thing, the optics matter quite a bit. Cheap plastic lenses might distort the images or scratch up badly. Try to get a mobile headset case that uses lenses that are treated against smudges and scratches. Glass is still the best material, in my opinion, but there are excellent lenses made from other synthetic materials too.
One of the most important features you need to look out for is something called “field of view”. This applies to all VR headsets, not just mobile ones.
Simply put, your field of view is how much of a scene is visible to your eyes. Human beings have a horizontal field of view about 210 degrees wide. The middle 114 degrees are where you can see in 3D and then each eye has some more vision to either side that’s “peripheral” vision. The wider the field of view is for a given VR headset, the less you feel like you’re wearing goggles and the more realistic and immersive things are.
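As a rough illustration, a simple-magnifier model relates a headset’s field of view to the screen width and the lens focal length. The numbers below are illustrative assumptions; real headsets use distortion-correcting lens designs that this ignores.

```python
import math

def horizontal_fov_deg(screen_width_m, focal_length_m):
    """Simple-magnifier approximation: with the screen sitting at the
    lens's focal plane, the field of view is set by the screen width
    and the focal length (ignores distortion and eye relief)."""
    return math.degrees(2.0 * math.atan(screen_width_m / (2.0 * focal_length_m)))

# An illustrative 0.12 m wide phone screen behind lenses with a
# 0.04 m focal length gives a bit over 110 degrees horizontally;
# a longer focal length shrinks the field of view.
fov = horizontal_fov_deg(0.12, 0.04)
```

This is why a headset’s lenses matter as much as its screen: the same phone behind different optics can feel like a window or like a pair of binoculars.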
You should not settle for anything under 90 degrees, and that’s pretty much par for the course on a mobile VR headset. Some high-end sets have achieved 110 degrees or more, but 100 degrees is the gold standard in mobile VR at the moment.
While most mobile VR headsets are nothing more than some lenses, a plastic case, and straps, there are a few that offer more than this. Some might provide a USB pass-through, so that you can charge the phone without removing it from the headset. This may also allow for tethering to external hardware, although that’s not common practice at the moment.
There may also be additional motion sensors, batteries, and control buttons. Often these headsets will only work with a particular make or model of phone. For example, the Samsung Gear VR headsets are excellent products, but will only work with certain flagship phones from Samsung itself.
When it comes to the sort of VR experience that you can expect from a mobile smartphone VR case, the truth is that you should manage your expectations. At the moment, mobile VR tends to range from being just OK to being actively unpleasant. Even if you have a very good phone, chances are it doesn’t have the right sort of screen (low persistence, high refresh rate) that good VR requires. On top of this, the best 3D hardware for smartphones is still a long way from having the grunt for great VR. This is changing fast, but it will probably be years before the hardware advances enough to match up to the really premium stuff available today.
For watching 3D or 360-degree video, mobile headsets are pretty great. There are also some really cool games you can play that were designed for the hardware and so come out well. You may want to invest in a wireless gamepad compatible with your phone and the desired apps to make the most of it all.
Mobile headsets are a cheap way to turn the expensive phone you already have into a sort-of OK VR device. If you pick your apps carefully you can have a lot of fun, but it’s the very slightest of toe-dippings into the world of VR.
As I write this there are no standalone mobile headsets on the market. However, at least two have been announced by major players, which means that soon you’ll have the option of buying one of these mysterious new devices. Unlike the mobile headset cases we just talked about, standalone VR headsets have all the hardware needed to run their VR software built into them. In practice, this is the same sort of processing hardware you’d find in a smartphone, but there are some very important differences with these headsets.
Standalone headsets are designed from the ground up to be used for VR. This means they’re better at the job for a number of reasons.
The first advantage is that their screen (or screens) is selected for its suitability for VR. So it will be a low-persistence model (it doesn’t blur) and have a fast refresh rate. There may also be two separate screens angled in such a way that you get a wider field of view and a more natural picture for each eye.
The next advantage of a standalone headset is hardware performance. Your phone is a multitasking computer. While you’re playing around in VR it still has to dedicate processing time to handle other things like email and monitoring cellular signals. With a standalone system that’s not the case; the hardware can be dedicated 100% to the task at hand.
Hardware can also perform better because of the shape and size of the headset itself. When you have to squeeze all that hardware into an object the size of a smartphone, you have to make a lot of concessions when it comes to heat and battery life. The inside of a VR headset is much roomier than a phone chassis, which means you can run things faster and hotter. The biggest problem with mobile VR is how quickly the phone inside overheats and shuts down – that shouldn’t be an issue with a standalone system.
These devices will also be platforms in their own right, the same way that a Nintendo 3DS or Playstation Vita is a specific platform. Apps will have to be written or ported to the device, even if it runs a modified version of current mobile operating systems like Android. So it would be a good idea to aim for one of the models that seems most likely to get good developer support. You don’t want to be the one who backs Betamax when the world has chosen VHS!
You can expect to pay as much for one of these VR headsets as you would for a premium non-mobile unit. However, you also don’t have to pay for a high-end computer to make it work.
If you already have a pricey phone it may seem like quite an expense, so you’d have to think about whether the improvements in the actual VR experience are worth the outlay. If your phone is a modest model then this is a very viable option from the get go, but if you aren’t sure about wanting VR you are better off trying out cheap VR phone cases. If you’re serious about VR on the go, these promise to be the best that money can buy.
These are the big dogs – the very best VR systems that you can buy as a regular person who does not have a military budget or corporate expense account. These headsets cannot work by themselves, but have to be connected to an external computer.
In most cases this means having a very powerful PC hooked up to the VR headset, which does all the thinking and computing.
At the moment, “tethered” literally means there are cables running from the headset to the computer, which means that this is usually a standing or seated experience. That’s changing, however, and wireless display technology is getting good enough that you might be able to cut the cord without lag or reception problems. In both cases, though, you’re still tethered to an external computer, whether it’s with copper wire or radio waves.
The good news is that computer hardware is getting cheaper and more powerful all the time, so the computer you need to meet the minimum requirements for VR today will be much cheaper in the near future. This has already happened and there are now even VR-certified laptops! Even so, this is by far the most expensive way to get into virtual reality.
Once you’ve experienced this type of VR you’ll understand what the hype is about. The detail, optics, and sense of presence you get when using one of these systems is hard to describe: the illusion stays with you no matter how sure you are that it’s just a simulation. That’s the payoff of years of research and design spent getting the optics just right and tuning the electronics until there’s no perceptible lag between your movements and those of your virtual body. Your brain receives all the sensory input through your eyes and ears that it expects, and the imperfections that break the illusion of the virtual world have largely been ironed out. Right now, this is the best example of VR you can own.
Apart from these three main types of headset, you’ll have to consider some accessories as well. I’ve mentioned a wireless game controller for mobile VR already, but even with tethered VR they are a primary input method. The good news is that most of the premium tethered VR products actually come with a good gamepad in the box. Apart from the gamepad option, there may also be motion controllers provided by that manufacturer. Oculus has the Touch controllers and Sony has the PlayStation Move controllers, for example.
I’ll talk about these non-headset products in other articles on the site.
How much money you are willing to spend is directly correlated to how good your VR experience will be. That’s not to say that the experiences on the cheaper options are bad – just not as good. If you mainly want some simple games and a cool way to watch movies on a plane or in your bedroom, the mobile stuff is worth looking at. If you want the best, well, you know the price and the reward now. The decision is yours.
There’s nothing worse than when a new technology becomes swept up in a flood of hype that sees its buzzwords used in all sorts of inappropriate ways. Remember when everything from toothpaste to shoes somehow had the word “nanotechnology” shoehorned into its product description? Because the general public had become aware of the scientific excitement around nanotechnology, adding the word to your product – however tenuous the link to the actual field – made it sound futuristic.
Well, I hate to say it but VR is starting to pick up a little of that unwelcome tendency. It’s being applied to all sorts of areas where it really shouldn’t be. The same thing is happening to artificial intelligence, with all sorts of applications and devices claiming to be “AI-powered”. This is why you’d be right to take with a grain of salt the claim that VR is an effective treatment for pain. And yet, here we are with serious evidence that virtual reality can in fact treat pain. How is this even possible?
To understand how it can even be possible, in principle, for pain to be affected by something like VR, it’s important to understand what pain actually is. Pain is a built-in, evolved warning system that lets you know something’s not right with your body. Whether you step on a tack or have something wrong internally, an aching or stabbing sensation lets you know about it.
Pain is delivered as a signal from your sense organs, transported by nerves and reaching the pain center of your brain. There it is interpreted and you then get a subjective experience of that signal. Pain medication works by putting a block somewhere along that delivery line. When the doctor applies a numbing agent to your skin before removing a mole, he’s preventing the pain sensors in your skin from reporting the damage in the first place. When you take certain painkillers, they reduce how well those nerves function, lessening the intensity of the feeling. Strong painkillers such as opioids bind to pain receptors in the brain for a powerful effect that stops you from experiencing the sensation of pain even if the signal makes it all the way to your brain.
That’s the (rather broad) mechanical description of pain. It’s not exactly exhaustive, but you get the general idea. The thing is, despite all the physical mechanisms of pain, in the end it is still a subjective mental process that makes up the experience of pain. This is why chronic pain – pain that doesn’t go away no matter what medications you take – can partly be a psychological problem. It’s also why some people are more resilient to pain than others. They habituate to it; they can override it mentally and otherwise deal with levels of pain or discomfort that would reduce a crybaby like me to a blubbering mess.
Hypnotherapy too, can be used to reduce the amount of pain a person feels. So it should be clear that it’s not only by using chemical agents that we can reduce how much pain a person experiences. That’s where VR comes in.
I first became aware of this innovative use of VR when I read an article about a study in which dentists used VR to reduce pain. In total, 80 people who needed a cavity fixed or a tooth yanked out took part in the study. The patients were split into three groups: one got to explore a virtual coastline, one got to explore a virtual city, and a very unlucky third group got nothing at all! Ouch. Those who experienced the coastal exploration reported the biggest reduction in pain compared to both of the other groups.
So promising is the prospect of VR pain reduction, especially in this era of opioid addiction, that one Matthew Stoudt launched an entire company to produce VR experiences aimed at reducing pain. They are putting together a whole library of experiences for many different medical scenarios. There are ones meant to calm you down before going into the operating room. There are ones meant for use during surgery; presumably the type of surgery that does not involve being put under! Finally, they have VR experiences meant to help with postoperative pain.
The people from AppliedVR are working with hospitals to refine and test these VR pain interventions and the results are quite amazing. Researchers have found that 20 minutes of calming VR can reduce pain by 24%, which is significant. It means that you can get to an acceptable level of pain without drugs, or become pain free without needing quite as much. It might even be the case that VR could provide relief to people confined to bed and undergoing palliative care, such as those who have terminal cancer. The possibilities for improvements in quality of life excite me most of all.
There’s still a lot of research and testing that has to be done before we can even think of making VR pain treatments a mainstream approach to pain control. There are still so many questions. Even if they establish a reliable link between VR and pain relief, we still need to figure out what sorts of experiences work best. Which ones work for what sorts of pain? Does it matter what culture the patient is from? What about their gender or age?
I hope this research is well-supported and that we’re graced with another way to make people experience just a little less suffering in life. That’s not too much to ask.
There’s a silent war brewing over the short-term future of virtual reality. It centers around how VR users are going to free themselves from the tethers that are currently part and parcel of the AAA premium VR experience. Yes, there are mobile and self-contained VR headsets, but none of them can compare to the fidelity and experience of a tethered HMD hook-up on a cutting-edge computer with the latest graphics technology.
The hardware needed to meet the minimum standards has indeed already shrunk down a lot; so much so that VR-ready ultrabooks such as the Razer Blade are possible. However, it’s still simply impossible to stick that level of hardware into a self-contained unit. So how do you get rid of that long tether and still have access to the powerful computational resources you need for cutting-edge VR? This isn’t just for VR gamers who feel inconvenienced – there are plenty of training and industrial applications for truly high-end VR that isn’t literally tied to an anchor.
Right now there are two approaches to solving the tethering problem. The obvious one is to create an HMD that can transmit its data wirelessly. The problem with this is that devices like the Oculus send massive amounts of data in both directions. The main killer here is the double HD 90 Hz video signal. The motion-tracking data going the other way is pretty beefy too.
Bandwidth isn’t even the whole story. With aggressive compression you can get it through a thinner pipe, but going wireless while still keeping the entire latency loop under 20ms has so far proven essentially impossible. In fact, measures such as video compression actually add to the total latency, since it takes time to squeeze those frames down before sending them over the airwaves.
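To get a feel for those numbers, here’s a rough back-of-envelope sketch. The figures – a combined 2160×1200 panel across both eyes, 24 bits per pixel, 90 Hz, no compression – are assumptions for illustration, not any vendor’s official specification:

```python
# Rough, uncompressed bandwidth estimate for a premium tethered HMD.
# All figures are illustrative assumptions, not a vendor spec.
width, height = 2160, 1200   # combined resolution across both eyes
bits_per_pixel = 24          # standard RGB, no compression
refresh_hz = 90              # frames per second the HMD expects

bits_per_frame = width * height * bits_per_pixel
bandwidth_gbps = bits_per_frame * refresh_hz / 1e9
print(f"Uncompressed video: {bandwidth_gbps:.1f} Gbit/s")

# Latency budget: at 90 Hz each frame already takes ~11 ms, so a
# 20 ms motion-to-photon target leaves very little room for the
# extra delay of compressing frames and hopping over a radio link.
frame_time_ms = 1000 / refresh_hz
print(f"Frame time: {frame_time_ms:.1f} ms")
```

That’s roughly 5.6 Gbit/s uncompressed – far beyond what ordinary Wi-Fi can sustain, which is why aggressive compression or dedicated short-range radio links enter the picture at all.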
Despite these technical challenges, VR pioneer Oculus seems to be on the verge of releasing in 2018 the first practical wireless HMD. It’s a product that’s gone under the name “Santa Cruz” so far; it seems the time for it to come to market is getting close.
The other potential solution for untethered AAA VR isn’t as compact and elegant as the concept of a wireless HMD. It is, however, something you could buy today if you had deep enough pockets.
HP has taken the genius step of taking all those compact, laptop-friendly components and putting them into a battery-powered chassis that can be strapped to your back. This means that the tether is now anchoring you to something that will go wherever you go. You can turn around to your heart’s content without getting tangled up.
The specifications of the two backpack computers offered by HP are right up there. This is a workstation-class machine, so you get a Pascal-based Quadro P5200 GPU with 16GB of RAM and an i7 vPro CPU. When you are done with the VR portion of your task, you can unclip the machine from the backpack harness and then plonk it down into its docking station. Of course, this convenience has a price tag – it starts around three grand, and that’s before you buy your HMD.
The fact that the backpack form is currently expensive is beside the point, however. The big question to me is whether this is an overall better idea than a wireless VR headset.
That’s the question, isn’t it? There are so many complex technical questions that have to be solved in order to connect the HMD wirelessly to a computer that I seriously have to question if it's worth it. Although the backpack computer might look a little cumbersome now, by this time next year they’ll be squeezed into an even smaller box. Even now the whole HP Z computer backpack setup weighs about 4 kilograms. I doubt the typical adult would even notice that weight on their back, and that’s just a first-generation design.
Wireless tethering would be a technological marvel, but is it an over-engineered one? There are so many things that can go wrong with wireless transmission that the very idea of using it for anything other than the least mission-critical task is rather laughable. The only real place it makes sense is if someone already has an existing VR setup and now needs to get rid of the tether.
The thing is, how long will it be before the backpack form factor becomes just another style of case you can buy? Slap your own components into it and there you go. The backpack solution then becomes an upgrade or a minor conversion. Has Oculus thought this through?
Keeping everything connected with a direct wire link ensures reliable performance. This is the age of power-efficient, desktop-grade components. They don’t make a lot of heat and noise anymore. Mechanical drives are optional. With computers now so small and light, there may be no reason at all to leave them on the floor.
The real problem, if you ask me, is the reliance on an external tracking camera. That’s what really ties the VR user down. There’s no indication that Santa Cruz will be any different at this point. That's a problem, because Microsoft and its hardware partners have essentially made external camera tracking obsolete with Windows Mixed Reality headsets. These new HMDs have outward-facing cameras that track the room, not you – which means if you combine them with a battery-powered VR backpack you can go wherever you want.
Yes, I have to admit that this is probably an application more suited to mixed reality applications. Still, even if complete free-roaming VR is not really practical, the freedom of movement and relatively large space afforded to us by having a backpack-based VR system are features not to be sniffed at.
Here's the bottom line, from my point of view. They’re spending all this time and money developing wireless display technology. The main reason they are doing this is because there’s no way to build the processing power of a full computer into something as small as an HMD. That’s fair enough, but VR-ready hardware has already been reduced in size to the point where it goes in an ultrabook chassis. We can’t really be that far from AAA VR hardware fitting in an HMD or a small tethered box worn on the belt or elsewhere on the body.
Right now VR backpack computers might seem a little silly, but give it a few years and more size reduction and it could be a completely different story.
Before ARCore, when Apple announced and then released its augmented reality developer kit, known as ARKit, it took the VR and AR world by storm. This was some truly next-generation stuff unleashed on an unsuspecting market. It was augmented reality that did not need special markers: it could map out the dimensions of the room you were in, find surfaces, remember them, and then project solid and convincing digital objects into the camera feed. ARKit is a quantum leap compared to the basic AR apps we’ve already seen on smartphones and tablets.
Of course, we’ve seen this level of augmented reality before. Various outfits have been working on advanced AR systems for years now. Almost all of these have relied on specialized hardware in order to achieve the sort of spatial mapping and processing you need to craft such convincing digital illusions.
The most famous project in this regard has to be Google Tango – a hardware standard created by the tech giant using multiple specialized sensors to do accurate range-finding and quickly create a virtual map of the 3D space around you. It worked fantastically, and the idea was that future phones would ship with all the Tango hardware built right in. The big downside to this is that it limits the number of people who can use this special AR platform to only those who buy phones designed for that purpose. It’s not exactly a recipe for mainstream success.
ARKit will run on just about any new Apple device. That makes it attractive to developers who know that they have access to virtually the entire install base of iOS machines. It becomes a virtuous cycle where people already have the platform so more developers make AR apps, which make money and then attract more developers. It also means that the quality of AR apps keeps getting better because of competition, just as the standards for app innovation have skyrocketed since the idea of a smartphone app was first introduced. There are no fart apps on the Top 10 list anymore!
It’s in this context that Google has been developing ARCore. As I write this, ARKit has been out for some time – you can find plenty of writing about it on this site, including app reviews and recommendations – and it has become a mature and shockingly reliable platform, while Google has been playing catch-up. This is why I expect Google to come out swinging with ARCore, the Android equivalent of ARKit. Unfortunately, given the nature of Android, it has a much bigger job ahead of it.
The big difference between iOS and Android is that iOS is a closed platform. Just like Mac computers, Apple has full control of both the hardware and the software. This is an approach that comes with pros and cons. In my opinion it’s the main reason that Apple computers never became dominant. Microsoft was wise enough to open Windows up to any PC maker, which meant the install base became huge and developers were lured to make software for the larger market. Apple was relegated to catering for niche creative markets like photo manipulation, desktop publishing, and film editing.
In this analogy, Android is like Windows. While Google is the custodian of Android, any phone maker can put it on their product. It’s also driven down the price of smartphones, since it runs on low-spec and high-spec machines alike. The phone maker can concentrate on making the hardware and, if they feel it’s worth it, customize the look and feel of Android, which is why Samsung Android phones have TouchWiz and Mi phones have MIUI, for example. Since Google has given up control of the hardware its OS runs on, it’s incredibly difficult for it to put out standardized hardware APIs the way Apple did with ARKit.
Apple knows every component in every one of its products. If you take two random Android phones there’s little chance they both will have GPUs, processors, or RAM that are anything alike in type or performance. You can’t count on a given Android phone having a specific quality of camera sensor or level of gyroscopic accuracy.
It’s also why gaming on iOS is a much better experience than on Android in general. How many times have you downloaded a new Android game only to find that it doesn’t play well with your phone model? It might be too slow, and crash or glitch out. There are simply too many hardware combinations to test for all of them. It’s for these reasons that Google has its work cut out in releasing a competitor to ARKit.
Before we go into the details of ARCore, let’s first explain exactly what it is.
Like ARKit, ARCore is a software development kit. Developers who make applications use these as a way to standardize their software with various hardware and operating systems. It also means that they don’t have to reinvent the wheel every time they make new software.
In the case of ARCore, the idea is that a software developer who wants to use AR as a part of their application just needs to send the right inputs and outputs to the ARCore software and, presto, you have an AR application. Obviously it’s much more complicated than this, but it comes down to Google solving the hard problems of AR in Android on behalf of developers and then making that solution available so they can focus on creating cool stuff with AR rather than grappling with AR technology itself.
OK, now it’s time to talk about the actual features of ARCore. Whether you’re someone who wants to make AR apps or just use them, these are the abilities you can expect from the system.
First of all, ARCore is capable of motion tracking using the camera feed and motion sensor data. In other words, just like ARKit it can find “landmarks” and stay oriented, which in turn means 3D objects can “stick” to their location in the real world.
That wouldn’t be much help if it couldn’t tell whether something was a table or a wall. So ARCore also has something known as “environmental understanding”, a fancy way of saying that it can figure out from the camera picture whether something is a flat surface, and whether that surface is horizontal or vertical – the basic building blocks of mapping a room.
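The geometry behind that kind of “environmental understanding” can be sketched in a few lines. This is a toy illustration of the principle only – ARCore’s real pipeline fits planes to clouds of feature points across many camera frames – but the core idea is the same: find a surface’s normal vector, then see how it lines up with “up”:

```python
import math

# Toy sketch: given three points sensed on a surface, compute the
# surface normal via a cross product and classify the surface as
# horizontal (floor/tabletop) or vertical (wall). Purely illustrative;
# not the actual ARCore algorithm or API.
def surface_orientation(p1, p2, p3):
    # Two edge vectors lying in the surface
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    # Normal vector = u x v
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    length = math.sqrt(sum(c * c for c in n))
    ny = abs(n[1] / length)  # how aligned the normal is with "up" (y axis)
    if ny > 0.9:
        return "horizontal"  # normal points up: floor or tabletop
    if ny < 0.1:
        return "vertical"    # normal points sideways: wall
    return "slanted"

# A tabletop: all three points share the same height (y = 0.7)
print(surface_orientation((0, 0.7, 0), (1, 0.7, 0), (0, 0.7, 1)))  # horizontal
# A wall segment: all three points share the same depth (z = 2.0)
print(surface_orientation((0, 0, 2.0), (1, 0, 2.0), (0, 1, 2.0)))  # vertical
```

Once surfaces are classified like this, the system can let apps place a virtual lamp on the table but a virtual painting only on the wall.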
Now that it knows where things are and what they are, it needs to actually draw 3D objects in the scene so that they look as if they’re really there. One of the main reasons that AR looks fake is because it doesn’t match the ambient lighting. If the 3D object is lit arbitrarily then it stands out like a sore thumb. It also needs to cast a believable shadow. It’s funny how you can tell immediately when these things are missing, but can’t always put your finger on why it looks so wrong.
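The light-matching idea can also be sketched simply. Again, this is a hypothetical illustration of the principle rather than ARCore’s actual API: estimate how bright the real scene is from the camera frame, then dim the virtual object to match, so a white cube in a dark room doesn’t glow like a lightbulb:

```python
# Illustrative sketch of light estimation: measure scene brightness
# from the camera frame and scale the virtual object's color by it.
# Not the real ARCore API - just the underlying idea.
def ambient_intensity(frame_pixels):
    """Average luminance of the camera frame, from 0.0 (dark) to 1.0 (bright)."""
    total = 0.0
    for r, g, b in frame_pixels:
        # Standard luma weights approximate perceived brightness
        total += (0.299 * r + 0.587 * g + 0.114 * b) / 255.0
    return total / len(frame_pixels)

def lit_color(base_rgb, intensity):
    """Scale the virtual object's base color by the measured ambient light."""
    return tuple(round(c * intensity) for c in base_rgb)

# A dim room full of mid-grey pixels: a white virtual cube gets
# rendered grey, so it no longer stands out against the dark scene.
dim_frame = [(80, 80, 80)] * 100
print(lit_color((255, 255, 255), ambient_intensity(dim_frame)))  # prints (80, 80, 80)
```

Matching shadows works the same way in spirit: estimate where the real light comes from, then render the virtual object’s shadow from that direction.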
Just as with ARKit, there are minimum requirements for ARCore to work on a given phone.
First of all, the phone must run at least Android N or “Nougat”. For the preview version of ARCore the only phones that support it are the Google Pixel phones (obviously) and the top-class Samsung Galaxy S8.
ARCore is big, but still an underdog. With the release of ARCore, millions more people will have access to advanced AR experiences. Although Apple handily beat Google to this milestone, there are still far more Android phones out in the world than iOS ones. This means that ARCore is possibly more important for the AR industry overall than ARKit is. That’s not to say that ARKit means nothing. I’ve said myself that it’s an incredible achievement and a real game changer.
However, if only flagship-class phone owners need to apply when it comes to ARCore, the whole thing may be a moot point anyway. It may be that the average Android phone still doesn’t have the sort of hardware to make ARCore work as intended. One real risk is that ARCore gives AR a bad name thanks to all the inconsistency of the market.
In the end, I’m just ecstatic that AR is moving forward and that it’s not locked into elite, experimental hardware, because that means everyone wins.