Doug Bruey
Electrical Engineering Program Lead

In Reality...?

Augmented reality (AR) and virtual reality (VR) technologies are poised to open up a whole new world of opportunities. We’re already seeing the effects of VR when it comes to gaming. But in the future, could AR add a new dimension to surgery?

AR and VR both have the ability to alter our perception of the world. AR takes our current reality and adds something to it – virtual objects or information. VR, on the other hand, immerses us in a different – virtual – world.

For many people, AR started in 2013 with Google Glass. Its heads-up display delivered two-dimensional content to one eye via a prism projector. However, aside from detecting head movement, it lacked context awareness – limiting its use. The Sony SmartEyeglass came next and provided an increased field of view and improved performance, which allowed the development of applications that recognized objects and provided context-relevant content.

Fast forward to 2016 and Microsoft’s HoloLens appeared – with Kinect-style sensing powering rock-solid anchoring of virtual content. We’ve harnessed this technology to demonstrate what it could mean for the operating theatre of the future, more of which later...

It was also in 2016 that VR achieved its first commercial successes with the introduction of the Samsung Gear VR, the Sony PlayStation VR, Oculus Rift and the Valve-powered HTC Vive. So why has VR come of age now? In a word – speed.

The challenge of VR development is to fool the brain into accepting what is being seen as real. Any delay between actually moving and the image moving confuses our senses and can result in a loss of balance or even nausea. The accuracy and latency of visual input relative to a change in head position are critical. Hardware and software technology needed to reach a point where they could work that magic.

Today, displays are small enough to sit on a user’s nose and serve up three times more frames per second than a film – and all at very low latencies. VR systems couple these technologies with ‘pose-tracking’ solutions that quickly and accurately detect the position of a user’s head to close the loop with the visual system. It is the speed and accuracy of the pose-tracking technology that truly differentiates the VR experience.
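To put numbers on that claim: a headset refreshing at 90 Hz – the commonly quoted spec for the Rift and Vive, taken here as an assumption – draws well over three times as many frames per second as 24 fps film, leaving only around 11 ms to render each frame. A minimal back-of-envelope sketch:

```python
FILM_FPS = 24        # standard cinema frame rate
VR_FPS = 90          # assumed headset refresh rate (e.g. Rift / Vive spec)

frame_budget_ms = 1000 / VR_FPS   # time available to render one frame

print(VR_FPS / FILM_FPS)           # 3.75 times the frame rate of film
print(round(frame_budget_ms, 2))   # 11.11 ms per frame
```

That 11 ms budget is why pose tracking must be both fast and accurate: any time spent waiting for a position fix comes straight out of the rendering budget.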

Solutions targeting the mobile phone market use the inertial measurement unit in the phone to detect the motion of the user’s head and scroll visuals accordingly – but cannot track a user’s absolute position in space. External camera-based tracking is another popular technology, which identifies points on an object and computes the pose using computer vision algorithms – but camera resolution and depth of field limit its range.
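Why can’t an inertial measurement unit alone track absolute position? Because it measures only rates of motion, which must be integrated over time – and any sensor bias gets integrated along with them. A minimal sketch of the problem, with a made-up bias figure purely for illustration:

```python
import math

def integrate_yaw(gyro_readings, dt, bias=0.0):
    """Integrate angular-rate samples (rad/s) into a yaw angle (rad).

    A constant sensor bias is added to every sample to show how
    IMU dead reckoning drifts over time.
    """
    yaw = 0.0
    for rate in gyro_readings:
        yaw += (rate + bias) * dt
    return yaw

# The user holds their head perfectly still: the true rate is zero.
samples = [0.0] * 1000                 # 1000 samples over 10 seconds
dt = 0.01                              # 100 Hz IMU sample rate
true_yaw = integrate_yaw(samples, dt, bias=0.0)
drifted = integrate_yaw(samples, dt, bias=0.002)  # small gyro bias

print(math.degrees(true_yaw))   # 0.0 – the correct answer
print(math.degrees(drifted))    # over a degree of drift in 10 s
```

The same effect compounds twice over for position, which requires double-integrating accelerometer readings – hence IMU-only solutions can scroll visuals with head rotation but cannot pin down where the user actually is in the room.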

State of the art today is Valve’s SteamVR Tracking technology, which uses scanning lasers to triangulate the position of tracked objects with sub-millimeter accuracy in a 25-square-meter room. Outside of gaming, SteamVR Tracking is poised to make an impact on training simulations, physical and psychological therapy, industrial control, architecture and design.
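The principle behind the laser approach can be sketched simply: a base station emits a sync flash, then sweeps a laser across the room at a constant angular rate, so the delay before the laser hits a photodiode on the tracked object encodes a bearing angle; bearings from two stations then fix the position. The sweep rate and geometry below are illustrative assumptions, not SteamVR’s actual parameters:

```python
import math

SWEEP_PERIOD = 1 / 60.0   # assumed rotor speed: one full sweep per 1/60 s

def hit_time_to_angle(t_hit, t_sync, period=SWEEP_PERIOD):
    """Convert the delay between the sync flash and the laser hitting
    a photodiode into the sweep angle (radians)."""
    return 2 * math.pi * (t_hit - t_sync) / period

def triangulate_2d(angle_a, angle_b, baseline):
    """Intersect two bearing rays from stations at (0,0) and (baseline,0)."""
    # Ray A: y = x * tan(angle_a); Ray B: y = (x - baseline) * tan(angle_b)
    ta, tb = math.tan(angle_a), math.tan(angle_b)
    x = baseline * tb / (tb - ta)
    y = x * ta
    return x, y

# Photodiode hit 1/8 of a sweep period after the sync flash -> 45 degrees
angle = hit_time_to_angle(t_hit=SWEEP_PERIOD / 8, t_sync=0.0)

# Two stations 2 m apart see the object at 45 and 135 degrees
x, y = triangulate_2d(math.radians(45), math.radians(135), baseline=2.0)
print(round(x, 3), round(y, 3))   # object sits at (1.0, 1.0)
```

Because the measurement reduces to precise timing of a laser sweep rather than image processing, the angular resolution – and hence positional accuracy – can be far better than camera-based tracking over the same range.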

But VR is not the only application that will benefit from SteamVR Tracking. The user experience in AR systems is also tied to the speed and accuracy of pose tracking. Internal cameras and other systems can solve many of AR’s challenges – including depth perception, object recognition and pose tracking. However, relying on the ‘inside-out’ tracking of a headset’s camera may not be sufficient in all environments.

In a typical operating theatre today, for example, there are numerous charts, consoles and displays to support a surgical procedure –
with new sensing and imaging technologies adding to this all the time. Add in lab results, patient history and a surgical plan, and presentation of information becomes a challenge. How can we make all this available to the surgeon and yet not distracting?

AR could replace physical screens with floating monitors, improving visibility, reducing clutter and enabling ‘weightless’ reconfiguration – positioning with a simple gesture. Virtual screens could be summoned to show information as and when it is needed, with CT scan slices visualized as a three-dimensional (3D) model.

Now imagine if the 3D scan was overlaid onto a patient. Such a step could quite literally provide a surgeon with X-ray vision, allowing a CT scan to be reviewed relative to actual anatomy. Laparoscopic tool handles could be ‘pose tracked’ to give the surgeon ‘virtual sight’ of the tool tips hidden from normal view inside the patient.
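The ‘virtual sight’ idea rests on simple rigid-body geometry: the tracker reports the handle’s pose, and the tip sits at a fixed offset in the handle’s own frame, so a single transform yields the hidden tip’s position in the room. A simplified sketch, using only a yaw rotation where a real system would track full 3D orientation (all numbers are illustrative):

```python
import math

def rotate_z(point, theta):
    """Rotate a 3D point about the z-axis by theta radians."""
    x, y, z = point
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y, s * x + c * y, z)

def tip_position(handle_origin, handle_yaw, tip_offset):
    """Transform the tool-tip offset (expressed in the handle's frame)
    into the room frame using the tracked handle pose."""
    rotated = rotate_z(tip_offset, handle_yaw)
    return tuple(h + r for h, r in zip(handle_origin, rotated))

# Handle tracked at (0.5, 0.2, 1.0) m, rotated 90 degrees about z;
# the tip sits 0.3 m along the handle's local x-axis.
tip = tip_position((0.5, 0.2, 1.0), math.pi / 2, (0.3, 0.0, 0.0))
# tip comes out at roughly (0.5, 0.5, 1.0) in the room frame
```

Rendering a virtual marker at that computed point is what would let the surgeon ‘see’ the instrument tip through the patient’s anatomy.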

We’ve used today’s technology to bring the augmented operating theatre to life as never before in a demonstration of what is becoming possible. Of course, the clinical reality is still some way off – but it’s an interesting glimpse into the future. Despite great technological strides, AR is still an infant technology. And visualization is only part of the problem – it is of limited use without a convincing, natural user interface. With improvements in augmented hardware guaranteed, the next battleground will surely be the user interface.

We see the technologies that drive AR and VR as very complementary and potentially convergent. There are many uses outside of gaming that these will enable and we’re already working on diverse applications with clients. Moore’s law says headsets will get smaller, lighter and faster – so ultimately we will only be limited by our imagination.

