Editor’s Note: On November 16, 2017, Synapse and Cambridge Consultants showcased new technology, product design, and development work at our annual Innovation Day event.
The day included presentations highlighting the progress and future of technological innovation, along with over two dozen technology demonstrations with applications in the consumer, medical, industrial, telecommunications, and other sectors.
One of the technology demonstrations Synapse presented was “Enhancing Human Capabilities through Augmented Reality”. In this blog, Jeff Hebert introduces the demo.
Demonstrating Our Worker Augmentation Concept
In a recent blog post, I said that augmented reality and machine vision technologies are unlocking the ability to train workers to perform skilled tasks and address some fluid and varied scenarios more economically than by investing in automation. With this concept in mind, we recently embarked on an internal investment project at Synapse to build such a system, picking the use case of an industrial worker assembling high-mix, low-volume circuits. Many other applications for this ‘human augmentation’ technology also exist, including industrial equipment maintenance, worker training, and even medical procedures.
Our demonstration has a workspace with a pegboard and a number of unassembled, colorful circuit parts to be placed onto the board, each with its own specific location and orientation. Above the workspace is a basic video camera, and in front of the user is a monitor. There are no instructions or other visual cues beyond the screen highlighting parts and instructing the user where to place them. The system also checks each step performed for accuracy and helps the user fix issues as they occur.
Our demo setup - an augmented workstation
We built our demo with a basic video camera and screen as opposed to a head-mounted display (HMD), both to focus attention on the software portion of the system and to show how such an augmented workspace can be a relatively inexpensive investment built with readily available hardware. On the periphery of the screen in our demo, we show what the vision algorithm sees and how it works with picture-in-picture visualizations. For applications with a dedicated workspace like the one we’re showing, we’ve noticed that an HMD can pose challenges in terms of comfort for all-day use as well as battery life.
Our Custom Machine Vision Technology
Augmented reality is a hot subject and has lots of promise, but out-of-the-box functionality is focused on projecting images into the user’s field of view with little understanding of the environment. Apple’s ARKit and software running on AR glasses like HoloLens can understand the basic surfaces and space around the user—the walls, floor, ceiling, and basic objects like tables—and project imagery accordingly. This functionality is great for entertainment, gaming, and displaying reference material, but it doesn’t truly enhance the user’s own capabilities and senses. We wanted to show how AR can be used to intelligently guide a user in real time to see things they can’t see and do things they don’t already know how to do.
To get a better understanding of the scene and objects of interest within it, either custom hardware and sensors or much-more-sophisticated machine vision systems are required. For this demo, we wanted to see what we could achieve with only a basic video camera as the input, but other applications may benefit from or require custom sensors and hardware.
To successfully identify the circuit parts for our demo in all orientations, distances from the camera, lighting conditions, velocities, and amounts of occlusion by the user’s hands, we needed to create a robust machine vision system and train it with a lot of data. The classic way to train a vision system to identify an object is to feed it a huge set of images of the object from all angles, in many environments (think using every result from a Google image search of a cat to identify a cat in a novel photo). Collecting such a data set is incredibly time-consuming for a novel object, and the process must start all over again if the object of interest changes.
To do this in a much more efficient way and set up a platform to speed the training of new objects, our team created a synthetic data set of our circuit parts by creating 3D solid models of them and using a gaming engine to render the objects in all orientations and lighting conditions with thousands of random background images.
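To give a rough flavor of the synthetic-data idea, the sketch below composites a stand-in “part” sprite onto random backgrounds at random positions and orientations. This is only an illustration of the concept, not Synapse’s actual pipeline, which used full 3D models rendered by a gaming engine; the sprite, function names, and right-angle-only rotations are all simplifications for brevity.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def make_part_sprite(color, h=16, w=40):
    """Stand-in for a rendered circuit part: an RGBA sprite with a
    colored body and gray leads, transparent everywhere else."""
    sprite = np.zeros((h, w, 4), dtype=np.uint8)
    sprite[:, 4:-4, :3] = color                                 # body color
    sprite[:, 4:-4, 3] = 255                                    # body alpha
    sprite[h // 2 - 2:h // 2 + 2, :4] = (160, 160, 160, 255)    # left lead
    sprite[h // 2 - 2:h // 2 + 2, -4:] = (160, 160, 160, 255)   # right lead
    return sprite

def synth_example(sprite, bg_hw=(128, 128)):
    """One synthetic training image: the sprite alpha-composited onto a
    random-noise background at a random position and orientation
    (right angles only here; a real renderer varies pose continuously)."""
    bg = rng.integers(0, 256, size=bg_hw + (3,), dtype=np.uint8)
    part = np.rot90(sprite, k=int(rng.integers(0, 4)))
    h, w = part.shape[:2]
    y = int(rng.integers(0, bg_hw[0] - h + 1))
    x = int(rng.integers(0, bg_hw[1] - w + 1))
    alpha = part[..., 3:4].astype(np.float32) / 255.0
    patch = bg[y:y + h, x:x + w].astype(np.float32)
    blended = alpha * part[..., :3].astype(np.float32) + (1 - alpha) * patch
    bg[y:y + h, x:x + w] = blended.astype(np.uint8)
    return bg

# Each part's 3D model yields thousands of labeled examples automatically.
resistor = make_part_sprite((200, 60, 40))
dataset = [(synth_example(resistor), "resistor") for _ in range(1000)]
```

Because the labels come for free from the generator, retraining for a new part is just a matter of swapping in a new model and regenerating the set.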
On top of this machine vision system and synthetic dataset creation tool, our team built a lightweight application and graphical interface to show the user what part should be assembled next, where it should go, and in what orientation. As the user performs a step, the system checks it for accuracy and then moves on to the next step or suggests corrective action. We tested the system with different users and made improvements and adjustments for usability, accuracy, and latency along the way.
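The check-and-advance behavior described above can be sketched as a simple loop. Here `detect_pose` is a stand-in for the real vision system, and all names, data structures, and tolerances are illustrative assumptions rather than Synapse’s actual interface:

```python
from dataclasses import dataclass

TOL_PX = 5     # illustrative position tolerance, in pixels
TOL_DEG = 10   # illustrative orientation tolerance, in degrees

@dataclass
class Step:
    part: str            # which part to place next
    target_xy: tuple     # where it belongs on the board
    target_deg: float    # and in what orientation

def pose_ok(observed, step):
    """Does the observed (position, angle) match the step's target?"""
    if observed is None:                     # part not detected at all
        return False
    (x, y), deg = observed
    tx, ty = step.target_xy
    angle_err = abs((deg - step.target_deg + 180) % 360 - 180)  # wraparound-safe
    return abs(x - tx) <= TOL_PX and abs(y - ty) <= TOL_PX and angle_err <= TOL_DEG

def run_assembly(steps, detect_pose):
    """Guide the user through each step, checking the camera's view and
    re-prompting with corrective guidance until the step is verified."""
    log = []
    for step in steps:
        while True:
            log.append(f"Place {step.part} at {step.target_xy}, {step.target_deg} deg")
            observed = detect_pose(step.part)   # ask the vision system
            if pose_ok(observed, step):
                break                           # step verified; advance
            log.append(f"Adjust {step.part}: saw {observed}")
    return log
```

A driver would call `run_assembly` once per board, passing the step list for that circuit variant and the vision system’s pose-detection callback, so supporting a new high-mix product is just a new step list.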
We’re excited about this demonstration not only because it shows how we can use AR and machine vision technology to train workers to do something economically useful, but also because we’ve built reusable components in the process that we can leverage and reference for future client projects to speed them up and reduce risk. We’re seeing an explosion of AR, machine vision, and machine learning opportunities and are looking forward to helping more clients in this space as it develops.