Jeff Hebert
Vice President of Engineering

Automation AR and Jobs: A Demonstration

Editor’s Note: On November 16, 2017, Synapse and Cambridge Consultants showcased brand new, groundbreaking technology, product design, and innovative development at our annual Innovation Day event.

The day included presentations highlighting the progress and future of technological innovation, along with over two dozen technology demonstrations with applications in consumer, medical, industrial, telecommunications, and other sectors.

One of the technology demonstrations Synapse presented was “Enhancing Human Capabilities through Augmented Reality”. In this blog, Jeff Hebert introduces the demo.

Demonstrating Our Worker Augmentation Concept

In a recent blog post, I said that augmented reality and machine vision technologies are unlocking the ability to train workers to perform skilled tasks and to handle fluid, varied scenarios more economically than investing in automation.  With this concept in mind, we recently embarked on an internal investment project at Synapse to build such a system, choosing the use case of an industrial worker assembling high-mix, low-volume circuits.  Many other applications for this ‘human augmentation’ technology also exist, including industrial equipment maintenance, worker training, and even medical procedures.

Our demonstration features a workspace with a pegboard and a number of unassembled, colorful circuit parts to be placed onto the board, each with its own specific location and orientation.  Above the workspace is a basic video camera, and in front of the user is a monitor.  There are no instructions or other visual cues as to what to do beyond the screen highlighting parts and instructing the user where to place them.  The system also checks each step for accuracy and helps the user fix issues as they occur.

Our demo setup - an augmented workstation

We built our demo with a basic video camera and screen, as opposed to a head-mounted display (HMD), to focus attention on the software portion of the system and to show how such an augmented workspace can be a relatively inexpensive investment with readily available hardware.  On the periphery of the screen in our demo, we show what the vision algorithm sees and how it works with picture-in-picture visualizations.  For applications with a dedicated workspace like the one we’re showing, we’ve noticed that an HMD can pose challenges in terms of comfort for all-day use as well as battery life.
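As a rough illustration of the picture-in-picture idea (not the demo’s actual code, which wasn’t published), the sketch below overlays a downscaled “algorithm view” onto a webcam feed using OpenCV; the detection_mask() stand-in simply runs an edge detector where the real system would run its part-detection pipeline.

```python
# Sketch of the picture-in-picture "algorithm view" overlay.
# Assumes OpenCV (cv2) and a webcam; detection_mask() is a placeholder for
# the real part-detection pipeline, which is not published.
import cv2

def detection_mask(frame):
    # Stand-in for the vision algorithm's view: show an edge map.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    return cv2.cvtColor(edges, cv2.COLOR_GRAY2BGR)

cap = cv2.VideoCapture(0)  # basic off-the-shelf video camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Downscale the algorithm view and paste it into the top-left corner.
    inset = cv2.resize(detection_mask(frame), None, fx=0.25, fy=0.25)
    h, w = inset.shape[:2]
    frame[10:10 + h, 10:10 + w] = inset
    cv2.imshow("Augmented workstation", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```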

Our Custom Machine Vision Technology

Augmented reality is a hot topic with a lot of promise, but out-of-the-box functionality is focused on projecting images into the user’s field of view with little understanding of the environment.  Apple’s ARKit and software running on AR glasses like HoloLens can understand the basic surfaces and space around the user—the walls, floor, ceiling, and basic objects like tables—and project imagery accordingly.  This functionality is great for entertainment, gaming, and displaying reference material, but it doesn’t truly enhance the user’s own capabilities and senses.  We wanted to show how AR can be used to intelligently guide a user in real time to see things they can’t see and do things they don’t already know how to do.

A typical AR application showing projection of objects into a scene. Image licensed under the Creative Commons Attribution-Share Alike 4.0 International license.

To get a better understanding of the scene and the objects of interest within it, either custom hardware and sensors or much more sophisticated machine vision systems are required.  For this demo, we wanted to see what we could achieve with only a basic video camera as the input, but other applications may benefit from or require custom sensors and hardware.

To successfully identify the circuit parts for our demo in all orientations, distances from the camera, lighting conditions, velocities, and amounts of occlusion by the user’s hands, we needed to create a robust machine vision system and train it with a lot of data.  The classic way to train a vision system to identify an object is to feed it a huge set of images of the object from all angles, in many environments (think using every result from a Google image search of a cat to identify a cat in a novel photo).  Collecting such a data set is incredibly time-consuming for a novel object, and the process needs to start all over again if the object of interest changes.

To generate training data far more efficiently and to set up a platform that speeds the training of new objects, our team created a synthetic data set of our circuit parts by building 3D solid models of them and using a gaming engine to render the objects in all orientations and lighting conditions against thousands of random background images.

Animation showing images from our synthetic training data generation system
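To make the compositing idea concrete, here is a minimal sketch, assuming pre-rendered part images with transparency in a renders/ folder and scene photos in a backgrounds/ folder (both hypothetical). The real pipeline used a gaming engine to render poses and lighting directly, so this only approximates the final compositing and labeling step.

```python
# Sketch of compositing synthetic training images: pre-rendered part images
# (with an alpha channel) are pasted at random poses onto random background
# photos, and a bounding-box label is written for each sample. The folder
# names and label format are illustrative, not the actual pipeline.
import json
import random
from pathlib import Path
from PIL import Image

renders = list(Path("renders").glob("*.png"))          # rendered parts, RGBA
backgrounds = list(Path("backgrounds").glob("*.jpg"))  # random scene photos
Path("dataset").mkdir(exist_ok=True)

for i in range(10_000):
    bg = Image.open(random.choice(backgrounds)).convert("RGB").resize((640, 480))
    part = Image.open(random.choice(renders)).convert("RGBA")

    # Randomize orientation and apparent distance from the camera.
    part = part.rotate(random.uniform(0, 360), expand=True)
    scale = random.uniform(0.3, 0.8)
    part = part.resize((max(1, int(part.width * scale)),
                        max(1, int(part.height * scale))))

    x = random.randint(0, max(0, bg.width - part.width))
    y = random.randint(0, max(0, bg.height - part.height))
    bg.paste(part, (x, y), part)  # alpha channel acts as the paste mask

    bg.save(f"dataset/img_{i:05d}.jpg")
    label = {"bbox": [x, y, part.width, part.height]}
    Path(f"dataset/img_{i:05d}.json").write_text(json.dumps(label))
```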

On top of this machine vision system and synthetic dataset creation tool, our team built a lightweight application and graphical interface to show the user what part should be assembled next, where it should go, and in what orientation.  As the user performs a step, the system checks it for accuracy and then moves on to the next step or suggests corrective action.  We tested the system with different users and made improvements and adjustments for usability, accuracy, and latency along the way.
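Conceptually, the guidance loop can be sketched as follows; detect_parts(), show(), the step list, and the tolerance values are illustrative placeholders rather than the demo’s actual implementation.

```python
# Conceptual sketch of the step-by-step guidance loop: display the next part
# and its target pose, then verify the placement from camera detections
# before advancing or suggesting a correction. detect_parts(), show(), the
# step list, and the tolerances are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class Step:
    part_id: str
    target_xy: tuple     # expected position on the board (pixels)
    target_angle: float  # expected orientation (degrees)

ASSEMBLY = [
    Step("red_connector", (120, 240), 0.0),
    Step("blue_capacitor", (300, 180), 90.0),
]

POS_TOL = 15.0    # pixels
ANGLE_TOL = 10.0  # degrees

def placement_ok(step, det):
    dx = det["xy"][0] - step.target_xy[0]
    dy = det["xy"][1] - step.target_xy[1]
    pos_err = (dx * dx + dy * dy) ** 0.5
    angle_err = abs(det["angle"] - step.target_angle) % 360
    angle_err = min(angle_err, 360 - angle_err)
    return pos_err <= POS_TOL and angle_err <= ANGLE_TOL

def run_assembly(detect_parts, show):
    for step in ASSEMBLY:
        show(f"Place {step.part_id} at {step.target_xy}, rotated {step.target_angle} degrees")
        while True:
            detections = detect_parts()        # per-frame output of the vision system
            det = detections.get(step.part_id)
            if det and placement_ok(step, det):
                show(f"{step.part_id} placed correctly")
                break
            if det:
                show(f"Adjust {step.part_id}: move toward {step.target_xy}")
```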

Looking Forward

We’re excited about this demonstration not only because it shows how we can use AR and machine vision technology to train workers to do something economically useful, but also because we’ve built reusable components in the process that we can leverage and reference for future client projects to speed them up and reduce risk.  We’re seeing an explosion of AR, machine vision, and machine learning opportunities and are looking forward to helping more clients in this space as it develops.
