Jeff Hebert

Automation, AR, and Jobs: How AR Will Train and Assist Workers in the Context of Automation

Automation replacing workers has been a persistent news headline recently. Advances in technology have shifted the economics so that the return-on-investment equation for automation now pencils out for more and more applications. An industry like fast food, for example, is seeing the encroachment of automation as wages rise and the cost of technology falls—Wendy’s is adding self-service ordering kiosks to 1,000 (15%) of its stores in 2017 alone. Similarly, Amazon is seeking to fully automate brick-and-mortar checkout with its Go grocery store concept. The predictions can sound pretty bleak.

While technology advancements in areas like machine learning and robotics are major factors enabling more widespread automation, the same advancements are also fueling a revolution in human augmentation. An explosion of artificial intelligence and augmented reality (AR) technologies is shedding light on the future capabilities of augmented humans. While automation will take over high-volume, repetitive tasks, where the high cost of infrastructure can be more easily recouped over time, lower-volume tasks, especially those requiring flexibility and adaptation, will be better served by augmented workers than by complete automation.

What is Augmented Reality?

AR means the real-time use of information, delivered to the user via graphics, audio, text, and other methods, to augment human senses, knowledge, capabilities, and experiences.  At a high level, it represents contextual computing and the future of human-machine interfaces.

AR Embodiments

Most people think of head-mounted displays (HMDs) when AR is mentioned, Google Glass being the early example. While HMDs will be a common way to deliver AR (they track the user’s changing field of view and keep the hands free), not all use cases lend themselves to HMDs. For the lowest-cost and least-immersive AR experiences, mobile phones and tablets are proving to be a powerful platform. Apple recently incorporated ARKit into iOS 11, enabling developers to use the onboard camera, inertial sensors, and display to deliver AR experiences through app development alone. For use cases requiring custom sensing or alternative displays, non-HMD form factors are more appropriate, such as heads-up displays in automobiles and infrared or chemical sensors for industrial workers.

How AR and Automation Will Coexist

It makes the most sense to replace jobs with automation when those jobs involve highly repetitive tasks—especially those which don’t require complex perception, manipulation, creativity, or social intelligence.  Whether it’s repetitive assembly, scanning barcodes, or sorting packages, automation pencils out when a system can perform a predictable task over and over again.  This enables engineers to implement relatively simple algorithms, robotics, and fixtures while avoiding higher-level artificial intelligence and robotic flexibility.

But many jobs and tasks aren’t so repetitive. For example, an auto mechanic must perform hundreds of procedures on different models and different areas of cars using many different tools. Developing artificial intelligence sophisticated enough, and articulation and actuation flexible enough, to fully take over such tasks from humans would be a very significant undertaking. Given this, we will see a proliferation of AR systems (HMDs, non-HMDs, and mobile devices) that augment workers with the contextual information necessary to perform and check tasks as they go. The auto mechanic (or physician, for that matter) will benefit from sensors feeding machine-learning algorithms that aid diagnosis and check procedures for accuracy and completeness, but will not lose their job to automation any time soon.

AR promises a way to train and assist workers that will remain economically favorable for certain tasks and industries for some time to come, despite continued advancements in machine learning and robotic automation. Workers displaced by automation will give employers further economic incentive to retrain people through AR, in a way that wasn’t possible during previous periods of technology-driven job loss. At Synapse, we’re investing in our machine learning, sensing, and robotics capabilities to support both automation and AR in this rapidly changing landscape.

Main image via Adobe Stock
