There was a lot of excitement around innovation in augmented reality (AR) at CES this year. With the advent of Windows Mixed Reality, the announcement of the Magic Leap One headset, and the HTC Vive Pro’s front-facing cameras, opportunities abound for developers to create compelling AR applications. In previous years, available augmented reality systems lacked the processing power necessary to drive the artificial intelligence engines required to build the engaging experiences that would spur mass adoption. As a result, experiences have generally been limited to projecting screens and objects into space rather than offering full contextual awareness of the scene.
Now, however, with further advances in mobile computing power, combined with rapid progress in the fields of machine learning and computer vision, augmented reality developers stand poised to overcome long-standing barriers to workplace AR adoption. By integrating artificial intelligence techniques with these more powerful AR platforms, developers will be able to create a new category of enterprise application: fully customized AR experiences that train and assist workers as they perform tasks.
Cambridge Consultants and Synapse enjoy a unique position among product development consultants. Our breadth of experience and expertise covers every phase of development, from ideation, research, and design to engineering and manufacturing. The augmented reality and AI demos we shared at CES demonstrate our ability to integrate AI into any product or engagement. Using our neural computer vision training techniques along with our knowledge of deep learning, we can quickly assemble solutions in a wide variety of situations, ranging from manufacturing and industrial maintenance to training and quality control.
It’s often a significant challenge to collect data in environments where conditions are difficult for electronics to tolerate, such as industrial facilities, commercial kitchens, or the seafloor. But with smart engineering and the right approach, data collected in these harsh environments can provide enormous value.
Connected devices are leveraging rapid developments in voice control and machine vision to enable more seamless user experiences, known as natural user interfaces (NUIs) or zero UI. But “seamless” and “natural” to whom? And in what context? Combining physical and digital interfaces so that a product can support various modes of interaction results in the most accessible products and the most intuitive experiences.
Consumers are seeking out natural user interfaces—technology that can be controlled by whatever method is most convenient in the moment, blending seamlessly into our surroundings. Today’s smart devices attempt to achieve this by combining physical control interfaces with layers of digital innovation, from voice commands and gesture recognition to gaze tracking and machine vision. But is this a guaranteed improvement? Not without deliberate design.
We believe that connecting products to the Internet and otherwise adding digital “smarts” to them can enable powerful new functionality and make products much more useful to their users. That being said, we care deeply about the user experience of physical products. We feel strongly that the industrial design and user experience of a product should be constrained as little as possible by the addition of digital technology. That’s why we started exploring the concept of reactive physical control interfaces (RPCIs)—physical controls that self-actuate in response to secondary digital control.
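The RPCI idea can be illustrated with a minimal sketch: a motorized volume knob whose physical position stays in sync with its digital state, so turning it by hand and issuing a voice or app command are equally valid. Everything here is hypothetical—the class names and the mock actuator stand in for real motor-control hardware, which this sketch does not drive.

```python
class MotorizedKnob:
    """Mock actuator: a real RPCI would drive a small motor or servo here."""

    def __init__(self):
        self.position = 0.0  # normalized knob position, 0.0 to 1.0

    def move_to(self, target):
        # Self-actuation: the knob physically rotates to the target position.
        self.position = target

    def read_position(self):
        return self.position


class ReactiveVolumeControl:
    """Keeps a physical knob and a digital volume setting in sync."""

    def __init__(self, knob):
        self.knob = knob
        self.volume = 0.0

    def set_volume_digitally(self, level):
        # Secondary digital control (e.g., a voice command or companion app):
        # the digital state changes, and the knob self-actuates to match.
        self.volume = level
        self.knob.move_to(level)

    def on_physical_turn(self, level):
        # The user turns the knob by hand; the digital state follows.
        self.knob.position = level
        self.volume = level


knob = MotorizedKnob()
ctrl = ReactiveVolumeControl(knob)
ctrl.set_volume_digitally(0.8)  # e.g., "set volume to 80 percent"
print(knob.read_position())     # knob has physically moved to 0.8
```

The point of the sketch is the symmetry: neither interface is privileged, so the physical control never ends up displaying stale state after a digital interaction.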
Synapse is a product development firm. We work with the best companies in the world to drive innovation and introduce cutting-edge devices that positively impact our lives. Fueled by a desire to solve complex engineering challenges, we develop products that transform brands and accelerate advances in technology.