Jake Sprouse
Director of Software Engineering

Looking Ahead to the Next Year of AI In Product Development

Here at Synapse, we spend a good deal of time watching the major trends in technology that underlie product development. Over the past year, artificial intelligence has become a major theme, both for us and our industry as a whole—we recently spent some time with our colleagues at Cambridge Consultants thinking about what the latest trends in AI and related technologies mean for our clients in the year ahead.

Research laboratories at the biggest companies and institutions in the tech industry, as well as some focused startups, are making huge strides in advanced AI techniques like deep learning, and are taking advantage of ever-more-powerful hardware to execute massively parallel algorithms. As an engineering services company, we are also pushing the boundaries of what is possible, with a focus on enabling next-generation AI-based product designs for our clients.

Doing More with Less Data

We see rapid and significant progress in human-level AI systems, with driverless vehicles being a flagship technology getting lots of recent media attention. But along with novel algorithmic techniques, these systems rely on massive amounts of data to do their job. These data sets can be very expensive and time-consuming to create, yet are needed both for training the systems (e.g. in visual recognition tasks) and for interpreting what they sense (e.g. detailed digital maps). Our engineers have made strides toward building capable systems without the need for an expensive, time-consuming data curation effort.

Duncan Smith, Head of the ICE division at Cambridge Consultants, describes Vincent’s deep learning capabilities to the crowd at CES 2018.

Our art-generating system, Vincent, demonstrates a novel application of AI in augmenting human creativity, though the underlying advance is in training techniques that work with limited data sets. Other recent projects use data synthesis techniques to automatically generate thousands or millions of labeled training examples, replacing a cumbersome manual labeling process.
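
To make the data-synthesis idea concrete, here is a minimal sketch of a programmatic labeling pipeline. It renders simple geometric shapes as stand-ins for real object classes, so every training example comes with its label for free; the shape classes, sizes, and counts are illustrative assumptions, not the pipeline behind any of our actual projects.

```python
# Minimal sketch of data synthesis for visual recognition: instead of
# hand-labeling photographs, we render labeled examples programmatically.
# Shape classes stand in for real object classes; the label comes free
# with each rendered image.
import random
from PIL import Image, ImageDraw

CLASSES = ["circle", "square", "triangle"]

def synthesize_example(size=64):
    """Render one (image, label) pair with randomized appearance."""
    label = random.choice(CLASSES)
    # A flat random background keeps the sketch short; per-pixel noise or
    # real background photos would be more realistic.
    img = Image.new("RGB", (size, size),
                    tuple(random.randint(0, 120) for _ in range(3)))
    draw = ImageDraw.Draw(img)
    # Randomize position and scale so a model can't memorize layout.
    s = random.randint(size // 4, size // 2)
    x = random.randint(0, size - s)
    y = random.randint(0, size - s)
    color = tuple(random.randint(130, 255) for _ in range(3))
    if label == "circle":
        draw.ellipse([x, y, x + s, y + s], fill=color)
    elif label == "square":
        draw.rectangle([x, y, x + s, y + s], fill=color)
    else:
        draw.polygon([(x + s // 2, y), (x, y + s), (x + s, y + s)], fill=color)
    return img, label

# Generate as many labeled examples as the training run needs.
dataset = [synthesize_example() for _ in range(10_000)]
```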

“As an engineering services company, we are also pushing the boundaries of what is possible, with a focus on enabling next-generation AI-based product designs for our clients.”  

Augmented Reality Beyond Phones and Glasses

Virtual and augmented reality are also very much in the current spotlight—we’ve seen incredible demos that combine the most advanced display technology with computer vision that overlays visual information on what a user sees. We’ve been prototyping how this technology can be brought to bear for applications with user interfaces other than complex head-mounted displays or mobile phones, and specifically how these systems can enhance human capabilities using robust computer vision that can model and understand the world around us.

To this end, we have showcased an augmented reality system that uses computer vision techniques to guide a human through an assembly task (we call it HECTAR), with visual cues projected directly onto the parts being assembled. We have also developed a machine-vision recycling assistant that helps hurried users sort their recyclables, compostables, and garbage into the proper bins.
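
As a hedged illustration of one step in a system like this: once a camera and projector share a calibration, a homography can map a part location detected in the camera image into projector pixels, so the cue lands on the part itself. The calibration values and the detected point below are made up for the sketch, a flat work surface is assumed, and this is not the actual HECTAR implementation.

```python
# Minimal sketch of the projection-mapping step in a projected-AR guide:
# a one-time calibration gives point correspondences between camera pixels
# and projector pixels; a homography then maps any detected part location
# into projector coordinates so a cue can be drawn right on the part.
# All coordinate values here are illustrative, not real calibration data.
import numpy as np
import cv2

# Four matched points from calibration: where known projector pixels
# landed in the camera image.
camera_pts = np.float32([[102, 88], [518, 95], [530, 410], [96, 402]])
projector_pts = np.float32([[0, 0], [1280, 0], [1280, 800], [0, 800]])

# Homography from camera space to projector space (assumes a planar surface).
H, _ = cv2.findHomography(camera_pts, projector_pts)

def to_projector(camera_xy):
    """Map a point detected in the camera frame into projector pixels."""
    pt = np.float32([[camera_xy]])  # shape (1, 1, 2), as OpenCV expects
    return cv2.perspectiveTransform(pt, H)[0, 0]

# e.g. the vision pipeline located the next screw hole at camera pixel (311, 247)
cue_x, cue_y = to_projector((311, 247))
overlay = np.zeros((800, 1280, 3), np.uint8)
cv2.circle(overlay, (int(cue_x), int(cue_y)), 30, (0, 255, 0), 3)  # draw the cue
```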

The HECTAR demo showcase at CES 2018.

Novel Voice User Interfaces

And of course, everyone is talking about voice technology, and it’s clear to us that voice UI is going to be a big factor in product development going forward. So we’re focusing on the specific problems to be solved outside of in-home voice assistants—like how we can efficiently customize the interaction model, enable the technology under resource constraints (battery power, or less-expensive computation platforms), and make it function reliably outside the home (e.g. offices, connected venues, and industrial settings) using signal processing techniques for noise reduction and speaker isolation. Additionally, we’re very involved in pushing what voice assistants can do beyond speech recognition, like speaker identification and understanding the context behind the speech.
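
For a flavor of the signal-processing side, below is a minimal sketch of single-channel spectral subtraction, one classic noise-reduction technique; it is not the specific front end we deploy, and the frame sizes and test signal are illustrative. A noise spectrum estimated from a speech-free interval is subtracted from each frame of the incoming signal, and the frames are recombined by overlap-add.

```python
# Minimal sketch of spectral-subtraction noise reduction. A noise magnitude
# spectrum, estimated from a noise-only recording, is subtracted from each
# frame's magnitude spectrum; the original phase is kept. Real voice front
# ends layer on much more (multi-mic beamforming, speaker isolation).
import numpy as np

def spectral_subtract(signal, noise_sample, frame_len=512, hop=256):
    """Denoise a 1-D float signal, given a recording of the noise alone."""
    window = np.hanning(frame_len)
    # Average magnitude spectrum over the noise-only recording.
    noise_frames = [noise_sample[i:i + frame_len] * window
                    for i in range(0, len(noise_sample) - frame_len + 1, hop)]
    noise_mag = np.mean([np.abs(np.fft.rfft(f)) for f in noise_frames], axis=0)

    out = np.zeros(len(signal))
    for i in range(0, len(signal) - frame_len + 1, hop):
        spec = np.fft.rfft(signal[i:i + frame_len] * window)
        # Subtract the noise estimate from the magnitude, flooring at zero.
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)
        # Resynthesize with the noisy phase and overlap-add the frames.
        out[i:i + frame_len] += np.fft.irfft(mag * np.exp(1j * np.angle(spec)),
                                             n=frame_len)
    return out

# Illustrative use: a synthetic tone standing in for speech, buried in noise.
rate = 16000
t = np.arange(rate * 2) / rate
noise_only = 0.3 * np.random.randn(len(t))
noisy_speech = np.sin(2 * np.pi * 220 * t) + 0.3 * np.random.randn(len(t))
cleaned = spectral_subtract(noisy_speech, noise_only)
```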

Walking a delegate through AKSENT, a showcase of voice understanding.

All around us there are examples of AI quickly moving out of the laboratory and into our everyday lives. Synapse and Cambridge Consultants are excited to work with our clients to combine their novel product concepts and enabling technologies with advances in AI, and to see what product and service transformations will emerge to make a positive impact on our world.

Oops! Something went wrong! Please try again!
CONTACT US

See what else is new...

November 16, 2020

[Watch] How to Incorporate Sustainability into Your Product Development Process

Learn about the methods the Synapse team has developed for understanding how to achieve sustainability goals for a new product in this TEDx Talk with Mechanical Engineering Tech Lead, Will Harrison.

October 19, 2020

The ME Team Goes Virtual: 4 Ways We’ve Tackled the Challenge of Making Things in a Virtual World

The mechanical engineering team at Synapse has gotten creative in finding solutions for working together remotely. Following Ann Torres’ (our VP of Engineering in San Francisco) great discussion with Fictiv and Cooper Perkins on How to build a Physical Product in the Virtual New World, our team tackled some of the same challenges and developed solutions of our own.

See what else is new...

October 6, 2020

Natural UI: 5 Design Tenets for Uniting Physical and Digital Interfaces

Consumers are seeking out Natural User Interfaces (NUIs)—technology that can be controlled by whatever method is most convenient in that moment, therefore blending seamlessly into our surroundings. Today’s smart devices attempt to achieve this by combining physical control interfaces with layers of digital innovation, from voice commands and gesture recognition to gaze tracking and machine vision. But is this a guaranteed improvement? Not without deliberate design.

September 8, 2020

Bringing Physical User Interfaces Back in a Connected World: An Intro to RPCIs

We believe that connecting products to the Internet and otherwise adding digital “smarts” to them can enable powerful new functionality and make products much more useful to their users. That being said, we care deeply about the user experience of physical products. We feel strongly that the industrial design and user experience of a product should be constricted as little as possible by the addition of digital technology. That’s why we started exploring the concept of reactive physical control interfaces (RPCIs)—physical controls that self-actuate in response to secondary digital control.

December 4, 2019

[Watch] Stop Yelling at Alexa, She Doesn’t Get You…Yet

The recent success of smart speakers has been a great leap in human-digital interaction, but there’s still a lot of space for developers and companies to cover to create smart devices and environments with truly engaging and intuitive interfaces. While voice command technology can handle simple tasks, the interaction can fall short because it has modest understanding of human intent.