Looking Ahead to the Next Year of AI in Product Development
Here at Synapse, we spend a good deal of time watching the major technology trends that underlie product development. Over the past year, artificial intelligence has become a major theme, both for us and for our industry as a whole. We recently spent some time with our colleagues at Cambridge Consultants thinking about what the latest trends in AI and related technologies mean for our clients in the year ahead.
Research laboratories at the biggest companies and institutions in the tech industry, as well as some focused startups, are making huge strides in advanced AI techniques like deep learning, and are taking advantage of ever-more-powerful hardware to execute massively parallel algorithms. As an engineering services company, we are also pushing the boundaries of what is possible, with a focus on enabling next-generation AI-based product designs for our clients.
Doing More with Less Data
We see rapid and significant progress toward human-level performance in AI systems, with driverless vehicles being a flagship technology getting lots of recent media attention. But along with novel algorithmic techniques, these systems rely on massive amounts of data to do their job. These data sets can be very expensive and time-consuming to create, yet they are needed both for training the systems (e.g., in visual recognition tasks) and for interpreting what they sense (e.g., detailed digital maps). Our engineers have made strides toward building capable systems without the need for an expensive, time-consuming data curation effort.
Our art-generating system, Vincent, demonstrates a novel application of AI in augmenting human creativity, though the underlying advance is in training techniques that work with limited data sets. Other recent projects use data synthesis techniques to automatically generate thousands or millions of labeled training examples, replacing a cumbersome manual labeling process.
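To make the data synthesis idea concrete, here is a minimal, hypothetical sketch (not Synapse's actual pipeline) of generating labeled training examples programmatically: each example is a rendered image of a square or a circle on a noisy background, labeled by shape, so thousands of examples cost only compute time rather than manual annotation.

```python
import numpy as np

def synthesize_example(rng, size=32):
    """Render one labeled training example: a bright square or circle
    on a noisy background. Label 0 = square, 1 = circle."""
    label = int(rng.integers(0, 2))
    img = rng.normal(0.0, 0.1, (size, size))   # background noise
    cy, cx = rng.integers(8, size - 8, 2)      # random shape placement
    r = int(rng.integers(3, 7))                # random shape size
    yy, xx = np.mgrid[:size, :size]
    if label == 0:  # square
        mask = (np.abs(yy - cy) <= r) & (np.abs(xx - cx) <= r)
    else:           # circle
        mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2
    img[mask] += 1.0                           # draw the shape
    return img, label

def synthesize_dataset(n, seed=0):
    """Generate n labeled examples; every image comes with a free label."""
    rng = np.random.default_rng(seed)
    pairs = [synthesize_example(rng) for _ in range(n)]
    images = np.stack([p[0] for p in pairs])
    labels = np.array([p[1] for p in pairs])
    return images, labels

images, labels = synthesize_dataset(1000)
```

In a real project the renderer would be far richer (lighting, occlusion, sensor noise models), but the payoff is the same: the label is known by construction, so no human has to annotate anything.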
Augmented Reality Beyond Phones and Glasses
Virtual and augmented reality are also very much in the current spotlight. We've seen incredible demos that combine the most advanced display technology with computer vision that overlays visual information on what a user sees. We've been prototyping how this technology can be brought to bear for applications whose user interfaces are neither complex head-mounted displays nor mobile phones, and specifically how these systems can enhance human capabilities using robust computer vision that models and understands the world around us.
To this end, we have showcased an augmented reality system, which we call HECTAR, that uses computer vision techniques to guide a human through an assembly task, with visual cues projected directly onto the parts being assembled. We have also developed a machine-vision recycling assistant that helps hurried users sort their recyclables, compostables, and garbage into the proper bins.
Novel Voice User Interfaces
And of course, everyone is talking about voice technology; it's clear to us that voice UI is going to be a big factor in product development going forward. So we're focusing on the specific problems to be solved outside of in-home voice assistants: how to efficiently customize the interaction model, enable the technology under resource constraints (battery power or less-expensive computation platforms), and make it function reliably outside the home (e.g., offices, connected venues, and industrial settings) using signal processing techniques for noise reduction and speaker isolation. We're also pushing what voice assistants can do beyond speech recognition, such as speaker identification and understanding the context behind the speech.
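As an illustration of the kind of noise-reduction signal processing mentioned above, here is a minimal spectral-subtraction sketch (a classic textbook technique; the function name and parameters are our own illustration, not any product's API): the magnitude spectrum of a noise-only recording is estimated once, then subtracted from each frame of the noisy signal before resynthesis by overlap-add.

```python
import numpy as np

def spectral_subtraction(signal, noise_profile, frame=256, hop=128):
    """Suppress stationary background noise by subtracting an estimated
    noise magnitude spectrum from each windowed frame, then reconstructing
    the signal with overlap-add."""
    window = np.hanning(frame)
    # Estimate the noise magnitude spectrum from a noise-only recording.
    noise_mag = np.abs(np.fft.rfft(noise_profile[:frame] * window))
    out = np.zeros(len(signal))
    norm = np.zeros(len(signal))
    for start in range(0, len(signal) - frame, hop):
        chunk = signal[start:start + frame] * window
        spec = np.fft.rfft(chunk)
        # Subtract the noise estimate, flooring magnitudes at zero.
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)
        phase = np.angle(spec)  # keep the noisy phase (standard shortcut)
        cleaned = np.fft.irfft(mag * np.exp(1j * phase), frame)
        out[start:start + frame] += cleaned * window
        norm[start:start + frame] += window ** 2
    # Normalize for the overlapping windows, avoiding division by zero.
    return out / np.maximum(norm, 1e-8)
```

Production voice front-ends typically go further, with adaptive noise tracking and beamforming for speaker isolation, but this captures the core idea: the quieter the estimated noise floor, the more of the speech survives subtraction.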
All around us there are examples of AI quickly moving out of the laboratory and into our everyday lives. Synapse and Cambridge Consultants are excited to work with our clients, combining their novel product concepts and enabling technologies with the advances happening in AI, to see what product and service transformations will emerge to make a positive impact on our world.