Here at Synapse, we spend a good deal of time watching the major trends in technology that underlie product development. Over the past year, artificial intelligence has become a major theme, both for us and our industry as a whole—we recently spent some time with our colleagues at Cambridge Consultants thinking about what the latest trends in AI and related technologies mean for our clients in the year ahead.
Research laboratories at the biggest companies and institutions in the tech industry, as well as some focused startups, are making huge strides in advanced AI techniques like deep learning, and are taking advantage of ever-more-powerful hardware to execute massively parallel algorithms. As an engineering services company, we are also pushing the boundaries of what is possible, with a focus on enabling next-generation AI-based product designs for our clients.
We see rapid and significant progress toward human-level AI systems, with driverless vehicles a flagship technology getting plenty of recent media attention. But along with novel algorithmic techniques, these systems rely on massive amounts of data to do their job. These data sets can be very expensive and time-consuming to create, yet are needed both for training the systems (e.g. in visual recognition tasks) and for interpreting what they sense (e.g. detailed digital maps). Our engineers have made strides toward building capable systems without an expensive, time-consuming data curation effort.
Our art-generating system, Vincent, demonstrates a novel application of AI in augmenting human creativity, though the underlying advancement is in training techniques that work with limited data sets. Other recent projects make use of data synthesis techniques to automatically generate thousands or millions of labeled training examples instead of a cumbersome manual process.
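The data synthesis idea can be sketched in a few lines of Python. The example below is purely illustrative—the class names, waveform model, and dataset sizes are our assumptions, not details of the projects mentioned above. It programmatically generates thousands of labeled one-dimensional "sensor" waveforms, each randomized in amplitude, phase, and noise, so no manual annotation is ever required:

```python
import math
import random

def synthesize_example(label: str) -> list:
    """Generate one synthetic waveform for the given class.

    Hypothetical model: each class is a sine wave at a class-specific
    frequency, with random amplitude, phase, and additive noise, so
    no two generated examples are identical.
    """
    freq = {"low": 2.0, "mid": 5.0, "high": 9.0}[label]
    amp = random.uniform(0.5, 1.5)
    phase = random.uniform(0.0, 2 * math.pi)
    return [
        amp * math.sin(2 * math.pi * freq * t / 100 + phase)
        + random.gauss(0, 0.1)
        for t in range(100)
    ]

def synthesize_dataset(examples_per_class: int) -> list:
    """Produce a labeled dataset with zero manual labeling effort."""
    classes = ["low", "mid", "high"]
    return [
        (synthesize_example(c), c)
        for c in classes
        for _ in range(examples_per_class)
    ]

# 3,000 labeled examples, generated in well under a second.
data = synthesize_dataset(1000)
```

The same pattern scales to images or audio: a renderer or simulator plays the role of `synthesize_example`, and the labels come for free because the generator knows what it produced.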
Virtual and augmented reality are also very much in the spotlight—we’ve seen incredible demos that combine the most advanced display hardware with computer vision that overlays visual information on what a user sees. We’ve been prototyping how this technology can be brought to bear in applications whose user interfaces are not complex head-mounted displays or mobile phones, and specifically how these systems can enhance human capabilities using robust computer vision that can model and understand the world around us.
To this end, we have showcased an augmented reality system (we call it HECTAR) that uses computer vision techniques to guide a person through an assembly task, with visual cues projected directly onto the parts being assembled. We have also developed a machine-vision system that helps hurried users sort their recyclables, compostables, and garbage into the proper bin.
And of course, everyone is talking about voice technology; it’s clear to us that voice UI is going to be a big factor in product development going forward. So we’re focusing on the specific problems to be solved outside of in-home voice assistants: how to efficiently customize the interaction model, how to enable the technology under resource constraints (battery power, or less-expensive computation platforms), and how to make it function reliably outside the home (e.g. offices, connected venues, and industrial settings) using signal processing techniques for noise reduction and speaker isolation. We’re also pushing what voice assistants can do beyond speech recognition, such as identifying the speaker and understanding the context behind the speech.
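To give a concrete flavor of the signal processing involved, here is a minimal sketch of spectral subtraction, a classic noise reduction technique. This is a single-frame textbook illustration, not production DSP—real systems process overlapping windowed frames and estimate the noise floor adaptively—and the tone and noise signals in the demo are invented for the example:

```python
import numpy as np

def spectral_subtract(noisy: np.ndarray, noise_ref: np.ndarray) -> np.ndarray:
    """Remove an estimated stationary noise floor from a noisy signal.

    Subtracts the magnitude spectrum of a noise-only reference from the
    noisy signal's magnitude spectrum, floors negative values at zero,
    and reconstructs using the noisy signal's phase.
    """
    spectrum = np.fft.rfft(noisy)
    noise_mag = np.abs(np.fft.rfft(noise_ref))
    clean_mag = np.maximum(np.abs(spectrum) - noise_mag, 0.0)
    # Reuse the noisy phase; for speech, phase errors are largely inaudible.
    return np.fft.irfft(clean_mag * np.exp(1j * np.angle(spectrum)), n=len(noisy))

# Demo: a 50 Hz "voice" tone buried in white noise, with a separate
# noise-only recording used as the reference.
rng = np.random.default_rng(0)
t = np.arange(1024) / 8000.0
tone = np.sin(2 * np.pi * 50 * t)
noisy = tone + rng.normal(0.0, 0.5, size=1024)
noise_ref = rng.normal(0.0, 0.5, size=1024)
cleaned = spectral_subtract(noisy, noise_ref)
```

Because each output frequency bin's magnitude is at most the input bin's magnitude, the cleaned signal never has more energy than the noisy one—the subtraction can only remove, never add.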
All around us there are examples of AI quickly moving out of the laboratory and into our everyday lives. Synapse and Cambridge Consultants are excited to work with our clients to combine their novel product concepts and enabling technologies with advances that are happening in AI to see what product and service transformations will emerge to make a positive impact on our world.
Connected devices are leveraging rapid developments in voice control and machine vision to enable more seamless user experiences, known as natural user interfaces (NUIs) or zero UI. But “seamless” and “natural” to whom? And in what context? Combining physical and digital interfaces so that a product supports multiple modes of interaction yields the most accessible products and the most intuitive experiences.
Consumers are seeking out natural user interfaces—technology that can be controlled by whatever method is most convenient in the moment, and that therefore blends seamlessly into our surroundings. Today’s smart devices attempt to achieve this by combining physical control interfaces with layers of digital innovation, from voice commands and gesture recognition to gaze tracking and machine vision. But is this a guaranteed improvement? Not without deliberate design.
We believe that connecting products to the Internet and otherwise adding digital “smarts” can enable powerful new functionality and make products much more useful. That said, we care deeply about the user experience of physical products, and we feel strongly that a product’s industrial design and user experience should be constrained as little as possible by the addition of digital technology. That’s why we started exploring the concept of reactive physical control interfaces (RPCIs)—physical controls that self-actuate in response to secondary digital control.
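To make the RPCI idea concrete, here is a hypothetical sketch of the control logic for a motorized knob that stays in sync whether the user turns it by hand or the product sets it digitally. The class, method names, and hooks are all our own invention for illustration—an actual RPCI would involve motor drivers and sensing hardware:

```python
class ReactiveKnob:
    """Sketch of a reactive physical control: a motorized knob the user
    can turn by hand, and that the product can also drive to a new
    position, so the physical state never contradicts the digital state."""

    def __init__(self):
        self.position = 0.0  # normalized 0.0 .. 1.0

    def user_turned(self, new_position: float) -> None:
        """Physical input: clamp and propagate to the digital side."""
        self.position = max(0.0, min(1.0, new_position))
        self.on_digital_update(self.position)

    def digital_set(self, new_position: float) -> None:
        """Digital input (app, voice command, automation): actuate the
        motor so the physical knob moves to match the new state."""
        self.position = max(0.0, min(1.0, new_position))
        self.drive_motor_to(self.position)

    def on_digital_update(self, position: float) -> None:
        pass  # hook: notify the app/cloud of a manual change

    def drive_motor_to(self, position: float) -> None:
        pass  # hook: command the motor controller
```

The key design point is symmetry: both entry points converge on the same clamped state, so neither interface can leave the other stale.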