Why Your Smart Home Isn’t Truly Smart...Yet
Smart speakers have taken us a huge step forward in human-digital interactions, but the user experience must become more intuitive to deliver on the promise of a smart home.
As these devices become increasingly ubiquitous, they give easy access to digital information in the cloud: 90% of people use them for music and 80% for real-time information like weather and traffic. At the same time, the number of other smart devices available for the home has grown, with everything from toilets to ovens now carrying an internet connection. Smart devices have the potential to improve productivity, safety, and enjoyment in our lives, but using a voice user interface (VUI) to interact with these physical devices is still a frustrating experience.
One reason is that today's VUIs lack much of the richness we take for granted in everyday interactions with other people. An improved UI would understand context, such as our gestures, body language, facial expressions, and physical surroundings. It would go beyond speech-to-text and derive meaning from our voice itself, including things like emotion and identity. The ultimate goal, really, is "Zero UI": an interface so natural you don't realize it's an interface.
We've been developing solutions to this challenge, incorporating different sensor technologies and more capable artificial intelligence to make human-digital interactions more intuitive, both for our clients and through technology demonstrations. One example is a home assistant we call Gerard. Gerard integrates machine vision, voice and gesture recognition, and 3D mapping, which together let it understand much more than the words you say. For example, Gerard knows whether you are looking at it when you talk, so an awkward wake word is not necessary. Gerard also understands what objects are around you, where you are, and where it is, so you can gesture at objects and say things like "turn on that light," or even ask "where did I leave my keys?"
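To make this style of interaction concrete, here is a minimal sketch of the two ideas behind Gerard: gating speech on gaze instead of a wake word, and resolving a deictic reference like "that light" against a pointing direction. It assumes upstream perception already supplies a gaze flag, a transcript, a pointing ray, and a 3D object map; `SceneObject` and `handle_utterance` are hypothetical names, not Gerard's actual API.

```python
import math
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    position: tuple  # (x, y, z) bearing from the user, taken from the 3D room map

def cosine(u, v):
    """Cosine similarity between the pointing ray and an object's bearing."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else -1.0

def handle_utterance(facing_device, transcript, pointing_ray, scene):
    """Return (transcript, target_name), or None when the speech was
    not addressed to the device -- gaze stands in for the wake word."""
    if not facing_device:
        return None  # user wasn't looking at the device: ignore the speech
    target = None
    if " that " in f" {transcript} ":
        # resolve "that ..." to the object best aligned with the pointing ray
        target = max(scene, key=lambda o: cosine(pointing_ray, o.position)).name
    return (transcript, target)
```

For example, with a desk lamp off to the user's right and a hall light behind them, "turn on that light" spoken while pointing right resolves to the desk lamp, while the same words spoken away from the device are ignored entirely.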
It's now possible to bring interactions like these to many devices around the home beyond smart speakers. Nest executed this nicely with its thermostat, which "wakes up" when you approach it, without requiring touch or voice. But the company has also seen the challenges of Zero UI: its first-generation smoke and carbon monoxide (CO) detector had a gesture interface for silencing the alarm that proved unreliable and was eventually disabled. This shows that solving these problems isn't as easy as just adding sensors to a device. It requires a full-system approach, which, in Nest's case, meant algorithms that worked reliably in a challenging environment. We have experience developing gesture-based algorithms and know how hard it can be to make them work reliably across diverse populations and use cases.
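One common guard against the kind of false triggers that plagued wave-to-dismiss interfaces is temporal filtering: only fire a gesture event after the recognizer reports high confidence for several consecutive frames. A minimal sketch of that idea follows; `GestureGate` is a hypothetical name and the thresholds are illustrative, not tuned values from any shipped product.

```python
class GestureGate:
    """Fire a gesture event only after `required_frames` consecutive
    frames at or above `threshold`, then reset so it fires once."""

    def __init__(self, threshold=0.9, required_frames=5):
        self.threshold = threshold
        self.required = required_frames
        self.streak = 0  # count of consecutive high-confidence frames

    def update(self, confidence: float) -> bool:
        """Feed one frame's recognizer confidence; True means 'trigger now'."""
        if confidence >= self.threshold:
            self.streak += 1
        else:
            self.streak = 0  # any dropout restarts the streak
        if self.streak >= self.required:
            self.streak = 0  # reset so one gesture fires one event
            return True
        return False
```

A single noisy high-confidence frame, or even four in a row, does nothing; only a sustained, deliberate gesture gets through. Tuning the trade-off between this kind of robustness and responsiveness, across diverse users and lighting conditions, is exactly where the hard system-level work lies.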
A systems engineering approach is what Synapse and Cambridge Consultants specialize in, and developing a device that benefits from natural interactions like these can draw on many of our technical capabilities. If integrating a natural UI is on your product roadmap, or you’d like to discuss advances in intuitive user interfaces, please get in touch!
Connected devices are leveraging rapid advances in voice control and machine vision to enable more seamless user experiences, known as natural user interfaces (NUIs) or "zero UI." But "seamless" and "natural" to whom? And in what context? Combining physical and digital interfaces so that a product supports multiple modes of interaction yields the most accessible products and the most intuitive experiences.
Consumers are seeking out natural user interfaces: technology that can be controlled by whatever method is most convenient in the moment, thereby blending seamlessly into our surroundings. Today’s smart devices attempt to achieve this by combining physical control interfaces with layers of digital innovation, from voice commands and gesture recognition to gaze tracking and machine vision. But is this a guaranteed improvement? Not without deliberate design.
We believe that connecting products to the internet and otherwise adding digital “smarts” can enable powerful new functionality and make products much more useful to their users. That said, we care deeply about the user experience of physical products, and we feel strongly that a product’s industrial design and user experience should be constrained as little as possible by the addition of digital technology. That’s why we started exploring the concept of reactive physical control interfaces (RPCIs): physical controls that self-actuate in response to secondary digital control.
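As a rough illustration of the RPCI idea, consider a motorized dial whose physical position stays in sync with its digital state no matter which side initiates the change: a hand on the knob updates the digital state, and an app or voice command drives the motor so the knob physically moves to match. The class below is a hypothetical sketch; the motor and telemetry hooks are placeholders, not a real device API.

```python
class ReactiveDial:
    """Sketch of a reactive physical control: a motorized dial that
    self-actuates when the change comes from the digital side."""

    def __init__(self):
        self.position = 0  # dial position on a 0-100 scale

    def on_physical_turn(self, new_position: int):
        # User turned the knob: clamp, then update digital state to match.
        self.position = max(0, min(100, new_position))
        self.publish_state()

    def on_digital_command(self, new_position: int):
        # App/voice command: update state, then physically move the knob.
        self.position = max(0, min(100, new_position))
        self.drive_motor_to(self.position)
        self.publish_state()

    def drive_motor_to(self, position):
        pass  # placeholder: actuate a servo/stepper to this position

    def publish_state(self):
        pass  # placeholder: report the new state to the app/cloud
```

The design point is that the physical control never lies: after a remote change, the knob's position still tells the truth about the device's state, so the digital layer augments the industrial design rather than undermining it.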
Synapse is a product development firm. We work with the best companies in the world to drive innovation and introduce cutting-edge devices that positively impact our lives. Fueled by a desire to solve complex engineering challenges, we develop products that transform brands and accelerate advances in technology.