As digital innovation explodes, the interface between people and the digital world is undergoing an equally impactful transformation. Whether you aspire to launch a breakthrough product or to significantly improve your operating efficiency, the latest in sensing, machine learning, and user interfaces will enable you to capitalize on the game-changing trends of human augmentation.
Fulfilling the Promise of Natural UI Through Inclusive Design
Connected devices are leveraging rapid developments in voice control and machine vision to enable more seamless user experiences, known as natural user interfaces (NUIs) or zero UI. But “seamless” and “natural” to whom? And in what context? Combining physical and digital interfaces so that a product supports multiple modes of interaction yields the most accessible products and the most intuitive experiences.
Natural UI: 5 Design Tenets for Uniting Physical and Digital Interfaces
Consumers are seeking out Natural User Interfaces (NUIs)—technology that can be controlled by whatever method is most convenient in the moment, thereby blending seamlessly into our surroundings. Today’s smart devices attempt to achieve this by combining physical control interfaces with layers of digital innovation, from voice commands and gesture recognition to gaze tracking and machine vision. But is this a guaranteed improvement? Not without deliberate design.
Bringing Physical User Interfaces Back in a Connected World: An Intro to RPCIs
We believe that connecting products to the Internet and otherwise adding digital “smarts” to them can enable powerful new functionality and make products much more useful to their users. That said, we care deeply about the user experience of physical products. We feel strongly that the industrial design and user experience of a product should be constrained as little as possible by the addition of digital technology. That’s why we started exploring the concept of reactive physical control interfaces (RPCIs)—physical controls that self-actuate in response to secondary digital control.
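To make the RPCI idea concrete, here is a minimal sketch of one classic example: a motorized volume knob that physically turns itself when the volume is changed digitally, so the physical control always reflects the true state. The class and method names are illustrative assumptions, not from any actual Synapse implementation.

```python
# Hypothetical RPCI sketch: a motorized knob that self-actuates so its
# physical position always matches the digitally controlled volume.

class MotorizedKnob:
    """A physical knob with a servo that can rotate it to a target position."""

    def __init__(self, position: float = 0.0):
        self.position = position  # 0.0-1.0, the knob's physical angle

    def actuate_to(self, target: float) -> None:
        # On real hardware this would drive a servo; here we just set state.
        self.position = max(0.0, min(1.0, target))


class ReactiveVolumeControl:
    """Keeps the digital volume and the physical knob mutually consistent."""

    def __init__(self, knob: MotorizedKnob):
        self.knob = knob
        self.volume = knob.position

    def on_physical_turn(self, new_position: float) -> None:
        # The user turned the knob by hand: digital state follows physical state.
        self.volume = new_position

    def on_digital_command(self, new_volume: float) -> None:
        # Volume changed via app or voice: the knob self-actuates to match,
        # so the physical interface never "lies" about the current state.
        self.volume = new_volume
        self.knob.actuate_to(new_volume)
```

The key design point is the last method: a purely passive knob would drift out of sync whenever the volume changed digitally, whereas an RPCI closes that loop by moving the physical control itself.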
[Watch] Stop Yelling at Alexa, She Doesn’t Get You…Yet
The recent success of smart speakers has been a great leap in human-digital interaction, but developers and companies still have a lot of ground to cover before smart devices and environments offer truly engaging and intuitive interfaces. While voice command technology can handle simple tasks, the interaction often falls short because it has only a modest understanding of human intent.
Moving Beyond the Wake Word
The ubiquitous “wake words” used by today's smart speakers can make for an awkward if not frustrating experience when designed into custom devices. Recent technology advancements and some creative design could allow us to get the attention of our digital assistants in a more natural way.
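One way to get an assistant's attention without a wake word is to gate listening on other cues, such as whether the user is looking at the device when they start speaking. The toy state machine below sketches that idea; the states, signals, and thresholds are assumptions for illustration, not a description of any shipping product.

```python
# Illustrative sketch: replacing a wake word with attention cues.
# The assistant starts listening when the user's gaze is on the device
# and speech begins, and returns to idle when both cues go away.

from enum import Enum, auto


class State(Enum):
    IDLE = auto()
    LISTENING = auto()


class AttentionGate:
    """Toy attention gate driven by gaze detection and voice activity."""

    def __init__(self):
        self.state = State.IDLE

    def update(self, gaze_on_device: bool, voice_active: bool) -> State:
        if self.state is State.IDLE and gaze_on_device and voice_active:
            # Eye contact plus speech onset substitutes for the wake word.
            self.state = State.LISTENING
        elif self.state is State.LISTENING and not (gaze_on_device or voice_active):
            # User looked away and stopped talking: stop listening.
            self.state = State.IDLE
        return self.state
```

A real system would fuse noisier signals (gaze confidence, voice activity detection, proximity) over time, but the principle is the same: attention is inferred from context rather than demanded with a keyword.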
[Watch] Why Your Smart Home Isn’t Truly Smart...Yet
Smart speakers have taken us a huge step forward in human-digital interactions, but the user experience must become more intuitive to deliver on the promise of a smart home. We present a technology demonstration that shows one approach to making interactions with smart devices more natural.
Using Autonomous Robotics to Unlock a Next-Generation Natural UI
Think about your last encounter with a robot. For most of us, communicating with robots isn’t like communicating with another person—not yet anyway! But we’ve been working on technology that enables a much more natural, person-to-person style of interaction. By combining artificial intelligence, voice and gesture recognition, 3D mapping, and spatial awareness, we can create an autonomous robot that makes interacting with technology as intuitive as talking to a friend.
Alexa, what more could you do?
We're exploring ways to take the voice-enabled user interface to the next level by augmenting it with other technologies or optimizing it with custom engineering.
Looking Ahead to the Next Year of AI In Product Development
Artificial intelligence (AI) had a starring role at this year’s Consumer Electronics Show and has been everywhere in the news since. At Synapse, we’re excited to build this trending tech into novel products and look forward to pushing AI to the next level in 2018.
Combining AR and AI: Contextual Awareness
AR exhibits at CES demonstrated some leaps forward, combining the latest in AI and machine vision with progress in AR displays and head-mounted displays (HMDs) to take AR experiences and opportunities to the next level.
Automation, AR, and Jobs: A Demonstration
Synapse has created a demonstration system showing how AR and machine vision technology can train industrial workers to perform high-mix, low-volume assembly tasks that aren't good candidates for full automation.
Automation, AR, and Jobs: How AR Will Train and Assist Workers in the Context of Automation
Recent news headlines have highlighted jobs being lost to robots and automation. Jobs that involve high-volume, repetitive tasks are at the highest risk of being displaced. For more complex, non-repetitive, and fluid tasks, however, augmented reality (AR) technologies show promise: by leveraging the brains and hands of workers, AR can train and assist them to perform these roles more economically than full automation could.
New Tech On Display at Display Week
We highlight two new display technologies on display at Display Week, the Society for Information Display's annual symposium.
At first glance, a gamer playing Pokémon Go has little in common with a surgeon saving lives in an operating theatre. But dig a little deeper and you’ll discover that might not be the case for much longer...