When engineers design a robot for an automated task, the robot is typically built or selected for the worst-case scenario: moving the largest expected mass through the highest number of degrees of freedom while maintaining sufficient accuracy for the job. That’s a tall order, and it leads to expensive, heavy, over-built robots. In addition, highly accurate robotic systems can require expensive secondary encoders that are only needed for the final step of alignment, to compensate for joint-calibration errors and mechanical tolerances in the system.
Think about an ice skater lifting his partner above his head. The motion is fluid, leverages momentum, and is performed with a strikingly high payload-to-weight ratio. Classic approaches to robotic actuation and open-loop encoding often fail to achieve the same performance.
Accuracy At Scale
Another key challenge for automated robotics is scaling across large work areas and workpieces. Most robots excel at performing small-scale, precise tasks relatively close to their fixed bases, but that approach is limiting and drives up the cost of automation when the work area is large: it requires either many robots to cover the larger area or a single, especially over-built robot to do the large-scale job on its own.
Consider an order picker running through an online retailer’s warehouse. This person performs a series of varied tasks, adapting their frame of reference and accuracy to the task at hand across a huge work area. This is possible because humans use their eyes as an outside-in tracking system for their hands and feet. As Elon Musk said of automation at Tesla, “humans are underrated.” Classic approaches to automating this task require either a plethora of different robots working together to divide the labor (which is expensive) or highly sophisticated and expensive vision and navigation systems to let robots move around the space (which then don’t match the accuracy of rigid, fixed-base robots).
Outside-In Tracking Benefits
We see a breakthrough opportunity to change the design paradigm for some automation use cases and unlock much lower-cost, fit-for-purpose robots. Moving from (or supplementing) the “inside-out” encoding most robotic actuators currently use to know their position and orientation in space to an “outside-in” approach, one that establishes the absolute position and orientation of the robot and corrects for motion and deflection, is a game changer. Imagine a robotic arm as light as a swing-arm lamp, deflecting when picking up a payload but correcting for that deflection in real time; most robots are so rigid and over-built that we never see deflection like this. Imagine a robot moving around a large space while still performing tasks with sub-millimeter accuracy.
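The deflection-correction idea above can be sketched in a few lines. This is a simplified illustration with hypothetical values, not an implementation of any particular product: internal encoders report where the arm thinks its tool tip is, an external tracker measures where it actually is (including sag under load), and the command is biased by the difference.

```python
def outside_in_correction(commanded_tip, measured_tip, gain=1.0):
    """Return an adjusted tool-tip command that compensates for measured deflection.

    commanded_tip: [x, y, z] position the controller asked for (mm)
    measured_tip:  [x, y, z] position the outside-in tracker observed (mm)
    gain:          fraction of the error to correct per update (1.0 = full)
    """
    # Deflection error: where the tip actually ended up minus where we asked.
    error = [m - c for c, m in zip(commanded_tip, measured_tip)]
    # Bias the command opposite the error so the tip lands on target.
    return [c - gain * e for c, e in zip(commanded_tip, error)]

# A lightweight arm commanded to (300, 0, 150) mm sags 2 mm under payload,
# so the tracker sees (300, 0, 148). Correct by commanding 2 mm higher.
commanded = [300.0, 0.0, 150.0]
measured = [300.0, 0.0, 148.0]
corrected = outside_in_correction(commanded, measured)
# corrected == [300.0, 0.0, 152.0]
```

In a real system this loop would run continuously at the tracker’s update rate, typically with a gain below 1.0 to keep the correction stable.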
With this approach, we can create much cheaper, lighter-weight robots that maintain high accuracy, enabling them to go many more places than are affordable today. The approach also extends to human augmentation: absolute position tracking can add accuracy to a task a person is roughly guiding, much like a handheld CNC router, but without the need for stickers to orient the system. This opens up new opportunities in cobotics (human-robot collaboration), not just automation.
We’re so excited about the potential applications of this paradigm-shifting approach that we’re investing in a demonstration to show just how lightweight and inexpensive an accurate robot can be. Stay tuned!
Synapse is a product development firm. We work with the best companies in the world to drive innovation and introduce cutting-edge devices that positively impact our lives. Fueled by a desire to solve complex engineering challenges, we develop products that transform brands and accelerate advances in technology.