Mon Apr 1, 02019, 10:00PM UTC
Bruce Blumberg
Robotic Futures: Reading Minds & Mastering Gentle Touch
Coding, dogs, and robots has been Dr. Bruce Blumberg’s work and passion. This experience informs his expectations of the robotic future we can anticipate: rather than featuring generalized human-scale robots, our human capacities will likely be vastly enhanced by new generations of powered tools, personal aids and enabling devices, some potentially embedded in us – not a robotic but a cyborg future.
Bruce Blumberg is a Principal UX Engineer at Universal Robots, the world’s leading collaborative robot manufacturer. Previously, he was a Research Fellow at Rethink Robotics, and held management positions at the MIT Media Lab, Blue Fang Games, Apple Computer and NeXT Computer. At Universal and Rethink, Bruce helped usher in the new age of collaborative robots. Rather than being caged for safety and programmed for a single purpose, collaborative robots are designed to be safe, small and economical and to work side by side with people. At Blue Fang, Bruce led the development of expressive, intelligent and engrossing animal characters, raising the bar for digital entertainment. Bruce was an associate professor at the MIT Media Lab and served as director of the Synthetic Characters Group, leading groundbreaking research on behavior, learning and motor control for autonomous animated characters. Bruce earned an MS in management from the MIT Sloan School of Management and a PhD in media arts and sciences from the MIT Media Lab.
Introduction:
From his early work developing breakthrough products with Steve Jobs, through his creative work at Blue Fang coding lifelike animal characters for virtual worlds, to his recent efforts designing the latest generations of robots, Bruce Blumberg has developed a deep respect for the capacities of dogs and humans to make extremely complex physical tasks seem so very simple. From a design and coding standpoint, the key is what Bruce refers to as the 95% rule. Solving the first 95% can be easy – but that’s just preparation for the most difficult 5%.
The 95% Challenge
The 95% rule is a critical breakpoint for understanding the nature of technology development. Some problems need only 95% solutions for successful breakthroughs to occur: the Internet, image and speech recognition, and the iPhone are all examples. But for other problems, anything short of 100% is no solution at all. Fully autonomous vehicles and generalized robots fall into this category. Flying is a 100% problem: autopilots have been around for decades, yet airliners still have two pilots in the cockpit.
In tackling a 100% problem, the most difficult 5% is often tedious and requires deep application knowledge. One approach is to solve the 95% in such a way that the user is enabled to solve the remaining 5% themselves.
Dogs are quintessential 95% solutions. They don’t do everything perfectly, but they achieve the 95% by using simple representations, much to our delight. Unlike wolves, dogs attend to humans: they solicit human approval and respond to human cues. They can also learn new behaviors from about 40 “samples”, whereas deep learning algorithms may require 40,000. Dogs rely on simple but reliable rules:
spatial/temporal correlation approximates causality;
proximate expectations drive observable behavior;
differential rewards lead them to vary their behavior.
We can use simple models to predict and modify their behavior: Reward, Shape and Lure.
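The differential-reward rule can be illustrated with a toy sketch (not from the talk; the function name, learning rate, and behaviors are illustrative assumptions): behaviors that earn a reward become more likely, while unrewarded behaviors fade, which is the essence of shaping.

```python
import random

def shape_behavior(reward_fn, actions, trials=200, lr=0.2, seed=0):
    """Toy differential-reward learner: sample an action in proportion
    to its weight, then reinforce it if rewarded or let it extinguish
    slowly if not. A crude stand-in for 'shaping' a dog's behavior."""
    rng = random.Random(seed)
    weights = {a: 1.0 for a in actions}
    for _ in range(trials):
        # Sample an action proportionally to its current weight.
        r = rng.uniform(0, sum(weights.values()))
        for action, w in weights.items():
            r -= w
            if r <= 0:
                break
        if reward_fn(action):
            weights[action] += lr                       # reinforce
        else:
            weights[action] = max(0.05, weights[action] - lr / 4)  # extinguish
    return weights

# Shaping "sit" by rewarding only that behavior.
w = shape_behavior(lambda a: a == "sit", ["sit", "bark", "spin"])
```

After a couple hundred trials the rewarded behavior dominates the unrewarded ones, even though the learner tracks nothing more than one number per behavior, a deliberately simple representation.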
Collaborative Robots
How do we use an understanding of dog learning to build better robots? Think of training as a conversation. Seek to develop systems that can learn on as few as six examples and can show us what they are relying on to reach a conclusion. Then the robot can be lured and behavior shaped to the desired result. Also – don’t break things while learning.
Baxter (Rethink Robotics) is a sorting robot built on these ideas. The robot interaction is a conversation/demonstration of the desired behavior. The user demonstrates the salient features and the robot fills in the rest. This is based on a naive view that the more complete the underlying representation is, the simpler the interaction can be. In its first animated incarnation in 2010, an instructor guided the robot to pick up and place red or blue disks. By “looking” with its gaze, the robot could communicate its attention and its intent, and by nodding it would acknowledge the instruction. For simple sorting tasks, this worked pretty well. When the demonstration got more complex, the rules got much harder to demonstrate – being smart became the enemy. As Bruce noted: “And we failed.”
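The demonstrate-and-fill-in idea can be sketched in a few lines of Python (a hypothetical illustration, not Rethink's actual system; the function and bin names are invented): the robot infers a color-to-bin rule from a handful of demonstrations and can report the rule back, so the user can see, and correct, what it has concluded.

```python
from collections import Counter

def learn_sorting_rule(demonstrations):
    """Infer a color -> bin rule from a few (color, bin) demonstrations
    by majority vote per color. Returns the rule plus a human-readable
    explanation, so the learned representation stays transparent."""
    votes = {}
    for color, bin_name in demonstrations:
        votes.setdefault(color, Counter())[bin_name] += 1
    rule = {c: v.most_common(1)[0][0] for c, v in votes.items()}
    explanation = [f"{c} disks -> {b} bin" for c, b in rule.items()]
    return rule, explanation

# Six demonstrations, echoing the red/blue disk-sorting task.
demos = [("red", "left"), ("blue", "right"), ("red", "left"),
         ("blue", "right"), ("red", "left"), ("blue", "right")]
rule, why = learn_sorting_rule(demos)
```

The point of the `explanation` output is the transparency Bruce emphasized: a system that can show what it is relying on to reach a conclusion lets the human supply the difficult last 5%.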
Being predictable, reliable and extensible is more important than being smart. The key is to augment the user/designer’s smarts (a 95% problem), not try to replace them (a 100% problem). And the 95% representation needs to be transparent and extensible to the human who needs to get the last 5% right.
Robots as Tools
At Universal Robots, the robot is viewed as a tool, not a collaborator. Safety systems (a 100% problem) are paramount. This is achievable in industrial settings by:
building “cobots” around a core safety system,
restricting motion of the arm so it can work safely in a given installation,
including sensors and motion planning, and
restricting operations to a constrained, predictable environment.
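The motion-restriction items above amount to a safety envelope checked before any motion executes. A minimal sketch (the function, limit values, and structure are illustrative assumptions, not Universal Robots' API):

```python
def within_safety_envelope(pose, speed, limits):
    """Reject any commanded motion that would leave the configured
    safe workspace or exceed the speed cap. The check runs before the
    arm moves, so an unsafe command is simply never executed."""
    lo, hi = limits["workspace"]
    in_box = all(l <= v <= h for v, l, h in zip(pose, lo, hi))
    return in_box and speed <= limits["max_speed"]

# Illustrative limits for one installation (meters, meters/second).
LIMITS = {"workspace": ((-0.5, -0.5, 0.0), (0.5, 0.5, 0.8)),
          "max_speed": 0.25}
```

Because the limits are fixed per installation, the 100% safety guarantee only has to hold inside a constrained, predictable environment, which is what makes it achievable.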
The difficulty of achieving 100% solutions in a generalized environment is exemplified by the problem of grasping and manipulating, something humans do easily with their hands. From a coding standpoint, manipulation is a far more impressive achievement than winning at chess. It requires vision, motion planning, tactile sensing, exquisite force control, learning how to be gentle enough, quickness and efficiency, and generalization.
The critical importance of hands to the functional capacities of humans can be seen in the 3D sensory homunculus model (see image). The size of each body part is proportional to the amount of the brain dedicated to its functioning. The hands are enormous, dwarfing all other body parts.
“The Trades are not going away anytime soon. The problem solving and physical skills associated with the trades are several orders of magnitude beyond what robots can do. In some ways, their jobs are much safer than those of so-called information workers.” – Bruce Blumberg
Robots in the Home
The home environment is a much more challenging situation for general-purpose, human-sized robots than an industrial setting. There are no shared conventions of interaction, space is tight, and the organization of space and objects is ad hoc and flexible. General-purpose robots for the home will require significant advances in perception, motion and behavioral control, and safety engineering. Safety standards will need to be developed and agreed upon by manufacturers, consumers, safety agencies, and insurance companies. General-purpose robots for the home will be very difficult to achieve with 100% safety and probably won’t be available for a long time, if ever.
Conclusion:
Dogs demonstrate the power of 95% solutions, but will never be in charge of a nuclear reactor. Yet putting generalized human-scale robots in the home will demand a 100% solution. The tough part is not getting to artificial general intelligence; that is not even the relevant question. The big challenge is to develop the physical intelligence, as well as the communication skills, required to successfully and seamlessly navigate the human physical environment. We are a very long way from that.
A cyborg approach may be the answer. Humans solve the hard problems, supplemented by devices, including some potentially implanted, that provide 95% solutions to specific problems. The Apple Watch, the iPhone, Google Lens/Glass with augmented reality, and automatic blood or heart monitors are existing examples. These are just the beginning. What might we be able to do for balance and perceptual decline? Nutrition and food preparation? Companionship and stimulation?