Agents vs Robots

For the sake of discussion, let’s call a system an agent if it has a mental embodiment, and a robot if it has both a mental and physical embodiment. So HAL in 2001: A Space Odyssey is an agent, while C3PO is a robot.

These definitions mean that robots have more issues to consider than agents: they have to move atoms in the real world (in such a way that bad things don’t happen, and good things do), and they have to manage their energy, since they (mostly) don’t remain connected to external power at all times.

But otherwise there’s a huge amount of overlap in how they’re designed and how they operate. Both have to solve problems of how to understand the world they find themselves in, how to choose what to do next in a smoothly varying and responsive way, how to interact with the environment and humans around them to move towards desirable states, and how to learn and so refine all of these steps.
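
As a rough illustration of how much these loops overlap, here is a toy sketch of that shared cycle in Python. The Mind class and its perceive/decide/learn methods are invented for this sketch and don’t correspond to any particular framework:

    # Toy sketch: the loop an agent and a robot both run. The names
    # (perceive, decide, learn) are illustrative, not a standard API;
    # a robot's version would also have to actuate safely and manage
    # its battery, which an agent gets to skip.

    class Mind:
        def __init__(self):
            self.beliefs = {}

        def perceive(self, observation):
            # Understand the world: fold the new observation into our beliefs.
            self.beliefs.update(observation)

        def decide(self):
            # Choose what to do next, in a (here trivially) responsive way.
            return "greet" if self.beliefs.get("person_nearby") else "wait"

        def learn(self, action, outcome):
            # Refine the steps above from feedback (stubbed out in this sketch).
            pass

    def run(mind, episodes):
        for observation, outcome in episodes:
            mind.perceive(observation)
            action = mind.decide()
            mind.learn(action, outcome)
            print(action)

    run(Mind(), [({"person_nearby": True}, "smiled back"),
                 ({"person_nearby": False}, None)])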

From the research literature, you would never know that successful agents and robots require solutions to such similar problems. Agent researchers think that robots are “just” agents with embodiment, while robotics researchers think that robots without embodiment are “just” simple bits of software. As a result, neither side learns from the gains of the other.

Killer apps for social robots?

Any research field advances more quickly when those in the field understand what the potential applications are.

Social robotics suffers from a lack of clarity about what the payoffs will be, which is odd considering how much attention and worry is being devoted to ‘artificial intelligence’ (which mostly seems to mean clever heuristics).

A count of the published literature shows that the applications most in the minds of researchers are care of the elderly and care of children with autism spectrum disorders. This seems a rather limited vision.

The main advantages of a robot, compared to conventional software tools, are:

  • They can be instructed more as we would instruct another human (and may eventually be able to infer instructions, i.e. desires);
  • They are general purpose, so one robot may be able to replace functionality that would otherwise require multiple specialised devices (much as cell phones have replaced mp3 players, GPS and Satnav devices, radios, and some electronic games);
  • They can (potentially) do things that are hard to mechanise now, for example house cleaning, and personal care, especially in institutional settings;
  • They can be prosthetics for our physical limitations (moving heavy furniture, delivering packages place to place, not just door to door), our mental limitations (reminding us of things we may have forgotten, well beyond what dayplanners and alarm clocks currently do), and our social limitations (chaperoning/livening up dates, helping with negotiations);
  • They can provide security, as police, security guards, and bouncers do now;

and there are no doubt many other possibilities. Focusing on a few of these would help to guide researchers in (a) building useful pieces, (b) designing with layers of abstraction, and (c) shooting for complete systems.

What is social robotics?

This question is harder to answer than it appears.

At the simplest level, a social robot is one that is designed to interact with humans in the same way that humans interact with one another.

But there are different levels of interaction. A robot that moves along the sidewalk with the flow of foot traffic, rather than causing chaos by moving like a vehicle, needs some level of social skill to do this. It needs to know which side to keep to for the direction it’s going, and it needs to judge how fast and how erratically other pedestrians will move in the next few seconds. These are sophisticated skills; even humans don’t do so well at them in busy cities.
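
To make the “judge how fast and how erratically” part concrete, here is a toy Python sketch of that kind of short-horizon prediction. The constant-velocity model, the predict function, and all of the numbers are illustrative assumptions, not a description of how any deployed robot actually does this:

    # Toy short-horizon pedestrian prediction: extrapolate each pedestrian's
    # recent velocity a few seconds ahead, and use the variability of their
    # speed as a crude measure of how erratic they are. Purely illustrative.
    from statistics import pstdev

    def predict(track, horizon_s=3.0):
        """track: list of (t, x, y) samples for one pedestrian, oldest first."""
        (t0, x0, y0), (t1, x1, y1) = track[-2], track[-1]
        dt = t1 - t0
        vx, vy = (x1 - x0) / dt, (y1 - y0) / dt

        # Erraticness: spread of speeds over the whole track so far.
        speeds = [
            ((xb - xa) ** 2 + (yb - ya) ** 2) ** 0.5 / (tb - ta)
            for (ta, xa, ya), (tb, xb, yb) in zip(track, track[1:])
        ]
        erratic = pstdev(speeds) if len(speeds) > 1 else 0.0

        # Predicted position a few seconds out, plus a growing keep-clear radius.
        x_pred, y_pred = x1 + vx * horizon_s, y1 + vy * horizon_s
        clearance_m = 0.5 + erratic * horizon_s
        return (x_pred, y_pred), clearance_m

    # One pedestrian walking roughly +x at about 1.3 m/s, sampled every 0.5 s.
    track = [(0.0, 0.0, 0.0), (0.5, 0.7, 0.0), (1.0, 1.3, 0.1), (1.5, 2.0, 0.1)]
    print(predict(track))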

At the next level of sophistication, a social robot needs to understand what humans are saying to it. One of the obvious benefits of social robots is that they can be instructed rather than programmed. But understanding the intent of what a human says, so that the human can sketch the desired behavior, is much harder than requiring the human to make a clear, unambiguous statement of the desired behavior, i.e. programming.
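
The gap is easy to see in a toy example. In the sketch below, the “programmed” version is an unambiguous statement of the behavior, while the “instructed” version has to guess at intent from a loose sentence; the little keyword matcher stands in for the genuinely hard language-understanding step, and every name in it is invented for illustration:

    # Programming: the human states the desired behavior unambiguously.
    def tidy_programmed(objects):
        return [obj for obj in objects
                if obj["kind"] == "toy" and obj["location"] == "floor"]

    # Instructing: the human sketches the behavior ("put the toys away, but
    # leave the desk alone") and the robot must infer the rest. The keyword
    # matching here is a stand-in for real intent understanding.
    def tidy_instructed(objects, instruction):
        wants_toys = "toys" in instruction
        spare_desk = "desk" in instruction
        picked = []
        for obj in objects:
            if spare_desk and obj["location"] == "desk":
                continue                    # inferred: desk items are off-limits
            if wants_toys and obj["kind"] == "toy":
                picked.append(obj)          # inferred: "away" means pick them up
        return picked

    objects = [
        {"kind": "toy", "location": "floor"},
        {"kind": "toy", "location": "desk"},
        {"kind": "mug", "location": "floor"},
    ]
    print(tidy_programmed(objects))
    print(tidy_instructed(objects, "put the toys away, but leave the desk alone"))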

At an even greater level of sophistication, a social robot needs to understand what’s going on in the minds of the humans around it. Now it doesn’t necessarily have to be instructed; it can figure out what humans desire, without them having to express it.

To judge from the media, you would easily get the impression that social robots are either very close to mass production, or at most only a few years away. I don’t think that this perception is even close to accurate. The glitzy presentations that you can find on the Web are invariably closely scripted, so that the robot is just following canned actions and conversations. Roughly speaking, no social robots at any of these three levels exist today. But there are many interesting research problems, and many potential payoffs even outside the area of robotics. After all, if a social robot can understand humans, then maybe humans can understand humans too.