Tavita Sharma, the only one of the original development team whom we haven't seen since the documentary's opening, develops Gubots these days. She is still plump and graceful, with a bright smile which hints at a sense of mischief. Her hair gleams under the lights of her minimalist, sparkling clean office.
"So, what does that mean, developing Gubots? That would be mainly a software thing, wouldn't it?" you ask.
"Almost entirely, though we do, naturally, have input as part of the developer group into new hardware capabilities. The thing is, though, if Gu works for Gupe it works for us, at least for the anthroforms."
"Let's talk about that. All the early science fiction, right back to Čapek, assumes that a robot is automatically humanoid - anthroform, as you say. Many of yours aren't."
"Right back to Homer, in fact - he has artificial female assistants helping Hephaestus forge Achilles' armor in the Iliad. But industrial robots never were humanoid, unless you think of a single mechanical arm as humanoid. If you have a practical as opposed to a literary purpose, in fact, most applications don't require humanoid form. We have kitchen robots for dishwashing and even cooking which are basically a set of tentacles surrounding a few sensors on a stick. We've had non-anthroform housecleaning robots since the Roomba. Some people like to feel like they've got a maid or a butler when really they aren't in that income bracket, but a lot of other people just want the dishes done and the house cleaned and they aren't particularly fussed what the device that achieves that looks like - whatever our marketing department argues to the contrary." She sparkles a smile.
"So what proportion of Gubots are anthroform?"
"Depends where you are. In Japan, for example, the proportion is very high, and even the ones that don't look actually human as such usually have something that looks like a face. It's considered important that a robot is kawaii. In the west, not so much. More often than not, if you see an anthroform in the west it's actually a Gupe."
"And in Africa or South America?"
"It varies. Some cultures favor anthroforms, some favor zooforms - animal forms, which are also quite popular in Asia - and some favor what we call pragmaforms, which are shaped to the task that they're doing."
"Form following function?"
"Exactly, though in cultures like Japan, 'function' includes much more than just accomplishing a task. In all cultures, really. How you feel about something is part of its function, in a sense."
"Do you believe that there's an Uncanny Valley - a disturbing place between what's indistinguishable from human and what's clearly non-human?"
"Well, I do and I don't. That is, I don't think it's necessarily a general phenomenon, but it is something that some people struggle with. The lines are getting blurrier all the time, and that bothers some people. You go on a support line, it's not always clear whether you've got an AI or a human on the other end at first. The Turing test keeps getting passed, in limited circumstances. Same with Gubots. The ones we design to be interactive - using pretty much the same models of human interaction as are used by the support lines, naturally enough - within their limited scope could easily be mistaken for a human who's a bit insincere, a bit stereotyped in their interactions. Someone who's working off a script and isn't very fluent with it. And there are people like that running Gupes, of course, and not every Gupe looks anthroform either, let's not forget. Shive-ers and furries and technos, oh my!"
"So a lot of the time you just can't tell."
"No, and that's what makes people uncomfortable. You like to be able to tell whether you're talking to a thing or a person, but the line is getting harder to detect and will eventually get hard to draw, if we carry on as we're going. Strong artificial intelligence is still a long way away. People are complex, and we don't yet understand ourselves well enough to reproduce ourselves any way except the old-fashioned, unskilled way." She gives a chipmunk grin for a second. "A couple of careless teenagers still have our top scientific minds all beat when it comes to creating intelligent beings."
"A general-purpose robot is some way off?"
"In the sense of one like Asimov's or Čapek's or Homer's, a self-aware being that acts more or less human, definitely. The funny thing is that a lot of the early pulp depictions of robots assumed that as machines they naturally wouldn't have or understand human emotions - like Data in Star Trek TNG. But actually, an understanding of emotions predates and forms a necessary foundation for an understanding of any other form of human interaction. The 20th century wasn't comfortable with its own emotions - legacy of the 19th century, I suppose - so it treated them as secondary and logic as primary, and we know how far that got us. Strict logic works wonderfully, right up to the point where it doesn't, and that point comes fairly early on in human interaction. Which isn't to say it can't be studied or reproduced scientifically. That's what we're busy doing - I say 'we', but I'm on the technical end, I just implement what other people discover to a large degree. Eventually you will be able to interact with your robot valet as if it were Jeeves, and it might even help to sort out your tangled love life, though probably not with a Jeeves-like degree of creativity and subterfuge. At the moment, though, we keep our focus on enabling robots to perform effectively in limited domains where the task is clearly defined and can be broken down into a relatively straightforward set of procedures that cover all cases."
"That sounds like a quote."
"Pretty much, yes." Again the chipmunk smile. "One day we'll do it. But not any day soon."
"That has to be good news for people whose jobs are one level up from the ones that robots - Gubots - are taking as we speak."
Sharma turns serious. "Yes, that's - an inevitability that is kind of the downside of what we do. If only the social engineers could come up with that system whereby everyone lives a life of leisure and creativity. But in the meantime, everyone, even the poor, can live in a clean house. The streets are cleaner than ever before in history. Public toilets are even clean. Automated farms produce, automated transport ships and automated kitchens prepare food that's so cheap it's very nearly free. If I could invent a robot that gives meaning to people's lives, I would, but I'm only clever, I'm not a genius." The smile is back, briefly, but it doesn't last for long.
"So what, will we need Asimov's Three Laws soon? A robot may not harm a human or allow a human to come to harm, all of that?"
"Gu is building all of that in anyway, to a greater and greater extent. The First Law isn't a law for robots, it's a law for - aware matter, I suppose you could say, matter that is able to detect and identify potential harm and prevent it. And at the moment that has to be a physical definition of harm - we'll get into all kinds of trouble if anyone tries to broaden it too far. Huge, huge can of worms. The Third Law, about a robot protecting its own existence except where that conflicts with the first two, well, Gu is cheap and easily replaceable. We don't need that one. And the Second Law, obeying humans, that's pretty much assumed. Any authorized human can give an order, a set of instructions, to Gu which is set to accept instructions from him or her; that's how Gu works. No, when it comes to robots we can pretty much throw out most of the 20th century's fiction, I'm afraid. It was always more about humans than it was about robots, really."