Footage: A carnival float. Cheerful people in anime coses are waving from it. Suddenly, their coses attack them and they fall thrashing and choking to the ground.
Reset: Same float. This time, the coses suddenly stiffen, then lurch from the float, the people inside helplessly screaming as they are carried into the crowd, where the coses force them to attack.
Reset: Same float. The coses abandon their wearers and begin attacking the crowd themselves, wrapping tentacles around their throats.
And a final reset: This time, the coses simply fall off, leaving the embarrassed cosplayers in their underwear.
You are Serena Koslowski, now Chief Security Officer of the Adaptable Materials Corporation, and you are talking to Halwaz.
"Only the last of those four scenes actually happened," you say. "Fortunately. Not because the others weren't possible, just because the crackers who 0wnz0red the Gu on that float didn't have anything worse in mind than embarrassing their friends in front of half of Berlin. That's why Gu security is so important.
"Right from when we first designed Gu, we knew that it had to be as robust against cracking as we could possibly make it. Fortunately, grayware had already solved some of the hard problems, since people really do not want their nervous systems cracked. If your PC got incorporated into a botnet and some of its processing and bandwidth was scrumped by spammers, it ran slower, and a lot of people got unwanted email. That's annoying, but hardly fatal. The stakes are a lot higher for nervous systems - and for real-world morphable objects.
"The thing is," you say, standing up and starting to pace, "you can't make an uncrackable system. You can't. You can make it very difficult, but you can't make it impossible, not as long as it's connected to the outside world and general-purpose - and if it isn't both those things it isn't very useful. It's always going to be possible for someone to install code that does something you don't want it to do."
"So what was your approach?" asks Halwaz. You stop pacing, turn, and look at her, knuckles resting on your desk, scraping a tendril of your long, pale hair back behind your ear.
"We swiped the personal signing system off the grayware guys," you say. "Your Gu is yours, and responds to your signals, because you have one end of an encryption code and your Gu has the other. Then you shift codes by mutual agreement, rapidly enough that it's not computationally possible to crack them before they've shifted. The new code gets transmitted in the old code. It works fine, as long as the software at your end is secure - which means, as long as you keep up your patch levels and don't install anything stupid. But you can make the patches free - as we're legally mandated to do; you can warn people against malware; you can set up a central registry where people can check their downloads to see that they're safe; you can make it illegal, as a number of jurisdictions have, to have ware more than two minor patch levels behind, and issue instant fines when your security monitors detect it. You can make the system idiot-resistant, but never idiot-proof. There only needs to be one old piece of un-upgraded, unpatched software, or one person who downloads and installs a Trojan without checking, and there you have it - a potential zombie. One that actually shuffles round, in the case of Gu, or, in the case of grayware, tries to eat your brains." You smile wryly.
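The rolling-code exchange Serena describes - a shared secret at each end, commands authenticated against it, and the next code transmitted under the old one - might be sketched roughly like this. This is a toy illustration only; the class and function names, and the use of HMAC signing with hash-based masking for the key rotation, are assumptions for the sketch, not the story's actual protocol.

```python
import hashlib
import hmac
import os

def sign(key: bytes, message: bytes) -> bytes:
    """Authenticate a command with the current shared key."""
    return hmac.new(key, message, hashlib.sha256).digest()

def mask(key: bytes, next_key: bytes) -> bytes:
    """Hide the next key under the current one: XOR against a pad
    derived from the current key. XOR is its own inverse, so the
    same function both masks and unmasks."""
    pad = hashlib.sha256(key + b"rotate").digest()
    return bytes(a ^ b for a, b in zip(pad, next_key))

class Gu:
    """Hypothetical stand-in for a piece of Gu holding its owner's key."""
    def __init__(self, key: bytes):
        self.key = key

    def accept(self, message: bytes, tag: bytes) -> bool:
        """Obey a command only if it carries a valid signature."""
        return hmac.compare_digest(tag, sign(self.key, message))

    def rotate(self, masked_next_key: bytes) -> None:
        """Recover the new code using the old one, then shift to it."""
        self.key = mask(self.key, masked_next_key)

# Owner and Gu start out holding the same 256-bit code.
owner_key = os.urandom(32)
gu = Gu(owner_key)

cmd = b"morph:chair"
assert gu.accept(cmd, sign(owner_key, cmd))      # legitimate command obeyed
assert not gu.accept(cmd, sign(b"x" * 32, cmd))  # a cracker's forgery is refused

# Shift codes: the new code travels masked under the old code.
next_key = os.urandom(32)
gu.rotate(mask(owner_key, next_key))
owner_key = next_key
assert gu.accept(cmd, sign(owner_key, cmd))      # both ends still in sync
```

In this toy version the shift is triggered explicitly; the "mutual agreement, rapidly enough that it's not computationally possible to crack them before they've shifted" would in practice mean both ends rotating on a tight schedule, with each fresh code riding inside traffic protected by the previous one.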
"We've tried to build in features that, in effect, get compromised Gu to signal 'Help, I have been 0wnz0red!' to the nearest security monitor, but - it's an arms race. So far, there haven't been any major hostile incidents, though. So far."
"You're being very frank," says Halwaz.
"That's because I don't want people to get complacent," you say. "There's more and more Gu out there. Now, increasingly, we're focusing on the other side of the equation. It's one thing to prevent other people's Gu from being used by hostiles - it's important, but it's not our biggest challenge. We also need to make Gu safe when the people it legitimately belongs to are setting out to cause maximum chaos, or even when they're just being stupid. Not all of our measures are public, naturally. But here are a few that are."
You trigger a corporate holo with the same kind of reassuring announcer voiceover that's been around since the 1930s.
"Today's Gu has more safety and security features than ever before," it enthuses over the Adaptable logo (3-D morphing letters that continually spell A, M, C). "Here are just a few that we're working on or have already implemented."
Cut to a shining orbital laboratory.
"At AMC Laboratories, we're attempting to answer the ancient question: What is man, that you are mindful of him? Each generation of Gu has more sensory capacity than the last, and our Human Detection team is taking advantage of this, continually refining the fuzzy logic that enables Gu to sense probable humans and avoid hurting them."
Enter a stereotypical scientist, with safety glasses and white coat - even though most if not all of the team are probably high-end programmers who have never touched a beaker in their lives. The scientist is attractive in a slightly nerd-girl way, with dark shoulder-length hair and an earnest expression quickly lightened by a pleasant, shy smile.
"Our inspiration," she says, "is Asimov's famous Three Laws of Robotics. Particularly now that Gu is used so much in domestic robot applications, that seems increasingly appropriate. The First Law, and the one we are focusing on here, is: A robot - or, for our purposes, Gu - shall not harm a human, or allow a human to come to harm. Which you and I as humans understand intuitively, but it's remarkably complex to implement. To get a complete solution, you need to encode an understanding of what a 'human' is and what would harm one, which involves physics, biology, chemistry and even some psychology and sociology."
The next part is in voiceover.
"Starting with version 4.2, though, we've built in the capacity for Gu to recognize living flesh and go soft and rubbery when approaching it." An actor tries to cut himself with a Gu knife and pound himself with a Gu hammer, to no avail. A car decorated with Gu foams up at the front as it approaches another actor, and he bounces off the soft front and stands up unhurt.
"That's a big step forward right there. And from 4.7, Gu now recognizes a human neck and won't squeeze it, and 5.3 introduced the ability to recognize when someone is breathing and ensure that their breathing isn't blocked. In fact, Gu 5.6 and greater will attempt to supply more oxygen if your breathing becomes difficult for any reason. So we're well on the path to our goal." This one is illustrated by a Gu oxygen mask spontaneously forming over a gasping actor's face in a smoky room, and a fan creating itself to compress air, with a filter to take out the smoke. "Already, dozens of people have been enabled to escape from fires thanks to our initiatives."
Cut back to the scientist, smiling her little smile. "Eventually, we'll solve the general problem of recognizing and protecting humans, but for now, our issue-by-issue approach is working really well." The smile widens and we go out on triumphant music and the morphing logo.
"All those measures are hackable, of course," you tell Halwaz flatly. "In theory, at least. In practice, it's not yet been done. We're under close scrutiny by governments all over the world, many of whom have threatened a public safety tax on our product if we don't make every effort to render it harmless and secure. We've occasionally been hit by fines - and court cases - when we've let something slip by. There's nothing like an economic incentive to make a company do the right thing."
You smile yourself, rather cynically.