Mar 22

Because our brains can process any sensory inputs they’re given, we’re not limited to the senses we have. Not only can we substitute for senses that we’ve lost; we can also integrate new sources of data into our sensorium and learn to experience them and respond intuitively.

(20 minutes.)

https://www.youtube.com/watch?v=4c1lqFXHvqI&feature=share

Mar 21

Facial and vocal indicators of emotion are consistent across cultures, and there’s now a body of research which enables computers to read them with good accuracy.
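
As a concrete sketch of what that reading involves (my illustration, not the panel’s): detect faces, then label each one with a trained classifier. In the Python below, OpenCV’s stock face detector is real; classify_emotion is a hypothetical stand-in for a model trained on labelled expressions.

    # Minimal sketch of machine emotion-reading. Requires the
    # opencv-python package; classify_emotion is hypothetical.
    import cv2

    def classify_emotion(face_pixels):
        # Stand-in for a trained expression classifier that would
        # return a label like "happy", "sad", "angry" or "neutral".
        return "neutral"

    # OpenCV ships a stock frontal-face detector.
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    image = cv2.imread("audience.jpg")   # any photo containing faces
    if image is None:
        raise SystemExit("audience.jpg not found")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
        face = gray[y:y + h, x:x + w]    # crop each detected face
        print(classify_emotion(face))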

This panel (a venture capitalist, an academic, and several entrepreneurs working to develop emotional computing applications) discuss the implications. For example, we’re good at being aware of others’ emotions, but not our own. Could a computer assistant help us with our lifestyle choices and guide us towards practices and ways of living that put us in a better mood and in better health overall? Could we assess mental health and physical pain more accurately? (Answer: it looks highly likely.)

On the other hand, will the use of emotion tracking while people consume media and experience products put us into the hands of manipulators? (Answer: not yet, but perhaps soon. However, the payoff for users will need to be there for this to gain acceptance.)

There are cultural differences in the expression of emotion, too, which show up in aggregated data. 

(1 hr 18 min)

My speculations:

1. We’ve already heard about the “bubble”, where FB or Google will show you things they think you’ll respond positively to, and you end up unaware of contrasting viewpoints. What happens if they start only showing you things that make you happy? How does that affect online activism, for example? (I have in mind Paolo Bacigalupi’s short story “The Gambler”, in which click-driven journalism drowns out serious and important issues with a tide of celebrity scandal.)

2. It’s already possible to assess a crowd’s predominant affect in near-real time (this video shows an example near the end). If this became real-time, and you played it back concurrently to, say, a political speaker – the kind of person who’s currently driven by polls, but has the mental agility to adapt his or her speech based on what people are responding to – what kind of politics would you get? I’m envisioning a standard app for speakers here, designed to prompt a boring executive to hurry through the PowerPoint when the audience starts to disengage, but repurposed for mass manipulation by a clever and adaptable demagogue. Instead of a teleprompter, the speaker watches an affect evaluation screen.
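
The affect screen itself wouldn’t need exotic machinery – just a smoothed average over per-face scores, with a threshold that flips the prompt. A minimal sketch, assuming a hypothetical score_frame() that returns one engagement score per detected face (random numbers stand in for real classifier output):

    # Sketch of a real-time crowd-affect prompt for a speaker.
    import random

    def score_frame():
        # Hypothetical: one engagement score in [0, 1] per face
        # detected in the current camera frame.
        return [random.random() for _ in range(30)]

    ALPHA = 0.2     # smoothing factor: how fast the average reacts
    BORED = 0.4     # below this, prompt the speaker to change tack

    smoothed = 0.5  # start from a neutral crowd
    for frame in range(100):            # one pass per video frame
        scores = score_frame()
        frame_mean = sum(scores) / len(scores)
        # An exponentially weighted moving average damps the noise.
        smoothed = ALPHA * frame_mean + (1 - ALPHA) * smoothed
        prompt = "keep going" if smoothed >= BORED else "change tack"
        print(f"crowd affect {smoothed:.2f}: {prompt}")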

https://www.youtube.com/watch?v=WSj26ncU_po&feature=share

Mar 15

A fascinating video (8:29) on the “technological disobedience” of Cubans, who repurposed technology in creative ways during the crisis of scarcity known euphemistically as the “Special Period in Time of Peace”. 

Potential inspiration for postapocalyptic and dystopian authors here. 

https://www.youtube.com/watch?v=v-XS4aueDUg&feature=share

Mar 15

Via Charlie Loyd’s newsletter: a study that suggests the key thing about conspiracy theories is the belief that a conspiracy exists – not the content. The more people believe that Princess Diana faked her own death, the more they also believe she was murdered. The more people believe that bin Laden was already dead when SEAL Team Six arrived at his compound, the more they believe he’s still alive.
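
To make the statistical claim concrete: what’s reported is a positive correlation between agreement ratings for mutually contradictory claims. A toy illustration in Python, with invented ratings (not the study’s data):

    # Toy illustration: endorsement of one conspiracy predicting
    # endorsement of a contradictory one. Ratings are invented.

    def pearson_r(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        var_x = sum((x - mx) ** 2 for x in xs)
        var_y = sum((y - my) ** 2 for y in ys)
        return cov / (var_x * var_y) ** 0.5

    # 1-7 agreement with "Diana faked her death" vs "Diana was
    # murdered" for ten imaginary respondents.
    faked    = [1, 2, 2, 3, 5, 6, 7, 4, 3, 6]
    murdered = [2, 1, 3, 3, 4, 6, 6, 5, 2, 7]

    print(f"r = {pearson_r(faked, murdered):.2f}")  # r is positive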

http://www.academia.edu/1207098/Dead_and_alive_Beliefs_in_contradictory_conspiracy_theories

Mar 14

These devices, which physically change their shape and provide a two-way interface between the virtual and physical worlds, are the first clumsy steps towards something like what I describe in my novella Gu (http://csidemedia.com/gu).

Duration: 9:22.

https://www.youtube.com/watch?v=8sheoGMsy3Q&feature=share

Mar 14

An introduction to “transhumanism”, the idea that the next step in human evolution could be taken deliberately, by changing ourselves.

Philosopher Anders Sandberg summarizes the state of research and thinking, without overhyping it. He mentions exercise, education, meditation, drugs, brain training, genetic engineering, neurological interfaces, collaborative technologies, cybernetic human enhancements and body modification.

Few of these have had widespread real-world trials, and there are various ethical and philosophical questions; the more fundamental something is to our sense of self, the more hesitant we are to change it. 

From about the 1-hour (halfway) mark, another philosopher (whose name I didn’t catch) responds, questioning some of the underlying ideas of transhumanism. What is humanity, what is the self, and if we change it, is it still humanity? If we delegate too much to technology, do we diminish our own capacity? 

All of these questions were mentioned in the first part of the lecture, in fact, and the responding philosopher keeps everything at a high theoretical level, rather than sticking with practicalities as Dr Sandberg did. They’re important questions, but I don’t know that the way they were raised was particularly useful. 

The moderator then raises the question of enhancing ourselves in order to be better than our instincts and save the world, versus enhancing ourselves to enjoy our current type of life more (moral enhancement versus enhancement of abilities). Sandberg’s response is that transhumanism is about tools, but you can use those tools in the service of various different value systems.

The respondent questions whether human enhancement is the solution to the world’s problems at all; collective organization, and taking collective responsibility – not enhancement – is where the solutions lie. 

Sandberg: Government is also a technology. Meditation is a technology. New forms of government are enabled by new technologies for producing and disseminating information. We are getting better at organizing ourselves in positive ways. (This isn’t necessarily “transhumanism” as such.)

Moderator: Where does humanity leave off? What about posthumanism?

Sandberg: Over the years he has become more interested in near-term, practical technologies than the big picture about where we might go. There’s not much we can say about the posthuman condition, by its nature. 

Respondent: Can we actually assume that humans consist primarily of information, so an uploaded person is human? Or is technological uploading the death of humanity? 

Sandberg: Would it be a failure if humanity evolved into something else, rather than remaining the same? 

Respondent: But how do we actually decide on our direction as a species?

Sandberg: We don’t know yet, so we should try to become smarter, so we can figure out these questions. 

Audience question period ensues. 

Q: Is transhumanism inevitable? Will we eventually merge with our technology as it becomes more powerful?

Sandberg: Not inevitable, but highly likely. There’s a ratchet effect in the development of technology, and once some people adopt something there’s strong pressure for others to do so in order to keep up. However, some people may remain human as others become transhuman. He draws a parallel with the Amish: a kind of backup in case the advanced technology fails.

Q: Is there only one direction of enhancement, or will posthumanism be widely divergent, with people choosing which enhancements they accept? Will that lead to conflict among different groups?

Sandberg: This may be like Mac/Windows/Linux. Transhumanists do often talk as if there’s one true way forward, but he doesn’t. The key is to have some cooperative framework so we can live together. This may lead to complementary groups rather than competing ones, though if people are too radically different it may lead to issues. However, liberal democracy is quite good at handling diversity; we might just need liberal democracy 2.0. 

Respondent: Is there not something to be said for lessening our technology and dependency upon it? 

Sandberg: We can give up some things in part because we have a safety net and can get them back. 

Moderator: But if there’s a cognitively enhanced group and a group of have-nots, it will inevitably lead to a class separation. 

Sandberg: Discrimination is bad when it’s about something that doesn’t matter, but we don’t want a society that takes no notice of ability when it does matter. Enhanced people will have great responsibility to go with their great power. But most people’s life projects would be helped by enhanced intelligence, so most people will probably go for it. 

Q: Would it be possible to allow people to experience both human and transhuman life? 

Sandberg: We can already have “monkey experiences” through alcohol, for example. But there are some experiences that only make sense while enhanced. Each level of brain development builds on the one before, and gives us a new level of meaning. The lower levels of enjoyment are not lost as we reach the higher. 

Respondent: There’s often an assumption that enhancements will be irreversible – you won’t be able to put them on the nightstand when you go to bed. 

Sandberg: It’s like the irreversibility of learning – learning changes you, you’re a different person afterwards. 

Q: Enhancements are accepted when compensating for disabilities, but not so much when we are going from human to superhuman. To what extent would people accept some decrements in order to gain enhancements, like a shorter life in exchange for a better robot arm?

Sandberg: This would depend on your life goals and value system, and on how well we can predict the drawbacks. Unknown, long-term side effects are a big part of what people worry about with things like cognitive enhancement pills. There are problems with getting ethics approval to study this kind of thing. 

Moderator: In conclusion, while predictions are notoriously unreliable, making them does have utility. We don’t know what will happen in 10 years, but we do know that things are going to change and confront us with ethical, social and political choices, so considering the issues imaginatively is something that helps to prepare us for the future – even if we don’t know what it will be. 

https://www.youtube.com/watch?v=Etrl4Z-9tfc&feature=share

Mar 11

Manufacturing is now following publishing into a democratised model.

It’s become literally child’s play to make things on your desktop, on gear with a low four-figure cost. Also, it’s now possible to download a free design app to your phone, design a physical thing… and have a robot factory in China manufacture it in bulk and send it to you. Some of them take PayPal.

We can print not only in metal, wood and plastic, but also in biological materials and electronics.

Chris Anderson’s story is that, as editor of Wired, he was given two things to review: a Lego Mindstorms robot and a model plane. When he took them home at the weekend, his kids weren’t impressed with either one separately, so he and the kids mashed the two together and made a drone with a robot autopilot in an afternoon. That gave him a “What just happened?” moment.

He started a hobbyist website called DIYDrones, and ended up starting a business when people asked for kits. When his kids wouldn’t assemble the kits any more, he hired a recent high-school graduate in Tijuana, whom he’d met on the Internet, to do it – and discovered (when his 19-year-old employee did it and then told him about it) that you can buy factory equipment on eBay, out of cash flow, if your product is something people want.

Now he does it a bit more professionally, and his company makes more drones than the whole of the US aerospace industry (because they’re cheap, consumer-level gear). They’re more advanced than military drones – because the users are less advanced. They’re kids and untrained hobbyists. The innovation has shifted to the low end of the market, where people want to get magic at the press of a button without worrying about the technology. 

He follows an “open innovation” model, on the Bill Joy principle of “the smartest people are not working for you” – wanting to get the smartest people working for him, just not necessarily employed by him. His software, and a lot of his hardware design, is open sourced. Customers become support for each other. The community finds new use cases, which contributes to the spread of the technology. 

By recognising contributions to the project, he builds a funnel of contributors, many of whom end up working for him full-time. The platform for innovation attracts talent to it.

Production, he says, has moved from an industrial act to a technological act to a social act. 

https://www.youtube.com/watch?v=i03GLcn_ceE&feature=share

Mar 11

What will be the impact of AI on jobs? 

John Markoff is optimistic. He believes that AI will transform, rather than destroy, jobs, and help compensate for the demographic shifts in the world’s population. We will need to retrain a lot, though (so liberal arts degrees are good preparation).

That said, data science and the intersection of biology and information technology are current hot areas for employment.

As for the talk’s title: there are two streams of development in computer science, AI and human-computer interaction, which don’t have a lot of crossover. They represent two different philosophical approaches to the relationship between humans and computers.

Although there’s no real evidence that machines will be self-aware in the near future, they will be autonomous, and we need to design human values into their systems. 

45 minutes.

https://www.youtube.com/watch?v=KPGMTXCDZAs&feature=share

Mar 11

Artificial intelligence in SF tends to be “general AI” – machines that are basically humanlike in their abilities, and in possessing consciousness, but that are either slightly less or slightly greater than humans (sometimes both at once). AI in SF, in fact, tends to be about AI as an allegory of humanity more than it is about AI-as-it-actually-might-be; robots and AIs stand in for underclasses or aggressive foreign Others. 

This is a fascinating lecture by a professor at Oxford, who has a computer science background but is in the philosophy department, about the various philosophical challenges and implications of actual AI (all of which, so far, is “narrow” AI, confined to a specific domain, rather than AGI – artificial general intelligence).

The biggest issue is that we don’t really know how to make AGI, and if we do succeed in making it we won’t understand exactly how it works or be able to predict what it will do in new situations it wasn’t designed for – even if we managed to build it correctly to our original design, which, as anybody who actually works in software will tell you, tends not to happen.

There’s also the problem that we don’t really understand how our own ethical system works, and even our best approximations aren’t susceptible to being reduced to code. Perhaps we need to make sure that robots recognise “human” as a very basic category with special value and importance (which immediately put me in mind of an Asimov story, “… That Thou Art Mindful of Him”, in which the robots decide that everything they’ve learned about humanity convinces them that they’re a part of it and should have the same rights).

One of the questions in the Q&A session is about how SF might or might not prepare us to encounter the kind of new situations that AI will bring. Dr Sandberg’s answer is that specific stories generally aren’t that useful (because they’re not about how AI really works), but the general mindset of SF – thinking about how to deal with new things and how to interact with the Other – is helpful. 

(1.5 hours, including half an hour of excellent Q&A.)

https://www.youtube.com/watch?v=N8lcK2Ep1Og&feature=share