Via Raja Mitra.
I can imagine a story in which all the human characters work with AIs. One of them parents AIs to socialise them; one plays D&D with them to teach them about human interaction and storytelling; and one watches them for signs of bias and corrects it.
Originally shared by Eli Fennell
New IBM Tool Aims To Detect A.I. Bias
For all the promise of Artificial Intelligence, it also carries a huge risk: its algorithms can develop biases that are frequently invisible to their human creators and operators, and quite dangerous as well.
IBM wants to help solve this with its new Fairness 360 Kit, a project to detect biases in A.I. and make humans aware of them. Open source and designed to work with many commonly used frameworks for building learning algorithms, the Fairness 360 Kit aims to provide real-time insights, via a visual dashboard, into how learning algorithms make their decisions.
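To make the idea concrete, here is a minimal sketch of one common fairness metric, "disparate impact", which is the kind of check a toolkit like this automates. This is an illustration only, not IBM's actual API; the function name, toy data, and threshold interpretation are my own.

```python
# Disparate impact: the rate of favorable outcomes for an unprivileged
# group divided by the rate for the privileged group. A value near 1.0
# suggests parity; the classic "80% rule" flags values below 0.8.
# Hypothetical sketch -- NOT the Fairness 360 Kit's API.

def disparate_impact(outcomes, groups, unprivileged, privileged):
    """outcomes: parallel list of 0/1 predictions; groups: group label per record."""
    def favorable_rate(group):
        members = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(members) / len(members)
    return favorable_rate(unprivileged) / favorable_rate(privileged)

# Toy example: a model approves 3 of 4 applicants in group "A"
# but only 2 of 4 in group "B".
outcomes = [1, 0, 1, 1, 1, 0, 1, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
ratio = disparate_impact(outcomes, groups, unprivileged="B", privileged="A")
print(round(ratio, 3))  # 0.667 -- below 0.8, so this model would be flagged
```

A real toolkit computes many such metrics across protected attributes (race, gender, age) and surfaces them on a dashboard rather than as a single number.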
This represents another valuable approach to solving the A.I. “Black Box Problem”, along with teaching A.I. systems to be better at showing their work (http://bit.ly/2xtugL9) and utilizing more transparent learning systems (http://bit.ly/2NjMBo7).
#AI #ArtificialIntelligence #MachineLearning