I just listened to a Flash Forward podcast episode on this kind of stuff (“Robocop”).

The problem with training AIs like this is avoiding the bias already present in the training data, though since this one is trained on whether people actually did or did not reoffend, it is, on the face of it, less risky than other ways of training.

Originally shared by Rhys Taylor

Police in Durham are preparing to go live with an artificial intelligence (AI) system designed to help officers decide whether or not a suspect should be kept in custody. The system classifies suspects as low, medium or high risk of offending and has been trialled by the force. It has been trained on five years of offending-history data.

Data for the Harm Assessment Risk Tool (Hart) was taken from Durham police records between 2008 and 2012. The system was then tested during 2013, and the results – showing whether suspects did in fact offend or not – were monitored over the following two years. Forecasts that a suspect was low risk turned out to be accurate 98% of the time, while forecasts that they were high risk were accurate 88% of the time. This reflects the tool’s built-in predisposition – it is designed to be more likely to classify someone as medium or high risk, in order to err on the side of caution and avoid releasing suspects who may commit a crime.
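
To make those percentages concrete: the figures appear to be measured per forecast class – of the suspects forecast as low risk, 98% did not go on to offend, and of those forecast as high risk, 88% did. A minimal sketch of that kind of per-class check is below; the records and field names are invented for illustration and are not HART’s actual data.

```python
from collections import defaultdict

# Illustrative validation records: (forecast, actually_reoffended).
# These tuples are made up for the example; HART's real data is not public.
validation = [
    ("low", False), ("low", False), ("low", False), ("low", True),
    ("high", True), ("high", True), ("high", False),
]

correct = defaultdict(int)
total = defaultdict(int)

for forecast, reoffended in validation:
    total[forecast] += 1
    # A "low" forecast is borne out if the person did not reoffend;
    # a "high" forecast is borne out if they did.
    if (forecast == "low" and not reoffended) or (forecast == "high" and reoffended):
        correct[forecast] += 1

for cls in ("low", "high"):
    print(f"{cls}-risk forecasts accurate {correct[cls] / total[cls]:.0%} of the time")
```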

The Durham system draws on data beyond a suspect’s offending history – their postcode and gender, for example. However, in a submission about the system to a parliamentary inquiry on algorithmic decision-making, the authors express confidence that they have mitigated the risks involved: “Simply residing in a given post code has no direct impact on the result, but must instead be combined with all of the other predictors in thousands of different ways before a final forecasted conclusion is reached.”
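
That quote suggests an ensemble of decision trees, where a field such as postcode only influences the forecast through splits combined with other predictors across many trees. The article doesn’t name the model, so the sketch below is an assumption: a small random forest trained on entirely made-up features, just to show the “combined in thousands of different ways” idea.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Entirely made-up features, standing in for [prior offences, age, postcode code, gender code].
# This is not HART's real feature set or model; it only shows how an ensemble of trees
# lets one field act only in combination with the others, via splits spread across many trees.
rng = np.random.default_rng(0)
X = rng.integers(0, 10, size=(200, 4))
y = (X[:, 0] + rng.integers(0, 3, size=200) > 6).astype(int)  # 1 = reoffended, 0 = did not

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

print(forest.predict(X[:1]))        # aggregate vote of all 100 trees for one suspect
print(forest.feature_importances_)  # relative weight of each column across the trees
```

Note that “no direct impact” in this sense means no single rule keyed on postcode alone; the field still contributes to the forecast through those combinations.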

They also stress that the forecasting model’s output is “advisory” and should not remove discretion from the police officer using it. An audit trail, showing how the system arrived at any given decision, will also be accessible should scrutiny be required later, Prof Sherman said.

http://www.bbc.com/news/technology-39857645
