10% false positives is an enormous rate, given how police like to ‘find’ evidence and ‘elicit’ confessions.
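To put rough numbers on why 10% is enormous (every figure below is made up for illustration): when actual offenders are rare, a 10% false-positive rate means flagged innocents can outnumber real hits by a hundred to one.

```python
# Back-of-the-envelope base-rate arithmetic; all numbers are assumptions.
population = 100_000
base_rate = 0.001           # assume 0.1% of people are actual offenders
sensitivity = 0.9           # assume the tool catches 90% of real cases
false_positive_rate = 0.10  # the 10% quoted above

true_positives = population * base_rate * sensitivity                 # 90
false_positives = population * (1 - base_rate) * false_positive_rate  # 9,990

# ~111 innocent people flagged for every genuine hit.
print(false_positives / true_positives)
```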
It isn’t predicting individual crimes; it’s doing pattern recognition and extrapolation, the same way weather is forecast.
“There are on average 4 shootings in November in this general area, so there will probably be 4 again this year” is the kind of prediction the AI is making.
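Something like this minimal sketch, with invented counts, is all that kind of forecast amounts to:

```python
# A seasonal-baseline "predictor": forecast next year's count for a month
# as the historical average for that month. All figures are hypothetical.
from statistics import mean

# Made-up shooting counts for one area, keyed by (year, month)
history = {
    (2019, 11): 4,
    (2020, 11): 3,
    (2021, 11): 5,
    (2022, 11): 4,
}

def forecast(history, month):
    """Predict next year's count as the mean of past counts for that month."""
    return mean(count for (year, m), count in history.items() if m == month)

print(forecast(history, month=11))  # -> 4.0: "probably 4 shootings again"
```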
So they are using it to try and decide on deployment?
If that is all they’re using it for, I guess it isn’t too bad, as long as it isn’t accusing individuals of planning to commit a crime with zero evidence.
No, it’s bad, because ultimately it’s not leading anywhere. Tools like this can’t be used by unqualified people who don’t understand how they work (and not many qualified people do either; my team lead at work, for example, is enthusiastic and just doesn’t seem to hear arguments against it, at least the ones I can make with my ADHD, which forces me to strip explanations to the bone).
If it’s ultimately not applicable where people want to apply it, it shouldn’t even be tested. Testing it only lends such applications credibility.
It’s the slippery slope that some people think doesn’t exist. Actually, slippery slopes exist everywhere.
I mean, you can train AI to look for really early signs of multiple diseases. It can predict the future, sort of.