
It's not even just incompetence, but malice. "AI says so" is going to be the perfect catch-all excuse for literally everything anyone might want to do that they shouldn't. You know how techbros love to excuse every horrifying outcome of their torment nexi with "don't blame me, the algorithm did it"? It's going to be like that, but now everyone can do it.



It's also why people have started parroting the phrase "the purpose of a system is what it does". Look at where we are right now: on the precipice of this becoming widely used in all forms of policing. We still have a chance to police the police's use of AI.

The purpose of using AI to identify suspects in criminal cases is to ease the burden of manually searching for a suspect (or insert whatever purpose statement you want). Ok, but we're already getting false positives that are damaging people's lives, and we're still in the early stages. And I don't want to hear "trust me bro, it will get more accurate" as an excuse not to regulate it.

At a minimum, we should enshrine the right to appeal AI-driven determinations and set limits on how they can be used to establish probable cause.

This isn't even the only recent case of this happening. There was another case of mistaken identity due to AI. [0] Sure, 4 hours isn't the same as 5 months, but this guy offered to show multiple forms of ID to prove who he was! The bodycam footage was posted a few months back but never got traction here.

Like, if a police officer can't read numbers, they can't administer breathalyzer tests. If the AI can't be used responsibly, then it can't be used at all.

[0]: https://www.youtube.com/watch?v=lPUBXN2Fd_E



