When the idea of self-driving cars first started becoming mainstream, I remember a lot of debate about liability. If an accident occurred, who would be at fault? I think a lot of those questions are still unanswered.
Fast forward and now we have software like ChatGPT. I assume these systems will only become more capable (and connected) over time.
Which makes it strange that I haven’t really heard any similar discussion around liability. What happens when an AI makes mistakes or causes damage?
Maybe in people’s minds it doesn’t matter, because AI is either something that helps with homework questions or something that’s taking over humanity. Reality is probably somewhere in between, with much more mundane mistakes and damage done along the way.
What happens when the first ransomware is deployed by AI, on behalf of a user who just wanted tips on how to make more side income?
No way they don’t force you to agree to some “terms and conditions” along the lines of: “You accept full responsibility for all risk, and if we get sued, you agree to pay on our behalf. And because we know you won’t read this, here’s a list of all the risks so we can say you gave informed consent: …”