Love this perspective. The accountability vacuum is absolutely the elephant in the room, and it makes me think the next big leap isn't better models but building proper human-AI oversight into these systems from the get-go.
In the context of LLMs, I TOTALLY agree with you. But in the context of machines, we put our lives in their hands every day: planes on autopilot, traffic lights that make choices based on traffic flow, CT scanners... Planes crash, cars crash, and medical machines might (rarely) misdiagnose. But they still make fewer errors than humans do.
Of course I don’t mean “ban all automation”! That would be silly (not implying your comment is).
But I fear we’re too quick to delegate jobs (and roles) that require human judgment to AI systems.
I hope I’m wrong, but I think we’re going to see a lot of catastrophes in which people and businesses get their private data exposed because these systems were left to run wild and unsupervised.
Dang, loved the way you laid this all out man, really good stuff to sit and think with.