I'm a designer, so the nature of my job makes me a little... optimistic. But you took the time to write your concerns, so I thought I'd add my perspective.
1. Transparency and traceability - it's impossible to tell what it is truly doing, and within software engineering that's extremely dangerous
I won't comment; my expertise doesn't lie on the engineering side of AI models.
2. Security exposure through unintended use and consequences. It's a nuclear bomb in the hands of an infant
In what context? Are you talking about training AI? Accessing information through AI? Something else?
3. Directed use for nefarious purposes - just head over to twitter, and the rampant porn fakes and paedophilia generated by AI. One hideous example of a huge iceberg
For sure. Can't disagree with this. As a consumer product, it's the wild west.
That being said, would you destroy the internet under the same premise? I mean if AI is the dealer, the internet is the supply network, right?
4. Human interaction. It's not human. People have died due to their interactions, mostly through suicide
True. I'm not going to quibble by throwing suicide stats around. People are lonely, this is something that we, as a society, need to consider in a far more compassionate light. People are in a horrific place if they feel as though they need to get validation from a machine.
5. Extreme use of resources, pushing the planet ever further to an irreversible climate doom
Holding this specific example up as the harbinger of doom is a convenient scapegoat that distracts from the actual problem at hand.
We need better energy solutions.
We are using more and more energy by the day - if it's not this, it'll be something else.
6. It's really, really not accurate, nor is it based on accuracy
This is true for the consumer models. Some R&D models are leaps and bounds ahead of what is available to the general public. I won't delve too deep into this.
7. No one knows where this will go. Not the people who built it, own it, or you and I.
In what way? Are you talking about decision making? or future development? something else?
8. Human interaction - so many assholes out there will use this to further disinformation, crime and all sorts of other hideous stuff.
Personally, I think that this falls in the same category as 3.
Look, I understand your perspective. This is the nature of progress: it scares people, and rightly so. But every piece of technology that makes our lives better can also make it worse.
Cars allow us to traverse large distances in short amounts of time, but they can also be used to end people's lives.
Social media allows us to communicate instantaneously with loved ones across the globe, but it also allows our enemies to harass us in our own homes.
The benefits far outweigh the negatives in both of these cases, even if you can't see it right now. Bear in mind, I deal with R&D/custom models that aren't available to the public. From the projects I see going on:
GPs will be able to diagnose patients more quickly and more accurately when using a model trained on medical libraries.
Law can become less subjective and more consistent when models are trained on precedents, case circumstances and previous rulings.
I'll be clear: I don't think we should ever take our hands off the wheel of this sort of technology, and there needs to be proper oversight established for guiding its development and application. I just believe that progress should be guided - not sunk by the unknowns.