A.I. (Artificial Intelligence): the end of life as we know it?

But seriously, we should all be worried.

It depends.

If your job is easily automated, then yes. But this has been the case since the Industrial Revolution.

I agree with the need for regulations, but we're a wee way off the Terminator universe, gentlemen.
 
Disagree; this is vastly different. The oligarchs own a very proprietary set of technology and don't want to be bound by traditional rules and law. Nor do authoritarians like Trump.

It's built on the IP of others, to the benefit of the few, and is hugely powerful. It needs accountability and constraints until safety and human rights can be guaranteed.

It affects far larger swathes of the world.
 
And it's pretty much out of human control now: AI programmes are improving AI programmes.
 
Critical thinking for now cannot be replaced.

But I would recommend everyone in related industries look at the following graph that Anthropic released, and the actual report.
[Attachment: graph from Anthropic's report]
Red is where we are currently at. Blue is where it could get to.


I see lots of people using free LLMs, i.e. ChatGPT, Grok, etc., and thinking that's the level of AI we are currently at.

Anyone who extensively uses Anthropic's Claude Cowork or Code will know better. There are currently people who use AI to create memes or reword their emails, etc. I liken it to people who google by putting in a bunch of search terms and expecting to find the answer, versus the people who know how to use it effectively, using operators to actually get to the answer without scrolling.

Back to critical thinking and utilising AI to make your life easier, the people who can do this quickest will be the winners out of this.

And at the bottom of the pile are people who use MS Copilot and think they are using AI... it's rubbish and geared towards idiots. And if you are getting value from that, probably rethink your career, because you will be the first to go.
 
It's built on the IP of others, to the benefit of the few, and is hugely powerful. It needs accountability and constraints until safety and human rights can be guaranteed.

I said that I agree that it needs regulating.

However, it really depends on the application as to who it benefits and why. You could argue that any consumer product is made to benefit the seller, and that it is, and always has been, the responsibility of the collective as to whether or not it benefits them too.

Simply put: a product's saleability is determined by the value it brings to the people who own/use it.

The masses have embraced AI, far more than I think most people understand.

AI is capable of sifting through mind-bending amounts of data at a very rapid pace. Some R&D models *cough* are impressively accurate at doing so. This in itself provides massive time-saving and accuracy benefits in certain applications.

It's built on the IP of others

I love this argument. Read into the 'original thought' theory if you haven't already.

Regardless, in my mind, sensible legislation would make LLMs subject to copyright law and thus require them to use appropriately sourced data as a base. I'm VERY pro this.

Legislation itself is quite difficult to pass, from my understanding (not a lawyer, but I've had a few conversations with some), as the law requires human accountability. Whose fault is it if IP is infringed?

Also, just to correct you a little bit: there are popular LLMs that are trained on ethically sourced IP and data. They just aren't as far along (or as accurate) as the unethical ones.


What's unsafe about generative AI?
 
Anything released by the AI companies themselves should be taken with a grain of salt, IMO, although I use Claude Code every day and think Anthropic puts out the best AI of the current day. Their value is directly tied to these speculative guesses around how AI will take over X industry, etc. You can see it via the mythos, and its grandstanding about finding bugs and security vulnerabilities never found before, which they have not disclosed, even with more than enough time/meetings given to large companies to patch their products. That allows them to safely say it found X, which would have allowed Y to happen, but luckily we've patched it now.

So when it comes to graphs like this, you can see the markets Anthropic is targeting. Palantir would have protective services a lot higher. Gemini would have education a lot higher. Meta would have sales and social services higher. Etc., etc. So if anything, the "market impacts" chart reads as a "market share" diagram they wish to one day have.

TL;DR: all of this feels like pharmaceutical companies reporting on their own drugs when the research isn't done by an independent source.
 
What's unsafe about generative AI?
Not an expert by any means but here's a rough stab:
1. Transparency and traceability - it's impossible to tell what it is truly doing, and within software engineering that's extremely dangerous
2. Security exposure through unintended use and consequences. It's a nuclear bomb in the hands of an infant
3. Directed use for nefarious purposes - just head over to twitter, and the rampant porn fakes and paedophilia generated by AI. One hideous example of a huge iceberg
4. Human interaction. It's not human. People have died due to their interactions, mostly through suicide
5. Extreme use of resources, pushing the planet ever further to an irreversible climate doom
6. It's really, really not accurate, nor is it based on accuracy
7. No one knows where this will go. Not the people who built it, own it, or you and I.
8. Human interaction - so many assholes out there will use this to further disinformation, crime and all sorts of other hideous stuff.
 
I'm a designer, so the nature of my job makes me a little... optimistic. But you took the time to write your concerns, so I thought I'd add my perspective.

1. Transparency and traceability - it's impossible to tell what it is truly doing, and within software engineering that's extremely dangerous

I won't comment; my expertise doesn't lie on the engineering side of AI models.

2. Security exposure through unintended use and consequences. It's a nuclear bomb in the hands of an infant

In what context? E.g. Are you talking about training AI? Accessing information through AI?

3. Directed use for nefarious purposes - just head over to twitter, and the rampant porn fakes and paedophilia generated by AI. One hideous example of a huge iceberg

For sure. Can't disagree with this. As a consumer product, it's the wild west.

That being said, would you destroy the internet under the same premise? I mean if AI is the dealer, the internet is the supply network, right?

4. Human interaction. It's not human. People have died due to their interactions, mostly through suicide

True. I'm not going to quibble by throwing suicide stats around. People are lonely, this is something that we, as a society, need to consider in a far more compassionate light. People are in a horrific place if they feel as though they need to get validation from a machine.

5. Extreme use of resources, pushing the planet ever further to an irreversible climate doom

Holding this specific example up as the harbinger of doom is a convenient distraction from the actual problem at hand.

We need better energy solutions.

We are using more and more energy by the day - if it's not this, it'll be something else.

6. It's really, really not accurate, nor is it based on accuracy

This is true for the consumer models. Some R&D models are leaps and bounds ahead of what is available to the general public. I won't delve too deep into this.

7. No one knows where this will go. Not the people who built it, own it, or you and I.

In what way? Are you talking about decision-making? Future development? Something else?

8. Human interaction - so many assholes out there will use this to further disinformation, crime and all sorts of other hideous stuff.

Personally, I think that this falls in the same category as 3.

Look, I understand your perspective. But this is the nature of progress: it scares people, and rightly so. Every piece of technology that makes our lives better can also make them worse.

Cars allow us to traverse large distances in short amounts of time, but they can also be used to end people's lives.

Social media allows us to instantaneously communicate with loved ones across the globe, but it also allows our enemies to harass us in our own homes.

The benefits far outweigh the negatives in both of these cases, even if you can't see it right now. Bear in mind, I deal with R&D/custom models that aren't available to the public. From the projects I see going on:

GPs will be able to diagnose patients quicker and more accurately when using a model that is trained on medical libraries.

Law can become less subjective and more consistent when models are trained on precedents, situations, and previous rulings.

I'll be clear: I don't think we should ever take our hands off the wheel of this sort of technology, and there needs to be proper oversight established to guide its development and application. I just believe that progress should be guided, not sunk by the unknowns.
 
Back to critical thinking and utilising AI to make your life easier, the people who can do this quickest will be the winners out of this.
Great insight with that graph around jobs that will be unaffected.

Seems we will all be doing person-to-person service-type jobs, and only a very few will be doing critical-thinking-type roles.

I believe many organisations will actually just do more rather than cutting back roles. Govt and councils will invent weird and wonderful new jobs doing new things to keep themselves busy.

This is change, but like all the changes previously, we will adapt and find new things to do.
 
Interesting stuff. I'm reading a Michael Connelly novel at the moment involving a court case where a young guy shot his girlfriend because his chatbot told him to (not directly).

Anyway, while the app was being built, the company's ethicists didn't like what the developers were up to in ignoring guardrails, and they got the sack. A key point for the prosecution.

The underlying theme was that the coders were all in their late 20s/early 30s and were coding for 12-16-year-old teenagers, and the generational gap in experience created a multitude of problems.

On another AI matter, we use Halter on some of the stock. I won't go into it too deeply, you can look it up, but it is essentially a fenceless farming system, amongst other things. Magic on cold winter days or if you want to go away for the weekend.
 
The problem isn't how far off we are from a dystopian future, in whatever VERY likely form it lands upon us all; it's the speed at which the hairless apes are rushing towards it.

Very reminiscent of the nuclear arms race. The difference being, if we get it wrong, we'll have built weapons of mass destruction that think.

Oh shit.
 
Rhetorical question?

No.

Practically, it doesn't make sense. The whole point of WMDs is that they don't have to be accurate and think on the fly. They go, they destroy, everyone dies.

Also, the idea of governments relinquishing control over such power is nonsensical. Believe it or not, a leader's hand on the button is the whole point of wielding such power.

Now, surgical-strike missiles... maybe. But the checks and balances in place are many, even with the automated IDing of targets we have now.

Human oversight is always going to be a requirement, no matter which way you look at it.
 
I am calling AI a weapon of mass destruction; the reference to the arms race was about the mentality behind the race to develop AI.

Not bombs with brains in a literal sense.
 