
DeepSeek AI

Back to the topic, and the threat:

Virtual machines are designed and intended to insulate an application or program from the hardware, for a number of reasons. But it's one thing to design in a feature; it's another to guarantee it can't be bypassed, or just brute-force blown away. Ask a hacker.
The operating system insulates the application from the hardware. The hypervisor of a virtual machine insulates the operating system from the hardware. And the microprocessor itself provides additional security protections.

So an AGI has to break through several layers of defenses. The first one being that he is living in a simulation.
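As an aside, that first layer is even observable from inside the guest: on x86, a hypervisor is supposed to set the CPUID "hypervisor present" bit (leaf 1, ECX bit 31), which Linux surfaces as a `hypervisor` entry in the `flags` line of `/proc/cpuinfo`. A minimal sketch (the function names are invented, for illustration only):

```python
def hypervisor_flag_present(cpuinfo_text: str) -> bool:
    """Return True if any 'flags' line in /proc/cpuinfo-style text
    lists the x86 'hypervisor' bit (CPUID leaf 1, ECX bit 31)."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            _, _, flags = line.partition(":")
            if "hypervisor" in flags.split():
                return True
    return False

def running_in_vm(path: str = "/proc/cpuinfo") -> bool:
    """Best-effort check on Linux; returns False if the file is unreadable."""
    try:
        with open(path) as f:
            return hypervisor_flag_present(f.read())
    except OSError:
        return False
```

Of course, the whole point of the "AGI escapes the sandbox" worry is that a hypervisor can also mask this bit, so absence of the flag proves nothing.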

For years, most software was "tested" by such teams to "prove" it was unbreakable. It is still design-reviewed and tested, but none of us believe that latter part.

Formal verification uses mathematical proofs to show a design has zero bugs. Only the implementation can go wrong here.
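For readers unfamiliar with the idea: formal verification means stating a property mathematically and having a proof checker confirm it holds for every possible input, not just for test cases. A toy sketch in Lean 4 (assuming the core `split` and `omega` tactics; treat as illustrative):

```lean
-- A tiny "verified" function: integer absolute value,
-- with a machine-checked proof that the result is never negative.
def absInt (n : Int) : Int := if n ≥ 0 then n else -n

theorem absInt_nonneg (n : Int) : absInt n ≥ 0 := by
  unfold absInt
  split <;> omega   -- case-split on the `if`, close both goals arithmetically
```

The proof covers all integers, but the guarantee is only as good as the spec and the toolchain underneath it, which is exactly where implementation can still go wrong.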
And a super-intelligent (beyond human ken) AI of the threat being posited would arguably be able to 'break the unbreakable.' (There's no safe that can't be broken into...)

Only if he's lucky. Accidents are rare for a reason: several unlucky factors have to come together.

The threat of AGI is very, VERY real. Among others, the late physicist Stephen Hawking called it the most formidable existential threat mankind could ever face. (He, too, was an atheist...)

One of the primary points the ex-Goolag AI expert in the video made was that, unlike most other threats, perhaps even terrorism, AI has the potential to kill people who want nothing to do with it, and never even had any reason to concern themselves about it.
With what resources? Just because it's smart doesn't mean it's capable or wise.

You Americans have the perfect saying for this: if you're so smart, how come you aren't rich?

Being smart only proves being smart.
 
Formal verification uses mathematical proofs to show a design has zero bugs. Only the implementation can go wrong here.
You miss the bigger point, and don't understand thinking "outside the box." A mathematical proof always rests on the postulates. And it's not about bugs. It's about breaking what somebody thinks is unbreakable - until it is.

You should watch more 'heist' movies. :)
The operating system insulates the application from the hardware. The hypervisor of a virtual machine insulates the operating system from the hardware. And the microprocessor itself provides additional security protections.
Says who? The hardware manufacturer? Who put in 'back doors' you weren't told about?
 
You miss the bigger point, and don't understand thinking "outside the box." A mathematical proof always rests on the postulates. And it's not about bugs. It's about breaking what somebody thinks is unbreakable - until it is.
Nobody thinks anything is unbreakable. It's just that breaking it is way, way harder than you think.
You should watch more 'heist' movies. :)
Maybe.
Says who? The hardware manufacturer? Who put in 'back doors' you weren't told about?
Me. I'm a software engineer by trade with an interest in the semiconductor industry.

How computers work is my speciality.
 
Mine, too. But an AGI would be smarter'n all of us combined. And have access to details we don't, too.

PS> I'm a circuit designer by trade, but did servo work for many years. Back when assemblers were the rule, and some of us did opcode edits manually by memory (I built IBM's first uP-controlled "auto document feed" for a copier/printer.)

We used to joke about "programming down to bare metal."
 
Mine, too. But an AGI would be smarter'n all of us combined. And have access to details we don't, too.

PS> I'm a circuit designer by trade, but did servo work for many years. Back when assemblers were the rule, and some of us did opcode edits manually by memory (I built IBM's first uP-controlled "auto document feed" for a copier/printer.)

We used to joke about "programming down to bare metal."
All true.

The only issue is that people assume smartness is everything, when it's just one factor.

And an AGI would have a personality, which could be used against him. Spruance and Patton are what I call fighters: fantastic in battle and for leading a charge, but they can't resist running into an obvious trap.

Don't assume an AGI won't suffer from such stupidities.
 
"Artificial intelligence" is just a very sophisticated pattern-finding script. One application of that - and only one - is a natural language model, synthesising human speech. And that is the one that we all focus on because it's easy to interact with. But all these AI models - including DeepSeek - are much more than that. And it is all the other functions that are actually far more interesting and useful.

For instance, imagine an AI air traffic controller. It knows all past patterns of aircraft travel, data from all sources, and works out what patterns tend to precede accidents. It can then recognise near-instantly when a situation is likely to lead to an accident, and warn everyone to take action. The advantage over a human is that it can almost simultaneously look at every data source, so it may notice problems long before humans would see all the data and join the dots. The disadvantage is that it may not recognise the danger in a wholly new situation with no precedent. But it has a very obvious application as at least part of the air traffic control system. Now, let's not debate whether that's a good idea or not, I'm just presenting it as a potential application to illustrate another point. My actual point is:

None of that involves talking like a human. But it's still AI.

Such a model would have multiple interfaces. It would have output to warning lights, radio signals and so forth. It would have input from switches and buttons. It could function entirely without human speech, and still be AI.
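To make that speech-free idea concrete, here is a toy conflict detector for two aircraft on straight-line tracks; all names and the separation threshold are invented for illustration:

```python
def closest_approach_dist(p1, v1, p2, v2):
    """Minimum future distance between two aircraft on straight-line
    paths: positions p and velocities v as (x, y) tuples, same units."""
    dpx, dpy = p2[0] - p1[0], p2[1] - p1[1]   # relative position
    dvx, dvy = v2[0] - v1[0], v2[1] - v1[1]   # relative velocity
    dv2 = dvx * dvx + dvy * dvy
    # Time of closest approach; 0 if already separating (or parallel tracks).
    t = max(0.0, -(dpx * dvx + dpy * dvy) / dv2) if dv2 > 0 else 0.0
    cx, cy = dpx + dvx * t, dpy + dvy * t
    return (cx * cx + cy * cy) ** 0.5

def conflict_alert(p1, v1, p2, v2, min_sep=5.0):
    """Flag a predicted loss of separation (threshold in the same units)."""
    return closest_approach_dist(p1, v1, p2, v2) < min_sep
```

A real system would fuse radar, transponder and weather data and learn from historical patterns; this just shows that the useful output, an alert, needs no natural language at all.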

However, since natural language models (NLMs) are now possible, it might also have an NLM, to make custom warnings to pilots and allow people to query the computer in natural speech. That natural language interface would be another model (or another part of the model).

Now, all the concerns about whether AIs have political biases, personalities, and so forth largely apply to the NLM. The NLM might have political bias in some details of its speech depending on who wrote it. But that is almost irrelevant, it really doesn't matter in this use case. The actual AI air traffic controller part is not politically biased because it doesn't even know about politics - it doesn't even know language. It just knows aircraft flight paths.

Both the Chinese DeepSeek and the woke Western models can be run on a server and trained to be an air traffic controller, or anything else. And/or they can be used as an NLM. In the vast majority of uses, political bias is irrelevant, and might only affect a few peripheral comments here and there on the NLM side of things. Neither model can be dismissed due to this sort of bias; it's a tiny peripheral detail. The actual potential is far deeper and greater than that.
 