The operating system insulates the application from the hardware. The hypervisor of a virtual machine insulates the operating system from the hardware. And the microprocessor itself provides additional security protections. Back to the topic, and the threat:
Virtual machines are designed and intended to insulate an application or program from 'the hardware' for a number of reasons. But it's one thing to design in a feature; it's not necessarily true that it can't be bypassed, or just brute-force blown away. Ask a hacker.
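To make the layering concrete, here is a minimal sketch (assuming a POSIX system; the address `1` is just an arbitrary invalid pointer): even without a hypervisor in the picture, the OS and the CPU's MMU already trap a stray write before it ever touches real memory, killing the offending process.

```python
# Sketch: the OS/MMU insulate a process from raw hardware.
# A child process tries to write one byte to an unmapped address;
# the hardware traps the access and the kernel terminates the child.
import subprocess
import sys

# ctypes.memset(dst, value, count) -- address 1 is not mapped in the process.
child_code = "import ctypes; ctypes.memset(1, 0, 1)"
result = subprocess.run([sys.executable, "-c", child_code])
print(result.returncode)  # nonzero: the kernel stopped the write
```

A hypervisor adds the same kind of trap one layer down, between the guest OS and the physical machine, which is why escaping a VM means defeating an additional, independent enforcement layer.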
So an AGI has to break through several layers of defenses, the first being that it is living in a simulation.
For years, most software was "tested" by such teams to "prove" it was unbreakable. Software is still design-reviewed and tested, but none of us believe that latter part.
Formal verification uses mathematical proofs to show that the software has zero bugs. Only the implementation can go wrong here.
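A minimal illustration of what "proved" means here (a toy example in Lean 4, not drawn from any real verified system): the proof checker accepts the theorem only if it holds for every possible input, not just the cases a test suite happens to try.

```lean
-- Toy machine-checked proof (Lean 4): commutativity of addition
-- holds for *all* natural numbers, checked by the proof kernel.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

Real projects such as seL4 (a verified microkernel) and CompCert (a verified C compiler) do this at full scale; the residual risk lives in the specification and in any unverified parts, which is exactly the "implementation can go wrong" caveat.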
And a super-intelligent (beyond human ken) AI of the threat being posited would arguably be able to 'break the unbreakable.' (There's no safe that can't be broken into...)
Only if it gets lucky. Accidents are rare for a reason: several unlucky factors have to come together.
With what resources? Just because it's smart doesn't mean it's capable or wise. The threat of AGI is very, VERY real. Among others, the late physicist Stephen Hawking called it the most formidable existential threat mankind could ever face. (He, too, was an atheist...)
One of the primary points the ex-Goolag AI expert in the video made was that, unlike most other threats, perhaps even terrorism, AI has the potential to kill people who want nothing to do with it, and never even had any reason to concern themselves about it.
You Americans have a perfect saying for this: if you're so smart, why aren't you rich?
Being smart only proves being smart.