Technological discussion on PS3 security and the crack.

Well, people can be misdirected or lured. I know friends who were offered at least US$1 million cash for one-off jobs. I am okay with that, but I dislike people who hide behind "principles" when the real objective is the money, or other selfish reasons. Sometimes I am flabbergasted when extremely smart people are used/tricked for the wrong reasons.

Other times, it's just what they want to do with their time/lives.
 
Feel free to boycott such lockout mechanisms, and also vote for those who oppose them.
Boycotting is not a viable alternative when ALL the current platforms employ such measures. I'm not ready to quit my favorite poison.

The pirate party's a bunch of maroons from what I've seen; I'd rather vote for an amalgam of Ronald Reagan, Margaret Thatcher and Charlton Heston than for those clowns...

In the meantime, I will enjoy the possibility of buying (and selling!) games via different business models - Gillette-style razor-and-blades, downloadables, DLC - made possible only by secure hardware.
Secure hardware does not exclude open hardware. Commercial software can absolutely continue using its security measures while at the same time accommodating free development. It's just short-sighted greed on the manufacturers' part that makes consoles completely locked down to everyone but licensed developers.

but I think the hardware manufacturers, in turn, are completely in their right to try to stop you.
Why?

I own the device. I'm not RENTING it from them. Therefore I should be the one who decides what I can do with it.

Edit: Even the basic model of "make game, sell copies" seems to be possible only on secure hardware.
Yes, we all know the original PS was such a commercial failure... ;)
 
Secure hardware does not exclude open hardware. Commercial software can absolutely continue using its security measures while at the same time accommodating free development. It's just short-sighted greed on the manufacturers' part that makes consoles completely locked down to everyone but licensed developers.

In this case, the exploit starts from the OtherOS open-system mechanism. So while it is true that secure hardware does not exclude open hardware, combining the two is extremely difficult. It usually means that performance will suffer, since extra checks need to be performed at run-time.

As for short-sighted greed, show me the finances of Sony and Microsoft that say they are raking in tons of money now compared to the resources they have already sunk in (and will continue to invest this fall). Nintendo adopted a different business model and is more resilient to hacks like this, but it also means we don't get to play with the advanced hardware Sony and MS invested in ahead of time.

Why?

I own the device. I'm not RENTING it from them. Therefore I should be the one who decides what I can do with it.

I'm not going to tell others how they should use their devices, but their actions can affect developers and other users negatively.

Yes, we all know the original PS was such a commercial failure... ;)

That doesn't mean it will always follow the same rule. It's not like the laws of physics; it's people. Things can happen in undesirable ways.

I can see potentially positive outcomes from the incident, but by no means are they guaranteed to happen as "scripted". The economy and landscape are different from decades ago. I have been to Asia rather frequently; I know how bad things can go.

In any case, this is certainly an interesting development. Curious to see how Sony will react, especially on the online side and general user experience. They largely neglected the PSP in the early days, and homebrew offered a more usable and more compelling solution than the standard software. Hopefully they learn from that lesson. Don't make us _show_ them how to do usable software. :)
 
This may assist conversation about what's been achieved and what's not been achieved:

http://www.ibm.com/developerworks/power/library/pa-cellsecurity/

I think it's a bit premature to claim the system's been hacked or cracked. The security architecture is actually pretty much designed around the 'what if' of the hypervisor being compromised and the repercussions of that. The system does not treat the hypervisor as the 'last word' or the root of its security; it is not wholly trusted.

The hypervisor has been compromised, but there are a number of safeguards designed for just such a situation. It would be silly to suggest the system is hack-proof, but there's some way to go yet, depending on how easily (or otherwise) he can get access to the keys necessary to run custom code.

Bits from the above link relevant to a compromised hypervisor:

The architecture's main strength is its ability to allow an application to protect itself using the hardware security features instead of the conventional method of solely relying on the operating system or other supervisory software for protection. Therefore, if the operating system is compromised by an attack, the hardware security features can still protect the application and its valuable data. As an analogy, consider the protection the supervisory software provides as the castle's moat and the Cell BE security hardware features as the locked safe inside the castle.

The goal of isolating a process thread is not new; however, in contrast to the hardware-based method, existing approaches have used software to enforce the separation. The operating system or the hypervisor (also known as the virtual machine monitor -- the layer of software with the most authority in a virtualized system) has the responsibility of separating processes. For example, the operating system would ensure that the memory location of the high-value digital content is protected from reads and writes from non-authorized processes. The problem with this approach is that if an adversary takes control of the operating system or the hypervisor, all bets are off.

...


The fundamental problem with existing approaches is that they rely on software to provide the isolation, but at the same time software can be manipulated by an adversary. A better approach is for the hardware design to isolate the process in such a way that the software cannot override the isolation, and this is precisely what the Cell BE processor's Vault provides.

The Vault is implemented as an SPE running in a special mode where it has effectively disengaged itself from the bus, and by extension, the rest of the system. When in this mode, the SPE's LS, which contains the application's code and data, is locked up for the SPE's use only and cannot be read or written to by any other software. Control mechanisms which are usually available for supervisory processes to administrate over the SPE are disabled. In fact, once the SPE is isolated, the only external action possible is to cancel its task, whereby all information in the LS and SPE is erased before external access is re-enabled. From the hardware perspective, when an SPE is in this isolation mode, the SPE processor's access to the LS remains the same, while on the other side of the LS (the bus side), external accesses are blocked.

Tying into keys:

Despite their critical role, keys are usually stored in plain text form in storage. Ideally, instead of in this naked state, the keys will be sealed in an envelope (in other words, encrypted) when in storage, and only unsealed when given to an application that has been authenticated. However, this implies that another key is used for the sealing and unsealing (in other words, for encrypting and decrypting the first key); how is this key stored? Eventually, there must be a key that is not encrypted, and because this is the key that is at the root of all unsealings, we will refer to it as the root key.

Because of the root key's importance in keeping all other keys hidden, it must be robustly protected. The Cell BE processor accomplishes this with its Hardware Root of Secrecy. The root key is embedded in the hardware, and you cannot access it with software means; only a hardware decryption facility has access to it. This makes it much more difficult for software to be somehow manipulated so that the root key is exposed, and of course, the hardware functionality cannot be changed so that the key is exposed.

In fact, the decryption based on the root key can only happen within an isolated SPE and not outside of it; no access to the root key is available, by hardware or software means, from a non-isolated SPE or the PPE. First, this implies that a system designer can force all data decryptions by the root key to happen within the protected environment of the Secure Processing Vault; the keys unsealed by the root key will always be placed (at least initially) in the Vault only. Second, only applications that have successfully passed the Runtime Secure Boot authentication are given access to the keys unsealed by the root key. Any software that might have been adversely modified will not be given access to the unsealed keys. Because the foundation of this control is grounded in both the Runtime Secure Boot and Hardware Root of Secrecy features, the process is more resistant to manipulation than with a pure software-controlled access mechanism.

So the foundation of the security doesn't even trust the hypervisor. Even the hypervisor cannot see anything that's going on in an isolated SPE wrt key usage and decryption.

That's the theory anyway. From his blog post it doesn't sound like he was too aware of these whitepapers - he seemed to be holding out hope for a key policy similar to the iPhone's, or one where keys are not embedded in hardware. But the root key here is embedded in hardware, and access to it can be kept out of sight of even the hypervisor - if they are careful with their vault software. If the software running in the vault were to do something silly like copy a decrypted key out to general memory then, of course, the hypervisor could see it. But assuming they handle keys purely within the isolated SPE, with no slip-ups along the way in their usage, the cracked hypervisor won't help him. He'd possibly have to go back to hardware approaches rather than it being 'all software from here out' as suggested in the blog post.

No system can claim total security. But barring some silly human slip-up in the isolated SPE software or elsewhere, he may still be a little way off a proper crack. I can understand why he jumped to claim that, though - a cracked hypervisor or OS typically does = a cracked system. But in this case it's only part of the story.
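To make the key flow concrete, here's a toy model of the sealing scheme the article describes - plain Python with the Fernet cipher standing in for whatever the real silicon uses, and every name invented by me. The root key never leaves the "vault", application keys only ever exist in the clear inside it, and everything outside (hypervisor included) handles nothing but sealed blobs:

```python
# Toy model of the Cell-style key hierarchy. Fernet is just a stand-in cipher;
# none of this reflects Sony's actual keys, formats or code.
from cryptography.fernet import Fernet

class IsolatedVault:
    """Stands in for an isolated SPE: the root key never leaves this object."""
    def __init__(self):
        # In the real hardware the root key is burned in, not generated at runtime.
        self._root = Fernet(Fernet.generate_key())

    def seal_key(self, app_key: bytes) -> bytes:
        # Wrap (encrypt) an application key so it can sit in ordinary storage.
        return self._root.encrypt(app_key)

    def decrypt_with_sealed_key(self, sealed_key: bytes, ciphertext: bytes) -> bytes:
        # Unseal the app key and use it *inside* the vault; only plaintext data leaves.
        app_key = self._root.decrypt(sealed_key)
        return Fernet(app_key).decrypt(ciphertext)

vault = IsolatedVault()

app_key = Fernet.generate_key()
sealed = vault.seal_key(app_key)                       # safe to keep in flash / on disc
content = Fernet(app_key).encrypt(b"licensed executable")
del app_key                                            # outside the vault only the sealed form remains

# The hypervisor can pass the sealed key and the ciphertext to the vault,
# but it never observes the unsealed key.
print(vault.decrypt_with_sealed_key(sealed, content))
```

The whole scheme only collapses if the code inside the vault does something like write the unsealed key out to shared memory - which is exactly the kind of slip-up being talked about above.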
 
That's why you are one of my most favorite posters on B3D. ^_^

Still... my "concern" now is the execs in Sony (e.g., Did their balls shrink ? so to speak :devilish: How would they react ? How much do they trust their techies ?), not the technical stuff yet.
 
That's why you are one of my most favorite posters on B3D. ^_^

Still... my "concern" now is the execs in Sony (e.g., Did their balls shrink ? so to speak :devilish: How would they react ? How much do they trust their techies ?), not the technical stuff yet.

Bless your heart.

I would say some balls shrank.

On the other hand, though, I think the people who worked on the chip probably expected this to happen much sooner than it did. They clearly designed the security architecture around this very scenario of the hypervisor being compromised.

Of course, all their work is for naught if Sony's software is lax about handling keys. It depends on every process engaging in decryption, key handling and so on being executed in an isolated SPE, with no sharing of any potentially compromising data outside of that SPE's LS.

So if I were a Sony exec I might be asking the techie guys to double-check their key handling policy in the software. It would be a shame for them if keys were exposed because their software didn't take rigorous advantage of the architecture afforded to them.

Beyond that possibility of 'mistakes' in key handling, of course there are other vectors that could be used to try to get at the keys or the root key. But they may be more limited to hardware approaches at that point, i.e. more difficult.

We'll see though. It would not surprise me if there was some stupidity somewhere along the chain that handily exposes a key or keys to the hypervisor. But right now I don't think it can be called a 'proper' hack.
 
This may assist conversation about what's been achieved and what's not been achieved:

So he broke the easy part - what was expected to be broken at some point anyway - and now he is where the real problem starts: at the gates of the Cell's embedded security.

But does this mean that he isn't able to actually run his "own software", or in what way is he limited with what he has now?

Thanks for the info btw, really interesting. I wonder what his reaction will be :)
 
So he broke the easy part - what was expected to be broken at some point anyway - and now he is where the real problem starts: at the gates of the Cell's embedded security.

But does this mean that he isn't able to actually run his "own software", or in what way is he limited with what he has now?

Thanks for the info btw, really interesting. I wonder what his reaction will be :)

I wouldn't say it's easy to break... I mean, he is the first to do it as far as we know, anyway, and no one else had done it in the 3+ years since launch. So credit for that.

But yes, now he's tapping around the SPE that handles keys and decryption etc. To quote his own comment on his blog:

I know, I'm not looking for keys in the dump directly. But I now have all the routines that set up and talk to the SPU

IF things are working the way they're supposed to, though, he shouldn't be able to see anything of what the SPE is doing from a hypervisor level of privilege. We'll see where he goes with this. There may be holes here that the theory doesn't cover... :)

Oh and yeah...I don't think he can run custom code without these keys. He can do things with the hypervisor, but executing his own code isn't one of them (yet).
 
I don't think he's after the root key. Or at least he will fail if he tries to go after it - unless someone at Sony messed up really badly, this key will only be used to decode further keys and boot code (which resides in the firmware and can be changed) and to authenticate firmware updates.

This is the real meat of the Cell's/PS3's security: unless this first step is broken, the whole security scheme can be changed with an update and you're back at the beginning.

This hack could, however, allow running homebrew/backups until the hole is patched out (and I don't see how this couldn't be possible). Once the XMB is running, there is no encryption on the code (or you'd have to decode the whole code stream after each I-cache miss), so you could probably patch out verification and decrypt signed binaries with the original routines (possibly even without knowing the keys).
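As a throwaway illustration of what "patch out verification and let the original routines do the decrypting" means, here's a purely hypothetical Python sketch - nothing in it reflects actual PS3 firmware, the names are made up, and it only shows why R/W access to the code that calls the verifier can be as good as knowing a key:

```python
# Toy loader: verify, then hand off to the original (untouched) decrypt routine.
def verify_signature(blob: bytes) -> bool:
    # Stand-in for the real authentication step (RSA check, hash compare, ...).
    return blob.endswith(b"SIGNED")

def decrypt(blob: bytes) -> bytes:
    # Stand-in for the original, working decryption routine.
    return blob.removesuffix(b"SIGNED")

def loader(blob: bytes, check=verify_signature) -> bytes:
    if not check(blob):
        raise RuntimeError("refusing to run unsigned code")
    return decrypt(blob)

# Normal path: unsigned code is rejected.
try:
    loader(b"homebrew")
except RuntimeError as e:
    print(e)

# "Patched" path: overwrite the check, and the untouched decrypt routine
# happily does the rest - no key ever needs to be known.
print(loader(b"homebrew", check=lambda _blob: True))
```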
 
I don't think he's after the root key. Or at least he will fail if he tries to go after it - unless someone at Sony messed up really badly, this key will only be used to decode further keys and boot code (which resides in the firmware and can be changed) and to authenticate firmware updates.

This is the real meat of the Cell's/PS3's security: unless this first step is broken, the whole security scheme can be changed with an update and you're back at the beginning.

This hack could, however, allow running homebrew/backups until the hole is patched out (and I don't see how this couldn't be possible). Once the XMB is running, there is no encryption on the code (or you'd have to decode the whole code stream after each I-cache miss), so you could probably patch out verification and decrypt signed binaries with the original routines (possibly even without knowing the keys).

Does controlling the hypervisor even allow him to control what the XMB runs? Or specifically, allow him to 'make' the XMB run unsigned code?

I'm no expert, but I didn't think it would...I'd have assumed the XMB/GameOS would be just another bit of software that the hypervisor runs but can't interfere with per se...software that's decrypted and checked for any modification before running...and so on for any software the gameos runs also.

(In theory anyway... in practice maybe there are exploits to be taken advantage of there.)

From the blog comments it does certainly sound like he's going after keys though, to enable him to run code at the hypervisor level - he mightn't need the root key to do that, but some of the other keys that are decrypted with the root key. If he could get unsigned code running through the XMB I'd have assumed that's what he'd be doing next rather than trying to go key hunting.
 
Does controlling the hypervisor even allow him to control what the XMB runs? Or specifically, allow him to 'make' the XMB run unsigned code?
Dunno, I don't know the innards of the PS3 system, but as I/O is passed through the hypervisor - probably yes. And he claims to have R/W access to system-level memory, which I can't imagine being encrypted once loaded (save for a few modules that are used for "secure" stuff and run on the isolated SPU). Apart from the isolated SPUs, the memory protection is quite like a normal PC's, especially for everything running on the PPU.

I'm no expert, but I didn't think it would...I'd have assumed the XMB/GameOS would be just another bit of software that the hypervisor runs but can't interfere with per se...software that's decrypted and checked for any modification before running...and so on for any software the gameos runs also.
Yeah, decrypted and checked - but afterwards you could patch everything the next time the hypervisor is called (if he really has R/W access to system memory).

(In theory anyway... in practice maybe there are exploits to be taken advantage of there.)

From the blog comments it does certainly sound like he's going after keys though, to enable him to run code at the hypervisor level - he mightn't need the root key to do that, but some of the other keys that are decrypted with the root key. If he could get unsigned code running through the XMB I'd have assumed that's what he'd be doing next rather than trying to go key hunting.
Signatures typically use RSA variants, so getting the key from the firmware wouldn't even help getting through authentication (you need the other key of the pair). If he can patch out authentication, he should be able to patch out decryption as well. Kinda like with the PSP, where CFW authors used the PSP's hardware to decrypt firmware modules (you still can't do it without a PSP), then patched them up to skip decryption/authentication.
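For anyone unfamiliar with why the key sitting in the firmware doesn't help: signature schemes are asymmetric, so the half that verifies cannot produce signatures. A quick sketch with the Python `cryptography` package (keys generated on the spot; obviously not Sony's actual scheme or parameters):

```python
# Verification key != signing key: holding the public half gets you nowhere.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()          # this is the part a device would ship with

firmware = b"official update image"
signature = private_key.sign(firmware, padding.PKCS1v15(), hashes.SHA256())

# Anyone holding the public key can check a genuine signature...
public_key.verify(signature, firmware, padding.PKCS1v15(), hashes.SHA256())
print("official image verifies")

# ...but a modified image fails, and the public key alone cannot mint
# a new signature for it.
try:
    public_key.verify(signature, b"patched update image",
                      padding.PKCS1v15(), hashes.SHA256())
except InvalidSignature:
    print("patched image rejected")
```

The console only needs the verifying half to check updates, so that's all it would carry; forging a signature for a patched image needs the private half, which never leaves the signer.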
 
Dunno, I don't know the innards of the PS3 system, but as I/O is passed through the hypervisor - probably yes. And he claims to have R/W access to system-level memory, which I can't imagine being encrypted once loaded (save for a few modules that are used for "secure" stuff and run on the isolated SPU). Apart from the isolated SPUs, the memory protection is quite like a normal PC's, especially for everything running on the PPU.

Yeah, decrypted and checked - but afterwards you could patch everything the next time the hypervisor is called (if he really has R/W access to system memory).

This assumes the SPU or whatever doesn't do another check of the data or code, right?

IIRC there's allowance for this checking beyond the first boot, precisely so if something is modified beyond the initial boot, it can still be trapped and execution stopped.

But I dunno what policy they actually have in place here... e.g. whether XMB/GameOS data/code integrity is checked just at first boot, at given intervals afterwards, or even every time the code/data is used. If it's only checked at the initial boot then there could be room for patching... but I know part of the security scheme available was boot checks AND runtime checking of data/code for alterations. Just dunno what Sony is or isn't using... :)
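The difference between those two policies is easy to show with a toy example - plain Python, hypothetical names, and in no way the actual PS3 mechanism - where a hash recorded at boot catches nothing that happens afterwards unless somebody re-checks it:

```python
# Boot-time-only check vs. runtime re-check of a code region.
import hashlib

code_region = bytearray(b"original xmb module")
golden_hash = hashlib.sha256(code_region).digest()     # recorded at initial boot

def integrity_ok() -> bool:
    return hashlib.sha256(code_region).digest() == golden_hash

print("boot-time check:", integrity_ok())              # True

# Later, something with R/W access patches the module in place...
code_region[:8] = b"patched "

# A boot-only policy never looks again; a runtime policy re-checks and traps it.
print("runtime re-check:", integrity_ok())             # False
```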

Signatures typically use RSA variants, so getting the key from the firmware wouldn't even help getting through authentication (you need the other key of the pair). If he can patch out authentication, he should be able to patch out decryption as well. Kinda like with the PSP, where CFW authors used the PSP's hardware to decrypt firmware modules (you still can't do it without a PSP), then patched them up to skip decryption/authentication.

Hmm. I dunno then. 'Getting the keys' seems to be his goal as described in the blog and in his comments. He seems to be under the impression they'll give him the same control as on iPhone for example. Personally I've no idea of the value of snooping the keys, just that this is what he's trying to do.

Edit - reading a bit of the previously cited link, the question of a patched XMB/GameOS might be addressed here:

The drawback of this approach is it assumes that checking for compromises in the software at power-on time is enough. It does not protect against software compromises that happen after power-on time. However, most software-based attacks happen during runtime, and if this happens, the chain of authentication breaks, and any software that is launched after that time can not necessarily be trusted.

The Cell BE processor addresses this problem with its Runtime Secure Boot feature. It lets an application secure boot from the hardware an arbitrary number of times during runtime. Thus, even if other software in the system has been compromised in the past, a single application thread can still be robustly checked independently. In essence, the application can renew its trustworthiness as many times as needed even as the system stays running longer and gets more stale. Specifically, a hardware implemented authentication mechanism uses a hardware key to verify that the application has not been modified, and the authentication is based on a cryptographic algorithm.

This runtime secure boot, in fact, is tightly coupled with an SPE entering isolation mode. An application must go through the hardware authentication step before it can execute on an isolated SPE. When isolation mode is requested, first, the previous thread is stopped and cancelled. Then, the hardware will automatically start fetching the application into the LS, and the hardware will verify the integrity of the application. If the integrity check fails, the application will not be executed. The check can fail for one of two reasons. The application might have been modified within memory or storage. Then, the assumption is that the functionality might have changed and it cannot be trusted anymore. Or, the writer of the application does not know the cryptographic secret that is needed for a successful authentication. Otherwise, if the authentication check is successful, the hardware will automatically kick-start the application's execution in isolation mode. Because the hardware controls all of these steps, the verification of the application's integrity cannot be skipped or manipulated and will happen consistently and correctly.

So this hardware secure boot I presume can be invoked for XMB code run on the PPE or an isolated SPE... but it must run for the isolated SPE code. If your patch touches the code modules sent to the isolated SPE for execution, it should see that the code's been modified and flag it. I would guess a fair bit of XMB/GameOS code runs on that reserved SPE...

I certainly would presume all the more critical things...like application launching, happen in modules that are either run in an isolated SPE or are runtime secure booted every time they run...

If not, then maybe there's room for patching, but I'd be surprised if Sony wasn't using this stuff while executing XMB/GameOS code modules.
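And, under the same caveats (hypothetical names, HMAC-SHA256 standing in for whatever authentication algorithm the hardware really uses), a sketch of the runtime secure boot gate the quoted article describes - the module is authenticated with a key only the "hardware" holds before it is allowed to run isolated:

```python
# Toy runtime secure boot: refuse to launch a module whose MAC doesn't check out.
import hmac, hashlib, secrets

HARDWARE_KEY = secrets.token_bytes(32)       # never visible to software in the real design

def hw_authenticate(module: bytes, expected_mac: bytes) -> bool:
    mac = hmac.new(HARDWARE_KEY, module, hashlib.sha256).digest()
    return hmac.compare_digest(mac, expected_mac)

def launch_isolated(module: bytes, expected_mac: bytes) -> None:
    if not hw_authenticate(module, expected_mac):
        raise RuntimeError("integrity check failed: module will not run")
    print("module running in isolation mode")

signed_module = b"key-handling code"
mac = hmac.new(HARDWARE_KEY, signed_module, hashlib.sha256).digest()

launch_isolated(signed_module, mac)                       # passes, runs
try:
    launch_isolated(b"patched key-handling code", mac)    # refused
except RuntimeError as e:
    print(e)
```

If something patches the module in memory before launch, the MAC no longer matches and the launch is simply refused - which is why how far the patching route gets depends on how much of the interesting code actually passes through this gate.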
 
Yes, but has anyone written anything that warrants more performance? If the PS3 is broken open and the homebrew community gets free rein, what will actually appear that would never happen with the current limited system?

As I see it, the only reason to bother with PS3 development is for Cell, as that's unique to the platform. That isn't gimped, and no-one's interested. If you want to write games or apps, there are a zillion other, easier-to-work-with platforms!

You know all these media center things that people sell? Know how they really came about and got a swift kick towards becoming mainstream? No? Well, I'll tell you then: a hacked build for the original Xbox that STILL has functionality beyond what most media center style devices can provide, including the 360 and PS3.
 
This hack could, however, allow running homebrew/backups until the hole is patched out (and I don't see how this couldn't be possible). Once the XMB is running, there is no encryption on the code (or you'd have to decode the whole code stream after each I-cache miss), so you could probably patch out verification and decrypt signed binaries with the original routines (possibly even without knowing the keys).

Is that something you know, or something you presume? It would be incredibly stupid on Sony's side not to do run-time verification of the code. The Xbox 360 does memory hashes to prevent code injection/replacement; I assume the PS3 has a similar system.
 
Geohot's point seems to be that his hack is at such a low level that he can prevent measures like this actually being activated in the first place.

The blog is interesting in that those who've been trying to hack the system have repeatedly posted about how he's wasting his time right from Day One, but clearly some milestone has been reached that nobody else has got to yet.

With regard to the Cell's security measures and the documentation surrounding them, it's also fairly obvious from his tweets and other comments that he's read them too.

I'm not sure that anything good is going to come of this (piracy probably won't impact legit gamers so much in the longer term, but wholesale ruination of PSN online gaming will), but it is a fascinating story.
 
He's not doing backup support. All he's said is he'll post keys if he finds them. A lot of work would need to be done. If keys are leaked you will prob. see other release groups work on something, but you won't see it from geo.
 
Surely he's hoping for something that will generate donations? A PS3 equivalent to the jailbreak or carrier unlock? His stated aims in the past have been to turn retail units into TEST units (pretty much a backdoor into piracy) and to "de-DRM" the retail unit to work as a kind of cheap Blade server.
 
And you signed up to B3D today to tell us that by hacking into the PS3, you can finally use PS3 Linux efficiently, regardless of the consequences?
I am trying to understand your motivation.

I'm actually an old-timer here - 2004, around that time. I just could not remember my username etc., so I had to start over again. Ultimately it will be up to the end users what they do with this, so the good, the bad and the ugly may surface.
 
Geohot's point seems to be that his hack is at such a low level that he can prevent measures like this actually being activated in the first place.

He doesn't really know. He hopes he can access the locked off SPU via calls, at least according to his blog. According to IBM's whitepaper, that's not possible. There might still be a hole somewhere, but he's depending on someone else screwing up.

With regard to the Cell's security measures and the documentation surrounding them, it's also fairly obvious from his tweets and other comments that he's read them too.

Is it obvious? 'I'm hoping to find the decryption keys and post them, but they may be embedded in hardware. Hopefully keys are setup like the iPhone's KBAG.' That's right from the blog post.
 