Interview with Chris Satchell - XNA and Security

For a traditional (read: IBM) VM, you basically have fully compiled object code running directly on the metal. Security/isolation is provided via TLB entry protection and by trapping privileged instructions to the hypervisor (HV). While this can work well in a purely computational environment, it gets much more complex on something like the PS3, which has to account for additional actors like the GPU.

Now the problem becomes that the HV can really only allow/disallow access to specific memory spaces and to higher protection levels of the processor. By default the application, especially in an environment like the PS3's, has plenty of access to things like NVRAM and disk. The fear is that someone figures out how to use the scripting interface of a game to reach the disk-access abilities of the base program and then put malicious code on the disk.

For a managed runtime, you are effectively trapping ALL operations, and can therefore restrict what code is allowed to do at the level of individual operations rather than whole memory spaces.

I have read somewhere that Sony's HV also mediates I/O accesses, and that one SPU has been reserved for security-specific use (with its Local Memory totally blocked off/unmapped?). This is why I am interested to find out how they lay out and partition the box.

EDIT: Nevermind, I think I have the answer now (at least the part I'm interested in).
 
Have you ever read SecurityFocus? Have you seen the kind of crap that goes unpatched month after month, or gets fixed incorrectly over and over again? It's a simple fact that most developers simply don't understand security, and if they do, are rarely budgeted to manage it properly.

Have you ever considered that such a finite body of evidence is not, and cannot be, representative of the state of security for most work (i.e., the vast majority) across such a massive range of companies?

Such evidence also doesn't factor in console-based titles, which make up a huge proportion of the work done by a large proportion of developers.

Your assumption that most developers don't understand security, or aren't somehow competent enough to deal with it, is largely bullsh** & hot air..

Granted, it's probably true that most of us rarely have the budget to manage it properly, but in most instances (predominantly when you're not developing an online multiplayer game) it's purely an issue of priorities. We're content makers, and as such we'd prefer to focus predominantly on content. If that means some games ship with potential security risks for the system, then it's really a matter of oversight, since such concerns aren't normally at the top of the agenda..

You only have to look at how many games ship with gameplay-related bugs/issues to see that most companies don't even have the resources to invest in tightening up the core game itself, let alone security, unless there's a large enough risk of compromise to the core game itself (online multiplayer or MMO cheating, for example). And even then, once a game has shipped you have even fewer resources to invest in support over a prolonged period of time (unless you're Bungie etc..)

It has literally nothing to do with competency or understanding, and to imply such an idea is both ignorant & shows a lack of consideration for the amount of work we have to do to develop a game and the time/resources we have available to do it..
 
It has literally nothing to do with competency or understanding, and to imply such an idea is both ignorant & shows a lack of consideration for the amount of work we have to do to develop a game and the time/resources we have available to do it..

First off, chill out.

The fact that a huge percentage of developers really don't get security at the level they need to is not a knock on developers. It's a statement of how hard it is to really do security right.

Even if every dev on your team has a reasonably good understanding of the security concepts that relate to their product, it still won't matter, because unless security is all they're doing, it's simply too big an area for a dev who's doing feature work to devote the time they'd need to get it right. And even if they could, one person wouldn't be enough.

Even if they could, it wouldn't matter because most companies (and game studios in particular) don't have the test resources they need to find those sorts of issues.

Even if they did, it wouldn't matter because most companies prioritize shipping products over shipping secure products.

That's just plain reality. It's not a question of how smart someone is or how good a developer they are. It's a question of how equipped and able they are to build and ship a product that is not in any way vulnerable. The simple reality is that in almost every single development house on Earth, they aren't. Whether it's through any fault of their own is beside the point.
 
There is also the problem of using insecure libraries. A game developer might have perfect programmers who write 100% secure code, but if they use someone else's libraries in their project, all that work might be for nothing because of a bug in the library, e.g. the stupid LibTiff overflow that is continually being exploited.
 
Yes, the LibTiff exploits hit everyone (Sony, Apple, Microsoft, ...). Third-party libraries are also used by MS (e.g., physics libraries), so they are not immune to this threat either. The only effective way forward is for everyone to be diligent (not just MS). I guess that's why they proposed the safe string libraries for the standards bodies to adopt.

W.r.t. open-source libraries, can the platform owners extend their policies to third parties (e.g., blacklisted/whitelisted open-source projects)? Do they have replacement/common libraries (within their developer networks) as more secure alternatives?

In any case, the proof is in the pudding. We will have to wait long enough to see how they fare, especially with Sony pushing user generated content.
 