CPU Security Flaws MELTDOWN and SPECTRE in the Console Realm *spawn*

The Xbox OS fairly closely mimics Windows 10 S: it's locked to running UWP apps in a container, signed by Microsoft.

And that is why you wouldn't deploy a Spectre exploit using UWP when it'd be preferable to deploy a win32 .exe masquerading as an updated version of Acrobat Reader, or Flash or Chrome, or any other popular binary not using UWP (which is most software) that can run on Windows 7 onwards and gets zero scrutiny.

It's not far-fetched; it's reassurance that the platform as a whole is secure.

You've misread my post :yep2:
 
And that is why you wouldn't deploy a Spectre exploit using UWP when it'd be preferable to deploy a win32 .exe masquerading as an updated version of Acrobat Reader, or Flash or Chrome, or any other popular binary not using UWP (which is most software) that can run on Windows 7 onwards and gets zero scrutiny.



You've misread my post :yep2:
Whoops, I did. I understand now where you meant far-fetched lol, wrong perspective, my bad. But the need to reassure its users that it's secure is the point of the tweet.
 
It's definitely going to affect game servers. Ouch.

https://www.epicgames.com/fortnite/forums/news/announcements/132642-epic-services-stability-update

Fortnite servers look like they use 15-40% more CPU after patching.

Edit: It's going to be interesting to see how this affects new game launches, which already tend to have issues with a huge influx of players trying to log in at the same time. PSN, Xbox Live and Steam services must be hit pretty hard. Also, this will have a measurable impact on power consumption in data centers: CPU usage goes up, power consumption goes up, which means more air conditioning, which means power goes up, etc. Will this actually lead to increased costs for service providers being passed on to consumers?

Edit: I just realized the way I wrote this is kind of confusing. CPU utilization actually doubled, or worse. If you follow the link, there's a graph; the 15-40% I was referring to is read off that graph in percentage points. So in some cases utilization goes from 10% to 25%, and in others from around 20% to 60%.
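For what it's worth, here's the back-of-the-envelope version of that correction (the before/after numbers below are my eyeballed readings of Epic's graph, not official figures):

```c
#include <stdio.h>

int main(void) {
    /* {before, after} server CPU utilization in percentage points,
       eyeballed from the graph in Epic's post */
    double cases[][2] = { {10.0, 25.0}, {20.0, 60.0} };
    for (int i = 0; i < 2; i++) {
        double before = cases[i][0], after = cases[i][1];
        printf("%.0f%% -> %.0f%%: +%.0f points, i.e. %.1fx the CPU time\n",
               before, after, after - before, after / before);
    }
    return 0;
}
/* prints:
   10% -> 25%: +15 points, i.e. 2.5x the CPU time
   20% -> 60%: +40 points, i.e. 3.0x the CPU time */
```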
 
How long to design CPUs that don't have this problem and sell them, returning us to where we ought to be?

It's looking like maybe we should never have been quite where we are now, given the double-edged sword some aspects of modern performance-enhancing technology have ended up being. *shrug*

I'd guess it's a minimum of two years from when they started working on the problem till chips show up, so maybe late 2019? Though given that the full implications of SPECTRE are still being discussed and researched, I think ongoing work in this area will be making its way into processor designs for years to come.
 
If CPUs didn't feature these performance enhancements, which introduce these vulnerabilities, how much slower would they be? 15-40%? ;)
 
If CPUs didn't feature these performance enhancements, which introduce these vulnerabilities, how much slower would they be? 15-40%? ;)

Haha ... I doubt it'll cost that much to fix these current issues, but it does seem in hindsight that the focus was a little too much on performance rather than security.
 
Haha ... I doubt it'll cost that much to fix these current issues, but it does seem in hindsight that the focus was a little too much on performance rather than security.
Going by a statement from Red Hat, there are indications that at least one of these vulnerabilities affects POWER and System z, and Apple isn't a slouch when it comes to security given its strong pursuit of secure enclaves and per-device encryption. I wouldn't characterize Big Iron as being soft on security, and Intel had a raft of features present or incoming for a lot of exploit types, just not for a timing issue in their speculative pipelines.
Sometimes you can invest heavily in fighting the threats as you see them, and the things you didn't know you didn't know catch you.

I'm still curious if there will ever be a story explaining what made AMD pick the speculative limit it did, since the decisions about this go back so far in time. Could have been a coin toss...
 
I'm still curious if there will ever be a story explaining what made AMD pick the speculative limit it did, since the decisions about this go back so far in time. Could have been a coin toss...

Alex, I'll take cheaper and quicker for 600!
 
How is this going to affect Crackdown 3, which uses a separate VM instance to run the physics for each separate small group of buildings being destroyed, meaning you can have 12 instances running at once? Crackdown 3 most likely uses CPU-based physics calculations, since last I checked there weren't enough GPU-equipped Azure data centers around the globe to satisfy the game's needs across most regions where the XO has a significant presence, while still keeping enough in reserve to satisfy all their other clientele. Maybe I'm wrong; last I checked was 2016 and their GPU deployment wasn't widespread, and since then there has been the deep learning explosion necessitating GPUs.
 
How is this going to affect Crackdown 3
They'll redesign the game (assuming it's worth the bother)? It would depend on how many people are [still] playing this title; if it's relatively few, maybe not much needs to be done... (Insert pun about the comparative lack of success of the XBone here... :p)
 
I would just assume it would cost more CPUs per session, whatever the CPU impact is. As long as the game is not a runaway success, this doesn't seem to be an issue like the one Fortnite is having.
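To put a rough number on "more CPUs per session": whatever the per-session overhead turns out to be, a fixed pool of cores hosts proportionally fewer sessions. A minimal sketch with hypothetical figures (the 100-sessions-per-box capacity and the overhead percentages are made up for illustration):

```c
#include <stdio.h>

int main(void) {
    /* hypothetical pre-patch capacity, purely for illustration */
    double sessions_per_box = 100.0;
    double overheads[] = { 0.15, 0.40, 1.00 };   /* +15%, +40%, doubled CPU cost */
    for (int i = 0; i < 3; i++)
        printf("+%.0f%% CPU per session -> ~%.0f sessions per box\n",
               overheads[i] * 100.0, sessions_per_box / (1.0 + overheads[i]));
    return 0;
}
/* +15% -> ~87, +40% -> ~71, +100% -> 50 sessions per box */
```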
 
Thought this might be interesting from a historical view...

Finding a CPU Design Bug in the Xbox 360

https://randomascii.wordpress.com/2018/01/07/finding-a-cpu-design-bug-in-the-xbox-360/

Tommy McClain

The xdcbt instruction as described goes one step further than Spectre or Meltdown in the damage it did. The current exploits did not purposefully break the cache subsystem, which is effectively what happens if data is loaded into the L1 incoherently with no means of flagging or purging the data before something in the standard domain could accidentally misuse it.
Presumably there was some kind of policy of setting a barrier or invalidating the affected cache entries, which would be skipped if the branch predictor managed to mispredict execution into the middle of an xdcbt-using function and then either backed out prior to the cleanup code or, in a perverse twist, was not allowed to execute the cleanup instructions.
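A minimal sketch of that hazard pattern as I read it from the blog post; the function shape and the flag below are illustrative guesses, not the actual Xbox 360 XDK code:

```c
// Illustrative only. On the 360, xdcbt prefetched a cache line into L1
// incoherently (bypassing the L2 and normal coherency), so it was only safe
// for buffers handled under a special protocol.
#include <stddef.h>

void copy_block(char* dst, const char* src, size_t size, int use_xdcbt) {
    if (use_xdcbt) {
        // On the 360 this path would emit the xdcbt prefetch.
        // The danger: even when use_xdcbt is 0, the branch predictor can
        // speculatively run this path, and the incoherent prefetch is NOT
        // rolled back when the misprediction is resolved. A stale L1 line is
        // left shadowing memory that never opted into the protocol, and any
        // barrier/invalidation tied to the intended xdcbt path never runs.
    } else {
        // Ordinary coherent prefetch (dcbt) would go here.
    }
    for (size_t i = 0; i < size; i++)   // plain copy loop, just to make it compile
        dst[i] = src[i];
}
```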

Interesting to read the tidbit about Alpha, which in its twilight was among the most aggressively speculative processors and had among the weakest memory orderings of any widely-deployed CPU. A lot of assumptions taken from elsewhere were broken even when following the non-speculative flow of the code.
 
If CPUs didn't feature these performance enhancements, which introduce these vulnerabilities, how much slower would they be? 15-40%? ;)
Speculative execution (branch prediction) and caches are very important for performance. If you remove caches, memory accesses become 100x+ slower. If you remove branch prediction, the CPU stalls at every branch it encounters, waiting for the input data to be available in registers. Removing branch prediction would also carry a huge cost: a 10x+ penalty in some cases.
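A crude way to see the branch prediction part on a desktop CPU: run the same conditional sum over the same data in random order and in sorted order. The sorted pass is typically several times faster purely because the branch becomes predictable. A minimal sketch (timings vary by CPU, and an aggressively optimizing compiler may emit branchless code that hides the effect):

```c
// Same work, same data: only the predictability of the branch changes.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 22)

static long long sum_if_big(const int* a, int n) {
    long long sum = 0;
    for (int i = 0; i < n; i++)
        if (a[i] >= 128)          // hard to predict when the data is random
            sum += a[i];
    return sum;
}

static int cmp(const void* x, const void* y) {
    return *(const int*)x - *(const int*)y;
}

int main(void) {
    int* a = malloc(N * sizeof(int));
    for (int i = 0; i < N; i++) a[i] = rand() % 256;

    clock_t t0 = clock();
    long long s1 = sum_if_big(a, N);              // random order: ~50% mispredicts
    clock_t t1 = clock();

    qsort(a, N, sizeof(int), cmp);                // sort so the branch becomes predictable
    clock_t t2 = clock();
    long long s2 = sum_if_big(a, N);              // sorted order: almost no mispredicts
    clock_t t3 = clock();

    printf("unsorted: %.3fs  sorted: %.3fs  (sums: %lld, %lld)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t3 - t2) / CLOCKS_PER_SEC, s1, s2);
    free(a);
    return 0;
}
```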

These optimizations are fully transparent and do not affect program correctness. The CPU produces an end result that is bit-identical to what it would produce without these optimizations in place. Programming-model correctness is not defined to be timing dependent; timing-dependent execution is defined to be a bug (called a race condition). There are several problems:

A) Modern CPUs have invisible state that doesn't affect program correctness, but you can measure this invisible state by timing.

B) If you know the CPU architecture, you can manipulate the invisible state (associative caches, branch predictor history, etc.) in a way that causes more deterministic, measurable performance differences in future code (for example a kernel call).

C) Modern CPUs have lots of hardware shared between multiple threads (hyperthreading, caches, etc.). You can roughly measure what the other thread is doing by measuring your own execution throughput, and you can manipulate the other thread's execution timing by altering the shared state (for example, the associative L1 cache makes it possible to evict a chosen cache line of the other CPU thread).
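To make point A concrete, the basic measurement primitive behind these attacks is just timing a load to tell whether an address is currently cached. Below is a minimal, x86-only sketch using the rdtscp and clflush intrinsics; the exact cycle counts are machine dependent, and this deliberately leaves out everything a real Spectre/Meltdown proof of concept would need (predictor mistraining, the secret-dependent access, statistics over many runs).

```c
// Minimal cache-timing sketch (x86 only, GCC/Clang): a load that hits in the
// cache completes in far fewer cycles than one that has to go to memory.
#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>

static uint64_t time_load(volatile uint8_t* p) {
    unsigned aux;
    uint64_t t0 = __rdtscp(&aux);   // timestamp before the load
    (void)*p;                       // the load we are timing
    uint64_t t1 = __rdtscp(&aux);   // timestamp after the load
    return t1 - t0;
}

int main(void) {
    static uint8_t probe[4096];

    probe[0] = 1;                                   // touch it: the line is now cached
    printf("cached:   %llu cycles\n", (unsigned long long)time_load(&probe[0]));

    _mm_clflush((void*)&probe[0]);                  // evict the line from all cache levels
    printf("uncached: %llu cycles\n", (unsigned long long)time_load(&probe[0]));
    return 0;
}
```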

I don't believe you could prevent all information leaks between processes; simply too many hardware resources would have to be left unused in modern multicore CPUs with wide cores. Calls into the kernel can be protected by flushing branch predictor history, etc., but I am unsure how well this works against all potential timing attacks from a thread running concurrently on the same CPU core (hyperthreading). I am not a security expert. I would assume CPU engineers have thought about these timing info leaks when choosing which resources to share between the two threads running concurrently on the same CPU core.
 