CPU Security Flaws MELTDOWN and SPECTRE

AMD also gets sued for Spectre
Not just Intel: chip manufacturer AMD has now also been sued over the Spectre security leak. It is a so-called 'class action' lawsuit filed on behalf of the company's investors.
In the indictment, AMD, its financial director, and its chief executive are accused of having artificially inflated the company's share price by not disclosing earlier that AMD processors are vulnerable to the Spectre leak. AMD was informed at the beginning of June last year, sources claim
....
Investors of the company have, according to the charges, "made significant losses" as a result of the price drop after the publication of the leak.
http://www.guru3d.com/news-story/amd-also-sued-for-spectre.html
 
But for Intel a lawsuit makes some sense, right? Especially since its CEO sold his shares before the flaws were announced, devaluing the positions of the remaining shareholders
 
Regarding your mainframe comments: are you coming more from theory, or from working at a systems-engineer level on both that and, say, AS/400/Power? I'm asking to see whether this is a theoretical debate, although it does not really matter since the vulnerability exists anyway.
It's theoretical, extended to a broader attempt at interpreting what has been done for multiple architectures in general.
I try to ground that in what would be publicly available documentation or presentations, although that is less likely for the more insular products and for modern products in general.

From working with IBM, the focus in my experience was specifically virtualisation and the security around it, without the same level of flexibility and ease of use as, say, AS/400. Virtualisation is integral to part of the context around the focus on security leakage across CPU-software-memory; by the same theory you can build security around speculative execution, but at a cost to performance.
Power series is more similar to AS/400 than it is to the System Z architectures.
Virtualization implies some level of co-residency in shared hardware. That choice is a prerequisite for these exploits, and the increasing layers of checks in hardware and software are what motivated novel methods of inferring hidden information like timing attacks.
This is part of why I was taking a more theoretical angle, because of how impactful decisions made a decade or more ago in a wildly different security context can be.

There are elements of implementing a complex CISC architecture in models like Z6, and its shared elements with Power6, that might have created pressure to defer checks outside of the streamlined critical loop, which would open a window for speculative side effects. However, doing so would have avoided more easily predicted problems, like bugs in natively supporting all the corner cases outside of the front-end and put-away stages.
The scalability and RAS features also run counter to security, since they tend to create new state that could be timed.
Power6's runahead mode interests me in terms of Meltdown, and it leverages the checkpoint stage for part of it. Since Nvidia's Meltdown-susceptible Denver has a similar runahead method, it might matter for Power6 as well.

For Meltdown, it can come down to which stage or stages in the pipeline handle specific exceptions related to permissions. It's a long-standing and generally reasonable choice to defer a lot of that to the end of the pipeline.
Spectre and Meltdown have a dependence on when speculation occurs, and what needs to be covered by speculative rollback.
Essentially everyone past the advent of pipelines and caches reverts the IP, register state, flags, deferred stores, or any other state related to instruction semantics visible to the running thread.

I do not know of an architecture that has made the opposite choice of requiring speculation rollback to also restore the state of the cache hierarchy for non-modifying data movement, and given the constraints of the early processors that combined pipelines, prediction, and caches (1970s?), doing so would have negated the utility of having a cache.
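As a rough illustration of that asymmetry (my own sketch, not taken from any of the papers), a flush-and-reload style probe in C shows how a line pulled in by a squashed speculative path still changes what a later timed access observes; the buffer name and the cycle threshold are purely illustrative:

#include <stdint.h>
#include <x86intrin.h>   /* __rdtsc, _mm_clflush, _mm_mfence */

/* Sketch: even after the speculated instructions are squashed, a line they
 * pulled into the cache stays resident, so a timed reload reveals the touch.
 * probe_line and the threshold below are illustrative, not a real exploit. */
static uint8_t probe_line[4096];

static int line_was_touched(void)
{
    _mm_clflush((void *)probe_line);     /* evict the probe line */
    _mm_mfence();

    /* ... a mispredicted/speculative path may load probe_line[0] here ... */

    uint64_t t0 = __rdtsc();
    volatile uint8_t v = probe_line[0];  /* timed reload */
    (void)v;
    uint64_t dt = __rdtsc() - t0;

    return dt < 100;  /* threshold is machine-dependent; hits are far faster than misses */
}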

The paper and some of the security experts are coming at this with the understanding that this route of attack has been a known concept for years; it is not something new, which is why the paper frames the sentence as it does, along with what some of those researchers say in public;

This is the part I have questions about. I did not interpret that level of criticism from the Spectre paper, so I wanted clarification on which statements in the paper (or discussions outside of it) stated there was a purposeful reduction of security emphasis. I have seen some statements to that effect from some sources, though not usually from sources I'd assume had much visibility on the research or the vendors. For those involved in this research or in the design groups that allegedly chose to implement features they knew would compromise security, I would like to see their reasoning or recollection of events.

I would note that the paper is incomplete, given that it was finalized before we found out that Meltdown applies to ARM A75, Intel, Power, Cavium, and Denver. There are suggestions that there may be news for Qualcomm's custom core and Fujitsu's SPARC variants.

My interpretation is that there's been an increasing set of security features and countermeasures being added over time onto the architectures, sometimes in reaction to new exploits becoming known. The elements used for Spectre and Meltdown predate nearly everything, and are almost first principles of processor design. It seems plausible to me that resources were heavily invested into new security measures and counter-exploits for attacks as the designers knew them, and they were placed on top of or used elements of those first principles, usually with significant delay due to architectural cycles (not cores or families, but bedrock architectural choices for CPU lines). It then turned out that the things that were grandfathered in were exploitable in a way that they hadn't anticipated.

The Spectre paper stated that these types of attacks were a new class of exploits, and from a meta standpoint the dates of the papers they cite can give some insight as to what was known and when.
There's a succession of papers dealing with side channel attacks, including electrical, branch, and cache timing.
Putting the electrical one aside (although dedicated crypto engines and software mitigations exist in part because of that research), the earliest branch-prediction exploits target specific, known algorithms like SSL and AES.
Assuming those were supposed to be taken into account by a new processor design, nothing shipping before roughly 2009-2012 would have had a chance to respond. They all generally posited software mitigations, and so long as the hardware functioned as those papers assumed, that would have been considered sufficient.
Cache timing goes back to 2003 in relation to DES, with algorithmic/software mitigation posited. Those mitigations have been long-deployed.

The dates for when high-precision timers start showing up are more recent, and many of the old papers concerning key extraction weren't trying to escape any form of isolation.
The combination of timers, branch prediction exploits, cache exploits, and the decision to change focus from a target algorithm to virtual memory or CPU pipeline itself didn't show up until more recently.
A significant hardware change for Spectre (a halt on forwarding for Meltdown may be faster) would be years out. A more comprehensive architectural review would be something on the order of a new platform or notable paradigm shift like clean-sheet CPU architectures or a whole new mode.

Just finally they have been successful, or more worryingly it was successful years ago and has been exploited quietly by certain state organisations across the world.
So yes, one could say it was done purposefully, with a focus on performance over security.
It's possible that some nation-states or other large organizations had knowledge of this, although historically OS, software, IO, or explicit hardware errata would have been lower-hanging fruit. The big payoff and major use case impacted by these fixes--cloud services open to the internet running formerly internal services--is more recent.
I wouldn't rule out a lot of them not seeing this coming, though I suppose we'd need to ask some of those agencies if they isolated their kernel mappings to be sure.
Perhaps some of the leaked toolsets out there could be checked for LFENCE instructions or retpolines.
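For reference, a hedged sketch of what such a mitigation tends to look like at the source level, assuming an SSE2 compiler intrinsic; the function and array names are made up for illustration. (Retpolines, by contrast, are compiler-generated thunks around indirect branches, so they show up in a binary as a distinctive call/pause/lfence/ret pattern.)

#include <stddef.h>
#include <stdint.h>
#include <emmintrin.h>   /* _mm_lfence (SSE2) */

/* Sketch of the kind of construct one would grep a hardened binary for:
 * a serializing LFENCE placed after the bounds check so the dependent load
 * cannot be issued down the mispredicted path. Names are illustrative only. */
uint8_t read_element_hardened(const uint8_t *array, size_t len, size_t idx)
{
    if (idx < len) {
        _mm_lfence();    /* speculation barrier: the check resolves before the load */
        return array[idx];
    }
    return 0;
}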

These attacks require local code execution or other forms of access to the machine. Up until recently, that was considered already to be game-over with regards to a nation-state adversary. The legacy of the cloud is one where the world decided to do something that had been considered profoundly stupid from a security standpoint, and not for CPU performance reasons.
 
Yeah, and that's why the IBM Z series was meant to be more ideal than other solutions: it was more focused on security in the factors I mentioned. It's more of a niche product these days relative to Intel and Power; notice I mentioned there is a higher security requirement with regards to security leakage in that instance.
As an example, it is known that System Z has memory and (more recently) storage encryption to assist against security leakage, because the background of this design was primarily virtualisation, unlike others, and that brings a requirement and, importantly, scope for stronger CPU-memory-storage-OS/application security.

Just to also say, bear in mind I was talking specifically about Spectre rather than Meltdown, which has its own separate research paper and its own public comments from security experts and researchers; my context is with regards to all the research, and not just Google Project Zero, which some (such as AMD) may be looking at heavily at the expense of the work done by other teams.
Cloud computing can be secured within reason; it is something a team I was involved with was working on years before cloud solutions even became available and well known. This is also an area System Z was meant to be ideal for, but it is rather niche these days, and the point is somewhat academic as it has the vulnerability anyway.

Edit:
Just to say, this discussion from my POV is more around my surprise at System Z suffering the same vulnerabilities (although it may be the only IBM architecture immune to Meltdown), given its background/foundation and the engineers who worked on it; it is more of a niche solution and architecture relative to Intel/Power/etc.
 
Just to add, while System Z has been confirmed to have the Spectre vulnerability, IBM has not said so with regards to Meltdown for this architecture; it could be that its security design gives it immunity, or they are still working on a fix, which would be unusual considering how easy it is to come up with a patch for Meltdown (although limiting the performance degradation requires more development resources).

And just to add a quick example of another security expert coming forward to say the issue is exactly the focus on performance over security, just as I have been mentioning; it is not only the researchers concluding this:

Spectre is a problem in the fundamental way processors are designed, and the threat from Spectre is “going to live with us for decades,” said Kocher, the president and chief scientist at Cryptography Research, a division of Rambus.

“Whereas Meltdown is an urgent crisis, Spectre affects virtually all fast microprocessors,” Kocher said. An emphasis on speed while designing new chips has left them vulnerable to security issues, he said.

“We’ve really screwed up,” Kocher said. “There’s been this desire from the industry to be as fast as possible and secure at the same time. Spectre shows that you cannot have both.”
 
Hello everybody.



So I did some benchmarks, before and after the kb4056892 patch, primarily on my Core i7-860, but also some quick tests on the i5-8600k. My 2500k will have to wait for a while, because I am running some other projects at the same time and I need to finish up with the i7-860.



This effort is completely hobbyist and must not be compared with professional reviews. It’s just that I believe no reviewer will actually take old systems into consideration, so that’s where I stepped in. There are some shortcomings in this test anyway, some of which are deliberate.


The test is not perfect because I have used mixed drivers. Meaning that in some tests I have used the same driver, in others drivers a couple of months apart, but in Crysis I used a year-old driver for the pre-patch run (it made no difference anyway). From my personal experience, newer drivers rarely bring any performance improvement. After the first couple of drivers have come out, nothing of significance changes. Usually Nvidia’s Game Ready drivers are ready right at the game launch, and very few improvements come after that. Actually it’s primarily the game patches that affect a game’s performance, which, granted, may need some reconsideration on the driver side. Still, the games I have used did not have that problem (except one specific improvement that occurred in Dirt 4).



In this test, there are three kinds of measurements. First I did the classic SSD benchmark, before and after the patch. Then I have gaming/graphics benchmarks, which consist of either the built-in benchmark of the game or my custom gameplay benchmarks. Then I have World of Tanks Encore, which is a special category, because it’s an automated benchmark, but I used Fraps to gather framerate data while the benchmark was running, because it only produces a ranking number and I wanted fps data.



In my custom gameplay benchmarks, I reran some of the previous benchmarks from my database, with the same settings, same location, etc. Keep in mind that these runs are not 30-60 second runs, but several-minute-long ones. I collected data with Fraps or OCAT depending on the game.
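For anyone curious how such frametime logs turn into the "average / 1% low / 0.1% low" figures discussed below, here is a minimal sketch in C; it assumes a plain text file with one frametime in milliseconds per line, and uses the percentile convention, which is only one of the conventions benchmarking tools use:

#include <stdio.h>
#include <stdlib.h>

/* Sketch: read frametimes (ms) from stdin, report average FPS plus 1% and
 * 0.1% lows taken as the 99th / 99.9th percentile frametime converted to FPS. */
static int cmp_double(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

int main(void)
{
    static double ft[1000000];
    size_t n = 0;
    double sum = 0.0;

    while (n < sizeof ft / sizeof ft[0] && scanf("%lf", &ft[n]) == 1)
        sum += ft[n++];
    if (n == 0)
        return 1;

    qsort(ft, n, sizeof ft[0], cmp_double);  /* ascending frametimes */

    double avg_fps   = 1000.0 * (double)n / sum;
    double low_1pct  = 1000.0 / ft[(size_t)((double)n * 0.99)];
    double low_01pct = 1000.0 / ft[(size_t)((double)n * 0.999)];

    printf("avg %.1f fps, 1%% low %.1f fps, 0.1%% low %.1f fps\n",
           avg_fps, low_1pct, low_01pct);
    return 0;
}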



Please note that ALL post-patch benchmarks were done with the 390.65 driver for both the 1070+860 and 970+8600k configurations. This driver is not only the latest available, but also brings security updates regarding the recent vulnerabilities.



So let’s begin with the Core i7-860 SSD benchmarks. Please note that the SSD benchmarks on both systems were done at stock clocks. The 860’s SSD is an old but decent Corsair Force GT 120GB.

As you can see, there are subtle differences, not worthy of any serious worries imo. However, don’t forget that we are talking about an older SSD, which is also connected via SATA2, since this is an old motherboard as well.


Now let’s proceed to the automated graphics benchmarks, which were all done with the Core i7-860@4GHz and the GTX 1070@2GHz.

Assassins Creed Origins 1920x1080 Ultra

Ashes of the Singularity 1920x1080 High

Crysis classic benchmark 1920X1080 Very High

F1 2017 1920X1080 Ultra

Unigine Heaven 1920x1080 Extreme

Forza Motorsport 7 1920X1080 Ultra

Gears of War 4 1920X1080 Ultra

Rainbow Six Siege 1920X1080 Ultra

Shadow of War 1920X1080 Ultra

Unigine Valley Extreme HD

Gears of War Ultimate Edition 1920X1080 maxed

Total War Warhammer 2 1920x1080 Ultra

And the special category of World of Tanks Encore I told you about, since it essentially belongs in the automated category.

World of Tanks Encore 1920X1080 Ultra

Now as you can see, the differences are not big. They are quite insignificant, I would dare say, actually. They seem to be well within the margin of error. I actually decided to benchmark the i7-860 with the 1070 first, since it’s the weakest of my processors and any impact on CPU performance would be directly highlighted. Many of these runs are CPU-limited already.



The two games that showed a measurable and repeatable performance drop were both UWP games, Gears of War 4 and Forza Motorsport 7. Even so, the drop was not such as to make you jump off your chair! Actually in Gears 4, in spite of the general performance drop, we did get a 5% lows performance increase. Note that I used the same driver for both the pre and post patch runs.
 
My custom gameplay benchmarks follow suit. You can see the game titles and the settings in the screenshots. All at 1920X1080.

Here we can see that there are no big differences in the average framerate, which is the primary and most important result. There are some fluctuations, mostly in the 0.1% lows, but not in all games. Prey seemed to have a harder time than the rest of them. I did notice two momentary hiccups during the run and I was certain they would appear in the 0.1% lows.


The better 0.1% and 1% lows you see in Dirt 4 have, I have to admit, been affected by the newer driver. This is the exception to the rule, however, and not the other way around. There is a specific part at the beginning of the run which makes the framerate dip. It dips on the newer driver also, but a little less. The overall average framerate was not affected, as you can see, and the game felt like it was running exactly the same.



Now let’s move on to the quick Core i5-8600k test with the 970, which is essentially a preview, since I will do more testing later on with the 1070 installed, in order to better highlight any differences in performance.


Again, the SSD tests come first, with the system also at stock. Please note that all post-kb4056892 benchmarks on the 8600k were done with the 1.40 ASRock Z370 Extreme4 BIOS, which included microcode fixes for the CPU regarding the Spectre and Meltdown vulnerabilities. It is therefore more thoroughly patched compared to the i7-860.




The Ashampoo CPU check report verifies the system to be OK.

I have two SSDs: a Samsung 850 EVO 500GB and a SanDisk Extreme Pro 240GB. Needless to say, the screenshots with the lower performance are the ones of the patched system. I have the Windows version captured on those.

Unfortunately, there is a substantial and directly measurable performance drop on both SSDs, reaching a 1/3 performance loss on the smaller file sizes. On the bigger file sizes things are much better, of course. I made a mistake and used different ATTO versions for the Samsung and SanDisk drives, but the performance drop has been recorded correctly for both anyway.


As for pure CPU tests, I didn’t do much. Just CPU-Z and Cinebench.

Cinebench didn’t show a significant difference, but CPU-Z showed a drop in the multicore result. I then realized that I had used version 1.81 for the pre-patch test and version 1.82 for the post-patch test. I am not sure if this would affect things. Still, I trust Cinebench more, since it’s a much heavier test.


Ok then, gaming benchmarks time. i5-8600k@5Ghz, GTX 970@1.5Ghz.


The pattern is the same as above.


Assassins Creed Origins 1920X1080 Ultra

Gears of War 4 1920X1080 Ultra

World of Tanks Encore 1920X1080 Ultra

Grand Theft Auto V 1920X1080 Very High

And I left Ashes of the Singularity for the end, because I only have post patch measurements, but there’s a reason I am including those too.

Again, as you can see, we have a measurable drop on Gears of War 4. It’s probable that with the 1070 the difference will be higher. The point is not just that, however.



Let’s do a comparison of the above numbers. Take GTA V for example on the i7-860+1070. You will see that it has the same benchmark result of 75fps average as the 8600k with the much slower 970. However, the 8600k's 0.1% and 1% lows are quite a bit better. You can feel it while playing. This is a direct result of how CPU-limited this game is. For reference, the 1070 with the 8600k gave me 115fps average, but this is a discussion for another time.



After that, you can take a look at the Gears of War 4 post-patch results for both CPUs: 364fps for the 8600k, 199fps for the 860.



And of course we cannot defy the king of CPU limits, Ashes of the Singularity, which, with the Vulkan path being the fastest for both systems, gave us post-patch an average CPU framerate of 152fps for the 8600k and 74fps for the i7-860.



Why am I saying all that and why am I comparing first and eighth generation cpus? Because as you can see even from these few tests, the i5-8600k continues to perform as an 8th gen cpu. It did not suddenly turn into a Lynnfield or something. Also the Lynnfield stayed a Lynnfield and did not become a Yorkfield or whatever.



I generally observe a severe doom-and-gloom attitude, and the consensus that our systems are only fit for the trash seems to have taken hold in some users’ minds. This is not what I am seeing, however. I am always talking from a home-user perspective.



I am not trying to diminish the importance of the issue. It’s very serious, and it is sad that it has broken out the way it did. However, I do see some seriousness from all affected parties. I mean, ASRock brought out the BIOS in what, less than a week or something.



Now regarding the professional markets, I can understand that things will be much worse, especially with the very real IO performance degradation. Even some home users with fast SSDs will be rightfully annoyed. In these situations I believe some form of compensation should take place, or maybe some hefty discounts on future products. Heck, I know that I would be furious if I had seen a severe degradation in the gaming/graphics department, which is my main focus.


Of course testing will continue. I have a good 1070+8600k pre patch gaming benchmarks database already, which I will compare with some select post patch benchmarks. If I find anything weird I will repost.



For reference, here are my pre-patch benchmarking videos, from which the above pre-patch results came. I did no recordings for the post-patch runs.


Take care.


Assassin's Creed Origins 1920X1080 ultra GTX 1070 @2Ghz CORE i7-860 @4GHz
Tom Clancy's Rainbow Six Siege 1920X1080 Ultra GTX 1070 @2Ghz CORE i7-860 @4GHz
Forza Motorsport 7 1920X1080 Ultra 4xAA GTX 1070 @2Ghz CORE i7-860 @4GHz
Ashes of the Singularity 1920X1080 High DX11+DX12+Vulkan GTX 1070 @2Ghz CORE i7-860 @4GHz
Gears of War 4 1920X1080 Ultra GTX 1070 @2Ghz CORE i7-860 @4GHz
Gears of War Ultimate 1920X1080 maxed GTX 1070 @2Ghz CORE i7-860 @4GHz
Prey 1920X1080 very high GTX 1070 @2Ghz CORE i7-860 @4GHz
Total War Warhammer 2 1920X1080 Ultra GTX 1070 @2Ghz CORE i7-860 @4GHz
Unigine Valley 1920X1080 Extreme HD GTX 1070 @2Ghz CORE i7-860 @4GHz
Shadow of War 1920X1080 Ultra+V.High GTX 1070 @2Ghz CORE i7-860 @4GHz
World of Tanks Encore 1920X1080 Ultra GTX 1070 @2Ghz CORE i7-860 @4GHz
The Evil Within 2 1920X1080 Ultra GTX 1070 @2Ghz CORE i7-860 @4GHz
Road Redemption 1920X1080 fantastic GTX 1070 @2Ghz CORE i7-860 @4GHz
Dirt 4 1920X1080 4xAA Ultra GTX 1070 @2Ghz CORE i7-860 @4GHz
F1 2017 1920X1080 ultra + high GTX 1070 @2Ghz CORE i7-860 @4GHz
Dead Rising 4 1920X1080 V.High GTX 1070 @2Ghz CORE i7-860 @4GHz
ELEX 1920X1080 maxed GTX 1070 @2Ghz CORE i7-860 @4GHz
Project Cars 2 1920X1080 ultra GTX 1070 @2Ghz CORE i7-860 @4GHz
Grand Theft Auto V 1920X1080 V.High GTX 1070 @2Ghz CORE i7-860 @4GHz

i5-8600k + 1070

World of Tanks Encore 1920x1080 Ultra GTX 970 @1.5Ghz Core i5-8600k @5GHz
Grand Theft Auto V 1920x1080 V.High outdoors GTX 970 @1.5Ghz Core i5-8600k @5GHz
Gears of War 4 1920x1080 Ultra GTX 970 @1.5Ghz Core i5-8600k @5GHz
Assassin's Creed Origins 1920x1080 Ultra GTX 970 @1.5Ghz Core i5-8600k @5GHz
 
If some mod/admin can alleviate the 10000 characters and 20 images limitation and merge the above in one post, it would be great.

thanks
 
Yeah, and that's why the IBM Z series was meant to be more ideal than other solutions: it was more focused on security in the factors I mentioned. It's more of a niche product these days relative to Intel and Power; notice I mentioned there is a higher security requirement with regards to security leakage in that instance.
As an example, it is known that System Z has memory and (more recently) storage encryption to assist against security leakage, because the background of this design was primarily virtualisation, unlike others, and that brings a requirement and, importantly, scope for stronger CPU-memory-storage-OS/application security.
Memory and storage encryption are examples of heavy investment in security that is also unhelpful for either Meltdown or Spectre.
The CPU pipeline still needs the plaintext value to work, and that would leak the value all the same.
We know AMD has introduced SME/SEV for memory encryption, and Apple's per-device encryption heavily protects storage. That hasn't stopped Spectre for both, or Meltdown for Apple.

The problem is that at some point, the hardware has to know the true value, and so its side effects reflect that.
Doing something to break that relationship or block access would take measures such as more in-depth memory-space tagging or protection keys, some kind of varying hash function for mapping addresses to cache lines, more absolute physical isolation, timer fuzzing, architected separate spaces, or maybe someday a theoretical method involving some form of homomorphic encryption for certain operations or parts of the pipeline/hierarchy. To varying degrees, these could prevent side effects, make them happen somewhere inaccessible, or make them increasingly unreflective of the hidden values an attacker is seeking, with increasing levels of cost the more effective they get at hiding things.
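As one toy example of the timer-fuzzing idea (my own sketch, not a description of any shipped mitigation), a timestamp can be quantized and jittered so a probe can no longer resolve a single hit-versus-miss difference:

#include <stdint.h>
#include <stdlib.h>
#include <x86intrin.h>   /* __rdtsc */

/* Sketch: quantize the TSC and add bounded random jitter, reducing the
 * resolution available to a cache-timing probe. The granularity value is
 * arbitrary here (and must be > 0); real mitigations, such as browser timer
 * coarsening, tune it to the side channel being targeted. */
static uint64_t fuzzed_timestamp(uint64_t granularity)
{
    uint64_t t = __rdtsc();
    uint64_t jitter = (uint64_t)rand() % granularity;
    return (t / granularity) * granularity + jitter;
}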

Just to also say, bear in mind I was talking specifically about Spectre rather than Meltdown, which has its own separate research paper and its own public comments from security experts and researchers,
I was looking at the Spectre paper for comments about a willful choice to add vulnerability for the sake of performance. That's a distinct interpretation from adding performance features without realizing there would be a vulnerability open to exploit discovered later.

Just to say, this discussion from my POV is more around my surprise at System Z suffering the same vulnerabilities (although it may be the only IBM architecture immune to Meltdown), given its background/foundation and the engineers who worked on it; it is more of a niche solution and architecture relative to Intel/Power/etc.
It can also depend on the vulnerabilities in play. Bounds check escape is a particularly difficult thing to fully contain without additional context passed in to give the hardware any idea as to whether there's a problem predicting past the end of an array (or automatically prefetching lines due to a strided access pattern). Inappropriate access to data a process believes should not be seen despite being in the same space may require a hint or two.
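For concreteness, this is the shape of the bounds-check-escape pattern in question, loosely based on the Spectre paper's variant 1 example; the array names and the scaling factor are illustrative:

#include <stddef.h>
#include <stdint.h>

/* Sketch: with x out of range but the branch predicted taken, the two
 * dependent loads can execute speculatively and leave a cache footprint
 * indexed by the secret byte. All names here are illustrative. */
extern uint8_t  array1[16];
extern size_t   array1_size;
extern uint8_t  array2[256 * 512];
extern volatile uint8_t temp;

void victim_function(size_t x)
{
    if (x < array1_size)                   /* hardware may predict past this check */
        temp &= array2[array1[x] * 512];   /* secret-dependent line pulled into cache */
}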

There are also other features that might be somewhat unique, as some examples of System Z have hardware blocks for things like compression. Knowing what test values load faster in bandwidth-limited situations than others based on their content might be interesting, depending on implementation details.

Just to add, while System Z has been confirmed to have the Spectre vulnerability, IBM has not said so with regards to Meltdown for this architecture; it could be that its security design gives it immunity, or they are still working on a fix, which would be unusual considering how easy it is to come up with a patch for Meltdown (although limiting the performance degradation requires more development resources).
Meltdown is almost a coin toss, as we see from Intel vs AMD, various ARM cores, and maybe other architectures like SPARC (some could be?) and Itanium (no). Potentially, other architectures may have a base design decision that tends to make it more likely that a kernel access gets blocked or cannot be translated from a user program trying to exploit it.
There are alternate protection schemes besides bits in page table entries that might also serve as a barrier, though paging is virtually everywhere.
AMD didn't seem to have trouble figuring out Meltdown didn't apply, nor apparently ARM and others whose cores weren't susceptible. ARM even went further and disclosed a variant of Meltdown specific to its cores in relatively short time. Silence isn't proof, but it may be indicative of a situation more complicated than "No".

Unless the base architecture itself happens to define the amount of information that can be measured or transmitted based on side effects or speculation (which I do not think is a thing for any of the ones under discussion), the possibility for Spectre exists now or could arise in the future, since the architecture does not forbid it.
It would require explicitly measuring and mandating the observability of behaviors, which I think would stand out in an architectural discussion.
RISC-V's rather superfluous press release about how it's safe since no current chips speculate isn't a guarantee unless their working groups define a language and definition to make it so going forward.

The future case is something where AMD was being a little presumptuous about exempting itself with vendor ID checks. Past performance is no guarantee of future success, if the architecture remains silent on the correctness of such behavior. People could assume AMD wouldn't ever make a future core that was worse, but then again we all know what happens when we assume.
I think it would be funny if somehow Zen2's projected performance improvement in its roadmap suddenly dropped by a few percent around January 2018.

And just to add a quick example of another security expert coming forward to say the issue is exactly the focus on performance over security, just as I have been mentioning; it is not only the researchers concluding this:
His quote is that the industry thought it could have both high performance and security, and it thought wrong. That's not the same criticism as saying the industry saw something as blatant as the potential for Meltdown or Spectre and willfully added hardware features that enabled them. For Spectre, that's effectively impossible since the elements that make it happen predate most of the industry.
 
Also tl;dr about the results would be nice

You are correct.

For the i5-8600k at stock, bios+windows patch, 1/3 performance loss after the patch, on mainstream SATA3 SSDs with SATA3 connections, for 16KB file sizes and below. No real problems above that.

For the i7-860 at stock, only with windows patch, SSD with SATA2 connection showed minimal performance loss.

For gaming at max clocks, the performance loss was also minimal. UWP games showed a tendency for more performance loss, but still very little.

Both systems can still run Crysis! xD
 
The PoC I've compiled requires at least a CPU supporting SSE2, specifically for a couple of 'odd' instructions that are not part of the original 386 instruction set (clflush and mfence). It also uses rdtsc for timing, which has been supported since the Pentium; that can be avoided with a timer loop instead, but I don't know how effective that would be in practice.
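For what it's worth, the "timer loop" alternative usually means something like a counting thread; a rough sketch of that approach (my assumption, not part of the PoC above):

#include <pthread.h>
#include <stdint.h>

/* Sketch: a counting thread spins on a shared counter, and the probing code
 * samples it before and after the access being timed. Whether its resolution
 * is good enough to separate a cache hit from a miss depends on the CPU. */
static volatile uint64_t soft_clock;

static void *counter_thread(void *arg)
{
    (void)arg;
    for (;;)
        soft_clock++;      /* approximate ticks; atomicity is not needed here */
    return NULL;
}

/* Usage sketch:
 *   pthread_t tid;
 *   pthread_create(&tid, NULL, counter_thread, NULL);
 *   uint64_t t0 = soft_clock;  ... access ...  uint64_t dt = soft_clock - t0;
 */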

I'll let you know if I can come up with something that can work on really old PCs.

We did a small test that worked really well on some older PCs by reducing the amount of data you pull out per attempt to 4 bits, and having the speculating code touch 4 lines, all at the same offset but with different bases. Those locations all had 0 written to them, and then the measuring side would do *(*(*(*(base1+offset)+base2+offset)+base3+offset)+base4+offset). Basically, this multiplies the time taken by 4, amplifying the difference between the slow and the fast access.

Only having to check 16 locations, with each check costing four loads to extract half a byte per attempt, means that overall this is twice as fast as using whole bytes. With 4 lines checked in series, the timing signal is big enough to be detectable even with crappier timing sources.
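A sketch of how that chained probe could look in C, under my own naming and sizing assumptions (the real test's layout may differ):

#include <stddef.h>
#include <stdint.h>
#include <x86intrin.h>   /* __rdtsc, _mm_mfence */

/* Sketch: four probe arrays are read in a dependent chain at the same line
 * offset, so the hit-vs-miss difference is paid four times per check. The
 * speculating side is assumed to have touched slot[guess] in all four arrays,
 * and the arrays hold zeros so the chain resolves to the same offset each step. */
#define LINE 64
static uint8_t base1[16 * LINE], base2[16 * LINE], base3[16 * LINE], base4[16 * LINE];

static uint64_t probe_chain(size_t offset)      /* offset = guess * LINE */
{
    _mm_mfence();
    uint64_t t0 = __rdtsc();
    volatile uint8_t v =
        base4[base3[base2[base1[offset] + offset] + offset] + offset];
    (void)v;
    _mm_mfence();
    return __rdtsc() - t0;                      /* roughly 4x the single-line signal */
}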
 
Just to add, while System Z has been confirmed to have the Spectre vulnerability, IBM has not said so with regards to Meltdown for this architecture; it could be that its security design gives it immunity, or they are still working on a fix, which would be unusual considering how easy it is to come up with a patch for Meltdown (although limiting the performance degradation requires more development resources).

And just to add a quick example of another security expert coming forward to say the issue is exactly the focus on performance over security, just as I have been mentioning; it is not only the researchers concluding this:

So...basically, it's as vulnerable as or more vulnerable than AMD's Ryzen? Except at least with Ryzen, AMD has stated it isn't susceptible to Meltdown. That doesn't seem to make it markedly better than the coin toss which other CPU manufacturers have had WRT Meltdown and Spectre.

Regards,
SB
 
Memory and storage encryption are examples of heavy investment in security that is also unhelpful for either Meltdown or Spectre.
The CPU pipeline still needs the plaintext value to work, and that would leak the value all the same.
We know AMD has introduced SME/SEV for memory encryption, and Apple's per-device encryption heavily protects storage. That hasn't stopped Spectre for both, or Meltdown for Apple.
My point was the background and the focus/scope of System Z relative to AS/400 and Power.
Like I said, I worked with those who implemented System Z over decades, going back to System/390. From an IBM perspective it is fair to say the architecture was built from the ground up to be a virtualised system with extensive security requirements (also one reason IBM considers System Z ideal for cloud). I just gave those two as examples of features missing from Power, especially when implementing such encryption on massively scaled virtualised systems.
Unfortunately you will not get your answers without access to IBM's internal engineering and architecture documentation, but so far System Z is the only IBM architecture omitted from their Meltdown vulnerability notices.
 
So...basically, it's as vulnerable as or more vulnerable than AMD's Ryzen? Except at least with Ryzen, AMD has stated it isn't susceptible to Meltdown. That doesn't seem to make it markedly better than the coin toss which other CPU manufacturers have had WRT Meltdown and Spectre.

Regards,
SB
So far the only IBM system not mentioned to have the Meltdown vulnerability is System Z; make of it what you will *shrug*.
But I would be tentative in comparing statements from AMD to those of IBM, who are more conservative and careful (especially around the mainframe division).
Unfortunately you also need to be a mainframe System Z client to see its current vulnerability/support list.
 