Technical Comparison: Sony PS4 and Microsoft Xbox One

Ok, so then how would that fit in with their comment about a 'minor boost if used for rendering'? Any guesses?
I guess any "'extra' ALU" can't hurt. It's just ... curious that it doesn't show up in the FLOPS figures.

But I think we should take ERP's word on this and stop flogging this invisible horse.
 
Because bandwidth is never 100% efficient, and the PS4 will never use the full 176GB/s for reads alone or for writes alone; it will be used for simultaneous reads and writes.

Here is a real-world example of Durango memory (the vgleaks memory system diagram):

That's true, but the limitation you are imposing on the GDDR5 in the PS4 would also apply to the DDR3 inside the X1. Again, though, this is beyond the scope of my comparison.

Maximum potential of the PS4's GDDR5 read bandwidth is 176GB/s.
Maximum potential of the X1's DDR3 read bandwidth is 68GB/s.
Maximum potential of the X1's eSRAM read bandwidth is 50GB/s (according to AnandTech).

Forgo looking at the GDDR5 and DDR3 write capability for my comparison. I am not unfairly imposing a limitation on the X1 while exempting the PS4. I am strictly speaking of each system's theoretical peak read bandwidth. Nothing more.
 
Read and write.


So is the eSRAM.

I apologize for my confusion. I reread the AnandTech article, then its source (vgleaks), and I had misinterpreted their description of the eSRAM. I incorrectly thought that the eSRAM was limited to 50GB/s in either direction (for a 102GB/s total), but it can, in fact, dedicate all of its bandwidth to reads or to writes as well. This makes the X1's peak read bandwidth 170GB/s, as most of you have already known.

Sorry again for my confusion :oops:
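For reference, here is a minimal sketch of where those peak figures come from, assuming the commonly reported configurations (256-bit GDDR5 at 5.5GT/s, 256-bit DDR3-2133, and 128 bytes/cycle of eSRAM at 800MHz). These are theoretical peaks only:

```python
# Peak bandwidth = (bus width in bytes) x (transfers per second).
# Assumes the commonly reported bus widths and data rates; theoretical
# peaks, not sustained real-world throughput.

def peak_gb_s(bus_bits, mega_transfers_per_s):
    """Peak bandwidth in GB/s from bus width (bits) and transfer rate (MT/s)."""
    return bus_bits / 8 * mega_transfers_per_s / 1000

ps4_gddr5 = peak_gb_s(256, 5500)   # 256-bit GDDR5 @ 5.5 GT/s -> 176.0 GB/s
x1_ddr3   = peak_gb_s(256, 2133)   # 256-bit DDR3-2133        -> ~68.3 GB/s
x1_esram  = 128 * 800e6 / 1e9      # 128 bytes/cycle @ 800 MHz -> 102.4 GB/s

print(f"PS4 GDDR5 read/write peak: {ps4_gddr5:.1f} GB/s")
print(f"X1 DDR3 peak:              {x1_ddr3:.1f} GB/s")
print(f"X1 eSRAM peak:             {x1_esram:.1f} GB/s")
print(f"X1 combined read peak:     {x1_ddr3 + x1_esram:.1f} GB/s")  # ~170.7 GB/s
```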
 
They did leave it at that. That 200 GB/s was said in passing at the tech panel. It's funny how people are now taking it and jumping all over the specs we have on the console.

Why is it 'funny' to *not* assume that simply because MS said something you didn't expect, it somehow *must* be a lie or an effort to mislead on their part? Is it funny to take them at their word? Should we laugh at Sony for saying they have 176GB/s? No. Btw, I just re-watched the panel and it's not mentioned in nearly as casual a manner as you seem to suggest here. Nick Baker, the guy who says it, does so after a long monologue of super in-depth tech stuff where he goes into much detail about *almost* everything. The one area he leaves off? Clocks. He only says it is doing 'billions' of cycles per second. That's the only part of his entire spiel that is even remotely casual or qualitative, and he only does that because Major Nelson brings it up while Baker is mid-sentence. In nearly the same breath the guy gives us the 768 ops/cycle figure, so should we dismiss that as a lie too?! Come on...we don't NEED to make the kinds of assumptions you guys are pushing here.

This notion that Baker (a "Distinguished Engineer") said the 'over 200GB/s' bit in some sort of vague terms is totally without support. It's on YouTube at around the 25-27 minute mark for anyone interested in listening to it.

Occam's razor.... :cool:
That's...not at all what Occam's Razor suggests is the more intelligent line of thought. :???:




There has been a rationale for why the figure seems disingenuous, mostly from a lack of headroom to reach that figure. Either they upped the GPU clock, which is highly unlikely...

Stop. Why is this "highly unlikely"? I see people eager to just toss the idea out as if there is no rationale in support of it, but in my post I laid out a variety of reasons that support that possibility. Would you mind refuting my reasons first before just dismissing the thought out of hand? For the record, I'm asking more for refutations of my corroborating evidence that seems to point towards 200GB/s+ being legitimate.

The most reasonable way of reaching the 200GB/s is by adding in the coherent bus or by some other creative math.
Why is this presumed to be the 'most reasonable' way to get there? See my math below...I disagree with your contention here. Occam's Razor suggests we should avoid the kinds of assumptions being peddled here, not embrace them and base our conclusions entirely on them! A clock speed increase to the level of AMD's other similarly spec'd GPUs does the trick nicely. No assumptions necessary.

I'm not saying I'm certain that is the case, but there seems to be little to no evidence of the X1 being able to reach bandwidth that high without a bit of number sliding from unrelated parts.
What's the argument that forces you to conclude something like that? Lay it out for me if ya don't mind. I ask because having, say, a 1.066GHz GPU means 204.8GB/s bandwidth. This isn't fuzzy math or number sliding. It's simple algebra. Here:

(1066.67/800)*102.4 = 136.5GB/s
136.5 + 68.3 = 204.8GB/s total

Note: The 7770 and the 7790, the two cards that seem to be the parents of the X1 GPU design, are sold to consumers in 1GHz and 1.075GHz versions.
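For what it's worth, here is that same arithmetic as a small sketch. It assumes eSRAM bandwidth scales linearly with the GPU clock from the leaked 800MHz/102.4GB/s figure, which is my assumption, not a confirmed spec:

```python
# Hypothetical clock bump: scale the leaked 800 MHz / 102.4 GB/s eSRAM figure
# to a higher GPU clock and add the DDR3 peak. Linear scaling of eSRAM
# bandwidth with GPU clock is assumed here, not confirmed anywhere.

BASE_CLOCK_MHZ = 800.0
BASE_ESRAM_GB_S = 102.4
DDR3_GB_S = 68.3

def read_peaks(gpu_clock_mhz):
    """Return (eSRAM peak, eSRAM + DDR3 peak) in GB/s at a given GPU clock."""
    esram = BASE_ESRAM_GB_S * gpu_clock_mhz / BASE_CLOCK_MHZ
    return esram, esram + DDR3_GB_S

for clock in (800, 1000, 1066.67, 1075):
    esram, total = read_peaks(clock)
    print(f"{clock:8.2f} MHz: eSRAM {esram:6.1f} GB/s, eSRAM + DDR3 {total:6.1f} GB/s")
```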

Again I ask...why is this assumed to be out of the question? Don't just assert that it is, give me a set of reasons please.




Because I can't stand another 14+4 debate: all 18 of the CUs are identical.
I can guess where VG Leaks got the 14+4 from; I think they were just taking a slide out of context.

Ok, I can trust this. Thanks for the input!




The math isn't really that creative. The coherent bus is an alternate memory pathway that is in full use by the GPU. Add in Onion and Garlic if you wish. More than one dev has pointed to 200GB/s on this board.

Who else besides sebbbi? Got a link?
 
So the PS4 will never reach a peak of 176GB/s?

Yet the DDR3 in that very picture is running at its peak of 68GB/s?

Wth.

In any one direction for graphics-related tasks? Probably not. Other tasks will consume that bandwidth too.

Vgleaks only provided this real-world example, the memory 'spider' diagram.
 
It's not running at 1GHz unless it's gimped, same with the CPU. Those are clock rates never reached in an APU before, and it's above what appears to be the clock ceiling for the GPU (1GHz @ 28nm). This is wishful thinking with nothing more than 'I want it to be so' as evidence.

The most reasonable explanation is that the coherent bus is being counted as well.

102.4GB/s + 68.3GB/s + 30GB/s > 200GB/s.

This is the same creative maths they did with the Xbox 360 in regards to bandwidth and computing power, so it's not surprising they did the same here.

You cannot compare these boards to discrete cards, only to APUs.

There's just as much 'evidence' for a downclock as there is for your scenario: the rumours of heat, maybe having to backport the design to 40nm, etc.
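To make that accounting concrete, a minimal sketch using the leaked per-link figures (whether these links can legitimately be summed is exactly the point in dispute):

```python
# Alternative accounting suggested above: simply add the leaked per-link
# peaks together. Whether it is meaningful to sum a 32MB scratchpad link,
# main RAM, and the coherent bus is what is being debated.

ESRAM_GB_S    = 102.4  # eSRAM link at 800 MHz (leaked figure)
DDR3_GB_S     = 68.3   # main RAM
COHERENT_GB_S = 30.0   # coherent bus figure from the leaks

total = ESRAM_GB_S + DDR3_GB_S + COHERENT_GB_S
print(f"Sum of link peaks: {total:.1f} GB/s")  # ~200.7 GB/s, i.e. "over 200 GB/s"
```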
 
You have to assume much less by adding a given number from the leaks to achieve 200GB/s. You assume much more by saying they upped the GPU clock, added additional cooling, have decreased yields, and are eating the increased cost.
 
Why is it 'funny' to *not* assume that simply because MS said something you didn't expect, it somehow *must* be a lie or an effort to mislead on their part? Is it funny to take them at their word?

If you were paying close attention to what both Sony and Microsoft were saying during the PS360 release, then it is much easier to trust the leaks/devs than the talking heads.
 
Wow.

Given past experience with comments from console manufacturers and their various people, PR or not, there is good reason to believe that the 200 GB/s is largely fluff talk. It is not on blackjedi or anybody else to prove that it is fluff talk.

And there are plenty of reasons not to up the clock; they've previously been pointed out on this forum. I don't see how it's reasonable to just up the clocks from 800 MHz to 1066 MHz. That's a major jump and would require additional power and lower yields come manufacturing time. If MS did that then bravo for them, but I don't see anything pointing in that direction. There's a wealth of information at our hands regarding the architecture of the XBone, and that information isn't telling us MS has upped the clocks. Another reason not to change clocks is that it's getting close to launch, and last-minute changes like that could potentially push manufacturing back by a month or five.

If MS discovers their yields are good enough to pursue the increased clock speed then good for them, but they have a behemoth of an APU with all that eSRAM.
 
This has been touched on before, but is the PS4 GPU in a similar 'spot' to where the RSX was vis-à-vis the PC GPUs of its time (2006) in terms of price/performance/TDP?

At the time, what were the most powerful cards out there? How much did they cost?

Many are lamenting that both Sony and MS lowballed their machines performance-wise. How true is this, reasonably speaking?

Bandwidth- and RAM-wise, I think the machine is certainly fairly up to date.

The Xbone seems to significantly lag behind the PS4 in the GPU department; is this weaker than usual relative to the performance trend?
 
It's not running at 1GHz unless it's gimped, same with the CPU. Those are clock rates never reached in an APU before...

And why is this a strong counterpoint? You're basically saying that since it's never been done before, it can't ever happen, which is surely silly. What's the GPU clock in Kaveri? Do we know yet?

This is wishful thinking with nothing more than 'I want it to be so' as evidence.

Had you bothered to actually read my posts before replying, you'd see there are reasons laid out for you. Stop asserting things and start reading what I've taken the liberty of typing up, please. :rolleyes:

The most reasonable explanation is that the coherent bus is being counted as well.

'Most reasonable' in what sense? Your logic here begins with an assumption...:???:

102.4GB/s + 68.3GB/s + 30GB/s > 200GB/s.

This is the same creative maths they did with the Xbox 360 in regards to bandwidth and computing power, so it's not surprising they did the same here.

...the same math done by MS's marketing team almost a decade ago. That is what you are using as justification for dismissing what Nick Baker said? I'm not sure it's reasonable to hold Nick Baker accountable for what Major Nelson (a marketing guy) claimed in 2005.

You cannot compare these boards to discrete cards, only to APUs.

I agree it's a different setup, but why conclude that therefore it's out of reach?

There's just as much 'evidence' for a downclock as there is for your scenario: the rumours of heat, maybe having to backport the design to 40nm, etc.

I'd argue that murmurs of machines running hot and poor yields would support my hypothesis of increased clocks rather than lowered clocks. I can see the opposite conclusion too, namely that they lowered clocks in response to heating issues...but that would beg the question of where those heating issues came from in the first place. Hence I find my hypothesis more likely.




If you were paying close attention to what both Sony and Microsoft were saying during the PS360 release, then it is much easier to trust the leaks/devs than the talking heads.

Ah, so because someone working for MS's PR department claimed something almost a decade ago, we should therefore assume that during a hardware and architecture panel their 'Distinguished Engineer' Nick Baker infused his long-winded, highly detailed technical discussion with some loose-lipped, casual, vague talk of bandwidth? Did you actually watch that part? It's anything but a throwaway line.

You have to assume much less by adding a given number from the leaks to achieve 200GB/s. You assume much more by saying they upped the GPU clock, added additional cooling, have decreased yields, and are eating the increased cost.

I'm not assuming *anything*. I'm simply not dismissing what Nick Baker said during the panel and I have a host of supportive reasons that point towards the possibility of clock increases. Do you not understand what constitutes the difference between an assumption and a speculative conclusion?

And btw, I don't disagree about that sort of thing bringing with it lower yields and machines running hotter, but additional cooling is a stretch. Not sure how much engineering experience you have, but I have plenty, and I can promise you that anytime engineers are working on a competitive project like this they leave wiggle room in the areas where they feel they can adjust later on if need be. There is already built-in wiggle room for these machines thermodynamically; they will already have some give in that area. So I don't agree that just because clocks go up in my described scenario, we automatically have to totally retool the cooling system or even just add more fans, etc. That's far from a given.

That said, we have heard murmurs about the machines running hot and about yield issues. We also hear that they axed the subsidized SKU model for launch. Perhaps they decided bad yields meant low launch shipments, which encouraged them to just axe that SKU since they will sell out regardless? It's possible, no? Add to that the other stuff I noted too...
 
Wow.

Given past experience with comments from console manufacturers and their various people, PR or not, there is good reason to believe that the 200 GB/s is largely fluff talk. It is not on blackjedi or anybody else to prove that it is fluff talk.

If someone is claiming it's 'fluff', the burden is on them to establish that with logical arguments...and no, using your conclusion as your premise is NOT a rational line of argumentation. Nor is it rational to hold Nick Baker accountable for something Major Nelson, a PR guy, claimed about the 360 a full 8 yrs ago. 'One time, a PR guy said this thing like 8 yrs ago...' By that same logic we'd have to dismiss the 768 ops/cycle figure (and thus the 12 CUs with it), since he said that in the same breath!

If this really is the best 'logic' you guys have then it's rather weak. Even if you don't think my argument is particularly strong, you surely agree that your conclusions are incredibly fragile, no?

And there are plenty of reasons not to up the clock; they've previously been pointed out on this forum.

I'm less interested in trying to discern motivations than I am in openly discussing the possibility. I'd imagine we can all agree that the only reason for them to want to do what I am suggesting is presumably to erase Sony's perceived power advantage, and along with it some marketing disadvantages.

That's a major jump and would require additional power and lower yields come manufacturing time.

No disagreement there. Of course, we have heard murmurs all of a sudden about heating issues and yield concerns. And rumor has it they delayed the subsidized SKU, which is precisely the sort of thing you'd do in response to your team committing to accept lower yields.

There's a wealth of information at our hands regarding the architecture of the XBone, and that information isn't telling us MS has upped the clocks.

...it's not telling us *anything* about the clocks, just to be clear about it. ;)
 
The vast majority of points you have brought up have been discussed on these forums, and have quite directly been analyzed and concluded to be not viable.

Just to bring up some points that I remember: the 200 GB/s internal bandwidth calculation actually first came from Bilikan (sorry to quote you again) as a suggestion for how the 200GB/s was obtained, right after the panel.
We know the DDR3 is a constant and the eSRAM is more or less a constant. Probably the only way to change that would be to upclock; however, this theory also gets blown out of the water.

Shortly after, we also had real-world examples of how upclocking the APU to 2GHz/1GHz was unviable due to the massive increase in power usage, or something like that. Therefore many here have concluded that the 200GB/s has pretty much been arrived at by creative math.

The way they use a vague "internal bandwidth" instead of a more precise statement like "bandwidth to both memory pools adds up to 200GB/s" also points to them using creative math while technically stating a fact; otherwise they would be outright lying.
 
IIRC Xenos in the 360 could do some tricks that weren't available to PC users until the release of the Radeon 9800 XT, but the RSX was a bit weaker than the then-available Nvidia GeForce 7800 (bit unsure of the exact Nvidia card; how I hated this era of GPU gibberish names). Xenos definitely outperformed RSX in raw performance, and in particular transparencies, but there were some minor areas where RSX could hold its own, such as shadow filtering. As for cards now, we're talking 7770/7790 for the XBOne and 7850/7870 for the PS4.

The two CPUs seemed much of a muchness unless the dev put a lot of work into SPU optimisation. The recent DF article on Metro:LL performance and the subsequent interview highlight the potential in Cell if your code is very highly threaded (suggesting that 8 cores could work quite well this time). I lack the expertise to judge PPC versus the x86 of 8 years ago, but they seemed impressive at the time. Moaning that 8 low-power cores aren't fast in terms of IPC compared to Ivy Bridge or Steamroller is really missing the point, I believe.
 
Hey beta, we kinda had this discussion 24 hours ago. I now understand why we were at a crossroads of understanding on the math.

FWIW, you were looking at bandwidth in the PS4 way, as "copy from storage to RAM at 176GB/s so the GPU and CPU can do work."

You were under the impression that in order for the eSRAM in the XB1 to get filled and do work, it had to utilize that same pathway, thereby reducing overall bandwidth. That would be the case in a considerable number of scenarios, I'm sure.

However, the eSRAM, being a RAM unit itself, can also be populated (just like main RAM) along a wholly different pathway that includes the 30GB/s link from storage to the graphics memory controller plus the 102GB/s from the graphics controller to the eSRAM. That path is 132GB/s additive at peak.
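As a sketch of that model, using the leaked per-link figures (this simply restates the description above; treating the links as concurrently saturated, and as addable at all, is the contested part):

```python
# Pathway model described above: count the 30 GB/s coherent link plus the
# 102.4 GB/s eSRAM link as one "additive at peak" fill path, while main RAM
# is read over its own 68.3 GB/s path. The reply that follows questions
# whether these links can actually be added like this.

COHERENT_GB_S   = 30.0
ESRAM_LINK_GB_S = 102.4
DDR3_GB_S       = 68.3

esram_fill_path = COHERENT_GB_S + ESRAM_LINK_GB_S
print(f"eSRAM fill path ('additive at peak'): {esram_fill_path:.1f} GB/s")  # ~132.4
print(f"Main RAM path:                        {DDR3_GB_S:.1f} GB/s")
```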
 
Hrm. But were they placed more or less in the same place as the console GPUs are now, relative to their PC brethren?

The GPU in the PS4 is "mid-range" (right?), excluding banana-tier enthusiast cards like the 7990 or Titan. Were RSX and Xenos in a similar position, or were they closer to the "GTX 680s of their day"?
 
Yeah, 132GB/s read at peak, but I really doubt that you're going to be finding 1GB of data per frame in 5MB of CPU caches that's relevant to the GPU :) Therefore it's still not fair to add it up like that; it's not the same.

The problem is that people are treating the small scratchpad (32MB of eSRAM, 102GB/s) and the small cache link (5MB of cache, 30GB/s) as if they had access to the entire RAM. There's a good chance you're not going to get anywhere near peak for either of these links, and it's going to be harder to utilise than the PS4's single GDDR5 pool.

You cannot add up all the scratchpad and cache links together and say it's the same bandwidth as the PS4, which has access to all of its RAM at that speed.
 
Yeah, 132GB/s read at peak, but I really doubt that you're going to be finding 1GB of data per frame in 5MB of CPU caches that's relevant to the GPU :) Therefore it's still not fair to add it up like that; it's not the same.

I don't think it's a function of the caches on the CPU, much like it wouldn't be a function of the caches on the CPU to fill the 8GB of data placed in RAM. The GPU can effectively fetch from the 8GB and the 32MB simultaneously. The 8GB would utilize the 68GB/s pathway, while the eSRAM would get populated by some combination of the direct northbridge pathway (the 132GB/s path described above) and/or main RAM.
 