Xbox One (Durango) Technical hardware investigation

Wait, are you saying that the inclusion of the esram is useless?
No, it's not a polar argument. The SRAM (or whatever flavour of RAM it is) improves GPU utilisation on an otherwise relatively slow bus, and may be very efficient as a memory pool. pjbliverpool's argument (and one I agree with) is that 32 MB eSRAM will not double or triple the output of your GPU logic.

While I won't personally begin to speculate on the use of esram, it seems like it will be a win in terms of reducing latency. Anyway, all I will say is, for the past 3 to 4 generations we have always had a console or two with a form of embedded RAM; the 360 GPU was designed in a way that a significant silicon budget was spent on the paltry 10 MB of eDRAM.
Right, and that console hasn't had a hands-down graphical advantage. It gains in some areas and loses out in others. eSRAM in Durango is part of the whole system design to deal with data flow and power draw and component costs to hit a (competitive) performance target in a competitive budget. It's not added to magically multiply the output of the graphics chip. ;)
 
Never said it will double or triple it. That is not my point, and I am pretty sure I did not say that in my post. My point is that the fact that eSRAM or eDRAM has not appeared on PC does not mean it is not useful, nor does it mean that efficiency and throughput are not improved by its use. If we go by what is in the vgleaks article, the eSRAM seems to have been included to maintain peak throughput and because of its low latency. This is the specific quote: "The difference in throughput between ESRAM and main RAM is moderate: 102.4 GB/sec versus 68 GB/sec. The advantages of ESRAM are lower latency and lack of contention from other memory clients—for instance the CPU, I/O, and display output. Low latency is particularly important for sustaining peak performance of the color blocks (CBs) and depth blocks (DBs)". This seems to imply that low latency is directly related to sustaining peak performance. And that doesn't even take into account the fact that, unlike the 360 implementation, the Durango one supports read/write/modify. Another quote from the same article: "Durango has no video memory (VRAM) in the traditional sense, but the GPU does contain 32 MB of fast embedded SRAM (ESRAM). ESRAM on Durango is free from many of the restrictions that affect EDRAM on Xbox 360. Durango supports the following scenarios:

Texturing from ESRAM
Rendering to surfaces in main RAM
Read back from render targets without performing a resolve (in certain cases)"

I seem to recall sebbbi and others stating that had the 360 eDRAM implementation been similar to the PS2 one, which is similar to the Durango setup, it would have enabled some post-processing really cheaply. And as has been stated by others, its exclusion from PC architecture does not mean it's useless, but rather that it might need engine and API fine-tuning and exposure to take advantage of it. You are definitely not going to see that in the PC space until MS introduces an API that exposes it to developers.
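
As a quick back-of-envelope illustration of the figures quoted above (the 102.4 GB/s, 68 GB/s and 32 MB numbers are taken straight from the vgleaks quote and are unconfirmed), here is what one pass over a full 1080p render target would cost through each pool, ignoring latency and contention entirely:

```python
# Back-of-envelope using the (unconfirmed) vgleaks figures quoted above.
ESRAM_BW = 102.4e9            # bytes/sec, leaked ESRAM peak
MAIN_BW = 68.0e9              # bytes/sec, leaked DDR3 peak
ESRAM_SIZE = 32 * 1024 * 1024 # 32 MB

# One 1920x1080 surface at 4 bytes/pixel (e.g. an RGBA8 colour buffer).
target_bytes = 1920 * 1080 * 4

for name, bw in [("ESRAM", ESRAM_BW), ("main RAM", MAIN_BW)]:
    ms = target_bytes / bw * 1000
    print(f"One full pass over a 1080p RGBA8 target via {name}: {ms:.3f} ms")

# How many such surfaces fit in the 32 MB pool.
print(f"The 32 MB pool holds ~{ESRAM_SIZE / target_bytes:.1f} surfaces of that size")
```

Raw throughput clearly isn't the interesting difference between the two pools; the leak's own emphasis is on latency and the lack of contention.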
 
7770 > 680gtx.
1,500M transistors, 80 watts > 3,540M transistors, 195 watts.
Is Durango going to have a 195 watt GPU? No, we all agree. Instead, MS have taken a 7770 and got 680gtx performance from it by adding 32 MB of SRAM. In some people's theory.

Unified shader tech didn't come out of the blue. Everyone knew about it and was working towards it. How come no-one learnt that adding a 32 MB SRAM cache can triple overall performance of your GPU? Because it can't. ;)
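
For what it's worth, the ratios implied by the figures quoted above both come out around 2.4x (the counts and TDPs are the approximate published figures already cited in the post):

```python
# Crude ratios from the approximate public figures quoted in the post above.
transistors = {"HD 7770": 1.50e9, "GTX 680": 3.54e9}
tdp_watts = {"HD 7770": 80, "GTX 680": 195}

t_ratio = transistors["GTX 680"] / transistors["HD 7770"]
p_ratio = tdp_watts["GTX 680"] / tdp_watts["HD 7770"]
print(f"Transistor ratio: {t_ratio:.2f}x, power ratio: {p_ratio:.2f}x")
# Both land near 2.4x: the 680 buys its performance with silicon and watts,
# which is the point being made here.
```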

I don't subscribe to the idea that ESRAM will magically allow a 7770 to produce theoretical performance like a 680gtx. However, the idea that AMD and Nvidia hold a monopoly on ingenuity when it comes to GPU design, and that their current high end performance designs represent that ingenuity, I find a bit naive.

High performance GPUs are a rather small market and a small part of AMD's and Nvidia's business. Even with the MS and Nintendo contracts, AMD's total profit from its GPU business is around $100-$200 million annually. Nvidia's total business isn't as large as the Xbox division. Because of that, it's more advantageous for AMD and Nvidia to take their basic design and simply feed it with steroids. The 680gtx is the epitome of a brute force design. The 680's insensitivity to yield (high price) and the nature of GPU hardware allow Nvidia to simply throw in more streaming processors and faster RAM and pump up the speed. High end performance GPUs aren't an exercise in efficiency.

You said that unified shaders weren't new when they were incorporated into the 360. Nor is the idea of embedding RAM in a GPU, which is probably far older than the idea of unified shaders. There is no common notion that embedded RAM provides no benefit to a GPU. If embedding RAM were cheap, it probably would have been incorporated a long time ago. It's a matter of cost, not performance. Neither AMD nor Nvidia is encouraged to embed RAM in their products because it would affect their margins not only through growth in die size but also through lower yield.

If AMD or Nvidia were the manufacturers of consoles with significant userbases, I believe their console GPUs would be highly customized, efficient and powerful, and would be the drivers of their PC performance GPU designs. Furthermore, it's highly unlikely AMD offered a basic 7770 design to MS and MS told them to throw some ESRAM in there. More than likely AMD looked at what MS was trying to do and chose embedded RAM as a way to overcome some of the limitations MS faced.
 
anexanhume said:
Because games react to CPUs and can be bound by them in some situations?

Yes, so to understand the relative performance of the GPUs you want to compare in the least CPU-limited scenario possible. The comparison wasn't to show how BF3 would run on a next generation console. It was to show the relative power difference of the two GPUs. Whether Jaguar would have held back GPU performance in BF3 is irrelevant, since console games will be balanced to take full advantage of both the CPU and GPU.

Well, that performance difference shrank from 10x to 4x. I wonder what the difference would be at 720p? Or some in-between resolution like 1440x1080?

Where CPU limitations become more prevalent than GPU limitations? What would be the point of that when trying to compare the relative power of the GPUs?

expletive said:
My point is that it's relatively expensive regardless of whether it's 32MB in a $600 card or 8MB in a $150 card; its % of the BOM is still "high." Also, as bkillian suggests (I think this is what he's saying), introducing such a drastic change to the video card market that doesn't perform well in all scenarios is very risky from a business standpoint, as neither Nvidia nor AMD controls the API (and therefore the way the hardware is utilized) the way MS does with Durango.

Yes, but the argument being put forward is that the utilisation of modern GPUs is (very) low due to a lack of low latency memory access to feed the GPU with data. So if that is true, then how can improving utilisation through a better memory system be anything but better for all games, whether they are old or new? Unless we're saying that future games will be a worse fit for the existing cache system than current games, to such an extent that performance will drop by a third, which aside from sounding very far fetched also doesn't tally at all with Orbis featuring - as far as we can tell - the same cache system as the existing GCN.

scently said:
Wait, are you saying that the inclusion of the esram is useless?

No, I'm not saying that. I'm saying the claims of esram making 800MHz CUs behave like 2400MHz CUs are ridiculous. The esram may well be beneficial from a latency point of view - although the degree of that usefulness to graphics processing, which is already very latency tolerant, is debatable. Or it may simply be the best way to allow 8GB of DDR3 to be incorporated into the console while at the same time retaining reasonable bandwidth and keeping costs to a minimum.

scently said:
We can all speculate why embedded memory is not used on general purpose PCs, but the answer is fairly obvious, at least from a high level point of view; it's simply because they are general purpose, while consoles are specialist systems.

Or it's simply a bad idea to include a wide external memory bus in a console because it doesn't scale down over time like embedded memory does. Thus rather than being there for pure performance reasons, it's actually more of a "best bang for your buck over the life of the console" decision. One only has to look to Orbis to understand that a no-edram approach coupled with a wide off chip interface to high speed memory is not an obviously less performant solution in a console environment.

Incidentally, edram is present in the PC - or at least it very shortly will be in Haswell - so there's certainly nothing making it impossible on that platform.
 
That is the problem: what is a 680GTX's so-called typical use?

How can anyone measure this, even more so on hardware they basically know nothing about?

That's a great question. I dunno. My guess is it's heavily engine dependent though. Maybe MS sees trends in game engine design that could hugely benefit from virtualized tech and built their architecture around fully leveraging all it has to offer, the result being a setup less reliant on flops and bandwidth without sacrificing actual game visual output? I dunno.

MS can claim 1080p with 8XAA; actually reaching that is another thing. I learned that with the Xbox 360, which was said to have all its games at a minimum of 720p with 4XAA - that was the estimate, and it fell short. Sony likewise fell short of their claims of 1080p 60FPS.

This isn't about MS claiming anything. If they built this sort of design they did so knowing it comes with real world, verifiable payoffs. They aren't designing a significantly more complex approach that could very likely end up costing them more to manufacture just so they can enjoy a notably weaker gaming device. If the architecture were thoroughly leveraging OS/media features the context of the discussion might be different, but I think the current consensus is that's not the case, the exception possibly being that some people assume the display planes are only there for the OS (which also makes no sense, but whatever).
 
http://www.tomshardware.com/charts/...X-11-B-Performance,Marque_fbrandx13,2968.html

1080p

680GTX 143 FPS.

http://www.tomshardware.com/charts/...X-11-B-Performance,Marque_fbrandx32,2968.html

1080p

7770 35FPS...

Now I don't know why you want an 8 core Jaguar; I am sure the CPU in this test setup from Intel was far more powerful and capable than an 8 core Jaguar.

Amazing I've improved the relative performance of the 7770 by more than 200% without even using eSRAM. The 680 would probably bottleneck on the jaguar cores and make them even closer. Is eSRAM going to turn Durango's GPU into a 680? Probably not, but I'm sure it's not on there for cosmetic purposes.
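
Putting rough numbers on that: the Tom's Hardware results linked above give about a 4x gap at 1080p, against roughly a 2.4x gap in paper FLOPS (assuming the commonly published ~3.09 TFLOPS for the GTX 680 and ~1.28 TFLOPS for the HD 7770):

```python
# FPS figures are the Tom's Hardware numbers linked above; the FLOPS figures are
# approximate published specs, not measurements.
fps_680, fps_7770 = 143, 35
tflops_680, tflops_7770 = 3.09, 1.28

print(f"Measured 1080p gap: {fps_680 / fps_7770:.1f}x")       # ~4.1x
print(f"Paper FLOPS gap:    {tflops_680 / tflops_7770:.1f}x") # ~2.4x
```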
 
The 680gtx is the epitome of a brute force design. The 680's insensitivity to yield (high price) and the nature of GPU hardware allow Nvidia to simply throw in more streaming processors and faster RAM and pump up the speed. High end performance GPUs aren't an exercise in efficiency.

The same 680 which achieves equal performance to GCN with fewer resources? The same GCN that features - without esram, display planes and move engines - in another upcoming high end console?

The fact is that Kepler and GCN are both extremely efficient architectures. Just because they contain massive computational resources doesn't preclude those resources from being efficient. In fact, efficiency is everything in the PC world. NV and AMD are just as constrained by power, die size and cost as Microsoft and Sony are (albeit the ceilings are higher), so given that these factors are more or less equal between the two competitors, efficiency is more or less the greatest area of competition between the two companies.

If the 680 were so inefficient, and GCN more so, then why would Microsoft bother paying for GCN at all if it is, as you say, capable of designing something much more efficient itself? Why not create an efficiency-focused GPU designed specifically for a console environment and save paying AMD a premium for its inefficient designs?
 
Yes, so to understand the relative performance of the GPUs you want to compare in the least CPU-limited scenario possible. The comparison wasn't to show how BF3 would run on a next generation console. It was to show the relative power difference of the two GPUs. Whether Jaguar would have held back GPU performance in BF3 is irrelevant, since console games will be balanced to take full advantage of both the CPU and GPU.



Where CPU limitations become more prevalent than GPU limitations? What would be the point of that when trying to compare the relative power of the GPUs?



Yes, but the argument being put forward is that the utilisation of modern GPUs is (very) low due to a lack of low latency memory access to feed the GPU with data. So if that is true, then how can improving utilisation through a better memory system be anything but better for all games, whether they are old or new? Unless we're saying that future games will be a worse fit for the existing cache system than current games, to such an extent that performance will drop by a third, which aside from sounding very far fetched also doesn't tally at all with Orbis featuring - as far as we can tell - the same cache system as the existing GCN.



No, I'm not saying that. I'm saying the claims of esram making 800MHz CUs behave like 2400MHz CUs are ridiculous. The esram may well be beneficial from a latency point of view - although the degree of that usefulness to graphics processing, which is already very latency tolerant, is debatable. Or it may simply be the best way to allow 8GB of DDR3 to be incorporated into the console while at the same time retaining reasonable bandwidth and keeping costs to a minimum.



Or it's simply a bad idea to include a wide external memory bus in a console because it doesn't scale down over time like embedded memory does. Thus rather than being there for pure performance reasons, it's actually more of a "best bang for your buck over the life of the console" decision. One only has to look to Orbis to understand that a no-edram approach coupled with a wide off chip interface to high speed memory is not an obviously less performant solution in a console environment.

Incidentally, edram is present in the PC - or at least it very shortly will be in Haswell - so there's certainly nothing making it impossible on that platform.

Indeed it is. And that is a CPU, so its implementation has nothing to do with a GPU, and it only goes to tell you that it is a design win in terms of performance. IBM has been implementing eDRAM in their supercomputer parts - what does that tell you exactly? Intel is implementing it in Haswell - what does that tell you? It obviously has performance implications; if not, it certainly would not be used. And if the eDRAM implementation there is for the integrated GPU, it only goes to show that it is necessary to get and/or maintain throughput, which is my argument, and that has nothing to do with a 7770 performing like a 680. As for your other statement about it being a bad idea, I think you should really look into what you are trying to say and what is actually happening. I think you should read my other comment on the issue.
 
Yes, but the argument being put forward is that the utilisation of modern GPUs is (very) low due to a lack of low latency memory access to feed the GPU with data. So if that is true, then how can improving utilisation through a better memory system be anything but better for all games, whether they are old or new? Unless we're saying that future games will be a worse fit for the existing cache system than current games, to such an extent that performance will drop by a third, which aside from sounding very far fetched also doesn't tally at all with Orbis featuring - as far as we can tell - the same cache system as the existing GCN.

I don't think you can just drop ESRAM onto a GPU in an open platform like the PC and expect commensurate performance increases. Clearly MS has made a number of customizations and tweaks in order for its existence to be worthwhile even in a closed box environment.

I do agree that the efficiency argument here implies inefficiency everywhere else, but I don't see why they'd even bother with all the apparent hoops they are jumping through with these custom blocks unless there was a real benefit at the end.

This isn't a vs thread, but I would not infer that Sony simply chose not to use ESRAM; it could simply be that they wanted something non-esoteric and easy to manufacture, and did not have the stomach for the R&D that MS was willing to spend on solving the inherent problems with using it.
 
Indeed it is. And that is a CPU, so its implementation has nothing to do with a GPU,

The edram is there specifically to address the bandwidth needs of the GPU.

and it only goes to tell you that it is a design win in terms of performance. IBM has been implementing eDRAM in their supercomputer parts - what does that tell you exactly? Intel is implementing it in Haswell - what does that tell you? It obviously has performance implications; if not, it certainly would not be used. And if the eDRAM implementation there is for the integrated GPU, it only goes to show that it is necessary to get and/or maintain throughput,

What it tells me is that where you don't have access to a wide external interface to high speed memory (as with Durango), you can incorporate edram (or esram) to make up the bandwidth deficiency.

No-one has ever claimed that ed/sram does not have benefits. Of course it does; it's a great alternative to big external memory buses for achieving high bandwidth in a way that can scale down with new process nodes. It can even provide much higher bandwidth than external buses (à la Xenos) or lower latency (à la Durango). But that doesn't make it the universally better solution. As Shifty has mentioned several times, it's a trade-off.

As for your other statement about it being a bad idea, I think you should really look into what you are trying to say and what is actually happening. I think you should read my other comment on the issue.

I'm not sure what statement you're referring to. I don't think I've ever claimed it was a bad idea?
 
I don't think you can just drop ESRAM onto a GPU in an open platform like the PC and expect commensurate performance increases. Clearly MS has made a number of customizations and tweaks in order for its existence to be worthwhile even in a closed box environment.

But that's not what I'm suggesting. I'm suggesting that if the memory hierarchy of GCN is so inefficient that the GPU can be made vastly more efficient via the use of 32MB edram, then there must be massive room for improvement in the cache system. And that's something that is perfectly suited for implementation in a PC environment.

I do agree that the efficiency argument here implies inefficiency everywhere else, but I don't see why they'd even bother with all the apparent hoops they are jumping through with these custom blocks unless there was a real benefit at the end.

The reasoning certainly could be to improve efficiency, just not by 3x, or even 2x. On the other hand, the esram solution may well be less efficient than the straight up single fast memory pool of Orbis.

There is another obvious solution though which has nothing to do with increasing efficiency, and that's to allow the console to incorporate 8GB of memory - which would be too costly on a wide bus with GDDR5 - leaving the incorporation of esram and move engines as the only answer to the bandwidth problem, helping to keep data flowing around the more complex memory system.
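
A rough sketch of that capacity-versus-bandwidth trade-off, using chip densities and data rates typical of parts available around this time (these are illustrative assumptions, not confirmed specs for either console):

```python
# Rough sketch of the capacity/bandwidth trade-off described above. Chip densities
# and data rates are assumptions typical of early-2013 parts, not confirmed specs.
BUS_BITS = 256

def max_capacity_gb(chip_gbit, chip_io_bits, chips_per_slot=2):
    """Capacity reachable on the bus with one chip density (clamshell/dual rank)."""
    chips = BUS_BITS // chip_io_bits * chips_per_slot
    return chips * chip_gbit / 8

def bandwidth_gbs(gbps_per_pin):
    return BUS_BITS * gbps_per_pin / 8

# DDR3-2133 with 4 Gb x16 chips: capacity is easy, bandwidth tops out around 68 GB/s.
print(f"DDR3-2133 : up to {max_capacity_gb(4, 16):.0f} GB, ~{bandwidth_gbs(2.133):.0f} GB/s")
# GDDR5 at ~5.5 Gbps with 2 Gb x32 chips: ~176 GB/s, but only 4 GB even in clamshell.
print(f"GDDR5     : up to {max_capacity_gb(2, 32):.0f} GB, ~{bandwidth_gbs(5.5):.0f} GB/s")
```

With 2 Gb GDDR5 parts, 8 GB simply doesn't fit on a 256-bit bus, which is presumably why DDR3 plus embedded memory looks attractive if 8 GB is treated as a hard requirement.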
 
speculation:

Intel implementing edram in Haswell has a lot to do with bandwidth starvation. They doubled FP throughput and cache bandwidth in the CPU cores, and are more than doubling the GPU EUs to 40. 256-bit memory buses on consumer laptops are unlikely until we get stacked memory; eDRAM was the next best option.

MS on the other hand has a lot more flexibility in a closed box, so they picked the 256-bit low hanging fruit first. They were still short on bandwidth, so eDRAM had to be done as well. I don't doubt there will be other benefits from eDRAM besides bandwidth, but to say that it wasn't done for bandwidth first and foremost is a bit out there.
 
The same 680 which achieves equal performance to GCN with fewer resources? The same GCN that features - without esram, display planes and move engines - in another upcoming high end console?
Yep, and how does the 680 manage to do this? Could one reason be that it has caches with 10x lower latency? No, couldn't be.

Look, I'd seriously doubt a 3x increase in utilization, especially with modern GPUs that can process on other threads while they wait for data, but it seems silly to claim that it won't improve performance overall.
 
But that's not what I'm suggesting. I'm suggesting that if the memory hierarchy of GCN is so inefficient that the GPU can be made vastly more efficient via the use of 32MB edram, then there must be massive room for improvement in the cache system. And that's something that is perfectly suited for implementation in a PC environment.

The reasoning certainly could be to improve efficiency, just not by 3x, or even 2x. On the other hand, the esram solution may well be less efficient than the straight up single fast memory pool of Orbis.

There is another obvious solution though which has nothing to do with increasing efficiency, and that's to allow the console to incorporate 8GB of memory - which would be too costly on a wide bus with GDDR5 - leaving the incorporation of esram and move engines as the only answer to the bandwidth problem, helping to keep data flowing around the more complex memory system.

It's a bit complex though. Is the benefit large enough when adding a big pool of low latency eSRAM to a GPU? There's obviously going to be "some" benefit (as seen with Nvidia's professional products doing better than AMD's in latency sensitive applications with regards to low latency memory access). But for a GPU, the benefits may not be as large as they are for an SOC/APU.

In the consumer space you don't "need" more memory than can be supplied with GDDR. In the budget range, the cost of eSRAM outstrips the savings of moving to a cheaper DDR solution.

GPGPU hasn't really taken off in the consumer space due to a variety of factors. In theory GPGPU will benefit more from low latency than traditional graphics rendering. On a desktop PC graphics card you're already dealing with not only the high latency of PCIE but also the low bandwidth of PCIE. The benefits of eSRAM in that situation become much smaller and much more localized to corner cases.

Eliminate that PCIE bottleneck and would GPGPU become more desirable/commonplace for consumer applications (like not being limited to a narrow set of potential features for games)? And thus would the benefits of eSRAM start to become desirable as well?

It's going to be interesting to see how PC solutions evolve over time as APU/SOCs become more prevalent in consumer desktop PCs. It wouldn't surprise me at all if fast, low latency memory solutions such as eSRAM increase significantly in importance as the need for greater GPU power on an APU/SOC increases. After all, unlike desktop GPUs, system memory for PCs has a requirement for a large pool of memory, and that pool will only grow in size as APU/SOCs increase in capability. At that point can you feasibly have 8-32 GB of GDDR? Or perhaps APU/SOC designs will change in such a way as to be able to address 2 discrete pools of memory (DDR + GDDR) while still maintaining coherency between GPU and CPU.

Regards,
SB
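
To put numbers on the PCIe point above (standard published link rates, per direction; the GTX 680's local bandwidth figure is its usual published spec):

```python
# Peak PCIe bandwidth per direction, to quantify the "PCIe bottleneck" point above.
def pcie_gbs(lanes, gt_per_s, encoding_efficiency):
    return lanes * gt_per_s * encoding_efficiency / 8   # GB/s, one direction

print(f"PCIe 2.0 x16: ~{pcie_gbs(16, 5.0, 8 / 10):.0f} GB/s")     # ~8 GB/s
print(f"PCIe 3.0 x16: ~{pcie_gbs(16, 8.0, 128 / 130):.0f} GB/s")  # ~16 GB/s
# Versus ~192 GB/s of local GDDR5 on a GTX 680 - an order of magnitude gap,
# before even counting the round-trip latency of crossing the bus.
```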
 
MS on the other hand has a lot more flexibility in a closed box, so they picked the 256-bit low hanging fruit first. They were still short on bandwidth, so eDRAM had to be done as well. I don't doubt there will be other benefits from eDRAM besides bandwidth, but to say that it wasn't done for bandwidth first and foremost is a bit out there.

The quote from the vgleaks article says:

The difference in throughput between ESRAM and main RAM is moderate: 102.4 GB/sec versus 68 GB/sec. The advantages of ESRAM are lower latency and lack of contention from other memory clients—for instance the CPU, I/O, and display output. Low latency is particularly important for sustaining peak performance of the color blocks (CBs) and depth blocks (DBs).

This doesn't contradict what you are saying, but they specifically list the benefits of the ESRAM as lower latency and the fact that its bandwidth is dedicated to serving the GPU.
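
For reference, both leaked figures fall out of ordinary interface arithmetic; the widths and clocks below are assumptions chosen because they reproduce the leaked numbers, not confirmed Durango specs:

```python
# Interface widths and clocks here are assumptions that reproduce the leaked
# numbers, not confirmed Durango specs.
main_bw = 256 / 8 * 2.133    # 256-bit DDR3-2133             -> ~68.3 GB/s
esram_bw = 1024 / 8 * 0.8    # 1024-bit ESRAM port @ 800 MHz -> 102.4 GB/s
print(f"Main RAM: {main_bw:.1f} GB/s, ESRAM: {esram_bw:.1f} GB/s")
```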
 
The same 680 which achieves equal performance to GCN with fewer resources? The same GCN that features - without esram, display planes and move engines - in another upcoming high end console?

The fact is that Kepler and GCN are both extremely efficient architectures. Just because they contain massive computational resources doesn't preclude those resources from being efficient. In fact, efficiency is everything in the PC world. NV and AMD are just as constrained by power, die size and cost as Microsoft and Sony are (albeit the ceilings are higher), so given that these factors are more or less equal between the two competitors, efficiency is more or less the greatest area of competition between the two companies.

If the 680 were so inefficient, and GCN more so, then why would Microsoft bother paying for GCN at all if it is, as you say, capable of designing something much more efficient itself? Why not create an efficiency-focused GPU designed specifically for a console environment and save paying AMD a premium for its inefficient designs?

I'm not trying to denigrate the underlying arch of either company. AMD and nVidia are no slouches when it comes to arch design. I am not spitting on GCN or 600 series tech, but those two techs are implemented using a brute force method on the higher end cards. The reality is that they are serving a small market, one not sensitive to high prices, with their high end wares. They simply drop more transistors on the silicon and put on more high speed RAM to serve that market. A larger market, or a market more sensitive to prices, doesn't automatically mean that AMD or nVidia couldn't provide 680gtx performance with a more thoughtful design.
 
Have we any confirmation that it's 6T SRAM though? I'm getting lost on what current knowledge is!

1T-SRAM is not really SRAM, right? (That's what the Wii U is using, right?)

What are the differences in latency between 1T-SRAM, 6T-SRAM and vanilla eDRAM (like in Xenos)?

The low latency claim from vgleaks might preclude 1T-SRAM (and as Proelite says, if it is only 1T-SRAM they could probably have afforded a lot more of it; even the Wii U manages to have 32 MB).
 
VGLeaks IIRC. Devs were being encouraged to hit the metal on Orbis but had to go through the APIs on Durango, leading to speculation that Durango is supporting a device-independent software platform and might present an upgradeable console (new revision in much shorter time-frame than 6 year console cycle).

It was the Edge article actually.
 
Amazing I've improved the relative performance of the 7770 by more than 200% without even using eSRAM. The 680 would probably bottleneck on the jaguar cores and make them even closer. Is eSRAM going to turn Durango's GPU into a 680? Probably not, but I'm sure it's not on there for cosmetic purposes.


Now that you are adding Jaguar cores to the test, why not ESRAM as well for the 680GTX?

Maybe you should change the argument to "the 7770 can match the 680GTX if I pair the 680GTX with a Pentium 4 3.2GHz and 1GB of system RAM". If your point is to prove that a much weaker GPU can be more efficient than another, maybe that is a better way.

Oh, and I chose a 680GTX with 2GB of RAM; imagine one with even more RAM.
 
I don't subscribe to the idea that ESRAM will magically allow a 7770 to produce theoretical performance like a 680gtx. However, the idea that AMD and Nvidia hold a monopoly on ingenuity when it comes to GPU design, and that their current high end performance designs represent that ingenuity, I find a bit naive.

High performance GPUs are a rather small market and a small part of AMD's and Nvidia's business. Even with the MS and Nintendo contracts, AMD's total profit from its GPU business is around $100-$200 million annually. Nvidia's total business isn't as large as the Xbox division. Because of that, it's more advantageous for AMD and Nvidia to take their basic design and simply feed it with steroids. The 680gtx is the epitome of a brute force design. The 680's insensitivity to yield (high price) and the nature of GPU hardware allow Nvidia to simply throw in more streaming processors and faster RAM and pump up the speed. High end performance GPUs aren't an exercise in efficiency.

You said that unified shaders weren't new when they were incorporated into the 360. Nor is the idea of embedding RAM in a GPU, which is probably far older than the idea of unified shaders. There is no common notion that embedded RAM provides no benefit to a GPU. If embedding RAM were cheap, it probably would have been incorporated a long time ago. It's a matter of cost, not performance. Neither AMD nor Nvidia is encouraged to embed RAM in their products because it would affect their margins not only through growth in die size but also through lower yield.

If AMD or Nvidia were the manufacturers of consoles with significant userbases, I believe their console GPUs would be highly customized, efficient and powerful, and would be the drivers of their PC performance GPU designs. Furthermore, it's highly unlikely AMD offered a basic 7770 design to MS and MS told them to throw some ESRAM in there. More than likely AMD looked at what MS was trying to do and chose embedded RAM as a way to overcome some of the limitations MS faced.

We are talking about companies who routinely build and sell boards near $1000 just to have bragging rights for their flagship designs.

I am pretty sure cost is not the reason why they do not use edram on their high end.

For the low end to midrange, you might be able to make that argument about cost effectiveness.
 