Who makes 8Gb at 2133?
I've always kept it in the realm of possibility because of the Yukon Roadmap, and also because of the AMD/IBM team that MS assembled. I also consider it because MS took direct action to have the Yukon material removed from various sites. I believe that was a significant move on their part.
I've repeated this more than a few times, so I hope people will forgive me for sounding like a broken record. lol
I honestly don't have any specific requirements for how it would be configured: APU+APU, APU+GPU, or even something reflecting the AMD patent for an APU+APD. Naturally I'm for whatever works best graphics- and performance-wise.
In any case, however it works out or DOESN'T work out, I look forward to hearing the unvarnished truth. I want to know how they did it, or didn't do it. It's all very interesting to me how this hardware puzzle will finally come together.
Does anyone know the answer to that 8Gb-at-2133 question?
I tell you, cognitive dissonance is a powerful thing and we're seeing it play out on a large scale here.
You need great engineers to design even low-end parts.
Yes, but if they don't care about performance, why invest so much in R&D when something that does the job (an x86 DX11 APU) is readily available without that expense?
As I said, Trinity is on the shelf right now. Been there for almost a year, about to be replaced.
No R&D involved.
If it's all about max profits upfront and performance be damned, why didn't they go with this option?
Because you maximise profits by making the best APU you can while using the fewest parts, with the best layout, best thermals, etc.
So for the best profits you want the best engineers, even for a low-end design.
Which one? For the best profits, there already exists a design on the shelf. And a new one will replace it shortly. According to the rumored spec, they didn't choose this existing APU.
The point is, they do care about performance. Their actions clearly indicate that. To what extent is to be revealed shortly.
Lots of parts going into these products aren't currently in production for consumer consumption. Apparently that isn't a huge barrier.
Which parts aren't in production or even sampling? Except for the SoC, which is an expensive custom part (and secret), I don't see much else that wouldn't be available for procurement right now. Certainly not the memory, of which they need an average of one or two million chips per week, at a very good price. Sure, we all thought 8GB of GDDR5 was improbable, but still, those chips definitely exist; they were sampling in late 2012 or early 2013.
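To put that chip-count claim in perspective, here's a quick back-of-envelope sketch. The per-console configuration (16 x 4Gb chips for 8GB) and the weekly build rates are my own illustrative assumptions, not figures from the thread:

```python
# Rough back-of-envelope: how many DRAM chips a console maker might need per week.
# Assumed figures (illustrative, not from the thread): 8 GB per console built from
# 4 Gb (0.5 GB) chips, and a launch-window build rate in the low hundreds of
# thousands of units per week.

GB_PER_CONSOLE = 8
CHIP_DENSITY_GB = 4 / 8                                     # a 4 Gb chip holds 0.5 GB
CHIPS_PER_CONSOLE = int(GB_PER_CONSOLE / CHIP_DENSITY_GB)   # 16 chips

for consoles_per_week in (75_000, 100_000, 150_000):        # assumed build rates
    chips = CHIPS_PER_CONSOLE * consoles_per_week
    print(f"{consoles_per_week:>7,} consoles/week -> {chips / 1e6:.1f}M chips/week")
```

Even at the low end of those assumed rates you're into seven figures of chips per week, which is why pricing and supply matter as much as raw availability.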
Micron does offer "Graphics DDR3" at 2133, which they call "1GHz class", but those parts are only available at 2Gb and 4Gb densities.
The best I found is the Micron 8Gb 1.35V part, which could probably be used at 1.5V, but it's not binned that way.
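For reference, here's the chip-count and bandwidth arithmetic behind why the density question matters. The 256-bit bus and x16 devices are assumptions based on the rumored spec, not anything from Micron's catalogue:

```python
# Chip count and bandwidth sanity check for an 8 GB DDR3-2133 pool.
# Assumptions (mine, for illustration): x16 devices on a 256-bit bus.

TOTAL_GB = 8
for density_gbit in (2, 4, 8):
    chips = TOTAL_GB * 8 // density_gbit          # 8 Gb per GB
    print(f"{density_gbit} Gb parts -> {chips} chips for {TOTAL_GB} GB")

transfers_per_s = 2133e6                          # DDR3-2133 = 2133 MT/s
bus_bytes = 256 // 8                              # a 256-bit bus moves 32 bytes/transfer
print(f"Peak bandwidth ~ {transfers_per_s * bus_bytes / 1e9:.1f} GB/s")   # ~68.3 GB/s
```

If the bus really is 256 bits wide and built from x16 parts, you need 16 devices regardless, so 4Gb parts land exactly on 8GB; 8Gb density would only pay off for 16GB or a narrower bus.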
Oh, I wasn't talking about game performance.
I was just talking about usable data per frame, which is limited by the bandwidth of main RAM; see the quick math after this post. (Of course, 2 GB per frame is still plenty for the GPU to use.)
The eSRAM is pretty useful for any operation that needs a lot of fast reads/writes of small data (that's also where the low latency should be very useful).
I can see some nice alpha particle effects.
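For what it's worth, the "data per frame" figure above presumably just falls out of dividing bandwidth by frame rate. A quick sketch using the rumored ~68GB/s DDR3 and ~102GB/s eSRAM numbers (rumor-level figures, not confirmed specs), plus a rough illustration of why alpha-heavy particle effects are exactly the kind of bandwidth-hungry traffic the eSRAM suits:

```python
# "Usable data per frame" as bandwidth / frame rate, using rumored figures:
# ~68 GB/s for the DDR3 main RAM and ~102 GB/s for the eSRAM (neither confirmed).

def data_per_frame_gb(bandwidth_gbs: float, fps: int) -> float:
    return bandwidth_gbs / fps

for fps in (30, 60):
    print(f"{fps} fps: main RAM ~{data_per_frame_gb(68.3, fps):.2f} GB/frame, "
          f"eSRAM ~{data_per_frame_gb(102.0, fps):.2f} GB/frame")

# Why alpha-blended particles like fast local memory: every full-screen blended
# layer reads and writes the render target (~4 bytes each way at 1080p).
layer_mb = 1920 * 1080 * (4 + 4) / 1e6
print(f"~{layer_mb:.1f} MB of read+write traffic per full-screen alpha layer")
```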
And the usable data per frame has very little to do with the total texture memory budget. Take any game you'd care to play: in how many of them can you see the entire level at full texture resolution all the time? Take Skyrim: just because you can't see the scenery behind you doesn't mean its textures aren't in RAM. Gears of War uses streaming textures now, but with more RAM you could preload those textures so there's no pop-in. Right there you have a use case for 2x the texture memory compared with what you can address in a frame. Megatexturing would benefit greatly from more RAM, since you can keep more tiles in memory and manage them more efficiently.
And that's just textures.
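Just to put a number on the megatexturing point, here's a rough sketch of how many compressed tiles stay resident at different pool sizes. The 128x128 tile size and DXT1-style 0.5 byte/texel compression are illustrative assumptions on my part, not figures from any particular engine:

```python
# Resident-tile capacity for a virtual-texturing / megatexture-style cache.
# Tile size and compression ratio are illustrative assumptions.

TILE_DIM = 128
BYTES_PER_TEXEL = 0.5                                  # DXT1/BC1 block compression
tile_bytes = TILE_DIM * TILE_DIM * BYTES_PER_TEXEL     # 8 KB per tile

for pool_gb in (1, 2, 4):
    tiles = int(pool_gb * 1024**3 / tile_bytes)
    print(f"{pool_gb} GB tile pool -> ~{tiles:,} resident tiles")
```

More resident tiles means fewer misses when the camera moves, which is exactly the less-pop-in argument above.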