The only place I've ever seen the codename "Loop" is that rumor site. P.S. I will be ticked if Loop is the next real Xbox! But there is a thread to discuss that rumor and our feelings.
(Oh, and it's what we called the main screen of the Kin)
I'm quite intrigued by the idea of some sort of on-die cache with the GPU. I mentioned the XDR2 idea, but I haven't the knowledge to comment on what size of cache would be worthwhile. The 360 had 10 MB on a daughter die, which meant 256 GB/s of bandwidth for some simple things like Z/stencil, AA, etc. Is that right?
I also read, maybe in an article here from Dave Baumann, that if you moved the cache completely on-die, the whole GPU gets access to that bandwidth? Is that correct?
So would putting, say, 20 MB of eDRAM on that Tahiti GPU be beneficial? (Rough sizes are sketched below.)
Then you could just unify the rest of the system with some high-density/cheap GDDR3.
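For what it's worth, a quick back-of-the-envelope on what actually fits in 10 MB versus a hypothetical 20 MB pool. All the formats and sizes below are my own assumptions for illustration, nothing confirmed:

```python
# Back-of-the-envelope render target sizes vs. a small eDRAM pool.
# All figures below are illustrative assumptions, not leaked specs.

def target_mb(width, height, bytes_per_pixel, samples=1):
    """Size of one render target in MiB."""
    return width * height * bytes_per_pixel * samples / (1024 * 1024)

EDRAM_360_MB = 10    # Xbox 360 daughter-die eDRAM
EDRAM_HYPO_MB = 20   # the hypothetical pool discussed above

for w, h, aa in [(1280, 720, 1), (1280, 720, 4), (1920, 1080, 1), (1920, 1080, 4)]:
    colour = target_mb(w, h, 4, aa)   # 32-bit colour
    depth = target_mb(w, h, 4, aa)    # 32-bit depth/stencil
    total = colour + depth
    print(f"{w}x{h} {aa}xAA: colour+depth = {total:5.1f} MiB "
          f"(fits in 10 MB: {total <= EDRAM_360_MB}, in 20 MB: {total <= EDRAM_HYPO_MB})")
```

720p with no AA squeezes into 10 MB, but 720p with 4xAA or anything at 1080p doesn't, which is why the 360 had to resort to tiling.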
Yep. Everything seems to be moving towards deferred rendering, at least some parts of the pipeline like light prepass. It's not safe to think of the framebuffer as one small buffer any more. It's gonna be built out of lots of buffers and render targets which won't all fit in eDRAM concurrently unless there's a truckload of eDRAM, which would cost too much. eDRAM as a working space per buffer makes sense, with each buffer being written out to main RAM and then copied over for final composite, but then you have significant RAM BW consumption, which is what the eDRAM is supposed to alleviate. (And here's when the deferred rendering crowd lynches me.)
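To put some purely illustrative numbers on "lots of buffers and render targets", here's a sketch of a plausible G-buffer layout. The layout itself is my assumption, not any particular engine's:

```python
# Rough G-buffer footprint for a deferred renderer.
# The layout (4 colour MRTs + depth) is an illustrative assumption,
# not a description of any specific engine.

def mb(width, height, bytes_per_pixel):
    return width * height * bytes_per_pixel / (1024 * 1024)

W, H = 1280, 720
gbuffer = {
    "albedo (RGBA8)":         mb(W, H, 4),
    "normals (RGBA16F)":      mb(W, H, 8),
    "specular/gloss (RGBA8)": mb(W, H, 4),
    "light accum (RGBA16F)":  mb(W, H, 8),
    "depth/stencil (D24S8)":  mb(W, H, 4),
}

total = sum(gbuffer.values())
for name, size in gbuffer.items():
    print(f"{name:24s} {size:5.2f} MiB")
print(f"{'total':24s} {total:5.2f} MiB  (vs. 10 MiB of eDRAM)")
```

Even at 720p with no MSAA that's well past 10 MB, which is the "truckload of eDRAM" problem in a nutshell.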
That whole line wasn't speculation but console warring: Console A is more powerful than Console B. No actual hardware speculation involved, unlike the rest of this thread, which is looking at well explained options even if they are unlikely or rampant (PS4 will be 100% raytraced graphics using the PowerVR OpenRL engine!). Had the original article had any hardware speculation in it, then it could have been considered. As it was, bkilian was right: "Isn't that what I said? Possible hardware speculation is speculation."
Power speculation isn't the same as hardware speculation. Hell, we can't even measure power on existing, well known systems, so how we're supposed to rationally discuss relative power on unknown boxes, I don't know! My point was that discussing the ephemeral concept of "power" (which, as we've seen in the last five years, is a difficult concept to measure) in relation to rumors seems a little premature.
Right, sort of got that. So what you are saying is that for some tasks, such as texturing, the eDRAM would have to be significantly bigger, which wouldn't justify its cost... As some of the rumours focused around 64 MB of L3 on the PowerPC core, would that be enough cache to be worth the cost?
And the reason that they won't do that, and it won't work, is that it would sacrifice performance on drawing textures, and texture detail is one of those things that is very easily visible when you put two differently specced machines side by side.
Basically what I'm getting at is: is there a neat trick where you could hit every target you want (high bandwidth, 4+ GB of RAM, a small pin count for better cost reductions in future) all without taking too much of the hardware budget?...
If you wanted to texture from eDRAM, you'd need lots, but texturing is fairly well served by a normal RAM bus and GPU caches - it's not a huge BW consumer.
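A crude way to see why texturing isn't the big bandwidth hog. Every rate and the cache hit rate below are assumptions picked purely for illustration:

```python
# Very rough texture bandwidth estimate. Every number here is an
# assumption for illustration; real workloads vary a lot.

width, height, fps = 1920, 1080, 60
fetches_per_pixel = 8     # assumed texture samples per shaded pixel
bytes_per_texel = 4       # uncompressed RGBA8; DXT compression would cut this
cache_hit_rate = 0.90     # assumed GPU texture-cache hit rate

texels_per_frame = width * height * fetches_per_pixel
raw_gbps = texels_per_frame * bytes_per_texel * fps / 1e9
external = raw_gbps * (1.0 - cache_hit_rate)   # misses that go to main RAM

print(f"raw texel traffic:      {raw_gbps:6.1f} GB/s")
print(f"after {cache_hit_rate:.0%} cache hits: {external:6.1f} GB/s to main RAM")
```

The point is that the texture caches soak up most of that traffic, whereas framebuffer/blend traffic has to be written out every time, which is what the eDRAM is there to absorb.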
Those are stupid rumours. 64 MB of eDRAM on the CPU makes zero sense in a console.
Nope. Hence the choices of varying compromises. Every possible solution is imperfect in some way and the hardware vendors have to make the most of them. Best chance looks like XDR2 at the moment, which is still in the vapourware phase.
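On the "small pin count" angle, the trade-off is basically pins times per-pin rate. A tiny sketch, where the per-pin rates are ballpark figures I'm assuming rather than datasheet quotes:

```python
# Bandwidth = (bus width in bits / 8) * per-pin data rate.
# Per-pin rates below are ballpark assumptions for illustration only.

def bandwidth_gbs(bus_width_bits, gbit_per_pin):
    return bus_width_bits / 8 * gbit_per_pin

configs = [
    ("GDDR5, 256-bit @ 5 Gbps/pin", 256, 5.0),
    ("GDDR5, 128-bit @ 5 Gbps/pin", 128, 5.0),
    ("XDR2 (claimed), 128-bit @ 12.8 Gbps/pin", 128, 12.8),
]

for name, width, rate in configs:
    print(f"{name:42s} ~{bandwidth_gbs(width, rate):6.1f} GB/s")
```

In other words, a narrow (cheap) bus only gets you there if those per-pin rates actually ship, which is exactly the vapourware caveat.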
I know everyone is moving to deferred rendering, and I still think it's probably not the right way to go for consoles.
Why? The visual advantages can't be disputed IMO. I think the console companies should focus on deferred rendering and design the hardware to maximise it.
Which makes me wonder whether 28 nm eDRAM exists or is in the near future, but anyway.
Renesas Electronics Corporation (TSE: 6723), a premier provider of advanced semiconductor solutions, today announced the development of a basic structure for embedded DRAM (eDRAM) highly compatible with standard logic circuit design assets (IP: intellectual property) of the next generation system LSIs at the 28-nanometer (nm) node and beyond.
It's a start, but... probably not close enough yet to meet the demands of a high-performance part (it'd be easier/sooner for mobile than a game console, of course). There's just not enough information.
And do you want to pay $5000 for it?
First announced in February of last year, Imagination Technologies has officially announced the licensing availability of its first two GPUs based on the Series6 platform. The PowerVR G6200 and G6400 each promise to bring low power graphics to unprecedented levels and are said to deliver up to 20 times more horsepower than the current generation while also being five times more efficient. In tangible terms, the Series6 GPU cores are capable of exceeding 100 gigaflops and are said to approach the teraflop range. All chipsets based on Series6 are backward compatible with Series5 and fully support OpenGL 3.x, 4.x and ES, along with OpenCL 1.x and DirectX 10. Further, specific models will also support DirectX 11.1 with full WHQL compliance.
...
Yeah, a nice release and all that, but realistically Rogue couldn't be used for a console; they are talking about later iterations being able to scale up to 1 teraflop and DX11.1.
But the early versions will be nowhere near that, and besides, high-end consoles will need multi-teraflop performance.
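For a sense of the scale gap: peak shader throughput is roughly ALU lanes x 2 (one fused multiply-add per clock) x clock. The lane counts and clocks below are made-up illustrations, not rumoured specs:

```python
# Peak shader throughput: lanes * 2 ops (FMA) * clock.
# All lane counts and clocks are made-up illustrations, not rumours.

def peak_gflops(alu_lanes, clock_ghz):
    return alu_lanes * 2 * clock_ghz

examples = [
    ("small mobile part, 64 lanes @ 0.8 GHz", 64, 0.8),
    ("mid-range part, 512 lanes @ 0.8 GHz", 512, 0.8),
    ("'multi-teraflop' target, 2048 lanes @ 0.9 GHz", 2048, 0.9),
]

for name, lanes, clk in examples:
    print(f"{name:48s} ~{peak_gflops(lanes, clk):7.1f} GFLOPS")
```

So something in the ~100 GFLOPS class and a multi-teraflop part are well over an order of magnitude apart.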