Predict: The Next Generation Console Tech

Disclaimer: trying again to find arguments backing Charlie's claim.
Could it be that Sony, royalties aside, is unhappy about their system's overall power consumption and thermal dissipation? After three shrinks the system remains huge and bulky, they still need a healthy dissipation system, etc. I don't imply that the situation is better for other systems, say the 360, and I know that the Cell is highly effective in the watts-per-flop department, but could Sony have come to the conclusion that this kind of thermal/power characteristic is no longer what they want for their next system?

It could also be that they have decided that a two-chip system with two memory pools doesn't make sense, given the trend of increasing W/mm^2, board complexity, the cost of shrinking two chips instead of one, more frequent revisions, and the difficulty of getting outside third parties to cooperate in bringing the device to market on time. Perhaps they looked at the Cell and RSX and decided that they couldn't afford to drop the Cell, but Nvidia was expendable.

In regard to software implications, I think it would be really "Sonyesque" to want to differentiate themselves from whatever standard MS/Khronos set in the PC/360 realm. Sony is still building its army of studios and I think their first-party offering on its own could justify the acquisition of the system, a bit like Nintendo (actually better, as they can cover way more genres). All they need is execution; they now have a compelling line-up in almost every genre: GT/Wipeout/MNR for racing, GoW for hack-and-slash, Uncharted for action/adventure, FPS, whatever games use Sackboy (IMHO the most charismatic game character I've seen in a while, with huge potential if Sony is willing to get more traction in the casual and kid markets), etc.
The side effect of owning so many studios (or having really strong partnerships with some studios) is that Sony has gathered quite some talented/genius guys who are able to deal with whatever exotic archs Sony offers them to work with. On top of that, if Sony were to create an EE v3 (given the timeline, v2 would be an unfair nickname) or a Cell v2, they would not start from scratch in regard to software.

I suspect they are putting more and more emphasis on their own software performance at the expense of everyone else. My suspicion is that had they had their way, the Phyre engine, Edge tools etc. would not have been created. They probably wanted third parties to be able to produce adequate games, but they intended to outspend and outperform the third party competition by a significant margin and therefore produce visually outstanding games to dominate their own platform, the way Nintendo dominates the Wii. If they go exotic again it's only because they feel confident in this approach and want to give it another shot.

Actually, say the rumour is true: is that really a problem? I mean, performance may fall short of a PC part in disguise, but I don't think that is what drives the console market; it's about execution and Sony's vision for the product:

I think they can outperform even more powerful PC hardware because the gulf between efficiency in execution and outright performance will grow even wider, given a console can be designed to render 3D at a fixed output resolution, compared to a design which has to work by brute force across multiple configurations and be overbuilt in every respect. They can afford to be far more elegant in operation because they control the hardware and software layers.

If Sony comes with a low power system in a nice package, leverages all the research they did on motion/image recognition, ships a "complete" system from scratch (in regard to input/controller I mean pad, nunchuck, cam, whatever tech they want to push), a functional and complete on-line offering, and launches with a proper line-up covering most bases, all this at an acceptable price, will the absolute 3D performance of the system be that relevant to the market reception of the product? Honestly I don't think so.

I think so. If they leverage existing technology, don't pay royalties where they don't have to (DVD/Blu-ray are a cost-plus extra) and ditch Nvidia, who charge like a wounded bull, and roll their own solution, they can have the best of both worlds. They already have an excellent and relatively efficient processor under their belt, so why not throw all the high level shader programs at the Cell and keep a more basic programmable ROP unit, based on say the Emotion Engine, mated to basic texture units (which they could get from S3 for cheap, for instance) and wrapped around a decent quantity of eDRAM, as a small external chip which could be quickly integrated into the main CPU after a die shrink, as they'd own the IP for both?
 
I think anything less than a real GPU, from Nvidia or AMD, will be detrimental to the PS4... I just don't see Sony making a competitive GPU; they have not made one in the last 10 years. What's more, the trend for 2012 is that a gaming system will also have very beefy general purpose processing power. We already see that in the form of quad/hexa-core i7s; I made a thread about how much the x86 CPU is under-utilised today. By 2012 the new Sandy Bridge and Bulldozer will enter their 2nd generation re-spin... imagine how much more general processing power is left untapped!

The Xbox 720 will come with at least 6 IBM PowerPC (good/full) cores and another custom, forward-thinking AMD GPU: beefy CPU cores and hundreds of tiny shader units in the GPU. Where does this leave Cell, much less the Emotion Engine and their tiny vector units?

The PS4 should have at least 4 PPUs (good/full) with 4 SPUs per PPU... I think that is the best balance of yield and performance? I think the time of small vector units with their scratchpad memory is gone with the arrival of clusters of hundreds of shader units...?
 
Wait a minute... I just thought of a question: just how much general purpose processing power should next gen consoles ideally have? What ratio of GP power to floating point/vector power in their CPU: more GP power or more FP/vector power?

What could one do in game design with GP power?
What could one do in non-game/OS tasks? Next gen consoles will have to "talk" more with mobile products (moving your data around), 3D interfaces, cloud-like connections.
What could one NOT do if the CPU is devoid of enough FP/vector power?
How closely could a custom console CPU match Intel's offerings in the desktop area?

...if I am understanding all this right
 
An EE-based design would be insane nonsense nowadays. Pixel/vertex/geometry shaders are the requirement. A future in-house design would surely be Cell-based, with the possibility of using it in other devices, which would be the only point of an in-house component. EE-based? Not on your nelly!

I could believe this if it were the new CPU for the PSP2: if they replaced the PPC core in Cell with the EE core and kept a few SPUs, that would make it fairly easy to emulate PS2 games and you would have access to a huge catalogue of AAA DLC titles at launch.

It would also allow developers to leverage some libs from the PS3 when making new titles.

Perhaps Charlie got some information mixed up?
 
So this isn't just the annual holographic storage hoax, but one coupled with an actual newfound material?
A new non-organic medium with small grain size is nice and all, but you still have to get the light to it. The diffraction limit of blue laser light is the limiting factor at the moment, not the medium.

It might be a good material for actual recording in Super-RENS, but that has a lot more issues to solve than just the storage layer of the medium.
 
Wait a minute... I just thought of a question: just how much general purpose processing power should next gen consoles ideally have? What ratio of GP power to floating point/vector power in their CPU: more GP power or more FP/vector power?

What could one do in game design with GP power?
What could one do in non-game/OS tasks? Next gen consoles will have to "talk" more with mobile products (moving your data around), 3D interfaces, cloud-like connections.
What could one NOT do if the CPU is devoid of enough FP/vector power?
How closely could a custom console CPU match Intel's offerings in the desktop area?

...if I am understanding all this right

The non-game/OS tasks you name won't need much power at all. Just think about it: many people use netbooks for those things (you only need video acceleration for HD content) on a Windows system. I guess if Sony used a non-exotic browser like Chromium, the browser could be much more efficient too. Linux on PS3 was mostly handicapped by the low memory environment. The PPU is probably (I don't know for sure, though) much faster in GP terms than an Intel Atom.

Even a Mobile Core i3 is totally oversized for these tasks. And that is a "cheap" 35 Watt CPU.
 
I think anything less than a real GPU, from Nvidia or AMD, will be detrimental to the PS4... I just don't see Sony making a competitive GPU; they have not made one in the last 10 years...
You know, if Sony had released 5 GPUs in that time you'd have a point, but they haven't even attempted to make a competitive GPU (save whatever secrets they were trying for PS3), so it's not really much of a reference point to say they couldn't pull it off! Though I don't think it'll happen, it's not beyond the realms of possibility that a new take on GPU architectures from original thinkers who have observed everything happening in the GPU space would be able to compete with conventional GPUs from Nvidia/AMD.
 
I think anything less than a real GPU, from Nvidia or AMD, will be detrimental to the PS4... I just don't see Sony making a competitive GPU; they have not made one in the last 10 years. What's more, the trend for 2012 is that a gaming system will also have very beefy general purpose processing power. We already see that in the form of quad/hexa-core i7s; I made a thread about how much the x86 CPU is under-utilised today. By 2012 the new Sandy Bridge and Bulldozer will enter their 2nd generation re-spin... imagine how much more general processing power is left untapped!

The Xbox 720 will come with at least 6 IBM PowerPC (good/full) cores and another custom, forward-thinking AMD GPU: beefy CPU cores and hundreds of tiny shader units in the GPU. Where does this leave Cell, much less the Emotion Engine and their tiny vector units?

The PS4 should have at least 4 PPUs (good/full) with 4 SPUs per PPU... I think that is the best balance of yield and performance? I think the time of small vector units with their scratchpad memory is gone with the arrival of clusters of hundreds of shader units...?

Xbox 720 with 6 cores? Are games CPU limited? It seems most games are RAM and GPU limited.

I think MS's best and cheapest bet is the same CPU, 4 gigs of RAM, more eDRAM to prevent tiling, and a faster GPU. That's it. Maybe even include a Blu-ray drive.

Sony should do the same thing: the same Cell, maybe re-engineered for GDDR5 RAM instead of XDR, 4 gigs of RAM and a faster GPU.
 
If they are going to redesign the memory controller they'd probably go with XDR2. Rambus has been good to Sony despite their designs being niche in the larger markets.
 
It could also be that they have decided that a two-chip system with two memory pools doesn't make sense, given the trend of increasing W/mm^2, board complexity, the cost of shrinking two chips instead of one, more frequent revisions, and the difficulty of getting outside third parties to cooperate in bringing the device to market on time. Perhaps they looked at the Cell and RSX and decided that they couldn't afford to drop the Cell, but Nvidia was expendable.
I agree with that, but my POV was a bit more drastic. What I meant is that the power and thermal characteristics of the chips have pretty much skyrocketed from the PS2 to the PS3. I couldn't find much information about the PS2 in this regard so I might be off; my only source is the wiki, and it gives 15 watts as a value without reference to which revision of the PS2 it is speaking about. Power consumption may have been higher in the first PS2 revisions but I don't think it was close to the PS3/360. What if Sony adopts a more embedded philosophy in this regard? The Cell does great here, but it does well for a 3.2GHz part. Sony is now pretty far down the road in regard to shrinking both the RSX and the Cell, and they could still be unhappy about the form factor and characteristics of the product.
I suspect they are putting more and more emphasis on their own software performance at the expense of everyone else. My suspicion is that had they had their way, the Phyre engine, Edge tools etc. would not have been created. They probably wanted third parties to be able to produce adequate games, but they intended to outspend and outperform the third party competition by a significant margin and therefore produce visually outstanding games to dominate their own platform, the way Nintendo dominates the Wii. If they go exotic again it's only because they feel confident in this approach and want to give it another shot.
Well, that would be risky business. I think the main incentive to go with something more proprietary would be the costs associated with royalties and possibly (see above) the power/thermal characteristics (why would they go with MIPS rather than ARM? I don't know; licence cost? corporate knowledge?). They can do this because, given their sheer publishing power, the system could sell anyway, leaving publishers with no choice but to support it. And as multi-core heterogeneous systems are set to become the standard, I'm not sure the system would end up that different from the competing systems.
I think they can outperform even more powerful PC hardware because the gulf between efficiency in execution and outright performance will grow even wider, given a console can be designed to render 3D at a fixed output resolution, compared to a design which has to work by brute force across multiple configurations and be overbuilt in every respect. They can afford to be far more elegant in operation because they control the hardware and software layers.
Clearly a closed platform allows for making the most out of what you have, but if Sony were to move to much lower power and thermal characteristics, optimizations would not be enough. Anyway, perceived performance has little to do with real world performance; most gamers are not pixel counters (don't read that as an offence). I'm sure that once "whatever system" is powerful enough to push 2x MSAA along with a good MLAA implementation at 720p, and can benefit from a performant on-board scaler, that will be a great leveller in regard to "perceived" quality versus whatever PC pushing 2560x1600 with 8x MSAA, etc.

I think so. If they leverage existing technology, don't pay royalties where they don't have to (DVD/Blu-ray are a cost-plus extra) and ditch Nvidia, who charge like a wounded bull, and roll their own solution, they can have the best of both worlds. They already have an excellent and relatively efficient processor under their belt, so why not throw all the high level shader programs at the Cell and keep a more basic programmable ROP unit, based on say the Emotion Engine, mated to basic texture units (which they could get from S3 for cheap, for instance) and wrapped around a decent quantity of eDRAM, as a small external chip which could be quickly integrated into the main CPU after a die shrink, as they'd own the IP for both?
That's the bothering part: putting together a Larrabee doesn't seem that easy. To some extent I think Sony might have been in a better situation if they already had an EE/GS V2 at work in the PS3. As Gongo put it, they haven't gotten their hands on something concrete for a decade.
It would be a huge effort, both in cost and man-hours, to play catch-up with the big guys (Nvidia, ATI, IMG) now. But who knows, maybe they plan to amortize the effort across both the PSP2 and the PS4.

I think anything less than a real GPU, from Nvidia or AMD, will be detrimental to the PS4... I just don't see Sony making a competitive GPU; they have not made one in the last 10 years. What's more, the trend for 2012 is that a gaming system will also have very beefy general purpose processing power. We already see that in the form of quad/hexa-core i7s; I made a thread about how much the x86 CPU is under-utilised today. By 2012 the new Sandy Bridge and Bulldozer will enter their 2nd generation re-spin... imagine how much more general processing power is left untapped!

The Xbox 720 will come with at least 6 IBM PowerPC (good/full) cores and another custom, forward-thinking AMD GPU: beefy CPU cores and hundreds of tiny shader units in the GPU. Where does this leave Cell, much less the Emotion Engine and their tiny vector units?

The PS4 should have at least 4 PPUs (good/full) with 4 SPUs per PPU... I think that is the best balance of yield and performance? I think the time of small vector units with their scratchpad memory is gone with the arrival of clusters of hundreds of shader units...?
I somewhat agree, but things could turn out differently if Sony is after really low power and thermal profiles for their next system.


Charlie's claim has me wondering how a PS3 based on an EE/GS V2 combo could have looked, and how far from or close to the actual PS3 it would have ended up. It could be interesting because quite some devs worked on the PS2 and could have their say on the matter. I think I will open some sort of spin-off thread on the matter.
 
I agree with that, but my POV was a bit more drastic. What I meant is that the power and thermal characteristics of the chips have pretty much skyrocketed from the PS2 to the PS3.

If they drop the majority of the GPU functions and integrate them into a standard pool of multipurpose computing resources, they can get better utilization without doubling up on flexible compute resources (compared to, say, going for both a modern processor and a modern PC GPU) and without losing essential fixed function hardware. Consider this example:

[Attached image: ASSAO+pipeline.png]


They can perform the majority of the functions on the Cell processor and finally merge everything together with a relatively basic texture + ROP/eDRAM GPU, using the Emotion Engine as a control for this process. They could feed the whole scene into the GPU and then process it quickly within the eDRAM buffer to spit out two distinct frames, one for each eye, rather than computing them separately.

So instead of having a cutting edge and expensive GPU, they can use a relatively cheap and low power equivalent which has been stripped of the external memory bus, and rely entirely on the CPU for the majority of the heavy lifting. This would be a vastly simplified architecture and it still follows an embedded model.


Well, that would be risky business. I think the main incentive to go with something more proprietary would be the costs associated with royalties and possibly (see above) the power/thermal characteristics (why would they go with MIPS rather than ARM? I don't know; licence cost? corporate knowledge?). They can do this because, given their sheer publishing power, the system could sell anyway, leaving publishers with no choice but to support it. And as multi-core heterogeneous systems are set to become the standard, I'm not sure the system would end up that different from the competing systems.

I can't really say which central processor they ought to use if they did indeed use a different Cell 'PPU'.

Clearly a closed platform allows for making the most out of what you have, but if Sony were to move to much lower power and thermal characteristics, optimizations would not be enough. Anyway, perceived performance has little to do with real world performance; most gamers are not pixel counters (don't read that as an offence). I'm sure that once "whatever system" is powerful enough to push 2x MSAA along with a good MLAA implementation at 720p, and can benefit from a performant on-board scaler, that will be a great leveller in regard to "perceived" quality versus whatever PC pushing 2560x1600 with 8x MSAA, etc.

See above. They can be a lot more efficient in implementing technology because they design the hardware to fit their needs and the software is designed to suit the hardware. In the case of PC implementations of stereoscopic 3D you need twice the framerate, which, according to Carmack, translates to roughly 3 times the performance required. However, if you could process both frames simultaneously, your overhead might only be 20-30%, which is an order of magnitude less. The system can be tweaked in such a way that all of the performance shortfalls of 3D, such as fillrate, triangle setup or anything of that nature, are compensated for ahead of time.
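To put very rough numbers on that (the 75/25 split between shared and per-eye work below is invented purely for illustration, not taken from any real profile):

[code]
/* Back-of-the-envelope stereo overhead comparison.
   The workload split is made up for illustration only. */
#include <stdio.h>

int main(void)
{
    double shared   = 0.75;  /* assumed fraction of a mono frame that is view-independent
                                (scene traversal, animation, shadow maps, setup)          */
    double per_view = 0.25;  /* assumed fraction that must be redone for each eye         */

    double naive  = 2.0 * (shared + per_view);   /* render the whole frame twice    */
    double merged = shared + 2.0 * per_view;     /* reuse the view-independent work */

    printf("naive two-pass cost : %.2fx a mono frame\n", naive);   /* 2.00x */
    printf("merged stereo cost  : %.2fx a mono frame\n", merged);  /* 1.25x */
    return 0;
}
[/code]

With that made-up split, the merged approach lands right in the 20-30% overhead ballpark, versus the 2-3x of brute-forcing two full frames.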

That's the bothering part: putting together a Larrabee doesn't seem that easy. To some extent I think Sony might have been in a better situation if they already had an EE/GS V2 at work in the PS3. As Gongo put it, they haven't gotten their hands on something concrete for a decade.
It would be a huge effort, both in cost and man-hours, to play catch-up with the big guys (Nvidia, ATI, IMG) now. But who knows, maybe they plan to amortize the effort across both the PSP2 and the PS4.

Sony has an advantage over Intel in that they already have a captive audience of third party developers plus their own development studios. There are already software pipelines in place which take advantage of the Cell in a graphical sense; this is simply extending that to its logical conclusion.
 
But on a 128-bit bus, even in 2012, the max memory size will be 2GB...

Indeed, about the 128-bit limitation: today we already use notebooks and desktops with 8GB of RAM in high-end configurations, and by 2012/2013 PCs with 16GB or 24GB configurations will probably be common, which would justify giving consoles at least 4GB of RAM/VRAM.
 
Squilliam, actually we pretty much agree. I tried to think about how a PS3 based on an EE/GS evolution would have looked. I opened a thread on the matter; by the way, it's the most unsuccessful B3D thread in ages... :LOL:
Here is the system as I tried to describe it:

It turned out to be something a bit more radical than the Cell/RSX in regard to available resources.
In the PS3 you have these resources:
*General computation: SPUs
*Geometry computation: not-that-programmable, not-that-flexible vertex shaders
*Fragment processing: not-that-programmable, not-that-flexible, not-that-efficient pixel shaders tied to texture units.
As shown by the PS2, by the PS3's SPUs, and in the paper I linked in the aforementioned post, geometry can be handled pretty efficiently by general purpose vector units; that is not as true for fragment processing.
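To give an idea of the kind of work meant here, a minimal sketch (plain scalar C for clarity; a real SPU job would use 4-wide SIMD intrinsics on DMA'd batches, and every name below is made up):

[code]
/* Streaming geometry work of the sort that maps well onto general purpose
   vector units: transform a batch of vertices and flag trivial rejects. */
#include <stddef.h>

typedef struct { float x, y, z, w; } Vec4;
typedef struct { float m[4][4]; }   Mat4;

static Vec4 transform(const Mat4 *m, Vec4 v)
{
    Vec4 r;
    r.x = m->m[0][0]*v.x + m->m[0][1]*v.y + m->m[0][2]*v.z + m->m[0][3]*v.w;
    r.y = m->m[1][0]*v.x + m->m[1][1]*v.y + m->m[1][2]*v.z + m->m[1][3]*v.w;
    r.z = m->m[2][0]*v.x + m->m[2][1]*v.y + m->m[2][2]*v.z + m->m[2][3]*v.w;
    r.w = m->m[3][0]*v.x + m->m[3][1]*v.y + m->m[3][2]*v.z + m->m[3][3]*v.w;
    return r;
}

/* Transform every vertex of a batch into clip space and mark the ones
   behind the eye so later stages can reject whole triangles early. */
static void process_batch(const Mat4 *mvp, const Vec4 *in, Vec4 *out,
                          unsigned char *reject, size_t count)
{
    for (size_t i = 0; i < count; ++i) {
        out[i]    = transform(mvp, in[i]);
        reject[i] = (out[i].w <= 0.0f);  /* trivial cull flag */
    }
}

int main(void)
{
    Mat4 identity = {{{1,0,0,0},{0,1,0,0},{0,0,1,0},{0,0,0,1}}};
    Vec4 verts[3] = {{0,0,0,1},{1,0,0,1},{0,1,0,1}};
    Vec4 out[3];
    unsigned char reject[3];
    process_batch(&identity, verts, out, reject, 3);
    return 0;
}
[/code]

It's branch-light, it streams through memory linearly and every vertex is independent, which is exactly what wide vector units like; fragment processing is harder mainly because of the texture fetches in the middle of it.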

The PS3 as I tried to describe it has a more radical split:
At one end you have pretty much general purpose vector units crunching numbers (swallowing the geometry, by the way).
At the other end, highly efficient, by definition non-programmable, fixed function hardware filling pixels.

I've come to wonder whether, pushed further, this could be a solution to the problem Intel, ATI and Nvidia are facing in their quest to turn the GPU into a general purpose resource for various computations.
Trying to be funny, the inner dialogue of an "ATI/Nvidia/Intel" engineer:

"Those ugly texturing latencies are a pain in the A**"
"I need an insane number of threads to hide them"
"I need more control over my execution units and I want them to do more things"
"Damned thread scheduling, the flow control logic is really biting into my silicon budget"
"Throw more silicon at it"
"Damn, I've spent five times more silicon on the problem and still I'm only halfway to where I wanted to be"
"So many threads, data, data dependencies, etc., it's such a pain in the A**"
"Throw more silicon at it"
"Hmm, we need better programming languages"

OK, as I said, I'm just trying to be funny; I'm not trying to downplay all those really clever guys who provide us with pretty fantastic hardware.
From my outsider POV it looks like, so far, engineers have made the right decisions given the constraints of their silicon budget; programmable, more flexible hardware was most likely the only way to go. Now things are changing and the big players in the market want to bring general computation on GPUs to the next level.
Soon ROPs/RBEs could disappear; Intel wanted to get rid of the rasterizer; the only fixed function hardware still considered worth it is the texture units.
The funny thing, from my outsider POV, is that texturing (not the texture units directly) and the "ugly latencies" associated with it are still what drive current and future GPU designs.

So I have a question, possibly a stupid one, but anyway: could they be wrong?
Could filling and texturing turn fixed function once again?
Now that engineers have billions of transistors to allocate, couldn't they allocate a few hundred million to some fast, efficient fixed function hardware whose sole purpose would be to create (rasterize), fill and texture pixels?

OK, I know it sounds like looking backward, but it would free up the whole design so much. One would no longer need "that many threads"; those ugly latencies would no longer be such a pain in the ass, as they would pretty much be insulated in a tiny part of the chip. So much freedom for chip designers.
Once again I'm a complete outsider, and what I describe may prove to be a complete misunderstanding or nonsense about how shading and texturing are related.
I think of something that would work this way (for graphics), roughly as in the sketch below:
Many general purpose vector units handle all the geometry; when done they send the data to the "pixel pipelines" and move on to something else.
The "pixel pipelines" output filled/textured pixels to VRAM in some raw format.
The GP vector units then process what the pixel pipelines have output.
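Something like this, as a very rough sketch of the data flow only (every type and function here is a hypothetical stand-in, not a real API):

[code]
/* Proposed split: general purpose vector units do geometry and
   post-processing, a small fixed function block does rasterization,
   texturing and fill. Numbers are placeholders. */
#include <stdio.h>

typedef struct { int triangles; } GeometryBatch;  /* chunk of the scene         */
typedef struct { int fragments; } PixelWork;      /* triangles ready to fill    */
typedef struct { int pixels;    } RawTile;        /* raw pixels written to VRAM */

/* Step 1: GP vector units (SPU-like) transform, cull and shade geometry. */
static PixelWork vector_units_process_geometry(GeometryBatch b)
{
    PixelWork w = { b.triangles * 20 };  /* pretend each triangle covers 20 px */
    return w;
}

/* Step 2: fixed function pipes rasterize, texture and fill; texture
   latency stays insulated inside this small block. */
static RawTile fixed_function_fill(PixelWork w)
{
    RawTile t = { w.fragments };
    return t;
}

/* Step 3: the same GP vector units come back to post-process the raw pixels. */
static void vector_units_postprocess(RawTile t)
{
    printf("post-processed %d pixels\n", t.pixels);
}

int main(void)
{
    GeometryBatch batch = { 1000 };
    PixelWork work = vector_units_process_geometry(batch);
    RawTile   tile = fixed_function_fill(work);
    vector_units_postprocess(tile);
    return 0;
}
[/code]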

It would somewhat insulate the latency problem and free the design from many burdens: heavy multi-threading, etc.
I feel like I'm missing something :???:
 
I feel like some people are giggling behind their screens... :LOL:

Anyway, I feel more and more confident in my idea. I don't know the "how to" or its costs, but I'm confident that the engineers who brought us so many generations of performant hardware could pull it off.

From my outsider POV it's more a matter of choice. The promise of general purpose hardware being efficient at more than a handful of tasks while handling graphics and its constraints properly may not come true any time soon. On the other hand, the world wants a lot of computing power for cheap, whether for the greater good (research, etc.) or not (financial madness...). The latter is possible almost now, IMHO.
I feel like "graphics" should no longer be in the driver's seat when designing GPUs (sounds idiotic, I know) if GPUs are set to become GPGPUs or whatever they will be called.

I really think that, now that they are playing with billions of transistors, engineers should look back at the problem and consider insulating "the problem" for the greater good.
 
How effective is an SPU as a vertex shader? A pixel shader? A tessellator?
Does it still need the 256KB LS if used as a shader, or is most of that unused in that case?
 
How effective is an SPU as a vertex shader? A pixel shader? A tessellator?
Does it still need the 256KB LS if used as a shader, or is most of that unused in that case?
It would be interesting to see how they compare to the SIMD arrays present in current GPUs. "Grossly", an SPU could be considered as an 800MHz, 16-wide SIMD unit, which is a quad in ATI's latest designs (transcendental units aside, there are four quads per SIMD array in R8xx and R7xx, so the array can mostly be considered a 64-wide SIMD unit).
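Just putting that rough equivalence in numbers (peak issue rate only, ignoring dual issue, latencies and everything else):

[code]
/* A 3.2 GHz, 4-wide SPU moves the same number of floats per second
   as a hypothetical 800 MHz, 16-wide unit. */
#include <stdio.h>

int main(void)
{
    double spu   = 3.2e9 * 4;    /* 3.2 GHz x 4 floats per cycle  */
    double equiv = 0.8e9 * 16;   /* 800 MHz x 16 floats per cycle */
    printf("SPU        : %.1f Gfloat/s\n", spu   / 1e9);  /* 12.8 */
    printf("equivalent : %.1f Gfloat/s\n", equiv / 1e9);  /* 12.8 */
    return 0;
}
[/code]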
Nao, quite some time ago, said that SPUs were pretty impressive at culling. It looks like quite a few devs use them for vertex processing. I don't know if they are good at tessellation, but Joker454 wrote a de-tessellator for I don't know which sports game. I can't provide figures, but my gut feeling is that they do pretty well, or well enough.
My point was more that you can design vector units without having to rely on heavy multi-threading and the burden associated with it. (I remember heated discussions on the matter and how GPU "threads" actually have nothing to do with CPU "threads".)

My idea is that if you could "hide" the latencies induced by texturing, I see no reason why the same units could not process pixels efficiently too.

In regard to the LS size, looking at Larrabee I don't remember complaints about the size of the L2 cache, which was 256KB per core. People argued that the coherent memory space was a waste, not scalable, etc., but not about the size. Maybe it's not perfect, one might want more or less, but looking at the die space it takes on an SPU I'm not sure going for less would free that much space. Actually it could save external bandwidth and work the way Larrabee was intended to, as a tile-based renderer. Anyway, SPUs are not perfect; the idea is really that vector units with fast serial execution supporting 1 or 2 threads could do the job, and quite some other jobs too, if SPUs are any clue.
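As a quick sanity check of what a 256KB local store buys you as a tile buffer (the bytes-per-pixel figure below is only an assumption, e.g. FP16 RGBA colour plus 32-bit Z):

[code]
/* How big a tile fits in a 256KB local store, assuming 12 bytes per pixel. */
#include <stdio.h>

int main(void)
{
    int ls_bytes        = 256 * 1024;
    int bytes_per_pixel = 8 + 4;                 /* assumed: FP16 RGBA + 32-bit Z */
    int pixels          = ls_bytes / bytes_per_pixel;
    printf("%d pixels fit, e.g. a tile of roughly 128 x %d\n",
           pixels, pixels / 128);                /* 21845 px, ~128 x 170 */
    return 0;
}
[/code]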

I would really like to see insiders actually discuss the matter instead of stating de facto that it goes nowhere. It's a bit like people thinking rasterization can't be parallelized: the guys working on Larrabee did it; we can discuss performance, but they made it. A lot of people didn't think the PS2 would achieve this or that effect, and devs pulled it off.
I'm sure a lot of people in the industry would have a WTF moment, like "we were promised complete freedom some time ago and now we find ourselves with a 'pool' of fixed function resources as big as ten years ago!", forgetting that it would be backed by a lot of efficient general purpose computational power, most likely more easy-going than today's shader arrays. Those same people are also really talented, and I'm confident they would find ways to do what they want and pull things out of the hardware that people would not have thought possible.
It is possibly the same stance with hardware designers: they could feel they are so close to achieving the do-it-all vector accelerator with a minimal amount of fixed function hardware that they don't consider other options (actually they are not responsible for the DirectX/OpenGL specifications, so they are not "free"), but possibly the curve is kind of asymptotic and, while "that close", they never quite make it, or not any time soon.
The cost of designing a pool of fixed function hardware up to date with today's rendering techniques may be high in time, resources and transistors, but once it's done the cost will only go down thanks to Moore's law, and the number of pixels to push is not going to explode any time soon. Moreover, having more fixed function hardware doesn't mean the aforementioned units can't evolve, or even disappear, later on.
 
It's a bit like people thinking rasterization can't be parallelized: the guys working on Larrabee did it
Don't believe the hype; hierarchical (i.e. parallel) rasterization is par for the course in 3D hardware, and Larrabee's specific implementation is very close to what you could find in IMG patents from more than a decade ago.

BTW, I think you are confusing parallelized rasterization with parallel rasterizers... no one did parallel rasterizers since it simply wasn't necessary till recently, and this includes Larrabee as far as we know. NVIDIA is the first in this respect... IMO not so much because it's difficult, but because it wasn't worth the effort before then.
 
Don't believe the hype; hierarchical (i.e. parallel) rasterization is par for the course in 3D hardware, and Larrabee's specific implementation is very close to what you could find in IMG patents from more than a decade ago.

BTW, I think you are confusing parallelized rasterization with parallel rasterizers... no one did parallel rasterizers since it simply wasn't necessary till recently, and this includes Larrabee as far as we know. NVIDIA is the first in this respect... IMO not so much because it's difficult, but because it wasn't worth the effort before then.
I won't dismiss your claim ;)
OK, so if I understand your point, what Larrabee does within a bin is not really that different from what the rasterizer (or rasterizers, for Fermi) does in our GPUs. Intel virtually has one rasterizer per bin (core), hence their claims. OK, I think I got it.

Other than that, what do you think of the idea of a return to more fixed function hardware?
 
Other than that, what do you think of the idea of a return to more fixed function hardware?
It's a bit of a non sequitur with the rest of your post... how about you provide an example of something you think would give a net mm^2 saving in the average use case?
 