Larrabee delayed to 2011?

So can a GPU that is even more general-purpose than Fermi work better, or will it just hold real-time graphics back?
What's this weird differentiation you're making between "general purpose" and "real time graphics"? Real-time graphics is just a throughput workload now that happens to generate an image... most of the new generality and "compute" features in these modern cards are just as useful for graphics as anything else, and in a lot of cases let you do things *more* efficiently than you could previously.

The reality is that "graphics" isn't some magical thing that you can devote more transistors to nowadays... graphics is what an application may choose to do with the general-purpose power, and there are very few fixed-function pieces specific to graphics left. Basically just the texture units (which are actually somewhat useful even in HPC), the rasterizer and perhaps the tessellator, and even the latter two are negotiable between hardware and software.
 
A prime example is HDAO in BattleForge: done through compute shaders it's faster than through "traditional" pixel shaders.
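For what it's worth, the win there mostly comes from thread-group shared memory: a compute-style pass can stage a tile of the depth buffer on-chip once and then take all of its AO samples from that, instead of re-fetching through the texture path for every pixel. A rough CUDA-flavoured sketch of the idea (not BattleForge's actual HDAO; the tile size, radius and occlusion test here are made up purely for illustration):

Code:
// Launch with a 16x16 thread block, one thread per output pixel.
#define TILE   16                    // threads per block in x and y (illustrative)
#define RADIUS  4                    // sampling radius in pixels (illustrative)
#define APRON  (TILE + 2 * RADIUS)   // tile plus the border the samples need

__global__ void ao_tiled(const float* depth, float* ao, int width, int height)
{
    // Depth tile (plus apron) staged once into on-chip shared memory.
    __shared__ float tile[APRON][APRON];

    // Cooperative load: each thread copies a few texels of the tile.
    for (int y = threadIdx.y; y < APRON; y += TILE)
        for (int x = threadIdx.x; x < APRON; x += TILE) {
            int sx = (int)(blockIdx.x * TILE) + x - RADIUS;
            int sy = (int)(blockIdx.y * TILE) + y - RADIUS;
            sx = min(max(sx, 0), width - 1);    // clamp to the screen edges
            sy = min(max(sy, 0), height - 1);
            tile[y][x] = depth[sy * width + sx];
        }
    __syncthreads();

    int gx = blockIdx.x * TILE + threadIdx.x;
    int gy = blockIdx.y * TILE + threadIdx.y;
    if (gx >= width || gy >= height) return;

    // Crude stand-in for the real occlusion term: count neighbours that are
    // closer to the camera. Every read below hits shared memory, not DRAM.
    float center = tile[threadIdx.y + RADIUS][threadIdx.x + RADIUS];
    float occ = 0.0f;
    for (int dy = -RADIUS; dy <= RADIUS; ++dy)
        for (int dx = -RADIUS; dx <= RADIUS; ++dx)
            occ += (tile[threadIdx.y + RADIUS + dy][threadIdx.x + RADIUS + dx] < center) ? 1.0f : 0.0f;

    ao[gy * width + gx] = 1.0f - occ / ((2 * RADIUS + 1) * (2 * RADIUS + 1));
}

A pixel shader doing the same gather has to lean entirely on the texture cache, so the reuse you get for free from the shared tile is exactly the kind of thing the "compute" path buys you.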

And the CUDA stuff in Just Cause 2 certainly results in some impressive graphics improvements, although I'm not sure it results in any speed increase.

Regards,
SB
 
Why do I keep hearing all this console vs. PC speculation and all this talk about the PC not having ROI, etc.?

Seriously now... when has the PC ever sold more than consoles? Going back to the NES and even the Atari, consoles sold more units in both hardware and software. The same was true in the SNES/Genesis and PlayStation/N64/Saturn eras, and so on...

The PC is not doing worse; it depends on the games you want to play. I don't see any of the big-name PC developers except Epic leaving (and for all the excuses they throw around, their latest games for the PC simply suck...).
 
Why do I keep hearing all this console vs. PC speculation and all this talk about the PC not having ROI, etc.?

Seriously now... when has the PC ever sold more than consoles? Going back to the NES and even the Atari, consoles sold more units in both hardware and software. The same was true in the SNES/Genesis and PlayStation/N64/Saturn eras, and so on...

The PC is not doing worse; it depends on the games you want to play. I don't see any of the big-name PC developers except Epic leaving (and for all the excuses they throw around, their latest games for the PC simply suck...).

ROI was also higher for PC back then, and development costs (thus the initial investment) were lower. Fast forward to today, and ROI on PC isn't terrific, while consoles still have decent ROI and higher potential ROI. Whether a person believes it or not is irrelevant. The publishers and their accountants see the trends of the past few years, and thus more and more development funds are earmarked for consoles rather than PC.

And this is the wrong thread to go into all that; it's been beaten to death many times in both the console and PC forums.

Regards,
SB
 
Sure, although I think the reality is that you need to develop for the lowest (reasonable) common denominator right now, other than "simple" differentiating features. Whether the current lower-end hardware is in the console or the PC doesn't really change too much... everyone is going multi-platform nowadays and you simply need to keep the majority of your engine tech portable, *especially* where it affects the art/tools pipeline.

That's simply the market reality... one could argue this devalues non-portable hardware features - and it clearly does - but in the grand scheme of things all of the platforms are similar enough that you can still make games that look very, very similar across all of the SKUs. I personally choose to look at this as a positive and an indication of the cleverness of the software people involved, but I'm obviously biased :)

I think this at most delays the inevitable shift to even more general-purpose and faster hardware, but to be honest, it hasn't happened yet. PC graphics hardware keeps moving as fast as ever and seems to be doing just fine!
 
ROI was also higher for PC back then, and development costs (thus the initial investment) were lower. Fast forward to today, and ROI on PC isn't terrific, while consoles still have decent ROI and higher potential ROI. Whether a person believes it or not is irrelevant. The publishers and their accountants see the trends of the past few years, and thus more and more development funds are earmarked for consoles rather than PC.

How come Blizzard's and Valve's accountants disagree so much? Certainly, without inside information we are just speculating. I will admit that both companies are not really pushing graphics detail much, but, for example, Left 4 Dead 2 on a triple-monitor setup actually pushes a 5800-class GPU.

There is also the fact that hardware from both nVidia and ATI is generating profits, and this is undeniably connected to PC game sales. The moment I see either ATI or nVidia leave the graphics market is when I will start to get concerned.

And this is the wrong thread to go into all that; it's been beaten to death many times in both the console and PC forums.

Regards,
SB

It's off topic, sure, but not as much as it seems. Graphics performance is a big part of this forum and the discussion is about whether or not the consoles, indirectly by lowering the baseline for development, are affecting the need for fast hardware on the PC.

I think the need for performance is there and the right balance between flexibility and raw power may indeed be met by a many (>50) core chip. If they design such a chip, the PC industry (be it pro or for gaming) will provide the demand.
 
OT: Actually, what's bothering PC gamers is that the 360 is five years old with no successor in sight. The hardware is now out of date even against low/mid-range PC GPUs. Things will get even worse next year when AMD and Intel push out their APU/Fusion processors.

Technologically it's out of date, but performance-wise it's still better than the low end. Performance of integrated and budget parts still sucks.
 
Well, one last round of comments for me to avoid taking this thread too far off topic...

How come Blizzard's and Valve's accountants disagree so much? Certainly, without inside information we are just speculating. I will admit that both companies are not really pushing graphics detail much, but, for example, Left 4 Dead 2 on a triple-monitor setup actually pushes a 5800-class GPU.

Sure, but Valve is also supported by being the premiere online direct-download outlet for a variety of publishers on PC. That gives them a unique opportunity to avoid losing their independence to the spiral of lower ROI + increased dev costs. You'll also notice they've increased their efforts on consoles while at the same time taking the plunge into the Apple Mac market.

Blizzard could see the writing on the wall, even with the cash cow that is WoW, and merged with Activision, which has a large stake in consoles.

There is also the fact that hardware from both nVidia and ATI is generating profits, and this is undeniably connected to PC game sales. The moment I see either ATI or nVidia leave the graphics market is when I will start to get concerned.

Graphics hardware profitability is completely unaffected by piracy, which is yet another offshoot that's even more off-topic, though it does tie into publishers moving more dollars to consoles.

It's off topic, sure, but not as much as it seems. Graphics performance is a big part of this forum and the discussion is about whether or not the consoles, indirectly by lowering the baseline for development, are affecting the need for fast hardware on the PC.

I think the need for performance is there and the right balance between flexibility and raw power may indeed be met by a many (>50) core chip. If they design such a chip, the PC industry (be it pro or for gaming) will provide the demand.

And yet, interestingly, it's not the graphically advanced games on PC that do well enough to justify a PC-only strategy (The Sims, WoW...). Blizzard will continue targeting graphics hardware a couple of generations old in order to appeal to the largest base possible while still delivering a somewhat modern-looking game.

A lot (although not all) of the new wave of graphically advanced PC games are either by new startups or by studios going multi-platform to help fund the R&D for those engines, in the hope they'll be able to recoup the costs and perhaps get a decent ROI.

As I think Andrew Lauritzen was somewhat hinting at above, going multiplatform gives the best chance of getting a positive ROI from the development of expensive engines and technologies. Throw in the additional expense of art development for these increasingly complex graphical games and you need a broader base to recoup your investment.

I don't think new techniques using tessellation are going to make things cheaper either, so as time goes on, costs will continue to rise.

And at the bottom of the whole pile, you have a virtually immutable price point of ~50 USD that PC games have been stuck at since the mid-90s; interestingly enough, PC games prior to that were trending higher in price (up to 80+ USD). We can all thank MS for releasing Win95 and DirectX, leading to the explosion of consumer computers and thus being able to go for volume rather than margins. And even then we saw the death of many good developers and publishers during the heyday of PC gaming. :)

Bleh, I'm starting to babble and go way off topic. Anyway, you can have the last word; I'll stop with this offshoot in this thread here.

Regards,
SB
 
Sure, but Valve is also supported by being the premiere online direct-download outlet for a variety of publishers on PC. That gives them a unique opportunity to avoid losing their independence to the spiral of lower ROI + increased dev costs. You'll also notice they've increased their efforts on consoles while at the same time taking the plunge into the Apple Mac market.

Yes, but Steam is really a side effect of them wanting a delivery system for their own games more than anything else. It's grown a lot, but it probably isn't a massive cash cow. The Mac market was relatively easy for them to enter.

Blizzard could see the writing on the wall, even with the cash cow that is WoW, and merged with Activision, which has a large stake in consoles.

What has Blizzard released on consoles? Also, Blizzard hasn't been an independent company since '94. Vivendi merged Blizzard with Activision partly as a way to monetize the division into a holding. By doing so they gained a majority stake in both companies. Another way to think of it is that they BOUGHT Activision and then spun the merged entity off as a separate company. Blizzard so far has been solely focused on PCs.
 
Technologically it's out of date, but performance-wise it's still better than the low end. Performance of integrated and budget parts still sucks.
That's why I said mid/low, not low end; think of a product like the HD 5570, which is cheap (~75€) and clearly outperforms the console GPUs.

Next year, depending on the cost of the "Fusion" parts, things could get worse: Intel demonstrated ME2 and AMD AVP2, and both seem to run at settings not available on this generation of consoles.
 
So can a GPU that is even more general-purpose than Fermi work better, or will it just hold real-time graphics back?

For the future we talk about using the GPU for everything and leaving the CPU to become the manager again. Is that a good thing for real-time graphics, though? Wouldn't it still be better to spend every one of those six billion transistors on doing graphics instead of getting sidetracked into the general-purpose realm?

I see what you are saying about Fermi and how it has dedicated a lot of space to functions that do not directly benefit the rendering of pixels on the screen.

At 3 billion transistors, perhaps devoting more resources to TUs, ROPs and more SPs would have given Fermi higher performance, rather than concentrating on things like programmability.

We could say the same for the 5870 - if we just increased the die size to 500mm², added more SP units, gave it a 384-bit memory controller and then let it basically catch on fire, we would have had more resources for rendering pixels.

It is still a learning curve, and NVIDIA may find itself ousted from the HPC market as other players develop technologies that compete with, and beat, the approach of adding GPUs for massively parallel tasks...

Larrabee, for its part, was trying to kill two birds with one stone as well, just like NVIDIA, and it has failed, so there is an argument for what you are saying. ATI has not jumped on the bandwagon, but as a natural result of the extra processing power afforded by Moore's Law and the push for more realistic visuals (which is accomplished by more open standards and programmability), the side effect is GPGPU performance as well - as Andrew alludes to.

However, developers still need to step up and use this extra processing power rather than going for the low-hanging fruit - e.g. ultra-high resolutions and AA on their own, on top of a three-year-old, regurgitated graphics engine.
 
At 3 billion transistors, perhaps devoting more resources to TUs, ROPs and more SPs would have given Fermi higher performance, rather than concentrating on things like programmability.
It's not like Fermi is light-years ahead of Evergreen in terms of programmability; actually I would say they are pretty close (as exposed by current APIs). For instance, Fermi's semi-coherent r/w caches don't make it more programmable.
 
At 3 billion transistors, perhaps devoting more resources to TUs, ROPs and more SPs would have given Fermi higher performance, rather than concentrating on things like programmability.
I don't think it's quite that simple anymore. Believe it or not, we're getting near the limit on how much "free" parallelism can be extracted from a typical forward-rendered graphics pipeline. GPUs today mortgage this *heavily*, not only across a very wide SIMD machine but also to cover often excessively long memory and instruction latencies. It has worked so far, but if resolutions continue to plateau we're nearing the limit of what a GPU can do with a conventional renderer. You'll notice that old games plateau in performance at some point, partially because of CPU bottlenecks (non-multithreaded especially) and partially due to this effect.
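To put a rough number on how heavily that latency gets mortgaged (back-of-the-envelope only, with illustrative figures rather than any particular chip's specs): by Little's law, work in flight ≈ latency × issue rate, so covering a ~400-cycle memory latency while issuing one load per cycle per SIMD means keeping roughly 400 loads - and the threads that own them - resident on that SIMD at all times. A fixed-resolution forward renderer eventually stops supplying enough independent pixels to feed that, which is exactly where the plateau comes from.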

Larrabee, for its part, was trying to kill two birds with one stone as well, just like NVIDIA, and it has failed, so there is an argument for what you are saying.
I'd say the most we know is that it might still be too early for such an architecture. I definitely would *not* say that more general architectures have been proven to have "failed", especially considering all of the features that look to be required in the future to make these things reasonably programmable (virtual memory, at-least-semi-coherent caches, etc). And while Fermi may not be the best trade-off for current graphics workloads, that may change in the next few years. The one thing that is clear is that ATI has an exceptionally good architecture at the moment :)

However, developers still need to step up and use this extra processing power rather than going for the low-hanging fruit - e.g. ultra-high resolutions and AA on their own, on top of a three-year-old, regurgitated graphics engine.
Sure, but many actually are doing this to my delight. Some because they have to (on PS3 especially... many of the things they do apply straightforwardly to modern GPU programming models) and some because they want to (there are always the "enthusiast developers"!).

It's not like Fermi is light-years ahead of Evergreen in terms of programmability; actually I would say they are pretty close (as exposed by current APIs). For instance, Fermi's semi-coherent r/w caches don't make it more programmable.
Agreed. Fermi has some neat tricks, no doubt, but I wouldn't say there's a huge difference in programmability right now. I have yet to play with function pointers in CUDA, though, which definitely provide something unique, but I don't anticipate amazing performance given how much current GPUs rely on static register analysis to maintain efficiency.
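For anyone curious what that looks like, here's a minimal sketch of device-side function pointers as Fermi-class hardware (compute capability 2.0+) exposes them in CUDA C - the operation names are made up purely for illustration. Because the call target is only known at run time, the compiler can't inline through it or do its usual static register allocation across the call, which is exactly the performance concern:

Code:
// Requires sm_20 (Fermi) or later; compile with e.g. nvcc -arch=sm_20.
typedef float (*op_t)(float, float);

__device__ float add_op(float a, float b) { return a + b; }
__device__ float mul_op(float a, float b) { return a * b; }

__global__ void apply(const float* x, const float* y, float* out, int n, int which)
{
    // The operation is chosen through a pointer at run time, so the compiler
    // cannot inline it into the kernel body.
    op_t op = (which == 0) ? add_op : mul_op;

    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = op(x[i], y[i]);
}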
 
It's not like Fermi is light-years ahead of Evergreen in terms of programmability; actually I would say they are pretty close (as exposed by current APIs). For instance, Fermi's semi-coherent r/w caches don't make it more programmable.

Well, from all I've read and heard, Fermi's compute performance is way ahead of Cypress right now. Not to mention NVIDIA's tools and support are superior as well. Case in point: Fermi is already in use in the 2nd-fastest supercomputer in the world. ATI hasn't even released a FireStream version of Cypress yet. Maybe ATI will improve with the next gen; until then Fermi is better.
 
Well, from all I've read and heard, Fermi's compute performance is way ahead of Cypress right now.
Where did you "read" this? NVIDIA marketing? :) In the stuff I've done so far they've traded blows and I actually find Cypress' performance more predictable (this is a good thing). Don't get me wrong - they're both good - but that's precisely why I'd classify neither as "way ahead". Claims like that are pure marketing, nothing more.

I'd give NVIDIA the edge in tools and support - particularly once Nexus is available (hopefully publicly and freely). Conversely, I'd give ATI's drivers the edge right now.

Long term some of the features in Fermi are definitely going to be useful and rolled into APIs and such. However that does not imply that their architecture is the right place to start necessarily (same with Larrabee). It may be, but maybe putting the same features into ATI's architecture might be better overall... we'll have to wait and see.
 
What's this weird differentiation you're making between "general purpose" and "real time graphics"? Real-time graphics is just a throughput workload now that happens to generate an image... most of the new generality and "compute" features in these modern cards are just as useful for graphics as anything else, and in a lot of cases let you do things *more* efficiently than you could previously.

Yes, this generality is an expense; is it a good trade-off? The more general your design, the less throughput: the typical CPU vs. GPU argument. The counter-argument is always the algorithmic efficiency that generality provides. But have GPUs gone too far into generality without having enough throughput? See Larrabee, for example: Intel axed it because it is not competitive. Fermi has this problem too, with more transistors that don't translate into much more performance.

I mean, an algorithm can only be so efficient, so did something like Larrabee, Cell, or even Fermi get carried away in the pursuit of algorithmic efficiency and overlook raw throughput?
 
Well, from all I've read and heard, Fermi's compute performance is way ahead of Cypress right now. Not to mention NVIDIA's tools and support are superior as well. Case in point: Fermi is already in use in the 2nd-fastest supercomputer in the world. ATI hasn't even released a FireStream version of Cypress yet. Maybe ATI will improve with the next gen; until then Fermi is better.

nAo was talking about programmability. You're talking about performance. "Better" usually applies to things on the same scale.
 
But have GPUs gone too far into generality without having enough throughput?
I think the answer here is a definite no. We have a ridiculous number of flops/pixel, and usually memory and data access patterns are the bottlenecks nowadays. We've gone about as far as we reasonably can with "brute force" regular algorithms in a lot of cases and need to start working with some irregularity now. Handling these cases requires more general hardware (dynamic scheduling/dispatch, coherent caches, etc.) and even Fermi isn't enough yet.
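Just to put a number on "ridiculous" (peak-rate arithmetic only, which no real shader ever hits): Cypress' ~2.72 TFLOPS of single precision divided across a 1920x1200 display at 60 Hz works out to 2.72e12 / (1920 × 1200 × 60) ≈ 20,000 flops available per pixel per frame - and yet frames still spend much of their time waiting on memory.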

So yes, there will always be a trade-off (and texture sampling seems to be a clear case where having fixed-function hardware is enough of a win to justify it sitting idle in some cases), but the push lately is mostly towards generality, which is the right direction IMHO. We need to start expressing fundamentally more efficient (and often irregular) algorithms, not keep pushing the brute-force ones which scale poorly.
 
Well, from all I've read and heard, Fermi's compute performance is way ahead of Cypress right now. Not to mention NVIDIA's tools and support are superior as well. Case in point: Fermi is already in use in the 2nd-fastest supercomputer in the world. ATI hasn't even released a FireStream version of Cypress yet. Maybe ATI will improve with the next gen; until then Fermi is better.

FYI:
New FireStream cards are planned for release later this month.

Source:
J Fruehe
Future silicon advances, including our next generation of AMD FireStream™ solutions, planned for released later this month, will go a long way towards making this a reality.
 
Well, from all I've read and heard, Fermi's compute performance is way ahead of Cypress right now. Not to mention NVIDIA's tools and support are superior as well. Case in point: Fermi is already in use in the 2nd-fastest supercomputer in the world. ATI hasn't even released a FireStream version of Cypress yet. Maybe ATI will improve with the next gen; until then Fermi is better.
Perhaps you might want to re-read my comment; I haven't written a word about Fermi's performance, just about its programmability. Performance-wise, Fermi had better be faster than Cypress, given that it is ~50% larger than Cypress, has more memory BW and draws more power.
 