Tim Sweeney : "This mess will go away"

Reverend

I was re-reading an article here, specifically this page. My memory regarding this is thin, but I'm wondering what Tim Sweeney actually meant (wondering, because I had meant to ask him back then but never did) when he said:
Tim Sweeney said:
Long-term (over the next 2-5 years) this mess will go away. The moves from DX6 to DX7-8 and DX9 have been marked by radical changes in hardware architecture. To take full advantage, developers have had to radically rearchitect their engines,[...]
My question is about the meaning of the word "mess" in the context of his full reply (on that page). AFAIK the "mess", or problem, was the relevance of new-architecture hardware versus what the public could expect of such hardware through probably the only outlet available to them, i.e. launch-day reviews by media outlets. I can't relate Tim's use of the word "mess" to that, nor to the rest of what he said.
 
Having read it, I think "mess" meant the difficulty of:

What you're trying to do -- evaluate hardware performance in the context of upcoming software -- just happens to be really hard.

And specifically the need to use synthetics to try to get a forward-looking approximation of the worth of a particular video card.

He thinks that as GPU architectures generalize, it becomes just a performance-scaling question.

Thing is, I think there is some truth in that, to a degree... but maybe not enough to be useful, or at least not enough to be dispositive. We'll see if IHVs quit innovating on features. I mean, how does Transparency AA/Adaptive AA fit into that model, for instance?
 
The tradeoffs will then be things like number of pipelines, cache size, number of floating point units, which is much less dramatic than the questions of hardware T&L, 8-bit integer vs 24/32-bit floating point, etc.

I believe this is the mess he was referring to.
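To make the 8-bit integer vs. 24/32-bit floating point contrast concrete, here's a minimal C++ sketch of my own (not from the article) showing how repeated blending drifts in 8-bit fixed point but not in float:

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    // Darken a colour channel by 10% twenty times, once in 8-bit fixed
    // point (pre-DX9 framebuffer math) and once in 32-bit float
    // (DX9-era shader math). The integer path rounds at every step.
    uint8_t c8 = 200;
    float   cf = 200.0f;
    for (int i = 0; i < 20; ++i) {
        c8 = static_cast<uint8_t>(c8 * 9 / 10);  // truncates each pass
        cf = cf * 0.9f;                          // keeps fractional bits
    }
    printf("8-bit result: %u, float result: %.2f\n", (unsigned)c8, cf);
    return 0;
}
```

The accumulated rounding error (21 vs. ~24.3 here) is the kind of thing that made the integer-vs-float question an architectural one, unlike "how many pipelines".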
 
The mess at that time was how hard it was to really gauge the future-proof aspects of 3D hardware over the 18-to-24-month window that many end users own a card, IMHO!
 
<snip>

A major design goal of Unreal Engine 3 is that designers should never, ever have to think about "fallback" shaders, as Unreal Engine 2 and past mixed-generation DirectX6/7/8/9 engines relied on. We support everything everywhere, and use new hardware features like PS3.0 to implement optimizations: reducing the number of rendering passes to implement an effect, to reduce the number of SetRenderTarget operations needed by performing blending in-place, and so on. Artists create an effect, and it's up to the engine and runtime to figure out how to most efficiently render it faithfully on a given hardware architecture.

http://www.beyond3d.com/interviews/sweeneyue3/
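As a hedged illustration of what Sweeney is describing (the struct and function names below are hypothetical, not UE3 code): the artist-authored effect stays the same, and the runtime picks the cheapest path the hardware supports, rather than falling back to a hand-written simpler shader.

```cpp
#include <cstdio>

// Hypothetical capability flags; a real engine would query D3D caps bits
// or GL extensions instead.
struct GpuCaps {
    bool ps30;          // Pixel Shader 3.0 available?
    bool blendInPlace;  // can blend without an extra SetRenderTarget?
};

// One artist-authored effect; the runtime decides how to render it.
void renderEffect(const GpuCaps& caps) {
    if (caps.ps30 && caps.blendInPlace) {
        // Collapse all layers into a single long shader pass.
        printf("1 pass, 0 extra SetRenderTarget calls\n");
    } else {
        // Same visual result on older DX9 hardware, just more passes --
        // never a separate, lower-quality "fallback" version of the effect.
        printf("3 passes, 2 extra SetRenderTarget calls\n");
    }
}

int main() {
    renderEffect({true, true});    // PS3.0-class card
    renderEffect({false, false});  // baseline DX9 card
}
```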

Now is a great time because 3D hardware is coming out of the dark ages of being toy game acceleration technology, and morphing into highly-parallel general computing technology in its own right. The last hurdle is that the GPU vendors need to get out of the mindset of "how many shader instructions should we limit our card to?" and aim to create true Turing-complete computing devices.

We're already almost there. You just need to stop treating your 1024 shader instruction limit as a hardcoded limit, and redefine it as a 1024-instruction cache of instructions stored in main memory. Then my 1023-instruction shaders will run at full performance, and my 5000-instruction shaders might run much more slowly, but at least they will run and not give you an error or corrupt rendering data. You need to stop looking at video memory as a fixed-size resource, and integrate it seamlessly into the virtual memory page hierarchy that's existed in the computing world for more than 30 years. The GPU vendors need to overcome some hard technical problems and also some mental blocks.

In the long run, what will define a GPU -- as distinct from a CPU -- is its ability to process a large number of independent data streams (be they pixels or vertices or something completely arbitrary) in parallel, given guarantees that all input data (such as textures or vertex streams) are constant for the duration of their processing, and thus free of the kind of data hazards that force CPU algorithms to single-thread. There will also be a very different set of assumptions about GPU performance -- that floating point is probably much faster than a CPU, that mispredicted branches are probably much slower, and that cache misses are probably much more expensive.

http://www.beyond3d.com/interviews/sweeney04/index.php?p=3
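The "independent data streams" point can be shown with an ordinary parallel loop: every output element is computed only from read-only inputs, so there are no data hazards and the work can be spread across any number of units. A minimal C++17 sketch of my own (not from the interview):

```cpp
#include <algorithm>
#include <cstdio>
#include <execution>
#include <vector>

int main() {
    // Stand-ins for a texture or vertex stream: constant for the whole pass.
    const std::vector<float> input(1 << 20, 0.5f);
    std::vector<float> output(input.size());

    // Each output depends only on its own input element, so the runtime is
    // free to process the elements in parallel -- the GPU-style model
    // Sweeney describes, minus the hardware.
    std::transform(std::execution::par, input.begin(), input.end(),
                   output.begin(),
                   [](float x) { return x * x + 0.25f; });

    printf("output[0] = %f\n", output[0]);
}
```

(On GCC/Clang this needs -std=c++17 and a parallel-algorithms backend such as TBB; the point is the hazard-free data-parallel shape, not the specific library.)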

The really interesting things in shader land will start to happen in the late 2005 to early 2006 timeframe. That's when there will be a good business case for shipping DirectX9-focused games.

http://www.beyond3d.com/interviews/sweeney04/index.php?p=4
 
Thanks Ailuros,
You need to stop looking at video memory as a fixed-size resource, and integrate it seamlessly into the virtual memory page hierarchy that's existed in the computing world for more than 30 years. The GPU vendors need to overcome some hard technical problems and also some mental blocks.

In the long run, what will define a GPU -- as distinct from a CPU -- is its ability to process a large number of independent data streams (be they pixels or vertices or something completely arbitrary) in parallel, given guarantees that all input data (such as textures or vertex streams) are constant for the duration of their processing, and thus free of the kind of data hazards that force CPU algorithms to single-thread. There will also be a very different set of assumptions about GPU performance -- that floating point is probably much faster than a CPU, that mispredicted branches are probably much slower, and that cache misses are probably much more expensive.
Maybe in the future there will be no difference between CPU and GPU in home computing at all.
 
This is from Carmack some time ago: http://www.bluesnews.com/cgi-bin/finger.pl?id=1&time=20000429013039
This is something I have been preaching for a couple years, but I
finally got around to setting all the issues down in writing.

First, the statement:

Virtualized video card local memory is The Right Thing.

Now, the argument (and a whole bunch of tertiary information):

If you had all the texture density in the world, how much texture
memory would be needed on each frame?
...
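If I remember the rest of that .plan right, his answer is a back-of-the-envelope bound: with mipmapping, each screen pixel only ends up sampling on the order of one unique texel, so the texture working set per frame stays small no matter how much texture data exists in total. A rough sketch of that arithmetic, with my own era-appropriate guesses rather than Carmack's exact figures:

```cpp
#include <cstdio>

int main() {
    // With mipmapping, each on-screen pixel touches roughly one unique
    // texel; overdraw and the mip chain add modest constant factors.
    const double pixels        = 1024.0 * 768.0;  // typical 2000-era resolution
    const double texelsPerPix  = 2.0;             // overdraw / filtering slack
    const double mipOverhead   = 4.0 / 3.0;       // full mip chain adds ~33%
    const double bytesPerTexel = 4.0;             // 32-bit texels

    const double bytesPerFrame =
        pixels * texelsPerPix * mipOverhead * bytesPerTexel;

    printf("~%.1f MB of texture actually touched per frame\n",
           bytesPerFrame / (1024.0 * 1024.0));
    // A handful of megabytes per frame -- which is why paging texture data
    // in on demand ("virtualized video card local memory") is workable.
    return 0;
}
```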
 
I don't think I've read Rev's interviews more often or more carefully than he has; I rather think he's asking on purpose (as usual) ;)
 