new demo(s) coming that put Double Cross (Ruby) to shame?

http://www.theinquirer.net/?article=19773

Pixel Shader 2.0b is more than enough for everything

ATI Shader daze

By Fuad Abazovic in Londinium: Friday 19 November 2004, 09:52
ATI SENT A CLEAR message to the press invited to its London "Shader Day" event. Richard Huddy, the firm's head of European developer relations, told us all that you can shade everything you need to with the 2.0b shader.

He made it clear that the most complex shader program in Half-Life 2 is 90 instructions long. Shader model 2.0b is limited to 768 instructions, which is more than enough.

Shader model 3.0, heavily promoted by Nvidia, is limited to 65,536 instructions. Crytek's most complex shader, which it uses to create the incredible-looking water in Far Cry, is 50 instructions long.

Crytek CEO and key Far Cry person Cevat Yerli showed us a very interesting presentation and talked about the firm's future plans and the current Shader situation. He presented a new demo that Crytek has delivered to ATI. It's based on the Far Cry 1.3 engine and I have to admit it looked very impressive and puts the Ruby demo to shame. I guess that ATI will use this demo to highlight the potential of the new X850 cards that are scheduled for launch on the first day of December.

The Crytek folk answered quite a few questions and we were very intrigued to see motion blur used in the demo. The demo worked well, made sugary use of sweet eye candy and didn't kill the card's performance. In the demo they showed a very impressive Shader that is only 50 lines long. Shader model 2.0b supports Shaders up to 768 instructions long.

id Software lead programmer Robert A. Duffy showed up and gave a nice little speech about the firm's flagship game Doom 3 and told us chaps that you really need 512MB graphics cards to see the uncompressed textures and real images.

I guess those cards will arrive before the summer. We have to say that this guy definitely lives in the clouds. He spent two minutes talking to the press and avoided talking about most of the significant stuff. He didn't talk about Shaders, as Doom 3 uses OpenGL and that API uses different mechanisms to create all these cool effects.

You could almost smell ATI's position on Shader Model 3.0 and even if the company declines to talk about it, it looks as though Nvidia will do the hard work persuading people to use it. The "Fudo" chip will have these Shaders.

Still, this technology is Q2 2005 stuff so ATI hasn't even started to talk about it. I guess that it will look cool in AMR (ATI's multiple rendering marchitecture), its version of SLI. µ

Images


http://gamesfiles.giga.de/fotostories/bilderpool/109760_1100786639.jpg
http://gamesfiles.giga.de/fotostories/bilderpool/109760_1100786644.jpg
http://gamesfiles.giga.de/fotostories/bilderpool/109760_1100786658.jpg
http://gamesfiles.giga.de/fotostories/bilderpool/109760_1100786663.jpg
http://gamesfiles.giga.de/fotostories/bilderpool/109760_1100786668.jpg
http://gamesfiles.giga.de/fotostories/bilderpool/109760_1100786672.jpg
http://gamesfiles.giga.de/fotostories/bilderpool/109760_1100786679.jpg
http://gamesfiles.giga.de/fotostories/bilderpool/109760_1100786684.jpg
http://gamesfiles.giga.de/fotostories/bilderpool/109760_1100786691.jpg
http://gamesfiles.giga.de/fotostories/bilderpool/109760_1100786696.jpg
http://gamesfiles.giga.de/fotostories/bilderpool/109760_1100786701.jpg
http://gamesfiles.giga.de/fotostories/bilderpool/109760_1100786711.jpg
http://gamesfiles.giga.de/fotostories/bilderpool/109760_1100786736.jpg
http://gamesfiles.giga.de/fotostories/bilderpool/109760_1100786743.jpg
http://gamesfiles.giga.de/fotostories/bilderpool/109760_1100786748.jpg
http://gamesfiles.giga.de/fotostories/bilderpool/109760_1100786813.jpg
http://gamesfiles.giga.de/fotostories/bilderpool/109760_1100786827.jpg
http://gamesfiles.giga.de/fotostories/bilderpool/109760_1100786834.jpg
http://gamesfiles.giga.de/fotostories/bilderpool/109760_1100786839.jpg
http://gamesfiles.giga.de/fotostories/bilderpool/109760_1100786881.jpg
http://gamesfiles.giga.de/fotostories/bilderpool/109760_1100786889.jpg
http://gamesfiles.giga.de/fotostories/bilderpool/109760_1100786895.jpg
http://gamesfiles.giga.de/fotostories/bilderpool/109760_1100787052.jpg
http://gamesfiles.giga.de/fotostories/bilderpool/109760_1100787057.jpg
http://gamesfiles.giga.de/fotostories/bilderpool/109760_1100787061.jpg
http://gamesfiles.giga.de/fotostories/bilderpool/109760_1100787066.jpg
http://gamesfiles.giga.de/fotostories/bilderpool/109760_1100787070.jpg
http://gamesfiles.giga.de/fotostories/bilderpool/109760_1100787080.jpg
http://gamesfiles.giga.de/fotostories/bilderpool/109760_1100787092.jpg
http://gamesfiles.giga.de/fotostories/bilderpool/109760_1100787098.jpg
http://gamesfiles.giga.de/fotostories/bilderpool/109760_1100787107.jpg
http://gamesfiles.giga.de/fotostories/bilderpool/109760_1100787112.jpg
http://gamesfiles.giga.de/fotostories/bilderpool/109760_1100787116.jpg
http://gamesfiles.giga.de/fotostories/bilderpool/109760_1100787121.jpg
http://gamesfiles.giga.de/fotostories/bilderpool/109760_1100787125.jpg
http://gamesfiles.giga.de/fotostories/bilderpool/109760_1100787129.jpg
 
Bah. There's more to pixel shader versions than instruction count. Way more.

And I'd rather be selling PS1.4 than PS2.0b. I can't imagine developers are overly excited about supporting that shader version.
 
Inane_Dork said:
Bah. There's more to pixel shader versions than instruction count. Way more.

And I'd rather be selling PS1.4 than PS2.0b. I can't imagine developers are overly excited about supporting that shader version.

PS 1.4 doesn't have floating-point precision, does it? Why would you find it more favorable than PS2.0b?
 
Inane_Dork said:
Bah. There's more to pixel shader versions than instruction count. Way more.

And I'd rather be selling PS1.4 than PS2.0b. I can't imagine developers are overly excited about supporting that shader version.
PS2.0b is ATI's attempt to counter NVIDIA's PS3.0 capabilities for the time being - to tide developers over until R520. To a degree, ATI is basically trying to tell developers that the majority of the things you can do on NVIDIA's GeForce 6 series cards, you can do on ATI's R4XX cards using PS2.0b...
 
That's such idiotic bullshit. There's much, much more to shader model 3.0 than simply more instructions.
 
Re: new demo(s) coming that put Double Cross (Ruby) to shame

www.theinquirer.net said:
He made it clear that the most complex shader program in Half-Life 2 is 90 instructions long. Shader model 2.0b is limited to 768 instructions, which is more than enough.

"The average user is never going to need more than 768kb of RAM." - Bill Gates, 20-25 years ago
 
Chal,
Don't you think ATi knows that? :p What you may need to consider is that they're concerned with what will be most useful at a particular point in time, not what will look best on a marketing spec sheet.

When games today, soon 2 1/2 years after DX9 premiered, still only offer spotty shader support even in showcase titles, there's little reason for ATi to re-engineer their hardware to take advantage of SM3.0.
 
Re: new demo(s) coming that put Double Cross (Ruby) to shame

cloudscapes said:
www.theinquirer.net said:
He made it clear that the most complex shader program in Half-Life 2 is 90 instructions long. Shader model 2.0b is limited to 768 instructions, which is more than enough.

"The average user is never going to need more than 768kb of RAM." - Bill Gates, 20-25 years ago

Heh, it was 640K, and it's almost certainly apocryphal.
 
Cat said:
My biggest gripe is ATi's lack of texture indirections.

Really? First of all, it is not a LACK but a LIMIT, and that's a bit of a difference :). One can easily write a shader that goes beyond the four allowed levels of indirection, but so far I have not seen a real-world shader that could not compile because of that limitation.
Probably the general idea behind that "ps_2_b is enough" is that developers now use HLSL exclusively, and incidentally all the shaders they write simply compile for the ps_2_b profile - and they will keep compiling for quite a while.
ps 3.0 shaders compiled from the same source might be a little more efficient (thanks to dynamic branching, arbitrary swizzles, the abs modifier, maybe a few more things), but that's about all - "ps3 only" shaders are far away.
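frost_add's point about one HLSL source serving both profiles can be sketched roughly like this (a hypothetical lighting shader, not from any game mentioned here; the fxc invocations in the comments assume the DirectX 9 SDK compiler):

```hlsl
// Hypothetical shader to illustrate multi-profile compilation.
// The same source compiles for each target with the DX9 SDK compiler:
//   fxc /T ps_2_b /E main lighting.hlsl
//   fxc /T ps_3_0 /E main lighting.hlsl
// The ps_3_0 output may keep the loop as real flow control and use
// arbitrary swizzles; the ps_2_b output unrolls the loop instead.

sampler2D diffuseMap : register(s0);

float4 lightColor[4];   // up to four lights, set by the application
float4 lightDir[4];

float4 main(float2 uv : TEXCOORD0, float3 normal : TEXCOORD1) : COLOR
{
    float3 n = normalize(normal);
    float3 light = 0;
    // A loop like this compiles on both profiles: ps_3_0 can keep
    // it as flow control, ps_2_b unrolls it at compile time.
    for (int i = 0; i < 4; ++i)
        light += saturate(dot(n, lightDir[i].xyz)) * lightColor[i].rgb;
    return float4(tex2D(diffuseMap, uv).rgb * light, 1.0);
}
```

The point is exactly what frost_add says: nothing in the source above is profile-specific, so the compiler decides how to map it to each shader model.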
 
XxStratoMasterXx said:
PS 1.4 doesn't have floating point precision does it? Why would you find it more favorable to ps2.0b??
Bigger install base. ATi is asking developers to cover for their decision to not have SM3.0. This is the exact situation nVidia put developers into with their FX cards. It boils down to something like, "Hey, reorder your game to take advantage of our hardware. Forget that only about 2% of the people who play your game will gain anything from it."

I was/am a fan of the 9700 and 9800 because they provide great baseline SM2.0 support. No special knobs, no odd tweaks. Same with the 6800 and 6600 for SM3.0 support. The rest is a bother to implement considering its transient nature.
 
frost_add said:
Probably the general idea behind that "ps_2_b is enough" is that developers now use HLSL exclusively, and incidentally all the shaders they write simply compile for the ps_2_b profile - and they will keep compiling for quite a while.
ps 3.0 shaders compiled from the same source might be a little more efficient (thanks to dynamic branching, arbitrary swizzles, the abs modifier, maybe a few more things), but that's about all - "ps3 only" shaders are far away.
HLSL does not magically make shader versions transparent. Yes, you can take your PS2.0 shader and compile it for PS2.0b, but I doubt that's what ATi intended. If you can compile a shader successfully for PS3.0 and PS2.0b, all that really shows is that you're not using PS3.0.
 
Inane_Dork said:
HLSL does not magically make shader versions transparent. Yes, you can take your PS2.0 shader and compile it for PS2.0b, but I doubt that's what ATi intended. If you can compile a shader successfully for PS3.0 and PS2.0b, all that really shows is that you're not using PS3.0.
I don't get it. I have a single HLSL source file; when compiled to ps 3.0 it uses dynamic branching and static flow control, texture loads are ordered differently, and the generated code uses some extra ps 3.0 features I already mentioned (like arbitrary swizzles - even with no flow control, ps 3.0 code tends to be a little shorter). With a single #define I made it use texldl instead of texld on shadow maps (so that it can be put inside a conditionally executed block).
The same source file compiles to ps_2_b, but the generated code is completely different (loops are unrolled, etc.). How else would you define "version transparency"?
Sure, there are things even in HLSL that would not compile to pre-ps3 profiles (tex2Dlod is a good example). But saying I am not using ps 3.0 goes way too far. In fact my shaders seem to be transparently ( :) ) using all the ps 3.0 features people are so excited about, with almost no work on my side...
I can only guess that most game developers will choose to go that route (one source file for 2_b and 3_0), at least for the next year or two. Of course it's not what they'd tell the press (there's $$$ involved - so they'll at least make it look like they favor one vendor or the other), but it makes a lot of sense from a cost-effectiveness point of view. Gamers aren't going to notice anyway.
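The #define trick frost_add describes might look something like this (a sketch with hypothetical names; tex2Dlod requires a ps_3_0 target, which is exactly why it is switched in conditionally):

```hlsl
sampler2D shadowMap : register(s0);

float sampleShadow(float2 uv, float depth)
{
#ifdef PS30
    // tex2Dlod exists only on ps_3_0, but because the LOD is explicit
    // it can safely sit inside a dynamically executed branch.
    float stored = tex2Dlod(shadowMap, float4(uv, 0, 0)).r;
#else
    // ps_2_b path: plain tex2D, kept outside any flow control.
    float stored = tex2D(shadowMap, uv).r;
#endif
    return depth <= stored ? 1.0 : 0.0;
}
```

Compiling once with /D PS30 for the ps_3_0 build and once without for ps_2_b keeps a single source file, which is the whole "one source, two profiles" route being discussed.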
 
XxStratoMasterXx said:
Shader debates aside for the moment...did you guys notice that there is HDR in those shots? And this is running on an ATi card? Thoughts?
Nothing new. :) There's many ways to do HDR, after all.
 
Ostsol said:
XxStratoMasterXx said:
Shader debates aside for the moment...did you guys notice that there is HDR in those shots? And this is running on an ATi card? Thoughts?
Nothing new. :) There's many ways to do HDR, after all.

Really? What are the different ways? Can that good HDR be done with an 8-bit framebuffer? Through fragment programs?

EDIT: Also, does that mean I will be able to run HDR on my 9800pro with that demo?
 
XxStratoMasterXx said:
Really? What are the different ways? Can that good HDR be done with an 8-bit framebuffer? Through fragment programs?
HDR isn't a function or variable within D3D that you just switch on - it's a name to describe lighting that isn't clamped to a narrow range of values. With your thinly disguised hints, you're talking about HDR the FarCry/NV40 way, and HDR any other way. In the case of the former, various pixel operations write their outputs to an FP target; consecutive operations do the same, but the hardware blends the values together. The final frame that is viewed is still 8-bit, though. In the case of, say, the X800, the blending has to be done via a shader - so one operation writes to an FP target, which is then read into a temp register (I presume) for the next set of operations. A shader is used to blend the old and new values together, and then they are written into another FP target (or perhaps the old one, overwriting the previous values - somebody else would tell you for sure). The final frame is 8-bit too.

The differences then? Apart from implementation - performance.
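The shader-side blend described above might be sketched like this (a hypothetical pass, not from any actual demo: the application binds the previous FP16 accumulation buffer as a texture and writes the blended result to a second FP16 target):

```hlsl
// Ping-pong blend pass: if the hardware can't blend into an FP
// render target, the shader reads the previous buffer and does
// the blend itself.
sampler2D prevFrame : register(s0);  // previous FP16 accumulation target
sampler2D newLight  : register(s1);  // freshly rendered contribution

float blendFactor;                   // set by the application

float4 main(float2 uv : TEXCOORD0) : COLOR
{
    float3 prev = tex2D(prevFrame, uv).rgb;
    float3 add  = tex2D(newLight, uv).rgb;
    // The output goes to a *different* FP16 target; on the next pass
    // the two targets swap roles (ping-pong), since reading and
    // writing the same surface is undefined.
    return float4(prev + add * blendFactor, 1.0);
}
```

A final tone-mapping pass would then compress the accumulated FP values down to the 8-bit frame that is actually displayed, on either vendor's path.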
 