Looks like Far Cry will support 3Dc, AMD64, and HDR

Sxotty said:
To be honest I think the HDR looks kinda bad. It makes a lovely addition to be sure but it seems to be amplifying other things that look fake, the vines dangling from the roof for instance look very very fake now. Although I do admit it is pretty darn spiffy looking.
That's why Crytek said they had some issues.
 
Or rather, why they said that the overbrightening looks pretty nice in screenshots, but just looks unrealistic when overused in-game. We probably, therefore, won't see those levels of saturation in-game.
 
pat777 said:
K.I.L.E.R said:
Vertex instancing is..?
You can send an entire vertex down the pipe instead of sending parts of the vertex down the pipe one by one. :)

Ahh so you compress a set of vertices, send them down/up the pipeline and decompress them on site so the chip can work on them?
 
So if I'm understanding right, HDR in Far Cry will only be available to 6800 owners? HL2 supports HDR on all Radeons from the 9500 up, and with decent performance, so we know FP blending is not absolutely necessary. It seems to me ATI owners are getting the short end of the stick, and that's not a good thing. Then again, I can understand it if enabling HDR on Radeons would require a lot more work from Crytek.
 
Different games, different rendering algorithms. To get HDR on a pre-SM3 part, you have to use more "hacks and tricks," whereas with SM3 you can render pretty much everything like you do without HDR, and just do tone mapping afterwards.

That's a huge difference, and it means that if you attempt HDR on pre-SM3 parts, you have to limit the possible rendering algorithms that you can use. Perhaps Crytek wasn't willing to do this on a game that's already shipped.
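To make the "tone mapping afterwards" step concrete, here is a minimal sketch in Python using the Reinhard operator. This is one common choice of tone mapping curve, chosen purely for illustration; the thread doesn't say which operator any particular game uses.

```python
def reinhard_tone_map(hdr, exposure=1.0):
    """Compress an HDR radiance value (0..inf) into the displayable 0..1 range."""
    v = hdr * exposure
    return v / (1.0 + v)

# Bright values are compressed smoothly instead of clipping at 1.0:
print(reinhard_tone_map(0.5))  # ~0.333
print(reinhard_tone_map(8.0))  # ~0.889
```

The appeal of the SM3 route is exactly this: the whole scene renders as usual into a high-precision buffer, and this one cheap curve is applied in a final pass.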
 
pat777 said:
K.I.L.E.R said:
Vertex instancing is..?
You can send an entire vertex down the pipe instead of sending parts of the vertex down the pipe one by one. :)
I'm guessing the "smilie" indicated a joke, because that answer didn't make much sense to me :?
 
Simon F said:
pat777 said:
K.I.L.E.R said:
Vertex instancing is..?
You can send an entire vertex down the pipe instead of sending parts of the vertex down the pipe one by one. :)
I'm guessing the "smilie" indicated a joke, because that answer didn't make much sense to me :?

I interpreted it as:

"..compress a set of vertices, send them down/up the pipeline and decompress them on site so the chip can work on them.."

Does my explanation make sense?
 
K.I.L.E.R said:
I interpreted it as:

"..compress a set of vertices, send them down/up the pipeline and decompress them on site so the chip can work on them.."

Does my explanation make sense?

Not to me.
 
K.I.L.E.R said:
I interpreted it as:

"..compress a set of vertices, send them down/up the pipeline and decompress them on site so the chip can work on them.."

Does my explanation make sense?
It sort of makes sense but that's not what it's likely to be <shrug>

I suspect it is meant to mean geometry instancing where a base model (eg a car) is replicated in the scene with some customisations on each copy, such as a different colour or animation.

Each vertex has several (up to 16) 4D vector components. In DX9 (VS3.0?) these can now be supplied at different rates so that, eg, the same colour data can be supplied to all the vertices of one copy of the car and a different colour to another clone of the vehicle. This cuts down on your data requirements and, potentially, bandwidth requirements.

Saying "vertex instancing" is a bit silly because indexed meshes are already instancing (i.e. reusing) vertex data. <shrug>
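A back-of-the-envelope sketch of the bandwidth saving Simon F describes: with instancing, the base mesh is stored once and only a small per-instance stream (colour, transform, etc.) varies per copy. All the numbers and function names below are invented for illustration.

```python
def naive_bytes(num_instances, verts_per_mesh, bytes_per_vertex):
    # Without instancing: every copy of the car duplicates the full vertex data.
    return num_instances * verts_per_mesh * bytes_per_vertex

def instanced_bytes(num_instances, verts_per_mesh,
                    bytes_per_vertex, bytes_per_instance):
    # With instancing: base mesh stored once, plus a small per-instance stream
    # supplied at a lower rate (once per instance, not once per vertex).
    return (verts_per_mesh * bytes_per_vertex
            + num_instances * bytes_per_instance)

# 100 cars, 5000 vertices each, 32-byte vertices, 16 bytes of per-instance data:
print(naive_bytes(100, 5000, 32))          # 16000000 bytes
print(instanced_bytes(100, 5000, 32, 16))  # 161600 bytes
```

The same idea also cuts draw-call overhead, since all the clones can go down in one call, but the data-rate point is the one relevant to the quote above.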
 
Simon F said:
K.I.L.E.R said:
I interpreted it as:

"..compress a set of vertices, send them down/up the pipeline and decompress them on site so the chip can work on them.."

Does my explanation make sense?

It sort of makes sense but that's not what it's likely to be <shrug>

I suspect it is meant to mean geometry instancing where a base model (eg a car) is replicated in the scene with some customisations on each copy, such as a different colour or animation.

I didn't know that you could compress/decompress vertices in flight in the chip. Reusing vertex data from vertex/index buffers is another matter, I thought?
 
LeStoffer said:
I didn't know that you could compress/decompress vertices in flight in the chip.
I think it'd probably be quite difficult to do, at least with indexed triangles, because you wouldn't have random access into the vertex buffer.
Reusing vertex data from vertex/index buffers is another matter, I thought?
I don't understand what you are asking here. :(
 
Simon F said:
LeStoffer said:
I didn't know that you could compress/decompress vertices in flight in the chip.
I think it'd probably be quite difficult to do, at least with indexed triangles, because you wouldn't have random access into the vertex buffer.

Thanks. I just misunderstood your response to K.I.L.E.R. to mean that compressing/decompressing vertices on-chip was a feasible or common thing. I couldn't understand that, but now we are in 'agreement'. ;)

Simon F said:
LeStoffer said:
Reusing vertex data from vertex/index buffers is another matter, I thought?
I don't understand what you are asking here. :(

See above. No question really, so just forget about this part, Simon.
 
LeStoffer,
One thing I forgot to say is that the "index rate"/geometry instancing technique can be used for data (well, bandwidth) compression. Whether it's of interest to developers is, OTOH, a different matter.
 
fallguy said:
Does AA work with HDR? I thought I read somewhere (here) that it didn't.

I believe NVIDIA's FP-blending HDR will not function with AA. I don't know about the method they used for R3xx with HL2.
 
Since they probably don't use a floating-point framebuffer for HL2's HDR, I'm sure AA is possible. (An FP framebuffer would be rather pointless there anyway: without FP blending you can't do any multipass algorithms, so all the HDR has to be done entirely within the shader, and then you might as well just use a standard 8-bit integer framebuffer.)

Just keep in mind that you won't get the same HDR with an integer framebuffer as you can with a FP framebuffer, realistically.
 
Chalnoth said:
Just keep in mind that you won't get the same HDR with an integer framebuffer as you can with a FP framebuffer, realistically.
What would the difference be? FP framebuffer has better gradients/transitions? (BTW, thanks for the AA/HDR info. I still won't give up AA until I get a monitor that can do 16x12@85Hz.)
 
The difference is that you have the HDR values actually available to do the postprocess (tone mapping).

There are a couple of other ways it could be done, such as putting some sort of scale in the alpha value. But these all have issues with transparent blending, and would require ping-ponging the framebuffer to produce correct results. It might be possible if D3D allowed separate alpha and colour blending modes.

Using blending in an fp16 framebuffer is much easier.

You don't need that amount of precision for convincing HDR, and I'd imagine we'll see different framebuffer formats in future cards that are somewhat optimised for HDR purposes.
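The "scale in the alpha value" idea mentioned above can be sketched as an RGBM-style encoding: a shared multiplier is stored in the 8-bit alpha channel and applied to RGB on decode. This is a hedged illustration only; `MAX_RANGE` and the function names are assumptions, and it's not necessarily what any shipping game does.

```python
MAX_RANGE = 8.0  # assumed maximum representable HDR intensity (an invented choice)

def encode_rgbm(r, g, b):
    """Pack an HDR colour (each channel <= MAX_RANGE) into four 8-bit channels,
    with a shared multiplier stored in the alpha channel."""
    m = max(r, g, b, 1e-6) / MAX_RANGE
    m = min(max(m, 1.0 / 255.0), 1.0)  # keep the multiplier encodable in 8 bits
    scale = m * MAX_RANGE
    return (round(r / scale * 255), round(g / scale * 255),
            round(b / scale * 255), round(m * 255))

def decode_rgbm(r8, g8, b8, a8):
    """Reverse the packing: multiply the 8-bit RGB by the alpha-stored scale."""
    scale = (a8 / 255.0) * MAX_RANGE
    return (r8 / 255.0 * scale, g8 / 255.0 * scale, b8 / 255.0 * scale)

# Round-trip a bright HDR colour; small quantisation error is expected.
packed = encode_rgbm(4.0, 2.0, 1.0)
print(decode_rgbm(*packed))
```

As the post above notes, hardware blends each channel independently, so an encoding like this breaks ordinary alpha blending; that's exactly the problem native FP blending avoids.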
 