AMD: R9xx Speculation

Again, you are discussing two different things.

There is the rate/frequency beyond which the human eye sees a sequence of frames as smooth video. That's closer to 30Hz than 240Hz for most people - that's why movies get away with ~24Hz. (Fluorescent lamps and CRT displays require a higher frequency because they go dark(er) in between.)

Then there's the perception of latency. I haven't actually read any scientific literature on the subject. However, simple reaction time is on average somewhere around 200ms, so I doubt <10ms is perceptible. Also, the brain is good at hiding latency, e.g. between visual and auditory data from a distance. 10ms is the time sound lags light over a 3.5m distance.
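As a quick sanity check of that figure, here's a back-of-envelope calculation (my own, assuming ~343 m/s for sound in air and treating light as instantaneous):

```python
# Back-of-envelope check of the "10ms over 3.5m" figure above.
# Assumes sound travels at ~343 m/s in air; light's travel time is negligible.
SPEED_OF_SOUND_M_S = 343.0

def sound_lag_ms(distance_m):
    """Milliseconds by which sound trails light over distance_m metres."""
    return distance_m / SPEED_OF_SOUND_M_S * 1000.0

print(round(sound_lag_ms(3.5), 1))  # ~10.2 ms
```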
 
It's all based on the elitism of first-person shooter players who think being good at games makes their intuitions on the matter correct :) Scientific research into actual performance increases for >60 fps simulations (i.e. not toy tests with minimal visual information) is non-existent; he'd have linked it by now if it existed. Now, undoubtedly, if you are good at first-person shooters, frame rates above 75Hz still give you an edge, but IMO that's mostly because system latency keeps going down with increasing frame rate ... not because you can process any of those extra frames.
 
Actual scientific testing says otherwise. You shouldn't try to correct people if you are not familiar with the subject matter.
This is a useless argument. Nobody would want to have one quarter the rendering power per frame for 240 fps instead of 60 fps, nor would they want to pay 2-4x for a system that provides it. There's a reason that the vast majority of gamers don't aim higher than 30 fps min or 60 fps avg when choosing quality settings.

The mere existence of a perceptible difference is irrelevant.
 
Guys, could you move with this discussion out of the R9xx thread, please? I doubt R9xx will be able to render games at 240 FPS, so it's really OT...
 
Again, you are discussing two different things.

There is the rate/frequency beyond which the human eye sees a sequence of frames as smooth video. That's closer to 30Hz than 240Hz for most people - that's why movies get away with ~24Hz. (Fluorescent lamps and CRT displays require a higher frequency because they go dark(er) in between.)

Then there's the perception of latency. I haven't actually read any scientific literature on the subject. However, simple reaction time is on average somewhere around 200ms, so I doubt <10ms is perceptible. Also, the brain is good at hiding latency, e.g. between visual and auditory data from a distance. 10ms is the time sound lags light over a 3.5m distance.

That's mostly due to visual cues that things like film use: blurring during motion, etc.

Computer images, which are sharp and distinct, are easy to notice. On my LCD, which is limited to 60Hz, no computer game looks as "smooth" as 24 fps film. Everything looks like a choppy mash of individual screens. Your eye tries to interpolate the images into a smooth sequence, but for some people like me it's still a bit of a stuttery mess.

It's similar to experiments done with a rotating wheel with holes punched in it at set intervals. Rotating the wheel while looking through those holes produces something similar to what you see with rendered 3D images, except with a "blank" between each image. No matter how fast you rotate the wheel, you can still distinguish individual "frames" looking through it. It can also simulate what film would look like without any motion blur: 24 images per second become a bit of a strobe effect.

It's no surprise that games like Crysis that heavily use motion blur appear smoother and faster at lower FPS than other games rendered without any motion blur.

The only thing that ruins that is control latency, where you'll certainly notice a disconnect between the apparent smoothness of the motion and your input into the game.

However, if you remove that, by watching someone else play for example, Crysis at 20 or even 15 FPS with motion blur will appear smoother than another game running at 60 fps without motion blur.
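(For what it's worth, the crudest way to fake that computationally is just averaging several sub-frame renders per displayed frame; a rough sketch, where render_scene is a made-up callback returning an image array:)

```python
import numpy as np

# Crude accumulation-buffer motion blur: average several sub-frame renders
# within one displayed frame so fast motion smears instead of strobing.
# render_scene(t) is a hypothetical callback returning an HxWx3 float image.
def blurred_frame(render_scene, t, frame_dt, subsamples=8):
    samples = [render_scene(t + frame_dt * i / subsamples)
               for i in range(subsamples)]
    return np.mean(samples, axis=0)
```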

Regards,
SB
 
Maybe I can finally offer something of value here on B3D! :) Anyway, when I used to play MOH:Spearhead all the time, as long as my framerates were above ~30fps and smooth, the rest really made no difference whatsoever. I liked to have the refresh rate at 60Hz. What mattered more than anything was latency; I was quite successful at anything less than 120ms ping, although <80ms was ideal. Latency would change from server to server obviously, so it was just a matter of adjusting to the slight variations in lag. And not to blow my own horn or anything, but I was consistently at the top of every server I played on, and against the best there was. And that was using a rifle against SMGs and MGs. A single-shot rifle was all I ever used. Ridiculous realism mods completely ruined that, though. :D Although I still did very well, considering. Anyway, it basically comes down to having smooth, stutter-free framerates. Everything else can be adapted and adjusted to.
 
This "refresh rate" argument is over. Take it to a new thread if need be, but the arguments have devolved into pure banter.
 
No, it isn't possible, by the nature of the technology. AFR does nothing about response rate. It will, in fact, always do nothing about response rate; it's the nature of the technology that it is generating the second frame before the first frame has been displayed. And no, latency is not the trade-off for better graphics quality, certainly when you are getting no better quality.
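To illustrate the point with a toy timing model (my own numbers and assumptions, not any actual driver's scheduler): with two GPUs in AFR each GPU still needs its full frame time per frame, so the output cadence roughly doubles while the input-to-screen delay of each frame stays pinned at a single GPU's render time.

```python
# Toy AFR timing model: NUM_GPUS GPUs render alternating frames in parallel.
# Assumptions (for illustration only): each GPU needs RENDER_MS per frame,
# input is sampled when a frame starts, and frames are displayed in order.
RENDER_MS = 25.0   # single-GPU frame time -> 40 fps on one GPU
NUM_GPUS = 2

def afr_schedule(num_frames):
    gpu_free_at = [0.0] * NUM_GPUS           # when each GPU finishes its current frame
    present_interval = RENDER_MS / NUM_GPUS  # target output cadence with AFR
    last_display = 0.0
    for frame in range(num_frames):
        gpu = frame % NUM_GPUS
        start = max(gpu_free_at[gpu], frame * present_interval)  # input sampled here
        done = start + RENDER_MS
        gpu_free_at[gpu] = done
        display = max(done, last_display)    # keep frames in order on screen
        last_display = display
        print(f"frame {frame}: input {start:6.1f} ms, "
              f"on screen {display:6.1f} ms, latency {display - start:5.1f} ms")

afr_schedule(6)
# Frames come out every ~12.5 ms (double the rate of one GPU), but each
# frame's input-to-display latency stays ~25 ms -- same as a single GPU.
```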

That depends on what you sync your game engine to. There was a good article in Game Developer a few months ago about how to program jumping, and how to make it all work out. It is pretty simplistic, but it shows off many of the problems you are discussing, especially what to sync to what, why, and how.

It is a good read for most noobs, and basically shoots down a lot of the back-and-forth arguments that are shown here. I have it in the magazine but can't find it online now; if anyone has a link on Gamasutra or something, please post it.
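To give a flavour of what that kind of article usually boils down to (my own bare-bones sketch, not the article's code): step the simulation on a fixed timestep and let rendering run free, interpolating between the last two simulation states.

```python
import time

# Fixed-timestep loop sketch: gameplay/physics (e.g. a jump arc) advance at a
# constant rate, while rendering runs as fast as it can and interpolates.
SIM_DT = 1.0 / 60.0  # simulation tick in seconds

def run(sim_update, render, max_frames=1000):
    # sim_update(dt) and render(alpha) are placeholder callbacks.
    previous = time.perf_counter()
    accumulator = 0.0
    for _ in range(max_frames):
        now = time.perf_counter()
        accumulator += now - previous
        previous = now
        while accumulator >= SIM_DT:    # catch the simulation up in fixed steps
            sim_update(SIM_DT)
            accumulator -= SIM_DT
        alpha = accumulator / SIM_DT    # blend factor between last two sim states
        render(alpha)
```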

-Charlie
 
Whether users believe in the lag issues or not, an alternative to AFR would be great, especially if it can be done in a way that is seamless for the developers on both sides. Something that is more compatible, requiring less driver manipulation / profile creation and less developer awareness, would be an automatic win for ATI (or Nvidia).
 
Might as well quote them ...

Discrete equivalent of the DX11 GPU will continue to be produced in bulk technology. We're not moving the discrete GPUs to SOI.

You could read that as saying the existing designs won't be moved to SOI but a DX11.1 generation might :)

PS. below 22nm bulk planar CMOS will be dead and SOI will almost certainly be the option of choice for foundries and fabless semiconductor companies ... so it's obviously a statement with a time limit.
 
Could you explain why SOI will survive over bulk below 22 nm?

PS. below 22nm bulk planar CMOS will be dead and SOI will almost certainly be the option of choice for foundries and fabless semiconductor companies ... so it's obviously a statement with a time limit.

Also, are there non-planar bulk processes? Just asking to check my understanding.
 
Might as well quote them ...

You could read that as saying the existing designs won't be moved to SOI but a DX11.1 generation might :)

PS. below 22nm bulk planar CMOS will be dead and SOI will almost certainly be the option of choice for foundries and fabless semiconductor companies ... so it's obviously a statement with a time limit.
That's how I interpreted it as well when reading that interview :p
It's interesting that they are implementing quite advanced power-saving features into the Llano APU. I wonder how many of them will trickle down to enthusiast GPU designs in the next generation. Granted, to use advanced power gating they will need to manufacture GPUs on SOI, or be creative again.

Interesting times ahead!
 
The SOI roadmap is behind the bulk roadmap at GF. Having enjoyed the process advantage for a long time now, and being rather adept at it, I doubt they'll move GPUs to SOI and lose that advantage.
 
The SOI roadmap is behind the bulk roadmap at GF. Having enjoyed the process advantage for a long time now, and being rather adept at it, I doubt they'll move GPUs to SOI and lose that advantage.

The bulk roadmap is behind the SOI roadmap at GF...

[attached image: GF roadmap]
 