AMD: R9xx Speculation

At what frame rate does latency cease to be apparent to the viewer/player? Is 1/30th of a second per GPU enough? 1/60th?

-Charlie

Depends on the player's skill and the type of game. For competitive FPS, milliseconds can matter between two skilled players. That's why they play without vsync and at lower quality settings to reduce computation lag, and for a while it's why they stuck with CRTs, because of the lag on LCDs.
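For concreteness, here's a minimal sketch (Python, my own illustration, not anything from AMD) of the arithmetic behind "1/30th of a second per GPU", assuming idealised alternate-frame rendering where the displayed frame rate scales with GPU count but each frame's render latency is still one GPU's full frame time; real drivers add buffering on top:

Code:
def frame_time_ms(fps):
    """Milliseconds one GPU spends rendering a frame at a given rate."""
    return 1000.0 / fps

for per_gpu_fps in (30, 60):
    for n_gpus in (1, 2):
        displayed = per_gpu_fps * n_gpus      # AFR scales the displayed rate...
        latency = frame_time_ms(per_gpu_fps)  # ...but not per-frame latency
        print(f"{n_gpus} GPU(s) @ {per_gpu_fps} fps each -> "
              f"{displayed} fps shown, ~{latency:.1f} ms render latency")

So "1/30th of a second per GPU" means ~33.3 ms of render latency per frame, and 1/60th means ~16.7 ms, regardless of how many GPUs are alternating.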
 
That is a 28LP with test structures on it (yes, I am 100% sure those are test structures; take BS News' version at your own risk). You should see 28LP products this year.

Not necessarily. 2011 sounds likelier, mostly because SoCs and their final integration take a considerable amount of time. We might see some Tegra2-powered devices this year, but only a few. Mass production for T2 started in Q1 2010 - when did it have its initial tape-out, though?

2010 is the year the 40/45nm generation of SoCs is sampling. The number of final products with those SoCs integrated should be quite small. Most players are showcasing their first 40/45nm prototypes at MWC in Barcelona at the moment.
 
I'm not sure I'd consider that to be a particularly reliable source of information.
I noticed that this version of the GF roadmap:

http://www.anandtech.com/video/showdoc.aspx?i=3740&p=7

has risk production of 32nm SOI in Q3 this year. GF seems to change its roadmap on a whim, much like AMD did. Has anyone found a place on GF's website with up-to-date roadmaps?

I can see reasons to tape out a GPU before you try and integrate with the CPU (i.e. Llano), but there's no good reason to use SOI for commercial GPUs shipping in high volume. SOI's more expensive than bulk, and isn't really a good fit with low cost or low power.
High performance (or, if you prefer, discrete) GPUs aren't low power.

Moreover, Llano has already taped out...so unless this mythical SOI GPU taped out a while back...it isn't really going to be all that helpful in familiarizing the guys at ATI with the 'benefits' of SOI. The whole point is to start working with SOI ahead of time, so they are prepared. In order for that to really happen, they would have needed to tape out such a GPU a while back (say 6 months).
Or even further back? I can't remember when it was that AMD decided that it was no longer going to make MCM its first Fusion processor.

Of course, if AMD did have a 32nm product taped out 6 months ago...you'd think it would get to market. While SOI does add cost, that pales compared to the benefits of moving to HKMG and 32nm.
This is the crux of it. Theoretically AMD is now on an ~annual GPU schedule (though RV770 and Cypress are only two points on a line).

With 40nm at TSMC a year late, it's run into the launch window of TSMC's succeeding process, whatever that was. Back when 40nm was supposed to appear on shelves (end of 2008), the succeeding process would have been roadmapped for the end of 2010, I guess.

Since AMD's been watching the 40nm trainwreck in slow motion for nearly 2 years now, presumably it's had time to adjust. There was no choice about Evergreen; it had to be 40nm. The choices for R900 seem to be either to delay the GPU for TSMC's succeeding node or to use GF.

Using GF is problematic because AMD has committed to both bulk and SOI for GPUs (latter Fusion only, at minimum). With GF being problematic (new foundry and two new nodes) it seems logical to take a really long run at it. AMD (GF as was, effectively) also needed to get experience with bulk manufacture/libraries and emplace infrastructure to deal with third parties. ATI GPUs, at least as test beds, appear to be a useful platform in this regard. Wouldn't it be surprising to learn that ATI's expertise in working with foundries was not a key part of GF's ramp?

Bottom line: AMD could do a GPU on SOI, but it doesn't sound very plausible.
Yeah, no matter how convenient the various aspects of AMD/GF's closeness look, it seems too risky.

At best a 32nm refresh of Cypress would be ~230mm² I suppose, which seems to be in the ballpark of Llano's die size - would such a big chip be a viable "pipe cleaner"? "New features" that were left out of Cypress would add 30-50%, surely super-risky.
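For what it's worth, a rough sketch of that arithmetic (assuming the commonly cited ~334mm² for Cypress on 40nm and ideal (32/40)² area scaling; real shrinks do worse, since I/O and analog barely scale):

Code:
cypress_mm2 = 334.0                        # Cypress on 40nm, commonly cited
ideal = cypress_mm2 * (32.0 / 40.0) ** 2   # perfect shrink: ~214 mm^2
print(f"ideal 32nm shrink: ~{ideal:.0f} mm^2")   # so ~230 mm^2 "at best" fits
for growth in (1.3, 1.5):                  # the +30-50% new-feature estimate
    print(f"+{(growth - 1) * 100:.0f}% features: ~{230 * growth:.0f} mm^2")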

If AMD's GPU schedule is really 15-18 months then we shouldn't be expecting anything till 2011.

Jawed
 
Jawed, why would they need another pipe cleaner? Llano would serve exactly that purpose, though at an estimated 200+mm² it might be on the large side for one.

GF32 SOI still remains "AMD's process" at large. Perhaps IBM's gate-first approach generates too much variance to be usable outside of very streamlined partnerships?
 
Cease to be apparent or provide negligible benefits?

Even a casual gamer can notice a 10ms difference in input latency. More serious gamers can probably reap tangible benefits up to ~120 FPS, and I imagine "pro" gamers would see better results up to ~240 FPS. Of course, currently most people are going to be limited by their LCD monitor's refresh rate of 60Hz anyway.
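To put numbers on those rates - just the frame-interval arithmetic, nothing more:

Code:
for fps in (60, 120, 240):
    print(f"{fps:3d} fps -> {1000 / fps:5.2f} ms per frame")
# 60 -> 16.67 ms, 120 -> 8.33 ms, 240 -> 4.17 ms: going from 60 to 120
# saves ~8.3 ms per frame, while 120 to 240 saves only ~4.2 ms more.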

240??? ... then you should study how the human eye and brain work.
 
240??? ... then you should study how the human eye and brain work.

This is a common discussion, so I'll add my two cents:

I did some experiments with an LED display from 0 Hz to 100 Hz - an LED display with high contrast and a "flashy" red color (I think green would have been more appropriate, but red is what I had).

I could tell the difference up to about 65 Hz (I think 60 is the natural boundary, as many have claimed before). The usual claim is that the eye only sees up to 30, and I think that's incorrect, or at least it varies from one individual to another.
 
Jawed, why would they need another pipe cleaner? Llano would serve exactly that purpose, though at an estimated 200+mm² it might be on the large side for one.
I'm thinking in terms of getting graphics working. What libraries and tools are required to get graphics working on SOI? Whose IP are these libraries? :???:

AMD announced a while back that its x86 will be licensable for third parties to integrate for their own uses. That implies some kind of library and also a reduction in the customisation that AMD normally uses for its "internal" nodes.

Maybe there's some overlap in these libraries: the libraries required to get graphics running on GF SOI and the libraries required to get x86 running on bulk, either GF or non-GF.

I don't know how these libraries are structured - e.g. are there functional top-most layers built upon node-specific bottom-most layers? I know practically nothing about these libraries...

Apart from all that, GPUs are more "disposable" (see RV740), so despite my scepticism, I still think there's a slim-to-decent possibility that a GPU has been produced on 32nm SOI ahead of Llano.

Supposedly Llano is sampling in 2010H1 - is it normal for a CPU to sample 6 months ahead of introduction to market? Does the fact that it has a GPU lengthen the sampling period? Since it's a consumer APU and targeted at mobile too, does that explain the 6-month sampling period?

GF32 SOI still remains "AMD's process" at large. Perhaps IBM's gate-first approach generating too much variance for usability outside of very streamlined partnerships?
The alternative is "fully custom GPU" I suppose. I dunno, to what degree is a CPU "fully custom"?

The logistics and practicalities are a mystery to me.

Jawed
 
Wasn't only Bobcat going to be a synthesizable core? I assume they have their own set of tools, macros and cells (which they won't give out) without which their higher-end processors can't be made.
 
I thought that was the case with all 'large' MPUs, even from the fabless vendors.
 
I'm thinking in terms of getting graphics working. What libraries and tools are required to get graphics working on SOI? Whose IP are these libraries? :???:
I don't know about it being graphics-specific. It may be that the GPU section's emphasis on density and more modest clocks influences what is used, but AMD has only hinted that there were some nifty implementation features in the GPU section.
I'm not sure the circuits themselves actually care about the workload, just that the structure and emphasis of the GPU's design might require porting certain items not used in the high-performance CPU section.

Supposedly Llano is sampling in 2010H1 - is it normal for a CPU to sample 6 months ahead of introduction to market? Does the fact that it has a GPU lengthen the sampling period? Since it's a consumer APU and targeted at mobile too, does that explain the 6-month sampling period?
Intel's Westmere sampled in April 2009. Llano doesn't seem to be anything special in this regard.
Llano will most likely have a different socket and a different platform, which would add time. I also suspect CPUs are not given as much slack as GPUs are.

The alternative is "fully custom GPU" I suppose. I dunno, to what degree is a CPU "fully custom"?
I'm trying to track down a slide I read somewhere about Llano's CPU core. I think the amount of custom circuitry was in the tens of percent -snip-.
(edit: I cannot find that slide, so I snipped a number in the previous sentence, curse my fuzzy memory)
 
Can somebody explain to me what this means?
GF HKMG roadmap said:
Customer Product Introductions in Q1 2010:

GLOBALFOUNDRIES’ customers will begin to announce HKMG product results in early 2010 - not test chips, not 64M SRAMS, not IP shuttle results, but full products. As with the 45/40nm ramp, this will be far ahead of any other pure play foundry.

What does "product results" mean wrt a pure-play foundry?
 

Your brain will never, ever get 240 "images" per second; after around ~70-75 "images" per second under normal daylight lighting, things start getting blurry (quotation marks around "images" as it's not really images but a continuous flow of "data" that your brain receives).
 
Of course, 75 Hz is not enough to make a sample-and-hold display look good in motion, even if it maxes out the information-processing ability of the HVS in bits.
 
Kaotic said:
Your brain will never, ever get 240 "images" per second,
Actual scientific testing says otherwise. You shouldn't try to correct people if you are not familiar with the subject matter.
 
Actual scientific testing says otherwise. You shouldn't try to correct people if you are not familiar with the subject matter.

You're no doubt referring to the (in)famous 220 FPS test? There's one big flaw with it: seeing something for 1/220th of a second while there's nothing else to see for the rest of the time is completely different from seeing 220 different images in a second. With 220 different images, no one would identify anything; with one image flashed for 1/220th of a second, you get an afterimage and your brain has a lot more time to "see" what was there.

The whole big issue is that the eye doesn't see frames; it sees a continuous flow of "data", and the threshold where our normal vision is "fine" - where it's all smooth and dandy - is around 70-75 "FPS" in normal lighting, while peripheral vision can be a bit more sensitive.
 

So let's assume you're right, even though you've provided no more info than a blatant "no". If people could see even up to 240Hz (funny how this went from "I think" to "scientific testing", by the way, without any links either), could you explain why, for example, on a CRT even the pickiest people are fine with 100Hz, and most don't notice any extra eyestrain or flickering at 75Hz?
Surely if we could see and react to changes up to 240Hz, we would easily see the individual refreshes as flickering at 75 or 100Hz?
 
So let's assume you're right, even though you've provided no more info than a blatant "no". If people could see even up to 240Hz (funny how this went from "I think" to "scientific testing", by the way, without any links either), could you explain why, for example, on a CRT even the pickiest people are fine with 100Hz, and most don't notice any extra eyestrain or flickering at 75Hz?
Surely if we could see and react to changes up to 240Hz, we would easily see the individual refreshes as flickering at 75 or 100Hz?

Even better is this:
http://en.wikipedia.org/wiki/Fluorescent_lamp#Flicker_problems

If he were correct, they would have to operate at 240Hz+...or we would all feel like we were in a disco with strobes going on...
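A quick sanity check on that (fluorescent tubes on old magnetic ballasts dim twice per AC cycle, so they flicker at double the mains frequency):

Code:
for mains_hz in (50, 60):
    # light output dips at every half-cycle of the AC waveform
    print(f"{mains_hz} Hz mains -> {2 * mains_hz} Hz flicker")
# 100-120 Hz flicker is mostly invisible to us, which sits badly with
# claims that we routinely perceive changes at 240 Hz.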
 