NVIDIA Fermi: Architecture discussion

Honest question: do the dates on the chip represent the real tape-out date?


Possibly not, I'm not sure either, but even then maybe a week's difference between when the chips are made and when the stamps are put on them? I can't see chips coming out over a month before and just sitting around the factory.
 
That isn't a list of games that work with Eyefinity; it's a list of games that are problematic with extra-wide display support like Eyefinity or TH2Go but are fixed by a program called Widescreenfixer. Many games simply work by selecting the correct resolution, no Widescreenfixer required. I don't have a single game that doesn't work with Eyefinity. Since getting it in early October, I haven't started a game in single-monitor mode.

Well, in online multiplayer games, radically adjusting FOV is considered a cheat. In Half-Life-engine games, for example, the server locks the FOV down (at 90 degrees), so the only option for super-wide is stretching. I think ever-wider aspect ratios have diminishing returns on usability.
 
Possibly not, I'm not sure either, but even then maybe a week's difference between when the chips are made and when the stamps are put on them? I can't see chips coming out over a month before and just sitting around the factory.

I think, though I'd happily stand corrected, that there can be several weeks' difference between the actual tape-out and the date stamped on the chip (hence my question). If there's any merit to that, Charlie might be closer to reality than the actual date stamp on the chip suggests.

In any case, it doesn't make any worthwhile difference when A1 came back from the fab. What's more important is when, and in what shape, A2 came back, and even more important, when they can go into production.
 
But...
It was said several times that Radeon/GeForce performance in F@H is not "fair" due to different workloads, no?
So one will buy 5 cards (wtf) not because he cares about solving cancer but because of the "points" he gets...
What I do with my money is my business, and how you choose to spend yours is your business. AMD could have that money if they would work with the Pande Lab to get their performance up to match.

Just in case you think my case is unique, visit any team's DC forum and you'll notice nVidia is selling hardware. It is a market that nVidia is willing to invest in.

P.S. And get off your high horse. The chance our help leads to cures is worth it to many.
 
But...
It was said several times that Radeon/GeForce performance in F@H is not "fair" due to different workloads, no?
So one will buy 5 cards (wtf) not because he cares about solving cancer but because of the "points" he gets...

Where did you get that?

The point system tends to reflect the actual scientific value of a given workload. You might say it's "unfair" only because it doesn't match the GFLOPS output, but that's only a result of different optimization. On AMD you have to do certain calculations twice, whereas on Nvidia/CUDA you can use LDS.

Things will no doubt change when an optimized GPU3/OpenCL implementation is released. The wait for that one will be far longer than for Fermi, though.
 
That isn't a list of games that work with Eyefinity; it's a list of games that are problematic with extra-wide display support like Eyefinity or TH2Go but are fixed by a program called Widescreenfixer. Many games simply work by selecting the correct resolution, no Widescreenfixer required. I don't have a single game that doesn't work with Eyefinity. Since getting it in early October, I haven't started a game in single-monitor mode.
Yes, I know, but how about this:
While most games will run at the 7680 x 1600 resolution enumerated by ATI's driver, they don't know how to deal with the 48:10 aspect ratio of the setup (3 x 16:10 displays) and apply the appropriate field of vision adjustments. The majority of games will simply try to stretch the 16:10 content to the wider aspect ratio, resulting in a lot of short and fat characters on screen (or stretched characters in your periphery). Below is what MW2 looks like by default:
Are you sure you have tried many of the "non-popular" or "non-recent" games? Even Dragon Age: Origins needed a patch to correct this problem.
 
Really...? It sounds quite fanciful that Nvidia would be ready with an Eyefinity solution so soon, out of the clear blue sky, since AMD told the world about it.

NVIDIA has had their own "Eyefinity" for years with their Quadro line. It should be pretty straightforward to make it available for GeForces as well.
 
Hi, long-time lurker, but I've now decided to stick my finger into this soup.

Tape-out date and package markings are two different things. Tape-out occurs when the design team completes the design and sends the database to mask manufacturing. At this point mask generation starts, and it can take several days or even more than a week (depending on the geometry node and the number of mask steps in the process, several tens of them in the case of the latest sub-65nm CMOS processes). If a prototype lot is being built and time is valuable, the actual processing of the wafers can start as soon as the first mask layers are ready, even while later masks (metals, vias, etc.) are still being generated. The wafers also get special treatment to accelerate them through the fab (called a hot lot, hand carry, or whatever); basically they bypass all the queues at each processing step, increasing the number of mask layers per day the lot gets through.

Once the wafers are complete, some additional steps like wafer thinning occur. Typically at this point one would do wafer testing, but it may be that the first samples are simply packaged blind (time, not yield, is the key concern). Once packaging is done, the chips get the markings on the package. The difference between tape-out and package marking is several weeks. (I do not have the TSMC mask-step count for 40nm in front of me, nor masks-per-day metrics for hot lots or normal manufacturing, but considering the number of steps involved, it takes time.)

Note that when estimating volume-availability dates, production lots do not move as fast through the fab as prototype lots. Full wafer and packaged-part testing is also needed. Fab processing speed can be down by half or more.
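To put the "several weeks" in perspective, here is a crude back-of-the-envelope sketch. Every figure in it (mask-layer count, layers per day, mask-generation and backend times) is an invented placeholder rather than real TSMC 40nm data, which, as noted above, isn't at hand:

```python
# Crude illustration only: every figure below is an assumed placeholder,
# not real TSMC 40nm data.

MASK_LAYERS = 35       # assumed number of mask steps for the process
MASK_GEN_DAYS = 7      # assumed time to generate the mask set
BACKEND_DAYS = 7       # assumed wafer thinning + packaging + marking

def days_to_marked_chips(layers_per_day, mask_gen_days=0.0):
    """Rough days from tape-out to marked packages for one lot."""
    return mask_gen_days + MASK_LAYERS / layers_per_day + BACKEND_DAYS

# Hot/prototype lot: masks still being generated, but wafers jump every queue.
hot = days_to_marked_chips(layers_per_day=3.0, mask_gen_days=MASK_GEN_DAYS)
# Production lot: masks already exist, but the lot waits in the normal queues.
prod = days_to_marked_chips(layers_per_day=1.5)

print(f"Hot/prototype lot: ~{hot:.0f} days from tape-out to marked packages")
print(f"Production lot:    ~{prod:.0f} days of fab + backend time")
```

Even with these made-up (and fairly optimistic) numbers, the prototype lot needs the better part of a month, and the production lot crawls along at roughly half the fab pace, which is the point above.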

Some of the more knowledgeable guys on this forum surely knew this already but maybe not everybody...
 
That would be around 300mm² I guess.

From the GF100 that's been pictured, and assuming 548mm² (23.4x23.4mm):
  • 256-bit GDDR5 physical I/O = 42.4mm²
  • 8 clusters = 137.4mm²
  • scheduling/control/blah-blah (that nice square in centre of die - assume it's halved) = 9.7mm²
  • MCs/ROPs/L2s (2/3 of remaining central portion of GF100 - ignoring the fact that some of this could be single-instance) = 102.9mm²
That totals 292mm².
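A trivial sketch of the same sum, with the values taken straight from the list above (all in mm²):

```python
# Same estimate as the list above, in mm²; tweak the assumptions to taste.
gddr5_io   = 42.4    # 256-bit GDDR5 physical I/O
clusters   = 137.4   # 8 clusters
scheduling = 9.7     # central scheduling/control square, assumed halved
mc_rop_l2  = 102.9   # MCs/ROPs/L2s, 2/3 of the remaining central portion

print(f"Estimated derivative die: {gddr5_io + clusters + scheduling + mc_rop_l2:.1f} mm^2")
# -> Estimated derivative die: 292.4 mm^2
```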

Feel free to adjust ;)

Jawed

I actually speculated about that a few pages back (page 68), in this post:

http://forum.beyond3d.com/showpost.php?p=1362798&postcount=1694

I used G92 to G94 as a reference to come up with 355mm², but using your numbers, it looks even better :)
 
NVIDIA has had their own "Eyefinity" for years with their Quadro line. It should be pretty straightforward to make it available for GeForces as well.
ATM, Eyefinity simply doesn't exist; it needs at least bezel management to be useful, and perhaps even more, surround support for example.

That's less of an issue than "3DVision Ready" games requiring you to disable major features (HDR, SSAO?), though...

I actually speculated about that a few pages back (page 68), in this post:

http://forum.beyond3d.com/showpost.php?p=1362798&postcount=1694

I used G92 to G94 as a reference to come up with 355mm², but using your numbers, it looks even better :)
Are you seriously thinking this could compete against Cypress? 64 low-speed TMUs and 32 low-speed ROPs, along with a slow rasterizer and at most 1/3rd of Cypress's raw SP throughput, is clearly not enough, even if we assume Cypress's "bad" scaling is due to the GPU architecture itself.
 
Talking of derivatives... FWIW, I'm currently naively expecting NV's line-up to look something like this:

GF100: 480SP, 384-bit GDDR5 (no SKU with 512SP, some down to 416/448) [Q2]
GF102: 320SP, 256-bit GDDR5 [Q3]
GF104: 160SP, 128-bit GDDR5 [Q2]
GF106: 64 SP, 128-bit DDR3 [Q3]
GF108: 32 SP, 64-bit DDR3 [Q4]

Given how wrong I've been about every single one of my family predictions in the last trillion years, I strongly suggest everyone ignore this, or even make a clear attempt to change their own predictions if they seem similar to mine :p
 
Why wouldn't they release a 512SP part? I think it's impossible that they won't get enough chips with all clusters working from a wafer to make a real product.
 
Talking of derivatives... FWIW, I'm currently naively expecting NV's line-up to look something like this:

GF100: 480SP, 384-bit GDDR5 (no SKU with 512SP, some down to 416/448) [Q2]
GF102: 320SP, 256-bit GDDR5 [Q3]
GF104: 160SP, 128-bit GDDR5 [Q2]
GF106: 64 SP, 128-bit DDR3 [Q3]
GF108: 32 SP, 64-bit DDR3 [Q4]

Given how wrong I've been about every single one of my family predictions in the last trillion years, I strongly suggest everyone ignore this, or even make a clear attempt to change their own predictions if they seem similar to mine :p

Why do you expect the 160SP chip to be released before the 320SP one? Doesn't Nvidia usually do this top to bottom?
 
I would say that even more people will get the chance to experience ATI's deficiencies. As people pointed out, the software for Eyefinity just isn't there; it only supports some games. In the majority of games, however, the picture gets stretched too far, enough to make all 3D objects look flat and ugly. Compare that to the situation when Nvidia released their 3DVision and you will see the difference: Nvidia had a list of game compatibilities with a rating system, and they supported a large number of old and new games.

The FOV requirements need developer application support, since this is actually rendering more stuff when the full FOV is used. Fortunately, for many older, pre-existing titles there are ways of getting around the FOV issues; take a look at the Widescreen Gaming Forum guides. Some games already natively support the ultra-wide aspect ratios through their involvement with Matrox, and there will be more titles that support it natively now that we have Eyefinity.
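To illustrate the "rendering more stuff" part: with proper Hor+ handling, a game keeps its vertical FOV and widens the horizontal FOV with the aspect ratio via tan(hFOV/2) = tan(vFOV/2) × aspect. A quick sketch of that relationship (the 65° vertical FOV is just an illustrative value, picked so a single 16:10 screen lands near 90° horizontal):

```python
import math

def horizontal_fov(vertical_fov_deg, aspect):
    """Horizontal FOV in degrees for a given vertical FOV and aspect ratio (Hor+)."""
    v = math.radians(vertical_fov_deg)
    return math.degrees(2 * math.atan(math.tan(v / 2) * aspect))

VFOV = 65.0  # illustrative vertical FOV, roughly 90 degrees horizontal on 16:10

for name, aspect in [("single 16:10 display", 16 / 10),
                     ("triple 16:10 (48:10) ", 48 / 10)]:
    print(f"{name} -> {horizontal_fov(VFOV, aspect):5.1f} deg horizontal")
```

Going from roughly 90° to roughly 144° of horizontal FOV is extra geometry the engine has to render; simply stretching the existing 90° view across 48:10 is exactly what produces the short, fat characters described in the Anandtech quote earlier.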

I think the comments from Anandtech are a little misguided there, as these 3rd-party tools are making changes to software configurations, something that I don't think we should partake in; if we can control what we do to the rendering of an app through our driver (like adding AA or AF and the like) then it's fair game, but if altering the app is required then we shouldn't really go there. Our job is to evangelise it upfront to the developer and get support directly into the game; given the titles we've partnered with since the release of Eyefinity, we're already seeing good traction.

Well, in online multiplayer games, radically adjusting FOV is considered a cheat. In Half-Life-engine games, for example, the server locks the FOV down (at 90 degrees), so the only option for super-wide is stretching. I think ever-wider aspect ratios have diminishing returns on usability.

That's not the case everywhere, though; read Repi's comments concerning multiplayer Eyefinity and Battlefield: Bad Company 2. Additionally, this is the case with Codemasters racing games such as Dirt 2 and Grid; I've also heard a number of comments that iRacing users are adopting Eyefinity rather quickly.
 
Talking of derivatives... FWIW, I'm currently naively expecting NV's line-up to look something like this:

GF100: 480SP, 384-bit GDDR5 (no SKU with 512SP, some down to 416/448) [Q2]
Nvidia has already confirmed the existence of 512SP parts for both GeForce and Tesla; however, the Tesla parts will be further down the road:

http://www.pcper.com/comments.php?nid=8160

NVIDIA also specifically stated that the GF100 products that are due out next year are not going to be the same as the Tesla products discussed at the supercomputing conference and that "there will be 512 (shader) parts on both sides." What would be different between the two products would be WHEN the 512 options were introduced. It sure seemed like NVIDIA was trying to say that they would have a consumer-based 512 shader GF100 part when the GeForce lineup is revealed without just telling us.
 
The FOV requirements need developer application support, since this is actually rendering more stuff when the full FOV is used. Fortunately, for many older, pre-existing titles there are ways of getting around the FOV issues; take a look at the Widescreen Gaming Forum guides. Some games already natively support the ultra-wide aspect ratios through their involvement with Matrox, and there will be more titles that support it natively now that we have Eyefinity.
Thanks, Mr. Dave, for the remark, but I guess that was the whole point of Anand's article: to show that while some games have native support for wider FOVs, the majority don't. I didn't say that there is no way to support this; it can be done with driver optimizations, of course, but that's AMD's job, and according to Anand, that still needs more work.
 
Except that you could save some (minor?) die area and achieve the same gaming relevant throughputs.
Is that true? Fermi has two separate pipelines, Int32 and FP32. When doing DP, both pipelines are used, FP32 (8+24) + Int32 = FP64 (8+56), which results in half the throughput of both pipelines, hence the 2:1 SP:DP ratio. If so, how is saving die area possible?

Guys, I need confirmation whether this is true or wrong!
 