NVIDIA GF100 & Friends speculation

Because a few years back, everyone got fed up with paper launches and spoiler launches where there were months of hype and spin before there was a product to buy or even read reviews of.

It doesn't do anyone any good when companies "sell" products, but don't have anything you can buy for months.

Look at it from the producer side, though: if they are forced into paper or spoiler launches, they are obviously not in control of the situation. This implies that on average they will make less profit, and the customer will get a better deal.

As you said, the tactics used above had long-term consequences ("everyone got fed up"), meaning they were less likely to be repeated by the company when the game was replayed, i.e. the behaviour was self-correcting.
 
I wonder if any 512 SP Fermis will get seeded for reviews, even if the GTX 480 ends up being only 480 SP.

Or maybe a GTX 480 Ultra with 512 SPs priced at 999 USD, with availability of 1 per e-tailer and no resupply. :) Just so it can be used in reviews...

Regards,
SB
 
As GPUs now have bigger on-chip caches with speeds over 1 TB/s, shouldn't bandwidth matter much less than before?
The Radeon 5870 doesn't show much speed gain with increased bandwidth.
GF100 has 768 KB of very fast read/write cache. Couldn't that also be used as a small on-chip tile cache to reduce memory accesses? :?:
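For a rough sense of scale (the per-pixel format here is my assumption, not anything documented for GF100): 768 KB could hold roughly a 313x313 pixel tile of colour plus depth, so tiling a 1080p frame would take around 21 tiles.

Code:
# Rough sizing sketch: how big a screen tile fits in a 768 KB cache?
# Per-pixel storage is an assumption: RGBA8 colour + 32-bit depth = 8 bytes.

cache_bytes = 768 * 1024
bytes_per_pixel = 4 + 4               # colour + depth

tile_pixels = cache_bytes // bytes_per_pixel
print(tile_pixels)                    # 98304 pixels, e.g. a ~313x313 tile

frame_pixels = 1920 * 1080
print(frame_pixels / tile_pixels)     # ~21 tiles to cover one 1080p frame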

It all goes back to what uses bandwidth in GPUs: texture reads and buffer writes. What causes those? TPUs and ROPs, in traditional designs. We're at the point where ROP and TPU throughputs are generally designed to be in balance with available bandwidth. In these throughput-oriented designs the buffers are sized so that memory can be effectively saturated at the target memory bandwidth. Increasing bandwidth beyond that point is generally pretty hard without also increasing the number of outstanding transactions to memory, because the round-trip latency improves only minimally when DRAM frequency goes up.

The caches on board provide some reduction in texture reads, but in traditional designs there isn't enough cache to enable reasonable reductions in buffer writes.
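To put a number on that last point, here is a minimal Little's Law sketch (in-flight data = bandwidth × round-trip latency); the bandwidth, latency, and transaction size are illustrative assumptions, not GF100 specifics:

Code:
# Little's Law: bytes in flight needed to sustain a given bandwidth.
# All figures are illustrative assumptions, not GF100 specifics.

def outstanding_transactions(bandwidth_gbs, latency_ns, transaction_bytes=64):
    """Transactions that must be in flight to saturate memory."""
    in_flight_bytes = bandwidth_gbs * latency_ns  # GB/s * ns = bytes
    return in_flight_bytes / transaction_bytes

# Doubling bandwidth at a (nearly) constant round-trip latency doubles
# the number of outstanding transactions the hardware must track.
for bw in (80, 160):                              # GB/s
    print(bw, "GB/s ->", outstanding_transactions(bw, 400), "in flight")

So extra bandwidth sits idle unless the queues and buffers that track outstanding transactions grow along with it.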
 
Why is the "Hard Launch" so deified? :cry:

The whole idea behind the "Hard Launch" is to intimidate suppliers into charging less and customers into paying more, i.e. increasing long-term profit. It is a subconscious demonstration to the counterparty of who exactly is in charge in the transaction.

Huh? If the product is available, that is best for the consumer; it means the channel is pre-loaded, which means price competition can actually happen.

Soft launches and paper launches should be celebrated; they mean the suppliers and customers are much more in control and getting a good deal.

Soft and paper launches should not be celebrated; they mean the manufacturer is either behind schedule on their product or has little product, and therefore has significant leverage over both the middlemen and the consumer.
 
Yes and no. Think about compressed textures that are stored uncompressed in texture cache.
Individual texels might be fetched only once, but the cache is effectively acting as a bandwidth amplifier.

Technically it's the decompression step that amplifies bandwidth. You would get that benefit even without a cache.
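A back-of-the-envelope example of that amplification, using DXT1/BC1 as the assumed format (a 4x4 texel block stored in 8 bytes):

Code:
# Bandwidth amplification from texture decompression (DXT1/BC1 example).
# A 4x4 block of RGBA8 texels is 16 * 4 = 64 bytes once decompressed,
# but is fetched from DRAM as a single 8-byte compressed block.

block_texels = 4 * 4
uncompressed_bytes = block_texels * 4  # RGBA8: 4 bytes per texel
compressed_bytes = 8                   # DXT1 block size

print(uncompressed_bytes / compressed_bytes)  # 8.0

Every DRAM byte read yields 8 bytes of texel data, with or without a cache behind it.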
 
Huh? If the product is available, that is best for the consumer; it means the channel is pre-loaded, which means price competition can actually happen.
Getting the chips, boards, and packaging produced and the channel pre-loaded takes months and months. That requires a lot of control over the line to avoid any leaks, especially when production, sales, and testing are all outsourced. The control itself doesn't come from thin air; it implies one party in the line has some power over the others, and this is expressed in the end as higher profits for the controlling party and a higher price for the end consumer.

Soft and paper launches should not be celebrated; they mean the manufacturer is either behind schedule on their product or has little product, and therefore has significant leverage over both the middlemen and the consumer.
Not necessarily; just that they no longer have good control over the third parties and are forced into soft launching, which signals it all the way down to consumers. Consumers should then either be able to force a discount, or they will baulk at dealing with the company in future transactions (i.e. reduced goodwill).

Middlemen are difficult to assess; obviously one that relied primarily on a single supplier would be in big trouble (hello BFG!), while others that had alternatives should be able to drive a very hard bargain (hello Asus!).

Edit: Didn't really answer the first part very well, re whether actual availability means much for price competition; price competition always happens.
1) No new product known about - whatever is currently available competes
2) New product known but not available - consumers "guess" at the new product's ability and instantly depreciate current products, depending on their assessment of how good the new product is.
3) New product suddenly introduced - the new product will likely carry a premium thanks to marketing, as consumers take a while to independently assess things; old products are quickly depreciated.

For the producer, 3) is by far the best for profits; 2) is uncertain, maybe up, neutral, or even worse than 1) if expectations are not matched by reality in the end. For the consumer, either 1) or 2) is the best situation.
 
Sorry, but practically any product that the market is eagerly waiting for will be a soft launch. Demand peaks at launch, while production has just started. A soft launch with limited supply is OK imho, if the next batches come quickly.
 
Not necessarily; just that they no longer have good control over the third parties and are forced into soft launching, which signals it all the way down to consumers.
Historical evidence from many past GPU paper launches does not agree with you, unfortunately. "Not having good control over third parties" is just trying to shift the blame from where it belongs, i.e. the designer of the product, the one that set the timetable and is supposed to be in control of it, onto someone else.

I don't think most people would buy that re-interpretation.
 
It all goes back to what uses bandwidth in GPUs: texture reads and buffer writes. What causes those? TPUs and ROPs, in traditional designs. We're at the point where ROP and TPU throughputs are generally designed to be in balance with available bandwidth. In these throughput-oriented designs the buffers are sized so that memory can be effectively saturated at the target memory bandwidth. Increasing bandwidth beyond that point is generally pretty hard without also increasing the number of outstanding transactions to memory, because the round-trip latency improves only minimally when DRAM frequency goes up.

The caches on board provide some reduction in texture reads, but in traditional designs there isn't enough cache to enable reasonable reductions in buffer writes.

Texture reads at 160 GB/s should be enough to read 1 GB of textures in 6.25 ms, and there is the texture cache for texel reuse. But DRAM has a much bigger problem with writes than with reads.
It would be interesting to see what write speeds today's cards can reach. Advanced shaders like to write into textures, as do high-resolution shadow maps; then there is the frame buffer.
Usually in games, if you turn off shadows you gain a huge fps boost.
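The 6.25 ms figure is just division, assuming each texture byte is read from DRAM exactly once (no cache reuse, no compression):

Code:
# Time to stream 1 GB of texture data at 160 GB/s, assuming every
# byte is read from DRAM exactly once (no reuse, no compression).

texture_gb = 1
bandwidth_gbs = 160

time_ms = texture_gb / bandwidth_gbs * 1000
print(time_ms)  # 6.25 ms, well under one 60 fps frame (16.7 ms)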
 
Sorry, but practically any product that the market is eagerly waiting for will be a soft launch. Demand peaks at launch, while production has just started. A soft launch with limited supply is OK imho, if the next batches come quickly.
Yeah, sure, a soft launch is the most likely thing to happen; it is the easiest. But as you said above, "demand peaks at launch while production has just started", which implies that the producer hasn't optimised the situation, by not fully meeting demand at launch.

So instead the producers need to try to force their supply chain to toe the line so that sufficient supplies are available. I am saying this requires some market power and can only really be achieved by a company with some control over its counterparties. That control also allows them to extract extra profit from the transaction.

Historical evidence from many past GPU paper launches does not agree with you, unfortunately. "Not having good control over third parties" is just trying to shift the blame from where it belongs, i.e. the designer of the product, the one that set the timetable and is supposed to be in control of it, onto someone else.

I don't think most people would buy that re-interpretation.

Don't confuse what I am saying: paper launches are probably pretty bad for the producer; they reduce profit. What I am saying is that paper launches, relative to hard launches, are much better for the customer. They get a better deal in the end; the producer takes less of their money for the same product.
 
I was just thinking that they could do it specifically for the GTX 4xx (with the L2 cache, the compute work could run much faster there, as they showed with the ray-tracing demo). I mean the complexity of the DOF. So the Radeons will run it as fast as the GTX 4xx in DX10, but as you turn on DX11 the NVIDIA cards will gain a massive lead.
On NVIDIA's site http://www.nvidia.co.uk/object/gf100_uk.html the advanced cinematic effects are represented with a Metro 2033 picture and depth of field. It's a TWIMTBP title, after all.
It could be a good way for them to show off the GTX 4xx's DX11 power, if done the way they intended.
Dirt 2 (I think AvP too) had DirectCompute depth of field among other effects, and the leaked benchmarks showed similar DX11 fps to the Radeons.
Still, without GTX 4xx cards and fps numbers my whole theory is nonsense :oops:.

That's an interesting theory indeed. To tell the truth, PCGH's results surprised me, since this is a TWIMTBP title and all. I suspect there's some hidden performance in the GF100 cards. I mean, these guys didn't even bother mentioning Radeon cards in the game requirements. Instead they specifically mention:

Minimum System Requirements:
• Dual core CPU (any Core 2 Duo or better will do)
• DirectX 9, Shader Model 3 compliant graphics cards (GeForce 8800, GeForce GT220 and above)
• 1GB RAM

Recommended System Requirements:
• Any Quad Core or 3.0+ GHz Dual Core CPU
• DirectX 10 compliant graphics card (GeForce GTX 260 and above)
• 2GB RAM

Optimum System Requirements:
• Core i7 CPU
• NVIDIA DirectX 11 compliant graphics card (GeForce GTX 480 and 470)
• As much RAM as possible (8GB+)
• Fast HDD or SSD

Enabling Nvidia 3D Vision:
Metro 2033 utilizes NVIDIA 3D Vision with compatible cards and hardware. To play in 3D you will require:
• NVIDIA GeForce GTX 275 and above recommended
• A 120Hz (or above) monitor
• NVIDIA 3D Vision kit
• Microsoft Windows Vista or Windows 7

In any case, it will be interesting to see this game benchmarked on GF100. I suspect there's something fishy going on, in a good way (for Nvidia)! :devilish:

I managed to do some testing of my own, though without exporting benchmark numbers. I just made the following videos with Fraps running, to investigate the performance. The game is very heavy indeed. I found that disabling advanced depth of field does wonders for performance. Settings used, other than those mentioned in the video titles: no ADOF, AAA. Yeah, I was surprised to see Crossfire support out of the box. I used Catalyst 10.3 beta, though.

Here are the videos for anyone that might be interested (the last one should still be encoding)

YouTube - METRO 2033 1920X1080 DX11 VERY HIGH CROSSFIRE 2X ATI 5850 OC CORE i7-860 @4.0GHz PART 1

YouTube - METRO 2033 1920X1080 DX11 VERY HIGH CROSSFIRE 2X ATI 5850 OC CORE i7-860 @4.0GHz PART 2

YouTube - METRO 2033 1920X1080 DX11 HIGH ATI 5850 OC CORE i7-860 @4.0GHz

The problem with Metro is the lack of adjustable options.

It's silly that you can't set the tessellation level; even low, medium, and high settings would render it a lot more playable on current hardware.

This is where Crysis succeeded and Metro has failed.

There are reports of people editing files and turning off DOF in DX11 for massive boosts, or turning off tessellation for massive boosts.

There are settings in Options -> Video -> DX11 to do that. No need to edit INIs! :)
 
That's an interesting theory indeed. To tell the truth, PCGH's results surprised me, since this is a TWIMTBP title and all. I suspect there's some hidden performance in the GF100 cards. I mean, these guys didn't even bother mentioning Radeon cards in the game requirements. Instead they specifically mention:



In any case, it will be interesting to see this game benchmarked on GF100. I suspect there's something fishy going on, in a good way (for Nvidia)! :devilish:

I managed to do some testing of my own, though without exporting benchmark numbers. I just made the following videos with Fraps running, to investigate the performance. The game is very heavy indeed. I found that disabling advanced depth of field does wonders for performance. Settings used, other than those mentioned in the video titles: no ADOF, AAA. Yeah, I was surprised to see Crossfire support out of the box. I used Catalyst 10.3 beta, though.

Here are the videos for anyone that might be interested (the last one should still be encoding)

YouTube - METRO 2033 1920X1080 DX11 VERY HIGH CROSSFIRE 2X ATI 5850 OC CORE i7-860 @4.0GHz PART 1

YouTube - METRO 2033 1920X1080 DX11 VERY HIGH CROSSFIRE 2X ATI 5850 OC CORE i7-860 @4.0GHz PART 2

YouTube - METRO 2033 1920X1080 DX11 HIGH ATI 5850 OC CORE i7-860 @4.0GHz



There are settings in Options -> Video -> DX11 to do that. No need to edit INIs! :)

The problem is there is no fine tuning. I have four options for each level of DX.

So I have DX9 low, medium, high, very high;
DX10 low, medium, high, very high;
and the same with DX11.

Moving from very high to high disables a lot of stuff which may run just fine on a 5770. The same with DX10. I'm sure there are tweaks between the settings that could get me big speed increases without turning off everything that's turned off between the two.

But I wish I could run the game in full screen. It only works for me in windowed mode.
 
Was there any proof of that anywhere?
According to what I've heard, AA in BAA is done via NVAPI, and this is the main reason why it's tied to NV cards only.

Um, I played the game with AA by doing the ID hack. So there is plenty of proof of it.

But NVIDIA has a long history of playing dirty over the last decade. It all started with the FX line and pulling features from Tomb Raider (forgot which one) because it showed how fast the 9700 was at DX9. Then there were drivers that replaced everything in Far Cry's DX9 path with INT8 and FP16 instead of FP16 and FP32, while the Radeons were happy being just as fast running everything at FP24. Then you move on to AC and the removal of the faster, better-looking DX10.1 path when it showed the ATI cards zooming past the NVIDIA cards. Then Batman, which had the vendor AA lockout, plus the fact that turning on PhysX features caused fewer CPU cores to be used than without them. The best part is that NVIDIA then locked out PhysX on GeForces paired with Radeons for rendering, once we learned you could buy a $50 GeForce card and pair it with an ATI card for even better performance.

I would not put anything past NVIDIA at this point. They don't really have a DX11 benchmark: AvP, Dirt 2, and even LotRO were made with ATI hardware in mind. Not to mention that developers have had at least half a year with ATI DX11 hardware, and there are most likely around 8 million or so DX11 ATI cards on the market, while NVIDIA hasn't sold even one card.

They need a DX11 title to show off Fermi, and I bet the DX11 modes are tuned for cards with more than 1 GB, so Fermi cards will show great performance increases because of it.
 
The problem is there is no fine tuning. I have four options for each level of DX.

So I have DX9 low, medium, high, very high;
DX10 low, medium, high, very high;
and the same with DX11.

Moving from very high to high disables a lot of stuff which may run just fine on a 5770. The same with DX10. I'm sure there are tweaks between the settings that could get me big speed increases without turning off everything that's turned off between the two.

But I wish I could run the game in full screen. It only works for me in windowed mode.

Ah, I see. Yes, this is a fair request/demand.

Have you tried using Alt+Enter? You never know! :oops:
 
But I wish I could run the game in full screen. It only works for me in windowed mode.
I guess you've tried changing the in-game resolution away from what it's set to, applying, and then changing back again? Good luck with your issue!

P.S. Check that the game hasn't altered your desktop resolution. I see that a lot of people aren't playing at their native resolutions, and that sometimes causes desktop resolutions to change as well.

Well, I've noticed that in the past, with other games, anyway.

Sorry that I can only speak as a bystander.
 
I see that a lot of people aren't playing at their native resolutions, and that sometimes causes desktop resolutions to change as well.

Games haven't done that for years, and if they do, they're usually very cheap puzzle games or something.
 
Ah, I see. Yes, this is a fair request/demand.

Have you tried using Alt+Enter? You never know! :oops:

Yeah, I've even tried the suggestions on the Steam forum. I had to play with the config file (two of them!!!) to get it to run in windowed mode. But when I change the settings like they suggest to DX9, exit Steam, and edit the config back to full screen, it still just shows a black screen, although the sound works and I can hear the menus.
 
NVIDIA tapes out GF108

According to our dear Charlie, NVIDIA has taped out GF108:

GF108 is what Nvidia taped out, so its triumphant vaporware GF100 is indeed going to be followed up by a huge wave of GPUs, all of them low end.

Even with somewhat good news, he can't resist making it look bad :LOL::

At this rate, Nvidia will have a full line of parts, in quantity, but not until Q4. Q4 2011, that is.

http://www.semiaccurate.com/2010/03/17/nvidia-tapes-out-gf108/
 
Q4 2011? At least Charlie boy hasn't lost his sense of humour :LOL:

What good is GF108 over GT218, though? I didn't expect anything in that segment until 28nm. I guess the DX11 tickbox is worth another chip.
 