How will NVidia counter the release of HD5xxx?

What will NVidia do to counter the release of HD5xxx-series?

  • GT300 Performance Preview Articles: 29 votes (19.7%)
  • New card based on the previous architecture: 18 votes (12.2%)
  • New and Faster Drivers: 6 votes (4.1%)
  • Something PhysX related: 11 votes (7.5%)
  • Powerpoint slides: 61 votes (41.5%)
  • They'll just sit back and watch: 12 votes (8.2%)
  • Other (please specify): 10 votes (6.8%)

  • Total voters: 147
The G92s and their endless rebrandings have made it hard to tally total sales since RV770's release.

And that's precisely why any such comparison is useless.

ZerazaX said:
GT200 made numerous changes but kept the core concepts from G80... yet, as we saw with RV770 vs. GT200, from a production-cost standpoint the die size and transistor count comparison is firmly in RV770's favour: it is smaller and has fewer transistors by a larger margin than the performance difference.

Why doesn't the R600 excuse apply to GT200 as well? i.e. is RV770 only so good because GT200 stunk?
 
Why doesn't the R600 excuse apply to GT200 as well? i.e. is RV770 only so good because GT200 stunk?
Because GT200 doesn't have any major bugs that would decimate performance (as R600 did), and there are no signs of it missing its targeted clock speeds (early samples of R600 ran at 850MHz; that part was never released because of the extreme power consumption caused by TSMC's miserable 80nm HS process).
 
Because GT200 doesn't have any major bugs that would decimate performance (as R600 did), and there are no signs of it missing its targeted clock speeds (early samples of R600 ran at 850MHz; that part was never released because of the extreme power consumption caused by TSMC's miserable 80nm HS process).

So poor design decisions on R600 are bugs but low clock speeds on GT200 are an architectural feature? :LOL:
 
So poor design decisions on R600 are bugs but low clock speeds on GT200 are an architectural feature? :LOL:
Non-working (but present) circuitry for HW MSAA resolve was hardly a poor design decision ;)

The GT200 shader core was clocked at 1300MHz - 75% higher than RV770's. How can that be qualified as a "low clock speed"?
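A quick check of that figure, assuming the commonly cited reference clocks of 1296MHz for the GTX 280 shader domain and 750MHz for the HD 4870 core (numbers assumed here, not taken from this thread):

$$\frac{1296\,\mathrm{MHz}}{750\,\mathrm{MHz}} \approx 1.73, \qquad \text{i.e. roughly } 73\%\ \text{higher, close to the 75\% quoted above.}$$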
 
Non-working (but present) circuitry for HW MSAA resolve was hardly a poor design decision ;)

Who says it was present?

The GT200 shader core was clocked at 1300MHz - 75% higher than RV770's. How can that be qualified as a "low clock speed"?

Are you seriously comparing different architectures in order to determine what's considered a low clock speed? It's based on targets for a given architecture, not what the other guy is doing.
 
So tell me, please: what was the target, and why didn't nVidia meet it? I have no reason to believe their target was significantly different - they had experience with the 65nm manufacturing process, they knew its capabilities, and they also knew that a 600mm² GPU cannot run at the same frequency as easily as a 200-300mm² GPU. All of nVidia's high-end GPUs have always been clocked lower than the mainstream parts - NV35/NV36, NV40/NV43, G70/G73, G80/G84 - and GT200 fits this pattern perfectly, since its clock is 50MHz lower than the contemporary G92 model. I really can't see any sign of unexpectedly low clock speeds (at least not unless I start believing pre-release fake specs).

Also, if there was any specific issue with GT200 on 65nm that prevented it from reaching its targeted clocks, the 55nm process would have helped avoid it - yet even GT200b had only a mildly increased clock speed (~50MHz).
 
It's a whole "open standards" thing; some people seem to prefer it and think it stands a much better chance of success than a proprietary solution.

Good thing, then, that Nvidia's OpenCL solution is actually out there to use in Snow Leopard for everyone with an 8400 and up, as opposed to those unfortunate Mac ATI owners who don't specifically have a 4870 or 4890. Then again, calling the implementation on those usable is being kind as well.
 
Sigh, all you have to do is look at GT200's theoretical max flops at launch and the corresponding implications for marketing.
They promised DP for Q4/07 plus some GPGPU features, which were not delivered even by GT200. How could we extrapolate from evidently wrong statements? nVidia's marketing told us that their $129 GPU with its proprietary PhysX circuitry is faster/better than the HD5870. Should we believe that too? :smile:
 
Uhm, no. I disagree with ATi/AMD being bad with drivers. I can't remember the last time they missed a monthly release or didn't have some improvement to them.

Releasing tons of updates doesn't make them good. It makes them plentiful.

And no, ATI still isn't that good. Their graphics drivers work, but their other peripherals were, and are, pretty terrible. AMD of course has their act together, and between killing products and the merger I hope ATI drivers get significantly better. (I'm talking about their remotes, HD tuners, and all that kind of junk.)
 
They promised DP for Q4/07 plus some GPGPU features, which were not delivered even by GT200. How could we extrapolate from evidently wrong statements? nVidia's marketing told us that their $129 GPU with its proprietary PhysX circuitry is faster/better than the HD5870. Should we believe that too? :smile:

Unlike Carsten it seems you need another hint - 1 Teraflop :) Btw, what GPGPU features are you talking about that weren't delivered?
 
Because like it or not, OpenCL is pretty awesome. Everyone can use it, and it uses all CPU cores and all GPUs.

If it gets big, it will set a standard for a lot of things in games and such.

NV's PhysX/CUDA only works on their own hardware, so game companies aren't totally behind it, because it leaves ATI owners out of part of the game. With OpenCL, NV or ATI owners can play the full game.


That's all very well and good, but if it works on your competition's parts just as well as on your own, IT'S NOT A FUCKING FEATHER IN YOUR CAP! And right now, between Havok and PhysX, only PhysX-enabled games have anything real that can be seen in-game; Havok is box dressing. And until games launch that do Havok physics (much in the same way as PhysX) in OpenCL, we won't know whether OpenCL is a good thing or bad compared to CUDA. So until it hits, it would be nice to see ATI fans not try to use it as the gaming world's savior from PhysX.
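A minimal sketch of the vendor-neutrality being argued over above, using the plain OpenCL C API: it simply lists whatever CPU and GPU devices the installed drivers expose, regardless of vendor. This is a sketch for illustration, not anyone's shipping code.

```c
#include <stdio.h>
#ifdef __APPLE__
#include <OpenCL/cl.h>
#else
#include <CL/cl.h>
#endif

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;

    /* Ask every installed OpenCL driver (NVIDIA, ATI/AMD, Apple, ...) to report itself. */
    clGetPlatformIDs(8, platforms, &num_platforms);
    if (num_platforms > 8) num_platforms = 8;

    for (cl_uint p = 0; p < num_platforms; ++p) {
        char pname[256] = "";
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME, sizeof(pname), pname, NULL);

        cl_device_id devices[16];
        cl_uint num_devices = 0;
        /* CL_DEVICE_TYPE_ALL picks up CPUs and GPUs alike. */
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 16, devices, &num_devices);
        if (num_devices > 16) num_devices = 16;

        for (cl_uint d = 0; d < num_devices; ++d) {
            char dname[256] = "";
            cl_device_type type = 0;
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(dname), dname, NULL);
            clGetDeviceInfo(devices[d], CL_DEVICE_TYPE, sizeof(type), &type, NULL);
            printf("%s: %s (%s)\n", pname, dname,
                   (type & CL_DEVICE_TYPE_GPU) ? "GPU" : "CPU");
        }
    }
    return 0;
}
```

The same binary runs against whichever vendor's OpenCL driver happens to be installed, which is exactly the portability point being debated here.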
 
Intel and AMD are working on Havok together... Intel alone can eat up, crap out, and redigest NV in one sitting. Havok will outdo PhysX eventually due to the sheer $ in R&D. You're right that until something comes out it's all show and talk and useless, but it will eventually arrive, and then PhysX is hurt. It won't be any time soon, because the drivers have to come out, then gamers have to sit on them for a while, then developers make a game. So PhysX has the whole time advantage right now, but Havok has the $ and the ability to be universal.
As for real physics in a game, look at Diablo 3. It's not out yet, but it's coming. And don't go bashing it for being the only one - I'm just saying that there is ONE.
 
How difficult would it be for Nvidia to hack something like Eyefinity into their next line of cards (or even do a prototype using a current chip for demo purposes)?

I guess pretty easy. At SIGGRAPH they were showing Google Earth across eight HD screens on a QuadroPlex -- and the QuadroPlex does exactly the same thing as "Eyefinity", that is, to the OS it looks like one GPU with one <lots of pixels>x<lots of pixels> screen. They told me at SIGGRAPH that, it being part of the Quadro line, they cannot show games, but I guess that if necessary they could hook up a few Quadros and show you Quake on 24 screens *today*; at least nothing specific to that solution is required, you just need to crank up the in-game resolution.

The only "problem" might be due to hardware synchronisation in the Quadros, which is not available on normal Gefore cards, but I'm not sure that Eyefinity synchronises the output actually (my guess would be no).
 
I guess pretty easy. At SIGGRAPH they were showing Google Earth across eight HD screens on a QuadroPlex -- and the QuadroPlex does exactly the same thing as "Eyefinity", that is, to the OS it looks like one GPU with one <lots of pixels>x<lots of pixels> screen. They told me at SIGGRAPH that, it being part of the Quadro line, they cannot show games, but I guess that if necessary they could hook up a few Quadros and show you Quake on 24 screens *today*; at least nothing specific to that solution is required, you just need to crank up the in-game resolution.

The only "problem" might be the hardware synchronisation in the Quadros, which is not available on normal GeForce cards, but I'm not sure Eyefinity actually synchronises the outputs (my guess would be no).

Additionally, aren't even the Quadros still limited to 2 display outputs per chip? That would mean 12 chips for 24 screens, compared to just 4 for ATI.

Regards,
SB
 