NVIDIA GT200 Rumours & Speculation Thread

Meh, faster-than-realtime transcoding for uploading video to your iPhone could be marginally useful, and relevant to more customers than those doing offline encoding.

They both matter (a little).

PS: just knowing FPS in ideal situations is not that relevant; rushing through motion estimation (ME) and mode decision with the simplest algorithms is good for FPS but not good for quality.

The time I needed to re-encode the Firefly pilot ("Serenity", DVD, ~83 minutes) for my iPhone with Nero 8, two-pass encoding, and an X2 3600+ (2x2.0 GHz) was about 2 hours (faster than realtime per pass).

And there are a lot of people with PS3s, PVRs, etc. who need more than iPod resolutions for their HDTV archives.
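
For what it's worth, a quick back-of-the-envelope check of that "faster than realtime" claim (the 83-minute runtime and 2-hour figure are taken from the post above; the snippet itself is just an illustrative sketch):

```python
# Encode-speed check for a two-pass encode of an ~83-minute source that
# finished in about 2 hours (figures from the post above).
source_minutes = 83.0        # runtime of the Firefly pilot
wall_clock_minutes = 120.0   # total encode time reported
passes = 2                   # two-pass encoding

minutes_per_pass = wall_clock_minutes / passes
speed_vs_realtime = source_minutes / minutes_per_pass

print(f"{minutes_per_pass:.0f} min per pass -> "
      f"{speed_vs_realtime:.2f}x realtime per pass")
# ~60 min per pass, ~1.4x realtime per pass: faster than realtime per
# pass, but slower than realtime for the whole two-pass job.
```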
 
Well, yeah, CABAC is nearly impossible to parallelize. However, I wouldn't be surprised if you could accelerate it a bit if you were smart; either way Elemental claims CPU utilization is low, presumably also with CABAC given they already had it working when they said that, so let's wait and see.
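
For anyone wondering why CABAC is considered so hard to parallelize, here is a minimal sketch of a binary arithmetic coder loop. The probability update and renormalisation below are simplified stand-ins, not the real H.264 tables; the point is only the structure: every bin reads and rewrites the shared coder state (low, range) and its adaptive context, so bin N cannot be coded before bin N-1 has finished.

```python
# Toy binary arithmetic coder, for illustration only (not real CABAC).
def encode_bins(bins, contexts):
    low, rng = 0, 1 << 16              # coder state shared by every bin
    for bin_val, ctx in zip(bins, contexts):
        r_lps = max(1, int(rng * ctx["p_lps"]))   # range of the LPS
        if bin_val == ctx["mps"]:      # most probable symbol
            rng -= r_lps
            ctx["p_lps"] *= 0.95       # toy probability adaptation
        else:                          # least probable symbol
            low += rng - r_lps
            rng = r_lps
            ctx["p_lps"] = min(0.5, ctx["p_lps"] * 1.5)
        while rng < (1 << 8):          # renormalise (bit output omitted)
            rng <<= 1
            low = (low << 1) & 0xFFFFFF
    return low, rng

# Example: four bins, each with its own adaptive context.
ctxs = [{"p_lps": 0.3, "mps": 1} for _ in range(4)]
print(encode_bins([1, 1, 0, 1], ctxs))
```

Because the state is carried from bin to bin, the practical way to parallelise is across independent slices or frames, which is one reason entropy coding is expected to be the hard part to offload.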

Doesn't Elemental cost quite a bit, though (perhaps I'm remembering it wrong)? Would some dude wanting to encode for his iPhone actually buy such a thing?
 
Two hours for an 83-minute episode is wholly unacceptable to me. If I want to copy a movie to my iPod, about 1-5 minutes is all I can tolerate. HDTV archives are nice, but they're a niche. There are more people for whom 480p is more important, and there are oodles more portable devices for which that resolution is more suitable.
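
To put a number on that tolerance (the 1-5 minute window is from this post, the ~83-minute runtime and 2-hour CPU baseline are from the earlier post; the snippet is just arithmetic):

```python
# How fast would an encoder need to be to hit a 1-5 minute target
# for an ~83-minute movie, versus the ~2-hour two-pass CPU baseline?
source_minutes = 83.0
cpu_baseline_minutes = 120.0

for target in (5.0, 1.0):                      # tolerated wall-clock time
    realtime_multiple = source_minutes / target
    speedup_vs_cpu = cpu_baseline_minutes / target
    print(f"{target:.0f} min target: ~{realtime_multiple:.0f}x realtime, "
          f"~{speedup_vs_cpu:.0f}x faster than the CPU baseline")
# 5 min -> ~17x realtime (~24x over the CPU baseline)
# 1 min -> ~83x realtime (~120x over the CPU baseline)
```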
 
[Image: GT200-VS-RV770.png]


http://bbs.expreview.com/thread-12644-1-1.html

http://www.hardware-infos.com/news.php?news=2092

;)
 
I'm amazed at the lop-sided TMU/ROP power. You've got FLOPs "on par" between the 260 and the 4850, and between the 280 and the 4870, but texturing and fillrate on the GT200 far outpace the ATI brethren in spec. Unless games are really burning the midnight oil and are shader-bound, this won't be good for ATI. And while ATI's shader resources look far superior on paper, and I'm sure a peak hand-coded demo shader will paste the GT200, I have to wonder whether, in practice, they will achieve better utilization than NV.

On the other hand, the NV cards are way more expensive and draw way more power, so one must look at the gain in usable performance vs the money paid.

Interesting times.

p.s. I'm almost afraid to calculate the zixel-rate for this thing.
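
For concreteness, here is how those paper ratios fall out if you plug in the unit counts and clocks being floated in the linked charts. All of these figures are still rumours, so treat the exact numbers as assumptions; the point is the ratios.

```python
# Theoretical throughput from the rumoured unit counts and clocks.
# (core MHz, shader MHz, ALUs, flops per ALU per clock, TMUs, ROPs)
rumoured = {
    "GTX 280": (602, 1296, 240, 3, 80, 32),   # MAD + MUL per SP
    "GTX 260": (576, 1242, 192, 3, 64, 28),
    "HD 4870": (750,  750, 800, 2, 40, 16),   # MAD per ALU lane
    "HD 4850": (625,  625, 800, 2, 40, 16),
}

for name, (core, shader, alus, flops, tmus, rops) in rumoured.items():
    gflops = alus * flops * shader / 1000.0   # programmable shader GFLOPs
    texels = tmus * core / 1000.0             # Gtexels/s (bilinear)
    pixels = rops * core / 1000.0             # Gpixels/s colour fill
    print(f"{name}: {gflops:6.0f} GFLOPs, {texels:4.1f} GT/s, {pixels:4.1f} GP/s")
# GTX 280 vs HD 4870: ~933 vs ~1200 GFLOPs, but ~48 vs ~30 GT/s and
# ~19 vs ~12 GP/s, i.e. FLOPs roughly comparable while texturing and
# fillrate lean heavily NVIDIA on paper.
```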
 
It's not a bad tradeoff for AMD if they can get more of their cards into GPGPU scenarios, where energy efficiency and floating-point throughput trump texturing. Their margins seem much higher on their FireStream products.
 
How do you fully utilize all of RV770's theoretical power, though? :p
 
Question: is NVIDIA doing hardware AA in the ROPs/TMUs while AMD-ATI is doing AA in the shaders? Is there anywhere you can point me so I can read up on this? Thanks for any help.
 
Man, NVIDIA must be pissed that ATI beat them to the teraflop mark. I'm sure that was something they were looking forward to hyping up.
 
Yeah, it's got to be embarrassing that a 1+ billion transistor chip is pushing only 900+ GFLOPs while an ~800 million transistor chip is pushing 1000+ GFLOPs / 1 TFLOP.

Not that it really matters, yet.

However, since there will be no dual-GPU GX2-style GT200 this year, what happens when the R700 / 48x0 X2 is pushing around 2 TFLOPs?
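
Just to put that transistor-budget point in numbers, using the round figures quoted in this post (both transistor counts are still rumours at this stage):

```python
# Rough GFLOPs-per-transistor comparison from the numbers quoted above.
chips = {
    "GT200": {"gflops": 933,  "mtransistors": 1000},  # "1+ billion"
    "RV770": {"gflops": 1000, "mtransistors": 800},   # "~800 million"
}

for name, c in chips.items():
    print(f"{name}: {c['gflops'] / c['mtransistors']:.2f} "
          f"GFLOPs per million transistors")

# A dual-RV770 board simply doubles the single-chip figure:
print(f"48x0 X2: ~{2 * chips['RV770']['gflops'] / 1000.0:.0f} TFLOPs")
# GT200 ~0.93 vs RV770 ~1.25 GFLOPs per million transistors,
# and the X2 lands right around 2 TFLOPs theoretical.
```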
 
A different design philosophy will bring a different result in the marketplace. :D
 
NVIDIA has its secret weapon fully prepared against RV770/RV770 XT, apart from its GT200 lineup. :D
 