So.. What will Nvidia Bring to Counter the R520?

T2k said:
jvd said:
Nothing.


Quite simply because they need nothing.

The NV40 already has all or the majority of the features ATI will have in the R520, and with SLI they will have a faster card.

That'd be utterly dumb if they claimed this - think:

The price/speed comparison would be
$1000 SLI vs. $499 R520,
and all the solo NV40s would be slower at the same $499... nah, that's suicide.
But by the time the R520 is out, NV40 prices should have been cut.
 
I don't think either IHV is in much of a hurry right now to release anything new; manufacturing processes and higher-specced RAM could be two good reasons, for example.

To me there are two ways to increase performance right now for upcoming SM3.0 solutions: higher frequencies or more units. Both at the same time are a no-go, IMHO.

So far, speculation/guessing games seem much more reliable for the R520, because we at least know the manufacturing process, which helps. I don't expect any increase in the number of quads here, but rather a very high clock frequency instead.

Whatever NVIDIA is working on also highly depends on the manufacturing process they've targeted, which still seems to be a big question mark. An increase from 4 to 6 quads isn't impossible, as long as you don't expect any higher frequencies than current parts.

If all that turns out to be roughly right, we'll face two quite bandwidth-limited graphics cards until higher-specced RAM becomes available, and no, I don't expect huge performance increases over current products from either vendor. Nonetheless, the major interest will fall on how ATI implemented SM3.0.
 
Hmm... I don't see how my speculation on the R520 is too ambitious. In fact, what is anyone in this thread suggesting that is too ambitious? No one here has suggested that the R520 was going to double the performance of the current generation.

We all pretty much expect it to be 50-75% faster in most cases, with shader performance perhaps improved by more than that. The memory controller and AA should also be faster.
 
Well, actually there can be both answers: a simple NV40 die shrink with higher clocks to take today's 6600GT<->6800 positions, PLUS a more complex chip with more pipelines in the high-end segment.

The question is when and how they are going to release them. It is possible that we'll see a 90nm or 110nm, higher-clocked and slightly tweaked NV40 in the spring to counter R520 (rumored to be 16x1, with new V-pipes able to do P-pipe work to some extent), and then some 24x1 part based on the NV4x architecture in autumn 2005. 2006 will bring NV50 with WGF2 capabilities.

As I understand it, since ATI is the runner-up today, NV will wait and see what the R520 is going to offer and then decide what kind of answer to bring to market.

Edit: And, no, I don't think that NV will try to counter R520 with SLI setups. That's just plain stupid.
 
Hellbinder said:
What will Nvidia counter with technology-wise? They always play the leader when it comes to the technology front, or at least the technology that you can claim through PR to be leading.
Sorry for coming to the thread late, but here's my quick response to this original question:

It's simple, really. Just look at what nVidia has put forth for their refresh parts in the past: an optimized, slightly improved version of the previous core. In this vein, we can expect the NV48 to be brought to the table with increased clock speeds, possibly more pipelines, and noticeably more efficient pipelines.

That said, what really is unknown is what sorts of improvements to the NV4x technology, other than general efficiency improvements, the NV48 will bring to the table. I have my own wishlist, but I don't expect anything to be implemented that turns out to be a significant departure, on a microarchitecture level, from the NV40. But since I don't know the microarchitecture, I can't really say what sorts of things nVidia may change. So, here's my wishlist:

1. At least an option for NV2x-style anisotropy
2. MSAA on FP16 framebuffers
3. Implementation of a higher-precision render target designed to allow output through higher-precision DACs, like a 10-10-10-2 framebuffer, for still better quality in a tone-mapping pass in conjunction with high-dynamic-range rendering (see the sketch after this list).
4. Higher-sample MSAA modes.

...I could probably think of a few more things if I spent more time, but here it is as it stands.
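To make the 10-10-10-2 idea concrete: each colour channel gets 1024 levels instead of the 256 of a standard 8-8-8-8 buffer, which is what buys the extra headroom for a tone-mapping pass. Here's a minimal packing sketch in C; the helper name and channel layout are my own illustration, not any vendor's actual format or API:

```c
#include <stdint.h>

/* Illustrative only: pack three 10-bit colour channels (0..1023) and a
   2-bit alpha (0..3) into one 32-bit 10-10-10-2 framebuffer word.
   The channel order assumed here (R in the low bits) is hypothetical;
   real hardware formats define their own layout. */
static uint32_t pack_rgb10a2(uint32_t r, uint32_t g, uint32_t b, uint32_t a)
{
    return (r & 0x3FFu)
         | ((g & 0x3FFu) << 10)
         | ((b & 0x3FFu) << 20)
         | ((a & 0x3u)   << 30);
}
```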
 
Ratchet said:
The rumour mongers over at the Inquirer are saying that nVidia's next high-end chip, a rumoured 24-pipe beast codenamed NV47, does not and never did actually exist. They cite anonymous sources for the info, but have nothing concrete to back it up (as is the norm with TheInq).
We can confirm that Nvidia had an unusual amount of meetings in the last quarter, which indicates that the company made a huge turn. I guess Nvidia wants to recapture the numero uno place one more time. It’s all about pride – being a leader and not being a follower.
I have a feeling that nVidia has grown a little tired of running with ATI and decided, quite a while ago, that they'd turn things up a whole bunch of notches and try to bury the competition once and for all. Time will tell if they will succeed, but you can rest assured ATI isn't sitting still either.
[source: http://www.rage3d.com/board/showthread.php?t=33795997 ]
 
I think the real question is: when did NVIDIA decide on their 90nm fab solution? Get the answer to that and you can extrapolate when they will have 90nm parts.
 
If NVidia has any sense then they'll release a 24-pipe version of the 6800 Ultra, at roughly the same clock speeds.

The 6800GT with 16 pipes seems to have outsold the X800 Pro with its 12 pipes (3:1? in retail - ignoring OEM/system integrators) for six reasons:

1. Doom 3
2. SM3
3. 16 pipes
4. slight performance lead in some DX9 titles (before HL-2 hit the streets)
5. overclockability, without the need to mod
6. slight availability advantage?

Of these reasons, I dare say that 3 is the dominant factor since it implies 1 and 4. The average enthusiast sees that it's a battle between 16 pipes and 12 pipes and picks the former. It's obvious, innit.

It's so obvious that it's the only one of these six reasons that's never even argued down by fanatics. Even if the 12 pipes in an X800 Pro are faster in the toughest parts of HL-2 than the 16 pipes of a 6800 Ultra. :)

Sadly, NVidia will probably think that SM3 was the real reason, but even if the PR department thinks that, they should recognise that the number of pixel shader pipelines is the next statistic that most enthusiasts recognise.

The big question seems to be which process NVidia will be using. If they can get 90nm then they're home and dry with the increased transistor count of a 24-pipe part.

It'll be NVidia's 24 pipes plus 6 vertex shaders versus ATI's hybrid of 16 pipes plus 8 vertex shaders (the vertex shaders are unified, so they can also perform as 8 pixel shaders). In the past I've suggested that the 8 unified pipes will be arranged in quads, so that R520 can operate as 24-0, 20-4 or 16-8 (pixel shader - vertex shader count); in other words, the unification is quite coarse-grained.

NVidia will rubbish the 8 unified pipelines, saying that R520 is really just a 16-pipe part. They'll be saying this till the cows come home, i.e. until they're forced to go with a fully-unified architecture some time in mid-2007, as a catch-up to the fully-unified WGF-busting R600.

Personally I wonder how much a 16+8-unified-pipe card will benefit from the 8 unified pipes. For practical purposes I wouldn't be surprised if R520 performs on average like an 18-pixel-shader card with 6 vertex shaders. It'll only be competitive with a 24-pipe NVidia card due to a 20%-ish faster clock (a rough sanity check of those numbers is sketched below). Oh, and the new AA algorithm that's been rumoured. How much will that ease bandwidth pressures? Will it be a part of R520?
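For what it's worth, here's the back-of-envelope behind that claim, in C. Every figure is a speculative assumption from the paragraph above, and the 430MHz NV baseline is just a placeholder of my own, not a known spec:

```c
#include <stdio.h>

/* Crude model: effective pixel fill = pipes * clock.
   All figures are speculative, taken from the post above. */
int main(void)
{
    double nv_clock = 430.0;                   /* placeholder NV core clock, MHz */
    double nv_fill  = 24.0 * nv_clock;         /* assumed 24-pipe NV part */
    double ati_fill = 18.0 * nv_clock * 1.2;   /* "18-pipe-equivalent" R520 at +20% clock */

    printf("NV:  %.1f GPix/s\n", nv_fill / 1000.0);
    printf("ATI: %.1f GPix/s\n", ati_fill / 1000.0);
    return 0;   /* prints ~10.3 vs ~9.3 - close enough to call "competitive" */
}
```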

If NVidia is going with 110nm then I suppose they can eke out an extra 100MHz. 6600GTs are hitting 550MHz core on overclock as far as I can tell, so 520MHz for a die with nearly twice as many transistors could be doable. I suppose that's the least risky approach for NVidia. Bit boring though.

Jawed
 
Hellbinder said:
Hmm... I don't see how my speculation on the R520 is too ambitious. In fact, what is anyone in this thread suggesting that is too ambitious? No one here has suggested that the R520 was going to double the performance of the current generation.

We all pretty much expect it to be 50-75% faster in most cases, with shader performance perhaps improved by more than that. The memory controller and AA should also be faster.

6 quads with extremely high clock frequencies is a no-go, and that goes for either IHV, even on low-k 90nm, theoretically.

Assume a raw fill-rate level of ~10 GPixels/sec or more: you can reach it either with 4 quads at >=600MHz or with 6 quads at >=400MHz. It depends on how one wants to make better use of the additional theoretical die space 90nm can give in comparison to 130nm.
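As a minimal sketch of that arithmetic (assuming the usual 4 pixel pipes per quad), both configurations land at the same raw fill rate:

```c
#include <stdio.h>

/* Raw fill rate = quads * 4 pipes/quad * core clock (MHz). */
int main(void)
{
    printf("4 quads @ 600MHz: %.1f GPix/s\n", 4 * 4 * 600 / 1000.0);
    printf("6 quads @ 400MHz: %.1f GPix/s\n", 6 * 4 * 400 / 1000.0);
    return 0;   /* both print 9.6 GPix/s, i.e. roughly the ~10 GPix/s target */
}
```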

Of course "more" is possible, but it then boils down to how expensive you want the final result to be, with what power consumption/cooling solution, and most importantly in what quantities.
 
Jawed said:
DaveBaumann said:
Your thinking is a little off WRT R520.

Yes, but in what ways?

Jawed

Probably the part where the eight 'hybrid shaders' are 'reassigned to do PS work'. I doubt this on R520.

I'll bet the eight 'hybrid shaders' will just be assigned to do VS 3.0 really well - especially things like vertex texturing (this has also been hinted at by some ATI webpages and by devs here at B3D ;)).
 
DaveBaumann said:
I think the real question is: when did NVIDIA decide on their 90nm fab solution? Get the answer to that and you can extrapolate when they will have 90nm parts.

I wouldn't say that the assumption that NV is ~6 months (or more) behind ATI when it comes to 90nm is irrational (yes, I still pay attention to your news blurbs), but on the other hand I'd say that the SONY/NVIDIA deal was kept very well under wraps too, and far too many were caught by surprise by that particular announcement.

It wouldn't be the first time that NV sends around test chips with weird codenames to cause confusion (and for other reasons). Is the picture really that clear after all on NV's side when it comes to manufacturing processes?
 
A low-k .09 part from NVDA? Not this year. Maybe IBM will give up some fab time on their strained .09 process for a very complex GPU? Not this year.
 