Latest NV30 Info

errr I mean tightlipped in the sense of board level products not the OpenGL specs or what's been seen so far :)
 
ben6 said:
errr I mean tightlipped in the sense of board level products not the OpenGL specs or what's been seen so far :)

It's rather interesting what they have said/done so far. Basically, with the 40.xx Dets, nvemulate, and the OpenGL specs, they have been telling developers for 3 months or so exactly what to use when programming for the NV30. Only now are they getting around to releasing the board-level specs to the public. So they told those who write software how to write for it long before they told us user-only types what to expect. Hmmm, target the developers heavily first. And I don't know what support they have given under NDAs to the big software houses.

Am I wrong, or is this the complete opposite of past tactics, where companies hyped to get developers on board?
 
SteveG

Umm, you just answered your own question. A "source" that changes his facts from day to day isn't a source at all, just another poser spreading baseless speculation in the guise of informed fact.
I didn't call myself a "source" of any kind. You are free not to believe my words. Just don't call me a poser, since you would have to know everything about NV30 to do that. Do you?

I wouldn't even have joined this topic if it weren't for alexsok :)

Geeforcer

So Nvidia re-designed NV30 in 3 days? Amazing.
No, of course not. I don't have a direct hack into NV's corporate network, guys. There are things that NV tells people, and there are things that people find out for themselves. The important thing here is that NV will NEVER say something that could damage it, and basically all info about NV30 was from NV until yesterday.

Yesterday I got the first info from a source close to NV, but still an independent source. And that info is somewhat different from NV's "official" info...

btw, it looks like NV indeed redesigned the initial NV30 at least 2 times, mostly cutting features down to reduce the chip's price. The initial NV30 design might still see the light of day in NV35, though (the initial design was REALLY impressive)...

first his undercover all-knowing contact gives him "Tha Real Specks" of NV30.
I never said they were "Tha Real". I even marked the shaky points with question marks. Then things changed a little. They've already done this at least two times since spring. What do you want me to do? I didn't dream these specs up out of my *ss; I have no reason to do that.

Then, 3 days later, the same contact gives him some "new and shocking" details.
It wasn't the same contact. Do you ever read, or do you just talk?

Considering that his all-knowing insider semi-NDA'ed spy source could not possibly have been wrong the first time, one can deduce that the new and shocking revelations are the result of an amazing 3-day redesign of NV30.
Where did I say that my first specs were 3 days old? No, those specs were given to me in August.

noko

What if Nvidia just posts paper specs 13 days from now instead of having a real card available to play around with, like ATI did for the Radeon 9700 Pro?
There will be real cards demoed at the launch, you can be sure of that. The question is: when the heck will these cards be available in retail?..

Randell

then his friends need to be more careful about promoting their sources as all-knowledgeable
NV30 changed, so the information changed. The specs of any GPU are not final till the night before the announcement (esp. regarding MHz). And there are no "all-knowledgeable sources" on this planet :)

Ah, hell... So long to my "read only mode"...
 
DadUM said:
It's rather interesting what they have said/done so far. Basically, with the 40.xx Dets, nvemulate, and the OpenGL specs, they have been telling developers for 3 months or so exactly what to use when programming for the NV30. Only now are they getting around to releasing the board-level specs to the public. So they told those who write software how to write for it long before they told us user-only types what to expect. Hmmm, target the developers heavily first. And I don't know what support they have given under NDAs to the big software houses.

Am I wrong, or is this the complete opposite of past tactics, where companies hyped to get developers on board?

Damn, you just took the words right out from under a new article I've been working on :) I do agree that allowing NV30 emulation was definitely a strategic move by NVIDIA to enlist developer support.
 
Some of the things said above are partially correct ;) Keep going; in 13 days you will hit the nail right on the head. Seriously now, the redesigns did happen, but not only because of price; also due to the near impossibility of doing some things within current silicon budgets and processes.
 
They had to. Developers are sitting there with DX9 betas and 9700 DX9 drivers, using DX9 features (or what ATI's drivers will let them do at the moment) - they have to do something to ensure developer interest in their upcoming products.
 
MikeC said:
Damn, you just took the words right out from under a new article I've been working on :) I do agree that allowing NV30 emulation was definitely a strategic move by NVIDIA to enlist developer support.

As I think about it a little more, it's more than just NVIDIA enlisting developer support. All companies go after the likes of id (John Carmack), Epic, and other large firms. But those projects have typically been done under NDAs, both software- and hardware-based. NVIDIA is targeting smaller developers and letting them know "we want you to see where we are going so you can start to come up with ideas." One big reason I can come up with for this is that large firms typically have long development times. Look at Doom 3; it was first announced in June 2000. This can limit how fast features are implemented. But smaller developers may be able to release projects that take advantage of new features much faster. Hell, we already have a completed Cg contest, and that feature is <6 months old.
 
To add further fuel to the NV30 fire... The obligatory 3rd-hand source (from TG's neck of the woods) spoke of "microcode", but said it's unlikely to ever appear as a published spec...
 
DegustatoR said:
btw, it looks like NV indeed redesigned the initial NV30 at least 2 times, mostly cutting features down to reduce the chip's price. The initial NV30 design might still see the light of day in NV35, though (the initial design was REALLY impressive)...

Well, I had a feeling they might actually release a slower NV30 on purpose.

This is my theory:

Nvidia, a couple of months back, were close to a tape-out (actually sometime in the beginning of the year, around April or so) with a 128-bit bus card. ATi, though, came out with the R300 with its 256-bit bus and 19.2 GB/s of bandwidth, and this threw Nvidia for a spin. Nvidia, not wanting to be outdone, decided to redo their chip with a 256-bit bus, and so the tape-out of the NV30 was late (September). Lately Nvidia have seen that they could actually compete with the R300 with the original 128-bit NV30 and its double theoretical bandwidth, and have since decided to release it. This, I think, is a ploy to make ATI come out with a DDR2 R300 and maybe surpass the NV30 in performance. But I think Nvidia might have the 256-bit bus waiting in the wings for this moment. Another thing is that I think the NV30 might actually be released with slower memory (700-800) and they might just hold off on the 1000 MHz memory until sometime early next year, when ATi release the DDR2 card.

Who will show their hand first though!!??

US <-- just theorizing you understand :D
 
Prometheus said:
Why does everyone take for granted that a 256-bit bus is the only way to go?
:-? ;)

Exactly. What would happen if you combined a high-speed DDR-2 memory solution with a 2X increase in the efficiency of the next iteration of NVIDIA's Lightspeed Memory Architecture on a 128-bit bus?
 
What would happen if you combined a high-speed DDR-2 memory solution with a 2X increase in the efficiency of the next iteration of NVIDIA's Lightspeed Memory Architecture on a 128-bit bus?

Apparently, you get a chip that's very late. ;)
 
Well, wouldn't 256-bit give the NV30 a larger bandwidth limit?? I mean, if the R300 has a bandwidth of 19.2 GB/s, wouldn't using a 256-bit bus plus 3dfx's tiling tech to double its effective bandwidth increase the NV30's bandwidth capabilities??

I mean, with 128-bit Nvidia are giving us a theoretical bandwidth of 32 GB/s or higher.

US <-- Just asking?
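 
For what it's worth, the raw numbers being tossed around here are easy to sanity-check. Below is a minimal sketch (Python) of the peak-bandwidth arithmetic; the memory clocks are illustrative assumptions, not confirmed specs for either chip.

def peak_bandwidth_gbs(bus_width_bits, effective_mts):
    # bytes per transfer * transfers per second, expressed in GB/s
    return (bus_width_bits / 8) * effective_mts * 1e6 / 1e9

print(peak_bandwidth_gbs(256, 600))    # R300-style 256-bit @ 600 MT/s effective -> 19.2 GB/s
print(peak_bandwidth_gbs(128, 1000))   # 128-bit @ 1000 MT/s DDR-II              -> 16.0 GB/s raw
# A "32 GB/s or higher" figure on a 128-bit bus would therefore imply roughly a 2x
# *effective* multiplier (tiling / compression / LMA-style savings) on top of the raw number:
print(2 * peak_bandwidth_gbs(128, 1000))  # -> 32.0 GB/s "effective"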
 
Why would they stop at twice the efficiency? Perhaps they are now three, four, maybe 10 times as efficient :D
 
MikeC said:
Prometheus said:
Why does everyone take for granted that a 256-bit bus is the only way to go?
:-? ;)

Exactly. What would happen if you combined a high-speed DDR-2 memory solution with a 2X increase in the efficiency of the next iteration of NVIDIA's Lightspeed Memory Architecture on a 128-bit bus?


Going to a dual controller/bus architecture has really given the nForce2 mobo a bandwidth boost.

Maybe they borrowed a page from their office mates and put dual 128-bit LMA controllers and busses on the NV30? :D

The other page they should borrow is syncing MCLK to the core clock to simplify the timings. That would take one tweakable parameter out of the hands of the OCers, though, so maybe it's not such a good idea :D
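 
For the curious: splitting one wide bus into independent channels (the crossbar idea behind LMA, or nForce2's dual-channel setup) doesn't raise peak bandwidth at a given total width; the win comes from wasting less of each burst on small accesses. A toy sketch follows, with the burst length and access size picked purely for illustration.

import math

BURST_LEN = 2  # assumed transfers per access (typical DDR burst of the era)

def min_fetch_bytes(channel_width_bits):
    # smallest chunk one channel can read per access
    return channel_width_bits // 8 * BURST_LEN

def wasted_fraction(access_bytes, channel_width_bits):
    chunk = min_fetch_bytes(channel_width_bits)
    fetched = math.ceil(access_bytes / chunk) * chunk
    return 1 - access_bytes / fetched

# A small 16-byte access (a few texels, say):
print(wasted_fraction(16, 256))  # one monolithic 256-bit bus:       0.75 of the fetch wasted
print(wasted_fraction(16, 128))  # one of two 128-bit channels:      0.50 wasted
print(wasted_fraction(16, 64))   # one of four 64-bit LMA channels:  0.00 wasted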
 
Why don't they go with that Bitboys vaporware and use eDRAM and a 128-bit external bus?
 
Unknown Soldier said:
Well, wouldn't 256-bit give the NV30 a larger bandwidth limit??
In practice it should, but we saw how the Matrox Parhelia turned out. NV30 could have a 256-bit memory bus, but we heard from NVIDIA's chief scientist David Kirk that it would be overkill. Until we see some concrete performance results from NV30, we'll have to wait and see if Kirk's comment pans out.

On the other hand, NVIDIA had a decision to make regarding moving to a 256-bit memory bus or further optimizing a 128-bit memory bus. It's possible that NVIDIA realized that designing and manufacturing a .13 micron graphics processor was a risky venture and didn't want to include any other significant design changes that would increase this risk.

I really hope we eventually find out from NVIDIA what caused the NV30 to be delayed.
 
Kirk may have said 256-bit memory is 'overkill', but as we all know, you can't have too much bandwidth.

Surely, this can't be overkill. The only reasons a company wouldn't want to go the 256-bit route are financial.

Therefore, it's just another PR-ish statement from Kirk, which is fine, as he's only doing his job. We'll see if 128-bit is enough when NV30 is released - this assumes, of course, that it does use 128-bit!
 
A 256-bit bus is not the only way to get outstanding memory performance 8) It's just another means of reaching the main goal: keep your VPU well fed, so that your client is a happy gamer :LOL:
 