NV30 shown running in hardware... but only at a few kHz...

2800 CPUs in just one server room alone. Gawd! NVidia is definitely in the super-computer-site range.

They should outsource their CPU farm to Pixar/WETA for movies when it's not being used. :)
 
If they use that NSIM system I wouldn't be surprised if it's a lot slower (than FPGA simulators).
 
RussSchultz said:
Or the sheer number of servers? Egad.
I wonder what they're using them all for? If it's synthesis software they are running, how much are they paying for the licenses?! :eek:
 
Well, now we know how they have been able to host all of those "popular" game mirrors lately (AA, Black Hawk Down, UT2k3 Demo, etc.)...
 
Well, at least they have been able to do a lot more testing and validation work on the part because of the delay...
 
Can someone summarize the differences between ATI and NVIDIA in this Anand article? I'm too tired and lazy to think atm.
 
Simon F said:
The "few kHz" surprised me. They must be deliberately using exactly the same logic as is intended for the final chip. I would have thought it'd be easy enough to make it run at least at several MHz by inserting more register stages.

The question you're trying to answer is "Does THE netlist I sent for manufacture work OK?" What's the point in validating something different? Drivers can be developed using non-gate-accurate C code. You'd expect the IKOS to be using THE netlist. Yes you could do little modifications to speed it up, but then it's not THE netlist.

(Maybe I've gone a little over the top on the "THE netlist" point) :D
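
For anyone wondering what "inserting more register stages" would actually look like, here's a rough Verilog sketch (purely illustrative, not anything from NVIDIA's RTL): the same multiply-add written single-cycle and then with an extra pipeline register. The pipelined one can clock faster, but that's exactly the point above: it isn't THE netlist any more.

[code]
// Illustrative only (not NVIDIA's code). A multiply-add with one long
// combinational path; that path's delay limits the clock frequency.
module mac_single (
    input             clk,
    input      [15:0] a, b,
    input      [31:0] c,
    output reg [31:0] y
);
    always @(posedge clk)
        y <= (a * b) + c;              // multiply and add in a single cycle
endmodule

// Same function with an extra register stage: each stage's path is shorter,
// so the clock can run faster, at the cost of one extra cycle of latency
// and a netlist that differs from the one sent for manufacture.
module mac_pipelined (
    input             clk,
    input      [15:0] a, b,
    input      [31:0] c,
    output reg [31:0] y
);
    reg [31:0] product, c_delayed;
    always @(posedge clk) begin
        product   <= a * b;            // stage 1: multiply
        c_delayed <= c;                // keep c aligned with the product
        y         <= product + c_delayed;  // stage 2: add
    end
endmodule
[/code]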

RussSchultz said:
Anybody else surprised that NVIDIA had their own FIB? Or the sheer number of servers? Egad. Makes the company I work for look like DIY hobbyists in comparison.

The servers I knew about, but the FIB and the chip inspection tools... wow! That's always a trip back to the fab house for us.

I don't know how you keep a processor farm like that fed with enough work, though. They seem to have about a 5:1 machine-to-engineer ratio (I've heard about 300 engineers recently -> 1500 processors, about 40 racks' worth, not inconceivable looking at those photos).

All seems like overkill to me.
 
Reverend said:
Can someone summarize the differences between ATI and NVIDIA in this Anand article? I'm too tired and lazy to think atm.

Actually, the whole GPU making process was described for Anand by ATI and no photos were shown (apparently, ATI didn't allow them to take any photos), while the nVidia part of the article focused on showing various places in nVidia (supercomputers, etc...) and a tiny bit of NV30 info... :D
 
You can't really summarize the differences between ATI and NVIDIA based on this article. The ATI part of the article is pretty lame - it is just a textbook description of how high-end ICs are made. There are no pictures, no company-specific details, nada.

The NVIDIA part of the article is more or less an NVIDIA site tour -- it shows the hardware and labs used to design and test chips.

The most interesting part of the article to me is how NVIDIA plans to start using the AMD Hammer. If I were a high-end CPU company like Sun or Intel I would be worried to hear this.
 
duffer said:
You can't really summarize the differences between ATI and NVIDIA based on this article. The ATI part of the article is pretty lame - it is just a textbook description of how high-end ICs are made. There are no pictures, no company-specific details, nada.

The NVIDIA part of the article is more or less an NVIDIA site tour -- it shows the hardware and labs used to design and test chips.

The most interesting part of the article to me is how NVIDIA plans to start using the AMD Hammer. If I were a high-end CPU company like Sun or Intel I would be worried to hear this.

You pretty much summarized the article! :D
 
Sabastian said:
I understand that they are using the coding language HDL to input the chip's configuration into the FPGA card.

To clarify, HDL = Hardware Description Language; the language itself is called Verilog. VHDL is another, similar language, but I think Verilog is more commonly used.
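
For anyone who hasn't seen an HDL before, here's a trivial made-up Verilog example (nothing to do with NV30, just to show the flavour): a 2-to-1 mux. A synthesis tool turns a description like this into the gate-level netlist that gets loaded onto the FPGAs or sent to the fab.

[code]
// Trivial illustrative example: an 8-bit wide 2-to-1 multiplexer.
module mux2 (
    input        sel,
    input  [7:0] a, b,
    output [7:0] y
);
    assign y = sel ? a : b;   // pick a when sel is 1, otherwise b
endmodule
[/code]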
 
Humus said:
Sabastian said:
I understand that they are using the coding language HDL to input the chip's configuration into the FPGA card.

To clarify, HDL = Hardware Description Language; the language itself is called Verilog. VHDL is another, similar language, but I think Verilog is more commonly used.

Depends on company tradition. It used to be that VHDL was used in the States and Verilog in Europe (or it might have been the other way round), but these days it just depends on what the particular development house uses.

VHDL tends to be a lot more formal than Verilog (VHDL is a refined version of Ada, I think), so the tools tend to be faster with Verilog (fewer checks to perform, I guess). In theory, there are fewer bugs possible in a set of VHDL code that parses correctly.

Personally I hated VHDL when I started using it, but now that I'm used to it, it's fine, if a little picky sometimes. Most Verilog I read now tends to look a bit raw and unrefined. That could just be 'coz I'm not used to it now.

edited: I got a little ", but..." crazy
 
DemoCoder said:
2800 CPUs in just one server room alone. Gawd! NVidia is definitely in the super-computer-site range.

They should outsource their CPU farm to Pixar/WETA for movies when it's not being used. :)

Pixar has a nice setup already...

http://www.nwfusion.com/you/2002/where.html

40 Sun Fire 3800 servers
( http://www.sun.com/servers/midrange/sunfire3800/specs.html )

210 Sun Enterprise 4500
( http://www.sun.com/servers/midrange/e4500/specs.html )
(capable of 3360 CPUs there)


Pixar already has the power.

I spoke with their telecom guy when I worked at Avaya. We ended up just talking about the render farm for about an hour instead of troubleshooting a PBX.

Still kinda funny Pixar dropped SGI and went *Nix.
 
alexsok said:
Reverend said:
Can someone summarize the differences between ATI and NVIDIA in this Anand article? I'm too tired and lazy to think atm.

Actually, the whole GPU making process was described for Anand by ATI and no photos were shown (apparently, ATI didn't allow them to take any photos), while the nVidia part of the article focused on showing various places in nVidia (supercomputers, etc...) and a tiny bit of NV30 info... :D
Ah... sorry... I looked at the title of Anand's article before the whole page loaded, got bored and thought it was a "shootout".

Oh well... at least Jensen looks like the guy next door in the article!
 
Humus said:
Sabastian said:
I understand that they are using the coding language HDL to input the chip's configuration into the FPGA card.

To clarify, HDL = Hardware Description Language; the language itself is called Verilog. VHDL is another, similar language, but I think Verilog is more commonly used.

Ehrm, I don't think so. I have learned both, and VHDL is much more flexible. In both languages you can program at a low level, down at the operator level, but in VHDL you can also program at a really high level, much more so than in Verilog.
I guess they both have their uses, but I believe VHDL is more common. At least it's better, IMO.
 
You'd simply buy an FPGA on a card for your computer with the most transistors and the fastest clock, and then "software upgrade" your GPU by downloading "tapeouts" (configurations) to the FPGA.

Someone needs to get on this right away!

Maybe we can talk Bitboys into doing some R&D on this... that seems to be their gig... never-ending R&D.

Seriously though... what a cool concept... I wonder if something like this will ever be possible on a mass scale? 50 years off?
 
It will be way too slow.
I am pretty sure ASICs will always be faster, and people will probably want the fastest solution.
Downloading a "new GPU" would probably only be useful for bug fixes, and since the FPGA stays the same, there would not be better performance (nothing to write home about, at least :) ).
 
As long as we are using traditional technologies, sure... but with molecular computing and self-assembly, things will likely change (firstly because of failure densities, secondly because you are not going to be able to have something of the complexity of a "large" hardwired device self-assemble).
 
Simon F said:
RussSchultz said:
Or the sheer number of servers? Egad.
I wonder what they're using them all for? If it's synthesis software they are running, how much are they paying for the licenses?! :eek:

I assume they either just coded their own software, or that there are business models for that kind of massive computing. Considering those Sun servers supposedly cost $1 million each, I doubt the licensing costs are any more of a concern to them.
 