Nvidia GT300 core: Speculation

No, it makes verification much easier because it compartmentalizes it. If it's done properly, of course. There are always ways to make software completely unmaintainable. But there are also equally good ways to make it easy to track down (and fix) bugs. Good object-oriented programming is one of those ways.
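A minimal, purely hypothetical sketch of what I mean by compartmentalized verification (Python, names made up): because the class below only exposes increment() and value(), a handful of unit tests pin down its entire contract without knowing anything about the rest of the program.

[code]
import unittest


class Counter:
    """Encapsulated counter: callers can only act through increment()/value()."""

    def __init__(self):
        self._count = 0  # private state, never touched from outside the class

    def increment(self, step=1):
        if step < 1:
            raise ValueError("step must be positive")
        self._count += step

    def value(self):
        return self._count


class CounterTest(unittest.TestCase):
    def test_increment_accumulates(self):
        c = Counter()
        c.increment()
        c.increment(4)
        self.assertEqual(c.value(), 5)

    def test_rejects_bad_step(self):
        with self.assertRaises(ValueError):
            Counter().increment(0)


if __name__ == "__main__":
    unittest.main()
[/code]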

Not really. OO means that the developers only tend to test individual components of a problem and may forgo true end-to-end testing.

That's where the complexity lies, in the interactions between different classes, methods, etc.

OO creates a lot more of these 'in between' spaces that collect bugs and create verification problems.

No amount of unit testing is equivalent to end-to-end testing.

DK
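A hypothetical illustration of those "in between" spaces (Python, names made up): both pieces below pass their own unit tests, but they disagree about units, so only a test that exercises the whole path catches the bug at the seam.

[code]
def read_timeout_ms(config):
    """Component A: returns the request timeout from config, in milliseconds."""
    return config.get("timeout", 500)  # its unit tests check the default of 500


def wait_for_reply(timeout_seconds):
    """Component B: expects SECONDS; its unit tests use small values like 0.5."""
    return min(timeout_seconds, 30.0)  # clamp to a sane maximum wait


def handle_request(config):
    # The integration bug: milliseconds handed to a function expecting seconds.
    # A's tests pass, B's tests pass, yet end to end the request waits 30 s
    # (500 clamped down) instead of 0.5 s.
    return wait_for_reply(read_timeout_ms(config))


if __name__ == "__main__":
    # Only an end-to-end check exposes the mismatch -- this assertion fails,
    # which is exactly the point.
    assert handle_request({}) == 0.5, f"expected 0.5 s, got {handle_request({})} s"
[/code]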
 
But wouldn't you be limited by the lower number of threads on CS4?

Not any more than people are limited by the number of threads on G80 vs GT200. But that's beside the point. Would you rather code a routine for non-existent DX11 hardware or for the millions of G80+ and RV770+ cards already out there?
 
dkanter said:
Not really. OO means that the developers only tend to test individual components of a problem and may forgo true end-to-end testing.

That's where the complexity lies, in the interactions between different classes, methods, etc.

OO creates a lot more of these 'in between' spaces that collect bugs and create verification problems.

No amount of unit testing is equivalent to end-to-end testing.

DK
Presumably the end complexity of the program is the same whether it's object-oriented or not, so that the difficulty of end-to-end testing is going to be the same regardless. At least with an object-oriented system you can do unit testing and be quite confident that the class is going to work properly, as long as everything is properly encapsulated.
 
dkanter said:
OO creates a lot more of these 'in between' spaces that collect bugs and create verification problems.
Presumably the end complexity of the program is the same whether it's object-oriented or not, so that the difficulty of end-to-end testing is going to be the same regardless.

A bit off-topic for this thread, but dkanter is generally right. The consequence of OO (or any other layering technique) is that every layer tries to (or has to) take a more general approach to the problem than is minimally necessary. Additionally, there are probably extra checks for erroneous input and whatnot. If the goal is branch/line coverage (in addition to essential path coverage), then the testing effort absolutely increases.

Of course, if the software is built to evolve over a long time, redundant testing due to layering is less of a concern than rewriting large parts of the codebase once it has turned into an unmaintainable mess.
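A hypothetical example of the extra generality (Python, names made up): the wrapper below adds defensive branches that its one real call site never exercises, yet a branch-coverage goal forces tests for every one of them.

[code]
def lookup_price(catalog, sku):
    """Thin layer over a plain dict lookup, more general than its caller needs."""
    if not isinstance(catalog, dict):   # extra branch 1: defensive type check
        raise TypeError("catalog must be a dict")
    if not sku:                         # extra branch 2: reject empty keys
        raise ValueError("sku must be non-empty")
    if sku not in catalog:              # extra branch 3: tolerate unknown items
        return None
    return catalog[sku]


# The only caller always passes a valid dict and a known SKU, so the minimal
# version would be a bare catalog[sku]; the generality costs three extra test
# cases if line/branch coverage is the goal.
[/code]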
 
A bit off-topic for this thread, but dkanter is generally right. The consequence of OO (or any other layering technique) is that every layer tries to (or has to) take a more general approach to the problem than is minimally necessary. Additionally, there are probably extra checks for erroneous input and whatnot. If the goal is branch/line coverage (in addition to essential path coverage), then the testing effort absolutely increases.
The testing effort increases in practice, but it's wrong to suggest that OO requires more testing; it permits more testing and thus allows you to be more certain that it will work before the system goes live! This is a desirable feature, not a problem.
 
dkanter said:
Not really. OO means that the developers only tend to test individual components of a problem and may forgo true end-to-end testing.
There is always ad-hoc quality assurance testing in the end, so there is end-to-end testing.

In the end, ad-hoc testing is the only kind of testing possible. Formal verification is nice for matching VHDL to circuits, proving the absence of deadlocks, checking the truth of assertions and so on ... but in the end it can't prove you are not an idiot who writes buggy code and forgot to specify important tests.
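A hypothetical toy example of "forgot to specify important tests" (Python): every assertion the author thought to write holds, yet the specification itself is incomplete, so the bug survives all the "verification".

[code]
def is_leap_year(year):
    return year % 4 == 0  # buggy: forgets the century rules


# The properties the author remembered to specify -- all of them pass.
assert is_leap_year(2008)
assert is_leap_year(1996)
assert not is_leap_year(2007)

# The case nobody wrote down: is_leap_year(1900) wrongly returns True,
# and no amount of checking the listed assertions will ever reveal it.
[/code]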
 
In an ideal scenario where you have a good formal specification of what your block of code or block of logic does, it may be just as easy to verify RTL as software. However, that's not usually the case for either RTL or software. Given the cost of bugs in hardware, you would usually expect better formal specifications to be used for hardware design than for software development.

In any case, if you have a big, complex piece of hardware with a large number of interconnected blocks, you will have all kinds of bugs however good the specification is. Some may be quite obscure. And in a very complex piece of hardware, like a CPU, the number of possible test cases required to fully validate it is incredibly large. Can you test all the programs that can be written in x86? There can be millions of possible combinations that could trigger that obscure bug.

Poor validation of an RTL design, whatever the reason (bad specification, bad validation tools or test sets), is much more dangerous and expensive than poor validation of software. The price of fixing hardware (even just metal-layer changes) is orders of magnitude higher than fixing software. How many patches can MS release in a year? How many steppings of a given CPU can Intel produce in a year? And at what price?

There is also the problem of how fast you can validate RTL versus how fast you can validate software. If you are basically emulating the logic gates in software, it's going to be orders of magnitude slower to run the same amount of testing on a piece of RTL code than on a piece of software code. Of course FPGA emulation could be somewhat faster, if available, but that also has problems of its own. The only thing that can reach the speed available for software testing is the actual hardware. But discovering bugs on the actual hardware is very expensive (consider how much it costs to create the masks and produce the chips for validation), and debugging silicon can be quite a bit more difficult than debugging software.
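A hypothetical back-of-the-envelope (Python, everything made up) for both points: emulating even a tiny piece of logic gate by gate is far slower than running the "real hardware" (here, the host CPU's native add), and exhaustive input coverage is hopeless even for a lone 32-bit adder.

[code]
import random
import time


def ripple_carry_add(a, b, width=32):
    """Gate-level-style adder: one full adder per bit position."""
    result, carry = 0, 0
    for i in range(width):
        x = (a >> i) & 1
        y = (b >> i) & 1
        s = x ^ y ^ carry                     # sum bit
        carry = (x & y) | (carry & (x ^ y))   # carry out
        result |= s << i
    return result


if __name__ == "__main__":
    pairs = [(random.getrandbits(32), random.getrandbits(32)) for _ in range(10_000)]

    t0 = time.perf_counter()
    for a, b in pairs:
        assert ripple_carry_add(a, b) == (a + b) & 0xFFFFFFFF  # check vs native add
    t1 = time.perf_counter()
    for a, b in pairs:
        _ = (a + b) & 0xFFFFFFFF
    t2 = time.perf_counter()

    print(f"gate-level model: {t1 - t0:.4f} s, native add: {t2 - t1:.4f} s")
    # Exhaustively covering just this 32-bit adder means 2**64 input pairs --
    # utterly out of reach, which is why directed/random testing is used instead.
[/code]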
 
Charlie says: "pigs do fly"

http://www.semiaccurate.com/2009/07/29/miracles-happen-gt300-tapes-out/

As we learnt long ago there are things like risk production to speed up the first wave of cards and it's possible to hold back some wafers part-way through processing to make metal changes - so it's not necessary to be totally doomful.

23x23mm he reckons, maybe a bit less than that if the casing is ignored, call it 22.5x22.5mm = 506mm².

Also he reckons GT215 is 12x12mm. 11.5x11.5=132mm²?

Jawed
 
Charlie is being fed some strange stuff. How can G300 be the first 40nm part for NV when they are doing GT21X parts already?
 
Nvidia showed C&C 3 at the G80 launch. Did C&C 3 ever make it to DX10?

I don't think that presentation of C&C 3 was a DX10 demo. I remember it being a display of load balancing of the shader units in vertex/pixel-shader-limited environments, to demonstrate the unified shaders. I don't remember the geometry shaders ever doing any work in that demo.
 
Well he claimed they were cancelled....

DK

Well, some were cancelled, all for the same reason that the first tape-out of GT300 was not successful. This spin is going into production because it's the best they can muster. Expect a really good refresh on nV's part in 2010.
 
Well, some were cancelled, all for the same reason that the first tape-out of GT300 was not successful. This spin is going into production because it's the best they can muster. Expect a really good refresh on nV's part in 2010.

Those are the fairy tales he's been spreading since last year. I'd still suggest you shouldn't believe even half of that kind of horseshit, whichever IHV it may concern.
 