Nvidia GT300 core: Speculation

Discussion in 'Architecture and Products' started by Shtal, Jul 20, 2008.

Thread Status:
Not open for further replies.
  1. dkanter

    Regular

    Joined:
    Jan 19, 2008
    Messages:
    360
    Likes Received:
    20
    Not really. OO means that developers tend to test only individual components of a problem and may forgo true end-to-end testing.

    That's where the complexity lies, in the interactions between different classes, methods, etc.

    OO creates a lot more of these 'in between' spaces that collect bugs and create verification problems.

    No amount of unit testing is equivalent to end-to-end testing.
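
    To make the point concrete, here's a toy sketch (hypothetical names, not from any real codebase): two components that each pass their unit tests, with the bug living entirely in the seam between them.

```python
# Toy illustration (hypothetical names): each unit passes its own test,
# but the seam between them hides a bug no unit test can see.

def parse_price(text):
    """Parses '$12.34' into an integer number of cents."""
    return int(round(float(text.lstrip("$")) * 100))

def apply_discount(price, percent):
    """Expects a price in *dollars*; returns the discounted price."""
    return price * (1 - percent / 100)

# Unit tests: both components are individually correct.
assert parse_price("$12.34") == 1234
assert apply_discount(100.0, 10) == 90.0

# End-to-end: parse_price yields cents, apply_discount assumes dollars.
# The composed result is 1110.6 cents, not the $11.11 a caller expects.
total = apply_discount(parse_price("$12.34"), 10)
assert total != 11.11
```

    Only a test that exercises the whole path, input string to final price, catches the unit mismatch between the two.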

    DK
     
  2. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    10,405
    Likes Received:
    402
    Location:
    New York
    Not any more than people are limited by the number of threads on G80 vs GT200. But that's beside the point. Would you rather code a routine for non-existent DX11 hardware or for the millions of G80+ and RV770+ cards already out there?
     
  3. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,830
    Likes Received:
    166
    Location:
    Seattle, WA
    Presumably the end complexity of the program is the same whether it's object-oriented or not, so the difficulty of end-to-end testing is going to be the same regardless. At least with an object-oriented system you can do unit testing and be quite confident that the class is going to work properly, as long as everything is properly encapsulated.
     
  4. Lux_

    Newcomer

    Joined:
    Sep 22, 2005
    Messages:
    206
    Likes Received:
    1
    A bit off-topic for this thread, but dkanter is generally right. The consequence of OO (or any other layering technique) is that every layer tries to/has to take a more general approach to a problem than is minimally necessary. Additionally, there are probably extra checks for erroneous input and whatnot. If the goal is branch/line coverage (in addition to essential path coverage), then the testing effort absolutely increases.

    Of course, if the software is built to evolve over a long time, redundant testing due to layering is less of a concern than rewriting large parts of the codebase once it has turned into an unmaintainable mess.
     
  5. nicolasb

    Regular

    Joined:
    Oct 21, 2006
    Messages:
    421
    Likes Received:
    4
    The testing effort increases in practice, but it's wrong to suggest that OO requires more testing; it permits more testing and thus allows you to be more certain that it will work before the system goes live! This is a desirable feature, not a problem.
     
    #1625 nicolasb, Jul 28, 2009
    Last edited by a moderator: Jul 28, 2009
  6. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    6,672
    Likes Received:
    441
    There is always ad-hoc quality assurance testing in the end, so there is end-to-end testing.

    In the end, ad-hoc testing is the only kind of testing possible. Formal verification is nice for matching VHDL to circuits, proving the absence of deadlocks, the truthfulness of assertions, etc. ... in the end it can't prove you are not an idiot who writes buggy code and forgot to specify important tests, though.
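
    A trivial illustration of that last point (a Python stand-in for an assertion suite, purely hypothetical): every property you remembered to assert holds, yet the code is wrong, because the important property was never written down.

```python
# Illustrative only: all stated assertions pass, yet the code is buggy,
# because the key property (same elements in, same elements out) was
# never specified -- verification can't check what you forgot to ask.

def broken_sort(xs):
    # Buggy "sort": silently drops duplicates on the way through.
    return sorted(set(xs))

def check_sorted(xs):
    return all(a <= b for a, b in zip(xs, xs[1:]))

result = broken_sort([3, 1, 3, 2])
assert check_sorted(result)    # the property we remembered to state: passes
assert result != [1, 2, 3, 3]  # ...but the duplicate 3 was silently lost
```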
     
  7. CarstenS

    Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    4,779
    Likes Received:
    2,023
    Location:
    Germany
    Nvidia showed C&C 3 at the G80 launch. Did C&C 3 ever make it to DX10?
     
  8. RoOoBo

    Regular

    Joined:
    Jun 12, 2002
    Messages:
    308
    Likes Received:
    31
    In an ideal scenario where you have a good formal specification of what your block of code or block of logic does, it may be equally easy to verify RTL or software. However, that's not usually the case, for either RTL or software. Given the cost of bugs in hardware, you would usually expect better formal specifications to be used for hardware design than for software development.

    In any case, if you have a big complex piece of hardware with a large number of interconnected blocks, however good the specification is you will have all kinds of bugs. Some may be quite obscure. And in a very complex piece of hardware, like a CPU, the number of test cases required to fully validate it is incredibly large. Can you test all the programs that can be written in x86? There can be millions of possible combinations that could trigger that obscure bug.

    Poor validation of an RTL design, for whatever reason, a bad specification, bad validation tools or test sets, is much more dangerous and expensive than poor validation of software. The price of fixing hardware (even just metal-layer changes) is orders of magnitude higher than that of fixing software. How many patches can MS release in a year? How many steppings of a given CPU can Intel produce in a year? And at what price?

    There is also the problem of how fast you can validate RTL versus how fast you can validate software. If you are basically emulating the logic gates in software, it's going to be orders of magnitude slower to run the same amount of testing on a piece of RTL code than on a piece of software. Of course FPGA emulation could be somewhat faster, if available, but that has problems of its own. The only thing that can reach the speed available for software testing is the actual hardware. But discovering bugs on the actual hardware is very expensive (consider how much it costs to create the masks and produce the chips for validation), and debugging silicon can be quite a bit more difficult than debugging software.
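
    Some back-of-envelope numbers show how brutal that gap is. The cycle rates below are illustrative assumptions for the sake of the arithmetic, not measured figures for any real design.

```python
# Rough, assumed throughputs -- purely illustrative, not measured.
SILICON_HZ = 1.5e9  # actual chip: ~1.5 GHz
FPGA_HZ    = 10e6   # FPGA emulation: ~10 MHz (assumed)
RTL_SIM_HZ = 1e3    # software gate-level simulation: ~1 kHz (assumed)

# How long does one minute of real silicon workload take on each platform?
test_cycles = 60 * SILICON_HZ

for name, hz in [("silicon", SILICON_HZ), ("FPGA", FPGA_HZ),
                 ("RTL sim", RTL_SIM_HZ)]:
    days = test_cycles / hz / 86400
    print(f"{name:8s}: {days:10.3f} days")

# With these assumptions the RTL simulator needs ~1040 days (almost three
# years) to replay one silicon-minute of testing -- the "orders of
# magnitude" gap in practice.
```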
     
  9. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    10,873
    Likes Received:
    767
    Location:
    London
    Charlie says: "pigs do fly"

    http://www.semiaccurate.com/2009/07/29/miracles-happen-gt300-tapes-out/

    As we learnt long ago, there are things like risk production to speed up the first wave of cards, and it's possible to hold back some wafers part-way through processing to make metal changes - so there's no need to be totally doom-laden about it.

    23x23mm he reckons, maybe a bit less than that if the casing is ignored, call it 22.5x22.5mm = 506mm².

    Also he reckons GT215 is 12x12mm. 11.5x11.5=132mm²?
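
    Quick sanity check on the arithmetic, taking Charlie's edge estimates above as given (they are guesses, not specs):

```python
# Die area from die edge; edge lengths are Charlie's estimates, not specs.
def die_area(edge_mm):
    return edge_mm * edge_mm

print(die_area(22.5))  # 506.25 -> "call it 506mm^2" for GT300
print(die_area(11.5))  # 132.25 -> "132mm^2" for GT215
```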

    Jawed
     
  10. seahawk

    Regular

    Joined:
    May 18, 2004
    Messages:
    511
    Likes Received:
    141
    Charlie has been fed some strange stuff. How can G300 be the first 40nm part for NV when they are already doing GT21X parts?
     
  11. rpg.314

    Veteran

    Joined:
    Jul 21, 2008
    Messages:
    4,298
    Likes Received:
    0
    Location:
    /
    No wonder what he spits out resembles puke more than anything else. :)
     
  12. ChrisRay

    ChrisRay R.I.P. 1983-
    Veteran

    Joined:
    Nov 25, 2002
    Messages:
    2,234
    Likes Received:
    26
    I don't think that presentation of C&C 3 was a DX10 demo. I remember it being a display of load balancing of the shader units in vertex/pixel shader limited environments, to demonstrate the unified shaders. I don't remember the geometry shaders ever doing any work in that demo.
     
  13. dkanter

    Regular

    Joined:
    Jan 19, 2008
    Messages:
    360
    Likes Received:
    20
    Well he claimed they were cancelled....

    DK
     
  14. seahawk

    Regular

    Joined:
    May 18, 2004
    Messages:
    511
    Likes Received:
    141
    Well, tell this to the guy from A*** who wants me to buy 1200 GT21X based notebooks for the company.
     
  15. neliz

    neliz GIGABYTE Man
    Veteran

    Joined:
    Mar 30, 2005
    Messages:
    4,904
    Likes Received:
    23
    Location:
    In the know
    Well, some were cancelled, all for the same reason the first tape-out of GT300 was not successful. This spin is going into production because it's the best they can muster. Expect a really good refresh on nV's part in 2010.
     
  16. seahawk

    Regular

    Joined:
    May 18, 2004
    Messages:
    511
    Likes Received:
    141
    How could the first tape-out not be a success if it just happened? :D
     
  17. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,413
    Likes Received:
    174
    Location:
    Chania
    Those are the fairy tales he's been spreading since last year. I'd still suggest you shouldn't believe even half of that kind of horseshit, whichever IHV it may concern.
     
  18. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    10,405
    Likes Received:
    402
    Location:
    New York
  19. dkanter

    Regular

    Joined:
    Jan 19, 2008
    Messages:
    360
    Likes Received:
    20
    Easy. You find out that your first tape-out is fucked up even before it gets back from the fab.

    DK
     
