jimmyjames123
Regular
Well, you don't have an instruction pointer per element in an SSE vector, but yeah - it's a massive load of bullshit if I say so myself. The 'stream processor' nomenclature is already a bit stupid, but it's far from the 'errr, that's FUD' line from a legal POV. Calling them cores, on the other hand, is really pushing it IMO...
Going back to GT200, could it be that both 240 SPs and 384 SPs are correct? i.e. GT200 would have 80 TMUs, 240 SPs, 32 ROPs/GDDR3 and a refresh coming out later would have 96 TMUs, 384 SPs and 32 ROPs/GDDR5? That would be quite aggressive on the same process node (or just one half-node ahead; i.e. 65->55nm) but not entirely impossible, especially given that GT200 did get delayed apparently.
I listened to all ~7 hours of the Financial Analyst presentation given by NVIDIA, and there was some very interesting stuff being presented.
Did you notice that JHH spoke about how NV would move from hundreds to thousands of "cores" during the presentation?
He also said they were working on things that were miles ahead of the competition. Surely he must have meant being able to use CUDA to put all those parallel processors to good use in computational finance, medicine, weather, etc.?
No wonder NV is so confident moving forward. Their architecture is solid enough to scale to thousands of cores over time; they have a really solid programming tool in CUDA to take full advantage of advanced GPU parallel processing; they have PhysX processing that can be incorporated into the GPU and make use of CUDA; and they have incredibly clever low-power, high-performance devices designed that could go straight into next-gen iPhones and such - in addition to everything else that we don't know about.
It will be fascinating to see how things work out, but I could tell from the presentation that NVIDIA is amped about the future.