Nvidia GT300 core: Speculation

One pointer that GT300 is not ready is that all announced DX11 games have some sort of ATI/AMD sponsorship on them. This is a first for nV marketing.

When will DX11 games appear? When DX11 is available. Most will be patched-up DX10.1 titles, and all current news leads us to believe that there will be DX11 hardware available at the launch of DX11 on which we can play DX11 games.
This includes a new AAA title.

From a realistic point of view, DX11 hardware is only playable in terms of DX10/DX9 games.
 
Hardware can be very difficult to validate. Consider GPUs: they are very complex state machines. Since the current results can depend on state across many different blocks, bugs can be quite tricky to track down. Sure, a hardware designer may know what they are designing, but that doesn't mean they know everything that's going on in the chip that depends on their result or that they are depending on for their result.

Formal state space for a hardware design is minuscule compared to the formal state space of any modern day program.

Also, I have seen all sorts of "bizarro" HW bugs over the years. The advantage of SW bugs is that you can always release a new driver. HW bugs can be much more difficult to resolve.

Certainly, software bugs are easier to fix because they are not set in stone, but they also tend to be harder to find and more complex to root-cause.
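
To put rough numbers on the state-space comparison above, here's a purely illustrative back-of-envelope sketch; the flip-flop count and memory size are made-up figures, chosen only to show the scale difference:

```python
# Back-of-envelope state-space comparison -- illustrative numbers only.
# A hardware block's architectural state is bounded by its storage elements;
# a program's state includes every byte of memory it can touch.

flip_flops = 100_000          # hypothetical: a large fixed-function block
hw_state_bits = flip_flops    # each flip-flop contributes one bit of state

heap_bytes = 1 * 1024 * 1024  # hypothetical: a program touching just 1 MB
sw_state_bits = 8 * heap_bytes

# The state counts are 2**bits, so compare the exponents instead of the numbers.
print(f"hardware state bits: {hw_state_bits:,}")   # 100,000
print(f"software state bits: {sw_state_bits:,}")   # 8,388,608 -- ~84x more
```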
 
Speaking as a professional OO programmer, in a sense you're right that we don't know what's going on a lot of the time - but that's kind of the point. A well-written software component does what it is advertised to do, and you don't need to know how it does it. If you do need to know what's going on inside an object, that indicates that it hasn't been written correctly. Not knowing what is going on is thus the highly desirable goal of the exercise. :)

But it is precisely this that makes software so much harder to verify than hardware.
 
But it is precisely this that makes software so much harder to verify than hardware.
No, it makes verification much easier, because it compartmentalizes it. If it's done properly, of course. There are always ways to make software completely unmaintainable. But there are also equally good ways to make it easy to track down (and fix) bugs. Good object-oriented programming is one of those ways.
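
As a minimal sketch of what "compartmentalized verification" means in practice (all names here are hypothetical, purely for illustration): the component is tested only against its advertised contract, so a bug inside it can be found and fixed without touching any of its users.

```python
# Minimal sketch: verify a component against its contract, not its internals.
# The class and its contract are hypothetical, invented for illustration.

class Clipper:
    """Advertised contract: clip() returns only points inside the viewport."""

    def __init__(self, width: int, height: int):
        self.width = width
        self.height = height

    def clip(self, points):
        # Callers never need to know *how* clipping is done -- only that the
        # contract holds, which is exactly what the test below checks.
        return [(x, y) for (x, y) in points
                if 0 <= x < self.width and 0 <= y < self.height]


def test_clip_respects_viewport():
    c = Clipper(640, 480)
    out = c.clip([(10, 10), (700, 10), (-1, 5), (639, 479)])
    assert all(0 <= x < 640 and 0 <= y < 480 for (x, y) in out)
    assert (10, 10) in out and (639, 479) in out


test_clip_respects_viewport()
```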
 
You see, in this particular case, things are very much in favor of FPGA. So much so that I suspect that the overhead is probably lower than 5%. Even at 1%, they still have 15 redundant clusters, 7 times more than our GPU.
Actually, it seems Altera thinks even 8% or so overhead for redundancy is acceptable. Granted, I've taken that number from a very old paper (2002 - http://www.altera.com.cn/literature/wp/wp_stx_compare.pdf). (Very strange marketing, btw: a big hoopla over die size vs. the competitor, while never even disclosing die size, and calculating "effective die size" including wafer defects, as if customers would care. I guess there was some strange battle going on then between Xilinx and Altera... The 8% comes from Altera's claim that the actual die size is 13% larger than Xilinx's, while without the redundancy overhead they say it would be less than 5% larger.) It's quite possible, though, that the 8% number has indeed gone down with the much higher unit count.
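
For what it's worth, the ~8% figure falls out of those two quoted numbers directly; a quick sanity check using only the figures from the whitepaper:

```python
# Sanity check of the ~8% redundancy-overhead figure, using only the numbers
# quoted above from the 2002 Altera whitepaper.

actual_ratio = 1.13        # Altera die claimed to be 13% larger than Xilinx's
no_overhead_ratio = 1.05   # "less than 5%" larger if redundancy overhead is excluded

# Taken as a ratio this gives ~7.6%; the simple difference (13% - 5%) gives 8%.
# Either way it lands at "roughly 8%".
redundancy_overhead = actual_ratio / no_overhead_ratio - 1
print(f"implied redundancy overhead: {redundancy_overhead:.1%}")
```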
 
One pointer that GT300 is not ready is that all announced DX11 games have some sort of ATI/AMD sponsorship on them. This is a first for nV marketing.

When will DX11 games appear? When DX11 is available. Most will be patched-up DX10.1 titles, and all current news leads us to believe that there will be DX11 hardware available at the launch of DX11 on which we can play DX11 games.
This includes a new AAA title.

I'd be willing to bet that most, if not all, DX11-able games for some time will be using the DX11 CS4 fallback path, i.e. designed for DX10 hardware. And of these, nearly all will have a console version, which means that developers are not even close to really using DX11. Just getting the art pipelines (and engines) ready for tessellation is going to take some time, not to mention developers learning how to use compute beyond trivial post-process optimization!

It will be interesting to see how important the DX11 launch will be; maybe having a DX11 GPU ready at MS's DX11 launch is less important than keeping good price/performance in current hardware lines.
 
One pointer that GT300 is not ready is that all announced DX11 games have some sort of ATI/AMD sponsorship on them. This is a first for nV marketing.

When will DX11 games appear? When DX11 is available. Most will be patched-up DX10.1 titles, and all current news leads us to believe that there will be DX11 hardware available at the launch of DX11 on which we can play DX11 games.
This includes a new AAA title.

To come back to those games: we know DiRT2 supports it, and DICE's Frostbite does too, but it's not known whether there'll be a Frostbite-based game this year. Some rumors say there are 6 confirmed DX11 titles for this year.
 
When was RV740 launched? In March? OK, let's say May. It's been 2+ months. Was RV770 successful in September? How much time do we need to consider a chip successful?
I'm just pointing out a reason why a "ready" chip may be "put on hold" because of issues besides the chip itself.
R520 was "ready" in November 2004 (it was supposed to have taped out then). Was another tape-out required to make R520 work, finally? That was a library problem.

Of course in this case there are people asserting that no version of GT300 has taped out. Seems pretty unbelievable.

RV670, G92, G94, G94b. Dunno about G92b.
I'm talking about what the roadmap has on it; for example, RV670 was supposed to have been January 2008.

So why do you think that these chips (there were more than 3; let's not forget GT214 and GT212) are all that NV's been working on since last year? Wouldn't it make more sense for them to work on the G30x generation in parallel, using the GT21x generation as "testing" chips?
Of course.

...Or it could be the case of 40G being unable to handle GT21x line-up...
I suppose it's possible. Bumpgate seems to indicate NVidia was trying to do a combination of things that was "wrong", and which outsiders apparently would have advised against. Who knows what NVidia's intentions were.

GT200b went to production in B2 revision. B3 was made for GTX295. It's not like B2 was faulty or anything.
Wasn't a version before B2 in Teslas? I think there were quite a few months of 55nm Teslas out there ahead of the consumer version.

Otherwise it's hard to say anything about NV's execution, because it's hard to determine the reasons for NV's woes lately. Was the GT21x postponement/cancellation a problem of NV's execution or TSMC's technology? Was GT200's lateness (and comparative suckiness) a problem of NV's execution or NV's tactical mistake? I'd say that they are executing as well as they can right now. It's not like they have many options other than TSMC's 40G.
Tactics and execution, it seems. There doesn't seem to have been anything wrong with 55nm, if you look at AMD's results with it. That's not to say that RV770 hasn't gotten a lot better since June last year. The power/temperatures of recent RV770s are much better than the first ones.

If NVidia's chosen an irksome region of the "55nm schmoo" then it kinda seems likely that 40nm will treat GT300 worse. So while it's strange to hear accusations that GT300 hasn't taped out, you can't blame people for thinking there might be a grain of truth in it. Worst case we could be looking at another "G100->GT200" situation here. What's different this autumn, as compared with autumn 2007, is that AMD has signalled high-end competitiveness and there is an important OS launch, alongside the prospect of significantly rising sales in post-recession confidence.

Jawed
 
Wasn't a version before B2 in Teslas? I think there were quite a few months of 55nm Teslas out there ahead of the consumer version.

B2 was used in Teslas and Quadros; consumer cards were shipped with B3 except for some corner cases. IOW, B2 was not ready for the consumer market. Can you say "binning"?
 
I'd be willing to bet that most, if not all, DX11-able games for some time will be using the DX11 CS4 fallback path, i.e. designed for DX10 hardware. And of these, nearly all will have a console version, which means that developers are not even close to really using DX11. Just getting the art pipelines (and engines) ready for tessellation is going to take some time, not to mention developers learning how to use compute beyond trivial post-process optimization!

It will be interesting to see how important the DX11 launch will be; maybe having a DX11 GPU ready at MS's DX11 launch is less important than keeping good price/performance in current hardware lines.

DX11 titles will be based on DX10.1 titles, not DX10.
Performance improvements in 10.1 (think H.A.W.X.) will be there for the DX11 path as well. According to Huddy the biggest improvements will be in shading for those titles.
As far as I know, all the DX11 titles had DX10.1, and the DX11 path is based on that and not on DX10. Native DX11 titles will use tessellation; the first two of those will be DiRT2 and C&C4.
 
DX11 titles will be based on DX10.1 titles, not DX10.
Performance improvements in 10.1 (think H.A.W.X.) will be there for the DX11 path as well. According to Huddy the biggest improvements will be in shading for those titles.
As far as I know, all the DX11 titles had DX10.1, and the DX11 path is based on that and not on DX10. Native DX11 titles will use tessellation; the first two of those will be DiRT2 and C&C4.

Has C&C4 been confirmed as a DX11 title?
And DiRT2 "native DX11"? Not really, but they will make some extra effort for it; they pushed the planned September release to December for DX11.
 
Has C&C4 been confirmed as a DX11 title?
And DiRT2 "native DX11"? Not really, but they will make some extra effort for it; they pushed the planned September release to December for DX11.

I suppose "native" may be open to interpretation, but I believe the PC port of Dirt 2 was always targeted at some DX11 features.

Regards,
SB
 
Huddy was referring to compute shaders, not graphics shaders.

He was talking about putting things like post-processing, physics and more into a compute shader to speed them up.

Also, no official word on C&C4, but it should be pretty much a given.
 
Also, no official word on C&C4, but it should be pretty much a given.

Well, the SurRender 3D-based W3D / SAGE / RNA engine isn't exactly known for always being on the cutting edge of tech. Apparently the RNA generation used by RA3 did have some DX10 and 10.1 effects, though, but I didn't find much solid information on it with quick googling. C&C4 will use the RNA engine too, but of course it is possible it'll be upgraded for DX11.
 
B2 was used in Teslas and Quadros; consumer cards were shipped with B3 except for some corner cases. IOW, B2 was not ready for the consumer market. Can you say "binning"?
GTX285 was B2, wasn't it? B3 was GTX295 and GTX275.

And way before GTX285 appeared there were Teslas based on GT200. It's my understanding that no 65nm Teslas were sold; NVidia delayed them till the 55nm version of the chip was working "good enough", which was around September.

I wouldn't be surprised if some demo/donated Teslas with 65nm GT200s are out there.

Jawed
 
GTX285 was B2, wasn't it? B3 was GTX295 and GTX275.
nope,

http://www.overclock3d.net/gfx/articles/2009/01/13172638795l.jpg

That's from the OC3D review. Only a few B2 samples made their way to reviewers:

http://www.geeks3d.com/public/jegx/200901/geforce-gtx-285-gpu.jpg
http://images.bit-tech.net/content_images/2009/01/nvidia-zotac-geforce-gtx-285-1gb/7.jpg
http://resources.vr-zone.com//uploads/6431/9106.jpg

And their 55nm parts were relatively easy. The same problems they are having with the 55nm GT200 and the 40nm GT21Xs are plaguing the GT300.

And way before GTX285 appeared there were Teslas based on GT200. It's my understanding that no 65nm Teslas were sold; NVidia delayed them till the 55nm version of the chip was working "good enough", which was around September.

I wouldn't be surprised if some demo/donated Teslas with 65nm GT200s are out there.

Jawed

The B2s had to lower the core clock by 50MHz and the memory by 200MHz to run properly (S1070); newer revisions run at the same core frequency as the B3s.
The first batches that were made in July were deemed "good enough" for the very low-volume workstation market. The first "mass-production" chips came back in August; the first card to use the B2s was the FX5800, launched in November, so it took them 5 months to create that card. The Tesla, however, was announced in June, a full month before the first B2 respin chips came back, and it was released at the end of September. The B3 Teslas were shipped on January 22nd, a week after the GTX285.

In Tesla's case, the chip still went by the side nomenclature T10P.

Computerbase had an article on the C1060 and it had an A2 on it; I don't know if they actually had the card or just used their own GTX280. But basically, up until September there was no 55nm Tesla.
 
He was talking about putting things like post-processing, physics and more into a compute shader to speed them up.

Yep, all of which can be done on DX10 hardware. It looks like most of the early perf gains / quality improvements will be via CS and the multi-threading hijinx.
 
Yep, all of which can be done on DX10 hardware. It looks like most of the early perf gains / quality improvements will be via CS and the multi-threading hijinx.

But wouldn't you be limited by the lower number of threads on CS4?
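
For reference, a rough comparison of the DirectCompute limits as I recall them from the D3D11 documentation (worth double-checking, but it shows why the cs_4_x fallback is constraining):

```python
# Rough comparison of DirectCompute capabilities, from memory of the D3D11
# docs -- treat the exact figures as approximate and double-check them.

#  (capability,               cs_4_x on DX10 hardware,               cs_5_0 on DX11 hardware)
CS_LIMITS = [
    ("max threads per group", "768",                                 "1024"),
    ("thread group Z size",   "1",                                   "up to 64"),
    ("groupshared memory",    "16 KB; a thread may only write its "
                              "own 256-byte window",                 "32 KB, unrestricted"),
    ("writable resources",    "a single raw/structured buffer UAV",  "up to 8 UAVs, typed too"),
]

for name, cs4, cs5 in CS_LIMITS:
    print(f"{name:>22}:  cs_4_x = {cs4}  |  cs_5_0 = {cs5}")
```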
 