NVIDIA shows signs ... [2008 - 2017]

G92b and GT200b are from the same generation, too.

Since when? They don't share the same architecture. You wouldn't say that Sandy Bridge is the same generation of CPU as a Donkey chip, even though some of them will be produced on the same process node, would you?
 
Juniper has no DP either. So it is not the same architecture? There is no other real difference between G92 and GT200.

No real difference? Cypress and Juniper share the same baseline architecture, and aside from DP support and the fact that one is twice the size of the other, they are almost exactly the same chip. Juniper is to Cypress what the G94 is to the G92.

On the other hand, GT200 and G92 are different architectures. G92 has its basis in the G80, whilst the GT200 architecture is something newer again.
 

But who cares?
Why don't we compare the other specifications like TMUs, ROPs, shaders and memory interface?
5870 mobile vs 5870 desktop:
40 TMUs <> 80 TMUs = 50% of Cypress
16 ROPs <> 32 ROPs = 50% of Cypress
128-bit <> 256-bit = 50% of Cypress
800 shaders <> 1600 shaders = 50% of Cypress

GTX 280M (G92) vs GTX 280:
64 TMUs <> 80 TMUs = 80% of GT200
16 ROPs <> 32 ROPs = 50% of GT200
256-bit <> 512-bit = 50% of GT200
128 shaders <> 240 shaders = 53.3% of GT200
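
For anyone who wants to double-check the arithmetic, here is a quick Python sketch using exactly the figures quoted above (the labels are just for printing):

    specs = {
        "5870 mobile vs 5870 desktop": [
            ("TMUs", 40, 80),
            ("ROPs", 16, 32),
            ("bus width (bit)", 128, 256),
            ("shaders", 800, 1600),
        ],
        "GTX 280M (G92) vs GTX 280": [
            ("TMUs", 64, 80),
            ("ROPs", 16, 32),
            ("bus width (bit)", 256, 512),
            ("shaders", 128, 240),
        ],
    }
    for comparison, rows in specs.items():
        print(comparison)
        for name, mobile, desktop in rows:
            # Each mobile figure as a fraction of the full desktop chip.
            print(f"  {name}: {mobile}/{desktop} = {mobile / desktop:.1%}")

Running it reproduces the 50% / 80% / 53.3% ratios above.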

Nobody cares about minor changes like more threads, a better MUL, CUDA 2.3 (or so), or three SMs per cluster instead of two. The basic architecture is the same: Vec8 ALUs, 8 TMUs per cluster, D3D10, 4 ROPs per partition. I don't see a significant difference between AMD and nVidia.
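
For concreteness, here is a rough sketch of where those totals come from, assuming the commonly cited cluster counts (8 TPCs with 2 SMs each for G92, 10 TPCs with 3 SMs each for GT200) and a 64-bit memory channel per ROP partition, which is my assumption about the usual layout:

    def chip_totals(tpcs, sms_per_tpc, rop_partitions):
        # Per the post: Vec8 ALUs (8 SPs per SM), 8 TMUs per cluster (TPC),
        # and 4 ROPs per partition.
        return {
            "shaders": tpcs * sms_per_tpc * 8,
            "TMUs": tpcs * 8,
            "ROPs": rop_partitions * 4,
            "bus width (bit)": rop_partitions * 64,  # assumed 64-bit channel per partition
        }

    print("G92:  ", chip_totals(tpcs=8, sms_per_tpc=2, rop_partitions=4))
    print("GT200:", chip_totals(tpcs=10, sms_per_tpc=3, rop_partitions=8))

That reproduces the 128 / 64 / 16 / 256-bit and 240 / 80 / 32 / 512-bit figures in the GTX 280M vs GTX 280 comparison above.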
 
Well, they made enough noise about the fact that they're not allowed to use QPI or DMI, so is anyone surprised?

And hey, Pineview is cursed with a GMA chip on board; it's not as if it doesn't need all the help it can get.
 
But who cares?
Why don't we compare the other specifications like TMUs, ROPs, shaders and memory interface?

Nobody cares about minor changes like more threads, a better MUL, CUDA 2.3 (or so), or three SMs per cluster instead of two. The basic architecture is the same: Vec8 ALUs, 8 TMUs per cluster, D3D10, 4 ROPs per partition. I don't see a significant difference between AMD and nVidia.

Nobody cares about improved power management in an application where it's most important? Nobody cares about programmability when it's becoming extremely useful for encoding/decoding as well as Photoshop and the like? Nobody cares about the differences in the shader architecture? Of course not.

The major difference between the two is that the AMD chips are from the same generation and share a common feature set. The Nvidia chips are not from the same generation, no matter how similar some of their characteristics are. Furthermore, the AMD chips follow a consistent naming scheme whereas the Nvidia parts do not: x8xx is always high end, x6xx is always mainstream, and so on.
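
To make the naming-scheme point concrete, here is a minimal sketch of the tier mapping described above, keyed on the hundreds digit of a Radeon model number (the x7xx label is my assumption; the post only names x8xx and x6xx):

    def radeon_tier(model):
        # The hundreds digit encodes the tier: 5870 -> 8, 5670 -> 6, etc.
        digit = (model // 100) % 10
        tiers = {8: "high end", 7: "performance", 6: "mainstream"}  # x7xx label assumed
        return tiers.get(digit, "entry level")

    print(radeon_tier(5870))  # high end
    print(radeon_tier(5670))  # mainstream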
 

I have to say that doesn't put a very positive light on it there, sir, at least as a mobile chipset. As a lightweight desktop chipset we'll have to see what AMD comes up with in competition, but I wouldn't expect that it's 100% rosy there either, since AMD can get both the GPU boys and the CPU boys actually co-operating. My overall suspicion is that both Nvidia and Intel would have done a lot better here had they actually worked together.
 
The major difference between the two is that the AMD chips are from the same generation and share a common feature set.

So can you tell me the difference in the feature set between G92 and GT200?

The Nvidia chips are not from the same generation, no matter how similar some of their characteristics are. Furthermore, the AMD chips follow a consistent naming scheme whereas the Nvidia parts do not: x8xx is always high end, x6xx is always mainstream, and so on.

G92 is the mainstream part of the GT200 line. You have no problem with AMD selling Juniper as a mobile 5800, but you complain about the same thing when nVidia does it?
 
G92 launched in October 2007.
GT200 launched in June 2008.

NV ALWAYS launches the high-end derivative of a product family first.

You are wrong.

And which was the performance part? Oh wait: they've only been selling GT200 cards since June 2008. :rolleyes:
G92b came in June, at the same time nVidia launched the GT200 cards.
 
You seem to be forgetting about Tegra2 :)

So is the rest of the industry. Yes, I know about the Google tablet, yes I know about Unreal engine, and yes, I know that both of those are also in bed with many others.

How many design wins did NV claim for Tegra? How many materialized? How many sold more than 1K units? Tegra2 has the industry buzz, along with Qualcomm, for ARM-based widgets, but for real products to come out, meaning actual sales, they have to get their power use under control.

Good luck there guys, the phone (and related widgets) guys don't believe spec sheets any more, they test. NV... umm... is.... err.... better at the paper side.

-Charlie
 
Not telling you what to accept, but unless your complaining actually has a chance of changing something, I just don't see the point. OEMs obviously don't mind peddling the products, and they are more informed than anybody else. They would be the front line in any pushback.

Why did they announce the mobile line on the Tuesday after Christmas, and a week and a few days before their big press conference at the largest trade show in the world for consumer electronics? I did beat them by a day, but it was purely coincidental.

They are in reactionary mode, or their marketing people should be taken out back and shot on grounds of improving humanity's average IQ.

-Charlie
 
G92 is the mainstream part of the GT200 line. You have no problem with AMD selling Juniper as a mobile 5800, but you complain about the same thing when nVidia does it?

G92 is the mainstream part of the G80 generation. GT200 doesn't have much in the way of mainstream parts, as it appears they were cancelled. It would be the same thing if the mobile 5870 were simply an RV770 shrunk to 40nm and called the mobile 5870; then it would be a last-generation product posing as a current-generation product, and both companies would be doing the same thing.
 
Sontin said:
G92b and GT200b are from the same generation, too.

Juniper has no DP either. So it is not the same architecture? There is no other real difference between G92 and GT200.


So I guess that would mean that the GTX 285/275/265, etc. are simply renamed 8800 GTX/GTS/GT (same generation... so a GTX 295 = 9800 GX2)? Ah, OK, thanks for clearing that up.

Also interesting that the Fermi-based Teslas (2050/2070) are not the same architecture as the GeForce GT300 series, given that the GeForce lacks ECC. I guess that would also mean that the G80 and G92 are not the same architecture, as the G80 lacks the PureVideo VP2 and Compute 1.1 support that the G92 has... but wait... so they are all the same "generation" but not of the same "architecture"...

Damn, I'm soo confused now...
 