Nvidia GT300 core: Speculation

If so, could a 32nm "GT300" arrive in the February/March/April timeframe? Could Nvidia be planning on just one re-spin?
We can expect the first 32nm chips in 1H/2010, but I do not think Nvidia will simply shrink GT300 to 32nm. I think we will see changes like the switch from G80 to G92: GT300 will not be as big as GT200 and will not consume as much energy, so there is no urgent need to shrink it as quickly as possible.

It would be fine if we saw 32nm mainstream solutions derived from GT300 in the first quarter of 2010.
 
It would be fine if we saw 32nm mainstream solutions derived from GT300 in the first quarter of 2010.

I would be surprised to see any GT300 parts in Q1/2010. Getting GT215 out the door in that timeframe is what we should expect, not a shrink of a processor that is already too big for 40nm jumping to 32nm without a hassle.
 
Well, "zero" relation is probably going a bit too far...but I think we can expect that they are independent projects.

In a way, yes, GT300 is way different from GT200, but they share oh so many things (and oh so many of the same problems).
 
If you don't plan to sort secondary rays to extract locality, you shouldn't even bother trying to run that stuff on a wide SIMD architecture.
Even if you do, whenever the ratio of rays to primitives in a given region (either in the bounding volume hierarchy or at the actual geometry level) is << 1, it won't do you any good.

PS: the structures necessary to group rays (apart from the trivial grouping of rays that intersected the same surface on the first bounce) will not fit in cache either, of course...
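
To put the locality point in rough code terms, here is a minimal sketch (CUDA, with an illustrative binning scheme and made-up names, not anything from a real renderer) of the first half of such a scheme: bucket secondary rays by quantized direction so that each SIMD batch traverses the BVH with more coherent rays.

#include <cuda_runtime.h>

struct Ray { float ox, oy, oz; float dx, dy, dz; };

// Map a direction onto one of 6*n*n bins: the dominant axis and its sign pick
// one of six "cube faces"; the two minor components are quantized into an
// n x n grid on that face.
__device__ int directionBin(const Ray& r, int n)
{
    float ax = fabsf(r.dx), ay = fabsf(r.dy), az = fabsf(r.dz);
    int face; float m, u, v;
    if (ax >= ay && ax >= az) { face = r.dx > 0.f ? 0 : 1; m = ax; u = r.dy; v = r.dz; }
    else if (ay >= az)        { face = r.dy > 0.f ? 2 : 3; m = ay; u = r.dx; v = r.dz; }
    else                      { face = r.dz > 0.f ? 4 : 5; m = az; u = r.dx; v = r.dy; }
    int iu = min(n - 1, (int)((u / m * 0.5f + 0.5f) * n));
    int iv = min(n - 1, (int)((v / m * 0.5f + 0.5f) * n));
    return (face * n + iu) * n + iv;
}

// Pass 1: histogram rays per bin. A prefix sum over counts then yields each
// bin's output offset, and a second pass scatters rays to their slots, so the
// tracer can launch one (mostly) coherent batch per bin.
__global__ void countRays(const Ray* rays, int numRays, int n, unsigned* counts)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < numRays)
        atomicAdd(&counts[directionBin(rays[i], n)], 1u);
}

Even then this only buys direction coherence; it does nothing about origin locality, and the bins themselves are exactly the kind of structure that won't fit in cache.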
 
I'm not sure that a SW renderer is really that much less efficient (power-wise) than a hardware one. I haven't seen any hard numbers one way or the other, and it really depends on the workload. I wouldn't trust the 20X he mentioned; I bet it's far lower on average.
I'm thinking it's some kind of narrowly defined power-load estimate:
something like how much power we'd expect one ROP to pull for a given pixel versus what a software loop running on a CPU would pull.

I'd wonder if the assumed case is one where a ROP is fully exercised, including all the parts of its functionality that are specialized or cut corners, versus running on a full-precision CPU ALU.
Such a case would ignore idle times and would not account for the fact that the ROPs are only one part of the overall power consumption of the die.

It might be more compelling if the GPU were able to do what Larrabee does with its L2 caches and keep the tile on-chip at all times.
The power cost of the non-tiled solution's higher bandwidth consumption might negate part of the specialization advantage.
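
Back of the envelope, with purely illustrative numbers: a 1920x1080 framebuffer at 8 bytes/pixel (color + Z) with an overdraw of 3, counting both reads and writes, is roughly 2M x 8 x 3 x 2 ≈ 100 MB per frame, or ~6 GB/s at 60 fps. If an off-chip DRAM access costs on the order of 20 pJ/bit versus ~1 pJ/bit for an on-chip cache, keeping the tile on-chip would save roughly 4.8e10 bits/s x 19 pJ/bit ≈ 0.9 W on framebuffer traffic alone. The individual figures are guesses, but that's the order of magnitude at stake.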


Anyway, Bill's really known for his emphasis on bandwidth at the expense of latency... and that kind of thought process is a better fit at the national labs or NV than at AMD or Intel.
I think that's why he was emphasizing on-chip hierarchies, since he noted off-chip bandwidth had become so valuable.
 
The GT21x line has zero relation to the GT3xx line.

I think Charlie means you when he writes this:

The fanbois are sure to say something stupid about Nvidia having better performance, but they would need to be twice as fast to make up for the cost deficit. Based on information we have about the architecture, that is not going to happen, no chance. Keep in mind these people are the ones who claim that GT300 taped out months ago, and is smaller than GT200b, and.... and.... and Nvidia must be sitting on a warehouse full of parts for some conspiracy laden reason.

http://www.semiaccurate.com/2009/08/13/gt300-have-nvio-chip/
 
Wow... if that is even close to right, G300 really is an R600 fiasco of sorts.
G300 is bigger and more expensive than Cypress at about the same performance, whereas R600 was smaller and slower than G80.

The best part is down in my signature from a few months ago.
Good call neliz! :yes::runaway:
 
You're not really going to believe anything that comes out of that spew hole Charlie calls a mouth, are you? Seriously, the most reliable information that can be garnered from that fool is to take everything he says and believe the opposite.
 
You're not really going to believe anything that comes out of that spew hole Charlie calls a mouth, are you? Seriously, the most reliable information that can be garnered from that fool is to take everything he says and believe the opposite.
Now, now... even a broken watch is correct twice a day. The opposite of the time on a broken watch is also only correct twice a day ;)
 
You're not really going to believe anything that comes out of that spew hole Charlie calls a mouth, are you? Seriously, the most reliable information that can be garnered from that fool is to take everything he says and believe the opposite.

LOL. Like his 'crazy' claims about NV's packaging problems?

Sorry, Charlie actually gets things right. He does have a bias, but yours appears to be worse.

DK
 
LOL. Like his 'crazy' claims about NV's packaging problems?

Sorry, Charlie actually gets things right. He does have a bias, but yours appears to be worse.

DK

What bias? The only bias I have is against a fool who drags the entire tone of internet journalism down. Sure, he gets the odd thing right. I could do the same with a couple of dodgy sources and enough random predictions. But most of the rubbish he spews is plain hate-filled drivel, be it about NV or Microsoft.
 
http://farrarfocus.blogspot.com/2009/08/siggraph-2009-thursday-frostbite.html#comments

repi said:
Good post :) Really would like to find some time to try out both the parallel reduction for the min/max z calculation as well as the non-atomic stream compaction for the visible light index list.

Though it felt quite good to have such a rather naive compute shader (wrt the atomic usage) run as well as it did on AMD DX11 HW; it will be interesting to see (quite soon) if this changes on Nvidia DX11 HW!
Quite soon, hey!
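
For the curious, the min/max z half is the classic shared-memory tree reduction. A rough sketch in CUDA terms (illustrative only; repi's version is a DX11 compute shader where groupshared memory plays the same role, and the tile size and names here are my own):

#include <cuda_runtime.h>
#include <float.h>

#define TILE 16   // one 16x16 screen tile per block; launch with dim3 block(TILE, TILE)

__global__ void tileMinMaxZ(const float* depth, int width, int height,
                            float2* tileMinMax)
{
    __shared__ float sMin[TILE * TILE];
    __shared__ float sMax[TILE * TILE];

    int x = blockIdx.x * TILE + threadIdx.x;
    int y = blockIdx.y * TILE + threadIdx.y;
    int t = threadIdx.y * TILE + threadIdx.x;

    bool in = (x < width && y < height);
    float z = in ? depth[y * width + x] : 0.0f;
    sMin[t] = in ? z : FLT_MAX;  // identity for out-of-screen threads
    sMax[t] = z;                 // 0 is the max identity, assuming depth in [0,1]
    __syncthreads();

    // Tree reduction in shared memory: no atomics needed.
    for (int s = (TILE * TILE) / 2; s > 0; s >>= 1) {
        if (t < s) {
            sMin[t] = fminf(sMin[t], sMin[t + s]);
            sMax[t] = fmaxf(sMax[t], sMax[t + s]);
        }
        __syncthreads();
    }

    if (t == 0)
        tileMinMax[blockIdx.y * gridDim.x + blockIdx.x] =
            make_float2(sMin[0], sMax[0]);
}

The non-atomic stream compaction for the light index list would follow the same pattern: a per-group prefix sum over the visibility flags gives each thread its write slot, with no atomic increment on a shared counter.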

Jawed
 
What bias? The only bias I have is against a fool who drags the entire tone of internet journalism down.

I don't wholly agree with his tone either, but that hardly makes him a fool. Frankly, if you want to look at fools, there are quite a few out there in more distinguished places: folks who don't fact-check, or who are surprised by things like the P4's well-documented replay mechanism.

Sure, he gets the odd thing right. I could do the same with a couple of dodgy sources and enough random predictions.

Charlie was the first, or one of the first, to write about AMD/ATI, AMD and Abu Dhabi, quite a few Intel-related things, the packaging problems for GT200, etc.

He's gotten a lot right. He's gotten stuff wrong too, but that's what happens when you make predictions.

I wrote an analysis of what I thought CSI would be, and the reality is that I was 95% correct, but I totally missed the boat on current mode signaling. That hardly makes me a moron.

If you can do better than Charlie, then perhaps you should start writing and making predictions. I know Charlie very well, and there's no way that I could do better than him; he has amazing sources, and a lot of the info he sees he never writes about.

But most of the rubbish he spews is plain hate-filled drivel, be it about NV or Microsoft.

He does have an axe to grind against both of them (from your perspective, although I'd also agree to some extent). All that means is that as an educated reader you should take what he says about those two companies with a grain of salt. I don't believe everything Charlie says, but I definitely listen/read carefully.

Maybe it's just obvious that someone who runs a website like AMDzone or nvnews has a bias...but frankly, everyone has a bias, even myself.

And people who discount what Charlie says are doing so because of a bias that I (and most others) perceive to be irrational.

David
 
As if that wasn't obvious, you Intel fanboy, you. At least you like Myth and a few other good games, which sort of atones for some of your sins :p

Then let's all kiss and make up, since Charlie has an actual Williams Defender arcade box.

Cheers
 