Nvidia GT300 core: Speculation

How necessary would that be? The integer side would control the program counter, and so it would control instruction fetch.
By deciding not to branch to a certain portion of a fiber, it can dictate what the vector unit does.
Well, you've got me thinking about how they're going to make predication nest - surely this is a purely software construct.

I'm thinking it must be a software construct because there are only 4 hardware contexts - fibres don't have a hardware context. So the data to push into and retrieve from the mask register must be maintained by the programmer (compiler), which seems to mean there'll be code maintaining software-implemented predicate stacks, per qquad (and/or per fibre).

So I think you're right: branching is purely manipulation of some user state per fibre (and optionally per qquad) and doesn't need dedicated instructions - something like the sketch below.
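To make the "software predicate stack" idea concrete, here's a rough sketch of the per-fibre bookkeeping the compiler might emit. The types and names are mine and purely illustrative, assuming a 16-lane vector unit with a writable execution mask - nothing here is Larrabee's actual ISA or ABI:

#include <cstdint>
#include <vector>

using Mask = uint16_t;              // one bit per lane of a 16-wide vector

struct PredicateStack {
    std::vector<Mask> saved;        // compiler-managed stack, one per fibre
    Mask current = 0xFFFF;          // all lanes active to start with

    // Entering "if (cond)": remember the enclosing mask, then narrow it.
    void push_if(Mask cond) {
        saved.push_back(current);
        current &= cond;            // only lanes where cond holds stay active
    }
    // "else": lanes active in the enclosing scope that did not take the "if".
    void do_else() {
        current = saved.back() & ~current;
    }
    // Leaving the construct: reconverge to the enclosing mask.
    void pop_endif() {
        current = saved.back();
        saved.pop_back();
    }
};

The vector unit would then run with "current" loaded into the mask register, so nesting depth is bounded only by how much of this state you're willing to spill, not by anything in hardware.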

Jawed
 
By the way, although LRB could theoretically do a whole damn lot through its drivers, I'd rather describe it as a software TBR than a TBDR. I doubt it'll defer much more than any IMR out there.

As far as overdraw hypothetically being speculative execution goes, I severely doubt it's the case on a PowerVR either.

In any case, in the back of my mind the most important aspect is that if the game engine/application code isn't tailored to occlude, there's next to nothing any software or hardware occlusion can do in the end.
 
Yes, G300 development is going well from what I have heard. But I do not think G300 will tape out this quarter.

It will be a very nice chip - many people have told me so. Of course mostly Nvidia guys, but I am really optimistic, too. They have changed several things. You can be sure that it will not be a lame update like G70 or G200.
 
Compared to Larrabee

Based on this info from Wikipedia:

"Graphs show how many 1 GHz Larrabee cores are required to maintain 60 FPS at 1600x1200 resolution in several popular games. Roughly 25 cores are required for Gears of War with no antialiasing, 25 cores for F.E.A.R with 4x antialiasing, and 10 cores for Half-Life 2: Episode 2 with 4x antialiasing. It is likely that Larrabee will run faster than 1 GHz, so these numbers are conservative."

[Image: Slide_scaling.jpg]


That means 3 on the chart is 60 fps. In F.E.A.R. a GTX 280 does 120 fps at the same settings; I'm not sure about the rest of the hardware setup, but I doubt they would be demonstrating it on slow hardware. I also doubt that they would demo games that aren't particularly well suited to Larrabee.

So if the GT300 is twice as fast as the GTX 280, it should be about as fast as a 32-core Larrabee at 3 GHz, in F.E.A.R. at least.
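For what it's worth, the back-of-the-envelope arithmetic behind that (assuming Larrabee performance scales linearly with cores times clock, which is a generous assumption):

\[
\frac{2 \times 120\ \text{fps}}{60\ \text{fps}} \times 25\ \text{cores} \times 1\ \text{GHz} = 100\ \text{core-GHz} \approx 32\ \text{cores} \times 3\ \text{GHz} = 96\ \text{core-GHz}
\]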

I doubt it'll actually launch at 3 GHz though.

What would the power requirements be at that speed?
 
You don't want to predicate this:
<snip>
You don't want to use conditional branch instructions for this at all; you use zero-overhead looping on any sane architecture.

I wonder if Larrabee breaks with x86 tradition and finally implements the loop instruction in hardware again.
 
By the way, although LRB could theoretically do a whole damn lot through its drivers, I'd rather describe it as a software TBR than a TBDR. I doubt it'll defer much more than any IMR out there.
Except deferring the entirety of rasterization until after binning, of course.
 
You don't want to use conditional branch instructions for this at all; you use zero-overhead looping on any sane architecture.

I wonder if Larrabee breaks with x86 tradition and finally implements the loop instruction in hardware again.
Zero-overhead looping is the last of your worries if you have to execute a loop like that. You do want to use conditional branches when they can save you from performing a lot of computations. (Well, unless your code is running on NV4x/G7x :) )
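To illustrate the trade-off (a hedged sketch, not any real architecture's code generation - the 16-lane width and the function names are just assumptions): with predication the guarded work runs for every lane and the masked-off results are thrown away, while a single branch over the whole vector can skip it entirely when nobody needs it.

#include <cstdint>
#include <cmath>

constexpr int LANES = 16;

// Stand-in for some heavy per-lane computation.
static float expensive(float x) { return std::exp(std::sin(x)) * std::sqrt(x + 1.0f); }

// Predicated version: expensive() executes for all lanes; the predicate only
// selects which results are kept (a masked move, not control flow).
void predicated(const float* in, float* out, uint16_t mask) {
    for (int i = 0; i < LANES; ++i) {
        float v = expensive(in[i]);                  // always paid for
        out[i] = ((mask >> i) & 1) ? v : out[i];     // select by predicate
    }
}

// Branching version: one test can skip the whole thing for the vector.
void branched(const float* in, float* out, uint16_t mask) {
    if (mask == 0) return;                           // no lane needs it: zero work
    for (int i = 0; i < LANES; ++i)
        if ((mask >> i) & 1) out[i] = expensive(in[i]);
}

Which one wins depends on how often the mask turns out to be all-zero and how expensive the guarded code is, which is exactly the point being made above.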
 
Not to mention it will be another 6-9 months before GT300 variants tap into the mainstream market...

My hope is that ATI will continue to kick some butt and move Nvidia forward...
 
Not to mention it will be another 6-9 months before GT300 variants tap into the mainstream market...

My hope is that ATI will continue to kick some butt and move Nvidia forward...

Neither of the two will - or rather can - be on a constant winning roll. To be frank though, I'd personally prefer it if the price/performance ratio remains at today's levels.
 
Well, if ATI pulls another RV670 -> RV770, and Nvidia stays with lackluster improvements like G80 to G90 to GT200, then ATI will be back in the performance leadership.

I doubt it very much and have high hopes for the GT300.
 
History tends to repeat itself; that would mean GT300 will be successful while the R8xx family will be plagued by driver problems, bad design decisions, other unanticipated problems not directly related to AMD/ATI, and - oh, did I mention terrible drivers?
 