Nvidia GT300 core: Speculation

If I take that to mean G302 would be a G300 shrink, then I reaaally have no idea what you're trying to say. Don't tell us that you think G300 is a 55nm part now...
I mean that it may turn out to be a mistake that they went with the top-end G300 first instead of a more direct RV870 competitor in the form of a less complex G30x chip (G302 or whatever - it doesn't really matter).

Haha, wow. You've recycled some other users' posts in there to get to the same point.
As I've said, you may think whatever you want.

G300 did not fly out of the gate, or we would have had some leaks of it at Computex.
How many "leaks" about G80 did we have at Computex '06?
 
You are just ignoring the whole basis of the link you posted and taking the discussion into another area (who is more reliable than whom). My point here is that Theo is utterly unreliable when he's asking CJ to comment on his own "news story".

...and I've told you already what the answer from CJ will be, although I personally have never talked with the gentleman directly. For the time being I doubt anyone apart from real insiders knows with certainty what development stage the chip is at.

As for someone like Theo or anyone else, I find it quite natural that they constantly ask around as much as they can; otherwise they'd never find out anything. To steal from one source is plagiarism; to steal from many is research.
 
http://www.xbitlabs.com/news/video/...formance_by_2015__Nvidia_Chief_Scientist.html

I tried to read Damien's interview with Dally through online translation, but the result is, as always, close to useless.

In addition, Mr. Dally emphasized the importance of further parallelism in graphics processors and implied that, in the future, graphics processing units should transition to a MIMD architecture.

Well, I can make out something like an increase in parallelism from the translated mumbo jumbo, but I fail to see anything that would directly imply MIMD.

Mr. Triolet, are you reading?
 
I can translate the relevant sentence (a bit long, with complicated grammar for a stupid translator's parser):

Bill Dally indique également que les GPUs devront évoluer pour être plus efficaces dans l’exécution des tâches moins multi-threadées, c’est-à-dire pouvoir exploiter le parallélisme au niveau des instructions en plus du parallélisme au niveau des données ou des threads (identiques) auquel sont limité les GPUs actuels, contrairement au futur Larrabee d’Intel.

"Billy Dally also points out that GPUs will have to evolve and become more efficient in executing less multi-threaded tasks; i.e. to be able to exploit instruction-level parallelism, in addition to data-level or thread-level parallelism (identical), to which GPUs are currently limited, unlike Intel's future Larrabee."
 
Hmmm, maybe it's just me, but that doesn't necessarily imply MIMD units directly. The aforementioned increases can also be achieved with optimised SIMD units, IMHLO.
 
Well, all this talk about MIMD may just mean that it is now able to run two kernels in parallel, with new blocks getting scheduled as the old ones retire.
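
The programming-model half of that already exists, for what it's worth: CUDA streams let you express two independent kernels side by side; whether they actually overlap on the chip is up to the hardware. A minimal sketch (kernel and variable names are made up):

Code:
#include <cuda_runtime.h>

__global__ void kernelA(float *x) { x[threadIdx.x] *= 2.0f; }
__global__ void kernelB(float *y) { y[threadIdx.x] += 1.0f; }

int main()
{
    float *dx, *dy;
    cudaMalloc(&dx, 256 * sizeof(float));
    cudaMalloc(&dy, 256 * sizeof(float));

    // Launches in different streams are *allowed* to overlap;
    // whether they actually do depends on the GPU generation.
    cudaStream_t sA, sB;
    cudaStreamCreate(&sA);
    cudaStreamCreate(&sB);

    kernelA<<<1, 256, 0, sA>>>(dx);
    kernelB<<<1, 256, 0, sB>>>(dy);

    cudaDeviceSynchronize();   // wait for both kernels
    cudaStreamDestroy(sA);
    cudaStreamDestroy(sB);
    cudaFree(dx);
    cudaFree(dy);
    return 0;
}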
 
There's nothing 'MIMD' about that. The dynamic warp formation theory is much more credible.
 
Why not? Aren't the CPUs of today and the LRB of tomorrow supposed to be MIMD? And if, say, a 64-core chip can execute 4 kernels simultaneously, then isn't it at least 4-way MIMD?

As for credibility? It's just my hunch.
 
OK, so if I get it correctly: on NV/AMD GPUs, all the threads that are switched around are from the same kernel. On IMG's SGX, they can all be from different kernels. Right?
 
Bill Dally didn't mention MIMD directly. I asked him what he likes and dislikes about current GPUs. On the dislike side he said everybody would like better efficiency with only a few threads (instead of thousands of them). I asked him more specifically about increasing instruction parallelism inside threads. He wouldn't be too specific, but insisted again that increasing single-thread performance was a priority. Asked about the timeframe for such changes, he said it was many years' work.
It was of course a way to ask whether it refers to "GT300" without falling into the "can't comment on unannounced products" trap.
 
Tridam said:
He wouldn't be too specific, but insisted again that increasing single-thread performance was a priority.

And MIMD doesn't help at all with single-threaded performance, so...
 
Increasing single-threaded IPC is totally opposite to the philosophy of GPUs, so far at least. If you really need single-threaded IPC, you should be using CPUs, not GPUs.
 
Increasing single-threaded IPC is totally opposite to the philosophy of GPUs, so far at least. If you really need single-threaded IPC, you should be using CPUs, not GPUs.

By "single thread" of course I didn't mean the whole GPU would execute only one thread. However, at some point you would want SMs, or any future structure, to do better with only a few threads, and even with a single thread.
 
There's a wide chasm between the single-thread performance of a GPU and a CPU that won't be closed; however, that doesn't mean improvement in this area isn't useful for a GPU.

Dally probably isn't thinking about single-thread performance, though, but rather about getting good performance from fewer threads than are required today. Sometimes you might want to execute a task with more than the 8 threads supported by a quad-core CPU, but fewer than the thousands necessary to get good performance out of a GPU.

I realize I'm saying the same thing as Tridam, just trying to explain it in a different way.
 
That can be achieved by having larger shared memories and/or VLIW. Speculative execution, reorder buffers, superscalar execution, etc. on GPUs would be a bad idea.
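
The shared-memory half of that is easy to picture. A minimal CUDA sketch (purely my own, hypothetical names): once a tile has been staged on-chip, each thread does several on-chip accesses per off-chip load, so far fewer threads are needed to cover memory latency than in a purely streaming kernel.

Code:
#define TILE 256

// Hypothetical sketch: stage a tile in shared memory once, then reuse
// it on-chip. Launch with blockDim.x == TILE and in/out sized to
// gridDim.x * TILE elements.
__global__ void smooth_tile(const float *in, float *out)
{
    __shared__ float tile[TILE];

    int g = blockIdx.x * TILE + threadIdx.x;
    tile[threadIdx.x] = in[g];      // one global load per element
    __syncthreads();                // tile is now fully on-chip

    // Three on-chip reads per off-chip one (a simple 3-tap average;
    // edges wrap within the tile just to keep the sketch short).
    int l = threadIdx.x;
    float left  = tile[(l + TILE - 1) % TILE];
    float right = tile[(l + 1) % TILE];
    out[g] = (left + tile[l] + right) / 3.0f;
}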
 
Tridam said:
It was of course a way to ask whether it refers to "GT300" without falling into the "can't comment on unannounced products" trap.

Thank you for the clarification; most helpful ;)
 