The G92 Architecture Rumours & Speculation Thread

Status
Not open for further replies.
So do you think NV will deactivate the 8C, 289mm² G92 to 6C and try to clock it to 2.4GHz SD on a GX2 SKU? :LOL:

GX2 is IMO a stupid/less-than-ideal concept until we have real multi-GPU technologies, which I doubt we will see in 2008.
 
Why is G80@65nm = G92 not a high-end chip?
Put a dual-slot cooler on it, raise VGPU to 1.25-1.3V, clock it up to ~800/2400MHz, and buy some 1.2GHz+ GDDR4 from Samsung.

In sum, you get a solution that can beat 2x RV670 in most cases. . .
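A quick back-of-envelope check on that proposal; note the 256-bit bus and the MADD-only flop counting below are my assumptions, not from the post:

```python
# Back-of-envelope throughput for the proposed clocked-up G92.
# Assumptions (mine, not from the post): full 128-SP part, 256-bit bus,
# MADD counted as 2 flops per ALU per clock.

def shader_gflops(sps, shader_mhz, flops_per_clock=2):
    """Theoretical shader throughput in GFLOPS."""
    return sps * flops_per_clock * shader_mhz / 1000.0

def bandwidth_gb_s(bus_bits, mem_mhz, data_rate=2):
    """Memory bandwidth in GB/s; effective rate = mem_mhz * data_rate."""
    return bus_bits / 8 * mem_mhz * data_rate / 1000.0

print(shader_gflops(128, 2400))   # MADD-only GFLOPS at the proposed 2.4GHz shader clock
print(bandwidth_gb_s(256, 1200))  # 1.2GHz GDDR4 on an assumed 256-bit bus
```

On those assumptions you'd get roughly 614 GFLOPS of MADD throughput and ~77 GB/s of bandwidth, comfortably above a single RV670 on paper.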

Well, I don't know that we know for sure re RV670 yet, but I will say that for an actual known factor they are competing against themselves for 8800GTX owners' upgrade dollars. And I, being in that class, look at your bid for my upgrade dollars and laugh.

It's been a year. This part doesn't get "refresh" treatment, it seems to me. I think it's going to take a GX2 to really get a "wowza, where's my wallet" from their primary market. Tho it also might be the case that their traditional market isn't the only fish they're trying to get into the frying pan this time around.

Any rate, for legal reasons I should probably shut up now, even tho I don't know anything (as of right this minute). :D
 
Well, I don't know that we know for sure re RV670 yet, but I will say that for an actual known factor they are competing against themselves for 8800GTX owners' upgrade dollars.
I do not think that upgrading to the RV670X2X (2x RV670 @ 750MHz + 2x 512MB @ 1x00MHz GDDR4) is the best idea for an 8800GTX owner, since in the best case, if CrossFire works well, I expect 2x 2900XT performance from it, which would deliver ~40% more performance on average.

And I, being in that class, look at your bid for my upgrade dollars and laugh.
Not worse than the RV670X2X, and my solution would have an advantage: if your wallet is big enough, you can buy three of them to experience "The new Ultimate Gaming". ;)
 
So do you think NV will deactivate the 8C, 289mm² G92 to 6C and try to clock it to 2.4GHz SD on a GX2 SKU? :LOL:

GX2 is IMO a stupid/less-than-ideal concept until we have real multi-GPU technologies, which I doubt we will see in 2008.
I'm slightly more optimistic. But not much.

Jawed
 
I think the explosion upwards of total graphics-solution price ranges has led IHVs to believe that a pure top-down scalability model is no longer the best solution, and that aiming more at the middle and scaling in both directions is the way to go. Looking at the fat part of the market, I'm not sure I can disagree with them either.

But we've had this conversation before. Time will tell. :)
 
It seems you might just as well ask why G80's true successor, the 512-bit 192-SP, 2.4GHz SP clock monster isn't releasing on Monday.

Because it doesn't need to be on Monday ;)

Remember that nV has generally always released a new high-end part about twice as fast as the previous gen? That's exactly what you'll see soon, and I'm talking about a single-chip solution. Mark my words :)
 

Based on the liberty with which Nvidia decided to call the top mobile GPU the "8700M GT" when it was just an 8600M GT with higher clocks (the 8400M GT, on the other hand, is in fact equivalent to the 128-bit-bus desktop 8500 GT), I still have some doubts that we're talking about the same G92 core for notebooks.

But if it's true, then it will be a hell of a speed up in notebook parts, that's for sure. ;)
 
So do you think NV will deactivate the 8C, 289mm² G92 to 6C and try to clock it to 2.4GHz SD on a GX2 SKU? :LOL:

GX2 is IMO a stupid/less-than-ideal concept until we have real multi-GPU technologies, which I doubt we will see in 2008.


I don't know about shader speed, but it makes as much sense as deactivating 2C on a single-core part and selling it as a performance part, à la the all-but-confirmed original plan for the 8800GT. Those parts may now be relegated to an 8800GS and/or a GX2. The difference being that on the GX2 part, it would be done because:

A. It would fit the thermal envelope of 225W.
B. It would be faster than anything a single core could muster.
C. It would keep from surrendering the high-end to R680, if in fact it is faster than a full-fledged 8C single-core part, which I believe it will/would be.

If they really intended to get 1TFLOP from 128SPs, what about modifying the SPs from MADD+MUL to MADD+MADD?

You know Nvidia. Dual-issue MADD+MUL or a 3-issue MADD that can handle special functions (the latter being unlikely), they will quote the top theoretical performance, even if it is only reachable in CUDA.

Whether it be 1 core with 128 shaders @ ~2600MHz shader clock, or 2 cores with 192 shaders @ ~1750MHz... or heck, 224 shaders @ ~1500MHz... To us it'd be 660+ GFLOPS + SF; to the marketing team and the uneducated it'd be one teraflop, and that's what matters... the checkbox. :)

Same thing with R680. Even if it's more cut and dried on issuing the ALUs, the goal of the product (in marketing, not business, terms) undoubtedly will be to have a part >1TF. Heck, they even mention it will have a clock above 800MHz... which is the lowest "tidy" clock capable of being greater than 1 TFLOP (2 chips x 320 ALUs x 2 flops x 800MHz = 1.024 TFLOPS). If that isn't blatant, I don't know what is. :p
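The shader-FLOPS arithmetic being juggled above can be sketched out explicitly; these are the hypothetical configurations from the thread, not confirmed specs:

```python
# Theoretical shader FLOPS for the configurations discussed above.
# ALU counts and clocks are the thread's hypotheticals, not confirmed specs.

def gflops(alus, mhz, flops_per_clock):
    """Theoretical throughput in GFLOPS."""
    return alus * flops_per_clock * mhz / 1000.0

# (ALUs, shader MHz) for the three G92-class options mentioned:
for alus, mhz in [(128, 2600), (192, 1750), (224, 1500)]:
    madd = gflops(alus, mhz, 2)       # counting only the MADD
    marketing = gflops(alus, mhz, 3)  # counting the MADD plus the extra MUL
    print(f"{alus} SPs @ {mhz}MHz: {madd:.1f} GFLOPS (MADD), {marketing:.1f} (MADD+MUL)")

# R680 as 2x RV670: 320 ALUs per chip, 2 flops per MADD, 800MHz core.
print(f"R680: {gflops(320 * 2, 800, 2):.1f} GFLOPS")
```

Each G92-class option lands in the 660-670 GFLOPS range counting MADDs only, and right around 1 TFLOP once the MUL is counted, which is exactly the checkbox point being made.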
 
hey, didn't we see these before? (CJ ;) )

2iiv8qo.jpg

2nrlc2q.jpg

r03tpv.jpg

jsbuhz.jpg


http://www.tomshardware.com/cn/126,news-126.html
 
If you read his posts (I wouldn't call them "stories"), there's always a sense of an almost personal grudge against Nvidia, for reasons I can't quite ascertain.
The Inq (and Charlie specifically) has been blacklisted by Nvidia for quite a while now. Nvidia actually directly responded to an Inq story a few months back and the Inq had an article about how maybe they were no longer blacklisted.
He caught his wife having a threesome with Nvidia and Vista.
You clearly have never seen the Nvidia t-shirt that says "the ultimate threesome" on the front and lists GeForce, nForce, and Vista on the back. I really wonder how the hell that got through marketing and printed.
 
The Inq (and Charlie specifically) has been blacklisted by Nvidia for quite a while now. Nvidia actually directly responded to an Inq story a few months back and the Inq had an article about how maybe they were no longer blacklisted.

That's right, i almost forgot it...

rules.jpg


:smile:
 
Hmm, too bad there's no GTX performance in this review.

I really wonder whether the O/C'ed 8800GT is working with the stock cooler or not?
It really gives me big hope. :)
 
2nrlc2q.jpg


I know CJ described it before, but I'll be damned if that isn't the most ghetto thing I've ever seen in GPU technology... even more so than the dreaded dongle, and that was ghetto.
 
Interesting... ;)


8800GT:

Feature Tests:
Fill Rate - Single-Texturing: 4842.562 MTexels/s
Fill Rate - Multi-Texturing: 25018.213 MTexels/s
Pixel Shader: 453.709 FPS
Vertex Shader - Simple: 252.614 MVertices/s
Vertex Shader - Complex: 148.657 MVertices/s
Shader Particles (SM3.0): 100.402 FPS
Perlin Noise (SM3.0): 145.330 FPS

Batch Size Tests:
8 Triangles: 19.917 MTriangles/s
32 Triangles: 78.807 MTriangles/s
128 Triangles: 280.877 MTriangles/s
512 Triangles: 294.477 MTriangles/s
2048 Triangles: 297.528 MTriangles/s
32768 Triangles: 298.471 MTriangles/s


8800GTS:

Feature Tests:
Fill Rate - Single-Texturing: 5136.310 MTexels/s
Fill Rate - Multi-Texturing: 12084.927 MTexels/s
Pixel Shader: 340.032 FPS
Vertex Shader - Simple: 212.609 MVertices/s
Vertex Shader - Complex: 113.446 MVertices/s
Shader Particles (SM3.0): 97.809 FPS
Perlin Noise (SM3.0): 98.889 FPS

Batch Size Tests:
8 Triangles: 23.375 MTriangles/s
32 Triangles: 77.990 MTriangles/s
128 Triangles: 214.921 MTriangles/s
512 Triangles: 248.264 MTriangles/s
2048 Triangles: 253.191 MTriangles/s
32768 Triangles: 254.327 MTriangles/s

http://forums.vr-zone.com/showthread.php?t=198459
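For what it's worth, the multi-texturing numbers can be sanity-checked against theoretical bilinear fill rate. The TMU counts and core clocks below are my assumptions for these SKUs, not from the linked thread:

```python
# Compare the posted 3DMark multi-texturing results with theoretical
# bilinear fill rate. TMU counts and clocks are my assumptions for
# these SKUs (8800GT as a 56-TMU G92 @ 600MHz, 8800GTS as a
# 24-texture-address G80 @ 500MHz), not confirmed by the post.

def fill_rate_mtexels(tmus, core_mhz):
    """Theoretical bilinear texel fill rate in MTexels/s."""
    return tmus * core_mhz

measured = {"8800GT": 25018.213, "8800GTS": 12084.927}  # from the post
assumed = {"8800GT": (56, 600), "8800GTS": (24, 500)}   # (TMUs, MHz), assumed

for card, (tmus, mhz) in assumed.items():
    peak = fill_rate_mtexels(tmus, mhz)
    pct = 100.0 * measured[card] / peak
    print(f"{card}: {measured[card]:.0f} of {peak} MTexels/s ({pct:.0f}% of peak)")
```

On these assumptions the G80-based GTS is texturing at essentially its peak, while the G92-based GT lands around three quarters of its theoretical rate, which would be consistent with it running into a bandwidth limit in that test.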
 