NVIDIA Maxwell Speculation Thread

Prices in Europe are always more expensive because of 20%-25% VAT in most EU countries. Wait for the US RRP to be announced.
 
123€+VAT in Italy is 150€ (at the 22% rate, 123€ × 1.22 ≈ 150€).
The faster GTX 650 Ti starts at 120€ and the GTX 660 2GB starts at 160€, so I don't see this GTX 750 as being a good deal at all.

And at 165€ we start seeing the R9 270 2GB, which seems to be worlds apart from that GTX 750.


And it doesn't really matter if the prices in Italy are "high". We enjoy free trade in Europe, so an Italian can order a graphics card from France for a rather small shipping fee.
 
That sounds a bit expensive, but I have no idea about hardware prices in Italy...

On trovaprezzi.it, a well-known site that aggregates prices from various e-shops in Italy, a GTX 650 is around €80 and a 650 Ti around €100, both with 22% VAT included.
 
Please note that when the GTX 650 Ti launched back in 2012 it was $149, and the GTX 650 was $109. Both cards are old and probably discontinued by now. If the GTX 750 retails for $109 and is slightly faster than the GTX 650 Ti while using a lot less power and providing the Maxwell feature set, it's simply a better card at a better price.
 
If GM107 is using 28HPM, would it be viable for a chip of, say, 300mm² to use it as well?

Even bigger. Or let me put it this way: 28HPM is currently much, much more viable for any large chip than 20nm SOC. When that will change is anyone's guess; mine is not before H2/2015.
 
It doesn't look like GM107 is bringing any new technology to the table; it is just a better-binned and much more efficient processor. Technically you could easily call Maxwell a Kepler Refresh^2.
I don't know how much binning is required for a process as mature as 28nm, but doesn't "a much more efficient processor" hold for pretty much all recent GPUs?

Kepler was a much more efficient processor than Fermi. GCN was more efficient than AMD's earlier VLIW architectures. Etc.

When there's no new API on the horizon to support, what exactly do they expect from a new GPU?
 
There could have been significant high-level changes, as there were in G80, Tesla, Fermi and Kepler. That said, what I imagine must be a myriad of lower-level improvements seem to have worked out quite well for Maxwell. It's less interesting (especially since we're unlikely to ever know much about what they actually changed) but, you know, whatever works.
 
High-level changes come rather in an NV30 → G80 → Kepler → etc. cadence (with Fermi being somewhat of an in-between case). Everything between G80 and Fermi was based on a similar architectural backbone. In hindsight, where are the "high level improvements" at the competition? Does GCN sound to you like something they'll overhaul all that soon?

Wasn't GT200 just a G80 with more units and a handful of FP64 units added to the mix under that very same reasoning?
 
GT200 has an improved geometry shader, performance is way higher than G80, the register file size is doubled, etc. G80 is Compute Capability 1.0, GT200 is Compute Capability 1.3. GT200 fully decodes H.264 in hardware while G80 didn't. It's not just more units; it's very different internally almost everywhere.

http://www.anandtech.com/show/2549/5

Also, GT2xx GPUs can do something that G8x/G9x GPUs can't:

* Zero-copy access to pinned system memory
Allows MCP7x, GT200 and later GPUs to use system memory directly, without copying it to dedicated (video) memory, for a significant perf improvement.
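
Purely as an illustration of what that zero-copy path looks like from the CUDA runtime side (a minimal sketch, not the actual driver mechanism; the kernel and buffer names are made up, and it simply assumes a device that reports canMapHostMemory, which G8x/G9x don't):

Code:
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical kernel that reads straight from pinned host memory through
// the mapped device pointer; no cudaMemcpy into video memory is involved.
__global__ void scale(const float* in, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = 2.0f * in[i];
}

int main()
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    // G80 reports 1.0, GT200 reports 1.3; zero-copy needs canMapHostMemory.
    printf("CC %d.%d, canMapHostMemory = %d\n",
           prop.major, prop.minor, prop.canMapHostMemory);
    if (!prop.canMapHostMemory)
        return 1;                                   // G8x/G9x land here

    cudaSetDeviceFlags(cudaDeviceMapHost);          // enable mapped pinned allocations

    const int n = 1 << 20;
    float *h_in, *h_out, *d_in, *d_out;
    // Pinned, mapped host buffers: the GPU can address these directly.
    cudaHostAlloc((void**)&h_in,  n * sizeof(float), cudaHostAllocMapped);
    cudaHostAlloc((void**)&h_out, n * sizeof(float), cudaHostAllocMapped);
    for (int i = 0; i < n; ++i) h_in[i] = (float)i;

    // Device-side aliases of the same host memory, no copy made.
    cudaHostGetDevicePointer((void**)&d_in,  h_in,  0);
    cudaHostGetDevicePointer((void**)&d_out, h_out, 0);

    scale<<<(n + 255) / 256, 256>>>(d_in, d_out, n);
    cudaDeviceSynchronize();                        // results are already in h_out

    printf("h_out[42] = %f\n", h_out[42]);
    cudaFreeHost(h_in);
    cudaFreeHost(h_out);
    return 0;
}

On G8x/G9x the canMapHostMemory check fails, so you'd fall back to an explicit cudaMemcpy into video memory instead.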
 
All necessary changes that come with the job, but the backbone arch is still there. The problem is that we got spoiled by the rapidly changing architectures in recent years. NVIDIA practically introduced two sub-archs in one generation: GF100 and GF104 (still true for Kepler). And after a long stale period of VLIW5, AMD followed suit with VLIW4 and VLIW5 in one generation (HD 6000). However, I believe that was just a transitional period; they seem to be quite settled with GCN and Kepler.
 
Yes, the units are all a bit different, but nothing fundamentally changed. And I wouldn't say performance is way higher (outside the higher unit count).
The H.264 decoder is completely unrelated to the 3D architecture, as it's a separate block (in fact some G9x chips feature a newer version of it than GT200 does).
 
I was referring to the geometry shader, not the overall performance.
 