NVIDIA Maxwell Speculation Thread

You think it's a mediocre product because you're an elitist and don't care about cards at this level of performance (sorry for phrasing it like that).
It's fine, e.g. you can put it in a micro-ATX tower without clogging some of the drive bays and use a 300W PSU.
I thought ToTTenTranz explained his position well and didn't sound elitist. Besides, you called it "fine" in your final sentence which sounds like a mediocre endorsement to me.
 
I suppose it depends on which chip you're comparing to which. ExtremeTech tested khash/watt against the R9 270, and the R9 270 won (though only barely).

Tom's numbers for the Radeons seem a tad on the low side, judging by the R9 270 at least, but one has to remember that there are ~10% differences from one card to the next within the same model, and slight adjustments to clocks or voltages can cause huge variations (well beyond 10%) too, for good or bad.

Dave remarked before that platform consumption is a more important metric than card consumption alone. ExtremeTech looks to be doing the same, and on that metric the Radeons are still competitive, at least with a single card (and a big CPU):

Yet we have a hard time recommending the GTX 750 Ti over the R7 265 as the Radeon offers 13% more performance (on average) and over 20% in games such as Battlefield 4. AMD's solution consumed 32% more power than Nvidia's, though in the games we tested during that scenario the R7 265 was also 24% faster, so that point is moot. Again, on paper, assuming both cards are available for $150, the R7 265 is a better buy.

http://www.techspot.com/review/783-geforce-gtx-750-ti-vs-radeon-r7-265/page12.html

Both are using an i7-4770.
 
Tomshardware has the traditional (for CPU at least) graph that plots average memory latency for random accesses within a block of a certain size. The result is remarkable in that the latency has gone down dramatically, especially for the external memory, where it goes from 280 cycles for a 650Ti to 180 for the 750Ti. That's really a massive improvement.

(Link: http://www.tomshardware.com/reviews/geforce-gtx-750-ti-review,3750.html)
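For reference, that kind of latency-vs-block-size graph is normally produced with a pointer chase, where every load depends on the result of the previous one, so you measure pure latency rather than bandwidth. Below is a minimal CPU-side sketch of the idea (my own illustration, not Tom's actual methodology; on a GPU the same chase would just run inside a kernel):

```cpp
// Minimal pointer-chase sketch (illustration only): fill a block with a
// single-cycle permutation, then time dependent loads through it.
#include <chrono>
#include <cstdio>
#include <numeric>
#include <random>
#include <utility>
#include <vector>

int main() {
    std::mt19937 rng(42);
    // Working-set sizes from 4 KiB up to 64 MiB, doubling each step.
    for (size_t bytes = size_t(4) << 10; bytes <= size_t(64) << 20; bytes *= 2) {
        const size_t n = bytes / sizeof(size_t);
        std::vector<size_t> next(n);
        std::iota(next.begin(), next.end(), size_t(0));
        // Sattolo's algorithm: a permutation that is one single cycle, so the
        // chase visits every element before it wraps around.
        for (size_t i = n - 1; i > 0; --i) {
            std::uniform_int_distribution<size_t> pick(0, i - 1);
            std::swap(next[i], next[pick(rng)]);
        }
        // Chase the chain: every load depends on the previous one, so total
        // time divided by iterations approximates the average load latency.
        volatile size_t idx = 0;
        const size_t iters = size_t(1) << 22;
        const auto t0 = std::chrono::steady_clock::now();
        for (size_t i = 0; i < iters; ++i) idx = next[idx];
        const auto t1 = std::chrono::steady_clock::now();
        const double ns =
            std::chrono::duration<double, std::nano>(t1 - t0).count() / iters;
        std::printf("%8zu KiB: %6.1f ns per access\n", bytes >> 10, ns);
    }
    return 0;
}
```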

Nice catch, I wonder what happened there. Do you know of similar figures for GCN chips? Or older GPUs, for that matter?
 
Dave remarked before that platform consumption is a more important metric than card consumption alone.

Platform power is naturally important, but to accurately compare GPU performance per watt, one needs to isolate the power consumed specifically by the GPU and compare it to the GPU performance achieved. If you read Tom's Hardware's review of the 750 Ti, they did that with multiple games, and the 750 Ti is way ahead of anything else with respect to power efficiency.
 
I think these numbers are irrelevant - it's not possible to ignore the CPU power consumption caused by the GPU driver. It's an integral property of the GPU and it can differ quite significantly among different GPUs and architectures.
 
Dave remarked before that platform consumption is a more important metric than card consumption alone. ExtremeTech looks to be doing the same, and on that metric the Radeons are still competitive, at least with a single card (and a big CPU):



http://www.techspot.com/review/783-geforce-gtx-750-ti-vs-radeon-r7-265/page12.html

Both are using an i7-4770.

I hardly think it's fair to diminish the importance of Maxwell's power efficiency on the basis of it being diluted by the rest of the platform. If you're interested in an energy-efficient platform then Maxwell would be doing its part, and it's up to the user to match it with the right components.

And let's face it, moving the focus from the GPU's power efficiency onto the rest of the system isn't exactly going to go in AMD's favour.

The crazy thing about this GPU is that, coupled with a low-power i5 and a big SSD, you could put together a near-silent and tiny PC with similar potency to a PS4, drawing something like 120 W. That's just an insane proposition: if you'd suggested it back in 2006, people would have asked what you'd been smoking that morning.
 
Not really. The marketing material was this big block of 256 (sic!) green squares for Kepler versus the 4x 32 with individual control for Maxwell. I guess Damien's very good diagrams come from clever analysis of more than the Press_Preso.pdf copy-and-paste that many sites resorted to.
Yes, indeed he did a great job and contacted the Nvidia tech guys, who were willing to let some info filter out.
 
Oh - btw: Beware if someone posts ridiculously high Luxmark scores... they probably did not work very thoroughly... ;)

http://www.pcgameshardware.de/Nvidi...s-OpenCL-Performance-Luxmark-Maxwell-1109949/

Oh, for good measure:
http://www.pcgameshardware.de/Grafi...s/Geforce-GTX-750-Ti-im-Test-Maxwell-1109814/

NVIDIA is back where it was before the release of CUDA 4.0. It would be interesting to understand whether this is the result of some LuxMark-specific optimization or a gain obtainable in most OpenCL applications. The performance drop with CUDA 4.0 was measurable in many OpenCL applications.

Now, if the increase is common to more OpenCL applications, it is extremely good news for all OpenCL developers.
 
As you can imagine, I've asked time and again. Up until now the answer was always that OpenCL performance doesn't matter as much as CUDA performance. The latest iteration of that question remains unanswered (maybe due to the Maxwell launch) for now.

Some conspiracy theorists are already readying their virtual flamethrowers over evil Nvidia withholding OpenCL performance for this long. *shrugs*
 
You think it's a mediocre product because you're an elitist and don't care about cards at this level of performance (sorry for phrasing it like that).
It's fine, e.g. you can put it in a micro-ATX tower without clogging some of the drive bays and use a 300W PSU.


How exactly am I an elitist if my whole argument was based around performance/cost?

In my country the GTX 750 Ti has appeared at 165€. For 150€, one can get a much more powerful R9 270 (Pitcairn @ 925MHz).
According to Tom's Hardware, the power consumption difference between an R9 270X and a 750 Ti is around 60 W. Let's even assume the R9 270 consumes the same as a 270X (even though it doesn't).

For a person who regularly plays 15 hours/week, after a year the power consumption delta between an R9 270 and a GTX 750 Ti will be:
60 W * 15 hours/week * 4 weeks/month * 12 months = 43.2 kWh
In my country, the price per kWh is 0.121€, so 43.2 kWh will cost 5.23€.

So one would have to wait 3 years of (very) regular gaming for the power consumption difference between one card and the other to pay off, while getting worse performance with the 750 Ti.
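For what it's worth, here is the same back-of-the-envelope arithmetic as a tiny snippet, using only the assumptions above (60 W delta, 15 h/week, 0.121€/kWh, 15€ price gap); adjust the constants for your own prices:

```cpp
// Quick break-even check for the power-consumption argument above.
#include <cstdio>

int main() {
    const double power_delta_w  = 60.0;    // assumed consumption gap, W
    const double hours_per_week = 15.0;
    const double weeks_per_year = 48.0;    // 4 weeks/month * 12 months, as above
    const double eur_per_kwh    = 0.121;
    const double price_gap_eur  = 165.0 - 150.0;

    const double kwh_per_year  = power_delta_w * hours_per_week * weeks_per_year / 1000.0;
    const double cost_per_year = kwh_per_year * eur_per_kwh;
    std::printf("%.1f kWh/year -> %.2f EUR/year, break-even after %.1f years\n",
                kwh_per_year, cost_per_year, price_gap_eur / cost_per_year);
    return 0;
}
```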

As I said, the GM107 chip seems great. It's the MSRP of these cards that makes them mediocre. As always, there are no bad products, only bad prices.


But please, feel free to explain how this logic is "elitist".
 
http://www.legitreviews.com/nvidia-geforce-gtx-750-ti-2gb-video-card-review_135752/15

Here are the GPU-Z shots for the NVIDIA GeForce GTX 750 Ti 2GB reference card that shows one monitor on the left and two monitors on the right. As you can see the NVIDIA GeForce GTX 750 Ti clock speeds, voltage and fan speeds all don’t change when a second monitor is hooked up. The only change is roughly a 5% increase in the memory controller load and about a 0.2% higher TDP (power consumption) as a result of the higher memory controller load. So, there was a 1C increase in temperature and a 2W increase in power consumption due to this, which is minor compared to the 20W or higher difference seen on comparable cards from AMD.

Efficiency is amazing.
 
According to CCC my new R9 290 idles at 300/150 with two different monitors connected to the DVI ports.

Did I get lucky?
As long as they use the same timings, no. I don't think there's really any difference between Kepler, Maxwell or newer AMD cards there from a hardware point of view; the problem is always the same: the memory can't be reclocked if the vblank periods aren't synchronized. For DVI monitors that typically means they need to be driven from the same clock source, and Nvidia definitely did that earlier. It's also possible the card BIOS needs to play along, not just the driver; I'm not really sure there.
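To make the constraint explicit, here's a rough sketch of that decision as I understand it; the names are hypothetical and this isn't any real driver's code, just the gist:

```cpp
// Hedged sketch: memory can only be retrained during vblank without visible
// corruption, so with several active displays the vblank windows have to
// coincide, which in practice means identical timings from a common clock.
#include <cstdio>
#include <vector>

struct DisplayTiming {
    double pixel_clock_khz;  // in practice must come from the same clock source
    int    h_total;
    int    v_total;
};

bool can_drop_memory_clock(const std::vector<DisplayTiming>& displays) {
    if (displays.size() <= 1)
        return true;
    const DisplayTiming& ref = displays.front();
    for (const DisplayTiming& d : displays) {
        if (d.pixel_clock_khz != ref.pixel_clock_khz ||
            d.h_total != ref.h_total || d.v_total != ref.v_total)
            return false;  // vblanks drift apart -> stay at the high memory clock
    }
    return true;
}

int main() {
    // Two identical 1080p60 timings vs. two mismatched modes.
    std::vector<DisplayTiming> identical  = {{148500.0, 2200, 1125}, {148500.0, 2200, 1125}};
    std::vector<DisplayTiming> mismatched = {{148500.0, 2200, 1125}, {154000.0, 2080, 1235}};
    std::printf("identical: %d, mismatched: %d\n",
                can_drop_memory_clock(identical), can_drop_memory_clock(mismatched));
    return 0;
}
```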
 