NVIDIA Maxwell Speculation Thread

Discussion in 'Architecture and Products' started by Arun, Feb 9, 2011.

  1. 3dcgi

    3dcgi Veteran Subscriber

    I thought ToTTenTranz explained his position well and didn't sound elitist. Besides, you called it "fine" in your final sentence which sounds like a mediocre endorsement to me.
     
  2. silent_guy

    silent_guy Veteran Subscriber

    Tom's Hardware has the traditional (for CPUs, at least) graph that plots average memory latency for random accesses within a block of a given size. The result is remarkable in that latency has gone down dramatically, especially for external memory, where it drops from 280 cycles on a 650 Ti to 180 on the 750 Ti. That's a really massive improvement.

    (Link: http://www.tomshardware.com/reviews/geforce-gtx-750-ti-review,3750.html)
     
  3. elroy

    elroy Regular

    Hopefully we see low profile cards using this!! Will be perfect for a small HTPC with Pico PSU.
     
  4. gamervivek

    gamervivek Regular

    Dave remarked before that platform consumption is a more important metric than card consumption alone. TechSpot looks to be doing the same, and by that metric Radeons are still competitive, at least with a single card (and a big CPU):

    http://www.techspot.com/review/783-geforce-gtx-750-ti-vs-radeon-r7-265/page12.html

    Both setups use an i7-4770.
     
    Last edited by a moderator: Feb 19, 2014
  5. Alexko

    Alexko Veteran Subscriber

    Nice catch, I wonder what happened there. Do you know of similar figures for GCN chips? Or older GPUs, for that matter?
     
  6. ams

    ams Regular

    Platform power is naturally important, but to accurately compare GPU performance per watt, one needs to isolate the power consumed by the GPU itself and compare it with the GPU performance achieved. Tom's Hardware's 750 Ti review did exactly that across multiple games, and the 750 Ti is way ahead of everything else in power efficiency.
     
  7. no-X

    no-X Veteran

    I think these numbers are irrelevant: you can't ignore the CPU power consumption caused by the GPU driver. It's an integral property of the GPU, and it can differ quite significantly between GPUs and architectures.
     
  8. pjbliverpool

    pjbliverpool B3D Scallywag Legend

    I hardly think it's fair to diminish the importance of Maxwell's power efficiency on the basis of it being diluted by the rest of the platform. If you're interested in an energy-efficient platform, then Maxwell is doing its part, and it's up to the user to match it with the right components.

    And let's face it, moving the focus from the GPU's power efficiency onto the rest of the system isn't exactly going to go in AMD's favour.

    The crazy thing about this GPU is that, coupled with a low-power i5 and a big SSD, you could put together a near-silent, tiny PC with similar performance to a PS4 while drawing something like 120 W. That's an insane proposition that, had you suggested it back in 2006, would have had people asking what you'd been smoking.
     
  9. DSC

    DSC Banned

  10. liolio

    liolio Aquoiboniste Legend

    Yes indeed, he did a great job and contacted the Nvidia tech guys, who were willing to let some info filter out.
     
  11. Dade

    Dade Newcomer

    NVIDIA is back to where it was before the release of CUDA 4.0. It would be interesting to know whether this is the result of some LuxMark-specific optimization or a gain obtainable in most OpenCL applications; the performance drop with CUDA 4.0 was measurable in many OpenCL applications.

    Now, if the increase is common to more OpenCL applications, it is extremely good news for all OpenCL developers.
     
  12. CarstenS

    CarstenS Legend Subscriber

    As you can imagine, I've asked time and again. Up until now the answer has always been that OpenCL performance doesn't matter as much as CUDA performance. The latest iteration of said question remains unanswered for now (maybe due to the Maxwell launch).

    Some conspiracy theorists are already readying their virtual flamethrowers over evil Nvidia withholding OpenCL performance for this long. *shrugs*
     

  13. How exactly am I an elitist if my whole argument was based around performance/cost?

    In my country the GTX 750 Ti has appeared at €165. For €150, one can get a much more powerful R9 270 (Pitcairn @ 925 MHz).
    According to Tom's Hardware, the power consumption difference between an R9 270X and a 750 Ti is around 60 W. Let's even assume the R9 270 consumes the same as a 270X (even though it doesn't).

    For a person who regularly plays 15 hours/week, after a year the power consumption delta between an R9 270 and a GTX 750 Ti will be:
    60 W × 15 hours/week × 4 weeks/month × 12 months = 43.2 kWh
    In my country electricity costs €0.121/kWh, so 43.2 kWh will cost €5.23.

    So one would have to put in three years of (very) regular gaming before the power savings cover the price difference, all while getting worse performance from the 750 Ti.

    As I said, the GM107 chip seems great. It's the MSRP of these cards that makes them mediocre. As always, there are no bad products, only bad prices.


    But please, feel free to explain how this logic is "elitist".
     
  14. mczak

    mczak Veteran

  15. DSC

    DSC Banned

    Last edited by a moderator: Feb 19, 2014
  16. CarstenS

    CarstenS Legend Subscriber

    Not all of this came through a simple e-mail, I should think. :)
     
  17. DSC

    DSC Banned

    http://www.legitreviews.com/nvidia-geforce-gtx-750-ti-2gb-video-card-review_135752/15

    Efficiency is amazing.
     
  18. cal_guy

    cal_guy Newcomer

  19. CaptainGinger

    CaptainGinger Newcomer

    According to CCC, my new R9 290 idles at 300/150 MHz with two different monitors connected to the DVI ports.

    Did I get lucky?
     
  20. mczak

    mczak Veteran

    As long as they use the same timings, no. I don't think there's really any difference between Kepler, Maxwell, or newer AMD cards there from a hardware point of view. The problem is always the same: the memory can't be reclocked unless the vblank periods are synchronized, which for DVI monitors typically means they need to be driven from the same clock source, and Nvidia has definitely done that for a while now. It's also possible the card BIOS needs to play along, not just the driver; I'm not really sure there.
     