Nvidia Pascal Reviews [1080XP, 1080ti, 1080, 1070ti, 1070, 1060, 1050, and 1030]

The least AMD and Oxide could do is keep quiet about the visual corruption with the press drivers and wait to see if it is present in the final version. Instead they opted to make fools of themselves, which led them to look unprofessional and made Oxide look like a pawn in AMD's hands, surrendering to AMD's whims and releasing useless statements without checking the facts first.

Wait what?

You do know that the visual difference was brought up by Nvidia 'fanboys' saying AMD's AotS bench run looked worse, right? AMD just responded to that in a Reddit post, AKA the whole 'AotS controversy'.
 
AMD's response specifically stated that the drop in precision gives increased performance on nV hardware. If the person had wanted to think further than putting lipstick on a pig (covering up the situation, or blaming the opposition for something they didn't do to begin with), he would have known that that type of optimization hasn't been a valid source of performance gains for how many years now?

Here is the specific quote:

So, even with fudgy image quality on the GTX 1080 that could improve their performance a few percent, dual RX 480 still came out ahead.

I agree it's not AMD's problem to go look into the issue, because it's not their hardware. But when a company makes statements like that, they should at least try not to put their foot in their mouth.
 

I guess what I was trying to say is that I don't believe it was ever AMD's intention to go there. They were called out and forced to respond.
 
I agree; I don't think AMD is at fault at all for what they showed. I don't think they even knew about the glitch before it was pointed out.
 
Asus Republic Of Gamers Strix GTX 1080 Aura RGB OC Review
Kitguru says: The Asus Republic Of Gamers Strix GTX 1080 sets a new high-performance benchmark. It takes the superb Nvidia GTX 1080 Founders Edition and pushes it forward in all areas. It runs 14°C cooler and not only outperforms all other video cards we have tested to date, but it has a built-in, highly customisable RGB lighting system. Asus even allow the connection of two extra 4-pin fans if you want to get creative.

http://www.kitguru.net/components/g...epublic-of-gamers-strix-gtx-1080-aura-rgb-oc/
 
I don't even know why there's a controversy about it. Well, I didn't even know there was one, to be honest.

- It's confirmed that the 1080 was rendering the terrain incorrectly. (This type of bug generally produces no performance gain or loss.)

- The Nvidia driver used was the press/launch driver. (We don't even know whether the tests on the Nvidia GPUs were conducted by AMD themselves or by one of their partners.) It's arguably better this way, since the performance matches the reviews.

- The "day 1" driver for the retail launch was released 3 days before the AMD presentation. I'm pretty sure that if AMD (or their partner) had been aware of the "difference", they would have used it instead.

But let's be honest, I really doubt the tests for this slide were conducted only 2-3 days before the presentation. (Run the tests, hand the numbers to the graphic designer to create the slides, get approval from the marketing team, put the slides together for the video presentation.) And yes, of course they would have redone all those tests just because a day-1 driver was released, with normally no difference in the result.
 
Thanks for retesting, Ryan, that's good to hear. I didn't want my cynical side to get any more cynical than it already is. :)
A bit off topic, but IMO, it's important not to be cynical.

GPUs are more exciting than they have been for a long time, and compared to basically every other PC component, we're still getting 70%+ performance gains about every 2 years. Try not to get buried in the minutiae of the technology or these constant controversies-that-aren't. We live in some amazing times and PC gaming is a blast. :)
 
You basically just removed the need for most of the gaming forums on the internet. :D
 
Just checked it this morning on a GTX 1070. The average performance difference between 368.19 and 368.39 is 0.6%, well inside the margin of error.

It would be best if there were an article out of that instead of it being buried on this forum. A real article, that is, with methodology, not the op-eds we sometimes see on some of these websites.
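
Just to show what I mean by methodology, here's a toy sketch (the run counts and FPS numbers are invented for illustration, not the actual retest data) of how you would decide whether a driver-to-driver delta is real or just run-to-run noise:

# Toy sketch: is a driver-to-driver FPS delta meaningful, or just
# run-to-run noise? All numbers below are invented for illustration.
from statistics import mean, stdev

runs_368_19 = [70.6, 71.9, 70.9, 71.5, 71.1]  # avg FPS per run, old driver
runs_368_39 = [71.3, 72.1, 70.8, 71.7, 71.4]  # avg FPS per run, new driver

m_old, m_new = mean(runs_368_19), mean(runs_368_39)
delta_pct = (m_new - m_old) / m_old * 100

# Express run-to-run noise the same way (one standard deviation).
noise_pct = max(stdev(runs_368_19), stdev(runs_368_39)) / m_old * 100

print(f"average delta: {delta_pct:+.2f}%, run-to-run noise: ~{noise_pct:.2f}%")
if abs(delta_pct) <= noise_pct:
    print("difference is within the margin of error")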

1. True, but AMD had a non-retail card, which they should not have had, as the only cards out there were under NDA.
They then went on to use that card publicly without permission.
2. Because they are non-public drivers and never meant to be.
3. See point 2.

Cheers

1. Not their problem. Do you really believe Nvidia doesn't get access to AMD hardware and software as soon as they can?
2. Did AMD upload the driver to customers? If the results from the driver couldn't ever become public there wouldn't be any reviews.
3. See point 2.
4. Let's agree to disagree, if you will. I don't see any issue or harm coming from AMD.

The least AMD and Oxide could do is keep quiet about the visual corruption with the press drivers and wait to see if it is present in the final version. Instead they opted to make fools of themselves, which led them to look unprofessional and made Oxide look like a pawn in AMD's hands, surrendering to AMD's whims and releasing useless statements without checking the facts first.

They are fine. There was a commotion about the video, and AMD and Oxide responded ASAP.

By the way, AMD's info was correct: Nvidia's driver was wrong and it could impact performance. Hallock was so professional that he even corrected his boss's claim about dual RX 480 utilization.

Oxide said that they would investigate it, though they failed to say if there was any performance impact.
 
Yeah, it is an agree-to-disagree. :)
But the only point I would emphasise is where you say AMD's info was correct; it would have been correct if they had put a footnote in the chart and comparison stating it was a pre-retail 1080 (card and driver).
The only professional response from start to finish, IMO, was from Dan Baker at Oxide.
Cheers
 
Galax GeForce GTX 1080 HOF hits 2.2GHz on air, 2.5GHz on LN2

Concerning the standard GeForce GTX 1080's single 8-pin power connector, Mad Tse thinks that it isn't a limiting factor when attempting to OC using air cooling. Rather, the HOF makes it easier to push the GPU clock/voltage with its digital rather than analogue controls, said the OC expert. Mad Tse speculates that the Nvidia Founders Edition cards might be OC-limited by the BIOS or Nvidia's driver software.

During his extreme overclocking efforts at Computex, Tse achieved 2.5GHz+ core speeds at 1.38V and a memory clock of 5500MHz. He ran the 3DMark FireStrike Extreme benchmark without issues to verify the achievement.

Looking forward, Tse went on to predict achieving GeForce GTX 1080 OC speeds of between 2.8GHz and 3GHz as it "handles heat very well".

http://hexus.net/tech/news/graphics/93512-galax-geforce-gtx-1080-hof-hits-22ghz-air-25ghz-ln2/
 
Going from 2.2GHz on air to 2.5GHz seems like a modest increase, while going from air to a cryogenic cooling solution is not so modest a change.
Perhaps water can get somewhere in between?

I'm curious about the expected longevity of a GPU at that voltage, although testing that would mean expending LN2 across a sufficiently large sample size and time period, which would be impractical.
 
Galax GeForce GTX 1080 HOF hits 2.2GHz on air, 2.5GHz on LN2

Achieved by a world-renowned professional overclocker, hired by the OEM, using a custom (and probably hand-picked) card with digital PWM and 2×8-pin connectors, and it doesn't say whether the 2.2GHz was stable (if I were to guess, I'd say no).

With such a person in such conditions only managing to reach 2.2GHz, one has to wonder how Nvidia managed to show those 2.1xxGHz in the presentation with, supposedly, a reference card.

It's like those developers showing "alpha" demos of games at E3, only to release the game months or years later with much lower graphical quality... but now applied to hardware and overclocking expectations.

(INB4 the old, tired "overclocker's dream" quote from AMD at the Fury X's reveal that Razor1 and silent_guy must have repeated 5000× each thread; yes, that was almost as bad, but at least they didn't show unattainable clock values.)
 
What was the Nvidia sample running when it hit those clocks?
The criterion for success on the LN2 run was running FireStrike to completion.
 
AFAIK, the 1080 overclocks just fine, unlike the Fury X, despite the presence of water cooling. It didn't help that the Fury X hardly had anything else going for it.

(So, is the 480 a paper launch or not? You haven't made yourself heard yet on this matter that you cared so much about with the 1080.)
 
The card reached 2.1GHz in a demo that was limited in scope and render time; every 1080 card can reach that speed, but is it stable? Can it sustain it without crashing for extended periods of time?
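
One rough way to check this at home (a sketch, assuming nvidia-smi is on the PATH and a single GPU; run it alongside a looping benchmark) is to poll the core clock over a sustained period and compare the peak against what the card actually holds:

# Rough sketch: poll the GPU graphics clock during a long stress run to
# see whether the peak boost clock is actually sustained.
import subprocess, time

samples = []
for _ in range(600):  # ~10 minutes at one sample per second
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=clocks.gr",
         "--format=csv,noheader,nounits"],
        text=True)
    samples.append(int(out.strip()))  # assumes one GPU (one line of output)
    time.sleep(1)

print(f"peak: {max(samples)} MHz, sustained (min): {min(samples)} MHz")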
 

Well, they have now reached 2.8GHz, if you want to go by well-known professional overclockers.
I was waiting for further information on this (it is meant to be a stable clock), but now is as good a time as any to post it for awareness.
https://twitter.com/TEAMEVGA/status/740228112563068929

IMO just as impressive is the memory clock of 6200MHz, giving an effective memory rate of just over 12GHz.
Ignore the voltage as it does not take into account the additional EPower VRM board used.
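
For anyone checking the math, it's simple double-data-rate arithmetic (a quick sketch; the 256-bit bus width is the stock GTX 1080 figure and is my assumption for this card):

# GDDR5X transfers on both edges of the data clock, so the effective
# rate is twice the quoted clock.
data_clock_mhz = 6200                # overclocked memory data clock from the tweet
effective_mhz = data_clock_mhz * 2   # 12400 MHz -> "just over 12GHz"

bus_width_bits = 256                 # stock GTX 1080 bus width (assumption)
bandwidth_gb_s = effective_mhz * 1e6 * bus_width_bits / 8 / 1e9

print(f"effective rate: {effective_mhz} MHz, bandwidth: ~{bandwidth_gb_s:.0f} GB/s")
# ~397 GB/s, versus ~320 GB/s at the stock 10Gbps effective rate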

Cheers
 
[Attached image: CkXRV1_UYAApOr9.jpg]


I think this benchmark is different from the one posted at Hexus. Since it's TeamEVGA, it's most likely the Classified.
 
Yeah, just giving an example of another professional overclocker's results;
earlier in the week they achieved over 2.5GHz on LN2 with the reference FE EVGA 1080 (I linked it here in the past), at about the same time as the Galax team.
So I'm not sure if this is the same EVGA FE card with the extra VRM board, or something like the Classified with the VRM board.
Cheers
 