Nvidia Pascal Announcement

Regarding this HDR thing - that's just 10 bpc output, right? I've run 10 bpc on a Radeon 6950 and a GTX 970. I have a BenQ BL3200PT monitor.

I experimented a bit with Alien Isolation's deep color setting. I really couldn't see anything different. I assume higher color depth should reduce banding problems.
That's just plain use of a 10 bpc output format...

MS gave a little presentation at GDC (slides 30+) about HDR in Windows gaming: http://1drv.ms/1T8iew9
 
That 100mm² was the relative difference in die area compared to some of the estimates for the 1080/1070.

Indeed, 232mm² on Samsung/GloFo's smaller node implies 2560-3072 of AMD's "compute" units. If we just naively scale shader count and clock speed up to the same clocks as a 1080, we get anywhere between 3% and 25% faster than a Fury X, not counting efficiency gains from the new architecture. But at the moment Nvidia's own gains beyond "faster than a Titan X" are fantastically obfuscated. An almost markerless chart using just 2 actual titles, compared against the 980 (instead of the much closer 980 Ti), isn't much help beyond saying "it's faster than the 980 Ti, but not by a fantastic margin". So any real comparison will probably have to wait for reviews of both. Still, good job to Nvidia: while they've stretched the card comparison rather a bit, consumers are buying it at the moment, so kudos to their PR team again.
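As a rough illustration of that naive scaling (a sketch only: it treats those figures as stream-processor counts, and the Fury X's 4096 shaders at ~1050 MHz and the 1080's ~1733 MHz boost clock are assumed from public spec sheets, not taken from this thread):

```python
# Naive throughput scaling: shader count x clock, normalised to a Fury X.
# Assumed reference figures (not from the thread): Fury X = 4096 shaders
# at ~1050 MHz; GTX 1080 boost clock ~1733 MHz.
FURY_X_SHADERS, FURY_X_CLOCK_MHZ = 4096, 1050
GTX_1080_BOOST_MHZ = 1733

for shaders in (2560, 3072):  # guessed shader counts for the ~232mm² part
    relative = (shaders * GTX_1080_BOOST_MHZ) / (FURY_X_SHADERS * FURY_X_CLOCK_MHZ)
    print(f"{shaders} shaders @ {GTX_1080_BOOST_MHZ} MHz -> {relative - 1:+.0%} vs Fury X")

# Prints roughly +3% and +24%, i.e. the 3-25% range above, before any
# per-clock efficiency gains from the new architecture.
```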

Still, if we go by this we can guess:

[Image: Nvidia-GTX-1080-Benchmarks.png]


Going by the 980 Ti as the benchmark instead of the 980, we can guess at around a 10-18% performance boost in these 2 titles. Though since they fixed async compute, it'll be larger in titles that use it (Ashes of the Singularity and Hitman so far, really, but more to come).


There's a bunch of acronyms thrown around that get confusing, even if you're trying to research it. "Deep color", in the case of Alien Isolation and other things, is just 10-bit-per-channel sRGB output. That can give you less banding in dark areas if you look hard, but that's about it. HDR, as others have said, is something different: DCI-P3 to Rec. 2020 gamut output, with 10-12 bits per channel of color depth, 1k to 10k nits peak brightness, and a variable minimum brightness below 1 nit (what's the standard? Is there one yet? This shit is too hard to find at the moment).
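As a minimal sketch of the banding point (purely illustrative, not from the post): quantise the same dark gradient to 8 and to 10 bits per channel and count how many distinct steps survive.

```python
import numpy as np

# A smooth near-black gradient covering the bottom 5% of the range,
# where banding is easiest to see.
gradient = np.linspace(0.0, 0.05, 10_000)

for bits in (8, 10):
    levels = 2**bits - 1
    quantised = np.round(gradient * levels) / levels
    steps = len(np.unique(quantised))
    print(f"{bits} bpc: {steps} distinct steps in the dark gradient")

# Roughly 14 steps at 8 bpc vs roughly 52 at 10 bpc: the same ramp gets
# about four times as many gradations, so each visible band is about a
# quarter as wide.
```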
 

Talking about video, there are 2 standards, HDR10 and Dolby Vision, which will be used to master HDR movies and video content. For still pictures and video games I haven't heard of one.

If you're interested I can share some videos of experts talking about it.
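For what it's worth, both HDR10 and Dolby Vision encode brightness with the SMPTE ST 2084 "PQ" transfer function (HDR10 at 10 bits with static metadata, Dolby Vision at 12 bits with dynamic metadata). Here's a small sketch of the PQ decode, just to show how a 10-bit code value maps onto the 0-10,000 nit range; the constants are the published ST 2084 ones, the example values are mine:

```python
# SMPTE ST 2084 (PQ) EOTF: normalised code value (0..1) -> luminance in nits.
M1 = 2610 / 16384          # 0.1593017578125
M2 = 2523 / 4096 * 128     # 78.84375
C1 = 3424 / 4096           # 0.8359375
C2 = 2413 / 4096 * 32      # 18.8515625
C3 = 2392 / 4096 * 32      # 18.6875

def pq_to_nits(code: float) -> float:
    """Decode a normalised PQ code value (e.g. a 10-bit code / 1023) to nits."""
    v = code ** (1 / M2)
    return 10_000 * (max(v - C1, 0.0) / (C2 - C3 * v)) ** (1 / M1)

# Illustrative 10-bit code values:
for code10 in (0, 512, 1023):
    print(f"10-bit code {code10:4d} -> {pq_to_nits(code10 / 1023):8.1f} nits")
# code 0 -> 0 nits, code 512 -> ~92 nits, code 1023 -> 10000 nits.
```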
 
Those two standards are bad and going to fail. We need a third standard! ( ͡° ͜ʖ ͡°)
 

Isn't Dolby Vision a superset of the "other" big standard though, which is Ultra HD Premium (presumably what is referred to as HDR10 above)? Personally I'm holding out for a Dolby Vision-certified OLED TV.
 
In addition, I think I remember a long, long discussion about HDR here when Polaris was presented (I just need to check the Polaris thread).
I looked for it (page 14) and found only one post about the AnandTech article; the rest was a very interesting conversation about AMD's "advanced fan control" :runaway:
 
Just grab a 100% DCI-P3 display and you are set. No need to worry about the sub-standards. All movies and TV shows are being color-graded in either DCI-P3 or Rec. 2020 going forward.

Sorry for the OT.
 
The color gamut is just a small part of the story. The display/TV/monitor needs to be compatible with the standard: a TV compatible with HDR10 cannot process Dolby Vision content, and vice versa. As far as I know, the same goes for graphics cards.
 
Comparing reference to reference (because it is now fair to say there is some headroom with the 1080), the Titan X is a better comparison than the 980 Ti when using 1440p.
Using TechPowerUp's numbers for the reference cards, the Titan X is around 4% faster than the 980 Ti at 1440p (it varies a fair bit with the game):
http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_980_Ti/31.html

Also using TPU, and looking at The Witcher 3 (specifically focusing on 1440p):
https://www.techpowerup.com/reviews/Performance_Analysis/The_Witcher_3/3.html
It means reference to reference the GTX 1080 is just about 25% faster than the Titan X, and anywhere from 27-30% faster than a reference 980 Ti at 1440p - emphasised because this can change when looking at 1080p.

I am basing that on the GTX 1080 being 66% faster than the 980 in the chart Nvidia presented for Tomb Raider and The Witcher 3.
Personally I prefer 1440p to 1080p or 4K, as it is a nice in-between that is relevant to these cards' performance and usability when looking at a single GPU.
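A quick sanity check on that arithmetic (a sketch only; the 980-relative figures are assumed approximations read off the linked TPU 1440p summaries, not exact numbers):

```python
# Rough 1440p arithmetic behind the ~25% / 27-30% figures above, all
# expressed relative to a reference GTX 980 (assumed approximations).
gtx_1080  = 1.66              # Nvidia's chart: ~66% faster than a 980
titan_x   = 1.33              # ~33% faster than a 980 at 1440p (TPU)
gtx_980ti = titan_x / 1.04    # Titan X quoted as ~4% ahead of the 980 Ti

print(f"1080 vs Titan X: {gtx_1080 / titan_x - 1:+.0%}")    # ~+25%
print(f"1080 vs 980 Ti : {gtx_1080 / gtx_980ti - 1:+.0%}")  # ~+30%
# The 980 Ti result lands at the top of the 27-30% range; per-game
# variance accounts for the rest of the spread.
```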
Cheers
 
Not only that, at 4K the 1080 should actually pull away further, because most games at 4K are GPU-limited rather than bandwidth-limited.
 
Probably, those are best case numbers for nV, so I would think they would have that on as it would be more beneficial to Pascal.
 
Still wondering who will buy this 1080.
The reference design is priced $150 more than the 980 it replaces: $700 vs $550 (at launch).
People buying mid range don't have that kind of money to spend.
Most people buying high end already have a 980ti and will wait for the next new high end card.
So why the high price? Low yields? Lack of competition?
 
The best explanation I have heard so far is that nV doesn't want to create internal competition between themselves and their partners, so people who really want that aluminum-shroud "top notch" build quality straight from nV can pay extra to get it. If they had priced it competitively with their board partners, that would have pissed them off: why would anyone buy from a board partner, when the partners are switching out components to lower overall production costs?

That being said, I don't see people who tend to buy factory-overclocked cards being that interested in the Founder's Edition anyway. And the people who want to buy lower-priced cards for the same performance wouldn't be either. So it's kind of in an odd place.
 

It's like early access. If you want a 1080 at launch, you get a Founder's card for a price premium. If you don't mind waiting you get a partner's card.

It's a way to take advantage of launch stock typically being far lower than launch demand, while still communicating what the long-term MSRP floor is (AIBs can still price higher than MSRP if they wish). There will always be people who will pay a premium in order to be the "first" to have something.

Think of it another way: typically a high-demand product like an enthusiast graphics card will sell out because launch stock doesn't match launch demand. Internet retailers will often jack up the price of said video cards during the first few weeks to months to take advantage of this (the Radeon 7970 during the Bitcoin mining heyday, for example). Nvidia are just cutting to the chase and saying they want a piece of that pie as well.

Regards,
SB
 
The reference design is priced $150 more than the 980 it replaces: $700 vs $550 (at launch).
The $150 price comparison with a 980 doesn't make a lot of sense. Who cares how big the die is, or whether or not the name ends with Ti?

It will sell for the same reason tons of people bought a GTX980 when it came out: it's the fastest GPU out there and it's expected to remain that way for quite a while.

People buying mid range don't have that kind of money to spend.
It's not a mid-range card. It's the highest-end card you can buy. It will only become mid-range when there's something better.

Most people buying high end already have a 980ti and will wait for the next new high end card.
We live in a world with 7B people. You're betting that there won't be enough people who will buy a 1080 despite having a 980Ti already. Or who will upgrade from one market segment to another. Or who are on a 2 generation cycle and move from a 780(Ti) to a 1080. Etc.

So why the high price? Low yields? Lack of competition?
Because they will sell plenty at this price. If I were in the market for a GPU, I'd buy one. I don't expect many MSRP versions; most will probably be around $650. So it's only a $50 difference, but you get it earlier. Not a difficult choice.
 