NVIDIA Maxwell Speculation Thread

What about a chip where half the L2s are turned off?... Still 256-bit...

Why is L2 so prone to failure that a SKU arises with one broken? Surely L2 should be easy to keep working. For example, with Bulldozer, a 6-core processor still has the full 8 MB of cache.

I wonder if this is NVidia's strategy to keep people/AIBs from overclocking 970 so that it exceeds 980? Hobble an L2 arbitrarily (it isn't actually broken) and the chips will always be slower than 980...

L2 should be easy to keep working, i.e. any failure should be easy to compensate for thanks to redundancy, but I think L2 slices are tied to ROPs, which should be roughly as likely to fail as just about anything else.
 
It could be an interesting experiment to find out if disabling every other L2 slice would lead to a uniform half-bandwidth solution, although if it is a recovery mechanism the odds of there being a convenient failure on every other slice seem long.
At that point, the access mode needed for the partitions would be consistent chip-wide.

At least from a physical standpoint, the L2 slices should be able to handle a doubled amount of DRAM capacity per slice if the logic can be programmed to operate as if they were linked to higher-density DRAM chips.

I suppose technically it may not be just an L2 recovery mechanism so much as one covering the ROPs, the L2, the L2-to-MC link, and the points of failure specific to that slice in the L2-crossbar interface.
The area for that swath would be larger, and the static linkage between the L2 and all those components is what made it so much simpler to deactivate whole partitions in prior chips.
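
To make the coupling I'm describing concrete, here's a toy sketch. The per-slice numbers (256 KB of L2, 8 ROPs and one 32-bit channel per slice, 8 slices total) are my reading of the public GM204 material, not a confirmed breakdown, and the model ignores that the real 970 keeps the orphaned channel reachable through its neighbouring slice:

```python
# Toy model only: what a fused-off L2 slice would take with it, assuming each
# slice is tied to 256 KB of L2, 8 ROPs and one 32-bit memory channel.
from dataclasses import dataclass

@dataclass
class L2Slice:
    l2_kb: int = 256
    rops: int = 8
    mc_bits: int = 32
    enabled: bool = True

def totals(slices):
    """Aggregate whatever a given fuse configuration leaves active."""
    active = [s for s in slices if s.enabled]
    return (sum(s.l2_kb for s in active),
            sum(s.rops for s in active),
            sum(s.mc_bits for s in active))

full_chip = [L2Slice() for _ in range(8)]                  # 980-like configuration
one_fused = [L2Slice(enabled=(i != 7)) for i in range(8)]  # one slice disabled, 970-like

print(totals(full_chip))  # (2048, 64, 256)
print(totals(one_fused))  # (1792, 56, 224) -- the real 970 still exposes all eight
                          # channels by routing one through the adjacent slice
```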
 
I have been following this over the weekend/today, and Team Green has stated that it was a PR/Marketing mistake, cuz they weren't aware of the new way Maxwell could shut off portions of the chip, yadda yadda yadda.
Since they are basically throwing "it was a mistake in marketing/press review/spec" at us, I have a question that maybe some of you programmers can answer.
Does the Marketing/PR team that writes the specs ALSO have the job of programming the card's BIOS?
/JK

GPU-Z shows 64 ROPs; doesn't GPU-Z get its information from the card's BIOS?
 
I have been following this over the weekend/today, and Team Green has stated that it was a PR/Marketing mistake, cuz they weren't aware of the new way Maxwell could shut off portions of the chip, yadda yadda yadda.
Since they are basically throwing "it was a mistake in marketing/press review/spec" at us, I have a question that maybe some of you programmers can answer.
Does the Marketing/PR team that writes the specs ALSO have the job of programming the card's BIOS?
/JK

GPU-Z shows 64 ROPs; doesn't GPU-Z get its information from the card's BIOS?

GPU-Z retrieves all of that data from a database that is manually updated. So if they got the wrong information (like everyone else), it will display the wrong information. It's not read out from the GPU.

It should be the same for a lot of other software, such as AIDA64.
 
GPU-Z retrieves all of that data from a database that is manually updated. So if they got the wrong information (like everyone else), it will display the wrong information. It's not read out from the GPU.

It should be the same for a lot of other software, such as AIDA64.
The kicker is that even if you know what registers to poke, all 4 master ROP/MC partitions are still online. So if that's what you're reading it would appear to be a fully enabled GPU.
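
In other words, something along these lines, presumably. This is a purely hypothetical sketch of a tool reporting from a hand-maintained table keyed by PCI device ID; the table contents and names are mine, not GPU-Z's actual code:

```python
# Hypothetical sketch: a spec reporter that shows whatever is in a manually
# maintained table keyed by PCI device ID, rather than reading fuse state.
GPU_SPEC_DB = {
    0x13C2: {"name": "GeForce GTX 970", "rops": 64, "l2_kb": 2048},  # originally published specs
    0x13C0: {"name": "GeForce GTX 980", "rops": 64, "l2_kb": 2048},
}

def report_specs(device_id):
    # Right or wrong, whatever the database says is what gets displayed.
    return GPU_SPEC_DB.get(device_id, {"name": "unknown", "rops": None, "l2_kb": None})

print(report_specs(0x13C2))  # keeps saying 64 ROPs until someone edits the table
```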
 
It could be an interesting experiment to find out if disabling every other L2 slice would lead to a uniform half-bandwidth solution
Or maybe it would lead to the chip turning into four discrete partitions, each accessing memory at 28 GB/s like on the 970... That would be a genuine nightmare! :p (All academic speculation of course, as not even NV would actually release a product like that... Even the old Tesla, or was it Kepler, generation flagship ASICs with large quantities of disabled hardware didn't have half their stuff fused off, heh.)
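
For reference, the 28 GB/s figure falls straight out of the memory speed. This assumes 7 Gbps GDDR5 (the 970/980's rated speed) and one 32-bit channel per L2 slice, which is my assumption rather than a confirmed layout:

```python
# Back-of-the-envelope bandwidth numbers, assuming 7 Gbps GDDR5 and one
# 32-bit channel per L2 slice (8 channels on a full 256-bit GM204).
GDDR5_GBPS = 7.0      # per-pin data rate
CHANNEL_BITS = 32

per_channel = GDDR5_GBPS * CHANNEL_BITS / 8   # 28.0 GB/s per channel
full_bus    = per_channel * 8                 # 224.0 GB/s, the advertised total
every_other = per_channel * 4                 # 112.0 GB/s if every other slice were off

print(per_channel, full_bus, every_other)
```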
 
Nothing against its performance, the 970 is still a very good card, but I for one find it extremely unlikely that not a single person from Nvidia Engineering noticed the "discrepancy" in specs in the last four months.

And given what we now know, it should not be advertised as a 4 GB card.
If there will be such a part: there was a Japanese article a while back in which an NVIDIA rep (unless something got lost in translation) said quite clearly that there won't be Maxwell Tesla parts because of the lack of FP64 units, and that Pascal would be the next Tesla, which would suggest GM200 won't include additional FP64 units either.

I can confirm that I've heard the same thing from my rumour mill. GM200 will have the same "slow DP" that GM204 does. Anyway, I have to say that's a bit of a surprise, as Nvidia hasn't made a big GPU just for gaming for quite a while now.
 
GPU-Z retrieves all of that data from a database that is manually updated. So if they got the wrong information (like everyone else), it will display the wrong information. It's not read out from the GPU
nope

The kicker is that even if you know what registers to poke, all 4 master ROP/MC partitions are still online. So if that's what you're reading it would appear to be a fully enabled GPU.
yes
 
Nothing against its performance, the 970 is still a very good card, but I for one find it extremely unlikely that not a single person from Nvidia Engineering noticed the "discrepancy" in specs in the last four months.
Agreed but I can easily believe Technical Marketing didn't notice while the actual engineers wrongly assumed it was intentional and there was no need to explain this - i.e. marketing would have wanted to talk about this in minimal detail to avoid future drama, but they honestly didn't know because engineering didn't think they needed to know or didn't realise they didn't know.

If true, it is unfortunate that technical marketing at NVIDIA doesn't have the time/desire/ability to look into that kind of information in sufficient depth, and it does reveal a problem there.
 
Agreed but I can easily believe Technical Marketing didn't notice while the actual engineers wrongly assumed it was intentional and there was no need to explain this - i.e. marketing would have wanted to talk about this in minimal detail to avoid future drama, but they honestly didn't know because engineering didn't think they needed to know or didn't realise they didn't know.

If true, it is unfortunate that technical marketing at NVIDIA doesn't have the time/desire/ability to look into that kind of information in sufficient depth, and it does reveal a problem there.


I completely agree. This said, I can only ask myself why nobody corrected it afterwards (it wouldn't have been the first time reviewers got a mail a few days after the reviews to correct this type of mistake).

(I can understand that the "marketing team" didn't have the right information, or wanted to keep the technical details simple for the commercial launch, but not that Nvidia's leadership didn't know how the GPU was configured.)

Not that I really care; this doesn't change anything about the performance of the GPU.
 
So we are supposed to believe that no one from their engineering team uses their own product or even glances at news/press about it?

Silverforce11 at Anandtech brought up a good point: how did the driver team know to prioritise the full-speed VRAM if no one else knew?
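
(For what it's worth, the behaviour being alluded to could be as simple as a placement heuristic like the hypothetical sketch below. The segment sizes are the 970's publicly stated 3.5 GB + 0.5 GB split; everything else is invented and is not NVIDIA's actual driver logic.)

```python
# Hypothetical placement heuristic: prefer the fast 3.5 GB segment and only
# spill into the slow 0.5 GB segment once the fast one is full.
FAST_MB, SLOW_MB = 3584, 512

def place(size_mb, fast_used, slow_used):
    if fast_used + size_mb <= FAST_MB:
        return "fast", fast_used + size_mb, slow_used
    if slow_used + size_mb <= SLOW_MB:
        return "slow", fast_used, slow_used + size_mb
    return "evict-to-system", fast_used, slow_used

print(place(512, fast_used=3328, slow_used=0))   # ('slow', 3328, 512)
```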
 
It's pretty obvious they knew, and chose to not disclose because not doing so would obviously make their product look better. Convoluted hypothetical explanations after the fact are hard to believe, because usually, NV marketing does get the specs of their hardware right, and the one instance they don't is when it hides a deficiency in their chip.

Occam says: they lied.
 
It's pretty obvious they knew, and chose to not disclose because not doing so would obviously make their product look better. Convoluted hypothetical explanations after the fact are hard to believe, because usually, NV marketing does get the specs of their hardware right, and the one instance they don't is when it hides a deficiency in their chip.

Occam says: they lied.

Well, that and the Geforce FX ... sorry to bring it up again.

It simply seems too convenient that absolutely no one with technical insight into GM204 read a single review of the GeForce GTX 970.
 
And if they did, where should they send the email? For example, there's an error on the first page (hint: 3/8); do you assume your colleagues messed up, or do you assume the reviewer made a mistake?
 
Agreed but I can easily believe Technical Marketing didn't notice while the actual engineers wrongly assumed it was intentional and there was no need to explain this - i.e. marketing would have wanted to talk about this in minimal detail to avoid future drama, but they honestly didn't know because engineering didn't think they needed to know or didn't realise they didn't know.

If true, it is unfortunate that technical marketing at NVIDIA doesn't have the time/desire/ability to look into that kind of information in sufficient depth, and it does reveal a problem there.

You also have to wonder whether the lapse in Technical Marketing knowledge was due to the "mission accomplished, beats competition" attitude that follows successful tests/benchmarking/introduction of a "new" product scheduled to be released. I don't condone not giving the full specs on any product, but feel this may have influenced Tech Marketing's "time/desire/ability to look into that kind of information in sufficient depth".

Once you start to think about it, this could happen with the introduction of any "new" product where you really don't know if a "feature" is being fully utilised per spec. If the performance is there, you really tend not to suspect anything could be "otherwise" from a marketing, consumer, and reviewer perspective.
 