It's not just N21 that's ridiculously oversized. N22 with 40 CUs and a 192-bit bus is rumored to be 340 mm².
Something being advertised as 150W TDP while consuming 165W was not the issue here; it was the load distribution, like you correctly pointed out, and is...
What the heck are you even talking about? What false power consumption? I'm talking about power draw over the PCIe slot slightly exceeding the spec (75W)...
How is that comparable? If anything you could compare this to the 'PCIe gate' at the RX 400 series launch. The difference, of course, being the PCIe slot...
I think this is a pretty good explanation of wtf is going on with the crashing.
Intel's gaming advantage... aaaand it's gone.
There's literally a firmware dump with clocks and other info a few pages back. I think I'll trust that more than some Newegg rando.
I don't know. That 40-80 CU gap seems way too big. Something has to slot in there.
While the clocks are high, memory bandwidth still makes absolutely no sense to me.
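For anyone who wants the napkin math, here's a quick Python sketch of why the bandwidth rumor looks odd, assuming the rumored 256-bit bus and a 16 Gbps GDDR6 bin (both unconfirmed):

```python
# Napkin math: GDDR6 bandwidth on the rumored N21 config.
# Both figures below are rumors/assumptions, not confirmed specs.
bus_width_bits = 256   # rumored bus width
pin_rate_gbps = 16     # assumed GDDR6 per-pin data rate

bandwidth_gbs = bus_width_bits * pin_rate_gbps / 8
print(f"{bandwidth_gbs:.0f} GB/s")  # 512 GB/s

# For scale: the 5700 XT (40 CUs) already has 448 GB/s (256-bit @ 14 Gbps),
# so doubling the CU count on nearly the same bandwidth is what looks off.
```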
Jesus H Christ. I expected at least 2.2 GHz like the PS5, but 2.5 GHz?
Well, we have a marketing slide from Nvidia (we now know how much that is worth) and an actual game running on PS5 with zero loading screens.
We began shipping GPUs to our partners in August, and have been increasing the supply weekly.
So how long does it take from the 1st shipments for partners to actually...
With no power increase? I don't see that happening. Same thing happens with consoles. Better utilization over a console's lifespan inevitably leads...
If Ampere is 'underutilized' and already hitting a power wall, what exactly would you gain by increasing utilization?
Isn't this exactly what you would expect going from a 320W to a 350W power limit?
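Roughly, yes. A crude sketch of what the bump actually buys, under the common assumption that power scales about cubically with clock near the wall (P ∝ f·V², with voltage tracking clock; the exponent is an assumption, not a measurement):

```python
# Crude estimate of clock headroom from a 320 W -> 350 W limit bump,
# assuming P ~ f * V^2 with voltage scaling roughly linearly in clock
# near the wall (so P ~ f^3). That exponent is an assumption.
p_old, p_new = 320.0, 350.0
clock_gain = (p_new / p_old) ** (1.0 / 3.0) - 1.0
print(f"~{clock_gain * 100:.1f}% extra clock")  # ~3.0%
```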
I'm pretty sure there's nothing interesting under the backplate. At least according to the ES board we've seen.
That part of Cerny's talk was interesting and implies it was actually their idea. Still, the PS5 has 36 CUs. A much smaller beast to feed.
What's the latest on N21 and bus width? Everyone still on the 256-bit train? As a contrarian, I have to call BS.
Aren't modern CPUs more than capable enough for decoding?
As long as 'tweaked' means it actually does the job.
Looks like fins along the entire length of the card. Wait, where's the bracket?
For N21? To feed 80 CUs? I'm sure consoles would benefit from that alien tech.
Not really. What AMD should do is follow their own release schedule and completely ignore all the noise. There's no need for subtle marketing...
What exactly are we supposed to conclude from this when we don't know which RDNA2-based GPU this is? And we know there are probably at least 4...
I mean, a huge cache would definitely explain the enormous 500+ mm² die size.
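Napkin math on that, using TSMC N7's published HD SRAM bit cell (~0.027 µm²) and an assumed 2x overhead factor for macro periphery (the overhead factor is a guess):

```python
# Napkin math: die area for a large on-die SRAM cache on TSMC N7.
# Bit cell ~0.027 um^2 is the published HD figure; the 2x overhead
# factor for periphery/redundancy is an assumption.
BITCELL_UM2 = 0.027
OVERHEAD = 2.0

def cache_area_mm2(size_mb: int) -> float:
    bits = size_mb * 8 * 1024 * 1024
    return bits * BITCELL_UM2 * OVERHEAD / 1e6  # um^2 -> mm^2

for mb in (64, 128):
    print(f"{mb} MB -> ~{cache_area_mm2(mb):.0f} mm^2")
# 64 MB -> ~29 mm^2, 128 MB -> ~58 mm^2 under these assumptions:
# enough to visibly inflate a die toward that 500+ mm^2 figure.
```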
That's the unfortunate state of things. The amount of nonsense being pumped out by techtubers claiming they have 'sources' is ridiculous.
Was Turing throttling down from specified boost speeds in normal operation?
What it means is the clock drops because of power or heat constraints.
Gave me a good chuckle ngl.
I think it's pretty obvious at this point Nvidia marketing was very creative with that 1.9x perf/W improvement claim. If the 3080 benchmark leaks are...
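To illustrate the creativity (with made-up numbers, not measurements): an 'up to 1.9x' figure taken at iso-performance, where the new chip sits low on its V/f curve, can coexist with a far smaller gain at the actual shipping operating point.

```python
# Illustration only, with assumed numbers (not measurements): how an
# 'up to 1.9x perf/W' figure taken at iso-performance can coexist with
# a far smaller gain at the shipping operating point.
turing_fps, turing_w = 60, 240        # assumed Turing operating point
ampere_iso_w = 130                    # assumed Ampere power at the same 60 fps
ampere_fps, ampere_w = 85, 320        # assumed Ampere shipping point

baseline = turing_fps / turing_w
iso_gain = (turing_fps / ampere_iso_w) / baseline
real_gain = (ampere_fps / ampere_w) / baseline
print(f"iso-perf: {iso_gain:.2f}x, shipping point: {real_gain:.2f}x")
# iso-perf: 1.85x, shipping point: 1.06x
```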
Almost none of these games are new.
Why the CPU cooler? :shock:
Let's get real. If the XSX is any indication, even a very moderately clocked 80 CU Navi 2x won't have any problems beating the 3070. And at lower power. Real...
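The napkin math, using the known XSX spec and an assumed 1.8 GHz clock for the big chip (the clock is my guess at 'very moderately clocked'):

```python
# FP32 throughput scaling from the known XSX GPU to a rumored 80 CU part.
# Formula: CUs * 64 lanes * 2 FLOPs/cycle * clock. The 1.8 GHz clock for
# the 80 CU chip is an assumption.
def tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000

print(f"XSX (52 CUs @ 1.825 GHz): {tflops(52, 1.825):.1f} TF")  # 12.1 TF
print(f"80 CUs @ 1.8 GHz:         {tflops(80, 1.8):.1f} TF")    # 18.4 TF
```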
Most of the people responsible for this are at Intel now. And it shows ;)
In before AMD marketing 1ups Nvidia and 25 TF becomes 50.
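To be fair, that's basically how Ampere's headline number already works: same TFLOPS formula, but counting both FP32 datapaths per SM. A quick sketch:

```python
# How a headline TFLOPS number doubles without the silicon changing:
# same formula, different 'shader' count. Ampere counts both FP32
# datapaths per SM, which is how the 3080 gets its ~30 TF figure.
def tflops(shaders: int, clock_ghz: float) -> float:
    return shaders * 2 * clock_ghz / 1000  # 2 FLOPs per shader per cycle

print(f"3080, Ampere counting (8704):       {tflops(8704, 1.71):.1f} TF")  # 29.8
print(f"3080, Turing-style counting (4352): {tflops(4352, 1.71):.1f} TF")  # 14.9
```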
And let's say they hint it's faster in one specific cherry-picked benchmark. It's not hard to predict what the response will be.
Late for what? Pretty much everyone expected a November release. At least that's what 'end of the year' or 'late 2020' means.
Well, same. I already voiced my concerns in the Ampere thread as soon as I saw the cooler leaks.
If they decide to push it just to beat the 3080, they better not screw it up with another 290X jet furnace. Ain't no one buying that.
Techtubers and their sources. And they always have a few of them. To cover all the bases. :)
Was kinda expecting them to go with 4 stacks of HBM2 for both bandwidth and power reasons.
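The napkin math for why HBM2 looks tempting (per-pin rate is an assumption; HBM2 bins run roughly 1.6-2.4 Gbps):

```python
# Napkin math for the HBM2 case. Per-pin rate is an assumption
# (HBM2 bins run roughly 1.6-2.4 Gbps).
stack_bus_bits = 1024
pin_rate_gbps = 2.0

per_stack_gbs = stack_bus_bits * pin_rate_gbps / 8
print(f"per stack: {per_stack_gbs:.0f} GB/s, 4 stacks: {4 * per_stack_gbs:.0f} GB/s")
# 256 GB/s per stack, ~1 TB/s total -- roughly double a 256-bit GDDR6
# setup, and typically at lower memory power.
```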
Now I'm really puzzled.
No way they are doing that. It's HBM for the top SKU.
True. And yes, considering how unimportant dGPU is to AMD's bottom line, there is no way in hell it takes away wafers from much more lucrative CPUs...
I'm pretty sure any reasonable buyer takes a wait-and-see approach when you have 2 next-gen launches that are so close. People jumping on...
Nvidia would NEVER price these big bois so low if they didn't know something extremely competitive is coming. It's like some people started...
Was RTX IO demonstrated in a real-world example or is this just a marketing slide?