Baseless Next Generation Rumors with no Technical Merits [pre E3 2019] *spawn*

Why would Microsoft purposely downclock Lockhart's GPU? If anything, Lockhart should have faster clocks than Anaconda because of the better thermals (or less heat generated from having a single GPU chiplet design). Unless Microsoft went with a super-shitty vapor/heatsink design for Lockhart, which I doubt.
The point of Lockhart is cost minimization though. Yields.

To a lesser extent, one might want the higher end SKU to still sport the faster front-end for the GPU.
 
To add to that, if the Hovis mobo thingy has an associated cost, you might be able to eliminate that, in addition to improving yields, by clocking lower. Maybe you could go with cheaper VRMs, less vdroop protection, and ... other mobo-y stuff.

The right combination of chip and clocks could have knock-on savings from power delivery through chip costs to cooling costs, and through those perhaps affect things like case cost and design and ... other stuff.
 
Anyone seen this?
Recently removed from /r/XboxOne
[...]

As others have pointed out already, this seems like the scribble with some changes and a lot of make-up to make it readable. But this is some Voynich manuscript shit right there. Everyone seems to read something different from the scribble. For example, the RCC. I've seen:
  • Raytracing Compute Core
  • Ray Tracing Chiplet
  • Raytracing Central Core
  • Ray Tracing Compute Chiplet
  • Ray Tracing Custom Chiplet
But let's consider just for the moment that this is real: as technically interested people, we know that the CPU and SSD alone would be a gigantic upgrade from the Xbox One, but how would they explain to the average user that Lockhart is better than the Xbox One X?

By combining the TFLOPs of the GPU and RCC to reach 6.3? If they see the TFLOPs number of the GPU alone, wouldn't they think it's worse? Just like in the past, when many thought the Pentium 4 must be better than the AMD Athlon because 3 GHz > 2 GHz, even though the Athlon was actually faster in most cases. Or by slapping a big fat 299 as the most prominent feature on the retail box?
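To make the GHz point concrete, here's a back-of-envelope sketch in Python; the IPC figures are purely illustrative placeholders, not measured numbers for either chip.

# Illustrative only: rough model of why clock speed alone misleads.
# Effective throughput ~ clock (GHz) * instructions per clock (IPC).
def throughput(clock_ghz: float, ipc: float) -> float:
    # Billions of instructions per second under this simple model.
    return clock_ghz * ipc

pentium4 = throughput(clock_ghz=3.0, ipc=1.0)   # high clock, lower IPC (made-up value)
athlon = throughput(clock_ghz=2.0, ipc=1.8)     # lower clock, higher IPC (made-up value)
print(pentium4, athlon)                         # 3.0 vs 3.6 -> the "slower" 2 GHz chip wins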

[...]

My personal take on PS5...
32GB DDR4 memory ($100-$130 bulk) or 24GB GDDR6 memory

If you chose only DDR4, you would most likely not have enough memory bandwidth to feed both the GPU and the CPU. For example, if you pick the fastest DDR4 that still runs inside the JEDEC spec (DDR4-3200), you could reach the following bandwidth:
  • 25.6 GB/s (single channel)
  • 51.2 GB/s (dual channel like on mainstream PCs)
  • 102.4 GB/s (quad channel like on HEDT platforms)
  • 153.6 GB/s (hexa channel like Intel server)
  • 204.8 GB/s (octa channel like AMD server)
  • 307.2 GB/s (dodeca channel like Intel Bruteforce Lake AP)
LPDDR4 running inside the JEDEC specs would allow between 34 GB/s and 410 GB/s going by the number of channels above, I believe. The PS4 Pro already has 218 GB/s, so your GDDR6 solutions would be more viable.
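Those figures fall straight out of transfer rate x bus width. A quick sketch of that arithmetic (my own, assuming one 64-bit channel of DDR4-3200):

# Theoretical peak bandwidth: transfer rate (MT/s) * 8 bytes per 64-bit transfer.
DDR4_3200_MTS = 3200        # fastest mainstream JEDEC DDR4 speed bin
BYTES_PER_TRANSFER = 8      # one 64-bit channel

def peak_gb_per_s(channels: int, mts: int = DDR4_3200_MTS) -> float:
    """Peak bandwidth in GB/s for the given number of 64-bit channels."""
    return channels * mts * BYTES_PER_TRANSFER / 1000

for ch in (1, 2, 4, 6, 8, 12):
    print(f"{ch:2d} channel(s): {peak_gb_per_s(ch):6.1f} GB/s")
# -> 25.6, 51.2, 102.4, 153.6, 204.8, 307.2 GB/s, matching the list above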
 
Anyone seen this?
Recently removed from /r/XboxOne
If there is one thing I'll give this diagram credit for, it's that, crazily enough, it could work even in the worst-case scenarios. At least from an API perspective, we know that this could be supported, since DX12 started doing multi-adapter. I believe that Eidos is getting very good saturation from both cards in their mGPU setups for Tomb Raider.

That being said, it's more work for developers working with mGPU.

As for the RCC chip: as long as it's 1 chip bundled with 1 GPU, this would also work.

The only thing on that diagram that doesn't make any sense at all is how the RCC chip's performance is being rated. They'd need to fake in some fluff numbers to make this diagram more believable. Aside from that, I can technically see this chip working, though the costs and price points don't make any sense to me.
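As a rough illustration of the "more work for developers" point, here's a toy Python sketch of an app explicitly splitting a frame between a main GPU and a hypothetical RCC and syncing the results itself. The function names and the split are invented for illustration; this is not any real DX12 multi-adapter API.

# Toy model only: with explicit multi-adapter, the application (not the
# driver) decides which device gets which work and where to synchronise.
from concurrent.futures import ThreadPoolExecutor

def raster_pass(frame: int) -> str:
    return f"raster(frame {frame}) on GPU"          # stand-in for real GPU work

def ray_pass(frame: int) -> str:
    return f"ray queries(frame {frame}) on RCC"     # stand-in for real RCC work

def render_frame(frame: int) -> str:
    with ThreadPoolExecutor(max_workers=2) as pool:
        raster = pool.submit(raster_pass, frame)
        rays = pool.submit(ray_pass, frame)
        # Explicit sync point: the app must wait on both devices and
        # composite the results itself before presenting the frame.
        return f"composite[{raster.result()} + {rays.result()}]"

print(render_frame(0))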
 
Apologies if the source for this post is considered poor. He is new to me at least.

He claims multiple sources for the following:
PS5 - Zen 2
XBOX - early Zen 3 (3-way SMT; 4-way is possible for other versions of Zen 3, but 1 thread is to be disabled in most applications, including the Xbox)

Will post more later after finishing. If there is anything worth it.
 
If this 3-way SMT vs 2-way SMT is the reason Microsoft is more "advanced", that's IMHO a little disappointing. I doubt it can bring huge gains in terms of game performance.
 
Assuming it’s true, what’s the difference to basic people like me?
If both are produced correctly, the same price = the same performance. Suggesting otherwise would be a major blunder by a company, and one we can't expect given the scenarios heading into next gen.
 
Not the person to ask. The channel talks about IPC gains from additional threads and cores (including how you get less of an IPC gain for each extra core and thread), but this is beyond me. He did mention something about using more cores and threads to boost RT. Once again, out of my realm. I only have the layman's knowledge that Hollywood mostly uses CPU RT for higher precision, while the gaming world is trying to do it on dedicated hardware integrated into a GPU for higher speed. Even that is simplifying. Dumb enough for me to get the idea, though.
 
Thought he said that 7nm+ and 6nm were supposed to be close enough (paraphrasing so I don't incorrectly use technical terms and screw everything up) that moving from + to 6 is relatively easy. From a design standpoint, I mean. The design carries over pretty easily IF it was designed for 7nm+ to begin with.
 
From what I understand, 6nm is the halfnode for 7nm, while 7nm EUV needs deep reworking and will be followed by 5nm EUV.
It's a lot of money to spend for little gain.
 
Hypothetically, if Microsoft uses Zen 3, then a 4 physical core setup (16 logical threads) would be more ideal than an 8-core (16 logical threads) Zen 2 setup for achieving higher clocks and saving die space in favor of a larger GPU. Could be possible....
 
From what I understand, 6nm is the halfnode for 7nm, while 7nm EUV needs deep reworking and will be followed by 5nm EUV.
It's a lot of money to spend for little gain.
TSMC has repeatedly said that they expect 5nm to be a "long" node. EUV will be more broadly deployed, and cost per transistor vs. 7nm is down from the get-go (assuming decent yields). We'll see. 7nm has won much wider customer interest than was predicted a few years back, and TSMC seems very active in moving these customers forward to their 5nm megafab.
The proof of any pudding is in the eating, though, and the benefits for high-power devices are likely to be a bit less pronounced compared to mobile chips.
 
Assuming it’s true, what’s the difference to basic people like me?
A CPU has a bunch of functional units, so one for integer maths, one for floating point maths, that sort of thing. When a piece of code needs to do some integer maths, it doesn't use the floating point unit, so that sits there twiddling its silicon thumbs. Where multithreading comes in is that you can run two pieces of code sort of in parallel. That is, when one piece of code needs to do integer maths and one needs to do floating point maths, both can work independently. However, when they're done and both need to do some memory accessing, they have to take turns.

As such, more threads enable you to get more utilisation from your CPU. The overhead is fairly low, so it's worth it. Something like 30% better overall processing performance at peak for Intel, with sustained benefits somewhere below that. Googlage threw up this first benchmark that actually shows HT slowing down a game (FFXV)! More workload benchmarks: raytracing sped up 20% with SMT. For more than two threads, though, other architectures certainly wouldn't benefit much. It depends on what functional units you have per core and how the threads can use them. If the cores got fatter, with more resources, you could get more, resulting in something more like half-cores than just threads. Otherwise, I doubt 3 threads could bring much more than what 2 threads can. It'd also not be especially good for raytracing unless extra FPUs were added, but even then you'd be bottlenecked by memory accesses. Really, if you want CPU-based tracing, you want lots of small CPU cores.
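If it helps, here's a toy Python model of that functional-unit sharing (my own illustration with made-up instruction mixes, not how a real SMT scheduler works):

# Two threads share one core. Ops needing different units (int / fp / mem)
# can issue in the same cycle; ops needing the same unit have to take turns,
# like the memory-access example above.
from itertools import zip_longest

thread_a = ["int", "int", "mem", "fp"]
thread_b = ["fp", "fp", "mem", "int"]

smt_cycles = 0
for op_a, op_b in zip_longest(thread_a, thread_b):
    if op_a and op_b and op_a == op_b:
        smt_cycles += 2     # contention for the same unit: serialise
    else:
        smt_cycles += 1     # different units (or one thread idle): co-issue

serial_cycles = len(thread_a) + len(thread_b)   # run one thread at a time
print(f"serial: {serial_cycles} cycles, SMT: {smt_cycles} cycles")
# -> serial: 8 cycles, SMT: 5 cycles for this contrived mix; real gains
#    depend entirely on how often the threads fight over the same units.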

Hypothetically, if Microsoft uses Zen 3, then a 4 physical core setup (16 logical threads) would be more ideal than an 8-core (16 logical threads) Zen 2 setup for achieving higher clocks and saving die space in favor of a larger GPU. Could be possible....
You'd have way less overall processing power, though. If Zen 3 could get 50% better utilisation out of its 4 cores through SMT, you'd have the equivalent of 6 cores' worth of work from the CPU. An 8-core setup with 20% better utilisation through 2 threads per core would be 9.6 cores' worth of processing. More importantly, the eight-core solution would be more consistent, dropping to a worst case of 8 cores versus a worst case of 4 cores for the Zen 3. If you can clock 4 cores higher, enable switching off 4 of the cores on the 8-core part for boost modes.
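Spelling out that arithmetic (the uplift percentages are the assumptions from the post above, not measured figures):

# Effective throughput in "core equivalents": physical cores * (1 + assumed SMT uplift).
def effective_cores(physical: int, smt_uplift: float) -> float:
    return physical * (1.0 + smt_uplift)

zen3_4c = effective_cores(4, 0.50)   # 4 cores, assumed +50% from 3-way SMT -> 6.0
zen2_8c = effective_cores(8, 0.20)   # 8 cores, assumed +20% from 2-way SMT -> 9.6
print(zen3_4c, zen2_8c)              # worst case (no SMT gain at all): 4.0 vs 8.0 cores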
 
yup
 
A rumor from YouTuber "Moore's Law is Dead": according to his sources, the next Xbox will have a 3-threads-per-core Zen 3 processor (4 threads, with one disabled) for ray tracing. Just a rumor.
 
His sources also said Sony was targeting 2080 Ti levels of performance, so you can write him off as far as reliability goes.
 
Does quad-channel memory get a huge boost in performance over dual channel?

Can next-gen APUs use the already cheap 3000+ 32 GB memory in quad channel?
 
His sources also said Sony was targeting 2080 Ti levels of performance, so you can write him off as far as reliability goes.

Right, but most general consumers aren't going to know that AMD's TF performance metrics aren't necessarily an equal or perf-for-perf match for an Nvidia-based product. If the PS5 or Xbox-next miraculously achieve 12.5-13.5 TF... then PR-wise they can make such a claim. Debating the authenticity of such claims will be left to the tech and gaming boards to parse over, not so much the general public domain.

But yes, I agree, I wouldn't necessarily believe him or his supposed sources.
 