Just rename Power On as "Fuck the planet".
At least you have the option!
I see. That's more reasonable and indeed possible, and I think the Touch Bar MacBooks kinda already do that with the Touch Bar. But that may be more restrictive and not worth the cost.
Currently, at least on Xbox One, the games' and apps' CPU quota changes based on whether the game is in the foreground, and background services like the friends list already take very few cycles. As for GameDVR, I guess there's a dedicated hardware encoder somewhere.
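Purely as a sketch of that foreground/background quota idea (the numbers and the function below are made up for illustration, not the actual Xbox One scheduler), the shape of the policy is something like:

```python
# Hypothetical quota policy: the foreground game gets almost everything,
# suspended games and background services get a sliver. Numbers are invented.
def cpu_quota(is_game: bool, is_foreground: bool) -> float:
    if is_game and is_foreground:
        return 0.85   # bulk of cycles to the active game
    if is_game:
        return 0.05   # backgrounded game: barely ticking over
    return 0.02       # friends list, chat, etc.: a few percent at most

print(cpu_quota(is_game=True, is_foreground=True))   # 0.85
print(cpu_quota(is_game=False, is_foreground=False)) # 0.02
```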
Right now you're wasting resources on a fancy UI, and you're earmarking resources for video recording, VoIP and other features that we have come to expect from a console since the Dreamcast / Xbox 360. A second APU running ARM would allow you to offload all that and free up the resources for the actual games. You'd also sip power when watching Netflix or doing non-game tasks in the OS. MS has Windows running really well on ARM and collaborated on the SQ1 and SQ2 APUs, so they can modify a GPU as they want; they've already added a lot of ML stuff to it. I would wager the only question is how much RAM you'd need to run the OS and the stuff you want to offload. 2 GB? 4 GB? So what would the price of the APU and RAM be, and is it worth the downside? Although, again, if they're going to do a streaming stick, you could have the same ARM APU in both products, bringing the cost down for both, and you'd already have had to do all the work anyway.

You might as well go one step further and remove the fancy dashboard completely. Relinquish every secondary function like voice chat to your phone; I'm sure everyone has one these days, and there's usually more than one high-performance ARM core inside. Then you can have all the CPU POWER for the game! Now that's peak performance, not some half-ass hybrid mess.
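For a rough sense of the memory side of that question, here's a back-of-the-envelope sketch in Python. The ~2.5 GB OS reservation (13.5 GB left for games on Series X) is the commonly reported figure rather than an official spec, and the 2/4 GB ARM-side pools are just the guesses from the post above:

```python
# What the main APU could gain if the OS/DVR/VoIP workload moved to a separate
# ARM APU with its own cheap DDR pool. Figures are reported/guessed, not official.
TOTAL_UNIFIED_GB = 16.0
REPORTED_OS_RESERVE_GB = 2.5   # widely reported Series X reservation, not a spec sheet

def game_available(os_still_on_main_apu_gb: float) -> float:
    """GDDR6 left for games, given how much OS reservation stays on the main APU."""
    return TOTAL_UNIFIED_GB - os_still_on_main_apu_gb

print(game_available(REPORTED_OS_RESERVE_GB))   # 13.5 GB, the status quo
for arm_ddr_gb in (2, 4):
    # Optimistic case: the whole reservation migrates to the ARM side.
    print(arm_ddr_gb, "GB ARM pool ->", game_available(0.0), "GB for games")
```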
Interesting... Then can you fill me in on why Cerny was emphasizing cache coherency and stale data in his talk? I don't want to start a vs war here. I'm just interested in why (potentially) neither manufacturer decided to implement that into their SoC design. At first glance, it appears to be an architectural design with significant gains.
Quick question for you folks. We've seen speculation regarding AMD's new cache setup, and while looking at the Series X, I noticed that it was 320-bit... The 6800 XT is 256-bit, so why did Microsoft choose to go with a 320-bit bus? This has got me thinking about whether or not the Series X SoC uses the new cache architecture from AMD. The 6800 XT, with more CUs, requires less bandwidth than the Series X and its 560 GB/s (10GB GDDR6)? Am I off base, or does something seem amiss in their design?

A larger bus width means more bandwidth, and usually when you have more compute, you're going to need more bandwidth. How you accomplish that could showcase the differences as to why the Series X may have gone with one design and the 6800 XT with another.
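For reference, peak GDDR6 bandwidth is just bus width times per-pin data rate; a quick Python sketch, assuming the commonly quoted 14 Gbps chips on the Series X's 10GB pool and 16 Gbps on the 6800 XT:

```python
def gddr6_bandwidth_gbs(bus_width_bits: int, data_rate_gbps_per_pin: float) -> float:
    """Peak bandwidth in GB/s = bus width (bits) * per-pin rate (Gbit/s) / 8 bits per byte."""
    return bus_width_bits * data_rate_gbps_per_pin / 8

# Series X fast pool: 320-bit bus, 14 Gbps GDDR6
print(gddr6_bandwidth_gbs(320, 14))   # 560.0 GB/s, the figure quoted above

# 6800 XT: 256-bit bus, 16 Gbps GDDR6, plus the 128MB Infinity Cache on top
print(gddr6_bandwidth_gbs(256, 16))   # 512.0 GB/s raw, before any cache hits
```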
The coherency engines and cache scrubbers seem to be intended to reduce performance loss due to cache flushes resulting from the GPU reading from locations that have been overwritten by an SSD read.
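As a toy model of that idea (not the actual PS5 hardware, just the concept of scrubbing only the stale lines instead of flushing everything), something like:

```python
# When an SSD read lands in memory the GPU may have cached, you can either flush
# every cache line (expensive) or "scrub" only the lines overlapping the overwritten
# range. Purely illustrative: line size and data structures are assumptions.
CACHE_LINE = 64  # bytes

def scrub(cache: dict[int, bytes], write_start: int, write_len: int) -> int:
    """Invalidate only the cache lines overlapping [write_start, write_start + write_len)."""
    first_line = write_start // CACHE_LINE
    last_line = (write_start + write_len - 1) // CACHE_LINE
    stale = [line for line in cache if first_line <= line <= last_line]
    for line in stale:
        del cache[line]
    return len(stale)            # lines scrubbed; everything else stays warm

def full_flush(cache: dict[int, bytes]) -> int:
    """The blunt alternative: throw away everything, warm or not."""
    n = len(cache)
    cache.clear()
    return n

cache = {i: bytes(CACHE_LINE) for i in range(1024)}     # 1024 warm lines
print(scrub(cache, write_start=4096, write_len=8192))   # 128 lines scrubbed
print(len(cache))                                       # 896 lines still warm
```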
And possibly that there was no guarantee the tech would be ready in time for the consoles' launch.
I don't think it matters much for reliability, though it would be cheaper to repair.
From the original manufacturer's perspective, one may be a bit cheaper in parts (no slot and mounting kit) but costlier to manufacture (more items to solder on and a larger PCB). I really don't know the costs of these tradeoffs.
The Series X isn't using Infinity Cache for sure, because they've shown the die's X-ray and there's a slide with the chip's total SRAM breakdown, with no sign of any GPU LLC.
For the PS5 it's still unknown. Cerny did mention a "generous amount of SRAM" that the I/O complex would have access to, but the diagrams shown suggest it was for exclusive use of that block.
weird, isn't today RDNA 2 day?
Two more days. The 28th.

So close... yet so far.
I remember the days we had a huge thread about the advantages of eSRAM on Xbox One. That small 32MB that pushed upwards of 192GB/s. So is this Infinity Cache basically just another eSRAM, only larger? I was under the impression that HBM was the true successor to eSRAM due to its size/cost advantages while still being able to pump out large bandwidth numbers.

In principle they do sound similar. But X1's eSRAM was more of a scratchpad, whereas Infinity Cache is probably... a cache?
Yeah, I guess in principle. But when your cache is 128MB, it's not entirely clear how it works in conjunction with the L2 and L1.
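One way to picture the distinction being made here, as a purely illustrative Python sketch (the sizes and names are stand-ins, not how either chip actually works): a scratchpad is an address range the developer copies data into explicitly, while a cache sits transparently behind the existing L1/L2 and the hardware decides what stays resident.

```python
from functools import lru_cache

# Scratchpad (X1 eSRAM-style): a fixed window you manage yourself.
scratchpad = bytearray(32 * 1024 * 1024)            # 32 MB, its own address range
render_target = bytes(8 * 1024 * 1024)              # something bandwidth-hungry
scratchpad[:len(render_target)] = render_target     # explicit copy in (and out later)

# Cache (Infinity-Cache-style): you just touch memory; recently used lines stay
# resident automatically. lru_cache here is only a stand-in for that behavior.
GDDR6: dict[int, bytes] = {}                        # stand-in backing store

def read_from_gddr6(line_addr: int) -> bytes:
    return GDDR6.get(line_addr, bytes(64))          # miss path: go out to VRAM

@lru_cache(maxsize=128)                             # stand-in for "128 MB of lines"
def fetch(line_addr: int) -> bytes:
    # Transparent: the caller never knows whether this hit the cache or VRAM.
    return read_from_gddr6(line_addr)
```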