Frame buffer/ render target can be split up to reside in ESRAM and DRAM. That's nice I guess?
I'd say so, yeah.
10) Frame buffer/ render target can be split up to reside in ESRAM and DRAM.
Was this ever in doubt? I assumed the system was perfectly flexible, as the GPU has read/write access to both pools. It'd be odd to limit the GPU to only outputting to ESRAM.
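As a rough illustration of why that flexibility matters: a typical deferred-rendering G-buffer at 1080p already outgrows 32 MB of ESRAM on its own, so something has to spill to DRAM. The target count and formats below are generic assumptions for the sketch, not figures from the interview.

```python
# Rough sizing sketch: why splitting render targets between ESRAM and DRAM matters.
# The G-buffer layout below is a generic assumption, not one taken from the article.

ESRAM_BYTES = 32 * 1024 * 1024  # Xbox One's ESRAM capacity: 32 MB

def target_bytes(width, height, bytes_per_pixel):
    """Size of one render target in bytes."""
    return width * height * bytes_per_pixel

# Hypothetical deferred setup at 1080p: four RGBA8 G-buffer targets
# plus a 32-bit depth/stencil buffer (five 4-byte-per-pixel surfaces).
targets = [target_bytes(1920, 1080, 4) for _ in range(5)]

total = sum(targets)
print(f"G-buffer total: {total / 2**20:.1f} MB vs {ESRAM_BYTES / 2**20:.0f} MB of ESRAM")
# ~39.6 MB > 32 MB, so at least part of the frame data has to live in DRAM,
# which is exactly what split placement allows without giving up ESRAM for the
# most bandwidth-hungry targets.
```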
"Interestingly, the biggest source of your frame-rate drops actually comes from the CPU, not the GPU," Goosen reveals. "Adding the margin on the CPU... we actually had titles that were losing frames largely because they were CPU-bound in terms of their core threads. In providing what looks like a very little boost, it's actually a very significant win for us in making sure that we get the steady frame-rates on our console."
Importantly for me, the design for XB1 has centred around the same value as the Wii U: power efficiency. It's an interesting choice rather than going with a significant power draw. I'd like to know why they picked that as a primary objective. I wonder if there are long-term plans for the design that are more power-sensitive than a mains-powered CE device? That's more of a business discussion, I guess.
They're saying that any memory subsystem is never utilized 100%, not just theirs, so apply the same math across the board to all systems' memory.
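To make the "apply the same math across the board" point concrete, here's a quick derating sketch. The 70% utilization factor is purely an illustrative placeholder, not a measured figure for any of these pools.

```python
# Derate every pool's theoretical peak by the same (hypothetical) utilization
# factor, as the post suggests. 0.70 is an illustrative placeholder only.

def effective_bw(peak_gb_s, utilization=0.70):
    """Effective bandwidth if only a fraction of the theoretical peak is achieved."""
    return peak_gb_s * utilization

pools = {
    "XB1 DDR3 (68 GB/s peak)": 68,
    "XB1 ESRAM (204 GB/s peak)": 204,
    "PS4 GDDR5 (176 GB/s peak)": 176,
}

for name, peak in pools.items():
    print(f"{name}: ~{effective_bw(peak):.0f} GB/s effective at 70% utilization")
```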
Also, I'm not clear on whether the box has all 14 CUs enabled, or whether they're there but they went with the upclock only. It sounds like just the upclock, but then they say Sony has four more CUs instead of six.
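For reference, the ALU arithmetic behind the "four more vs six more CUs" comparison. This only counts shader throughput; it ignores the fill-rate and geometry gains that also come with a clock bump, which is part of the argument for the upclock.

```python
# Back-of-the-envelope GCN compute throughput: 64 ALUs per CU, 2 FLOPs per
# ALU per cycle (fused multiply-add). Counts shader ALU work only.

def gflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz

print(f"12 CUs @ 853 MHz: {gflops(12, 0.853):.0f} GFLOPS")  # shipping Xbox One configuration
print(f"14 CUs @ 800 MHz: {gflops(14, 0.800):.0f} GFLOPS")  # hypothetical: extra CUs, no upclock
print(f"18 CUs @ 800 MHz: {gflops(18, 0.800):.0f} GFLOPS")  # PS4, for the 4-more-vs-6-more comparison
```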
If that was the case, then the total theoretical limit should be 218GB/s, not 204, right?

The ESRAM BW also isn't doing anything mysterious. It's just... do we call it dual-ported, as there are channels for read and write? 106 GB/s tops in either discrete direction, but bonus BW if you perform simultaneous R/W. The given figure was the lowest safe target.
If that was the case, then the total theoretical limit should be 218GB/s, not 204, right?
I've read the section about the ESRAM bandwidth again and again, but I still can't understand how it works in order to get the extra BW.
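Roughly, the arithmetic behind those figures: 128 bytes per cycle in each direction at 853 MHz gives ~109 GB/s one way, and doubling that gives the 218 GB/s number. The quoted 204 GB/s only drops out if the second direction is available on about seven of every eight cycles; that 7/8 factor is an inference from the published number, not something the interview spells out.

```python
# Reverse-engineering the quoted ESRAM figures from the clock and bus width.
# The 7-of-8-cycles assumption for the second direction is inferred from the
# published 204 GB/s figure, not stated explicitly in the interview.

BYTES_PER_CYCLE = 128   # bytes per clock in one direction
CLOCK_GHZ = 0.853       # ESRAM clock after the upclock

one_way = BYTES_PER_CYCLE * CLOCK_GHZ   # ~109 GB/s read-only or write-only
naive_peak = 2 * one_way                # ~218 GB/s if both directions ran every cycle
quoted_peak = one_way * (1 + 7 / 8)     # ~204.7 GB/s with a 7/8 duty cycle on the second direction

print(f"one direction:  {one_way:.1f} GB/s")
print(f"naive combined: {naive_peak:.1f} GB/s")
print(f"7/8 duty cycle: {quoted_peak:.1f} GB/s")
```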
Didn't Cerny debunk this?
Just like our friends we're based on the Sea Islands family. We've made quite a number of changes in different parts of the areas...
We have been used to having more than ample CPU power for gaming for so long that one forgets how relatively puny those Jaguar cores are. So while this quote shouldn't come as a surprise, it is definitely worth keeping in mind, IMO.
No, he actually confirmed it. "Not entirely round".
There is no split, we know this as a fact, so can we stop with this rubbish.
You can gate and throttle so hardware doesn't consume its peak at all times, or chuck in a low-power CPU/mobile SoC for the low-power stuff. They could have chosen to design a box with a 200+ watt draw while gaming and far less while just streaming media if they had wanted.

It's designed as an (almost) always-on device. Same goes for the reason behind the size. The article says as much.

I heard they were thinking of enabling two more CUs. Looks like that was true. Pretty amazing what you learn on forums.
Last I heard they are NOT enabling those 2 extra CU's.
OT:

"Andrew said it pretty well: we really wanted to build a high performance, power-efficient box,"

I don't necessarily see it that way; it gets back to what we thought, IMO: to target 8GB at a reasonable cost, they more or less figured DDR was the way to go.
That's OT platform comparison. Irrespective of what rivals are doing, MS had a choice for their console whether to go high power draw, high performance, or low-end, and they have chosen the low end. This article admits that, which helps understand some of the technical choices (like no second GPU). Whether that's the right choice or not is a business discussion rather than a technical investigation.

I maintain that in 90% of alternate universes, Sony would have stuck at 4GB of RAM and Microsoft's decision would have looked very good. It only looks "bad" now because Sony swung for the fences at the last second. And even then, the consolation prize is a lifetime of much lower RAM costs, which isn't so bad.
Yes, they didn't give any details on how the ESRAM improves performance, such as an example of where in their software testing they're seeing the ESRAM as an enabler. We haven't got any nitty-gritty low-level details, not even timings. How low is their low latency anyway?! So a lot of our questions remain unanswered, but the rest of the internet has at least some much-needed clarification.

Overall I wanted to hear more "low-latency ESRAM makes the GPU turbocharged and here's how" type talk, so the article was disappointing to me.
Rangers wasn't talking about a physical split. Cerny confirmed a usage split, and that they're after GPGPU horsepower early in the console's life.
"Having ESRAM costs very little power and has the opportunity to give you very high bandwidth. You can reduce the bandwidth on external memory - that saves a lot of power consumption and the commodity memory is cheaper as well so you can afford more. That's really a driving force behind that... if you want a high memory capacity, relatively low power and a lot of bandwidth there are not too many ways of solving that."
Hell, we didn't even get an answer to whether XB1 is Tier 1 or Tier 2 PRT!
Yes and no. Cerny debunked the speculation that 14 CUs were dedicated to graphics and 4 were dedicated to GPGPU, but his comments lend some credence to the idea that the PS4's balance point is 14 CUs for graphics. Sony are banking on GPGPU being big in a few years and don't want developers to have to scale back graphics to free up CUs for GPGPU work.
Obviously Microsoft believe GPGPU will get more use (hence, the "overhead") but less so than Sony. Only time will tell.