Could/Should XB1S get a new, faster hardware revision? *spawn

Yeah, economics permitting. I wouldn't propose a tweaked CPU and DCC if MS hadn't already developed them and deployed them with no compatibility concerns. Likewise, I'd only propose marginal frequency gains if it weren't for proof of techniques already developed and deployed at mass scale that allow significantly boosted clocks. Even allowing for a lower-cost unit with less emphasis on bleeding-edge performance than the X1X, a 2.1 GHz CPU and a 1.1 GHz small GPU doesn't appear fanciful (1.3 GHz boost clocks for mobile Ryzen SoCs, 1.18 GHz for the much larger X1X GPU).

Likewise, by 2018 DDR4-2666 will be eminently mainstream. Ryzen processors already support 2666 as standard, without overclocking or unofficial (but totally hinted at) multiplier support. I don't think there's even a premium for DDR4-2400 over 2133 any more.

Xbox One G(Hz). ;) 2 GHz CPU / 1 GHz GPU. Nothing to write home about with +14% CPU, +17% GPU. The latter can line up with the, presumably less expensive, DDR4-2400 bin.
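For reference, a quick sanity check of those percentages (a rough sketch, assuming the commonly cited base clocks: 1.75 GHz CPU, 853 MHz for the launch XB1 GPU, 914 MHz for the S):

[code]
# Rough check of the "+14% CPU, +17% GPU" figures. Assumed base clocks:
# 1.75 GHz CPU, 853 MHz launch-XB1 GPU, 914 MHz XB1S GPU.
cpu_gain      = 2.0 / 1.75  - 1   # ~0.143 -> +14%
gpu_gain      = 1.0 / 0.853 - 1   # ~0.172 -> +17% vs. the launch XB1 clock
gpu_gain_vs_s = 1.0 / 0.914 - 1   # only ~+9% vs. the XB1S's 914 MHz
print(f"{cpu_gain:.1%}, {gpu_gain:.1%}, {gpu_gain_vs_s:.1%}")
[/code]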

I was thinking more about this at a new node. If they're going to do any further work on the APU design, they can do it in one go at the next node rather than spending any further money on 16nmFF, and the timing would line up so that it's far enough from the 1S launch from a market standpoint.

I think that despite being mostly "900p", a cheap system with really stable frame rates, top-of-the-line texture filtering (which makes a big difference to what you can actually see) and fast load times could gain fans, or at least lose fewer of them and do so more slowly.

Forced AF may be more difficult. One X can get away with it since you're looking at over 2x the texture fillrate, >4x the main memory bandwidth (where the textures are being sampled) and 4x the L2 cache.

The Durango lineage can just get simple boost improvements, while marketing can keep the Scorpio lineage at a comfortably higher tier feature set.

 
Yeah, economics permitting. I wouldn't propose a tweaked CPU and DCC if MS hadn't already developed them and deployed them with no compatibility concerns. Likewise, I'd only propose marginal frequency gains if it weren't for proof of techniques already developed and deployed at mass scale that allow significantly boosted clocks. Even allowing for a lower-cost unit with less emphasis on bleeding-edge performance than the X1X, a 2.1 GHz CPU and a 1.1 GHz small GPU doesn't appear fanciful (1.3 GHz boost clocks for mobile Ryzen SoCs, 1.18 GHz for the much larger X1X GPU).

Likewise, by 2018 DDR4-2666 will be eminently mainstream. Ryzen processors already support 2666 as standard, without overclocking or unofficial (but totally hinted at) multiplier support. I don't think there's even a premium for DDR4-2400 over 2133 any more.

I think that despite being mostly "900p", a cheap system with really stable frame rates, top-of-the-line texture filtering (which makes a big difference to what you can actually see) and fast load times could gain fans, or at least lose fewer of them and do so more slowly.
Interesting, I agree.
In ROP-heavy engines, clocking higher will help alleviate these issues and will also increase ESRAM bandwidth.

That being said, we moved from 28nm to 16nm and got a boost from 853 MHz to 914 MHz. I do wonder how much further the clock can go on a smaller node. Better cooling would increase costs, so I don't see them going that route. If they could get up to 1200 MHz though ;) lol. But without additional cooling I don't see that as possible.
 
Higher clocks alone wouldn't even be worth it though, IMO... 12 CUs @ 1.2 GHz would only be ~1.8 TF...

The only thing I could see is if Microsoft essentially did the same thing Sony did with the Pro and doubled the CU count of the base console with a minor upclock... 24 CUs @ 1 GHz would give ~3 TF...
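For context, those TF figures just follow from the standard GCN arithmetic (CUs × 64 lanes × 2 FLOPs per clock); a minimal check:

[code]
# Single-precision GCN throughput: CUs * 64 lanes * 2 FLOPs (FMA) per clock.
def gcn_tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000.0

print(gcn_tflops(12, 1.2))   # ~1.84 TF: 12 CUs @ 1.2 GHz
print(gcn_tflops(24, 1.0))   # ~3.07 TF: 24 CUs @ 1.0 GHz (Pro-style doubling)
[/code]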

But then there are the issues of:

- cost of developing this
- what type of memory is this using?
- wouldn't have the benefits of "console optimisation"
 
But then, this particular topic of memory contention always comes to mind for me. Is it plausible that, if the CPU load were particularly heavy, and heavy specifically in memory access, memory contention could be so drastic that there is little to no bandwidth left for the GPU to render at a higher resolution?

Suppose a scenario on PS4 where memory contention is so great that the bandwidth available to the GPU drops below 100 GB/s, maybe to 70 GB/s. Then the bandwidth available to the ROPs and CUs is so low that, to meet a 30 fps target, reducing resolution is your only way to hit the frame time?

There's also memory contention on XB1 (DDR3), and XB1 usually hits the same resolution in Unity as it does in other third-party games, where this problem is supposedly less impactful.

If your theory were right, we should see a lower resolution on XB1 too.
 
There's also memory contention on XB1 (DDR3), and XB1 usually hits the same resolution in Unity as it does in other third-party games, where this problem is supposedly less impactful.

If your theory were right, we should see a lower resolution on XB1 too.
Split memory pools! ESRAM + RAM!
A 1024-bit interface that can read and write simultaneously with no penalty is used to supplement the main memory system and offset its contention.

There are some advantages to a split pool: optimization is the issue, contention is not.
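Rough numbers for that, assuming the commonly quoted ESRAM parameters (1024-bit path at 853 MHz on the launch XB1, with read and write reportedly able to overlap on most cycles):

[code]
# Rough ESRAM bandwidth arithmetic, assuming the commonly quoted parameters:
# a 1024-bit (128-byte) path at 853 MHz on the launch XB1.
clock_ghz = 0.853
one_way = 128 * clock_ghz        # ~109 GB/s, pure read or pure write
peak    = one_way * 15 / 8       # ~205 GB/s: read+write overlapping on
                                 # 7 of 8 cycles (matches MS's ~204 GB/s figure)
print(round(one_way, 1), round(peak, 1))
[/code]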
 
Split memory pools! ESRAM + RAM!
A 1024-bit interface that can read and write simultaneously with no penalty is used to supplement the main memory system and offset its contention.

There are some advantages to a split pool: optimization is the issue, contention is not.

But the XB1 GPU still needs the DDR3 to function at its peak, and the problem is exactly the same as on PS4... the DDR3 is not dedicated to the CPU on XB1...

So, if less bandwidth is available in Unity due to memory contention, that's also true on XB1, yet the game has the same resolution as other third-party games... we should see a drop in resolution, though not to the same extent as on PS4 (which went from 1080p to 900p).

ESRAM might mitigate memory contention issues, but it can't remove them.
 
But the XB1 GPU still needs the DDR3 to function at its peak, and the problem is exactly the same as on PS4... the DDR3 is not dedicated to the CPU on XB1...

So, if less bandwidth is available in Unity due to memory contention, that's also true on XB1, yet the game has the same resolution as other third-party games... we should see a drop in resolution, though not to the same extent as on PS4 (which went from 1080p to 900p).

ESRAM might mitigate memory contention issues, but it can't remove them.
I didn't say it wasn't true; it certainly is. Contention is certainly there, and I'm willing to bet that under heavy load the available bandwidth will drop to 20-40 GB/s, which is a 30-60% loss outright. That is, incidentally, close to the bandwidth of its available DMA channels.

Why does the XBO need DDR3 at its peak? It alone provides only 67 GB/s at maximum, as a pure read or pure write. With memory contention and lots of mixed reads/writes it's probably closer to 20-40 GB/s.
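That peak figure falls straight out of the bus width and transfer rate (256-bit DDR3-2133 assumed):

[code]
# Peak DDR3 bandwidth on XB1: 256-bit bus at 2133 MT/s.
peak_gbs = 256 / 8 * 2133 / 1000   # ~68.3 GB/s
print(peak_gbs)
[/code]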

It's obvious that you can't produce the graphics the XBO does with 20-40 GB/s, so ESRAM is the main reason for its success here. Proper optimization and usage of this secondary pool lets developers produce results in line with its computational power and well aligned with its available ROPs.

As sebbbi has made clear, the results of work completed by the GPU do not have to be written to ESRAM and then copied to DDR3. Depending on what you need to do next, you place the result either back in ESRAM or into DDR3. By the same token, you do not have to copy data from DDR3 into ESRAM to perform work on it.

GPU pulls from DDR3 --> runs shader code --> places results in ESRAM (no copy required)
GPU pulls from ESRAM --> runs shader code --> places results back into ESRAM or DDR3 (no copy required)

ROPs can run on data in DDR3 or ESRAM or both (I suppose there are some limitations here).
But if bandwidth is at a premium, then ESRAM has plenty of dedicated bandwidth for read-modify-write work.

Optimization is hard as hell though: your data needs to be in the right place at the right time to make use of everything and to keep anything from stalling.
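Purely as an illustration of that placement decision (hypothetical names, not any real console API), the idea is roughly: keep the targets the GPU writes every frame in the small high-bandwidth pool while they fit, and leave large read-mostly data in DDR3:

[code]
# Illustrative sketch only -- hypothetical names, not a real console API.
# Targets the GPU writes every frame want the high-bandwidth ESRAM pool
# (while they fit); large, read-mostly data stays in DDR3.
ESRAM_BUDGET_MB = 32

def choose_pool(resource, esram_used_mb):
    fits = esram_used_mb + resource["size_mb"] <= ESRAM_BUDGET_MB
    return "esram" if (resource["gpu_writes"] and fits) else "ddr3"

resources = [
    {"name": "gbuffer",      "size_mb": 14,   "gpu_writes": True},
    {"name": "hdr_lighting", "size_mb": 14,   "gpu_writes": True},
    {"name": "shadow_atlas", "size_mb": 16,   "gpu_writes": True},   # no room left
    {"name": "texture_pool", "size_mb": 2048, "gpu_writes": False},
]

used = 0
for r in resources:
    pool = choose_pool(r, used)
    if pool == "esram":
        used += r["size_mb"]
    print(f'{r["name"]:>12} -> {pool}')
[/code]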
 
The XB1 GPU can't function with only 32 MB of memory. The main memory is essential to its performance. Every part of the hardware is essential; that's especially true on consoles, which are well known for their limited resources.

If memory contention is really high on the DDR3, then it's a problem for the XB1. And I'm certain that MS tried to explain to developers the best way to avoid contention, just like Sony did.
 
The XB1 GPU can't function with only 32 MB of memory. The main memory is essential to its performance. Every part of the hardware is essential; that's especially true on consoles, which are well known for their limited resources.

If memory contention is really high on the DDR3, then it's a huge problem for the XB1. And I'm certain that MS tried to explain to developers the best way to avoid contention, just like Sony did.
It's less than ideal. The contention problem will always exist, but it is lessened by split pools. How could the Xbox be managing 1080p titles, sometimes at 60 fps, on 20-40 GB/s of bandwidth? Ideally you optimize for less contention, but what if you can't?

The Xbox One can't function well without ESRAM; the bandwidth it provides is absolutely critical to performance.
 
Xbox One G(Hz). ;)

X1G. It's perfect. G for GeForce. G for game. G for "G-Unit", like Fiddy's tag-alongs.

I was thinking more about this at a new node. If they're going to do any further work on the APU design, they can do it in one go at the next node rather than spending any further money on 16nmFF, and the timing would line up so that it's far enough from the 1S launch from a market standpoint.

It could work, especially if you share development costs with the shrink of the X1X. The shrink is potentially limited by the 256-bit memory bus though, even if the interface is smaller for DDR3/4 than for GDDR5. DDR5 isn't here till 2020+.

The good thing about 16nm is that it's here now, it's going to be really cheap once 10nm and 7nm are taken up, and MS already has the designs and the clocks figured out. Frequency scaling on it is really good now. Well, for Nvidia anyway...

Forced AF may be more difficult. One X can get away with it since you're looking at over 2x the texture fillrate, >4x the main memory bandwidth (where the textures are being sampled) and 4x the L2 cache.

On some X1X-enhanced games like Halo 5 it's hitting 5x the resolution of the X1 version, with enhanced texture filtering to boot (not 16x, but still improved). I know aniso isn't free, but a huge boost to effective memory bandwidth and a 20%+ jump in raw clocks should be more than enough to force-apply a high level of aniso while also significantly increasing performance across the board.

[snip] ...That being said, we moved from 28nm to 16nm and got a boost from 853 MHz to 914 MHz.

To be fair, the priority with the X1S was dropping power consumption, and they nearly halved it. The small clock boost was to keep HDR from hurting performance. When MS targeted frequency, they hit 1.18 GHz, and on a chip with 3x the CUs.

I do wonder how much further the clock can go on a smaller node. Better cooling would increase costs, so I don't see them going that route. If they could get up to 1200 MHz though ;) lol. But without additional cooling I don't see that as possible.

1.2 GHz wouldn't be a problem on 7nm, at least if TSMC's guideline figures are meaningful. Up to 40% higher frequency at the same power, or something like that (I'm sure there are caveats!). The limits of physically shrinking the chip, in the form of I/O, might be a showstopper though. :/
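Taking that quoted uplift at face value (a rough sketch, caveats acknowledged), the arithmetic does land right around there:

[code]
# XB1S GPU clock scaled by the quoted 7nm frequency uplift (illustrative).
base_mhz = 914
for uplift in (0.30, 0.40):
    print(round(base_mhz * (1 + uplift)))   # ~1188 MHz and ~1280 MHz
[/code]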

I didn't say it wasn't true; it certainly is. Contention is certainly there, and I'm willing to bet that under heavy load the available bandwidth will drop to 20-40 GB/s, which is a 30-60% loss outright. That is, incidentally, close to the bandwidth of its available DMA channels.

Why does the XBO need DDR3 at its peak? It alone provides only 67 GB/s at maximum, as a pure read or pure write. With memory contention and lots of mixed reads/writes it's probably closer to 20-40 GB/s.

I think this is where the advantage of DDR4 and DCC can be readily demonstrated. For a dynamically scaling game with a locked 60 Hz, a CPU boost wouldn't result in further BW erosion for the GPU, so any increase in BW would mostly help the GPU.

Taking your 20 ~ 40 GB/s example figure for the GPU from the current DDR3 setup, you can see that a DDR4 2666 setup would offer something like 37 ~ 57 GB/s. That's a 40 ~ 80% increase.
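Spelling that arithmetic out (256-bit bus assumed, with the CPU's slice of bandwidth held fixed):

[code]
# Rough arithmetic behind the 37-57 GB/s estimate: hold the CPU's share of
# bandwidth fixed and give the GPU whatever a 256-bit DDR4-2666 bus adds.
ddr3_peak = 256 / 8 * 2133 / 1000    # ~68.3 GB/s
ddr4_peak = 256 / 8 * 2666 / 1000    # ~85.3 GB/s
extra = ddr4_peak - ddr3_peak        # ~17 GB/s

for gpu_now in (20, 40):
    gpu_then = gpu_now + extra
    print(gpu_now, "->", round(gpu_then, 1), f"(+{gpu_then / gpu_now - 1:.0%})")
[/code]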

Throw in DCC and you're looking at a system that could show performance gains even greater than the percentage by which the SoC's clock was increased.

Even without higher clocks or DCC, such an X1 revision would run games like Wolfenstein II at a higher average resolution and higher average frame rate.
 
How could the Xbox be managing 1080p titles, sometimes at 60 fps, on 20-40 GB/s of bandwidth? Ideally you optimize for less contention, but what if you can't?

But developers would have failed to meet those targets with bad management of memory contention... if you can hit 1080p/60 fps on XB1, it's precisely because you're making optimal use of every part of the hardware, and that includes optimal use of its main memory.

So, if Unity runs at 900p on PS4 due to memory contention issues, it should run at a lower resolution than 900p on XB1. The drop in resolution should be smaller than on PS4, though.
 
But developers would have failed to meet those targets with bad management of memory contention... if you can hit 1080p/60 fps on XB1, it's precisely because you're making optimal use of every part of the hardware...

So, if Unity runs at 900p on PS4 due to memory contention issues, it should run at a lower resolution than 900p on XB1. The drop in resolution should be smaller than on PS4, though.
It's a hypothetical situation I'm asking about. We're making broad assumptions here; how games are optimized is unknown. At the very least, the ESRAM optimizations are not well known.
 
Both the Xbox One and PS4 Pro upgrade threads are similar. Sure, it's nice to bump up RAM bandwidth and increase overall performance for a better user experience, but it doesn't address the fair-or-unfair narrative around the console. Unless devs add a 1080p target specifically for upgraded hardware, which is doubtful, games that are 900p will stay 900p. Likewise, the PS4 Pro will be largely a 1440p machine. Of course, the XBO X is looking like an 1800p machine, so it's not hitting 4K regularly; we probably won't get that until the 9th-gen consoles.

If MS and Sony decide to increase performance for their own manufacturing reasons, that'll be a nice bonus for consumers who are in the market for a console but it likely won't be much more than that.
 
Hey Sebbbi, a question about this one, perhaps one that I've spent a long time trying to break down and figure out.
I've seen _a lot_ of people say that resolution won't affect CPU load, such that if your CPU can sustain 60 fps at 1080p it can also sustain 60 fps at 4K (provided your GPU is powerful enough).
I've read this everywhere and, without fail, it's the common line of thinking. It's recently come up again with the Digital Foundry comparisons between the 4Pro and the 1X, so it's got me thinking: are we all wrong? (Too many fanboys are trying to benchmark the hardware when we should be benchmarking how the software utilizes the hardware. Typical console-war BS.)

But I now reckon that this is a general statement, more centered on the behaviour of PCs than of consoles. This particular topic came up when, I think, Assassin's Creed Unity was released at 900p on both consoles and they claimed that the CPU was the reason for the resolution. And everyone just screamed BS, because if the CPU is the bottleneck, resolution could scale upwards with the GPU as required. I think this is true in situations where the CPU doesn't have enough cycles: it becomes a bottleneck without having any effect on GPU load.

But then, this particular topic of memory contention always comes to mind for me. Is it plausible that, if the CPU load were particularly heavy, and heavy specifically in memory access, memory contention could be so drastic that there is little to no bandwidth left for the GPU to render at a higher resolution?
Assassin's Creed Unity had a very high CPU cost, because the game simulated and rendered huge crowds of people in a dense city environment (AI, animation, skinning, cloth simulation). The AC: Unity team has given several presentations about their game and tech, and you can see from the numbers that hitting 30 fps with the 1.6 GHz Jaguar cores was difficult and required lots of workarounds. One of their GDC presentations describes moving their cloth simulation technology to the GPU. This obviously saves lots of CPU time (as you don't need to simulate hundreds of cloth pieces on the CPU), but consumes some GPU time. Of course, if you need to offload some of your CPU work to the GPU, that will either reduce the rendering quality, drop the resolution or drop the frame rate. It seems plausible that GPU offloading, among other things, resulted in the 900p resolution on consoles.

Memory bandwidth contention between the CPU and GPU is a real thing on shared memory systems. A 10%+ frame time difference when running a captured GPU frame alone (no CPU work) is common. On PC laptops the difference is even greater, because the CPU and GPU also share the TDP budget. For example, in a 15W i7-based Ultrabook, GPU performance will drop drastically if the game has heavy CPU usage as well.
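A toy model of why contention shows up as a ~10% frame time difference (my numbers, purely illustrative): only the bandwidth-bound portion of the frame stretches, in proportion to whatever share of bandwidth the CPU takes.

[code]
# Toy model, illustrative numbers only: part of the GPU frame is
# bandwidth-bound, and only that part stretches when CPU traffic eats
# into the shared memory bandwidth.
def frame_time_ms(gpu_alone_ms, bw_bound_fraction, peak_bw_gbs, cpu_bw_gbs):
    bw_bound = gpu_alone_ms * bw_bound_fraction
    other = gpu_alone_ms - bw_bound
    slowdown = peak_bw_gbs / (peak_bw_gbs - cpu_bw_gbs)
    return other + bw_bound * slowdown

# e.g. a 28 ms captured frame, 40% bandwidth-bound, CPU taking 30 of 176 GB/s
print(frame_time_ms(28.0, 0.4, 176.0, 30.0))   # ~30.3 ms, roughly 8% longer
[/code]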
 
MS just can't get the underpowered base Xbox One to sell. They couldn't win November. They need this.

If it happened, I think Xbox One S games would need to dip as low as 600p in some cases. Even 720p isn't enough for scaling. For example, if a developer wants to target 900p on the hypothetical 2.6 Xbox, 720p will be too much for the Xbox S.

You can scale roughly 1x-9x just going from 720p to 4K. There's a lot of room for different performance SKUs in there.
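The pixel counts behind that range, relative to 720p:

[code]
# Pixel counts relative to 720p for the render resolutions mentioned here.
resolutions = {
    "720p":  (1280, 720),
    "900p":  (1600, 900),
    "1080p": (1920, 1080),
    "1440p": (2560, 1440),
    "1800p": (3200, 1800),
    "4K":    (3840, 2160),
}
base = 1280 * 720
for name, (w, h) in resolutions.items():
    print(f"{name:>5}: {w * h / base:.2f}x")   # 1.00x, 1.56x, 2.25x, 4.00x, 6.25x, 9.00x
[/code]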
 
Maybe not if you go generation-less.
You don't truly want to do that. You want a definitive reset-button press to drive hardware sales, create hype, and give a reason to include exclusive new functionality.

Since consoles have gone the x86 disguised-PC route, I'm sure they want backwards compatibility with old games, but without the pomp and circumstance of creating a new generation, a new console with new features runs the risk of simply 'meh'-ing itself into obsolescence, as many, perhaps most, people will simply see it as the exact same thing they've got now, just more expensive, and they'll buy the old, cheaper console instead.

But sure, post a lolicon .... I guess?
Ummm.... That word, it doesn't mean what you think it does... :p

And NO, don't go google it at work. Or near your significant other. Or, you know, near anyone else really.
 
As a consumer, I never want a hard reset on the ability to play all the games I own.

I'm not certain how cutting off Xbox One X owners when the Xbox Next is revealed in 2020-2022 can be a positive, unless you're saying developers should treat their games as cross-gen titles the way X360/XOne games were handled. I can see some benefit in cutting off Xbox One or Xbox One S owners from a game-developer perspective, since that hardware is so much lower-specced.

I think the wow factor is mostly gone from a next gen in 2020 or later unless there is a substantial breakthrough in performance per cost. I don't think an improved CPU is going to have much sizzle that can be shown. Sure, it can change what a game can do, but I don't think that will shock and awe everyone.
 
MS just can't get the underpowered base Xbox One to sell. They couldn't win November. They need this.
MS can't get the XB1X to sell either. Getting the raw-specs crown changed absolutely nothing about market share. I don't see how a new mid-range XB1 would do anything.

Instead of wasting money on yet another hardware performance target (they already have 3 this gen), maybe they should spend those hundreds of millions on developing games.
 
Just because MS learned their lesson about announcing games too early, à la Scalebound, who's to say MS isn't developing currently unannounced games now? Or should they do another dozen remakes? *shrug*

Anyways, that's more off topic.

I do agree the ship has sailed and MS should have done a new base console at least on par with the competition back in 2014 if they wanted a realistic chance at market share.
 
MS can't get the XB1X to sell either. Getting the raw-specs crown changed absolutely nothing about market share. I don't see how a new mid-range XB1 would do anything.

Instead of wasting money on yet another hardware performance target (they already have 3 this gen), maybe they should spend those hundreds of millions on developing games.
It takes time for this type of thing to have an effect. It's not going to strike like lightning and reshape the console landscape, nor is it meant to. Keep to your pledge of the most powerful console, and keep holding that crown from now until next gen; that's where you want to go with this.

Games will come too in good time.
 