VGTech Video Technical Discussion [2017]

Lol. Why should it be a bug? The XBX is not some omnipotent machine that should run everything twice as fast as any other machine. If you think that, you've only been hearing one side of the story.

Keep it civil. Stay calm. Fly a kite.


It's not patched, meaning it's not optimized for it, meaning it's running the ESRAM-emulated variant of the title.

Potentially redundant buffer swaps that would have some impact on system bandwidth, especially if the engine assumes heavy concurrent usage?

We should also consider the impact of forced AF (I haven't seen a comparison), since that will take a bite out of available resources (including bandwidth). So while 1344x756 is basically half of 1920x1080, there's only about "30%" extra shader/tex headroom left over.
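Quick arithmetic check on that pixel-count claim:

```python
low  = 1344 * 756    # 1,016,064 pixels
full = 1920 * 1080   # 2,073,600 pixels
print(low / full)    # ~0.49 -> 1344x756 really is about half of 1080p
```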

Halo 5 is interesting in that regard, but we know they have a rather conservative target with other trade-offs involved as well.

@1:50 in the Doom vid is where the major drops occur, but it's also being hit very hard on XO (mid-low 40s). Assuming it only gets that bad on XO after dropping to the minimum resolution, there's only so much the 3TF can do anyway. If it's the CPU being hit there, then +30% is only going to do so much from the low 40s as well.
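For scale, a rough sketch of that CPU-bound case, assuming the known 1.75 GHz (XB1) vs 2.3 GHz (One X) Jaguar clocks and perfect scaling with clock:

```python
uplift = 2.3 / 1.75                           # ~1.31, i.e. the "+30%"
for fps in (40, 43, 45):
    print(fps, "->", round(fps * uplift, 1))  # 52.6, 56.5, 59.1
# Even the optimistic case barely grazes 60, so low-40s stays sub-60.
```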
 
We know Doom is getting the One X treatment, so why bother discussing it before it's even patched?
Didn't we (and Digital Foundry) talk about the boosted games on Pro before any enhanced mode existed? Now we don't have the right to talk about similarly boosted games on XBX because things are unfortunately not perfect in every game?
 
Didn't we (and Digital Foundry) talk about the boosted games on Pro before any enhanced mode existed? Now we don't have the right to talk about similarly boosted games on XBX because things are unfortunately not perfect in every game?
We did? Boosted seems like such a boring topic. DF talks about boosted modes, yes, for _unpatched_ titles. If it's going to stay unpatched, then boosted is a topic worth discussing.
 
No issues from my perspective in analyzing unpatched games. Even with Doom receiving a patch later, it's still interesting to see how it performs.

I'm curious how many ROPs One X has, and what the setup is. I'm assuming 32 total, so if it's running half the GPU in unpatched games, probably 16 ROPs? Then it's forcing 16x AF and doing some ESRAM simulation by carving out 32MB of fake ESRAM in GDDR5. Bandwidth seems like the most likely culprit for any slowdowns, but maybe fillrate? I wouldn't think Doom is a CPU-heavy title, but I could be wrong.
 
I'm curious how many ROPs One X has, and what the setup is. I'm assuming 32 total, so if it's running half the GPU in unpatched games, probably 16 ROPs?
Nothing fancy, just 8 per shader engine, and yes. In boost mode, the One X would have roughly ~37% higher raw fillrate owing to clock speed.
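Back-of-the-envelope version of that, using the known 853 MHz (XB1) vs 1172 MHz (One X) GPU clocks and the 16 active ROPs assumed above:

```python
rops = 16              # same count active on both in boost mode
xb1  = rops * 0.853    # ~13.6 Gpix/s on base hardware
onex = rops * 1.172    # ~18.8 Gpix/s in boost mode
print(onex / xb1)      # ~1.374 -> the "~37%" raw fillrate gain
```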

Then it's forcing 16x AF and doing some ESRAM simulation by carving out 32MB of fake ESRAM in GDDR5. Bandwidth seems like the most likely culprit for any slowdowns, but maybe fillrate? I wouldn't think Doom is a CPU-heavy title, but I could be wrong.
Higher AF will incur a penalty to texture fillrate & bandwidth. ROP rate will matter for simpler shader passes and the depth pass.
 
I'm curious how many ROPs One X has, and what the setup is. I'm assuming 32 total, so if it's running half the GPU in unpatched games, probably 16 ROPs? Then it's forcing 16x AF and doing some ESRAM simulation by carving out 32MB of fake ESRAM in GDDR5.
The older the SDK, the less of the GPU can be used. Newer variants have access to the whole GPU even unpatched; they just suffer from ESRAM emulation.
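Nobody outside Microsoft knows how that emulation actually works, but conceptually it would amount to backing the 32MB ESRAM address range with ordinary GDDR5. A purely speculative toy sketch of the idea:

```python
ESRAM_SIZE = 32 * 1024 * 1024  # 32MB, matching the real ESRAM pool

class EmulatedEsram:
    """Speculative sketch: serve the ESRAM address range out of a region
    carved from main memory (a bytearray here). The functional behavior is
    preserved, but every access now rides the shared GDDR5 bus instead of
    a dedicated on-die pool, which is where the bandwidth cost comes in."""
    def __init__(self):
        self.backing = bytearray(ESRAM_SIZE)

    def write(self, offset: int, data: bytes) -> None:
        self.backing[offset:offset + len(data)] = data

    def read(self, offset: int, length: int) -> bytes:
        return bytes(self.backing[offset:offset + length])
```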
 
The older the SDK, the less of the GPU can be used. Newer variants have access to the whole GPU even unpatched; they just suffer from ESRAM emulation.
If it's built on the new SDK but not patched, does it still receive the benefits unpatched games get: 16x AF, RAM disk, etc.?

I'm guessing you would have to explicitly tell it to run in legacy mode or something, but then would you still have access to the full GPU?
 
We did? Boosted seems like such a boring topic. DF talks about boosted modes, yes, for _unpatched_ titles. If it's going to stay unpatched, then boosted is a topic worth discussing.

Boring? It's critical for Microsoft in order to deliver on its promise of a generation-less, forward-compatible future for Xbox. If there are games that require patches to work well, that's a bit of a problem, because aside from online games, games generally don't get support beyond a year. These are insights into which games or technologies may be an issue going forward.
 
Boring? It's critical for Microsoft in order to deliver on its promise of a generation-less, forward-compatible future for Xbox. If there are games that require patches to work well, that's a bit of a problem, because aside from online games, games generally don't get support beyond a year. These are insights into which games or technologies may be an issue going forward.
That's different. Boring as in: the end result is better frame consistency, and on dynamic-resolution titles a stronger hold on higher resolutions. Those are always the outcomes of boosted.

Compatibility is awesome, don't get me wrong; I think it's critical. And for games that shipped or run poorly, boosted is great.
 
If it's built on the new SDK but not patched, does it still receive the benefits unpatched games get: 16x AF, RAM disk, etc.?

I'm guessing you would have to explicitly tell it to run in legacy mode or something, but then would you still have access to the full GPU?

Did developers know about the X when this SDK first launched?

I am just trying to understand how or why the hardware allocation changes for these games. The old SDK gets a pretty massive GPU boost, so why not all of the GPU?

It's interesting that they split the GPU. Is this to ensure there is no register conflict, or is there some other reason?
 
If it's built on the new SDK but not patched, does it still receive the benefits unpatched games get: 16x AF, RAM disk, etc.?

I'm guessing you would have to explicitly tell it to run in legacy mode or something, but then would you still have access to the full GPU?
Yes.
As for the second question, I don't understand what you're asking.

There is one major change in the SDK that I can think of, and that is the removal of DX11 mono/slim/pre-DX12.
That splits the timeline into two SDKs, conveniently pre-DX12 and post-DX12; this happens around late 2015 if my memory serves me right. The Xbox One released in Nov 2013, for timeline comparison.

They've set it up so that pre-DX12 SDK games have access to 3TF, 16x AF, and the RAM disk,
while post-DX12 SDK games have access to 6TF, 16x AF, and the RAM disk.
Those values are locked by the 'OS', I guess; a rough sketch of the idea follows.
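If that's accurate, the OS-side lock would conceptually be nothing more than a fixed lookup keyed on the SDK era. A toy sketch with made-up names:

```python
# Hypothetical restatement of the locked profiles described above.
BOOST_PROFILES = {
    "pre_dx12_sdk":  {"gpu_tflops": 3.0, "forced_af": "16x", "ram_disk": True},
    "post_dx12_sdk": {"gpu_tflops": 6.0, "forced_af": "16x", "ram_disk": True},
}

def unpatched_profile(sdk_era: str) -> dict:
    # Unpatched titles get whatever their SDK era allows; nothing for the
    # developer to opt into, nothing for them to override.
    return BOOST_PROFILES[sdk_era]
```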

Enhanced games are completely different, and it's up to the developer to code the new settings.

It's interesting that they split the GPU. Is this to ensure there is no register conflict, or is there some other reason?
I'm sure there are compatibility challenges with the older SDK such that they couldn't use the entire 6TF.
 
KOTOR on the XBX

there is a clear performance gain, kind of in line with the CPU clock I suppose.
they also mention the resolution being 1280x960 on the XB1 and 2560x1920 on the X; the difference is very clear to see since the game seems to lack AA. AF also looked better, but I'm not sure if that's just the res.

anyway, as I suspected, it's not enough to run at 60FPS; I think the per-core performance is simply not there on these CPUs.
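For scale, the pixel math on those figures:

```python
xb1 = 1280 * 960     # 1,228,800 pixels
x   = 2560 * 1920    # 4,915,200 pixels
print(x / xb1)       # 4.0 -> exactly 2x on each axis
```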
 
KOTOR on the XBX

there is a clear performance gain, kind of in line with the CPU clock I suppose.
they also mention the resolution being 1280x960 on the XB1 and 2560x1920 on the X; the difference is very clear to see since the game seems to lack AA. AF also looked better, but I'm not sure if that's just the res.

anyway, as I suspected, it's not enough to run at 60FPS; I think the per-core performance is simply not there on these CPUs.
Every game that has not gotten a specific Xbox One X patch uses 16x AF on Xbox One X hardware. I noticed that in e.g. Fable 2, which really looks way better than on XB1/XB360 hardware.
 
Lol. Why should it be a bug? The XBX is not some omnipotent machine that should run everything twice as fast as any other machine. If you think that, you've only been hearing one side of the story.

Also, there are other minor drops throughout the short video; it's not an isolated case. And Doom can drop lower on XB1 and PS4 anyway, even using a dynamic resolution that can be quite drastic on the peasant consoles.
The game may be unpatched, but it has a dynamic resolution precisely to prevent drops like this, and on Xbox One X it runs on a 3TF GPU, a much higher-clocked Jaguar (vs the XB1 base hardware), and much more bandwidth. It can only be a bug in the game if the same drops occur on base hardware. If it were a CPU limit, the drops would be even harder on the base hardware, and Doom is normally an almost locked 60fps game, so any CPU limitation from the base model should be caught by the faster CPU (unless there were already drops down to the 30s-40s on base hardware, which I don't remember). The GPU should also not be a limit in any way (the resolution would drop before the framerate does), and bandwidth-wise there should be nothing that would run worse either.
The only limitation I can see is a bug in the software, or the HDD being held up by too much work (a loading-time issue).
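As an aside, the reason a GPU limit should show up as a resolution drop before a framerate drop is the scaler's control loop. A toy sketch of how such a controller typically behaves (thresholds, step size, and floor value invented for illustration; this is not id's actual scaler):

```python
TARGET_MS = 1000 / 60  # 16.67 ms frame budget for 60fps

def adjust_scale(scale, frame_ms, min_scale=0.7, step=0.05):
    """Toy dynamic-resolution controller: shed pixels when a frame runs
    over budget, creep back up when there's headroom. Only once the scale
    is pinned at min_scale can an overrun surface as a dropped frame."""
    if frame_ms > TARGET_MS and scale > min_scale:
        return max(min_scale, scale - step)   # trade resolution for framerate
    if frame_ms < TARGET_MS * 0.9 and scale < 1.0:
        return min(1.0, scale + step)         # reclaim resolution
    return scale
```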
 
Not sure why they opted to cap the framerate of cutscenes at 30fps now.

---
Patch 1.06 on Xbox One X significantly increased the resolution during cutscenes that are now capped to 30fps. A scene that rendered at approximately 1440x2160 on 1.04 now renders at approximately 3000x2160 on 1.06. These cutscenes still use a dynamic resolution that can reach native 3840x2160 and still seems to use the same temporal reconstruction technique as before.
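For context, relative to the native 3840x2160 ceiling those measurements work out to:

```python
native = 3840 * 2160
for patch, width in (("1.04", 1440), ("1.06", 3000)):
    print(patch, f"{width}x2160 -> {width * 2160 / native:.1%} of native 4K")
# 1.04 1440x2160 -> 37.5% of native 4K
# 1.06 3000x2160 -> 78.1% of native 4K
```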

 
But frame-pacing issues + screen tearing added on top of 30fps in those cutscenes? That's stupid. It's going to be noticeable compared to a locked 60fps.
 