Digital Foundry Article Technical Discussion Archive [2014]

I knew something was wrong. Perhaps they're bringing some additional graphical improvements with the final build, too? We'll see.

The article did say that was the case as well, so comparisons should probably wait for now. It's also worth reining in expectations a little. The XBO version runs at (from what I've heard) an unsteady 30fps at 900p. To get a steady 60fps at 1080p would take more than 2.88x the XBO's performance, although that's theoretically within reach of a 980.
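For what it's worth, the 2.88x figure falls straight out of the pixel and framerate ratios. A quick sketch (assuming performance scales linearly with pixels × fps, which real games never quite do):

```python
# Back-of-envelope scaling estimate: 900p30 -> 1080p60.
# Assumes perfectly linear scaling with pixel count and framerate,
# ignoring CPU limits, bandwidth, etc.
xbo_pixels = 1600 * 900       # 900p
target_pixels = 1920 * 1080   # 1080p
pixel_ratio = target_pixels / xbo_pixels   # 1.44
fps_ratio = 60 / 30                        # 2.0
print(f"{pixel_ratio * fps_ratio:.2f}x")   # -> 2.88x
```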
 
The article did say that was the case as well, so comparisons should probably wait for now. It's also worth reining in expectations a little. The XBO version runs at (from what I've heard) an unsteady 30fps at 900p. To get a steady 60fps at 1080p would take more than 2.88x the XBO's performance, although that's theoretically within reach of a 980.

The 980 could push for even more. I'd say 1440p at 60FPS could be plausible. Or maybe at 1080p with a little bit of supersampling.

I'm more interested in seeing what GPU would be required to achieve Xbox One level performance.
 
Yeah, I was using one vert = one poly.
There's no fixed number, but for meshes it's usually about the same number of vertices as the number of triangles.
e.g. a simple shape to picture:
cube
12 triangles & 8 vertices // smooth shading
12 triangles & 24 vertices // flat shading

OK, most models will have far more smooth shading than flat, so I'd expect the vert count to generally be lower than the tri count (unless you perhaps need different texture coordinates).
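If you want to convince yourself of the cube numbers, here's a quick sketch counting unique vertices under the two shading modes (smooth = verts keyed by position only, flat = keyed by position plus face; the cube construction here is purely for illustration):

```python
from itertools import product

# Unit cube: 8 corner positions, 6 faces, 2 triangles per face.
corners = list(product((0, 1), repeat=3))  # 8 corner positions
faces = {}
for axis in range(3):          # each face = corners sharing one coordinate
    for val in (0, 1):
        faces[(axis, val)] = [c for c in corners if c[axis] == val]

triangles = 2 * len(faces)     # 2 tris per quad face = 12

# Smooth shading: one normal per position -> positions can be shared.
smooth_verts = set(corners)

# Flat shading: normal differs per face -> (position, face) pairs.
flat_verts = {(c, face) for face, cs in faces.items() for c in cs}

print(triangles, len(smooth_verts), len(flat_verts))  # -> 12 8 24
```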

other models you can see here
http://beyond3d.com/showthread.php?t=43975&page=63

I would have thought the same about poly counts not having a major impact, were it not for the developer comments I've read about the need to use aggressive culling on Cell to get the same kind of performance from the PS3 as the 360. The 360 GPU being twice as fast at setting up, testing and rejecting polys means that without CPU culling the PS3 is at a major disadvantage.
You're forgetting that the 360 number is the total for both vertices & fragments, i.e. I think you'll want to look at triangles onscreen. In real-world tests with normal shaders I believe you'll see the PS3 can push more triangles onscreen (and that's ignoring Cell helping out), but like I said, tri counts haven't really mattered that much since the turn of the century; in the vast majority (all!) of real-world examples fragment processing is the bottleneck.
 
The lack of tearing certainly does make the WiiU version of Bayonetta the definitive one. Tearing is awful.

Triple buffering doesn't have much of a performance impact at all, but it does require a few megabytes of extra memory - something that the WiiU certainly does have over the 360.
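The "few megabytes" figure is easy to sanity-check. A quick sketch, assuming a plain RGBA8 back buffer (real titles may use other formats):

```python
# Extra memory cost of a third buffer at common resolutions,
# assuming 32-bit (RGBA8) colour -- 4 bytes per pixel.
BYTES_PER_PIXEL = 4
for name, (w, h) in {"720p": (1280, 720), "1080p": (1920, 1080)}.items():
    mb = w * h * BYTES_PER_PIXEL / (1024 * 1024)
    print(f"{name}: {mb:.1f} MB per extra buffer")
# -> 720p: 3.5 MB per extra buffer
# -> 1080p: 7.9 MB per extra buffer
```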

I have a hypothesis that the most significant factor in performance differences between versions of Bayonetta 1 is vertex/polygon processing. A lack of complex CPU culling would explain why the PS3 was so far behind, and why the WiiU was very slightly ahead of the 360.

PS3 was 250 mpps, 360 is 500, WiiU is 550. Superficially that might seem to explain the most significant aspects of the performance differences. Cell is a culling monster, but without that, a high-poly game would well and truly fuck over the PS3, and be massively faster on the 360 and a few percent better still on the WiiU.

I can't state it as a fact, but perhaps it's something worth considering?
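For context, those peak setup rates translate into per-frame budgets like this (theoretical peaks only, not real-world throughput):

```python
# Peak triangle-setup rates quoted above (polys/sec) turned into
# per-frame budgets at 30 and 60 fps.
setup_rates = {"PS3": 250e6, "360": 500e6, "WiiU": 550e6}
for fps in (30, 60):
    for name, rate in setup_rates.items():
        print(f"{name} @ {fps}fps: {rate / fps / 1e6:.2f}M tris/frame")
```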

Everything I have read suggests that triple buffering reduces the negative impact of vsync on framerate, but doesn't completely eliminate it. The Wii U's biggest advantage seems to most certainly be the memory: both the main DDR3 and the eDRAM are over twice as plentiful. If DF's analysis is to be believed, they suggest that the framerate dips are typically caused by heavy fillrate effects such as particle and post-processing effects. If my novice-level understanding of things serves me correctly, this should suggest that the eDRAM on the Wii U's GPU is in fact sufficient so as not to be a bottleneck.

I hadn't heard that the PS3 was at such a disadvantage in polygon performance. You mentioned the Cell being good at culling; was Xenon not very good at culling? I know polygons per second are no longer a big benchmark for games, but still, that's a pretty big deficit.
 
There's no fixed number, but for meshes it's usually about the same number of vertices as the number of triangles.

It's quite a bit more complicated than that. Certain attributes like UV seams or shading breaks can terminate triangle strips and increase the actual vert count significantly. More sets of UVs, skin weights etc. can also increase processing time and introduce a bottleneck before the triangle setup stage.
 
Hands on with Ryse PC
http://www.eurogamer.net/articles/d...er&utm_medium=social&utm_campaign=socialoomph
The motion blur is way overdone in motion, the 1080p upgrade is pretty significant in foliage shots as expected, and I'm also finding the PC, even on Low settings, looks better simply due to the increased resolution.


Now I think we all wanna find out how this thing runs with a naughty naughty 7850 at 1080p don't we ;)?

:oops:
[screenshot: vaQebw4.png]

[screenshot: 1lj0vZI.png]
 
It's quite a bit more complicated than that. Certain attributes like UV seams or shading breaks can terminate triangle strips and increase the actual vert count significantly. More sets of UVs, skin weights etc. can also increase processing time and introduce a bottleneck before the triangle setup stage.
& that's why I said different texture coords (UVs); shading = different normals (like flat & smooth shading, or creases in the model). Like I said, there's no fixed rule, but they're gonna be similar, perhaps with the vert count slightly lower.

From the "how many polygons" thread, the most recent 10 models with tri/vert counts:

V 3912 T 6480
V 3974 T 3876 *
V 5771 T 7700
V 1871 T 3056
V 5222 T 8563
V 7805 T 5613 *
V 5453 T 8464
V 18556 T 14394 *
V 2955 T 3411
V 1634 T 2836

* = fewer tris than verts

With those Ryse pictures, OK, the PC might be higher res, sure, but it looks like the Xbone has some sort of heavy blur over the whole screen (quincunx :D)
 
Your original post:
Clearly it's about the One catching up to the PS4; how it suddenly turned into an EA defense force and a debate about developing for 8 platforms is a tad confusing. Yes, they are closing the gap, but everyone here knows that there is a gap that simply never will be closed. So every time we see a 3rd party game that runs equally on both platforms, the PS4 version has been dialed down.

There is no reason to carry this on; it will just spin even more out of control.

OK, but a game being equal on both systems doesn't mean the PS version was dialed down in every situation. You are completely ignoring the fact that both systems have comparable bandwidth and the exact same CPU. Sure, there are extra GPU ALUs, but they are not automatically usable in every situation. For example, a title is bandwidth-bound on both systems, or the CPU is a bottleneck. How are you going to run tasks on those CUs if you don't have free memory bandwidth or the CPU overhead to feed the GPU instructions?
 
For the top set of pics, the PC no doubt wins hands down... however, I prefer the softer look of the XB1 for the bottom set. The bottom PC shot looks too exaggerated/disjointed.

The funny thing is, though, there is a slight blur on objects in the High version on PC.
It's kind of weird, though, that these shots have pretty massive blur on the X1 version, but if you look at the other DF shots comparing the X1 to PC low, normal and high, you don't see that motion-blur-like effect on the X1. They also go on to say that there is some sort of smoothing effect that brings a film-like quality to both the High PC version and the X1 that is missing on the low settings and dialed back on the normal setting.
[screenshot: XXO_000.bmp.jpg]

[screenshot: high_000.bmp.jpg]
 
You're forgetting that the 360 number is the total for both vertices & fragments, i.e. I think you'll want to look at triangles onscreen. In real-world tests with normal shaders I believe you'll see the PS3 can push more triangles onscreen (and that's ignoring Cell helping out), but like I said, tri counts haven't really mattered that much since the turn of the century; in the vast majority (all!) of real-world examples fragment processing is the bottleneck.

I've never tried a direct comparison to see which could push more triangles but it's kind of uninteresting in a vacuum. In real life PS3's vert unit was quite bad, and was indeed a bottleneck for many games. Standard optimizations were SPU backface culling, constant patching, SPU skinning, and interpolator packing... all more or less aimed at helping along the vertex/tri hardware.
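For anyone curious what CPU-side backface culling is doing conceptually: reject triangles whose screen-space winding is clockwise before they ever reach the GPU's setup unit. A toy sketch (the data and function names here are made up for illustration, nothing like actual SPU code):

```python
# Toy screen-space backface cull: drop clockwise (back-facing) triangles
# before they reach the GPU's triangle setup stage.

def signed_area_2d(a, b, c):
    """Twice the signed area of a screen-space triangle (CCW = positive)."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])

def cull_backfaces(triangles):
    """Keep only counter-clockwise (front-facing) triangles."""
    return [t for t in triangles if signed_area_2d(*t) > 0]

tris = [
    ((0, 0), (10, 0), (0, 10)),   # CCW -> front-facing, kept
    ((0, 0), (0, 10), (10, 0)),   # CW  -> back-facing, culled
]
print(len(cull_backfaces(tris)))  # -> 1
```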
 
The funny thing is, though, there is a slight blur on objects in the High version on PC.
It's kind of weird, though, that these shots have pretty massive blur on the X1 version, but if you look at the other DF shots comparing the X1 to PC low, normal and high, you don't see that motion-blur-like effect on the X1. They also go on to say that there is some sort of smoothing effect that brings a film-like quality to both the High PC version and the X1 that is missing on the low settings and dialed back on the normal setting.
[screenshot: XXO_000.bmp.jpg]

[screenshot: high_000.bmp.jpg]

Yeah, to me, the X1 seems to have some kind of "grain filter", or rather a "film filter", that is almost not visible, but that gives an incredible look to the game!
 
Hey, just saying, but Ryse does NOT look that blurry at all on my 50" Panasonic!!!
Where on earth are these screens taken from?
Or rather, what percentage of the screen are you zooming in on?!?!
How about reading back through the thread to find the link? It's not difficult. Or better yet, read the thread title that says "Digital Foundry Article technical discussion". Bit of a clue there. The captures are lossless HDMI captures from the XB1. Ergo, that is exactly what is shown on screen, barring a catastrophic blunder by DF. Zoom is 100%. The image is a 1920x1080 image pushed out of the XB1's HDMI, as is obvious from the UI in other screenshots.

My guess, comparing different screenies, is that there's a little more DOF being applied in that particular shot on XB1. If you didn't believe in an industry-wide anti-Ryse conspiracy, you'd probably be able to reason as much for yourself...
 
Their AA method (SMAA 1Tx) starts getting used on normal settings. You can clearly see it in the screen shot comparison. Screenshots captured at low settings are more aliased than normal or high.
 