Digital Foundry Article Technical Discussion [2023]

Was that an up front license fee, or what Disney took after release? Because 100M up front after Marvel approached developers, followed by a large profit share, seems a rough deal!
Upfront, it seems. They paid 50 million dollars for Wolverine. And the contract means three games for one franchise, and each game needs to sell at least 6 million copies within a year for the contract to continue.
 
Was that an up front license fee, or what Disney took after release? Because 100M up front after Marvel approached developers, followed by a large profit share, seems a rough deal!
It's strange that Microsoft passed on such a deal.
The game is based entirely around its combat.
Not really. It has a well-written story, and the game starts with a long non-combat sequence where you have to collect comic books at the Avengers convention. And it isn't all of the combat I disliked, only with some of the heroes. Playing as Ms. Marvel was fine; it almost felt like a PS2-era God of War game, but with fists instead of Kratos' blades. I really disliked playing as Iron Man, and the game does feel padded out with combat-only missions that seem to exist just to make the game longer. But if the game were truly all about the combat, we wouldn't have had such a backlash over the voice actors being different from the MCU actors.
 
How the heck does any game company make enough money to fund the next game when they have to pay Disney such stupid amounts for licensing?
 
The single-player of this game, after the first two kinda solid hours, is a boring joke
Yep, exactly my experience. I still want to know how the story ends, but I just couldn't be bothered to keep playing. I might watch a longplay on YouTube or something someday.
 
It's strange that Microsoft passed on such a deal.
Well, it doesn't seem strange now! The narrative has been very friendly: Disney approached devs and offered them whatever IP they wanted, and Insomniac chose Spider-Man. If the reality is that Disney offered to license any IP with a hefty price tag attached, the conversation is completely different. All the 'criticism' that MS made bad decisions no longer holds. 100M is enough to fund an AAA title on your own IP, especially if Disney takes an ongoing massive chunk of the profits.

That's why this needs clarifying.
 
He should have a WWE style entrance to the panel whenever multiplayer is brought up. A fourth person enters the call and his cam shows him running down a ramp with fireworks around him.
Perhaps he is like John Cena and this has happened, but you just can't see him. 👋
 
I guess it's upfront, otherwise there's no guarantee you'd ever see that money ;)
Marvel approached Sony, so they must have had some faith that Sony would do a good job. I imagine licensing deals can be as flexible as both parties can agree: some might be an upfront cost, some might be revenue-based, and some can be a bit of both. If the licensee is likely to do well, it's probably worth the risk to eschew an upfront cost and let them invest in producing a good game that sells widely.
 
Wasn't there supposed to be a final video on Frontiers of Pandora? Or at least another one done by Alex with optimized settings?
 
This collection of arbitrary data should be formatted in a defined way and should have a reasonable number of attributes per meshlet.
If some arbitrary data were compatible with the graphics pipeline, mesh shaders would not have been needed in the first place.
Any per-vertex/primitive attributes are only defined for the mesh shader's output, as clearly laid out in the specifications. If you look at the DispatchMesh API intrinsic, there's nothing in there that states the payload MUST have some special geometry data/layout. The only hard limitation is the amount of data the payload can contain. There's also a variant of the function that doesn't use task shaders, where you don't have to specify a payload at all!
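To make that concrete, here's a minimal HLSL sketch of an amplification shader whose payload is nothing but a compacted list of surviving meshlet indices. All names and the buffer layout are made up for illustration (not from any shipped game), the visibility test is frustum-only to keep it short, and it assumes the thread-group size matches the wave size, as in Microsoft's meshlet culling samples.

```hlsl
// Illustrative payload: plain user data, no mandated geometry layout.
struct MeshletBounds
{
    float3 center;   // world-space bounding sphere
    float  radius;
};

struct CullPayload
{
    uint meshletIndices[32];   // meshlet IDs that survived the coarse cull
};

StructuredBuffer<MeshletBounds> g_bounds : register(t0);

cbuffer CullConstants : register(b0)
{
    float4 g_frustumPlanes[6];  // plane normals point inward
    uint   g_meshletCount;
};

groupshared CullPayload s_payload;

// Hypothetical coarse per-cluster visibility test (frustum only).
bool IsMeshletVisible(uint index)
{
    if (index >= g_meshletCount)
        return false;
    MeshletBounds b = g_bounds[index];
    [unroll]
    for (uint p = 0; p < 6; ++p)
    {
        if (dot(g_frustumPlanes[p].xyz, b.center) + g_frustumPlanes[p].w < -b.radius)
            return false;
    }
    return true;
}

[numthreads(32, 1, 1)]  // assumes group size == wave size
void AmplificationMain(uint dtid : SV_DispatchThreadID)
{
    bool visible = IsMeshletVisible(dtid);

    // Compact the surviving meshlet IDs into the payload with wave intrinsics.
    if (visible)
        s_payload.meshletIndices[WavePrefixCountBits(visible)] = dtid;

    // One mesh shader group per surviving meshlet; the payload carries whatever
    // the two stages agree on, the runtime only checks its size.
    DispatchMesh(WaveActiveCountBits(visible), 1, 1, s_payload);
}
```

And for the payload-free case: on the API side, ID3D12GraphicsCommandList6::DispatchMesh(x, y, z) launches mesh shader thread groups directly, with no amplification stage and no payload at all.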

You can already use regular memory with the legacy geometry pipeline, so how do you think Mantle is able to render artist-defined geometry without vertex buffers?
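For reference, here's roughly what "vertex pulling" from regular memory looks like on the legacy pipeline (struct and resource names are illustrative, not tied to any particular API or game): no input layout or vertex buffer is bound, and the vertex shader just indexes a structured buffer with SV_VertexID. A plain DrawInstanced(vertexCount, 1, 0, 0) with no vertex buffers bound is enough to drive it.

```hlsl
// Illustrative packed vertex layout; the normal is unused here and only shows
// that the layout is entirely up to the shader, not the input assembler.
struct PackedVertex
{
    float3 position;
    float3 normal;
};

StructuredBuffer<PackedVertex> g_vertices : register(t0);

cbuffer ViewConstants : register(b0)
{
    float4x4 g_viewProj;
};

// No input layout, no vertex buffer: "input assembly" is just a buffer read.
float4 VertexMain(uint vertexId : SV_VertexID) : SV_Position
{
    PackedVertex v = g_vertices[vertexId];
    return mul(g_viewProj, float4(v.position, 1.0f));
}
```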

There's a different rationale for the existence of the mesh shading pipeline. A combination of factors, such as the variable output nature of geometry shaders, made it difficult for hardware implementations to be efficient, and features in the original legacy geometry pipeline such as hardware tessellation and stream-out didn't catch on. So the idea of mesh shading is a "reset/redo button" of sorts, so that hardware vendors don't have to implement them ...
To what level are they compatible? While AMD's mesh shaders are compatible with task shaders, they have to use compute to emulate them and make a round trip through memory, essentially negating the main benefit of mesh shaders, which is to keep the data on-chip. In that case, compute shaders should also be considered perfectly 'compatible' with the rest of the graphics pipeline.
The difference between graphics shaders and compute shaders is that they operate in their own separate hardware pipelines, so there's no way for them to directly interface with each other in most cases. While AMD does have some emulation going on with task shaders, they do have hardware support to spin up the firmware that passes the outputs on to the graphics pipeline. Sure, task shaders aren't integrated within the graphics/geometry pipeline the way mesh shaders are, so while task shaders will likely bypass the hardware tessellation unit, and it's undefined how stream-out behaves in their presence, the mesh shader stage alone can seamlessly interact with those features ...
The more powerful and flexible programming model wouldn't make a 'Cull_Triangles()' call difficult, right? In the same way, it doesn't make 'TraceRay()' any more difficult, so what's the catch then? Why is access to fixed pipeline blocks not exposed anywhere?
Culling has benefits beyond just the mesh shading pipeline. The reason why PC can't have a unified geometry pipeline is simply down to API/hardware design limitations with other vendors ...
I think at this point, the benefits of Mesh Shaders for faster culling are crystal clear: cull at cluster granularity as early as possible and do finer-grain per-triangle culling if it's beneficial in the second pass (replace with free Cull_Triangles() on consoles). Should be an easy win, but it's not.
There's another reason why mesh shading is slower: there's no input assembly stage. On NV there is a true hardware stage for it that accelerates vertex buffers and vertex fetching, none of which can be used for mesh shading. AMD doesn't have any special hardware to accelerate vertex fetching, so even before the introduction of mesh shading they already had the most flexible geometry pipeline, and they didn't lose a whole lot by creating a unified one. I can see how mesh shading could be slower in many cases, especially when one vendor is as intransigent as possible about keeping their "polymorph engines" relevant ...
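For what it's worth, here's a sketch of what the "finer-grain per-triangle culling" from the quoted post can look like inside a mesh shader, with no input assembler involved: the shader fetches its meshlet's vertices and indices from plain structured buffers and flags back-facing or zero-area triangles with SV_CullPrimitive so the rasterizer drops them. Everything here (buffer layout, the 64/126 meshlet limits, the clip-space determinant test) is illustrative rather than any known implementation, and it assumes the meshlet index comes straight from SV_GroupID, i.e. no amplification stage in front.

```hlsl
// Illustrative meshlet description and buffers.
struct Meshlet
{
    uint vertexOffset;     // into g_meshletVertices
    uint vertexCount;      // <= 64
    uint triangleOffset;   // into g_meshletTriangles
    uint triangleCount;    // <= 126
};

StructuredBuffer<Meshlet> g_meshlets         : register(t0);
StructuredBuffer<float3>  g_positions        : register(t1);
StructuredBuffer<uint>    g_meshletVertices  : register(t2); // meshlet-local -> global vertex index
StructuredBuffer<uint3>   g_meshletTriangles : register(t3); // meshlet-local corner indices

cbuffer ViewConstants : register(b0)
{
    float4x4 g_viewProj;
};

// Transform a meshlet-local vertex into clip space.
float4 TransformLocalVertex(Meshlet m, uint localIndex)
{
    uint globalIndex = g_meshletVertices[m.vertexOffset + localIndex];
    return mul(g_viewProj, float4(g_positions[globalIndex], 1.0f));
}

struct VertexOut    { float4 position : SV_Position; };
struct PrimitiveOut { bool   cull     : SV_CullPrimitive; }; // true = rasterizer skips it

[outputtopology("triangle")]
[numthreads(128, 1, 1)]
void MeshMain(uint gtid : SV_GroupThreadID,
              uint gid  : SV_GroupID,
              out indices    uint3        tris[126],
              out vertices   VertexOut    verts[64],
              out primitives PrimitiveOut prims[126])
{
    Meshlet m = g_meshlets[gid];
    SetMeshOutputCounts(m.vertexCount, m.triangleCount);

    // One thread per output vertex.
    if (gtid < m.vertexCount)
        verts[gtid].position = TransformLocalVertex(m, gtid);

    // One thread per output triangle: emit indices plus a per-primitive cull flag.
    if (gtid < m.triangleCount)
    {
        uint3 tri = g_meshletTriangles[m.triangleOffset + gtid];
        tris[gtid] = tri;

        // Clip-space back-face/zero-area test via the signed determinant of the
        // homogeneous (x, y, w) corners (ignoring near-plane caveats).
        float4 p0 = TransformLocalVertex(m, tri.x);
        float4 p1 = TransformLocalVertex(m, tri.y);
        float4 p2 = TransformLocalVertex(m, tri.z);
        prims[gtid].cull = determinant(float3x3(p0.xyw, p1.xyw, p2.xyw)) <= 0.0f;
    }
}
```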
 
If you're not playing on a laptop, with integrated graphics, using the track pad, you're not a real PC gamer.

That setup, or roughly that setup (mine had a pointing stick, and discrete 3D graphics on laptops wasn't a thing yet), with a CD full of game demos, was actually one of the setups that onboarded me to PC games.
 


A brilliant video as ever. I shudder to think of how much work must go into something like this: capturing all the footage across various platforms and setups, finding the right comparison points, spotting the relevant differences, drawing the conclusions, etc... Crazy!

A pretty good showing for the PS5 to be almost on par with a 2070 Super in a game that uses hardware ray tracing, although it's unknown how much of an advantage it gains from using the mesh shader path.

Also interesting to see the comparison between HW and SW RT and the huge difference that can make in scenes with more extensive RT use. I do get the impression that the general use of RT in this game is fairly light though - at least compared to some recent examples.
 
Yes, excellent video as per usual. Really liked that he showed the difference software and hardware RT make; this is the exact kind of content I look for when I watch a DF video.

Also I'm surprised the 2070 Super performs a little better than PS5 despite missing the mesh shading path. I wonder if this could be due to the faster RT-acceleration of Turing. Nevertheless, the mesh shading path would be a welcome addition to PC, more performance is always good.

This is clearly a very good port that only misses minor marks to make it truly perfect.
 
Also I'm surprised the 2070 Super performs a little better than PS5 despite missing the mesh shading path. I wonder if this could be due to the faster RT-acceleration of Turing. Nevertheless, the mesh shading path would be a welcome addition to PC, more performance is always good.

This is clearly a very good port that only misses minor marks to make it truly perfect.

While it is just a benchmark and not a game, I think there is some relevance here.
From Videocardz.com's coverage of the 3DMark Mesh Shader benchmark, with mesh shaders off:
[Screenshot: 3DMark Mesh Shader benchmark results, 2023-12-20]
Even after AMD updated their drivers for the test, the numbers for the mesh-shaders-off run did not change.
 