UE5 hasn't even been released yet. You'll get lots of Steam shovelware store-asset stuff that will look amazing but do nothing.
I hope we get some games soon :D
There are rumors we will see a trailer of the new BioShock on UE5 at The Game Awards.
UE5 hasn't even been released yet. You'll get lots of Steam shovelware store-asset stuff that will look amazing but do nothing.
It sure was. NV/AMD (and even others like Matrox, etc.) had tech demos that blew minds. Today those things somehow aren't a thing anymore. Even console manufacturers had their tech demos; the PS2 had Find My Own Way, for example, and many more.
Didn't Nvidia release Marbles?
Yeah, I was going to say there were ray tracing demos from both Nvidia and AMD, and Intel had a recent graphics demo showcasing some Xe stuff. I do miss the Ruby demos ATI used to make, though. They had tiny stories in action-heavy scenes, and I thought they were much more interesting than "look at this lizard walk on this branch" type demos.
Nvidia has still been releasing tech demos. There was the Reflections demo (Stormtroopers) for Turing to showcase ray tracing, and Apollo for Maxwell to showcase VXGI.
One of the problems with RT is that devs got really good at faking lighting with SSR, cube maps, etc., so RT doesn't wow as much as it could have. Let's face it: FH5 looks amazing with the lighting fakery they used. RT is awesome, but it's not going to be night and day for most people, including me. I know there are people here who get very excited about RT because they can really tell the difference, but I believe most people can barely see it. That's why they keep using it mainly for reflections, in Spider-Man MM and the like, as that's what's going to be noticed the most by your average gamer.
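As a concrete (if oversimplified) illustration of that fakery-vs-RT gap, here's a rough C++ sketch of the two reflection paths. The Vec3/Hit types and the sampleCubeMap/traceScene/shadeHit helpers are stand-ins I made up, not engine code.

```cpp
// Rough sketch of the two reflection paths discussed above. The types and the
// sampleCubeMap/traceScene/shadeHit helpers are made-up placeholders.
struct Vec3 { float x, y, z; };
struct Hit  { bool valid; Vec3 color; };

float dot(Vec3 a, Vec3 b)    { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3  scale(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }
Vec3  sub(Vec3 a, Vec3 b)    { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

// Mirror the view direction about the surface normal: R = I - 2 * dot(N, I) * N.
Vec3 reflectDir(Vec3 incident, Vec3 normal) {
    return sub(incident, scale(normal, 2.0f * dot(normal, incident)));
}

// Placeholder bodies: in a real renderer these would be a texture fetch,
// a BVH traversal, and a full material/lighting evaluation respectively.
Vec3 sampleCubeMap(Vec3 dir)           { return dir; }
Hit  traceScene(Vec3 origin, Vec3 dir) { (void)origin; (void)dir; return { false, {0, 0, 0} }; }
Vec3 shadeHit(const Hit& h)            { return h.color; }

// "Faked" reflection: one lookup into a pre-baked environment cube map.
// Cheap, but it only knows about whatever existed when the probe was baked.
Vec3 fakedReflection(Vec3 incident, Vec3 normal) {
    return sampleCubeMap(reflectDir(incident, normal));
}

// Ray-traced reflection: trace the same direction against the live scene and
// shade whatever it hits, falling back to the cube map on a miss. More
// accurate (dynamic objects, correct parallax), but a ray plus shading per pixel.
Vec3 tracedReflection(Vec3 origin, Vec3 incident, Vec3 normal) {
    Vec3 r = reflectDir(incident, normal);
    Hit hit = traceScene(origin, r);
    return hit.valid ? shadeHit(hit) : sampleCubeMap(r);
}
```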
Right and indeed if that's the model, you will always be 2-3 years too late. To be clear, I have no problem with people just doing the "wait and see before I build hardware" approach. As a business strategy this is totally fine; Intel did this for quite some time very intentionally and mobile vendors do it exclusively (can make arguments either way on Apple). But by taking this approach you don't really get a vote on new stuff because your vote is just always going to be "I don't want anything to change because that's what keeps me the most competitive with the folks actually pushing new hardware/features".
If performance does end up being competitive in Nanite, I am anticipating that other hardware vendors will naturally stick to their own schedules rather than changing them, which could mean that these discussions get stalled again.
You seriously think the cards have enough oomph for anything else (without dialing back the calendar some 10+ years on graphics quality)?
The main problem with RT is that games are currently designed with the limitations of rasterization in mind. Basically you don't know what you're missing. I think it's really easy for most people to see the difference between an offline path-traced render (the future) and current game graphics.
Almost every new feature has to be built before there are benchmarks in place for obvious reasons. Ray tracing, bindless, conservative raster, ROVs, etc. all have to be motivated based on some amount of knowledge or deliberate intention to build future rendering techniques. This was basically my job when I was at Intel - understand what is likely to be coming down the line and align the hardware/software as much as possible in advance. There's obviously a push/pull from both sides and there can never be a 100% hit rate (see geometry shaders and arguably tessellation).
If a company is content to just wait and see on all this stuff before they build it, so be it, but then we're not really even in the same discussion. You can certainly interpret that as "ISVs don't care about mobile", but the reality is if you want to be taken seriously with new technology you have to be willing to be at least a bit proactive. Reactive is fine, but then you're stuck without a vote. See nobody caring about whatever Metal does 5+ years too late on tessellation, compute, shading languages, etc.
I wouldn't call tessellation a dud -- sure, it's kinda bad, but it's been present in just about every game for a long time. A feature doesn't have to be a generation-defining game changer to be useful. And the stuff Andrew actually listed, like bindless and conservative rasterization, is huge, even if it doesn't come with glitzy tech demos the average gamer can understand at a glance. Regardless, I'm going to bet on the industry (including big players like Epic) continuing to make steady, good decisions and progress -- some things will be duds, some promising approaches will never get hardware support, others will get a usable but not ideal solution, but on average there's a good history of steady progress.
Predicting the future is a perilous move, especially since examples you mentioned like geometry shaders, tessellation, or ROVs turned out to be duds.
Cyberpunk is a complex open-world game with complex systems and great visuals; its rasterization pass is already heavy as it is. The 3090 barely does 1440p60 in busy scenes, and 4K60 is completely out of reach even for the 3090, so it's natural that when the game adds four heavy RT effects it drops below 60 even on the 3090. The UE5 demo is just some static scenes with next to no animation, AI, or any complex simulation.
Watch Dogs, The Ascent and Cyberpunk all require sub-1440p to achieve 60 fps. Even at 1080p a 3090 can't maintain 60 in Watch Dogs and Cyberpunk, actually. What games do you think have visuals that approach the UE5 demo?
The test scenes have comical amounts of objects on screen -- a regular shipped game would have *less* (at least of the bad, expensive case), not more. AI, animation, etc. have less to do with graphics performance than you seem to imagine. UE5 games will almost certainly be bound by Lumen (not Nanite) performance to that 1080p60 or whatever, and the rest of the game will have plenty of headroom.
If the UE5 demo is barely running 1080p60 on a 3090 with no AI, big worlds, many objects on screen, animation, etc., imagine the performance when all of that is added.
UE5 performance looks much more justified when looking at the output than the RTX games, IMO. I expect that would be the common opinion amongst most gamers. I personally think all 3 of those titles offer a poor visual-to-performance ratio.
Cyberpunk is a complex open-world game with complex systems and great visuals; its rasterization pass is already heavy as it is. The 3090 barely does 1440p60 in busy scenes, and 4K60 is completely out of reach even for the 3090, so it's natural that when the game adds four heavy RT effects it drops below 60 even on the 3090. The UE5 demo is just some static scenes with next to no animation, AI, or any complex simulation.
Watch Dogs is the same: it is a game with a big, expensive world with AI and complex systems on top, and it's doing extensive RT reflections on everything (dozens of surfaces). The UE5 demo can't even do any of that yet, as Lumen can't do reflections and the demos were devoid of any. Compared to the empty UE5 demo, the world of Watch Dogs is rendering much, much more stuff.
The Ascent is another game with AI and a lot of detail in big levels, with multiple RT effects and dozens of RT reflections in any scene. It's also a game from a small developer, so it's not well optimized.
If the UE5 demo is barely running 1080p60 on a 3090 with no AI, big worlds, many objects on screen, animation, etc., imagine the performance when all of that is added.
I don't know if you were involved in the early discussions, but let me assure you that bindless and especially ray tracing all started life as *highly* contentious features, as you might imagine. Much more so than anything related to the compute execution model, to be honest.
Focusing on low-hanging fruit like more explicit bindless functionality, host-visible device memory heaps, or extending ray tracing in the near future is a relatively benign subject.
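Since "host-visible device memory heaps" came up: for anyone curious what that looks like on the API side today, here's a minimal sketch (mine, not from the discussion) of probing for a DEVICE_LOCAL + HOST_VISIBLE memory type with the stock Vulkan API. The function name and the fallback comment are just illustrative.

```cpp
#include <vulkan/vulkan.h>
#include <cstdint>

// Find a memory type that is both device-local and host-visible (e.g. the small
// 256 MB BAR window, or the whole VRAM heap when Resizable BAR is enabled).
// Returns UINT32_MAX if the implementation exposes no such type.
uint32_t FindDeviceLocalHostVisibleType(VkPhysicalDevice physicalDevice)
{
    VkPhysicalDeviceMemoryProperties memProps{};
    vkGetPhysicalDeviceMemoryProperties(physicalDevice, &memProps);

    const VkMemoryPropertyFlags wanted =
        VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT | VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;

    for (uint32_t i = 0; i < memProps.memoryTypeCount; ++i) {
        if ((memProps.memoryTypes[i].propertyFlags & wanted) == wanted) {
            return i; // CPU can write directly, GPU still reads at full speed.
        }
    }
    return UINT32_MAX; // Not available: fall back to a staging-buffer upload path.
}
```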
The main problem with RT is that games are currently designed with the limitations of rasterization in mind.
I have little sympathy for companies who are going to be forced to deal with a bunch of this stuff in the next few years as everyone had *ample* warning this time around.
That goes both ways: RTX/DXR is also designed to improve games as they are currently designed (and for streaming open-world games not even that; rather, it's a significant step back).
Can we be a little more specific regarding which companies/practices/tech features are holding the industry back, and which aren't?
...orthogonal discussion to the companies that are trying to really push the state of the art, and the former companies trying to hold the latter back is frankly a little bit silly and ultimately a self-serving business strategy that has nothing to do with what's best for the "industry".
I doubt he will want to open that can of worms. We will probably have to wait and see.
Can we be a little more specific regarding which companies/practices/tech features are holding the industry back, and which aren't?
Anyways we've gotten far enough afield from the original topic. Maybe I should have listened to the smarter part of myself and not done the public soapbox, but suffice it to say I have little sympathy for companies who are going to be forced to deal with a bunch of this stuff in the next few years as everyone had *ample* warning this time around.
Can we be a little more specific regarding which companies/practices/tech features are holding the industry back, and which aren't?
Jesse Natalie said:
Yeah, we're having active discussions with GPU vendors and other developers about similar topics, and it's a thorny issue. I'd suggest not trying to bake in any assumptions about forward progress.
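For anyone wondering what "assumptions about forward progress" usually refers to, here is my own illustration (not Jesse's words): a cross-workgroup producer/consumer pattern like the sketch below, written in plain C++ with two threads standing in for two GPU workgroups, only terminates if the producer is actually guaranteed to get scheduled.

```cpp
#include <atomic>
#include <thread>

// The consumer spin-waits on a flag the producer sets. On a CPU the OS preempts,
// so this always finishes. On a GPU with no forward-progress guarantee, the
// consumer group can occupy the machine and spin forever while the producer
// group is never scheduled at all -- exactly the assumption being warned about.
std::atomic<int> flag{0};

void producerGroup() {
    // ... produce data ...
    flag.store(1, std::memory_order_release);  // publish the result
}

void consumerGroup() {
    while (flag.load(std::memory_order_acquire) == 0) {
        // busy-wait: only safe if the producer is guaranteed to make progress
    }
    // ... consume data ...
}

int main() {
    std::thread consumer(consumerGroup);  // imagine: this group becomes resident first
    std::thread producer(producerGroup);  // this group may never run on some GPUs
    producer.join();
    consumer.join();
    return 0;
}
```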