Nvidia GeForce RTX 50-series Blackwell reviews

Doesn't it feel good for the industry to converge on standardized technologies, so that we can avoid more shortsighted tragedies from happening again?

This isn't the thread for it, but standardized technology is great when it exists. However, there's still no standardized tech that does what PhysX did 10 years ago, so it's just another hypothetical fairy tale.
 
I had similar issues. They are mostly frame pacing issues and they happen in several old games (so not necessarily PhysX related); using vertical sync/fast sync or a frame cap solved them for me.
It is 100% PhysX related. Disabling its "GPU" tier solves all issues with performance.


I don't care about the reason. If security is the reason, then NVIDIA can provide a beta or experimental branch for users to run at their own risk, but leaving users hung out to dry like this is completely unacceptable.
You can't make a "beta branch" of something which doesn't exist. The problem is that there is no 32-bit CUDA runtime for Blackwell+. The only way to solve that is to produce one, which Nvidia has already decided against. I guess if they open sourced the old PhysX (2.x-3.x), some form of wrapper/translator could be made...
 
I think if the demand is high enough, NVIDIA should at least make the documentation available (the API docs themselves are public, but there must be some internal APIs) to allow other people to make their own implementations. It's probably even possible to do on the GPU, because even without CUDA, DirectCompute should still be available (unless it isn't, in which case I don't know).
 
It is 100% PhysX related. Disabling its "GPU" tier solves all issues with performance.
Yes, GPU PhysX causes it, but it happens in several other games that have nothing to do with PhysX. Off the top of my head, I had the exact same issues in Ryse: Son of Rome and in Everybody's Gone to the Rapture, and I was able to fix them by forcing fast sync from the driver.

You can't make a "beta branch" of something which doesn't exist. The problem is that there is no 32-bit CUDA runtime for Blackwell+. The only way to solve that is to produce one, which Nvidia has already decided against.
Then they need to make a U-turn on that decision. You can't have a 4060 beat a 5090 under any scenario, and you can't have enthusiasts hung out to dry playing their favorite games on their $2000 GPU. That's simply an embarrassing level of decision making.
 
I think if the demand is high enough, NVIDIA should at least make the documentation available (the API docs themselves are public, but there must be some internal APIs) to allow other people to make their own implementations. It's probably even possible to do on the GPU, because even without CUDA, DirectCompute should still be available (unless it isn't, in which case I don't know).
Could a 32-bit app use a 32-bit wrapper which would translate the calls to a 64-bit runtime? Or would all of it have to be 32-bit?
 
Let's not go there, please. The industry is a big piece of slow-moving shit: they converged on the shitty DX12 API, and they have failed to converge on a physics standard after so many years, to this day. That's why we are so pissed off about this PhysX thing; PhysX games are gems of gaming in general, as they provide visual effects not seen in any other game to this day.
You forget that DX12 paved the way for ray tracing, and the industry is already converging on a standardized physics solution from Epic Games ...
This isn't the thread for it, but standardized technology is great when it exists. However, there's still no standardized tech that does what PhysX did 10 years ago, so it's just another hypothetical fairy tale.
Really? Because it looks to me like UE's Chaos/Niagara solution is plenty competent for just about every developer's needs ...

Even if they're not 'qualitatively' speaking an upgrade in that respect, the industry seems to think it's a massive improvement in terms of maintenance, support, and consistency ...

Would you rather the industry make possibly no progress at all (given how intensive it is to support such technologies), as opposed to its current lockstep progression between each of its participants?
 
Really? Because it looks to me like UE's Chaos/Niagara solution is plenty competent for just about every developer's needs ...

I'm referring to actual physics in shipping games, not promises.

Even if they're not 'qualitatively' speaking an upgrade in that respect, the industry seems to think it's a massive improvement in terms of maintenance, support, and consistency ...

Would you rather the industry make possibly no progress at all (given how intensive it is to support such technologies), as opposed to its current lockstep progression between each of its participants?

Progress is great when it happens, but it isn't happening. Again, it doesn't help to speak in hypotheticals. Interactive cloth, particle, and fluid physics have seen some improvement in the last decade, but they all still suck. Your view is that it's better to have very slow progress while holding hands than to have some proprietary tech blaze a trail. I am pretty sure most end users disagree and would prefer to see rapid progress in their lifetimes.
 
32-bit PhysX support is deprecated on the RTX 50 series, which means 32-bit games with PhysX (the majority of PhysX games) will not be hardware accelerated on RTX 50 GPUs.

This is hugely unacceptable, and a needless move as well. A 4090 will work better than a 5090 in these games. I mean, what the hell!

Correct me if I'm wrong, but 32-bit games that use PhysX haven't been hardware accelerated on the GPU for a long time already.
Borderlands 2 was mentioned. I completed it a year ago on an RTX 3060, and with a large number of particles I saw a big FPS drop and increased CPU load.
Also, Nvidia writes that Legacy PhysX will not work on GPUs newer than the 600 series.
 
Progress is great when it happens, but it isn't happening. Again, it doesn't help to speak in hypotheticals. Interactive cloth, particle, and fluid physics have seen some improvement in the last decade, but they all still suck. Your view is that it's better to have very slow progress while holding hands than to have some proprietary tech blaze a trail. I am pretty sure most end users disagree and would prefer to see rapid progress in their lifetimes.
I'm not sure what I'm presenting is some hypothetical when non-standard proprietary technologies have all but vanished before our very eyes, so the most likely conclusion must be that our past experiences with their integration aren't relevant to the different constraints posed today ...

I'm sure everyone wants to see faster progress, but sometimes temporary setbacks are necessary to establish stronger foundations for moving forward at all ...
 
I'm not sure what I'm presenting is some hypothetical when non-standard proprietary technologies have all but vanished before our very eyes, so the most likely conclusion must be that our past experiences with their integration aren't relevant to the different constraints posed today ...

I'm sure everyone wants to see faster progress, but sometimes temporary setbacks are necessary to establish stronger foundations for moving forward at all ...

Yes PhysX also failed at delivering on the promise. That’s not something to celebrate since “the industry” isn’t doing any better.
 
Could a 32-bit app use a 32-bit wrapper which would translate the calls to a 64-bit runtime? Or would all of it have to be 32-bit?

It probably depends on how easy (and costly) it is to make a memory region shared between a 32-bit and a 64-bit process. I believe it's possible if it's just main memory, but I'm not sure how costly it would be with a mapped VRAM region.
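
For the main-memory part it shouldn't be too bad, at least on Windows: a named file mapping can be opened by a 32-bit and a 64-bit process alike. A rough sketch of the idea (nothing PhysX-specific, the name and the struct are made up; the mapped-VRAM side is a separate problem):

C:
#include <windows.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical shared block: fixed-size types and no pointers, so the
   layout is identical for the 32-bit and the 64-bit side. */
typedef struct {
    uint32_t command;            /* e.g. "step the simulation"       */
    uint32_t vertexCount;
    float    vertices[3 * 4096];
} SharedBlock;

int main(void)
{
    /* Both processes open the same named section; the kernel object is
       shared regardless of the bitness of whoever maps it. */
    HANDLE h = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE,
                                  0, sizeof(SharedBlock),
                                  "Local\\physx_bridge_demo");
    if (!h)
        return 1;

    SharedBlock *blk = (SharedBlock *)MapViewOfFile(
        h, FILE_MAP_ALL_ACCESS, 0, 0, sizeof(SharedBlock));
    if (!blk) { CloseHandle(h); return 1; }

    blk->command = 1;            /* the other side would wait on an event */
    printf("mapped %u bytes\n", (unsigned)sizeof(SharedBlock));

    UnmapViewOfFile(blk);
    CloseHandle(h);
    return 0;
}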
 
It probably depends on how easy (and costly) it is to make a memory region shared between a 32-bit and a 64-bit process. I believe it's possible if it's just main memory, but I'm not sure how costly it would be with a mapped VRAM region.
I think that if this is at all possible, it could be the easiest way of providing some sort of compatibility at this point.
Nvidia should look into this - or just open source v2-3 of PhysX (which are old anyway, and v4-5 have been open source from the start) so that the community could do something with it.
 
Yes PhysX also failed at delivering on the promise. That’s not something to celebrate since “the industry” isn’t doing any better.
Again, if Nvidia truly cared about proliferating the technology across the industry instead of hoarding all of it for themselves, then they, and the consumers who encouraged that behaviour in the first place, have nowhere to look but in their own mirrors in pity at the end result, for not realizing the mistake earlier ...

PhysX's own TOXIC development model pushed away even major proponents like Tim Sweeney, who BADLY wanted to share the burden of development with others AND ABSOLUTELY WANTED the industry to converge on it as well, even though he knew very well that it was not the "best fit/solution" for the engine he originally developed!

If mutual destruction/hostility hasn't worked for years and shows no sign of working in a specific case, then maybe, just maybe, there'd be a better outcome if the dominant player showed some compassion/cooperation and saw the bigger picture, since there's no other clear option left ...
 
Again, if Nvidia truly cared about proliferating the technology across the industry instead of hoarding all of it for themselves, then they, and the consumers who encouraged that behaviour in the first place, have nowhere to look but in their own mirrors in pity at the end result, for not realizing the mistake earlier ...

PhysX's own TOXIC development model pushed away even major proponents like Tim Sweeney, who BADLY wanted to share the burden of development with others AND ABSOLUTELY WANTED the industry to converge on it as well, even though he knew very well that it was not the "best fit/solution" for the engine he originally developed!

If mutual destruction/hostility hasn't worked for years and shows no sign of working in a specific case, then maybe, just maybe, there'd be a better outcome if the dominant player showed some compassion/cooperation and saw the bigger picture, since there's no other clear option left ...

It’s not Nvidia’s fault that everyone else didn’t endeavor to come up with something better. There are lots of smart people out there who don’t work for them. Where are all the engineers who care about proliferating technology across the industry?
 
Correct me if I'm wrong, but 32-bit games that use PhysX haven't been hardware accelerated on the GPU for a long time already.
You are wrong; they are accelerated on everything up to and including the RTX 40 series.

Borderlands 2 was mentioned. I completed it a year ago on an RTX 3060, and with a large number of particles I saw a big FPS drop and increased CPU load.
That depends on your resolution. And the CPU load increases with object count whether they are CPU or GPU generated.
the industry is already converging on a standardized physics solution from Epic Games ... Niagara
Where? Chaos destruction is nowhere to be seen in any game ... it still doesn't hold a candle to APEX destruction. Cloth, fluid, and smoke simulations with Niagara are years behind what PhysX/APEX/FleX was able to conjure 15 years ago.

We already have one example of an indie developer stripping his UE5 game clean of any Niagara/Chaos systems and manually adding the latest PhysX version because it is so much faster and handles way more objects.
 
Could a 32-bit app use a 32-bit wrapper which would translate the calls to a 64-bit runtime? Or would all of it have to be 32-bit?
You can compile RPC interfaces and call the 32-bit implementation from 64-bit code. I assume the PhysX API is a COM interface; the MIDL compiler can interpret annotated COM interfaces.

I did the same for my Oblivion tools when I provided a 64-bit version while Havok only had a 32-bit DLL. It looks like this:

C:
/* Client-side stub generated by the MIDL compiler: the 64-bit tool calls
   GenerateMoppCode like a normal function, and NdrClientCall2 marshals the
   arguments and ships the call over RPC to the process hosting the 32-bit
   Havok DLL. */
RPC_IF_HANDLE havok_v1_0_c_ifspec = (RPC_IF_HANDLE)&havok___RpcClientInterface;

extern const MIDL_STUB_DESC havok_StubDesc;

static RPC_BINDING_HANDLE havok__MIDL_AutoBindHandle;

int GenerateMoppCode(
    /* [in] */ hvkByte material,
    /* [in] */ int nVerts,
    /* [size_is][ref][in] */ const hvkPoint3 *verts,
    /* [in] */ int nTris,
    /* [size_is][ref][in] */ const hvkTriangle *tris)
{
    CLIENT_CALL_RETURN _RetVal;

    /* serialize the parameters according to the proc format string and
       invoke the remote implementation */
    _RetVal = NdrClientCall2(
                  (PMIDL_STUB_DESC)&havok_StubDesc,
                  (PFORMAT_STRING)&nifopt2Dhvk__MIDL_ProcFormatString.Format[0],
                  material,
                  nVerts,
                  verts,
                  nTris,
                  tris);
    return (int)_RetVal.Simple;
}
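
The other half is a small host process which owns the real DLL and answers those calls. Roughly like this, from memory rather than the exact code ("havok.h" here stands for whatever header MIDL generated from the .idl, and you also have to supply the usual MIDL_user_allocate/MIDL_user_free pair for the stubs to link):

C:
/* Minimal RPC host, sketched: register the MIDL-generated server interface
   over local RPC and block serving calls. */
#include <windows.h>
#include <rpc.h>
#include "havok.h"   /* MIDL-generated header (assumed name) */

int main(void)
{
    /* local-only RPC (ALPC); the endpoint name is arbitrary but has to
       match what the client side binds to */
    RpcServerUseProtseqEpA((RPC_CSTR)"ncalrpc",
                           RPC_C_PROTSEQ_MAX_REQS_DEFAULT,
                           (RPC_CSTR)"havok_bridge", NULL);
    RpcServerRegisterIf(havok_v1_0_s_ifspec, NULL, NULL);
    /* DontWait = FALSE, so this blocks and dispatches incoming calls */
    RpcServerListen(1, RPC_C_LISTEN_MAX_CALLS_DEFAULT, FALSE);
    return 0;
}

/* The stub routes GenerateMoppCode here; this is where you call straight
   into the real (32-bit, in my case) Havok DLL. */
int GenerateMoppCode(hvkByte material, int nVerts, const hvkPoint3 *verts,
                     int nTris, const hvkTriangle *tris)
{
    /* ...forward to the actual implementation and return its result... */
    return 0;
}

For the PhysX case it would just be mirrored: the 32-bit game loads a stub DLL full of NdrClientCall2 thunks like the one above, and the actual work happens in a 64-bit host process that has access to the 64-bit CUDA runtime.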
 
I normally watch a bunch of different reviews on the hardware refresh cycles, but the negativity really pushed me away a lot this time. I started to watch the GN review and then just aborted when it was obvious Steve wanted to be a comedian, though I did watch his piece with the Nvidia thermals guy, so they can still do stuff worth watching. Had a quick look at HUB, but that was as expected, so I aborted there as well; even Daniel Owen seems to be sliding into the pit of negativity. I'm not saying bad products shouldn't be eviscerated, but purposely going off road just for more negativity is tiresome to try and endure. I think Digital Foundry got it close to right: they got the information across and called the situation how it is.

Now, as an Aussie I used to use HUB for my local look at hardware, but I moved on to TechYesCity and Optimum a while back; neither feels the need to kick a dead horse for clicks. It's like going for a beer with your mates and one just bitches about work all night when we all got that you had a bad day in the first hour. I honestly don't get how people sit through the stuff, but I've got to assume it must get more views, otherwise it wouldn't be going on.

Now, I am not saying Nvidia can do no wrong. I was going to get a 5090 but I'm not impressed currently, so I'll probably go on a deep sea fishing trip instead. I'm also not anti-AMD; I'm waiting for their 9070 XT, as I may get one of those for my future brother-in-law instead of wasting money on a stripper lol. Although I'm probably just going to watch the DF and TechYesCity reviews for that; it should be enough info to make an informed call. I just can't handle some of these reviewers being so offended you would almost think the product killed their grandmother. I actually skipped most of the last AMD CPU release reviews because of the same problem.

Edit: I don't really follow mobile hardware, so I have a quick question: as hardware progress has slowed there as well, are mobile hardware reviewers doing the same thing? Because the prices there are not going down either, right?
 