Heh, so all current hybrid RT implementations are not "physically correct"/"representing natural lighting" now?
Adding to the above, there is a common misconception regarding RT, often repeated by the media, something like 'rays simulate how light travels in the real world.' I hate that. It's bullshit.
Light does not travel along a single ray the way a laser pointer (almost) does. It spreads out in every direction, so a single ray cannot describe this.
Likewise, the incoming light that affects the shading of a surface (except for a perfect mirror) also comes from all directions, so rays don't describe this function over a sphere either.
What rays do (for gfx) is answer a visibility test. They do not transport light. Instead, we calculate the light transport from surface/light properties, normals, possibly distance or solid angle, etc.
The reason rays now give us better lighting than former realtime approaches is that we can integrate this light transport over the whole sphere by taking samples. Basically, every ray (plus possibly a shadow ray from the hit point to a light) tells us what one point on that sphere looks like, and by integrating many such point samples we get an approximation of what the whole sphere probably looks like. (If a surface is opaque, a half sphere is actually enough, because light from the backside has no effect.)
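To make that concrete, here is a rough C++ sketch of such a per-point estimate. All names are made up by me, the surface is assumed to be purely diffuse, and the "scene" is just a constant sky color; the point is only to show that the ray delivers a point sample while the actual light transport is plain math on surface properties:

```cpp
#include <cmath>
#include <cstdlib>

struct Vec3 { float x, y, z; };

Vec3  operator*(const Vec3& v, float s)        { return {v.x * s, v.y * s, v.z * s}; }
Vec3  operator*(const Vec3& a, const Vec3& b)  { return {v.x * 0, 0, 0}; }  // placeholder, see below
Vec3  operator+(const Vec3& a, const Vec3& b)  { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
float dot(const Vec3& a, const Vec3& b)        { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Stand-in for the actual ray cast: a real tracer would intersect the scene and
// return the radiance arriving from that direction (possibly via a shadow ray
// from the hit point to a light). Here the whole hemisphere is a fake blue sky.
Vec3 incomingRadiance(const Vec3& /*origin*/, const Vec3& dir)
{
    (void)dir;
    return {0.5f, 0.6f, 0.8f};
}

// Uniform random direction on the hemisphere around normal n (rejection sampling).
Vec3 sampleHemisphere(const Vec3& n)
{
    for (;;) {
        Vec3 d = { std::rand() / float(RAND_MAX) * 2.0f - 1.0f,
                   std::rand() / float(RAND_MAX) * 2.0f - 1.0f,
                   std::rand() / float(RAND_MAX) * 2.0f - 1.0f };
        float len2 = dot(d, d);
        if (len2 < 1e-4f || len2 > 1.0f) continue;      // reject points outside the unit ball
        d = d * (1.0f / std::sqrt(len2));
        return dot(d, n) >= 0.0f ? d : d * -1.0f;       // flip into the upper hemisphere
    }
}

// Monte Carlo estimate of the light leaving point p with normal n.
// Each sample ray only answers "what does the hemisphere look like in this one
// direction"; the BRDF (albedo/pi for a diffuse surface) and the cosine term
// are the light transport, which is ordinary math, not the ray.
Vec3 shade(const Vec3& p, const Vec3& n, const Vec3& albedo, int numSamples)
{
    const float kPi = 3.14159265f;
    const float pdf = 1.0f / (2.0f * kPi);              // uniform hemisphere pdf
    Vec3 sum = {0.0f, 0.0f, 0.0f};
    for (int i = 0; i < numSamples; ++i) {
        Vec3  dir      = sampleHemisphere(n);
        Vec3  Li       = incomingRadiance(p, dir);      // one point sample of the hemisphere
        float cosTheta = std::fmax(dot(n, dir), 0.0f);
        Vec3  brdf     = albedo * (1.0f / kPi);         // Lambertian BRDF
        Vec3  term     = { Li.x * brdf.x, Li.y * brdf.y, Li.z * brdf.z };
        sum = sum + term * (cosTheta / pdf);
    }
    return sum * (1.0f / float(numSamples));            // average of the point samples
}
```

(The component-wise multiply is written out inline in the loop; the unused placeholder operator above should just be removed if you actually compile this.)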
But unfortunately one such ray is not enough to tell the whole story about what the hit point looks like. Because light reflects from surfaces and then illuminates other surfaces, we need to chain multiple rays into a path to capture those bounces. That's where the path tracing name comes from.
To capture all the bounces of the real world, we would need a path of infinite length. And on top of that, we would also need infinitely many point samples on the (half) sphere to get all the incoming light.
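A grayscale toy sketch of what "chaining rays into a path" means in code, with a made-up stand-in for the scene query and the depth cut off at some fixed maximum instead of infinity:

```cpp
#include <cstdlib>

struct Hit {
    bool  valid;      // did the ray hit anything?
    float emission;   // light emitted at the hit point
    float albedo;     // how much of the incoming light the surface reflects
};

// Stand-in for the visibility test: a real tracer intersects the ray with the
// scene and also returns position and normal; faked here for brevity.
Hit traceRay()
{
    return { std::rand() % 4 != 0, 0.1f, 0.7f };   // fake: a quarter of the rays escape
}

// One path = a chain of rays. The real world would need infinite depth;
// we cut the chain off at maxDepth and accept the (usually small) error.
float tracePath(int maxDepth)
{
    float radiance   = 0.0f;   // light carried back to the camera along this path
    float throughput = 1.0f;   // how much of it survives all the bounces so far
    for (int depth = 0; depth < maxDepth; ++depth) {
        Hit h = traceRay();                      // visibility test for this path segment
        if (!h.valid) break;                     // path escaped the scene
        radiance   += throughput * h.emission;   // pick up light emitted at this bounce
        throughput *= h.albedo;                  // each bounce absorbs part of the rest
        // The next ray would start at the hit point, in a newly sampled direction.
    }
    return radiance;
}
```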
So to make a claim of 'physically correct', we first need to agree on some acceptable error, as there is no way to calculate this precisely.
If we say an error below perception, or something like the noise of an image sensor, is fine, then path tracing is accurate. It can also deal with difficult phenomena like refractions, so it's flexible.
But it's not efficient. Any efficient realtime approach would attempt to cache incoming light at the surface, so we no longer need paths because we can get that information from the cache. A cache with support for angular lookup also removes the need for many samples to approximate the sphere, so both expensive infinities can be removed if we accept some spatial and angular quantization.
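A toy illustration of that caching idea (the layout is entirely my own assumption, not how any shipping engine does it): quantize position to a coarse grid and direction to six axis-aligned bins, so a lookup replaces both the path and most of the hemisphere samples, at the price of exactly that spatial and angular quantization:

```cpp
#include <array>
#include <cmath>
#include <cstddef>
#include <functional>
#include <unordered_map>

struct CellKey {
    int x, y, z;
    bool operator==(const CellKey& o) const { return x == o.x && y == o.y && z == o.z; }
};
struct CellKeyHash {
    std::size_t operator()(const CellKey& k) const {
        return std::hash<int>()(k.x) ^ (std::hash<int>()(k.y) << 1) ^ (std::hash<int>()(k.z) << 2);
    }
};

// Cached incoming light per cell: one RGB value for each of +X,-X,+Y,-Y,+Z,-Z.
using AmbientCube = std::array<std::array<float, 3>, 6>;

struct RadianceCache {
    float cellSize = 1.0f;                                    // spatial quantization
    std::unordered_map<CellKey, AmbientCube, CellKeyHash> cells;

    // Angular quantization: pick the cube face closest to the query direction.
    static int faceIndex(float dx, float dy, float dz) {
        float ax = std::fabs(dx), ay = std::fabs(dy), az = std::fabs(dz);
        if (ax >= ay && ax >= az) return dx >= 0.0f ? 0 : 1;
        if (ay >= az)             return dy >= 0.0f ? 2 : 3;
        return dz >= 0.0f ? 4 : 5;
    }

    // The lookup replaces tracing a path: cached incoming light at a point
    // from roughly the given direction (no interpolation, to keep it short).
    std::array<float, 3> lookup(float px, float py, float pz,
                                float dx, float dy, float dz) const {
        CellKey key{ int(std::floor(px / cellSize)),
                     int(std::floor(py / cellSize)),
                     int(std::floor(pz / cellSize)) };
        auto it = cells.find(key);
        if (it == cells.end()) return {0.0f, 0.0f, 0.0f};     // nothing cached here yet
        return it->second[faceIndex(dx, dy, dz)];
    }
};
```

The hard part, of course, is not the lookup but filling and updating such a cache for large, dynamic game worlds.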
Such caching is really difficult to do. Texels of a lightmap were a first attempt, but this became harder as games grew larger. It's thus easy to sell path tracing for its simplicity and generality, and the higher the specs and costs need to be, the better for some suspect parties. ;D
But ideally, we'll see a combination of both approaches in the future. Any research and progress on PT, as shown here, will additionally help with that in the long run.
So much for lighting and its accuracy and costs. But 'Hybrid RT' does not affect lighting and has nothing to do with it. So I agree it's ok to call Quake II RTX, Portal RTX, or now CP 'path traced'.
Hybrid only means the primary visibility is rasterized, not traced.
The limitations ray traced primary visibility would address are better transparency, DOF, AA, and motion blur.
The limitations of those current path traced games compared to offline rendering come mostly from the aggressive reduction of sample counts, e.g. accepting samples from nearby pixels or previous frames, which is also where all the current progress comes from.
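Reusing previous frames can be as simple as this bare-bones sketch (my assumptions: a static camera so no reprojection is needed, one float per pixel); real implementations reproject with motion vectors and reject mismatched history, which is where most of the artifacts and most of the cleverness live:

```cpp
#include <cstddef>
#include <vector>

// Blend this frame's noisy 1-sample-per-pixel result into a running history.
// Small alpha = more reuse of old samples (less noise, more temporal lag).
void accumulate(std::vector<float>& history, const std::vector<float>& noisyFrame, float alpha)
{
    for (std::size_t i = 0; i < history.size(); ++i)
        history[i] = (1.0f - alpha) * history[i] + alpha * noisyFrame[i];
}
```

That's how a handful of rays per pixel can end up looking like hundreds, as long as nothing moves too much.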