I am not sure I fully agree with that. The Crytek demo is nice, but it is dramatically less complex (it is more or less static) and less physically accurate than any of the games or demos we have seen with DXR or VKRT HW acceleration. And even then, it runs rather poorly in comparison on AMD hardware.
Ofc I do not expect it could compete with current HW RT, and I never said so. Tracing accuracy, however, seems as good as in any other RT implementation. Physical shading correctness, denoising methods etc. are off topic if we only talk about hardware vs. software RT.
I strongly disagree that it runs poorly on AMD, because I tried it myself and 60 fps at 720p is good. Impressive even, considering any 'out of the box UE4' game usually runs worse at 1080p without RT for me.
The scene complexity is high enough to show it's usable for games. Or at least it would be, if our expectation weren't to match RTX.
Also, we simply do not know how their methods scale with multiple objects.
I also assume many dynamic objects are a problem. I really think a BVH would be better than a regular grid as the acceleration structure, and I do not understand why almost everybody uses grids for GPU RT. A BVH means more divergence but less brute-force work, and less extra cost for dynamic scenes.
And I assume they have a problem with rays that are not parallel, i.e. incoherent rays. Likely the additional divergence in traversal would hurt performance a lot, and unlike switching from a grid to a BVH, this can't be fixed easily.
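To make the grid vs. BVH point concrete, here is a rough sketch of an iterative, stack-based BVH traversal. This is hypothetical code, not Crytek's: hitAABB is a standard slab test, and the triangle test is left as a caller-supplied callback. The per-node box test is why a BVH does less brute-force work than marching every ray through grid cells, and the per-ray stack is where the divergence shows up once rays stop being coherent, because neighbouring rays push and pop different nodes. A BVH can also be refit for moving objects instead of re-binning a whole grid.

#include <algorithm>
#include <functional>
#include <limits>
#include <vector>

// Hypothetical minimal structures, just to show the traversal pattern.
struct Ray  { float o[3]; float d[3]; };
struct AABB { float lo[3]; float hi[3]; };
struct Node { AABB box; int left, right; int firstTri, triCount; }; // leaf if triCount > 0

// Standard slab test: does the ray hit the box closer than tMax?
static bool hitAABB(const Ray& r, const AABB& b, float tMax)
{
    float t0 = 0.0f, t1 = tMax;
    for (int a = 0; a < 3; ++a)
    {
        float inv = 1.0f / r.d[a];
        float tn  = (b.lo[a] - r.o[a]) * inv;
        float tf  = (b.hi[a] - r.o[a]) * inv;
        if (inv < 0.0f) std::swap(tn, tf);
        t0 = std::max(t0, tn);
        t1 = std::min(t1, tf);
        if (t0 > t1) return false;
    }
    return true;
}

// Iterative BVH traversal. Each ray visits only the nodes its path overlaps
// (hierarchical culling instead of testing every cell along the ray), but
// incoherent rays in the same wave visit *different* nodes - that is the
// divergence. 'intersectTri' is a caller-supplied triangle test, e.g.
// Moller-Trumbore, returning the hit distance in t.
float traverseBVH(const std::vector<Node>& nodes, const Ray& ray,
                  const std::function<bool(const Ray&, int, float&)>& intersectTri)
{
    float closest = std::numeric_limits<float>::max();
    int stack[64];                 // fixed-size stack is fine for balanced trees
    int sp = 0;
    stack[sp++] = 0;               // root node index

    while (sp > 0)
    {
        const Node& n = nodes[stack[--sp]];
        if (!hitAABB(ray, n.box, closest))
            continue;              // whole subtree culled with one box test

        if (n.triCount > 0)        // leaf: test its few triangles
        {
            for (int i = 0; i < n.triCount; ++i)
            {
                float t;
                if (intersectTri(ray, n.firstTri + i, t) && t < closest)
                    closest = t;
            }
        }
        else                       // inner node: descend into both children
        {
            stack[sp++] = n.left;
            stack[sp++] = n.right;
        }
    }
    return closest;
}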
But those are just assumptions, and it does not matter how true they are, because they proved that RT in games is possible and practical regardless.
I'm aware there is no going back from HW RT, and I'm fine with that.
As a journalist, your point of view is of course to compare with the available RTX technology.
As a developer, mine is just different and less tied to the here and now; I'm more worried about limitations that might persist into the future, and about BC.
To me there is zero doubt that RT (or other techniques achieving the same image quality) would have come to games soon in any case, even without hardware acceleration. That shifts my weighting here too.
Crytek themselves mentioned in that interview how porting it to HW RT would dramatically speed it up and allow it to be much more dynamic.
Of course they will use RTX - it's faster. And this will make their new triangle tracing efforts basically a waste of time.
This is because they already had the interesting pieces before, like the ability to cone trace voxels for rough materials or GI.
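For anyone who hasn't seen it, a single cone trace through a prefiltered voxel volume looks roughly like the sketch below. This is a generic SVOGI-style illustration, not their actual code; 'sampleVoxel' stands in for whatever mip-mapped voxel texture the engine keeps, and the stored radiance is assumed not premultiplied by occlusion. The point is that one cheap march, reading coarser mips as the cone widens, replaces many rays.

#include <algorithm>
#include <cmath>
#include <functional>

struct Vec3 { float x, y, z; };
struct Vec4 { float r, g, b, a; };   // prefiltered radiance + occlusion

// Generic single-cone trace in the voxel cone tracing style: step along the
// cone axis, pick the mip whose voxel size matches the cone radius at that
// distance, and composite front to back until opaque. 'sampleVoxel' is a
// caller-supplied trilinear lookup into the mip-mapped voxel volume.
Vec4 coneTrace(const Vec3& origin, const Vec3& dir, float coneAngle,
               float voxelSize, float maxDist,
               const std::function<Vec4(const Vec3&, float)>& sampleVoxel)
{
    Vec4  acc  = { 0.0f, 0.0f, 0.0f, 0.0f };
    float dist = voxelSize;                      // start one voxel out to avoid self-hit
    float tanA = std::tan(coneAngle * 0.5f);

    while (dist < maxDist && acc.a < 0.99f)
    {
        float radius = std::max(voxelSize, dist * tanA);   // cone footprint here
        float mip    = std::log2(radius / voxelSize);      // matching mip level

        Vec3 p = { origin.x + dir.x * dist,
                   origin.y + dir.y * dist,
                   origin.z + dir.z * dist };
        Vec4 s = sampleVoxel(p, mip);

        // Front-to-back compositing (radiance assumed not premultiplied).
        float w = (1.0f - acc.a) * s.a;
        acc.r += s.r * w;
        acc.g += s.g * w;
        acc.b += s.b * w;
        acc.a += w;

        dist += radius;                          // step proportional to footprint
    }
    return acc;
}

Rougher materials just mean a wider coneAngle, so the trace reads coarser mips sooner and gets cheaper rather than more expensive.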
Will this demo tech ever make it into a game? Likely not - it's a high-end feature, and those who want max detail invest in RT GPUs anyway. Shit happens.
So the recognition for this demo is likely all they'll get for the work, and it deserves more than a single-line 'no thanks - RTX is faster' comment and done. Maybe I got triggered a bit too much here.
Also, there's not much to worry about in the PC space. Besides, the AMD HW RT going into consoles will also end up in AMD GPUs, probably even faster. Most games will be cross-platform.
There is always a need to worry about things you have no control over, if they affect your daily work, your long-term investments and decisions, etc.
No. The Crytek demo builds on years and years of compute work and RT-on-compute work. It's not first-generation. During the whole 'do consoles need RTRT hardware?' discussion, I've been linking to previous work in compute-based lighting. This latest demo is the state of the art, the best-possible-so-far in compute raytracing.
I don't know how long they have been working on this, but I guess they started before the RTX announcement.
But their approach naturally extends their voxel tech and tells us nothing about alternatives like BVHs, ray reordering, or alternative geometry. Personally I have become quite pessimistic about the latter, as you remember, but I have no interest in developing high-end features.
I believe it would work for high end and beat Crytek's flexibility and performance. Maybe it would work for mid-range too, but I won't try and nobody else will either. Getting HW RT right instead is the way to go, and actually things don't look bad here.
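Since I brought up ray reordering: the idea is just to sort a batch of rays by a coarse key (origin cell plus direction octant) before traversal, so that neighbouring rays in a wave walk similar nodes and touch similar memory. A minimal CPU-side sketch with made-up names, nothing more:

#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

struct Ray { float o[3]; float d[3]; };

// Coarse sort key: 3-bit direction octant plus a few bits of origin cell per
// axis. Rays that share a key start near each other and point roughly the
// same way, which is all the traversal cares about.
static uint32_t rayKey(const Ray& r, float cellSize)
{
    uint32_t key = 0;
    key |= (r.d[0] < 0.0f ? 1u : 0u) << 0;
    key |= (r.d[1] < 0.0f ? 1u : 0u) << 1;
    key |= (r.d[2] < 0.0f ? 1u : 0u) << 2;
    for (int a = 0; a < 3; ++a)
    {
        int cell = static_cast<int>(std::floor(r.o[a] / cellSize));
        key |= (static_cast<uint32_t>(cell) & 0xFFu) << (3 + 8 * a);
    }
    return key;
}

// Sort a batch of rays so neighbouring rays in the batch are coherent.
void reorderRays(std::vector<Ray>& rays, float cellSize)
{
    std::sort(rays.begin(), rays.end(),
              [cellSize](const Ray& a, const Ray& b)
              { return rayKey(a, cellSize) < rayKey(b, cellSize); });
}

In a real renderer you would precompute the keys, sort indices and scatter the results back so shading order is preserved, but the principle is the same.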
I know Dreams uses raster to draw hole-filling tiny cubes... they could have done much better than that.
Those are some crazy hypothetical numbers you're wielding.
Yeah, but don't nail me down on the exact numbers. Maybe I need to exaggerate a bit to illustrate the idea.
Additionally, Dreams is a special case for engine development because Sony have financed it for over a decade without MM having to worry about a clear product release. They've been allowed to experiment and prototype with virtually academia-like freedom, including a complete start-from-scratch for the existing game. Other devs don't have this luxury, save CIG. And if they did, gamers wouldn't see many games, because the devs would be constantly working on engines.
On top of the idealistic software solution, we also have a realistic one: achieving actual results with methods that, even if not hypothetically perfect, are 'good enough' and deliver without needing years of research.
Agreed - and doing the same, I regret I have to fund it myself. Also, working on tech meant having to stop working on games for me.
But none of this would justify building fixed-function HW just because devs have no time for research and progress. If that were the case, we would have just another, even larger issue to discuss...
In Battlefield V, only grass is excluded from RT and done through SSR; the rest of the scene is ray traced, even particles and smoke. Did you notice anything different?
Yes, initially BFV had very serious clipping issues before release - seen in work-in-progress videos.
This has been improved and bugfixed, but missing objects were still noticeable in videos of the release version. (I remember a whole car missing from reflections, for example, not that far from the camera.)
LOD is the only way to solve it, no matter how fast RTX is. We cannot rasterize the whole world of BFV at full detail, and we cannot trace it either.
It is all good if all future GPUs support LOD, but first-gen RTX just does not.
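To spell out what you can do on first-gen hardware: pick a discrete LOD per instance on the host, from distance or projected size, and only put that LOD's geometry into the acceleration structure each frame. A rough sketch with illustrative names and numbers, not tied to any particular API:

#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3   { float x, y, z; };
struct Object { Vec3 center; float radius; int lodCount; };  // a few prebuilt LOD meshes

// Host-side discrete LOD pick: roughly, each time the projected size halves,
// drop one LOD level. 'lodScale' tunes up to which size LOD 0 is still used.
int selectLod(const Object& obj, const Vec3& camPos, float lodScale)
{
    float dx = obj.center.x - camPos.x;
    float dy = obj.center.y - camPos.y;
    float dz = obj.center.z - camPos.z;
    float dist = std::sqrt(dx * dx + dy * dy + dz * dz);

    float size = obj.radius / std::max(dist, 1e-3f);          // approx. projected size
    int   lod  = static_cast<int>(std::floor(std::log2(lodScale / std::max(size, 1e-6f))));
    return std::min(std::max(lod, 0), obj.lodCount - 1);
}

// Per frame: choose one LOD per object and rebuild the top-level structure
// from those choices.
std::vector<int> selectLods(const std::vector<Object>& objects,
                            const Vec3& camPos, float lodScale)
{
    std::vector<int> lods;
    lods.reserve(objects.size());
    for (const Object& o : objects)
        lods.push_back(selectLod(o, camPos, lodScale));
    return lods;
}

That works, but it pops, one choice has to serve every ray type, and the traversal itself knows nothing about LOD - which is the limitation I mean.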