IHV Business strategies and consumer choice

Against the ongoing onslaught of nVidia fanboys who constantly infest every single thread about AMD - which was the reason this thread was created in the first place.


Do you know what this thread is about?
Not this. This was supposed to be a mature discussion about the marketplace. So far it's a fair bit of ranting and emotional language.

You're way too emotional.
Actually...

ATI TruForm was a brand by ATI (now AMD) for a SIP block capable of doing a graphics procedure called tessellation in computer hardware.

That was before Matrox Parhelia. But thanks for trying.
It's not a competition. If someone's wrong, you enlighten them and they are grateful.

Currently you're generating too much noise. Any more of this and I'll issue a thread ban. That goes for anyone getting all emotional including replies. If it ends up too much work for me, I'll kick this thread into the RSPC forum and you guys will be free to spend however many hours you want complaining about how sucky each other is without moderator intervention.
 
If RT is too expensive and the alternatives are also too expensive where does that leave us?
We need to keep working on more efficient solutions. That's not really the problem.
The potential problem I see is this: if we raise mainstream standards too high, and it turns out even after years that the methods are not affordable, it will be difficult to scale back down to lower standards.
This may sound unlikely, but it has already happened in other ways. Devs kept making their games bigger and bigger until they could no longer afford to make them. They barely manage, can't afford a single unsuccessful game, and thus avoid risk and innovation in game design. We might see a similar kind of stagnation with graphics, although I don't see or worry about that now.
I'm sure many people will be happy to get better games running with last generation rendering tech but that's not what this debate is about.
To me that's exactly what I'm talking about. I've become uncertain whether people still demand better visuals at a cost, or whether they would rather have games with inferior lighting but a stable image that are simply fun to play at a lower cost.

In the absence of that the default conclusion is IHV favoritism / hate.
I never considered this seriously, but maybe it's a thing and I should.
I believe brand preference mostly just reflects people's interests. Somebody who believes in RT will likely use NV GPUs and share that vision. Somebody who doesn't may pick another GPU and is likely to express doubts about the vision.
Because NV is the only IHV that has and advertises such a vision, AMD and Intel are not really involved in the conflict at all, and because Intel does not have much impact yet, it may look like an NV vs. AMD fanboy conflict for the wrong reasons.

No that doesn't fly any more. It's been 5 years that we've been hearing HWRT is too early and a waste of resources and something cheaper and better is right around the corner. How long do we need to wait for proof? 5 more years? 10?
There won't be a proof or a sudden change. We just keep improving our stuff, and ideally after some time we have efficient methods which scale to all our needs. HWRT will be part of that, and likely a big part.
So we won't have a winner, but we also won't have a reason to argue anymore.
Our current mistake is that we predict a winner from personal conviction and belief, and we try to defend it against alternatives which are also just assumptions, disbeliefs, etc.
It's tilting at windmills on both ends. Even calling it 'both' ends is already building windmills, and I should not do that. We all want the same thing, and to get there we will combine anything that works.

Well that's where we fundamentally disagree. You think the incremental progress is useless (even though nobody thought it would be remotely possible). I think it's amazing. We already have GI solutions that scale down to lower power HW - baked light maps. Dynamic GI is a high end feature and there's nothing wrong with that. With time it will trickle down.
I fail to connect this with my quote - surely there are some misunderstandings in the way.
I do disagree with saying dynamic GI at sufficient quality is a high-end feature, and I (and/or others) should be able to prove that after some time. That's really how I arrive at the assumption that an iGPU will be good enough for next gen and the mainstream.
I'm also very optimistic such a method can be used to accelerate a higher-quality high-end solution involving heavy HWRT. But I could be wrong, and maybe we end up using completely different methods for different HW.

However, one thing is important: we need a low-cost dynamic GI solution, so games can be designed to take advantage of the new option of dynamic lighting.
Currently that's not the case. RT enthusiasts only get improved visuals in games which were designed around 'static' content (a big reason I personally still refuse to upgrade).
Current RT GI (but also Lumen, etc.) is very laggy, so this limitation is not a big problem yet. But as lag is reduced with future improvements, we will notice and complain.

Right, RT has nothing to do with gameplay innovation for good or bad.
Maybe only because of the above - games are not yet designed for dynamic lighting. But once we can afford this on any platform, maybe dynamic lighting can add something to gameplay as well.
It won't be a game changer, and due to gradual progress we might not really notice, but there should be some reward for all the work, fuss and cost, I hope... : )


This is factually incorrect, between Intel being the first with ray coherence sorting and AMD doing lots of stuff with D3D12 advancements like GPU work graphs.
So yes, we definitely can play it both ways and say that AMD "fans" are just as toxic to any discussion here as any other "fans".
No, no. Compare objectively:
NV: First with HWRT, showing off tons of videos and research work, promising a future of awesome gfx to everybody. Intense Marketing, noticeable at least to the technically interested general public.
AMD: Only hardcore developers care about work graphs.
Intel: Modern RT HW, but 'inferior' AMD still beats them in final RT performance, which is all at least some people may notice - and NV had coherency sorting too just 3 days later anyway.

What I mean is: only NV shows such an investment in researching and marketing their vision to a noticeable degree. Neither Intel nor AMD do this. You have to agree after giving your arguments an appropriate weight.
NV has always invested much more in software development than the other IHVs. They deliver a message about innovation, always suited to their agenda.
Of course this generates a lot of criticism and doubt, which the other IHVs won't see directed at them in the same way.
So attempts to play it both ways rarely make sense, and only spur speculation that the true motivations are fanboyism, shilling, etc. That's noticeable really often here, from my perspective.

To me it looks like NV simply has no competitor which does similar things. They also have no competitor if we look at GPU market share.
Thus, how would it even be possible for other IHVs like AMD to 'hold back' NV's visions, for example? They simply can't. And AMD isn't to blame for console makers preferring 'inferior' AMD HW over 'superior' NV HW either, in case consoles are holding back PC awesomeness.
When I read such claims, it always looks like an NV fanboy riding along, stepping down from his high horse just to shit on competitors and point out how superior the green badge is.
But it's good entertainment and fun, which is the primary objective of this industry. \:D/
 
What I mean is: only NV shows such an investment in researching and marketing their vision to a noticeable degree. Neither Intel nor AMD do this. You have to agree after giving your arguments an appropriate weight.
NV has always invested much more in software development than the other IHVs. They deliver a message about innovation, always suited to their agenda.
Of course this generates a lot of criticism and doubt, which the other IHVs won't see directed at them in the same way.
So attempts to play it both ways rarely make sense, and only spur speculation that the true motivations are fanboyism, shilling, etc. That's noticeable really often here, from my perspective.
Both AMD and Intel promote their products and their s/w features just fine and aren't at all different to Nv in this (FSR3 and PresentMon are two latest examples here against DLSS 3.5). The amount of effort spent by them on this is about the same I'd say - Nv does spend more on s/w side but in CUDA/AI space which isn't related to their consumer marketing in any way.
So we can definitely have it both ways - stop pretending that there are only "Nv fans" on the internet or this very forum, especially when you have the same people coming into Nv threads over and over to say how frustrated they are that people talk about Nv things in Nv threads.

Thus, how would it even be possible for other IHVs like AMD to 'hold back' NV's visions, for example? They simply can't.
Sure they can, since games aren't PC exclusive and developers have to account for what other platforms are capable of. This does, did, and always will affect what PCs get.

And AMD isn't to blame for console makers preferring 'inferior' AMD HW over 'superior' NV HW either, in case consoles are holding back PC awesomeness.
AMD isn't responsible for other parties' choices, but being late and slow with RT support is squarely on them. Or do you think that Sony and MS would say no to RT h/w as capable as what Intel and Nv have?
 
It looks like NVIDIA solved this problem with AI denoising (Ray Reconstruction), as evident in their promotional materials for Alan Wake 2 path tracing.
Seeing is believing! Currently I'm a believer but will pass judgement once ray reconstruction reviews are out.
 
Interesting benchmarks of RT primary visibility vs. a compute shader rasterizing subpixel triangles.
Not sure how optimized that shader is, but the 3x difference is massive. It makes me believe that once HW traversal is available in consoles, devs with custom engines might switch to RT with classic LODs for primary visibility, at least for static geometry. Because if it's faster (and likely much more flexible when it comes to sampling), why wouldn't they?
 
We need to keep working on more efficient solutions. That's not really the problem.
The potential problem I see is this: if we raise mainstream standards too high, and it turns out even after years that the methods are not affordable, it will be difficult to scale back down to lower standards.
This may sound unlikely, but it has already happened in other ways. Devs kept making their games bigger and bigger until they could no longer afford to make them. They barely manage, can't afford a single unsuccessful game, and thus avoid risk and innovation in game design. We might see a similar kind of stagnation with graphics, although I don't see or worry about that now.

To me that's exactly what I'm talking about. I've become uncertain whether people still demand better visuals at a cost, or whether they would rather have games with inferior lighting but a stable image that are simply fun to play at a lower cost.

I don’t see why all of that isn’t possible today. Sure there may be some interesting tech that needs new hardware to be truly viable but there is a lot of general compute and bandwidth already at our disposal. There’s nothing stopping people from innovating (see Epic) new techniques that utilize that general compute horsepower. The presence of RT on the chip certainly isn’t a hindrance.

I never considered this seriously, but maybe it's a thing and I should.
I believe brand preference mostly just reflects people's interests. Somebody who believes in RT will likely use NV GPUs and share that vision. Somebody who doesn't may pick another GPU and is likely to express doubts about the vision.
Because NV is the only IHV that has and advertises such a vision, AMD and Intel are not really involved in the conflict at all, and because Intel does not have much impact yet, it may look like an NV vs. AMD fanboy conflict for the wrong reasons.

Yep, there’s no reason for RT to be an Nvidia vs AMD thing. I posted a while back a hope that once AMD had decent RT performance things would change but that hasn’t happened even though RDNA3 did just that. Partly because Nvidia kept moving the goalpost with shader reordering, ReSTIR etc to maintain their lead. One day RT will be as mundane as texture filtering though and nobody will care so maybe enjoy the “fun” while it lasts.

Our current mistake is that we predict a winner from personal conviction and belief, and we try to defend it against alternatives which are also just assumptions, disbeliefs, etc.

I’m not that concerned about predicting outcomes. Eventually software and hardware will normalize and RT will be old news. What I’m defending is the actual tangible progress in rendering tech in games that you can play right now. Especially because the touted alternatives are currently paper tigers. It’s just weird to me that people downplay something real while putting all their faith in yet unproven alternatives that may well suffer from the same pitfalls.

When Phantom Liberty and Alan Wake 2 drop in a few weeks the goalposts will probably move again.
 
Both AMD and Intel promote their products and their s/w features just fine and aren't at all different to Nv in this (FSR3 and PresentMon are two latest examples here against DLSS 3.5).
I have never heard of 'PresentMon', to answer your question. Is that something new from Intel? Maybe, like FSR and XeSS altogether, another response to NV's strategy of using software features to promote and sell chips?
Do you think FSR and XeSS would exist if there were no DLSS - software features formerly implemented by game devs?
Sure they can, since games aren't PC exclusive and developers have to account for what other platforms are capable of.
High-end settings often are PC exclusive, and that's where you can find your RT-on checkbox. I've heard devs have supported NV-exclusive PhysX before, and they also use NV-exclusive RT extensions for hit-point coherency or alpha testing. PT is currently NV exclusive too, and there is Cyberpunk, Alan Wake 2, and likely more to come.
There were lots of RT games even before AMD added any RT support. Obviously NV does not need feature parity with the other IHVs, and I doubt they want it or care.
It's just a bit tiring to constantly have to inform you about facts you already know and enjoy yourself!
Or do you think that Sony and MS would say no to RT h/w as capable as what Intel and Nv have?
Obviously they did say no when considering which other IHVs to partner with. And MS does not even have strict requirements on backwards compatibility, so they could easily have chosen NV if they wanted.
Maybe next time.

It looks like NVIDIA solved this problem with AI denoising (Ray Reconstruction), as evident in their promotional materials for Alan Wake 2 path tracing.
I saw a video, but no other materials. And in such videos we rarely get to see the failure cases, although those would be the most interesting.
Besides Ray Reconstruction, the Neural Radiance Cache might help even more with lag.
 
I don’t see why all of that isn’t possible today. Sure there may be some interesting tech that needs new hardware to be truly viable but there is a lot of general compute and bandwidth already at our disposal.
I can only speak for myself, but the problem is development time, not performance. The HW was already ready with the PS4.
This sucks, but it also means that even if HW progress were to stagnate completely, we would still see improvements for the next 10 or even 20 years.
There’s nothing stopping people from innovating (see Epic) new techniques that utilize that general compute horsepower. The presence of RT on the chip certainly isn’t a hindrance.
But the presence of RT HW reduces chip area available to general purpose cores.
I'm not saying there should be no RT HW, but APIs should be flexible enough not to prevent a LOD solution or other uses requiring dynamic topology.
Seeing so much progress on other fronts, it's really ridiculous that such fundamental limitations are not addressed, and I'm not the only one complaining about this.

API failure is what holds back RT the most by far, not AMD.

Especially because the touted alternatives are currently paper tigers. It’s just weird to me that people downplay something real while putting all their faith in yet unproven alternatives that may well suffer from the same pitfalls.
But that's only people like me - a tiny minority? Actually, I don't know anybody else who says compute GI can be more efficient than PT or Lumen, etc.

I also do not downplay RT or PT. It works, current results are impressive and better than I would have expected, and they keep improving. But they are also very expensive, and mentioning this isn't downplaying.

The reason for my rant is entirely the API issue. I have been planning to fix my GI limitations with PT for 15 years, and now that the HW is here I can't fucking use it.
I still think about compute alternatives to RT to get the missing details. Nobody in 2023 should need to do this.
When new features are announced, like DLSS5, neural wish wash (TM), or reordered coherence clusterfucks, I just get mad and ask 'why do they always add more useless crap to the pile of shit, instead of finally fixing it to be flexible enough?'
Sorry for the language, but I guess it isn't too hard to empathize. I can't join the praise, because I can't use RT, at least not efficiently.
 
Interesting benchmarks of RT primary visibility vs. a compute shader rasterizing subpixel triangles.
Not sure how optimized that shader is, but the 3x difference is massive. It makes me believe that once HW traversal is available in consoles, devs with custom engines might switch to RT with classic LODs for primary visibility, at least for static geometry. Because if it's faster (and likely much more flexible when it comes to sampling), why wouldn't they?

They admit their CS rasterizer has room for improvement so it’s probably not a great representation. A comparison to Nanite’s rasterizer would be more interesting. I’m still baffled as to why there’s no simple RT benchmark yet similar to Tessmark that tests raw performance in a stressful workload.
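For anyone wondering what such a compute rasterizer typically does with subpixel triangles: a common trick (Nanite uses a variant of it) is to pack depth and a triangle ID into one 64-bit value and resolve visibility with an atomic min per pixel. Below is a minimal CPU-side sketch of that idea - illustrative only, with made-up names, and not the shader from the benchmark:

[code]
#include <atomic>
#include <cstdint>
#include <cstring>
#include <vector>

// Visibility entry: depth in the high 32 bits, triangle ID in the low 32 bits.
// An atomic min on the packed value then keeps the nearest triangle per pixel.
static uint64_t Pack(float depth, uint32_t triangleId)
{
    uint32_t depthBits;
    std::memcpy(&depthBits, &depth, sizeof(depthBits)); // positive floats compare like uints
    return (uint64_t(depthBits) << 32) | triangleId;
}

struct VisibilityBuffer
{
    std::vector<std::atomic<uint64_t>> pixels;

    explicit VisibilityBuffer(size_t count) : pixels(count)
    {
        for (auto& p : pixels) p.store(~0ull); // 'far plane' / no triangle written yet
    }

    // CPU stand-in for a per-pixel 64-bit atomic min in a compute shader:
    // keep the smaller (nearer) packed value via a compare-exchange loop.
    void Write(size_t pixel, float depth, uint32_t triangleId)
    {
        const uint64_t candidate = Pack(depth, triangleId);
        uint64_t current = pixels[pixel].load(std::memory_order_relaxed);
        while (candidate < current &&
               !pixels[pixel].compare_exchange_weak(current, candidate,
                                                    std::memory_order_relaxed))
        {
            // 'current' is refreshed on failure; retry while we are still nearer.
        }
    }
};
[/code]

On the GPU this loop collapses into a single 64-bit atomic min per covered pixel, and that per-pixel atomic traffic is essentially what competes against the fixed-function rasterizer or the RT path in comparisons like the one above.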
 
But the presence of RT HW reduces chip area available to general purpose cores.

We don’t know that for sure and it certainly doesn’t hinder development or experimentation. You don’t need a specific number of flops for that.

I'm not saying there should be no RT HW, but APIs should be flexible enough not to prevent a LOD solution or other uses requiring dynamic topology. Seeing so much progress on other fronts, it's really ridiculous that such fundamental limitations are not addressed, and I'm not the only one complaining about this.

I get the frustration from an engineering perspective if you have a vision in mind that needs specific hardware support in order to hit performance targets. We’ve hashed all of this out before but it’s not at all clear to me that the things you’re asking for (hardware LOD support, direct BVH access etc) are even practical on current tech. Either way it’ll all come in time. For me I just want to see more of the tech we already have utilized in games. It’s shaping up to be a Nanite vs PT generation but hopefully more engines enter the fray with time.

API failure is what holds back RT the most by far, not AMD.

There will always be limitations to any software or hardware and progress will always be incremental. Otherwise we would still be using DirectX 1.0. IHV hardware performance and corporate tactics are wholly independent of that and will be an issue no matter how good the APIs are. Also there's absolutely no guarantee that developers would have made good use of a more flexible DXR 1.0 API. We've been down that road.
 
The 4090 has nearly 90 TFLOPs. I don't think that GPUs lack general purpose cores. And yet there are games in which the 4090 is only 50% to 60% faster than a 6900XT with 1/4 of the compute performance. With engines like UE5 and games like Starfield the bottleneck is not compute performance.
 
But the presence of RT HW reduces chip area available to general purpose cores.
Compute alone is expensive, very expensive. Adding slightly more compute resources instead of RT resources won't improve the situation at all. You need significantly more compute to make a difference. RT in this case is more efficient.

Here we have an example: a game called "Satisfactory", featuring Software Lumen without Nanite. The game allows you to toggle Software Lumen on and off and examine the performance. The problem is, with Lumen activated, fps takes a huge hit: at 4K the 4090 dropped from 150 fps without Lumen to just 44 fps with Lumen, and a 4080/7900XTX dropped from 108/118 fps to just 28 fps.

In effect, for Ada/RDNA3 GPUs, Lumen off is at least 3x faster than with Software Lumen on. For Ampere/RDNA2 the picture is even worse: Lumen off is 4x faster on the 3080 Ti and 5x faster on a 6900XT.

https://en.gamegpu.com/rpg/ролевые/satisfactory-test-gpu-cpu
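To translate those fps numbers into per-frame cost (just arithmetic on the quoted figures, nothing more):

[code]
#include <cstdio>

// Convert the quoted Satisfactory 4K numbers from fps to per-frame cost,
// which is the more useful unit for judging what Software Lumen adds.
int main()
{
    const struct { const char* gpu; double fpsOff, fpsOn; } rows[] = {
        { "RTX 4090",   150.0, 44.0 },
        { "RTX 4080",   108.0, 28.0 },
        { "RX 7900XTX", 118.0, 28.0 },
    };
    for (const auto& r : rows)
    {
        const double msOff = 1000.0 / r.fpsOff;
        const double msOn  = 1000.0 / r.fpsOn;
        // e.g. 4090: 6.7 ms -> 22.7 ms, so Software Lumen costs roughly 16 ms per frame here.
        std::printf("%-11s %5.1f ms -> %5.1f ms (Lumen adds ~%4.1f ms)\n",
                    r.gpu, msOff, msOn, msOn - msOff);
    }
}
[/code]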
 
We don’t know that for sure and it certainly doesn’t hinder development or experimentation. You don’t need a specific number of flops for that.
We do know for sure, because RT cores require a chip area greater than zero. And while I will have enough flops left regardless, people still need to pay for that chip area.
There will always be discussion about how chip area is spent. And if we want smaller and cheaper chips, it's even more justified.
For example, take the Steam Deck. Its RT performance may be just too weak to be really useful. Is it worth adding those RT cores then or not?
I would say it depends on how much area those RT cores take, and perhaps a smaller, less powerful AMD solution is a better compromise than NV's advanced solution.
The same applies to any hardware which targets a maximized mainstream audience, to sell more games.
It's that simple, but said just to defend my argument. I'm personally not worried about HWRT reducing general compute power too much. I'm only worried about access to HWRT being too simplified, making it useless under certain circumstances.
We’ve hashed all of this out before but it’s not at all clear to me that the things you’re asking for (hardware LOD support, direct BVH access etc)
Seems we have not hashed it out often enough.
I do not want 'hardware LOD support'. Please don't. Let us solve it in software, so we can do what we need instead of being forced to adopt another inflexible off-the-shelf hack that solves the problem only partially. I'm actually afraid of such a potential IHV LOD solution.
I want direct BVH access, yes. But this has nothing to do with hardware.
There will always be limitations to any software or hardware and progress will always be incremental. Otherwise we would still be using DirectX 1.0. IHV hardware performance and corporate tactics are wholly independent of that and will be an issue no matter how good the APIs are.
Agreed, but if we're missing something, we can't just sit there and wait until we eventually get it by random luck, or not. We have to complain and say what we're missing and why it's needed.
That's what I do. It's just that my language gets worse and my tone suffers, which happens because people still confuse my constructive request for general RT improvements with my otherwise doubtful and critical personal opinion about RT in general.
Also there’s absolutely no guarantee that developers would have make good use of a more flexible DXR 1.0 api. We’ve been down that road.
We have no results because we cannot do the related research or development at all. We have not been down that road yet, because it's not possible.
To pull a comparison from history, I could point to programmable shaders and later general compute, and what they have enabled.
But that's a really bad comparison, since my request is not about new functionality requiring more flexible hardware.
In fact we have never had a similar case before, because rasterization and compute don't have any data structures we might have wanted to access but couldn't.

I see two options for what might happen:
1. We get the required access to BVH data structures.
2. We don't get it, graphics engines become entirely implemented by IHVs at driver level (including hardware LOD), and game devs only work on gameplay code.
My bet is on option 1, and after 5 years it would be about time to expect some movement.
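To make the 'direct BVH access' request a bit more concrete, here is a rough and purely hypothetical sketch of the kind of node-level access I mean - nothing like this exists in DXR or Vulkan RT today, and all the names are invented:

[code]
#include <cstdint>

// Hypothetical, application-visible BVH node layout. Today the driver's BVH
// is an opaque blob; current APIs offer no portable way to read or edit nodes.
struct BvhNode
{
    float    aabbMin[3];
    float    aabbMax[3];
    uint32_t leftChild;      // index into the node array, or a leaf marker
    uint32_t rightChild;
    uint32_t geometryOffset; // for leaves: first triangle / cluster
    uint32_t lodLevel;       // which detail level this subtree represents
};

// Hypothetical: swap a subtree for a pre-built subtree of another LOD by
// re-pointing one child index - no full BVH rebuild or refit required.
inline void SwapSubtreeLod(BvhNode* nodes, uint32_t parent,
                           bool rightSide, uint32_t newSubtreeRoot)
{
    if (rightSide) nodes[parent].rightChild = newSubtreeRoot;
    else           nodes[parent].leftChild  = newSubtreeRoot;
    // The parent's AABB stays conservative as long as the coarser subtree's
    // bounds are contained in the original ones (typical for cluster LODs).
}
[/code]

Whether such a layout could coexist with the vendors' compressed, hardware-specific node formats is exactly the open question, which is why this stays a sketch and not a proposal.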

The 4090 has nearly 90 TFLOPs. I don't think that GPUs lack general purpose cores.
I totally agree. I would even say a 4090 has too many flops, which might explain why you can't find enough food for it and are now dissatisfied with playing more Lumen than PT games.
There is not much you can do about it, other than buying a second 4090 to fund more PT versions of current games. \:p/

Well, sorry. It seems we both can solve our problems only with patience. But let's keep the rant up until we get what we want, to accelerate things a bit...
 
Compute alone is expensive, very expensive. Adding slightly more compute resources instead of RT resources won't improve the situation at all. You need significantly more compute to make a difference. RT in this case is more efficient.
This really depends on the workload, but I agree HWRT is more efficient than compute RT, and I also agree as far as my personal vision is concerned.
Here we have an example: a game called "Satisfactory", featuring Software Lumen without Nanite. The game allows you to toggle Software Lumen on and off and examine the performance. The problem is, with Lumen activated, fps takes a huge hit: at 4K the 4090 dropped from 150 fps without Lumen to just 44 fps with Lumen, and a 4080/7900XTX dropped from 108/118 fps to just 28 fps.

In effect, for Ada/RDNA3 GPUs, Lumen off is at least 3x faster than with Software Lumen on. For Ampere/RDNA2 the picture is even worse: Lumen off is 4x faster on the 3080 Ti and 5x faster on a 6900XT.
Reminds me of first RTX impressions. Lumen may currently be the only software GI solution available for comparisons, but I do not accept it as a performance indicator.
The power of software is that we can work out efficient algorithms to improve time complexity, but I can't see that in Lumen. It serves my agenda only insofar as it proves that HQ software solutions are at least possible and practical.
 
I totally agree. I would even say a 4090 has too many flops, which might explain why you can't find enough food for it and are now dissatisfied with playing more Lumen than PT games.
There is not much you can do about it, other than buying a second 4090 to fund more PT versions of current games. \:p/

Well, sorry. It seems we both can solve our problems only with patience. But let's keep the rant up until we get what we want, to accelerate things a bit...
The 4090 is exactly what you want. It has as much FP32 compute performance as AMD's MI250X, or 2x the MI210. UE5 doesn't use any kind of HW RT with software Lumen.
How much faster is the 4090 with 90 TFLOPs over a 6900XT with 23 TFLOPs in UE5? 60%.

Nearly 55 TFLOPs of compute performance - a whole MI210 card - goes unused by UE5. Do you still think pure software solutions are the future?
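For reference, the arithmetic behind that figure, taking the quoted numbers at face value and assuming performance would scale linearly with TFLOPs (which the reply below disputes):

[code]
#include <cstdio>

// Compare the theoretical FP32 ratio against the observed UE5 scaling,
// using the numbers quoted above.
int main()
{
    const double tflops4090   = 90.0;
    const double tflops6900xt = 23.0;
    const double observedLead = 1.6;   // "60% faster"

    const double theoreticalLead = tflops4090 / tflops6900xt;     // ~3.9x
    const double effectiveTflops = tflops6900xt * observedLead;   // ~37 TFLOPs of 6900XT-equivalent work
    const double idleTflops      = tflops4090 - effectiveTflops;  // ~53 TFLOPs not showing up as fps

    std::printf("theoretical %.1fx, observed %.1fx, ~%.0f TFLOPs not translating into fps\n",
                theoreticalLead, observedLead, idleTflops);
}
[/code]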
 
I think this is the thread where we discussed software lighting solutions and whether HWRT was necessary: https://forum.beyond3d.com/threads/...lised-traced-and-everything-else-spawn.60873/

The problem with all those promising technologies isn't that they didn't materialise. Demos were great, but application in games was limited. Lumen may be the first working solution built around the various principles explored over the past 10 years of compute.
 
The 4090 is exactly what you want. It has as much FP32 compute performance as AMD's MI250X, or 2x the MI210. UE5 doesn't use any kind of HW RT with software Lumen.
How much faster is the 4090 with 90 TFLOPs over a 6900XT with 23 TFLOPs in UE5? 60%.
Sadly such estimates based on raw TF numbers rarely work. You would also need to factor in bandwidth and how 1 AMD TF relates to 1 NV TF.
I'll give a (dated) example from my personal experience on the latter point:
GTX 670: 2.6 tf, 192 GB/s
R9 280X: 4 tf, 288 GB/s
Back then I worked on Fermi/Kepler GPUs for my GI stuff. It's a big compute project, so the profiling output covered a wide range of shaders. And I had optimized them individually for both architectures.
Then I wanted to see how it might perform on consoles, so I got a 280X, which was the available GCN card at the time.
It was 2 times faster out of the box, for the same money. No outliers - a pretty consistent ratio across all shaders. That was the day I became an 'AMD fan'.
After optimizing for the architecture, AMD was a whopping 5 times faster. This was in OpenCL. In OpenGL NV was really bad, and the difference was twice as large. Later with Vulkan, performance was better than CL for both, but the ratio was again 5.
That cheap midrange AMD GPU also beat a Titan GPU of the time by a factor of 2, although the Titan was slightly downclocked to work with Apple PSUs.

Those are unexpected results, no? But it's a fair comparison, as I specifically optimized for each architecture (which was more work for AMD, but also gave greater rewards).
The last comparison I did was Fury X vs. GTX 1070. AMD still had the lead when normalized by teraflops, but only by 10%. I was pretty happy to see NV go back from a power-efficient mobile architecture to high-performance stuff.

Teraflops numbers are all we have to estimate compute performance, but sadly they don't tell us much, especially now with dual issue, parallel integer/floating-point units, etc.
Looking just at synthetic and game benchmarks, I learn almost nothing about modern GPUs' performance. If I want to know how powerful they are (or how practical RT really is, for example), I need to buy GPUs from both vendors and spend a lot of time with each. Sadly I just work on tools and have no reason to do so yet.
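For what it's worth, the paper number itself is only: peak FP32 TFLOPs = ALU count x 2 ops per FMA x clock, and dual issue simply doubles the ALU count on paper. A tiny sketch using rough, publicly quoted shader counts and boost clocks (treat the exact clocks as approximate):

[code]
#include <cstdio>

// Peak FP32 TFLOPs = ALU count * 2 ops (FMA) * clock in GHz / 1000.
// Dual issue (RDNA3) doubles the ALU count on paper, which is one reason raw
// TFLOPs comparisons across architectures say so little.
static double PeakTflops(double alus, double ghz) { return alus * 2.0 * ghz / 1000.0; }

int main()
{
    std::printf("6900XT : %5.1f TF\n", PeakTflops( 5120, 2.25));   // ~23 TF
    std::printf("4090   : %5.1f TF\n", PeakTflops(16384, 2.75));   // ~90 TF
    std::printf("7900XTX: %5.1f TF (dual issue) vs %5.1f TF (single)\n",
                PeakTflops(12288, 2.50), PeakTflops(6144, 2.50));  // ~61 vs ~31 TF
}
[/code]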

From the end-consumer perspective it's kind of worse, since they don't have this option. But it's also better, because game benchmarks tell them what matters to them.
I see why you are disappointed with UE5 performance, but I can't help trying to explain why the results are what they are. Bandwidth might be a big factor. Another problem people often have is that it can be hard to saturate big GPUs. But that's just blind guessing. And of course the bottleneck could be the CPU as well. Remembering what devs said about Frostbite vs. UE during the PS4 era, I also don't expect UE to be the fastest engine out there, and I hope for ongoing competition to limit Epic's current dominance.

I would look at it this way: your FPS is presumably high enough and fluid? So even if the GPU were faster at it, you would not really notice the improvement anyway.
Currently, maxing out a 4090 is surely not a priority for the industry. It may take some years until it can flex its muscles beyond PT.
 
I have never heard of 'PresentMon', to answer your question. Is that something new from Intel?
And most people never heard about DLSS 3 working on Ampere GPUs either.

Do you think FSR and XeSS would exist if there were no DLSS - software features formerly implemented by game devs?
No, I think that without DLSS we probably wouldn't have either FSR or XeSS since both were quite obvious answers to the former. But that doesn't mean that we wouldn't have something else in their place.

High-end settings often are PC exclusive, and that's where you can find your RT-on checkbox.
Which in itself is already a limitation, because if consoles had gotten a potent RT h/w option then RT wouldn't be a "checkbox" - it would just be there, always, kind of like pixel shaders are now.

Obviously they did say no when considering which other IHVs to partner with. And MS does not even have strict requirements on backwards compatibility, so they could easily have chosen NV if they wanted.
Maybe next time.
All console makers have b/c requirements these days; MS is just better at abstracting the h/w and emulating old devices on new h/w than the rest of the bunch.
The choice in consoles is always cost based. If you're arguing that good RT h/w would cost more, let me direct you to Intel GPUs, which are selling for peanuts as we speak.
 