Next Generation Hardware Speculation with a Technical Spin [pre E3 2019]

I've watched most of them yes. But Minecraft didn't use very good lighting in the first place, so the RT in it is, of course, very dramatic compared to the lighting that existed in that game.

What people should be doing is comparing RT to the best lighting used in games. Not comparing RT to bad lighting used in games.

How does RT compare to the best lighting we've seen in recent games? Does it offer a dramatic and easily seen improvement? Especially when used in an actual game and not just a tailored RT demo. I and most people would argue that it doesn't. At least currently. Things will improve over time of course.

[edit] One caveat. If the next generation of console has significantly more performant RT than Nvidia's RTX 2080 Ti, that might be a game changer. But who here actually thinks that will happen?

Regards,
SB
Sony’s method of choice seems to involve photon mapping and kd trees rather than BVH. This comes from their patent history and hiring of a former Caustic/ImgTec employee. It will be interesting to see how they compare.
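For context on why the choice of structure matters: photon mapping stores lighting as a point cloud, and the hot operation at shade time is a radius gather around the shading point, which kd-trees answer efficiently, while a BVH is organised around ray-box and ray-triangle tests. A toy sketch of that gather step, purely illustrative and not taken from any Sony patent:

```cpp
// Toy photon-map gather: an in-place 3D kd-tree answering "which photons
// lie within radius r of point x". Illustrative only.
#include <algorithm>
#include <vector>

struct Photon { float pos[3]; float power[3]; };

// Median-split the photon array in place, cycling the split axis x->y->z.
// Call as: build(photons, 0, (int)photons.size(), 0);
void build(std::vector<Photon>& p, int lo, int hi, int axis) {
    if (hi - lo <= 1) return;
    int mid = (lo + hi) / 2;
    std::nth_element(p.begin() + lo, p.begin() + mid, p.begin() + hi,
        [axis](const Photon& a, const Photon& b) { return a.pos[axis] < b.pos[axis]; });
    build(p, lo, mid, (axis + 1) % 3);
    build(p, mid + 1, hi, (axis + 1) % 3);
}

// Collect indices of photons within r of x. Descend the near side of each
// split; only cross the plane when the query sphere straddles it.
// Call as: gather(photons, 0, (int)photons.size(), 0, x, r, hits);
void gather(const std::vector<Photon>& p, int lo, int hi, int axis,
            const float x[3], float r, std::vector<int>& out) {
    if (lo >= hi) return;
    int mid = (lo + hi) / 2;
    float d2 = 0;
    for (int i = 0; i < 3; ++i) {
        float d = p[mid].pos[i] - x[i];
        d2 += d * d;
    }
    if (d2 <= r * r) out.push_back(mid);
    float plane = x[axis] - p[mid].pos[axis];
    int next = (axis + 1) % 3;
    if (plane <= 0) {
        gather(p, lo, mid, next, x, r, out);                        // near side
        if (plane * plane <= r * r) gather(p, mid + 1, hi, next, x, r, out);
    } else {
        gather(p, mid + 1, hi, next, x, r, out);                    // near side
        if (plane * plane <= r * r) gather(p, lo, mid, next, x, r, out);
    }
}
```

The point of contrast with a BVH is that the hot loop is a sphere-vs-point query rather than a ray-vs-box test, which is presumably why a photon-mapping approach would favour kd-trees.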
 
[edit] One caveat. If the next generation of console has significantly more performant RT than Nvidia's RTX 2080 Ti, that might be a game changer. But who here actually thinks that will happen?

I don't think brute forcing (PC/RTX) is the goal of console hardware designs, even with RT. There are many ways to skin a cat without brutally tearing it apart. IMHO, we will see more game developers within the console space pushing for smartly implemented RT, which of course will push PC games along further. So if Sony or Microsoft have some type of RT-related logic in the hardware, you best believe it's going to be used more often than not.
 
I don't think brute forcing (PC/RTX) is the goal of console hardware designs, even with RT.
What exactly is brute forced about an entire API and years of hardware design and specialised hw units? That actually sounds pretty well thought out and "specialised".

Running Quake 2 RTX on a GTX 1080Ti? Well, that feels much more brute force if you look at the frame metrics.
 
What exactly is brute forced about an entire API and years of hardware design and specialised hw units? That actually sounds pretty well thought out and "specialised".

Running Quake 2 RTX on a GTX 1080Ti? Well, that feels much more brute force if you look at the frame metrics.

Brute force meaning it isn't optimized for a particular spec'd PC configuration. PC GPUs and CPUs are still at the mercy of consumers' configurations (thousands and thousands of them), which can easily bottleneck the beefiest GPUs and CPUs on the market. And the majority of PC gamers tend to purchase upgrades (i.e., CPU, GPU, SSD, memory, PSU, etc.) in pieces over time… which makes determining a mid-range target PC spec even harder to nail down. I'm simply saying, an enclosed single-spec console will offer console developers smarter, more efficient ways of targeting RT, rather than relying solely upon the console's hardware to brute force its way through (not going to happen).
 
The 'DF Direct' had an interesting snippet they'd heard: the next Xbox's SSD has two modes, a normal SSD mode and a low-level performance mode that's more akin to Sony's PS5 solution.

It's around 9 mins in where they mention this. I wonder, if the low-level mode is so much faster, why there is even a conventional SSD mode.
 
It's around 9 mins in where they mention this. I wonder, if the low-level mode is so much faster, why there is even a conventional SSD mode.

My guess is that the faster, lower level mode requires some degree of additional consideration or work - even if that's just toggling a few API settings and writing some manual access queues (or something). Given how important the ties to PC are, MS probably want to allow for a straight transfer of whatever works for current console and whatever works on the PC to next gen - or as close to that as possible.
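To make that speculation concrete, the developer-facing split might look something like the sketch below. Everything in the "low-level" half is invented for illustration; only the conventional half is a real API:

```cpp
// Speculative illustration only: nothing in the "low-level" half below is a
// real Xbox or DirectX interface; those names are invented for this sketch.
#include <cstddef>
#include <cstdint>
#include <cstdio>

// Mode 1: conventional SSD mode -- the familiar buffered file API, which
// ports to a Windows build of the same game unchanged.
void load_conventional(const char* path, void* dst, std::size_t bytes) {
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return;
    std::fread(dst, 1, bytes, f);
    std::fclose(f);
}

// Mode 2: hypothetical low-level mode -- the title submits batched extent
// reads to a queue of its own, bypassing the OS file-system and cache layers.
struct RawExtent { std::uint64_t offset; std::uint32_t length; void* dst; };

struct RawQueue {  // invented stand-in for whatever the hardware exposes
    bool submit(const RawExtent* e, int count) { (void)e; (void)count; return true; }  // stub
    void wait_all() {}  // stub: fence until all queued reads complete
};

void load_low_level(RawQueue& q, const RawExtent* extents, int count) {
    // The app, not the OS, decides ordering, batching, and priority here.
    // That control is the speculated source of the speed-up -- and also the
    // extra work the conventional mode lets you skip.
    q.submit(extents, count);
    q.wait_all();
}
```

If that's roughly right, keeping the conventional mode around is less a mystery and more the default: it's the path that already works everywhere.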
 
There are also those rumours around MS using the SSD as a large cache for an HDD. The performance mode could give you more control over where data sits?
 
My guess is that the faster, lower level mode requires some degree of additional consideration or work - even if that's just toggling a few API settings and writing some manual access queues (or something). Given how important the ties to PC are, MS probably want to allow for a straight transfer of whatever works for current console and whatever works on the PC to next gen - or as close to that as possible.
Yeah, it could be more about tweaking I/O performance on Windows ports of Xbox games. The concept of I/O prioritisation already exists in the kernel of all three platforms.
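For reference, per-handle I/O priority hints have been in the Windows kernel since Vista, and that's the sort of knob a port could tune. A minimal sketch using the real SetFileInformationByHandle API (the file name is made up; whether Xbox titles use anything like this is speculation):

```cpp
// Lower the I/O priority of reads on one file handle (Windows, Vista+).
#include <windows.h>
#include <cstdio>

int main() {
    HANDLE h = CreateFileA("streaming_assets.pak", GENERIC_READ,
                           FILE_SHARE_READ, nullptr, OPEN_EXISTING,
                           FILE_ATTRIBUTE_NORMAL, nullptr);
    if (h == INVALID_HANDLE_VALUE) return 1;

    // Hint the kernel that reads on this handle are background work, so
    // they yield to latency-sensitive I/O issued elsewhere.
    FILE_IO_PRIORITY_HINT_INFO hint = {};
    hint.PriorityHint = IoPriorityHintLow;
    if (!SetFileInformationByHandle(h, FileIoPriorityHintInfo,
                                    &hint, sizeof(hint))) {
        std::printf("priority hint not applied: %lu\n", GetLastError());
    }
    CloseHandle(h);
    return 0;
}
```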
 
Yeah, it could be more about tweaking I/O performance on Windows ports of Xbox games. The concept of I/O prioritisation already exists in the kernel of all three platforms.

Not mentioned is that it could also be considered a "fall back" mode if something goes wrong with the faster mode - like some edge case that stripping away conventional SSD functions might cause.

While a console doesn't necessarily need all the things that a full-blown OS would need from a storage device, especially when focused on games, it might be handy to have some of those things available to apps when not running games.

It makes me think of the PC space, where drive firmware is generally focused on and optimized for certain market segments, making it less than optimal for other segments.

Regards,
SB
 
My guess is that the faster, lower level mode requires some degree of additional consideration or work - even if that's just toggling a few API settings and writing some manual access queues (or something). Given how important the ties to PC are, MS probably want to allow for a straight transfer of whatever works for current console and whatever works on the PC to next gen - or as close to that as possible.
Possibly linked to the smart-install type technology. But it does sound like a DX11/DX12 type of scenario.
 
Given how important the ties to PC are, MS probably want to allow for a straight transfer of whatever works for current console and whatever works on the PC to next gen - or as close to that as possible.
Very true.
MS would either also make the low-level API available on PC (e.g., a DX I/O API), or make it fall back to standard access automatically when running on PC.
I guess it wouldn't be a huge deal for devs to check whether low-level access is available, though.
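Something like this trivial pattern, presumably - with the caveat that the probe function is invented, since no such DirectX I/O capability query existed at the time:

```cpp
// Sketch of the "check, then fall back" pattern described above.
#include <cstdio>

bool query_low_level_io() { return false; }  // hypothetical probe, stubbed out

void load_assets() {
    if (query_low_level_io()) {
        std::puts("fast path: raw queued reads straight off the SSD");
    } else {
        std::puts("portable path: standard buffered file I/O");
    }
}
```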
 
Very true.
MS would either also make the low-level API available on PC (e.g., a DX I/O API), or make it fall back to standard access automatically when running on PC.
I guess it wouldn't be a huge deal for devs to check whether low-level access is available, though.
It would be interesting to have them make it available on PC as well.
 
That it's currently a buzzword more than something most people will actually notice unless someone goes through and specifically points it out to them.

Most people don't even notice the RT in Metro: Exodus, for example.

Just like 720p vs. lower than 720p in the PS3/X360 days. Most people never noticed that COD was less than 720p on consoles until people started talking about it. And even then they probably didn't notice but just took someone's word for it.

This doesn't mean that developers won't experiment with RT if they have the development funds, desire, and time available to do so. At the same time there will likely be plenty of developers that don't bother unless the cost to implement is so insignificant that they don't have to budget for it.

Just look at the rather anemic support for RTX as to how most developers view the cost/benefit to implementing some form of RT. If both consoles support RT, they will do far more to accelerate RT in games than RTX, but even then, don't expect all or even potentially most developers to bother with RT for quite a few years, maybe not even until the gen after next gen, unless it becomes absolutely trivial to implement.

There's a lot of interest in RT currently. But there aren't a lot of developers that can afford to focus on leveraging RT.

And when it comes to your average consumer...it won't matter if they do or don't. THAT was the point.

Regards,
SB
As you say, most people can't tell the differences for themselves but since when has that stopped them from caring about them? Most people buy into the hype. There's a reason Cerny announced RT support for the PS5 already.

The number of PC games that currently support RT is obviously low, since only the highest-end hardware can handle it well, that hardware was only released a few months ago, and only one hardware vendor supports it. RT is part of DX now; it's a standard. It's not going away. Once consoles adopt it you'll also see greater adoption by devs, as happened with DX11.

In terms of R&D cost, it's no different than any other high end rendering feature. AAA devs will be fine and for everyone else there's middleware like UE4 and Unity.

Yes, RT is definitely here to stay unless it hits a dead end WRT performance scaling. But it's unlikely to supplant or replace conventional rendering for the foreseeable future. This next generation of consoles will see some developers experimenting with it to some degree.

If you were to ask your average gamer whether Metro: Exodus with RT on has better lighting than Horizon, Uncharted 4, or Days Gone...I'm not sure Metro: Exodus would win. If they did not already know that Metro: Exodus has RT or even what RT is, there's a good chance they won't pick Metro: Exodus.

RT isn't yet at a point where it is clearly noticeable versus the best implementation of current lighting technology for most people. People who know what to look for will notice, for everyone else, RT just isn't that huge of an upgrade in most cases.

Regards,
SB
The average consumers don't drive the narrative, they follow it. Have ND claim that RT is the best thing ever and they'll eat it up.

Also, current RT implementations are pretty rushed and surface-level; they can hardly be used as representative of what we can expect from titles years away in terms of performance, though it's impressive they can already run at 1440p@60fps.

I am fearful to try RT, because once you try it maybe you don't want to look back. That's why I am also fearful to try a 120 or 144 Hz monitor, because then 60 fps will be the new 30 fps.
Most games are 30fps, did that stop you from trying 60fps titles?
 
As you say, most people can't tell the differences for themselves but since when has that stopped them from caring about them? Most people buy into the hype. There's a reason Cerny announced RT support for the PS5 already.
Most casuals can’t tell the difference between ultra and high.

But I think people will be able to tell the difference with RT depending on how it’s used.
 
Like dx9 cap bits all over again? :p

I jest.

Trying to actually figure out how many DX12 has...

First up is what's required at each level, taken from Wikipedia [ https://en.wikipedia.org/wiki/Feature_levels_in_Direct3D#Direct3D_12 ]. It's still confusing how there can be a DX 12.1 on WDDM 2.0, and whether that is more or less than DX 12.1 on WDDM 2.1-2.5.

DX 12.0 on WDDM 2.0: Resource Binding Tier 2, Tiled Resources Tier 2 (Texture2D), Typed UAV Loads (additional formats)
DX 12.0 on WDDM 2.1-2.5: adds Shader Model 6.0, DXIL
DX 12.1 on WDDM 2.1-2.5: adds Shader Model 6.0, DXIL
DX 12.1 on WDDM 2.0: adds Conservative Rasterization Tier 1, Rasterizer Ordered Views

Optional Levels (one entry per tier; a query sketch follows the list):
  1. Resource binding Tier 1
  2. Resource binding Tier 2
  3. Resource binding Tier 3
  4. Tiled resources Tier 1
  5. Tiled resources Tier 2
  6. Tiled resources Tier 3
  7. Tiled resources Tier 4
  8. Conservative rasterization Tier 1
  9. Conservative rasterization Tier 2
  10. Conservative rasterization Tier 3
  11. Stencil reference value from pixel shader
  12. Rasterizer ordered views
  13. Typed UAV loads for additional formats
  14. UMA/hUMA support
  15. View instancing
  16. Logical blend operations
  17. Double precision (64-bit) floating point operations
  18. Minimum floating point precision (10 or 16 bit)
  19. Shader Model 6.0
  20. Shader Model 6.1
  21. Shader Model 6.2
  22. Shader Model 6.3
  23. Shader Model 6.4
  24. Raytracing
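For anyone who wants to poke at these on their own machine: most of the tiers above are reported through the real ID3D12Device::CheckFeatureSupport call. A minimal sketch, assuming a recent Windows 10 SDK (error handling trimmed; the shader-model query may fail on older runtimes, in which case retry with a lower value):

```cpp
// Print a handful of the optional D3D12 "cap bits" listed above.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> dev;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0,
                                 IID_PPV_ARGS(&dev))))
        return 1;  // no feature level 12_0 adapter available

    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    dev->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &opts, sizeof(opts));
    std::printf("Resource binding tier:        %d\n", opts.ResourceBindingTier);
    std::printf("Tiled resources tier:         %d\n", opts.TiledResourcesTier);
    std::printf("Conservative raster tier:     %d\n", opts.ConservativeRasterizationTier);
    std::printf("Rasterizer ordered views:     %d\n", opts.ROVsSupported);
    std::printf("Typed UAV load extra formats: %d\n", opts.TypedUAVLoadAdditionalFormats);
    std::printf("PS-specified stencil ref:     %d\n", opts.PSSpecifiedStencilRefSupported);
    std::printf("FP64 shader ops:              %d\n", opts.DoublePrecisionFloatShaderOps);

    // Ask for the highest shader model we know about; the runtime lowers it.
    D3D12_FEATURE_DATA_SHADER_MODEL sm = { D3D_SHADER_MODEL_6_4 };
    if (SUCCEEDED(dev->CheckFeatureSupport(D3D12_FEATURE_SHADER_MODEL,
                                           &sm, sizeof(sm))))
        std::printf("Highest shader model:         0x%x\n", sm.HighestShaderModel);

    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    dev->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5, &opts5, sizeof(opts5));
    std::printf("Raytracing tier:              %d\n", opts5.RaytracingTier);
    return 0;
}
```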
 
I think the early adopter, with average spending of 1,600 dollars, matters to Sony, and I'm sure early adopters on the Xbox side spend at least as much, maybe more, because they spend more money on digital.
 
It's around 9 mins in where they mention this. I wonder, if the low-level mode is so much faster, why there is even a conventional SSD mode.

Probably because the low-level mode hands off management to the application, and not all app devs are interested in managing the SSD. Netflix and Hulu might not care to support low-level access.
 
This is one aspect. Another is, as has been pointed out, that lighting with RT is still difficult.
And a third is that, for the foreseeable future, any multiplatform game has to offer both approaches, which means it can't help but increase the total development work. RTRT lighting may be The Future(tm), but I maintain that for that to be the case it has to offer tangible benefits at a net lower resource cost. At what point, if ever, a game designer/publisher can assume that all targeted platforms have sufficiently efficient support for RT that no other approach needs to be considered is very much up in the air.

One question about dropping the more conventional rendering path is how much further graphical power, power efficiency, and the state of the art have to advance to overcome how very good current architectures are at handling their primary use case.
How much of what users see on the screen is dominated by, or sources heavily from, calculations and data flow that are accelerated by rasterization hardware, or by the primary rays it handles quite well?

Many of Nvidia's initial efforts make a point of leaving the vast majority of the screen to the raster path, reserving RT for the margins of the technique and for the types of incoherent ray behavior that are by nature more difficult for hardware of any kind to handle well.
If 80-90% of what is seen depends heavily or partly on the standard paths, which handle those calculations up to an order of magnitude more efficiently, how eager are developers or hardware designers to discard the optimizations that made the room for ray-tracing hardware, or the spare compute resources/power budget for a new sort of compute kernel?


It's around 9 mins in where they mention this. I wonder, if the low-level mode is so much faster, why there is even a conventional SSD mode.
Assuming the Sony patent is what the PS5 tries to do, and assuming that something similar is being done in the purported low-level mode:

There would be functions the low-level mode cannot do, like writing data. One scenario that I was curious about for the Sony method is what might happen for an open-world game that has a sprawling item database with small writes throughout, and if the write-back is better left to the standard file system.
Other standard functions like arbitration of file access, garbage collection, file and file system protection, system integrity, and quality of service are not handled to a significant degree by the low-level mode.
Depending on the specifics of the hardware mode, there may be limits to how complex the access pattern can get before hardware limits (controller on-die buffer space, queue depths, synchronization points, etc.) prompt special handling or fallback behavior. While games are treated as a single-user system, there may be security implications for game or OS functions that interface with the network or other elements, where those missing arbitration and protection layers need to be brought back in.

Also, depending on the nature of the low-level access and how well-protected it is from bad or malicious payloads, there is the question of what errors or flaws in low-level handling may affect. Could overflows or timing issues on a controller with fewer insulating layers trash something like a remapping table, which may corrupt or brick the drive? As an example, what if the low-level management firmware had the same bug rate as Sony's suspend mode, where a game like Stardew Valley needed a system firmware update to stop the game from locking up the console and sometimes prompting a disk database rebuild?

A read-only performance mode may also avoid worrying about what may happen in the case of power loss, which many SSDs and operating systems already have problems with. Something like a whole-game remapping table subject to edits and shifting compression/encryption coverage may raise the stakes further.
There could be a risk-management theme for the platforms to have separate modes, so that they can offer the low-level mode later once they're much more certain about the quality and robustness of their methods before letting games get at them.
 