Next Generation Hardware Speculation with a Technical Spin [2018]

Many are hoping that RT hardware acceleration, as featured in RTX, dies a quick death and is replaced by a more general-purpose solution.
Looking through the latest DXR specs, it seems like RT acceleration structures do away with the notion of 'descriptors' in favour of linear GPU virtual address space, because each ray can travel anywhere in a scene and access every resource... I guess it would take a couple more iterations of the Direct3D API and a move away from the D3D11-era concept of 'resource views' and the D3D12-era 'descriptor heaps' etc. - which would require a new shader language model - before DirectX Raytracing would be feasible to implement on consoles...
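To make the GPU-VA point concrete, here's a minimal sketch of a bottom-level acceleration structure build against the public DXR headers (the helper and buffer names are hypothetical placeholders, not from any shipping code):

#include <d3d12.h> // Windows 10 SDK with the DXR additions

// Hypothetical helper: build a BLAS over one triangle mesh. Note that every input the
// build consumes is a raw D3D12_GPU_VIRTUAL_ADDRESS rather than a descriptor/resource view.
void BuildBlas(ID3D12GraphicsCommandList4* cmdList,
               ID3D12Resource* vertexBuffer, UINT vertexCount,
               ID3D12Resource* scratch, ID3D12Resource* blasResult)
{
    D3D12_RAYTRACING_GEOMETRY_DESC geom = {};
    geom.Type = D3D12_RAYTRACING_GEOMETRY_TYPE_TRIANGLES;
    geom.Triangles.VertexBuffer.StartAddress  = vertexBuffer->GetGPUVirtualAddress(); // GPU VA
    geom.Triangles.VertexBuffer.StrideInBytes = 3 * sizeof(float);
    geom.Triangles.VertexFormat = DXGI_FORMAT_R32G32B32_FLOAT;
    geom.Triangles.VertexCount  = vertexCount;

    D3D12_BUILD_RAYTRACING_ACCELERATION_STRUCTURE_DESC build = {};
    build.Inputs.Type           = D3D12_RAYTRACING_ACCELERATION_STRUCTURE_TYPE_BOTTOM_LEVEL;
    build.Inputs.DescsLayout    = D3D12_ELEMENTS_LAYOUT_ARRAY;
    build.Inputs.NumDescs       = 1;
    build.Inputs.pGeometryDescs = &geom;
    build.DestAccelerationStructureData    = blasResult->GetGPUVirtualAddress(); // GPU VA, no view
    build.ScratchAccelerationStructureData = scratch->GetGPUVirtualAddress();    // GPU VA, no view

    cmdList->BuildRaytracingAccelerationStructure(&build, 0, nullptr);
}

In the Microsoft samples the top-level structure is typically bound straight from its GPU virtual address as a root SRV as well, which is exactly the shift away from views/descriptors being described above.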
 
And most of those naysayers have exactly zero experience with RTX and minimal with RTRT.
The RTRT tech implementation is spanking new, so naysayers are to be expected, and I imagine it would be less of a "black box" if they took the time to peruse such documentation as indicated by @DmitryKo .

I'm just glad the major game/indie developers that have endorsed the RT concept and are currently developing RTRT desktop games will likely spearhead its adaptation to other platforms.
 
What's your experience of implementing RTX enhanced routines into your start-to-end graphics processing pipeline?
None, which is why I don't dismiss it so readily.

For the most part it's because the next-gen consoles most likely aren't going to have dedicated RT hardware. If there were to be fixed-function RT hardware in the PS5, people would most likely be more excited about it.
It's like the fixed pixel and vertex shaders and features like bump mapping from the early 2000s, back when more flexible solutions were too slow. With the RTX 2070 and up it's at least a start; perhaps we will see an RTX 3080 or 4080 series with better RT solutions around console launch time (2020/2021). Nvidia is most likely preparing for the future of RT.
On a side note, I don't think there are that many 'naysayers' really, and this is the console forum so that's to be expected.

I do think GPUs are too expensive as they are right now, and strangely people are buying them in huge numbers anyway, which doesn't really help push Nvidia to lower future prices.
Yes, RTX is just a start. Skepticism always surrounds new technological paradigms. We'll just have to wait and see what comes out of this.

Microsoft provides preliminary DXR documentation and raytracing code samples on GitHub, and they work with GCN hardware through a fallback layer - and Nvidia provides RTX docs as well, so it's not that hard for a seasoned game developer to get a glimpse of the technology.
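For anyone curious how a title would even decide which path to take, the gate is a single feature query - a minimal sketch using the shipping D3D12 API (at the time the samples also needed experimental features enabled, which I'm leaving out; the function name is hypothetical):

#include <d3d12.h>

// Hypothetical check: does this device expose hardware DXR, or do we drop down to the
// compute-based fallback layer shipped alongside the DirectX-Graphics-Samples repo?
bool SupportsHardwareRaytracing(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &options5, sizeof(options5))))
        return false; // older runtime: no DXR support at all
    return options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}

On today's GCN cards that comes back false, and the GitHub samples then go through the fallback layer instead - which is why you can experiment without an RTX card.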

That said, I really doubt next-gen consoles would include raytracing, simply because it currently requires top-end hardware while consoles have always been based on mid-range GPUs.
Well, DXR is one thing and RTX is another. With DXR you can play with it on its own, but for RTX you need to get your hands dirty.

I still think that the point of RTX is to get developers on board with ray tracing; that's why it's designed for speed and ease of adoption. Once devs embrace rays, then you give them flexibility. Current RTX cards are transition cards, not showcases of a new paradigm.
 
And most of those naysayers have exactly zero experience with RTX and minimal with RTRT.

Just saying.

I'm talking about developers with experience, to a greater or lesser degree, with ray tracing.

They all by and large want RT in games. But they mostly don't think fixed-function, black-box RT is the way to do it.

Regards,
SB
 
Backwards compatibility unofficially confirmed for PS5?


Inventors: Cerny; Mark Evan (Burbank, CA), Simpson; David (Los Angeles, CA)
Filed: January 20, 2017
Cerny, et al. - October 16, 2018
Simulating legacy bus behavior for backwards compatibility

Abstract

To address problems that arise due to differences in bus behavior when running a legacy application on a new device, the new device may throttle bus performance in a way that emulates the bus behavior of a legacy device when executing the legacy application.

Bus throttling on the new system may be based on estimated bandwidth allocations determined from behavior of the legacy bus. Bus traffic may be throttled by limiting the amount of available bus bandwidth allocated for particular bus transactions according to amounts estimated from the legacy bus behavior. The bus traffic is throttled so that the new device allocates at least as much bandwidth as would have been allocated by the legacy system, but not so much more that synchronization errors arise in execution of a legacy application. The throttling can be tuned while running legacy applications on the new device to determine how much additional bandwidth allocation causes problems with execution.
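Purely to make that mechanism concrete (everything below is a hypothetical illustration, not lifted from the patent): the claim boils down to giving each class of bus traffic a per-time-slice budget equal to the bandwidth estimated from the legacy bus plus a tunable margin, something like:

#include <cstdint>

// Hypothetical per-transaction-class throttle. The legacy estimate comes from profiling the
// old bus; 'headroom' is the tunable extra allocation that the patent says is adjusted while
// running legacy titles, until execution problems appear, then backed off.
struct BusClassThrottle {
    uint64_t legacyBytesPerSlice; // estimated from legacy-bus behaviour
    double   headroom;            // e.g. 0.10 = allow 10% more than the legacy bus would
    uint64_t usedThisSlice = 0;

    // Returns how many bytes of the requested transfer may proceed in this slice;
    // the remainder is deferred, emulating the slower legacy bus.
    uint64_t Grant(uint64_t requestedBytes) {
        const uint64_t budget =
            static_cast<uint64_t>(legacyBytesPerSlice * (1.0 + headroom));
        const uint64_t remaining = (usedThisSlice < budget) ? budget - usedThisSlice : 0;
        const uint64_t granted = (requestedBytes < remaining) ? requestedBytes : remaining;
        usedThisSlice += granted;
        return granted;
    }

    void NewSlice() { usedThisSlice = 0; } // reset at the start of each time slice
};

The real thing would live in hardware or firmware, of course; the sketch just shows why "at least the legacy bandwidth, but not too much more" is the whole trick.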

FIG. 1 shows an example of a new device configured to account for differences in bus architecture between a legacy device and the new device when running applications written for the legacy device. In this example, the new device may include a multicore CPU and a multicore GPU coupled to a common memory 106 and I/O access controller 108. Each CPU or GPU core is coupled to a level 2 cache 110 and bus interface unit 112 via backside buses (BSB1, BSB2). The level 2 cache 110 is coupled to the memory 106 and I/O access controller 108 by a frontside bus (FSB). Additional memory (not shown), peripheral devices 114, video 116, and data storage devices 118 interface with the CPU and GPU through the access controller by various buses. The CPU and GPU may include configurable registers 105 for temporary storage of data and/or instructions. A legacy version of the device in FIG. 1 might have a different architecture, e.g., one in which there are separate buses for the CPU and GPU and in which there are separate controllers for memory and I/O access.
 
nVidia doesn't need mindshare...

MS does, but as others have said, it's likely too costly, so the partnership is unlikely.

MS is probably gaining quite a bit of mindshare with all its recent moves anyway - BC, the X, subscription models, Play Anywhere, hoarding first party, etc. - so going with Nvidia could be a waste of money if Nvidia really isn't willing to strike a good deal for a console project.
 
Isn't that the same patent we already talked about last week, where the result was that the patent had merely been updated by the patent office?

Edit: maybe not in this thread or in this forum, but this is all deja-vu to me.

I'm not 100% sure whether anyone has posted this already - I haven't seen it. But I did post this, which had more to do with emulation than backwards compatibility.
 
Well, it proved my point about how important PlayStation is to Sony. You can't conclude that because Sony spent lots of money on the PS3, that spending is the reason it was a failure.

Sony was bleeding money during the PS3 era. Looking at the forecast for Sony PlayStation's fiscal year 2020, some financial analysts think Sony will lose more money per unit on the PS5 at launch than it did with the PS4, but I doubt it will be at PS3 levels - that was a money sink - and I doubt they'll lose as much money per unit as MS did on the 360.
 
I still think that the point of RTX is to get developers on board with ray tracing; that's why it's designed for speed and ease of adoption. Once devs embrace rays, then you give them flexibility. Current RTX cards are transition cards, not showcases of a new paradigm.

Exactly. Which corroborates the idea that it is just too early to have that implementation in a $400 console releasing in 2020 (which, with that release date, would have started its design years ago and would be getting its finishing touches at the moment).
 
Well, it proved my point about how important PlayStation is to Sony. You can't conclude that because Sony spent lots of money on the PS3, that spending is the reason it was a failure.

It doesn't follow either that spending more money means a more successful console. The first Xbox had much better hardware than the PS2 and was probably a much more expensive project.
Just because PlayStation is Sony's biggest department doesn't mean they would put more resources into their console either. Even if Xbox is more of a side project for MS, their Xbox department could have more resources. MS being bigger and not dependent on just Xbox means they can most likely take more risks.
The One X certainly cost more to produce than the Pro.
 
It doesn't follow either that spending more money means a more successful console.
That wasn't the conversation. There was discussion over which of MS and Sony has the most invested in consoles and so is more likely to invest more heavily in the next one, with PlayStation being presented as far more important to Sony than Xbox consoles are to MS, and the PS3 showing how much Sony has been willing to invest in hardware before (a stupid amount). If the mentalities of the two companies are "winning the console space would be nice" for MS and "we have to dominate the console space or we're screwed" for Sony (not that I'm saying it is as polarised as that), then we'd expect Sony's investment to be more pronounced, all else being equal.

But either way, I don't think that line of debate contributes much to this thread. Hardware predictions should be based on budgets, with allowances made for more or less whatever the companies choose.
 
Yes, RTX is just a start. Skepticism always surrounds new technological paradigms. We'll just have to wait and see what comes out of this.
Well, I can live with that ;)

I'm sorry if my comments before seemed to ignore some arguments.
But I can tell you, I'm old enough that I won't get hyped for every new piece of technology (there were way too many in the past). Currently, Nvidia praises the hell out of RTX, like companies do when they have something new to sell. Like Shifty already mentioned, there are other "traditional" ways reflections etc. can work without this new thing. They are not as good as the new thing, but good enough (which is always the point in computer graphics) to be convincing at lower hardware cost.
Currently there is nothing but demos and the promise that games are coming soon for both new Nvidia technologies. If it only works well on a 2080, it won't be in consoles anytime soon. If it works well on the 2070, maybe; if Nvidia brings out a 2060 with working RTX (that is usable in games), then chances are way better. But console manufacturers look at every penny they spend on the chips, so it will depend on how well it works and how scalable it is (especially the AMD solution).
 
Exactly. Which corroborates the idea that it is just too early to have that implementation in a $400 console releasing in 2020 (which, with that release date, would have started its design years ago and would be getting its finishing touches at the moment).
I don't think that necessarily matters in this scenario. The team that builds Xbox and the team behind DXR are effectively the same team. The DX team works closely with them, and they work closely with the vendors.

Without support from MS, there's no way that Nvidia would have tried to push this RTX line: too little support and too much infrastructure required to serve the 5% of future GPU owners, while taking up precious silicon to support it.

Nothing makes more sense than profitability and risk mitigation.

Xbox One shipped with DX12 customizations before DX12 was released, and there are still custom components in there specifically for GPU-driven rendering that we don't see anywhere else in the AMD lineup.

And DX12 seems quite largely based on GCN.

The BC hardware in Xbox was developed as singular components way in advance of the final design.

All of that must have started somewhere and certainly way before release.

There’s a big difference between power and feature set. Consoles will likely stay in the middle of the power band, as their costs are directly associated with SoC size and TDP.

Feature set is something else entirely, though; there are Intel iGPUs that support more DX12 features than some of the latest AMD releases.

Not to say you're wrong - there is merit to your point - but I draw a big line between power and features. And for obvious reasons, in the console space features matter much more than power. A 3.0 TF Xbox 360 would be unlikely to produce the visuals of this gen.
 
I'm all for features that make the whole of the machine more capable and flexible. I'm not a fan of features that are single-purpose gimmicks that can't contribute to other aspects outside of their original intended use case.
From all I've heard from devs on the matter, the fact that RTX exists is surely exciting and promising, but the implementation is more like the second description than the first. I hope it can evolve to become the ideal, and surely being released on the 2080 is the first step. You've got to crawl before you can sprint. No one denies that. My only point is that I think consoles are not the place for crawling. They are mainstream, consumer-centric, general-purpose-ish gaming machines meant to last more than half a decade. The design must be as smart, efficient and generalized as possible.
 
I'm all for features that make the whole of the machine more capable and flexible. I'm not a fan of features that are single-purpose gimmicks that can't contribute to other aspects outside of their original intended use case.
From all I've heard from devs on the matter, the fact that RTX exists is surely exciting and promising, but the implementation is more like the second description than the first. I hope it can evolve to become the ideal, and surely being released on the 2080 is the first step. You've got to crawl before you can sprint. No one denies that. My only point is that I think consoles are not the place for crawling. They are mainstream, consumer-centric, general-purpose-ish gaming machines meant to last more than half a decade. The design must be as smart, efficient and generalized as possible.
In an ideal world you'd be right. The idea of generalized, non-fixed-function RT sounds a lot better than what Nvidia is doing. But once again, I'm not exactly sure what that means entirely, and I also don't know if that's a limitation of DXR. I.e., compute shaders were only brought into play with DX11, and compute _really_ only just took off even though DX11 has been around since 2009.

Having said that, we do need to start somewhere, and I'd rather start with basic RT features (before the mature ones) now than not have it at all next gen and just go yet another generation of rasterization. It would, imo, still open up a slew of new experiences. The idea that the insides of buildings will actually be lit by the lights and not baked, etc. - so that if lights are shot out or turned off in rooms, the hallways go black, etc.

There is more dynamism to stealth game-play when we can talk about manipulating light itself.
 
In an ideal world you'd be right. The idea of generalized, non-fixed-function RT sounds a lot better than what Nvidia is doing. But once again, I'm not exactly sure what that means entirely, and I also don't know if that's a limitation of DXR. I.e., compute shaders were only brought into play with DX11, and compute _really_ only just took off even though DX11 has been around since 2009.

Having said that, we do need to start somewhere, and I'd rather start with basic RT features (before the mature ones) now than not have it at all next gen and just go yet another generation of rasterization. It would, imo, still open up a slew of new experiences. The idea that the insides of buildings will actually be lit by the lights and not baked, etc. - so that if lights are shot out or turned off in rooms, the hallways go black, etc.

There is more dynamism to stealth game-play when we can talk about manipulating light itself.
+1.

I also think that even if this hardware solution isn't the best, it will prove a good starting point for developers to at least try some sort of RTRT right now. This will push DXR forward, and maybe we'll discover better solutions and a better hardware approach once developers actually get to work on this.
 