Next Generation Hardware Speculation with a Technical Spin [2018]

Would a wider bus reduce disproportionate bandwidth contention, though?
60 fps doubles bandwidth use over 30 fps, while physical bandwidth stays constant.
In other words, the CPU is hitting the memory twice as hard, cutting into the bandwidth available to the GPU.
*correct me if wrong*
Say the CPU normally takes up 20 GB/s at 30 fps; the bandwidth available to the GPU is then, say, 120 GB/s.
But at 60 fps the CPU is going to take 40 GB/s, and the GPU will be left with < 100 GB/s.
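A rough back-of-envelope of that in Python, using the numbers above plus the ~140 GB/s effective figure from the graphic in the edit below (all assumptions, not measurements; extra losses from read/write turnaround would pull the GPU figure lower still):

```python
# Illustrative only: how CPU traffic eats into GPU bandwidth when frame rate
# doubles. The ~140 GB/s effective figure and the 20 GB/s CPU number are this
# post's assumptions, not measured values.
EFFECTIVE_BW_GBS = 140.0
CPU_BW_AT_30FPS = 20.0

for fps in (30, 60):
    cpu_bw = CPU_BW_AT_30FPS * (fps / 30)   # CPU memory traffic scales with frame rate
    gpu_bw = EFFECTIVE_BW_GBS - cpu_bw      # what's left, before any extra contention losses
    print(f"{fps} fps: CPU ~{cpu_bw:.0f} GB/s, GPU at most ~{gpu_bw:.0f} GB/s")
```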

If the goal is high framerate support for next gen, will bus width be a factor?

edit:
wrt this old document
(attached image: PS4-GPU-Bandwidth-140-not-176.png)

Do you have the source of this document? I’ve never seen that graphic before.
 
GB/s is GB/s. If they enable the same number of GB/s with the faster bus they don't need a wider bus. It's possible at any given instant the narrower memory controller could have more requests in flight than there are lanes to service them, but there will surely be plenty of cycles (when the cycles are measured in nanoseconds) within that second where there are fewer requests in flight and the faster bus can catch up.
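For example (the per-pin rates here are made up, just to show two configurations that land on the same peak number):

```python
# Two hypothetical configurations with different widths but the same peak
# bandwidth. The per-pin rates are invented for illustration.
def peak_bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    """Peak bandwidth in GB/s = (bus width in bits * per-pin rate in Gb/s) / 8."""
    return bus_width_bits * gbps_per_pin / 8

wide_slow = peak_bandwidth_gbs(384, 12.0)    # wide bus, slower memory
narrow_fast = peak_bandwidth_gbs(256, 18.0)  # narrow bus, faster memory
print(wide_slow, narrow_fast)                # both 576.0 GB/s
```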
Right, sorry, let me clarify my question.
I'm not entirely sure what causes disproportionate memory contention. Is it a loss of bandwidth from the memory controller having to switch between lots of reads and writes? Or is it a scenario where, say, a CPU read/write only uses 128 bits of a 256-bit access, and that monopolizes the whole 256 bits so the remaining 128 bits can't be shared with the GPU, and that's where the losses come from?
 
I was about to simply post "no" based on my basic understanding of memory controllers, but decided to check before posting for a change, so I had a gander and found a 2017 paper from AMD (pdf):
Improving CPU Performance through Dynamic GPU Access Throttling in CPU-GPU Heterogeneous Processors
It addresses the idea of implementing a real-time QoS system for memory requests and resources, and it rapidly moves far, far beyond my comprehension, so I thought someone here might gain some insight from it.

Edit:

There's also this paper from 2014 (pdf):
Managing GPU Concurrency in Heterogeneous Architectures
So it's clear AMD has been putting a lot of work into managing bandwidth between the CPU and GPU in HSA designs. I'm still no clearer, though, on whether two 128-bit requests, one from the CPU and one from the GPU, could be sent together over a 256-bit bus.
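For what it's worth, here's a toy sketch of the general shape of such a QoS scheme as I understand it. This is not the mechanism from either paper, and all the names and thresholds are invented:

```python
# Toy sketch (not the papers' actual mechanisms): throttle GPU memory requests
# when CPU memory latency climbs. All names and thresholds are invented.
LATENCY_TARGET_NS = 150.0          # assumed acceptable average CPU memory latency
MIN_TOKENS, MAX_TOKENS = 4, 64

gpu_request_tokens = MAX_TOKENS    # budget of outstanding GPU memory requests

def update_throttle(measured_cpu_latency_ns: float) -> int:
    """Shrink the GPU's outstanding-request budget when the CPU is suffering,
    grow it back when the CPU is comfortably under its latency target."""
    global gpu_request_tokens
    if measured_cpu_latency_ns > LATENCY_TARGET_NS:
        gpu_request_tokens = max(MIN_TOKENS, gpu_request_tokens // 2)
    else:
        gpu_request_tokens = min(MAX_TOKENS, gpu_request_tokens + 1)
    return gpu_request_tokens

# e.g. sampled once per epoch by the memory controller / firmware
for sample_ns in (120.0, 180.0, 200.0, 140.0):
    print(sample_ns, "ns ->", update_throttle(sample_ns), "GPU requests allowed")
```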
 
Right, sorry, let me clarify my question.
I'm not entirely sure what causes disproportionate memory contention. Is it a loss of bandwidth from the memory controller having to switch between lots of reads and writes? Or is it a scenario where, say, a CPU read/write only uses 128 bits of a 256-bit access, and that monopolizes the whole 256 bits so the remaining 128 bits can't be shared with the GPU, and that's where the losses come from?

In the case of the One X, the SoC has an integrated memory controller that is physically connected to each of the twelve 1GB memory chips via a 32-bit bus. It's these twelve 32-bit buses that give the One X its 384-bit bus. I wouldn't expect read or write operations on one or more of the individual 32-bit buses to prevent the memory controller from using the others. The CPU, though, is probably given precedence over the GPU for memory accesses, given what people have posted here about CPUs being more latency-sensitive, and that could be an issue if both the GPU and CPU need data that resides on the same chip(s) at the same time. I suppose that with fewer connections these collisions could happen more often.
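A hypothetical sketch of that last point. The twelve 32-bit channels are real, but the interleaving granularity and the addresses are invented for illustration:

```python
# Invented example: fewer channels means a higher chance two clients land on
# the same channel at the same time. 256-byte interleave is an assumption.
NUM_CHANNELS = 12
INTERLEAVE_BYTES = 256

def channel_of(address: int) -> int:
    """Which 32-bit channel a (hypothetical) physical address lands on."""
    return (address // INTERLEAVE_BYTES) % NUM_CHANNELS

cpu_addresses = [0x1000, 0x2100, 0x3300]   # invented CPU request stream
gpu_addresses = [0x8000, 0x2140, 0x9900]   # invented GPU request stream

for cpu_a, gpu_a in zip(cpu_addresses, gpu_addresses):
    c_cpu, c_gpu = channel_of(cpu_a), channel_of(gpu_a)
    status = "collide, one must wait" if c_cpu == c_gpu else "different channels, no conflict"
    print(f"CPU ch{c_cpu} vs GPU ch{c_gpu}: {status}")
```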
 
I see it's nice to want 24GB or more RAM, but is that really needed? I mean, more RAM doesn't mean better performance; isn't raw GPU and CPU performance more to worry about?
 
I see it's nice to want 24GB or more RAM, but is that really needed? I mean, more RAM doesn't mean better performance; isn't raw GPU and CPU performance more to worry about?
Memory capacity and bandwidth are bottlenecks to performance. Having lots of raw CPU and GPU power but no way to feed it is why you have to look at the system as a whole.

Having 24 TF but only 65 GB/s of bandwidth probably means you're not going to get much done.
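A quick ratio check, with the launch PS4 as a reference point:

```python
# Back-of-envelope for why 24 TF with 65 GB/s is lopsided, using the launch
# PS4 (1.84 TF, 176 GB/s peak) as a reference point.
hypothetical = 65e9 / 24e12        # bytes of bandwidth per FLOP
ps4 = 176e9 / 1.84e12

print(f"24 TF @ 65 GB/s      : {hypothetical:.4f} bytes/FLOP")   # ~0.0027
print(f"PS4 1.84 TF @ 176 GB/s: {ps4:.3f} bytes/FLOP")           # ~0.096
```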
 
Where are you reading that info? The bits you've quoted could just be about working on PC hardware and appropriate DX12 tech, without any knowledge of what the next-gen consoles will have.
When the mod was trying to help by editing my post's formatting & font, he also removed this bold and highlighted text

Looks like 3rd-party publishers/developers have gotten or are getting somewhat finalized specs for the next-gen consoles in Fall 2018.

https://activision.referrals.selectminds.com/infinityward/jobs/narrative-scripter-temporary-2607

Programming Intern (Fall 2018) - Central Technology - Sherman Oaks, CA
Programming Intern – Central Technology
Activision Blizzard is seeking talented engineers to join its Central Technology division on an internship or co-op basis for Fall 2018. This is a rare West Coast opportunity to develop cutting edge games technology, learn the next-gen consoles inside and out, and to work with top developers like Sledgehammer, Treyarch, Infinity Ward, and Vicarious Visions. In the past interns have worked on problems such as data analytics, recommendation engines, logging and network data optimization.

There is also the job listing (which perhaps, though I doubt it, was the original source) from that Gamingbolt article:

Narrative Scripter (Temporary)
Come work with the game industry’s brightest on a new, exciting, unannounced title for multiple next gen platforms.
 
When the mod was trying to help by editing my post's formatting & font, he also removed this bold and highlighted text



There is also the job listing (which perhaps, though I doubt it, was the original source) from that Gamingbolt article:
I'm not sure what's being implied here. It's known that we are still a ways out and a real kit isn't ready yet; most consoles at this stage are likely not much more than a PC slapped together to match the target specs, or at best with a video card provided by Sony/MS.
 
I'm not sure what's being implied here. It's known that we are still a ways out and a real kit isn't ready yet; most consoles at this stage are likely not much more than a PC slapped together to match the target specs, or at best with a video card provided by Sony/MS.
Exactly. My thought is that these are PC devkits and that 3rd-party publishers may have rough target specs (and of course they've been informed that even the rough outline could be subject to change, à la the PS4's +4GB memory boost or the X1's +53MHz GPU boost).
However, given the increasing cost of R&D, prototyping, validation, software, etc. at 7nm, I wouldn't be too surprised if Sony/MS are slightly more frugal with their modifications this time around compared to the architectures seen in PCs.
(attached image: nano3.png)
 
However, given the increasing cost of R&D, prototyping, validation, software, etc. at 7nm, I wouldn't be too surprised if Sony/MS are slightly more frugal with their modifications this time around compared to the architectures seen in PCs.

Why would they be more frugal? They're making more money than they have ever made. In my opinion, because of the networks and how important it is to lock people into them as early as possible, they should go all out in pushing the boundaries.

I mean how much money did Sony make from PSN in 2017?
 

Interesting. The disproportionate bandwidth hit seen on the PS4 under high CPU load might not only be a result of DRAM contention, but also of GPU warp scheduling being actively throttled to reduce memory contention (to improve latency for the CPU).

It would be interesting to see how things have progressed in the past 4-5 years. Anyone with an R5 2200G/2400G system and the technical inclination to do some tests?
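A very rough sketch of the CPU-side half of such a test: measure achievable copy bandwidth idle, then again while a GPU-heavy workload is running, and compare. Single-threaded, so it understates peak bandwidth, and the sizes are arbitrary:

```python
import time
import numpy as np

def cpu_copy_bandwidth_gbs(size_mb: int = 512, repeats: int = 10) -> float:
    """Approximate CPU memory copy bandwidth in GB/s."""
    src = np.frombuffer(np.random.bytes(size_mb * 1024 * 1024), dtype=np.uint8)
    dst = np.empty_like(src)
    start = time.perf_counter()
    for _ in range(repeats):
        np.copyto(dst, src)                 # each copy reads and writes size_mb MB
    elapsed = time.perf_counter() - start
    return (2 * size_mb * repeats) / 1024 / elapsed   # 2x: read + write traffic

print(f"~{cpu_copy_bandwidth_gbs():.1f} GB/s CPU copy bandwidth")
```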

Cheers
 
When the mod was trying to help by editing my post's formatting & font, he also removed this bold and highlighted text:
Working on next-gen doesn't mean knowing next-gen hardware. As we've seen before, devs get started on next-gen games only to dramatically change what they end up shipping on the real hardware a year or two later.

In this case, there's a time limit on the job, I guess. How long is the intern there for? But I'd still say it's just an advertising hook rather than a sign of real hardware specs. Knowing next-gen consoles inside and out could mean getting to grips with an AMD Raven Ridge APU and DX12 software methods like GPU-based graphics dispatch.

Is Navi even in silicon to experiment with as an architecture, let alone to know the PS5 inside out?
 
Well, to be fair, this coming gen will be the first generation ever where the base technology is exactly the same as the last gen's.
Is this certain?

The latest rumors say that Navi is an architecture mostly done for Sony, and that Sony was so demanding on it that most of the RTG staff was sent to work on the PS5's GPU, which in turn delayed Vega 10 and hindered its final performance.
To me, this suggests Navi might be a significant departure from AMD's GFX9, despite other contradicting rumors. If it were just an evolutionary upgrade to Vega's GFX9, it shouldn't take that many resources.


What pitfall would there be in supporting a high- and a low-powered variant from launch?
Performance profiling to keep all of the game's frametimes under 33ms is, AFAIK, a very demanding task. It's an effort that, if done right, will have to be doubled for two distinct performance targets.
It's not like they can simply treat the higher-performing console as if it were a PC with a more powerful graphics card. In the PC space there's no one dictating that the game must run at a minimum of 30 FPS if, e.g., a Ryzen 2700X + GTX 1080 Ti is detected. On consoles, failing to do so could cost the devs dearly.

Multiplatform + cross-gen game devs, come 2020/2021, will have to do that for two known consoles from 2013, two different but known consoles from 2016/17, and two different and unknown consoles from 2020.
Adding yet another console would turn an incredibly difficult task into an almost impossible one.
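Just to illustrate the profiling burden, a toy sketch where the same frametime check has to hold on every target; the platform names and captured frametimes are made up:

```python
# Toy sketch of multi-target frametime validation. Data is invented.
FRAME_BUDGET_MS = 1000 / 30          # ~33.3 ms for a 30 fps target

captured_frametimes_ms = {
    "base_2013_console": [31.0, 32.8, 35.1, 30.4],
    "mid_gen_refresh":   [22.5, 24.0, 23.1, 25.9],
    "next_gen_devkit":   [14.2, 13.8, 16.0, 15.1],
}

for platform, times in captured_frametimes_ms.items():
    worst = max(times)
    verdict = "OK" if worst <= FRAME_BUDGET_MS else f"over budget by {worst - FRAME_BUDGET_MS:.1f} ms"
    print(f"{platform}: worst frame {worst:.1f} ms -> {verdict}")
```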
 
Is this certain?

The latest rumors say that Navi is an architecture mostly done for Sony, and that Sony was so demanding on it that most of the RTG staff was sent to work on the PS5's GPU, which in turn delayed Vega 10 and hindered its final performance.
To me, this suggests Navi might be a significant departure from AMD's GFX9, despite other contradicting rumors. If it were just an evolutionary upgrade to Vega's GFX9, it shouldn't take that many resources.

I think he meant it in general terms, as in an x86/PC-based architecture versus something exotic like Cell + RSX or Xenon + Xenos.
 
I think he meant it in general terms, as in an x86/PC-based architecture versus something exotic like Cell + RSX or Xenon + Xenos.
Yes, but the performance-profiling effort still exists, and the PS5's Navi might be more distant from current GCN architectures than we think.

Plus, my previous point is that although the newer platforms are supposedly easier to develop for, the time saved from not drastically changing the CPU/GPU instruction sets may be eaten up by the increasing amount of time needed to make more visually complex games.
 
Multiplatform + cross-gen game devs, come 2020/2021, will have to do that for two known consoles from 2013, two different but known consoles from 2016/17, and two different and unknown consoles from 2020.
Adding yet another console would turn an incredibly difficult task into an almost impossible one.

If they don't have dynamic engines that scale automatically by 2020, then they're developing them all wrong.
 
If they don't have dynamic engines that scale automatically by 2020, then they're developing them all wrong.
Do you mean dynamic resolution?

If dynamic resolution could solve performance profiling outright, then everyone would be using it right now. There's probably a good reason why that technique has been around for almost a decade and a half and yet not everyone has promptly adopted it.
In fact, aside from Quantum Break on the Xbone, is any AAA 1st-party game (the ones that push the hardware the most) using adaptive resolution?
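For reference, one common way dynamic resolution is implemented (a generic sketch, not any particular engine's approach; the clamps and numbers are invented):

```python
# Generic dynamic-resolution sketch: scale the next frame's render resolution
# from how close the previous frame's GPU time came to the budget.
FRAME_BUDGET_MS = 1000 / 30
MIN_SCALE, MAX_SCALE = 0.7, 1.0

def next_resolution_scale(current_scale: float, last_gpu_ms: float) -> float:
    """Nudge the render scale so GPU time tracks the frame budget."""
    headroom = FRAME_BUDGET_MS / last_gpu_ms       # >1 means last frame was under budget
    # Pixel count scales with the square of the axis scale, hence the sqrt.
    proposed = current_scale * headroom ** 0.5
    return max(MIN_SCALE, min(MAX_SCALE, proposed))

print(next_resolution_scale(1.0, 38.0))   # over budget  -> drops to ~0.94
print(next_resolution_scale(0.8, 25.0))   # under budget -> climbs to ~0.92
```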
 
No, he means engines that can be given a hardware target and adapt resources and rendering appropriately. If you were to take a PC game with adjustable settings that works across the full range of PC configs, you could create a couple of .ini files for a couple of fixed configuration boxes. That same engine could then use the same scaling for those two PC configurations, or analyse the game and adjust intelligently based on those two configurations. And of course, we just swap out 'PC configuration' with console, and we have the concept of automatically scaling engines. The engine is written for PC and consoles, with specific optimisations as appropriate, and adjusts the one game to the different platforms without requiring developer intervention.

No idea how practical that would actually be, but the concept seems sound.
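A hedged sketch of that ".ini per target" idea; the platform names, setting names and values are all invented for illustration:

```python
# Sketch of loading a per-target settings profile from ini-style config.
import configparser

BASE_INI = """
[render]
resolution_scale = 0.9
shadow_quality = medium
max_particles = 2000
"""

PRO_INI = """
[render]
resolution_scale = 1.0
shadow_quality = high
max_particles = 8000
"""

def load_profile(detected_platform: str) -> dict:
    """Pick a settings profile for the detected hardware target."""
    cfg = configparser.ConfigParser()
    cfg.read_string(PRO_INI if detected_platform == "console_pro" else BASE_INI)
    return dict(cfg["render"])

print(load_profile("console_base"))   # lower settings for the base box
print(load_profile("console_pro"))    # higher settings for the premium box
```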
 
Nope. Dynamic engines, where rendering effects can be scaled dynamically.

We even had a discussion, year(s) ago, on one such engine used by Forza.
 