Next Generation Hardware Speculation with a Technical Spin [2018]

Let's not turn this discussion into whether XB1X is next-gen or not. ;)
fair enough!
I don't want to merry-go-round this one in the wrong direction.
I bring it up not because I think it necessarily is, but because of the challenge of predicting (or even developing for) next gen: what's the target, given what we have today? I guess that's where I'm trying to go with the discussion.

It's the first time we've witnessed a mid-generation refresh, and it's having a negative impact on our ability to predict the next hardware and its timing.
 
I'd take next gen as a generation advance on what this generation launched at in terms of general power, not features. If you compare how feature-rich last-gen consoles were at the end versus at the beginning, they could be considered similarly generationally advanced. So 2-3x more power is not next-gen, regardless of whatever niceties appear. Maybe we'd be looking at ~8x XB1's TFs, so ~10 TFs?
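
Quick back-of-the-envelope on that 8x figure (XB1's 768 shaders @ 853 MHz are public numbers; the 8x multiplier is just this thread's speculation):

```cpp
#include <cstdio>

// Rough FLOPS math: shader ALUs * 2 ops (FMA) * clock.
int main() {
    double xb1_tflops = 768 * 2 * 0.853e9 / 1e12; // ~1.31 TF
    double target = 8.0 * xb1_tflops;             // ~10.5 TF
    std::printf("XB1: %.2f TF, 8x target: %.1f TF\n", xb1_tflops, target);
}
```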
 
I'd take next gen as a generation advance on what this generation launched at in terms of general power,
Yes, we can say with certainty (Nintendo aside) that next gen's CPU will be a lot more powerful. Games designed for next-gen consoles will need sacrifices to run on current gen; running at a lower resolution doesn't help at all if your CPU is too weak.
 
Yes, we can say with certainty (Nintendo aside) that next gen's CPU will be a lot more powerful. Games designed for next-gen consoles will need sacrifices to run on current gen; running at a lower resolution doesn't help at all if your CPU is too weak.

Like what?
 
If it's CPU power we expect to increase in a meaningful way, then big changes need to happen; as things stand, it's a lot easier to get a big generational increase from GPU to GPU than from CPU to CPU.
Look at the history of the PC architecture we're borrowing from.

Cell was supposed to take us there when it was released... so are we looking at an 8-core CPU? 12 cores? Or will an efficient quad core do the trick for cost per performance and energy spent?

Pretty hard to be radical when the price point is so low.
 
Like what?
Everything that a CPU does: fewer objects in the world, worse physics, worse AI, worse framerate (framerate is very CPU-dependent: 16 ms vs 33 ms per frame).

@borntosoul The Xbox One X CPU is only running at 2.3 GHz; it's a very low bar. I'm sure everyone agrees the current gen's CPUs were weak even when they launched.
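
To put numbers on the 16 ms vs 33 ms point, a trivial frame-budget sketch (the 20 ms CPU cost is made up for illustration):

```cpp
#include <cstdio>

// The CPU must finish its per-frame work inside the budget, or the frame
// rate effectively halves on a vsync'd console.
int main() {
    double cpu_ms = 20.0;            // hypothetical CPU cost per frame
    double budget60 = 1000.0 / 60.0; // ~16.7 ms
    double budget30 = 1000.0 / 30.0; // ~33.3 ms
    std::printf("60fps budget %.1f ms, 30fps budget %.1f ms\n", budget60, budget30);
    std::printf("20 ms of CPU work -> %s\n",
                cpu_ms <= budget60 ? "60fps ok" : "drops to 30fps");
}
```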
 
Everything that a CPU does: fewer objects in the world, worse physics, worse AI, worse framerate (framerate is very CPU-dependent: 16 ms vs 33 ms per frame).

@borntosoul The Xbox One X CPU is only running at 2.3 GHz; it's a very low bar. I'm sure everyone agrees the current gen's CPUs were weak even when they launched.
I think we can all agree that they're weak, but that's a good thing, because it means as much performance as possible is muscled out of the limited die space available. We don't want a situation where the CPU spends its cycles assisting the graphics chip like the PS3 did; there are more efficient uses of the silicon.

Here's what some CPU profiles look like. Green is the render code, which could be entirely offloaded to the GPU, but we don't do that today.

[Image: Opt2-ProfilerTimeline.png — CPU profiler timeline]


You can see the same trend here in video format:

The render block just eats up tons of CPU. It's not real CPU work: it's not adding to the gameplay, it's not changing the physics.
The frame rate is limited by the fact that the CPU can't keep up with ever more complex scenes. But in the future we expect all games to go the route of GPU-side dispatching, with a dramatic drop in CPU load, so why have a larger CPU than we realistically need? While I agree next gen will be Zen, I don't want it to be any larger than it needs to be.

If people keep asking for 60 fps games, or even 120 fps games, developers can find a way to do it. They just don't want to right now; it's nowhere near a priority for them, though for some games (like twitch FPS titles) perhaps it will be.
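
For the curious, a rough sketch of what GPU-side dispatching looks like from the CPU's end. This isn't any particular engine's code; the struct mirrors GL's DrawElementsIndirectCommand layout, and the single indirect call stands in for glMultiDrawElementsIndirect (GL 4.3+) or ExecuteIndirect (D3D12):

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Instead of one draw call per object (the green render block above), the CPU
// encodes every draw into a buffer once; the GPU can then cull/LOD the entries
// with a compute shader and consume them all in one indirect call.
struct DrawIndirectCmd {
    uint32_t indexCount, instanceCount, firstIndex, baseVertex, baseInstance;
};

int main() {
    std::vector<DrawIndirectCmd> cmds;
    for (uint32_t obj = 0; obj < 10000; ++obj)
        cmds.push_back({36, 1, 0, 0, obj}); // 36 indices = one cube per object
    // Upload `cmds` to a GPU buffer here; one multi-draw-indirect call then
    // replaces 10000 CPU-side draw calls, which is where the CPU time goes.
    std::printf("%zu draws encoded in one buffer\n", cmds.size());
}
```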
 
From a more general point of view: next time I want a better non-gaming experience, decent multitasking, master media control, faster bootup, maybe PiP again, etc.
 
Faster loading times too / better streaming. I mean, yes, most of the time I guess the hard drive is the bottleneck, but in some cases the CPU is (decompression / compilation, I guess?).
On PC, some time ago when I moved my games from an HDD to an SSD, the surprise was seeing the increased CPU usage during loading.

I think they will go 4c/8t, or 6c/12t max. Maybe use an 8-core chip but favour yields? Ryzen is so much more powerful than what they have now... It would be a big upgrade anyway, and it won't kill the price.
 
On PC, some time ago when I moved my games from an HDD to an SSD, the surprise was seeing the increased CPU usage during loading.
When you load data you want to process it, e.g. decompression and decryption. If you are loading faster, you need to decompress faster, so yes, CPU usage will increase. The CPU is only a bottleneck if you hit 100% utilisation; as long as CPU utilisation is below 100% while maxing out the drive, it's not the bottleneck.
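
A minimal sketch of that pipeline (disk I/O and the codec are faked; the point is just that a faster producer means more decompression work per second for the consumer thread):

```cpp
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Why a faster drive raises CPU load: reads and decompression are pipelined,
// so the faster chunks arrive, the more decompression the CPU does per second.
// decompress() is a stand-in for a real codec (zlib, LZ4, ...).
std::queue<std::vector<char>> pending;
std::mutex m;
std::condition_variable cv;
bool done = false;

std::vector<char> decompress(const std::vector<char>& in) {
    return std::vector<char>(in.size() * 2); // pretend a 2:1 ratio
}

int main() {
    std::thread reader([] {                    // stand-in for disk reads
        for (int i = 0; i < 100; ++i) {
            std::vector<char> chunk(64 * 1024);
            { std::lock_guard<std::mutex> lk(m); pending.push(std::move(chunk)); }
            cv.notify_one();
        }
        { std::lock_guard<std::mutex> lk(m); done = true; }
        cv.notify_one();
    });
    size_t bytes = 0;
    for (;;) {                                 // the CPU-side decompress loop
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [] { return !pending.empty() || done; });
        if (pending.empty() && done) break;
        std::vector<char> chunk = std::move(pending.front());
        pending.pop();
        lk.unlock();
        bytes += decompress(chunk).size();
    }
    reader.join();
    std::printf("decompressed %zu bytes\n", bytes);
}
```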
 
Moar ram! Moar cpu! Maybe a GW option for using almost all the resources for games, but if I pay 400€+ I want a media experience better than my 39€ android tv box
 
Moar ram! Moar cpu! Maybe a GW option for using almost all the resources for games, but if I pay 400€+ I want a media experience better than my 39€ android tv box
Devkit that's not a devkit? :p (double RAM, cherry-picked APU with all units enabled)

:V

:runaway:
 
I was thinking of a software option, not a different SKU.
Maybe an option in the game: "select this for 60fps, but you will not be able to stream music from Groove or go back to watch Dirk Gently 3 on the Syfy channel"
 
I think we can all agree that they're weak ... so why have a larger CPU than we realistically need? While I agree next gen will be Zen, I don't want it to be any larger than it needs to be.

AMD is hard at work optimizing CPU and GPU architectures to be more efficient and better utilized for gaming workloads.

Pipeline including separate hardware data paths for different instruction types


And it appears AMD is finally tackling transactional memory (Intel TSX or equivalent)
Processor support for hardware transactional memory
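
For anyone unfamiliar, here's roughly what transactional memory looks like from the software side, using Intel's RTM intrinsics as the stand-in (whatever AMD ships per the patent may expose different details):

```cpp
#include <immintrin.h> // RTM intrinsics; build with -mrtm on a TSX-capable CPU
#include <atomic>
#include <cstdio>

// Attempt the update as a hardware transaction; fall back if it aborts.
std::atomic<long> counter{0};

void increment() {
    unsigned status = _xbegin();
    if (status == _XBEGIN_STARTED) {
        // Inside the transaction: this load+store pair commits atomically.
        long v = counter.load(std::memory_order_relaxed);
        counter.store(v + 1, std::memory_order_relaxed);
        _xend();
    } else {
        counter.fetch_add(1); // fallback path when the transaction aborts
    }
}

int main() {
    for (int i = 0; i < 1000; ++i) increment();
    std::printf("%ld\n", counter.load());
}
```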

Method and system for yield operation supporting thread-like behavior

The disclosed techniques can be used in processor architectures such as, but not limited to, vector processors, SIMD processors, and processors including scalar and vector units. The disclosed techniques yield substantial improvements in processing efficiency and flexibility in programming. In particular, the disclosed technique allows the execution of multiple instruction multiple data (MIMD) style applications on SIMD processors.
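
A toy illustration of the MIMD-on-SIMD idea: every lane evaluates both sides of a branch, and a per-lane mask keeps the live result. Lane count and values here are arbitrary:

```cpp
#include <cstdio>

// How SIMD hardware emulates divergent, "thread-like" control flow:
// both branch sides execute, and predication selects per lane.
constexpr int LANES = 8;

int main() {
    int x[LANES] = {1, -2, 3, -4, 5, -6, 7, -8};
    for (int i = 0; i < LANES; ++i) {
        bool mask = x[i] < 0;            // per-lane predicate
        int thenVal = -x[i];             // "if" side, computed by every lane
        int elseVal = x[i] * 2;          // "else" side, also computed by every lane
        x[i] = mask ? thenVal : elseVal; // the mask picks the live result
    }
    for (int i = 0; i < LANES; ++i) std::printf("%d ", x[i]);
    std::printf("\n");
}
```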

Heterogeneous parallel primitives programming model

https://patents.google.com/patent/US20180060124A1/

  • Further, conventional programming models fail to utilize braided parallelism. Braided parallelism is a combination of data parallelism and task parallelism. Conventional programming models, such as OpenCL and CUDA implement data parallelism. However in addition to data parallelism, task parallelism can also be implemented in a heterogeneous computing platform, as described below.
  • [0026]
    For example, a game engine that implements a heterogeneous computing platform displays many types of parallelism. It includes parallel AI tasks, concurrent workitems for user interfaces, and massive data-parallel particle simulations, to name a few examples. However, even when the components in the game engine exhibit parallelism, the video engine fails to exhibit parallelism in its entirety. In fact, the entire video engine is not parallel as many of its tasks are generated dynamically.
  • [0027]
    A need for implementing task-graph executions on a GPU is shown by existence of persistent threads. Persistent threads may be used for building scheduling systems within threads and thus circumventing the hardware scheduler. This approach is commonly used to reduce overhead that arises from massively parallel data executions. Persistent threads, however, also demonstrate a need and limitation in conventional programming models for implementing braided parallelism.
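
A toy version of that braided-parallelism idea using plain std::async: independent engine tasks run concurrently (task parallelism), and the heavy one is itself split across workers (data parallelism). The "AI" and "particles" jobs are my placeholders, not from the patent:

```cpp
#include <cstdio>
#include <future>
#include <vector>

int main() {
    auto ai = std::async(std::launch::async, [] { // task: AI tick
        return 42; // stand-in for a pathfinding/AI job
    });
    std::vector<float> particles(1 << 20, 1.0f);
    size_t half = particles.size() / 2;
    auto lower = std::async(std::launch::async, [&] { // data-parallel halves
        for (size_t i = 0; i < half; ++i) particles[i] *= 0.99f;
    });
    for (size_t i = half; i < particles.size(); ++i) particles[i] *= 0.99f;
    lower.get();
    std::printf("ai=%d particle[0]=%.2f\n", ai.get(), particles[0]);
}
```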

System and method for dynamically allocating resources among gpu shaders

https://patents.google.com/patent/WO2018075529A1

The GPU then calculates a resource allocation for the workload based on the performance characteristics, and stores the resource allocation. In response to subsequently receiving a previously stored graphics workload with the given identifier, the GPU retrieves the stored resource allocation for the graphics workload, and applies the resource allocation for processing the graphics workload. By applying the stored resource allocation, the GPU reduces processing bottlenecks and improves overall processing efficiency of the processor.
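
That reads like a memoized allocator keyed on a workload ID: profile a workload once, store the allocation, reuse it on repeat submissions. A trivial sketch (the Allocation fields and the profile-once values are invented placeholders, not from the patent):

```cpp
#include <cstdint>
#include <cstdio>
#include <unordered_map>

struct Allocation { int waves; int vgprs; int ldsBytes; };

std::unordered_map<uint64_t, Allocation> cache;

Allocation allocate(uint64_t workloadId) {
    auto it = cache.find(workloadId);
    if (it != cache.end()) return it->second;  // seen before: reuse allocation
    Allocation a{8, 64, 4096};                 // stand-in for "profile it once"
    cache.emplace(workloadId, a);
    return a;
}

int main() {
    allocate(0xBEEF);                // first submission: measured and stored
    Allocation a = allocate(0xBEEF); // repeat submission: cache hit
    std::printf("waves=%d vgprs=%d lds=%d\n", a.waves, a.vgprs, a.ldsBytes);
}
```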
 
I think we can all agree that they're weak ... so why have a larger CPU than we realistically need? While I agree next gen will be Zen, I don't want it to be any larger than it needs to be.

Dat Xbox command processor... :cry::?:
 
AMD's Computex show was last night. Nothing impacting console APUs was shown, other than the roadmaps still being on schedule.

7nm CPUs are in the lab and looking good, but we already knew they were sampling, so it's kind of an empty statement.

https://www.anandtech.com/show/12909/amd-computex-2018-press-event-a-live-blog-10am-taiwan-2am-utc

Edit: 7nm Vega for AI/ML is now shipping this year instead of just sampling. Good sign for 7nm https://www.anandtech.com/show/12910/amd-demos-7nm-vega-radeon-instinct-shipping-2018
 