Let's not turn this discussion into whether XB1X is next-gen or not.
> Let's not turn this discussion into whether XB1X is next-gen or not.

Fair enough!
> I'd take next gen as a generation advance on what this generation launched at in terms of general power,

Yes, we can say with certainty (Nintendo aside) that next gen's CPU will be a lot more powerful. Games designed for next-gen consoles will need sacrifices made to run on current gen; running at a lower resolution doesn't help at all if your CPU is too weak.
> Like what?

Everything that a CPU does: fewer objects in the world, worse physics, worse AI, worse framerate (framerate is very CPU-dependent; 16 ms vs 33 ms per frame, i.e. 60 fps vs 30 fps).
@borntosoul The Xbox One X CPU is only running at 2.3 GHz; it's a very low bar. I'm sure everyone agrees the current gen's CPUs were weak even when they released.
> On PC, some time ago when I moved my games from HDD to SSD, the surprise was to see the increased CPU usage during loading.

After loading the data you want to process it, e.g. decompression and decryption. If you are loading faster, you need to decompress faster, so yes, CPU usage will increase. The CPU is only a bottleneck if you hit 100% utilisation; as long as CPU utilisation stays below 100% while maxing out the drive, it's not a bottleneck.
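To put a rough number on the decompression point, here's a minimal, self-contained C++ sketch (assuming zlib is available; the buffer sizes and data are made up for illustration). It builds a fake compressed "asset" in memory and then times how long one core takes to inflate it — exactly the CPU work that has to keep pace with the drive during loading.

```cpp
// Toy illustration: decompression is pure CPU work, and it has to keep up
// with however fast the drive can deliver compressed bytes.
// Build (assumption): g++ demo.cpp -O2 -lz
#include <zlib.h>
#include <chrono>
#include <cstdio>
#include <vector>

int main() {
    // Fake "game asset": 64 MiB of mildly repetitive data (illustrative only).
    std::vector<unsigned char> asset(64 * 1024 * 1024);
    for (size_t i = 0; i < asset.size(); ++i) asset[i] = (unsigned char)(i % 251);

    // Compress it once so there is something to decompress, like data on disk.
    uLongf compressedSize = compressBound(asset.size());
    std::vector<unsigned char> compressed(compressedSize);
    compress(compressed.data(), &compressedSize, asset.data(), asset.size());

    // Time the decompression -- this is the CPU cost that scales with read speed.
    std::vector<unsigned char> out(asset.size());
    uLongf outSize = out.size();
    auto t0 = std::chrono::steady_clock::now();
    uncompress(out.data(), &outSize, compressed.data(), compressedSize);
    auto t1 = std::chrono::steady_clock::now();

    double seconds = std::chrono::duration<double>(t1 - t0).count();
    double mib = outSize / (1024.0 * 1024.0);
    std::printf("Decompressed %.0f MiB in %.3f s (%.0f MiB/s on one core)\n",
                mib, seconds, mib / seconds);
    // If the drive can deliver compressed data faster than this rate,
    // decompression (the CPU) becomes the limiting factor.
    return 0;
}
```

A real loader would overlap reads with decompression across several threads, but even the one-core number shows why faster I/O pulls CPU usage up with it.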
> Moar ram! Moar cpu! Maybe a GW option for using almost all the resources for games, but if I pay 400€+ I want a media experience better than my 39€ Android TV box.

A devkit that's not a devkit? (Double RAM, cherry-picked APU that has all units enabled.)
I think we can all agree that they're weak, but that's good, because it lets them muscle out as much performance as possible from the limited die space that is available. We don't want to be in a situation where the CPU is spending its cycles assisting the graphics chip like the PS3 did; keeping the CPU lean is a more efficient use of the silicon.
Here's what some CPU graphs look like: green is the render code, which could be entirely offloaded to the GPU, but we don't do that today.
You can see the same trend here in video format:
The render block just eats up tons of CPU time, and it isn't really the CPU's job: it's not adding to the gameplay, it's not changing the physics.
The frame rate is limited by the fact that the CPU can't keep up with ever more complex scenes. But in the future we expect games to go the route of GPU-side dispatching, with a dramatic drop in CPU load as a result, so why have a larger CPU than we realistically need? While I agree next gen will be Zen, I don't want it to be any larger than it needs to be.
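To illustrate what GPU-side dispatching buys you on the CPU, here's a small self-contained C++ sketch. The `CommandList`, `DrawArgs` and method names are hypothetical stand-ins I made up for this post, not any real API; the real-world equivalents are things like D3D12's ExecuteIndirect or Vulkan's vkCmdDrawIndexedIndirectCount. The only point is that CPU-driven submission touches every object every frame, while GPU-driven submission records a couple of calls and lets the GPU cull and generate the draw arguments.

```cpp
// Hypothetical, simplified command-list interface used only for this sketch.
#include <cstdio>
#include <vector>

struct DrawArgs { int meshId; int instanceCount; };

struct CommandList {
    long cpuCalls = 0;                        // how many times the CPU had to talk to the API
    void drawIndexed(const DrawArgs&)         { ++cpuCalls; }  // one CPU call per object
    void dispatchCullingCompute(int)          { ++cpuCalls; }  // GPU does the per-object work
    void executeIndirect(int /*maxDraws*/)    { ++cpuCalls; }  // GPU consumes draw args it wrote itself
};

int main() {
    const int numObjects = 100000;            // a reasonably busy scene
    std::vector<DrawArgs> scene(numObjects, DrawArgs{0, 1});

    // CPU-driven rendering: the render thread culls and submits every object, every frame.
    CommandList cpuDriven;
    for (const DrawArgs& obj : scene) {
        // (frustum/occlusion culling would also happen here, on the CPU)
        cpuDriven.drawIndexed(obj);
    }

    // GPU-driven rendering: the CPU records a tiny, constant amount of work;
    // a compute pass culls the scene and writes the draw arguments on the GPU.
    CommandList gpuDriven;
    gpuDriven.dispatchCullingCompute(numObjects);
    gpuDriven.executeIndirect(numObjects);

    std::printf("CPU-driven submission: %ld CPU->API calls per frame\n", cpuDriven.cpuCalls);
    std::printf("GPU-driven submission: %ld CPU->API calls per frame\n", gpuDriven.cpuCalls);
    return 0;
}
```

It's a cost model, not a renderer, but it shows why the green "render" block in those CPU graphs can mostly disappear once draw generation moves to the GPU.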
If people keep asking for 60fps games, or even 120fps games, developers can find a way to do it. They just don't want to do it right now; it's nowhere near a priority for them, but for some games (like twitch FPS titles) perhaps they will.
The disclosed techniques can be used in processor architectures such as, but not limited to, vector processors, SIMD processors, and processors including scalar and vector units. The disclosed techniques yield substantial improvements in processing efficiency and flexibility in programming. In particular, the disclosed techniques allow the execution of multiple instruction multiple data (MIMD) style applications on SIMD processors.
Further, conventional programming models fail to utilize braided parallelism. Braided parallelism is a combination of data parallelism and task parallelism. Conventional programming models such as OpenCL and CUDA implement data parallelism; however, in addition to data parallelism, task parallelism can also be implemented on a heterogeneous computing platform, as described below.
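As a rough illustration of what "braided" parallelism means (my own example, not from the patent text): a few independent tasks run concurrently (task parallelism), and one of those tasks is itself a data-parallel loop over a large buffer.

```cpp
// Braided parallelism sketch: task parallelism (independent async tasks)
// combined with data parallelism (a chunked parallel loop inside one task).
#include <algorithm>
#include <cstdio>
#include <future>
#include <thread>
#include <vector>

// A data-parallel job: update a big particle buffer by splitting it across threads.
void updateParticles(std::vector<float>& particles) {
    unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    size_t chunk = particles.size() / workers + 1;
    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w) {
        pool.emplace_back([&, w] {
            size_t begin = w * chunk;
            size_t end = std::min(particles.size(), begin + chunk);
            for (size_t i = begin; i < end; ++i) particles[i] += 0.016f;  // fake integration step
        });
    }
    for (auto& t : pool) t.join();
}

int main() {
    std::vector<float> particles(1000000, 0.0f);

    // Task parallelism: AI, UI and particles are independent tasks in the same frame.
    auto ai            = std::async(std::launch::async, [] { return 42; });  // fake AI decision
    auto ui            = std::async(std::launch::async, [] { return 7; });   // fake UI work
    auto particlesDone = std::async(std::launch::async, [&] { updateParticles(particles); });

    particlesDone.get();
    std::printf("AI=%d UI=%d particle[0]=%.3f\n", ai.get(), ui.get(), particles[0]);
    return 0;
}
```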
For example, a game engine that runs on a heterogeneous computing platform exhibits many types of parallelism: parallel AI tasks, concurrent work-items for user interfaces, and massive data-parallel particle simulations, to name a few. However, even when the components of the game engine exhibit parallelism, the engine does not exhibit parallelism in its entirety; it is not entirely parallel because many of its tasks are generated dynamically.
A need for implementing task-graph execution on a GPU is shown by the existence of persistent threads. Persistent threads may be used to build scheduling systems within the threads themselves, thus circumventing the hardware scheduler. This approach is commonly used to reduce the overhead that arises from massively parallel data execution. Persistent threads, however, also demonstrate both a need for, and a limitation of, conventional programming models when it comes to implementing braided parallelism.
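A CPU-side analogue of the persistent-threads pattern (my sketch, not from the patent): instead of launching one thread per work item and relying on the system scheduler, a fixed set of long-lived workers pulls items off a shared work counter until the queue is drained.

```cpp
// Persistent-worker sketch: a fixed pool of long-lived threads pulls work items
// from a shared atomic counter instead of spawning one thread per item.
#include <algorithm>
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    const int numItems = 1000000;
    std::vector<float> results(numItems, 0.0f);
    std::atomic<int> nextItem{0};            // the "software scheduler": a shared work index

    auto worker = [&] {
        // Each worker stays alive and keeps grabbing items until none are left.
        for (int i = nextItem.fetch_add(1); i < numItems; i = nextItem.fetch_add(1)) {
            results[i] = static_cast<float>(i) * 0.5f;   // stand-in for the real task
        }
    };

    unsigned numWorkers = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;
    for (unsigned w = 0; w < numWorkers; ++w) pool.emplace_back(worker);
    for (auto& t : pool) t.join();

    std::printf("processed %d items with %u persistent workers\n", numItems, numWorkers);
    return 0;
}
```

On a GPU the same idea keeps workgroups resident and has them pop work, possibly dynamically generated work, from a queue in memory — exactly the kind of self-scheduling the excerpt says conventional models don't express well.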
The GPU then calculates a resource allocation for the workload based on the performance characteristics, and stores the resource allocation. In response to subsequently receiving a graphics workload with the given identifier, the GPU retrieves the stored resource allocation for that workload and applies it when processing the workload. By applying the stored resource allocation, the GPU reduces processing bottlenecks and improves the overall processing efficiency of the processor.
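Read literally, that's a memoisation scheme keyed by a workload identifier. A rough C++ sketch of the idea follows; names like `ResourceAllocation`, `computeAllocation` and the heuristic numbers are mine for illustration, not from the patent.

```cpp
// Sketch of caching a per-workload resource allocation keyed by an identifier,
// loosely following the description above. All names and values are illustrative.
#include <cstdint>
#include <cstdio>
#include <unordered_map>

struct ResourceAllocation {
    int computeUnits;       // how many CUs to dedicate to the workload
    int waveSlots;          // how many wavefront slots to reserve
};

struct PerfCharacteristics {
    double occupancy;       // measured while first running the workload
    double memBandwidthGBs;
};

// Derive an allocation from measured characteristics (placeholder heuristic).
ResourceAllocation computeAllocation(const PerfCharacteristics& perf) {
    int cus   = perf.occupancy > 0.5 ? 32 : 16;
    int slots = perf.memBandwidthGBs > 200.0 ? 8 : 4;
    return {cus, slots};
}

class AllocationCache {
public:
    // First time an identifier is seen: compute and store the allocation.
    // Subsequent times: reuse the stored allocation without recomputing it.
    ResourceAllocation getOrCompute(uint64_t workloadId, const PerfCharacteristics& perf) {
        auto it = cache_.find(workloadId);
        if (it != cache_.end()) return it->second;        // reuse stored allocation
        ResourceAllocation alloc = computeAllocation(perf);
        cache_.emplace(workloadId, alloc);
        return alloc;
    }
private:
    std::unordered_map<uint64_t, ResourceAllocation> cache_;
};

int main() {
    AllocationCache cache;
    PerfCharacteristics measured{0.7, 250.0};
    ResourceAllocation first  = cache.getOrCompute(0xABCD, measured);  // computed and stored
    ResourceAllocation second = cache.getOrCompute(0xABCD, measured);  // retrieved from the cache
    std::printf("CUs=%d waveSlots=%d (same both times: %d)\n",
                first.computeUnits, first.waveSlots,
                first.computeUnits == second.computeUnits);
    return 0;
}
```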
> Dat Xbox command processor...

Yeah, but also Nvidia's.