Next Generation Hardware Speculation with a Technical Spin [pre E3 2019]

Note that the console version will likely run at much lower clocks because the total TDP is limited and it frankly makes more sense to burn the power in the GPU than it does to burn it in the CPU.

Even at lower clocks it's a massive step up from current consoles
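As a back-of-envelope on why the power goes to the GPU: dynamic power scales roughly with frequency times voltage squared, and voltage rises with frequency, so a modest CPU down-clock frees up disproportionate watts. A minimal sketch with made-up numbers (the 180 W budget, the 65 W CPU figure, and the cubic scaling are all illustrative assumptions, not leaked specs):

```python
# Illustrative SoC power-budget split; every figure here is an assumption.
def cpu_power(base_w, base_ghz, target_ghz):
    """Rough CPU power at target_ghz, assuming P ~ f^3 (f * V^2, with V rising with f)."""
    return base_w * (target_ghz / base_ghz) ** 3

SOC_BUDGET_W = 180.0                 # hypothetical total SoC budget
full = cpu_power(65.0, 3.2, 3.2)     # assume 8c/16t draws ~65 W at 3.2 GHz
down = cpu_power(65.0, 3.2, 2.4)     # same chip down-clocked to 2.4 GHz

print(f"CPU @ 3.2 GHz: ~{full:.0f} W, leaving {SOC_BUDGET_W - full:.0f} W for the GPU")
print(f"CPU @ 2.4 GHz: ~{down:.0f} W, leaving {SOC_BUDGET_W - down:.0f} W for the GPU")
# A 25% clock cut drops the CPU to ~27 W, freeing roughly 38 W for the GPU.
```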
 
Note that the console version will likely run at much lower clocks because the total TDP is limited and it frankly makes more sense to burn the power in the GPU than it does to burn it in the CPU.
Still, even at lower clock speeds, and as expected, I'm not sure what I'm more excited about. This CPU upgrade must be up there in the top 3 features I'm looking forward to the most. Sometimes I feel that my iPhone X has a better CPU than my PS4.
 

This Mark sounds just stupid or doesn't understand technology. Sounds like he thinks "more = better". 8c/16t Zen 2 Ryzen is more than plenty; dunno what would make him impressed, +24 cores because of his PC?

It is always annoying when a new gen is coming and people repeat shit like "nvidia flops are better / why don't they make it use a 1500€ GPU / lol my 3000€ PC is faster".

Imo PS5 sounds great, at least in the CPU department.
 
This Mark sounds just stupid or doesn't understand technology. Sounds like he thinks "more = better". 8c/16t Zen 2 Ryzen is more than plenty; dunno what would make him impressed, +24 cores because of his PC?

It is always annoying when a new gen is coming and people repeat shit like "nvidia flops are better / why don't they make it use a 1500€ GPU / lol my 3000€ PC is faster".

Imo PS5 sounds great, at least in the CPU department.
Ignoring the stupidity goes a long way to, you know, making it go away ;)
 
This Mark sounds just stupid or doesn't understand technology. Sounds like he thinks "more = better". 8c/16t Zen 2 Ryzen is more than plenty; dunno what would make him impressed, +24 cores because of his PC?

It is always annoying when a new gen is coming and people repeat shit like "nvidia flops are better / why don't they make it use a 1500€ GPU / lol my 3000€ PC is faster".

Imo PS5 sounds great, at least in the CPU department.

He's not, unless you think sebbi associates with idiots (and there are quite a few other programming luminaries amongst Mark Wayland's twitter followers). His perspective is probably a bit off, though, in that he's not considering the tradeoff that having more cores would require (less GPU power).

edit: Yeah, not stupid.
 
This Mark sounds just stupid or doesn't understand technology. Sounds like he thinks "more = better". 8c/16t Zen 2 Ryzen is more than plenty; dunno what would make him impressed, +24 cores because of his PC?

It is always annoying when a new gen is coming and people repeat shit like "nvidia flops are better / why don't they make it use a 1500€ GPU / lol my 3000€ PC is faster".

Imo PS5 sounds great, at least in the CPU department.
A large part of the consumer base is susceptible to core wars, MHz myth, etc. That’s why it’s still in the marketing. It works.
 
There are two other factors which are very interesting:

1. How much power will next-gen consoles draw? If PS5 targets a price above 400 dollars, can Sony design a more advanced cooling system so that PS5 can operate at 200W? Sony in particular is very good at cooling solutions for game consoles.

2. How many transistors will the GPU have? For example, if PS5 uses 10~12 billion transistors for its GPU and doesn't include a neural network processor, can PS5 spend those transistors on more ray tracing cores?
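On point 2, nobody outside AMD and Sony knows the per-block transistor costs, but the shape of the tradeoff can be sketched with invented numbers (every figure below is hypothetical):

```python
# Purely hypothetical transistor budgeting; none of these are real PS5 figures.
GPU_BUDGET = 11e9        # assume ~11 billion transistors for the whole GPU
NN_BLOCK   = 1.0e9       # guessed cost of a dedicated neural-network block
RT_CORE    = 0.05e9      # guessed cost per ray-tracing core

freed = NN_BLOCK                       # drop the NN block entirely...
extra_rt_cores = int(freed / RT_CORE)  # ...and spend the area on RT instead
print(f"Skipping the NN block frees {freed / 1e9:.1f}B transistors, "
      f"room for ~{extra_rt_cores} extra RT cores out of a {GPU_BUDGET / 1e9:.0f}B budget")
```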
 
The problem is that RT is a massively parallel problem by its nature, so a massively parallel compute structure (read: a GPU) is already well suited to be a hardware solution to it. Is it not hardware RT because those compute units do other things well? Can they only accelerate RT to be considered dedicated? What about ImgTec's solution that isn't full RT, does it count? What if GPU architecture evolves to where a generic compute unit does RT better? What's the magic threshold to call it a hardware solution?

We've already had this discussion earlier in this thread. As I linked here, even Microsoft defines DXR/RT as a purely compute workload that doesn't need/require specialized HW blocks. So the logic would be that HW RT is RT done on the GPU, while SW RT is when it's done on the CPU (for example Nvidia Iray running on CPUs via SSE2 instructions vs running on CUDA GPUs, or Radeon ProRender or Cycles running on the GPU vs running the same code on the CPU)... pretty straightforward... until people make up new conventions where, magically, HW RT is only when you have RT Cores/specialized HW blocks for RT... so HW RT is now only "possible" on Turing GPUs with RT Cores, apparently... Anyway... water is wet... or not... nobody knows anymore...

I agree entirely that it's the performance that matters (well, that and area and power, but the three are pretty closely related). And if a software solution using general purpose compute ends up being more flexible and adaptable (than for example hardware accelerated geometry intersection only) then clearly going more general could be better.

It's just the semantic argument about what constitutes the "hardware" prefix. It's almost always been used to refer to dedicated or specialised hardware. For example dedicated video decode / encode blocks are "hardware"; ROPs that can perform MSAA operations have hardware support (you can do exactly the same thing in a shader, all completely hidden behind the API); vector units on CPUs (you can perform vector operations just fine without them).

It's not magical, it's about whether the hardware has blocks or instructions or features designed specifically or at the very, very least primarily with supporting (normally meaning, 'accelerating' or 'saving power while doing') a particular thing.

In the context of graphics processing and GPUs, pixel shaders and compute are the bread and butter and general case. If you want to claim "hardware support" for a subsection of what they can do, I really think you need to be looking for dedicated hardware or modifications made specifically or primarily for that particular subsection of things that shader and compute capable processing units are already capable of doing.

But having dedicated hardware for a particular, definitive use case isn't always the best way to go, especially long term. If PS5 is only 'okay' at RT but 'brilliant' overall I don't think you can hold that against Cerny and Sony.

But yeah, it's semantics. I think we're all coming from the same place in terms of performance and flexibility.
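To make the "RT is a massively parallel compute workload" point concrete, here is a minimal ray-sphere intersection sketch (NumPy, purely illustrative). Every ray is independent arithmetic, which is exactly what CPU SIMD and GPU compute units already do well; dedicated "RT hardware" essentially means fixed-function versions of tests like this plus BVH traversal:

```python
import numpy as np

def ray_sphere_hits(origins, dirs, center, radius):
    """Per-ray sphere test: solve |o + t*d - c|^2 = r^2 for unit-length d."""
    oc = origins - center                       # (N, 3) vectors to sphere center
    b = np.einsum('ij,ij->i', dirs, oc)         # per-ray dot(d, oc)
    c = np.einsum('ij,ij->i', oc, oc) - radius ** 2
    disc = b * b - c                            # quadratic discriminant / 4
    hit = disc >= 0
    t = -b - np.sqrt(np.where(hit, disc, 0.0))  # nearest intersection distance
    return hit & (t >= 0)                       # hit, and in front of the origin

rng = np.random.default_rng(0)
dirs = rng.normal(size=(1_000_000, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)     # 1M random unit directions
origins = np.zeros_like(dirs)
hits = ray_sphere_hits(origins, dirs, np.array([0.0, 0.0, -5.0]), 1.0)
print(f"{hits.mean():.1%} of 1M rays hit the sphere")   # ~1.0%; every ray independent
```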
 
This Mark sounds just stupid or doesn't understand technology. Sounds like he thinks "more = better". 8c/16t Zen 2 Ryzen is more than plenty; dunno what would make him impressed, +24 cores because of his PC?

If you've never really thought about the cost and power considerations that consoles face, it's probably easy to look at the beast PC next to you and think "lulwut". Even if you're smart!
 
If you've never really thought about the cost and power considerations that consoles face, it's probably easy to look at the beast PC next to you and think "lulwut". Even if you're smart!

Some PC gamers tend to do that. But yes, testing something like a Threadripper 1950X in multiple configurations (i.e., 16c/32t, 12c/24t, and 8c/16t) doesn't yield any vast improvement in gaming performance at higher core/thread counts. It's only dropping down to something like 6c/12t where you may notice more CPU utilization and heavy spikes in games with lots of streaming assets and world physics (e.g., GTA V, Far Cry 5, Wildlands, etc.).
 
Some PC gamers tend to do that. But yes, testing something like a Threadripper 1950X in multiple configurations (i.e., 16c/32t, 12c/24t, and 8c/16t) doesn't yield any vast improvement in gaming performance at higher core/thread counts. It's only dropping down to something like 6c/12t where you may notice more CPU utilization and heavy spikes in games with lots of streaming assets and world physics (e.g., GTA V, Far Cry 5, Wildlands, etc.).

Does this reflect a limit to how much game engines can benefit from more multithreading, or a limit in the population of gamers who can benefit from additional threading beyond 6-8c/12-16t to justify the additional effort by developers? If PS5 had incorporated a 24-core CPU, I expect that 1st-party developers would have found ways to load up every one.
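Amdahl's law makes the diminishing-returns side concrete: whatever fraction of a frame is serial caps the speedup, no matter how many cores you add. A sketch with illustrative parallel fractions (not measurements from any real engine):

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n) for parallel fraction p, n cores.
def speedup(p, cores):
    return 1.0 / ((1.0 - p) + p / cores)

for p in (0.80, 0.95):   # hypothetical fraction of frame work that parallelizes
    s8, s24 = speedup(p, 8), speedup(p, 24)
    print(f"parallel={p:.0%}: 8 cores -> {s8:.1f}x, 24 cores -> {s24:.1f}x "
          f"({s24 / s8:.2f}x gain for 3x the cores)")
```

Even at a generous 95% parallel fraction, tripling the cores from 8 to 24 less than doubles throughput, which is roughly the pattern the Threadripper tests above show.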
 
Just a reminder about the awesomeness of Zen 2: a few-months-old benchmark of a test-sample Ryzen 3000 8C/16T [the same that will end up in PS5] vs a stock Intel i9 9900K.

Just one benchmark. I think we will have to wait for the reviews to see how it performs. It's not that far from Zen+ in IPC; Zen 2 is mainly higher clocked, but in PS5 it won't run at the 5GHz it is supposed to run at.

If PS5 had incorporated a 24-core CPU, I expect that 1st-party developers would have found ways to load up every one.

Probably they could. More cores don't hurt performance if running at the same speed. The reason the PS5 won't have more than 8 cores is probably cost, TDP, etc.
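For scale, even a down-clocked Zen 2 is a huge jump over the consoles' Jaguar cores. A rough perf ≈ IPC × clock × cores sketch (the IPC ratio is a ballpark guess for illustration, not a benchmark):

```python
# Rough generational comparison; the 2.2x IPC figure is a guess, not a benchmark.
JAGUAR = {"ipc": 1.0, "ghz": 1.6, "cores": 8}  # PS4 baseline, IPC normalized to 1
ZEN2   = {"ipc": 2.2, "ghz": 3.2, "cores": 8}  # assumed console clocks, guessed IPC

def throughput(cpu):
    return cpu["ipc"] * cpu["ghz"] * cpu["cores"]

gain = throughput(ZEN2) / throughput(JAGUAR)
print(f"~{gain:.1f}x multithreaded CPU throughput, before counting SMT")  # ~4.4x
```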
 
Does this reflect a limit to how much game engines can benefit from more multithreading, or a limit in the population of gamers who can benefit from additional threading beyond 6-8c/12-16t to justify the additional effort by developers?

I think the majority of the limitation on higher core/thread usage within the PC space has more to do with game developers limiting optimizations to smaller core/thread counts (i.e., 4c/8t, 6c/12t, and 8c/16t), because the vast majority of gaming PCs use average to mid-range hardware configurations. If and when the PC space becomes largely saturated with core/thread counts beyond 8c/16t, then utilization of such CPUs will make more sense.

If PS5 had incorporated a 24-core CPU, I expect that 1st-party developers would have found ways to load up every one.

True. 100% agree.
 
People have just assumed that what was reported means it has to be a lot more expensive than they previously thought it was.

Apart from the SSD, what has really changed in terms of specs and capabilities?
Even for the SSD, lots of people expected some level of support, not just an HDD.
8K - HDMI 2.1 support.
RTRT - unclear what this actually equates to; it may just be support within Navi. An example could be the Crytek demo using compute but running better than Vega.

The article seems to have reset a lot of people's price baseline, which could be the biggest positive for Sony.
Where lots of people thought $399 was the only price point for PS5, they now suddenly think $499.
 
Lots of stuff to like here. The only things I was hoping for were a higher CPU core count and 32 gigs of RAM. But it checked off every box I could want otherwise. The rumored HBM RAM for the GPU is a nice surprise if true. We can get a real boost to total bandwidth that way.
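On the bandwidth point: HBM gets its boost mostly from bus width rather than per-pin speed. A quick sketch with typical-for-2019 part figures (not PS5 specs):

```python
# Peak memory bandwidth = per-pin rate * bus width / 8 bits per byte.
def bandwidth_gbs(pin_gbps, bus_bits):
    return pin_gbps * bus_bits / 8

gddr6 = bandwidth_gbs(14.0, 256)        # 14 Gbps GDDR6 on a 256-bit bus
hbm2  = bandwidth_gbs(2.0, 1024) * 2    # two HBM2 stacks, 1024-bit bus each
print(f"GDDR6 256-bit: {gddr6:.0f} GB/s vs 2x HBM2 stacks: {hbm2:.0f} GB/s")
```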
 
Lots of stuff to like here. The only things I was hoping for were a higher CPU core count and 32 gigs of RAM. But it checked off every box I could want otherwise. The rumored HBM RAM for the GPU is a nice surprise if true. We can get a real boost to total bandwidth that way.
A higher-core-count CPU would have necessitated a second chiplet and blown the SoC budget completely.
 
Lots of stuff to like here. The only things I was hoping for were a higher CPU core count and 32 gigs of RAM. But it checked off every box I could want otherwise. The rumored HBM RAM for the GPU is a nice surprise if true. We can get a real boost to total bandwidth that way.

8c/16t @ 3-3.2GHz is plenty for console gaming. Something like the current Ryzen 7 2700 or even 1700 can run [alpha] game monsters like Star Citizen extremely well. As long as Sony and Microsoft provide a decent amount of system memory and GPU performance, CPU performance shouldn't be too much of an issue this time around.

Honestly, I'm still quite impressed by the performance being milked from the Jaguar.
 