Baseless Next Generation Rumors with no Technical Merits [post E3 2019, pre GDC 2020] [XBSX, PS5]

Any publicity is good publicity.
If he wanted to ride that train, he could have been positive. 12 TFs is over 8x the XB1, which is a standard generational advance, and that's with Moore's law actually slowing down, as the entire semiconductor industry knows.

[image: chart of rising per-node chip costs, illustrating the Moore's law slowdown]


Pitchford's comment is asinine. It simply adds to his very negative reputation.
 
If he'd been positive, he'd be a drop in the ocean. He'd also be reasonable. But I can imagine he's gotten a lot more people talking about him by being obnoxious.

Don't get me wrong, I think he's being a twat, but I can see why he's being a twat.
 
If he wanted to ride that train, he could have been positive. 12 TFs is over 8x the XB1, which is a standard generational advance, and that's with Moore's law actually slowing down, as the entire semiconductor industry knows.

[image: chart of rising per-node chip costs, illustrating the Moore's law slowdown]


Pitchford's comment is asinine. It simply adds to his very negative reputation.
While Moore's law is in trouble, it's not nearly as bad as suggested by that graph. 7nm wafer cost is below $10,000 and dropping. Graphs such as the one you quote typically include assumptions about design costs and production volumes that, while obviously relevant, differ greatly in their impact depending on the specific part and volume. Wafer costs have actually improved at a very nice pace, faster than most analysts anticipated.
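To put a sub-$10,000 wafer in per-chip terms, here's a back-of-the-envelope sketch; the die size, defect density and Poisson yield model are illustrative assumptions, not figures for any actual console APU:

```python
# Back-of-the-envelope: wafer cost -> cost per good die.
# Die size, defect density and the Poisson yield model are assumptions.
import math

wafer_cost = 10_000      # USD, the ~7nm wafer price mentioned above
wafer_diameter = 300     # mm
die_area = 360           # mm^2, assumed console-class APU
defect_density = 0.1     # defects per cm^2, assumed

# Standard gross-die estimate (wafer area / die area, minus edge loss)
wafer_area = math.pi * (wafer_diameter / 2) ** 2
gross_dies = wafer_area / die_area - math.pi * wafer_diameter / math.sqrt(2 * die_area)

# Simple Poisson yield: fraction of dies with zero defects
yield_fraction = math.exp(-defect_density * die_area / 100)

good_dies = gross_dies * yield_fraction
print(f"~{gross_dies:.0f} gross dies, {yield_fraction:.0%} yield, "
      f"~${wafer_cost / good_dies:.0f} per good die")
```

With those made-up inputs it lands around $90 per good die, which is why the wafer price alone doesn't tell the whole Moore's law story; design cost and volume swing the economics a lot.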
 
It's a bit sneaky/forgotten, but there's main memory in a desktop, for certain motherboards handling mixed DIMMs. :p

My current desktop is running a mix of 2x 16 GB DIMMs and 2x 8 GB DIMMs for a total of 48 GB. :) I've been mixing and matching memory amounts ever since I started building my own PCs back in the '90s.

Regards,
SB
 
While Moore's law is in trouble, it's not nearly as bad as suggested by that graph. 7nm wafer cost is below $10,000 and dropping. Graphs such as the one you quote typically include assumptions about design costs and production volumes that, while obviously relevant, differ greatly in their impact depending on the specific part and volume. Wafer costs have actually improved at a very nice pace, faster than most analysts anticipated.

It's finally starting to get somewhat reasonable after over a year, but at introduction it was significantly more expensive than previous new nodes. I know AMD had a chart about it and I believe NV did as well, basically noting how, combined with node transitions coming more slowly, it was becoming harder to justify immediately moving to a new node. They both said the same thing about 14 nm as well, but (IIRC) 7 nm has been noted as significantly more expensive to move to in its first year.

Regards,
SB
 
What does that have to do with Moore's Law?
It's the only way I can make sense of the cryptic tweet, because it looks like a 12 TF Navi is actually mostly keeping up with Moore's law, while it's Lockhart that doesn't.

2013: 1.3TF and 1.8TF
2016: 4.2TF
2017: 6TF
2020: 4TF and 12TF
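
Spelling that reasoning out, here's the implied doubling period for each figure, measured from the 1.3 TF 2013 baseline (the 2-years-per-doubling yardstick is just the classic rule of thumb, nothing official):

```python
# Implied doubling period for each console TF figure above, relative to the
# 1.3 TF / 2013 baseline. A classic Moore's-law pace would be ~2 years.
import math

base_year, base_tf = 2013, 1.3
figures = [(2016, 4.2), (2017, 6.0), (2020, 4.0), (2020, 12.0)]

for year, tf in figures:
    doubling = (year - base_year) / math.log2(tf / base_tf)
    print(f"{year}: {tf:>4} TF -> one doubling every {doubling:.1f} years")

# 12 TF in 2020 works out to ~2.2 years per doubling; 4 TF to ~4.3 years,
# which is the point being made about Lockhart.
```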
 
It's the only way I can make sense of the cryptic tweet, because it looks like a 12 TF Navi is actually mostly keeping up with Moore's law, while it's Lockhart that doesn't.

2013: 1.3TF and 1.8TF
2016: 4.2TF
2017: 6TF
2020: 4TF and 12TF

Lockhart is 2+ times the TF of the PS4 (last gen's "1080p" console), plus Navi efficiency improvements, CPU and SSD, and Anaconda is 2 times the TF of the One X (last gen's "4K" console) with the same.

Seems to make sense when you look at it that way.
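
For reference, the raw ratios behind that tier comparison (using 1.84 TF for the PS4):

```python
# The tier comparison above, as raw TF ratios.
ps4, one_x = 1.84, 6.0
lockhart, anaconda = 4.0, 12.0

print(f"Lockhart vs PS4 ('1080p' tier): {lockhart / ps4:.1f}x")    # ~2.2x
print(f"Anaconda vs One X ('4K' tier):  {anaconda / one_x:.1f}x")  # 2.0x
```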
 
What an incredible coincidence: the Subor Z+ is Ryzen powered and runs at a little more than 4 TF...
Maybe someone has a leaked benchmark and supposed it was Lockhart?
And the real Lockhart is more than that?
 
It's a bit sneaky/forgotten, but there's main memory in a desktop, for certain motherboards handling mixed DIMMs. :p

Anyways, it's been done before with the GTX 55x (128+64-bit*), and bandwidth was fine up to the common amount of memory on all the chips**. Things probably get awkward for the addressable memory beyond that on the higher capacity chips, which kind of makes sense if you think about the memory controllers connected to certain chips. I'd shove the OS on the fewer higher capacity chips in that case.

* 4x 128MB + 2x 256MB
** 4x 128MB + 2x (128MB + 128MB)

Then there's the weird stuff that happened with the GTX 970 (3.5GB + 0.5GB).
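
For a rough sense of why that 970 split mattered, assuming the commonly cited configuration of 8x 32-bit channels of 7 Gbps GDDR5:

```python
# GTX 970 split, roughly: assumes 8 x 32-bit channels of 7 Gbps GDDR5.
per_channel_gbs = 32 * 7 / 8          # 28 GB/s per 32-bit channel

fast_pool_gbs = 7 * per_channel_gbs   # 3.5 GB striped across 7 channels
slow_pool_gbs = 1 * per_channel_gbs   # 0.5 GB on the remaining channel

print(f"3.5 GB @ {fast_pool_gbs:.0f} GB/s, 0.5 GB @ {slow_pool_gbs:.0f} GB/s")
```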


----
If they dedicated a couple of chips on separate controllers to the CPU, would it help prevent the speed reduction we're seeing with CPU/GPU concurrency?
 
If they dedicated a couple of chips on separate controllers to the CPU, would it help prevent the speed reduction we're seeing with CPU/GPU concurrency?

I think it would work the other way. In a theoretical 16GB comprised of 10 chips, 6 X 2GB and 4 X 1GB, the full capacity and bandwidth of the 1GB chips could be reserved to service the GPU only.
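
For concreteness, here's how the bandwidth would carve up in that kind of layout, assuming 14 Gbps GDDR6 and one 32-bit channel per chip (both assumptions on my part):

```python
# Hypothetical 320-bit / 16 GB layout: 6 x 2GB + 4 x 1GB chips,
# assuming 14 Gbps GDDR6 and one 32-bit channel per chip.
per_chip_gbs = 32 * 14 / 8                   # 56 GB/s per chip

# The first 10 GB can be interleaved across all ten chips...
fast_gb, fast_gbs = 10, 10 * per_chip_gbs    # 560 GB/s
# ...while the remaining 6 GB exists only on the six 2 GB chips.
slow_gb, slow_gbs = 6, 6 * per_chip_gbs      # 336 GB/s

print(f"{fast_gb} GB @ {fast_gbs:.0f} GB/s, {slow_gb} GB @ {slow_gbs:.0f} GB/s")
```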
 
The sort of obvious question for me is... what are the chances PS5 was designed in the same wattage range as the Xbox Series X?
 
I think it would work the other way. In a theoretical 16GB comprised of 10 chips, 6 X 2GB and 4 X 1GB, the full capacity and bandwidth of the 1GB chips could be reserved to service the GPU only.
But that wouldn't technically be a fully unified memory architecture then? Some parts would be fast, others slow. And some here were saying the PS5 would be bandwidth constrained with a unified pool at 576GB/s.

But for the next Xbox (12 TF), only 10GB at 560GB/s wouldn't be a problem at all? Devs will just have to adapt to the split memory?

Frankly, I don't believe they wouldn't use a fully unified memory architecture after the XBX. I also think 13GB is not enough; 15-16GB should be the minimum for next gen.
 
But that wouldn't technically be a fully unified memory architecture then? Some parts would be fast, others slow. And some here were saying the PS5 would be bandwidth constrained with a unified pool at 576GB/s.

But for the next Xbox (12 TF), only 10GB at 560GB/s wouldn't be a problem at all? Devs will just have to adapt to the split memory?

Frankly, I don't believe they wouldn't use a fully unified memory architecture after the XBX. I also think 13GB is not enough; 15-16GB should be the minimum for next gen.

It would be unified, or likely close enough that it wouldn't have to be explicitly managed by developers. The GPU would have access to up to 10GB of memory at the full bandwidth of the 320-bit bus. 3GB of the remaining 6 is reserved by the OS, and the remaining 3GB could be used for CPU data, lower-priority GPU data, and data that needs to be shared between the CPU and GPU.

As for whether 13GB is enough, that would depend on how well the VMM works. FWIW, that is being touted as one of the standout features of Scarlett.
 
The sort of obvious question for me is... what are the chances PS5 was designed in the same wattage range as the Xbox Series X?
Before that we need to know what the XBSX wattage range is.

And for that we need to know if it's a fixed clock like previous consoles, some kind of boost clock, a laptop-style TDP limiter, or whatever else.
 
Before that we need to know what the XBSX wattage range is.

And for that we need to know if it's a fixed clock like previous consoles, some kind of boost clock, a laptop-style TDP limiter, or whatever else.
We also need to know what node will be used in both. 7nm or 7nm EUV.
 
Before that we need to know what the XBSX wattage range is.

And for that we need to know if it's a fixed clock like previous consoles, some kind of boost clock, a laptop-style TDP limiter, or whatever else.

Pretty sure developers want no part of variable performance states. They will have enough to deal with with multiple hardware targets as it is. And the thought of someone getting better performance out of their console because they live in a cold climate or have good A/C vs. someone who has a higher ambient room temp is kind of bananas. Even more so when you consider that games could conceivably play differently for people depending on the time of year.

Let's leave at least some of the PC stuff on the PC where people know what they're signing up for. :mrgreen:
 
Pretty sure developers want no part of variable performance states. They will have enough to deal with with multiple hardware targets as it is. And the thought of someone getting better performance out of their console because they live in a cold climate or have good A/C vs. someone who has a higher ambient room temp is kind of bananas. Even more so when you consider that games could conceivably play differently for people depending on the time of year.

Let's leave at least some of the PC stuff on the PC where people know what they're signing up for. :mrgreen:
I'm not talking about thermal throttling, I'm talking about boost clocks and TDP limits, which would give identical behaviour on all consoles regardless of temperature: scaling based on occupancy.

Devs can always lock it to the base clock if they want.
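
As a toy sketch of what occupancy-based, temperature-independent scaling could look like (every number here is invented, purely to illustrate the idea, not how either console actually works):

```python
# Toy model of deterministic clock scaling: the clock is picked purely from
# workload occupancy against a fixed power budget, so every unit behaves
# identically regardless of temperature. All numbers are invented.
BASE_CLOCK_MHZ = 1700    # guaranteed floor devs could lock to
MAX_CLOCK_MHZ = 2100
POWER_BUDGET_W = 200

def gpu_power_w(clock_mhz: float, occupancy: float) -> float:
    """Crude model: power scales with occupancy and roughly clock^2."""
    return occupancy * POWER_BUDGET_W * (clock_mhz / BASE_CLOCK_MHZ) ** 2

def clock_for(occupancy: float) -> int:
    """Highest clock (in 25 MHz steps) that fits the budget; never below base."""
    clock = MAX_CLOCK_MHZ
    while clock > BASE_CLOCK_MHZ and gpu_power_w(clock, occupancy) > POWER_BUDGET_W:
        clock -= 25
    return clock

for occ in (0.5, 0.8, 1.0):
    print(f"occupancy {occ:.0%}: {clock_for(occ)} MHz")
```

The point being that the same occupancy produces the same clock on every console, which is exactly why it isn't the PC-style thermal lottery described above.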
 
Before that we need to know what the XBSX wattage range is.

And for that we need to know if it's a fixed clock like previous consoles, some kind of boost clock, a laptop-style TDP limiter, or whatever else.

The takeaway from that DF article and Phil's response tweet is that, in order to achieve the performance they wanted, they had to design a console with significantly higher power consumption than current consoles.

My question is merely what are the chances Sony did this as well...?
 