Next Generation Hardware Speculation with a Technical Spin [post E3 2019, pre GDC 2020] [XBSX, PS5]

Status
Not open for further replies.
@BRiT
The application program can use a total of 5056 MiB (5568 MiB in NEO mode) physical memory. Of this physical memory, the size to assign as flexible memory can be specified by including the SCE_KERNEL_FLEXIBLE_MEMORY_SIZE() macro in the program. The remaining difference will be assigned as direct memory. If there is no specification, 448 MiB will be assigned as flexible memory.

SCE_KERNEL_FLEXIBLE_MEMORY_SIZE(size): The value for size must be a multiple of 16 KiB and 448 MiB or less.
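The size rule quoted above is easy to sanity-check. The sketch below encodes it in Python; the macro name comes from the quoted SDK text, but the helper function and constants here are hypothetical illustrations, not part of any actual SDK:

```python
# Hypothetical validator for the flexible-memory size rule quoted above:
# the value must be a multiple of 16 KiB and at most 448 MiB.
KIB = 1024
MIB = 1024 * KIB

FLEXIBLE_MEMORY_MAX = 448 * MIB      # upper bound from the SDK text
FLEXIBLE_MEMORY_ALIGN = 16 * KIB     # required granularity
FLEXIBLE_MEMORY_DEFAULT = 448 * MIB  # assigned when nothing is specified

def is_valid_flexible_size(size: int) -> bool:
    """True if `size` satisfies SCE_KERNEL_FLEXIBLE_MEMORY_SIZE()'s stated rule."""
    return 0 <= size <= FLEXIBLE_MEMORY_MAX and size % FLEXIBLE_MEMORY_ALIGN == 0

print(is_valid_flexible_size(448 * MIB))  # True: the default assignment
print(is_valid_flexible_size(100 * MIB))  # True: multiple of 16 KiB, under the cap
print(is_valid_flexible_size(449 * MIB))  # False: exceeds 448 MiB
print(is_valid_flexible_size(10 * KIB))   # False: not a multiple of 16 KiB
```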
 
4x the GPU power of the Xbox One X would be quite an astounding improvement.

Going off some half-remembered Navi vs. Vega comparisons, Navi is roughly 1.6 times as performant. So a 24 TF Polaris GPU would theoretically be bested by a 15 TF Navi GPU:
  • 56 CUs clocked at 2.1 GHz
  • 64 CUs at 1.85 GHz
  • dual 36 CU GPUs at 1.7 GHz
It'd be awesome, but wouldn't it make for too big and power-hungry a GPU?
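The arithmetic behind those three configurations checks out with the standard GCN/RDNA throughput formula (64 shader lanes per CU, 2 FLOPs per lane per clock via FMA):

```python
def tflops(cus: int, clock_ghz: float, lanes_per_cu: int = 64, flops_per_lane: int = 2) -> float:
    """Peak FP32 TFLOPS for a GCN/RDNA-style GPU: CUs * lanes * FLOPs/clock * clock."""
    return cus * lanes_per_cu * flops_per_lane * clock_ghz / 1000

# The three ~15 TF configurations listed above:
print(round(tflops(56, 2.1), 2))      # 15.05
print(round(tflops(64, 1.85), 2))     # 15.16
print(round(tflops(2 * 36, 1.7), 2))  # 15.67
```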
 

It would be faster than a 2080 Ti. Outside of some corner cases, it's hard to imagine how that would be possible.
 
Yeah, but the interlocutor and the context of their conversation are quite precise. I don't agree that he wouldn't be in a position to know; I'd even argue the opposite: he's more in the know as one of the higher-ups at Unity than he was as some indie dev.


Other than flops, there might be a few ways to explain the 4x GPU conclusion: 7 nm (or even 7 nm EUV) + a better bandwidth ratio than current RDNA PC cards + front end + buses + further RDNA 2 and/or custom blitter / further architecture improvements...
 
IIRC, the motion interpolation methods used in LED TVs take whatever the captured framerate is (i.e., 24, 30, 60, etc.) and add the prior or future frame when matching the 120 Hz or 240 Hz mode. Although a 60 fps capture already matches a TV's native 60 Hz refresh rate, it can still benefit from prior frames being introduced. As long as the TV can support 120 frames (120 Hz) or 240 frames (240 Hz) through its motion interpolation logic, the original film/video framerate capture will receive the same treatment (as long as it's under 120 fps or 240 fps).

Either I'm not reading your post correctly, or it doesn't operate that way.

Interpolation on TVs takes 2 frames and then creates as many temporary frames from that data as needed to fill the gap for the desired display rate. So 30 Hz content at a 60 Hz display rate, for example, would show Real Frame - Interpolated Frame - Real Frame - Interpolated Frame, etc., with each interpolated frame using data from the prior and next real frames. 30 to 120 Hz would have 3 interpolated frames between each pair of real frames, etc. It's possible that more advanced interpolation chips use more than 2 frames of data to form motion vectors to assist with creating the temporary frames.

For 60 Hz content displayed at 60 Hz, there is no interpolation since there is nothing needing interpolation, so the TV would do nothing.

So, that only works when going up in display rate compared to actual per frame data.

Going in the reverse direction, you'd have a choice to either discard frames or merge multiple frames prior to displaying them. The effect is drastically different from motion interpolation.
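As a rough illustration of the pattern described above (the function names and the linear-blend step are my own; real TV chips use motion-vector estimation, not a simple blend):

```python
def upsample(frames, factor):
    """Insert (factor - 1) interpolated frames between each pair of real frames.

    Each frame is a list of pixel values; interpolation here is a plain
    linear blend, a stand-in for a TV's motion-compensated logic.
    """
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        for i in range(1, factor):
            t = i / factor
            out.append([(1 - t) * pa + t * pb for pa, pb in zip(a, b)])
    out.append(frames[-1])
    return out

def downsample(frames, factor):
    """Going the reverse direction: simply discard frames."""
    return frames[::factor]

# 30 Hz -> 120 Hz: 3 interpolated frames between each pair of real frames.
frames_30hz = [[0.0], [4.0], [8.0]]
print(upsample(frames_30hz, 4))
# [[0.0], [1.0], [2.0], [3.0], [4.0], [5.0], [6.0], [7.0], [8.0]]
```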

Regards,
SB
 
Yeah, but the interlocutor and the context of their conversation are quite precise. I don't agree that he wouldn't be in a position to know; I'd even argue the opposite: he's more in the know as one of the higher-ups at Unity than he was as some indie dev.


Other than flops, there might be a few ways to explain the 4x GPU conclusion: 7 nm (or even 7 nm EUV) + a better bandwidth ratio than current RDNA PC cards + front end + buses + further RDNA 2 and/or custom blitter / further architecture improvements...

Hmm, you're right, he is actually talking about next gen.

I suspect it's RDNA 2 + special sauces.
 
It would be faster than a 2080 Ti. Outside of some corner cases, it's hard to imagine how that would be possible.


The 40 CU Navi 10 is pushing up on Nvidia's high end as is (maybe between the 2070 and just below the 2070 Super), and it's supposed to be mid-range. If AMD gets a 64 CU part out anytime soon, Nvidia's high end is indeed in big trouble. But of course, that's a big "if". Just to say that faster than a 2080 Ti from Navi 20, or whatever they call the bigger sibling of the 5700/XT chip, is no wild pipe dream; back-of-envelope maths shows that.
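That back-of-envelope maths can be sketched as follows. The 5700 XT's ~9.75 TF and the 2080 Ti's ~13.45 TF are public spec-sheet peak FP32 figures; linear scaling with CU count at a fixed clock is the simplifying (and optimistic) assumption:

```python
# Back-of-envelope: scale Navi 10's throughput linearly with CU count.
NAVI10_CUS = 40
NAVI10_TFLOPS = 9.75        # RX 5700 XT: 40 CUs * 64 * 2 * ~1.905 GHz / 1000
RTX_2080_TI_TFLOPS = 13.45  # 4352 cores * 2 * ~1.545 GHz boost / 1000

def scaled_tflops(cus, base_cus=NAVI10_CUS, base_tflops=NAVI10_TFLOPS):
    """Assume performance scales linearly with CUs at the same clock."""
    return base_tflops * cus / base_cus

big_navi = scaled_tflops(64)
print(round(big_navi, 2))             # 15.6
print(big_navi > RTX_2080_TI_TFLOPS)  # True
```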
 
I meant the GPU in next-gen consoles.
 
Other than flops, there might be a few ways to explain the 4x GPU conclusion: 7 nm (or even 7 nm EUV) + a better bandwidth ratio than current RDNA PC cards + front end + buses + further RDNA 2 and/or custom blitter / further architecture improvements...

It lives on!
 
Sebbi's tweet doesn't read to me like some kind of purposeful hardware leak. It just reads like someone repeating MS's PR speak from E3 (which they rectified, btw).

So, IMO, nothing points to 10+ TF machines. Well, maybe if they were 7 nm EUV, but I think they will be TSMC 7 nm+ in the best-case scenario.
 
SIE (Sony)'s new(?) patent for a heat sink
http://www.freepatentsonline.com/WO2018216627A1.pdf
Filing Date: 05/18/2018 Publication Date:11/29/2018

TLDR:
It's not new, we talked about it a while ago.

Still unclear how it could be used in a home console. There are many, many implementation examples in the patent, going from simple filled vias and thermal paste up to a big block of metal going through the PCB.

I like the one with a maze structure going around the traces, but it looks expensive to manufacture, and there are better and cheaper ways to deal with heat in a home-console form factor. This would only be interesting for a portable device, unless there's some major issue with hot spots on the underside of the package, or for keeping HBM memory at a lower target temp than the SoC... I don't know.

For ages the way to transmit heat through the PCB was to use a big array of filled vias, so this seems to be an evolution of the idea.
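For a sense of scale, the through-board thermal resistance of such a filled-via array can be estimated with the plain conduction formula R = t / (k·A), treating the vias as parallel copper columns. The via geometry and count below are made-up but typical values, not numbers from the patent:

```python
import math

# Estimate the through-board thermal resistance of an array of
# copper-filled vias, modeled as parallel conduction paths: R = t / (k * A).
K_COPPER = 385.0          # W/(m*K), thermal conductivity of copper
BOARD_THICKNESS = 1.6e-3  # m, a common PCB thickness (assumed)
VIA_DIAMETER = 0.3e-3     # m, a typical filled-via drill size (assumed)
NUM_VIAS = 100            # size of the array (assumed)

via_area = math.pi * (VIA_DIAMETER / 2) ** 2        # cross-section of one via
r_single = BOARD_THICKNESS / (K_COPPER * via_area)  # K/W for one via
r_array = r_single / NUM_VIAS                       # parallel combination

print(round(r_single, 1))  # ~58.8 K/W per via
print(round(r_array, 2))   # ~0.59 K/W for the whole array
```

Even a large array leaves well under 1 K/W through the board, which is why filled-via farms have been the cheap default and why the fancier structures in the patent would only pay off in tighter form factors.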
 