Next Generation Hardware Speculation with a Technical Spin [post E3 2019, pre GDC 2020] [XBSX, PS5]

All this engineering effort went into getting a more power-efficient part and pretty much left no stone unturned from a hardware and software standpoint to achieve it.

Yeah well, Nvidia also said they spent tens of thousands of engineer man-hours creating the Switch's hardware, only to end up putting in the buckets of Tegra X1 chips that no one wanted.
I wouldn't trust a single number mentioned when they talk about cooperation time between companies.

And what the hell did they optimize with memory bandwidth, if it's using regular old DDR4 2400MT/s? Unless this is using a 256bit bus, it's pretty much the same memory bandwidth you'd find on any other laptop with a Picasso + cheapest DDR4 SODIMM around.
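For reference, a quick back-of-the-envelope check of what those numbers imply. The 128-bit case is the typical dual-channel laptop configuration; the 256-bit case is purely hypothetical here:

```python
# Peak theoretical DDR4 bandwidth: transfer rate (MT/s) x bus width (bytes).
# These are datasheet ceilings; real-world throughput is lower.
def peak_bandwidth_gbs(mtps: float, bus_width_bits: int) -> float:
    return mtps * 1e6 * (bus_width_bits / 8) / 1e9

print(f"{peak_bandwidth_gbs(2400, 128):.1f} GB/s")  # typical dual-channel laptop: 38.4 GB/s
print(f"{peak_bandwidth_gbs(2400, 256):.1f} GB/s")  # hypothetical 256-bit bus: 76.8 GB/s
print(f"{peak_bandwidth_gbs(3200, 128):.1f} GB/s")  # DDR4-3200 dual channel: 51.2 GB/s
```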


There is one thing that's impressive in the Surface Laptop 3, though, and it's the fact that it's coming with FreeSync enabled on the laptop's own panel.
 
And what the hell did they optimize with memory bandwidth, if it's using regular old DDR4 2400MT/s? Unless this is using a 256bit bus, it's pretty much the same memory bandwidth you'd find on any other laptop with a Picasso + cheapest DDR4 SODIMM around.

The way it's managed, I would expect. We all know how contention between the CPU and GPU for memory bandwidth can kill performance.
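As a crude illustration of how that contention eats into the GPU's share (all figures below are made-up assumptions, not measurements of any actual SoC):

```python
# Toy model of CPU/GPU contention on a shared memory bus. Illustrative only.
total_bw_gbs = 38.4          # peak DDR4-2400 dual-channel bandwidth
cpu_demand_gbs = 10.0        # assumed CPU traffic
contention_penalty = 0.15    # assumed efficiency loss from interleaved CPU/GPU access

gpu_bw_gbs = (total_bw_gbs - cpu_demand_gbs) * (1 - contention_penalty)
print(f"Bandwidth left for the GPU: ~{gpu_bw_gbs:.1f} GB/s")  # ~24.1 GB/s
```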
 
Yeah, not sure where the 1.2 came from. Maybe that's the number at base (15W) TDP?
It's probably the performance of the Vega 11 in the 3780U with sustained 850MHz clocks, which should be what a Picasso can achieve at 20-25W.

Regardless, not using higher-clocked DDR4 seems like the real lost opportunity here. Unless those 1.2V 3200MT/s SODIMMs are still pulling too much power, or the SoC's memory controller goes crazy in consumption/heat when clocked higher than 2400 MT/s.
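For what it's worth, a sustained 850 MHz is exactly where a ~1.2 TFLOPS figure would come from, using the standard FP32 throughput formula for Vega 11 (11 CUs x 64 stream processors, 2 FLOPs per clock per lane):

```python
# FP32 throughput estimate for Vega 11 at a sustained 850 MHz.
cus = 11
sps_per_cu = 64
flops_per_clock = 2          # one FMA counts as two FLOPs
clock_hz = 850e6

tflops = cus * sps_per_cu * flops_per_clock * clock_hz / 1e12
print(f"{tflops:.2f} TFLOPS")  # ~1.20 TFLOPS
```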
 
It's probably the performance of the Vega 11 in the 3780U with sustained 850MHz clocks, which should be what a Picasso can achieve at 20-25W.

Regardless, not using higher-clocked DDR4 seems like the real lost opportunity here. Unless those 1.2V 3200MT/s SODIMMs are still pulling too much power, or the SoC's memory controller goes crazy in consumption/heat when clocked higher than 2400 MT/s.

Alternatively, they could be using DDR4 that can hit 2400 MHz with lower voltage/power usage.
 
The way it's managed, I would expect. We all know how contention between the CPU and GPU for memory bandwidth can kill performance.
Side-note, in the typical config, are the RAM chips on SODIMMs, or are they soldered on-board?
 
There is one thing that's impressive in the Surface Laptop 3, though, and it's the fact that it's coming with FreeSync enabled on the laptop's own panel.

I suspect this was also done with an eye towards power-savings. That's what VESA Adaptive Sync, which the original Freesync was based on, was created for.
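A rough way to see the power angle: the fewer refresh cycles the panel and display pipeline have to drive, the less energy is spent. Illustrative numbers only, assuming a 60 Hz panel showing 24 fps content:

```python
# Refresh cycles driven per minute at a fixed 60 Hz vs. a variable refresh
# rate that tracks a 24 fps video stream. Made-up workload, not a measurement.
fixed_hz, content_fps, seconds = 60, 24, 60
fixed_refreshes = fixed_hz * seconds       # 3600
vrr_refreshes = content_fps * seconds      # 1440
print(f"{1 - vrr_refreshes / fixed_refreshes:.0%} fewer panel refreshes")  # 60% fewer
```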
 
I have a new 499€ notebook with a FreeSync screen, so it's probably a bare-minimum feature now, at least on AMD notebooks. It's just that there are only a handful of them and nobody noticed.
 
Yeah well, Nvidia also said they spent tens of thousands of engineer man-hours creating the Switch's hardware, only to end up putting in the buckets of Tegra X1 chips that no one wanted.
I wouldn't trust a single number mentioned when they talk about cooperation time between companies.

And what the hell did they optimize with memory bandwidth, if it's using regular old DDR4 2400MT/s? Unless this is using a 256bit bus, it's pretty much the same memory bandwidth you'd find on any other laptop with a Picasso + cheapest DDR4 SODIMM around.


There is one thing that's impressive in the Surface Laptop 3, though, and it's the fact that it's coming with FreeSync enabled on the laptop's own panel.
What took them so long to make it work out of the box in an actual product? This is the first demonstration of FreeSync working on a laptop, back in January 2014:

 
What took them so long to make it work out of the box in an actual product? This is the first demonstration of FreeSync working on a laptop, back in January 2014:

FreeSync in a laptop isn't new. It's just not common since most laptops have Intel processors with integrated graphics and Intel doesn't support it yet.
 
I suspect this was also done with an eye towards power-savings. That's what VESA Adaptive Sync, which the original Freesync was based on, was created for.

Other way around. VESA Adaptive Sync adopted AMD's implementation of VRR in FreeSync.

FreeSync itself was built on top of a very rudimentary form of VRR that existed in mobile laptops but wasn't universally implemented or used.

Regards,
SB
 
Other way around. VESA Adaptive Sync adopted AMD's implementation of VRR in FreeSync.

FreeSync itself was built on top of a very rudimentary form of VRR that existed in mobile laptops but wasn't universally implemented or used.

Regards,
SB

I did some digging. Apparently the lineage was Embedded DisplayPort Panel Self Refresh > AMD FreeSync > DisplayPort 1.2a Adaptive Sync. I was conflating PSR with Adaptive Sync. According to VESA, PSR was introduced as part of the Embedded DisplayPort standard in 2009.
 
https://www.wired.com/story/exclusive-playstation-5/

Official information about the DualShock 5, a new PS5 game from Bluepoint, confirmation of hardware ray-tracing acceleration, and a holiday 2020 release.

Some more minor details on the new DualShock 5 controller from the Wired article:

- better battery
- heavier than the DualShock 4, but lighter than an XBO controller with batteries in it
- haptic feedback could have been introduced with the PS4 Pro, but they decided against it

Nope, you install only the parts of the game that you want - only gonna play the multiplayer? Install just that. They'll likely be able to sub-divide the game further, like installing levels progressively as you go, etc., so it should enable the same "instant play" the PS4 did.
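Nothing official has been shown about how that would be exposed, but conceptually it's just an install manifest split into tagged chunks. A minimal sketch of the idea, with all names and sizes invented:

```python
# Hypothetical chunked install manifest; chunk names, tags and sizes are
# invented for illustration, not taken from any real platform.
manifest = {
    "core":          {"size_gb": 12, "tags": {"required"}},
    "campaign_1_5":  {"size_gb": 18, "tags": {"singleplayer"}},
    "campaign_6_10": {"size_gb": 20, "tags": {"singleplayer"}},
    "multiplayer":   {"size_gb": 15, "tags": {"multiplayer"}},
}

def select_chunks(manifest, wanted_tags):
    """Pick every chunk that is required or matches one of the wanted tags."""
    return [name for name, chunk in manifest.items()
            if "required" in chunk["tags"] or chunk["tags"] & wanted_tags]

chunks = select_chunks(manifest, {"multiplayer"})
total = sum(manifest[c]["size_gb"] for c in chunks)
print(chunks, f"{total} GB")  # ['core', 'multiplayer'] 27 GB
```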
 
Mark Cerny said:
“There is ray-tracing acceleration in the GPU hardware,” he says, “which I believe is the statement that people were looking for.”

I'm eager to see how this will be spun.


«IDK guys, he said "there's ray-tracing acceleration in the GPU hardware", not "there's ray-tracing acceleration hardware in the GPU".
So it could still be just ray-tracing in the form of using regular compute units and nothing else, right?»

/s
 