Baseless Next Generation Rumors with no Technical Merits [pre E3 2019] *spawn*

Call me naive, but what would be the point of 64GB of RAM? Most full games are smaller than that, so unless we now want whole games in RAM, I'd rather funnel that budget into more GPU/CPU sauce.
I still remember years ago (maybe 2014-2015), back before the decline in DRAM's GB/$ had slowed down, Sebbbi wrote a forum post about interesting/cool things that could be enabled once consoles/gaming PCs got to a point where 64-128GB of RAM was commonplace. It was a short post and he didn't elaborate. I found it so fascinating and wanted to ask him what those things were, but the conversation had moved on, and he's a busy man who gets flooded with questions daily, many of which he doesn't answer.
Unfortunately, the rate of improvement in DRAM prices dropped off sharply in the intervening years. The price decline didn't just slow down or stagnate; it reversed course, and DRAM got more expensive for a few years, a first (IIRC) in the last 50 years of the semiconductor industry. Understandably, very few people back then had a crystal ball to foresee this unprecedented turn of events.

I'll try to dig up his post.
 

Good post, but just wanted to note that reversals and increases in DRAM prices have happened before as well, basically any time a fab suffers significant damage. IIRC the last time it happened prior to this one was when a South Korean fab that produced memory chips was damaged in an earthquake or fire?

This one was perhaps more drastic because it converged with the rise of mobile devices, which greatly increased DRAM demand at the same time that supply was constrained.

Regards,
SB
 
What if, in Microsoft's two-SKU plan, the cheaper one is there to compete in the normal console space: loss-leading hardware that makes its money back through license fees on sold software.

But Anaconda is $600 or more because they make a profit on the hardware itself; it's more powerful, but it's basically a PC that runs Windows 10. I mean, now that consoles will have proper CPUs and enough RAM? Maybe with access only to the Microsoft Store as a marketplace.
 
Because of the problem with monetisation, running a console as a PC was never a good idea. If MS can lock it down to their store, it'd be worth considering.
 

Always down to the money :/

Windows 10 S mode imposes this kind of limit now, and Xbox games have apparently been tested on Windows as well. Some interesting ideas if they can get the hypervisor to better secure and abstract this to ensure hardware performance.

https://www.techradar.com/news/windows-10-cloud-release-date-news-and-rumors

https://www.techradar.com/uk/news/windows-10-april-2019-update-could-play-native-xbox-one-games
 
Brad Sams expects no raw hardware specs, but performance figures, to be revealed at E3. He also claims to have heard that xCloud beats Google Stadia's 10.7 TFLOPs.

For those who are hoping to see a list of raw specs, I don’t think that’s on the agenda quite yet. Instead, look for the company to provide performance multipliers for the hardware; I have heard that Lockhart and Anaconda are targeting at least a 2x performance gain for the high-end (Anaconda) and 4x for the lower end hardware(Lockhart).

[...]

While I don’t think Microsoft will share the raw performance figures of xCloud, I have heard that it does best Google’s previously announced 10.7 teraflops.

Will be interesting to see if they explicitly say GPU performance or if they are talking about the entire system.

https://www.thurrott.com/xbox/207904/microsofts-aims-to-set-the-bar-high-at-e3
 

Yeah, that's the big question. Overall? CPU? GPU?

If it was just GPU, then that'd be a minimum of ~9.6 TFLOPs for the low end and ~12 TFLOPs for the high. If it's overall, then those might not be the minimums for the respective GPUs, but might be close to the actual figures for those GPUs.

But that's just specs. If it's about performance only, i.e. how games bench, then performance doesn't scale linearly with FLOPs count. For example, the RTX 2080 Ti has ~32.7% more FLOPs than the RTX 2080, yet it's only ~26.6% faster in BF1 at 4K.

https://www.anandtech.com/show/1334...tx-2080-ti-and-2080-founders-edition-review/6

It's pretty similar for other games; in FC5 it is ~27% faster at 4K.

https://www.anandtech.com/show/1334...tx-2080-ti-and-2080-founders-edition-review/7

In this case, the actual specs (TFLOPs) could be significantly higher than simply scaling the current TFLOPs by 4x or 2x.

Of course, things get harder to quantify when comparing different architectures and system compositions (CPU, memory bandwidth, memory latency, ROPs, etc.). All of which is to say: it's an interesting tweet, but there really isn't anything we can take away from it other than that both consoles will be faster. :D
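The sub-linear scaling SB describes can be checked with simple arithmetic on the figures quoted above (nothing new here; the 32.7% / 26.6% / 27% numbers are the ones cited from the AnandTech results):

```python
# Sub-linear scaling: the RTX 2080 Ti's FLOPs advantage over the
# RTX 2080 vs. its measured advantage at 4K (figures quoted above).
flops_gain = 32.7  # % more FLOPs than the RTX 2080
bench_gains = {"BF1": 26.6, "FC5": 27.0}  # % faster at 4K (AnandTech)

for game, gain in bench_gains.items():
    efficiency = gain / flops_gain  # fraction of linear scaling realized
    print(f"{game}: {gain}% faster on {flops_gain}% more FLOPs "
          f"-> {efficiency:.0%} of linear scaling")
```

That works out to roughly 81-83% of linear scaling, which is why a stated "2x/4x performance" target could imply more than 2x/4x the raw TFLOPs.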

Regards,
SB
 

Judging by his reply to an Era forum member, it seems to be overall performance (which makes sense if you haven't locked down clocks yet), not just GPU or CPU alone.

https://www.resetera.com/threads/ne...h-thread-ot5-its-in-rdna.120059/post-21418924
Was told they are expected to talk in multiple % terms of performance gains, IE 2x,3x,4x,

So XB1S - 4x performance for Lockhart, 2x of XB1X for Anaconda.


This was hard info to lock down, mostly because I don't think Microsoft fully knows yet; they just started getting production samples from AMD, and performance output can vary widely right now with simple things like adjustments to clock speeds etc. Among those talked to, who absolutely would know this information, these performance figures were the consensus of the group.

Edit: Fu**ing useless auto-correct. Sense, not since.
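Taking the multipliers in that quote at face value, and assuming (a big assumption) that the 4x/2x figures map linearly onto GPU FLOPs, a quick back-of-envelope sketch using the published GPU ratings (~1.4 TF Xbox One S, ~6.0 TF Xbox One X) gives:

```python
# Back-of-envelope: implied GPU TFLOPs if the rumored multipliers
# scaled linearly with FLOPs. Baselines are the published console
# GPU ratings; the multipliers are rumor, so treat outputs as speculation.
XB1S_TF = 1.4  # Xbox One S GPU, ~1.4 TFLOPs
XB1X_TF = 6.0  # Xbox One X GPU, ~6.0 TFLOPs

lockhart_tf = 4 * XB1S_TF  # "4x of XB1S" for Lockhart
anaconda_tf = 2 * XB1X_TF  # "2x of XB1X" for Anaconda

print(f"Lockhart ~{lockhart_tf:.1f} TF, Anaconda ~{anaconda_tf:.1f} TF")
```

That lands around 5.6 and 12 TF, though benchmark performance wouldn't need that many FLOPs if per-FLOP efficiency improves on the new architecture.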
 
Nah, that wouldn't be whole-system performance.
CPU alone would be an order of magnitude higher.
That's just GPU.
Lockhart = X1 * 4
Anaconda = 1X * 2

If they did state whole-system performance as their reveal, that would be the worst thing for us here, because of the varying ways they could arrive at that number. It'd be pretty much pointless.
 

MS/Brad are purposely stating overall performance, not just GPU. When you're stating 2x or 4x the performance of a prior system, you're talking about CPU/GPU/bandwidth as a whole. But I could be wrong, not a biggie.
 
This is why speculation should be treated as such: supposed insiders are always changing the narrative.

Updated
On the hardware side, I expect Microsoft to start to peel back the layers on the secrecy around their next generation consoles, Anaconda and Lockhart. For those who are hoping to see a list of raw specs, I don’t think that’s on the agenda quite yet. Instead, look for the company to provide performance multipliers for the hardware; I have heard that Lockhart and Anaconda are targeting at least a 4x performance gain for the high-end (Anaconda) and 2x for the lower end hardware(Lockhart).

Honestly, this makes no sense.
 
Wooooohoooo! All aboard the 24 TF and 12 TF hype train! Wooooohoooo!

:runaway:

 
Totally okay with 6 and 12 TF respectively. With all the other enhancements and an amazing Zen 2 CPU, even Lockhart will run circles around the X.
 