Next Generation Hardware Speculation with a Technical Spin [pre E3 2019]

Status
Not open for further replies.
I think PoP is limited to relatively coarse ball pitches; it's made for LPDDR4/5?

The FO_M looks good for reducing the cost of HBM versus current silicon interposers.

Crazy idea: I am curious if the single-die fanout could enable 512-bit GDDR6 on a reasonably sized 7nm part, or if there is still a limitation on the die design/cost regardless.
 
One issue with PoP is the inductance of the bond wires. Not going to be great at GDDR speeds.
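To put a rough number on that: a common rule of thumb is on the order of 1 nH per mm of bond wire, and for NRZ signaling the fundamental sits at half the data rate. The wire length, inductance figure, and data rates below are assumptions purely for illustration, not measured values:

```python
import math

def inductive_reactance_ohms(freq_hz: float, inductance_h: float) -> float:
    """Magnitude of the series impedance contributed by an inductance at a given frequency."""
    return 2 * math.pi * freq_hz * inductance_h

# Assumed: ~1 nH/mm of bond wire, 2 mm wire length -> ~2 nH total.
wire_l = 2.0 * 1e-9

for name, gbps in [("LPDDR4X @ 4.266 Gbps", 4.266), ("GDDR6 @ 14 Gbps", 14.0)]:
    nyquist_hz = gbps / 2 * 1e9  # NRZ Nyquist frequency = data rate / 2
    x_l = inductive_reactance_ohms(nyquist_hz, wire_l)
    print(f"{name}: ~{x_l:.0f} ohm series reactance from the bond wire")
```

Tens of ohms of uncontrolled series reactance against a 40-60 ohm channel is tolerable at LPDDR rates but becomes a large fraction of the channel impedance at GDDR6 rates, which is the intuition behind the objection.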

I think 512-bit could be possible on 7nm, but you would never be able to shrink it, or you would have to transition to a chiplet + IO die at some point, but that’s a lot of BW to push through the package.

I wonder if it would be possible to partition into two chiplets with 256-bit buses each that share traffic over IF.
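A quick back-of-the-envelope on what that split implies for the inter-chiplet link. The model and all numbers here are my own assumptions (14 Gbps GDDR6 pins, requests spread uniformly across both pools, so half of each chiplet's traffic targets the remote pool):

```python
# Rough traffic model for a two-chiplet split, each owning a 256-bit GDDR6 pool.
def pool_peak_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    """Peak bandwidth (GB/s) of one chiplet's local memory pool."""
    return bus_bits / 8 * gbps_per_pin

def cross_link_bw_gbs(bus_bits: int, gbps_per_pin: float, remote_fraction: float) -> float:
    """Bandwidth (GB/s) that must cross the inter-chiplet link in one direction,
    assuming each chiplet drives its pool at peak and remote_fraction of
    requests land in the other chiplet's pool."""
    return pool_peak_gbs(bus_bits, gbps_per_pin) * remote_fraction

per_pool = pool_peak_gbs(256, 14.0)                   # 448 GB/s per 256-bit pool
uniform = cross_link_bw_gbs(256, 14.0, 0.5)           # 224 GB/s each way
print(f"per-pool peak: {per_pool:.0f} GB/s, cross-link (uniform access): {uniform:.0f} GB/s")
```

Even under this crude model the link would need hundreds of GB/s in each direction, which is why pushing that much BW through the package is the sticking point.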

Then again, that Sony patent about a heatsink that would cross the PCB to make contact with the die on the bottom would be a nice hypothetical fit for an InFO_PoP.

I am concerned it has insufficient metal mass close to the package for 100W+ chips as the sole sinking method, particularly when the patent describes a paste rather than solid metal fill. I maintain that solution is for PSVR2.
 
The heatsink through PCB concept also puts a significant amount of metal at the base of the die, which for higher-power designs is competing with power and ground pads. A small chiplet with power demands at that scale is going to have less area to give.

This reminds me of something that came up in an interview with Intel's tech leads in the context of their die stacking.
https://www.anandtech.com/show/13699/intel-architecture-day-2018-core-future-hybrid-x86/9
Q: Are you having fun with FOVEROS?

J: Because Raja deals in GPUs, he’s having fun with high bandwidth communications between compute elements. It's a new technology and we're having some experimentation with it. What is frustrating is that as an industry we hit a limit for current flux density a year before stacking technology became viable, so for high performance on stacking we're trying a lot of things in different areas. There's no point having to make thermal setbacks if it also removes the reason why you're using the technology. But we're having fun and trying a lot, and we'll see FOVEROS in a number of parts over the next 5 years. We will find new solutions to problems we don't even know exist yet.

This seems to indicate, from his experience with attempts to get stacked systems working, that even as the manufacturing became feasible, something else interfered. I think current flux in this instance means they were not able to deliver sufficient current into the stack, and that this was a separate concern from the thermal setbacks he mentions later. That leaves 2.5D or standard layouts as the options capable of sufficient current delivery, and they currently don't sacrifice pad area to heatsink metal.
(edit: For clarity, the J in that quote is Jim Keller, whose personal observations of a chip company trying to get stacked products working would cover a particularly relevant time period and chip company.)
 
PSVR2 for $100 RETAIL ...
A 6TB SSD included in $400 console ...

The problem with not being smart enough to pull off believable fake console specs is you never know that you're not smart enough to pull off believable fake console specs.
 
The lengths some people will go to for a new meme :LOL:. Can't imagine Mark Cerny's face if he's ever made aware of this.
 
The description of how the OLED panel would work, given the dimensions it would need to fit a form factor described as similar to the DS4, was ludicrous. Picture trying to play PS5 games on a screen the size of the one in one of those Tiger LCD games.
That's true, but he never said anything about the controller's screen size.
 
It was so blatant that I think it was made not to spread a rumor but to laugh at whoever commented on it seriously.
 
That's true, but he never said anything about the controller's screen size.

How big could it possibly be and still fit on a controller being described as the same shape with tweaks? To accommodate a screen of adequate size to be used in the manner described would necessitate changing the controller a great deal.
 
I guess I missed the part where he claimed they'd have the same shape.
Regardless, it's obviously fake for a whole lot of other reasons.
 
At most, they will keep the same size of the gamepad and upgrade it with a small eInk touchscreen for OS/game UI rendering.
 
How big could it possibly be and still fit on a controller being described as the same shape with tweaks? To accommodate a screen of adequate size to be used in the manner described would necessitate changing the controller a great deal.
You mean chopping the controller in half and taping an iPad in the middle isn't considered a tweak?
 
Very funny fake leak of the PS5 on reddit. Hilarious... I don't want to spoil it.

So obviously fake haha:

1) PSVR2 retailing for $100
2) Bundling it up and gimping memory to 16GB and only having 8GB for games
3) OLED touchscreen
4) Anthem running at 720P/30FPS on base PS4
5) 6TB SSD
6) The wording on TLoU2: if it's finished, then the only way it comes out on PS5 is via a cross-gen release. They won't make it a native title, so there is no way they are "waiting" to decide what to do this late.

Super silly
 
So, Albert Penello just dropped into the Era thread.

https://www.resetera.com/threads/ne...uces-spicing-2019.91830/page-40#post-16935316

It's probably worth pointing out that calculating the amount of "loss" on a console is a bit more complex than simply the component cost. It has a lot to do with accounting, actually. Since companies tend to look at their financials based on fiscal years, there are a lot of costs that get burdened in launch years that reduce pretty quickly as volumes go up.


So for instance, if you're ramping up to produce 15m consoles/year you're going to be buying a lot of tooling, parts, and paying a lot of employees which is divided by a small number of consoles sold. And also, chip yields improve pretty rapidly so your costs for the first run of components is significantly higher.


This is why you sometimes get two different data points that appear to be in conflict, but really aren't.


Launch consoles are VERY expensive in those first few months. But by the middle of the next year, and into the following holiday, prices come down drastically even though it was exactly the same console being made.


It's sort of a pedantic point. But in reality the pricing of launch consoles is a yield/production calculation, so most manufacturers are going to take a blended 12-month view (or 18-month to get through the 2nd holiday) which tends to smooth things out. Then you are simply asking if your overall business (Games + Accessories + Subscriptions) makes more money than your console loss. If so, you're OK. Then the second question is: how quickly does the console get to $0 loss, or very close?


Where things get tricky is that cost reductions and cost amortization are based on volumes; when you don't hit those volumes, things start to go awry pretty fast.
It's not meaningless, it's just complicated. I think people tend to view the "console loss" as an indicator of how much future technology the console "bought": the more "loss" on the console, the more future-proof. At least, that's how I've always interpreted the debate when I see it, and there is a certain truth to that argument. The closer to $0 loss the launch console is, the more likely the components are mature (e.g. current-day). So there is logic to it.

HOWEVER that's not always the case, and really it's nearly impossible to understand what the true "loss" is anyway since there are ton of factors beyond component costs that can impact the "loss" that is quoted online. And yields, volumes, and production ramp impact the numbers more than anything.

So for instance - when people say the PS3 cost $900 or whatever (I don't have any knowledge of their costs) that was probably more due to the fact that they were expecting to sell more at launch, than it was due to them buying more "future tech" than the 360. I hope I'm being clear.
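The amortization argument above is easy to see with toy numbers. Everything here is hypothetical, purely to illustrate the mechanism Penello describes (fixed ramp costs spread over volume, plus a BOM that falls as yields mature):

```python
def unit_cost(component_cost: float, fixed_costs: float, volume: int) -> float:
    """Effective per-console cost once tooling/ramp/NRE is spread over units built."""
    return component_cost + fixed_costs / volume

# All numbers invented for illustration.
BOM_LAUNCH, BOM_YEAR2 = 450.0, 380.0   # component cost drops as yields improve
FIXED = 300e6                           # tooling, parts commitments, staffing

launch = unit_cost(BOM_LAUNCH, FIXED, 2_000_000)   # small launch-window volume
year2 = unit_cost(BOM_YEAR2, FIXED, 15_000_000)    # full-rate production
print(f"launch: ${launch:.0f}/unit, year 2: ${year2:.0f}/unit")
```

Same console, but the per-unit figure falls sharply once the same fixed costs divide over 15m units instead of 2m, which is why two quoted "loss" numbers can both be right.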
 
http://www.guc-asic.com/upload/media/event/PDF/HBM_PHY_Controller_2018Q1.pdf

There is a company selling memory controller IP promising HBM2 at 2.8 Gbps on 7nm for Q2 2019 (current IP runs at 2.4 on 7nm, 2.0 on 16nm). That's a nice 716 GB/s with only 2 stacks.

Once they get cheaper non-silicon interposers working with HBM, I wonder what the cost difference would be between two stacks of HBM and 384-bit GDDR6.
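Sanity-checking the bandwidth figure: each HBM2 stack has a 1024-bit interface, so the arithmetic is straightforward. The 14 Gbps figure for the GDDR6 comparison is my assumption, for reference only:

```python
def hbm_bandwidth_gbs(stacks: int, gbps_per_pin: float, bus_bits_per_stack: int = 1024) -> float:
    """Aggregate peak HBM bandwidth in GB/s: pins x per-pin rate / 8 bits per byte."""
    return stacks * bus_bits_per_stack / 8 * gbps_per_pin

def gddr6_bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    """Peak GDDR6 bandwidth in GB/s for a given bus width."""
    return bus_bits / 8 * gbps_per_pin

print(f"2x HBM2 @ 2.8 Gbps: {hbm_bandwidth_gbs(2, 2.8):.1f} GB/s")
print(f"384-bit GDDR6 @ 14 Gbps (assumed): {gddr6_bandwidth_gbs(384, 14.0):.1f} GB/s")
```

So two stacks at 2.8 Gbps (~717 GB/s) would comfortably beat a 384-bit GDDR6 bus at 14 Gbps (672 GB/s), making the comparison mostly a cost question.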
 
Once they get cheaper non-silicon interposers working with HBM, I wonder what the cost difference would be between two stacks of HBM and 384-bit GDDR6.
I think 2 stacks of HBM could scale down in price a lot more than 384-bit GDDR6.

They could start with silicon interposers for the first release, then move to cheaper replacements in later revisions. Eventually, with HBM3/4, they might even drop down to a single stack.
 