Sarcasm aside though, I haven’t seen a particularly good technical explanation other than “it’s stupid”
(..)
Hurting the baseline is a common counter-argument, but I’m not super convinced by it with CPUs this close across all SKUs while we are in an age of dynamic scalability on the graphics side.
Really?
1 - Dynamic scalability isn't a holy grail that solves everything. You most probably still need to set boundaries for resolution and framerate.
Imagine a game that pushes the Anaconda with insane amounts of per-pixel processing and uses a dynamic resolution that ranges between e.g. 1800p30 + upscaling for indoors and 1200p30 + upscaling for outdoors.
Put that game on a Lockhart with a GPU that's 3x smaller, and the 1200p30 parts (2133*1200 ≈ 2.56 MPixels) will drop to a resolution 3x smaller, so roughly 2.56 MPixels / 3 ≈ 853 KPixels, which at 16:9 works out to something like 1232*693.
Such a rendering resolution, on the typical >= 50" TV sets of today, would probably look like ass. So what the devs must do is build the game for the lowest common denominator to avoid going below 1080p on it, meaning all the IQ juice they wanted to put there for Anaconda now needs to be scaled down for dynamic scalability to work (while the PS5 gets all the juice regardless). That, or they'd need to treat Lockhart as yet another entirely new device to develop for, which defeats the purpose.
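To make that concrete, here's a back-of-the-envelope sketch of the scaling math above. It assumes, purely for illustration, that rendered pixel count scales linearly with GPU compute and that the gap is the ~3x used above:

```python
# Sketch of the dynamic-resolution math above.
# Assumption: pixel throughput scales linearly with GPU compute (a simplification),
# and the Anaconda-to-Lockhart GPU gap is ~3x, as in the example.

def scaled_resolution(width, height, gpu_ratio, aspect=16 / 9):
    """Shrink a rendering resolution by a GPU compute ratio, keeping the aspect ratio."""
    pixels = width * height / gpu_ratio
    new_height = (pixels / aspect) ** 0.5
    new_width = new_height * aspect
    return round(new_width), round(new_height)

# The 1200p "outdoors" case from above, on a GPU ~3x smaller:
print(scaled_resolution(2133, 1200, 3))   # -> roughly (1232, 693)
```

Sub-720p on the vertical axis, in other words, which is the whole problem.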
2 - Geometry performance is now more dependent on compute performance (primitive-culling compute shaders in some engines, primitive shaders on Navi). That means if geometry setup takes up a chunk of GPU core time to allow very large meshes, and many of them, then the percentage of GPU compute spent on geometry is going to differ between the larger GPU and the smaller one. Since the performance hit from geometry will be asymmetrically larger on the smaller GPU, the devs now need to cut down geometry for the lowest common denominator, or ship two sets of geometry on the disc, one for each console.
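Rough illustration of why that hit is asymmetric. The millisecond figures here are made up, and the sketch assumes the same geometry/culling workload simply takes ~3x longer on a GPU ~3x smaller, since that work doesn't shrink when you drop the rendering resolution:

```python
# Hypothetical frame-budget arithmetic for the asymmetric geometry cost argument.
# All numbers are illustrative assumptions, not measured figures.

frame_budget_ms = 33.3           # 30 fps frame budget
geometry_ms_big = 4.0            # assumed geometry + culling cost on the big GPU

geometry_ms_small = geometry_ms_big * 3   # same meshes on a GPU ~3x smaller

print(f"big GPU:   {geometry_ms_big / frame_budget_ms:.0%} of the frame on geometry")
print(f"small GPU: {geometry_ms_small / frame_budget_ms:.0%} of the frame on geometry")
# big GPU:   12% of the frame on geometry
# small GPU: 36% of the frame on geometry
```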
3 - More QA work for devs. It doesn't matter how close the architecture is; if it's a console, you still need to do full quality assurance for that hardware. Having a game consistently crash to desktop on a PC when running on GPU XyZ with ZyX software / driver version / whatever won't be newsworthy.
But on a console? That's opening a can of worms. Release year is going to be crazy for QA teams because multiplatform cross-gen titles will release for XBone, PS4, XBoneX, PS4 Pro, PS5 and Scarlett. Four of those are fairly known quantities, but the two new ones are not. With Lockhart we'd be looking at three instead.
4 - Virtual Reality. VR is extremely demanding on performance: devs must target an absolute minimum of 60 FPS at the headset's maximum resolution, which definitely won't be lower than WMR's current minimum of 1440*1440 per eye + SMAA/TXAA (though preferably SSAA). It's not like a VR game can dip to 25 FPS and make people vomit, so performance must be very finely tuned. How would devs handle VR games on the slower console? Have it significantly reduce the immersive experience by dropping rendering resolution and/or AA and hurt the baseline? Say the VR headset isn't compatible with the new console?
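For a sense of scale, here's a rough pixel-throughput comparison using the numbers quoted above (ignoring supersampling, so the VR figure is a lower bound):

```python
# Rough pixel-throughput comparison to show why VR leaves little headroom.
# Resolutions and framerates are the ones quoted above; supersampling is ignored.

vr_pixels_per_s   = 1440 * 1440 * 2 * 60    # both eyes at the 60 fps minimum
flat_pixels_per_s = 1920 * 1080 * 30        # a 1080p30 flat-screen game, for comparison

print(f"VR:    {vr_pixels_per_s / 1e6:.0f} Mpix/s")
print(f"Flat:  {flat_pixels_per_s / 1e6:.0f} Mpix/s")
print(f"Ratio: {vr_pixels_per_s / flat_pixels_per_s:.1f}x")
# VR:    249 Mpix/s
# Flat:  62 Mpix/s
# Ratio: 4.0x
```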
We've all said Lockhart made zero sense.
That is definitely not my perception from the responses I've gotten whenever I said Lockhart made no sense over the last few months.
See Iroboto's tweet above. The 'plans' were years old. I'm unconvinced Lockhart ever existed as a solid design. It was probably a concept, perhaps to use tiered performance hardware (similar to our suggestions of a normal $400 and a $600 Plus version clocked higher with Leet components etc.), that was bounced around and profoundly rejected for being dumb.
The statements from reporters have been saying Lockhart was "recently" scrapped. If it was scrapped within the last 3 months with a planned release date of Holiday 2020, then it probably did go well beyond the concept stage.