Sure, but old techniques all suddenly work and run better, which is still an instant improvement across the board without having to wait for new techniques to be developed...
I mostly agree with you, and I've largely argued the other side of this discussion in this thread. However, it's important to recognise that few things in life are a 100% win, and we should note what gets lost, then decide whether the gains make up for the losses. I feel you're a little too enthusiastic about the change and are somewhat idealising the outcome on a couple of points.
In some cases new hardware features don't get used at all, maybe because their implementation is weak, like tessellation on the 360, or because it's broken, like scaling was on the PS3.
That's far, far truer on PC than it's ever been on console. IHVs have included features that have pretty much never been used over several generations of GPUs. Introduce a new technique on one console in 2017 that isn't on the others and it won't get used. In 2019, if the sequel has it, it might start to see use, by which time the first implementation is possibly too long in the tooth to be any use at all (see the first efforts at tessellation hardware on PC: basically a waste of silicon, as it was unused when released and too impotent to be viable by the time tessellation became featured in games).
With a more frequent hardware release schedule, if a hardware feature is broken, limited or gimped in some way, at least you'd only have to wait 2 years for it to get fixed rather than 7.
A more frequent update strategy would need a very robust hardware basis, which'd basically mean PC. There's no point using an eDRAM-based solution if that just adds issues to utilising the hardware and to cross-platform development. That's where consoles have always had an advantage: being able to choose more esoteric solutions that provide better bang per buck than the PC's generic solution. I agree that the loss of esoteric hardware and the simplification of development is a Good Thing overall, but generic, abstracted hardware will always be at a potential pure-performance disadvantage.
It's optional, though: those that don't care to spend don't have to, and can stay on ancient hardware if they so choose... exactly as it works on current console hardware, except that everyone is forced to stick with ancient hardware for 7 years. The difference is that people would now have the option to upgrade when they want rather than waiting 7 years, while those that didn't care could keep their old hardware.
I've repeated that argument multiple times in this thread.
You can just look back to last gen's tessellation and SPUs to see how long it can take for developers to support custom hardware on fixed-function consoles. Just because they have this freaky new hardware doesn't mean it will get supported right away, or supported at all.
Right, but the argument is that the peak potential is better in fixed hardware. It may take a few years, and by then new hardware will make the old hardware look weak, but you get it all used.
It's something like a choice between:
Fixed hardware -
50% utilisation on day one (5 performance metric units), 80% in year 3 (8 PMUs), and 99% (9.9 PMUs) in year 5.
Abstracted progressive hardware -
70% utilisation on day one (7 PMUs), 75% in year 3 (7.5 PMUs), and 80% (8 PMUs) in year 5.
Then, with added expense: 70% utilisation of the new B-spec machine at 2x speed in year 3 (14 PMUs), and 75% utilisation of the B spec in year 5 (15 PMUs).
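For what it's worth, the comparison above can be sketched as a quick calculation. The peak figures (a nominal 10 PMUs for the base machine and 20 for the 2x-speed B spec) are just my illustrative assumptions, as are the utilisation percentages from the lists above:

```python
# Illustrative sketch of the utilisation trade-off, using the made-up
# numbers from the post: 10-PMU nominal peak for the fixed/base hardware,
# 20 PMUs for a hypothetical 2x-speed B-spec upgrade.

FIXED_PEAK = 10    # nominal peak of the fixed console, in PMUs (assumed)
B_SPEC_PEAK = 20   # hypothetical B-spec machine at 2x speed (assumed)

fixed_util = {1: 0.50, 3: 0.80, 5: 0.99}       # climbs as devs learn the HW
abstracted_util = {1: 0.70, 3: 0.75, 5: 0.80}  # higher floor, lower ceiling
b_spec_util = {3: 0.70, 5: 0.75}               # B spec only exists from year 3

for year in (1, 3, 5):
    fixed = FIXED_PEAK * fixed_util[year]
    abstracted = FIXED_PEAK * abstracted_util[year]
    upgraded = B_SPEC_PEAK * b_spec_util.get(year, 0)
    line = f"year {year}: fixed {fixed:.1f} PMUs, abstracted {abstracted:.1f} PMUs"
    if upgraded:
        line += f", upgraded {upgraded:.1f} PMUs"
    print(line)
```

The crossover is visible by year 3: the fixed box's rising utilisation (8 PMUs) already beats the abstracted box that stayed on old hardware (7.5 PMUs), but the B-spec upgrade (14 PMUs) beats both.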
For someone not wanting, or unable to afford, to upgrade to the B spec, they get less performance from the hardware they own. For those who do upgrade, they'll get far better performance than the old hardware could hope to achieve no matter how efficiently it's used.
Fixed hardware gives gamers better peak extracted performance for the money spent on it, at the cost of added developer effort. Progressive hardware gives the option of higher peak performance at any given time, because you can buy the latest, greatest machine, but its utilisation is lower. You sacrifice peak performance for gains in flexibility, ease of development, and the option to upgrade more frequently.
Personally I'm in favour, but I recognise the downside as well as the up.
How long will it take for GPGPU to get commonly used on the new machines compared to on a more general software platform?
GPGPU has been around as an option on PCs for ages, but has gone largely unused. It's now that it's an option on the consoles that it'll get utilised. That's more evidence that new hardware features get overlooked while the lowest common denominator remains the target. New consoles set a new lowest common denominator, and all that potential that PCs have had is now going to be unlocked. If XBToo is the same as XBOne but more so and with a raytracing unit, will the RT unit actually be used, or will devs just run their XB1 code on the XB2 and save themselves the bother? History suggests the latter: new PC and mobile technologies aren't adopted for quite a while after their introduction.