Just look at some of the big hits this last generation and you'll see as many games that succeeded purely on their single-player, cloud-less incarnations as there are multiplayer successes.
The past decades of gaming have shown this, and they've also shown the dangers of pushing too much connectivity in the wrong ways (Diablo 3, for example).
What's the point of comparing it to games from a generation where the technology could not be fully embraced because a publisher couldn't confidently know that all of its target market could use it?
Look at multiplayer games before Xbox basically made online play standard with Xbox Live Gold. Look at the state of video games before 3D acceleration became relatively standard. This is most easily seen in PC gaming when 3D acceleration was relatively new. Good luck finding many games that were 3D accelerated until quite a few years after the first proficient consumer-level 3D accelerators were released (3dfx Voodoo Graphics, along with the Rendition Verite). It was still relatively rare when Nvidia finally jumped in with a more standard 3D accelerator in the Riva series.
Or look at the adoption of 256-color graphics in PC games before the majority of PCs contained 256-color-capable hardware. Want to get one of those fancy games greenlit by a publisher or developer? Good luck trying to make that happen.
Of course, history shows how little things change before there's a critical mass of machines/users that can make use of them. Just look at how long it's taking for 64-bit games, as well as DX11-native games, to take off on PC. This despite a good 64-bit OS (Windows 7) and the first DX11 cards both being released back in 2009.
Basically, until a technology or piece of hardware is standard, or at least nearly universal, it will not get enough adoption to drive gaming forward.
Hell, look at how environmental audio flourished when Creative Labs made EAX 1.0 and 2.0 open-license, which led to wide adoption even on motherboards with onboard audio. Then look at what happened to it when they made EAX 3.0-5.0 closed-license, meaning publishers could no longer rely on nearly all PC users having access to EAX.
The consoles have an advantage over PCs in moving game-changing technology forward: whenever there is a new generation, a console manufacturer can guarantee publishers that ALL of its consoles will support X feature. That gets the technology adopted relatively quickly, precisely because it is available on ALL machines. That does not happen with optional things...ever. At least not in the history of console gaming.
If it isn't available on ALL machines from that console maker, then good luck getting games greenlit to use it. Light-gun games, Move games, PS Eye games, etc. Kinect support in games would eventually have died off as well if MS hadn't made it standard for every single Xbox One. If they had made it optional, as Kinect 1.0 was, I can virtually guarantee that it would have failed miserably this generation due to lack of publisher support, despite it being technologically quite impressive and enabling gameplay that would not exist without it.
Hopefully things change and we'll see widespread adoption of, and experimentation with, online compute, but I'm with Joker in thinking that by making it optional on all consoles, we've stuck our heads in the sand and relegated most of its wonderful possibilities to the generation of consoles after this one. Assuming there is a next generation of consoles after this.
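To make the "optional" problem concrete, here's a minimal sketch (Python, with a made-up endpoint and payload format; this isn't any console SDK's actual API) of what an optional cloud-compute path forces on a developer. Because connectivity can't be assumed, a full local fallback has to exist and be acceptable on its own, which means the design can never actually depend on the cloud:

```python
import json
import urllib.request

# Hypothetical cloud service for a latency-tolerant simulation step
# (e.g., background physics or ambient AI). Placeholder URL, not a real API.
CLOUD_ENDPOINT = "https://example.com/simulate"

def simulate_step_locally(world_state: dict) -> dict:
    # Fallback path: a cheaper, lower-fidelity version the console itself
    # can run. Because the cloud is optional, this path MUST ship anyway.
    world_state["detail_level"] = "low"
    return world_state

def simulate_step(world_state: dict) -> dict:
    try:
        req = urllib.request.Request(
            CLOUD_ENDPOINT,
            data=json.dumps(world_state).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req, timeout=0.1) as resp:
            # Higher-fidelity result computed server-side.
            return json.loads(resp.read())
    except OSError:
        # No connectivity, or too slow: degrade gracefully to local sim.
        return simulate_step_locally(world_state)
```

Since the offline result has to be good enough to ship, the cloud path can only ever add polish on top, never carry gameplay that the console alone can't run. That's exactly why optional features rarely get transformative use.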
Regards,
SB