Play from the HDD, or, Testing & Conditioning for 3D Gen 4
Synopsis: "Play from the HDD" is as much about future consoles as it is about the here and now. Is this the right or wrong approach for the 4th gen 3D consoles? Can you think of a better solution for "next-gen" memory design?
General Thoughts: MS announced that gamers can now copy their games to the HDD which, in turn, allows for quieter gameplay (less DVD whir) and faster load times (the HDD beats the DVD drive on both seek time and transfer rate). The press release read something like this:
Play from hard drive. Copy your games from the game disc and play directly from the hard drive. Not only will the drive not spin, but load times are quicker, as well. Of course, you will still need the disc in the tray to prove you own the game.
While a nice "perk" for those with the HDD space (or a HDD at all! zing!), I started this thread because I believe there is something more subtle to this move: testing for the next console generation as well as consumer "conditioning."
On the testing front MS now has the ability to track how many consumers use the feature, the impact on HDD space, the benefit/detriment to the gaming experience, and so forth. They are also in a position to survey consumers on their thoughts about the implementation, what demerits they find with it, what they like, etc., with an eye toward future console development.
The conditioning element, though, is what I find interesting. The amount of "positive" response I have seen here and at Gamersyde is surprising: it appears a lot of gamers are willing to trade HDD space for less noise and faster load times. Having endured 3 years of the performance limitations of a non-standard HDD, gamers now treat the HDD option as a "positive point" which can be conditioned into a consumer selling point down the road (instead of being taken for granted).
My take is that this is a strategic move for MS, one that plays into the Xbox 360 as well as the Xbox 3.
The Now
* Give consumers less noise now.
* Give consumers faster load times now.
* Resolve disc spanning complaints (ugh, I have to swap discs?!).
* Resolve some disk space issues (dev needs 20GB of space? Require the HDD).
* Performance tracking.
* Consumer feedback.
The Future
* 4th Gen consoles are facing storage issues.
+ Games will continue to grow in memory footprint requirements.
- As worlds become larger and more detailed there is a growing need to populate memory quickly for game access.
+ Bandwidth needs will increase for storage and immediate access.
+ Solid state technologies remain very expensive, but...
- Offer improved transfer rates and seek times, in many cases, over optical media.
+ Optical media is cheap, but...
- Optical media is not fast; transferring 2GB-4GB of data (extrapolated memory footprint for new consoles) at 20MB-30MB/sec. is a much lower ratio than even this generation and won't suffice on new consoles (see the back-of-envelope sketch after this list).
- Optical media is not quick; seek rates are pathetic.
- Optical drives can be loud (especially faster ones), take up significant space, and increase failure rates, design complexity, etc.
* Distribution: digital distribution via online networks will be a major factor.
+ May not displace retailers (shelf space is important for mindshare, retailer exposure) but will continue to supplement them.
+ Other media, like music and movies, are encroaching on the console consumer market.
+ The community/social network concept is slowly trickling into the console space.
+ Steam has demonstrated that pre-caching content is viable (game goes "gold," consumers with intent to purchase cache large chunks of the game so when the game is officially released they can play sooner rather than later).
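To put the transfer-rate point above in concrete terms, here is a quick back-of-envelope sketch. Every rate here is an assumption for illustration (the optical figure is the midpoint of the 20MB-30MB/sec. range above, the HDD figure is a guess at a sustained rate for a drive of this era), not a measured drive spec:

```python
# Back-of-envelope: seconds to stream a full memory footprint from
# optical media vs. a HDD. All rates are illustrative assumptions,
# not measured drive specs.

footprints_gb = [2, 4]              # extrapolated next-gen memory footprints
rates_mb_s = {
    "optical @ 25 MB/s (assumed)": 25,
    "HDD @ 70 MB/s (assumed)": 70,
}

for gb in footprints_gb:
    mb = gb * 1024
    for medium, rate in rates_mb_s.items():
        print(f"fill {gb} GB via {medium}: {mb / rate:6.1f} s")
```

Even with those generous optical numbers, filling 4GB from disc takes nearly three minutes versus about a minute from a HDD, and that's before seek times make the real-world gap worse.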
I am not sure how the new consoles will tier their memory systems but there are significant troubles ahead. While there isn't a single motive behind the Xbox 360 game caching, I believe it has been positioned for market research and testing for their next console. What I find interesting is that it is a low-tech "PC" approach. IMO the PC is often a ripe market for product testing: if something is viable and affordable it can gain some headway in the PC market. At this point the approach of "installing" games to a HDD appears to be a viable middle ground for the 4th gen 3D consoles:
* HDD space is cheaper per GB than SSD.
* HDDs offer superior seek times compared to optical media.
* HDDs offer superior transfer speeds compared to optical media.
* A HDD is necessary for continued digital distribution development.
* HDD installation allows companies to retain optical media for cheap distribution while leveraging the HDD for superior performance.
While not a sexy solution, and HDD costs don't diminish much over the lifecycle of a console, I think the new consoles are at a point where a HDD will finally offer enough "selling points" (digital distribution, perpetual social networks, pre-release caching, "Avalanche"-style p2p networks, a partial resolution to game load time and persistent world issues, game modifications and customization, demos and trailers, and so forth) to justify MS including one again. And not just as a content save area, but in a PC-style "installation" approach: buy a game, install the game (play a minigame, watch an intro video, etc. to veil the installation), play the game.
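A minimal sketch of that "veil the installation" idea: a background thread copies the game to the HDD while a foreground loop stands in for the minigame or intro video. The function names and the (deliberately tiny) numbers are all made up for illustration:

```python
import threading
import time

def install_from_disc(total_mb, rate_mb_s, done):
    """Hypothetical background task: copy game data from disc to HDD."""
    copied = 0
    while copied < total_mb:
        time.sleep(1)                     # one simulated second of copying
        copied = min(copied + rate_mb_s, total_mb)
        print(f"  installed {copied}/{total_mb} MB")
    done.set()

def run_veil(done):
    """Stand-in for the minigame/intro video that hides the install."""
    while not done.is_set():
        time.sleep(0.5)                   # tick the minigame, stream the video, etc.

done = threading.Event()
# Tiny numbers so the demo finishes in a few seconds.
worker = threading.Thread(target=install_from_disc, args=(300, 60, done))
worker.start()
run_veil(done)                            # returns once the copy completes
worker.join()
print("Install finished -- launch the game proper.")
```

The point is that the consumer never stares at a bare progress bar; the copy happens behind something they would be watching anyway.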
If asked right now what my ignorant opinion was about the biggest hurdles facing the next consoles (2010-2012 window), I would say input and storage. On storage, consumers are not going to tolerate even longer load times than they face now. And with graphical returns diminishing, it is my opinion that games becoming "bigger worlds" and more interactive (hence more memory) will be a big factor in console designs (will trading up to 4GB of memory over 2GB be a bigger boost to game design than, say, 20% more die space for GPU/CPU power?). Tiering memory for the best performance/cost is going to be a focal point of new console design and, as of right now, I think the PC model which MS is introducing to the 360 seems to be a strong front runner for their new console. There may be better approaches, but this one is a known quantity at this time.
So my nugget of thought is this: the Xbox update is as much about testing the viability of the concept for the future as anything else. Does it work? How do consumers like it? What negatives can we resolve? Can we begin developing our pitch for this feature now and condition consumers to see the negatives as positives?
Feedback: I would love to hear others give feedback on realistic solutions for future consoles. Memory tends to be overlooked but it is a big part of the experience, and with the growing importance of connectivity and the performance issues of optical media (as well as the cost/cost-reduction issues of SSDs and HDDs) there remain some significant questions. Likewise, future "bottlenecks" and expected game sizes and needs are important factors. How much are games going to grow? What can publishers afford to fund? Will the emphasis favor per-frame fidelity or "more stuff [space] for more diversity"?
Predictions? My guess is the new consoles will use optical media (BDR) and some Flash memory (cheap!), and will bite the bullet and use HDDs; MS will go back to using them, partially, like the Xbox1, caching game content to the HDD to overcome performance issues. It appears, at this time, to be the best cost/performance tradeoff.
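If that prediction holds, the read path would look something like a three-tier cache lookup: serve each read from the fastest tier that holds the data, falling back toward the disc (which is always in the tray anyway, per the ownership check). A rough sketch; the tier names, seek figures, and residency scheme are all my assumptions:

```python
# Sketch of the predicted tiering: fastest tier first, disc as the
# guaranteed fallback. Figures are illustrative assumptions only.

TIERS = [
    ("flash cache", 0.1),   # ~ms seek; small, cheap, holds hot data
    ("HDD cache",  12.0),   # ~ms seek; installed/cached game content
    ("optical",   120.0),   # ~ms seek; full copy of the game, always present
]

def locate(block_id, resident):
    """Return (tier, seek_ms) for the first tier holding the block."""
    for tier, seek_ms in TIERS:
        if tier == "optical" or block_id in resident[tier]:
            return tier, seek_ms

resident = {"flash cache": {1, 2}, "HDD cache": {1, 2, 3, 4, 5}}
for block in (1, 4, 9):
    tier, seek = locate(block, resident)
    print(f"block {block}: served from {tier} (~{seek} ms seek)")
```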
PS: Wow, this thread has grown! I haven't read the last, like, 20 pages! Sorry if this has already been discussed (I almost started a new thread for it, but it appears to work in here). When/if I get some spare time I would like to ask some devs what the biggest problems they are having this generation are, what realistic solutions they expect to be implemented, and their view of future HW. Lately there has been a lot of talk of everyone going cheaper on HW, but one thing to consider is that Ninny's model doesn't necessarily work for everyone else. And while an uber-complicated/expensive console is not reasonable, if MS/Sony undershoot they may find themselves in a poor position if the competitor brings better HW (in general) at a meager additional expense. That said, I think some budget massaging where pure performance takes a backseat to the memory model and other considerations that add to the gameplay experience (and game quality) in different ways is possible. A 100k poly model versus a 140k model isn't always easy to spot, and making a concession there for, say, 50% more memory may be a solid tradeoff.
The market is interesting right now as well, as partnerships could be of great importance. For example, Intel has purchased Havok and NV has acquired Ageia. Epic is using PhysX and has a long-running relationship with NV, so that may be an appealing avenue. I still think it is likely MS will go the route of more GPU silicon real estate for the reasons that (a) higher peak FLOPs for marketing purposes, (b) initial games will have some extra GPU headroom, and (c) down the road the GPU resource could be leveraged as games hit the CPU wall to get more out of the system, especially for new game designs with a focus on tasks that can use the resources (e.g. physics). This meshes with MS's control over D3D, leverages one of their core strengths, and wouldn't put them in a straight competition (a la CPU vs. CPU) as it would change the paradigm of competition some. I wouldn't be surprised if MS went with at least some high-throughput CPUs to accelerate development and make taming difficult-to-parallelize tasks quicker for smaller developers (with the motto: if you need more power, look to the vector units and/or the shader array). The acquisition of some middleware for PC/360 development, and to address the cost/ease of development issues, especially content generation, is something MS is already working on...
Well, that is all the time I have for now. I look forward to back skimming this thread!