+'s/-'s and feasibility of a lengthened hardware release cycle

JavaJones

Newcomer
Disregarding the "if's" and "why's" of it, what do you think would happen if the entire PC "Wintel" hardware industry was set on a 2, 3, or even 5 year hardware release cycle? What I mean by that is that a new processor would only come out once every 3 or so years, and when it did it would debut with all the models that the family would feature for the life of the product. So for instance, instead of Intel announcing the P4 at 1.5-1.8GHz and then later announcing the 1.9, 2.0, etc., the entire processor family would be released for different price levels and market segments initially. Same with graphics cards, etc. Perhaps only hardware that is non-critical in terms of special development support, such as hard drives, would not adhere to this schedule.

This is a part of a larger overall idea I've been mulling over for quite some time, so I won't bore you with the details for the moment. But I was curious what all your thoughts on this were. Pluses, minuses, what you perceive developers would feel about this vs. consumers, how profit models would change, would it be more or less profitable overall, etc. And I ask you to carefully consider the potential reality of it *before* you actually post.

Consider for instance that the console industry works this way (not that this is necessarily a guarantee of its applicability to the PC industry, but it is worth examining nonetheless), that even now we must wait 1-2 years and sometimes more before a hardware feature is truly taken advantage of, and that, like the console industry, process improvements could be made across all products, decreasing cost and maintaining profit over longer periods of time. Additionally, there would be more time to plan significant API upgrades and to time them to coincide with hardware features, *and* to get developers to implement said features as the hardware was released or shortly thereafter, since you could easily give developers approximately a 1 year lead time with emulation, APIs, beta hardware, and eventually final hardware even as much as 4-6 months in advance.

These are just a few of the possible advantages and considerations. I'm of course aware of a great many disadvantages as well, but I'll leave it to you to discuss those, as well as any additional advantages or factors you see.

- JavaJones
 
I don't think this would ever work, because the computer industry is not controlled by one entity. A console maker makes money on software, so it needs the 3 to 5 years to make a profit. Computer makers make money on hardware, so they are forced to keep coming out with new hardware as fast as consumers will buy it. For something like this to happen it would need to be driven by every software developer.

If software doesn't need new hardware, people might not buy new hardware, leading hardware manufacturers to make longer but bigger jumps. This is where it gets really tough to make something like this work, because even if you have software pushing for this type of release cycle, you still need the consumer to cooperate. And that will never happen. People don't want companies to hold back better products. These are "I want it and I want it now" days. Just look around at how many people have already ordered GF4's, even though you can argue that the GF4 isn't really needed yet.
 
3dcgi, I think it's more feasible *in practice* than you seem to think, although I agree that making the change would certainly not be easy. Then again, I wasn't proposing that this sort of change actually be made; I'm just interested in speculation on how beneficial or otherwise it would be to the various industries and parties involved: the hardware developers and vendors, the software developers, and the consumers.

Coming out with new hardware every year or every few months costs money in R&D and revision time, new QA processes, etc. If you had, say, a 3 year hardware cycle, by the time the first year was up you'd have recouped all those initial design costs for the current generation, *and* the manufacturing process would have come down in price. Even taking price drops at retail into account, a hardware maker could continue making a profit for several more years. You cannot assume that everyone upgrades simultaneously, so there ought to be a market for the life of the product.

Consider where the largest percentage of sales is these days anyway: usually 1-2 year old technology in the OEM market. Imagine, in essence, simply bringing OEM and everything across the board up to a certain standard of technology, and keeping it there for several years. I don't see a reason why that would mean less profit, nor why it would require a governing body to "enforce" it. Margins would no doubt be lower, but costs would also be *much* lower, especially in the latter half of a product's life cycle, and volume would be significantly higher per product generation as well. It would also greatly help the software industry, IMO, because you would essentially have a much smaller spread of hardware configurations to target.

People want new *little* technology changes. Think about the stereo industry. Very little major change has gone on there for years: a few more A/V connectors on the amp, more watts per channel, etc. Basic Dolby Digital and DTS were the top of the line in surround sound for years (they're still very much the mainstream), and stereo was state of the art for years before that. New standards are emerging but are very slow to be adopted. They're on *at least* a 3 year cycle, yet consumers don't seem to mind, and they buy plenty of compatible equipment in a given hardware generation. Same with video technology: VHS to DVD was 20+ years. In most senses it's the same situation; we're talking about the underlying technology. Granted, one is a media format and the other is a processor technology, but in the case of music or movies, the media format is the primary limiting factor in the experience. In the case of computing, it's the processor and graphics card.

The point is that there is no reason to believe that, if software was taking advantage of a particular piece of hardware (which it would), people wouldn't buy it, no matter how long it had been out. The same hardware generations and limitations are present in the console industry, but it doesn't stop them, now does it?

It doesn't matter *where* you make your profit. The computer industry has *always* charged more for its products than the console industry, and people are used to that and expect it. The reason computer equipment costs what it does is because the makers earn their money off the hardware and have no royalty system in place to profit off the software, so selling hardware at a loss is not a good strategy. This doesn't need to change. The hardware prices stay the same; the hardware itself just lasts a lot longer, probably ultimately costing less in the long run.

From where I'm standing it looks like it would be as good or better for the hardware industry, and *a lot* better for the software industry. Imagine games being able to actually take full advantage of the current "top of the line" hardware.

The only reason this system is *not* adopted is that if one manufacturer comes out with several products in the time another puts out one, the one with several products will "win", regardless of how good the other's products are generation to generation. At least under the current, entrenched system. Ultimately a longer development and release cycle is a better system to work under though, as far as I can see.

I suppose I see what you're saying in that it would take cooperation *initially* to change the industry to a longer cycle, but after that there would be no need for a governing body. It's industry inertia that needs to be overcome here. Because if the industry already worked that way, it would actually be a *disadvantage* for a company to come out with a bunch of new hardware products because no one would support it.

In fact, the software makers have the ultimate power here if they choose to use it. They just need to get together and agree on it. If they refuse to develop for the next generation of processors, graphics cards, etc. then those products will make less profit and hardware companies could, in theory, be forced into longer development cycles. And again, once the change was made, it seems to me it would be as good or better for everyone involved, including the consumers.

The only major obstacle is consumer and developer expectations and willingness to buy "the next great thing". Developers can, in theory, overcome that obstacle. Without software, the products of the hardware industry are useless.

However, I didn't really intend to discuss whether this would actually happen when I started this topic. I don't think it will. I'm just wondering what disadvantages people might see in the actual operation of such a system, not so much in *how* we get there, as again I don't think we will.

- JavaJones
 
The problem is your examples are bad. There are inherent differences between the computer (hardware) industry and the "movie" (VHS/DVD) industry, or the "Home Theater System Receiver" industry.
The main difference is that computers are backwards compatible, with no discernible differences between new hardware and old (to the end user) other than speed (and maybe a few features, but honestly, who would buy a GF4 for features? The R8500 has it totally beat... it's all about speed). And software runs fine on older hardware (to a large degree). With something like VHS/DVD, they are totally incompatible, which is why the change took so long. Likewise, in the Home Theater (sound) area, newer tech gives you better sound quality (noticeable!!!) and 3D immersion.
The fact that the Home Theater (sound) market has some backward compatibility (any old stereo will sound OK with a DVD player) causes it to change much more rapidly than movie standards like VHS/DVD.
The computing industry's almost 100% backward compatibility is exactly what drives it forward in such a damn rush! Why not churn out a newer processor - everything will still run on it, and it's faster, so of course people will buy it instead of last month's slower model (this is hyperbole, but you get the point).
If you tried this in the VHS/DVD area, your new "player" that played "SAEM-DISCS" :smile: would be a flop, because there is no market for it, and it's not backwards compatible.
To move to a more stable market would take ALL the hardware teams working together to establish HARD rules for compatibility (in each generation) and rules for determining next-gen specs (hardware). Software would have to follow hardware's lead, and by the time software was maxed out on a given generation, new hardware would be ready to come out. Of course, prices would have to fall for this to work, which would only happen with massive integration of hardware (which would be easy considering the un-need to upgrade!)...

Anyways, I am going camping to freeze my ass off in 6 hours, so I am going to sleep. Goodnight, and I'll be here to read your rebuttals in a few days :smile:
 
Son of a...

Well I typed up a long reply to this, and forgot to put in my l/p so I lost it when attempting to post. Argh!

Short summary:

Yes my examples sucked, couldn't think of better ones, point IMO still stands however. Flawed examples are not proof of a flawed concept.

Standards would work the same way they do today, no reason to think they wouldn't.

Retail cost would start out the same and go down, but production cost would also go down. R&D would also be more cost effective and targeted. More extended QA due to the longer development cycle would mean fewer bugs and fewer revisions.

But the original question is still not being discussed. Not how or whether, but *if* it happened, would it be good? I ask because only after considering the value of future possibilities could we decide upon one to strive for, if that were our desire. Without such discussion, we leave the future of computing in the hands of the selfish hardware industry.

I can't count how many times I've heard people lament the constant upgrade cycle, and yet no one really seems to care about this topic? Unbelievable.

- JavaJones
 
I can't count how many times I've heard people lament the constant upgrade cycle, and yet no one really seems to care about this topic? Unbelievable.

Actually, I don't believe there are that many people that "lament" the so-called constant upgrade cycle. The ones that do are typically very vocal, that's all. ;)

AFAIC, there is no such thing as a "constant upgrade cycle." No one is forced to upgrade. Hell, I had a Voodoo3 2000 in my system until this past November. I personally couldn't justify upgrading the video card until then. Anyone with a Radeon 8500 / GeForce3 class card right now is, in reality, probably set for a couple of years.

It certainly costs money to constantly be "on top" of the performance curve. Money that I cannot personally justify spending. However, I'm content knowing that I have the choice to do so if I so desire.
 
I agree with Joe, and I would like to add that one of the biggest pressures comes from people you know.

The other day I was using my "old" notebook with Win95 to give a presentation. One of my friends said "oh, do you still use Win95?" :rolleyes:

I don't need to buy a new version of Windows and Office to do my job, and I will probably keep it 5 more years :D

The same happens with other PC technologies.
edited: Be a smart buyer: just buy what you need at a good price/performance ratio, and think about the future.

 
Perhaps you are right and it *is* a vocal minority. But regardless, would a longer development and release cycle be better or worse in your opinion? How and why?

Personally I'm still running a PIII 600 with a GeForce 1, and I'm not really worried about upgrading yet either. It's not a question of my own personal inability to disregard the latest hardware's largely useless new features. It's a question of whether it's better for the industry and consumers to have a longer release cycle. Again, my personal belief is that it would have a lot of benefits. I think the primary difficulty would be in actually making any move toward that change, but I don't see a lot of problems with the conceptual workings of such a system itself, and discussion of such potential problems is what I'm interested in.

Besides, just because something is "ok" now doesn't mean it can't be better, that it shouldn't be better, that we don't *want* it to be better. I have a hard time believing that, given an alternative, you would still be fine with buying a 3D accelerator 50% of whose features will not be taken advantage of for at least a year, and which by then may not be fast enough to even run your chosen applications at playable speeds and preferred resolution.

Under the system I propose, hardware would be taken advantage of within *at most* 6 months of release *and*, as in the console industry, developers' ability to target said hardware would increase throughout its lifetime, with a new software generation ready to take advantage of new hardware as it's released. Additionally, hardware and software bugs would likely be reduced. Developers' target markets would expand, increasing their profits and ability to innovate.

So, I'm still looking for problems with the *workings* of such a system. Not how we get there, which is another problem to consider assuming it's found that this is a reasonable alternative to our current system.

- JavaJones
 
The computer gaming industry would probably take a major hit, since graphics technology and software compatibility would still be hurt by the differences in 3D hardware - consoles late in their life tend to start showing their age compared to the current PC gaming standards of the time.

Can you imagine how awful console games would be (on a product lifecycle comparable to what you're suggesting), especially after 2-3 years once the hardware has aged, if their software had to be developed to work on multiple different OSes, APIs, CPU architectures, CPU clock speeds, graphics chip architectures and unique featuresets (even if using only one specified API version, you'd still have different featuresets overall to optimize for), screen resolutions, etc. like PC software has to be? Even if software were only 'allowed' to be developed for the current generation, there'd still be far too much variation involved to yield good results.
 