If it wasn't for gaming would you still upgrade frequently?

  • Yes: 9 votes (10.5%)
  • No: 77 votes (89.5%)
  • Total voters: 86
I do it because I enjoy it. Playing on an MX440 right now, HL2 and CS:S play just fine, and really I realize I don't care too much about them looking a ton better. I still plan on building a killer game machine... and those are the only two games I'll probably be playing.
 
Entropy said:
Waffle apologising for Intel's inability to improve their CPU performance, resulting in a focus on other areas
Yah sure, maturity, right :???:
 
I still would. Those videos still take hours to encode ;)
Apart from that I have already gone from "pure performance" upgrades to "a bit faster and less power/noise" upgrades.
 
Consoles will have quite a bit to go before catching up to PCs imo.

1080p is a step in the right direction but won't really be utilized this round. So maybe 5 years from now we will see this jump in resolution.

KB + mouse is too big a factor for me when it comes to any kind of gaming except sports.
I believe the PS3 is coming out with KB + mouse support and the Xbox 360 may add this feature down the road. Right now consoles are only good for sports games imo.
 
Is it 1080p or 1080i? Isn't it impossible to get 1080p content right now? I was reading that somewhere...
 
If it weren't for games I'd still be on a ~1GHz Athlon, but I might have gotten around to updating the S3 Virge, if only for DVI and higher resolutions. ;)

Actually now that I think about it, I'd probably have shifted to the likes of a Mac Mini or an EPIA rig by now.
 
Maintank said:
I believe the PS3 is coming out with KB + mouse support and the Xbox 360 may add this feature down the road. Right now consoles are only good for sports games imo.
Keyboard/mouse is not comfortable on a couch, though. I had an opportunity to try it out over the last couple of days when I was away on a couple of back-to-back trips and had nothing but a well-aged laptop for playing games. My wrist got sore very quickly if the mouse was resting on a couch cushion. Keyboard in the lap is also not ideal.

So I don't think that keyboard/mouse with a console will ever become standard. Much better to seek an entirely new control scheme, like Nintendo is doing.
 
I wouldn't upgrade, and I am not going to for another 5-7 years. I am done with PC gaming; unless I play something that makes me upgrade, I am not going to upgrade for at least 7 years.
 
I am still on an Athlon XP 2600+ and have no need to upgrade. (Or to be exact, the only need is Open Transport Tycoon Deluxe netgames, because on a 1024x1024 map with 3 or more players, rail networks with lots of PBS signals tend to eat CPU power... and I am running the dedicated server on the same computer.)

I did change my 9700 non-pro to a 6600GT, but that was because of ATI's inability to support 1366x768 resolution more than a need for more power.


So, no... I don't see any realistic reason to upgrade as actively as I did previously.
 
arrrse said:
Entropy said:
Waffle appologising for Intels inability to improve their CPU performance resulting in focus on other areas
Yah sure, maturity, right :???:
It's bad form to edit quotes of other people.
If you don't like what they say, just say so.

Anyway, in this case there is little room for argument. AMD has for over five years (more, actually) been able to increase their CPU performance by no more than roughly 20% per year. More than Intel since Northwood, for sure, but the overall rate and ratio have been the same for the two pretty much since the introduction of the Athlon.

Nor has IBM or Freescale/Motorola done better.
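
To put that roughly-20%-per-year rate in perspective, here's a quick back-of-the-envelope comparison, just compounding rates like those quoted in this thread (illustrative arithmetic, nothing measured):

Code:
# Compound the quoted rates over five years.
per_year_now = 1.20                    # ~20% single-thread gain per year
per_year_then = 2 ** (12 / 18)         # ~1.59x/year if doubling every 18 months
print(f"5 years at 20%/yr:   {per_year_now ** 5:.1f}x")   # ~2.5x
print(f"5 years at old pace: {per_year_then ** 5:.1f}x")  # ~10x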

Five years ago, the cloud was looming, and you had to read the signs ahead, and listen to glum engineers in the semiconductor industry to see the trend. Today, this is no longer necessary, the slackening pace of CPU performance increase is a historical fact. No argument needed, nor references to authorities. The data is out there, and anyone can simply look for themselves. Arguing Intel vs. AMD is pointless. They are bound by roughly the same lithographic limitations, and serve the same markets with the same instruction set chips. Small wonder they track closely.

(Incidentally, for my home PC I've gone through five AMD Athlon CPUs since my Athlon 700, and I'll upgrade my 3500+ to a dual core in a week or so. It will hardly bring any benefit whatsoever to non-multithreaded apps, and will actually provide worse performance per watt than the 3500+ in just about all scenarios. I'm simply too curious about how the WindowsXP environment reacts to dual cores not to indulge in the experiment.)

Additional waffle: One positive aspect of the slowing rate of CPU improvements is that AMD and Intel have turned to improving their memory subsystems for performance increases. I find it a bit disappointing that both players simply slap dual cores into the existing memory infrastructure - it doesn't seem as if they expect much more total data to either enter the cores or flow out of them in spite of there being two of them now, so anticipated utilization rates can't be very high. So not only are very few applications capable of benefiting much from multiple cores, but those that do get memory constrained that much faster. Oh joy.
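
The memory-constraint point is easy to get a rough feel for. A minimal sketch (an illustration of the general idea, not anything from this post; it assumes Python with numpy on a dual-core box, and the array sizes are arbitrary): run a bandwidth-hungry loop in one process, then run two copies at once, and compare.

Code:
# Crude bandwidth-scaling probe: if the work is limited by the shared
# memory bus rather than by the cores, two workers doing 2x the work
# take noticeably longer than one worker doing 1x.
import time
import numpy as np
from multiprocessing import Process

N = 50_000_000                    # ~400 MB of float64 per worker; shrink to fit RAM

def stream_sum():
    data = np.ones(N)             # allocate and touch a large array
    for _ in range(5):
        data.sum()                # stream it through the core repeatedly

def run(workers):
    procs = [Process(target=stream_sum) for _ in range(workers)]
    t0 = time.time()
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return time.time() - t0

if __name__ == "__main__":
    t1, t2 = run(1), run(2)
    print(f"1 worker: {t1:.1f}s   2 workers (2x the work): {t2:.1f}s")

If both cores hang off the same bus and the loop is truly bandwidth-bound, the second worker spends much of its time waiting for memory, which is the utilization problem described above.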

And pancake: AMD's tech for 65nm looks good on paper. It might bring substantial improvements to their CPUs. It had better, too, since it's their platform up to roughly 2009. So with next year's Merom/Conroe and 65nm AMDs, 2006 looks set to become a quite interesting year for PC CPUs. Upgrades galore. ;)
 
Meh, that was a paraphrase not an edit.

The thing is, you are saying that Intel has done something which is completely different from what they actually did.

The AMD onboard memory controller was specifically designed for multicore because they saw the clock issues coming & realised they had to look elsewhere.
AMD completely redesigned the memory interface & basic CPU/mobo/chipset interface to help it reach those goals.
That is maturity.

Intel was focussed on core clocks even at the expense of performance/watt.
Having finally realised this was a dead end they panicked & stuck 2 P4s on one package with the same front side bus & at the expense of HT, then started talking up other peripheral stuff.
That is certainly not maturity.


Anyway, back on topic: I still have my 9800 & am looking to upgrade, but I'm probably gonna wait to see what 65nm AMD & the ATI X1xxx architecture bring.
 
Even with games I don't upgrade that frequently. Usually 1-2 years before a minor upgrade (small cpu bump, cheap video card upgrade, add some memory), and 3-4 before a major upgrade (expensive single purchase, or complete overhaul).

Oh, and even though Intel came out with the 3.06GHz P4 in 2002, it doesn't mean CPU performance stopped. The 2.8GHz 800FSB P4 performed better than the 3.06, then came 3.2GHz in 2003, with AMD at 2.2GHz. Now AMD is at 2.8GHz, which is a better increase than Intel has managed, and rather than merely increasing clock speed AMD also increased performance by using transistors for the integrated memory controller, which has surely pushed that 2.8GHz Athlon 64 up to at least a 3.4GHz Athlon XP's performance, and likely still better in many things.

And video cards seem to be on track to increase as fast as ever. From 2002 to 2003 there was barely any increase over the 9700 Pro, then the GeForce 6 and X800 series came out and had huge performance increases, then the 7800 and X1800 had lesser but still significant performance increases, and the next gen is supposed to have some good performance increases too. That, and there's SLI as well. Too bad increases like these used to happen without extremely high prices... $400 was the max video card price for many years, with a real push to go for lower prices at one point, then a few rare pokes at higher prices (GeForce 2 Ultra, I think) before finally going all out with $700 video cards.

Anyhow, wasn't Moore's Law that transistor counts double every 18 months? That doesn't guarantee performance does; only a few companies in the PC industry actually stuck to doubling performance every 18 months, and often it was unrelated to the transistor budget. (Though it's easier for graphics to double in performance with the transistor budget, since they have so much parallelism.)
 
Fox5 said:
Anyhow, wasn't Moore's Law that transistor counts double every 18 months?
Performance followed the advances in transistor densities at the same pace though, and did so through two different mechanisms.
* Clock frequency improvements from having finer geometry.
* Larger transistor counts allowing architectural improvements that improved IPC (instructions per cycle).

It's important to emphasize that it was not just clock frequencies. Pipelining, branch prediction, caches, prefetching logic, multiple execution units, extended instruction sets, register renaming, out-of-order processing, et cetera - all architectural advances that allow a single execution thread to process faster at a given frequency.
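
As a small illustration of how the two mechanisms multiply together (made-up numbers, only meant to show the relationship, not figures from this post): single-thread throughput is roughly IPC times clock frequency.

Code:
# Rough model: single-thread performance ~ IPC * clock.
old_ipc, old_clock = 1.0, 2.0e9        # baseline design (illustrative)
new_ipc, new_clock = 1.15, 2.6e9       # +15% IPC, +30% clock
speedup = (new_ipc * new_clock) / (old_ipc * old_clock)
print(f"combined speedup: {speedup:.2f}x")   # ~1.50x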

The reason processing speed increases have slowed to such a great degree is not only slackening clock frequency development but also that it has gotten progressively harder to extract higher IPC from sequential code. The low-hanging fruit was picked long ago; we've invested more and more transistor resources to inch forward in single-thread execution speed, and have correspondingly gotten less and less computational power out of the advances in transistor counts allowed by new lithography.

This has been obvious for a long time, and mainframes, commercial servers and compute servers have all turned to various forms of parallel processing for improving matters. PCs have kept pushing the Single Thread for longer than the others for many reasons, such as: cost (always number one in PC space), installed base, consumer and corporate inertia, the problems of programming and the OS environment, as well as conservative or lacking programmer training.
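
The "problems of programming" part has a well-known quantitative side in Amdahl's law: the serial fraction of a program caps what any amount of parallel hardware can deliver. A minimal sketch of the standard formula (a textbook relation, not something taken from this post):

Code:
# Amdahl's law: speedup on n cores when a fraction p of the runtime
# parallelizes perfectly and the rest stays serial.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.5, 0.9, 0.99):
    print(f"p={p}: 2 cores -> {amdahl_speedup(p, 2):.2f}x, "
          f"16 cores -> {amdahl_speedup(p, 16):.2f}x")
# Even 90% parallel code tops out around 6.4x on 16 cores;
# 50% parallel code barely reaches 1.9x.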

The architectural component is one reason that PC GPUs have been able to grow their processing power faster over the last decade than PC CPUs. But it is also the reason why, for instance, the original Cell processing paper takes as its basis the gains available if you reallocate the transistor budget spent on making a single thread execute as fast as possible and instead spend it on multiple execution units and interprocess(or) communication.
Everyone has seen the writing on the wall for quite some time.
At least everyone outside of consumer PC space.

Entropy

PS. Where this will take personal computing is an interesting question. For PCs, many of the constraints outlined above that limit development paths aren't going away, at least not in the foreseeable future. One thing is certain though - the CPU manufacturers are not going to develop their wares to serve gamers alone. They would probably like to in order to keep ASPs up, but out of the 200 million + PCs sold in 2005 only a small fraction were sold to gamers and marketing can't necessarily be relied on to be able to keep selling other people a CPU that draws more power, requires noisy cooling and costs much more than it needs to. If you try to do that, you are asking for your market to eventually move away from under your feet. Intel and AMD can be relied on to try to find a balance that lets them sell to everyone at the highest possible ASPs. They are not abandoning performance as a competitive arena, but are trying to introduce a new Single Figure of Merit, performance/Watt.

Forecasts for their 2006 offerings look pretty good.
 
Yeah. Just consider, if you will, that the Pentium II was, if I remember correctly, somewhere in the range of 5-10 million transistors. And yet the IPC of that design isn't far removed from that of today's high IPC designs. This might give some idea as to how powerful a processor could be if it started sacrificing single-threaded performance only slightly in order to cram many more cores into the same processor.
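
One rough way to put numbers on that is the oft-quoted rule of thumb that single-thread performance grows only with about the square root of the transistors spent on a core (Pollack's rule). A sketch under that assumption (the rule of thumb and the budget numbers are illustrative, not from the post):

Code:
# Assume per-core single-thread speed ~ sqrt(transistors per core) and
# split one big core's budget into n smaller cores.
from math import sqrt

budget = 100.0                         # transistor budget, arbitrary units
big_core_speed = sqrt(budget)
for n in (1, 2, 4, 8):
    per_core = sqrt(budget / n)        # each core gets slower...
    total = n * per_core               # ...but aggregate throughput rises
    print(f"{n} cores: {per_core / big_core_speed:.2f}x per core, "
          f"{total / big_core_speed:.2f}x total")
# 4 cores: each ~0.5x the big core, ~2x combined throughput -
# provided the workload actually spreads across them.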
 