64-bit CPUs for consoles. When?

clem64

Newcomer
http://story.news.yahoo.com/news?tmpl=story&u=/ap/20030923/ap_on_hi_te/amd_athlon_64_1

AMD Launches Athlon 64 Processor.

AMD's new Athlon 64 — so named because the chips process data in 64-bit chunks — can handle exponentially more memory than 32-bit Pentium 4 chips from Intel. But to take full advantage of the new technology, computers must have supporting software and much more installed memory.

AMD strongly disagreed during its flashy, 90-minute launch event. It pointed to a variety of software firms working on 64-bit programs, including Microsoft Corp., which is developing Windows XP 64-Bit Edition, and Epic Games, which is updating its popular Unreal Tournament.

AMD said the new processors will usher in the next wave of innovation for PCs, including "cinematic" graphics.

The Athlon 64 chips can run programs that were designed for 32-bit processors, and they are designed to boost a computer's performance regardless of whether its programs are running in 32- or 64-bit mode.

Considering that consoles tend to have much less RAM than PCs, will we see true 64-bit CPUs in consoles anytime soon? I guess the PS3/Xbox2/GC2 gen is out of the question.
 
But isn't it working only with 32-bit ints and floats?

I hadn't thought about the PS2. :oops: I guess it can do 64-bit processing. A better question would be: when, if ever, will gaming require true 64-bit processing?
 
I'm guessing you mean in the PC/Xbox arena? 'Cause older consoles have been using 64-bit CPUs for a while...

As for PC/Xbox.. when they feel like getting register-happy with x86-64?
 
64-bitness comes in three forms: pure marketing BS, processing, and addressing. Any of these can get you 64-bitness.

64-bit addressing is unnecessary for the console space for some time to come, AFAICS. As for processing, well, if you consider vector units, 64 bits is too small.
 
Computers need it because a 32-bit "virtual" address space (4GB) is too small for certain applications. But as long as consoles have less than 4GB of memory, they won't really need more than 32 bits.
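(To put a number on that 4GB ceiling, here is a minimal C sketch of the arithmetic, assuming a 32-bit target where flat pointers are 4 bytes wide:)

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Assumption: a 32-bit target, where a flat pointer is 4 bytes.
       On the 64-bit chips this thread is about it would print 8. */
    printf("pointer size: %u bytes\n", (unsigned)sizeof(void *));

    /* 2^32 addressable bytes is exactly the 4GB limit under discussion. */
    uint64_t max_bytes = (uint64_t)1 << 32;
    printf("32-bit flat address space: %llu bytes (%llu GB)\n",
           (unsigned long long)max_bytes,
           (unsigned long long)(max_bytes >> 30));
    return 0;
}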
 
Saem said:
As for processing, well, if you consider vector units, 64 bits is too small.

In that case, 128 bits would be the applicable figure wrt consoles.

I always thought the 64-bitness of a CPU came from how the integer processing units were implemented.
 
randycat99 said:
I always thought the 64-bitness of a CPU came from how the integer processing units were implemented.

As far as computers go, it has really always been about the size of "flat" pointers. If not, then even a 486 could claim some "64-bitness", since its FPU can also process 64-bit integers.
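(A tiny illustration of why FPU data width doesn't confer "bitness". Plain C, nothing exotic assumed: on a classic 32-bit x86 the compiler keeps the 64-bit integer in two 32-bit halves, yet the conversion to floating point can pass through the FPU in one go, historically a single x87 fild:)

#include <stdio.h>

int main(void)
{
    /* On a 32-bit x86 target this value lives in two 32-bit halves,
       and 64-bit operations on it are synthesized from 32-bit ones. */
    long long big = 123456789012345LL;

    /* Yet the FPU can consume all 64 bits at once when converting
       to floating point. Nobody calls the chip 64-bit for it. */
    double d = (double)big;

    printf("%lld -> %.0f\n", big, d);
    return 0;
}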
 
Saem said:
64-bitness comes in three forms: pure marketing BS, processing, and addressing. Any of these can get you 64-bitness.

64-bit addressing is unnecessary for the console space for some time to come, AFAICS. As for processing, well, if you consider vector units, 64 bits is too small.
On the Athlon 64 in 64-bit mode you have more registers too, and you will also see a performance increase from that.
 
Traditionally the criterion for the bitness of a CPU is its address length. Given that, there's hardly any need to use 64-bit CPUs in consoles for the next generation. The noted performance gains in UT2003 can be attributed to the larger number of registers and other ISA-specific enhancements of x86-64. The EE, for example, implements a subset of the 64-bit MIPS IV ISA with 32-bit addressing, yet its word length is 32 bits, so this processor would not qualify as a 64-bit CPU in the workstation realm (every Pentium also has wider datapaths: FPU, MMX, SSE, etc.).

The point, however, is that on the computational side, 64-bit processors (with all processing done on 64-bit words) are only at an advantage when the 32-bit domain offers unsatisfying precision for a significant share of the computations (so that higher precision can only be achieved by multiple instructions). If 32-bit precision is enough, you profit from (potentially faster) smaller hardware implementations of ALUs and, most important, a much lower data/code footprint. To sum it all up, higher bitness generally hardly equals better/faster, and 32-bit computing will IMO remain the sweet spot in embedded markets for years to come (until 4GB addressing becomes a necessity there).
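(To make the "multiple instructions" point concrete, here's a minimal C sketch of what a 32-bit ALU has to do for a single 64-bit add: two 32-bit adds plus explicit carry propagation. Illustrative only; a real compiler emits an add/adc pair on x86:)

#include <stdio.h>
#include <stdint.h>

/* Emulate a 64-bit add using only 32-bit operations, the way a
   32-bit ALU must: add the low halves, detect the carry, then
   fold the carry into the high-half add. */
static void add64(uint32_t alo, uint32_t ahi,
                  uint32_t blo, uint32_t bhi,
                  uint32_t *rlo, uint32_t *rhi)
{
    uint32_t lo = alo + blo;
    uint32_t carry = (lo < alo);   /* unsigned wrap => carry out */
    *rlo = lo;
    *rhi = ahi + bhi + carry;
}

int main(void)
{
    uint32_t lo, hi;
    /* 0x00000001FFFFFFFF + 1 = 0x0000000200000000 */
    add64(0xFFFFFFFFu, 0x00000001u, 1u, 0u, &lo, &hi);
    printf("result: 0x%08X%08X\n", hi, lo);
    return 0;
}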
 
But isn't it working only with 32-bit ints and floats?
Floats were never part of the definition, IIRC. After all, x86 CPUs have had an 80-bit FPU for roughly the last 15 years.

Anyway, if you break down the EE core:
GP register width - 128-bit
FP registers - 128-bit
Bus - 128-bit
Address - 32-bit physical, 32-bit virtual
Integer arithmetic - 64-bit, partial 128-bit
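(For what "partial 128-bit" usually means in practice: SIMD-style partitioned ops, i.e. several narrower adds across one 128-bit register, with no carry crossing lane boundaries. A rough, hypothetical C model of the idea, not actual EE code:)

#include <stdio.h>
#include <stdint.h>

/* A 128-bit register modeled as four 32-bit lanes. */
typedef struct { uint32_t lane[4]; } reg128;

/* Lane-wise add: four independent 32-bit adds in one "instruction",
   the flavor of 128-bit integer op multimedia ISAs typically provide. */
static reg128 padd32(reg128 a, reg128 b)
{
    reg128 r;
    for (int i = 0; i < 4; i++)
        r.lane[i] = a.lane[i] + b.lane[i];
    return r;
}

int main(void)
{
    reg128 a = {{1, 2, 3, 4}};
    reg128 b = {{10, 20, 30, 40}};
    reg128 r = padd32(a, b);
    printf("%u %u %u %u\n", r.lane[0], r.lane[1], r.lane[2], r.lane[3]);
    return 0;
}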

Speaking of addressing, Gekko already has a 52-bit virtual address space, so GCN is future-proof in that regard 8)

SMarth said:
As far as computers go, it has really always been about the size of "flat" pointers.
Sure could have fooled me, seeing how graphics companies were making 256-bit chips and whatnot more than 5 years ago... :p
And the size of the memory word was the first relevant criterion I remember being taught. Seems to me no single definition is good enough on its own today, though.
Oh, and btw, you know of a way to do bit arithmetic in an FPU? ;)
 
PiNkY said:
Traditionally the criterion for the bitness of a CPU is its address length. Given that, there's hardly any need to use 64-bit CPUs in consoles for the next generation. [...]
Of course, when you consider that the 3200+ A64 is $400 compared to $600 for the P4 3.2GHz, it would really be foolish to use a 32-bit chip. That, of course, is looking at what we can buy them for; who knows how cheaply Intel will sell the P4 to MS. If AMD were smart, they'd almost give away the Athlon 64: being in the Xbox 2 would give it a lot more support than it would have had otherwise.
 
clem64 said:
[...] Considering that consoles tend to have much less RAM than PCs, will we see true 64-bit CPUs in consoles anytime soon? I guess the PS3/Xbox2/GC2 gen is out of the question.

Umm... the Nintendo64 had a MIPS R4300i (64-bit) and a coprocessor based on the R4000/4600 (also 64-bit). The Dreamcast used the SH-4, which was 64-bit.

The Midway Seattle and Vegas arcade hardware (NFL Blitz, Gauntlet Legends, etc.) used 64-bit MIPS R5000s.
 
PiNkY said:
Traditionally the criterion for the bitness of a CPU is its address length.

You sure about that?

I never heard ANYONE refer to the MC68000 as a 24-bit CPU, for example. I'd say the width of the GPRs coupled with the instruction set is a much better indication of bitness.

The MC68000's GPRs and instruction set are 32-bit, its data bus is 16-bit, and its address bus is 24-bit. Most people agree the 68000 is a 32-bit CPU overall, not 16 or 24.

"Tom" coprocessor in Atari Jaguar had 32-bit GPRs, but had 64-bit bus and instructions that processed 64 bits of data in one go. 64-bit CPU? Well, you tell me, heh. Even the experts disagree here, lol!

Emotion Engine in PS2... Well, Faf went over that already. :) 64-bit CPU minimum, and I doubt anyone will say otherwise. ;)


*G*
 
jvd,

There's no way AMD has the fab capacity to address the Xbox2, the PC market, and the workstation/server market at 130nm. Perhaps at 90nm, but even that's stretching things.
 
Fafalada said:
Speaking of addressing, Gekko already has a 52-bit virtual address space, so GCN is future-proof in that regard

Flat... not! ;) Seriously...


Fafalada said:
SMarth said:
As far as computers go, it has really always been about the size of "flat" pointers.
Sure could have fooled me, seeing how graphics companies were making 256-bit chips and whatnot more than 5 years ago... :p

I did say computers, not graphics companies or device XYZ, no? ;)


Fafalada said:
And the size of the memory word was the first relevant criterion I remember being taught.

Same here, but even at that time it was already incorrect for a lot of machines. In fact, I remember how the teacher was being prudent in trying to compare apples with apples: registers, processor bus, memory bus, data word size, ...


Fafalada said:
Seems to me no single definition is good enough on its own today, though.

Only if you want an absolute/global definition for everything. So of course you can be picky and start an argument for petty reasons if you want, but when we refer to a computer being 32- or 64-bit, we generally mean 32-bit code vs. 64-bit code. If you want to speak about anything else, you will have to be more precise.


Fafalada said:
Oh, and btw, you know of a way to do bit arithmetic in an FPU? ;)

Why :rolleyes:
 
Grall said:
I never heard ANYONE refer to the MC68000 as a 24-bit CPU, for example. I'd say the width of the GPRs coupled with the instruction set is a much better indication of bitness.

At that time, I often remember the term "pseudo 32-bit" being used to describe the 68K.

The 70s and 80s were interesting times because there were a lot of weird machines, CPUs, and devices. But when the frenzy began to calm down in the late 80s and early 90s, it became normal to refer to 32-bit code vs. 64-bit code when describing computers. Then the larger data words we saw appear in the 90s recreated the confusion, since they were quite useful for marketing (though taken alone they are often meaningless). As for consoles, it's always fanboys vs. fanboys, no? 8)
 
Pinky,
I'm just reminding you of this now, but every well-educated person has said the same thing each time.
BTW - graphics, physics, and other 3D models need that kind of precision long before business apps do.

Fact is, with network addressing we need it. The internet needs it.
Fundamentally, the soil needs to be fertile before a plant/program can grow. :)

MS and IBM are both in on the grid/network computing theme.
To lower processing time and the number of packets, shifting the world to a 64-bit standard is a smart move.
 
Grall said:
64-bit CPU minimum, and I doubt anyone will say otherwise. ;) *G*

Well, I will! :devilish: Just because it can manipulate 64/128-bit data doesn't mean it can execute 64-bit code (not that there would be any reason to). Hell, even the P4 can manipulate 64/128-bit data (though not as well as the EE), but it's still a 32-bit processor.

You can call it a 128-bit console CPU, but no one in their right mind would call it a 64-bit computer CPU. What about keeping apples with apples? 8)
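(A minimal sketch of that point, using standard SSE2 intrinsics and assuming a compiler that supports them: a plain 32-bit x86 CPU happily performs one 128-bit-wide integer op, yet code and pointers stay 32-bit:)

#include <stdio.h>
#include <emmintrin.h>   /* SSE2 intrinsics */

int main(void)
{
    /* Two 128-bit values, each holding four 32-bit integers. */
    __m128i a = _mm_set_epi32(4, 3, 2, 1);
    __m128i b = _mm_set_epi32(40, 30, 20, 10);

    /* One 128-bit-wide operation: four 32-bit adds at once. */
    __m128i r = _mm_add_epi32(a, b);

    int out[4];
    _mm_storeu_si128((__m128i *)out, r);
    printf("%d %d %d %d\n", out[0], out[1], out[2], out[3]);
    return 0;
}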

[Edited]
 
Regarding the XBOX2 and what CPU MS use:

Prescott has been rumoured to include a hidden 64-bit mode as well. This is just a rumour at the moment, but it would give MS the chance of using an Intel-based CPU again that has '64-bitness.' It is highly unlikely the standard would be the same as AMD's x86-64.

I think jvd has an excellent point: getting an AMD 64-bit processor into the XBOX2 would reap huge benefits for the x86-64 market, and if I were AMD I would certainly sacrifice some resources to make sure it happens. However, Intel is several times larger than AMD and would likely pull the rug out from under AMD's feet again.

Another thing to consider with regard to the CPU in the XBOX2 is heat dissipation and power usage. It is believed that the Athlon 64 architecture will never go above 89 watts, and the Athlon 64 and FX already run cooler than an Athlon XP 3200+ in many tests around the web (see: www.aceshardware.com). The Prescott spec apparently calls for at least 100 watts, IIRC.

Now, MS could do something entirely different, and we should not discount the possibility that MS won't use Intel or AMD CPUs at all. There is nothing stopping them from using a PowerPC processor, for example.

I believe the next consoles will all be using 64-bit processors, as the timescale shows that by then both Intel and AMD will have significant volumes of 64-bit processors (assuming Prescott does have its own 64-bit standard too). Of course, AMD may still be making Durons and Athlon XPs, and Intel will surely still be making its Celeron variety.

In 2005/6, won't 90nm be the standard anyway, as 130nm is now?
 