64 bit computing

K.I.L.E.R

My new toy has gotten me interested in its inner workings.

So far I've concluded:

* 64 bit computing is expensive (64 bit computing eats double the cache of 32 bit computing)

Now I'm asking a question:

Is 64 bit computing worth it on the desktop market?

Will I ever see any benefits?
 
K.I.L.E.R said:
My new toy has gotten me interested in its inner workings.

So far I've concluded:

* 64 bit computing is expensive (64 bit computing eats double the cache of 32 bit computing)

Will I ever see any benefits?

Yes, it costs double the money too.
No, you will never see any benefits.
 
Tahir said:
K.I.L.E.R said:
My new toy has gotten me interested in its inner workings.

So far I've concluded:

* 64 bit computing is expensive (64 bit computing eats double the cache of 32 bit computing)

Will I ever see any benefits?

Yes, it costs double the money too.
No, you will never see any benefits.

I can't agree with you on this. I'll post some links later today, you'll see.

Some people were saying the same thing in the past during the 16-bit to 32-bit transition.

RainZ
 
Don't worry RainZ, I was posting in the style of K.I.L.E.R. He likes to say the opposite of what he thinks from time to time, and it confuses everyone. 8)
 
Thanks, RainZ. I don't know too much about 64 bit computing. I just went over some documents and confused myself to an extent. :D
 
* 64 bit computing is expensive (64 bit computing eats double the cache of 32 bit computing)

I'm no CPU expert, but AFAIK this is incorrect WRT AMD64.

We can estimate that 64-bit code will be 20-25% bigger compared to the same IA32-based code. However, the use of sixteen GPRs will tend to reduce the number of instructions, which could make the 64-bit code shorter than the 32-bit code. This will depend on the function complexity, and of course on the compiler.

From here. It's a really good article actually, not too technical either.


Now, you may notice that the chart specifies 64-bit mode's default integer size as 32 bits. Let me explain:

We've already discussed how only the integer and address operations are really affected by the shift to 64 bits, so it makes sense that only those instructions would be affected by the change. If all the addresses are now 64-bit, then there's no need to change anything about the address instructions apart from their default pointer size. If a LOAD in 32-bit legacy mode takes a 32-bit address pointer, then a LOAD in 64-bit mode takes a 64-bit address pointer.

Integer instructions, on the other hand, are a different matter. You don't always need to use 64-bit integers, and there's no need to take up cache space and memory bandwidth with 64-bit integers if your application needs only smaller 32- or 16-bit ones. So it's not in the programmer's best interest to have the default integer size be 64 bits. Hence the default data size for integer instructions is 32 bits, and if you want to use a larger or smaller integer then you must add an optional prefix to the instruction that overrides the default. This prefix, which AMD calls the REX prefix (presumably for "register extension"), is one byte in length. This means that 64-bit instructions are one byte longer, a fact that makes for slightly increased code sizes.

Increased code size is bad, because bigger code takes up more cache and more bandwidth. However, the effect of this prefix scheme on real-world code size will depend on the number of 64-bit integer instructions in a program's instruction mix. AMD estimates that the average increase in code size from x86 code to equivalent x86-64 code is less than 10%, mostly due to the prefixes.

This one is from ARS and is another excellent article.
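
To see what that "default integer size" business means in practice, here's a quick check you can compile yourself. This assumes the usual LP64 model that gcc uses on AMD64 Linux, so it's just my own illustration, not something from the articles:

#include <stdio.h>

/* Assumes the LP64 data model used by gcc on AMD64 Linux:
   int stays 32-bit, while long and pointers become 64-bit. */
int main(void)
{
    printf("sizeof(int)   = %u\n", (unsigned)sizeof(int));    /* 4 on LP64 */
    printf("sizeof(long)  = %u\n", (unsigned)sizeof(long));   /* 8 on LP64 */
    printf("sizeof(void*) = %u\n", (unsigned)sizeof(void *)); /* 8: 64-bit addresses */
    return 0;
}

In other words, addresses grow to 64 bits but your plain ints stay 32 bits unless you explicitly ask for more, which is exactly what the REX prefix is for.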

Here are some benchmarks of 64-bit vs. 32-bit on Linux; scroll down and look at the encoding and POV-Ray benchmarks.

Hope that helps.
 
I think that going from 32 bits to 64 bits is pretty different from going from 16 bits to 32 bits.

For example, in the 16-bit world most integers are 16 bits, which limits the range to -32768 ~ +32767 or 0 ~ 65535, a pretty small range. In a 32-bit environment integers are mostly 32 bits, and that range is enough for most usage.
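
A quick sketch of those ranges, using the <stdint.h> exact-width types as stand-ins for "a 16-bit int" and "a 32-bit int":

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* 16-bit signed: -32768 .. 32767, unsigned: 0 .. 65535 -- easy to overflow */
    printf("int16_t:  %d .. %d\n", INT16_MIN, INT16_MAX);
    printf("uint16_t: 0 .. %d\n", UINT16_MAX);
    /* 32-bit signed: roughly +/- 2.1 billion -- enough for most everyday counters */
    printf("int32_t:  %d .. %d\n", INT32_MIN, INT32_MAX);
    return 0;
}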

Another matter is memory. Many 16-bit CPUs (such as the 8086) have 16-bit pointers, which address only 64KB (the 68K is different, though). 64KB is very small. However, 32-bit pointers provide a 4GB space, which is enough for most uses.

Furthermore, 64-bit applications need 64-bit pointers, which eat more cache space and memory bandwidth. Therefore, I think many people will still run 32-bit applications on 64-bit CPUs. Actually, that's already the case on many 64-bit RISC CPUs.
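
For a feel of how much that costs, here's a rough sketch. The layout and padding assumed are the typical ones for 32-bit x86 vs. AMD64 builds, so the exact numbers are illustrative:

#include <stdio.h>

/* A pointer-heavy list node: three pointers plus a small key. */
struct node {
    struct node *next;
    struct node *prev;
    void        *payload;
    int          key;
};

int main(void)
{
    /* Typically 16 bytes when built as 32-bit x86 (4-byte pointers),
       but 32 bytes when built as AMD64 (8-byte pointers plus padding),
       so the same list occupies twice the cache and bandwidth. */
    printf("sizeof(struct node) = %u bytes\n", (unsigned)sizeof(struct node));
    return 0;
}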

Therefore, the most probable situation in the near future is that people will run 64-bit OSes on 64-bit CPUs, but most applications will still be 32-bit. Only the applications that really need a very big memory space will be 64-bit.
 
pcchen, you seem to be forgetting the extra 8 GPRs with AMD64. This, IMHO, is the most useful part of AMD64 in the shorter term. The performance boost they give in applications, so far, seems to make up for the extra pointer size. If you look at the benchmarks I linked to earlier, OggEnc does not need a big memory space and has huge performance benefits on AMD64. Some things, obviously, perform slower, but normally only a bit slower, whereas the things that benefit can show pretty big increases.

Obviously, applications which get no performance boost from AMD64 will remain 32-bit; that's the beauty of the system.
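
For what it's worth, the register story isn't only about having more of them: the 64-bit calling convention also passes arguments in registers instead of on the stack. A little sketch (the function is made up; the register assignments are the standard System V AMD64 ones):

/* In 32-bit x86 cdecl every argument is pushed on the stack; under the
   System V AMD64 ABI the first six integer/pointer arguments arrive in
   rdi, rsi, rdx, rcx, r8 and r9, and the extra GPRs (r8-r15) give the
   compiler more room to keep locals out of memory. */
static long dot(const long *a, const long *b, long n)
{
    long sum = 0;
    long i;                        /* a, b, n, i and sum can all live in registers */
    for (i = 0; i < n; i++)
        sum += a[i] * b[i];
    return sum;
}

int main(void)
{
    long x[3] = {1, 2, 3}, y[3] = {4, 5, 6};
    return (int)dot(x, y, 3);      /* on AMD64: x in rdi, y in rsi, 3 in rdx */
}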
 
Actually, I was talking more about the general 32-bit to 64-bit transition. You know, although many RISC CPUs have already been 64-bit for several years, many OSes designed for these CPUs are still 32-bit.

Regarding x86-64, it's of course more complex. The 8 extra GPRs are a plus (along with other things such as the RIP-based addressing mode), but it needs a good compiler to use them. If you check SPEC CPU 2000, you may find that the Intel C/C++ compiler for x86 still performs better than GCC for x86-64. However, as Intel is starting to support x86-64, we may see an Intel C/C++ compiler for x86-64 in the near future, and that will be interesting :)
 
pcchen said:
Regarding x86-64, it's of course more complex. The 8 extra GPRs are a plus (along with other things such as the RIP-based addressing mode), but it needs a good compiler to use them. If you check SPEC CPU 2000, you may find that the Intel C/C++ compiler for x86 still performs better than GCC for x86-64.
Actually, that's only true for SPECint. In SPECfp, 64-bit Linux/gcc soundly (by about 10%) beats 32-bit WXP/icc (admittedly, PGI Fortran is also used for Linux, so it's not really comparing 64-bit gcc to 32-bit icc). In SPECint, though, 64-bit gcc doesn't stand a chance against 32-bit icc.
However, as Intel is starting to support x86-64, we may see an Intel C/C++ compiler for x86-64 in the near future, and that will be interesting :)
icc 8.1 beta supports EM64T, I heard. Don't know if it supports x86-64, or if Intel used the minor incompatibilities that exist between AMD's and Intel's versions as an excuse not to support AMD's version...
 