Cell has endian-translating load/store instructions

What does it matter whether the byte you read is the 'nearest' or 'furthest'? From Phat's example, if I want to add 1 to 43565768, why should it matter if I call to address 00 or address 04 (or whatever it is)?

I'd have thought the argument for big-endianness is that there's no performance advantage either way (is there?), but things are easier to read and write big-end-first, since human programmers are more used to existing widespread languages that work that way.
 
Shifty Geezer said:
What does it matter whether the byte you read is the 'nearest' or 'furthest'? From Phat's example, if I want to add 1 to 43565768, why should it matter if I call to address 00 or address 04 (or whatever it is)?

I'd have thought the argument for big-endianness is that there's no performance advantage either way (is there?), but things are easier to read and write big-end-first, since human programmers are more used to existing widespread languages that work that way.
When you start to write code where you need to mess around with ...
Code:
union
{
  long   Val32;
  short  Vals16[2];
  char   Vals8[4];
}blah;
then you may appreciate the differences. I, for one, am a convert to little endian.
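To make the point concrete, here's a minimal sketch (assuming a 32-bit long, as the union implies) that fills Val32 and dumps the individual bytes:
Code:
#include <stdio.h>

union
{
  long   Val32;
  short  Vals16[2];
  char   Vals8[4];
} blah;

int main(void)
{
  blah.Val32 = 0x12345678;   /* assumes long is 32 bits */
  for (int i = 0; i < 4; i++)
    printf("Vals8[%d] = %02x\n", i, (unsigned char)blah.Vals8[i]);
  /* little endian prints 78 56 34 12, big endian prints 12 34 56 78 */
  return 0;
}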
 
Is there a performance advantage at all, or is it only syntax? I think a fair analogy is conventional maths syntax versus reverse Polish. I'm sure everyone would rather work with '3 + 2 x 6 + 4' than '3 2 6 x + 4 +', but once you appreciate its place on a stack and its computer-friendliness you learn to live with reverse Polish. Is little endian any more computer friendly?
 
Simon F said:
Shifty Geezer said:
What does it matter whether the byte you read is the 'nearest' or 'furthest'? From Phat's example, if I want to add 1 to 43565768, why should it matter if I call to address 00 or address 04 (or whatever it is)?

I'd have thought the argument for big-endianness is that there's no performance advantage either way (is there?), but things are easier to read and write big-end-first, since human programmers are more used to existing widespread languages that work that way.
When you start to write code where you need to mess around with ...
Code:
union
{
  long   Val32;
  short  Vals16[2];
  char   Vals8[4];
}blah;
then you may appreciate the differences. I, for one, am a convert to little endian.

I'm going to agree with Simon on this. I used to hate little endian, but these days I think it's actually the more sensible system. Not that there is much in it.
 
Shifty Geezer said:
Is little endian any more computer friendly?
If you have to implement multi-word/byte precision maths, then ordering your data in little endian would probably be more efficient than big endian.
 
Simon F said:
Shifty Geezer said:
Is little endian any more computer friendly?
If you have to implement multi-word/byte precision maths, then ordering your data in little endian would probably be more efficient than big endian.
I think the overall question is still why...?
 
ERP said:
I'm going to agree with Simon on this. I used to hate little endian, but these days I think it's actually the more sensible system.
Waaaait a minute. ERP hates little endian, consoles are little endian. ERP likes little endian, consoles switch to big endian.
It's all Your fault we're getting stuck with big endian, isn't it? Go back to hating, dammit :devilish:
 
The whole problem with endianness is just that people were so stupid as to store numbers the way they are written when designing the first binary stuff. And who cares about that anyway? Just do little-endian at the bit level in all cases (i.e. write numbers and words both from left to right, or both the other way around), and there is no problem whatsoever.

Why should that have any relation to how things are displayed on the screen? Nobody reads binary anyway.

:D
 
Fafalada said:
ERP said:
I'm going to agree with Simon on this. I used to hate little endian, but these days I think it's actually the more sensible system.
Waaaait a minute. ERP hates little endian, consoles are little endian. ERP likes little endian, consoles switch to big endian.
It's all Your fault we're getting stuck with big endian, isn't it? Go back to hating, dammit :devilish:

:p

Actually the only real reason I hated little endian is that I spent a lot of my time looking at hex views of memory, trying to establish exactly what data structure it is that has been written to memory it doesn't own.
 
ERP said:
Actually the only real reason I hated little endian is that I spent a lot of my time looking at hex views of memory, trying to establish exactly what data structure it is that has been written to memory it doesn't own.

Yes, debugging a PC in the eighties when you were used to a 68000 was not fun. ;)
 
Shifty Geezer said:
What does it matter whether the byte you read is the 'nearest' or 'furthest'? From Phat's example, if I want to add 1 to 43565768, why should it matter if I call to address 00 or address 04 (or whatever it is)?

When you have a pointer to a 4-byte counter, say, it points to the first byte. On a little endian machine, this first byte is the LSB, so you can start your operation (like addition) right away on this first byte and increment the pointer as you propagate the carry or whatever upward. On a big endian machine, you have to adjust the pointer to point ahead at the LSB and move it back down as you propagate your carry. (I know this example is antiquated, but replace "byte" with "word" and it's just as relevant for modern-day multi-precision math.)
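A minimal sketch of that (hypothetical inc32_le/inc32_be helpers, treating the counter as four raw bytes in whatever order the machine stores them):
Code:
/* Little endian: p points straight at the LSB, so walk forward
   for as long as the carry keeps propagating. */
void inc32_le(unsigned char *p)
{
    for (int i = 0; i < 4; i++)
        if (++p[i] != 0)   /* no carry out of this byte, done */
            break;
}

/* Big endian: the LSB is at the far end, so start at p[3]
   and walk back towards p[0]. */
void inc32_be(unsigned char *p)
{
    for (int i = 3; i >= 0; i--)
        if (++p[i] != 0)
            break;
}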

Another thing is in C, when you cast an int to a char, you get the LSB. When you cast an int pointer to a char pointer, what is that pointer pointing to? On a little endian machine, it's pointing to the LSB. On a big endian machine, it's pointing to the MSB--the symmetry breaks.
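For example (a small sketch, assuming a 32-bit int):
Code:
#include <stdio.h>

int main(void)
{
    int  x     = 0x12345678;
    char lsb   = (char)x;        /* the value cast: always the LSB, 0x78 */
    char first = *(char *)&x;    /* the pointer cast: first byte in memory,
                                    0x78 on little endian (same as above),
                                    0x12 on big endian (the symmetry breaks) */
    printf("%02x %02x\n", (unsigned char)lsb, (unsigned char)first);
    return 0;
}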

Big endian is simply a mistake.

Phat
 
Guden Oden said:
Fafalada said:
Actually MIPS cores are bi-endian by design, but the console versions typically just use one setting.
So if you have both settings to choose from, why intentionally pick the one that is backwards? There's no sense in that!

Because you do math from LSB to MSB.

Aaron Spink
speaking for myself inc.
 
Is this similar to Transmeta™ Code Morphing™ Software (CMS)?

Not at all... It just means that Cell's and Xenon's CPUs are using the hardware byte-swap support... It's a pretty common feature on PowerPC and several other ISAs, for that matter...
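Roughly what such an endian-translating load boils down to in C (a sketch using GCC's __builtin_bswap32; PowerPC's lwbrx instruction does the load and the byte reversal in one go):
Code:
#include <stdint.h>

/* Read a 32-bit value stored in the "other" endianness.
   On PowerPC the lwbrx instruction does this as part of the load;
   this is just the plain-C equivalent. */
static uint32_t load32_swapped(const uint32_t *p)
{
    return __builtin_bswap32(*p);   /* GCC/Clang builtin byte swap */
}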

Quote:
What part of PS2 is little-endian in nature???

All of it (as far as the CPU side goes).

Actually pretty much all of Sony's MIPS implementations have followed DEC's implementation...

RISC architectures in general have always tended to be big-endian in my experience, as that's the natural order of things. The only reason little-endian still lives on is because of dumbass Intel going that route with their x86 line way back in the 1800s.

Actually most RISC architectures are bi-endian... SPARC is the only major one off-hand that is pure big-endian (although the PowerPC 970 is also effectively pure big-endian, since it doesn't implement any of PowerPC's bi-endian support). MIPS is bi-endian: it ran big-endian on IRIX and little-endian on Ultrix and NEWS... PA-RISC is bi-endian but typically runs big-endian. Alpha is bi-endian but pretty much always ran little-endian. IA-64 is bi-endian but pretty much always runs little-endian. PowerPC is bi-endian and typically runs big-endian... ARM runs all over the place depending on where it's implemented...

Big endian only seems more natural because humans write numbers that way, but when has human convention ever been a good argument for naturalness?

Maybe I should call you "hpta" instead? :p Big-endians pretend to be English, while little-endians pretend to be Chinese... :p

Big endian is simply a mistake.

Just different, that's all... After all, the first computers were big-endian (little-endian didn't come around till the PDP-11)... Plus big endian made sense on the limited hardware of the time when reading punch cards and printing data (or storing data to linear tape reels)... In fact, most serial protocols are mainly big-endian (although they can be little endian as well)...
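That big-endian-on-the-wire convention is still baked into the standard sockets API as "network byte order"; for instance htonl, which is a no-op on a big-endian host and a byte swap on a little-endian one:
Code:
#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>   /* htonl */

int main(void)
{
    uint32_t host = 0x12345678;
    uint32_t wire = htonl(host);   /* convert to big-endian "network order" */
    /* On a big-endian host wire == host; on a little-endian host
       the bytes come out swapped: 0x78563412. */
    printf("%08x -> %08x\n", host, wire);
    return 0;
}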
 
The best things about big-endian are that it is totally consistent and easy to read. While little-endian makes sense from a byte-stream perspective, it is the opposite of how the processor handles the data internally, and it can be a big mess when you start mixing and converting different data types.
 