Cell has endian-translating load/store instructions

one

In the most recent interview with Ken Kutaragi by Hiroshige Goto, Kutaragi said that Cell supports bi-endian operation to emulate the Emotion Engine/MIPS, but in the next article Goto clarifies that, according to Kutaragi, Cell itself is not bi-endian, so you can't switch endianness back and forth at run-time like on PPC. Instead it has special load/store instructions, implemented in hardware, that can translate little-endian data into big-endian (a rough C sketch of what such a load does is at the end of this post).

The gist of Goto's article itself, which focuses on binary translation and endianness, goes like this:

+ Though bi-endian support was considered early in development, Cell ended up supporting only the big-endian mode to simplify the structure of the ALUs. Instead, it has special load/store instructions implemented in hardware that can translate little-endian data into big-endian. With this feature you don't need separate little-endian OS code, and code of both endiannesses can co-exist.

+ The Waternoose CPU may have this feature or a similar mechanism too. Allard said it doesn't support bi-endian because it doesn't run general-purpose code, but suggested it can do fast endian translation.

+ Apple uses Transitive's technology to translate PPC into x86, but performance can suffer because of the differing endiannesses, unlike with 680x0 and PPC. That could be solved if Intel implemented endian-translation instructions in its new CPUs for Apple, and it's very likely.
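
For what it's worth, here is a minimal C sketch of what such an endian-translating load boils down to (this is not Cell's actual instruction, which the article doesn't name; for comparison, plain PowerPC already has byte-reversed loads/stores like lwbrx/stwbrx). Without a hardware version, an emulator has to do this shuffle in software on every access:

#include <stdint.h>
#include <stdio.h>

/* What an endian-translating load boils down to: fetch four bytes that were
   stored little-endian and hand them back as the CPU's native big-endian
   value.  On Cell this would be a single hardware load; an emulator without
   such an instruction has to do this shuffle on every access. */
static uint32_t load_le32(const uint8_t *b)
{
    return (uint32_t)b[0]
         | (uint32_t)b[1] << 8
         | (uint32_t)b[2] << 16
         | (uint32_t)b[3] << 24;
}

int main(void)
{
    uint8_t mem[4] = { 0xFF, 0x00, 0x00, 0x00 };   /* 255 stored little-endian */
    printf("%u\n", (unsigned)load_le32(mem));      /* prints 255 */
    return 0;
}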
 
PC-Engine said:
Is this similar to Transmeta™ Code Morphing™ Software (CMS)?
Apple's Rosetta (which is supposed to be Transitive technology) will be similar to CMS, but Rosetta will be more API-based like WINE, while CMS is tied only to ISAs. The Transmeta CPU must have some features implemented that accelerate CMS, since they were designed at the same time, but it's unclear at this point whether the new Intel CPUs have such features.

As for PS2 emulation on PS3, I don't know how it works beyond what's in this topic. The SPE ISA might be designed with Emotion Engine VU compatibility in mind; that should become clear when IBM opens up the SPE spec later this year.

VirtualPC on Mac is pure binary translation, but the Xbox 360 CPU may be equipped with some instructions that can be used to accelerate it, as Goto suggests, unless BC for Xbox is an afterthought.
 
Ummmmmmm..........WHY??? :oops:

RISC-architectures in general have always tended to be big-endian in my experience, as that's the natural order of things. Only reason little-endian still lives on is because of dumbass Intel going that route with their x86 line way back in the 1800s.
 
Guden Oden said:
Ummmmmmm..........WHY??? :oops:

RISC-architectures in general have always tended to be big-endian in my experience, as that's the natural order of things. Only reason little-endian still lives on is because of dumbass Intel going that route with their x86 line way back in the 1800s.

1700s BC.
 
Guden Oden said:
Ummmmmmm..........WHY???
Well it's not exactly uncommon, afaik aside from the GC, the consoles of the last 2 gens all used little-endian.

RISC-architectures in general have always tended to be big-endian
Actually MIPS cores are bi-endian by design, but the console versions typically just use one setting.

Anyway I think the endian evangelist within you would love the "clever" multi-endian console designs like the Jaguar. :p You've had different chips that were supposed to work together using different endians there...
 
I'm sorry to sound ignorant, but in the last couple of days i've seen this endian or whatever thing being mentioned, and well, i have no idea what the hell it is. Never heard of it, and i'm not sure why all of a sudden it's being used to compare different processors.
 
london-boy said:
I'm sorry to sound ignorant, but in the last couple of days i've seen this endian or whatever thing being mentioned, and well, i have no idea what the hell it is. Never heard of it, and i'm not sure why all of a sudden it's being used to compare different processors.
Open a file with a hex editor: if you see "00 00 00 ff", it's 4278190080 as a little-endian 32-bit unsigned integer, but 255 as big-endian.
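
If you want to see which reading your own machine picks, here's a tiny C check with those same four bytes hard-coded (no actual file involved):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    uint8_t bytes[4] = { 0x00, 0x00, 0x00, 0xFF };  /* the "00 00 00 ff" above */
    uint32_t value;

    memcpy(&value, bytes, sizeof value);  /* reinterpret in the host's own byte order */

    /* Prints 4278190080 on a little-endian host, 255 on a big-endian one. */
    printf("%u\n", (unsigned)value);
    return 0;
}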
 
one said:
london-boy said:
I'm sorry to sound ignorant, but in the last couple of days i've seen this endian or whatever thing being mentioned, and well, i have no idea what the hell it is. Never heard of it, and i'm not sure why all of a sudden it's being used to compare different processors.
Open a file with a hex editor: if you see "00 00 00 ff", it's 4278190080 as a little-endian 32-bit unsigned integer, but 255 as big-endian.

Errr ok that explains it.

NOT. ;)
 
Endianness determines whether the bytes of a number are arranged highest-to-lowest or lowest-to-highest.

Take a 16-bit number: it's made up of 2 bytes, represented in hex as 00 00.

If you have the number 01 00, it can be stored either lowest byte first or highest byte first. Lowest byte first, it represents 1 + (00 x 256). The other way round, it represents (1 x 256) + 00.

In decimal, it'd be equivalent to storing numbers either in the order Millions-Thousands-Units (big endian) or Units-Thousands-Millions (little endian).

e.g. the number 134,254,665 would be stored as:

Big Endian - 134 254 665
Little Endian - 665 254 134

No, I don't understand the reason for little endian either :p
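
The same 01 00 example written out as a couple of lines of C, just mirroring the arithmetic above:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t b[2] = { 0x01, 0x00 };         /* the "01 00" from above */

    printf("%d\n", b[0] + b[1] * 256);     /* lowest byte first:  1   */
    printf("%d\n", b[0] * 256 + b[1]);     /* highest byte first: 256 */
    return 0;
}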
 
london-boy said:
Errr ok that explains it.

NOT. ;)
It's the order in which multibyte numbers are stored. Little endian has the least significant byte stored closest to address 0, while big endian has the most significant byte stored closest to address 0.

So, if you want to store the number $1835 at address $10 then it will be stored like this:
Little endian: $35 at address $10, $18 at address $11
Big endian: $18 at address $10, $35 at address $11
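
And a quick C check of which of those two layouts your own host uses (the $10 address is only illustrative, so this just looks at byte offsets 0 and 1):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    uint16_t value = 0x1835;               /* the $1835 from above */
    uint8_t  mem[2];

    memcpy(mem, &value, sizeof value);     /* the bytes as they sit in memory */

    /* Little-endian host: offset 0 = 35, offset 1 = 18.
       Big-endian host:    offset 0 = 18, offset 1 = 35. */
    printf("offset 0: %02X, offset 1: %02X\n", mem[0], mem[1]);
    return 0;
}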
 
Fafalada said:
Actually MIPS cores are bi-endian by design, but the console versions typically just use one setting.
So if you have both settings to choose from, why intentionally pick the one that is backwards? There's no sense in that!

You've had different chips that were supposed to work together using different endians there...
Heh, I knew the jag was quirky, but I didn't know it was THAT bad! :D
 
So if you have both settings to choose from, why intentionally pick the one that is backwards? There's no sense in that!

ok, i'll bite: Why do you suppose one to be more "advanced" than the other?
 
Guden Oden said:
So if you have both settings to choose from, why intentionally pick the one that is backwards?
Notation is ultimately a matter of preference and habit: what feels backwards to you feels right to others, and vice versa. And for some it doesn't make a difference either way.

Personally I like the logic behind address-increasing order, but I'm also well aware that the biggest reason for that isn't any inherent logic in the ordering but rather the fact that I've worked with little-endian platforms more than with others.
 
Guden Oden said:
RISC-architectures in general have always tended to be big-endian in my experience, as that's the natural order of things.

Little endian is the natural order of things for the most common programming duties. When you add, which end do you start with? When you truncate in C, which end do you keep? When you have a counter that you need to increase the size of, do you add LSBs or MSBs? What endianness are Turing machines (when they manipulate symbols representing numbers)?

Big endian only seems more natural because humans write numbers that way, but when has human convention ever been a good argument for naturalness?

:)
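
A small C illustration of the truncation point, for anyone who wants to poke at it (nothing here is specific to any particular CPU beyond its byte order):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    uint32_t wide   = 0x12345678;
    uint16_t narrow = (uint16_t)wide;      /* truncation keeps the low end: 0x5678 */

    /* On a little-endian machine that low end already sits at the start of the
       object, so re-reading just the first two bytes gives the same 0x5678
       without moving anything.  A big-endian machine finds 0x1234 there. */
    uint16_t first_two;
    memcpy(&first_two, &wide, sizeof first_two);

    printf("truncated: %04X  first two bytes: %04X\n", narrow, first_two);
    return 0;
}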

Phat
 