Xbox 2 hardware overview leaked?

IBM is having problems with 90nm for the 970. That is why the Power Mac G5 has hit only 2.5 GHz, instead of the 3 GHz promised a year ago.

Well, they're having trouble meeting 2 GHz demand for Xserves as well... Keep in mind the G5s are quite a bit over-engineered...

Besides, who said this is going to be a GigaProcessor-derived core? None of them are 2-issue... :p
 
256 MB :oops: :?: :!:

UE3 will use up to 2 GB of memory per level, right?
How can they get this engine running here? (I know a console needs less memory, but going from 2 GB to 256 MB...)
 
when is there going to be a true dual-core PowerPC for Mac?

the industry is moving away from MHz / clock speed to multi-processor designs, as we all know...
 
How can they get this engine running here? (I know a console needs less memory, but going from 2 GB to 256 MB...)
Well they did talk about scaling down texture resolution 4x on console.
On the other hand, you might also be able to afford somewhat more sophisticated compression schemes in the future than the simple stuff we're using now.
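
To put rough numbers on the scaling idea (my own back-of-envelope, and note "4x" could mean texel count or each dimension):

[code]
/* Back-of-envelope only -- my guesswork, not from the thread. */
#include <stdio.h>

int main(void)
{
    double pc_level_mb = 2048.0; /* rumoured UE3 per-level budget */

    printf("4x fewer texels : %6.0f MB\n", pc_level_mb / 4.0);
    printf("4x per dimension: %6.0f MB\n", pc_level_mb / 16.0);
    return 0;
}
[/code]

Either reading still leaves you leaning on compression and streaming to fit textures next to geometry, audio and code in 256 MB.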
 
pc999 said:
256 MB :oops: :?: :!:

UE3 will use up to 2 GB of memory per level, right?
How can they get this engine running here? (I know a console needs less memory, but going from 2 GB to 256 MB...)

They need the next generation of advanced 2D texture compression algorithms. Taking NGC's S3TC as an example: 256 MB holding 24-bit textures gives 1536 MB uncompressed, though of course not all 256 MB can be used for textures. Also, if the media-to-memory path is fast enough, some of that 2 GB can be streamed on demand.

It will be interesting to see how the devs tackle this if the next gen settles at 256 MB main RAM. ;)
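
Quick sanity check on that S3TC figure, as a minimal sketch assuming DXT1-class compression of 24-bit RGB:

[code]
/* DXT1/S3TC stores a 4x4 texel block in 8 bytes (0.5 byte/texel),
 * versus 3 bytes/texel for raw 24-bit RGB: a 6:1 ratio. */
#include <stdio.h>

int main(void)
{
    double raw_bpt  = 3.0;  /* 24-bit RGB            */
    double dxt1_bpt = 0.5;  /* 8 bytes per 4x4 block */
    double ratio    = raw_bpt / dxt1_bpt;

    printf("compression ratio: %.0f:1\n", ratio);           /* 6:1 */
    printf("256 MB holds %.0f MB of raw texels\n", 256.0 * ratio);
    return 0;
}
[/code]

Which reproduces the 1536 MB mentioned above.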
 
The key to Xenon's performance may very well be both the eDRAM and the solid-state storage media they use, be it flash memory or something else, for streaming data into main memory very rapidly: faster than any DVD/HD-DVD/Blu-ray, and even faster than an HDD.
 
Jov said:
pc999 said:
256 MB :oops: :?: :!:

UE3 will use up to 2 GB of memory per level, right?
How can they get this engine running here? (I know a console needs less memory, but going from 2 GB to 256 MB...)

They need the next generation of advanced 2D texture compression algorithms. Taking NGC's S3TC as an example: 256 MB holding 24-bit textures gives 1536 MB uncompressed, though of course not all 256 MB can be used for textures. Also, if the media-to-memory path is fast enough, some of that 2 GB can be streamed on demand.

It will be interesting to see how the devs tackle this if the next gen settles at 256 MB main RAM. ;)

This is ATI, so I'm sure they will be using 3Dc, which should work very nicely with the normal maps in Unreal 3.
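
For anyone wondering why 3Dc suits normal maps: it stores only the X and Y channels of a unit normal, and the shader rebuilds Z from x^2 + y^2 + z^2 = 1. A stand-in sketch in C of what the pixel shader would do:

[code]
/* Rebuild Z of a unit normal from the two 3Dc channels.
 * Inputs are assumed already mapped from [0,1] to [-1,1]. */
#include <math.h>
#include <stdio.h>

static void reconstruct_normal(float x, float y, float n[3])
{
    float zz = 1.0f - x * x - y * y;
    n[0] = x;
    n[1] = y;
    n[2] = zz > 0.0f ? sqrtf(zz) : 0.0f; /* clamp numeric error */
}

int main(void)
{
    float n[3];
    reconstruct_normal(0.6f, 0.0f, n);
    printf("(%.2f, %.2f, %.2f)\n", n[0], n[1], n[2]); /* z = 0.80 */
    return 0;
}
[/code]

Dropping Z is what lets 3Dc spend both of its compressed channels on the components that matter.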
 
In any case... I hope they have more than 256 MB of RAM... it could be the only advantage they have, really :(
 
Note that the 22.4+ GB/sec main memory bandwidth is shared between the CPU and GPU

This sounds strange to me. Why would it need to be shared with the GPU when there could be far more bandwidth to throw around within the EDRAM?
 
Well, with PS2's GS graphics chip, which has 48 GB/sec of eDRAM bandwidth, there's still a bus/path (if not more than one) to the CPU, so the GS can have access to PS2's 32 MB main memory. I don't think the GS has a direct path to main memory, not sure on that, but regardless, the GS has access to main memory one way or another.

And GameCube's Flipper GPU, in addition to the bandwidth it has to the embedded 1T-SRAM, also has a bus to GC's 24 MB main memory.

So likewise, Xenon's GPU, in addition to having high-bandwidth eDRAM, will also need a path to the 256 MB+ main memory, to share that memory.

Yes, the bandwidth of the embedded memory will be very high, but it's still a small amount, be it 10 MB or somewhat more if the final spec is different. The graphics processors or renderers of all consoles have to have some way of accessing main system memory, because their allocation of video memory, embedded or not, is always small if not extremely tiny.


BTW, I'm keeping my fingers crossed that the older reports on Xbox 2 memory bandwidth, which put it at about 51 GB/sec, are true. That's a whole lot nicer than 22+ GB/sec. :p

http://news.teamxbox.com/xbox/4811/First-Details-Inside-the-Xbox-2-Part-1

This VPU is being designed with the latest technologies in mind, such as GDDR2 SDRAM memory provided by Samsung running at 1600 MHz. A 128-bit configuration is capable of providing up to 25.6 GB/s peak bandwidth, while its 256-bit mode brings up to a shocking 51.2GB/s peak bandwidth!!!

http://nextbox.ccfx.net/specs.php

~51.2GB/s peak bandwidth
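
Those peak figures follow straight from data rate times bus width; spelling the arithmetic out (1600 MHz effective, per the quote):

[code]
/* peak bandwidth = effective transfers/sec x bus width in bytes */
#include <stdio.h>

int main(void)
{
    double rate = 1600e6; /* 1600 MHz effective, as quoted */

    printf("128-bit: %.1f GB/s\n", rate * (128 / 8) / 1e9); /* 25.6 */
    printf("256-bit: %.1f GB/s\n", rate * (256 / 8) / 1e9); /* 51.2 */
    return 0;
}
[/code]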
 
That's fine. The absence of any mention of the EDRAM bandwidth threw me off. In my head it was sounding like the 22.4+ GB/s was the entire system total. :oops:
 
Well, 22.4 GB/sec is for the entire system, since it's shared memory / UMA, but it's not the *only* bandwidth. The GPU will have its eDRAM bandwidth, and the CPUs will have their cache bandwidth 8)
 
Nexiss said:
Count me as another still hoping for more than 256MB.

RAM is probably the cheapest thing MS can increase, thus improving overall performance at the least cost. It will also make life easier for the devs, given MS has always pushed the easy-to-develop-for song.
 
Oh my, I didn't notice this until 5 min ago. The Xenon EDRAM bandwidth is

for an EDRAM write bandwidth of 32 GB/sec

32 GB/sec, aye? That seems kinda low, does it not? The PS2's eDRAM bandwidth is, as we all know, 48 GB/sec.

32 GB/sec EDRAM bandwidth is also lower than the also unconfirmed report (the old one) of 51.2 GB/sec for system memory.

PS3's EDRAM bandwidth on the GPU is likely to be in the 100s of GB/sec

So once again, this leads me to believe the Xenon block diagram and this new document outlining Xenon in detail are either fake or very old.


edit: however it looks like I missed something else too :!: :oops:

Each of these pixels can be expanded through multisampling to 4 samples, for up to 32 multisampled pixel samples per clock cycle. With alpha blending, z-test, and z-write enabled, this is equivalent to having 256 GB/sec of effective bandwidth! The important thing is that frame buffer bandwidth will never slow down the Xenon GPU.

equivalent to 256 GB/sec of effective bandwidth. :oops:
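
Here's one way that figure can fall out; my own reconstruction, assuming a 500 MHz GPU clock and 32-bit color and Z per sample. With blending plus z-test/z-write, every sample is a read-modify-write of both color and Z:

[code]
/* Guesswork, not from the document: clock and formats assumed. */
#include <stdio.h>

int main(void)
{
    double samples_per_clock = 32.0;          /* per the quote     */
    double bytes_per_sample  = 4 + 4 + 4 + 4; /* c rd/wr + z rd/wr */
    double clock_hz          = 500e6;         /* assumed GPU clock */

    printf("effective: %.0f GB/s\n",
           samples_per_clock * bytes_per_sample * clock_hz / 1e9);
    return 0;
}
[/code]

The point being that the "effective" number counts traffic the eDRAM absorbs internally, which is why it dwarfs the 32 GB/sec write figure.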

<feeling better now>

now that's what I'm talkin' about 8)

how I missed that, I dunno. musta' just glanced over it too fast. :oops:
 
This is ATI, so I'm sure they will be using 3Dc, which should work very nicely with the normal maps in Unreal 3.

More will probably be saved by the audio architecture's compression alone than by 3Dc... Besides, even if you didn't have 3Dc, there are other ways to compress normal maps...
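
One of those other ways (my example, not something from this thread) is the DXT5 swizzle trick: put X in alpha and Y in green, the two channels DXT5 keeps with the most precision, and rebuild Z in the shader as with 3Dc:

[code]
/* Hypothetical pre-pass before handing texels to a DXT5 encoder. */
#include <stdint.h>

typedef struct { uint8_t r, g, b, a; } rgba8_t;

static rgba8_t swizzle_for_dxt5nm(uint8_t nx, uint8_t ny)
{
    rgba8_t t;
    t.a = nx; /* X -> alpha: interpolated 8-bit alpha block  */
    t.g = ny; /* Y -> green: 6 bits in the 5:6:5 endpoints   */
    t.r = 0;  /* unused channels zeroed so they cost nothing */
    t.b = 0;
    return t;
}

int main(void)
{
    rgba8_t t = swizzle_for_dxt5nm(200, 100);
    return (t.a == 200 && t.g == 100) ? 0 : 1;
}
[/code]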

Of course, map data and structures could be stored in a more efficient compressed manner and block-loaded on the fly (rather than just loading the whole thing into RAM and praying the end-user has enough RAM that it doesn't get paged out)...
 
One of the advantages of all these hardware thread architectures is having a decent decompression system running in the background.

Think about how many textures you can have if you treat DXT textures as the uncompressed version, with the compressed version being one of the more advanced methods (wavelet- or DCT-based) floating around.

Of course this should also be possible on Cell etc.
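
A minimal sketch of that background decompressor, assuming a pthread-style API; the queue, transcoder and texture types are hypothetical placeholders:

[code]
/* Sketch only: a spare hardware thread drains a queue of
 * wavelet/DCT-compressed textures and transcodes them to DXT,
 * which the GPU then treats as the "uncompressed" format. */
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct compressed_tex compressed_tex; /* wavelet/DCT blob  */
typedef struct dxt_tex        dxt_tex;        /* GPU-ready texture */

extern compressed_tex *queue_pop_blocking(void);           /* hypothetical */
extern dxt_tex        *transcode_to_dxt(compressed_tex *); /* hypothetical */
extern void            publish_texture(dxt_tex *);         /* hypothetical */
extern volatile bool    g_running;

static void *decompress_worker(void *arg)
{
    (void)arg;
    while (g_running) {
        compressed_tex *in = queue_pop_blocking(); /* wait for work */
        publish_texture(transcode_to_dxt(in));     /* CPU transcode */
    }
    return NULL;
}

int start_decompress_thread(pthread_t *t)
{
    return pthread_create(t, NULL, decompress_worker, NULL);
}
[/code]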

For a high-level overview of some modern wavelet techniques, read


Image Compression - from DCT to Wavelets : A Review

or

MS Research overview
 
archie4oz said:
Of course, map data and structures could be stored in a more efficient compressed manner and block-loaded on the fly (rather than just loading the whole thing into RAM and praying the end-user has enough RAM that it doesn't get paged out)...

Aaaccckk, yes. Dammit, it's starting to feel like we have C64 disk drives in our computers these days; the load times are getting obnoxiously long when games just dump in hundreds of megs of data for a level only to have it paged right out again. (*cough* Far Cry *cough*) Is it too much to ask that maybe levels - and hence loads - be segmented a bit more? I don't want to have to go watch TV for three minutes just to wait for the next level to load. I'd switch back to the desktop and surf the web if I could, but all of that has already been paged out, so that would just increase waiting time (and my level of aggravation, lol) exponentially. PC programmers are such dumb f**ks sometimes; they should definitely look at what console devs have been doing for years and not take for granted that there are infinite resources available in the system. Virtual memory is a scourge that is ruining our gaming experiences! :LOL: Leads to sloppy coding and all-around laziness! Put the code on a diet I say, get rid of the flab! (Hell, you might say the same about some PROGRAMMERS too! :LOL:)

DeanoC said:
One of the advantages of all these hardware thread architectures is having a decent decompression system running in the background.

That's a very exciting thought, like Metroid Prime using left-over frame time to load and decompress the next room, except taken a lot further. Of course, it means one more ball for programmers to juggle, possibly leading to mental meltdown for the less talented, unless schemes such as these can be easily/transparently managed through the devkit or middleware. ;) Also, how much performance would have to be devoted to this scheme to make it usable? I.e., to have all textures uncompressed and ready to use when they're needed.
 