Predicting the Xbox-2

A rumormonger, Hedgehog Boy, posted some information about the Xbox 2, code-named "Xenon", at xboxactive.com. Someone accused him of lifting the information from a recent issue of Electronic Gaming Monthly (?) and said the original article even included the Xbox 2's specs (Hedgehog Boy only provided some of the info). Has anyone read the latest issue of EGM?
 
The Talisman reference spec calls for 4 Mbytes of shared RDRAM on two 600-MHz, 8-bit Rambus channels. This configuration yields a potential throughput of 1.2 Gbytes/sec. The shared Rambus memory holds the data during image processing; Talisman logically carves this data up into 32 x 32-pixel "chunks." The 32 x 32 format also conveniently keeps the Rambus DRAMs in their efficient burst mode, so page-miss latencies don't suck up memory bandwidth.
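As a quick sanity check, two 8-bit channels at 600 MHz do indeed work out to the quoted 1.2 Gbytes/sec of peak throughput. A trivial sketch of the arithmetic, using only the figures quoted above:

```python
# Back-of-the-envelope check of the quoted Rambus bandwidth figure.
channels = 2                 # two Rambus channels, as quoted above
bytes_per_transfer = 8 // 8  # 8-bit channel width -> 1 byte per transfer
transfer_rate_hz = 600e6     # 600 MHz

peak_bytes_per_sec = channels * bytes_per_transfer * transfer_rate_hz
print(f"{peak_bytes_per_sec / 1e9:.1f} GB/s")  # -> 1.2 GB/s
```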

The 32 x 32 chunking approach proposed by Microsoft is radically different from a conventional 3D pipeline, where the entire image and all its associated data must be at immediate beck and call for rendering. Such voluminous image data obviously places huge space and access demands on the memory system. Instead, says Microsoft, let's use the DirectX object-oriented mechanism: carve the objects in the image up into these 32 x 32-pixel chunks, then place each chunked object into the appropriate image layer. The result is multiple image layers, with each layer possessing multiple image-object chunks. Each image layer is then manipulated individually, with each object chunk described and manipulated via the DirectDraw and Direct3D APIs.

(Alternatively, image calculations can be done based on object hierarchy and bounding boxes.) As a consequence of this design, Talisman does not use frame buffers in the usual sense; "Instead, multiple image layers are composited together at video rates to create the video output signal."
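To make the chunk-and-layer bookkeeping concrete, here is a minimal sketch of how an object's rendered pixels might be carved into 32 x 32 chunks and filed into a layer. The class and function names are purely illustrative; they are not part of the Talisman spec or the DirectX APIs.

```python
# Illustrative sketch only: carve an object's pixel buffer into 32x32 chunks
# and file them into an image layer. Names are hypothetical, not Talisman APIs.
from dataclasses import dataclass, field

CHUNK = 32  # chunk edge length, in pixels

@dataclass
class Chunk:
    col: int             # chunk column within the layer
    row: int             # chunk row within the layer
    pixels: bytearray    # the 32x32 block of rendered pixel data
    dirty: bool = True   # does this chunk need re-rendering this frame?

@dataclass
class ImageLayer:
    depth: int                                  # compositing order of this layer
    chunks: dict = field(default_factory=dict)  # (col, row) -> Chunk

def carve_into_chunks(pixels: bytearray, width: int, height: int,
                      bpp: int = 3) -> list:
    """Split a rendered object's pixel buffer into 32x32 chunks, row-major."""
    out = []
    for row0 in range(0, height, CHUNK):
        for col0 in range(0, width, CHUNK):
            block = bytearray()
            for y in range(row0, min(row0 + CHUNK, height)):
                start = (y * width + col0) * bpp
                end = (y * width + min(col0 + CHUNK, width)) * bpp
                block += pixels[start:end]
            out.append(Chunk(col0 // CHUNK, row0 // CHUNK, block))
    return out

# File a 64x64 object into one layer.
layer = ImageLayer(depth=0)
for c in carve_into_chunks(bytearray(64 * 64 * 3), 64, 64):
    layer.chunks[(c.col, c.row)] = c
```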

Talisman is also a time-based image-processing system. For example, if nothing has occurred within a particular image-object chunk while several screen frames have passed, it is not updated. If this temporal-processing scheme sounds similar to MPEG-2 compression techniques, such as those used by DSS (Digital Satellite System) TV, you are right. Microsoft claims that by using time-based techniques, the image-layer transforms can be done considerably faster than if the geometry had to be continually recalculated--typically 10 to 20 times faster, it says. In addition, after each 32 x 32 chunk is sequentially rendered, Talisman applies an image-compression algorithm quite similar to JPEG, though, typically, Microsoft gives this JPEG-like scheme its own corporate moniker, "TREC." Finally, texture maps, which constitute the highest memory overhead, are also compressed this way and stored in the Rambus graphics memory.
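The time-based idea boils down to a dirty flag per chunk: re-render and re-compress only what changed, and reuse everything else. A rough sketch, using zlib purely as a stand-in for the TREC/JPEG-style compressor; the chunk fields and the renderer callback are illustrative, not Talisman interfaces.

```python
import zlib

def update_chunks(chunks, render_chunk):
    """Re-render and re-compress only the chunks flagged as changed."""
    for chunk in chunks:              # each chunk exposes .dirty and .pixels
        if not chunk.dirty:
            continue                  # unchanged for several frames -> skip entirely
        chunk.pixels = render_chunk(chunk)                     # caller-supplied renderer
        chunk.compressed = zlib.compress(bytes(chunk.pixels))  # stand-in for TREC
        chunk.dirty = False
```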

When you chart the overall Talisman image-processing flow on the reference hardware, it looks like this (a toy code restatement follows the list):

1) Image pre-processing is done in the DSP, which is operating under a Microsoft-developed real-time kernel (not IA-SPOX!).

2) The DSP transforms the 3D objects into a 2D integer plane.

3) The processed data is carved up into 32 x 32 chunks, and then assigned to a particular image plane, as opposed to an enormous sorted list of visible polygons.

4) The now-2D-processed, JPEG/TREC-compressed, chunked data goes into the Rambus DRAM.

5) The Polygon Object Processor (POP) VLSI chip then pulls out and decompresses the chunked data from the Rambus DRAM chips, and does the usual rendering stuff, like lighting, painting texture maps onto polygons, etc.

6) As no frame buffer is used, the once-again processed object data is put back by the POP chip into the Rambus memory.

7) Meanwhile the other VLSI chip, the Image Layer Compositor (ILC), is continually scanning the POP-processed data, building up the scene 32 scan lines at a time. (Both the POP and ILC VLSI chips were designed by Silicon Engineering and Cirrus Logic.)

8) At the same time, the ILC is checking to see if an object has moved since it built up the last screen frame.

9) If the object has moved, then the ILC applies affine transforms--it manipulates the polygons--so as to create the illusion of true 3D motion.
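Restated as a toy program, the flow looks roughly like this. Everything here is a deliberately trivial stand-in (zlib instead of TREC, byte concatenation instead of real compositing); only the ordering of the steps follows the list above.

```python
import zlib

CHUNK = 32

def dsp_transform(obj):                 # steps 1-2: 3D objects -> 2D integer plane
    return {"layer": obj["layer"], "pixels": bytes(obj["pixels"])}

def carve(pixels):                      # step 3: split into 32x32-sized pieces
    size = CHUNK * CHUNK
    return [pixels[i:i + size] for i in range(0, len(pixels), size)]

def trec_compress(chunk):               # step 4: TREC/JPEG-like compression (stand-in)
    return zlib.compress(chunk)

def pop_render(chunk):                  # steps 5-6: POP would light and texture here
    return zlib.decompress(chunk)       # toy: decompress and pass through

def ilc_composite(layers):              # steps 7-9: build the scene from the layers
    return b"".join(b"".join(layers[k]) for k in sorted(layers))

scene = [{"layer": 0, "pixels": [0] * CHUNK * CHUNK},
         {"layer": 1, "pixels": [255] * CHUNK * CHUNK}]
rdram = {}                              # steps 4 and 6: shared Rambus memory stand-in
for obj in scene:
    flat = dsp_transform(obj)
    rdram.setdefault(flat["layer"], []).extend(
        trec_compress(c) for c in carve(flat["pixels"]))
frame = ilc_composite({k: [pop_render(c) for c in v] for k, v in rdram.items()})
print(len(frame), "bytes composited")   # 2 layers x 1024 pixels -> 2048
```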






The net result of Talisman's image-processing software/hardware architecture, says Microsoft, is that memory and image-processing demands would be greatly reduced, while at the same time users could enjoy 1,344 x 1,024-pixel resolution at 75-Hz rates, with 24-bit true-color pixel data at all resolutions. The SIGGRAPH '96 paper states that a scene complexity of 20,000 to 30,000 rendered polygons, or higher, can be supported via Talisman; comparable, says Microsoft, to a 3D graphics workstation capable of executing 1.5 to 2 million polygons per second. And the Talisman cost? Microsoft quotes a bill-of-materials cost of $200 to $300, which roughly translates into a sub-$500 add-in PCI card at retail street prices.

http://www.vxm.com/21R.98.html

Found this interesting: the mention of 3DO building a media processor. I think Microsoft bought the hardware team that worked for 3DO.

When last we looked at MS Talisman, it appeared that Microsoft was on a 3-D roll. Chip vendors like Samsung, Cirrus Logic, Fujitsu, and others had all publicly pledged Talisman hardware support. But in the short time since that article first appeared, Samsung pulled the plug on its joint effort with 3DO to produce the Media Signal Processor, a DSP-like chip. The MSP was originally at the heart of the Talisman reference board, code-named "Escalante." And then Cirrus Logic, one of the principal makers of Talisman's unique VLSI chips, suddenly pulled out of the deal.


The following reference to Talisman being "time based" reminds me of that old reference to future Nvidia hardware being TT&L. True time....and something else...I can't really remember.

At first blush, the newly reborn Talisman PR seemed to ring true. For example, Trident said that its new single-chip implementation would support the image-chunks-manipulated-as-objects scheme specified by Talisman. The original Microsoft spec called for logically carving up an image into 32 x 32-pixel "chunks." After the images were carved up into these pieces, each chunked object was placed in an appropriate image layer. This technique yielded multiple image layers, with each layer possessing multiple image-object chunks. Every image layer was then manipulated individually, with each object chunk described and manipulated via Microsoft's DirectDraw and Direct3D APIs. Talisman was also a time-based system: if nothing changed in an image chunk on the next pass, nothing was rendered.

By eliminating the need to continually process the entire image, processor overhead was cut way down. Thus, Talisman sought to make 3-D image processing smarter, as opposed to applying brute-force methods, such as using more MIPS, increasing bandwidth, or adding more graphics pipelines. In addition to chunking support, Trident also says that it is implementing Talisman's 3-D notions about texture compression, z-buffering, bump mapping, and anisotropic filtering. As icing on the Trident/Talisman cake, 2-D acceleration and MPEG-2 decompression are also thrown in.

To have undergone such a drastic revamping, Talisman must have had some serious design problems. In fact, a Trident spokesperson said that "many things were badly wrong with the original architecture." And so, Trident/Talisman hardware ends up having almost nothing to do with Microsoft's original 3-D vision. Rather, the Trident/Talisman chip is simply a powerful new 3-D graphics chip with some multimedia capabilities. This twice-told, much-rehashed Talisman tale is going to become increasingly common, as Microsoft has started a new multi-tier Talisman licensing plan.

For example, a licensee can pay just to get a complete description of the Talisman algorithms and driver model. At this upper level, licensees like S3 and ATI Technologies are probably just choosing those Talisman elements they consider interesting. At the deeper level, companies like Trident have licensed everything from Microsoft, including full access to the Talisman C source code simulator and the Verilog models.

http://www.vxm.com/21R.114.html



Microsoft based Talisman on the premise of coherence between successive frames or scenes. In other words, little changes between successive frames in a 3-D application. An object may move in the foreground, for example, while all other parts of an image remain static. Traditional 3-D graphics architectures render an entire scene 30 times/sec or more. Talisman will harness four key concepts--compositing, image compression, "chunking," and multipass rendering--to take advantage of coherence, presumably reducing memory and bandwidth requirements and, therefore, costs.

Talisman requires no traditional frame buffer. Instead, it stores each object in a scene in separate image layers that the system can then composite to form the final image. Moreover, the compositing operation occurs in real time at the full video rate. The independent layers allow the graphics subsystem to update each object independently and only when an object changes. In addition, the architecture lets a system simulate many typical 3-D transformations, such as scaling and rotation, using 2-D operations on the rendered object. The layering and compositing scheme ensures that image-layer transformations, such as motion, happen at 72 to 85 Hz, assuring fluid movement. The scheme does not guarantee fast layer updates, so it can at times trade geometric detail for the higher animation rates.
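Those "2-D operations on the rendered object" are affine transforms of an already-rendered layer: a handful of multiply-adds per point instead of re-running the 3-D geometry pipeline. A minimal, purely illustrative sketch:

```python
import math

def affine(point, a, b, c, d, tx, ty):
    """Map (x, y) through the 2x2 matrix [[a, b], [c, d]] plus a translation."""
    x, y = point
    return (a * x + b * y + tx, c * x + d * y + ty)

# Example: approximate a small object motion between frames by rotating a
# 32x32 layer chunk's corners 5 degrees and nudging it 3 pixels to the right.
theta = math.radians(5)
corners = [(0, 0), (32, 0), (32, 32), (0, 32)]
moved = [affine(p, math.cos(theta), -math.sin(theta),
                   math.sin(theta),  math.cos(theta), 3, 0) for p in corners]
print(moved)
```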

To ensure that geometric detail remains suitable, Talisman uses compression on both objects and textures. A JPEG-like technique, Texture and Rendering Engine Compression (TREC), handles the compression. Compression isn't a new idea, but graphics systems that use a conventional frame buffer can't process the entire image quickly enough to make compression feasible. Talisman skirts this roadblock using "chunking." Talisman breaks each image layer into 32×32-pixel chunks. The system renders all objects for one screen chunk before proceeding to the next and, therefore, requires a small Z-buffer. Moreover, the controller can apply sophisticated antialiasing filters and image compression in real time.
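The reason chunking shrinks the Z-buffer is that depth testing only ever needs storage for the 32 x 32 chunk currently being rendered, and the same small buffer is reused for every chunk on the screen. A toy illustration, not how the actual hardware is organized:

```python
CHUNK = 32

def render_chunk(fragments):
    """fragments: iterable of (x, y, z, color) with 0 <= x, y < 32."""
    zbuf = [[float("inf")] * CHUNK for _ in range(CHUNK)]  # chunk-sized, not screen-sized
    color = [[0] * CHUNK for _ in range(CHUNK)]
    for x, y, z, c in fragments:
        if z < zbuf[y][x]:         # nearer fragment wins the depth test
            zbuf[y][x] = z
            color[y][x] = c
    return color

out = render_chunk([(0, 0, 5.0, 0xFF0000), (0, 0, 2.0, 0x00FF00)])
print(hex(out[0][0]))              # -> 0xff00: the nearer (green) fragment survived
```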

Chunking and compression combine to provide small memory requirements and allow a small shared-memory array to store all texture and image-layer data. The availability of such data allows the rendered data to feed back through the texture processor for use in rendering a new image layer. Such multipass rendering can generate special effects, such as shadows from multiple light sources, fog, underwater simulation, and waves.

Figure 3 depicts the reference design for Talisman. The Philips Trimedia processor will fill the role of Media DSP. It can handle floating-point 3-D geometry calculations, as well as other multimedia streams, such as compressed audio or video. Cirrus Logic is developing the Polygon Object Processor, and Samsung, Fujitsu, and Silicon Reality are also working on portions of the reference design. Microsoft believes that Talisman can reduce memory-bandwidth requirements by a factor of 60 and also support other multimedia functions.

http://www.e-insite.net/ednmag/archives/1997/032797/07df_01.htm

[Figure 3: Talisman reference-design block diagram (07DF13EP.gif)]
 
Deepak said:
Similarly if it wasnt for Sony, Nintendo wouldnt have changed a bit....

actually if it wasn't for sega ...


If it wasn't for nintendo being stupid sony never would have gotten into the market
 
jvd said:
Deepak said:
Similarly if it wasnt for Sony, Nintendo wouldnt have changed a bit....

actually if it wasn't for sega ...


If it wasn't for nintendo being stupid sony never would have gotten into the market
Same could be said about Atari and Nintendo. Nintendo originally came to Atari to market the NES! Talk about a mistake...
 
keegdsb said:
jvd said:
Deepak said:
Similarly if it wasnt for Sony, Nintendo wouldnt have changed a bit....

actually if it wasn't for sega ...


If it wasn't for nintendo being stupid sony never would have gotten into the market
Same could be said about Atari and Nintendo. Nintendo originally came to Atari to market the NES! Talk about a mistake...

yup. Its funny how that works . I wonder what mistake sony made thats going to come back to haunt them .
 
Johnny Awesome said:
Not using Windows as the OS for PS2 would be my guess.

Uh, excuse me? The thing sells 50 million units and you say not using Windows is a MISTAKE? Please excuse me for a few moments while I laugh my arse off at your expense...

Anyway, back to original topic.

People speculate 3GHz or 4GHz or whatever GHz processor for next-gen XB, dual on-chip cores, etc.

Does anyone envision a 3GHz x86 chip drawing less than 50W of power for the CPU alone? Because I certainly don't. The next gen's going to be one hot-running mother. Add to that a GPU, which will likely be another 50W easily, plus main memory + GPU memory (unless it's a UMA approach again).

They will have to make the case out of METAL, and use that as a heatsink too unless they want the thing to sound like a FlowFX unit. And this might well be true for ANY of the next-gen consoles. The piddly fan used in the GC for example isn't going to cut it, not by a long shot.


*G*
 
In 2005 a 3GHz P4 on a .09 µm (90 nm) process will be a cool-running chip. So will a Hammer chip. As for the GPU, I agree they will be hot; that's why I am saying a dual-GPU setup, lower clocked and cooler running.
 
They will have to make the case out of METAL, and use that as a heatsink too unless they want the thing to sound like a FlowFX unit. And this might well be true for ANY of the next-gen consoles. The piddly fan used in the GC for example isn't going to cut it, not by a long shot.

You better believe it. If not, you'll have to call the fire department to take care of the thing, otherwise your house will burn down.
 
jvd said:
keegdsb said:
jvd said:
Deepak said:
Similarly if it wasnt for Sony, Nintendo wouldnt have changed a bit....

actually if it wasn't for sega ...


If it wasn't for nintendo being stupid sony never would have gotten into the market
Same could be said about Atari and Nintendo. Nintendo originally came to Atari to market the NES! Talk about a mistake...

yup. Its funny how that works . I wonder what mistake sony made thats going to come back to haunt them .

The same ones: complacency and mismanagement.
 
notAFanB said:
jvd said:
keegdsb said:
jvd said:
Deepak said:
Similarly if it wasnt for Sony, Nintendo wouldnt have changed a bit....

actually if it wasn't for sega ...


If it wasn't for nintendo being stupid sony never would have gotten into the market
Same could be said about Atari and Nintendo. Nintendo originally came to Atari to market the NES! Talk about a mistake...

yup. Its funny how that works . I wonder what mistake sony made thats going to come back to haunt them .


the same ones, complacency and mismanagement.

I was talking specifics. Nintendo tried to hold the market for too long with an older system, let Sega release first, and then released hardware that wasn't much better about 18 months later. Sega messed up by not seeing the shift to 3D and basically bolting a 3D solution onto a 2D system.
 
Sega messed up by not seeing the shift to 3D and basically bolting a 3D solution onto a 2D system.

Sega saw the shift to 3D as the future; you can see it in their arcade hardware. Sega's mistake was wanting to build walls around its arcade business (which was very profitable at the time) and protect it. They knew very well that if they couldn't differentiate the arcade from home consoles, it would be death to the arcade market. And death to the arcade market is death to Sega.
 
No. If it wasn't for the Xbox, Sony might not be aiming so high with the PS3.

Hehe, as long as Ken's running SCEI I don't think you have to worry about Sony aiming high... :p
 
The Interactive Visual Media group

Image Based Realities project

The goal of the Image-Based Realities (IBR) project is to create technologies that enable user experiences that combine the visual realism of real images and video with the interactivity of Computer Graphics. To this end we are developing techniques for:

the automatic recovery of 3D scenes and objects from images and video
novel representations of visual scenes that capture their appearance, behavior, and geometry
the separation of surface texture and appearance from specularities, highlights, and reflections
creating new virtual experiences that allow users to visually interact with scene elements, and
creating new virtual experiences that combine static and dynamic visual elements captured from images and video
In addition to the core technology development, we also explore and prototype scenarios that incorporate interactive visual experiences and leverage our technologies. We envision the application of our technologies to range from games and virtual experiences to home and other environment capture and modeling as well as novel Windows Shell level UI and user experiences.



Projects


Interactive Video Tours
High Dynamic Range Imaging
Dynamic Scene Rendering
Video Textures


http://research.microsoft.com/vision/ImageBasedRealities/

I guess any Xbox 2 project would probably fit in the "prototype" scenario category.
 
jvd said:
In 2005 a 3GHz P4 on a .09 µm (90 nm) process will be a cool-running chip. So will a Hammer chip. As for the GPU, I agree they will be hot; that's why I am saying a dual-GPU setup, lower clocked and cooler running.

This is taken from sandpile.org:

typical power: 81.8 W (3.06B GHz 0.13 µm µPGA478 @ 1.525 V with HTT)
maximum power: ~100 W (3.06B GHz 0.13 µm µPGA478 @ 1.525 V with HTT)

If we use the reduction from .18 µm to .13 µm for the 2.0GHz P4 as a guide, these numbers should drop to around 55-60 W typical and ~65-70 W max when moved to .09 µm (90 nm). That's still a really hot-running chip for something the size of the Xbox. This is especially true when you consider the amount of heat that the next generation of graphics processors is going to put out as well. ATI certainly has an advantage over nvidia here, and perhaps that lends some credit to the ATI/Microsoft rumors.

Realistically, I think it would make a lot more sense to see something like the Pentium M (Banias) in the Xbox. A 2-2.5GHz Pentium M on a .09 µm (90 nm) process seems like it would be much more reasonable than a 3.06GHz P4 on the same. Pairing that with an R500-generation ATI solution would be pretty nice.

Nite_Hawk
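For what it's worth, Nite_Hawk's estimate can be roughly reproduced with the first-order CMOS rule P ~ C * V^2 * f. The scaling factors below are assumptions for illustration, not measured Intel figures, and first-order scaling ignores leakage, which in practice pushed real 90 nm parts well above such estimates.

```python
# Back-of-the-envelope check using P ~ C * V^2 * f. Assumed numbers, not Intel data.
p_typical_130nm = 81.8            # W: 3.06 GHz P4 at 0.13 um (sandpile figure above)

cap_scale = 0.09 / 0.13           # crude assumption: capacitance scales with feature size
volt_scale = (1.4 / 1.525) ** 2   # assume Vcore drops from ~1.525 V to ~1.4 V

estimate = p_typical_130nm * cap_scale * volt_scale
print(f"~{estimate:.0f} W typical at the same 3.06 GHz")   # ~48 W, before leakage
```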
 
Pairing that with an R500-generation ATI solution would be pretty nice.

Oh it would :) Especially considering what the R500, a la Radeon2, will become, that's all I can say. But if Xbox2 does indeed use an R500, we are in for a treat, is all I'm going to say at this point.
 
Nite_Hawk said:
jvd said:
In 2005 a 3GHz P4 on a .09 µm (90 nm) process will be a cool-running chip. So will a Hammer chip. As for the GPU, I agree they will be hot; that's why I am saying a dual-GPU setup, lower clocked and cooler running.

This is taken from sandpile.org:

typical power: 81.8 W (3.06B GHz 0.13 µm µPGA478 @ 1.525 V with HTT)
maximum power: ~100 W (3.06B GHz 0.13 µm µPGA478 @ 1.525 V with HTT)

If we use the reduction from .18 µm to .13 µm for the 2.0GHz P4 as a guide, these numbers should drop to around 55-60 W typical and ~65-70 W max when moved to .09 µm (90 nm). That's still a really hot-running chip for something the size of the Xbox. This is especially true when you consider the amount of heat that the next generation of graphics processors is going to put out as well. ATI certainly has an advantage over nvidia here, and perhaps that lends some credit to the ATI/Microsoft rumors.

Realistically, I think it would make a lot more sense to see something like the Pentium M (Banias) in the Xbox. A 2-2.5GHz Pentium M on a .09 µm (90 nm) process seems like it would be much more reasonable than a 3.06GHz P4 on the same. Pairing that with an R500-generation ATI solution would be pretty nice.

Nite_Hawk

Don't count out a P4 or a Hammer chip. Remember, .09 micron (90 nm) will bring higher yields, and underclocking the chip will allow them to lower the voltage and reduce power consumption even more. I'm sure MS is looking into every option.
 
Paul said:
Pairing that with an R500-generation ATI solution would be pretty nice.

Oh it would :) Especially considering what the R500, a la Radeon2, will become, that's all I can say. But if Xbox2 does indeed use an R500, we are in for a treat, is all I'm going to say at this point.

don't tease us haha
 
Nite_Hawk said:
jvd said:
In 2005 a 3GHz P4 on a .09 µm (90 nm) process will be a cool-running chip. So will a Hammer chip. As for the GPU, I agree they will be hot; that's why I am saying a dual-GPU setup, lower clocked and cooler running.

This is taken from sandpile.org:

typical power: 81.8 W (3.06B GHz 0.13 µm µPGA478 @ 1.525 V with HTT)
maximum power: ~100 W (3.06B GHz 0.13 µm µPGA478 @ 1.525 V with HTT)

If we use the reduction from .18 µm to .13 µm for the 2.0GHz P4 as a guide, these numbers should drop to around 55-60 W typical and ~65-70 W max when moved to .09 µm (90 nm). That's still a really hot-running chip for something the size of the Xbox. This is especially true when you consider the amount of heat that the next generation of graphics processors is going to put out as well. ATI certainly has an advantage over nvidia here, and perhaps that lends some credit to the ATI/Microsoft rumors.

Realistically, I think it would make a lot more sense to see something like the Pentium M (Banias) in the Xbox. A 2-2.5GHz Pentium M on a .09 µm (90 nm) process seems like it would be much more reasonable than a 3.06GHz P4 on the same. Pairing that with an R500-generation ATI solution would be pretty nice.

Nite_Hawk

Wouldn't they be using a 0.065 µm (65 nm) process by 2005?
 