XBOX2 graphics chip really using embedded DRAM?

MrSingh said:
let's use some good old logic here.

embedded dram = good thing

I agree, but I'm not totally convinced embedded DRAM is a good thing in all kinds of architectures. It's possible that it might not be totally necessary in Xenon's case, especially considering the GPU's ability to access the CPU's L2 cache. I understand Nintendo had that capability, but was it ever used like the examples in Microsoft's patent? I'm actually curious to know the details.

MrSingh said:
but this is a good thing, and there's nothing good about Xenon, thus it can't be true.

hence xenon GPU having embedded ram is impossible. you might find some good old 120ns EDO RAM acting as the framebuffer though.

LOL, I understand where you're coming from. My pessimism comes from the fact that the hype is never anywhere near what the final product looks like. Just read it as my attempt to keep people grounded. It would be nice if I was wrong about some of my conservative opinions, but I have a feeling Microsoft is not going whole hog on the architecture design this time around. Especially if it's not necessary.

Tommy McClain
 
madmartyau said:
DaveBaumann said:
I'm somewhat expectant of "4x FSAA for free".

At what resolution?

I suspect at 480i and 480p. Doesn't make much sense at higher resolutions if devs are starting at HD modes to begin with. J Allard has hinted at the possibility and suggested that they would do a "pan-and-scan" for regular TV viewers. I'll try to find the exact source and what he said and post it later.

Tommy McClain
 
I'd say 16-32 megs of on-die RAM.

It's a simple way to provide extremely fast RAM to the graphics chip and not be limited by the UMA RAM.

It should also scale down in cost somewhat quicker than the GDDR RAM, as each process shrink will make it smaller.
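
For a very rough sense of the die area that would take, here's a back-of-envelope sketch in Python. The per-bit cell area and the overhead factor are my own guesses rather than anything from ATI or a foundry, and the per-node shrink is the ideal case:

Code:
CELL_AREA_90NM_UM2 = 0.15   # assumed eDRAM cell area per bit at 90nm (my guess)
OVERHEAD = 1.30             # assumed overhead for sense amps, decoders, redundancy

def edram_area_mm2(megabytes, node_nm):
    """Rough eDRAM macro area, assuming ideal (node/90)^2 area scaling."""
    bits = megabytes * 1024 * 1024 * 8
    scale = (node_nm / 90.0) ** 2
    return bits * CELL_AREA_90NM_UM2 * scale * OVERHEAD / 1e6  # um^2 -> mm^2

for node in (90, 65, 45):
    for mb in (16, 32):
        print(f"{mb:2d} MB @ {node}nm: ~{edram_area_mm2(mb, node):.0f} mm^2")

Even with these made-up numbers the point stands: the same capacity takes roughly half the area at each full node shrink, which is exactly the cost scaling being described.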
 
Mr Singh said:
I'll take EDO RAM anytime!
where is the Faf? the big barrel crowd is expecting the Faf to show up again!
I'm back in Seoul since last week again (been all over the place in Dec and a good part of January). And I definitely wouldn't mind visiting your corner of the world again, especially if any of the Feel the Magic girls will be there :oops:
 
jvd said:
I'd say 16-32 megs of on-die RAM.

It's a simple way to provide extremely fast RAM to the graphics chip and not be limited by the UMA RAM.

It should also scale down in cost somewhat quicker than the GDDR RAM, as each process shrink will make it smaller.

And enormously expensive from what I understand.
 
ERP said:
jvd said:
I'd say 16-32 megs of on-die RAM.

It's a simple way to provide extremely fast RAM to the graphics chip and not be limited by the UMA RAM.

It should also scale down in cost somewhat quicker than the GDDR RAM, as each process shrink will make it smaller.

And enormously expensive from what I understand.

You really think 16-32 megs will be that expensive?

If the R500 is made on a 90nm process it might be, but then what happens at the 65nm process? The 45nm process? For how much of Xenon's life will it be extremely expensive?

(BTW, these are all questions I'd like to know the answers to :) )
 
the CPU was an OOOe core

Yeah, barely...

even a simple port of GCC can do quite a good job

Ha! That's only because the MIPS branches make it look good! :p Believe me, GCC still does a lot of boneheaded things... However, compared to the PS2, anything may look good... ;)
 
AzBat said:
Doesn't make much sense at higher resolutions if devs are starting at HD modes to begin with.
Considering we're running PC games at 1280x1024 with 4x FSAA today, it would be a step backwards to go to 1280x720 with no AA in a state-of-the-art video game console. Not entirely unexpected, but still a step back.
 
jvd said:
ERP said:
jvd said:
I'd say 16-32 megs of on-die RAM.

It's a simple way to provide extremely fast RAM to the graphics chip and not be limited by the UMA RAM.

It should also scale down in cost somewhat quicker than the GDDR RAM, as each process shrink will make it smaller.

And enormously expensive from what I understand.

You really think 16-32 megs will be that expensive?

My assumption has been that it could be expensive. Maybe Dave can tell us how much ATI's notebook GPUs with embedded DRAM are costing? Which ATI parts are the top of the line, have the most embedded memory and the smallest process technology? Maybe that could be a gauge as to what we might expect with an R500 if it has embedded memory?


jvd said:
If the R500 is made on a 90nm process it might be, but then what happens at the 65nm process? The 45nm process? For how much of Xenon's life will it be extremely expensive?

(BTW, these are all questions I'd like to know the answers to :) )

JVD <shakes head> You can't seriously think it's not a foregone conclusion that the R500 is based on 90nm? Anything better than that is wishful thinking.

Tommy McClain
 
Considering we're running PC games at 1280x1024 with 4x FSAA today
Not without tradeoffs though - Dave was talking about free AA :p
Anyway, so long as the penalty associated with AA is large enough to require forward thinking to use it, you will still get titles that choose not to use it.
 
Inane_Dork said:
AzBat said:
Doesn't make much sense at higher resolutions if devs are starting at HD modes to begin with.
Considering we're running PC games at 1280x1024 with 4x FSAA today, it would be a step backwards to go to 1280x720 with no AA in a state-of-the-art video game console. Not entirely unexpected, but still a step back.

Why's that? When the Xbox was out, PC games were running 800x600, 1024x768 and higher with 4x FSAA, and Microsoft still shipped the Xbox running at 480i with no AA at all. AA has always been a user-configurable option in games. Do you really expect devs to be happy with an environment that doesn't allow them the choice of wasting performance on FSAA on HD modes? The only way FSAA comes for free is if it is downscaled to a lower resolution like 480i/p.

Tommy McClain
 
AzBat said:
J Allard has hinted at the possibility and suggested that they would do a "pan-and-scan" for regular TV viewers. I'll try to find the exact source and what he said and post it later.

Check the J Allard video interview from TeamXbox and GameSpy that was done during E3...

http://events.teamxbox.com/movies/740/J-Allard-Livewire-Interview-Part-2

They start talking about Xbox 2 at 00:55. The talk about high-definition starts at 01:30. He thinks that they should design at 16:9 and scale down to 4:3. He said it would be an interesting challenge for developers to tackle 4:3 displays, but figures they should create some kind of pan-and-scan version instead of letterbox. I've tried transcribing it, but it's going so fast and some of the dialogue is hard to hear. Maybe somebody else can try?

Tommy McClain
 
Mintmaster said:
vliw said:
For me it's 100% sure that all next-generation graphics chips will contain eDRAM, because it is the only way to have the necessary amount of bandwidth.
I'm not so sure. EDRAM takes up a lot of silicon space that can be used for more pixel pipes.

In the PC market, certainly. The faster X800s, for instance, have 32-38 GB/sec that is accessed only by the rendering subsystem. On Xbox2, assuming the leaked specs are anything to base judgement upon, the triple-core CPU, the R500 graphics chip, and everything else have to share a 22.4 GB/sec bus to shared system RAM.

The result is that the R500 will likely have a third or less of the bandwidth that an R480 has, despite having roughly equal texel fill rates and computational throughput. In this context eDRAM might make more sense, especially given that this is a fixed platform, so developers will be able to optimize the application's memory access patterns and footprint to maximize the number of hits to the eDRAM scratchpad.
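
To put some rough numbers on that, here's a quick Python sketch of how much color/Z traffic a 720p frame could generate against a 22.4 GB/sec shared bus. The frame rate, overdraw, pixel formats and read-modify-write factor are assumptions picked purely for illustration, not leaked specs:

Code:
WIDTH, HEIGHT, FPS = 1280, 720, 60
BYTES_COLOR = 4        # assumed 32-bit color
BYTES_Z = 4            # assumed 32-bit depth/stencil
OVERDRAW = 3.0         # assumed average overdraw
RMW = 2.0              # each z-tested/blended pixel is roughly a read plus a write
BUS_GB_S = 22.4        # the rumored shared-bus figure

def fb_traffic_gb_s(aa_samples):
    pixels = WIDTH * HEIGHT * FPS * OVERDRAW * aa_samples
    return pixels * (BYTES_COLOR + BYTES_Z) * RMW / 1e9

for aa in (1, 4):
    t = fb_traffic_gb_s(aa)
    print(f"{aa}x samples: ~{t:.1f} GB/s of color/Z traffic "
          f"(~{t / BUS_GB_S:.0%} of the 22.4 GB/s bus)")

With 4x supersampling the framebuffer traffic alone approaches half of the shared bus under these assumptions, which is exactly the kind of traffic an eDRAM scratchpad would keep on-die.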

EDIT:formatting
 
Alstrong said:
AzBat said:
Microsoft still shipped the Xbox running at 480i with no AA at all.


er....Unreal Championship, Max Payne 2, PGR2...

er... I thought we were talking about FSAA being on all the time, no? It's an option available to developers and that's what I was talking about in my previous post. Not that there are no Xbox games that support FSAA. My bad if that wasn't understood.

Again, I see FSAA being an option in HD modes, but not on all the time. For 480i/p on the Xbox2 I could see Microsoft making the free FSAA on by default.

Tommy McClain
 
Fafalada said:
Not without tradeoffs though - Dave was talking about free AA :p
I know. I'm just saying that if next-gen games have a 480p-with-AA option and a 720p-without-AA option, it'll be a step backwards. And that would be unfortunate.


AzBat said:
Why's that? When the Xbox was out, PC games were running 800x600, 1024x768 and higher with 4x FSAA, and Microsoft still shipped the Xbox running at 480i with no AA at all.
I don't understand what your argument is. I said it would be unfortunate, which I don't see a problem with. And I said it could be expected for reasons you just made concrete.


Do you really expect devs to be happy with an environment that doesn't allow them the choice of wasting performance on FSAA on HD modes?
Wha? Who's talking about forcing developers to use AA at any resolution? I'm not, anyway.

And why is it wasted in HD? I haven't found that to be the case whatsoever. The stability of the video increases dramatically with AA on, HD or no HD.
 
let's see if anyone remembers my background and if someone picks this up...

When I was hearing "stuff" on almost a daily basis from the "doomed company" about development of the "viking weapon", I used to calculate quite a lot how much use you could actually get out of eDRAM, and with which rendering method. Now that all chips do rendering in pixel quads, things have become much easier to understand than back then, when it was pretty hard to see how on earth the "viking weapon" was supposed to hit the performance specs that several sources confirmed.

If I got it right, the basic idea of the system was to split the back buffer into tiles and render them in eDRAM on a tile basis. If AA was needed, the supersampling downfilter was done during the tile -> back buffer copy. This made the use of eDRAM really efficient and kept rendering speed up. It also ensured that while rendering a tile there was no traffic to external RAM other than fetching texels for the TMUs. The tile size was 32x32 pixels, so vertex and pixel shader programs and geometry (not sure about this one) also stayed in eDRAM. After all, out of 12MB there's plenty left over after a 32x32-pixel tile buffer.
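
Just to show how small that footprint is, here's a quick sketch of the tile budget. The 4x sample count and the 32-bit color/Z formats are my assumptions, not confirmed numbers:

Code:
TILE_W = TILE_H = 32
SS_SAMPLES = 4            # assumed 2x2 supersampling
BYTES_COLOR = 4           # assumed 32-bit color
BYTES_Z = 4               # assumed 32-bit Z
EDRAM_BYTES = 12 * 1024 * 1024

tile_bytes = TILE_W * TILE_H * SS_SAMPLES * (BYTES_COLOR + BYTES_Z)
print(f"Per-tile buffer: {tile_bytes // 1024} KB "
      f"({tile_bytes / EDRAM_BYTES:.2%} of the 12 MB eDRAM)")

So a supersampled 32x32 tile costs on the order of 32 KB, a fraction of a percent of the 12MB, which is why there's so much room left for shader programs and geometry.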

And they did this at 0.17µm in an Infineon fab. Things have moved forward quite a lot, and AFAIK at least Toshiba and NEC have been offering a 0.13µm eDRAM process for quite some time already.

EDIT: I have to add this, since not everyone in this section has seen it:
first one:
picture1.jpg

second one:
picture2.jpg


So, here's your 12MB eDRAM DX8 4x2-pipeline chip. (One working, the other not, though I haven't been able to prove either, because the card is missing. ;) )
 
Inane_Dork said:
AzBat said:
Why's that? When the Xbox was out, PC games were running 800x600, 1024x768 and higher with 4x FSAA, and Microsoft still shipped the Xbox running at 480i with no AA at all.
I don't understand what your argument is. I said it would be unfortunate, which I don't see a problem with. And I said it could be expected for reasons you just made concrete.

But I was arguing with you about whether it would be a step backwards. I showed you evidence that during the development of the Xbox 1, FSAA was available in games at the time and Microsoft still didn't offer FSAA always on. If Microsoft offers it always on for 480i/p, then that would be a step forwards, not backwards.

Inane_Dork said:
Do you really expect devs to be happy with an environment that doesn't allow them the choice of wasting performance on FSAA on HD modes?
Wha? Who's talking about forcing developers to use AA at any resolution? I'm not, anyway.

I thought the talk was centered around 4x FSAA for free. I can't see how FSAA could be free in HD modes. I just assumed you wanted 4x FSAA always on in HD modes even with the loss in performance. Devs would rather make the decision themselves whether or not it's worth sacrificing speed for it.

Inane_Dork said:
And why is it wasted in HD? I haven't found that to be the case whatsoever. The stability of the video increases dramtically with AA on, HD or no HD.

I wasn't saying that FSAA would be wasted on HD, but that developers might think FSAA on HD modes would be wasting performance.

Tommy McClain
 
AzBat said:
When the Xbox was out PC games were running 800x600, 1024x768 and higher with 4x FSAA
When the Xbox launched, the GF3 and R8500 were just out, and most people still had GF2s at best. Certainly not AA-friendly hardware, even on high-end systems.
 
If Xbox2/Xenon main memory bandwidth is under 30 gigabytes per second, then it is essential that there be another pool of higher-bandwidth memory: eDRAM or some form of embedded memory, not counting the usual caches found in all CPUs and GPUs.

BTW, I think 22.4 GB/sec for main memory bandwidth kinda sucks. It would be like if Xbox only had 2 or 3 GB/sec instead of the 6.4 GB/sec it has. And even the 6.4 GB/sec was a major bottleneck in Xbox, even though Xbox had the highest main memory bandwidth of the current consoles, because Xbox lacked the high-bandwidth embedded memory that the PS2 GS and GameCube Flipper had, which alleviated the smaller main memory bandwidths (3.2 GB/sec and 2.6 GB/sec respectively) of those consoles.

I am hoping Xbox2's main memory bandwidth is 50 GB/sec or better. There were rumors of ~51 GB/sec and 64 GB/sec. Plus, that Xbox2 has embedded memory bandwidth well north of 100 GB/sec.
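
Just to spell out the arithmetic behind that comparison (the Xbox figure is the commonly quoted one; the rest are the same rumored numbers as above):

Code:
xbox1_bw = 6.4                # GB/s, Xbox main memory
rumored_xenon_bw = 22.4       # GB/s, the rumored unified-memory figure
hoped = (51.0, 64.0)          # GB/s, the higher rumored figures

print(f"22.4 GB/s is only {rumored_xenon_bw / xbox1_bw:.1f}x Xbox's 6.4 GB/s")
for bw in hoped:
    print(f"{bw:.0f} GB/s would be {bw / xbox1_bw:.1f}x Xbox's main memory bandwidth")

A 3.5x jump in main memory bandwidth over Xbox, against a much larger jump in fill rate and shader throughput, is the mismatch the embedded memory would have to cover.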
 