G70 memory controller

_xxx_ said:
OICA, the issue with traces is just as severe for the external memory. You'd have to route double the traces to the external memory on the board, which is already very tricky with 256-bit.

I was skimming this thread and that caught my eye. I just spent about three minutes googling OICA to find out what kind of retarded acronym it was before I realized it was a nickname. :???:
 
Zvekan said:
ATI trumpeted their Ring Bus controller and its ability to support faster memory, as traces aren't all connected to the same spot, therefore allowing more efficient cooling. How did nVidia manage it with a traditional design?
Because that factor of ATI's design isn't focused on memory clock speeds. It's focused on core clock speeds.
 
Chalnoth said:
Because that factor of ATI's design isn't focused on memory clock speeds. It's focused on core clock speeds.
I'm not sure that's the case; I'd say their focus was on both - remember, it's about moving wire density away from the memory controller, and memory controllers usually run at the memory rate rather than the core rate.
 
Isn't the GDDR4 support on R520 a bit irrelevant? Or do people expect Nvidia to be on GDDR3 while ATi goes for GDDR4? Isn't it more important that both companies have solutions compatible with GDDR4 when GDDR4 actually enters the market?
 
trinibwoy said:
Isn't the GDDR4 support on R520 a bit irrelevant? Or do people expect Nvidia to be on GDDR3 while ATi goes for GDDR4?
In the past, these companies have always moved to new memory technologies in a very similar time frame. There's no reason for GDDR4 to be any different. The only reason you might see one company go for it before the other would be differences in the launch scheduling of new parts.
 
trinibwoy said:
Isn't the GDDR4 support on R520 a bit irrelevant? Or do people expect Nvidia to be on GDDR3 while ATi goes for GDDR4? Isn't it more important that both companies have solutions compatible with GDDR4 when GDDR4 actually enters the market?

It's not irrelevant to ATI if they intend to continue producing R520 beyond the next six months. An X1800 GS-type model (think X1800 XL replacement) a year from now with GDDR4 wouldn't necessarily be a bad high mid-range part.
 
AlphaWolf said:
It's not irrelevant to ATI if they intend to continue producing R520 beyond the next six months. An X1800 GS-type model (think X1800 XL replacement) a year from now with GDDR4 wouldn't necessarily be a bad high mid-range part.

Who cares about ATi? Is it relevant to us?
 
Crusher said:
I was skimming this thread and that caught my eye. I just spent about three minutes googling OICA to find out what kind of retarded acronym it was before I realized it was a nickname. :???:

OICA = "Oh, I see a"... Spork. o.o; it came to me spontaneously years ago when I was getting frustrated trying to find an IM name that wasn't taken and that didn't require me to tack a number on the end of it. *looks up OICAspork on google* o.o; wow, you can see forums I haven't posted in in years. That was amusing.
 
Mintmaster said:
I think NVidia did a lot of memory controller research back while developing GF3/Nforce/XBoxGPU, and maybe improved on it a bit for NV40/G70 (not sure what they did for NV30, as that wasn't very bandwidth efficient).

Yeah, agreed - it seems to be a game of leapfrog on that one. The nForce2 memory controller kicked arse in its day. I haven't kept up on the Intel side, but it's interesting that the nForce4 was comparable performance-wise with Intel's memory controller. I'll have to check now, but Intel, ATI and NV all have products that can be compared.

I doubt the ring bus is on Radeon Xpress products, but it's interesting to me nonetheless.
 
trinibwoy said:
Isn't the GDDR4 support on R520 a bit irrelevant? Or do people expect Nvidia to be on GDDR3 while ATi goes for GDDR4? Isn't it more important that both companies have solutions compatible with GDDR4 when GDDR4 actually enters the market?
Imagine ATI started work on R520's memory architecture back in 2003, say - thinking ahead beyond GDDR3 (which was imminent, back then). Knowing that GDDR4 would be coming along soon (and knowing that ATI has been a major player in pushing GDDR forwards) it's a matter of "betting" on when GDDR4 would actually come to market.

It's interesting how long GDDR3 has taken to get beyond 1200MHz (seems longer than it should have been, to me). Perhaps GDDR4 is behind schedule. Perhaps there was a mild expectation that R580 could launch with GDDR4?

In the 2-year-plus cycle of GDDR generations, a little bit of play in the six-monthly GPU refresh designs is to be expected, no?

Jawed
 
Supporting GDDR4 on R520 is much like the support for DDR2 on R300 - it's there because it's designed to last longer than the initial implementation.
 
IgnorancePersonified said:
I doubt the ring bus is on Radeon Xpress products, but it's interesting to me nonetheless.
The ring bus doesn't have to be, but the heuristics used in newer chips can make their way into the integrated side. I worked on one of the integrated chipsets from ATI several years ago, and its memory controller was from a newer architecture than the rendering pipe.
 
AlphaWolf said:
If you don't care about the technology, you clicked on the wrong site.

You missed my point completely.

Jawed said:
Imagine ATI started work on R520's memory architecture back in 2003, say - thinking ahead beyond GDDR3 (which was imminent, back then). Knowing that GDDR4 would be coming along soon (and knowing that ATI has been a major player in pushing GDDR forwards) it's a matter of "betting" on when GDDR4 would actually come to market.

Well I think it's simpler than that actually. They may not have necessarily been trying to time the emergence of GDDR4 but simply, as Dave stated, designing a flexible and efficient memory controller that could work with GDDR3 in the near-future and GDDR4 in the longer term.
 
trinibwoy said:
You missed my point completely.

Then next time feel free to elucidate.



Well I think it's simpler than that actually. They may not have necessarily been trying to time the emergence of GDDR4 but simply, as Dave stated, designing a flexible and efficient memory controller that could work with GDDR3 in the near-future and GDDR4 in the longer term.

which is what my hypothetical at the top of the page suggested, no?
 
AlphaWolf said:
which is what my hypothetical at the top of the page suggested, no?

Well yeah, but you started off by saying it's not irrelevant to ATi, which is kind of a given and didn't really answer the question.
 
Isn't the change in the memory controller for GDDR4 compared to GDDR3 rather incremental (i.e. not really huge)? It's not like it's a completely different technology. Anyone in the know?
 
Dave Baumann said:
[image: R520 die shot]

That hunk of silicon, more or less in the centre, is the memory controller, so I'm told.
Okay, so now I can better answer DC's question:
DemoCoder said:
Well, certainly if the R520 is merely the "prototype" test to get their feet wet for future R6xx generations, but it seems like an awful lot of silicon and I have to wonder what was given up in the design if they had a simpler controller.
<snip>...and perhaps if it had a simpler controller, it would have a lot fewer transistors, or better yet, more ALUs.
So it looks like R520's memory controller is about 10% of the die. Even if they chopped that in half, it seems the best they could do is fit half a pixel shader quad in there. I think it was definitely worth it. The claims of a huge memory controller were exaggerated.
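The half-a-quad estimate above can be sanity-checked with back-of-envelope arithmetic. All figures here are assumptions for illustration: R520's commonly reported ~321M transistor count, the ~10% memory-controller share from the die shot, and a guessed 40% die share for the four pixel-shader quads combined.

```python
# Rough die-budget arithmetic for the "half a quad" estimate.
# Every constant below is an assumption, not a measured value.

R520_TRANSISTORS_M = 321   # ~321M transistors (commonly reported figure)
MC_FRACTION = 0.10         # memory controller ~10% of die (from the die shot)
SHADER_FRACTION = 0.40     # assumed combined share of the 4 pixel quads
NUM_QUADS = 4

mc_budget = R520_TRANSISTORS_M * MC_FRACTION               # controller budget
savings = mc_budget / 2                                    # freed by halving it
per_quad = R520_TRANSISTORS_M * SHADER_FRACTION / NUM_QUADS  # one pixel quad

print(f"Memory controller: ~{mc_budget:.0f}M transistors")
print(f"Halving it frees:  ~{savings:.0f}M transistors")
print(f"One pixel quad:    ~{per_quad:.0f}M -> savings buy ~{savings / per_quad:.1f} quads")
```

Under these assumed numbers, halving the controller frees roughly 16M transistors, about half of one ~32M-transistor quad, which is consistent with the estimate in the post.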

Too bad NVidia doesn't give out die shots of G70, but the NV40 die shots may give us a comparison of shader pipe size if anyone can figure out what each part is. Is there a shot of R420 anywhere?

It definitely looks like the new shader architecture is where the bulk of ATI's transistor increase came from. Scheduling for branching, duplication of previously shared resources (to have different instructions executing in each quad), and orthogonal FP blending are probably the main reasons for the huge die-space increase.
 