It looks like they are going to have stacked memory on the chiplets. That would explain the 32MB+ L3.
What if ray tracing is a separate card running over PCIe 4.0?
Tensor Core Equivalent?
Vega 20 GPU patents?
Apparently AMD already has 7nm Vega in their lab.
I think AMD is working on implementing their Exascale APU architecture with Vega 20, as outlined by AMD Research in their paper.
It looks like...
Perhaps mGPU is more for Raven Ridge CPUs with Vega GPUs.
That would be for next year.
Rumor is that GT200b is cancelled.
A better solution would be to have one GPU start rendering at the top of the frame and the other at the bottom, each working toward the middle.
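The idea above can be sketched in a few lines. This is a hypothetical simulation, not anything from an actual driver: two GPUs claim scanline bands from opposite ends of the frame, and whichever has done less work so far takes the next band, so the split point settles wherever the per-band cost balances out.

```python
# Sketch of "meet in the middle" split-frame load balancing.
# All names and sizes here are made up for illustration.

FRAME_HEIGHT = 1080
BAND = 8  # scanlines claimed per work unit

def render_frame(cost_of_band):
    """Simulate two GPUs consuming bands from opposite ends.

    cost_of_band(y) returns the simulated render cost of the band
    starting at scanline y. Returns simulated work done per GPU.
    """
    top, bottom = 0, FRAME_HEIGHT       # next unclaimed line from each end
    work = {"gpu0": 0.0, "gpu1": 0.0}   # accumulated simulated time
    while top < bottom:
        # the less-loaded GPU claims the next band from its end
        if work["gpu0"] <= work["gpu1"]:
            work["gpu0"] += cost_of_band(top)
            top += BAND
        else:
            bottom -= BAND
            work["gpu1"] += cost_of_band(bottom)
    return work
```

With a uniform cost function the frame splits roughly in half; if the bottom of the frame is more expensive (say, detailed terrain), the meeting point drifts downward automatically, which is the advantage over a fixed 50/50 split.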
Radeon HD 4870X2 in the nude, 15% better than 4870 CrossFire
That's what I'm talking about:
- lower cost
- higher yields
- better performance
If they can share the same framebuffer, then there goes your argument.
Perhaps with an interconnect they can treat two GPUs as one from a driver perspective.
As we will soon see, multiple chips will eliminate the need for a 512-bit memory...
I think the real efficiency gains will occur with the 4870X2.
However, perhaps bandwidth (i.e. GDDR5) will solve that problem.
You still have to tie the operation to the pixels and to the SIMD operation, so it's not really independent.
This time it looks like the shoe is on the other foot.
I think it was a design flaw. I don't think AA worked as expected.
This diagram shows a multi-chip module that has a crossfire connector and a...
The idea is to make people wait for the new product rather than buy the competitor's 8-month-old product.
The problem with multi-chip solutions:
- You have to load geometry, textures, shaders, etc into memory for each GPU.
- You have to share...
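A rough way to see the cost of that first point, as a sketch with made-up asset sizes (all numbers illustrative, nothing measured):

```python
# Hypothetical per-frame working-set sizes in MB -- illustrative only.
GEOMETRY, TEXTURES, SHADERS, FRAMEBUFFER = 150, 300, 10, 64

def board_footprint(num_gpus, shared_memory):
    """Total memory a multi-GPU board needs for one scene.

    With private per-GPU memory pools, geometry, textures, and shaders
    must be replicated into every pool; with a shared pool, one copy
    serves all GPUs.
    """
    assets = GEOMETRY + TEXTURES + SHADERS
    if shared_memory:
        return assets + FRAMEBUFFER
    return num_gpus * (assets + FRAMEBUFFER)
```

Under these assumed numbers a two-GPU board with private pools needs double the memory of a single-GPU board for the same scene, which is the duplication the list above is complaining about.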
If older chips had 30 clock domains, don't you think newer ones would as well?
With shared memory it makes much more sense to do split frame rendering. :wink:
Why? Think of all the common stuff you can share. It is significant. You avoid all sorts of duplication, memory allocation, loading objects,...
Wow my x1900xtx sure renders World of Warcraft poorly. ATI takes one step forward and six steps back. Older drivers work just fine and these...
ATI has had split clock domains for a long time in current chips.
The answer is neither.
Adaptive AA does seem to be the source of the rendering errors in WoW
What an interesting article. Certainly has some excellent arguments.
Catalyst 7.10 has lots of rendering errors in World of Warcraft.
Missing textures, lines, dots, and flickering. Not great.
Speculation: I suspect they expected the chip to have much greater performance than it does.
I think that R600 has general manufacturing problems like errors in the AA logic or something. I think it is more than we know.
Tell that to...
- Consoles are draining the capital needed for software development.
- Majority of the hardware sold is low end.
I can respect that. I hate business travel.
Perhaps a new direction is necessary to get ATI executing on schedule again.
The consoles have pretty much destroyed the video card market for the PC. That, and all the low-end and integrated video.
1000-1200 feet underground at 100-150 degrees F
Eric, will there be a performance difference between GDDR4 and GDDR3, given the huge bandwidth the R600 already has?
Sound instead of electricity.
If ATI had managed current leakage, it would really rock.
Something must be wrong either with their design, the 80nm process, or a combination...
Perhaps it had a quick demise because of yields. Just speculation of course.
And how long was it until the R520 was replaced? :wink:
The picture has a little web site watermark on it.
My GeForce3, because it had heat problems and crashed my computer non-stop if I didn't take the side of the case off, despite having two case fans...
ATI has more business than Intel in the low end? How is that, when Intel has 37-40% market share?