AMD: R8xx Speculation

How soon will Nvidia respond with GT300 to the upcoming ATI RV870 lineup of GPUs?

  • Within 1 or 2 weeks
    Votes: 1 (0.6%)
  • Within a month
    Votes: 5 (3.2%)
  • Within a couple of months
    Votes: 28 (18.1%)
  • Very late this year
    Votes: 52 (33.5%)
  • Not until next year
    Votes: 69 (44.5%)

  • Total voters: 155
  • Poll closed.
Considering that, from the benchmarks I've seen, the memory bandwidth doesn't increase with DDR3, yes.
Latency increases.
Memory transfers are less efficient, because when using DDR3, "the command efficiency will drop because the burst length will double to 64B."
Commands are sent at the same "base freq" as in DDR2, no? And CPUs are rarely able to work by bursting data to/from memory. Adding an IGP will increase efficiency, but total available bandwidth will be lower.
Adding sideport GPU-only memory will help a lot, no doubt.
 
A little bit OT;

but shouldn't we have a dedicated Llano APU thread in the CPU or GPU section? On Tuesday we will finally know what Llano can and can't do ( http://www.isscc.org/isscc/2010/ ), so further hijacking of the RV8xx thread would be unwelcome, I think.

So should the thread be in the CPU section or in the GPU section?
 
That shot is "half-present"; use the complete one:

apuu.jpg
 
I think it had been established that those are two different dies, the latter possibly being a Photoshop job.
 
Then how else will they give the GPU component the bandwidth it needs? IGPs are fine with side-port memory to increase the capabilities of the part, so why not a side port for the CPU as well?
Sideport memory doesn't do much in terms of bandwidth, as it's only (in current incarnations) 16-bit DDR3. That's only roughly 3.2 GB/s. Though for AMD's current IGP, they also have quite limited bandwidth due to HT (8+8 GB/s for HT at 2 GHz). More IGP width isn't really a viable solution; you're really moving towards a discrete graphics chip in that case. Sideport memory probably does more for saving power than for increasing performance...
But for Fusion parts, you certainly will have the full bandwidth available to the GPU (well, of course it still has to battle with the CPU for it, but at least potentially it can use it all alone). For sideport to really make some performance difference you'd pretty much need at least a 32-bit GDDR5 chip; adding 64-bit DDR3 would seem completely pointless, as that would have all the cost of adding a third memory channel, so why not just do that?
And I don't think a GDDR5 sideport makes sense either.
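For reference, a quick back-of-the-envelope check of those figures (a rough sketch; the 1600 MT/s and 4000 MT/s data rates below are assumptions for illustration, not confirmed part specs):

```python
# Rough peak bandwidth: transfers/s x bytes per transfer.
def peak_bw_gbs(data_rate_mtps, bus_width_bits):
    return data_rate_mtps * 1e6 * (bus_width_bits / 8) / 1e9

print(peak_bw_gbs(1600, 16))   # 16-bit sideport DDR3 at an assumed 1600 MT/s -> ~3.2 GB/s
print(peak_bw_gbs(4000, 32))   # 32-bit GDDR5 chip at an assumed 4000 MT/s    -> ~16 GB/s
print(peak_bw_gbs(4000, 16))   # 16-bit HT link at 2 GHz (double-pumped)      -> ~8 GB/s each way, i.e. 8+8 GB/s
```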
 
Sideport memory doesn't do much in terms of bandwidth, as it's only (in current incarnations) 16-bit DDR3. That's only roughly 3.2 GB/s. Though for AMD's current IGP, they also have quite limited bandwidth due to HT (8+8 GB/s for HT at 2 GHz). More IGP width isn't really a viable solution; you're really moving towards a discrete graphics chip in that case. Sideport memory probably does more for saving power than for increasing performance...
But for Fusion parts, you certainly will have the full bandwidth available to the GPU (well, of course it still has to battle with the CPU for it, but at least potentially it can use it all alone). For sideport to really make some performance difference you'd pretty much need at least a 32-bit GDDR5 chip; adding 64-bit DDR3 would seem completely pointless, as that would have all the cost of adding a third memory channel, so why not just do that?
And I don't think a GDDR5 sideport makes sense either.

If those don't make sense, then perhaps what we might see is a system with embedded RAM? Didn't Microsoft say the biggest use of memory bandwidth in a GPU was the frame buffer? If they can add 20-30 MB of embedded memory, then they can be sure to contain a frame buffer up to 1920x1080, which is pretty much the maximum a laptop monitor is restricted to.
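As a rough sanity check on that size (a sketch assuming an uncompressed 32-bit colour buffer, which is an assumption on my part):

```python
# Uncompressed 1920x1080 frame buffer at 4 bytes per pixel.
width, height, bytes_per_pixel = 1920, 1080, 4
buffer_mib = width * height * bytes_per_pixel / 2**20

print(round(buffer_mib, 1))      # ~7.9 MiB for a single buffer
print(round(3 * buffer_mib, 1))  # ~23.7 MiB with triple buffering, roughly the 20-30 MB figure above
```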
 
Then how else will they give the GPU component the bandwidth it needs? IGPs are fine with side-port memory to increase the capabilities of the part, so why not a side port for the CPU as well?

Side-port is a useless marketing gimmick needed because the current design is generally broken. The memory controllers on current designs have the capability of delivering sufficient memory bandwidth. The issue isn't in the memory controller or the bandwidth it can deliver.

Seriously, look at the bandwidth provided by side-port, then look at the bandwidth provided by the IMT.
 
Latency increases.
Memory transfers are less efficient, because when using DDR3, "the command efficiency will drop because the burst length will double to 64B."
Commands are sent at the same "base freq" as in DDR2, no? And CPUs are rarely able to work by bursting data to/from memory. Adding an IGP will increase efficiency, but total available bandwidth will be lower.
Adding sideport GPU-only memory will help a lot, no doubt.

Actually, the command efficiency improves with DDR3 vs DDR2: 1 PRE/RAS/CAS per 64B vs 1 PRE/RAS/CAS per 32B.

Also, FYI, CPUs almost ALWAYS work by bursting data to/from memory. It's a small, brand-spanking-new invention called caches, circa the 1960s.

And in any situation, you are better off adding more memory capability to the IMT than you are adding sideport memory controllers.
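To put numbers on the command-efficiency point (a toy calculation assuming a 64-bit channel with DDR2 BL4 and DDR3 BL8 minimum bursts):

```python
# Command sequences (PRE/RAS/CAS) needed per KiB fetched, one sequence per minimum burst.
channel_bytes = 8  # 64-bit channel

for name, burst_length in [("DDR2 BL4", 4), ("DDR3 BL8", 8)]:
    burst_bytes = burst_length * channel_bytes           # 32 B vs 64 B per burst
    print(name, 1024 // burst_bytes, "command sequences per KiB")
# DDR2 BL4: 32, DDR3 BL8: 16 -> fewer commands per byte with DDR3,
# though half of a 64 B burst is wasted if the requester only wanted 32 B.
```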
 
If those don't make sense, then perhaps what we might see is a system with embedded RAM? Didn't Microsoft say the biggest use of memory bandwidth in a GPU was the frame buffer? If they can add 20-30 MB of embedded memory, then they can be sure to contain a frame buffer up to 1920x1080, which is pretty much the maximum a laptop monitor is restricted to.

Sure, add significant cost to the most cost-constrained part you design.
 
Side-port is a useless marketing gimmick needed because the current design is generally broken. The memory controllers on current designs have the capability of delivering sufficient memory bandwidth. The issue isn't in the memory controller or the bandwidth it can deliver.

Seriously, look at the bandwidth provided by side-port, then look at the bandwidth provided by the IMT.

I gotcha.

Sure, add significant cost to the most cost-constrained part you design.

Aren't they paid for performance? If having an onboard cache big enough to stow a 1920x1080 frame buffer nets them a great enough performance improvement, then why would they not pursue this avenue?

They could always have two different versions, one lower-cost without a frame buffer and one with. How else are they going to effectively feed a GPU which needs 25 GB/s of bandwidth on its own to function near full capacity?
 
Side-port is a useless marketing gimmick needed because the current design is generally broken. The memory controllers on current designs have the capability of delivering sufficient memory bandwidth. The issue isn't in the memory controller or the bandwidth it can deliver.

Seriously, look at the bandwidth provided by side-port, then look at the bandwidth provided by the IMT.

No, the side port is not useless at all, nor is it meant to boost the bandwidth/performance. It may do that, but the point is power savings more than anything else.

The sideport allows the frame buffer to be read and the screen refreshed while putting the IMC to sleep more aggressively. It saves a lot of power in idle/sleep states.

-Charlie
 
No, the side port is not useless at all, nor is it meant to boost the bandwidth/performance. It may do that, but the point is power savings more than anything else.

The sideport allows the frame buffer to be read and the screen refreshed while putting the IMC to sleep more aggressively. It saves a lot of power in idle/sleep states.

That's a design artifact of how they designed the IMC, not an inherent benefit of the side port.
 
Not really. You have power savings from the lack of HT transfers and MC. Additional memory bandwidth is never a bad thing either.
 
Not really. You have power savings from the lack of HT transfers and MC. Additional memory bandwidth is never a bad thing either.

The problem is that the sideport has a very narrow memory bus (usually just 32-bit) with a single chip, so the costs are as low as possible (there isn't much free space on the motherboard either). The sideport memory is actually much slower than the main memory (it hardly reaches 3.2 GB/s, which would need 800 MHz DDR).
I think the main reason they still use it is power savings in 2D.
With dual-channel DDR3-1600 and 25.6 GB/s max bandwidth, you can forget sideport. :p
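Same rough arithmetic for those two numbers (assuming the 800 MT/s sideport rate and DDR3-1600 main memory mentioned above):

```python
# Peak bandwidth = transfers/s x bytes per transfer.
def peak_bw_gbs(data_rate_mtps, bus_width_bits):
    return data_rate_mtps * 1e6 * (bus_width_bits / 8) / 1e9

print(peak_bw_gbs(800, 32))    # 32-bit sideport at 800 MT/s      -> ~3.2 GB/s
print(peak_bw_gbs(1600, 128))  # dual-channel (128-bit) DDR3-1600 -> ~25.6 GB/s
```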
 
Actually, the command efficiency improves with DDR3 vs DDR2: 1 PRE/RAS/CAS per 64B vs 1 PRE/RAS/CAS per 32B.

Also, FYI, CPUs almost ALWAYS work by bursting data to/from memory. It's a small, brand-spanking-new invention called caches, circa the 1960s.

And in any situation, you are better off adding more memory capability to the IMT than you are adding sideport memory controllers.
I was quoting realworldtech... the point being that 64B data transfers are not always needed. Though honestly, my data is a bit old; maybe in the days of Win7 and quad cores, bursting large memory chunks is the way?
Of course adding more "generic" channels is better, but atm AMD can't afford a socket change. Or rather, they don't want to change it.
Intel thinks that a quad-or-more-core CPU needs triple-channel DDR3 memory, yet you claim AMD's quad-cores can't use dual-channel...
And benchmarks show that 780G + 128 MB sideport is way faster than the same chipset without it.
So, perhaps someone is distorting reality & benchmarks & Intel...
 