From the outside, GDDR3 and DDR3 behave almost identically from a logical perspective. At 800 MHz, both GDDR3 and DDR3 have a CL (CAS latency) of 11 cycles, while RCD (RAS-to-CAS delay) is 12 cycles for GDDR3 vs 11 cycles for DDR3. So the effective latency of a read is almost identical: 11+12 cycles vs 11+11 cycles.
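As a rough back-of-the-envelope check, assuming the 800 MHz figure is the interface clock (so one cycle is 1.25 ns), the sketch below converts those cycle counts into nanoseconds. The numbers and the helper name are illustrative, not taken from any datasheet:

```python
# Back-of-envelope read latency comparison (illustrative numbers only).
CLOCK_MHZ = 800               # assumed interface/command clock
T_CK_NS = 1000 / CLOCK_MHZ    # 1.25 ns per cycle at 800 MHz

def read_latency_ns(cl_cycles, rcd_cycles, tck_ns=T_CK_NS):
    """Approximate activate-to-data latency: RCD + CL, converted to ns."""
    return (cl_cycles + rcd_cycles) * tck_ns

print(read_latency_ns(11, 12))  # GDDR3: 23 cycles -> 28.75 ns
print(read_latency_ns(11, 11))  # DDR3:  22 cycles -> 27.5 ns
```

The difference works out to about one cycle, roughly 1.25 ns, which is why the two look the same from a latency standpoint.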
The main logical difference is that GDDR3 uses a 4-bit prefetch and DDR3 uses an 8-bit prefetch. This determines the minimum burst size you get for a read or write. It matters for pushing up to higher IO bandwidths but has almost nothing to do with latency. At their core, DRAMs have not been speeding up anywhere near as much as the IO speeds; to get more and more bandwidth, the interface between IO and core has been made wider and wider. GDDR3 at 800 MHz runs the DRAM core at 400 MHz and fetches 4 bits in parallel, which gives you 1600 Mbit/sec per data pin. DDR3 at 800 MHz runs the DRAM core at 200 MHz and fetches 8 bits in parallel, which gives you the same 1600 Mbit/sec. In both cases the bits are fetched in parallel at a lower speed and then serialized at a higher speed, and the latency of making that fetch is roughly the same. This is also why CL in cycles goes up very quickly as IO speed goes up - the core is running much slower than the IO.
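A minimal sketch of that arithmetic, assuming the core clock and prefetch figures above and treating the ~13.75 ns core access time (11 cycles at 800 MHz) as an illustrative constant rather than a spec value:

```python
# Per-pin data rate is core clock times prefetch width (illustrative sketch).
def per_pin_rate_mbps(core_clock_mhz, prefetch_bits):
    """Bits serialized per second on one data pin, in Mbit/s."""
    return core_clock_mhz * prefetch_bits

print(per_pin_rate_mbps(400, 4))   # GDDR3: 400 MHz core, 4-bit fetch -> 1600
print(per_pin_rate_mbps(200, 8))   # DDR3:  200 MHz core, 8-bit fetch -> 1600

# Why CL in cycles climbs with IO speed: if the core's absolute access time
# stays roughly constant, the same nanoseconds span more IO clock cycles.
def cl_in_cycles(core_latency_ns, io_clock_mhz):
    return core_latency_ns * io_clock_mhz / 1000

print(cl_in_cycles(13.75, 400))    # ~5.5 cycles at a 400 MHz interface
print(cl_in_cycles(13.75, 800))    # ~11 cycles at an 800 MHz interface
```

Same core latency in nanoseconds, twice the cycle count on the faster interface.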
The core structures of the DRAMs in all of these (DDR2, DDR3, DDR4, GDDR3, GDDR5) are the same; the differences are in the IO area. The wider the core interface, the higher you can push the IO speed, so DDR2 and GDDR3 are 4-bit prefetch, while DDR3 and GDDR5 are 8-bit prefetch. The electrical interface between the two chips is where the larger differences show up. GDDR3 uses 1.8-2.0V IO with pull-up termination at the end point and a pseudo-open-drain style of signaling, along with single-ended, uni-directional strobes. It is a good interface for a controller chip and a couple of DRAMs. DDR3 uses 1.35-1.5V IO with mid-rail termination at the end points, with the termination turned on/off by a control pin, and bi-directional differential strobes. It is better suited for interfaces with more DRAMs (like a DIMM).
GDDR3 and GDDR5 use signaling designed to go a lot faster. They wind up limited by both the DRAM core speed and the IO speed. DDR2 and DDR3 use signaling designed to handle more loads. They wind up limited by IO speed but not by DRAM core speed.
At this point, if you are making something at the upper end of the GDDR3 speed range, there is almost no reason to use GDDR3 over DDR3. They will have very similar performance and latency. Since GDDR3 is being phased out, it is relatively expensive, while DDR3 is available in huge quantities because it is PC main memory, and that drives down prices. The one advantage GDDR3 has is that it comes in x32 packages, so if you wanted to keep the PCB small you might opt for GDDR3. This also works against you because the core of the DRAM stays the same size: two x16 DDR3 devices can give you twice the memory of one x32 GDDR3 device. If there were no new Xbox coming out in the next couple of years, you would definitely see an updated 360 using DDR3 rather than GDDR3, just because of the relative price of the DRAMs.
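To make the capacity point concrete: for a fixed bus width, narrower packages mean more devices, and since each device carries the same size core, more devices means more total memory. The sketch below assumes a hypothetical 1 Gbit die purely for illustration:

```python
# Capacity behind a fixed-width bus (hypothetical 1 Gbit per die, for illustration).
DIE_GBIT = 1

def total_capacity_gbit(bus_width_bits, device_width_bits, die_gbit=DIE_GBIT):
    """Devices needed to fill the bus, times the capacity of each device."""
    devices = bus_width_bits // device_width_bits
    return devices * die_gbit

print(total_capacity_gbit(32, 32))  # one x32 GDDR3 device  -> 1 Gbit
print(total_capacity_gbit(32, 16))  # two x16 DDR3 devices  -> 2 Gbit
```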