ATI shows 9700 with DDR2 on TechTV?

From what I've heard, the 9700 is mostly limited by the core rather than the memory.

Not in FSAA. I see a 3-5 FPS increase just from overclocking my RAM in several benchmarks with FSAA at higher resolutions.

DDR-II should make a sizable impact at 1280x1024 and up with 4x FSAA and 6x FSAA, the two areas that will be very important for competing with nvidia.
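To see why, here's some rough framebuffer math at that resolution (the overdraw and framerate figures are assumptions for illustration, not measurements):

[code]
# Rough framebuffer traffic at 1280x1024 with 4x multisampled FSAA.
# Overdraw and fps are assumed values, purely for illustration.
pixels = 1280 * 1024
bytes_per_sample = 4 + 4          # 32-bit color + 32-bit Z per sample
samples = 4                       # 4x FSAA
overdraw = 3                      # assumed average overdraw
fps = 60

traffic = pixels * bytes_per_sample * samples * overdraw * fps
print(f"{traffic / 1e9:.1f} GB/s")  # ~7.5 GB/s, before textures and Z reads
[/code]

That's a big chunk of the card's total bandwidth eaten by the framebuffer alone, which is why faster memory shows up most in FSAA benchmarks.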
 
I see a 3-5 FPS increase just from overclocking my RAM in several benchmarks with FSAA at higher resolutions.

By how much did you overclock the RAM? 3-5 fps in FSAA isn't that much. Using DDR2 would probably give at least a 30% increase in bandwidth throughput.
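Quick back-of-the-envelope, since peak bandwidth is just bus width times effective clock (the DDR2 clock below is an assumed figure for illustration, not a spec):

[code]
# Peak bandwidth = (bus width in bits / 8) * effective clock.
# 620MHz effective is the 9700 PRO's shipping DDR clock; the 800MHz
# effective DDR2 figure is a hypothetical guess, not a spec.
def gbytes_per_sec(bus_bits, effective_mhz):
    return bus_bits / 8 * effective_mhz / 1000  # MB/s -> GB/s

ddr_now = gbytes_per_sec(256, 620)    # ~19.8 GB/s
ddr2_guess = gbytes_per_sec(256, 800) # ~25.6 GB/s (hypothetical clock)
print(f"gain: {(ddr2_guess / ddr_now - 1) * 100:.0f}%")  # ~29%
[/code]

So a ~30% gain is plausible if DDR2 ships around 800MHz effective, but it hinges entirely on the actual clocks.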
 
Sabastian said:
Inqwell's Fuad:

As we previously said, the NV30 will have only a 128-bit interface, which is half the width of the one used on the fastest gaming card around, ATi's Radeon 9700 PRO.

The fastest NV30 will work on the nice round number of 1000MHz, bringing graphics memory to a 1GHz speed for the first time in history.

The Radeon 9700 has a 256-bit bus, and by the looks of this showing on TechTV it will also use DDRII. If this is the case then surely ATI will have an NV30 "buster" in waiting. What else could nvidia have done to possibly outdo the Radeon 9700 with DDRII? I am anxious to see just how this card from ATI will compare to the NV30. I am surprised there aren't more rumors floating around about ATI's NV30 "buster".
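For reference, the raw numbers behind that question are easy to sanity-check (peak figures only; the 1GHz DDR-II clock is Fuad's claim, and the DDRII-on-9700 line is purely hypothetical):

[code]
# Sanity check on the rumored specs: halving the bus width cancels out
# most of the clock increase. Only the 9700 PRO's 620MHz effective DDR
# clock is a shipping spec; the other two lines are rumor/hypothetical.
def peak_gb_s(bus_bits, effective_mhz):
    return bus_bits / 8 * effective_mhz / 1000

print(peak_gb_s(128, 1000))  # rumored NV30, 128-bit @ 1GHz eff.:  16.0 GB/s
print(peak_gb_s(256, 620))   # Radeon 9700 PRO shipping today:     19.84 GB/s
print(peak_gb_s(256, 1000))  # hypothetical 9700 + same DDR-II:    32.0 GB/s
[/code]

In other words, even at Fuad's 1GHz the rumored NV30 would have less raw bandwidth than the 9700 PRO already ships with, unless nvidia has something else up its sleeve (compression, caching, or the like).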


From what I've heard, the most likely speed for DDRII is 550-600MHz DDR (275-300MHz actual). 1GHz seems preposterous right now.

As far as an NV30 "buster" goes--heck, they just busted the Ti4600... give 'em some time... ;) (Judging by the apparent NV30 production schedule, they are likely to have a lot of time.)


Inqwell's Fuad:

One of the key questions is, who has this kind of memory? And if you search the net for a while you will find that Samsung promised volume production of this DDR II memory capable of up to 1GHz speeds in Q1 2003.

This means that some volume will exist by December, when this card should appear on the shelves. Nvidia's CEO, Jen-Hsun Huang, never said that there would be loads of cards, but there will be some cards shipping and on the shelves by that time.

The simple fact of the matter here is that nvidia does not own any special rights to this memory, and ATI should be able to utilize it in conjunction with the Radeon 9700... Further, ATI should be able to put more cards on the market, as they are already manufacturing and shipping the Radeon 9700 in volume. So does nvidia have a leg to stand on? What could there be in the NV30 that would help it outperform the 256-bit bus coupled with DDRII memory that the Radeon 9700 will be endowed with?


Not only that... but with this guy making incredible statements about 1GHz DDR, and his claim that Samsung said it would be producing such memory in the first quarter of next year, it would have been instructive if he had included links to this info, which he says is commonly available on the Internet. Is it? I surely haven't seen it. Did he post a link?
 
I seriously doubt that the R300 will be able to outperform the NV30, even if ATI releases it with DDRII. The R300 core is a great piece of hardware, but with the exception of floating-point, its architecture is fairly traditional. NV30 is a completely new approach. A popular theory is that it is a new advanced tiling architecture that will make memory bandwidth limitations a thing of the past. The Kyro chips used something similar, though a little less complex and without hardware T&L. Remember, when paired with a fast CPU, the KyroII gave the GeForce 2 a run for its money!
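Nobody outside nVidia knows what the NV30 actually does, but for anyone unfamiliar with the Kyro-style approach, here's a toy sketch of tile-based deferred rendering. Everything in it (the structures, the tile size) is made up for illustration; nothing here is known about the NV30:

[code]
# Toy sketch of tile-based deferred rendering, the Kyro/PowerVR approach.
# Purely illustrative pseudo-hardware; not anything known about the NV30.
from dataclasses import dataclass

TILE = 8  # tile edge in pixels (real chips used small tiles, e.g. 32x16)

@dataclass
class Tri:
    x0: int   # bounding box (axis-aligned, to keep the sketch short)
    y0: int
    x1: int
    y1: int
    z: float  # constant depth per triangle, for simplicity
    color: int

def render(tris, width, height):
    cols = -(-width // TILE)   # ceiling division
    rows = -(-height // TILE)

    # Pass 1 (binning): record which triangles touch which screen tiles.
    # No pixels are shaded and no framebuffer traffic happens yet.
    bins = {(tx, ty): [] for ty in range(rows) for tx in range(cols)}
    for t in tris:
        for ty in range(t.y0 // TILE, t.y1 // TILE + 1):
            for tx in range(t.x0 // TILE, t.x1 // TILE + 1):
                if (tx, ty) in bins:
                    bins[(tx, ty)].append(t)

    # Pass 2 (per-tile resolve): depth-test and shade each tile in fast
    # on-chip memory; only the finished tile is written out to DRAM once.
    # This is why z-buffer and FSAA traffic largely vanish on a tiler.
    fb = [[0] * width for _ in range(height)]
    for (tx, ty), tile_tris in bins.items():
        for py in range(ty * TILE, min((ty + 1) * TILE, height)):
            for px in range(tx * TILE, min((tx + 1) * TILE, width)):
                z_best, color = float("inf"), 0
                for t in tile_tris:  # hidden-surface removal, on chip
                    if t.x0 <= px <= t.x1 and t.y0 <= py <= t.y1 and t.z < z_best:
                        z_best, color = t.z, t.color
                fb[py][px] = color
    return fb

# e.g. render([Tri(0, 0, 15, 15, z=1.0, color=0xFF0000)], 32, 32)
[/code]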

As Nvidia chief scientist David Kirk said, "You haven't seen nothing yet!"
 
Johnathan256 said:
As Nvidia chief scientist David Kirk said, "You haven't seen nothing yet!"

That's because they still don't have anything. :p

--|BRiT|
 
Johnathan256 said:
NV30 is a completely new approach. A popular theory is that it is a new advanced tiling architecture that will make memory bandwidth limitations a thing of the past.

It's pure speculation, nothing else.
We don't know anything beyond these PR shots...
 
WaltC said:
From what I've heard, the most likely speed for DDRII is 550-600MHz DDR (275-300MHz actual). 1GHz seems preposterous right now.

From looking at Samsung's website, current DDR2 is rated at 533MHz, but can be extended to 667MHz for "networks and special system environments."

But, I'm reasonably certain that this is for system memory. Video memory always runs quite a bit faster.

Regardless, the primary motivation for the development of DDR2 was to increase the bandwidth per pin; in this case, that means increasing the clock speed. I find it very hard to believe that the first batches of DDR2 for video cards would be no faster than today's fastest DDR.

Particularly since DDR2 looks to be slower per clock than DDR.
 
Do you all remember the company Gigapixel? If not, they were the company that developed the GP-1 graphics chip back in 1999. Running at only 100MHz, that chip outperformed the GeForce 256's normal performance while doing 4x FSAA. Check out this site for more info:

http://www.chickshardware.com/html/articles/gigapixel/gigapixel.html

You have got to be crazy to think Nvidia is not going to take advantage of this technology. And let's not even mention all of the 3dfx stuff they could incorporate. Ever heard of RAMPAGE?
 
It's amazing how people who wouldn't have been caught dead with an ATI card not too long ago will now praise ATI, now that they have released their first really good product. Hmmm... I only wonder "which" version of the NV30 will be in their Christmas stockings this year...
 
Johnathan256 said:
It's amazing how people who wouldn't have been caught dead with an ATI card not too long ago will now praise ATI, now that they have released their first really good product.

I guess some people aren't afraid to give credit where it's due, even if it isn't from the almighty Nvidia.

Hmmm... I only wonder "which" version of the NV30 will be in their Christmas stockings this year...

Maybe the paper one?
 
Johnathan256 said:
I seriously doubt that the R300 will be able to outperform the NV30, even if ATI releases it with DDRII. The R300 core is a great piece of hardware, but with the exception of floating-point, its architecture is fairly traditional. NV30 is a completely new approach. A popular theory is that it is a new advanced tiling architecture that will make memory bandwidth limitations a thing of the past. The Kyro chips used something similar, though a little less complex and without hardware T&L. Remember, when paired with a fast CPU, the KyroII gave the GeForce 2 a run for its money!

I don't believe for a moment that the NV30 will be a tiler. Furthermore, I don't believe that deferred rendering is a smart thing to do as we move into the future (I've explained why on multiple occasions... though I think I'll just leave it at that for the time being).

That said, the NV30 will likely outperform the R300 based upon three primary things:

1. Past history. With every generation nVidia has been able to outperform ATI.

2. The NV30 will be released on a .13 micron process. This alone should allow for greater core clock speeds, better stability/compatibility (particularly related to power supplies), and cooler operation. Also, since the NV30 was designed for the .13 micron process from the start, nVidia is in a better position for their refresh part.

3. The later release date has allowed nVidia to spend more time and money (both of which it has more of anyway) to make the NV30 even better.

Basically, there should be no doubt that the NV30 will be able to outperform the R300 when it comes to complex vertex/fragment programs: programs that use a lot of computing muscle in comparison to memory bandwidth. For similar reasons, the NV30 should also have superior anisotropic performance.
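To put that compute-versus-bandwidth point in concrete terms, here's a rough model (every number below is invented for illustration; these are not real NV30 or R300 specs):

[code]
# Rough model of compute-bound vs. bandwidth-bound shading. All figures
# are invented for illustration, not real NV30/R300 specs.
def limiting_factor(flops_per_pixel, bytes_per_pixel, core_gflops, mem_gb_s):
    alu_time = flops_per_pixel / (core_gflops * 1e9)  # seconds of math per pixel
    mem_time = bytes_per_pixel / (mem_gb_s * 1e9)     # seconds on the memory bus
    return "compute-bound" if alu_time > mem_time else "bandwidth-bound"

# Short shader, heavy texturing: the 256-bit bus is what matters.
print(limiting_factor(20, 32, core_gflops=50, mem_gb_s=19.8))   # bandwidth-bound
# Long fragment program, little memory traffic: raw core speed wins.
print(limiting_factor(500, 16, core_gflops=50, mem_gb_s=19.8))  # compute-bound
[/code]

The longer the programs get, the more a clock-speed advantage matters and the less the wider bus does.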

The only relative unknown is FSAA performance. But given nVidia's longer track record with multisampling FSAA compared to ATI's, nVidia should have little trouble in this area either.
 
I have been doing some research, and I have read that hardware T&L is much harder to use in a tiling architecture, but other than that I haven't found any reported drawbacks. Why do you think deferred rendering is such a bad idea, Chalnoth? I'm not trying to be smart, I'm just really interested.
 
Chalnoth said:
1. Past history. With every generation nVidia has been able to outperform ATI.

How do you explain the GF3 Ti500 being slower than the R8500, then? Granted, at release it was faster due to the R8500's bad drivers, but now the tables have turned and ATI has had more time to work out the driver kinks. I wouldn't rely on "past history" as much of an indication of anything.
 
Nagorak said:
Chalnoth said:
1. Past history. With every generation nVidia has been able to outperform ATI.

How do you explain the GF3 Ti500 being slower than the R8500, then? Granted, at release it was faster due to the R8500's bad drivers, but now the tables have turned and ATI has had more time to work out the driver kinks. I wouldn't rely on "past history" as much of an indication of anything.
The ATI 8500 only became faster after the GF4's release, unfortunately for ATI...
 
Also, since the NV30 was designed for the .13 micron process from the start, nVidia is in a better position for their refresh part.

They will also have less headroom for a refresh. Given what ArtX has done with the .15um process, it'll be interesting to see what they can do with .13um.

3. The later release date has allowed nVidia to spend more time and money (both of which it has more of anyway) to make the NV30 even better.

What do you think they are going to do? The design would have been frozen since they first started on the silicon layout.
 
Chalnoth said:
Also, since the NV30 was designed for the .13 micron process from the start, nVidia is in a better position for their refresh part.

I really don't know about this one. While nVidia is getting some hard-earned experience with the .13 process that will benefit them, part of the deal is that TSMC is also learning a lot, which will in turn help ATI and others that are going to use the .13 process for behemoth chips. As far as we know, a large part of nVidia's problems with the NV30 were about TSMC getting the back-end tools tuned/fixed for a 120M-transistor behemoth.
 
Johnathan256 said:
I have been doing some research, and I have read that hardware T&L is much harder to use in a tiling architecture, but other than that I haven't found any reported drawbacks.

Cough... Sega NAOMI 2... Cough... ;) (a PowerVR tiler with hardware T&L)
 