You mean texture shimmering coming back on the NV side?
Why don't you test a G80 yourself in various scenarios with and without various optimisations in the driver CP and tell me how much you gain?
If R580 had 24 ROPs and 24 texture units, it would help significantly.
You mean that the 3:1 ALU:ROP ratio was imbalanced, or that 2:1 is balanced? Either way, it would make no difference to a card in which they kept the same ALU to ROP/texture unit balance.
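For concreteness, here's a back-of-the-envelope sketch of what those ratios mean. The R580 figures (48 pixel ALUs, 16 ROPs, 16 texture units at 650MHz) are the well-known ones; the 24-ROP/24-TMU configuration is the hypothetical from above:

```python
# Back-of-the-envelope ratio check. R580 figures are public;
# the 24-ROP/24-TMU variant is purely hypothetical.
def balance(alus, rops, tmus, clock_mhz):
    return {
        "alu:rop": alus / rops,
        "alu:tex": alus / tmus,
        "pixel fill (Gpix/s)": rops * clock_mhz / 1000,
        "texel fill (Gtex/s)": tmus * clock_mhz / 1000,
    }

print(balance(48, 16, 16, 650))  # shipping R580: 3:1
print(balance(48, 24, 24, 650))  # hypothetical:  2:1
```

Which ratio counts as "balanced" depends entirely on the workload: a shader-heavy title stresses the ALU side, a fill- and filtering-heavy one the ROP/TMU side.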
Desktop quad-core isn't due until about a year after that. So the only way enthusiasts are going to get their hands on quad-core AMD is by buying some variant of Barcelona, as far as I can tell.
Anyway, I don't want this to happen. I just want to know the techie stuff.
Jawed
ATI actually scheduled an Editors' Day and then cancelled flight and hotel bookings that had already been made. That suggests two things:
1) They found out something bad.
2) Whatever it was they found out took them completely by surprise.
Whatever it was, just a couple of days ago they still believed they were going to launch on time; then something happened that forced them suddenly to change their minds. This leads me to believe that it is not simply a question of yields and bins - unless it's a really bad problem with yields and bins - because a problem like that is something they would have seen coming a week or two earlier. I think it has to be something more significant than that: an actual bug in the hardware that needs a respin, a major problem with the card or cooler design, or (less likely) a really intractable problem with the drivers that causes crashes and will take several weeks to fix.
I simply don't buy the "strategic" line; I think that's desperate backside-covering by the PR team. It's perfectly possible that AMD might decide to hold back the launch to coincide with middle- and low-range parts, or even to coincide with the launch of the K10 CPU; but that's not something they would suddenly change their minds about at the last minute!
This cancellation also suggests that we are not talking about a two-week delay. They wouldn't have cancelled everybody's flights over that - they would have held the meeting as planned and announced a slightly later launch. I think this points to a delay of at least a month.
My guess is that they found out that the 8300/8600 are superior to their 2300/2600 offerings and decided to cover that fact by launching the 2900/2600/2300 together, so that most headlines would point to the 2900 beating the 8800, while the other results get skipped over in the headlines and in the minds of most customers.
We told you earlier about NVIDIA skipping G81 or GeForce 8900 to go straight for something more powerful. It seems the INQ has the insider scoop that NVIDIA is mulling the successor to G80, but would still like to produce a GX2 if they manage to solve the thermal and power issues. After that it will be time for the G9x flavour, codenamed G90. NVIDIA is planning to produce 65nm G9x chips based on the G80 architecture with much higher frequencies, powered by GDDR4. If NVIDIA makes G80 smaller there may also be just enough space for a 512Mbit controller. If all goes as planned, the chip is scheduled for the second half of 2007.
3) The annual TSMC Frisbee fight got out of hand.
Why do I get the feeling the Inquirer is printing everything they read here, even if it's nothing more than wild guesses?
Even the 512-bit thing (note they wrote "512Mbit" bus... they wish) seems ridiculous.
When you have a large chip, it's easier to have a wide memory bus, not more difficult.
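A rough sketch of the pad-limit argument, with every number below an assumption for illustration rather than a real G80/R600 figure: I/O pads sit along the die edge, so a bigger die has more perimeter to put them on.

```python
import math

# Toy pad-limit estimate. Pad pitch and pads-per-channel are
# illustrative assumptions, not real process or GDDR figures.
def pads_available(die_area_mm2, pad_pitch_um=60):
    side_mm = math.sqrt(die_area_mm2)        # assume a square die
    perimeter_mm = 4 * side_mm
    return perimeter_mm * 1000 / pad_pitch_um

def pads_needed(bus_bits, pads_per_32bit_channel=40):
    return bus_bits / 32 * pads_per_32bit_channel

# Memory I/O also competes with power, PCIe and display pads,
# so the extra headroom a big die buys matters more than it looks.
for area_mm2 in (200, 480):                  # small die vs large die
    print(f"{area_mm2} mm^2: {pads_available(area_mm2):.0f} pads on the "
          f"edge vs {pads_needed(512):.0f} for a 512-bit bus alone")
```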
Well, the problem with that for AMD/ATI is that there is no reason to expect that nVidia won't be able to pull another 20%-30% performance out of the G8x drivers.
My personal guess is that over the next 1-2 months nVidia is going to be working hard on Vista compatibility and performance, and should bring those up to par in that time; that apparently leaves them another couple of months to get further performance tweaks in and spoil ATI/AMD's launch.
If they do, that doesn't say much for their current drivers. NV30 took them about 8 months to get its shader compiler going, while ATI had theirs almost right out of the gate. They also admitted at the release of NV40 that they had really borked the NV30 shader instruction set, making it exceedingly hard to develop a good compiler. Note that NV40 had its shader compiler up and running well right out of the gate, too.
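For a sense of why that compiler work is hard: NV30-class hardware famously lost throughput as a shader's live FP32 register count grew, so the compiler constantly had to trade recomputation against register pressure. A toy model of that trade-off (the penalty curve below is invented for illustration, not the real NV30 numbers):

```python
# Toy model: throughput drops as live FP32 registers grow.
# The exact steps below are made up for illustration only.
def relative_throughput(live_fp32_regs):
    if live_fp32_regs <= 2:
        return 1.0
    if live_fp32_regs <= 4:
        return 0.5
    return 0.25

# A compiler choosing between caching a subexpression in a register
# (fewer ALU ops, more live registers) and recomputing it (more ALU
# ops, fewer registers) has to weigh both against this curve.
for regs in (2, 3, 5):
    print(f"{regs} live regs -> {relative_throughput(regs):.2f}x throughput")
```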
Considering the general architectural layout, I suspect it could have ended up with 6 quads instead of 4 in the end. All fine and dandy, but I wouldn't want to imagine the transistor count of such a beast, nor its power consumption.
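If you want to put a hypothetical number on that: suppose some fraction f of the transistor budget scales with quad count (both figures below are placeholders, not numbers for any real chip):

```python
# Hypothetical scaling: if a fraction f of the budget scales with
# the quads, 4 -> 6 quads multiplies the total by (1 - f) + f*6/4.
base_transistors_m = 400   # placeholder base count, in millions
f = 0.5                    # assumed share tied to the quads
scaled_m = base_transistors_m * ((1 - f) + f * 6 / 4)
print(f"{scaled_m:.0f}M vs {base_transistors_m}M transistors")  # 500M vs 400M
```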