AMD: R8xx Speculation

How soon will Nvidia respond with GT300 to the upcoming ATI RV870 lineup of GPUs?

  • Within 1 or 2 weeks

    Votes: 1 0.6%
  • Within a month

    Votes: 5 3.2%
  • Within a couple of months

    Votes: 28 18.1%
  • Very late this year

    Votes: 52 33.5%
  • Not until next year

    Votes: 69 44.5%

  • Total voters
    155
  • Poll closed.
Not sure if this has been posted yet, but here it is:

http://www.computerworld.com.au/art...end_gpu_shortage_by_late_november?fp=2&fpid=1


snippet:

Dave Baumann, a senior product manager for AMD, said this week that ramping up production of any new chip is an ongoing process, and that he could not say when the 5800 series operation will be running at full capacity. However, he did say that by the end of November they're expecting a "substantial uptick" in chips coming out of the TSMC fab plant.

Baumann added that AMD is pulling some of its 5800 series GPUs from the retail add-in board market to get them to PC vendors looking to move desktops and laptops out the door in time for the holiday shopping season.

He did note that overall, the "vast majority" of 5800 series sales are generated in the retail add-in board market. Baumann said that many gamers and high-end users are expected to buy the GPUs off the shelf so they can manually upgrade systems.
 
I didn't say the design was faulty, I said the assembly was faulty. Augment (bend) the heatpipe just slightly to allow for better contact with the GPU and the temps decrease because of the better contact patch.

OK, how come 10+ OEMs all made the same mistake with faulty ASSEMBLY only on laptops containing Nvidia GPUs?

Quit dodging the question.

-Charlie
 
Your graph points to 60-80c as being the killing zone. I said "CLOSE TO 60C". Do you know what that means? It means it gets close, but DOESN'T QUITE MAKE IT. On the other hand, laptop GPUs idle around there and then go through the damn roof. And yet a simple augmentation of the heatpipe for the heatsink, giving it better contact, WILL lower both IDLE and LOAD temps. A POINT YOU STILL FREAKING REFUSE TO FREAKING ACKNOWLEDGE!

Are you really that dense? You said:

"I've yet to see a laptop with a G86 whose idle and load temps come anywhere within 10c of the eVGA card I have. It idles around 40c, under load close to 60c. Every laptop I've seen still working with G86s idles at 55-60c with load temps near 90-100c."

It is here:
http://forum.beyond3d.com/showpost.php?p=1357113&postcount=4652

Someone put your 'basic logic proof coating' on too thick.

-Charlie
 
Go back up and re-read one of my posts. I did say we have had ATI based systems in the shop for FAILED GPUs. Care to guess who made the GPU? ATI. Reason for failure? HEAT FUCKING RELATED ISSUES! Cause of heat related issues? IMPROPER FREAKING HS/P ASSEMBLY CONTACT PATCH! It does happen, just not to the massive extent as with the substrate-affected Nvidia GPUs, but it does happen. Something YOU REFUSE TO ADMIT TO as you CLAIM IT DOESN'T HAPPEN!

Charlie, your biggest problem is you have SUCH A DAMN HARD ON for anything bad concerning Nvidia that even WHEN you are right, you are still considered a nut job. The HardOCP, MaximumPC, Anand, Tom's and FiringSquad communities all view you in about the same light: a nut job who will make up stories about them to get hits.

Now, I have said Nvidia is at fault in this thread SEVERAL DAMN TIMES NOW! And if the given thermal designs allowed for such high temps, then they should take even more of the blame, BUT THAT DOESN'T MEAN THE FREAKING OEMS DESERVE A FREE FREAKING PASS FOR SHITTY QA OF PART ASSEMBLIES! And as I have stated before, a simple augmentation to the HS/P assembly has been shown to DECREASE, can you say that word or even understand its meaning, temps of G86 GPUs by AS MUCH as 20C. Not gonna take it down below the substrate thermal threshold, but lower it enough to let it live that much longer. Still, Nvidia should pay for the fuck up, but it ISN'T ALL THEIR FAULT!

And your G200 thing DOESN'T pertain to G86s. The G200 has this nice big heat spreader and a HUGE HSF assembly that does about 1000 times better a job of moving heat away from the GPU. Whereas the laptop usually has a flattened-out heatpipe and cooling fins anywhere from 4 to 12" away from the GPU itself, which has NO HEAT SPREADER ON IT.

Wow, you are truly delusional, and you don't get the basics about the problem at all. It is not an overheating problem, it is a thermal cycling and materials problem.

That said, I will not take your one example of an ATI problem as being an industry plague like the Nvidia problem.

Why do 10+ OEMs have design, thermal, assembly or whatever problems ONLY on Nvidia GPUs across 100+ models?

Don't dodge the question.

-Charlie
 
Frankly, if real power consumption is anything like the HD 4870 X2's, it would be a lot more honest to just say the TDP is 375W and use two 8-pin plugs,...
He started it...then the usual spamage from nargreencissistic land and the narcissistic "journo"...:cry:
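For what it's worth, the 375W figure follows straight from the PCI Express power budget: 75W from the x16 slot plus 150W per 8-pin auxiliary connector. A quick sanity check (connector allowances per the PCIe CEM spec; the function name is just for illustration):

```python
# PCIe board power budget: slot allowance plus auxiliary connectors.
SLOT_W = 75        # PCIe x16 slot
SIX_PIN_W = 75     # 6-pin auxiliary connector
EIGHT_PIN_W = 150  # 8-pin auxiliary connector

def board_power_limit(six_pins=0, eight_pins=0):
    """Maximum in-spec power draw (watts) for a given connector loadout."""
    return SLOT_W + six_pins * SIX_PIN_W + eight_pins * EIGHT_PIN_W

print(board_power_limit(eight_pins=2))              # 375 -- two 8-pin plugs
print(board_power_limit(six_pins=1, eight_pins=1))  # 300 -- 6-pin + 8-pin combo
```

So two 8-pin plugs cap out at exactly 375W, while the HD 4870 X2's 6-pin + 8-pin combination tops out at 300W.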
 
Are you really that dense? You said:

"I've yet to see a laptop with a G86 whose idle and load temps come anywhere within 10c of the eVGA card I have. It idles around 40c, under load close to 60c. Every laptop I've seen still working with G86s idles at 55-60c with load temps near 90-100c."

It is here:
http://forum.beyond3d.com/showpost.php?p=1357113&postcount=4652

Someone put your 'basic logic proof coating' on too thick.

-Charlie

I'll help you, Charlie: "under load close to 60c."

Can you see it now, the word close? Does close mean almost, or does it mean something hits or exceeds a value? That is for a desktop GPU that has 100% HSF assembly contact on the GPU. EVERY laptop I have seen with G86s in it has the GPU soldered down to the board, not on an add-in card, and the HS/P assembly DOES NOT make 100% contact with the GPU. And I even stated in an earlier post that we have had ATI based machines where the ATI GPU has failed/died due to heat, and they also didn't have 100% HS/P assembly contact with the tops of the GPU.

Now again, I have never denied Nvidia is at fault, but what you are refusing to acknowledge is that the OEMs have some culpability in this whole mess due to poor QA. That's Dell, HP, eMachines, Acer, Asus, Comcraq, Gateway and anyone else selling laptops with those chips in them. Yesterday we got a 2yr old Lenovo/IBM in with the G86 in it. Again the same cool-to-the-touch pad thing for thermal transfer, and its idle and load temps were about 10-15c cooler than all the others, but still above the safe region for the substrate (this is purely Nvidia's fault, again not denying it). But to say the OEMs don't have some responsibility in this is nuts, when a simple augmentation to the HS/P assembly CAN and DOES lower temps and could possibly have led to an improved life span for the GPU. The GPU would still have died, but I'm willing to bet nowhere near as soon as they did.
 
Wow, you are truly delusional, and you don't get the basics about the problem at all. It is not an overheating problem, it is a thermal cycling and materials problem.

That said, I will not take your one example of an ATI problem as being an industry plague like the Nvidia problem.

Why do 10+ OEMs have design, thermal, assembly or whatever problems ONLY on Nvidia GPUs across 100+ models?

Don't dodge the question.

-Charlie

I have never dodged the question. I have said it is Nvidia's fault several times now; you either can't read, failed reading comprehension, or keep glancing over it. But I am contesting that they are the sole reason behind it all. Do you not agree that improper HS/F mountings CAN and WILL reduce the life of a product? If so, then how is it that OEMs who don't ensure proper contact patches on the GPU, which can and will allow heat to build up and thus reduce a part's life, are not to blame as well? The GPU was made with a substrate that couldn't handle high temps to begin with; add in the fact that OEMs have HS/P assemblies that don't make 100% contact and allow for even higher temps, and you end up with a part/product that is going to fail, and fail in droves, much sooner and faster than expected.
 
Yeah, it has to be 64-bit to reach 512MB, which is the spec of the HD 4350. Unless there are some 2Gb GDDR5 chips out there, which I haven't found.

Jawed
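Jawed's arithmetic can be sketched out. Standard GDDR5 devices present a 32-bit interface (ignoring 16-bit clamshell mode here), so a 64-bit bus means two chips, and 512MB across two chips means 2Gb per chip. A minimal sketch, assuming x32 devices throughout (function and variable names are illustrative only):

```python
# Per-chip density needed to reach a given memory size on a given bus,
# assuming standard x32 GDDR5 devices (no 16-bit clamshell mode).
BITS_PER_CHIP = 32

def required_chip_density_gbit(bus_width_bits, total_mb):
    """Gigabits per chip needed to hit total_mb over bus_width_bits."""
    chips = bus_width_bits // BITS_PER_CHIP  # one x32 chip per 32 bus bits
    total_gbit = total_mb * 8 / 1024         # MB -> gigabits
    return total_gbit / chips

print(required_chip_density_gbit(64, 512))   # 2.0 -- needs 2Gb chips
print(required_chip_density_gbit(128, 512))  # 1.0 -- common 1Gb chips suffice
```

Which is exactly the point: 512MB on a 64-bit bus forces 2Gb devices, whereas a 128-bit bus gets there with ordinary 1Gb chips.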
 
That's interesting. 3 chips, but 2 of them are going to have (almost) the same memory bandwidth (unless one has a 96-bit interface...). I thought it wouldn't make sense, but I guess I've been proven wrong...
 
That's interesting. 3 chips, but 2 of them are going to have (almost) the same memory bandwidth (unless one has a 96-bit interface...). I thought it wouldn't make sense, but I guess I've been proven wrong...

How did you come up with that conclusion?
3 chips: 256-bit, 128-bit and 64-bit?
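The "almost the same bandwidth" scenario is easy to check, since peak bandwidth is just bus width divided by 8 times the effective data rate. A hedged sketch with purely illustrative clocks (none of these are confirmed RV8xx specs) showing how a narrower bus with faster GDDR5 can land right on top of a wider, slower one:

```python
def bandwidth_gbps(bus_width_bits, effective_mhz):
    """Peak memory bandwidth in GB/s for a bus at an effective data rate."""
    return bus_width_bits / 8 * effective_mhz / 1000

# Illustrative clocks only -- not confirmed specs:
print(bandwidth_gbps(128, 4000))  # 64.0 GB/s: 128-bit GDDR5 at 4 Gbps
print(bandwidth_gbps(256, 2000))  # 64.0 GB/s: 256-bit GDDR3 at 2 Gbps
print(bandwidth_gbps(64, 4000))   # 32.0 GB/s: 64-bit GDDR5 at 4 Gbps
```

So two of the three buses can indeed end up with (almost) the same bandwidth, depending entirely on which memory type each chip pairs with.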
 