ION going to Asus too, next Fall:
http://www.fudzilla.com/content/view/13895/1/
I guess Asus did "touch" it...
Charlie sells clicks, much like Howard Stern does; he's IT's shock jock. Don't call him a journalist then, call him a reporter. If nVidia wanted to get rid of him they'd sue him for libel or defamation. One side says he's not worth it, the other side says nVidia couldn't win. He'll get his stuff out there anyway, and his stories are far more entertaining than Digitimes.
If I were to suggest something new for Charlie to sink his teeth in, it would be how Intel and Microsoft are "negotiating" with the industry to prevent ARM/linux netbooks from being produced.
So what were NVidia's tears over the price of Atom from Intel all about, with Jen practically crying in public whenever the subject has come up these last ~5 months? Do you think that was anything but emotional posturing? Good marketing to paint yourself as the underdog fighting for consumers' rights and choice.
Hehe, I like this scenario. One catch though: does that mean the current Atom would get killed instantly? And if not, what makes you think Pineview will automatically win in higher-end sockets against Atom+Ion, or heck Core2+Ion2/Nano+Ion2? And heck, last I heard Pineview was still a MCM...

I think Intel will cave totally on this, apologize, and set prices basically where they should be without any hint of bundling.
This will be 3 days before Pineview ships in volume. Game over, Intel wins because they are much smarter than Nvidia.
Well, we have 2 other people that used to work at the Inq saying the complete opposite of what Charlie just said about the gt300 :smile:. But maybe it's the Inq itself that has this "vendetta", since if we look back Fuad was like Charlie when he was there; I don't remember if Theo was?
Hehe, I like this scenario. One catch though: does that mean the current Atom would get killed instantly? And if not, what makes you think Pineview will automatically win in higher-end sockets against Atom+Ion, or heck Core2+Ion2/Nano+Ion2? And heck, last I heard Pineview was still a MCM...
I still suspect that the netbook endgame is x86 getting badly commoditized and Intel getting hurt as a result of it, TBH. But that's just a hunch, and I could be wrong.
Yet you try to suggest that what he's posted in this thread on that subject represents information that isn't in his articles.
What is JHH's point? Have you noticed how he doesn't actually have one?
The signs are that NVidia knew these products were dicing with death, because the materials science says so. It's fucking obvious from the materials science, in fact. That's Charlie's point. That means NVidia knowingly sold faulty products hoping that they'd fall out of warranty before the shit hit the fan. The only companies that'd give a toss about that would be the insurance companies that sell extended warranties.
Well, we have 2 other people that used to work at the Inq saying the complete opposite of what Charlie just said about the gt300 :smile:. But maybe it's the Inq itself that has this "vendetta", since if we look back Fuad was like Charlie when he was there; I don't remember if Theo was?
Yes, but one of us has the specs of the cards. Also, one of us realizes that NV is at a power wall, reticle size wall, and is moving in the wrong direction (generalization) for graphics performance. One of us actually gets the science behind the chips, and has a background of chemistry, chemical engineering, physics and CSCI (plus a lot of biology and genetics). That said, Fudo is really good at what he does, I won't comment on Theo.
When the specs for both cards come out, you will see. If you think about it, NV at 500mm^2 has about the biggest card you can reasonably make and sell profitably in the price bands they are aiming at.
With a shrink, they will have about 2x the transistor count, so about 2x the shaders. That means optimally 2x the performance, plus whatever efficiencies they can squeak out. Let's say 2.5x performance.
Take some out for inefficient use of area to support GPGPU, and then a bit more to support DX11, and let's just call it back at 2x performance for a 500mm^2 die.
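As a sanity check on that arithmetic, a quick back-of-the-envelope in Python (every factor here is my own illustrative guess, not anything from the post):

```python
# Back-of-the-envelope version of the scaling argument above.
# Every factor here is an illustrative assumption, not a measured number.

transistor_scaling = 2.0    # ~2x transistors from the process shrink
efficiency_gain    = 1.25   # hypothetical extra efficiency squeezed out
gpgpu_area_tax     = 0.85   # assumed fraction of area still doing graphics after GPGPU logic
dx11_area_tax      = 0.95   # assumed fraction left after DX11 plumbing

optimal = transistor_scaling * efficiency_gain          # ~2.5x best case
realistic = optimal * gpgpu_area_tax * dx11_area_tax    # area taxes pull it back toward ~2x

print(f"optimal:   {optimal:.2f}x")    # 2.50x
print(f"realistic: {realistic:.2f}x")  # ~2.02x
```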
Then you are staring down a power wall. If 40nm saves you 25% power, you can, very simplistically speaking, add 25% more transistors OR bump clock by a bit, but not both. If you double transistor count, you are looking at significantly lowering clock or getting into asbestos mining.
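To make that trade-off explicit, here's a minimal sketch using the usual simplification that dynamic power scales with transistor count times clock. It ignores voltage and leakage, which matter a lot in practice, and the 25% figure is just the one quoted above:

```python
# Simplistic power model: relative power ~ transistor_count * clock * per-transistor power.
# Ignores voltage scaling and leakage entirely; the 25% process saving is the hedged
# figure from the post above, not a datasheet number.

def relative_power(transistors, clock, per_transistor=1.0):
    return transistors * clock * per_transistor

baseline = relative_power(1.0, 1.0)   # old process, old design
per_t_40nm = 0.75                     # assume 40nm cuts per-transistor power by 25%

print(f"baseline: {baseline:.2f}")
print(relative_power(1.25, 1.0, per_t_40nm))       # +25% transistors, same clock -> ~0.94
print(relative_power(1.0, 1.25, per_t_40nm))       # same transistors, +25% clock -> ~0.94
print(relative_power(2.0, 1.0, per_t_40nm))        # 2x transistors, same clock   -> 1.50
print(relative_power(2.0, 1.0 / 1.5, per_t_40nm))  # clock must drop ~33% to land back at ~1.0
```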
If NV doubles the transistor count and only keeps the clock the same, they are in deep trouble. I think 2x performance will be _VERY_ hard to hit, very hard. The ways to up that are mostly closed to them, and architecturally, they aimed wrong.
ATI on the other hand can effectively add in 4x the transistors should they need, but 2x is more than enough to keep pace, so they will be about 250mm^2 for double performance. Power is more problematic, but if you need to throw transistors at it to control power/leakage better, ATI can do so much more readily than NV.
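For what it's worth, the area half of that claim roughly checks out with ideal shrink math (RV770 is the commonly quoted ~256 mm^2 at 55nm; a perfect 55nm-to-40nm shrink scales area by (40/55)^2, which real designs never quite hit since I/O and analog don't shrink):

```python
# Ideal-shrink estimate: area needed for 2x the RV770 transistor budget at 40nm.
# 256 mm^2 is the commonly quoted RV770 die size; the scaling factor is best-case only.

rv770_area_mm2 = 256
ideal_area_scaling = (40 / 55) ** 2   # ~0.53x area per transistor, assuming a perfect shrink

area_for_2x = 2 * rv770_area_mm2 * ideal_area_scaling
print(f"~{area_for_2x:.0f} mm^2")     # ~271 mm^2, in the ballpark of "about 250"
```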
ATI's power budget takes GDDR5 into account, NV's doesn't, so another black mark for NV. How much do you think the rumored 512b GDDR5 will consume?
The next gen is going to be a clean kill for ATI, but Nvidia will kick ass in the "convert your video with this widget" benchmarks. That is something they can be proud of; it uses physics, CUDA, and pixie dust. Hell, it probably sells tens, maybe hundreds of GPUs.
Q3/Q4 and likely Q1 are going to be very tough for NV.
Then again, I said that a while ago.
http://www.theinquirer.net/inquirer/news/1137331/a-look-nvidia-gt300-architecture
-Charlie
Theo's site is practically paid for by nV (the Palit challenge, headlines like "Nvidia’s $50 card destroys ATI’s $500 one"), so I wouldn't expect anything but green news from BSonVnews. He's also getting fed a lot of bogus stuff regarding ATI (the imminent launch of the Radeon 5600 back in January).
Fudo is much the same: two weeks before the 4890 launched he wrote a piece about how the 4890 didn't have higher clock speeds, just different shaders, etc. But by making three posts by different people on things that are true, they drown out their bogus rumours.
I do like Fudzilla though; it has a lot of other news, but a lot of it feels like recaptioned blurbs from other sites.
That is true, but I was more interested in the way they posted at the Inq vs. now. It was all doom and gloom before, but now they are much more neutral in their approach.
40nm saving 25% power doesn't automatically translate into 25% more transistors either, or vice versa; it's all about the engineering of the part. Again, the 512-bit bus doesn't have anything to do with power consumption; although more die area does increase power usage in general, nV found ways around this in the gt200. Even if they use GDDR3 memory (which is probably unlikely for gt300), they still have a power advantage.
If this is what you are basing your stories on, I suggest you talk to some of the engineers here because what you're saying is not correct. You are basing it on conjecture and not actuality.
Not the sharpest bowling ball of the bunch, are you? A 512b bus takes more power than a 256b one. GDDR5 takes more power than GDDR3 for similar bit widths.
As for transistor -> power, you are right, but as a general rule, it is a good starting point. In very parallel architectures, it tends to work out fairly well as an estimator. Less so for monolithic cores.
Are you suggesting that adding transistors linearly will not increase power fairly linearly? Are you suggesting that for similar bit widths, GDDR5 does not take more power than GDDR3? Look at the numbers for the 4850 vs 4870, they are different how? OC or downclock them to the same frequency, and the difference is what again?
-Charlie
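A crude way to run the 4850-vs-4870 comparison Charlie points at above, since both boards use the same RV770 die (the TDPs and clocks are the commonly quoted figures, and linear clock scaling is obviously a simplification):

```python
# Rough split of the HD 4870's extra board power into "higher core clock" vs
# "GDDR5 and everything else". Approximate public figures; treat the output as
# an indicator only, since voltage differences are ignored.

hd4850 = {"tdp_w": 110, "core_mhz": 625}   # GDDR3 board
hd4870 = {"tdp_w": 160, "core_mhz": 750}   # GDDR5 board, same RV770 die

scaled_4850 = hd4850["tdp_w"] * hd4870["core_mhz"] / hd4850["core_mhz"]
leftover = hd4870["tdp_w"] - scaled_4850

print(f"4850 scaled to 750 MHz: ~{scaled_4850:.0f} W")          # ~132 W
print(f"left for GDDR5/board differences: ~{leftover:.0f} W")   # ~28 W
```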
Not the sharpest bowling ball of the bunch, are you? A 512b bus takes more power than a 256b one.

What takes the most power: 1x512 or (2x256+Bridge)? I think that should be pretty obvious.