NVIDIA Fermi: Architecture discussion

Really, where did I contradict myself? Please point that out; you can't, since I didn't ;).
Uh, you said Nvidia lies, then said they never lie.

No matter, your posts are becoming more and more bizarre and convoluted from what I've seen. And yes Fermi is horribly late, it looks more and more likely that actual supply will be late April if not later. It would not be so bad if AMD didn't have their ducks in a row, but by then AMD will have been selling a top to bottom DX11 lineup for months.
 
Uh, you said Nvidia lies, then said they never lie.

No matter, your posts are becoming more and more bizarre and convoluted from what I've seen. And yes Fermi is horribly late, it looks more and more likely that actual supply will be late April if not later. It would not be so bad if AMD didn't have their ducks in a row, but by then AMD will have been selling a top to bottom DX11 lineup for months.


They never lie about performance before product releases; I have never seen them say anything prior to a launch suggesting their performance is not what they say it is.

April? Heh, come on, how do you guess April? It is a guess, right? No facts to back that up.
 
Really, where did I contradict myself? Please point that out; you can't, since I didn't ;).

From what I've read to date, that would increasingly seem a fruitless, pointless exercise. Dogged repetition in the face of reasoned and substantial points does not an agreeable debate make.

You may walk away considering you've 'won' as you ended up the 'last man standing' - no one left willing to continue butting their heads against the wall, but what's the point in 'winning' like that? Kind of hollow, no?

Point of curiosity ... considering your demonstrated animosity toward Charlie, why do you have that quote by Charlie, and a link to it, at the bottom of your posts, one that proves Charlie was exactly right about Nvidia and his detractors wrong?
 
Were they talking about performance? I don't remember them stating the FX series is faster than the 9700, or anything close to that. They talked about cinematic rendering and whatnot.

Ugh, from what I remember they stated that multiple times.

A quick google turned up the following interview here.
[NH]: We'd like to know what you think of ATi snatching the "performance crown", perhaps a bit unexpected, from you?
[nV]: There's no doubt that the 9700 Pro is a fast videocard. However ATi will not stay at the throne for very long. We are confident that the GeForce FX will bring the performance leadership crown back to nVidia. Our competition will all have the same challenges we have faced with the move to 0.13 micron technology* so we feel we have made the right choice making this move now.
*(From what we at NordicHardware have gathered the successor to 9700 Pro, R350, will be built on 0.15 micron technology.)

Do you work for NVIDIA? I know there are a few people on this board who are currently or have in the past worked for ATi/NVIDIA.
 
so 5 days until CES. until then no gtx 380 running 3d vision on a 120Hz 1080p monitor.:D
The posts are not abusive and are related to the topic.
The opinions are strong and highly polar, but that isn't bannable.

They appear to echo a monoculture of articles and analysis, but that's not bannable either.

eyefinity and bulldozer are not related to the topic.

...and this is right out of the rules/privileges. Sensationalism and fanaticism are obnoxious.
Beyond3D management prefers to keep written rules for forum behavior to a minimum. There are, however, certain broad guidelines for most forums: Don't post scans of current or recent magazines; don't discuss means, methods, or personal occurrences of warez or other IP theft; don't post pr0n or other blatant "NSFW" material on our forums; don't post in an aggresssive and/or obnoxious manner; and please DO observe "fair use" or "fair dealing" copyright restrictions regarding the posting of portions of articles from elsewhere on our forums.
anyone else notice the typo?
 
Where is OpenCL 2.0?

http://sa09.idav.ucdavis.edu/docs/SA09-OpenCLOverview.pdf

Page 8 says 1.1 is coming within 6 months and 2.0 is 2012 :oops: 2 years :oops:

Hopefully 1.1 catches up with D3D11-CS. It seems to me that Fermi/CUDA 3 has about an 18-month lead on OpenCL 2.0.

I think the delay for OCL 2 is because AMD's new architecture (r9xx?) is launching in late 2010, early 2011. LRB and fermi have already moved towards introducing r/w caches, unifying mem-spaces.

AMD is the laggard in this regard and whatever the ocl 2 spec is being written, I bet it is being designed into r9xx as we speak.

If you look at OGL's version numbering (also managed by Khronos), the pre-decimal number changes only at major bumps, while post-decimal numbers change at relatively minor bumps.
 
I don't see anything in Fermi which requires more than extensions, as for Larrabee ... if it ain't shipping it has no relevance.
 
Yes, you have a point there. Umm..., function pointers and recursion are a big deal.

And that merits a major version bump. OCL 1.0 is meant for G80 after all.
 
Were they talking about performance? I don't remember them stating the FX series is faster than the 9700, or anything close to that. They talked about cinematic rendering and whatnot.

http://www.anandtech.com/showdoc.aspx?i=1749&p=7

Interestingly enough, NVIDIA did not make many performance numbers available to us prior to the GeForce FX announcement. In fact, the majority of the performance numbers won't be revealed until after this article is published. Right now NVIDIA is claiming a 30 - 50% performance advantage over the Radeon 9700 Pro across the board. We will be able to put those claims to the test as soon as we have a card in hand.

Yep, nVidia NEVER lies and NEVER talks shit... oh well.

The fact that we've not even heard a vague statement about performance can mean only one thing: Fermi will be a disaster for Nvidia ... they can't get enough performance out of a lousy, hot, expensive design. End of story.
 
I've chopped a whole pile of nonsense out of this thread in the last few pages. Please keep it on-topic and free of personal attacks.
 
Both the 9800GX2 and the GTX295, the dual-chip GPUs of past years, used the largest GPUs (though usually downclocked), not smaller-die derivatives. Probably because anyone paying the premium for dual GPU is interested in maximum performance, even at a premium price.

That said, I admit I am impressed by the performance/watt of the 40nm GT240. Power-wise, you could make a single-card quad-GPU board with its low wattage. That's not a practical SKU because of the crowded PCB, and you'd still have to share PCIe and your onboard RAM would likely be only 512MB per core, but for something that scales to multi-GPU and doesn't need the memory size or PCIe bandwidth, it'd be great. Such apps might be something like brute-force hash or code cracking, or number-theory factoring/trial division.

Well, consider that if everything had gone according to plan, GF104 would have shown up 6 months after GF100. It will be less now, and they talked about how flexible the Fermi design is, as they could remove GPGPU parts, for example.

If it had a 256-bit bus and 256 SPs, a dual version would have 512 SPs and 512-bit in total, and if it clocks higher than the single-GPU GF100 (which would not surprise me) it could come close to the 5970.
 
Interesting bit I came across this morning in this article:

Thermaltake Launches Element V Nvidia Edition Case for Fermi
http://benchmarkreviews.com/index.php?option=com_content&task=view&id=9337&Itemid=47

"The proprietary "air duct" system brings cool and fresh air directly from the outside of the chassis and accelerates it to graphic card's intake to increase heat displacement and achieve optimal cooling efficiency. Without Nvidia SLI certified chassis, system powered by the next generation of high-performance graphic cards may not be able to operate at their highest setting due to inadequate cooling."
 
GF100 is not a second nv30. :LOL:

Why the laugh?

Can you then, please, show us that GF100 is not, in any way, shape or form, a delayed and underperforming product like NV30 was?

I believe and hope it will not be, but where is the reliable fact to the contrary?
 
Interesting bit I came across this morning in this article:

Thermaltake Launches Element V Nvidia Edition Case for Fermi
http://benchmarkreviews.com/index.php?option=com_content&task=view&id=9337&Itemid=47

"The proprietary "air duct" system brings cool and fresh air directly from the outside of the chassis and accelerates it to graphic card's intake to increase heat displacement and achieve optimal cooling efficiency. Without Nvidia SLI certified chassis, system powered by the next generation of high-performance graphic cards may not be able to operate at their highest setting due to inadequate cooling."
Marketing BS. Of course you need good cooling in the chassis when you put 3 or 4 GPUs in it. High-end users will know that.
 
Interesting bit I came across this morning in this article:

Thermaltake Launches Element V Nvidia Edition Case for Fermi
http://benchmarkreviews.com/index.php?option=com_content&task=view&id=9337&Itemid=47

"The proprietary "air duct" system brings cool and fresh air directly from the outside of the chassis and accelerates it to graphic card's intake to increase heat displacement and achieve optimal cooling efficiency. Without Nvidia SLI certified chassis, system powered by the next generation of high-performance graphic cards may not be able to operate at their highest setting due to inadequate cooling."

Same link:

"Through collaboration on the engineering level with world's leading gaming chassis provider, Thermaltake Element V Nvidia Edition is capable of providing the best operating environment for Nvidia's next generation of enthusiasts graphic cards based on FERMI architecture running on 3-way SLI or Quad SLI."

Why the laugh?

Can you then, please, show us that GF100 is not, in any way, shape or form, a delayed and underperforming product like NV30 was?

I believe and hope it will not be, but where is the reliable fact to the contrary?

Because NV30 had a pretty dreadful package of several wrong design decisions, maybe? Given that way too little, or rather next to nothing, is known about the architecture's 3D transistors, there's no chance any such parallel can be drawn until more data is available about GF100/3D.
 
Nowhere. I can speak only for myself, but I had been following your site for about a year before my first post. That I have more time to actually post messages rather than just read them right now might have something to do with having a few days off over the holidays. Anyway, I think your site is the most competent about graphics cards of all that I have found on the internet. So if you're not interested in new posters and prefer to remain a restricted club, I will behave myself and stop posting. Truly, when people call for posters to be banned for no reason whatsoever with their first post... I have no words for that.

Take it easy - nobody called for banning on first post, but, since you've been lurking so long, B3D has always had certain rules that are strictly enforced and some of the new posters were violating them. Without such strict enforcement, this wouldn't be, as you say, "the most competent" site about graphics. It would be overrun by noise as this thread was before it got cleaned up. I can honestly say B3D has always welcomed polite and sincere contributions and inquiries so long as they don't turn to insults. For that we have the rpsc forum (which you have to earn your way into) and, alas, the console forum :)

Welcome.
 