NVIDIA GF100 & Friends speculation

My thoughts were: if you have two units, then all of a sudden you need extra logic to dispatch tris between them, buffer/route the results, etc. = more complexity.
There is complexity, but is there expensive complexity? I still fail to see the big deal.
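To put it in concrete terms, the "extra logic" amounts to a distributor plus a reorder buffer. Here's a toy sketch in C (purely my own illustration, not NVIDIA's actual design): triangles are handed out round-robin to two units, and a small FIFO makes sure they still retire in submission order:

    #include <stdio.h>

    #define SLOTS 8
    typedef struct { int id; } Tri;   /* stand-in for a triangle */

    static Tri fifo[SLOTS];
    static int done[SLOTS], head, tail;

    /* stub: pretend the given setup unit finished the triangle in 'slot' */
    static void setup_unit(int unit, int slot)
    {
        (void)unit;
        done[slot] = 1;
    }

    /* drain finished triangles strictly in submission order */
    static void retire(void)
    {
        while (head < tail && done[head % SLOTS]) {
            printf("retire tri %d\n", fifo[head % SLOTS].id);
            done[head % SLOTS] = 0;
            head++;
        }
    }

    int main(void)
    {
        for (int i = 0; i < 16; i++) {
            int slot = tail % SLOTS;
            fifo[slot] = (Tri){ i };
            setup_unit(i & 1, slot);  /* even tris -> unit 0, odd -> unit 1 */
            tail++;
            retire();
        }
        return 0;
    }

At least at this toy level, the plumbing looks cheap next to the cost of the second setup unit itself.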
 
Too much hand waving. Not that hard to port to DC/OCL? A trivial port might not be that hard, but an optimized port? If it were so easy, many people would have already done it.
It's somewhat hard to port proprietary code; I don't know why, but everything seems to indicate this is the reason why no one does it :rolleyes:

NV acquired a renderer not that long ago, and that's all they show with this; performance by itself doesn't mean anything without a basis for comparison.

Porting CUDA to OCL is quite easy, and it's not that hard to port to DC either, but, seriously, why would NV do that since they own both CUDA and the renderer?
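Just to show how close the two are at the kernel level, here's a minimal saxpy example of my own (nothing to do with NV's renderer, obviously):

    /* CUDA C version */
    __global__ void saxpy(int n, float a, const float *x, float *y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    /* OpenCL C version (lives in a .cl file): same body, only the
       qualifiers and the thread-index query differ */
    __kernel void saxpy(int n, float a, __global const float *x,
                        __global float *y)
    {
        int i = get_global_id(0);
        if (i < n) y[i] = a * x[i] + y[i];
    }

The kernel bodies are line-for-line the same; the real effort in an optimized port is the host-side plumbing (contexts, queues, buffer management) and re-tuning for the target hardware.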
 
It's always the same: they show proprietary stuff again and again, so that we can't compare it to anything else (except perhaps very bad CPU implementations, such as with PhysX).

Why in the world would Nvidia want to facilitate comparisons to the otherwise non-existent competition? I mean really, what moronic company would do that? It's not Nvidia's job to develop an RT algorithm that runs on AMD hardware; that much should be obvious.
 
I haven't paid much attention to HAWX. Why do you think it's heavily setup limited? I would expect a flight game to have a lot of big triangles and "empty" screen space.
-When you increase the resolution, the framerate changes much less than expected
-It's not CPU limited because SLI/CF scale well
-The difference between the 5870 and 4890 is smaller than normal
-The 4890 is notably faster than the GTX 285

(I don't mean 100% limited, obviously. Just heavily limited on the fastest GPUs, maybe 40% at 1920x1200, though I have to crunch some numbers to make a proper guess. A 5670 has 1/4 the rendering power but almost the same setup speed as a 5870, so it won't be as setup limited.)
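For reference, this is the kind of number crunching I mean, as a small C sketch. The assumptions are mine: frame time = setup time + fill time, only the fill part scales with pixel count, and the 1.45x frame-time increase is a made-up example figure:

    #include <stdio.h>

    int main(void)
    {
        /* 1280x1024 -> 1920x1200 is ~1.76x the pixels */
        double pixel_ratio = (1920.0 * 1200.0) / (1280.0 * 1024.0);
        double time_ratio  = 1.45;  /* made-up observed frame-time increase */

        /* normalize the low-res frame time to 1 = s + f, with f = 1 - s;
           at the high res, time_ratio = s + f * pixel_ratio, so solve
           for the setup-bound share s */
        double s = (pixel_ratio - time_ratio) / (pixel_ratio - 1.0);
        printf("setup-bound share of frame time: ~%.0f%%\n", s * 100.0);
        return 0;
    }

With those example inputs it works out to about 41%, the same ballpark as the 40% guess above; plug in real benchmark ratios to do it properly.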
 
Right, but that's not what Tamlin was referring to. He was talking about the numbers felix posted.

Again, the only numbers we have are those of the GTX 360, according to a few people. So if you take those numbers and compare them to random 5870 results (as felix did), you're not really doing a fair comparison. So I'm willing to give NVIDIA the benefit of the doubt, although I really wonder why they decided not to demonstrate a fully-unlocked GF100 chip.

These were my thoughts too. What's the purpose of this demonstration? According to TGdaily, Nvidia flew in journalists for this occasion.

Far Cry 2 is a game that traditionally performs well on Nvidia hardware. It's also handpicked by Nvidia to demonstrate Fermi. I assume that the game was tested by the driver team, since Nvidia was going to show Fermi off with it in front of journalists. I don't think that Nvidia was bottlenecking the game with a P4 processor.

I don't know if Hexus used the "small ranch" benchmark, but the GTX285 performance numbers are very similar to the ones in the video:
http://www.hexus.net/content/item.php?item=21179&page=9

So, in this scenario, where the GTX285 gave 55.6 FPS at 1920x1200, the 5870 gave 81.1 FPS on the same settings, a ratio of roughly 1.46:1. That almost matches the gap between the two cards in the Fermi video.

The Tech Report review shown earlier is not one I have faith in, since I get higher FPS than they did with a much slower system.

By the time Fermi comes out (if mid-March is the final date), the 5870 will already be 6 months old. I don't think that ATI has been sitting on their hands these months (after all, by then Nvidia will have been talking about Fermi for 6 months). It's hard to believe that it's the 5870 that Fermi is going up against, unless ATI plans on releasing the 6000 series a few months after Fermi's release instead of a refresh.

ChrisRay (with all respect) has been using the word "impressive" about Fermi in various forums. Nvidia has called it impressive. I'm glad to see at least some concrete numbers, but I was expecting to be at least a little impressed. Fermi has been hyped up too much for these numbers to even be interesting.

If I were a journalist flown in from Europe and Nvidia showed me a GTX360 (not even the GTX380) with a minor performance increase over a 5870 in games that traditionally favor Nvidia cards, I would have been very disappointed and wondered why I wasted my time on this.

If, in addition, the GTX360 had low-performing drivers that didn't show what Fermi could do in at least those games, and was run on a low-performing system, I wonder why they even had this press preview?

I still have hopes that we'll see something impressive tomorrow. If ChrisRay is correct about the tessellator being impressive, that might ease my disappointment a bit. :)
 
If I were a journalist flown in from Europe and Nvidia showed me a GTX360 (not even the GTX380) with a minor performance increase over a 5870 in games that traditionally favor Nvidia cards, I would have been very disappointed and wondered why I wasted my time on this.
Aren't all-expenses-paid trips kinda what they are there for in the first place? ;)
 
ChrisRay (with all respect) has been using the word "impressive" about Fermi in various forums. Nvidia has called it impressive. I'm glad to see at least some concrete numbers, but I was expecting to be at least a little impressed. Fermi has been hyped up too much for these numbers

Which hype? We know nothing about Fermi's graphics side. It's really funny that we can hype something without any real information. :LOL:
 
I thought flight sims have always been somewhat geometry limited, because very little of the traditional occlusion-culling machinery (BSPs/portals/etc.) gets used. Typically, you have an extremely large view distance, and you're only going to be using large flat triangles if you're not modeling lots of mountain terrain and ground shrubbery.

Bingo. If you look at "modern" G.A. sims like MSFS, most of the rendering workload is triangles. Lots of geometry in any given frame due to the aforementioned draw distances. MSFS allows view distance of up to 110 miles. With all the ground scenery objects (buildings, trees & other foliage, vehicles) and the geometry of the land itself, that's a lot of triangles.

I remember in the old days of B3D, the sims were always CPU limited (prior to the era of T&L/vertex shaders) because of the geometry load.

If only G.A. flight sims had moved beyond this era (they haven't). They are still almost entirely CPU-bound. MSFS in particular is notorious for performance scaling linearly with CPU clock. Really sad, given just how parallel modern graphics are. I imagine a future version of a G.A. sim that runs almost entirely on the GPU, with higher quality and much higher FPS than what we have today. Of course, now that MSFS has been killed off by MS, I doubt that will happen any time soon.
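To illustrate the bottleneck, here's a toy model of a sim frame in C (hypothetical stand-ins, not actual MSFS code): the CPU walks every scenery object, culls it, and issues a draw call, one object at a time, so the whole frame is a serial loop that scales with CPU clock:

    #include <stdio.h>

    typedef struct { int visible; } Object;   /* hypothetical stand-in */

    static int frustum_cull(const Object *o) { return o->visible; }
    static void draw(const Object *o) { (void)o; /* one driver call */ }

    int main(void)
    {
        enum { N = 50000 };             /* plausible scenery object count */
        static Object objects[N];
        int drawn = 0;

        for (int i = 0; i < N; i++) {   /* all of this runs on one CPU core */
            objects[i].visible = (i % 3) != 0;   /* fake visibility result */
            if (!frustum_cull(&objects[i]))
                continue;
            draw(&objects[i]);
            drawn++;
        }
        printf("%d draw calls issued by the CPU this frame\n", drawn);
        return 0;
    }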
 
-When you increase the resolution, the framerate changes much less than expected
-It's not CPU limited because SLI/CF scale well
-The difference between the 5870 and 4890 is smaller than normal
-The 4890 is notably faster than the GTX 285

(I don't mean 100% limited, obviously. Just heavily limited on the fastest GPUs, maybe 40% at 1920x1200, though I have to crunch some numbers to make a proper guess. A 5670 has 1/4 the rendering power but almost the same setup speed as a 5870, so it won't be as setup limited.)

And the 5770 CF beats the single 5870 hands down:

http://www.xbitlabs.com/articles/video/display/radeon-hd5770-hd5750-crossfirex_10.html#sect1

This also happens in some other games, but here the difference is quite large. In other games there is parity instead.
 
I am absolutely shocked and dismayed that there hasn't been a single leak as yet and it's been days since the "deep dive". What good is the internet any more? It's like we're back in the dark ages :) I appreciate neliz's info but it raises more questions than answers.

Because the 'deep dive' wasn't anything more than the usual NV patting-themselves-on-the-back session, perhaps? There is more info floating around, though, and a lot more was gotten at the show if you knew who to ask.

The real problem is that no AIBs/OEMs have silicon yet. That is generally where the leaks come from, and we are told that they won't get silicon until (likely late) February. Even now, the number of chips in the wild is vanishingly small, and that is not on purpose; NV lacks good chips.

If you know anyone who had access to an A1(*), ask them what it took to use one. :)

-Charlie

(*) A card _WITH_ silicon on it, not a puppy.
 
Which hype? We know nothing about Fermi's graphics side. It's really funny that we can hype something without any real information. :LOL:

Of course we can, and PR departments are like politicians in this field. They can say much without any real information. :LOL:

You cannot honestly say that all this talk about Fermi hasn't built up some expectations for you?
 
Interesting, so they really thought it was going to ship ... would be nice if Fudo or Apoppin asked them what happened after the Fermi launch has blown over and they can talk a little more freely (the journalistic thing to do).

No they didn't. They were telling people that mattered March and May for GF100 and Fermi respectively at the same conference. I have two sources that got the same info, and they are much higher up the food chain than Fudo.

-Charlie
 
You cannot honestly say that all this talk about Fermi hasn't built up some expectations for you?

I do. I saw a demo with GF100 and know that it will have a 384-bit memory interface, 512 CUDA cores, a cache hierarchy, etc. But that did not build up any hype for me. Maybe this will change next week.
 
These were my thoughts too. What's the purpose of this demonstration?

I don't think you should expend so much energy trying to draw conclusions from a mishmash of blurry videos of ONE game, incompatible review comparisons, hearsay, and the general chaos that is Fermi info right now.
 
Crap, those numbers might punch a hole in my theory because there's a huge hit for AA/AF even at 1280x1024.

The only explanation that I can think of is that 8xAA on a scene with lots of tiny triangles prevents you from discarding a bunch of zero-pixel quads, unlike 0xAA. But then the 5870 should have an advantage in that situation over a 4890 :???:
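A quick illustration of the theory in C (the sliver triangle and the sample positions are made up, not any vendor's actual pattern): a thin triangle can miss the pixel centre, so at 0xAA its quad would be rejected before shading, yet it still covers an MSAA sample, which forces the quad to be shaded at 8xAA:

    #include <stdio.h>

    /* a thin horizontal sliver that misses the pixel centre at (0.5, 0.5) */
    static int inside(double x, double y)
    {
        return y > 0.52 && y < 0.60 && x > 0.0 && x < 2.0;
    }

    int main(void)
    {
        /* 8 made-up sub-pixel sample offsets for the pixel at (0.5, 0.5) */
        static const double sx[8] = {0.06, 0.31, 0.56, 0.81, 0.19, 0.44, 0.69, 0.94};
        static const double sy[8] = {0.81, 0.19, 0.94, 0.44, 0.56, 0.06, 0.69, 0.31};

        int centre_hit = inside(0.5, 0.5);
        int samples_hit = 0;
        for (int i = 0; i < 8; i++)
            samples_hit += inside(sx[i], sy[i]);

        /* prints: centre covered: 0, samples covered: 1 of 8 */
        printf("centre covered: %d, samples covered: %d of 8\n",
               centre_hit, samples_hit);
        return 0;
    }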
 
No they didn't. They were telling people that mattered March and May for GF100 and Fermi respectively at the same conference. I have two sources that got the same info, and they are much higher up the food chain than Fudo.

-Charlie

"March and May for GF100 and Fermi respectively" ?

So they will release the architecture in May? :LOL:
It seems that, at this point, you have yet to establish that NVIDIA's Fermi refers to a GPU architecture, not a product to be released...
 
I don't think you should expend so much energy trying to draw conclusions from a mishmash of blurry videos of ONE game, incompatible review comparisons, hearsay, and the general chaos that is Fermi info right now.

It's not a conclusion, but an expression of disappointment. I do hope that tomorrow will bring something more juicy. Amid all the hearsay and general chaos around Fermi, as you describe it, I had expected something impressive when Nvidia finally came out with at least some benches.

If the benches totally misrepresent what Fermi can do, even in handpicked games that favor Nvidia cards, what was the point of showing this benchmark at all?

As I said, tomorrow might bring something more that makes the initial disappointment go away, but it's not unnatural to be disappointed with what's been seen so far in this video. I was expecting more, and I think many of us did.
 
If the benches totally misrepresent what Fermi can do, even in handpicked games that favor Nvidia cards, what was the point of showing this benchmark at all?

Hm, you are the one misrepresenting the benchmark. I see a very nice performance increase over the GTX285, and it's more than AMD delivered with Cypress in this Far Cry 2 benchmark.
And you don't show all of your tricks in a preview. :LOL:
 
If the benches totally misrepresent what Fermi can do, even in handpicked games that favor Nvidia cards, what was the point of showing this benchmark at all?

I'm not aware of any benchmarks that Nvidia has released or shown publicly. Everything you're talking about seems to be based on that one unwittingly leaked video of Far Cry 2. Unless I'm out of the loop and there are Nvidia documents out there with these handpicked games and benchmarks that you keep referring to...
 