Nvidia GT300 core: Speculation

Or perhaps because the GT300 will use a PCB very similar to the GT200, and they figured they'd get a head start on a 2-GPU PCB this time...
Certainly the way the new GTX295 board is arranged, a pairing of GT300s would fit without any dieting (noting that the memory buses are cut down on the GTX295), just a question over clocks.

Funny, soon (erm, 18 months?) we'll be talking about the first single-board 10TFLOP graphics card...

Jawed
 
Cooling is only an issue when you try to vent through the back ... for a gamer card I just don't see why they would have to religiously stick to that.


Actually I am talking about the way the heat is stored/transferred through the PCB from one core to the other. You can use a bigger fan on top of a single-PCB design. But both of these designs put heat back into the case.

If your heatsink is designed properly on a dual-PCB solution, the two PCBs shouldn't be sharing much if any heat, and the heatsink should be taking the majority of the heat down the center.
 
Lol, someone should pay his editor to tell him, "Do you suck? Because if you suck, just get up and say you suck." Charlie would probably implode.

 
Eh, he draws quite a bit of attention, and it (classic Inq journalism) seems to work very effectively. You've got to give it that. The comments are just as silly, really.
 
lol, daaamn that was a masterpiece. The venom was dripping from my monitor. So much drama and intrigue - all of this hatred can't be due to just a little snubbing from a press event.

I feel sorry for the guys at Nvidia if any of that is true (or turns out to be so). I can't remember but did Charlie STFU for a while after his R600 cheerleading turned out to be wrong? Or did he just find some other way to criticize Nvidia?

Read through the comments; someone has challenged the brain-dead dolt to come here, say the same thing, and then defend it.

Haha, there's a guy on there dissing B3D for the outdated reviews on the front page.
 
Certainly the way the new GTX295 board is arranged, a pairing of GT300s would fit without any dieting (noting that the memory buses are cut down on the GTX295), just a question over clocks.

Which makes me wonder... With this 'size-optimized' design, can't they make the single-GPU cards smaller as well? Seems to me like they could be much shorter than they are today, which would be less of a headache to fit into most cases.
 
I can't remember but did Charlie STFU for a while after his R600 cheerleading turned out to be wrong? Or did he just find some other way to criticize Nvidia?
I can imagine that he actually blamed Nvidia for him being wrong. :)
 
It's too bad internet websites aren't really held accountable for the things they write, due to it being more trouble than it's worth.
 
It's too bad internet websites aren't really held accountable for the things they write, due to it being more trouble than it's worth.

We should organize something like that ourselves ;)
I mean, with 'repeat offenders' such as Charlie and Theo... we could set up a page with some of the claims they made over the years, and how things actually turned out.
The information is mostly around here on these forums, it just needs to be organized better and presented in an easily digestible way :)

So you'd end up with a page that lists the more shady 'internet reporters', and why they are not trustworthy sources, easy to verify for anyone.

The same could also be done for the ones who DO have good sources btw.
 
Good idea, Scali! Instead of the Obameter (St. Petersburg Times) we'd have the Charliemeter. For every "tech journalist" we can keep a "right, wrong, lucky shot" tally and add up the totals per project, or overall "truth scores".

Should we just make it a thread under Industry?
 
Guys, if I may ask: what's the chance that either or both GPUs, i.e. GT300 and R870, will do physics/PhysX with little or no FPS loss right off the bat?

US
 
Guys, if I may ask: what's the chance that either or both GPUs, i.e. GT300 and R870, will do physics/PhysX with little or no FPS loss right off the bat?

US

how?

Right now, GPU physics uses the shaders of both GPUs. I don't see how you could run ultra-high resolutions (that's what these cards are for) with matching IQ and still have headroom left over for physics.

Why would anyone want to incorporate a PPU on the GPU die?
 
Guys, if I may ask: what's the chance that either or both GPUs, i.e. GT300 and R870, will do physics/PhysX with little or no FPS loss right off the bat?

I'd say that depends entirely on how heavy the physics workload is. Developers may well choose to add even more complex physics effects to their games when they see that GT300 and R870 are very efficient at processing them.
Just like how games have had ever increasing graphics quality and detail levels as GPUs evolved. We're still stuck at the same 25-50 fps framerates that we started with back in GLQuake :)
 
If one stream of execution were bottlenecked by a hardware portion unique to it, physics could be run with a small performance impact on graphics.

However, since physics calculations are overlaid onto the same hardware base, those corner cases are fewer under load.

They share the same shader ALUs, so they would be in direct competition for FLOPs and register resources.

Texture and fill rate limited cases would theoretically apply more to graphics, but a lot of the data paths being used are shared between physics and graphics, and they would be competing for the same memory controllers and bandwidth.

The triangle setup unit is a potential graphics bottleneck that might not also throttle a physics thread, in situations where high triangle throughput doesn't also translate into high bandwidth or shader utilization.
If this were an AMD chip, interpolation might be another bottleneck unique to graphics. Nvidia leverages its ALUs for that, if my recollection is correct.

Another possibility is that the frames per second is so high that even a significant drop would not be noticed by the user.
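
To put the ALU-sharing point in concrete terms, here's a minimal sketch (in CUDA, since that's what PhysX sits on top of on Nvidia hardware) of the kind of integration step a physics pass might run. The kernel and names are illustrative, not anything out of PhysX itself; the point is that it executes on the very same SMs, multiply-add units and register files that would otherwise be chewing through shader work, so its FLOPs and bandwidth come straight out of the graphics budget:

```
#include <cuda_runtime.h>

// Illustrative only: a trivial Euler step over a set of rigid bodies.
// Whatever the real physics kernels look like, they occupy the same
// streaming multiprocessors, ALUs and register files as vertex/pixel
// shader work, so graphics and physics contend for the same resources.
__global__ void integrateBodies(float3* pos, const float3* vel,
                                int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    // The same multiply-add hardware a shader would use for lighting math.
    pos[i].x += vel[i].x * dt;
    pos[i].y += vel[i].y * dt;
    pos[i].z += vel[i].z * dt;
}
```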
 
If one stream of execution were bottlenecked by a hardware portion unique to it, physics could be run with a small performance impact on graphics.

However, since physics calculations are overlaid onto the same hardware base, those corner cases are fewer under load.

They share the same shader ALUs, so they would be in direct competition for FLOPs and register resources.

Well, I think you have to include the CPU in the equation as well.
Namely, if your GPU wasn't doing the physics, your CPU would be doing it.
Generally when the CPU does physics, the CPU becomes the bottleneck, not the GPU.
By offloading physics to the GPU, you can reduce the CPU bottleneck, which may yield higher framerates even though the GPU has a heavier load. Or it may at least give you better average framerates, because the GPU can digest physics workloads more easily while the CPU will have to do little or no extra work in parts of the game where there are 'outbursts' of physics.
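
Roughly the pattern I mean, as a sketch (CUDA host side; the kernel and function names are made up, and a real engine would obviously be more involved): the CPU launches the step asynchronously and goes straight back to the rest of the frame, only waiting when it actually needs the results.

```
#include <cuda_runtime.h>

// Made-up minimal physics step, just so the snippet compiles on its own.
__global__ void stepPhysics(float3* pos, const float3* vel, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        pos[i].x += vel[i].x * dt;
        pos[i].y += vel[i].y * dt;
        pos[i].z += vel[i].z * dt;
    }
}

// Hypothetical per-frame function: the CPU queues the physics work on
// its own stream and immediately moves on to AI, sound, networking and
// draw-call setup, instead of grinding through the simulation itself.
void gameFrame(float3* d_pos, const float3* d_vel, int n, float dt,
               cudaStream_t physicsStream)
{
    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    stepPhysics<<<blocks, threads, 0, physicsStream>>>(d_pos, d_vel, n, dt);

    // ... CPU-side game logic runs here while the GPU simulates ...

    // Only block when the updated positions are actually needed.
    cudaStreamSynchronize(physicsStream);
}
```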
 
Well, I think you have to include the CPU in the equation as well.
Namely, if your GPU wasn't doing the physics, your CPU would be doing it.
Generally when the CPU does physics, the CPU becomes the bottleneck, not the GPU.
By offloading physics to the GPU, you can reduce the CPU bottleneck, which may yield higher framerates even though the GPU has a heavier load. Or it may at least give you better average framerates, because the GPU can digest physics workloads more easily while the CPU will have to do little or no extra work in parts of the game where there are 'outbursts' of physics.

There's also a cost to managing multiple contexts for the GPU, and there's more data going over the relatively long latency PCI-E bus. All that physics data has to get there and back somehow, and it might also take up a good chunk of VRAM.
 
There's also a cost to managing multiple contexts for the GPU, and there's more data going over the relatively long latency PCI-E bus. All that physics data has to get there and back somehow, and it might also take up a good chunk of VRAM.

True, but we've already seen that even with relatively light PhysX loads (e.g. UT3 with hardware PhysX enabled in the standard maps, not the Ageia ones), the GPUs manage just fine.
I don't think the latency is that big of a problem. Physics tend to be rather fire-and-forget ("here's some objects and their motion, let me know when you calculated the object positions at t+1").

Since the data required for physics is little more than some geometry, it doesn't take that much storage anyway (at least, not when you're talking about a card with 1 GB of memory or more).
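
Some rough numbers to back that up (entirely back-of-the-envelope, with a made-up state layout, not anything measured from an actual PhysX title):

```
// Hypothetical per-body state, just to get an order of magnitude:
struct BodyState {
    float position[3];     // 12 bytes
    float linearVel[3];    // 12 bytes
    float orientation[4];  // 16 bytes (quaternion)
    float angularVel[3];   // 12 bytes
};                         // ~52 bytes per body

// 20,000 active bodies * ~52 bytes ≈ 1 MB of state per simulation step.
// Even copied both ways across PCI-E 2.0 x16 (~8 GB/s peak) every frame,
// that's on the order of a quarter of a millisecond of bus time, and it's
// a rounding error next to 1 GB of VRAM.
```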
 