Nvidia GTS250M Series - DirectX 10.1 Supported?

iwod

Newcomer
I am surprised that no one here has mentioned it.
The newest Nvidia offering is actually a mobile update, which feature-wise is, on paper, even more advanced.

It supports DirectX 10.1 and GDDR5.

So does anyone else know what has been updated?

Actually, I am very impressed by the specs of the GTS 250M.

96 shaders,
500 MHz core speed

@ only 28W.
 
I could be mistaken, but the "ELSA hints GT206 and GT212" thread hasn't rendered a final verdict on evidence of hardware support for DirectX 10.1 features... Given the flexible nature of DirectX 10 GPUs, Nvidia could probably just modify the drivers (with an obvious detriment to performance when using 10.1 features...)
 
If it were doable in drivers, it would have been done long ago, or not at all.

homer, you can't benchmark a laptop chip without a laptop ;)
 
And you can't benchmark a desktop chip without a desktop. Or a speaker without an amplifier. Or a set of tires without a car. In other words, what is your point?
 
I think his point is that when nVidia introduces a desktop GPU, they generally have some reference cards available, which any reviewer can stick in their own PCs.
In this case you need to wait until some OEM integrates the GPU into a system before it can be tested. nVidia doesn't make its own laptops.
 
If we look at the chart here

http://www.theinquirer.net/inquirer/news/1271809/nvidia-mobile-gpu-kimono

The new GTS250M nearly DOUBLES the GFLOPS/W of the previous GTX260M!

Yep, that's what DX10.1 does: doubles peak compute power, at least in theory. Anyway, the GTX260M/280M at 55nm, with their 75W TDP, can hardly claim to be mobile parts. That's high even for desktop parts :devilish:

But I still don't see why there is so much hype, when the old RV730 had the same 480 GFLOPS inside a claimed 58W TDP, which in reality ended up at 45-47W. At 40nm it should manage the same or a little more, say 550 GFLOPS at a claimed 35W (15.72 GFLOPS/W), which would maybe end up at just 30W :smile:
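
For reference, working those quoted numbers through: 480 GFLOPS / 58W ≈ 8.3 GFLOPS/W as claimed (or ≈ 10.4 GFLOPS/W at the real-world ~46W), while 550 GFLOPS / 35W ≈ 15.7 GFLOPS/W for the hoped-for 40nm part, i.e. nearly double.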

If TSMC 40nm is still leaky, then maybe some even better bin of the chip will come out later?

I still don't believe 40nm is that leaky, especially not now, 15 months after the announcement. It's just an old story that has hung around for more than half a year.
 
I could be mistaken, but the "ELSA hints GT206 and GT212" thread hasn't rendered a final verdict on evidence of hardware support for DirectX 10.1 features... Given the flexible nature of DirectX 10 GPUs, Nvidia could probably just modify the drivers (with an obvious detriment to performance when using 10.1 features...)
The ROPs had to be updated.

More info from an Nvidia beta tester:
GTS 250M and GTS 260M (GT215) have 96 shaders, support up to 1GB of GDDR5 memory (128-bit) and DX10.1.
28W and 38W TDP, respectively.

GT230M and GT240M (GT216) have 48 shaders, support up to 1GB of GDDR3 memory (128-bit) and DX10.1.
Both have a 23W TDP.

G210M (GT218) has 16 shaders, supports up to 512MB of GDDR3 memory (64-bit) and DX10.1. TDP is 13W.
 
Huh? It's nothing to do with DX10.1. It's simply a matter of the same GFLOPS at half the power consumption.

? And how did they do it, just with the LP (low-power) process?

@MDolenc

DX10 issues instructions per SIMD in-line, right? Not multithreaded. If you pack a double-precision (double bit-width) function with the obvious (pre-decoder) optimizations above, you could do some things "twice" as fast as before. I'd gladly read a better explanation from you, since I don't know it all, unlike you with the presumptions above.
 
? And how did they do it, just with the LP (low-power) process?

@MDolenc

DX10 issues instructions per SIMD in-line, right? Not multithreaded. If you pack a double-precision (double bit-width) function with the obvious (pre-decoder) optimizations above, you could do some things "twice" as fast as before. I'd gladly read a better explanation from you, since I don't know it all, unlike you with the presumptions above.

I don't know how it is related to DirectX 10.1. They offer double the number of shaders for the same power envelope.

So double the performance/watt.
 
DX10 issues instructions per SIMD in-line, right? Not multithreaded. If you pack a double-precision (double bit-width) function with the obvious (pre-decoder) optimizations above, you could do some things "twice" as fast as before. I'd gladly read a better explanation from you, since I don't know it all, unlike you with the presumptions above.
Since Shader Model 2.0, there has been no specification for "dual issue" at all. The token stream comes in and it's up to the driver's compiler to pack things optimally for the HW on which the shader will be run.

The main improvement of DX10.1 over DX10 is increased pipeline flexibility. For example, you can specify some interpolants to be computed per sample instead of per pixel, allowing for better results when using multisampling. These improvements increase the application's (and hardware's) efficiency because they remove the need to multipass certain effects.
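
To illustrate (a minimal HLSL sketch, not from the thread): per-sample interpolation is exposed through the 'sample' modifier, which is new in Shader Model 4.1 and therefore needs DX10.1 hardware; compile with ps_4_1.

    // Pixel shader input. The 'sample' modifier (SM4.1 / DX10.1) asks the
    // rasterizer to interpolate this attribute at each MSAA sample position
    // instead of once per pixel, so the shader runs per sample for it.
    struct PSInput
    {
        float4 pos       : SV_POSITION;
        sample float2 uv : TEXCOORD0;  // evaluated per sample, not per pixel
    };

    Texture2D    tex  : register(t0);
    SamplerState samp : register(s0);

    float4 main(PSInput input) : SV_Target
    {
        // With per-sample uv, geometry edges under multisampling get a
        // correctly interpolated coordinate at every sample, avoiding the
        // extra pass you would otherwise need on plain DX10.
        return tex.Sample(samp, input.uv);
    }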
 