How will NVidia counter the release of HD5xxx?

What will NVidia do to counter the release of the HD5xxx series?

  • GT300 Performance Preview Articles: 29 votes (19.7%)
  • New card based on the previous architecture: 18 votes (12.2%)
  • New and Faster Drivers: 6 votes (4.1%)
  • Something PhysX related: 11 votes (7.5%)
  • Powerpoint slides: 61 votes (41.5%)
  • They'll just sit back and watch: 12 votes (8.2%)
  • Other (please specify): 10 votes (6.8%)

  Total voters: 147
That in no way means Nvidia will not try something as silly as saying DX11 isn't a big deal yet because there are no games, but consumers will not believe that as easily as they believed that DX10.1 wasn't a big deal.

Which could backfire quite badly if the rumored 4-6 DX11 games do in fact launch for the holidays.

Regards,
SB
 

DX Compute is a form of GPU computing and probably the one that's poised to have the most success. So if Nvidia decided to hype GPGPU to spoil the Evergreen launch, AMD could reply with something like: "Yes, GPU Computing is great and has a lot of potential, especially now that Windows 7 is coming with DX Compute, which is why our HD 5xxx cards support it. Unfortunately, our competitor's products don't."
 
DX11 not only brings Compute Shaders to DX11 hardware, but also to DX10 and DX10.1 hardware. It has CS 4.0 for DX10.0 hardware, 4.1 for DX10.1 hardware and 5.0 for DX11 hardware. So NV can still say they support DX GPU Computing.
 
True, but not fully. Hyping it to counter ATI when ATI actually has better support doesn't seem very smart. Of course they could talk about OpenCL...
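
For reference, the CS-target-per-hardware-level mapping described a couple of posts up can be sketched in a few lines of host code. The enum and helper below are made up purely for illustration (they are not part of any DirectX API); only the cs_4_0 / cs_4_1 / cs_5_0 profile names are real HLSL compile targets, and, as the rest of the thread notes, actual CS 4.x support on DX10-class parts is optional.

```cuda
// Illustrative sketch only: a hypothetical helper encoding the mapping
// described above (which compute shader profile each hardware level targets).
#include <cstdio>

enum HardwareLevel { DX10_0, DX10_1, DX11_0 };

// Highest compute shader compile target for a given hardware level.
static const char* best_cs_profile(HardwareLevel level)
{
    switch (level) {
        case DX10_0: return "cs_4_0";  // Compute Shader 4.0 on DX10.0 hardware
        case DX10_1: return "cs_4_1";  // Compute Shader 4.1 on DX10.1 hardware
        case DX11_0: return "cs_5_0";  // full Compute Shader 5.0 on DX11 hardware
    }
    return "none";
}

int main()
{
    printf("DX10.0 -> %s\n", best_cs_profile(DX10_0));
    printf("DX10.1 -> %s\n", best_cs_profile(DX10_1));
    printf("DX11   -> %s\n", best_cs_profile(DX11_0));
    return 0;
}
```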
 
I know what DX Compute is. I was asking you to explain your earlier statement about Nvidia not supporting it. DX Compute does not require DX11 hardware. And there's no guarantee that ATI will have better support, as their current efforts in this space have fallen well short thus far.

Random Access Writes?

Just because Microsoft didn't include something in CS4.0 doesn't mean it's not doable in CUDA. There aren't any restrictions on random access writes in CUDA AFAIK given that you can define your own data structures and freely scatter/gather with them.
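
To make that concrete, here is a minimal CUDA sketch of the very thing CS 4.x restricts: threads scattering into shared memory at data-dependent offsets instead of writing only to their own index. The kernel and the permutation used here are just an illustration.

```cuda
// Minimal sketch: a CUDA kernel that writes to shared memory at arbitrary,
// data-dependent offsets (a scatter), then gathers back through the same
// permutation. CUDA places no "own offset only" restriction on these writes.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scatter_gather(const int* in, const int* perm, int* out, int n)
{
    extern __shared__ int smem[];   // dynamically sized shared memory
    int tid = threadIdx.x;

    if (tid < n) {
        // Each thread writes to an offset chosen by the data (perm[tid]),
        // not to its own threadIdx: a random-access write to shared memory.
        smem[perm[tid]] = in[tid];
    }
    __syncthreads();

    if (tid < n) {
        // Read back through the same permutation; result equals the input.
        out[tid] = smem[perm[tid]];
    }
}

int main()
{
    const int n = 256;
    int h_in[n], h_perm[n], h_out[n];
    for (int i = 0; i < n; ++i) { h_in[i] = i; h_perm[i] = (i * 37) % n; }

    int *d_in, *d_perm, *d_out;
    cudaMalloc(&d_in, n * sizeof(int));
    cudaMalloc(&d_perm, n * sizeof(int));
    cudaMalloc(&d_out, n * sizeof(int));
    cudaMemcpy(d_in, h_in, n * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy(d_perm, h_perm, n * sizeof(int), cudaMemcpyHostToDevice);

    // One block of n threads, n*sizeof(int) bytes of dynamic shared memory.
    scatter_gather<<<1, n, n * sizeof(int)>>>(d_in, d_perm, d_out, n);
    cudaMemcpy(h_out, d_out, n * sizeof(int), cudaMemcpyDeviceToHost);

    bool ok = true;
    for (int i = 0; i < n; ++i) ok = ok && (h_out[i] == h_in[i]);
    printf("round trip %s\n", ok ? "OK" : "FAILED");

    cudaFree(d_in); cudaFree(d_perm); cudaFree(d_out);
    return 0;
}
```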
 
Well, so far, Nvidia hasn't claimed full DX Compute support, so I'm guessing either they can't do it on GT200 or they haven't done it yet. As for ATI, Evergreen is DX11, so it must have full DX Compute support which, granted, doesn't mean that it works well.
 
The whole compute shader thing is strange: 4.x won't support random writes to shared memory, and 5.0 (seemingly) won't be supported by NVIDIA. Did NVIDIA just get the short end of the stick, or did they intentionally try to avoid making compute shaders efficient on their older hardware to conserve CUDA momentum?
 
Well, according to this, Nvidia can't claim CS 5.0 compliance for at least one reason: 32KB of shared memory is required.

CS 4.x seems to target G80-level hardware in some respects, such as the 16KB shared memory and 768-thread limits, as well as the lack of atomic support. However, some other restrictions aren't limitations of CUDA, such as the requirement that threads can only write to their own offset in shared memory. Perhaps that was done to accommodate RV770, or it's some other quirk of the API that we don't know about.

The random-writes restriction is weird, as that's obviously not a CUDA limitation either.
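
For what it's worth, the figures in question (the 16KB of shared memory and the 32KB that CS 5.0 asks for) can be checked against what the CUDA runtime reports for a given card; a minimal sketch:

```cuda
// Minimal sketch: query the per-block limits the discussion above refers to.
// G80/G92/GT200-class parts report 16KB of shared memory per block, which is
// enough for CS 4.x but short of the 32KB that CS 5.0 requires.
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {  // device 0
        printf("no CUDA device found\n");
        return 1;
    }

    printf("%s (compute capability %d.%d)\n", prop.name, prop.major, prop.minor);
    printf("shared memory per block: %zu bytes\n", prop.sharedMemPerBlock);
    printf("max threads per block  : %d\n", prop.maxThreadsPerBlock);

    if (prop.sharedMemPerBlock < 32 * 1024)
        printf("-> below the 32KB of groupshared memory required for cs_5_0\n");
    return 0;
}
```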
 
trinibwoy, as far as I know CS 4.0 is tailored so that every single DX10 card supports it fully (and CS 4.1 likewise for DX10.1 cards), not just nVidia & ATI; some of the limitations could come from S3 or Intel chips, for example.
 
Really? Do those architectures even support CS 4.0 features?

Obviously, but with 4.1 there to accommodate ATI why wasn't there a flag or a version to accommodate NVIDIA?

Too messy. Having 4.1 be a strict superset of 4.0 is a lot easier.
 
That's the impression I've gotten: any DX10 chip will work with CS 4.0, any DX10.1 chip with 4.0 and 4.1, and DX11 obviously with 4.0, 4.1 and 5.0.
 
Yeah, but CS support isn't implied by DX10 support. R600, for example, obviously doesn't qualify...
 
It doesn't? Curious. I wonder what the motive for even making 4.0 and 4.1 versions was, matching the version numbers of the DX10/10.1 shader models, if not to work with hardware of that level.
 