Recent content by compres

  1. 22 nm Larrabee

    So the more cores, the more L3, and the GPU has access to it. Are there any tests showing the same GPU in 2- vs 4-core Sandy Bridge configurations?
  2. 22 nm Larrabee

    You are stating that the GPU does not share the L3 BW with the cores. Can anyone confirm?
  3. Predicting GPU Performance for AMD and Nvidia

    :lol: Seriously I could not stop laughing for like 5 minutes...
  4. Hardware MSAA

    There has been a lot of research in linear algebra algorithms on special orderings for matrix-matrix or matrix-vector operations. The idea is to map the 2D structure of matrices onto the linear structure found in caches. Do a search on Google for "peano order" or "morton... (see the Morton-order sketch after this list)
  5. Hardware MSAA

    :D I think you hit one nail on the head, a very specific nail in a broad wooden construction. But such a big one that it has been the topic of countless research efforts. All of them (the serious ones) with excellent results. My take is, and it is very likely with graphics converging with...
  6. NVIDIA GF100 & Friends speculation

    Let me know if you (or someone else) wants a proper translation of that one from Spanish to English.
  7. NVIDIA GF100 & Friends speculation

    A driver bug frying 700+ dollar GPUs? Are we really back to those times when a virus could burn a monitor? This is rather incompetent hardware design in 2011.
  8. NVIDIA GF100 & Friends speculation

    'cause they are a "software" company.
  9. NVIDIA GF100 & Friends speculation

    What a pointless video....
  10. Dual GPU cards and shared memory pool

    Sorry for the n00b question: Is it already available, and/or have you already run some benchmarks to see latencies/BW?
  11. AMD: R9xx Speculation

    Good, 'cause I'll be there.
  12. If ATI/AMD GPUs are VLIW, what is NVIDIA's GPUs architecture acronym?

    I have worked with 512-core SMPs. My original point is that using MPI is not just about reusing personnel trained in MPI, or about being too lazy to learn other programming models, but that even on SMPs, MPI scales better for many algorithms (see the MPI sketch after this list). Add to that, the ones I have worked...
  13. If ATI/AMD GPUs are VLIW, what is NVIDIA's GPUs architecture acronym?

    Agreed. I have seen applications where SM scales better. Personally, I have seen more that scale better on MPI, mainly in scientific computing. I can imagine other fields of computer science would see SM scaling better more often than MPI. The issue is that it is not always possible to have...
  14. If ATI/AMD GPUs are VLIW, what is NVIDIA's GPUs architecture acronym?

    Like it or not, it is the best way so far to scale many algorithms that are not embarrassingly parallel (EP) on supercomputers.
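
The Morton-order sketch referred to in post 4. This is a minimal illustration of my own (the morton2d helper is hypothetical, not from any library named in the thread): it interleaves the bits of the row and column indices, so that elements of a small 2D block land close together in the 1D address space, which is what makes blocked matrix operations friendlier to caches than plain row-major storage.

    #include <stdint.h>
    #include <stdio.h>

    /* Interleave the lower 16 bits of x and y: x's bits go to the even
     * bit positions of the result, y's bits to the odd positions. */
    static uint32_t morton2d(uint16_t x, uint16_t y)
    {
        uint32_t z = 0;
        for (int i = 0; i < 16; i++) {
            z |= ((uint32_t)((x >> i) & 1u)) << (2 * i);
            z |= ((uint32_t)((y >> i) & 1u)) << (2 * i + 1);
        }
        return z;
    }

    int main(void)
    {
        /* Print where the elements of a 4x4 block end up in memory:
         * 2D neighbours map to nearby 1D indices. */
        for (uint16_t row = 0; row < 4; row++) {
            for (uint16_t col = 0; col < 4; col++)
                printf("(%u,%u) -> %2u  ", (unsigned)row, (unsigned)col,
                       (unsigned)morton2d(col, row));
            printf("\n");
        }
        return 0;
    }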
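
The MPI sketch referred to in post 12: a minimal sketch of my own, assuming a standard MPI installation (compiled with mpicc, launched with mpirun), of the message-passing style being contrasted with shared-memory threading. Each rank owns a private slice of the work and the partial results are combined explicitly with MPI_Allreduce, so there is no shared state to contend over even when all ranks run on a single SMP node.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each rank sums its own strided slice of 0..N-1; no shared data. */
        const long N = 1000000;
        long local = 0;
        for (long i = rank; i < N; i += size)
            local += i;

        /* Explicit combination of the partial results across ranks. */
        long total = 0;
        MPI_Allreduce(&local, &total, 1, MPI_LONG, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum 0..%ld = %ld across %d ranks\n", N - 1, total, size);

        MPI_Finalize();
        return 0;
    }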