Recent content by tunafish

  1. Was AMD a bad choice for the consoles? *spawn

    The comparison is not the ASP of a single model, but over the entire stack, and largely between the 6000 series and the 7000 series. It seems like AMD sold comparatively more 7800 and 7900 models relative to the 7600 and 7700 than it sold 6800 and 6900 relative to the 6600 and 6700. (Which actually makes some sense, because...
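
    A tiny worked example of the mix effect being described: the stack-wide ASP is just a sales-weighted average, so it rises when the mix shifts toward the higher tiers even if individual prices barely move. The prices and mix fractions below are rough placeholders, not data from the post.

    ```python
    # Hypothetical prices (roughly launch-MSRP-shaped) and invented sales mixes,
    # purely to illustrate how a mix shift moves the stack-wide ASP.
    def asp(prices, mix):
        return sum(p * m for p, m in zip(prices, mix))

    prices_6000 = [330, 480, 580, 1000]            # x600 / x700 / x800 / x900 tiers
    prices_7000 = [270, 450, 500, 950]

    mix_low_end_heavy  = [0.45, 0.30, 0.15, 0.10]  # assumed 6000-series mix
    mix_high_end_heavy = [0.25, 0.25, 0.30, 0.20]  # assumed 7000-series mix

    print(asp(prices_6000, mix_low_end_heavy))     # ~480: lower stack-wide ASP
    print(asp(prices_7000, mix_high_end_heavy))    # ~520: higher, despite similar prices
    ```
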
  2. Re-litigating Radeon market performance *spawn

    There exist AMD slides, shown to AIB vendors well before release, that claim RDNA3 is a 3 GHz+ architecture. Then, once they had actual physical samples and knew what the chips were capable of, that messaging went away.
  3. Re-litigating Radeon market performance *spawn

    AMD wanted to do a major laptop push with both RDNA2 and RDNA3, and, well, crickets. GeForce is not just plain better in a limited power envelope, it also has such massive brand power that laptop makers don't want to tie their expensive products to a brand the market feels is inferior...
  4. Predict: Next gen console tech (10th generation edition) [2028+]

    I can't help but think that a new console in 2025 or 26 is a mistake. I really don't think that there is any "there" there for improving console performance on raster loads. Current gen consoles are good enough that you can make the kind of game you want to make, and have no horrible glaring...
  5. RDNA4

    I would be genuinely surprised. By every rumor it will be faster than a 7800 XT, so I think they can sell N48 as x800 and x700 without any kind of issues. I'm just hoping they have the sense not to use x900.
  6. RDNA4

    To be clear, AMD usually has like a month from announcement to availability.
  7. AMD Execution Thread [2024]

    What role do you see local AI inference having? The way I see it, all the interesting near-future AI products will run on models that are way too big to fit on a smartphone or a laptop. They will run on huge inference server farms instead. This also makes more economic sense, because batching...
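
    One common form of the batching argument (not necessarily the exact one the truncated post goes on to make): LLM decode is roughly memory-bandwidth bound, so a single pass over the weights can serve a whole batch of requests. All the numbers in this sketch are assumptions for illustration.

    ```python
    # Back-of-the-envelope throughput of bandwidth-bound decode at different batch
    # sizes. Model size and bandwidth are assumed values; KV-cache traffic and
    # whether the batch actually fits in memory are ignored for simplicity.
    weight_bytes  = 70e9 * 2      # e.g. a 70B-parameter model in fp16 (assumed)
    hbm_bandwidth = 3.35e12       # bytes/s, roughly an H100-class part (assumed)

    step_time = weight_bytes / hbm_bandwidth   # one full weight read per token step
    for batch in (1, 8, 64):
        tokens_per_s = batch / step_time       # the same weight read is amortized
        print(batch, round(tokens_per_s))      # throughput scales ~linearly with batch
    ```
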
  8. RDNA4

    Not arguing for any specific performance level of N48, but I'd like to make two observations: one of the more counterintuitive findings from statistics is that you should always expect reversion to the mean. If two very tall people have kids, you should expect the kids to be shorter than them...
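
    A quick simulation of the reversion-to-the-mean point, using a single made-up "parent height" variable and an assumed parent-child correlation of 0.5; all numbers are illustrative only.

    ```python
    # Toy regression-to-the-mean demo: parent and child heights share a mean and
    # standard deviation but are only partially correlated (rho < 1).
    import numpy as np

    rng = np.random.default_rng(0)
    mean, sd, rho, n = 178.0, 7.0, 0.5, 100_000

    parent = rng.normal(mean, sd, n)
    # Constructed so that corr(parent, child) == rho and both have the same variance.
    child = mean + rho * (parent - mean) + rng.normal(0, sd * np.sqrt(1 - rho**2), n)

    tall = parent > mean + 2 * sd        # "very tall" parents
    print(parent[tall].mean())           # ~195 cm
    print(child[tall].mean())            # ~186 cm: pulled back toward the mean
    ```
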
  9. GTC 2024

    Makes sense for an AI chip. I wonder if nV will make a separate server GPU product line for FP64, or if they will just cede the market to AMD. It's a much smaller market than AI, but it's quite prestigious and sees a lot of software investment.
  10. NVIDIA discussion [2024]

    What? No. The rules, as defined in antitrust law, basically say that when you have market power (also as defined in antitrust law), you can no longer do things like decide who you sell to based on what benefits you; instead you have to start treating all customers the same. There's...
  11. NVIDIA discussion [2024]

    Depends; I don't know how large a market share Nintendo has and whether that is considered comparable. If a game console maker wanted to do orders from both AMD and some other source, and AMD punished them for this, they'd quite possibly get sued and lose. Again, a dominant market position is...
  12. NVIDIA discussion [2024]

    The difference is that Nvidia is probably dominant enough in their market that them doing it is illegal. Being a monopolist/having market power is not in itself wrong; it just means that you have to play under different rules. Rewarding brand loyalty (or punishing disloyalty, same thing) is...
  13. Speculation and Rumors: Nvidia Blackwell ...

    Note that you cannot harvest every defective die even if every part of the die is redundant. Faults don't just mean "this transistor doesn't work"; there are plenty of potential faults that trash the entire chip even if the part they occur in isn't important. The canonical example is a direct short...
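
    A back-of-the-envelope yield model to make that concrete; the defect rate, the share of "fatal" faults, and the whole two-way taxonomy are assumptions made up for this sketch, not figures from the post.

    ```python
    # Toy harvesting model: each die gets a Poisson number of defects, and each
    # defect is independently either confined to a redundant block (salvageable)
    # or chip-killing (e.g. a direct short across power rails).
    import numpy as np

    rng = np.random.default_rng(0)
    n_dies  = 1_000_000
    defects = rng.poisson(lam=0.8, size=n_dies)   # assumed average defects per die
    p_fatal = 0.2                                  # assumed share of chip-killing faults

    fatal = rng.binomial(defects, p_fatal)         # fatal defects per die

    perfect     = defects == 0
    harvestable = (defects > 0) & (fatal == 0)     # damage limited to redundant blocks
    dead        = fatal > 0                        # unsalvageable no matter the redundancy

    print(perfect.mean(), harvestable.mean(), dead.mean())
    ```
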
  14. AMD Execution Thread [2023]

    Dispatching work is easy. The problem with treating multiple pieces of silicon as a single GPU is memory coherency: when your program touches the same value in two different places, how do you make sure both see the same thing? How would you do coherency? I would expect there...
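
    A toy sketch of the coherency problem being described: two "dies" privately cache the same address, and with no invalidation traffic between them one keeps reading a stale value. Purely illustrative, not any real GPU protocol.

    ```python
    # Two dies share a backing store but keep private caches; writes are not
    # propagated to the peer's cache, so reads can return stale data.
    class Die:
        def __init__(self, memory):
            self.memory = memory      # shared backing store
            self.cache = {}           # private cache: addr -> value

        def read(self, addr):
            if addr not in self.cache:
                self.cache[addr] = self.memory[addr]
            return self.cache[addr]   # may be stale

        def write(self, addr, value):
            self.cache[addr] = value
            self.memory[addr] = value # write-through, but nobody invalidates the peer

    memory = {0x10: 1}
    die_a, die_b = Die(memory), Die(memory)

    print(die_b.read(0x10))   # 1
    die_a.write(0x10, 2)
    print(die_b.read(0x10))   # still 1: die_b never saw the update
    ```
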
  15. AMD Execution Thread [2023]

    Yes, just repeating myself here: right now in inference, the platform people are using is PyTorch with vLLM, running on Nvidia hardware. That's what AMD needs to be good at to compete, and other platforms and workloads just matter a lot less. I think that MI300X will do well enough...
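
    For reference, a minimal sketch of the kind of PyTorch + vLLM serving flow being referred to; the model name and sampling settings are just placeholders, and this of course needs vLLM installed on supported hardware.

    ```python
    # Minimal vLLM offline-inference example: load a model and generate completions
    # for a batch of prompts. Model choice and parameters are illustrative only.
    from vllm import LLM, SamplingParams

    llm = LLM(model="facebook/opt-125m")                       # placeholder model
    params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

    prompts = ["The future of GPU inference is"]
    for output in llm.generate(prompts, params):
        print(output.outputs[0].text)
    ```
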