I read that as the reverse, the data fabric is half of the external DDR clock.

Edit: IIRC The Stilt had posted his Ryzen testing on AnandTech, indicating the fabric is running at twice the memory clock. AMD seemed to have implied before that the data fabric is capable of moving 32B/clk. So the theoretical bandwidth of the data path should be at least twice the data rate of the DRAM, assuming the IMC itself is just one node. I am curious how the "inter-core bandwidth" is tested in this case.
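For what it's worth, the raw arithmetic looks like this. The MEMCLK value is just an assumed example for DDR4-2666, and the 32B/clk figure is the one mentioned above; whether the fabric runs at 1x or 2x MEMCLK is exactly the open question:

```python
# Back-of-envelope fabric vs DRAM bandwidth, under assumed example numbers:
# DDR4-2666 (MEMCLK = 1333 MHz), one 64-bit DRAM channel, 32 B moved per fabric clock.

MEMCLK_HZ = 1333e6            # memory clock
DDR_RATE = 2 * MEMCLK_HZ      # DDR: two transfers per memory clock

dram_bw = DDR_RATE * 8                     # one 64-bit (8 B) channel
fabric_bw_1x = MEMCLK_HZ * 32              # 32 B/clk with fabric at MEMCLK
fabric_bw_2x = 2 * MEMCLK_HZ * 32          # 32 B/clk with fabric at 2x MEMCLK

print(f"DRAM channel:       {dram_bw / 1e9:.1f} GB/s")
print(f"Fabric @ MEMCLK:    {fabric_bw_1x / 1e9:.1f} GB/s")
print(f"Fabric @ 2x MEMCLK: {fabric_bw_2x / 1e9:.1f} GB/s")
```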
It seems this is caused by AMD packing SMT threads into its ACPI CPU core IDs, since the Linux kernel got a patch earlier to fix the SMT topology for the exact same situation. Why AMD did not push the driver patches early enough to be ready at launch is beyond me, though.
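In case anyone wants to check what the kernel currently reports on their box, this reads the standard Linux sysfs topology files (nothing Ryzen-specific, just a generic sketch):

```python
# Group logical CPUs by their reported SMT siblings via the standard sysfs interface.
import glob

sibling_groups = set()
for path in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/topology/thread_siblings_list"):
    with open(path) as f:
        sibling_groups.add(f.read().strip())   # e.g. "0-1" or "0,8" per physical core

print(f"{len(sibling_groups)} physical cores reported:")
for group in sorted(sibling_groups):
    print("  logical CPUs", group)
```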
I believe AMD said in an interview that Infinity could route around congested links. That bandwidth figure would imply a two-node mesh or no routing capability.

Coincidence that a 32B link cut in half, or alternating directions every cycle, is that close to the reported inter-CCX link?
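The arithmetic I mean (again using DDR4-2666's MEMCLK as an assumed stand-in for the fabric clock; the measured inter-CCX figure comes from third-party testing, not from this):

```python
# A 32 B/clk link at an assumed fabric clock, and the same link cut in half
# (half width, or alternating direction each cycle).

FABRIC_CLK_HZ = 1333e6        # example: DDR4-2666 MEMCLK
LINK_BYTES_PER_CLK = 32

full_link = FABRIC_CLK_HZ * LINK_BYTES_PER_CLK   # ~42.7 GB/s
half_link = full_link / 2                        # ~21.3 GB/s

print(f"full-width link: {full_link / 1e9:.1f} GB/s")
print(f"halved link:     {half_link / 1e9:.1f} GB/s")
```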
There are two kinds of enthusiast gamers who own top of the line GPUs and CPUs. A) Gamers who play competitively on 120Hz/144Hz gaming displays @ 1080p using low/medium settings. Ryzen is not a good choice for them. B) Gamers who play at maximum detail settings at 1440p or 4K. Ryzen frame rate differs from 7700K by 1-2 fps in this scenario. Ryzen might be the best CPU for some of these gamers. Many gamers use their computer for other purposes, and 8-core might save lots of time. Examples: editing/rendering their gaming videos with After Effects, compiling Unreal Engine / Unity source code for their own mod/indie projects, etc.

Because for a pure gamer, the 7700K is $150 less than the 1800X and trounces it in gaming. So in that respect, yes, the 1800X doesn't appear to be a very good investment for an enthusiast gamer looking for the highest fps in current games.
Completely agree and it's something that many of the reviews simply ignored. Thankfully there are many gamers who do understand and won't simply base their decision for their upgrade on the numbers alone. Was just pointing out that many will see it that way.

There are two kinds of enthusiast gamers who own top of the line GPUs and CPUs. […]
That $150 price difference is obviously relevant for both A and B style gamers. But I wouldn't say "trounces in gaming" when the downside mostly applies to a small percentage of competitive players. Only a small minority of players own a 120Hz/144Hz display.
There are two kinds of enthusiast gamers who own top of the line GPUs and CPUs. A) Gamers who play competitively on 120Hz/144Hz gaming displays @ 1080p using low/medium settings. Ryzen is not a good choice for them.[…]
On console, given that lots of big budget games already distribute CPU workloads across 6 and/or 7 console cores, is the hurdle to taking advantage of similar amounts of physical cores on PC not that big?

There are already some games that show noticeable gains on 6-core and 8-core i7 over higher clocked 4-core i7. But most current gen games still favor a higher clocked quad. For current gen games, a 4.2 GHz quad is (and will likely continue to be) a slightly better choice than a 3.6 GHz 8-core. But the 8-core will of course be more future proof. Cheaper 6-core and 8-core chips mean that more gamers will have them. Intel's Coffee Lake also has 6-core mainstream i7 models. More incentive for developers to optimize their engines for larger parallelism. DX12 and Vulkan also help.
Lower clocked 8-core is significantly more power efficient than higher clocked 4-core. For example: 8-core 2.1 GHz Xeon D is 45W, while 4-core 4.0 GHz Skylake i7 is 91W. Both have similar theoretical peak multi-core throughput. I would expect that next gen consoles (PS5/XB4) will have at least 8 cores, and possibly more. AMD also has SMT now, so 16+ threads is highly probable. Cores will also certainly be faster (higher clock rate and/or higher IPC). I would expect 8-core PC CPUs to become more important for gaming when next console generation launches.
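Rough arithmetic behind both comparisons, if it helps. The parallel fractions are made-up illustrative values, and "GHz-cores" assumes equal IPC across chips, so treat it as a sketch rather than measured data:

```python
# Amdahl's-law style comparison of a 4.2 GHz quad vs a 3.6 GHz 8-core,
# plus peak-throughput-per-watt for the Xeon D vs Skylake i7 example.
# Assumes identical IPC and ignores SMT, turbo and memory effects.

def relative_perf(cores, clock_ghz, parallel_fraction):
    """Throughput estimate normalised to a 1 GHz single core."""
    serial = 1.0 - parallel_fraction
    return clock_ghz / (serial + parallel_fraction / cores)

for p in (0.5, 0.7, 0.9):
    print(f"parallel fraction {p:.1f}: "
          f"4c/4.2GHz = {relative_perf(4, 4.2, p):5.1f}, "
          f"8c/3.6GHz = {relative_perf(8, 3.6, p):5.1f}")

for name, cores, ghz, tdp_w in (("8c 2.1GHz Xeon D, 45W", 8, 2.1, 45),
                                ("4c 4.0GHz Skylake i7, 91W", 4, 4.0, 91)):
    ghz_cores = cores * ghz
    print(f"{name}: {ghz_cores:.1f} GHz-cores, {ghz_cores / tdp_w:.2f} GHz-cores/W")
```

With those made-up fractions the higher clocked quad wins when most of the frame is serial work, and the 8-core pulls ahead once the engine parallelises well, which is basically the argument above in numbers.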
There are two kinds of enthusiast gamers who own top of the line GPUs and CPUs. […]
That $150 price difference is obviously relevant for both A and B style gamers. But I wouldn't say "trounces in gaming" when the downside mostly applies to a small percentage of competitive players. Only a small minority of players own a 120Hz/144Hz display and choose to play at reduced quality settings at a reduced resolution (and own a top of the line GPU).
60Hz + reprojection seems to be almost just as good IMO.

90Hz is obviously quite important for VR. That's the minimum I'd be aiming for from a new CPU these days.
I'd have to agree here as many gamers don't usually spend $350+ on a CPU, sticking with decently performing i5's. The R5 is going to be a fantastic sweet spot with 6C/12T imo.

As for someone using their PC mainly for gaming, I'd say waiting for Ryzen 5 would be their best bet. If anything, because Intel should be forced to lower their i5 and i7 prices when that happens.
60Hz + reprojection seems to be almost just as good IMO.
Given the choice, clearly native 90hz is better. And PC headsets don't support 60Hz anyway.
I just don't notice much difference in PSVR titles between 60Hz reprojected to 120Hz and native 90Hz ones, if any at all.
I don't know how much reprojection costs on the GPU, but the performance delta between 60 and 90Hz is pretty big.
I think next gen will be 3D 90Hz with eye tracking, but who knows.

Not sure this has to do with Ryzen. On PC, you'll want native 90fps. 45 FPS re-projected to 90 is an option, but it's rubbish by comparison. Maybe next gen headsets will be 120Hz, giving a 60Hz re-projection option, but at that point 120Hz native will be better, so you'll still want a CPU capable of that. A 60fps target for VR on PC is pointless today, or in the future, unless it's as a fallback option.
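To put the refresh rates in this discussion into frame-time budgets (plain arithmetic, nothing assumed beyond the rates mentioned above):

```python
# Frame-time budget per rendered frame for the targets discussed above.
# Reprojection halves the rendered frame rate, so the budget doubles.

targets = {
    "native 60 Hz": 60,
    "native 90 Hz": 90,
    "native 120 Hz": 120,
    "45 fps reprojected to 90 Hz": 45,
    "60 fps reprojected to 120 Hz": 60,
}

for name, fps in targets.items():
    print(f"{name}: {1000.0 / fps:.1f} ms per rendered frame")
```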