"Bandwidth being only 400 GB/s seems odd considering the One X is already at 326 GB/s. Would such a PS5 memory system really be that much cheaper than an expanded One X memory system?"

The RTX 2080 only has 448 GB/s of VRAM bandwidth. Will 400 GB/s become a bottleneck for a console GPU that is not as powerful as an RTX 2080?
"The RTX 2080 only has 448 GB/s of VRAM bandwidth. Will 400 GB/s become a bottleneck for a console GPU that is not as powerful as an RTX 2080?"

That's purely for the video card, however. It's not a shared pool and doesn't suffer from the bandwidth issues of the CPU and GPU accessing memory at the same time. (Not even sure if this is a thing next gen, tbh.) So arguably a console needs a lot more bandwidth to feed both the CPU and GPU sufficiently in worst-case performance scenarios.
"The next-gen game consoles have extremely fast 2–4 GB/s SSDs. Will we see quite different game design for exclusive games? In the next few years we may still see a lot of PCs with only 200–300 MB/s HDDs; will that affect game design for PC games or cross-platform games?"

Eh, mechanical drives are all but gone from PC purchases. They exist as outliers in the market, mostly in desktops or as secondary drives. The majority of SSDs deliver around 500 MB/s. But over the next few years, with NVMe M.2 drives hitting $100 for 3 GB/s, 1 TB models, they will take over the gaming sector.
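For a rough sense of the gap being argued about here, a minimal back-of-the-envelope sketch in Python; the 8 GB asset budget is an illustrative assumption, and the drive speeds are taken from the figures mentioned above rather than from any measured hardware.

```python
# Back-of-the-envelope: how long streaming a level's worth of assets takes at
# HDD, SATA SSD, and NVMe-class throughput. The 8 GB budget is made up for
# illustration; the speeds mirror the figures quoted in the thread.

ASSET_BUDGET_GB = 8.0

drives_mb_per_s = {
    "HDD (~250 MB/s)":      250,
    "SATA SSD (~500 MB/s)": 500,
    "NVMe SSD (~3000 MB/s)": 3000,
}

for name, mb_per_s in drives_mb_per_s.items():
    seconds = ASSET_BUDGET_GB * 1024 / mb_per_s
    print(f"{name:22s} -> {seconds:5.1f} s to load {ASSET_BUDGET_GB:.0f} GB")
```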
It would be even more impressive if they aimed for less than 4K, with checkerboard 4K as the maximum. Chasing ridiculous resolutions will hold back next-gen, and I don't like that, even though I'll have to upgrade to a modern screen at some point. Even if I owned a 4K screen, I'd take innovative worlds over native resolution all day, because I don't think it's that important.
The good thing about consoles is all the optimizations and efficient use of the hardware. Brute-forcing 4K, or worse 8K, is just stupid. Not everything needs to be rendered at native resolution, nor on every frame.
"4K is almost pointless for native rendering, but 8K rendering would be the dumbest choice ever. Something sub-4K will still be the best option for performance, for quality, and to match the TVs and screens people use."

I'm only questioning your statement about what screens people use. Consider that the PS5 won't be launched until the end of 2020 and will remain the base platform for much of that decade. 4K TV sets have already dominated retail for years here, and 4K is now shifting from being a step up from HD to being a step down from 8K.
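To put numbers on the resolution argument, a small sketch comparing shading work by pixel count; treating checkerboard 4K as shading half the samples of a full 2160p frame is a simplification for illustration, not a description of any particular engine's CBR implementation.

```python
# Rough pixel-count comparison behind the "native 4K vs. checkerboard vs. 8K"
# argument. Shading cost is assumed to scale with pixel count, which is a
# simplification (it ignores reconstruction passes, temporal reuse, etc.).

resolutions = {
    "1080p":      1920 * 1080,
    "1440p":      2560 * 1440,
    "1800p":      3200 * 1800,
    "4K (2160p)": 3840 * 2160,
    "8K (4320p)": 7680 * 4320,
}
resolutions["Checkerboard 4K"] = resolutions["4K (2160p)"] // 2

base = resolutions["1080p"]
for name, pixels in sorted(resolutions.items(), key=lambda kv: kv[1]):
    print(f"{name:16s} {pixels/1e6:6.2f} MPix  ({pixels/base:4.1f}x the shading work of 1080p)")
```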
Some of us already guessed at the likely size of the GPU, and it worked out to something like 52 active CUs if you assume a ~360 mm² die.
I think some argued that you couldn't use the 7nm Vega GPU as a 1-to-1 comparison to estimate the approximate CU size, because a bunch of hardware features would be stripped out on a Navi/console part. But if they are adding dedicated RT hardware to the console GPUs, then there may not be that much difference.
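For anyone who wants to play with the die-size guess, a toy sketch along the same lines; the per-CU area, the non-CU overhead, and the number of harvested CUs are all made-up illustrative values, not leaked or official figures. With these particular numbers it happens to land near the ~52-active-CU ballpark mentioned above.

```python
# Toy estimate of how many CUs might fit in a given die area. Every number
# below is an assumption for illustration only.

die_area_mm2        = 360.0   # assumed total die size from the post above
non_cu_overhead_mm2 = 165.0   # assumed: CPU cores, memory controllers, I/O, media blocks...
area_per_cu_mm2     = 3.5     # assumed effective 7nm area per CU including local caches

physical_cus = int((die_area_mm2 - non_cu_overhead_mm2) / area_per_cu_mm2)
# Consoles typically disable a few CUs to improve yields (e.g. PS4: 20 built, 18 active).
active_cus = physical_cus - 4

print(f"Physical CUs: {physical_cus}, active after yield harvesting: {active_cus}")
```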
Wow! I was a doubter, but this should confirm it.
"Naughty Dog lightning artist"

So torn. On the one hand, I have to 'like' it if the ND dev is saying there's hardware RT. On the other hand, he's a lighting artist, not a lightning artist, and people keep making that mistake and it does my head in!!!
"The RTX 2080 only has 448 GB/s of VRAM bandwidth. Will 400 GB/s become a bottleneck for a console GPU that is not as powerful as an RTX 2080?"

That assumes the 2080 is optimal for BW. What if the 2080 is actually lacking BW and could do with way more to not bottleneck on some workloads? If you look at BW per flop over the years, it's been shrinking:
"The RTX 2080 only has 448 GB/s of VRAM bandwidth. Will 400 GB/s become a bottleneck for a console GPU that is not as powerful as an RTX 2080?"

But a memory bank can't be accessed simultaneously by the CPU and GPU. There will be memory contention, and the bandwidth left for the GPU will drop by disproportionately more than the amount the CPU actually consumes.
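A crude illustration of that contention point; the CPU traffic figure and the penalty factor are invented for illustration, not measured GDDR6 behaviour.

```python
# Illustrative model: in a unified pool the GPU doesn't just lose the bandwidth
# the CPU consumes; interleaved CPU traffic also costs extra (page misses,
# scheduling overhead). The penalty factor is a made-up number.

TOTAL_BW_GBPS      = 400.0  # unified pool bandwidth discussed in the thread
CPU_TRAFFIC_GBPS   = 30.0   # assumed CPU demand
CONTENTION_PENALTY = 2.0    # assumed: each GB/s of CPU traffic costs the GPU ~2 GB/s

naive_gpu_bw     = TOTAL_BW_GBPS - CPU_TRAFFIC_GBPS
contended_gpu_bw = TOTAL_BW_GBPS - CPU_TRAFFIC_GBPS * CONTENTION_PENALTY

print(f"Naive split:     GPU gets {naive_gpu_bw:.0f} GB/s")
print(f"With contention: GPU gets {contended_gpu_bw:.0f} GB/s")
```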
"Wow! I was a doubter, but this should confirm it."

Or not.
"That assumes the 2080 is optimal for BW. What if the 2080 is actually lacking BW and could do with way more to not bottleneck on some workloads? If you look at BW per flop over the years, it's been shrinking:"

True, but if the trade-off is lower BW for less cost, perhaps the projected savings are being used to subsidize the SSD or the CPU/GPU budget.
"Memory bandwidth has always been a challenge for video cards, and that challenge only continues to get harder. Thanks to the mechanics of Moore’s Law, GPU transistor counts – and therefore the quantities of various cores – is growing at a rapid pace. Meanwhile DRAM, whose bandwidth is not subject to the same laws, has grown at a much smaller pace."
[Attachment 3018: chart of GPU memory bandwidth per FLOP declining across generations]
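The same trend can be eyeballed from approximate spec-sheet numbers; peak FP32 throughput and bandwidth vary a bit with boost clocks, so treat the ratios below as ballpark only.

```python
# Bandwidth per FLOP across a few GeForce generations, using approximate
# public spec-sheet figures (peak FP32 TFLOPS, memory bandwidth in GB/s).

gpus = [
    ("GTX 680  (2012)", 3.1,  192),
    ("GTX 980  (2014)", 4.6,  224),
    ("GTX 1080 (2016)", 8.9,  320),
    ("RTX 2080 (2018)", 10.1, 448),
]

for name, tflops, bw_gbps in gpus:
    bytes_per_flop = bw_gbps / (tflops * 1000)  # GB/s per GFLOP/s == bytes per FLOP
    print(f"{name}: {bytes_per_flop:.3f} bytes of bandwidth per FLOP")
```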
Considering HBM is supposed to provide ludicrous amounts of bandwidth and solve exactly this issue, an implementation that's only so-so would definitely be a disappointment.
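For context on why HBM gets called "ludicrous", a quick sketch of HBM2 aggregate bandwidth from its 1024-bit-per-stack interface; the two configurations are Vega 64-like and Radeon VII-like examples, not a claim about any console design.

```python
# HBM2 aggregate bandwidth: each stack has a 1024-bit interface, so total
# bandwidth scales with stack count and per-pin data rate.

def hbm2_bandwidth_gbps(stacks: int, pin_speed_gbps: float) -> float:
    """Aggregate bandwidth in GB/s for HBM2 with a 1024-bit bus per stack."""
    bits_per_second = stacks * 1024 * pin_speed_gbps * 1e9
    return bits_per_second / 8 / 1e9  # bits/s -> GB/s

print(f"2 stacks @ 1.89 Gbps: {hbm2_bandwidth_gbps(2, 1.89):6.1f} GB/s  (Vega 64 class)")
print(f"4 stacks @ 2.00 Gbps: {hbm2_bandwidth_gbps(4, 2.00):6.1f} GB/s  (Radeon VII class)")
```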