They may well be on track now, but that can change quickly. We have decades of silicon wafer production experience, yet that never seems to prevent initial yield problems with newer process changes. Moving from silicon to a new material will be a huge challenge. It's not just about layout and design, but new production equipment and new material suppliers. The entire chain from R&D to production and testing will have to change.

Samsung says it is on track for 7nm and is seeing no problems for 5nm.
Is this removing the command processor, or just supplanting its role for initiating wavefront launches?

If the CPU and the GPU are going to unify, I believe a good next step would be to remove the GPU command processor and let the CPU cores directly spawn the waves/warps/etc. to the array of compute units. Obviously this needs shared caches between the CPU and GPU, full coherence, and efficient fine-grained synchronization. Intel is almost there already with Broadwell.
From a conceptual point of view, the CPU would be at least as suited.

Intel was (a long time ago) performing vertex shaders on the CPU, and the CPU would be well suited to the command processor's tasks too. This would obviously allow us to do crazy stuff that is not possible with current designs, and it would at the same time sidestep all the IO/security problems.
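Not any real driver or vendor API, just to make the shape of the idea concrete: a minimal C++ sketch (every type and name here is invented) of a user-mode dispatch queue sitting in shared, coherent memory. A CPU thread pushes small "dispatch packets" describing wave/workgroup launches, and a consumer thread stands in for the GPU front end that would hand them to the compute units; the only hand-off between the two is a pair of atomics, which is roughly the kind of fine-grained, cache-coherent synchronization assumed above.

```cpp
// Hypothetical sketch only: CPU threads "launching" compute work by writing
// dispatch packets into a shared ring buffer guarded by atomics, instead of
// going through a command processor. All names are invented for illustration.
#include <array>
#include <atomic>
#include <cstdint>
#include <cstdio>
#include <thread>

struct DispatchPacket {      // what a wave/workgroup launch request might carry
    uint32_t kernel_id;      // which kernel/shader to run
    uint32_t group_count;    // how many workgroups to spawn
    uint64_t args_address;   // where the kernel arguments live in shared memory
};

class WaveQueue {            // single-producer / single-consumer ring buffer
public:
    bool push(const DispatchPacket& p) {
        uint32_t head = head_.load(std::memory_order_relaxed);
        uint32_t next = (head + 1) % kCapacity;
        if (next == tail_.load(std::memory_order_acquire)) return false;  // full
        slots_[head] = p;
        head_.store(next, std::memory_order_release);  // publish to consumer
        return true;
    }
    bool pop(DispatchPacket& out) {
        uint32_t tail = tail_.load(std::memory_order_relaxed);
        if (tail == head_.load(std::memory_order_acquire)) return false;  // empty
        out = slots_[tail];
        tail_.store((tail + 1) % kCapacity, std::memory_order_release);
        return true;
    }
private:
    static constexpr uint32_t kCapacity = 64;
    std::array<DispatchPacket, kCapacity> slots_{};
    std::atomic<uint32_t> head_{0};
    std::atomic<uint32_t> tail_{0};
};

int main() {
    WaveQueue queue;
    std::atomic<bool> producer_done{false};

    // Stand-in for the GPU front end / compute units: drain packets as they appear.
    std::thread gpu_front_end([&] {
        DispatchPacket p;
        for (;;) {
            if (queue.pop(p)) {
                std::printf("launch kernel %u (%u groups)\n", p.kernel_id, p.group_count);
            } else if (producer_done.load()) {
                while (queue.pop(p))   // final drain once the CPU side is finished
                    std::printf("launch kernel %u (%u groups)\n", p.kernel_id, p.group_count);
                break;
            }
        }
    });

    // Stand-in for a CPU core: spawn work directly, no driver round trip.
    for (uint32_t i = 0; i < 8; ++i)
        while (!queue.push({i, 16, 0})) { /* queue full: spin */ }

    producer_done.store(true);
    gpu_front_end.join();
    return 0;
}
```

In a real unified design the consumer would of course be hardware rather than a thread, and the security/priority questions (who is allowed to write which queue) are exactly the parts this sketch hand-waves away.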
going to be potentially up to 10x Maxwell's performance
Just to qualify that: it's going to be up to 10x for "deep learning" - http://i.imgur.com/4wFctzF.png - and he qualified that with "CEO math" beforehand: http://i.imgur.com/25pz1fx.png. 4x the FP16 (I'm assuming double the FP16 rate, like in X1, plus double the shaders in total, making 4x) and double the interconnected GPUs (8 GPUs vs 4), with a bit of a clock speed increase, makes about 10x. For games that'll be a tad optimistic, shall we say.
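Restating that "CEO math" as plain arithmetic; every factor below is an assumption from the post above, not a confirmed spec.

```cpp
// Back-of-the-envelope "CEO math"; all factors are assumptions, not specs.
#include <cstdio>

int main() {
    double fp16_rate    = 2.0;   // assumed 2x FP16 throughput per shader (as in X1)
    double shader_count = 2.0;   // assumed roughly double the shaders -> "4x FP16"
    double gpus_linked  = 2.0;   // 8 interconnected GPUs vs 4
    double clock_bump   = 1.25;  // modest clock increase to round things out

    double total = fp16_rate * shader_count * gpus_linked * clock_bump;
    std::printf("claimed speedup ~= %.1fx (for deep learning, not games)\n", total);
    return 0;
}
```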
Well, wouldn't in theory a powerful GPU (10-15 TFLOPS?) that allowed real-time GI (voxel-based, for example) to be used in an unrestricted way make development easier/cheaper? Of course, as someone has said, if next gen we jump again in resolution, the extra power will be for nothing.

I don't think next-gen gameplay is a hardware issue anymore. In fact, I think the only reason for those powerful machines is VR. I feel that once we get to 5 TFLOPS+, the quality of the game studios and the size of the budgets start having more of an impact than the hardware power. I could be talking crap though, probably am.
For VR, perhaps; not sure I'd make that distinction for non-VR.

With VR on the horizon and pretty much all involved parties agreeing that more than 1080p resolution is ideal, I can certainly see this happening.
... company A packs in goggles for the all-in-one solution at the expense of less powerful base hardware. Company B ships more powerful hardware standalone. Company B will have the edge, of course.
Well, since I was right about the 8th generation's 8 gigs of RAM:
https://forum.beyond3d.com/threads/...ation-console-tech.29254/page-369#post-752850
I now designate myself MASTER OF THE RAM PREDICTOR
128GB of RAM for the PS5!
I thus hereby decree it.
It took another six months before that capacity clawed its way back into the discussion. It was considered a nice thing to have on the wish list.

Initially on B3D even 4GB was rather scoffed at.
There are diminishing returns. I suppose someone could imagine what they would do with hundreds of gigs of RAM, but I think at that point we would need to consider improving the IO of the platform to match.

But back to next gen: even 16GB seems like a lot to ask. Then again, Titan X just dropped with 12GB of GDDR5... and they're talking about 16GB of stacked memory.
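To put a rough number on "improving the IO to match": here is how long it takes merely to fill that much RAM from storage. The bandwidth figures are round illustrative numbers, not any particular console's spec.

```cpp
// Time just to fill a given amount of RAM from storage.
// Bandwidths are round illustrative figures, not any real console's spec.
#include <cstdio>

int main() {
    const double ram_sizes_gb[] = {16.0, 128.0};
    const struct { const char* name; double gb_per_s; } devices[] = {
        {"HDD (~100 MB/s)",      0.1},
        {"SATA SSD (~500 MB/s)", 0.5},
        {"NVMe SSD (~5 GB/s)",   5.0},
    };
    for (double ram_gb : ram_sizes_gb)
        for (const auto& dev : devices)
            std::printf("fill %6.0f GB via %-22s : %7.1f s\n",
                        ram_gb, dev.name, ram_gb / dev.gb_per_s);
    return 0;
}
```

Even with generous streaming and compression, hundreds of gigabytes of RAM only pays off if the storage and bus behind it can actually keep it fed.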