Which context switching patent is this? Sony's or AMD's?
More likely Sony's...
http://appft.uspto.gov/netacgi/nph-...68".PGNR.&OS=DN/20120320068&RS=DN/20120320068
Digital Foundry provides a much better record of the GDC presentation, including the full version of a quote that's been confusing some people:
They're talking about handling threads, or compute jobs, and the GPU's ability to run compute in parallel with graphics, which means using some CUs for compute and the rest for graphics. There's no running both across the full GPU simultaneously. Whether that's the old 14+4 idea or the scheduler managing jobs across CU clusters is unknown at this point, but I'd put money on the latter.
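To make the two readings concrete, here's a toy sketch (Python, with made-up CU counts and job sizes; none of this comes from Sony documentation) of a fixed 14+4 split versus a scheduler that hands compute work to whichever CUs graphics isn't using that frame:

```python
# Toy model of the two interpretations: a fixed 14+4 CU split versus a
# scheduler that gives compute whatever CUs graphics leaves idle.
# All numbers are illustrative only.
TOTAL_CUS = 18  # PS4's rumoured CU count

def static_split(graphics_cus=14, compute_cus=4):
    # The old "14+4" idea: the partition never changes.
    return {"graphics": graphics_cus, "compute": compute_cus,
            "idle": TOTAL_CUS - graphics_cus - compute_cus}

def dynamic_split(graphics_demand, compute_backlog):
    # Scheduler-managed: graphics takes what it needs this frame,
    # compute jobs soak up the leftover CUs.
    graphics = min(graphics_demand, TOTAL_CUS)
    spare = TOTAL_CUS - graphics
    compute = min(spare, compute_backlog)
    return {"graphics": graphics, "compute": compute, "idle": spare - compute}

print(static_split())                                        # 14 graphics, 4 compute
print(dynamic_split(graphics_demand=12, compute_backlog=8))  # compute fills the 6 spare CUs
```

The only point of the sketch is that in the second model the split changes frame to frame; neither version says anything about one CU doing both kinds of work in the same cycle.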
What if the PS4 can run both compute jobs and graphics on all CUs?
Maybe that was the whole customization.
From what I read, everything points at running both at once, which would be an incredible gain if they can pull it off.
The 176 gigabytes of total bandwidth provided by that GDDR5 RAM are much more efficient than the 40 gigabytes a second provided by the standard DDR3 RAM used in most current computer systems.
Lol. Most current PCs sport 40GB/s of system memory now?
Yes.
How about no. 1866MT/s is the highest anything officially supports, and that's under 30GB/s on the 128-bit bus that's standard for PCs. 2133MT/s, the highest JEDEC standard, only gives you 34.1GB/s. Sure, you can get RAM that unofficially clocks higher and CPUs/motherboards that will let you clock that high, but that is definitely not something most current PCs are doing. They could be using an SB-E or similar with a 256-bit memory bus, but that's even less likely. Most current PCs are probably only at 1333 or 1600MT/s.
That part of the article was silly anyway, since it's comparing GDDR5 and DDR3 in totally different bus width configurations. If you use 256-bit DDR3 you can go well beyond 40GB/s with standard RAM, like what MS is purported to be doing with Durango.
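For anyone checking the arithmetic, peak theoretical bandwidth is just transfer rate times bus width. A quick sketch; the GDDR5 effective rate here is inferred from the quoted 176GB/s figure rather than from any official spec:

```python
# Peak theoretical bandwidth = transfer rate (MT/s) * bus width (bytes).
def peak_gbps(mt_per_s, bus_bits):
    return mt_per_s * (bus_bits // 8) / 1000.0  # MB/s -> GB/s

configs = [
    ("DDR3-1333, 128-bit (typical PC)",  1333, 128),
    ("DDR3-1600, 128-bit (typical PC)",  1600, 128),
    ("DDR3-1866, 128-bit",               1866, 128),
    ("DDR3-2133, 128-bit (top JEDEC)",   2133, 128),
    ("DDR3-2133, 256-bit (SB-E style)",  2133, 256),
    ("GDDR5 @ 5500MT/s, 256-bit (PS4)",  5500, 256),
]
for label, mts, bits in configs:
    print(f"{label}: {peak_gbps(mts, bits):.1f} GB/s")
```

That's where the "under 30GB/s", "34.1GB/s" and 176GB/s figures above come from.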
What do you base that speculation on? I thought it was already confirmed 720p @60fps?
PS4 sports roughly 1/4 the raw power of a 7990, so something's got to give. And this early in the generation the console-specific advantages will be minimal at best.
"Resources" could mean quite a lot of things. Just additional cache to store two shader programs (compute and graphics) simultaneously, working on one at a time, would constitute more resources IMO. (I really must read up on APU design at some point!)The author's conclusion is a reach, given the data he is using as justification. Running compute and graphics simultaneously suggests nothing about reserving resources.
That's garbage, quite frankly. For a CU to run graphics code and compute at the same time, it'd effectively have to be twice as big, with twice as many computation units. Can we please apply just a modicum of common sense for once when dealing with rumours and speculation?
Again, how on earth can a processor process two workloads at once with no loss? The only way is to have twice as much logic, so basically 36 CUs with half dedicated to compute and half to graphics. We don't need official documentation from Sony to know that's utter twaddle. Basic understanding of hardware allows us to interpret the rumours and PR releases without succumbing to the allure of unrealistic hopes of magical hardware performance.
If it can run compute without losing any graphics power...
Yes.
The statement was that there were special CU resources for each type of workload. While not impossible, it isn't necessary.
A compute program and a graphics shader are just instructions as far as the CU is concerned. The graphics pipeline is just a client, albeit one with some funky add-ons.
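A minimal sketch of that point, i.e. why "both at once" doesn't require doubled hardware: one set of execution units simply interleaves wavefronts drawn from two queues, so each client gets a share of the cycles rather than a free lunch (the queue contents below are made up):

```python
# One CU's execution units servicing two clients by interleaving their wavefronts.
# Neither workload needs duplicate hardware; they share the available cycles.
from collections import deque

graphics_q = deque(f"gfx_wave_{i}" for i in range(3))
compute_q  = deque(f"cs_wave_{i}"  for i in range(3))

cycle = 0
while graphics_q or compute_q:
    for queue in (graphics_q, compute_q):   # simple round-robin arbitration
        if queue:
            print(f"cycle {cycle}: executing {queue.popleft()}")
            cycle += 1
```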
John Carmack hinted at some intelligent design choice(s) in Orbis.
Mark Cerny spoke about some extensions in a mangled Japanese interview.
Would be nice if someone could tell what exactly they were referring to.
Carmack's statement was deliberately vague and may have been at a higher level concerning the platform and its organization. For Orbis, he postulated how low-level access coupled with a real-time scheduler could be used for a more responsive use of the GPU.
Where did that FXAA creator go? He removed his tech speculation blog post, but I remember he had some ideas of his own in it.
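On that low-level-access-plus-real-time-scheduler idea, here's a rough and entirely hypothetical sketch (not any disclosed Orbis API) of what "more responsive use of the GPU" could mean: latency-critical work jumps the queue instead of waiting behind a frame's worth of graphics submissions.

```python
# Hypothetical priority queue for GPU submissions: lower number = more urgent.
import heapq

submissions = [
    (5, "opaque geometry pass"),
    (5, "shadow maps"),
    (0, "latency-critical compute job"),   # e.g. audio or tracking-style work
    (5, "post-processing"),
]

queue = []
for order, (priority, job) in enumerate(submissions):
    heapq.heappush(queue, (priority, order, job))   # 'order' keeps FIFO within a priority

while queue:
    priority, _, job = heapq.heappop(queue)
    print(f"dispatch (priority {priority}): {job}")
```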