I disagree. With both AMD and NV supporting up to 6 monitors on a single card now, I think we will see a lot of gamers go multi-monitor.
Nvidia supports a max of 4, not 6.
> I disagree. With both AMD and NV supporting up to 6 monitors on a single card now, I think we will see a lot of gamers go multi-monitor.

Is that right?
> The ALU's aren't idle. They are probably processing another draw call at the time.

But doesn't it feel even a little bit wrong, also for a PC developer, to have a whopping 2048 shader units (Tahiti) idling while you render your shadow maps (bottlenecked by fixed-function triangle setup or ROPs)? These 2048 shader units would surely be able to crunch a lot of triangle-setup math. Or to have 16 turbocharged geometry engines (Kepler) idling while you do deferred lighting and post-processing?
> I am not suggesting going the fully sw route. Fixed function is fine for small repeated tasks. But I do not want to have more large-scale fixed-function units either. The goal must be a more programmable/flexible pipeline in the future (not less).

Ah, then we are talking degrees. I am in favor of a more programmable pipeline as well. I just want to see more kinds of fixed-function hardware than is out there today.
> The ALU's aren't idle. They are probably processing another draw call at the time.

But that other draw call is using the same shader (kernel), and so it is bottlenecked by the same fixed-function units. Console GPUs and many currently installed PC GPUs cannot run multiple kernels simultaneously (and where the ability exists, it is very limited at best).
> But that other draw call is using the same shader (kernel), and so it is bottlenecked by the same fixed-function units. Console GPUs and many currently installed PC GPUs cannot run multiple kernels simultaneously (and where the ability exists, it is very limited at best).

I thought that single-kernel limit was for compute only. Anyway, GCN/Cayman *look* like they have 2 separate command buffers. It's about time that we got multiple DX command queues though. This should definitely be in DX12.
The ability to run multiple kernels in parallel was first introduced in Fermi GF-100 (http://www.anandtech.com/show/2849/5). It naturally requires that the kernels do not share resources (other than read-only ones, for example the backbuffer), and the GPU can only take draw calls (kernels) from the command list in the order they are submitted (*). This limits the usability a lot.

Kepler's Hyper-Q improved this situation significantly: Kepler can fetch commands from up to 32 GPU command lists. This however doesn't help with graphics rendering, since the current DX11 multithreading model is based on a single GPU command list (other threads just render to software command buffers that are then submitted to the one GPU command list, one after another).

This is one of the things that I hope is improved in DX12. AMD's heterogeneous system architecture (HSA) slides also talk about running multiple contexts/kernels in parallel and feeding the GPU from multiple command lists, so GPU support will be there from both manufacturers. This is also a great way to reduce compute latency (the user can set a higher priority for the command list that is used for submitting compute work).
http://forums.create.msdn.com/forums/t/106060.aspx
There's even a question mark over whether D3D11.1 will run on W7 (and Vista, presumably). What are the chances of those OSs running D3D12? If that doesn't happen maybe we can just forget about D3D12 entirely.
How's OpenGL shaping up? Is it going to catch up? Sorry for another derail subject.
According to AMD’s Roy Taylor in an interview with Heise Online, DirectX 11 won’t have a successor. Here is the Google-translated version of the German interview, where Roy says that DirectX 12 won’t see the light of day:
> We will put together future game bundles with top games. We believe this is the right way. It is also an important signal for the industry, because the computer industry has benefited for many years from a continuous renewal of the DirectX interface. Each new DirectX has refreshed the industry time and again; new graphics cards need faster processors and more RAM. But there will be no DirectX 12. That’s it. As far as we know there are no plans for DirectX 12. If this is not correct and someone wants to correct me – wonderful.
But no need to get sidetracked with it. Microsoft doesn't have the kind of clout they used to have with developers, that much is certain. AMD are usually first to move over to the latest DirectX, and if Taylor is saying it probably won't exist we can be pretty sure it doesn't exist right now. At best it's a long way off.
It is possible that PS4 is winning more developer favor now, though.
> His comments, and even the use of the term "DX12", demonstrate to me a lack of connection/clue in this case. I wouldn't put much stock into anything he says on the topic...

I'm aware that Taylor talks shit more often than not, but I find it *extremely* hard to believe that he's just going to come out and say DX12 doesn't exist if it does. And if it does exist, it's ridiculous to believe that AMD wouldn't be among the first to know.
Just to be clear, this is not a MS doom thread. It's a wishlist for Direct3D 12. Let's try this again...