I'm aware, but I also would guess that quick time to market is of importance to make sure you get your part of the cake. In relative terms Polaris has only just launched on consumer desktop.
How much competition is in this space already? Does Nvidia have a similar dedicated product? I'm aware, but I also would guess that quick time to market is of importance to make sure you get your part of the cake.
Quite the opposite actually if you read and understand it. Just need to infer what technologies Vega or HPC Zen would bring to those capabilities. Then a little time for the software to mature. Hardware companies and ISVs would likely be developing for upcoming capabilities and not existing hardware. So while the article is talking about a Tonga, Vega with access to system memory could change those capabilities substantially. You don't seem to understand what we are talking about, Dave's article pretty much sums it up.
So Nvidia does have an x86 processor from Intel or AMD working with NVLink to provide direct access to system memory, added full hardware virtualization, and scrapped a lot of their software tools that are no longer needed? And AMD will not be in a niche; it already has a market leader that has overall better capabilities for the time being.
Well if you have 127 friends all doing essentially the same thing you would need 128 models, each on their own VM. If it's just you, go buy yourself a workstation instead of a VM. The whole point of a VM is sharing the hardware among a collection of users. Creating a giant machine that only one user utilizes at a time sort of defeats the purpose. Why the hell would I need 128 instances of that model?
That's it. Thanks.
Ecosystem is most important, so SR-IOV validation, VM software enablement and then application certification (for the "professional" VDI space) are, by and large, more key than the generation of hardware underneath. I'm aware, but I also would guess that quick time to market is of importance to make sure you get your part of the cake.
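For anyone curious what the SR-IOV piece looks like from the host side: on Linux the generic kernel interface is the sriov_numvfs attribute in sysfs. A minimal Python sketch, with the PCI address purely hypothetical and ignoring whatever vendor host-driver tooling would normally wrap this:

import os

# Hypothetical PCI address of an SR-IOV capable GPU; adjust for a real system.
DEV = "/sys/bus/pci/devices/0000:03:00.0"

def enable_vfs(count):
    # sriov_totalvfs reports how many virtual functions the device can expose.
    with open(os.path.join(DEV, "sriov_totalvfs")) as f:
        total = int(f.read())
    if count > total:
        raise ValueError("device only exposes %d VFs" % total)
    # Writing sriov_numvfs instantiates that many virtual functions, each of
    # which can then be assigned to a separate VM by the hypervisor.
    with open(os.path.join(DEV, "sriov_numvfs"), "w") as f:
        f.write(str(count))

enable_vfs(16)  # e.g. one VF per virtual desktop user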
Just so we are clear, this is 16 users per GPU and you can enable 4x FirePro S7150x2's in a 4U server, which would enable up to a max of 128 users per server. In this scenario you're actually more likely to be CPU bound than GPU bound. This is not necessary for the time being, and software can't utilize this right now either. Software will never get scrapped unless hardware is fully compliant with each other, and that we know won't happen in the short term or possibly even the mid term.
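Quick back-of-the-envelope on those numbers, just to show where the 128 comes from (the host CPU figures are made up for illustration):

# Where the 128-users-per-server figure comes from.
users_per_gpu = 16       # SR-IOV users per GPU
gpus_per_card = 2        # the S7150x2 is a dual-GPU board
cards_per_server = 4     # 4x S7150x2 in a 4U chassis

max_users = users_per_gpu * gpus_per_card * cards_per_server
print(max_users)         # 128

# Hypothetical host CPUs: 2 sockets x 16 cores x 2 threads = 64 hardware
# threads shared by 128 users -- which is why you'd hit a CPU wall first.
print(max_users / (2 * 16 * 2))  # 2.0 users per hardware thread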
Ah, interesting. I thought that would have to rely upon traditional Xeons, and more likely Skylake Xeon when released; I'd be curious to see how they manage to do this with KNL, as it ideally requires optimised code compared to traditional x86. FWIW, KNL does not support virtualization out of the box.
source: http://ark.intel.com/products/94033/Intel-Xeon-Phi-Processor-7210-16GB-1_30-GHz-64-core
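Easy enough to sanity-check on any Linux x86 box, for what it's worth; a minimal sketch that just looks for the hardware-virtualization CPU flags:

# Look for the x86 hardware-virtualization flags in /proc/cpuinfo
# (vmx = Intel VT-x, svm = AMD-V). Linux-only check.
def has_hw_virt():
    flags = set()
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    return bool(flags & {"vmx", "svm"})

print("hardware virtualization:", "yes" if has_hw_virt() else "no")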
No, that's one of the idiosyncrasies of public companies. Intel's traditional markets are saturated. Hence, they need to find other fields to foster growth and appease shareholders. That's necessary in order to avoid loss of stock market value and prevent a hostile buyout.
CAD is probably not the best example of what I'm describing. Point being, it could allow a multitude of relatively infrequent users to be sharing a larger GPU without partitioning resources. Cases where memory capacity in aggregate would likely become an issue. It could be a bunch of users updating their spreadsheets from tablets.
As for your GPU-bound CAD file, is that because of compute power or memory limitations? A lot of these current capacity issues you are mentioning seem like they may be alleviated by better access to system memory. If using a fast interconnect, the available VMs are likely limited by the CPUs in the system. Even for IBM with NVLink I'm not sure they are linking more than one GPU at a time to the CPU. It would ultimately be wasteful, as it only helps if you have the system memory bandwidth to feed it. I'd imagine a system with a large bus to the GPU makes a good CAD workstation, but a VM might be a bit much unless you had a really big system and didn't oversubscribe active users.
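To put the "limited by CPUs in the system" point in rough numbers (every figure below is a placeholder, not a real product spec):

# Toy sizing: how many VMs fit before the host CPUs become the bottleneck.
host_cores = 2 * 16          # e.g. a dual-socket host, 16 cores per socket
threads_per_core = 2         # SMT
vcpus_per_vm = 4             # what each CAD/VDI guest is given
oversubscription = 2.0       # vCPU-to-thread ratio you're willing to run

hw_threads = host_cores * threads_per_core
max_vms = int(hw_threads * oversubscription / vcpus_per_vm)
print(max_vms)               # 32 VMs with these placeholder numbers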
The flipside on CAD is that it tends to be a more bursty workload, concentrated when a sudden model rotation takes place or you want to view the model with the materials. Unlike games, it's not pushing to deliver a constant 60FPS from the draw call level.
For CAD I think more along the lines of assembly lines, cars, aircraft, and smaller models for CNC machines. Tasks that a company could have a lot of designers working on, but not load down the entire system consistently. In your case if multiple people had large files constantly loading the device the VM is somewhat pointless. Even for memory consumption a single user likely avoids the limit, but multiple would likely hit it.
These users likely wouldn't be working on the same file. Likely just parts or separate projects. With the bursty nature as Dave mentioned you wouldn't need the same quantity of hardware. That's generally the point of VMs: you can consolidate everything and share stronger hardware among users. Take the idle time from one user and give it to another. If a user is consistently utilizing the resources they'd be better off with a workstation than a VM. If you had 100 engineers that needed to design bolts, would you want to provision a workstation for all of them or use a VM to reduce hardware costs? Pretty much the result is you still need the same total amount of horsepower, bandwidth, and memory as each individual system in a traditional setup, but with this new tech you are consolidating everything into one area, and along with it the data too. From an organizational standpoint and operations it's great; it doesn't mean you need less of anything, though.
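The idle-time argument in toy numbers, using the 100-engineers example (the 15% active figure is purely an assumption):

# Toy consolidation estimate for the "100 engineers designing bolts" case.
engineers = 100
avg_active_fraction = 0.15   # assumption: bursty CAD users are mostly idle
burst_headroom = 2.0         # margin so simultaneous bursts don't all collide

concurrent_slots = int(engineers * avg_active_fraction * burst_headroom)
print(concurrent_slots)      # ~30 shared GPU "slots" instead of 100 workstations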
Thank you for providing more insight into this area I'm obviously not too familiar with! Ecosystem is most important, so SR-IOV validation, VM software enablement and then application certification (for the "professional" VDI space) are, by and large, more key than the generation of hardware underneath.
I only know of something like this. What I don't know about are the capabilities besides claimed support for 16 virtual users and a 100W TDP. How much competition is in this space already? Does Nvidia have a similar dedicated product?