NVIDIA has introduced the new Quadro vDWS software, which allows demanding software to run on Tesla-equipped servers instead of workstations. It is a paradigm shift that hints (perhaps) at the changes coming to computing in the years ahead, with personal machines increasingly reduced to niche devices. A market evolution that is heading to the cloud at full speed.
NVIDIA has announced Quadro vDWS, short for "Quadro Virtual Data Center Workstation Software". This software makes it possible to exploit the processing power of servers equipped with Pascal-based Tesla cards to perform tasks that would otherwise require a workstation. Hence the need for "cloud-ready" hardware, fully in line with recent developments in cloud computing. And the scenarios this opens up deserve some consideration.
The vDWS framework is designed to let users run software with a demanding computational load (e.g. photorealistic rendering, virtual reality, deep learning, scientific simulations, video encoding) without needing a high-performance machine of their own, moving the workload to a compute center instead. The service will officially launch on September 1, but the servers on which it can be installed are already on sale and include more than 120 systems from more than 30 manufacturers.
NVIDIA emphasizes in the article posted on its site that businesses are changing and that workflows include more and more elements requiring high computing capability, a trend that seems destined to continue in the future.
Moving the processing load to a central structure, whether a server, a rack, or an entire datacenter, therefore seems to be the solution that will take hold in the future, helped by a shift of the whole market in this direction. NVIDIA states that programs such as 3DS Max, Showcase and Solidworks can also be used on Quadro vDWS servers, making dedicated graphics workstations unnecessary. But it does not stop there.
Both NVIDIA and Intel have invested considerable resources in GPGPU cards, and this choice seems to be shaping the future of computing. While the current emphasis is on bringing cloud computing to business users and specific tasks, movement in this direction is already visible among ordinary users as well: services such as GeForce Now and PlayStation Now are among the first examples of processing performed directly in the cloud and then accessed remotely by the user.
Although it is difficult to predict when operating systems such as Windows might move to the cloud, shifting the entire work environment to centralized compute centers rather than the distributed model that dominates today, the general direction seems set. As has happened on countless occasions in the past, it is the business market that opens up a range of solutions which then also reach the everyday lives of ordinary users.
The positive aspects would be numerous: users' computational capability would no longer be limited by the hardware they own, and the need to upgrade would give way to simpler, more versatile machines. In this sense, ordinary users could benefit greatly from this paradigm shift.
It is interesting to analyze, on the other hand, the impact this model may have on current infrastructure and what countermeasures would be needed to meet the service's security and availability requirements. A constant connection to a virtual machine by millions of users requires a robust network infrastructure able to withstand a much higher load than today's and to guarantee a user experience comparable to that of a local machine. It should also be considered that a centralized system is by its nature vulnerable to attacks that limit availability (e.g. DDoS) or to security breaches, which are harder to carry out against a large number of devices distributed across the territory.
