In several areas of scientific research, such as meteorology, seismology, and particle physics, the need for high-performance processing systems has been felt for several decades. The production of Supercomputers, with specialized and expensive architectural solutions, was the first natural response to these computational needs and laid the foundations of High Performance Computing (HPC). Later, technological progress and the spread of personal computers and computer networks profoundly changed how the demand for high processing speed is addressed. From centralized architectures built with specialized and very expensive components (Supercomputers), the field has moved to low-cost commodity components organized in massively parallel and distributed architectures (Clusters, Grid Computing, and Cloud Computing).
In any event, given their complexity and degree of innovation, HPC systems have always required highly specialized personnel to manage them, along with close collaboration between application developers and experts in the underlying hardware architecture. From the problem of making optimal use of very expensive computational resources, attention has shifted to the harder problem of coordinating the sharing of heterogeneous, distributed computing resources, and of organizing these resources so that they appear as “Virtual Supercomputers” to scientific and technological applications.