Distributed computing is the theory and application of distributed systems in computer hardware and software. Distributed systems are message-passing networks whose components are located on different networked computers.
- Although distributed computer systems are highly desirable, putting together a properly functioning system is notoriously difficult. Some of the difficulties are pragmatic, for instance, the presence of heterogeneous hardware and software and the lack of adherence to standards. More fundamental difficulties are introduced by three factors: asynchrony, limited local knowledge, and failures. The term asynchrony means that the absolute and relative times at which events take place cannot always be known precisely. Because each computing entity can only be aware of information that it acquires, it has only a local view of the global situation. Computing entities can fail independently, leaving some components operational while others are not.
- Hagit Attiya and Jennifer Welch: Distributed Computing: Fundamentals, Simulations, and Advanced Topics. John Wiley & Sons. 25 March 2004. p. 2. ISBN 978-0-471-45324-6.
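The three factors Attiya and Welch name can be illustrated with a minimal sketch (not from the quoted book; the function names and timings are illustrative assumptions). A receiver with only local knowledge and an asynchronous clock cannot distinguish a slow sender from a crashed one; a timeout merely lets it suspect a failure.

```python
import queue
import threading
import time

def receive_with_timeout(inbox, timeout):
    """Wait for a message; on timeout, the sender is only *suspected* to
    have failed -- it may simply be slow (asynchrony)."""
    try:
        return inbox.get(timeout=timeout)
    except queue.Empty:
        return None  # local knowledge alone cannot tell slow from crashed

def slow_sender(inbox, delay, msg):
    time.sleep(delay)      # unpredictable message delay
    inbox.put(msg)

inbox = queue.Queue()
threading.Thread(target=slow_sender,
                 args=(inbox, 0.2, "hello"), daemon=True).start()

first = receive_with_timeout(inbox, timeout=0.05)   # too impatient: suspects failure
second = receive_with_timeout(inbox, timeout=2.0)   # patient enough: message arrives
print(first, second)
```

Here the first call returns `None` even though the sender is alive, which is exactly the fundamental difficulty the quote describes: no local test can reliably separate asynchrony from failure.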
- Research on architectures and interconnection networks has resulted in low-cost distributed systems with large numbers of powerful processors that can communicate at high speeds. Research on distributed operating systems has produced ways for employing this high computing potential by dividing the total workload among the available processors. By executing different programs on different processors, the system can have a high throughput. Some system programs (e.g., a file server) may also be distributed among multiple processors, to achieve higher speed and greater reliability. Many user applications can also benefit, for the same reasons. The task of distributing a single user program among multiple processors, however, clearly falls outside the scope of an operating system. Thus, to achieve this distribution, extra effort is required from the applications programmers.
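The idea of dividing a total workload among available processors to raise throughput can be sketched in a few lines (an illustrative example, not drawn from the quoted source; the chunking scheme and worker count are assumptions). Independent chunks of work are farmed out to a pool of workers and the partial results are combined.

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    # Stand-in for one unit of the total workload
    return sum(x * x for x in chunk)

workload = list(range(100))
# Divide the workload among 4 "processors" by striding
chunks = [workload[i::4] for i in range(4)]

with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(process_chunk, chunks))

total = sum(partials)
print(total)  # same answer as the sequential computation
```

The division of labour is transparent to the combining step, which mirrors the quote's point that distribution of a single user program is extra work for the application programmer, not something the operating system does automatically.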
- Today, almost everyone is connected to the Internet and uses different Cloud solutions to store, deliver and process data. Cloud computing assembles large networks of virtualized services such as hardware and software resources. The new era in which ICT has penetrated almost all domains (healthcare, aged care, social assistance, surveillance, education, etc.) creates the need for new multimedia content-driven applications. These applications generate huge amounts of data, which must be gathered, processed and then aggregated in a fault-tolerant, reliable and secure heterogeneous distributed system created by a mixture of Cloud systems (public/private), mobile device networks, desktop-based clusters, etc. In this context, dynamic resource provisioning for Big Data application scheduling has become a challenge in modern systems.