
OSI Standards and the Top Fallacy of Distributed Computing


Abstract:

In 1994, the OSI (Open Systems Interconnection) reference model, ISO/IEC 7498-1:1994, was published. The two-sided, point-to-point TCP/IP communication protocol became widely accepted as the standard building block for all distributed applications. The OSI standard states: "Transport Layer: Reliable transmission of data segments between points on a network, including segmentation, acknowledgement and multiplexing." In the same year, "The Network is Reliable" was named the top fallacy in distributed computing by Peter Deutsch and his colleagues. Reconciling these two conflicting descriptions of the seemingly same object has been a lasting myth of distributed and parallel processing. Investigating this myth under the lens of extreme-scale computing revealed that it is impossible to implement reliable communication in the face of crashes of either the sender or the receiver in any point-to-point communication channel. This impossibility result exposes two unacceptable risks: a) arbitrary data loss, and b) a probability of data loss that increases as the infrastructure expands in size. The direct use of point-to-point protocols in distributed and parallel applications is therefore the cause of the notorious scalability dilemma. The top-fallacy allegation in distributed computing is supported by both theory and practice. The scalability dilemma is responsible for the growing planned and unplanned downtimes in existing infrastructures, and for the underreported data and service losses in mission-critical applications. It also makes the reproducibility of large-scale computations increasingly difficult. The growing instability is further linked to our inability to quantify a parallel application's scalability. This paper reports a statistic multiplexed computing (SMC) paradigm designed to solve the scalability and reproducibility problems. Preliminary computational results are reported in support of the proposed solution.
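
To make the second risk concrete, here is a minimal sketch (not from the paper). It only illustrates, under the simplifying assumption of independent point-to-point channels that each lose a message with a fixed probability p, how the chance of at least one data loss approaches certainty as the number of channels grows; the value p = 1e-6 is a hypothetical figure chosen for illustration.

    def p_any_loss(n_channels: int, p: float) -> float:
        """P(at least one of n independent channels loses data) = 1 - (1 - p)^n."""
        return 1.0 - (1.0 - p) ** n_channels

    if __name__ == "__main__":
        p = 1e-6  # hypothetical per-message loss probability for one channel
        for n in (10**3, 10**5, 10**6, 10**7, 10**9):
            print(f"n = {n:>13,d}   P(at least one loss) = {p_any_loss(n, p):.6f}")

With p = 1e-6, the loss probability is already about 63% at one million channels and indistinguishable from 1 beyond that, which is the scaling behavior the abstract attributes to the direct use of point-to-point protocols.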
Date of Conference: 16-19 August 2016
Date Added to IEEE Xplore: 26 September 2016
Electronic ISSN: 2332-5690
Conference Location: Philadelphia, PA, USA
