The explosive growth of Web content has drawn increasing attention to two major challenges for network file systems: scalability and high availability. We introduce the concept of intermediate file handling to improve the availability of an NFS-based file system, and propose a new data consistency scheme to improve our design of a reliable file system, which has been implemented as an example of the software replication approach. By using a minor sequence number in token acquisition, the scheme reduces both the total number of get-token requests and the amount of data requested per write operation. We also propose a simple load-sharing mechanism that lets an NFS client switch to a lightly loaded server based on the number of RPC requests the client has issued over a period of time; this load information is easy to derive, since each client generates it locally. Finally, we analyse the proposed data consistency scheme and show that it reduces the write-request overhead of our network file system.
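The load-sharing idea above can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the class name, the sliding-window length, and the switching policy (pick the server with the fewest RPC requests recorded in the window) are all assumptions made for illustration.

```python
# Hypothetical sketch of the client-side load-sharing mechanism: each NFS
# client counts the RPC requests it has sent to each replica server over a
# recent time window and switches to the server with the fewest requests.
# Names and parameters are illustrative assumptions, not the paper's code.
import time
from collections import defaultdict, deque


class LoadSharingClient:
    def __init__(self, servers, window_secs=10.0):
        self.servers = list(servers)
        self.window = window_secs
        self.current = self.servers[0]
        # Per-server timestamps of recently issued RPC requests.
        self.rpc_log = defaultdict(deque)

    def record_rpc(self, server, now=None):
        """Record one RPC request sent to `server`."""
        now = time.monotonic() if now is None else now
        self.rpc_log[server].append(now)

    def _recent_count(self, server, now):
        log = self.rpc_log[server]
        # Drop entries that have aged out of the window.
        while log and now - log[0] > self.window:
            log.popleft()
        return len(log)

    def maybe_switch(self, now=None):
        """Switch to the server with the fewest recent RPC requests."""
        now = time.monotonic() if now is None else now
        lightest = min(self.servers, key=lambda s: self._recent_count(s, now))
        if lightest != self.current:
            self.current = lightest
        return self.current
```

Because each client derives the load estimate purely from its own request log, no extra load-reporting traffic between servers and clients is needed, which matches the abstract's claim that the load information is easy to obtain.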