Instead of a single server, the organization has a scaled storage system behind multiple NFS gateways, which spread the traffic load across several entry points. This offers some improvement, but it still leaves a bottleneck between the gateways and the storage system. The storage system itself can scale up to the moon, but distributing modern datasets effectively requires scaling out, and scaling out requires intelligent traffic management.
Without that management, you get exactly what NFS exhibits: some gateways sit underutilized while others are swamped to the point of crippling performance. NFS has no way to push a client to an alternative gateway. You can terminate the connection and route the client to a new gateway, but that produces all kinds of client errors.
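To make the imbalance concrete, here is a minimal simulation sketch in Python, not Quobyte or NFS code; the gateway names are made up. Clients are pinned to a gateway at mount time, while the I/O they generate is heavy-tailed.

```python
import random
from collections import Counter

# Sketch: static client-to-gateway assignment under heavy-tailed I/O.
random.seed(7)
GATEWAYS = ["gw-a", "gw-b", "gw-c", "gw-d"]  # hypothetical gateway names

load = Counter()
for client in range(100):
    gateway = random.choice(GATEWAYS)           # fixed at mount time
    load[gateway] += random.paretovariate(1.2)  # heavy-tailed per-client I/O

for gateway, io in sorted(load.items()):
    print(f"{gateway}: relative load {io:10.1f}")
```

Run it a few times: one gateway routinely carries several times the load of another, and nothing in the protocol lets you rebalance after the fact.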
In a scale-out storage solution like Quobyte, data is distributed across multiple servers, and clients fetch data directly from the server that holds the file they need, with no intermediary in between. NFS, by contrast, has failover functionality that often feels like a slapped-together workaround.
A typical workaround moves a failed gateway's IP address over to another machine; it then takes several seconds for the network to receive and absorb this change. Clients then talk to gateway B, which may not have the same state as gateway A. The mismatch produces stale file handles, and applications that do not handle them gracefully will throw errors.
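Applications that want to survive a failover therefore have to handle ESTALE themselves. Here is a minimal sketch of the usual pattern, assuming a plain POSIX client on Linux: reopen the file by path, which forces a fresh lookup against whichever gateway is now serving the mount.

```python
import errno
import os

def read_with_estale_retry(path, offset, length):
    """Read length bytes at offset, reopening the file once if the
    NFS file handle goes stale after a gateway failover (ESTALE).

    A sketch only: real applications must also deal with locks,
    partial writes, and caches that a stale handle invalidates.
    """
    for attempt in range(2):
        fd = os.open(path, os.O_RDONLY)
        try:
            return os.pread(fd, length, offset)
        except OSError as e:
            # A stale handle from the old gateway: retrying reopens
            # the path on the gateway that took over.
            if e.errno != errno.ESTALE or attempt == 1:
                raise
        finally:
            os.close(fd)
```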
Because NFS is an open standard, anyone can implement the protocol. It started as an in-house experiment at Sun Microsystems, but after its initial success, the second version was released publicly. Unfortunately, as I see over and over, pNFS (parallel NFS) solves only a tiny part of the problem: the data transfer. NFS fails at failover as well. Like many critical features, fault tolerance must be designed into a protocol from the start; NFS instead got clunky failover bolted on later, like a badly designed building waiting to collapse.
This brings me to the second goal of enterprise IT: data safety, a catch-all term for data integrity, governance, compliance, protection, access control, and so on.
Data safety is a major concern, whether the driver is preventing data breaches or meeting industry regulation. Enterprises processing personally identifiable information or health data must implement state-of-the-art data protection, including encryption. NFS, which by default sends data across the network unencrypted, is a severe business and compliance risk in comparison. This is partly a consequence of how much larger data operations are today than when NFS was designed.
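On Linux you can at least audit the exposure, because the negotiated security flavor appears in the mount options. A minimal sketch, assuming the standard /proc/mounts format: anything without sec=krb5p (Kerberos with privacy protection) is sending data over the wire unencrypted.

```python
# Sketch (Linux-only): flag NFS mounts not protected by Kerberos
# privacy (sec=krb5p); with the default sec=sys, data is unencrypted.
def unencrypted_nfs_mounts(mounts_file="/proc/mounts"):
    flagged = []
    with open(mounts_file) as f:
        for line in f:
            device, mountpoint, fstype, options = line.split()[:4]
            if fstype in ("nfs", "nfs4") and "sec=krb5p" not in options.split(","):
                flagged.append((mountpoint, options))
    return flagged

for mountpoint, options in unencrypted_nfs_mounts():
    print(f"unencrypted NFS mount: {mountpoint} ({options})")
```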
The name is a bit misleading because today, NFS is not the actual file system but the protocol spoken between the clients and the servers that hold the data. The Network File System (NFS) protocol was designed to let several client machines transparently access a file system on a single server. One of the design goals was to enable a broad range of operating systems and processor architectures to implement NFS.
Newer versions of Windows have native support for mounting NFS. Today, only two versions of the NFS protocol remain in use: version 3, published in 1995, and version 4, published in 2003. NFS 3 is still by far the most common version of the protocol and is the only one supported by Windows clients.
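If you are unsure which version your clients actually negotiated, the Linux kernel records it in the mount options. A minimal sketch that reads the vers= field from /proc/mounts:

```python
# Sketch (Linux-only): report the NFS protocol version each mount
# negotiated, taken from the vers= entry in /proc/mounts.
def nfs_versions(mounts_file="/proc/mounts"):
    versions = {}
    with open(mounts_file) as f:
        for line in f:
            fields = line.split()
            if fields[2] in ("nfs", "nfs4"):
                opts = dict(o.split("=", 1) for o in fields[3].split(",") if "=" in o)
                versions[fields[1]] = opts.get("vers", "unknown")
    return versions

for mountpoint, version in nfs_versions().items():
    print(f"{mountpoint}: NFS version {version}")
```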
It's like the lowest common denominator of storage because almost all operating systems can access NFS version 3 storage. Most of the disadvantages of NFS stem from the fact that it was designed decades ago for communication with a single server. So is NFS actually a file system? The short answer is: no.
The Network File System, despite its name, is a protocol for accessing a file system that is located on a remote server.