Introduction to NFS

The Network File System (NFS) is one of the file systems supported by FreeBSD. It allows computers on a network to share resources over TCP/IP. With NFS, a local NFS client application can transparently read and write files on a remote NFS server, just as if they were local files.

Currently, NFS has three versions (NFSv2, NFSv3, and NFSv4). NFSv3 supports more features than NFSv2; the main difference between them is that NFSv2 uses UDP for transport, so NFSv2 connections may be unreliable on complex networks, while NFSv3 supports both UDP and TCP. NFSv4 improves on NFSv3's performance, enforces stronger security, and introduces a stateful protocol.
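As an aside (my own illustration, not part of the setup below): if you ever need to pin the protocol version when mounting, the standard nfs mount option is nfsvers. The server name and paths here are placeholders:

# force an NFSv3 mount; replace server:/export and /mnt/point with real values
mount -t nfs -o nfsvers=3 server:/export /mnt/point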

When a client wants to mount an NFS shared volume, it sends an RPC request to the server. After authentication, the NFS server sends a random cookie to the client, and the client uses that cookie to authorize access to the shared volume. NFS authentication supports built-in IP/host-based permission assignment and can also be restricted by TCP wrappers.

I found the paragraphs and diagram above on the Internet; they mainly serve to set the scene, you know. If you just want to quickly set up an NFS environment on CentOS, skip this section…

1. Environment

1 Software Environment

Windows 10 x64

VMware 12 x64

CentOS 6.7 x64 * 3

nfs-utils

nfs-utils-lib

rpcbind

2 Server Planning

IP               OS              Function Module  Shared Folder  Mount Folder
192.168.174.200  CentOS 6.7 x64  NFS Server       /data/shared   -
192.168.174.201  CentOS 6.7 x64  NFS Client       -              /data/shared
192.168.174.202  CentOS 6.7 x64  NFS Client       -              /data/shared

Note:

  • The number of NFS clients can be increased as required.

2. Prepare the NFS Server environment

1 Create a shared directory

Log in to the planned NFS Server node and run the following command as user root to create the shared directory:

[root@hadoop1 ~]# mkdir -p /data/shared

Note:

  • The location of the shared directory varies according to the actual situation. For example, if your server has a large data disk mounted at /data1, you can create the shared directory under /data1 (for example, /data1/shared).

  • This shared directory is where the data files are stored. The NFS clients access this directory to read and write the shared files.

2 Grant read and write permission to the shared directory

Run the following commands as user root:

[root@hadoop1 data]# cd /data

[root@hadoop1 data]# pwd

/data

[root@hadoop1 data]# chmod -R 777 shared

[root@hadoop1 data]# ll -d shared

drwxrwxrwx. 2 root root 4096 Aug  6 06:18 shared

3. Install and configure the NFS Server

You can select a server to act as the NFS Server based on the server configuration and service requirements. The NFS Server is the node that stores the files, so consider whether its disk resources meet the requirements. I selected 192.168.174.200 as the NFS Server. The SSH connection to server 192.168.174.200 is omitted…

1 Check whether nfs-utils is already installed on the server

[root@hadoop1 ~]# rpm -qa | grep nfs-utils

[root@hadoop1 ~]#

If the nfs-utils and nfs-utils-lib packages are already installed, you can skip step 2.

2 Install nfs-utils

Run the following command as user root to install nfs-utils

[root@hadoop1 ~]# yum install -y nfs-utils

Note:

  • The preceding command installs the nfs-utils.x86_64 1:1.2.3-78.el6 package and its dependencies (versions may vary): nfs-utils-lib.x86_64 0:1.1.5-13.el6, keyutils.x86_64 0:1.4-5.el6, libgssglue.x86_64 0:0.1-11.el6, libtirpc.x86_64 0:0.2.1-15.el6, rpcbind.x86_64 0:0.2.0-16.el6, and so on.
  • rpcbind is the port-mapping package; it corresponds to portmap on CentOS 5. A quick way to check that it was installed is shown below.
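If you want to confirm that rpcbind really was pulled in, run this quick check as root (my own addition, not in the original steps):

[root@hadoop1 ~]# rpm -qa | grep rpcbind    # prints the rpcbind package if it is installed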

3 Configure the NFS Server

The NFS configuration file is /etc/exports. On CentOS 6.7, the /etc/exports file already exists and is empty (if it does not exist on your system, create it with touch /etc/exports). Edit it with vi/vim:

[root@hadoop1 ~]# vim /etc/exports

Add the following to the file:

/data/shared 192.168.174.201(rw,sync,all_squash)
/data/shared 192.168.174.202(rw,sync,all_squash)

Note:

  • /data/shared 192.168.174.201(rw,sync,all_squash) means that the shared directory /data/shared on the NFS Server may be accessed from 192.168.174.201, that access is read-write (rw), that data files are written synchronously (sync), and that all users are mapped to the anonymous user (all_squash).

  • The preceding entries can also be written as a single line, /data/shared 192.168.174.*(rw,sync,all_squash), which allows access from all hosts on the 192.168.174 network segment.

  • The content of the configuration file has the following format; note that the option list inside '()' must not contain spaces, and there is no space between the client and '(' (a fuller example follows this list):

<export directory> <client 1>(access mode, sync mode, user mapping) <client 2>(access mode, sync mode, user mapping)

  • The specific configuration options are not listed here; refer to the official CentOS documentation: https://www.centos.org/docs/5/html/Deployment_Guide-en-US/s1-nfs-server-config-exports.html
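To make the format concrete, here is a hypothetical /etc/exports sketch; the second export, the ro option, and no_root_squash are my own illustrative additions, not part of this walkthrough:

# one export, two clients with different access modes
/data/shared  192.168.174.201(rw,sync,all_squash) 192.168.174.202(ro,sync,all_squash)
# a trusted admin host that keeps root privileges (use with care)
/data/backup  192.168.174.10(rw,sync,no_root_squash)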

4 Configure the NFS Server firewall

Run the following command as user root to edit the iptables file

[root@hadoop1 shared]# vim /etc/sysconfig/iptables
Copy the code

Add the following rules, save the settings, and exit:

### rpcbind
-A INPUT -p udp -m multiport --dports 111,875,892,2049,10053,32769 -m state --state NEW,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m multiport --dports 111,875,892,2049,10053,32803 -m state --state NEW,ESTABLISHED -j ACCEPT
-A OUTPUT -p udp -m multiport --sports 111,875,892,2049,10053,32769 -m state --state ESTABLISHED -j ACCEPT
-A OUTPUT -p tcp -m multiport --sports 111,875,892,2049,10053,32803 -m state --state ESTABLISHED -j ACCEPT


Note:

  • If the firewall service is not enabled on the NFS Server, skip this step.

  • Since rpcbind maps both TCP and UDP ports, rules must be configured for both TCP and UDP.

  • The preceding ports are the default listening ports of the NFS server; you can view the default configuration in the /etc/sysconfig/nfs file. A way to double-check them is shown below.
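Two related commands that may help here (standard CentOS 6 tools; I am adding them for convenience). rpcinfo lists the ports the RPC services actually bind once rpcbind and NFS are running (step 5), and the iptables service must be restarted for the new rules to take effect:

[root@hadoop1 ~]# rpcinfo -p                 # list RPC programs, versions, protocols, and ports
[root@hadoop1 ~]# service iptables restart   # reload the firewall rules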

5 Start NFS

rpcbind must be started before NFS. Run the following command as user root to start rpcbind:

[root@hadoop1 data]# service rpcbind start

Starting rpcbind:                                          [  OK  ]

Run the following command as user root to start NFS:

[root@hadoop1 data]# service nfs start

Starting NFS services:                                     [  OK  ]

Starting NFS quotas:                                       [  OK  ]

Starting NFS mountd:                                       [  OK  ]

Starting NFS daemon:                                       [  OK  ]

Starting RPC idmapd:                                       [  OK  ]
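Two follow-ups worth doing here (standard CentOS 6 commands; my own addition): make both services start automatically at boot, and confirm that the export list is actually published:

[root@hadoop1 ~]# chkconfig rpcbind on      # start rpcbind at boot
[root@hadoop1 ~]# chkconfig nfs on          # start NFS at boot
[root@hadoop1 ~]# showmount -e localhost    # should list /data/shared and the allowed clients

If you later edit /etc/exports, running exportfs -r as root reloads the export table without restarting NFS.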

4. Configure NFS clients

Use SSH to connect to the two NFS Client nodes, 192.168.174.201 and 192.168.174.202.

1 Install the NFS package

Run the following command as user root on the two client servers to install NFS:

[root@hadoop2 ~]# yum install -y nfs-utils

Note:

  • The nfs-utils package must also be installed on the NFS client; otherwise the client cannot mount the share. Once it is installed, you can query the server's export list from the client, as shown below.
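Before mounting, a quick sanity check from the client confirms that the server is exporting the directory (my own addition; assumes the server-side setup from section 3 is done):

[root@hadoop2 ~]# showmount -e 192.168.174.200   # list the exports published by the NFS Server

With the exports configured above, the output should list /data/shared together with the allowed client addresses.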

2 Create a local mount directory

Run the following commands as user root on the two client servers to create a local directory on each; the shared directory on the NFS Server will be mapped to this local directory.

[root@hadoop3 ~]# mkdir -p /data/shared

[root@hadoop3 ~]# cd /data

[root@hadoop3 data]# pwd

/data

[root@hadoop3 data]# chmod -R 777 shared

[root@hadoop3 data]# ll -d shared/

drwxrwxrwx. 2 root root 4096 Aug  6 06:43 shared/

Note:

  • In the preceding commands, I created the local directory on the NFS client with the same path and name as the shared directory on the NFS Server. In fact, the local directory on the NFS client and the shared directory on the NFS Server can be different, as shown below.
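For instance, a client could just as well mount the share somewhere else; the path /mnt/nfs_shared below is a hypothetical example of my own:

[root@hadoop2 ~]# mkdir -p /mnt/nfs_shared
[root@hadoop2 ~]# mount -t nfs 192.168.174.200:/data/shared /mnt/nfs_shared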

3 Mount the NFS Server shared directory locally

[root@hadoop2 data]# mount -t nfs 192.168.174.200:/data/shared /data/shared

View the file systems now mounted on this server:

[root@hadoop2 ~]# df -h
Filesystem                    Size  Used Avail Use% Mounted on
/dev/sda2                      97G  4.3G   87G   5% /
tmpfs                         1.9G   72K  1.9G   1% /dev/shm
/dev/sda1                     283M   41M  227M  16% /boot
192.168.174.200:/data/shared   97G   16G   76G  18% /data/shared

You can see a new entry, 192.168.174.200:/data/shared, mounted on the local /data/shared directory.
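Note that a mount made this way does not survive a reboot. If you want the mount to be permanent, an /etc/fstab entry along the following lines should work (my addition; _netdev tells the system to wait for the network before mounting):

# /etc/fstab on the NFS client
192.168.174.200:/data/shared  /data/shared  nfs  defaults,_netdev  0 0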

5. Verification

1 Create a file on an NFS Client node

Run the following commands on server 192.168.174.202:

[root@hadoop3 data]# cd shared

[root@hadoop3 shared]# vim test.text

Enter the following content, then save and exit:

This is a test text!

Check the test.text file in the /data/shared directory on the 192.168.174.202 server:

[root@hadoop3 shared]# pwd

/data/shared

[root@hadoop3 shared]# ll

total 4

-rw-r--r--. 1 nfsnobody nfsnobody 21 Aug  6 09:38 test.text

[root@hadoop3 shared]# cat test.text

This is a test text!

2 View the directories of the other two nodes

Check the /data/shared directory on the other two nodes, 192.168.174.200 and 192.168.174.201:

200:

[root@hadoop1 shared]# pwd

/data/shared

[root@hadoop1 shared]# ll

total 4

-rw-r--r--. 1 nfsnobody nfsnobody 21 Aug  6 09:38 test.text

201:

[root@hadoop2 shared]# pwd

/data/shared

[root@hadoop2 shared]# ll

total 4

-rw-r--r--. 1 nfsnobody nfsnobody 21 Aug  6 09:38 test.text

3 Modify the file on the NFS Client

Modify the file on 192.168.174.201: append Hello World at the bottom of the file, then save and exit.

[root@hadoop2 shared]# vim test.text

This is a test text!

Hello World

4 View the files on the other two nodes

Check the /data/shared/test.text file on 192.168.174.200 and 192.168.174.202:

200:

[root@hadoop1 shared]# pwd

/data/shared

[root@hadoop1 shared]# cat test.text

This is a test text!

Hello World

202:

[root@hadoop3 shared]# pwd

/data/shared

[root@hadoop3 shared]# cat test.text

This is a test text!

Hello World

You can see that the changes also take effect on the other two nodes.

At this point, the NFS shared directory is fully installed and configured on CentOS 6.x.
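One last convenience (my own addition): if a client no longer needs the share, unmount it as root, and remove the corresponding /etc/fstab entry if you added one:

[root@hadoop2 ~]# umount /data/shared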

Appendix: NFS-related information

The following is a collection of NFS-related information for those interested.

Linux NFS project on SourceForge: nfs.sourceforge.net/

Wikipedia article on NFS: en.wikipedia.org/wiki/Networ…

ArchWiki page on NFS (Simplified Chinese version): wiki.archlinux.org/index.php/N…

nfs(5) man page on linux.die.net: linux.die.net/man/5/nfs

FreeBSD Handbook chapter on NFS: www.freebsd.org/doc/handboo…