An overview of FastDFS

A distributed file system (DFS), also known as a network file system, is a file system that allows files to be shared across multiple hosts over a network, so that multiple users on multiple machines can share files and storage space.

FastDFS is an open source distributed file system written in C. Its design takes redundant backup, load balancing, and linear scaling into full account and emphasizes high availability and high performance. Its features include file storage, file synchronization, and file access (file upload and download), solving the problems of large-capacity storage and load balancing. It is especially suitable for small and medium-sized files (recommended range: 4KB < file_size < 500MB) and for online services built around files, such as photo album and video websites.

FastDFS architecture

The FastDFS architecture consists of Tracker Servers and Storage Servers. The client asks the Tracker Server to schedule an upload or download, and the file is then uploaded to or downloaded from the Storage Server that the Tracker assigns.

The Tracker Server

The Tracker Server mainly performs scheduling and plays a load-balancing role. It is responsible for managing all the Storage Servers and groups. After each Storage starts, it connects to the Tracker, reports which group it belongs to along with other information, and then maintains a periodic heartbeat. Based on the Storage heartbeat information, the Tracker builds a mapping table of group ==> [Storage Server list].

A Tracker manages very little metadata, all of which is kept in memory. Moreover, the metadata on a Tracker is generated from the information reported by the Storages, so the Tracker itself does not need to persist any data. This makes the Tracker very easy to scale out: simply adding Tracker machines expands the Tracker cluster. Every Tracker in the cluster is completely equivalent; all Trackers accept the Storages' heartbeat information, generate metadata, and serve read and write requests.

Storage Server

The Storage Server mainly provides capacity and backup services, organized in units of groups. Each group can contain multiple Storage Servers that back each other up. Grouping makes it easy to isolate applications, balance load, and customize the number of replicas (the number of Storage Servers in a group is the number of replicas in that group). For example, data from different applications can be isolated by storing it in different groups, and applications can also be assigned to different groups according to their access characteristics to balance load. The drawback is that the capacity of a group is limited by the storage capacity of a single machine, and when a machine in a group fails, data recovery can only rely on the other machines in the group, so recovery takes a long time.

Each Storage in a group relies on the local file system for storage, and a Storage can be configured with multiple data storage directories. For example, if 10 disks are mounted at /data/disk1 through /data/disk10, all 10 directories can be configured as the Storage's data storage directories. When it receives a file write request, the Storage selects one of these directories according to the configured rules to store the file. To avoid having too many files in a single directory, the Storage creates two levels of subdirectories in each data storage directory the first time it starts, 256 at each level, 256 x 256 = 65536 subdirectories in total. A newly written file is routed to one of these subdirectories by hashing, and the file data is then stored as a local file in that subdirectory.
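
For illustration, the layout under one data storage directory looks roughly like the sketch below; the subdirectory names are two hexadecimal characters from 00 to FF, and the data/ prefix assumes the default layout the Storage creates on first start:

    /data/disk1/data/00/00/
    /data/disk1/data/00/01/
    ...
    /data/disk1/data/FF/FF/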

FastDFS storage policy

To support large capacity, storage nodes (servers) are organized into volumes (also called groups). A storage system consists of one or more volumes, and the files in different volumes are independent of each other; the file capacity of the whole storage system is the sum of the file capacities of all its volumes. A volume can consist of one or more storage servers, and the files on all the storage servers within a volume are identical; the multiple storage servers in a volume provide redundant backup and load balancing.

When a server is added to a volume, the system automatically synchronizes the existing files to it; once synchronization is complete, the system automatically switches the new server to serving online requests. When storage space is insufficient or about to run out, volumes can be added dynamically: simply add one or more servers and configure them as a new volume, which increases the capacity of the whole storage system.

FastDFS upload process

FastDFS provides users with basic file access interfaces, such as Upload, Download, Append, and Delete, in the form of client libraries.
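
Besides the client libraries, the installation described later also ships command-line tools that map onto these interfaces. A minimal sketch, assuming the default client configuration at /etc/fdfs/client.conf (the local path and file ID below are placeholders):

    fdfs_upload_file   /etc/fdfs/client.conf /path/to/local.jpg
    fdfs_download_file /etc/fdfs/client.conf group1/M00/00/00/xxxxxx.jpg
    fdfs_delete_file   /etc/fdfs/client.conf group1/M00/00/00/xxxxxx.jpg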

The Storage Servers periodically send their status information to the Tracker Servers. If there is more than one Tracker Server in the Tracker cluster, the relationship between the Trackers is completely equal, so the client can select any Tracker when uploading.

When a Tracker receives an upload request from the client, it assigns a group to the file. Once the group is selected, the Tracker decides which storage server within the group to assign to the client. After a storage server has been allocated, the client sends a file write request to that storage. The storage allocates a data storage directory for the file, then assigns a fileid to it, and finally generates a file name based on the information above and stores the file.

Select the tracker server

When there is more than one Tracker server in the cluster, the client can choose any Tracker when uploading a file, because the relationship between Trackers is completely equal.
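
Accordingly, the client configuration can list every Tracker, one line per server; a minimal sketch that reuses the address from the configuration later in this article plus a second, hypothetical Tracker:

    tracker_server = 192.168.1.190:22122
    tracker_server = 192.168.1.191:22122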

Select the group for the storage

When the tracker receives an upload request, it assigns a group that can store the file. The following group-selection rules are supported: 1. Round robin: rotate among all groups. 2. Specified group: always use a designated group. 3. Load balance: select the group with the most free storage space.
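
These rules correspond to the store_lookup and store_group parameters in tracker.conf; a hedged sketch, assuming the stock sample configuration shipped with FastDFS v5.x:

    # 0: round robin, 1: specified group, 2: load balance (group with the most free space)
    store_lookup=2
    # only used when store_lookup=1
    store_group=group1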

Choose the storage server

After the group is selected, the tracker selects a storage server within the group for the client. The following storage-selection rules are supported: 1. Round robin: rotate among the storage servers in the group. 2. First server ordered by IP: take the first server when sorted by IP address. 3. First server ordered by priority: take the first server when sorted by priority (the priority is configured on the storage).
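
The corresponding tracker.conf parameter is store_server (again assuming the stock sample configuration):

    # 0: round robin, 1: first server ordered by IP, 2: first server ordered by priority
    store_server=0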

Choose the storage path

After a storage server has been allocated, the client sends a file write request to the storage, and the storage allocates a data storage directory for the file. The following rules are supported: 1. Round robin: rotate among the multiple storage directories. 2. Most free space first: the directory with the most free storage space takes precedence.
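
The corresponding tracker.conf parameter is store_path (same assumption about the stock sample configuration):

    # 0: round robin among the storage directories, 2: directory with the most free space
    store_path=0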

Generate a fileid

After the storage directory is selected, the storage generates a fileid for the file. The fileid is the concatenation of the storage server's IP address, the file creation time, the file size, the file CRC32 checksum, and a random number; this binary string is then base64 encoded into a printable string.

Select a two-level directory

Each storage directory contains two levels of 256 x 256 subdirectories. The storage hashes the fileid twice, routes the file to one of the subdirectories, and then stores the file in that subdirectory, using the fileid as the file name.

Generate file name

Once the file has been stored in a subdirectory, it is considered successfully stored, and a file name is then generated for it. The file name is the concatenation of the group, the storage directory, the two levels of subdirectories, the fileid, and the file extension (specified by the client and used to distinguish file types).
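
For example, the file name returned by the upload test later in this article breaks down as follows (mapping M00 to store_path0 is the usual convention and is stated here as an assumption):

    group1                             group (volume) name
    M00                                virtual disk path, i.e. store_path0
    00/00                              two levels of subdirectories
    rBD8EFqVACuAI9mcAAC_ornlYSU088     fileid (base64 of IP, timestamp, size, CRC32, random number)
    .jpg                               file extension supplied by the client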

FastDFS file synchronization

When a file is written to a storage server in a group, the client considers that the file is successfully written. After the storage Server writes the file, the background thread synchronizes the file to other storage servers in the same group.

After each storage writes a file, it also writes a binlog entry. The binlog does not contain file data, only the file name and other metadata, and it is what the background synchronization relies on. Synchronization progress is recorded as a timestamp, so it is best to keep the clocks of all servers in the cluster synchronized.
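
For reference, a binlog line records a timestamp, an operation type, and the file path, roughly as sketched below; the exact layout is an assumption based on FastDFS v5.x, and the timestamp here is made up:

    # assumed format: <timestamp> <op type> <file path>, where C means the file was created on this storage
    1519387355 C M00/00/00/rBD8EFqVACuAI9mcAAC_ornlYSU088.jpg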

The synchronization progress of a storage is reported to the tracker as part of the metadata, and the tracker uses this synchronization progress as a reference when selecting a storage to serve reads.

For example, suppose a group has three storage servers A, B, and C. A has synchronized to C up to timestamp T1 (all files written to A before T1 have been synchronized to C), and B has synchronized to C up to timestamp T2 (T2 > T1). When the tracker receives these progress reports, it takes the smallest one as C's synchronization timestamp, in this case T1 (all data written before T1 has been synchronized to C). Following the same rule, the tracker derives synchronization timestamps for A and B as well.

FastDFS file download

After the client's upload succeeds, it receives a file name generated by the storage, and from then on the client can access the file using that file name.

As with uploads, the client can select any Tracker server when downloading. The client sends a download request to a tracker and must specify the file name. The tracker parses the group, file size, creation time, and other information out of the file name and then selects a storage to serve the read request.
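
With the fastdfs-nginx-module configured later in this article, the file can also be fetched directly over HTTP. A sketch, assuming nginx listens on port 80 on the storage host 192.168.1.190 used in the examples below:

    http://192.168.1.190/group1/M00/00/00/rBD8EFqVACuAI9mcAAC_ornlYSU088.jpg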

FastDFS performance solution

FastDFS installation

The software package version
FastDFS v5.05
libfastcommon v1.0.7

Download and install libfastcommon

  • Download

      wget https://github.com/happyfish100/libfastcommon/archive/V1.0.7.tar.gz

  • Unpack

      tar -xvf V1.0.7.tar.gz
      cd libfastcommon-1.0.7

  • Compile and install

      ./make.sh
      ./make.sh install

  • Create soft links

      ln -s /usr/lib64/libfastcommon.so /usr/local/lib/libfastcommon.so
      ln -s /usr/lib64/libfastcommon.so /usr/lib/libfastcommon.so
      ln -s /usr/lib64/libfdfsclient.so /usr/local/lib/libfdfsclient.so
      ln -s /usr/lib64/libfdfsclient.so /usr/lib/libfdfsclient.so

Download and install FastDFS

  • Download FastDFS

      wget https://github.com/happyfish100/fastdfs/archive/V5.05.tar.gz

  • Unpack

      tar -xvf V5.05.tar.gz
      cd fastdfs-5.05

  • Compile and install

      ./make.sh
      ./make.sh install

Configuring the Tracker Service

After the installation above succeeds, there will be an fdfs directory under /etc; go into it. It contains the sample configuration files shipped by the author. We need to copy tracker.conf.sample to tracker.conf and then modify it:


     
    cp tracker.conf.sample tracker.conf
    vi tracker.conf

Edit tracker.conf


     
    # whether this configuration file is disabled; false means it takes effect
    disabled=false
    # port that provides the service
    port=22122
    # tracker data and log directory
    base_path=/home/data/fastdfs
    # HTTP service port
    http.server_port=80

Create the base data directory for tracker, the directory corresponding to base_path


     
    mkdir -p /home/data/fastdfs

Use ln -s to establish a soft link


     
    ln -s /usr/bin/fdfs_trackerd /usr/local/bin
    ln -s /usr/bin/stop.sh /usr/local/bin
    ln -s /usr/bin/restart.sh /usr/local/bin

Start the service


     
    service fdfs_trackerd start

Check that the port is listening


     
    netstat -unltp|grep fdfs

If port 22122 is being listened on, the Tracker service has started successfully.

Tracker server directory and file structure: after the Tracker service starts successfully, the data and logs directories are created under base_path. The directory structure is as follows:


     
    ${base_path}
      |__data
      |    |__storage_groups.dat: storage group information
      |    |__storage_servers.dat: storage server list
      |__logs
           |__trackerd.log: tracker server log file

Configuring Storage Services

Go to the /etc/fdfs directory, copy the FastDFS storage sample configuration file storage.conf.sample, and name it storage.conf


     
    # cd /etc/fdfs
    # cp storage.conf.sample storage.conf
    # vi storage.conf

Edit storage.conf


     
    # whether this configuration file is disabled; false means it takes effect
    disabled=false
    # storage server group (volume) name
    group_name=group1
    # storage server service port
    port=23000
    # heartbeat interval, in seconds (the storage actively sends heartbeats to the tracker server)
    heart_beat_interval=30
    # base path for data and logs; the root directory must exist, subdirectories are created automatically
    base_path=/home/data/fastdfs/storage
    # the storage server supports multiple paths for storing files; set the number of store paths here (usually only one is configured)
    store_path_count=1
    # configure the store_path_count paths one by one, with the index starting from 0;
    # if store_path0 is not configured, it defaults to the same path as base_path
    store_path0=/home/data/fastdfs/storage
    # FastDFS stores files in two levels of directories; this sets the number of subdirectories per level.
    # if set to N (e.g. 256), the storage server automatically creates N * N subdirectories under store_path when it first runs
    subdir_count_per_path=256
    # tracker_server list; the storage actively connects to the tracker_server.
    # if there are multiple tracker servers, write one line for each
    tracker_server=192.168.1.190:22122
    # time window during which synchronization is allowed (default: the whole day), used to avoid synchronizing at peak hours
    sync_start_time=00:00
    sync_end_time=23:59

Use ln -s to establish a soft link


     
    ln -s /usr/bin/fdfs_storaged /usr/local/bin

Start the service


     
    service fdfs_storaged start

Check that the port is listening


     
    netstat -unltp|grep fdfs

Make sure the Tracker is running before starting the Storage. On a successful first start, the data and logs directories are created under /home/data/fastdfs/storage. If port 23000 is being listened on, the Storage service has started successfully.

Check whether the Storage and Tracker are communicating


     
    /usr/bin/fdfs_monitor /etc/fdfs/storage.conf

Configuring the FastDFS Nginx module

The software package version
openresty v1.13.6.1
fastdfs-nginx-module v1.1.6

FastDFS stores files on Storage servers via the Tracker server, but files need to be replicated between the Storage servers in the same group, which introduces a synchronization delay.

Suppose the Tracker server uploads a file to the Storage at 192.168.1.190 and the file ID is returned to the client. The FastDFS storage cluster mechanism then synchronizes this file to the other Storage servers in the same group. If the client uses the file ID to fetch the file from one of those servers before replication has completed, the file cannot be accessed. fastdfs-nginx-module redirects such requests to the source server, avoiding the access failures on the client caused by replication delay.

Nginx and fastdfs-nginx-module:

It is recommended that you install the following development libraries using yum:


     
    yum install readline-devel pcre-devel openssl-devel -y

Download the latest version and unzip:


     
    wget https://openresty.org/download/openresty-1.13.6.1.tar.gz
    tar -xvf openresty-1.13.6.1.tar.gz
    wget https://github.com/happyfish100/fastdfs-nginx-module/archive/master.zip
    unzip master.zip

Install nginx and add the fastdfs-nginx-module module:


     
    ./configure --add-module=../fastdfs-nginx-module-master/src/

Compile, install:


     
    make && make install

View Nginx modules:


     
    /usr/local/openresty/nginx/sbin/nginx -V

If the configure arguments in the output include the fastdfs-nginx-module path, the module was added successfully.

Copy the fastdfs-nginx-module configuration file to /etc/fdfs and modify it:


     
    cp /fastdfs-nginx-module/src/mod_fastdfs.conf /etc/fdfs/

     
    # connection timeout
    connect_timeout=10
    # tracker server address
    tracker_server=192.168.1.190:22122
    # storage server default port
    storage_server_port=23000
    # set to true if the uri of the file ID contains /group**
    url_have_group_name = true
    # must be the same as the store_path0 configured in storage.conf
    store_path0=/home/data/fastdfs/storage

Copy some of the FastDFS configuration files to the /etc/fdfs directory:


     
    cp /fastdfs-nginx-module/src/http.conf /etc/fdfs/
    cp /fastdfs-nginx-module/src/mime.types /etc/fdfs/

Configure nginx, modify nginx.conf:


     
    location ~/group([0-9])/M00 {
        ngx_fastdfs_module;
    }

Start Nginx:


     
    [root@iz2ze7tgu9zb2gr6av1tysz sbin]# ./nginx
    ngx_http_fastdfs_set pid=9236

Test upload:


     
    [root@iz2ze7tgu9zb2gr6av1tysz fdfs]# /usr/bin/fdfs_upload_file /etc/fdfs/client.conf /etc/fdfs/4.jpg
    group1/M00/00/00/rBD8EFqVACuAI9mcAAC_ornlYSU088.jpg

Deployment structure diagram:

Java client integration

Add the dependency to pom.xml:


     
    <!-- fastdfs -->
    <dependency>
        <groupId>org.csource</groupId>
        <artifactId>fastdfs-client-java</artifactId>
        <version>1.27</version>
    </dependency>

fdfs_client.conf configuration:


     
    # timeout for connecting to the tracker server
    connect_timeout = 2
    # network (socket) timeout
    network_timeout = 30
    # file content charset
    charset = UTF-8
    # tracker server HTTP port
    http.tracker_http_port = 8080
    http.anti_steal_token = no
    http.secret_key = FastDFS1234567890
    # tracker server IP address and port
    tracker_server = 192.168.1.190:22122

FastDFSClient upload class:


     
    import java.io.ByteArrayInputStream;
    import java.io.File;
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.util.HashMap;
    import java.util.Iterator;
    import java.util.Map;

    import org.apache.commons.io.IOUtils;
    import org.csource.common.MyException;
    import org.csource.common.NameValuePair;
    import org.csource.fastdfs.ClientGlobal;
    import org.csource.fastdfs.StorageClient;
    import org.csource.fastdfs.StorageServer;
    import org.csource.fastdfs.TrackerClient;
    import org.csource.fastdfs.TrackerServer;

    public class FastDFSClient {
        private static final String CONFIG_FILENAME = "D:\\itstyle\\src\\main\\resources\\fdfs_client.conf";
        private static final String GROUP_NAME = "market1";
        private TrackerClient trackerClient = null;
        private TrackerServer trackerServer = null;
        private StorageServer storageServer = null;
        private StorageClient storageClient = null;

        static {
            try {
                ClientGlobal.init(CONFIG_FILENAME);
            } catch (IOException e) {
                e.printStackTrace();
            } catch (MyException e) {
                e.printStackTrace();
            }
        }

        public FastDFSClient() throws Exception {
            trackerClient = new TrackerClient(ClientGlobal.g_tracker_group);
            trackerServer = trackerClient.getConnection();
            storageServer = trackerClient.getStoreStorage(trackerServer);
            storageClient = new StorageClient(trackerServer, storageServer);
        }

        /**
         * Upload a file
         * @param file the file object
         * @param fileName the file extension name
         * @return
         */
        public String[] uploadFile(File file, String fileName) {
            return uploadFile(file, fileName, null);
        }

        /**
         * Upload a file
         * @param file the file object
         * @param fileName the file extension name
         * @param metaList file metadata
         * @return
         */
        public String[] uploadFile(File file, String fileName, Map<String, String> metaList) {
            try {
                byte[] buff = IOUtils.toByteArray(new FileInputStream(file));
                NameValuePair[] nameValuePairs = null;
                if (metaList != null) {
                    nameValuePairs = new NameValuePair[metaList.size()];
                    int index = 0;
                    for (Iterator<Map.Entry<String, String>> iterator = metaList.entrySet().iterator(); iterator.hasNext();) {
                        Map.Entry<String, String> entry = iterator.next();
                        String name = entry.getKey();
                        String value = entry.getValue();
                        nameValuePairs[index++] = new NameValuePair(name, value);
                    }
                }
                return storageClient.upload_file(GROUP_NAME, buff, fileName, nameValuePairs);
            } catch (Exception e) {
                e.printStackTrace();
            }
            return null;
        }

        /**
         * Get file metadata
         * @param fileId the file ID
         * @return
         */
        public Map<String, String> getFileMetadata(String groupname, String fileId) {
            try {
                NameValuePair[] metaList = storageClient.get_metadata(groupname, fileId);
                if (metaList != null) {
                    HashMap<String, String> map = new HashMap<String, String>();
                    for (NameValuePair metaItem : metaList) {
                        map.put(metaItem.getName(), metaItem.getValue());
                    }
                    return map;
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
            return null;
        }

        /**
         * Delete a file
         * @param fileId the file ID
         * @return -1 on delete failure, 0 otherwise
         */
        public int deleteFile(String groupname, String fileId) {
            try {
                return storageClient.delete_file(groupname, fileId);
            } catch (Exception e) {
                e.printStackTrace();
            }
            return -1;
        }

        /**
         * Download a file
         * @param fileId the file ID (returned after a successful upload)
         * @param outFile the local file to download to
         * @return
         */
        public int downloadFile(String groupName, String fileId, File outFile) {
            FileOutputStream fos = null;
            try {
                byte[] content = storageClient.download_file(groupName, fileId);
                fos = new FileOutputStream(outFile);
                InputStream ips = new ByteArrayInputStream(content);
                IOUtils.copy(ips, fos);
                return 0;
            } catch (Exception e) {
                e.printStackTrace();
            } finally {
                if (fos != null) {
                    try {
                        fos.close();
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
            }
            return -1;
        }

        public static void main(String[] args) throws Exception {
            FastDFSClient client = new FastDFSClient();
            File file = new File("D:\\23456.png");
            String[] result = client.uploadFile(file, "png");
            System.out.println(result.length);
            System.out.println(result[0]);
            System.out.println(result[1]);
        }
    }

Executing the main method returns:


     
    2
    group1
    M00/00/00/rBD8EFqTrNyAWyAkAAKCRJfpzAQ227.png
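
The two values after the length are exactly the group name and the remote file name, so they can be fed straight back into the class above. A minimal usage sketch (the local paths are placeholders):

    // result[0] = group name, result[1] = remote file name
    FastDFSClient client = new FastDFSClient();
    String[] result = client.uploadFile(new File("D:\\23456.png"), "png");
    // download the file that was just uploaded to a local path
    client.downloadFile(result[0], result[1], new File("D:\\23456_copy.png"));
    // deleteFile(result[0], result[1]) would remove it again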

Source:

https://gitee.com/52itstyle/spring-boot-fastdfs

