🚀 Functional design

  1. Login authentication: you must log in to the system first; without logging in you cannot access the back-end interfaces or the static resources of the web disk
  2. Upload: resumable upload and instant upload (files that already exist on the server finish in seconds)
  3. File sharing: generates a random key string and a resource access address; the resource can be accessed once the key is entered correctly, and the key expires after a set period (see the sketch after this list)
  4. Recycle bin: a deleted file is moved to the recycle bin by default and automatically removed seven days later
  5. File operations: new folder, rename, move, delete, batch delete
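
The sharing feature in item 3 is not walked through in the implementation section below. As a rough illustration only, here is a minimal sketch of how a random key with an expiry window could work on the Koa side (the shares map, the route path and the 24-hour TTL are assumptions, not this project's actual code):

const crypto = require('crypto')

// In-memory share registry: key -> { fullPath, expiresAt }.
// A real implementation would more likely persist this in the database.
const shares = new Map()
const SHARE_TTL = 24 * 60 * 60 * 1000 // assumed 24-hour expiry window

function createShare (fullPath) {
  const key = crypto.randomBytes(4).toString('hex') // random 8-character key
  shares.set(key, { fullPath, expiresAt: Date.now() + SHARE_TTL })
  return { key, url: `/storage/share/${key}` } // hypothetical access address
}

function resolveShare (key) {
  const record = shares.get(key)
  if (!record) return null
  if (Date.now() > record.expiresAt) {
    shares.delete(key) // expired: drop the key and refuse access
    return null
  }
  return record.fullPath
}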

🌈 Technology selection

  • Front end: built with Vue, using ElementUI for the UI and the vue-simple-uploader plugin for resumable upload and instant upload.
  • Back end: implemented with Koa, which directly serves the static resources (i.e. the personal web disk directory) with authentication added, while native Node.js handles file management and uploads.

🌟 Questions and reflections

Q: Is a database needed to store the file information?

In principle, files are added, deleted and modified with native Node.js, which does not require a database. However, native Node.js cannot directly read a file's MD5 value, so it has no way to match a local file against the MD5 identifier sent by the client for the resumable and instant upload features. A table recording each file's MD5, path and other information is therefore still needed.
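
The article does not show the table definition; here is a sketch of what the storage table could look like, inferred from the queries used later in this article (column types and lengths are assumptions):

// Inferred sketch of the storage table; column types are assumptions.
await query(`
  create table if not exists storage (
    id          varchar(16) primary key,  -- e.g. 'F' + an 8-character random string
    md5         char(32),                 -- the file's MD5 identifier
    fullPath    varchar(1024),            -- logical path, e.g. '$Root/docs/a.pdf'
    isComplete  tinyint default 0,        -- 1 once all slices are merged
    isDel       tinyint default 0,        -- 1 while the file sits in the recycle bin
    updatedTime datetime
  )
`)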

Q: If the database records each file's MD5, how can the table be kept in sync with the physical storage on disk?

If files are added, deleted or modified directly in the web disk directory through Windows rather than through this file management system, the application cannot listen for those changes and the table will not be updated. For example, a file may be deleted on disk while the table still records it as uploaded.

My first idea was to use a timer to synchronize the local files with the table, but with many files or deeply nested directories the performance would be poor, so that approach was dropped.

This information is only used by the resumable and instant upload features, so the following solution is adopted: the pre-probe request directly checks whether the database record matches the physical storage; if it does not, the local file is treated as missing and must be uploaded again. (In principle, files should be managed through this system rather than by operating on the directory directly in Windows.)

Q: The same file exists in different directories of the web disk. If it is deleted from those directories at the same time, will the copies conflict in the recycle bin?

When a file is deleted, it is renamed to the original file name plus the deletion time (a yyyyMMddHHmmss timestamp) and moved to the recycle bin directory. At the same time, the deletion, the pre-deletion path and the deletion time are recorded in the database, which is what makes restoring files and periodically cleaning the recycle bin possible.
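
A minimal sketch of the rename-and-move step described above (DateFormat and storageTrashPath are the helper and path used by the delete route shown later; the function wrapper itself is illustrative):

const fs = require('fs')

// Move a file into the recycle bin under a timestamped name so that
// files deleted from different directories cannot collide.
function moveToTrash (realPath, fileName) {
  const time = DateFormat(new Date(), 'yyyyMMddHHmmss')
  const dot = fileName.lastIndexOf('.')
  const prefix = dot > 0 ? fileName.slice(0, dot) : fileName
  const suffix = dot > 0 ? fileName.slice(dot) : ''
  const trashName = `${prefix}-${time}${suffix}`
  if (!fs.existsSync(storageTrashPath)) fs.mkdirSync(storageTrashPath)
  fs.renameSync(realPath, `${storageTrashPath}/${trashName}`)
  return trashName // recorded in the database together with the original path and time
}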

Q: A folder has no MD5 value. How can a deleted folder be restored?

Deleting a folder works the same way as deleting a file: the folder is renamed to its name plus a timestamp and moved to the recycle bin directory. However, a separate table is needed in the database to record folder deletions.
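
Again the schema is not shown in the article; a sketch of the trash_folder table inferred from the delete and restore routes shown later (column types are assumptions):

// Inferred sketch of the trash_folder table; column types are assumptions.
await query(`
  create table if not exists trash_folder (
    id          varchar(16) primary key,  -- 'D' + an 8-character random string
    folderName  varchar(255),             -- renamed folder name inside the recycle bin
    fromPath    varchar(1024),            -- original path, used when restoring
    updatedTime datetime                  -- deletion time, used for periodic cleanup
  )
`)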

✨ Implementation

Login authentication

A session is created at login, and a piece of middleware then performs authentication: without a session, no request other than the login interface is allowed, including requests for static resources. The static resource server is built with koa-static, with the defer option set to true so that requests pass through the authentication middleware first.

// ...
app.use(async (ctx, next) => {
  // Authentication middleware: every /storage request except login requires a session
  if (ctx.url.includes('/storage') && ctx.url !== '/storage/login') {
    if (!ctx.session.isLogin) {
      ctx.body = r.loginError()
      return
    }
  }
  await next()
})

// ...
app.use(static(__dirname + '/public', {
  defer: true // serve static files only after the auth middleware has run
}))

// ...
const router = new Router({
  prefix: '/storage'
})

// ...
router.post('/login', async ctx => {
  const { access } = ctx.request.body
  if (!access) {
    ctx.body = r.parameterError()
    return
  }
  try {
    const base64Decode = Buffer.from(access, 'base64')
    const genAccess = base64Decode.toString()
    if (storageRootKey !== genAccess) {
      ctx.body = r.error(311, 'Password error')
      return
    }
    ctx.session.isLogin = true
    logger('login Storage')
    ctx.body = r.success()
  } catch (e) {
    ctx.body = r.error(310, 'Login failed')
  }
})

The file system interfaces live under /storage/*, and the static resource root is public/storage. During login, the front end encodes the password as Base64 and the back end decodes it.
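
On the front-end side this is a plain Base64 encode before the request is sent; a minimal sketch (axios as the HTTP client is an assumption, any client works the same way):

// Base64-encode the password and post it to the login route above.
// Mirrored on the server by Buffer.from(access, 'base64').toString().
import axios from 'axios'

async function login (password) {
  const access = window.btoa(password)
  const { data } = await axios.post('/api/storage/login', { access })
  return data
}

Note that Base64 is an encoding, not encryption; it only keeps the plain string out of the request body, so HTTPS is still required for real protection.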

If you access static resources without logging in, an error message is displayed.

Direct access without login

Access after login

Resumable upload and instant upload

File MD5 calculation

To implement resumable upload and instant upload, a unique identifier for each file is needed, and the best choice is the file's MD5 value.

vue-simple-uploader does not directly provide an API for computing a file's MD5, so this has to be implemented manually. The spark-md5 plugin is used to compute the MD5: in the file-added event, the file is read with a FileReader and the MD5 is accumulated slice by slice in a loop.

Do not read the entire file at once to compute its MD5: reading a large file in one go may freeze the browser. Iterating over the slices keeps the computation pressure on the browser low.

methods: {
  handleFileAdd (file) {
    const fileList = this.$refs.uploader.files
    const index = fileList.findIndex(item => item.name === file.name)
    if (~index) {
      // The same file name is already in the list: drop the duplicate
      file.removeFile(file)
    } else {
      file.targetPath = this.currentPath
      this.computeMD5(file)
    }
  },
  computeMD5 (file) {
    const fileReader = new FileReader()
    const blobSlice = File.prototype.slice || File.prototype.mozSlice || File.prototype.webkitSlice
    let currentChunk = 0
    const chunkSize = CHUNK_SIZE
    const chunks = Math.ceil(file.size / chunkSize)
    const spark = new SparkMD5.ArrayBuffer()
    this.$nextTick(() => {
      this.createMD5Element(file)
    })
    loadNext()
    fileReader.onload = e => {
      spark.append(e.target.result)
      if (currentChunk < chunks) {
        currentChunk++
        loadNext()
        this.$nextTick(() => {
          this.setMD5ElementText(file, `Checking MD5 ${((currentChunk / chunks) * 100).toFixed(0)}%`)
          document.querySelector(`.uploader-list .file-${file.id} .uploader-file-actions`).style.display = 'none'
        })
      } else {
        const md5 = spark.end()
        file.uniqueIdentifier = md5 // the MD5 becomes the file's unique identifier
        file.resume()
        this.destroyMD5Element(file)
        document.querySelector(`.uploader-list .file-${file.id} .uploader-file-actions`).style.display = 'block'
      }
    }
    fileReader.onerror = () => {
      this.$nextTick(() => {
        this.setMD5ElementText(file, 'MD5 verification failed')
      })
      file.cancel()
    }
    function loadNext () {
      const start = currentChunk * chunkSize
      const end = ((start + chunkSize) >= file.size) ? file.size : start + chunkSize
      fileReader.readAsArrayBuffer(blobSlice.call(file.file, start, end))
    }
  },
  createMD5Element (file) {
    this.$nextTick(() => {
      const el = document.querySelector(`.uploader-list .file-${file.id} .uploader-file-status`)
      const MD5Status = document.createElement('div')
      MD5Status.setAttribute('class', 'md5-status')
      el.appendChild(MD5Status)
    })
  },
  destroyMD5Element (file) {
    this.$nextTick(() => {
      const el = document.querySelector(`.uploader-list .file-${file.id} .uploader-file-status .md5-status`)
      if (el) {
        el.parentNode.removeChild(el)
      }
    })
  },
  setMD5ElementText (file, text) {
    const el = document.querySelector(`.uploader-list .file-${file.id} .uploader-file-status .md5-status`)
    if (el) {
      el.innerText = text
    }
  }
}

The computed MD5 directly replaces the uniqueIdentifier attribute of the file object, so the identifier field in every subsequent request is the file's MD5, which the back end uses to identify the file.

vue-simple-uploader's file list has no built-in state for MD5 calculation, so an extra MD5-status element is overlaid on the original file list entry with CSS, and the related events then hide it again once the calculation finishes.

For large files, computing the MD5 over every slice is still slow. A sampling approach can speed this up: hash only the first slice, the last slice and a certain number of slices in between.
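
The sampled variant is not shown in the article; a minimal sketch that hashes the first and last slices plus every 10th slice in between (the sampling interval of 10 is an assumption, and the result is only a fingerprint rather than the true file MD5, so both ends must agree on the same sampling rule):

// Sampled MD5 for large files: hash the first and last slices plus
// every 10th slice in between. file.file is the native File object
// wrapped by vue-simple-uploader, as in computeMD5 above.
function computeSampledMD5 (file, chunkSize) {
  return new Promise((resolve, reject) => {
    const chunks = Math.max(1, Math.ceil(file.size / chunkSize))
    const indexes = new Set([0, chunks - 1])
    for (let i = 10; i < chunks - 1; i += 10) indexes.add(i)
    const order = [...indexes].sort((a, b) => a - b)
    const spark = new SparkMD5.ArrayBuffer()
    const reader = new FileReader()
    let cursor = 0
    const loadNext = () => {
      const start = order[cursor] * chunkSize
      const end = Math.min(start + chunkSize, file.size)
      reader.readAsArrayBuffer(file.file.slice(start, end))
    }
    reader.onload = e => {
      spark.append(e.target.result)
      cursor++
      cursor < order.length ? loadNext() : resolve(spark.end())
    }
    reader.onerror = reject
    loadNext()
  })
}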

Resumable upload

By default, vue-simple-uploader provides pause/resume for file uploads, and an upload can be paused at any time. But this alone is not true resumable uploading: after a page refresh the upload state is lost and the upload restarts from the first slice. Keeping the state in localStorage is still not practical. The best approach is for the back end to report whether each slice is needed, since it knows which slices of the file have already been uploaded.

If the testChunks option is set to true (the default), each slice first sends a pre-probe request without the file stream, and the HTTP status code returned by the back end (configurable) determines whether the slice itself needs to be sent.

By default every slice sends its own pre-probe request, so a file with 10 slices generates 20 requests, which is wasteful. Ideally only one pre-probe request is sent. vue-simple-uploader anticipates this and provides the checkChunkUploadedByResponse option: the back end answers the single pre-probe request with the array of slices it already has, and the front end then decides locally whether each slice request needs to be sent.

Example: a file upload is paused halfway, the page is refreshed, and the same file is uploaded again. After the MD5 check, the pre-probe request returns the array of existing slices [1–25], and the real slice requests then start directly from slice 26.

Front-end handling

// Front-end vue-simple-uploader configuration
options: {
  target: (instance, chunk, isTest) => isTest ? '/api/storage/testUpload' : '/api/storage/upload',
  query: () => {
    return {
      targetPath: this.currentPath
    }
  },
  chunkSize: CHUNK_SIZE,
  allowDuplicateUploads: false,
  checkChunkUploadedByResponse: (chunk, message) => {
    const response = JSON.parse(message)
    const existChunk = response.data.map(item => ~~item) // cast the returned slice names to numbers
    return existChunk.includes(chunk.offset + 1)
  }
}

/storage/testUpload is the pre-probe request (GET) and /storage/upload is the real slice upload request (POST). checkChunkUploadedByResponse makes sure that only slices the back end does not already have are uploaded.

Back-end handling

router.get('/testUpload', async ctx => {
  const { identifier, filename, targetPath = '$Root', totalChunks } = ctx.query
  const chunkFolderURL = `${storageChunkPath}/${identifier}`
  try {
    const checkExistResult = await query(`select * from storage where id = ? and isComplete = 1 and isDel = 0`, identifier)
    // Check whether the file has already been uploaded in its entirety
    if (checkExistResult.length > 0) {
      let { fullPath } = checkExistResult[0]
      let realPath = fullPath.replace('$Root', storageRootPath)
      // Check whether the DB record is consistent with the physical storage
      if (fs.existsSync(realPath)) {
        // If the target location differs from the previously uploaded location, copy the file over
        let targetFilePath = `${targetPath}/${filename}`
        if (fullPath !== targetFilePath) {
          targetFilePath = targetFilePath.replace('$Root', storageRootPath)
          fs.copyFileSync(realPath, targetFilePath)
        }
        // Return the complete slice array: nothing needs to be uploaded
        const chunksArr = Array.from({ length: totalChunks }, (item, index) => index + 1)
        ctx.body = r.successData(chunksArr)
        return
      }
    }
    if (!fs.existsSync(chunkFolderURL)) {
      fs.mkdirSync(chunkFolderURL, { recursive: true })
      const now = DateFormat(new Date(), 'yyyy-MM-dd HH:mm:ss')
      const sql = `replace into storage(id, fullPath, updatedTime, isComplete, isDel) values(?, ?, ?, 0, 0)`
      await query(sql, [identifier, `${targetPath}/${filename}`, now])
      ctx.body = r.successData([])
    } else {
      const ls = fs.readdirSync(chunkFolderURL)
      ctx.body = r.successData(ls)
    }
  } catch (e) {
    ctx.status = 501
    ctx.body = r.error(306, e)
  }
})

router.post('/upload', async ctx => {
  const { chunkNumber, identifier, filename, totalChunks, targetPath = '$Root' } = ctx.request.body
  const { file } = ctx.request.files
  const chunkFolderURL = `./public/storage-chunk/${identifier}`
  const chunkFileURL = `${chunkFolderURL}/${chunkNumber}`
  if (chunkNumber !== totalChunks) {
    // Not the last slice: persist it into the chunk folder via a pipe stream
    const reader = fs.createReadStream(file.path)
    const upStream = fs.createWriteStream(chunkFileURL)
    reader.pipe(upStream)
    ctx.body = r.success()
  } else {
    // Last slice: merge all slices into the target file
    const targetFile = `${targetPath}/${filename}`.replace('$Root', storageRootPath)
    fs.writeFileSync(targetFile, '')
    try {
      for (let i = 1; i <= totalChunks; i++) {
        // The last slice is read straight from its temporary upload file
        const url = i == totalChunks ? file.path : `${chunkFolderURL}/${i}`
        const buffer = fs.readFileSync(url)
        fs.appendFileSync(targetFile, buffer)
      }
      const now = DateFormat(new Date(), 'yyyy-MM-dd HH:mm:ss')
      const sql = `update storage set isComplete = 1, updatedTime = ? where id = ?`
      await query(sql, [now, identifier])
      ctx.body = r.success()
      deleteFolder(chunkFolderURL)
      logger('File uploaded successfully', 1, `targetFile: ${targetFile}, MD5: ${identifier}, slice sources deleted successfully`)
    } catch (e) {
      ctx.status = 501
      ctx.body = r.error(501, e)
      logger('File merge failed', 0, `Fragment lost => ${e}`)
      fs.unlinkSync(targetFile)
    }
  }
})

In the testUpload request, the array of existing slices is derived from the database and the local chunk folder and returned to the front end. If the file has never been uploaded before, a database record is created first.

In the upload request, a Node.js pipe stream reads each slice and writes it into a chunk folder named after the file's MD5 value. When the last slice arrives, the merge is performed (note that the last slice is not saved into the chunk folder, because its write stream would not be closed yet; instead, its temporary upload file is read directly at that moment). Once the merge completes, the chunk folder is deleted and the database record is updated to mark the file as complete.

When a file that already exists locally is uploaded, the database records that the file with this MD5 is complete, so the pre-probe request returns the full slice array and the front end sends no upload requests at all: this is instant upload. Even if the upload target directory differs from the directory of the existing local file, the pre-probe request recognizes this and performs a copy, so the front end still does not need to upload again.

Resumable upload demo

The upload is paused, the page refreshed, and the same file uploaded again; the upload resumes from where it was paused.

Instant upload demo

Uploading the same file as above returns success immediately, because it is recognized as an already existing file.

With that, resumable upload and instant upload are complete on both the front end and the back end.

The system also includes file move, delete and download functions; these are relatively simple and essentially implemented with the Node.js fs module, so they are not detailed here.

Back-end source code

Because the back end is embedded in another of my systems, the whole Koa back end has not been open-sourced; the implementations of some of its interfaces are listed below.

  • Obtaining the captcha

    const svgCaptcha = require('svg-captcha')
    // ...

    // Get the captcha
    router.get('/captcha', async ctx => {
      const c = svgCaptcha.create({
        background: '#f5f5f7'
      })
      const captcha = 'data:image/svg+xml;base64,' + Buffer.from(c.data).toString('base64')
      ctx.session.captcha = c.text.toLocaleLowerCase()
      ctx.body = r.successData({ captcha })
    })
  • Login

    // Login
    router.post('/login', async ctx => {
      const { username, password, captcha } = ctx.request.body
      if (!username || !password || !captcha) {
        ctx.body = r.parameterError()
        return
      }
      // The session captcha was stored lowercased, so compare case-insensitively
      if (captcha.toLocaleLowerCase() !== ctx.session.captcha) {
        ctx.body = r.error(311, 'Verification code error')
        return
      }
      try {
        const base64Decode = Buffer.from(password, 'base64')
        const genPwd = base64Decode.toString()
        const result = await query(`select * from storage_user where username = ? and password = ?`, [username, genPwd])
        if (!result || result.length === 0) {
          ctx.body = r.error(311, 'Wrong account or password')
          return
        }
        ctx.session.user = username
        logger('login Storage')
        ctx.body = r.success()
      } catch (e) {
        ctx.body = r.error(310, 'Login failed')
      }
    })
  • Getting the files in the current directory

    // Get the files in the current directory
    router.get('/getFileList', async ctx => {
      const { currentPath = '$Root' } = ctx.query
      const storageURL = currentPath.replace('$Root', storageRootPath)
      const ls = fs.readdirSync(storageURL)
      const infoList = ls.map(item => {
        const info = fs.statSync(`${storageURL}/${item}`)
        return {
          fileName: item,
          fullPath: `${currentPath}/${item}`,
          isFolder: info.isDirectory(),
          size: info.size,
          updatedTime: DateFormat(new Date(info.mtime), 'yyyy-MM-dd HH:mm:ss')
        }
      })
      ctx.body = r.successData(infoList)
    })
  • Rename

    // Rename
    router.post('/rename', async ctx => {
      const { oldPath, newPath } = ctx.request.body
      if (!oldPath || !newPath) { ctx.body = r.parameterError(); return }
      const oldRealPath = oldPath.replace('$Root', storageRootPath)
      const newRealPath = newPath.replace('$Root', storageRootPath)
      try {
        await query(`update storage set fullPath = ? where fullPath = ?`, [newPath, oldPath])
        fs.renameSync(oldRealPath, newRealPath)
        ctx.body = r.success()
        logger('rename', 1, `${oldPath} => ${newPath}`)
      } catch (e) {
        ctx.body = r.error(312, e)
        logger('rename', 0, e)
      }
    })
  • Deleting files or folders

    // Delete files or folders
    router.post('/delete', async ctx => {
      let { deleteList } = ctx.request.body
      if (!deleteList || deleteList.length === 0) { ctx.body = r.parameterError(); return }
      try {
        await Promise.all(
          deleteList.map(async item => {
            const { target, isFolder } = item
            const oldPath = target.replace('$Root', storageRootPath)
            // Delete an empty folder directly, without going through the recycle bin
            if (isFolder) {
              const ls = fs.readdirSync(oldPath)
              if (ls.length === 0) {
                fs.rmdirSync(oldPath)
                ctx.body = r.success()
                logger('Delete file or folder', 1, `${oldPath} (deleted directly)`)
                return
              }
            }
            try {
              const time = DateFormat(new Date(), 'yyyyMMddHHmmss')
              const pathArr = target.split('/')
              const fileName = pathArr[pathArr.length - 1]
              let newFileName
              if (isFolder) {
                newFileName = `${fileName}-${time}`
              } else {
                const fileNameArr = fileName.split('.')
                const prefix = fileNameArr.length > 1 ? fileNameArr.slice(0, fileNameArr.length - 1).join('.') : fileNameArr[0]
                const suffix = fileNameArr.length > 1 ? fileNameArr[fileNameArr.length - 1] : ''
                const dbFileInfo = await query(`select * from storage where fullPath = ?`, target)
                if (dbFileInfo.length > 0) {
                  // A recorded file is renamed to its database id to stay unique
                  newFileName = `${dbFileInfo[0].id}.${suffix}`
                } else {
                  newFileName = `${prefix}-${time}.${suffix}`
                }
              }
              const afterStorageTrashPath = `${storageTrashPath}/${newFileName}`
              if (!fs.existsSync(storageTrashPath)) fs.mkdirSync(storageTrashPath)
              fs.renameSync(oldPath, afterStorageTrashPath)
              if (isFolder) {
                const now = DateFormat(new Date(), 'yyyy-MM-dd HH:mm:ss')
                const id = 'D' + RandomString(8)
                await query(`insert into trash_folder(id, folderName, fromPath, updatedTime) values(?, ?, ?, ?)`, [id, newFileName, target, now])
              } else {
                await query(`update storage set isDel = 1 where fullPath = ?`, target)
              }
              logger('Delete file or folder', 1, `${oldPath}`)
              return Promise.resolve(1)
            } catch (e) {
              logger('Delete file or folder', 0, e)
              return Promise.reject(e)
            }
          })
        )
        ctx.body = r.success()
      } catch (e) {
        ctx.body = r.error(308, 'Operation failed, unknown error')
      }
    })
  • Moving or copying

    // Move or copy
    router.post('/move', async ctx => {
      const { moveFrom, moveTo, moveType = 0 } = ctx.request.body
      if (!moveTo || !moveFrom || moveFrom.length === 0) {
        ctx.body = r.parameterError()
        return
      }
      let sql = ``
      let paramsArr = []
      const moveToRealPath = moveTo.replace('$Root', storageRootPath)
      moveFrom.map(item => {
        const moveFromRealPath = item.replace('$Root', storageRootPath)
        const arr = item.split('/')
        const fileName = arr[arr.length - 1]
        if (moveFromRealPath !== `${moveToRealPath}/${fileName}`) {
          if (moveType === 0) {
            // Move: rename on disk and update the stored path
            fs.renameSync(moveFromRealPath, `${moveToRealPath}/${fileName}`)
            sql += `update storage set fullPath = ? where fullPath = ?;`
            paramsArr.push(`${moveTo}/${fileName}`, item)
          } else {
            // Copy: duplicate the file and insert a new record reusing the source MD5
            const now = DateFormat(new Date(), 'yyyy-MM-dd HH:mm:ss')
            const id = 'F' + RandomString(8)
            fs.copyFileSync(moveFromRealPath, `${moveToRealPath}/${fileName}`)
            sql += `insert into storage(id, md5, fullPath, isComplete, isDel, updatedTime) values(?, (select md5 from storage a where fullPath = ?), ?, 1, 0, ?);`
            paramsArr.push(id, item, moveTo, now)
          }
        }
      })
      if (sql) {
        await transactionQuery(sql, paramsArr)
      }
      ctx.body = r.success()
      logger(moveType === 0 ? 'Move file' : 'Copy file', 1, `MoveFrom: ${moveFrom.join(', ')} => MoveTo: ${moveTo}`)
    })
  • Creating a folder

    // Create a folder
    router.post('/createFolder', async ctx => {
      const { folderName } = ctx.request.body
      if (!folderName) { ctx.body = r.parameterError(); return }
      let newPath = folderName.replace('$Root', storageRootPath)
      try {
        fs.mkdirSync(newPath)
        ctx.body = r.success()
        logger('New Folder', 1, `${newPath}`)
      } catch (e) {
        ctx.body = r.error(312, e)
        logger('New Folder', 0, e)
      }
    })
  • Slice upload pre-probe

    router.get('/testUpload', async ctx => {
      const { identifier, filename, targetPath = '$Root', totalChunks } = ctx.query
      const chunkFolderURL = `${storageChunkPath}/${identifier}`
      try {
        const checkExistResult = await query(`select * from storage where md5 = ? and isComplete = 1 and isDel = 0`, identifier)
        // Check whether the file has already been uploaded in its entirety
        if (checkExistResult.length > 0) {
          let { fullPath } = checkExistResult[0]
          let realPath = fullPath.replace('$Root', storageRootPath)
          // If the target location differs from the existing location, copy the file over
          let targetFilePath = `${targetPath}/${filename}`
          if (fullPath !== targetFilePath) {
            targetFilePath = targetFilePath.replace('$Root', storageRootPath)
            fs.copyFileSync(realPath, targetFilePath)
          }
          // If the physical file is missing, insert a record for the new location
          if (!fs.existsSync(realPath)) {
            const now = DateFormat(new Date(), 'yyyy-MM-dd HH:mm:ss')
            const id = 'F' + RandomString(8)
            const sql = `insert into storage(id, md5, fullPath, updatedTime, isComplete, isDel) values(?, ?, ?, ?, 1, 0)`
            await query(sql, [id, identifier, targetFilePath, now])
          }
          ctx.body = r.successData(Array.from({ length: totalChunks }, (item, index) => ~~index + 1))
          return
        }
        if (!fs.existsSync(chunkFolderURL)) {
          fs.mkdirSync(chunkFolderURL, { recursive: true })
          const now = DateFormat(new Date(), 'yyyy-MM-dd HH:mm:ss')
          const id = 'F' + RandomString(8)
          const sql = `insert into storage(id, md5, fullPath, updatedTime, isComplete, isDel) values(?, ?, ?, ?, 0, 0)`
          await query(sql, [id, identifier, `${targetPath}/${filename}`, now])
          ctx.body = r.successData([])
        } else {
          const ls = fs.readdirSync(chunkFolderURL)
          ctx.body = r.successData(ls)
        }
      } catch (e) {
        ctx.status = 501
        ctx.body = r.error(306, e)
      }
    })
  • Slice upload

    router.post('/upload', async ctx => {
      const { chunkNumber, identifier, filename, totalChunks, targetPath = '$Root' } = ctx.request.body
      const { file } = ctx.request.files
      const chunkFolderURL = `./public/storage-chunk/${identifier}`
      const chunkFileURL = `${chunkFolderURL}/${chunkNumber}`
      if (chunkNumber !== totalChunks) {
        // Not the last slice: persist it into the chunk folder via a pipe stream
        const reader = fs.createReadStream(file.path)
        const upStream = fs.createWriteStream(chunkFileURL)
        reader.pipe(upStream)
        ctx.body = r.success()
      } else {
        // Last slice: merge all slices into the target file
        const targetFile = `${targetPath}/${filename}`.replace('$Root', storageRootPath)
        fs.writeFileSync(targetFile, '')
        try {
          for (let i = 1; i <= totalChunks; i++) {
            // The last slice is read straight from its temporary upload file
            const url = i == totalChunks ? file.path : `${chunkFolderURL}/${i}`
            const buffer = fs.readFileSync(url)
            fs.appendFileSync(targetFile, buffer)
          }
          const now = DateFormat(new Date(), 'yyyy-MM-dd HH:mm:ss')
          const sql = `update storage set isComplete = 1, updatedTime = ? where md5 = ?`
          await query(sql, [now, identifier])
          ctx.body = r.success()
          deleteFolder(chunkFolderURL)
          logger('File uploaded successfully', 1, `targetFile: ${targetFile}, MD5: ${identifier}, slice sources deleted successfully`)
        } catch (e) {
          ctx.status = 501
          ctx.body = r.error(501, e)
          logger('File merge failed', 0, `Fragment lost => ${e}`)
          fs.unlinkSync(targetFile)
        }
      }
    })
  • Getting the recycle bin file list

    // Get the list of files in the recycle bin directory
    router.get('/getTrashList', async ctx => {
      const weekAgo = new Date().setDate(new Date().getDate() - 8)
      const trashFileSql = `select id, md5, fullPath, DATE_FORMAT(updatedTime, '%Y-%m-%d %H:%i:%s') updatedTime from storage where isDel = 1 and updatedTime > ?`
      const trashFileList = await query(trashFileSql, weekAgo)
      const trashFolderSql = `select id, folderName, fromPath, DATE_FORMAT(updatedTime, '%Y-%m-%d %H:%i:%s') updatedTime from trash_folder where updatedTime > ?`
      const trashFolderList = await query(trashFolderSql, weekAgo)
      if (!trashFileList || !trashFolderList) {
        ctx.body = r.error()
        return
      }
      let trashListMap = {}
      trashFileList.map(item => {
        const pathArr = item.fullPath.split('/')
        const fileRealName = pathArr[pathArr.length - 1]
        const fileNameArr = fileRealName.split('.')
        const suffix = fileNameArr.length > 1 ? fileNameArr[fileNameArr.length - 1] : ''
        const fileName = `${item.id}.${suffix}`
        trashListMap[fileName] = {
          fileName,
          showFileName: fileRealName,
          fromPath: item.fullPath,
          updatedTime: item.updatedTime,
          isFolder: false
        }
      })
      trashFolderList.map(item => {
        const folderNameArr = item.folderName.split('/')
        const fileName = folderNameArr[folderNameArr.length - 1]
        trashListMap[fileName] = {
          fileName,
          fromPath: item.fromPath,
          updatedTime: item.updatedTime,
          isFolder: true
        }
      })
      const ls = fs.readdirSync(storageTrashPath)
      const result = ls.map(item => {
        if (trashListMap[item]) {
          return trashListMap[item]
        } else {
          // No DB record: recover the deletion time from the trailing timestamp in the name
          const arr = item.split('.')
          const fileName = arr.length > 1 ? arr.slice(0, arr.length - 1).join('.') : arr[0]
          const a = fileName.substr(-14)
          return {
            fileName: item,
            updatedTime: `${a.substr(0, 4)}-${a.substr(4, 2)}-${a.substr(6, 2)} ${a.substr(8, 2)}:${a.substr(10, 2)}:${a.substr(12, 2)}`
          }
        }
      })
      ctx.body = r.successData(result)
    })
  • Restoring recycle bin files

    router.post('/restore', async ctx => {
      let { restoreList } = ctx.request.body
      if (!restoreList || restoreList.length === 0) { ctx.body = r.parameterError(); return }
      try {
        let sql = ''
        let paramsArr = []
        await Promise.all(
          restoreList.map(item => {
            const oldPath = `${storageTrashPath}/${item.fileName}`
            const restorePath = item.fromPath.replace('$Root', storageRootPath)
            fs.renameSync(oldPath, restorePath)
            if (item.isFolder) {
              sql += `delete from trash_folder where folderName = ?;`
              paramsArr.push(item.fileName)
            } else {
              const id = item.fileName.split('.')[0]
              sql += `update storage set isDel = 0 where id = ?;`
              paramsArr.push(id)
            }
          })
        )
        await transactionQuery(sql, paramsArr)
        ctx.body = r.success()
        logger('Restore file', 1, `${restoreList.map(item => item.fileName).join(', ')}`)
      } catch (e) {
        ctx.body = r.error(e)
        logger('Restore file', 0, e.toString())
      }
    })
  • Permanently deleting recycle bin files

    router.post('/permanentlyDelete', async ctx => {
      let { deleteList } = ctx.request.body
      if (!deleteList || deleteList.length === 0) { ctx.body = r.parameterError(); return }
      try {
        let sql = ''
        let paramsArr = []
        await Promise.all(
          deleteList.map(item => {
            const oldPath = `${storageTrashPath}/${item.fileName}`
            if (item.isFolder) {
              deleteFolder(oldPath)
              sql += `delete from trash_folder where folderName = ?;`
              paramsArr.push(item.fileName)
            } else {
              fs.unlinkSync(oldPath)
              const id = item.fileName.split('.')[0]
              sql += `delete from storage where id = ?;`
              paramsArr.push(id)
            }
          })
        )
        await transactionQuery(sql, paramsArr)
        ctx.body = r.success()
        logger('Permanently delete files', 1, `${deleteList.map(item => item.fileName).join(', ')}`)
      } catch (e) {
        ctx.body = r.error(e)
        logger('Permanently delete files', 0, e.toString())
      }
    })

    Git: github.com/leon-kfd/Fi…