Abstract

Most popular pure front-end image compression schemes either change the image dimensions or cannot bring the compressed file size accurately into the expected range. To address this, the scheme in this paper uses binary search to find the optimal compression point, so that the image dimensions stay unchanged while the compressed file size is controlled precisely. Compared with pure front-end schemes of the same type, the scheme proposed here does not alter the image dimensions, and the accuracy of the compressed size is controllable, achieving the desired effect. The results show that the scheme is sound and reliable. 🤓 🤓 🤓

Preface

First of all, this is really not a rehash article. Although there are many articles about pure front-end compression on Juejin ⏬, a closer look shows that the compression results of those schemes cannot fully meet my needs. The core principle of the compression scheme in Zhang Xinxu's article is to use Canvas's drawImage() method: a large image is drawn onto a smaller canvas to compress it proportionally, but the size of the compressed file is uncontrollable. Many people may ask: isn't a smaller compressed image always better? No. Consider this scenario: when opening an account or verifying identity at a securities or fund company, the uploaded certificate photo should not be too large (that has many downsides), nor too small, and its dimensions must not change; otherwise the probability that the back-end OCR system fails to recognize it rises sharply, and key information on the certificate cannot be read. This requires the front end to compress the image as close to the expected size as possible. Another type of solution uses canvas.toDataURL(type, encoderOptions); its core principle is to set the encoderOptions value, i.e. the image output quality, to achieve compression. So, for a given expected file size, whether we can make the output file closest to the expected value by tuning the image quality is a question worth exploring.
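As an aside, the proportional downscaling behind the drawImage() scheme mentioned above can be sketched as a pure function (the helper name fitSize and the max-width parameter are illustrative, not from any of the articles discussed):

```typescript
// Compute target dimensions that keep the aspect ratio, as the
// drawImage()-onto-a-smaller-canvas scheme does. Illustrative helper,
// not part of the original scheme's code.
const fitSize = (
  width: number,
  height: number,
  maxWidth: number
): { width: number; height: number } => {
  if (width <= maxWidth) {
    return { width, height }; // already small enough, keep as-is
  }
  const ratio = maxWidth / width;
  return { width: maxWidth, height: Math.round(height * ratio) };
};

console.log(fitSize(4000, 3000, 1000)); // → { width: 1000, height: 750 }
```

This is exactly the behavior that makes the scheme unsuitable for the OCR scenario: the output dimensions change.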

Implementation approach

Combining the principles of the two schemes above, we use canvas.toBlob(callback, type, encoderOptions) and a binary search to find the optimal compression ratio (the encoderOptions value). Why toBlob() instead of toDataURL()? Mainly because toBlob is asynchronous while toDataURL is synchronous, so toDataURL blocks UI rendering while it runs. Specifically, toDataURL performs three steps: it moves the bitmap from the GPU to the CPU, the CPU converts it to an image format, and the result is base64-encoded into a string. toBlob performs the same bitmap readback and encoding, but asynchronously, so it does not block UI rendering, and it skips the base64 step entirely; in other words, toBlob gets more done with less. In addition, for the same image, the result of toDataURL takes up more memory than that of toBlob. toDataURL returns a USVString, which corresponds to the set of all possible sequences of Unicode scalar values; in JavaScript, USVString maps to String. It is about 37% larger than the original binary, and every time you use the string somewhere, such as the src of a DOM element, or send it in a network request, it is reallocated in memory. So you almost never need toDataURL: whatever toDataURL can do, toBlob can do better.
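The size overhead of base64 is easy to check on paper: every 3 bytes of binary become 4 characters, so the encoded form is at least a third larger before counting the `data:image/jpeg;base64,` prefix (exact published figures vary with padding and prefix). A quick back-of-envelope sketch:

```typescript
// Base64 encodes each 3-byte group as 4 characters (padded up),
// so a data URL string is roughly 4/3 the size of the raw binary.
const base64Length = (bytes: number): number => Math.ceil(bytes / 3) * 4;

const original = 300 * 1024;           // a 300KB image
const encoded = base64Length(original);
console.log(encoded / original);       // ≈ 1.33, i.e. ~33% larger
```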

The implementation steps are broken down as follows. The upload entry point in TSX:

 <input type={"file"} ref={frontRef} accept={"image/*"} onChange={uploadFile} multiple={true}/>

1. Obtain the uploaded file

const uploadFile = () => {
  const file = frontRef.current.files[0];
  // Validate the type before generating a preview
  const fileType = file.type.split('/')[1];
  if (fileType !== 'png' && fileType !== 'jpg' && fileType !== 'jpeg') {
    Toast.info('Please upload a PNG, JPG or JPEG image!');
    return;
  }
  const src = getObjectUrl(file);
  setCardFront(src);
};
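The type check can be factored into a pure helper, which also makes it trivial to unit-test (isSupportedImage is an illustrative name, not from the original code):

```typescript
// Accept only the MIME subtypes the compressor can handle.
// Illustrative refactor of the check inside uploadFile above.
const isSupportedImage = (mimeType: string): boolean => {
  const subtype = mimeType.split('/')[1];
  return subtype === 'png' || subtype === 'jpg' || subtype === 'jpeg';
};

console.log(isSupportedImage('image/png')); // → true
console.log(isSupportedImage('image/gif')); // → false
```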

2. Generate a preview address

Both FileReader and URL.createObjectURL can do the job, but the latter has better compatibility and performance. Of course, if you do not need to support some older devices, using the image file's base64 string directly as the preview is also fine.

const getObjectUrl = (file: any) => {
  let url = null;
  if (window.createObjectURL !== undefined) {
    // basic
    url = window.createObjectURL(file);
  } else if (window.URL !== undefined) {
    // mozilla (firefox)
    url = window.URL.createObjectURL(file);
  } else if (window.webkitURL !== undefined) {
    // webkit or chrome
    url = window.webkitURL.createObjectURL(file);
  }
  return url;
};

3. Convert the File to an img

An img DOM object is generated from the preview address; once its width and height are known, a canvas of the same size can be created in the next step.

const fileToImage = blob => new Promise( resolve => {
  const img = new Image();
  img.onload = () => resolve(img);
  img.src = getObjectUrl(blob);
});

4. Convert the img to a canvas

const imgToCanvas = (img) => new Promise(resolve => {
  const canvas = document.createElement('canvas');
  const context = canvas.getContext('2d');
  const imgWidth = img.width;
  const imgHeight = img.height;
  canvas.width = imgWidth;
  canvas.height = imgHeight;
  context.clearRect(0, 0, imgWidth, imgHeight);
  context.drawImage(img, 0, 0, imgWidth, imgHeight);
  resolve(canvas);
});

5. Convert the canvas to a Blob

const canvastoFile = (canvas, type, encoderOptions) =>
  new Promise(resolve => canvas.toBlob(blob => resolve(blob), type, encoderOptions));

6. Start compression

const compress = (originfile, limitedSize) => new Promise(async (resolve, reject) => {
  const originSize = originfile.size / 1024;
  if (originSize < limitedSize) {
    resolve({ file: originfile, msg: 'Already smaller than the limit, no compression needed!' });
    return;
  }
  // Turn the file into an image
  const img = await fileToImage(originfile);
  // Draw that image onto a canvas of the same size
  const canvas = await imgToCanvas(img);
  // To avoid JS floating-point issues, scale encoderOptions by 100 and work with integers.
  // Initial upper bound: quality 100, size MAX_SAFE_INTEGER
  const maxQualitySize = { encoderOptions: 100, size: Number.MAX_SAFE_INTEGER };
  // Initial lower bound: quality 0, size 0
  const minQualitySize = { encoderOptions: 0, size: 0 };
  let encoderOptions = 100; // current quality value
  let count = 0;            // attempt counter
  let errorMsg = '';        // error message
  let compressBlob = null;  // compressed Blob
  // Binary-search the quality value: at most 8 attempts
  while (count < 8) {
    compressBlob = await canvastoFile(canvas, 'image/jpeg', encoderOptions / 100);
    const compressSize = compressBlob.size / 1024;
    count++;
    if (compressSize === limitedSize) {
      // Exactly the expected size, done
      break;
    } else if (compressSize > limitedSize) {
      // Larger than expected: tighten the upper bound
      maxQualitySize.encoderOptions = encoderOptions;
      maxQualitySize.size = compressSize;
    } else {
      // Smaller than expected: raise the lower bound
      minQualitySize.encoderOptions = encoderOptions;
      minQualitySize.size = compressSize;
    }
    encoderOptions = (maxQualitySize.encoderOptions + minQualitySize.encoderOptions) >> 1;
    if (maxQualitySize.encoderOptions - minQualitySize.encoderOptions < 2) {
      if (!minQualitySize.size && encoderOptions) {
        encoderOptions = minQualitySize.encoderOptions;
      } else if (!minQualitySize.size && !encoderOptions) {
        errorMsg = 'Compression failed: cannot compress to the specified size';
        break;
      } else if (minQualitySize.size > limitedSize) {
        errorMsg = 'Compression failed: the result is still larger than the limit';
        break;
      } else {
        // Compression finished: settle on the best quality below the limit
        encoderOptions = minQualitySize.encoderOptions;
        compressBlob = await canvastoFile(canvas, 'image/jpeg', encoderOptions / 100);
        break;
      }
    }
  }
  if (errorMsg) {
    reject({ msg: errorMsg, file: originfile });
    return;
  }
  const compressSize = compressBlob.size / 1024;
  console.log(`Final quality: ${encoderOptions}, size: ${compressSize}KB`);
  // Wrap the Blob back into a File
  const compressedFile = new File([compressBlob], originfile.name, { type: 'image/jpeg' });
  resolve({ file: compressedFile, compressBlob, msg: 'Compression succeeded!' });
});

Why do at most eight attempts cover all possibilities? Because 2^7 = 128 > 100 and 1/128 = 0.0078125, while encoderOptions has a minimum granularity of 0.01, so searching any further makes no sense. Note: the MDN documentation for canvas.toBlob states explicitly that the optional encoderOptions parameter is a Number between 0 and 1, used to specify the image quality when the format is image/jpeg or image/webp. In other words, only JPEG/WebP support quality-based compression, so the type must be set to image/jpeg.
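The convergence argument can be checked with a pure simulation: swap canvas.toBlob for a monotonic size model and count iterations. This is a simplified variant of the loop in step 6, and sizeAtQuality is a toy stand-in, not a real JPEG encoder model:

```typescript
// Toy stand-in for canvas.toBlob: assume output size (KB) grows
// monotonically with quality. Real codecs are monotonic-ish, not linear.
const sizeAtQuality = (q: number): number => 10 + q * 5;

// Binary-search the largest integer quality in [0, 100] whose size
// does not exceed the limit, counting how many encodes are needed.
const searchQuality = (limitKB: number): { quality: number; tries: number } => {
  let lo = 0, hi = 100, best = 0, tries = 0;
  while (lo <= hi) {
    tries++;
    const mid = (lo + hi) >> 1;        // integer midpoint, as in the article
    if (sizeAtQuality(mid) <= limitKB) {
      best = mid;                       // fits: try a higher quality
      lo = mid + 1;
    } else {
      hi = mid - 1;                     // too big: lower the quality
    }
  }
  return { quality: best, tries };
};

console.log(searchQuality(300)); // 101 candidate values need at most 7 tries
```

Since ceil(log2(101)) = 7, the search over the 101 integer quality values always terminates within the article's budget of 8 encodes.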

7. Compression is complete

The back end and I agreed on a maximum size of 300KB, so we can happily send them the compression result res. 🤗 🤗 🤗

 compress(file, 300).then((res: any) => {
      Toast.success(res.msg);
      console.log(res);
    }, (err: any) => {
      Toast.fail(err.msg);
    });

8. Addendum

Much of the time, besides caring about the size of the compressed image, we also want to judge its clarity with our own eyes, so we can download the compressed image and inspect it.

const downLoadImg = (blob) => {
  const link = document.createElement("a");
  link.href = URL.createObjectURL(blob);
  link.download = 'fileName'; // name the downloaded file
  link.click();
  link.remove();
  URL.revokeObjectURL(link.href);
};

Conclusion

While writing this article, I also looked into several widely used compression components. A popular one is lrz, whose signature is lrz(file, [options]): width {Number} sets the maximum width of the image (defaulting to the original width; if height is not set, it scales with the width), alongside similar height, quality and fieldName options. lrz cannot compress an image close to a specified size, so you would still have to probe for the right quality value yourself. Another plugin is image-conversion, which offers compressAccurately(file, config) → Promise(Blob); it works by setting an error tolerance so that the output gets closer to a specified size, but it also bundles many other features that are not needed here. So it is better to roll up our sleeves, dig into the principle, and learn something along the way. Isn't that nicer? 😆

References

  • stackoverflow.com/questions/5…
  • Pure front-end JS image compression implementation
  • Front-end image preview: FileReader vs window.URL.createObjectURL
  • How do Canvas's toDataURL and toBlob compare?