An easy way to optimize images for publishing online. List of optimization methods

    The reason for this article was the following post: . At one time I had to write a lot of research code in C# implementing various compression and processing algorithms. It is no coincidence that I call this code "research": such code has unusual requirements. On the one hand, optimization is not critical, since the point is to test an idea. On the other hand, I would rather that test not take hours or days (when runs are made with various parameters, or a large corpus of test images is processed). The method used in the post above to access pixel brightnesses, bmp.GetPixel(x, y), is how my own first project started. It is the slowest, though simplest, way. Is it worth the bother? Let's measure.

    We will use the classic Bitmap (System.Drawing.Bitmap). This class is convenient because it hides from us the details of encoding raster formats - as a rule, they do not interest us. All common formats are supported, such as BMP, GIF, JPEG, PNG.

    By the way, here is a first tip for beginners. The Bitmap class has a constructor that opens a file with an image. But it has an unpleasant feature: it keeps the file open, so repeated attempts to open the same file lead to an exception. To correct this behavior, you can use this method, which forces the bitmap to "release" the file immediately:

    public static Bitmap LoadBitmap(string fileName)
    {
        using (FileStream fs = new FileStream(fileName, FileMode.Open, FileAccess.Read, FileShare.Read))
            return new Bitmap(fs);
    }

    Measurement technique

    We will measure by pumping the image-processing classic Lena (http://en.wikipedia.org/wiki/Lenna) into an array and back into a Bitmap. This is a free image that appears in a great many image-processing papers (and in the title picture of this post). Size: 512*512 pixels.

    A word about methodology: in such cases I prefer not to chase ultra-precise timers but simply to perform the same action many times. On the one hand, the data and code will then already be in the processor cache. On the other hand, we exclude the costs of the first run of the code, associated with translating MSIL into processor code and other overhead. To guarantee this, we first run each piece of code once: a so-called "warm-up".
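    The warm-up-plus-repetition scheme described above can be reduced to a small helper. The Benchmark.Measure name and structure are my own sketch, not code from the original project:

```csharp
using System;
using System.Diagnostics;

static class Benchmark
{
    // Runs `action` once as a warm-up (paying JIT-compilation and
    // first-touch cache costs), then times `repetitions` runs.
    public static TimeSpan Measure(Action action, int repetitions)
    {
        action(); // warm-up run, excluded from the measurement

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < repetitions; i++)
            action();
        sw.Stop();
        return sw.Elapsed;
    }
}
```

    A call such as Benchmark.Measure(() => BitmapToByteRgbNaive(bmp), 100) then reports the time for 100 conversions after a single warm-up run.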

    We compile the code in Release and launch it outside the IDE. It is also advisable to close the IDE entirely: I have come across cases where merely having it open affected the results. It is advisable to close other applications as well.

    We run the code several times until the results are typical, making sure they are not affected by some unexpected process (say, an antivirus waking up). All these measures give stable, repeatable results.

    "Naive" method

    This is the method used in the original article. It consists in calling Bitmap.GetPixel(x, y). Here is the complete code of a method that converts the contents of a bitmap into a three-dimensional byte array. The first dimension is the color component (0 to 2), the second is the y position, the third is the x position. That is how it happened to be in my projects; if you want to lay the data out differently, there should be no problem.

    public static byte[,,] BitmapToByteRgbNaive(Bitmap bmp)
    {
        int width = bmp.Width, height = bmp.Height;
        byte[,,] res = new byte[3, height, width];
        for (int y = 0; y < height; ++y)
        {
            for (int x = 0; x < width; ++x)
            {
                Color color = bmp.GetPixel(x, y);
                res[0, y, x] = color.R;
                res[1, y, x] = color.G;
                res[2, y, x] = color.B;
            }
        }
        return res;
    }

    The reverse conversion is similar, only the data flows in the other direction. I will not give its code here; those interested can look at the project source via the link at the end of the article. 100 conversions to and from an image on my laptop with an i5-2520M 2.5GHz processor take 43.90 seconds. It turns out that for a 512*512 image, almost half a second is spent just on transferring the data!

    Direct work with memory

    A much faster approach is to lock the bitmap's pixel buffer with Bitmap.LockBits and walk it with pointers in unsafe code:

    public unsafe static byte[,,] BitmapToByteRgb(Bitmap bmp)
    {
        int width = bmp.Width, height = bmp.Height;
        byte[,,] res = new byte[3, height, width];
        BitmapData bd = bmp.LockBits(new Rectangle(0, 0, width, height),
            ImageLockMode.ReadOnly, PixelFormat.Format24bppRgb);
        try
        {
            byte* curpos;
            for (int h = 0; h < height; h++)
            {
                curpos = ((byte*)bd.Scan0) + h * bd.Stride;
                for (int w = 0; w < width; w++)
                {
                    res[2, h, w] = *(curpos++); // Format24bppRgb stores B, G, R
                    res[1, h, w] = *(curpos++);
                    res[0, h, w] = *(curpos++);
                }
            }
        }
        finally
        {
            bmp.UnlockBits(bd);
        }
        return res;
    }

    This approach gives us 0.533 seconds per 100 conversions (82 times faster)! I think that already answers the question of whether it is worth writing more complex conversion code. But can we speed things up further while staying within managed code?

    Arrays vs pointers

    Multidimensional arrays are not the fastest data structures. Index bounds checks are performed, and the element address itself is computed with multiplications and additions. Since address arithmetic already gave us a significant speedup once, when working with the Bitmap data, why not try applying it to the multidimensional arrays as well? Here is the code for the forward conversion:

    public unsafe static byte[,,] BitmapToByteRgbQ(Bitmap bmp)
    {
        int width = bmp.Width, height = bmp.Height;
        byte[,,] res = new byte[3, height, width];
        BitmapData bd = bmp.LockBits(new Rectangle(0, 0, width, height),
            ImageLockMode.ReadOnly, PixelFormat.Format24bppRgb);
        try
        {
            byte* curpos;
            fixed (byte* _res = res)
            {
                byte* _r = _res, _g = _res + width * height, _b = _res + 2 * width * height;
                for (int h = 0; h < height; h++)
                {
                    curpos = ((byte*)bd.Scan0) + h * bd.Stride;
                    for (int w = 0; w < width; w++)
                    {
                        *_b = *(curpos++); ++_b;
                        *_g = *(curpos++); ++_g;
                        *_r = *(curpos++); ++_r;
                    }
                }
            }
        }
        finally
        {
            bmp.UnlockBits(bd);
        }
        return res;
    }

    Result? 0.162 sec per 100 conversions. So we accelerated another 3.3 times (270 times compared to the “naive” version). It was this kind of code that I used when researching algorithms.
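    The pointer version above relies on the standard row-major layout of .NET rectangular arrays: element [c, y, x] of a byte[3, height, width] sits at flat offset c*height*width + y*width + x, which is exactly the path the _r/_g/_b pointers walk. A minimal sketch of that index rule (the IndexDemo name is mine, for illustration):

```csharp
using System;

static class IndexDemo
{
    // res[c, y, x] in a byte[3, H, W] lives at flat byte offset
    // c*H*W + y*W + x: two multiplications and two additions per access,
    // plus a bounds check per dimension, which the fixed-pointer code avoids.
    public static int FlatIndex(int c, int y, int x, int height, int width)
        => c * height * width + y * width + x;
}
```

    Pinning the array with fixed and advancing raw pointers replaces this per-access computation with a single increment.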

    Why transfer at all?

    It is not entirely obvious why you would need to transfer data out of the Bitmap at all. Perhaps all transformations should be carried out right there? I agree that this is one possible option. But many algorithms are more convenient to test on floating-point data: then there are no problems with overflow or loss of precision at intermediate stages. Converting to a double/float array is done in a similar way. The reverse conversion requires a range check when casting to byte. Here is simple code for such a check:

    private static byte Limit(double x)
    {
        if (x < 0) return 0;
        if (x > 255) return 255;
        return (byte)x;
    }

    Adding such checks and type conversions slows the code down. The address-arithmetic version on double arrays takes 0.713 seconds per 100 transformations; still, compared to the "naive" option it is lightning fast.
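    For completeness, a double-to-byte conversion that uses such a check might look like the sketch below; the Convert8 and ToBytes names are my own illustration, not code from the project:

```csharp
using System;

static class Convert8
{
    // Clamp a floating-point brightness into the valid byte range.
    static byte Limit(double x)
    {
        if (x < 0) return 0;
        if (x > 255) return 255;
        return (byte)x;
    }

    // Convert a float-valued [component, y, x] volume back to 8-bit data,
    // clamping every sample; this per-element call is what adds the cost.
    public static byte[,,] ToBytes(double[,,] src)
    {
        int c = src.GetLength(0), h = src.GetLength(1), w = src.GetLength(2);
        var res = new byte[c, h, w];
        for (int i = 0; i < c; i++)
            for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++)
                    res[i, y, x] = Limit(src[i, y, x]);
        return res;
    }
}
```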

    What if you need it even faster? Then write the transfer and processing in C or assembly, use SIMD instructions, and load the data directly, without the Bitmap wrapper. Of course, this takes us outside managed code, with all the ensuing pros and cons, and it makes sense only for an already debugged algorithm.

    Update 2013-10-08:
    At the suggestion of commenters, I added to the code a variant that transfers the data to an array using Marshal.Copy(). This was done purely for testing; this way of working has its limitations:

    • The data order is exactly the same as in the original Bitmap, i.e. the components are interleaved. If we want to separate them, we still have to loop over the array, copying the data.
    • The brightness type remains byte, while it is often convenient to do intermediate calculations in floating point.
    • Marshal.Copy() works with one-dimensional arrays. They are, of course, the fastest, and it is not that hard to write the index arithmetic everywhere, but still...
    So: copying in both directions takes 0.158 seconds per 100 transformations. Compared to the more flexible pointer variant, the speedup is very small, within the statistical error between runs.
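    The Marshal.Copy() variant itself is not shown in the article; the helper below is my reconstruction of the idea, reduced to a row-by-row copy from an unmanaged buffer (such as the one BitmapData.Scan0 points to) into one flat managed array, skipping the stride padding. The RowCopy and CopyRows names are illustrative:

```csharp
using System;
using System.Runtime.InteropServices;

static class RowCopy
{
    // Copies `height` rows of `rowBytes` useful bytes each from an unmanaged
    // buffer whose rows are `stride` bytes apart (stride >= rowBytes, the
    // difference being alignment padding) into a single flat managed array.
    public static byte[] CopyRows(IntPtr scan0, int stride, int rowBytes, int height)
    {
        var res = new byte[rowBytes * height];
        for (int h = 0; h < height; h++)
            Marshal.Copy(scan0 + h * stride, res, h * rowBytes, rowBytes);
        return res;
    }
}
```

    With a locked bitmap one would call CopyRows(bd.Scan0, bd.Stride, width * 3, height) for 24-bit data; the resulting array keeps the interleaved B, G, R order described above.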

    Several of the best-known image optimization programs were tested. Let me note right away: we are talking about algorithms that significantly reduce file size, i.e. "lossy" ones. Lossless JPEG optimization is of little interest here, since JPEG is itself a lossy storage format, and lossless optimization by any means yields at most a 3-5% gain. Here we are talking about programs that can shrink a file severalfold.

    By the way, optimization of files uploaded by participants has long been implemented in the Gallerix Art Club. It is carried out automatically in two stages: first, immediately upon upload (lossless, using Jpegtran), and second, after some time, with JpegMini.

    If your home archive has never been optimized and now occupies, say, 100 GB, it is quite possible to cut that figure by 2-2.5 times (to 45-50 GB) without affecting picture quality, and by 3-4 times (to 20-25 GB) with a slight, barely noticeable decrease in quality.

    A little about the technical side. To put it very roughly, the work of image compression tools can be compared to the invention of variable bitrate for audio files: passages rich in sound are encoded at a high bitrate, silence at the lowest. Unlike constant-bitrate audio, the JPEG format already has optimization built in, its strength set by the quality parameter chosen when saving the file. The work of image compression services builds on the same variable-bitrate idea: identifying image fragments whose encoding can be cleverly skimped on to save disk space.

    For these purposes there are several commercial solutions on the market, and according to many Russian and foreign colleagues, JpegMini is number one by its overall set of parameters. The purpose of this test is to pit all the commercial and free image optimization tools against each other and refresh the "table of ranks".

    Source files

    The first was shot hand-held on a mobile phone of yesteryear, a Samsung Note II, at minimum resolution. Each of us has millions of such photographs; they are not noted for quality, there are a lot of them, and they are usually a bit blurry and very grainy.

    The second file is completely technical in origin - it is a gradient table generated using Adobe Photoshop and saved with 100% quality. Visible changes in the processing of this table will serve to evaluate the final quality.

    The third file was shot hand-held in automatic mode on an "average" amateur camera, a Panasonic GF3. This is a consumer system camera: mirrorless, but with interchangeable lenses. Thanks to optical stabilization the sharpness is better here; it is no mobile phone, but the noise is almost the same.

    The last file is from Unsplash.com, by Karl Fredrickson. It serves as an example of a photo taken with a professional camera and expensive optics.

    Tools

    JpegMini

    JpegMini is a paid program. There are versions for 20 and 149 USD, as well as an expensive server version. The desktop versions work identically; the cheaper one has two limitations, on processing speed and on maximum file resolution (28MP), while the more expensive one is limited to 60MP. Works on Windows and Mac.

    TinyJPG

    There is a paid version as a plugin for Adobe Photoshop (50 USD) and a free online version with limits on file size and the number of uploads per day (which, however, is very easy to work around). The plugin likewise works on Windows and Mac. The first 500 files per month via the API are free, then 0.009 USD per file.

    Compressor.io

    Well-known online service. The only limit is the maximum file size, 10MB.

    Kraken.io

    A professional service popular in the West, with high tariffs and strict quotas. The free online version is limited to a maximum file size of 1 MB and can hardly be considered a tool; it is nothing more than a demo, since an ordinary amateur camera cannot produce a file smaller than 2-3 MB. API tariffs start at 5 USD per month for 500 MB of incoming volume.

    ConvertImage.net

    A set of online tools for image processing, including a Jpeg compressor. Completely free.

    Jpeg-Optimizer.com

    Online service. Completely free and without restrictions. There is a manual adjustment of the compression level.

    Optimizilla.com

    Free web service. The same engine appears on other domains. Multilingual interface. Manual compression-level setting (it appears after an image is loaded). No more than 20 files at a time.

    DynamicDrive.com

    A set of free online image processing tools. Optimizes with a 2.8 MB file limit. Shows several ready-made images at different compression levels to choose from.

    ShortPixel.com

    A newish service with an abundance of tariff plans: there is a monthly subscription as well as packages (10,000 files for 9.99 USD). Works via API or web interface (up to 20 files at a time). The file size limit in the free online version is 10MB.

    ACDSee Ultimate

    ACDSee Ultimate 9 is a professional program for organizing and processing images on Windows. Saving was done with "Optimize Huffman codes" and "Color Component sampling" enabled, quality 70%. The program is paid, starting at 40 USD for the most modest edition, but many free programs have similar settings. This is an example of plain saving with minor lossless optimization.

    "Clean" JPEG

    Saving a file at about 70% quality (9 of 12) from Adobe Photoshop CS6, in the "baseline" variant of the format. This is pure JPEG using only the format's own capabilities, the "lowest reference point", available for free from any program that can save JPG.

    Now a little about the tools that were left out, and why. You will certainly run into them on Google if you search for optimizers yourself, so they cannot go unmentioned.

    I think that to an inexperienced user with a not-very-high-quality file, the results of all the tools will seem acceptable. The difference between the original and an optimized file with artifacts is hardly worth discussing when it comes to old blurry photos in a home archive. Everything said below mostly concerns photo aesthetes; at the amateur level all the tools give quite usable results, and only the final file size matters.

    JpegMini and Kraken ended up at the very bottom of the table sorted by final file size, but these are the only two technologies that compress the file while preserving the image completely honestly. In the "10 points" category, JpegMini looks more attractive on all counts.

    With a picture just slightly less faithful, it turned out that Compressor.io compresses small files better and Optimizilla.com large ones, though within the scope of this test that is a very tentative conclusion.

    Taking the size/quality ratio into account, everything that sits below the "nines" in the table yet has lower ratings with a larger file size is of no practical interest. And above the "nines" there are only two services.

    The last service, ShortPixel, turned out to be interesting. The artifacts are quite noticeable, but if you compress the picture to the same size using the JPG format itself, the result is noticeably worse. ShortPixel is more API-oriented and relatively inexpensive, but very destructive to the image.

    TinyJPG as a plugin is more modest, which is probably the right call. It still beats all the others on file size, and its artifacts are slightly milder than those of the online version.

    An interesting feature of the TinyJPG plugin: after processing, a file can come out 8-10 times smaller than the original, or one and a half to two times larger.

    Test image No. 2 was compressed best by ShortPixel, but for most files with a lot of fine detail the web version of TinyJPG comes out the winner: the file is smaller, and the original image is distorted less. However, TinyJPG accepts files no larger than 5 MB, while ShortPixel, even in the free version, accepts files up to 10 MB.

    Another important factor (besides final file size) is convenience for bulk work. If, say, you are optimizing a 100 GB home archive for the first time, processing it entirely through online services will be very laborious and slow because of the limits and the need to shuttle that volume to the server and back in portions. Meanwhile, for a modest 20 USD, JpegMini does it with one mouse movement (literally: just drag the root folder of images into the program window).

    By the way, the TinyJPG plugin for Photoshop is implemented not as a filter but as an export channel, i.e. it is not suitable for building an automation script in PS. Correction: the manufacturer's website offers automation scripts that work in any version of Photoshop, and in Photoshop CC export through the plugin is available for batch processing. The plugin has no size restrictions; it was tested on a 130MP file (about 7 minutes of processing on a fairly powerful computer).

    In general, for bulk processing there are only two options: the APIs and JpegMini. Processing via an API means paying for each processed file.

    Conclusions

    • All the tools that produce an acceptable file smaller than JpegMini's either visibly degrade the picture, lose to it in ease of use, or have a less attractive pricing policy (time-based or per-file rental versus JpegMini's one-time purchase).
    • For cases where image quality is paramount and reducing the file size at the cost of visual degradation is unacceptable, JpegMini remains the best tool.
    • For cases where the quality of the source files can be slightly sacrificed for the sake of compression, it makes sense to use both versions of TinyJPG, depending on the task.
    • If choosing only among free compressors, the best options are Compressor.io and Optimizilla.com, which barely spoil the picture even though the final file is noticeably smaller than JpegMini's.
    • Owners of news sites with a large flow of illustrations, where quality complaints are less acute, should look at ShortPixel and TinyJPG: their advantages are affordable rates, record compression, and API access. If quality matters (but money does not), it makes sense to connect Kraken via the API or, for very large volumes, the server version of JpegMini.

    Review


    From AlfaGroup

    Buy: 1,800 rub. (regular price: 2,000 rub.)


    Technical data

    Published: 10/17/2015. Updated: 03/28/2019. Version: 1.3.6. Installed: more than 1000 times. Suitable editions: "First Site", "Start", "Standard", "Small Business", "Expert", "Business", "Corporate Portal", "Enterprise". Adaptability: no. Composite support: no.

    Description

    What does our module do?

    The main function is to optimize images as much as possible, with virtually no loss in quality.

    What is it for?

    There are several reasons:

    1. Saving free space on hosting, since, as a rule, it is images that "eat up" most of it.
    2. Compliance with PageSpeed Insights requirements, i.e. site optimization requirements for maximum search-engine ranking.
    3. Increasing page loading speed by reducing downloaded traffic.

    Besides: in July, Google will launch the Speed Update algorithm and update the mobile-first index. As part of these updates, site loading speed will become a serious ranking factor.

    You can check if your site has problems with image size:

    Our module OptiImg allows you to reduce the size of images on the site up to 99% without any visible loss of quality.

    Automatic image compression will save you from long and tedious resaving in third-party applications.

    No quotas or restrictions, buy a license and convert as many files as you need, without topping up your balance or any additional payments!

    Dear customers, please note that the license key allows you to use our service without restrictions; after the license key expires, access to the service will be possible, but you will not be able to receive updates for our module.

    Also, please note that when purchasing an extension, the club name in the module settings does not need to be changed.

    The module works on the “set it and forget it” principle. Any image uploaded to the information block will be automatically compressed!

    All images are optimized in 1 click, all images uploaded to information blocks, media library or when exchanging with 1C will be compressed automatically.

    Currently JPEG and PNG formats are supported.

    The HTTPS protocol is supported.

    PHP7 is supported.

    The demo period allows you to process 1000 images.

    Every day, up to 500 thousand images are processed with our module for 1C-Bitrix!

    You can check the compression level on the module’s website.

    Please note that the module is almost entirely written in D7; on 1C-Bitrix versions below 16 it may not work correctly.

    How to start using it?

    To make your life easier, our team is always ready to install the module on your 1C-Bitrix website and perform the optimization for you. To do so, after purchase, send a request to our e-mail, [email protected], with the subject "Installation"; in the body of the letter indicate the coupon code and access credentials for the site on which the work should be done.

    Dear Clients!

    Don’t forget to share your experience of using our products, leave reviews, write to discussions and to our email address - [email protected], we are always happy to help you and get feedback!

    Reviews (10)

    Overall rating: Total reviews: 10

    Good decision to optimize images

    As of February 25, 2019, the solution is completely not working, even on their website http://www.optiimg.ru/ you cannot directly compress the image, it freezes after loading.

    Our apologies: the solution does work; it simply failed on our hosting. The problem was not in the solution but, as a person from technical support clarified, "at the level of network communications of the hosting centers". Technical support turned out to be very active in solving our problem and personally negotiated with our site's hosting provider.

    The module is great!!!

    I agree with what was written below, but this is the only purchase I have not regretted! The module works quickly and does not freeze. It cleaned 3 GB off my site: it was 5 GB and became 2 GB.
    Super module, well done!) I recommend)

    Excellent technical support and cool module

    I was surprised by the immediate help from technical support; they responded immediately on Sunday evening.
    The module is very cool, I have been looking for a similar solution for a long time. I recommend!

    Hello

    Technical support works instantly: within 10 seconds they answered in the online chat and addressed my questions! The program is fire, especially when there are a lot of pictures! I recommend it to everyone!

    Excellent technical support

    There were problems with the module's operation, so I contacted technical support; they solved the problem very quickly, thank you.

    Good decision

    Perfect solution! The guys installed everything themselves and compressed the images by 40%. Thank you!

    So far the only purchase decision we have not regretted

    So far this is the only purchase decision we have not regretted. Cleared 5 GB.

    Very useful module and excellent technical support!

    Thanks to the module, we were able to double the site's speed, which is very important for an online store with a large number of images. Technical support quickly resolved any issues that arose and patiently explained all the details.

    Great module, great support.

    We purchased the module; support immediately got involved and set everything up. Thanks to photo optimization, Google PageSpeed gives the site a score of over 80 on most pages. I recommend this solution to everyone.

    Great module

    I tested the site on Google and it turned out to have overly large pictures. While I was thinking about how to optimize them, a simple and inexpensive solution turned up. The support guys installed it themselves and documented everything; now images are compressed to optimal sizes automatically on upload. I immediately installed it on a second site and am very pleased. I recommend it.


    Discussions (76)


    I bought, paid, optimized the pictures. Everything is cool, except for one glitch that this module causes. Namely, when I try to edit a product from the front of the site, a fatal error appears

    Detail picture:
    Access to undeclared static property: Alfa1c\Optiimg\OptiImg::$_1260989302 (0)
    /home/bitrix/www/bitrix/modules/alfa1c.optiimg/include.php:1
    #0: OptiImgEvents::CompressOnResize(array, array, NULL, string, string, boolean)
    /home/bitrix/www/bitrix/modules/main/classes/general/module.php:490
    #1: ExecuteModuleEventEx(array, array)
    /home/bitrix/www/bitrix/modules/main/classes/general/file.php:1705
    #2: CAllFile::ResizeImageGet(array, array, integer, boolean)
    /home/bitrix/www/bitrix/modules/main/lib/ui/fileinputunclouder.php:40
    #3: Bitrix\Main\UI\FileInputUnclouder::getSrcWithResize(array, array)
    /home/bitrix/www/bitrix/modules/main/lib/ui/fileinput.php:477
    #4: Bitrix\Main\UI\FileInput->getFile(string, string, boolean)
    /home/bitrix/www/bitrix/modules/main/lib/ui/fileinput.php:283
    #5: Bitrix\Main\UI\FileInput->show(array, boolean)
    /home/bitrix/www/bitrix/modules/iblock/admin/iblock_element_edit.php:2539
    #6: include(string)
    /home/bitrix/www/bitrix/admin/cat_product_edit.php:3

    It does not always appear, but very often. Sometimes, on the same product, the glitch occurs on the first attempt but not on the second or third.

    Sergey Zabotin, judging by the obfuscated function name, you are still using the demo version. Remove the module completely and install it again; if the error persists, write to us at [email protected] and we will help you.

    What capabilities does the module have if images are stored in the clouds? For example, what if it's Selectel?
    As far as I understand, the module compresses both during loading and resizing, but does not compress if the images are already in the “clouds”?
    What are your plans in this direction?

    Gavril Scriabin, the CMS uploads files directly to third-party clouds, and for obvious reasons our module cannot work on the cloud side. Accordingly, it will work only if you first process the files and then move them to the cloud.

    It will work with the Bitrix cloud, since Bitrix files in the cloud are updated from time to time.

    Good afternoon
    For what period is the license key valid? (updates)

    Ivan Prilepin, Updates are available for 1 year, access to the server is not limited in time.

    1.2.8 version.

    Roman Petrov, Write to us by email, we’ll sort it out.

    I bought the optimizer a year ago, then I used it very little, now I wanted to use it again, but I got an authorization error?! Should I buy it again?
    Why should I buy it every year for 100 photos?

    Roman Petrov, you can buy an extension if you want to update the module. If your module version is older than 1.2.2, you will have to update, because that version brought a major refactoring: the server the module accesses was changed. If your current version is newer, write to us at [email protected] and we will figure out why the error occurs.

    What's new

    1.3.6 (28.03.2019) Fixed minor errors in events
    1.3.5 (15.03.2019) Added option for safer file replacement
    Rewritten class to work with events
    1.3.4 (20.01.2019) Added animation to the progress bar
    Removed deprecated methods
    1.3.3 (12.06.2018) The GPSI test has been moved to the analysis page
    1.3.2 (06.06.2018) Minor fixes
    1.3.1 (05.06.2018) Added the ability to check the page in GPSI
    ImageJpeg compression option marked as deprecated, option to enable removed
    1.3.0 (24.04.2018) Fixed errors with Cyrillic folders.
    Minor interface improvements
    1.2.9 (22.02.2018) Added correct handling of the situation when the cURL library is not installed on the server.
    1.2.8 (24.01.2017) Fixed a bug with module access rights
    1.2.7 (14.12.2017) Fixed a bug where temporary files were overflowing.
    1.2.6 (04.12.2017) Improved stability
    1.2.5 (15.11.2017) Changed the way to get a compressed file
    1.2.4 (30.10.2017) Fixed compression event bug
    1.2.3 (25.10.2017) Fixed errors in language files
    1.2.2 (24.10.2017) Module structure changes
    Refactoring for D7
    1.2.1 (22.10.2017) Added the ability to set the port
    Added a filter for errors and the ability to clear the processing log
    1.2.0 (20.10.2017) Methods rewritten
    Improved performance
    Added multithreading
    1.1.9 (17.10.2017) Improved stability and performance
    Bugs fixed
    1.1.8 (03.10.2017) Fixed position saving error
    Fixed error sending statistics
    1.1.7 (01.10.2017) Major module refactoring
    The module has been rewritten for D7
    Class structure changed
    Improved stability
    Improved performance
    Removed legacy file manager mode
    Added correct error display
    Minor bugs fixed
    1.1.6 (22.04.2017) Minor code refactoring for D7
    Fixed a bug where on-the-fly compression might not work under certain conditions.
    Added the ability to exclude directories
    1.1.5 (13.04.2017) Bugs fixed
    It is now possible to enable/disable on-the-fly compression in the module settings.
    WARNING: If you use compression events in init.php, you must remove them before installing this update!
    1.1.4 (21.03.2017) Removed the need to use allow_fopen
    1.1.3 (19.02.2017) Added the ability to log
    Added the ability to analyze the site before starting optimization.
    1.1.2 (14.02.2017) Fixed a bug that led to image substitution when the information block option "create an image from the detailed one even if it has been created" was checked.
    1.1.1 (30.01.2017) Added the ability to specify a specific folder in simplified mode
    Fixed a bug where statistics were not sent to the server
    1.1.0 (30.01.2017) Fixed a bug where sending files would not stop if the key was missing.
    Fixed a bug where files located in a folder containing both empty and non-empty subfolders were not compressed.
    Added the ability to continue the process from the last processed file.
    Improved performance.
    Statistics are now sent when the module page is loaded.
    1.0.9 (20.01.2017) Fixed a critical bug in file manager mode. Fixed an error with the upload folder being clogged when processing was unsuccessful. Added the ability to not keep statistics.
    1.0.8 (15.12.2016) Added support for the OnAfterResizeImage event to compress images modified using the ResizeImageGet method.
    1.0.7 (14.12.2016) Fixed a bug with the page navigation template
    1.0.6 (19.10.2016) Fixed work with uppercase extensions. Added the ability to skip broken files in simplified mode.
    1.0.5 (25.04.2016) New interface
    1.0.4 (16.02.2016) Fixed problems with displaying errors
    1.0.3 (14.02.2016) Redesigned interface
    Added the ability to compress using the imagejpeg function
    Bugs fixed
    Some functions have been improved and new ones have been added.
    1.0.2 (02.02.2016) Added page navigation for sections

    Installation

    Installation is standard from Marketplace.

    The module is accessed through the section

    Service -> Image Optimizer or follow the link:
    /bitrix/admin/optiimg_admin.php

    Module settings:
    /bitrix/admin/settings.php?lang=ru&mid=alfa1c.optiimg&mid_menu=1

    Please note that for the module to work, you must enter the key received by email after purchase in the “Key” field.

    The solution uses the cURL library; as a rule it is enabled by default. If not, contact your hoster or system administrator for help.

    We also draw your attention to the fact that the speed of operation directly depends on the speed of the server’s file subsystem and the communication channel.

    The module can also compress images on the fly, for example, those added to an information block element. To do this, in the module settings check the boxes Compress images when uploading and Compress images when resizing.

    Support

    How to use our module correctly:

    1. Install the module from the marketplace.
    2. Go to /bitrix/admin/settings.php?lang=ru&mid=alfa1c.optiimg&mid_menu=1
    3. Set the compression quality and check the boxes Compress images when uploading and Compress images when resizing. If your site runs over HTTPS, enter 443 in the port field or leave it blank.
    4. Go to the file manager and delete the /upload/resize_cache/ folder
    5. Go to the interface for working with the module /bitrix/admin/optiimg_admin.php
    6. Press the Optimize button

    The speed of loading web pages is one of the factors in how search engines “treat” your site. The faster pages load, the more loyal users are to the site: no one likes to wait a long time and waste a lot of traffic.

    The (apparently now closed) Google service PageSpeed Insights is widely known; it allows you to check any published site for loading speed and, based on the results, gives a rating on a hundred-point scale along with recommendations. Typically, the recommendations include minifying program code, compressing images, setting up server and browser caching, etc.

    For the average content resource on a free CMS like WordPress (yes, I'm talking about my own site and thousands like it), optimization options are limited to installing caching plugins and working with images. Not everyone can “trim” the theme and shorten the HTML, CSS and JavaScript code, and such actions, as a rule, lead to various errors and loss of functionality.

    To work with images under WordPress, there are a number of plugins that convert and compress images with or without loss, on the fly or on request. The best, in my opinion, of these plugins is EWWW Image Optimizer.

    As an alternative, you can connect a CDN service for static content so that graphics are served from third-party servers, slightly relieving the load on the hosting's file server and, by distributing sources, speeding up page loading. In my case this is done by the Photon module, part of the extremely popular and periodically criticized JetPack plugin.

    All this works to varying degrees, but there is a universal way to optimize images for any site, not just one running WordPress: to get started, you only need a Google account.

    We are talking, oddly enough, about Google Photos: a service designed to store photos taken on Android smartphones and to back up images and videos from the computers of users who have installed a special utility.

    In the PageSpeed Insights help, Google's recommended image optimization process is described as follows:

    Optimize your images

    This rule is triggered when PageSpeed ​​Insights detects that the size of the images on the page can be reduced without much loss of quality.

    general information

    Try to keep image sizes to a minimum: this speeds up resource loading. Choosing the correct format and compression can reduce image size, saving users time and money.

    Basic and advanced optimization should be performed on all images. Basic optimization includes cropping unnecessary margins, reducing color depth (to the minimum acceptable value), removing comments, and saving the image in a suitable format; it can be done in any image editing program, such as GIMP. Advanced optimization means (lossless) compression of the resulting JPEG and PNG files.

    Use image compression tools

    There are tools that perform additional compression on JPEG and PNG files without loss or reduction in quality. For JPEG files, it is recommended to use jpegtran or jpegoptim (available only on Linux, run with the --strip-all option). For PNG, it is better to use OptiPNG or PNGOUT.

    The last paragraph appears to describe the tools that Google's servers use to automatically optimize user images uploaded to Photos. Incidentally, video files uploaded to the service are optimized too, but that hardly matters while YouTube exists.

    Let's look at an example. Today I photographed a folding knife for an upcoming publication and ran the resulting photos through the FastStone Image Viewer application (“artistic” cropping + resizing to 1280 pixels in width). The result was a folder of eight files weighing more than 3 (!) megabytes.

    Apparently a freshly installed FastStone Image Viewer saves edited photos at close to maximum quality by default, which leads to an unreasonably large file “weight”. But such a setting is justified in our case, because Google's algorithms compress photos without visible quality loss at 100% zoom, which means that good-quality images will remain good even after uploading to Google Photos.

    For convenience, it is better to place the uploaded photos in a new album, which can be downloaded in its entirety as a ZIP archive almost immediately after creation:

    If you compare the photos compressed by Google Photos with the originals, the savings are significant.

    816 KB versus 3.27 MB. At the same time, the quality of the photographs, in my opinion, did not suffer at all. Facebook and VKontakte should learn from Google how to optimize photos. Moreover, GPhotos offers good image editing tools, from applying filters to manual adjustment of contrast, brightness, saturation, etc.

    Thus, Google Photos is not only an excellent cloud for storing and publishing photos, but also a powerful tool for optimizing images for publication on the web. Only in the case of WordPress, do not forget to disable optimizing plugins and the Photon module of JetPack; otherwise photographs already prepared for publication will undergo additional compression, which leads to a noticeable loss of quality (for example, see the screenshots passed through Google Photos and then Photon in this post).

    Ilya is a Developer Advocate and Web Perf Guru

    Images are the resources that often take up the most space and the most bytes on a page. By optimizing them, we can significantly reduce the amount of data downloaded and improve site performance. The more an image is compressed, the less bandwidth it takes to download and the faster the browser can show the page to the user.

    Image optimization is both a science and an art. We can call it art because no one can give a definite answer as to how best to compress a particular image. However, it is also science, because we have at our disposal well-developed techniques and algorithms that can significantly reduce the size of a resource. To choose the optimal settings for an image, many factors must be taken into account: format capabilities, the encoded data, quality, number of pixels, etc.

    Deleting and replacing images

    TL;DR

    • Remove unnecessary images.
    • Use CSS3 effects whenever possible.
    • Use web fonts instead of encoding text in images.

    First of all, ask yourself: is this image really necessary? Good design should be simple and should not compromise performance. It is best to simply remove an image you don't need, since it weighs many more bytes than the HTML, CSS, JavaScript, and other resources on the page. At the same time, one image in the right place can replace long text, so you need to find the balance yourself and make the right decision.

    After this, you should check whether the desired result can be achieved in a more efficient way:

    • CSS effects (gradients, shadows, etc.) and CSS animations let you create assets that look sharp at any resolution and scale and weigh much less than images.
    • Web fonts allow you to use beautiful lettering while keeping the ability to select, search, and resize text. Thanks to this, working with your resource becomes even more convenient.

    Avoid encoding text in an image. Beautiful lettering is essential for quality design, brand promotion, and a convenient user experience, but text baked into an image only interferes with all this: it cannot be selected, found, enlarged, or copied, and it does not look good on high-resolution devices. Of course, web fonts also require optimization, but they avoid the problems above. Always prefer web fonts for displaying text.

    Vector and raster images

    TL;DR

    • The vector format is great for images of geometric shapes.
    • The quality of vector images does not depend on scale and resolution.
    • Use raster format for complex images with many unusual shapes and details.

    If you decide that you should use an image to achieve the result, choose the appropriate format for it:

    Vector image

    Raster image

    • Vector graphics use lines, points, and polygons to display images.
    • In raster graphics, the individual values ​​of each pixel in a rectangular grid are encoded and the image is displayed based on them.

    Each format has its own advantages and disadvantages. The vector format is ideal for images made from simple geometric shapes (such as logos, text, icons, etc.). They remain sharp at any resolution and scale, so use this format for large screens and resources that should be shown in different sizes.

    However, vector formats are not suitable for complex images (such as photographs). There may be too much SVG markup to describe all the shapes, but the resulting image will still look unrealistic. In this case, you should use a raster image format such as GIF, PNG, JPEG, or the newer JPEG-XR and WebP formats.

    The quality of raster images depends on the resolution and scale: when enlarged, they become blurry and disintegrate into pixels. As a result, you may need to save multiple versions of the bitmap at different resolutions.

    Optimized for high resolution screens

    TL;DR

    • On high-resolution screens, one CSS pixel consists of several screen pixels.
    • High-resolution images have many more pixels and bytes than regular images.
    • Optimization techniques can be applied to images of any resolution.

    When talking about pixels, we need to distinguish between screen pixels and CSS pixels. A CSS pixel can correspond to one or more screen pixels. This is done so that on devices with more screen pixels the image is clearer and more detailed.

    Of course, graphics look very nice on high DPI (HiDPI) screens. However, to look good in high resolution, our images need to be more detailed. But we have a solution: vector formats are ideal for this task. They maintain clarity in any resolution. Even if the cost of rendering small details increases, we still use one resource independent of screen size.

    On the other hand, there are many more complexities with raster images because they encode image data at every pixel. Thus, the higher the number of pixels, the larger the size of such a resource. As an example, consider the difference between photos of 100x100 CSS pixels:

    When we double the screen resolution, the total number of pixels immediately quadruples: twice vertically and twice horizontally.
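    The quadrupling is easy to verify with a quick calculation (a sketch; the 100x100 size and 4 bytes per uncompressed pixel follow the examples in this article):

    ```python
    css_w = css_h = 100                  # element size in CSS pixels
    for dpr in (1, 2, 3):                # device pixel ratio (screen px per CSS px)
        device_px = (css_w * dpr) * (css_h * dpr)
        print(f"DPR {dpr}: {device_px} pixels, {device_px * 4} bytes uncompressed")
    ```

    At DPR 2 the pixel count is exactly four times that of DPR 1: doubled horizontally and doubled vertically.
    
    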

    To summarize: graphics look very attractive on high-resolution screens, so you can make a good impression with your site. However, such screens require high-resolution images. Prefer vector formats, since they look sharp on any device. If you must use a raster image, provide several optimized variants of the resource (see below).

    Optimizing vector images

    TL;DR

    • SVG is an XML-based image format.
    • SVG files need to be minified to reduce their size.
    • Compress SVG files using GZIP.

    All modern browsers support SVG (Scalable Vector Graphics), an XML-based image format for two-dimensional graphics. SVG markup can be embedded directly into a page or included as an external resource. An SVG file can be created in any vector drawing application or by hand in a text editor.

    As an example, consider a simple circular shape with a black border and a red background exported from Adobe Illustrator. Such a file contains a lot of metadata (layer information, comments, XML namespaces) that is usually not needed to display the resource in the browser. Therefore, you should minify SVG files using the svgo tool.

    For example, svgo reduces the size of such an SVG file by 58%, from 470 to 199 bytes. Additionally, since SVG is an XML-based format, GZIP compression can be applied to reduce its size in transit. Make sure your server is configured to compress SVG assets.
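    The transport-level step can be illustrated with Python's standard gzip module. The markup below is a hypothetical hand-minified circle, not the actual Illustrator export; note that for such a tiny file the gzip header overhead eats much of the gain, while real SVG assets with repetitive markup compress far better:

    ```python
    import gzip

    # Hypothetical, already-minified SVG similar to the circle described above.
    svg = (b'<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">'
           b'<circle cx="50" cy="50" r="40" stroke="black" fill="red"/></svg>')

    compressed = gzip.compress(svg)
    print(f"raw: {len(svg)} B, gzipped: {len(compressed)} B")
    ```

    On a server this is what `gzip on` (nginx) or `mod_deflate` (Apache) does for `image/svg+xml` responses.
    
    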

    Optimizing bitmaps

    TL;DR

    • A raster image is a grid of pixels.
    • Each pixel contains color and transparency information.
    • To reduce the image size, compressors use various methods to reduce the number of bits per pixel.

    A raster image is simply a two-dimensional grid of individual pixels. For example, a 100x100 pixel image is a sequence of 10,000 pixels. Each pixel contains RGBA values: red (R), green (G), and blue (B) channels, as well as an alpha or transparency channel (A).

    The browser allocates 256 values (shades) for each channel, which translates to 8 bits per channel (2^8 = 256) and 4 bytes per pixel (4 channels x 8 bits = 32 bits = 4 bytes). Thus, knowing the grid dimensions, we can easily calculate the file size:

    • A 100 x 100 pixel image consists of 10,000 pixels
    • 10,000 pixels x 4 B = 40,000 B
    • 40,000 B / 1024 = 39 KB
    Note: In addition, regardless of the format used to transfer the image from server to client, each pixel occupies 4 bytes of memory once the image is decoded. This can cause problems when displaying large files on devices with limited memory.
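    The calculation above can be expressed as a small helper (a sketch of the arithmetic only):

    ```python
    def uncompressed_size(width, height, bytes_per_pixel=4):
        """Raw RGBA bitmap size: one 4-byte RGBA value per pixel."""
        return width * height * bytes_per_pixel

    size = uncompressed_size(100, 100)
    print(size, "bytes")             # 40000
    print(round(size / 1024), "KB")  # 39
    ```
    
    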

    It may seem that 39 KB is not much for a 100x100 pixel image. However, as image dimensions grow, files weigh much more, and downloading them requires a lot of time and resources. And this image is still completely uncompressed. What can be done to reduce its size?

    One simple way to optimize an image is to reduce its color depth by choosing a smaller palette. With 8 bits per channel, we get 256 values per channel and 16,777,216 (256^3) colors in total. What if we reduce the palette to 256 colors? Then we need only 8 bits for all RGB channels combined and just 2 bytes per pixel instead of 4. We have compressed the image in half!
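    The palette savings follow from the same arithmetic (assuming an 8-bit palette index plus the 8-bit alpha channel, i.e. 2 bytes per pixel):

    ```python
    width = height = 100
    rgba_32bit = width * height * 4   # 8 bits x 4 channels = 4 B per pixel
    palette_256 = width * height * 2  # 8-bit palette index + 8-bit alpha
    print(rgba_32bit, palette_256)    # 40000 20000
    print(f"saved: {1 - palette_256 / rgba_32bit:.0%}")  # saved: 50%
    ```
    
    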

    Note: PNG images from left to right: 32 bits (16M colors), 7 bits (128 colors), 5 bits (32 colors). Complex images with smooth color transitions (gradients, skies, etc.) require larger palettes. However, if a resource uses only a few colors, a large palette is a waste of bits.

    Having optimized the data in individual pixels, let's turn to neighboring pixels. It turns out that in many images, especially photographs, the colors of neighboring pixels are often similar. This allows the compressor to use delta coding: instead of storing a separate value for each pixel, you store only the difference between neighboring pixels. If they are the same, the delta is zero and only one bit is needed. But that is not all!

    We often don't notice the difference between some shades, so we can optimize the image by reducing or enlarging the palette for those colors. And since each pixel in a 2D grid has multiple neighbors, we can improve on delta coding by focusing not on a pixel's immediate neighbors but on entire blocks of similar colors, encoding them with different settings.
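    A minimal sketch of delta coding on one channel of a pixel row (illustrative only; real codecs work at the bit level and in two dimensions):

    ```python
    def delta_encode(values):
        """Store the first value, then only differences between neighbors."""
        return [values[0]] + [b - a for a, b in zip(values, values[1:])]

    def delta_decode(deltas):
        """Rebuild the original values by accumulating the differences."""
        out = [deltas[0]]
        for d in deltas[1:]:
            out.append(out[-1] + d)
        return out

    row = [120, 121, 121, 122, 122, 122, 125]  # one channel of a pixel row
    deltas = delta_encode(row)
    print(deltas)  # [120, 1, 0, 1, 0, 0, 3]
    ```

    The deltas cluster around zero, so a later entropy-coding stage can store them in far fewer bits than the raw values.
    
    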

    As you can see, image optimization is becoming more complex and interesting. There is scientific and commercial research on this topic, because images weigh a lot of bytes, and it is profitable to develop new compression techniques. If you want to know more, read or check out the specific examples in .

    So how does all this complex material help us optimize images? Let's repeat: we don't need to invent new compression methods. However, we do need to know the key aspects of the issue: RGBA pixels, color depth, and the various optimization techniques. This is necessary to continue the conversation about raster formats.

    Lossy and lossless data compression

    TL;DR

    • Taking into account the peculiarities of human vision, lossy data compression can be used for images.
    • Lossy and lossless data compression is used to optimize the image.
    • The difference in image formats is the difference in how and what compression algorithms are used to reduce the size of the resource.
    • There is no single best format or quality setting that suits all images. Combining different compressors and resources will never produce identical results.

    For certain data types, such as the source code of a page or an executable file, it is extremely important that the compressor does not delete or change the original information. A missing or incorrect bit of data can completely corrupt or destroy the meaning of a file's content. However, other types of data can be conveyed in approximate form.

    Due to the nature of human vision, we may not notice the absence of any information about each pixel, for example we will not see the difference between certain shades of color. Therefore, we can use fewer bits to encode some colors and thereby reduce the size of the resource. Thus, standard image optimization consists of two main stages:

    1. [Lossy image compression](http://ru.wikipedia.org/wiki/Lossy_data_compression), which removes some pixel data.
    2. [Lossless image compression](http://en.wikipedia.org/wiki/Lossless_compression), which compresses pixel data.

    It is not necessary to complete the first step. The exact algorithm depends on the specific image format, but note that every image can be lossily compressed.

    In fact, the difference between image formats such as GIF, PNG, JPEG, etc. is precisely the combination of different lossy and lossless data compression algorithms.

    When using a lossy format such as JPEG, you can select quality settings (like the “Save for Web” slider in Adobe Photoshop). Typically this is a value from 1 to 100 that controls how the format's lossy and lossless compression algorithms are applied. Don't be afraid to lower the quality: often the image will still look good and the file size will be significantly smaller.

    Note: Images with the same quality settings but in different formats will look different, because the compression algorithms differ. For example, JPEG and WebP at quality 90 look different. In fact, even images in the same format with the same quality setting can differ depending on the compressor used.

    Selecting an image format

    TL;DR

    • Select the appropriate standard format: GIF, PNG or JPEG.
    • Try different settings for each format (quality, palette size, etc.) and choose the most suitable ones.
    • For modern clients, add variants in WebP and JPEG XR.
    • Image scaling is one of the simplest and most effective optimization methods.
    • Serving oversized images forces the user to download unnecessary data.
    • Reduce the number of unnecessary pixels by scaling images to their display size.

    In addition to lossy and lossless compression algorithms, other features such as animation and transparency channel (alpha channel) are supported in image formats. Thus, when choosing a suitable format, you need to take into account the desired visual effect and the requirements for the site or application.

    Format Transparency Animation Browser
    GIF Yes Yes All
    PNG Yes No All
    JPEG No No All
    JPEG XR Yes Yes IE
    WebP Yes Yes Chrome, Opera, Android

    There are three standard image formats: GIF, PNG and JPEG. In addition, some browsers support the new WebP and JPEG XR formats, for which greater compression and additional features are available. So which format should you choose?

    1. Should the image be animated? Then choose GIF.
    2. The GIF color palette is limited to 256 colors, which is not enough for most images; besides, PNG-8 compresses small-palette images better. Choose GIF only when animation is required.
    3. Do you need to preserve fine detail at the highest resolution? Use PNG.
    4. PNG applies no lossy compression beyond the choice of palette size. It preserves the image in the highest quality, but weighs much more than other formats. Use it only where necessary.
    5. If the image consists of geometric shapes, convert it to vector (SVG) format!
    6. Avoid text in images. It cannot be selected, found, or enlarged. If text is needed in a design, use web fonts.
    7. Are you optimizing a photo, screenshot, or similar image? Use JPEG.
    8. JPEG uses a combination of lossy and lossless compression to reduce file size. Try several JPEG quality levels to find the best quality/size trade-off.

    Once you have determined the optimal format and settings for each resource, consider adding an additional variant in WebP and JPEG XR. These are new formats that are not yet supported in all browsers, but using them can significantly reduce file size: for example, WebP usually compresses an image better than JPEG.

    Because WebP and JPEG XR are not supported in all browsers, you need to add additional logic to your applications or servers to send the appropriate resource to the user.

    • Some content delivery networks provide image optimization services, including providing JPEG XR and WebP files.
    • Some open-source tools, such as PageSpeed for Apache and Nginx, automatically optimize, transform, and deliver the appropriate resources.
    • You can add additional application logic to determine the client and its supported formats, and then send the best possible resource.
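    The last option can be sketched as follows (a deliberately naive example; the file names are hypothetical, and a production server should fully parse the Accept header, including q-values):

    ```python
    def pick_image(accept_header: str) -> str:
        """Choose an image variant based on what the client advertises."""
        # Browsers that support WebP advertise it in the Accept header.
        if "image/webp" in accept_header:
            return "photo.webp"
        return "photo.jpg"  # universally supported fallback

    print(pick_image("text/html,image/webp,*/*;q=0.8"))  # photo.webp
    print(pick_image("text/html,*/*;q=0.8"))             # photo.jpg
    ```

    When doing this, send `Vary: Accept` so caches store the two variants separately.
    
    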

    Note that if you use a WebView to render content in a native application, you have full control of the client and can use WebP exclusively. The Facebook and Google+ apps, among others, use WebP resources because they genuinely improve performance. To learn more about this format, watch the presentation WebP: Deploying Faster, Smaller, and More Beautiful Images from Google I/O 2013.

    Tools and Option Selection

    There isn't one ideal format, optimization tool, or algorithm suitable for all images. To get the best results, choose the format and its settings based on the content and on the visual and technical requirements.

    Tool Description
    gifsicle creates and optimizes GIF images
    jpegtran optimizes JPEG images
    optipng compresses PNG losslessly
    pngquant compresses PNG lossily

    Don't be afraid to experiment with compressor settings. Try different quality settings, pick the most suitable one, and apply it to other similar images on your site. But remember: not all graphic resources need to be compressed the same way!

    Scaling of transmitted images

    TL;DR

    • Image scaling is one of the simplest and most effective optimization methods.
    • Serving oversized images forces the user to download unnecessary data.
    • Reduce the number of unnecessary pixels by scaling images to their display size.

    The size of an image is the total number of pixels multiplied by the number of bytes used to encode each pixel. Image optimization comes down to reducing these two components.

    Thus, one of the simplest and most effective optimization techniques is to ensure that the image you deliver is no larger than its display size in the browser. Nothing complicated, yet many sites make a serious mistake: they host large resources and leave it to the browser to scale them down and display them at a lower resolution. Among other things, this increases the load on the user's processor.

    Note: To find out the natural and displayed dimensions of an image, hover over it in Chrome Developer Tools. In the example above, an image of 300x260 pixels is downloaded, but the client scales it down to 245x212 pixels when displaying it.

    By sending extra pixels and leaving the browser to scale the resource, we miss the opportunity to reduce the number of bytes needed to render the page. Note that scaling on the server reduces not only the pixel count but also the natural size of the image.

    Original size Display size Unnecessary pixels
    110 x 110 100 x 100 110 x 110 - 100 x 100 = 2100
    410 x 410 400 x 400 410 x 410 - 400 x 400 = 8100
    810 x 810 800 x 800 810 x 810 - 800 x 800 = 16100

    Note that in all three cases the displayed size is only 10 pixels smaller per side than the natural size. However, the larger the image, the more unnecessary data has to be encoded and sent. Even if you cannot make the natural and displayed sizes match exactly, reduce the number of unnecessary pixels as much as possible.
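    The table's arithmetic can be reproduced directly (counting pixels, not bytes):

    ```python
    # Natural size vs. display size from the table above.
    for natural, shown in ((110, 100), (410, 400), (810, 800)):
        wasted = natural ** 2 - shown ** 2
        print(f"{natural}x{natural} shown at {shown}x{shown}: "
              f"{wasted} unnecessary pixels")
    ```

    Although the per-side difference is the same 10 pixels each time, the waste grows with the image size, since it scales with the perimeter of the image.
    
    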

    List of optimization methods

    Image optimization is both a science and an art. We can call it art because no one can give a definite answer as to how best to compress a particular image. However, this is also science, because we have at our disposal developed techniques and algorithms that can significantly reduce the size of the resource.

    Keep in mind some tips and techniques to help you optimize your images:

    • Choose images in vector formats. Their quality is independent of resolution and scale, so they are suitable for large screens and different types of devices.
    • Minify and compress SVG assets. Many graphic applications add XML markup that often contains unnecessary metadata. It can be removed. Make sure your servers have GZIP compression configured for your SVG assets.
    • Choose the most suitable raster formats. Determine the requirements for your images and select the appropriate format for each resource.
    • Try different quality settings for raster formats. Don't be afraid to lower the quality: often the image will still look good and the file size will be significantly smaller.
    • Remove unnecessary metadata. Many raster images contain unnecessary information about the resource: geodata, camera information, etc. To remove them, use the appropriate tools.
    • Scale your images. Resize files on the server so that the natural and displayed sizes are almost the same. Pay special attention to large images: if the browser has to scale them down, your site's performance suffers noticeably.
    • Automate. Use reliable tools and software that will automatically optimize the images on your site.

    Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 3.0 License, and code samples are licensed under the Apache 2.0 License. Java is a registered trademark of Oracle and/or its affiliates.

    Updated August 8, 2018