We have been using responsive images with srcset for a long time, but we also wanted to add lazy loading of images to our projects, where the user sees a low-quality preview while the full image loads.
Nowadays, users want as much content as possible in as little time as possible, so an image that doesn't load instantly is already a problem: the user is likely to skip it and move on, if not close the site altogether.
Showing an animated placeholder or a lower-quality image while the real one loads clearly holds users' attention better than a plain white block. That's why we decided to implement it. There are, however, many different ways to do so.
What options did we consider?
Pros:
very simple implementation;
consumes few resources.
Cons:
in 2023 it looks rather boring and does not grab attention (it does not make you want to wait for the image to finish loading).
Displaying an image compressed to 1-2 KB with a blur effect on top. Seems like a great idea...
Pros:
these images load almost instantly.
Cons:
since the images for desktop, tablet, and mobile screens may differ (in size, aspect ratio, or even content), the API has to send a base64 string for each screen resolution. This multiplies the size of the JSON (especially when requesting a catalog page, where every card contains a slider with many pictures).
This is what our image object looks like now:
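For illustration only (the field names and URLs here are made up, not the actual API contract), such an object might carry separate links per breakpoint and format:

```json
{
  "desktop": { "webp": "/img/hero-1920.webp", "jpg": "/img/hero-1920.jpg" },
  "tablet": { "webp": "/img/hero-1024.webp", "jpg": "/img/hero-1024.jpg" },
  "mobile": { "webp": "/img/hero-640.webp", "jpg": "/img/hero-640.jpg" }
}
```

A base64 preview would have to be duplicated in every one of these branches, which is exactly what bloats the payload.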
Adding a base64 string for each resolution is not the best solution, so let's move on.
This is progressive JPEG: a way of encoding JPEG images so that the picture is not loaded linearly (from top to bottom) but fills the entire block space at once and sharpens in several passes (from the worst quality to the best).
In our opinion, this is a great option for uploading images to the site, because it gives the user an idea of the content from the very beginning.
However, there are a few nuances that make you look for something else:
It only works with JPEG: no other image format can do this, which means we have to give up other formats (like WebP or AVIF).
Although JPEG is still the dominant technology for storing digital images, it does not fulfill several requirements that have become important in recent years, such as image compression with higher bit depth (9 to 16 bits), high dynamic range images, lossless compression and alpha channel representation.
So here is the option we settled on: BlurHash.
What this means for the frontend:
We get a short string (20-30 characters) in base83 format, and "stretch" it onto the canvas.
The format keeps the weight of the JSON to a minimum;
No extra styles are needed to blur the picture;
No need to send multiple image variants for different screens (desktop, tablet, mobile);
The placeholder gives a "soft" outline of the future content.
As for cons: perhaps only the need to use canvas, and the rather strong blurring of the content.
Before moving on to the frontend implementation: we built a small microservice, available as a Docker image, that generates the base83 strings for your images. The microservice is available in the repository: github.com/dev-family/blurhash
Approximate kind of json that this component receives:
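Sketched out, it might look like this (everything except the blurhash field is an assumption about the shape, and the hash value is a placeholder):

```json
{
  "desktop": { "webp": "/img/hero-1920.webp", "jpg": "/img/hero-1920.jpg" },
  "mobile": { "webp": "/img/hero-640.webp", "jpg": "/img/hero-640.jpg" },
  "blurhash": "LEHV6nWB2yk8pyo0adR*.7kCMdnj"
}
```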
Strictly speaking, there is no urgent need to send images for different resolutions (and different formats). It is enough to adjust the typing and replace our handwritten Picture component with whatever Image component fits your use case.
In this context, we are only interested in the blurhash field: the encoded string (its length can be adjusted during encoding by changing the aspect ratio and size of the original image).
How the component works
By default, the image contains no data;
When it enters the user's viewport, a useEffect fires that sets the data into the Picture component and overlays a canvas with the BlurHash on top of it;
When the image is fully loaded, the canvas is smoothly hidden.
List of component props
The component itself:
We set a basic isLoaded flag to switch styles and control the loading state;
imgPlaceholder is the BlurHash string;
imageSrc — the link to the source image will be set here (an empty string by default, or, as in our case, an object with several fields);
imageRef — used to track whether the image is in the user's viewport;
onLoad — handler for successful image loading.
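A possible typing for these pieces (only the names taken from the article, such as isLoaded, imgPlaceholder, imageSrc, and onLoad, are "real"; the rest is an assumption):

```typescript
// Hypothetical typings for the state and props described above.
type ImageSrc = string | { webp?: string; jpg?: string };

interface LazyImageProps {
  imgPlaceholder: string;   // the BlurHash string
  src: ImageSrc;            // the real image source(s)
  alt?: string;
  onLoad?: () => void;      // fired when the full image has loaded
}

// Initial state: nothing loaded yet, so the placeholder stays visible
// and the Picture gets an empty source.
const initialState = { isLoaded: false, imageSrc: "" as ImageSrc };
```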
Add useEffect, which does all the work:
We use an IntersectionObserver to watch for the image entering the viewport; when it does, we set the data into the Picture and unsubscribe.
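The observer logic inside that useEffect can be sketched as a standalone helper (the helper name is ours, not the article's; IntersectionObserver is a browser API, so this only does anything in the DOM):

```typescript
// Observe an element once: fire the callback the first time it becomes
// visible, then stop watching it.
function observeOnce(el: Element, onVisible: () => void): () => void {
  const observer = new IntersectionObserver((entries) => {
    if (entries.some((entry) => entry.isIntersecting)) {
      onVisible();            // set the data into the Picture component
      observer.unobserve(el); // one-shot: stop watching this element
    }
  });
  observer.observe(el);
  // Return a cleanup function suitable for useEffect's return value.
  return () => observer.disconnect();
}
```

Inside the component, the useEffect would call something like `observeOnce(imageRef.current, () => setImageSrc(src))` and return the cleanup it produces.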
StyledLazyImage — div container, its styles:
The Blurhash component accepts the following props:
hash (string) — the encoded BlurHash string;
width (int | string) — CSS width of the decoded image;
height (int | string) — CSS height of the decoded image;
resolutionX (int) — the X-axis resolution at which the decoded image is rendered. Recommended minimum 32px; large values (>128px) greatly decrease rendering performance (default: 32);
resolutionY (int) — the Y-axis resolution at which the decoded image is rendered. Recommended minimum 32px; large values (>128px) greatly decrease rendering performance (default: 32);
punch (int) — controls the "punch" value (roughly, the contrast) of the BlurHash decoding algorithm (default: 1).
StyledBlurhash — container for the Blurhash component, its styles:
The speed and smoothness of hiding the BlurHash can be controlled through transition and animation.
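For instance, the hiding could be a plain opacity transition (class names and timings here are placeholders, not the article's actual styled-components code):

```css
/* Cover the image area; fade out once the real image has loaded. */
.styled-blurhash {
  position: absolute;
  inset: 0;
  transition: opacity 0.4s ease; /* speed and smoothness of hiding */
}
.styled-blurhash.is-loaded {
  opacity: 0;
  pointer-events: none;
}
```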
Picture — a picture component (it can be replaced with NextImage or any other component, as long as it returns an image).
It accepts links to all image formats and nests them inside <picture />.
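The resulting markup is roughly the following (the file names are placeholders):

```html
<picture>
  <source srcset="/img/hero.avif" type="image/avif" />
  <source srcset="/img/hero.webp" type="image/webp" />
  <img src="/img/hero.jpg" alt="" />
</picture>
```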
We implemented this component in an already finished project with a lot of images in various sliders and block layouts.
The task was to implement the most adaptive variant without external fixes to the layout. We are now trying this implementation on other projects, and the feedback is positive, which can't help but make us happy.