Improving User Experience: Efficient Image Loading Using Progressive Display
Andrey — front-end developer
Alexey — back-end developer
We have been using responsive images with srcset for a long time, but we also wanted to add lazy loading to our projects, where the user sees a low-quality preview while the main image loads.
Nowadays, users want to get as much content as possible in as little time as possible. Because of this, if an image doesn't load instantly, it's already a problem. The user is likely to skip it and move on, if not close the site altogether.
Showing an animated block or a lower-quality preview while the image loads clearly holds users' attention better than a plain white block. That's why we decided to implement it. There are, however, many different ways to do it.
What options did we consider?
Using Skeleton
Pros:
very simple implementation
consumes few resources
Cons:
In 2023, it looks rather boring and does not grab attention (it does not make you want to wait for the loading to finish)
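For context, a minimal skeleton sketch with styled-components (illustrative only, not taken from the project):

import styled, { keyframes } from "styled-components";

const shimmer = keyframes`
  to {
    background-position: -200% 0;
  }
`;

// A gray block with a moving highlight, shown in place of the image while it loads
export const Skeleton = styled.div`
  width: 100%;
  height: 100%;
  background: linear-gradient(90deg, #eee 25%, #ddd 50%, #eee 75%);
  background-size: 200% 100%;
  animation: ${shimmer} 1.2s linear infinite;
`;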
Base64 image
Displaying an image compressed down to 1-2 KB with a blur effect applied on top. Seems like a great idea...
Pros:
these images load almost instantly
Cons:
since the images for desktop, tablet, and mobile screens may differ (in size, aspect ratio, and even content), the API has to send a base64 image for every screen resolution. This multiplies the size of the JSON (especially when we request a catalog page where each card contains a slider with many pictures).
This is what our image object looks like now:
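(The original illustration is not reproduced here; below is an approximate reconstruction, inferred from the Picture component shown later. The URLs are placeholders.)

{
  "desktop": {
    "x1": "https://example.com/img/product_desktop.jpg",
    "x2": "https://example.com/img/product_desktop@2x.jpg",
    "webp_x1": "https://example.com/img/product_desktop.webp",
    "webp_x2": "https://example.com/img/product_desktop@2x.webp"
  },
  "tablet": {
    "x1": "https://example.com/img/product_tablet.jpg",
    "x2": "https://example.com/img/product_tablet@2x.jpg",
    "webp_x1": "https://example.com/img/product_tablet.webp",
    "webp_x2": "https://example.com/img/product_tablet@2x.webp"
  },
  "mobile": {
    "x1": "https://example.com/img/product_mobile.jpg",
    "x2": "https://example.com/img/product_mobile@2x.jpg",
    "webp_x1": "https://example.com/img/product_mobile.webp",
    "webp_x2": "https://example.com/img/product_mobile@2x.webp"
  }
}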
Adding a base64 string for each resolution is not the best solution, so let's move on.
Progressive JPEG
Progressive JPEG is a way of encoding JPEG images so that the picture does not load linearly (from top to bottom) but fills the entire block space at once and sharpens in several passes (from the worst quality to the best).
In our opinion, this is a great option for uploading images to the site, because it gives the user an idea of the content from the very beginning.
However, there are a few nuances that make you look for something else:
It only works with JPEG; no other common web format renders this way, so we would have to give up formats like WebP or AVIF
Although JPEG is still the dominant technology for storing digital images, it does not fulfill several requirements that have become important in recent years, such as image compression with higher bit depth (9 to 16 bits), high dynamic range images, lossless compression and alpha channel representation.
So here is the option we settled on.
Blurhash
Blurhash — a library developed by the team at Wolt, with implementations in many languages.
What this means for the frontend:
We get a short string (20-30 characters) encoded in base83 and "stretch" it onto a canvas.
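To illustrate the idea, a minimal sketch of that decoding step, using the decode function from the blurhash npm package (the canvas element and hash value are assumed to come from your own code):

import { decode } from "blurhash";

// Decode the base83 string into RGBA pixels and paint them onto a canvas.
// CSS then scales the small canvas up to the full block size.
function drawBlurhash(canvas: HTMLCanvasElement, hash: string): void {
  const width = 32;
  const height = 32;
  const pixels = decode(hash, width, height); // Uint8ClampedArray of RGBA values
  const ctx = canvas.getContext("2d");
  if (!ctx) return;
  const imageData = ctx.createImageData(width, height);
  imageData.data.set(pixels);
  ctx.putImageData(imageData, 0, 0);
}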
Pros:
This format allows you to reduce the weight of json to a minimum;
No need to add extra styles to blur the picture;
No need to send multiple image options for different screens (desktop, tablet, mobile);
The image has a "soft" outline of future content.
Cons:
the need to use canvas, and the rather strong blurring of the content.
Before moving on to the front-end implementation: we built a small microservice, available as a Docker image, that generates blurhash strings for your images. It lives in the repository: github.com/dev-family/blurhash
The developers kindly provide the react-blurhash library, which contains ready-made React components for working with blurhash.
An approximate example of the JSON this component receives:
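(A sketch of the shape; the source fields mirror the image object above, and the full placeholder value is the one quoted below.)

{
  "desktop": { "x1": "…", "x2": "…", "webp_x1": "…", "webp_x2": "…" },
  "tablet": { "x1": "…", "x2": "…", "webp_x1": "…", "webp_x2": "…" },
  "mobile": { "x1": "…", "x2": "…", "webp_x1": "…", "webp_x2": "…" },
  "placeholder": "|FDcXS4nxu~q4nt7-;9Fxu?bxu9F…"
}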
In general, there is no strict need to send images for different resolutions (and different formats). It is enough to adjust the typing and replace our handwritten Picture component with whatever Image component fits your use case.
In this context, we are only interested in the field:
placeholder: "|FDcXS4nxu~q4nt7-;9Fxu?bxu9FxuRjIU%MayRjRj%MRjIU%MM{RjxvRjozofxuM{t8xuIUofofWBRjt7RjayxuM{WBt7InWUofWBoft7WBWBofRioft7ayt7oeayofWBRjoLs:ayoffRayofR*ofj[j[oMWBayj[azfR"
This is our blurhash string (its length is set at encoding time by the number of X and Y components, which you would normally choose based on the aspect ratio and size of the original image).
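For reference, a minimal encoding sketch with the blurhash npm package (getting the RGBA pixel data out of the image, e.g. with node-canvas or sharp, is left out):

import { encode } from "blurhash";

// componentX / componentY (1-9) control how much detail the hash keeps,
// and with it the length of the resulting base83 string.
export function encodeImageToBlurhash(
  pixels: Uint8ClampedArray, // RGBA pixel data of the source image
  width: number,
  height: number
): string {
  return encode(pixels, width, height, 4, 3);
}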
How the component works
By default, the image contains no data;
The moment it enters the user's viewport, a useEffect fires that sets the data on the Picture component and overlays a canvas with the blurhash on top of it;
When the image is fully loaded, the canvas is smoothly hidden.
List of component props
export interface LazyPictureProps {
  data: IPicture;
  alt?: string;
  placeholder?: string;
  breakpoints?: IBreakpoints;
  onLoadSuccess?: (img: EventTarget) => void;
  onLoadError?: () => void;
  className?: string;
}

export interface IBreakpoints {
  desktop: string;
  tablet: string;
  mobile: string;
}
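The IPicture and PictureSources types are not shown in the snippets; here is a plausible shape, inferred from how the component below destructures them:

// Inferred sketch; adjust to match your actual API response
export interface PictureSources {
  x1: string;
  x2: string;
  webp_x1: string;
  webp_x2: string;
}

export interface IPicture {
  desktop: PictureSources;
  tablet?: PictureSources;
  mobile: PictureSources;
  placeholder?: string;
}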
The component itself:
import {
  memo,
  useEffect,
  useMemo,
  useRef,
  useState,
  ReactEventHandler,
} from "react";
import { Blurhash } from "react-blurhash";

export default memo(function LazyPicture({
  data,
  onLoadError,
  onLoadSuccess,
  className = "",
  ...props
}: LazyPictureProps) {
  const [isLoaded, setIsLoaded] = useState(false);
  const { placeholder, ...imageProps } = data;
  const imgPlaceholder = useMemo(
    () => placeholder || defaultBlurPlaceholder,
    [placeholder]
  );
  const [imageSrc, setImageSrc] = useState(defaultImageProps);
  const imageRef = useRef<HTMLImageElement | null>(null);
  const _onLoad: ReactEventHandler = (event) => {
    const img = event.target;
    if (onLoadSuccess) onLoadSuccess(img);
    setIsLoaded(true);
  };
  // ...continued below: the useEffect and the JSX
Set the base isLoaded flag to change styles and control loading state;
imgPlaceholder is the blurhash string;
imageSrc — the source image data will be set here (an empty value by default or, as in our case, an object with several fields);
imageRef — to track if the image is in the user's visibility area;
onLoad — handler for successful image loading.
Add useEffect, which does all the work:
useEffect(() => {
  let observer: IntersectionObserver | undefined;
  // Guard the global so old browsers don't throw a ReferenceError
  if ("IntersectionObserver" in window) {
    observer = new IntersectionObserver(
      (entries) => {
        entries.forEach((entry) => {
          // when the image is visible in the viewport + rootMargin
          if (entry.intersectionRatio > 0 || entry.isIntersecting) {
            setImageSrc(imageProps);
            imageRef.current && observer?.unobserve(imageRef.current);
          }
        });
      },
      {
        threshold: 0.01,
        rootMargin: "20%",
      }
    );
    imageRef.current && observer.observe(imageRef.current);
  } else {
    // Fallback for old browsers: show the full image immediately
    setImageSrc(imageProps);
  }
  return () => {
    imageRef.current && observer?.unobserve(imageRef.current);
  };
}, []);
We use IntersectionObserver to watch for the image entering the viewport; when it does, we pass the data into Picture and unsubscribe.
Here's the JSX:
return (
  <StyledLazyImage>
    <StyledBlurHash isHidden={isLoaded}>
      <Blurhash
        hash={imgPlaceholder}
        width={"100%"}
        height={"100%"}
        resolutionX={32}
        resolutionY={32}
        punch={1}
      />
    </StyledBlurHash>
    <Picture
      ref={imageRef}
      {...imageSrc}
      {...props}
      className={`${className} ${!isLoaded ? "lazy" : ""}`}
      onLoad={_onLoad}
      onLoadError={onLoadError}
    />
  </StyledLazyImage>
);
});
Styled-components is used here, but this is not essential.
StyledLazyImage — the div container; its styles:
import styled, { css, keyframes } from "styled-components";

const StyledLazyImage = styled.div`
  width: 100%;
  height: 100%;
  position: relative;

  canvas {
    width: 100%;
    height: 100%;
  }

  .lazy {
    opacity: 0;
  }
`;
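Blurhash — the component from the react-blurhash library. Its main props, per the library's documentation: hash is the blurhash string; width and height set the rendered size of the canvas; resolutionX and resolutionY set the resolution the hash is decoded at (higher means sharper but slower); punch adjusts the contrast of the decoded preview.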
StyledBlurHash — the container for the Blurhash component; its styles:
const displayAnim = keyframes`
  to {
    display: none;
  }
`;

const StyledBlurHash = styled.div<{ isHidden?: boolean }>`
  position: absolute;
  width: 100%;
  height: 100%;
  z-index: 22222;
  visibility: visible;
  transition: visibility 0.2s, opacity 0.2s;
  ${({ isHidden }) =>
    isHidden &&
    css`
      visibility: hidden;
      opacity: 0;
      animation: ${displayAnim} 0.2s;
    `}
`;
Blurhash hiding speed and smoothness can be controlled through transition and animation.
Picture — the picture component itself (it can be replaced with next/image or any other component, as long as it renders an image).
import { forwardRef } from "react";

const Picture = forwardRef((props, imageRef) => {
  const {
    noImageOnTouch = false,
    alt = "",
    onLoad,
    onLoadError,
    className = "",
  } = props;

  const desktopImages: PictureSources =
    props.desktop || defaultImageProps.desktop;
  const {
    x1: desktop_x1,
    x2: desktop_x2,
    webp_x1: desktop_webp_x1,
    webp_x2: desktop_webp_x2,
  } = desktopImages;

  const tabletImages: PictureSources =
    props.tablet || props.desktop || defaultImageProps.tablet;
  const {
    x1: tablet_x1,
    x2: tablet_x2,
    webp_x1: tablet_webp_x1,
    webp_x2: tablet_webp_x2,
  } = tabletImages;

  const mobileImages: PictureSources = props.mobile || defaultImageProps.mobile;
  const {
    x1: mobile_x1,
    x2: mobile_x2,
    webp_x1: mobile_webp_x1,
    webp_x2: mobile_webp_x2,
  } = mobileImages;
It accepts links to all the image variants and nests them as <source> elements inside <picture />:
  return !Object.keys(props).length ? (
    <img src="/images/error-page-image.png" alt="error-image" />
  ) : desktop_x1 && desktop_x1.endsWith(".svg") ? (
    <img src={desktop_x1} alt="" />
  ) : (
    <picture>
      {noImageOnTouch && (
        <source
          media="(hover: none) and (pointer: coarse), (hover: none) and (pointer: fine)"
          srcSet={base64Pixel}
          sizes="100%"
        />
      )}
      <source
        type="image/webp"
        media={`(min-width: 1025px)`}
        srcSet={`${desktop_webp_x1}, ${desktop_webp_x2} 2x`}
      />
      <source
        media={`(min-width: 1025px)`}
        srcSet={`${desktop_x1}, ${desktop_x2} 2x`}
      />
      <source
        type="image/webp"
        media={`(min-width: 501px)`}
        srcSet={`${tablet_webp_x1}, ${tablet_webp_x2} 2x`}
      />
      <source
        media={`(min-width: 501px)`}
        srcSet={`${tablet_x1}, ${tablet_x2} 2x`}
      />
      <source
        type="image/webp"
        media={`(max-width: 500px)`}
        srcSet={`${mobile_webp_x1}, ${mobile_webp_x2} 2x`}
      />
      <source
        media={`(max-width: 500px)`}
        srcSet={`${mobile_x1}, ${mobile_x2} 2x`}
      />
      <img
        ref={imageRef}
        src={desktop_x1}
        srcSet={`${desktop_x2} 2x`}
        crossOrigin=""
        className={className}
        alt={alt}
        onLoad={onLoad}
        onError={onLoadError}
      />
    </picture>
  );
});
Picture.displayName = "Picture";
export default Picture;
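For completeness, a hypothetical usage sketch (the module paths and the ProductCard wrapper are illustrative, not from the project):

import LazyPicture from "./LazyPicture";
import type { IPicture } from "./types"; // hypothetical paths

// Render a card whose image lazy-loads behind a blurhash placeholder
export function ProductCard({ image }: { image: IPicture }) {
  return (
    <LazyPicture
      data={image}
      alt="Product photo"
      onLoadSuccess={() => console.log("image loaded")}
    />
  );
}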
To summarize:
We implemented this component in an already finished project with a lot of images in various sliders and block layouts.
The task was to make the variant as adaptable as possible, without extra fixes to the layout. We are now trying this implementation on other projects, and the feedback has been positive, which makes us happy.