Archive

Archive for June, 2021

Detect Unused Classes in… HTML

June 15th, 2021 No comments

Usually, when “unused” comes up in conversation regarding CSS, it’s about removing chunks of CSS that are not used in your site or, at least, the styles not currently in use on a specific page. The minimal amount of CSS is best! I’ve written about how this is a hard problem in the past. In JavaScript-land, the equivalent is tree shaking (removing unused JavaScript).

But what about the other way around, detecting classes in HTML that aren’t used in your CSS? If you knew this for sure, you could clean up your markup, removing classes that don’t do anything.

I saw Robert Kieffer post a Gist the other day with an interesting solution. The idea is to load up document.styleSheets and find all the rules (the ones that are classes). Then, use a MutationObserver to watch the DOM for all HTML, and check the classList of each node to see if it matches any from any stylesheet. If the HTML has a class not found in a stylesheet, report it.
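
If you’re curious about the mechanics, here’s a rough sketch of that idea in plain JavaScript. This is my own approximation, not Robert’s Gist: it only looks at top-level class selectors (rules nested inside @media and friends are skipped), and it just logs suspects to the console.

// Rough sketch of the idea, not the original Gist.
// 1) Collect every class name mentioned in a selector of any same-origin stylesheet.
const knownClasses = new Set();
for (const sheet of document.styleSheets) {
  let rules;
  try {
    rules = sheet.cssRules; // throws for cross-origin stylesheets
  } catch {
    continue;
  }
  for (const rule of rules) {
    // Note: rules nested inside @media/@supports aren't walked in this sketch.
    if (rule.selectorText) {
      for (const match of rule.selectorText.matchAll(/\.([\w-]+)/g)) {
        knownClasses.add(match[1]);
      }
    }
  }
}

// 2) Watch the DOM and report any class on any element that no stylesheet mentions.
const observer = new MutationObserver(() => {
  for (const el of document.querySelectorAll("[class]")) {
    for (const cls of el.classList) {
      if (!knownClasses.has(cls)) {
        console.warn("Possibly unused class:", cls, el);
      }
    }
  }
});
observer.observe(document.documentElement, {
  childList: true,
  subtree: true,
  attributes: true,
  attributeFilter: ["class"],
});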

I gave it a quick whirl and got it working and correctly reporting unused classes:

Your mileage may vary. For one thing, this script is set up as an ES Module. That means if you just import it and run it on a regular ol’ HTML document, it won’t find anything because your module script is deferred and the MutationObserver won’t pick anything up. I just un-moduled it and put it in the <head> to make my demo work.

I Netlify Dropped the site online in case you wanna dig into it and check it out. I would have used CodePen, but CodePen doesn’t link up your styles as <link>ed stylesheets (by default, but you could use external resources to do that). I just thought it would be more clear as a deployed site.

Careful now.

Just like unused CSS is a hard problem because of how hard it is to know for sure if a ruleset is unused, this approach may be an even harder problem. For one thing, classes might be used as a JavaScript hook for things. Styles might get injected onto the page in <style> blocks, which this script wouldn’t check. Heck, you might have integration tests that run in CI that use classes to do testing-related things.

I’d say this kind of thing is a useful tool for havin’ a looksie at classes that you have a hunch might be unused. But I wouldn’t say there’s a permission slip to run this thing and then yank out every reported class without further investigation.


The post Detect Unused Classes in… HTML appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Categories: Designing, Others Tags:

Media Queries in Times of @container

June 15th, 2021 No comments

Max Böck took me up on my challenge to look through a codebase and see how many of the @media queries could ultimately become @container queries.

I took the bait and had a look at some of my projects – and yes, most of what I use @media for today can probably be accomplished by @container at some point. Nevertheless, I came up with a few scenarios where I think media queries will still be necessary.

Max didn’t say exactly how many would be replaced, but I got the impression it was 50/50ish.

A combination of both techniques will probably be the best way forward. @media can handle the big picture stuff, user preferences and global styles; @container will take care of all the micro-adjustments in the components themselves.

A perfect team!

I also think there will be a big difference between what we do when refactoring an existing CSS codebase and what we do when we are building from scratch. And what we do when we first get container queries will be different from what we do years from now, once new patterns have settled in. I’ve long been bullish on components being the right abstraction for front-end development. It feels like everything lately pushes us in that direction, from JavaScript frameworks to native web components, to container queries and style scoping.

Direct Link to ArticlePermalink


The post Media Queries in Times of @container appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Categories: Designing, Others Tags:

Just How Niche is Headless WordPress?

June 15th, 2021 No comments

I wonder where headless WordPress will land. And by “headless” I mean only using the WordPress admin and building out the user-facing site through the WordPress REST API rather than the traditional WordPress theme structure.

Is it… big? The future of WordPress? Or relatively niche?

Where’s the demand?

Certainly, there is demand for it. I know plenty of people doing it. For instance, Gatsby has a gatsby-source-wordpress plugin that allows you to source content from a WordPress site in a way that consumes the WordPress REST API and caches it as GraphQL for use in a React-powered Gatsby site. It has been downloaded 59k times this month and 851k times overall, as I write. That’s a healthy chunk of usage for one particular site-building technology. Every use case there is technically using WordPress headless-ly. If you’re interested in this, here’s Ganesh Dahal digging deep into it.
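
To be clear about what “consuming the WordPress REST API” looks like in practice, here’s about the smallest possible sketch (example.com stands in for wherever WordPress lives, and it assumes the default REST routes are enabled):

// Minimal sketch of pulling posts from the WordPress REST API.
// "example.com" is a placeholder for wherever your WordPress install lives.
async function getLatestPosts() {
  const response = await fetch("https://example.com/wp-json/wp/v2/posts?per_page=5");
  if (!response.ok) {
    throw new Error(`Request failed: ${response.status}`);
  }
  const posts = await response.json();
  // The REST API wraps rendered fields in { rendered: "..." } objects.
  return posts.map((post) => ({
    id: post.id,
    title: post.title.rendered,
    html: post.content.rendered,
  }));
}

getLatestPosts().then((posts) => console.log(posts));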

What is going headless supposed to improve?

The Gatsby integration makes a solid case for why anyone would consider a headless WordPress site. I’ll get to that.

Many advocate that the main reason is architectural. It de-couples the back end from the front end. It tears down the monolith. As a de-coupled system, the back and front ends can evolve independently. And yet, I’m less hot on that idea as years go by. For example, I’d argue that simply ripping out WordPress and replacing it with another CMS in a headless setup like this is easier said than done. Also, the idea that I’m going to use the WordPress API not just to power a website, but also a native reading app, and some digital internet-connected highway billboard or something is not a use case that’s exploding in popularity as far as I can tell.

The real reason I think people reach for a WordPress back end for a Gatsby-driven front end is essentially this: they like React. They like building things from components. They like the fast page transitions. They like being able to host things somewhere Jamstack-y with all the nice developer previews and such. They like the hot module reloading. They like Prettier and JSX. They just like it, and I don’t blame them. When you enjoy that kind of developer experience, going back to authoring PHP templates where it takes manually refreshing the browser and maintaining some kind of hand-rolled build process isn’t exactly enticing.

Frontity is another product looking to home in on React + WordPress. Geoff and Sarah shared how to do all this last year on the Vue/Nuxt side.

But will headless WordPress become more popular than the traditional theming model of WordPress that’s based on PHP templates that align to an explicit structure? Nah. Well, maybe it would if WordPress itself champions the idea and offers guidance, training, and documentation that make it easier for developers to adopt that approach. I’d buy into it if WordPress says that a headless architecture is the new course of direction. But none of those things are true. Consequently, to some degree, it’s a niche thing.

Just how niche is headless?

WP Engine is a big WordPress host and has a whole thing called Atlas. And that effort definitely looks like they are taking this niche market seriously. I’m not 100% sure what Atlas all is, but it looks like a dashboard for spinning up sites with some interesting looking code-as-config. One of the elephants in the room with headless WordPress is that, well, now you have two websites to deal with. You have wherever you are hosting and running WordPress, and wherever you are hosting and running the site that consumes the WordPress APIs. Maybe this brings those two things together somehow. The deploy-from-Git-commits thing is appealing and what I’d consider table stakes for modern hosting these days.

Another reason people are into headless WordPress is that the end result can be static, as in, pre-generated HTML pages. That means the server for your website can be highly CDN-ized, as it’s literally only static assets that are instantly available to download. There’s no PHP or database for server-side rendering things, which can be slow (though, to be fair, that can be dealt with) since it adds a process ahead of rendering.

What’s “the WordPress way” for going headless?

I’d lump any service that builds a static version of your WordPress site into the headless WordPress bucket. That’s because, ultimately, those sites are using WordPress APIs to build those static files, just like Gatsby or whatever else would do.

That’s what Strattic does. They spin up a WordPress site for you that they consider staging. You do your WordPress work there, then use their publish system to push a static version of your site to production. That’s compelling because it solves for something that other headless WordPress usage doesn’t: just doing things the WordPress way.

For example, custom WordPress blocks or plugins might produce output that not only contains custom HTML, but CSS and JavaScript as well. Imagine a “carousel” block or plugin. That carousel isn’t going to work if all you’re doing is grabbing the post content from an API and dunking it onto a page. You’ll either need to go extract the CSS and JavaScript from elsewhere and include it, or somehow just know that isn’t how you do things anymore. You might attach the images as metadata somehow, pull them client-side, and do your own implementation of a carousel. With Strattic, theoretically, it’ll work as the HTML, CSS, and JavaScript is still present on the static site. But notably, you don’t have PHP, so Strattic had to hand-build form integrations, they use client-side Algolia for search, Disqus for comments, etc., because there is no server-side language available.

Shifter is another player here. It’s similar to Strattic where you work on your site in the WordPress admin, then publish to a static site. I believe Shifter even spins your WordPress site down when it’s not in use, which makes sense since the output is static and there is no reason a server with PHP and MySQL needs to be running. As an alternative, Shifter has a headless-only WordPress setup that presumably stays spun up all the time for outside API usage.

It’s fun to think about all this stuff

But as I do, I realize that the ideas and discussions around headless WordPress are mostly focused on the developer. WordPress has this huge market of people who just are not developers. Yet, they administer a WordPress site, taking advantage of the plugin and theme ecosystem. That’s kinda cool, and it’s impressive that WordPress serves both markets so well. There’s just a heck of a lot more WordPress site owners who aren’t developers than those who are, I reckon, so that alone will keep headless WordPress from being anything more than a relatively niche concept for some time. But, ya know, if they wanna put GraphQL in core, I’ll still take it kthxbye.


The post Just How Niche is Headless WordPress? appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Categories: Designing, Others Tags:

Spinning Up Multiple WordPress Sites Locally With DevKinsta

June 15th, 2021 No comments

When building themes and plugins for WordPress, we need to make sure they work well in all the different environments where they will be installed. We can sometimes control this environment when creating a theme for our own websites, but at other times we cannot, such as when distributing our plugins via the public WordPress repository for anyone to download and install it.

Concerning WordPress, the possible combinations of environments for us to worry about include:

  • Different versions of PHP,
  • Different versions of WordPress,
  • Different versions of the WordPress editor (aka the block editor),
  • HTTPS enabled/disabled,
  • Multisite enabled/disabled.

Let’s see how this is the case. PHP 8.0, which is the latest version of PHP, has introduced breaking changes from the previous versions. Since WordPress still officially supports PHP 5.6, our plugin may need to support 7 versions of PHP: PHP 5.6, plus PHP 7.0 to 7.4, plus PHP 8.0. If the plugin requires some specific feature of PHP, such as typed properties (introduced in PHP 7.4), then it will need to support that version of PHP onward (in this case, PHP 7.4 and PHP 8.0).

Concerning versioning in WordPress, this software itself may occasionally introduce breaking changes, such as the update to a newer version of jQuery in WordPress 5.6. In addition, every major release of WordPress introduces new features (such as the new Gutenberg editor, introduced in version 5.0), which could be required for our products.

The block editor is no exception. If our themes and plugins contain custom blocks, testing them for all different versions is imperative. At the very minimum, we need to worry about two versions of Gutenberg: the one shipped in WordPress core, and the one available as a standalone plugin.

Concerning both HTTPS and multisite, our themes and plugins could behave differently depending on these being enabled or not. For instance, we may want to disable access to a REST endpoint when not using HTTPS or provide extended capabilities to the super admin from the multisite.

This means there are many possible environments that we need to worry about. How do we handle it?

Figuring Out The Environments

Everything that can be automated, must be automated. For instance, to test the logic on our themes and plugins, we can create a continuous integration process that runs a set of tests on multiple environments. Automation takes a big chunk of the pain away.

However, we can’t just rely on having machines do all the work for us. We will also need to access some testing WordPress site to visualize if, after some software upgrade, our themes still look as intended. For instance, if Gutenberg updates its global styles system or how a core block behaves, we want to check that our products were not impacted by the change.

How many different environments do we need to support? Let’s say we are targeting 4 versions of PHP (7.2 to 8.0), 5 versions of WordPress (5.3 to 5.7), 2 versions of Gutenberg (core/plugin), HTTPS enabled/disabled, and multisite on/off. It all amounts to 4 × 5 × 2 × 2 × 2 = 160 possible environments. That’s way too much to handle.

To simplify matters, instead of producing a site for each possible combination, we can reduce it to a handful of environments that, overall, comprise all the different properties.

For instance, we can produce these five environments:

  1. PHP 7.2 + WP 5.3 + Gutenberg core + HTTPS + multisite
  2. PHP 7.3 + WP 5.4 + Gutenberg plugin + HTTPS + multisite
  3. PHP 7.4 + WP 5.5 + Gutenberg plugin + no HTTPS + no multisite
  4. PHP 8.0 + WP 5.6 + Gutenberg core + HTTPS + no multisite
  5. PHP 8.0 + WP 5.7 + Gutenberg core + no HTTPS + no multisite

Spinning up 5 WordPress sites is manageable, but it is not easy since it involves technical challenges, particularly enabling different versions of PHP, and providing HTTPS certificates.

We want to spin up WordPress sites easily, even if we have limited knowledge of systems. And we want to do it quickly since we have our development and design work to do. How can we do it?

Managing Local WordPress Sites With DevKinsta

Fortunately, spinning up local WordPress sites is not difficult nowadays, since we can avoid the manual work, and instead rely on a tool that automates the process for us.

DevKinsta is exactly this kind of tool. It enables you to launch a local WordPress site with minimum effort, for any desired configuration. The site will be created in less time than it takes to drink a cup of coffee. And it certainly costs less than a cup of coffee: DevKinsta is 100% free and available for Windows, macOS, and Ubuntu users.

As its name suggests, DevKinsta was created by Kinsta, one of the leading hosting providers in the WordPress space. Their goal is to simplify the process of working with WordPress projects, whether for designers or developers, freelancers, or agencies. The easier we can set up our environment, the more we can focus on our own themes and plugins, the better our products will be.

The magic that powers DevKinsta is Docker, the software that makes it possible to isolate an app from its environment via containers. However, we are not required to know about Docker or containers: DevKinsta hides the underlying complexity away, so we can just launch the WordPress site at the press of a button.

In this article, we will explore how to use DevKinsta to launch the 5 different local WordPress instances for testing a plugin, and what nice features we have at our disposal.

Launching A WordPress Site With DevKinsta

The images from above show DevKinsta when opening it for the first time. It presents 3 options for creating a new local WordPress site:

  1. New WordPress site
    It uses the default configuration, including the latest WordPress release and PHP 8.
  2. Import from Kinsta
    It clones the configuration from an existing site hosted at MyKinsta.
  3. Custom site
    It uses a custom configuration, provided by you.

Pressing option #1 will literally produce a local WordPress site without you even having to think about it. I would explain a bit further if I could, but there’s really no more to it than that.

If you happen to be a Kinsta user, then pressing on option #2 allows you to directly import a site from MyKinsta, including a dump of its database. (Btw, it works in the opposite direction too: local changes in DevKinsta can be pushed to a staging site in MyKinsta.)

Finally, when pressing on option #3, we can specify what custom configuration to use for the local WordPress site.

Let’s press the button for option #3. The configuration screen will look like this:

A few inputs are read-only. These are options that are currently fixed but will be made configurable sometime in the future. For instance, the webserver is currently set to Nginx, but work to add Apache is underway.

The options we can presently configure are the following:

  • The site’s name (from which the local URL is calculated),
  • PHP version,
  • Database name,
  • HTTPS enabled/disabled,
  • The WordPress site’s title,
  • WordPress version,
  • The admin’s email, username and password,
  • Multisite enabled/disabled.

After completing this information for my first local WordPress site, called “GraphQL API on PHP 80”, and clicking on “Create site”, all it took for DevKinsta to create the site was just 2 minutes. Then, I’m presented with the info screen for the newly-created site:

The new WordPress site is available under its own local domain graphql-api-on-php80.local. Clicking on the “Open site” button, we can visualize our new site in the browser:

I repeated this process for all the different required environments, and voilà, my 5 local WordPress sites were up and running in no time. Now, DevKinsta’s initial screen lists all my sites:

Using WP-CLI

From the required configuration for my environments, I’ve so far satisfied all items except one: installing Gutenberg as a plugin.

Let’s do this next. We can install a plugin the usual way via the WP admin, which we can access by clicking on the “WP admin” button from the site info screen, as seen in the image above.

Even better, DevKinsta ships with WP-CLI already installed, so we can interact with the WordPress site via the command-line interface.

In this case, we need to have a minimal knowledge of Docker. Executing a command within a container is done like this:

docker exec {containerName} /bin/bash -c '{command}'

The needed parameters are:

  • DevKinsta’s container is called devkinsta_fpm.
  • The WP-CLI command to install and activate a plugin is wp plugin install {pluginName} --activate --path={pathToSite} --allow-root
  • The path to the WordPress site, within the container, is /www/kinsta/public/{siteName}.

Putting everything together, the command to install and activate the Gutenberg plugin in the local WordPress site is this one:

docker exec devkinsta_fpm /bin/bash -c 'wp plugin install gutenberg --activate --path=/www/kinsta/public/MyLocalSite --allow-root'
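
And since we have several sites, nothing stops us from looping over them. Here’s a small Node sketch that runs that same command against a list of sites; the folder names in the list are placeholders for whatever your sites are actually called:

// Sketch: install and activate Gutenberg on several DevKinsta sites in one go.
// The site folder names below are placeholders.
const { execSync } = require("child_process");

const sites = [
  "graphql-api-on-php80",
  "graphql-api-on-php74",
  "graphql-api-on-php73",
];

for (const site of sites) {
  const wpCli = `wp plugin install gutenberg --activate --path=/www/kinsta/public/${site} --allow-root`;
  execSync(`docker exec devkinsta_fpm /bin/bash -c '${wpCli}'`, {
    stdio: "inherit", // stream WP-CLI output to our terminal
  });
}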

Exploring Features

There are a couple of handy features available for our local WordPress sites.

The first one is the integration with Adminer, a tool similar to phpMyAdmin, to manage the database. With this tool, we can directly fetch and edit the data through a custom SQL query. Clicking on the “Database manager” button, on the site info screen, will open Adminer in a new browser tab:

The second noteworthy feature is the integration with Mailhog, the popular email testing tool. Thanks to this tool, any email sent from the local WordPress site is not actually sent out, but is captured, and displayed on the Email inbox:

Clicking on an email, we can see its contents:

Accessing The Local Installation Files

After installing the local WordPress site, its folder containing all of its files (including WordPress core, installed themes and plugins, and uploaded media items) will be publicly available:

  • Mac and Linux: under /Users/{username}/DevKinsta/public/{siteName}.
  • Windows: under C:\Users\{username}\DevKinsta\public\{siteName}.

(In other words: the local WordPress site’s files can be accessed not only through the Docker container, but also through the filesystem in our OS, such as using My PC on Windows, Finder in Mac, or any terminal.)

This is very convenient since it offers a shortcut for installing the themes and plugins we’re developing, speeding up our work.

For instance, to test a change in a plugin in all 5 local sites, we’d normally have to go to the WP admin on each site, and upload the new version of the plugin (or, alternatively, use WP-CLI).

By having access to the site’s folder, though, we can simply clone the plugin from its repo, directly under wp-content/plugins:

$ cd ~/DevKinsta/public/MyLocalSite/wp-content/plugins
$ git clone git@github.com:leoloso/MyAwesomePlugin.git

This way, we can just git pull to update our plugin to its latest version, making it immediately available in the local WordPress site:

$ cd MyAwesomePlugin
$ git pull

If we want to test the plugin under development on a different branch, we can similarly do a git checkout:

git checkout some-branch-with-new-feature

Since we may have several sites with different environments, we can automate this procedure by executing a bash script, which iterates the local WordPress sites and, for each, executes a git pull for the plugin installed within:

#!/bin/bash

# Loop over every local DevKinsta site and pull the latest version of the plugin
iterateSitesAndGitPullPlugin(){
  cd ~/DevKinsta/public/
  for file in *
  do
    if [ -d "$file" ]; then
      # The plugin is cloned under wp-content/plugins on each site
      cd ~/DevKinsta/public/$file/wp-content/plugins/MyAwesomePlugin
      git pull
    fi
  done
}

iterateSitesAndGitPullPlugin

Conclusion

When designing and developing our WordPress themes and plugins, we want to be able to focus on our actual work, as much as possible. If we can automate setting up the working environment, we can then invest the extra time and energy into our product.

This is what DevKinsta makes possible. We can spin up a local WordPress site by just pressing a button, and create many sites with different environments in just a few minutes.

DevKinsta is being actively developed and supported. If you run into any issue or have some inquiry, you can browse through the documentation or head to the Community forum, where the creators of DevKinsta will help you out.

All of this, for free. Sounds good? If so, download DevKinsta and go spin up your local WordPress sites.

Categories: Others Tags:

Smashing Podcast Episode 39 With Addy Osmani: Image Optimization

June 15th, 2021 No comments

In today’s episode of the Smashing Podcast, we’re talking about image optimization. What steps should we follow for performant images in 2021? I spoke with expert Addy Osmani to find out.

Show Notes

Weekly Update

Transcript

Drew McLellan: He’s an engineering manager working on Google Chrome, where his team focuses on speed, helping to keep the web fast. Devoted to the open source community, his past contributions include Lighthouse, Workbox, Yeoman, Critical, and TodoMVC. So we know he knows his way around optimizing for web performance. But did you know he once won the Oscar for best actress in a supporting role due to a clerical error? My smashing friends, please welcome Addy Osmani. Hi, Addy. How are you?

Addy Osmani: I’m smashing.

Drew McLellan: That’s good to hear. I wanted to talk to you today about images on the web. It’s an area where there’s been a surprising amount of changes and innovation over the last few years, and you’ve just written a very comprehensive book all about image optimization for Smashing. What was the motivation to think at this time, “Now is the time for a book on image optimization?”

Addy Osmani: That’s a great question. I think we know that images have been a pretty key part of the web for decades and that our brains are able to interpret images much faster than they can text. But this overall topic is one that continues to get more and more interesting and more nuanced over time. And I always tell people this is probably, I think, my third or fourth book. I’ve never intentionally set out to write a book.

Addy Osmani: I began this book writing out an article about image optimization, and then over time I found that I’d accidentally written a whole book about it. We’ve been working on this project for about two years now. And even in that time, the industry has been evolving; browsers and tooling around images and image formats have been evolving.

Addy Osmani: And so I wrote this book because I found myself finding it hard to stay on top of all of these changes. And I thought, “I’m going to be a good web citizen and try to track everything that I’ve learned in one place so everybody else can take advantage of it.”

Drew McLellan: It is one of those areas, I think, with a lot of performance optimization in the browser, it’s a rapidly shifting landscape, isn’t it? Where a technique that you’ve learned as being current and being best practice, some technology shift happens, and then you find it’s actually an anti-pattern and you shouldn’t be doing it. And trying to keep your knowledge up and make sure that you’re reading the right articles and learning the right things and you’re not reading something from two years ago is quite difficult.

Drew McLellan: So to have it all collected in one well-researched book from an authoritative source is really tremendous.

Addy Osmani: Yeah. Even from an author’s perspective, one of the most interesting things and perhaps one of the most stressful things for our editorial team was I would hand in a chapter and say it was done. And then two weeks later, something would change in a browser, and I’d be like, “Oh, wait. I have to make another last minute change.”

Addy Osmani: But the image landscape has evolved quite a lot, even in the last year. We’ve seen WebP support finally get across the finishing line in most modern browsers. AVIF image support is in Chrome, coming to Firefox, JPEG XL, lazy loading. And across the board, we’ve seen enhancements in how you can use images on the web pretty concretely in browsers. But again, a lot for folks to keep on top of.

Drew McLellan: Some people might view the subject of image optimization as a pretty staid topic. We’ve all, at some point in our careers, learned how to export for web from our graphics software. And some of us might be in the habit of taking those exported images and running them through something like ImageOptim.

Drew McLellan: So we might know that we should choose a JPEG when it’s a photographic image and a PNG when it’s a graphic based image and think that, “Okay, that’s it. I know image optimization, I’m done.” But really, those things are just table stakes, aren’t they, at this point?

Addy Osmani: Yeah, they are. I think that our ability to display more detailed, more crisp images, even in different contexts depending on whether you care about art direction or not, has evolved over time. And I think the need to figure out how you can get those images looking as beautiful as intended to your end users, keeping in mind their environment, their device constraints, their network constraints, is a difficult problem and something that I know a lot of people still struggle with.

Addy Osmani: And so when it comes to thinking about images and getting a slightly more refined take on this beyond just, “Hey, let’s use a JPEG,” or “Let’s use a PNG,” I think there’s a few dimensions to this worth keeping in mind. The first is just generally compression. You mentioned ImageOptim, and a lot of us are used to just dragging an image over into a place and getting something smaller off the back of it.

Addy Osmani: Now, when it comes to compression, we’re usually talking about different codecs. And codecs are a compression technology that usually have an encoder component to them for encoding files and a decoder component for decoding them and decompressing them. And when you come to deciding whether you’re using something, you generally need to think about whether the photos or the images that you’re using are okay for you to approach using a lossy compression approach or a lossless approach.

Addy Osmani: Just in case folks are not really as familiar with those concepts, a lossless approach is one where you reproduce the exact same file at the very end upon decompression. So you’re not really losing much in the way of quality. Lossy compression is a lot more like putting your image through a fax machine. You get a facsimile of the original, and it’s not going to be the original file. There might be some different artifacts in place there. It might look subtly different. But in general terms, the more that you compress, the more quality that you typically lose.

Addy Osmani: And so with all of these modern image codecs, they’re trying to see just how much quality you can squeeze out while still maintaining a relatively decent file size, depending on the use case.
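
To make the lossy versus lossless distinction concrete in code, here is a tiny sketch using the Sharp library (which comes up again later in the conversation). The file names and the quality value are placeholders, not recommendations:

// Encode the same source image losslessly and lossily with Sharp.
// File names and the quality value are placeholders.
const sharp = require("sharp");

// Lossless: the decoded pixels come back exactly as they went in.
sharp("photo.png").webp({ lossless: true }).toFile("photo-lossless.webp");

// Lossy: a smaller file, but some detail is thrown away for good.
sharp("photo.png").webp({ quality: 75 }).toFile("photo-lossy.webp");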

Drew McLellan: So really, from a technology point of view, you have a source image and then you have the destination file format. But the process of turning one into the other is open for debate. As long as you have a conforming file, how you do it is down to a codec that can have lots of different implementations, and some will be better than others.

Addy Osmani: Absolutely. Absolutely. And I think that, again, going back to where we started with JPEG and PNG, folks may know the JPEG was created for a lossy compression of photos. You generally get a smaller file off the back of it, and it can sometimes have different banding artifacts. PNG was originally created for a lossless compression, does pretty well on non-photographic images.

Addy Osmani: But since then, things have evolved. Around 2010, we started to get support for WebP, which was supposed to replace JPEG and PNG and beats them in compression by a little bit. But the number of image formats and options on the table has just skyrocketed since then. I think things are headed in generally a good direction, especially with modern formats like AVIF and JPEG XL. But it’s taken a while for us to get here. Even getting WebP support across all browsers took quite some time.

Addy Osmani: And I think ultimately what swayed it is making sure that developers have been asking for it, they’ve had an appetite for being able to get better compression out of these modern formats, and the desire to just have good compatibility across browsers for these things, too.

Drew McLellan: Yeah. WebP seems really interesting to me, because as well as having lossless and lossy compression available within the format, we obviously have a much reduced file size as a result. And there’s good browser support, and we see adoption from big companies like Google and Netflix and various big companies.

Drew McLellan: But my perception in the industry is that we don’t see the same sort of uptake at the grassroots level. Is WebP still waiting for its day to come?

Addy Osmani: I think that I would say that WebP is arriving. A lot of folks have been waiting on Safari and WebKit support to materialize, and we finally have that. But when we think about new image formats, it’s very important that we understand what does support actually mean. There’s browser support for decoding those images. We also need really good tooling support so that whether you’re in a node environment, image CDN, if you’re in a CMS, you have the ability to use those image formats.

Addy Osmani: I can remember many years ago when WebP first came out. Early adopters had this problem of you’d save your WebP file to your desktop, and then suddenly, “Oh, wait. Do I need to drag this into my browser to view it?,” or, “If my users are downloading the WebP, are they going to get stuck and be wondering what’s going on?”

Addy Osmani: And so making sure that there’s pretty holistic support for the image format at both an operating system level as well as in other contexts is really important, I think, for an image format to take off. It’s also important for people who are serving up images to think about these use cases a little bit so that, if I am saving or downloading a file, you’re trying to put it into a portable format that people can generally share easily. And I think this is where, at least on iOS, iOS has got support for HEIC and HEIF. And converting things over to JPEGs when necessary allows people to share them.

Addy Osmani: So thinking through those types of use cases where we can make sure that users aren’t losing out while we’re delivering them better compression is important, I think.

Drew McLellan: I have a slide sharing service that I run that, as you can imagine, deals with hundreds of thousands of images. And when I was looking at WebP, and this was probably maybe three years ago, I was primarily looking at a way to reduce CDN bandwidth costs, because if you’re serving a smaller file, you’re being charged less to serve it. But while I still needed a fallback image, a legacy image format as well, my calculations showed that the cost of storing a whole other image set outweighed the benefits of serving a smaller file. So here we are in 2021. Is that a decision I should be reconsidering at this point?

Addy Osmani: I think that’s a really important consideration. Sometimes, when we talk about how you should be approaching your image strategy, it’s very easy to give people a high-level answer of, “Hey, yeah. Just generate five different formats, and that will just scale infinitely.” And it’s not always the case.

Addy Osmani: I think that when you have to keep storage in mind, sometimes trying to find what is the best, most common denominator to be serving your users is worth keeping in mind. These days, I would actually say that WebP is worth considering as that common denominator. For people who have been used to using the picture tag to conditionally serve different formats down to people, typically you’d use a JPEG as your main fallback. Maybe it’s okay these days to actually be using the WebP as your fallback for most users, unless you’ve got people who are on very, very old browsers. And I think we’re seeing a lot less of that these days. But you definitely have some flexibility there.

Addy Osmani: Now, if you’re trying to be forward facing, I would say go pick one format that you feel works really well. If you can approach storage in a way that scales and is flexible to your needs, what I would say people should do is consider JPEG XL. It’s not technically shipping in a browser just yet. When it does, JPEG XL should be a pretty great option for a lot of photos in lossy or lossless use cases or for non-photo use cases as well. And it’s probably going to be much better than WebP V1. So that’s one place.

Addy Osmani: I think that AVIF is probably going to be better if you need to go to really low bit rates. Maybe you care a lot about bandwidth. Maybe you care a little bit less about image fidelity. And at those bit rates, I could imagine it looking crisper than some of the alternatives. And until we have JPEG XL, I’d try to take a look at your analytics and understand whether it’s possible for you to serve AVIF. Otherwise, I’d focus on that WebP. If you look at your analytics, I’d guess most people can be served WebP, as long as you care a little bit less about wide-gamut or text overlays, places where chroma subsampling may not be perfect in WebP. That’s certainly something worth keeping in mind.

Addy Osmani: So I would try to keep in mind that there’s not going to be a one size fits all for everybody. I personally, these days, worry a little bit less about the storage and egress and bandwidth costs, just because I use an image CDN. And I’m happy to say I use Cloudinary personally. We use lots of different image CDNs where I work. But I found that not having to worry as much about the maintenance costs of dealing with image pipelines, dealing with how I’m going to support like, “Oh, hey, here’s yet another image format or new types of fallbacks or new web APIs,” that has been a nice benefit to investing in something that just takes care of it for me.

Addy Osmani: And then the overall cost for my use cases have been okay. But I can totally imagine that if you’re running a slide service at that scale, that might not necessarily be an option, too.

Drew McLellan: Yeah. So I want to come back to some of these upcoming future formats. But I think that’s worth digging into, because with any sort of performance tools, Lighthouse, or WebPageTests, if any of us run our sites through it, one of the key things that it will suggest is that we use a CDN for images. And that is a very realistic thing to do for very big companies. Is it realistic and within the reach of people building smaller websites and apps, or is that actually as easy to do as it sounds?

Addy Osmani: I think the question people should ask is, “What are you using images for?” If you only have a few images, if you’re building a blog and the images you’re adding in are relatively simple, you don’t have hundreds and hundreds or thousands of thousands of images, you might be okay with just approaching this at build time, in a very static way, where you install a couple of NPM packages. Maybe you’re just using Sharp. And that takes care of you for the most part.

Addy Osmani: There are tools that can help you with generating multiple formats. It does increase your build time a little bit, but that might actually be fine for a lot of folks. And then for folks who do want to be able to leverage multiple-

Addy Osmani: And then for folks who do want to be able to leverage multiple formats, they don’t want to deal with as much of the tooling minutia and want to be able to get a really rich responsive image or story in place, I would say try out an image CDN. I was personally quite reticent about using it for personal projects for the cost concerns initially, and then over time as I took a look at my billing, I actually realized it’s saving me time that I’d otherwise be investing in addressing these problems myself. I don’t know how much you’ve had to write custom scripts for dealing with your images in the past but I realized if I can save myself at least a couple of days of debugging through these different npm packages a month, then the costs kind of take care of the time I’m saving and so it’s okay.

Addy Osmani: But it can be something where if you’re scaling to 100s of 1000s or millions of images and that’s not something that’s necessarily covered by your revenue or not something that you’re prepared to pay for, you do need to think about alternative strategies. And I think we’re lucky that we have enough flexibility with the tools that are available to us today to be able to go in either of those directions, where we do something a little bit more kind of custom, we tackle it ourselves or roll our own image CDN or we invest in something slightly more commercial. And we’re at a place where I’d say that for some use cases, yeah you can use an image CDN and it’s affordable.
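
As a sketch of the build-time route described above, a short Node script with Sharp is often all it takes. The directory names and quality settings here are assumptions, not a prescription:

// Build-time sketch: generate WebP and AVIF versions of every image in ./images.
// Directory names and quality values are assumptions.
const fs = require("fs");
const path = require("path");
const sharp = require("sharp");

const inputDir = "images";
const outputDir = "dist/images";
fs.mkdirSync(outputDir, { recursive: true });

for (const file of fs.readdirSync(inputDir)) {
  if (!/\.(jpe?g|png)$/i.test(file)) continue; // only process source JPEGs/PNGs
  const source = path.join(inputDir, file);
  const base = path.join(outputDir, path.parse(file).name);

  sharp(source).webp({ quality: 80 }).toFile(`${base}.webp`);
  sharp(source).avif({ quality: 50 }).toFile(`${base}.avif`);
}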

Drew McLellan: I guess, one of the sort of guiding principles is always just to be agile and be prepared for change. And you might start off using an image CDN to dynamically convert your images for you as they’re requested, and if that gets to a point where it’s not sustainable cost-wise you can look at another solution and have your code base in a state where it’s going to be easy to substitute one solution for another. I think generally and anywhere you’re relying on a third-party service, that’s a good principle to have isn’t it? So these upcoming image formats, you mentioned JPEG XL. What is JPEG XL? Where’s it come from? And what does it do for us?

Addy Osmani: That’s an excellent question. So JPEG XL is a next generation image format, it’s supposed to be general purpose and it’s a codec from the JPEG committee. It started off with some roots in Google’s PIK format and then Cloudinary’s FUIF format. There have been a lot of formats over the years that have kind of been subsumed by this effort, but it’s become a lot more than just the kind of sum of its individual parts and some of the benefits of JPEG XL are it’s great for high fidelity images, really good for lossless, it’s got support for progressive decoding, lossless JPEG transcoding, and it’s also kind of fuss-free and royalty-free, which is definitely a benefit. I think that JPEG XL could potentially be a really strong candidate. We were talking earlier about, if you were to just pick one, what would you use? And I think the JPEG XL has got potential to be that one.

Addy Osmani: I also don’t want to over promise, we’re still very early on with browser support. And so I think that we should really wait and see, experiment and evaluate how well it kind of lines up in practice and meets people’s expectations but I see a lot of potential with JPEG XL for both those lossy and lossless cases. Right now, I believe that Chrome is probably the furthest along in terms of support, but I’ve also seen definitely interest from Mozilla side and other browsers in this so I’m excited about the future with JPEG XL. And if we were to say, what is even shorter term of interest to folks? There’s of course AVIF too.

Drew McLellan: Tell us about AVIF, this is another one that I’m unfamiliar with.

Addy Osmani: Okay. So we mentioned a little bit earlier about AVIF maybe being a better candidate if you need to go to low bit rates and you care about bandwidth more than image fidelity, as a general principle, AVIF really takes the lead in low fidelity high appeal compression. And JPEG XL, it should excel in medium to high fidelity, but they are slightly different formats in their own rights. We’re at a place where AVIF has got increasingly good browser support, but let me take a step back and talk a little bit more about the format. So AVIF itself is based on the AV1 video codec, which has been standardized by the Alliance for Open Media, and it tries to get people significant compression gains over JPEG, over WebP, which we were talking about earlier. And while the exact savings you can get from AVIF will depend on the content and your quality targets, we’ve seen plenty of cases where it can offer over 50% savings compared to JPEG.

Addy Osmani: It’s got lots of good features, it’s able to give you container support for new features like high dynamic range and wide color gamuts, film grain synthesis. And again, similar to talking about being forward facing, one of the nice things about the picture tag is that you could serve users AVIF files right now and it’ll still fall back to your WebP or your JPEG in cases where it’s not necessarily supported. But going back to your example about Photoshop Save For Web, you could take a JPEG that’s 500 kilobytes in size, try to shoot for a similar quality to Photoshop Save For Web and with AVIF I would say that you’d probably be able to get to a point where that file size is about 90 kilobytes, 100 kilobytes, so quite a lot of savings with no real discernible loss in quality.

Addy Osmani: And one of the nice things about that is you’re ideally not going to be seeing as much loss of the texture in any images that have rich detail. So if you’ve got photos of forests or camping or any of those types of the things, they should still look really rich with AVIF. So I’m quite excited about the direction that AVIF has. I do think it needs a little bit more work in terms of tooling support. So I dropped a tweet out about this the other day, we’ve got a number of options for using AVIF right now, for single images we’ve got Squoosh, squoosh.app, which is written by another team in Chrome, so shout out to Surma and Jake for working on Squoosh. Avif.io has got a number of good options for folks who are trying to use AVIF today, regardless of what tech stack they’re focused on, Sharp supports AVIF too.

Addy Osmani: But then generally you think about other places where we deal with images, whether it’s in Figma or in Sketch or in Photoshop or in other places, and I would say that we still need to do a little bit of work in terms of AVIF support there, because it needs to be ubiquitous for developers and users to really feel like it’s landed and come home. And that’s one of the areas of focus for us with the teams working on AVIF in Chrome at the moment, trying to make sure that we can get tooling to a pretty good place.

Drew McLellan: So we’ve got in HTML, the picture element now, which gives us more flexibility over the traditional image tag. Although the image tag’s come a long way as well, hasn’t it? But we saw picture being added, it was around the same time as the native video tag, I think in that sort of original batch of HTML5 changes. And this gives us the ability to specify multiple sources, is that right?

Addy Osmani: Yes, that’s right.

Drew McLellan: So you can list different formats of images and the browser will pick the one it supports, and that enables us to be quite experimental straight away without needing to worry too much about breaking things for people with older browsers.

Addy Osmani: Absolutely. I think that’s one of the nicest benefits of using the picture tag outside of use cases where we’re thinking about our direction, just being able to serve people an image and have the browser go through the list of potential sources and see, okay, well, I will use the first one in that list that I understand otherwise I’ll fall back, that’s a really powerful capability for folks. I think at the same time, I’ve also heard some folks express some concern or some worry that we’re regenerating really huge blobs of markup now when we’re trying to support multiple formats and you factor in different sizes for those formats and suddenly it gets a little bit bulky.

Addy Osmani: So are there other ways that we could approach those problems? I don’t want to sell people too much on image CDNs, I want them to stand on their own. But this is one of those places where an idea called content negotiation can actually offer you an interesting path. So, we’ve talked a little bit about the picture tag where you have to generate a bunch of different resources and decide on the order of preference, right, extra HTML. With content negotiation, what it says is let’s do all of that work on the server. So the clients can tell the server what formats they support up front via a list of MIME types in the Accept HTTP header. Then the server can do all the heavy work of generating and managing the ultimate resources and deciding which ones to send down to clients. And one of the powerful things here is if you’re using an image CDN, you can point to a single resource.

Addy Osmani: So maybe if we’ve got a puppy image like puppy.JPEG, we could give people a URL to puppy.JPEG and if their browser supports WebP or it supports a AVIF the server can get really smart about serving down the right image to those users depending on what their support looks like, but otherwise fall back without you needing to do a ton of extra work yourself. Now, I think that’s a powerful idea. There’s a lot that you can do on the server, we sometimes talk about how not everybody has got access to really strong network quality, your effective connection type can be really different depending on where you are.

Addy Osmani: Even living in Silicon Valley, I could be walking from a coffee shop to a hotel or I could be in the car and the quality of my wifi or my signal may not be that great. So this is where you’ve got access to other APIs, other ideas like the Save-Data client hint for potentially being able to serve people down even smaller sized resources, if the user has opted in to data savings. So there’s a lot of interesting stuff that we could be doing on the server side and I do think we should keep pushing on these ideas of finding a nice balance where people who are comfortable with doing the markup path have got all the flexibility to do so and people who want a slightly more magical solution have also got a few options.
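
As a rough illustration of the content negotiation idea, a server can branch on the Accept header the browser sends. This minimal Node sketch uses placeholder file names and does far less than a real image CDN would:

// Minimal content-negotiation sketch: pick an image format based on the
// Accept header. File names are placeholders.
const http = require("http");
const fs = require("fs");

http
  .createServer((req, res) => {
    const accept = req.headers.accept || "";

    // Default to JPEG, upgrade if the browser advertises support.
    let file = "puppy.jpg";
    let type = "image/jpeg";
    if (accept.includes("image/avif")) {
      file = "puppy.avif";
      type = "image/avif";
    } else if (accept.includes("image/webp")) {
      file = "puppy.webp";
      type = "image/webp";
    }

    res.writeHead(200, { "Content-Type": type, Vary: "Accept" });
    fs.createReadStream(file).pipe(res);
  })
  .listen(8080);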

Drew McLellan: The concept of this sort of data saver approach was something that I learned of first from your book. I mean, let’s go into that a little bit more because that’s quite interesting. So you’re talking about the browser being able to signal a preference for wanting a reduced data experience back because maybe it’s on a metered connection or has low battery or something.

Addy Osmani: Exactly. Exactly. I’ve been traveling in the normal times or the before times back when we would travel a lot more, I’ve experienced plenty of places in the world or situations where my network quality might be really poor or really spotty, and so even opening up a webpage could be a frustrating or difficult experience. I might be looking up a menu and if I can’t see pictures of the beautiful food they’ve got available I might go somewhere where I can, or I might, I don’t know, make myself some food instead. But I think that one of the interesting things about data saver is it gives you a connection back to what the user’s preferences are. So if, as a user, I know that I’m having a hard time with my network connection, I can say, “Okay, well, I’m going to opt into data saver mode in my browser.”

Addy Osmani: And then you can use that as a developer as a signal to say, “Okay, well, the user’s a bit constrained, maybe we will serve them down much smaller images or images of a much lower quality.” But they still get to see some images at all, which is better than them waiting a very long time for something much richer to be served down. Other benefits of these types of signals are that you can use them for conditionally serving media. So maybe there are cases where text is the most important thing in that page, maybe you can switch those images off if you discover that users are in kind of a constrained environment. I’ll only spend 30 seconds on this, but you can really push this idea to its extremes. Some of the interesting things you can do with Save-Data are maybe even turning off very costly features implemented in JavaScript.

Addy Osmani: If you have certain components that are considered slightly more optional, maybe those don’t necessarily need to be sent down to all users if they only enhance the experience. You can still serve everybody a very core, small, quick experience, and then just layer it on with some nice frosting for people who have a faster connection or device.
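
For reference, the Save-Data preference shows up both as a request header (Save-Data: on) and, in supporting browsers, as navigator.connection.saveData. A small client-side sketch, with the image URLs as placeholders:

// Client-side sketch: pick a lighter image if the user opted into data savings.
// Image URLs are placeholders.
const connection =
  navigator.connection || navigator.mozConnection || navigator.webkitConnection;
const saveData = Boolean(connection && connection.saveData);

const hero = document.querySelector("img.hero");
if (hero) {
  hero.src = saveData ? "/images/hero-small.jpg" : "/images/hero-large.jpg";
}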

Drew McLellan: Potentially, I guess it could factor into pagination and you could return 10 results on a page rather than a 100 and those sorts of things as well. So lots of interesting, interesting capabilities there. I think we’re all sort of familiar with the frustrating process of getting a new site ready, optimizing all your images, handing it over to the client, giving them a CMS to manage the content and find that they’re just replacing everything with poorly optimized images. I mean, again, an image CDN, I guess, would be a really convenient solution to that but are there other solutions, are there things that the CMS could be doing on the server to help with that or is an image CDN just probably the way to go?

Addy Osmani: I think that what we’ve discovered after probably at least six or seven years of trying to get everybody optimizing their images is that it’s a hard problem where some folks involved in the picture might be slightly more technically savvy and maybe comfortable setting up their own tooling or going and running Lighthouse or trying out other tools to let them know whether there are opportunities to improve. I’d love to see people consistently using things like Lighthouse to catch if you’ve got opportunities to optimize further or serve down images of the right size but beyond that, sometimes we run into use cases where the people who are uploading images may not necessarily even understand the cost of the resources that they’re uploading. This is commonly something we run into, and I’ll apologize, I’m not going to call people out too much, but this is something we run into even with the Google blog.

Addy Osmani: Every couple of weeks on the Google blog, we’ll have somebody upload a very large 20 or 30 megabyte animated GIF. And I don’t expect them to know that that’s not a good idea, they’re trying to make the article look cool and very engaging and interactive, but those audiences are not necessarily going to know to go and run tools or to use ImageOptim or to use any of these other tools in place and so documenting for them, that they should check them out, is certainly one option. But being able to automate away the problem, I think is very compelling and helps us consistently get to a place where we’re hopefully balancing the needs of all of our users of CMSs, whether they’re technical or non-technical, as well as the needs of our users.

Addy Osmani: So I think the image CDNs can definitely play a role in helping out here. Ultimately, the thing that’s important is making sure you have a solution in place between people, stakeholders who might be uploading those images, and what gets served down to users. If it’s an image CDN, if it’s something you’ve rolled yourself, if it’s a built step, just needs to be something in place to make sure that you are not serving down something that’s very, very large and inefficient.

Drew McLellan: Talking about animated GIFs, they’re surprisingly popular. They’re fun, we love them, but they’re also huge. And really, it’s a case where a file format that was not designed for video is being used for video. Is there a solution to that with any of these image formats? What can we do?

Addy Osmani: Oh, gosh. The history of GIFs is fascinating. We saw a lot of the formats we know and love or have been around for a while were originated in the late ’80s to early ’90s, and the GIF is one of those. It was created in 1987. I’m about as old as the GIF.

Addy Osmani: As you mentioned, it wasn’t originally created necessarily for use case. I think it was Netscape Navigator which in mid ’90s maybe added support for looping GIFs and giving us this kind of crazy fun way to do memes and the like, but GIFs have got so many weaknesses. They’re kind of limited in many cases to a very finite color palette; 256 colors, in many cases. They’re a bitmapped raster format with pixel value stored in image files.

Addy Osmani: They’re very inefficient, for a number of reasons. And you mentioned that they’re also quite large. I think that we’ve gotten into this place of thinking that if we want a short segment of video or animation that’s going to be looping, the GIF is the thing that we have to use. And that’s just not the case.

Addy Osmani: While we do see that there are modern image formats that have support for animation, I think that the most basic thing you can do these days is make sure you’re serving a video down instead of a GIF. Muted auto-play videos combined with H.264, H.265, whatever video codec you’re going to use, can be really powerful, and significantly smaller for use cases where you need to be showing a sequence of images.

Addy Osmani: There are options for this. AVIF has got image sequences in there, potentially. Other formats have explored these ideas as well. But I think that one thing you can do is, if you’re using GIFs today, or you have users who are slightly less technical who are using GIFs today, try to see if you can give them tools that will allow them to export a video instead, or if your pipeline can take care of that for them, that’s even better.

Addy Osmani: I have plenty of conversations with CMS providers where you do see people uploading GIFs. They don’t know the difference between a video and a GIF file. But if you can just, whether it’s with an image CDN or via some built process, change the file over to a more efficient format, that would be great.

Drew McLellan: We talked briefly about tools like ImageOptim that manage to strip out information from the files to give us the same quality of result with a smaller file size. I’m presuming that’s because the file formats that we commonly deal with weren’t optimized for delivery over the Web in the first place, so they’re doing that step of removing anything that isn’t useful for serving on the Web. Do these new formats take that into consideration already? Is something like ImageOptim a tool that just won’t be required with these newer formats?

Addy Osmani: I’m anticipating that some of the older formats… Things that have been around for a while, take a while to phase out or to evolve into something else. And so I can see tools like ImageOptim continuing to be useful. Now, what are modern image formats doing that are much better? Well, I would say that they’re taking into account quite a few things.

Addy Osmani: They’re taking into account, are there aspects of the picture that the human eye can’t necessarily make out a difference around? When I’m playing around with different quality settings or different codecs, I’m always looking for that point where if I take the quality down low enough, I’m going to see banding artifacts. I’m going to see lots of weird looking squares around my buildings or the details of my picture.

Addy Osmani: But once those start to disappear, I really need to start zooming in to the image and making comparisons across these different formats. And if users are unlikely to do that, then I think that there are good questions around is that point of quality good enough? I think that modern image formats are pretty good at being able to help you navigate, filtering out some of those details pretty well. Keeping in mind what are the needs of color, because obviously we’ve got white gamut as a thing right now as well.

Addy Osmani: Some people might be okay with an amount of changing your color palette versus not, depending on the type of images that you have available, but definitely I see modern formats trying to be resilient against things like generational loss as well. Generational loss is this idea that… We mentioned memes earlier. A common problem on the Web today is you’ll find a meme, whether it’s on Facebook or Instagram or Reddit or wherever else, you’ll save it, and maybe you’ll share it around with a friend. Maybe they’ll upload it somewhere else. And you suddenly have this terrible kind of copy machine or fax effect of the quality of that image getting worse and worse and worse over time.

Addy Osmani: And so when I see something get reshared that I may have seen three months ago, it might now be really, really bad quality. You can still make out some of the details, but image formats being able to keep that in mind and work around those types of problems, I think, are really interesting.

Addy Osmani: I know that JPEG XL was trying to keep this idea of generational loss in mind as well. So there’s plenty of things that modern codecs and formats are trying to do to evolve for our needs, even if they’re very meme focused.

Drew McLellan: Let’s say you’ve inherited a project that has all sorts of images on it. What would be the best way to assess the state of that project in terms of image optimization? Are there tools or anything that would help there?

Addy Osmani: I think that it depends on how much time you’ve got to sink into the problem. There are very basic things people can try doing, like obviously batch converting those images over to more modern formats at the recommended default quality and do an eyeball check on how well they’re doing compared to the original.

Addy Osmani: If you’re able to invest a little bit more time, there are plenty of tools and techniques like DSSIM and other ways of being able to compare what the perceptual quality differences are between different types of images that have been converted. And you can use that as a kind of data-driven approach to deciding, if I’m going to batch convert all of my old images to WebP, what is the quality setting that I should be relying on? If I’m going to be doing it for AVIF or JPEG XL, what is the quality setting that I should be relying on?

Addy Osmani: I think that there’s plenty of tools people have available. It really just depends on how much time you’re able to sink into it. Other things that you can do, again, going back to the image CDN aspect, if you don’t have a lot of time and you’re comfortable with the cost of an image CDN, you can just bulk upload all of those images. And there are CDNs that support this idea of automatic quality setting. I think in Cloudinary it’s q_auto, or something like that.

Addy Osmani: But the basic idea there is they will do a scan of the image, try to get a sense of the type of content that’s in there, and automatically decide on the right level of quality that you should be using for the images that are getting served down to users. And so you do have some tooling options that are available here, for sure.
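
To make that a little more concrete, here’s roughly what that looks like with Cloudinary’s URL-based transformations; the cloud name and file name here are placeholders, not anything from the conversation:

https://res.cloudinary.com/your-cloud-name/image/upload/q_auto,f_auto/sample.jpg

The q_auto flag picks a quality level automatically based on the content, and f_auto lets the CDN negotiate the best supported format (WebP, AVIF, and so on) per browser.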

Drew McLellan: I mean, you mentioned batch processing of images. Presumably you’re into the area of that generational loss that you’re talking about, when you do that. When you take an already compressed JPEG and then convert it to a WebP, for example, you risk some loss of quality. Is batch converting a viable strategy or does that generational loss come too much into play if you care about the pristine look of the images?

Addy Osmani: I think it depends on how much you’re factoring in your levels of comfort with lossy versus lossless, and your use case. If my use case is that I’ve inherited a project where the project in question is all of my family’s photos from the last 20 years, I may not be very comfortable with there being too much quality loss in those images, and maybe I’m okay with spending a little bit more money on storage if the quality can remain mostly the same, just using a more modern format.

Addy Osmani: If those are images for a product catalog or any commerce site, I think that you do need to keep in mind what your use case is. Are users going to require being able to see these images with a certain level of detail? And if that’s the case, you need to keep those trade-offs in mind when you’re choosing the right format, when you’re choosing the right quality.

Addy Osmani: So I think that batch is still okay. To give you a concrete idea of one way of seeing people approach this at scale, sometimes people will take a smaller sample of the images from that big collection that they’ve inherited, and they’ll try out a more serious set of experiments with just that set. And if they’re able to land on an approach that works well for the sample, they’ll just apply it to the whole batch. And I’ve seen that work to varying degrees of success.

Drew McLellan: So optimizing file size is just sort of one point on the overall image optimization landscape. And I’d like to get on to talking about what we can do in our browsers to optimize the way the images are used, which we’ll do after a quick word from this episode sponsor.

Drew McLellan: So we’ve optimized and compressed our large files, but now we need to think about a strategy for using those in the browser. The good old faithful image tag has gained some new powers in recent times, hasn’t it?

Addy Osmani: Yeah, it has. And maybe it’s useful for folks… I know that a lot of people that ask me about images these days also ask me to frame it in terms of metrics and the Core Web Vitals. Would it be useful for me to talk about what the Core Web Vitals are and maybe frame some of those ideas in those current terms?

Drew McLellan: Absolutely, because Core Web Vitals is a sort of initiative from Google, isn’t it, that we’ve seen more recently? We’re told that it factors into search ranking potentially at some level. What does Core Web Vitals actually mean for us in terms of images?

Addy Osmani: Great question. As you mentioned, Core Web Vitals is an initiative by Google, and it’s all about trying to share unified guidance for quality signals that can be pretty key to delivering a great user experience on the Web. It is part of a set of page experience signals Google Search may be evaluating for ranking purposes, and images can impact the Core Web Vitals in a number of ways.

Addy Osmani: Now, before I talk about what those ways are, I should probably say, what are the Core Web Vitals metrics? There’s currently three metrics that are in the Core Web Vitals. There’s largest contentful paint, there’s cumulative layout shift, and there’s first input delay. Now, in a lot of modern Web experiences we find that images tend to be one of the largest visible elements on the page. We see a lot of product pages where we have a big image that’s the main product item image. We see images in carousels, in stories and in banners.

Addy Osmani: Now, largest contentful paint, or LCP, is a Core Web Vitals metric that tries to measure when the largest contentful element, whether it’s an image, text, or something else, is in a user’s viewport, such that we’re able to tell when that image becomes visible. And that really allows a browser to determine when the main content of the page has really finished rendering.

Addy Osmani: So if I’m trying to go to a recipe site, I might care about how that recipe looks, and so we care about making sure that that big hero image of the recipe is visible to me. Now, the LCP element can change over time. It’s very possible that early on in load, the largest thing may be a heading, but as the page continues to load, it might actually end up being a much larger image or a poster of some sort.

Addy Osmani: And so when you’re trying to optimize largest contentful paint, there’s about four things that you can do. The first thing is making sure that you’re requesting your key hero image as early on as possible. Generally, we have a number of things that are important in the page. We want to make sure that we can render the main page’s content and layout.

Addy Osmani: For layout, typically we’re talking about CSS. So you may be using critical CSS, inline CSS, in your pages, want to avoid things that are render blocking, but then when it comes to your image, ideally you should be requesting that image early. Maybe that involves just making sure that the browser can discover that image as early on in the page as possible, given that a lot of us these days are relying on frameworks.

Addy Osmani: If you’re not necessarily using SSR, server-side rendering, if you are waiting on the browser to discover some of your JavaScript bundles, bundles for your components, whether you have a component for your hero image or product image, if the browser has to wait to fetch, parse, compile and execute all of these different files before it can discover the image, that might mean that your largest contentful image is going to take some time before it can be discovered.

Addy Osmani: Now, if that’s the case, if you find yourself in a place where the image is being requested pretty late, you can take advantage of a browser feature called link rel preload to make sure that the browser can discover that image as early as possible. Now, preload is a really powerful capability. It’s also one that you need to take a lot of care with. These days, it’s very easy to get to a place where maybe you hear that we’re recommending preload for your key hero image, as well as your key scripts, as well as your key fonts, and it becomes this really big mess of trying to make sure that you’re sequencing things in the right order. So the LCP image is definitely one key place worth keeping in mind for this.
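
As a rough sketch of what that looks like in markup (the file name is just a placeholder):

<head>
  <!-- Hint: fetch the LCP hero image early, before the parser or a JS bundle discovers it -->
  <link rel="preload" as="image" href="hero.jpg">
</head>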

Addy Osmani: The other thing, as I mentioned four things, the other thing is make sure you’re using srcset and an efficient modern image format. I think that srcset is really powerful. I also see sometimes when people are using it, they’ll try to overcompensate and will maybe ship 10 different versions of images in there for each possible resolution. We tend to find, at least in some research, that beyond 3x images, users have a really hard time being able to tell what the differences are for image quality and sharpness and detail. So DPR capping, device pixel ratio capping, is certainly an idea worth keeping in mind.

Addy Osmani: And then for modern image formats, we talked about formats earlier, but consider your WebP, your AVIF, your JPEG XL. Avoid wasting pixels. It’s really important to have a good strategy in place for quality. And I think that there are a lot of cases where even the default quality can sometimes be too much. So I would experiment with trying to lower your bit rate, lower your quality settings, and see just how far you can take things for your users while maintaining sharpness.
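
Putting those two ideas together, here’s a hedged sketch of what the markup might look like; the file names, widths, and sizes values are made up for illustration:

<picture>
  <!-- Modern format first; the browser falls back to the JPEG if AVIF isn't supported -->
  <source type="image/avif"
          srcset="product-400.avif 400w, product-800.avif 800w, product-1200.avif 1200w"
          sizes="(min-width: 800px) 50vw, 100vw">
  <img src="product-800.jpg"
       srcset="product-400.jpg 400w, product-800.jpg 800w, product-1200.jpg 1200w"
       sizes="(min-width: 800px) 50vw, 100vw"
       width="800" height="600" alt="Product photo">
</picture>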

Addy Osmani: And then when we’re talking about loading, one of the other things that the image tag has kind of evolved to support over the last couple of years is lazy loading. So with loading equals lazy, you no longer need to necessarily use a JavaScript library to add lazy loading to your images. You just drop that onto your image. And in Chromium browsers and Firefox, you’ll be able to lazy load those images without needing to use any third-party dependencies. And that’s quite nice too.
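
In markup that’s just an attribute; a minimal sketch, with placeholder values:

<!-- Below-the-fold image: the browser defers the request until it's near the viewport -->
<img src="comment-avatar.jpg" alt="Commenter avatar" width="48" height="48" loading="lazy" decoding="async">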

Addy Osmani: So, we’ve got lazy loading in place. We’ve got support for other things like async decoding, but I’m going to keep things going and talk very quickly about the other two Core Web Vitals metrics.

Drew McLellan: Go for it, yep.

Addy Osmani: So, get rid of layout shifts. Nobody likes things jumping around their pages. I feel like, one of my biggest frustrations is I open up a web page. I hover my finger over a button I want to click, and then suddenly a bunch of either ads or images without dimension set or other things pop in. And it causes a really unpleasant experience.

Addy Osmani: So cumulative layout shift tries to measure the instability of content. And a lot of the time, the common things causing your layout shifts are images or other elements on your page that just don’t have dimensions set. I think that that’s one of those places where it’s often straightforward for people to set image dimensions. Maybe it’s not something we’ve historically done quite as much of, but certainly something worth spending your time on. Tools like Lighthouse will try to help you collect a list of the images on your page that require dimensions, so you can go and set them.

Drew McLellan: I was going to say, that’s a really interesting point because when responsive web design became a thing, we all went through our sites and stripped out image dimensions because the tools we had at our disposal to make that work required that we didn’t have height and width attributes on our images. But that’s a bad idea now, is it?

Addy Osmani: What’s old is new again. I would say that it’s definitely worth setting dimensions on your images. Set dimensions on your ads, your iframes, anything that is dynamic content that could potentially change in size is worth setting dimensions on.

Addy Osmani: And for folks who are building really fun, out-there layout experiences, where maybe you need to do kind of more work on responsive cards and the like, I would consider using CSS aspect-ratio or aspect ratio boxes to reserve your space. And that can complement setting dimensions on those images as well, for making sure that things are as fixed as possible when you’re trying to avoid your layout shifts.
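
A minimal sketch of both approaches (the numbers and class name are placeholders):

<!-- Explicit width and height let the browser reserve space before the image arrives -->
<img src="card.jpg" alt="Card illustration" width="1200" height="800">

/* Or reserve the space in CSS for dynamic media, embeds, ads, and the like */
.card-media {
  aspect-ratio: 3 / 2;
}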

Addy Osmani: And then, finally last Core Web Vital is first input delay. This is something people don’t necessarily always think about when it comes to images. So it is in fact possible for images to block a user’s bandwidth and CPU on page load. They can get in the way of how other critical resources are loaded in, in particular on really slow connections or on lower end mobile devices that can lead to bandwidth saturation.

Addy Osmani: So first input delay is a Core Web Vitals metric that captures a user’s first impression of a site’s interactivity and responsiveness. And so by reducing main thread CPU usage, your first input delay can also be kind of minimized. So in general there, just avoid images that might cause network contention. They’re not render blocking, but they can still indirectly impact your rendering performance.

Drew McLellan: Is there anything we can do with images to stop them render blocking? Can we take load off the browser in that initial phase somehow to enable us to be interactive quicker?

Addy Osmani: I think it’s really important increasingly these days to have a good understanding of the right optimal image sequence for displaying something above the fold. I know that above the fold is an overloaded term, but like in the user’s first viewport. Very often we can end up trying to request a whole ton of resources, some of them being images, that are not really necessary for what the user is immediately going to see. And those tend to be great candidates for loading later on in the page’s lifecycle, great things to lazy load in place. But if you’re requesting a whole slew of images, like a whole queue of things very early on, those can potentially have an impact.

Drew McLellan: Yeah. So, I mean, you mentioned lazy loading images that we’ve historically required a JavaScript library to do, which has its own setbacks, I think, because of historic ways that browsers optimize loading images, where it’s almost impossible to stop them loading images, unless you just don’t give it a source. And if you don’t give it a source and then try and correct it with JavaScript afterwards, if that JavaScript doesn’t run, you get no images. So lazy loading, native lazy loading is an answer to all that.

Addy Osmani: Yeah, absolutely. And I think that this is a place where we have tried to improve the native lazy loading experience across browsers over the last year. As you know, this is one of those features where we shipped something early and we were able to take advantage of conversations with thought leaders in the industry to understand like, “Oh, hey, what are the thresholds you’re actually manually setting if you’re using lazysizes or you’re using other JavaScript lazy loading libraries?” And then we tuned our thresholds to try getting to a slightly closer place to what you’d expect them to be.

Addy Osmani: So in a lot of cases, you can just use native lazy loading. If you need something a lot more refined, if you need a lot more control over being able to set the intersection observer thresholds, the point of when the browser is going to request things, we generally suggest, go and use a library in those cases, just because we’re trying to solve for the 90% use case. But the 10% is still valid. There might be people who still need something a little bit more. And so, for most people, I’m hopeful that native lazy loading will be good enough for the foreseeable future.
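
For reference, here’s a minimal sketch of the kind of thing those libraries do under the hood with IntersectionObserver; the data-src attribute and the 200px threshold are illustrative, not from any particular library:

// Swap in the real source once an image gets close to the viewport
const io = new IntersectionObserver((entries) => {
  entries.forEach((entry) => {
    if (entry.isIntersecting) {
      const img = entry.target;
      img.src = img.dataset.src; // real URL stored in data-src
      io.unobserve(img);
    }
  });
}, { rootMargin: '200px' }); // start loading slightly before it's visible

document.querySelectorAll('img[data-src]').forEach((img) => io.observe(img));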

Drew McLellan: Most of all, it’s free. A simple attribute to add, and you get all this functionality for free, which is great. If there was one thing that our listener could do, could go away and do to their site to improve their image optimization, what would it be? Where should they start?

Addy Osmani: A good place to start is understanding how much of a problem this is for your site. I’d go and check out either Lighthouse or PageSpeed Insights. Go and run it on a few of your most popular pages and just see what comes out. If it looks like you’ve only got one or two small things to do, that’s fantastic. Maybe you can put some time in there.

Addy Osmani: If there’s a long list of things for you to do, maybe take a look at the highest opportunities that you have in there, things that say, “Oh, hey, you could save multiple seconds if you were to do this one thing.” And focus your energy there to begin with.

Addy Osmani: As we’ve talked about here, tooling for modern image formats has gotten better over time. Image CDNs can definitely be worth considering. But beyond that, there’s a lot of small steps you can take. Sometimes if it’s a small enough site, even just going and opening up Squoosh, putting a few of your images through there can be a great starting point.

Drew McLellan: That’s solid advice. Now I know it’s a smashing publication, but I really must congratulate you on the book. It’s just so comprehensive and really easy to digest. I think it’s a really valuable read.

Drew McLellan: So I’ve been learning all about image optimization. What have you been learning about lately, Addy?

Addy Osmani: What have I been learning about lately? Actually, on a slightly different topic that still has to do with images, so when I was doing my masters at college, I got really deep into computer vision and trying to understand, how can we detect different parts of an image and do wild and interesting things with them?

Addy Osmani: And a specific problem I’ve been digging into recently is I’ve been looking at pictures of myself when I was a baby or a kid. And back then, a lot of the photos my parents would take were not necessarily on digital cameras. They were Polaroids. They’re often somewhat low resolution images. And I wanted a way to be able to scale those up. And so I started digging into this problem again recently. And it led me to learn a lot more about what I can do in the browser.

Addy Osmani: So I’ve been building out some small tools that let you, using machine learning, using TensorFlow, using existing technologies, take a relatively low resolution image or illustration, and then upscale them to something that is much higher quality. So that it’s better than simply just like stretching the image out. It’s like actually filling in detail.

Addy Osmani: And that’s been kind of fun. I’ve been learning a lot about how stable WebAssembly is now across browsers, how well you can use some of these ideas for desktop application use cases. And that’s been really fun. So I’ve been digging into a lot of WebAssembly recently. And that’s been cool.

Drew McLellan: It’s funny, isn’t it? When a technology comes along that turns everything you know on its head. We’ve always said that on the web, we can make images smaller. But if we’ve only got a small image, we can’t make it bigger. It’s just impossible. But now we have technology that, under a lot of circumstances, might make that possible. It’s really fascinating.

Drew McLellan: If you, dear listener, would like to hear more from Addy, you can find him on Twitter where he’s @addyosmani and find all his projects linked from AddyOsmani.com. The book “Image Optimization” is available both physically and digitally from Smashing right now at smashingmagazine.com. Thanks for joining us today, Addy. Do you have any parting words?

Addy Osmani: Any parting words? I have a little quirk from history that I will share with people. Tim Berners-Lee uploaded the very first image to the internet in 1992. I’m not sure if you can guess what it was, but you’ll probably be surprised. Drew, do you have any guesses?

Drew McLellan: I’m guessing a cat.

Addy Osmani: A cat. It’s a good guess, but no. This was at CERN. And the image was actually of a band called Les Horribles Cernettes, which was a parody pop band formed by a bunch of CERN employees. And the music they would do is like doo-wop music. And they would sing love songs about colliders and quarks and liquid nitrogen and anti-matter wearing sixties outfits, which I found just wonderful and random.

Categories: Others Tags:

Making Tables With Sticky Header and Footers Got a Bit Easier

June 14th, 2021 No comments

It wasn’t long ago when I looked at sticky headers and footers in HTML <table>s in the blog post A table with both a sticky header and a sticky first column. In it, I never used position: sticky on any <thead>, <tfoot>, or <tr> element, because even though Safari and Firefox could do that, Chrome could not. But it could do table cells like <th> and <td>, which was a decent-enough workaround.

Well that’s changed.

Sounds like a big effort went into totally revamping tables in the rendering engine in Chromium, bringing tables up to speed. It’s not just the stickiness that was fixed, but all sorts of things. I’ll just focus on the sticky thing since that’s what I looked at.

The headline to me is that <thead> and <tfoot> are sticky-able. That seems like it will be the most common use case here.

table thead,
table tfoot {
  position: sticky;
}
table thead {
  inset-block-start: 0; /* "top" */
}
table tfoot {
  inset-block-end: 0; /* "bottom" */
}
CodePen Embed Fallback

That works in all three major browsers. You might want to get clever and only sticky them at certain minimum viewport heights or something, but the point is it works.
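
That “only at certain minimum viewport heights” idea might look something like this; the 500px breakpoint is arbitrary, just for illustration:

/* Only bother sticking the header/footer when there's enough vertical room */
@media (min-height: 500px) {
  table thead {
    position: sticky;
    inset-block-start: 0;
  }
  table tfoot {
    position: sticky;
    inset-block-end: 0;
  }
}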

I heard several questions about table columns as well. My original article had a sticky first column (that was kind of the point). While there is a table <col> tag, it’s… weird. It doesn’t actually wrap columns, it’s more like a pointer thing to be able to style down the column if you need to. I hardly ever see it used, but it’s there. Anyway, you totally can’t position: sticky; a <col>, but you can make sticky columns. You need to select all the cells in that column and stick them to the left or right. Here’s that using logical properties…

table tr th:first-child {
  position: sticky;
  inset-inline-start: 0; /* "left" */
}
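
Presumably a sticky last column works the same way in the other direction (a sketch, not from the original demo):

table tr th:last-child,
table tr td:last-child {
  position: sticky;
  inset-inline-end: 0; /* "right" */
}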

Here’s a sorta obnoxious table where the <thead>, <tfoot>, and the first and last columns are all sticky.

CodePen Embed Fallback

I’m sure you could do something tasteful with this. Like maybe:


The post Making Tables With Sticky Header and Footers Got a Bit Easier appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Categories: Designing, Others Tags:

CSS-Tricks Chronicle XXXX

June 14th, 2021 No comments

Just a little link roundup of some off-site stuff I’ve done recently. As I’m wont to do from time to time.

DevJourney Podcast

#151 Chris Coyier from ceramics to CSS-Tricks and CodePen

Chris took us from playing on his first C64 to his bachelor of arts in ceramics and back to web development. We talked about the different positions he held along the way and how they slowly but surely led him toward web development. We brushed over the creation and recreation of CSS-Tricks, learning in the open and what a good day looks like.

Podrocket Podcast

Rocket Surgery: Kaelan and Chris Coyier compare notes

Are you up to speed on all of this new CSS stuff? Chris Coyier and Kaelan compare notes on CSS and frontend development (they also discuss MDN plus).

CodePen Radio

I’ve been back to hosting the episodes of CodePen Radio this year, every week. Here are some recent episodes.

https://blog.codepen.io/feed/podcast/

I will randomly do the podcast as a video some weeks and call it CodePen Radio on TV! Here’s Adam Kuhn and I doing one of those videos. You’ll see a new intro animation on it that Adam himself did.

ShopTalk Show

Dave and I are six months away from doing this weekly for 10 years!! Again, some recent shows:

https://shoptalkshow.com/feed/podcast/

Some of the shows have been centered around a series called JavaScript in 2021.

The biggest change around ShopTalk is our Patreon + Discord which has been very awesome.


The post CSS-Tricks Chronicle XXXX appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Categories: Designing, Others Tags:

22 Exciting New Tools For Designers, June 2021

June 14th, 2021 No comments

This month’s new tools and resources collection is a mixed bag of elements for designers and developers. From fun little divots to tools that can speed up development, you are sure to find something usable here.

Here’s what new for designers this month:

June’s Top Picks

Codewell

Codewell is a service to help you learn, practice, and improve HTML and CSS skills with real templates. The benefit here is pretty obvious. When working with real templates, you can see the result of actions and changes. The tool includes free and premium options with new templates to work on weekly. Everything works in a responsive environment, and free plans have access to free challenges and a Slack community; a paid plan also includes source files and premium challenges. You need a Github login to get started.

LoomSDK

LoomSDK is an easy and reliable way to add video messaging to your product – and it’s free. The SDK enables your users to record, embed, and view Loom videos directly within web apps – adding clarity and context to any workflow.

Pintr New Image

Pintr New Image turns photos into funky and fun line images. Upload an image with plenty of contrast, use the controls to set the look you want to achieve, and download. The new images are available as PNG or SVG. Maybe use it to create your next profile photo for social media or a nifty avatar.

Terms & Conditions Apply

Terms & Conditions Apply is a game that explains all those little pop-ups that you accept to enter and interact with websites. You are tasked with a mission to start the game: Do not accept terms and conditions, say no to notifications, and opt-out of cookies. Can you do it?

Khroma

Khroma uses artificial intelligence (via personalized algorithm) to learn what colors you love and create palettes for you to discover, search, and save for use in projects. The beta tool is easy to get started with, although you do have a little color-picking homework to get started.

6 Web Tools

Mmm

Mmm is a different type of website builder. The tool, which is still in alpha, allows users to create drag and drop websites in a simple manner. It works almost like making a digital collage. Users can get a custom URL, and every page is responsive. The interface is designed so that you can even build yours on a phone. And here’s the other feature – they encourage messy designs.

LightGallery

LightGallery is a lightweight, modular, JavaScript image and video lightbox gallery plugin. It works with React.js, Vue.js, Angular, and TypeScript. It includes plenty of demos and documentation to help you make the most of this gallery tool.

Vandal

Vandal is a nifty browser extension for Firefox or Chrome that allows you to navigate back in time without changing tabs. The utility of Vandal is to allow quick and easy access to all the archived snapshots for a URL, and it supports navigation to a snapshot as well.

CSS Layout Generator

The CSS Layout Generator is a tool for creating the CSS for layout components. It is also a learning tool for teaching what is possible in CSS for positioning elements in the browser. Tweak specifications to see how it impacts the layout, CSS, and HTML.

Alpaca Data API

Alpaca Data API is an easy-to-use feed that allows you to bring in stock market data for modeling and backtesting. (it includes free and premium options based on your needs.)

Mobile Palette Generator

Mobile Palette Generator is a color-picking tool that will help you select the best hues for mobile design projects. It then shows you all the specs for primary, secondary, and accent colors.

6 Icons and UI Kits

Iconoir

Iconoir is an open-source icon repository with more than 900 SVG icons. Search icons, browse by category or poke around for what you are looking for. Everything is ready to use without signups or forms to fill out.

Pmndrs Market

Pmndrs Market has a collection of more than 300 three-dimensional elements and drawings of things for use in projects. Model renders are in a rough style with a realistic feel.

Boring Avatars

Boring Avatars is a fun collection of semi-customizable avatars without faces, hence the name. It’s a fun playground that puts a new twist on something that you might not expect to do differently.

Spark

Spark is a free download with three different website design starters. The hero images are ready to build from and are made for Figma.

Venus Design System

Venus Design System is a premium UI kit packed with more than 2,000 components and states that allow you to design fast. There’s also a demo version for you to test before you buy.

ReadyUI

ReadyUI contains more than 200 blocks and designs for agencies, developers, startups, and more. Everything is production-ready using Bootstrap and Figma files. Choose from light or dark themes and search for a design that works for your project.

5 Tutorials

Creating Generative SVG Characters

Creating Generative SVG Characters is a marriage of JavaScript and SVG that creates fun characters derived from drawings. Using shapes and a little code, you can see how to draw smooth lines, create polygons, and add other shapes for a fun feel. There’s a full demo on Codepen.

5 Steps to Faster Web Fonts

5 Steps to Faster Web Fonts helps you remove some of the bulk from popular typography options. Iain Bean explains a set of methods you can deploy to ensure that load times are quick with some applicable code snippets. Here’s a preview: Tip 1 is to use the most modern file formats (WOFF2).

The Perfect Link

The Perfect Link walks you through some accessibility checks from the A11Y Collective for linking best practices. Some of the things you think are the “right way” may be challenged here. The information goes through everything from design to semantics and is wonderfully thorough. It’s a must-read.

Readsom

Readsom is a curated collection of newsletters and emails that you can read online or sign up for. Its catchphrase is to “discover content you’ll want to read.” It is a good way to find newsletters that interest you, including plenty of design and development options that you might not otherwise know about.

Famous First Websites

Famous First Websites isn’t a tutorial per se, but it does provide a good place to do some visual learning. See what your favorite websites looked like when they launched and the evolution of the designs.

Source

The post 22 Exciting New Tools For Designers, June 2021 first appeared on Webdesigner Depot.

Categories: Designing, Others Tags:

Securing Your Website With Subresource Integrity

June 14th, 2021 No comments

When you load a file from an external server, you’re trusting that the content you request is what you expect it to be. Since you don’t manage the server yourself, you’re relying on the security of yet another third party and increasing the attack surface. Trusting a third party is not inherently bad, but it should certainly be taken into consideration in the context of your website’s security.

A real-world example

This isn’t a purely theoretical danger. Ignoring potential security issues can and has already resulted in serious consequences. On June 4th, 2019, Malwarebytes announced their discovery of a malicious skimmer on the website NBA.com. Due to a compromised Amazon S3 bucket, attackers were able to alter a JavaScript library to steal credit card information from customers.

It’s not only JavaScript that’s worth worrying about, either. CSS is another resource capable of performing dangerous actions such as password stealing, and all it takes is a single compromised third-party server for disaster to strike. But they can provide invaluable services that we can’t simply go without, such as CDNs that reduce the total bandwidth usage of a site and serve files to the end-user much faster due to location-based caching. So it’s established that we need to sometimes rely on a host that we have no control over, but we also need to ensure that the content we receive from it is safe. What can we do?

Solution: Subresource Integrity (SRI)

SRI is a security policy that prevents the loading of resources that don’t match an expected hash. By doing this, if an attacker were to gain access to a file and modify its contents to contain malicious code, the file wouldn’t match the hash we were expecting and wouldn’t execute at all.

Doesn’t HTTPS do that already?

HTTPS is great for security and a must-have for any website, and while it does prevent similar problems (and much more), it only protects against tampering with data-in-transit. If a file were to be tampered with on the host itself, the malicious file would still be sent over HTTPS, doing nothing to prevent the attack.

How does hashing work?

A hashing function takes data of any size as input and returns data of a fixed size as output. Hashing functions would ideally have a uniform distribution. This means that for any input, x, the probability that the output, y, will be any specific possible value is similar to the probability of it being any other value within the range of outputs.

Here’s a metaphor:

Suppose you have a 6-sided die and a list of names. The names, in this case, would be the hash function’s “input” and the number rolled would be the function’s “output.” For each name in the list, you’ll roll the die and keep track of which number each name corresponds to, by writing the number next to the name. If a name is used as input more than once, its corresponding output will always be what it was the first time. For the first name, Alice, you roll 4. For the next, John, you roll 6. Then for Bob, Mary, William, Susan, and Joseph, you get 2, 2, 5, 1, and 1, respectively. If you use “John” as input again, the output will once again be 6. This metaphor describes how hash functions work in essence.


Name (input) Number rolled (output)
Alice 4
John 6
Bob 2
Mary 2
William 5
Susan 1
Joseph 1

You may have noticed that, for example, Bob and Mary have the same output. For hashing functions, this is called a “collision.” For our example scenario, it inevitably happens. Since we have seven names as inputs and only six possible outputs, we’re guaranteed at least one collision.

A notable difference between this example and a hash function in practice is that practical hash functions are typically deterministic, meaning they don’t make use of randomness like our example does. Rather, they predictably map a given input to the same output every time, while still spreading outputs roughly evenly across the range of possible values.

SRI uses a family of hashing functions called the secure hash algorithm (SHA). This is a family of cryptographic hash functions that includes 160, 224, 256, 384, and 512-bit variants. A cryptographic hash function is a more specific kind of hash function with the properties of being effectively impossible to reverse to find the original input (without already having the corresponding input or brute-forcing), collision-resistant, and designed so a small change in the input alters the entire output. SRI supports the 256, 384, and 512-bit variants of the SHA family.

Here’s an example with SHA-256:

For example, the output for hello is:

2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824

And the output for hell0 (with a zero instead of an O) is:

bdeddd433637173928fe7202b663157c9e1881c3e4da1d45e8fff8fb944a4868

You’ll notice that the slightest change in the input will produce an output that is completely different. This is one of the properties of cryptographic hashes listed earlier.
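
If you’d like to verify those outputs yourself, Node.js ships with a crypto module that can compute them; here’s a quick sketch:

// verify-hash.js
const crypto = require('crypto');

console.log(crypto.createHash('sha256').update('hello').digest('hex'));
// 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
console.log(crypto.createHash('sha256').update('hell0').digest('hex'));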

The format you’ll see most frequently for hashes is hexadecimal, which consists of all the decimal digits (0-9) and the letters A through F. One of the benefits of this format is that every two characters represent a byte, and the evenness can be useful for purposes such as color formatting, where a byte represents each color channel. This means a color without an alpha channel can be represented with only six characters (e.g., red = ff0000).

This space efficiency is also why we use hashing instead of comparing the entirety of a file to the data we’re expecting each time. While 256 bits cannot represent all of the data in a file that is greater than 256 bits without compression, the collision resistance of SHA-256 (and 384, 512) ensures that it’s virtually impossible to find two hashes for differing inputs that match. And as for SHA-1, it’s no longer secure, as a collision has been found.

Interestingly, the appeal of compactness is likely one of the reasons that SRI hashes don’t use the hexadecimal format, and instead use base64. This may seem like a strange decision at first, but when we take into consideration the fact that these hashes will be included in the code and that base64 is capable of conveying the same amount of data as hexadecimal while being 33% shorter, it makes sense. A single character of base64 can be in 64 different states, which is 6 bits worth of data, whereas hex can only represent 16 states, or 4 bits worth of data. So if, for example, we want to represent 32 bytes of data (256 bits), we would need 64 characters in hex, but only 44 characters in base64. When we use longer hashes, such as sha384/512, base64 saves a great deal of space.

Why does hashing work for SRI?

So let’s imagine there was a JavaScript file hosted on a third-party server that we included in our webpage and we had subresource integrity enabled for it. Now, if an attacker were to modify the file’s data with malicious code, the hash of it would no longer match the expected hash and the file would not execute. Recall that any small change in a file completely changes its corresponding SHA hash, and that hash collisions with SHA-256 and higher are, at the time of this writing, virtually impossible.

Our first SRI hash

So, there are a few methods you can use to compute the SRI hash of a file. One way (and perhaps the simplest) is to use srihash.org, but if you prefer a more programmatic way, you can use:

sha384sum [filename here] | head -c 96 | xxd -r -p | base64
  • sha384sum Computes the SHA-384 hash of a file
  • head -c 96 Trims all but the first 96 characters of the string that is piped into it
    • -c 96 Indicates to trim all but the first 96 characters. We use 96, as it’s the character length of an SHA-384 hash in hexadecimal format
  • xxd -r -p Takes hex input piped into it and converts it into binary
    • -r Tells xxd to receive hex and convert it to binary
    • -p Removes the extra output formatting
  • base64 Simply converts the binary output from xxd to base64

If you decide to use this method, check the table below to see the lengths of each SHA hash.

Hash algorithm Bits Bytes Hex Characters
SHA-256 256 32 64
SHA-384 384 48 96
SHA-512 512 64 128

For the head -c [x] command, x will be the number of hex characters for the corresponding algorithm.

MDN also mentions a command to compute the SRI hash:

shasum -b -a 384 FILENAME.js | awk '{ print $1 }' | xxd -r -p | base64

awk '{print $1}' Finds the first section of a string (separated by tab or space) and passes it to xxd. $1 represents the first segment of the string passed into it.
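
If you have OpenSSL installed on macOS or Linux, a similar one-liner avoids the hex-to-binary round trip entirely (assuming openssl is on your PATH):

openssl dgst -sha384 -binary FILENAME.js | openssl base64 -A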

And if you’re running Windows:

@echo off
set bits=384
openssl dgst -sha%bits% -binary %1% | openssl base64 -A > tmp
set /p a= < tmp
del tmp
echo sha%bits%-%a%
pause
  • @echo off prevents the commands that are running from being displayed. This is particularly helpful for ensuring the terminal doesn’t become cluttered.
  • set bits=384 sets a variable called bits to 384. This will be used a bit later in the script.
  • openssl dgst -sha%bits% -binary %1% | openssl base64 -A > tmp is more complex, so let’s break it down into parts.
    • openssl dgst computes a digest of an input file.
    • -sha%bits% uses the variable, bits, and combines it with the rest of the string to be one of the possible flag values, sha256, sha384, or sha512.
    • -binary outputs the hash as binary data instead of a string format, such as hexadecimal.
    • %1% is the first argument passed to the script when it’s run.
    • The first part of the command hashes the file provided as an argument to the script.
    • | openssl base64 -A > tmp converts the binary output piping through it into base64 and writes it to a file called tmp. -A outputs the base64 onto a single line.
    • set /p a= <tmp stores the contents of the file, tmp, in a variable, a.
    • del tmp deletes the tmp file.
    • echo sha%bits%-%a% will print out the type of SHA hash type, along with the base64 of the input file.
    • pause Prevents the terminal from closing.

SRI in action

Now that we understand how hashing and SRI hashes work, let’s try a concrete example. We’ll create two files:

// file1.js
alert('Hello, world!');

and:

// file2.js
alert('Hi, world!');

Then we’ll compute the SHA-384 SRI hashes for both:

Filename SHA-384 hash (base64)
file1.js 3frxDlOvLa6GGEUwMh9AowcepHRx/rwFT9VW9yL1wv/OcerR39FEfAUHZRrqaOy2
file2.js htr1LmWx3PQJIPw5bM9kZKq/FL0jMBuJDxhwdsMHULKybAG5dGURvJIXR9bh5xJ9

Then, let’s create a file named index.html:

<!DOCTYPE html>
<html>
  <head>
    <script type="text/javascript" src="./file1.js" integrity="sha384-3frxDlOvLa6GGEUwMh9AowcepHRx/rwFT9VW9yL1wv/OcerR39FEfAUHZRrqaOy2" crossorigin="anonymous"></script>
    <script type="text/javascript" src="./file2.js" integrity="sha384-htr1LmWx3PQJIPw5bM9kZKq/FL0jMBuJDxhwdsMHULKybAG5dGURvJIXR9bh5xJ9" crossorigin="anonymous"></script>
  </head>
</html>

Place all of these files in the same folder and start a server within that folder (for example, run npx http-server inside the folder containing the files and then open one of the addresses provided by http-server or the server of your choice, such as 127.0.0.1:8080). You should get two alert dialog boxes. The first should say “Hello, world!” and the second, “Hi, world!”

If you modify the contents of the scripts, you’ll notice that they no longer execute. This is subresource integrity in effect. The browser notices that the hash of the requested file does not match the expected hash and refuses to run it.

We can also include multiple hashes for a resource and the strongest hash will be chosen, like so:

<!DOCTYPE html>
<html>
  <head>
    <script
      type="text/javascript"
      src="./file1.js"
      integrity="sha384-3frxDlOvLa6GGEUwMh9AowcepHRx/rwFT9VW9yL1wv/OcerR39FEfAUHZRrqaOy2 sha512-cJpKabWnJLEvkNDvnvX+QcR4ucmGlZjCdkAG4b9n+M16Hd/3MWIhFhJ70RNo7cbzSBcLm1MIMItw
9qks2AU+Tg=="
       crossorigin="anonymous"></script>
    <script 
      type="text/javascript"
      src="./file2.js"
      integrity="sha384-htr1LmWx3PQJIPw5bM9kZKq/FL0jMBuJDxhwdsMHULKybAG5dGURvJIXR9bh5xJ9 sha512-+4U2wdug3VfnGpLL9xju90A+kVEaK2bxCxnyZnd2PYskyl/BTpHnao1FrMONThoWxLmguExF7vNV
WR3BRSzb4g=="
      crossorigin="anonymous"></script>
  </head>
</html>

The browser will choose the hash that is considered to be the strongest and check the file’s hash against it.

Why is there a “crossorigin” attribute?

The crossorigin attribute tells the browser when to send the user credentials with the request for the resource. There are two options to choose from:

Value (crossorigin=) Description
anonymous The request will have its credentials mode set to same-origin and its mode set to cors.
use-credentials The request will have its credentials mode set to include and its mode set to cors.

Request credentials modes mentioned

Credentials mode Description
same-origin Credentials will be sent with requests sent to same-origin domains and credentials that are sent from same-origin domains will be used.
include Credentials will be sent to cross-origin domains as well and credentials sent from cross-origin domains will be used.

Request modes mentioned

Request mode Description
cors The request will be a CORS request, which will require the server to have a defined CORS policy. If not, the request will throw an error.

Why is the “crossorigin” attribute required with subresource integrity?

By default, scripts and stylesheets can be loaded cross-origin, and since subresource integrity prevents the loading of a file if the hash of the loaded resource doesn’t match the expected hash, an attacker could load cross-origin resources en masse and test if the loading fails with specific hashes, thereby inferring information about a user that they otherwise wouldn’t be able to.

When you include the crossorigin attribute, the cross-origin domain must choose to allow requests from the origin the request is being sent from in order for the request to be successful. This prevents cross-origin attacks with subresource integrity.
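
In practice, that means the third-party host has to opt in on its end. The exact setup depends on the server, but the response for the resource needs a CORS header along these lines (a sketch; public CDNs typically send a wildcard):

Access-Control-Allow-Origin: *

A more locked-down host might instead send Access-Control-Allow-Origin: https://your-site.example, allowing only your origin to read the resource.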

Using subresource integrity with webpack

It probably sounds like a lot of work to recalculate the SRI hashes of each file every time they are updated, but luckily, there’s a way to automate it. Let’s walk through an example together. You’ll need a few things before you get started.

Node.js and npm

Node.js is a JavaScript runtime that, along with npm (its package manager), will allow us to use webpack. To install it, visit the Node.js website and choose the download that corresponds to your operating system.

Setting up the project

Create a folder and give it any name with mkdir [name of folder]. Then type cd [name of folder] to navigate into it. Now we need to set up the directory as a Node project, so type npm init. It will ask you a few questions, but you can press Enter to skip them since they’re not relevant to our example.

webpack

webpack is a library that allows you to automatically combine your files into one or more bundles. With webpack, we will no longer need to manually update the hashes. Instead, webpack will inject the resources into the HTML with integrity and crossorigin attributes included.

Installing webpack

You’ll need to install webpack and webpack-cli:

npm i --save-dev webpack webpack-cli 

The difference between the two is that webpack contains the core functionalities whereas webpack-cli is for the command line interface.

We’ll edit our package.json to add a scripts section like so:

{
  //... rest of package.json ...,
  "scripts": {
    "dev": "webpack --mode=development"
  }
  //... rest of package.json ...,
}

This enables us to run npm run dev and build our bundle.

Setting up webpack configuration

Next, let’s set up the webpack configuration. This is necessary to tell webpack what files it needs to deal with and how.

First, we’ll need to install a few packages: html-webpack-plugin, webpack-subresource-integrity, style-loader, and css-loader:

npm i --save-dev html-webpack-plugin webpack-subresource-integrity style-loader css-loader
Package name Description
html-webpack-plugin Creates an HTML file that resources can be injected into
webpack-subresource-integrity Computes and inserts subresource integrity information into resources such as <script> and <link> tags
style-loader Applies the CSS styles that we import
css-loader Enables us to import css files into our JavaScript

Setting up the configuration (this goes in a webpack.config.js file at the root of the project):

const path              = require('path'),
      HTMLWebpackPlugin = require('html-webpack-plugin'),
      SriPlugin         = require('webpack-subresource-integrity');

module.exports = {
  output: {
    // The output file's name
    filename: 'bundle.js',
    // Where the output file will be placed. Resolves to 
    // the "dist" folder in the directory of the project
    path: path.resolve(__dirname, 'dist'),
    // Configures the "crossorigin" attribute for resources 
    // with subresource integrity injected
    crossOriginLoading: 'anonymous'
  },
  // Used for configuring how various modules (files that 
  // are imported) will be treated
  module: {
    // Configures how specific module types are handled
    rules: [
      {
        // Regular expression to test for the file extension.
        // These loaders will only be activated if they match
        // this expression.
        test: /\.css$/,
        // An array of loaders that will be applied to the file
        use: ['style-loader', 'css-loader'],
        // Prevents the accidental loading of files within the
        // "node_modules" folder
        exclude: /node_modules/
      }
    ]
  },
  // webpack plugins alter the function of webpack itself
  plugins: [
    // Plugin that will inject integrity hashes into index.html
    new SriPlugin({
      // The hash functions used (e.g. 
      // <script integrity="sha256- ... sha384- ..." ...
      hashFuncNames: ['sha384']
    }),
    // Creates an HTML file along with the bundle. We will
    // inject the subresource integrity information into 
    // the resources using webpack-subresource-integrity
    new HTMLWebpackPlugin({
      // The file that will be injected into. We can use 
      // EJS templating within this file, too
      template: path.resolve(__dirname, 'src', 'index.ejs'),
      // Whether or not to insert scripts and other resources
      // into the file dynamically. For our example, we will
      // enable this.
      inject: true
    })
  ]
};

Creating the template

We need to create a template to tell webpack what to inject the bundle and subresource integrity information into. Create a file named index.ejs inside a src folder (that’s where the configuration above expects to find it):

<!DOCTYPE html>
<html>
  <body></body>
</html>

Now, create an index.js in the src folder (webpack’s default entry point) with the following script:

// Imports the CSS stylesheet
import './styles.css'
alert('Hello, world!');
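
The import above assumes a stylesheet sits alongside index.js. The article doesn’t show its contents, so any small file will do; here’s a hypothetical src/styles.css:

/* src/styles.css */
body {
  background: #f5f5f5;
  font-family: sans-serif;
}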

Building the bundle

Type npm run dev in the terminal. You’ll notice that a folder called dist is created, and inside of it, a file called index.html that looks something like this:

<!DOCTYPE HTML>
<html><head><script defer src="bundle.js" integrity="sha384-lb0VJ1IzJzMv+OKd0vumouFgE6NzonQeVbRaTYjum4ql38TdmOYfyJ0czw/X1a9b" crossorigin="anonymous">
</script></head>
  <body>
  </body>
</html>

The CSS will be included as part of the bundle.js file.

This will not work for files loaded from external servers, nor should it, as cross-origin files that need to constantly update would break with subresource integrity enabled.

Thanks for reading!

That’s all for this one. Subresource integrity is a simple and effective addition to ensure you’re loading only what you expect and protecting your users; and remember, security is more than just one solution, so always be on the lookout for more ways to keep your website safe.


The post Securing Your Website With Subresource Integrity appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Categories: Designing, Others Tags:

Popular Design News of the Week: June 7 2021 – June 13, 2021

June 13th, 2021 No comments

Every day design fans submit incredible industry stories to our sister-site, Webdesigner News. Our colleagues sift through it, selecting the very best stories from the design, UX, tech, and development worlds and posting them live on the site.

The best way to keep up with the most important stories for web professionals is to subscribe to Webdesigner News or check out the site regularly. However, in case you missed a day this week, here’s a handy compilation of the top curated stories from the last seven days. Enjoy!

Patttterns

What I Learned by Relearning HTML

All You Need is 5 Fonts

What Is Glassmorphism? (With Examples)

Half the Internet Went Down Today — Here’s What Happened

3 Essential Design Trends, June 2021

50 Free Cursive Handwritten Fonts to Spice Up Your Design

The 4 Biggest UI Trends Shaping Apple’s Future

Google Changes Core Web Vitals Metrics

Storytelling In Design – Top Trends For 2021

Source

The post Popular Design News of the Week: June 7 2021 – June 13, 2021 first appeared on Webdesigner Depot.

Categories: Designing, Others Tags: