
Comparing Static Site Generator Build Times

October 28th, 2020

There are so many static site generators (SSGs). It’s overwhelming trying to decide where to start. While an abundance of helpful articles can help you wade through the (popular) options, they don’t magically make the decision easy.

I’ve been on a quest to help make that decision easier. A colleague of mine built a static site generator evaluation cheatsheet. It provides a really nice snapshot across numerous popular SSG choices. What’s missing is how they actually perform in action.

One feature every static site generator has in common is that it takes input data, runs it through a templating engine, and outputs HTML files. We typically refer to this process as The Build.

There’s too much nuance, context, and variability needed to compare how various SSGs perform during the build process to display on a spreadsheet — and thus begins our test to benchmark build times against popular static site generators.

This isn’t just to determine which SSG is fastest. Hugo already has that reputation. I mean, they say it on their website — The world’s fastest framework for building websites — so it must be true!

This is an in-depth comparison of build times across multiple popular SSGs and, more importantly, an analysis of why those build times look the way they do. Blindly choosing the fastest or discrediting the slowest would be a mistake. Let’s find out why.

The tests

The testing process is designed to start simple — with just a few popular SSGs and a simple data format. A foundation on which to expand to more SSGs and more nuanced data. For today, the test includes six popular SSG choices: Eleventy, Gatsby, Hugo, Jekyll, Next, and Nuxt.

Each test used the following approach and conditions:

  • The data source for each build is a set of Markdown files, each with a randomly-generated title (as frontmatter) and body (containing three paragraphs of content). See the sketch after this list.
  • The content contains no images.
  • Tests are run in series on a single machine, making the actual values less relevant than the relative comparison among the lot.
  • The output is plain text on an HTML page, run through the default starter, following each SSG’s respective guide on getting started.
  • Each test is a cold run. Caches are cleared and Markdown files are regenerated for every test.
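
To make the data source concrete, here is a minimal sketch of how fixtures like these could be generated. This is illustrative TypeScript for Node, not the actual benchmark code; the file names, word list, and paragraph lengths are all assumptions.

```ts
// make-fixtures.ts — a hypothetical fixture generator for the tests above.
import { mkdirSync, writeFileSync } from "node:fs";
import { join } from "node:path";

// A tiny word pool for random content (an assumption; any generator works).
const WORDS = ["static", "site", "build", "markdown", "render", "template", "fast"];

function randomWords(n: number): string {
  return Array.from({ length: n }, () => WORDS[Math.floor(Math.random() * WORDS.length)]).join(" ");
}

// Write one Markdown file with a random frontmatter title and three paragraphs.
function makeFile(dir: string, i: number): void {
  const title = randomWords(5);
  const body = Array.from({ length: 3 }, () => randomWords(60)).join("\n\n");
  writeFileSync(join(dir, `post-${i}.md`), `---\ntitle: "${title}"\n---\n\n${body}\n`);
}

const count = Number(process.argv[2] ?? 1024); // how many files to generate
mkdirSync("content", { recursive: true });
for (let i = 0; i < count; i++) makeFile("content", i);
```

Regenerating the files before every run, rather than reusing them, keeps each test honest about cache behavior.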

These tests are considered benchmark tests. They are using basic Markdown files and outputting unstyled HTML into the built output.

In other words, the output is technically a website that could be deployed to production, though it’s not really a real-world scenario. Instead, this provides a baseline comparison among these frameworks. The choices you make as a developer using one of these frameworks will adjust the build times in various ways (usually by slowing it down).

For example, one way in which this doesn’t represent the real world is that we’re testing cold builds. In the real world, if you have 10,000 Markdown files as your data source and are using Gatsby, you’re going to make use of Gatsby’s cache, which will greatly reduce the build times (by as much as half).

The same can be said for incremental builds, which are related to warm versus cold runs in that they only rebuild the files that changed. We’re not testing the incremental approach in these tests (at this time).
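
For reference, a single cold run in these tests amounts to something like the sketch below. The build command and cache directories here assume a Gatsby starter (Gatsby caches in .cache and writes output to public); the helper itself is hypothetical, not the benchmark’s actual harness.

```ts
// time-build.ts — a rough sketch of timing one cold run.
import { execSync } from "node:child_process";
import { rmSync } from "node:fs";

function coldBuild(cmd: string, cacheDirs: string[]): number {
  // Cold run: clear caches and previous output before building.
  for (const dir of cacheDirs) rmSync(dir, { recursive: true, force: true });
  const start = process.hrtime.bigint();
  execSync(cmd, { stdio: "inherit" }); // run the SSG's build
  return Number(process.hrtime.bigint() - start) / 1e9; // seconds
}

// Example invocation for a Gatsby starter.
const seconds = coldBuild("npx gatsby build", [".cache", "public"]);
console.log(`Cold build took ${seconds.toFixed(2)}s`);
```

Skipping the rmSync step would give you a warm run instead, which is exactly what Gatsby’s cache speeds up.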

The two tiers of static site generators

Before we get to the hypothesis and results, let’s first consider that there are really two tiers of static site generators. Let’s call them basic and advanced.

  • Basic generators (which are not basic under the hood) are essentially a command-line interface (CLI) that takes in data and outputs HTML, and can often be extended to process assets (which we’re not doing here).
  • Advanced generators offer something in addition to outputting a static site, such as server-side rendering, serverless functions, and framework integration. They tend to be configured to be more dynamic right out of the box.

I intentionally chose three of each type of generator in this test. Falling into the basic bucket would be Eleventy, Hugo, and Jekyll. The other three are based on a front-end framework and ship with various amounts of tooling. Gatsby and Next are built on React, while Nuxt is built atop Vue.

Basic generators    Advanced generators
Eleventy            Gatsby
Hugo                Next
Jekyll              Nuxt

My hypothesis

Let’s apply the scientific method to this approach because science is fun (and useful)!

My hypothesis is that if an SSG is advanced, then it will perform slower than a basic SSG. I believe the results will reflect that because advanced SSGs have more overhead than basic SSGs. Thus, it’s likely that we’re going to see both groups of generators — basic and advanced — bundled together in the results, with basic generators moving significantly quicker.

Let me expand on that hypothesis a bit.

Linear(ish) and fast

Hugo and Eleventy will fly with smaller datasets. They are (relatively) simple processes in Go and Node.js, respectively, and their build output will reflect that. While both SSGs will slow down as the number of files grows, I expect them to remain at the top of the class, though Eleventy may be a little less linear at scale, simply because Go tends to be more performant than Node.

Slow, then fast, but still slow

The advanced, or framework-bound, SSGs will start out looking slow. I suspect a single-file test will show a significant difference — milliseconds for the basic ones, compared to several seconds for Gatsby, Next, and Nuxt.

The framework-based SSGs are each built using webpack, which brings a significant amount of overhead along with it, regardless of the amount of content being processed. That’s the baggage we sign up for in using those tools (more on this later).

But, as we add thousands of files, I suspect we’ll see the gap between the buckets close, though the advanced SSG group will stay farther behind by some significant amount.

In the advanced SSG group, I expect Gatsby to be the fastest, only because it doesn’t have a server-side component to worry about — but that’s just a gut feeling. Next and Nuxt may have optimized this to the point where, if we’re not using that feature, it won’t affect build times. And I suspect Nuxt will beat out Next, only because there is a little less overhead with Vue, compared to React.

Jekyll: The odd child

Ruby is infamously slow. It’s gotten more performant over time, but I don’t expect it to scale with Node, and certainly not with Go. And yet, at the same time, it doesn’t have the baggage of a framework.

At first, I think we’ll see Jekyll as pretty speedy, perhaps even indistinguishable from Eleventy. But as we get to the thousands of files, the performance will take a hit. My gut feeling is that there may exist a point at which Jekyll becomes the slowest of all six. We’ll push up to the 100,000 mark to see for sure.

The results are in!

The code that powers these tests is on GitHub. There’s also a site that shows the relative results.

After many iterations of building out a foundation on which these tests could be run, I ended up with a series of 10 runs in three different datasets:

  • Base: A single file, to compare the base build times
  • Small sites: From 1 to 1,024 files, doubling each time (to make it easier to determine whether the SSGs scaled linearly)
  • Large sites: From 1,000 to 64,000 files, doubling on each run. I originally wanted to go up to 128,000 files, but hit some bottlenecks with a few of the frameworks. 64,000 ended up being enough to produce an idea of how the players would scale with even larger sites. (A sketch of the run loop follows this list.)
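
Driving those series is straightforward once fixtures and timing are in place. Here’s a sketch reusing the hypothetical scripts from the earlier examples (tsx as the runner is also an assumption):

```ts
// run-matrix.ts — sketch of the small-site series: 1 to 1,024 files, doubling.
import { execSync } from "node:child_process";

const sizes: number[] = [];
for (let n = 1; n <= 1024; n *= 2) sizes.push(n); // 1, 2, 4, ..., 1024

for (const n of sizes) {
  execSync(`npx tsx make-fixtures.ts ${n}`, { stdio: "inherit" }); // fresh Markdown
  execSync("npx tsx time-build.ts", { stdio: "inherit" });         // one cold run
}
```

The large-site series works the same way, just starting at 1,000 files and stopping at 64,000.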


Summarizing the results

A few results were surprising to me, while others were expected. Here are the high-level points:

  • As expected, Hugo was the fastest, regardless of size. What I didn’t expect is that it wasn’t even close to any other generator, even at base builds (nor was it linear, but more on that below).
  • The basic and advanced groups of SSGs are quite obvious when looking at the results for small sites. That was expected, but it was surprising to see that Next was faster than Jekyll at 32,000 files, and faster than both Eleventy and Jekyll at 64,000 files. Also surprising was that Jekyll performed faster than Eleventy at 64,000 files.
  • None of the SSGs scaled linearly. Next was the closest. Hugo has the appearance of being linear, but only because it’s so much faster than the rest.
  • I figured Gatsby would be the fastest among the advanced frameworks, and suspected it would be the one to come closest to the basic group. But Gatsby turned out to be the slowest, producing the most dramatic curve.
  • While it wasn’t specifically mentioned in the hypothesis, the scale of the differences was larger than I would have imagined. At one file, Hugo was approximately 170 times faster than Gatsby. At 64,000 files, the gap narrowed to about 25 times. That means that, while Hugo remains the fastest, it actually has the most dramatic exponential growth shape among the lot. It only looks linear because of the scale of the chart.

What does it all mean?

When I shared my results with the creators and maintainers of these SSGs, I generally received the same message. To paraphrase:

The generators that take more time to build do so because they are doing more. They are bringing more to the table for developers to work with, whereas the faster tools (i.e. the “basic” ones) focus their efforts largely on converting templates into HTML files.

I agree.

To sum it up: Scaling Jamstack sites is hard.

The challenges that will present themselves to you, Developer, as you scale a site will vary depending on the site you’re trying to build. That data isn’t captured here because it can’t be — every project is unique in some way.

What it really comes down to is your level of tolerance for waiting in exchange for developer experience.

For example, if you’re going to build a large, image-heavy site with Gatsby, you’re going to pay for it with build times, but you’re also given an immense network of plugins and a foundation on which to build a solid, organized, component-based website. Do the same with Jekyll, and it’s going to take a lot more effort to stay organized and efficient throughout the process, though your builds may run faster.

At work, I typically build sites with Gatsby (or Next, depending on the level of dynamic interactivity required). We’ve worked with the Gatsby framework to build a core on which we can rapidly build highly-customized, image-rich websites, packed with an abundance of components. Our builds become slower as the sites scale, but that’s when we get creative: implementing micro front-ends, offloading image processing, adding content previews, and many other optimizations.

On the side, I tend to prefer working with Eleventy. It’s usually just me writing code, and my needs are much simpler. (I like to think of myself as a good client for myself.) I feel I have more control over the output files, which makes it easier for me to get perfect scores on client-side performance, and that’s important to me.

In the end, this isn’t only about what is fast or slow. It’s about what works best for you and how long you’re willing to wait.

Wrapping up

This is just the beginning! The goal of this effort was to create a foundation on which we can, together, benchmark relative build times across popular static site generators.

What ideas do you have? What holes can you poke in the process? What can we do to tighten up these tests? How can we make them more like real-world scenarios? Should we offload the processing to a dedicated machine?

These are the questions I’d love for you to help me answer. Let’s talk about it.

