Archive for July, 2021

Demystifying styled-components

July 27th, 2021

 Joshua Comeau digs into how styled-components works by re-building the basics. A fun and useful journey.

styled-components seems like the biggest player in the CSS-in-React market. Despite being in that world, I haven’t yet been fully compelled by it. I’m a big fan of the basics: scoped styles by way of unique class names. I also like that it works with hot module reloading as it all happens in JavaScript. But I get those through css-modules, along with the file separation and Sass support css-modules offers. There are a few things I’m starting to come around on though (a little):

  • Even with css-modules, you still have to think of names. Even if it’s just like .root or whatever. With styled-components you attach the styles right to the component and don’t really name anything.
  • With css-modules, you’re applying the styles directly to an HTML element only. With styled-components you can apply the styles to custom components and it will slap the styles on by way of spreading props later.
  • Because the styles are literally in the JavaScript files, you get JavaScript stuff you can use—ternaries, prop access, fancy math, etc. (see the sketch just below).
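
For a concrete picture of that last point, here is a minimal sketch (the Button component is made up, but the tagged-template API is styled-components’ own):

import styled from 'styled-components';

// Styles attach straight to the component; there is no class name to invent.
// Props and ternaries work right inside the CSS.
const Button = styled.button`
  padding: 8px 16px;
  background: ${(props) => (props.primary ? 'palevioletred' : 'white')};
  color: ${(props) => (props.primary ? 'white' : 'palevioletred')};
`;

// Usage in JSX: <Button primary>Save</Button>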


20 Best New Websites, July 2021

July 26th, 2021

This month we have a variety pack for you. There are all sorts, from the most conservative layouts and structures to more experimental navigation. And we see how even the most traditional and functional websites can connect with users through mood-enhancing color schemes, clever font choices, and illustrations.

Non-functional (that is, purely decorative) details can really improve the user’s reaction to a site when applied sparingly. Constant movement gets tiring, and adding too many elements can feel fussy and unfocused. Getting the balance right so that things are kept simple but not bare and boring can be difficult, but it is a skill worth mastering.

Enjoy!

Laboratorio Permanente

The site for this multi-disciplinary architecture studio makes very nice use of a scrolling concertina effect.

Sometimes Always 

Wine sellers Sometimes Always have gone for a retro vibe with their color scheme and type.

Empty State 

Split-screen contraflow scrolling and the pop of bright blue against neutrals give Empty State’s home page impact, while the clean layout and clear navigation make the rest of the site pleasant to use.

Kibana 

Quirky display type and illustrations give this single-page site for Kibana lodge a friendly, welcoming feel.

For Good Design Lab

For Good Design Lab’s site is simple and bold; it makes a strong statement. And its design works even better on mobile.

Levitate

Levitate is a sports brand that makes an affordable running blade. Their site has a clean minimal feel, contrasting spaciousness with a sense of movement in the photography used.

Sunst-studio 

Multidisciplinary creative agency Sunst Studio keeps things simple but impactful with oversized black type on white and smaller images.

Made By Nacho 

Strong colors and illustrations, with the occasional subtle animation, work well for Made By Nacho.

Flexe 

Flexe are logistics experts; logistics is not the first industry that springs to mind when we think of great website potential, but this site is very appealing.

Friesday 

This site for Friesday restaurant/takeaway is lively, colorful, and fun.

Go out{side}

This offshoot site from B.A.S.S. (Bass Anglers Sportsman Society) aims to promote angling as part of an outdoor lifestyle. Earthy colors and a lively visual rhythm create energy.

Protos Car Rentals

Illustration brings a touch of personality to this site for Protos Car Rentals, while attention to detail pays off in the usability.

21wallpaper 

21wallpaper regularly offers downloadable screen savers from different illustrators (7 artists, 3 designs each), which also acts as a showcase for the artists.

Elon’s Toilet 

This is a brilliant and vital piece of marketing that succeeds in presenting a serious subject in a lighthearted and engaging way. You’ll have to find out for yourself what it’s about.

Pa’lais

A fresh feeling color scheme, a distinct lack of straight lines, and a scattering of illustration work well for the site for this vegan food producer.

Benjamin Righetti

Using a primarily visual medium to effectively promote an artist whose medium is sound is not always easy. This website for organist Benjamin Righetti does an outstanding job.

Wavemaker

Wavemaker is a creative media agency. Their site is strong and confident but with a playfulness that is engaging for the user.

Dpt.

This site for Dpt. throws out the usual menu concept and has its navigation arranged around the screen. It can be a risky strategy, but here it works well.

Plunt.co

Plúnt sells mix-and-match plant and planter combinations. The site’s core functionality is the combination chooser, which is pleasingly reminiscent of an animal flipbook.

Kingdom & Sparrow

Branding agency Kingdom & Sparrow have gone for a fairly standard approach, but the little hand-drawn details here and there add just enough individuality without overdoing it.


How I Built a Cross-Platform Desktop Application with Svelte, Redis, and Rust

July 26th, 2021

At Cloudflare, we have a great product called Workers KV which is a key-value storage layer that replicates globally. It can handle millions of keys, each of which is accessible from within a Worker script at exceptionally low latencies, no matter where in the world a request is received. Workers KV is amazing — and so is its pricing, which includes a generous free tier.
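
For a sense of what that looks like, reading a KV value inside a Worker is a one-liner (a sketch assuming a namespace bound to the Worker as MY_KV):

// Service-worker syntax; MY_KV is a KV namespace binding (an assumed name)
addEventListener('fetch', (event) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  // The read resolves at the edge, close to wherever the request arrived
  const value = await MY_KV.get('some-key');
  return new Response(value || 'Not found', { status: value ? 200 : 404 });
}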

However, as a long-time user of the Cloudflare lineup, I have found one thing missing: local introspection. With thousands, and sometimes hundreds of thousands of keys in my applications, I’d often wish there was a way to query all my data, sort it, or just take a look to see what’s actually there.

Well, recently, I was lucky enough to join Cloudflare! Even more so, I joined just before the quarter’s “Quick Wins Week” — aka, their week-long hackathon. And given that I hadn’t been around long enough to accumulate a backlog (yet), you best believe I jumped on the opportunity to fulfill my own wish.

So, with the intro out of the way, let me tell you how I built Workers KV GUI, a cross-platform desktop application using Svelte, Redis, and Rust.

The front-end application

As a web developer, this was the familiar part. I’m tempted to call this the “easy part” but, given that you can use any and all HTML, CSS, and JavaScript frameworks, libraries, or patterns, choice paralysis can easily set in… which might be familiar, too. If you have a favorite front-end stack, great, use that! For this application, I chose to use Svelte because, for me, it certainly makes and keeps things easy.

Also, as web developers, we expect to bring all our tooling with us. You certainly can! Again, this phase of the project is no different than your typical web application development cycle. You can expect to run yarn dev (or some variant) as your main command and feel at home. Keeping with an “easy” theme, I’ve elected to use SvelteKit, which is Svelte’s official framework and toolkit for building applications. It includes an optimized build system, a great developer experience (including HMR!), a filesystem-based router, and all that Svelte itself has to offer.

As a framework, especially one that takes care of its own tooling, SvelteKit allowed me to purely think about my application and its requirements. In fact, as far as configuration is concerned, the only thing I had to do was tell SvelteKit that I wanted to build a single-page application (SPA) that only runs in the client. In other words, I had to explicitly opt out of SvelteKit’s assumption that I wanted a server, which is actually a fair assumption to make since most applications can benefit from server-side rendering. This was as easy as attaching the @sveltejs/adapter-static package, which is a configuration preset made exactly for this purpose. After installing, this was my entire configuration file:

// svelte.config.js
import preprocess from 'svelte-preprocess';
import adapter from '@sveltejs/adapter-static';

/** @type {import('@sveltejs/kit').Config} */
const config = {
  preprocess: preprocess(),

  kit: {
    adapter: adapter({
      fallback: 'index.html'
    }),
    files: {
      template: 'src/index.html'
    }
  },
};

export default config;

The index.html changes are a personal preference. SvelteKit uses app.html as a default base template, but old habits die hard.

It’s only been a few minutes, and my toolchain already knows it’s building a SPA, that there’s a router in place, and a development server is at the ready. Plus, TypeScript, PostCSS, and/or Sass support is there if I want it (and I do), thanks to svelte-preprocess. Ready to rumble!

The application needed two views:

  1. a screen to enter connection details (the default/welcome/home page)
  2. a screen to actually view your data

In the SvelteKit world, this translates to two “routes” and SvelteKit dictates that these should exist as src/routes/index.svelte for the home page and src/routes/viewer.svelte for the data viewer page. In a true web application, this second route would map to the /viewer URL. While this is still the case, I know that my desktop application won’t have a navigation bar, which means that the URL won’t be visible… which means that it doesn’t matter what I call this route, as long as it makes sense to me.

The contents of these files are mostly irrelevant, at least for this article. For those curious, the entire project is open source and if you’re looking for a Svelte or SvelteKit example, I welcome you to take a look. At the risk of sounding like a broken record, the point here is that I’m building a regular web app.

At this time, I’m just designing my views and throwing around fake, hard-coded data until I have something that seems to work. I hung out here for about two days, until everything looked nice and all interactivity (button clicks, form submissions, etc.) got fleshed out. I’d call this a “working” app, or a mockup.

Desktop application tooling

At this point, a fully functional SPA exists. It operates — and was developed — in a web browser. Perhaps counterintuitively, this makes it the perfect candidate to become a desktop application! But how?

You may have heard of Electron. It’s the most well-known tool for building cross-platform desktop applications with web technologies. There are a number of massively popular and successful applications built with it: Visual Studio Code, WhatsApp, Atom, and Slack, to name a few. It works by bundling your web assets with its own Chromium installation and its own Node.js runtime. In other words, when you’re installing an Electron-based application, it’s coming with an extra Chrome browser and an entire programming language (Node.js). These are embedded within the application contents and there’s no avoiding them, as these are dependencies for the application, guaranteeing that it runs consistently everywhere. As you might imagine, there’s a bit of a trade-off with this approach — applications are fairly massive (i.e. more than 100MB) and use lots of system resources to operate. In order to use the application, an entirely new/separate Chrome is running in the background — not quite the same as opening a new tab.

Luckily, there are a few alternatives — I evaluated Svelte NodeGui and Tauri. Both choices offered significant application size and utilization savings by relying on native renderers the operating system offers, instead of embedding a copy of Chrome to do the same work. NodeGui does this by relying on Qt, which is another Desktop/GUI application framework that compiles to native views. However, in order to do this, NodeGui requires some adjustments to your application code in order for it to translate your components into Qt components. While I’m sure this certainly would have worked, I wasn’t interested in this solution because I wanted to use exactly what I already knew, without requiring any adjustments to my Svelte files. By contrast, Tauri achieves its savings by wrapping the operating system’s native webviewer — for example, Cocoa/WebKit on macOS, gtk-webkit2 on Linux, and Webkit via Edge on Windows. Webviewers are effectively browsers, which Tauri uses because they already exist on your system, and this means that our applications can remain pure web development products.

With these savings, the bare minimum Tauri application is less than 4MB, with average applications weighing less than 20MB. In my testing, the bare minimum NodeGui application weighed about 16MB. A bare minimum Electron app is easily 120MB.

Needless to say, I went with Tauri. By following the Tauri Integration guide, I added the @tauri-apps/cli package to my devDependencies and initialized the project:

yarn add --dev @tauri-apps/cli
yarn tauri init

This creates a src-tauri directory alongside the src directory (where the Svelte application lives). This is where all Tauri-specific files live, which is nice for organization.

I had never built a Tauri application before, but after looking at its configuration documentation, I was able to keep most of the defaults — aside from items like the package.productName and windows.title values, of course. Really, the only changes I needed to make were to the build config, which had to align with SvelteKit for development and output information:

// src-tauri/tauri.conf.json
{
  "package": {
    "version": "0.0.0",
    "productName": "Workers KV"
  },
  "build": {
    "distDir": "../build",
    "devPath": "http://localhost:3000",
    "beforeDevCommand": "yarn svelte-kit dev",
    "beforeBuildCommand": "yarn svelte-kit build"
  },
  // ...
}

The distDir relates to where the built production-ready assets are located. This value is resolved from the tauri.conf.json file location, hence the ../ prefix.

The devPath is the URL to proxy during development. By default, SvelteKit spawns a dev server on port 3000 (configurable, of course). I had been visiting the localhost:3000 address in my browser during the first phase, so this is no different.

Finally, Tauri has its own dev and build commands. In order to avoid the hassle of juggling multiple commands or build scripts, Tauri provides the beforeDevCommand and beforeBuildCommand hooks which allow you to run any command before the tauri command runs. This is a subtle but strong convenience!

The SvelteKit CLI is accessed through the svelte-kit binary name. Writing yarn svelte-kit build, for example, tells yarn to fetch its local svelte-kit binary, which was installed via a devDependency, and then tells SvelteKit to run its build command.

With this in place, my root-level package.json contained the following scripts:

{
  "private": true,
  "type": "module",
  "scripts": {
    "dev": "tauri dev",
    "build": "tauri build",
    "prebuild": "premove build",
    "preview": "svelte-kit preview",
    "tauri": "tauri"
  },
  // ...
  "devDependencies": {
    "@sveltejs/adapter-static": "1.0.0-next.9",
    "@sveltejs/kit": "1.0.0-next.109",
    "@tauri-apps/api": "1.0.0-beta.1",
    "@tauri-apps/cli": "1.0.0-beta.2",
    "premove": "3.0.1",
    "svelte": "3.38.2",
    "svelte-preprocess": "4.7.3",
    "tslib": "2.2.0",
    "typescript": "4.2.4"
  }
}

After integration, my production command was still yarn build, which invokes tauri build to actually bundle the desktop application, but only after yarn svelte-kit build has completed successfully (via the beforeBuildCommand option). And my development command remained yarn dev which spawns the tauri dev and yarn svelte-kit dev commands to run in parallel. The development workflow is entirely within the Tauri application, which is now proxying localhost:3000, allowing me to still reap the benefits of an HMR development server.

Important: Tauri is still in beta at the time of this writing. That said, it feels very stable and well-planned. I have no affiliation with the project, but it seems like Tauri 1.0 may enter a stable release sooner rather than later. I found the Tauri Discord to be very active and helpful, including replies from the Tauri maintainers! They even entertained some of my noob Rust questions throughout the process. 🙂

Connecting to Redis

At this point, it’s Wednesday afternoon of Quick Wins week, and — to be honest — I’m starting to get nervous about finishing before the team presentation on Friday. Why? Because I’m already halfway through the week, and even though I have a good-looking SPA inside a working desktop application, it still doesn’t do anything. I’ve been looking at the same fake data all week.

You may be thinking that because I have access to a webview, I can use fetch() to make some authenticated REST API calls for the Workers KV data I want and dump it all into localStorage or an IndexedDB table… You’re 100% right! However, that’s not exactly what I had in mind for my desktop application’s use case.

Saving all the data into some kind of in-browser storage is totally viable, but it saves it locally to your machine. This means that if you have team members trying to do the same thing, everyone will have to fetch and save all the data on their own machines, too. Ideally, this Workers KV application should have the option to connect to and sync with an external database. That way, when working in team settings, everyone can tune into the same data cache to save time — and a couple bucks. This starts to matter when dealing with millions of keys which, as mentioned, is not uncommon with Workers KV.

Having thought about it for a bit, I decided to use Redis as my backing store because it also is a key-value store. This was great because Redis already treats keys as a first-class citizen and offers the sorting and filtering behaviors I wanted (aka, I can pass along the work instead of implementing it myself!). And then, of course, Redis is easy to install and run either locally or in a container, and there are many hosted-Redis-as-service providers out there if someone chooses to go that route.
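
As a quick illustration of the work being passed along, Redis can iterate and filter keys server-side; the pattern here is just an example:

# redis-cli: cursor-based iteration with server-side pattern filtering
SCAN 0 MATCH "user:*" COUNT 1000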

But, how do I connect to it? My app is basically a browser tab running Svelte, right? Yes — but also so much more than that.

You see, part of Electron’s success is that, yes, it guarantees a web app is presented well on every operating system, but it also brings along a Node.js runtime. As a web developer, this was a lot like including a back-end API directly inside my client. Basically the “…but it works on my machine” problem went away because all of the users were (unknowingly) running the exact same localhost setup. Through the Node.js layer, you could interact with the filesystem, run servers on multiple ports, or include a bunch of node_modules to — and I’m just spit-balling here — connect to a Redis instance. Powerful stuff.

We don’t lose this superpower because we’re using Tauri! It’s the same, but slightly different.

Instead of including a Node.js runtime, Tauri applications are built with Rust, a low-level systems language. This is how Tauri itself interacts with the operating system and “borrows” its native webviewer. All of the Tauri toolkit is compiled (via Rust), which allows the built application to remain small and efficient. However, this also means that we, the application developers, can include any additional crates — the “npm module” equivalent — into the built application. And, of course, there’s an aptly named redis crate that, as a Redis client driver, allows the Workers KV GUI to connect to any Redis instance.

In Rust, the Cargo.toml file is similar to our package.json file. This is where dependencies and metadata are defined. In a Tauri setting, this is located at src-tauri/Cargo.toml because, again, everything related to Tauri is found in this directory. Cargo also has a concept of “feature flags” defined at the dependency level. (The closest analogy I can come up with is using npm to access a module’s internals or import a named submodule, though it’s not quite the same still since, in Rust, feature flags affect how the package is built.)

# src-tauri/Cargo.toml
[dependencies]
serde_json = "1.0"
serde = { version = "1.0", features = ["derive"] }
tauri = { version = "1.0.0-beta.1", features = ["api-all", "menu"] }
redis = { version = "0.20", features = ["tokio-native-tls-comp"] }

The above defines the redis crate as a dependency and opts into the "tokio-native-tls-comp" feature, which the documentation says is required for TLS support.

Okay, so I finally had everything I needed. Before Wednesday ended, I had to get my Svelte to talk to my Redis. After poking around a bit, I noticed that all the important stuff seemed to be happening inside the src-tauri/main.rs file. I took note of the #[command] macro, which I knew I had seen before in a Tauri example earlier in the day, so I studied (read: copied) the example file in sections, seeing which errors came and went according to the Rust compiler.

Eventually, the Tauri application was able to run again, and I learned that the #[command] macro is wrapping the underlying function in a way so that it can receive “context” values, if you choose to use them, and receive pre-parsed argument values. Also, as a language, Rust does a lot of type casting. For example:

use tauri::{command};

#[command]
fn greet(name: String, age: u8) {
  println!("Hello {}, {} year-old human!", name, age);
}

This creates a greet command which, when run, expects two arguments: name and age. When defined, the name value is a String and age is a u8 data type — aka, an integer. However, if either is missing, Tauri throws an error because the command definition does not say anything is allowed to be optional.

To actually connect a Tauri command to the application, it has to be defined as part of the tauri::Builder composition, found within the main function.

use tauri::{command};

#[command]
fn greet(name: String, age: u8) {
  println!("Hello {}, {} year-old human!", name, age);
}

fn main() {
  // start composing a new Builder chain
  tauri::Builder::default()
    // assign our generated "handler" to the chain
    .invoke_handler(
      // piece together application logic
      tauri::generate_handler![
        greet, // attach the command
      ]
    )
    // start/initialize the application
    .run(
      // put it all together
      tauri::generate_context!()
    )
    // print <message> if error while running
    .expect("error while running tauri application");
}

The Tauri application compiles and is aware of the fact that it owns a “greet” command. It’s also already controlling a webview (which we’ve discussed) but in doing so, it acts as a bridge between the front end (the webview contents) and the back end, which consists of the Tauri APIs and any additional code we’ve written, like the greet command. Tauri allows us to send messages across this bridge so that the two worlds can communicate with one another.

A component diagram of a basic Tauri application.
The developer is responsible for webview contents and may optionally include custom Rust modules and/or define custom commands. Tauri controls the webviewer and the event bridge, including all message serialization and deserialization.

This “bridge” can be accessed by the front end by importing functionality from any of the (already included) @tauri-apps packages, or by relying on the window.__TAURI__ global, which is available to the entire client-side application. Specifically, we’re interested in the invoke command, which takes a command name and a set of arguments. If there are any arguments, they must be defined as an object where the keys match the parameter names our Rust function expects.

In the Svelte layer, this means that we can do something like this in order to call the greet command, defined in the Rust layer:

<!-- Greeter.svelte -->
<script>
  function onclick() {
    __TAURI__.invoke('greet', {
      name: 'Alice',
      age: 32
    });
  }
</script>

<button on:click={onclick}>Click Me</button>

When this button is clicked, our terminal window (wherever the tauri dev command is running) prints:

Hello Alice, 32 year-old human!

Again, this happens because the greet command used the println! function, which is effectively console.log for Rust. It appears in the terminal’s console window — not the browser console — because this code still runs on the Rust/system side of things.

It’s also possible to send something back to the client from a Tauri command, so let’s change greet quickly:

use tauri::{command};

#[command]
fn greet(name: String, age: u8) {
  // implicit return, because no semicolon!
  format!("Hello {}, {} year-old human!", name, age)
}

// OR

#[command]
fn greet(name: String, age: u8) {
  // explicit `return` statement, must have semicolon
  return format!("Hello {}, {} year-old human!", name, age);
}

Realizing that I’d be calling invoke many times, and being a bit lazy, I extracted a light client-side helper to consolidate things:

// @types/global.d.ts
/// <reference types="@sveltejs/kit" />

type Dict<T> = Record<string, T>;

declare const __TAURI__: {
  invoke: typeof import('@tauri-apps/api/tauri').invoke;
}

// src/lib/tauri.ts
export function dispatch(command: string, args: Dict<string|number>) {
  return __TAURI__.invoke(command, args);
}

The previous Greeter.svelte was then refactored into:

<!-- Greeter.svelte -->
<script lang="ts">
  import { dispatch } from '$lib/tauri';

  async function onclick() {
    let output = await dispatch('greet', {
      name: 'Alice',
      age: 32
    });
    console.log('~>', output);
    //=> "~> Hello Alice, 32 year-old human!"
  }
</script>

<button on:click={onclick}>Click Me</button>

Great! So now it’s Thursday and I still haven’t written any Redis code, but at least I know how to connect the two halves of my application’s brain together. It was time to comb back through the client-side code and replace all TODOs inside event handlers and connect them to the real deal.

I will spare you the nitty gritty here, as it’s very application-specific from here on out — and is mostly a story of the Rust compiler giving me a beat down. Plus, spelunking for nitty gritty is exactly why the project is open source!

At a high level, once a Redis connection is established using the given details, a SYNC button is accessible in the /viewer route. When this button is clicked (and only then — because of costs) a JavaScript function is called, which is responsible for connecting to the Cloudflare REST API and dispatching a "redis_set" command for each key. This redis_set command is defined in the Rust layer — as are all Redis-based commands — and is responsible for actually writing the key-value pair to Redis.
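
In spirit, the client side of that sync loop looks something like this (a simplified sketch built on the dispatch helper from earlier; fetchAllKeys and the argument names are assumptions, standing in for the real Cloudflare REST API calls):

// Hypothetical sketch of the SYNC handler
async function onSync() {
  const keys = await fetchAllKeys(); // e.g. [{ name, value }, ...]
  for (const key of keys) {
    // hand each pair to the Rust layer, which writes it to Redis
    await dispatch('redis_set', { key: key.name, value: key.value });
  }
}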

Reading data out of Redis is a very similar process, just inverted. For example, when the /viewer starts up, all the keys should be listed and ready to go. In Svelte terms, that means I need to dispatch a Tauri command when the /viewer component mounts. That happens here, almost verbatim. Additionally, clicking on a key name in the sidebar reveals additional “details” about the key, including its expiration (if any), its metadata (if any), and its actual value (if known). Optimizing for cost and network load, we decided that a key’s value should only be fetched on command. This introduces a REFRESH button that, when clicked, interacts with the REST API once again, then dispatches a command so that the Redis client can update that key individually.
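
The mount-time dispatch mentioned above looks roughly like this (the list_keys command name is illustrative; the real command lives in the Rust layer alongside redis_set):

<!-- src/routes/viewer.svelte: an illustrative sketch -->
<script lang="ts">
  import { onMount } from 'svelte';
  import { dispatch } from '$lib/tauri';

  let keys: string[] = [];

  onMount(async () => {
    // ask the Rust layer for every known key as soon as the viewer mounts
    keys = (await dispatch('list_keys', {})) as string[];
  });
</script>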

I don’t mean to bring things to a rushed ending, but once you’ve seen one successful interaction between your JavaScript and Rust code, you’ve seen them all! The rest of my Thursday and Friday morning was just defining new request-reply pairs, which felt a lot like sending PING and PONG messages to myself.

Conclusion

For me — and I imagine many other JavaScript developers — the challenge this past week was learning Rust. I’m sure you’ve heard this before and you’ll undoubtedly hear it again. Ownership rules, borrow-checking, and the meanings of single-character syntax markers (which are not easy to search for, by the way) are just a few of the roadblocks that I bumped into. Again, a massive thank-you to the Tauri Discord for their help and kindness!

This is also to say that using Tauri was not a challenge — it was a massive relief. I definitely plan to use Tauri again in the future, especially knowing that I can use just the webviewer if I want to. Digging into and/or adding Rust parts was “bonus material” and is only required if my app requires it.

For those wondering, because I couldn’t find another place to mention it: on macOS, the Workers KV GUI application weighs in at less than 13 MB. I am so thrilled with that!

And, of course, SvelteKit certainly made this timeline possible. Not only did it save me a half-day slog configuring my toolbelt, but the instant HMR development server probably saved me a few hours of manually refreshing the browser — and then the Tauri viewer.

If you’ve made it this far — that’s impressive! Thank you so much for your time and attention. A reminder that the project is available on GitHub and the latest, pre-compiled binaries are always available through its releases page.



Technology Tools for Better Legal Writing

July 26th, 2021

Writing takes a lot of time out of lawyers’ busy schedules – time that could be spent more efficiently. 

If you are still relying only on your word processor when writing, it would be an understatement to say you are missing out on a ton of benefits brought on by the use of additional software.

Infusing your workflow with special tools for better legal writing could turn a tedious, time-intensive process into something much easier, and ultimately, make the end-product reach extra levels of quality. 

These are some of the best technology tools that we recommend using if you want to level up your legal writing game.

1. Loio

Loio legal document software is an AI-powered legal document analysis and editing tool that ensures your documents are tidy and error-free when they leave your hands.

The highlights feature alone can help you save valuable time as it scans your document and digs out the key details automatically. These details are then grouped for easy access, helping you spot mistakes quickly.

Aside from that, Loio also gives you the option of fixing formatting and styling issues easily, as well as the option of editing lists, numbering, and bulleting. 

You can experience these useful features without ever leaving your word processor because this powerful tool is a simple, easy-to-use Microsoft Word add-on.

2. DraftLens

Another piece of legal AI software on this list is DraftLens, a web app that helps you save time before you even start writing. 

With DraftLens, you don’t need to start from scratch every time as it allows you to create dynamic automated templates using any legal text. This is a lifesaver for lawyers that handle a large volume of documents regularly.

Furthermore, it also helps you streamline the entire drafting process by allowing you to find and insert new precedents, information, and edit clauses.

3. Litigation Companion 

You have probably wasted many hours generating tables of authorities (TOA) in your documents, especially if you took the traditional route and marked every citation manually. The good news is, there is an app for that as well.

Litigation Companion (previously known as Best Authority) is a blessing when it comes to building TOA in documents such as legal briefs. With this app, you can complete the entire process in a couple of minutes.

Here’s how it works: it scans your document and automatically identifies citations present, then generates a TOA in a pre-defined location.

Additionally, Litigation Companion also recognizes content and citation errors. This eventually helps you save even more time by eliminating the need to cross-check individual citations for mistakes.

4. PerfectIt

The reason why MS Word is such a powerful tool for lawyers is the availability of a wide range of legal-centric add-ons. PerfectIt (along with other items on this list) is a great solution for lawyers who want to improve the quality of their documents.

PerfectIt not only helps you proofread your document like a pro, but it also helps you quickly check italicization, hyphenation, and spelling of legal terms by utilizing its American Legal Style stylesheet. 

This particular stylesheet is based on Black’s Law Dictionary, so you can bet you will never need to bust out the physical copy of this book ever again, as you now technically have it within MS Word.

5. ProWritingAid

The previous tools were mostly geared towards workflow improvements in legal document drafting. It’s time to mention software that can improve your writing itself. 

ProWritingAid is a helpful tool that makes sure your legal documents are grammar- and spelling-perfect. However, that is not the only thing you can do.

Aside from the usual style suggestions, the crowning feature of this grammar checker is its detailed reporting that can help you make further improvements to your writing. You can get reports for categories such as styles, grammar, clichés, and readability.

Plus, two features can be particularly useful for lawyers:

  • Snippets let you create a bank of phrases and terms you use regularly. This can be a useful asset when using repeated complicated legalese and Latin phrases.
  • Style Guide is helpful for keeping writing styles consistent across the board. ProWritingAid allows you to create a style guide that your entire team can follow to maintain a high level of consistency in your law firm.

6. Scrivener

Most of the previous tools we mentioned integrate perfectly with Microsoft Word. However, if you want to try an interesting new approach to writing legal documents, Scrivener is a fine choice. The first reason to use this app is to increase your writing productivity.

This word processor gives you a tree-based approach that completely changes the way you navigate through documents. As a bonus, Scrivener also allows you to open multiple file formats within the program itself, eliminating the need to jump between multiple tabs. 

Lastly, you can freely rearrange the different sections of the document by simply dragging and dropping them until you find the arrangement that works for you.

As stressful as you make it

Dealing with legal documents is time-consuming, and rightfully so, as a small slip-up could spell a disaster for you and your client. 

However, considering all the legal AI software available, legal writing shouldn’t be as stressful as it probably is. Leveraging tech advancements in the field of writing can ease a lot of the pressure you are experiencing while also saving you a lot of time in the process.

What’s more, implementing these tools might cause a cascading effect of improvements in your career. You can use the newly found extra time to make further improvements in your writing style or give more attention to other areas of your practice, such as client relations.

All things considered, any money you invest into technology writing tools is money well spent.


Popular Design News of the Week: July 19, 2021 – July 25, 2021

July 25th, 2021

Every day design fans submit incredible industry stories to our sister-site, Webdesigner News. Our colleagues sift through it, selecting the very best stories from the design, UX, tech, and development worlds and posting them live on the site.

The best way to keep up with the most important stories for web professionals is to subscribe to Webdesigner News or check out the site regularly. However, in case you missed a day this week, here’s a handy compilation of the top curated stories from the last seven days. Enjoy!

The Best Olympic Logos of all Time

16 CSS Backdrop-Filter Examples

9 Must-Install Craft CMS Plugins

Adobe Reveals the World’s Favourite Emojis in 2021

14 Must-See Graphic Design Movies You Should Watch

Desech: No-Code Visual HTML/CSS Editor

Of Course We Can Make a CSS-Only Clock

8 CSS & JavaScript Code Snippets for Creating Realistic Animation

What is a Headless CMS and What Does it Mean For SEO?

OOP is Dead. Wait, Really?


My petite-vue review

July 23rd, 2021

Dave:

petite-vue is a new cut of the Vue project specifically built with progressive enhancement in mind. At 5kb, petite-vue is a lightweight Alpine (or jQuery) alternative that can be “sprinkled” over your project, requiring no extra bundling steps or build processes. Add a <script> tag, set a v-scope, and you’re off to the races. This is up my alley.

Lots of us are still fond of jQuery, but didn’t like how fragile things could be, entirely separating interactive features and the HTML. “Separation of concerns” felt right at the time, but was ultimately too dogmatic. Authoring languages like JSX felt wrong at first, and now feel rather right, and a lot of JavaScript templating has fallen in line. But heavy frameworks tend to be involved. Framework-free approaches began to show up, like Alpine.js, which allow us to sprinkle interactive technology right into the HTML. Vue has always been sprinkle-able, to a degree, but now much more so with petite-vue.
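
For the unfamiliar, that sprinkling looks about like this (adapted from petite-vue’s README):

<script src="https://unpkg.com/petite-vue" defer init></script>

<!-- v-scope marks the chunk of HTML petite-vue controls; no build step -->
<div v-scope="{ count: 0 }">
  {{ count }}
  <button @click="count++">Increment</button>
</div>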


Organize your CSS declarations alphabetically

July 22nd, 2021

Eric, again not mincin’ no words with blog post titles. This is me:

The most common CSS declaration organization technique I come across is none whatsoever.

Almost none, anyway. I tend to group them by whatever dumps out of my brain as I’m writing them, which usually ends up with somewhat logical groups, like box model stuff grouped together and color stuff grouped together. It just… hasn’t mattered to me. But that is strongly influenced by typically working on small teams or alone. Eric recommends the alphabetical approach because:

[…] it imposes a baseline sense of structure across a team. This ask is usually enough, especially if it means cleaning up what’s come before.

And his (probably bigger) point is that the imparted structure helps legitimize CSS in a world where CSS skills are undervalued. Not going to argue against that, but I would argue that hand-alphabetizing CSS on an existing project is very likely not a good use of time. Worse, it might break stuff if done blindly, which is why Prettier punted on it. If you and your team agree this is a good idea, I’d find a way to get this into an on-save function in your code editor and make it a pre-commit hook. Alphabetizing is a task for computers to do and the output can be verified as you are authoring.
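
For example, the stylelint-order plugin can enforce alphabetical order and fix violations for you, which pairs nicely with a pre-commit hook:

// .stylelintrc
{
  "plugins": ["stylelint-order"],
  "rules": {
    "order/properties-alphabetical-order": true
  }
}

Running stylelint with the --fix flag then reorders declarations automatically, so nobody has to alphabetize by hand.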


Making Your Mark: 6 Tips for Infusing Brand Essence into Your Website 

July 22nd, 2021

What makes a company special? There are hundreds of organizations out there selling fast food, but only one McDonald’s. You’ve probably stumbled across dozens of technology companies too, but none of them inspire the same kind of loyalty and commitment as Apple. So why do people fall in love with some companies more than others?

Most of the time, it comes down to one thing: brand essence. Your brand essence, or brand identity, represents all of the unique components of your business that separate it from the competition. It’s not just about your logo or the brand colors you choose for your website. Your brand essence entails all of the visual assets you use and those less tangible concepts like brand values and personality.

When your customers decide which companies to buy from, they’re not just looking for another dime-a-dozen venture with the cheapest products. Instead, your audience wants to buy from a business that they feel connected with. That’s where your brand comes in.

Infusing brand essence into your website helps give your digital presence that extra touch that differentiates you from other similar companies.

So, how do you get started?

What is Brand Essence?

Your brand essence is a collection of core characteristics responsible for defining your brand. More than just a single asset, your essence encompasses everything from your unique tone of voice to your brand image, your message, and even what you stand for.

When you’re trying to boost your chances of sales, a brand essence helps build a kind of affinity with your customers. Your clients see aspects of your character that they can relate to, such as a modern and playful image or a commitment to sustainable practices. It’s that affinity that convinces your customer to choose your company instead of your competition time after time.

Your website is one of the first places that your customer will visit when looking for answers. They might happen across your site when searching for a key phrase on Google or stumble onto it from social media. When they arrive on your product or home page, everything there should help them make an immediate emotional connection.

The only problem?

It’s challenging to portray a unique brand identity through a standard website template. If your site looks the same as a dozen other online stores, how can you convince your customers that you’re better?

Why Your Website Needs Brand Essence

First impressions are a big deal.

In a perfect world, your visitors would land on your website and instantly fall in love with what they find there. So everything from the unique design of your homepage to the pictures on your product pages should delight and impress your audience.

Unfortunately, if just one element is wrong, then you could make entirely the wrong impression on your customers. Adding elements of brand essence to your site will:

  • Build trust with your audience: Your brand should be a consistent component of everything your business does, from selling products to interacting with customers. When your consumers land on your website, everything from your logo to your chosen colors should remind them of who you are and what you stand for. This consistency will lead to better credibility for your business. 
  • Make you stand out: How many other companies just like yours are on the internet today? Your brand essence helps to differentiate you as special by showing what’s truly unique about you. It’s a chance to remind your audience of your values and make them forget all about your competitors. 
  • Create an emotional connection: Brand essence that shows off your unique values, mission, and personality will help create an emotional link with your audience. Remember, people fall in love with the human characteristics behind your brand!

Easy Ways to Add Brand Essence to your Website

Since there are so many elements that add up to a strong brand essence, there are also a variety of ways you can add your brand to your website. Whether it’s using specific colors to elicit an emotional response from your fans or adding your unique tone of voice into content, there’s no shortage of ways to show off what makes you special.

1. Use Your Brand Colors Carefully

Brand colors are an important part of your brand essence. Walk into Target, and the bright shades of red will instantly energize you. Likewise, when you see a McDonald’s, that golden arch logo might instantly remind you of joy and nourishment.

Color psychology plays a significant role in every brand asset you create, from the packaging you choose for your products to the shades in your logo. With that in mind, you should be using your colors effectively on your website too. Stick with the same selection of shades in every digital environment online.

For instance, the Knotty Knickers company uses different shades of pink and white to convey feelings of femininity and comfort. Not only is the website packed with this color, but the company also follows through with similar shades on its social media pages too.

Everything from the highlights on the Knotty Knickers Insta to the decorations in their images features the soft tones of pink that make the company recognizable.

Remember, consistent use of color is psychologically proven to help improve recognition by up to 80%. So let your colors shine through.

2. Know Your Type

Your colors are just one component of your brand essence.

Fonts and typography are other components that your customers use to recognize and understand your business. There are many different styles of font, and new ones appear all the time. However, your company should have a specific selection of fonts that it uses everywhere.

If you have a typeface logo, then you might have one specific kind of font for this, like Original Stitch’s logo here:

On top of that, you may also use one font for the “heading” text on your website and something slightly different for the body text. Your heading and body text need to be extremely clear and easy to read on any device if you want to appeal to your target audience.

However, there’s more to getting your fonts right than choosing something legible. The fonts you pick need to say something about your company and what it stands for. For example, the modern sans-serif font of Original Stitch’s website conveys a sense of forward-thinking cleanliness and style.

The fonts feel trendy and informal, perfect for a luxury fashion company. Alternatively, a serif font like Garamond might look more formal and professional. So what do you want your customers to think and feel when they see your typography?

3. Know Your Images

Stock images look out-of-touch, cliché, and fake. If you cover your website in images like that, then you’re not going to earn the respect of your customers. Instead, it’s up to you to ensure that every graphic on your website conveys the sentiment and personality of your brand.

Bringing your brand essence into your website design means thinking about how you can convey your values in every element of that site. From the photos of your team that show the real humans behind your products and services to the pictures that you use on your product pages, choose your graphics wisely. If you have dedicated brand colors, you can even include these in your photos.

Of course, the most valuable graphic on your website should always be your logo. This is the thing that you need to include on all of your website pages to ensure that your audience knows who they’re shopping with. So ensure that no matter how much visual content you have on your page, your logo still stands out.

Firebox places its logo at the top of the page in the middle of your line of sight, no matter where you go on the website. This ensures that the logo is the thing that instantly grabs attention and reminds consumers of where they are.

Remember to ensure that each of the images you do include in your website is surrounded by plenty of white space so that your customers can see them properly too.

4. Use the Right Tone of Voice

It’s easy to get stuck focusing on things like logos and colors when you’re trying to make your brand essence shine through in your website. However, one of the most common things that your customer will recognize you by is your tone of voice.

Your brand tone of voice is what gives the content you share online personality and depth. It can come through in the kind of words you use, including slang and colloquialisms. In addition, you can add humor to your voice (if it’s appropriate) and even include emojis if that makes sense for your brand.

With formal words, you can make sure that you come across as reliable, dependable, and sophisticated. With informal words, you’re more likely to convince your audience that you’re friendly and relatable. Sticking with the Firebox example, look at how the company writes its product descriptions.

Everything from the length of the sentences to the humor in the tone of voice helps to convey something unique about the brand.

Like all of the elements that bring your brand essence into your website, your tone of voice must remain consistent wherever your customers see it. Make sure that your customer service agents know how to use your voice in chat with customers and that you include that personality on social media too.

5. Focus on Your Message

The tone of voice and personality that you showcase in your website and content is crucial to driving success for your business. However, under all of that, the most important thing to do is ensure you’re sending the right message. In other words, what do you want your customers to think and feel when they land on your website?

If your whole message is that you can make great skincare easy to obtain without asking people to compromise on animal safety, this should be the first thing that comes across when someone arrives on your homepage. That’s why Lush combines clean, simple web pages with credibility-boosting badges that tell customers everything they need to know instantly.

Ensuring that your message can come across correctly means learning how to use all the different brand assets you rely on consistently and effectively. Everything from the colors on your website to how you place trust badges along the bottom of every page makes a difference to how significant and believable your message is.

Notably, when you know what your core message is, it’s important to repeat it. That means that you don’t just talk about what your ideals and values are on your homepage. You also include references to your message on every product page and the “About” page too. Lush has an entire “ethical charter” on its website, where you can learn more about its activities.

Having a similar component included within your web design could be an excellent way to confirm what your most crucial considerations are for your customers.

6. Never Copy the Competition

Exploring other website designs and ideas created by your competitors is one of the easiest ways to get inspiration. Competitive analysis gives you an insight into the trends and design strategies that are more likely to work for your target audience. It’s also an opportunity to learn from what your competitors are doing wrong as well as what they’re doing right.

However, you should never let your initial research and exploration go too far. In other words, when you see something great that your competitor is doing, don’t just copy and paste it onto your own website. This is more likely to remind your customers of the other company and send them to that brand than it is to improve your reputation.

Instead, focus on making your website unique to you. If you’re having trouble with this, then start by looking at your About page. How would you describe your background and your mission as a business to someone who has never heard of you before? What makes your company different from all of the other organizations out there?

Take the unique features that you rave about on your About page and the personality you try to convey through your employees and bring it into the rest of your website design. The whole point of bringing brand essence into your web design strategy is that it helps to differentiate you from the other companies out there.

It’s a good idea to protect any assets that other people might try to steal from you too. Copyrighting your logo, your name, and other essential components of your brand will stop people in your industry from treading on your toes too much.

Make the Most of Your Brand Essence

Your website is one of the most valuable tools that you’ll have as a brand. It’s an opportunity for you to share your products and services with the world, capture the attention of your target audience, and potentially make sales too. However, it’s important not to forget that your website is also a chance for you to demonstrate what your brand is truly all about.

Through your brand essence, you can share the unique values and messages that make your company special. But, more importantly, you can use those components to build a deeper relationship with your audience – the kind of connection that leads to dedicated repeat customers and brand loyalty. After all, people connect with other people, not just faceless corporations.

Once you’ve identified your brand essence, the next step is to make sure that you connect with your customers consistently, so that every aspect of your website, application, social media pages, and anything else you make for your business sends the same message.


Using Google Drive as a CMS

July 22nd, 2021

We’re going to walk through the technical process of hooking into Google Drive’s API to source content on a website. We’ll examine the step-by-step implementation, as well as how to utilize server-side caching to avoid major pitfalls such as API usage limits and image hotlinking. A ready-to-use npm package, Git repo, and Docker image are provided throughout the article.

But… why?

At some point in the development of a website, a crossroads is reached: how is content managed when the person managing it isn’t technically savvy? If the content is managed by developers indefinitely, pure HTML and CSS will suffice — but this prevents wider team collaboration; besides, no developer wants to be on the hook for content updates in perpetuity.

So what happens when a new non-technical partner needs to gain edit access? This could be a designer, a product manager, a marketing person, a company executive, or even an end customer.

That’s what a good content management system is for, right? Maybe something like WordPress. But this comes with its own set of disadvantages: it’s a new platform for your team to juggle, a new interface to learn, and a new vector for potential attackers. It requires creating templates, a format with its own syntax and idiosyncrasies. Custom or third-party plugins may need to be vetted, installed, and configured for unique use cases — and each of these is yet another source of complexity, friction, technical debt, and risk. The bloat of all this setup may end up cramping your tech in a way which is counterproductive to the actual purpose of the website.

What if we could pull content from where it already is? That’s what we’re getting at here. Many of the places where I have worked use Google Drive to organize and share files, and that includes things like blog and landing page content drafts. Could we utilize Google Drive’s API to import a Google Doc directly into a site as raw HTML, with a simple REST request?

Of course we can! Here’s how we did it where I work.

What you’ll need

Just a few things you may want to check out as we get started:

Authenticating with the Google Drive API

The first step is to establish a connection to Google Drive’s API, and for that, we will need to do some kind of authentication. That’s a requirement to use the Drive API even if the files in question are publicly shared (with “link sharing” turned on). Google supports several methods of doing this. The most common is OAuth, which prompts the user with a Google-branded screen saying, “[So-and-so app] wants to access your Google Drive” and waits for user consent — not exactly what we need here, since we’d like to access files in a single central drive, rather than the user’s drive. Plus, it’s a bit tricky to provide access to only particular files or folders. The https://www.googleapis.com/auth/drive.readonly scope we might use is described as:

See and download all your Google Drive files.

That’s exactly what it says on the consent screen. This is potentially alarming for a user, and more to the point, it is a potential security weakness on any central developer/admin Google account that manages the website content; anything they can access is exposed through the site’s CMS back end, including their own documents and anything shared with them. Not good!

Enter the “Service account”

Instead, we can make use of a slightly less common authentication method: a Google service account. Think of a service account like a dummy Google account used exclusively by APIs and bots. However, it behaves like a first-class Google account; it has its own email address, its own tokens for authentication, and its own permissions. The big win here is that we make files available to this dummy service account just like any other user — by sharing the file with the service account’s email address, which looks something like this:

google-drive-cms-example@npm-drive-cms.iam.gserviceaccount.com

When we go to display a doc or sheet on the website, we simply hit the “Share” button and paste in that email address. Now the service account can see only the files or folders we’ve explicitly shared with it, and that access can be modified or revoked at any time. Perfect!

Creating a service account

A service account can be created (for free) from the Google Cloud Platform Console. That process is well documented in Google’s developer resources, and in addition it’s described in step-by-step detail in the companion repo of this article on GitHub. For the sake of brevity, let’s fast-forward to immediately after a successful authentication of a service account.

The Google Drive API

Now that we’re in, we’re ready to start tinkering with what the Drive API is capable of. We can start from a modified version of the Node.js quickstart sample, adjusted to use our new service account instead of client OAuth. That’s handled in the first several methods of the driveAPI.js we are constructing to handle all of our interactions with the API. The key difference from Google’s sample is in the authorize() method, where we use an instance of jwtClient rather than the oauthClient used in Google’s sample:

authorize(credentials, callback) {
  const { client_email, private_key } = credentials;

  const jwtClient = new google.auth.JWT(client_email, null, private_key, SCOPES)

  // Check if we have previously stored a token.
  fs.readFile(TOKEN_PATH, (err, token) => {
    if (err) return this.getAccessToken(jwtClient, callback);
    jwtClient.setCredentials(JSON.parse(token.toString()));
    console.log('Token loaded from file');
    callback(jwtClient);
  });
}
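
The getAccessToken() method referenced above lives in the companion repo. For orientation, here’s a minimal sketch of what it might look like for a JWT client (hypothetical, but based on google-auth-library’s JWT authorize() call):

getAccessToken(jwtClient, callback) {
  // Exchange the service account's signed JWT for an access token
  jwtClient.authorize((err, tokens) => {
    if (err) return console.error('Error retrieving access token', err);
    // Persist the token so future runs can skip this round trip
    fs.writeFile(TOKEN_PATH, JSON.stringify(tokens), (writeErr) => {
      if (writeErr) return console.error(writeErr);
      console.log('Token stored to', TOKEN_PATH);
    });
    callback(jwtClient);
  });
}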

Node.js vs. client-side

One more note about the setup here — this code is intended to be called from server-side Node.js code. That’s because the client credentials for the service account must be kept secret, and not exposed to users of our website. They are kept in a credentials.json file on the server, and loaded via fs.readFile inside of Node.js. It’s also listed in the .gitignore to keep the sensitive keys out of source control.
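
Putting those pieces together, the server-side bootstrap might look something like this sketch (the file name and callback body are assumptions consistent with the snippets above):

// Sketch: load the secret credentials server-side and authorize the client
const fs = require('fs');

fs.readFile('credentials.json', (err, content) => {
  if (err) return console.error('Error loading credentials:', err);
  // driveAPI is an instance of the DriveAPI class being built here
  driveAPI.authorize(JSON.parse(content.toString()), (jwtClient) => {
    console.log('Authorized as the service account');
  });
});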

Fetching a doc

After the stage is set, loading raw HTML from a Google Doc becomes fairly simple. A method like this returns a Promise of an HTML string:

getDoc(id, skipCache = false) {
  return new Promise((resolve, reject) => {
    this.drive.files.export({
      fileId: id,
      mimeType: "text/html",
      fields: "data",
    }, (err, res) => {
      if (err) return reject('The API returned an error: ' + err);
      resolve({ html: this.rewriteToCachedImages(res.data) });
      // Cache images
      this.cacheImages(res.data);
    });
  });
}

The Drive.Files.export endpoint does all the work for us here. The id we’re passing in is just what shows up in your browser’s address bar when you open the doc, which is shown immediately after https://docs.google.com/document/d/.

Also notice the two lines about caching images — this is a special consideration we’ll skip over for now, and revisit in detail in the next section.

Here’s an example of a Google document displayed externally as HTML using this method.

Fetching a sheet

Fetching Google Sheets is almost as easy using Spreadsheets.values.get. We adjust the response object just a little bit to convert it to a simplified JSON array, labeled with column headers from the first row of the sheet.

getSheet(id, range) {
  return new Promise((resolve, reject) => {
    this.sheets.spreadsheets.values.get({
      spreadsheetId: id,
      range: range,
    }, (err, res) => {
      if (err) return reject('The API returned an error: ' + err);
      // The first row holds the column headers; use them as object keys
      const keys = res.data.values[0];
      const transformed = [];
      res.data.values.forEach((row, i) => {
        if (i === 0) return; // skip the header row
        const item = {};
        row.forEach((cell, index) => {
          item[keys[index]] = cell;
        });
        transformed.push(item);
      });
      resolve(transformed);
    });
  });
}

The id parameter is the same as for a doc, and the new range parameter here refers to a range of cells to fetch values from, in Sheets A1 notation.

Example: this Sheet is read and parsed in order to render custom HTML on this page.
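
For instance, a sheet whose first row holds the headers title and url could be consumed like this (hypothetical IDs and range):

// Read rows A1:B10; row 1 supplies the object keys
driveAPI.getSheet('YOUR_SHEET_ID', 'Sheet1!A1:B10')
  .then(rows => {
    // rows => [{ title: '...', url: '...' }, ...]
    rows.forEach(row => console.log(row.title, row.url));
  })
  .catch(err => console.error(err));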

…and more!

These two endpoints already get you very far, and they form the backbone of a custom CMS for a website. But, in fact, they only scratch the surface of Drive’s potential for content management. It’s also capable of:

  • listing all files in a given folder and displaying them in a menu,
  • importing complex media from a Google Slides presentation, and
  • downloading and caching custom files.

The only limits here are your creativity and the constraints of the full Drive API documented here.

Caching

As you’re playing with the various kinds of queries that the Drive API supports, you may end up receiving a “User Rate Limit Exceeded” error message . It’s fairly easy to hit this limit through repeated trial-and-error testing during the development phase, and at first glance, it seems as if it would represent a hard blocker for our Google Drive-CMS strategy.

This is where caching comes in: every time we fetch a new version of any file on Drive, we cache it locally (that is, server-side, within the Node.js process). Once that’s done, we only need to check each file’s version. If our cache is out of date, we fetch the newest version of the corresponding file, but that request happens only once per file version, rather than once per user request. Instead of scaling with the number of people who visit the website, API usage now scales with the number of updates/edits made on Google Drive. Under the current Drive usage limits on a free-tier account, we could support up to 300 API requests per minute. Caching should keep us well within this limit, and it could be optimized even further by batching multiple requests.
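
To make that concrete, here’s a minimal sketch of version-based caching with an in-process cache object; this is an illustration, not the companion repo’s exact implementation (which handles more cases):

// Maps file IDs to { version, html } entries, held in the Node.js process
const cache = {};

async function getDocCached(driveAPI, id) {
  // A metadata-only request: cheap compared to a full export
  const { data } = await driveAPI.drive.files.get({ fileId: id, fields: 'version' });
  const cached = cache[id];
  if (cached && cached.version === data.version) {
    return cached.html; // cache hit: no export request needed
  }
  const { html } = await driveAPI.getDoc(id); // cache miss: fetch fresh HTML
  cache[id] = { version: data.version, html };
  return html;
}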

Handling images

The same caching method is applied to images embedded inside Google Docs. The getDoc method parses the HTML response for any image URLs, makes a secondary request to download each one (or pulls it straight from the cache if it’s already there), and then rewrites the original URL in the HTML. The result is static HTML that never hotlinks to Google’s image CDNs. By the time it reaches the browser, the images have already been pre-cached.
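
For reference, the rewriting step could be as simple as this sketch (the regex and the route name are hypothetical, modeled on the getImage route that appears later):

rewriteToCachedImages(html) {
  // Swap each Google-hosted image URL for our own cached-image endpoint
  return html.replace(
    /https:\/\/[\w.-]*googleusercontent\.com\/[^"'\s]+/g,
    (url) => '/api/v1/getImage?url=' + encodeURIComponent(url)
  );
}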

Respectful and responsive

Caching ensures two things. First, that we are being respectful of Google’s API usage limits, and truly utilize Google Drive as a front end for editing and file management (what the tool is intended for), rather than leeching off of it for free bandwidth and storage space. It keeps our website’s interaction with Google’s APIs to the bare minimum necessary to refresh the cache as needed.

The other benefit is one that the users of our website will enjoy: a responsive website with minimal load times. Since cached Google Docs are stored as static HTML on our own server, we can fetch them immediately without waiting for a third-party REST request to complete, keeping website load times to a minimum.

Wrapping in Express

Since all this tinkering has been in server-side Node.js, we need a way for our client pages to interact with the APIs. By wrapping the DriveAPI into its own REST service, we can create a middleman/proxy service which abstracts away all the logic of caching/fetching new versions, while keeping the sensitive authentication credentials safe on the server side.

A set of Express routes, or the equivalent in your favorite web server, will do the trick:

const driveAPI = new (require('./driveAPI'))();
const express = require('express');

const API_VERSION = 1;
const app = express(); // the Express app that hosts the router
const router = express.Router();

router.route('/getDoc')
.get((req, res) => {
  console.log('GET /getDoc', req.query.id);
  driveAPI.getDoc(req.query.id)
  .then(data => res.json(data))
  .catch(error => {
    console.error(error);
    res.sendStatus(500);
  });
});

// Other routes included here (getSheet, getImage, listFiles, etc)...

app.use(`/api/v${API_VERSION}`, router);

See the full express.js file in the companion repo.
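
From the browser’s perspective, consuming the proxy is then an ordinary fetch call; a hypothetical example (element and doc IDs are placeholders):

// The browser only ever talks to our proxy; the Google credentials
// never leave the server
fetch('/api/v1/getDoc?id=YOUR_DOC_ID')
  .then(res => res.json())
  .then(({ html }) => {
    document.querySelector('#content').innerHTML = html;
  })
  .catch(err => console.error(err));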

Bonus: Docker Deployment

For deployment to production, we can run the Express server alongside your existing static web server. Or, if it’s convenient, we could easily wrap it in a Docker image:

FROM node:8
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm install
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
CMD [ "node", "express.js" ]

…or use this pre-built image published on Docker Hub.

Bonus 2: NGINX Google OAuth

If your website is public-facing (accessible by anyone on the internet), then we’re done! But for our purposes where I work at Motorola, we are publishing an internal-only documentation site that needs additional security. That means Link Sharing is turned off on all our Google Docs (which also happen to be stored in an isolated, dedicated Google Team Drive, separate from all other company content).

We handled this additional layer of security as early as possible at the server level, using NGINX to intercept and reverse-proxy all requests before they even make it to the Express server or any static content hosted by the website. For this, we use Cloudflare’s excellent Docker image to present a Google sign-on screen to all employees accessing any website resources or endpoints (both the Drive API Express server and the static content alongside it). It seamlessly integrates with the corporate Google account and single-sign-on they already have — no extra account needed!

Conclusion

Everything we just covered in this article is exactly what we’ve done where I work. It’s a lightweight, flexible, and decentralized content management architecture in which the raw data lives in Google Drive, where our team already works, using a UI that’s already familiar to everyone. It all ties together into the website’s front end, which retains the full flexibility of pure HTML and CSS in terms of control over presentation, with minimal architectural constraints. A little extra legwork from you, the developer, creates a virtually seamless experience for both your non-dev collaborators and your end users alike.

Will this sort of thing work for everyone? Of course not. Different sites have different needs. But if I were to put together a list of use cases for when to use Google Drive as a CMS, it would look something like this:

  • An internal site with a few hundred to a few thousand daily users — If this had been the front page of the global company website, even a single file-version metadata request per user might approach that Drive API usage limit. Further techniques could help mitigate that — but it’s the best fit for small to medium-sized websites.
  • A single-page app — This setup has allowed us to query the version numbers of every data source in a single REST request, one time per session, rather than one time per page. A non-single-page app could use the same approach, perhaps even making use of cookies or local storage to accomplish the same “once per visit” version query, but again, it would take a little extra legwork.
  • A team that’s already using Google Drive — Perhaps most important of all, our collaborators were pleasantly surprised that they could contribute to the website using an account and workflow they already had access to and were comfortable using, including all of the refinements of Google’s WYSIWYG experience, powerful access management, and the rest.

The post Using Google Drive as a CMS appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Categories: Designing, Others Tags:

Hashnode: A Blogging Platform for Developers

July 22nd, 2021 No comments

Hashnode is a free developer blogging platform. Say you’ve just finished an ambitious project and want to write about 10 important lessons you’ve learned as a developer during it. You should definitely blog it—I love that kind of blog post, myself. Making a jump into the technical debt of operating your own blog isn’t a small choice, but it’s important to own your own content. With Hashnode, the decision gets a lot easier. You can blog under a site you entirely own, and at the same time, reap the benefits of hosted software tailor-made for developer blogging and be part of a social center around developer writing.

Here are some things, technologically, I see and like:

  • Write in Markdown. I’m not sure I’ve ever met a developer who didn’t prefer writing in Markdown.
  • It’s not “own your content” merely in the sense that you could theoretically export it; your content lives in your own GitHub repo. You wanna migrate it later? Go for it.
  • Your site, whether at your own custom domain or at a free subdomain, is hosted, CDN-backed, and SSL-secured, while being customizable to your own style.
  • Developer-specific features are there, like syntax highlighting for your code.
  • You get to be part of an on-site community as well as a behind-the-scenes Discord community.
  • Your blog is highly optimized for performance, accessibility, and SEO. You’ll be hitting 100s on Lighthouse reports, which is no small feat.

Your future biggest fans are there waiting for you ;).

Example of my personalized Hashnode newsletter with the best stuff from my feed.

The team over there isn’t oblivious to the other hosted blogging platforms out there. We’ve all seen programming blog posts on Medium, right? They tend to be one-offs in my experience. Hashnode is a Medium alternative for developers. Medium just doesn’t cater particularly well to the developer audience. Plus, you never know when your content will end up behind a random paywall, a mega turn-off to fellow developers just trying to learn something. No ads or paywalls on Hashnode, ever.

The smart move, I’d say, is buying a domain name to represent yourself right away. I think that’s a super valuable stepping stone in all developer journeys. Then hook it up to Hashnode. Then whatever you do from that day forward, you are building domain equity there. You’re never going to regret that. That domain strength belongs entirely to you forever. Not to mention Medium wants $50/year to map a domain, and DEV doesn’t let you do it at all.

But building your own site can be a lonely experience. The internet is a big place, and you’ll be a small fish at first. Starting off at Hashnode is like having a cheat code for being a much bigger fish right on day one.

DEV is out there too being a developer writing hub, but they don’t allow you to host your own site and build your own domain equity, as Hashnode does, or customize it to your liking as deeply.

Hashnode is built by developers, for developers, for real. Blogging for devs! The team there is very interested and receptive to your feature requests—so hit them up!

One more twist here that you might just love.

Hashnode Sponsors is a new way your fans can help monetize your blog directly, and Hashnode doesn’t take a cut of it at all.


The post Hashnode: A Blogging Platform for Developers appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Categories: Designing, Others Tags: