Archive for November 1st, 2022

Most Important YouTube Metrics and KPIs to Track?

November 1st, 2022

With over 2 billion users, YouTube is the place to be when trying to reach a large audience with video content.

But after you’ve uploaded those fantastic videos to your channel, how will you gauge what’s working well and what could use some improvement and optimization? 

That’s where YouTube metrics come in. After all, it doesn’t make sense to spend hours upon hours creating a beautiful video only to have it seen by a handful of people.

This article outlines why you need to keep track of YouTube metrics, which ones are most valuable, and how to get the most from this ongoing exercise.

Let’s get into it!

Why You Need to Track YouTube Metrics

Yes, YouTube is the world’s second-largest search engine, which is why video is becoming an increasingly important part of the modern marketing mix. But putting all that effort into content creation doesn’t make sense if you ignore how it’s performing.

And while it may seem like a hassle to keep track of numbers, it’s the only way to get ahead of the competition. Think about it: identifying data trends helps you uncover a winning formula for sustainable YouTube success.

By monitoring YouTube metrics and setting KPIs, you’ll understand the following:

  • Who’s watching your videos
  • The average amount of time viewers look at your videos
  • How often actions are taken (e.g., whether a viewer clicks the website link you included or navigates to other videos on your channel)
  • What type of video content resonates most
  • Areas for improvement (e.g., having a better-designed thumbnail to increase clicks)

Once you get into the habit of monitoring the right YouTube metrics, you’ll quickly see how they help you in the long run–so don’t neglect them.


The Top 7 YouTube Metrics You Should Know About

Wondering where to begin with YouTube analytics? Have no fear–we’ll break down the most important metrics to keep an eye on. 


1. Average Watch Time

Watch time is one of the core YouTube metrics because it tells you the average amount of time viewers spend on your videos, which also provides insight into the exact time (or window) where you may be losing viewers.

A short watch time typically means that your video failed to capture–and keep–the viewer’s attention. On the flip side, a significant watch time may mean that viewers find your content intriguing, which is what every YouTuber wants to see.

While watch time is a good metric to keep on your radar, it doesn’t necessarily paint the entire picture. You must consider and contextualize what’s happening with other YouTube metrics. 

This metric also ties to the average percentage viewed, which takes the average watch time and divides it by the length of the video (e.g., an average watch time of two minutes on a five-minute video is 40%). As with the Google helpful content update, the algorithm rewards content that is deemed “helpful” to the audience. And watch time, alongside the average percentage viewed, helps give the algorithm a sense of whether or not the content was relevant to the search query. 


2. Engagement Rate

This consolidated metric summarizes how viewers interact with your video by adding up likes, shares, comments, and other user actions. 

Keeping tabs on engagement helps you to identify what resonates most with your YouTube audience and how to leverage video content effectively (e.g., expanding on a well-performing YouTube series).

Like watch time, it isn’t a tell-all of your channel’s overall performance, but it’ll give you some level of insight into what’s going on. 

Engagement rate joins watch time and average percentage viewed as one of the top metrics that YouTube uses inside its super-secret algorithm to decide which videos get those coveted results at the top of the search pages. 


3. Unique Views


Video views in isolation are one thing, but the unique views metric lets you know how many individual users have looked at your content (without counting duplicate user views). That way, you’ll have a better understanding of your content’s exposure to new pairs of eyes.

A climb in unique views indicates that your YouTube videos garner traction and possibly reach a broad subset of viewers. 

Note that a bump in video views without a corresponding bump in unique views isn’t always a bad thing. It usually indicates that people are watching the same video multiple times. 


Pro-Tip: To maximize your YouTube content’s reach, be sure to optimize for mobile viewing as well. In fact, 70% of YouTube viewers are using a mobile device, so make sure that text and other graphic elements are large enough to be seen clearly on a small screen.  


4. Subscriber Growth Rate

As you put effort into disseminating great YouTube video content, you’ll want to see an uptick in subscribers (especially if you’re aiming to build brand awareness, drive conversions, or increase revenue).

A stall in subscriber growth (or even a decline) should be looked into further and may mean your content needs re-evaluation or improvement in some way (if, of course, you’re looking to grow your YouTube channel). 

It’s also important to tie the subscriber growth rate to the other viewer metrics. For example, keep an eye out if your videos are getting a lot of watches, comments, and engagement, but you’re not gaining many new subscribers. The fix could be as simple as adding a “Please Subscribe” request near the beginning and at the end of your videos.

Plus, you can use YouTube Analytics to dig in and figure out exactly where your subscribers are coming from, including whether they subscribed from a particular video, your YouTube channel, or more. If most of your subscribers are coming from a specific video or a topic where you provide expertise, consider making more content like that. 


Pro-Tip: As an ongoing exercise, look at your channel’s demographics (such as age, gender, and geographic location). That way, you’ll know your YouTube audience’s characteristics, allowing you to create relevant content that speaks directly to them. It may even uncover insights into new target audiences. 


5. Impressions Click-Through Rate

When your YouTube video shows up in ‘Recommended’ lists, search results, or other placements across YouTube, it’s important to understand how many of the people who see it actually click through to watch.

That’s where impressions click-through rate comes in. It gives you insight into user intent and how compelled a viewer was to click after seeing your video. In turn, it lets you know just how impactful your titles and thumbnails are. 

YouTube thumbnails are one of the most significant drivers of clickthrough rate on YouTube but are often overlooked or undervalued by YouTube creators. Your YouTube thumbnails should clearly align with the topic of the video, catch the eye, and–ideally–highlight the keywords you’ve identified as part of your YouTube SEO processes. 

6. Traffic Sources

To capitalize on the most high-traction traffic sources, you’ll need to know what’s working best and generating a significant number of views.

If you’ve embedded your YouTube content on your website or reference it regularly on your social media channels, monitoring traffic sources will help you optimize your YouTube content further and focus on the most promising outlets. This metric also covers internal traffic sources, giving you an idea of where your content surfaces on YouTube (e.g., Suggested Videos, search results, etc.). 

Pro-Tip: See which keywords YouTube users typically use when they click on your video(s). That way, you’ll better understand user search intent and which topics are most applicable. 


7. Most Popular Videos

If you’ve got a YouTube channel with a significant bank of content, ‘Most Popular Videos’ will provide insight into what resonates most with your target audience and what type of content viewers typically gravitate towards.

After learning which videos have performed best, dig a little deeper to figure out why. Did you provide insight into a complex topic? Was it your delivery or offering a unique perspective, perhaps? Whatever the case, use this information to narrow down what works best and how to guide future video content accordingly.

Keep Your YouTube Game Strong by Tracking Metrics

And there you have it! Posting videos to your channel is just one part of the equation.

Metrics and setting appropriate KPIs will help you to meet your YouTube goals, tie them into your big-picture strategy, and keep you on top of your game.

It doesn’t have to be a complex, painstaking activity. By getting in the habit of regularly monitoring YouTube data, you’ll position your YouTube channel for long-term success.

Pro-Tip: Not every metric needs to be a KPI. When deciding on your YouTube goals, determine which metrics affect your brand’s strategic goals (e.g., does watch time directly affect the number of online transactions you want to achieve? Probably not. But it is a good measurement of engagement if that’s the focus of your YouTube channel). So while YouTube metrics are helpful to understand, there’s no need to get lost in a sea of data. Distinguish what’s most important and what should be secondary to keep a clear eye on the growth of your channel. 



The Importance of Regularly Revising your Company’s Data Protocols

November 1st, 2022

Business owners have access to many tools that are vital to their success, data being one of them. Data can be precious to a business that knows how to collect, analyze, and use it. 

Staying on top of your company’s data protocols is also essential. You must continuously update your rules and procedures for handling data as your business evolves. Doing so will give your company an advantage in various ways. Let’s explore this further. 

Provide Better Security for Company and Customer Data 

Regularly revising your company’s data protocols is critical to ensure you always provide the best security for your business and customer data. Outdated data protocols can lead to serious data breaches and security issues. 

For example, when employees aren’t following your most recent procedures to secure data, they’re more vulnerable to scams like phishing.

Phishing is a cybercrime that involves being contacted by someone who seems legitimate via email, phone, or text message. They aim to get you to give them confidential information like business account numbers or your customers’ personal information. 

If you aren’t following the most up-to-date data security protocols, you open yourself, your customers, and your company up to phishing and other scams that could cause serious harm. 

Leverage Modern Data Analytics Tools 

When you consistently review your company’s data protocols, you may find that you need better tools at some point. You’ll have opportunities to bring in modern data analytics tools that can take your collection and analysis processes to the next level. 

For example, analytic process automation (APA) can drastically improve the efficiency of your collection process and how you use the data you gather. However, as powerful as data can be for a business, too much of it can leave teams unsure of how to proceed. 

APA technology can take those large datasets, analyze them for prescriptive and predictive insights, translate those insights into tangible actions, and share them with the appropriate team members. 

If you neglect to revise your data policies and procedures, you won’t be able to leverage modern data analytics tools that can collect and house information critical to the success of your team and company. 

Standardize Your Data Collection Process 

One of the biggest mistakes companies make is not having a standardized process for data collection. They just implement analytics tools and collect as much data as possible without any real direction after gathering it. 

As a result, these companies aren’t collecting the data they need, nor are they putting whatever they gather to good use, keeping them a step behind their biggest competition.  

Regularly revising data protocols can help you refine your process for collecting data so that it’s done in an organized, productive way. 

Improve Your Use of Data 

In addition to standardizing your collection process, bettering your company’s data protocols can help you improve your use of data. Collecting valuable data is only part of the responsibility. 

The other, and maybe the most critical part, is how you analyze and use that data to better your business. When you review and revise your data protocols, you should be looking at the effectiveness of your analysis process as well as your utilization procedures. 

Doing this often allows you to constantly refine:

  1. How you pull meaningful insights from the information you collect;
  2. How you turn those insights into tangible actions that move your business forward.

Enhance Your Marketing and Sales Campaigns 

Two departments that rely heavily on data are marketing and sales. Both departments use data to understand customers better and create campaigns tailored to who they are and how they behave. 

The more personalized your marketing and sales campaigns are, the more likely they will resonate with your customers and drive conversions. However, the continued effectiveness of your marketing and sales campaigns relies on revising your company’s data protocols often. 

Managing and adjusting your data protocols ensures you always collect the most accurate, useful data. It ensures you’re studying it effectively. Revising your protocols also formalizes how you use data in marketing and sales, so the experience is consistent for your customers. 

Define Guidelines for Data Classification 

How you classify your data is essential for all of your departments. If each department organizes data in different ways, chaos and confusion are pretty much guaranteed. Silos will form, and your teams won’t share data effectively, let alone understand and use it. And the customer experience will suffer because of it. 

Reviewing your data protocols ensures everyone in your company follows the most up-to-date data classification guidelines. No matter their department, your employees will be on the same page about where data belongs and why.

Create a More Cohesive and Collaborative Team 

One of the most significant benefits of regularly revising your data protocols is creating a more cohesive and collaborative team rooted in digital culture and data. 

Every department uses data in some way. But you don’t want each person to handle data in their own way because it’ll lead to a disjointed workflow and inefficient data collection, analysis, and use. 

On the other hand, when you give your team data best practices to abide by, you can develop a more cohesive operation. Consistently revising your protocols will ensure your team knows the following:

  • How to identify the most valuable data to collect;
  • What to do once they gather data;
  • Best practices for examining data for meaningful insights;
  • Whom to contact for help with data;
  • Steps to take to turn insights into actions.

Keep your team cohesive, collaborative, and productive by establishing guidelines for data use in your company and constantly adjusting them to better fit how your team works.  

Stay in Line With Laws and Regulations 

Data collection and analysis are becoming more conventional practices in the business world. However, that doesn’t mean companies can just collect whatever data they want whenever they want. 

There are laws and regulations that dictate how companies can collect data and what kind of information they can gather about their customers. Neglecting these laws and regulations can cost you financially and stain your business reputation. 

Regularly revising your company’s data protocols can help you stay in line with laws and regulations. This ensures you’re collecting and using data ethically in your company, which is especially important if you’re in a high-risk or highly regulated industry. 

Conclusion 

Regularly revising your company’s data protocols is crucial for many reasons. Data will become an even more powerful tool for businesses as time goes on. So, make sure you’re adjusting your protocols consistently to ensure data’s influence on your business is meaningful. 



Rendering External API Data in WordPress Blocks on the Back End

November 1st, 2022

This is a continuation of my last article about “Rendering External API Data in WordPress Blocks on the Front End”. In that last one, we learned how to take an external API and integrate it with a block that renders the fetched data on the front end of a WordPress site.

The thing is, we accomplished this in a way that prevents us from seeing the data in the WordPress Block Editor. In other words, we can insert the block on a page but we get no preview of it. We only get to see the block when it’s published.

Let’s revisit the example block plugin we made in the last article. Only this time, we’re going to make use of the JavaScript and React ecosystem of WordPress to fetch and render that data in the back-end Block Editor as well.

Where we left off

As we kick this off, here’s a demo of where we landed in the last article that you can reference. You may have noticed that I used a render_callback method in the last article so that I could make use of the attributes in the PHP file and render the content.

Well, that may be useful in situations where you might have to use some native WordPress or PHP function to create dynamic blocks. But if you want to make use of just the JavaScript and React (JSX, specifically) ecosystem of WordPress to render the static HTML along with the attributes stored in the database, you only need to focus on the Edit and Save functions of the block plugin.

  • The Edit function renders the content based on what you want to see in the Block Editor. You can have interactive React components here.
  • The Save function renders the content based on what you want to see on the front end. You cannot have regular React components or hooks here. It is used to return the static HTML that is saved into your database along with the attributes.

The Save function is where we’re hanging out today. We can create interactive components on the front end, but for that we need to manually include and access them outside the Save function in a file, like we did in the last article.

So, I am going to cover the same ground we did in the last article, but this time you can see the preview in the Block Editor before you publish it to the front end.
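
As a quick refresher before we dig in, here is roughly how those two functions plug into the block’s registration. This is a minimal sketch that assumes the standard @wordpress/create-block project layout, with the Edit and save functions living in their own files:

// src/index.js
import { registerBlockType } from "@wordpress/blocks";

import Edit from "./edit";
import save from "./save";
import metadata from "./block.json";

// Edit powers the preview in the Block Editor; save returns the
// static markup that gets stored in the database.
registerBlockType(metadata.name, {
  edit: Edit,
  save,
});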

The block props

I intentionally left out any explanations about the edit function’s props in the last article because that would have taken the focus off of the main point, the rendering.

If you are coming from a React background, you will likely understand what it is that I am talking about, but if you are new to this, I would recommend checking out components and props in the React documentation.

If we log the props object to the console, we’ll see a list of WordPress functions and variables related to our block.

We only need the attributes object and the setAttributes function, which I am going to destructure from the props object in my code. In the last article, I modified RapidAPI’s code so that I could store the API data through setAttributes(). Props are read-only, so we are unable to modify them directly.

Block props are similar to state variables and setState in React, but React works on the client side and setAttributes() is used to store the attributes permanently in the WordPress database after saving the post. So, what we need to do is save them to attributes.data and then call that as the initial value for the useState() variable.
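
One detail worth calling out: for setAttributes({ data: ... }) to persist anything, the block needs a matching attribute declared in its metadata. Here is a minimal sketch of the relevant part of block.json, assuming the block is configured the @wordpress/create-block way:

{
  "attributes": {
    "data": {
      "type": "object"
    }
  }
}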

The edit function

I am going to copy-paste the HTML code that we used in football-rankings.php in the last article and edit it a little to fit the JavaScript syntax. Remember how we created two additional files in the last article for the front-end styling and scripts? With the way we’re approaching things today, there’s no need to create those files. Instead, we can move all of it to the Edit function.

Full code
import { useState } from "@wordpress/element";
import { useBlockProps } from "@wordpress/block-editor";
export default function Edit(props) {
  const { attributes, setAttributes } = props;
  const [apiData, setApiData] = useState(null);
    function fetchData() {
      const options = {
        method: "GET",
        headers: {
          "X-RapidAPI-Key": "Your Rapid API key",
          "X-RapidAPI-Host": "api-football-v1.p.rapidapi.com",
        },
      };
      fetch(
        "https://api-football-v1.p.rapidapi.com/v3/standings?season=2021&league=39",
          options
      )
      .then((response) => response.json())
      .then((response) => {
        let newData = { ...response }; // Clone the response data
        setAttributes({ data: newData }); // Store the data in WordPress attributes
        setApiData(newData); // Modify the state with the new data
      })
      .catch((err) => console.error(err));
    }
    return (
      <div {...useBlockProps()}>
        <button className="fetch-data" onClick={() => fetchData()}>Fetch data</button>
        {apiData && (
          <>
          <div id="league-standings">
            <div
              className="header"
              style={{
                backgroundImage: `url(${apiData.response[0].league.logo})`,
              }}
            >
              <div className="position">Rank</div>
              <div className="team-logo">Logo</div>
              <div className="team-name">Team name</div>
              <div className="stats">
                <div className="games-played">GP</div>
                <div className="games-won">GW</div>
                <div className="games-drawn">GD</div>
                <div className="games-lost">GL</div>
                <div className="goals-for">GF</div>
                <div className="goals-against">GA</div>
                <div className="points">Pts</div>
              </div>
              <div className="form-history">Form history</div>
            </div>
            <div className="league-table">
              {/* Usage of [0] might be weird but that is how the API structure is. */}
              {apiData.response[0].league.standings[0].map((el) => {
                
                // Destructure the required data from `el.all`
                const { played, win, draw, lose, goals } = el.all;
                  return (
                    <div className="team" key={el.rank}>
                      <div className="position">{el.rank}</div>
                      <div className="team-logo">
                        <img src={el.team.logo} alt={el.team.name} />
                      </div>
                      <div className="team-name">{el.team.name}</div>
                      <div className="stats">
                        <div className="games-played">{played}</div>
                        <div className="games-won">{win}</div>
                        <div className="games-drawn">{draw}</div>
                        <div className="games-lost">{lose}</div>
                        <div className="goals-for">{goals.for}</div>
                        <div className="goals-against">{goals.against}</div>
                        <div className="points">{el.points}</div>
                      </div>
                      <div className="form-history">
                        {el.form.split("").map((result, i) => {
                          return (
                            <div className={`result-${result}`} key={i}>{result}</div>
                          );
                        })}
                      </div>
                    </div>
                  );
                }
              )}
            </div>
          </div>
        </>
      )}
    </div>
  );
}

I have included the React hook useState() from @wordpress/element rather than using it from the React library. That is because if I were to load it the regular way, it would download React for every block that I am using. But if I am using @wordpress/element it loads from a single source, i.e., the WordPress layer on top of React.

This time, I have also not wrapped the code inside useEffect() but inside a function that is called only when clicking a button so that we have a live preview of the fetched data. I have used a state variable called apiData to render the league table conditionally. So, once the button is clicked and the data is fetched, I am setting apiData to the new data inside fetchData(), and the component rerenders with the HTML of the football rankings table.

You will notice that once the post is saved and the page is refreshed, the league table is gone. That is because we are using an empty state (null) for apiData’s initial value. When the post saves, the attributes are saved to the attributes.data object, and we use that as the initial value for the useState() variable like this:

const [apiData, setApiData] = useState(attributes.data);

The save function

We are going to do almost the exact same thing with the save function, but modify it a little bit. For example, there’s no need for the “Fetch data” button on the front end, and the apiData state variable is also unnecessary because we already handle it in the edit function. But we do need an apiData variable that checks attributes.data to conditionally render the JSX, or else it will throw undefined errors and the Block Editor UI will go blank.

Full code
import { useBlockProps } from "@wordpress/block-editor";

export default function save(props) {
  const { attributes } = props;
  const apiData = attributes.data;
  return (
    <>
      {/* Only render if apiData is available */}
      {apiData && (
        <div {...useBlockProps.save()}>
        <div id="league-standings">
          <div
            className="header"
            style={{
              backgroundImage: `url(${apiData.response[0].league.logo})`,
            }}
          >
            <div className="position">Rank</div>
            <div className="team-logo">Logo</div>
            <div className="team-name">Team name</div>
            <div className="stats">
              <div className="games-played">GP</div>
              <div className="games-won">GW</div>
              <div className="games-drawn">GD</div>
              <div className="games-lost">GL</div>
              <div className="goals-for">GF</div>
              <div className="goals-against">GA</div>
              <div className="points">Pts</div>
            </div>
            <div className="form-history">Form history</div>
          </div>
          <div className="league-table">
            {/* Usage of [0] might be weird but that is how the API structure is. */}
            {apiData.response[0].league.standings[0].map((el) => {
              const { played, win, draw, lose, goals } = el.all;
                return (
                  <div className="team" key={el.rank}>
                    <div className="position">{el.rank}</div>
                      <div className="team-logo">
                        <img src={el.team.logo} alt={el.team.name} />
                      </div>
                      <div className="team-name">{el.team.name}</div>
                      <div className="stats">
                        <div className="games-played">{played}</div>
                        <div className="games-won">{win}</div>
                        <div className="games-drawn">{draw}</div>
                        <div className="games-lost">{lose}</div>
                        <div className="goals-for">{goals.for}</div>
                        <div className="goals-against">{goals.against}</div>
                        <div className="points">{el.points}</div>
                      </div>
                      <div className="form-history">
                        {el.form.split("").map((result, i) => {
                          return (
                            <div className={`result-${result}`} key={i}>{result}</div>
                          );
                        })}
                      </div>
                  </div>
                );
              })}
            </div>
          </div>
        </div>
      )}
    </>
  );
}

If you are modifying the save function after a block is already present in the Block Editor, it would show an error like this:

The football rankings block in the WordPress block Editor with an error message that the block contains an unexpected error.

That is because the markup in the saved content is different from the markup in our new save function. Since we are in development mode, it is easier to remove the block from the current page and re-insert it as a new block — that way, the updated code is used instead and things are back in sync.

This situation of removing the block and adding it again could have been avoided had we used the render_callback method, since the output would be dynamic and controlled by PHP instead of the save function. So each method has its own advantages and disadvantages.
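
For what it’s worth, the Block Editor also has a first-class way to handle changed markup in production: block deprecations. The idea is to keep the previous save function around so content saved with the old markup still validates and can be migrated. Here is a rough sketch of the shape of it, where the ./old-save import is hypothetical:

import { registerBlockType } from "@wordpress/blocks";
import metadata from "./block.json";
import Edit from "./edit";
import save from "./save";
// Hypothetical file holding the save function we just replaced.
import oldSave from "./old-save";

registerBlockType(metadata.name, {
  edit: Edit,
  save,
  // Each deprecation entry describes an older markup version so that
  // existing posts validate against it instead of erroring out.
  deprecated: [{ attributes: metadata.attributes, save: oldSave }],
});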

Tom Nowell provides a thorough explanation on what not to do in a save function in this Stack Overflow answer.

Styling the block in the editor and the front end

Regarding the styling, it is going to be almost the same thing we looked at in the last article, but with some minor changes which I have explained in the comments. I’m merely providing the full styles here since this is only a proof of concept rather than something you want to copy-paste (unless you really do need a block for showing football rankings styled just like this). And note that I’m still using SCSS that compiles to CSS on build.

Editor styles
/* Target all the blocks with the data-title="Football Rankings" */
.block-editor-block-list__layout 
.block-editor-block-list__block.wp-block[data-title="Football Rankings"] {
  /* By default, the blocks are constrained within 650px max-width plus other design specific code */
  max-width: unset;
  background: linear-gradient(to right, #8f94fb, #4e54c8);
  display: grid;
  place-items: center;
  padding: 60px 0;

  /* Button CSS - From: https://getcssscan.com/css-buttons-examples - Some properties really not needed :) */
  button.fetch-data {
    align-items: center;
    background-color: #ffffff;
    border: 1px solid rgb(0 0 0 / 0.1);
    border-radius: 0.25rem;
    box-shadow: rgb(0 0 0 / 0.02) 0 1px 3px 0;
    box-sizing: border-box;
    color: rgb(0 0 0 / 0.85);
    cursor: pointer;
    display: inline-flex;
    font-family: system-ui, -apple-system, system-ui, "Helvetica Neue", Helvetica, Arial, sans-serif;
    font-size: 16px;
    font-weight: 600;
    justify-content: center;
    line-height: 1.25;
    margin: 0;
    min-height: 3rem;
    padding: calc(0.875rem - 1px) calc(1.5rem - 1px);
    position: relative;
    text-decoration: none;
    transition: all 250ms;
    user-select: none;
    -webkit-user-select: none;
    touch-action: manipulation;
    vertical-align: baseline;
    width: auto;
    &:hover,
    &:focus {
      border-color: rgb(0, 0, 0, 0.15);
      box-shadow: rgb(0 0 0 / 0.1) 0 4px 12px;
      color: rgb(0, 0, 0, 0.65);
    }
    &:hover {
      transform: translateY(-1px);
    }
    &:active {
      background-color: #f0f0f1;
      border-color: rgb(0 0 0 / 0.15);
      box-shadow: rgb(0 0 0 / 0.06) 0 2px 4px;
      color: rgb(0 0 0 / 0.65);
      transform: translateY(0);
    }
  }
}
Front-end styles
/* Front-end block styles */
.wp-block-post-content .wp-block-football-rankings-league-table {
  background: linear-gradient(to right, #8f94fb, #4e54c8);
  max-width: unset;
  display: grid;
  place-items: center;
}

#league-standings {
  width: 900px;
  margin: 60px 0;
  max-width: unset;
  font-size: 16px;
  .header {
    display: grid;
    gap: 1em;
    padding: 10px;
    grid-template-columns: 1fr 1fr 3fr 4fr 3fr;
    align-items: center;
    color: white;
    font-size: 16px;
    font-weight: 600;
    background-color: transparent;
    background-repeat: no-repeat;
    background-size: contain;
    background-position: right;

    .stats {
      display: flex;
      gap: 15px;
      & > div {
        width: 30px;
      }
    }
  }
}
.league-table {
  background: white;
  box-shadow:
    rgba(50, 50, 93, 0.25) 0px 2px 5px -1px,
    rgba(0, 0, 0, 0.3) 0px 1px 3px -1px;
  padding: 1em;
  .position {
    width: 20px;
  }
  .team {
    display: grid;
    gap: 1em;
    padding: 10px 0;
    grid-template-columns: 1fr 1fr 3fr 4fr 3fr;
    align-items: center;
  }
  .team:not(:last-child) {
    border-bottom: 1px solid lightgray;
  }
  .team-logo img {
    width: 30px;
    top: 3px;
    position: relative;
  }
  .stats {
    display: flex;
    gap: 15px;
    & > div {
      width: 30px;
      text-align: center;
    }
  }
  .form-history {
    display: flex;
    gap: 5px;
    & > div {
      width: 25px;
      height: 25px;
      text-align: center;
      border-radius: 3px;
      font-size: 15px;
    }
    .result-W {
      background: #347d39;
      color: white;
    }
    .result-D {
      background: gray;
      color: white;
    }
    .result-L {
      background: lightcoral;
      color: white;
    }
  }
}

We add this to src/style.scss which takes care of the styling in both the editor and the frontend. I will not be able to share the demo URL since it would require editor access but I have a video recorded for you to see the demo:


Pretty neat, right? Now we have a fully functioning block that not only renders on the front end, but also fetches API data and renders right there in the Block Editor — with a refresh button to boot!

But if we want to take full advantage of the WordPress Block Editor, we ought to consider mapping some of the block’s UI elements to block controls for things like setting color, typography, and spacing. That’s a nice next step in the block development learning journey.


Rendering External API Data in WordPress Blocks on the Back End was originally published on CSS-Tricks.


Putting The Graph In GraphQL With Neo4j GraphQL Library

November 1st, 2022

This article is sponsored by Neo4j

GraphQL enables an API developer to model application data as a graph, and API clients that request that data to easily traverse this data graph to retrieve it. These are powerful game-changing capabilities. But if your backend isn’t graph-ready, these capabilities could become liabilities by putting additional pressure on your database, consuming greater time and resources.

In this article, I’ll shed some light on ways you can mitigate these issues when you use a graph database as the backend for your next GraphQL API by taking advantage of the capabilities offered by the open-source Neo4j GraphQL Library.

What Graphs Are, And Why They Need A Database

Fundamentally, a graph is a data structure composed of nodes (the entities in the data model) along with the relationships between nodes. Graphs are all about the connections in your data. For this reason, relationships are first-class citizens in the graph data model.

Graphs are so important that an entire category of databases was created to work with graphs: graph databases. Unlike relational or document databases that use tables or documents, respectively, as their data models, the data model of a graph database is (you guessed it!) a graph.

GraphQL is not and was never intended to be a database query language. It is indeed a query language, yet it lacks much of the semantics we would expect from a true database query language like SQL or Cypher. That’s on purpose. You don’t want to be exposing your entire database to all the client applications out there in the world.

Instead, GraphQL is an API query language, modeling application data as a graph and purpose-built for exposing and querying that data graph, just as SQL and Cypher were purpose-built for working with relational and graph databases, respectively. Since one of the primary functions of an API application is to interact with a database, it makes sense that GraphQL database integrations should help build GraphQL APIs that are backed by a database. That’s exactly what the Neo4j GraphQL Library does — makes it easier to build GraphQL APIs backed by Neo4j.

One of GraphQL’s most powerful capabilities enables the API designer to express the entire data domain as a graph using nodes and relationships. This way, API clients can traverse the data graph to find the relevant data. This makes better sense because most API interactions are done in the context of relationships. For example, if we want to fetch all orders placed by a specific customer or all the products in a given order, we’re traversing the pattern of relationships to find those connections in our data.

Soon after GraphQL was open-sourced by Facebook in 2015, a crop of GraphQL database integrations sprung up, evidently in an effort to address the n+1 conundrum and similar problems. Neo4j GraphQL Library was one of these integrations.

Common GraphQL Implementation Problems

Building a GraphQL API service requires you to perform two steps:

  1. Define the schema and type definitions.
  2. Create resolver functions for each type and field in the schema that will be responsible for fetching or updating data in our data layer.

Combining these schema and resolver functions gives you an executable GraphQL schema object. You may then attach the schema object to a networking layer, such as a Node.js web server or lambda function, to expose the GraphQL API to clients. Often developers will use tools like Apollo Server or GraphQL Yoga to help with this process, but it’s still up to them to handle the first two steps.

If you’ve ever written resolver functions, you’ll recall they can be a bit tedious, as they’re typically filled with boilerplate data fetching code. But even worse than lost developer productivity is the dreaded n+1 query problem. Because of the nested way that GraphQL resolver functions are called, a single GraphQL request can result in multiple round-trip requests to the database. Addressing this typically involves a batching and caching strategy, adding additional complexity to your GraphQL application.
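
To make that strategy concrete, here is a sketch of the usual batching approach with the dataloader package. The fetchReviewsByBusinessIds helper is a stand-in for your own data access code:

const DataLoader = require("dataloader");

// Collects every business ID requested during one tick of the event
// loop and resolves all of them with a single database round trip.
const reviewsLoader = new DataLoader(async (businessIds) => {
  // Stand-in for a single "WHERE businessId IN (...)"-style query.
  const reviews = await fetchReviewsByBusinessIds(businessIds);
  // DataLoader expects one result per key, in the same order.
  return businessIds.map((id) =>
    reviews.filter((review) => review.businessId === id)
  );
});

const resolvers = {
  Business: {
    // Without batching, this resolver would hit the database once per
    // business in the result set: the n+1 query problem.
    reviews: (business) => reviewsLoader.load(business.businessId),
  },
};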

Doubling Down On GraphQL-First Development

Originally, the term GraphQL-First Development described a collaborative process. Frontend and backend teams would agree on a GraphQL schema, then go to work independently building their respective pieces of the codebase. Database integrations extend the idea of GraphQL-First development by applying this concept to the database as well. GraphQL-type definitions can now drive the database.

You can find the full code examples presented here on GitHub.

Let’s say you’re building a business reviews application where you want to keep track of businesses, users, and user reviews. GraphQL-type definitions to describe this API might look something like this:

type Business {
  businessId: ID!
  name: String!
  city: String!
  state: String!
  address: String!
  location: Point!
  reviews: [Review!]! @relationship(type: "REVIEWS", direction: IN)
  categories: [Category!]!
    @relationship(type: "IN_CATEGORY", direction: OUT)
}

type User {
  userID: ID!
  name: String!
  reviews: [Review!]! @relationship(type: "WROTE", direction: OUT)
}

type Review {
  reviewId: ID!
  stars: Float!
  date: Date!
  text: String
  user: User! @relationship(type: "WROTE", direction: IN)
  business: Business! @relationship(type: "REVIEWS", direction: OUT)
}

type Category {
  name: String!
  businesses: [Business!]!
    @relationship(type: "IN_CATEGORY", direction: IN)
}

Note the use of the GraphQL schema directive @relationship in our type definitions. GraphQL schema directives are the language’s built-in extension mechanism and key components for extending and configuring GraphQL APIs — especially with database integrations like Neo4j GraphQL Library. In this case, the @relationship directive encodes the relationship type and direction (in or out) for pairs of nodes in the database.

Type definitions are then used to define the property graph data model in Neo4j. Instead of maintaining two schemas (one for our database and another for our API), you can now use type definitions to define both the API and the database’s data model. Furthermore, since Neo4j is schema-optional, using GraphQL to drive the database adds a layer of type safety to your application.

From GraphQL Type Definitions To Complete API Schemas

In GraphQL, you use fields on special types (Query, Mutation, and Subscription) to define the entry points for the API. In addition, you may want to define field arguments that can be passed at query time, for example, for sorting or filtering. Neo4j GraphQL Library handles this by creating entry points in the GraphQL API for create, read, update, and delete operations for each type, as well as field arguments for sorting and filtering.

Let’s look at some examples. For our business reviews application, suppose you want to show a list of businesses sorted alphabetically by name. Neo4j GraphQL Library automatically adds field arguments to accomplish just this.

{
  businesses(options: { limit: 10, sort: { name: ASC } }) {
    name
  }
}

Perhaps you want to allow the users to filter this list of businesses by searching for companies by name or keyword. The where argument handles this kind of filtering:

{
  businesses(where: { name_CONTAINS: "Brew" }) {
    name
    address
  }
}

You can then combine these filter arguments to express very complex operations. Say you want to find businesses that are in either the Coffee or Breakfast category and filter for reviews containing the keyword “breakfast sandwich:”

{
  businesses(
    where: {
      OR: [
        { categories_SOME: { name: "Coffee" } }
        { categories_SOME: { name: "Breakfast" } }
      ]
    }
  ) {
    name
    address
    reviews(where: { text_CONTAINS: "breakfast sandwich" }) {
      stars
      text
    }
  }
}

Using location data, for example, you can even search for businesses within 1 km of your current location:

{
  businesses(
    where: {
      location_LT: {
        distance: 1000
        point: { latitude: 37.563675, longitude: -122.322243 }
      }
    }
  ) {
    name
    address
    city
    state
  }
}

As you can see, this functionality is extremely powerful, and the generated API can be configured through the use of GraphQL schema directives.

We Don’t Need No Stinking Resolvers

As we noted earlier, GraphQL server implementations require resolver functions where the logic for interacting with the data layer lives. With database integrations such as Neo4j GraphQL Library, resolvers are generated for you at query time for translating arbitrary GraphQL requests into singular, encapsulated database queries. This is a huge developer productivity win (we don’t have to write boilerplate data fetching code — yay!). It also addresses the n+1 query problem by making a single round-trip request to the database.
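
To get a feel for what that single round trip looks like, here is roughly the shape of the Cypher that a nested businesses-with-reviews request could be translated into. This is an illustration of the idea, not the library’s literal output:

// One query covers both the parent nodes and the nested selection.
MATCH (b:Business)
WHERE b.name CONTAINS "Brew"
RETURN b {
  .name,
  .address,
  reviews: [ (b)<-[:REVIEWS]-(r:Review) | r { .stars, .text } ]
} AS business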

Moreover, graph databases like Neo4j are optimized for exactly the kind of nested graph data traversals commonly expressed in GraphQL. Let’s see this in action. Once you’ve defined your GraphQL type definitions, here’s all the code necessary to spin up your fully functional GraphQL API:

const { ApolloServer } = require("apollo-server");
const neo4j = require("neo4j-driver");
const { Neo4jGraphQL } = require("@neo4j/graphql");

// Connect to your Neo4j instance.
const driver = neo4j.driver(
  "neo4j+s://my-neo4j-db.com",
  neo4j.auth.basic("neo4j", "letmein")
);

// Pass our GraphQL type definitions and Neo4j driver instance.
const neoSchema = new Neo4jGraphQL({ typeDefs, driver });

// Generate an executable GraphQL schema object and start
// Apollo Server.
neoSchema.getSchema().then((schema) => {
  const server = new ApolloServer({
    schema,
  });
  server.listen().then(({ url }) => {
    console.log(`GraphQL server ready at ${url}`);
  });
});

That’s it! No resolvers.

Extend GraphQL With The Power Of Cypher

So far, we’ve only been talking about basic create, read, update, and delete operations. How can you handle custom logic with Neo4j GraphQL Library?

Let’s say you want to show recommended businesses to your users based on their search or review history. One way would be to implement your own resolver function with the logic for generating those personalized recommendations built in. Yet there’s a better way to maintain the one-to-one, GraphQL-to-database operation performance guarantee: You can leverage the power of the Cypher query language using the @cypher GraphQL schema directive to define your custom logic within your GraphQL type definitions.

Cypher is an extremely powerful language that enables you to express complex graph patterns using ASCII-art-like declarative syntax. I won’t go into detail about Cypher in this article, but let’s see how you could accomplish our personalized recommendation task by adding a new field to your GraphQL-type definitions:

extend type Business {
  recommended(first: Int = 1): [Business!]!
    @cypher(
      statement: """
        MATCH (this)<-[:REVIEWS]-(:Review)<-[:WROTE]-(u:User)
        MATCH (u)-[:WROTE]->(:Review)-[:REVIEWS]->(rec:Business)
        WITH rec, COUNT(*) AS score
        RETURN rec ORDER BY score DESC LIMIT $first
      """
    )
}

Here, the Business type has a recommended field, which uses the Cypher query defined above to show recommended businesses whenever requested in the GraphQL query. You didn’t need to write a custom resolver to accomplish this. Neo4j GraphQL Library is still able to generate a single database request even when using a custom recommended field.
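
With that in place, clients can request recommendations right alongside regular fields in a single query. Something like this, where the name filter value is just an example:

{
  businesses(where: { name: "Missoula Coffee Works" }) {
    name
    recommended(first: 3) {
      name
    }
  }
}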

GraphQL Database Integrations Under The Hood

GraphQL database integrations like Neo4j GraphQL Library are powered by the GraphQLResolveInfo object. This object is passed to all resolvers, including the ones generated for us by Neo4j GraphQL Library. It contains information about both the GraphQL schema and GraphQL operation being resolved. By closely inspecting this object, GraphQL database integrations can generate database queries at the time queries are placed.
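
To see where that object shows up, recall the standard resolver signature. Here is a bare-bones sketch:

const resolvers = {
  Query: {
    // parent, args, context, and info are the four standard resolver
    // arguments; info is the GraphQLResolveInfo object.
    businesses: (parent, args, context, info) => {
      // The parsed selection set for this field lives on the AST nodes
      // in info.fieldNodes. A database integration walks it to build
      // one query covering every requested (and nested) field.
      const requested = info.fieldNodes[0].selectionSet.selections.map(
        (selection) => selection.name.value
      );
      console.log(requested); // e.g. [ "name", "address", "reviews" ]
      return [];
    },
  },
};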

If you’re interested, I recently gave a talk at GraphQL Summit that goes into much more detail.

Powering Low-Code, Open Source-Powered GraphQL Tools

An open-source library that works with any JavaScript GraphQL implementation can conceivably power an entire ecosystem of low-code GraphQL tools. Collectively, these tools leverage the functionality of Neo4j GraphQL Library to help make it easier for you to build, test, and deploy GraphQL APIs backed by a real graph database.

For example, GraphQL Mesh uses Neo4j GraphQL Library to enable Neo4j as a data source for data federation. Don’t want to write the code necessary to build a GraphQL API for testing and development? The Neo4j GraphQL Toolbox is an open-source, low-code web UI that wraps Neo4j GraphQL Library. This way, it can generate a GraphQL API from an existing Neo4j database with a single click.

Where From Here

If building a GraphQL API backed by a native graph database sounds interesting or at all helpful for the problems you’re trying to solve as a developer, I would encourage you to give the Neo4j GraphQL Library a try. Also, the Neo4j GraphQL Library landing page is a good starting point for documentation, further examples, and comprehensive workshops.

I’ve also written a book Full Stack GraphQL Applications, published by Manning, that covers this topic in much more depth. My book covers handling authorization, working with the frontend application, and using cloud services like Auth0, Netlify, AWS Lambda, and Neo4j Aura to deploy a full-stack GraphQL application. In fact, I’ve built out the very business reviews application from this article as an example in the book! Thanks to Neo4j, this book is now available as a free download.

Last but not least, I will be presenting a live session entitled “Making Sense of Geospatial Data with Knowledge Graphs” during the NODES 2022 virtual conference on Wednesday, November 16, produced by Neo4j. Registration is free to all attendees.
