Adding End 2 End Tests to WordPress plugins using wp-env and wp-scripts

I recently published a video walking through how End to End tests are set up for WPGraphQL, but I thought it would be good to publish a more direct step-by-step tutorial to help WordPress plugin developers set up End 2 End tests for their own WordPress plugins…

Setting up End to End tests for WordPress plugins can be done in a number of ways (Codeception, Cypress, Ghost Inspector, etc), but lately, the easiest way I’ve found to do this is to use the @wordpress/env and @wordpress/scripts packages, distributed by the team working on the WordPress Block Editor (a.k.a. Gutenberg), along with GitHub Actions.

If you want to skip the article and jump straight to the code, you can find the repo here: https://github.com/wp-graphql/wp-graphql-e2e-tests-example

What are End to End tests?

Before we get too far, let’s cover what end to end tests even are.

When it comes to testing code, there are three common testing approaches.

  • Unit Tests: Testing individual functions
  • Integration Tests: Testing various units when integrated with each other.
  • End to End Tests (often called Acceptance Tests): Tests that exercise the application as an end user would interact with it. For WordPress, this typically means the test will open a browser and interact with the web page, click buttons, submit forms, etc.

For WPGraphQL, the majority of the existing tests are Integration Tests, as they allow us to test the execution of GraphQL queries and mutations, which requires many function calls in the WPGraphQL codebase and in WordPress core, but doesn’t require a browser to be loaded in the testing environment.

The End to End tests in WPGraphQL are for the GraphiQL IDE tools that WPGraphQL adds to the WordPress dashboard.

What’s needed for End to End Tests with WordPress?

In order to set up End to End tests for WordPress, we need a WordPress site for the test suite to visit, a way to write programs that interact with the web pages WordPress serves, and a way to make assertions that specific behaviors do or do not happen when those pages are interacted with. We also need a way for all of this to run automatically when our code changes.

Let’s break down how we’ll tackle this:

  • @wordpress/env: Sets up a WordPress environment (site) for the test suites to interact with
  • @wordpress/scripts: Runs the tests using Puppeteer and Jest. This lets our tests open the WordPress site in a Chrome browser and interact with the page.
    • Puppeteer: A Node library which provides a high-level API to control Chrome or Chromium over the DevTools Protocol. Puppeteer has APIs that we will use to write tests that interact with the pages.
    • Jest: JavaScript Testing Framework with a focus on simplicity
  • GitHub Actions: We’ll be using GitHub actions for our Continuous Integration. You should be able to apply what is covered in this post to other CI tools, such as CircleCI.

Setting up our dependencies

I’m going to assume that you already have a WordPress plugin that you want to add tests to. But, if this is your first time building a WordPress plugin, you can see this commit to the example plugin to see what’s needed to get a basic WordPress plugin set up, with zero functionality.

If you do not have a package.json already, you’ll need to create a new package.json file, with the following:

{
  "name": "wp-graphql-e2e-tests-example",
  "version": "0.0.1",
  "description": "An example plugin showing how to set up End to End tests using @wordpress/env and @wordpress/scripts",
  "devDependencies": {},
  "scripts": {},
}

npm “devDependencies”

Note: If you don’t already have node and npm installed on your machine, you will need to do that now.

We need the following “dev dependencies” for our test suite:

  • @wordpress/e2e-test-utils
  • @wordpress/env
  • @wordpress/jest-console
  • @wordpress/jest-puppeteer-axe
  • @wordpress/scripts
  • expect-puppeteer
  • puppeteer-testing-library

The difference between “dependencies” and “devDependencies” is that if you are bundling a JavaScript application for production, the “dependencies” will be included in the bundles for use at runtime, but “devDependencies” are only used during development for things like testing, linting, etc., and are not included in the built application for use at runtime. We don’t need Jest or Puppeteer in our runtime application, just while developing.

We can install these via the command line:

npm install @wordpress/e2e-test-utils @wordpress/env @wordpress/jest-console @wordpress/jest-puppeteer-axe @wordpress/scripts expect-puppeteer puppeteer-testing-library --save-dev

Or you can paste the devDependencies in the package.json and run npm install.

Whether you install via the command line or pasting into package.json, the resulting devDependencies block in your package.json should look like the following:

"devDependencies": {
  "@wordpress/e2e-test-utils": "^6.0.0",
  "@wordpress/env": "^4.2.0",
  "@wordpress/jest-console": "^5.0.0",
  "@wordpress/jest-puppeteer-axe": "^4.0.0",
  "@wordpress/scripts": "^20.0.2",
  "expect-puppeteer": "^6.1.0",
  "puppeteer-testing-library": "^0.6.0"
}

Adding a .gitignore

It’s also a good idea to add a .gitignore file to ensure we don’t version the node_modules directory. These dependencies are only needed when developing, so they can be installed on the machine that needs them, when needed. They don’t need to be versioned in the project. I’ve also included an ignore for .idea which are files generated by PHPStorm. If your IDE or operating system includes hidden files that are not needed for the project, you can ignore them here as well.

# Ignore the node_modules, we don't want to version this directory
node_modules

# This ignores files generated by JetBrains IDEs (I'm using PHPStorm)
.idea

At this point, we have our package.json and our .gitignore setup. You can see this update in this commit.

Setting up the WordPress Environment

Now that we’ve got the initial setup out of the way, let’s move on to getting the WordPress environment set up.

The @wordpress/env package is awesome! It’s one of many packages that have been produced as part of the effort to build the WordPress block editor (a.k.a. Gutenberg). It’s a great package, even if you’re not using the block editor for your projects. We’re going to use it here to quickly spin up a WordPress environment with our custom plugin active.

Adding the wp-env script

The command we want to run to start our WordPress environment is npm run wp-env start, but we don’t have a script defined for this in our `package.json`.

Let’s add the following script:

...
"scripts": {
  "wp-env": "wp-env"
}

You can see the commit making this change here.

Start the WordPress environment

With this in place, we can now run the command: npm run wp-env start

You should see output pretty similar to the following:

> wp-graphql-e2e-tests-example@0.0.1 wp-env
> wp-env "start"

WordPress development site started at http://localhost:8888/
WordPress test site started at http://localhost:8889/
MySQL is listening on port 61812
MySQL for automated testing is listening on port 61840

Two WordPress environments are now running. You can click the links to open them in a browser.

And just like that, you have a WordPress site up and running!

Stopping the WordPress environment

If you want to stop the environment, you can run npm run wp-env stop.

This will generate output like the following:

> wp-graphql-e2e-tests-example@0.0.1 wp-env
> wp-env "stop"

✔ Stopped WordPress. (in 1s 987ms)

And visiting the URL in a browser will no longer work, as there is no longer a WordPress site running on that port.

Configuring wp-env

At this point, we’re able to start a WordPress environment pretty quickly. But if we want to test the functionality of our plugin, we’ll want the WordPress environment to start with our plugin active.

We can do this by adding a .wp-env.json file to the root of our plugin, and configuring the environment to have our plugin active when the environment starts.

Set our plugin to be active in WordPress

At the root of the plugin, add a file named .wp-env.json with the following contents:

{
  "plugins": [ "." ]
}

We can use this config file to tell WordPress which plugins and themes to have active by default, and we can configure WordPress in other ways as well.

In this case, we’ve told WordPress we want the current directory to be activated as a plugin.

You can see this change in this commit.
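Beyond activating the plugin, the same file can also pin a WordPress core version, pull in additional plugins or themes, and define wp-config constants. The values below are purely illustrative, not something this example plugin needs:

{
  "core": "WordPress/WordPress#6.1",
  "plugins": [ ".", "https://downloads.wordpress.org/plugin/wp-graphql.zip" ],
  "themes": [ "https://downloads.wordpress.org/theme/twentytwentyone.zip" ],
  "config": {
    "WP_DEBUG": true
  }
}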

Login and verify

Now, if you start the environment again (by running npm run wp-env start), you can login to the WordPress dashboard to see the plugin is active.

You can login at: http://localhost:8888/wp-admin using the credentials:

  • username: admin
  • password: password

Then visit the plugins page at: http://localhost:8888/wp-admin/plugins.php

You should see our plugin active:

Screenshot of the Plugin page in the WordPress dashboard, showing our plugin active.

Running tests

Now that we’re able to get a WordPress site running with our plugin active, we’re ready to start testing!

At this point, there are 2 more things we need to do before we can run some tests.

  • write some tests
  • define scripts to run the tests

Writing our first test

Since our plugin doesn’t have any functionality to test, we can write a simple test that just makes an assertion that we will know is always true, just so we can make sure our test suites are running as expected.

Let’s add a new file under /tests/e2e/example.spec.js.

The naming convention *.spec.js is the default naming convention for wp-scripts to be able to run the tests. We can override this pattern if needed, but we won’t be looking at overriding that in this post.

Within that file, add the following:

describe( 'example test', () => {

    it( 'works', () => {
        expect( true ).toBeTruthy()
    })

})

This code is using two global methods from Jest:

  • describe: Creates a block of related tests
  • it: A function used to run a test (this function is an alias of the “test” function)

Adding scripts to run the tests

In order to run the test we just wrote, we’ll need to add some test scripts to the package.json file.

Right above where we added the wp-env script, paste the following scripts:

"test": "echo \"Error: no test specified\" && exit 1",
"test-e2e": "wp-scripts test-e2e",
"test-e2e:debug": "wp-scripts --inspect-brk test-e2e --puppeteer-devtools",
"test-e2e:watch": "npm run test-e2e -- --watch",

These scripts work as follows:

  • npm run test: This will return an error saying that no test has been specified
  • npm run test-e2e: This will run any tests that live under the tests/e2e directory, within files named *.spec.js
  • npm run test-e2e:debug: This will run the e2e tests, but with Puppeteer devtools, which means a Chrome browser will open and we can watch the tests run. This is super handy, and a lot of fun to watch.
  • npm run test-e2e:watch: This will watch as files change and will re-run the tests automatically when changes are made.

Run the tests

Now that we have a basic test in place, and our scripts configured, let’s run the test command so we can see how it works.

In your command line, run the command npm run test-e2e.

This will run our test suite, and we should see output like the following:

> wp-graphql-e2e-tests-example@0.0.1 test-e2e
> wp-scripts test-e2e

Chromium is already in /Users/jason.bahl/Sites/libs/wp-graphql-e2e-tests-example/node_modules/puppeteer-core/.local-chromium/mac-961656; skipping download.
 PASS  tests/e2e/example.spec.js
  example test
    ✓ works (2 ms)

Test Suites: 1 passed, 1 total
Tests:       1 passed, 1 total
Snapshots:   0 total
Time:        0.434 s, estimated 1 s
Ran all test suites.

Amazing! Our first test that checks if true is indeed truthy, worked! Great!

Just to make sure things are working as expected, we can also add a test that we expect to fail.

Under our first test, we can add:

  it ( 'fails', () => {
    expect( false ).toBeTruthy()
  })

This test should fail.

If we run the script again, we should see the following output:

> wp-graphql-e2e-tests-example@0.0.1 test-e2e
> wp-scripts test-e2e

Chromium is already in /Users/jason.bahl/Sites/libs/wp-graphql-e2e-tests-example/node_modules/puppeteer-core/.local-chromium/mac-961656; skipping download.
 FAIL  tests/e2e/example.spec.js
  example test
    ✓ works (1 ms)
    ✕ fails (73 ms)

  ● example test › fails

    expect(received).toBeTruthy()

    Received: false

       6 |
       7 |     it ( 'fails', () => {
    >  8 |         expect( false ).toBeTruthy()
         |                         ^
       9 |     })
      10 |
      11 | })

      at Object. (tests/e2e/example.spec.js:8:25)

Test Suites: 1 failed, 1 total
Tests:       1 failed, 1 passed, 2 total
Snapshots:   0 total
Time:        0.519 s, estimated 1 s
Ran all test suites.

We can delete that 2nd test now that we’re sure the tests are running properly.

You can see the state of the plugin at this commit.

Testing that our plugin is active

Right now, testing that true is truthy isn’t a very valuable test. It shows that the tests are running, but it’s not ensuring that our plugin is working properly.

Since our plugin doesn’t have any functionality yet, we don’t have much to test.

One thing we can do to get familiar with some of the test utilities is to test that the plugin is active in the Admin.

To do this we will need to:

  • Login to WordPress as an admin user
  • Visit the plugins page
  • Check to see if our plugin is active.
    • As a human, we can see a plugin is active because it’s highlighted differently than inactive plugins. A machine (our tests) can see if a plugin is active by inspecting the HTML and seeing if the plugin row has certain attributes.

Writing the test

In our example.spec.js file, we can add a new test. Go ahead and paste the following below the first test.

it ( 'verifies the plugin is active', async () => {
  // Steps:
  // login as admin
  // visit the plugins page
  // assert that our plugin is active by checking the HTML
});

Right now, these steps are just comments to remind us what this test needs to do. Now, we need to tell the test to do these things.

Login as Admin

One of the dependencies we added in our package.json was @wordpress/e2e-test-utils. This package has several helpful functions that we can use while writing e2e tests.

One of the helpful functions is a loginUser function, that opens the login page of the WordPress site, enters a username and password, then clicks login.

The loginUser function accepts a username and password as arguments, but if we don’t pass any arguments, the default behavior is to login as the admin user.
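For example, if a test needed to log in as a different user, you could pass credentials explicitly. Here’s a small sketch (the ‘editor’ / ‘secret’ credentials below are placeholders for illustration, not users created by this setup):

import { loginUser } from '@wordpress/e2e-test-utils'

it( 'logs in as a specific user', async () => {
    // 'editor' / 'secret' are hypothetical credentials for a user that exists on the test site
    await loginUser( 'editor', 'secret' );
})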

In our /tests/e2e/example.spec.js file, let’s import the loginUser function at the top of the file:

import { loginUser } from '@wordpress/e2e-test-utils'

Then, let’s add this function to our test:

it ( 'verifies the plugin is active', async () => {

  // login as admin
  await loginUser();

  // visit the plugins page
  // assert that our plugin is active by checking the HTML

});

Visit the Plugins Page

Next, we want to visit the plugins page. And we can do this with another function from the @wordpress/e2e-test-utils package: visitAdminPage().

Let’s import this function:

import { loginUser, visitAdminPage } from '@wordpress/e2e-test-utils'

And add it to our test:

it ( 'verifies the plugin is active', async () => {

  // login as admin
  await loginUser();

  // visit the plugins page
  await visitAdminPage( 'plugins.php' );

  // assert that our plugin is active by checking the HTML

});

At this point, you should now be able to run the test suite in debug mode and watch the test script login to WordPress and visit the admin page.

Run the command npm run test-e2e:debug.

You should see the tests run: Chrome opens, logs in as an admin, navigates away from the dashboard to the plugins page, and then the tests should be marked as passing in the terminal.

Screen recording showing the test running in debug mode. The Chrome browser opens and logs into the admin then navigates to another page.

NOTE: If you’re in PHP Storm or another JetBrains IDE, the debugger will kick in for you automatically. If you’re in VSCode, you might need to add a .vscode/launch.json file, like this.

Asserting that the plugin is active

Now that we’ve successfully logged into the admin and navigated to the Plugins page, we can now write our assertion that the plugin is active.

If we wanted to inspect the HTML of the plugins page to see if the plugin is active, we could open up our browser dev tools and inspect the element. We would see that the row for our active plugin looks like so:

<tr class="active" data-slug="wpgraphql-end-2-end-tests-example" data-plugin="wp-graphql-e2e-tests-example/wp-graphql-e2e-tests-example.php">

We want to make an assertion that the plugins page contains a <tr> element, that has a class with the value of active, and a data-slug attribute with the value of wpgraphql-end-2-end-tests-example (or whatever your plugin name is).

We can use XPath expressions for this.

I’m not going to go deep into XPath here, but I will show you how to test this in your browser dev tools.

You can open up the plugins page in your WordPress install (that you started from the npm run wp-env command). Then in the console, paste the following line:

$x('//tr[contains(@class, "active") and contains(@data-slug, "wpgraphql-end-2-end-tests-example")]')

You should see that it found exactly one element, as shown in the screenshot below.

Screenshot of testing XPath in the Chrome browser dev tools.

We can take this code that works in the browser dev tools, and convert it to use the page.$x method from Puppeteer.

NOTE: the page object from Puppeteer is a global object in the test environment, so we don’t need to import it like we imported the other utils functions.

// Select the plugin based on slug and active class
const activePlugin = await page.$x('//tr[contains(@class, "active") and contains(@data-slug, "wpgraphql-end-2-end-tests-example")]');

Then, we can use the (also global) expect method from jest, to make an assertion that the plugin is active:

// assert that our plugin is active by checking the HTML
expect( activePlugin?.length ).toBe( 1 );

The full test should look like so:

it ( 'verifies the plugin is active', async () => {

  // login as admin
  await loginUser();

  // visit the plugins page
  await visitAdminPage( 'plugins.php' );

  // Select the plugin based on slug and active class
  const activePlugin = await page.$x('//tr[contains(@class, "active") and contains(@data-slug, "wpgraphql-end-2-end-tests-example")]');

  // assert that our plugin is active by checking the HTML
  expect( activePlugin?.length ).toBe( 1 );

});

Running the test should pass. We can verify that the test is actually working and not providing a false pass by changing the slug in our XPath expression. If we changed the slug to “non-existent-plugin” but still asserted that there would be 1 active plugin with that slug, we would have a failing test!
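If you’d like to keep a version of that sanity check around, one option is to invert it into a test that asserts the made-up slug matches zero rows. A sketch using the same utilities:

it ( 'does not find a non-existent plugin', async () => {

  // login as admin
  await loginUser();

  // visit the plugins page
  await visitAdminPage( 'plugins.php' );

  // no row should match a slug that does not exist
  const missingPlugin = await page.$x('//tr[contains(@class, "active") and contains(@data-slug, "non-existent-plugin")]');

  // assert that no active plugin with that slug was found
  expect( missingPlugin.length ).toBe( 0 );

});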

Continuous Integration

Right now, we can run the tests on our own machine. And contributors could run the tests if they cloned the code to their machine.

But, one thing that is nice to set up for tests like this, is to have the tests run when code changes. That will give us the confidence that new features and bugfixes don’t break old features and functionality.

Setting up a GitHub Workflow

We’re going to set up a GitHub Workflow (aka GitHub Action) that will run the tests when a Pull Request is opened against the repository, or when code is pushed directly to the master branch.

To create a GitHub workflow, we can create a file at .github/workflows/e2e-tests.yml.

Then, we can add the following:

name: End-to-End Tests

on:
  pull_request:
  push:
    branches:
      - master

jobs:
  admin:
    name: E2E Tests
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        node: ['14']

    steps:
      - uses: actions/checkout@5a4ac9002d0be2fb38bd78e4b4dbde5606d7042f # v2.3.4

      - name: Setup environment to use the desired version of NodeJS
        uses: actions/setup-node@38d90ce44d5275ad62cc48384b3d8a58c500bb5f # v2.2.2
        with:
          node-version: ${{ matrix.node }}
          cache: npm

      - name: Installing NPM dependencies
        run: |
          npm install

      - name: Starting the WordPress Environment
        run: |
          npm run wp-env start

      - name: Running the tests
        run: |
          npm run test-e2e

If you’ve never set up a GitHub workflow, this might look intimidating, but if you slow down and read it carefully, it’s pretty self-descriptive.

The file gives the Workflow a name, “End-to-End Tests”.

name: End-to-End Tests

Then, it configures which GitHub events should trigger the Workflow. We configure it to run “on” the “pull_request” and “push” events, with pushes limited to the “master” branch.

on:
  pull_request:
  push:
    branches:
      - master

Then, we define what jobs to run and set up the environment to use ubuntu-latest and Node 14.

jobs:
  admin:
    name: E2E Tests
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        node: ['14']

Then, we define the steps for the job.

The first step is to “checkout” the codebase.

- uses: actions/checkout@5a4ac9002d0be2fb38bd78e4b4dbde5606d7042f # v2.3.4

Then, we set up Node.js using the specified version.

      - name: Setup environment to use the desired version of NodeJS
        uses: actions/setup-node@38d90ce44d5275ad62cc48384b3d8a58c500bb5f # v2.2.2
        with:
          node-version: ${{ matrix.node }}
          cache: npm

Then, we install our NPM dependencies.

      - name: Installing NPM dependencies
        run: |
          npm install

Then, we start the WordPress environment.

      - name: Starting the WordPress Environment
        run: |
          npm run wp-env start

And last, we run the tests.

      - name: Running the tests
        run: |
          npm run test-e2e

And now, with this in place, our tests will run (and pass!) in GitHub!

You can see the passing test run here.

Conclusion

I hope this post helps you understand how to use the @wordpress/scripts and @wordpress/env packages, Jest, Puppeteer, and GitHub actions to test your WordPress plugins and themes.

If you’re interested in content like this, please subscribe to the WPGraphQL YouTube Channel and follow WPGraphQL on Twitter!

If you’ve never tried using GraphQL with WordPress, be sure to install and activate WPGraphQL as well!

Query any page by its path using WPGraphQL

One of the most common ways WordPress is used is by visitors entering a URL of a WordPress site and reading the content on the page.

WordPress has internal mechanisms that take the URL from the request, determine what type of entity the user is requesting (a page, a blog post, a taxonomy term archive, an author’s page, etc.) and then return a specific template for that type of content.

This is a convention that users experience daily on the web, and something developers use to deliver unique experiences for their website users.

When you go “headless” with WordPress, and use something other than WordPress’s native theme layer to display the content, it can be tricky to determine how to take a URL provided by a user and convert it into content to show your users.

In this post, we’ll take a look at a powerful feature of WPGraphQL, the nodeByUri query, which accepts a uri input (the path to the resource) and will return the node (the WordPress entity) in response.

You can use this to re-create the same experience WordPress theme layer provides, by returning unique templates based on the type of content being requested.

WPGraphQL’s “nodeByUri” query

One of the benefits of GraphQL is that it can provide entry points into the “graph” that (using Interfaces or Unions) can return different Types of data from one field.

WPGraphQL provides a field at the root of the graph named nodeByUri. This field accepts one argument as input, a $uri, and it returns a node of any Type that has a uri. This means any public entity in WordPress, such as published authors, archive pages, posts of a public post type, terms of a public taxonomy, etc.

When a URI is input, this field resolves to the “node” (post, page, etc) that is associated with the URI, much like entering the URI in a web browser would resolve to that piece of content.

If you’ve not already used the “nodeByUri” query, it might be difficult to understand just reading about it, so let’s take a look at this in action.

Here’s a video where I walk through it, and below are some highlights of what I show in the video.

Video showing how to use the nodeByUri query in WPGraphQL

Writing the query

Let’s start by querying the homepage.

First, we’ll write our query:

query GetNodeByUri($uri: String!) {
  nodeByUri(uri: $uri) {
    __typename
  }
}

In this query, we’re doing a few things.

First, we give our query a name “GetNodeByUri”. This name can be anything we want, but it can be helpful with tooling, so it’s best practice to give your queries good names.

Next, we define our variable input to accept: $uri: String!. This tells GraphQL that there will be one input that we don’t know about up front, but we agree that we will submit the input as a string.

Next, we declare what field we want to access in the graph: nodeByUri( uri: $uri ). We’re telling WPGraphQL that we want to give it a URI, and in response, we want a node back.

The nodeByUri field is defined in the Schema to return the GraphQL Type UniformResourceIdentifiable, which is a GraphQL Interface implemented by any Type in the Graph that can be accessed via a public uri.
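Because every possible result implements this Interface, fields declared on the Interface itself can be queried without knowing the concrete Type ahead of time. For example, a slight variation of the query above asks for the id and uri alongside the __typename:

query GetNodeByUri($uri: String!) {
  nodeByUri(uri: $uri) {
    # These fields come from the UniformResourceIdentifiable Interface,
    # so they resolve no matter which concrete Type is returned
    __typename
    id
    uri
  }
}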

Screenshot of the nodeByUri field shown in GraphiQL

If we inspect the documentation in GraphiQL for this type, we can see all of the available Types that can be returned.

Screenshot of the UniformResourceIdentifiable GraphQL Interface in GraphiQL.

The Types that can be returned consist of public Post Types, Public Taxonomies, ContentType (archives), MediaItem, and User (published authors are public).

So, for any uri (path) that we query, we know what we can ask for and what to expect in response.

Execute the query

Now that we have the query written, we can use GraphiQL to execute the query.

GraphiQL has a “variables” pane that we will use to input our variables. In this case, the “uri” (or path) to the resource is our variable.

First, we will enter “/” as our uri value so we can test querying the home page.

Screenshot of the “uri” variable entered in the GraphiQL Variables pane.

Now, we can execute our query by pressing the “Play” button in GraphiQL.

And we should see the following response:

{
  "data": {
    "nodeByUri": {
      "__typename": "ContentType"
    }
  }
}

Screenshot of the nodeByUri query for the “/” uri.

Expanding the query

We can see that when we query for the home page, we’re getting a “ContentType” node in response.

We can expand the query to ask for more fields of the “ContentType”.

If we look at the home page of https://demo.wpgraphql.com, we will see that it serves as the “blogroll” or the blog index. It’s a list of blog posts.

This is why WPGraphQL returns a “ContentType” node from the Graph.

We can write a fragment on this Type to ask for fields we want when the query returns a “ContentType” node.

If we look at the documentation in GraphiQL for the ContentType type, we can see all the fields that we can ask for.

Screenshot of the ContentType documentation in GraphiQL

If our goal is to re-create the homepage we’re seeing in WordPress, then we certainly don’t need all the fields! We can specify exactly what we need.

In this case, we want to ask for the following fields:

  • name: the name of the content type
  • isFrontPage: whether the contentType should be considered the front page
  • contentNodes (and sub-fields): a connection to the contentNodes on the page

This should give us enough information to re-create what we’re seeing on the homepage.

Let’s update our query to the following:

query GetNodeByUri($uri: String!) {
  nodeByUri(uri: $uri) {
    __typename
    ... on ContentType {
      name
      uri
      isFrontPage
      contentNodes {
        nodes {
          __typename
          ... on Post {
            id
            title
          }
        }
      }
    }
  }
}

And then execute the query again.

We now see the following results:

{
  "data": {
    "nodeByUri": {
      "__typename": "ContentType",
      "name": "post",
      "uri": "/",
      "isFrontPage": true,
      "contentNodes": {
        "nodes": [
          {
            "__typename": "Post",
            "id": "cG9zdDoxMDMx",
            "title": "Tiled Gallery"
          },
          {
            "__typename": "Post",
            "id": "cG9zdDoxMDI3",
            "title": "Twitter Embeds"
          },
          {
            "__typename": "Post",
            "id": "cG9zdDoxMDE2",
            "title": "Featured Image (Vertical)…yo"
          },
          {
            "__typename": "Post",
            "id": "cG9zdDoxMDEx",
            "title": "Featured Image (Horizontal)…yo"
          },
          {
            "__typename": "Post",
            "id": "cG9zdDoxMDAw",
            "title": "Nested And Mixed Lists"
          },
          {
            "__typename": "Post",
            "id": "cG9zdDo5OTY=",
            "title": "More Tag"
          },
          {
            "__typename": "Post",
            "id": "cG9zdDo5OTM=",
            "title": "Excerpt"
          },
          {
            "__typename": "Post",
            "id": "cG9zdDo5MTk=",
            "title": "Markup And Formatting"
          },
          {
            "__typename": "Post",
            "id": "cG9zdDo5MDM=",
            "title": "Image Alignment"
          },
          {
            "__typename": "Post",
            "id": "cG9zdDo4OTU=",
            "title": "Text Alignment"
          }
        ]
      }
    }
  }
}

If we compare these results from our GraphQL Query, we can see that we’re starting to get data that matches the homepage that WordPress is rendering.

Screenshot of the homepage

Each post on the rendered homepage shows more information that we haven’t queried yet, such as:

  • post author
    • name
    • avatar url
  • post date
  • post content
  • uri (to link to the post with)

We can update our query once more with this additional information.

query GetNodeByUri($uri: String!) {
  nodeByUri(uri: $uri) {
    __typename
    ... on ContentType {
      name
      uri
      isFrontPage
      contentNodes {
        nodes {
          __typename
          ... on Post {
            id
            title
            author {
              node {
                name
                avatar {
                  url
                }
              }
            }
            date
            content
            uri
          }
        }
      }
    }
  }
}

Breaking into Fragments

The query is now getting us all the information we need, but it’s starting to get a bit long.

We can use a feature of GraphQL called Fragments to break this into smaller pieces.

I’ve broken the query into several Fragments:

query GetNodeByUri($uri: String!) {
  nodeByUri(uri: $uri) {
    __typename
    ...ContentType
  }
}

fragment ContentType on ContentType {
  name
  uri
  isFrontPage
  contentNodes {
    nodes {
      ...Post
    }
  }
}

fragment Post on Post {
  __typename
  id
  date
  uri
  content
  title
  ...Author
}

fragment Author on NodeWithAuthor {
  author {
    node {
      name
      avatar {
        url
      }
    }
  }
}

Fragments allow us to break the query into smaller pieces, and each fragment can ultimately be coupled with the component that needs the data it asks for.

Here, I’ve created 3 named fragments:

  • ContentType
  • Post
  • Author

And then we’ve reduced the nodeByUri field to only ask for 2 fields:

  • __typename
  • uri

The primary responsibility of the nodeByUri field is to get the node and return it to us with the __typename of the node.

The ContentType fragment is now responsible for declaring what is important if the node is of the ContentType type.

The responsibility of this Fragment is to get some details about the type, then get the content nodes (posts) associated with it. It’s not concerned with the details of the post, though, so that becomes another fragment.

The Post fragment defines the fields needed to render each post, then uses one last Author fragment to get the details of the post author.

We can execute this query, and get all the data we need to re-create the homepage!! (sidebar widgets not included)

Querying a Page

Now, we can expand our query to account for different types.

If we enter the /about path into our “uri” variable, and execute the same query, we will get this payload:

{
  "data": {
    "nodeByUri": {
      "__typename": "Page"
    }
  }
}

Screenshot of initial query for the “/about” uri

We’re only getting the __typename field in response because we’ve told GraphQL to only return data ... on ContentType, and since this node is not of the ContentType type, we don’t get any additional data.

Writing the fragment

So now, we can write a fragment to ask for the specific information we need if the type is a Page.

fragment Page on Page {
  title
  content
  commentCount
  comments {
    nodes {
      id
      content
      date
      author {
        node {
          id
          name
          ... on User {
            avatar {
              url
            }
          }
        }
      }
    }
  }
}

And we can work that into the `nodeByUri` query like so:

query GetNodeByUri($uri: String!) {
  nodeByUri(uri: $uri) {
    __typename
    ...ContentType
    ...Page
  }
}

So our full query document becomes (we could break the page’s comments into fragments as well):

query GetNodeByUri($uri: String!) {
  nodeByUri(uri: $uri) {
    __typename
    ...ContentType
    ...Page
  }
}

fragment Page on Page {
  title
  content
  commentCount
  comments {
    nodes {
      id
      content
      date
      author {
        node {
          id
          name
          ... on User {
            avatar {
              url
            }
          }
        }
      }
    }
  }
}

fragment ContentType on ContentType {
  name
  uri
  isFrontPage
  contentNodes {
    nodes {
      ...Post
    }
  }
}

fragment Post on Post {
  __typename
  id
  date
  uri
  content
  title
  ...Author
}

fragment Author on NodeWithAuthor {
  author {
    node {
      name
      avatar {
        url
      }
    }
  }
}

And when we execute the query for the “/about” page now, we are getting enough information again, to reproduce the page that WordPress renders:

{
  "data": {
    "nodeByUri": {
      "__typename": "Page",
      "title": "About",
      "content": "<p>WP Test is a fantastically exhaustive set of test data to measure the integrity of your plugins and themes.</p>\n<p>The foundation of these tests are derived from WordPress’ Theme Unit Test Codex data. It’s paired with lessons learned from over three years of theme and plugin support, and baffling corner cases, to create a potent cocktail of simulated, quirky user content.</p>\n<p>The word “comprehensive” was purposely left off this description. It’s not. There will always be something new squarely scenario to test. That’s where you come in. Let us know of a test we’re not covering. We’d love to squash it.</p>\n<p>Let’s make WordPress testing easier and resilient together.</p>\n",
      "commentCount": 1,
      "comments": {
        "nodes": [
          {
            "id": "Y29tbWVudDo1NjUy",
            "content": "<p>Test comment</p>\n",
            "date": "2021-12-22 12:07:54",
            "author": {
              "node": {
                "id": "dXNlcjoy",
                "name": "wpgraphqldemo",
                "avatar": {
                  "url": "https://secure.gravatar.com/avatar/94bf4ea789246f76c48bcf8509bcf01e?s=96&d=mm&r=g"
                }
              }
            }
          }
        ]
      }
    }
  }
}

Querying a Category Archive

We’ve looked at querying the home page and a regular page, so now let’s look at querying a category archive page.

If we navigate to https://demo.wpgraphql.com/category/alignment/, we’ll see that it’s the archive page for the “Alignment” category. It displays posts of the category.

Screenshot of the Alignment category page rendered by WordPress

If we add “/category/alignment” as our variable input to the query, we’ll now get the following response:

{
  "data": {
    "nodeByUri": {
      "__typename": "Category"
    }
  }
}

Screenshot of querying the “alignment” category in GraphiQL

So now we can write our fragment for what data we want returned when the response type is “Category”:

Looking at the template we want to re-create, we know we need to ask for:

  • Category Name
  • Category Description
  • Posts of that category
    • title
    • content
    • author
      • name
      • avatar url
    • categories
      • name
      • uri

So we can write a fragment like so:

fragment Category on Category {
  name
  description
  posts {
    nodes {
      id
      title
      content
      author {
        node {
          name
          avatar {
            url
          }
        }
      }
      categories {
        nodes {
          name
          uri
        }
      }
    }
  }
}

And now our full query document looks like so:

query GetNodeByUri($uri: String!) {
  nodeByUri(uri: $uri) {
    __typename
    ...ContentType
    ...Page
    ...Category
  }
}

fragment Category on Category {
  name
  description
  posts {
    nodes {
      id
      title
      content
      author {
        node {
          name
          avatar {
            url
          }
        }
      }
      categories {
        nodes {
          name
          uri
        }
      }
    }
  }
}

fragment Page on Page {
  title
  content
  commentCount
  comments {
    nodes {
      id
      content
      date
      author {
        node {
          id
          name
          ... on User {
            avatar {
              url
            }
          }
        }
      }
    }
  }
}

fragment ContentType on ContentType {
  name
  uri
  isFrontPage
  contentNodes {
    nodes {
      ...Post
    }
  }
}

fragment Post on Post {
  __typename
  id
  date
  uri
  content
  title
  ...Author
}

fragment Author on NodeWithAuthor {
  author {
    node {
      name
      avatar {
        url
      }
    }
  }
}

And when I execute the query for the category, I get all the data I need to create the category archive page.

Amazing!

Any Type that can be returned by the nodeByUri field can be turned into a fragment, which can then be coupled with the Component that will render the data.
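For example, a front-end application might map each possible __typename to a template. Here’s a minimal sketch (the template names are assumptions for illustration, not part of WPGraphQL):

// Map the __typename returned by nodeByUri to a (hypothetical) template name
const templates = {
  ContentType: 'FrontPageTemplate',
  Page: 'PageTemplate',
  Category: 'CategoryTemplate',
};

function resolveTemplate( node ) {
  // Fall back to a NotFound template when the Type isn't handled yet
  return templates[ node.__typename ] ?? 'NotFoundTemplate';
}

// e.g. resolveTemplate( { __typename: 'Category' } ) returns 'CategoryTemplate'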

Building a Bookstore using WordPress, WPGraphQL and Atlas Content Modeler

In this post, we’ll look at how we can create a simple Book Store using WordPress, WPGraphQL and Atlas Content Modeler, a new plugin from WP Engine that allows Custom Post Types, Custom Taxonomies and Custom Fields to be created in the WordPress dashboard and allows the data to be accessed from WPGraphQL.

By the end of this tutorial, you should be able to manage a list of Books, each with a Title, Price and Description field and a connection to an Author.

Then, you should be able to query the data using the GraphiQL IDE in the WordPress dashboard provided by WPGraphQL.

Pre-requisites

In order to follow this tutorial, you will need a WordPress install with WPGraphQL and Atlas Content Modeler installed and activated. This tutorial will not cover setting up the environment, so refer to each project’s installation instructions to get set up.

The WordPress environment I’m using has only 2 plugins installed and activated:

  • WPGraphQL v 1.6.3
  • Atlas Content Modeler v 0.5.0
Screenshot of the WordPress dashboard’s plugin page showing WPGraphQL and Atlas Content Modeler activated

Creating a Book Model with Atlas Content Modeler

Since the goal is to have a Book Store, we’re going to want to get a new Book model (custom post type) set up using Atlas Content Modeler.

If Atlas Content Modeler has not yet been used in the WordPress install, clicking the "Content Modeler" Menu item in the Dashboard menu will open a “Getting Started” page, where we can create a new Content Model.

Screenshot of the Atlas Content Modeler getting started page

After clicking the “Get Started” button, I’m presented with a form to create a new Content Model.

Screenshot of the “New Content Model” form in Atlas Content Modeler

There are 6 fields to fill in to create a new model, and I used the following values:

  • Singular Name: Book
  • Plural Name: Books
  • Model ID: book
  • API Visibility: Public
  • Model Icon: I searched for a book and selected it
  • Description: A collection of books
Screenshot of the ACM “New Content Model” form filled in

Clicking “Create” will add the “Book” Content Model to WordPress.

We’ll see the “Books” Type show in the Admin Menu:

And we’ll be presented with a new form where we can start adding Fields to the “Book” Content Model.

For books in our bookstore, we’ll want the following fields:

  • Title (text)
  • Price (number)
  • Description (rich text)

We can add these fields by selecting the field type we want to add, then filling in the details:

Add the Title field

To add the Title field, I selected the “Text” field type, and filled in the form:

  • Field Type: Text
  • Name: Title
  • API Identifier: title
  • Make this field required: checked
  • Use this field as Entry Title: checked
  • Input Type: Single Line
Screenshot of adding the “Title” field to the Book Content Model

After clicking create, I’m taken back to the Model screen where I can add more fields:

Add the Price field

Clicking the “plus” icon below the title field allows me to add a new field.

For the Price field I configured as follows:

  • Field Type: Number
  • Name: Price
  • API Identifier: price
  • Required: checked
  • Number Type: decimal

Add the Description field

Next, we’ll add a Description field.

Following the same steps above, we’ll click the Plus icon and add a new field configured like so:

  • Field Type: Rich Text
  • Name: Description
  • API Identifier: description
Screenshot of the “Description” field being added by ACM

Adding Books to our Bookstore

Now that we’ve created a “Books” content model, we can begin adding Books to our bookstore.

We can click “Books > Add New” from the Admin menu in our WordPress dashboard, and we’ll be taken to a screen to add a new book.

The fields we created are ready to be filled in.

You can fill in whatever values you like, but I’ve filled in mine as:

  • Title: Atlas Content Modeler Rocks!
  • Price: 0.00
  • Description: A priceless book about building content models in WordPress.
Screenshot of a Book content being populated

Book Authors

Before we get too far adding more books, we probably want to add support for adding an “Author” to each book.

While we could add a Text field named Author to the Book Model, that could lead to mistakes. We would have to type the Author’s name over and over for each book, and if the Author’s name changed, every book would have to be updated, etc.

It would be better to add the Author as its own entity, and create connections between the Author and the Book(s) that the Author has written.

Adding the Author Taxonomy

In order to connect Authors to Books, we’re going to use Atlas Content Modeler to create an Author Taxonomy.

In the WordPress Admin menu, we can click “Content Modeler > Taxonomies” and we’ll be greeted by a form to fill out to add a new Taxonomy.

We’ll fill out the following values:

  • Singular Name: Author
  • Plural Name: Authors
  • Taxonomy ID: author
  • Models: Books
  • Hierarchical: unchecked (not-hierarchical as authors should not have parent/child authors)
  • API Visibility: Public

Once created, we’ll see the Author Taxonomy is now associated with the “book” model.

And we can also see this relationship in the Admin Menu:

And in the “Books” list view, we can also see the “Authors” listed for each book.

Adding an Author

Of course, we don’t have any Authors yet.

Let’s add an Author to our new Author Taxonomy.

In the Admin Menu we can click “Books > Authors” and add a new Author.

I’ll give our author the name “Peter Parker” simply because my son is watching Spiderman as I type this.

And I added this description as Peter’s bio:

Peter Parker is an author of books about Atlas Content Modeler, and also a member of the Avengers.

Assign an Author to our Book

Now that we have Peter Parker added as an Author, we can assign Peter as the author of our book.

If we navigate back to “Books > All Books” and click “Edit” on the book we created, we’ll now see an “Authors” panel where we can make the connection from our Book to Peter Parker, the author.

If we add Peter Parker as the author, then click “Update” on the book, then navigate back to “Books > All Books” we can now see Peter listed as the author of the book.

Adding more Books

Now that we have our Book Model and Author Taxonomy all set up, let’s add a few more Books and Authors. Feel free to input whatever content you like.


I added one more Author: “Tony Stark”:



And 3 more books:

  • Marvel’s Guide to Headless WordPress
  • WordPress, a Super CMS
  • WPGraphQL: The Super Powered API you’ve been waiting for

Querying the Books with WPGraphQL

Now that we’ve created our Book Content Model and Author Taxonomy with Atlas Content Modeler, and populated some data, it’s now time to look at how we can interact with this data using WPGraphQL.

In the WordPress Admin, we’ll navigate to “GraphQL > GraphiQL IDE” and start exploring the GraphQL Schema.

Exploring the GraphQL Schema

In the top right is a “Docs” button. Clicking this opens the GraphQL Schema documentation.

We can search “book” and see how our Book content model shows in the Schema in various ways.

Additionally, we can click the “Explorer” button to open up a panel on the left side which we can use to compose queries.

Using the “Explorer” we can find the “books” field, and start building a query:

We can also start typing in the Query pane, and get type-ahead hints to help us compose our query:

The final query I ended up with was:

query GetBooks {
  books {
    nodes {
      databaseId
      id
      price
      title
      description
      authors {
        nodes {
          name
        }
      }
    }
  }
}

And the data that was returned was:

{
  "data": {
    "books": {
      "nodes": [
        {
          "databaseId": 9,
          "id": "cG9zdDo5",
          "price": 25.99,
          "title": "WPGraphQL: The Super Powered API you've been waiting for",
          "description": "Learn how to use WordPress data in new ways, using GraphQL!",
          "authors": {
            "nodes": [
              {
                "name": "Tony Stark"
              }
            ]
          }
        },
        {
          "databaseId": 8,
          "id": "cG9zdDo4",
          "price": 12.99,
          "title": "WordPress, a Super CMS",
          "description": "Learn all the super powers of the world's most popular CMS.",
          "authors": {
            "nodes": [
              {
                "name": "Peter Parker"
              }
            ]
          }
        },
        {
          "databaseId": 7,
          "id": "cG9zdDo3",
          "price": 9.99,
          "title": "Marvel's Guide to Headless WordPress",
          "description": "How to develop headless WordPress sites like a Superhero.",
          "authors": {
            "nodes": [
              {
                "name": "Tony Stark"
              }
            ]
          }
        },
        {
          "databaseId": 5,
          "id": "cG9zdDo1",
          "price": 0,
          "title": "Atlas Content Modeler Rocks!",
          "description": "A priceless book about building content models in WordPress.",
          "authors": {
            "nodes": [
              {
                "name": "Peter Parker"
              }
            ]
          }
        }
      ]
    }
  }
}

Conclusion

We’ve just explored how to build a basic Bookstore using WordPress, WPGraphQL and Atlas Content Modeler.

Without writing a line of code, we’ve added a Book model with an Author Taxonomy, and populated our bookstore with Books and Authors and created relationships between them.

Then, we used WPGraphQL to query data in the GraphiQL IDE.

Now that you can access the data via GraphQL, it’s up to you to build something with your favorite front-end technology. Whether you prefer React with Gatsby or NextJS, or Vue, or something else, the data in WordPress is now free for you to use as you please!

Getting started with WPGraphQL and Gridsome

This is a guest post by @nicolaisimonsen

Gridsome is a Vue.js framework for building static generated sites/apps. It’s performant, powerful, yet simple and really faaaaast. Gridsome can pull in data from all sorts of data-sources like CMSs, APIs, Markdown etc. It has a lot of features. Go check ’em out.

Since GraphQL is so efficient and great to work with, it makes sense to fetch our WordPress data in that manner. That’s obviously where WPGraphQL comes into the picture, and I think it’s a match made in heaven.

If you’re up for it, below is a quick-start tutorial that will guide you through building your first WPGraphQL-Gridsome app.

I know I’m stoked about it!

What we will be building

We’ll go ahead and build a small personal site in Gridsome. Basically just a blog. The blog posts will be fetched from WordPress via WPGraphQL.

This project is very minimal, lightweight and this project alone might not blow your socks off, but it’s foundational and a great start to get into headless WordPress with Gridsome.

Setup a WordPress install

First off is to install WordPress.

I highly recommend Local for setting up WordPress locally. It handles everything from server setup and configuration to installing WordPress.

You can also use MAMP/WAMP/LAMP or however else you like to do it. It’s all good.

With WordPress spun up and ready to go, we want to install and activate our one and only plugin. WPGraphQL.

Now go to WPGraphQL > Settings and tick “Enable Public Introspection“.

That’s it. We are now cooking with GraphQL!

Included with WPGraphQL is the GraphiQL IDE tool, which is awesome for building and testing queries directly in WordPress.
It might be a good idea to play around in here for a few minutes before we move along.

Aaaaaaand we’re back. Last thing we need to do is just to add a new post. Add a title, add some content and press publish.

Great. You’re golden. Onwards to some coding!

Gridsome? Let’s go!

I’m including a WPGraphQL-Gridsome starter (well, actually two).

I highly recommend cloning the stripped version – this will only include styles and html, so we can hit the ground running.

However, you can also just start from scratch.

Either way I got you.

If you just want the full code, that’s completely fine too.


Let’s go ahead and open our terminal/console.

The very first thing is to install the Gridsome CLI

npm install --global @gridsome/cli

Navigate to your desired project folder and type in

gridsome create my-personal-site https://github.com/nicolaisimonsen/wpgraphql-gridsome-starter-stripped.git

or if you’re starting from scratch

gridsome create my-personal-site

Now move into the project directory, then start the local development server:

cd my-personal-site
gridsome develop

In our code editor we should have the following:

We’re now exactly where we want to be. From here we need to pull in WPGraphQL to Gridsome as our data-source. For that we’ll be using this gridsome source plugin. Go ahead and install it.

npm install gridsome-source-graphql

The source plugin needs to be configured.
Open up gridsome.config.js and provide the following object for the plugins array.

//gridsome.config.js
module.exports = {
//
  plugins: [
    {
      use: 'gridsome-source-graphql',
      options: {
        url: 'http://{your-site}/graphql',
        typeName: 'WPGraphQL',
        fieldName: 'wpgraphql',
      },
    },
  ],
//
}

Remember that options.url is the site URL + the graphql endpoint.
(Can be found in WordPress under WPGraphQL > Settings > GraphQL endpoint)

For every change to gridsome.config.js or gridsome.server.js, we need to restart the app. You can type ctrl + c to exit the gridsome develop process and run gridsome develop again to restart.

Now you can test the new GraphQL data-source in Gridsome Playground/IDE – located at http://localhost:8080/___graphql
Write out the following query and hit the execute button (▶︎):

query {
  posts {
    edges {
      node {
        id
        uri
      }
    }
  }
}

There you have it. On the right side you should see your posts data.

That data could prove to be mighty useful, huh?

We’ll start setting up a Gridsome template for our posts.

Within the “src” folder there’s a folder called “templates”.

A template is used to create a single page/route for items in a given collection (think posts). Go to (or create) a file within the “templates” folder called Post.vue.

/src/templates/Post.vue

In order to query the data from the GraphQL data layer into our templates, we can use the following blocks:

<page-query> for pages/templates, requires id.
<static-query> for components.

In the Post.vue template we are fetching a specific post (by id – more on that later), so we’ll write the following <page-query> in between the <template> and <script> blocks:
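A minimal sketch of such a <page-query>, assuming the wpgraphql fieldName configured earlier and that we only need the post’s title, date and content, could look like this:

<page-query>
query Post ($id: ID!) {
  wpgraphql {
    post (id: $id) {
      title
      date
      content
    }
  }
}
</page-query>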

Also – change console.log(this) to console.log(this.$page).

Important – we’ve only laid the groundwork for our template. It won’t actually fetch any data yet, since the route/page and its id haven’t been created dynamically. The missing step is the Pages API, and that’s where we’re heading right now.

Open up gridsome.server.js and provide the following.
(Remember to restart afterwards)

// gridsome.server.js
module.exports = function(api) {
  api.loadSource(({ addCollection }) => {
    // Use the Data Store API here: https://gridsome.org/docs/data-store-api/
  });

  api.createPages(async ({ graphql, createPage }) => {
    const { data } = await graphql(`
      
        query {
          posts {
            edges {
              node {
                id
                uri
              }
            }
          }
        }
      
    `);

    data.posts.edges.forEach(({ node, id }) => {
      createPage({
        path: `${node.uri}`,
        component: "./src/templates/Post.vue",
        context: {
          id: node.id,
        },
      });
    });
  });
};

Remember the Gridsome Playground query?

Basically the api.createPages hook goes into the data layer fetched from WPGraphQL and queries the posts (the exact query we ran in Playground), then loops through the collection to create single pages/routes. We provide a path/url for the route, a component (the Post.vue template), and lastly a context.id set to the post/node id.

Magic happened when running “gridsome develop” and now we have routes (got routes?). These can be found in src/.temp/routes.js.

Try accessing the very first Post route in the browser – localhost:8080/{path} and open up the inspection tool to get the console.

Because of the console.log(this.$page) in the mounted() hook of our Post.vue – the post data from WordPress is now being written out in the console.

With this specific data now being available we just need to bind it to the actual template, so we can finally get the HTML and post displayed. Replace the current <article> block with the following:
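A sketch of that <article> block, assuming the page-query exposes the post under $page.wpgraphql.post:

<article>
  <!-- v-html is used because WordPress returns these fields as rendered HTML -->
  <h1 v-html="$page.wpgraphql.post.title" />
  <div v-html="$page.wpgraphql.post.content" />
</article>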

Refresh the page.

Well, ain’t that a sight for sore eyes. Our blog posts are finally up.

Even though we’re not quite done yet this is awesome. Good job!

Now. We have posts and that’s really great for a blog, but our visitors might need a way to navigate to these.
Let’s set up a page called “blog” to list all of our blog posts.

There’s a folder within “src” called “pages” and this is a great way to set up single pages/routes non-programmatically.
Basically we just put a file with the .vue extension in there and we now have a single page for that particular route and only that route. Even if we did set up a Page.vue template within “templates”, the Blog.vue file in the “pages” folder would still supersede it. Sweet!

But why would you do that? Well, simple and fast is not always a sin. We also really don’t need to maintain a page in WordPress that only lists out blog posts, since the content is not really changing. However, just know that we could create a Page.vue template if we chose to, and obviously it would include our blog page.

In our new Blog.vue file in the “pages” folder, insert this <static-query> between the <template> and <script> blocks:
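A minimal sketch of that query, assuming we want each post’s id, title, date, excerpt and uri for the listing (adjust the fields to your needs):

<static-query>
query {
  posts {
    edges {
      node {
        id
        title
        date
        excerpt
        uri
      }
    }
  }
}
</static-query>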

So we want to fetch all the posts to display on our blog page, and that’s why we’re writing a static query. There’s no page template/WordPress data for this page, so even if we wrote out a <page-query> (like in Post.vue) it would return null. Nothing. Nada. Nichego.
Change the console.log(this) to console.log(this.$static) and open up our blog page in the browser. Also open the developer tools and look at the console.

Awesome. Our static-query ($static) has returned an object with an array of 2 posts. We now have the data, so let’s display it on the page.

Replace the <script> block with the following:
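A minimal sketch of that <script> block; the getDate implementation here is an assumption (the original may format the date differently):

<script>
export default {
  mounted() {
    // Logged here so we can inspect the static query result in the console
    console.log(this.$static);
  },
  methods: {
    // Turns a WPGraphQL date string into something human readable
    getDate(date) {
      return new Date(date).toLocaleDateString();
    },
  },
};
</script>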

This adds a getDate function that we will be using in our Template.

Now, replace the <template> block with the following:
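A minimal sketch of that <template> block, looping over the posts from the static query above and linking to each post’s uri with Gridsome’s g-link component (the real markup and styling will differ):

<template>
  <Layout>
    <article v-for="{ node } in $static.posts.edges" :key="node.id">
      <h2 v-html="node.title" />
      <small>{{ getDate(node.date) }}</small>
      <div v-html="node.excerpt" />
      <g-link :to="node.uri">Read post</g-link>
    </article>
  </Layout>
</template>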

Voila! Go check out the page in the browser.

We are now displaying our posts or rather an excerpt of these with a button to take us to the actual post. That’s wild! Again, good job.

That pretty much concludes the tutorial. You’ve created a personal site with a blog in Gridsome using WordPress & WPGraphQL.

Build. Deployment. Live.

The last step of this build is to actually run the ‘build’ command.

Go to the terminal/console and execute:

gridsome build

Gridsome is now generating static files and upon completion you’ll find the newly created “dist” folder and all of the files and assets.

That’s the site, and all of the data from WordPress, in a folder that you can just drop onto an FTP server and you have a live site.

However, a more dynamic and modern way of doing static deployment is to use a static web host and build from a git repository.

There are lots of hosts out there. I absolutely love and recommend Netlify, but others include Vercel, Amplify and Surge.sh.

The links above should take you to some guides of how exactly to deploy using their services.

It would also be pretty cool if we could trigger a build whenever a post is created/updated/deleted in WordPress. Otherwise we would have to build manually from time to time to retrieve the latest data from WordPress. Luckily, plugins like JAMstack Deployments help us in that regard. The plugin takes a build hook url from a static web host and hits it each time WordPress performs those operations. I suggest you try it out.

I won’t go into deployment in further detail, but I just wanted to let you in on some of the options for deploying a static site. I’m quite sure you can take it from here.

Where to go from here?

Obviously deployment – taking this site live should be one of the next steps, but we might also want to enhance the project.
I’ve listed some possible improvements, which could also just serve as great practice ↓

Further improvements might be:

* A word about extensions – WPGraphQL can be extended to integrate with other WordPress plugins.
Advanced Custom Fields is a great plugin used by so many to enrich the content and structure of a WordPress site. There’s a WPGraphQL extension for it (and for other great plugins too), and these are maintained by some awesome community contributors. Gridsome also has a badass community and a lot of plugins to get you started.

It’s almost too good to be true.

Wrap it up already

So that’s basically it. Thanks for reading and coding along.

I definitely encourage you to go further and read the documentation on both Gridsome and WPGraphQL. It’s very well written and has examples that will help you no matter what you build.

Lastly, if you need to get in touch I’ll try to help you out the best I can.
Very lastly, if this was of any use to you, or maybe you just hated it – go ahead and let me know.

@nicolaisimonsen

Gutenberg and Decoupled Applications

In this article I want to dive into the current state of Gutenberg and WPGraphQL.

This is a technical article about using Gutenberg blocks in the context of decoupled / headless / API-driven WordPress, and makes the assumption that you already know what Gutenberg is and have some general understanding of how it works.

TL;DR

Client-server contracts around the shape of data are fundamental to achieving “separation of concerns”, a pillar of modular and decoupled application development.

While much of WordPress was built with decoupling in mind, the WP REST API and Gutenberg were not.

As a result, decoupled application developers interacting with WordPress are limited in what they can achieve.

With the growing demand for headless WordPress, this is a key limitation that will hamper growth.

Fortunately, even with the limitations, there are ways forward. In this article I walk through 3 approaches you can implement to use Gutenberg in decoupled applications today, tradeoffs included, and propose a plan to make the future of Gutenberg for decoupled applications a better one.

Replacing my door lock

I recently replaced the lock on the front door of my house.

I ordered the lock from an online retailer. I was able to select a specific brand of lock in a specific color.

When the lock arrived and I opened the package, it was the same brand and color that I ordered. It wasn’t just any random lock, it was the one that I agreed to pay for, and the online retailer agreed to mail me.

I was able to install the lock without any surprises. I didn’t have to drill any new holes in my door. The new lock fit the hole in my door that I removed the old lock from.

The new lock wasn’t made by the same manufacturer that made the door, and yet, the lock installed on my door just fine. In fact, there were at least 30 different locks from a variety of manufacturers that I could have selected that all would have worked in my door without any complications.

Decoupled systems

This wasn’t really a story about doors and locks. It’s a story about decoupled systems, and the contracts, or agreements, that make them work.

And its intent is to help frame what I’m talking about with using WordPress, and specifically Gutenberg, in decoupled contexts.

In order for decoupled systems to work well, whether it’s doors and door locks, or WordPress and a decoupled JavaScript application, there needs to be some sort of agreement between the different parts of the system.

In the case of door and lock manufacturers, it’s an agreement over the size and positioning of the holes in the door.

Diagram showing measurements for a door lock hole

Door manufacturers can build doors at their leisure and lock manufacturers at theirs, and when the time comes to bring them together, they work without issue because both parties are adhering to an agreement.

In the case of e-commerce, there are agreements about what a consumer purchases and what should be delivered. In my case, the online store provided a list of locks that were available to purchase. I selected a specific lock, paid for it, and in response I received the lock we agreed to, in exchange for my payment.

Decoupled tech, decoupled teams

When WPGraphQL first started, I was working at a newspaper that had a CMS team that focused on WordPress, a Native Mobile team that focused on the iOS and Android applications, a Data Warehouse team that collected various data from the organization and a Print team that took the data from WordPress and prepared it for Print.

WordPress was the entry point for content creators to write content, but the web was only one of many channels where content was being used.

Not only was the technology decoupled (PHP for the CMS, React Native for mobile apps, Python for Data warehousing and some legacy system I forget the name of for print), but the teams were also decoupled.

The only team that really needed to understand WordPress was the CMS team. The other teams were able to use WPGraphQL Schema Introspection to build tools for their teams using data from WordPress, without needing to understand anything about PHP or WordPress under the hood.

Much like door and lock manufacturers don’t need to be experts at what the other is building, WPGraphQL’s schema served as the contract, enabling many different teams to use WordPress data when, and how, they needed.

WPGraphQL served as the contract between the CMS team and the other teams as well as WordPress the system and the other team’s decoupled systems.

WordPress contracts

For WordPress, one of the common contracts, or agreements established between multiple systems (such as plugins, themes, and WordPress core) comes in the form of registries.

WordPress has registries for Post Types, Taxonomies, Settings, Meta and more.

The register_post_type function has more than 30 options that can be configured to define the contract between the Post Type existing and how WordPress core and decoupled systems (namely plugins and themes) should interact with it.
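For example, a plugin might register a hypothetical “book” post type like the sketch below (only a handful of the available options shown; the show_in_graphql and graphql_*_name options come from WPGraphQL’s extension of the registry, not WordPress core). Every other system that reads the registry can then rely on those options:

add_action( 'init', function () {
    register_post_type( 'book', [
        'label'               => 'Books',
        'public'              => true,
        'show_ui'             => true,  // WordPress core will render an admin UI for this type
        'show_in_rest'        => true,  // the WP REST API will expose this type
        'show_in_graphql'     => true,  // WPGraphQL will expose this type
        'graphql_single_name' => 'book',
        'graphql_plural_name' => 'books',
        'supports'            => [ 'title', 'editor', 'thumbnail' ],
    ] );
} );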

The register_taxonomy, register_meta, register_setting, register_sidebar and other register_* functions in WordPress serve a similar purpose. They allow for a contract to be established so that many different systems can work with WordPress in an agreed upon way.

These registries serve as a contract between WordPress core and decoupled systems (themes and plugins) that can work with WordPress. Because these registries establish an agreement with how WordPress core will behave, plugins and themes can latch onto these registries and extend WordPress core in some powerful ways.

The decoupled (pluggable) architecture of WordPress is enabled by these contracts.

Image showing WordPress in the middle with the logos for ElasticPress, WordPress SEO by Yoast, WPGraphQL and Advanced Custom Fields around it.
WordPress registries enable plugins to iterate outside of WordPress core

Registering a new post type to WordPress can get you a UI in the WordPress dashboard, but it can also get your content indexed to Elastic Search via ElasticPress, powerful SEO tools from WordPress SEO, custom admin functionality from Advanced Custom Fields, and API access via WPGraphQL.

If the next release of WordPress started hiding the UI for all post types that were registered with show_ui => true, or stopped allowing plugins to read the post type registry, there would likely be a bug (or hundreds) reported on Trac (and Twitter, and Slack, etc.), as that would mean WordPress was breaking the established contract.

The client/server contract

Like we discussed earlier, decoupled systems need some sort of shared agreement in order to work well together. It doesn’t have to be a GraphQL API, but it has to be something.

For WordPress, this comes in the form of APIs.

WordPress core has 2 built-in APIs that enable decoupled applications to interact with WordPress data, XML-RPC and the WP REST API.

And, of course, there’s yours truly, WPGraphQL, a free open-source WordPress plugin that provides an extendable GraphQL schema and API for any WordPress site.

Blocks representing REST, GraphQL and RPC API on top of a block representing the Authorization and Business logic layers of WordPress, and at the bottom is a block representing the Persistence Layer (MySQL).
Diagram of the WordPress server + API setup

In order for decoupled applications, such as Gatsby, NextJS, Frontity, native mobile applications or others, to work with WordPress, the APIs must establish a contract that WordPress and the decoupled application can both work against.

The WP REST API provides a Schema

The WordPress REST API provides a Schema that acts as this contract. The Schema is introspect-able, allowing remote systems to see what’s available before asking for the data.

This is a good thing!

But the Schema is not enforced

However, the WP REST API doesn’t enforce the Schema.

WordPress plugins that extend the WP REST API Schema can add fields to the API without defining what data will be returned in the REST API Schema. Or, they can register fields that return “object” as a wildcard catch-all.
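A sketch of what such a registration can look like with register_rest_field; the “extra_data” field name here is hypothetical:

add_action( 'rest_api_init', function () {
    register_rest_field( 'post', 'extra_data', [
        'get_callback' => function ( $post ) {
            // Free to return anything, and the shape can change from post to post.
            return get_post_meta( $post['id'] );
        },
        'schema'       => [ 'type' => 'object' ], // a wildcard that tells API consumers nothing
    ] );
} );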

This is a bad thing!

Decoupled teams and applications cannot reliably use the WordPress REST API if it doesn’t enforce any type of contract.

Optional Schema and wildcard return types

Plugins such as the Advanced Custom Fields to REST API plugin add a single “acf” field to the REST endpoints and declare in the WP REST API Schema that the field will return “an object”.

We can see this if we introspect the WP REST API of a WordPress install with this plugin active:

Screenshot showing the Introspection of the ACF to REST API Schema definition

This means that decoupled applications, and the teams building them, have no way to predict what data this field will ever return. It also means that even if a decoupled application does manage to get built, it could break at any time, because there’s no contract agreed to between the client and the server. The WordPress server can return anything at any time.

Unpredictable data is frustrating for API consumers

With the field defined as “object” the data returned can be different from page to page, post to post, user to user, and so on. There’s no predictable way decoupled application developers can prepare for the data the API will return.

This would be like me trying to purchase that door lock, but instead of the website showing me a list of door locks with specific colors to choose from, I was just given one “product” as the option to purchase.

The “product” might be a hat or some new sunglasses, or if I’m really lucky, it might be a door lock. I don’t have any way of knowing what the “product” is, until I receive it.

As an e-commerce consumer, this is not helpful.

And as a decoupled application developer, this type of API is frustrating.

Decoupled systems don’t work well if part of the equation is to “just guess”.

GraphQL enforces Schema and Strong Types

WPGraphQL, on the other hand, enforces a strongly Typed Schema. There is no option to extend the WPGraphQL API without describing the type of data the API will return. Additionally, there is no “wildcard” type.

A plugin cannot register a field to the WPGraphQL Schema that returns a door lock on one request, and sunglasses or a hat on the next request.

To extend WPGraphQL, plugins must register fields that declare a specific Type of data that will be returned. And this contract must be upheld.
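A sketch using register_graphql_field; the “color” field is hypothetical, but the required 'type' declaration is the part that forms the contract:

add_action( 'graphql_register_types', function () {
    register_graphql_field( 'Post', 'color', [
        'type'        => 'String', // this field always resolves to a String (or null), never a surprise
        'description' => 'A color associated with the post',
        'resolve'     => function ( $post ) {
            return get_post_meta( $post->databaseId, 'color', true );
        },
    ] );
} );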

This removes the “just guess” part of the equation.

Decoupled application developers always know what to expect.

Much like I, as an e-commerce consumer, was able to browse the list of door locks that were possible to purchase on the online store, decoupled application developers can use a tool such as GraphiQL to browse the GraphQL Schema and see what Types and Fields are available to query from the GraphQL API.

The screenshot below shows GraphiQL being used to explore a GraphQL Schema. The screenshot shows the type named “Post” in the GraphQL Schema with a field named “slug” which declares that it will return a String.

Screenshot of GraphiQL showing the “slug” field on the “Post” type.

Application developers can take the information they get from the Schema and construct queries that are now predictable.

And the GraphQL Schema serves as the contract between the server and the client application, ensuring that the server will return the data in the same shape the client was promised.

Just like I received the specific door lock matching the brand and color that I specified in my order, client applications can specify the Types and Fields they require with a GraphQL Query, and the response will match what was asked for.

In the example below, the GraphQL Query asks for a Post and the “slug” field, which the Schema tells us will return a String. And in response to this query, the GraphQL server provides just what was asked for.

The “just guess” part of the server/client equation is eliminated.

Example GraphQL Query & Response

query {
  post( id: 1, idType: DATABASE_ID ) {
    slug
  }
}
{
  "data": {
    "post": {
      "slug": "hello-world"
    }
  }
}
Screenshot showing a GraphQL Query and Response in the GraphiQL IDE

The Gutenberg block registry

Now that we’re on the same page about contracts between decoupled systems and how WPGraphQL provides a contract between the WordPress server and client applications, let’s move on to discuss Gutenberg more specifically.

Early integration with WPGraphQL

Gutenberg as a concept was fascinating to me early on. Like many others, I saw the potential for this block-based editor to impact WordPress users and the WordPress ecosystem greatly, WPGraphQL included.

I explored exposing Gutenberg blocks as queryable data in WPGraphQL as far back as June 2017.

Challenges and the current state of Gutenberg

While a basic initial integration was straightforward, I ran into roadblocks quickly.

Gutenberg didn’t have a server-side registry for blocks. At this time, all blocks in Gutenberg were fully registered in JavaScript, which is not executed or understood by the WordPress server.

This means that unlike Post Types, Taxonomies, Meta, Sidebars, Settings, and other constructs that make up WordPress, Gutenberg blocks don’t adhere to any type of contract with the WordPress server.

This means that the WordPress server knows nothing about blocks. There are no agreements between Gutenberg blocks and other systems in WordPress, or systems trying to interact with WordPress via APIs.

Blocks were practically non-existent as far as the application layer of WordPress was concerned.

There were no WP-CLI commands to create, update, delete or list blocks. No WP REST API Schema or endpoints for blocks. No XML-RPC methods for blocks. And no way to expose blocks to WPGraphQL.

Without any kind of agreement between the WordPress server and the Gutenberg JavaScript application, the WordPress server can’t interact with blocks in meaningful ways.

For example, the WordPress server cannot validate user input on Gutenberg blocks. Data that users input into the fields in Gutenberg blocks is trusted without question and saved to the database without the server having final say. This is a dangerous precedent, especially as Gutenberg is moving outside of editing Post content and into other parts of full-site editing. As far as I know, the lack of block input validation by the WordPress server is still a problem today.

Anyway, without the WordPress server having any knowledge of blocks, WPGraphQL also could not provide a meaningful integration with Gutenberg.

I was sad, because I was optimistic that this integration could lead to some really great innovations for decoupled applications.

Shortly after my tweet above and running into roadblocks, I raised these concerns with the Gutenberg team on Twitter and Slack. The Gutenberg team asked me to post my thoughts in a Gutenberg Github issue, which I did at length. While my comments received a lot of positive emoji reactions from the community, the issue has unfortunately been closed with many of the concerns outstanding.

Months later I also voiced similar concerns on the Make WordPress post about Gutenberg and Mobile, pointing out that without a proper server registry and API, decoupled applications, such as the WordPress native mobile application, won’t be able to support Custom Blocks, or even per-site adjustments to core blocks.

As of today, my understanding is that the WordPress native mobile applications still do not support custom blocks or adjustments to core blocks, making the App nearly useless for sites that have adopted Gutenberg.

Even with the limitations of Gutenberg, the headless WordPress community has been determined to use Gutenberg with decoupled applications.

Three approaches to using Gutenberg in decoupled applications, today

Below are some of the different approaches, including tradeoffs, that you can implement today to start using Gutenberg in decoupled applications.

Gutenberg blocks as HTML

I believe the fastest way to get started using Gutenberg in decoupled applications today, is to query the “content” field from WPGraphQL (or the WP REST API, if it’s still your flavor).

This is the approach that Frontity is using.

This is also the approach I’m using for WPGraphQL.com, which is in use on this very blog post you’re reading right now.

This post is written in Gutenberg, queried by Gatsby using WPGraphQL, and rendered using React components!

Here’s how it works (and please don’t judge my hacky JavaScript skills):

  • The GraphQL Query in Gatsby gets the content (see the code)
  • The content is passed through a parser (see the code)
  • The parser converts standard HTML elements into the Chakra UI equivalent to play nice with theming (see the code)
  • The parser also converts things like HTML for Twitter embeds, and `<code>` blocks into React components (see the code)
    • This is how we get neat things like the Syntax highlighting and “copy” button on the code snippets

Tradeoff: Lack of Introspection, unpredictable data

While this is working for me and WPGraphQL.com, I can’t recommend it for everyone.

Using HTML as the API defeats much of the purpose of decoupled systems. In order to use the markup as an API, the developers of the decoupled application need to be able to predict all the markup that might come out of the editor.

Querying HTML and trying to predict all the markup to parse is like me ordering “product” at the store. At any time I (or other users of WordPress) could add blocks with markup that my parser doesn’t recognize and the consuming application might not work as intended.

Tradeoff: Missing data

Content creators can modify attributes of blocks, and Gutenberg saves these attributes as HTML comments in the post_content. But when the content is prepared for public use in WordPress themes, the WP REST API or WPGraphQL, the raw block attributes are not available, so a parser like the one I described will not have all the block data to work with.

Tradeoff: Undefined Types

To overcome the “missing data” issue, it’s possible to pass attributes from Gutenberg blocks as HTML data-attributes in the render_callback for blocks. This gets the Gutenberg attributes from the editor into the rendered HTML, where a parser can use them. But even then, client applications don’t know what to expect, and the Types are undefined: all data-attributes are strings, so mapping them to something like a React or Vue component is difficult and fragile with this method.
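A sketch of that workaround with a hypothetical “acme/call-to-action” block, so the attributes at least reach the rendered markup:

register_block_type( 'acme/call-to-action', [
    'attributes'      => [
        'url'   => [ 'type' => 'string' ],
        'label' => [ 'type' => 'string' ],
    ],
    'render_callback' => function ( $attributes, $content ) {
        // Pass the block attributes through to the HTML as data-attributes so a
        // client-side parser can recover them. Note: everything arrives as a string.
        return sprintf(
            '<div class="cta" data-url="%s" data-label="%s">%s</div>',
            esc_attr( $attributes['url'] ?? '' ),
            esc_attr( $attributes['label'] ?? '' ),
            $content
        );
    },
] );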

When to use

This approach works for me, because I personally control both sides of wpgraphql.com: which blocks are available in the WordPress install, what content is published, and the Gatsby consumer application that queries the content and renders the site. In the e-commerce analogy, I’m both the person ordering the “product” and the person fulfilling the order, so there are no surprises. I’m not working with different teams, or even different team members, and I’m the primary content creator.

For projects that have multiple team members, multiple authors, multiple teams and/or multi-channel distribution of content, I would not recommend this approach. And multi-team, I would argue, includes the team that builds the project, and the team that maintains it after it’s live, which in many agencies are different teams.

Gutenberg Object Plugin

In late 2018, Roy Sivan, a Senior JavaScript Engineer and recurring Happy Birthday wisher to Ben Meredith, released the Gutenberg Object Plugin, which exposed Gutenberg blocks to the WP REST API.

This plugin exposes Gutenberg block data to the WP REST API so that data saved to pages can be consumed as JSON objects.

Exposing Gutenberg data as JSON is what a lot of developers building decoupled applications want. They want to take the data in a structured form, and pass the data to React / Vue / Native components. This plugin gets things headed in the right direction!

Tradeoff: Lack of Introspection, unpredictable data

But, because of the lack of a server-side registry for Gutenberg blocks, and the non-enforced Schema of the WP REST API, this plugin also suffers from the “just guess” pattern for decoupled applications.

This plugin is unable to register blocks or fields to the WP REST API, so inspecting the Schema leaves decoupled application developers guessing.

If we introspect the REST API Schema from this plugin, we can see that the Schema doesn’t provide any information to the decoupled application developer about what to expect.

Screenshot of the introspection of the Gutenberg Object Plugin REST endpoint

It’s like ordering a “product” from an e-commerce store. The endpoint can return anything at any time, and can change from page to page, request to request.

There’s no contract between the REST endpoints and the consumer application. There’s no scalable way for decoupled application developers to know what type of data the endpoints will return, now or in the future.

Tradeoff: Only available in REST

If you’re building headless applications using WPGraphQL, taking advantage of features that differentiate WPGraphQL from REST, you would not be able to use the Gutenberg Object Plugin in your decoupled application without enduring additional pain points, on top of the lack of introspection.

Caching clients such as Apollo would have to be customized to work with data from these endpoints, and still may not work well with the rest of the application that might be using GraphQL. Additionally, when using REST endpoints with related resources, it becomes the client’s responsibility to determine how to map the various block endpoint data to the components that need the data. There’s no concept of coupling Query Fragments with components, like you can do with GraphQL.

When to use

Again, if you are the developer controlling both sides, the WordPress server and the client application, this approach could work, at least while you’re building the application and the capabilities are fresh in your mind. But in general, this approach can cause some pain points that might be difficult to identify and fix when things go wrong. For example, 6 months down the road, even the person that built the application will likely have forgotten the details, and when there’s a bug, and no contract between the applications to refer to, it can be hard to diagnose and fix.

Even when things break with GraphQL applications (and they do), the explicit nature of GraphQL Queries serve as a “documentation of intent” from the original application developer and can make it much easier for teams to diagnose, down to specific leaf fields, what is broken.

WPGraphQL for Gutenberg

In early 2019 Peter Pristas introduced the WPGraphQL for Gutenberg plugin.

The intent of this plugin is to expose Gutenberg blocks to the WPGraphQL Schema, so that decoupled application developers can use tools such as GraphiQL to inspect which blocks can be queried from the API, and compose GraphQL Queries asking for the specific fields of the specific blocks they want to support.

Now, content creators can publish content with Gutenberg, and decoupled application developers can introspect the Schema and construct queries asking for the specific blocks and fields their application supports.

Decoupled application developers can move at their own pace, independent from the developers and content creators working on the CMS. The decoupled application can specify which blocks are supported, and ask for the exact fields they need. Much like an e-commerce consumer can specify the specific color door lock they want to order from the store! The Schema serves as the contract between the server and the client. Clients can predictably ask for what they want, and get just that in response.

Creating a page

Content creators can use Gutenberg to create pages. In the example below, we see a page in Gutenberg with a Paragraph block and an Image block.

Screenshot showing the Gutenberg editor with a paragraph and image block.

Exploring the Schema

With the plugin installed and activated (for demo’s sake I have WPGraphQL v1.2.5 and WPGraphQL for Gutenberg v0.3.8 active), decoupled application developers can use GraphiQL to browse the Schema to see what Gutenberg Blocks are available to query and interact with.

Screenshot of GraphiQL showing Gutenberg Blocks
Screenshot of GraphiQL showing the CoreParagraphBlock and its fields

Querying the blocks

And using the Schema, developers can construct a query to ask for the blocks and fields that their application supports.

Here’s an example query:

{
  post(id: 6, idType: DATABASE_ID) {
    id
    databaseId
    title
    blocks {
      __typename
      name
      ... on CoreImageBlock {
        attributes {
          ... on CoreImageBlockAttributes {
            url
            alt
            caption
          }
        }
      }
      ... on CoreParagraphBlock {
        attributes {
          ... on CoreParagraphBlockAttributes {
            content
          }
        }
      }
    }
  }
}

And the response:

You can see that the response includes the exact fields that were asked for. No surprises.

{
  "data": {
    "post": {
      "id": "cG9zdDo2",
      "databaseId": 6,
      "title": "Test Gutenberg Post",
      "blocks": [
        {
          "__typename": "CoreParagraphBlock",
          "name": "core/paragraph",
          "attributes": {
            "content": "This is a paragraph"
          }
        },
        {
          "__typename": "CoreImageBlock",
          "name": "core/image",
          "attributes": {
            "url": "http://wpgraphql.local/wp-content/uploads/2021/03/Screen-Shot-2021-03-04-at-12.11.53-PM-1024x490.png",
            "alt": "Jason Bahl, dressed in character as JamStackMullet with a Mullet wig and sunglasses, watches the WP Engine Decode conference",
            "caption": "Screenshot of the JamStackMullet watching WP Engine Decode conference"
          }
        }
      ]
    }
  }
}
Screenshot of a query for a post and some blocks using WPGraphQL for Gutenberg.

GraphQL Schema as the contract

Having the GraphQL Schema serve as the contract between the client and server allows each part of the application to move forward at its own pace. There’s now an agreement for how things will behave. If the contract is broken, for example, if the server changed the shape of one of the Types in the GraphQL Schema, it’s easily identifiable and can be fixed quickly, because the client specified exactly what was needed from the server by way of a GraphQL Query.

This removes the “just guess” pattern from decoupled application development with Gutenberg.

Teams that know nothing about WordPress can even make use of the data. For example, a data warehouse team, a native mobile team, a print team, etc. The GraphQL Schema and tooling such as GraphiQL frees up different teams to use the data in their applications how they want.

Client in control

With clients querying Gutenberg blocks as data, this gives clients full control over the presentation of the blocks. Whether the blocks are used in a React or Vue website, or used for a Native iOS app that doesn’t render HTML, or used to prepare a newspaper for print, the client gets to ask for the fields that it needs, and gets to decide what happens with the data. No unexpected changes from the server, the client is in control.

Tradeoffs: Scaling issues

While WPGraphQL for Gutenberg gets us much closer to being able to query Gutenberg blocks as data, it unfortunately has a dependency that makes it very difficult to scale, and it comes back, again, to the lack of a proper server side registry for blocks.

Since Gutenberg Blocks aren’t registered on the server, WPGraphQL for Gutenberg has a settings page where users must click a button to “Update the Block Registry”.

Screenshot of the WPGraphQL for Gutenberg settings page

Clicking this button opens up Gutenberg in a hidden iFrame, executes the JavaScript to instantiate Gutenberg, gets the Block Registry from Gutenberg initialized in JavaScript, sends the list of registered blocks to the server and stores the registry in the options table of the WordPress database. The registered blocks that are stored in the database are then used to map to the GraphQL Schema.

Peter Pristas deserves an award, because this approach is a very creative solution to the frustrating problem of Gutenberg not respecting the WordPress server.

Unfortunately this solution doesn’t scale well.

Since Gutenberg blocks are registered in JavaScript, this means that the JavaScript to register any given block might be enqueued from WordPress on only specific pages, specific post types, or other unique individualized criteria.

That means the JavaScript Block Registry for Page A and Page B might be different from each other, and maybe also different from the registry for Post Type C or Post Type D. So loading one page in an iframe to get the block registry might not get the full picture of what blocks are possible to interact with in a decoupled application.

In order for the block registry that is generated from the iframe to be accurate, every page of every post type that Gutenberg is enabled on in the site would need to be loaded by iframe to account for cases where blocks were registered to specific contexts. Yikes!

Tradeoffs: Schema design issues

In addition to the scaling issues, there are some concerns with some of the Schema design choices. I’ll even take the blame for some of this, as I had many conversations with Peter as he worked on the plugin, and he followed my lead on some Schema design choices that, in hindsight, weren’t great.

One issue is infinite nesting. Gutenberg blocks, as previously discussed, can sometimes have nested inner blocks. In WPGraphQL for Gutenberg, querying inner blocks requires explicit queries, and without knowing what level of depth inner blocks might reach, it’s difficult to compose queries that properly return all inner blocks.

WPGraphQL used to expose hierarchical data in a similar way, but has since changed to expose it, such as Nav Menu Items, in flat lists. This allows nested data of any depth to be queried and then re-structured into a hierarchy on the client.

The unlimited depth issue is commonly reported for projects such as Gatsby Source WordPress.

When to use

If Gutenberg is a requirement for your headless project, this might be a good option, as it allows you to query Gutenberg blocks as structured data. You gain a lot of the predictability that you miss with the other options, and can benefit greatly from features of GraphQL such as Batch Queries, coupling Fragments with components, and more.

So while WPGraphQL for Gutenberg is probably the closest option available for being able to predictably query Gutenberg blocks as data in decoupled applications, there are some serious questions in regards to production readiness, especially on larger projects, and you should consider these issues before choosing it for your next project.

Tradeoffs in mind, agencies such as WebDevStudios are using this approach in production, even for large sites.

Progress for the server side block registry

In 2020, some progress was made in regards to a server side registry for Gutenberg blocks.

While the official Gutenberg documentation still shows developers how to create new blocks entirely in JavaScript with no server awareness, the core Gutenberg blocks have started transitioning to have some data registered on the server.

You can see here that (as of Gutenberg 5.6.2, released in February 2021) core Gutenberg blocks are now registered with JSON files that can be used by the PHP server as well as the JavaScript client.

These JSON files are now used to expose blocks to the WP REST API.

This is progress!

Inner blocks, inner peace?

Unfortunately it’s not all the progress needed to have meaningful impact for decoupled applications to use Gutenberg. There’s a lot of information that a decoupled application would need about blocks that is not described in the server registry. One example (of many) being inner blocks.

Gutenberg has a concept called “Inner Blocks”, which are blocks that can have other blocks nested within them. For example, a “Column” block can have other blocks nested within each column, while other blocks, such as an Image block, cannot have nested inner blocks.

The bit of server side registry that is now available for core Gutenberg blocks doesn’t declare this information. If we take a look at the Column block’s block.json file, we can see there’s no mention of inner blocks being supported. Additionally, if we look at the Image block’s block.json file, we don’t see any mention of inner blocks not being supported.

In order for a decoupled application, such as the official WordPress iOS app, to know what blocks can or cannot have inner blocks, this information needs to be exposed to an API that the decoupled application can read. Without the server knowing about this information, decoupled applications cannot know this information either.

So, while there’s been a bit of a migration for the core WordPress blocks to have some server (and REST API) awareness, there’s still a lot of missing information. Also, the community of 3rd party block developers is still being directed to build blocks entirely in JavaScript, which means that new blocks will have no server awareness until the server registry becomes more of a 1st-class citizen for Gutenberg.

What’s next?

The beginnings of a move toward a server-side registry gives hope, and gives a bit of a path toward blocks being properly introspect-able and useful by decoupled teams and applications.

Specification for Server Side Registering Blocks

I believe that the step forward for Gutenberg + decoupled applications is to come up with a specification for how Gutenberg blocks can be registered on the server to work properly with server APIs.

Once a specification is discussed, vetted, tested and published, the WP REST API, WP CLI and WPGraphQL, and therefore decoupled applications such as the WordPress native mobile app, would all make use of the spec to be able to interact with Gutenberg blocks.

I don’t fully know what this spec needs to look like, but I believe it needs to exist in some form.

Projects such as Gutenberg Fields Middleware from rtCamp, ACF Blocks, and Genesis Custom Blocks all take a server-first approach to creating new Gutenberg blocks, and I think there’s a lot to learn from these projects.

The blocks from these tools are created in a way that the WordPress server knows what blocks exist, what attributes and fields the blocks have, and the server can then pass the description of the blocks to the Gutenberg JavaScript application, which then renders the blocks for users to interact with.

Since the server provides the Gutenberg JavaScript application with the information needed to render the blocks to a content producer, this means the server can also provide the information to other clients, such as the native mobile WordPress app, or teams building decoupled front-ends with Gatsby, Gridsome or NextJS.

The future of decoupled Gutenberg

I believe that with a proper specification for registering blocks on the server, Gutenberg can enable some incredibly powerful integrations across the web.

My thoughts are that we can arrive at a specification for registering blocks that can enable block developers to provide pleasant editing experiences, while providing decoupled application developers with the ability to Introspect the GraphQL API, predictably write GraphQL Queries (and Mutations) to interact with blocks, and get predictable, strongly typed results that can be used in decoupled applications.

In an effort to start discussing what the future of a Gutenberg Block Server Registry Specification like this might look like, I’ve opened the following Github issue: https://github.com/wp-graphql/wp-graphql/issues/1764

If this topic interests you, and you’d like to be involved in discussing what such a specification might look like, please contribute ideas to that issue. Additionally, you can join the WPGraphQL community on Slack, visit the #gutenberg channel, and discuss it there.

My weekend release snafu

sna•fu
  noun
    a confused or chaotic state; a mess.

This past weekend I released v1.2.0 of WPGraphQL.

And 1.2.1, and 1.2.2, and 1.2.3, and 1.2.4, and 1.2.5.

I made some changes to the Github repo that caused deploys to not behave as intended, and to call it a “snafu” might be an understatement.

Plugin distribution

For the first 4 years of WPGraphQL, the plugin was not distributed on the WordPress.org plugin repository, one of the most common distribution channels for WordPress plugins. The plugin has been versioned on Github and primarily distributed on Github and Packagist.org. Users that wanted to install WPGraphQL would either clone the plugin or download the zip from Github, or use Composer to install the plugin from packagist.org.

Vendor Dependencies

Downloading a Zip from Github was a pretty common way for folks to get the plugin, and I wanted users to be able to download the Zip, install it to WordPress, and be immediately productive without having to run composer commands to install dependencies. So, I made the decision to version the plugin’s external dependencies in the repository, even against the advice of Composer.

For quite some time the plan was to deploy the plugin to WordPress.org after the v1.0 release, and then stop versioning the Composer dependencies as Github would no longer be the primary method of distribution.

WPGraphQL reached v1.0 in November and has been distributed on WordPress.org since, so I began to work on implementing the plan to no longer version the vendor dependencies in the Git repo.

Github Workflows

I already had a Github Workflow that installed the vendor dependencies and deployed to WordPress.org using the fantastic WordPress deploy action from 10up.

This workflow was in place starting in v1.0 of the plugin.

The workflow installs composer dependencies, then deploys the plugin to WordPress.org.
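The relevant part of such a workflow looks roughly like the sketch below. This is not the actual workflow file, just a hedged outline; the real workflow has more steps and the exact action inputs may differ:

# .github/workflows/deploy.yml (sketch)
name: Deploy to WordPress.org
on:
  release:
    types: [ published ]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Install PHP dependencies
        run: composer install --no-dev --optimize-autoloader
      - name: Deploy to WordPress.org
        uses: 10up/action-wordpress-plugin-deploy@stable
        env:
          SVN_USERNAME: ${{ secrets.SVN_USERNAME }}
          SVN_PASSWORD: ${{ secrets.SVN_PASSWORD }}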

I was pretty sure that since the Workflow already included the composer install --no-dev step prior to the deploy action that I was already good to go. All I needed to do was add a new part of the workflow to create a Zip file of the plugin, including the installed dependencies, and upload that compiled Zip to the release as a release asset, so that users that still want to be able to download a zip from Github could do so.

I added that new part of the workflow.

The next step was to remove the vendor directory and tell .gitignore that I no longer want to version that directory. This would allow me and other contributors to install the vendor dependencies while working locally, but not worry about committing them to the repo. So, I updated the .gitignore to ignore the vendor directory.

Broken release!

I released v1.2.0. And shortly after, I was told that it was breaking things for users.

Users were reporting seeing the following screen:

Screenshot showing the wp_die() screen users were reporting.

I checked out the deploy action to see if it failed for some reason.

Updating Deploy Action

Github was reporting that my arguments for the “generate-zip” option on the 10up deploy action were not accurate.

Screenshot showing a warning from Github about the “generate-zip” option

I thought perhaps this was causing something to not work quite right. So I checked the 10up action docs, and it appeared this option wasn’t needed, so I removed the option, re-released, and watched the deploy process.

Still no luck. WordPress.org was missing files.

Aha!

The error that was reported to me made it clear that WordPress.org was excluding the vendor directory from the plugin, but I wasn’t sure why, as the workflow runs composer install before deploying.

I dug into the code for the 10up action, and realized that if a .distignore isn’t present, then it uses the git archive command, which ignores files listed in the `.gitignore` and `.gitattributes` files.

Aha!

This means that when I added the vendor directory to the .gitignore file to stop versioning the directory, the deploy action was leaving that directory out of what gets deployed to WordPress.org. So my step in the workflow that runs composer install was being nullified by the update to the .gitignore file.

:man-facepalming:

I did more investigating and found that the 10up action reads the .distignore file, if it exists. This was the missing piece!

I can use a .gitignore to ignore files used in local development from being versioned in Git, but separately configure what to ignore for distribution.

So, I added a .distignore file that ignores a lot of files that are useful for development but are not needed to run the plugin, and I configured this file to NOT ignore the vendor directory.
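A sketch of what such a .distignore can contain (the real file lists more entries; the important part is that vendor is not in it):

# .distignore – excluded from the build that ships to WordPress.org / release zips
.git
.github
.gitignore
.distignore
node_modules
tests
docs
# vendor is intentionally NOT listed, so the built Composer dependencies are shipped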

Success!

Now the plugin was deploying to WordPress.org with the vendor dependencies.

Fixed! But, still broken.

I had confirmation that installing from WordPress.org was working for users. It was fixed!

But now I was getting reports that users installing the plugin from Composer, specifically as a dependency of Trellis / Bedrock, were running into the same wp_die() screen that folks reported seeing when installing from WordPress.org while the vendor directory was missing.

So this seemed to mean that installing the plugin from Composer was also excluding the vendor dependencies?

Ooph!

I spun up a Trellis environment locally. It was super easy by the way – I typically use localwp.com for my local WordPress installs, but Trellis made it a breeze to get a local WordPress install running. Kudos to Ben and the other contributors of the project!

From my Trellis-built WordPress environment, I installed WPGraphQL from WordPress.org, and it worked fine.

I deleted the plugin and installed from wpackagist (a WordPress-specific Composer repository) and it worked fine.

But then, I installed the plugin from packagist.org, and I was met with the wp_die() screen others had reported.

Composer Dependencies!

It turns out that Composer installs the dependencies in the vendor directory of the parent project. I knew this, but my brain didn’t want to connect these dots.

This means that this code here was problematic.

Since the vendor directory was previously always installed in the wp-graphql/vendor directory, as it was versioned with the plugin, the file_exists() check was always true.

Now that the vendor directory isn’t versioned in the plugin, it can be installed anywhere within the project, so this check isn’t always true anymore.

When installing WPGraphQL from Packagist, the dependencies are not going to be installed in the wp-graphql/vendor, but instead in the vendor directory of the project that’s including WPGraphQL as a dependency.

So, I was able to update this part of the code to use the autoload from the WPGraphQL plugin if it exists, which it will when installing from WordPress.org or downloading the Zip from the Github release, and otherwise check for the existence of the dependency class (GraphQL\GraphQL) to make sure dependencies are installed whether in the plugin or the parent project.
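Conceptually, the updated check looks something like this (a sketch, not the exact code from the plugin):

// Use the plugin's own autoloader when it exists (WordPress.org installs, Github release zips).
if ( file_exists( __DIR__ . '/vendor/autoload.php' ) ) {
    require_once __DIR__ . '/vendor/autoload.php';
}

// Otherwise the parent project's Composer autoloader should provide the dependencies.
// If they still can't be found, warn instead of fataling.
if ( ! class_exists( 'GraphQL\GraphQL' ) ) {
    add_action( 'admin_notices', function () {
        echo '<div class="notice notice-error"><p>WPGraphQL: Composer dependencies are missing.</p></div>';
    } );
    return;
}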

Back in business!

Now, the plugin deploys to WordPress.org fine, can be installed with Composer from WPackagist and Packagist, and follows the recommendation of Composer to not version the dependencies in the Git repo.

Setting up a new developer environment to contribute to WPGraphQL

I just announced that I am now employed by WP Engine to work on WPGraphQL.

With new employment comes a new Macbook, which I need to setup as a dev machine to continue working on WPGraphQL.

It’s always a tedious process to get a new computer setup to be an effective developer, so I thought I’d record all the steps I take, as I take them, and hopefully provide some help to others.

Local WordPress Environment

One of the first things I need to do to work on WPGraphQL, is have a local WordPress environment.

For the past 3 years or so, my preferred way to set up WordPress locally has been to use Local, a desktop application that makes it easy to set up WordPress sites with a few button clicks.

I enjoy Local so much, I even picked it as my “Sick Pick” on the Syntax.fm episode about WordPress and GraphQL!

When working locally, I usually have a number of different WordPress sites with different environments. For example, I have a site that I use locally to test WPGraphQL with WPGraphQL for Advanced Custom Fields, and another environment where I test things with WPGraphQL and WPGraphQL for WooCommerce. Having different sites allows me to separate concerns and test different situations in isolation.

However, the constant is WPGraphQL. I want to be able to use the same version of WPGraphQL, that I’m actively making changes to, in both environments.

This is where symlinking comes in.

In the command line, I navigate to my local site’s plugins directory. For me, it’s at /Users/jason.bahl/Local Sites/wpgraphql/app/public/wp-content/plugins

Then, with the following command, I symlink WPGraphQL to the Local WordPress site: ln -s /Users/jason.bahl/Sites/libs/wp-graphql

This allows me to keep WPGraphQL cloned in one directory on my machine, but use it as an active plugin on many Local WordPress sites. As I create more sites using Local, I follow this same step, and repeat for additional plugins, such as WPGatsby or WPGraphQL for Advanced Custom Fields.

XDebug for PHPStorm Extension

PHPStorm is my IDE of choice, and Local provides an extension that makes it easy to get PHPStorm configured to work with XDebug. I recommend this extension if you use Local and PHPStorm.

TablePlus Extension

I used to use SequelPro, but have been transitioning to use TablePlus, and Local has a community extension that opens Local databases in TablePlus.

PHPStorm

For as long as I’ve been working on WPGraphQL, PHPStorm has been my IDE of choice. I won’t get into the weeds, and you are free to use other IDEs / Code Editors, but I find that PHPStorm makes my day to day work easier.

Pro tip: To save time configuring the IDE, export the settings from PHPStorm on your old machine and import them on your new machine.

SourceTree

SourceTree is a free GUI tool for working with code versioned with Git. While Git is often used in the command line, sometimes I like to click buttons instead of write commands to accomplish tasks. I also find it super helpful to visualize Git trees to see the status of various branches, etc. I find the code diffs easier to read in SourceTree than in the command line too, although I like Github’s UI for code diffs the best.

In any case, I use SourceTree daily. I think it’s fantastic, and you can’t beat the price!

Note: If you try using SourceTree before using Git in the command line, it might fail. This is because you need to add github.com (or whatever git host you use) to your ssh known hosts. You can read more about this here.

MySQL

Local sets up MySQL for each site, but for running Codeception tests for WPGraphQL, I like to have a general MySQL install unassociated with any specific Local site that I can configure for Codeception to use.

I downloaded and installed MySQL v5.7.26 for macOS here.

I then ensured that I updated my .zshrc file to include this export, as described here, to ensure the mysqld command will work.

TablePlus

I used to use SequelPro, but it’s been deprecated, so I’ve begun using TablePlus. You can download it here.

Docker Desktop

WPGraphQL ships with a Docker environment that developers can spin up locally, and the tests also have a Docker environment so they can be run in isolation.

In order to spin up the local Docker environment or run tests with Docker, Docker Desktop needs to be installed and logged into.

Homebrew

Homebrew is a package manager for MacOS (or Linux). It makes it easy to install packages that are useful for development on a Mac.

I used Homebrew to install the below packages.

Command Line Tools for XCode

This is something I seem to forget almost any time I set up a new Mac. When trying to install things from the command line, I’m always prompted to install Command Line Tools for Xcode and agree to their licensing. For me, as I was installing Homebrew, I was prompted to Download and Install this. If you want to install it separately, follow these instructions.

Git

Since WPGraphQL is maintained on Github, Git is essential to my daily work.

With Homebrew installed, I use it to install Git, which is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency.

Having git installed locally allows me to clone repositories from Github to my local machine, make commits to code, and push code back up to Github.

In order to use Git with 2-Factor Authentication enabled, I also had to get SSH keys setup for Github.

Composer

Composer is a PHP package manager. WPGraphQL uses Composer for test dependencies, so it’s important to have Composer installed in order to run tests. I used the command brew install composer to install Composer.

Note: I also had to make sure I was running a version of PHP with the zip module enabled, so I followed these steps to get that working.

Node & NVM

Since I do a lot of work with JavaScript applications, such as Gatsby and the WP Engine Headless Framework, having Node installed locally is a must, and having nvm (Node Version Manager) to allow switching Node versions quickly is very helpful.

I followed this guide to get Node and NVM installed using Homebrew.

Time to contribute!

Now that I have my local environment setup and all my regular tools, I’m ready to contribute to WPGraphQL again!

What’s next for WPGraphQL?

On February 1, I announced that I was no longer employed at Gatsby, and stated a blog post would be coming soon.

This is that blog post.

TL;DR

I’m joining WP Engine as a Principal Software Engineer where I will continue maintaining WPGraphQL and will contribute to other projects and initiatives centered around the goal of making WordPress the best headless CMS.

Below I will expand a bit more on “Why WP Engine?”, but first, I’d like to take a moment to reflect on my time at Gatsby and acknowledge how important Gatsby is to the future of headless WordPress.

WPGraphQL and Gatsby

I am incredibly thankful for the opportunity I had to work at Gatsby to push WPGraphQL forward. Gatsby’s investment in WPGraphQL led to a lot of growth and maturation of the project.

Project Growth and Maturation

I joined Gatsby in June 2019, and WPGraphQL has grown and matured substantially since then.

Community Growth

In addition to the growth and maturation of the core WPGraphQL plugin, the community around it has also grown.

While I believe WPGraphQL would have seen growth in the community regardless, I believe we can attribute at least some of this growth to Gatsby’s investment in WPGraphQL. Gatsby’s investment in WPGraphQL signaled that it wasn’t just a hobby project, but was solving real problems for real users, and users should have confidence using it in their projects.

Since I joined Gatsby to work on WPGraphQL and collaborate with Tyler Barnes on WPGatsby and Gatsby’s new WordPress Source Plugin, the JavaScript ecosystem has paid much more attention to using WordPress as a headless CMS, and the WordPress community has gotten more comfortable using WordPress in ways it hadn’t before.

Many agencies, developers and site owners now consider WPGraphQL an essential part of their stack.

WordPress plugin developers have now created more than 30 WPGraphQL extensions, and there are now more than 1,500 people in the WPGraphQL Slack!

Agencies such as Zeek Interactive, WebDev Studios, 10up and Postlight use and recommend WPGraphQL for headless WordPress projects.

Websites such as gatsbyjs.com, qz.com, denverpost.com, diem.com, apollographql.com, bluehost.com, rudis.com and many more are using WPGraphQL in production.

So, why leave Gatsby?

Gatsby has been incredibly generous in funding open source developers to work on projects related to, but not part of Gatsby. For example, John Otander was working on MDX, Rikki Schulte was working on GraphiQL, and I was working on WPGraphQL.

I was the last remaining of those engineers still working primarily on a project that tangentially, but not directly, benefits Gatsby.

WordPress is only one part of Gatsby’s story. Gatsby can work well with just about any data source. Some popular non-WordPress choices are Contentful, Sanity, DatoCMS, Shopify, among many others.

The team I was part of was asking me to start transitioning to work more on other Gatsby integrations, such as Contentful and Shopify, and work less on WordPress and WPGraphQL. This doesn’t mean Gatsby was abandoning WordPress or WPGraphQL, just that I would need to spend less time on it and prioritize other things. There’s nothing wrong with this. There’s a lot of sound decision making to this when it comes to making Gatsby a sustainable business.

I feel that right now is a unique time in history, where more investment in WordPress as a headless CMS can change the future of WordPress. I believe WordPress is now more respected as a viable option for a headless CMS, and with the momentum of WPGraphQL and technologies like Gatsby, NextJS, and others, I need to spend more time focusing on WPGraphQL and headless WordPress, not less.

Fortunately for me, WP Engine is investing in the future of headless WordPress, and they see WPGraphQL as an important part of that future.

As ironic as it may sound, I believe that my departure from Gatsby will actually strengthen the WordPress + Gatsby integration.

Instead of one person splitting focus between the Gatsby side and the WordPress API side of the integration, this move allows Gatsby to hire a backfill for my position to work specifically on the Gatsby side of integrations, without having to worry about the WordPress server API side of things. This lets the team narrow its focus and deliver higher-quality code on the Gatsby side of the Gatsby + WP integration.

I intend to continue working with Tyler Barnes and the Gatsby Integrations and Collaborations team to ensure that users of Gatsby + WPGraphQL feel supported and productive. Gatsby + WPGraphQL will continue to play a big role in the future of Headless WordPress, and I’m here for it.

Why WP Engine?

Serendipity, at least to some degree.

Within a few weeks of having conversations about needing to start focusing less on WPGraphQL at Gatsby, I discovered that WP Engine was building a headless WordPress framework and was hiring engineers to focus on headless WordPress. The job description felt like it was describing me, almost perfectly. Serendipity.

A few years ago, prior to my time at Gatsby, I was interested in a position at WP Engine. But at the time there was a hard requirement for employees to be in Austin, TX, and I have so many friends and family members in Denver that I have no plans to move unless I absolutely have to. WP Engine no longer requires employees to be in Austin, so I can now work for WP Engine without needing to move. Serendipity.

Along with the serendipitous alignment of the stars, WP Engine is a generally attractive employer.

WP Engine is a leader in the WordPress space. I’ve trusted WP Engine to host many sites I’ve worked on over the last decade, including WPGraphQL.com and jasonbahl.com.

While WP Engine’s primary business is managed WordPress hosting, it also invests in a lot of products and projects that make it easier for businesses to run their sites on WordPress.

Projects such as LocalWP (which I gave a shout-out to on Syntax.fm in July 2019) and Genesis Blocks are thriving under WP Engine, and I believe that WPGraphQL can continue to mature and thrive with WP Engine’s support.

WP Engine’s investment in headless WordPress isn’t limited to me joining to continue working on WPGraphQL and other headless WordPress projects. There will be more hires and projects aimed at reducing the friction of using WordPress as a headless CMS, and allowing businesses to get started and move faster within that context.

I believe that WP Engine’s investment in this space will allow WPGraphQL to grow and mature faster than ever before, as I will be part of a larger team working to make WordPress the best it can be.

So, does WP Engine own WPGraphQL?

Before my time at Gatsby, during my time at Gatsby, and now as I transition to working at WP Engine, WPGraphQL has been, and will continue to be, operated and maintained as a free, open-source community plugin benefitting anyone using WordPress.

WP Engine pays my salary, and in exchange I will be maintaining WPGraphQL and helping grow the headless WordPress ecosystem, reducing friction in many different ways.

What’s next for WPGraphQL?

I can’t officially commit to any of these things quite yet, but some things I have on my radar to tackle in the near future include, but are not limited to:

  • Significant updates to WPGraphQL for Advanced Custom Fields
  • Updates to the GraphiQL IDE that ships with WPGraphQL (for example, testing as both a public and an authenticated user)
  • Introduce new Custom Scalars (datetime, HTML, among others)
  • Add Support for Image Uploads
  • Update Schema surrounding Media
    • Introduce a MediaItem interface and different GraphQL Types for Image, Video, etc
  • New tooling to help developers move faster
    • Query / error logging
    • Breaking change notifications
    • Persisted Queries
    • Query Complexity configuration and analysis
  • WPGraphQL Subscriptions (real time updates when data changes)
  • Component library(s) using WPGraphQL Fragments
  • More tutorials, videos, blog posts about using WPGraphQL in various contexts

I’m excited to get started at WP Engine and work on the next chapter of WPGraphQL and headless WordPress! I hope to have a more formal roadmap to discuss with the community in the near future, once I get settled as a WP Engine employee.

I’m so thankful to the community that has embraced WPGraphQL. I feel so much love and appreciation from thousands of developers that are using, contributing to, and providing feedback for WPGraphQL.

I’ve made many genuine friends from the WPGraphQL community and I am so thankful that this next chapter of my career allows me to continue working in this community.