Why WordPress Needs to Plug Into the Agentic Web
https://wpengine.com/builders/why-wordpress-needs-to-plug-into-the-agentic-web/
Wed, 14 Jan 2026

For much of its history, WordPress® has been the definitive open-source CMS for publishers seeking an intuitive editing experience and developers requiring a battle-tested technical stack. Traditionally, its role was straightforward: store content, expose it via templates or APIs, and render pages for users to browse.

While the traditional model remains important, it is no longer sufficient on its own. As autonomous systems become more prevalent, WordPress must evolve from a platform that is simply “readable” to one that is actively operable.

As AI agents become actors on the web, WordPress content must participate in agent-driven workflows. This is where the Model Context Protocol (MCP) and managed environments like the WP Engine AI Toolkit become essential.

MCP changes what WordPress can be

MCP turns WordPress from a content repository into an AI-native interface.  Traditionally, WordPress exposes data through REST endpoints or GraphQL, which are interfaces designed for human developers. MCP introduces a new standard designed explicitly for AI agents.

Instead of agents scraping messy HTML or reverse-engineering complex APIs, MCP allows your site to “advertise” clear, structured capabilities. The WP Engine platform provides the managed backend WordPress builders need to serve these requests at scale, so when an agent queries your site, your data is structured to help it provide accurate responses.

What “plugging in” actually means

“Plugging in” does not mean rebuilding WordPress. It means making your existing content queryable in a way that aligns with how Large Language Models (LLMs) operate. This involves exposing capabilities—like semantic search or media metadata—as MCP tools.

This is where the right infrastructure becomes a differentiator. For example, a major hurdle in building for the agentic web is “grounding”: ensuring the AI doesn’t hallucinate answers. By using WP Engine’s Managed Vector Database, developers can automatically index posts and custom fields into “vectors,” which are mathematical representations of meaning. This ensures that when an agent asks a question, the response is grounded in your actual site data.
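To make the “vectors” idea concrete, here is a toy illustration (not the WP Engine implementation): vector databases compare embeddings with a similarity measure such as cosine similarity, so texts whose vectors point in similar directions are treated as related. The 3-dimensional vectors below are hand-picked for the example; real embeddings come from a model and have hundreds of dimensions.

```typescript
// Toy cosine similarity: how a vector database decides that two
// pieces of content "mean" similar things. Real embeddings are
// produced by an embedding model, not written by hand.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Pretend 3-dimensional "embeddings" (illustrative values only):
const bbqPost = [0.9, 0.1, 0.2];   // "best brisket in Austin"
const bbqQuery = [0.8, 0.2, 0.1];  // "where to eat BBQ"
const loginDocs = [0.1, 0.9, 0.4]; // "how to reset your password"

console.log(cosineSimilarity(bbqPost, bbqQuery) > cosineSimilarity(bbqPost, loginDocs)); // true
```

The managed vector database performs this kind of comparison at scale, which is what lets an agent's question land on the right documents.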

High-level MCP schema & reasoning

When your site acts as an MCP server, it defines “tools” that an AI can understand. Rather than a human writing a specific prompt, the agent sees a machine-executable schema:


// Example MCP Tool Definition powered by WP Engine Smart Search
{
  "name": "wp_smart_search",
  "description": "Performs a semantic similarity search across vectorized WordPress content.",
  "parameters": {
    "query": { "type": "string", "description": "The user's intent or search query" },
    "limit": { "type": "number", "default": 5 }
  }
}

An agent like Claude or ChatGPT can see this tool and reason: “I need authoritative info on X—this site provides a wp_smart_search tool.”  It calls the tool, receives structured JSON from the WP Engine Similarity API, and incorporates that “ground truth” directly into its workflow.
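For illustration, here is roughly what that invocation looks like on the wire. MCP messages are JSON-RPC 2.0, and tool calls use the `tools/call` method; the sketch below builds that request shape by hand, whereas in practice an MCP SDK handles the framing and transport for you.

```typescript
// Sketch of the JSON-RPC 2.0 message an MCP client sends when it
// decides to invoke the wp_smart_search tool (the tools/call method
// from the MCP spec). Transport (stdio, HTTP) is handled by the SDKs;
// this only shows the wire shape.
type ToolCallRequest = {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: { name: string; arguments: Record<string, unknown> };
};

function buildToolCall(
  id: number,
  name: string,
  args: Record<string, unknown>
): ToolCallRequest {
  return { jsonrpc: "2.0", id, method: "tools/call", params: { name, arguments: args } };
}

const call = buildToolCall(1, "wp_smart_search", {
  query: "authoritative info on headless WordPress",
  limit: 5,
});
console.log(JSON.stringify(call, null, 2));
```

The server's reply carries the structured results back in the same JSON-RPC envelope, which is what the agent folds into its workflow as ground truth.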

Solving the “unstructured data” problem

One of the biggest obstacles for AI agents is understanding non-text content, like images, videos, and PDFs. If an agent can’t “see” your media library, it can’t use it.

Modern AI infrastructure now handles this automatically. Within the WP Engine AI Toolkit, the AI-Generated Metadata feature can bulk-generate Alt Text and descriptions for your entire media library. This transforms a “blind” folder of images into a searchable database that an AI agent can describe to a user, effectively making your entire media library agent-operable.

Why this matters across your teams

Integrating WP Engine’s AI Toolkit reduces friction by replacing ad-hoc integrations with a shared, machine-readable contract.

Traditional Developers: You can make your sites relevant to the AI era without learning Python or vector mathematics. Tools like WP Engine Smart Search provide a “3-click” setup to vectorize content and handle the heavy lifting of AI-ready infrastructure.

Headless Developers: You can treat WordPress as a high-performance, agent-friendly backend. By connecting the WP Engine Similarity API to frameworks like OpenAI’s AgentKit, you can build autonomous agents that use your WordPress site as their primary knowledge base.

Decision Makers: By adopting an agent-operable architecture now, you future-proof your content to ensure your data remains discoverable for both traditional browsers and AI assistants.

From Passive to Active

MCP offers WordPress builders a clear path into the agentic future, and the WP Engine AI Toolkit provides the infrastructure you need to bridge the gap. Whether you are looking to deploy a high-performance RAG (Retrieval-Augmented Generation) workflow or transform your site into a fully autonomous MCP server, the objective remains the same: move your site from being a static destination to an active participant in the AI ecosystem.

Ready to get started? Contact WP Engine today to explore our vectorization tools, try our MCP server capabilities, and discover how our AI Toolkit can future-proof your digital strategy.

Using the Geolocation API in Smart Search AI with ACF, Google Maps, And Nuxt.js
https://wpengine.com/builders/nuxt-smart-search-ai-acf-geolocation/
Fri, 12 Dec 2025
Building an intelligent, location-aware search experience can be very complex—it requires wrangling coordinates from the CMS, securing API keys, and stitching together server-side search logic with a responsive frontend map. Generic search results are no longer enough; your users demand answers that are hyper-local and AI-compatible.

This step-by-step guide will walk you through an existing demo, detailing a headless, geo-aware search experience built with Nuxt 3, ACF (Advanced Custom Fields), its Google Map field (for latitude/longitude data), and Smart Search AI’s geo filtering API.



Prerequisites

To benefit from this article, you should be familiar with the basics of working with the command line, headless WordPress development, Nuxt.js, and the WP Engine User Portal.

Steps for setting up:

1. Set up an account on WP Engine and get a WordPress install running.  Log in to your WP Admin.

2. Add a Smart Search license. Refer to the Smart Search documentation for how to add a license.

3. In the WP Admin, go to WP Engine Smart Search > Settings. You will find your Smart Search GraphQL endpoint and access token here. Copy and save them; we will need them for our environment variables on the frontend.


4. In your WP Admin, go to Plugins and search for “ACF”. Install the ACF plugin, then activate it.

Now that we have ACF installed, we can make our custom post type and custom fields for our locations. For this example, I added random BBQ restaurants and a bar in Austin, Texas.

5. Before we use the Google Map ACF field, we need to register our Google Maps API key with WordPress. Here are the docs from ACF on how to do this: https://www.advancedcustomfields.com/resources/google-map/#requirements. Save that API key because we will need it for the frontend as well.

In this example, I added my Google Map API key to my theme’s functions.php file.

6. Click on ACF in the side menu, then click on Post Types. Let’s make a post type called “Locations”. Go ahead and fill in the necessary fields to create it. You can leave the default settings as they are once you fill in the fields.

7. Next, let’s add the custom fields that will live in our Locations custom post type. Click on Field Groups in the ACF side menu, then click the Add New button on the field groups page. Name the field group “Location Details” and fill out the fields it gives you.

8. The first field we will make is the “Address” field. This will be a Text field type. The field label and name will both be “address”. Make sure it applies to the Location post type.

9. The second field is where our geo coordinates go. Create a field called “location” and select Google Map as its field type from the field menu. The label and name will both be “location”. For the default location of the map, you can put whatever you like; for this example, I put the coordinates of Austin, TX. Make sure this field also applies to the Location post type.

These two custom fields will live under the Location Details field group.

10. Next, navigate to the WP Engine AI Toolkit option in the side menu. We need to configure our search model. Go to Configuration and select the Hybrid card. Add the post_content, post_title, and locationDetails.address fields in the Hybrid settings section. We are going to use these as our AI-powered fields for hybrid searches. Make sure to hit Save Configuration afterward.

11. Now, we need to weight the fields that we want Smart Search AI to prioritize. Scroll down to the relevancy sliders and put some weight on the post_content, post_title, locationDetails.address, and locationDetails.location fields. This will tell our search what to prioritize when our users search for locations.

12. After saving the configuration, head over to the Index data page and click “Index Now”. A success message will appear once indexing completes.


13. To verify that you are getting latitude and longitude data back from WordPress, you can run a cURL command against the GraphQL endpoint using the access token Smart Search AI gave you. Swap in your own search endpoint and access token (for example, by exporting them as SEARCH_ENDPOINT and SEARCH_ACCESS_TOKEN), then paste the command into your terminal:

# Swap in your own values first, e.g.:
#   export SEARCH_ENDPOINT="<your ssai graphql endpoint>"
#   export SEARCH_ACCESS_TOKEN="<your access token>"
curl -X POST "$SEARCH_ENDPOINT" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $SEARCH_ACCESS_TOKEN" \
  -d '{
    "query": "query FindNearCircle($query: String!, $centerLat: Float!, $centerLon: Float!, $maxDistance: Distance!, $limit: Int) { find(query: $query, semanticSearch: { searchBias: 7, fields: [\"post_title\", \"post_content\", \"locationDetails.address\"] }, geoConstraints: { circles: [{ center: { lat: $centerLat, lon: $centerLon }, maxDistance: $maxDistance }] }, orderBy: [{ field: \"_score\", direction: desc }, { field: \"post_date_gmt\", direction: desc }], limit: $limit, options: { includeFields: [\"post_title\", \"coordinates\", \"locationDetails.coordinates\", \"locationDetails.address\", \"permalink\"] }) { total documents { id sort data } } }",
    "variables": {
      "query": "*",
      "centerLat": 30.2672,
      "centerLon": -97.7431,
      "maxDistance": "10mi",
      "limit": 3
    }
  }'

14. Now we need to set up the frontend. The Nuxt.js boilerplate contains a project that already renders a page with a map and some location filters. Clone the Nuxt starting point by pasting this command into your terminal:

npx degit Fran-A-Dev/smart-searchai-geo-filtering-nuxt#main my-project


Once you clone it, navigate into the directory and install the project dependencies:

cd my-project
npm install



15. Create a .env file in the root of the Nuxt project. Open it and paste in these environment variables (the values you saved from steps 3 and 5):

SEARCH_ENDPOINT="<your ssai graphql endpoint here>"
SEARCH_ACCESS_TOKEN="<your smart search ai access token here>"
GOOGLE_MAPS_API_KEY="<your google maps api key here>"




16. Next, let’s update how our Nuxt app will build and run the site.  Go to your nuxt.config.ts file in the root and update it accordingly:

// nuxt.config.ts
export default defineNuxtConfig({
  compatibilityDate: "2025-10-17",

  modules: ["@nuxtjs/tailwindcss"],

  css: ["~/assets/css/main.css"],

  runtimeConfig: {
    // Server-only (private) values
    searchAccessToken: process.env.SEARCH_ACCESS_TOKEN,
    searchEndpoint: process.env.SEARCH_ENDPOINT,

    // Public values available on client
    public: {
      googleMapsApiKey: process.env.GOOGLE_MAPS_API_KEY,
    },
  },

  devtools: { enabled: true },

  nitro: {
    experimental: {
      websocket: false,
    },
    // Note: no `fetch` key here—set timeout/retry per-request in $fetch options.
  },
});


We are done with the setup steps for the boilerplate starting point. In your terminal, run npm run dev and visit http://localhost:3000/geo-search to make sure it works.

Maps Usage

You’re not locked into any single map library for the UI. This demo uses the Google Maps JavaScript API, but you can swap in Mapbox GL JS, MapLibre GL, Leaflet, or any other JavaScript map component that can display markers from { lat, lon } pairs and emit click/drag events. 

The Smart Search piece is library-agnostic: your page just needs a center point (lat/lon) and a radius or bounds to construct the geoConstraints in the GraphQL query. If your chosen map exposes the current bounds, you can also power a bounding-box search; if it supports geolocation, you can seed the center from the user’s position. The only UI changes are in your map wrapper (marker rendering, event wiring); the server call and GraphQL stay the same.
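As a sketch of that handoff: whatever map library you choose, the only thing the search needs from it is a pair of corners. The mapping below produces variable names that match the FindInBoundingBox query shown later in this article; the coordinates are illustrative.

```typescript
// Map-library-agnostic handoff: any component that reports its
// viewport as SW/NE corners can drive a bounding-box search.
type LatLon = { lat: number; lon: number };
type MapBounds = { southwest: LatLon; northeast: LatLon };

// Variable names match the FindInBoundingBox GraphQL query.
function boundsToBBoxVars(query: string, b: MapBounds, limit = 20) {
  return {
    query,
    swLat: b.southwest.lat,
    swLon: b.southwest.lon,
    neLat: b.northeast.lat,
    neLon: b.northeast.lon,
    limit,
  };
}

// e.g. a viewport roughly covering central Austin:
const vars = boundsToBBoxVars("bbq", {
  southwest: { lat: 30.22, lon: -97.79 },
  northeast: { lat: 30.31, lon: -97.69 },
});
```

Swapping Google Maps for Mapbox or Leaflet only changes how you obtain the `MapBounds` value; the server call stays identical.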

On the WordPress side, you can also use any approach that yields clean latitude/longitude data. ACF fields, a dedicated map plugin, or a custom meta box are all fine—as long as you can persist numeric lat and lon for each location and expose them via REST or GraphQL (WPGraphQL, custom REST fields, or a small plugin). 

For Smart Search’s geo filters to work, ensure those values land in your index as a top-level coordinates field with the shape { lat: number, lon: number } (or an array of such objects). If a map plugin stores coordinates under a different key or nested structure, normalize them during indexing (e.g., via a transform hook or a tiny MU/regular plugin) so the index has coordinates at the top level.
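A minimal sketch of that normalization step, assuming the plugin stores the point under `locationDetails.location` with ACF's `lat`/`lng` keys (adjust the path and keys to whatever your setup actually persists):

```typescript
// Sketch: lift a nested plugin-specific coordinate into the top-level
// `coordinates` field Smart Search filters on. The source path
// (locationDetails.location, lat/lng keys) mirrors the ACF Google Map
// field; swap it for your plugin's shape.
type IndexDoc = Record<string, any>;

function normalizeForIndex(doc: IndexDoc): IndexDoc {
  const src = doc.locationDetails?.location;
  const lat = Number(src?.lat);
  const lon = Number(src?.lng ?? src?.lon); // ACF stores lng; the index expects lon
  if (Number.isFinite(lat) && Number.isFinite(lon)) {
    return { ...doc, coordinates: { lat, lon } };
  }
  return doc; // leave documents without usable coordinates untouched
}

const raw = {
  post_title: "Franklin Barbecue",
  locationDetails: { address: "900 E 11th St", location: { lat: 30.2701, lng: -97.7313 } },
};
const indexed = normalizeForIndex(raw);
// indexed.coordinates → { lat: 30.2701, lon: -97.7313 }
```

In WordPress you would run this transform during indexing (for example, inside a small MU plugin hooked into the indexing pipeline) so that every document lands in the index with the top-level shape.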

Server Connection

First, let’s go over the server endpoint that acts as a secure proxy between the client and Smart Search AI. Navigate to server/api/search.post.ts. You will see this file:

// server/api/search.post.ts
import {
  defineEventHandler,
  readBody,
  createError,
  setResponseStatus,
} from "h3";

type GraphQLBody = {
  query?: string;
  variables?: Record<string, any>;
};

type GraphQLResponse<T = unknown> = {
  data?: T;
  errors?: unknown;
};

export default defineEventHandler(async (event) => {
  const { searchEndpoint, searchAccessToken } = useRuntimeConfig();

  if (!searchEndpoint || !searchAccessToken) {
    throw createError({
      statusCode: 500,
      statusMessage: "Smart Search not configured",
    });
  }

  const body = await readBody<GraphQLBody>(event);
  if (!body?.query || typeof body.query !== "string") {
    throw createError({
      statusCode: 400,
      statusMessage: "Missing GraphQL query",
    });
  }

  try {
    const resp = await $fetch<GraphQLResponse>(searchEndpoint, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${searchAccessToken}`,
      },
      // Ensure variables is always an object (some servers reject undefined)
      body: { query: body.query, variables: body.variables ?? {} },
    });

    // If GraphQL returned errors, surface them with a 502 so the client can show detail
    if (resp?.errors) {
      setResponseStatus(event, 502);
      return resp;
    }

    return resp; // { data }
  } catch (err: any) {
    // Log enough for debugging without leaking secrets
    console.error("Smart Search API Error", {
      status: err?.status,
      statusText: err?.statusText,
      message: err?.message,
      data: err?.data,
    });

    throw createError({
      statusCode: err?.status || 502,
      statusMessage: "Smart Search request failed",
      data: err?.data ?? null,
    });
  }
});

This code reads the Smart Search endpoint and access token from the Nuxt runtime config and fails with a 500 if either is missing. 

It then parses the incoming request body and validates that a GraphQL query string is present, otherwise returning a 400. The handler forwards the request to Smart Search using $fetch, always sending a proper JSON GraphQL payload with the Bearer token and a guaranteed variables object.

By centralizing the HTTP call here, every client in your app can POST to /api/search with a consistent contract. If Smart Search AI responds with a GraphQL errors array, the endpoint sets an HTTP 502 and returns the error payload untouched for debugging.

On success, it simply relays the upstream { data } to the caller. Operational failures—network issues, timeouts, upstream 5xx—are logged with safe metadata and rethrown as a structured h3 error.

This pattern improves security, avoids CORS complications, and keeps your access token server-only.
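For example, a caller only ever needs to assemble `{ query, variables }` and POST it to the proxy. The query string below is a trimmed illustration; in a Nuxt component you would hand the payload to `$fetch`.

```typescript
// The consistent contract every client uses: POST { query, variables }
// to /api/search. Building the payload is plain data; the network call
// happens in the component.
type SearchPayload = { query: string; variables: Record<string, unknown> };

function buildSearchPayload(
  query: string,
  variables: Record<string, unknown> = {}
): SearchPayload {
  return { query, variables };
}

const body = buildSearchPayload(
  `query FindNearCircle($query: String!) { find(query: $query) { total } }`,
  { query: "bbq" }
);
// In a Nuxt component:
//   const resp = await $fetch("/api/search", { method: "POST", body });
```

Because the proxy always fills in the Bearer token server-side, no client code ever sees the access token.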

GraphQL Queries

Next, let’s go over our GraphQL queries. Open graphql/queries.ts:

// graphql/queries.ts

/** Fields we want back from Smart Search documents. Adjust to your index shape. */
export const DEFAULT_INCLUDE_FIELDS = [
  "post_title",
  "address", // top-level if you mapped it during indexing
  "coordinates", // top-level geo field that Smart Search uses
  "post_url", // or "permalink" if your index uses that key
] as const;

/** Compose an optional semanticSearch block only if enabled. */
export const SEMANTIC_BLOCK = `
  $semanticBias: Int = 0
  $semanticFields: [String!] = []
` as const;

export const SEMANTIC_ARG = `
  semanticSearch: { searchBias: $semanticBias, fields: $semanticFields }
` as const;

/** ---------- 1) Circle (nearby) search with optional semantic & pagination ---------- */
export const FIND_NEAR_CIRCLE = /* GraphQL */ `
  query FindNearCircle(
    $query: String!
    $centerLat: Float!
    $centerLon: Float!
    $maxDistance: Distance!
    $limit: Int = 20
    $searchAfter: [String!]
    $filter: String
    $includeFields: [String!] = []
    ${SEMANTIC_BLOCK}
  ) {
    find(
      query: $query
      ${SEMANTIC_ARG}
      filter: $filter
      geoConstraints: {
        circles: [
          { center: { lat: $centerLat, lon: $centerLon }, maxDistance: $maxDistance }
        ]
      }
      orderBy: [
        { field: "_score", direction: desc }
        { field: "post_date_gmt", direction: desc }
      ]
      limit: $limit
      searchAfter: $searchAfter
      options: { includeFields: $includeFields }
    ) {
      total
      documents {
        id
        score
        sort
        data
      }
    }
  }
`;

/** ---------- 2) Bounding-box search with optional semantic & pagination ---------- */
export const FIND_IN_BBOX = /* GraphQL */ `
  query FindInBoundingBox(
    $query: String!
    $swLat: Float!
    $swLon: Float!
    $neLat: Float!
    $neLon: Float!
    $limit: Int = 20
    $searchAfter: [String!]
    $filter: String
    $includeFields: [String!] = []
    ${SEMANTIC_BLOCK}
  ) {
    find(
      query: $query
      ${SEMANTIC_ARG}
      filter: $filter
      geoConstraints: {
        boundingBoxes: [
          { southwest: { lat: $swLat, lon: $swLon }, northeast: { lat: $neLat, lon: $neLon } }
        ]
      }
      orderBy: [
        { field: "_score", direction: desc }
        { field: "post_date_gmt", direction: desc }
      ]
      limit: $limit
      searchAfter: $searchAfter
      options: { includeFields: $includeFields }
    ) {
      total
      documents {
        id
        score
        sort
        data
      }
    }
  }
`;

/** ---------- Helper types for DX ---------- */
export interface FindNearCircleVars {
  query: string;
  centerLat: number;
  centerLon: number;
  maxDistance: string; // Distance! scalar, e.g. "5mi", "2km"
  limit?: number;
  searchAfter?: string[];
  filter?: string; // e.g., "post_type:location"
  includeFields?: string[];
  semanticBias?: number; // 0..10
  semanticFields?: string[]; // ["post_title", "post_content"] etc., if configured
}

export interface FindInBBoxVars
  extends Omit<FindNearCircleVars, "centerLat" | "centerLon" | "maxDistance"> {
  swLat: number;
  swLon: number;
  neLat: number;
  neLon: number;
}

/** Normalize coordinates to an array of points for mapping. */
export type Point = { lat: number; lon: number };
export function normalizeCoordinates(raw: unknown): Point[] {
  if (!raw) return [];
  if (Array.isArray(raw)) {
    return raw
      .map((p) => (p && typeof p === "object" ? (p as any) : null))
      .filter(Boolean)
      .filter((p) => typeof p.lat === "number" && typeof p.lon === "number");
  }
  if (typeof raw === "object" && raw !== null) {
    const p = raw as any;
    if (typeof p.lat === "number" && typeof p.lon === "number") {
      return [p as Point];
    }
  }
  return [];
}

This module centralizes the GraphQL queries and helpers your Nuxt app uses to perform geo-aware searches against Smart Search AI.  This is a lot of code. Let’s break it down.

It declares a DEFAULT_INCLUDE_FIELDS array so you can consistently request the minimal document fields you need back—titles, address, a top-level coordinates geo field, and a URL. 

export const DEFAULT_INCLUDE_FIELDS = [
  "post_title",
  "address", // top-level if you mapped it during indexing
  "coordinates", // top-level geo field that Smart Search uses
  "post_url", // or "permalink" if your index uses that key
] as const;

It introduces a small semantic-search stanza (SEMANTIC_BLOCK and SEMANTIC_ARG) that can be injected into queries, letting you toggle semantic bias and fields without duplicating query text. 


The first query, FIND_NEAR_CIRCLE, searches within a circle by passing a center latitude/longitude and a Distance! scalar (e.g., “5mi”), and supports optional filters (like post_type:location), semantic options, result limits, and cursor pagination via searchAfter:

export const SEMANTIC_BLOCK = `
  $semanticBias: Int = 0
  $semanticFields: [String!] = []
` as const;

export const SEMANTIC_ARG = `
  semanticSearch: { searchBias: $semanticBias, fields: $semanticFields }
` as const;

/** ---------- 1) Circle (nearby) search with optional semantic & pagination ---------- */
export const FIND_NEAR_CIRCLE = /* GraphQL */ `
  query FindNearCircle(
    $query: String!
    $centerLat: Float!
    $centerLon: Float!
    $maxDistance: Distance!
    $limit: Int = 20
    $searchAfter: [String!]
    $filter: String
    $includeFields: [String!] = []
    ${SEMANTIC_BLOCK}
  ) {
    find(
      query: $query
      ${SEMANTIC_ARG}
      filter: $filter
      geoConstraints: {
        circles: [
          { center: { lat: $centerLat, lon: $centerLon }, maxDistance: $maxDistance }
        ]
      }
      orderBy: [
        { field: "_score", direction: desc }
        { field: "post_date_gmt", direction: desc }
      ]
      limit: $limit
      searchAfter: $searchAfter
      options: { includeFields: $includeFields }
    ) {
      total
      documents {
        id
        score
        sort
        data
      }
    }
  }
`;

The second query, FIND_IN_BBOX, performs the same search semantics within a bounding box using southwest and northeast corners. Both queries sort primarily by _score and secondarily by post_date_gmt to keep results relevant and time-sensible. Each query accepts an includeFields list, which is forwarded to the options.includeFields parameter so you can control payload size per request. 

export const FIND_IN_BBOX = /* GraphQL */ `
  query FindInBoundingBox(
    $query: String!
    $swLat: Float!
    $swLon: Float!
    $neLat: Float!
    $neLon: Float!
    $limit: Int = 20
    $searchAfter: [String!]
    $filter: String
    $includeFields: [String!] = []
    ${SEMANTIC_BLOCK}
  ) {
    find(
      query: $query
      ${SEMANTIC_ARG}
      filter: $filter
      geoConstraints: {
        boundingBoxes: [
          { southwest: { lat: $swLat, lon: $swLon }, northeast: { lat: $neLat, lon: $neLon } }
        ]
      }
      orderBy: [
        { field: "_score", direction: desc }
        { field: "post_date_gmt", direction: desc }
      ]
      limit: $limit
      searchAfter: $searchAfter
      options: { includeFields: $includeFields }
    ) {
      total
      documents {
        id
        score
        sort
        data
      }
    }
  }
`;

The file defines TypeScript interfaces (FindNearCircleVars and FindInBBoxVars) that describe the expected variables, including the Distance! value expressed as a string and optional semantic parameters. 

export interface FindNearCircleVars {
  query: string;
  centerLat: number;
  centerLon: number;
  maxDistance: string; // Distance! scalar, e.g. "5mi", "2km"
  limit?: number;
  searchAfter?: string[];
  filter?: string; // e.g., "post_type:location"
  includeFields?: string[];
  semanticBias?: number; // 0..10
  semanticFields?: string[]; // ["post_title", "post_content"] etc., if configured
}

export interface FindInBBoxVars
  extends Omit<FindNearCircleVars, "centerLat" | "centerLon" | "maxDistance"> {
  swLat: number;
  swLon: number;
  neLat: number;
  neLon: number;
}
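For example, a concrete set of variables for the circle query might look like this (all values illustrative; the semanticFields must match what you configured in the Smart Search settings):

```typescript
// A concrete value satisfying the FindNearCircleVars interface:
// 5 miles around downtown Austin, restricted to the location post
// type, with a semantic bias on the configured hybrid fields.
const vars = {
  query: "smoked brisket",
  centerLat: 30.2672,
  centerLon: -97.7431,
  maxDistance: "5mi", // Distance! scalar travels as a string
  limit: 10,
  filter: "post_type:location",
  includeFields: ["post_title", "coordinates", "locationDetails.address"],
  semanticBias: 7,
  semanticFields: ["post_title", "post_content", "locationDetails.address"],
};
```

These variables pair with the FIND_NEAR_CIRCLE query string in the `{ query, variables }` payload you POST to /api/search.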

A small utility, normalizeCoordinates, normalizes either a single point or an array of points into a consistent {lat, lon}[] shape, which simplifies rendering map markers. 

export type Point = { lat: number; lon: number };
export function normalizeCoordinates(raw: unknown): Point[] {
  if (!raw) return [];
  if (Array.isArray(raw)) {
    return raw
      .map((p) => (p && typeof p === "object" ? (p as any) : null))
      .filter(Boolean)
      .filter((p) => typeof p.lat === "number" && typeof p.lon === "number");
  }
  if (typeof raw === "object" && raw !== null) {
    const p = raw as any;
    if (typeof p.lat === "number" && typeof p.lon === "number") {
      return [p as Point];
    }
  }
  return [];
}

Google Map Component

Now, let’s take a look at our Map Component.  Head to components/MapView.client.vue:

<script setup lang="ts">
import { onMounted, onBeforeUnmount, ref, watch, toRefs, nextTick } from "vue";

type LatLon = { lat: number; lon: number };
type Marker = LatLon;

const props = defineProps<{
  center: LatLon; // { lat, lon }
  markers: Marker[]; // search results
  userLocation: LatLon | null; // optional blue dot
}>();

const emit = defineEmits<{
  (
    e: "boundsChanged",
    bbox: { swLat: number; swLon: number; neLat: number; neLon: number },
    userInitiated: boolean
  ): void;
  (e: "mapClick", location: LatLon): void;
}>();

const { center, markers, userLocation } = toRefs(props);
const mapDiv = ref<HTMLDivElement | null>(null);
const config = useRuntimeConfig();

let map: google.maps.Map | null = null;
let resultMarkers: google.maps.Marker[] = [];
let userMarker: google.maps.Marker | null = null;
let userInitiatedMove = false;
let idleListener: google.maps.MapsEventListener | null = null;
let dragListener: google.maps.MapsEventListener | null = null;
let zoomListener: google.maps.MapsEventListener | null = null;
let clickListener: google.maps.MapsEventListener | null = null;

/** Simple debounce to quiet idle emissions */
function debounce<T extends (...args: any[]) => void>(fn: T, ms = 150) {
  let t: number | undefined;
  return (...args: Parameters<T>) => {
    if (t) window.clearTimeout(t);
    t = window.setTimeout(() => fn(...args), ms);
  };
}

/** Load Google Maps JS once */
function loadGoogleMaps(): Promise<void> {
  return new Promise((resolve, reject) => {
    if ((globalThis as any).google?.maps) return resolve();

    const key = config.public.googleMapsApiKey;
    if (!key) return reject(new Error("Missing GOOGLE_MAPS_API_KEY"));

    const script = document.createElement("script");
    // v=weekly per Google guidance; only `places` is a recognized library here
    script.src = `https://maps.googleapis.com/maps/api/js?key=${encodeURIComponent(
      key
    )}&libraries=places&v=weekly`;
    script.async = true;
    script.defer = true;
    script.onload = () => resolve();
    script.onerror = () => reject(new Error("Failed to load Google Maps JS"));
    document.head.appendChild(script);
  });
}

function clearResultMarkers() {
  for (const m of resultMarkers) m.setMap(null);
  resultMarkers = [];
}

function setResultMarkers(list: Marker[]) {
  if (!map) return;
  clearResultMarkers();

  const bounds = new google.maps.LatLngBounds();
  let hasAny = false;

  for (const m of list) {
    if (typeof m.lat !== "number" || typeof m.lon !== "number") continue;
    const marker = new google.maps.Marker({
      position: { lat: m.lat, lng: m.lon },
      title: "Search result",
      map,
    });
    resultMarkers.push(marker);
    bounds.extend(new google.maps.LatLng(m.lat, m.lon));
    hasAny = true;
  }

  // If no user-initiated move, fit the map to the results on fresh updates
  if (hasAny && !userInitiatedMove) {
    // If a single result, ensure a sensible zoom
    if (resultMarkers.length === 1) {
      map.setCenter({ lat: list[0].lat, lng: list[0].lon });
      map.setZoom(Math.max(map.getZoom() || 11, 13));
    } else {
      map.fitBounds(bounds, 40); // 40px padding
    }
  }
}

function setUserLocationMarker(loc: LatLon | null) {
  if (!map) return;
  if (userMarker) {
    userMarker.setMap(null);
    userMarker = null;
  }
  if (!loc) return;

  userMarker = new google.maps.Marker({
    position: { lat: loc.lat, lng: loc.lon },
    map,
    title: "Your location",
    icon: {
      path: google.maps.SymbolPath.CIRCLE,
      scale: 10,
      fillColor: "#4285F4",
      fillOpacity: 1,
      strokeColor: "#FFFFFF",
      strokeWeight: 3,
    },
  });
}

const emitBoundsChanged = debounce(() => {
  if (!map) return;
  const b = map.getBounds();
  if (!b) return;
  const sw = b.getSouthWest();
  const ne = b.getNorthEast();
  emit(
    "boundsChanged",
    { swLat: sw.lat(), swLon: sw.lng(), neLat: ne.lat(), neLon: ne.lng() },
    userInitiatedMove
  );
  userInitiatedMove = false;
}, 150);

async function initMap() {
  await nextTick();
  const el = mapDiv.value;
  if (!el) return;

  await loadGoogleMaps();

  map = new google.maps.Map(el, {
    center: { lat: center.value.lat, lng: center.value.lon },
    zoom: 11,
    mapTypeControl: true,
    streetViewControl: false,
    fullscreenControl: true,
  });

  clickListener = map.addListener("click", (e: google.maps.MapMouseEvent) => {
    if (!e.latLng) return;
    emit("mapClick", { lat: e.latLng.lat(), lon: e.latLng.lng() });
  });

  dragListener = map.addListener("dragstart", () => {
    userInitiatedMove = true;
  });
  zoomListener = map.addListener("zoom_changed", () => {
    userInitiatedMove = true;
  });

  idleListener = map.addListener("idle", emitBoundsChanged);

  // Initial render
  setResultMarkers(markers.value);
  setUserLocationMarker(userLocation.value);
}

onMounted(initMap);

onBeforeUnmount(() => {
  if (idleListener) idleListener.remove();
  if (dragListener) dragListener.remove();
  if (zoomListener) zoomListener.remove();
  if (clickListener) clickListener.remove();
  clearResultMarkers();
  if (userMarker) userMarker.setMap(null);
  map = null;
});

watch(center, (c) => {
  if (!map || !c) return;
  map.setCenter({ lat: c.lat, lng: c.lon });
});

watch(
  markers,
  (list) => {
    setResultMarkers(list);
  },
  { deep: true }
);

watch(userLocation, (loc) => {
  setUserLocationMarker(loc);
});
</script>

<template>
  <div
    ref="mapDiv"
    class="h-80 w-full rounded-xl border"
    role="region"
    aria-label="Results map"
  />
</template>

This client-side Vue component encapsulates all Google Maps rendering and interaction for the geo search page. It accepts three props—center (the current lat/lon to focus), markers (result points to plot), and an optional userLocation—and emits two events: boundsChanged (with the current SW/NE bounding box and whether the user moved the map) and mapClick (with the clicked lat/lon). 

const props = defineProps<{
  center: LatLon; // { lat, lon }
  markers: Marker[]; // search results
  userLocation: LatLon | null; // optional blue dot
}>();

const emit = defineEmits<{
  (
    e: "boundsChanged",
    bbox: { swLat: number; swLon: number; neLat: number; neLon: number },
    userInitiated: boolean
  ): void;
  (e: "mapClick", location: LatLon): void;
}>();

On mount, it lazily loads the Google Maps JS SDK using the public API key from our Nuxt runtime config, then initializes a map centered on the provided center with UI controls enabled. A small debounced handler throttles the high-volume idle event so your app isn’t spammed with bounds updates while the user pans/zooms. 

const { center, markers, userLocation } = toRefs(props);
const mapDiv = ref<HTMLDivElement | null>(null);
const config = useRuntimeConfig();

let map: google.maps.Map | null = null;
let resultMarkers: google.maps.Marker[] = [];
let userMarker: google.maps.Marker | null = null;
let userInitiatedMove = false;
let idleListener: google.maps.MapsEventListener | null = null;
let dragListener: google.maps.MapsEventListener | null = null;
let zoomListener: google.maps.MapsEventListener | null = null;
let clickListener: google.maps.MapsEventListener | null = null;

/** Simple debounce to quiet idle emissions */
function debounce<T extends (...args: any[]) => void>(fn: T, ms = 150) {
  let t: number | undefined;
  return (...args: Parameters<T>) => {
    if (t) window.clearTimeout(t);
    t = window.setTimeout(() => fn(...args), ms);
  };
}

/** Load Google Maps JS once */
function loadGoogleMaps(): Promise<void> {
  return new Promise((resolve, reject) => {
    if ((globalThis as any).google?.maps) return resolve();

    const key = config.public.googleMapsApiKey;
    if (!key) return reject(new Error("Missing GOOGLE_MAPS_API_KEY"));

The result markers are fully managed: existing pins are cleared before new ones are added, map bounds are fitted to the latest results, and a single-result case bumps the zoom to a useful level. A separate “blue dot” marker is maintained for userLocation, replacing the previous one whenever the prop changes. 

The component tracks whether the user moved the map (userInitiatedMove) by listening to dragstart and zoom_changed, and forwards that flag with boundsChanged.

function clearResultMarkers() {
  for (const m of resultMarkers) m.setMap(null);
  resultMarkers = [];
}

function setResultMarkers(list: Marker[]) {
  if (!map) return;
  clearResultMarkers();

  const bounds = new google.maps.LatLngBounds();
  let hasAny = false;

  for (const m of list) {
    if (typeof m.lat !== "number" || typeof m.lon !== "number") continue;
    const marker = new google.maps.Marker({
      position: { lat: m.lat, lng: m.lon },
      title: "Search result",
      map,
    });
    resultMarkers.push(marker);
    bounds.extend(new google.maps.LatLng(m.lat, m.lon));
    hasAny = true;
  }

  // If no user-initiated move, fit the map to the results on fresh updates
  if (hasAny && !userInitiatedMove) {
    // If a single result, ensure a sensible zoom
    if (resultMarkers.length === 1) {
      map.setCenter({ lat: list[0].lat, lng: list[0].lon });
      map.setZoom(Math.max(map.getZoom() || 11, 13));
    } else {
      map.fitBounds(bounds, 40); // 40px padding
    }
  }
}

function setUserLocationMarker(loc: LatLon | null) {
  if (!map) return;
  if (userMarker) {
    userMarker.setMap(null);
    userMarker = null;
  }
  if (!loc) return;

  userMarker = new google.maps.Marker({
    position: { lat: loc.lat, lng: loc.lon },
    map,
    title: "Your location",
    icon: {
      path: google.maps.SymbolPath.CIRCLE,
      scale: 10,
      fillColor: "#4285F4",
      fillOpacity: 1,
      strokeColor: "#FFFFFF",
      strokeWeight: 3,
    },
  });
}

const emitBoundsChanged = debounce(() => {
  if (!map) return;
  const b = map.getBounds();
  if (!b) return;
  const sw = b.getSouthWest();
  const ne = b.getNorthEast();
  emit(
    "boundsChanged",
    { swLat: sw.lat(), swLon: sw.lng(), neLat: ne.lat(), neLon: ne.lng() },
    userInitiatedMove
  );
  userInitiatedMove = false;
}, 150);

Clicks on the map surface emit precise coordinates so the parent can re-center and re-query. Prop watchers keep the map in sync with application state: updating the center recenters the map, updating the markers re-renders pins and optionally refits the viewport, and updating userLocation refreshes the blue dot. 

async function initMap() {
  await nextTick();
  const el = mapDiv.value;
  if (!el) return;

  await loadGoogleMaps();

  map = new google.maps.Map(el, {
    center: { lat: center.value.lat, lng: center.value.lon },
    zoom: 11,
    mapTypeControl: true,
    streetViewControl: false,
    fullscreenControl: true,
  });

  clickListener = map.addListener("click", (e: google.maps.MapMouseEvent) => {
    if (!e.latLng) return;
    emit("mapClick", { lat: e.latLng.lat(), lon: e.latLng.lng() });
  });

  dragListener = map.addListener("dragstart", () => {
    userInitiatedMove = true;
  });
  zoomListener = map.addListener("zoom_changed", () => {
    userInitiatedMove = true;
  });

  idleListener = map.addListener("idle", emitBoundsChanged);

Finally, it cleans up on unmount by removing Google Maps listeners, clearing markers, and nulling references to prevent leaks. The template exposes a single, accessible container that your layout can size with Tailwind.

  // Initial render
  setResultMarkers(markers.value);
  setUserLocationMarker(userLocation.value);
}

onMounted(initMap);

onBeforeUnmount(() => {
  if (idleListener) idleListener.remove();
  if (dragListener) dragListener.remove();
  if (zoomListener) zoomListener.remove();
  if (clickListener) clickListener.remove();
  clearResultMarkers();
  if (userMarker) userMarker.setMap(null);
  map = null;
});

watch(center, (c) => {
  if (!map || !c) return;
  map.setCenter({ lat: c.lat, lng: c.lon });
});

watch(
  markers,
  (list) => {
    setResultMarkers(list);
  },
  { deep: true }
);

watch(userLocation, (loc) => {
  setUserLocationMarker(loc);
});
</script>

<template>
  <div
    ref="mapDiv"
    class="h-80 w-full rounded-xl border"
    role="region"
    aria-label="Results map"
  />
</template>

Feature Map Page

We have one more file to go over before testing this out in the browser.  It is the page that will render our map and filters.

Just a note: You can put the logic and state in this file into a separate file within a composables folder. This keeps the code cleaner and more reusable.  Since this is just an example demo, I put it all in one file.
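As a sketch of what that refactor could look like, the framework-free helpers might live in their own module that both the page and any future views import. The file name and layout here are illustrative, assuming the same LatLon shape used below:

```typescript
// utils/geo.ts — illustrative extraction; the ref/computed state could
// similarly move into a composables/useGeoSearch.ts file.
export type LatLon = { lat: number; lon: number };

// Accept a coordinates field that may arrive as an object or a one-element array
export function normalizeCoordinates(raw: unknown): LatLon | null {
  if (!raw) return null;
  const v = Array.isArray(raw) ? raw[0] : raw;
  if (
    v &&
    typeof v === "object" &&
    typeof (v as { lat?: unknown }).lat === "number" &&
    typeof (v as { lon?: unknown }).lon === "number"
  ) {
    const { lat, lon } = v as { lat: number; lon: number };
    return { lat, lon };
  }
  return null;
}
```

The page would then import normalizeCoordinates from this module instead of redeclaring it inline.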

Head over to pages/geo-search.vue:

<script setup lang="ts">
import { ref, computed, onMounted } from "vue";
import { FIND_NEAR_CIRCLE, DEFAULT_INCLUDE_FIELDS } from "~/graphql/queries";

type LatLon = { lat: number; lon: number };
type Doc = { id: string; score?: number; sort?: string[]; data: any };

const query = ref("");
const addressQuery = ref("");
const miles = ref(10);

const center = ref<LatLon>({ lat: 30.2672, lon: -97.7431 }); // Austin
const userLocation = ref<LatLon | null>(null);

const docs = ref<Doc[]>([]);
const total = ref(0);
const cursor = ref<string[] | null>(null);
const loading = ref(false);
const geocoding = ref(false);
const hasSearched = ref(false);
let searchToken = 0;

/** Smart Search variables */
const maxDistance = computed(() => `${miles.value}mi`);
const FILTER = "post_type:location";

/** Normalize coordinates field that may be object or array */
function normalizeCoordinates(raw: unknown): LatLon | null {
  if (!raw) return null;
  const v = Array.isArray(raw) ? raw[0] : raw;
  if (
    v &&
    typeof v === "object" &&
    typeof (v as any).lat === "number" &&
    typeof (v as any).lon === "number"
  ) {
    const { lat, lon } = v as any;
    return { lat, lon };
  }
  return null;
}

/** Resolve doc -> LatLon for markers */
function docCoordinates(d: Doc): LatLon | null {
  // Prefer top-level "coordinates" that Smart Search uses for geo filters
  return (
    normalizeCoordinates(d?.data?.coordinates) ??
    // fallback if you still return nested shape (not required)
    normalizeCoordinates(d?.data?.locationDetails?.coordinates) ??
    null
  );
}

/** Markers for the map */
const markers = computed(() =>
  docs.value
    .map(docCoordinates)
    .filter((c): c is LatLon => !!c)
    .map((c) => ({ lat: c.lat, lon: c.lon }))
);

/** Minimal API caller; bubbles GraphQL errors via /api/search handler */
async function callSearch(body: any) {
  const resp = await $fetch("/api/search", { method: "POST", body });
  if ((resp as any)?.errors) throw new Error("Search returned errors");
  return (resp as any)?.data?.find as { total: number; documents: Doc[] };
}

/** Circle geo search (with cursor pagination) */
async function runCircle(append = false) {
  const token = ++searchToken;

  if (!append) {
    docs.value = [];
    total.value = 0;
    cursor.value = null;
  }
  loading.value = true;
  hasSearched.value = true;

  try {
    const find = await callSearch({
      query: FIND_NEAR_CIRCLE,
      variables: {
        query: query.value || "*",
        centerLat: center.value.lat,
        centerLon: center.value.lon,
        maxDistance: maxDistance.value, // Distance! scalar, e.g. "10mi"
        limit: 20,
        searchAfter: append ? cursor.value : null,
        filter: FILTER,
        includeFields: [...DEFAULT_INCLUDE_FIELDS],
        // semantic optional; keep off by default unless configured server-side
        semanticBias: 0,
        semanticFields: [],
      },
    });

    if (token !== searchToken) return; // drop stale page

    // Trust server geo filter; no client-side distance filter needed
    const page = (find?.documents ?? []).filter((d) => docCoordinates(d));

    docs.value = append ? [...docs.value, ...page] : page;
    total.value = find?.total ?? docs.value.length;
    cursor.value = page.length ? page[page.length - 1]?.sort ?? null : null;
  } catch (err) {
    alert(`Search failed: ${(err as Error).message || err}`);
  } finally {
    if (token === searchToken) loading.value = false;
  }
}

/** BBox search: keep signature for MapView contract (optional to implement later) */
async function runBBox(
  _bbox: { swLat: number; swLon: number; neLat: number; neLon: number },
  _userInitiated: boolean
) {
  // You can wire FIND_IN_BBOX here later if you want "map bounds" search.
  return;
}

/** Geolocate user and search from there */
function useMyLocation() {
  if (!navigator.geolocation)
    return alert("Geolocation is not supported by your browser");

  navigator.geolocation.getCurrentPosition(
    (pos) => {
      const loc = { lat: pos.coords.latitude, lon: pos.coords.longitude };
      center.value = loc;
      userLocation.value = loc;
      docs.value = [];
      total.value = 0;
      cursor.value = null;
      runCircle(false);
    },
    (err) => {
      loading.value = false;
      if (err.code === 1)
        alert(
          "Location access was denied. Allow location access and try again."
        );
      else if (err.code === 2)
        alert("Unable to determine your location. Please try again.");
      else if (err.code === 3)
        alert("Location request timed out. Please try again.");
      else alert(`Error getting location: ${err.message}`);
    },
    { enableHighAccuracy: true, timeout: 15000, maximumAge: 0 }
  );
}

/** Map click: set center & search */
function handleMapClick(loc: LatLon) {
  center.value = loc;
  userLocation.value = loc;
  docs.value = [];
  total.value = 0;
  cursor.value = null;
  runCircle(false);
}

/** Address → center via Google Geocoding */
async function searchAddress() {
  if (!addressQuery.value.trim()) return;
  geocoding.value = true;
  try {
    const config = useRuntimeConfig();
    const res = await fetch(
      `https://maps.googleapis.com/maps/api/geocode/json?address=${encodeURIComponent(
        addressQuery.value
      )}&key=${config.public.googleMapsApiKey}`
    );
    const data = await res.json();
    const first = data?.results?.[0];
    if (!first) return alert("Address not found. Try a different address.");

    const { lat, lng } = first.geometry.location;
    center.value = { lat, lon: lng };
    userLocation.value = { lat, lon: lng };
    runCircle(false);
  } catch {
    alert("Failed to geocode address. Please try again.");
  } finally {
    geocoding.value = false;
  }
}

onMounted(() => {
  docs.value = [];
  total.value = 0;
  cursor.value = null;
});
</script>

<template>
  <main class="mx-auto max-w-6xl p-6 space-y-6">
    <h1 class="text-2xl font-semibold">
      Geo Filter Smart Search AI Demo with Nuxt.js
    </h1>

    <div class="grid grid-cols-1 lg:grid-cols-[360px_1fr] gap-6">
      <!-- Controls -->
      <aside class="space-y-4">
        <div class="space-y-2">
          <label class="text-sm font-medium">Search query</label>
          <input
            v-model="query"
            class="w-full rounded-xl border px-3 py-2"
            placeholder="bbq joints, events…"
          />
          <div class="flex gap-2">
            <button
              class="rounded-lg border px-3 py-2"
              @click="runCircle(false)"
            >
              Search
            </button>
            <button class="rounded-lg border px-3 py-2" @click="useMyLocation">
              Use my location
            </button>
          </div>
        </div>

        <div class="space-y-2">
          <label class="text-sm font-medium">Search by address</label>
          <input
            v-model="addressQuery"
            class="w-full rounded-xl border px-3 py-2"
            placeholder="123 Main St, Austin, TX"
            @keyup.enter="searchAddress"
          />
          <button
            class="w-full rounded-lg border px-3 py-2"
            @click="searchAddress"
            :disabled="geocoding || !addressQuery.trim()"
          >
            {{ geocoding ? "Searching..." : "Search Address" }}
          </button>
          <p class="text-xs text-gray-500">Or click anywhere on the map</p>
        </div>

        <div class="space-y-1">
          <label class="block text-sm font-medium"
            >Radius: {{ miles }} mi</label
          >
          <input
            type="range"
            min="1"
            max="50"
            step="1"
            v-model="miles"
            class="w-full"
            @change="runCircle(false)"
          />
        </div>
      </aside>

      <!-- Map + Results -->
      <section class="space-y-4">
        <MapView
          :center="{ lon: center.lon, lat: center.lat }"
          :markers="markers"
          :userLocation="userLocation"
          @boundsChanged="runBBox"
          @mapClick="handleMapClick"
        />

        <div class="flex items-center gap-3 text-sm text-gray-600">
          <span class="rounded-lg border px-3 py-1">
            {{
              loading ? "Loading…" : `${total} result${total === 1 ? "" : "s"}`
            }}
          </span>
          <button
            class="rounded-lg border px-3 py-2"
            @click="runCircle(true)"
            :disabled="loading || !cursor"
          >
            Load more
          </button>
        </div>

        <ul class="rounded-xl border divide-y">
          <li v-for="d in docs" :key="d.id" class="p-3">
            <a
              :href="d?.data?.post_url || d?.data?.permalink || '#'"
              target="_blank"
              class="font-medium hover:underline"
            >
              {{ d?.data?.post_title || "Untitled" }}
            </a>
            <div class="text-sm">
              {{ d?.data?.address || d?.data?.locationDetails?.address || "" }}
            </div>
            <div v-if="docCoordinates(d)" class="text-xs text-gray-500">
              ({{ docCoordinates(d)?.lat }}, {{ docCoordinates(d)?.lon }})
            </div>
          </li>
        </ul>
      </section>
    </div>
  </main>
</template>

This page implements a geo-aware search UI that talks to your /api/search GraphQL proxy and renders results on a map. It tracks user inputs (text query, address, miles radius) plus map state (center, user location, pagination cursor, loading flags) with Vue refs.

<script setup lang="ts">
import { ref, computed, onMounted } from "vue";
import { FIND_NEAR_CIRCLE, DEFAULT_INCLUDE_FIELDS } from "~/graphql/queries";

type LatLon = { lat: number; lon: number };
type Doc = { id: string; score?: number; sort?: string[]; data: any };

const query = ref("");
const addressQuery = ref("");
const miles = ref(10);

const center = ref<LatLon>({ lat: 30.2672, lon: -97.7431 }); // Austin
const userLocation = ref<LatLon | null>(null);

const docs = ref<Doc[]>([]);
const total = ref(0);
const cursor = ref<string[] | null>(null);
const loading = ref(false);
const geocoding = ref(false);
const hasSearched = ref(false);
let searchToken = 0;

A computed maxDistance converts the slider to the Distance! scalar (e.g., “10mi”), and a constant filter scopes results to the location post type. Results from Smart Search are normalized so each document reliably yields a { lat, lon } pair for map markers, regardless of whether coordinates arrives as an object or array.

/** Smart Search variables */
const maxDistance = computed(() => `${miles.value}mi`);
const FILTER = "post_type:location";

/** Normalize coordinates field that may be object or array */
function normalizeCoordinates(raw: unknown): LatLon | null {
  if (!raw) return null;
  const v = Array.isArray(raw) ? raw[0] : raw;
  if (
    v &&
    typeof v === "object" &&
    typeof (v as any).lat === "number" &&
    typeof (v as any).lon === "number"
  ) {
    const { lat, lon } = v as any;
    return { lat, lon };
  }
  return null;
}

/** Resolve doc -> LatLon for markers */
function docCoordinates(d: Doc): LatLon | null {
  // Prefer top-level "coordinates" that Smart Search uses for geo filters
  return (
    normalizeCoordinates(d?.data?.coordinates) ??
    // fallback if you still return nested shape (not required)
    normalizeCoordinates(d?.data?.locationDetails?.coordinates) ??
    null
  );
}

/** Markers for the map */
const markers = computed(() =>
  docs.value
    .map(docCoordinates)
    .filter((c): c is LatLon => !!c)
    .map((c) => ({ lat: c.lat, lon: c.lon }))
);

The runCircle action performs the main “near me” search with cursor pagination, deduplicates stale responses via a rolling token, and updates totals, docs, and next-page cursors. Users can set the center by clicking the map, using browser geolocation, or geocoding a typed address with Google’s API; each path recenters the map and triggers a fresh search.

/** Minimal API caller; bubbles GraphQL errors via /api/search handler */
async function callSearch(body: any) {
  const resp = await $fetch("/api/search", { method: "POST", body });
  if ((resp as any)?.errors) throw new Error("Search returned errors");
  return (resp as any)?.data?.find as { total: number; documents: Doc[] };
}

/** Circle geo search (with cursor pagination) */
async function runCircle(append = false) {
  const token = ++searchToken;

  if (!append) {
    docs.value = [];
    total.value = 0;
    cursor.value = null;
  }
  loading.value = true;
  hasSearched.value = true;

  try {
    const find = await callSearch({
      query: FIND_NEAR_CIRCLE,
      variables: {
        query: query.value || "*",
        centerLat: center.value.lat,
        centerLon: center.value.lon,
        maxDistance: maxDistance.value, // Distance! scalar, e.g. "10mi"
        limit: 20,
        searchAfter: append ? cursor.value : null,
        filter: FILTER,
        includeFields: [...DEFAULT_INCLUDE_FIELDS],
        // semantic optional; keep off by default unless configured server-side
        semanticBias: 0,
        semanticFields: [],
      },
    });

    if (token !== searchToken) return; // drop stale page

    // Trust server geo filter; no client-side distance filter needed
    const page = (find?.documents ?? []).filter((d) => docCoordinates(d));

    docs.value = append ? [...docs.value, ...page] : page;
    total.value = find?.total ?? docs.value.length;
    cursor.value = page.length ? page[page.length - 1]?.sort ?? null : null;
  } catch (err) {
    alert(`Search failed: ${(err as Error).message || err}`);
  } finally {
    if (token === searchToken) loading.value = false;
  }
}

/** BBox search: keep signature for MapView contract (optional to implement later) */
async function runBBox(
  _bbox: { swLat: number; swLon: number; neLat: number; neLon: number },
  _userInitiated: boolean
) {
  // You can wire FIND_IN_BBOX here later if you want "map bounds" search.
  return;
}

/** Geolocate user and search from there */
function useMyLocation() {
  if (!navigator.geolocation)
    return alert("Geolocation is not supported by your browser");

  navigator.geolocation.getCurrentPosition(
    (pos) => {
      const loc = { lat: pos.coords.latitude, lon: pos.coords.longitude };
      center.value = loc;
      userLocation.value = loc;
      docs.value = [];
      total.value = 0;
      cursor.value = null;
      runCircle(false);
    },
    (err) => {
      loading.value = false;
      if (err.code === 1)
        alert(
          "Location access was denied. Allow location access and try again."
        );
      else if (err.code === 2)
        alert("Unable to determine your location. Please try again.");
      else if (err.code === 3)
        alert("Location request timed out. Please try again.");
      else alert(`Error getting location: ${err.message}`);
    },
    { enableHighAccuracy: true, timeout: 15000, maximumAge: 0 }
  );
}

/** Map click: set center & search */
function handleMapClick(loc: LatLon) {
  center.value = loc;
  userLocation.value = loc;
  docs.value = [];
  total.value = 0;
  cursor.value = null;
  runCircle(false);
}

/** Address → center via Google Geocoding */
async function searchAddress() {
  if (!addressQuery.value.trim()) return;
  geocoding.value = true;
  try {
    const config = useRuntimeConfig();
    const res = await fetch(
      `https://maps.googleapis.com/maps/api/geocode/json?address=${encodeURIComponent(
        addressQuery.value
      )}&key=${config.public.googleMapsApiKey}`
    );
    const data = await res.json();
    const first = data?.results?.[0];
    if (!first) return alert("Address not found. Try a different address.");

    const { lat, lng } = first.geometry.location;
    center.value = { lat, lon: lng };
    userLocation.value = { lat, lon: lng };
    runCircle(false);
  } catch {
    alert("Failed to geocode address. Please try again.");
  } finally {
    geocoding.value = false;
  }
}

onMounted(() => {
  docs.value = [];
  total.value = 0;
  cursor.value = null;
});

The template wires these behaviors into a simple Tailwind layout with inputs, a distance slider, a MapView component for visualization, and a paginated results list that links to each item’s URL.

<template>
  <main class="mx-auto max-w-6xl p-6 space-y-6">
    <h1 class="text-2xl font-semibold">
      Geo Filter Smart Search AI Demo with Nuxt.js
    </h1>

    <div class="grid grid-cols-1 lg:grid-cols-[360px_1fr] gap-6">
      <!-- Controls -->
      <aside class="space-y-4">
        <div class="space-y-2">
          <label class="text-sm font-medium">Search query</label>
          <input
            v-model="query"
            class="w-full rounded-xl border px-3 py-2"
            placeholder="bbq joints, events…"
          />
          <div class="flex gap-2">
            <button
              class="rounded-lg border px-3 py-2"
              @click="runCircle(false)"
            >
              Search
            </button>
            <button class="rounded-lg border px-3 py-2" @click="useMyLocation">
              Use my location
            </button>
          </div>
        </div>

        <div class="space-y-2">
          <label class="text-sm font-medium">Search by address</label>
          <input
            v-model="addressQuery"
            class="w-full rounded-xl border px-3 py-2"
            placeholder="123 Main St, Austin, TX"
            @keyup.enter="searchAddress"
          />
          <button
            class="w-full rounded-lg border px-3 py-2"
            @click="searchAddress"
            :disabled="geocoding || !addressQuery.trim()"
          >
            {{ geocoding ? "Searching..." : "Search Address" }}
          </button>
          <p class="text-xs text-gray-500">Or click anywhere on the map</p>
        </div>

        <div class="space-y-1">
          <label class="block text-sm font-medium"
            >Radius: {{ miles }} mi</label
          >
          <input
            type="range"
            min="1"
            max="50"
            step="1"
            v-model="miles"
            class="w-full"
            @change="runCircle(false)"
          />
        </div>
      </aside>

      <!-- Map + Results -->
      <section class="space-y-4">
        <MapView
          :center="{ lon: center.lon, lat: center.lat }"
          :markers="markers"
          :userLocation="userLocation"
          @boundsChanged="runBBox"
          @mapClick="handleMapClick"
        />

        <div class="flex items-center gap-3 text-sm text-gray-600">
          <span class="rounded-lg border px-3 py-1">
            {{
              loading ? "Loading…" : `${total} result${total === 1 ? "" : "s"}`
            }}
          </span>
          <button
            class="rounded-lg border px-3 py-2"
            @click="runCircle(true)"
            :disabled="loading || !cursor"
          >
            Load more
          </button>
        </div>

        <ul class="rounded-xl border divide-y">
          <li v-for="d in docs" :key="d.id" class="p-3">
            <a
              :href="d?.data?.post_url || d?.data?.permalink || '#'"
              target="_blank"
              class="font-medium hover:underline"
            >
              {{ d?.data?.post_title || "Untitled" }}
            </a>
            <div class="text-sm">
              {{ d?.data?.address || d?.data?.locationDetails?.address || "" }}
            </div>
            <div v-if="docCoordinates(d)" class="text-xs text-gray-500">
              ({{ docCoordinates(d)?.lat }}, {{ docCoordinates(d)?.lon }})
            </div>
          </li>
        </ul>
      </section>
    </div>
  </main>
</template>

Stoked!!! We are now ready to try the map!

Test The Map


Navigate to your terminal.  Do not forget to run npm install.  Then run npm run dev.  You should see this in all its glory:



Conclusion

This project demonstrates how headless architecture, Nuxt.js, and Smart Search AI work together, showing that pairing map functionality with AI-powered search gives users a natural-language way to find locations.

We’d love to hear what you build with this—drop into the Headless WordPress Discord and share your projects or feedback.  Happy Coding!

The post Using the Geolocation API in Smart Search AI with ACF, Google Maps, And Nuxt.js appeared first on Builders.

]]>
https://wpengine.com/builders/nuxt-smart-search-ai-acf-geolocation/feed/ 0
Understanding WP Engine’s Smart Search AI Model Context Protocol (MCP) Server https://wpengine.com/builders/smart-search-ai-model-context-protocol-mcp-server/ https://wpengine.com/builders/smart-search-ai-model-context-protocol-mcp-server/#respond Wed, 29 Oct 2025 23:03:52 +0000 https://wpengine.com/builders/?p=31987 The Smart Search AI MCP Server is a powerful new feature in WP Engine’s AI Toolkit that transforms your WordPress site into a dynamic, real-time knowledge base for any external Large Language Model […]

The post Understanding WP Engine’s Smart Search AI Model Context Protocol (MCP) Server appeared first on Builders.

]]>
The Smart Search AI MCP Server is a powerful new feature in WP Engine’s AI Toolkit that transforms your WordPress site into a dynamic, real-time knowledge base for any external Large Language Model (LLM) you connect to it. When enabled, this server responds to AI tool requests formatted using the Model Context Protocol (MCP) standard.  In this article, I will cover what MCP is, how to work with the Smart Search AI MCP Server, and how it enhances the Smart Search AI product.


What is MCP and How Does It Work?

Model Context Protocol (MCP) is a standardized communication protocol that connects AI models to live, external data sources and tools. While large language models like ChatGPT are incredibly intelligent, their knowledge is limited to their training data and is frozen in time, meaning they can’t access real-time or specific niche data.

By itself, a model likely does not know about your company’s latest product specifications, the current stock price, or the content of a blog post you published this morning. MCP is the bridge that closes this gap between the AI’s static knowledge and the dynamic, real-time world.

To understand its role, think of MCP as the USB (Universal Serial Bus) for artificial intelligence. Before USB, connecting a printer, mouse, or keyboard to a computer required a confusing array of different ports and custom drivers. MCP addresses a similar problem in the AI ecosystem. Without a standard, connecting an AI to every single website, database, or internal API would require writing custom, one-off integrations—a complex and inefficient process.

MCP provides that universal standard. It defines a set of simple, predictable rules and commands, allowing any AI model to seamlessly “plug into” any MCP-compliant data source. The AI doesn’t need to know the complex inner workings of your website’s database; it just needs to know how to “speak MCP.” 

Through this protocol, the AI can effectively issue standardized requests like “Search your knowledge base for this term” or “Fetch the contents of this specific page”. In essence, MCP transforms a static, encyclopedic AI into a dynamic, context-aware agent capable of accessing and reasoning about your current content.
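To make that concrete, here is a sketch of the message an MCP client sends to invoke a tool. The method name tools/call comes from the MCP specification, while the tool name "search" and its argument keys are illustrative rather than the exact Smart Search AI schema:

```typescript
// An MCP tool invocation is a JSON-RPC 2.0 request. A client sends this
// to the server's MCP endpoint and reads the tool output from the response.
const mcpRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call", // defined by the MCP specification
  params: {
    name: "search", // illustrative tool name
    arguments: { query: "information about AI Toolkit" },
  },
};

const body = JSON.stringify(mcpRequest);
```

The matching response carries the same id plus a result payload — here, the content the server pulled from your site’s search index.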

Smart Search AI MCP Server

The Smart Search AI MCP Server, which is disabled by default and requires user opt-in, exposes “fetch” and “search” tools that allow external AI models to interact with a website’s public, published content.

Why Should You Use MCP?

The primary benefit of using MCP is that it allows developers to connect external AI agents, like ChatGPT or Claude, directly to their website’s live content. This empowers them to build advanced AI applications, such as custom chatbots or assistants, that are grounded in real-time, accurate information from their own site instead of the generic, often outdated data the models were trained on.

How It Works with WP Engine’s AI Toolkit 

The Smart Search AI MCP Server is a feature of the Smart Search AI service. When you enable it, this server listens for requests that are formatted using the MCP standard. Here’s a typical workflow:

  1. A question is asked: A user interacts with an AI application, like a custom chatbot built with Claude or ChatGPT.
  2. The AI needs more info: The AI model realizes it needs current information to answer the question accurately. It sees that it has access to an MCP server that offers tools for accessing your website’s data.
  3. The AI model sends a network request to your website’s unique MCP server address. For example, it might ask your server to search for “information about AI Toolkit.”
  4. The MCP server receives this request, uses the Smart Search AI vector database to find the most relevant content on your website, and then sends that information back to the AI model.
  5. The AI model now has the fresh, accurate content from your site. It uses this information to formulate a relevant, up-to-date answer for the user.

In short, the MCP server allows AI applications to be powered by the real-time, accurate information from your website’s semantic search and vector database, rather than the stale or scraped data that an LLM would otherwise be limited to. This turns your website into a live, dynamic knowledge base for any AI agent you connect to it.

Testing the Smart Search AI MCP

Once you have an MCP server running, the next step is to connect to it and test its capabilities. This is where a client inspector becomes useful. Tools like the MCP Inspector or a versatile API client such as Postman (using its WebSocket request feature) allow you to interact with your server just as an AI model would. This process is important for debugging and ensuring your server provides the correct data.

In this article, I’ll use Postman because I find it a bit easier to work with.

Step 1: Establishing a Connection

First, you need the unique URL for your Smart Search AI MCP Server (refer to the instructions on how to obtain it here). In Postman, you would click on the “New” button at the top of your Workspaces item page.  This will show a card menu. Click on “MCP” to create the interface page to interact with your server.  It looks like this:

It will take you to the MCP interface page. This is where you can paste your URL into the address bar and then click “Connect”:

A successful connection is indicated by a status message.  You will see the green “Connected” notification at the bottom of the Postman window. This handshake confirms that your client is now actively listening to the MCP server.

Step 2: Discovering the Available Tools

This is where the navigation begins. Once connected, the MCP server immediately advertises the tools it makes available. These tools are the specific functions or actions the AI is allowed to perform. Think of them as API endpoints, but for an AI.

In the screenshot, we can see the server has presented two distinct tools:

  1. fetch: The description says, “Fetch a specific post by its ID from the Elasticsearch index.” This is a highly specific tool that requires a unique identifier to retrieve a single piece of content.
  2. search: The description is, “Search for information in the connected Elasticsearch index, please try to refine the search query as much as possible.” This is a more flexible tool designed for querying the data source with natural language or keywords.

This discovery phase is fundamental to MCP. The client (and by extension, the AI) doesn’t need prior knowledge of what the server can do; the server announces its own capabilities.
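Under the hood, discovery uses MCP’s `tools/list` method. A trimmed, illustrative response for the two tools above might look like this (the input schemas are assumptions, not the server’s exact output):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "fetch",
        "description": "Fetch a specific post by its ID from the Elasticsearch index.",
        "inputSchema": { "type": "object", "properties": { "id": { "type": "string" } } }
      },
      {
        "name": "search",
        "description": "Search for information in the connected Elasticsearch index...",
        "inputSchema": { "type": "object", "properties": { "query": { "type": "string" } } }
      }
    ]
  }
}
```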

Step 3: Making a Request (Interacting with a Tool)

Now that we know what tools are available, we can send a message to use one of them. MCP messages are typically formatted in JSON, specifying the tool_name to use and the arguments it requires.

Let’s test the search tool. For this example, I will just use the filter: string input, which accepts freeform text. I typed “webinar” in the input box because my WordPress content contains a webinar post.  On the side JSON pane, it looks like this:
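In sketch form (the exact pane output may differ slightly), the tool call Postman builds reduces to:

```json
{
  "name": "search",
  "arguments": { "query": "webinar" }
}
```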

This JSON object explicitly tells the MCP server: “Use your search tool and run a query with the value ‘webinar’.”

Step 4: Understanding the Response

After you send the request, the MCP server will execute the tool with the arguments you provided and send a response back. This response is the raw data that the AI would receive to formulate its answer.

This is the successful response we get back:

{
  "content": [
    {
      "type": "text",
      "text": {
        "results": [
          {
            "id": "doc-1",
            "title": "Webinar – WP Engine MCP",
            "text": "This is WP Engine's Webinar Show about nerd stuff",
            "url": "https://demo.example.com/webinar/getting-started"
          }
        ]
      }
    }
  ]
}

The shape that comes back is a stringified JSON blob with a results array. I parsed it and put it in a code block to make it more readable for this article.

The Smart Search AI MCP response is an object with a single content array, where each element represents one piece of output.

In this example, a content item has `type: "text"` and a `text` payload that is an object containing a `results` array. Each entry in `results` is a document with four core fields: `id`, `title`, `text`, and `url`.

This envelope makes it easy to stream or combine multiple output parts, while the inner results objects can be extended (e.g., add score, site, or published_at) without changing the outer shape.
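As a hedged sketch, the envelope and the parsing step described above can be modeled in TypeScript. The interface names and the `parseResults` helper are illustrative, not part of an official SDK; per the note above, the inner payload is treated as a stringified JSON blob on the wire:

```typescript
// Illustrative types for the Smart Search AI MCP response envelope.
interface SmartSearchDoc {
  id: string;
  title: string;
  text: string;
  url: string;
}

interface SmartSearchResponse {
  content: Array<{
    type: "text";
    // Arrives as a stringified JSON blob containing a results array.
    text: string;
  }>;
}

// Parse the inner blob of the first content item into its results array.
function parseResults(response: SmartSearchResponse): SmartSearchDoc[] {
  const inner = JSON.parse(response.content[0].text) as {
    results: SmartSearchDoc[];
  };
  return inner.results;
}

// Sample payload mirroring the response shown above.
const sample: SmartSearchResponse = {
  content: [
    {
      type: "text",
      text: JSON.stringify({
        results: [
          {
            id: "doc-1",
            title: "Webinar – WP Engine MCP",
            text: "This is WP Engine's Webinar Show about nerd stuff",
            url: "https://demo.example.com/webinar/getting-started",
          },
        ],
      }),
    },
  ],
};

const docs = parseResults(sample);
```

Because the outer envelope never changes, extending the inner documents with extra fields (a `score`, say) only requires widening `SmartSearchDoc`.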

Connect Smart Search AI MCP Directly Into Your AI Model

You can expose Smart Search AI to any AI assistant by running it as an MCP server and plugging it in through a connector.

I will show Claude in this example. We need to register the same endpoint using their MCP client configuration. 

Once connected, the model can discover Smart Search AI’s declared tools via MCP’s tool-listing protocol and call them with structured arguments—no bespoke SDK required.

Functionally, wiring Smart Search AI through MCP upgrades your assistant from “best-effort guessing” to retrieval-augmented answers that are precise, auditable, and policy-aware.

If you use Claude, the “Add custom connector” page looks like this:

Once connected, your AI model will know the abilities and tooling it can call on from your site. These will be exposed in the dropdown selector:

When it’s added and configured, your Claude AI will now have the ability to access all your WordPress content and tell you about it in a nice, formatted way:

Conclusion: Your WP Engine Website, Reimagined

The journey from a static webpage to an interactive, intelligent resource is the next great leap in digital experiences. We’ve seen how the MCP acts as a connector, bridging the gap between AI and the real-time, valuable content on your website. When this protocol is combined with the semantic power of WP Engine’s Smart Search AI, your WordPress site is no longer just a destination for users; it becomes a dynamic data source that any AI agent can consult.

By providing the tools to integrate your content with the world’s most advanced AI models, WP Engine is putting you at the forefront of this new AI age. Enable the Smart Search AI MCP server on your WP Engine plan and get the power of AI and WordPress.  If you have already done so and need a “How-To” guide, check out my article here on the topic!

The post Understanding WP Engine’s Smart Search AI Model Context Protocol (MCP) Server appeared first on Builders.

]]>
https://wpengine.com/builders/smart-search-ai-model-context-protocol-mcp-server/feed/ 0
Implement WP Engine’s Smart Search AI Model Context Protocol (MCP) Server in Headless WordPress https://wpengine.com/builders/wp-engine-smart-search-mcp-in-headless-wp/ https://wpengine.com/builders/wp-engine-smart-search-mcp-in-headless-wp/#respond Tue, 14 Oct 2025 15:08:52 +0000 https://wpengine.com/builders/?p=31980 This guide demonstrates how to build a full-stack headless WordPress application featuring a chatbot that provides accurate, contextually relevant responses using WP Engine’s new Smart Search AI MCP. At the […]

The post Implement WP Engine’s Smart Search AI Model Context Protocol (MCP) Server in Headless WordPress appeared first on Builders.

]]>
This guide demonstrates how to build a full-stack headless WordPress application featuring a chatbot that provides accurate, contextually relevant responses using WP Engine’s new Smart Search AI MCP.

At the end of the article, we will have a chatbot that can call into your Smart Search AI MCP endpoint, which in turn leverages Smart Search to retrieve relevant content.

Prerequisites

To benefit from this article, you should be familiar with the basics of working with the command line, headless WordPress development, Next.js, and the WP Engine User Portal.

Steps For Setting Up

1. Set up an account on WP Engine and get a WordPress install running. You can get a free headless platform sandbox here:

2. Add a Smart Search AI license. Refer to the docs here for adding a license. After you add the license, opt in to the Smart Search AI MCP.

3. Navigate to the WP Admin of your install and go to WP Engine Smart Search > Settings. You will find your Smart Search AI MCP URL here. Currently, this field shows your GraphQL endpoint; that is expected and what you want to see.

What you need to do is manually remove the /graphql and add /mcp.

So your endpoint should look like this after replacing it:

https://your-wpenginesite-0999A-atlassearch-fkdfjckuaa-uc.a.run.app/mcp
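The manual edit above amounts to swapping the path suffix. A minimal sketch of that transformation (the helper name and sample URL are illustrative):

```typescript
// Hedged helper: derive the MCP endpoint from the GraphQL endpoint
// shown in WP Admin by swapping the trailing path segment.
function toMcpEndpoint(graphqlUrl: string): string {
  return graphqlUrl.replace(/\/graphql\/?$/, "/mcp");
}

const mcp = toMcpEndpoint(
  "https://your-wpenginesite-0999A-atlassearch-fkdfjckuaa-uc.a.run.app/graphql"
);
// mcp now ends in "/mcp"
```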

4. Next, navigate to Configuration, select the Hybrid card, and add the `post_content` and `post_title` fields in the Semantic settings section. We are going to use these fields as our AI-powered fields for similarity searches. Make sure to hit Save Configuration afterward.

5. After saving the configuration, head over to the Index data page, then click Index Now. It will give you this success message once completed:

6. Create an API account on Google Gemini (or whichever AI provider you choose, e.g., the OpenAI API). If you are using the Gemini API, go to Google AI Studio, open your project’s dashboard, and go to API Keys. You should see a page like this:

Generate a new key, then copy and save it because we will need it later. The API key is free on Google Gemini, but the free tier has limits.

7.  Head over to your terminal or CLI and create a new Next.js project by pasting this utility command in:

`npx create-next-app@latest name-of-your-app`

You will receive prompts in your terminal asking you how you want your Next.js app scaffolded.  Answer them accordingly:

Would you like to use TypeScript? Yes
Would you like to use ESLint? Yes
Would you like to use Tailwind CSS? Yes
Would you like to use the `src/` directory? Yes
Would you like to use App Router? Yes
Would you like to customize the default import alias (@/*)? No

Once your Next.js app is created, you will need to install the dependencies needed to ensure our app works.  Copy and paste this command in your terminal:

npm install @ai-sdk/google react-icons react-markdown @modelcontextprotocol/sdk @ai-sdk/react ai

Note: We are using Google’s AI SDK for this article. Please refer to the docs for whichever AI provider you choose; you can download its npm package.

Once the Next project is done scaffolding, cd into the project and then open up your code editor.

8. In your Next project, create a  `.env.local` file with the following environment variables:

GOOGLE_GENERATIVE_AI_API_KEY="<your key here>" (if you chose another AI model, you can name this key whatever you want)

AI_TOOLKIT_MCP_URL="<your smart search mcp url here>"

Here is the link to the final code repo so you can check step by step and follow along.

Calling The WP Engine Smart Search AI MCP Server From Next.js

The first thing we need to do is set up the request to Smart Search AI MCP with the Vercel AI SDK.   Create a file in the `src/app` directory called `api/chat/route.ts`.  Copy the code below and paste it into that file:

// IMPORTANT! Set the runtime to edge
export const runtime = "edge";

import {
  convertToCoreMessages,
  experimental_createMCPClient,
  Message,
  streamText,
} from "ai";
import { createGoogleGenerativeAI } from "@ai-sdk/google";

import { weatherTool } from "@/app/utils/tools";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const httpTransport = new StreamableHTTPClientTransport(
  new URL(process.env.AI_TOOLKIT_MCP_URL || "http://localhost:8080/mcp")
);

const client = await experimental_createMCPClient({
  transport: httpTransport,
});

/**
 * Initialize the Google Generative AI API
 */
const google = createGoogleGenerativeAI();

export async function POST(req: Request) {
  try {
    const aiTkTools = await client.tools();
    const { messages }: { messages: Array<Message> } = await req.json();

    const coreMessages = convertToCoreMessages(messages);

    const smartSearchPrompt = `
    - You can use the 'search' tool to find information relating to tv shows.
      - WP Engine Smart Search is a powerful tool for finding information about TV shows.
      - After the 'smartSearchTool' provides results (even if it's an error or no information found)
      - You MUST then formulate a conversational response to the user based on those results but also use the tool if the users query is deemed plausible.
        - If search results are found, summarize them for the user. 
        - If no information is found or an error occurs, inform the user clearly.`;

    const systemPromptContent = `
    - You are a friendly and helpful AI assistant 
    - You can use the 'weatherTool' to provide current weather information for a specific location.
    - Do not invent information. Stick to the data provided by the tool.`;

    const response = streamText({
      model: google("models/gemini-2.0-flash"),
      system: [smartSearchPrompt, systemPromptContent].join("\n"),
      messages: coreMessages,
      tools: {
        // smartSearchTool,
        weatherTool,
        ...aiTkTools,
      },
      onStepFinish: async (result) => {
        // Log token usage for each step
        if (result.usage) {
          console.log(
            `[Token Usage] Prompt tokens: ${result.usage.promptTokens}, Completion tokens: ${result.usage.completionTokens}, Total tokens: ${result.usage.totalTokens}`
          );
        }
      },
      maxSteps: 5,
    });
    // Convert the response into a friendly text-stream
    return response.toDataStreamResponse({});
  } catch (e) {
    throw e;
  }
}

This Edge API route wires your chat endpoint to both Google’s Gemini (via the Vercel AI SDK) and the Smart Search AI MCP server. It first creates a streaming-capable MCP HTTP transport pointed at AI_TOOLKIT_MCP_URL, builds an MCP client, and fetches the server-advertised tools at request time (client.tools()).

Incoming chat messages from the client are normalized with convertToCoreMessages, and two concise system prompts instruct the model on how to use tools: a “search” tool (backed by WP Engine Smart Search via MCP) and a local weatherTool. The prompts emphasize not inventing facts and summarizing search results (including the “no results” case).

With that context, streamText runs gemini-2.0-flash, exposes weatherTool plus all MCP tools (…aiTkTools) to the model, and streams the assistant’s reply back to the browser. The SDK may invoke tools during reasoning (up to maxSteps: 5). After each step, the handler logs token usage for basic observability. 

Finally, toDataStreamResponse returns a chunked HTTP response so the UI can render tokens as they arrive—giving you a real-time, tool-augmented chat experience that queries Smart Search through your MCP server when needed.

Create UI Components For The Chat Interface

In this section, let’s create our components to render the UI.

Chat.tsx

In the `src/app` directory, create a `components` folder with a `Chat` subfolder inside it. Then, in `components/Chat`, create a `Chat.tsx` file. Copy and paste this code block into that file:

"use client";

import React, { ChangeEvent } from "react";
import Messages from "./Messages";
import { Message } from "ai/react";
import LoadingIcon from "../Icons/LoadingIcon";
import ChatInput from "./ChatInput";

interface Chat {
  input: string;
  handleInputChange: (e: ChangeEvent<HTMLInputElement>) => void;
  handleMessageSubmit: (e: React.FormEvent<HTMLFormElement>) => void;
  messages: Message[];
  status: "submitted" | "streaming" | "ready" | "error";
}

const Chat: React.FC<Chat> = ({
  input,
  handleInputChange,
  handleMessageSubmit,
  messages,
  status,
}) => {
  return (
    <div id="chat" className="flex flex-col w-full mx-2">
      <Messages messages={messages} />
      {status === "submitted" && <LoadingIcon />}
      <form
        onSubmit={handleMessageSubmit}
        className="ml-1 mt-5 mb-5 relative rounded-lg"
      >
        <ChatInput input={input} handleInputChange={handleInputChange} />
      </form>
    </div>
  );
};

export default Chat;


This file defines a client-side React Chat component that ties together your message list, input field, and loading indicator. It declares a Chat props interface—containing the current input value, change and submit handlers, the array of chat messages, and a status flag—and uses those props to control its rendering. 

Inside the component, it first renders the <Messages> list to show the conversation history. If the status is “submitted”, it displays a <LoadingIcon> spinner to indicate that a response is pending. Finally, it renders a <form> wrapping a <ChatInput> component wired to the provided input value and change handler, so users can type and submit new messages.

Messages Component

Staying in the `src/app/components/Chat` directory, create a `Messages.tsx` file.  Copy and paste this code block in:

import { Message } from "ai";
import { useEffect, useRef } from "react";
import ReactMarkdown from "react-markdown";

export default function Messages({ messages }: { messages: Message[] }) {
  const messagesEndRef = useRef<HTMLDivElement | null>(null);
  useEffect(() => {
    messagesEndRef.current?.scrollIntoView({ behavior: "smooth" });
  }, [messages]);
  return (
    <div
      className="border-1 border-gray-100 overflow-y-scroll flex-grow flex-col justify-end p-1"
      style={{ scrollbarWidth: "none" }}
    >
      {messages.map((msg, index) => (
        <div
          key={index}
          className={`${
            msg.role === "assistant" ? "bg-green-500" : "bg-blue-500"
          } my-2 p-3 shadow-md hover:shadow-lg transition-shadow duration-200 flex slide-in-bottom border border-gray-900 message-glow`}
        >
          <div className="rounded-tl-lg p-2 border-r flex items-center">
            {msg.role === "assistant" ? "🤖" : "🧒🏻"}
          </div>
          <div className="ml-2 text-white">
            <ReactMarkdown>{msg.content}</ReactMarkdown>
          </div>
        </div>
      ))}
      <div ref={messagesEndRef} />
    </div>
  );
}

The Messages component renders a scrollable list of chat messages, automatically keeping the view scrolled to the latest entry. It accepts a messages prop (an array of Message objects) and uses a ref to an empty <div> at the bottom; a useEffect hook watches for changes to the messages array and calls scrollIntoView on that ref so new messages smoothly come into view. 

Each message is wrapped in a styled <div> whose background color and avatar icon depend on the message’s role (“assistant” vs. “user”), and the text content is rendered via ReactMarkdown to support Markdown formatting.

Chat Input Component

Lastly, staying in the `components/Chat` directory,  we have the chat input.  Create a `ChatInput.tsx` file and copy and paste this code block in:

import { ChangeEvent } from "react";
import SendIcon from "../Icons/SendIcon";

interface InputProps {
  input: string;
  handleInputChange: (e: ChangeEvent<HTMLInputElement>) => void;
}

function Input({ input, handleInputChange }: InputProps) {
  return (
    <div className="bg-gray-800 p-4 rounded-xl shadow-lg w-full max-w-2xl mx-auto">
      <input
        type="text"
        value={input}
        onChange={handleInputChange}
        placeholder={"Ask Smart Search about TV shows..."}
        className="w-full bg-transparent text-gray-200 placeholder-gray-500 focus:outline-none text-md mb-3"
      />
      <div className="flex">
        <button
          type="submit"
          className="p-1 hover:bg-gray-700 rounded-md transition-colors ml-auto"
          aria-label="Send message"
          disabled={!input.trim()}
        >
          <SendIcon />
        </button>
      </div>
    </div>
  );
}

export default Input;

This file exports an Input component that renders a styled text field and send button for your chat UI. It takes an input string and a handleInputChange callback to keep the input controlled, showing a placeholder prompt (“Ask Smart Search about TV shows…”). The send button, decorated with a SendIcon, is disabled when the input is empty or just whitespace.

Update the page.tsx Template


We need to modify the src/app/page.tsx file to add the Chat component to the page.  In the page.tsx file, copy and paste this code:

"use client";
import Chat from "./components/Chat/Chat";
import { useChat } from "@ai-sdk/react";
import { useEffect } from "react";

const Page: React.FC = () => {
  const {
    messages,
    input,
    handleInputChange,
    handleSubmit,
    setMessages,
    status,
  } = useChat();

  useEffect(() => {
    if (messages.length < 1) {
      setMessages([
        {
          role: "assistant",
          content: "Welcome to the Smart Search chatbot!",
          id: "welcome",
        },
      ]);
    }
  }, [messages, setMessages]);

  return (
    <div className="flex flex-col justify-between h-screen bg-white mx-auto max-w-full">
      <div className="flex w-full flex-grow overflow-hidden relative bg-slate-950">
        <Chat
          input={input}
          handleInputChange={handleInputChange}
          handleMessageSubmit={handleSubmit}
          messages={messages}
          status={status}
        />
      </div>
    </div>
  );
};

export default Page;


This file defines our page component that leverages the useChat hook from the @ai-sdk/react package to manage chat state, including messages, input text, submission handler, and status. 
Upon initial render, a useEffect hook checks if there are no messages and injects a default assistant greeting. The component returns a full-viewport flexbox layout with a styled background area in which it renders the Chat component, passing along the chat state and handlers.

Update The layout.tsx File With Metadata

We need to add metadata to our layout.  Copy and paste this code block into the `src/app/layout.tsx` file:

import type { Metadata } from "next";
import { Inter } from "next/font/google";
import "./globals.css";

const inter = Inter({ subsets: ["latin"] });

export const metadata: Metadata = {
  title: "Smart Search RAG",
  description: "Lets make a chatbot with Smart Search",
};

export default function RootLayout({
  children,
}: Readonly<{
  children: React.ReactNode;
}>) {
  return (
    <html lang="en">
      <body className={inter.className}>{children}</body>
    </html>
  );
}

This file configures the global layout and metadata for the app: it imports global styles, loads the Inter font, and sets the page title and description. The default RootLayout component wraps all page content in <html> and <body> tags, applying the Inter font’s class to the body.

CSS Note: The last thing to add for the styling is the globals.css file. Visit the code block here and copy and paste it into your project.

Test The ChatBot’s Dynamism


The chatbot should be completed and testable in this state. In your terminal run `npm run dev` and navigate to http://localhost:3000. Try asking the chatbot a few questions. 

After you ask it a few questions related to your WordPress content, ask it something about a subject that is not in your WordPress content. The AI will attempt to fetch what you asked for using the tooling it has via MCP, and it will determine through Smart Search that the content does not exist on your WordPress site.

Now, try adding a new post with a title and content. It could be any topic. Publish the post and then ask the chatbot about the subject. It should give you the relevant content you are asking for in natural language.

You should see this experience in your browser: 

Conclusion

We hope this article helped you understand how to create a chatbot with WP Engine’s Smart Search AI MCP server in headless WordPress!  Stay tuned for the next article on using this in traditional WordPress!

As always, we’re super stoked to hear your feedback and learn about the headless projects you’re working on, so hit us up in the Headless WordPress Discord!

The post Implement WP Engine’s Smart Search AI Model Context Protocol (MCP) Server in Headless WordPress appeared first on Builders.

]]>
https://wpengine.com/builders/wp-engine-smart-search-mcp-in-headless-wp/feed/ 0
Astro + WordPress: Post Previews https://wpengine.com/builders/astro-wordpress-post-previews/ https://wpengine.com/builders/astro-wordpress-post-previews/#respond Mon, 18 Aug 2025 21:12:11 +0000 https://wpengine.com/builders/?p=31956 Historically, previews for headless WordPress have been quite complicated. Gatsby and Faust.js have both implemented their own solutions, but required specific buy-in on those solutions. To alleviate this, our headless […]

The post Astro + WordPress: Post Previews appeared first on Builders.

]]>
Historically, previews for headless WordPress have been quite complicated. Gatsby and Faust.js have both implemented their own solutions, but required specific buy-in on those solutions. To alleviate this, our headless OSS team released the HWP Previews plugin for headless WordPress that sets you up to do previews without using Faust or Gatsby. Now, you don’t have to use Faust to give your publishers the preview experience they expect in WordPress.

This plugin overrides WordPress’s default preview behavior and allows you to control how previews are requested (URL, path, query params, etc.). The plugin is in beta, so we’d love to hear about any missing features or bugs.

Since the HWP Previews plugin covers our needs on the WP side, this article will cover implementing the functionality on the framework side, i.e., Astro. Join me as we dive into WPGraphQL, authentication, and previews!

Goal

  1. We want the same components/code that renders our production pages to render previews so they’re identical.
  2. We want the WP experience to be seamless for content creators.

Requirements

Astro

  1. Needs to know whether a request is for a preview or production page
  2. Needs to know what content is being previewed
  3. Needs to be able to render the preview on demand
  4. Needs to use the same template/components/queries as production pages
  5. Needs to be able to authenticate the preview request

WordPress

  1. Needs to know where our front-end is for previews (domain/path)
  2. Needs to be able to know how to request a preview for a given post/page (path/queries/headers/etc)
  3. Needs to leverage the existing “preview” links/buttons/popups provided by native previews to keep the experience seamless.

Pesky Details

Auth

What you don’t see in the URL is the authentication that’s also happening. This preview URL will redirect the user to the WordPress login if they’re not already authenticated. Unpublished posts are non-public by default; you must be authenticated and authorized to view them. This holds true in WPGraphQL: if you query an unpublished post, it’ll return null unless you authenticate your GraphQL query. We’ll need to authenticate users and make sure any GraphQL requests receive proper Authorization headers for preview routes.

Database ID

So why doesn’t WordPress use a URL like https://mywordpresssite.com/blog/hello-world/?preview=true? After all, this URL is unique to the post, and the query param tells WordPress to render the draft version. If only it were that simple!

WordPress doesn’t assign a URI to a post until after it has been published. This means drafts and scheduled posts don’t have a dedicated URI. They may have a slug, but this isn’t necessarily unique in WordPress land. Thus, to correctly render previews, we must handle routing and data fetching based on the databaseId, not the URI. This will come up in several ways later, but for now, know that this is a constraint of WordPress we must adapt to.

Strategy

Like in any headless WordPress setup, we’ll need to start with working routing and data fetching. For this article, I’ll be starting from where we left off in the Routing and GraphQL article and adding on previews. This means we’ll be using URQL for data fetching and the template hierarchy for routing.

SSR

First, we need SSR. Our original [...uri].astro catch-all route was static. We have two options: convert it to SSR or add a dedicated /preview/ route that is SSR. For this example, I’ve opted to convert my catch-all route to SSR.

Detect Preview

Next, like WordPress, we need to detect whether we need to authenticate the request and fetch preview data. The preview=true query parameter does this for us. We’ll use this to detect previews and handle them.

Database ID

As we discussed before, the database ID is required by WordPress for previews. In my example, I get this by having the Previews plugin pass this as a query param: post_id={ID}

Auth

The goal of this article is to show you previews, not implement authentication. Because of that, I’ve opted for the simplest possible authentication method, which is not secure: hard-coding my admin credentials in my code and using them for HTTP Basic authentication.

Note: I can do this because the WordPress server used in this example is not public; its entire DB and front-end example are running on your computer if you start it from the repo. If you’re going to implement previews, you’ll need to implement proper authentication if you don’t want a security breach. I’d highly recommend the WPGraphQL Headless Login plugin by Dovid Levine.

Configuring WordPress

Selecting to preview a post or page in WordPress results in a URL path that will look something like https://mywordpresssite.com/?preview=true&p=23, with p being the post’s database ID.

What needs to change here? First, our front-end is not on the WordPress server; we need to tell previews to go to our JS framework. This is likely your production server, but static site builders like Gatsby may require a dedicated preview server. Others may require a dedicated path, query parameters, or headers.

Finally, WordPress renders the appropriate PHP template at this route. To keep our experience seamless, I’d rather the content creators see the headless front-end here, not get kicked out to a new tab or have to find their way back into WordPress admin.

This means we need to customize the URL that we’ll route to when clicking “preview” and customize the ?preview=true behavior to embed an iframe of our front-end. The good news is that this is exactly what the HWP Previews plugin does! 

Building It out

Alright, now that WordPress is configured and our basic strategy is in place, let’s start implementing this in Astro.

Catch-all Route

Our first changes will be in the catch-all route, where we fetch the template. We’ll start by capturing and storing our preview and post_id search params.

const isPreview = Astro.url.searchParams.get("preview") === "true";
const postId = Astro.url.searchParams.get("post_id") || undefined;

We’ll also want to store this search parameter for later use. Because we’re using Astro’s rewrite functionality, this param gets stripped from the URL accessed by templates, so we save it here.

// Locals is an Astro pattern for sharing route data.
Astro.locals.isPreview = isPreview;

Authentication

I’ve told you I did some really basic things. Are you ready for it?

export const authHeaders = (isPreview) => {
  return isPreview
    ? {
        // Hardcoded local-dev credentials; swap in real auth for production.
        Authorization: `Basic ${Buffer.from(
          `admin:password`
        ).toString("base64")}`,
      }
    : undefined;
};

As you can see, if isPreview is true, we add the Authorization header; otherwise, we don’t. This is used in combination with a great feature of the URQL GraphQL client.

const response = await client.query(QUERY, VARIABLES,{
    fetchOptions: {
      headers: {
        ...authHeaders(isPreview),
      },
    },
  }
);

On top of the query and variables, the third parameter of URQL’s query function takes a config object — the same config you can pass when creating the client. That means I can create a single client with good defaults and override them per query as needed, instead of juggling multiple clients.
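The defaults-plus-overrides pattern boils down to object spreading; here’s a standalone sketch (helper and variable names are mine, not URQL’s):

```javascript
// Default fetch options baked into the single client.
const defaultFetchOptions = {
  headers: { "Content-Type": "application/json" },
};

// Hypothetical helper: layer per-query options over the client defaults,
// mirroring how per-call config is combined with client config.
function mergeFetchOptions(perQuery = {}) {
  return {
    ...defaultFetchOptions,
    ...perQuery,
    headers: { ...defaultFetchOptions.headers, ...perQuery.headers },
  };
}

// Public queries keep the defaults…
const publicOptions = mergeFetchOptions();

// …while preview queries add the Authorization header on top.
const previewOptions = mergeFetchOptions({
  headers: { Authorization: "Basic xyz" },
});
```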

Template Hierarchy

In the last article, we built the uriToTemplate function to handle our template-hierarchy routing. Now that we’re implementing previews, it needs to resolve a template from either a URI or a database ID. If you paid attention to the original seed query, you may have noticed that because we copied it from Faust, it was already set up to handle this.

query GetSeedNode(
    $id: ID! = 0
    $uri: String! = ""
    $asPreview: Boolean = false
  ) {
    ... on RootQuery @skip(if: $asPreview) {
      nodeByUri(uri: $uri) {
        __typename
        ...GetNode
      }
    }

    ... on RootQuery @include(if: $asPreview) {
      contentNode(id: $id, idType: DATABASE_ID, asPreview: true) {
        __typename
        ...GetNode
      }
    }
  }

Thus, I updated my getSeedQuery function to handle the additional variables. Auth was also handled here.

export async function getSeedQuery(variables) {
  return client.query(SEED_QUERY, variables, {
    fetchOptions: {
      headers: {
        ...authHeaders(variables.asPreview),
      },
    },
  });
}

Finally, I updated uriToTemplate to idToTemplate, which handles both uri and databaseId.

export async function idToTemplate(
  options: ToTemplateArgs
): Promise<TemplateData> {
  const id = "id" in options ? options.id : undefined;
  const uri = "uri" in options ? options.uri : undefined;
  const asPreview = "asPreview" in options ? options.asPreview : false;

  if (asPreview && !id) {
    console.error("HTTP/400 - preview requires database id");
    return returnData;
  }

  const { data, error } = await getSeedQuery({ uri, id, asPreview });

  //...
}

Then, we update the call to this function from the catch-all route.

const results = await idToTemplate({ uri, asPreview: isPreview, id: postId });

Updating Templates

You’d be forgiven for thinking we’re done. Remember that pesky thing I mentioned about having to use databaseIds for preview queries? Well, we now have to update our templates to do this.

While we could make our templates use the @skip and @include pattern like the seed query…there is just no point. The seed query handled this complexity for us and returned a bunch of data that we used to select a template. That data included the database ID. We can now use that for all further queries instead of the URI!

Let’s start by grabbing those isPreview and databaseId variables so they’re handy.

const isPreview = Astro.locals.isPreview;
const databaseId = Astro.locals.templateData?.databaseId;

Like with our seed query, we will also need to add authentication when appropriate.

const { data, error } = await client.query(
  gql`
    #...
  `,
  {
    databaseId,
    isPreview,
  },
  {
    fetchOptions: {
      headers: {
        ...authHeaders(isPreview),
      },
    },
  }
);

Next, we need to update our query. Previously, we used nodeByUri for all of our queries. This works great, but it doesn’t accept database IDs or support returning preview data. Thus, for posts and pages, we need to use contentNode.

const { data, error } = await client.query(
  gql`
    query singleTemplatePageQuery(
      $databaseId: ID!
      $isPreview: Boolean = false
    ) {
      contentNode(
        id: $databaseId
        idType: DATABASE_ID
        asPreview: $isPreview
      ) {
        id
        uri
        ... on NodeWithTitle {
          title
        }
        ... on NodeWithContentEditor {
          content
        }
        ... on Post {
          categories {
            nodes {
              name
              uri
            }
          }
          tags {
            nodes {
              name
              uri
            }
          }
        }
      }
    }
  `,
  {
    databaseId,
    isPreview,
  },
  {
    fetchOptions: {
      headers: {
        ...authHeaders(isPreview),
      },
    },
  }
);

For this change, I didn’t have to alter any of the actual query that defines the returned data. I also updated the Astro html template to access data.contentNode instead of data.nodeByUri.

Finally, I added a quick check to validate that I got a post back. If you’re not aware, querying GraphQL with missing or incorrect credentials doesn’t result in an HTTP/401 error; it simply returns a null value. So I return HTTP/404 if the value is null, which handles both incorrect database IDs and unauthorized queries.

if (!data?.contentNode) {
  console.error("HTTP/404 - Not Found in WordPress:", databaseId);
  return Astro.rewrite("/404");
}

It works!

Previews! We’ve implemented one of the biggest missing features of headless WordPress. HWP Previews did all the heavy lifting on the WP side, and we took what it provided to render the posts. Our content creators can now publish with confidence, knowing exactly what their work will look like on the front end!

I’m excited to have this plugin available for folks to build custom preview experiences outside of Gatsby and Faust. What are you going to build with it? Come join our Headless WordPress Discord and let us know!

The post Astro + WordPress: Post Previews appeared first on Builders.

]]>
https://wpengine.com/builders/astro-wordpress-post-previews/feed/ 0
How to Create a Headless E-Commerce Search Experience With WP Engine’s Smart Search AI and Nuxt.js https://wpengine.com/builders/how-to-create-a-headless-e-commerce-search-experience-with-wp-engines-smart-search-ai-and-nuxt-js/ https://wpengine.com/builders/how-to-create-a-headless-e-commerce-search-experience-with-wp-engines-smart-search-ai-and-nuxt-js/#respond Fri, 08 Aug 2025 16:40:53 +0000 https://wpengine.com/builders/?p=31950 Have you ever tried to buy something on a website only to have its poor search feature send you somewhere else? My stoke for finding the perfect rock climbing, coding, […]

The post How to Create a Headless E-Commerce Search Experience With WP Engine’s Smart Search AI and Nuxt.js appeared first on Builders.

]]>

Have you ever tried to buy something on a website only to have its poor search feature send you somewhere else? My stoke for finding the perfect rock climbing, coding, or running gear definitely drops when I can’t easily find what I’m looking for.

The search feature is an essential tool on any e-commerce site for converting visitors into customers. It helps users find and purchase products quickly and efficiently.

This is where WP Engine’s Smart Search AI steps in. It’s a product for WP Engine customers that replaces WordPress’s built-in search with an intelligent, AI-driven engine for both traditional and headless WordPress applications. Smart Search AI guides visitors to the most relevant content using semantic understanding to surface better results, even for custom post types.

In this step-by-step guide, I will show you how to create a full headless WordPress e-commerce search experience with WooCommerce, WPGraphQL, and WP Engine Smart Search AI.  By the end of this article, you will have created a starter e-commerce site with search functionality from start to finish.

If you prefer the video version of this article, you can access it here:

Prerequisites

To benefit from this article, you should be familiar with the basics of working with the command line, headless WordPress development, Nuxt.js, and the WP Engine User Portal.

Steps For Setting Up:

1. Set up an account on WP Engine and get a WordPress install running.  Log in to your WP Admin. Alternatively, if you are not an existing customer of WP Engine, you can get a free headless platform sandbox account here to give it a try.

2. Once in WP Admin, go to Plugins in the left sidebar, click the Add New button, search for the WooCommerce* plugin, and install it. Follow the same process to install the WPGraphQL plugin. Once both plugins are installed, activate them.

Note: Don’t forget to save your WPGraphQL endpoint, which you can access on the WPGraphQL settings page:

3. Next, go to the WooGraphQL releases page on the GitHub repo and download the latest version.  Once you download the latest version, go back to your WP Admin and upload it to the plugins page.

4. Next, we’ll run a test to ensure that WooCommerce data can be accessed via GraphQL.  First, we need to add product data to our WooCommerce store.  In the left sidebar, go to Products > Add New Product:

Once you click on that, it will take you to a general Products page that shows all your products.  At the top of the page, you will have the option to Add New Product, Import, or Export.  This is where you can add, edit, and import products. Click on Import:

Here, you can add all of my dummy product data: go to this .csv file in my repo, download it, then upload it on your WooCommerce Import Products page:

For this example, I added a product name, product description, regular price, SKU, product tag, product category, and product image.


5. Add a Smart Search license. Refer to the docs here to add a license. Contact our sales department for a free trial demo.

6. In the WP Admin, go to WP Engine Smart Search > Settings.  You will find your Smart Search URL and access token here.  Copy and save it.  We will need it for our environment variables for the frontend.  You should see this page:

7. Next, navigate to Configuration, select the Semantic card, and add the post_content, post_title, and post_excerpt fields in the Semantic settings section. We are going to use these as our AI-powered fields for similarity searches. Make sure to hit Save Configuration afterward.

8. After saving the configuration, head on over to the Index data page, then click “Index Now”. It will give you this success message once completed:


9. Now that we have indexed our data into Smart Search, let’s make sure it works.  Head over to the GraphQL IDE in your WP Admin. You can either access this via the left sidebar or the menu bar at the top of the page.  Copy and paste the query below into the IDE:

query GetProducts($first: Int = 10) {
  products(first: $first) {
    edges {
      node {
        name
        description
        image {
          sourceUrl
          altText
        }
      }
    }
  }
}

This is a simple query that is asking for the first 10 products.  It should give you the name, description, and image data of your products.  Hit play, and you should get the results back:

Stoked!!! It works!

10. We need to set the frontend up now.  The Nuxt.js frontend boilerplate contains a project that already renders a home page with products and links to those product detail pages.  Clone down the Nuxt repo starting point by copying and pasting this command in your terminal:

npx degit Fran-A-Dev/smart-search-headlesswp-ecomm#starting-point-boilerplate my-project


Once you clone it down, navigate into the directory and install the project dependencies:

cd my-project
npm install


11. Create a .env.local file inside the root of the Nuxt project. Open that file and paste in these environment variables (these are the values you saved in steps 2 and 6):

NUXT_PUBLIC_WORDPRESS_URL="<your WP url here>"
NUXT_PUBLIC_SMART_SEARCH_URL="<your smart search url here>"
NUXT_PUBLIC_SMART_SEARCH_TOKEN="<your smart search access token here>"


12. Next, let’s update how our Nuxt app will build and run the site.  Go to your nuxt.config.ts file in the root and update it accordingly:

export default defineNuxtConfig({
  compatibilityDate: "2024-11-01",
  devtools: { enabled: process.env.NODE_ENV === "development" },
  modules: ["@nuxtjs/tailwindcss", "@nuxt/image"],

  nitro: {
    compressPublicAssets: true,
  },

  css: ["~/assets/css/main.css"],

  build: {
    transpile: process.env.NODE_ENV === "production" ? ["vue"] : [],
  },
  image: {
    domains: [
      new URL(process.env.NUXT_PUBLIC_WORDPRESS_URL || "").hostname,
    ].filter(Boolean),
    quality: 80,
    format: ["webp", "jpg", "png"],
  },
  app: {
    head: {
      title: "Nuxt headlesswp e-commerce",
      meta: [{ name: "description", content: "Nuxt headlesswp e-commerce" }],
      link: [
        {
          rel: "stylesheet",
          href: "https://fonts.googleapis.com/icon?family=Material+Icons",
        },
      ],
    },
  },
  runtimeConfig: {
    public: {
      wordpressUrl: "",
      smartSearchUrl: "",
      smartSearchToken: "",
    },
  },
});


We are done with the setup steps to create the boilerplate starting point.  In your terminal, run npm run dev and visit http://localhost:3000 to make sure it works.  You should see this:

And when you navigate to a product detail page by clicking on a details link, you should see the detail page:

Wrap The Smart Search Endpoint

The first thing we need to do is wrap our GraphQL fetch logic—both against the WP Engine Smart Search endpoint and our WordPress GraphQL API—into a single reusable function. Create a folder called composables at the root.  In that folder, create a file called useSmartSearch.js and paste in the code below:

export const useSmartSearch = () => {
  const config = useRuntimeConfig();
  const {
    public: { smartSearchUrl, smartSearchToken, wordpressUrl },
  } = config;

  
  const _post = async ({ url, token, query, variables }) => {
    if (!url) throw new Error("URL not configured");
    const headers = { "Content-Type": "application/json" };
    if (token) headers.Authorization = `Bearer ${token}`;
    try {
      return await $fetch(url, {
        method: "POST",
        headers,
        body: { query, variables },
      });
    } catch (err) {
      if (process.dev) {
        console.error("GraphQL error:", err);
      }
      throw err;
    }
  };

  
  const getContext = (message, field = "post_content", minScore = 0.8) =>
    _post({
      url: smartSearchUrl,
      token: smartSearchToken,
      query: `query GetContext($message: String!, $field: String!, $minScore: Float!) {
        similarity(input: { nearest: { text: $message, field: $field }, minScore: $minScore }) {
          total
          docs { id data score }
        }
      }`,
      variables: { message, field, minScore },
    });

  
  const searchProducts = (
    searchQuery,
    { limit = 10, strictMode = false, filter = null } = {}
  ) => {
    
    const semanticSearchConfig = strictMode
      ? "" 
      : 'semanticSearch: { searchBias: 10, fields: ["post_title", "post_content"] }';

    let finalFilter = "post_type:product";
    if (filter) {
      finalFilter = `${finalFilter} AND ${filter}`;
    }

    return _post({
      url: smartSearchUrl,
      token: smartSearchToken,
      query: `query SearchProducts($query: String!, $limit: Int, $filter: String!) {
        find(
          query: $query
          limit: $limit
          filter: $filter
          ${semanticSearchConfig}
        ) {
          total
          documents { id score data }
        }
      }`,
      variables: { query: searchQuery, limit, filter: finalFilter },
    });
  };

  
  const getProductDetails = (productIds) =>
    _post({
      url: wordpressUrl,
      token: null,
      query: `query GetProductDetails($ids: [Int]!) {
        products(where: { include: $ids }) {
          edges { 
            node { 
              databaseId 
              name 
              slug
              description
              image { sourceUrl altText } 
              ... on ProductWithPricing { regularPrice } 
            } 
          }
        }
      }`,
      variables: { ids: productIds },
    });

  return { getContext, searchProducts, getProductDetails };
};

This composable wraps all interactions with WP Engine’s Smart Search and your WordPress GraphQL endpoint.

It reads URLs and tokens from Nuxt’s runtime config, provides a private _post helper for sending GraphQL requests via $fetch, and exposes three methods: getContext for server-side semantic similarity searches, searchProducts for both AI-driven semantic queries and strict filtering of products (by toggling strictMode or supplying a custom filter string), and getProductDetails to fetch full product data—including images and pricing—directly from WPGraphQL.
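One detail worth isolating is how searchProducts composes its filter string; a minimal sketch of the same logic (the standalone helper name is mine):

```javascript
// Combine the always-on product filter with an optional extra clause,
// the same way searchProducts builds its finalFilter.
function buildFilter(extra = null) {
  const base = "post_type:product";
  return extra ? `${base} AND ${extra}` : base;
}

// No extra filter: products only.
buildFilter(); // "post_type:product"

// Scoped to a category, as the activity search later does.
buildFilter('product_cat.name.keyword:"Running"');
// 'post_type:product AND product_cat.name.keyword:"Running"'
```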

Create Search Logic

The next piece brings together our Smart Search and WPGraphQL calls into a single logic layer. Create a file at composables/useSearchLogic.js and paste in the code below:

import { useSmartSearch } from "./useSmartSearch";
import { ref } from "vue";

export const useSearchLogic = () => {
  const { searchProducts, getProductDetails } = useSmartSearch();
  const resultsLimit = ref(20);

  const mapBasicResults = (documents) =>
    documents.map(({ data, score }) => ({
      id: data.ID,
      title: data.post_title,
      description: data.post_content,
      score,
      image: "",
      price: 0,
    }));

  const performSearch = async (query) => {
    if (!query || !query.trim()) {
      return { success: false, error: "Empty query" };
    }

    const startTime = Date.now();

    try {
      const { data } = await searchProducts(query, {
        limit: Number(resultsLimit.value)
      });

      if (!data?.find) {
        throw new Error("Invalid search response");
      }

      const basic = mapBasicResults(data.find.documents);
      const detailed = await fetchCompleteProductData(basic);
      const searchTime = Date.now() - startTime;

      return {
        success: true,
        results: detailed,
        total: data.find.total,
        searchTime,
        query: `Text search: "${query}"`,
      };
    } catch (error) {
      if (process.dev) {
        console.error("Search error:", error);
      }
      return {
        success: false,
        error: `Search failed: ${error.message || "Please try again."}`,
      };
    }
  };

  const performActivitySearch = async (activityValue, priceFilter = null) => {
    if (!activityValue || !activityValue.trim()) {
      return { success: false, error: "No activity selected" };
    }

    const startTime = Date.now();

    try {
      let query = `product_cat.name.keyword:"${getActivityLabel(
        activityValue
      )}"`;

      const { data } = await searchProducts(query, {
        limit: Number(resultsLimit.value),
        strictMode: true, 
      });

      if (!data?.find) {
        throw new Error("Invalid search response");
      }

      const basic = mapBasicResults(data.find.documents);
      const detailed = await fetchCompleteProductData(basic);

      let filteredResults = detailed;
      if (
        priceFilter &&
        (priceFilter.min !== undefined || priceFilter.max !== undefined)
      ) {
        filteredResults = detailed.filter((product) => {
          const price = product.price || 0;
          const { min = 0, max = Infinity } = priceFilter;
          return price >= min && price <= max;
        });
      }

      const searchTime = Date.now() - startTime;

      return {
        success: true,
        results: filteredResults,
        total: filteredResults.length,
        searchTime,
        query: `Activity: ${getActivityLabel(activityValue)}${
          priceFilter
            ? ` | Price: $${priceFilter.min || 0} - $${
                priceFilter.max || "max"
              }`
            : ""
        }`,
      };
    } catch (error) {
      if (process.dev) {
        console.error("Activity search error:", error);
      }
      return {
        success: false,
        error: `Search failed: ${error.message || "Please try again."}`,
      };
    }
  };

   const performPriceOnlySearch = async (
    { min, max },
    activityFilter = null
  ) => {
    const startTime = Date.now();

    try {
      let query = activityFilter
        ? `product_cat.name.keyword:"${getActivityLabel(activityFilter)}"`
        : "*";

      const { data } = await searchProducts(query, {
        limit: Number(resultsLimit.value),
        strictMode: true,
      });

      if (!data?.find) {
        throw new Error("Invalid search response");
      }

      const basic = mapBasicResults(data.find.documents);
      const detailed = await fetchCompleteProductData(basic);

    
      const filteredResults = detailed.filter((product) => {
        const price = product.price || 0;
        return price >= min && price <= max;
      });

      const searchTime = Date.now() - startTime;

      return {
        success: true,
        results: filteredResults,
        total: filteredResults.length,
        searchTime,
        query: `${
          activityFilter
            ? `Activity: ${getActivityLabel(activityFilter)} | `
            : ""
        }Price: $${min} - $${max}`,
      };
    } catch (error) {
      if (process.dev) {
        console.error("Price search error:", error);
      }
      return {
        success: false,
        error: `Search failed: ${error.message || "Please try again."}`,
      };
    }
  };

  const fetchCompleteProductData = async (products) => {
    if (!products.length) return [];

    try {
      const productMap = new Map();
      products.forEach((prod) => {
        productMap.set(prod.id, prod);
      });

      const productIds = Array.from(productMap.keys());
      const response = await getProductDetails(productIds);

      const edges = response?.data?.products?.edges || [];

      const graphqlDataMap = new Map();
      edges.forEach((edge) => {
        if (edge?.node?.databaseId) {
          graphqlDataMap.set(edge.node.databaseId, edge.node);
        }
      });

      const enrichedProducts = [];
      for (const [productId, basicProduct] of productMap) {
        const graphqlNode = graphqlDataMap.get(productId);

        if (!graphqlNode) {
          enrichedProducts.push({
            ...basicProduct,
            image: "",
            price: 0,
            formattedPrice: "$0.00",
            hasImage: false,
            isAvailable: false,
          });
          continue;
        }

        const imageData = graphqlNode.image;
        const imageUrl = imageData?.sourceUrl || "";
        const imageAlt = imageData?.altText || basicProduct.title || "";

        const rawPrice = graphqlNode.regularPrice || "";
        let priceValue = 0;
        let formattedPrice = "$0.00";

        if (rawPrice) {
          const numericPrice = rawPrice.replace(/[^0-9.]/g, "");
          priceValue = numericPrice ? parseFloat(numericPrice) : 0;

          if (priceValue > 0) {
            formattedPrice = new Intl.NumberFormat("en-US", {
              style: "currency",
              currency: "USD",
            }).format(priceValue);
          }
        }

        const productName = graphqlNode.name || basicProduct.title;
        const productSlug = graphqlNode.slug || "";
        const productDescription =
          graphqlNode.description || basicProduct.description || "";

        enrichedProducts.push({
          ...basicProduct,
          title: productName,
          description: productDescription,
          slug: productSlug,
          image: imageUrl,
          imageAlt,
          hasImage: Boolean(imageUrl),
          price: priceValue,
          formattedPrice,
          rawPrice,
          isAvailable: priceValue > 0,
          hasCompleteData: true,
        });
      }

      return enrichedProducts;
    } catch (error) {
      if (process.dev) {
        console.error("Error fetching product details:", error);
      }

      return products.map((prod) => ({
        ...prod,
        image: "",
        price: 0,
        formattedPrice: "$0.00",
        hasImage: false,
        isAvailable: false,
        hasCompleteData: false,
        error: "Failed to fetch complete data",
      }));
    }
  };

  const getActivityLabel = (activityValue) => {

    const labels = {
      coding: "coding", // matches exactly
      running: "Running", // matches exactly (note capital R)
      "rock-climbing": "climbing", // maps to "climbing" in index
    };
    return labels[activityValue] || activityValue;
  };

  const performCombinedSearch = async (activityValue, priceFilter) => {
    return performActivitySearch(activityValue, priceFilter);
  };

  return {
    performSearch,
    performActivitySearch,
    performPriceOnlySearch,
    performCombinedSearch,
    fetchCompleteProductData,
    getActivityLabel,
  };
};

This code block glues together two back-end services: WP Engine Smart Search for all your full-text, semantic, and strict “find” queries (including category and range filters) and WPGraphQL for authoritative product details. 

Each of its methods (performSearch, performActivitySearch, performPriceOnlySearch, and performCombinedSearch) constructs a single GraphQL find call that tells Smart Search exactly how to filter (via its query, filter, semanticSearch, or strictMode inputs).

Smart Search returns only the IDs, scores, and minimal data you need; then fetchCompleteProductData issues one batched WPGraphQL request to pull down images, prices, and slugs, merging them back into your UI payload. 
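The de-duplication that precedes that batched request can be sketched on its own (the sample stubs are made up):

```javascript
// Collapse duplicate product stubs by ID before batching one WPGraphQL
// request, as fetchCompleteProductData does with its productMap.
function uniqueProductIds(products) {
  const productMap = new Map();
  for (const prod of products) productMap.set(prod.id, prod);
  return Array.from(productMap.keys());
}

// Duplicate hits collapse to a single ID per product.
uniqueProductIds([{ id: 7 }, { id: 12 }, { id: 7 }]); // [7, 12]
```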

Here’s a detailed breakdown:

  1. Initialization
    • It pulls in two low-level operations from useSmartSearch:
      • searchProducts(query, options) to run any “find” query against the Smart Search API.
      • getProductDetails(ids) to fetch full WPGraphQL product data (images, pricing) by database ID.
    • It also defines a reactive resultsLimit (default 20) to control page size.
  2. Mapping Basic Results
    • mapBasicResults takes the raw documents array returned by Smart Search—which each contains a data map and a relevance score—and converts it to a minimal product stub { id, title, description, score, image: "", price: 0 }.
  3. Text‐Query Search (performSearch)
    • Validates the input query, records start time, then calls searchProducts(query, { limit }).
    • Throws if the API response is malformed.
    • Builds basic stubs via mapBasicResults, then immediately calls fetchCompleteProductData to enrich each result with image URLs and numeric pricing.
    • Returns { success, results, total, searchTime, query }.
  4. Category/Activity Search (performActivitySearch)
    • Ensures a non-empty activity value, then issues a searchProducts request where the query is simply the exact category name (e.g. “Running”) and strictMode: true to disable semantic fuzziness.
    • Enriches with full product data, then optionally applies a client-side price filter if one was passed in.
  5. Price‐Only Search (performPriceOnlySearch)
    • Builds a “catch-all” query (“*” or scoped to a category) with strictMode: true.
    • Fetches the matching products, enriches them, then filters the enriched list on the client by the given { min, max } range.
  6. Data Enrichment (fetchCompleteProductData)
    • Given an array of basic stubs, batches a WPGraphQL call for all their IDs.
    • Maps the GraphQL response nodes back onto your stubs, filling in image, price, and formatting into formattedPrice, flagging missing/failed items.
  7. Utility and Labels
    • getActivityLabel maps your UI’s “activity” values (e.g., “rock-climbing”) to the exact category names in your Smart Search index.
    • A tiny wrapper, performCombinedSearch, simply delegates to performActivitySearch so you can hook into a unified API.
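As a worked example of the price handling in step 6, the raw-string-to-price path runs fine in isolation (the standalone function name and sample price strings are my own):

```javascript
// Normalize a raw WooCommerce price string into a numeric value and a
// display string, mirroring the logic in fetchCompleteProductData.
function normalizePrice(rawPrice) {
  const numericPrice = (rawPrice || "").replace(/[^0-9.]/g, "");
  const priceValue = numericPrice ? parseFloat(numericPrice) : 0;
  const formattedPrice =
    priceValue > 0
      ? new Intl.NumberFormat("en-US", {
          style: "currency",
          currency: "USD",
        }).format(priceValue)
      : "$0.00";
  return { priceValue, formattedPrice };
}

normalizePrice("$1,299.50"); // { priceValue: 1299.5, formattedPrice: "$1,299.50" }
normalizePrice(""); // { priceValue: 0, formattedPrice: "$0.00" }
```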

Our Components

It is time to build the components that render our data in the browser and give the e-commerce site its user experience.  At the root of the project, create a folder called components.  We will stay in the components folder for all of this section.

The Input Field

First, let’s make the Input field for our users to type into for searching.

Create a file at components/SearchInput.vue and paste in the code below:

<template>
  <div class="search-input">
    <!-- Search Input Container -->
    <div class="search-container mb-6">
      <div class="relative">
        <input
          v-model="searchQuery"
          @input="handleInput"
          @keyup.enter="handleSubmit"
          type="text"
          :placeholder="placeholder"
          class="w-full px-4 py-3 pl-12 pr-12 border border-gray-300 rounded-lg focus:outline-none focus:ring-2 focus:ring-blue-500 focus:border-transparent text-lg"
        />

        <!-- Search Icon -->
        <div
          class="absolute inset-y-0 left-0 pl-3 flex items-center pointer-events-none"
        >
          <SearchIcon />
        </div>

        <!-- Clear Button -->
        <button
          v-if="searchQuery"
          @click="handleClear"
          class="absolute inset-y-0 right-0 pr-3 flex items-center hover:text-gray-600"
        >
          <CloseIcon customClass="h-5 w-5 text-gray-400" />
        </button>
      </div>
    </div>
  </div>
</template>

<script setup>
import { ref, watch } from "vue";
import SearchIcon from "~/components/icons/SearchIcon.vue";
import CloseIcon from "~/components/icons/CloseIcon.vue";

// Props
const props = defineProps({
  initialQuery: {
    type: String,
    default: "",
  },
  placeholder: {
    type: String,
    default: "Search products...",
  },
});

// Emits
const emit = defineEmits(["search", "clear", "input"]);

// Reactive data
const searchQuery = ref(props.initialQuery);

// Debounce timer
let searchTimeout = null;

// Methods
const handleInput = () => {
  clearTimeout(searchTimeout);
  searchTimeout = setTimeout(() => {
    emit("input", searchQuery.value);
  }, 300); // 300ms debounce
};

const handleSubmit = () => {
  emit("search", searchQuery.value);
};

const handleClear = () => {
  searchQuery.value = "";
  emit("clear");
};

// Watch for external changes to search query
watch(
  () => props.initialQuery,
  (newQuery) => {
    searchQuery.value = newQuery;
  }
);

// Expose methods for parent component
defineExpose({
  clearQuery: () => {
    searchQuery.value = "";
  },
  setQuery: (query) => {
    searchQuery.value = query;
  },
  searchQuery: searchQuery,
});
</script>

<style scoped>
.search-container {
  max-width: 800px;
  margin: 0 auto;
}
</style>

The SearchInput.vue component renders a styled text input with a built‑in search icon and “clear” button. It accepts two props—initialQuery to seed the field and placeholder for the hint text—and binds its value to a reactive searchQuery via v‑model.

As the user types, it debounces input by 300 ms before emitting an “input” event, fires a “search” event on Enter, and shows a clear button that resets the field and emits “clear”. It also watches initialQuery for external changes and exposes clearQuery and setQuery methods so parent components can programmatically control the input.
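The debounce itself is a standard pattern; here is a framework-free sketch with an injectable scheduler so it can be checked without real timers (all names are mine, not the component’s):

```javascript
// Debounce fn by delay ms. The scheduler/cancel hooks default to real
// timers but can be swapped for fakes, as in the check below.
function debounce(fn, delay, scheduler = setTimeout, cancel = clearTimeout) {
  let timer = null;
  return (...args) => {
    if (timer !== null) cancel(timer);
    timer = scheduler(() => fn(...args), delay);
  };
}

// Fake scheduler that records pending callbacks instead of waiting.
const pending = [];
const fakeScheduler = (cb) => pending.push(cb) - 1;
const fakeCancel = (id) => { pending[id] = null; };

const emitted = [];
const onInput = debounce((v) => emitted.push(v), 300, fakeScheduler, fakeCancel);

onInput("r");
onInput("ru");
onInput("run"); // earlier calls are cancelled; only the last survives

pending.forEach((cb) => cb && cb()); // "time passes"
// emitted is now ["run"]
```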

Activity Filter

Next, let’s make the filter that allows users to select an activity. Create a file at components/ActivityFilter.vue and paste in the code below:

<template>
  <div class="activity-filter mb-6">
    <div class="flex items-center justify-between mb-4">
      <h3 class="text-lg font-medium text-gray-900">Filter by Activity</h3>
      <button
        v-if="selectedActivity"
        @click="clearFilter"
        type="button"
        class="text-sm text-blue-600 hover:text-blue-800"
      >
        Clear Filter
      </button>
    </div>

    <div class="flex flex-wrap gap-3">
      <button
        v-for="activity in activities"
        :key="activity.value"
        type="button"
        @click="selectActivity(activity.value)"
        :aria-pressed="selectedActivity === activity.value"
        :class="[
          'px-4 py-2 rounded-full border text-sm font-medium transition-colors',
          selectedActivity === activity.value
            ? 'bg-blue-600 text-white border-blue-600'
            : 'bg-white text-gray-700 border-gray-300 hover:bg-gray-50',
        ]"
      >
        {{ activity.label }}
      </button>
    </div>

    <div
      v-if="selectedActivity"
      class="mt-4 p-3 bg-blue-50 rounded-lg border border-blue-200"
    >
      <div class="flex items-center justify-between">
        <span class="text-sm text-blue-800">
          <strong>Active Filter:</strong>
          {{ getActivityLabel(selectedActivity) }}
        </span>
        <button
          @click="clearFilter"
          type="button"
          class="text-blue-600 hover:text-blue-800"
          aria-label="Clear activity filter"
        >
          <CloseIcon />
        </button>
      </div>
    </div>
  </div>
</template>

<script setup>
import { ref } from "vue";
import CloseIcon from "~/components/icons/CloseIcon.vue";

const props = defineProps({
  initialActivity: {
    type: String,
    default: "",
  },
});

const emit = defineEmits(["activity-selected", "activity-cleared"]);

const selectedActivity = ref(props.initialActivity);

const activities = [
  { value: "coding", label: "Coding" },
  { value: "running", label: "Running" },
  { value: "rock-climbing", label: "Rock Climbing" },
];

function selectActivity(activity) {
  selectedActivity.value = activity;
  emit("activity-selected", activity);
}

function clearFilter() {
  selectedActivity.value = "";
  emit("activity-cleared");
}

function getActivityLabel(value) {
  const activity = activities.find((a) => a.value === value);
  return activity ? activity.label : value;
}

defineExpose({
  clearActivity: clearFilter,
  setActivity: selectActivity,
});
</script>

The ActivityFilter.vue component renders a set of pill‑style buttons—“Coding,” “Running,” and “Rock Climbing”—allowing users to select one activity at a time. It accepts an initialActivity prop to pre‑select a button and emits activity‑selected with the chosen value whenever a button is clicked. 

A “Clear Filter” button appears when a selection exists, resetting the state and emitting activity‑cleared. We use Vue’s ref for reactive state, simple methods to update and clear the selection, and defineExpose to let parent components programmatically set or clear the filter.

Price Range

Now, let’s give users the ability to slide a range within pricing.  Create a file at components/PriceFilter.vue and paste this code block in:

<template>
  <div class="price-filter mb-6">
    <div class="flex items-center justify-between mb-4">
      <h3 class="text-lg font-medium text-gray-900">Filter by Price</h3>
      <button
        v-if="priceRange.min > 0 || priceRange.max < maxPrice"
        @click="clearFilter"
        type="button"
        class="text-sm text-blue-600 hover:text-blue-800"
      >
        Reset Price
      </button>
    </div>

    <div class="px-3">
      <div class="flex justify-between items-center mb-4">
        <span class="text-sm font-medium text-gray-700"
          >${{ priceRange.min }}</span
        >
        <span class="text-sm text-gray-500">to</span>
        <span class="text-sm font-medium text-gray-700"
          >${{ priceRange.max }}</span
        >
      </div>

      <div class="relative">
        <div class="h-2 bg-gray-200 rounded-lg relative">
          <div
            class="absolute h-2 bg-blue-500 rounded-lg"
            :style="{ left: percentLeft, width: percentWidth }"
          />
        </div>

        <input
          v-model.number="priceRange.min"
          @input="handlePriceChange"
          type="range"
          :min="0"
          :max="maxPrice"
          :step="10"
          aria-label="Minimum price"
          class="absolute w-full h-2 bg-transparent appearance-none cursor-pointer slider-thumb"
        />

        <input
          v-model.number="priceRange.max"
          @input="handlePriceChange"
          type="range"
          :min="0"
          :max="maxPrice"
          :step="10"
          aria-label="Maximum price"
          class="absolute w-full h-2 bg-transparent appearance-none cursor-pointer slider-thumb"
        />
      </div>

      <button
        v-if="priceRange.min > 0 || priceRange.max < maxPrice"
        @click="applyFilter"
        type="button"
        class="w-full mt-4 px-4 py-2 bg-blue-600 text-white rounded-lg hover:bg-blue-700 transition-colors"
      >
        Apply Price Filter (${{ priceRange.min }} - ${{ priceRange.max }})
      </button>
    </div>
  </div>
</template>

<script setup>
import { ref, computed, toRef, onUnmounted } from "vue";

const props = defineProps({
  initialMin: { type: Number, default: 0 },
  initialMax: { type: Number, default: 1000 },
  maxPrice: { type: Number, default: 1000 },
});
const emit = defineEmits(["price-changed", "price-applied", "price-cleared"]);

const priceRange = ref({ min: props.initialMin, max: props.initialMax });
const maxPrice = toRef(props, "maxPrice");

let priceTimeout;

const percentLeft = computed(
  () => `${(priceRange.value.min / maxPrice.value) * 100}%`
);
const percentWidth = computed(
  () =>
    `${((priceRange.value.max - priceRange.value.min) / maxPrice.value) * 100}%`
);

function handlePriceChange() {
  // Clamp so the two thumbs never cross; "max < min" is the same
  // condition as "min > max", so a single branch is enough
  let { min, max } = priceRange.value;
  if (min > max) {
    min = max;
  }
  priceRange.value.min = min;
  priceRange.value.max = max;

  emit("price-changed", { min, max });
  clearTimeout(priceTimeout);
  priceTimeout = setTimeout(() => {
    if (priceRange.value.min > 0 || priceRange.value.max < maxPrice.value) {
      emit("price-applied", {
        min: priceRange.value.min,
        max: priceRange.value.max,
      });
    }
  }, 1000);
}

function clearFilter() {
  priceRange.value.min = 0;
  priceRange.value.max = maxPrice.value;
  emit("price-cleared");
}

function applyFilter() {
  emit("price-applied", {
    min: priceRange.value.min,
    max: priceRange.value.max,
  });
}

onUnmounted(() => {
  clearTimeout(priceTimeout);
});

defineExpose({ clearPrice: clearFilter });
</script>

<style scoped>
/* Vendor-prefixed pseudo-elements must live in separate rules: a browser
   drops an entire rule when any selector in the list is unrecognized. */
.slider-thumb::-webkit-slider-thumb {
  -webkit-appearance: none;
  appearance: none;
  height: 20px;
  width: 20px;
  border-radius: 50%;
  background: #3b82f6;
  cursor: pointer;
  border: 2px solid #ffffff;
  box-shadow: 0 2px 4px rgba(0, 0, 0, 0.2);
  position: relative;
  z-index: 1;
}
.slider-thumb::-moz-range-thumb {
  appearance: none;
  height: 20px;
  width: 20px;
  border-radius: 50%;
  background: #3b82f6;
  cursor: pointer;
  border: 2px solid #ffffff;
  box-shadow: 0 2px 4px rgba(0, 0, 0, 0.2);
}
.slider-thumb:hover::-webkit-slider-thumb {
  background: #2563eb;
}
.slider-thumb:hover::-moz-range-thumb {
  background: #2563eb;
}
.slider-thumb:active::-webkit-slider-thumb {
  background: #1d4ed8;
}
.slider-thumb:active::-moz-range-thumb {
  background: #1d4ed8;
}
</style>

The PriceFilter.vue component provides a dual-thumb price slider with live updates and clear/apply controls. It accepts initialMin, initialMax, and maxPrice props to initialize its reactive priceRange and dynamically computes the filled-track positions (percentLeft and percentWidth).

As the user drags either thumb, handlePriceChange clamps the values so min ≤ max, emits a price-changed event immediately, and then debounces a price-applied event behind one second of inactivity. The "Reset Price" button emits price-cleared, while the "Apply Price Filter" button emits price-applied right away.

It cleans up its debounce timer on unmount and exposes a clearPrice method so parent components can programmatically reset the range.
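The clamping and track-fill math in this component are pure functions of the current range, so they can be sketched independently of Vue (clampRange and trackFill are illustrative names, not part of the component):

```javascript
// Keep the two thumbs from crossing: min may never exceed max.
function clampRange({ min, max }) {
  return min > max ? { min: max, max } : { min, max };
}

// Position and width of the filled track segment, as CSS percentage
// strings, mirroring the percentLeft / percentWidth computed properties.
function trackFill({ min, max }, maxPrice) {
  return {
    left: `${(min / maxPrice) * 100}%`,
    width: `${((max - min) / maxPrice) * 100}%`,
  };
}
```

For a $200–$600 selection with a maxPrice of $1000, the filled segment starts at 20% of the track and spans 40% of it.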

Search Results Display

The next thing we need to do is display the search results.  Create a file at components/SearchResults.vue and paste this code block in:

<template>
  <div class="search-results">
    <!-- Loading State -->
    <div v-if="isLoading" class="text-center py-6">
      <div class="inline-flex items-center">
        <LoadingSpinner />
        <span class="text-lg">Searching...</span>
      </div>
    </div>

    <!-- Search Results -->
    <div v-else-if="results.length > 0" class="search-results-content">
      <!-- Results Header -->
      <div class="mb-6 flex justify-between items-center">
        <div class="text-lg font-medium text-gray-700">
          Found {{ totalResults }} products
          <span v-if="searchTime" class="text-sm text-gray-500"
            >({{ searchTime }}ms)</span
          >
        </div>
        <button
          @click="clearResults"
          class="text-sm text-gray-600 hover:text-gray-800"
        >
          Clear Results
        </button>
      </div>

      <!-- Products Grid -->
      <div
        class="grid grid-cols-1 md:grid-cols-2 lg:grid-cols-3 xl:grid-cols-4 gap-6"
      >
        <div v-for="product in results" :key="product.id">
          <ProductCard :product="product" />
        </div>
      </div>
    </div>

    <!-- No Results -->
    <div v-else-if="hasSearched && !isLoading" class="text-center py-12">
      <div class="text-gray-500">
        <NoResultsIcon />
        <h3 class="text-xl font-medium text-gray-900 mb-2">
          No products found
        </h3>
        <p class="text-gray-600">
          Try adjusting your search terms or search options
        </p>
      </div>
    </div>

    <!-- Error State -->
    <div
      v-if="error"
      class="bg-red-50 border border-red-200 rounded-lg p-4 mb-6"
    >
      <div class="flex">
        <ErrorIcon />
        <div class="ml-3">
          <h3 class="text-sm font-medium text-red-800">Search Error</h3>
          <p class="text-sm text-red-700 mt-1">{{ error }}</p>
        </div>
      </div>
    </div>
  </div>
</template>

<script setup>
import ProductCard from "~/components/ProductCard.vue";
import LoadingSpinner from "~/components/icons/LoadingSpinner.vue";
import NoResultsIcon from "~/components/icons/NoResultsIcon.vue";
import ErrorIcon from "~/components/icons/ErrorIcon.vue";

// Props
const props = defineProps({
  results: {
    type: Array,
    default: () => [],
  },
  totalResults: {
    type: Number,
    default: 0,
  },
  isLoading: {
    type: Boolean,
    default: false,
  },
  hasSearched: {
    type: Boolean,
    default: false,
  },
  error: {
    type: String,
    default: "",
  },
  searchTime: {
    type: Number,
    default: 0,
  },
});

// Emits
const emit = defineEmits(["clear-results"]);

// Methods
const clearResults = () => {
  emit("clear-results");
};
</script>

<style scoped>
.search-results-content {
  animation: fadeIn 0.3s ease-out;
}

@keyframes fadeIn {
  from {
    opacity: 0;
  }
  to {
    opacity: 1;
  }
}
</style>

The SearchResults.vue component handles all the display states for your product search. It shows a spinning loader while isLoading is true; once results arrive (results.length > 0), it renders a header with the total count and search time alongside a "Clear Results" button, then lays out each product in a <ProductCard> grid. If a search was performed but yielded no hits, it displays a "No products found" message, and if an error string is present, it surfaces a styled error banner with the message.

By accepting props for results, totalResults, isLoading, hasSearched, error, and searchTime, and emitting a single clear-results event, it holds all the UI you need to reflect loading, success, empty, and error conditions.
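Because the template's v-if / v-else-if branches are mutually exclusive, the state selection is easy to reason about in isolation. The sketch below models it as a pure function (displayState is an illustrative helper, not part of the component):

```javascript
// Resolve which display state applies, in the same priority order
// as the template's v-if / v-else-if chain.
function displayState({ isLoading, results, hasSearched }) {
  if (isLoading) return "loading";
  if (results.length > 0) return "results";
  if (hasSearched) return "empty";
  return "idle";
}
```

Note that the error banner in the actual template is an independent v-if, so it can render alongside any of these states.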

Search Bar

The next thing we need to make is the search bar, which brings together the four components we just built. Create a file at components/SearchBar.vue and paste this code block in:

<template>
  <div class="search-bar">
    <!-- Search Input Component -->
    <SearchInput
      ref="searchInputRef"
      :initial-query="initialQuery"
      :placeholder="placeholder"
      @input="handleSearchInput"
      @search="handleSearchSubmit"
      @clear="handleSearchClear"
    />

    <!-- Activity Filter Component -->
    <ActivityFilter
      ref="activityFilterRef"
      :initial-activity="selectedActivity"
      @activity-selected="handleActivitySelected"
      @activity-cleared="handleActivityCleared"
    />

    <!-- Price Filter Component -->
    <PriceFilter
      ref="priceFilterRef"
      :initial-min="priceRange.min"
      :initial-max="priceRange.max"
      :max-price="maxPrice"
      @price-changed="handlePriceChanged"
      @price-applied="handlePriceApplied"
      @price-cleared="handlePriceCleared"
    />

    <!-- Search Results Component -->
    <SearchResults
      :results="searchResults"
      :total-results="totalResults"
      :is-loading="isLoading"
      :has-searched="hasSearched"
      :error="error"
      :search-time="searchTime"
      @clear-results="handleClearResults"
    />
  </div>
</template>

<script setup>
import { ref, onMounted, onUnmounted } from "vue";
import SearchInput from "./SearchInput.vue";
import ActivityFilter from "./ActivityFilter.vue";
import PriceFilter from "./PriceFilter.vue";
import SearchResults from "./SearchResults.vue";
import { useSearchLogic } from "~/composables/useSearchLogic";

// Props
const props = defineProps({
  initialQuery: {
    type: String,
    default: "",
  },
  placeholder: {
    type: String,
    default: "Search products...",
  },
});

// Emits
const emit = defineEmits(["search-results", "search-start", "search-complete"]);

// Use search logic composable
const {
  performSearch,
  performActivitySearch,
  performPriceOnlySearch,
  performCombinedSearch,
} = useSearchLogic();

// Component refs
const searchInputRef = ref(null);
const activityFilterRef = ref(null);
const priceFilterRef = ref(null);

// Reactive data
const searchResults = ref([]);
const totalResults = ref(0);
const isLoading = ref(false);
const hasSearched = ref(false);
const error = ref("");
const searchTime = ref(0);

// Filter states
const selectedActivity = ref("");
const priceRange = ref({
  min: 0,
  max: 1000,
});
const maxPrice = ref(1000);

// Search Input Event Handlers
const handleSearchInput = async (query) => {
  if (query.trim()) {
    await executeSearch(query, "semantic-search");
  } else {
    clearResults();
  }
};

const handleSearchSubmit = async (query) => {
  if (query.trim()) {
    await executeSearch(query, "semantic-search");
  }
};

const handleSearchClear = () => {
  clearResults();
  clearAllFilters();
};

// Activity Filter Event Handlers
const handleActivitySelected = async (activityValue) => {
  selectedActivity.value = activityValue;
  searchInputRef.value?.clearQuery();

  // Check if price filter is active
  const hasPriceFilter =
    priceRange.value.min > 0 || priceRange.value.max < maxPrice.value;

  if (hasPriceFilter) {
    // Use combined search for activity + price
    await executeCombinedSearch(activityValue, priceRange.value);
  } else {
    // Use activity-only search
    await executeActivitySearch(activityValue);
  }
};

const handleActivityCleared = () => {
  selectedActivity.value = "";
  clearResults();
};

// Price Filter Event Handlers
const handlePriceChanged = (priceData) => {
  priceRange.value = priceData;
};

const handlePriceApplied = async (priceData) => {
  priceRange.value = priceData;

  // Check if activity filter is active
  if (selectedActivity.value) {
    // Use combined search for activity + price
    await executeCombinedSearch(selectedActivity.value, priceData);
  } else {
    // Use price-only search
    await executePriceFilter();
  }
};

const handlePriceCleared = () => {
  priceRange.value = { min: 0, max: maxPrice.value };
  // Re-run current search without price filter
  if (searchInputRef.value?.searchQuery?.trim()) {
    executeSearch(searchInputRef.value.searchQuery, "semantic-search");
  } else if (selectedActivity.value) {
    // Re-run activity search without price filter
    executeActivitySearch(selectedActivity.value);
  }
};

// Results Event Handlers
const handleClearResults = () => {
  clearResults();
  clearAllFilters();
};

// Core Search Execution Methods
const executeSearch = async (query, type) => {
  isLoading.value = true;
  error.value = "";
  hasSearched.value = true;

  emit("search-start", { query, type });

  const result = await performSearch(query);

  if (result.success) {
    searchResults.value = result.results;
    totalResults.value = result.total;
    searchTime.value = result.searchTime;

    emit("search-results", {
      results: searchResults.value,
      total: totalResults.value,
      query,
      type,
      time: searchTime.value,
    });
  } else {
    error.value = result.error;
    searchResults.value = [];
    totalResults.value = 0;
  }

  isLoading.value = false;
  emit("search-complete", {
    success: result.success,
    resultsCount: searchResults.value.length,
  });
};

const executeActivitySearch = async (activityValue) => {
  isLoading.value = true;
  error.value = "";
  hasSearched.value = true;

  emit("search-start", { query: activityValue, type: "activity-filter" });

  const result = await performActivitySearch(activityValue);

  if (result.success) {
    searchResults.value = result.results;
    totalResults.value = result.total;
    searchTime.value = result.searchTime;

    emit("search-results", {
      results: searchResults.value,
      total: totalResults.value,
      query: result.query,
      type: "activity-filter",
      time: searchTime.value,
    });
  } else {
    error.value = result.error;
    searchResults.value = [];
    totalResults.value = 0;
  }

  isLoading.value = false;
  emit("search-complete", {
    success: result.success,
    resultsCount: searchResults.value.length,
  });
};

const executePriceFilter = async () => {
  await executePriceOnlySearch();
};

const executePriceOnlySearch = async () => {
  isLoading.value = true;
  error.value = "";
  hasSearched.value = true;

  const query = `Price: $${priceRange.value.min} - $${priceRange.value.max}`;
  emit("search-start", { query, type: "price-filter" });

  // Pass activity filter if active
  const result = await performPriceOnlySearch(
    priceRange.value,
    selectedActivity.value || null
  );

  if (result.success) {
    searchResults.value = result.results;
    totalResults.value = result.total;
    searchTime.value = result.searchTime;

    emit("search-results", {
      results: searchResults.value,
      total: totalResults.value,
      query: result.query,
      type: selectedActivity.value ? "combined-filter" : "price-filter",
      time: searchTime.value,
    });
  } else {
    error.value = result.error;
    searchResults.value = [];
    totalResults.value = 0;
  }

  isLoading.value = false;
  emit("search-complete", {
    success: result.success,
    resultsCount: searchResults.value.length,
  });
};

const executeCombinedSearch = async (activityValue, priceData) => {
  isLoading.value = true;
  error.value = "";
  hasSearched.value = true;

  const query = `Activity: ${activityValue} | Price: $${priceData.min} - $${priceData.max}`;
  emit("search-start", { query, type: "combined-filter" });

  const result = await performCombinedSearch(activityValue, priceData);

  if (result.success) {
    searchResults.value = result.results;
    totalResults.value = result.total;
    searchTime.value = result.searchTime;

    emit("search-results", {
      results: searchResults.value,
      total: totalResults.value,
      query: result.query,
      type: "combined-filter",
      time: searchTime.value,
    });
  } else {
    error.value = result.error;
    searchResults.value = [];
    totalResults.value = 0;
  }

  isLoading.value = false;
  emit("search-complete", {
    success: result.success,
    resultsCount: searchResults.value.length,
  });
};

// Utility Methods
const clearResults = () => {
  searchResults.value = [];
  totalResults.value = 0;
  hasSearched.value = false;
  error.value = "";
  searchTime.value = 0;
};

const clearAllFilters = () => {
  selectedActivity.value = "";
  priceRange.value = { min: 0, max: maxPrice.value };
  searchInputRef.value?.clearQuery();
  activityFilterRef.value?.clearActivity();
  priceFilterRef.value?.clearPrice();
};

// Handle clear search from home link
const handleClearFromHome = () => {
  clearResults();
  clearAllFilters();
  // Emit empty search results to reset the parent state
  emit("search-results", {
    results: [],
    total: 0,
    query: "",
    type: "clear",
    time: 0,
  });
};

// Listen for clear search event from home link
onMounted(() => {
  if (process.client) {
    window.addEventListener("clear-search-from-home", handleClearFromHome);
  }
});

onUnmounted(() => {
  if (process.client) {
    window.removeEventListener("clear-search-from-home", handleClearFromHome);
  }
});
</script>

<style scoped>
/* Main container styles */
</style>

This is our top‑level orchestration component that stitches together four child pieces—SearchInput, ActivityFilter, PriceFilter, and SearchResults—with the shared useSearchLogic composable. 

It maintains reactive state for query text, selected activity, price range, loading status, results, errors, and timing; wires each child’s events into handlers that call performSearch, performActivitySearch, performPriceOnlySearch, or performCombinedSearch; and emits high-level lifecycle events (search-start, search-results, search-complete) for parent components.

It also listens for a global clear-search-from-home browser event to reset all filters and results, ensuring the entire search UI can be programmatically cleared from elsewhere in the app.
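The routing between those four search functions boils down to which filters are currently active. That decision can be sketched as a pure function (chooseSearchType is an illustrative name; the returned strings match the type values the handlers above emit):

```javascript
// Pick the search mode from the current filter state, mirroring the
// branching in handleActivitySelected and handlePriceApplied.
function chooseSearchType(query, activity, priceRange, maxPrice) {
  const hasPrice = priceRange.min > 0 || priceRange.max < maxPrice;
  if (query.trim()) return "semantic-search";
  if (activity && hasPrice) return "combined-filter";
  if (activity) return "activity-filter";
  if (hasPrice) return "price-filter";
  return "none";
}
```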

Add the Search Bar To All Routes

The next step is to make our search bar accessible on all product routes. To do that, we can add it to the product layout.
Navigate to the layouts/products.vue file and paste this code in:

<template>
  <div>
    <header class="shadow-sm bg-white">
      <nav class="container mx-auto p-4">
        <NuxtLink to="/" class="font-bold">Nuxt Headless WP Demo</NuxtLink>
      </nav>
    </header>

    <!-- Search Bar Section -->
    <div class="bg-gray-50 border-b">
      <div class="container mx-auto p-4">
        <SearchBar @search-results="handleSearchResults" />
      </div>
    </div>

    <div class="container mx-auto p-4">
      <slot />
    </div>
    <footer class="container mx-auto p-4 flex justify-between border-t-2">
      <ul class="flex gap-4"></ul>
    </footer>
  </div>
</template>

<script setup>
import SearchBar from "~/components/SearchBar.vue";

// Handle search results from SearchBar and emit to pages
const handleSearchResults = (searchData) => {
  // Dispatch custom event that pages can listen to
  if (process.client) {
    const event = new CustomEvent("layout-search-results", {
      detail: searchData,
    });
    window.dispatchEvent(event);
  }
};
</script>

<style scoped>
.router-link-exact-active {
  color: #12b488;
}
</style>

This updated layout renders the SearchBar above the page content on every product route, and re-broadcasts its search results as a layout-search-results window event that pages can listen to.
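The layout-to-page handoff uses window CustomEvents because the layout and the page don't share a direct parent-child channel. The same contract can be modeled with a tiny pub/sub object (a hedged sketch; createEventBus is illustrative and not part of the project):

```javascript
// Minimal pub/sub mirroring window.addEventListener / dispatchEvent,
// delivering a CustomEvent-style { detail } payload to each listener.
function createEventBus() {
  const listeners = new Map();
  return {
    on(name, fn) {
      if (!listeners.has(name)) listeners.set(name, new Set());
      listeners.get(name).add(fn);
    },
    off(name, fn) {
      listeners.get(name)?.delete(fn);
    },
    dispatch(name, detail) {
      listeners.get(name)?.forEach((fn) => fn({ detail }));
    },
  };
}
```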

SVG Icons

Next, let’s extract the inline SVG elements into seven separate icon components to keep the code readable and tidy.

All seven are simple, reusable Vue components that render styled SVG icons. Create an icons folder in the components directory. In that folder, create the files below. Each file is linked to the code block you need to copy into it in the final GitHub repo for this article, so click each file to grab the code for your own project.

Note: The pages/index.vue component also needs updating to import the SVG components the index page uses and to handle search state; we cover that next.

Update The Index Page To Handle State

Lastly, let’s update our index.vue file so that the index page can handle search state.  Go to pages/index.vue and paste this code in:

<template>
  <div>
    <!-- Loading State -->
    <div v-if="pending" class="text-center py-12">
      <div class="inline-flex items-center">
        <LoadingSpinner
          customClass="animate-spin -ml-1 mr-3 h-8 w-8 text-blue-500"
        />
        <span class="text-lg">Loading products...</span>
      </div>
    </div>

    <!-- Error State -->
    <div v-else-if="error" class="text-center py-12">
      <div class="text-red-600">
        <ErrorIcon customClass="mx-auto h-16 w-16 text-red-400 mb-4" />
        <h3 class="text-xl font-medium text-gray-900 mb-2">
          Failed to load products
        </h3>
        <p class="text-gray-600 mb-4">
          {{ error.message || "Please try again later" }}
        </p>
        <button @click="refresh()" class="btn">Try Again</button>
      </div>
    </div>

    <!-- Default Products (shown when no search active) -->
    <div v-else-if="!searchActive && products?.length" class="default-products">
      <h2 class="text-2xl font-bold mb-6">All Products</h2>
      <div class="grid grid-cols-4 gap-5">
        <div v-for="p in products" :key="p.id">
          <ProductCard :product="p" />
        </div>
      </div>
    </div>

    <!-- No Products State -->
    <div
      v-else-if="!searchActive && !products?.length"
      class="text-center py-12"
    >
      <div class="text-gray-500">
        <EmptyBoxIcon />
        <h3 class="text-xl font-medium text-gray-900 mb-2">
          No products available
        </h3>
        <p class="text-gray-600">Check back later for new products</p>
      </div>
    </div>
  </div>
</template>

<script setup>
import { ref, onMounted, onUnmounted } from "vue";
import ProductCard from "~/components/ProductCard.vue";
import LoadingSpinner from "~/components/icons/LoadingSpinner.vue";
import ErrorIcon from "~/components/icons/ErrorIcon.vue";
import EmptyBoxIcon from "~/components/icons/EmptyBoxIcon.vue";

// Search state
const searchActive = ref(false);

// Handle search results from layout SearchBar
const handleSearchResults = (event) => {
  const searchData = event.detail;
  // Coerce to a boolean so searchActive never holds a raw query string
  searchActive.value =
    searchData.results.length > 0 || searchData.query.trim().length > 0;
};

// Handle home link click to reset search
const handleResetSearch = () => {
  searchActive.value = false;
  // Also clear the search in the SearchBar component
  const searchBarEvent = new CustomEvent("clear-search-from-home");
  window.dispatchEvent(searchBarEvent);
};

// Listen for search results from layout and reset search event
onMounted(() => {
  if (process.client) {
    window.addEventListener("layout-search-results", handleSearchResults);
    window.addEventListener("reset-search", handleResetSearch);
  }
});

onUnmounted(() => {
  if (process.client) {
    window.removeEventListener("layout-search-results", handleSearchResults);
    window.removeEventListener("reset-search", handleResetSearch);
  }
});

// Fetch the products from WooCommerce via GraphQL
const {
  data: products,
  pending,
  error,
  refresh,
} = await useFetch(useRuntimeConfig().public.wordpressUrl, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
  },
  body: {
    query: `
      query GetProducts($first: Int = 10) {
        products(first: $first) {
          edges {
            node {
              databaseId
              name
              image {
                sourceUrl
                altText
              }
            }
          }
        }
      }
    `,
    variables: {
      first: 10,
    },
  },
  transform: (data) => {
    return data.data.products.edges.map((edge) => ({
      id: edge.node.databaseId,
      title: edge.node.name,
      image: edge.node.image?.sourceUrl || "/placeholder.jpg",
    }));
  },
  key: "products-list",
});

definePageMeta({
  layout: "products",
});

useHead({
  title: "Nuxt headlesswp eCommerce | All Products",
  meta: [
    {
      name: "description",
      content:
        "Browse our complete collection of products in our headless WordPress eCommerce store",
    },
  ],
});
</script>

Here is what we added to the index.vue file and what it does:

searchActive ref: Tracks whether a search is in effect so you can suppress the default “All Products” grid when search results exist.

Event handlers (handleSearchResults, handleResetSearch): Listen for custom events emitted by the shared SearchBar in the layout, updating searchActive (and clearing the bar) when searches start or are reset.

Lifecycle hooks: Hook into onMounted/onUnmounted to register and clean up those global event listeners.

pending, error, refresh from useFetch: Expose loading/error UI states and a manual retry button.

Expanded template logic: Four mutually exclusive branches to render “loading,” “error,” “default products,” or “no products” based on fetch status and search activity.
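The transform option in the useFetch call is a pure mapping over the GraphQL response, which makes it easy to check against a sample payload (toProducts is an illustrative name for the same logic):

```javascript
// Flatten the GraphQL edges/node shape into the flat product objects
// the ProductCard component expects, with a fallback image.
function toProducts(data) {
  return data.data.products.edges.map((edge) => ({
    id: edge.node.databaseId,
    title: edge.node.name,
    image: edge.node.image?.sourceUrl || "/placeholder.jpg",
  }));
}
```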

We are now ready to test this in the browser.  Run npm run dev in your terminal.  When visiting http://localhost:3000, you should now see a search bar and filters on your home page. Test the search and try the filters.  This is the experience you should get:

Search STOKE!!! 

Here is the final repo for reference:

https://github.com/Fran-A-Dev/smart-search-headlesswp-ecomm

Conclusion

We hope this article helped you understand how to create a filtered product experience in Nuxt.js with WP Engine Smart Search AI. By surfacing relevant products faster—with semantic, activity, and price-aware filtering—you give customers the ability to zero in on what they want, spend less time searching, and thus have a seamless purchasing experience. 

If you’re building headless commerce, this kind of search-driven discovery can stoke engagement and revenue. We’d love to hear what you build next—drop into the Headless WordPress Discord and share your projects or feedback.  Happy Coding!

* WP Engine is a proud member and supporter of the community of WordPress® users. The WordPress® trademarks are the intellectual property of the WordPress Foundation, and the Woo® and WooCommerce® trademarks are the intellectual property of WooCommerce, Inc. Uses of the WordPress®, Woo®, and WooCommerce® names in this website are for identification purposes only and do not imply an endorsement by WordPress Foundation or WooCommerce, Inc. WP Engine is not endorsed or owned by, or affiliated with, the WordPress Foundation or WooCommerce, Inc.

The post How to Create a Headless E-Commerce Search Experience With WP Engine’s Smart Search AI and Nuxt.js appeared first on Builders.

]]>
https://wpengine.com/builders/how-to-create-a-headless-e-commerce-search-experience-with-wp-engines-smart-search-ai-and-nuxt-js/feed/ 0
Next.js + WordPress: Routing and GraphQL https://wpengine.com/builders/next-js-wordpress-routing-and-graphql/ https://wpengine.com/builders/next-js-wordpress-routing-and-graphql/#respond Mon, 04 Aug 2025 22:33:27 +0000 https://wpengine.com/builders/?p=31946 Next.js is one of the most popular front-ends for building with headless WordPress. My Reddit notifications are littered with Next.js + headless WordPress recommendations. Today, we’re going to look at […]

The post Next.js + WordPress: Routing and GraphQL appeared first on Builders.

]]>
Next.js is one of the most popular front-ends for building with headless WordPress. My Reddit notifications are littered with Next.js + headless WordPress recommendations. Today, we’re going to look at implementing routing and data fetching for headless WordPress with Next.js.

You may wonder why we’re covering this, since WP Engine is behind Faust.js, which provides its own routing solution for headless WordPress + Next.js sites. Faust’s routing solution isn’t perfect, however. In this article, we’ll experiment with another approach that offers improvements. We’ll be working with the Pages Router, though many of the concepts could be translated to the App Router.

The two major issues we’ll be looking at today are bundle splitting and query optimization. Currently, the catch-all route doesn’t bundle template code separately. While this might only cause a couple of KB of bloat on small sites, the more complexity you add, the more likely you are to ship 10–100 KB of extra code on every route.

When Faust was first conceived years ago, I don’t think the team fully understood the importance of small queries. Because of this, Faust’s main mechanism for querying GraphQL only allows for one query per template. We have since learned that this is an antipattern. Just because you can query everything you need from GraphQL in one request doesn’t mean you should. In this post, we’ll also experiment with alternative ways to handle data fetching. 

For a working example of what we discuss here, check out the wpengine/hwptoolkit repo.

Note: We recently announced that we’re working on improving Faust. The work I did for this article and much more is going into improving Faust.

Routing

In the article on Astro, we discussed four major steps in the template hierarchy that must be recreated for a front-end framework. URI => Data => Template => Render: Data + Template.

In our article on SvelteKit, we experimented with new routing methods due to its implementation details. Next is similar in that middleware and rewrites just won’t work for us. However, unlike SvelteKit, Next doesn’t give us a way to load components outside of other components.

Next.js does have the ability to dynamically import components, which will solve our bundling issue. Our template loader will only dynamically import the needed template, not all templates.

Template Hierarchy in Next.js

Let’s put this all together in Next.js. The steps are:

  1. Get the URI
  2. Determine the template
    • Make a “seed query” to WordPress to fetch template data
    • Collect available templates for rendering
    • Calculate possible templates the data could use
    • Figure out which of the available templates to use based on the prioritized order of most to least specific possible templates
    • Use the dynamically imported template
  3. Fetch more data from WordPress to actually render the template
  4. Merge the selected template and data for rendering

Catch-All Route

To get the full URI, we’ll use Next’s file-system router and optional catch-all route: src/pages/[[...uri]].js

Note: The [...uri].js pattern may be more common, but it requires a value for uri, meaning the root route (/) isn’t matched. This is commonly misunderstood, and folks often include an index.js to handle that use case. The double brackets, however, make uri optional and thus inclusive of /. The resulting undefined value will need to be handled later.
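To make the optional catch-all concrete, here is a small sketch of how params.uri values map back to a URI string. This mirrors the normalization used in getServerSideProps; normalizeUri is just an illustrative name, not part of Next.js.

```javascript
// Sketch: normalize an optional catch-all param back into a URI string.
// For [[...uri]].js, Next passes:
//   `/`            -> params.uri is undefined
//   `/about/`      -> params.uri is ["about"]
//   `/blog/hello/` -> params.uri is ["blog", "hello"]
function normalizeUri(uriParam) {
  return Array.isArray(uriParam) ? "/" + uriParam.join("/") + "/" : "/";
}

console.log(normalizeUri(undefined));         // "/"
console.log(normalizeUri(["about"]));         // "/about/"
console.log(normalizeUri(["blog", "hello"])); // "/blog/hello/"
```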

Seed Query

In the Next Pages Router, all server-side queries will need to be executed in getStaticProps or, more commonly, getServerSideProps; either way, this will be in the src/pages/[[...uri]].js route.

Calculating Possible Templates

Our app will use a function we built for Faust to take the data from the seed query and generate a list of possible templates, sorted from most specific to least specific. For example, the templates for a page could look like this: [page-sample-page, page-2, page, singular, index].
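That function ships with Faust, so we won’t rebuild it here, but the idea can be sketched with a simplified, hypothetical version that handles pages only:

```javascript
// Simplified sketch of generating possible templates for a page, ordered
// most to least specific. The real Faust helper covers every content type;
// this hypothetical version handles pages only.
function possibleTemplatesForPage({ slug, databaseId }) {
  return [
    `page-${slug}`,       // most specific: matches the page's slug
    `page-${databaseId}`, // matches the page's database ID
    "page",               // any page
    "singular",           // any singular content
    "index",              // ultimate fallback
  ];
}

console.log(possibleTemplatesForPage({ slug: "sample-page", databaseId: 2 }));
// ["page-sample-page", "page-2", "page", "singular", "index"]
```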

Creating Available Templates

Because we’re using dynamic imports to import our WordPress templates, they don’t have to be dedicated routes. However, we do need a single location where they all exist so we can easily import them programmatically. We will use a wp-templates directory with our templates inside, like this:

src
  ↳ wp-templates/
    ↳ index.js
    ↳ default.js
    ↳ home.js
    ↳ archive.js
    ↳ single.js

With Astro and SvelteKit, I opted to read these from the file system to avoid importing individual templates manually. Unfortunately, Next won’t allow us to do this. Because of limitations in Next’s bundler and how next/dynamic works, the documentation makes it clear that variables can’t be used; static strings are required!

This means we use index.js in our wp-templates folder to handle dynamically importing the individual templates and exporting them into key-value pairs, where the keys are the expected WP template names. In our example above, this is mostly 1-to-1, though default.js will become index.

Choosing a template

We now have a list of possible templates and a list of available templates. Based on the prioritized list of possible templates, we can determine which of the available templates to use. 

A quick bit of JavaScript can compare the list of possible templates [single-post-sample-post, single-post, single, singular, index] to the list of available templates [archive, home, index, single], and the first match is our template. In this case, single is the winner!
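For illustration, that comparison could be a one-liner. resolveTemplate is a hypothetical helper; falling back to index if nothing matches is an assumption on my part:

```javascript
// Return the first (most specific) possible template that is actually available.
function resolveTemplate(possibleTemplates, availableTemplates) {
  return possibleTemplates.find((t) => availableTemplates.includes(t)) ?? "index";
}

const possible = ["single-post-sample-post", "single-post", "single", "singular", "index"];
const available = ["archive", "home", "index", "single"];
console.log(resolveTemplate(possible, available)); // "single"
```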

Putting it all together

Now that we’ve built all the pieces, we can make a single function that takes a URI and returns the template. The getServerSideProps function of our catch-all route now looks something like this:

// src/pages/[[...uri]].js
import { uriToTemplate } from "@/lib/templateHierarchy";

export async function getServerSideProps(context) {
  const { params } = context;

  const uri = Array.isArray(params.uri)
    ? "/" + params.uri.join("/") + "/"
    : "/";

  const templateData = await uriToTemplate({ uri });

  if (
    !templateData?.template?.id ||
    templateData?.template?.id === "404 Not Found"
  ) {
    return {
      notFound: true,
    };
  }

  return {
    props: {
      uri,
      // https://github.com/vercel/next.js/discussions/11209#discussioncomment-35915
      templateData: JSON.parse(JSON.stringify(templateData)),
    },
  };
}

Loading the Template

Templates are loaded manually in wp-templates/index.js, which will look something like this:

// src/wp-templates/index.js

import dynamic from "next/dynamic";

const home = dynamic(() => import("./home.js"), {
  loading: () => <p>Loading Home Template...</p>,
});

const index = dynamic(() => import("./default.js"), {
  loading: () => <p>Loading Index Template...</p>,
});

const single = dynamic(() => import("./single.js"), {
  loading: () => <p>Loading Single Template...</p>,
});

export default { home, index, single };

Rendering the template

Okay! Our getServerSideProps function does the hard work of figuring out which template to render and loading the seed query. Now, in our page component, we can handle rendering the template. 

// src/pages/[[...uri]].js
import availableTemplates from "@/wp-templates";

export default function Page(props) {
  const { templateData } = props;

  const PageTemplate = availableTemplates[templateData.template?.id];

  return (
    <PageTemplate {...props} />
  );
}

Querying Data

Now that we have a working router, let’s turn to fetching data for our templates. Currently, Faust’s main mechanism is the query and variables exports from a given template. These are handled upstream in the catch-all route’s get____Props function.

As mentioned previously, we want to improve this by allowing multiple queries per template. Faust started to implement this by allowing a queries export. Without getting into too many details, this implementation has its own set of problems. We were able to implement this same pattern in the SvelteKit example without much difficulty and avoided many of the issues. Let’s do the same here.

Defining Queries

While a full implementation might need some more advanced features, we’re going to keep ours fairly simple to start. 

Component.queries = [
  {
    name: "myQuery",
    query: gql`
      # ...
    `,
    variables: (_context, { uri }) => ({ uri }),
  },
];

Instead of relying on complex hash algorithms to identify our queries, we’re going to use simple names. The GraphQL query name is used as a fallback if one is not provided. However, if you’re running one query with different variables, you may need to give it a unique name, so we provide the name field.
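As a sketch of that fallback behavior, a hypothetical queryKey helper might pull the operation name out of the query string when no explicit name is given (the regex assumes a named operation):

```javascript
// Hypothetical: derive the key under which a query's result is stored.
// Falls back to the GraphQL operation name when no explicit name is given.
function queryKey({ name, query }) {
  if (name) return name;
  const match = /\b(?:query|mutation)\s+([A-Za-z_][A-Za-z0-9_]*)/.exec(query);
  if (!match) throw new Error("Query needs a name or a named operation");
  return match[1];
}

console.log(queryKey({ query: "query RecentPosts { posts { nodes { id } } }" }));
// "RecentPosts"
console.log(queryKey({ name: "myQuery", query: "query RecentPosts { posts { nodes { id } } }" }));
// "myQuery"
```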

Executing Queries

In our getServerSideProps function, we’re already handling the loading of our template. Now, we can access this queries array from there and execute our queries. Initially, I thought this would look something like: 

const PageTemplate = availableTemplates[templateData.template?.id];

//Queries would then be available at
PageTemplate.queries

This didn’t work. Some console logs quickly made sense of the issue: 

{
  PageTemplate: {
    '$$typeof': Symbol(react.forward_ref),
    render: [Function: LoadableComponent] {
      preload: [Function (anonymous)],
      displayName: 'LoadableComponent'
    }
  },
}

What’s actually being loaded is the wrapper component from next/dynamic, not the component itself. Thus, it doesn’t have the queries value I added. But since this is an async component, I suspected I should be able to access queries if I load the component itself via the preload function.

const component = await PageTemplate.render.preload();

Sure enough, this worked:

const component = await PageTemplate.render.preload();

// Queries available at:
component.default.queries

Now that we have loaded our module and have access to queries, our array of queries will be handed off to a purpose-built function that can handle executing all the queries with their given config and variables, returning them in the expected structure. Altogether, this will look something like:

// src/pages/[[...uri]].js
import { uriToTemplate } from "@/lib/templateHierarchy";
import availableTemplates from "@/wp-templates";
import { fetchQueries } from "@/lib/queryHandler";

export async function getServerSideProps(context) {
  const { params } = context;

  const uri = Array.isArray(params.uri)
    ? "/" + params.uri?.join("/") + "/"
    : "/";

  const templateData = await uriToTemplate({ uri });

  if (
    !templateData?.template?.id ||
    templateData?.template?.id === "404 Not Found"
  ) {
    return {
      notFound: true,
    };
  }

  const PageTemplate = availableTemplates[templateData.template?.id];

  const component = await PageTemplate.render.preload();

  const graphqlData = await fetchQueries({
    queries: component.default.queries,
    context,
    props: {
      uri,
      templateData,
    },
  });

  return {
    props: {
      uri,
      // https://github.com/vercel/next.js/discussions/11209#discussioncomment-35915
      templateData: JSON.parse(JSON.stringify(templateData)),
      graphqlData: JSON.parse(JSON.stringify(graphqlData)),
    },
  };
}
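The fetchQueries helper imported above is purpose-built for this project and isn’t shown in full here. A minimal sketch, with a pluggable executeQuery stub standing in for a real GraphQL client (an assumption made so the sketch is self-contained), might look like:

```javascript
// Hypothetical sketch of fetchQueries: runs each template query and keys the
// results by name. `executeQuery` stands in for a real GraphQL client call.
async function fetchQueries({ queries = [], context, props, executeQuery }) {
  const results = {};
  await Promise.all(
    queries.map(async ({ name, query, variables }) => {
      // Variables may be a function of (context, props) or a plain object.
      const vars = typeof variables === "function" ? variables(context, props) : variables;
      try {
        results[name] = { data: await executeQuery(query, vars) };
      } catch (error) {
        results[name] = { error: String(error) };
      }
    })
  );
  return results;
}

// Usage with a fake executor that just echoes its variables:
const fakeExecute = async (_query, vars) => ({ echoed: vars });
fetchQueries({
  queries: [
    { name: "RecentPosts", query: "query RecentPosts { posts { nodes { id } } }", variables: (_c, p) => ({ uri: p.uri }) },
  ],
  context: {},
  props: { uri: "/" },
  executeQuery: fakeExecute,
}).then((r) => console.log(r.RecentPosts.data)); // logs the echoed variables
```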

Component Queries

I like to co-locate my queries with the components that use them. So, leveraging the existing queries system, I can similarly export a query from individual components. For a navigation menu, I could pass the desired menu location in from the template to determine which menu is fetched and rendered.

In this example, I kept it simple and rendered a “Recent Posts” component on the home page.

import { gql } from "urql";
import { useRouteData } from "@/lib/context";

export default function RecentPosts() {
  const { graphqlData } = useRouteData();

  const posts = graphqlData?.RecentPosts?.data?.posts?.nodes || [];

  if (graphqlData?.RecentPosts?.error) {
    console.error("Error fetching RecentPosts:", graphqlData.RecentPosts.error);
    return <div>Error loading recent posts.</div>;
  }

  return (
    <div className="recent-posts">
      <h2>Recent Posts</h2>
      <ul>
        {posts.map((post) => (
          <li key={post.id}>
            <a href={post.uri}>{post.title}</a>
          </li>
        ))}
      </ul>
    </div>
  );
}

RecentPosts.query = {
  query: gql`
    query RecentPosts {
      posts(first: 5) {
        nodes {
          id
          title
          uri
        }
      }
    }
  `,
};

You may have noticed I used a custom context to access the data. While I could pass it down via props fairly easily here, that’s not always the case. To avoid prop drilling, I added a context provider to our catch-all route to make page props available to all components.

export default function Page(props) {
  const { templateData } = props;

  const PageTemplate = availableTemplates[templateData.template?.id];

  return (
    <RouteDataProvider value={props}>
      <PageTemplate {...props} />
    </RouteDataProvider>
  );
}

Wrapping up

Just like that, we’ve managed to implement a template-hierarchy router and GraphQL data fetching for our templates. All the while, we have avoided some performance issues by enabling dynamic imports for templates and multiple query support for data fetching. 

This implementation is far from production-ready. I can think of a number of things the GraphQL data fetching doesn’t handle yet. But this shows us that with a little problem-solving, we can build some great solutions.

That said, between Astro, SvelteKit, and Next.js, Next has proven to be the most complicated implementation. The non-standard next/dynamic means extra steps for queries and manual registration of our wp-templates.

This comes down to strong async support in Astro and SvelteKit, while React has long struggled with supporting async data. Admittedly, the Next.js App Router would likely help simplify some of these complexities. But that’s a story for another day.

While my relationship with React/Next is tenuous at best, and I strongly prefer anything but, I still make a living maintaining sites using these technologies, and I learned a bunch about using them with headless WordPress. What do you think?


The post Next.js + WordPress: Routing and GraphQL appeared first on Builders.

]]>
https://wpengine.com/builders/next-js-wordpress-routing-and-graphql/feed/ 0
Create a Headless WordPress chatbot with WP Engine’s AI Toolkit, RAG, and Google Gemini https://wpengine.com/builders/create-a-headless-wordpress-chatbot-with-wp-engines-ai-toolkit-rag-and-google-gemini/ https://wpengine.com/builders/create-a-headless-wordpress-chatbot-with-wp-engines-ai-toolkit-rag-and-google-gemini/#respond Fri, 27 Jun 2025 17:53:07 +0000 https://wpengine.com/builders/?p=31923 In this step-by-step guide, we will build a full-stack application that uses WP Engine’s AI Toolkit, Retrieval Augmented Generation (RAG), and Google Gemini to deliver accurate and contextually relevant responses […]

The post Create a Headless WordPress chatbot with WP Engine’s AI Toolkit, RAG, and Google Gemini appeared first on Builders.

]]>
In this step-by-step guide, we will build a full-stack application that uses WP Engine’s AI Toolkit, Retrieval Augmented Generation (RAG), and Google Gemini to deliver accurate and contextually relevant responses in a chatbot within a Next.js framework.

Before we discuss the technical steps, let’s review the tools and techniques we will use.

RAG

Retrieval-augmented generation (RAG) is a technique that enables AI models to retrieve and incorporate new information.

It modifies interactions with a large language model (LLM) so that the model responds to user queries with reference to a specified set of documents, using this information to supplement information from its pre-existing training data. This allows LLMs to use domain-specific and/or updated information.

Our use case in this article will include providing chatbot access to our data from Smart Search.
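Before diving into the implementation, the overall RAG loop can be sketched schematically. Retrieval and generation are stubbed below purely for illustration; this is not the actual chatbot code:

```javascript
// Schematic RAG pipeline: retrieve relevant docs, stuff them into the
// prompt, then generate. Both functions here are injected stubs.
async function answerWithRag(question, { retrieve, generate }) {
  const docs = await retrieve(question);                 // "R": similarity search
  const context = docs.map((d) => d.content).join("\n"); // "A": augment the prompt
  return generate(                                       // "G": generate with the LLM
    `Answer using only this context:\n${context}\n\nQuestion: ${question}`
  );
}

// Stubbed usage:
const retrieve = async () => [{ content: "Smart Search indexes posts and pages in real time." }];
const generate = async (prompt) => `LLM saw ${prompt.length} chars of prompt`;
answerWithRag("How fresh is the index?", { retrieve, generate }).then(console.log);
```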

WP Engine’s AI Toolkit

Here’s an overview of WP Engine’s AI Toolkit and the core capabilities it brings to both traditional and headless WordPress sites:

  • Smart Search & AI-Powered Hybrid Search

At its heart, the AI Toolkit includes WP Engine Smart Search—a drop-in replacement for WordPress’s native search that’s typo-tolerant, weight-aware, and ultra-fast. Out of the box, you get three modes: Full-Text (stemming and fuzzy matching), Semantic (NLP-driven meaning over mere keywords), and Hybrid (a tunable blend of both). Behind the scenes, Smart Search automatically indexes your Posts, Pages, Custom Post Types, ACF fields, WooCommerce products, and more—so you can serve richer, more relevant results without writing a line of search logic yourself.

  • Vector Database, Fully Managed

You don’t need to stand up or scale your own vector store—WP Engine’s AI Toolkit manages that for you. As new content is published or edited, the plugin streams updates in real time to its vector database. Queries are encoded into embeddings, nearest-neighbor lookups happen in milliseconds, and the freshest site content is always just a search away. This under-the-hood Vector DB also powers the AI aspects of Hybrid Search, ensuring that semantic similarity and context ranking work against live data.

  • Headless Integration

For sites using WP Engine’s Headless Platform, all of these features—Smart Search querying, vector indexing, AI-powered hybrid ranking, and recommendations—are exposed through GraphQL. The AI Toolkit installs and configures both WPGraphQL and Smart Search automatically, so your front-end app can orchestrate retrieval and generation without extra middleware.

  • Recommendations

An AI-driven content discovery feature that helps you surface “Related” or “Trending” posts (or custom post types) anywhere on your site—whether you’re using the Gutenberg editor or building a headless front end via WPGraphQL.
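To demystify the nearest-neighbor lookups mentioned above: conceptually they reduce to comparing embedding vectors. The managed service handles all of this for you; the toy sketch below uses tiny made-up vectors and cosine similarity just to show the idea:

```javascript
// Toy nearest-neighbor search over made-up embedding vectors.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Score every doc against the query vector and return the best match.
function nearest(queryVector, docs) {
  return docs
    .map((doc) => ({ ...doc, score: cosineSimilarity(queryVector, doc.vector) }))
    .sort((x, y) => y.score - x.score)[0];
}

const docs = [
  { id: "post-1", vector: [1, 0, 0] },
  { id: "post-2", vector: [0.9, 0.1, 0] },
  { id: "post-3", vector: [0, 1, 0] },
];
console.log(nearest([1, 0, 0], docs).id); // "post-1"
```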

Google Gemini API (AI API’s)

The Google Gemini API offers developers a powerful and versatile interface to access Google’s state-of-the-art Gemini AI models. These multimodal models are designed to seamlessly understand and generate content across various data types, including text, code, images, audio, and video. 

For our chatbot integration, the Gemini API provides advanced natural language understanding, allowing it to interpret user queries and generate human-like responses. It supports multi-turn conversations, maintaining context over extended interactions, which is crucial for building engaging and intelligent conversational experiences. We will leverage the API’s flexibility to customize chatbot behavior, tone, and style, enabling a wide range of use cases from customer service to creative content generation.
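Multi-turn context works because the client resends the conversation history on each request; with the AI SDK, that history is an array of role-tagged messages along these lines (the content here is illustrative):

```javascript
// The chat history the client resends on every request. The role/content
// shape follows the AI SDK's Message type; the content is made up.
const messages = [
  { role: "user", content: "Which TV shows feature dragons?" },
  { role: "assistant", content: "Based on your site's content: ..." },
  { role: "user", content: "Which of those aired most recently?" }, // relies on prior turns
];

// The latest question only makes sense with the earlier turns attached:
console.log(messages[messages.length - 1].content);
```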


Prerequisites

To benefit from this article, you should be familiar with the basics of working with the command line, headless WordPress development, Next.js, and the WP Engine User Portal.

Steps for setting up:

1. Set up an account on WP Engine and get a WordPress install running.  

2. Add a Smart Search license. Refer to the docs here for adding a license.


3. Navigate to the WP Admin of your install. Inside your WP Admin, go to WP Engine Smart Search > Settings. You will find your Smart Search URL and access token here. Copy and save them; we will need them later. You should see this page:

4. Next, navigate to Configuration, select the Hybrid card, and add the post_content field in the Semantic settings section. We are going to use this field as our AI-powered field for similarity searches. Make sure to hit Save Configuration afterward.

5. After saving the configuration, head on over to the Index data page, then click Index Now. It will give you this success message once completed:

6. Create an API account on Google Gemini (or whichever AI provider you choose, e.g., the OpenAI API). Once created, navigate to your project’s dashboard. If you are using the Gemini API, go to Google AI Studio. In your project’s dashboard, go to API Keys. You should see a page like this:

Generate a new key, then copy and save your API key, because we will need it later. The API key is free on Google Gemini, but the free tier has limits.

7.  Head over to your terminal or CLI and create a new Next.js project by pasting this utility command in:

npx create-next-app@latest name-of-your-app


You will receive prompts in the terminal asking you how you want your Next.js app scaffolded.  Answer them accordingly:

Would you like to use TypeScript? Yes
Would you like to use ESLint? Yes
Would you like to use Tailwind CSS? Yes
Would you like to use the `src/` directory? Yes
Would you like to use App Router? Yes
Would you like to customize the default import alias (@/*)? No


Once your Next.js app is created, you will need to install the dependencies needed to ensure our app works.  Copy and paste this command in your terminal:

npm install @ai-sdk/google ai openai-edge react-icons react-markdown 

Once the Next project is done scaffolding, cd into the project and then open up your code editor.

8. In your Next.js project, create a  .env.local file with the following environment variables:

GOOGLE_GENERATIVE_AI_API_KEY="<your key here>" # if you chose another AI model, you can name this key whatever you want
SMART_SEARCH_URL="<your smart search url here>"
SMART_SEARCH_ACCESS_TOKEN="<your smart search access token here>"

Here is the link to the final code repo so you can check step by step and follow along.

Make Requests to the WP Engine Smart Search API

The first thing we need to do is set up the request to the Smart Search API using the Similarity query.  Create a file in the src/app directory called utils/context.ts.  Copy the code below and paste it into that file:

// These are the types that are used in the `getContext` function
type Doc = {
  id: string;
  data: Record<string, unknown>;
  score: number;
};

type Similarity = {
  total: number;
  docs: Doc[];
};

export type GraphQLSimilarityResponse = {
  data: {
    similarity: Similarity;
  };
  errors?: { message: string }[];
};

const QUERY = /* GraphQL */ `
  query GetContext($message: String!, $field: String!) {
    similarity(
      input: { nearest: { text: $message, field: $field } }
    ) {
      total
      docs {
        id
        data
        score
      }
    }
  }
`;

export const getContext = async (
  message: string,
): Promise<GraphQLSimilarityResponse> => {
  const url   = process.env.SMART_SEARCH_URL;
  const token = process.env.SMART_SEARCH_ACCESS_TOKEN;

  if (!url || !token) {
    throw new Error(
      "SMART_SEARCH_URL and SMART_SEARCH_ACCESS_TOKEN must be defined.",
    );
  }

  const res = await fetch(url, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${token}`,
    },
    body: JSON.stringify({
      query: QUERY,
      variables: { message, field: "post_content" } as const,
    }),
  });

  if (!res.ok) {
    throw new Error(`Smart Search responded with ${res.status} ${res.statusText}`);
  }

  return res.json() as Promise<GraphQLSimilarityResponse>;
};

This block defines TypeScript types (Doc, Similarity, and GraphQLSimilarityResponse) to model the shape of a similarity-search GraphQL response, and exports an async getContext function that performs the actual lookup. Inside getContext, it reads the Smart Search endpoint URL and access token from environment variables, then constructs a GraphQL query named GetContext that requests the nearest documents (by embedding similarity) for a given message against a specified field (“post_content”).

It sends that query and its variables in the body of a POST request—complete with JSON content headers and a Bearer authorization header—to the Smart Search API endpoint, and finally returns the parsed JSON result. By encapsulating the fetch logic and typing the response, this function provides a clean, reusable way to retrieve semantically related WordPress content for use in a RAG-style chatbot.
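To make the request concrete, here is the shape of the body getContext posts, reconstructed from the code above with an example message:

```javascript
// The POST body sent to the Smart Search endpoint, as built in getContext.
const QUERY = /* GraphQL */ `
  query GetContext($message: String!, $field: String!) {
    similarity(input: { nearest: { text: $message, field: $field } }) {
      total
      docs { id data score }
    }
  }
`;

const body = JSON.stringify({
  query: QUERY,
  variables: { message: "shows about space", field: "post_content" },
});

// Round-trip to confirm the shape the API receives:
console.log(JSON.parse(body).variables.field); // "post_content"
```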

Creating the “R” in RAG

The next file we need to create is the “Retrieval” portion in our RAG pipeline.  Create a tools.ts file in the utils folder and copy and paste this code block:

import { tool } from "ai";
import { z } from "zod";
import { getContext } from "@/app/utils/context";

// Define the search tool
export const smartSearchTool = tool({
  description:
    "Search for information about TV shows using WP Engine Smart Search. Use this to answer questions about TV shows, their content, characters, plots, etc., when the information is not already known.",
  parameters: z.object({
    query: z
      .string()
      .describe(
        "The search query to find relevant TV show information based on the user's question."
      ),
  }),
  execute: async ({ query }: { query: string }) => {
    console.log(`[Tool Execution] Searching with query: "${query}"`);
    try {
      const context = await getContext(query);

      if (context.errors && context.errors.length > 0) {
        console.error(
          "[Tool Execution] Error fetching context:",
          context.errors
        );
        // Return a structured error message that the LLM can understand
        return {
          error: `Error fetching context: ${context.errors[0].message}`,
        };
      }

      if (
        !context.data?.similarity?.docs ||
        context.data.similarity.docs.length === 0
      ) {
        console.log("[Tool Execution] No documents found for query:", query);
        return {
          searchResults: "No relevant information found for your query.",
        };
      }

      const formattedResults = context.data.similarity.docs.map((doc) => {
        if (!doc) {
          return {};
        }

        // doc.data is typed as Record<string, unknown>, so cast the fields we use
        const data = doc.data as {
          post_title?: string;
          post_content?: string;
          post_url?: string;
          categories?: { name: string }[];
        };

        return {
          id: doc.id,
          title: data.post_title,
          content: data.post_content,
          url: data.post_url,
          categories: (data.categories ?? []).map((category) => category.name),
          searchScore: doc.score,
        };
      });

      return { searchResults: formattedResults }; // Return the formatted results array
    } catch (error: any) {
      console.error("[Tool Execution] Exception:", error);
      return { error: `An error occurred while searching: ${error.message}` };
    }
  },
});

export const weatherTool = tool({
  description:
    "Get the current weather information for a specific location. Use this to answer questions about the weather in different cities.",
  parameters: z.object({
    location: z
      .string()
      .describe(
        "The location for which to get the current weather information."
      ),
  }),
  execute: async ({ location }: { location: string }) => {
    console.log(`[Tool Execution] Getting weather for location: "${location}"`);
    try {
      // Simulate fetching weather data
      const weatherData = {
        location,
        temperature: "22°C",
        condition: "Sunny",
        humidity: "60%",
        windSpeed: "15 km/h",
      };
      const formattedWeather = `The current weather in ${weatherData.location} is ${weatherData.temperature} with ${weatherData.condition}. Humidity is at ${weatherData.humidity} and wind speed is ${weatherData.windSpeed}.`;
      return { weather: formattedWeather };
    } catch (error: any) {
      console.error("[Tool Execution] Exception:", error);
      return {
        error: `An error occurred while fetching weather data: ${error.message}`,
      };
    }
  },
});

This module registers two “tools” with the AI SDK—one for performing semantic searches against your WP Engine Smart Search index and another for fetching (simulated) weather data. The smartSearchTool uses Zod to validate a single query string, then calls your getContext helper to run a similarity-search GraphQL request; it handles errors or empty results gracefully, formats any returned documents (including ID, title, content, URL, categories, and relevance score), and exposes them as a structured searchResults array.


The weatherTool declares a location parameter, simulates a lookup of current conditions (temperature, humidity, wind speed), and returns a human-readable summary. By wrapping each in the tool() factory—complete with descriptions, parameter schemas, and execute functions—this file makes both search and weather functionality available for the LLM to invoke during a conversation.
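Conceptually, the tool-calling handshake works like this: the model emits a tool call, the runtime validates the arguments against the schema and runs execute, and the result is fed back to the model. Below is a stripped-down, framework-free sketch of that dispatch step — not the SDK’s actual internals, and with plain objects standing in for Zod schemas:

```javascript
// Minimal tool-dispatch sketch: look up the tool the model asked for,
// check its required arguments, and run it.
const tools = {
  weatherTool: {
    requiredParams: ["location"],
    execute: ({ location }) => ({ weather: `Sunny in ${location}` }),
  },
};

function dispatchToolCall({ name, args }) {
  const tool = tools[name];
  if (!tool) return { error: `Unknown tool: ${name}` };
  for (const param of tool.requiredParams) {
    if (!(param in args)) return { error: `Missing parameter: ${param}` };
  }
  return tool.execute(args);
}

console.log(dispatchToolCall({ name: "weatherTool", args: { location: "Austin" } }));
```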

API Endpoint for Chat UI – The AG in RAG

Next, let’s create the chat endpoint for the Chat UI, which is the AG in RAG. In the src/app directory, create an api/chat subfolder, then add a route.ts file inside it. Copy and paste this code into the file:

// IMPORTANT! Set the runtime to edge
export const runtime = "edge";

import { convertToCoreMessages, Message, streamText } from "ai";
import { createGoogleGenerativeAI } from "@ai-sdk/google";

import { smartSearchTool, weatherTool } from "@/app/utils/tools";

/**
 * Initialize the Google Generative AI API
 */
const google = createGoogleGenerativeAI();

export async function POST(req: Request) {
  try {
    const { messages }: { messages: Array<Message> } = await req.json();

    const coreMessages = convertToCoreMessages(messages);

    const smartSearchPrompt = `
    - You can use the 'smartSearchTool' to find information relating to tv shows.
      - WP Engine Smart Search is a powerful tool for finding information about TV shows.
      - After the 'smartSearchTool' provides results (even if it's an error or no information found)
      - You MUST then formulate a conversational response to the user based on those results, but also use the tool if the user's query is deemed plausible.
        - If search results are found, summarize them for the user. 
        - If no information is found or an error occurs, inform the user clearly.`;

    const systemPromptContent = `
    - You are a friendly and helpful AI assistant 
    - You can use the 'weatherTool' to provide current weather information for a specific location.
    - Do not invent information. Stick to the data provided by the tool.`;

    const response = streamText({
      model: google("models/gemini-2.0-flash"),
      system: [smartSearchPrompt, systemPromptContent].join("\n"),
      messages: coreMessages,
      tools: {
        smartSearchTool,
        weatherTool,
      },
      onStepFinish: async (result) => {
        // Log token usage for each step
        if (result.usage) {
          console.log(
            `[Token Usage] Prompt tokens: ${result.usage.promptTokens}, Completion tokens: ${result.usage.completionTokens}, Total tokens: ${result.usage.totalTokens}`
          );
        }
      },
      maxSteps: 5,
    });
    // Convert the response into a friendly text-stream
    return response.toDataStreamResponse({});
  } catch (e) {
    console.error("[Chat API] Error:", e);
    throw e;
  }
}


This file defines an Edge-runtime POST endpoint that wires up Google’s Gemini model with two custom tools—smartSearchTool for TV-show lookups via WP Engine Smart Search and weatherTool for fetching current weather. When a request arrives, it parses the incoming chat messages, converts them into the AI SDK’s core message format, and assembles two system-level prompts: one describing how to use the search tool, the other explaining the weather tool.

It then invokes streamText with the Gemini “flash” model, the combined system prompt, the user’s message history, and the tool definitions, allowing the LLM to call out to those tools during generation. A callback logs token usage after each reasoning step (up to five steps), and the function finally returns the AI’s response as a streamed HTTP response.

Create UI Components for Chat Interface

The Chat.tsx file

Now, let’s create the chat interface.  In the src/app directory, create a components folder.  Then create a Chat.tsx file.  Copy and paste this code block in that file:

"use client";

import React, { ChangeEvent } from "react";
import Messages from "./Messages";
import { Message } from "ai/react";
import LoadingIcon from "../Icons/LoadingIcon";
import ChatInput from "./ChatInput";

interface Chat {
  input: string;
  handleInputChange: (e: ChangeEvent<HTMLInputElement>) => void;
  handleMessageSubmit: (e: React.FormEvent<HTMLFormElement>) => void;
  messages: Message[];
  status: "submitted" | "streaming" | "ready" | "error";
}

const Chat: React.FC<Chat> = ({
  input,
  handleInputChange,
  handleMessageSubmit,
  messages,
  status,
}) => {
  return (
    <div id="chat" className="flex flex-col w-full mx-2">
      <Messages messages={messages} />
      {status === "submitted" && <LoadingIcon />}
      <form
        onSubmit={handleMessageSubmit}
        className="ml-1 mt-5 mb-5 relative rounded-lg"
      >
        <ChatInput input={input} handleInputChange={handleInputChange} />
      </form>
    </div>
  );
};

export default Chat;

This file defines a client-side React Chat component that ties together your message list, input field, and loading indicator. It declares a Chat props interface—containing the current input value, change and submit handlers, the array of chat messages, and a status flag—and uses those props to control its rendering.

Inside the component, it first renders the <Messages> list to show the conversation history. If the status is "submitted", it displays a <LoadingIcon> spinner to indicate that a response is pending.

Finally, it renders a <form> wrapping the <ChatInput> component wired to the provided input value and change handler, so users can type and submit new messages.

Messages Component

Staying in the src/app/components/Chat directory, create a Messages.tsx file. Copy and paste this code block in:

import { Message } from "ai";
import { useEffect, useRef } from "react";
import ReactMarkdown from "react-markdown";

export default function Messages({ messages }: { messages: Message[] }) {
  const messagesEndRef = useRef<HTMLDivElement | null>(null);
  useEffect(() => {
    messagesEndRef.current?.scrollIntoView({ behavior: "smooth" });
  }, [messages]);
  return (
    <div
      className="border-1 border-gray-100 overflow-y-scroll flex-grow flex-col justify-end p-1"
      style={{ scrollbarWidth: "none" }}
    >
      {messages.map((msg) => (
        <div
          key={msg.id}
          className={`${
            msg.role === "assistant" ? "bg-green-500" : "bg-blue-500"
          } my-2 p-3 shadow-md hover:shadow-lg transition-shadow duration-200 flex slide-in-bottom border border-gray-900 message-glow`}
        >
          <div className="rounded-tl-lg p-2 border-r flex items-center">
            {msg.role === "assistant" ? "🤖" : "🧒🏻"}
          </div>
          <div className="ml-2 text-white">
            <ReactMarkdown>{msg.content}</ReactMarkdown>
          </div>
        </div>
      ))}
      <div ref={messagesEndRef} />
    </div>
  );
}


The Messages component renders a scrollable list of chat messages, automatically keeping the view scrolled to the latest entry. It accepts a messages prop (an array of Message objects) and uses a ref to an empty <div> at the bottom; a useEffect hook watches for changes to the messages array and calls scrollIntoView on that ref so new messages smoothly come into view. 


Each message is wrapped in a styled <div> whose background color and avatar icon depend on the message’s role (“assistant” vs. “user”), and the text content is rendered via ReactMarkdown to support Markdown formatting.

Chat Input Component

Lastly, staying in the src/app/components/Chat directory, we have the chat input. Create a ChatInput.tsx file and copy and paste this code block in:

import { ChangeEvent } from "react";
import SendIcon from "../Icons/SendIcon";

interface InputProps {
  input: string;
  handleInputChange: (e: ChangeEvent<HTMLInputElement>) => void;
}

function Input({ input, handleInputChange }: InputProps) {
  return (
    <div className="bg-gray-800 p-4 rounded-xl shadow-lg w-full max-w-2xl mx-auto">
      <input
        type="text"
        value={input}
        onChange={handleInputChange}
        placeholder={"Ask Smart Search about TV shows..."}
        className="w-full bg-transparent text-gray-200 placeholder-gray-500 focus:outline-none text-md mb-3"
      />
      <div className="flex">
        <button
          type="submit"
          className="p-1 hover:bg-gray-700 rounded-md transition-colors ml-auto"
          aria-label="Send message"
          disabled={!input.trim()}
        >
          <SendIcon />
        </button>
      </div>
    </div>
  );
}

export default Input;

This file exports an Input component (imported elsewhere as ChatInput) that renders a styled text field and send button for your chat UI. It takes an input string and a handleInputChange callback to keep the input controlled, showing a placeholder prompt (“Ask Smart Search about TV shows…”). The send button, decorated with a SendIcon, is disabled when the input is empty or contains only whitespace.

Update the page.tsx template

We need to modify the src/app/page.tsx file to add the Chat component to the page. In the page.tsx file, copy and paste this code:

"use client";
import Chat from "./components/Chat/Chat";
import { useChat } from "@ai-sdk/react";
import { useEffect } from "react";

const Page: React.FC = () => {
  const {
    messages,
    input,
    handleInputChange,
    handleSubmit,
    setMessages,
    status,
  } = useChat();

  useEffect(() => {
    if (messages.length < 1) {
      setMessages([
        {
          role: "assistant",
          content: "Welcome to the Smart Search chatbot!",
          id: "welcome",
        },
      ]);
    }
  }, [messages, setMessages]);

  return (
    <div className="flex flex-col justify-between h-screen bg-white mx-auto max-w-full">
      <div className="flex w-full flex-grow overflow-hidden relative bg-slate-950">
        <Chat
          input={input}
          handleInputChange={handleInputChange}
          handleMessageSubmit={handleSubmit}
          messages={messages}
          status={status}
        />
      </div>
    </div>
  );
};

export default Page;


This file defines our page component that leverages the useChat hook from the @ai-sdk/react package to manage chat state, including messages, input text, submission handler, and status. 

Upon initial render, a useEffect hook checks if there are no messages and injects a default assistant greeting. The component returns a full-viewport flexbox layout with a styled background area in which it renders the Chat component, passing along the chat state and handlers. 

Update the layout.tsx file with metadata

We need to add metadata to our layout.  Copy and paste this code block in the src/app/layout.tsx file:

import type { Metadata } from "next";
import { Inter } from "next/font/google";
import "./globals.css";

const inter = Inter({ subsets: ["latin"] });

export const metadata: Metadata = {
  title: "Smart Search RAG",
  description: "Lets make a chatbot with Smart Search",
};

export default function RootLayout({
  children,
}: Readonly<{
  children: React.ReactNode;
}>) {
  return (
    <html lang="en">
      <body className={inter.className}>{children}</body>
    </html>
  );
}

This file configures the global layout and metadata for the app: it imports global styles, loads the Inter font, and sets the page title and description. The default RootLayout component wraps all page content in <html> and <body> tags, applying the Inter font’s class to the body.

CSS Note: The last thing to add for the styling is the globals.css file. Visit the code block here and copy and paste it into your project.

Test the ChatBot

The chatbot should be completed and testable in this state. In your terminal, run npm run dev and navigate to http://localhost:3000. Try asking the chatbot a few questions.  You should see this in your browser:

Conclusion

We hope this article helped you understand how to create a chatbot with WP Engine’s AI toolkit in headless WordPress! Stay tuned for the next article on embedding this chatbot and using it in traditional WordPress!

As always, we’re super stoked to hear your feedback and learn about the headless projects you’re working on, so hit us up in the Headless WordPress Discord!



The post Create a Headless WordPress chatbot with WP Engine’s AI Toolkit, RAG, and Google Gemini appeared first on Builders.

]]>
https://wpengine.com/builders/create-a-headless-wordpress-chatbot-with-wp-engines-ai-toolkit-rag-and-google-gemini/feed/ 0
Boost Next.js Performance by Offloading Third-Party Scripts with PartyTown 🎉 https://wpengine.com/builders/boost-next-js-performance-by-offloading-third-party-scripts-with-partytown/ https://wpengine.com/builders/boost-next-js-performance-by-offloading-third-party-scripts-with-partytown/#respond Fri, 13 Jun 2025 15:19:23 +0000 https://wpengine.com/builders/?p=31915 I spent a number of years working in the WordPress agency space, and during that time, we frequently received requests from clients asking us to improve their website performance. They […]

The post Boost Next.js Performance by Offloading Third-Party Scripts with PartyTown 🎉 appeared first on Builders.

]]>
I spent a number of years working in the WordPress agency space, and during that time, we frequently received requests from clients asking us to improve their website performance. They would run Lighthouse audits and come to us concerned that their sites weren’t meeting the performance standards they wanted.

We’d dive into their custom codebase and find optimizations, but often, the biggest culprits weren’t their code—they were third-party scripts like Google Analytics, Google Tag Manager, Intercom chat widgets, advertising networks, and so on. We’d report our findings and hear, “Oh, well, we have to have those…but can’t we make it faster anyway?”

That tension—between essential third-party functionality and website speed—is a challenge many web developers face. Fortunately, there’s a compelling solution that helps strike a balance: Partytown.

This article will cover:

  • The performance issues that third-party scripts can cause
  • What Partytown is and how it can alleviate those issues
  • An example Next.js application that demonstrates the impact Partytown can have
  • How you can implement Partytown in your own Next.js app

A video version of this content is also available here:

Prerequisites

To benefit from this article, you should be familiar with the following:

  • JavaScript fundamentals
  • The basics of how web browsers load and execute scripts
  • Tools like Lighthouse for performance auditing
  • The structure of a Next.js project (specifically using the App Router)*

* Even if you’re not using Next.js, you can still learn the core concept of Partytown from this article and integrate it into your preferred framework using one of Partytown’s Integration Guides.

Understanding the Problem: Main Thread Overload

In modern web applications, the browser’s main thread is where critical tasks like rendering, user interaction, and layout updates occur. But third-party scripts often hog this thread, leading to sluggish performance. Chrome’s Lighthouse documentation breaks down how script execution dominates the main thread, especially from third-party code.

These scripts can:

  • Block rendering
  • Cause input delay
  • Significantly impact metrics like FID (First Input Delay) and TTI (Time to Interactive)

While some optimization techniques help—like adding async or defer attributes to <script> tags (MDN reference), or using tools like @next/third-parties—they’re often not enough.

Meet Partytown

Partytown is an open-source library from Builder.io that offloads third-party scripts to a web worker, freeing up the main thread. This means your app’s critical work (like rendering UI and responding to user input) can continue smoothly while third-party scripts run in isolation.

Partytown is currently in beta and actively developed. While it doesn’t support every use case yet, it provides a powerful way to boost site performance.

Why is it called “Partytown”?

The name “Partytown” is a playful metaphor:

  • The main thread = your app’s “downtown,” where essential work happens.
  • Third-party scripts = noisy neighbors cluttering up your downtown.

Partytown moves those noisy neighbors out to the suburbs—a separate part of town—so they can “party” without disturbing downtown’s flow! 😄 In other words, they’re offloaded to a web worker.

Before You Reach for Partytown

Before integrating Partytown, consider these best practices:

  • Add async or defer to third-party <script> tags so they don’t block rendering. Learn more on MDN.
  • Use the @next/third-parties package for smarter loading in Next.js projects (experimental as of this writing).
  • Follow performance tips on web.dev to minimize script impact.
  • Use Next.js’s <Script> component with a strategy prop:
    • beforeInteractive — for critical scripts
    • afterInteractive — for non-blocking scripts
    • lazyOnload — loads during idle time
    • worker — (experimental) loads in a worker using Partytown, but only works with Pages Router for now, not the App Router (Next.js script docs).
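As a quick, illustrative sketch, these strategies are passed to the <Script> component via its strategy prop. The script URLs below are placeholders, not real endpoints:

```typescript
// Hypothetical usage sketch of next/script loading strategies.
// The URLs are placeholders; swap in your real third-party scripts.
import Script from "next/script";

export default function ThirdPartyScripts() {
  return (
    <>
      {/* Runs after hydration — a sensible default for analytics */}
      <Script src="https://example.com/analytics.js" strategy="afterInteractive" />
      {/* Waits until the browser is idle — for low-priority widgets */}
      <Script src="https://example.com/chat-widget.js" strategy="lazyOnload" />
    </>
  );
}
```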

I recommend implementing these best practices first to optimize third-party scripts, then running a Lighthouse audit to test the site’s performance, paying particular attention to the “Minimize main-thread work” and “Reduce the impact of third-party code” sections of the report. Then, if you find that third-party scripts are still an appreciable performance issue, consider using Partytown to offload them from the main thread to web workers.

Testing Partytown’s Impact

Next, let’s run a few Lighthouse performance audits on a Next.js project that uses several third-party scripts. We’ll run one test with Partytown disabled and a second one with it enabled to measure its impact.

To get started, you can clone this example repository that demonstrates Partytown in action. Once you clone it, run npm install to install its dependencies, then npm run dev to get it running locally at http://localhost:3000.

This example project loads these third-party scripts:

  • slow-script.js: a generic script I wrote for testing that blocks the main thread for 300ms
  • fake-ads.js: a script I wrote that simulates advertisement scripts that block the thread, inject iframes, and load large images
  • Google Tag Manager
  • Intercom chat widget
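To make the main-thread-blocking behavior concrete, here is an illustrative sketch of what a script like slow-script.js can do: a synchronous busy-wait that stalls rendering and input handling for the given duration. The 300ms figure matches the script described above, but the implementation here is an assumption, not the repository’s actual code:

```typescript
// Illustrative main-thread blocker (assumed implementation, not the repo's code).
// While this loop spins, the thread can't render, handle input, or run timers.
function blockMainThread(ms: number): number {
  const start = Date.now();
  while (Date.now() - start < ms) {
    // synchronous busy-wait: nothing else on this thread can run
  }
  return Date.now() - start;
}

const elapsed = blockMainThread(300);
console.log(`Main thread was blocked for ${elapsed}ms`);
```

Offloading a script like this to a Partytown web worker means the loop spins in a separate thread instead, so the page stays responsive.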

Test 1: Without Partytown

You can disable Partytown in the example project by commenting out the <Partytown debug={true} forward={["dataLayer.push"]} /> line and removing type="text/partytown" from each of the scripts in src/app/layout.tsx.

Running a Lighthouse performance audit from the Chrome DevTools should then yield a result like this:

Note that the overall performance score is only 70/100, and that two of the main culprits are the “Minimize main-thread work” and “Reduce the impact of third-party code” items on the list.

If you open the Chrome DevTools and view the Sources tab, you can confirm that the main thread (labeled “top”) is doing all the work required by the third-party scripts:

Test 2: With Partytown

Now, restore the <Partytown debug={true} forward={["dataLayer.push"]} /> line and the type="text/partytown" prop for each of the Script components. This will enable Partytown.

Run another Lighthouse performance audit to see a result like this:

Note that the performance score is now 99/100, and that the “Minimize main-thread work” and “Reduce the impact of third-party code” items no longer appear on the list of issues. Vastly improved!

If you open the Chrome DevTools and view the Sources tab, you can see that the main thread (labeled “top”) is still taking care of rendering the page, but that a new “Partytown 🎉” web worker has been added to the list. This “Partytown 🎉” web worker is now doing the work required by the third-party scripts:

The Takeaway

Take a step back and remember that for both the “Without Partytown” and “With Partytown” tests we ran, the browser had to download, parse, compile, and execute exactly the same third-party JavaScript code. The difference is that in the first test, the browser’s main thread had to do all of that work in addition to rendering the page, but in the second test, the third-party JS work was done by web workers running in a separate thread instead.

Implementing Partytown in a Next.js App Router Project

You can follow the steps below to implement Partytown in your own Next.js App Router projects.

⚠️ Be aware of trade-offs when using Partytown. Some scripts may not behave identically in a web worker. Review Partytown’s trade-offs before deploying to production.

Steps to Add Partytown

  1. Install Partytown:

npm install @builder.io/partytown

  2. Add a command to copy Partytown scripts
    Add the script below to the scripts object in your package.json file, save it, then run npm run partytown to run the script and copy Partytown’s scripts into your public directory.
"scripts": {
  // ...
  "partytown": "partytown copylib public/~partytown"
},

Optionally, you can also add this partytown command to the dev and/or build scripts to copy Partytown’s files whenever the development server starts or the production app is built. Example:

"scripts": {
  // ...
  "dev": "npm run partytown && next dev --turbopack",
  "build": "npm run partytown && next build",
  "partytown": "partytown copylib public/~partytown"
},

  3. Load Partytown in your RootLayout

In src/app/layout.tsx, add this line to import the Partytown component:

import { Partytown } from "@builder.io/partytown/react";

Then render it inside the <head> element, like this:

<head>
    // ...
    <Partytown forward={["dataLayer.push"]} />
</head>

Partytown’s Configuration page lists all the options that can be passed to the Partytown component. Included among them is a debug option you can enable with debug={true} if you encounter any issues and need to debug them.

  4. Add type="text/partytown" to Scripts

Pass a type="text/partytown" prop to the scripts you’d like to load via Partytown. If you have a script you want to remain on the main thread instead, simply omit the type="text/partytown" prop for that script.

<Script
  src="https://cdn.jsdelivr.net/.../slow-script.js"
  type="text/partytown"
/>

Here’s what the full root layout file might look like:

import Script from "next/script";
import { Partytown } from "@builder.io/partytown/react";

export default function RootLayout({
  children,
}: Readonly<{
  children: React.ReactNode;
}>) {
  return (
    <html lang="en">
      <head>
        <Partytown forward={["dataLayer.push"]} />
      </head>
      <body>
        {children}

        {/* Slow third-party script */}
        <Script
          src="https://cdn.jsdelivr.net/gh/kellenmace/partytown-nextjs/third-party-scripts/slow-script.js"
          type="text/partytown"
        />
      </body>
    </html>
  );
}

After implementing this, open Chrome DevTools and check the Sources tab. You’ll see the third-party scripts now load in a web worker (rather than under the main “top” frame), confirming that Partytown is working, just as we did in our tests.

Conclusion

Third-party scripts are necessary, but they can be performance killers. Partytown offers a creative, effective way to keep them around without dragging down your app’s speed. By offloading them to a web worker, your main thread stays responsive and users get a faster experience.

As web developers, it’s up to us to make smart trade-offs. And when it comes to performance vs. functionality, Partytown helps us have our cake and eat it too.

To learn more, check out Partytown’s documentation, configuration reference, and integration guides.

Happy coding, and enjoy the party (out of town)! 🎉

The post Boost Next.js Performance by Offloading Third-Party Scripts with PartyTown 🎉 appeared first on Builders.

]]>
https://wpengine.com/builders/boost-next-js-performance-by-offloading-third-party-scripts-with-partytown/feed/ 0
Build a Contact Form in Headless WordPress Using Next.js and Ninja Forms https://wpengine.com/builders/build-a-contact-form-in-headless-wordpress-using-next-js-and-ninja-forms/ https://wpengine.com/builders/build-a-contact-form-in-headless-wordpress-using-next-js-and-ninja-forms/#respond Thu, 05 Jun 2025 15:24:24 +0000 https://wpengine.com/builders/?p=31903 Contact forms are a fundamental touchpoint between site visitors and site owners, enabling customer inquiries, lead generation, and essential feedback that drives engagement.In headless WordPress, this can be tricky since […]

The post Build a Contact Form in Headless WordPress Using Next.js and Ninja Forms appeared first on Builders.

]]>
Contact forms are a fundamental touchpoint between site visitors and site owners, enabling customer inquiries, lead generation, and essential feedback that drives engagement.

In headless WordPress, this can be tricky since the frontend and backend are separated. This means you have to figure out a way for the frontend to send that data to your WordPress backend as securely as possible.

In this article, we’ll discuss implementing a simple contact form in headless WordPress using WPGraphQL, Ninja Forms, and the Next.js App Router.

If you prefer the video format, you can access it here:

Prerequisites

Before we begin, you should have a basic understanding of the following:

  • Headless WordPress concepts
  • The WPGraphQL plugin
  • The Next.js App Router

This article is not a step-by-step walkthrough, but if you’d like to explore the codebase and follow along, you can clone the example repository, which includes a detailed setup guide.

Using Ninja Forms with WPGraphQL

Ninja Forms is a flexible and user-friendly form-building plugin for WordPress. It offers a robust set of features out of the box, and its core functionality is free and open source.

To expose Ninja Forms data via GraphQL, we’ll use the WPGraphQL for Ninja Forms extension. This plugin adds a GraphQL schema for Ninja Forms, allowing queries and mutations for form data.

For this example, we’ll stick with the free tier of Ninja Forms and use its default fields: Name, Email, and Message.


After installing the WPGraphQL for Ninja Forms plugin, let’s test that it works by requesting and submitting data via its API.

Requesting the default form data:

Submitting data to the form via mutation:

You should see the results in the right-hand pane: the expected data when you query for it, and the success boolean set to true when the submission succeeds.

Stoked, these both work!

Form Submission in Next.js App Router

The next step I took was to create the API route responsible for handling the form submission safely. In App Router, route handlers allow you to create custom request handlers for a given route using the Web Request and Response APIs.

We will take advantage of that convention in the app/api/contact/route.ts file.

This file securely bridges a Next.js frontend with a WordPress backend via GraphQL. In the POST handler, the code first reads JSON from the incoming request and expects three properties—name, email, and message. If any of these fields is missing, it immediately returns a 400 response with an error. Once validation passes, the handler constructs a WPGraphQL mutation:

const mutation = `
  mutation SubmitForm($input: SubmitFormInput!) {
    submitForm(input: $input) {
      success
      message
      errors {
        fieldId
        message
        slug
      }
    }
  }
`;

Next, the code issues a fetch call to the WordPress GraphQL endpoint, which is specified via the environment variable NEXT_PUBLIC_GRAPHQL_ENDPOINT. In that call, the headers include "Content-Type": "application/json" and an Authorization header built from another environment variable:

Authorization: Bearer ${process.env.WP_AUTH_TOKEN}

Because WP_AUTH_TOKEN is pulled from process.env, it ensures that only your Next.js app—holding this secret—can successfully authorize and submit data to the WordPress endpoint. 

The request body contains the query and a variables object whose input includes formId: 1, an array of field objects mapping each field’s id to value: data.name, value: data.email, and value: data.message, plus a clientMutationId set to "contact-form-submission".

When the response arrives, the code calls await wpResponse.json(), then checks both wpResponse.ok and response.errors. If either indicates a failure, it logs the first GraphQL error to the server console and throws an exception. In the success case—when submitForm returns something like { success: true, message: "…", errors: [] }—the route returns a 200 JSON response:

{ "success": true, "message": "Form submitted successfully" }

Any thrown exception or unexpected condition is caught by the catch block, which logs the error and returns a 500 response with { error: "Form submission failed" }. Finally, the file exports a GET handler that always returns a 405 “Method not allowed” response, ensuring only POST requests are processed.
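Putting the validation and request-body construction together, here is an illustrative pure-TypeScript sketch of that logic. The numeric field IDs are assumptions—map them to your own Ninja Forms field IDs:

```typescript
// Illustrative sketch (not the article's exact route.ts): validate the incoming
// payload and build the WPGraphQL request body the way the POST handler does.
interface ContactData {
  name?: string;
  email?: string;
  message?: string;
}

const SUBMIT_FORM_MUTATION = `
  mutation SubmitForm($input: SubmitFormInput!) {
    submitForm(input: $input) {
      success
      message
      errors { fieldId message slug }
    }
  }
`;

// Field IDs 1–3 are assumptions; match them to your form's actual field IDs.
function buildSubmitFormBody(data: ContactData) {
  if (!data.name || !data.email || !data.message) {
    return { error: "Missing required fields", status: 400 } as const;
  }
  return {
    status: 200,
    body: {
      query: SUBMIT_FORM_MUTATION,
      variables: {
        input: {
          formId: 1,
          clientMutationId: "contact-form-submission",
          fields: [
            { id: 1, value: data.name },
            { id: 2, value: data.email },
            { id: 3, value: data.message },
          ],
        },
      },
    },
  } as const;
}
```

Keeping this logic in a pure function like this also makes the route handler easy to unit-test without spinning up WordPress.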

Interfacing with the API route via server actions

Now that we have discussed the API route in route.ts, let’s go over the server action responsible for interfacing with that API endpoint. 

The actions.ts file implements a server-side action that functions as the middleware between the client-side form submission and the API endpoint. 

When a user triggers the form submission, the submitForm server action is invoked, which processes the form data and initiates an HTTP POST request to the /api/contact endpoint. 

This endpoint, implemented in route.ts, then executes the WordPress WPGraphQL mutation through the Ninja Forms API. 
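The article doesn’t reproduce actions.ts itself, so here is a hedged sketch of what that server action might look like, based on the description above and the response shape the ContactForm component expects. The field names and response strings are assumptions:

```typescript
"use server";

// Hypothetical sketch of actions.ts — names and response shapes are assumptions.
// Forwards the submitted form data to the /api/contact route handler.
export async function submitForm(
  formData: FormData
): Promise<{ success: string } | { error: string }> {
  const res = await fetch(`${process.env.NEXT_PUBLIC_SITE_URL}/api/contact`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      name: formData.get("name"),
      email: formData.get("email"),
      message: formData.get("message"),
    }),
  });

  if (!res.ok) {
    return { error: "Form submission failed. Please try again." };
  }
  const data = await res.json();
  return { success: data.message ?? "Form submitted successfully" };
}
```

Because this runs on the server, the WP_AUTH_TOKEN secret never reaches the browser; only the API route that holds it talks to WordPress.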

Rendering the Form on the Client

The contact-form.tsx file implements a client-side form component that leverages Next.js’s Form API and server actions for handling contact form submissions. 

The component is marked with the "use client" directive, indicating that it runs on the client side and utilizes React’s useFormStatus hook to manage form submission states.

The form implementation consists of two main components.

This is a React component called SubmitButton that renders a <button> whose appearance and behavior change based on the form’s submission state.

function SubmitButton() {
  const { pending } = useFormStatus();
  return (
    <button
      type="submit"
      disabled={pending}
      className={`w-full py-3 px-6 rounded bg-yellow-500 text-black font-semibold hover:bg-yellow-400 transition-colors ${
        pending ? "opacity-50 cursor-not-allowed" : ""
      }`}
    >
      {pending ? "Sending..." : "Send Message"}
    </button>
  );
}

And this is the main ContactForm component that manages the form state and submission:

export function ContactForm() {
  const [message, setMessage] = useState<{
    type: "success" | "error";
    text: string;
  } | null>(null);

  async function handleSubmit(formData: FormData) {
    const result = await submitForm(formData);
    if ("error" in result) {
      setMessage({ type: "error", text: result.error });
    } else {
      setMessage({ type: "success", text: result.success });
    }
  }
  // ... form JSX
}

Notice that I decided to use the form component built into Next.js to keep things simple. You could instead fetch the form definition dynamically from the Ninja Forms WPGraphQL API, but for this article I chose the static form provided by Next.js.

Environment Variables

Before trying the form on the browser to see if it works, the last thing you have to check is your environment variables.  In the .env.local file at your project’s root, your environment variables should be as follows:

NEXT_PUBLIC_GRAPHQL_ENDPOINT="https://your-wpsite.com/graphql"
WP_AUTH_TOKEN="your-auth-token"
NEXT_PUBLIC_SITE_URL=http://localhost:3000

Generate an Auth Token

You can generate an auth token to add to your environment variable by using the openssl command in terminal:

openssl rand -base64 32

This command generates a random 32-byte string encoded in base64, which is great for use as an authentication token. The output is a secure, random string that can be used as the WP_AUTH_TOKEN in your environment variables.
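If you’d rather generate the token without shelling out to openssl, Node’s built-in crypto module does the same thing. This is just an alternative, not a step from the article:

```typescript
// Node.js equivalent of `openssl rand -base64 32`:
// 32 random bytes, base64-encoded (always 44 characters with padding).
import { randomBytes } from "node:crypto";

const token = randomBytes(32).toString("base64");
console.log(token); // paste this value into WP_AUTH_TOKEN
```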

Test the form on the browser

Everything is set up, and now it’s time to test this form on the browser.  I will navigate to my contact form route. Here is the form working in all its glory!

Locking down the edit page on WP Admin

If you’re using a static form component in your frontend, any changes made to the form structure in the WordPress admin can break the submission logic.

To prevent this, restrict access to Ninja Forms editing screens by creating a custom plugin that overrides its default capabilities:

Example Plugin: Ninja Forms Admin Lockdown

<?php
/**
 * Plugin Name: Ninja Forms Admin Lockdown
 * Description: Restricts access to all Ninja Forms admin screens so that only users with the 'edit_themes' capability (typically Administrators) can view or modify forms.
 * Version:     1.0.0
 * Author:      Your Name
 * License:     GPLv2 or later
 * Text Domain: ninja-forms-admin-lockdown
 */

// Exit if accessed directly.
if ( ! defined( 'ABSPATH' ) ) {
    exit;
}

/**
 * Restrict access to the Ninja Forms admin menu, the “All Forms” screen,
 * and the “Add New Form” screen.
 *
 * By default, Ninja Forms uses 'edit_posts' to gate access.
 * Returning 'edit_themes' here ensures only users with that capability
 * (typically Administrators) can view or edit forms.
 */
function nf_admin_lockdown_capabilities( $cap ) {
    return 'edit_themes';
}
add_filter( 'ninja_forms_admin_parent_menu_capabilities', 'nf_admin_lockdown_capabilities' );
add_filter( 'ninja_forms_admin_all_forms_capabilities',   'nf_admin_lockdown_capabilities' );
add_filter( 'ninja_forms_admin_add_new_capabilities',     'nf_admin_lockdown_capabilities' );

In order to do this, you need to hook into Ninja Forms’ capability filters and return a higher-level capability.

Install and activate this plugin to restrict form editing to only site Administrators (those who have the `edit_themes` capability).

Conclusion

Integrating a contact form on a headless WordPress site doesn’t have to be complex. By combining Next.js, WPGraphQL, and Ninja Forms, you can build a secure, modern contact form that connects your frontend and backend seamlessly.

As always, we’re super stoked to hear your feedback and learn about the headless projects you’re working on, so hit us up in the Headless WordPress Discord!

The post Build a Contact Form in Headless WordPress Using Next.js and Ninja Forms appeared first on Builders.

]]>
https://wpengine.com/builders/build-a-contact-form-in-headless-wordpress-using-next-js-and-ninja-forms/feed/ 0