Building an AI App With Cloudflare Workers Part 3
Part 3 of 3: Tying it all together with Artificial Intelligence
In the first two parts of this series, we built a complete full-stack application. Part 1 covered the React front-end, and Part 2 built the Hono and D1 back-end that lets us save, load, and delete our notes and content.
Our app “works,” but it’s still missing the core feature that inspired this project: an LLM friend to help with writing guidance. The chat window we built is just a UI element that doesn’t talk to anything.
Welcome to Part 3, the final and most exciting piece of the puzzle. This is where we wire up our friend’s brain.
We are going to dive into the Cloudflare AI ecosystem. I’ll walk through adding the AI bindings to our worker, creating a new /aichat endpoint in Hono, and using Hono’s built-in streaming helpers to get that real-time “typing” effect directly from the LLM.
By the end of this post, our AI-powered text editor will finally be complete.
Updating The Backend API To Use Cloudflare
Up to this point, our Hono back-end could be developed and tested in a generic local server. This was perfectly fine for our file and database routes in Part 2, as they behave like any standard API.
However, to add the AI functionality, we need to leverage Cloudflare’s AI binding. This is a special service provided by the Cloudflare platform, and our current local setup knows nothing about it and has no way to connect to it.
To use this powerful platform, we must transition our project to run within an actual Cloudflare Workers runtime. We can emulate this environment locally using Wrangler, Cloudflare’s command-line tool. This requires us to reinitialise our server folder as an official Cloudflare Workers project.
First, delete the existing server folder we created in Part 2. Now, from your project’s root, initialise a new Cloudflare Workers project:
$ bun create cloudflare@latest
The terminal will prompt you to name the project. Name it “server” to replace the one you just deleted. When it asks you to choose a framework starter, select Hono. This will set you up with a new “server” folder that includes Hono, Wrangler, and all the necessary configuration to run on the Cloudflare platform.
Note: the dependency install step may fail here; that’s okay. It will be fixed once we modify package.json and tsconfig.json to use bun in the build scripts instead of npm. Enter the “server” directory and change package.json and tsconfig.json to match the given examples. Then run the following commands to install all dependencies and set up the starter project.
$ cd server
$ bun install
This project is now set up to use Wrangler, Cloudflare’s command-line tool, which you can run from this directory. Next, log in to your Cloudflare account via the Wrangler CLI with the following command.
$ bunx wrangler login
The new server/src directory contains a default index.ts file. We’re going to modify this file and add a new one, types.ts, to create a clean, type-safe structure for our API.
Here’s what each file will do:
types.ts (New File): This is a crucial file for a good developer experience. We’ll use it to define custom TypeScript types for our Cloudflare environment bindings. In simple terms, this file will teach TypeScript what c.env.AI and c.env.DB are, giving us auto-completion and error-checking so we don’t make mistakes later.
index.ts (Modified): This will be the main entry point for our Hono application. Its job is to set up the Hono server, import our custom types, and define the top-level routes for our API (like /files and /aichat). It will then hand off the requests to the correct route-handler files.
import type { Env } from "hono";
declare class Ai {
run(model: string, options: any): Promise<any>;
}
declare class D1Database {
prepare(query: string): {
bind(...values: any[]): {
run(): Promise<any>;
all(): Promise<any>;
first(): Promise<any>;
raw(): Promise<any>;
};
};
}
export interface CustomEnv extends Env {
AI: Ai;
DB: D1Database;
FRONTEND: string; // The URL of the frontend application
}
This types.ts file is crucial for a good developer experience. We’re using it to declare the “shape” of the environment bindings Cloudflare will provide at runtime. This teaches TypeScript what our c.env object will contain, giving us auto-completion and error-checking.
Declare Class Ai
This tells TypeScript that a class named Ai will exist when we deploy our code. We don’t have to write this class; Cloudflare’s runtime provides it.
run(model: string, options: any): Promise<any>: We’re defining its primary method, run. It takes a model identifier (e.g., @cf/meta/llama...) and an options object. We use Promise<any> because the response can vary. We’re using it for text streaming, but it could also be a simple text response or other data. This any type gives us the flexibility to handle different AI model outputs.
Declare Class D1Database
Similarly, this defines the interface for Cloudflare’s D1 database binding.
prepare(query: string): This is the main method you use. It prepares an SQL query and returns an object with methods to execute it:
.run(): Use this for INSERT, UPDATE, or DELETE queries that don’t return data.
.all(): Use this for SELECT queries to get an array of all matching rows.
.first(): Use this to get just the first row from a query, which is useful for fetching by ID.
.raw(): Use this for accessing the low-level, raw results from the database driver.
Export Interface CustomEnv
This interface is the most important part for our code. It’s the “glue” that brings all the bindings together for Hono.
extends Env: It “extends” Hono’s base Env type.
AI: Ai: Binds our Ai class definition. Now, when we type c.env.AI, TypeScript knows it has a .run() method.
DB: D1Database: Binds our D1Database definition. c.env.DB is now fully typed.
FRONTEND: string: This defines a string variable for our front-end’s URL. We will store this as a secret in the Cloudflare dashboard. We’ll use this in our CORS (Cross-Origin Resource Sharing) configuration. This policy instructs the browser to only allow our deployed front-end to read responses from our API, which is a key security measure to prevent malicious websites from making requests and reading our data on a user’s behalf.
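To see how these declarations play out in practice, here’s a dependency-free sketch that stubs out the AI and D1 bindings with mock objects (the Hono Env base is omitted so it runs standalone, and the mock return values are invented purely for illustration; at runtime Cloudflare supplies the real implementations behind c.env):

```typescript
// Stand-ins for the declared binding shapes, so we can exercise the
// CustomEnv contract without a Cloudflare runtime.
interface Ai {
  run(model: string, options: any): Promise<any>;
}

interface D1Database {
  prepare(query: string): {
    bind(...values: any[]): {
      run(): Promise<any>;
      all(): Promise<any>;
      first(): Promise<any>;
      raw(): Promise<any>;
    };
  };
}

interface CustomEnv {
  AI: Ai;
  DB: D1Database;
  FRONTEND: string;
}

// A mock environment object shaped exactly like c.env will be at runtime.
const env: CustomEnv = {
  AI: {
    run: async (model, _options) => ({ response: `ran ${model}` }),
  },
  DB: {
    prepare: (query) => ({
      bind: (...values) => ({
        run: async () => ({ query, values }),
        all: async () => ({ results: [] }),
        first: async () => null,
        raw: async () => [],
      }),
    }),
  },
  FRONTEND: "https://my-editor.pages.dev", // hypothetical front-end URL
};

// These calls type-check the same way c.env.AI.run(...) and
// c.env.DB.prepare(...).bind(...).run() will in the real worker.
env.AI.run("@cf/meta/llama-3.1-8b-instruct", { messages: [] })
  .then((result) => console.log(result.response));
```

Because the interfaces describe the shapes rather than the implementations, swapping the mocks for the live bindings changes nothing in the calling code.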
Cloudflare Environment Configuration
To store the FRONTEND secret as an environment variable, enter the following command in the terminal. The terminal will then prompt you to enter the value.
$ wrangler secret put FRONTEND
Defining the types in types.ts is only half the story. That file tells TypeScript what our environment should look like, but it doesn’t connect any real services.
For that, we need to edit wrangler.jsonc. This is the central configuration file for our Worker. Its most important job is to create bindings—links between the variables in our code (like c.env.DB and c.env.AI) and the actual, live Cloudflare services (like a specific D1 database). This file is what ensures that when our code runs, it has real services to talk to.
{
"$schema": "../node_modules/wrangler/config-schema.json",
"name": "server",
"main": "src/index.ts",
"compatibility_date": "2025-07-02",
"ai": {
"binding": "AI"
},
"d1_databases": [
{
"binding": "DB",
"database_name": "<your database name here>",
"database_id": "<your generated database id here>"
}
]
}
The wrangler.jsonc configuration shown above contains the essential bindings to get your AI and database services operational. You’ll notice the d1_databases section requires a unique database_name and database_id. To generate these for your project, run the following command in your terminal.
Building the Database
$ bunx wrangler@latest d1 create <database-name>
This command initialises a new D1 database and outputs its associated database_id. You’ll then copy both the database_name you chose and this new database_id into your wrangler.jsonc file to complete the D1 binding. With the database created, your next step is to define its table structure. Create a new file named schema.sql in the server/ directory and paste in the following SQL queries.
-- here is the default schema for storing the files.
DROP TABLE IF EXISTS files;
CREATE TABLE IF NOT EXISTS files (
id INTEGER PRIMARY KEY AUTOINCREMENT,
title TEXT NOT NULL UNIQUE,
content TEXT NOT NULL,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
These SQL commands define the “blueprint” for our files table. Let’s look at the key elements and what they do:
DROP TABLE IF EXISTS files; This command makes our script idempotent, meaning it can be run over and over without causing errors. It ensures that every time we apply this schema, we start with a fresh, clean files table, which is perfect for development.
CREATE TABLE IF NOT EXISTS files (...) This is the main command that creates our table. The definitions inside the parentheses are the most important part:
id INTEGER PRIMARY KEY AUTOINCREMENT This creates our unique ID. The PRIMARY KEY ensures every id is unique, and AUTOINCREMENT means we don’t have to provide an id when we save a note; the database will assign one for us (1, 2, 3, etc.).
title TEXT NOT NULL UNIQUE This is for the note’s filename. NOT NULL means the database will reject any note without a title. UNIQUE is crucial: it prevents two notes from having the same title, which is exactly the behaviour we want.
content TEXT NOT NULL This column will store the actual note content (the HTML from our editor). It also cannot be empty.
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP This is a great “housekeeping” column. We don’t have to specify the time when we save a note. The database will automatically stamp the new row with the exact time of its creation.
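A practical consequence of title being UNIQUE is that the save endpoint can treat “create” and “update” as a single SQLite upsert. The helper below is an illustrative sketch (not from the article’s repo); in the real route its output would be handed to c.env.DB.prepare(...).bind(title, content).run():

```typescript
// Builds an "insert or update" statement for the files table.
// ON CONFLICT(title) works only because the schema marks title UNIQUE:
// saving an existing title overwrites its content instead of erroring.
function upsertFileQuery(): string {
  return (
    "INSERT INTO files (title, content) VALUES (?, ?) " +
    "ON CONFLICT(title) DO UPDATE SET content = excluded.content"
  );
}

console.log(upsertFileQuery());
```

Without the UNIQUE constraint the ON CONFLICT clause would never fire, and every save would insert a duplicate row.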
Running the following command in the terminal initialises the database locally. Changing the --local flag to --remote creates a database instance on the Workers platform instead of locally on your machine.
$ bunx wrangler d1 execute <database-name> --local --file=./schema.sql
Talking To The Cloudflare AI Binding
With our database schema in place and all our bindings configured in wrangler.jsonc, we’re ready to write the main logic for our back-end.
We’ll do this in server/src/index.ts. This file is the main entry point for our Hono API. By importing the CustomEnv type we defined in types.ts, we’ll get full type-safety across our application.
This file will be responsible for two primary tasks:
Handling CORS: We’ll apply Hono’s CORS middleware here. This will use our c.env.FRONTEND secret to ensure that only our deployed front-end application is permitted to read responses from our API.
Routing Requests: This file will act as the main “switchboard” for our API. It will inspect the incoming request’s path and route it to the correct handler file. For example, a request to /files will be forwarded to our file-handling logic, while a request to /aichat will be sent to our new AI logic.
To keep our index.ts clean and organized, we won’t put all our API logic directly in this file. Instead, we’ll use Hono’s app.route() method to delegate groups of requests to specific route handlers.
Add these two lines to your index.ts:
app.route("/files", filesRoutes);
app.route("/aichat", chatRoutes);
This code tells Hono to act like a switchboard:
Any request beginning with /files (like /files or /files/123) will be forwarded to the filesRoutes handler.
Any request beginning with /aichat will be forwarded to the chatRoutes handler.
We’ll define these handlers in their own files inside the server/src/routes/ directory: filesRoutes in files.ts and chatRoutes in chat.ts.
Inside our chat.ts file, the core logic for connecting to Cloudflare AI comes down to this single call:
const response = await ai.run("@cf/meta/llama-3.1-8b-instruct", {
messages,
stream: true,
});
This code calls the ai.run() method, which we get from our c.env.AI binding. Let’s break down the parameters:
@cf/meta/llama-3.1-8b-instruct: This is the specific model we’re asking Cloudflare to use.
messages: This is the formatted array containing the user’s prompt (and, as we’ll see, our editor’s content as context).
stream: true: This is the most important parameter for our app’s user experience. By setting this to true, we are telling the AI agent not to wait until the entire response is generated. Instead, it will send us the response in small chunks, word by word. This allows us to stream the data to our front-end and create that real-time “typing” effect, rather than making the user wait for one large block of text.
(The full code for the chat.ts file, which includes all the logic for formatting the messages object and handling the streaming response, can be found here.)
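With stream: true, Workers AI emits its reply as server-sent events: each chunk is a line of the assumed form data: {"response":"…"}, with a data: [DONE] sentinel at the end (the exact wire format should be confirmed against the model’s documentation). The dependency-free sketch below shows how a client can fold those chunks into the growing message:

```typescript
// Extract the text tokens from a buffer of SSE lines, assuming the
// streaming format: data: {"response": "..."} terminated by data: [DONE].
function extractTokens(sseBuffer: string): string[] {
  const tokens: string[] = [];
  for (const line of sseBuffer.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed.startsWith("data:")) continue;        // skip non-data lines
    const payload = trimmed.slice("data:".length).trim();
    if (payload === "[DONE]") break;                   // end-of-stream sentinel
    try {
      const parsed = JSON.parse(payload);
      if (typeof parsed.response === "string") tokens.push(parsed.response);
    } catch {
      // Partial JSON at a chunk boundary; a real client would buffer it
      // and retry once the rest of the chunk arrives.
    }
  }
  return tokens;
}

// A made-up two-token stream, shaped like the assumed SSE format.
const sample =
  'data: {"response":"Hello"}\n' +
  'data: {"response":" world"}\n' +
  "data: [DONE]\n";

console.log(extractTokens(sample).join("")); // "Hello world"
```

Appending each token to React state as it arrives is what produces the typing effect in the chat window.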
Defining the File API Using CRUD
The files.ts file is where we’ll define all the API logic for handling our notes. By using app.route("/files", filesRoutes), our main index.ts file has already directed all requests starting with /files to this file.
Here, we’ll define a set of CRUD (Create, Read, Update, Delete) endpoints that match the HTTP methods our front-end is calling:
GET /files
What it does: Fetches a list of all files.
Why: This is what our sidebar will call to display the list of saved notes.
POST /files
What it does: Creates a new file (or updates an existing one).
Why: This is the endpoint our handleSave function calls, sending the title and content in the request body.
GET /files/:title
What it does: Fetches the content of a single, specific file.
Why: The :title part of the URL is a dynamic parameter. When our front-end calls /files/my-note, this route will “capture” my-note and use it to query the database for that specific file’s content.
DELETE /files/:title
What it does: Deletes a single, specific file.
Why: Just like the GET route, this uses the :title parameter to find and delete one specific file. This is what our “delete” button will call.
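One front-end detail worth flagging: because :title travels inside the URL, titles containing spaces or slashes must be percent-encoded before calling the GET or DELETE routes. A small hypothetical helper (not from the article’s repo) makes this hard to forget:

```typescript
// Build the /files/:title URL for a given note title.
// encodeURIComponent keeps titles like "My Note" or "notes/2024" URL-safe,
// so they match a single :title path parameter on the server.
function fileUrl(title: string): string {
  return `/files/${encodeURIComponent(title)}`;
}

console.log(fileUrl("My Note"));    // "/files/My%20Note"
console.log(fileUrl("notes/2024")); // "/files/notes%2F2024"
```

Without the encoding, a title like "notes/2024" would be interpreted as an extra path segment and miss the /files/:title route entirely.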
Wrapping Up and Adding Sidebar Content
Now that we have all the endpoints defined, we can wrap up the project by updating App.tsx and AppSideBar.tsx. The code for AppSideBar can be found here. This will allow for all the available files to be displayed in the sidebar. The <SidebarContent> area is populated by mapping over our list of files and rendering a custom FileBar component (defined in FileBar.tsx) for each one. We pass the functions for file deletion and retrieval down from App.tsx as props, allowing each FileBar component to trigger actions on the correct file.
App.tsx serves as the main controller, defining the event handlers for retrieving file content and managing the active filename, as shown in the code here.
Conclusion
And with that, our project is complete. We now have a fully functional, AI-powered text editor built from the ground up.
Across this three-part series, we’ve built a React front-end, a Hono API, and connected it all to a Cloudflare D1 database and streaming AI bindings. My hope is that this journey has demystified how these services fit together and shown how powerful the Cloudflare stack is for building modern, full-stack applications.
Next Steps & Resources
Our application is built, but it’s not live on the web yet. The next logical step is deployment. For further study and to get your own project online, here are the official resources:
Project Repository: Get the complete, finished code for this application.
BHVR Docs: Learn how to deploy your application to the web.
Cloudflare Docs: The official documentation for Workers, D1, and AI.
Thanks for following along!
Carl’s Final Comments
Great writeup from Rahul explaining how he used multiple open-source, community-backed platforms to build a structured application for use within the Cloudflare platform. This journey has been an exercise in architecting a solution that can be easily retuned and redeployed to any environment with minor changes, really showcasing Rahul’s platform-agnostic approach.
To simplify this for a production product, many organisations might choose a more platform-specific route, utilising something like static pages and direct use of Cloudflare Workers for a lighter-weight trade-off, at the expense of being locked into the Cloudflare platform.
Rahul’s path keeps the option of extending across environments, even allowing multiple AI modules to be integrated easily, not just the Cloudflare-supplied resources. Finally, the quality of an AI application written this way can really depend on the prompting it uses. This example shows how to build a strong, platform-agnostic back end that gives a solid foundation for all the AI prompting a project like this would benefit from.