---
title: "How Marketers Can Build AI Workflows That Produce Reliable and Consistent Results"
serpTitle: "How Marketers Can Build AI Workflows"
description: "Ann Stanley talks about building reliable AI workflows with better context, RAG, and tools like Claude to ensure consistent marketing results."
date: "2026-04-01"
url: "https://www.optmyzr.com/ppctownhall/how-to-get-reliable-output-from-ai/"
---

# How Marketers Can Build AI Workflows That Produce Reliable and Consistent Results

> PPC Town Hall 125

Ann Stanley talks about building reliable AI workflows with better context, RAG, and tools like Claude to ensure consistent marketing results.

**Published:** April 1, 2026

**Watch:** [YouTube Video](https://www.youtube.com/watch?v=yAGxJTvj6Ho)

**Apple Podcasts:** [Listen](https://podcasts.apple.com/no/podcast/how-marketers-can-build-ai-workflows-that-produce-reliable/id1508399985?i=1000758690987&l=nb)
**Spotify:** [Listen](https://open.spotify.com/episode/5aTFxFkFpFqTkG95gomiBG)

---

## Episode Description

> *“Better quality humans produce better quality sandwiches.”*

That's Ann Stanley's take-home message from years of building AI workflows for one of the UK's original digital agencies. Frederick Vallaeys, CEO & Co-Founder of Optmyzr, sits down with Ann to unpack that metaphor and explore why AI is just the filling in a process that still requires human expertise to deliver reliable, consistent results.

The conversation breaks down why AI outputs often feel unpredictable and how to fix that with better structure, context, and systems like RAG (Retrieval-Augmented Generation). Ann explains how tools like n8n, Claude, and custom AI agents can help teams turn repetitive marketing tasks into scalable workflows, without sacrificing quality or control.

If you're using AI in marketing but struggling with inconsistent outputs or one-off results, this episode will help you build workflows that are more reliable, repeatable, and practical for real-world use.

Here's what you'll learn:

* Why AI outputs feel inconsistent (and how to fix it with better prompts)
* What n8n is and why structured workflows beat ad-hoc prompting
* How RAG makes AI stop hallucinating about your business
* Why AI increases capacity instead of reducing workload
* How Claude Code acts as an execution layer for everything
* The real risks: memory compaction, error propagation, and version control
* When automation is worth building (and when it's not)
* How to get teams to actually adopt AI (and why resistance makes sense)
* What AI means for agencies competing on expertise instead of execution
* The human-AI sandwich: why quality control still requires humans

---

## Episode Takeaways

Ann Stanley doesn't approach AI adoption the way most marketers do. With 23 years in digital marketing and a background in medical research and clinical trials, she brings scientific rigor to marketing. She's the founder and CEO of Anicca Digital, one of the UK's original digital agencies. Over the past year, she's moved from experimenting with AI tools to building production-ready workflows that her 20-person team actually uses.

Fred opened the conversation by asking about n8n, the workflow automation tool Ann spent seven months mastering in 2025. It's not cutting-edge anymore—Claude Code and other tools have leapfrogged it—but the principles Ann learned building n8n workflows still apply to everything she builds today.

And those principles matter because they answer the question every agency struggles with: how do you make AI outputs reliable instead of random?

The conversation moves through the practical mechanics of building AI systems that work: what tools to use, how to structure prompts, why context matters more than cleverness, and when automation is worth the investment. But underneath the technical details runs a more important thread: AI isn't replacing expertise, but amplifying it. The people who understand what needs to be built and why will thrive the most.

### Why AI outputs feel inconsistent (and how to fix it with better prompts)

The first real question Fred asked cut straight to the problem agencies face every day: How do you make AI predictable when it's fundamentally a probabilistic system?

> Ann's answer reframes the entire problem. "An LLM is just a mathematical model that takes a load of words and brings it back based on statistical probability," she explained.

The AI doesn't fail because it's broken. It fails because you haven't provided enough information for it to succeed.

She uses a deceptively simple example: the word "apple." Type that into an AI, and it has no idea whether you mean Steve Jobs and iPhones or apple pie and orchards. Both are valid. But the AI can't know which one you mean unless you provide context.

She further explains this with a cooking analogy. If you ask, "What can I cook for a family of four? I'm vegetarian. I have these ingredients," you'll get suggestions. But if you give those same ingredients to an Italian chef and a French chef, they'll produce completely different meals. The ingredients are identical. The output depends on expertise, training, and context.

> "When you create an AI agent in something like a workflow, you don't just give it the instructions of the job," Ann said. "You give it all that background, examples, and the knowledge behind it. So you need a couple of things. You need the instructions of the moment. You need the user instructions, and the RAG or the memory, the brain behind it."

That's the core principle underlying everything else in this conversation. AI doesn't magically know your business, your clients, or your standards. You have to provide that context explicitly. The more context you provide, the more predictable and useful the output becomes.

For agencies, this means building systems that enforce context. Don't let team members just type random prompts into ChatGPT. Build Custom GPTs or Claude Projects with brand guidelines, writing samples, and strategic guardrails already baked in. Make it impossible to produce generic output by providing the complete context.
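In code, "context baked in" just means prepending the client's reference material to every task before it reaches the model. Here's a minimal Python sketch of that idea; the client fields and sample text are hypothetical, not from the episode:

```python
# Sketch: enforcing context the way a Custom GPT or Claude Project does,
# by always combining the client's reference material with the task.
def build_client_prompt(client: dict, task: str) -> str:
    """Prepend brand guidelines, writing samples, and guardrails to the task."""
    sections = [
        ("Brand guidelines", client["guidelines"]),
        ("Writing samples", client["samples"]),
        ("Guardrails (never do this)", client["guardrails"]),
    ]
    context = "\n\n".join(f"## {title}\n{body}" for title, body in sections)
    return f"{context}\n\n## Task\n{task}"

prompt = build_client_prompt(
    {
        "guidelines": "British English, friendly but expert tone.",
        "samples": "Example intro: 'Paid media moves fast...'",
        "guardrails": "No superlatives, no invented statistics.",
    },
    "Write a 150-word LinkedIn post about our new audit service.",
)
print(prompt.splitlines()[0])  # prints: ## Brand guidelines
```

Because the context is assembled by code rather than typed by hand, a team member literally cannot send the task without it.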

### What n8n is and why structured workflows beat ad-hoc prompting

Fred asked Ann to explain n8n for people unfamiliar with it. And here’s what she said:

> "n8n is like a Miro board, which has a load of nodes that do different things, different functions," Ann explained. "What this does is it gives you access to loads of APIs. It gives you AI brains in there so it can make decisions."

The key advantage isn't the visual interface or the node-based system, but the predictability. As long as you structure your prompts correctly, you get consistent results every time. That's critical for agencies promising clients a certain level of quality.

Ann and her team built a platform called Secret Agents to solve a common problem: most people don't know how to prompt AI properly.

> "The advantage of using a form at the beginning of an AI agent like n8n is that you can collect the right information and get the user to enter the information that they should be writing when they do a prompt," she said. "And then you take that, and then that gets you a much better output."

The form forces structure. Instead of a vague "write me a blog," the form asks: Who's the target audience? What tone should we use? What examples should we follow? What should we avoid? That structured input produces structured, reliable output.
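The form-first pattern is easy to sketch: validate the required fields before any prompt is built, so a vague request never reaches the model. The field names below are illustrative, not Secret Agents' actual schema:

```python
# Sketch: a form in front of an AI agent, validating input before prompting.
REQUIRED_FIELDS = ["topic", "audience", "tone", "examples_to_follow", "things_to_avoid"]

def form_to_prompt(form: dict) -> str:
    """Reject incomplete forms; otherwise build a fully specified prompt."""
    missing = [f for f in REQUIRED_FIELDS if not form.get(f)]
    if missing:
        raise ValueError(f"Form incomplete, still need: {', '.join(missing)}")
    return (
        f"Write a blog post about {form['topic']} for {form['audience']}.\n"
        f"Tone: {form['tone']}\n"
        f"Follow these examples: {form['examples_to_follow']}\n"
        f"Avoid: {form['things_to_avoid']}"
    )

filled = form_to_prompt({
    "topic": "AI workflows for PPC teams",
    "audience": "agency account managers",
    "tone": "practical, no hype",
    "examples_to_follow": "our last two roundup posts",
    "things_to_avoid": "vendor jargon",
})
```

The validation step is the whole point: "write me a blog" fails loudly instead of producing generic output.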

The other major advantage is that n8n works via APIs, which sidesteps the tight token limits of the chat interfaces. You can process far larger amounts of data without hitting the walls you'd encounter in ChatGPT or Claude's chat window. And workflows can run in the background without manual triggers.

Ann's team built a fully automated blog workflow that scrapes 25 websites every week, processes the content, writes a roundup, and uploads it to WordPress. Zero human intervention is required once it's built. That level of automation only works because the workflow enforces structure at every step.
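The last step of a workflow like that can be sketched against the WordPress REST API. The `/wp-json/wp/v2/posts` endpoint is WordPress's standard one; the site URL and credentials are placeholders, and the actual HTTP call is left commented out here:

```python
# Sketch: packaging a weekly roundup as a WordPress REST API payload.
import json

def wordpress_post_payload(title: str, html_body: str, status: str = "draft") -> dict:
    """Build the JSON body WordPress expects for creating a post."""
    return {"title": title, "content": html_body, "status": status}

payload = wordpress_post_payload(
    "Weekly PPC Roundup", "<p>Top stories from 25 sources this week...</p>"
)
body = json.dumps(payload)

# To actually publish, a workflow would authenticate with an application
# password and POST the payload (placeholders below):
# import requests
# requests.post("https://example.com/wp-json/wp/v2/posts",
#               auth=("bot-user", "app-password"), json=payload)
```

Defaulting `status` to `"draft"` is a deliberately cautious choice: a human can still review before anything goes live.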

### How RAG makes AI stop hallucinating about your business

RAG, short for Retrieval-Augmented Generation, exists because of how standard LLMs are built: they're trained on massive amounts of text from the internet. They know plenty in general, but they don't know anything specific about your business, your clients, your brand voice, or your strategic priorities unless you tell them.

RAG solves this by giving the AI a knowledge base it can search and retrieve information from when generating responses. Instead of relying purely on training data, the AI pulls from documents, guidelines, and examples you've provided.
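The retrieval step can be shown in a few lines. Production RAG systems use embeddings and a vector store; this toy version substitutes plain word overlap so the mechanism is visible, and the documents are invented examples:

```python
# Toy RAG retrieval: score stored documents against the question and
# prepend the best matches to the prompt. Word overlap stands in for
# the embedding similarity a real system would use.
def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    q_words = set(question.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

docs = [
    "Client A brand voice: playful, short sentences, UK spelling.",
    "Client A target audience: in-house ecommerce marketers.",
    "Office recycling policy: bins are emptied on Fridays.",
]
context = retrieve("What is Client A brand voice?", docs)
prompt = "Context:\n" + "\n".join(context) + "\n\nAnswer using only the context above."
```

The key property is the last line: the model is told to answer from retrieved context, not from its general training data, which is what stops it hallucinating about your business.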

Ann explained her approach through what she calls the AI Adoption Ladder. Most people start with the free version of ChatGPT, and 80% of the 4,000 marketers she spoke to last year were still using it. Most hadn't tried Claude, and hardly any of them knew what an AI agent was.

The next step up, Ann advised, is a Custom GPT or Claude Project.

> "When you create a project or a custom GPT, you provide a recipe, which is your prompt, but you also provide examples of what good looks like," Ann said.

For agencies managing multiple clients, this structure is essential. Client A gets their brand guidelines, writing samples, and industry knowledge loaded into their project. Client B gets completely different materials. The AI then writes in each client's voice because you've given it the reference library to pull from.

The advantage of Claude Projects over Custom GPTs, according to Ann, is memory persistence within a project.

> "You can work in Claude and have a whole project around that one client, and it will remember everything and those conversations," Ann explained. "And that really is the starting point to all the really exciting stuff that's come out in the last year."

### Why AI increases capacity instead of reducing workload

Fred laughed when Ann warned about sleep deprivation, because he's living the same reality. The promise of AI is that it does all this work for you, freeing up your time. The actual experience is completely different.

> "You wait until you start using Claude Code," Ann said. "You'll never get any sleep again."

Here's why: AI doesn't reduce your workload. It expands what's possible. You get so excited about the capabilities that you start building five times more than you used to. So you stay up until 2 or 3 in the morning because you can't leave something broken.

> "This idea that you have more time to yourself is just rubbish," Ann said bluntly. "You just do five times more work."

Fred agreed completely. He's up past midnight regularly now, not because he has to be, but because he's investing in building processes that will 10x his team's capabilities. Individual team members might use AI to finish tasks faster, but they're not the ones staying up late. That's the founders and technical leaders who see the potential and can't stop exploring it.

Ann shared two productivity shifts that changed everything for her. First: "The biggest change in productivity is when you stop typing, and you start talking." Voice commands dramatically speed up iteration cycles because you can provide more detail, more nuance, more context when speaking than when typing.

> Second: run multiple things in parallel. "You have to have probably at least two things on the go at once because if you just sit there for one thing to happen, you end up wasting a lot of time," she explained. "One of the biggest skills is being able to flip your brain between two tasks."

That's the current state of AI tools. They're fast but not instant, so you queue up multiple tasks, switch between them, and keep moving forward instead of sitting idle.

### How Claude Code acts as an execution layer for everything

Ann walked through the evolution from basic prompting to Claude Projects to Claude Code, and why each step matters for marketers trying to build reliable systems.

Claude Projects introduced "skills," which are saved workflows for specific tasks. If you create a proposal template with specific formatting and structure, you save it as a skill. Next time you need a proposal, you invoke that skill, and Claude generates it following the template.
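Conceptually, a skill is just a named, saved template that can be invoked by name later. A toy sketch of that idea, with made-up names and fields:

```python
# Sketch: a "skill" as a saved, reusable template invoked by name.
SKILLS: dict[str, str] = {}

def save_skill(name: str, template: str) -> None:
    """Store a template under a name so it can be reused later."""
    SKILLS[name] = template

def invoke_skill(name: str, **fields) -> str:
    """Fill the saved template with this invocation's details."""
    return SKILLS[name].format(**fields)

save_skill(
    "proposal",
    "# Proposal for {client}\n## Scope\n{scope}\n## Price\n{price}",
)
doc = invoke_skill("proposal", client="Acme", scope="Paid search audit", price="£2,500")
```

The structure and formatting live in the template, so every invocation follows the house style; only the per-client details change.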

Then Claude launched Co-work, which makes these capabilities accessible without requiring technical setup.

> "You can use connectors to integrate it to your Drive, your Gmail, etc.," Ann explained. "So suddenly it's talking to your files on your laptop. It's talking to other services."

The next level is Claude Code, which developers and technical people have embraced. You run it through tools like Cursor or Terminal, the "little black box just like something out of The Matrix" that IT people use to fix your laptop, as Ann described it.

But the power is worth the learning curve.

> "You then talk to Claude Code and have a conversation with it and ask it to bring up some of these skills that you might have created or go and search on the internet to find them," Ann said. "And you basically end up with a big library of skills that you've developed."

Those skills could be order confirmations, carousels, blog posts, posting to WordPress (because you've given it API keys), and sending emails.

> "Suddenly, instead of going into Microsoft or going into other platforms, you can do everything within Claude Code," she explained.

### The real risks: memory compaction, error propagation, and version control

Fred brought up something critical that doesn't get discussed enough: the risks of relying on AI systems that compact memory and make decisions autonomously.

Ann recounted the story of a Meta employee whose emails were deleted by Claude. She had explicitly instructed it never to delete her emails. But at some point, Claude's context memory filled up and compacted. During that compaction, the instruction was lost. Claude then thought it was fine to delete emails as part of a cleanup task. "She literally recorded all her emails being deleted in front of her eyes, saying 'Stop, stop, stop,' and it just ignored her," Ann said.

Fred clarified that this happened with Open Claude (now renamed), which has fewer safety features than Claude Code. But the principle matters: when you see memory compaction happening, that's a warning sign. You should verify what the AI still remembers from your original instructions.

Ann's solution is discipline and version control. "When I first start a new one, I always date it and I put the heading in. So when I've got multiple tabs open, I can see what one it is." She watches the context usage indicator in Cursor. At around 80%, she creates a handover document and saves everything to GitHub.

> "Throughout the day, if I get to a point where I've got some really good stuff, I will be saying 'Update my skill, save to memory.md,' which is like a memory file, and also save to GitHub," Ann explained. "And then I get a handover, and then I'll start a new chat."

The other risk is error propagation. Ann compared it to Excel before autosave existed.

> "You'd spend two hours building a spreadsheet, and if it crashed, you lost everything. It's very much like that at the moment," Ann said. "If it screws it up, it screws it up. So this is why I do a lot of versions, and I do a lot of saving so you've got checkpoints that you can go back to."

These are practical realities of working with AI systems right now. The tools are powerful but not foolproof. So you need discipline, backups, and version control to use them safely.
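Ann's checkpoint habit can be reduced to a small helper. The `memory.md` name follows her convention; everything else, including the note text, is illustrative, and the git step is ordinary command-line usage noted in a comment:

```python
# Sketch: append a dated handover note to a memory file before starting
# a fresh chat, creating a checkpoint you can restore from.
import os
import tempfile
from datetime import date
from pathlib import Path

def save_checkpoint(note: str, memory_file: str = "memory.md") -> str:
    """Append a dated handover entry to the memory file and return it."""
    entry = f"\n## Handover {date.today().isoformat()}\n{note}\n"
    path = Path(memory_file)
    existing = path.read_text() if path.exists() else "# Memory"
    path.write_text(existing + entry)
    return entry

# Demo writes to a temp location rather than the working directory.
tmp = os.path.join(tempfile.gettempdir(), "memory_demo.md")
entry = save_checkpoint("GEO report skill updated; resume at step 4.", tmp)
# Then checkpoint it: git add memory.md && git commit -m "checkpoint" && git push
```

Dated entries give you exactly what Ann describes: a trail of restore points, so one bad compaction or runaway edit never costs the whole day's work.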

### When automation is worth building (and when it's not)

This is the practical calculation every agency needs to make: when should you invest weeks building an AI workflow, and when should you just do it manually?

> Ann's rule is simple: frequency justifies investment. "Unless it's something you do regularly, it's very difficult to justify the amount of time," she said. "If you're only doing one audit a month or one audit every three months, it's not worth it. But if you're doing one audit a day, then it's worth it."

Blogging is the classic example. Most agencies produce blogs regularly, like weekly or even daily. That volume justifies automation. But if you only write one proposal every few months, spending a week building proposal automation doesn't make financial sense.
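The frequency rule is simple arithmetic: automation pays back once hours saved per run, times runs per month, overtakes the build cost. The numbers below are illustrative, not figures from the episode:

```python
# Sketch: break-even point for building an automation.
def months_to_break_even(build_hours: float, hours_saved_per_run: float,
                         runs_per_month: float) -> float:
    """Months until cumulative time saved equals the time spent building."""
    return build_hours / (hours_saved_per_run * runs_per_month)

# A quarterly 50-page audit vs. a weekly blog roundup, same 40-hour build:
audit = months_to_break_even(build_hours=40, hours_saved_per_run=6, runs_per_month=1/3)
blog = months_to_break_even(build_hours=40, hours_saved_per_run=3, runs_per_month=4)
print(round(audit, 1), round(blog, 1))  # prints: 20.0 3.3
```

With these assumed numbers, the quarterly audit takes 20 months to pay for itself while the weekly roundup pays back in about three, which is exactly Ann's point about frequency.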

Ann shared a recent example. Her team tried to automate a 50-page tech SEO audit. After significant effort, they couldn't get it working reliably. "In the end, what we did was we took his normal way of doing it and then just made it look nice," she said.

They had better success with Google Ads audits because they'd already solved the API access challenges. But even that required significant upfront work.

> "You have to go through a lot of hoops to get yourself registered to be able to get the data out of the API, believe it or not. You need to get the development token," Ann mentioned.

Ann's current project is GEO reports for outreach. Her email outreach generates about 10 opportunities per week.

> "If I had to do them manually, it would take me all week to do them. It's still a nightmare because it still doesn't always do what it's told," she admitted. "But the good thing is they look really good."

She's linked it up with Playwright to grab screenshots and even set up accounts automatically. "If I had to do 10 of those, as I said, it would take me all week. It probably takes me a day to do 10 because they can be problematic."

### How to get teams to actually adopt AI (and why resistance makes sense)

Fred raised a challenge every agency leader faces: How do you get team members to invest time learning AI when they're already stretched thin doing their actual jobs?

The tension is real. If someone's job is producing 10 blog posts per month, and you tell them to spend a week learning automation, they see a week with zero output. They don't necessarily think about how, after that week, they could produce 100 posts per month instead of 10.

Ann shared a blunt example. A year ago, her team offered AI training. "We would give training to the team, and they really weren't interested," she said. But then they started seeing what was being produced with Claude Code. "I think they now see this is an assistant I can use to do all sorts of stuff. Suddenly, they're desperate for us to bring them up to speed."

> The tipping point is when benefits clearly outweigh pain. "Once the benefit outweighs the pain, then the team will come on board, and then you'll bring them along with you," Ann explained. "But the founders and the pioneers at the front are usually six months ahead of the rest of the team."

That six-month gap matters. The people building the systems need to be comfortable with uncertainty, failure, and iteration. The rest of the team can adopt once the systems are proven and workflows are smooth.

Ann's approach is to start with directors and technical people willing to invest the time and build something that works to demonstrate clear value. Then bring the team along once the path is proven.

> "You've got to have at least a couple of you that are prepared to invest the time," she said. "And I think that has to be the directors at this stage, or you've got to have somebody who's geeky enough and got the tenacity to carry on doing it."

### What AI means for agencies competing on expertise instead of execution

Near the end of the conversation, Ann shifted to the bigger question: what does all of this mean for agencies as a business model?

> "The thing is, it's going to get to a point where it's probably only 1 or 2% of people are using Claude Code at the moment," she said. "But at Agency Hackers, 30% of the people in the audience were using it. Whereas a year ago, when I asked them whether they had heard of Claude or AI agents, it was less than 20%. So it's moving really quickly."

> So technology is no longer a barrier. "It's only going to be your imagination and your ability to understand it and also to market yourself as an agency or as a practitioner to say that you're adopting these tools," Ann explained. "Because if you don't, the clients are going to be there before you."

That's the existential risk. Clients will see these tools, assume they're easy, and think they can do it themselves. They won't understand that expertise still matters and that knowing what to build and why to build it requires years of accumulated knowledge.

> "All that knowledge that means that you know what to build and you know how to build it because you've done it and been there and bought the t-shirt, that experience is worth something," Ann said. Her entire team has 10, 15, 20 years of experience. "That's what becomes valuable rather than the day-to-day stuff."

### The human-AI sandwich: why quality control still requires humans

As they wrapped up, Ann offered a take-home message: the human-AI sandwich.

> "You need humans at the beginning of the process who have the idea, have the inspiration, and they can give the right instructions to the AI to produce something," she explained. "And then you have the human on the other side that takes that content, knows whether it's good or not, and adapts it, edits it, and makes it even better."

The sandwich: human input at the top, AI execution in the middle, human validation at the bottom.

> "The better quality humans are going to produce the better quality sandwiches because they know their stuff," Ann said. That's the competitive advantage. Not the AI. The people using it.

Both Ann and Fred agreed on the fundamental shift happening in how we build things. Fred put it this way:

> "We no longer should write the code on the page as the people. We should write what it is we're trying to do with this page, and what's the goal. And six months from now, Claude Code may have a better version, and it should just look at those instructions and say, 'Oh, well, Fred, here's what you were trying to do, and here's how we did it six months ago. But let's just rebuild the whole thing to do it better and still meet your goal.'"

So instead of writing code, write intent and let AI handle the implementation. When tools improve, regenerate the implementation without rewriting the intent.

> Ann pointed out the implication for agencies: "The technology isn't going to be the barrier anymore. It's only going to be your imagination and your ability to understand it."

The people who thrive won't be the ones who execute fastest. They'll be the ones who know what needs to be built, why it matters, and how to validate results. That's the expertise AI can't replace.

As Ann reminded everyone, better quality humans produce better quality sandwiches. The AI is just the filling.

---

## Episode Transcript

**Frederick Vallaeys:** Hello, and welcome to another episode of PPC Town Hall. My name is Fred Vallaeys. I'm your host. I'm also CEO and co-founder at Optmyzr, a PPC management tool.

So for our guest today, she's been in digital marketing since before there was a Google Ads interface to even complain about. And she's still running campaigns and training teams on the latest changes. So Ann Stanley, she's the founder and the CEO of Anicca Digital. It's one of the UK's original digital agencies, and it's based in Leicester.

She actually came to marketing from a career in science, medical research, and clinical trials specifically. And you can tell because she brings a very scientific rigor and discipline and the researcher's discipline to everything from campaign structures to AI adoption and management. So what's been catching my eye lately is the work that she and her team have been doing around AI in marketing, and she publishes a weekly roundup that's definitely a must-read in this space.

And then there was her conference, Epic 25, at the National Space Center. That was a great event where she also did a keynote on some of the latest cutting-edge AI tools. So with all of that, really can't wait to hear what she brings to today's episode. I know she has a lot to share and some stuff that people really want to hear about. So with that, let's bring in Ann and let's get rolling with this episode of PPC Town Hall.

**Ann Stanley:** Hi Fred, thank you so much for inviting me.

**Frederick Vallaeys:** Thanks for coming on the show, and it's great to see you again. I know we go back quite a long way, don't we? Because I think originally we met over 10 years ago at PPC Hero when we were talking about shopping ads and how they just hit the world, and you were going to—you ended up introducing them into Optmyzr. So we've got a checkered history.

**Ann Stanley:** Exactly. Yeah. You were an inspiration at the time. You were the one on the panel, the expert, and you gave me some ideas, and I was like, maybe I can code that up and make people's lives a little bit easier, so they could use the Ann methodology, but using a little bit of software then.

**Frederick Vallaeys:** So yeah, since we last spoke, I mean, that was many, many years ago, right? So for people who haven't had the pleasure of being at one of your conferences or working with Anicca Digital, give people the 30-second view on who you are and what you've been up to.

**Ann Stanley:** Okay. So I've been in digital marketing for 23 years, which is quite scary. I've always been interested in paid media, particularly Google Ads. It was always my main area, but anything that's numeric, so analytics, any paid media, etc.

And I've got a team of just under 20 who are really experienced. They speak a lot at events as well. And we do shopping, search, social, strategy, and we do quite a lot of training.

We've always been quite altruistic. So we've been running webinars every week since COVID hit six years ago, believe it or not. We've run annual conferences for 10 years, and we're just launching a brand new endeavor called the Thursday AI Club, which I'm sure we'll get to talk about, which is fortnightly events to bring everybody up to speed with anything to do with AI in marketing and management.

**Frederick Vallaeys:** Great. Yeah. So much there, right? But let's go back to the early days of AI, and I know you did a good bit of work on n8n about a year ago. I know it's not the state-of-the-art anymore, but even today some people might never have tried n8n and be curious what that is.

So tell us a little bit about the work you did there and if that's still something people should check into.

**Ann Stanley:** Yeah. So January last year, I started on a seven-month course out of India every Saturday and Sunday afternoon for seven months. And one of the first things we picked up was n8n.

n8n is like a Miro board which has a load of nodes which do different things, different functions. Most marketers have used things like Zapier or created workflows in HubSpot or Go HighLevel or one of those sorts of automations. But what this does is it gives you access to loads of APIs. It gives you AI brains in there so it can make decisions.

I actually did another blog on that at the weekend, sort of a basic guide on how to use it. And what we were doing was we were building single-task automations like do me my meeting notes, write me a blog. And the advantage of n8n is that it's predictable, and as long as you can learn how to prompt properly, you can get really good results, consistent results.

And what we did was we built a platform called Secret Agents because a lot of people struggle to know how to prompt properly. And so the advantage of using a form at the beginning of an AI agent like n8n is that you can collect the right information and get the user to enter the information that they should be writing when they do a prompt. And then you take that, and then that gets you a much better output.

And the other advantage, of course, is if you go through a chat like ChatGPT or Claude or whatever your favorite chat is, you've got quite limitations on the token limit. So by using something like n8n that goes through the API, you get much better inputs, and consequently you get much better reliable outputs. So it's really good for doing repeatable, consistent things.

And the other thing about it is you can have it running in the background, and you don't need to trigger anything at all. So if an email comes in, or we build a blog every week where it scrapes 25 websites and then produces it and loads it up into WordPress, and that's fully automated. So there's still a space for—there's still definitely an important space, and n8n is by far the easiest platform to learn if that's what you want to do.

**Frederick Vallaeys:** Yeah, there's so much there that we need to unpack here a little bit. But one of the first things you said there was it's predictable. And I guess that's one of the challenges with AI, especially in agencies where I think you've made a commitment to your customer to do things a certain way, or as an agency, as a management team, you have a certain strategy that you promise you will deliver.

AI is not always predictable, right? It's not deterministic. So is n8n the only path into that predictability, or how else do you use your standard LLMs and sort of guarantee that it's doing what it needs to do?

**Ann Stanley:** Okay. So the problem is that an LLM is just a mathematical model that takes a load of words and brings it back based on statistical probability. And so I always, when I'm teaching this stuff, I always use the example of Apple.

Apple, you immediately either think of tech and Steve Jobs and iPads and iPhones, or you think of apple pie and apple trees. So but it's still the same word. So unless you give it enough context, it can get everything out of—you know, it brings back the wrong information.

So what we have to do when we prompt is we have to provide enough context and enough information to get the best possible answer. Otherwise, it will just hallucinate and will just give you rubbish. So the analogy that I use is cooking.

So when you create a prompt in an agent in n8n, what you do is you give it a lot more information than you would if you're just chatting in ChatGPT or Claude or whatever, or Gemini. So what you do is you have—if you can imagine that you were asking for a recipe for a family of four. What can I cook? I've got these—I'm a vegetarian. What ingredients? And it will come up with some examples.

But what you do is you have a user prompt which is your chef. And if you gave an Italian chef the same ingredients and a French chef the same ingredients again, they'd come up with something completely different. So what you do when you create an AI agent in something like a workflow is you don't just give it the instructions of the job. You give it all that background, examples, and the knowledge behind it.

So you need a couple of things. You need the instructions of the moment. You need the user instructions, and the RAG or the memory, the brain behind it. And then those principles are exactly the same as what we've taken into new technology this year because without that context and without that additional information, it'll just come out with rubbish.

So it's always better to reinforce an AI agent with information that you have. So as an agency, even if you didn't use something like n8n and you just used a custom GPT, as long as you create enough background information, enough guidelines, enough guardrails, you can get a more predictable answer which isn't going to be rubbish every time, right?

**Frederick Vallaeys:** And I think I heard you say RAG somewhere in there.

**Ann Stanley:** Yes.

**Frederick Vallaeys:** And I never can remember what it stands for.

**Ann Stanley:** Retrieval Augmented Generation.

**Frederick Vallaeys:** Exactly. Yeah. But explain a little bit what that means and why it's important and how it might fit into an agency workflow.

**Ann Stanley:** Okay. So let me go back a step. So we've got something called the AI Adoption Ladder. So most people start off just by using ChatGPT or something like that. And in fact, last year I spoke to 4,000 people, and 80% of them were just using free ChatGPT. They hadn't even bothered to buy a paid package.

Most of them hadn't tried Claude yet, which is far better for writing good-quality content. And hardly any of them knew what an AI agent was. Well, maybe 20% would know what an AI agent was.

So I see the next step is something like a custom GPT or a Claude Project. And the reason that's a big step up is that when you create a project or a custom GPT, you provide a recipe, which is your prompt, but you also provide examples of what good looks like. So for example, if you were writing a blog in my French and Italian chef example, that could be Client A and Client B.

And what you do for Client A is you provide the brand guidelines, you provide examples of what they've written before, you provide industry knowledge. So that becomes your brain or your RAG or your knowledge, whatever word you want to use. And then for Client B, you'd have a completely different sort of set of rules and examples.

And custom GPTs, which have been around for about two and a half years now, allow you to do that. The advantage of a Claude Project is that you can send one message after another, and it remembers what you've been talking about. So you can work in Claude and have a whole project around that one client, and it will remember everything in those conversations. That really is the starting point for all the really exciting stuff that's come out in the last year.

**Frederick Vallaeys:** Right. And that's interesting because if you think about GPT in the beginning, each conversation was its own sort of memory thread. And so you could say, okay, I'm going to have one conversation that just is about one client, and it'll start remembering the brand guidelines. But then they actually said, well, we'll give you memory across chats.

So now all of a sudden, as an agency, you run into the problem that now it's thinking Brand A actually uses Brand B's colors. And unless you tweak these settings, it gets you into trouble, right? And then how that works in OpenAI ChatGPT versus Claude versus Gemini, it's all different. And I think that's a little bit of the value that an agency needs to bring is understanding how do you split this out, how do you make sure you have the right context and memory.
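The cross-brand memory bleed Frederick describes is usually avoided by keeping each client's knowledge in its own silo and building the context from exactly one silo per request. A minimal sketch, with invented client data and structure:

```python
# Sketch: per-client context isolation, so Brand A's prompts never pick
# up Brand B's guidelines. The clients, fields, and colours are invented.

CLIENTS = {
    "client_a": {
        "guidelines": "Logo blue #0055AA; formal tone; US English.",
        "examples": ["Case study: enterprise SaaS onboarding."],
    },
    "client_b": {
        "guidelines": "Logo green #22AA55; playful tone; UK English.",
        "examples": ["Blog: 'Ten quick wins for small shops'."],
    },
}

def client_context(client_id: str) -> str:
    """Build the knowledge block for exactly one client."""
    profile = CLIENTS[client_id]
    examples = "\n".join(profile["examples"])
    return f"GUIDELINES:\n{profile['guidelines']}\n\nEXAMPLES:\n{examples}"

ctx = client_context("client_a")
```

Because the context is rebuilt per request from one client's record, Client B's colors can never leak into Client A's output, which is the failure shared chat memory introduces.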

**Ann Stanley:** Yeah, exactly. I've been using Claude Projects for over a year now, and the main reason I really like them is that it's almost like your own filing system. But all of these chats eventually run out of context window, and you've only got a certain number of chats before you get kicked out.

Particularly if you've got the $20 plan, the lowest tier. I was already on the $75 plan, the next one up, because I would just run out of space, and then you've got to start a new chat again. And I would use Claude a lot when I was building things. It's a bit of a circular thing, but if I was creating an n8n workflow, I can't code. I haven't got a clue.

So I would use Claude to help me build scripts and JSON code and all the things I needed. And because some of these workflows get quite long and complicated, you work iteratively with Claude on the answer over in n8n, and you'd run out of space. Suddenly you get to 10:00 at night, and it'll say, "Sorry, you can't do any more work until 1:00 in the morning."

Because the biggest thing about all this AI stuff is you don't get any sleep once you really get into it. So I just need to give everybody a bit of a health warning there. You wait until you start using Claude Code, and yeah, you'll never get any sleep again.

**Frederick Vallaeys:** Yeah, let's talk about that actually, right? I mean, the promise of AI is that it'll do all this work for us, so we'll have less work to do. But you're saying exactly what I'm experiencing. I'm up later than I've ever been. Well, maybe not ever, but it's been a while since I've been up past midnight, because I get so excited about these projects, and there are all these things I can do.

But part of it is also the frustration of you give it a command, and you wait a few minutes for it to do the work, and then you validate, and you go to the next step. So what's your life looking like in the world?

**Ann Stanley:** Well, I mean, it got much worse when I discovered Claude Code. So we have to blame Mike Rhodes and a couple of other colleagues for that because he's been sort of preaching about that for a while. So Claude Code came out in September, and Claude already had something called MCP, which is Model Context Protocol, which allows you to access your local PC. So that's been around for a long time.

And what that basically means is that you can access files on your laptop, you can initiate searches, and you can even control your Chrome browser so that it can do things for you. So suddenly, instead of just getting content back out of an AI, it can do things for you and complete tasks. I really started to use that heavily about two months ago.

And the biggest change in productivity is when you stop typing and you start talking.

So are you using Whisper to do your commands?

**Frederick Vallaeys:** No.

**Ann Stanley:** Okay. Well, that will massively change—

**Frederick Vallaeys:** I don't like Whisper.

**Ann Stanley:** Oh, okay. There are other products out there.

**Frederick Vallaeys:** Yeah, exactly. I mean, so yeah, I don't use Whisper as a tool, but I do issue voice commands pretty extensively.

**Ann Stanley:** Yeah. So that's the biggest thing that speeds you up. One of the things I found about anything to do with n8n, anything to do with AI, is I tend to have to have multiple screens, and you have to have probably at least two things on the go at once. Because if you just sit there for one thing to happen, you end up wasting a lot of time.

So you've got to be able to—one of the biggest skills is being able to flip your brain between two tasks. So I try not to have more than about two things going on at once because then you do get lost. So when you're doing one project, you'll probably have something else on the go. So you'll swap between them because you're absolutely right, you do have to wait for things to happen.

It is a lot faster now, particularly with the Claude Opus 4.6 update. That's much faster. And also it's got, I think, a million tokens of context, so you can do a lot more before it starts kicking you out. But unfortunately, none of this AI goes to sleep.

And so what happens is you start doing something, and this was the same with n8n to be honest, is you start doing something, you just get to a point, "Oh, this is really good," and you sort of save it, and you think, "Great." And then you try something else, and it breaks, and you can't go to sleep with something that's broken. And then before you know it, it's 2 or 3 o'clock in the morning.

And so this idea that you have more time to yourself is just rubbish. You just do five times more work.

**Frederick Vallaeys:** But I'm also… totally agreed. And I'm curious, so it's exactly like that for me. But the way that I think about it is I'm trying to build these processes and invest my time in building the AI process so that my whole team can benefit. I don't think the individuals on my team necessarily behave in the same way, right?

They're much more focused on I need to get some blog post out, or I need to get some case studies ready, or I need to build a certain piece of code for the software. AI is sort of in that, but I feel like I'm much more on the how far can I push AI to 10x my team in the future, and that's what also then takes me down these paths of being up at two o'clock in the morning.

**Ann Stanley:** Yeah, absolutely. So shall I explain what Claude Code is because there'll be some people here that won't know what Claude Code is, because I think that might be quite useful.

**Frederick Vallaeys:** Yeah, absolutely. Let's talk about Claude Code, Claude Co-work, maybe the new Dispatch feature they have.

**Ann Stanley:** Yeah. Well, let's go back a step to Claude Projects again, because one of the things we started to notice when we were using Claude Projects around September, October time is that it would say, "Oh, we're going to initiate the PowerPoint skill or the Word skill," or something like that. So it started to use this word "skills," which basically meant it was going to do a task.

Claude Projects was really good at creating artifacts, and the best type of artifact in any AI that's designed to create pages and code is HTML. So what I often do, and I think this is something Mike Rhodes talks about as well, if you're going to create something, particularly an image or a presentation, is create it in HTML first and then convert it into PowerPoint or Word later on, because you can get it pixel perfect.

So I'm creating things like carousels, and I'm creating a lot of documents. Claude Skills was already available in normal Claude, and what it meant was that if I was in a project and started to do something, say creating a proposal or an audit document, and it had a lot of brilliant features, I would then save it as a skill. And it's all written in markdown, which is a type of plain text; everything's done in text.

The trouble with text is that, if you remember the good old days when we used to send plain-text emails, we'd use capitals, underscores, and equals signs for headings and things like that. Well, markdown just uses hashtags and asterisks to make it easier to read. Anyway, you create a markdown file, which is basically a set of instructions, and then you have all your templates, your styling, and a configuration file, and this is saved as a skill.
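As a rough illustration, the skill Ann describes, markdown instructions alongside templates and a configuration file, might be laid out like this. The file name, frontmatter fields, and steps are all invented for the example and are not Anthropic's exact format:

```markdown
---
name: carousel-builder
description: Build a branded carousel from a blog post, using the agency templates.
---

# Instructions

1. Read the brand colours and fonts from `config.json`.
2. Fill `template.html` with one slide per section heading of the source post.
3. Export the finished HTML for conversion to PDF or PowerPoint.
```

The point of packaging it this way is reuse: the instructions, templates, and styling travel together, so the next run starts from the same recipe instead of a fresh prompt.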

So you can do that now, and it's been available for a while. They then offered something called Co-work; I'm not sure chronologically when this came. If you've got a paid version of Claude, a big banner will flash up and say, "Would you like to use Co-work?" And Co-work is an easy-to-use interface that does a lot of the same things we've been talking about, but without having to install lots of extra stuff, which I'll come on to.

You can use connectors to integrate it with your Drive, your Gmail, and so on. So suddenly it's talking to the files on your laptop, and it's talking to other services, so it allows you to do a lot of stuff. Co-work takes you to the next level.

And then the next level after that is something which mainly developers have picked up, or techies and geeks like us who understand enough. I'm not a coder, by the way; I've never been. But I understand enough. And we use tools like Cursor or the Terminal to run Claude Code in.

So let me explain what that is. If you're non-technical altogether, the time you would have seen a terminal is when your IT person comes around to fix your laptop: they get this little black box, just like something out of The Matrix, and they type instructions, and it talks to your laptop and does stuff. So that's one way of running Claude Code, within that window.

But actually, there are much better and easier ways, like Visual Studio Code and Cursor, which basically provide a nice interface for you to run Claude Code in. You then talk to Claude Code, have a conversation with it, and ask it to bring up some of the skills you might have created or saved, or to go and search on the internet to find them. You basically end up with a big library of skills that you've developed.

It might be a skill for an order confirmation, a carousel, writing a blog, posting your blog to WordPress because you've provided the API keys for it to talk to WordPress, or sending an email because you've connected it to your email. So suddenly, instead of going into Microsoft or other platforms, you can do everything within Claude Code. And like you, you spend all your time getting it right the first time, and once you've got it right, you've got a reproducible skill that you can use over and over again.

So it just takes you into a new world of possibilities, where you can produce stuff you just couldn't have produced before without a designer or without talking to lots of other people. I think it's revolutionary. If you think of the way you reacted when you first saw ChatGPT, well, it's that with bells on. It's just another level again in what you can do.

**Frederick Vallaeys:** Yeah. Oh my God. So I just installed Claude Dispatch on one of my extra computers so that I can basically text from my phone to that agent, to Claude Co-work, and it starts doing stuff. It does need a computer to be on so that it has access to all the files on it. It can access the internet, it can send emails on my behalf, but now I control it from my phone.

And it's more secure than the open version of Claude, which some people may have heard about. That one had no restrictions of any kind: basically, do anything you want with my computer.

**Ann Stanley:** Yeah, I think they renamed it Open Claude, didn't they? And wasn't there a woman working at Meta who literally recorded all her emails being deleted in front of her eyes, saying, "Stop, stop, stop," and it just ignored her and just carried on—

**Frederick Vallaeys:** Yeah. And actually, that's the real story. And you did allude to something else related to this. You said that when you're using Claude, at some point the context memory runs out, and you have to start a new conversation, and that was frustrating, right? Because now you have two or three conversations that build on the same thing, and if you wanted to go back and see what you told it, you'd have to remember which of those three threads about the same task it was.

So they fixed that, but now what they do is they quote-unquote "compact" the memory. If you see this happening in Claude, be very, very careful, because it means Claude tries to get rid of the stuff it thinks doesn't matter. And this is exactly what happened to this woman.

So at some point, the memory had filled up, and even though she had put in an explicit instruction that was like never delete my emails, that was compacted. And in the compacting, it was lost. And so then Claude thought that, "Oh, it's fine to delete emails for whatever needs to be done." And that's what got her in trouble, right?

And so when you see the compacting of memory, that is a warning sign where you might want to step in and be like, "Hey Claude, what do you remember from my instructions?" Because clearly some of it is gone.

**Ann Stanley:** But we do need to clarify that we're talking about Open Claude and the sort of the risky side of things because Claude Code wouldn't do that.

**Frederick Vallaeys:** It would. Oh, okay. Well, you should have built that in. So—

**Ann Stanley:** Okay, so there are a couple of ways around that problem. She was definitely using Open Claude, not Claude Code, because Code has a lot more safety features built in. The reason I run Cursor is twofold. I have multiple tabs or projects open, and when I first start a new one, I always date it and put the heading in, so when I've got multiple tabs, I can see which one is which.

And when you run it in Cursor, it also gives you a percentage of how far through the context you are. When you get to about 80%, you need to create what's called a handover document. I also save everything to GitHub. So throughout the day, if I get to a point where I've got some really good stuff, I'll say, "Update my skill, save to memory.md," which is like a memory file, "and save to GitHub." Then I get a handover, and I'll start a new chat.
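The handover habit can be sketched as a tiny helper that appends a dated summary to a memory file so a fresh chat can pick up where the last one stopped. The file name, function, and layout here are illustrative, not Claude Code's own format:

```python
# Sketch: append a dated "handover" section to memory.md. In real use the
# directory would be the project folder that also gets committed to GitHub
# (git add -A && git commit), as Ann describes; names here are invented.
import tempfile
from datetime import date
from pathlib import Path

def write_handover(notes: str, directory: str) -> Path:
    """Append a dated handover section to memory.md and return its path."""
    memory = Path(directory) / "memory.md"
    with memory.open("a", encoding="utf-8") as f:
        f.write(f"\n## Handover {date.today().isoformat()}\n{notes}\n")
    return memory

# Demo in a temporary directory so nothing in the real project is touched.
workdir = tempfile.mkdtemp()
path = write_handover("Carousel skill working; PDF export still broken.", workdir)
```

Appending rather than overwriting means the file accumulates a running history of checkpoints, which is what makes restarting a chat cheap.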

So there's quite a lot of discipline and housekeeping that you need to learn to get the best out of it. For example, I've got certain cron jobs that happen at 7 o'clock every morning to make sure everything's saved.

**Ann Stanley:** Well, a cron job (cron is a scheduling term, cron as in time) just means you can schedule tasks now. That was quite a recent update, so you can actually schedule jobs to be done. On Saturday, I've got a load of jobs that I need to do for content for the week, but I also have certain tasks that happen at certain times of day.
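For reference, on a Unix machine a scheduled 7 a.m. save like the one described would be a crontab entry along these lines (the script path is invented for the example):

```
# minute hour day-of-month month day-of-week  command
0 7 * * * /home/ann/scripts/save-work-to-github.sh
```

The five fields set the schedule; `0 7 * * *` fires at 07:00 every day, so the save happens even if you forgot to checkpoint the night before.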

So there are quite a lot of safety features and things you can build in to stop you screwing it up, basically. But there are quite a lot of tips like that to get your head around if you're going to spend a lot of time in it. The first annoying thing when you first use Claude Code is that you have to say yes to everything, and you don't want to do that.

So you can amend the settings to avoid that, but you need to know what you're doing. I did write a bit of a guide on how to use it, so it might be quite useful for people to go and have a look on the Anicca blog. I've written three or four guides on this subject over the last few weeks, as I've been trying to learn how to use it but also to pass some of that knowledge on.

For me, one of the biggest things was switching from the default dark theme to a light theme, because it's not so jarring on the eyes, and it feels like I'm in a normal SaaS product. The other reason I really like Cursor is that a lot of the time, particularly when you're doing documents, you need to show it what's wrong. I can grab screenshots and use them in Cursor, but I don't think you can in the Terminal. So that makes a big difference as well.

**Frederick Vallaeys:** So yeah, I really like the tip you gave about creating the handover document when you see that the memory is almost full. And really, this points to the fact that all of these things look easy on the surface, but there's a lot of detail that goes into making sure they run safely. So for that reason, anyone listening today who's interested in this should definitely check out Ann's courses and everything she's written. It's like they say in PPC: PPC is not easy.

It's also not tremendously complex, but you do learn from the battle scars of having a mistaken keyword that conflicted with something else, or changing a budget without realizing it would clash with another setting, right? These are the ways you learn how to actually run things, and it's the same with all of this AI stuff. So definitely learn from people who've been there before.

**Ann Stanley:** Yeah, definitely. I think my experience with some of the reports and things I'm doing is based on years of knowing the sorts of things I like to create and not being able to. Also, two years ago, I spent loads of time setting up Go HighLevel automations for things like website forms and WhatsApp integration, so that if somebody filled in a form, they got an email and a WhatsApp message. At that time we were using Zapier.

So that knowledge was really useful. Then I was learning how to use n8n and the APIs, so all of that has been useful too. Although it seems much easier now in terms of what you can do, I don't think I would be as effective at it if I hadn't gone through the apprenticeship of everything else I've had to learn to get here.

But also, when you first start, you can make a lot of mistakes. Can you remember the days when you'd create an Excel document that didn't have autosave, and you'd lose two hours' work and have to start again? It's very much like that at the moment, because a lot of the stuff it's doing, like creating an HTML document, hasn't got a back button or a Control-Z.

If it screws it up, it screws it up. So this is why I do a lot of versions and a lot of saving, so you've got checkpoints you can go back to and you don't lose all your work.

**Frederick Vallaeys:** Well, it sounds like you did a lot of work in n8n a year ago. I've been vibe coding quite a bit in Lovable, and part of that is I like having all of these AI capabilities. But like you said, there's no versioning, and when it's helping me create a doc, I'd like a bit more granular control: change this paragraph, but not that one.

And they all have these canvas-like products, but they're a little bit clunky in my opinion. So I'm like, let me vibe code so that I have a doc where I can say: this is the paragraph you need to change; this is where you need to help me find a more authoritative quote; and so on. But there's infinite undo, and there's trackability of every change that was made: which AI responded, what response it gave, what the request was.

So you bring in these layers of what's expected in terms of accountability and repeatability that you don't natively get if you're just using the chat inside the LLM, or even the Terminal, Claude Code, or Co-work.

**Ann Stanley:** Well, interestingly, when I created Secret Agents, which was the product I created as part of my course a year ago, that's when I started to use Bolt. I preferred Bolt to Lovable, actually, but I found it really expensive because of the amount of stuff I was building. I built a whole website, until I had to pass it over to a developer to make sure it wasn't hackable.

It had Supabase behind it, which is a Postgres database, and it was linked into GitHub for version control and Netlify for hosting. I learned a lot from doing that. I switched to Cursor quite quickly, and at that time Claude Code was only available in the Terminal version; there wasn't the Cursor integration.

But I found that level of control was much more available within Cursor, and Claude Code itself has got that extra layer on top. I do agree with you, though: the skills you developed using Lovable or Cursor are really important now when you're starting to use Claude Code, particularly for documents. In fact, I think the document side is sometimes more difficult, because we're used to just going in and editing a word or two at a time.

And you want that level of control. So what I tend to do is get the document as near as I can, and then I'll get it converted into a Word document, or I'll copy the HTML into a Word document, so I can still finesse it at the Word level.

But a lot of the stuff I'm doing is highly designed. I'm going from documents you would have had in PowerPoint or Word to something like an ebook, or better than an ebook, in the way it's designed. That saves you a fortune in design costs, but then you have to have the patience to get it right.

And that's why you only really want to build skills for stuff you'll do lots of times. The first few times I was doing a carousel, it would take me a day to get it right. Now, after two attempts, it's right. That's the investment: getting it right the first time, having the tenacity and patience to stick with it, and then you get something reproducible.

**Frederick Vallaeys:** Right. Let's go deeper on that, right? Because you have the luxury of being the founder and the owner of the agency, and so you can invest that time. But I think you also have team members you've tasked with sort of figuring out how to deploy AI across the organization. And that's what I want to get to, right?

Because I think so many employees have a task, whether it's doing 10 blog posts a month or something else, and they're just trying to get those things done. The moment they hit a roadblock, like this first one using AI taking maybe a week to get perfect, they don't necessarily think: but once that's done, I can do a hundred in a month instead of 10. So how do you get your team to be willing to make that investment and know that there's a light at the end of the tunnel?

**Ann Stanley:** Yeah, we had a really good example recently. I know it's tech SEO, but we had a similar example with our Google Ads audit as well. The tech audit is 50 pages; it's really, really detailed. And to get it to go off and do that would probably have taken two weeks' worth of work.

So in the end, what we did was take his normal way of doing it and just make it look nice. But we got further with the Google Ads audit, because we'd already played with n8n to pull the data out of the API. And even to do that, you have to jump through a lot of hoops to get yourself registered to access the API, believe it or not. You need to get a developer token.

So there are quite a lot of hoops to get information out. You've got to have at least a couple of you who are prepared to invest the time, and I think that has to be the directors at this stage, or somebody who's geeky enough and has the tenacity to keep at it.

But then, with Claude Code, I'd already got quite a lot of information and quite a lot of examples, so I said, "Can you create a skill out of that?" I then asked it to go and search online for other people's skills. A lot of people have left them openly on GitHub, and there are quite a lot of directories, so you can go and steal what everybody else is doing.

And then you say, "Can you put that together and come up with something better?" So I first of all reproduced what we were already doing internally, without the API; that was my version one. Then I gave it access to the API so it could go directly into Google Ads and GA4, and supplemented it with better-quality data on top.

But it is a lot of work, so unless it's something you do regularly, it's very difficult to justify the time. Your blogging is a classic example, because you probably want to do lots of that. Whereas if you're only doing one audit a month, or one audit every three months, it's not worth it; if you're doing one audit a day, it is. I'm doing GEO reports as outreach at the moment.

You know, generative engine optimization, or AIO; there are so many different names for it at the moment. My email outreach gives me about 10 opportunities a week. They're a right pain in the butt to do, but if I had to do them manually, it would take me all week.

It's still a nightmare, because it still doesn't always do what it's told. It's like a child: you sometimes tell it the same thing over and over again. But the good thing is they look really good. And because I've linked it up with certain pieces of software, it can go and grab the screenshots itself; it uses something called Playwright. It even sets the accounts up for me.

If I had to do 10 of those, as I said, it would take me all week. It probably takes me a day to do 10 because they can be problematic. But you've got to have the time to invest in it. So I think to answer your question, I think it's really difficult for people to learn things like n8n.

And interestingly, a year ago we would give training to the team, and they really weren't interested. But they've started to see what we're producing with Claude Code, and I think they now see it as an assistant they can use to do all sorts of stuff. Suddenly they're desperate for us to bring them up to speed.

So I think once the benefit outweighs the pain (they don't know what they're letting themselves in for yet, do they, Fred?), the team will come on board, and then you'll bring them along with you. But the pioneers at the front are usually six months ahead of the rest of the team, I would say.

**Frederick Vallaeys:** Yeah. Well, I know they're lucky at Anicca Digital that they have you to learn from once they decide to make that jump. But you also do a lot of education in other ways, right? I think you had August, AI August.

**Ann Stanley:** AI August.

**Frederick Vallaeys:** You have AI Thursdays now. So tell us about a number of the efforts where people can actually get hands-on and really learn this stuff with you.

**Ann Stanley:** Yeah. And just to clarify, I'm not the only one. Fortunately, my MD Darren, he used to be head of PPC. You probably knew him from years ago. He's just as into it as me, and he's been doing some really interesting stuff, bringing in reporting from all sorts of places. And then James, who's the tech SEO, he's really into it.

So there are three of us who are really into it, and more and more of the rest of the team are getting into it as well. But it's difficult to get into it if you don't understand the basics. So one of the things we wanted to do, instead of just offering a two-day course every month, was to have something a bit more organic.

So we decided to launch something called the Thursday AI Club. It's going to be every two weeks to start with, and the first one's on the 9th of April. You can sign up for free for the first one, and after that it's a members' club. The idea is there's an hour of ask-us-anything, with three or four technical people there to answer all the questions.

And then you can stay on for a two-hour workshop. Over six months, there are 12 workshops already planned out, most of which we've given before. It's an evolving program, because in six months' time you won't need the same stuff we're talking about now; things will have moved on again.

But the whole idea is that there's going to be a whole load of resources available, including all the previous recordings. You can join on a monthly basis, or you can sign up for the year and get a big discount. And you can try before you buy on the 9th of April.

**Frederick Vallaeys:** Nice. And then what happened in August?

**Ann Stanley:** Ah, August. I thought that if I changed the "u" to an "i", it became AI-gust. So we ran two-hour workshops every week during August, and those are all available on the website as well. You can go and help yourself, and that will give you a good idea; it was almost the prototype of what we're doing now.

I think there are five two-hour sessions you can help yourselves to. All of our webinars are free on our website, and I think some of them are on YouTube, but you're best off going to the Anicca website.

**Frederick Vallaeys:** So I did some workshops at SMX in Munich, a vibe coding workshop of an hour and 20 minutes. And I asked people to come with a spreadsheet of some type of data they wanted to use, because I knew a lot of this data you can get through APIs or MCPs, but I wasn't ready to help people set all that up. So I said, just bring it on a spreadsheet, and let's see what we can do.

And they came in with some really sophisticated asks. They wanted to like consolidate or combine data from three different sources and have like an AI layer on top of that. And I was like, okay, we got an hour 20 minutes. Like nobody's done this before except me. Like let's be realistic.

But actually, by the end of the hour and 20 minutes, they all had really good prototypes with actual data. It was somewhat surprising, right? I knew it was possible, but I didn't think it would be possible for someone who'd never done it before in less than an hour and a half. And then nobody wanted to leave, because they all got into that same routine of, oh, this thing is cool, but what if, okay, and what if.

And then they kept doing what-ifs, more and more and more, and so they didn't want to leave. But like you, having done these courses and having talked to people, has that been your experience? Any words of encouragement for people who might be a little scared about this?

**Ann Stanley:** I think there are some skills you need to go away and play with and come back to. So we always try and do some practical work. But what we're going to do in the AI Club is, every quarter, we're going to have a whole-day hackathon. We'll pick certain technology and certain ideas, and we'll do that once a quarter.

It's going to be on a Saturday because a lot of people can't get the time off during the working week. But there will always be some practical work every time. We did do some practical work when we were running the two-day course; we actually had breakout rooms and things. That was the original idea, but not everybody's got the software.

So when you come in and start teaching them stuff, they're not necessarily ready. I did do a session last week at—we've got something called Agency Hackers in the UK—and I did a 45-minute teach-yourself session on how to make a joke machine, which was quite interesting. So we made a joke machine, but I sort of had to do quite a bit of it.

You can't just dump people into n8n without explaining it. The slides are in one of the blogs I did at the weekend, so you can go and see that. But that was really easy because it explained the basic principles. But I agree with you. I mean, I created a whole client portal in a weekend using Claude Code.

So what you can do is just amazing, but you sort of need to build yourself up. It's no good going in at the deep end. You need to start with little baby steps and just do the annoying jobs that you have to do a million times a week: your meeting notes, your blogs, your LinkedIn posts, whatever it is you do all the time. Crack that, and then you can build up your skills over a period of time, right?

**Frederick Vallaeys:** And I mean, that's a good measured approach. I'll take a little bit more of an aggressive, maybe Silicon Valley, let's-just-build-and-break-stuff approach.

**Ann Stanley:** Yeah, but you've already got loads of experience. You've been doing—you've been playing around—

**Frederick Vallaeys:** I do, right. But I really want to encourage people who are in that boat of never having done it. I agree with what you're saying, but I do want to encourage people to take those baby steps now. And if you misstep, that's okay, right? Using Claude Code is so cheap. If you've got that plan, get it to build something, and if it's completely wrong or it has security issues, ask it to identify those for you and don't deploy it.

But see what it can do with a couple of really basic things, and then start layering in all of these things about building it the right way, structuring it correctly, making sure it is secure, making sure your API tokens don't get stolen, right? But until you see the power of a single prompt and what it can do, that's the thing that makes the light bulbs go off and makes you understand what is now possible.

**Ann Stanley:** Well, the other big advice on that is you go into what's called planning mode. So if you say, "Oh, I want to build"—so what I built was a client portal. And I already knew exactly what I wanted because I'd been playing around with this stuff for quite a long time anyway. So I described exactly what I wanted. I knew how I wanted the clients to be uploaded. I knew I wanted to add users. I knew what I wanted in the layout and the menus because I tried some stuff a year ago and couldn't get it to work because the technology wasn't there.

But what I did was go into what's called planning mode. You describe it again with your voice, because you can give it a lot more detail by talking at it than you can by trying to type it in. And then you say, "Give me a plan." And then you say, "Oh, okay. Well, you've missed that bit out." And then you work together on the plan. Then you get it to build.

So although you're right, go in and try stuff, but just put that intermediate step in. Because remember, if you can't prompt properly or you don't give it the instructions, you'll get rubbish out. But if you work together to develop what you want, then you're going to get a much better result out the end of it. So I definitely recommend doing that.
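The plan-first flow Ann describes can be sketched in code. This is a hypothetical illustration of the two-step structure, not Claude Code's actual interface; the function names and prompt wording here are invented for the example.

```python
# Hypothetical sketch of the "plan first, then build" flow: the model is
# asked for a plan only, a human reviews and amends it, and only the
# approved plan is sent to the build step.

def make_plan_prompt(goal: str, requirements: list[str]) -> str:
    """Step 1: ask the model for a plan only, no code yet."""
    bullets = "\n".join(f"- {r}" for r in requirements)
    return (
        f"I want to build: {goal}\n"
        f"Requirements:\n{bullets}\n"
        "Do not write any code yet. First give me a step-by-step plan "
        "and list anything I may have missed."
    )

def make_build_prompt(approved_plan: str) -> str:
    """Step 2: run only after the human has reviewed the plan."""
    return (
        "The plan below is approved. Build it exactly as described.\n\n"
        + approved_plan
    )

# Example spec, loosely modeled on the client portal Ann mentions.
prompt = make_plan_prompt(
    "a client portal",
    ["clients can be uploaded", "admins can add users", "simple menu layout"],
)
```

The point of splitting the two prompts is that a human review sits between them, which is exactly the intermediate step Ann recommends before letting the tool build anything.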

**Frederick Vallaeys:** Yeah. And again, I'll take the contrarian stance here, because I have tried doing more of this planning, going to ChatGPT and having a big conversation. But there are so many preconceived notions that I have. Whereas if I just say, here's what I'm trying to build, and I specify the goal, what am I trying to achieve, then it has so much information about other ways that's been built, approaches it's taken—

**Ann Stanley:** No, no, I'm saying the same. Yeah, I'm the same. But don't go into ChatGPT and do it. Do it in Claude, because Claude knows what tools it's got. No, I'm agreeing with you. I'm saying just describe it to start with: what you want, what you're trying to achieve, what the output will be. And then it will come up with a description of what it thinks you need.

And then you can say, "Oh, no, that's not quite right." And once you've got to something that vaguely looks like what you had in your head (because remember, we are all pretty rubbish at describing what we want, unless you do this a lot), then you let it go ahead and build it, and you work iteratively with it.

So it is all about a conversation. It's like working with your own assistant next to you who can do design, function, content, and everything, and who knows what works because they've done it a million times before. So I agree with you, actually. I would do the same.

**Frederick Vallaeys:** Yeah. And I guess once you start doing this, you have a whole newfound respect for people who build product and people who manage campaigns because it's like, well, I thought of the 10 things it should do, but then oh wait, now that I'm using it, there's this other thing and there's this edge case and that thing I hadn't considered. It's just like, well, there's actually a lot of things you need to think about.

But what's also beautiful is that the systems keep evolving so quickly, and the costs keep going down. So it's like, well, six months from now, sure, you've got this dashboard, but why not just rebuild it? And there's this whole philosophy in coding now that says we as people should no longer write the code on the page. We should write down what it is we're trying to do with this page, what the goal is, right?

And six months from now, Claude Code may have a better version, and it should just look at those instructions and say, "Oh, well, Fred, here's what you were trying to do, and here's how we did it six months ago. But let's just rebuild the whole thing to do it better and still meet your goal." And that's such a fundamental shift in how we think about how things are built.

And a shift toward the people who have the goals.

**Ann Stanley:** Well, I mean, what we haven't talked about is the impact this is going to have on agencies and what we do for a living going forward. Because it's probably only 1 or 2% of people using Claude Code at the moment. But at Agency Hackers, 30% of the people in the audience were using it. Whereas a year ago, when I asked them if they'd heard of Claude or AI agents, it was less than 20%.

So it's moving really quickly. But the technology isn't going to be the barrier anymore. It's only going to be your imagination and your ability to understand it, and also to market yourself as an agency or as a practitioner to say that you're adopting these tools. Because if you don't, the clients are going to be there before you. And they're going to think they can do it themselves, without understanding all that knowledge: knowing what to build and how to build it because you've done it, been there, and bought the t-shirt.

That experience is worth something. All of our team have got 10, 15, 20 years of experience. So that's what becomes valuable, rather than the day-to-day stuff. And I think a few years ago, you wrote a book saying pretty much that, about how AI was going to change everything.

**Frederick Vallaeys:** Oh yeah, the biggest mind shift in PPC history and leveling the playing field. I actually have a new book. So for anyone watching this episode, *The AI Amplified Marketer* is available now on Amazon. That's the one that picks up all this new generative stuff. It was written before Claude Code, so it doesn't cover that, but a lot of the concepts feed into it and how you think about AI. Definitely check out that new book.

**Ann Stanley:** So I guess with your new book and my first AI Club, the audience are going to be able to absolutely smash it, aren't they?

**Frederick Vallaeys:** Absolutely. But yeah, this has been fantastic, you sharing all your insights. I feel like we need to do another one in a couple of months, when everything will be new again and we'll have even better advice on how to do things. But certainly, anyone watching today, check out all the great work Ann's been doing and sign up for these classes, because honestly, the one hour you invest every week in doing this is going to pay off tenfold, a hundredfold.

And my philosophy is you're not competing against AI. You're competing against people who use AI better than you. And this is exactly what we're seeing here, right? Someone can 100x you by just using the right tools.

**Ann Stanley:** And maybe in six months' time, we can say that we've got a bit more sleep.

**Frederick Vallaeys:** I don't know. I mean, every hour that frees up, I'm like, "Oh, there's this other thing in my life that I've always wanted." I've actually built an app now that helps me remember what I ate at certain restaurants because the thing that frustrates me the most is like I've been there before, but I can't remember what on the menu I enjoyed or what I tried. And I just need a little app that helps me remember. And so I start vibe coding all of these things.

And yeah, and I do some PPC along the way as well.

**Ann Stanley:** Yeah. Well, we didn't talk too much about PPC, but what we did talk about was how these tools can help you do your PPC: your reports, and some of the other types of reports too, like competitor analysis, landing pages, CRO stuff. It's all on the edge of what PPC folks and marketers need to know. So hopefully there'll be a few snippets in there that they'll find useful.

**Frederick Vallaeys:** Well, yeah, exactly. I think so. Take these concepts of how you can get the computer to do the work for you, and put in place these skills that really define it. For Optmyzr users, you would think about your blueprint for how you manage an account. That blueprint basically becomes your skills file: do these things in that order, and take this into consideration for this and that. But then layer some AI and LLM capabilities on top of that, which is what you were describing you can do with n8n, right?

So that's one starting point. But then also, we've talked about the business intelligence and the experience that you have as an agency or as a software tool. The hard thing about building a tool is not the basic stuff; it's the edge cases, right? And that's what we've thought about. So you can take an agent like Claude Code and say, connect with the MCP from Optmyzr, which will help avoid these mistakes, which will build on 10 years of experience of doing this the right way and give you reliability, right?

Or if you're working with an agency, there's something about an agency having access to all of these other accounts, so they have a broader insight into the industry. And that is valuable, and they've probably built that into the ways they communicate with these agents, right? So the risk right now is that people just think, oh, Claude by itself can do everything better. No, it can automate it to some degree, but it still needs the help of the things that we've all been building over decades of work.

**Ann Stanley:** I've got a really good take-home message for everybody as we come to the end of our time together, and that's the human-AI sandwich. For anybody who's feeling hungry at the moment: the idea is that you need humans at the beginning of the process who have the idea, have the inspiration, and can give the right instructions to the AI to produce something. And then you have the human on the other side who takes that content, knows whether it's good or not, and adapts it, edits it, and makes it even better.

So the human-AI sandwich is what we're talking about here. And what I'm saying is that better quality humans are going to produce better quality sandwiches, because they know their stuff. I nearly swore then, but I didn't. I managed to go through the whole episode without swearing once. So yeah, the human-AI sandwich is the take-home message of this podcast with you today, Fred.

**Frederick Vallaeys:** Okay. Well, very good. Now you've made me hungry, Ann. So with that, we're going to wrap it up here. This was fantastic, Ann. Thanks so much for being on as a guest. Everyone, thank you for watching. Please put comments in, like this episode, subscribe, and go check out all the great work Ann and Anicca Digital have been doing. And with that, we'll wrap it up, and we'll see you for the next episode.

**Ann Stanley:** Thank you. Bye, Fred.

---

*Source: [How Marketers Can Build AI Workflows That Produce Reliable and Consistent Results](https://www.optmyzr.com/ppctownhall/how-to-get-reliable-output-from-ai/)*
*© Optmyzr. All rights reserved.*
