

Episode Description
There’s a lot of noise around using AI in PPC. Most of it centers on big, existential questions: Will automation replace strategists? Is AI taking over PPC management?
But on the ground, the concerns look very different.
PPC practitioners are navigating rising costs, unstable automation, mismatched campaign types for lead gen, and platforms that seem to change faster than performance data can stabilize.
Julie Bacchini has spent years listening to those real-world challenges. As the founder of Neptune Moon and host of PPC Chat since 2017, she’s built a space where PPC professionals talk honestly about what’s breaking, and not just what’s trending.
In this conversation with Frederick Vallaeys, she brings that grounded perspective to a much-needed reality check on AI in PPC.
Here are the key points discussed:
- Lead generation lives in the wrong neighborhood
- AI Max is PMax all over again: early chaos, eventual evolution
- Liability lives in the gaps between AI outputs and human accountability
- Test AI on problems you’ve already solved
- Documentation isn’t optional anymore
- Platforms evolve faster than search behavior
- Workarounds work until they don’t
- AI anxiety isn’t about job security alone
- You can’t navigate this alone: business protection and community support in an AI-driven industry
Episode Takeaways
In this discussion with Frederick Vallaeys, Julie Bacchini starts by explaining that lead gen advertisers are not Google’s target market, despite their significant spending on the platform. She emphasizes that automation and AI are designed for larger data volumes, which many lead gen accounts don’t achieve, and highlights the limitations of manual bidding in accessing essential features. The typical campaign launch often results in a mix of outcomes as the platform seeks stability.
Julie then shifts the focus to pressing issues like legal liability and insurance gaps concerning unverifiable AI-generated data. She raises critical questions about accountability for poor automated decisions, the efficient management of AI documentation, and how to handle data provided by clients from algorithms that they may not fully understand.
Lead generation lives in the wrong neighborhood
Julie delivers an uncomfortable truth: “If you’re lead gen, you’re not the target market for Google advertising.” Everything Google builds is ecommerce-first, designed for conversion volumes that most lead gen accounts never hit. The automation needs a critical mass—500+ conversions in 30 days to really find its groove.
When you’re sitting at 30 or 50 conversions, the algorithms have drastically fewer data points to work with. That’s just math.
The problem goes beyond simple volume metrics. Google’s entire automation infrastructure—the machine learning, the pattern recognition, the optimization algorithms—all of it expects a steady flow of conversion data to learn from. With 5,000 conversions in a month, the system has endless opportunities to spot patterns and adjust. With 30, it’s essentially guessing.
“The way that the automation works is the more information it has, the better it is at being able to figure stuff out and figure it out faster and do it more efficiently,” Julie points out.
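The volume gap Julie describes can be made concrete with a quick confidence-interval calculation. This is an illustrative sketch, not from the episode: the click and conversion counts are hypothetical, and it uses a simple normal approximation. At the same 5% conversion rate, an account with 30 conversions gives the algorithm a far blurrier estimate of its true rate than one with 5,000.

```python
import math

def conversion_rate_interval(conversions, clicks, z=1.96):
    """Approximate 95% confidence interval for a conversion rate,
    using the normal approximation to the binomial."""
    p = conversions / clicks
    se = math.sqrt(p * (1 - p) / clicks)
    return (max(0.0, p - z * se), p + z * se)

# Same 5% conversion rate, very different certainty:
low = conversion_rate_interval(30, 600)        # low-volume lead gen account
high = conversion_rate_interval(5000, 100000)  # high-volume ecommerce account

print(f"30 conversions:   {low[0]:.3f} to {low[1]:.3f}")
print(f"5000 conversions: {high[0]:.3f} to {high[1]:.3f}")
```

The low-volume interval comes out more than ten times wider, which is one way to see why "it's essentially guessing" at 30 conversions: every bid decision rests on a rate the system barely knows.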
Lead gen marketers end up getting creative with portfolio bidding and account structures, trying to “lessen your data disadvantage” just to hit the baseline thresholds that ecommerce advertisers reach naturally. Julie’s been in this position for years. She’s made peace with it, even jokes about her annual GML bingo card where “we’re going to get serious about lead gen” never gets crossed off. Manual bidding sounds tempting as a workaround, but it cuts you off from features Google reserves for smart bidding users.
“When you don’t use automated or smart bidding, then you also don’t get access to a lot of the goodies that come with that, right, that Google’s doing in the background for you,” she explains.
The days are numbered for that approach anyway, and the reality is harsher than it was five years ago: half of what used to happen in the foreground now happens in the background, automated and opaque.
AI Max is PMax all over again: early chaos, eventual evolution
The pattern repeats itself. Some advertisers try AI Max and get amazing results immediately. Others find it completely underwhelming. Some call it a disaster. Julie describes this as typical Google product behavior:
“It’s like everything else, your mileage may vary. If it feels like something that you want to try or you think could potentially work in your account, give it a whirl. You’ll know fairly quickly if it’s going to work for you.”
The conversations in PPC Chat mirror the Performance Max launch almost exactly. Early adopters rush in, some see magic, others see chaos, and the vast majority land somewhere in the middle, wondering if they’re doing something wrong or if the tool just isn’t ready yet. Right now, we’re still in that first-release period where experiences vary wildly depending on account type, vertical, and pure luck. Julie expects AI Max to follow the same evolution arc. The PMax that exists today bears little resemblance to the version that first rolled out.
“I’m sure it’s going to evolve because the PMax that we have today is not the PMax that we had when that baby rolled out. It’s not. So I feel like AI Max will probably be the same way. They rolled it out like here’s what it is, and then it’s going to morph over time.”
The bigger issue is that this cycle of “launch a new tool, watch it underperform, iterate for two years, finally make it usable” has become the norm. Every major Google Ads feature follows this path now. Practitioners are tired of being unpaid QA testers, but opting out means falling behind when the tool eventually becomes essential.
Liability lives in the gaps between AI outputs and human accountability
Julie raises a scenario that insurance companies haven’t figured out yet: a client uses AI to analyze their sales data and generate customer personas. They hand you this AI-generated summary as the strategic foundation for their campaigns. You build everything on top of it. Then it turns out the AI got it wrong.
“What happens if the data that got spit out of whatever AI they use is wrong? Where does that liability lie? Is that solely with the client because they gave you bad information?” remarked Julie.
Nobody knows. Insurance policies move more slowly than technology. Legislation moves even more slowly. Meanwhile, marketers are out here making decisions with AI-generated inputs they have no way to verify, and the question of who’s responsible when things go sideways remains unanswered.
She frames the dilemma clearly: “They’re instructing you to utilize it when you are coming up with a strategy, when you’re making decisions, when you’re prioritizing, all the stuff that we do. What happens if the data that got spit out of whatever AI they use is wrong?”
Julie describes her experience: “I did have a client provide a massive document with guidance based on sort of like here’s our sales data, here’s who bought stuff. And there wasn’t really any way to verify it. So you ask questions like, okay, well, can I just see the raw data for the actual things that were purchased? Oh, you have demographic data. Can I just see the raw demographic data? And that wasn’t possible. The only answer I got was that it came from AI.”
Her reaction? “That made me uncomfortable because it’s putting a lot of faith in something that I didn’t have any way of verifying.”
Test AI on problems you’ve already solved
Before trusting AI with high-stakes decisions, run it through scenarios where you already know the answers.
“You can ask questions you already know the answers to and see how correct it is,” Julie suggests.
Pull historical data from completed campaigns. Feed it into the AI. See if it arrives at the conclusions you already validated through actual performance. This builds confidence in the tool’s accuracy and helps you understand which types of prompts produce reliable outputs.
Julie explains the logic: “Instead of giving it something that you’re still trying to figure out yourself and you don’t have all the information, or you’re working on figuring it out, why don’t you ask it things you already know the answer to? And then you can see, does it come up with the correct answer?”
She describes this as building your confidence level in a tool before you rely on it for high-stakes decisions.
“That’s a better way in my mind to be putting yourself in a position to have a higher confidence level when you’re asking a question that you don’t already know the answer to.”
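Julie’s “ask it things you already know” method amounts to a small evaluation harness. Here is a minimal sketch of that idea; the `ask_ai` callable and the question/answer pairs are stand-ins for whatever AI tool and validated campaign history you actually use, so a fake lookup is wired in just to make the example self-contained:

```python
def score_against_known_answers(ask_ai, known_cases):
    """Run an AI callable against questions whose answers were already
    validated by real campaign performance, and report how often it agrees.

    ask_ai: any callable taking a question string and returning an answer.
    known_cases: list of (question, validated_answer) pairs.
    """
    results = []
    for question, expected in known_cases:
        answer = ask_ai(question)
        results.append((question, expected, answer, answer == expected))
    accuracy = sum(1 for r in results if r[3]) / len(results)
    return accuracy, results

# Stand-in "AI" so the sketch runs on its own (a dict lookup, not a real model):
fake_ai = {
    "Which campaign had the best ROAS in Q1?": "Brand Search",
    "Which ad group wasted the most spend?": "Generic Broad",
}.get

accuracy, details = score_against_known_answers(
    fake_ai,
    [("Which campaign had the best ROAS in Q1?", "Brand Search"),
     ("Which ad group wasted the most spend?", "Display Retargeting")])
print(f"Agreement with validated answers: {accuracy:.0%}")
```

A running agreement score like this is exactly the “confidence level” Julie describes: you only start asking open questions once the tool has earned trust on the closed ones.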
Documentation isn’t optional anymore
Julie’s most practical recommendation is meticulous record-keeping.
“This is the prompt that I put in. This is the data that it had access to. This is the instruction that I was given. This is what the results were.”
It sounds tedious, but it serves multiple purposes. First, it helps you spot inconsistencies—sometimes asking the same question twice produces wildly different AI answers. As Julie notes, “Sometimes you can ask an AI the same question twice, and it’ll give you different answers.”
Without documentation, you won’t catch that. Second, it creates a historical record for pattern recognition.
Julie describes what you might discover: “Wait a minute, I asked it to do something very similar three or four times, and the results I got were very different. Two of them were super similar, and then the other two were very, very different, but I followed the same methodology while I was doing it. That would make me be like, ‘Hm, what is going on there?’”
Third, it protects you legally if questions arise later about why you made certain decisions. “If there are questions that come up as well, why did you do this, or what happened here, or what was the sequence of things that you do?” Having documentation makes all the difference.
A simple Google Sheet tracking your AI interactions is enough to start. Julie suggests: “You can have a Google Sheet where you note this is what I was asked for, this is how I created the prompt, this is the platform that I used, this is what I fed it, and this is what it spit out. Just keep a record of that for yourself.”
She frames documentation as “just like covering yourself,” and acknowledges it’s not the most exciting work.
“Documenting stuff is not the sexiest work we do, but we do it for a lot of things. We have spreadsheets that have all the ad copy. We have all the images. We do work like this on a fairly regular basis, where we’re trying to keep track of what tests we are running this month, what’s coming up next,” explained Julie.
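Julie’s Google Sheet suggestion translates directly into any append-only log. As a sketch, the same record could be kept in a CSV file; the filename and field names below are hypothetical, chosen to mirror the items she lists (prompt, platform, data fed in, output):

```python
import csv
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ai_prompt_log.csv")  # hypothetical filename
FIELDS = ["timestamp", "platform", "task", "prompt", "data_provided", "output"]

def log_ai_interaction(platform, task, prompt, data_provided, output):
    """Append one AI interaction to a running CSV log, writing the
    header row only when the file is first created."""
    new_file = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "platform": platform,
            "task": task,
            "prompt": prompt,
            "data_provided": data_provided,
            "output": output,
        })

# Example entry; every value is a placeholder for your own records.
log_ai_interaction(
    platform="(your AI tool)",
    task="Draft RSA headlines for spring promo",
    prompt="Write 10 headlines under 30 characters for ...",
    data_provided="Q1 search terms export",
    output="(paste or summarize the response here)",
)
```

Whether it lives in a sheet or a file, the point is the same: a timestamped trail of prompt, input, and output that you can search when “why did you do this?” comes up later.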
Platforms evolve faster than search behavior
Google pushes Performance Max and AI-driven automation aggressively, but most users still search the way they always have. That disconnect creates real confusion. The tools change faster than user behavior, which means new campaign types don’t necessarily align with how your audience actually finds you.
Frederick frames the disconnect: “People are searching in different ways. They are formulating what they look for in different ways. They have different touch points along the way to getting to that day of feeling googly and doing that final search.”
But the reality is that the shift happens gradually while platforms want you to adopt new tools immediately. Julie points out that Google’s advantage is that massive behavioral foundation: “They’ve spent 20-odd years teaching people how to find stuff. And a lot of that results in ads showing when people are finding things.”
She believes Google will protect that position: “I don’t think Google is interested in falling behind the others. They’re going to protect their turf, and why wouldn’t they? They have a very expensive turf that they spent a lot of years building. So, I think it’s going to take a lot for them to be pushed off the mountain.”
The challenge for marketers is figuring out which changes matter. Julie’s test is a set of blunt questions: “What does it do? What are you hoping to accomplish? How is it different from what we’re already doing? Are you just trying a new toy because someone said you had to try it?”
Success requires understanding actual search behavior in your vertical, not just following the platform’s latest feature rollout.
Workarounds work until they don’t
Strategies that perform perfectly can suddenly stop working when Google rolls out an algorithm update or launches a new feature that disrupts your carefully crafted setup.
“You have things working in your account, and then one day you wake up two months later, and it’s not working anymore,” Julie explains.
This is the constant anxiety in PPC Chat conversations. As she describes it: “We have a lot of conversations around, hey, is anybody else seeing this? We had X, Y, and Z, and it was going along perfectly, but now it’s not.”
Lead gen advertisers particularly feel this because their workarounds are already fragile, designed to compensate for not being the platform’s primary use case. The question isn’t whether your workaround will eventually break. It’s how long it holds up and whether you’ll have enough warning to build the next one.
Julie acknowledges the challenge: “How long can you do workarounds? I think that’s the other piece where the people who are in the lead gen space talk about and think about, like, all right, it’s not realistic to say like, well, I’m manual bid forever.”
Even when workarounds exist, they come with trade-offs. “You have to be able to absorb what’s happening and see what you’re experiencing in the accounts that you’re responsible for. And then you have to try to make responsible decisions as far as what things we can try,” reiterates Julie.
The deeper frustration is that the pace of change has accelerated. Expertise becomes obsolete faster than people can build it.
As Julie observes, “This has always been an industry, it’s tech, so nothing stands still. It’s always been an industry where there’s been a ton of change happening. But that pace has really picked up, I think, in the last couple of years.”
AI anxiety isn’t about job security alone
The PPC Chat community reflects a wide spectrum: people using AI for everything, people refusing to touch it, and most people somewhere in between, trying to figure out where it helps without creating risk. The anxiety goes deeper than “will AI take my job?”
It’s about losing control over strategy, trusting outputs you can’t independently verify, and navigating constant change where the expertise you built over the years suddenly doesn’t apply the same way. Julie acknowledges the discomfort while accepting reality.
“That train has left the station. How do we take the knowledge that we have for however long we’ve been working in PPC and integrate that with the AI that seems to be everywhere?” Julie mentions.
That’s the real question facing the industry. Julie isn’t a huge fan of AI—she’s been vocal about that. “I’m not the hugest fan of AI. I’ve been vocal about that. That shouldn’t be a surprise to anybody who’s followed me anywhere. Because I have questions about it, and I think it has shortcomings. But I also realized that that train has left the station.”
The conversation often centers on figuring out where AI actually helps versus where it introduces risk.
As Julie frames it, “Where can they use AI in a reliable way so that they’re not potentially having AI generate something that turns out to be incorrect and then you’re moving forward using incorrect data? That’s a real fear. But where does it fit in? Where does it make sense? How can it make you more efficient? Where can it fill in gaps that maybe you had previously, that it actually does a nice job with?”
She describes the PPC Chat community’s approach. Julie mentions, “People are experimenting like crazy, and they share their thoughts of like, oh, I created this or oh, these are the prompts I use for that or whatever. So, there’s a lot of testing and experimentation going on right now in general.”
The people who succeed in this transition won’t be the ones who reject AI entirely or embrace it uncritically—they’ll be the ones who figure out exactly where the tool helps and where human judgment still matters.
“Nobody wants to get left behind. Nobody wants to feel like they wake up one day and suddenly they’re a dinosaur who doesn’t speak the language of anything that’s going on,” she remarked.
You can’t navigate this alone: business protection and community support in an AI-driven industry
The conversation between Frederick and Julie covered a lot of ground beyond the major themes. Two points that deserve attention: the business realities underneath all the AI anxiety, and why community matters more now than it did five years ago.
Julie thinks about PPC differently than most practitioners because she’s been running her own consulting business for 26 years. That perspective shows up in how she approaches risk. While others debate whether AI will write better ad copy, she’s on the phone with her insurance agent asking whether her policy covers liability when a client hands her AI-generated data that turns out to be wrong.
“It is smart to carry various types of business insurance, but even if you do, this technology moves faster. It moves faster than legislation. It moves faster than your insurance policy,” she says.
She talks about early experiences that shaped this mindset—getting pulled into situations where documentation and process saved her. That’s why she pushes so hard on creating systems and keeping records.
The conversation also touches on practical AI use cases where the technology genuinely helps. Frederick points out one area where AI has an advantage over human perspective: understanding different audiences.
“If you’re going to be writing some ads, have you put yourselves in the shoes of every different persona that might be consuming those ads? I can’t put myself in the shoes of an 18-year-old black woman. We don’t have a lot of common experiences. So it’s difficult for me to write ads towards that audience, but AI has absorbed so much information that it is, in some cases, able to do that,” explains Fred.
Google’s adding AI directly into the platform too. Julie notes, “Google has their assistant inside of Google Ads now, where theoretically you can just have a little conversation with the assistant and kind of tell it what you’re looking to do, and it’s going to spoon feed you everything that you need to do.” You can ask questions like “hey, why are my CPCs up? Why are my conversions down?”
But she adds a critical caveat: “I’m not suggesting that you only ask the Google Assistant why something’s happening because you should probably verify what the Google Assistant is telling you.”
That verification piece matters because the AI landscape includes everything from helpful automation to complete avoidance. Julie describes the spectrum she sees in PPC Chat:
“Some people love it and are using it for everything. They’re using it for strategy. They’re using it for ad copy. They’re using it to generate images. You have people who are in the ‘what else can I have it do’ camp. And then you have other people who are like, ‘I don’t want it to do anything because I don’t trust it at all.’”
The community aspect becomes crucial here. PPC Chat functions as “a free community where PPC professionals can talk with each other, ask questions, and get moral support.” It’s not just about finding the one right answer. It’s about hearing the range of experiences and figuring out your own path forward. That diversity of perspectives helps people find their own approach rather than just following whatever the loudest voices say.
The underlying message in both of these points is the same: you can’t do this alone anymore. You need systems to protect yourself as a business. You need a community to stay sane and informed. And you need to think beyond just “how do I optimize this campaign” to “how do I build a sustainable practice in an industry that’s changing faster than anyone can comfortably adapt to.”
Episode Transcript
Frederick Vallaeys: Hello, and welcome to another episode of PPC Town Hall. My name is Fred Vallaeys. I’m your host. I’m also CEO and co-founder at Optmyzr, a PPC management software. Now, for today’s episode, we’re going to go in a slightly different direction.
Usually, you get to hear from Fred, the AI fanboy, and all of the other AI fans. But today, we have Julie Friedman Bacchini. She is the host of PPC Chat, and that is a community where more often than not, there are some people talking about fears around AI, AI anxiety, worrying about using AI the wrong way, and worrying about what it will do to their careers in PPC.
So I think this is going to be an interesting one. We’re going to get a different perspective on artificial intelligence in the world of digital marketing. And with that, let’s get rolling with this episode of PPC Town Hall.
Julie, great to have you on again. Thanks for joining us.
Julie Bacchini: Oh, thanks so much for having me. I appreciate the opportunity to talk about PPC whenever I’m asked.
Frederick Vallaeys: Really? Yeah, you do it all the time, several times a week. Um, so you’ve been on the show before, but for people who haven’t had the chance to meet you, tell them a bit about yourself, PPC Chat, Neptune Moon, and what you’ve been up to lately.
Julie Bacchini: Sure. I’m Julie Friedman Bacchini. I am the president of Neptune Moon. So I do PPC consulting, and I have been working in PPC since literally the beginning. Uh, so I have seen such a huge amount of change over all these years. So I’ve been in business for 26 years. So I have really seen just about everything.
And uh, in addition to that, I have been running the PPC Chat community for PPC professionals since 2017, which I can’t believe has been that many years. Like it feels like I took it over not that long ago, but yeah, it’s been that long.
And we’re a free community where PPC professionals can talk with each other, ask questions, and get moral support. We do a weekly live chat Tuesdays at 12 o’clock Eastern where we talk about a timely PPC topic. And uh, AI of course has been if not the topic, it floats in I think to a lot of topics that we talk about too because it’s very much on everybody’s mind.
Frederick Vallaeys: Yeah, that makes sense. AI is everywhere these days. Um, and so yeah, thanks for hosting that community and taking it over back in 2017. And uh, you’re also known as PPC mom to some, uh, one of the nicest people in the PPC industry. Like you said, it’s about uh, mental wellness. It’s sort of like airing what’s on our minds and not just about PPC strategies but also uh, living good lives. And uh, you’re also an actual mom, right? And uh, is that part of the reason why your glasses match your background and the shirts?
Julie Bacchini: Yes, I have to be very—I mean not that I don’t like to be coordinated, but I do have a teenage daughter who’s very into fashion. So if I am askew or amiss uh, in appearance and she catches wind of it, I will hear about it. So, yes, she keeps me on my toes.
Frederick Vallaeys: We’re uh, for everyone who’s just listening to the audio portion of this, I highly recommend you take a look at the YouTube version of this or just go to the show page to see uh, how great Julie looks with the glasses that match the galaxy in the background.
Julie Bacchini: Well, I am Neptune Moon, you know.
Frederick Vallaeys: Ah, yes, of course. Do you have one of the Neptune swatches? The moon swatch?
Julie Bacchini: I do not. I do not.
Frederick Vallaeys: Maybe a gift idea. Let’s see who sends Julie one of the Neptune watches. Okay, but let’s jump into the topic at hand here. So, artificial intelligence and um, first of all, it’s interesting because you said even if you don’t have a PPC chat about that topic, it sort of floats in. Um, so, maybe before we go too deep on AI, what are the topics on people’s minds these days when it comes to Google Ads and PPC?
Julie Bacchini: Um, I mean, I’d say they fall into a couple different categories. AI is definitely a big one. Um, and when it comes to AI, it’s sort of like how to utilize it. Um, where does it make sense, where doesn’t it make sense? How do we make sure that we still have a job, you know, five years from now? There’s a lot of concern uh, about that, what’s everything going to look like? You know, the pace of change. This has always been an industry. It’s tech, right? So, nothing stands still.
It’s always been an industry where there’s been a ton of change happening. But that pace has really picked up, I think, in the last couple of years. Um, so we talk a lot about things changing. You know, you have things working in your account and then one day you wake up two months later and it’s not working anymore. So there are a lot of conversations around, hey, is anybody else seeing this? You know, we had X, Y, and Z and it was going along perfectly and now it’s not. We have a lot of conversations about those types of things.
How to set things up now, how to manage things. Um, we talk a lot about what goes on in different platforms and how you can have efficiency across multiple platforms because many people are managing accounts on just more than just one uh, one platform as well. So where to spend your time, how to figure out how to prioritize, you know, where clients should be spending their efforts, that type of thing.
And it’s all getting more expensive. So how do we keep clients happy when the cost, the click, the cost of the clicks keep, you know, going up year after year? So they have to spend more money to really get the results that they’re used to getting like in the previous year. How do you manage that? How do you manage those expectations and how do you handle that and try to keep things efficient? I would say those are some topics we’ve talked about lately.
Frederick Vallaeys: Okay. So if that’s on anyone’s minds, know you’re not alone in being frustrated with the ever rising CPCs. Uh, but let’s go back to the thing you said right before that maybe about cross-platform management. And I think last year there was some serious concern over the future of Google in the AI world where ChatGPT was answering everything and Google’s AI wasn’t that good. But now Gemini is a leader.
Gemini is actually really useful because it ties directly into all of this amazing data that Google already has about us. So I don’t have to use a Gmail connector. I don’t have to connect it to the calendar. I don’t have to connect it to drive. It’s already all in that Google universe. Um, so what’s your take on Google Ads as a platform and its viability? Should people continue to feel safe in investing there or is it really time to maybe look at some other places too?
Julie Bacchini: I mean, I think it’s always wise to not have all of your eggs in one basket because as we know we are at the whim of any platform where we have advertising going on. Um, but do I think that Google as a top destination for paid ads is gonna go away anytime soon? I don’t. Um, they have, they’ve spent 20 odd years teaching people how to find stuff, right? And a lot of that results in ads showing when people are finding things.
And of course they’ve expanded into, and they’ve got the newer campaign types where you have um, more image-based and video-based assets. So you’ve got PMax, you’ve got um, demand gen, you know those types of things. So Google has been evolving, and you know they just came out here uh, very recently talking about how you can connect all your stuff you know into Google’s AI now and you can have an even more personalized experience. So, I don’t think Google is, um, interested in falling behind really when it comes to the other, right? Like, they’re going to, I believe they’re going to protect their turf and like why wouldn’t they, right?
They have a very expensive turf that they spent a lot of years building. So, I think it’s going to take a lot for them to be pushed off the mountain.
Now, that doesn’t mean that there aren’t other places. OpenAI is serious now and are announcing and talking about we’re going to have ads. We are like we really truly are going to start having ads. So things are moving forward on other platforms as well, but Google has such an advantage at this juncture. It’s going to take a lot to dethrone them and make them a place where it’s not going to be a priority for businesses and organizations who want to be, you know, seen.
Frederick Vallaeys: Yeah. And I always find it quite interesting when none of these companies truly go into ads and I’m talking about the newer ones, right? Or they make a 360. Well, 360 is actually going back to where you were going before.
So, they make a 180 on uh, on ads like Perplexity. Uh, but it’s quite interesting because if they think about what’s the biggest monetization potential of this massive AI that they’re building, sometimes they don’t see ads as the thing. And that’s perhaps a little bit surprising to us knowing how big and massive of an industry this is. Um, but uh, but yeah, at some point they certainly have to, it just makes sense to support advertisers in all of this.
So I can’t wait to see what’s to come. Now one uh, type or campaign type thing that you didn’t really mention was AI Max. Is that for a reason? Do you not see a lot of popularity on that one or what are your thoughts around AI Max?
Julie Bacchini: Um, I would say generally in the community people are experimenting with—I mean it’s still relatively new, right? It hasn’t been around for that long. So people are experimenting with AI Max. Um, and I would say it’s like any other, any other like Google product launch.
Some people start with it and it works awesome and they love it and they’re like, “Wow, I can’t believe we didn’t have this before.” Right? And then you have other people who are like, “Hm, I tried it. It was very meh.” And then you have other people who are like, “Oh my god, it was a disaster.” So it’s typical. I would say it feels a lot like when PMax came out, right? Like the same thing where some people are early adopters, some people—and it is like PMax, I would say it probably performs better under some circumstances versus others, right?
And so there’s a pretty wide gamut, I think, in the experience that people have had when they’ve tested it and they’ve run it. So it’s like everything else, your mileage may vary. So, if it feels like something that you want to try or you think could potentially work in your account, like, you know, give it a whirl. See what happens with your particular um, you know, your particular setup. And, you know, you’ll know, I think, fairly quickly if it’s going to be something that’s going to work for you. And I’m sure it’s going to evolve because like the PMax that we have today is not the PMax that we had when that baby rolled out, right? It’s not.
Frederick Vallaeys: Not at all. Right.
Julie Bacchini: So I feel like AI Max will probably be the same way. Like they rolled it out like here’s what it is, right? And then it’s going to morph over, you know, over time. I would expect. So we’re still in that first, that first release period really.
Frederick Vallaeys: And that makes sense. And so that’s hopeful news then for the people who are managing PPC accounts because despite all of the automation capabilities and the AI, it still sounds like it’s not a guaranteed win. Um, now having been part of so many of the PPC chats and obviously you moderate the vast majority except the one or two weeks when you go on vacation and you have a guest host. Uh, but are there any patterns that you’ve seen around AI Max and when it does work better than in other scenarios? Have you picked up on any of those?
Julie Bacchini: Um, I mean I would say generally it’s more, things are harder for lead gen, and I would say that’s true for Google as a whole. So, if you’re lead gen things like you’re not the target market for Google advertising, like you’re just not. And it’s okay. It is what it is.
Frederick Vallaeys: Wait, did I just—did I just hear you say if you’re lead gen, you’re not the target audience for Google Ads?
Julie Bacchini: Yeah, I said it. I said what I said.
Frederick Vallaeys: Heard it here. Uh, well, that’s amazing. I mean, so uh, go a little bit deeper on that. I mean—
Julie Bacchini: Okay, as a lead gen PPCer, I will say that most of what Google comes out with is ecom first, right? It doesn’t mean that you can’t use it for lead gen. You can; we do, right? But most of what they come out with is led by ecom. It’s designed for ecom. It’s designed for the kind of volume that ecom brings.
It just is. And I’ve made peace with that because it’s been this way forever. And every year at GML, I get my hopes up. I have a square on my bingo card every year: “We’re going to get serious about lead gen,” right? And every year that does not get crossed off. It is what it is, right? So I think the same is still true for PMax. The people who find the most success with PMax are probably ecom. I’m not saying that people aren’t finding creative ways to have it be successful in lead gen.
There are plenty of people who are doing that, but it requires a higher level of creativity to make some of these things work. They’re just designed for the cadence and the rhythm and the way that ecom works, which is different from how lead gen works, right? They are just different animals.
Frederick Vallaeys: Do you think it’s about the richness of the signal, where lead gen tends to just be lower volume in general, or a longer sales cycle? Is that kind of a fair point? Would it then be okay to say that maybe a lower volume e-commerce client is going to struggle in many of the same ways as lead gen?
Julie Bacchini: I would say probably, because I think the biggest factor that makes it more challenging for lead gen folks, or lower volume folks in general, to find the type of efficiency that high volume ecom accounts can find is that the internal machine learning, AI, whatever you prefer to call it, the mechanism that drives all of the automation inside of the platform, requires a certain baseline critical mass of data. Right?
And Google’s honest about it, right? But if you struggle to consistently meet or surpass those thresholds, it is more difficult for the internal automation and AI to really find a groove and find momentum in the same way it can when it has more to work with.
And if you think about it, it makes sense, right? The way the automation works is the more information it has, the better it is at figuring stuff out, figuring it out faster, and doing it more efficiently. So if it has 500 or more conversions happening in a 30-day period, or 5,000, that’s so many more data points for it to look at to find patterns and commonalities than if you’re in an account where you have 30 conversions, or 52 conversions, right?
Like, that’s just math, and it’s the data being put into the mechanism that’s doing all the things it’s doing behind the scenes. So it kind of is what it is. I’m not trying to throw shade in the way that maybe it seems like I am. I’m just being realistic about what it’s like when you work on accounts that aren’t the primary use case, because we don’t have massive amounts of data flowing into the algorithm.
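The volume argument here can be made concrete with a quick back-of-the-envelope calculation. This is just a statistical sketch, not anything Google publishes about smart bidding internals: the uncertainty around a measured conversion rate shrinks with the square root of the click volume, so a low-volume account hands any learning system a much noisier signal, even at the same underlying conversion rate.

```python
import math

def conversion_rate_interval(conversions, clicks, z=1.96):
    """Approximate 95% confidence interval for a conversion rate,
    using the normal approximation to the binomial."""
    rate = conversions / clicks
    margin = z * math.sqrt(rate * (1 - rate) / clicks)
    return rate, margin

# A lower-volume lead gen account: ~30 conversions on 600 clicks in 30 days.
small_rate, small_margin = conversion_rate_interval(30, 600)

# A high-volume ecom account: ~5,000 conversions on 100,000 clicks.
large_rate, large_margin = conversion_rate_interval(5000, 100000)

# Both accounts convert at 5%, but any system learning from the small
# account sees that rate with roughly 13x more statistical noise.
print(f"small account: {small_rate:.1%} +/- {small_margin:.2%}")
print(f"large account: {large_rate:.1%} +/- {large_margin:.2%}")
```

The same logic applies per keyword or per audience segment, which is why slicing a small account thinly makes the problem worse.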
Frederick Vallaeys: Well, and so let’s stop forcing the square peg in the round hole here is maybe what I’m hearing. So, uh, where’s the round hole or where’s the round peg for lead gen advertisers? Where are you investing now to get those leads for Neptune Moon?
Julie Bacchini: I mean, you still have to be on Google. It’s just harder. I think there are some options that can help you, too. I don’t want to say trick the back end, but it’s things like portfolio bidding. You have to be a little bit more creative. You have to build things in a way where you’re trying to lessen your data disadvantage, I would say, if I had to put a bigger topic over it. Knowing that the volume of conversions impacts everything that’s able to happen inside the platform itself, you have to be a bit more creative.
If you’re lower volume, you have to try to get to that critical mass so those efficiencies can start to happen, so that pattern recognition can happen inside the automation for you in the same way it would for an account that has gobs of data coming in, right? You just have to think a little bit more about how you structure things, that type of thing.
Frederick Vallaeys: And okay, so that’s the important nuance, I think, that I didn’t pick up on when you made the initial point. Yes, Google Ads, or Google search, is a place where people are looking for lead gen solutions, for what lead gen companies sell. However, you’re saying the Google Ads platform isn’t designed first to support those advertisers, so you have to find these workarounds to make it work for you, because the audience is there. It’s just harder to connect with the right audience than it would be if you were an ecom advertiser.
Julie Bacchini: And it’s harder than it was five years ago. The automation, all of the automation, which is sort of under the AI umbrella again, is designed for high data volume accounts. That’s how it’s built. That’s what it’s looking for. We used to have more options of things we could do, and we were doing our own bidding, and half of the things that happen in the background now didn’t happen in the background in the same way. Right.
Frederick Vallaeys: So I think the challenge then, you’re saying, is, okay, we could still kind of go back to the old ways of manually adding keywords, manually managing bids, but there’s all this automation that’s come in and kind of derails that. And that was also your earlier point, right? Things might be running fine now, but then all of a sudden Google does an algorithm change or launches a new feature in Google Ads, and that messes with that perfect workaround setup you had, and now you have to figure out the new workaround to continue making it work, right?
Julie Bacchini: And I think you also have to think about how long you can do workarounds, right? That’s the other piece that people in the lead gen space talk about and think about. It’s not realistic to say, “Well, I’ll manual bid forever,” right? In some ways you feel like that would solve all the problems you’re facing with automated bidding needing higher levels of data. You’re like, “Well, just do manual bidding then. That seems like it should be a simple solution.” Can you do that? Yes, you can do that.
And you may find that you have a perfectly acceptable level of success as far as the advertiser is concerned. That’s absolutely a possibility. But what happens now is when you don’t use smart bidding, automated bidding, it’s called 19 different things, when you’re not taking advantage of one of those technologies within the platform, then you also don’t get access to a lot of the goodies that come with it, right, that Google’s doing in the background for you. So you can try to hold on to, oh, I’ll just do it the old way.
I feel like the days are numbered for that. You could make an argument that it’s already over. Although there could be pockets of people who are like, I’ve got extended text ads that are still running and doing great, right? There are people out there who have ETAs that are still going gangbusters. They exist. But does that mean that’s how you should think about your account going forward? Probably not, right?
Frederick Vallaeys: Like so I think that—well you’d have to invent time travel for that, right?
Julie Bacchini: Right. But you pause them and you can never unpause them again. So it’s all true. So that’s not realistic, right? And it hasn’t been realistic for a while. You have to be able to absorb what’s happening, see what you’re experiencing in the accounts you’re responsible for, and then try to make responsible decisions about what things to try. I think it’s nice having programs like this and the PPC Chat community, where people can bounce ideas off each other in a safe way and say, hey, has anybody had any success with X, Y, or Z?
We get a lot of questions posed in the community, not even necessarily during the chat time. People will pose questions like, hey, has anyone had success with this scenario, running these types of ads or this type of campaign? Or, I’ve never run Demand Gen, what do I need to know if I’m going to do that? So we have a lot of conversations like that, where people are trying to adapt and take advantage of the technology that exists, even if it can feel a little bit challenging, a little bit scary, a little bit like, I don’t really know what I’m supposed to do with this in this account.
But I think people have a hunger to want to try. Nobody wants to get left behind. Nobody wants to wake up one day and suddenly feel like a dinosaur who doesn’t speak the language of anything that’s going on. Nobody wants that.
Frederick Vallaeys: And what’s that language? Obviously we’re talking about AI here, but it almost leads into a question of what people should learn today to remain relevant in the future.
Julie Bacchini: Okay. So, I’m not the hugest fan of AI. I’ve been vocal about that. Like, that shouldn’t be a surprise to anybody who’s followed me anywhere. Um, because I have questions about it and I think it has shortcomings. But I also realized that the train has left the station. So my mindset and the mindset that I try to encourage people to have is that you know everything changes in this industry and we are affected by changes that are outside of our industry.
So we have certain things happening in AI that are directly in the day-to-day of what we do, right? Google has their assistant inside of Google Ads now, where theoretically you can just have a little conversation with it, tell it what you’re looking to do, and it’s going to spoon-feed you everything you need to do. That’s new, right? That didn’t exist before. You can ask questions like you would ask a person. We didn’t have that before.
Like, hey, why are my CPCs up? Why are my conversions down? You can ask it now, right? You couldn’t do that two years ago. You had to figure it out. And I’m not suggesting that you only ask the Google assistant why something’s happening, because you should probably verify what it’s telling you. But that tech didn’t exist before.
So, how do we, how do we manage all that? How do we take the knowledge that we have for however long we’ve been working in PPC, whether you’ve been doing it for a year, whether you’ve been doing it for 5 years, 10 years, 20 plus years, how do you take the knowledge that you have of how all these things work and sort of what is needed to create successful campaigns and then how do you integrate that with the AI that seems to be everywhere?
Like figuring out which pieces are actually helpful. Like, oh, this is like a super timesaver. I could automate this, this, or this, right? I can have a script that does this. I can create a little um, you know, bot that does something for me that saves me time.
I don’t have to do it manually anymore. I can just skip to the part where it really requires my brain. Like, these are all the things that I think people are really chewing on right now, trying to figure out how they can use AI in a reliable way so that they’re not potentially having AI generate something that turns out to be incorrect and then you’re moving forward using incorrect data. Um, that’s a real fear.
But where does it fit in? Like where does it make sense? How can it make you more efficient? Where can it fill in gaps that maybe you know you had previously that it actually does like does a nice job with? And people are experimenting like crazy and they share, you know, they share their thoughts of like, oh, I created this or oh, these are the prompts I use for that or whatever. So, there’s a lot of, there’s a lot of testing and experimentation going on right now in general.
Frederick Vallaeys: Yeah. And I’d love to hear some of your favorite examples of what you’ve seen people do. But before that, I think it also kind of breaks down into there’s automation that comes out of AI where it builds scripts or it just does the thing you want it to do.
And then there’s AI as the thinking partner, the ideation platform, maybe pushing you to thoughts you hadn’t had in the past, right? If you’re going to be writing some ads, have you put yourself in the shoes of every different persona that might be consuming those ads? That used to be really difficult, because I can’t put myself in the shoes of, say, an 18-year-old Black woman. We don’t have a lot of common experiences.
So it’s difficult for me to write ads toward that audience, but AI has absorbed so much information that it is in some cases able to do that. So how do you see that breaking down in terms of the ways that people use it? Do they like it as a thinking partner? As an automation partner? Or is it one or the other?
Julie Bacchini: It’s kind of split, I think. Like some people love it and are using it for everything, right? They’re using it for strategy. They’re using it for ad copy. They’re using it to generate images. Like you have people who are in the like what else can I have it do camp, right? And then you have other people who are like, I don’t want it to do anything because I don’t trust it at all. Like those are the two like far ends of the spectrum. And I would say you have a lot of people who are sort of in between who want to figure out ways that it can help but also not open themselves up to risk if the AI is wrong. So I think um, finding ways like one of—
Frederick Vallaeys: Go ahead. Well, I was going to say talk a little bit about risk because I know you have a great example from your own business on ways in which risk plays out that I hadn’t necessarily thought about, but where it goes pretty deep into uh, talking to your insurance company, right?
Julie Bacchini: Yeah. So, I’m a big person who talks a lot about what I call the business of PPC. Obviously we have all the stuff we do where we take care of the brands we work on and the accounts and all that, but I also spend a lot of time thinking about the fact that as a practitioner, whether you’re an agency, a consultant, or a freelancer, we’re providing a service to our clients. And it is smart to carry various types of business insurance, so if you’re not, I would think about doing that. But even if you are, this technology moves fast. It moves faster than legislation.
It moves faster than your insurance policy, right? It just moves fast. So I’ve been having some conversations with my insurance agent, and they’re investigating answers to these questions, because they’re like, “Well, that’s a really good question.” So you’re asking, okay, for example, what if a client uses AI to dig into all of their internal data, and the AI puts together, let’s say, a summary: this is everybody who bought what we’re selling in the last year, and these are their personas. They put this whole massive thing together, right? And they give it to you as their PPC strategist and they’re like, “Hey, this came from all of our data. We want you to use this.” And you’re like, “Okay, great.”
And then you have this data, and they’re instructing you to utilize it when you’re coming up with strategy, when you’re making decisions, when you’re prioritizing, all the stuff that we do. What happens if the data that got spit out of whatever AI they use is wrong, right? Where does that liability lie? Is it solely with the client because they gave you bad information? I don’t know. These are sort of existential questions, questions that I’m pondering as well.
Frederick Vallaeys: And that’s an interesting one. I just signed our new business insurance documents, and honestly, for the first time in 10 years of running this business, I think I understand what I signed, because I took the PDF, gave it to ChatGPT, and asked it: does anything look weird in here? What should I be concerned about? In the past I would have just been like, this is 150 pages of probably boilerplate stuff, so here’s my signature, here’s my payment, let’s move on to the next item of business, right?
But I find it interesting how AI just opens up these opportunities. Even with healthcare insurance, when picking a health plan, put in what your plan covers and the medical billing codes, and now you can ask your custom GPT a question. Like, hey, I’ve got this thing, do you think it’s going to be covered? And it looks at my insurance and says, yeah, that looks like a $60 co-pay but otherwise covered. And again, these are things where in the past I would have just been like, yeah, I go to the doctor, and whatever bill comes out, it is what it is.
Julie Bacchini: Oh, see, I have a spreadsheet. I just went through. I made a spreadsheet with all the options and all of the costs from the previous year and I blended them together to figure that out. But that’s me.
Frederick Vallaeys: Did you make it or did you have AI help?
Julie Bacchini: No, I made it. No, I didn’t use AI.
Frederick Vallaeys: So, is this a service you sell if anyone wants—
Julie Bacchini: No. Oh my god, no. It was migraine inducing. No, that is not true. Um, okay.
Frederick Vallaeys: But so that’s fascinating, right? How we have to think about AI risk at these multiple levels, and then how it comes down to insurance. And it also goes into one of the big roles that humans play: verification. Okay, so we get this data that’s generated by AI. Can we verify that it’s grounded in the reality of the business? To what degree do you think agencies need to evolve into being more of a business partner, rather than just being the ones that get that doc and do something with it but have no insight beyond: where did that data come from? Was it correct? Was it the most exhaustive data? Should it have been more? Where do you fall on that?
Julie Bacchini: I think we’re going to have to do more of that kind of work, because someone has to double-check, right? Where we are in the evolution of AI, somebody needs to double-check. I had a conversation with another PPC pro, and we were talking about the fact that if you’re working with a client who expresses an interest in having a lot of AI be part of the process, are you building into your pricing and your time the fact that double-checking is going to have to happen? Someone should responsibly double-check stuff. Who is that? Is that somebody on the client’s end? Is that us?
I feel like we’re at a very interesting time. The way things have been done previously, they would send you a spreadsheet, like, oh, we exported this from our CRM. Or, we’re going to give you a login to the CRM and you can pull whatever data you want, right? That’s how things have been done in the past a lot. Or they just send you an Excel sheet that has customer data or summaries of X, Y, or Z, whatever you asked for.
Now it’s just as likely that what you would get from a client would be that information shoved into their AI of choice. Um, and then it would spit back either, you know, some type of spreadsheet or a narrative or a description or a summary of like, oh yes, our typical customer profile is, you know, X, Y, and Z.
Frederick Vallaeys: Yeah. How do you know that’s right? I guess that’s my thing: I look for places where there’s going to be a problem, because I want to make sure there’s not a problem, so that when we’re doing the stuff we’re doing, we don’t find out later it was based on faulty information. And that can come not just from AI, to be clear. People can give you bad data just as much as AI can.
Julie Bacchini: Well, and that’s where your example is interesting because it’s like this is our ideal customer profile. I think some assumptions go into that and whether it’s generated by a human or an AI. We don’t really know every one of our customers, right? So, we’re drawing some parallels. We’re making some average decisions.
And as we all know, averages suck because nobody is average, right? But that’s a very different sort of misrepresentation from: well, I got the Google Ads numbers, and it had a thousand clicks and 100 conversions, so the AI says, “Oh, that was a 5% conversion rate,” which is incorrect because it did the math wrong; it should have been a 10% conversion rate. And when I think about that, having written scripts for so many years, one of the things about a script is that you can say, “Okay, here’s a thing you need to do,” and then something comes out of it at the bottom, and you can just take that spreadsheet of data and trust that it’s correct.
But what if you pulled the incorrect field? Maybe instead of pulling conversions, you pulled all conversions, and so you’re basing it on something that’s not in line with what the business wanted. The way I’ve dealt with that in scripts is that as the tool goes through its logic, it spits out in the debug log: okay, here’s what I’m doing, and here’s a sample piece of data. And then I can go in and verify, not a thousand pieces of data, but a few of them. And if they seem correct, then the assumption with deterministic code is that it’s probably made logically sound decisions we can trust a little bit more. Right?
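The debug-and-spot-check workflow described here can be sketched in a few lines. This is a generic illustration, not actual Google Ads script code: the row layout, the field names, and the `spot_check` helper are all hypothetical stand-ins for whatever your script emits.

```python
import random

def spot_check(rows, field, known_good, sample_size=5, seed=42):
    """Sample a few rows from a script's output and compare one field
    against values verified by hand in the UI. `known_good` maps a
    keyword to the value you already trust."""
    random.seed(seed)
    checkable = [r for r in rows if r["keyword"] in known_good]
    sample = random.sample(checkable, min(sample_size, len(checkable)))
    return [r["keyword"] for r in sample
            if float(r[field]) != known_good[r["keyword"]]]

# Hypothetical rows from a reporting script's debug output.
rows = [
    {"keyword": "ppc agency", "conversions": "12"},
    {"keyword": "ppc consultant", "conversions": "7"},
    {"keyword": "adwords help", "conversions": "31"},
]

# Values read manually from the Google Ads UI for the same date range.
# If the script had pulled "all conversions" instead of "conversions",
# these numbers would disagree and the mismatch would surface here.
known_good = {"ppc agency": 12.0, "adwords help": 31.0}

print(spot_check(rows, "conversions", known_good))  # [] means the sample agrees
```

Checking a handful of rows doesn’t prove the whole export is right, but with deterministic code it gives reasonable confidence that the logic pulled the field you intended.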
But the question becomes, how do you do these types of things with an AI? And for me, the way I see it is to ask the AI to build a piece of code for you. Ask it to build a piece of software.
And Gemini is doing more of this. Claude with their Artifacts can do it. But basically, don’t just give me an answer; put it in a table. Maybe give me some filters, so that if it selected keywords based on a minimum of five conversions, there’s a filter in the UI where I can set a minimum of 10 conversions instead. And that builds confidence, through a UI, that it is doing what I expected it to do.
But I do think there is a future where the crutch of the UI goes away, because we start seeing the AI consistently doing a good job. We can start to trust it more and more, and these crutches that are the UI go away, and eventually we can just tell it, hey, can you find me high performing search terms we should consider adding, and it just knows what that means and does it. And then you can question it. You can ask what the math behind it was, and it’ll explain it, but you don’t need to see everything anymore.
Julie Bacchini: I think that’s true. And I think one of the other things you can do is ask questions you already know the answer to. In other words, what we’re talking about here is how we’re using AI going forward, something a client or your stakeholders want from you a week from now, two weeks from now, a month from now, right? That’s a different animal. If you want to get a feel for how accurate a particular AI is under certain conditions, you can ask it questions you already know the answers to and see how correct it is, right?
I feel like that’s a decent way, where we are right now, to develop what I would call your confidence level in a tool you might want to use. So instead of giving it something you’re still trying to figure out yourself, where you don’t have all the information, why don’t you ask it things you already know the answer to?
And then you can see, does it come up with the correct answer? That’s a better way, in my mind, of putting yourself in a position to have a higher confidence level when you later ask a question you don’t already know the answer to. So that is one way you might want to spend a little bit of time: pull some historical data from periods where you already know the performance answers.
You know, like you already know that stuff. You’ve already done that work. Um, that’s another way that you can work towards validating which technologies, what kind of prompts, like what gets you where you need to go where it’s spitting out something that you feel confident enough that like yes, I would use this to move forward on the things that I’m supposed to do, you know, going forward from here.
So that’s another way that you can work with AI where it’s like maybe it’s not doing something for you in your immediate like deliverable workflow, but you’re building, you’re building a way for you to feel confident about using it in particular situations because you feel like you’ve vetted it almost, right? Like I think that’s something that doesn’t get talked about a lot, but it probably should.
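The “ask it questions you already know the answer to” approach is essentially a small evaluation harness. Here is a minimal sketch of that idea; `ask_ai`, `fake_ai`, and the sample questions are all stand-ins for whatever tool and historical data you would actually use.

```python
def score_known_answers(ask_ai, known_qa):
    """Run questions with already-verified answers through an AI tool
    and report how often it agrees. `ask_ai` is whatever callable
    wraps the model or assistant being vetted."""
    results = []
    for question, expected in known_qa:
        answer = ask_ai(question)
        results.append((question, expected, answer, answer == expected))
    accuracy = sum(1 for r in results if r[3]) / len(results)
    return accuracy, results

# Questions from historical periods where the analysis was already
# done by hand, so the correct answer is not in doubt.
known_qa = [
    ("Which campaign had the lowest CPA in March?", "Brand Search"),
    ("What was the account conversion rate in Q1?", "4.2%"),
]

# A stand-in "model" that gets one of the two questions right.
def fake_ai(question):
    return "Brand Search" if "CPA" in question else "5.1%"

accuracy, _ = score_known_answers(fake_ai, known_qa)
print(f"confidence check: {accuracy:.0%} correct")  # 50% correct
```

Keeping the per-question results, not just the accuracy score, also shows you which kinds of questions a given tool tends to get wrong.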
Frederick Vallaeys: Yeah, that’s a really good piece of advice on how to use AI to build confidence and make sure it works the right way. And even though you’re an AI skeptic in many ways, what other tips do you have that people could put in their workflows to get the most out of AI when they have no choice but to use it?
Julie Bacchini: I mean, sometimes you don’t have a choice. Sometimes it’s dictated to you: this is what you’re going to do. We’ve all been there. And I think you just want to document what you’re doing, especially if it’s something you haven’t done before, where the expectation is that you’re going to be integrating different types of AI at different points of what you’re doing. I think you want to be fairly meticulous in your documentation: this is what I did, this is how I did it.
Because sometimes you can ask an AI the same question twice and it’ll give you different answers. So you want to be meticulous in documenting, and it’s a pain, right? Who wants to have to document everything they’re doing? The whole point of using AI is to save you time, like, oh, I don’t have to do that anymore; I have this tool that reliably spits out what I need if I feed the right information into it. But while you’re developing that, while you’re being asked to use it in ways you haven’t used it before, I would document everything I did. Keep essentially your own change log: here’s what I did, this is the prompt I put in, this is the data it had access to, this is the instruction I was given.
This is what the results were. You just want to track that, because you want that historical information for yourself, too, so that you can find your own pattern recognition, right? Like, wait a minute, I asked it to do something very similar three or four times, and the results I got were very different. Two of them were super similar, and the other two were very, very different, but I followed the same methodology. That would make me go, hm, what is going on there? If you’re not documenting it, you probably wouldn’t know that, because your focus is elsewhere. Your focus is, oh, I asked it to do X, Y, and Z, it gave me this, and now I’m going to go on and do tasks A, B, and C. Right?
For me, I like to document stuff. It’s just a habit of covering yourself, if questions come up as to why you did something, or what happened here, or what the sequence of things was. I think we’re in a place where all of this is so new and changing so rapidly that my biggest piece of advice is to create a documentation system. It doesn’t have to be anything crazy. You can have a Google Sheet where you note: this is what I was asked for, this is how I created the prompt, this is the platform I used, this is what I fed it, and this is what it spat out. Just keep a record of that for yourself.
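A documentation system like the one Julie describes doesn’t need to be elaborate. As one possible shape (the field names here are just a suggestion, not a standard), an append-only CSV change log could look like this:

```python
import csv
from datetime import datetime, timezone
from pathlib import Path

LOG_FIELDS = ["timestamp", "platform", "task", "prompt",
              "data_source", "result_summary"]

def log_ai_use(path, **entry):
    """Append one row to an AI-usage change log, writing a header
    row the first time the file is created."""
    path = Path(path)
    is_new = not path.exists()
    with path.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=LOG_FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({"timestamp": datetime.now(timezone.utc).isoformat(),
                         **entry})

# A hypothetical entry for one AI-assisted task.
log_ai_use(
    "ai_change_log.csv",
    platform="ChatGPT",
    task="negative keyword ideas",
    prompt="Suggest negatives for a B2B lead gen search campaign...",
    data_source="last 90 days of search terms report",
    result_summary="22 suggestions; 15 adopted after review",
)
```

The same record could just as easily live in a Google Sheet; the point is that every AI interaction leaves a row you can look back at when results start to diverge.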
Frederick Vallaeys: Yeah. It makes me think of my analogies about the human role in the automated PPC world, like the PPC doctor. Nowadays, any time you go to a doctor, they’re recording, right? They’ve either got a human assistant or a recording device, and everything’s being transcribed. Why not have something like that running on your computer? Every time you get something in or out, you say, hey, grab a screenshot. You talk through, here’s what I think about this, or here was my thought process. And at the end you have this big transcript, and the AI actually does a good job of summarizing it. In cases where you give it very specific information, it doesn’t hallucinate quite as often. Plus, you still have the recording, right? So if you go back to the summarization and think, I don’t think that’s what I said, well, go and listen to the recording, and maybe you did say it. How often have I found myself mixing up words where I’m like, that’s not what I said, but actually it did come out of my mouth that way? The human brain works in funny ways.
Julie Bacchini: It does. It does. So I like that whole idea of taking notes, copious notes, recording, documenting. But it also seems like a great opportunity to use AI to help you. And I’m a big—
Frederick Vallaeys: Vibe coder. Then the AI’s checking the AI.
Julie Bacchini: I don’t know. You just make sure. Again, I’m big on don’t expose yourself to risks you don’t need to, right? I could have that as a tattoo. It’s a mantra as far as business things go. Just be sure that if push comes to shove and you have to explain why you did something or where something came from, I don’t want to be like, I don’t know, I just did the AI. That can’t be the end of my answer.
So, again, documenting stuff is not the sexiest work we do, but we already do it for a lot of things, right? We have spreadsheets with all the ad copy. We have all the images. We do work like this on a fairly regular basis when we're trying to keep track of what tests we're running this month and what's coming up next.
If you think about it as that type of workflow, I don't think it's terribly different from a lot of what we're already doing when we're managing the flow, the strategy, and our testing: what are we testing? What's our priority this month? That type of thing. It's just an extension of that.
Frederick Vallaeys: Yeah. I'm curious, do you use a project management tool?
Julie Bacchini: Do I? No. No, I don’t.
Frederick Vallaeys: Okay.
Julie Bacchini: But I work—I'm a consultant. It's just me, so I only have to create stuff that makes sense for me. That's a different situation than if you have multiple people working on things. If you do, you probably need something a little more codified: this is where we put it, this is the format we want it in, right?
Somebody should come up with a template: this is how we're going to do this, so we all document it in the same way. For me, it just has to make sense to me. So I'm a bit of a different use case than someone working with multiple people. And if you're in-house, you have different documentation needs than if you're working as a consultant.
Frederick Vallaeys: Yeah. I built Founders Voice, a content management system for myself to figure out what to blog about or what to put on social media. Even though I'm really the only user of it within my company, it was very important because I struggle with the same thing you're describing. Sometimes we do a data study, the analysts send me the CSV file, I process it, it becomes different iterations, and then I go to Claude, then to Gemini, then to GPT, and I ask them different questions about it. Eventually I'm just like, wait, I said this was the conclusion, but where did that conclusion come from again?
And good luck finding that one chat, because I might remember it was Claude who said it, but Claude at some point ran out of memory. So I started a new conversation, and now I'm spending an hour digging through old conversations.
So I decided that within this tool that manages my projects, I do need a place to record which AI chat it was, right? It keeps everything in one place and makes it much more findable. That's why I'm asking how you go about it. I know everybody's brain works in different ways, but for me it's helpful to have these things connected in some structured manner, even though I don't have other people necessarily looking at it.
Julie Bacchini: Yeah, and I think that's a great point. Everybody thinks about things differently. I've been saying forever that there's hardly ever one right way to do anything in PPC, and all of this AI stuff definitely falls into that category. Is there a right answer? No, there's not a right answer, and there's not a wrong answer. There's a myriad of answers for how you might do things: how you might utilize a tool, how you might apply it, how you want to manage your backend, how much of it you want to retain.
There's no right or wrong way to do it. I just always like to bring up the business side and the risk management piece, because it pains me to think that people in the community could end up in hot water simply because it didn't occur to them that you have to document this stuff, that it might be important, or that somebody might ask for it. God forbid you get sued over something that happened, or you get pulled into a suit where, even if you're not the direct party, you touched the account, so you're going to get pulled in.
There are a lot of things that can potentially happen, and I hope they never happen to anyone listening today. But I had an experience like that very early on in my career as an independent business person, and it molded me into somebody who thinks about all of this very deliberately in everything I do. So I try to share that with others, because my pain should benefit other people so they don't have to go through some of what I went through.
Frederick Vallaeys: Yeah. Thanks for doing that. So we're coming to the end of the episode here, but Julie, is there one story, and you can take this to either the happy place or the not-so-happy place, one specific AI story where it either did something amazing for you, or it was a classic AI fail that illustrates why people should be cautious?
Julie Bacchini: Again, I don't use AI a ton, but I did have a client provide a massive, and I mean massive, document, which we'll call guidance: here's our sales data, here's who bought stuff, that sort of thing. And there wasn't really any way to verify it.
So you ask questions like, okay, can I just see the raw data for the actual things that were purchased? Oh, you have demographic data, can I just see the raw demographic data? And that wasn't possible. The only answer I got was that it came from AI. That made me uncomfortable, because it's putting a lot of faith in something I had no way of verifying. That piece, for me, over the last year, is probably the big one, and it's happened more than once.
That's the space where I'm really like, ooh, I'm going to need more than that. Some clients are happy to provide more, and others are like, but I gave it to you in the AI. And then you're like, okay, I guess we're going with what the AI said.
Frederick Vallaeys: Yeah, that absolutely is. I think the AI handoff problem is a big one, whether it's one AI handing off to another AI to continue work on a previously developed piece, but also handing off to humans, right? You should be able to ask, okay, what was the underlying data set, and go back all the way to the beginning of the AI chat: what was the PDF, what was the CSV that was attached? Let me look at that, let me verify. Certainly very needed. Okay, well, Julie, this has been great. If anyone's interested in working with you or learning more, where should they go?
Julie Bacchini: Probably the easiest way to connect with me is on LinkedIn: Julie Friedman Bacchini, Neptune Moon. Obviously you can find me through PPC Chat as well. We are everywhere: we have Slack, Twitter, Discord, and a group on LinkedIn. As far as PPC Chat goes, we're everywhere. I'm highly findable. If you Google me, there are a bazillion results, so you can definitely find me if you're interested in chatting about anything.
Frederick Vallaeys: Great. Well, thanks everyone for watching, and thanks Julie for being a great guest today. Please subscribe if you want to know when the next episode comes out, and engage with us in the comments and questions; we'll do our best to answer them. With that, thank you Julie, thank you everyone, and we'll see you for the next one.