Quality Score isn’t new. It has been around for nearly two decades. What’s new is how uncertain many advertisers feel about what to do with it today.
At its core, Quality Score is Google’s estimate of how relevant your ads, keywords, and landing pages are to a searcher. It’s based on expected click-through rate, ad relevance, and landing page experience. Historically, higher scores often meant better positions and lower cost per click. Google’s goal was simple: reward ads that create a better user experience.
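For context, the mechanics Google has long used to explain this are easy to sketch. In the simplified, textbook version of the auction, your rank is roughly your bid multiplied by your quality, and you pay just enough to beat the ad ranked below you. Today's auction involves more signals and thresholds, so treat the numbers below as illustration only, but they show why a higher Quality Score historically translated into better positions at a lower cost per click.

```python
# A simplified, textbook version of the search ad auction, only to illustrate
# the mechanics Google has described publicly. The live system uses more
# signals (thresholds, context, formats), so these numbers are illustrative.

def ad_rank(bid: float, quality_score: float) -> float:
    """Classic simplification: rank = bid x quality."""
    return bid * quality_score

def price_paid(next_best_rank: float, quality_score: float) -> float:
    """Pay just enough to beat the ad below you, discounted by your quality."""
    return next_best_rank / quality_score + 0.01

# Advertiser A bids less but has higher quality; B bids more with lower quality.
rank_a = ad_rank(bid=2.00, quality_score=8)   # 16.0
rank_b = ad_rank(bid=3.00, quality_score=4)   # 12.0

# A outranks B and pays well under its $2.00 bid.
print(price_paid(next_best_rank=rank_b, quality_score=8))  # ~1.51
```

In this toy example, the advertiser with the lower bid wins the better position and pays noticeably less than its maximum bid, purely because its quality is higher.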
Frederick Vallaeys, CEO & Co-Founder at Optmyzr and part of the team that helped build Quality Score during his time at Google, explains that QS was never meant to be a marketer’s KPI.
It started as a way to prevent search ads from becoming a pure “highest bidder wins” system filled with irrelevant results.
According to him,
“Quality score isn’t a cosmetic metric. It’s the economic engine that makes the whole system sustainable.”
But Google Ads has changed now. Bidding is automated. Broad match expands intent beyond tight keyword control. Smart bidding optimizes toward conversion data. AI systems decide which ad combinations show for which queries.
Quality Score is still there. It’s still visible and still calculated. But the real question many marketers have is whether it still deserves active attention in an automation-heavy world.
Why Quality Score feels broken now
If you’ve been anywhere near PPC Slack or Reddit lately, you’ve probably seen the same complaints.
- Keywords that are obviously relevant are flagged as “Rarely shown (low Quality Score).”
- Broad match terms sit at a 2 or 3, even though they’re converting.
- Accounts are hitting ROAS targets but still have red warnings all over the keyword report.
It feels inconsistent. If performance is strong, why does the account look “broken”?
“Part of the issue is how Quality Score works. The number you see isn’t a live, auction-by-auction rating. It’s an aggregated estimate, mostly based on past exact match performance. It doesn’t update in real time for every query your keyword matches to.”
Google has also said that the visible Quality Score is a diagnostic metric and isn’t directly used in the auction. But Ad Rank still considers expected CTR, ad relevance, and landing page experience, which are the same components that make up Quality Score. So it’s easy to see why people get confused about how much it really matters.
Fred puts it this way: Quality Score was never designed to tell you whether a campaign is profitable. It’s not a KPI.
It’s a diagnostic that reflects how the system views relevance and expected usefulness in the auction.
Then automation complicates things further.
Broad match casts a much wider net. Smart bidding focuses on conversion signals, not keyword-level tweaks. AI systems decide which ad combinations show for which queries.
So now you can have:
- A low visible Quality Score
- Strong CPA or ROAS
- Healthy impression share
That gap is where the tension comes from. The red “low QS” label looks alarming in reports. Stakeholders ask what’s wrong. Teams feel pressured to “fix” a number that may not actually be the problem.
The real question today isn’t what Quality Score measures.
It’s whether it still points to the thing that’s actually limiting performance in automation-heavy accounts.
Read More: How to Choose the Best Keyword Match Type for Your Google Ads Campaign
Google Ads practitioners are asking these questions
Once you get past the surface frustration, the debate around Quality Score usually boils down to four practical questions.
Not theoretical ones. Not “what is QS?” questions. But workflow questions.
1. Is it still worth actively trying to improve Quality Score in 2026-style accounts?
In PPC Slack, one practitioner put it plainly:
Quality Score looks fine on phrase and exact. But broad match terms sit at 2–3, even when the landing page is tightly aligned.
On Reddit, others echoed a similar concern:
Is QS low because we’re doing something wrong?
Or because broad match now maps to so much intent that no ad can look perfectly relevant across all of it?
That’s the tension.
Automation has expanded intent matching. Broad match reaches far beyond tightly defined keyword variants. AI systems evaluate more signals. So when QS drops in these environments, it’s not always obvious whether the issue is structural or tactical.
We dove into one of our PPC Townhall episodes with Google Ads Product Liaison Ginny Marvin, and here’s what she said when asked about Quality Score in today’s environment.
“I wouldn’t say there’s been any change really in how to think about Quality Score.”
She reinforced the underlying principle:
“The whole goal is to serve an ad and a landing page that is highly relevant to the user, what their need is, what their problem is that they’re searching for. Fundamentals are still fundamental and they really haven’t changed very much.”
From her perspective, automation hasn’t replaced relevance. It still relies on it.
Jyll Saskin Gales echoes that view from the practitioner side in one of her LinkedIn posts. She refers to Quality Score as one of her favorite diagnostic tools, one that is just as relevant now as it was in the AdWords days.
But she does add an important guardrail.
“Quality Score is just an indicator, not an ‘end goal’ metric. Stop trying to get a 10/10. A 7 is a really good score. Even a 6 is fine.”
So the answer isn’t that Quality Score stopped mattering.
What has changed is the surface area.
Broad match expands coverage. AI systems use landing page content as a signal. And more variables influence how relevance is evaluated.
That can make visible Quality Score feel blunt or noisy. But that doesn’t automatically make it meaningless.
If your ads don’t reflect user intent, or your landing page doesn’t deliver on the promise of the ad, Quality Score will usually surface that.
2. Do broad match and smart bidding make Quality Score less meaningful?
Google has spent years encouraging broader matching. Broad match now leverages signals unavailable to phrase or exact match, including landing page content, historical behavior, and contextual intent modeling.
As an article from Search Engine Land recently noted, broad match is increasingly favored in pricing dynamics. Between 2023 and 2025, phrase match CPCs rose faster than broad match CPCs in many datasets. At the same time, close variants have made phrase behave more like broad without the same AI filtering advantages.
So what does that mean for Quality Score?
Aaron Levy, Product Evangelist at Optmyzr, does not believe its importance has diminished:
“I don’t think the importance has diminished over time. It’s still a core factor in determining ‘is this a good ad for this person and this query.’”
However, he acknowledges that the environment is more complex:
“Certainly there are more variables involved with the rise of broad match and AI Max, but the fundamentals are still the same. Do your ads and landing pages reflect a good user experience? If the answer is no, well, quality score will tell you.”
He also points out something that often gets overlooked:
“I do think the importance of landing page content and page experience has gone up quite a bit over the years, especially with AI Max, which uses landing page content.”
That shift matters.
Broad match widens query eligibility. AI systems rely more heavily on landing page signals. Relevance is evaluated across a broader range of user intents.
So when Quality Score drops under broad match, it may not mean something is broken. But it may reflect that the keyword now represents a much wider spectrum of intent.
In that sense, QS is not less meaningful. It is just being applied across a larger and more varied demand pool.
3. When is Quality Score not a constraint anymore?
In many accounts, you will see keywords with Quality Scores of 2 or 3 that still drive profitable conversions.
That raises a practical question:
If CPA and ROAS are healthy, does QS actually matter right now?
Aaron frames it this way:
“I like to look at QS as a check engine light. It indicates something is broken, but doesn’t indicate that the car won’t run.”
That distinction is critical.
A low Quality Score may signal friction. It does not automatically mean performance will collapse.
He adds:
“A low QS could still indicate that Google is punishing you for something it doesn’t like, but doesn’t necessarily mean it’ll shut the whole ad group down.”
Another important nuance is the difference between what you see and what happens in the auction:
“Remember there’s a difference between Visual Quality Score and Auction Time QS.”
The 1–10 number in the interface is aggregated and historical. Auction-time quality is recalculated for each query.
Aaron explains:
“AQS may be okay for the best queries, but may be poor for the worst which dominate what you can see.”
In other words, you might be winning on high-value searches while the visible score is dragged down by lower-intent traffic.
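Google doesn’t publish exactly how the visible 1–10 number is aggregated, but it behaves like a volume-weighted historical summary. The toy numbers below are made up purely to illustrate that point: a handful of high-intent queries can score well while a large tail of loosely matched traffic pulls the blended score down.

```python
# Illustrative only: Google doesn't publish the exact aggregation, but the
# visible score behaves like a historical, volume-weighted summary.
queries = [
    # (query, impressions, per-query quality estimate)
    ("ppc audit software",     500, 9),
    ("ppc management tool",   1200, 8),
    ("what is ppc",           9000, 3),   # loosely matched broad-match tail
]

total_impressions = sum(imps for _, imps, _ in queries)
blended = sum(imps * q for _, imps, q in queries) / total_impressions

print(round(blended, 1))  # ~3.8, even though the valuable queries score 8-9
```

That’s why a “4” at the keyword level can coexist with strong performance on the searches you actually care about.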
Quality Score is not the constraint when:
- CPA or ROAS targets are being hit
- Impression share isn’t being lost due to rank
- Revenue is steady and the account is scaling as expected
That does not mean ignore it.
Aaron is clear:
“I wouldn’t necessarily do changes solely for QS purposes at the expense of ROAS/ROI.”
Quality Score should inform testing. But it should not override business outcomes.
And if it appears stuck?
- Refine poor queries
- Improve landing pages
- Test new assets
- Focus on user experience, not cosmetic scores
4. How should advertisers interpret “Rarely shown (low Quality Score)” warnings?
This is where the debate becomes visible.
The “Rarely shown” warning appears in red. Reports look alarming. Stakeholders assume something is broken.
Aaron’s first reaction is candid:
“They should work on educating their stakeholders. I don’t care for this practice either…”
But he does not recommend ignoring it.
First, remember:
“VISIBLE quality score is not necessarily what’s calculated each time that keyword triggers a query/ad combo.”
Second, understand that expected CTR is relative:
“Your CTR could be 50% for a term but if Google’s expecting 70%, well, that’s going to be below average.”
So even objectively strong performance can still register as “Below average” in a competitive context.
When diagnosing landing page experience, he suggests asking simple questions:
“Does my landing page deliver the ‘promise’ I established with my ad?”
“Does it function well on all devices?”
“Does it load quickly?”
On the warning itself:
“The ‘rarely shown low quality score’ is worth investigating to make sure nothing is truly wrong, or if it’s just a broader term that Google may not like en masse but may like pieces of.”
The key word is investigate.
Do not panic, blindly restructure, or make the mistake of optimizing for optics alone.
He also emphasizes trend over snapshot:
“I don’t love looking at quality score as a snapshot in time, but rather a trend. If quality score is trending down, something new is broken and it’s worth investigating.”
So instead of asking: Is this keyword red today?
Ask:
- Is Quality Score declining across themes?
- Did something change recently?
- Is performance moving with it?
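If you want to make that trend check routine rather than ad hoc, a simple comparison of recent versus prior periods goes a long way. This assumes you keep some form of QS history (a tracker, a spreadsheet, an export); the data and the 0.5-point threshold below are invented for illustration.

```python
# Compare a recent period against the prior one and only flag sustained drops.
# The data and the 0.5-point threshold are made up; tune them to your account.
history = {  # week -> impression-weighted QS for one keyword theme
    "2025-11-01": 6.8, "2025-11-08": 6.7, "2025-11-15": 6.6,
    "2025-11-22": 5.9, "2025-11-29": 5.4, "2025-12-06": 5.1,
}

scores = list(history.values())
prior_avg = sum(scores[:3]) / 3
recent_avg = sum(scores[-3:]) / 3

if recent_avg < prior_avg - 0.5:
    print(f"Theme QS trending down: {prior_avg:.1f} -> {recent_avg:.1f}, worth investigating")
```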
What does this mean in practice?
If Quality Score is neither dead nor sacred, how should teams treat it in automation-heavy accounts?
The answer depends on context.
When it’s worth actively working on QS
There are still scenarios where Quality Score deserves focused attention.
- If impression share is being lost primarily due to rank, and CPA is creeping up, low QS may be contributing to inefficient auctions.
- If most keywords in a theme show “Below average” ad relevance, that often signals structural or messaging misalignment.
- If landing page experience is consistently poor across tightly themed keywords, that’s usually not noise. That’s friction.
And sometimes, the motivation is internal. If red QS labels are creating stakeholder anxiety or undermining confidence in the account, addressing obvious relevance gaps can restore trust, even if performance is stable.
In these cases, the work is rarely about chasing a 10.
It’s about tightening message match, refining search term coverage, improving landing page clarity, and strengthening alignment between intent and experience.
When it’s smarter to deprioritize QS
There are also situations where QS should not be the focal point.
- Broad match campaigns often show lower visible scores simply because they match to a wider range of searches. That doesn’t automatically mean something is wrong.
- In Performance Max or other automation-heavy setups, the system is optimizing for conversions. In those cases, the quality of your conversion data usually matters more than a keyword-level score.
- If an account is already hitting its efficiency targets and scaling smoothly, restructuring it just to improve visible Quality Score rarely pays off.
As Aaron put it earlier, it doesn’t make sense to chase a better QS if it hurts ROAS or ROI.
If revenue is steady, scaling is healthy, and you’re not losing impression share due to rank, Quality Score probably isn’t the thing holding you back.
Where Optmyzr helps
If you’re trying to separate signal from noise in Quality Score, this is where having the right visibility matters.
The main goal with Quality Score should be understanding what it means over time and whether it’s actually affecting performance.
Here’s how Optmyzr helps make that clearer.
1. Access historical Quality Score data Google doesn’t show, so you can identify real performance trends.
One of the biggest frustrations with Google Ads is that you can’t see historical Quality Score data.
Optmyzr’s Quality Score Tracker records QS daily at the keyword level and rolls it up to ad group, campaign, and account levels. That changes how you interpret it.
Instead of reacting to one red label, you can:
- See how QS moved over time, including highs and lows within a date range
- Compare QS trends against impressions, CTR, CPC, conversions, and cost per conversion
- Identify whether a CPC spike lines up with a QS drop
- Analyze how keywords are distributed across QS 1–10 buckets
It also breaks out the three components, Expected CTR, Ad Relevance, and Landing Page Experience, weighted by impressions. That makes it easier to pinpoint where the friction actually sits.
For large, complex accounts, that level of visibility matters.
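If you’re curious what building that history involves without a tool, here’s a rough do-it-yourself sketch using the official Google Ads API Python client. The quality_info fields only return today’s values, so the whole trick is writing a dated snapshot somewhere every day and letting it accumulate. This isn’t how Optmyzr’s tracker is implemented internally, and the customer ID, output file, and scheduling below are placeholders you’d fill in yourself.

```python
# Rough DIY snapshot of current Quality Score fields via the Google Ads API.
# Assumes the official google-ads Python client and a configured google-ads.yaml.
# The customer ID and output path are placeholders.
import csv
from datetime import date

from google.ads.googleads.client import GoogleAdsClient

QUERY = """
    SELECT
      campaign.name,
      ad_group.name,
      ad_group_criterion.keyword.text,
      ad_group_criterion.quality_info.quality_score,
      ad_group_criterion.quality_info.search_predicted_ctr,
      ad_group_criterion.quality_info.creative_quality_score,
      ad_group_criterion.quality_info.post_click_quality_score
    FROM keyword_view
    WHERE ad_group_criterion.status = 'ENABLED'
"""

def snapshot_quality_score(customer_id: str, out_path: str = "qs_history.csv") -> None:
    client = GoogleAdsClient.load_from_storage()          # reads google-ads.yaml
    service = client.get_service("GoogleAdsService")
    with open(out_path, "a", newline="") as f:
        writer = csv.writer(f)
        for batch in service.search_stream(customer_id=customer_id, query=QUERY):
            for row in batch.results:
                qi = row.ad_group_criterion.quality_info
                writer.writerow([
                    date.today().isoformat(),                # snapshot date
                    row.campaign.name,
                    row.ad_group.name,
                    row.ad_group_criterion.keyword.text,
                    qi.quality_score,                        # visible 1-10 score
                    qi.search_predicted_ctr.name,            # Expected CTR bucket
                    qi.creative_quality_score.name,          # Ad Relevance bucket
                    qi.post_click_quality_score.name,        # Landing Page Experience bucket
                ])
```

Run daily, this accumulates the keyword-level history that Google’s UI doesn’t give you; a tracker automates that collection plus the rollups and comparisons described above.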
Zeller Media, a US-based marketing agency managing a job recruitment platform with over 600 campaigns and 4,000 ad groups, used the Quality Score Tracker to monitor QS weekly and quickly isolate which component was slipping.
Faster detection translated into faster optimizations and ultimately lower CPCs, with measurable savings and reduced manual reporting time.
In another example, Topline Films, a video production company rebuilding its paid strategy, used historical QS tracking alongside ad text optimizations to regain momentum after a site migration.
Seeing trendlines instead of isolated numbers helped them focus effort where it mattered and contributed to significant lead growth within months.
The shift from isolated scores to pattern recognition makes it easier to determine whether something is structurally deteriorating or simply fluctuating.
2. Add performance context to Quality Score for better stakeholder reporting.
Sometimes the friction around QS is perception and not performance.
The red label shows up in a report and raises alarms. Optmyzr’s reporting clears up that confusion.
It lets you:
- Overlay QS with metrics like CPC, conversions, impression share, and ROAS
- Add calculated metrics (like ROAS = totalConvValue/cost)
- Save report snapshots with commentary
- Share static links with stakeholders
That matters because you can clearly show:
- QS dropped, but impression share stayed stable
- QS improved and CPC followed
- Performance improved even though visible QS didn’t move
Instead of debating a 1–10 number, you can frame it in performance context.
3. Use Root Cause Analysis to confirm whether Quality Score drove the performance change or just moved alongside it.
If performance shifts and QS is part of the story, you need to know why.
PPC Investigator doesn’t just show that clicks or conversions changed — it helps identify:
- Which campaign, ad group, or keyword drove the change
- Whether device, network, or combinations (like keyword + device) contributed
- Whether recent changes in account history correlate with performance shifts
Through Cause Charts and Root Cause Analysis, you move from:
“What happened?” to “Why did it happen?”
If QS drops and conversions dip, this helps you confirm whether the two are connected or just coincidental.
Agencies using this workflow have described it as the difference between guessing and diagnosing. Instead of manually pulling reports across multiple views, they can identify the top positive and negative movers quickly and decide whether a QS drop is truly the issue or just correlated noise.
That clarity is especially important in automation-heavy accounts, where multiple signals are shifting simultaneously.
4. Spot recurring structural gaps before they drag down account performance.
When QS is consistently low across themes, it’s usually structural.
The PPC Account Audit tool evaluates ad groups, ads, campaigns, and keywords using a scoring framework (0–100). It flags issues like:
- Overloaded ad groups
- Weak keyword-to-ad alignment
- Landing page inconsistencies
- Missing best practices
It also provides AI-generated summaries highlighting strengths and opportunities before you even dig in.
That makes it easier to diagnose whether a low QS is:
- A messaging problem
- A structure problem
- A landing page issue
- Or simply normal variance in a broad-match environment
Quality Score isn’t dead. But it isn’t what it used to be either.
Quality Score hasn’t disappeared.
What’s changed is the environment around it.
We’re no longer working in tightly structured exact match accounts with full control over every keyword-to-ad pairing. Now we have broad match, Smart Bidding, AI-generated assets, and systems making decisions in real time.
But that doesn’t mean Quality Score stopped mattering.
As Fred explains, Quality Score was never meant to be a profitability metric. It wasn’t designed to tell you whether a campaign is winning. Rather, it was built to protect relevance in the auction. To make sure useful ads could compete without simply bidding the highest.
This principle still holds true today.
Quality Score works best when you treat it as something that helps you understand how the system views expected CTR, ad relevance, and landing page experience.
It’s worth digging into when:
- Impression share is dropping because of rank
- Landing page experience is clearly weak
- QS trends downward across tightly themed areas
In those cases, it may be pointing to real friction in the auction.
But if CPA is healthy, revenue is steady, and the account is scaling efficiently, a low visible QS may not be the main constraint.
The key here is context, and that’s also where better visibility matters.
Optmyzr is built for that stage. It helps you track Quality Score trends over time, connect them to CPC and conversion data, diagnose root causes when performance shifts, and identify structural gaps before they become expensive.
Thousands of advertisers worldwide—from independent consultants to large agencies and enterprise brands—use Optmyzr to manage over $5 billion in ad spend every year.