Most advertisers learn the limits of Amazon Ads reporting after they’ve already made a bad decision. A campaign may look weak, so you pause it. A week later, it turns out that the campaign was driving conversions that hadn’t shown up yet. Now you’re unsure what to trust.
At Optmyzr, we ran a study across 14,991 Amazon Ads campaigns to find out exactly how delayed and unreliable the reporting really is. Here’s what we found: in the top 5% of campaigns, attributed sales figures grew by at least 18.75% between Day 1 and Day 17 of the same reporting period. Impression counts shifted by at least 36.67% in that same window. And one in every 20 campaigns showed discrepancies large enough to entirely change a budget or optimization decision.
That’s not just a data lag. It’s a decision-making problem waiting to happen.
Why your Amazon Ads data doesn’t reflect reality
Amazon gives you a lot of data. Campaign reports, search term reports, placement reports, advertised product reports, targeting reports, purchased product reports — it’s all there. What the platform doesn’t tell you is that some of it will look completely different if you pull the same report next week.
Before getting into reporting infrastructure and what to do about it, it’s worth understanding why the gap exists in the first place. Most of the confusion comes from three specific places:
1. Two different definitions of “when a sale happened”
A seller on Reddit recently posted about spending $2,000 on Sponsored Products and seeing $12,000 in Ad Console attributed sales against $9,500 in Seller Central revenue for the exact same week, and couldn’t figure out where the gap was coming from.
This is one of the most common reporting misunderstandings in Amazon Ads.
In campaign reports, sales are reported on the ad interaction date, not the date the purchase actually happened. A sale reported on February 3rd means an ad interaction occurred on February 3rd, but the actual purchase could have happened any time between February 3rd and February 17th, depending on the attribution window.
Over time, these numbers should align more closely as the revenue from the purchases “catches up” to the clicks.
So when you’re looking at a day’s performance and the sales number looks lower than expected, it’s not necessarily because that day underperformed. It’s because the purchases that will eventually be credited to that day haven’t happened yet.
There are a few other things sitting underneath this that make the numbers harder to read at face value. Attributed sales don’t reflect discounts applied in the cart, and inclusive taxes like VAT are subtracted from sales totals, so the revenue figure is already adjusted before you see it. Payment failures and orders canceled within 72 hours of creation get deducted from attribution metrics, but those deductions can take up to 7 days to process after the cancellation.
So when you pull a report before that processing is complete, you’re looking at revenue that may not fully exist yet.
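To make the mechanics concrete, here’s a minimal Python sketch of the timing problem, with invented numbers and a hypothetical `attributed_sales` helper. The same click date reports very different sales totals depending on when you pull the report:

```python
from datetime import date

# Hypothetical purchase records: each conversion carries the date of the
# ad click it will be attributed to, and the date the purchase happened.
purchases = [
    {"click_date": date(2024, 2, 3), "purchase_date": date(2024, 2, 3), "sales": 40.0},
    {"click_date": date(2024, 2, 3), "purchase_date": date(2024, 2, 8), "sales": 55.0},
    {"click_date": date(2024, 2, 3), "purchase_date": date(2024, 2, 15), "sales": 30.0},
]

def attributed_sales(click_day, as_of):
    """Sales credited to click_day in a report pulled on `as_of`.

    Only purchases that have already happened show up, but all of them
    are reported against the click date, not the purchase date.
    """
    return sum(
        p["sales"] for p in purchases
        if p["click_date"] == click_day and p["purchase_date"] <= as_of
    )

feb3 = date(2024, 2, 3)
print(attributed_sales(feb3, as_of=date(2024, 2, 5)))   # 40.0  -> looks weak
print(attributed_sales(feb3, as_of=date(2024, 2, 17)))  # 125.0 -> fully settled
```

The early pull isn’t wrong, it’s just incomplete: both numbers describe the same click date at different stages of the attribution window.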
2. The data you’re looking at is still moving
Just as there are different “versions” of when a sale happened, other metrics also keep moving after you first see them, in some cases for up to 28 days.
Click and impression data can shift for up to 3 days after the initial date as Amazon runs its traffic validation process, filtering out bot clicks and invalid interactions. Conversion data takes longer: initial figures are available within 24 hours, but Amazon restates conversion data at 1 day, 7 days, and 28 days after the conversion event.
And because conversions are reported on the date of the ad interaction rather than the date of purchase, those restatements reach back through your entire attribution window. Running Sponsored Brands with a 14-day window means restatements can update attribution data up to 42 days after the original report date.
The practical result is that any report covering the last 28 days should be treated as provisional. Numbers you pulled three weeks ago may look different if you pull them again today, because Amazon is still processing cancellations, returns, payment failures, and traffic validation against that period.
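If you want a rule of thumb for when a given report date has stopped moving, the arithmetic is simple. Here’s a back-of-the-envelope sketch, assuming the attribution windows and restatement schedule described above:

```python
from datetime import date, timedelta

# Attribution windows by ad type (per the FAQ below: 7 days for Sponsored
# Products, 14 days for Sponsored Brands and Sponsored Display).
ATTRIBUTION_WINDOW = {"SP": 7, "SB": 14, "SD": 14}
FINAL_RESTATEMENT_DAYS = 28  # last restatement runs 28 days post-conversion

def settled_after(report_date, ad_type):
    """Rough estimate of when a report date stops moving: the last possible
    conversion in the window, plus the 28-day restatement that follows it."""
    window = ATTRIBUTION_WINDOW[ad_type]
    return report_date + timedelta(days=window + FINAL_RESTATEMENT_DAYS)

print(settled_after(date(2024, 2, 3), "SB"))  # 2024-03-16: 42 days later
```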
Scott Desgrosseilliers, CEO of Wicked Reports, mentioned in a PPC Town Hall episode, “I find that time lag is real and not a lot of people understand it. And it’s a big advantage — it’s a competitive advantage if you do — ‘cause you can spend where audiences look like they’re not converting and, you know, ignore some other ones that are inflated.”
So the key is to stay aware of the attribution window and, like Scott says, use the lag to your advantage instead of letting it trip you up.
3. Your ad gets credit for sales it didn’t directly cause
This is one of the biggest reasons campaign performance gets misread: brand halo attribution. Ads that feature a specific product can have conversions attributed to them for completely different products, as long as those products belong to the same brand as what was advertised.
Amazon’s attribution hierarchy prioritizes promoted ASIN clicks first, then highly relevant brand halo clicks (defined as same brand, same subcategory), then views, then broader brand halo interactions across the rest of the brand catalog.
So if a customer clicks an ad for your blue carpet and buys a wine glass set from your brand instead, that sale gets credited to the blue carpet ad they clicked. The ad gets attribution for a transaction it didn’t directly drive in any obvious sense.
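Here’s a toy model of that hierarchy in Python. The field names and tie-breaking rule are illustrative assumptions, not Amazon’s published logic, but it shows why a halo click can outrank a more recent view:

```python
# A toy model of the attribution hierarchy described above. Field names
# are invented for illustration; Amazon's internal logic is not public.
PRIORITY = {
    "promoted_asin_click": 0,   # click on the advertised ASIN itself
    "relevant_halo_click": 1,   # same brand, same subcategory
    "view": 2,
    "broad_halo": 3,            # anywhere else in the brand catalog
}

def pick_credited_interaction(interactions):
    """Choose which ad interaction a purchase is credited to: highest
    hierarchy tier first, most recent interaction breaking ties."""
    return min(interactions, key=lambda i: (PRIORITY[i["kind"]], -i["ts"]))

touches = [
    {"kind": "view", "ts": 10},                # more recent, but only a view
    {"kind": "relevant_halo_click", "ts": 8},  # earlier, but a click
]
print(pick_credited_interaction(touches)["kind"])  # relevant_halo_click
```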
What Amazon’s reporting can and can’t do
Amazon’s reporting infrastructure is built to show you advertising activity at scale. It does that well. What it is not built to do is tell you whether the data you’re looking at is settled, explain what drove a change in performance, or connect ad metrics to the business outcomes your leadership actually cares about.
The problems covered above — attribution timing, data restatements, and brand halo attribution — are structural. They are not going to change, and there is no setting you can toggle to get real-time data or make your metrics match.
What you can do is build your reporting process around those limitations instead of being caught off guard by them every time they surface.
Frederick Vallaeys, CEO of Optmyzr, put it plainly in a recent PPC Town Hall:
“Sometimes you look at your CPCs, and you’re like, how is it that I’m paying so much for a click and I can’t get a positive return on ad spend? Like, how is it that everybody else seems to be bidding so much higher and making money on this?
“But I think what it comes down to is they’re just more holistically measuring what’s happening. And so they do have a better understanding that it’s not just that one-time intraday event that leads to the conversion.”
Fred further added, “Because they understand where that click fits into the broader picture, they’re able to bid more for it because they know it does lead eventually to more value than you might think at that instant in time.”
The advertisers winning those auctions are not necessarily spending more. They are just working from a more complete picture of what their data actually means. And the gap between them and everyone else compounds over time.
“If you don’t put these measurement models and attribution models in place, you are sort of playing on a different field than smarter competitors, and you’re always gonna be behind them in terms of bids,” Fred emphasized.
That means pulling reports outside active attribution windows before making optimization calls, treating any data from the last 28 days as tentative rather than final, and building a separate reporting layer that combines ad data with the business context Amazon has no visibility into. The rest of this guide covers exactly how to do that.
4 things your Amazon Ads report isn’t telling you (and what you can do about it)
Now that we’ve looked at why Amazon Ads reporting gets misread so often, here are 4 things the native report still doesn’t show clearly, and how to close that gap.
1. It doesn’t show what actually drove the change
Let’s say your ACoS increased from 3.1% to 4.2% in a single month. The report shows that clearly enough. But what it won’t show you is whether the increase came from a bid strategy change on branded terms, a competitor entering your category and absorbing impression share that used to be yours, or a placement shift that pushed CPCs up without a corresponding improvement in conversion rate.
Every one of those scenarios produces an identical-looking result in a high-level report: ROAS dropped, spend went up, but sales didn’t follow. Without another layer of analysis behind the numbers, you genuinely can’t tell which situation you’re dealing with, which means you can’t fix the right thing, and you definitely can’t explain it clearly to someone in leadership.
What you actually need: A layer of performance change attribution
You need an easy-to-interpret layer that digs into and explains the root cause behind the numbers you see.
Optmyzr’s PPC Investigator does exactly this. Rather than manually pulling campaign-level data and trying to work out the reason behind it, the Root Cause Analysis and Cause Chart in PPC Investigator can help identify the main drivers behind a performance shift.
You can pick a KPI and trace it backward. Take the same example: when ACoS increases, it could be because sales dropped, which in turn could come from a drop in conversions, or from a drop in impressions and CTR. You keep drilling down until you reach the part of the account that most likely explains the change.
So the next time, instead of saying, “ACoS increased,” you can say, “ACoS increased by 12% on our branded campaign because conversion rate decreased by almost 16% and impressions dropped by 40%. This is not a cost issue, but a demand issue.” Now you know the next step is to find out why demand dropped, and you have the explanation for the ACoS increase.
This helps you explain things to your clients and leadership team while also fixing issues at the root.
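If you want to sanity-check this kind of diagnosis by hand, the decomposition itself is simple arithmetic: since spend = clicks × CPC and sales = clicks × CVR × AOV, ACoS reduces to CPC ÷ (CVR × AOV). A minimal sketch with invented numbers:

```python
def diagnose_acos(before, after):
    """Decompose an ACoS move into its drivers. ACoS = CPC / (CVR * AOV),
    because spend = clicks * CPC and sales = clicks * CVR * AOV.
    `before`/`after` are dicts with cpc, cvr, aov keys (illustrative)."""
    for period in (before, after):
        period["acos"] = period["cpc"] / (period["cvr"] * period["aov"])
    for driver in ("cpc", "cvr", "aov"):
        delta = (after[driver] - before[driver]) / before[driver]
        print(f"{driver}: {delta:+.1%}")
    print(f"acos: {(after['acos'] - before['acos']) / before['acos']:+.1%}")

diagnose_acos(
    before={"cpc": 1.10, "cvr": 0.125, "aov": 28.0},
    after={"cpc": 1.10, "cvr": 0.105, "aov": 28.0},  # only CVR moved
)
# cpc: +0.0%   cvr: -16.0%   aov: +0.0%   acos: +19.0%
# CPC held steady, so this is a demand problem, not a cost problem.
```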
2. It doesn’t show the right time window
Most Amazon PPC managers know attribution data takes time to settle. Optmyzr’s research across nearly 15,000 campaigns found that Amazon Ads reporting delays are significant enough to materially affect optimization decisions, especially if you’re making bid or budget changes based on recent data that hasn’t fully settled yet.
Knowing that and actually accounting for it in your reporting process are two genuinely different things, though.
- If you’re pulling a last-7-days report on a Tuesday and making budget decisions off it, you’re working with incomplete conversion data from the weekend.
- Month-end reports often look worse than they actually are because the final days of the month haven’t been fully attributed yet.
- Period-over-period comparisons fall apart when one window contains settled data and the other is still moving.
What you actually need: Flexible date ranges
Optmyzr’s flexible date range controls make this easier to do consistently, so you’re pulling comparisons over windows that reflect settled performance rather than falling back on whatever preset the platform offers. Make sure your date ranges fall outside the active attribution window so the numbers you see have stopped moving.
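A quick way to encode the “settled window” habit, whatever tool you use to pull the data, is to push the comparison window back by the attribution window. A minimal sketch, assuming the 7- and 14-day windows discussed earlier:

```python
from datetime import date, timedelta

def last_settled_window(today, days=7, attribution_days=14):
    """Most recent `days`-long window whose conversions are fully attributed:
    end it `attribution_days` before today so the window has closed.
    (14 days covers Sponsored Brands/Display; use 7 for Sponsored Products.)"""
    end = today - timedelta(days=attribution_days)
    start = end - timedelta(days=days - 1)
    return start, end

print(last_settled_window(date(2024, 3, 1)))
# (datetime.date(2024, 2, 10), datetime.date(2024, 2, 16))
```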
3. It doesn’t show the relationship between KPIs
Amazon’s report gives you columns of data, but the relationships between those columns are entirely yours to work out.
Take an example: the monthly Amazon Ads report shows that CTR has increased, conversion rate has decreased, and ACoS has climbed alongside it. Looked at in isolation, that reads as a straightforwardly negative picture.
But it might mean you expanded into broad match and are now capturing top-of-funnel traffic that converts on a longer timeline. Or it might mean you’re winning more impressions in lower-quality placements. Or it might mean there’s something wrong on the product page itself — images, reviews, price competitiveness relative to where it was — that no bid change is going to solve.
The individual metrics don’t tell you which of those is actually happening. The relationship between them over time does, and that’s only visible when you’re looking at multiple KPIs moving together on the same timeline.
If you only look at one column at a time, you’ll draw the wrong conclusions constantly. A rising ACoS with a rising CVR might actually be a healthy sign. You’re spending more because more people are converting, and the economics still work. A falling ACoS with a falling CVR is a red flag; you’re spending less because your ads are reaching fewer buyers, not because you’ve become more efficient.
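To make that pairing logic explicit, here’s a deliberately simplistic sketch of reading ACoS and CVR together rather than in isolation. The branches and messages are illustrative, not a real scoring model:

```python
def read_acos_cvr(acos_delta, cvr_delta):
    """Toy heuristic for interpreting ACoS and CVR together. Deltas are
    period-over-period fractional changes (illustrative assumptions only)."""
    if acos_delta > 0 and cvr_delta > 0:
        return "possibly healthy: paying more, but more visitors convert"
    if acos_delta < 0 and cvr_delta < 0:
        return "red flag: cheaper spend, but reaching fewer buyers"
    if acos_delta > 0 and cvr_delta < 0:
        return "demand or page problem: costs up while conversions fall"
    return "efficiency gain: cheaper traffic converting as well or better"

print(read_acos_cvr(+0.12, +0.05))
# possibly healthy: paying more, but more visitors convert
```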
What you actually need: Trend comparisons that layer multiple KPIs on the same timeline
Optmyzr’s Account Dashboard lets you view related metrics together, so these patterns surface without needing to cross-reference spreadsheet columns manually.
When you can actually see multiple metrics moving together, the reasoning behind a change surfaces fairly quickly.
4. It doesn’t account for business context
This is where the gap between the PPC manager and the leadership team or the client becomes more visible.
Inside the ad account, it’s easy to get absorbed in Amazon’s metrics: ACoS, TACoS, CTR, CVR, NTB, placement share. That’s natural; the platform is designed to make you think that way.
But your CFO isn’t looking at ACoS, and your client isn’t hoping to see an improved CTR. Those numbers aren’t the ultimate measure of whether paid media is working.
They’re focused on bigger questions: Is this growth smart? Is it actually making us money? Can we sustain it? Are we building real momentum toward sustainable growth?
The native Amazon interface can’t answer those questions on its own. It has no visibility into your margins, your true profitability after all fees and COGS, the organic lift you get, changes in your pricing, your current stock levels, or how each product fits into the wider business strategy. All it sees is ad spend and the sales it attributes.
This means a report can look fantastic on the platform while the underlying business reality is weak.
What you actually need: A reporting layer that combines business metrics with paid metrics
Closing that gap means building a reporting layer you actually control, where ad data, margins, and performance context live side by side. Optmyzr’s custom widget-based reports let you build exactly these kinds of views, showing the metrics your leadership and clients care about.
You can also layer multiple metrics on the same table or use Metric Comparison Charts to visualize relationships between different metrics. We’ll see more on how to customize and build a report in the next section.
How to build an Amazon Ads report that answers the right questions: A simple 6-step process
Optmyzr lets you create a report with the key insights and business metrics that matter to clients and leaders. And the best part: you only need to create it once, then automate it to go out periodically, editing the template lightly when needed.
Here’s the simplest way to fill in the gap where Amazon reporting lags and create a recurring template.
Step 1: Decide on your scope and the reporting window
In Optmyzr, go to the Reporting section to build a reusable report.
If you’re reporting on one Amazon account, build it as a standard report. If you’re reporting across multiple marketplaces, brands, or ad accounts, start with a Multi-Account Report instead.
Optmyzr lets you pull data from multiple Amazon (and even other PPC) accounts into one report, and you can use account selectors to include full accounts, specific campaigns, or labeled segments. There are also multi-account widgets (KPI, summary, top campaigns, etc.) that aggregate performance across all selected accounts.
After this, decide on a date range. You can also set custom date ranges, so just lock this in early and build everything on top of it.
Step 2: Add the top-level KPIs
Once you have the date ranges set, start building the top section as an executive summary.
Add KPI widgets for the few metrics that belong at the top of the report, like spend, ad-attributed sales, ROAS or ACoS, and TACoS if you track it outside Amazon.
If you need an additional high-level view, add a summary widget beneath the KPI row so the report opens with a clear before-and-after comparison rather than tables. Once you have that, you can use AI to automatically summarize the KPIs and findings, and you can also give instructions on exactly how you want the information presented.
For instance, in this report, I asked the AI to “Summarize the key changes in a clear and concise manner,” and it did exactly that, returning a bulleted list of the key metrics.
Right after the KPIs, you can add a section that explains what drove the change. This is where you explain what moved the top-line numbers. You can include PPC Investigator or Cause Chart-style analysis here to show performance breakdowns inside the report itself.
Step 3: Add KPI trend comparisons
Now layer metrics together so you can interpret them easily. Add charts or comparison widgets that place related metrics on the same timeline so they can be read together.
Optmyzr’s reporting widgets include Performance Comparison and Time-Wise Stats options, which are designed for exactly this kind of trend reading across periods or intervals. This is where you can show how metrics moved together.
Look at this video below to understand the full extent of Optmyzr’s reporting capabilities:
Step 4: Break performance into meaningful segments
Now split the data in a way that actually helps you understand it. You can choose from common segments like:
- Branded vs non-branded
- Campaign types (SP, SB, SD)
- Placements (top of search, rest of search, product pages)
- Product groups/ASIN clusters
- Match types or targeting types
The reporting system supports multiple widgets and account selectors, so you can structure these views deliberately based on how you want to present them.
Step 5: Add calculated metrics
Optmyzr supports calculated metrics, so you can create your own formulas and include them in tables or widgets. Use that to bring in metrics that matter to the business but are not obvious in native Amazon reporting. That could be a custom efficiency metric, blended view, target threshold, or any calculation your team uses consistently.
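As a reference point for what those formulas tend to look like, here’s a small Python sketch with invented inputs. The actual widget syntax in Optmyzr differs, and the margin figure is the business-side number Amazon never sees:

```python
def calculated_metrics(ad_spend, ad_sales, total_sales, margin):
    """Examples of the kinds of formulas worth adding as calculated metrics.
    `margin` is contribution margin after fees and COGS (your own data,
    which Amazon has no visibility into). All inputs are illustrative."""
    return {
        "ACoS": ad_spend / ad_sales,       # native metric, for reference
        "TACoS": ad_spend / total_sales,   # spend vs. total (ad + organic) revenue
        "break_even_ACoS": margin,         # above this, ads lose money per unit
        "profit_after_ads": total_sales * margin - ad_spend,
    }

print(calculated_metrics(ad_spend=2_000, ad_sales=9_500,
                         total_sales=30_000, margin=0.25))
# {'ACoS': 0.2105..., 'TACoS': 0.0666...,
#  'break_even_ACoS': 0.25, 'profit_after_ads': 5500.0}
```

The break-even ACoS line is the one worth surfacing to leadership: it converts an ad metric into a profitability threshold the business actually recognizes.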
You can also add more ground-level data, like campaign tables, top campaigns and top performers, and search-term-level or ASIN-level performance.
Step 6: Save it as a template and schedule it
Once the structure is done, save it as a template and schedule it. Optmyzr supports reusable templates, PDF export, shareable links, and scheduled delivery. That is the final step because the point is to remove manual rebuilding from the process. Once the report has the right sections and the right date logic, it should be something you can run again and again with only light edits to the written diagnosis.
A clean final structure usually looks like this: headline KPIs at the top, diagnosis under that, KPI trend and comparison charts next, segmented views after that, business-context metrics after that if needed, and deeper insights at the bottom.
The “why” behind your Amazon numbers
Amazon’s reporting does exactly what it’s designed to do: surface advertising data at scale. The problem is that advertising data, on its own, doesn’t explain performance. And this gap between description and diagnosis is where the challenge comes in.
With the right reporting setup, you can build around the questions leadership or clients are actually asking, while being smarter about attribution windows and unsettled data.
Looking to close the gap between your Amazon data and your reporting? Optmyzr’s Amazon Ads tools and reporting features are built specifically for teams that need more than surface-level metrics.
FAQs
Why does the attribution window affect how my recent performance looks?
Amazon uses different attribution windows depending on the ad type: 7 days for Sponsored Products, and 14 days for Sponsored Brands and Sponsored Display.
A customer who clicks your ad on Monday but completes the purchase the following Sunday gets attributed back to Monday’s report. If you pull Monday’s numbers mid-week, that conversion hasn’t happened yet and won’t show up at all.
This is why recent reporting periods almost always look weaker than they actually are. The window hasn’t closed yet, so not all the downstream conversions have been captured.
What is the Brand Halo Effect, and why does it make my sales figures look bigger than expected?
When you run an ad for a specific product, Amazon doesn’t limit attribution to just that product. If a customer clicks an ad for your wireless headphones and later buys a different pair from your brand, that sale can be attributed to the original ad as a brand-halo conversion.
Amazon’s attribution hierarchy prioritizes promoted ASIN clicks first, then highly relevant brand halo clicks (defined as same brand, same subcategory), then views, then broader brand halo interactions.
Are canceled orders and returns reflected accurately in my ACoS?
Canceled orders within 72 hours are typically removed through Amazon’s restatement process. Returns can take longer to show up correctly in restated data. Amazon restates conversion data at 1, 7, and 28 days post-conversion, so a returned item may or may not have been removed from your historical figures by the time you pull a report.
If you’re pulling campaign performance before the relevant restatement has run, you may be looking at ACoS calculated against revenue that will eventually be reversed. The 28-day restatement window exists to catch these corrections; it just means any report pulled before that window closes is working with numbers that aren’t fully settled yet.
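A quick arithmetic example of how much that can move the number (figures invented):

```python
spend, attributed_sales = 100.0, 400.0
print(f"{spend / attributed_sales:.1%}")  # 25.0% ACoS on the first pull

# A $50 order is canceled and later restated out of the attributed total.
reversed_revenue = 50.0
print(f"{spend / (attributed_sales - reversed_revenue):.1%}")  # 28.6% once settled
```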
Does Amazon double-count sales if a customer clicks more than one of my ads?
No. Amazon attributes a conversion to no more than one ad interaction, using a last-touch model that credits the last click before purchase. If a customer saw three of your ads and clicked two of them before buying, the last click gets the attribution, and the others get nothing.
Why is there a difference between the Advertised ASIN and Purchased ASIN reports?
The Advertised ASIN report shows performance for the specific product in the ad. The Purchased ASIN report shows what customers actually bought after clicking. Because of how brand halo attribution works, a customer can click an ad for one product and end up purchasing a completely different variation or related item, and the Purchased ASIN report captures that gap.
If you’re only looking at Advertised ASIN data, you’re seeing how your targeting is working. Adding the Purchased ASIN layer shows how customers are actually navigating your catalog after the click, which products are capturing demand originally directed somewhere else, and whether there are cross-sell patterns worth building a campaign strategy around.
Why do my click and impression counts sometimes drop a few days after I pull a report?
Amazon runs an ongoing traffic validation process to filter out bot traffic, accidental double-clicks, and other non-human interactions. This process can adjust click and impression data for up to 3 days after the initial reporting date, so click counts you saw on Tuesday might be slightly lower if you pull the same date range on Friday.
The upside is you aren’t charged for the interactions that get filtered out. The downside is that any report pulled within 72 hours of the activity date is working with pre-validation numbers that may overstate actual qualified traffic by a small margin.