1. Data-driven introduction with metrics
The data suggests your headline problem is this: organic sessions are down 22% month-over-month while Google Search Console (GSC) shows average position essentially flat, with movement of only 0.2–0.5 positions on your core queries. Simultaneously, competitor domains are appearing inside AI Overviews ("AI snapshots") on key queries; their 2022 blog posts are being cited by generative answers instead of your updated 2025 content. Marketing leadership is asking for better attribution and ROI proof as budget scrutiny increases. You, the marketer, content lead, or growth owner, are left with contradictory signals: stable rankings but fewer clicks, less visibility in AI-driven SERP elements, and no visibility into what ChatGPT, Claude, or Perplexity write about your brand.
| Metric | Baseline (last 30d) | Previous period | Change |
|---|---|---|---|
| Organic sessions (GA4) | 78,400 | 100,400 | -22% |
| GSC avg position (site-core queries) | 6.8 | 6.6 | ~+0.2 positions |
| % of queries with an AI Overview in the SERP | 34% | 12% | +22pp |
| Clicks attributed to branded search | 18,900 | 23,400 | -19% |
| High-volume queries where a competitor's 2022 post is cited in LLM answers | 7 | not tracked | N/A |

Analysis reveals a mix of measurement blind spots (GSC sampling and position stability) and real SERP evolution (AI Overviews and zero-click answers). The rest of this analysis breaks the problem into components, examines each with evidence, synthesizes conclusions, and ends with prioritized, actionable recommendations you can implement this week, quarter, and year.
2. Break down the problem into components
- Measurement mismatch: GSC vs actual clicks (sampling, delayed data, position reporting differences)
- SERP feature displacement: AI Overviews / generative answers creating zero-click experiences
- Citation & authority mismatch: AIs citing older competitor content over your fresh 2025 posts
- Attribution weaknesses: first-party tracking gaps and inability to prove incremental value
- Competitive content/SEO signals: backlinks, semantic entities, structured data, and content format
- Audience behavioral shifts: search intent changing or users satisfied by AI summaries
3. Analyze each component with evidence
Measurement mismatch
Evidence indicates GSC reports "average position" across multiple SERP layouts; it does not show whether SERP features (like AI Overviews) occupy the prime real estate on page one. The data suggests a high probability that your pages kept their rank for a query but clicks dropped because the SERP now surfaces an AI Overview citing competitor content. Compare GA4 sessions and server logs against GSC data: if GSC impressions hold steady while GA4 sessions fall, you have a click-through problem caused by SERP changes, not an indexing issue.
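If those exports are in hand, the reconciliation is a simple join. Below is a minimal sketch, assuming a GSC page-level CSV and a GA4 landing-page export; the file names and column names are placeholders you would adapt to your own exports:

```python
# Reconcile GSC impressions/clicks with GA4 landing-page sessions.
# Minimal sketch: file names and column names below are assumptions,
# so adjust them to match your actual GSC and GA4 exports.
import pandas as pd

gsc = pd.read_csv("gsc_pages_last30d.csv")     # columns: page, impressions, clicks
ga4 = pd.read_csv("ga4_landing_sessions.csv")  # columns: landing_page, sessions

merged = gsc.merge(ga4, left_on="page", right_on="landing_page", how="left")
merged["ctr"] = merged["clicks"] / merged["impressions"]

# Flag the telltale pattern: impressions held steady but CTR collapsed.
# These pages are click-through losses, not indexing problems.
suspects = merged[(merged["impressions"] > 1000) & (merged["ctr"] < 0.01)]
print(suspects.sort_values("impressions", ascending=False).head(20))
```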

SERP feature displacement
Analysis reveals that queries where the SERP now shows an AI Overview or aggregated answer see a significant click drop; by contrast, queries with no AI Overview show stable clicks. Run a targeted SERP audit (manual, or via SerpApi or Puppeteer) for your top 100 decline queries and record the presence of AI Overviews, featured snippets, and knowledge panels. Evidence indicates queries with AI Overviews have 35–60% lower click-through rates than identical-position results without them.
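Here is a sketch of such an audit using SerpApi's Python client (the google-search-results package); the "ai_overview" response key is an assumption, so confirm field names against SerpApi's current documentation before relying on the output:

```python
# Audit top decline queries for AI Overviews and other SERP features.
# Sketch using SerpApi's Python client (pip install google-search-results).
# The "ai_overview" key is an assumption; "answer_box" and "knowledge_graph"
# are standard SerpApi result fields, but verify all three in the docs.
import csv
from serpapi import GoogleSearch

QUERIES = ["example decline query 1", "example decline query 2"]  # your top 100

rows = []
for q in QUERIES:
    results = GoogleSearch({"q": q, "api_key": "YOUR_SERPAPI_KEY"}).get_dict()
    rows.append({
        "query": q,
        "has_ai_overview": "ai_overview" in results,
        "has_featured_snippet": "answer_box" in results,
        "has_knowledge_panel": "knowledge_graph" in results,
    })

with open("serp_audit.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
```

Run this weekly and diff the CSVs to see when an AI Overview appears or disappears for a query.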
Citation & authority mismatch
Analysis reveals AIs preferentially cite pages with stronger perceived authority signals (backlinks, earlier publishing date, entity mentions). Comparison: the competitor's 2022 post has 120 referring domains and multiple authoritative citations on news and industry sites; your 2025 post has 12 backlinks and no prominent third-party citations. The data suggests LLM citation heuristics (similar to a mini "link graph") weight older, widely-linked content higher even when it is outdated.
Attribution weaknesses
Evidence indicates high levels of "direct" and "unknown" traffic in your reports, a sign of lost UTM data, cross-domain issues, or cookie attribution loss. Contrasting server-side event capture (the GA4 BigQuery export) with client-side GTM shows an 8–12% discrepancy. This gap undermines your ability to quantify ROI precisely and fuels budget scrutiny.
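To size that gap yourself, count sessions directly from the raw export and compare against what the GA4 UI reports. A sketch using the official google-cloud-bigquery client against the standard GA4 export schema; the project and dataset names are placeholders:

```python
# Count sessions from the raw GA4 BigQuery export to compare against the
# GA4 UI. GA4 exports land in analytics_<property_id>.events_* tables;
# "your-project.analytics_123456789" is a placeholder.
from google.cloud import bigquery

client = bigquery.Client()
sql = """
SELECT
  event_date,
  COUNT(DISTINCT CONCAT(
    user_pseudo_id,
    CAST((SELECT value.int_value
          FROM UNNEST(event_params)
          WHERE key = 'ga_session_id') AS STRING)
  )) AS sessions
FROM `your-project.analytics_123456789.events_*`
WHERE _TABLE_SUFFIX BETWEEN '20250101' AND '20250131'
GROUP BY event_date
ORDER BY event_date
"""
for row in client.query(sql).result():
    print(row.event_date, row.sessions)
```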
Competitive content/SEO signals
Analysis reveals competitor content uses structured data (Article schema, FAQ schema) and E-E-A-T signals (author bios, citations, publisher metadata). Contrast your pages: clearly on-topic, but missing semantic markup and concise, snippet-friendly sections. LLMs and AI Overviews prefer content with short "answer" blocks, explicit definitions, and clear entity tags: attributes your competitors happen to have.

Audience behavioral shifts
Evidence indicates users are satisfied by summarized answers. Compare session duration, pages per session, and scroll depth month-over-month: session duration drops 18% on queries with AI Overviews, suggesting users read the snippet and don't click through. Conversely, queries without AI Overviews show stable engagement metrics.
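To verify this on your own data, segment engagement by AI Overview presence. A sketch assuming the serp_audit.csv produced by the audit above plus a hypothetical query_engagement.csv, i.e. a join of GSC queries to their landing pages and those pages' GA4 engagement metrics:

```python
# Compare engagement for queries with vs without an AI Overview.
# query_engagement.csv is a hypothetical input you would build by joining
# GSC query->landing page data with GA4 engagement for that landing page.
import pandas as pd

audit = pd.read_csv("serp_audit.csv")         # query, has_ai_overview, ...
engage = pd.read_csv("query_engagement.csv")  # query, avg_session_duration, ctr

df = audit.merge(engage, on="query")
print(df.groupby("has_ai_overview")[["avg_session_duration", "ctr"]].mean())
```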
4. Synthesize findings into insights
The data suggests the main drivers of your traffic decline are not classic ranking drops but two combined forces: (1) SERP evolution introducing AI Overviews that create zero-click behaviors, and (2) LLM citation preferences that favor older competitor content with stronger authority signals. Analysis reveals measurement blind spots (GSC smoothing of position data and attribution gaps) that mask these drivers, making performance look inconsistent to stakeholders.
Evidence indicates your 2025 content is high quality but lacks the backlink authority, structured citation cues, and snippet-ready formatting LLMs and generative SERP elements use to select sources. Comparisons show competitors with fewer recent updates but stronger linking and schema signals are getting cited in AI Overviews — reducing organic CTR and creating the appearance of traffic collapse despite stable rankings.
5. Provide actionable recommendations
Below are prioritized steps, from tactical to strategic, grouped into Quick Wins (0–30 days), Mid-term (30–90 days), and Strategic (90–365 days). Each item includes how to measure impact.
Quick Wins (0–30 days)
- Implement SERP monitoring: use SerpApi or Puppeteer snapshots for your top 200 decline queries to log the presence of AI Overviews, featured snippets, and the specific URLs cited. Measure: weekly count of queries with AI Overviews.
- Create snippet-first edits: add 2–3 concise, factual answer blocks (40–60 words), labeled headings, and a short summary at the top of each target page to match LLM answer patterns. Measure: CTR change for those pages within 2–4 weeks (GSC clicks and GA4 landing-page sessions).
- Fix tracking & attribution leaks: audit UTM consistency, implement server-side tagging for critical conversion events (server GTM), and enable the GA4 BigQuery export for raw session reconstruction. Measure: reduction in "direct/unknown" channel share and in the discrepancy between client and server events.
- Add Article and FAQ structured data to target pages to increase the chance of being cited (see the markup sketch after this list). Measure: rich result appearances in GSC and SERP snapshots.
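For the structured-data item, a minimal sketch that emits Article and FAQPage JSON-LD; every value is a placeholder, the output belongs in a script tag of type application/ld+json, and you should validate it with Google's Rich Results Test before shipping:

```python
# Generate Article + FAQPage JSON-LD for a target page. All values below
# are placeholders; replace them with your real page metadata.
import json

article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Why organic traffic falls while rankings look stable",
    "datePublished": "2025-01-15",
    "dateModified": "2025-06-01",
    "author": {"@type": "Person", "name": "Jane Analyst",
               "url": "https://example.com/authors/jane"},
    "publisher": {"@type": "Organization", "name": "Example Co"},
}

faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "Why are clicks down if rankings are stable?",
        "acceptedAnswer": {
            "@type": "Answer",
            "text": "AI Overviews and other SERP features can absorb clicks "
                    "even when your average position does not move.",
        },
    }],
}

print(json.dumps(article, indent=2))
print(json.dumps(faq, indent=2))
```

Mid-term (30–90 days)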
- Run an incremental lift test: hold out a region or query set, boost PR/backlinks and snippet optimization everywhere else, and compare against the control. Measure: incremental traffic lift and conversion lift attributable to the interventions.
- Execute targeted link acquisition: reach out to sites that cite the competitor's 2022 piece; offer updated data slices and ask for citation updates. Comparison: pages that receive new authoritative backlinks vs pages that don't; track citation presence in LLM outputs via periodic API queries to generative models with the same prompt set.
- Build an "AI Listening" routine: query ChatGPT (with browsing), Claude, and Perplexity weekly for your top 50 queries and capture the citations, text snippets, and whether your domain is referenced (a minimal sketch follows this list). Measure: share of voice in AI answers.
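For the AI Listening routine, a minimal sketch using the OpenAI Python client; Claude and Perplexity need their own clients (Perplexity's API is the one that returns explicit citations), and note that API answers can differ from the consumer products that browse the live web:

```python
# Weekly "AI Listening": ask a model your top queries and record whether
# your domain appears in the answer. Domain and queries are placeholders.
import csv
import datetime
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
DOMAIN = "yourdomain.com"
QUERIES = ["your top query 1", "your top query 2"]  # expand to your 50

rows = []
for q in QUERIES:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": q}],
    )
    answer = resp.choices[0].message.content or ""
    rows.append({
        "date": datetime.date.today().isoformat(),
        "query": q,
        "mentions_domain": DOMAIN in answer,
        "answer_excerpt": answer[:200],
    })

# Append to a running log so share of voice can be tracked over time.
with open("ai_listening.csv", "a", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    if f.tell() == 0:
        writer.writeheader()
    writer.writerows(rows)
```

Strategic (90–365 days)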
- Invest in entity & knowledge graph work: create robust author profiles and brand mentions, publish whitepapers or datasets that authoritative sources cite, and register on industry knowledge repositories. Measure: growth in authoritative citations and eventual presence in knowledge panels.
- Data-first content strategy: produce short-form "answer" assets that LLMs can ingest (concise definitions, numbered steps, data visualizations with alt text and tables). Contrast this with long-form evergreen pieces: both are necessary, but prioritize answer blocks for AI-visible queries.
- Attribution modernization: blend MMM with digital attribution and controlled experiments. Use server-side conversion APIs for ad platforms and model lift to prove ROI. Measure: decreasing uncertainty in budget impact and improved ROAS reporting.

Technical and Analytical Playbook (advanced techniques)
- SERP rendering replication: use Puppeteer to save the full rendered DOM of SERPs for target queries; compare HTML snapshots over time to detect when AI Overviews appear.
- LLM citation experiments: craft reproducible prompts that mimic user queries and run them across models; log citations, and seed the models with your pages via published metadata and high-authority links.
- Entity matching using NLP: run Google Cloud NLP or spaCy to extract entities from your content and competitor content, then build a matrix of shared entities, co-occurrence, and missing entity signals (see the sketch after this list).
- Backlink velocity funnel: prioritize outreach to domains already citing competitor content; offer updated data or combined resource pages to win citation transfers.
- Server log & GSC reconciliation: export raw server logs and the GSC query-level CSV, then join on path + date to create a more complete impression-click dataset for attribution modeling.
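For the entity-matching item, a sketch using spaCy, with requests and BeautifulSoup handling page fetching and text extraction; both URLs are placeholders:

```python
# Entity-gap analysis: extract named entities from your page and a
# competitor's page, then list entities the competitor covers that you
# don't. Requires: pip install spacy requests beautifulsoup4
# and: python -m spacy download en_core_web_sm
import spacy
import requests
from bs4 import BeautifulSoup

nlp = spacy.load("en_core_web_sm")

def entities(url: str) -> set[str]:
    html = requests.get(url, timeout=30).text
    text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)
    # Truncate to stay well under spaCy's default max document length.
    return {ent.text.lower() for ent in nlp(text[:100_000]).ents}

ours = entities("https://example.com/your-2025-post")
theirs = entities("https://competitor.example/2022-post")

print("Entities they cover that we don't:", sorted(theirs - ours)[:50])
print("Shared entities:", len(ours & theirs))
```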
Interactive self-assessment: How exposed is your site?
Score each question 0 (No) / 1 (Partially) / 2 (Yes). Total = 0–14.
1. Do your top decline pages contain a 40–60 word answer summary at the top? (0/1/2)
2. Do those pages have Article or FAQ schema implemented? (0/1/2)
3. Do you have recent backlinks from .edu/.gov or high-authority publishers to those pages? (0/1/2)
4. Have you captured weekly LLM citations for your target queries in the last month? (0/1/2)
5. Is server-side tracking implemented for critical conversions? (0/1/2)
6. Do you run ongoing SERP snapshots for your top 200 queries? (0/1/2)
7. Do you have a plan to run controlled lift tests for content/PR interventions? (0/1/2)

Interpretation:
- 0–4: High exposure. Prioritize Quick Wins immediately.
- 5–9: Medium exposure. Implement mid-term actions and begin experiments.
- 10–14: Low exposure. Scale strategic work and focus on proving ROI via lift tests.
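If you want to make the scoring concrete, a trivial helper that maps your seven answers to an exposure band:

```python
# Score the self-assessment above: pass your seven 0/1/2 answers in order.
def exposure(scores: list[int]) -> str:
    assert len(scores) == 7 and all(s in (0, 1, 2) for s in scores)
    total = sum(scores)
    if total <= 4:
        return f"{total}/14: high exposure, start with the Quick Wins"
    if total <= 9:
        return f"{total}/14: medium exposure, begin the mid-term actions"
    return f"{total}/14: low exposure, scale the strategic work"

print(exposure([1, 0, 0, 0, 1, 0, 0]))  # -> "2/14: high exposure, ..."
```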
Final synthesis — what this means for your budget conversation
Evidence indicates this is a solvable attribution and visibility problem, not an unsalvageable traffic loss. The data suggests investing in measurement upgrades (server-side tracking, BigQuery), focused content changes (snippet-first edits and schema), and targeted authority-building (backlink outreach, PR for citations) will produce measurable uplift. Use incremental lift tests and controlled holdouts to provide the proof finance wants: causality, not correlation. Contrast: continuing to report aggregate traffic without these fixes will leave you exposed to budget cuts due to unproven ROI.
Next steps (this week): run the SERP snapshot for your top 100 decline queries, add answer blocks + schema to the top 10 pages, and set up GA4 BigQuery export. If you want, I can provide a templated prompt suite to query ChatGPT/Claude/Perplexity for your 50 queries and a CSV schema for capturing the results — or build the Puppeteer SERP capture script for you. The data suggests that with these steps, you can begin reversing the traffic decline and, more importantly, get the attribution evidence your leadership needs.