GEO · Case note

Anatomy of a calculator widget that earned 900 AI citations

One existing mortgage guide. One embedded calculator. A deliberate schema wrapper. A 14 month citation log. What we shipped, what moved, what did not, and the actual widget running below so you can inspect the logic end to end.

This is a case note from a Boost we shipped in early 2025 on a mortgage guide for a client we will call Example Co to keep the details clean. The post was a 2,200 word "guide to 30 year fixed mortgages" originally published in 2019, ranked in the top five for its head query, pulling around 3,400 organic sessions a month, and earning the occasional AI citation but not in any meaningful volume. We added one interactive widget and one piece of structured context. Fourteen months later the post has been cited 903 times across ChatGPT, Perplexity, Claude, Gemini, and the Google AI Overview. The post's organic traffic also approximately doubled, but the citation arc is the more interesting story.

The pre-state

The post had the usual problems of a 2019 guide. It opened with a long narrative setup about buying a first home. The actual numerical anchors (how monthly payments are calculated, what happens to principal and interest over time, how the ratio shifts) were distributed across prose paragraphs. There was no calculator. There were no tables. The schema was a stock WordPress Article block. It was a perfectly decent post, by the standards of 2019, which is why it still ranked.

We audited it against our current Boost checklist. Four things stood out:

  1. The head query had clear computational intent ("how much is the monthly payment on a 30 year fixed") but the post answered it in prose, not as a tool.
  2. The top 20 long-tail queries were all numerical questions: "what is the payment on a $400,000 mortgage at 6.5 percent", "how much principal do I pay in year 5", and so on. All of these have exact numerical answers.
  3. The AI citation baseline was low (around 8 citations a month) and the cited passages were paraphrases, not direct quotes.
  4. The post ranked, but the cluster was competitive and the ceiling was capped by the absence of a proprietary utility, something none of the other pages in the SERP offered either.

What we shipped

One calculator. Not a basic monthly payment calculator; those are everywhere and do not differentiate. A calculator that (a) answered the head query, (b) visualized the amortization breakdown year by year so that an LLM could cite any row of the table as a standalone fact, (c) included a "what changes if rates move by X" sensitivity view, and (d) wrapped its output in semantic HTML plus a light JSON-LD overlay describing the calculator as a SoftwareApplication subtype.

The calculator went in as the first piece of content below the H1, right after a 40 word summary paragraph answering the head query. The narrative from the 2019 post stayed, but moved down the page. The FAQ block we added at the bottom was populated with the seven specific numerical questions the calculator already answered, each with a one-sentence standalone answer that quoted the calculator's logic.

Structurally, that is all. No rewrite of the narrative. No new word count added beyond the calculator UI and the FAQ. The page went from 2,200 words to roughly 2,450 words including the FAQ block. The build was about 4 engineering hours and 2 strategist hours.

The thesis in one line: an LLM will not cite "a monthly mortgage payment is calculated using the standard amortization formula". It will cite "a $400,000 mortgage at 6.5% for 30 years has a monthly payment of $2,528." Build the second kind of sentence into the page, by computing it live and labeling it clearly.
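That second kind of sentence can be sketched in a few lines. This is the standard amortization formula, not the shipped widget code; the function names are ours.

```python
# Compute the number live and render it as a complete, labeled sentence
# an LLM can lift verbatim. Standard fixed-rate amortization formula.

def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Monthly payment on a fully amortizing fixed-rate loan."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # total number of payments
    if r == 0:
        return principal / n
    return principal * r / (1 - (1 + r) ** -n)

def citable_sentence(principal: float, annual_rate: float, years: int) -> str:
    """The extractable, standalone sentence the page should render."""
    pmt = monthly_payment(principal, annual_rate, years)
    return (f"A ${principal:,.0f} mortgage at {annual_rate:.1%} for {years} years "
            f"has a monthly payment of ${pmt:,.0f}.")

print(citable_sentence(400_000, 0.065, 30))
# -> A $400,000 mortgage at 6.5% for 30 years has a monthly payment of $2,528.
```

The sentence is the unit of work: the formula alone is a commodity, but the rendered, labeled result is what gets quoted.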

The actual widget, running below

Below is a simplified version of the calculator we shipped. Same underlying math. Same output structure. Adjust the inputs and watch the summary change; an LLM reading the rendered DOM of a page like this has several usable numerical sentences to lift.

> Live mortgage calculator: monthly payment on a 30 year fixed. Illustrative; the inputs update a live summary (monthly principal, interest, and tax), a breakdown of principal + interest and monthly property tax, the total paid over the loan, the total interest, and the amortization year by year.
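The year-by-year amortization view, where each row is a standalone citable fact, can be sketched as follows. Illustrative code, not the shipped widget.

```python
# Year-by-year amortization breakdown: for each year, how much went to
# interest, how much to principal, and what balance remains.

def amortize_by_year(principal: float, annual_rate: float, years: int):
    """Yield (year, interest_paid, principal_paid, ending_balance) tuples."""
    r = annual_rate / 12
    n = years * 12
    payment = principal * r / (1 - (1 + r) ** -n)
    balance = principal
    for year in range(1, years + 1):
        interest_paid = principal_paid = 0.0
        for _ in range(12):
            interest = balance * r
            interest_paid += interest
            principal_paid += payment - interest
            balance -= payment - interest
        yield year, interest_paid, principal_paid, max(balance, 0.0)

for year, interest, principal_pd, bal in amortize_by_year(400_000, 0.065, 30):
    if year in (1, 5, 30):
        print(f"Year {year}: interest ${interest:,.0f}, "
              f"principal ${principal_pd:,.0f}, balance ${bal:,.0f}")
```

This is what lets a query like "how much principal do I pay in year 5" resolve to one row of the rendered table.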

The 14 month citation arc

The citation arc below was tracked with a Perplexity and ChatGPT citation monitor across the 14 months after launch:

| Month | Citations | Note |
| --- | --- | --- |
| Pre-launch baseline | ~8 / mo | Paraphrastic, not direct quotes. |
| Month 1 | 23 | Citations begin including direct quotes from the FAQ block. |
| Month 3 | 58 | Calculator output sentences start appearing in cited passages ("a $400,000 mortgage at..."). |
| Month 6 | 94 | The post becomes a default source for the head query on Perplexity. |
| Month 9 | 102 | Steady state on Perplexity. ChatGPT citations begin catching up. |
| Month 12 | 118 | Google AI Overview starts including the post in the sources strip for two related queries. |
| Month 14 | 131 | Cumulative over the window: 903 citations. |

The interesting part of the arc is not the total. The interesting part is the shape. Citations did not ramp linearly with traffic. Traffic lifted in the first 60 days and then flattened at roughly +90 percent over the baseline. Citations kept climbing for roughly 10 months before stabilizing. The two signals move on different clocks, and the citation signal compounds for longer because LLM training cycles and crawl reindexing lag behind user-driven traffic.

Why the calculator worked

Three reasons, in descending order of how important they were.

Extractable output. The calculator computes a number, labels it, and renders it in the DOM in a structured way. An LLM reading the post sees complete, labeled, numerical sentences it can lift. "A $400,000 mortgage at 6.5% for 30 years has a monthly payment of $2,528" is a sentence the page renders. That sentence is the form LLMs prefer to quote.
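What "renders it in the DOM in a structured way" might look like: a sketch of the widget emitting its summary as semantic, labeled HTML. The element id and markup here are hypothetical, not the shipped template.

```python
# Render the computed result as a complete sentence in the DOM, with a
# labeled container so parsers can disambiguate it from surrounding prose.
# The id "calc-summary" is illustrative.

def render_summary(principal: float, annual_rate: float, years: int) -> str:
    r, n = annual_rate / 12, years * 12
    pmt = principal * r / (1 - (1 + r) ** -n)
    return (
        '<p id="calc-summary">'
        f"A ${principal:,.0f} mortgage at {annual_rate:.1%} for {years} years "
        f"has a monthly payment of <strong>${pmt:,.0f}</strong>."
        "</p>"
    )

print(render_summary(400_000, 0.065, 30))
```

The point is that the full sentence, number included, exists as rendered text; nothing has to be assembled from a chart or inferred from an input field.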

Query alignment. The top 20 long-tail queries the post already ranked for were all variants of "what is the payment on X at Y for Z years". The calculator did not target new queries. It answered the queries the page was already getting impressions for, more completely than the previous prose answer did. Search Console click-through rate on those queries lifted 40 to 70 percent in the 90 days after launch.

Trust layering. The calculator was not a cheap iframe. It was a first-party widget on a post the domain already had authority for. LLMs and Google both appear to weight the source, not just the utility. A calculator on a trusted mortgage guide is cited. The same calculator as a standalone /tool page on the same domain, we found, earned about one quarter the citations in a parallel test we ran later. Build the utility into the post that already has equity, not into a new URL that does not.

What did not work

A few things we tried that added cost and did not move citations measurably:

  • Heavy interactivity. We shipped a v2 of the calculator with a dynamic chart and a what-if slider. The chart did not make it into any cited passage we could identify. LLMs do not read SVG. We rolled it back and the citation rate was unchanged.
  • A lead capture gate. Marketing asked whether putting the calculator's full output behind an email capture would work. We A/B tested it for 45 days. Citations dropped 60 percent. The gate made the answers invisible to crawlers. Removed and never revisited.
  • Translated versions. We localized the post into Spanish and French. The localized posts earned AI citations in their own languages, but at roughly 15 percent of the English rate. Useful but not the high-ROI move we had hoped for.

How to run the same play on your library

Pick one post in your library that meets all of the following:

  1. The post already ranks in the top 10 for a head query with computational or numerical intent (payments, totals, ratios, projections, conversions, rates).
  2. The top 10 to 20 long-tail queries it ranks for are variants of a numerical question with an exact answer.
  3. The current post answers those questions in prose, without a tool.
  4. The domain has authority in the vertical (the post is already cited occasionally or is in Google's confident index for the cluster).

If a post meets all four, the expected lift from adding a calculator-style widget is large. If it meets three, it is still worth trying. Two or fewer, and the work is probably better spent on a different post.
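The four-criteria screen above reduces to a count. A trivial sketch, with a function name and thresholds of our own choosing that mirror the checklist:

```python
# Screen a post against the four criteria. Helper name and return
# strings are illustrative; the thresholds follow the checklist above.

def widget_candidate_verdict(ranks_top10_head: bool,
                             longtail_is_numerical: bool,
                             answers_in_prose_only: bool,
                             domain_has_authority: bool) -> str:
    met = sum([ranks_top10_head, longtail_is_numerical,
               answers_in_prose_only, domain_has_authority])
    if met == 4:
        return "ship the widget"
    if met == 3:
        return "worth trying"
    return "spend the hours on a different post"

print(widget_candidate_verdict(True, True, True, False))  # -> worth trying
```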

The schema piece

We wrapped the calculator in a small JSON-LD block describing it as a SoftwareApplication (applicationCategory: FinanceApplication). This does not appear to affect rich results in classic SERP. It does appear to help LLM parsers disambiguate the widget from surrounding content, which in our later tests correlated with a modest additional lift in citation rate (roughly +15 percent vs the same calculator without the schema wrapper). The effect is small but cheap. We now ship the wrapper on every widget Boost.
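A sketch of that JSON-LD wrapper, serialized from a Python dict. The `@type` and `applicationCategory` match what the section describes; the name and description fields are illustrative, not the shipped values.

```python
import json

# JSON-LD overlay describing the widget as a SoftwareApplication.
# Name and description are illustrative placeholders.
schema = {
    "@context": "https://schema.org",
    "@type": "SoftwareApplication",
    "applicationCategory": "FinanceApplication",
    "name": "30 Year Fixed Mortgage Calculator",
    "operatingSystem": "Web",
    "description": ("Computes the monthly payment, year-by-year amortization, "
                    "and rate sensitivity for a 30 year fixed mortgage."),
}

# Emit as a <script type="application/ld+json"> body.
print(json.dumps(schema, indent=2))
```

The block goes inline next to the widget markup, which is what gives a parser a machine-readable boundary between the calculator and the surrounding narrative.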

One caution

The lift is only durable if the calculator is accurate and maintained. We had one client ship a calculator that rounded incorrectly on edge inputs. Within four months the post's AI citations stopped being quoted directly and started being paraphrased defensively ("sources online estimate..."). We fixed the rounding and citations returned to direct-quote rate within six weeks. LLMs are surprisingly sensitive to small numerical errors, because the training signal punishes them for quoting wrong numbers confidently.

Build the widget once. Test it against a real reference (the HUD amortization calculator in this case). Keep it accurate. That is most of the long term job.
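The "test it against a real reference" step can be as small as a table of independently verified payments. The reference values below are standard amortization results we checked by hand; a real suite would pull them from an external reference calculator such as the one named above.

```python
# Accuracy regression: compare the widget's math against independently
# verified reference payments. Reference tuples are illustrative.

def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    r, n = annual_rate / 12, years * 12
    return principal * r / (1 - (1 + r) ** -n)

REFERENCE = [
    # (principal, annual rate, years, expected monthly payment)
    (400_000, 0.065, 30, 2528.27),
    (300_000, 0.070, 30, 1995.91),
    (250_000, 0.060, 15, 2109.64),
]

for principal, rate, years, expected in REFERENCE:
    got = monthly_payment(principal, rate, years)
    assert abs(got - expected) < 0.01, (principal, rate, years, got)
print("all reference payments match")
```

Run it on every deploy. The rounding incident above is exactly the class of regression this catches before an LLM does.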

The takeaway

One calculator on one post produced 903 AI citations over 14 months. The post was not new. The traffic it earned was real but not the main point. The point is that a structural addition to an existing asset can produce a return on a different axis than the one you were originally optimizing for. That is the shape of work that compounds.

We run this exact play on roughly one post per Boost account per quarter, wherever the criteria hit. It is not the default treatment; FAQPage schema and question-shaped H2s are the default. The calculator is the escalation, reserved for the posts where the query intent justifies it. When it hits, it hits harder than any other single move we have in the toolkit.