One existing mortgage guide. One embedded calculator. A deliberate schema wrapper. A 14-month citation log. What we shipped, what moved, what did not, and the actual widget running below so you can inspect the logic end to end.
This is a case note from a Boost we shipped in early 2025 on a mortgage guide for a client we will call Example Co to keep the details clean. The post was a 2,200-word "guide to 30-year fixed mortgages," originally published in 2019. It ranked in the top five for its head query, pulled around 3,400 organic sessions a month, and earned the occasional AI citation, though not in meaningful volume. We added one interactive widget and one piece of structured context. Fourteen months later the post has been cited 903 times across ChatGPT, Perplexity, Claude, Gemini, and the Google AI Overview. The post's organic traffic also approximately doubled, but the citation arc is the more interesting story.
The post had the usual problems of a 2019 guide. It opened with a long narrative setup about buying a first home. The actual numerical anchors (how monthly payments are calculated, what happens to principal and interest over time, how the ratio shifts) were distributed across prose paragraphs. There was no calculator. There were no tables. The schema was a stock WordPress Article block. It was a perfectly decent post, by the standards of 2019, which is why it still ranked.
We audited it against our current Boost checklist. Four things stood out:
One calculator. Not a basic monthly payment calculator; those are everywhere and do not differentiate. A calculator that (a) answered the head query, (b) visualized the amortization breakdown year by year so that an LLM could cite any row of the table as a standalone fact, (c) included a "what changes if rates move by X" sensitivity view, and (d) wrapped its output in semantic HTML plus a light JSON-LD overlay describing the calculator as a SoftwareApplication subtype.
The calculator went in as the first piece of content below the H1, right after a 40 word summary paragraph answering the head query. The narrative from the 2019 post stayed, but moved down the page. The FAQ block we added at the bottom was populated with the seven specific numerical questions the calculator already answered, each with a one-sentence standalone answer that quoted the calculator's logic.
Structurally, that is all. No rewrite of the narrative. No new word count added beyond the calculator UI and the FAQ. The page went from 2,200 words to roughly 2,450 words including the FAQ block. The build was about 4 engineering hours and 2 strategist hours.
Below is a simplified version of the calculator we shipped. Same underlying math. Same output structure. Adjust the inputs and watch the summary change; an LLM reading the rendered DOM of a page like this has several usable numerical sentences to lift.
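A minimal sketch of the underlying math, assuming the standard fixed-rate amortization formula. Function names and the sensitivity deltas are illustrative, not the shipped widget's code:

```javascript
// Standard fixed-rate mortgage math: monthly payment, year-by-year
// amortization rows, and a rate-sensitivity view. Illustrative sketch,
// not the production widget.

function monthlyPayment(principal, annualRatePct, years) {
  const r = annualRatePct / 100 / 12; // monthly interest rate
  const n = years * 12;               // total number of payments
  if (r === 0) return principal / n;  // edge case: 0% rate
  return (principal * r) / (1 - Math.pow(1 + r, -n));
}

// One row per year; each row is a standalone, citable numerical fact.
function amortizationByYear(principal, annualRatePct, years) {
  const r = annualRatePct / 100 / 12;
  const pay = monthlyPayment(principal, annualRatePct, years);
  let balance = principal;
  const rows = [];
  for (let y = 1; y <= years; y++) {
    let interestPaid = 0;
    let principalPaid = 0;
    for (let m = 0; m < 12; m++) {
      const interest = balance * r;
      const toPrincipal = pay - interest;
      interestPaid += interest;
      principalPaid += toPrincipal;
      balance -= toPrincipal;
    }
    rows.push({ year: y, interestPaid, principalPaid, balance: Math.max(balance, 0) });
  }
  return rows;
}

// "What changes if rates move by X" — deltas in percentage points.
function rateSensitivity(principal, annualRatePct, years, deltas = [-1, -0.5, 0.5, 1]) {
  return deltas.map((d) => ({
    rate: annualRatePct + d,
    payment: monthlyPayment(principal, annualRatePct + d, years),
  }));
}
```

For example, `monthlyPayment(400000, 6.5, 30)` returns roughly 2528.27, and `amortizationByYear` shows the interest share of each payment shrinking year over year as the balance amortizes.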
Here is the citation arc, tracked in a Perplexity and ChatGPT citation monitor, across the 14 months after launch:
| Month | Citations | Note |
|---|---|---|
| Pre-launch baseline | ~8 / mo | Paraphrastic, not direct quotes. |
| Month 1 | 23 | Citations begin including direct quotes from the FAQ block. |
| Month 3 | 58 | Calculator output sentences start appearing in cited passages ("a $400,000 mortgage at..."). |
| Month 6 | 94 | The post becomes a default source for the head query on Perplexity. |
| Month 9 | 102 | Steady state on Perplexity. ChatGPT citations begin catching up. |
| Month 12 | 118 | Google AI Overview starts including the post in the sources strip for two related queries. |
| Month 14 | 131 | Cumulative over the window: 903 citations. |
The interesting part of the arc is not the total. The interesting part is the shape. Citations did not ramp linearly with traffic. Traffic lifted in the first 60 days and then flattened at roughly +90 percent over the baseline. Citations kept climbing for roughly 10 months before stabilizing. The two signals move on different clocks, and the citation signal compounds for longer because LLM training cycles and crawl reindexing lag behind user-driven traffic.
Why did it work? Three reasons, in descending order of importance.
Extractable output. The calculator computes a number, labels it, and renders it in the DOM in a structured way. An LLM reading the post sees complete, labeled, numerical sentences it can lift. "A $400,000 mortgage at 6.5% for 30 years has a monthly payment of $2,528" is a sentence the page renders. That sentence is the form LLMs prefer to quote.
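The pattern behind that is rendering every computed result as a complete, labeled sentence rather than a bare number next to a label. A sketch of that rendering step, with the function name and wording illustrative:

```javascript
// Render a computed result as a complete, labeled sentence in the DOM,
// so an LLM reading the page can lift it verbatim. Illustrative sketch.
function citableSummary(principal, ratePct, years, payment) {
  // Round to whole dollars and add thousands separators.
  const fmt = (n) =>
    "$" + Math.round(n).toString().replace(/\B(?=(\d{3})+(?!\d))/g, ",");
  return (
    `A ${fmt(principal)} mortgage at ${ratePct}% for ${years} years ` +
    `has a monthly payment of ${fmt(payment)}.`
  );
}
```

Calling `citableSummary(400000, 6.5, 30, 2528.27)` produces exactly the sentence quoted above, as a single text node rather than a number orphaned from its context.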
Query alignment. The top 20 long-tail queries the post already ranked for were all variants of "what is the payment on X at Y for Z years". The calculator did not target new queries. It answered the queries the page was already getting impressions for, more completely than the previous prose answer did. Search Console click-through rate on those queries lifted 40 to 70 percent in the 90 days after launch.
Trust layering. The calculator was not a cheap iframe. It was a first-party widget on a post the domain already had authority for. LLMs and Google both appear to weight the source, not just the utility. A calculator on a trusted mortgage guide is cited. In a parallel test we ran later, the same calculator published as a standalone /tool page on the same domain earned about one quarter the citations. Build the utility into the post that already has equity, not into a new URL that does not.
A few things we tried that added cost and did not move citations measurably:
Pick one post in your library that meets all of the following:
If a post meets all four, the expected lift from adding a calculator-style widget is large. If it meets three, it is still worth trying. Two or fewer, and the work is probably better spent on a different post.
We wrapped the calculator in a small JSON-LD block describing it as a SoftwareApplication (applicationCategory: FinanceApplication). This does not appear to affect rich results in classic SERP. It does appear to help LLM parsers disambiguate the widget from surrounding content, which in our later tests correlated with a modest additional lift in citation rate (roughly +15 percent vs the same calculator without the schema wrapper). The effect is small but cheap. We now ship the wrapper on every widget Boost.
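The wrapper looks roughly like this. The name and description are illustrative; schema.org defines WebApplication as a SoftwareApplication subtype and FinanceApplication as a documented `applicationCategory` value:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "WebApplication",
  "name": "30-Year Fixed Mortgage Calculator",
  "applicationCategory": "FinanceApplication",
  "operatingSystem": "All",
  "description": "Computes the monthly payment, year-by-year amortization breakdown, and rate sensitivity for a 30-year fixed mortgage."
}
</script>
```

The block sits immediately before the widget's container element, so a parser walking the DOM encounters the declaration, then the structured output it describes.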
The lift is only durable if the calculator is accurate and maintained. We had one client ship a calculator that rounded incorrectly on edge inputs. Within four months the post's AI citations stopped being quoted directly and started being paraphrased defensively ("sources online estimate..."). We fixed the rounding and citations returned to direct-quote rate within six weeks. LLMs are surprisingly sensitive to small numerical errors, because the training signal punishes them for quoting wrong numbers confidently.
Build the widget once. Test it against a real reference (the HUD amortization calculator in this case). Keep it accurate. That is most of the long-term job.
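A sketch of the kind of regression check we mean: pin the payment math to externally verified reference values so a silent rounding change fails loudly. The `payment` function and the case list are illustrative; the expected figures are standard amortization-formula results, standing in for values read off a trusted external calculator:

```javascript
// Pin the widget's math to reference values so a rounding regression
// on any input fails loudly. Illustrative sketch; expected figures are
// standard-formula results, stand-ins for externally verified values.
function payment(principal, ratePct, years) {
  const r = ratePct / 100 / 12;
  const n = years * 12;
  return (principal * r) / (1 - Math.pow(1 + r, -n));
}

const REFERENCE_CASES = [
  { principal: 400000, ratePct: 6.5, years: 30, expected: 2528.27 },
  { principal: 300000, ratePct: 7.0, years: 30, expected: 1995.91 },
  { principal: 200000, ratePct: 5.0, years: 15, expected: 1581.59 },
];

for (const c of REFERENCE_CASES) {
  const got = payment(c.principal, c.ratePct, c.years);
  if (Math.abs(got - c.expected) > 0.01) {
    throw new Error(
      `payment(${c.principal}, ${c.ratePct}, ${c.years}) = ` +
      `${got.toFixed(2)}, expected ${c.expected}`
    );
  }
}
```

Run it in CI on every deploy of the widget; a one-cent drift on any reference case blocks the release.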
One calculator on one post produced 903 AI citations over 14 months. The post was not new. The traffic it earned was real but not the main point. The point is that a structural addition to an existing asset can produce a return on a different axis than the one you were originally optimizing for. That is the shape of work that compounds.
We run this exact play on roughly one post per Boost account per quarter, wherever the criteria hit. It is not the default treatment; FAQPage schema and question-shaped H2s are the default. The calculator is the escalation, reserved for the posts where the query intent justifies it. When it hits, it hits harder than any other single move we have in the toolkit.