An AI session summary is a short, plain-English narrative of a single visitor's session on your store — usually two sentences — generated automatically from the recorded event stream. Instead of scrubbing through three minutes of replay to figure out what happened, you read thirty seconds of summary: where the visitor came from, what they engaged with, where they hesitated, whether they bought. It's how session recording becomes a workflow you actually use, instead of a guilt pile of unwatched replays.
A session summary is the difference between watching 200 replays and reading them.
Three things change about session replay once every replay comes with a summary attached.
Plain English. No jargon, no event taxonomy, no "user_session_id_4F8B traversed product_listing_node." The summary reads like a teammate describing the session out loud: "Visitor landed on the gua sha PDP from a Meta ad, scrolled the gallery twice, paused on the size selector, added to cart, then bounced from checkout when the discount code wouldn't apply."
That single sentence is the difference between scrubbing three minutes of replay to figure out what went wrong and a thirty-second read that tells you. The friction signals — the paused size selector, the failed discount — surface in the narrative without you having to scrub for them.
Click into a replay, read the summary, close the tab. Click into the next one, read, close. After ten minutes you've moved through more sessions than most merchants watch in a month. Most of them confirm the obvious — bounced visitors, browsing visitors, easy converts. The interesting ones jump out of the list.
Filter the session list first ("returning customers who reached checkout but didn't pay") and skim mode becomes targeted, not random. You're not skimming all sessions — you're skimming the segment that matters. Twenty minutes of triage replaces what used to be a full afternoon of guilt about the unwatched recordings.
Read 100 summaries in a row and patterns surface that no single replay would reveal. "A lot of people pause on the size selector." "Most paid-traffic visitors abandon before scrolling." "Returning customers re-add the same product five times before checking out." None of these show up if you're watching three sessions a day.
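Because summaries are plain text, you can treat a batch of them as data. A minimal sketch of that idea — the summary strings and the signal list here are hypothetical, not Propel's API — tallying how often recurring friction phrases appear across a session list:

```python
from collections import Counter

# Hypothetical summary texts, like those skimmed from a filtered session list.
summaries = [
    "Visitor landed from a Meta ad, paused on the size selector, abandoned.",
    "Returning customer re-added the same product, paused on the size selector.",
    "Visitor scrolled once and abandoned before reaching a PDP.",
]

# Friction phrases worth counting; in practice you'd grow this list as patterns emerge.
signals = ["size selector", "discount code", "abandon", "re-added"]

counts = Counter()
for summary in summaries:
    text = summary.lower()
    for signal in signals:
        if signal in text:
            counts[signal] += 1

for signal, n in counts.most_common():
    print(f"{signal}: {n}")
```

Even this crude keyword tally surfaces "a lot of people pause on the size selector" faster than watching replays one at a time.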
Once you see the pattern in the summaries, you can drill into the specific replays that show it best — and write the fix the same afternoon. The summaries don't replace the replay; they tell you which question to ask the replay.
Per-replay AI summaries aren't unique anymore — Microsoft Clarity and a handful of others ship some version of this now (we cover the field in our roundup of the best Shopify session replay apps for 2026). Where Propel pulls ahead is what the model gets to read. A generic session-replay tool sees a stream of clicks and page transitions; it can describe the mechanics of the session, but it doesn't know who the visitor is or what they bought before. Propel feeds the summary the Shopify customer and order context: returning vs. new, total spent, customer tag, the order they placed last time, the cart they're abandoning right now.
That changes the narrative. "A returning customer with a $1,200 lifetime value abandoned their cart at the shipping step for the second time this month" is a different summary than "A user clicked, scrolled, and exited." The first is actionable in one read. The second is a description of a session. We sweat the prompt and re-evaluate the underlying model regularly so the writing stays sharp — and we keep tightening the structured signal we feed it as merchants tell us what they wish the summary noticed.
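The shape of that context can be sketched as follows — every field name and the toy `describe` function are illustrative assumptions, not Propel's real schema or model call; the point is only that the summarizer reads customer and cart state alongside the event stream:

```python
# Illustrative only: field names are invented, not Propel's actual schema.
session_context = {
    "events": ["pageview:/products/gua-sha", "add_to_cart", "checkout_step:shipping", "exit"],
    "customer": {
        "returning": True,
        "lifetime_value_usd": 1200.00,
        "tags": ["vip"],
    },
    "cart": {"items": 2, "value_usd": 114.00, "abandoned": True},
}

def describe(ctx):
    """Toy stand-in for the model call, to show how context changes the narrative."""
    c = ctx["customer"]
    who = f"returning customer (${c['lifetime_value_usd']:,.0f} LTV)" if c["returning"] else "new visitor"
    ending = "abandoned their cart" if ctx["cart"]["abandoned"] else "checked out"
    step = ctx["events"][-2].split(":")[-1]
    return f"A {who} {ending} at the {step} step."

print(describe(session_context))
# → A returning customer ($1,200 LTV) abandoned their cart at the shipping step.
```

Strip the `customer` block from that payload and the best any model can write is "a user clicked, scrolled, and exited" — which is the whole argument.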
The honest framing: AI summaries on replays are no longer rare. AI summaries that know your customer and order graph still are.
Propel's insight alerts surface the day's biggest orders, biggest abandoned carts, and biggest abandoned checkouts; product-page alerts flag significant drops in add-to-cart rate on specific PDPs. Open the session list filtered to that signal, read the first fifty summaries, watch the three that confirm the pattern, ship the fix Friday afternoon. The alerts tell you where to look; the summaries tell you what's there.
A customer emails saying their cart broke. Look them up by email, open the replay, read the summary first: "Visitor added two SKUs to cart, applied a discount code, hit the Shop Pay button, then bounced from checkout when the country dropdown wouldn't expand on iOS." That's the answer in one sentence. You reply with a fix instead of a "can you send a screenshot?"
You're spending money on Meta and the landing page is converting at 0.4%. Filter sessions to that UTM segment. Read 200 summaries in twenty minutes. The pattern usually appears in the first thirty: "visitors land, scroll once, abandon" — your hero isn't matching the ad, or your fold is too long, or the page is loading too slowly on 4G. The diagnosis used to take a week of CRO consulting; now it takes a coffee break.
Summaries are for triage, not forensics. When a customer reports a real bug, when a checkout regression starts costing you sales, when something genuinely weird happened in a session — you still watch the replay end-to-end. The summary tells you which replay; the replay tells you exactly what broke at second 47.
We say this on purpose. The AI summary is a workflow upgrade, not a replacement for the underlying recording. Tools that promise "you'll never have to watch a session again" are overselling. The right framing is: read 200 summaries, watch the 5 replays the summaries point at, ship the fix that came out of those 5. That's the workflow we built for, and that's the one merchants actually use.
Install Propel Replays and skim your first session before lunch. Free up to 750 pageviews/mo. AI summaries on every paid plan, with a 7-day free trial.