If you’re responsible for NPS in Delighted today, you need a concrete plan for where that program lives next.
Delighted is sunsetting on June 30, 2026. If NPS is your main use case, that timeline means you need to:
📤 Export your NPS data from Delighted
🧱 Rebuild NPS surveys and sending rules in another tool
📈 Keep your NPS trendline and segments comparable over time
Simplesat has already covered the broader shutdown picture here, but this post focuses on a narrower question teams keep asking:
How do you move your NPS program from Delighted to Simplesat without losing momentum?
TL;DR for busy teams
If you only have a few minutes, here’s the short version:
- Have your Delighted NPS replacement in place before June 30, 2026.
- Keep your NPS methodology stable first (same 0-10 model, same core question, same key segments).
- Export Delighted history with the context you rely on (scores + comments + timestamps + the properties you use for segmentation); see the sketch after this list for a quick sanity check of the export.
- Rebuild your NPS flow in Simplesat, then run a quick continuity check (distribution, comment rate, and your top 3 segments).
- Mirror your existing flow first; then refine timing, follow-ups, and analysis using Topics and saved views.
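If you want a quick, low-tech way to verify the export before cutover, here's a minimal Python sketch. It assumes a Delighted CSV export saved to disk, and the column names used below (Score, Comment, Response Timestamp, Plan, Region) are placeholders; swap in whatever headers your actual export uses.

```python
import csv
from collections import Counter
from datetime import datetime

# Assumed column names: adjust these to match the headers in your actual Delighted export.
SCORE_COL = "Score"
COMMENT_COL = "Comment"
TIMESTAMP_COL = "Response Timestamp"
SEGMENT_COLS = ["Plan", "Region"]  # whatever properties you segment by


def check_export(path: str) -> None:
    """Confirm the context fields survived the export and show monthly coverage."""
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        rows = list(reader)
        headers = reader.fieldnames or []

    # 1. Are the columns you rely on actually present?
    required = [SCORE_COL, COMMENT_COL, TIMESTAMP_COL, *SEGMENT_COLS]
    missing = [col for col in required if col not in headers]
    if missing or not rows:
        print(f"Problem with export: missing columns {missing}, {len(rows)} rows")
        return

    # 2. Does the history cover every month you expect? Gaps here become gaps in your trendline.
    per_month = Counter()
    with_comment = 0
    for row in rows:
        ts = datetime.fromisoformat(row[TIMESTAMP_COL])  # adjust parsing to your export's date format
        per_month[ts.strftime("%Y-%m")] += 1
        if (row.get(COMMENT_COL) or "").strip():
            with_comment += 1

    for month in sorted(per_month):
        print(f"{month}: {per_month[month]} responses")
    print(f"Comment rate: {with_comment / len(rows):.0%} of {len(rows)} responses")


check_export("delighted_export.csv")
```

If a month is missing or the comment rate looks far off from what you see in Delighted's dashboard, fix the export before you archive anything.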
Why Delighted’s shutdown matters for NPS
A reliable NPS program rests on three things: continuity, context, and cadence. When any one of them slips, the score becomes harder to interpret or act on.
Continuity
NPS is most useful as a trendline you can trust.
Measurement drift happens when the score moves because the survey conditions changed, not because customers' loyalty did. It often shows up in subtle ways:
- Surveying at a different moment (after a support interaction vs a neutral touchpoint)
- Shifting the audience mix (more new customers, a different region, a different plan tier)
- Using a different channel (email vs in-app vs link)
- Tweaking follow-ups in a way that changes what people focus on
None of these changes are bad, as long as they’re intentional. They can lead to richer, more accurate data.
But if you switch tools and every part of your survey delivery changes at once, your internal benchmarks stop being reliable. You can’t tell whether you’re seeing a real change in loyalty or just a new measurement setup.
Context
Context is what makes NPS usable across the business. The score tells you how loyalty looks right now, but it doesn’t tell you who that reflects, what experience it’s tied to, or what to do next.
That’s why the supporting details matter.
In practice, “context” means your NPS data includes:
- Who the response belongs to (customer or account identifiers) so you can connect feedback to the right record
- Account properties (plan, region, lifecycle stage, owner, product line) so you can segment and route insights to the right teams
- When it was collected (timestamps) so your reporting periods and trendlines stay credible
- History over time (previous responses for the same customer/account) so you can see patterns, not isolated snapshots
- The explanation behind the rating (comments and follow-ups) so you understand what customers are reacting to
This is what turns NPS from a quarterly number into something operational. For example: instead of reviewing “overall NPS for Q2,” you can answer questions like “What’s NPS for customers on annual plans?” or “What are promoters consistently praising?” or “What themes show up most often in detractor comments?”
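For illustration only, here's what a single context-rich response might look like once exported. The field names below are hypothetical, not Delighted's or Simplesat's actual schema; the point is that each of the elements above has somewhere to live.

```python
# A hypothetical context-rich NPS response record. Field names are illustrative only;
# map them to whatever your export and your new tool actually use.
response = {
    "customer_email": "jordan@example.com",      # who: ties feedback to the right record
    "account_properties": {                       # segmentation and routing
        "plan": "Annual",
        "region": "EMEA",
        "lifecycle_stage": "Renewal",
        "owner": "csm-team-2",
    },
    "created_at": "2025-11-04T16:22:00Z",         # when: keeps trendlines and periods credible
    "score": 9,                                    # the rating itself
    "comment": "Onboarding was fast, reporting could be deeper.",  # the "why"
    "previous_scores": [7, 8],                     # history: patterns, not snapshots
}
```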
Cadence
Most teams run two distinct NPS motions:
- Relationship NPS (rNPS): a steady cadence that measures overall loyalty
- Transactional NPS (tNPS): a targeted send after key moments in the journey
If you want a concise refresher on why these should be treated differently, Simplesat’s guide is a solid reference!
Cadence is often the first casualty of a late migration. Leaving it too close to the deadline increases the odds of missed cycles, inconsistent sampling, and reporting gaps.
How does migration impact continuity, context, and cadence?
| Pillar | If this breaks… | What it looks like in real life |
|---|---|---|
| Continuity | Your trendline stops being comparable. | “Is this drop real, or did we change how we’re measuring?” Meetings turn into methodology debates. People lose confidence in quarter-over-quarter reads. |
| Context | The score becomes hard to explain or act on. | You can’t answer “who is this about?” or “what’s driving it?” Segmentation falls apart, comments aren’t tied to accounts, and the score becomes a vague temperature check. |
| Cadence | Your NPS program becomes inconsistent and easier to ignore. | Skipped cycles, uneven sampling, and gaps in reporting make trends feel less trustworthy. |
A quick NPS refresher
You already know how NPS works, but it helps to name the pieces when you're planning a migration across teams.
NPS ranges from -100 to +100. The standard NPS question is:
“How likely is it that you would recommend our company to a friend or colleague?”
Customers respond on a 0-10 scale:
- Promoters: 9-10
- Passives: 7-8
- Detractors: 0-6
Your NPS is: % of promoters − % of detractors.
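If it helps to see the arithmetic, here's a minimal sketch of that calculation in Python. The sample scores are made up: 6 promoters, 2 passives, and 2 detractors out of 10 responses gives 60% − 20% = an NPS of 40.

```python
def nps(scores: list[int]) -> int:
    """Standard NPS: % promoters (9-10) minus % detractors (0-6), on a -100 to +100 scale."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Made-up sample: 6 promoters, 2 passives, 2 detractors out of 10 responses.
sample = [10, 9, 9, 10, 9, 9, 7, 8, 6, 3]
print(nps(sample))  # 60% - 20% = 40
```

Running the same quick calculation on your exported Delighted history and on your first weeks of Simplesat responses is an easy way to confirm the trendline still lines up.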
In a migration, the real assets you’re preserving are:
- trendlines over time (overall + by segment)
- drivers hiding in comments and follow-ups
- your team’s reporting + follow-up habits
The awkward part of most migrations is this: you can rebuild the survey in a day, but it takes weeks to rebuild the operating system around it (segments, views, reporting cadence, follow-up habits).
Simplesat is a good fit when your goal is to keep those pillars intact, because it supports:
- segmentable reporting (filters, saved views, dashboards)
- scheduled reporting from views, so cadence doesn’t depend on manual exports
- conditional follow-ups that let you gather context without bloating the survey
- Topics + sentiment to group open-text feedback into themes
If you’re coming from Delighted, you don’t need a new approach to NPS. You can keep everything that works, and add some improvements as well.
What stays the same as in Delighted
You keep the familiar NPS foundations: the 0-10 scale, the promoter / passive / detractor groupings, and the same NPS calculation.
You also keep the part that actually gives NPS meaning: the open-ended “why” behind the rating. Delighted emphasizes that follow-up as the real source of insight, and Simplesat supports the same score → comment flow (with optional follow-ups if you want more detail).
Here’s what should feel familiar right away:
✅ Score + comment as the core flow (same basic NPS motion)
✅ Different follow-ups for different scores — Delighted's Additional Questions can be shown based on the rating, and Simplesat's conditional logic does the same job, keeping the survey short while collecting more relevant context
✅ A steady cadence — Delighted supports recurring sends; Simplesat supports both ongoing and event-based email survey delivery
✅ Multiple ways to reach customers — Delighted offers multiple distribution options (including surveys embedded directly in email); Simplesat supports email delivery plus options like web-embedded surveys
What changes in Simplesat
The difference is mostly operational: fewer repeated steps, more repeatable reporting, and faster ways to make sense of open-text feedback.
1) The views you rely on become repeatable.
In Delighted, you can filter and report on results. In Simplesat, you can save the slices you care about (plan tier, region, lifecycle stage, owner, product line) and come back to them consistently, which helps keep reporting steady across weeks and months.
2) Reporting cadence gets easier to maintain.
Once a view exists, scheduled reports can deliver it on a set cadence. That reduces the need for manual exports and “someone should send the update” reminders.
3) Comments are easier to summarize at scale.
If you collect a lot of open-text feedback, the bottleneck is usually interpretation. Simplesat Topics groups feedback into themes (with sentiment) so you can quickly see what’s coming up most often and where it’s concentrated.
4) Feedback can sit closer to day-to-day workflows.
If support is a major part of your customer experience, Simplesat can live inside Zendesk and embed one-click surveys into solved-ticket email notifications, so your team can stay closer to the context.
A simple way to think about it:
- Delighted often functions as a strong home for collecting NPS responses and reviewing them.
- Simplesat is set up to support the ongoing routine around NPS — consistent views, shareable themes, and a repeatable reporting rhythm.
What your NPS program looks like day to day in Simplesat
Once your NPS program is established in Simplesat, the day-to-day work becomes simpler and more consistent. The goal is to keep a small set of reliable views and routines that make NPS easy to interpret and easy to share.
- A few saved views you actually use — Instead of rebuilding filters every time, you keep repeatable slices (overall, your key segments, recent detractors). Those views become the backbone of how you read NPS over time.
- A fast read on what customers are talking about — When open-text volume grows, Topics helps group feedback into themes and sentiment so you can summarize drivers without manually tagging everything.
- A steady reporting cadence that doesn’t depend on exports — Scheduled reports can send the same saved slices on a set schedule, which helps keep NPS in rhythm across teams.
A simple internal readout many teams stick with:
- overall NPS + score distribution (promoters / passives / detractors)
- your top 2–3 segments
- top themes in open-text (especially detractors)
- a quick glance at response/comment rate so you don’t overinterpret noise
If you want a simple internal reference point for structure, check out our NPS template page!
When is Delighted shutting down?
Delighted’s sunset date is June 30, 2026. Delighted already stopped annual renewals on July 1, 2025, with customers able to switch to a monthly plan and use Delighted until the sunset date.
What’s the biggest risk when moving NPS off Delighted?
Losing interpretability. If timing, audience, channel, and follow-ups all change at once, the score can shift even if loyalty hasn’t.
Will our NPS score change after switching tools?
It can, especially if delivery conditions change (who gets surveyed, when, and how). That’s why the safest approach is to keep methodology stable first, then improve deliberately.
Does Simplesat have default NPS surveys?
Yes. Simplesat supports the standard NPS survey structure, which you can customize as needed.
Can we keep the simple approach Delighted is known for?
Yes! Simplesat supports the same core flow (rating + open-text), and you can add conditional follow-ups when you want more context from specific score groups.
How does Simplesat help summarize open-text feedback at scale?
Topics groups feedback into themes and sentiment so it’s easier to spot drivers and share a consistent summary.
How do we keep NPS reporting cadence consistent after switching?
Saved views keep key slices repeatable, and scheduled reports deliver those slices on a regular cadence.
NPS works because it asks a loyalty question directly: would you recommend us? That’s a stronger signal than “were you satisfied today,” because it captures something stickier: trust, value, and the willingness to put your name behind a brand.
But NPS only stays useful if it stays trustworthy.
Delighted sunsetting means an inevitable change to your tech stack, but you can protect the loyalty signal you’ve built. Preserve the structure, keep the context, maintain cadence — and you’ll carry forward an NPS program your team can keep counting on.
About Simplesat: Simplesat is the leading omnichannel survey app designed to enhance customer feedback management across various platforms, including Zendesk, Salesforce, and Gladly. Trusted by businesses worldwide, Simplesat delivers actionable insights that drive business growth and customer satisfaction.

