UX Case Study
February 23, 2026

Hunting ghosts in a serverless world: how we use QStash in Ranking Raccoon

Fülöp Kovács

A designer, a marketer, a developer, and a CEO who’s a bit of all three (I love my job, so I have to write this): that was our entire team for the first seven months of Ranking Raccoon.

Our product was (and still is) a platform for ethical link builders: people who want to find good placement opportunities on each other’s sites and coordinate A–B–C link building in a transparent, long‑term way. We had motivation and an ambitious dream: taking some of the oldest ideas on the internet (social networks and search) and repurposing them for link building.

What we didn’t have was a large engineering team, a huge infrastructure budget, or time in general. More importantly, we hadn’t validated the idea yet, and building the perfect version of the wrong thing is how startups die. Every hour we spent away from core features wiring up generic infrastructure was an hour we didn’t spend learning from users and improving the product.

One of the most surprisingly hard problems we ran into early on was ghosting. In a chat‑based product, ghosters silently kill communication. We knew we had to “hunt ghosts” automatically, and we had to do it in a serverless environment.

That’s where QStash came in.

The Problem: Delayed Work in a Serverless App

Ranking Raccoon runs on a serverless stack: a Next.js app hosted on Vercel.

That was great for a tiny team: we didn’t have to think about provisioning or scaling servers.

But serverless also removes shared state between the functions that respond to requests, which makes long‑running workflows and delayed checks harder to model.

This problem is not unique to us: Vercel introduced Workflows this year to address this limitation.

Despite being serverless, our product still needed scheduled tasks:

  • Check if users replied to important messages after a few days.
  • Send reminders or follow‑ups when conversations stall.
  • Nudge users who marked themselves “out of office” and forgot to come back.

In a traditional server setup, you might solve these with a cron job and a worker process that polls the database. In serverless, you don’t have a long‑running process. Your functions only execute when triggered by an HTTP request, an event, or a scheduled task from some external system.

For our main ghosting workflow, we needed two specific delayed checks:

  • In 3 days: has the receiver replied to the link request?
  • In 16 days: if they still haven’t replied, escalate the consequences.

These checks must happen regardless of traffic patterns or deployments. We can’t rely on someone being online, or on a particular page being hit, at just the right moment.

We needed something that could reliably call our backend in the future, without us running our own queue and workers.
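In code, those two checkpoints reduce to a pair of constants. The sketch below is illustrative rather than our real module; the real codebase keeps similar values in a `GHOSTING_TIME_MS` map, and because QStash expresses delays in seconds, we convert from milliseconds when scheduling:

```typescript
// Illustrative timing constants for the ghosting workflow (a sketch, not the real module).
const DAY_MS = 24 * 60 * 60 * 1000;

const GHOSTING_TIME_MS = {
  TIME_TO_REPLY: 3 * DAY_MS, // first check: 3 days after the link request
  SUSPENDED: 13 * DAY_MS,    // second check: 13 more days (16 days total)
} as const;

// QStash delays are expressed in seconds, so we convert when scheduling.
const toQstashDelaySeconds = (ms: number): number => Math.floor(ms / 1000);
```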

How We Decide What to Outsource

With four people and a lot of product ideas, we can’t build everything ourselves. From the beginning, we had to keep deciding what was core to Ranking Raccoon and which infrastructure we would outsource.

To remain honest with ourselves, we used a simple checklist when evaluating third‑party services:

1. How much developer time does it save now and in the future?

 We care about both initial integration and ongoing maintenance. If a service keeps us from becoming an accidental infra team, that’s a strong plus.

2. Can the provider be trusted?

 Is this a team with a track record? Are other production systems relying on them? Do the docs and support look like they’ll still be there a year from now?

3. How hard would it be to replace them?

Vendor lock‑in is real. We try to integrate via clear boundaries and small wrappers, so that if we ever need to swap out a provider, most of the application doesn’t notice.

4. Do expected expenses fit the budget (and scale to zero)?

For an early‑stage startup, paying for unused capacity hurts. We want costs to mostly grow with usage, not with theoretical maximums.

Delayed jobs and queues are the definition of infrastructure. They’re crucial, but they’re not what makes Ranking Raccoon unique. That made them a perfect candidate for outsourcing.

Why We Picked QStash

QStash, from Upstash, acts as “HTTP in the future” for our use case: you tell it what URL to call and when, and it will POST to that URL at the scheduled time, with retries and delivery guarantees.

When we compared QStash against rolling our own queue or using more heavy‑weight cloud services, it matched our checklist very well:

  • Saves time: Minimal code on our side: define a handler as an HTTP endpoint, then ask QStash to call it later.
  • Easy to integrate: QStash works great with Next.js API routes. There’s an official integration for verifying signatures in Next.js, and the TypeScript support is good. Our existing request/response and validation patterns fit naturally.
  • Scales to zero and is cheap for our scale: QStash's pricing is usage‑based. At the time of writing, there’s a free tier with up to 1,000 messages per day, and a pay‑as‑you‑go tier at $1 per 100K messages. For an app like ours, that’s effectively negligible infrastructure cost, and we don’t pay for idle capacity.
  • Trusted provider: Upstash is already well‑known for their Redis offering, and they’re widely used in the serverless/Edge ecosystem. That gave us confidence that QStash wouldn’t disappear overnight.

QStash let us keep our focus on product. We stayed a product team, not a queue team.

Hunting Ghosts: The Ghosted Link Request Flow

Let’s look at the main feature where QStash shines for us: identifying ghosters.

In Ranking Raccoon, a “link request” is essentially a structured conversation starter: one user reaches out to another to propose a link placement. If the receiver ignores it, the sender is left hanging.

Ghosters are dangerous for this kind of platform:

  • They create a bad experience for the sender.
  • They reduce trust in the marketplace.
  • They make it harder to build long‑term relationships.

We wanted the platform to help keep these conversations healthy. That meant automatically:

  1. Detecting when someone hasn’t responded within a few days.
  2. Nudging them to reply.
  3. Escalating if they continue to ignore the message.

To be precise, our flow looks like this:

Ranking Raccoon's ghosting workflow

1. User sends a link request.

  At this moment, our backend schedules a delayed QStash message for 3 days in the future.

2. After 3 days, QStash calls our “ghosted link request” webhook.

  This is a Next.js API route at:

  /api/handle-unanswered-link-request?ghosted_days=3

3. The webhook checks the conversation.

  If the link request is still unanswered:

  • We update the sender’s ghosting state to `TIME_TO_REPLY`.
  • We show a message in the UI telling them it’s time to respond.
  • We schedule another QStash message, this time for 13 more days (16 days total).

4. After 13 more days (16 total), QStash calls the same webhook again.

  This time with:

  /api/handle-unanswered-link-request?ghosted_days=16

5. Final check and escalation

  If the link request is **still** unanswered after 16 days:

  • We set the user’s ghosting state to `SUSPENDED`.
  • We send emails and track the event in Mixpanel.
  • The platform restricts them until they become active in that conversation again.

All of this happens even if traffic is low, no one is logged in at the exact moment, and we haven’t deployed anything recently. QStash is the reliable clock that calls our backend exactly when we need it to.

Scheduling Ghost Checks with QStash

On the application side, scheduling a ghost check is just a call to QStash’s API from our backend.

When a link request is created, we do something conceptually like this (simplified for clarity):

```ts
// Pseudo-code: scheduling the first ghost check
await qstash.publishJSON({
  url: `${RR_URL_ORIGIN}/api/handle-unanswered-link-request?ghosted_days=3`,
  body: eventBody, // describes the link request and triggering user
  deduplicationId: `ghosted-3-days-${eventBody.linkRequest.id}`,
  // delay in seconds
  delay: GHOSTING_TIME_MS.TIME_TO_REPLY / 1000,
});
```

A few important points here:

  • `url` points to our webhook endpoint. QStash will POST to this URL when the delay has passed.
  • `body` is a JSON payload (validated by `linkRequestEventBodySchema` on the receiving side) that contains the link request ID and the user IDs we need.
  • `deduplicationId` helps avoid accidentally scheduling the same job twice.
  • `delay` is simply how long QStash should wait before calling our endpoint, in seconds.

From our perspective, this is the entire scheduling story: call `publishJSON` with the right URL, body, and delay, and QStash takes it from there.

The Ghosted Webhook: Turning Delays into Decisions

The other half of the equation is the webhook that QStash calls. In our codebase, it lives at:

src/pages/api/handle-unanswered-link-request.ts

At a high level, the handler does four things:

1. Validate the request

  •   Ensure we have a `ghosted_days` query param and that it’s either `3` or `16`.
  •   Ensure the HTTP method is `POST`.
  •   Parse and validate the JSON body using `linkRequestEventBodySchema`.

2. Look up the link request and its status.

  We fetch the link request from the database, making sure the sender isn’t deleted and selecting just what we need (ID and whether it’s unanswered).

3. Decide what to do based on `ghosted_days` and the current state.

  If the link request is unanswered:

  •   For 3 days:
    • We update the user’s ghosting state to `TIME_TO_REPLY`.
    • We schedule the 16‑day follow‑up with another `qstash.publishJSON` call.
    • We store the new QStash message ID in the database (`qstashMessageId`) for tracking.
  • For 16 days:
    • We update the user’s ghosting state to `SUSPENDED`.
    • We send analytics events (for example, `USER_SUSPENDED` to Mixpanel).

4. Log what happened.

We use Axiom for structured logging, attaching the user and link request IDs, and logging the new ghosting state.

The actual implementation is also wrapped with:

```ts
export default verifySignature(handler);
```
from `@upstash/qstash/dist/nextjs`, which verifies that the request really came from QStash. Since signature verification needs access to the raw request body, we also disable Next.js’s default body parser (we use the pages router):
```ts
export const config = {
  api: {
    bodyParser: false,
  },
};
```

In plain terms: QStash calls a public URL on our backend, we verify the signature coming from Upstash, we look up the conversation in the database, and we decide whether this user has become a “ghost” yet, and what to do about it.
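The branching in step 3 can be distilled into a small pure function. This is a sketch, not our actual handler: the state names (`TIME_TO_REPLY`, `SUSPENDED`) come from the flow above, while `decideGhostAction` and the action shapes are made up for illustration:

```typescript
// Sketch: the core decision logic of the ghosted-link-request webhook.
// Names like GhostAction and decideGhostAction are illustrative, not from the real codebase.
type GhostedDays = 3 | 16;

type GhostAction =
  | { kind: "none" } // the receiver replied in time; nothing to do
  | { kind: "nudge"; newState: "TIME_TO_REPLY"; scheduleFollowUpDays: number }
  | { kind: "escalate"; newState: "SUSPENDED" };

function decideGhostAction(ghostedDays: GhostedDays, isUnanswered: boolean): GhostAction {
  if (!isUnanswered) return { kind: "none" };
  if (ghostedDays === 3) {
    // First check: mark the user, then schedule the 16-day follow-up (13 more days).
    return { kind: "nudge", newState: "TIME_TO_REPLY", scheduleFollowUpDays: 13 };
  }
  // Second check: still unanswered after 16 days, so we escalate.
  return { kind: "escalate", newState: "SUSPENDED" };
}
```

Keeping the decision pure like this is also what made the handler easy to unit test without any network involved.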

Another Pattern: Out‑of‑Office Reminders

Ghost hunting isn’t the only thing we use QStash for.

Ranking Raccoon also lets users mark themselves as "out of office" for a specific amount of time (within certain limits). When someone enables this, we want to:

  • Respect their status while they’re away.
  • Let others know they might not get a response while this person is out.
  • Remind them when their out‑of‑office period is about to end, so they can come back or extend it.

The pattern is almost identical to ghost hunting:

1. When a user sets their out‑of‑office status, we schedule a QStash message for the end of that period.

2. If they manually modify the end of the period later, we cancel the scheduled QStash message so it never fires.

3. When QStash calls our endpoint at the scheduled time, we:

  •   Remind the user via email or in‑app message.
  •   Potentially clear or update their status.

The business logic is different, but the building block is the same: “call this URL in the future with this payload”. Once we had the QStash integration in place, adding this second use case was mostly a matter of writing new application logic.
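The out‑of‑office pattern can be sketched like this. All names here are illustrative (the route path, `REMINDER_LEAD_MS`, the helper functions); `QstashLike` mirrors just the two SDK calls we rely on, `publishJSON` (which returns a `messageId`) and `messages.delete` for cancellation, injected as an interface so the logic stays testable without the network:

```typescript
// Sketch of the out-of-office reminder scheduling (illustrative names throughout).
interface QstashLike {
  publishJSON(opts: { url: string; body: unknown; delay: number }): Promise<{ messageId: string }>;
  messages: { delete(messageId: string): Promise<void> };
}

// Assumption for this sketch: remind the user one day before the period ends.
const REMINDER_LEAD_MS = 24 * 60 * 60 * 1000;

// Pure helper: how many seconds QStash should wait before calling us back.
function reminderDelaySeconds(endsAt: Date, now: Date): number {
  const ms = endsAt.getTime() - REMINDER_LEAD_MS - now.getTime();
  return Math.max(0, Math.floor(ms / 1000));
}

async function scheduleOutOfOfficeReminder(
  qstash: QstashLike,
  origin: string,
  userId: string,
  endsAt: Date,
  now: Date = new Date(),
): Promise<string> {
  const { messageId } = await qstash.publishJSON({
    url: `${origin}/api/handle-out-of-office-reminder`, // hypothetical route name
    body: { userId, endsAt: endsAt.toISOString() },
    delay: reminderDelaySeconds(endsAt, now),
  });
  return messageId; // store this in the DB so the reminder can be cancelled later
}

async function cancelOutOfOfficeReminder(qstash: QstashLike, messageId: string): Promise<void> {
  // If the user changes the period, delete the pending message so it never fires.
  await qstash.messages.delete(messageId);
}
```

Storing the returned `messageId` is the key move: it is what turns “fire and forget” into a reminder we can still cancel.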

Downsides, Local Development, and Learnings

No tool is perfect, and QStash is no exception.

When we started using it, one big downside was local development. There was no official way to run QStash locally, so we had to rely on tunneling tools like ngrok to expose our dev server to the internet. That added friction: webhooks failed when the tunnel wasn’t running, setup was easy to forget across machines, and we couldn’t test these flows fully offline.

We mitigated this by keeping webhook handlers small and pure for unit testing, and leaning on logging/analytics to understand production behavior. Upstash has since released a QStash CLI that can run QStash locally, which should remove the tunneling dependency entirely.

There’s also the perennial concern of vendor lock‑in. Our approach there is to:

  • Keep QStash integrations behind small helpers (`~/lib/qstash`, etc.).
  • Pass around domain data (like link request IDs) instead of vendor‑specific objects.

If we ever needed to replace QStash, most of the changes would be in those small integration layers, not spread across the entire codebase.
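Concretely, the boundary can be as small as one function. This is a sketch of the idea, not the real `~/lib/qstash` module: application code hands over domain data (a path, a payload, a delay, a dedupe key), and only this file knows that QStash sits behind it:

```typescript
// Sketch of a thin provider boundary for delayed calls (illustrative names).
// The scheduler interface mirrors only the one SDK call we use.
interface DelayedCallScheduler {
  publishJSON(opts: {
    url: string;
    body: unknown;
    delay: number;
    deduplicationId?: string;
  }): Promise<{ messageId: string }>;
}

async function scheduleDelayedCall(
  scheduler: DelayedCallScheduler,
  origin: string,
  opts: { path: string; body: unknown; delaySeconds: number; dedupeKey?: string },
): Promise<string> {
  const { messageId } = await scheduler.publishJSON({
    url: `${origin}${opts.path}`,
    body: opts.body,
    delay: opts.delaySeconds,
    deduplicationId: opts.dedupeKey,
  });
  return messageId;
}
```

Swapping providers then means reimplementing `scheduleDelayedCall`, not auditing every call site.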

Takeaways

For us, QStash solved a very specific but important problem: how do you execute delayed actions in a serverless environment without building your own queue and worker stack?

By delegating delayed work to QStash, we were able to:

  • Ship a reliable ghost‑detection and escalation flow quickly.
  • Add related features (like out‑of‑office reminders) with very little extra infrastructure work.
  • Keep our focus on the core product rather than on maintaining background workers.

If you’re a full‑stack developer or product leader working on a small team, our experience generalizes into a few simple guidelines:

  • Be explicit about what’s core to your product and what’s infrastructure.
  • For infrastructure‑shaped problems (queues, delayed jobs, auth, payments), strongly consider managed services.
  • Evaluate providers on time saved, trust, replaceability, and pricing that scales with you.

In our case, that process led us to QStash. It gave us a way to hunt ghosts in a serverless world without having to build a haunted house of infrastructure ourselves.

Want to learn more?

Sign up for our blog and get monthly notifications with a summary of our latest posts.