UX Case Study
UX Design
May 12, 2025

Building AI features at Ranking Raccoon: How our design process has shifted

Paola Zardo

Ranking Raccoon is a community-based product created by UX studio for ethical marketers and SEOs to build high-quality backlinks. From day one, Ranking Raccoon has been taking away a big chunk of the tiresome and time-consuming work involved in link building. Recently, our team has started to integrate AI in order to further streamline the link building process.

Building AI features has led to a significant shift in how we design, develop and collaborate at Ranking Raccoon. Our traditional approach — designing, testing and then figuring out the technical part — has evolved into a more dynamic and integrated process.

In this article, we'll take you behind the scenes of building our AI features, and explore how our design process has adapted to these changes. We'll walk you through the main steps we followed, highlighting the most substantial shifts and providing examples of AI features we've built.

Defining AI's role – why do we need it and what is our goal?

Currently, it seems like AI is being incorporated into every existing product, leading many product teams to feel pressured to follow the trend without a clear reason. In our experience, the best approach is to first define the purpose of incorporating AI into a product, and to pursue this path only if there's a clear goal that AI can help achieve.

For Ranking Raccoon, one of our earliest challenges was a lengthy signup process that required users to provide extensive data about their sites. Although this step was necessary for assessing site quality, we recognized that it could be optimized to reduce user workload. To add to that, we noticed a higher-than-expected drop-off rate at this step of the signup flow, potentially related to the high effort involved in it.

This is where we identified an opportunity to outsource the work and collect the necessary site information automatically with AI. The potential benefits were clear: users could finish the signup much faster and focus on the core task of finding relevant opportunities for link collaborations.

By implementing AI, we managed to streamline the process of adding a site from a multi-step form with six different input fields to a single, quick step: adding a site's URL. This clearly aligns with our overarching goal of making link building as efficient as possible.

Two UI cards showing the before and the after of a form entitled “Add your website”. The first version has six different input fields (URL, industry, topics, meta title, domain rating and organic traffic), while the second version only has a single input field (URL).

So, before simply deciding to incorporate AI features into your product, it's crucial to reflect on the specific problem you're trying to solve. What is it that AI can do better, and why? Does it truly help your users achieve their goals or experience the value of your product? We collected the data, defined which AI UX design principles to follow, and got to work.

Collecting all the necessary data

After having a clear picture of the task we wanted AI to perform and why, our next step was to collect all the necessary information that our team or users relied on when completing such a job manually.

You can think about it as if we were documenting a process for a new colleague joining the team – the key question we aimed to answer at this point was: what does AI need to know so that it can deliver what we expect from it?

For AI to provide us with core data about newly added sites, we needed to scrape the content of their landing pages. This allowed us to provide enough context for the AI to determine key attributes, such as the site's industry and the topics it covers.
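To make this step a bit more concrete, here's a minimal sketch of how landing page content could be collected before being handed to the model. The library choices (requests and BeautifulSoup) and the character limit are illustrative assumptions, not a description of our production setup.

```python
# Minimal sketch: fetch a landing page and extract its visible text
# so it can be passed to the model as context. Libraries and limits
# are illustrative, not Ranking Raccoon's actual stack.
import requests
from bs4 import BeautifulSoup

def scrape_landing_page(url: str, max_chars: int = 8000) -> str:
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")

    # Drop non-content elements before extracting text.
    for tag in soup(["script", "style", "nav", "footer"]):
        tag.decompose()

    text = " ".join(soup.get_text(separator=" ").split())
    # Trim to keep the prompt within a reasonable context size.
    return text[:max_chars]
```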

Since we aimed for an industry-based categorization, we also had to come up with a predefined list of industries. This ensures that similar sites are grouped together, which wouldn't be possible if we gave AI the flexibility of naming industries in multiple, unstructured ways.
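A sketch of what this constraint can look like in practice: the categorization prompt only allows industries from the predefined list. The list below is abbreviated and the prompt wording is illustrative.

```python
# Sketch of constraining categorization to a predefined taxonomy.
# Industry names beyond those mentioned in the article, and the
# prompt wording, are illustrative examples.
INDUSTRIES = [
    "Software & IT Services",
    "Marketing & Advertising",
    "Consumer Services",
    # ...
]

def build_categorization_prompt(page_text: str) -> str:
    return (
        "Categorize the site below using ONLY industries from this list:\n"
        + "\n".join(f"- {name}" for name in INDUSTRIES)
        + "\n\nSite content:\n"
        + page_text
    )
```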

Defining expected results

Just like different co-workers, AI can perform the same task in thousands of different ways, so we had to define the scope of what we expected as a result.

In the case of site data, we required the information to be structured. For example: each site should be associated with no more than three industries and ten topics, with each topic limited to 40 characters or less. All of these requirements were collected beforehand, which made our lives much easier when crafting the prompt itself.

The left side of the image shows part of a long list of industries entitled “Initial data set”, while the right side shows the expected final result, containing a site's URL with two industries assigned to it (Software & IT Services, and Marketing & Advertising).
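Requirements like these can be enforced with a small validation step on whatever the model returns. The sketch below assumes hypothetical field names (industries, topics) but mirrors the limits described above.

```python
# Sketch of validating the structured result against the requirements
# described above: at most three industries, at most ten topics,
# and each topic no longer than 40 characters. Field names are assumed.
def validate_site_data(result: dict) -> list[str]:
    errors = []
    if len(result.get("industries", [])) > 3:
        errors.append("A site may have at most three industries.")
    topics = result.get("topics", [])
    if len(topics) > 10:
        errors.append("A site may have at most ten topics.")
    for topic in topics:
        if len(topic) > 40:
            errors.append(f"Topic too long (>40 chars): {topic!r}")
    return errors
```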

Another AI feature we built at Ranking Raccoon was creating an initial message draft to help first-time users start conversations. In this case, we established a few requirements for an acceptable draft, such as: being personalized, highlighting the overlaps between the sender's and recipient's sites, following best practices in link building, and using certain preferred tones of voice.
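As a rough illustration, requirements like these can be expressed directly as prompt instructions. The placeholder parameters and wording below are assumptions made for the sake of the example, not our actual prompt.

```python
# Sketch of turning the draft requirements into prompt instructions.
# The placeholder fields and the exact wording are illustrative.
def build_draft_prompt(sender_site: str, recipient_site: str, overlaps: str) -> str:
    return f"""Write a short first message from the owner of {sender_site}
to the owner of {recipient_site} proposing a link collaboration.

Requirements:
- Personalize the message to the recipient's site.
- Highlight these overlaps between the two sites: {overlaps}
- Follow link building best practices.
- Use a friendly, professional tone of voice.
"""
```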

When setting goals and requirements for the results, it helps to think about the best examples you have available. In the case of messages, we referred to user feedback about what a great link building message looks like, and to the most remarkable high-quality messages users had received from others on Ranking Raccoon.

Experimenting with prompts

With all the necessary data in place and the expected results defined, we started writing the prompts and experimenting with AI. While doing so, we evaluated several things to help us move forward: how results varied across different language models (and which one to use), where our requirements needed clarification, what specific criteria we had overlooked, what unexpected outcomes the AI generated, and more.

In the case of collecting site data, we realized that some of our industry names were too vague for accurate categorization. Many sites were being categorized as “Consumer Services” when they had nothing to do with that industry. To solve this problem, we incorporated descriptions for certain industries, providing as much context as possible to prevent misleading categorizations.
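In practice, this kind of refinement can be as simple as attaching a short description to the ambiguous industry names in the prompt. The description text below is an invented example, not the one we actually use.

```python
# Sketch of the refinement described above: ambiguous industry names
# get a short description to prevent misleading categorizations.
# The description text itself is an illustrative assumption.
INDUSTRY_DESCRIPTIONS = {
    "Consumer Services": (
        "Businesses offering services directly to individual consumers, "
        "such as cleaning, repairs, or personal care (not software products)."
    ),
}

def describe_industry(name: str) -> str:
    description = INDUSTRY_DESCRIPTIONS.get(name)
    return f"- {name}: {description}" if description else f"- {name}"
```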

For message drafts, we encountered several unexpected issues, such as the use of markdown syntax to format URLs. We also had to add explicit requirements so the AI wouldn't use language or sentences that made the messages sound like spammy link building emails, with instructions as specific as “do not use [certain words]” or “do not compliment sites excessively”.

The left side of the image shows part of an AI-generated message draft where a site URL is formatted using markdown syntax, while the right side shows a new sentence added to the prompt as a result, saying “Do not use markdown syntax”.
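Here's a sketch of how such negative constraints might be appended to the draft prompt; the exact rule wording is illustrative, based on the examples quoted above.

```python
# Sketch of the negative constraints added after testing.
# Rule wording is illustrative, mirroring the examples in the article.
DRAFT_RULES = [
    "Do not use markdown syntax.",
    "Do not compliment sites excessively.",
    "Do not use language typical of spammy link building emails.",
]

def append_rules(prompt: str) -> str:
    return prompt + "\n" + "\n".join(DRAFT_RULES)
```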

This part of the process is inherently iterative. For this reason, it's important to keep a log of what's being changed from one version of the prompt to another, along with what the results look like for each iteration. We learned that prompts can be very sensitive to changes, and modifying one requirement might also affect results related to another part of the prompt. Therefore, teams need a record of which change led to which effect.
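One lightweight way to keep such a log is a simple changelog entry per prompt version, for example along these lines (the structure and the sample values are illustrative):

```python
# Minimal sketch of a prompt changelog entry, so the team can trace
# which change produced which results. Structure and values are examples.
from dataclasses import dataclass

@dataclass
class PromptVersion:
    version: str
    change: str           # what was modified in the prompt
    observed_effect: str  # how the results shifted in this iteration

log = [
    PromptVersion(
        version="v4",
        change="Added 'Do not use markdown syntax.'",
        observed_effect="URLs no longer rendered as [text](url).",
    ),
]
```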

We also found that using as much real-world data as possible allowed us to catch edge cases more easily and adapt the prompt accordingly. Using examples of real sites registered on Ranking Raccoon helped us prepare for cases such as when a landing page lacks sufficient content to set proper context, and come up with backup plans to address these situations.
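What a backup plan looks like depends on the case; as one purely illustrative fallback, if the scraped landing page doesn't contain enough text, the AI step could be skipped in favor of manual input. The threshold below is an arbitrary assumption.

```python
# Sketch of one possible backup plan: if the scraped landing page
# doesn't give the model enough context, skip the AI step rather than
# let it guess. Threshold value is an arbitrary assumption.
MIN_CONTEXT_CHARS = 300

def has_enough_context(page_text: str) -> bool:
    return len(page_text.strip()) >= MIN_CONTEXT_CHARS

def prepare_site_context(url: str) -> str | None:
    text = scrape_landing_page(url)  # from the earlier sketch
    if not has_enough_context(text):
        return None  # e.g. fall back to manual input in the signup flow
    return text
```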

Early engineering involvement

Unquestionably, this was the biggest shift in our design process. Even in agile environments, it's pretty common for developers to get fully involved in new features only once the team already has a concept in mind. However, with AI features, tech plays such a pivotal role that we naturally had to involve engineers much earlier in the process.

At the beginning, we held discussions with engineers around the predefined data and expected results. They helped us figure out what data we could provide the AI with, and the best format for including it as input in the prompt. Similarly, they helped determine the optimal output formats for the expected results so that we could translate them into user-facing solutions. In most of our cases, we required outputs in JSON format.
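For example, a structured result for the add-a-site flow might look like the JSON below; the field names are assumptions, but the shape follows the requirements described earlier.

```python
# Sketch of the kind of JSON output requested from the model, plus parsing.
# Field names are assumptions; the shape follows the requirements above.
import json

raw_output = """
{
  "url": "https://example.com",
  "industries": ["Software & IT Services", "Marketing & Advertising"],
  "topics": ["SEO", "link building", "content marketing"]
}
"""

site_data = json.loads(raw_output)
assert len(site_data["industries"]) <= 3
assert all(len(topic) <= 40 for topic in site_data["topics"])
```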

Because they're so tech-dependent, AI features often involve the creation of a POC (proof of concept) to assess their feasibility before the team fully commits time and resources. Beyond technical viability, a POC helps evaluate whether the expected results are actually achievable, define the toolset needed, and estimate time and costs.

Engineers were also involved in testing the prompts with real-world data to ensure scalability and identify edge cases. This goes to show that allocating engineering time in early stages of an AI feature is a strategic move that increases the chances of success and reduces risks.

Design phase

Then, and only then, did we start working on the actual designs.

Depending on the scope of the AI feature, design might play a relatively secondary role.

A prime example is our relevance feature, where AI calculates how relevant sites are to each other based on their content. This enabled us to recommend the most relevant sites to users and allowed them to sort results by relevance. In this case, a heavily technical feature with a huge impact for users required a straightforward UI solution: a simple “Most Relevant” sorting option for the list of sites.

User interface of a page where users can browse a list of sites. The sorting options drop-down is opened and the option “Most Relevant” is selected.
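We won't go into the technical details of how relevance is calculated here, but as a purely illustrative sketch, one common approach to content-based relevance is comparing text embeddings with cosine similarity and sorting sites by that score.

```python
# Purely illustrative: the article doesn't describe how relevance is
# computed. One common approach is cosine similarity between
# content embeddings of the two sites.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def sort_by_relevance(my_embedding, candidates):
    # candidates: list of (site_url, embedding) pairs
    return sorted(
        candidates,
        key=lambda item: cosine_similarity(my_embedding, item[1]),
        reverse=True,  # "Most Relevant" first
    )
```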

But that's not always the case. In general, the design phase should address multiple aspects of the AI feature and answer several key questions. We collected some examples:

  1. Discoverability: Where should the AI feature live in the user journey? How will users find out about it?
  2. User Input: If the AI requires any input from users, when and how should they be asked for it? What context do they need? How can they provide input?
  3. Transparency: How can we be transparent about the process and the AI involvement? 
  4. Feedback: How do we give users feedback while a process is running in the background?
  5. Output interpretation: How do we translate the AI output for users? What actions should they take with it? How do we guide them towards success?
  6. Error and empty state handling: What could unexpectedly go wrong? How do we stay transparent about it while still guiding users towards success?

Engineers were also involved in this phase, as they typically are, providing feedback on the feasibility of the designs given the technical aspects behind the AI implementation.

User testing

Before jumping into implementation, we ran user interviews and tests, collecting qualitative feedback and checking for usability issues. In this phase, you might find that users have different expectations than the output you're providing, that some users want to opt out of AI features altogether, or that they simply want more control over the results.

In the case of AI-generated drafts, we found that users were generally satisfied with the messages but appreciated being able to edit them and tweak parts of the copy as needed to achieve a higher response rate.

Our Smart Filtering feature is another illustrative example. This filter hides sites that are not very relevant or not relevant at all, using AI-calculated relevance. While testing it, we found out that users appreciated being able to disable this filter at any time and view the entire list of sites available on Ranking Raccoon instead. This feedback underscored the value of giving users the flexibility to choose how they interact with AI-driven features.

A toggle switch named “Smart Filtering”, which is on and can be turned off. The description of the toggle switch says “View a selection of AI-curated site recommendations”.

Documentation

With AI features, design documentation often includes prompt documentation as well. In our case, we found it beneficial to keep all feature information in one place, so we linked to the prompt documentation directly from the design file in Figma.

For processes like message draft creation, where users have to wait and watch AI perform a task, we carefully considered the loading states (based on the POC, engineers can usually tell how long certain processes may take). The documentation also included animations and error/empty states to handle situations where AI tools are unreachable or temporarily unable to function.

Loading state of an input field named “Message”. The loading state shows a rotating loading wheel icon next to the text “Generating personalized message…”.

Implementation & testing

Once the feature has been implemented, it's both exciting and rewarding to test it in a staging environment before it goes live for users. We always test our features internally before any release happens. This final check provides one last opportunity to catch bugs or edge cases, which means you might have to quickly go back to the design or code for adjustments.

Analyzing usage data

Tracking usage data is always valuable after shipping a feature, but it becomes even more crucial for AI features. This is because AI often relies on external services and interacts with real-world data in ways that may lead to unpredictable results when scaled.

We use Mixpanel to track quantitative data regarding feature adoption, leveraging its event tracking and funnel capabilities to monitor how users interact with our product. Additionally, we continuously analyze AI-generated results in our database to ensure they remain effective and relevant over time. By combining these strategies with consistently collecting user feedback, we gather data from multiple sources to refine and update features in the long run.
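For reference, event tracking with Mixpanel's Python SDK looks roughly like this; the event name and properties below are made-up examples rather than our actual tracking schema.

```python
# Sketch of tracking an AI feature event via Mixpanel's Python SDK.
# Token, event name, and properties are illustrative placeholders.
from mixpanel import Mixpanel

mp = Mixpanel("YOUR_PROJECT_TOKEN")

mp.track(
    "user_123",                      # distinct_id of the user
    "AI Message Draft Generated",    # event name
    {
        "feature": "message_draft",
        "edited_before_sending": True,
    },
)
```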

This phase also provides an opportunity for celebration. Seeing high adoption rates and positive engagement with our AI features validates our efforts and highlights their value to users. 

To sum it up

While we can pinpoint specific changes in our design process that stem from the particularities of AI features, they didn't happen by design. Instead, they emerged naturally as we navigated uncharted territory, which means that every team's experience will likely be unique.

However, we believe that certain patterns will potentially remain consistent for every product team. These include early engineering involvement, delaying the design phase, and recognizing that design can sometimes play a secondary role for specific features. Beyond the commonalities, each team has to embark on its own journey of figuring out which needs emerge from building with AI.

Our biggest lesson learned so far is that having a flexible mindset and fostering strong cross-functional collaboration are key for successfully tackling unprecedented challenges. We hope that our experiences will inspire other teams to adopt a similar mindset, and we look forward to learning from their successes as well.

Looking to implement AI features?

Get expert help from UX studio. We offer AI UX design and research services for collecting first-hand user feedback, analysis, testing and design.
