How do you measure the success of a design project?

At Studio Soren, we believe measuring the success of a design project goes far beyond "looking good." For leadership in public services and non-profits, design success is defined by how effectively a solution solves a specific problem, improves service delivery, and aligns with organisational goals. It is a data-driven evaluation of functional performance, user satisfaction, and resource efficiency.

Moving Beyond Aesthetics to Outcomes

For many of our partners in the public sector, "design" can initially feel like an intangible aesthetic luxury. However, in modern product design, success is a metric, not a feeling. When Studio Soren designs a new portal for a local authority or a donation flow for a national charity, we aren't just creating a visual interface; we are engineering a measurable result.

To measure this effectively, we employ a mixture of quantitative data (the "what") and qualitative insights (the "why"). If you cannot measure it, you cannot manage it—and you certainly cannot justify the budget for it in the next fiscal year.

The Studio Soren Framework: A Step-by-Step Guide

To ensure a design project delivers a genuine return on investment (ROI), evaluation must be baked into the process from the very first meeting. Here is our three-phase lifecycle of measurement.

Step 1: The Preparation (Establishing the Baseline)

Success is impossible to measure without a starting point. Before a single pixel is moved at Studio Soren, we work with you to define what "better" actually looks like.

  • Establish Baseline Metrics: If we are redesigning a legacy service, what is the current bounce rate? How many "avoidable contacts" (calls or emails) does your help desk receive regarding the current interface?

  • Define Key Performance Indicators (KPIs): We help you choose 3–5 metrics that move the needle. For a charity, this might be "Gift Aid Declaration Rate." For public services, it might be "Reduction in Application Errors."

  • Stakeholder Alignment: We ensure the CEO, COO, and our Design Lead are in total agreement on these goals. This prevents the "subjectivity trap" during later reviews.

Step 2: The Review (The Mid-Project Pulse)

Measurement isn't just a post-launch activity. During the design phase, we use "proxy metrics" to ensure the project is on the right track.

  • Usability Testing: We watch real citizens or donors interact with our prototypes. Success here is measured by the Task Success Rate—the percentage of users who can complete a critical action without assistance.

  • Accessibility Auditing: For UK public services, success is often binary: does the design meet the legal requirements? We follow the GOV.UK Service Standard, which provides a rigorous framework for accessibility and usability.

  • Time on Task: If a user takes five minutes to find a form that should take two, the design requires iteration.
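The two proxy metrics above can be summarised with very little code. This is an illustrative sketch, not a Studio Soren tool: the session data, the `task_metrics` helper, and its tuple format are all hypothetical, assuming each test session records whether the participant completed the task and how long they took.

```python
def task_metrics(sessions):
    """Summarise usability-test sessions as a Task Success Rate (%)
    and mean time on task (seconds) for the sessions that succeeded.
    Each session is a (completed: bool, seconds: float) pair."""
    completed_times = [secs for ok, secs in sessions if ok]
    success_rate = 100 * len(completed_times) / len(sessions)
    mean_time = sum(completed_times) / len(completed_times) if completed_times else None
    return success_rate, mean_time

# Hypothetical round of five moderated test sessions:
sessions = [(True, 95), (True, 120), (False, 300), (True, 110), (True, 135)]
rate, seconds = task_metrics(sessions)
print(f"Task Success Rate: {rate:.0f}%, mean time on task: {seconds:.0f}s")
# → Task Success Rate: 80%, mean time on task: 115s
```

Note that failed sessions are excluded from the time-on-task average: a participant who gives up after five minutes would otherwise make the design look slower rather than unusable, which is a different problem.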

Step 3: The Post-Review (Impact Analysis)

Once the project is live, the "real" measurement begins. We typically recommend an audit 3–6 months post-launch to allow for data stabilisation.

  • The Delta: We compare your new live KPIs against the baselines established in Step 1.

  • System Usability Scale (SUS): This is a 10-item questionnaire we use to give you a "Global Satisfaction" score from 0–100. You can read a deep dive on how this is calculated via the Nielsen Norman Group's guide to SUS.

  • Organisational ROI: We calculate the hard cost savings. If the new design reduces support calls by 20%, how many staff hours—and taxpayer pounds—have been saved?
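Two of the calculations above are simple enough to show directly. The SUS scoring below follows the standard published formula (odd-numbered items are positively worded, even-numbered negatively; the summed item scores are scaled to 0–100). The ROI figures are purely illustrative assumptions, not client data.

```python
def sus_score(responses):
    """Compute a System Usability Scale score (0-100) from one
    respondent's answers to the 10 SUS items, each rated 1-5."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected ten answers, each between 1 and 5")
    total = 0
    for item, r in enumerate(responses, start=1):
        # Odd items score (response - 1); even items score (5 - response).
        total += (r - 1) if item % 2 == 1 else (5 - r)
    return total * 2.5

# A strongly positive respondent: 5 on every odd item, 1 on every even item.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # → 100.0

# Illustrative ROI arithmetic (hypothetical figures):
calls_before, calls_after = 1000, 800      # a 20% reduction in monthly support calls
minutes_per_call, hourly_cost = 12, 18.0   # assumed handling time and staff cost
hours_saved = (calls_before - calls_after) * minutes_per_call / 60
print(f"~{hours_saved:.0f} staff hours and £{hours_saved * hourly_cost:.0f} saved per month")
# → ~40 staff hours and £720 saved per month
```

In practice the SUS is averaged across all respondents, and scores above roughly 68 are generally read as better than average; the ROI inputs (call volumes, handling time, staff cost) would come from your own service data, not these placeholders.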

Best Practices for Constructive Measurement

As a leader, your role in measuring success is to ensure the project stays aligned with the mission. To keep your evaluations constructive, we suggest the following:

  • Prioritise "Actionable" over "Vanity" Metrics: A million page views mean nothing if no one is completing the service. Focus on conversion and completion.

  • Focus on Outcomes, Not Outputs: Instead of measuring how many screens were designed, measure how much the "User Frustration Score" decreased.

  • Ask "How Does This Solve the User's Problem?": Force the design team to justify every element based on the data collected during user testing.

  • Check Your Bias: Recognise that as a CEO or COO, you are rarely the "target user." Trust the data gathered from real-world testing over personal preference.

Why Design Metrics Matter for Mission-Led Organisations

For organisations driven by impact rather than just profit, design success is often tied to Trust and Inclusion.

  1. Trust: A professional, intuitive interface signals competence. For a charity, this directly increases donation likelihood.

  2. Inclusion: Good design ensures that those with visual, cognitive, or motor impairments can access your services. Success is measured by the breadth of the audience you can successfully serve.

If you are looking to start a project with clear, measurable outcomes, Studio Soren’s approach to design strategy ensures that your digital transformation is backed by evidence, not guesswork. We focus on delivering high-authority solutions for complex sectors.