Shining a light on
the Dark Web

CyberAgent monitors the Dark Web for compromised personal information. It alerts users when it finds any of their information, helping them avoid potential fraud. Unfortunately, over time, we saw that a substantial number of users were not receiving CyberAgent’s full benefits.

I led the design of a new identity risk score to help users better understand their risk level and achieve their identity protection goals.

Detecting threats, missing people

CyberAgent was effective at detecting threats, but Experian's 100+ million users struggled to assess their risk levels: overwhelmed by technical alerts and jargon they couldn't understand, they were unsure how to respond.

Frustrated, users were calling for help, driving up support costs, or, worse, developing alert fatigue, tuning out, and failing to engage with the protective advice.

1. Grouped by event and date.
2. Data type found.
3. Long-winded explanation.
4. Additional data points given without context.
5. Multiple de-emphasized and vague actions for the user to undertake.
6. Confused, call us.

Before: CyberAgent alert.

The Challenge

Transform Dark Web monitoring into proactive protection

CyberAgent was effective at detecting threats, but Experian's 100+ million users were overwhelmed by technical alerts they couldn't understand or act on.

My role

I led the end‑to‑end product design for Identity Health Score, from early concept through launch. My work spanned the score presentation and information architecture, the personalized action plan experience, and the behavioral assessment flow, as well as the underlying journeys, research synthesis, and key UX patterns that now support partner implementations. 

I collaborated closely with a Product Director, two Researchers, a key business stakeholder, the Data Science team, and the Development team throughout the definition, iteration, and delivery phases.

My engagement ran for approximately six months over the course of a year, culminating in the launch in November 2023.

KICKOFF

Project origins and evolution

The original concept of the product was to produce an identity risk score analogous to a credit score. We kicked this idea around for years, but it never got off the ground. We lacked a way to generate a meaningful score.

The breakthrough: machine learning enables prediction

Using machine learning, our Data Science team developed an algorithm that combines each user's unique exposure with historical fraud data to predict the likelihood of identity theft.

Inputs

  • User’s Dark Web exposures

  • User’s real-time security behaviors

  • Historical fraud data


ML Algorithm

  • Analyzes patterns

  • 87% accurate prediction rate



Outputs

  • Risk score

  • Prioritized action plan

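As a rough illustration of how these pieces fit together (not the production model, whose features and weights are not reproduced here), the pipeline can be sketched as a weighted combination of exposures and behaviors that yields a risk estimate and a prioritized plan:

```python
from dataclasses import dataclass

# Illustrative weights only -- the real system uses a trained ML model,
# not a hand-tuned formula.
EXPOSURE_WEIGHTS = {"ssn": 0.30, "credit_card": 0.25, "password": 0.20, "email": 0.05}
BEHAVIOR_WEIGHTS = {"reuses_passwords": 0.15, "no_mfa": 0.10}

@dataclass
class UserProfile:
    exposures: list[str]   # data types found on the Dark Web
    behaviors: list[str]   # risky behaviors reported in the assessment

def risk_estimate(profile: UserProfile) -> float:
    """Combine exposure and behavior signals into a 0-1 risk estimate."""
    raw = sum(EXPOSURE_WEIGHTS.get(e, 0.0) for e in profile.exposures)
    raw += sum(BEHAVIOR_WEIGHTS.get(b, 0.0) for b in profile.behaviors)
    return min(raw, 1.0)

def action_plan(profile: UserProfile) -> list[str]:
    """Prioritize remediation for the highest-weight exposures first."""
    return sorted(profile.exposures,
                  key=lambda e: EXPOSURE_WEIGHTS.get(e, 0.0),
                  reverse=True)

user = UserProfile(exposures=["email", "password"], behaviors=["reuses_passwords"])
print(risk_estimate(user))  # 0.4 under these toy weights
print(action_plan(user))    # ['password', 'email']
```

The point of the sketch is the shape of the system, not the numbers: exposures and behaviors feed one score, and the same weights drive the ordering of the action plan.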


Early generative interview insights

Many people take a passive approach when data breaches occur, assuming the responsible company will handle the situation. This approach stems partly from a lack of awareness of the various ways their identity could be compromised and the different risk levels associated with different scenarios or data types.

However, people would find a personalized risk score both useful and helpful, particularly if it comes with additional context, such as how they compare to others and a clear explanation of what went into the score. When risk indicators are paired with a concrete action plan, people feel more empowered and in control of their security. The key is to ensure these action plans are truly personalized, going beyond generic advice or common recommendations to provide specific, tailored steps that address their unique situation.

It depends on how personally tailored it is; if it gives general advice it won't be helpful.

–Participant 4, Charlotte, NC

Discovery

Understanding what users actually need

A journey mapping exercise revealed how users at different stages of identity protection progressed from uncertainty to empowerment. The presentation of the action plan would matter as much as its content; users needed to see how their actions moved their score.

My competitive review revealed that the level of personalization that we could provide would be an advantage. Still, it uncovered a data gap: certain user behaviors contributed to users' overall risk level, but this information was missing from our Dark Web data.

This competitor’s assessment was entirely generic.


FRAMING THE PROBLEM

From risk score to comprehensive solution

One participant said it plainly: "Great, because the next question is what do I do?" The score was the starting point, not the solution. That insight prompted a journey‑mapping exercise tracing how users moved from receiving a Dark Web alert to either taking action or dropping off. We saw three consistent breakdowns: understanding their risk, knowing what to do next, and sustaining good security habits over time.

I reframed the project from “ship a risk score” to solving three linked problems:

  • Task 1 – Make risk legible: a clear, intuitive score and explanation.

  • Task 2 – Turn insight into action: a personalized, prioritized plan.

  • Task 3 – Close data gaps: a behavioral assessment that both educates users and improves the model.

This framing aligned Product, Data Science, and Engineering on a cohesive solution rather than a standalone feature.

Task 1

Problem: Users can't interpret Dark Web alerts or assess their actual risk level, leading to confusion and inaction.

Approach: Create an intuitive Identity Health Score using familiar credit score patterns, with real-time feedback and transparent scoring factors.

Success metrics:

  • Score comprehension rate

  • User trust in recommendations

  • Reduction in support calls

Task 2

Problem: Generic security advice feels irrelevant, and overwhelming lists of recommendations cause user abandonment.

Approach: Design personalized action plans based on compromised data and behavioral assessments, with real-time feedback on score improvement.

Success metrics:

  • Plan completion rate

  • User engagement time

  • Protective action adoption

Task 3

Problem: Dark Web monitoring cannot detect real-world behaviors users engage in, whether mitigating or risky.

Approach: Design a progressive assessment that educates users while capturing behavioral data to improve score accuracy.

Success metrics:

  • Assessment completion rate

  • Score accuracy improvement

  • User learning outcomes

These three tasks became the backbone of the Identity Health Score experience and directly informed the design of the score, the action plan, and the behavioral assessment.


Design

The Identity Health Score experience

Identity Health Score transforms overwhelming Dark Web alerts into clear, personalized guidance. By combining risk assessment with prioritized action plans and real-time feedback, users gain a clear understanding of their vulnerability and know exactly what to do about it.

New user sees the IHS feature

Progressive disclosure wizard

First results with transparency

Real-time feedback after completing actions

Making complexity comprehensible

The interactive tooltip and contributing factors breakdown show users exactly what influences their score. Higher scores indicate better protection, aligning with familiar mental models like credit scores.

Marking a task "done" triggers a scroll to the animation.

Connecting risk to relevant actions and rewarding progress

Tasks are prioritized based on users' actual compromised data, addressing specific breaches first. Completing actions triggers immediate score updates with animations, creating visible progress that maintains momentum. This tight connection between actions and score movement later contributed to higher completion rates of protective tasks and a 12% reduction in fraud events among engaged users.

Survey questions have personal data embedded.

Gathering data without overwhelming

One question at a time prevents overwhelm while enabling the collection of comprehensive behavioral data. Questions embed users' actual compromised data, making abstract risks concrete and turning data collection into education.

STRATEGY

From alerts to action

Designing identity protection that users understand, trust, and act on

To bridge the gap from detection to protection, I designed three interconnected product components that work together to help users understand risk, take action, and build better security habits.

  1. The Score - An intuitive risk assessment that builds trust through transparency

  2. The Action Plan - Personalized recommendations that connect to actual risk

  3. The Assessment - A progressive experience that educates while gathering data

Designing the Score

Making risk comprehensible and trustworthy

This project was initiated because our users couldn’t assess their identity risk from CyberAgent’s technical alerts. The new score was supposed to fix this, but initial testing with eight participants revealed a flaw: they misread the “Identity Risk Score,” assuming lower numbers meant they were safer.

Problem. That misinterpretation inverted our intended meaning and undermined trust in the system.

Exploration. I mapped familiar scoring models (credit scores, school grades, app ratings) and reviewed how each conveyed “good vs. bad” at a glance. We considered keeping a “risk” framing and inverting the visual scale, but that would still have required users to relearn a pattern that conflicted with their existing mental models.

Decision. I recommended reframing from risk to health, renaming the feature Identity Health Score, and aligning the scale so that higher = better. I also adjusted the rating bands to follow the actual score distribution, so that “average,” “good,” and “excellent” aligned with real user cohorts.

Why this works. This approach borrowed directly from familiar patterns (credit scores and letter grades), reduced cognitive load, and gave users a realistic sense of where they stood relative to others.

Outcome. In the next concept test, all participants correctly interpreted the scale and described it as “visual feedback that indicated whether the user was moving in the right direction.” Confusion about “high vs. low” risk disappeared.
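In spirit, the higher-is-better banding works like a school grading scale. The thresholds below are hypothetical, not the shipped values, which were tuned to the actual score distribution:

```python
# Hypothetical score bands -- the real thresholds were aligned to the
# observed distribution of user scores, which isn't reproduced here.
BANDS = [(90, "A"), (80, "B"), (70, "C"), (60, "D")]

def letter_grade(score: int) -> str:
    """Map a 0-100 Identity Health Score to a familiar A-F grade."""
    for threshold, grade in BANDS:
        if score >= threshold:
            return grade
    return "F"

print(letter_grade(85))  # B
print(letter_grade(42))  # F
```

Because higher always means better, the mapping never asks users to invert the scale the way a "risk" framing would.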

Early and final iterations of the score display.

Building trust through transparency

Users wanted to know what influenced their scores, but would not read long text blocks. I audited where we explained the score and found we were front‑loading too much information in a single dense overview. I replaced this with a layered transparency model, distributing information throughout the product in small, scannable pieces:

  • At a glance: clear label (Identity Health Score) and A–F grade.

  • Next layer: a short tooltip explaining what the score represents.

  • Deeper layer: a breakdown of contributing factors that surface the biggest drivers of their score.

This lets users pull in details only when they need them, while still satisfying those who want to verify the system’s logic. Each layer built trust without creating overwhelm.

Dashboard CTA with next steps list.

Dark Web scan items scanned.

Real-time feedback

Problem. In earlier flows, users completed recommended actions but didn’t see a tangible payoff. Without a visible link between their effort and improved protection, motivation dropped off after the first few tasks.

Exploration. I examined motivation mechanics in other products (fitness trackers, learning apps) and saw how progress bars and live updates kept people engaged.

Decision. I added real‑time score feedback: when a user marks a protective action “done,” the page scrolls the score into view and animates the change from its previous position to the new one.

Why this works. This creates a tight feedback loop where users immediately see “I did X, my security improved by Y,” turning a static score into an ongoing dialogue about their risk.

Outcome. In testing, participants said they “enjoyed watching the score go up” and wanted to stay in the “healthy green range,” aligning with the 68% increase in engagement time we later observed.

On marking a task “done”, the chart scrolls into view and the score animates to its new value.

Validation

The third round of concept testing confirmed these changes worked. All participants understood how their scores were generated and what factors affected them. The letter grades (A–F) resonated because everyone was familiar with school grading systems. Participants described the score as “visual feedback that indicated whether the user was moving in the right direction.”

It's the school mentality, you are at a B minus, you want to get to A.


–Participant 1, Charlotte, NC

Designing the Action Plan

Connecting recommendations to actual risk

Existing experiences overwhelmed users with long, generic checklists. Discovery research with 13 participants showed people either ignored the advice or completed a few easy tasks that didn’t materially reduce their risk. They told us they needed “something more than just commonly known, or generic/vague information.” Without personalization tied to their specific compromised data, recommendations felt irrelevant and were routinely dismissed.

Problem. Generic, one‑size‑fits‑all guidance created fatigue and inaction. Users couldn’t tell which steps would actually make them safer.

Exploration. I reviewed competitors’ recommendation flows and mapped our own alerts to concrete user goals. I partnered with Data Science to understand which actions actually reduced the likelihood of fraud, and with Research to identify the types of tasks users were realistically willing to complete.

Decision. Based on this work, I designed a prioritized, personalized action plan:

  • Tasks tied directly to the user’s breached data appear first (for example, “Update password for janedoe@gmail.com discovered in breach X”).

  • Lower‑impact, more generic hygiene tasks are grouped later in the list.

  • Each task includes a short “why this matters” explanation to connect effort to outcome.

Prioritizing by impact

I ranked action plan tasks by their effect on users’ scores. Tasks addressing compromised Dark Web data appeared first due to their higher risk weight. If CyberAgent found your credit card number on the Dark Web or found your email address and password in a breach, addressing these issues ranked higher than general advice like avoiding unsecured public Wi‑Fi. This prioritization helped users focus on what mattered most for their specific situation.
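The default ordering described above can be sketched as a simple sort, assuming each task carries a score-impact weight and a flag for whether it is tied to the user's own breached data (the task names and weights here are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Task:
    title: str
    score_impact: int   # points the score would gain on completion (illustrative)
    personalized: bool  # tied to the user's own breached data?

def sort_tasks(tasks: list[Task]) -> list[Task]:
    """Default ordering: personalized, high-impact tasks first."""
    return sorted(tasks, key=lambda t: (t.personalized, t.score_impact), reverse=True)

tasks = [
    Task("Avoid unsecured public Wi-Fi", 5, False),
    Task("Update breached email password", 25, True),
    Task("Freeze compromised credit card", 30, True),
]
for t in sort_tasks(tasks):
    print(t.title)
# Freeze compromised credit card
# Update breached email password
# Avoid unsecured public Wi-Fi
```

Generic hygiene advice still appears, but only after the tasks that respond directly to the user's actual exposure.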

Tasks are sorted by default based on the impact each would have on the user’s score.

Connecting to actual exposure

Rather than offering only generic, one‑size‑fits‑all checklists, most of our actions are directly related to users’ actual breached data. When it was safe to do so, I surfaced the user’s personal information in the header for the recommended action (for example, “Update my janedoe@gmail.com login …”). Expanding the action revealed why each step mattered to their security, providing the context users demanded. This approach transformed generic advice into personally relevant guidance.

Why this works. Prioritization helps users allocate limited attention to the most impactful steps. Personalization reduces the sense that this is just “generic security advice” and instead frames tasks as targeted responses to their situation.

As users complete high‑impact tasks, the Identity Health Score updates in real time, visually reinforcing that each action materially improves their protection and encouraging them to continue through the plan.

Email exposure with contextual information.

Strategic partnership integration

Some of the recommended actions related directly to another product we offered: privacy tools, including a VPN, a password manager, and a secure browser. If a partner offered them and they were relevant to the user’s risk profile, we could offer them from within the specific recommended action. These inline suggestions aligned user security needs with partner revenue goals, so both benefited.

Dynamic inline upsell offer.

Validation

Testing confirmed the personalized approach worked. Participants found the plan “engaging and trustworthy.” One participant captured the transformation: “I feel violated when my information is stolen. Now I have a bit of control on my end.” Production data showed that users who followed their personalized plans experienced a 12% reduction in fraud events compared with those who did not, demonstrating that relevant, prioritized guidance drove real security improvements.

I like that I can do things actively on my end to improve my score. Want to keep it in the healthy green range. Will enjoy marking something done and watching the score go up.

–Participant 3, Upstate New York

Designing the Assessment

Educating while gathering behavioral data

Our Dark Web database was initiated in 2005. When users signed up today, they could receive alerts for breaches from a decade ago that they’d already addressed. Because our algorithm wasn’t aware those issues had been fixed, the stale data could drag down their initial score and create a poor first impression.

Problem. The Dark Web data captured historical breaches, but not whether users had already taken remedial action. New customers could start with an artificially low score based on issues they’d already resolved, which damaged trust. We also had no visibility into ongoing security behaviors (like password reuse or MFA usage) that weren’t reflected in breach data but meaningfully influenced risk.

Exploration. I evaluated competitors’ questionnaires and cataloged which behaviors truly affected risk but weren’t visible in our data. I then mapped where in the journey users would tolerate a short assessment without feeling blocked, and how we could use it to both correct stale data and educate users about what puts them at risk.

Decision. I designed a wizard‑style behavioral assessment integrated into the enrollment flow so users could:

  • Indicate which historical breaches they have already addressed to improve their first score.

  • Answer one simple yes/no question per screen, with a clear progress indicator to keep the experience lightweight.

  • Respond to questions that referenced their own compromised data (for example, known breached email/password combinations).

  • See the intro and outro screens that tied the assessment back to how the Identity Health Score is calculated.

This ensured their first score was both accurate and clearly explained.

Behavioral survey question not related to Dark Web fraud data.

Progressive disclosure

As part of the enrollment flow, I created a wizard‑style interface presenting one question at a time. Questions were short with yes/no answers, and a progress bar showed users where they were in the process. Testing revealed an unexpected benefit: walking users through each question actually increased their understanding of the product’s capabilities, so I refined the framing on the intro and closing screens to reinforce how IHS calculates the score.

Survey intro screen.

Survey outro screen.

Educational questions

Questions embedded users’ actual compromised data, for example: “Your email john.doe@gmail.com was found on the Dark Web along with a suspected password. Do you use this same password for other accounts?” Users immediately recognized their personal information, which underscored that the product was personal to them rather than generic. Answering these questions during onboarding helped users understand what put them at risk, transforming data collection into learning moments.

Why this works. The format keeps cognitive load low while capturing high‑value behavioral data that our monitoring can’t see. Referencing users’ actual compromised information makes the experience feel relevant and helps them connect their behaviors to their score.

Behavioral survey question related to Dark Web fraud data.

Validation

Participants found the assessment “easy to understand” and “not time consuming.” One participant noted: “This is cool because you are educating your customer about what puts them at risk.” Internally, the assessment allowed us to correct for previously resolved breaches, improve the accuracy of initial scores, and prevent duplicate recommendations.

I know the basics but I like that this is more specific and personalized. I feel more secure taking a proactive stance.

–Participant 3, Oregon

IMPACT

Engagement up, fraud down

Identity Health Score launched in November 2023 and quickly became the most utilized feature when offered as part of a partner bundle. The results validated our research-driven approach: personalization and transparency drove both engagement and real-world security improvements.

User Engagement

68% increase in portal engagement time

Partners reported significantly higher user interaction with the platform. Identity Health Score became the most utilized feature when offered as part of a bundle, with real‑time score updates and personalized plans turning passive monitoring into active engagement.

Elevated user satisfaction

Testing showed users found the experience "engaging and trustworthy," with the score serving as "visual feedback/motivator that indicated whether the user was moving in the right direction."

Security Outcomes

12% reduction in fraud events

Users who followed their personalized action plans experienced 12% fewer fraud events. Better UX produced real security improvements.

Increased completion of protective actions

By connecting recommendations to users' actual compromised data and providing real-time feedback, users consistently completed more security tasks than with generic advice alone.

Business Value

Transformed passive alerts into proactive engagement

Shifted the product positioning from reactive notifications to an interactive engagement tool, creating ongoing touchpoints between partners and their customers.

Created strategic upsell opportunities

Inline recommendations for privacy tools (VPN, password manager, secure browser) aligned user security needs with partner revenue goals. Both sides benefited.

Streamlined content operations

Repurposed existing alert content into the action plan framework, scaling the feature across 100+ million users without requiring new writing resources.

Operational Efficiency

Reduced support burden

Transparent design and clear guidance decreased user confusion and reduced support call volumes.

Scalable platform solution

The design was deployed across Experian's B2B2C partner network, serving diverse audiences and contexts without modification.

Conclusion

Takeaways

This project shifted how I approach complex products for non-technical users.

Research is the foundation for strategic pivots. Thirty-two participants across multiple research rounds reshaped the product strategy, expanding a single scoring feature into a comprehensive protection system.

Balancing complexity with clarity. Working closely with data scientists, I learned to bridge technical sophistication with user comprehension. The most complex challenge was making machine learning predictions feel transparent and trustworthy. Reversing the scoring logic eliminated confusion. Layered explanations and real-time feedback did the rest.

Personalization as a differentiator. Generic security advice feels irrelevant. Connecting recommendations to users' actual compromised data and showing that their score improves when they act on them made identity protection feel manageable rather than paralyzing. The 12% reduction in fraud events for engaged users validated that better UX directly improves security outcomes.

Content strategy matters. By repurposing existing alert content into the action plan, I scaled the feature across 100+ million users without adding writing resources.

Looking back, the most rewarding aspect wasn't just the metrics, though a 68% increase in engagement and a measurable reduction in fraud are validating. It was seeing users describe feeling "in control" of their identity protection for the first time.

Copyright © 2025 Tricia Bayne
