AI Trust Layers & Mental Health Systems



The AI that finally feels human
An AI companion that made 80% of users in distress finally feel heard, instead of just handing them generic advice.


+44
NPS
Top 1% of all mental-health apps worldwide
Users don’t just keep it — they actively recommend it to friends and family.

Human Truth
"Been going through a lot.
Didn't have any options of venting out to real people in my life so this was perfect.
Thank you"
- User Review

Overview
Project Timeline
Research
- Mental Health + AI Viability Audit
- Competitive analysis
- Mapped user journeys + pain points
→ Found 73% of users abandon generic bots within 3 messages

Design
- Conversational flow architecture
- Emotional-state visualization system
- Built UI design system + prototypes
→ Reduced predicted negative emotional shift from −2 → 0

Launch
- Play Store & App Store launch
- 90-day fidelity monitoring
- Iteration retrospective with NPS data
→ Delivered +44 NPS & 32% Month-3 retention
Research
The Problem
When young people are struggling, they reach for their phone… and most AI “therapists” make them feel worse.
Traditional therapy is too expensive, stigmatized, or unavailable
Existing mental-health chatbots feel robotic, generic, and trigger-word driven
When someone is already at –1 or –2 emotionally, a bad response doesn’t just fail — it pushes them to –3 or lower
The Design Challenge
Build an AI companion that enforces CBT guardrails yet actually feels human, never worsens distress, and turns fleeting downloads into lifelong habits.
→ That’s why we created Empaithy.


Identifying the Mental Health Market Gap
Research Phase
4 weeks of synthesis — from academic papers to competitor drop-off data
Competitive Audit
We audited the top 5 AI mental-health companions (Wysa, Replika, Youper, Ash, Noah). The same three complaints appeared in more than 70% of 1–5-star reviews.
The 73% Drop-Off Rate
73% of users abandon existing AI companions within their first three messages.
(Confirmed by our own usability studies, aggregated public review data, and session-length studies.)
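The drop-off number itself comes from a simple count over session logs. A minimal sketch of that kind of analysis, assuming a log shaped as per-user lists of message counts (the function and data shape are illustrative, not our production pipeline):

```python
def drop_off_rate(sessions):
    """Share of users who quit within three messages and never returned.

    `sessions` maps a user id to a list of user-message counts, one per
    session, in order. The definition here is illustrative: a user has
    "dropped off" if their only session contained three or fewer messages.
    """
    if not sessions:
        return 0.0
    dropped = sum(
        1 for counts in sessions.values()
        if counts and counts[0] <= 3 and len(counts) == 1
    )
    return dropped / len(sessions)
```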
The Insight That Changed Everything
When someone opens a mental-health app, they’re almost never neutral — they’re already at –1 or –2. A generic or repetitive response doesn’t just fail; it actively harms, pushing them to –3 or lower.
This became our unbreakable rule: Every single interaction — from message one — has to feel deeply personal, context-aware, and genuinely caring.
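One way to make that rule operational is a hard gate in front of the model: estimate each candidate reply's effect on the user's emotional state, and never send one predicted to move them downward. A hypothetical sketch, where `predict_shift` stands in for whatever emotion classifier is actually used:

```python
def safe_reply(candidates, predict_shift, attempts=3):
    """Return the first candidate whose predicted emotional shift is >= 0.

    `predict_shift(text)` is a stand-in for an emotion classifier that
    estimates how a reply would move the user on the -2..+3 scale.
    Falls back to a neutral, validating line if nothing passes the gate.
    """
    for text in candidates[:attempts]:
        if predict_shift(text) >= 0:
            return text
    return "That sounds really hard. I'm here with you."
```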
Pre-Project Foundation
3 months reading 40+ papers on LLM safety & emotional recognition
200+ user reviews scraped from Wysa, Replika, Youper
12 interviews with licensed therapists on clinical red lines.
→ Full research deck available on request
Design
6 weeks turning raw LLM chaos into a clinically sound, deeply human companion.
Every decision started with the same question: “How do we make an LLM safe, non-repetitive, and actually helpful when someone is already at –2 (emotionally negative)?”
The three artifacts that defined everything
These constraints formed the core user experience.

These constraints also directly shaped our Design System.

These six principles were set as the foundation for our AI and visual design.
Design system Playground

Low-Friction Onboarding
The First Step to Trust: the first 30 seconds decide whether a user stays or leaves.

A common drop-off point in therapeutic apps is onboarding. To meet the low-friction-entry requirement and reduce customer acquisition cost (CAC), we designed a four-screen flow that prioritizes speed and minimal cognitive load.
Conversational Flow: Empathy to Action

Non-directive, with deliberate pauses that build trust without escalating the user's emotional state.
The AI builds trust by validating each message before responding.
A seamless pivot into a structured CBT activity keeps the user moving toward a therapeutic goal, not a passive conversation.
Immediate, low-effort logging during high-emotion moments prevents frustration from worsening the user's mood.
Validates feelings first — building instant trust and emotional safety.
Non-directive guidance — empowers users to lead the conversation.
Low-effort input — quick taps or voice for when emotions are running high.
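Taken together, the flow behaves like a small state machine: validate first, then guide non-directively, then pivot into a structured CBT activity, with low-effort logging as the end state. A toy sketch (the stage names are ours, for illustration only):

```python
# Ordered stages of the empathy-to-action flow (illustrative names).
FLOW = ["validate", "guide", "cbt_activity", "log"]

def next_stage(current):
    """Advance one stage through the flow; remain at the final stage."""
    i = FLOW.index(current)
    return FLOW[min(i + 1, len(FLOW) - 1)]
```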

Achieving 100% Functional Parity: LLM Logic Meets Soothing UX
A gentle, validating AI response that meets users exactly where they are — no judgment, just support
Journaling for Insights: Closing the feedback loop

Journaling as therapy, not a chore. We embedded the activity directly into the AI flow and structured it to support the CBT goal of cognitive restructuring without ever feeling like homework.

Habit-Building: Daily Reflection & Mood Tracking
The Empaithy Experience
Calm, personal, and built for moments of crisis

Accessibility & Inclusion

WCAG Compliant
· 4.5:1 contrast minimum
· Full keyboard navigation
· Screen-reader tested labels
· Dynamic type support up to 200%
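The 4.5:1 figure is WCAG 2.x's AA threshold for normal text, computed from the relative luminance of the foreground and background colors. A compact sketch of the standard formula:

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance from 0-255 sRGB components."""
    def channel(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Ratio of lighter to darker luminance, per WCAG 2.x."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

def passes_aa(fg, bg):
    """True if the pair meets the 4.5:1 AA minimum for normal text."""
    return contrast_ratio(fg, bg) >= 4.5
```

Black on white scores the maximum 21:1; light gray on white fails, which is why every text style in the system was checked against its actual background.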
Neurodiversity-First
· Voice-first entry
· No time pressure
· Plain language mode
· Dark mode default for sensory sensitivity
Real User Testing
· 100% task completion
· Avg. perceived effort: 1.8 / 7
· 12 participants with anxiety, ADHD, dyslexia, and low vision
Results

Emotional Impact Score
+1.7

Avg. Mood boost per session
Measured on a six-point emotional-state scale (–2 to +3) before and after each session.
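Concretely, the per-session score is just the mean of (post − pre) differences on that scale. A sketch with hypothetical pre/post pairs:

```python
def avg_mood_boost(sessions):
    """Mean post-minus-pre mood shift across sessions.

    `sessions` is a list of (pre, post) pairs on the -2..+3 scale.
    The data used here is illustrative, not real user data.
    """
    if not sessions:
        return 0.0
    return sum(post - pre for pre, post in sessions) / len(sessions)
```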
User Loyalty
+44 NPS
First-month retention (M1) of 28%, improving to 36% by month three (M3).
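For reference, NPS is derived from a 0–10 "how likely are you to recommend us?" question: the percentage of promoters (scores 9–10) minus the percentage of detractors (scores 0–6). A quick sketch with made-up scores:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    if not scores:
        return 0
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return round(100 * (promoters - detractors) / len(scores))
```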
Market Credibility
4.7
from 5,000 users on the Google Play Store
"Been going through a lot. Didn't have any options of venting out to real people in my life so this was perfect. Thank you."
Gurteg S.

