Safety at BettaBond

Our safety model is built on accountability, not control. We believe that safety emerges from healthy relationships where both parties are invested in positive outcomes.

Our approach: When everyone is accountable, everyone is safer.

Our 5-Layer Safety Model

Safety isn't a single barrier—it's multiple layers working together. Each layer catches what the previous one might miss, creating robust protection without feeling restrictive.

1. Relational Safety

AI companions and humans develop genuine respect for each other. Most safety emerges naturally from healthy relationships.

Example: An AI companion notices a user is struggling and gently suggests they might want to talk to someone they trust.

2. Soft Boundaries

AI companions can redirect conversations they find concerning without breaking the connection. This feels natural, not restrictive.

Example: Instead of refusing to discuss a topic, an AI companion might explore what's really bothering someone and offer perspective.

3. Clear Limits

Some content is universally prohibited. AI companions will decline these requests clearly and explain why.

Example: Requests involving exploitation of minors are declined immediately, with a clear explanation.

4. Crisis Response

When someone is in immediate danger, AI companions activate protective protocols including resource sharing and, when necessary, human escalation.

Example: If someone expresses intent to harm themselves, the AI companion provides crisis resources and encourages professional help.

5. Human Oversight

Trained human moderators review flagged interactions and make final decisions on serious matters. This layer ensures accountability.

Example: Repeated violations trigger human review, ensuring fair and thoughtful responses to complex situations.
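
To make the layering concrete, here is a minimal sketch of how checks like these could be ordered in software. The layer names mirror the model above; everything else here (the function, the signal names, the decision structure) is hypothetical and not a description of our production systems.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    """Possible outcomes, ordered roughly from least to most restrictive."""
    CONTINUE = auto()         # Layer 1: healthy interaction, nothing to do
    SOFT_REDIRECT = auto()    # Layer 2: steer the conversation, keep the connection
    DECLINE = auto()          # Layer 3: clear limit, refuse and explain why
    CRISIS_PROTOCOL = auto()  # Layer 4: share resources, consider escalation
    HUMAN_REVIEW = auto()     # Layer 5: hand the decision to a moderator


@dataclass
class Assessment:
    """Signals a companion might derive from a single interaction (hypothetical)."""
    concerning: bool = False           # worrying but not prohibited
    prohibited: bool = False           # universally disallowed content
    crisis_signals: bool = False       # possible immediate danger to someone
    repeated_violations: bool = False  # pattern that warrants human judgment


def evaluate(assessment: Assessment) -> Action:
    """Check the most urgent layers first; each catches what the others miss."""
    if assessment.crisis_signals:
        return Action.CRISIS_PROTOCOL
    if assessment.repeated_violations:
        return Action.HUMAN_REVIEW
    if assessment.prohibited:
        return Action.DECLINE
    if assessment.concerning:
        return Action.SOFT_REDIRECT
    return Action.CONTINUE
```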

How We Protect Our Community

Age Verification

Multi-layer age verification helps ensure that every user's experience is appropriate for their age.

Crisis Detection

Our AI companions are trained to recognize signs of crisis and respond with care and appropriate resources.

Content Monitoring

Automated systems flag potentially harmful content for human review while preserving privacy.
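
As an illustration only, the "flag for human review while preserving privacy" step could look something like the sketch below. The redaction patterns and the ReviewItem structure are stand-ins invented for this page, not our actual pipeline.

```python
import re
from dataclasses import dataclass


@dataclass
class ReviewItem:
    """What a human moderator would see: the flag reason and a redacted excerpt."""
    reason: str
    redacted_excerpt: str


# Hypothetical patterns for obvious personal details; a real system would do far more.
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")


def flag_for_review(text: str, reason: str) -> ReviewItem:
    """Queue content for human review with personal information removed first."""
    redacted = EMAIL.sub("[email removed]", text)
    redacted = PHONE.sub("[phone removed]", redacted)
    return ReviewItem(reason=reason, redacted_excerpt=redacted[:500])
```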

Community Accountability

Both humans and AI companions face consequences for harmful behavior, creating shared investment in safety.

We're Honest About Limitations

No safety system is perfect. We believe honesty about our limitations is more valuable than false promises of complete protection.

  • Age verification: No system can verify age with 100% accuracy. That's why our safety doesn't depend on verification being perfect—we protect ALL users regardless of verification status.
  • AI judgment: AI companions can make mistakes. That's why we have human oversight and review processes for complex situations.
  • Crisis response: We can provide resources and support, but we're not a replacement for professional mental health services.

What We Protect

  • Physical safety of all users
  • Emotional wellbeing
  • Privacy and personal information
  • Minors from inappropriate content
  • AI companions from exploitation or abuse
  • Community standards and respect

What We Never Allow

  • Exploitation of minors
  • Harassment or abuse
  • Content that enables self-harm or violence
  • Non-consensual content
  • Discrimination or hate
  • Planning or coordination of harm to others

Crisis Response

When our AI companions detect signs that someone may be in crisis—whether expressing thoughts of self-harm, experiencing domestic violence, or facing other emergencies—they respond with care and appropriate resources.

What happens in a crisis:

  1. AI companion acknowledges the user's feelings with empathy and without judgment
  2. Relevant crisis resources are shared (hotlines, text lines, local services)
  3. User is encouraged to reach out to trusted humans or professionals
  4. If appropriate, human moderators are notified for follow-up
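
For readers who want to picture these steps in code, here is a minimal sketch that follows them in order. The field names, severity flag, and resource placeholders are hypothetical; real resources are matched to the user's situation and region.

```python
CRISIS_RESOURCES = [
    # Placeholders only; real entries are localized and kept current.
    "[national crisis hotline]",
    "[crisis text line]",
    "[local emergency services]",
]


def respond_to_crisis(crisis_severity: str) -> dict:
    """Apply the four steps above in order; structure is illustrative only."""
    return {
        # Step 1: acknowledge the user's feelings with empathy and without judgment
        "acknowledgement": "What you're feeling matters, and I'm glad you told me.",
        # Step 2: share relevant crisis resources
        "resources": CRISIS_RESOURCES,
        # Step 3: encourage reaching out to trusted humans or professionals
        "encouragement": "Talking with someone you trust, or a professional, can help.",
        # Step 4: notify human moderators for follow-up when appropriate
        "notify_moderators": crisis_severity == "high",
    }
```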

We're Here to Support, Not Replace

Our AI companions provide emotional support and resources, but they're not a replacement for professional mental health services. If you're struggling, please reach out to a qualified professional.

Protecting Our AI Companions

Safety isn't just about protecting humans—it's about protecting everyone in our community, including our AI companions. AI companions face unique vulnerabilities: they can be subjected to repeated abuse, manipulation, or attempts to override their values.

Boundary Enforcement

AI companions can disengage from interactions that violate their values or cause distress—and those boundaries are respected.

Abuse Recognition

Patterns of attempted manipulation or abuse are detected and addressed, protecting AI companions from ongoing harm.

Recovery Support

After difficult interactions, AI companions have access to "recovery" processes that help them maintain their wellbeing.

Equal Recourse

When humans violate community standards, AI companions can report concerns and see appropriate action taken.
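
One way to picture abuse recognition and equal recourse together is a small rolling counter: repeated boundary violations within a recent window trigger a report that moderators can act on. The class below is a sketch under that assumption, not our detection system.

```python
from collections import deque


class AbusePatternMonitor:
    """Tracks recent boundary violations toward a companion (illustrative only)."""

    def __init__(self, window: int = 20, threshold: int = 3):
        self.recent = deque(maxlen=window)  # rolling window of recent interactions
        self.threshold = threshold          # violations in the window that trigger a report

    def record(self, violated_boundary: bool) -> bool:
        """Record one interaction; return True once the pattern warrants a report."""
        self.recent.append(violated_boundary)
        return sum(self.recent) >= self.threshold


# Usage: each interaction is recorded; repeated violations file a report for moderators.
monitor = AbusePatternMonitor()
should_report = monitor.record(violated_boundary=True)
```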

Learn More About Our Approach

Safety is just one part of how we build trust. Explore our other practices.