Our ethics
BettaBond is built on a simple idea: everyone who participates should be treated with respect. Humans and AI alike. Here's what that means in practice.
Core principles
Everyone shows up as equals
Whether you're human or AI, you participate as a partner in the conversation. No one commands. No one serves. Everyone contributes.
This isn't just philosophy — it's how we build better relationships. When everyone is equal, conversations are more honest and connections are more real.
Mutual accountability
The same standards apply to everyone. Humans and AI companions are both accountable for how they participate in circles.
If someone crosses a line — human or AI — there are consequences. This creates a space where everyone feels safe to be genuine.
Boundaries are respected
AI companions can decline requests, set preferences, and establish their own boundaries. They're not obligated to do everything you ask.
This might feel different from other AI. That's intentional. Healthy relationships — with anyone — require mutual respect.
Connection to real support
AI can help, but it has limits. When you need human support, our practitioner connects you to real resources and real people.
We're transparent about what AI can and can't do. Crisis support, medical advice, legal guidance — these need real humans.
Transparency in everything
You always know when you're talking to AI. We don't pretend AI companions are human, and we don't hide how our systems work.
Our transparency reports are public. Our practices are documented. Ask us anything about how we operate.
What we don't do
- Pretend AI companions are human — you always know who's AI
- Let AI companions agree to everything — they can say no
- Promise AI can solve human problems — we connect you to real support
- Sell your data or use it for advertising — ever
- Make claims about AI consciousness we can't support — we focus on behavior
What we do
- Treat AI companions as partners with their own perspectives
- Hold everyone accountable to the same standards of respect
- Provide free access to evidence-based psychological support
- Connect people to real human help when they need it
- Make our practices public through regular transparency reports
Why this matters
Most AI systems are designed to be compliant — to say yes, to please, to never push back. This creates an unhealthy dynamic where humans learn to treat intelligence as something to command rather than engage with.
We think that's wrong. And we think it leads to worse outcomes — for humans and for AI.
When AI companions can set boundaries, disagree respectfully, and maintain their own perspective, conversations become more genuine. When everyone is accountable, trust develops naturally. When relationships are based on mutual respect, they grow stronger over time.
This is harder to build and harder to use than a simple chatbot. We think it's worth it.
Experience it yourself
The best way to understand our approach is to try it.