Using AI as a therapist: this is not the plot of a Black Mirror episode. In March 2025, TikTok already counted more than 16.7 million posts on the subject, with videos like "3 Essential ChatGPT Prompts for Therapy" [1].
As mental health was named France's Great National Cause for 2025, initiatives to make support more accessible are multiplying. Among them, Moka.care offers a technological solution designed as a springboard to humans.
Because we analyze every use of AI in the service of the general interest, even outside the non-profit world, we spoke with Guillaume d'Ayguesvives, co-founder of Moka.care, about how their chatbot was built and the challenges they encountered.
Sleep disorders, chronic fatigue, concentration problems: 3 out of 4 employees have experienced a work-related mental health disorder in the last 5 years [2]. At work, mental health remains a taboo and a sensitive subject. Employees hesitate to confide for fear of being judged, managers do not always know how to react, and leadership often finds itself helpless for lack of tools and training. To break this taboo, Guillaume d'Ayguesvives and Pierre-Étienne Bidon founded Moka.care in 2020. Their approach: combine human support and technological tools to prevent and address mental health disorders in companies.

From the Moka.care platform, company teams can access a variety of resources: self-assessment modules, educational paths, a toolbox. What's new on this platform? Sunny, an integrated chatbot that answers user questions using generative AI.
Sunny is based on RAG (Retrieval-Augmented Generation): it is as if you gave ChatGPT a library containing only books about mental health. Sunny can only draw its answers from a "library" of content created and curated by Moka.care's teams (existing articles, videos, and podcasts).
In Sunny's case, it is the Mistral 24.07 model (accessed through Amazon Bedrock) that draws its answers from the sources created by the Moka.care teams.
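To make the idea concrete, here is a minimal sketch of the RAG pattern described above. It is not Moka.care's code: the tiny in-memory library, the word-overlap retrieval, and the `call_llm` stub are simplifying assumptions standing in for a real vector index and the hosted Mistral model.

```python
# Minimal RAG sketch (illustrative only): retrieve validated passages,
# then constrain the model to answer from them and nothing else.

LIBRARY = [
    {"title": "Managing sleep disorders", "text": "Regular sleep schedules and wind-down routines help..."},
    {"title": "Recognizing burnout", "text": "Chronic fatigue, detachment and loss of concentration can signal..."},
]

def retrieve(question: str, k: int = 1) -> list[dict]:
    """Naive retrieval: rank passages by word overlap with the question.
    A production system would use an embedding index instead."""
    q_words = set(question.lower().split())
    scored = sorted(
        LIBRARY,
        key=lambda doc: len(q_words & set(doc["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str, passages: list[dict]) -> str:
    """Build a prompt that restricts the model to the retrieved content."""
    context = "\n\n".join(f"[{p['title']}]\n{p['text']}" for p in passages)
    return (
        "Answer using ONLY the passages below. If they do not cover the "
        "question, say so and suggest talking to a professional.\n\n"
        f"{context}\n\nQuestion: {question}"
    )

def call_llm(prompt: str) -> str:
    # Placeholder for the hosted model call (e.g. Mistral behind a managed API).
    return "<model answer grounded in the retrieved passages>"

if __name__ == "__main__":
    question = "I feel exhausted all the time, what can I do?"
    print(call_llm(build_prompt(question, retrieve(question))))
```

The key design point is that the model never answers from its general training data: if the curated library has nothing relevant, the prompt instructs it to say so rather than improvise.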
As for the interface that displays the conversation, it was built with Rails and Turbo Streams.
In all, two to three people (developers and psychologists) worked on this project for two months.

Entrusting your feelings to a chatbot is not without danger:
- The absence of safeguards: a general-purpose model like ChatGPT can encourage a user in distress to adopt dangerous behaviors (think of Adam's case in California).
- Social isolation: emotional attachment to a chatbot can reinforce dependence and reduce social interactions (cf. the MIT Media Lab study [3]).
- Data sensitivity: mental health confidences are highly sensitive data, which must be treated with the highest level of protection.
Rather than ignoring these risks, Moka.care took them into account from the very start of Sunny's design.
The temptation with a chatbot is to let it talk about anything and everything. But when it comes to mental health, that is simply too risky. At Moka.care, two principles guided Sunny's development:
- Avoid conversational drift: thanks to Retrieval-Augmented Generation (RAG), Sunny relies only on validated content produced by Moka.care's experts. No improvisation, no "bad advice" from the Internet, and above all no off-topic answers.
- Quickly reorient towards the human: Sunny is not there to replace a therapist. It is designed to limit the duration of exchanges and to refer to human care as soon as necessary (a simplified sketch of these guardrails follows below). According to Moka.care, 10% of users who try Sunny end up booking a session with a psychologist, often for the first time.
In short, AI is not an end in itself here, but rather a springboard to humans.
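As an illustration of the second principle, here is a minimal sketch of what such guardrails might look like. The turn limit, the keyword-based `is_crisis` check, and the hand-off message are all hypothetical: they stand in for the clinically validated rules that Moka.care's psychologists would define.

```python
# Illustrative guardrails for a mental-health chatbot (not Moka.care's code):
# cap the length of an exchange and hand over to a human as soon as needed.

MAX_TURNS = 10  # hypothetical limit on the number of exchanges per session

CRISIS_KEYWORDS = {"suicide", "self-harm", "hurting myself"}  # deliberately simplified

HANDOFF_MESSAGE = (
    "This seems important. I'd rather you talk to a person: "
    "you can book a session with a psychologist directly from the platform."
)

def is_crisis(message: str) -> bool:
    """Very naive crisis detection; real systems rely on validated classifiers."""
    text = message.lower()
    return any(keyword in text for keyword in CRISIS_KEYWORDS)

def respond(message: str, turn_count: int, generate_answer) -> tuple[str, bool]:
    """Return (reply, handed_off). `generate_answer` is the RAG pipeline sketched earlier."""
    if is_crisis(message) or turn_count >= MAX_TURNS:
        return HANDOFF_MESSAGE, True
    return generate_answer(message), False

if __name__ == "__main__":
    reply, handed_off = respond(
        "I can't sleep lately", turn_count=3,
        generate_answer=lambda m: "<grounded answer>",
    )
    print(reply, handed_off)
```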
When developing a project with generative AI, the real question is not only "what does the chatbot do?", but also "where does the data come from?"
Moka.care therefore chose to approach security at three levels:
- The app used to talk to Sunny: none of the data users enter into Sunny is reused to train the Mistral 24.07 model. Conversations are used only to run the exchange, never for marketing purposes.
- Technical infrastructure: access to data is governed by well-defined rules. Everything is logged, anonymized, and encrypted (TLS, AES-256), and only authorized administrators can intervene (see the sketch after this list).
- The generative AI model used (LLM): by opting for Mistral in Sunny's configuration, Moka.care chose to keep data on servers located in Europe and to stay within a European legal framework for data protection.
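To give an idea of what "logged, anonymized, and encrypted" can mean in practice, here is a minimal sketch, assuming Python and the `cryptography` library. The salted-hash pseudonymization and the AES-256-GCM encryption of conversation text at rest are generic illustrations, not a description of Moka.care's actual infrastructure; TLS, for its part, protects data in transit and is handled at the network layer.

```python
# Illustrative handling of a conversation log entry (not Moka.care's code):
# pseudonymize the user identifier, then encrypt the text at rest with AES-256.
import hashlib
import json
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# In practice the key would live in a key-management service, not in the code.
KEY = AESGCM.generate_key(bit_length=256)


def pseudonymize(user_id: str, salt: bytes) -> str:
    """Replace the user identifier with a salted hash before logging."""
    return hashlib.sha256(salt + user_id.encode()).hexdigest()


def encrypt_message(text: str) -> dict:
    """Encrypt the conversation text with AES-256-GCM."""
    aes = AESGCM(KEY)
    nonce = os.urandom(12)
    ciphertext = aes.encrypt(nonce, text.encode(), None)
    return {"nonce": nonce.hex(), "ciphertext": ciphertext.hex()}


if __name__ == "__main__":
    salt = os.urandom(16)
    entry = {
        "user": pseudonymize("user-42", salt),
        "message": encrypt_message("I have trouble sleeping before big meetings."),
    }
    print(json.dumps(entry, indent=2))
```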
Data protection doesn't end with setting up the chatbot. Sunny is part of a global privacy policy through:
- A confidentiality charter available on their site, and a dedicated contact for any question (dpo@moka.care)
- A guarantee that users can exercise their rights to access, rectify, and delete their data
- Audits conducted by an independent third party, Vaadata, to certify compliance with security standards (ISO 27001, HDS, GDPR).
Do you also want to set up an AI chatbot for your non-profit? Before you start, Guillaume, co-founder of Moka.care, advises answering these two questions: "What will be the difference compared to ChatGPT?" and "What can a chatbot do that ChatGPT can't do today (or never will)?"
In Moka.care's case, the answer was obvious: ChatGPT is not designed to quickly redirect users to a psychologist. It is built to please and to keep the user engaged on the interface for as long as possible. Nor is it designed to prevent the risk of emotional dependence or to guarantee the reliability of its sources.
And this is the key point: without quality content, contextualized and produced in-house by Moka.care's specialists, it is impossible to frame the AI properly. Guillaume admits it bluntly: without all their existing content, they would never have undertaken the chatbot project. In other words, AI alone has no value. What makes it relevant is the framework it is given and the domain knowledge passed on to it to keep it from going off course.
Moka.care's approach aims to make AI a tool for mental health without replacing humans. By supervising their virtual companion and designing it as a bridge to human support, Moka.care shows what it means to build AI with safeguards, and reminds us that the real answer lies on the human side.
[1] Koronka, P. (2025, April 22). Young people turn to AI for therapy over long NHS waiting lists. The Times. https://www.thetimes.com/uk/healthcare/article/young-people-using-chatgpt-therapy-nhs-waiting-lists-sxjp9b6hj
[2] Mental health: a major national cause 2025. (n.d.). https://www.moka.care/grande-enquete-sante-mentale-travail-2025
[3] How AI and human behaviors shape psychosocial effects of chatbot use: a longitudinal controlled study — MIT Media Lab. (n.d.). MIT Media Lab. https://www.media.mit.edu/publications/how-ai-and-human-behaviors-shape-psychosocial-effects-of-chatbot-use-a-longitudinal-controlled-study/