

How Solace Care uses AI responsibly

AI helps us guide you through bereavement admin faster. Here is how we use it — responsibly, transparently, and within the rules of the EU AI Act.


Solace Care uses AI to help you find information faster and draft forms more easily. We do this in line with the EU AI Act, with humans in the loop, on EU infrastructure, and your data is never used to train external AI models.

AI is a word that gets used for everything these days, and it can be hard to know what is actually happening when a product says it "uses AI." This article explains exactly how we use it at Solace Care, what the rules are, and why we think our approach lets you benefit from the technology without giving anything up.

What does Solace Care actually use AI for?

We use AI in a few focused places, all of which support the one thing we exist to do: help families navigate the practical side of loss.

  • Guided information retrieval — helping you find the right document, authority, or next step after a death, based on your country and situation.

  • Document assistance — drafting forms and correspondence that you then review and send.

  • Content personalisation — surfacing the guidance most relevant to your specific circumstances.

  • Internal quality assurance — helping our team spot gaps or errors in our own content.

We do not use AI to make decisions about your insurance claim, your payout, or anything else that affects your rights. Every decision that matters to you stays with you or with the regulated party responsible for it — your insurer, your bank, your notary.

What does the EU AI Act say, and how does Solace Care fit?

The EU AI Act (Regulation (EU) 2024/1689) entered into force on 1 August 2024 and is being phased in through 2027. It sorts AI systems into four risk categories: unacceptable, high-risk, limited-risk, and minimal-risk.

Solace Care's AI functionality falls cleanly into the limited-risk and minimal-risk categories. None of our AI use meets the threshold for high-risk AI under Annex III of the Act. Here is why, in plain language:

  • No autonomous decisions on claims. AI assists with administrative tasks. Payout decisions remain with the insurance carrier.

  • No biometric processing. No facial recognition, no emotion detection, no biometric categorisation.

  • No credit scoring or risk profiling of individual users.

  • No gatekeeping of essential services. We help you navigate your entitlements — we do not grant or deny them.

How do you know when AI is involved?

Transparency is a core requirement of the AI Act — Article 50 sets out the transparency obligations, while Articles 12 and 14 cover record-keeping and human oversight — and it is a principle we apply by default.

When AI helps generate content inside Solace Care, it is labelled as a suggestion or draft. You choose whether to accept, edit, or discard it. There is always a human — you — in the loop for anything that goes out into the world.

Internally, our CTO and CEO jointly own AI governance. AI usage is logged and auditable. Bias and fairness are monitored on a regular cadence.

What happens to your data when AI is involved?

This is the question that matters most, and we treat it seriously. Four rules apply to every AI interaction:

  1. Your data is redacted, tokenised, or pseudonymised before it ever reaches a language model. Personal identifiers are removed or replaced with meaningless tokens.

  2. We use EU-hosted instances of AI providers wherever available. When a supplier is based outside the EU, we still route your data through their EU infrastructure.

  3. Your data is never used to train third-party AI models. We enforce this in two ways: contractually through Data Processing Agreements, and technically through the API settings we configure.

  4. Cross-border transfers follow Schrems II rules — Standard Contractual Clauses and Transfer Impact Assessments are in place for every non-EU supplier we use.
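For readers who want a concrete picture of rule 1, here is a simplified sketch of what pseudonymisation means in practice. This is illustrative Python, not our production pipeline: real systems use dedicated PII-detection services rather than a pair of regular expressions, and the patterns and token format shown here are invented for the example.

```python
import re

# Illustrative patterns only; a production pipeline would use a proper
# PII-detection service, not a handful of regular expressions.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def pseudonymise(text: str) -> tuple[str, dict[str, str]]:
    """Replace personal identifiers with meaningless tokens before any
    text is sent to a language model. Returns the redacted text plus a
    local mapping, so original values can be restored after the model
    responds without the model ever seeing them."""
    mapping: dict[str, str] = {}
    for label, pattern in PATTERNS.items():
        for i, match in enumerate(pattern.findall(text)):
            token = f"[{label}_{i}]"
            mapping[token] = match
            text = text.replace(match, token)
    return text, mapping

redacted, mapping = pseudonymise(
    "Contact jane.doe@example.com on +31 20 123 4567"
)
```

The key property is that the mapping never leaves your infrastructure: the model only ever sees tokens like `[EMAIL_0]`.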

Special category data: the extra layer of care

Bereavement admin touches health records, death certificates, and insurance claims — material that can include what GDPR Article 9 calls "special category data," which gets extra legal protection.

When AI touches this kind of information, we apply stronger safeguards: more aggressive redaction, stricter access controls, and, where possible, avoiding external models entirely in favour of retrieval from our own verified content.

What does "AI ethics" mean at Solace Care?

A short list, honestly applied:

  • Human oversight by default. AI suggests. People decide.

  • Minimise data sent outside Solace. If we can do it in-house, we do.

  • No surprises. When AI is used, you know.

  • Accountable ownership. A named person signs off on every AI feature before it ships.

  • Openness to correction. If a suggestion is wrong or feels off, you can flag it — and we learn from it.

What should you do if you have concerns?

Write to privacy@solace.care. If you want the detailed version, we maintain a standalone EU AI Act Compliance Assessment document available on request.

Technology should make the hardest moments of your life a little easier, not harder to trust. That is the standard we hold ourselves to.

Questions about how Solace Care uses AI? Write to us at privacy@solace.care.
