What are "hallucinations" in AI – and why are they dangerous in law?

AI models like ChatGPT often seem impressive, but they have a weakness: hallucinations.
That means the AI invents facts, laws or court rulings, yet presents them with complete confidence.

Why is this a big problem in law?

1. Incorrect legal bases
The AI cites statutory provisions that don't exist or don't apply to the case at hand.

2. Costly wrong decisions
HR managers or SME owners can misjudge terminations, contracts or wage questions based on incorrect answers.

3. Liability risks
Anyone who makes decisions based on fabricated information risks liability and legal disputes.

4. Restrictions by providers
OpenAI and other providers restrict the use of their models for legal advice without review by a qualified professional, precisely because of these errors.

Conclusion:
In law, you need 0% hallucinations and 100% verified facts.
That's why secure legal bots work with Swiss lawyers, fixed rules and verified data – not with improvised AI answers.
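To make "verified data instead of improvised answers" concrete, here is a minimal sketch in Python. It assumes a hypothetical knowledge base of lawyer-reviewed entries; the names, citations and matching logic are illustrative only and not Jurilo's actual implementation. The idea: the bot answers only from verified sources, returns their citations, and refuses when no source covers the question instead of inventing one.

```python
from dataclasses import dataclass


@dataclass
class VerifiedSource:
    citation: str  # e.g. "OR Art. 335c para. 1"
    text: str      # verified wording or summary


# Hypothetical mini knowledge base: each entry stands for a source a lawyer
# has reviewed. Contents are illustrative, not Jurilo's actual data.
KNOWLEDGE_BASE = [
    VerifiedSource(
        "OR Art. 335c para. 1",
        "Notice periods for employment contracts depend on the year of service.",
    ),
    VerifiedSource(
        "OR Art. 324a para. 1",
        "Wages remain payable if the employee is prevented from working through no fault of their own.",
    ),
]


def answer_with_citations(question: str) -> dict:
    """Answer only from verified sources, or refuse instead of guessing."""
    keywords = [w.strip("?.,;").lower() for w in question.split() if len(w) > 4]
    matches = [
        src for src in KNOWLEDGE_BASE
        if any(kw in src.text.lower() for kw in keywords)
    ]
    if not matches:
        # No verified source covers the question: refuse rather than invent.
        return {"answer": None, "note": "No verified source found; ask a lawyer."}
    return {
        "answer": " ".join(src.text for src in matches),
        "citations": [src.citation for src in matches],
    }


if __name__ == "__main__":
    print(answer_with_citations("Which notice period applies to this employment contract?"))
    print(answer_with_citations("Can I patent my invention?"))  # no source, so it refuses
```

The point of this design is the refusal path: a grounded system prefers "no answer, ask a lawyer" over a confidently hallucinated one.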

Ready to make legal work faster and safer?

Verified answers with citations

Core workflows for everyday questions

Fast onboarding

No pressure. One short call to see if Jurilo fits your workflows. Join Swiss teams who've made legal work simpler.
