Based on a recent Swiss analysis (watson.ch)
https://www.watson.ch/!669651522
A new analysis from Switzerland clearly shows: Many AI models derive their knowledge from opaque, sometimes questionable sources. Users are not told where certain statements come from – or whether they are even correct. The investigation reveals several core problems that are particularly relevant for legal applications:
1. AI invents sources or uses unreliable data
The tested models provided answers based on false, invented, or poorly traceable sources.
👉 In the legal world, this would be fatal: a wrong source = wrong legal advice = a potential claim for damages.
2. Models contradict themselves
Depending on the question or phrasing, the AI systems gave different, contradictory answers.
👉 In law, this creates complete uncertainty – a judgment or statute does not change depending on how the question is phrased.
3. Lack of transparency: Users don't know what is accurate
The analysis shows: Users have no way to verify the origin or quality of an answer.
👉 This is exactly where the greatest risk lies: Without clear references, no one can assess whether the information is legally sound.
4. With more complex questions, the AI "hallucinates"
As soon as the questions became somewhat more complex, the models began to invent facts or oversimplify significantly.
👉 In law, however, it is precisely the complex cases that are crucial – an invented answer can lead to costly wrong decisions.
What does this mean for legal work?
This analysis exemplifies why "General Purpose AI" is unsuitable for legal questions:
❌ No reliable sources
❌ No guarantee that the information is up to date
❌ No transparency
❌ High error and hallucination rate
In law, "approximately correct" is not enough. It must be correct.
Why Lawise / Jurilo is different
Lawise was built precisely for this problem:
✓ Legally verified answers with Swiss sources (laws, commentaries, Federal Supreme Court rulings)
✓ 0% hallucinations – every answer is based on real legal sources
✓ Transparent references – always traceable, always verifiable
✓ Specialized model instead of black-box chatbot
While generic AI models often deliver entertaining answers, Lawise delivers correct answers – and only that counts in law.
Conclusion
The Swiss analysis shows: Even with simple everyday questions, AI models provide faulty or unclear information.
In legal practice, this would be irresponsible.
👉 That's why specialized, verified Legal AI is needed – like Lawise – built on real legal sources and offering maximum reliability.
From the Watson article: "When we ask ChatGPT, the AI explains even complex topics in Swiss politics to us. Its research, however, is not always well-founded or balanced."