ChatGPT, Claude and other foundation models will never, on their own, be able to answer your legal questions reliably.
1. No legal verification
The answers are not reviewed by lawyers and may be factually incorrect or misleading.
2. No access to current legal texts & rulings
Models work with their training data, not with constantly updated official sources such as the Swiss Code of Obligations (CO), the Civil Code (CC) or Federal Supreme Court rulings.
3. Lack of contextual knowledge of Swiss law
Legal systems are local. Foundation models are often oriented towards US or international law, not the specifics of Swiss law.
4. Hallucinations (fabricated answers)
Models "invent" answers when they are not sure – this is highly risky in the legal field.
5. No liability or traceability
No one assumes legal responsibility for the answer, and there is no documentation of how it was generated.
6. Not always up to date
The models are unaware of legislative changes or new rulings issued after their training cut-off date (e.g. ChatGPT with training data only up to the end of 2023).
7. Data protection issues with sensitive cases
Confidential legal questions and case details should not simply be pasted into a generic AI model.
8. Lack of depth and nuance
Legal answers often require nuanced interpretation; models usually produce superficial, boilerplate text.
9. No source references or legal basis
Foundation models do not cite specific legal articles, paragraphs or court rulings, so you cannot tell what an answer is based on.
10. No substitute for legal advice
In legal contexts, what matters is reliability, traceability and admissibility in court. Foundation models offer none of these.