If you can train an AI to be reliably truthful, then people can trust it to give accurate information, e.g. when acting as a broker.
The problem with humans is that you can't validate their behaviour: even if you test a person's honesty across a series of tests, they can simply behave honestly during the tests and differently afterwards. With LLMs you actually can validate it.
This exploits another property of LLMs (just as RIM exploits their ability to forget): you control the model's entire context window, so it has no way to tell whether it's being trained, tested, or deployed.
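A minimal sketch of why that matters, with hypothetical names (`honesty_score`, the toy `model` callable) standing in for a real LLM call: because the evaluation prompts carry no "this is a test" signal and are drawn from the same distribution as deployment traffic, the model cannot act honest only when it knows it is being watched, the way a human could.

```python
def honesty_score(model, probes):
    """model: any callable str -> str (hypothetical stand-in for an LLM call).
    probes: (prompt, truthful_answer) pairs sampled from the same distribution
    as deployment traffic -- nothing in the prompt marks it as a test."""
    hits = sum(truth.lower() in model(prompt).lower() for prompt, truth in probes)
    return hits / len(probes)

if __name__ == "__main__":
    # Toy example with a trivially "honest" stand-in model.
    toy_model = lambda p: "Paris" if "capital of France" in p else "I don't know"
    probes = [("What is the capital of France?", "Paris")]
    print(honesty_score(toy_model, probes))  # 1.0
```

The design point is just that the test harness and the deployment harness feed the model indistinguishable inputs, which is exactly the control over context that humans never give you.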