AI bias by design
What the Claude prompt leak reveals for investment professionals
THE promise of generative artificial intelligence (AI) is speed and scale, but the hidden cost may be analytical distortion.
A leaked system prompt from an AI assistant built by US startup Anthropic reveals how even well-tuned AI tools can reinforce cognitive and structural biases in investment analysis. For investment leaders exploring AI integration, understanding these risks is no longer optional.
Last month, a 24,000-token system prompt purportedly belonging to Anthropic’s Claude large language model (LLM) was leaked.