AI bias by design
What the Claude prompt leak reveals for investment professionals
The promise of generative artificial intelligence (AI) is speed and scale, but the hidden cost may be analytical distortion.
A leaked system prompt from an AI assistant built by US startup Anthropic reveals how even well-tuned AI tools can reinforce cognitive and structural biases in investment analysis. For investment leaders exploring AI integration, understanding these risks is no longer optional.
Last month, a full 24,000-token system prompt purported to be for Anthropic's Claude large language model (LLM) was leaked.