
AI and reputational risks

When using AI, organisations should also adopt a "human-in-the-loop" approach, in which people give feedback to the AI algorithms in place. Because humans have more empathy and contextual understanding, this allows for better detection of ambiguous and sensitive situations. PHOTO: PIXABAY
Published Tue, Sep 12, 2023 · 05:00 AM

ORGANISATIONS have used one form of artificial intelligence (AI) or another since its inception decades ago. However, using it for customer-facing purposes has become more prevalent in recent years, and warrants closer attention from the industry.

Previously, AI was used primarily for data analytics and predicting customer behaviour. Today, AI has moved to the front end of operations, directly engaging and interacting with customers. While it may allow organisations to be more efficient and effective, it can also create reputational risks.

Reputational risks are heightened when AI interacts directly with a wider public audience through customers. Any mistakes or missteps become more glaringly obvious and, ultimately, more impactful. When customer trust is broken, the organisation's reputation is tarnished, and negative press coverage and public scrutiny can be magnified.
