WHO warns against bias, misinformation in using AI in healthcare

Published Tue, May 16, 2023 · 07:35 PM
    • It was “imperative” to assess the risks of using generative large language model tools like ChatGPT to protect and promote human well-being and protect public health, the United Nations health body said. PHOTO: REUTERS

    THE World Health Organization (WHO) called for caution on Tuesday (May 16) in using artificial intelligence (AI) for public healthcare, saying data used by AI to reach decisions could be biased or misused.

    The WHO said it was enthusiastic about the potential of AI, but had concerns over how it would be used to improve access to health information, as a decision-support tool, and to improve diagnostic care.

    The WHO said in a statement the data used to train AI may be biased and generate misleading or inaccurate information, and the models can be misused to generate disinformation.

    It was “imperative” to assess the risks of using generative large language model (LLM) tools like ChatGPT to protect and promote human well-being and protect public health, the United Nations health body said.

    Its cautionary note comes as AI applications rapidly gain in popularity, highlighting a technology that could upend the way businesses and society operate. REUTERS

