THE BROAD VIEW

Ensuring AI decisions are made fairly and morally

Guidelines are needed to give both the developers of AI and the users of AI output a framework for ensuring that questions of morality and materiality have been taken into consideration.

Published Fri, Jan 8, 2021 · 09:50 PM

CAN Artificial Intelligence be moral? In my opinion, no. Should this prevent us from establishing how to use AI morally? Absolutely not. In fact, the absence of moral capability in AI should drive our need for explicit, clear frameworks for the moral use of AI outputs. I use the term "moral", somewhat sensationally, to emphasise the use of AI as a tool of judgement (decision making or decision support) where outcomes need to adhere to principles of "right" and "wrong". In reality, however, such polarity is not always practicable, and the terms "ethical" and "fair" are more familiar and more commonly used.

The discourse on AI ethics, or more specifically fairness, is not new. The more we embed AI in our lives, and the more we understand the possibilities that AI may bring, the more we want assurance that AI decisions, whether made in a supervised or unsupervised manner, are made fairly, ethically and morally.

Why? Because we want - no, we demand - fairness in our lives.


    Copyright SPH Media. All rights reserved.