For people to trust AI, build fairness into it
Algorithms learn from the data they are given: if the data is biased, the results will be biased. Everyone involved in building and deploying such systems therefore shares responsibility for fairness.
NOT long ago, professional networking platform LinkedIn published a fairness toolkit, an open-source software library that other companies can use to measure fairness in their own AI models.
This adds to a growing list of companies and governments trying to address fairness in the use of AI. Google, for instance, lists tips for checking for unfair bias on its website, along with links to its own research papers on the topic.
Singapore has a Model AI Governance Framework that advises on how to translate fairness and transparency into practice, for example, by making AI policies known to stakeholders. The European Union has its Ethics Guidelines for Trustworthy AI.
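To make "measuring fairness" concrete, here is a minimal sketch of one widely used fairness metric, the demographic parity difference: the gap in positive-outcome rates between two groups. The data and function names below are invented for illustration; toolkits such as the one LinkedIn released compute this and many richer variants.

```python
def positive_rate(outcomes):
    """Fraction of positive (1) outcomes in a list of 0/1 decisions."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in positive rates between two groups; 0 means parity."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Hypothetical model decisions (1 = approved) for two demographic groups.
group_a = [1, 1, 0, 1, 1, 1, 1, 0]  # 6/8 = 0.75 approved
group_b = [1, 0, 0, 0, 1, 0, 0, 0]  # 2/8 = 0.25 approved

gap = demographic_parity_difference(group_a, group_b)
print(f"Demographic parity difference: {gap:.2f}")  # prints 0.50
```

A large gap like the 0.50 above would flag the model for review; real toolkits also report confidence intervals and other metrics, since no single number captures every notion of fairness.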
Copyright SPH Media. All rights reserved.