
For people to trust AI, build fairness into it

Algorithms learn from the data they are given: if the data is biased, the results will be biased, so everyone involved in building the system needs to take responsibility.

Google lists tips for checking for unfair bias on its website, along with links to its own research papers on the topic.

Not long ago, professional networking platform LinkedIn published a fairness toolkit, an open-source software library that other companies can use to measure fairness in their own AI models.
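LinkedIn's toolkit itself is a Scala/Spark library, so the snippet below is only a hedged, plain-Python illustration of the kind of measurement such fairness libraries perform: comparing how often a model gives a positive outcome to one demographic group versus another (a metric commonly called demographic parity). The function and variable names here are illustrative, not the toolkit's actual API.

```python
def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rates between groups A and B.

    predictions: list of 0/1 model outputs
    groups: list of group labels ("A" or "B"), aligned with predictions
    """
    rates = {}
    for g in ("A", "B"):
        preds = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(preds) / len(preds) if preds else 0.0
    return abs(rates["A"] - rates["B"])

# Illustrative data: group A receives positive predictions 75% of the
# time, group B only 25% -- a gap a fairness audit would flag.
preds = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # prints 0.5
```

A gap near zero suggests the model treats the two groups similarly on this one metric; real toolkits compute many such metrics, since no single number captures fairness.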

This adds to the list of companies and governments that have tried to address...
