AI can be biased, too
We must recognise the inherent biases in data, and expand diversity among those working in AI.
As artificial intelligence (AI) continues to advance and become more widely adopted, ethical discussions surrounding its development have largely centred on manpower displacement, as well as the potential impact of errors or inaccuracies introduced during programming.
One issue often overlooked is the inherent bias built into the very AI systems that are beginning to drive our society. For example, a study found that machine-learning-powered online ads for high-paying jobs are shown more often to men than to women - raising concerns about the potentially discriminatory patterns learned by complex algorithms.
This isn't the first time that algorithmic systems have appeared to be sexist - or racist, for that matter. Studies have shown that algorithms trained on historically biased data produce markedly higher error rates for communities of colour, especially in over-predicting the likelihood that a convicted criminal will reoffend. In fact, one common risk-assessment algorithm was shown to be just as accurat…