Ethical AI: A case for greater equality in tech

It is the responsibility of those who build algorithms to ensure the AI they create is unbiased and better for the world.

Much like humans, AI learns from those who interact with it the most. If a bias is held by the scientists building the algorithm, that bias can be reflected in the algorithm. But creating a perfectly unbiased, ethical self-learning machine comes with certain challenges.

BUSINESSES across Singapore and the world have well and truly embarked upon the fourth industrial revolution. Artificial Intelligence (AI), the Internet of Things (IoT) and quantum computing have all been driving factors in a future workplace where technology influences crucial decisions. In fact, AI alone is becoming essential to business applications across industries and sectors. IBM's AI capability, IBM Watson, has had over 20,000 engagements across 80 countries and 20 industries, including Singapore. Many organisations, including IBM, depend on AI to strengthen partnerships and connections between various constituencies and to build a diverse and inclusive representation of executives and leaders across a business. AI has a critical role in formulating company-wide diversity strategies in future markets and emerging countries while reinforcing cultural adaptability and inclusion. Indeed, the data experts are gathering overwhelmingly points to AI quickly moving beyond single implementations and trials to large-scale integration across organisations.

With more AI deployments happening every day and further growth in the sector expected, technology experts now face an ethical dilemma that has been largely overlooked until now. If AI is the foundation of machine learning and scientists are the teachers of algorithms, it follows that those in the technology space are ethically bound to ensure that AI is built properly. Much like humans, AI learns from those who interact with it the most. If a bias is held by the scientists building the algorithm, that bias can be reflected in the algorithm. Of course, creating a perfectly unbiased, ethical self-learning machine does come with certain challenges.

CHALLENGES WE FACE START WITH PEOPLE

Unfortunately, at present, finding a truly unbiased, ethical individual (let alone one with the ability to program AI) is difficult. This is not to say that all programmers are evil people; quite the contrary. The fact is that until the technology industry can holistically eliminate cultural bias and integrate a truly diverse group of programmers across the industry, bias will always exist. Case in point: women represent less than 25 per cent of the global AI workforce, and that number only gets smaller as you move up the leadership ranks. While an unbiased approach to who works across AI is important, it is only one part of a larger solution to ethical AI. To counter general bias, there need to be transparency, understandable rules and ethical standards agreed upon by the larger scientific and technology community. These solutions will have a trickle-down impact on the ethical equality issue facing the industry.

In the past, a lack of transparency and accountability has turned people in the technology industry, as well as the general public, away from AI. Due to limitations in understanding, it has been hard to hold individuals and companies accountable for their actions. By enabling trust in an AI platform and being transparent about its design and architecture, individuals will feel included and at ease from the earliest stage.

While the technology industry has seen an influx of legislation over the past couple of years, clear-cut rules are better suited to building ethical AI. Technology, and AI in particular, moves too quickly for legislation to keep up. However, by building an independent body that can create international rules that are easy to understand, standards and certification can be established to hold AI accountable. Singapore offers an example: the government took a unique and innovative approach by working with companies, developers and other stakeholders to develop voluntary, flexible guidelines that could be more easily adopted from AI development through to deployment. Too often, governments take an overly burdensome regulatory approach that stifles innovation.

The creation of a fair system and unbiased data must embody the spirit of diversity and inclusion. This effort needs to span gender, race and sexuality to ensure all perspectives are involved in the further evolution of technology, especially since those perspectives will be mirrored in AI. Diversity and social cohesion must become business priorities across the sector rather than a highlight used for accolades. The inclusion of women, for one, will drive the innovation needed to advance AI in the right direction.

Beyond the advances that diversity brings, the technology industry, like many other industries, is only as strong as its weakest link. To ensure that weak links are constantly strengthened, it is essential that businesses and individuals support the growth of leaders who can serve as role models. One such example is IBM's recent announcement recognising women leaders and pioneers in AI for business from across the globe, including Siew Choo Soh, managing director at DBS Bank. The list celebrates 40 women across a variety of industries and geographies for pioneering the use of AI to advance their companies in areas such as innovation, growth and transformation. These leaders are role models for the men and women coming up in the industry.

In addition to strong role models, support needs to be built into the education sector for STEM careers and the next generation. Countries like Singapore, which are leading the way in investing in new-collar jobs, must ensure the pathway to a successful career is clear. P-TECH (Pathways in Technology), which has also been adopted in Singapore, is a great example of an education model that provides ICT and STEM education with hands-on experience to students from non-traditional backgrounds, giving them a path to upskill and grow with the technology industry.

THE NEXT STEPS

While some of these fixes may be implemented more quickly than others, they all go a great distance in breaking down the barriers to broad AI adoption. Through ethical AI platforms, we can guarantee stronger protection to all citizens, regardless of gender or race. We can improve automation to enhance customer experiences, resulting in more interesting jobs and a better way of life. While the Singapore government's AI framework is a strong foundation created through a collaborative approach, there is still more we can do as a society to ensure a more ethical AI.

AI opens the doors to a much better future. It is the responsibility of those who build algorithms to ensure we create AI that is unbiased and better for the world. It is society's responsibility to ensure the right people can enter the technology sector and have the right impact.

  • The writer is chairman and CEO of IBM Asia-Pacific