The Business Times

I See You: Is our data safe, and are we safe from our data?

Vast amounts of data are being collected for Singapore to run as a Smart Nation. Cybersecurity is the first priority. But also crucial are the ethics surrounding how the data is used.

Published Fri, Oct 12, 2018 · 09:50 PM

EVER since the Smart Nation plan was launched in 2014, Singapore has been powering full steam ahead towards its digital grail. Rolling out one innovation after another, we advance on all fronts, as part of a grand design to harness networks, data and technology to transform our future, to better our lives. In order for these systems to work as they should, data is constantly being collected on every aspect of our daily lives, to the point where nothing seems off-limits.

Singaporeans are well aware of the obvious benefits of data collection. Traffic and GPS data tell us exactly when our bus, train, car or food will arrive. Having our identity, address details and health records accessible online makes it easier to get things done with government services. Putting our financial information online speeds up payment transactions and renders them seamless.

But news of the SingHealth data breach in June exposed just how vulnerable we are to hackers. More crucially, it highlighted the fact that the potential dangers of such connected systems are discussed far less than the advantages.

Many initiatives are welcomed for their obvious advantages over older, more unwieldy systems, like the national Quick Response (QR) code system set in motion in September to progressively replace the more than 19,000 QR codes currently in use. Others stoke privacy fears, like plans to start deploying smart lamp posts next year, equipped with temperature and rainfall sensors as well as facial-recognition capabilities.

Without proper public education and ethical protocols to protect the population of a technology-dependent society, artificial intelligence (AI) could become at the very least a potential cause of social divide, and at worst a tool that places its users at risk, say experts.

"Too much has perhaps been made of the economic upsides of AI," says Singapore Management University (SMU) law don Eugene Tan. "What about the downsides for the widespread use of AI in Singapore? What are we doing to prepare for those who would be negatively impacted, especially when not everyone is going to be 'AI-enabled'? If we are going to embrace AI as a society, then the conversation must begin on the societal impact of AI."

Keeping it safe

The vast amounts of data required make a smart city a sitting duck for hackers, demanding ever-increasing levels of cybersecurity.

Companies have a responsibility to protect the data they hold and to use it ethically. But they are struggling to do so, given that most of the necessary regulations and guidelines have not yet been formulated.

Large data companies like Facebook, Twitter and Google have come under much scrutiny in recent months. The European Union demanded this year that they change their terms of service and ensure greater transparency around the commercial use of consumer data. Facebook, in particular, faced a huge backlash for providing political data firm Cambridge Analytica with user data, which was then used to manipulate voters in the 2016 US presidential election with targeted ads. It took more heat last month when it revealed that 50 million user accounts had been breached, with identities possibly stolen.

Singaporean companies looking to avoid similar mistakes are wrestling with questions of what kinds of data use are permissible, and how to protect both their users' data and their own companies' interests.

"There are a lot of best practice guidelines, but most companies are overwhelmed by information and different kinds of data," says Shaun Wang, a professor of banking and finance who leads the Cyber Risk Management (CyRIM) project at Nanyang Technological University (NTU).

"People should really go back to the basics of having proper cybersecurity measures, getting the right expertise, doing regular check-ups and security updates."

Such measures could include differential treatment for more important data, like placing an additional layer of defence on the prime minister's information since it is more valuable than ordinary citizens' details to hackers, he says.

Businesses should treat data-related projects like any other business project, Prof Wang adds, and conduct careful risk assessments rather than be driven by the fear of falling behind other industry players. They should also vet third-party vendors to ensure that they can handle data with proper care before allowing them access to the information. While protecting the data collected must be the first priority, another key concern is how the data is used.

Decisions made by AI could have unpredictable side effects, like racial or gender discrimination. This is because the algorithms base their decisions purely on the data they are given, and could replicate biases already present in the data.

When trends become bias

A computer program called COMPAS showed exactly that result when used in the United States to assess whether someone who had committed a crime would be likely to reoffend in future. A 2016 study by ProPublica found that the algorithm tended to deem black people more likely to reoffend than they actually were, while white people were more often predicted to be at lower risk of reoffending.

"What algorithms are very good at doing is discriminating, not in the bad sense of the word, but in the sense of distinguishing between two inputs," says National University of Singapore (NUS) assistant professor Yair Zick, whose research interests include ethics in AI.

"If they pick up on a trend in certain parameters that we see as protected and should not be used to make predictions, like gender, sexual orientation, race and so on, they may make a connection and base their decision on them," Prof Zick adds. The bias occurs as a result of the algorithm just doing its job with the data it has been given.

This is why, even as AI helps people accomplish certain tasks more quickly and accurately than ever before, human intervention and regulation are needed to ensure it does not cause more harm than good, says Chen Tsuhan, deputy president of research and technology at NUS. Prof Chen is also chief scientist at AI Singapore, a national programme created to boost the country's AI capabilities.

"AI is like fire. After fire was discovered, it took a while for humans to realise that we need not only the fire, but also all kinds of things to make it useful, including regulations, safety measures, and people like firefighters," says Prof Chen. "We've discovered AI, but now what's important is the ethics, regulations and 'firefighters'."

In June, Singapore set up the Advisory Council on the Ethical Use of AI and Data, which will meet later this year. And last month, SMU's School of Law launched a Centre for AI and Data Governance, to support the council's work and promote thought leadership in the use of AI.

"We need to regulate this the same way we've regulated every other industry where there was a benefit and a significant risk both to the public as well as to trusted public institutions," said Janil Puthucheary, Senior Minister of State, Ministry of Transport and Ministry of Communication and Information, at the centre's launch.

However, he advocated for as light a regulatory touch as possible, saying: "I worry that we are on the cusp of instituting controls that prevent us from realising the potential benefits that these types of technologies can offer."

Using the analogy of how aeroplanes were built and flown even before engineers fully understood the physics of flight, Dr Janil stressed that having only a partial understanding of AI should not hold us back from using it to advance society. "If we took the approach that we needed to be absolutely positive about how the wing worked, we wouldn't have all the benefits of aviation."

Educate to encourage

While the public and private sectors grapple with regulation, efforts are underway to educate Singaporeans on how AI works and will be implemented. Aside from increasing trust and buy-in for Smart Nation initiatives, such education is needed to avoid deepening the digital divide: if less-knowledgeable groups fear and avoid using the technology, they could ultimately lag behind the rest of society, says SMU's Prof Tan.

"We can expect the large companies and AI-literate individuals to reap the most gains. This is perhaps no different from the situation today where those with the requisite knowledge and skills benefit from the knowledge-based economy," he says. "The irony is that AI thrives on data, and such data is provided by or sourced from society. Should such benefits then be wholly privatised while the costs, such as the AI divide, are socialised?"

According to the Lloyd's Register Foundation Institute for the Public Understanding of Risk (LRFI), ill-managed negative public sentiment could cause people to boycott products and reject new technologies like AI and smart initiatives. "Effective two-way risk communication is needed to improve people's understanding of the risks, benefits, and possibilities, as well as to improve the usefulness of the technology so that it does what people want, or does not do what people don't want," says Phoon Kok Kwang, director of LRFI.

LRFI is working on several projects to study and address public perceptions of the risks surrounding AI and other new technologies. One such project, still under development, is a Risk Pulse Monitor, which mines open-source data from social media to get a sense of society's prevailing concerns. And, in August, AI Singapore launched AI for Everyone (AI4E), a programme that aims to demystify AI for the common man.

AI Singapore's director for industry innovation Laurence Liew believes that education is the key to dispelling fear and mistrust. "For example, when we talk about image recognition, once you understand the maths behind it, you will realise that this is nothing more than pattern matching," says Mr Liew. "There's nothing magical about it."
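Mr Liew's pattern-matching description can be shown with a toy example. The Python sketch below, written purely for illustration with made-up 3x3 "images", labels a picture by finding the stored template whose pixels it most closely resembles, the nearest-neighbour comparison that underlies the simplest recognition systems.

```python
# A toy version of image recognition as pattern matching: label an image
# by the stored example it is closest to in pixel space. All data invented.
import numpy as np

def classify(image, templates, labels):
    """Return the label of the template closest to `image` in pixel space."""
    flat = image.ravel().astype(float)
    distances = [np.linalg.norm(flat - t.ravel().astype(float)) for t in templates]
    return labels[int(np.argmin(distances))]

# Invented 3x3 "images": a vertical bar versus a horizontal bar.
vertical = np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]])
horizontal = np.array([[0, 0, 0], [1, 1, 1], [0, 0, 0]])

query = np.array([[0, 1, 0], [0, 1, 0], [1, 1, 0]])  # a noisy vertical bar
print(classify(query, [vertical, horizontal], ["vertical", "horizontal"]))
# prints "vertical": no magic, just distances between patterns
```

Real systems use learned features rather than raw pixels, but the principle, measuring how closely an input matches known patterns, is the same.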

Be smart about your data

However, no amount of regulation and education can protect individuals who fail to safeguard their personal data. "To be a smart nation is not just to be technologically smart," notes NTU's Prof Wang. "You need smart regulations, smart legal infrastructure, and smart people as well."

We divulge far more personal information on a daily basis than we realise - on websites, our smartphones and social media.

Most of us are guilty of not thinking twice before clicking "accept" on the terms and conditions of new apps and online services, which then go on to harvest data from our profiles.

Social media posts are another pitfall for many, regardless of age. Prof Wang has seen people share photos online without realising that sticky notes containing passwords are visible in the background. Other examples include snaps of individual physical proficiency test (IPPT) result slips and polyclinic queue tickets. Both items display full names and National Registration Identity Card (NRIC) numbers, sensitive data that can be easily used for malicious purposes.

Boarding passes are another heavily photographed item containing a wealth of personal information, including the traveller's full name, frequent flyer programme number and booking reference. This information can be used to obtain additional data like credit card numbers, cancel or book flights, and even steal one's identity.

Blurring out sensitive information is one way to protect one's data, but it is not foolproof - one might not think of the boarding pass barcode as sensitive information, but it can be scanned to reveal all of the information on the ticket.
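The barcode is so revealing because it encodes the pass's details as plain text in the public IATA Bar Coded Boarding Pass (BCBP) format. The Python sketch below is a simplified reading of the mandatory fields of a single-leg pass; the offsets follow the published layout, and the sample string is the fictitious example from IATA's own documentation.

```python
# Simplified sketch: the mandatory fields of a one-leg IATA BCBP barcode
# are fixed-width text, so "decoding" is just string slicing.
def parse_bcbp(data: str) -> dict:
    return {
        "passenger_name": data[2:22].strip(),
        "booking_reference": data[23:30].strip(),  # the PNR / record locator
        "from_airport": data[30:33],
        "to_airport": data[33:36],
        "carrier": data[36:39].strip(),
        "flight_number": data[39:44].strip(),
        "seat": data[48:52].strip(),
    }

# Fictitious sample pass from IATA's documentation.
sample = "M1DESMARAIS/LUC       EABC123 YULFRAAC 0834 326J001A0025 100"
print(parse_bcbp(sample))
```

Anyone who scans the barcode therefore recovers the name and booking reference, which is often enough to log in to an airline's "manage booking" page.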

Who's watching?

That being said, there are some things that an individual cannot avoid sharing, especially when we live surrounded by cameras and sensors. Smartphones can provide location data even when location services are disabled, since information from other sources like Wi-Fi signals and numerous sensors in the phone itself can be combined to pinpoint the user's location.
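One way this works: Wi-Fi signal strength falls off predictably with distance, so readings from a few access points at known positions can be combined to estimate where a device is. The Python sketch below is a rough illustration using the standard log-distance path-loss model; all positions, readings and parameters are invented.

```python
# Hedged sketch of Wi-Fi positioning: convert signal strength (RSSI) to
# rough distances, then find the point that best fits all of them.
import numpy as np
from scipy.optimize import least_squares

def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.5):
    """Log-distance path-loss model: distance in metres from one RSSI reading."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

# Invented access-point positions (x, y in metres) and observed RSSI values.
aps = np.array([[0.0, 0.0], [30.0, 0.0], [0.0, 30.0]])
rssi = np.array([-62.0, -70.0, -68.0])
dists = rssi_to_distance(rssi)

# Find the point whose distances to the APs best match the estimates.
residuals = lambda p: np.linalg.norm(aps - p, axis=1) - dists
position = least_squares(residuals, x0=[15.0, 15.0]).x
print(f"estimated position: ({position[0]:.1f} m, {position[1]:.1f} m)")
```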

In the future, your whereabouts could be tracked even if you do not have a phone with you, says Subodh Mhaisalkar, executive director of the Energy Research Institute @ NTU (ERI@N).

The same sensors in infrastructure like the smart lamp posts that will help autonomous vehicles orient themselves and react to their environments could also provide data about the people in their vicinity.

"It's a challenge to keep your information private," notes Prof Mhaisalkar. "But the question is, what information should remain private?"

Drawing parallels between data privacy and how some topics were once considered taboo but are now discussed publicly, he says: "Similarly, from a data perspective, I personally feel we will become more relaxed in terms of sharing data."

Even though AI is uncharted territory, fear of the unknown should not hold us back from using it, because its benefits far outweigh its risks, Prof Mhaisalkar says.

The greater good

He explains that AI fulfils the "zero principle" test, which is used to describe the long-term potential of new technology. Technologies that pass this test have the potential to dramatically reduce the costs of certain processes to nearly zero - like the steam engine and the computer, which greatly cut the costs of running industrial machinery and performing numerical calculations respectively.

In its use in autonomous vehicles alone, AI can greatly reduce accident rates, the need to learn driving skills, time spent driving, and even the size of delivery vehicles, which would no longer need to be large enough to accommodate a human driver, Prof Mhaisalkar points out.

To harness AI's potential to solve large-scale problems, AI Singapore launched its first AI in Health Grand Challenge in June, calling for proposals on how to help primary care teams stop or slow disease progression and complications from high blood pressure, high cholesterol and high blood sugar in the local population by 20 per cent over the next five years.

Over the next few years, AI Singapore will support multi-disciplinary research teams to tackle this and other challenges in fintech and urban infrastructure. The goal is to promote bold ideas and apply innovative approaches to solve these challenges, which will have significant social and economic impact on Singapore and the world.

The National University of Singapore is already developing applications to analyse such nitty-gritty details as real-time traffic flow and crowd data, which can be used to plan different kinds of travel, from daily commutes to emergency escape routes, as well as customer crowd and behaviour patterns at retail outlets. It is also looking into using drones to conduct building inspections and coordinate deliveries for logistics companies.

There is also potential for Smart Nation platforms to be used for social good, to deliver funds, aid and care to where they are needed the most.

Ultimately, Singapore needs to adopt an attitude of cautious optimism towards AI, says Prof Chen. "We should be cautious as we move forward in developing AI technology, but we should remain optimistic in advancing AI research to improve human life. AI is worrisome to some people, but that is actually good, because it means we are paying attention to its issues."
