Does ChatGPT spell good or bad for AI?
SCIENCE and technology have benefited the human race immensely over generations. How can the search for new knowledge be bad? Yet every advance has the potential for good or ill, depending on the people behind it.
Our relentless quest to decipher and mimic the human mind has today ushered in an era of artificial intelligence (AI). ChatGPT, a text-based AI bot, has become the latest tool making headlines for its viral use of advanced AI. From accurately fixing coding bugs, generating cooking recipes and creating 3D animations to composing entire songs, ChatGPT has showcased the mind-blowing power of AI to unlock a world of incredible new abilities.
On the flip side, AI has long been considered a double-edged sword. For years, there has been worldwide doomsaying about AI and its looming takeover of the world. Today, users have AI-powered security tools and products that tackle large volumes of cybersecurity incidents with minimal human intervention. However, the same technology also allows amateur hackers to develop intelligent malware programs and execute stealthy attacks.
Great promise, but a security problem?
Since the launch of ChatGPT at the end of November, tech experts and commentators worldwide have been concerned about the impact AI-generated content tools will have, particularly on cybersecurity. Can AI software democratise cybercrime?
A team representing Singapore's Government Technology Agency (GovTech) demonstrated at the Black Hat and Defcon security conferences in Las Vegas how AI crafted better phishing emails, and more devilishly effective spear-phishing messages, than humans could.
The researchers combined OpenAI's GPT-3 platform with other AI-as-a-service products focused on personality analysis to generate phishing emails customised to their colleagues' backgrounds and characters. Eventually, they developed a pipeline that groomed and refined the emails before they hit their targets. To their surprise, the platform also supplied specifics unprompted, such as mentioning a Singaporean law when instructed to generate content for people living in Singapore.
The makers of ChatGPT have stated that the AI-driven tool has the built-in ability to challenge incorrect premises and reject inappropriate requests. Yet while the system has guardrails designed to prevent criminal misuse, with a few tweaks it generated a near-flawless phishing email that sounded "weirdly human".
Challenges of today and tomorrow
According to the Cyber Security Agency (CSA) of Singapore, the number of companies in Singapore hit by ransomware attacks shot up from 89 in 2020 to 137 last year. This trend is only expected to continue, as the availability of hacking tools on the dark web for less than $10, the emergence of ransomware-as-a-service models, and AI-based tools such as ChatGPT lower the barrier to entry for cybercriminals.
Given the looming threat of an ever-smarter and more technologically advanced hacking landscape, the cybersecurity industry must be equally armed to fight such AI-powered exploits. In the long run, the industry's answer cannot be swarms of human threat hunters sporadically patching problems through guesswork.
The need of the hour is intelligent action to neutralise these evolving threats. On the positive side, autonomous response technologies are already addressing many threats without human intervention. However, as AI-powered attacks become a part of everyday life, businesses, governments and individuals targeted by such automated malware will increasingly have to rely on emerging technologies such as AI and machine learning (ML) to generate their own automated responses.
Using AI tools more responsibly and ethically
Governmental advisory bodies, such as Singapore's Advisory Council on the Ethical Use of Artificial Intelligence and Data, have been established to look at the technology holistically. They should examine both the ethical impact and the efficacy of these tools in specific deployments, such as cybersecurity.
Another example is the work of the international network of AI experts under the Organisation for Economic Co-operation and Development (OECD). This group has, among other things, developed a user-friendly framework to classify AI systems comprehensively based on the OECD AI principles, which cover aspects such as human rights, privacy and fairness; safety, security and risk assessment; transparency and accountability; and international cooperation.
Establishing corporate policies and frameworks is critical to doing business ethically while improving cybersecurity. We need effective governance and legal frameworks that build greater trust that the AI technologies being implemented around us are safe, reliable, and contribute to a just and sustainable world.
Tackling the complex legal, security and ethical issues surrounding the potential risks of artificial intelligence boils down to striking a precarious balance between our social values and economic interests.
This is no easy task for policymakers considering new legislation. That said, any successful coexistence of AI and humans will depend in large part on robust cybersecurity practices that ensure the trust, transparency and accountability of the new AI tools and systems that will shape our world for decades to come.
The writer is vice-president and regional chief security officer for Asia-Pacific and Japan at Palo Alto Networks
Copyright SPH Media. All rights reserved.