Navigating generative AI – around innovation, safety and security
Companies have to learn how to maximise its potential while effectively managing its risks
AS generative AI (GenAI) adoption becomes more widespread, a central question emerges: is the technology a super productivity engine, an adversarial agent, or an information leaker and safety hazard? The decision on how to adopt GenAI is far from straightforward, and the answer to the question is that it can be all of the above.
In my interactions with enterprises, I observe three competing worldviews on GenAI:
- The AI Innovator’s Stance: These innovators champion rapid development and iteration, pushing the boundaries of what’s possible with AI.
- The AI Hunter’s Perspective: This camp adopts a “detect, monitor, and respond” approach, viewing AI systems as potential threats that require constant vigilance.
- The AI Custodian’s Position: This group emphasises the need for comprehensive oversight, encompassing both safety and privacy dimensions. They advocate responsible AI development and a “need-to-know” basis for data sharing.
Software development illustrates the tension: AI-generated code promises substantial time and cost savings, yet it has also become a top security concern because of potential vulnerabilities and quality issues.