Navigating generative AI: balancing innovation, safety and security
Companies have to learn how to maximise its potential while effectively managing its risks
AS THE adoption of generative AI (GenAI) becomes more prevalent, a central question emerges: is the technology a super productivity engine, an adversarial agent, or an information leaker and safety hazard? The answer is that it can be all of the above, which is why the decision on how to adopt GenAI is far from straightforward.
In my interactions with enterprises, I observe three competing worldviews on GenAI:
- The AI Innovator’s Stance: These innovators champion rapid development and iteration, pushing the boundaries of what’s possible with AI.
- The AI Hunter’s Perspective: This camp adopts a “detect, monitor, and respond” approach, viewing AI systems as potential threats that require constant vigilance.
- The AI Custodian’s Position: This group emphasises the need for comprehensive oversight, encompassing both safety and privacy dimensions. They advocate responsible AI development and a “need-to-know” basis for data sharing.
For example, in software development, AI-generated code promises substantial time and cost savings, yet it has also become a top security concern because of potential vulnerabilities and quality issues.