Navigating generative AI – around innovation, safety and security

Companies have to learn how to maximise its potential while effectively managing its risks

    • In software development, AI-generated code promises substantial time and cost savings, yet it has become a top security concern due to potential vulnerabilities and quality issues.
    Published Tue, Nov 26, 2024 · 05:00 AM

AS THE adoption of generative AI (GenAI) becomes more prevalent, a central question emerges: is the technology a super productivity engine, an adversarial agent, or an information leaker and safety hazard? The decision on how to adopt generative AI is far from straightforward, and the answer to the question is that it can be all of the above.

    In my interactions with enterprises, I observe three competing worldviews on GenAI:

    • The AI Innovator’s Stance: These innovators champion rapid development and iteration, pushing the boundaries of what’s possible with AI.
    • The AI Hunter’s Perspective: This camp adopts a “detect, monitor, and respond” approach, viewing AI systems as potential threats that require constant vigilance.
    • The AI Custodian’s Position: This group emphasises the need for comprehensive oversight, encompassing both safety and privacy dimensions. They advocate responsible AI development and a “need-to-know” basis for data sharing.

    For example, in software development, AI-generated code promises substantial time and cost savings, yet it has become a top security concern due to potential vulnerabilities and quality issues.
