AI safety: How to balance innovation and risk management for business growth

Robust API security and strict access controls are key measures firms can adopt to protect data integrity, says Daniel Toh, Imperva’s chief solutions architect for Asia Pacific and Japan

    • By implementing strong safeguards, firms can minimise AI risks while maximising business potential.
    • By implementing strong safeguards, firms can minimise AI risks while maximising business potential. PHOTO: GETTY IMAGES

    Kenette Gelyn Cabotaje

    Published Thu, Aug 15, 2024 · 05:50 AM

    DATA breaches can potentially be a company’s worst nightmare. Not only will affected organisations have to relook their entire security system, they will also have to invest heavily in resources to mitigate the situation and restore consumer confidence.

    With generative artificial intelligence (GenAI) at the forefront of global business operations today, ensuring data security is even more crucial. According to the International Data Corporation, two-thirds of organisations in Asia-Pacific are exploring or are already in the process of integrating AI into their internal systems.

    These digital-first enterprises are looking to apply both GenAI and large language models (LLMs) to elevate enterprise intelligence – which is all the relevant information available to an organisation and the useful data that can be extracted – and drive efficiencies across marketing, sales, customer care, research and development, design, manufacturing, supply chain and finance.

    While enterprises are eager to embrace these technologies and embed them into their products and services, it would be irresponsible not to consider the risks. GenAI tools enable cybercriminals to evolve and execute threats faster than ever, making them far more challenging to detect and protect against.

    Privacy and confidentiality

    Inadvertent data exposure, particularly with publicly available GenAI tools and LLMs, heightens concerns around privacy and confidentiality management.

    When entering information into a GenAI prompt, individuals and organisations lack control over how the data will be used and who can access it. As a result, robust controls are needed to prevent accidental inclusion of sensitive data, such as customer information or intellectual property.

    BT in your inbox

    Start and end each day with the latest news stories and analyses delivered straight to your inbox.

    Prompt injection vulnerabilities that manipulate LLMs to produce incorrect or harmful outputs often go unnoticed due to the trust in the model’s output, potentially leading to the exposure of sensitive information. In complex cases, the LLM could be tricked into unauthorised actions or impersonations, effectively serving the attacker’s goals without alerting the user or triggering safeguards.

    Preventive measures enterprises can take to protect against these risks include privilege control, enhanced input validation, and segregation and control of external content interaction.

    Today, many business decisions are data-driven, so organisations must ensure the accuracy of any data or information they use. GenAI tools, like ChatGPT, draw from Internet-based sources and may produce erroneous outputs. Employees need to fact-check AI-generated information and avoid violating intellectual property laws.

    Organisations can enhance their protection measures by verifying the supply chain of their training data, keeping official records and certifications that prove data was obtained ethically, and ensuring their systems run in controlled and isolated environments (sandboxing) to prevent models from accessing unintended data sources. Additionally, they should implement strict vetting or input filters for specific training data and data sources to minimise the risk of falsified data.

    Lack of AI governance

    In 2022, research from Stanford University found that participants who used AI-assisted code generators were more likely to produce code with security vulnerabilities. Although not intentionally done, this code could expose the company to risk. As AI evolves from a technology being tested within specific scenarios to a general-purpose technology, the issues of control, safety, and accountability come into play.

    As guidance on AI usage remains unclear, organisations may inadvertently breach compliance regulations. To mitigate the risk of insider manipulation, organisations must stress the importance of careful consideration of all data, provide comprehensive training, and implement strict data access controls and internal AI usage policies.

    Effectively governing the use of AI will require collaboration between private and public sectors. Arriving at a consensus, however, may not be easy as both sectors have different risk tolerance levels around ethical concerns, limiting misuse, managing data privacy, and copyright protection.

    APIs are the cornerstone of LLMs

    LLMs heavily depend on application programming interfaces (APIs) to access and process data, essentially making discussions about LLMs inherently about these critical interfaces.

    The Imperva State of API Security in 2024 report revealed that APIs drive a staggering 71 per cent of web traffic, with a typical enterprise site handling an average of 1.5 billion API calls per year. Alarmingly, about one in 10 APIs remains vulnerable to attack due to improper deprecation, inadequate monitoring, or insufficient authentication controls.

    Given their reliance on data and frequent API interactions, LLMs face a heightened risk of data poisoning attacks. Data poisoning involves intentionally injecting corrupted, misleading, or malicious data into a machine learning model’s training dataset. This manipulation, sometimes facilitated through API vulnerabilities, distorts the learning process, leading to skewed outcomes and potentially biased decisions and predictions.

    To mitigate such risks, it is crucial to secure the APIs that feed data into AI models. Enterprises must develop a robust API security strategy that includes thorough discovery, classification, and inventorying of all APIs, endpoints, parameters, and payloads. Additionally, proactive measures like vulnerability detection, monitoring for behavioural anomalies, and conducting risk assessments on high-risk APIs are essential components of a holistic API security approach.

    Ultimately, organisations must first understand the challenges around AI and define a solid outline of the intended outcomes. By pinpointing specific use cases as well as risks, enterprises can establish a logistical action plan to mitigate these with built-in guardrails. Organisations should look at GenAI as an assistant or facilitator that augments specific use cases rather than a solution for all business needs.

    This story was written by Daniel Toh, chief solutions architect, APJ at Imperva, a Thales Company.

    Share with us your feedback on BT's products and services