Reap the benefits of GenAI while protecting against the cyber security risks
While the rapid rise of generative AI (GenAI) is offering businesses a wide range of benefits and opportunities, the technology is at the same time exposing them to cyber security risks. These risks should be carefully considered and mitigated, write Prashant Deo and Kanwar Preet Singh Sandhu from Tata Consultancy Services.
The use of generative AI (GenAI) within businesses has been building for many years, yet the arrival of ChatGPT has increased public awareness about the opportunities and risks the technology brings.
Within businesses, the adoption of AI technologies is not only reducing the impact of human error, but also transforming companies at scale and at pace. In healthcare, AI is helping to transform care delivery, advance precision medicine and discover new drugs. In retail it’s being used for predictive maintenance, crew scheduling and ticketing. In financial services it’s helping to analyse data and detect fraud.
Cybersecurity risks
While GenAI is transforming many industries, it is also creating novel cybersecurity risks for enterprises. AI applications are built on large language models (LLMs) and the technology environment must be protected.
However, GenAI domains have different characteristics and behaviours than the usual IT environments that organisations are used to protecting. They must now learn to protect the new attack surface introduced by AI/GenAI assets in an enterprise.
The risk of tampered signals
The use of GenAI in smart motorways is one high-profile example that has generated significant debate. Information from cameras, radars and sensors is used to dynamically manage traffic flow, increase capacity and reduce congestion. The data is generated by multiple sources and channels and there is concern that if those signals are tampered with it could pose a significant risk.
From the source where signals are captured to the point where they are arriving, if they are tampered with, there is a high risk of the wrong actions being taken.
Another risk is that the prompts used to interact with GenAI chatbots could be manipulated by bad actors to get a model to do things or disclose data in unintended ways. Therefore, it is vital that organisations safely manage the privacy and security of the extensive data being used in the models.
AI cybersecurity strategy
As companies race to realise the benefits of AI, there is concern from regulators that rapid, unstructured adoption may open doors for cybercriminals to target companies through:
- Direct and indirect prompt insertion attacks
- Data or model poisoning
- Supply chain, API, and integration attacks
- Denial of service
To ensure that organisations prioritise data protection and security, regulations such as GDPR in Europe, EU AI Act and The Data Privacy Act in the UK are increasing controls and restrictions. The cost of non-compliance is significant, so, while many companies have solutions such as Microsoft Copilot in place, they understand the need to plan properly, put controls in place and ensure the integrity of data before fully adopting AI.
At Tata Consultancy Services, we encourage our customers to follow these steps to put a robust security strategy in place:
- Cyber security teams should identify all AI and machine learning (ML) assets in an enterprise and assess the risks they present
- Leaders should develop AI-specific security policies and standards, setting clear expectations and establishing necessary guardrails
- Software security engineering and processes should be modified to consider Large Language Model-related security and privacy
- As adoption widens, enterprises must establish standards and governance to ensure that effective controls are implemented – in the engineering and operational environment and throughout the development life cycle.
GenAI advancing security
While GenAI may create new risks, it also has a crucial role to play in the fight against cybercrime. New AI tools can strengthen security measures by detecting and responding to risk much faster than human intervention.
At Tata Consultancy Services we are supporting our aviation customers, those in the hospital industry and the retail world to utilise GenAI to provide better analytical engines that can quickly analyse risk. We are helping to run and test how secure their monitoring systems are, so they can detect suspicious activity in time and stop it.
By leveraging advanced AI technology and putting a robust strategy in place, organisations can combat sophisticated threats and benefit from the opportunities for transformation that the technology provides.
About the authors: Prashant Deo is Global Head of Security Tools and Engineering in the Cybersecurity business of Tata Consultancy Services, where Kanwar Preet Singh Sandhu is CTO and Global Head of Strategic Initiatives.