Recently, three important Indian government authorities made startlingly simple but important statements about Artificial Intelligence (AI).
The first statement was made by S. Somanath, chairman of the Indian Space Research Organisation (ISRO). He said that “AI is set to change the space technology (SpaceTech) sector with potential use in the design of rockets and satellites”.
Samir Kamat, chairman of the Defence Research and Development Organisation (DRDO), made a similar comment: “The government wants startups to focus on deep tech areas like artificial intelligence, robotics, cyber security, unmanned systems, and advanced materials, including dual-use technology, where rapid innovation can significantly enhance capabilities”.
“It is essential to ensure these (AI) systems not only work as intended but are also resilient to attacks from adversaries,” warned the Chief of Defence Staff, Gen. Anil Chauhan.
While a lot of buzzwords currently surround the AI discourse, these rather simple claims by key policymakers often go unnoticed, leading to inaction from the AI innovation ecosystem, which could otherwise build on the many possibilities these statements present. In all the excitement about AI, three crucial questions have still not been addressed:
- Has the Government of India thought through whether C and C++ will continue to be the preferred programming languages for the strategic laboratories of ISRO, DRDO, the Department of Atomic Energy or even the Indian Armed Forces?
- Are these programming languages safe enough when it comes to the ‘memory safety’ of equipment that these institutions use for national security purposes?
- Can these institutions be assured of secure AI-readiness if they do not address the memory safety threats inherent in yesteryear programming languages, threats that the critical need for AI tools is making increasingly apparent?
Memory safety is the attribute of certain programming languages that protects them from detrimental software bugs and vulnerabilities that affect the intrinsic memory of a software system. Memory safety has a direct impact on the quality, security and cost of operating a system. The better a language fares in memory safety, the higher its demand in critical computing systems.
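To make the idea concrete, here is a minimal, illustrative sketch in Rust (the variable names and data are invented for the example) of what memory safety means in practice: an out-of-range access is caught and refused, rather than silently reading or corrupting whatever bytes sit beyond the buffer, as an equivalent unchecked C access could.

```rust
fn main() {
    // Illustrative data: pretend these are three sensor readings.
    let telemetry = vec![10, 20, 30];

    // Checked access: an out-of-range index yields None instead of
    // reading past the end of the buffer.
    match telemetry.get(5) {
        Some(v) => println!("reading: {}", v),
        None => println!("index out of range: access refused"),
    }

    // Even the plain indexing operator is bounds-checked in Rust:
    // `telemetry[5]` would abort the program (panic) rather than
    // corrupt adjacent memory.
}
```

The point is not that errors disappear, but that their failure mode changes from silent memory corruption, the raw material of exploits, to a visible, recoverable condition.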
Where the Indian AI ecosystem is singing paeans to large language models, stacks and AI-powered applications, it has not taken any visible steps to convert legacy systems running on C and C++ to AI-ready languages like ‘Swift,’ ‘Rust’ and ‘Go.’ India’s flagship AI initiative, IndiaAI, which is managed by the Ministry of Electronics and Information Technology (MeitY), appears not to have invested in this direction, since it does not figure among its mission pillars. Memory safety does not appear even once as an issue in India’s AI ‘Readiness Assessment Methodology’ prepared by UNESCO and MeitY; readiness is measured only on ‘ethical’ parameters. Ignoring security vulnerabilities while using superficially all-encompassing terms like ‘Readiness Assessment’ does not help India at all; rather, it promotes complacency.
This transition to new AI-ready programming languages has been identified as the next big step for the global software industry, as the legacy languages have been found to falter on various AI-critical parameters, especially memory safety. The US’s Cybersecurity and Infrastructure Security Agency stated in 2023 that almost 70 percent of Microsoft’s and Google’s security updates have been due to memory safety issues, which arise because the two use C and C++ extensively. For a long time, the inherent limitations of C and C++ fuelled the creation of numerous ‘bug-bounty’ jobs in the information technology industry.
This issue has implications for India. The country’s strategic, atomic, telecom, cyber and intra-government networks may not yet be running entirely on memory-safe programming languages. Imagine the consequences of the Indian government’s strategic assets running on memory-unsafe programming languages. An adversarial state actor or its proxy could use ‘adversarial AI’ or other means to identify vulnerabilities in code crucial for the strategic asset’s operations and carry out a devastating ‘memory safety attack’ that would lead to both the destruction of vast amounts of critical data and the denial of services.
The American Defense Advanced Research Projects Agency (DARPA), in July 2024, commenced a new program called TRACTOR (Translating All C to Rust) to convert legacy C language code to the memory-safe Rust programming language. One can assume that legacy C code forms part of numerous Department of Defense installations and mobile assets. The US has time and again warned against the failure to secure memory safety. In February 2024, the US White House Office of the National Cyber Director released a report addressing the root cause of many disastrous cyberattacks: memory safety. In 2022, the US National Security Agency released a cybersecurity information sheet on Software Memory Safety, in which it recommended that users transition to memory-safe languages to avoid the compromise of critical systems.
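As a hedged sketch of the kind of translation a TRACTOR-style effort attempts (the function name and buffer size here are invented for illustration; the original C idiom is shown in comments), consider a routine that copies a string into a fixed-size buffer. The C version can silently overflow; the Rust translation makes the failure case explicit.

```rust
// C original (memory-unsafe; shown as a comment):
//
//   void load_label(char dst[8], const char *src) {
//       strcpy(dst, src);   /* overflows dst if src is too long */
//   }
//
// A Rust translation carries the buffer's size with it and turns
// the overflow into an explicit, checkable error.
fn load_label(dst: &mut [u8; 8], src: &[u8]) -> Result<(), &'static str> {
    if src.len() > dst.len() {
        return Err("source too long for destination buffer");
    }
    dst[..src.len()].copy_from_slice(src);
    Ok(())
}

fn main() {
    let mut label = [0u8; 8];
    // Fits: copied safely.
    assert!(load_label(&mut label, b"ISRO").is_ok());
    // Too long: rejected up front, no out-of-bounds write.
    assert!(load_label(&mut label, b"too-long-name").is_err());
    println!("label starts with: {:?}", &label[..4]);
}
```

Automating this rewrite at the scale of decades-old defence codebases is precisely the hard problem TRACTOR is funded to attack.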
Not just the US, China too has started to adopt new programming languages. China’s Huawei is the only Asian founding member of the Rust Foundation, alongside Amazon, Google, Meta, and Microsoft. The ongoing CHIPS Act-driven sanctions by the US against China have not impeded the Chinese company from becoming part of the foundation overseeing this programming language. It is possible that Huawei’s Ascend AI chips will use memory-safe languages like Rust. Co-founder of the cryptocurrency Ethereum, Vitalik Buterin, and many like him are placing their bets on using AI to find bugs in their core business software code. Ethereum uses both Rust and Go for its code.
It is not known whether Indian strategic agencies have recognised the rise of memory-safe programming languages like Swift, Rust, and Go. Had they taken any steps, a nationwide governmental initiative to convert memory-unsafe code in critical infrastructure to memory-safe code would have been announced, or at least talked about. The Chairman of ISRO once said that his space agency faces over 100 cyberattacks every day. This raises the question: do these attacks have roots in memory-unsafety, and what systemic measures has ISRO taken beyond warding them off? If ISRO or similar strategic agencies in the country are content finding bugs, fighting one battle at a time, and not tackling the memory safety issue, India is going to have a tough time keeping its legacy mission-critical infrastructure safe, especially with the advent of adversarial AI hunting for bug bounties.
New Delhi must give an unwavering directive to all of its agencies to explore transitioning mission-critical code from unsafe C and C++ to code written in more AI-stable, memory-safe programming languages. Alternatively, agencies could fine-tune their hardware for memory-safe AI-readiness, which could also include developing alternative memory-safe C++ code. Unless the National Security Council Secretariat and other agencies of the strategic establishment make ‘memory safety’ India’s priority in strategic domains, the effort to leverage the transformative potential of AI will remain half-hearted.
Chaitanya Giri is a Fellow with the Centre for Security, Strategy and Technology at the Observer Research Foundation
The views expressed above belong to the author(s).