Special Reports | Published on May 24, 2024

‘Mumbai and Tel Aviv Effect’: An Alternative to the ‘Bandwagon Effect’ of Brussels and Washington in Global AI Regulations


Rajan Luthra and Tehilla Shwartz Altshuler, “‘Mumbai and Tel Aviv Effect’: An Alternative to the ‘Bandwagon Effect’ of Brussels and Washington in Global AI Regulations,” ORF Special Report No. 226, May 2024, Observer Research Foundation.


The need for forward-thinking artificial intelligence (AI) regulatory frameworks has never been more pressing.[1] As nations grapple with the dual imperatives of nurturing innovation and safeguarding the public interest, policy deliberations in the democratic world are oscillating between two opposing poles: the comprehensive regulation seen in the European Union (EU), often described as “innovation-stifling”; and the laissez-faire, innovation-centric approach of the United States (US), largely driven by private sector interests. Meanwhile, the global discourse on AI regulation has long crossed borders and is now a multilateral geopolitical challenge.[2]

This special report ventures away from these two extremes and offers a new approach suited to countries like India and Israel that are technologically advanced but have been more hesitant about putting in place relevant policy to grapple with AI challenges. This approach calls for a regulatory framework that allows for localised innovation, addresses national security needs, and bolsters each country’s democratic institutions, while remaining interconnected in the technological mosaic of the global community. The authors call this the ‘Mumbai and Tel Aviv Effect’.

To place the Mumbai and Tel Aviv Effect on the spectrum of regulatory approaches, the two poles must first be clearly understood, beginning with the EU. The EU’s regulatory strategy, reflected in policies such as the 2018 General Data Protection Regulation (GDPR) and the Digital Services Act and Digital Markets Act, both of 2022, is comprehensive in nature. The EU AI Act,[3] enacted in 2024 and regarded as a landmark piece of legislation, adopts a risk-based methodology to regulate AI applications, setting guidelines and obligations for developers and deployers;[4] it completes the EU’s ‘digital regulation package’. This package positions the EU as a global pioneer in, and exporter of, digital regulation, crafting a framework that influences tech development and deployment both within and beyond its borders. Aspiring to become the de facto worldwide standard, in what is termed the ‘Brussels Effect’, it compels international companies and other governments to comply if they wish to access, or cooperate with, the lucrative European market.

However, while the Brussels Effect champions high regulatory standards, it also sparks debates about its potential impact on technological innovation. Critics and researchers argue that overly prescriptive regulations might stifle creativity and competitiveness, thereby slowing the pace of AI advancements, especially for early-stage ventures.[5]

In contrast to the EU’s all-encompassing approach, the United States takes a different stance on AI governance, creating a phenomenon that this report calls the ‘Washington Effect’. Characterised by a reluctance to impose overarching federal regulations on AI, the US approach leans toward industry-specific guidelines and encourages self-regulation. The underpinning aim is to foster an environment where innovation and market forces drive the development, application, and responsible adoption of AI-enabled digital platforms and technologies.

A prime example of this is Section 230 of the Communications Decency Act of 1996,[6] which, in the words of Jeff Kosseff, cybersecurity law professor at the US Naval Academy, comprises “The Twenty-Six Words That Created the Internet”.[7] The Biden administration’s October 2023 executive order[8] on AI also illustrates this hands-off regulatory philosophy.[9] While Biden described the order as “bold action”, he acknowledged that “we still need Congress to act.”[10] The order’s scope is limited, emphasising continued collaborative engagement with the private sector, where industry leaders are relied upon to set their own boundary markers for responsible AI usage while maintaining American technological advantage in the global arena.

While this model has undeniably fueled rapid scientific progress and has contributed to making the US a technology leader, the domestic debate continues on finding the optimal balance between fostering innovation and ensuring adequate safeguards in an increasingly AI-driven world. Critics argue that without a coherent national framework, there might be gaps in oversight, leading to inconsistent standards and potential risks to consumer rights, privacy, security, democracy, and society at large.

For instance, the lack of oversight over social media algorithms impacted the 2016 and 2020 US elections, raising concerns about whether less robust democracies could withstand similar challenges. This situation also necessitated legislative actions, such as the proposed requirement for TikTok to be sold to an American company, due to its perceived negative influence on public opinion in the United States. Additionally, in the realm of facial recognition technology, the absence of stringent regulations has permitted unchecked use by law enforcement, leading to issues of racial bias and violations of civil liberties that have contributed in some part to social unrest.[11]

The Washington Effect indirectly influences AI governance as US-based tech giants play a dominant role in the global AI landscape. Moreover, US regulatory choices encourage other nations to adopt a similar laissez-faire approach to foster their AI companies.

The Brussels Effect and the Washington Effect represent two paradigms in the realm of AI governance, each with its own strengths and challenges. They highlight a critical question in global technology governance: how to achieve the optimum balance between the need for innovation and the equally critical imperative to protect society from the risks posed by emerging technologies. This ever-expanding tension between “accelerationists” and “regulationists” demands alternative proposals. One of them is what this report introduces as the ‘Mumbai and Tel Aviv Effect’.

The Need for an Alternative Paradigm

In countries like Israel and India, grassroots innovation is encouraged, yet tech policy remains weak. Both countries recognise the need to regulate technology providers but have chosen the path of ethics-based self-regulation, or “soft” regulation. This choice might be due to insufficient motivation among decision-makers, or simply regulatory lethargy. It could also stem, however, from the ‘Regulatory Bandwagon Effect’ of Brussels and Washington. In both countries there is an active discussion about the need for AI regulation that aligns with Europe; at the same time, genuine concern about harming innovation, amplified by pressure from their local technology ecosystems, causes the de facto effect of Washington to become dominant.[12] The result is a sub-optimal state of AI regulation.

India possesses an enormous talent pool[13] for software and AI development, and a booming consumer market. It has also emerged as the world’s third largest ecosystem[14] for startups. Its approach to AI regulation is progressively taking shape,[15] and in March 2024 the government announced the IndiaAI Mission.[16] Through initiatives like the 2018 National Strategy for Artificial Intelligence, hashtagged ‘#AIForAll’,[17] the Indian government has outlined its vision to leverage AI for economic growth and social progress, and an ambition to become the ‘garage’ for emerging and developing technologies. This strategy underscores the importance of collaboration between the government, private sector, and academia to achieve a comprehensive AI ecosystem. NITI Aayog, India's apex public policy think tank, has so far settled for establishing guidelines and promoting ‘responsible AI’ without regulations, in its guide from 2021.[18] India also lacked comprehensive data protection legislation until the introduction of the Digital Personal Data Protection Act in 2023.[19]

On 1 March 2024, India's Ministry of Electronics and Information Technology (MeitY) issued an advisory[20] requiring “significant platforms” to seek government permission before the public release of “untested AI platforms”. Following strong reactions from industry and legal experts expressing apprehensions, the Minister of State for Electronics and Information Technology, Rajeev Chandrasekhar, had to issue clarifications[21] seeking to quell the concerns.[22] Two days later, at a public forum, he stated that the government was working on a draft AI regulation framework set for release around July.[23] On 15 March, MeitY issued a fresh advisory superseding the previous one and eliminating the obligation to obtain prior governmental approval.[24] Given the complexity of the subject and its broader implications, this avoidable embarrassment highlights the importance of conducting several rounds of multi-stakeholder review of draft regulations before their public release, even when pressed for time.

Israel, for its part, is well-known for its “Startup Nation” innovation atmosphere,[25] and its current AI policy is characterised by its choice to forgo formal AI legislation. The Israeli Ministry of Innovation, Science, and Technology introduced a policy paper in December 2023,[26] which outlines Israel's official stance on AI. This document settles for ethical guidelines (or “soft law”) and the creation of a knowledge center and a steering committee,[27] and does not call for regulatory intervention. Israeli digital regulation already lags in certain aspects of privacy, cyber, and social media regulation.[28]

AI externalities—the spectrum of intentional and unintentional consequences that AI may harbour—are wide-ranging. They include threats to information and cybersecurity, the production of inaccurate content (confabulation), and the manipulation of humans through various types of artificial content, such as impersonation, promotion of conspiratorial narratives, obscene content, and toxicity. These technologies also have the capacity for precise prediction and planning, which enables them to create the social modeling necessary for broad social influence.

AI can facilitate the development of weapons, including by easing access to chemical, biological, radiological, or nuclear weapons and dual-use systems, enhancing situational awareness, and enabling the bypassing of safety mechanisms. Moreover, AI could autonomously expand and distribute itself. Additional concerns include biases, hallucinations, dangerous recommendations, violations of data privacy, compromised information integrity, pervasive surveillance, and challenges in human-AI interactions. The competitive drive among nations and corporations could accelerate AI development, leading to relinquishing control over these systems, particularly if profits are prioritised over safety.

However, the likelihood and impact of such risks for individuals and society vary significantly across different nations, influenced by a myriad of factors, some of which transcend the realm of technological advancement. These factors include the structure of the labour market and the dynamics of technology and high-tech industries. For example, India boasts a robust software market and engineering talent, while Israel is known for its advanced cyber market. Other influential factors are the resilience and integrity of democratic and regulatory institutions, which can affect the stability of financial markets and the integrity of electoral processes.

Additionally, there is a relative lack of urgency or capability in these countries to set global benchmarks for tech giants, unlike the approaches taken by the EU and the US. National security considerations and geopolitical rivalries, often diverging from Western paradigms, also play a role. The use of unique languages like Hindi and Hebrew adds another layer of complexity. Thus, the equilibrium between AI's potential risks and its benefits necessitates a distinct assessment for each of these countries, one that differs from the frameworks established in the EU or US.

Key Imperatives

Democratic Stability

In both Israel and India, complex social fabrics and geopolitical contexts make these societies particularly susceptible to AI-driven disinformation campaigns, which could undermine electoral integrity, exacerbate tensions, or influence public opinion in ways that are detrimental to the democratic process. To capitalise on the Mumbai and Tel Aviv Effect, countries should adopt proactive frameworks and methodologies aimed at mitigating the risk of AI-powered interference in their democracies. This can have an internal as well as an international effect, helping to prevent cross-border AI-driven deepfakes, mal-information, and more sophisticated cyber threats.

The impact of foreign intervention using AI tools to increase social polarisation or influence elections[29] is especially significant in relatively new or complex democracies. Disinformation, however, extends beyond elections; it also plays a significant role in cyberattacks on the private sector. The threats include the manipulation of markets and public perception through the rapid dissemination of false narratives or manipulated data by AI-trained chatbots and botnets. Such scenarios can lead to stock market manipulation, precipitate bank runs, or falsify government transactions. Consider the potential chaos from the takeover of a news outlet's platform to broadcast a false terrorist attack; such misinformation could trigger catastrophic financial reactions within minutes. The risk of disinformation carries a heavier toll in environments with fragile democratic institutions, or in emerging markets, where financial and regulatory frameworks are more vulnerable to destabilisation through AI-fabricated disclosures.

National Security

Israel and India both face significant national security threats, necessitating AI policies that are acutely attuned to defence and security needs. AI offers potent capabilities for defence, surveillance, and cybersecurity. However, it also presents new vulnerabilities, such as AI-powered cyber-attacks and autonomous weapons systems. The Mumbai and Tel Aviv Effect, applied to national security, advocates for a policy approach that leverages AI's strengths in protecting citizens and safeguarding national interests while implementing stringent safeguards against the risks AI poses of escalating conflicts or enabling new forms of warfare. Incorporating national security into the AI policy framework requires not only a focus on the defensive and offensive capabilities of AI but also the consideration of international norms and partnerships to prevent an AI arms race, in order to ensure global stability and align with broader humanitarian principles.

Equal Economic Gain

While the epicentres of the biggest tech companies predominantly reside in the United States, nations like India and Israel must ensure that the economic growth and benefits stemming from the AI technology sector permeate their entire societies, rather than being monopolised by a handful of large corporations. Tel Aviv and Mumbai, as the financial capitals of their respective countries, are instrumental in shaping these outcomes. Consequently, AI regulation should be closely tied to competition laws. The focus, however, should not be solely on contending with Big Tech companies, as seen in the US and EU. Instead, it should primarily aim to be watchful of the acquisition of deep-tech startups in critical and emerging technologies in a manner that could suppress competition and domestic innovation.

Linguistic Challenges of LLMs

Both Israel and India are likely to depend on AI foundation models developed abroad. This poses a challenge, since AI models might not perform as accurately in Hebrew, or in the multitude of languages spoken in India, as they do in more widely represented languages like English. This could lead to suboptimal, or even discriminatory, outcomes in AI-driven services, as seen in the past with automated content moderation on social media platforms. Policies that encapsulate the Mumbai and Tel Aviv Effect would prioritise the development of high-quality, inclusive AI language models and content moderation procedures that cater to these diverse linguistic needs, ensuring that AI risks are mitigated and that solutions are culturally relevant.

Emphasis on Local Interests

As much as Big Tech companies such as Microsoft[30] or Amazon may involve themselves in the drafting of AI policy in countries like India or Israel, their interests may be misaligned with local needs. Big Tech companies are not only revenue-oriented but also inherently anti-state: they prefer global, borderless regulation over regulation grounded in community and nation-specific vulnerabilities. It is therefore crucial to create top-down policies that, foremost, consider local interests.

Optimising the Mumbai and Tel Aviv Effect: A Blueprint

While the Mumbai and Tel Aviv Effect offers a compelling framework for balancing innovation with ethical, cultural, and societal considerations in AI policy, its implementation faces inherent challenges. First, there is a need to ensure continuous adaptation of regulations to keep pace with rapid technological advancements, to foster international cooperation in a multifaceted global landscape, and to secure sufficient resources and expertise to develop and enforce effective AI governance.

Second, effective implementation requires establishing a hybrid regulatory framework that merges ethical principles, a risk-management concept, and enforceable directives and laws, while ensuring innovation is not stifled. Indeed, both India[31] and Israel[32] have embraced the principle of safety and reliability throughout the lifecycle of AI products and technologies, advocating values such as equality and fairness, inclusivity and non-discrimination, privacy, security, transparency, accountability, and human-centric approaches to AI that emphasise the protection and reinforcement of positive human values. These principles are also reflected in a variety of policy documents worldwide. However, ethics alone are insufficient; broad and vague principles cannot significantly impact the market without precise legislation and regulatory frameworks.

Third, prioritisation is crucial: distinguishing between what is critical and immediate and what can await further deliberation. A pressing issue in Europe, for example—real-time facial recognition by law enforcement—may not translate directly to other regions. In contrast, issues like election meddling through Large Language Models (LLMs) should be a higher priority in countries like Israel and India. These countries cannot allow Europe to be the sole decision-maker about issues like classifying algorithms into risk categories or banning overly risky applications, as even algorithms deemed ‘medium risk’ can still lead to significant issues, especially with unique languages. A balanced approach might involve selective beta testing, combined with imposing limited liability on technology producers and application marketers to incentivise self-examination.

Fourth, there is a need to integrate local legislative frameworks with the adoption of international technological governance standards. Creating regulatory incentives, such as professional training, for AI-based products and services to align with global standards, like those of the US National Institute of Standards and Technology (NIST), the European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC), or the International Organization for Standardization (ISO), can further facilitate this integration.

The NIST draft documents on the AI Risk Management Framework provide a breakdown of AI-related risk subcategories and actionable items for mitigation. These documents include the Generative AI Profile;[33] Secure Software Development for Generative AI;[34] and Reducing Risks Posed by Synthetic Content,[35] along with NIST's proposed plan for Global Engagement[36] on AI Standards and a challenge program to improve assessments of GenAI, all released in April 2024. Local legislation can thus establish priorities, expectations, requirements, and governance frameworks, while international standards allow governments and organisations to tailor their approaches to specific use cases, sectors, or applications, based on their unique requirements, risk tolerances, and resources. This will ensure responsible, governable, and contestable oversight of AI system development and outcomes, enforcing regulatory and enterprise-wide compliance. Such oversight will be future-proof in the sense that it will update in tandem with international standardisation and apply to actions of varying relevance to different AI actors.

Sir Tim Berners-Lee, the British computer scientist best known as the inventor of the World Wide Web, the HTML markup language, the URL system, and HTTP, wisely stated, “We need diversity of thought in the world to face the new challenges.”[37] The notion of the Mumbai and Tel Aviv Effect advocates for bringing more diverse perspectives to the global dialogue on AI policy and governance, moving beyond the conventional dichotomy of the EU's and the US's approaches. It should encourage nations to foster AI ecosystems attuned to local needs that not only drive economic growth and innovation but also safeguard democratic values and individual rights, take into account domestic cultural values and governmental institutions, and enhance national security.

Ultimately, the journey toward responsible AI governance—international and domestic—will require collaboration across borders, sectors, and disciplines. By embracing the principles embodied in the Mumbai and Tel Aviv Effect and heeding the call for diversity of thought, democratic countries can navigate the AI era with foresight and responsibility, ensuring that these transformative technologies enhance, rather than undermine, our shared human future.


As the world navigates the labyrinth of AI regulation, the global landscape is at a critical juncture, with nations veering towards a fragmented legal order where the strategic deployment of AI technologies could intensify global divisions and challenge the bedrock of international cooperation. In this context, the absence of a unified approach risks pushing emerging economies and the Global South towards adopting AI frameworks aligned with non-democratic powers like China and Russia, thereby diluting the influence of democratic norms in the digital realm.

Herein lies another benefit of the Mumbai and Tel Aviv Effect, which transcends regional innovation to offer a balanced, inclusive blueprint that could guide these countries toward cohesive AI governance. The effect has the potential to resonate with the wider world, particularly in countries of the Global South, which currently lack agency in global norm-making.

Furthermore, there is a need for deeper multi-stakeholder discussion on how the Mumbai and Tel Aviv Effect will impact real-life issues of local concern in these countries. This is particularly vital in sensitive sectors such as social media, the labour market, healthcare, transportation, and AI implementation within the public sector. Additionally, discussions should address basic concepts like the interpretation of “algorithmic transparency”. This involves deciding whether to prioritise deep transparency of models for regulatory supervision or to focus on explainability to the public. These are not just technical decisions but important social and cultural ones. Similarly, the interpretation of “algorithmic discrimination” may vary between multicultural and more homogeneous countries, necessitating tailored approaches. There is also a need for a clear framework for identifying decisions and operations that should never be delegated to AI, reflecting local values and considerations.

In a world grappling with the complexities of AI, the Mumbai and Tel Aviv Effect presents an opportunity to set a new course, championing a geostrategic paradigm that promotes collaboration and shared progress over divisiveness, and shaping a future where global AI regulation is anchored in a commitment to democratic principles and respect for humanity as a whole.

Rajan Luthra is Distinguished Fellow at Observer Research Foundation; Honorary Practice Fellow at Institute of Security Science and Technology, Imperial College London; and member of Innovation Council at Jio Institute. 

Dr. Tehilla Shwartz Altshuler is Senior Fellow at the Israel Democracy Institute. Her recent book, Man, Machine, and the State (August 2023), deals with AI regulation from Israeli and comparative perspectives.


[1] Yonathan A. Arbel et al., "Open Questions in Law and AI Safety: An Emerging Research Agenda",  Lawfare, March 11, 2024, https://www.lawfaremedia.org/article/open-questions-in-law-and-ai-safety-an-emerging-research-agenda.

[2] See also: OECD, Recommendation of the Council on Artificial Intelligence, OECD/LEGAL/0449, February 2024, https://legalinstruments.oecd.org/en/instruments/oecd-legal-0449; UN, Principles for the Ethical Use of Artificial Intelligence in the United Nations System, October 2022, https://unsceb.org/sites/default/files/2023-03/CEB_2022_2_Add.1%20%28AI%20ethics%20principles%29.pdf.

[3] Artificial Intelligence Act - Regulation (EU) Laying Down Harmonised Rules on Artificial Intelligence and Amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828, final version 19 April 2024, https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138-FNL-COR01_EN.pdf (accessed May 17, 2024).

[4] Melissa Heikkilä, “Five things you need to know about the EU’s new AI Act,” MIT Technology Review, December 11, 2023, https://www.technologyreview.com/2023/12/11/1084942/five-things-you-need-to-know-about-the-eus-new-ai-act/.

[5]  See for example: Cristiano Codagnone and Linda Weigl, “Leading the Charge on Digital Regulation: The More, the Better, or Policy Bubble?,” Digital Society, Vol. 2(4), (2023), https://link.springer.com/article/10.1007/s44206-023-00033-7.

[6]  Telecommunications Act of 1996, 47 U.S.C. § 230.

[7]  Jeff Kosseff, The Twenty-Six Words That Created the Internet (Ithaca, Cornell University Press, 2019).

[8] The White House, Government of the US, “Fact Sheet: Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI,” July 21, 2023, https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/.

[9] Tate Ryan-Mosley and Melissa Heikkilä, “Three things to know about the White House’s executive order on AI,” MIT Technology Review, October 30, 2023, https://www.technologyreview.com/2023/10/30/1082678/three-things-to-know-about-the-white-houses-executive-order-on-ai/.

[10] Cecilia Kang and David E. Sanger, “Biden Issues Executive Order to Create A.I. Safeguards,”  The New York Times, October 30, 2023,  https://www.nytimes.com/2023/10/30/us/politics/biden-ai-regulation.html.

[11] “Police Surveillance and Facial Recognition: Why Data Privacy is Imperative for Communities of Color,” Brookings Institution, April 12, 2022, https://www.brookings.edu/articles/police-surveillance-and-facial-recognition-why-data-privacy-is-an-imperative-for-communities-of-color/.

[12] Amir Cahane and Tehilla Shwartz Altshuler, Human, Machine and the State: Toward Regulation of Artificial Intelligence (Jerusalem, Israel Democracy Institute Press 2023), English Abstract, https://www.idi.org.il/media/21222/human-machine-state.pdf.

[13] Shaoshan Liu, “India is the World's Next Tech Manufacturing Hub,” The Information, April 6, 2023, https://www.theinformation.com/articles/india-is-the-worlds-next-tech-manufacturing-hub.

[14] Nasscom, State of Data Science and AI Skills in India, Nasscom, Bangalore, India, 2023, https://nasscom.in/system/files/publication/data-science-and-ai-skills-feb-2023-final-new.pdf.

[15] Shaoshan Liu, “India’s AI Regulation Dilemma,” The Diplomat, October 27, 2023, https://thediplomat.com/2023/10/indias-ai-regulation-dilemma/.

[16]  Cabinet, Government of India, https://pib.gov.in/PressReleaseIframePage.aspx?PRID=2012355, 2024.

[17] India, NITI Aayog, National Strategy for Artificial Intelligence, Delhi, June 2018, https://www.niti.gov.in/sites/default/files/2023-03/National-Strategy-for-Artificial-Intelligence.pdf.

[18] India, NITI Aayog, RESPONSIBLE AI #AIFORALL - Approach Document for India Part 1 – Principles for Responsible AI, February 2021, https://www.niti.gov.in/sites/default/files/2021-02/Responsible-AI-22022021.pdf.

[19] The Digital Personal Data Protection Act, 2023 (DPDP Act) (NO. 22 OF 2023) English version available here: https://www.meity.gov.in/writereaddata/files/Digital%20Personal%20Data%20Protection%20Act%202023.pdf.

[20]  India, The Ministry of Electronics and Information Technology (MeitY), Advisory No. eNo. 2(4)/2023-CyberLaws-3, March 1, 2024. See also: Arya Tripathy, "Analysis – MEITY Advisory on AI Tools and Intermediaries", comment posted on PSA Legal Blog, March 28, 2024, https://www.psalegal.com/analysis-meity-advisory-on-ai-tools-and-intermediaries-2/#.

[21] Rajeev Chandrasekhar (@Rajeev_GoI), “Recent Advisory of MeitY Needs to be Understood,” X, March 4, 2024, https://x.com/rajeev_goi/status/1764534565715300592?s=46&t=N3dEYbOqzHM5ImT_4l4r3Q.

[22] Rajeev Chandrasekhar (@Rajeev_GoI), “There's Much Noise and Confusion,” X, March 4, 2024, https://x.com/Rajeev_GoI/status/1764577260647092368.

[23]  India, INDIAai, “India plans to release the draft AI framework by July,” March 6, 2024, https://indiaai.gov.in/news/india-plans-to-release-the-draft-ai-framework-by-july-mos-it-rajeev-chandrasekhar.

[24] India, Ministry of Electronics and Information Technology, eNo.2(4)/2023-CyberLaws-3, March 15, 2024.

[25] Israel Innovation Authority, The State of Hi-Tech 2023, June 2023, https://innovationisrael.org.il/en/report/high-techs-contribution-to-the-economy/. See also: Dan Senor and Saul Singer, Start-Up Nation (New York, Warner Books 2011).

[26]  Israel, Ministry of Innovation, Science and Technology and Ministry of Justice, Responsible Innovation: Israel's Policy on Artificial Intelligence Regulation and Ethics, 2023 (Jerusalem, Ministry of Innovation, Science and Technology)  https://www.gov.il/BlobFolder/news/most-news20231218/en/Israels%20AI%20Policy%202023.pdf.

[27]  Israel, Government Secretariat, Government Resolution No. 173, Reinforcement of the Technological Leadership of the State of Israel , 2023 (Jerusalem, Israel Government Secretariat) https://innovationisrael.org.il/wp-content/uploads/2023/10/Governmnet-Resoluion-No.-173.pdf.

[28] See for example: Omer Kabir, “Israel's Privacy Laws Dawdling Will Be Catastrophic, Says Law Researcher,” CTech by Calcalist, July 18, 2019, https://www.calcalistech.com/ctech/articles/0,7340,L-3766594,00.html; Tehilla Shwartz Altshuler, “Israel's Cybersecurity is a Ticking Time Bomb,” Jerusalem Post, January 5, 2023, https://www.jpost.com/opinion/article-726659.

[29] Pranshu Verma and Cat Zakrzewski, “AI Deepfakes Threaten to Upend Global Elections. No One Can Stop Them,” Washington Post, April 23, 2024, https://www.washingtonpost.com/technology/2024/04/23/ai-deepfake-election-2024-us-india/.

[30] Brad Smith, Vice Chair & President, Microsoft, “India’s AI Opportunity,” Microsoft On the Issues, August 23, 2023, https://blogs.microsoft.com/on-the-issues/2023/08/23/indias-ai-opportunity/.

[31] NITI Aayog, RESPONSIBLE AI #AIFORALL - Approach Document for India Part 1 – Principles for Responsible AI.

[32] Israel, Ministry of Innovation, Science and Technology and Ministry of Justice, Responsible Innovation: Israel's Policy on Artificial Intelligence Regulation and Ethics, 2023 (Jerusalem, Ministry of Innovation, Science and Technology), https://www.gov.il/BlobFolder/news/most-news20231218/en/Israels%20AI%20Policy%202023.pdf.

[33] US Department of Commerce, National Institute of Standards and Technology, NIST AI 600-1 Initial Public Draft - Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence (Washington DC, National Institute of Standards and Technology), April 2024, https://airc.nist.gov/docs/NIST.AI.600-1.GenAI-Profile.ipd.pdf.

[34] US Department of Commerce, National Institute of Standards and Technology, Harold Booth et al., NIST SP 800-218A ipd - Secure Software Development Practices for Generative AI and Dual-Use Foundation Models - An SSDF Community Profile Initial Public Draft (Washington DC, National Institute of Standards and Technology), April 2024, https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-218A.ipd.pdf.

[35] US Department of Commerce, National Institute of Standards and Technology, NIST AI 100-4 - Reducing Risks Posed by Synthetic Content - An Overview of Technical Approaches to Digital Content Transparency (Washington DC, National Institute of Standards and Technology), April 2024, https://airc.nist.gov/docs/NIST.AI.100-4.SyntheticContent.ipd.pdf.

[36] US Department of Commerce, National Institute of Standards and Technology, NIST AI 100-5 - A Plan for Global Engagement on AI Standards (Washington DC, National Institute of Standards and Technology), April 2024, https://airc.nist.gov/docs/NIST.AI.100-5.Global-Plan.ipd.pdf.

[37] “Tim Berners-Lee Quotes,” BrainyQuote.com, BrainyMedia Inc., 2024, https://www.brainyquote.com/quotes/tim_bernerslee_179893.

The views expressed above belong to the author(s).

