Author : Kiran Yellupula

Published on Mar 22, 2023

The growing dependency on AI can have adverse impacts if adequate safeguards are not put in place

The questions ChatGPT will not answer for you

Though still experimental, Artificial Intelligence chatbots like ChatGPT look like fascinating, useful, free tools that instantly answer queries based on the prompts users give them. Yet they pose serious security risks that legal regulations do not address. They are disrupting the way we work, live, and evolve as a nation. Are we ready for such national security threats without the necessary safeguards in place?

AI opens a new front in cyber arms race

The Artificial Intelligence (AI) arms race is disrupting our lives. A paradigm shift in technology fuels the battle for supremacy between man, machine, and nations. So, what does it mean to be human? Can machines think? Can machines trump human creativity? Does AI pose an existential risk to humans? Will your brain become vestigial due to a growing dependency on AI? No one truly knows the answers. And the few who may know aren’t going to share them, as the stakes are too high.


With the advent of AI systems endowed with perception, cognition, and decision-making, the AI application layer has grown vast, deeply impacting our lives. For instance, OpenAI’s ChatGPT (backed by Microsoft) and Google’s Bard are experimental, conversational AI chat services that draw on online information to reply to your ‘questions’. But asking questions means handing personal data to the AI bots.

Unregulated risks of global experiments

Functionally an advanced form of search, these AI systems are massive global experiments driven by AI bots, run without the subjects’ explicit consent. The goal of such experimental research is to harvest user data freely and refine products for profit, while claiming to advance AI for the benefit of humanity. As renowned linguist Noam Chomsky says, “AI could significantly impact people's capacity for independent thought and creation.”

Your questions and inputs are used as ‘training data’ for refining these bots, and the individual outputs become part of a collective intelligence. A responsible firm committed to benefitting humanity would not have released an unfinished technology to the public carrying such risks, be it privacy intrusion, plagiarism, non-explainability, algorithmic bias, or harm to social info-ecosystems.

The problem is that while AI firms say humanity is not far from potentially scary AI, they still release the technology publicly, even before the rules are laid down. This will open the door for a few Big Tech companies to gain control over our lives and society. There is thus a need to protect people from corporates that use AI for profit with no accountability.

Employees are feeding sensitive business data into AI-driven ChatGPT as training data, raising security concerns and the risk of leakage of proprietary information or trade secrets: strategies, approaches, business models, and technology all contain confidential information. Owing to privacy concerns, many companies, including Amazon, Walmart, JPMorgan, Verizon, Goldman Sachs, and KPMG, are curbing the use of ChatGPT by employees and evaluating safer pathways. From New York to China, schools, colleges, universities, and regulators are banning ChatGPT amid serious security fears. America’s Federal Trade Commission has warned Silicon Valley against false AI claims, urging truth, fairness, and equity in the use of AI tools.


India needs to wake up to the threat and act now. The real challenge facing people is the ‘integrity’ of the data used by an AI system, along with its accuracy, reliability, and consistency. More so, as existing human bias is too often transferred to AI. And the latest AI bots work well only when they have access to “lots of data” to mimic your intelligence, without revealing how that data will be used. Data labelling, network portability, data portability, and data interoperability will help us tame AI-powered systems.

Let’s ask: Who owns the content generated by ChatGPT? What can the owner do with the content? Does it infringe copyright? Is it “openly” plagiarised? Can the content generated be copyrighted? Does the content violate intellectual property rights and privacy? Will the platform or owner be liable for risks?

Need for stricter AI regulations

These questions remain unanswered by design, steering the evolution of AI to benefit a select few. The EU’s Industry Chief has demanded stricter AI regulations. Even OpenAI’s founder, Sam Altman, says relying on chatbots for anything important right now is a mistake. The warning has been largely ignored, which may lead to unintended consequences.

Tech news site CNET used AI to write articles that were found to contain serious errors, denting its reputation. Tech publication WIRED has vowed not to publish stories with text generated or edited by AI, except to suggest headlines or text for short social media posts, brainstorm story ideas, or use AI more like a search engine.

AI failure also carries high reputational risks. As such incidents are likely to increase, preventing ‘data creep, scope creep, and the use of biased training data’ will make AI more responsible. AI stakeholders, the policymaking community, and governments should invest in building social norms, public policy, and educational initiatives to prevent machine-generated disinformation. Mitigation will require effective policy and partnerships across the ecosystem.


Research indicates that over-reliance on AI may help generative AI become smarter while humans become dumber, losing their unique knowledge and diversity of thought. When humans start mimicking AI output and stop taxing their brains, their ability to think is stunted, leaving them with little unique knowledge and leading to suboptimal collective performance that is detrimental to innovation. As human choices converge toward similar responses, accuracy may increase, but unique human understanding diminishes.

While the AI arms race intensifies, India needs to invest more in R&D and swiftly work on AI governance, responsibility, and accountability to protect privacy and safety and guard against unintended consequences. No law in the country prevents AI from infringing on human rights, or from mining the human mind without consent, as the technology continues to refine itself.

An algorithm has no feelings. Yet, through the way the programme ‘trains itself’, AI can reinforce social biases and prejudices while maximising its designated outcome. Humans must be clear about how we want AI to shape our future: Do we want AIs to maximise narrow goals while producing negative externalities, or do we want to train AI to build a smarter planet, free of harmful biases?

Growing self-reliance in AI

As part of the Global Partnership on Artificial Intelligence (GPAI) floated by the G7, India should critically nurture equitable rules for data governance, safety, and trust around AI, rooted in collaborative research, and set up global Centres of Excellence (CoEs) fostering human rights, inclusion, diversity, innovation, and growth. Interestingly, China shuns the US-led GPAI but is eyeing global AI leadership by 2030. Initiatives like Make AI in India and Make AI Work for India will yield limited impact unless the nation balances the trade-off of whom it wants to work with as a trusted partner, or takes bold steps to develop self-reliance as China has.


GCHQ, the UK’s cybersecurity and intelligence agency, warns that ChatGPT and rival chatbots are security threats. The technologies powering ChatGPT may also become geopolitical tools for propaganda, cyberattacks, synthetic biology, or autonomous fighter jets, conferring huge economic leverage. As AI expands into people’s lives, even in useful ways, these systems may diminish individuals’ ability to control their choices. If AI isn’t kept in check, we may expect an Orwellian future. As the Federal Trade Commission warns, policymakers need to exercise caution in using AI.

So, how can humans future-proof themselves? AI is just a tool and shouldn’t define humankind. We still need thinking, curiosity, adaptability, empathy, judgement, creativity, and the skills to oversee, fix, and create these technologies. Living with AI means asking the right questions, questioning the answers, and learning to explain why you accept an output.

The views expressed above belong to the author(s).

Author

Kiran Yellupula


Kiran has over two decades of leadership experience in managing strategic communications for IBM, Accenture, Visa, Infosys, and JLL. He has also worked as an ...
