As AI tools reshape modern hiring, their unregulated use raises urgent concerns about bias, accountability, and data privacy.
When John McCarthy introduced the term in 1956, artificial intelligence (AI) was envisioned primarily as a subject of academic inquiry, with early research centred on symbolic reasoning, logic, and problem-solving—abstract tasks that mirrored human cognition but remained far removed from real-world application. Several decades later, AI has evolved from an abstract academic concept into a force deeply embedded in everyday life, tutoring students in subjects like physics and serving as a life coach, therapist, and data analyst. Critically, AI now also influences consequential decision-making processes such as hiring and recruitment.
A recent survey identified that 93 percent of Fortune 500 Chief Human Resource Officers (CHROs) have begun integrating AI tools and technologies to improve their business practices. AI adoption in recruitment has seen significant growth, with approximately 88 percent of companies worldwide incorporating AI into their recruitment processes for initial candidate screening. However, as these tools gain traction and begin to influence the global job market, they raise serious concerns about legal accountability and data privacy.
This article focuses on two specific and interrelated issues: first, the absence of enforceable accountability structures when AI systems make or influence hiring decisions; and second, the excessive personal data that these AI tools collect, often without meaningful consent. It further evaluates how current global and Indian legal frameworks remain inadequate in addressing these challenges.
To retain top talent in a fast-moving job market, corporations need the capacity to recruit and onboard people quickly. To meet this imperative, organisations have increasingly adopted AI-driven recruitment solutions, which have demonstrably improved process efficiency. Some evidence suggests that companies using AI in their hiring processes have achieved up to a 50 percent reduction in time-to-hire, alongside substantial cost savings, underscoring the transformative potential of AI in modern talent acquisition.
Further, the ability of AI systems to handle large applicant volumes allows employers to rank and filter candidates at a speed human recruiters cannot match, screening hundreds or even thousands of resumes within minutes. Using natural language processing (NLP) techniques, AI tools assess candidates against criteria such as skills, experience, and education. Tasks that would traditionally take human recruiters days or weeks can now be completed far faster. By automating the initial stages of candidate screening, AI systems streamline recruitment and free human recruiters to concentrate on higher-value activities such as in-depth candidate evaluations and final selection.
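To make the mechanism concrete, the sketch below shows one simplified form of such screening: ranking resumes against a job description by text similarity. It is a hypothetical, minimal illustration (the job text, resumes, and TF-IDF scoring are assumptions for the example), not the method of any particular vendor, whose pipelines are far richer.

```python
# Hypothetical sketch: ranking resumes against a job description by text similarity.
# Real hiring tools use far more elaborate NLP pipelines; this only shows the idea.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job_description = "Data analyst with Python, SQL and statistics experience"
resumes = [
    "Five years as a data analyst using Python, SQL and Excel",
    "Graphic designer skilled in Illustrator and branding",
    "Statistician with strong Python and machine learning background",
]

# Turn the job description and resumes into TF-IDF vectors, then score each
# resume by its cosine similarity to the job description.
matrix = TfidfVectorizer(stop_words="english").fit_transform([job_description] + resumes)
scores = cosine_similarity(matrix[0:1], matrix[1:]).flatten()

# Rank candidates from best to worst match.
for rank, idx in enumerate(scores.argsort()[::-1], start=1):
    print(f"{rank}. score={scores[idx]:.2f}  {resumes[idx]}")
```

A system like this returns only a ranking; it offers the candidate no account of why one resume scored higher than another, which is precisely the opacity discussed below.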
These systems can also help reduce the risk of unconscious human bias, which often stems from a recruiter’s subjective interpretation of candidate data. Yet despite these efficiencies, AI tools can reflect biases present in their training data, unfairly disadvantaging candidates on the basis of gender, race, or other demographic markers.
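The following toy sketch, using entirely synthetic data and illustrative feature names, shows how this happens: a model fitted on historical hiring decisions that favoured one group learns to weight group membership itself, so the bias is reproduced whenever the model screens new candidates.

```python
# Hypothetical sketch: a model trained on biased historical hiring decisions
# learns the bias itself. Synthetic data; feature names are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
experience = rng.uniform(0, 10, n)      # years of experience
group = rng.integers(0, 2, n)           # demographic marker, unrelated to ability
# Historical decisions favoured group 1 on top of experience (the embedded bias).
hired = (0.3 * experience + 2.0 * group + rng.normal(0, 1, n)) > 2.5

model = LogisticRegression().fit(np.column_stack([experience, group]), hired)
print("weight on experience:", round(model.coef_[0][0], 2))
print("weight on group:     ", round(model.coef_[0][1], 2))
# The large positive weight on 'group' means the model keeps favouring that
# group for equally experienced candidates: the historical bias is reproduced.
```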
While these efficiencies make AI an attractive solution, they also raise serious concerns. What happens when an AI system makes a flawed or biased hiring decision? Can such outcomes be traced, questioned, or reversed? Despite the advantages, the growing reliance on AI in recruitment demands closer scrutiny.
Unlike human recruiters, AI recruitment tools often function as black-box systems: users can see the input and the output but gain no insight into how the output was produced. Hence, if a qualified candidate is unfairly rejected because of biased training data, flawed algorithms, or misinterpreted inputs (such as facial expressions or accents), attribution becomes murky. Employers may point to the software provider, while developers might argue that the final decision rests with the company using the tool. This lack of clear attribution creates an accountability gap, leaving candidates stuck in the middle with no practical way to challenge or appeal the decision. The good news is that some countries are beginning to take these challenges seriously, examining how AI in hiring should be regulated and who should answer when things go wrong.
Globally, regulatory frameworks have begun responding, albeit unevenly. The European Union’s General Data Protection Regulation (GDPR), under Article 22, gives individuals the right not to be subject to solely automated decisions that have significant effects, such as being hired or rejected for a job. In practice, however, this right can be circumvented: employers can escape accountability by claiming that a human was involved somewhere in the process, even if only nominally. The EU Artificial Intelligence Act goes further by classifying hiring-related AI as “high-risk,” which means companies will need to meet stricter standards for transparency, documentation, and human oversight. That said, as of 2025, the challenge is less the absence of rules than their proper enforcement.
In contrast, the United States lacks a federal law explicitly governing algorithmic hiring. The Equal Employment Opportunity Commission (EEOC) has issued guidance stating that employers are liable if their AI tools discriminate, as seen in the EEOC v. iTutorGroup case of 2023. Still, enforcement remains rare and heavily reliant on post-facto detection or whistleblower complaints.
India, which has witnessed an increasing uptake of Applicant Tracking Systems (ATS) and video interview platforms in IT, finance, and ed-tech sectors, currently lacks any AI-specific regulation. The Digital Personal Data Protection Act (DPDPA), 2023, does not address automated decision-making in hiring or require transparency in algorithmic processes. It places accountability with the “data fiduciary” but does not specify whether this includes vendors operating opaque AI tools. Furthermore, India’s labour and anti-discrimination laws, primarily framed for human decision-making, do not account for procedural fairness in algorithmic hiring.
The result, across jurisdictions, is a vacuum of enforceability. Employers often outsource recruitment to third-party AI vendors, but when decisions go awry, these vendors frequently evade responsibility by presenting themselves as neutral software providers, while employers deflect blame onto opaque algorithms. Ultimately, applicants have no means of understanding how such automated decisions were made and no effective legal remedies.
The second critical concern lies in how AI recruitment systems collect and process vast amounts of personal and behavioural data throughout the hiring pipeline. AI-powered hiring platforms such as Pymetrics (acquired by Harver) and Modern Hire (acquired by HireVue) have been widely adopted to automate early-stage recruitment. These tools systematically capture and analyse a broad range of data collected during virtual interviews and assessments, which extends far beyond what candidates typically understand or expect.
The GDPR restricts the processing of special categories of personal data, such as biometric data used for identification. However, how behavioural and inferred data are treated under the law remains ambiguous. Similarly, India’s DPDPA does not recognise behavioural or inferred data as a distinct or sensitive category, leaving such data open to collection and repurposing with few constraints.
Consent forms another legal basis for data processing in data protection regimes worldwide, including India’s. Yet valid consent is difficult to obtain in hiring contexts, where tick boxes come with little or no explanation of what data will be used, how long it will be stored, and whether it will be reused. India’s DPDPA framework compounds this problem by allowing “legitimate use” exceptions under Section 7, which may permit companies to reuse candidate data without fresh consent, effectively turning one job application into a long-term data asset.
This enables a form of surveillance, where what appears to be a fair assessment becomes a mechanism to collect and analyse behavioural and inferred data, often without the candidate’s full knowledge.
Across the world, and increasingly in India, AI recruitment systems are making decisions that deeply affect people’s lives, yet the regulations meant to govern them remain either absent or inadequate. The result is a recruitment process that appears more efficient but is far less accountable and often blind to the subtle ways in which data-driven decisions can go wrong.
In a world where everyday tasks are increasingly automated with the help of AI, ensuring fairness in hiring will require regulatory frameworks worldwide to expand their scope and introduce enforceable standards. Clearly defining limits on how behavioural data is collected, used, and retained will be essential. Without these additions, AI may make hiring more efficient, but at the cost of transparency and applicants’ trust.
Tanusha Tyagi is a Research Assistant with the Centre for Digital Societies, Observer Research Foundation.
The views expressed above belong to the author(s).