Expert Speak Digital Frontiers
Published on Feb 09, 2023

A relational governance framework could help ensure that the benefits of AI are maximised while minimising potential harm.

Responsible Artificial Intelligence (AI) governance using a relational governance framework

Artificial Intelligence (AI) was hailed as a revolutionary technology in the early 21st century, but its uptake was initially slow and encumbered. After cycles of rise and decline, its current rapid and pervasive application has been termed the second coming of AI. It is employed in a variety of sectors, and there is a drive to create practical applications that can improve our daily lives and society. Healthcare is a highly promising, but also challenging, domain for AI. Its two main uses there are to support health professionals in decision-making and to automate repetitive tasks, freeing up professionals' time. While still in its early stages, AI application is rapidly evolving. For instance, ChatGPT is a large language model (LLM) that uses deep learning techniques and is trained on large volumes of text data. Such models have been used in a variety of applications, including language translation, text summarisation, conversation generation, and text-to-text generation.


However, the use of AI in medical and research fields has raised concerns about its potentially harmful effects on the accuracy and integrity of the information it produces. One of the main concerns about using AI tools in the medical field is the potential for misinformation to be generated. Because a model is trained on a large volume of data, it may inadvertently include misinformation in its responses. This could lead to patients receiving incorrect or harmful medical advice, with potentially serious health consequences. Another issue is the potential for bias to be introduced into research results. As the model is trained on data, it may perpetuate existing biases and stereotypes, leading to inaccurate or unfair conclusions in research studies as well as in routine care. In addition, the ability of AI tools to generate human-like text raises ethical concerns in sectors such as research, education, journalism, and law. For example, a model can be used to generate fake scientific papers and articles, which can deceive researchers and mislead the scientific community.

Despite these concerns, AI tools, like any other tools, should be used with caution appropriate to the context. One way to address the risks is to have a governance framework in place that can manage potential harms by setting standards; monitoring and enforcing policies and regulations; providing feedback and reports on performance; and ensuring that development and deployment respect ethical principles, human rights, and safety considerations. Governance frameworks can also promote accountability and transparency by ensuring that researchers and practitioners are aware of the possible negative consequences of deploying these tools, and by encouraging them to do so responsibly.


The deployment of a governance framework can provide a structured approach to dialogue and facilitate the exchange of information and perspectives among stakeholders, leading to more effective solutions. The United Nations Framework Convention on Climate Change (UNFCCC) offers an instructive example: one of the most notable outcomes of the UNFCCC process is the Paris Agreement, adopted in 2015. The agreement established a transparent framework for tracking and reporting progress towards long-term goals and provided a platform for regular dialogue and cooperation among parties. Although governance frameworks can provide structure and stability, they also have limitations that reduce their effectiveness: a lack of uniformity and consistency across governments in agenda-setting, and difficulties in implementation and enforcement that make compliance challenging. For instance, despite operating within an established governance framework, the 27th Conference of the Parties (COP27) failed, according to most analysts, to achieve its objectives. Conversely, during COVID-19, the absence of a governance framework made it difficult for countries to work together and share information and resources, resulting in an inconsistent and fragmented response to the crisis.

The implementation of AI regulation in healthcare requires a thoughtful and well-balanced approach to ensure that the benefits of AI are maximised while minimising potential harm. After evaluating all facets of the issue, the authors propose incorporating a relational governance model into the AI governance framework. Relational governance is a model that considers the relationships between the various stakeholders in the governance of AI. Implementing AI governance in healthcare at the international, national, user, and industry levels using a relational governance model requires considering the roles and responsibilities of each stakeholder in ensuring the responsible and ethical use of AI in healthcare.

At the international level, relational governance in AI in healthcare (AI-H) can be facilitated through the establishment of international agreements and standards, including agreements on data privacy and security as well as on ethical and transparent AI development. By establishing a common understanding of each stakeholder's responsibilities, international collaboration can help ensure that AI is used consistently and responsibly across borders. At the national level, relational governance in AI-H can be implemented through government regulations and policies that reflect these roles and responsibilities, including laws on data privacy and security and policies that encourage the ethical and transparent use of AI-H. Setting up periodic monitoring and auditing systems, creating enforcement mechanisms, and imposing sanctions on industry for noncompliance with legislation can all help promote the appropriate use of AI.


At the user level, relational governance in AI-H can be promoted through education and awareness. Patients and healthcare providers should be informed about the benefits and risks of AI, as well as their rights and responsibilities in relation to its use. This can help build trust and confidence in AI systems and encourage the responsible use of AI-H. Finally, at the industry level, relational governance in AI-H can be promoted through industry-led initiatives and standards. This includes establishing industry standards and norms (for example, through the International Organization for Standardization) based on the requirements of users (healthcare providers, patients, and governments), as well as implementing data privacy and security measures in AI systems.

India's presidency of the G20 provides a platform to initiate dialogue on AI regulation and to highlight the need for AI regulation in healthcare. G20 members can collaborate to create AI regulation that considers the unique needs and challenges of the healthcare sector. They can explore ways to keep patient data secure while allowing the responsible use of AI in healthcare, and work towards establishing best practices for the development of AI algorithms to ensure they are transparent, ethical, and accurate. This set of measures, carried out at various levels, would ensure that AI systems are regularly reviewed and updated so that they remain effective and safe for patients.

The views expressed above belong to the author(s).

Authors

Viola Savy Dsouza


Miss Viola Savy Dsouza is a PhD Scholar at the Department of Health Policy, Prasanna School of Public Health. She holds a Master of Science degree ...

Julien Venne


Naturally evolving in an international context, passionate about social impact, health and wellbeing, as well as environmental challenges, Julien is a seasoned expert in innovation ...

Sanjay Pattanshetty


Dr. Sanjay M Pattanshetty is Head of the Department of Global Health Governance, Prasanna School of Public Health, Manipal Academy of Higher Education (MAHE), Manipal, Karnataka ...

Helmut Brand


Prof. Dr. Helmut Brand is the founding director of Prasanna School of Public Health, Manipal Academy of Higher Education (MAHE), Manipal, Karnataka, India. He is also Jean ...
