We are delighted to announce the inaugural edition of our conference, AI for All, to be held on November 15-16, 2018 in Mumbai, India. NITI Aayog, the Indian government's policy think tank, will be the partner for the event. Professor Wendell Wallach, consultant, ethicist and scholar at Yale University's Interdisciplinary Center for Bioethics, and a senior advisor to The Hastings Center, will be the Co-Chair.
Over the past two years, more than a dozen countries have either published national AI strategies or launched government task forces to deliberate on the subject. It is clear that states view AI not just as a technological advancement but as a race towards economic and military superiority. However, AI promises to reshape more than politics and economics: it could fundamentally alter how communities and societies are organised in the future.
Accordingly, the themes around which the platform is designed include exploring the impact of AI on geopolitics, skilling and training, accountability, data infrastructure and the military. A brief description of the pillars and tentative themes can be found below.
See the agenda here.
Pillars:
Geopolitical Impact | Skilling and Training | Accountability/Ethics | Data Infrastructure | Military
AI and Geopolitical Impact
1. The Race for Autonomy: Strategies, Security and Implications of AI
Technology may no longer be the great equalizer as countries compete to gain a head start in AI adoption. As states declare their strategies to harness the potential of AI to meet national priorities, the global balance of power will be challenged by the onslaught of this new technology. What will competition look like in the AI age? How can governments redesign and manage transitions in manufacturing, labour, defense and the larger economy as a result of AI?
2. Carving out AI Futures: Framing Inclusive Interests of Emerging Economies
The increasing US-China bipolarity in the fight for AI dominance has resulted in other countries evaluating and building their capacities to become contenders. As governments adopt different approaches to create value in the global supply chain, how can access to AI be made inclusive, equitable and transparent? India, for instance, has stated its vision to deploy AI to solve socio-economic problems at scale and even leverage the technology to benefit the differently abled. What are some of the challenges unique to emerging economies when pushing for AI adoption and deployment? How can the West partner with these countries to promote regional stability and create a new global order?
Accountable AI
3. Bias, Safety and Security in AI Systems
Ongoing research in AI indicates that safeguards against bias, along with protections for safety and against emerging threats, must be incorporated at the design stage of applications. How do we future-proof self-driving vehicles, the Internet of Things and smart cities against vulnerabilities and attacks? How will institutions integrate automated decision-making into their operations and ensure that it is fair and equitable?
4. AI and Ethics: Big Tech’s Big Responsibility
The backlash over Google’s involvement in the Pentagon’s controversial Project Maven, and over Amazon’s real-time facial recognition tool, Rekognition, has brought into question the ethical responsibility of companies whose tools are used for weapons and surveillance. As defense and law enforcement modernize their operations, can AI be leveraged to serve national security interests? How can companies be held responsible by employees and users when their technologies are used to perpetuate harm?
Data Infrastructure
5. Data for Good: Public-Private Cooperation in Data Sharing
The French Government’s new AI strategy calls for making data a public good, with the state convening platforms to encourage data sharing within the private sector. India, on the other hand, has built the world’s largest public data infrastructure with its national digital-ID programme, Aadhaar. As more states mandate that data be localised, policymakers the world over are looking to develop solutions that leverage data to prioritise local startups and level the playing field. The NITI Aayog National AI Strategy, for instance, recommends that a National AI Marketplace be established for data aggregation and annotation. Will dominance in data result in a winner-takes-all outcome and determine the leaders in an AI-driven economy? How can the government enable data ecosystems and access to intelligent data?
6. Privacy in the age of AI-driven Big Data Analytics
As algorithmic decisions and AI become more ubiquitous, redefining everyday experiences, data will be collected at a scale that challenges principles of data protection such as consent and purpose limitation. With machine learning systems becoming more sophisticated at drawing patterns and profiling users, civil liberties, and, as recent events have shown, even democracies can be threatened. How can privacy, including principles of transparency, consent and accountability, be reimagined in the age of big data? How can elections and other democratic processes be secured as profiling and data sharing become more rampant?
Skilling and Training
7. Future of Work: Skilling for the Machine Age
Uncertainty over the future of jobs has lurked behind every technological leap over the centuries, but those fears centred on the replacement of physical labour alone, leaving cognitive work untouched. Now, the tally of potential job destruction is linked to machines’ predictive power at scale. For workers to use AI productively, they will require new skills. For firms to do the same, they must go boldly where they fear to tread and recalibrate internal business processes. Experts agree that machine learning is not going to replace managers, but managers who know how to use machine learning will replace managers who don’t. If that is where the old ‘job’ ends and the future of work begins, how do we train for it and get it right? Can context- and country-specific quantitative models be built to analyse the impact of AI on employment?
AI in Warfare
8. Bots of War: AI in the military
With autonomy in weapon systems seeming like a distinct possibility, the clamour around their active use in warfare has steadily risen. Nations have already begun deploying these systems to enhance targeting and manoeuvrability. What other functions - both lethal and non-lethal - can autonomous systems discharge in the military? How must military command and control structures be reimagined to integrate AI?
9. Ensuring Human Responsibility and Accountability in the Use of Autonomous Systems
Retaining human responsibility for decisions on the use of autonomous weapons has long been considered a means of both mitigating the unpredictability of intelligent machines and ensuring respect for International Humanitarian Law. Questions around human accountability over the development, activation, execution and oversight of these weapons are technologically complex and politically fraught. How can human accountability be retained across the entire life cycle of a weapon system in compliance with applicable international law? Is that enough to protect against the unintended consequences of the black-box algorithms these weapons rely on?