An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.
The theoretical foundation of Artificial Intelligence (AI) was laid in 1950 when Alan Turing, a British polymath, explored the mathematical possibility of machines solving problems and making decisions. While the field of AI witnessed early successes until the 1970s, growth was impeded by a lack of computational power to do anything substantial. The 1980s, however, saw an expansion of the algorithmic toolkit, and a boost in funding allowed computers to learn from experience. In 1997, IBM’s Deep Blue defeated Grandmaster Garry Kasparov. Since 2010, the discipline has experienced a new boom, driven mainly by considerable improvements in computing power and access to massive quantities of data.
Today, AI can generate new content such as text, images, audio, and code. It does this by learning the patterns and structures of existing data and then using that knowledge to create new data similar to the data it was trained on. Some of the most impressive examples of generative AI include OpenAI’s GPT-3 (capable of generating human-quality text, translating languages, and more) and DALL-E 2 and Google AI’s Imagen (both capable of generating images from text descriptions). These models are being widely used to create new products and services, personalise experiences, and augment human creativity. Generative AI has the potential to revolutionise many industries and transform our lives in many ways, and as the technology continues to develop, we can expect even more impressive and innovative applications.
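To make the learn-then-generate loop concrete, here is a minimal, self-contained Python sketch using a character-level Markov chain: it records which characters follow each short context in a training text and then samples new text with similar local structure. The tiny corpus and the order parameter are illustrative assumptions; modern generative models such as GPT-3 are vastly larger neural networks, but the basic pattern of learning from existing data and sampling new data from what was learned is the same in spirit.

```python
import random
from collections import defaultdict

def train(text: str, order: int = 3) -> dict:
    """Count, for every context of `order` characters, the characters observed next."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        context = text[i:i + order]
        model[context].append(text[i + order])
    return model

def generate(model: dict, seed: str, length: int = 80) -> str:
    """Sample new text one character at a time from the learned counts."""
    out = seed
    for _ in range(length):
        choices = model.get(out[-len(seed):])
        if not choices:  # context never seen in training: stop early
            break
        out += random.choice(choices)
    return out

# Illustrative toy corpus; a real model trains on billions of documents.
corpus = "artificial intelligence systems learn patterns from data and generate new data "
model = train(corpus.lower(), order=3)
print(generate(model, seed="art"))
```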
Due to its ability to generate new content, AI has been widely adopted by the public as well as by organisations. ChatGPT, which set off a frenzy of generative AI use in daily tasks from editing to coding, reached 100 Mn monthly active users in January 2023, just two months after its launch. In October 2023, it registered over 1.7 Bn monthly visits. Similarly, the IBM Global AI Adoption Index 2022 revealed that 35% of companies reported using AI in their business, and an additional 42% reported that they are exploring AI.
However, while AI has revolutionised industries, it also poses various societal risks, such as introducing bias, preventable errors, poor decision-making, misinformation, and manipulation, potentially threatening democracies and undermining social trust through deepfakes and online bots. These technologies can also be misused by criminals, rogue states, ideological extremists, or simply special interest groups to manipulate people for economic gain or political advantage. The European Parliament has drawn attention to the potential negative impact on society and democratic processes. A recent global report found a 10x increase in the number of deepfakes detected globally across all industries from 2022 to 2023. Recently, Prime Minister Narendra Modi also raised concerns over the use of deepfakes to spread misinformation.
To tackle the risks posed by AI, various attempts to embed ethical, moral, and legal values in the development and deployment of AI are being made at the national and international levels.
At the intergovernmental level, the OECD’s Recommendation on Artificial Intelligence, adopted in 2019, was the first such initiative. It set out principles for trustworthy AI: developing and deploying AI systems in a way that contributes to inclusive growth and sustainable development, benefits people and the planet, and is human-centred, transparent, accountable, robust, secure, safe, ethical, fair, and beneficial to society. Additionally, the Recommendation provides five recommendations to policymakers on national policies and international cooperation for trustworthy AI, namely: investing in AI research and development; fostering a digital ecosystem for AI; shaping an enabling policy environment for AI; building human capacity and preparing for labour market transformation; and international cooperation for trustworthy AI. Since May 2019, these principles have been adopted by 46 countries and endorsed by the G20.
The OECD also hosts the secretariat of the Global Partnership on Artificial Intelligence (GPAI), a multi-stakeholder initiative which aims to bridge the gap between theory and practice on AI by supporting cutting-edge research and applied activities on AI-related priorities. GPAI brings together experts from science, industry, civil society, governments, international organisations, and academia to foster international cooperation on responsible AI, data governance, the future of work, and innovation and commercialisation. India, as the Lead Council Chair of GPAI, hosted the GPAI Summit in New Delhi in December 2023, where the 29 GPAI member countries agreed to create applications of AI in healthcare, agriculture, and many other areas of concern to the entire world.
The Office of Science and Technology Policy of the US government, in October 2022, published a non-binding Blueprint for an AI Bill of Rights to guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence. The framework document identified the following five principles:
Safe and Effective Systems
Algorithmic Discrimination Protections
Data Privacy
Notice and Explanation
Human Alternatives, Consideration, and Fallback
So far, the US government has secured voluntary commitments from 15 companies, including Amazon, Google, Meta, Microsoft, OpenAI, Adobe, Cohere, IBM, Nvidia, Palantir, and Salesforce, to help move toward safe, secure, and transparent development of AI technology.
In October 2023, President Joe Biden issued a landmark Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence to establish new standards for AI safety and security, protect Americans’ privacy, advance equity and civil rights, stand up for consumers and workers, and promote innovation and competition. Among other issues, the Order directed the US Department of Commerce to establish robust international frameworks for harnessing AI’s benefits, managing its risks, and ensuring safety. The Order also advocated content authentication and watermarking to clearly label AI-generated content. The US government is strongly advancing responsible AI in health-related fields: in December 2023, 28 leading healthcare providers and payers announced voluntary commitments to the safe, secure, and trustworthy purchase and use of AI in healthcare.
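To make the idea of content authentication concrete, the following minimal Python sketch shows one hypothetical approach: the generating provider attaches a signed provenance record to a piece of text, so that anyone holding the verification key can check whether a given label genuinely belongs to that content. The key, field names, and model identifier are illustrative assumptions, not any provider’s actual scheme; real mechanisms, such as C2PA manifests or statistical watermarks embedded during generation, are considerably more sophisticated.

```python
import hashlib
import hmac
import json
import time

# Placeholder signing key for illustration only; a real deployment
# would use properly managed, rotated keys.
SECRET_KEY = b"provider-signing-key"

def label_content(text: str, model_name: str) -> dict:
    """Attach a signed provenance record to generated text."""
    record = {
        "content_sha256": hashlib.sha256(text.encode()).hexdigest(),
        "generator": model_name,
        "generated_at": int(time.time()),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_label(text: str, record: dict) -> bool:
    """Check that the label is authentic and still matches the content."""
    claimed = dict(record)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["content_sha256"] == hashlib.sha256(text.encode()).hexdigest())

text = "This paragraph was produced by a generative model."
record = label_content(text, "example-model-v1")
print(verify_label(text, record))        # True
print(verify_label(text + "!", record))  # False: content no longer matches the label
```

Even this toy version exhibits the core property such measures aim for: once the content is altered, the label no longer verifies, so tampering with labelled AI-generated content becomes detectable.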
The Bletchley Declaration, signed in November 2023 by 28 countries — including the United States, India, and China — and the European Union, aimed to boost global cooperation to promote inclusive economic growth, sustainable development and innovation, to protect human rights and fundamental freedoms, and to foster public trust and confidence in AI systems to realise their potential fully.
India has also taken many initiatives at the national level. In his budget speech for FY 2018-19, Shri Arun Jaitley, the then Hon’ble Finance Minister of India, entrusted NITI Aayog to initiate a national programme to direct India’s efforts in the area of artificial intelligence, including research and development of its applications.
Soon after, in June 2018, NITI Aayog published the National Strategy for Artificial Intelligence as a roadmap to leverage AI for economic growth, social development, and inclusive growth in areas such as agriculture, health, and education. The Strategy sought to address five key barriers to reaping AI’s benefits:
Lack of broad-based expertise in research and application of AI
Absence of enabling data ecosystems – access to intelligent data
High resource cost and low awareness for adoption of AI
Privacy and security, including a lack of formal regulations around anonymisation of data
Absence of a collaborative approach to adoption and application of AI
Building on the National Strategy for Artificial Intelligence, NITI Aayog published its Approach Document for Responsible AI in 2021. The document examined the potential risks of AI systems and the legislative practices and technological approaches for managing them, and aimed to establish broad ethical principles for the design, development, and deployment of AI in India. It outlined the following principles for ensuring that AI systems are designed in a manner that upholds fundamental rights:
Principle of Safety and Reliability
Principle of Equality
Principle of Inclusivity and Non-discrimination
Principle of Privacy and Security
Principle of Transparency
Principle of Accountability
Principle of Protection and Reinforcement of Positive Human Values
Furthermore, the document noted that India does not have an overarching guidance framework for the use of AI systems, and establishing such a framework would be crucial for providing guidance to various stakeholders in responsible AI management in India.
The Telecom Regulatory Authority of India, in its July 2023 report titled Recommendations on Leveraging Artificial Intelligence and Big Data in the Telecommunication Sector, underlined the urgent need for the Government to adopt a regulatory framework applicable across sectors. The regulator recommended:
Establishment of the Artificial Intelligence and Data Authority of India (AIDAI), an independent statutory authority.
Establishment of a Multi-Stakeholder Body (MSB) to advise AIDAI.
Categorisation of AI use cases based on their risk, and regulating them according to broad principles of Responsible AI.
The Ministry of Electronics & Information Technology (MeitY) constituted four committees to promote AI initiatives and develop a policy framework. Committee-D, which focussed on Cyber Security, Safety, Legal and Ethical Issues, also recommended the formulation of safety guidelines and a review of existing laws for any modifications that may be necessary for the adoption of AI applications in the domain.
As artificial intelligence continues to evolve at an unprecedented pace, the need for responsible and trustworthy AI cannot be overstated. By embracing responsible AI development and deployment practices, we can harness the power of AI to address global challenges and improve the lives of people worldwide. We must also remain vigilant in mitigating the potential risks posed by AI, such as bias, discrimination, and misuse, and sensitise people to identify misinformation. Only through a collaborative effort among governments, businesses, academia, and civil society can we ensure that AI is developed and used responsibly for the benefit of all. By working together, we can ensure that AI becomes a force for good.