GS Paper 3
Syllabus: Science and Technology
Context: The Group of Seven (G7) has proposed a “risk-based” regulation for artificial intelligence (AI) tools, which could be a first step towards creating a template to regulate AI such as OpenAI’s ChatGPT and Google’s Bard.
What is AI?
AI stands for artificial intelligence, which is the ability of machines to learn and perform tasks that normally require human intelligence, such as problem-solving, decision-making, and language understanding.
What is GPT?
GPT stands for Generative Pre-trained Transformer, a family of large language models trained on vast amounts of text that can generate human-like responses; it underlies chatbots such as OpenAI's ChatGPT.
Concerns related to the rise of AI software and chatbots:

| Concern | Description |
| --- | --- |
| Privacy | Personal and sensitive data could be used for unethical purposes, such as targeted advertising or political manipulation. |
| Responsibility | Since AI models can generate new content (images, audio, or text), they may be used to produce fake news or other malicious content without it being clear who is responsible for the output, creating ethical dilemmas over responsibility. |
| Automation and Job Loss | AI has the potential to automate many processes, which could displace workers skilled in those areas. |
| Bias and Discrimination | AI trained on biased data can make decisions that unfairly disadvantage certain groups, perpetuating societal inequalities and leading to discrimination. |
| Lack of Transparency and Accountability | It is unclear who should be held responsible for the actions of AI systems: the creators of the systems, the companies that deploy them, or the governments that regulate them. |
Various steps taken by countries and groupings to regulate AI:

| Country/Grouping | Steps taken |
| --- | --- |
| G7 | Proposed a “risk-based” regulation for AI tools, under which the level of risk determines the degree of regulatory scrutiny and compliance requirements an AI system would be subject to. |
| EU | The proposed AI Act segregates AI tools by use-case scenario, broadly based on the degree of invasiveness and risk, into four categories: unacceptable risk (e.g., social scoring), high risk (e.g., critical infrastructure), limited risk, and minimal risk (e.g., spam filters, word processing). The AI Act is due next year. |
| Italy | Became the first major Western country to ban OpenAI’s ChatGPT over privacy concerns. |
| UK | Adopted a ‘light-touch’ approach that aims to foster innovation in the AI industry. |
| Japan | Takes an accommodative approach towards AI developers. |
| China | Drafted 20-point measures to regulate generative AI services, likely to be enforced later this year. |
| India | ICMR has released guidelines for the use of AI in the health sector; NITI Aayog has published the National Strategy for Artificial Intelligence and the Responsible AI for All report. India is not currently considering any law to regulate AI. India’s AI skill penetration rate of 3.09 is the highest among G20 and OECD countries. |
| US | The Blueprint for an AI Bill of Rights proposes a nonbinding roadmap for the responsible use of AI, spelling out five core principles to govern the effective development of AI systems. |
Although the risks of AI are widely known, it remains unclear how the proposed AI Bill of Rights would address these risks or how grievances would be remedied. Elon Musk, Steve Wozniak, and over 15,000 others have called for a six-month pause in AI development and for labs and independent experts to jointly develop and implement shared safety protocols.