
EDITORIAL ANALYSIS: Children, a key yet missed demographic in AI regulation


Source: The Hindu

  • Prelims: Science and technology, Artificial Intelligence (AI), Generative AI, Big Data, GANs, the ChatGPT tool, DALL-E, etc.
  • Mains GS Paper III and IV: Significance of technology for India, AI, indigenisation of technology and development of new technology.


  • India is to host the first-ever global summit on Artificial Intelligence (AI).
  • As the Chair of the Global Partnership on Artificial Intelligence (GPAI), India will also be hosting the GPAI global summit.
  • AI is projected to add $500 billion to India’s economy by 2025, accounting for 10% of the country’s target GDP.




Artificial Intelligence (AI):

  • It is a branch of computer science dealing with the simulation of intelligent behavior in computers.
  • It describes the action of machines accomplishing tasks that have historically required human intelligence.
  • It includes technologies such as machine learning, pattern recognition, big data, neural networks, self-learning algorithms, etc.
  • E.g.: Facebook’s facial recognition software, which identifies faces in the photos we post, and the voice recognition software that translates commands we give to Alexa are examples of AI already around us.


Generative AI:

  • It is a cutting-edge technological advancement that utilizes machine learning and artificial intelligence to create new forms of media, such as text, audio, video, and animation.
  • With the advent of advanced machine learning capabilities, it is possible to generate new and creative short- and long-form content, synthetic media, and even deep fakes from simple text inputs, known as prompts.


AI innovations:

  • GANs (Generative Adversarial Networks)
  • LLMs (Large Language Models)
  • GPT (Generative Pre-trained Transformers)
  • Image-generation tools to experiment with
  • Commercial offerings like DALL-E for image generation
  • ChatGPT for text generation
    • It can write blogs, computer code, and marketing copy, and even generate results for search queries.


The governance challenge:

  • Regulation will have to align incentives to reduce issues of addiction, mental health, and overall safety.
  • In the absence of safeguards, data-hungry AI-based digital services can readily deploy opaque algorithms and dark patterns to exploit impressionable young people.
  • Tech-based distortions of ideal physical appearance can trigger body image issues.
  • Other malicious threats emerging from AI include
    • misinformation
    • radicalisation
    • cyberbullying
    • sexual grooming
  • AI is known to transpose real world biases and inequities into the digital world.
    • Such issues of bias and discrimination can impact children and adolescents from marginalised communities.
  • The data protection framework’s current approach to children is misaligned with India’s digital realities.
    • It transfers an inordinate burden onto parents to protect their children’s interests.
    • It does not facilitate safe platform operations and/or platform design.
    • A significant percentage of parents rely on the assistance of their children to navigate otherwise inaccessible user interface and user experience (UI/UX) designs online.
    • It bans tracking of children’s data by default.
    • This can potentially cut children off from the benefits of the personalisation that we experience online.


What steps need to be taken?

  • The next generation of digital nagriks (citizens) must grapple with the indirect effects of their families’ online activities.
    • Enthusiastic ‘sharents’ regularly post photos and videos about their children online to document their journeys through parenthood.
  • As they move into adolescence, we must equip young people with tools to manage the unintended consequences.
    • For example, AI-powered deep fake capabilities can be misused to target young people.
    • Bad actors create morphed, sexually explicit depictions and distribute them online.
  • AI regulation must improve upon India’s approach to children under India’s newly minted data protection law.
  • International best practices can assist Indian regulation to identify standards and principles that facilitate safer AI deployments.
  • UNICEF’s guidance for policymakers on AI and children identifies nine requirements for child-centred AI, which draw on the UN Convention on the Rights of the Child (to which India is a signatory).
    • The guidance aims to create an enabling environment which promotes
      • children’s well-being
      • inclusion
      • fairness
      • non-discrimination
      • safety
      • transparency
      • explainability
    • It also calls for the ability to adapt to the varying developmental stages of children from different age groups.
    • California’s Age Appropriate Design Code Act serves as an interesting template.
      • It pushes for transparency to ensure that digital services configure privacy-protective default settings.
      • It assesses whether algorithms, data collection, or targeted advertising systems harm children.
      • It requires clear, age-appropriate language for user-facing information.
    • Indian authorities should encourage research which collects evidence on the benefits and risks of AI for India’s children and adolescents.
      • This should serve as a baseline to work towards an Indian Age Appropriate Design Code for AI.
    • Better institutions will help shift regulation away from top-down safety protocols which place undue burdens on parents.
    • Mechanisms of regular dialogue with children will help incorporate their inputs on the benefits and the threats they face when interacting with AI-based digital services.


Concerns around AI use:

  • Generative AI can create harm and adversely impact society through misuse, perpetuating biases, exclusion, and discrimination.
  • Bias and exclusion: Generative AI systems can perpetuate and amplify existing biases.
    • If the models are trained on biased, non-inclusive data, they will generate biased outputs.
    • For example: initially, generative imagery would show only images of white men for the prompt “CEO.”
  • Generative AI systems can create content for malicious purposes, such as deep fakes, disinformation, and propaganda.
  • Inappropriate content: It can generate offensive or inappropriate content.
  • Nefarious actors may use AI-generated media to manipulate people and influence public opinion.
  • It may also produce low-quality and less accurate information, specifically in the context of complex engineering and medical diagnosis.
  • It can be challenging to determine who is responsible for the content generated by a generative AI system.
  • Unresolved questions around training-data acquisition, consent, and intellectual property make it difficult to hold anyone accountable for harm resulting from its use.



Way Forward

  • India can assume leadership in how regulators address children and adolescents, a critical demographic in this context.
  • The nature of digital services means that many cutting-edge AI deployments are not designed specifically for children but are nevertheless accessed by them.
  • An institution similar to Australia’s Online Safety Youth Advisory Council, which comprises people between the ages of 13 and 24, could be established.
    • Such institutions will help regulation become more responsive to the threats young people face when interacting with AI systems, while preserving the benefits they derive from digital services.
  • Regulation should avoid prescriptions and instead embrace standards, strong institutions, and best practices which imbue openness, trust, and accountability.
  • As we move towards a new law to regulate harms on the Internet, and look to establish our thought leadership on global AI regulation, the interests of our young citizens must be front and center.



What are the different elements of cyber security? Keeping in view the challenges in cyber security, examine the extent to which India has successfully developed a comprehensive National Cyber Security Strategy. (UPSC 2022) (200 WORDS, 10 MARKS)