
Generative AI

GS4/GS3/GS1

 Syllabus: Ethics/Society/Science and Technology

 

Source: WEF

 

Context: In the wake of newly released models such as Stable Diffusion and ChatGPT, generative AI has become a ‘hot topic’ for technologists, investors, policymakers and society at large.

What is Generative AI?

Generative AI is a type of artificial intelligence that involves creating new, original content or data using machine learning algorithms.

  • It can be used to generate text, images, music, or other types of media.

 

What is GPT?

A Generative Pretrained Transformer (GPT) is a type of large language model (LLM) that uses deep learning to generate human-like text.

  • “generative” because they can generate new text based on the input they receive
  • “pretrained” because they are trained on a large corpus of text data before being fine-tuned for specific tasks
  • “transformers” because they use a transformer-based neural network architecture to process input text and generate output text
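The generate-next-token loop at the heart of such models can be illustrated with a toy sketch. Note this is purely illustrative: real GPT models use a transformer neural network trained on huge corpora, whereas the bigram (Markov-chain) model below only captures the basic idea of "pretraining" on text and then repeatedly predicting the next word from context.

```python
import random

# Toy "pretraining": count which word follows which in a tiny corpus.
# (Illustrative only -- not the transformer architecture GPT actually uses.)
corpus = "the cat sat on the mat and the dog sat on the rug".split()

model = {}
for prev, nxt in zip(corpus, corpus[1:]):
    model.setdefault(prev, []).append(nxt)

def generate(prompt, n_tokens=5, seed=0):
    """Generate up to n_tokens words, each sampled from the words that
    followed the previous word in the training corpus."""
    rng = random.Random(seed)
    out = [prompt]
    for _ in range(n_tokens):
        candidates = model.get(out[-1])
        if not candidates:  # dead end: no observed continuation
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

print(generate("the"))
```

Every word the sketch emits was seen in training, which also hints at why such models can reproduce biases present in their training data.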


Uses of Generative AI:

  • Create realistic images and animations
    • Text-to-image programs such as Midjourney, DALL-E and Stable Diffusion have the potential to change how art, animation, gaming, movies and architecture, among other fields, are rendered
  • Generative AI can be used to compose music and create art
  • Create brand logos: E.g., many startups are exploring services like DALL·E 2, Bing Image Creator, Stable Diffusion, and Midjourney to create their brand logos
  • Generate text: E.g., ChatGPT can generate news articles, poetry, and even code
  • AI-assisted drug discovery
  • Generative AI can be used to design and control robotic systems
  • Automate tasks: E.g., Microsoft-owned GitHub Copilot, which is based on OpenAI’s Codex model, suggests code and assists developers in autocompleting their programming tasks

Issues Associated with Generative AI:

  • Governance: Companies such as OpenAI are self-governing the space through limited-release strategies and monitored use of models; however, self-governance leaves room for manipulation
  • Fear of job losses: E.g., automation of tasks previously done by humans, such as writing news articles or composing music
  • Reduced need for human cognition: E.g., young children may come to treat AI as a friend that does their homework for them
  • Fear of Societal Bias being replicated by AI
  • Issues surrounding intellectual property and copyright: The datasets behind generative AI models are generally scraped from the internet without seeking consent from living artists or work still under copyright
  • Fear of Misinformation and Mistrust by manipulation of information, creating fake text, speech, images or video
  • Fear of Concentration of Power in the hands of a few companies
  • Risks to national security from automated troll bots with advanced capabilities

Suggestions:

  • Need to make generative AI models more transparent, so that the public can understand how and why a model makes certain decisions
  • Use of diverse training data, as well as techniques like fairness constraints or adversarial training to mitigate bias.
  • Privacy: Ensuring the privacy of individuals whose data is used to train these models
  • Accountable governance, esp. of BigTech companies, e.g. through a designated “AI ethicist” or “AI ombudsman”
  • Designing a system wherein humans make the final decision and AI can be used as a support system
  • Collaboration with civil society and policymakers: To mitigate the impact of generative AI on the disruption of labour markets, the legitimacy of scraped data, licensing, copyright, the potential for biased or otherwise harmful content, misinformation, and so on

Conclusion:

While generative AI is a game-changer in numerous areas and tasks, there is a strong need to govern the diffusion of these models, and their impact on society and the economy more carefully.

Related News:

Source: Indian Express

A court case next month will be taken up by the “world’s first robot lawyer”. The AI-enabled legal assistant will help get a defendant out of a speeding ticket by telling them what to say throughout the case via an earpiece.

Behind the robot lawyer is San Francisco-based startup DoNotPay, which will cover any potential fines in case things do not work out.

 

Insta Links

 

Editorial:

Generative AI

AI and Robotics

  

Mains Links

 Ethics Case Study:

We’ll probably look back on 2022 as the year generative Artificial Intelligence (AI) exploded into public attention, as image-generating systems from OpenAI and Stability AI were released, prompting a flood of fantastical images on social media. Last week, researchers at Meta announced an AI system that can negotiate with humans and generate dialogue in a strategy game called Diplomacy. Venture capital investment in the field grew to $1.3 billion this year, according to Pitchbook, even as it contracted for other areas in tech.

The digital artist Beeple was shocked in August when several Twitter users generated their own versions of one of his paintings with AI-powered tools. Similar software can create music and videos. The broad term for all this is ‘generative AI’ and as we lurch to the digital future, familiar tech industry challenges like copyright and social harm are re-emerging.

Earlier this month, Meta unveiled Galactica, a language system specializing in science that could write research papers and Wikipedia articles. Within three days, Meta shut it down. Early testers found it was generating nonsense that sounded dangerously realistic, including instructions on how to make napalm in a bathtub and Wikipedia entries on the benefits of being Caucasian or how bears live in space. The eerie effect was facts mixed in so finely with hogwash that it was hard to tell the difference between the two. Political and health-related misinformation is hard enough to track when it’s written by humans.

    1. What are the ethical issues in the above case?
    2. Can we have ‘ethical AI’?
    3. Suggest measures that must be taken to prevent the moral damage that can arise from AI.