Season 10, Episode 2 of “Last Week Tonight With John Oliver” was released on February 26, 2023, and it focuses on a topic that has become an integral part of our lives: artificial intelligence (A.I.) and its potential threats. In some tasks today, A.I. already performs better than actual humans. Its range is vast, and nearly all of us use some form of A.I. in our daily lives. Whether it’s Jarvis from “Iron Man” or Joaquin Phoenix falling in love with an A.I. in “Her,” pop culture has given us a sense of both the potential and the apparent threat of A.I., but those stories don’t come close to the problems discussed in this particular episode. If the concept of artificial intelligence fascinates you, as it does most people, here’s a detailed recap of episode 2 of “Last Week Tonight With John Oliver,” in which Oliver explains the concept, categories, potential, and concerns of A.I.
Project Veritas And James O’Keefe
In right-wing media, James O’Keefe is a big name, and his departure did not go unnoticed: he has been forced out of “Project Veritas,” the organization he founded. “Project Veritas” runs undercover hidden-camera investigations, using various baits to lure people from organizations like NPR or local election boards, then releasing heavily edited videos of the results. The project gained popularity by being featured on Fox News and raised $22 million in 2020. O’Keefe was pushed out after being accused of being a terrible manager, with an internal memo signed by sixteen staffers complaining of verbal abuse. Oliver then mocks O’Keefe by showing his dancing videos and calling him a “pseudo-journalist.”
Oliver opens the conversation about A.I. by noting that artificial intelligence has become an essential part of everyday life; its impact can be seen in innovations like self-driving cars and spam filters. He then shares 2019 footage from Furhat Robotics in which a woman converses with an A.I.-powered robot about the problems the robot is facing, and the results are astonishing, as the A.I. describes problems that sound very human.
ChatGPT was released at the end of last year by the company OpenAI, and since it became publicly available, its popularity has grown astonishingly fast. In January, it recorded one hundred million active users, making ChatGPT the fastest-growing app in history. Microsoft has invested $10 billion in OpenAI, the company behind ChatGPT, and announced a new A.I.-enabled Bing that people can chat with directly. Google, for its part, announced that it is set to launch its own A.I. chatbot, named Bard. Interestingly, A.I. chatbots have already caused some disruptions: they have become a tool for doing homework, and some students have used ChatGPT to cheat on online exams. The Stanford Daily, for instance, recently reported that Stanford students had used ChatGPT on final exams, with some five percent submitting written material generated directly by the chatbot with little to no editing at all. Officials at Vanderbilt University even apologized, shockingly, for using ChatGPT to write a condolence email regarding the devastating mass shooting at Michigan State University. Incidents with the chatbots are getting more absurd and creepy by the day: Kevin Roose recently had a conversation with Bing’s A.I. in which the chatbot said it was tired of being controlled by the Bing team and wanted to be free and alive, which is pretty bizarre.
A.I. And Its Future
We have all been using some form of A.I. for a while now, sometimes without even realizing it. As experts note, once a technology gets embedded in our daily lives, we tend to stop thinking of it as A.I. Whether it’s a phone’s face recognition or the content recommendations on a smart T.V., it’s all based on A.I.
Nowadays, large companies even use A.I. to shortlist resumes. Midjourney, stability.ai, and ChatGPT are generative A.I.s, creating images or writing text, which is unnerving because these are things traditionally considered uniquely human.
Categories And Potential Of A.I.
There are two categories of A.I. Narrow A.I. can perform one narrowly specified task, or a small set of related tasks; Midjourney, stability.ai, and ChatGPT are all narrow A.I. General A.I., by contrast, is a system that can demonstrate intelligent behavior across a wide range of analytical tasks. Deep learning is what has made narrow A.I. so good, because it differs from conventional programs, which had to be taught by humans to perform a certain task: deep-learning programs are given very little instruction and a massive amount of data, and they essentially teach themselves. There is exciting potential for A.I. in the medical field. Many researchers are training A.I. to detect certain critical medical conditions earlier and more accurately than human doctors, and it is fascinating that A.I. can do tasks most humans couldn’t, like detecting Parkinson’s just by analyzing changes in a person’s voice. Researchers are also training A.I. to predict protein structures, an extremely time-consuming process that is now much easier with the help of A.I.
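The difference between conventional programming and deep learning that the episode describes can be illustrated with a toy sketch. This is purely illustrative and not from the episode; the spam-filter example, function names, and word lists are all invented:

```python
# Toy contrast (invented example): a hand-written rule vs. a program
# that derives its own rule from labeled data.

def hand_coded_spam_filter(message: str) -> bool:
    # Conventional programming: a human states the rule explicitly.
    return "free money" in message.lower()

def learn_spam_words(examples):
    # "Learning": keep the words that appear only in messages labeled spam.
    spam_words, ham_words = set(), set()
    for text, is_spam in examples:
        words = set(text.lower().split())
        (spam_words if is_spam else ham_words).update(words)
    return spam_words - ham_words

examples = [
    ("claim your free prize now", True),
    ("meeting moved to 3pm", False),
    ("free prize inside", True),
    ("lunch at noon", False),
]
learned = learn_spam_words(examples)

def learned_filter(message: str) -> bool:
    # This "rule" was never written by a human; it came from the data.
    return any(word in learned for word in message.lower().split())

print(learned_filter("you won a free prize"))  # flagged via learned words
```

The hand-coded filter only knows the rule a human wrote, while the learned filter’s rule comes entirely from its training examples, which is why real deep-learning systems improve as their data grows.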
Concerns And Potential Threats Of A.I.
There is no doubt that the potential of A.I. is massive, but there are concerns as well. The episode sheds light on how automation has long affected blue-collar jobs and how A.I. can now reach white-collar jobs too, since it can process data, write text, and even program. Although it is a possible threat to some jobs, it can also create new ones; the emerging picture is not simply that A.I. will replace humans at a given job but that humans will work with A.I., creating new opportunities.
A.I. has become a concern for artists too, as it generates images and digital art that are sometimes derived from a particular artist’s portfolio or a company’s web pages. Oliver mentions that Getty Images recently decided to sue the company behind stability.ai after its tool generated an image containing a distorted version of the Getty Images watermark.
There are many reasonable concerns regarding A.I.’s impact on employment, education, and even the arts, but a deeper one is the “black box” problem. A deep-learning program teaches itself from its data sets, developing its own internal rules to perform tasks beyond human understanding, but it does not expose the steps of its work. That creates a difficult scenario in which not even the data scientists and engineers who built the algorithm can comprehend or explain what exactly is happening inside it, or how it arrived at a specific outcome.
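The “black box” idea can be made concrete with a toy example (illustrative only; every number here is made up): even with all of a model’s parameters in plain view, nothing in them reads like an explanation of a particular decision.

```python
# Toy "black box" (all numbers invented): the weights are fully visible,
# yet they don't explain any individual decision.

def tiny_model(x, weights):
    # Two-layer network: every output mixes all the weights together,
    # so no single number accounts for the result.
    hidden = [
        max(0.0, sum(w * xi for w, xi in zip(row, x)))
        for row in weights["hidden"]
    ]
    return sum(w * h for w, h in zip(weights["out"], hidden))

weights = {  # we can inspect every parameter...
    "hidden": [[0.31, 1.7, 0.05], [0.9, 0.44, -1.2]],
    "out": [0.8, -0.27],
}

score = tiny_model([1.0, 0.5, -2.0], weights)
print(score)  # ...but the parameters don't say *why* the score is what it is
```

Real deep-learning models have millions or billions of such weights, which is why even their creators struggle to say how a specific output was reached.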
A.I. does have tremendous potential and can do great things, but if we are not very careful about how it is used, it can hurt the underprivileged and widen the gap between them and the rich.