Artificial Intelligence: Humanity’s Next Giant Leap

Daniel Perodin, Staff Writer

Throughout history, new technology has always been subject to both praise and scrutiny. Optimists promote the merits of each new technology, explaining its potential to improve the lives of the public by facilitating tasks, increasing efficiency and driving economic growth. Conversely, technological advances tend to alarm the public, leaving people wondering whether the technology will render them jobless or less safe. Artificial intelligence has captured the imagination of the world for some time. Optimists explain that AI is simply another tool that humans can use, whereas skeptics question whether a technology that so often takes people out of the equation can benefit humanity.

For years, AI was something the average person only encountered in the media. Movies like “Terminator,” “The Matrix” and “WarGames” depict AI overpowering its human creators and using its capabilities to harm humanity. Those fears remain present in the public discourse; Elon Musk, for example, has warned of what he sees as the dangers of AI.

On the other hand, figures like Bill Gates do not believe AI is cause for concern. Optimists like Gates say that AI can help businesses increase profits, help scientists make new discoveries, help regulators prevent fraud and help doctors diagnose patients. We could soon be living in a world where AI drives our cars, plans our days and automates the majority of repetitive day-to-day tasks.

This prospect raises the question of regulation. Other world-changing technologies, like automobiles, airplanes, computers and the internet, are regulated to ensure the safety of their users. AI’s recent emergence and rapid evolution mean that governments have not kept up with a legal framework to protect the public. That leaves moral and legal questions surrounding AI unresolved: for example, who should take responsibility when a self-driving car crashes, the company that produced the software or the owner of the vehicle? There is also the issue of bias. One might think that, because of its purely analytical nature, AI would make more objective decisions than humans. At the end of the day, though, humans program AI and can embed their biases in the software. This has led to controversies, such as instances in which AI contributed to hiring discrimination. In fact, the federal government has warned employers about the discriminatory tendencies of AI.

The launch of OpenAI’s ChatGPT in November 2022 changed how people interact with AI. It is no longer something only people working in the technology industry can use; now the average person can experiment with it. People use ChatGPT to code video games, write novels and even ask for advice. People are captivated by its versatility, and the numbers reflect that: ChatGPT reached one million users in just five days.

However, people have begun to abuse this newfound technology. Students, for example, have used AI to do their homework for them. This has caused alarm in the education sector and forced educators to reconsider the future of learning. If students can cheat so easily, how will teachers come up with new ways to push students to think critically?

The Greek philosopher Plato opposed the invention of writing. He believed that students would lose their ability to memorize information and would instead rely on written text to recall details. Contrary to Plato’s prediction, writing has helped humanity. People can still remember things, and writing them down helps them, just as any tool should. Thousands of years later, AI faces the same criticism. Like every new technology before it, AI must be treated as a tool, and like any tool, it should be handled with care. The next few years will most likely see a gradual increase in AI’s integration into our lives. How we use it is up to us, so let us use it as a tool to improve humanity.