For years, the idea of advanced robots and machines completing common tasks has tantalized authors, filmmakers, and the general public. Works set in an idealized future, such as the 1960s animated sitcom The Jetsons, were immensely popular, and for good reason. Who wouldn’t want to live in a world where robots can vacuum your house, control your lights, or even start your car? The prevalence of these hypothetical technologies made everyone dream of a future where machines could handle the minute, everyday tasks so many of us abhor.
Today, that future has undoubtedly arrived. With the right hardware, everyday people can tell their machines to turn on the lights, vacuum the house, and start the car. Many of the technological wonders imagined in The Jetsons are now reality. Simply put, we are living in that idealized future, thanks to artificial intelligence (AI). While AI has been in the works for decades, it has truly flourished in the past ten years, and nearly every new piece of technology now uses it in some form. So, as we enter a future likely to be dominated by artificial intelligence, it’s important to understand the history, uses, drawbacks, and future of this fast-moving technology.
The History of AI
In the 1940s, researchers studying neurons, the brain cells that communicate using electrical signals, realized that if the brain could make decisions using electricity, perhaps a machine could too. Following this revelation, programmers and scientists worked fervently to explore the possibility of artificial intelligence. In 1950, famed computer scientist Alan Turing proposed what he called an “imitation game” to test machine intelligence[1]. In this game, there are three participants: an interrogator who asks questions, a human who answers them, and a computer that answers them as well; the interrogator must decide which respondent is the machine. The test didn’t really determine whether machines could think like humans; rather, it measured whether a machine’s answers could be indistinguishable from a human’s. While Turing’s imitation game was only a thought experiment at the end of the day, it was a promising first step for the theory of artificial intelligence.
For decades after Turing, artificial intelligence remained mostly experimental. Beginning in the 1990s, however, the exponential growth of computer processing power allowed AI to be built into genuinely useful systems. In 1997, IBM’s chess computer Deep Blue became the first machine to defeat a reigning world chess champion in a match[2]. Then, in 2011, IBM’s question-answering program Watson handily beat two of Jeopardy!’s most successful champions in a televised match of the famous quiz show[3]. While these systems only played games, they showed the public just how powerful AI could be.
However, alongside these innocuous programs, another form of AI was being developed as well: facial recognition software. Although far removed from quiz shows and chess, facial recognition relies on related AI techniques to analyze faces in images. These systems analyze an image of a face and then attempt to match it to a photo in an existing database. As the technology matured, vendors began selling it nationwide, particularly to law enforcement agencies[4]. Ultimately, while crowd-pleasing programs like Watson and Deep Blue enjoyed national press, AI with far more serious real-world consequences was being developed quietly in the background.
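To make that matching step a bit more concrete, here is a minimal, hypothetical sketch in Python. It assumes face images have already been converted into fixed-length numerical embeddings by some model (a step not shown here), and it simply finds the closest entry in a small database using cosine similarity. The names, vectors, and threshold are invented for illustration and do not come from any real vendor’s system.

```python
from typing import Dict, Optional
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two embedding vectors, in the range [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_face(probe: np.ndarray, database: Dict[str, np.ndarray],
               threshold: float = 0.8) -> Optional[str]:
    """Return the name of the most similar database entry, or None if
    nothing clears the (illustrative) similarity threshold."""
    best_name, best_score = None, -1.0
    for name, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

# Toy database of made-up 4-dimensional "embeddings".
database = {
    "person_a": np.array([0.9, 0.1, 0.3, 0.2]),
    "person_b": np.array([0.1, 0.8, 0.5, 0.4]),
}
probe = np.array([0.85, 0.15, 0.35, 0.25])
print(match_face(probe, database))  # prints "person_a"
```

Real systems generate the embeddings with deep neural networks and search databases containing millions of faces, but the matching logic is conceptually this simple, which is exactly why the quality and diversity of the underlying data matter so much.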
The State of AI Today
In the business sector today, AI is used in nearly every facet of operations. Marketing and customer acquisition is one area where it is especially prevalent: from automated emails to AI-powered chatbots, artificial intelligence can help advertise a business to new and existing customers. In human resources, AI can sift through applications quickly, letting managers spend less time searching for qualified candidates. In manufacturing, AI can track inventory and even anticipate demand[5]. Today, the most successful businesses use artificial intelligence to maximize efficiency, processing information at a scale and speed no human team can match. It’s no wonder AI has been embraced so extensively by the business world.
While efficiency-maximizing AI has been welcomed by businesses around the world, more controversial forms of AI have been embraced as well. In recent years, facial recognition programs have grown more popular and are now used by law enforcement, security companies, and even social media sites. Additionally, many consumer AI products are little more than additional channels for data collection. Amazon’s Echo, for example, was one of the first AI assistants and quickly became popular after its release. Amazon reportedly priced the Echo so low that it sold the device at a loss[6], yet the strategy was sound because of the vast amount of valuable data each device sends back. Ultimately, AI has become extremely popular with both businesses and consumers, driven in no small part by companies’ insatiable appetite for our data.
Concerns of AI and Facial Recognition
Every AI program runs on data, and if the underlying datasets are biased, the entire program can be biased as well. When it comes to bias, few applications are more heavily criticized than facial recognition. Researchers found significant racial and gender bias in facial recognition programs sold by IBM, Amazon, and Microsoft[7]. For example, the software had an error rate of less than 1% for lighter-skinned men but up to 35% for darker-skinned women[7], largely because the datasets these programs are trained on simply aren’t diverse enough. Unfortunately, these errors can have devastating consequences: false positives from facial recognition software have led to wrongful arrests, disproportionately harming racial minorities[8]. Although facial recognition has been helpful to some degree, its unequal consequences cannot be ignored.
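To make the disparity concrete, here is a small illustrative Python sketch of the kind of audit researchers perform: given labeled test results, it computes the error rate separately for each demographic group rather than one overall number. The data and group counts below are invented; only the overall pattern of near-zero errors for one group and very high errors for another echoes the published findings.

```python
from collections import defaultdict

def error_rates_by_group(results):
    """results: list of (group, predicted_correctly) pairs.
    Returns the error rate for each demographic group."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, correct in results:
        totals[group] += 1
        if not correct:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Invented evaluation data, shaped to echo the disparity described above.
results = (
    [("lighter-skinned men", True)] * 199 + [("lighter-skinned men", False)] * 1
    + [("darker-skinned women", True)] * 130 + [("darker-skinned women", False)] * 70
)
for group, rate in error_rates_by_group(results).items():
    print(f"{group}: {rate:.1%} error rate")
# lighter-skinned men: 0.5% error rate
# darker-skinned women: 35.0% error rate
```

Notice that an overall accuracy figure for this toy dataset would look respectable, roughly 82%, which is exactly how aggregate metrics can hide the kind of group-level harm described above.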
While it’s true that AI programs can offer extraordinary benefits, those benefits don’t come free. In reality, these programs are further eroding what little privacy we have left. We’re willingly buying listening devices from a massive company with a less-than-stellar record on privacy, all in exchange for convenience. Some people may be willing to make that trade, but it doesn’t have to be this way. The incredible benefits of AI don’t have to come bundled with a massive violation of privacy.
So, what’s the solution for better AI? First, datasets need greater diversity, particularly for facial recognition. To be truly effective, AI needs to be representative of the general population; because of biased datasets, today’s programs help some people while actively harming others. Additionally, legislation ought to be passed that reins in Big Tech’s reliance on data collection. Many people already refuse to try the most popular AI programs because of how much data they harvest. If Big Tech were less obsessed with our data, we could enjoy the benefits of AI without trading away our information.
About AXEL
At AXEL, we know the best businesses have valued privacy for decades. Now, in a world full of cybercrime and data collection, digital privacy is more important than ever before. That’s why we created AXEL Go. AXEL Go uses military-grade encryption, blockchain technology, and decentralized servers to ensure it’s the most secure file transfer software on the market. Whether you need to transfer large files or send files online, AXEL Go is the best cloud storage solution. If you’re ready to try the most secure file-sharing app for PC and mobile devices, download AXEL Go for free here.
Footnotes
[1] “Turing Test.” Encyclopædia Britannica. Encyclopædia Britannica, inc. Accessed March 16, 2022. https://www.britannica.com/technology/Turing-test
[2] Saletan, William. “The Triumphant Teamwork of Humans and Computers.” Slate Magazine. Slate, May 11, 2007. https://slate.com/technology/2007/05/the-triumphant-teamwork-of-humans-and-computers.html
[3] Markoff, John. “Computer Wins on ‘Jeopardy!’: Trivial, It’s Not.” The New York Times. The New York Times, February 16, 2011. https://www.nytimes.com/2011/02/17/science/17jeopardy-watson.html
[4] Valentino-Devries, Jennifer. “How the Police Use Facial Recognition, and Where It Falls Short.” The New York Times. The New York Times, January 12, 2020. https://www.nytimes.com/2020/01/12/technology/facial-recognition-police.html
[5] Marr, Bernard. “10 Business Functions That Are Ready to Use Artificial Intelligence.” Forbes. Forbes Magazine, December 10, 2021. https://www.forbes.com/sites/bernardmarr/2020/03/30/10-business-functions-that-are-ready-to-use-artificial-intelligence/?sh=2df649c43068
[6] Smith, Rich. “Did Amazon Lose $100 Million Selling Its Most Popular Item?” The Motley Fool. The Motley Fool, January 8, 2018. https://www.fool.com/investing/2018/01/08/did-amazon-lose-100-million-selling-its-most-popul.aspx
[7] Buolamwini, Joy. “Artificial Intelligence Has a Racial and Gender Bias Problem.” Time. Time, February 7, 2019. https://time.com/5520558/artificial-intelligence-racial-gender-bias/
[8] Najibi, Alex. “Racial Discrimination in Face Recognition Technology.” Science in the News. Harvard University, October 26, 2020. https://sitn.hms.harvard.edu/flash/2020/racial-discrimination-in-face-recognition-technology/