
Artificial Intelligence: A Guide for Thinking Humans

Jun 14, 2024

Over the summer, many of us look for beach reading: something easy to read that offers a break from our academic research. Surprisingly, Melanie Mitchell’s Artificial Intelligence: A Guide for Thinking Humans (2019) is a great option. It is a book about artificial intelligence (AI), but it does not read like a computer science textbook. Mitchell writes in an accessible, interdisciplinary style, and she begins with a lighthearted story about her first visit to the Googleplex, Google’s headquarters. In this post, we’ll explore Mitchell’s caution about predictions of future AI achievements and her ethical concerns. Then we’ll consider how her questions could be infused into your classroom assignments.

AI Overestimation 

Mitchell chronicles the history of triumphs and failures within the field of artificial intelligence. As she traces AI’s development, she describes several models and how they operate. This historical approach shows how frequently computer scientists have overestimated AI’s potential. Given this pattern, Mitchell advises caution about predictions of future AI achievements. She argues that superintelligence remains elusive because machines lack the commonsense reasoning inherent in human cognition. These are valuable insights; however, AI has developed considerably since the book’s publication in 2019. Deep learning models have continued to evolve, and there have been significant breakthroughs in multimodal foundation models. Even so, challenges in reasoning and complex cognitive tasks persist.

Mitchell’s work could be applied in higher education to support informed, cautious use of AI. In the classroom, her perspective shows students the importance of critically evaluating AI claims and understanding the technology’s limitations. Her historical perspective and honest assessment of AI capabilities provide a nuanced understanding of AI’s evolution and current state. The example assignment “Debates with AI about AI Limitations” puts one of Mitchell’s primary concerns into practice.

Classroom Engagement: Debates with AI about AI Limitations

An assignment to explore the limitations of AI could use Mitchell’s “On Understanding” (Chapter 14) or a selection from it (pages 275–277). Based on their reading and additional research, ask students to create an argument for or against the possibility of superintelligence. Their next step is to engage with Copilot by clearly defining their stance and the reasons for it. Students can then prompt Copilot to assume the opposing position and provide counterpoints. After the debate, students can write a reflection essay that considers the points made by both sides and evaluates how they align with their initial stance.

AI Ethics  

As Mitchell traces the roots of AI, she raises ethical questions about how AI is used in certain scenarios. Her concerns dovetail with many reports of AI’s possible effects on privacy, copyright, racial bias, and the environment. In Chapter 7, she uses face recognition technology to illustrate ethical concerns about privacy and racial bias. Face recognition software uses AI algorithms to identify and store biometric information unique to an individual’s face. Social media sites such as Facebook leverage face recognition to tag users in photos and recommend friend connections, and the company has faced backlash for storing biometric data without users’ consent. Racial bias is another ethical concern with face recognition technology. When the American Civil Liberties Union tested Amazon’s Rekognition system, it showed a 21% error rate on photos of African Americans. This and other studies have shown that the technology is prone to misidentifying individuals with darker skin tones. The stakes of these errors are high when law enforcement or surveillance companies use the software. The ethical use of AI tools, from face recognition to advanced language models, is an important area for consideration.

These ethical concerns surface in headlines about AI deepfakes and misinformation. While such headlines may be extreme, they tap into genuine questions that could be explored in the classroom. Mitchell’s insights help demystify AI, showing that, when used responsibly, it can be a tool for innovation. This balanced approach is crucial for students navigating a world where AI’s influence is growing, ensuring they are prepared not only to critically evaluate AI-related claims but also to harness its potential for positive impact. These points could be explored in class through an assignment similar to “Case Studies on Societal Impact of AI.”

Classroom Engagement: Case Studies on Societal Impact of AI 

To create an assignment on the impact of AI on society, ask students to research current AI applications or failures. Encourage students to analyze an AI scenario, considering factors like the technology used, the problem it aimed to solve, and the stakeholders involved. Then ask students to write a case study detailing the background, the AI application or failure, the solution or fallout, and the results or lessons learned. To conclude the assignment, have students present their case studies to the class, fostering a discussion about the potential and risks of AI.

 

Melanie Mitchell’s “Artificial Intelligence: A Guide for Thinking Humans” serves not only as an engaging summer read but also as a useful text for understanding AI’s capabilities and limitations, making it a valuable resource for educators and students alike. Her interdisciplinary approach can engage a diverse student body, including those less interested in a purely technical exploration of AI. Professors can leverage her approach to instill constructive skepticism in students, enriching their understanding of AI’s potential and boundaries.

Reference

Mitchell, M. (2019). Artificial intelligence: A guide for thinking humans. Picador.  

Written By

Elizabeth E. Park, Instructional Designer at Penn State York

