Navigating higher education's AI moment will take more than faith in the technology itself.

A recent survey of campus chief technology officers by Inside Higher Ed reveals a mix of uncertainty and excitement regarding the potential impact of generative AI on campus operations.

While 46 percent of those surveyed are enthusiastic about AI's potential, almost two-thirds believe their institutions are not adequately prepared for its rise.

I recommend that these CTOs and other decision-makers read two recent books that delve into artificial intelligence and the influence of enterprise software on higher education institutions.

The books in question are Smart University: Student Surveillance in the Digital Age by Lindsay Weinberg, director of the Tech Justice Lab at the John Martinson Honors College of Purdue University, and AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference by Arvind Narayanan, a professor of computer science at Princeton, and Sayash Kapoor, a Ph.D. candidate in computer science at the same institution.

How can two books speak so directly to the current AI debate when ChatGPT was only released to the public in November 2022, less than two years ago?

Narayanan and Kapoor show that what we commonly call “artificial intelligence” has deep historical roots in computer science and beyond. Their book surveys the algorithmic methods used to predict or steer human behavior, translating technical concepts into terms a nonspecialist decision-maker can act on.

A significant portion of the book focuses on the limitations of algorithmic prediction, particularly in technologies commonly deployed by admissions offices and academic departments. The authors' skepticism about these technologies is plain from the book's title, AI Snake Oil.


Through numerous case studies, the book maps the limits of data analysis: data can yield valuable insight into past and present behavior, but it is far less reliable for predicting future events. The authors caution against overreliance on predictive algorithms, whose errors and biases can shape consequential decisions about individual students.

The chapter “Why AI Can’t Predict the Future” underscores how fragile algorithmic predictions can be, a lesson with particular weight for decision-makers in fields like college administration. Narayanan and Kapoor argue that many studies claiming predictive power for AI are methodologically unsound, often because information about the outcome being predicted leaks into the data used to build and evaluate the model, and because acting on a prediction can itself alter the behaviors and choices being predicted.
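To make the leakage problem concrete, here is a minimal, hypothetical sketch (my own illustration, not an example from the book; the feature names and numbers are invented) of how a single feature recorded after the outcome can make a weak model look nearly perfect:

```python
# Hypothetical illustration (not from the book): "leakage" means letting
# information about the outcome slip into the training data, which can
# make a predictive model look far more accurate than it really is.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2_000

# Simulated student records: two legitimate features with weak signal...
gpa = rng.normal(3.0, 0.5, n)
engagement = rng.normal(0.0, 1.0, n)
# ...and a noisy outcome (say, "retained next year") they barely explain.
retained = (0.3 * gpa + 0.2 * engagement + rng.normal(0.0, 1.5, n) > 0.9).astype(int)

# A leaky feature: recorded *after* the outcome (e.g., "registered for fall
# courses"), so it is essentially the label with 5 percent of entries flipped.
leaky = np.where(rng.random(n) < 0.05, 1 - retained, retained)

honest = np.column_stack([gpa, engagement])
leaked = np.column_stack([gpa, engagement, leaky])

for name, X in [("honest features", honest), ("with leaky feature", leaked)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, retained, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print(f"{name}: test accuracy = {accuracy_score(y_te, model.predict(X_te)):.2f}")
```

In this toy setup, the model trained on honest features hovers near chance, while the one given the leaky feature appears almost flawless: precisely the kind of inflated result the authors urge administrators to question before acting on a vendor's accuracy claims.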

Weinberg's Smart University, for its part, examines the harms of surveillance technologies in higher education, situating student tracking within the broader financialization of academia.

Weinberg argues that surveillance technology reinforces discriminatory practices, particularly in student recruitment and retention. The rise of technology-driven wellness applications compounds the problem, she contends, by implying that students whom the tools fail to help do not belong at the institution.

Both books raise concerns about how institutions may respond to the introduction of generative AI, warning against prioritizing efficiency over human autonomy and agency.

Together, these works make a strong case for embracing generative AI cautiously, so that human values and agency are not sacrificed in the pursuit of operational efficiency.