New Article Published: “Opening the Black Box”

AI is rapidly reshaping higher education—but not all AI tools are created equal. My latest article explores why explainability must be at the heart of how colleges and universities adopt and govern these technologies.


I’m excited to share that my latest article, “Opening the Black Box: Helping Faculty and Staff Understand an AI-Driven World”, is now published in University Business.

In this piece, I explore the impact of AI on higher education, from recruitment to research to the classroom. Too often, institutions treat AI as a monolithic technology, overlooking the important differences between simple, explainable tools and opaque “black box” systems whose decision-making processes are hard, or impossible, to understand, even for their creators.

The article makes the case for explainability as a guiding principle for AI adoption in education. When institutions delegate human tasks to AI, they have a responsibility to ensure transparency and oversight, especially in areas that affect students’ lives. I also discuss the legal landscape, including the EU’s AI Act and emerging U.S. regulations, and offer a call for higher education leaders to choose AI tools with intentionality, focusing on ethics, accountability, and alignment with educational goals.

This is an important conversation about the role of AI in education: not just as a technological tool, but as an ethical commitment.

Read the full article here