S5 Ep91: Teaching With AI, Part 1: Navigating the New Era of Human Learning
Andy (00:01)
Welcome to the podcast. I’m Andy Hibel, the chief operating officer and one of the co-founders of HigherEdJobs.
Kelly (00:07)
And I’m Kelly Cherwin, the director of editorial strategy. Today we’re joined by Dr. José Antonio Bowen and Dr. Edward Watson, two national leaders helping campuses navigate rapid changes in teaching and learning where AI is involved. Dr. Bowen is a former university president, dean, and award-winning educator known for driving innovation in the classroom. Dr. Watson is the vice president for digital innovation at AAC&U, the American Association of Colleges and Universities, and a leading voice on AI, digital equity, and the future of undergraduate learning.
Andy (00:38)
And together, they’ve released the second edition of their book, Teaching with AI: A Practical Guide to a New Era of Human Learning, which spans everything from ethics and privacy to custom bots, individualized feedback, and the realities of how students are already using AI.
Kelly (00:54)
José, Eddie, thank you so much for joining us today. We’re excited for this conversation.
José (00:58)
We are, too.
Eddie (00:59)
Yeah, thanks for having us.
Kelly (01:00)
So before we start discussing the topics in your book, let’s take a big-picture look at the AI climate. Your first edition of the book was released in 2024, and now we’re entering 2026. From your vantage point, how have faculty attitudes toward teaching with AI evolved? Are you seeing different things, less resistance, new forms of excitement, all of the above? What are your thoughts on this?
Eddie (01:25)
Well, it’s an interesting landscape. I mean, you know, when generative AI first hit the landscape just over three years ago, there was sort of a semester of discovery and then a year of just challenges around academic integrity. But then the world of work began to adopt it fairly rapidly. So institutions started to focus on how might we prepare our students for life beyond graduation.
AI literacy became a thing, and there’s a lot of curricular reform that’s taking place right now. But José and I did a survey in one of our workshops with faculty from a lot of different institutions in the fall semester. And while institutions might be leaning toward AI literacy adoption, faculty are still very much concerned about academic integrity. And AAC&U just fielded a survey in October and November of 2025.
And we found that actually faculty have, broadly speaking, some pretty negative views of the AI landscape. I mean, specifically concerns about their students post-graduation, concerns about their career, concerns about teaching and learning practice in general. So, you know, a little bit of a pulse that we’ve taken: There’s a lot of anxiety and concern from faculty these days.
José (02:46)
And I’d just add that grief is another word, right? And we’ve come out of an amazing period, right? We had the pandemic, the demographic cliff has finally hit, we’ve got politics, public sentiment, and then AI. So people are tired. They’re burnt out. I think, quite reasonably, it’s just one more thing. But we’ve also heard faculty say, you know, OK, I would learn it if I just had time. Or even the faculty who are willing to say, let’s have a conversation, don’t have the time. So I think that people are all over the place.
I also think there’s about a third of people who are really racing ahead. I mean, there are certainly, every campus we’ve talked to has a group of people who are experimenting in a classroom and doing interesting things. So I think faculty are pretty divided. And I think one of the reasons for that, and for the grief and the anxiety, is also that this is existential, right? This is about identity. It’s about the things that I love because we’re, right, as faculty, it’s not just a job. It’s the thing that we love, the subject, the content. And that you’re telling me that my expertise in the thing I love might not be essential, or that students might not care enough to learn about it, that’s a hard thing to accept.
Andy (04:03)
Wow. I feel like I should have worn black today to represent the grieving that maybe we all are suffering from. I think that that’s an interesting place for us to go to the next step. If faculty are going in that direction and feeling that way, one of the parts I really like about the book is you both have spent time helping colleges adapt to major changes in teaching and learning.
In the introduction of the book, you make a powerful statement that AI is about to change our relationship with thinking the same way the internet changed our relationship with knowledge. So faculty may be grieving now that the thinking might not all be their own. Some faculty, particularly us Gen Xers, remember when both the thinking and the knowledge belonged to faculty. On the knowledge part, the internet did change our relationship.
Can you further elaborate for our listeners and explain what shifts should we expect in how we learn, create, and teach, and how our thinking is going to evolve?
José (05:12)
So I’ll start with the knowledge piece, which is from an old book, Teaching Naked. That’s a metaphor. And the idea was that the internet was new at that point. And so my argument was that before the internet, knowledge was relatively scarce, but relatively reliable. And after the internet, knowledge became abundant, but much less reliable.
Right? If before the internet, you found the encyclopedia, you probably had pretty good information. It wasn’t perfect, but there were no cat videos in the encyclopedia. And then you ended up with this world where most of what’s coming at you is cat videos and garbage. So that just made critical thinking more important. Right? Now I have to actually judge: Is this a good source? And before, it was like, well, if I hand you an encyclopedia, it’s like, OK, I could just draw on a textbook.
And so AI is going to increase that distinction. So I think AI is going to clearly change work. There’s not much faculty can do about that, but it is going to change our relationship with thinking, and that’s the place we have to pay attention. This is what we teach. We teach critical thinking. And so if this new technology can be creative or help us write a draft or give us feedback or give us another perspective or challenge our own thinking, if it’s really a partner in thinking, that’s something we really have to pay attention to.
Because if we start using AI for creativity, right, that will make us all more creative, or gives us the potential to be all more creative. But what does that do to the actual human creativity? Will we just cognitively offload and now we won’t be able to think at all without the AI, right? “Give me 500 ideas for new products,” or new ways to start this paragraph, or new ways to write this essay. AI is a great tool for that. But if I use AI for that indiscriminately, it’s probably going to reduce my own ability to do it independently. So AI can be a great tool, but it also could change and maybe damage human thinking significantly. And so that’s a place where we really have to pay attention.
Kelly (07:29)
So to build on that, José, that was fantastic. How do institutions prepare their students for careers that are going to utilize AI and, like you said, change the way we work? How can they do that so we aren’t just using it to, say, draft 500 ideas, but actually partnering with it?
Eddie (07:50)
Well, I think from my perspective, it’s a bit of a, I mean, it feels like a big amorphous question, right? Like, how does higher ed respond? But in truth, you know, professional schools have often responded to a shift from their accreditors: here’s a new learning outcome, so you need to figure out how to approach that. And, you know, there are processes for doing that. You do a curriculum map. You discern where there might be places where you’re already touching on that particular learning outcome.
Do we need an additional course? How might we revise the curriculum further? I think those well-worn paths for preparing students for changes in what’s required post-graduation, we should follow those as well regarding preparing students with AI literacy skills.
So, you know, what is AI literacy? Do we have places in the curriculum where it makes really good sense to build that into the curriculum? Should it be a gen ed learning outcome? Should it be an institutional learning outcome?
Making these decisions, bringing faculty together to work with the administration to figure out where within the curriculum this might reside and how we might fund preparation for building those courses or advising those courses, all of that good work. I mean, we have processes for handling situations exactly like this that we should indeed incorporate as we move forward to respond to this AI challenge.
José (09:13)
And as Eddie likes to say, we’re good at curriculum, we’re just not fast. And so the problem is that AI is changing quickly. So we, you know, as faculty, we like to wait until we have all the information and we’ve tested and we’ve done research. We’re probably not going to be able to do that here.
So on the one hand, we’ve got the sort of curricular challenge. It’s a new learning outcome and we need to make sure students are ready for the workforce, but we also have to pay attention, like we like to do, to how this is changing humanity and the ways that humans think. And so we’re going to have to do both of those things simultaneously. And we don’t like to do that. We like to do one and then the other.
We want to make sure we have all the research, everything done, but industry is not going to let us wait 20 years to figure out how does AI change human thinking. So we’re going to have to employ nuance and do both at once. We’re going to have to add AI literacy to the curriculum as a learning outcome that doesn’t necessarily mean a course, and we’re going to have to study what happens. So that means we’re going to have to experiment.
And so I think the biggest thing for a campus is to create a culture where we know we’re going to fail a lot. Not every experiment is going to work. We’re going to create a bot for our students and try this, and the students won’t like it, or they’ll offload something. But we still have to try. We’re going to have to see, because education needs to be entirely rebuilt.
A lot of the things that we were doing probably weren’t working before, but AI has exposed the flaws in the things that we were doing.
Andy (10:51)
In the second edition of the book that just recently came out, the discussion of AI literacy was expanded to include ethics, privacy, bias, environmental considerations, and mindsets. I think, José, you may have hit some of these, but maybe there are some others. What are campuses still underestimating about responsible AI literacy?
Eddie (11:13)
Well, I think there are a few things when we think about it as a learning outcome. You know, agility is a real challenge, right? You build a curriculum, you think about a learning outcome, you make changes to gen ed, and any listener that’s been part of a gen ed reform effort knows, well, those are fun efforts, right? We take care of those so quickly. I mean, many have probably participated in failed gen ed reform efforts. Changing the curriculum is not easy. It’s not quick.
But then we have a learning outcome that is not written in stone. I mean, I think the notion of what is AI literacy today hopefully will have a relationship to what AI literacy might look like in, say, two years, but there’s gonna be some significant changes. So the notion that we define what AI literacy might be in our view for our particular campus or for our particular discipline, and then we build curriculum around it, then we start teaching these courses.
Well, how do we make sure that the AI literacy that we previously defined, with these components, these sub-dimensions, actually matches what students are going to need when they graduate in six months or in three years or whatever it might be? The rate of change associated with AI literacy at this particular moment is significant. And so that is something that we need to incorporate into our thinking as we think about how we move forward, and then how we prepare faculty to be agile.
So if I’m teaching a course where AI literacy has been tagged or assigned as a learning outcome, what resources do I need? What support structures do I need to teach it in the fall and then think about revising my vision of what AI literacy is to teach it again in the spring and then think about revising my definition of AI literacy as we move into the following semester?
So that sense of agility within our curriculum, I don’t know that we’ve needed that level of agility and sort of a partnership with change, if you will. I don’t know that we’ve ever had that need with any learning outcome ever before.
José (13:26)
Yeah, and I’ll give you an example. Normally, right, we do definitions first. So every university wants to do a policy, right? We have to define AI literacy and what it’s going to be, and then we figure out where we’re going to put it. Well, AI literacy is a changing thing. So our suggestion is that, and students want this, by the way, students really want to be told, they want help in understanding how their use of AI could be better, what the dangers are. So I don’t think everybody has to teach the same thing.
I actually don’t think you need to decide what AI literacy is, but we need all faculty to be going, all right, so how are you using AI? How could I help you? And in that dialogue with students, because students are concerned about what AI might do to their learning.
On a practical level, though, I do think that there needs to be something in the first year. Students are coming from a variety of different high schools. Some high schools are banning it. Some high schools are using it. And then probably in the senior capstone or the senior seminar, again, every discipline, because AI use is going to be different in different disciplines. But that means that you’ve got someone, whoever’s teaching that course, right?
You can’t send students out into the world not having encountered this new technology and understanding, well, this might affect what you’re asked to do the day after graduation when you actually get a job. It would be like sending accountants into the world without Excel or a calculator, right? I mean, those are not things you do at the beginning, but they are things that you do at the end.
Kelly (14:54)
I want to build on that regarding the student perspective. In your book, you noted that a lot of students see AI as normal and not as cheating. You know, you’ve also argued that all assignments are now AI assignments. So how do institutions rethink integrity and assignment design, and what does high-quality human work look like now?
José (15:16)
So I think we’re probably gonna have to look really broadly at this, because what we think about as authorship is changing as we speak, right? The idea of whose intellectual property can I use, right? When dictionaries were first introduced, they were other people’s intellectual property. And so if I use a dictionary or a thesaurus to improve my writing, I’m building on other people’s intellectual property, and that’s now accepted as, well, that’s OK. It’s still your work. You just used somebody else’s thesaurus.
And so now we have this other writing partner. So the old definitions of plagiarism don’t apply. In fact, AI use is not plagiarism. It might be misuse or fraudulent or misrepresentation or overuse, but plagiarism requires taking from a person. So borrowing from AI is not plagiarism. It might be something else. So that means we have to rethink cheating, academic integrity, assignments.
And our general advice is that raising standards to things that you can now only do with AI is closer to what students will do in the workplace. And I think it anticipates where authorship is going in the future. We all assume that the books that we read used a spell checker. We all assume that the books we read had an editor who made contributions to that.
So what will we assume in the future? Will we assume that all writing had some AI support? What kind of AI support? So for the moment, I’d be transparent, right? We should focus on transparency. How did you use AI? And more importantly, how did it help or hinder your learning?
But I think we have to rethink all of those notions of academic integrity and cheating and all those assignments and rethink what is it that students really need to be able to do.
And I think the answer to that is that in the workplace, students will be asked to do work that’s better than AI, but that’s also better than they could do themselves, right? It’s better than human work and it’s better than AI work. It’s combination work, the same way that a spell checker makes my spelling better and enables me to focus more of my creative energy on other things.
And so the book has a lot of assignments that ask students to do things that they might have thought impossible by themselves, right? Not just come up with products, but iterate on them, run 500 focus groups, try different packaging sizes and different prices, right? Things that you could now use AI to get feedback on to make the assignment harder, but also to simply do more in the same amount of time.
But let’s be clear, that’s a radical rethink of what we ask students to do and how we are structured in higher ed.
Eddie (18:12)
Yeah, there are interesting differences in perception regarding even what cheating is today. I mean, if you give an example like this to faculty and students and administrators, we all agree this is cheating: I ask AI to write something for me, I copy and paste it, I put my name on it, and I turn that in. That’s my whole writing process. Everyone agrees that’s cheating.
But what if I go to AI and I take the writing prompt that I received from you and I ask AI to provide me with a detailed outline? I then write every word based on that outline. Is that cheating? Or if I write a paper and I ask AI for feedback on how I could improve it based on the grading rubric, and I take that feedback and make another revision and then turn that in, is that cheating?

To be honest, I think a lot of what drives it is the actual learning outcome. What might absolutely be cheating in one class, given what you’re trying to teach your students, might actually be a best practice in another class with a different learning outcome, even though you’re using AI in exactly the same way in both of those examples. A good example of this might be the notion of learning how to write versus using writing to learn as an instructional tool. If you’re learning how to write, it might be, well, all of our students in this course need to write every word, because that’s what I’m teaching them. I want them to have practice writing. But there might be another class where you’re developing ideas, and so asking AI for ideas like a brainstorming partner, having it provide that outline, working with it a little, and then getting feedback was really about idea development, and that might structure something very interesting that helps you achieve those other learning outcomes.

So really it’s the notion of backward design. What is the learning outcome? What is the assessment that, if a student completes it, shows the professor whether they’ve achieved that learning outcome? And then what do you need them to do beforehand to develop the skills to be able to do that assessment? I mean, that’s instructional design 101, but that’s often the driver, and we’re really seeing this in the real world as well.
I mean, even in the field of medicine, which you would think, well, medicine is one field, so probably everyone agrees. But take note-taking aids for nurses doing rounds, you know, tools that listen to conversations. In some circles it’s being suggested that maybe we shouldn’t use those, because nurses aren’t paying enough attention to all of the different patients they have on their floor. While note-taking aids for primary care physicians, listening to conversations, might actually be freeing them up to be more engaged with the one person they have in the room with them. So, one medical field, two different views.

One of the things that José and I often say is, what we call cheating in higher education, business is calling progress. So how do we navigate a landscape where what we may see as cheating might actually be a skill that students need to possess post-graduation? So really, there are going to be differences from one class to the next. I don’t think there’s a one-size-fits-all policy. And I think it’s up to the individual faculty member to decide whether AI should be embraced broadly for a particular assignment, sit somewhere in the gray area in between, or be absolutely not allowed in a class because it would be counter to achievement of the learning outcomes. So it’s a complex space, and really, I think the learning outcome and the professor’s own preference will drive the choices in an individual class regarding the use of AI.
Mike (18:58)
Thank you for joining us for the first half of our discussion about AI with Eddie and José. In the spirit of experimentation, but without cognitively offloading my creativity, I partnered with AI to structure this recording into two episodes. And I had it help me write this wrap-up. When asked about ideas for music, the AI suggested a genre I was completely unfamiliar with: glitch hop.
We hope this collaboration results in something better than a human or bot could produce alone. But let us know if it sounds human enough or if it could still use some help by emailing us at podcast@HigherEdJobs.com
or messaging us on X @HigherEdCareers. Thanks for listening. We look forward to talking soon.