Speaker 1: Is this thing on? Speaker 2: Hi. Speaker 3: Hello? Speaker 2: Is this thing on? Speaker 4: Hi. Is this thing on? Speaker 3: Is this thing on? Speaker 1: [inaudible 00:00:09] Is this thing on? Speaker 4: Is this thing on? Speaker 5: Hi, is this thing on? Speaker 1: Are we open? Mike Mills: Welcome to Open-Ed Mic, a podcast where voices from across the educational landscape share insights, stories, and strategies for transforming learning through openness. Whether you're new to open education or a seasoned practitioner, Open-Ed Mic invites you into the conversation. Let's hear who's joined us today, and we'll start off with Brittany. Brittany Dudek: Hi, everyone. Brittany Dudek, Director of Learning Resources at the Colorado Community College System Office, joining from a snowstorm here in Denver. Mike Mills: All right. Thanks for joining us, Brittany. Kevin? Kevin Corcoran: This is Kevin Corcoran from the University of Central Florida, here in Orlando. Mike Mills: We go from a snowstorm to Orlando. Let's go a little bit in between and find Zach. Zach Claybaugh: Hi, everybody. I'm Zach Claybaugh. I am at Dominican University in River Forest, Illinois, just outside of Chicago, and we are very wet and rainy today. Mike Mills: Thanks, Zach. And I'm Mike Mills, recently retired from 35 years in higher education and looking forward to the next chapter in my life. Today's guest is Dr. David Wiley. He is currently an Academic Affairs Fellow for AI in Education at Marshall University, has served as the Chief Academic Officer of Lumen Learning, and is a foundational figure in the open education movement. For over two decades, David has been at the forefront of rethinking how knowledge is shared, most notably through his creation of the Five R's of Openness, which has become the gold standard for defining OER. Much of his recent work focuses on the seismic shift occurring at the intersection of open education and generative AI. Through his influential blog, David has been challenging the community to look beyond just free textbooks and instead toward a future where AI and open licenses work together to create truly personalized, continuous learning. He is a thinker who consistently asks, "Now that the technology has changed, how do we better serve the learner?" It's such a distinct honor to have a true pioneer of the field join us. David, we welcome you to Open-Ed Mic. Dr. David Wiley: Thank you. I am super excited to be here. I really appreciate the invitation. Mike Mills: Let me kick it off and just ask you about your origin story: tell us how you first got started in the open space. Was there a specific aha moment where you realized that traditional copyright was a barrier to learning? Dr. David Wiley: Well, actually, my original aha moment had very little to do with copyright. I was here at Marshall University in the late '90s, working as the university's first webmaster, and I was developing a JavaScript calculator, which was the very cutting edge of technology at the time: a calculator in a webpage. And as I was working on it, I had this moment where the clouds parted and a shaft of sunlight came down and rested on me, and I thought: this digital calculator is very, very different from the physical calculators that I have known before.
In an elementary school classroom of 25 kids, there might be four calculators, and they're getting passed around, and you have to wait your turn to use the calculator. But it occurred to me that once you take this calculator and put it online, a million people can all use it at the same time. And that just seemed like magic to me. Now, tons of people had understood this principle for a long time; economists have technical vocabulary to explain it. I'm not claiming to have discovered anything new, just that that was my moment when I realized there's something really different here between digital and physical. And my first thought was, that means that if you can just find the funding, or whatever kind of support you need, to create something the first time, after that, everybody in the world ought to be able to use it for free. And the internet gives us this power to share that we've never had before. So I got very interested in that idea, but it was right around the time that Napster was happening. Some of the more mature people are nodding their heads about Napster. And Napster was teaching us that just because the internet made something technologically possible didn't make it legal. We had this incredible capability to share on a scale that we had never shared at before, and Napster was leveraging that capacity in ways that were clearly breaking the law. And so some work needed to be done to make this technical capability legally allowable as well. That's why, in the late 1990s, I started working on open licenses for content. But for me, the aha moment will always be sitting in my office at Marshall University, working on that calculator, realizing digital is different in some really special ways. Kevin Corcoran: That's awesome. Mike Mills: I had an acquaintance years ago who was charged with illegally downloading from Napster, so I know it all too well. Zach Claybaugh: I love that though... Brittany Dudek: [inaudible 00:06:28] Zach Claybaugh: What's that? Brittany Dudek: I said I've never heard of or met anyone who's ever been charged with that before, but it was always such a fear. It was so funny. Kevin Corcoran: No confessions here. Zach Claybaugh: I love how that story starts from that promise of the internet: that this is how we can really democratize information and open it up to people. And I love that it starts out with just a simple calculator program, really looking at what the potential wider benefit is. And I think, as it relates to the questions around copyright and how we can utilize these types of programs and products and learning materials, it helps to look at the present through that beginning lens. For those who might be new to the field, you are the architect of the five R's: Retain, Reuse, Revise, Remix, Redistribute. As we enter this era of generative AI, do the five R's still hold up, or are we entering the territory of something like a sixth R? Dr. David Wiley: Well, originally, for the first, gosh, seven years maybe, there were only four R's. You have to be really old to remember that there was a time when there were only four R's, and then a fifth R came along. Perhaps we'll need a sixth R at some point, but I don't see that coming anytime soon.
I think the five R's map very nicely onto what's happening with generative AI, and they actually give us a framework for imagining things we need to think about that wouldn't be immediately obvious. Set aside traditional OER, like open textbooks, lay a large language model down, put the five R's next to it, and try to think through what each of those R's means when the object you're engaging in those activities with is essentially a program, a language model. That has really been driving my thinking for the last 24 to 30 months: understanding what they mean in this new world. Because they still mean the same thing, but you apply them very differently in the context of a large language model than you do in the context of a Pressbooks site about introductory psychology. Kevin Corcoran: While you were starting there, I was thinking: remember back when there were only three television networks and all of a sudden a fourth one came along? Dr. David Wiley: We're just outing ourselves with our age here. Kevin Corcoran: Yeah. Well... Mike Mills: Those three networks were black and white at some point too. Kevin Corcoran: Yeah. Dr. David Wiley: Well, I was going to say, should we talk about the way that each of the five R's applies to large language models, or do we not want to go in that direction? Brittany Dudek: I would be very interested in that. Zach Claybaugh: Yeah, absolutely. Brittany Dudek: I think there's a correlation between open-ed and AI, and I would love to hear what your thoughts are on how those five R's really... Dr. David Wiley: Awesome. I've laid this out some in my blog, which Mike mentioned a little bit ago. When there were originally only four R's, the four R's were reuse, revise, remix, and redistribute. Retain wasn't in the original group. Retain only became a thing once the content industry's business model started shifting toward only providing you access to material but never letting you own any material. The drift toward things like Netflix and Spotify, where you pay every month and you get access to this huge collection, but the month you stop paying, you lose everything. Our library databases and library collections on campus have all drifted in this direction now as well, so it occurred to me at some point that you can't do any of the four R's unless you can get your own copy of the thing. Actually, I realized I'd just been assuming all along that I would be able to retain, so at some point I explicitly named retain as that fifth R. In the context of large language models, what would retain mean then? It means that I need to be able to download a full set of the model weights, the brains of the large language model, and whatever software I need to load those model weights and use them, to give them prompts and get answers back. I have to be able to make, own, control, and keep my own copy of those; that's retain in the context of the language model.
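To make retain and reuse concrete, here is a minimal sketch, assuming the Hugging Face transformers library; the model id is a placeholder for whatever openly licensed, open-weights model you choose, and a real run would need that model's files and enough memory to hold them:

```python
# Sketch of "retain" and "reuse" with an open-weights model.
# Assumes the Hugging Face `transformers` library; the model id below is a
# hypothetical placeholder for any openly licensed model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "example-org/open-model-7b"  # hypothetical open-weights model

# Retain: from_pretrained downloads and caches a full copy of the model
# weights on your own disk, a copy you own, control, and keep.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Reuse: load the weights, send a prompt, and get an answer back.
inputs = tokenizer("Explain the five R's of openness.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```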
Then revise and remix: I want to be able to make changes to the language model once I download it, because it's probably not going to behave pedagogically in the way that I want it to behave. It's probably been trained on a lot of customer service interactions where its goal is to make you happy and leave you feeling satisfied at the end of the experience, so that you're going to want to come back and use it again. That might lead it to do things like answer questions immediately rather than saying, "Well, why don't we talk about that a little more? Let's think about this a little more deeply," or some of the other pedagogical moves that you would make as a tutor or as an instructor. And the difference between revise and remix is subtle, but it's really important. Let's go back to the traditional OER context for a minute. When you're revising material, you're opening that material up, making changes to it, and then saving it again. Maybe you'll take a chapter and adjust the reading level up or down, or maybe you'll go into a chapter and create a new example to go in there, but you're working internally in the resource and you're adapting it, updating it, saving it. Whereas when you're remixing, you're bringing together two or more preexisting things and combining them. That distinction might seem like it's not very important, but it turns out to have a bunch of different implications, among which are license incompatibility issues. When you're revising, you never have to worry about license compatibility. But when you're remixing, you do. In the world of large language models, there are analogs to both revise and remix. For revise, I might take a language model in which all the weights are given at a crazy degree of precision, maybe 16 bits of precision each, and there might be billions and billions of these numbers in the model weights. One technique people use to make these open source models smaller, so that they can fit into the memory of your laptop and you can run them locally, or even run them on your phone, is a process where they decrease the precision of each of those numbers. Maybe they say, "I'm going to go from 16 bits of precision down to 8, or down to 4." Conceptually, it's like rounding, how you might round a number to the nearest hundredth or something like that. Obviously, if you have a billion numbers, each 16 bits long, and you cut all of them in half, that decreases the size of the model by half, and now maybe it will actually fit into the RAM on your laptop so that you can run it and use it. This is called quantizing. Obviously, as you decrease precision, the model gets a little dumber, because its brains are a little less precise than they were before. But that's an example of what it would mean to revise in the context of a large language model. I'm not bringing in any external data, I'm not adding some other preexisting thing to it, I'm just making changes to the model itself. Quantizing is an example of revising.
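As a toy illustration of that rounding trade-off, here is a short sketch that quantizes a stand-in set of model weights; real quantization schemes are more sophisticated than this simple per-tensor rounding, and the array here is random numbers rather than an actual model:

```python
# Toy illustration of quantizing: store the same "weights" at lower
# precision, trading a little accuracy for a lot less memory.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=1_000_000).astype(np.float32)  # stand-in weights

# Cut precision in half: 32-bit floats down to 16-bit floats.
weights_fp16 = weights.astype(np.float16)

# Cruder still: map every weight onto one of 256 integer levels (8-bit).
scale = np.abs(weights).max() / 127.0
weights_int8 = np.round(weights / scale).astype(np.int8)
restored = weights_int8.astype(np.float32) * scale  # dequantize to compare

print(f"{weights.nbytes / 1e6:.0f} MB at 32-bit")       # 4 MB
print(f"{weights_fp16.nbytes / 1e6:.0f} MB at 16-bit")  # 2 MB
print(f"{weights_int8.nbytes / 1e6:.0f} MB at 8-bit")   # 1 MB
print("mean rounding error at 8-bit:", np.abs(weights - restored).mean())
```

Half the bits means half the memory; the "gets a little dumber" part shows up as the rounding error the dequantized weights carry.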
Remixing would be where you go grab additional training data and use it to retrain the model, what we call fine-tuning the model. You might think of a model that comes off the assembly line as having an undergraduate degree in general studies, but you need it to be really smart on a specific topic, so you want to send it to graduate school. You want to take all that general knowledge it has and add a little more specificity to it, so you might take data on a certain topic or data that exhibits a certain behavior. Maybe you're going to grab 15,000 examples of a tutor interacting with a student, where those exchanges follow the best practices of highly effective tutoring, and you want to show those 15,000 examples to the model so that it can learn that behavior. Pulling that data in and doing some additional training, that would be an example of remixing, where you're taking the existing model weights and then adding in the impact of this other training data by doing a second training run. Revising and remixing both have their analogs here in the large language model space. And then reusing and redistributing, I think, are pretty easy. For redistribute, I need to have permission to take my quantized model or my fine-tuned model, put it back online, and let anybody else download it and do what they want with it. And permission to reuse, of course, is just that I need to be able to fire up the model, send prompts to it, interact with it, and talk to it. Each of those five R's has a very clear mapping over into this world of generative AI now. And people are having arguments about what it means for a large language model to be open. There's not quite a holy war going on about that, but there are some very vigorous conversations about what it might mean. For me, if I have permission to engage in the five R activities, that's open enough. That was a long, rambling answer, but I hope it shows how the five R's apply in this new world. Brittany Dudek: I think it's great. Yeah, it's really helpful to see how open education and AI, generative AI specifically, relate, how they're really shifting into one another, and how you don't need to divide the two. I really appreciate that. It's very helpful, I think, for people who may think there's such a delineation between the two. Kevin Corcoran: And David, you took us to school here on modeling and creating LLMs. For the folks in the open community who are more familiar with just using the publicly available tools, whether that's ChatGPT or Gemini or what have you, there's been so much conversation about using the tools to generate content and reduce time, whatever it is: revision time, creation time, what have you. Right now, some of these public models are being trained on commercial material or niche content or what have you, and I wonder whether you think the content owners at some point are going to put up walled gardens or commercialized LLMs, so that if you want the latest and greatest science data or medical data, you're going to have to subscribe to that model. I wonder what your thoughts on that are and how it might actually impact content generation. Dr. David Wiley: Yeah. The US Copyright Office has been pretty clear that material generated by large language models is not eligible for copyright protection. Whatever comes out of it goes straight into the public domain, meaning that everything generated... I'm going to say everything with a little asterisk. But everything generated by a large language model is an OER when it comes out. Intellectually, I understand the argument that says you shouldn't be able to train a large language model on copyrighted materials. But in my heart, I just don't believe it. When I came through graduate school, I was trained on copyrighted materials.
I read lots of research articles, lots of textbooks, lots of monographs and manuscripts, and all of them were copyrighted, and I learned from them. And then I turned around and started producing things based on what I had learned from my reading of those copyrighted materials. Should I not have been allowed to do that? I understand that I'm not allowed to reproduce, word for word, pages and pages from them. But if I read a bunch of stuff and assimilate it all... Again, think of this example of sending a large language model to graduate school: if you want it to be very smart, why would you restrict what it's able to read? And the US courts have been pretty clear, in all but one case, that training language models on copyrighted materials is a fair use. It's clearly transformative, right? What comes out the other side of the language model is very different from what goes in. The exception was a case involving the results of legal cases, where a company scraped some copyrighted... oh, I'm losing the technical vocabulary now, but the write-ups or summaries of legal cases, used those to fine-tune a model, and then created a product that's a direct competitor of the company whose data they scraped in order to train their model. And in that case, the court said, "This isn't transformative; all you've done is build a direct competitor. And so we won't allow training over these copyrighted materials in this case." But in a big case like the Anthropic case, which was just decided, the judge said Anthropic was perfectly within its rights to train on that huge corpus of copyrighted material it had, because that's such a transformative use, which is the core of what fair use is about. Now, they did pay a very large settlement because they illegally obtained all their copies of that copyrighted material, but there was not a problem with them training on the material. We all read from copyrighted materials, we all learn from them, and we all turn around and apply and write and create based on the things we learned from copyrighted materials. Anthropic got in trouble because they pirated terabytes of copyrighted material in order to do the training. Those are two separate issues, though, right? Kevin Corcoran: Well, I'm going to channel something I think you've said at one point: Isaac Newton created calculus, and his estate isn't getting a kickback for every calculus textbook that's sold. Dr. David Wiley: We all stand on the shoulders of giants. Mike Mills: I want to piggyback on that focus on legality in the courts and the confusion it causes for faculty. I've been around; you all have seen faculty just standing at a copier, copying pages and pages of copyrighted material. And this whole focus now on generative AI has faculty members continuously worried. Pick your social media platform or your trade journal, and you see it every day. In this world of open plus AI, how does the role of a faculty member shift? Does it shift from content creator to AI curator to prompt engineer? What's the role of the faculty member in all of this? Dr. David Wiley: Well, I think in some ways we've lost the thread over time.
Because in my mind, the faculty member's primary role is to build relationships of care and trust with their students, to encourage and support them so that when students feel like they don't belong there, or when something traumatic happens outside of school and a student wonders whether they're going to be able to make it to the end of the term, or when they just run into a really difficult concept that they're having a hard time wrapping their head around, that relationship is strong enough to pull those students through whatever obstacles get in their way. Help them over, under, around, whatever that might be. That seems to me like it's really the core function, and we've lost that as we have bigified our classes to 50, 100, 250, 500 students in a giant lecture section. When I look at a new piece of educational technology, my first question is always: how can I use this to shed some of this other stuff that has been put on me, the administrative and other requirements I have, so I can get back to that core reason that I'm here? I do think AI provides a lot of opportunities for us to shed or get rid of, or probably delegate is the best word, some work that's not super high value add that faculty do, so that they can do more of the things that only they can do. We've said this about every wave of technology that's come before: it takes a little more off of our desks, and in theory, it's going to create more free time for us to have these deeply personal, meaningful, inspiring relationships that we want to have with students. I do think AI does that too, and I think there are more opportunities for that with AI than there have been with any technology we've seen before. Mike Mills: I think it also demonstrates that the role of faculty is continuously changing, and I think that constant change is a struggle for a lot of individuals. Kevin Corcoran: David, I'd love to have you put your idea hat on just for a second. Piggybacking on this conversation about faculty as creators, much of the conversation, at least what I've seen in the open community, has really been: let's use AI to create open content that's basically in a textbook format. Which, yes, speeds time to market and improves the process, but it falls so short of what generative AI could do. I'm curious, if you're talking to faculty about what they can do, obviously they can use it for updates and creation and images and illustrations, what would you encourage faculty to actually explore with AI? And then I'm going to come back to another question tied to that. Dr. David Wiley: This is not a metaphor that I came up with, I heard it somewhere else, but I love the metaphor of driving a fighter jet down a highway. It has wheels and it can move forward, it can be propelled horizontally; that's a thing you can do with a fighter jet. You can get in an airplane and drive it along the road from one airport to another, but it's just such a complete missing of the point of the capability of an airplane. You should be getting off the ground. It seems to me that the idea of using generative AI to create things that look like traditional textbooks or traditional open textbooks is very much a horseless carriage kind of way of thinking about it. It's, "Here's this new thing.
How can I think about it in terms of the things I already know and the things I already understand, as I'm trying to start to wrap my head around this new thing that is alien and foreign to me?" I get that that's why we started there. But with my idea hat on: in our graduate programs, at least the two programs I was in at Utah State and BYU, I had the great honor and pleasure of teaching with Andy Gibbons as a colleague, one of the smartest designers I've ever known or ever known to exist. And Andy used to talk about how he was worried that we were trying to reduce the instructional design process to the process of turning a crank, that if you just turn the crank, then content comes out the other side like ground beef or something. There's so much more to it than that. The idea that we would use all this power of generative AI to just turn that crank and have static words come out the other side of it like ground beef is disappointing. AI can do so many conversational and interactive things. I think a lot of people's first mental metaphor for generative AI is like Google. You put in a search or a query, you hit enter, it gives you something back, and you're done. I send a query, it gives me an answer, and we're finished. Helping people understand that you can dialogue and converse and argue and debate and have these ongoing interactions is really the first step to helping people see more of what's possible with generative AI. Something I've been arguing for for a couple of years: instead of using generative AI to create a textbook chapter that's going to be static... I feel like I'm bagging on Pressbooks. I don't mean to bag on Pressbooks, but you're going to create some static content that you're going to upload into Pressbooks and say, "Look, there, now I'm done." Instead, invest that same amount of time in developing prompts that give the model sufficient context to understand what domain you're talking about, give it access to the information it needs to give accurate answers without hallucinating or lying or being wrong, and then prompt it with some very direct pedagogical strategies. Give that prompt to the learner and say, "Here, learner, take this prompt, copy and paste it into ChatGPT or whatever, hit enter, and enjoy the next 20 minutes of your life, when you're going to have this in-depth conversation where you can ask as many follow-up questions as you want. There will be no eye rolling. There will be no disappointment or frustration. Anything you're curious about, you can pull on that thread as many times as you want." Just the idea that you have access to this resource that will answer, that will ask, that will correct. Here's what it would be like: the way that we're using generative AI right now is like hiring a personal tutor for someone and then asking that tutor to write out notes and hand the notes to the student, and that would be the end of the interaction. That's the way we're using it. But really, think about any kind of conversation you can have, any kind of conversation. A job interview, that's a conversation. A tough meeting with somebody who reports to you in your organization and hasn't been doing their job as well as they need to, that's a conversation. Any kind of conversation you can have, you can prompt AI to help you practice that conversation so that you can be more effective at it.
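As one sketch of what "writing prompts instead of content" might look like, here is a hypothetical, openly licensable tutoring prompt an instructor could hand to students; the topic, notes, and pedagogical rules are all placeholders to adapt, not a prescription:

```python
# Hypothetical example of "write prompts, not content": a reusable,
# openly licensable tutoring prompt a student copies and pastes into
# whatever chat model they use. Every detail below is a placeholder
# for the instructor to fill in.
TUTOR_PROMPT = """\
You are a patient tutor helping an undergraduate study {topic}.
Ground your answers in the instructor-provided notes below, and say
"I'm not sure" rather than guessing beyond them:

{course_notes}

Rules for the whole conversation:
- Start by asking me what I already know about {topic}.
- Ask one question at a time and wait for my answer.
- When I'm wrong, don't just give the answer; ask a follow-up question
  that helps me notice the mistake myself.
- Every few exchanges, briefly summarize what I've gotten right so far.
"""

print(TUTOR_PROMPT.format(
    topic="operant conditioning",
    course_notes="(paste the openly licensed module text here)",
))
```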
Thinking about AI as a conversational partner, whether it's a learning conversation, a job interview, or whatever it might be; getting over the idea that it's Google 2.0, where you type in your query and you get an even better answer and then you're done; being able to think about it in this dialogic kind of way, that's what I've been advocating for. I've been advocating for faculty to stop writing content and start writing prompts, to openly license those prompts, and to give the prompts to students to use so that they can have these interactive conversations where they're getting immediate feedback about their misunderstandings, where they're being corrected before a misunderstanding grows and develops and hardens in their brains and they have to unlearn it later on, which is really hard. Just making that mental switch is a lot of what I've been advocating for for the last two and a half, three years. Kevin Corcoran: In that same vein, if you're able to convince somebody to expand their use, or maybe in that same conversation, how do you allay the fears Hollywood is going through right now with Tilly Norwood, this completely synthetic actress built from other bodies of work, and the fear from faculty that, "Oh my God, there's going to be a bot that's going to replace me"? We already have high-performing tutor bots that are going to have an impact. How do you say, "Okay, stop using this as Google 2.0," but at the same time, "you're not going to end up in a Tilly Norwood environment"? Dr. David Wiley: Well, in every one of those conversations, I go back to the late 1990s. For those of you who are old, you might remember David Noble, who wrote a series of articles about digital diploma mills: how the internet would be the end of education as we know it, how, because online classes existed, universities would be able to fire all faculty and the online courses would just teach themselves. Literally every concern that people have about AI right now, we had about online learning 30 years ago. And in some ways, they were right. And in some ways, they were wrong. And we grew and adapted and evolved and we figured it out. Now, is AI more powerful than the internet? Absolutely. Are we going to have to find different ways to optimize what we're doing, in terms of the amount of value that the faculty member is adding to the learning experience? Because again, for me, the core thing the faculty member is doing is building that relationship, establishing the trust. The content delivery, the question answering, those to me are really secondary. They're important, but if you don't have that foundation of relationship, it's really tough. In the kinds of settings where today we're already ignoring any possibility of that kind of relationship building, because I've got 500 students in a giant lecture and literally all the faculty member is doing is standing at the front giving a very compelling performance to their audience of 500 intro-to-chemistry students, I think there is some likelihood of displacement or replacement or something happening there. But I don't think we should have ever been teaching that way in the first place. Some of you will remember when online learning was...
Late '90s, more so in the early 2000s, a lot of the objections that were raised to online learning were about its quality: that it didn't do this, and it didn't do that, and it wasn't high quality, for some definition of high quality. But if you turned around... I'm leaning closer and whispering. If you turned around and looked at what was happening in the classrooms, none of it was high quality. It was a total double standard, right? This quality argument was being made in opposition to online just because people were scared of online. And the same people who made those arguments, if you went back and looked at what was happening in their classes, their classes would never have stood up to the objections they were raising about online. There will absolutely be more double-standarding around AI and its use in education. For example, is AI occasionally wrong in the answers it provides? Yes, it is. Are humans occasionally wrong in the things that they say? Yes, they are. Are world-leading experts occasionally wrong? Yeah, they are. And what about way-overburdened adjuncts who are driving across town to teach at three different campuses, teaching a course they just got told two days ago they were going to teach, exhausted and frazzled: is there some chance they might say something wrong from time to time? Yes, there is. We cannot double-standard AI the way we tried to double-standard online learning before, pretending that face-to-face is perfection without flaw and rejecting the possibilities of all the things AI might do to support student learning because there will be some problems there from time to time. Kevin Corcoran: It makes me think of the Clayton Christensen quote about distance learning, when he was in a large lecture hall: he was in the last row, eyes closed, and the instructor didn't know whether he was there or not. Dr. David Wiley: Everything past the third row is distance learning. Kevin Corcoran: Yes. Mike Mills: And I'm not sure we've gotten beyond that double standard, but that's a conversation for a whole other episode. Dr. David Wiley: It's true. Zach Claybaugh: Just to jump in here real quick. Switching from faculty to students, though, you have touched on the student experience a little bit already. Ultimately, we do this for the students. With this existing intersection of open and AI right now, and how that evolves, keeping that idea hat on, what do you think a typical homework assignment might look like for a student five years from now? Dr. David Wiley: Oh, gosh. I hope it's very, very conversational. Over the last decade or two, our assessment design has drifted toward selected-response items like multiple choice and true/false and things like that, because they're just so easy to grade; they can be automatically graded. I think you will see homework that can be automatically graded and yet is completely open-ended. In fact, it's one of the things I'm currently interested in. I don't know if y'all are familiar with the idea of stealth assessment from the games-in-education literature. But the idea is, if you're playing a game and the game is supposed to be helping you develop some skill, it's ridiculous that at the end of playing the game, you would then turn around and take a multiple choice test to see if you had developed the skills you were supposed to develop during gameplay. Because, assuming this is a computer game...
As you've been playing the game, you've been leaving traces of competence the whole time you've been playing. By the time you get to the end of the game, I already know what you can do and what you can't do, where your strengths are and where your weaknesses are. And you never even realized you were being assessed, hence the label stealth assessment. I think we'll get students engaged in more conversational forms of study. Instead of reading the textbook, reading, reading, reading, getting to the end of the first module, doing a couple of formative assessments, reading the second module, doing a couple more formative assessments, I think five years from now, I'm not reading, I'm conversing, maybe by voice, maybe by typing, depending on whether I'm on the bus or at home. By the end of that conversation, there's no need for me to come back around and do a standalone assessment, because the assessment has been happening the whole time I've been talking with the model about what I'm studying at the moment. I think that's what it looks like not very long from now. Now, will faculty be able to wrap their heads around that? Can you imagine an instructor assigning this kind of conversational experience instead of a textbook? You know the old joke about how many university employees it takes to change a light bulb? Kevin Corcoran: Change? Dr. David Wiley: That's the answer. Kevin Corcoran: What if faculty who don't want to change just relabel their assessments as boss level? Dr. David Wiley: That's great. I'll just have my agentic browser go defeat the boss for me. Mike Mills: Well, David, as we begin to wrap up and look to the future of open learning, to the future of open and AI, can you share a story or an example that serves as a North Star as to why we should be optimistic about the future? Dr. David Wiley: We should be optimistic about the future for the same reasons we should be pessimistic about the future. People are endlessly creative, they have a wide range of incentives, and there are people out there in the world who just want to do good, who want to be a blessing to others, who want to help lift people out of generational poverty, who want to do this kind of work and work related to it, and they will use these tools in powerful ways that will help this work get done to degrees that it has not been getting done before. Now, will other people with other motivations, who want to accomplish nefarious things, also accomplish more than they've accomplished in the past? Yes, they will. And there's a problem there for us to sort out. But your question was just about why we should be optimistic. We should be optimistic because, generally speaking, people are awesome: they want to do what's right, they want to help each other, and they especially want to help students. I think right now we're in a period of imagination constraint; we can't picture all the things this new technology is going to let us do. It's like 2000 or 2001: you can't imagine YouTube or Amazon. You can't imagine the things that are coming that are all enabled by this technology you are already holding in your hands. But it just takes time for people to think, "You know what? I could do this." And somebody sees that and says, "Holy cow, I could do that." And by seeing examples, it blossoms out into this universe of possibilities. We're this tiny little bud right now; we haven't seen a lot of examples.
Everything we're doing now, we're doing in the language of what we've done before. We're using AI just to author the same kind of textbooks we had before, just to author them faster and less expensively. But eventually we'll get beyond that, and I think that's why we can be optimistic. Mike Mills: That is super, and that really, I think, puts a bow on this whole conversation. I'm looking forward to seeing where we are in three, five, seven years; as you said, I don't even think we can envision where we're going to be. I really appreciate your focus on that, and I appreciate your deep thought. I know you've given me a lot to think about. And now, as we do at the end of every episode, and I think it's very appropriate today, we close with a bad open-ed joke, and those open-ed jokes are AI generated. So let me turn to Zach, who is going to give us today's joke. Zach Claybaugh: All right, everybody. Here we go. Why did the open textbook go to comedy school? To work on its open-mic material. Mike Mills: Oh, my gosh. Zach Claybaugh: You're welcome, everybody. Mike Mills: Really bad, Zach. Really bad. Zach Claybaugh: It is. It is. Mike Mills: Really bad. And with that, the mic is closed.