AI Chatbots in Higher Education

Date Added
2023-03-31
Duration
25:36
Filetype
MP3 (160 kbps 44100 Hz)
Size
30 MB

This episode is focused on the emergence of AI chatbots and how they relate to higher education. In this episode, we had the chance to interview three leading professors in the field of AI chatbots from Temple University: David Schuff, Jason Thatcher, and Konstantin Bauman.

David Schuff is a Research Professor and the Chair of the Department of Management Information Systems. He has published over 50 journal articles and book chapters on various topics, like the application of information visualization to decision support systems and the impact of user-generated content on organizations and society.

Jason Thatcher holds the Milton F. Stauffer Professorship in the Department of Management Information Systems at Fox. Dr. Thatcher's studies primarily revolve around individual decision-making, strategic alignment, and workforce issues related to the application of information technologies in organizations.

Konstantin Bauman joined Temple University back in 2018 after doing postdoctoral research at New York University. Bauman's research interests are primarily in the areas of technical information systems, with a focus on the fields of quantitative modeling, data science, and specifically developing novel machine learning methods for predicting customer preferences.

If you have any questions you would like to have asked, or if you would like to be a part of the podcast in a later episode, please email andrew.coletti@temple.edu.

Relevant Articles

  1. ChatGPT as an Assistive Technology
  2. Promises and Pitfalls of ChatGPT
  3. ChatGPT: Threat or Menace?
  4. Cheaters beware: ChatGPT maker releases AI detection tool
  5. ChatGPT Has Everyone Freaking Out About Cheating. It’s Not the First Time.

Audio Transcript

00:08–00:26Andrew ColettiHello and welcome to this episode of The T in Teaching. This episode is focused on the emergence of A.I. chat bots and how they relate to higher education. In this episode, I had the chance to interview three leading professors in the field of A.I. chat bots from Temple University: David Schuff, Jason Thatcher, and Konstantin Bauman.

00:26–00:46Andrew ColettiDavid Schuff is a research professor and the chair of the Department of Management Information Systems. He has published over 50 journal articles and book chapters on various topics, like the application of information visualization to decision support systems and the impact of user-generated content on organizations and society.

00:46–01:03Andrew ColettiJason Thatcher is the Milton F. Stauffer Professor in the Department of Management Information Systems at Fox. Dr. Thatcher's studies primarily revolve around individual decision making, strategic alignment, and workforce issues related to the application of information technologies in organizations.

01:03–01:27Andrew ColettiKonstantin Bauman joined Temple University back in 2018 after doing postdoctoral research at New York University. Bauman's research interests are primarily in areas of technical information systems, with a focus on the fields of quantitative modeling, data science, and specifically developing novel machine learning methods for predicting customer preferences. Thank you for listening, and please enjoy.

01:32–01:40Andrew ColettiAll right, let's start first with kind of defining A.I. chat bots and what they're being used for and how they're being developed. So, Konstantin, do you want to start us off? 

01:40–02:04Konstantin BaumanYeah, I would be happy to start. So a chatbot is a computer program designed to simulate conversation with human users. That's, like, a very broad definition, and there are lots of different applications that fall into this definition. So, there are multiple different types of chatbots, some of which were developed a long time ago. So, for example, menu- or button-based chatbots.

02:04–02:28Konstantin BaumanSo you come to the chatbot and it shows you several buttons. You click on a button and it gives you a response and shows you another set of buttons. You click on that and it gives you another. Hmm. Then there are linguistic-based or keyword-based chatbots. You type something, the chatbot searches for some keywords in your request, and then provides you with some prepared answer.
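The keyword-based pattern Bauman describes, scanning the user's message for known keywords and returning a prepared answer, fits in a few lines of Python. This is a minimal sketch; the keyword table and replies are invented for illustration:

```python
# Minimal keyword-based chatbot: look for known keywords anywhere in
# the user's message and return the matching prepared answer.
RESPONSES = {
    "hours": "We are open 9am-5pm, Monday through Friday.",
    "refund": "Refunds are processed within 5 business days.",
    "password": "Use the 'Forgot password' link on the login page.",
}
DEFAULT = "Sorry, I didn't understand. Could you rephrase?"

def reply(message: str) -> str:
    text = message.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in text:          # simple substring match
            return answer
    return DEFAULT                   # fallback when nothing matches

print(reply("How do I reset my password?"))
```

A menu- or button-based bot is the same idea with the keywords replaced by an explicit set of options shown to the user at each step.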

02:29–02:57Konstantin BaumanThen there might be, like, hybrid models, and there could be machine learning chatbots. So, I believe we came here today to discuss mostly the machine learning ones that were developed very recently based on recent advances in machine learning: the transformer architecture, which was developed by Google, and the generative pre-trained transformer, which is implemented in ChatGPT.

02:59–03:26Konstantin BaumanSo it actually generates very human-like text. Mm hmm. So that's, kind of, the definition of what a chatbot is. But then the question was why they were developed. Yeah, they were obviously developed to communicate with users, so every company can automate certain tasks that they do with user support. So, if a customer has any issue, they can call in and go through the standard set of steps.

03:27–03:41Konstantin BaumanSo instead of having a real person working in the call center, you can just have a bot, which will talk to the customer and solve their problems automatically and at no cost.

03:41–03:55Andrew ColettiMm hmm. Yeah. Great. Thank you. And you're right, they have kind of taken over the headlines. There really seems to be like every day a new development in the way that they're being used. I am wondering, what are you guys seeing in the recent developments in how they're being used and what they're being used for? Jason, you want to take that one? 

03:56–04:15Jason ThatcherSo, we're seeing them used more extensively in customer service applications. So where before we had a first tier of support, where you'd call in and the initial person answering could supposedly answer like 90% of your questions and then bump you to tier two, which would be the more sophisticated expert, tier one has been wiped out for people for the most part.

04:16–04:33Jason ThatcherOkay, it's now a bot. They can push you to a computer to have those questions answered somewhat dynamically. And with the modern machine learning chatbots, they can be a lot better, but they're still drawing largely on the same knowledge base, just more dynamic. But what it's doing is making the experts more valuable.

04:34–04:49Jason ThatcherSo that's the first place that we're seeing it. We're also starting to see conversations about how to integrate it into the workplace in more mundane tasks. So, you could have a chatbot that sits there, and you could say, how does this sound? And you can cut and paste your text into it, and it'll say, that sounds okay for X, Y, or Z.

04:49–05:16Jason ThatcherAnd if you've used a program like Grammarly, that's basically what it's doing. We're seeing programs like DeepL, which aren't chatbots but are generative machine learning techniques, providing similar advice. The next step, which people seem to be most afraid of, is that we're going to see chatbots or generative machine learning algorithms used to replace creative work or things like programming and whatnot.

05:16–05:35Jason ThatcherBut really it still comes back to the question of how you synthesize expertise. And so you may be able to put your question into the bot, and it gives you an answer, but you still need a human evaluator. Yeah. And so what it serves as in those circumstances is a kick start. So I can say, I need to write code to fix X problem.

05:36–05:56Jason ThatcherOkay. But you actually have to know enough to evaluate whether what's kicked out is good or not. Mm hmm. And so those are sort of the ways I see the uses: you have more structured problems, which is that first-tier support; more structured problems where you kind of know what you want, like with grammar; and then you have harder-to-answer questions, which really need a lot of human supervision of the bot.

05:56–05:57Jason ThatcherDoes that sound about right? 

05:57–05:57Konstantin BaumanYeah. 

05:58–06:15Andrew ColettiOne thing that you mentioned that I find really interesting is kind of creating thousands of lines of base code that might take a team of coders a little while to create, but the bot is able to create pretty quickly, which I think is really interesting. And I like how you kind of said it changes where the value is put in terms of expertise.

06:16–06:25Andrew ColettiBut also, again, there is such broad usage. David, can you tell me a little bit about kind of where, or what kind of, AI chatbots we're seeing in terms of the options available to the public?

06:26–06:50David SchuffWell, I mean, I think the thing that a lot of the attention is around is the generative ones. That's where you're seeing all the news, all of the news items. And really the big one is OpenAI's ChatGPT. Google has DeepMind, I think that's what's backing it, and they're coming out with something called Bard.

06:50–07:27David SchuffBut essentially they're the same thing. These are things that generate text dynamically, using a lot of text that's fed into them in the first place and making guesses about what the next word is going to be. And so I think what has captured everybody's imagination is not so much being able to do some of the pattern matching and some of the stuff that AI has previously been used a lot for, but where it can compose an essay for you or write code for you. I think when we're talking about those, that's the

07:27–07:34David Schuffmain type. And then the types within those are basically just sort of almost like brands of similar products. 

07:34–07:52Andrew ColettiSure. These generative programs that you spoke about, it seems like there are so many of them. But my question, I guess, is what really differentiates them. You were saying that where they draw their base of knowledge from is what's fed into them. Is that really the only thing that distinguishes generator A and generator B, or is there something else in the way they're actually structured that changes between them?

07:53–08:10David SchuffSo Konstantin might be able to talk a little more about the technical piece of it. But I mean, my understanding is that when you look at, you know, OpenAI versus DeepMind, it is the dataset. Okay. So, I think OpenAI uses pretty much the text on the web, where DeepMind uses a different dataset.

08:10–08:24David SchuffSo, they wind up looking a lot the same. Mm hmm. And the basic mechanism by which they work is very similar. And, you know, then it comes down to the specifics of the models and the dataset that they're fed.

08:24–08:29Andrew ColettiDo you want to speak a little bit more on the technical side then? 

08:29–09:03Konstantin BaumanI'd better speak about some features that are also there. So besides the ChatGPT and DeepMind that David mentioned, there are some other options, like ChatSonic, YouChat, Bing Chat, and also Google's Bard, Character.AI; there are lots of different ones. So ChatGPT, the one we are discussing, is based on a historical database. GPT-3 is trained on text amounting up to, like, 2021.

09:04–09:43Konstantin BaumanSo it doesn't know all the recent things that happened. Mm hmm. That's one of its limitations. Okay. And another thing is that it cannot search online. So, it's a very fixed model. It's already trained, so it can just return as the answer what it knows; it doesn't search for info. There are some other options which, when you ask a question, don't only generate; they search online to find the relevant pages, get the information from those pages, and then, using this information, they generate their response using the actual facts.

09:43–09:53Konstantin BaumanOkay. So, it's already implemented in ChatSonic, and YouChat also searches online for the facts.
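The search-then-generate behavior Bauman describes (find relevant pages, extract their facts, then compose the answer from them) is the pattern usually called retrieval-augmented generation. A minimal sketch, in which an invented two-document store stands in for web search, and the final step simply surfaces the retrieved context where a real system would prompt a language model:

```python
# Retrieval-augmented answering, reduced to its skeleton:
# 1) retrieve documents relevant to the question,
# 2) hand the retrieved text to the generation step as context.
DOCS = {
    "gpt": "GPT models are trained on text collected up to a cutoff date.",
    "bing": "Bing Chat retrieves live web pages before composing an answer.",
}

def retrieve(question: str, k: int = 1) -> list:
    q_words = set(question.lower().split())
    # Score each document by word overlap with the question.
    scored = sorted(
        DOCS.values(),
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer(question: str) -> str:
    context = " ".join(retrieve(question))
    # A real system would prompt an LLM with this context; here we
    # just return the question with the retrieved facts attached.
    return f"Q: {question}\nContext: {context}"

print(answer("Does Bing Chat search live pages?"))
```

The point of the pattern is that the model's fixed training cutoff stops mattering for facts the retrieval step can fetch fresh.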

09:54–10:19Andrew ColettiOkay. Thank you. Yeah. So it seems like there are some real differences between these different options. My question, I guess, going forward is, as the lifespan of these chat engines kind of goes on, are these differences going to become more or less significant as the data set that they're drawn from kind of expands? Is that going to remain a real difference as these chatbots continue to be developed and they buff out some of these issues?

10:19–10:36Jason ThatcherSo, you're going to see three things happen. Okay. The first is you're going to see the introduction of non-text data, and we're seeing that in GPT-3, GPT-4, GPT-5. So, we're able to take video, sound, and image data, and we're able to weave that in with the text data to make predictions. So that's one big shift that's coming.

10:36–11:04Jason ThatcherThe second big shift that's coming is we're going to see more customization of the chatbots. Okay. So, Microsoft will have its own proprietary data set that informs X, Y, or Z. So say you want to travel, and you may not remember when Bing was, like, the best for travel. But Microsoft will take its travel data set, apply its own algorithms, and use generative AI to predict travel patterns in ways that proprietary data sets like, say, Google Flights don't.

11:04–11:21Jason ThatcherAnd so, I think what we're going to see is very large carved-out datasets, where firms create value through using the bots by having unique data sets, which again have their own biases and problems, which we haven't talked about as much. And then the third piece we're going to see change a lot is the volume of data they can handle.

11:22–11:41Jason ThatcherOkay. Okay. So the difference between ChatGPT and GPT-4 and GPT-5 isn't just the kind of data, it's the amount of data and their ability to digest it in real time. Konstantin alluded to that when he talked about going out and integrating real-time data into these bots, into each of the generative machine learning algorithms.

11:41–12:04Jason ThatcherWe're not great at that. So, what you're going to see is customization, new forms of data, and then possibly more evolution on the algorithm side itself in the next three or four years. And that's going to create more diversity. While we may see similarity in underpinning principles, I think we're going to see more diversity in applications, because if you want to predict travel, right, you want travel-relevant information.

12:04–12:14Andrew ColettiYeah. You don't want all the information in one engine; it'll become better, or it'll have better sources, to be able to predict what is the best way to travel. Right?

12:14–12:27Jason ThatcherYeah. If you remember the old Netflix competition, where it was who could come up with the best Netflix algorithm, I can envision seeing a race amongst top manufacturers over who can come up with the most predictive, the best generative model; you know, an arms race going on around that.

12:27–12:51Jason ThatcherAnd that's where Google and Microsoft and competitors are chasing it.

12:27–12:51Andrew ColettiOne thing that really stood out to me is that the chatbots themselves will kind of specialize in what they do, which I find really interesting. And one of the big topics in terms of higher ed and chatbots is how some of these chatbots can create and synthesize an entire paper, a college-level essay, and every professor is freaking out that they're going to have 20 students submitting papers that nobody ever wrote.

12:51–13:03Andrew ColettiAre we seeing anything in the field around these kinds of issues where, you know, somebody could use that? Is there any way that the chatbot makers are trying to break down and stop people from being able to synthesize a whole college paper or anything like that?

13:04–13:37David SchuffI mean, so I'm not sure that there are efforts to stop it, but, you know, there are tools like ZeroGPT, and OpenAI has their own classifier tool that tells you whether or not something was generated by generative A.I. So, there are ways to detect it. I mean, there are plagiarism detectors like Turnitin. It's reasonable to assume that ZeroGPT or something like it will be built into those tools, so they will also say, well, you know, it wasn't necessarily plagiarized from the web, but AI created this.

13:38–14:07David SchuffOn the other hand, maybe we'll talk about this either now or a little later, but I think there's an overreaction to this idea of, you know, the importance of being able to generate, like, an essay using generative A.I. I think there's, you know, a similar kind of overreaction to many other sort of technological advances and how they're going to impact, you know, students in the classroom.

14:09–14:32David SchuffSo I think that, you know, long term, these tools are going to become part of the arsenal of things that students have to help them do better work, and we will get better at being able to give students instructions and guidance on what we value as far as what they generate themselves and what they have tools help them generate.

14:32–14:52Andrew ColettiThat's always the big debate when something new comes out in technology: is it overhyped, is it underhyped, and whatnot? But I think one of the interesting things you mentioned, though, is that it could become a tool. And I'm kind of wondering, how do you guys see generative, or non-generative, or other forms of A.I. chatbots becoming a tool in the classroom? And how can professors really kind of approach that if they want to?

14:52–15:18Konstantin BaumanI have prepared, like, a little long list to talk about what it brings to our lives. Sure, everyone is talking about cheating, but I consider ChatGPT a really good tool for students, and we need to find the right way to implement it in our classroom and to help students, like, get better with it. So, things that students can do with it: they can ask questions and get answers.

15:19–15:42Konstantin BaumanSo, like, previously, you know, my students, if they had some issues, they'd go to Google and they'd search for, let's say, an error in the code, and they'd get an answer from StackOverflow. Now, they can ask different types of questions and get the answer from ChatGPT. ChatGPT can generate summaries: if you have a long text and you just need to get the essence of the text.

15:42–16:18Konstantin BaumanIt can easily do that for you. It can provide, like, different explanations and examples for the concepts that we explain in the classroom. So, if students do not understand how we explain it, they can find some different words that explain the same thing from a different perspective, on a different level. It can provide feedback. So, I think David already mentioned that if you are working on something, you can submit it to ChatGPT for feedback before submitting it to the instructor. Then, ChatGPT can give you practice questions.

16:18–16:39Konstantin BaumanSo, you are preparing for the exam in a data analytics class, and you ask, like, ChatGPT to give you some questions on data analytics. It will give you a question, you will make an attempt to answer it, then ask what the correct answer was, and that's how you will practice in preparation for the exam. So, it would be like personalized tutoring for the student.
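Bauman's exam-practice workflow is essentially a prompt loop: ask the model for a question, let the student attempt it, then ask the model for the correct answer. A sketch of that loop; `ask_model` is a placeholder for whatever chat API is in use, and the prompt wording is invented for illustration:

```python
# Practice-question "personal tutor" loop, as described above.
# `ask_model` is a stand-in the caller supplies: any function that
# takes a prompt string and returns the model's reply.
def practice_prompt(topic: str) -> str:
    return (f"Write one exam-style practice question on {topic}. "
            "Do not reveal the answer yet.")

def grading_prompt(question: str, attempt: str) -> str:
    return (f"Question: {question}\n"
            f"Student answer: {attempt}\n"
            "State the correct answer and explain any mistakes.")

def study_session(topic: str, attempt_fn, ask_model) -> str:
    question = ask_model(practice_prompt(topic))         # tutor poses a question
    attempt = attempt_fn(question)                       # student answers it
    return ask_model(grading_prompt(question, attempt))  # tutor grades the attempt
```

Keeping the question and grading steps as separate prompts is what prevents the model from handing over the answer before the student has tried.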

16:39–17:01Andrew ColettiInteresting. Now, that's the first I've ever heard of something like it. You create basically a mock test, or mock questions at least. That seems excellent: a really fast way for students to get authentic and diversified questions. You're not taking the same questions that have been asked every year, and that seems really beneficial to students. Do you see that being used in that way, or is that just kind of, like, maybe in the future that's how it could be used?

17:02–17:20Konstantin BaumanSo, I really hope that it will be, like, in the near future. We already have this tool, it's developed, and it's already at a certain level of quality. Why not use it in our classroom? Sure. So, this is the first semester when we have ChatGPT, and we're kind of exploring. I already introduced ChatGPT to my students.

17:20–17:33Konstantin BaumanI allowed them to use it, and they, like, are exploring how they can use it. But maybe we will, like, change our policies during the summer. Sure. And force them to use it in the fall semester. Well, we'll discuss it.

17:34–17:35Andrew ColettiYeah. David, it seems like you have something. 

17:35–17:57David SchuffYeah, I agree with Konstantin that these tools are immediately usable. And I think that, you know, well, will it be practical to have it generate sample questions or something like that? I think it's the same issue for instructors as it is for students, which is that you also have to be a savvy information consumer.

17:57–18:30David SchuffSure. So if I generate a question for an exam using ChatGPT, I wouldn't just copy-paste it into an exam and go; I would verify it against what I know. I would make sure that the question makes sense. And that's a good skill, or a really important skill, for students to have: if they get an answer from ChatGPT, they shouldn't just be taking it at its word. They should be triangulating it against what they know, the same way that if they Google something or they go to StackOverflow for something, they shouldn't take that at face value.

18:30–18:47David SchuffSo it's another kind of reinforcement of this idea that maybe what we should be teaching more is how to interpret and evaluate information sources, and worry less about whether or not, you know, they're regurgitating a fact that they could just memorize anyway.

18:47–19:17Jason ThatcherAnd now people have settled down. Like, when ChatGPT first came out, it was like this was the end. I can't tell you how many people said, “Oh, this changes everything. All my case studies are ruined.” And we're starting to see faculty adapt, and what they're doing is they're starting to say, “Okay, we've talked through the principles. Here is a passage, and it may be generated by ChatGPT. Identify the problems, pick apart the output of the analysis,” teaching the critical thinking David's alluding to, and pick apart the output from it and demonstrate what you really know.

19:17–19:36Jason ThatcherOkay? And that kind of critical thinking, ChatGPT and tools like it don't do very well. Okay, so we're seeing it used in that way. The other way we're starting to see it used more is actually as an aid for writing and an aid for critical thinking. So, you're putting your statement in and you go, okay, what's wrong with it?

19:36–19:57Jason ThatcherAnd it'll show you or point out flaws in your logic. Well, that's an iterative discourse between you and the technology. I can see, in less than five years, any one of us sitting at our desk working on a project with our personal GPT-type bot that's been trained up to our preferences and our understanding. And we have a conversation with an algorithm that makes us better.

19:58–20:13Jason ThatcherAnd I really think that's what I'm hopeful we can start thinking about in terms of pedagogy: teaching kids how to use these tools to make themselves better. Because what we know from all the work that I've done in AI, and all the work I've read about, is that when humans and AI work together, we get the best products, not in isolation.

20:13–20:20Jason ThatcherAnd I think that's the next wave. That's really what I think we all want to get to, that you're doing your best and that's what you've described. 

20:20–20:39David SchuffI mean, even for generating code, you know, I'm not sure I even care if a student can use it to generate code, because they would have to be so specific in their instructions, at least at this point, to get exactly the code they want without modifying it. Yeah, that they might as well know how to solve the problem.

20:39–20:59David SchuffAnd if the code they generate needs to be modified, they still have to know how to modify it. Okay. So, one way or another, they have to have the higher-level knowledge. And, you know, remembering very simple things, like what format the code should be in: do we really care if they know that off the top of their head, or if they just get an aid to help them?

20:59–21:16Andrew ColettiWell, speaking about the pedagogy side, it sounds like as long as they're still able to apply it, and they're still able to practice it, that's what's still really important for you all. And I think that's great to hear. I want to talk about this: you all said you kind of implemented it in some way in your class, and you're probably some of the first professors to at least attempt to do that.

21:16–21:31Andrew ColettiAnd I kind of want to end on: do you have any takeaways, any mistakes that maybe you made, maybe where you'd want to introduce it, maybe having another preliminary skill kind of go along with it? You were saying being tech-savvy, or just kind of tech awareness: being on the Internet, not trusting everything.

21:31–21:54David SchuffWe are still kind of halfway through the semester, so I'm still trying to figure out how it's going. But what I'm seeing, sort of anecdotally, kind of going back to my code example, is that students tend to take the answer at face value. And so I've seen them take, like, a block of code, copy-paste it into their work, and try to execute it.

21:54–22:19David SchuffIt doesn't work, and they're not sure why: if this told them how to do something, why doesn't it work? Well, it's because it's not always right, and because what it's generating doesn't fit every situation. It's giving you almost, like, a sort of first pass at something. And I think that lesson can also carry through to other types of assignments where they have to write a paragraph about something, which is that the thing the generative AI creates is just a first pass.

22:20–22:45David SchuffAnd so, you know, what I did in the beginning of my classes is I just basically said, here are the caveats: it makes things up sometimes, and it doesn't attribute sources very well yet. And so, you have to double-check everything. But I didn't give them any other guidelines, because I wanted to see what they'd come up with. But now I think maybe integrating some of this, you know, "here's how you can interpret what you see from any source on the web, including this."

22:46–22:51David SchuffYou know that kind of like opening set of guidelines might help them use the tool more effectively. 

22:51–23:03Andrew ColettiSure. So maybe it might be a little too early, then, to ask you guys, you know, what's your big takeaway from teaching with it in class? But I will be interested to kind of follow up with you guys at least at the end of the semester, or even in another semester, after you've had a little bit more time to continue to implement it.

23:03–23:23Andrew ColettiYou'll have continued to practice this in a classroom environment and seen what it is. It really sounds like you're really open-ended with it in the way that they get to use it. And I think that's really great, because it allows students to kind of explore the way that they want to use it.

23:23–23:35Konstantin BaumanYeah, I think it's too early to make conclusions. This semester is the first one when ChatGPT is widely available. Okay, so we'll see a little bit later.

23:36–23:54Jason ThatcherSo, I used it in an executive MBA course with my experienced executives, and ChatGPT had just burst onto the scene. So, we reframed the course and did our case study around it. We had three different teams fill out three different sets of prescriptions for how to deploy it at work. And my takeaway from that experience was really interesting.

23:54–24:16Jason ThatcherThe first was, each group came up with something totally different. One came up with a structured use guide, one came up with why it wasn't ready yet for deployment in industry, and the third said, here's how we can use it to make hiring better. So, I was surprised by the diversity, which is, I think, what you're going to find at the end of the semester, and you're probably finding right now: they're finding really wicked interesting ways to use it.

24:16–24:42Jason ThatcherSo that was the first piece: we don't know what to expect from the students. The second piece is that we can't substitute for actually using it. Like, a lot of people are talking about ChatGPT and generative machine learning, but a lot of the people talking about it, complaining about it, haven't used it in their classroom. So, I would encourage all the faculty that listen to this to actually use it and do more than read your local blog that says this is evil.

24:42–25:05Jason ThatcherActually take some time to use it and talk to people in the real world about how to use it. Okay. And then the third piece: I would tell them to stay tuned, because ChatGPT is so different from GPT-4 or GPT-5; it's actually fairly primitive, and the great leap forward has occurred in three months. This space is moving really, really fast, so whatever we say today will not be relevant in August.

25:05–25:22Andrew ColettiThat's okay, they're going to have to refresh that. Hopefully we'll still be here, and we can maybe do another podcast, follow up, and see where really the future is. But I think it's a good time to wrap up this episode. So, thank you all for joining us. Thank you for sharing what you guys have learned and what you're kind of doing so far in the field.

25:22–25:37Andrew ColettiSo, thank you guys so much. Thank you.