A couple of weeks back, I ran a series of focus group conversations with undergraduate students from my institution as part of an IRB-approved study of undergraduate use of and attitudes towards Generative AI.
This post is a quick-and-dirty first pass draft of what I think I am seeing in the data, after an initial read. I have a lot more work to do to fully review what’s in the data and make sense of it by putting it into conversation with secondary sources. Whatever I draft for submission for formal publication will likely not totally agree with what I say here, and whatever comes out the other side of peer review will almost certainly agree less. Still, in the spirit of sharing data and writing to learn, I wanted to put this draft together.
The data discussed here were collected from five focus group sessions held over the course of two days. The mean number of participants per group was four, with the smallest group having only two participants and the largest having six. Focus group sessions were about 40 minutes long. Participants were recruited from across campus via posters, faculty announcements, and my students (who were co-researchers) recruiting peers. Participants were rewarded with donuts and the chance to win a gift card to a local coffee shop.
While the sample size of this set of focus groups was somewhat small, the themes discussed by respondents were still revealing, and suggest broad trends in how students are thinking about generative AI that should be investigated in further research. In addition, I was struck by the depth of thought some students had put into these issues as we spoke. Many of them had complex, nuanced thoughts about AI-related issues, and in some cases encounters with AI had led them to do meaningful reflection about education and writing.
During the focus groups, students were asked about their uses of AI in educational spaces, their encounters with it in social spaces, and how they anticipated AI impacting their working lives after graduation. The findings below map onto those sections of our conversations.
(Note: I will use they/them pronouns for all student respondents, to better protect the anonymity of the small set of focus group participants.)
Students are ambivalent about AI
This may not seem to be much of a finding, but I think it's a useful perspective that complicates the emerging narrative of young people as “AI Natives,” a narrative that threatens to reproduce the same mistakes as the “digital natives” discourse of the early 21st century. While my sample size of students was small, and the makeup of that sample may have been skewed towards students who were either AI enthusiasts or skeptics (posters offered students the chance to have their “voice heard” if they found AI “exciting” or “frightening”), the conversations we had showed students to hold a variety of positions on AI use. Just as significant, even fairly enthusiastic students tended to have some concerns or uncertainty about AI.
A small subset of students was self-consciously critical of AI, somewhat (but not entirely) uninterested in engaging with it, and concerned about its impact on creativity. Asked to give a single word that sprang to mind when they thought of Generative AI, one student responded “scary,” another “problematic,” and a third “nonsense.” One concerned student expressed their exasperation with omnipresent AI and concern about its creative impact like this: “I think we’re gonna see an influx of people trying to use A.I. in every field I’ve noticed that even like on job application sites they’re just like, hi I’m the site A.I. and I’m like you don’t need to be here […] I’m worried about like the influx especially in literature we’re gonna see of people using A.I. to just kind of like slap dash together a book and then try to sell it to like make a few dollars.” However, in keeping with the theme of ambivalence, this same student then mentioned that they found AI useful as a transcription tool, helping them use voice transcription to produce written documents while avoiding typing-induced hand cramps, and could see the broader positive implications of this kind of transcription technology for accessibility.
However, these AI “conscientious objectors” were a small minority of the respondents. Many more reported being interested in AI as a tool to accomplish a variety of tasks in and out of the classroom. As I discuss in detail below, some of these uses they saw as legitimate, and others they understood as cheating. However, even these interested students had some ambivalent feelings about the technology. One reported using AI as a study aid, and sometimes as a writing aid, but later confessed that they were concerned about their classmates’ use of AI:
Respondent: but I go to classes with, like, people who are going to be nurses, doctors. They all use AI for everything, and that’s a little bit concerning.
Professor Famiglietti: Okay. Why is that concerning to you?
Respondent: Because those are our future healthcare professionals.
Professor Famiglietti: And what about the AI use is concerning to you?
Respondent: That they’re not learning.
Another student, who reported using AI as part of their tabletop gaming hobby (one of the more common hobby uses of AI reported) and who was open to other uses, still expressed concern about larger commercial applications of generative AI: “I know that only five other people on the planet are going to see [my hobby uses of AI] and I’m not going to make any money from it but I’ve also seen even like the company that makes [tabletop games] like using AI generated art and stuff like that for profit and that I have an issue with.”
Students are getting mixed and confusing messages from faculty about AI
Students found little in the way of faculty guidance to help them resolve their ambivalent feelings about AI. Many reported that faculty advice seemed confused, inconsistent, or hastily put together. One student quipped, “multiple times I’ve had teachers bring up the syllabus and like the bolded section you can tell they added in like three days ago that says, ‘don’t use AI to write your papers you can use it to brainstorm maybe I’ll allow it.'” Others reported that some teachers were “pro-AI” but that most were “very anti-AI, and they were like, ‘If you use this at all, I will know.'” In fact, a blanket ban on AI use seemed to be the most common form of faculty advice students reported being given. Their responses to such a ban, as described further below, ranged from fear to disregard.
Given how many professors engaged in a blanket ban of AI, it’s perhaps unsurprising that those professors who engaged with AI assignments also sometimes confused students. One student, asked why they made a face while recalling an assignment that asked them to make an outline with ChatGPT, replied, “I don’t know, I wouldn’t expect a teacher to encourage that.” Another student, a vocal AI skeptic, was dubious about an assignment they had been given that asked them to use AI, saying “I personally didn’t like it, I just feel like it was following the trends of let’s make this interesting for students since AI is a big thing right now.”
Other students were satisfied with the quality of some guidance they had gotten from professors, but sometimes this guidance contained inaccuracies. One student had been told that they could use ChatGPT to summarize a movie just by asking it for a summary of the movie by title (in fact, this is the kind of prompt ChatGPT will frequently hallucinate in response to). Another had been accurately told that ChatGPT would not always correctly summarize published fiction, but inaccurately told that the reason for this was that “there’s certain copyrights [ChatGPT] can’t break.” (In fact, ChatGPT was trained on significant amounts of copyrighted text.)
While the most frequent direct advice students got from professors about ChatGPT use was a simple “don’t,” in other cases they reported being told that using AI tools as “brainstorming,” research, or pre-writing aids was legitimate (it should be noted, however, that they did not report being given much guidance about how to use them in such a mode). In some cases, they reported being given the option to use ChatGPT as a tool to prepare for open-book exams, though not given much guidance about what sorts of questions the tool does or does not answer well or how to adapt the tool’s information into their own answers:
Respondent: I have had two teachers […] who are very pro-AI, but both in the sense of not copying, but using it for ideas. […] I had a class, and [the professor] gave us [a set of] questions that would be on the test, and the test was [a subset] of those questions. And [the professor] very verbally encouraged us to use ChatGPT, and AI, and the internet, together.
Professor Famiglietti: Like, ask it those test questions, and see what it told you, as a way of preparing for the test?
Respondent: Yeah, but she said, ’cause we were allowed to. It was open note, so we could write down all the answers, but the only thing she said about it was: your answer has to be unique, and it can’t be copy and paste. It can’t be, what’s the word?
Professor Famiglietti: Like, just verbatim?
Respondent: Yeah.
Students have developed their own guidelines to distinguish legitimate AI use from cheating
In the last example above, the student reported ultimately being satisfied with the open-book test they had been able to use ChatGPT to prepare for, since they “enjoyed” the class it was assigned in and the learning they were doing for that class (which was in their major) was ultimately intrinsically important to them. As they put it, “I knew that I actually have to know the stuff,” and so they worked to synthesize ChatGPT-sourced ideas with ideas from their textbook and other sources to arrive at answers that they reported felt like their own.
Leaving aside for a moment how accurate the ChatGPT-sourced information in that example might have been (depending on how broad the questions were, it could have been fine, or quite bad), the larger takeaway here is that the student had their own sense of what made for a legitimate ChatGPT use in an educational environment. Namely, a use that enabled student learning, rather than replacing it, was a legitimate one. This was echoed by another student, who described their own use of ChatGPT to explain chemistry problems they had been assigned (an AI use they had developed on their own) this way: “I think that you have to learn from it. Like, for example, my chemistry problems. If I could plug in, you know, the the question and it give me the right answer, which it doesn’t because it doesn’t know how to do math. But if I just like copy and pasted that without, you know, giving a care in the world to how to do the problem, then that’s not learning.” Interestingly, this student was planning on a career in education, and while they were themselves an avid AI user, they were concerned about how their own students might use AI to circumvent learning on their assignments.
Other students used slightly different terms to describe something similar. Some said that AI use was legitimate if you still put in “work” or “critical thinking.” They admitted, however, that the line between legitimate uses and cheating was not always clear. Often, they suggested that others might have more trouble discerning this than themselves.
Many students see AI as valuable for getting “unstuck”
Students often reported using AI as a way of getting “unstuck,” a use of the technology they saw as legitimate. This could take a variety of forms. Sometimes, it was as a kind of brainstorming or pre-writing aid. This is a form of use I have previously been somewhat dubious about, but after discussions with students I think it might have more value than I initially gave it credit for. One student described how they used AI to brainstorm paper topics:
Respondent: when i get an assignment where it’s like oh you can write anything you want that kind of paralyzes me because I’m like oh i can write anything and then i’m just kind of stuck […] with this like i can go into the AI and just be like hey can you just help and then like it’ll kind of you know throw out ideas but you know then you know it will you know help me kind of I guess realize the thing that I want to write about
Professor Famiglietti: can you tell me more about that idea of helping you find the thing that you want to write about, can you tell me more about what you mean by that?
Respondent: one of the things that i like is that it’s not like you’re just getting like one prompt, like one result, at a time like you can have like a really nice list of like possible topics for a paper and you can like read through them and think well like well that one looks interesting that one doesn’t you know that one might be interesting but maybe instead of focusing on this aspect i could instead focus on that one and so yeah like it just becomes like a brainstorming thing where again I’m not asking the AI to write the paper for me but I’m using it to kind of give me that first initial push to get me into the writing process
This respondent is already doing some sophisticated thinking about the output they are getting from AI. They might benefit from instruction that would help them think critically about that output and situate it as a machine-generated response, rather than as a human dialogue.
Other students reported using AI to help them get “unstuck” when facing formal parts of writing they had trouble with, such as introductory paragraphs or transitional sentences. In one particularly telling exchange, a student reflected on how they had used AI to fix a “problem” that others had identified in their writing (excessive wordiness) that they felt unable to solve on their own:
I do personally use AI a lot for sentence structure. I’ve always been told that my writing is super wordy. So I will put it into ChatGPT and say like restructure this sentence or write this sentence in a way that sounds academic and like is clear and concise and like gets the point across without, you know, being a run on or whatever. But, and while I do think, I do like that it’s a tool that’s able to help me do that. Do I wish I could do it myself with my own brain and figure it out myself? Yes. But I have the tool there. So it’s, and it’s kind of like, I’ve heard it, like I’ve heard in that sense, like I’ve heard that my sentences are wordy for so long. I’ve tried to correct it, consistently heard it. It’s like, at this point I’m like, I’ve worked on it. I’ve tried like all throughout high school before I used AI. So now it’s just like, at this point it’s a tool for me bettering my writing in the sense that it’s, it might be something that I’m not able to do. I’m just not able to learn to structure sentences in that kind of way without just like letting my creative side take hands.
The student’s ambivalence about the tool use and their writing is particularly interesting here. They express some positive feelings about having a “tool that’s able to help me” but they also seem a little regretful when they wish they could “do it myself with my own brain and figure it out myself.” Their inability to meet the demands made on their writing, even after sustained effort, clearly bothers them. This isn’t a student reaching for a tool out of laziness, but out of a carefully considered and thoughtful assessment of their own needs.
On the other hand, the student suggests they believe their “problem” with wordiness stems from an inability to keep their “creative side” under control. There’s a hint here about how “formal” writing might, in some cases, be more easily mastered by machines, and how we might need to abandon certain forms of formality if we want to encourage authentic student writing in an era of Generative AI.
When students choose to cheat, they do so to avoid what they see as “busywork” or work they were not prepared for
The most extreme way students might use AI to get “unstuck” is by using it to cheat on assignments. Students reported that they had been universally warned not to do this, but that it nonetheless happened. When asked what would drive someone to cheat in this way, they often suggested that they might do so if confronted with an assignment they did not feel they had been adequately prepared for. For example, one student said, “I had a class where I had to write a seven-page paper. In my high school experience, I had never written that long of a paper. So for me, coming up with that many things to talk about was a little bit challenging. So I asked ChatGPT to give me like a very basic and vague outline of like this specific topic and things to branch off of.” Another reported using ChatGPT to complete an assignment because they felt tired and overwhelmed: “the one assignment I did use it on for like this one paper it was on a topic that [I] was well knowledgeable in and I like had all my personal research, but I was like really, really tired at the time. And it was like, due that night. I just needed to get something on the page. So I like asked it to like, you know, write me a paragraph about like, general like research on this one thing. I said like, you know, write me a paragraph.”
In other cases, students reported they might use ChatGPT to complete assignments they perceived as “busywork” that did not contribute to their learning. One student suggested they might use it to complete “worksheets” that were meant to practice skills or ideas they felt they had already mastered: “I’m talking about the worksheets that, like, you learn the subject and then they’re like, okay, practice it. And, like, say you got it down already and you still have to do the thing. It’s just like, okay. I can either do this or my paper in English. And then get that done quick.”
When students choose to cheat, they already have sophisticated methods for defeating automated detection
Students tended to describe cheating as something someone else, not themselves, would do, but nonetheless, some of them were well acquainted with methods that a prospective cheater might use to cover their tracks, especially from the automated systems designed to detect AI writing (which they almost universally reported being threatened with). Take this exchange from one focus group, which I excerpt here pretty extensively because I think it’s a telling conversation:
Professor Famiglietti: Okay. Okay. Is that what other folks are getting in their classes, too? Like, does it seem like it’s mostly, you’re nodding, so it’s mostly like a blanket ban for you guys? Other than these couple of professors who’ve tried these experiments, mostly it’s just a ban.
Multiple Respondents: Yeah.
Professor Famiglietti: And yet, despite that fact, when we talked earlier, everybody was like, oh, yeah, but everyone’s using it anyway. Like, square that circle for me. Like, why is that? Despite the fact that folks are banning it, why aren’t they succeeding, do you think?
Respondent 1: I think that there are specific words that you can use in the AI to make sure that it can’t be traced back to AI. So they can’t really tell if you’re using it anyway.
Professor Famiglietti: So you think there are prompting techniques that make it more difficult to make the output traceable back to AI?
Respondent 1: If you say, write me something, make it 100% AI undetectable, it will make it undetectable. And they won’t be able to trace it back to AI.
Professor Famiglietti: So I just want to know what folks are doing. Have other folks run into techniques or tactics like this specific one that we just heard about for, like, making things, like, less detectable, like, methods that are supposed to, [Respondent 4] you are nodding your head.
Respondent 2: There’s, like, checkers online. AI checkers.
Professor Famiglietti: Oh, so you run it through the AI checker until it passes.
Respondent 3: And, like, Quillbot. Quillbot will reword things for you to make sure that it can’t be traced back.
Professor Famiglietti: Paraphrasers. Yeah. Okay. Those are nothing new. Right? Those are things that folks have used for a long time. So where did we find out about these techniques?
Respondent 2: I would say, like, the teacher was saying, like, I could find out, like, I’ll figure out if you used AI.
Professor Famiglietti: Yeah.
Respondent 2: Through the checker.
Professor Famiglietti: Oh, so the fact that they told you about the AI checker, so then you know what the checker is.
Respondent 2: Maybe if they didn’t let us know? But yeah, but they told us we could find out.
Professor Famiglietti: Okay. So because you knew about the checker because they told they threatened you with it, then you’re like, oh, well, I’ll just go see if I can get it to pass through. And do folks find that with prompting? You can get one of these checkers to say it’s no longer detectable after you tweak your prompts a little bit.
Respondent 2: Try to put your own words in it as well.
Professor Famiglietti: What do you mean by that?
Respondent 2: Like, don’t just take everything that AI gave you. Like, try to use your own words to make it kind of yours.
Professor Famiglietti: Oh, so integrate your own language and the AI’s language. Do other folks try to do that as well?
Respondent 3: Yeah, I mean, AI has a consistent pattern where it’ll use the same words over and over in every paragraph, like “additionally,” it’ll just repeat the same words over and over. AI also uses really big words so if they’re words we wouldn’t use, I feel like it would just look better to change it.
To summarize, these students describe using the following techniques to obscure AI-generated writing from automated detection: prompt engineering (writing prompts that encourage the LLM to respond with something other than its default tone), automated paraphrase (tools like Quillbot take text and swap out words for synonyms using a simple dictionary look-up), manual close paraphrase, and using software that detects AI writing to scan output until it passes undetected.
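To make the “simple dictionary look-up” idea concrete, here is a minimal, purely illustrative sketch of that kind of word-for-word substitution. The synonym table and function name are my own inventions for demonstration; real paraphrasing tools like Quillbot are considerably more sophisticated, and nothing here reflects how they are actually implemented.

```python
import re

# Toy synonym table -- invented for illustration only, not drawn from any real tool.
SYNONYMS = {
    "additionally": "also",
    "utilize": "use",
    "demonstrate": "show",
    "significant": "notable",
}

def naive_paraphrase(text: str) -> str:
    """Swap each known word for its listed synonym, preserving simple capitalization."""
    def swap(match: re.Match) -> str:
        word = match.group(0)
        replacement = SYNONYMS.get(word.lower())
        if replacement is None:
            return word  # leave words we have no synonym for untouched
        return replacement.capitalize() if word[0].isupper() else replacement
    return re.sub(r"[A-Za-z]+", swap, text)

print(naive_paraphrase("Additionally, the study will utilize a significant sample."))
# -> "Also, the study will use a notable sample."
```

This is only a caricature, of course, but it illustrates the word-level substitution these tools perform on a draft before it is submitted.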
The exact prompts and paraphrase techniques named by students in this exchange may or may not be effective. (My experience makes me dubious that a prompt asking an LLM to make its output “100% AI undetectable” will work.) However, the broader methods they name have been shown to be effective in defeating AI detection software in peer reviewed research.
While this was the only exchange that delved into the details of obscuring AI writing during the focus groups, the broad response by students suggests that these techniques are fairly widely distributed. Policing AI writing via automated detection, as I have suggested previously, may soon catch more honest users than intentional cheaters. After all, the student who used ChatGPT to correct “wordy” sentences would be turning in language authored by the LLM, which might well trigger an automated detection system. Cheaters using the obfuscation methods described above would not.
Student engagement with AI as a creative tool remains tentative
I was curious whether, during these discussions, I would uncover evidence of a broad, remix-culture-style movement using AI to engage in creative activity. While my sample was quite small, I didn’t uncover much of this. Three students reported using AI to generate materials for tabletop games, such as character portraits. One student claimed to have used AI to generate an audio deepfake of a secondary school official as “a prank.”
The only other creative activity involving AI that students reported engaging in was “messing with” AI agents like Snapchat’s MyAI (more about this below), attempting to get them to say things they “weren’t supposed to say.” They reported that this activity had been short-lived, fizzling out soon after the tool was introduced.
Students see AI driven “deepfakes” as a potential problem in their social media environments
While students did not report engaging in much creative activity using AI themselves, they did report encountering creative AI activity on their social media feeds, particularly on TikTok. This often took the form of memes built around the practice, discussed above, of “messing with” AI agents and attempting to elicit unexpected responses from them.
In many cases, however, the AI-driven creativity students are encountering seems to be driven by mimicry of human artists. For example, one student reported encountering “leaks” of the then-upcoming Taylor Swift album that they suspected were not real leaks: “with so many artists having like new music coming out that we don’t know what it’s gonna sound like yet. Like people can make like leaks of it by just faking it the whole time. […] I’ve seen a couple people doing like songs like that are supposedly gonna be like on Taylor Swift’s new album, and like some of them I think are real leaks, but there are some that it’s like you can’t really tell if it’s real or not.”
This inability to tell whether the content they were presented with was “real or not” caused students a fair amount of concern. Students worried that faked video and audio could be used to incriminate someone. In their experience, such “fakes” could be posted and distributed without consequence:
Respondent 1: I think the voice thing is probably the scariest thing for me ’cause I saw a clip recently of, you know, Caitlin Clark, she’s like a famous basketball player. There was like a clip of her and it was an AI-generated voice over of her saying like a crazy slur ’cause she’s white. And it was like really out of pocket.
Professor Famiglietti: Oh, God.
Respondent 1: I knew it was AI, but I was like that’s, like people can get in a lot of trouble for that kind of stuff.
Professor Famiglietti: All right, [Respondent 2] I see you nodding, what do you think?
Respondent 2: Yeah, I’ve also seen artists that felt like people will make them say really awful things and be like, oh, they’re this kind of person and it’s like not, as long as it’s like a process to prove it’s wrong, even, like people can’t generally tell.
Professor Famiglietti: So I’m curious to know, do you feel like you see this kind of content frequently? Is this something that’s like not uncommon at all or is it just every now and again?
Respondent 2: Like more frequently now, like it’s becoming more frequent.
Professor Famiglietti: When you see it, do you feel like, oh, this is only gonna be up for 10 minutes and then somebody’s gonna pull it down or does it tend to stick around?
Respondent 3: It usually stays up. I mean, unless you’re going for an artist who really doesn’t want you to be doing that, like some people probably have gotten sued and stuff, but for the most part, those videos stay up.
While students did not report much anxiety that this kind of material would target them personally, they did express frustration and concern about its proliferation in their social media feeds.
Students are not convinced of the value of AI participants in social media spaces or AI advertisements
As suggested above, students were not much taken with Snapchat’s “MyAI,” though all reported that they had encountered it. Introducing the topic (which emerged out of preliminary conversations with students as a way to start talking about AI in social spaces) tended to elicit groans. One student complained, “It was just kind of stupid. Like, when it first came out, and, like, you made your, like, avatar, and then you, like, say weird stuff into it and, like, just, like, kind of play with it. And then everyone kind of got bored of it, I’d say.” Others reported that MyAI had allegedly done things they found “creepy,” like identifying their location even though they believed they were not sharing their location with Snapchat. Others speculated that MyAI had been made “intentionally annoying” to encourage people to purchase Snapchat Plus, which they believed would allow a user to disable MyAI.
While a recent Adobe blog post breathlessly promises that “half of consumers surveyed are more inclined to shop with brands that use generative AI on their website and 58 percent believe generative AI has already improved their online shopping,” some of my focus group participants extended their frustration with AI mimicry in social media to its use in advertisements. One related a recent experience at a nearby mall, where they had encountered an AI-generated ad on “one of those giant electronic billboards in the center of the mall,” and reported feeling “like, are you kidding me? Because you could really tell that it was A.I. generated. A lot of the lines were meshing together. There was like a lot of irregular shapes where there weren’t supposed to be where like one.” They went on to say, “People are looking at recent advertisements and being like, hey, I think an A.I. generator was used for these.” Another student then chimed in to compare this experience to the recent, infamous AI-generated “Willy Wonka Chocolate Factory Experience.”
Students believe AI will impact jobs, but many believe they can cultivate AI-proof skills
Students, especially those in creative fields, have considerable anxiety about the impact of generative AI on the job market. As one put it in their final remarks to the focus group, “it’s a bummer […] it’s really like a shame knowing like how many people pursue creative fields wanting to tell their story and make their mark and everything and then this influx of AI products means it’s like not impossible but it’s getting closer to being harder to do that in a reasonable way which is a shame.” Another expressed this worry: “part of what I’m interested in is digital marketing, and that could honestly disappear. With AI, since it is all digital pretty much anyway, and with what they’re able to put out, and with how much better it gets every day, digital marketing could honestly not even be an option for me in a couple years.”
However, another student in the same focus group responded that they had more hope that they would be able to find a uniquely human niche in their industry. In their words, “I wanna go into writing and editing. To an extent, that stuff can be done by, it is done by AI now, but I’m more into the creative aspect of creative writing, so I’d like to think that people in that community will still value a person doing that […] so definitely some aspects could be taken over by AI, but I’d like to think the job itself would still be there.” Other students expressed similar hopes that “uniquely human” job skills, like caretaking and empathy, might weather AI’s entry into the job market.
Conclusion and Next Steps
While my study had a small sample size, I still think there are some suggestive findings here. The students who participated in these focus groups were neither the lazy, AI-abusing cheaters students are sometimes accused of being, nor the savvy “AI natives” they are sometimes hailed as. They were people trying to navigate their lives in this moment of rapid AI emergence, like the rest of us. They were often thoughtful and insightful about the challenges they were facing, but, like the rest of us, they were also often overwhelmed, scared, misinformed, and confused.
From a classroom perspective, the way students talked about the importance of intrinsic motivation for learning in an era when AI-based tools could be used to cheat was telling. So too was the long list of tools they had available to defeat automated policing of written work. Both tend to reinforce my existing suspicion that engaging with students, clearly defining learning goals, and connecting to their intrinsic motivation will be more successful than deploying automated surveillance to defeat plagiarism via AI tools.
But that’s just one small piece of a fascinating set of conversations. There is a lot to continue unpacking here, and I will be doing that over the course of the next few months as I shape this into something ready for publication. I’ll also be gathering further data of this kind in the fall, having more conversations with undergraduates, and possibly with faculty as well.
Finally, my own students, who were co-researchers with me on these focus groups, will be turning in their own analyses of this data to me tomorrow. I intend to ask the authors of particularly interesting takes for their permission to run their work here as “guest posts” on my blog, so look for those over the coming weeks.