
Parallel transformations: Two community college instructors redefine writing in the age of AI

By Susan Adams

Stories & Case Studies
November 4, 2025

This blog post is part of the “From Vision to Action” series showcasing how ATD Network colleges are operationalizing AI integration across eight key action areas identified in the ATD AI for All Task Force’s report, “Creating the AI-Enabled Community College: A Road Map for Using Generative AI To Accelerate Student Success.” 

When artificial intelligence arrived in their writing classrooms uninvited, Dr. Kim Carter and Dr. Anna Esquivel could have dug in their heels. Instead, these two English faculty members embarked on parallel journeys that would transform not just how they teach but why they teach, revealing a truth about what students need most in an AI-enabled classroom. 

Pictured: Dr. Kim Carter and Dr. Anna Esquivel

First encounters with AI 

For Dr. Carter at Chattanooga State Community College, AI first appeared as an unwelcome intruder. Students were submitting AI-generated essays, and, like that of many of her colleagues, Kim’s initial response was unequivocal: “AI is not in our class. We’re trying to teach students how to do this thing. If the students aren’t doing the thing, then that’s cheating.”

She and her fellow writing instructors became detectives, sharing AI checker tools and tracking down what felt like academic dishonesty. “Honestly, the whole privacy thing didn’t even occur to any of us,” Dr. Carter admits. “We were tracking down cheating. And that’s how we felt.” 

At Jackson State Community College in Tennessee, Dr. Esquivel, then an English professor and now dean of humanities and social sciences, had a different but equally unsettling first encounter. When an AI detection feature flagged a student’s paper, Anna’s reaction surprised her: “I thought, how did I fail this student [in such a way] that they felt like they had not found their voice? Because that’s the whole point of Comp 1: getting them excited about learning what they want to say to the world and how to say it.” 

“We owe it to humanity to think about these things before we go all in.” – Dr. Anna Esquivel

Both Dr. Carter and Dr. Esquivel stood at the same crossroads, facing the same existential question: What now? 

The realization that sparked change  

For Dr. Carter, transformation came suddenly at Achieving the Dream’s DREAM conference in February 2024. When keynote speaker Ethan Mollick mentioned that AI could out-diagnose doctors, something shifted. “I don’t know why that resonated with me, why that really caused me to wake up,” she reflects. “I thought, well, then what am I doing? Like, what’s the point of me if AI can do it?”

But Dr. Carter’s response wasn’t despair: it was determination, rooted in a deeply personal commitment. “I have this deep fear of being that get-off-my-lawn curmudgeon who refuses to know who Bad Bunny is. I don’t want to be that. I want to be 99 years old and still willing to try something new.”  

She flew home from Philadelphia with a mission: implement an AI lesson before the semester ended. Within weeks, she had created an innovative assignment where students would use AI to generate literary analysis and then analyze AI’s analysis. 

Dr. Esquivel’s awakening was quieter but equally profound. Drawing on her background in psychology and cognitive science, she recognized something her colleagues might have missed: “It’s a problem with students’ confidence in their abilities. It’s not always about laziness — it’s about fear; it’s about anxiety.” 

This realization led Dr. Esquivel backward through her own experience as a writer: “I realized that I was a writer, too, and there were times as a writer that I was so scared of that blank sheet of paper. I was so scared of what people were going to think about what I had to say.” She knew in that moment that she needed to completely rethink how she assessed students, not just what they produced but how they got there. 

Teaching with AI — not against it  

Dr. Carter’s first AI assignment became a masterclass in teaching critical thinking through technology. She assigned three literary works: two from the early 1900s that AI knew well and one contemporary play still under copyright that AI would struggle with. Students had to use AI to generate literary analysis and then evaluate how well AI had done the job. 

The results were revelatory. “The strong students were deeply distrustful of it and not happy,” Dr. Carter revealed. “They felt that their voices were being co-opted. They did not like the tone; they did not like the lack of detail.” Meanwhile, other students were impressed because AI did more than they thought they were capable of; some were even taken in by AI’s hallucinations: fabricated quotes and invented analysis.

One student loved the phrase “tragic symmetry” that AI had generated. “I said, ‘I like it too, but you just need to cite it,'” Dr. Carter recalls. “Well, his classmate was like, ‘Oh, I’m taking that.'” The students were learning to build from AI’s conceptual frameworks while maintaining their own voices — exactly the skill Dr. Carter wanted them to develop. 

Most importantly, the assignment required students to know the texts deeply. “Analyzing someone else’s or another’s analysis of a thing, that’s a lot of work,” Dr. Carter acknowledges, “determining whether or not it did a good job and explaining why.” Students couldn’t just accept AI’s output: They had to understand the material well enough to catch its mistakes. 

Dr. Esquivel took a different but complementary approach: She completely reorganized her composition course to privilege process over product. “So much of their grade, the balance of their grade, was on the process,” she explains. Every week, students had two or three low-stakes writing assignments, graded as pass or fail, with no pressure for perfection. 

Redesigning assessment in collaboration with AI

But Dr. Esquivel didn’t just change the assignments. She changed the entire emotional architecture of the course. To show empathy with students facing the paralysis of an empty page, she assigned readings by writers who confessed their fears and what it was like to fail many times. She spent the first three to four weeks talking to students about “the psychological and emotional impact of creating through writing.”

“I really had to go back into my writing process and humanities side and say, ‘Let’s talk about how personal this is. Let’s talk about how scary it is, and how exciting it is, and how on the other side of that fear is something amazing and beautiful, and it’s finding your own voice.’” 

The key was backing up her words with her grading structure. “I could say that as much as I wanted to those students, but if I didn’t show them the way I assessed them… that I was willing to let them fail and … not [let it] penalize [them], then they weren’t going to believe me.” 

Dr. Esquivel built in visioning, revisioning, and retrying as weighted parts of the grade and now encourages her faculty to do the same. “Let’s reward trying and failing more than we ever have,” she says, “because what we’re seeing is AI is taking away the skills that are forged in that really difficult work of trying, failing, and getting back up and trying again.”

Scaling personal transformation to institutional innovation

Both educators quickly discovered that individual transformation can drive institutional change. Dr. Carter launched a faculty community of practice at Chattanooga State, experimenting with AI across the curriculum. Students are now using ChatGPT to identify career-relevant research topics, explore interview questions for their future fields, and prepare for professional communications, all while maintaining the human work of synthesis, analysis, and authentic voice. 

Her assignments have evolved to include tools like NotebookLM for visualizing how sources connect — but always with the guardrail of requiring students to go back to the texts themselves. “This is what they don’t know how to do,” she explains. “Freshmen are learning to read critically, learning to read for information. They don’t know how to do that.”

Dr. Esquivel’s influence has scaled even further in her role as dean. At Jackson State, she helped establish three distinct faculty working groups: one for faculty all-in on AI integration, one for those exploring AI in their professional work, and crucially, one focused on building AI-free learning environments. 

The last group, she explained, “wasn’t an anti-AI group. It was, ‘How are we going to build classrooms where AI isn’t the tool that students pick up? How do we create learning environments where students connect with each other as humans?’” This group explores everything from oral exams to blue books to finding word processing programs without AI embedded.

The administrative support for all three groups has been transformative. “Being able to say that we are going to allow that space for you to slow down and think through and innovate in a different way was very, very powerful,” Dr. Esquivel reflects. “It engendered a lot of respect and trust in the administration.” 

Discovering the messy middle ground  

Through their experiments, both have arrived at a strikingly similar philosophy: what Dr. Carter calls “a messier but more interesting middle ground.”

“I don’t want to go all paper-pencil for 15 weeks and say, ‘Forget your technology,’” she explains. “But at the same time, I don’t want to say, ‘Just submit an AI-generated essay, and I’ll give you an A.’ There’s got to be a much messier, but more interesting middle ground.” 

“Fear is a valid starting point, but it doesn’t have to be the ending point.” – Dr. Kim Carter

For Dr. Esquivel, this middle ground is about protecting what she calls “cognitive sovereignty,” a term that resonates deeply with her understanding of education’s democratic purpose. “Cognitive sovereignty really is the entire value of higher education. It’s giving you the kind of intellectual freedom and emotional freedom that makes us free, that makes us citizens who can determine ourselves in a democratic society.” While Dr. Esquivel uses the term “cognitive sovereignty” in her own powerful way, it originates with German sociologist Ulrich Beck, who coined it in 1986 to describe our fundamental need to understand ourselves and our world. Beck’s concept has found new urgency in the age of AI, as scholars examine how technology shapes, or threatens, our ability to think independently. 

She draws a direct line from W.E.B. Du Bois’s early 20th-century writings on education and liberation to today’s AI challenges: “If we are bypassing that, if we are shortcutting that, then we are shortcutting our freedoms. We’re shortcutting our ability to be liberated humans,” she asserts. In The Souls of Black Folk (1903), Du Bois insisted that education was inseparable from freedom, arguing that African Americans needed higher education to develop the intellectual and political capacity for full democratic citizenship. 

Both Dr. Carter and Dr. Esquivel emphasize that the struggle itself, the productive struggle, is where learning happens. Dr. Carter likens her emphasis on the writing process to making students “show their work in math” for every thinking step. Dr. Esquivel points to the importance of the creative struggle in building a PowerPoint or crafting an email. “Those are all moments of connection,” she insists. “As much as they are moments of struggle, they are moments of connection and real human fallibility and productivity.” 

As AI capabilities grow, both educators have had to grapple with a counterintuitive truth: the standards must go up, not down. 

When Dr. Carter encounters a fully AI-generated essay now, she tells students: “The standards go way up. If I don’t have to look for grammar, wording, commas, punctuation … now I’m looking for depth of ideas, details, and extensive support. Now you’re asking me to grade the work of this machine that has all this knowledge, and so if all that knowledge isn’t represented, then they panic a little bit.” 

Neither Dr. Carter nor Dr. Esquivel claims to have all the answers. In fact, they’re suspicious of anyone who does. 

Dr. Carter describes an encounter with a colleague who accused her of “giving up” when she talked about questioning everything. “I said, ‘No, we’ve got to figure out what the line is, first of all. We have to figure out everything.’ Everything is being thrown into question. I don’t think anything is exempt from reexamination.” 

She continues to wrestle with big questions: “Does this change human beings’ responsibility to learn, to read and write? We’ve never questioned this before, right? Whether or not we should know how to write well in our own language. But what if all this goes away? Is humankind just going to be functionally illiterate?” 

Dr. Esquivel’s advice to fellow faculty reflects similar humility combined with urgency: “Try it, get to know it. Know thy enemy.” She notes that “many of the people who are the most vocal about not using AI, or finding anti-AI technology, are very, very knowledgeable about it. They really dug into it to see what is going on here, [asking] ‘Is this useful? Is this important?’” 

Both emphasized the importance of understanding AI deeply before deciding how, or whether, to integrate it. “We owe it to humanity to think about these things before we go all in,” Dr. Esquivel insists, drawing on her knowledge of technology’s history from the Industrial Revolution forward. 

Takeaways for educators  

The journeys of Dr. Carter and Dr. Esquivel illuminate what the ATD AI for All Task Force identified in its recent report, “Creating the AI-Enabled Community College,” as Action Areas 5 and 6: Professional Learning and Curriculum Redesign. But their stories reveal something deeper: that these aren’t separate action items to check off a list. They’re intertwined processes of personal and professional transformation.

Professional learning that works looks like: 

  • Permission to experiment and fail: Dr. Carter went home from a conference and implemented a new assignment within weeks, knowing her institution supported innovation.
  • Community and conversation: Both emphasized the importance of talking with colleagues, sharing concerns, and building communities of practice. 
  • Multiple pathways: Dr. Esquivel’s three working groups acknowledge that faculty are in different places and need different supports. 
  • Time to think: Dr. Esquivel’s administration gave faculty space to “slow down and think through” rather than demanding immediate all-in adoption. 

Course design that matters looks like: 

  • Centering the learning process: Both shifted their grading to heavily weight the journey, not just the destination. 
  • Building in productive struggle: Low-stakes assignments, permission to fail, and multiple attempts all allow students to learn through difficulty.
  • Maintaining academic rigor: Standards went up, not down, with clearer expectations about depth of thinking and authentic engagement. 
  • Balancing integration and protection: AI can be used where it enhances learning (career exploration, brainstorming, synthesis visualization) while protecting spaces for human creation. 

Most importantly, they each demonstrated that transformation begins with vulnerability: acknowledging one’s own fears, failures, and ongoing learning. Dr. Carter’s fear of becoming irrelevant led her to radical openness. Dr. Esquivel’s recognition of her own writing anxiety led her to redesign an entire curriculum.

If their stories reveal anything, it’s that institutions must support multiple pathways for faculty engagement with AI, not just the “all in” approach. 

Dr. Carter benefited from a teaching and learning conference where the message was simple: “Just try it. Just try something; see what you come up with.” This permission to experiment, coupled with assurance that “failures in our portfolio” wouldn’t affect tenure, created psychological safety for innovation. 

“The question isn’t whether AI will change education. It’s whether educators will shape that change.” – Dr. Kim Carter

Dr. Esquivel’s institution created formal structures that validated faculty at every point on the spectrum: from enthusiastic adopters to thoughtful resisters. “That was very validating,” Dr. Esquivel reflects. “It engendered a lot of respect among some of my faculty for the administration for allowing us to do that.” 

Both emphasized that the conversation can’t just be about catching students who use AI inappropriately. It must be about building classroom environments where students don’t feel they need to reach for AI as a crutch, where they have the confidence, support, and psychological safety to struggle productively with their own thinking. 

As the fall semester unfolds, both women continue to evolve their practice. Dr. Carter is introducing NotebookLM for source synthesis, adding AI-assisted career preparation activities to her college success course, and constantly refining the balance between AI integration and traditional skills.

Dr. Esquivel is working with faculty to redesign the first-year experience course, ensuring that conversations about cognitive sovereignty begin on day one. She’s helping English faculty across campus think about assessment differently, pushing back on institutional pressures for quick grading, and advocating for smaller class sizes that allow for meaningful feedback conversations. 

Neither educator is waiting for perfect solutions or definitive guidance. They’re in the messy middle ground, experimenting, adjusting, learning alongside their students. 

Dr. Carter explained: “Many of my students are high school students, and many lead difficult lives. So sometimes at 10 o’clock at night, AI is what you do when the deadline’s looming. But when I explain it, they do understand.”

Dr. Esquivel agrees: “Students want to know. They just don’t know how to articulate what it means to be human, what it means to create. If intelligence can be artificial, then how is it real? That’s something I have really enjoyed thinking about.” 

Shaping the future of education with AI  

Dr. Carter and Dr. Esquivel’s journeys offer a roadmap for other faculty navigating AI integration — not as a prescriptive set of steps but as an invitation to lead with curiosity, vulnerability, and commitment to students. 

Their stories remind us of the following key takeaways that are applicable across disciplines: 

  • Fear is a valid starting point, but it doesn’t have to be the ending point. 
  • Students’ use of AI often reflects lack of confidence, not lack of character. 
  • Process matters as much as product, maybe more. 
  • Productive struggle is where learning happens. 
  • We can’t protect students from AI, but we can protect their cognitive sovereignty. 
  • Faculty-driven change works when institutions provide multiple pathways and genuine support. 
  • The middle ground is messy but essential: We can be neither Luddites nor uncritical adopters.

The question isn’t whether AI will change education. It’s whether educators will shape that change with the same creative agency, responsibility, and intent they want to inspire in their students. 

Dr. Carter and Dr. Esquivel answered that question with a resounding “Yes.” And in doing so, they’re showing us all what’s possible when fear transforms into innovation, and individual courage drives institutional change. 
