Making AI Use Visible: Why I Ask Students to Document Their Process
I use generative AI tools and have done so since they were first released. You can see how I started using AI to help with my work in this article, published with colleagues in the Journal of Social Work Education:
Báez, J. C., Bjugstad, A., Park, T. K., Jones, J. L., Bidwell, L. N., Sage, M., & Hitchcock, L. I. (2025). Social Work Educators Innovating With Generative AI: An Exploratory Study. Journal of Social Work Education, 61(1), 14–29. https://doi.org/10.1080/10437797.2024.2411170
As I reflect on this article now, I still see the role of AI tools in my work life as one of augmentation or support. The tools help me organize my thinking, draft an outline, or get unstuck when I have writer's block. My confidence in using AI tools responsibly is rooted in professional identity, judgment, and ethical accountability. Students, by contrast, are encountering these tools at the same time they are learning what it means to think, write, and reason as social workers. Generative AI tools such as ChatGPT, Grammarly, Copilot, and others are now embedded in students’ daily writing environments. Some students are using these tools deliberately; others may not fully realize when browser extensions or autocomplete features involve AI. Meanwhile, our assignments in social work education are often writing-focused, inviting reflection, analysis, integration of theory and experience, and the development of critical thinking that grounds ethical practice.
Here is the tension I see in my classes. Students are navigating new tools without clear maps from us, and we are trying to maintain academic integrity while also recognizing that these technologies aren’t going away. As I think about this tension, I am reminded of the challenges of integrating social media into social work education years ago. I never felt that the problem was the social media tools in and of themselves, but rather how they were being used and whether we were talking about how they were being used. This leads me to a similar conclusion with AI: the problem isn’t the AI tools themselves, but the training and transparency with which we use them.
When students use AI tools without disclosure, whether intentionally or because they don’t realize it’s AI use, we lose our ability to teach effectively. We can’t distinguish between a student who is genuinely struggling with critical analysis and one who has outsourced their thinking. We can’t provide meaningful feedback on writing that isn’t theirs. And we can’t model the transparency and accountability that are foundational to social work practice. There are several layers to this:
- Academic integrity concerns are obvious but incomplete. Yes, undisclosed use of AI constitutes academic misconduct under most institutional policies. But framing this solely as a “cheating” issue misses the pedagogical heart of the matter.
- Learning assessment becomes unreliable. If I can’t trust that a reflection paper represents a student’s actual capacity for self-awareness and critical thinking, I can’t evaluate their readiness for practice. This isn’t about gatekeeping; it’s about ensuring competence in work that affects real people’s lives.
- Professional formation is undermined. Social work students are learning to become helping professionals who must navigate ambiguity, manage complexity, and make ethical decisions under pressure. If AI is doing their thinking during their education, what happens when they’re sitting with a client in crisis?
- Inequity deepens. Students with greater technological literacy or access to premium AI tools may appear more competent than peers who are doing their own work. Students who follow the rules are disadvantaged when others don’t.
One small shift in how I am approaching AI this semester
My approach centers on required transparency paired with limited, disclosed use. Every written assignment includes a mandatory AI Statement, regardless of whether AI tools were used. This shifts the question from “Did you cheat?” to “How did you approach this work?”
Here’s what this looks like in practice:
- Students may use AI only for specific support tasks, such as spelling and grammar checks, generating an organizational outline, or study support, like creating practice questions. The substance of the assignment, including the analysis, reflection, interpretation, and synthesis, must be their own original thinking and writing.
- All AI use must be fully disclosed in an AI Statement that identifies the tool used, describes how it was used, and confirms that all final content is the student’s own work. Even if no AI was used, the statement is still required: “I did not use generative AI tools in completing this assignment.”
- When AI tools are used for permitted purposes, students must also share the AI-generated output they relied on, either through a shareable link or by uploading a document with their prompts and the AI’s responses. This creates a complete record of their process.
This approach shifts the conversation from punitive to pedagogical. The documentation requirement serves the following purposes:
- It creates a reflective pause before using AI,
- It helps students (and me) track patterns in their learning process,
- It builds the habit of transparency that will serve them throughout their professional lives.
Here is a link to my full AI Policy:
We are four weeks into the semester, and while this approach isn’t perfect, it’s working better than I expected. I’ve found that reviewing AI output alongside student work helps me give better feedback. When a student submits an AI-generated outline and then a final paper, I can see how they moved from a generic structure to a specific, experience-informed analysis. That’s visible learning.
There are trade-offs, of course. Tracking documentation takes time; students sometimes forget the AI Statement (I’m treating early instances as learning moments, not as automatic misconduct); and there’s always the possibility of undisclosed use. But the alternative, pretending AI doesn’t exist or creating punitive policies that drive use underground, feels worse.
What I appreciate most is that this policy aligns with what we teach as social work educators: transparency, accountability, ethical decision-making in uncertain situations, and the importance of documenting our professional reasoning. If we want students to practice these values with clients, we can start by practicing them in the classroom.
We’re all learning together. I expect this policy will evolve, and I’ve told students that explicitly. But the commitment to transparency, to making our processes visible, feels like the right start for me as I navigate this moment in social work education. Are you using something similar? Different? Please share your thoughts in the comments below.
How to cite:
Hitchcock, L. I. (2026, February 2). Making AI Use Visible: Why I Ask Students to Document Their Process. Teaching & Learning in Social Work. https://laureliversonhitchcock.org/2026/02/02/making-ai-use-visible-why-i-ask-students-to-document-their-process/