If you have visited my blog before, you know that I am somewhat of an Artificial Intelligence (AI) geek. Recently I was giving a presentation on the technology Flip at a Tech Camp held by my school. My presentation turned into a lesson about the possible functions and uses of AI for educators! I have written two blog posts on utilizing AI for gaining inspiration for unique assessments and finding unique ways to utilize data through AI. For this blog post I am going to "put my money where my mouth is" and discuss the reality of crafting an assessment through AI and the revisions needed to make it successful.
The AI-inspired digital assessment I am "renovating" is one that I created for a master's course, "Electronic Assessment for Teaching and Learning." Over the length of the course, I learned how to create fair, reliable, objective, and valid (FROV) assessments. I also learned about the unique opportunities afforded by new AI models. This assessment "renovation" integrates both areas of my newfound knowledge. For the course assignment, the technique, structure, technology, response, and design were chosen at random for me. As I edited this assessment to better fit the needs of my classroom, I made changes to those assessment elements.
This assessment, crafted by the AI model ChatGPT and myself, assessed the standard CCSS.ELA-LITERACY.RI.3.2: Determine the main idea of a text; recount the key details and explain how they support the main idea. It did so using a short story technique, the technology "Mural," and assessing the whole class at once. The new and improved Assessment Plan is linked here:
Most of the changes that needed to take place for this assessment revolved around the fact that AI models do not know students like teachers do. The informational text used as the short story in the original assessment plan was generated by the AI model. The text was more difficult than texts I generally use in my classroom. It included very complex sentences with challenging vocabulary. This led me to make the change to a text that is available in my school’s reading curriculum. This also allows me to utilize district-provided resources that align to my 3rd grade standards.
Another change that I made to the original assessment plan was the way I respond to student work. The original plan required a "Total Score" response to students. For this assessment, I felt it was more appropriate to respond orally to my students. This way, I was able to validate the work they put into their murals and have a natural conversation with my students about their work. Along with an oral response, I would give my students a grade from 0-4 based on their performance, as this reflects the system of grading used in my district.
The reason I decided to revisit and "renovate" this AI assessment was to experience using AI to craft an assessment that I would actually use in my classroom. It's one thing to practice creating with AI models, and another to fully flesh out an assessment with the intent of implementing it in your curriculum. This experience also allowed me to reflect on assessments I have created in the past. Throughout the semester I have learned important assessment criteria, including "FROV," coined by the Michigan State University Master of Arts in Educational Technology Program, which has changed my perspective on what a "good assessment" is.
This is a screenshot from Google Jamboard depicting what the mural assessment would look like in my 3rd grade context!