This week I did research around AI-supported assessment, its considerations, and its implications. One of the core articles I looked into was Assessment and AI from WesternU.ca and the Centre for Teaching and Learning. The article poses the idea of AI in assessment as a threat, which ties into workers', and more specifically educators', feelings that AI threatens authentic or human approaches to teaching. There is definitely a lingering fear of what might be lost if technology takes over, even just in the realm of assessment and education. This initial idea and the rest of the article left me most interested in regulation within school settings, in the voices of parents and students themselves, and in the more specific implications that AI's use in assessment processes might have.

Student / Parent Say in AI’s Use Within Assessment
The article highlights the fact that "human judgment and oversight are essential in all aspects of assessment." Even when trying a mixed or integrated human-and-AI process within student assessment, I believe student and parent voices should be empowered as well. Worries around privacy, data use, and accessibility already play a major role here. There may be more privacy- and data-friendly programs, but who is in charge of regulating or approving them (especially as these programs are actively developing and changing), and do parents and students fully agree with their use? A more ethical integration might include student choice and agency within this process, especially for more personal or relational assignments, along with the ability to change their mind or opt out whenever needed. Learners should have some agency in deciding where, when, and how they are comfortable with AI being used in their education.
AI-Aware Design
The article also calls for "AI-aware design," suggesting further transparency and shared information or knowledge around AI use. This is relevant to any AI use around the school. Schools should communicate the purpose, limits, and privacy implications of AI tools in an effective way, for example by offering related resources and elaboration on the school website and by having classes share more about their own processes of collaborating with AI systems. This way both students and parents can be actively involved, aware, and knowledgeable about these programs. With this, the idea of offering choice is also strengthened, as everyone is more aware of the possibilities of AI use and what this might look like in their learning. Students should be able to opt out of receiving AI-generated feedback, be informed when their data might interact with AI systems, and be provided strong alternatives for certain tasks or assessments when needed. Involving families and students in this decision-making process strengthens trust within educational systems altogether and aligns with the article's core message that AI should aid in enhancing learning experiences while remaining ethical, human-centered, and responsive to the needs of the community.
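To make this idea more concrete for myself, below is a minimal sketch of how a class or school might actually record these preferences. This is entirely my own hypothetical example (the field names and structure are not from the article or any real student-information system); it only shows that opt-outs, data-upload consent, and the ability to change one's mind can be tracked explicitly rather than assumed.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical consent record for one learner; the fields are my own invention,
# not from the Assessment and AI article or any real student-information system.
@dataclass
class AIConsentRecord:
    student_name: str
    allow_ai_feedback: bool = False          # AI-generated feedback on their work
    allow_ai_assisted_marking: bool = False  # AI first-pass marking, reviewed by the teacher
    allow_data_upload: bool = False          # work may be sent to third-party AI tools
    notes: str = ""                          # e.g. "opted out for personal writing only"
    last_updated: date = field(default_factory=date.today)

    def opt_out_all(self) -> None:
        """Learners should be able to change their mind at any time."""
        self.allow_ai_feedback = False
        self.allow_ai_assisted_marking = False
        self.allow_data_upload = False
        self.last_updated = date.today()

# Example: a family agrees to AI feedback but not to uploading work to outside
# platforms, and later withdraws consent entirely.
record = AIConsentRecord("Student A", allow_ai_feedback=True)
record.opt_out_all()
```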
![Assessment Suggestions Mind Map: rethink rubrics, emphasize creativity & critical thinking; use mentoring & coaching with staged assessment and logs; foster intrinsic motivation through writing; innovate assessment with oral, group, multimedia, and reflective formats; teach responsible chatbot use; incorporate AI critically and highlight its limitations; encourage interest-driven writing; use authentic, motivating, realistic assessment. From Rudolph, J., Tan, S., & Tan, S. (2023). War of the chatbots: Bard, Bing Chat, ChatGPT, Ernie and beyond. The new AI gold rush and its impact on higher education. Journal of Applied Learning and Teaching, 6(1), 380. Image generated by a ChatGPT 4o prompt to create a PowerPoint template.](https://westcoasteducation.opened.ca/wp-content/uploads/sites/8874/2025/12/image-3-1024x953.png)
Assessment-Focused AI Programs
I was curious about what programs educators might use within assessment practices and what the possibilities would even look like, so I asked ChatGPT 4 to share some resources through the prompt, "What most effective AI programs do educators(teachers) use to help with marking or to compare their own marking."
Below is the ChatGPT-generated response, with ideas and recommendations for specific programs and overall considerations for educators.
Examples of AI tools for marking & grading
| Tool / Platform | What it does / Strengths |
| --- | --- |
| Grade It AI | Automatically evaluates essays (typed or handwritten upload), applies your rubric, and gives detailed feedback and marks — freeing up hours of manual grading. gradeitai.com |
| SmartGradr | Can read essays, structured responses, even handwritten work; handles multiple-choice, math problems, tables, etc.; auto-applies rubrics and generates feedback. Useful for mixed assignment types. smartgradr.com |
| CoGrader | Focused on writing assignments (essays, narratives, etc.). Lets you import student work (e.g. from Google Classroom), define or use pre-made rubrics, then get AI-generated grades + feedback — which the teacher reviews and adjusts. cograder.com |
| Marking.ai | Designed for high school marking, claiming to save significant time by automatically marking uploaded assignments and giving students feedback — including for handwritten work. marking.ai |
| GradingPal | Handles a broad range of assignments (worksheets, quizzes, essays, homework, etc.), across grade levels from primary to high school — intended to speed up grading across different subjects. gradingpal.com |
| SmartEducator | More of an all-around platform: converts handwriting or mixed-format submissions to digital text, applies chosen marking criteria/rubrics, and generates feedback/reports — useful for standardizing grading across classes or cohorts. smarteducator.ai |
⚠️ What to watch out for / limitations & considerations
- AI ≠ perfect grading — Even the newest systems can struggle with nuance, especially when evaluating creativity, originality, or well-argued but unconventional student work. For instance, a recent academic framework for AI-based grading found that while AI can be quite reliable, many real-world exam responses pose challenges. arXiv
- Teacher judgment still needed — Most tools are meant to assist, not replace, the teacher. For example, CoGrader explicitly lets the teacher review and adjust AI-suggested grades and feedback. cograder.com
- Potential bias or errors — Depending on the rubric, subject, or assignment type, AI might misinterpret handwriting, misunderstand complex reasoning, or be inconsistent with human grading styles. Some research suggests variability when grading nuanced answers. arXiv
- Rubric and criteria alignment — AI works best when you provide a clear, consistent rubric or marking criteria (see the sketch after this list). Ambiguous or highly subjective assignments (e.g. creative writing, open-ended essays) may not fare well.
- Privacy / data concerns — Uploading student work (especially handwritten or scanned) to third-party AI platforms raises questions about data security, student consent, and compliance with local privacy laws or school policies.
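Since the rubric point stood out to me, here is a small sketch of what "providing a clear, consistent rubric" might look like in practice. This is a hypothetical example of my own, not the input format of any specific tool listed above; the only point is that each criterion gets an explicit descriptor and point value so that any AI (or human) marker is applying the same expectations.

```python
# Hypothetical rubric expressed as structured data; the assignment and criteria
# are invented for illustration, not taken from any of the tools above.
rubric = {
    "assignment": "persuasive essay, Grade 10",
    "criteria": [
        {"name": "Thesis & argument",   "max_points": 10,
         "descriptor": "Clear, arguable thesis supported by logical reasoning."},
        {"name": "Evidence & examples", "max_points": 10,
         "descriptor": "Relevant, accurate evidence that is explained, not just listed."},
        {"name": "Organization",        "max_points": 5,
         "descriptor": "Logical paragraph structure with effective transitions."},
        {"name": "Conventions",         "max_points": 5,
         "descriptor": "Grammar, spelling, and citation format."},
    ],
}

def rubric_as_prompt(rubric: dict) -> str:
    """Turn the rubric into plain text that could be pasted into any AI marking tool."""
    lines = [f"Mark this {rubric['assignment']} against these criteria:"]
    for c in rubric["criteria"]:
        lines.append(f"- {c['name']} (out of {c['max_points']}): {c['descriptor']}")
    lines.append("Report a score and one sentence of feedback per criterion.")
    return "\n".join(lines)

print(rubric_as_prompt(rubric))
```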
🎯 What these tools are best for (and when you might still want to mark manually)
Good fit for AI-assisted marking
- Large classes with many essays/assignments (time saver).
- Standardized assignments (e.g. essays within a fixed rubric, math worksheets, structured short-answer tasks).
- When you need consistency across many students.
- To get a first pass: preliminary grade + feedback, then you review/refine (see the sketch after these lists).
Still better for human marking (or heavy review)
- Creative, open-response assignments requiring nuanced evaluation.
- Projects where student voice/style, depth of thought, and originality matter.
- High-stakes assessments when fairness and accuracy are critical.
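For the "first pass" idea above, here is one more minimal, hypothetical sketch of the teacher-in-the-loop workflow these tools describe: the AI's suggestion is only a draft, and nothing becomes a final grade until the teacher reviews and possibly overrides it. The class and method names are mine, not the API of CoGrader or any other platform.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical teacher-in-the-loop record: the AI suggestion is only a draft
# until a human reviews it.
@dataclass
class GradeRecord:
    student: str
    ai_suggested_score: int
    ai_feedback: str
    teacher_score: Optional[int] = None      # stays empty until reviewed
    teacher_feedback: Optional[str] = None

    @property
    def final_score(self) -> Optional[int]:
        # The AI score is never final on its own; a teacher decision is required.
        return self.teacher_score

    def review(self, score: int, feedback: str) -> None:
        """The teacher accepts, adjusts, or rewrites the AI's first pass."""
        self.teacher_score = score
        self.teacher_feedback = feedback

# Example: the teacher bumps the AI's suggestion up after rereading the essay.
draft = GradeRecord("Student B", ai_suggested_score=7,
                    ai_feedback="Strong thesis; evidence could be developed further.")
draft.review(score=8, feedback="Evidence is stronger than the AI credited; cite sources next time.")
assert draft.final_score == 8
```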
Additional Resources:
Below are some of the UVIC workshops available that share valuable research and knowledge on AI-related topics and exploration. It is crucial to spread and expand awareness around AI tools, their uses, and their data practices, along with the many helpful features that may effectively support diverse learners or address more specific needs in a technology-accommodating and strategic way.



