AICFT 1.1 Human agency

A Definition of HUMAN AGENCY in the Educational Context

Access to the Professor Wayne Holmes Text

This video was created with the support of an Artificial Intelligence platform. All characters featured are digitally generated avatars, and their voices have been synthesized to provide an immersive and innovative experience.

The use of this technology allows us to produce dynamic, high-quality, and accessible content while maintaining our commitment to transparency and authenticity.


BLOCKS OF THE COMPETENCY

(Described Below)

EdTech #384 AI@Teaching – AICFT 1.1 Human agency

TEACHER COMPETENCY

“Teachers have a critical understanding that AI is human-led, and that corporate and individual decisions of AI creators have a profound impact on human autonomy and rights, and are aware of the importance of human agency when evaluating and using AI tools”

The image highlights the first competency of UNESCO’s AI Competency Framework for Teachers: Human Agency. As the first competency within the first aspect, Human-centred Mindset, at the Acquire progression level, it establishes the foundation for teachers to understand their role in guiding and controlling AI, ensuring its use aligns with human values.

CURRICULAR GOALS (CG)

EdTech #385 AI@Teaching – AICFT 1.1 Human agency – CURRICULAR GOALS (CG)

CG 1.1.1 “Foster critical thinking on AI by organizing teachers to discuss and take perspectives on the dilemma of benefits offered by AI versus the risks of diminishing human autonomy and human agency; use specific AI tools as examples to support teachers to critically examine the benefits, limitations and risks of AI in local educational settings and with respect to their own responsibilities.”

CG 1.1.2 “Illustrate key steps in the life cycle of AI systems and guide teachers to understand how corporate and individual decisions of creators may affect the impact of AI.”

CG 1.1.3 “Highlight how overreliance on AI can undermine thinking skills and human agency.”

CG 1.1.4 “Offer practices of writing basic tips to help protect human agency when using AI in education, with a specific focus on students with special needs.”

LEARNING OBJECTIVES (LO)

EdTech #386 AI@Teaching – AICFT 1.1 Human agency – LEARNING OBJECTIVES (LO)

LO 1.1.1 “Critically reflect on the benefits, limitations and risks of specific AI tools in their local educational settings and the subject areas and grade levels they teach.”

EdTech #389 AI@Teaching – AICFT 1.1 Human agency – Learning Objective 1.1.1

“Write an essay to present your views on the benefits, limitations and risks of using facial recognition (or the auto-correct function of generative AI, or another common AI tool) in education.”

Automated Essay Scoring: Develop a case study on AI-based grading systems. Critically assess how these tools can offer timely feedback and reduce teacher workload, while also considering risks such as bias in evaluation, loss of qualitative feedback, and potential devaluation of teacher expertise.

AI-Assisted Grading Tools – Examples

Adaptive Learning Platforms: Design a reflective journal assignment where teachers analyze the implementation of AI-powered adaptive learning systems in their classrooms. Explore the benefits in personalizing learning experiences alongside risks like reduced student autonomy or the digital divide among learners.

Chatbots in Educational Services: Organize a debate or panel discussion on the use of chatbots as virtual teaching assistants. Ask participants to reflect on how these tools might improve access to information and support outside class time, but also whether they might undermine the teacher–student relationship or create dependency on technology.

Instructional Chatbots/Interactive Experiences

Surveillance and Classroom Monitoring: Write a reflective essay on the implications of using AI-enhanced classroom management tools (such as video analytics or behavior-tracking systems). Consider how these systems might improve safety and efficiency while weighing concerns about student privacy, autonomy, and the overall classroom climate.

LO 1.1.2 “Demonstrate an awareness that AI is human-led and the corporate and individual decisions of AI creators affect the impacts on human rights, human agency, individual lives, and societies.”

“Design a poster or digital presentation on how the individual and corporate decisions of AI creators may affect teachers’ rights, and the agency of both teachers and students.”

Develop a case study analysis: Teachers select a real‐world AI application (for example, in facial recognition or personalized learning) and examine its entire life cycle. They identify key decision points where corporate or individual choices influenced the system’s ethical outcomes and discuss the implications for human rights and teacher–student autonomy.

Organize a role‐playing simulation: Teachers form groups where some act as AI developers, others as corporate executives, regulators, and educators. In a simulated stakeholder meeting, they debate and negotiate decisions at different stages of the AI system’s life cycle, emphasizing how these choices affect human agency and educational rights.

Create a detailed flowchart or infographic: Teachers map out the AI system life cycle, annotating each step with potential ethical dilemmas. They then add commentary on how human-led decisions at each node can shape the tool’s societal impact, particularly regarding the rights and autonomy of teachers and learners.

Design a policy proposal: In collaborative groups, teachers draft a policy brief that outlines recommendations for integrating ethical checkpoints into the AI development process. The proposal would address how corporate and individual decisions should be monitored to safeguard human autonomy and rights in educational contexts.

Lead a structured debate or panel discussion: Teachers prepare and present arguments on the need for stricter regulation of AI systems in education. They use current examples to explore how decisions made during AI creation influence issues like data privacy, bias, and the balance of power between technology providers and educators.

LO 1.1.3 “Outline the role of humans in the basic steps involved in AI development, from the collection and processing of data to the design of algorithms and functionalities of an AI system, to the deployment and use of AI tools.”

“Exemplify an AI tool that should be banned according to the EU AI Act and explain why.”

Develop a visual timeline or flowchart that maps each step of AI development: from data collection and processing, through algorithm design, testing, and deployment—highlighting specific points where human intervention, judgment, and ethical decisions are required. Then, have students reflect on how reducing human oversight at any stage might lead to overreliance on automated processes that can undermine critical thinking.

Conduct a case study analysis of a real-world AI failure (for example, an algorithm that produced biased outcomes). Students would identify how human decisions (or their absence) in data curation, algorithm design, and testing contributed to the failure, and discuss how overreliance on the automated system compromised human agency.

Organize a role-playing simulation in which students assume different roles (data scientist, AI developer, regulator, and educator) to collaboratively design an AI tool. At each stage, they must debate and decide on the human inputs necessary to ensure ethical design and functionality. Afterwards, they can discuss how eliminating or minimizing these human roles might lead to diminished critical thinking and agency.

Design a poster or digital presentation that outlines a “human-centered AI development cycle.” Students should detail the human responsibilities at every stage of the process and include a section that argues how excessive reliance on AI—without robust human oversight—can erode our ability to think critically and make informed decisions.

LO 1.1.4 “Understand the need to use basic measures to protect human agency in key steps regarding the design and use of AI systems by ensuring respect for data ownership, collection of data with consent, anti-bias data labelling and cleaning, discrimination-free AI algorithms, and user-friendly functions and interfaces.”

“Draft a list of daily tips to promote teachers’ autonomous use of AI and to encourage student agency.”

Create an AI ethics checklist: Have teachers design a checklist that outlines key measures to protect human agency in AI tools. This checklist should include criteria for ensuring data ownership, explicit consent for data collection, anti-bias labelling, and accessible design features—especially tailored for students with special needs. Teachers can then use the checklist to evaluate existing AI systems or guide the development of new ones.

Conduct a role-playing simulation: Organize a classroom simulation where participants take on roles such as AI developers, data privacy experts, special needs educators, and policy-makers. In the simulation, each group debates design decisions at various stages of an AI tool’s lifecycle (data collection, algorithm design, deployment) and discusses how choices can either protect or undermine human agency. This exercise highlights the importance of human oversight and ethical decision-making.

Draft a policy brief: Ask teachers to prepare a short policy brief recommending best practices for integrating AI in education. The brief should emphasize the need for measures that protect human agency throughout the AI lifecycle—from data collection to interface design—with a special focus on ensuring accessibility and fairness for students with special needs. This exercise not only reinforces their understanding but also encourages them to advocate for responsible AI practices in their institutions.

Prototype a user-friendly AI tool concept: Teachers can design a mock-up or storyboard of an AI tool intended for classroom use, incorporating features such as clear consent prompts, mechanisms for reporting bias, and customizable interfaces for diverse learning needs. This activity helps them apply the principles of respectful data handling and inclusive design, ensuring that human agency is maintained even as AI assists in educational tasks.

CONTEXTUAL ACTIVITIES

EdTech #387 AI@Teaching – AICFT 1.1 Human agency – CONTEXTUAL ACTIVITIES

Unpack hype around AI: “Critically examine hype around concrete AI tools through basic risk-benefit analysis and by highlighting the central role of humans in using AI tools.”

Understand why some AI tools should be banned: “Demonstrate a basic understanding of why some AI tools should be banned given their potential to diminish human agency and threaten human rights.”

Spotlight risks: “List the potential ways in which teachers’ and students’ agency may be undermined by certain AI tools, as is the case, for example, with the use of large language models for essay writing.”

Know basic dos and don’ts: “Write daily tips to promote human agency when using AI in teaching and to encourage student agency in harnessing and assessing AI.”

CONTEXTUAL PERFORMANCE-BASED ASSESSMENT TOOLS

EdTech #388 AI@Teaching – AICFT 1.1 Human agency – PERFORMANCE-BASED ASSESSMENT TOOLS

Table 5. An example of designing assessment tools based on the AI CFT


Design assessment methods and items relevant to the domain of competency and the expected mastery level

“Write an essay to present your views on the benefits, limitations and risks of using facial recognition (or the auto-correct function of generative AI, or another common AI tool) in education.”

“Design a poster or digital presentation on how the individual and corporate decisions of AI creators may affect teachers’ rights, and the agency of both teachers and students.”

“Exemplify an AI tool that should be banned according to the EU AI Act and explain why.”

“Draft a list of daily tips to promote teachers’ autonomous use of AI and to encourage student agency.”

Grading criteria for performance and latent competencies

(To be specified in accordance with the adapted learning objectives and the type of the assessment items)

RESOURCES

SUPERAGENCY.AI WEBSITE

The Future of Human Agency


UNESCO AI Competency Framework for Teachers

Sustainable@EDU POLICY FRAMEWORK ARCHITECTURE LANDING PAGE
