FILE – The logo for OpenAI, the maker of ChatGPT, appears on a mobile phone, in New York, Tuesday, Jan. 31, 2023. (AP Photo/Richard Drew, file)

The beginning of the Winter term at Colorado colleges and universities and the opening of the legislative session share a familiar hum: a cacophony of chatter as students swarm into dorms and first classes, matched by the brisk tones of lawmakers huddled in early negotiations.

But the loudest chatter at the intersection of education, technology, public policy, and the economy is not coming from students, professors, or legislators at all. It’s coming from AI systems — from OpenAI, Anthropic, Google, Meta, and a tumult of new ed-tech entrants — reshaping the learning landscape in an innovation cycle we haven’t seen since the dawn of the internet.

The velocity of this transformation requires us to shift our focus from whether AI will reshape education to how, in practice, we can secure its pedagogical promise and deter its potential to undermine the very foundations of equitable, human-centered learning.

This is not a short-sighted view; fundamental elements of the educational experience have shifted in under a year. Chatbots draft essays, AI spins up rubrics and assessments, and those same systems evaluate student work. Algorithms, and soon agentic AI, will influence admissions policies, curriculum design, advising, workforce readiness, even accreditation.

Done well, AI can expand access, personalize learning, reduce human error, and free educators to do what colleges do best: teach, mentor, and inspire. Done poorly, it can hard-code bias, shift oversight from people to machines, obscure how decisions are made, call into question what a degree signifies, and erode trust.

State governments, institutions, ed-tech providers, and AI companies are racing to strike the right balance: expanding opportunity while preserving transparency and human agency, without unleashing unmonitored, unaccountable systems.

Colorado chose to meet this moment with clear, pragmatic rules. In 2024, the legislature passed a first-in-the-nation framework for “high-risk” AI: systems that make, or meaningfully assist in making, consequential decisions, including in education. The law requires developers and deployers to manage risk, disclose when AI is used, and provide explanations and a path to human review when AI contributes to an adverse outcome. In short: innovation, yes, but with transparency, accountability, and human agency.

Because implementation matters, lawmakers returned in 2025 to adjust timelines so schools and universities could build capacity without the chaos of a rushed rollout. The additional time preserves core protections while giving campuses room to stand up governance, testing, training, and procurement standards. It’s the difference between treating AI compliance as a new bureaucracy and pursuing responsible adoption.

In the Legislature, I (Rep. Carter) brought a practical, bipartisan lens to this work. As both a legislator and a local school board member, I pressed for common-sense disclosures (tell people when they’re interacting with AI), for aligning AI with existing consumer-protection and anti-discrimination laws, and for a realistic on-ramp so institutions aren’t forced into hasty, expensive choices. The goal wasn’t to slow the technology or to delay protections; it was to put students first while ensuring public institutions can comply without diverting dollars from classrooms and student services.

From the vantage point of educational technology, the parallel lesson is simple: responsible AI must be measured against pedagogical standards and must pass a “do no harm” test. Tools that are transparent, explainable, accessible, and designed with equity in mind deepen learning and strengthen trust. Tools that hide their logic, obscure the decisions they affect, or exclude human oversight cannot last.

As leaders at the intersection of public policy and educational technology, we see the potential to turn Colorado’s guardrails into academic gains: 

• Center students’ rights. When AI influences admissions, aid, placement, or academic standing, students deserve plain-language notices, clear explanations, and a guaranteed route to human appeal. That’s not red tape; it’s a foundation for trust and legitimacy.

• Recognize a spectrum of risk. An AI tutor supporting self-study is not the same as an algorithm ranking applicants. Compliance frameworks should tier oversight, focusing the most rigorous controls on systems that materially affect opportunities and outcomes.

• Design AI to widen — not narrow — opportunity. Adaptive practice, writing feedback, and early-alert systems can close preparation gaps — if they’re monitored for disparate impact. Institutions and AI providers should co-own equity audits, include human-in-the-loop governance, and track outcomes across demographics, ensuring tools inspire rather than hamper the potential of every learner.

• Build capacity, not just policies. Campuses need inclusive processes to create policies – drawing on faculty, student affairs, disability services, IT, institutional research, and legal/ethics expertise – and the means to make decisions that govern procurement, piloting, data use, and post-deployment monitoring. Ed-tech partners should meet them in this process with clear data-handling practices and privacy policies, not just the ones buried in endless Terms of Use.

• Provide clear guidance and adequate time. The June 2026 deadline is approaching. Institutions and technology providers need concrete, operational guidance – recognized frameworks, not just broad standards – and realistic timelines.

• Create practical safe harbors. Organizations demonstrating good-faith efforts – risk management, testing, documentation, user disclosures – should receive reasonable protections and cure periods. Safe harbors encourage responsible experimentation rather than chilling innovation.

• Prove it works. “AI-powered” isn’t a learning outcome. Providers should publish evidence of efficacy – not just engagement metrics – and support independent evaluation through data sharing. Campuses should prioritize tools with transparent mechanisms, measurable learning gains, and accessible design from day one.

Colorado’s pioneering role carries real responsibility. Other states are watching to see whether comprehensive frameworks can work or whether narrower, domain-specific approaches prove more practical. Our success or failure will shape AI governance nationwide for years to come. 

The 2026 session offers an opportunity to refine Colorado’s approach based on early implementation: clarifying ambiguities, addressing operational concerns, and ensuring the law fulfills its purpose of preventing discrimination while enabling beneficial innovation.

AI may well write the future of higher education if we do not step in. Stepping in means insisting that technology serves learning and that every consequential decision remains grounded in fairness, transparency, and human judgment. That is how we protect students and power innovation at the same time.

Democratic Rep. Michael Carter represents Aurora in Colorado’s House District 36 and served as vice chair of the House Judiciary Committee during the 2025 special session on AI regulation. He is also a former vice president of the Aurora Public Schools Board. Charles Linsmeier is an educational technology leader focused on digital transformation in higher education and formerly served as Executive Vice President and General Manager at Macmillan Learning.
