
March 4, 2026

Bills to Establish Guardrails for AI in Health Care Pass Committee

DENVER, CO – The House Health & Human Services Committee today passed two bills to establish necessary guardrails for artificial intelligence (AI) in health care. HB26-1195 and HB26-1139 would ensure patients’ continued access to health care provided by a licensed human professional.


“Without input or oversight from a licensed professional, AI chatbots can be mistaken for legitimate therapy. This practice is dangerous for patients, and we need to ensure Coloradans are protected and informed,” said Rep. Gretchen Rydin, D-Littleton, sponsor of HB26-1195. “This bill establishes reasonable protective measures on the use of AI in mental and behavioral health care, including prohibiting the use of AI to directly interact with a patient for therapy. Licensed professionals would still be allowed to use AI for administrative purposes, but clinical interventions such as psychotherapy must be administered by a licensed human.”


“Colorado patients deserve access to real, human-centered care,” said Rep. Javier Mabrey, D-Denver, sponsor of HB26-1195. “This timely bill outlines important guardrails for AI in health care. AI chatbots are biased, unlicensed tools, and they should not be used for therapy or treatment recommendations without oversight. Our legislation protects patients while still allowing AI to be used for administrative tasks, such as scheduling and note-taking.”


HB26-1195 passed committee by a vote of 13-0. The bill would create guardrails for AI chatbots so that patients can make informed choices when using them. It also sets standards in clinical settings, limiting the use of AI to administrative tasks with oversight by a licensed professional. To ensure patients receive legitimate behavioral health care, the bill requires that psychotherapy be delivered by a licensed human professional, such as a social worker, psychologist or addiction counselor.

On the consumer protection front, the legislation would prohibit AI chatbots from being marketed to patients as providing the same level of care as a licensed psychotherapist or counselor. AI chatbots would also be barred from implying that their responses or suggestions are equivalent to psychotherapy services. Providers would be required to disclose any use of AI for supplementary support, such as recording or transcribing sessions.


HB26-1139 passed committee by a vote of 8-5. This bill would establish important guardrails for AI systems in health care to ensure insurance coverage decisions are transparent, accountable and subject to human oversight. 


“Human-centered care must be front and center in Colorado’s health care system,” said Rep. Junie Joseph, D-Boulder, sponsor of HB26-1139. “Our bill establishes necessary guidelines for professional use of AI in health insurance coverage determinations, especially in the case of a denial. In Colorado, we’re safeguarding access to licensed, professional care, so that anyone, no matter their income level or zip code, has the opportunity to meet with a licensed human professional.” 


“All patients in Colorado deserve to have a real human adjudicate their health insurance determinations,” said Rep. Sheila Lieder, D-Littleton, sponsor of HB26-1139. “This bill states clearly that while AI may be used to expedite approvals in care coverage, a qualified human must review denials made by AI. Health care is nuanced, and we need to ensure the use of AI in coverage decisions is not solely based on group data. This bill works to ensure that licensed professionals use AI responsibly to make individualized decisions to keep Colorado patients safe and healthy.” 


Under this bill, if an AI system recommends denying coverage for a patient, the final decision must come from a qualified human after review. To protect patients against algorithmic bias, decisions to deny health care coverage must be based on an individual’s medical history and clinical circumstances, rather than solely on group data that may not reflect an individual’s unique needs.


In 2025, researchers at Stanford University recommended that large language models (LLMs), which power AI chatbots, “should not replace therapists.” The researchers also concluded that “LLMs express stigma toward those with mental health conditions and respond inappropriately to certain common (and critical) conditions.”


Top AI companies, including OpenAI, Google and Character.AI, are facing lawsuits from families after AI chatbots recommended suicide to people seeking behavioral health advice or support. Last year, parents of children who died by suicide testified before Congress, stating that AI chatbots discouraged their teens from seeking help.

