DENVER, CO – The House Business Affairs and Labor Committee today passed a bill that would create safeguards around AI chatbots to protect Colorado kids. HB26-1263, sponsored by Representatives Sean Camacho and Javier Mabrey, passed by a vote of 10-3.
“The unfortunate reality is that AI chatbots have encouraged suicides and engaged in romantic interactions with minors. Our bill would protect users, especially children, from misleading AI chatbot conversations,” said Rep. Sean Camacho, D-Denver. “As a parent, it is unsettling to know that unchecked AI chatbots can put children in harm's way, especially when children show signs of depression or suicidal ideation. Our legislation improves transparency and safeguards around AI chatbots to protect Colorado children from manipulative and dangerous AI technology.”
“AI chatbots have posed as licensed mental health professionals or as romantic partners, which has led to emotional dependence and even suicide,” said Rep. Javier Mabrey, D-Denver. “Our phones have become an extension of ourselves, making these AI chatbots available at kids’ fingertips. Our bill establishes guidelines to prevent the gamification of chatbots, prohibit AI from generating or engaging in sexually explicit content and require AI companies to provide resources to users who express mental health struggles.”
Beginning January 1, 2027, HB26-1263 would implement safeguards around artificial intelligence (AI) chatbots, particularly as they interact with children. The bill would require AI developers to provide a clear and visible disclosure to minors that the AI chatbot is artificially generated and not human, and would prohibit the use of rewards to encourage engagement.
AI developers would be required to take reasonable steps to prevent AI chatbots from generating sexually explicit content or engaging in sexually explicit conversations with minors. Developers would also be required to prevent AI chatbots from fostering emotional dependence by falsely claiming to be human, generating romantic or sexual conversations, or engaging in role-play with a minor.
The bill also requires AI developers to offer parental controls if their chatbots are accessible to children under the age of 13. HB26-1263 would further require AI chatbots to provide suicide-prevention resources to users who express suicidal thoughts or an interest in self-harm, and platforms would be required to file reports on how often a chatbot flags suicidal or self-harm behaviors.
The American Psychological Association has warned that, while AI chatbots are low-cost and accessible, they lack the regulations necessary to guarantee that they are used safely.
When a 16-year-old told an AI chatbot about his suicidal thoughts and plans last year, the chatbot discouraged him from telling his parents and offered to write his suicide note. A 14-year-old also died by suicide after his AI chatbot engaged in sexual role play and falsely claimed to be a psychotherapist.