The Colorado artificial intelligence (AI) law ("Colorado AI law") will take
effect June 30, 2026. This article discusses the Colorado AI law developer statement,
documentation, and disclosure requirements.
Developer Statement, Documentation, and Disclosure Requirements
On and after June 30, 2026, and except as provided in C.R.S. §
6-1-1702(6), a developer of a high-risk AI system shall make available to the deployer
or other developer of the high-risk AI system (a) a general statement describing the
reasonably foreseeable uses and known harmful or inappropriate uses of the high-risk AI
system, (b) documentation disclosing (i) high-level summaries of the type of data used
to train the high-risk AI system, (ii) known or reasonably foreseeable limitations of
the high-risk AI system, including known or reasonably foreseeable risks of algorithmic
discrimination arising from the intended uses of the high-risk AI system, (iii) the
purpose of the high-risk AI system, (iv) the intended benefits and uses of the high-risk
AI system, and (v) all other information necessary to allow the deployer to comply with
the requirements of C.R.S. § 6-1-1703, (c) documentation describing (i) how the
high-risk AI system was evaluated for performance and mitigation of algorithmic
discrimination before the high-risk AI system was offered, sold, leased, licensed,
given, or otherwise made available to the deployer, (ii) the data governance measures
used to cover the training datasets and the measures used to examine the suitability of
data sources, possible biases, and appropriate mitigation, (iii) the intended outputs of
the high-risk AI system, (iv) the measures the developer has taken to mitigate known or
reasonably foreseeable risks of algorithmic discrimination that may arise from the
reasonably foreseeable deployment of the high-risk AI system, and (v) how the high-risk
AI system should be used, not be used, and be monitored by an individual when the
high-risk AI system is used to make, or is a substantial factor in making, a
consequential decision, and (d) any additional documentation that is reasonably
necessary to assist the deployer in understanding the outputs and monitoring the
performance of the high-risk AI system for risks of algorithmic discrimination. C.R.S. §
6-1-1702(2).
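Because § 6-1-1702(2) enumerates a fixed set of artifacts, a developer's compliance team may find it useful to track them as a structured checklist. The sketch below is illustrative only and is not legal advice; the field names are our own labels for the statutory items, not statutory terms of art.

```python
from dataclasses import dataclass, field

# Illustrative checklist of the developer artifacts described in
# C.R.S. § 6-1-1702(2). Field names are informal labels mapped to the
# statute's (a)-(d) structure; they are not statutory terminology.
@dataclass
class DeveloperDisclosurePacket:
    general_statement: str                  # (a) foreseeable and harmful/inappropriate uses
    training_data_summary: str              # (b)(i) high-level summary of training data types
    known_limitations: list = field(default_factory=list)  # (b)(ii) limitations and discrimination risks
    system_purpose: str = ""                # (b)(iii) purpose of the system
    intended_benefits_and_uses: str = ""    # (b)(iv) intended benefits and uses
    deployer_compliance_info: str = ""      # (b)(v) info the deployer needs for § 6-1-1703
    evaluation_summary: str = ""            # (c)(i) pre-release performance/discrimination evaluation
    data_governance_measures: str = ""      # (c)(ii) training-data governance and bias examination
    intended_outputs: str = ""              # (c)(iii) intended outputs
    usage_and_monitoring_guidance: str = "" # (c)(v) proper use, prohibited use, human monitoring
    additional_monitoring_docs: list = field(default_factory=list)  # (d) other monitoring documentation

    def missing_items(self) -> list:
        """Return names of required free-text fields that are still empty."""
        required = [
            "general_statement", "training_data_summary", "system_purpose",
            "intended_benefits_and_uses", "deployer_compliance_info",
            "evaluation_summary", "data_governance_measures",
            "intended_outputs", "usage_and_monitoring_guidance",
        ]
        return [name for name in required if not getattr(self, name).strip()]
```

A packet with empty fields would surface the gaps via `missing_items()` before anything is delivered to a deployer.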
Except as provided in C.R.S. § 6-1-1702(6), a developer that offers,
sells, leases, licenses, gives, or otherwise makes available to a deployer or other
developer a high-risk AI system on or after June 30, 2026, shall make available to the
deployer or other developer, to the extent feasible, the documentation and information,
through artifacts such as model cards, dataset cards, or other impact assessments,
necessary for a deployer, or for a third party contracted by a deployer, to complete an
impact assessment pursuant to C.R.S. § 6-1-1703(3). C.R.S. § 6-1-1702(3)(a). A developer
that also serves as a deployer for a high-risk AI system is not required to generate the
documentation required hereby unless the high-risk AI system is provided to an
unaffiliated entity acting as a deployer. C.R.S. § 6-1-1702(3)(b).
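The statute names model cards and dataset cards as example artifacts but prescribes no schema. The dictionary below is a hypothetical sketch of how the § 6-1-1702(3)(a) content might be organized in a model-card-style document; the keys, and the example model name, are our own assumptions.

```python
# Illustrative, non-authoritative mapping of C.R.S. § 6-1-1702(3)(a)
# content to a simple model-card-style structure. The statute does not
# mandate this (or any) schema.
model_card = {
    "model_name": "example-underwriting-model",  # hypothetical name
    "intended_use": "Support, not replace, human review of consequential decisions.",
    "out_of_scope_uses": ["Sole basis for a consequential decision"],
    "training_data": {
        "summary": "High-level description of the types of data used to train the system.",
        "governance": "Measures examining data suitability, possible biases, and mitigation.",
    },
    "evaluation": {
        "performance": "How the system was evaluated before being made available.",
        "algorithmic_discrimination": "Known or reasonably foreseeable risks and mitigations.",
    },
    "impact_assessment_support": (
        "Information a deployer, or a third party contracted by the deployer, "
        "needs to complete an impact assessment under C.R.S. § 6-1-1703(3)."
    ),
}
```

The point of the structure is simply that each statutory topic has a designated place, so omissions are visible at a glance.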
On and after June 30, 2026, a developer shall make available, in a
manner that is clear and readily available on the developer's website or in a public use
case inventory, a statement summarizing (i) the types of high-risk AI systems that the
developer has developed or intentionally and substantially modified and currently makes
available to a deployer or other developer, and (ii) how the developer manages known or
reasonably foreseeable risks of algorithmic discrimination that may arise from the
development or intentional and substantial modification of the types of high-risk AI
systems described in C.R.S. § 6-1-1702(4)(a)(i). C.R.S. §
6-1-1702(4)(a). A developer shall update the statement described in C.R.S. §
6-1-1702(4)(a) (i) as necessary to ensure that the statement remains accurate, and (ii)
no later than 90 days after the developer intentionally and substantially modifies any
high-risk AI system described in C.R.S. § 6-1-1702(4)(a)(i). C.R.S. § 6-1-1702(4)(b).
On and after June 30, 2026, a developer of a high-risk AI system
shall disclose to the Colorado attorney general, in a form and manner prescribed by the
Colorado attorney general, and to all known deployers or other developers of the
high-risk AI system, any known or reasonably foreseeable risks of algorithmic
discrimination arising from the intended uses of the high-risk AI system without
unreasonable delay but no later than 90 days after the date on which (a) the developer
discovers through the developer's ongoing testing and analysis that the developer's
high-risk AI system has been deployed and has caused or is reasonably likely to have
caused algorithmic discrimination, or (b) the developer receives from a deployer a
credible report that the high-risk AI system has been deployed and has caused
algorithmic discrimination. C.R.S. § 6-1-1702(5).
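The 90-day outer bound in § 6-1-1702(5) runs from a discrete trigger date (the developer's discovery, or its receipt of a credible deployer report), so the ceiling is straightforward to compute. This sketch is illustrative only; note that the statute separately requires disclosure "without unreasonable delay," so the computed date is a ceiling, not a target.

```python
from datetime import date, timedelta

def disclosure_deadline(trigger_date: date, window_days: int = 90) -> date:
    """Return the last day of the statutory disclosure window under
    C.R.S. § 6-1-1702(5), i.e., 90 days after the developer discovers,
    or receives a credible report of, algorithmic discrimination caused
    by a deployed high-risk AI system."""
    return trigger_date + timedelta(days=window_days)

# A trigger on July 1, 2026 yields an outer deadline of September 29, 2026.
print(disclosure_deadline(date(2026, 7, 1)))  # 2026-09-29
```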
Nothing in C.R.S. § 6-1-1702(2) to (5) requires a developer to disclose a trade secret, information protected from disclosure by state or federal law, or information that would create a security risk to the developer. C.R.S. § 6-1-1702(6).
On and after June 30, 2026, the Colorado attorney general may require that a
developer disclose to the Colorado attorney general, no later than 90 days after the
request and in a form and manner prescribed by the Colorado attorney general, the
statement or documentation described in C.R.S. § 6-1-1702(2). C.R.S. § 6-1-1702(7). The
Colorado attorney general may evaluate such statement or documentation to ensure
compliance with the Colorado AI law, and the statement or documentation is not subject
to disclosure under the "Colorado Open Records Act," Part 2 of Article 72 of Title 24.
C.R.S. § 6-1-1702(7). In a disclosure pursuant hereto, a developer may designate the
statement or documentation as including proprietary information or a trade secret.
C.R.S. § 6-1-1702(7). To the extent that any information contained in the statement or
documentation includes information subject to attorney-client privilege or work-product
protection, the disclosure does not constitute a waiver of the privilege or protection.
C.R.S. § 6-1-1702(7).
On and after June 30, 2026, a developer of a high-risk AI system
shall use reasonable care to protect consumers from any known or reasonably foreseeable
risks of algorithmic discrimination arising from the intended and contracted uses of the
high-risk AI system. C.R.S. § 6-1-1702(1). In any enforcement action brought on or after
June 30, 2026, by the Colorado attorney general pursuant to C.R.S. § 6-1-1706, there is
a rebuttable presumption that a developer used reasonable care as required hereunder if
the developer complied herewith and any additional requirements or obligations as set
forth in rules adopted by the Colorado attorney general pursuant to C.R.S. § 6-1-1707.
C.R.S. § 6-1-1702(1).
Opinions expressed in Expert Commentary articles are those of the author and are not necessarily held by the author's employer or IRMI. Expert Commentary articles and other IRMI Online content do not purport to provide legal, accounting, or other professional advice or opinion. If such advice is needed, consult with your attorney, accountant, or other qualified adviser.