
CNTR AISLE Framework V1: Glossary

This glossary defines the terms used in the questions of the Center for Technological Responsibility’s AI Legislation Evaluation Framework V1 (CNTR AISLE Framework).

This is part of the AI Legislative Mapping project at the CNTR at Brown University.

Please contact us or raise an issue at https://github.com/brown-cntr/cntr-aisle/ if you have questions or suggestions.

Version: CNTR-AISLE-V1

Updated: March 12, 2025

Public-sector use

Definition: The use or procurement of AI / ADS by federal, state, or local government agencies or entities

Potential relevant categories: General

Private-sector use

Definition: The use or procurement of AI / ADS by private businesses, corporations, or nongovernmental organizations

Potential relevant categories: General

Domain

Definition: The context to which the bill applies, such as healthcare, employment, entertainment, insurance, finance, or housing

Potential relevant categories: General, Labor Force

Impact/Risk Assessment (IA/RA)

Definition: A process that assesses the potential impacts or risks of an action, or a system’s relative benefits and costs, before implementation

Potential relevant categories: Accountability & Transparency

Reference: https://bipartisanpolicy.org/blog/impact-assessments-for-ai/

Covered Companies

Definition: Companies that are subject to the law

Potential relevant categories: Accountability & Transparency

Stakeholders

Definition: Those with an interest in or affected by the bill and its outcomes

Potential relevant categories: Accountability & Transparency

Civil Recourse

Definition: The legal means through which individuals can seek redress for harms done to them

Potential relevant categories: Accountability & Transparency

Algorithmic Inaccuracy

Definition: Errors or bias in the functioning of algorithms

Potential relevant categories: Accountability & Transparency

Pre-Deployment

Definition: The testing phase before an AI system’s launch

Potential relevant categories: Accountability & Transparency, Data Protection

Post-Deployment

Definition: Ongoing monitoring after an AI system is launched

Potential relevant categories: Accountability & Transparency, Data Protection

Lifecycle

Definition: The iterative process of moving from a business problem to an AI solution that solves that problem

Potential relevant categories: Accountability & Transparency

Reference: https://coe.gsa.gov/coe/ai-guide-for-government/understanding-managing-ai-lifecycle/

Privacy Harms

Definition: Diverse effects on individuals resulting from the processing or misuse of personal data, such as embarrassment, discrimination, identity theft, financial loss, or loss of autonomy and trust

Potential relevant categories: Accountability & Transparency

Reference: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4195066

“For example, the development and use of AI systems can cause privacy harms. Some privacy harms, such as identity theft, can be readily observed and measured. Others, such as harms to dignity or autonomy, or for the more concretely minded, exposure to future risks of unauthorized disclosure or identity theft, cannot.”

Data Subject

Definition: An individual whose personal data is processed

Potential relevant categories: Accountability & Transparency

Operational Context

Definition: The environment or circumstances in which an AI system operates

Potential relevant categories: Accountability & Transparency

Assess/Benchmark/Monitor

Definition: These terms refer, respectively, to evaluating a system, comparing it against standards, and observing it for changes; they may need clearer explanation in a technical or legal context

Potential relevant categories: Accountability & Transparency

Decommissioning

Definition: The process of shutting down or discontinuing a system

Potential relevant categories: Accountability & Transparency

Transparency Report

Definition: Regular public reports on platform usage and governance practices, such as takedown requests and policy enforcement.

Potential relevant categories: Accountability & Transparency

Reference: Page 1, https://doi.org/10.48550/arXiv.2402.16268

Risk Regulation

Definition: Rules or procedures intended to control or mitigate risks, broadly covering three categories: precautionary tactics, risk analysis and mitigation, and post-market measures

Potential relevant categories: Accountability & Transparency

Reference: Page 1370, Kaminski, Margot E., Regulating the Risks of AI (August 19, 2022). Boston University Law Review, Vol. 103:1347, 2023, U of Colorado Law Legal Studies Research Paper No. 22-21, Available at SSRN: https://ssrn.com/abstract=4195066 or http://dx.doi.org/10.2139/ssrn.4195066

Precautionary measures

Definition: Safety-focused measures such as bans, licensing, and sandboxing that ensure technologies are proven safe before use

Potential relevant categories: Accountability & Transparency

Reference: Page 1371, Kaminski, Margot E., Regulating the Risks of AI (August 19, 2022). Boston University Law Review, Vol. 103:1347, 2023, U of Colorado Law Legal Studies Research Paper No. 22-21, Available at SSRN: https://ssrn.com/abstract=4195066 or http://dx.doi.org/10.2139/ssrn.4195066

Licensing regime

Definition: A formal system for regulating and granting permissions for the operation of AI systems.

Potential relevant categories: Accountability & Transparency

Reference: Page 1388, Kaminski, Margot E., Regulating the Risks of AI (August 19, 2022). Boston University Law Review, Vol. 103:1347, 2023, U of Colorado Law Legal Studies Research Paper No. 22-21, Available at SSRN: https://ssrn.com/abstract=4195066 or http://dx.doi.org/10.2139/ssrn.4195066

See also (Kaminski, footnote 221): Andrew Tutt; Gianclaudio Malgieri & Frank Pasquale, From Transparency to Justification: Toward Ex Ante Accountability for AI 10-14 (Brooklyn L. Sch., Legal Studies Paper No. 712, Brussels Priv. Hub, Working Paper No. 33, 2022), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4099657

Conditional Licensing

Definition: Licensing granted with specific restrictions or promised guardrails, requiring ongoing compliance to maintain approval.

Potential relevant categories: Accountability & Transparency

Reference: Page 1388, Kaminski, Margot E., Regulating the Risks of AI (August 19, 2022). Boston University Law Review, Vol. 103:1347, 2023, U of Colorado Law Legal Studies Research Paper No. 22-21, Available at SSRN: https://ssrn.com/abstract=4195066 or http://dx.doi.org/10.2139/ssrn.4195066

Post-market measures

Definition: Risk regulation tools like monitoring, compliance checks, and failsafes applied after a product is in use

Potential relevant categories: Accountability & Transparency

Reference: Page 1372, Kaminski, Margot E., Regulating the Risks of AI (August 19, 2022). Boston University Law Review, Vol. 103:1347, 2023, U of Colorado Law Legal Studies Research Paper No. 22-21, Available at SSRN: https://ssrn.com/abstract=4195066 or http://dx.doi.org/10.2139/ssrn.4195066

Information-forcing

Definition: The ability to compel entities to disclose necessary information.

Potential relevant categories: Accountability & Transparency

Resilience

Definition: The ability of a system to recover from faults or challenges (e.g., through kill switches or emergency protocols).

Potential relevant categories: Accountability & Transparency

Reference: Marchant & Stevens, supra note 19, at 236. (“While there has been some confusion in the literature about whether risk analysis is part of resilience or resilience is part of risk analysis, the two approaches are distinct but complementary.”)

A Right to Privacy

Definition: References to personal data protection, data security, or confidentiality clauses can be read as referring to a right to privacy

Potential relevant categories: Data Protection

Data Minimization

Definition: Collecting only the personal data necessary for a specific purpose

Potential relevant categories: Data Protection

Private Right of Action

Definition: The ability of individuals to sue an organization for damages, or to compel compliance, when their rights are violated

Potential relevant categories: Data Protection

Reference: Page 16, “Enforcing a New Privacy Law”, https://www.newamerica.org/oti/reports/enforcing-new-privacy-law/

Data Retention

Definition: Policies on how long data is stored before secure deletion

Potential relevant categories: Data Protection

Algorithmic Discrimination

Definition: When automated systems contribute to unjustified different treatment or impacts disfavoring people based on their race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law.

Potential synonyms: intentional differentiation, unjustified differential treatment, unlawful discriminatory practice, risk to rights and freedoms, systematic bias, automation bias

Potential relevant categories: Bias & Discrimination

Reference: https://www.whitehouse.gov/ostp/ai-bill-of-rights/definitions/

Unfair Treatment

Definition: Unlawful discriminatory practice, risk to rights and freedoms, or systematic bias

Potential relevant categories: Bias & Discrimination

Data sources

Definition: Origins or repositories of the data used to train, validate, and test machine learning models or other AI systems. Data sources may include public datasets, private-sector data, crowdsourced content, and historical records.

Potential relevant categories: Bias & Discrimination

Entity

Definition: A new organization, typically a government body or oversight committee, established to implement, regulate, or enforce specific aspects of the legislation. It may also focus on activities such as monitoring compliance, issuing guidelines, or supporting research.

Potential relevant categories: Institution

Measurable Goals

Definition: Specific, quantifiable goals that an institution or agency aims to achieve.

Potential relevant categories: Institution

Regulatory Reports

Definition: Reports created by institutions to assess compliance, performance, or impact related to specific regulations.

Potential relevant categories: Institution

AI Economy

Definition: Economic activities, industries, and jobs that arise from the development, deployment, and integration of artificial intelligence technologies. Components of the AI economy may include development (research in AI tools), implementation (applications in industries such as healthcare, finance, etc.), AI services and support, or AI transformation.

Potential relevant categories: Labor Force

Reference: https://www.europarl.europa.eu/RegData/etudes/BRIE/2019/637967/EPRS_BRI(2019)637967_EN.pdf

Partners

Definition: Organizations, institutions, or groups tasked with responsibilities such as conducting research and analyzing data with regard to the bill’s objectives. This may include governmental bodies, private organizations, academic institutions, research groups, or industry partnerships/stakeholders.

Potential relevant categories: Labor Force

AI Skills

Definition: Competencies, knowledge, and abilities required to develop, deploy, manage, and work alongside artificial intelligence systems. These skills can be both technical (programming, cybersecurity, etc.) and non-technical (ethics and policy understanding, etc.).

Potential relevant categories: Labor Force