Decision Rights and Escalation in Different Models
Decision rights and escalation in different models refer to the frameworks that define who has the authority to make decisions regarding AI systems and how those decisions can be e...
A-Z Index
Browse concept cards whose titles begin with D. This is useful when you want an alphabetical view of the library rather than browsing by governance topic or category.
Decision rights in AI governance refer to the allocation of authority and responsibility for making decisions regarding AI systems. This includes who can approve, modify, or termin...
Defending Governance Decisions After the Fact refers to the process of justifying and explaining decisions made regarding AI systems after they have been implemented. This is cruci...
Defending governance positions to external scrutiny involves the ability of an organization to justify and explain its AI governance policies, practices, and decisions to stakehold...
Defensibility of Governance Decisions Over Time refers to the ability of governance frameworks and decisions regarding AI systems to withstand scrutiny and remain justifiable as co...
Defining Long-Term AI Governance Objectives involves establishing clear, strategic goals for the ethical development, deployment, and oversight of AI technologies. This is crucial...
Designing controls that are auditable and defensible refers to the creation of mechanisms within AI systems that allow for transparent oversight and accountability. This is crucial...
Designing for Regulatory Trust and Credibility involves creating AI systems that not only comply with existing regulations but also foster trust among stakeholders, including users...
Designing framework extensions without breaking compliance involves creating new components or features within an existing AI governance framework while ensuring adherence to estab...
Designing Governance from First Principles involves creating governance frameworks for AI systems based on fundamental principles rather than existing models or norms. This approac...
Designing interfaces between governance frameworks involves creating structured connections between different regulatory and operational frameworks that guide AI development and de...
Distinguishing control failures from design failures is a critical aspect of AI governance that involves identifying whether issues in AI systems arise from inadequate control mech...
Documenting Decisions and Rationale refers to the systematic recording of the processes, criteria, and reasoning behind decisions made in AI systems. This practice is crucial in AI...
Documenting ethical reasoning and trade-offs involves systematically recording the decision-making processes behind AI system designs, including the ethical considerations and comp...
In data protection and privacy law, a Data Controller is an entity that determines the purposes and means of processing personal data, while a Data Processor is an entity that proc...
Data Flow Mapping for AI Use Cases involves the systematic identification and documentation of data flows within AI systems, particularly when data crosses borders. This practice i...
Data minimisation is a principle in data protection and privacy law that requires organizations to collect only the data necessary for a specific purpose. In AI governance, this pr...
Data Protection Across the AI Lifecycle refers to the comprehensive approach to safeguarding personal and sensitive data throughout all stages of AI development and deployment, inc...
Data Protection Principles under the General Data Protection Regulation (GDPR) are a set of guidelines designed to protect personal data and privacy within the European Union. Thes...
Designing Governance for the Strictest Applicable Regime involves creating AI governance frameworks that comply with the most stringent regulations across multiple jurisdictions. T...
Designing governance that survives regulatory change refers to the creation of flexible, adaptive frameworks for AI governance that can withstand evolving legal and regulatory land...
Documentation burden for high-risk AI systems refers to the extensive requirements for detailed documentation throughout the lifecycle of AI systems classified as high-risk. This i...
Data Governance in AI Systems refers to the management of data availability, usability, integrity, and security within AI frameworks. It is crucial in AI governance as it ensures t...
Data lineage and provenance refer to the tracking and visualization of the flow of data through its lifecycle, from its origin to its final destination. In AI governance, understan...
Defining the intended purpose of an AI system involves clearly articulating the specific goals and applications for which the AI is designed. This is crucial in AI governance as it...
Designing AI use cases for multi-jurisdiction deployment involves creating AI applications that comply with the diverse legal, ethical, and cultural standards across different regi...
Designing frameworks for risk tolerance and escalation involves establishing structured approaches to identify, assess, and respond to risks associated with AI systems. This is cru...
Designing use cases to avoid prohibited or high-risk classification involves creating AI applications that do not fall into categories deemed unsafe or unethical by regulatory fram...
Documentation across the AI lifecycle refers to the systematic recording of all processes, decisions, and changes made during the development, deployment, and maintenance of AI sys...
Documenting Intended Purpose and Context involves clearly articulating the objectives and operational environment for which an AI system is designed. This practice is crucial in AI...
Dynamic Risk Reassessment Over Time refers to the continuous evaluation and adjustment of risk management strategies in response to changing conditions, technologies, and outcomes...
Data Use and Protection in Sandboxes refers to the frameworks established within regulatory sandboxes that allow for the controlled experimentation of AI technologies while ensurin...
Deciding when a sandbox exit is required refers to the process of determining the appropriate time and conditions under which an AI system can transition from a controlled testing...
Decision-Making with Incomplete Evidence refers to the process of making judgments or choices based on limited or uncertain information. In AI governance, this concept is crucial a...
Demonstrating Good Faith Compliance to Regulators involves AI organizations proactively showing adherence to laws, regulations, and ethical standards governing AI systems. This is...
Browse more concept cards inside the Governance Principles, Frameworks & Program Design index.
Browse more concept cards inside the Law, Regulation & Compliance index.
Browse more concept cards inside the Risk, Impact & Assurance index.
Browse more concept cards inside the Operational Governance, Documentation & Response index.
Open the category hub for additional Advanced Governance Framework Evolution concept cards.
Open the category hub for additional Advanced Governance Scenarios concept cards.
Open the category hub for additional Advanced Risk Management & Tolerance concept cards.
Open the category hub for additional Algorithmic Accountability & Assurance concept cards.
Open the category hub for additional Compliance Frameworks concept cards.
Open the category hub for additional Cross-Border Data & Jurisdiction concept cards.
Open the category hub for additional Data Governance & Management concept cards.
Open the category hub for additional Data Protection & Privacy Law concept cards.
Jump to the A index page in the A-Z glossary.
Jump to the B index page in the A-Z glossary.
Jump to the C index page in the A-Z glossary.
Jump to the E index page in the A-Z glossary.
Jump to the F index page in the A-Z glossary.
Jump to the G index page in the A-Z glossary.
Jump to the H index page in the A-Z glossary.
Jump to the I index page in the A-Z glossary.
How to structure your certification prep with exams, flashcards, and AI tutoring.
A practical comparison of core frameworks used in responsible AI programs.
A weekly study structure for balancing frameworks, mock exams, and targeted review.
Break down the key knowledge areas and prioritize your study time with more confidence.
Search and browse the full public concept library across domains, categories, and A-Z entry points.
Compare free and premium plans for AI governance learning and AIGP prep.
See how Startege supports practice exams, revision, and certification readiness.
Explore a practical training path for governance teams, compliance leaders, and AIGP candidates.