Governance Principles, Frameworks & Program Design
AI System vs AI Model vs AI Capability
Definition
An AI System is the complete operational assembly of hardware, software, data, and processes that performs tasks using artificial intelligence. An AI Model is the mathematical representation or algorithm, trained on data, that produces predictions or decisions. An AI Capability is a specific function the system can perform, such as natural language processing or image recognition. These distinctions matter in AI governance because they determine who is accountable for what, how risk is managed, and which regulatory obligations apply. Conflating the terms can lead to inadequate oversight, ethical breaches, or failures in AI deployment.
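The distinction can be made concrete in code. The sketch below is purely illustrative, not drawn from any real product or framework: all class, method, and field names (`DiagnosisModel`, `DiagnosisSystem`, `diagnose`, `audit_log`) are hypothetical. The point is that the model is a bare input-to-prediction function, while the system wraps it with the data handling, logging, and declared capabilities that governance obligations actually attach to.

```python
from dataclasses import dataclass, field

class DiagnosisModel:
    """The AI Model: a trained mapping from inputs to predictions."""

    def predict(self, features: list[float]) -> str:
        # Stand-in for a trained classifier; a real model learns
        # this decision boundary from data rather than hard-coding it.
        return "positive" if sum(features) > 1.0 else "negative"

@dataclass
class DiagnosisSystem:
    """The AI System: the model plus the surrounding infrastructure
    (audit logging, declared capabilities) that oversight applies to."""

    model: DiagnosisModel
    audit_log: list[str] = field(default_factory=list)
    # The AI Capability: the specific function this system offers.
    capabilities: tuple[str, ...] = ("diagnosis support",)

    def diagnose(self, features: list[float]) -> str:
        prediction = self.model.predict(features)
        # System-level concern: every decision is recorded so that
        # outcomes can later be audited and attributed.
        self.audit_log.append(f"inputs={features} prediction={prediction}")
        return prediction

system = DiagnosisSystem(model=DiagnosisModel())
result = system.diagnose([0.4, 0.9])
```

Testing the model's accuracy in isolation validates only the `predict` method; governing the system also means reviewing the audit trail, the declared capabilities, and everything else the wrapper adds.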
Example Scenario
Consider a healthcare organization deploying an AI System to assist in diagnosing diseases. If it conflates the AI Model with the AI System, it may validate the model's predictive accuracy in isolation while overlooking system-level testing of data pipelines, clinical workflows, and human oversight. That gap can produce misdiagnoses, harming patients and exposing the organization to legal liability. If it properly distinguishes the System, the Model, and the Capability, it can apply governance at the right level, including regular audits and compliance checks, ultimately strengthening patient safety and trust in AI technologies.
Related concept cards
Artificial Intelligence vs Traditional Software
Autonomy and Decision-Making in AI Systems
Types of AI Systems (Rule-Based, ML, Generative)
Accountability as a Governance Principle
Accountability for High-Risk AI Systems
Accountability vs Responsibility in AI Contexts