Governance Principles, Frameworks & Program Design
Human vs. Artificial Intelligence
Definition
Human intelligence is biological and adaptive; AI is machine-based with programmed or learned capabilities.
Example Scenario
A logistics company deploys AI to optimize delivery routes. Humans adapt instantly to new road closures, but the AI struggles without retraining. What limitation is shown?
Browse related glossary hubs
Governance Principles, Frameworks & Program Design
Core ideas for defining AI governance principles, comparing frameworks, assigning responsibilities, and designing a program that can work in practice.
AI Fundamentals concept cards
Open the AI Fundamentals category index to browse more glossary entries on the same topic.
Related concept cards
AI System vs AI Model vs AI Capability
An AI System refers to the complete setup that includes hardware, software, and data to perform tasks using artificial intelligence. An AI Model is a mathematical representation or...
Artificial Intelligence vs Traditional Software
Artificial Intelligence (AI) refers to systems that can perform tasks typically requiring human intelligence, such as learning, reasoning, and problem-solving. In contrast, traditional…
Autonomy and Decision-Making in AI Systems
Autonomy and decision-making in AI systems refer to the capability of AI to make choices and take actions without human intervention. This concept is crucial in AI governance as it...
Types of AI Systems (Rule-Based, ML, Generative)
AI systems are commonly grouped into rule-based, machine learning (ML), and generative systems. Rule-based systems operate on predefined rules and logic to generate outputs, relying on explicit programming…
Accountability as a Governance Principle
Accountability as a governance principle in AI refers to the obligation of organizations and individuals to take responsibility for the outcomes of AI systems. This principle is crucial…
Accountability for High-Risk AI Systems
Accountability for High-Risk AI Systems refers to the responsibility of organizations and individuals to ensure that AI systems classified as high-risk are designed, implemented, and…