Law, Regulation & Compliance
Prohibited AI Practices (Conceptual)
Definition
Prohibited AI Practices are specific activities and applications of artificial intelligence deemed unacceptable under regulatory frameworks such as the EU AI Act. These practices typically involve systems that pose significant risks to fundamental rights, safety, or societal well-being, such as social scoring by governments or real-time biometric identification in publicly accessible spaces. Understanding and enforcing these prohibitions is central to AI governance: it protects individuals and communities from harm, supports the ethical use of technology, and maintains public trust. Violations can result in severe penalties and reputational damage, and can erode confidence in the AI sector as a whole.
Example Scenario
Imagine a city deploying a real-time facial recognition system for law enforcement purposes. Although intended to enhance public safety, real-time remote biometric identification in publicly accessible spaces is, with only narrow exceptions, prohibited under the EU AI Act because of its potential for mass surveillance and violation of privacy rights. If the city proceeds without proper governance, it risks legal repercussions, public backlash, and loss of citizen trust. If it instead pursues privacy-respecting alternatives, it can improve safety while retaining community support. The scenario illustrates why enforcing these prohibitions is essential to ethical AI governance and societal trust.
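The kind of pre-deployment screening the scenario calls for can be sketched as a simple checklist-style filter. This is a minimal illustration, not a legal determination: the category labels below are hypothetical tags loosely modelled on the practices named in Article 5 of the EU AI Act, and any real screening process would require legal review.

```python
# Illustrative sketch: flag a proposed AI use case that matches
# prohibited-practice categories. Category names are assumptions
# for demonstration, loosely based on the EU AI Act's Article 5 list.

PROHIBITED_CATEGORIES = {
    "social_scoring",                # general-purpose social scoring by public authorities
    "realtime_remote_biometric_id",  # real-time remote biometric identification in public spaces
    "subliminal_manipulation",       # techniques that materially distort a person's behaviour
    "exploiting_vulnerabilities",    # exploiting vulnerabilities due to age or disability
}


def screen_use_case(tags: set[str]) -> list[str]:
    """Return the prohibited categories a proposed use case matches, sorted."""
    return sorted(tags & PROHIBITED_CATEGORIES)


# The city scenario above: a real-time facial recognition deployment.
flags = screen_use_case({"law_enforcement", "realtime_remote_biometric_id"})
if flags:
    print("Escalate to legal review; matched prohibited categories:", flags)
```

A real governance workflow would treat a non-empty result as a hard stop pending legal review, rather than a warning to be overridden.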
Browse related glossary hubs
Law, Regulation & Compliance
Public concept cards covering AI-specific regulation, privacy law, legal interpretation, and the compliance obligations that governance teams must translate into action.
AI Act Obligations & Requirements concept cards
Open the AI Act Obligations & Requirements category index to browse more glossary entries on the same topic.
Related concept cards
AI Act Expectations for Risk Documentation
AI Act Expectations for Risk Documentation refer to the regulatory requirements set forth in the EU AI Act that mandate organizations to systematically document the risks associate...
AI Act Expectations for Sandbox Participation
AI Act Expectations for Sandbox Participation refer to the regulatory framework established under the EU AI Act that allows companies to test AI systems in a controlled environment...
AI Act Risk Categories (Unacceptable, High, Limited, Minimal)
AI Act Risk Categories classify AI systems based on their potential risks to rights and safety. The categories are 'Unacceptable,' 'High,' 'Limited,' and 'Minimal' risk. This class...
Anticipating AI Act Interpretation Through Precedent
Anticipating AI Act Interpretation Through Precedent involves analyzing previous legal cases and regulatory decisions to predict how current and future AI regulations, such as the...
High-Risk AI Obligations vs Limited-Risk Obligations
High-Risk AI Obligations refer to stringent requirements imposed on AI systems that pose significant risks to health, safety, or fundamental rights, as outlined in the EU AI Act. T...
How AI Systems Become High-Risk
AI systems are classified as high-risk based on their potential impact on fundamental rights, safety, and the environment. This classification is crucial in AI governance as it dic...