Law, Regulation & Compliance
Prohibited AI Practices
Definition
Prohibited AI Practices are specific actions or applications of artificial intelligence that are deemed unethical, harmful, or illegal under regulatory frameworks. These practices may include, but are not limited to, the use of AI for surveillance without consent, deepfake technology for spreading misinformation, and biased decision-making in critical areas such as hiring and law enforcement. In AI governance, identifying and regulating these practices is crucial to ensuring public trust, protecting individual rights, and preventing societal harm. Failing to regulate them can lead to significant legal consequences, loss of public confidence in AI technologies, and harm to vulnerable populations.
Example Scenario
Imagine a tech company that develops an AI system for hiring employees. If the system is trained on biased data, it may inadvertently favor certain demographics over others, leading to discriminatory hiring decisions. Engaging in such a prohibited practice could expose the company to legal action, damage its reputation, and erode trust among potential job applicants. Conversely, if the company implements robust governance measures to ensure fairness and transparency in its AI-assisted hiring, it can enhance its brand reputation, attract a diverse talent pool, and comply with applicable regulations, ultimately contributing to a more equitable workplace.
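As a rough illustration of how a governance team might check for this kind of bias, the sketch below computes per-group selection rates and a disparate impact ratio from hypothetical hiring outcomes. The group labels, the sample data, and the four-fifths (0.8) threshold are illustrative assumptions, not requirements drawn from this entry or from any specific regulation.

```python
from collections import defaultdict

# Hypothetical hiring-outcome records: (applicant_group, was_hired).
# Group labels and outcomes are illustrative only.
records = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", True), ("group_b", False), ("group_b", False),
]

def selection_rates(records):
    """Return the hire rate for each applicant group."""
    totals, hires = defaultdict(int), defaultdict(int)
    for group, hired in records:
        totals[group] += 1
        hires[group] += int(hired)
    return {group: hires[group] / totals[group] for group in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate."""
    return min(rates.values()) / max(rates.values())

rates = selection_rates(records)
print(rates)  # e.g. {'group_a': 0.75, 'group_b': 0.25}
print(f"disparate impact ratio: {disparate_impact_ratio(rates):.2f}")
# A ratio below roughly 0.8 (the "four-fifths rule" used as a rule of thumb
# in some fairness audits) is often treated as a warning sign that the
# system may be producing discriminatory outcomes and needs closer review.
```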
Browse related glossary hubs
Law, Regulation & Compliance
Public concept cards covering AI-specific regulation, privacy law, legal interpretation, and the compliance obligations that governance teams must translate into action.
AI-Specific Regulation concept cards
Open the AI-Specific Regulation category index to browse more glossary entries on the same topic.
Related concept cards
Applying AI Act Categories to AI Use Cases
Applying AI Act Categories to AI Use Cases involves classifying AI systems based on their risk levels as outlined in regulatory frameworks, such as the EU AI Act. This categorizati...
General-Purpose AI vs Use-Case-Specific AI
General-Purpose AI refers to systems designed to perform a wide range of tasks across various domains, while Use-Case-Specific AI is tailored for particular applications, such as m...
High-Risk AI Systems (Conceptual Overview)
High-Risk AI Systems refer to AI technologies that pose significant risks to health, safety, or fundamental rights, necessitating strict regulatory oversight. These systems are sub...
Limited-Risk AI Systems and Transparency Obligations
Limited-risk AI systems are those that pose a moderate risk to rights and safety, requiring specific transparency obligations under AI governance frameworks. These obligations mand...
Minimal-Risk AI Systems
Minimal-risk AI systems refer to AI technologies that pose a low level of risk to rights and safety, such as chatbots or spam filters. In AI governance, identifying and categorizin...
Purpose and Objectives of the EU AI Act
The EU AI Act aims to establish a regulatory framework for artificial intelligence within the European Union, focusing on ensuring that AI systems are safe, ethical, and respect fu...