
Navigating AI Liability in the UAE: A Hybrid Approach to Compliance

  • Writer: CoThrive
  • Apr 7
  • 7 min read



Artificial Intelligence (AI) is revolutionizing industries and daily life, but its increasing autonomy and integration into critical decision-making processes raise important questions about liability and accountability. AI liability refers to the legal responsibility for damages or harm caused by AI systems, such as physical injuries, financial losses, or privacy violations. Determining who should bear this responsibility—the developer, the deployer, or the AI system itself—is complex, and traditional legal frameworks often struggle to address these issues.


AI introduces intricate liability considerations across various legal frameworks, including product liability, negligence, contractual obligations, and evolving regulatory standards, with jurisdictional variations adding further complexity. Most AI risks fall under contractual or tort liability, with significant overlap. If AI is considered a product and harms result from design flaws, liability is assessed under strict liability and product laws. If viewed as a service, AI is subject to negligence laws. This complexity underscores the need for new legal approaches to effectively address AI-related liabilities.


AI Liability: Product vs. Service Classification

The liability landscape for AI systems varies significantly depending on whether they are classified as products or services. When AI systems are considered products, developers and manufacturers may face strict liability for defects that cause harm, even without proof of negligence. This includes issues like design flaws, inadequate safety testing, or failure to warn users of risks. For instance, a healthcare AI with inherent diagnostic flaws could trigger liability under this theory. Contractual liabilities for AI products are often defined in agreements between providers and users, which typically outline risk allocation. While limitation-of-liability clauses may protect developers, courts can override them if they violate public policy, such as in cases of discrimination.


When AI is classified as a service, developers and operators must meet a duty of care in designing, training, and deploying the technology. Liability arises if harm results from foreseeable risks, like biased training data or insufficient safeguards. In the 2019 case Connecticut Fair Housing Center v. CoreLogic Rental Property Solutions, a US District Court ruled that a vendor of tenant screening software was subject to the Fair Housing Act's nondiscrimination provisions, emphasizing the vendor's duty to avoid selling products that could lead customers to violate federal housing laws. In a more recent case, Moffatt v. Air Canada (BC Civil Resolution Tribunal, 2024), the airline was held liable for its chatbot’s erroneous bereavement fare advice, despite claiming the AI was a “separate legal entity.” The tribunal ruled that Air Canada failed to ensure accuracy or correct known errors, and ordered compensation for the plaintiff.


Contractual liabilities for AI can become complex when software is customized for a specific purpose, which may reclassify the technology from a product to a service. In such cases, claimants must resort to tort law, which addresses harms to customers, users, and outside parties. The assessment of these harms depends on whether the AI is viewed as a product or a service, and customized AI solutions risk reclassification and a corresponding shift in liability to negligence standards.


Challenges in Establishing Liability

The "black box" nature of AI complicates tracing harm to specific actions. AI systems, particularly those powered by machine learning, operate in ways that are often opaque, even to their creators. This raises significant legal and ethical concerns. For example, when an AI system makes a decision that results in harm (e.g., a self-driving car causing an accident or an AI hiring tool discriminating against certain candidates), it’s unclear who should be held liable—the developer, the user, or the AI itself. AI systems are often built by multiple stakeholders, including developers, data providers, and deployers, making it difficult to pinpoint responsibility. Additionally, AI systems can learn and adapt over time, meaning their behavior at the time of deployment may differ significantly from their behavior later. Ambiguity over whether AI is a "product" (strict liability) or "service" (negligence) further complicates the issue.


Solutions to AI and Liability: Hard Law and Soft Law Solutions

Governments and experts are exploring both hard law (binding regulations) and soft law (non-binding guidelines) to address AI challenges. While hard law provides legal certainty, it can be inflexible and slow to adapt to technological advancements. As a result, soft law approaches, such as risk rating methodologies like the NIST AI Risk Management Framework, Dynamic Real-Time Risk Index, Ethical Risk Scoring, Sector-Specific Frameworks, Compliance-Driven Risk Rating, Threat-Centric Security Scoring, and Human-AI Collaboration Index, are gaining traction for their adaptability and ability to address rapidly evolving AI risks.


Risk rating methodologies categorize AI risks as high, medium, or low, helping to identify high-risk applications like autonomous vehicles or healthcare diagnostics. They promote transparency by encouraging the development of explainable AI, which is crucial for accountability. These methodologies also ensure that AI systems comply with sector-specific regulations, such as EU GDPR or US HIPAA, reducing the risk of legal penalties.

Additionally, risk rating methodologies address algorithmic bias by proactively identifying and mitigating biases in AI systems, preventing potential legal liabilities related to discrimination. They also evaluate the robustness and security of AI systems, enhancing reliability and reducing the risk of system failures and cyber-attacks. Overall, these methodologies help organizations understand AI liabilities, take proactive measures, and build trust with stakeholders.
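
To make the categorization step concrete, below is a minimal sketch of how a coarse risk-rating check could be expressed in code. It is illustrative only: the risk factors, weights, and tier thresholds are hypothetical and are not drawn from the NIST AI Risk Management Framework or any regulatory standard.

```python
# Illustrative sketch of a coarse AI risk-rating check.
# The factors, weights, and thresholds below are hypothetical examples,
# not values taken from NIST AI RMF or any regulatory framework.

RISK_FACTORS = {
    "affects_physical_safety": 3,   # e.g. autonomous vehicles
    "affects_health_decisions": 3,  # e.g. diagnostic tools
    "processes_personal_data": 2,
    "fully_automated_decision": 2,  # no human in the loop
}

def rate_ai_system(attributes: dict) -> str:
    """Return a coarse risk tier (high / medium / low) for an AI use case."""
    score = sum(weight for factor, weight in RISK_FACTORS.items()
                if attributes.get(factor, False))
    if score >= 5:
        return "high"    # e.g. would warrant conformity assessment and human oversight
    if score >= 2:
        return "medium"  # e.g. would warrant documentation and bias testing
    return "low"

# Example: a healthcare diagnostic assistant that automates decisions
print(rate_ai_system({
    "affects_health_decisions": True,
    "processes_personal_data": True,
    "fully_automated_decision": True,
}))  # -> "high"
```

In practice, a framework-aligned assessment would also weigh qualitative factors such as the context of use, the affected population, and the degree of human oversight, rather than reducing risk to a single score.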





UAE Framework for AI Liability: A Hybrid Approach

To address the complex challenges of AI liability, many experts advocate for a hybrid approach that combines hard law (binding regulations) with soft law (non-binding guidelines). Hard law provides a legal backbone to enforce accountability, while soft law offers flexibility to adapt to rapidly evolving technologies. For example, regulatory frameworks may establish baseline requirements (hard law), such as mandatory compliance with data protection standards, while encouraging organizations to adopt best practices and risk-rating methodologies (soft law).


The UAE exemplifies this hybrid approach by integrating legal frameworks with flexible initiatives to foster innovation while ensuring accountability. The UAE has established itself as a global leader in AI governance through initiatives such as the Dubai AI Seal, which combines certification standards with market incentives to promote ethical AI deployment. This proactive stance aligns with international frameworks, such as the EU’s AI Liability Directive (AILD), while addressing region-specific challenges. The combination of hard and soft law ensures that the UAE remains agile in addressing new challenges while fostering trust in AI technologies.


The UAE’s legal system provides robust hard law foundations for addressing AI liability, including the following provisions:


  • Civil Liability: Under Article 316 of the UAE Civil Transactions Law, individuals or entities controlling objects requiring special care can be held liable for harm caused by improper management. This principle extends to AI systems, such as medical diagnostic tools or autonomous vehicles, where negligence in oversight could result in significant harm.


  • Criminal Liability: Article 66 of the Penal Code establishes corporate criminal liability for acts committed by representatives, directors, or agents acting in favour of or on behalf of the legal person. It can therefore be argued that if, for instance, an AI security system acting on behalf of a company intrudes on privacy without being explicitly programmed to do so, the company may be held accountable.


  • Data Protection: Regulations like the DIFC Data Protection Law mandate responsible data management for autonomous systems, ensuring compliance with global standards such as GDPR. The UAE’s 2025 Data Laws ensure audit trails for liability investigations, addressing challenges posed by "black box" systems. 
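
As a rough illustration of the audit trails mentioned in the data protection point above, the sketch below appends each AI decision to a simple log so it can be reconstructed in a later liability investigation. The record fields and storage format are assumptions for illustration, not requirements of the DIFC Data Protection Law or the 2025 Data Laws.

```python
# Minimal sketch of an append-only audit trail for AI decisions.
# Field names and the JSON-lines format are illustrative assumptions,
# not requirements of the DIFC Data Protection Law or other regulations.
import hashlib
import json
from datetime import datetime, timezone

def log_decision(log_path: str, model_version: str,
                 inputs: dict, output: str, operator: str) -> None:
    """Append one AI decision record so it can be reconstructed later."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # A hash of the inputs supports integrity checks without re-reading raw data
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "inputs": inputs,  # or a redacted summary where data is sensitive
        "output": output,
        "responsible_operator": operator,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: record a single automated screening decision
log_decision("decisions.jsonl", "screening-model-v1.2",
             {"applicant_id": "A-1001", "score": 0.82},
             "approved", operator="compliance-team")
```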


These hard law provisions create a framework for enforcing accountability in cases of harm caused by AI systems. The UAE imposes strict penalties for non-compliance. Fines can reach up to AED 1 million, and jail terms may be imposed for violations involving discrimination or data breaches. Additionally, there are obligations to conduct conformity assessments mirroring high-risk standards under the EU AI Act. Post-deployment monitoring requirements address self-learning defects in autonomous systems.


To complement its legal frameworks, the UAE has implemented several soft law initiatives:


  • Dubai AI Seal Certification: Launched in January 2025, this certification evaluates companies on criteria such as ethical project portfolios and the staffing of AI specialists. Certified entities benefit from reduced litigation and liability risks and from priority in government procurement. The Seal holds both developers (creators of AI systems) and deployers (entities integrating or using AI in operations) responsible for compliance, similar to the Revised EU Product Liability Directive 2024/2853. Its transparency requirements simplify proof of causation by ensuring clear documentation of AI decision-making processes. This includes audits of AI systems’ design (developer responsibility) and implementation practices (deployer responsibility), ensuring accountability for outputs; a simple sketch of how such split documentation might be structured follows this list. Furthermore, the Seal provides market incentives through procurement preferences and, unlike post-hoc liability mechanisms, reduces risks before harm occurs.


  • Ethical Guidelines: The Dubai AI Ethics Advisory Board provides non-binding recommendations emphasizing transparency, fairness, and accountability in AI deployment.


  • Industry-Specific Policies: Agile regulations tailored to sectors like healthcare (e.g., Abu Dhabi Department of Health’s AI Policy) enable quick responses to emerging risks without stifling innovation.
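
As referenced in the Dubai AI Seal point above, one way to picture the split between developer and deployer documentation is a structured record like the sketch below. The field names and structure are hypothetical illustrations, not the Seal’s actual audit criteria.

```python
# Hypothetical sketch of documentation split between the developer and the
# deployer of an AI system. Field names are illustrative only and are not
# the Dubai AI Seal's actual audit criteria.
from dataclasses import dataclass, asdict

@dataclass
class DeveloperRecord:
    """Design-time documentation the developer is responsible for."""
    model_name: str
    training_data_sources: list
    known_limitations: list
    bias_tests_performed: list

@dataclass
class DeployerRecord:
    """Implementation documentation the deploying organisation is responsible for."""
    use_case: str
    human_oversight_process: str
    monitoring_frequency: str
    incident_contact: str

@dataclass
class AISystemDossier:
    """Combined dossier supporting audits of both design and implementation."""
    developer: DeveloperRecord
    deployer: DeployerRecord

    def to_dict(self) -> dict:
        return asdict(self)

# Example: dossier for a hypothetical tenant-screening tool
dossier = AISystemDossier(
    developer=DeveloperRecord(
        model_name="screening-model-v1.2",
        training_data_sources=["synthetic rental-history dataset"],
        known_limitations=["lower accuracy on short credit histories"],
        bias_tests_performed=["disparate-impact test across protected groups"],
    ),
    deployer=DeployerRecord(
        use_case="tenant screening",
        human_oversight_process="human review of every rejection",
        monitoring_frequency="monthly drift and bias report",
        incident_contact="compliance@example.com",
    ),
)
print(dossier.to_dict()["deployer"]["use_case"])  # -> "tenant screening"
```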


In addition, UAE-based companies operating in EU markets face extraterritorial obligations, requiring them to adhere to global standards, in particular the provisions of the EU's Artificial Intelligence Liability Directive (AILD). These include disclosing technical documentation and audit logs upon request, as well as addressing the presumption of causality, under which the burden of proof may shift to providers if claimants demonstrate non-compliance. The AILD's application to non-EU entities underscores the global impact of EU regulations on AI governance, similar to how GDPR has influenced data privacy practices worldwide.


Building Compliance for AI Startups: Venture Studios as Strategic Partners

For AI startups in the UAE, establishing a robust compliance structure demands integrating hard law (e.g., civil, criminal, and data protection provisions) with soft law (e.g., the Dubai AI Seal and ethical guidelines). This dual approach is resource-intensive, requiring technical, legal, and operational expertise to address risks such as bias, misinformation, and liability claims. The key challenges for a compliance structure are:


  1. Ethical Integration: Embedding principles like transparency, accountability, and fairness into AI systems requires cross-disciplinary collaboration — a challenge for early-stage teams focused on innovation.


  2. Governance Frameworks: Startups must define roles, implement monitoring mechanisms, and align with evolving standards (e.g., EU AI Liability Directive). Without dedicated legal teams, this risks becoming a costly distraction.


  3. Operational Adjustments: Adapting to the UAE’s regulatory environment — including technical safeguards (audit trails), contract revisions, and certification processes (Dubai AI Seal) — strains limited resources.


This is where venture studios provide critical value. By offering turnkey compliance services, they help prevent liability, streamline governance, and accelerate certification for AI startups. For AI ventures, compliance is non-negotiable, but it need not be prohibitive. Partnering with a venture studio like CoThrive Ventures transforms this burden into a strategic advantage, combining legal rigor with technical precision. By outsourcing compliance to experts, startups can focus on innovation while safeguarding against liability.


Hassan Razavi

Chief Legal & Compliance Officer

 
 
 
