This Guidance Note establishes principles for Licensed Financial Institutions (LFIs) in the UAE, including insurance providers, regarding the use of artificial intelligence (AI) and machine learning (ML) technologies that may affect consumers. The objective is to encourage responsible and ethical use of AI/ML and transparency in its development and deployment, particularly with respect to decision-making transparency, bias mitigation, accountability, explainability, and data privacy.
Governance and Accountability
LFIs should adopt a documented governance framework for AI/ML that is commensurate with their size, nature, and complexity, promoting a culture of responsible AI use. Senior management and Boards of Directors bear responsibility and accountability for AI/ML systems and their outcomes, including model selection, deployment, resourcing, and ongoing oversight; LFIs should not deploy AI models over which they have no control. Regular reporting on performance and risk should be provided to senior management and the Board. Governance structures must facilitate informed decision-making, support risk identification and mitigation, and ensure AI/ML systems align with the institution's risk appetite and legal obligations. Boards and senior management should ensure that risk committees and control functions (compliance, internal audit, risk management) understand AI-driven processes and can challenge outcomes where appropriate.
Fairness/Non-Discrimination and Ethics
LFIs are expected to ensure that AI/ML systems do not produce discriminatory or manipulative outcomes against individuals or groups; no AI system should be deployed if it is discriminatory or manipulative, or continue to be used if it becomes so after deployment. Training data must be sufficiently accurate, relevant, and representative of customer populations. AI should undergo periodic testing (at least annually, or whenever material changes occur) to identify and remediate unintended biases or discriminatory outcomes. AI deployment should reflect the institution's ethical standards and code of conduct, ensuring decisions are consistent with the duty to act honestly, fairly, and in consumers' best interests.
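By way of illustration only, and not as part of the Guidance Note, the following Python sketch shows one way an LFI might operationalise periodic bias testing of decision outcomes. The group labels, data fields, and the "four-fifths" threshold are hypothetical assumptions, not regulatory requirements.

    # Illustrative sketch only: compare approval rates across hypothetical
    # customer groups and flag groups falling below a chosen parity threshold.
    from collections import defaultdict

    def approval_rates(decisions):
        """decisions: iterable of (group, approved) pairs."""
        totals, approved = defaultdict(int), defaultdict(int)
        for group, ok in decisions:
            totals[group] += 1
            approved[group] += int(ok)
        return {g: approved[g] / totals[g] for g in totals}

    def disparate_impact_flags(decisions, threshold=0.8):
        """Flag groups whose approval rate falls below `threshold` times the
        best-performing group's rate (a common 'four-fifths' heuristic)."""
        rates = approval_rates(decisions)
        best = max(rates.values())
        return {g: r for g, r in rates.items() if r < threshold * best}

    sample = [("group_a", True), ("group_a", True), ("group_b", False), ("group_b", True)]
    print(disparate_impact_flags(sample))  # {'group_b': 0.5}

In practice the relevant groups, metrics, and thresholds would be defined in the LFI's own fairness testing methodology and documented within its governance framework.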
Transparency and Explainability
LFIs should be transparent with customers and stakeholders about their use of AI, particularly for high-impact decisions, and must disclose when customers are communicating with an AI application. LFIs should clearly explain how AI systems operate and reach decisions. Disclosures should be accurate, in plain language, and provided in both Arabic and English, with telephone support in all major UAE languages; measures to verify that customers understand these disclosures should be considered. LFIs should also consider offering customers the right to opt out of AI-driven processing, especially for high-impact decisions, taking into account potential risks, fairness, and feasibility.
Data Quality, Privacy and Security
LFIs should establish policies ensuring AI/ML models use accurate, relevant, and up-to-date data with clear provenance and audit trails. Data must be of sufficient quality and relevance, updated as necessary, and compliant with all relevant standards, laws, and regulations. Personal data collection, storage, and use must comply with applicable laws, including the Consumer Protection Standards, and be limited to legitimate and proportionate purposes. LFIs should incorporate privacy-by-design and security-by-design principles into AI systems, maintaining safeguards against unauthorized access or misuse. AI should be subject to stress testing and validation to ensure reliable operation across scenarios, with operational resilience measures (redundancy, contingency planning, incident response) to minimize consumer disruption from system failures or cyber-attacks. LFIs should assess and utilize AI where feasible to identify potential fraud, money laundering, and other suspicious activity, complying with legal and regulatory reporting requirements when material findings arise.
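For illustration only, the sketch below shows how provenance and basic quality checks might be recorded before a dataset feeds an AI/ML model. All field names, the freshness window, and the check logic are assumptions made for the example, not prescribed by the Guidance Note.

    # Illustrative sketch only: provenance record plus simple completeness and
    # freshness checks on a dataset used by an AI/ML model.
    from dataclasses import dataclass
    from datetime import date, timedelta

    @dataclass
    class DatasetRecord:
        name: str
        source: str            # provenance: where the data came from
        last_refreshed: date   # supports "up-to-date" requirements
        approved_purpose: str  # supports legitimate, proportionate use

    def quality_issues(record, rows, required_fields, max_age_days=180):
        """Return human-readable issues found in `rows` (list of dicts)."""
        issues = []
        if date.today() - record.last_refreshed > timedelta(days=max_age_days):
            issues.append(f"{record.name}: data older than {max_age_days} days")
        for i, row in enumerate(rows):
            missing = [f for f in required_fields if row.get(f) in (None, "")]
            if missing:
                issues.append(f"{record.name} row {i}: missing {missing}")
        return issues

Any such checks would sit alongside, not replace, the LFI's wider data governance, privacy, and security controls.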
Continuous Monitoring and Review
In accordance with the Model Management Standards issued by the Central Bank, AI should be subject to continuous monitoring to ensure ongoing understanding, reliability, relevance, and alignment with consumer protection objectives. LFIs are expected to monitor, review, and where appropriate update or cease using AI/ML models, considering changes in data, market conditions, and customer behaviors. Independent third-party providers, experts, and challengers should periodically assess AI development and use. Automatic updates to AI tools must be tested before implementation, and LFIs should be fully aware of such updates to ensure they do not introduce bias. Mechanisms should be in place to detect, report, and remediate performance issues, biases, or unintended consequences before implementation and over time. LFIs remain responsible for outsourced AI functions and should ensure appropriate contractual rights (audit, information rights, termination provisions, data protection, cyber security, performance guarantees). LFIs must retain the clear and immediate ability, with human intervention, to cease use of any AI model, system, or application. LFIs should have systems in place to keep up to date with legal, third-party provider, and market developments regarding AI use.
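As an illustration of what continuous monitoring can look like in practice, the sketch below computes a population stability index (PSI) to flag drift between a reference sample and recent inputs. The bin count and the 0.2 alert threshold are common industry heuristics assumed for the example; they are not CBUAE requirements.

    # Illustrative sketch only: flag input drift that may warrant human review
    # or suspension of a model, using a population stability index (PSI).
    import math

    def psi(reference, recent, bins=10):
        lo, hi = min(reference), max(reference)
        width = (hi - lo) / bins or 1.0
        def shares(values):
            counts = [0] * bins
            for v in values:
                idx = min(max(int((v - lo) / width), 0), bins - 1)
                counts[idx] += 1
            return [max(c / len(values), 1e-6) for c in counts]
        ref, cur = shares(reference), shares(recent)
        return sum((c - r) * math.log(c / r) for r, c in zip(ref, cur))

    def needs_review(reference, recent, threshold=0.2):
        """True when drift exceeds the alert threshold."""
        return psi(reference, recent) > threshold

Alerts from such monitoring would feed the reporting, remediation, and human-intervention mechanisms described above, including the ability to cease use of a model.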
Human Oversight and Consumer Protection
LFIs should ensure AI/ML systems operate under meaningful human oversight and judgment, particularly for decisions with significant consumer implications. Human oversight may follow different models: (i) Human-in-the-loop – AI provides recommendations but humans retain full authority to approve or reject outcomes; (ii) Human-on-the-loop – AI works autonomously for routine tasks while humans monitor and can intervene; (iii) Human-out-of-the-loop – AI operates without direct human involvement, suitable only for low-risk, non-material processes with appropriate controls. The level of human involvement should be commensurate with identified and potential consumer risks. Consumers should be able to request human review or explanation of AI-generated decisions, and alternative arrangements should be available for customers who do not wish to be subject to AI decisions. LFIs must maintain clear and accessible complaint and redress channels in line with the Consumer Protection Regulation, with consumers informed of their right to challenge decisions, correct inaccurate data inputs, and access a clear complaints-handling procedure. AI/ML systems should promote fair and equitable treatment and must not be used to target consumers with unsuitable products or engage in pressure-selling or misleading marketing.
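For illustration only, the three oversight models described above could be mapped to decision routing rules such as the following sketch; the impact tiers and examples are hypothetical and would depend on each LFI's own risk assessment.

    # Illustrative sketch only: route AI-generated decisions to an oversight
    # model based on a hypothetical consumer-impact rating.
    from enum import Enum

    class Oversight(Enum):
        HUMAN_IN_THE_LOOP = "human approves or rejects before the outcome applies"
        HUMAN_ON_THE_LOOP = "AI acts; humans monitor and may intervene"
        HUMAN_OUT_OF_THE_LOOP = "AI acts autonomously; low-risk processes only"

    def oversight_model(consumer_impact: str) -> Oversight:
        if consumer_impact == "high":    # e.g. credit denial, claim rejection
            return Oversight.HUMAN_IN_THE_LOOP
        if consumer_impact == "medium":  # e.g. routine servicing decisions
            return Oversight.HUMAN_ON_THE_LOOP
        return Oversight.HUMAN_OUT_OF_THE_LOOP  # e.g. non-material back-office tasks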
Integration with Existing Frameworks
LFIs utilizing AI should integrate AI tools into their enterprise-wide risk management framework. AI risk assessments should inform and be informed by the institution's overall risk appetite and controls, not operate in isolation. Senior management should ensure AI adoption policies complement rather than duplicate existing regulatory obligations under the Consumer Protection Regulation and other CBUAE directives; consumer risk arising from AI-driven models should be treated as part of the conduct risk framework. Where LFIs develop AI internally, they should consider third-party independent reviews to check suitability, security, and reliability. LFIs should create processes to rate the risk of each AI system, application, or technology deployed, enabling appropriate risk assessment, monitoring, and management, considering factors such as data quality, AI capability, controls, impact, and dependence on AI or third parties.
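By way of illustration only, a coarse risk rating process along the lines contemplated above might score each deployed system against the factors listed; the factor names, weights, and rating bands below are assumptions for the example rather than a prescribed methodology.

    # Illustrative sketch only: rate each AI system against the factors noted
    # in this section, each scored 1 (low) to 5 (high).
    FACTORS = ("data_quality_risk", "capability_risk", "control_gap",
               "consumer_impact", "third_party_dependence")

    def risk_rating(scores):
        """scores: dict mapping each factor to an integer from 1 to 5."""
        total = sum(scores[f] for f in FACTORS)
        if total >= 20:
            return "high"    # e.g. warrants board-level visibility
        if total >= 12:
            return "medium"  # e.g. enhanced monitoring and periodic review
        return "low"

    print(risk_rating({f: 2 for f in FACTORS}))  # -> "low"

The resulting rating would then drive the intensity of monitoring, reporting, and human oversight applied to that system.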
Outsourcing and Third-Party Risk
Where LFIs rely on third-party vendors or cloud providers for AI/ML models, products, or solutions, due diligence should be conducted on the provider's reputation, governance, security, and data-protection practices, in line with the Model Management Standards and the Outsourcing Regulation. Contracts should include provisions ensuring access to relevant information, audit rights, and compliance with CBUAE requirements. The procurement process and the rationale for selecting a third-party AI provider should be documented, including annual cybersecurity reviews by independent, qualified third parties and pre-deployment testing. Institutions should maintain an inventory of AI models, including those developed or hosted by third parties, and aim to ensure third-party models adhere to the same standards of fairness, explainability, and robustness as in-house models. LFIs should consider utilizing a range of AI providers where feasible to avoid over-reliance on a single AI system or provider.
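For illustration only, an AI model inventory entry covering both in-house and third-party models might capture fields such as those sketched below; the field names are assumptions chosen to mirror the factors discussed in this section.

    # Illustrative sketch only: a minimal inventory record per AI model.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ModelInventoryEntry:
        model_id: str
        owner: str                    # accountable business function
        provider: str                 # "in-house" or third-party vendor name
        purpose: str
        last_bias_review: str         # date of most recent fairness testing
        last_security_review: str     # e.g. annual independent cyber review
        contractual_rights: List[str] = field(default_factory=list)  # audit, exit, etc.

    entry = ModelInventoryEntry(
        model_id="credit-scoring-v2", owner="Retail Credit", provider="VendorX",
        purpose="consumer credit decisioning", last_bias_review="2025-01-15",
        last_security_review="2024-11-30",
        contractual_rights=["audit", "information access", "termination"])

Maintaining such an inventory supports the documentation, audit, and exit rights expected of LFIs when outsourcing AI capabilities.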
Ethical Collaboration and Innovation
LFIs are encouraged to collaborate with industry peers, participate in UAE AI sandboxes and the Innovation Hub, and engage with academia, the Central Bank, and other stakeholders to share best practices and contribute to the development of industry standards for trustworthy AI. LFIs should publish case studies on AI development, responsible use, and customer-care interactions, including relevant anonymized examples.
Importantly, this Guidance Note supplements, but does not replace, any laws, regulations, or directives issued by the Central Bank or other competent authorities. LFIs remain responsible for complying with all applicable requirements.
Key Contacts
David Yates, Partner, Head of Digital & Data, d.yates@tamimi.com