The past year has crystallised a shift in the EMEA digital landscape, with 2025 delivering a flurry of legislative change, regulatory guidance and policy initiatives across data protection, AI and cyber resilience that together will shape commercial and compliance choices well into 2026. Businesses face a more interventionist environment: new UK law, evolving EU instruments and fast‑moving reforms across the Middle East have tightened expectations on data governance while carving out targeted flexibilities for innovation.
In the UK the Data (Use and Access) Act 2025 has been the single biggest development for organisations processing personal data. The government’s collection of guidance on the Act records that it received Royal Assent on 19 June 2025 and sets out phased commencement between August 2025 and June 2026, designed to give firms breathing space to adapt. The lead legal round‑up notes a broadly similar timeline but records Royal Assent on 18 June 2025, a minor divergence in reported dates that is reflected in public materials. The Act introduces new lawful bases for processing, clarifies automated decision‑making rules, strengthens protections for children’s data, simplifies cookie requirements and expands the powers of the Information Commissioner’s Office (ICO); together these changes will recalibrate enforcement and compliance across sectors. [1][2]
Regulatory practice has moved in step. The ICO has framed an outcome‑based, risk‑focused approach to AI and biometrics, publishing an AI and Biometrics Strategy and setting out a plan to consult on updated guidance and a statutory code of practice covering transparency, explainability, bias and redress. The regulator has also explained how it will apply the new Act as it commences, emphasising phased implementation and confirming that it will apply the law in force at the time an infringement occurred. These documents signal that the ICO will prioritise cooperative engagement with compliant organisations while reserving enforcement for significant breaches. [1][3][4]
Alongside regulatory guidance, government and technical bodies have sought to shore up cyber resilience. The UK’s Cyber Governance Code of Practice and a package of proposals targeting ransomware reflect an effort to bring cyber risk into boardroom oversight and to reduce incentives for criminal actors. The legislative programme also includes reform of the Network and Information Systems framework and a Cyber Growth Action Plan to support the domestic cyber industry, demonstrating that the UK strategy is to couple tougher obligations with measures to grow supply and capability. [1]
Across the EU, the Commission’s Digital Omnibus package aims to cut compliance costs and streamline rules on AI, cybersecurity and data while preserving fundamental rights. The initiative proposes innovation‑friendly amendments to ease application of the EU AI Act, harmonise certain GDPR provisions and modernise cookie rules, alongside measures to expand data access for European AI development. National authorities have been active too: France’s CNIL and Germany’s DSK issued substantive GDPR‑focused AI guidance in 2025, and the EDPS updated its guidance on generative AI use by EU institutions, emphasising legal bases, roles in complex supply chains, and the need for DPIAs for high‑risk processing. These interventions underline a twin EU objective: enabling responsible innovation while tightening compliance expectations across AI lifecycles. [1]
International transfers have also received attention. Following a temporary extension, the European Data Protection Board issued broadly positive views on the Commission’s draft adequacy decision for the UK while seeking clarifications on UK‑to‑third‑country transfers and other matters. Separately, the EU General Court’s confirmation of the EU‑U.S. Data Privacy Framework in September 2025 (subject to appeal) has, for now, restored a measure of certainty for transatlantic data flows, reducing the operational disruption organisations faced after earlier framework invalidations. [1]
In the Middle East, regulators and governments have accelerated both sectoral and cross‑border initiatives. The Dubai International Financial Centre amended its data protection regime to introduce a private right of action and broaden extraterritorial scope. Saudi Arabia advanced an ambitious Global AI Hub Law to attract data‑intensive activity into controlled hubs, and both Saudi and UAE authorities progressed cyber and AI standards, including national drone cybersecurity guidance and a Cyber Risk Management Framework for ADGM. These moves reflect a strategic tilt: attract investment and capacity while imposing governance frameworks that mirror international expectations. [1]
Looking ahead to 2026, the picture is one of continued regulatory intensification. The UK’s expanded enforcement powers, new cyber obligations and potential higher fines for PECR breaches will force boards to pay closer attention to data and cyber risks; the EU AI Act’s ramp‑up in August 2026 requires preparatory work now; and the Middle East anticipates further PDPL enforcement, data‑centre‑focused rules and the operationalisation of hub models for AI and cross‑border processing. For organisations operating across the region, the immediate task is pragmatic: map new legal obligations across jurisdictions, embed privacy‑and‑security‑by‑design measures through AI lifecycles, and prepare for heightened regulatory scrutiny as authorities shift from rulemaking to enforcement. [1][2][3][4][5]
📌 Reference Map:
- [1] (JD Supra) - Paragraph 1, Paragraph 2, Paragraph 3, Paragraph 4, Paragraph 5, Paragraph 6, Paragraph 7
- [2] (UK Government) - Paragraph 2, Paragraph 7
- [3] (ICO AI & Biometrics Strategy Plan) - Paragraph 3, Paragraph 7
- [4] (ICO regulatory guidance on DUAA commencement) - Paragraph 3
- [5] (ISBA update on DUAA) - Paragraph 7
Source: Noah Wire Services