Artificial Intelligence Regulation: Let's not regulate mathematics!

The discussion around regulation of artificial intelligence (AI) has changed dramatically since early reports like the one from the White House Office of Science and Technology Policy (OSTP). In 2025, regulation is no longer a distant possibility; in some parts of the world, it's already here, with new legal frameworks, compliance requirements, and active enforcement under way.
But that doesn't mean regulation should be arbitrary or impede innovation. As we enter this new era, we must strike a balance: encourage investment and development, while safeguarding safety, fairness and transparency.
Where do we stand in 2025 when it comes to the regulatory landscape?
- In the European Union, the EU Artificial Intelligence Act (AI Act), the world's first comprehensive AI-specific law, entered into force on 1 August 2024.
- In 2025, successive phases of the AI Act's implementation are rolling out: rules governing General-Purpose AI (GPAI) systems became effective in August 2025, including compliance requirements, transparency obligations, technical documentation, and post-market monitoring.
- EU member states are working to establish national regulatory sandboxes: controlled environments where innovative AI systems can be tested under supervision before being widely deployed. But a recent study warns that varying national sandbox designs may fragment compliance and create "sandbox arbitrage."
- Globally, the regulatory landscape remains fragmented. Some countries (the U.S., for example) still lack a comprehensive national AI law, leaving regulation mostly to existing sector-specific laws, while many states and regions propose new rules.
- According to a 2025 global survey, legislative mentions of AI rose by over 21% across 75 countries since 2023. That marks a near-tenfold increase since 2016, reflecting the rapid acceleration of AI governance efforts worldwide.
AI is now being regulated seriously, especially in the EU, but regulatory approaches differ significantly by region, and compliance remains a dynamic and evolving challenge.
Why some regulation makes sense, but regulating "mathematics" is still problematic
Many arguments still stand after ten years, especially around fairness, transparency, and the difficulty of "explaining" complex AI decisions. These challenges remain central, though they have evolved in nuance.
Fairness & Non-Discrimination remain vital
Bias in training data (e.g., skewed demographic representation) remains a real issue, and high-risk AI systems (including in healthcare, lending, and human resources) are subject to stricter requirements under the AI Act. However, the Act does not ban all automated decision-making. Instead, it classifies AI systems by risk level and imposes controls accordingly. The regulation is functional rather than ideological: it regulates what AI does, not necessarily how it's implemented (mathematically).
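To ground the fairness point, one common first-pass check is to compare positive-outcome rates across protected groups. Below is a minimal Python sketch of such a demographic parity check; the `demographic_parity_gap` helper and the toy data are illustrative assumptions, not a compliance test defined by the AI Act.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-outcome rates between two groups.

    y_pred : array of 0/1 model decisions (e.g., loan approvals)
    group  : array of 0/1 protected-attribute membership
    A large gap flags potential disparate impact worth investigating;
    a small gap does not by itself establish fairness.
    """
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

# Hypothetical decisions for two demographic groups: approval rates of
# 0.75 vs 0.25 give a gap of 0.5, a clear signal to dig into the data.
print(demographic_parity_gap([1, 0, 1, 1, 0, 0, 1, 0],
                             [0, 0, 0, 0, 1, 1, 1, 1]))
```

Checks like this target outcomes, which is exactly the functional spirit of the Act: they say nothing about the model's internal mathematics.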
Transparency and explainability - still hard, still debated
The demand for explainable AI is stronger than ever. Yet the fundamental challenge remains: deep learning models, especially large or "general-purpose" ones, behave in ways that are often opaque. Extracting a human-understandable explanation from a high-dimensional mathematical model can still border on the impossible. This means "right to explanation" requirements, if interpreted too rigidly, risk becoming symbolic or even impractical. A model might produce perfectly valid output without a concise explanation. That supports the earlier assertion: trying to regulate the mathematical "how" may be misguided.
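One way to see why explanations are approximations rather than readouts of a model's internal mathematics is permutation importance: treat the model as a black box and measure how much accuracy drops when a single feature is scrambled. The sketch below is a minimal illustration in Python; the `predict` callable, data shapes and repeat count are assumptions, not anything prescribed by regulation.

```python
import numpy as np

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Black-box attribution: average accuracy drop when feature j is shuffled.

    predict : callable mapping an (n, d) array to n predicted labels
    X, y    : evaluation data and true labels
    Returns one score per feature; higher means the model leaned on it more.
    """
    rng = np.random.default_rng(seed)
    baseline = np.mean(predict(X) == y)                  # unperturbed accuracy
    scores = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            X_perm[:, j] = rng.permutation(X_perm[:, j])  # destroy feature j's signal
            drops.append(baseline - np.mean(predict(X_perm) == y))
        scores[j] = np.mean(drops)                        # mean accuracy drop
    return scores
```

Even a perfect attribution of this kind only says which inputs mattered, not how the model combined them, which is why rigid "right to explanation" rules are hard to satisfy for large models.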
What does 2025's real-world regulation look like under a more functional approach?
Rather than attempt to control the mathematics behind AI, regulators now aim to regulate through outcomes, risk classification, and governance processes. Key elements include:
- Risk-based classification: systems that pose serious risks (e.g., in health, safety, law enforcement, HR, credit scoring) are subject to stricter rules under the AI Act.
- Documentation & transparency obligations: providers of GPAI systems must maintain technical documentation, risk logs, records of testing and safety procedures, and possibly enable "post-market monitoring" (a minimal record-keeping sketch follows at the end of this subsection).
- Regulatory sandboxes: national sandbox frameworks (voluntary testing environments) allow innovators to trial systems under supervision before full deployment, which helps balance innovation and safety.
- Enforcement and oversight: the EU created a dedicated body, the European AI Office, to coordinate supervision across member states, monitor compliance, and manage risk assessment.
In other jurisdictions (e.g., the U.S.), regulation remains more fragmented, often sector-by-sector, rather than a unified law.
Regulating mathematical models per se is still impractical. Instead, regulation focuses on what AI does, how it's governed, and how its impact is controlled, which aligns with a "functional regulation" approach.
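As a sketch of what "audit-ready" record-keeping can look like in practice, here is a minimal, machine-readable release record in Python. The field names (risk tier, intended purpose, evaluation results, and so on) are illustrative assumptions, not terms lifted from the AI Act's annexes.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class ModelReleaseRecord:
    """Illustrative documentation entry for one model release (hypothetical schema)."""
    system_name: str
    version: str
    risk_tier: str                 # e.g. "minimal", "limited", "high"
    intended_purpose: str
    training_data_summary: str
    evaluation_results: dict
    known_limitations: list = field(default_factory=list)
    released_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = ModelReleaseRecord(
    system_name="credit-scoring-assistant",        # hypothetical system
    version="2.3.1",
    risk_tier="high",
    intended_purpose="Support analysts in pre-screening loan applications",
    training_data_summary="2019-2024 anonymised EU application data",
    evaluation_results={"auc": 0.87, "demographic_parity_gap": 0.04},
    known_limitations=["Not validated for self-employed applicants"],
)

# Structured, machine-readable records are easier to version, query and
# hand to a regulator than ad hoc documents.
print(json.dumps(asdict(record), indent=2))
```

Keeping such records as structured data rather than scattered documents is one concrete way to make post-market monitoring and regulator requests easier to serve.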
Risks and debates emerging by the end of 2025
With regulation evolving, new tensions and risks are emerging:
- Regulatory delay & complexity: the latest proposals under the EU's "Digital Omnibus" package may postpone or soften some AI Act obligations, prompting concerns about weakening protections.
- Fragmentation & uncertainty: different national regulatory sandboxes, varying interpretations of risk categories and compliance requirements, and uncertain GDPR/AI overlaps mean businesses may struggle with consistency across markets.
- Impact on research and innovation: restrictions may inadvertently affect academic AI research and smaller developers. Some papers argue current regulations apply in practice even to research contexts, risking over-burdening those who push the frontiers.
- Global divergence: while the EU leans on a precautionary, risk-based, rights-focused model, other regions (e.g., U.S., parts of Asia) favor lighter-touch or sectoral regulation. That divergence complicates international cooperation and cross-border AI deployment.
What this means
- Regulation of AI is now real and unavoidable. The "genie" is not just out of the bottle; policy-makers have started to define the bottle itself.
- Regulating "mathematics" remains the wrong lever. The focus should be on what AI systems do, how they are managed, and metrics/outcomes such as safety, fairness, transparency, and accountability.
- A functional, risk-based, outcome-oriented approach makes more sense than blanket bans or rigid "explainability" laws.
- Innovation and compliance need not be at odds, provided regulation is calibrated smartly. Sandboxes, risk-tiered rules, and phased enforcement (as under the EU AI Act) help strike a balance.
- Global coordination matters. Divergent regulatory regimes across jurisdictions risk fragmentation; international standards and alignment will be key as AI becomes ever more cross-border.
Across Import.io and Neuralogics, we see the same shift shaping our own strategy. As organisations adapt to evolving regulation, they need data pipelines and AI systems that are structured, traceable and audit-ready. This is why our focus is on AI-native governance, end-to-end lineage visibility and transparent agentic workflows. These capabilities are not compliance add-ons. They are foundational to how modern AI systems must be designed if they are to meet regulatory expectations and deliver reliable outcomes.
Conclusion
If AI regulation is to succeed without strangling innovation, we must reject the temptation to regulate "how smart math runs behind the scenes." Instead, we should regulate what AI systems do, how they behave, and how they are governed.
The road ahead won't be simple; many challenges remain. But with functional regulation, risk-based oversight, international cooperation and smart governance, we can embrace AI's potential while safeguarding fairness, safety and human dignity.