A “true lifecycle approach” for governing healthcare AI

Image by swiftsciencewriting from Pixabay
July 10, 2025
AI is rapidly transforming healthcare, but for that transformation to be a force for good, we must ensure it is safe and ethical. One way to do so is with a new model of governance called the True Lifecycle Approach, writes Dr. Barry Solaiman.
From hospital wards to wellness apps, AI is rapidly transforming how healthcare is delivered. The speed of AI’s proliferation is quite breathtaking—think Cedars-Sinai and its wellness app on the Apple Vision Pro, or the plethora of applications developed by Siemens in precision medicine, predictive analytics, and beyond. Yet as the technologies continue to evolve, the frameworks that govern their use remain fragmented. Law, as usual, is slow to catch up to technological innovation. This raises a serious question: how do we ensure that AI in healthcare is safe, ethical, and worthy of patients’ trust?
In response, researchers from Hamad Bin Khalifa University (HBKU) College of Law have proposed a new model of governance. The “True Lifecycle Approach” (TLA) holds that AI should be governed across all stages of its research, design, implementation, and oversight. At its heart is a simple premise: patients must come first. Governance should embed medical law and ethics throughout, rather than treating the patient as an afterthought.
Why is the TLA Necessary?
Readers might pause and argue that there are already governance frameworks for AI, such as those developed and updated by the Food and Drug Administration (FDA) in the U.S. and the European Medicines Agency in the EU. These approaches focus narrowly on approving AI medical devices for the market: devices that pose greater risks to the public must clear more checks before they can be approved. The problem is that these frameworks have not been designed with the full complexity of healthcare AI in mind. They overlook important issues like informed consent, malpractice liability, and other patient rights.
These frameworks also regard AI as a technical tool, not as something that deeply impacts human lives. By contrast, the TLA is grounded in healthcare law and ethics, emphasizing the importance of the standard of care in medicine, patient confidentiality, matters of consent, and respect for cultural and religious differences. These values are particularly relevant among Gulf Cooperation Council (GCC) countries, which are home to diverse expat populations.
The TLA has three core phases of governance, starting with research and development (R&D). This phase sets the foundation for legal and ethical AI in healthcare from its very conception. In Qatar, for example, HBKU worked in partnership with the Ministry of Public Health (MOPH) to create the “Research Guidelines for Healthcare AI Development.” These encourage developers to follow detailed processes and document the purpose, scope, and intended use of AI systems. Importantly, researchers should consider ethics and law from the outset, such as compliance with data protection law as it pertains to medical data.
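To make this concrete for developers, here is a minimal, hypothetical sketch in Python of how a team might record that documentation alongside its codebase. The field names and example values are illustrative only, not requirements drawn from the MOPH guidelines.

from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """Illustrative record of the documentation the guidelines encourage.

    Field names are hypothetical; the actual requirements are set out in
    the MOPH "Research Guidelines for Healthcare AI Development."
    """
    purpose: str                 # why the system exists
    scope: str                   # the clinical settings it covers
    intended_use: str            # who uses it, and for which decisions
    data_protection_basis: str   # legal basis for processing medical data
    known_limitations: list[str] = field(default_factory=list)

# Hypothetical example for a screening tool.
record = AISystemRecord(
    purpose="Flag possible diabetic retinopathy in screening images",
    scope="Adult outpatient ophthalmology clinics",
    intended_use="Decision support for clinicians, not standalone diagnosis",
    data_protection_basis="Patient consent under applicable medical data law",
    known_limitations=["Not validated for pediatric patients"],
)
print(record)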
The second phase considers systems approval. Not all healthcare AI tools require regulatory approval, but regulators should nevertheless have broader powers to ensure that healthcare AI meets robust safety standards. While regulators around the world currently lack such powers, Saudi Arabia has made progress in the GCC region with its regulatory framework. Through its published guidance MDS-G010 for AI-based medical devices, the Saudi Food and Drug Authority goes beyond other regulators, incorporating ethical standards and provisions on adaptive algorithms, transparency, and post-market monitoring. Again, this does not cover all healthcare AI devices, but it indicates where governance can evolve further.
The TLA’s final phase focuses on AI once it is used in practice. Rules should govern not only researchers and developers, but also healthcare providers, insurers, and any other entity using AI downstream. To this end, Abu Dhabi and Dubai have introduced AI policies with binding elements that go beyond mere recommendations to require audits, validation of AI, and patient feedback mechanisms.
An Unexpected Case Study
GCC countries are prioritizing AI investment, and healthcare AI governance along with it. Regional governments are each making strides in different phases of the AI governance lifecycle; no single state covers every stage, yet taken collectively, their efforts represent a True Lifecycle Approach that spans the whole spectrum of AI governance.
Indeed, the GCC is uniquely positioned to pioneer this approach as a global model. Member states have centralized governance structures and diverse populations that need technologies accounting for cultural sensitivities concerning language, religion, and other factors. The result can be a governance ecosystem that respects both global norms and local needs—an approach that could help inform global best practices in the years to come.
Developers and deployers of AI in healthcare settings must remember that AI is not just a technological project but a human one. As policymakers and lawmakers decide how to regulate the technology, the TLA provides a principled roadmap for those discussions, putting patients at the center.