The Impact of Not Regulating AI on Value-Based Healthcare

In recent years, AI has become one of the most talked-about topics due to its capability of performing complex tasks that historically only humans could do, such as reasoning, making decisions, or solving problems. In healthcare, AI has the potential to simplify the repetitive, otherwise mundane tasks of healthcare professionals, performing them faster and at a fraction of the cost.
AI can be a valuable ally to value-based healthcare, improving value, efficiency, and quality of care for patients when implemented thoughtfully in value-based agreements.
In this article, we will discuss the impact of not regulating AI and how it can affect value-based healthcare.
Reducing Federal Regulation
Before the Trump administration took office, several regulations dating back to 2013 had begun establishing guidelines as artificial intelligence grew in popularity. Within the first few hours of the new administration, however, the president issued a new executive order, Initial Rescissions of Harmful Executive Orders and Actions, which reduces rather than expands federal oversight of AI, with the intention of accelerating development in the US and strengthening the country’s leadership in the industry.
This new executive order revoked more than 50 prior executive orders, including Executive Order 14110 of October 30, 2023, which outlined “governing the development and use of AI safely and responsibly,” thereby advancing a coordinated, Federal Government–wide approach to doing so. Within this revoked executive order are AI guiding principles and priorities, including:
- Making AI safe and secure.
- Promoting “responsible innovation, competition, and collaboration.”
- Committing to supporting American workers in the development and use of AI.
- Advancing equity and civil rights with AI.
- Protecting the interest of Americans using AI and AI-enabled products in their daily lives.
- Protecting Americans’ privacy and civil liberties.
- Managing the risks from the federal government’s own use of AI.
- Engaging with international allies and partners in developing a framework to manage AI’s risks, unlock AI’s potential for good, and promote common approaches to shared challenges.
Other executive orders put into effect by the new administration include Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government as well as Removing Barriers to American Leadership in Artificial Intelligence, which enable the US government to better serve the public by utilizing AI technology to improve efficiency while also supporting the evolving industry.
For more updates on federal regulation of artificial intelligence, visit www.whitehouse.gov/.
The Future of Artificial Intelligence in Healthcare
Balance in regulation is going to be key to the success and future of AI and value-based healthcare.
According to an article by Med Tech Intelligence, a lack of regulation from trusted organizations like the Food and Drug Administration (FDA) and the National Institutes of Health (NIH) can lead to less accountability, unethical practices, biased decision-making, and more in the development of artificial intelligence. Overregulation of AI, however, could also lead to negative outcomes, including stifled innovation and delays in the creation of accessible, life-saving technology.
When it comes to the future of AI, the following changes will continue to evolve as federal regulation shifts.
Centers for Medicare & Medicaid Services (CMS) Regulating AI Tools
For at least the next four years, CMS is going to be one of the agencies offering supplemental regulation to health plans and physician groups on AI. According to the article Regulation of Health and Health Care Artificial Intelligence, CMS has already shown an interest in regulating tools that healthcare facilities can use and has even gone a step further by “issuing regulations and guidance about Medicare Advantage plans using algorithms to make medical necessity determinations.”
Increased Use of Gap-Filling Mechanisms
A lack of regulation in AI could also lead to an increased use of gap-filling mechanisms to fill the space where regulation should exist. This means relying on private contracting and licensing agreements, which can allow bad actors to underdeliver and evade accountability for failing to make AI safe. Another area that will need gap-filling is in states where algorithmic discrimination laws do not exist.
Legal Issues with AI
As time goes on, legal issues are bound to occur when private contracts fail to address legal responsibility. We should “…expect to see litigation over AI-related harms to increase as greater patient exposure to AI tools causes more injuries.”
Although experts are aware that the threat of litigation can create incentives for safety, they are not optimistic that such signals will target the right parties in this case. Liability in healthcare is likely to be complex due to the lack of federal regulation. Physicians should be aware of the consequences they may face when it comes to utilizing new and unregulated AI tools.
State Legislation
The new administration’s shift toward decreasing federal regulation allows for more activity at the state level regarding the development and use of artificial intelligence within each state’s healthcare system. Since former President Biden’s Executive Order 14110 was revoked, the following states have proposed bills regulating the use of AI without human oversight in the healthcare sector:
- Connecticut
- Florida
- Illinois
- Indiana
- Tennessee
- New Mexico
- Texas
For more information or updates on proposed bills related to AI, find your state and local government’s official website by visiting usa.gov/state-local-governments.
The Impact of Not Regulating AI in Value-Based Healthcare
A lack of regulation in AI can have long-term effects on value-based healthcare. At FRG, we believe that without regulation, there will be no set limits on how value-based healthcare experts can use AI. Additionally, without specific guidelines or standards, there is no guarantee of digital equity or that the tools being applied are effective.
Not regulating AI can also lead to trust issues with AI-driven responses, which could potentially overshadow the positive impact AI can have in value-based care. According to an article by Harvard Business Review, “The AI trust gap can be understood as the sum of the persistent risks (both real and perceived) associated with AI; depending on the application, some risks are more critical.” AI is a promising tool that can create a better, more equitable world, but if left unchecked, it can also lead to a decrease in trust and setbacks within the value-based care industry.
One of the biggest risks of not regulating AI in value-based healthcare is to patient privacy. Without rules, regulations, and guidelines, healthcare workers may begin to utilize generative artificial intelligence to lighten their workloads without realizing they are violating federal privacy laws. Staying compliant in an unregulated space will become increasingly difficult as tools like ChatGPT and Google Gemini grow more popular in the industry.
Connect with FRG Today!
Are you ready to align your financial strategies with industry-leading expertise? Transform your organization’s financial insights with FRG’s transparent, cost-effective solutions! Our proprietary software, AccuReports, delivers actionable insights to drive financial performance, streamline decision-making, and unlock new opportunities for value-based healthcare.