Dive Brief:
- The Trump administration released an “action plan” on artificial intelligence on Wednesday, part of a push to spur implementation of the emerging technology in the U.S.
- One of the plan’s major themes is removing “onerous” regulations that slow the development and deployment of AI across industries.
- The plan rarely mentions healthcare, but it serves as one of the administration’s first moves to set federal policies for AI development — which experts say is important to safely roll out the technology in the sector.
Dive Insight:
The plan was created as a result of a January executive order issued by President Donald Trump that aims to ensure the U.S. continues to be a competitive player in AI development.
A portion of the plan focuses on removing red tape for AI implementation, including asking federal agencies to consider limiting funding for states that have “burdensome” AI regulations. The plan, however, doesn’t detail which state laws could be considered onerous.
The administration also plans to set up AI Centers of Excellence across the country to deploy and test AI tools, enabled by agencies like the Food and Drug Administration.
Additionally, the plan looks to create domain-specific groups of public, private and academic stakeholders, including in healthcare, to speed the adoption of national standards for AI and measure how much the technology increases productivity.
Some healthcare industry groups praised the plan.
Medtech lobbying group AdvaMed pointed out in a Wednesday statement that the FDA has now authorized more than 1,200 medical devices using AI, after the agency updated its list earlier this month.
“We look forward to working with the White House on policies that would unleash even more innovation tapping into the full potential of AI to help make America healthy again,” AdvaMed CEO Scott Whitaker said.
In a statement, Soumi Saha, senior vice president of government affairs at group purchasing organization Premier, said the plan “sets a course towards secure, trustworthy artificial intelligence (AI) in healthcare.”
Federal involvement in AI standards could be a positive move too, Leigh Burchell, chair of the EHR Association Executive Committee, said in a Thursday statement.
“As we evaluate the implications of the AI Action Plan for our member companies, their healthcare provider clients, and, most importantly, patients, we reiterate our call for a uniform, risk-based regulatory model at the federal level,” she said. “Fragmented state mandates risk slowing innovation and complicating compliance, which could deter innovation and adoption.”
Healthcare leaders are excited about the promise of the technology in the sector — hoping AI could ameliorate staffing challenges and providers’ heavy administrative workload — but it comes with risks.
Inaccurate or misleading responses, as well as bias embedded in AI tools, could have serious consequences for patients and providers. Additionally, AI products aren’t easy to implement and can require significant human labor, like ongoing monitoring of algorithms to ensure they’re still performing up to standards.
Meanwhile, a federal framework for AI in healthcare remains nascent. Former President Joe Biden attempted to set the groundwork with a sweeping executive order, but Trump rescinded that order days into his term.
Along with other policies, Biden’s order asked HHS to establish a task force and lay out a strategic plan for the technology in the sector, which the department released shortly before Trump took office.