AI Regulations Coming: 5 Things To Know

Artificial intelligence (AI). Technology. Data rights and privacy. Cybersecurity. These are all rapidly growing areas for health care and life science (HCLS) companies, and frequently they are interrelated. As with most innovations, AI’s use is well ahead of the policies and laws that govern it. To begin addressing this, the President released a wide-ranging Executive Order this week. Here are five things to know.

What was released this week?

The big news was that President Biden signed an Executive Order (EO) on the safe, secure and trustworthy development and use of AI. The EO seeks to establish a government-wide approach to setting new standards for AI safety and security, protecting privacy, and promoting innovation and competition, among other themes. The wide-ranging EO covers all types of federal agencies and situations, including AI’s use in HCLS.

How are health care and life sciences impacted under the EO?

The EO includes specific language related to HCLS and directs the federal Department of Health & Human Services (HHS) to do a variety of things. For example, within 90 days, HHS in consultation with other agencies must create an HHS AI Task Force. Within a year the Task Force must develop a strategic plan that includes policies and frameworks, including potential regulations, on the responsible deployment and use of AI and AI-enabled tech in the following sectors – health and human services, research and discovery, drug and device safety, health care delivery and financing, and public health.

What will the HHS AI Task Force focus on?

The HHS AI Task Force is required to cover issues revolving around the safe, effective and equitable use of AI and AI-enabled tech, among others. A few of those items relate to:

  • the development, maintenance, and use of predictive and generative AI-enabled technologies in healthcare delivery and financing — including quality measurement, performance improvement, program integrity, benefits administration, and patient experience — taking into account considerations such as appropriate human oversight of the application of AI-generated output
  • the long-term safety and real-world performance monitoring, including clinically relevant or significant modifications and performance across population groups
  • incorporating equity principles in AI-enabled tech, using disaggregated data and representative population data sets when developing new models, and monitoring algorithmic performance to guard against bias and discrimination
  • incorporating privacy and security standards into the software development lifecycle

Additionally, HHS will direct other component agencies to assess whether AI technologies maintain quality, including developing an AI assurance policy to evaluate AI-enabled health care tools and enabling pre-market assessments and post-market oversight.

Also, the agencies will consider actions to advance compliance with nondiscrimination and privacy laws with respect to AI, including handling complaints. There are also provisions related to accelerating or prioritizing certain AI grantmaking opportunities.

When will this all happen?

The EO is only the beginning of this process; it is essentially the framework from which the agencies will now work. Deadlines vary by provision. For HCLS, some pieces are due within 90 days, others within 180 days, and still others within 365 days.

What does this mean to me?

AI is a broad term that encompasses many forms. Most likely you are already using AI today, and some of you are developing it.

Whether you’re using automated revenue cycle tools, chatbots or ambient clinical intelligence for clinical documentation, there’s likely AI involved. How about those of you in ACOs or advancing patient care management programs? If you’re using predictive analytics to understand your patient populations, there are likely AI algorithms involved. How about those of you developing devices, drugs or software tools to address all types of HCLS use cases? AI again.

The forthcoming guidance, regulations and processes may impact you as a user or a developer. Either way, understanding AI’s role in your organization – where you use it, how you use it, how you test it, how you develop it, how you audit it, and how you maintain the security and privacy of the data you collect – is an important long-term consideration.

How can CLA help?

Bottom line: the forthcoming guidance and potential regulations could impact you either as a user or a developer, so it’s important to be mindful of this issue. We are here to help. Our data, cyber, policy, and industry-specialized knowledge can help you bridge these moving parts as AI and AI regulations unfold.

Additional Resource

  • 608-662-7635

Jennifer Boese is the Director of Health Care Policy at CLA. She is a highly successful public policy, legislative, advocacy and political affairs leader, including working in both the state and federal government as well as the private sector. She brings over 20 years of government relations and public policy knowledge with her to CLA. Well over half of her career has been spent dedicated to health care policy and the health care industry, affording her a deep understanding of the health care market and environment, health care organizations and health care stakeholders. Her role at CLA is to provide thought leadership, policy analysis and strategic insights to health care providers across the continuum related to the industry's ongoing transformation towards value. A key focus of that work is on market innovations and emerging payment models. Her goal is to help CLA clients navigate and thrive in an increasingly dynamic health care environment.
