New CPT code for radiology AI brings hope, but patience is needed
Top 10 Emerging Trends in Health Care for 2021: The New Normal
In these unprecedented times, what priorities should hospital and health care system boards focus on to prepare for 2021 and beyond?
As organizations manage through the pandemic, we expect continued disruption to be the norm, and pathways to success will increasingly depend on collaboration, innovation, digitization and scaling ahead of the competition.
Their strategy is to leverage the capabilities of these power players to lower the cost of care, increase downstream market capture and focus on core specialty services while remaining highly connected to the patient.
For example, when a West Coast health system adopted precision scheduling practices to minimize wasted time between imaging exams, they were able to open up 5,000 new exam slots annually, so patients could be scheduled sooner.
Other top issues include the continued emergence of virtual care solutions across the care continuum, from telehealth visits to virtual hospital care and home-based care.
This isn’t a silver bullet but instead is a natural progression to support providers and patients in a more meaningful way: Virtual needs to become the way organizations work versus a disconnected component of the strategy.
Expect to see large organizations making big investments to better leverage and monetize the use of data to improve productivity, enhance patient care and drive additional funding for key programs.
We also see organizations monetizing data/intellectual property through relationships with nontraditional partners in pharma and big tech, and forming venture capital funds to manage downside risk related to unpredictable patient volumes and volatility of traditional nonoperating investments.
With COVID-19 turning historical utilization rates on their head and making 2021 projections nearly impossible to calculate, employers, providers and payers are forced to reconsider utilization, rates and risk as they model the coming year.
What the Aftercare? How to Use Z Codes in ICD-10
But that doesn’t mean therapists should exclusively use aftercare codes, as these specific ICD-10 codes apply only in very select circumstances. So, when it comes to aftercare codes, rehab therapists should keep the following advice in mind.
And that makes sense considering that most of those codes represent conditions—including bone, joint, and muscle conditions that are recurrent or resulting from a healed injury—for which therapy treatment does progress in the same way it does for acute injuries.
When patients need continual care during a post-treatment healing or recovery phase—or when they require care for chronic symptoms that resulted from their original ailment—aftercare visit codes perfectly fit the bill.
Z codes also apply to post-op care when the condition that precipitated the surgery no longer exists—but the patient still requires therapeutic care to return to a healthy level of function.
The presentation lists the appropriate codes for this scenario. If the line between acceptable and unacceptable uses of aftercare codes still seems a bit fuzzy, just remember that in most cases, you should only use aftercare codes if there’s no other way for you to express that a patient is on the “after” side of an aforementioned “before-and-after” event.
How FDA Regulates Artificial Intelligence in Medical Products
Health care organizations are using artificial intelligence (AI)—which the U.S. Food and Drug Administration defines as “the science and engineering of making intelligent machines”—for a growing range of clinical, administrative, and research purposes.
The agency is currently considering how to adapt its review process for AI-enabled medical devices that have the ability to evolve rapidly in response to new data, sometimes in ways that are difficult to foresee.2 This brief describes current and potential uses of AI in health care settings and the challenges these technologies pose, outlines how and under what circumstances they are regulated by FDA, and highlights key questions that will need to be addressed to ensure that the benefits of these devices outweigh their risks.
In health care, AI technologies are already used in fields that rely on image analysis, such as radiology and ophthalmology, and in products that process and analyze data from wearable sensors to detect diseases or infer the onset of other health conditions.4 AI programs can also predict patient outcomes based on data collected from electronic health records, such as determining which patients may be at higher risk for disease or estimating who should receive increased monitoring.
One such model identifies patients in the emergency room who may be at increased risk for developing sepsis based on factors such as vital signs and test results from electronic health records.5 Another hospital system has developed a model that aims to predict which discharged patients are likely to be readmitted following their release more accurately than other risk-assessment tools do.6 Other health care systems will likely follow suit in developing their own models as the technology becomes more accessible and well-established, and as federal regulations implement efforts to facilitate data exchange between electronic health record systems and mobile applications, a process known as interoperability.7

Finally, AI can also play a role in research, including pharmaceutical development, combing through large sets of clinical data to improve a drug’s design, predict its efficacy, and discover novel ways to treat diseases.8 The COVID-19 pandemic might help drive advances in AI in the clinical context, as hospitals and researchers have deployed it to support research, predict patient outcomes, and diagnose the disease.9 Several AI products have already been developed for use against COVID-19.10

AI can be developed using a variety of techniques.
In traditional, or rules-based, approaches, an AI program will follow human-prescribed instructions for how to process data and make decisions, such as being programmed to alert a physician each time a patient with high blood pressure should be prescribed medication.15 Rules-based approaches are usually grounded in established best practices, such as clinical practice guidelines or literature.16 Machine learning (ML) algorithms, by contrast—also referred to as a data-based approach—“learn” how to process data and make decisions directly from the data they are trained on, rather than from explicitly programmed rules.
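The distinction can be sketched in a few lines of code. This is an illustrative toy, not any regulated product: the blood-pressure thresholds, field names, and data below are hypothetical, and the "learned" cutoff is a deliberately simple stand-in for a real ML training procedure.

```python
# Contrast between a rules-based alert and a data-driven ("learned") one.
# All thresholds and data here are hypothetical illustrations.

def rules_based_alert(systolic_bp: int) -> bool:
    """Rules-based approach: follow a human-prescribed instruction,
    e.g. a guideline-style rule flagging systolic BP above 140 mmHg."""
    return systolic_bp > 140

def learn_threshold(readings, labels):
    """Data-based approach (a toy stand-in for machine learning):
    pick the cutoff that best separates labeled examples, instead of
    taking the cutoff from a guideline."""
    best_cut, best_correct = None, -1
    for cut in sorted(set(readings)):
        correct = sum((r > cut) == bool(y) for r, y in zip(readings, labels))
        if correct > best_correct:
            best_cut, best_correct = cut, correct
    return best_cut

# Hypothetical training data: (systolic BP, did a clinician flag it?)
readings = [118, 122, 135, 138, 150, 160, 172, 181]
labels   = [0,   0,   0,   0,   1,   1,   1,   1]

threshold = learn_threshold(readings, labels)
print(rules_based_alert(150))  # the guideline rule fires: True
print(threshold)               # cutoff inferred from the data: 138
```

The rules-based function encodes the decision up front; the learned version derives it from examples, which is why its behavior shifts as the data shifts.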
Algorithms developed without considering geographic diversity, including variables such as disease prevalence and socioeconomic differences, may not perform as well as they should across a varied array of real-world settings.22 The data collection challenges and the inequalities embedded within the health care system contribute to bias in AI programs that can affect product safety and effectiveness and reinforce the disparities that have led to improper or insufficient treatment for many populations, particularly minority groups.23

For example, cardiovascular disease risks in populations of races and ethnicities that are not White have been both overestimated and underestimated by algorithms trained with data from the Framingham Heart Study, which mostly involved White patients.24 Similarly, if an algorithm developed to help detect melanoma is trained heavily on images of patients with lighter skin tones, it may not perform as well when analyzing lesions on people of color, who already present with more advanced skin disease and face lower survival rates than White patients.25 Bias can also occur when an algorithm developed in one setting, such as a large academic medical center, is applied in another, such as a small rural hospital with fewer resources.
If the software is intended to treat, diagnose, cure, mitigate, or prevent disease or other conditions, FDA considers it a medical device.37 Most AI/ML-based products that qualify as medical devices are categorized as Software as a Medical Device (SaMD).38 Examples of SaMD include software that helps detect and diagnose a stroke by analyzing MRI images, or computer-aided detection (CAD) software that processes images to aid in detecting breast cancer.39 Some consumer-facing products—such as certain applications that run on a smartphone—may also be classified as SaMD.40

By contrast, FDA refers to a computer program that is integral to the hardware of a medical device—such as one that controls an X-ray panel—as Software in a Medical Device.41 These products can also incorporate AI technologies.
However, the authors note that they relied on publicly available information, and because the agency does not require companies to categorize their devices as AI/ML-based in public documents, it is difficult to know the true number.50 Alternatively, certain Class I and Class II device manufacturers may submit a De Novo request to FDA, which can be used for devices that are novel but whose safety and underlying technology are well understood, and which are therefore considered to be lower risk.51 Several AI-driven devices currently on the market—such as IDx-DR, OsteoDetect, and ContaCT (see the text box, “Examples of FDA Cleared or Approved AI-Enabled Products”)—are Class II devices that were reviewed through the De Novo pathway.52 Class III devices pose the highest risk.
however, current regulations do exempt licensed practitioners who manufacture or alter devices solely for use in their practice from product registration requirements.59 Hospital accrediting bodies (such as the Joint Commission), standards-setting organizations (such as the Association for the Advancement of Medical Instrumentation), and government actors may need to fill this gap in oversight to ensure patient safety as these tools are more widely adopted.60 For example, the Federal Trade Commission (FTC), which is responsible for protecting consumers and promoting fair market competition, published guidance in April 2020 for organizations using AI-enabled algorithms.
Because algorithms that automate decision-making have the potential to produce negative or adverse outcomes for consumers, the guidance emphasizes the importance of using tools that are transparent, fair, robust, and explainable to the end consumer.61 One year later, the FTC announced that it may take action against those organizations whose algorithms may be biased or inaccurate.62 FDA officials have acknowledged that the rapid pace of innovation in the digital health field poses a significant challenge for the agency.
In addition, the agency will support efforts to develop methods for the evaluation and improvement of ML algorithms, including how to identify and eliminate bias, and to work with stakeholders to advance real-world performance monitoring pilots.71 Especially as the use of AI products in health care proliferates, FDA and other stakeholders will need to develop clear guidelines on the clinical evidence necessary to demonstrate the safety and effectiveness of such products and the extent to which product labels need to specify limitations on their performance and generalizability.
Further, for AI products used in the drug development process, FDA may need to provide additional guidance on the extent and type of evidence necessary to validate that products are working as intended.72 To fully seize the potential benefits that AI can add to the health care field while simultaneously ensuring the safety of patients, FDA may need to forge partnerships with a variety of stakeholders, including hospital accreditors, private technology firms, and other government actors such as the Office of the National Coordinator for Health Information Technology, which promulgates key standards for many software products, or the Centers for Medicare and Medicaid Services, which makes determinations about which technologies those insurance programs will cover.
Explainability: The ability for developers to explain in plain language how their data will be used.73

Generalizability: The accuracy with which results or findings can be transferred to other situations or people outside of those originally studied.74

Good Machine Learning Practices (GMLP): AI/ML best practices (such as those for data management or evaluation), analogous to good software engineering practices or quality system practices.75

Machine learning (ML): An AI technique that can be used to design and train software algorithms to learn from and act on data.
Are We Approaching AI’s Big Moment in Healthcare? Let’s Hope So
“That data was then fed back into the platform to form a data set of patterns of volleyball plays.” Further, “An image-tracking system following live games at the Olympics picks up movement from sensors on players’ uniforms—as well as the ball—relaying data through the platform to identify shots, track the ball speed or determine the height of a jump, among other play-by-play information.”
After the organization’s CIO created an “Innovation Garage,” members of that “garage team” developed CarePath, which BCBSNC executives describe as “an advanced deep learning factory approach for creating predictive models that identify target populations at risk for hospital readmissions.” That approach, they note, “enables a more focused, personalized patient intervention that is implemented during the transition from the hospital to the home.” And the predictive analytical model they’ve built “applies a readmission risk score to members currently undergoing inpatient procedures,” with members further prioritized by probability of readmission and by low engagement with their primary care physicians (PCPs).
The health plan’s care management team’s engagement success rate rose within a year from 12 percent to 57 percent, and the BCBSNC people have established far more connected and productive interactions with the care management teams in medical groups and hospitals.
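The prioritization logic described above—rank by readmission probability, then surface members with weak PCP engagement—can be sketched briefly. The field names, weights, and data here are hypothetical illustrations, not BCBSNC's actual CarePath model.

```python
# Minimal sketch of outreach prioritization: highest readmission risk
# first, ties broken toward members least engaged with their PCP.
# Field names and values are hypothetical, not the CarePath model.
from dataclasses import dataclass

@dataclass
class Member:
    member_id: str
    readmission_risk: float  # model-assigned probability, 0.0-1.0
    pcp_engagement: float    # share of recommended PCP visits kept, 0.0-1.0

def prioritize(members):
    """Sort for outreach: descending risk, then ascending engagement."""
    return sorted(members, key=lambda m: (-m.readmission_risk, m.pcp_engagement))

members = [
    Member("A", 0.82, 0.90),
    Member("B", 0.82, 0.10),  # same risk as A, but poorly engaged
    Member("C", 0.35, 0.50),
]
for m in prioritize(members):
    print(m.member_id)
# B is flagged ahead of A despite equal risk, because of low PCP engagement
```

A single sort key like this keeps the triage rule auditable, which matters when care managers need to explain why one member was called before another.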
“In 2018, clinicians began work on a telemedicine-based sepsis detection and response system, using artificial intelligence to implement a nurse-driven and physician-supported care bundle that has led to a 30 percent decrease in sepsis mortality.” Importantly, Raths noted, “Timely and effective care for sepsis, including adherence to evidence-based guidelines, continues to be a challenge and priority for every health system.
The VHC’s remote teams investigate warnings generated by alerts.” And those are just two of many, many great case studies emerging right now and involving clinical and operational leaders in patient care organizations and health plans moving forward to address some of the thorniest issues facing the U.S. healthcare delivery system, including inpatient readmissions and sepsis crises, using AI and machine learning to leap ahead into true innovation.
“Their research, published in the International Journal of Environmental Health, describes the model that maps the county neighborhood-by-neighborhood, based on four indicators known to increase an individual’s vulnerability to COVID-19 infection: preexisting medical conditions, barriers to accessing health care, built-environment characteristics and socioeconomic challenges that create vulnerabilities.” In that article, Raths quoted Vickie Mays, Ph.D., UCLA Fielding School of Public Health professor of health policy and management and professor of psychology in the UCLA College of Letters and Sciences, who explained the research in a press release.
“The model we have includes specific resource vulnerabilities that can guide public health officials and local leaders across the nation to harness already available local data to determine which groups in which neighborhoods are most vulnerable and how to prevent new infections,” Dr. Mays said in a statement contained in the press release.
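A composite index over the four indicator categories named above can be sketched as follows. The tract names, scores, and equal-weight averaging are illustrative assumptions, not the UCLA team's actual methodology.

```python
# Minimal sketch of a neighborhood vulnerability index built from the
# four indicator categories. All values and the equal weighting are
# hypothetical, not the published UCLA model.

INDICATORS = (
    "preexisting_conditions",
    "barriers_to_care",
    "built_environment",
    "socioeconomic_challenges",
)

def vulnerability_index(neighborhood: dict) -> float:
    """Average the four indicator scores (each pre-scaled to 0-1)
    into a single 0-1 vulnerability score."""
    return sum(neighborhood[k] for k in INDICATORS) / len(INDICATORS)

neighborhoods = {
    "Tract 101": {"preexisting_conditions": 0.8, "barriers_to_care": 0.7,
                  "built_environment": 0.6, "socioeconomic_challenges": 0.9},
    "Tract 102": {"preexisting_conditions": 0.2, "barriers_to_care": 0.3,
                  "built_environment": 0.4, "socioeconomic_challenges": 0.1},
}

# Rank tracts from most to least vulnerable for outreach planning
ranked = sorted(neighborhoods,
                key=lambda n: vulnerability_index(neighborhoods[n]),
                reverse=True)
print(ranked)  # most vulnerable tract first
```

The value of this kind of index is that each input is locally available data, so officials can recompute it per neighborhood without new data collection.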
Back in February 2019, the Medicare actuaries predicted that overall annual U.S. healthcare expenditures would soar from $3.6 trillion in 2017 to nearly $6 trillion by 2027, with gross domestic product (GDP) spent on healthcare in this country rising from 17.9 percent to 19.4 percent by 2027.
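The annual growth rate implied by those projections is easy to check. Assuming the ten-year span from 2017 to 2027 stated above:

```python
# Growth implied by the actuaries' projection cited above:
# $3.6 trillion in 2017 rising to roughly $6 trillion by 2027.
start, end, years = 3.6, 6.0, 10

# Compound annual growth rate: (end / start)^(1 / years) - 1
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # roughly 5% per year, every year, for a decade
```

That steady ~5 percent compounding, well ahead of typical GDP growth, is what pushes healthcare's share of GDP from 17.9 to 19.4 percent over the same period.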