AI News, Clinical natural language processing for predicting hospital readmission

Clinical natural language processing for predicting hospital readmission

Doctors have always written clinical notes about their patients — originally, the notes were on paper and were locked away in a cabinet.

These notes represent a vast wealth of knowledge and insight that can be utilized for predictive models using Natural Language Processing (NLP) to improve patient care and hospital workflow.

I recently read this great paper “Scalable and accurate deep learning for electronic health records” by Rajkomar et al.

The authors built many state-of-the-art deep learning models with hospital data to predict in-hospital mortality (AUC = 0.93–0.94), 30-day unplanned readmission (AUC = 0.75–0.76), prolonged length of stay (AUC = 0.85–0.86) and discharge diagnoses (AUC = 0.90).

This blog post will outline how to build a classification model to predict which patients are at risk for 30-day unplanned readmission utilizing free-text hospital discharge summaries.

This project uses the MIMIC-III database, an amazing free hospital database that contains de-identified data from over 50,000 patients who were admitted to Beth Israel Deaconess Medical Center in Boston, Massachusetts from 2001 to 2012.

In this project, we will make use of two MIMIC-III tables: the admissions table and the notes table. To maintain anonymity, all dates have been shifted far into the future for each patient, but the time between two consecutive events for a patient is maintained in the database.

First, we load the admissions table into a pandas dataframe. The main columns of interest in this table are the patient and admission identifiers and the admission and discharge timestamps. The next step is to convert the dates from their string format into datetimes.
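
A minimal sketch of this step, assuming the MIMIC-III admissions table has been exported to ADMISSIONS.csv and using the standard MIMIC-III column names SUBJECT_ID, HADM_ID, ADMITTIME, and DISCHTIME:

    import pandas as pd

    # Load the admissions table (file name assumed; adjust to your local copy).
    df_adm = pd.read_csv('ADMISSIONS.csv')

    # Convert the timestamp columns from strings to datetimes.
    # errors='coerce' turns malformed values into NaT instead of raising.
    df_adm.ADMITTIME = pd.to_datetime(df_adm.ADMITTIME,
                                      format='%Y-%m-%d %H:%M:%S', errors='coerce')
    df_adm.DISCHTIME = pd.to_datetime(df_adm.DISCHTIME,
                                      format='%Y-%m-%d %H:%M:%S', errors='coerce')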

First, we sort the dataframe by the admission date so that, for a single patient, the admissions appear in chronological order. We can then use the groupby shift operator to get the next admission (if it exists) for each SUBJECT_ID; note that the last admission for a patient doesn’t have a next admission.
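
A sketch of the sort and groupby shift, continuing with the df_adm dataframe from the previous step (NEXT_ADMITTIME is a column name I am introducing here):

    # Sort so each patient's admissions are in chronological order.
    df_adm = df_adm.sort_values(['SUBJECT_ID', 'ADMITTIME']).reset_index(drop=True)

    # For each patient, pull the next admission's time onto the current row.
    # The last admission per patient gets NaT because there is no next admission.
    df_adm['NEXT_ADMITTIME'] = df_adm.groupby('SUBJECT_ID').ADMITTIME.shift(-1)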

Because we are predicting unplanned readmissions, next-admission values that correspond to elective admissions are removed first; we then back-fill those gaps with the following admission and calculate the number of days until the next admission. In our dataset of 58,976 hospitalizations, there are 11,399 re-admissions.
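
One way the back-fill and day count could look (a sketch; DAYS_NEXT_ADMIT and OUTPUT_LABEL are column names I am introducing for illustration, not necessarily the author's):

    # Back-fill NEXT_ADMITTIME within each patient so a row whose immediate
    # next admission was cleared out points at the following admission instead.
    df_adm['NEXT_ADMITTIME'] = df_adm.groupby('SUBJECT_ID')['NEXT_ADMITTIME'].bfill()

    # Days between discharge and the next admission (NaN = never readmitted).
    df_adm['DAYS_NEXT_ADMIT'] = (df_adm.NEXT_ADMITTIME - df_adm.DISCHTIME).dt.total_seconds() / (24 * 60 * 60)

    # Label: 1 if the patient was readmitted within 30 days, else 0.
    df_adm['OUTPUT_LABEL'] = (df_adm.DAYS_NEXT_ADMIT < 30).astype('int')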

Since the next step is to merge the notes onto the admissions table, we might assume that there is one discharge summary per admission, but we should check this.

There are a lot of cases where you get multiple rows after a merge (although we dealt with that above), so I like to add assert statements after a merge. About 10.6% of the admissions are missing a discharge summary (df_adm_notes.TEXT.isnull().sum() / len(df_adm_notes)), so I investigated a bit further and discovered that 53% of the NEWBORN admissions were missing discharge summaries vs. ~4% for the other admission types.
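
A sketch of the merge plus sanity checks, assuming the notes table was loaded into a dataframe df_notes with the MIMIC-III columns SUBJECT_ID, HADM_ID, CATEGORY, and TEXT:

    # Keep only discharge summaries.
    df_discharge = df_notes[df_notes.CATEGORY == 'Discharge summary']

    # If an admission has several discharge summaries, keep only the last row
    # so the merge stays one-row-per-admission.
    df_discharge = df_discharge.drop_duplicates(subset=['SUBJECT_ID', 'HADM_ID'], keep='last')

    # Left merge onto the admissions table; admissions without a summary get NaN TEXT.
    df_adm_notes = pd.merge(df_adm,
                            df_discharge[['SUBJECT_ID', 'HADM_ID', 'TEXT']],
                            on=['SUBJECT_ID', 'HADM_ID'], how='left')

    # A left merge on a de-duplicated key must not change the row count.
    assert len(df_adm_notes) == len(df_adm), 'merge produced extra rows'

    # Fraction of admissions with no discharge summary.
    print(df_adm_notes.TEXT.isnull().sum() / len(df_adm_notes))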

Because far fewer admissions are followed by a 30-day readmission than not, we have a few options to balance the training data, such as sub-sampling the negatives or over-sampling the positives. Since I didn’t want to make any assumptions about the amount of RAM in your computer, we will sub-sample the negatives, but I encourage you to try out the other techniques if your computer or server can handle it, to see if you can get an improvement. (Post a comment below if you try this out!)
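
A sketch of sub-sampling the negatives, assuming a training split df_train that carries the OUTPUT_LABEL column introduced above (the random seed is arbitrary):

    # Separate positives (readmitted within 30 days) from negatives.
    rows_pos = df_train[df_train.OUTPUT_LABEL == 1]
    rows_neg = df_train[df_train.OUTPUT_LABEL == 0]

    # Sub-sample the negatives to match the number of positives, then shuffle.
    df_train_bal = pd.concat([rows_pos,
                              rows_neg.sample(n=len(rows_pos), random_state=42)])
    df_train_bal = df_train_bal.sample(frac=1, random_state=42).reset_index(drop=True)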

Now that we have created data sets that have a label and the notes, we need to preprocess our text data to convert it into something useful (i.e. numeric features) for the machine learning model.

Let’s define a function that will modify the original dataframe by filling missing notes with a space and removing newline and carriage return characters. The other option is to do this preprocessing as part of the pipeline.
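
A sketch of such a function, assuming the note text lives in a TEXT column:

    def preprocess_text(df):
        """Fill missing notes with a space and strip newline / carriage-return characters."""
        df = df.copy()
        df['TEXT'] = df['TEXT'].fillna(' ')
        df['TEXT'] = df['TEXT'].str.replace('\n', ' ', regex=False)
        df['TEXT'] = df['TEXT'].str.replace('\r', ' ', regex=False)
        return df

    df_train_bal = preprocess_text(df_train_bal)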

The tokenizer breaks a single note into a list of words, and a vectorizer takes a list of words and counts how many times each word occurs.

As an example, let’s say we have 3 notes. Essentially, you fit the CountVectorizer to learn the words in your data and then transform your data to create counts for each word.
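
A toy illustration with three made-up notes (not from MIMIC-III):

    from sklearn.feature_extraction.text import CountVectorizer

    sample_notes = ['patient admitted with chest pain',
                    'patient discharged home in stable condition',
                    'chest pain resolved and patient stable']

    vect = CountVectorizer()
    vect.fit(sample_notes)                   # learn the vocabulary
    X_sample = vect.transform(sample_notes)  # sparse matrix of word counts

    # get_feature_names_out() needs scikit-learn >= 1.0 (older versions: get_feature_names()).
    print(vect.get_feature_names_out())
    print(X_sample.toarray())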

We also need our output labels as separate variables. As the location of the scroll bar suggests, getting the data ready for the predictive model takes, as always, about 80% of the time.

We can now build a simple predictive model that takes our bag-of-words inputs and predicts if a patient will be readmitted in 30 days (YES = 1, NO = 0).
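
One reasonable choice for bag-of-words counts is a regularized logistic regression; a sketch, where X_train/X_valid are the vectorizer outputs, y_train/y_valid are the labels, and the hyperparameters are illustrative rather than the author's:

    from sklearn.linear_model import LogisticRegression

    clf = LogisticRegression(C=0.0001, penalty='l2', max_iter=1000)
    clf.fit(X_train, y_train)

    # Predicted probability of 30-day readmission for the validation set.
    y_valid_prob = clf.predict_proba(X_valid)[:, 1]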

One thing to point out is that the major difference in precision between the two sets is due to the fact that we balanced the training set, whereas the validation set keeps the original distribution.

Essentially the ROC curve allows you to see the trade-off between true positive rate and false positive rate as you vary the threshold on what you define as predicted positive vs predicted negative.
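
A sketch of computing the ROC curve and AUC for the validation set, continuing with the hypothetical y_valid and y_valid_prob from above:

    import matplotlib.pyplot as plt
    from sklearn.metrics import roc_curve, roc_auc_score

    fpr, tpr, thresholds = roc_curve(y_valid, y_valid_prob)
    auc = roc_auc_score(y_valid, y_valid_prob)

    plt.plot(fpr, tpr, label='Validation AUC = %.2f' % auc)
    plt.plot([0, 1], [0, 1], 'k--')   # chance line
    plt.xlabel('False positive rate')
    plt.ylabel('True positive rate')
    plt.legend()
    plt.show()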

We made many choices along the way (for example, how we sub-sampled the negatives, how we tokenized and vectorized the notes, and which classifier we used) that we could change to see if there is an improvement. When I am trying to improve my models, I read a lot of other blog posts and articles to see how people tackled similar issues.

You built a simple NLP model (AUC = 0.70) that predicts re-admission from hospital discharge summaries and is only slightly worse than the state-of-the-art deep learning method that uses all of the hospital data (AUC = 0.75).

Speculation detection for Chinese clinical notes: Impacts of word segmentation and embedding models

We experiment on a novel dataset of 36,828 clinical notes with 5,103 gold-standard speculation annotations on 2,000 notes, and compare the systems in which word embeddings are calculated based on word segmentations given by general and by domain-specific segmenters respectively.

We demonstrate that word segmentation is critical to produce high quality word embedding to facilitate downstream information extraction applications, and suggest that a domain dependent word segmenter can be vital to such a clinical NLP task in Chinese language.

Notes Data Model: Discrete and Nondiscrete Clinical Notes

The notes section of each medical record is written by a patient’s healthcare professional during hospitalization or outpatient care.

Variations in content, formatting, and detail are not barriers to access, as this model allows applications to both read and write notes within a patient’s chart regardless of the clinical situation or information included.

Data model

A data model (or datamodel[1][2][3][4][5]) is an abstract model that organizes elements of data and standardizes how they relate to one another and to the properties of real-world entities.

For instance, a data model may specify that the data element representing a car be composed of a number of other elements which, in turn, represent the color and size of the car and define its owner.
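
As a purely illustrative sketch of the car example (the class and attribute names are mine, not from the text), the same structure could be written in Python as:

    from dataclasses import dataclass

    @dataclass
    class Owner:
        name: str

    @dataclass
    class Car:
        # The 'car' data element is composed of other elements...
        color: str
        size: str
        owner: Owner  # ...and defines its owner.

    my_car = Car(color='red', size='compact', owner=Owner(name='Alice'))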

Sometimes the term refers to an abstract formalization of the objects and relationships found in a particular application domain, for example the customers, products, and orders found in a manufacturing organization.

At other times it refers to a set of concepts used in defining such formalizations: for example concepts such as entities, attributes, relations, or tables.

Data models describe the structure, manipulation and integrity aspects of the data stored in data management systems such as relational databases.

They typically do not describe unstructured data, such as word processing documents, email messages, pictures, digital audio, and video.

The table/column structure may be different from a direct translation of the entity classes and attributes, but it must ultimately carry out the objectives of the conceptual entity class structure.

Young and Kent's work was a first effort to create an abstract specification and invariant basis for designing different alternative implementations using different hardware components.

A next step in IS modelling was taken by CODASYL, an IT industry consortium formed in 1959, who essentially aimed at the same thing as Young and Kent: the development of 'a proper structure for machine independent problem definition language, at the system level of data processing'.

Codd worked out his theories of data arrangement, and proposed the relational model for database management based on first-order predicate logic.[13]

In the 1970s entity relationship modeling emerged as a new type of conceptual data modeling, originally proposed in 1976 by Peter Chen.

Entity relationship models were being used in the first stage of information system design during the requirements analysis to describe information needs or the type of information that is to be stored in a database.

Kent compared a data model to a map of a territory, emphasizing that in the real world, 'highways are not painted red, rivers don't have county lines running down the middle, and you can't see contour lines on a mountain'.

In contrast to other researchers who tried to create models that were mathematically clean and elegant, Kent emphasized the essential messiness of the real world, and the task of the data modeller to create order out of chaos without excessively distorting the truth.

A data structure diagram (DSD) is a diagram and data model used to describe conceptual data models by providing graphical notations which document entities and their relationships, and the constraints that bind them.

In DSDs, attributes are specified inside the entity boxes rather than outside of them, while relationships are drawn as boxes composed of attributes which specify the constraints that bind entities together.

DSDs differ from the ER model in that the ER model focuses on the relationships between different entities, whereas DSDs focus on the relationships of the elements within an entity and enable users to fully see the links and relationships between each entity.

An entity-relationship model (ERM), sometimes referred to as an entity-relationship diagram (ERD), could be used to represent an abstract conceptual data model (or semantic data model or physical data model) used in software engineering to represent structured data.

The logical data structure of a database management system (DBMS), whether hierarchical, network, or relational, cannot totally satisfy the requirements for a conceptual definition of data because it is limited in scope and biased toward the implementation strategy employed by the DBMS.

Data modeling in software engineering is the process of creating a data model by applying formal data model descriptions using data modeling techniques.

While data analysis is a common term for data modeling, the activity actually has more in common with the ideas and methods of synthesis (inferring general concepts from particular instances) than it does with analysis (identifying component concepts from more general ones).

(Presumably we call ourselves systems analysts because no one can say systems synthesists.) Data modeling strives to bring the data structures of interest together into a cohesive, inseparable whole by eliminating unnecessary data redundancies and by relating data structures with relationships.

A data model represents classes of entities (kinds of things) about which a company wishes to hold information, the attributes of that information, and relationships among those entities and (often implicit) relationships among those attributes.

The entities represented by a data model can be tangible entities, but models that include such concrete entity classes tend to change over time.

In the relational model, for example, the integrity part is expressed in first-order logic and the manipulation part is expressed using relational algebra, tuple calculus, and domain calculus.

For example, a data modeler may use a data modeling tool to create an entity-relationship model of the corporate data repository of some business enterprise.

Within the field of software engineering both a data model and an information model can be abstract, formal representations of entity types that include their properties, relationships and the operations that can be performed on them.

The entity types in the model may be kinds of real-world objects, such as devices in a network, or they may themselves be abstract, such as for the entities used in a billing system.

An information model is a representation of concepts, relationships, constraints, rules, and operations to specify data semantics for a chosen domain of discourse.

For example, the Document Object Model (DOM) [1] is a collection of objects that represent a page in a web browser, used by script programs to examine and dynamically change the page.

In computing the term object model has a distinct second meaning of the general properties of objects in a specific computer programming language, technology, notation or methodology that uses them.

To help ensure correctness, clarity, adaptability and productivity, information systems are best specified first at the conceptual level, using concepts and language that people can readily understand.

The conceptual design may include data, process and behavioral perspectives, and the actual DBMS used to implement the design might be based on one of many logical data models (relational, hierarchic, network, object-oriented etc.).[29]

Data modeling

Data modeling is a process used to define and analyze data requirements needed to support the business processes within the scope of corresponding information systems in organizations.

Therefore, the process of data modeling involves professional data modelers working closely with business stakeholders, as well as potential users of the information system.

The data requirements are initially recorded as a conceptual data model which is essentially a set of technology independent specifications about the data and is used to discuss initial requirements with the business stakeholders.

The conceptual data model is then translated into a logical data model, which documents the structures that can be implemented in databases; the last step in data modeling is transforming the logical data model into a physical data model that organizes the data into tables and accounts for access, performance and storage details.

The use of data modeling standards is strongly recommended for all projects requiring a standard means of defining and analyzing data within an organization, e.g., using data modeling:

In the context of business process integration (see figure), data modeling complements business process modeling, and ultimately results in database generation.[6]

However, the term 'database design' could also be used to apply to the overall process of designing, not just the base data structures, but also the forms and queries used as part of the overall database application within the Database Management System or DBMS.

Entity-relationship modeling is a relational schema database modeling method, used in software engineering to produce a type of conceptual data model (or semantic data model) of a system, often a relational database, and its requirements in a top-down fashion.

For example, a generic data model may define relation types such as a 'classification relation', being a binary relation between an individual thing and a kind of thing (a class), and a 'part-whole relation', being a binary relation between two things, one with the role of part, the other with the role of whole, regardless of the kind of things that are related.

By standardization of an extensible list of relation types, a generic data model enables the expression of an unlimited number of kinds of facts and will approach the capabilities of natural languages.
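
A hypothetical sketch of what such an extensible, generic relation store could look like (the names and helper are mine, purely illustrative):

    # Each fact is a typed binary relation between two things; new relation
    # types can be added without changing the structure.
    facts = [
        ('classification', 'my_car', 'Car'),       # individual thing -> kind of thing
        ('part-whole', 'engine_42', 'my_car'),     # part -> whole
        ('classification', 'engine_42', 'Engine'),
    ]

    def related(relation_type, left):
        """Return everything that 'left' relates to via the given relation type."""
        return [right for rtype, l, right in facts if rtype == relation_type and l == left]

    print(related('classification', 'my_car'))   # ['Car']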

Conventional data models, on the other hand, have a fixed and limited domain scope, because the instantiation (usage) of such a model only allows expressions of kinds of facts that are predefined in the model.

The overall goal of semantic data models is to capture more meaning of data by integrating relational concepts with more powerful abstraction concepts known from the Artificial Intelligence field.

The idea is to provide high level modeling primitives as integral part of a data model in order to facilitate the representation of real world situations.[10]
