AI News, Healthcare Data Series: Improve Data Quality (Part 2)

Artificial intelligence

In computer science, artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans.

Leading AI textbooks define the field as the study of 'intelligent agents': any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[1]

Colloquially, the term 'artificial intelligence' is often used to describe machines (or computers) that mimic 'cognitive' functions that humans associate with the human mind, such as 'learning' and 'problem solving'.[2]

The traditional problems (or goals) of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception and the ability to move and manipulate objects.[14]

Many tools are used in AI, including versions of search and mathematical optimization, artificial neural networks, and methods based on statistics, probability and economics.

In the twenty-first century, AI techniques have experienced a resurgence following concurrent advances in computer power, large amounts of data, and theoretical understanding; and AI techniques have become an essential part of the technology industry, helping to solve many challenging problems in computer science, software engineering and operations research.[24][12]

The study of mathematical logic led directly to Alan Turing's theory of computation, which suggested that a machine, by shuffling symbols as simple as '0' and '1', could simulate any conceivable act of mathematical deduction.

The development of metal–oxide–semiconductor (MOS) very-large-scale integration (VLSI), in the form of complementary MOS (CMOS) transistor technology, enabled the development of practical artificial neural network (ANN) technology in the 1980s.

The success was due to increasing computational power (see Moore's law and transistor count), greater emphasis on solving specific problems, new ties between AI and other fields (such as statistics, economics and mathematics), and a commitment by researchers to mathematical methods and scientific standards.[42]

According to Bloomberg's Jack Clark, 2015 was a landmark year for artificial intelligence: the number of software projects that use AI within Google increased from 'sporadic usage' in 2012 to more than 2,700 projects.

He attributes this to an increase in affordable neural networks, due to a rise in cloud computing infrastructure and to an increase in research tools and datasets.[12]

Computer science defines AI research as the study of 'intelligent agents': any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[1]

A more elaborate definition characterizes AI as “a system’s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation.”[60]

An AI's intended utility function (or goal) can be simple ('1 if the AI wins a game of Go, 0 otherwise') or complex ('Perform actions mathematically similar to ones that succeeded in the past').

Alternatively, an evolutionary system can induce goals by using a 'fitness function' to mutate and preferentially replicate high-scoring AI systems, similar to how animals evolved to innately desire certain goals such as finding food.[61]

Some AI systems, such as nearest-neighbor, reason by analogy instead; these systems are not generally given goals, except to the degree that goals are implicit in their training data.[62]

Some of the 'learners' described below, including Bayesian networks, decision trees, and nearest-neighbor, could theoretically (given infinite data, time, and memory) learn to approximate any function, including whichever combination of mathematical functions would best describe the world.

In practice, it is almost never possible to consider every possibility, because of the phenomenon of 'combinatorial explosion', where the amount of time needed to solve a problem grows exponentially.

The third major approach, extremely popular in routine business AI applications, is analogizers such as SVM and nearest-neighbor: 'After examining the records of known past patients whose temperature, symptoms, age, and other factors mostly match the current patient, X% of those patients turned out to have influenza'.
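
To make the analogizer idea concrete, here is a minimal k-nearest-neighbor sketch in Python; the patient records, feature names, and the choice of k are illustrative assumptions, not data from any study.

```python
# Minimal k-nearest-neighbor sketch of the "analogizer" idea above.
# The patient records and feature names are illustrative assumptions.
from collections import Counter
import math

# Each record: (temperature_C, age_years, cough_severity_0to3), label
known_patients = [
    ((39.1, 34, 3), "influenza"),
    ((38.7, 52, 2), "influenza"),
    ((36.8, 29, 0), "healthy"),
    ((37.0, 61, 1), "healthy"),
    ((38.9, 45, 3), "influenza"),
]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_estimate(new_patient, records, k=3):
    """Return the fraction of the k most similar past patients per label."""
    nearest = sorted(records, key=lambda r: distance(r[0], new_patient))[:k]
    counts = Counter(label for _, label in nearest)
    return {label: n / k for label, n in counts.items()}

print(knn_estimate((38.8, 40, 2), known_patients))
# e.g. {'influenza': 1.0} -> "X% of similar past patients had influenza"
```

In practice the features would be normalized to a common scale before computing distances; the point here is only the 'match the current case to similar past cases' pattern.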

A fourth approach is harder to understand intuitively, but is inspired by how the brain's machinery works: the artificial neural network approach uses artificial 'neurons' that learn by comparing the network's output with the desired output and altering the strengths of the connections between its internal neurons to 'reinforce' connections that seem useful.
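
A toy sketch of that weight-adjusting idea: a single artificial 'neuron' trained to reproduce logical OR by nudging its connection weights whenever its output misses the desired output. The sigmoid unit, learning rate, and training data are assumptions chosen only for illustration.

```python
# Toy single-neuron sketch: adjust connection weights to shrink the error
# between the neuron's output and the desired output (illustrative only).
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Tiny training set for logical OR: inputs -> desired output.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(2)]
bias = 0.0
lr = 0.5  # learning rate

for epoch in range(2000):
    for inputs, target in data:
        out = sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)
        error = target - out                  # how far from the desired output
        grad = error * out * (1 - out)        # slope of the sigmoid output
        weights = [w + lr * grad * x for w, x in zip(weights, inputs)]
        bias += lr * grad                     # "reinforce" useful connections

print([round(sigmoid(sum(w * x for w, x in zip(weights, i)) + bias), 2)
       for i, _ in data])   # approaches [0, 1, 1, 1]
```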

Therefore, according to Occam's razor principle, a learner must be designed such that it prefers simpler theories to complex theories, except in cases where the complex theory is proven substantially better.

Many systems attempt to reduce overfitting by rewarding a theory in accordance with how well it fits the data, but penalizing the theory in accordance with how complex the theory is.[70]
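
One common way to encode that trade-off is a regularized score: reward agreement with the data, then subtract a penalty that grows with model complexity. The quadratic (L2) penalty and the toy numbers below are assumptions for illustration, not the specific schemes cited in [70].

```python
# Sketch of "reward fit, penalize complexity": a model's data fit is offset
# by a penalty on the size of its parameters (an assumed L2 penalty).
def regularized_score(residuals, weights, lam=0.1):
    """Higher is better: negative squared error minus a complexity penalty."""
    fit = -sum(r * r for r in residuals)            # reward agreement with the data
    complexity = lam * sum(w * w for w in weights)  # penalize large/complex models
    return fit - complexity

# A wigglier model may fit slightly better yet score worse once penalized.
print(regularized_score(residuals=[0.2, -0.1, 0.3], weights=[1.0, -0.5]))
print(regularized_score(residuals=[0.1, -0.05, 0.2], weights=[8.0, -7.5, 6.0, -5.5]))
```

Here the second, more complex model fits the data a little better but receives the lower overall score, which is exactly the preference for simpler theories described above.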

A toy example is that an image classifier trained only on pictures of brown horses and black cats might conclude that all brown patches are likely to be horses.[71]

Such classifiers do not, however, perceive images the way humans do; instead, they learn abstract patterns of pixels that humans are oblivious to, but that linearly correlate with images of certain types of real objects.

Humans also have a powerful mechanism of 'folk psychology' that helps them to interpret natural-language sentences such as 'The city councilmen refused the demonstrators a permit because they advocated violence'.

For example, existing self-driving cars cannot reason about the location or the intentions of pedestrians in the exact way that humans do, and instead must use non-human modes of reasoning to avoid accidents.[80][81][82]

By the late 1980s and 1990s, AI research had developed methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.[85]

These algorithms proved to be insufficient for solving large reasoning problems, because they experienced a 'combinatorial explosion': they became exponentially slower as the problems grew larger.[65]

In addition, some projects attempt to gather the 'commonsense knowledge' known to the average person into a database containing extensive knowledge about the world.

Such a general knowledge base can also act as a mediator between domain ontologies that cover specific knowledge about a particular knowledge domain (field of interest or area of concern).

They need a way to visualize the future—a representation of the state of the world and be able to make predictions about how their actions will change it—and be able to make choices that maximize the utility (or 'value') of available choices.[107]

A sufficiently powerful natural language processing system would enable natural-language user interfaces and the acquisition of knowledge directly from human-written sources, such as newswire texts.

Modern statistical NLP approaches can combine all these strategies as well as others, and often achieve acceptable accuracy at the page or paragraph level, but continue to lack the semantic understanding required to classify isolated sentences well.

Besides the usual difficulties with encoding semantic commonsense knowledge, existing semantic NLP sometimes scales too poorly to be viable in business applications.

Machine perception is the ability to use input from sensors (such as cameras (visible spectrum or infrared), microphones, wireless signals, and active lidar, sonar, radar, and tactile sensors) to deduce aspects of the world.

A giant, fifty-meter-tall pedestrian far away, for instance, may produce exactly the same pixels as a nearby normal-sized pedestrian, requiring the AI to judge the relative likelihood and reasonableness of different interpretations, for example by using its 'object model' to assess that fifty-meter pedestrians do not exist.[123]

Advanced robotic arms and other industrial robots, widely used in modern factories, can learn from experience how to move efficiently despite the presence of friction and gear slippage.[125]

Moravec's paradox is named after Hans Moravec, who stated in 1988 that 'it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility'.[129][130]

Moderate successes related to affective computing include textual sentiment analysis and, more recently, multimodal affect analysis (see multimodal sentiment analysis), wherein AI classifies the affects displayed by a videotaped subject.[139]

Some computer systems mimic human emotion and expressions to appear more sensitive to the emotional dynamics of human interaction, or to otherwise facilitate human–computer interaction.[140]

These early projects failed to escape the limitations of non-quantitative symbolic logic models and, in retrospect, greatly underestimated the difficulty of cross-domain AI.

Many researchers predict that such 'narrow AI' work in different individual domains will eventually be incorporated into a machine with artificial general intelligence (AGI), combining most of the narrow skills mentioned in this article and at some point even exceeding human ability in most or all these areas.[18][143]

One high-profile example is that DeepMind in the 2010s developed a 'generalized artificial intelligence' that could learn many diverse Atari games on its own, and later developed a variant of the system which succeeds at sequential learning.[144][145][146]

Hypothetical AGI breakthroughs could include the development of reflective architectures that can engage in decision-theoretic metareasoning, and figuring out how to 'slurp up' a comprehensive knowledge base from the entire unstructured Web.[6]

Finally, a few 'emergent' approaches seek to simulate human intelligence extremely closely, and their proponents believe that anthropomorphic features like an artificial brain or simulated child development may someday reach a critical point where general intelligence emerges.[149][150]

For example, even specific straightforward tasks, like machine translation, require that a machine read and write in both languages (NLP), follow the author's argument (reason), know what is being talked about (knowledge), and faithfully reproduce the author's original intent (social intelligence).

A problem like machine translation is considered 'AI-complete', because all of these problems need to be solved simultaneously in order to reach human-level machine performance.

When access to digital computers became possible in the mid 1950s, AI research began to explore the possibility that human intelligence could be reduced to symbol manipulation.

Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with artificial general intelligence and considered this the goal of their field.

Economist Herbert Simon and Allen Newell studied human problem-solving skills and attempted to formalize them, and their work laid the foundations of the field of artificial intelligence, as well as cognitive science, operations research and management science.

Their research team used the results of psychological experiments to develop programs that simulated the techniques that people used to solve problems.

Unlike Simon and Newell, John McCarthy felt that machines did not need to simulate human thought, but should instead try to find the essence of abstract reasoning and problem-solving, regardless whether people used the same algorithms.[15]

His laboratory at Stanford (SAIL) focused on using formal logic to solve a wide variety of problems, including knowledge representation, planning and learning.[157]

Other researchers found that solving difficult problems in vision and natural language processing required ad-hoc solutions; they argued that there was no simple and general principle (like logic) that would capture all the aspects of intelligent behavior.

When computers with large memories became available around 1970, researchers from all three traditions began to build knowledge into AI applications.[161]

By the 1980s, progress in symbolic AI seemed to stall and many believed that symbolic systems would never be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition.

This coincided with the development of the embodied mind thesis in the related field of cognitive science: the idea that aspects of the body (such as movement, perception and visualization) are required for higher intelligence.

Within developmental robotics, developmental learning approaches are elaborated upon to allow robots to accumulate repertoires of novel skills through autonomous self-exploration, social interaction with human teachers, and the use of guidance mechanisms (active learning, maturation, motor synergies, etc.).[164][165][166][167]

Artificial neural networks are an example of soft computing—they are solutions to problems which cannot be solved with complete logical certainty, and where an approximate solution is often sufficient.

Much of traditional GOFAI got bogged down on ad hoc patches to symbolic computation that worked on their own toy models but failed to generalize to real-world results.

However, around the 1990s, AI researchers adopted sophisticated mathematical tools, such as hidden Markov models (HMM), information theory, and normative Bayesian decision theory to compare or to unify competing architectures.

Compared with GOFAI, new 'statistical learning' techniques such as HMM and neural networks were gaining higher levels of accuracy in many practical domains such as data mining, without necessarily acquiring a semantic understanding of the datasets.

The increased successes with real-world data led to increasing emphasis on comparing different approaches against shared test data to see which approach performed best in a broader context than that provided by idiosyncratic toy models.

In AGI research, some scholars caution against over-reliance on statistical learning, and argue that continuing research into GOFAI will still be necessary to attain general intelligence.[172][173]

Planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis.[182]

In some search methodologies heuristics can also serve to entirely eliminate some choices that are unlikely to lead to a goal (called 'pruning the search tree').

These algorithms can be visualized as blind hill climbing: we begin the search at a random point on the landscape, and then, by jumps or steps, we keep moving our guess uphill, until we reach the top.
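
A minimal sketch of that picture, assuming an arbitrary one-dimensional objective: start at a random guess and keep taking whichever small step scores higher until no neighbor improves.

```python
# Blind hill climbing: start somewhere random, keep stepping uphill until no
# neighboring guess scores higher. The objective below is an arbitrary example.
import random

def objective(x):
    return -(x - 3.0) ** 2 + 9.0   # a single smooth hill peaking at x = 3

random.seed(1)
x = random.uniform(-10, 10)        # random starting guess
step = 0.1
while True:
    neighbors = [x - step, x + step]
    best = max(neighbors, key=objective)
    if objective(best) <= objective(x):   # no uphill move left: we are at a top
        break
    x = best

print(round(x, 2), round(objective(x), 2))   # close to (3.0, 9.0), a local maximum
```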

For example, they may begin with a population of organisms (the guesses) and then allow them to mutate and recombine, selecting only the fittest to survive each generation (refining the guesses).
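
A compact sketch of that mutate-recombine-select loop; the bit-string encoding, population size, and count-the-ones fitness function are assumptions chosen to keep the example small.

```python
# Toy evolutionary search: a population of bit-string "organisms" is mutated,
# recombined, and selected by fitness (here, simply the number of 1 bits).
import random

random.seed(0)
GENES, POP = 20, 30

def fitness(org):
    return sum(org)                      # illustrative fitness: count of 1s

def mutate(org, rate=0.05):
    return [1 - g if random.random() < rate else g for g in org]

def crossover(a, b):
    cut = random.randrange(1, GENES)
    return a[:cut] + b[cut:]             # recombine two parents

population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP // 2]   # only the fittest reproduce
    children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                for _ in range(POP - len(survivors))]
    population = survivors + children

print(fitness(max(population, key=fitness)))   # approaches the maximum, 20
```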

Two popular swarm algorithms used in search are particle swarm optimization (inspired by bird flocking) and ant colony optimization (inspired by ant trails).[187][188]

Fuzzy set theory assigns a 'degree of truth' (between 0 and 1) to vague statements such as 'Alice is old' (or rich, or tall, or hungry) that are too linguistically imprecise to be completely true or false.

Fuzzy logic is successfully used in control systems to allow experts to contribute vague rules such as 'if you are close to the destination station and moving fast, increase the train's brake pressure'.
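
A small sketch of how such a vague rule can be evaluated numerically: membership functions map crisp measurements to degrees of truth in [0, 1], and the rule's strength is taken as the minimum of those degrees. The membership shapes and thresholds are illustrative assumptions, not a real train controller.

```python
# Fuzzy-control sketch: degrees of truth for "close to the station" and
# "moving fast" are combined (via min) to grade the rule's applicability.
# The membership functions and numbers are illustrative assumptions.

def close_to_station(distance_m):
    """1.0 when very close, falling linearly to 0.0 at 500 m or more."""
    return max(0.0, min(1.0, (500.0 - distance_m) / 500.0))

def moving_fast(speed_kmh):
    """0.0 below 40 km/h, rising linearly to 1.0 at 100 km/h or more."""
    return max(0.0, min(1.0, (speed_kmh - 40.0) / 60.0))

def brake_pressure_increase(distance_m, speed_kmh):
    # Rule: IF close to the station AND moving fast THEN increase brake pressure.
    rule_strength = min(close_to_station(distance_m), moving_fast(speed_kmh))
    return rule_strength          # 0.0 = no increase, 1.0 = maximum increase

print(brake_pressure_increase(distance_m=120, speed_kmh=85))  # strong increase (0.75)
print(brake_pressure_increase(distance_m=450, speed_kmh=85))  # mild increase (0.1)
```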

Probabilistic algorithms can also be used for filtering, prediction, smoothing and finding explanations for streams of data, helping perception systems to analyze processes that occur over time (e.g., hidden Markov models or Kalman filters).[204]
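
As one example of filtering a stream of data, here is a one-dimensional Kalman-filter sketch: each noisy reading updates a running estimate of a hidden quantity, weighted by how uncertain the estimate currently is. The noise settings and measurements are illustrative assumptions.

```python
# 1-D Kalman filter sketch: fuse a stream of noisy readings into a running
# estimate of a hidden, roughly constant value. All settings are illustrative.
measurements = [5.1, 4.8, 5.4, 5.0, 4.9, 5.2]

estimate, variance = 0.0, 1000.0   # start with an uninformed estimate
process_var, measurement_var = 1e-4, 0.25

for z in measurements:
    variance += process_var                         # predict: uncertainty grows slightly
    gain = variance / (variance + measurement_var)  # how much to trust the new reading
    estimate += gain * (z - estimate)               # update toward the measurement
    variance *= (1 - gain)                          # uncertainty shrinks after the update

print(round(estimate, 2))   # settles near the underlying value (about 5.07 here)
```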

Complicated graphs with diamonds or other 'loops' (undirected cycles) can require a sophisticated method such as Markov chain Monte Carlo, which spreads an ensemble of random walkers throughout the Bayesian network and attempts to converge to an assessment of the conditional probabilities.
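
A sketch of that idea on the textbook 'sprinkler' network (its probability tables are the standard classroom illustration, not from this article): a sampler repeatedly resamples each unobserved variable given its neighbors, and the fraction of samples in which Rain is true approximates the conditional probability of rain given wet grass.

```python
# Gibbs-sampling sketch on the classic "sprinkler" Bayesian network:
# estimate P(Rain=true | WetGrass=true) by repeatedly resampling each
# unobserved variable given its neighbors. Textbook probability tables assumed.
import random

random.seed(0)

P_C = 0.5
P_S_given_C = {True: 0.1, False: 0.5}
P_R_given_C = {True: 0.8, False: 0.2}
P_W_given_SR = {(True, True): 0.99, (True, False): 0.90,
                (False, True): 0.90, (False, False): 0.0}

def bernoulli(w_true, w_false):
    """Sample True/False proportionally to the two unnormalized weights."""
    return random.random() < w_true / (w_true + w_false)

# Evidence: WetGrass = True. Initialize the hidden variables arbitrarily.
c, s, r = True, False, True
rain_true, samples = 0, 20000

for _ in range(samples):
    # Resample Cloudy from P(C) * P(S | C) * P(R | C)
    w_t = P_C * (P_S_given_C[True] if s else 1 - P_S_given_C[True]) \
              * (P_R_given_C[True] if r else 1 - P_R_given_C[True])
    w_f = (1 - P_C) * (P_S_given_C[False] if s else 1 - P_S_given_C[False]) \
                    * (P_R_given_C[False] if r else 1 - P_R_given_C[False])
    c = bernoulli(w_t, w_f)

    # Resample Sprinkler from P(S | C) * P(WetGrass=True | S, R)
    s = bernoulli(P_S_given_C[c] * P_W_given_SR[(True, r)],
                  (1 - P_S_given_C[c]) * P_W_given_SR[(False, r)])

    # Resample Rain from P(R | C) * P(WetGrass=True | S, R)
    r = bernoulli(P_R_given_C[c] * P_W_given_SR[(s, True)],
                  (1 - P_R_given_C[c]) * P_W_given_SR[(s, False)])

    rain_true += r

print(rain_true / samples)   # roughly 0.7, close to the exact posterior
```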

Classifier performance depends greatly on the characteristics of the data to be classified, such as the dataset size, distribution of samples across classes, the dimensionality, and the level of noise.

Otherwise, if no matching model is available, and if accuracy (rather than speed or scalability) is the sole concern, conventional wisdom is that discriminative classifiers (especially SVM) tend to be more accurate than model-based classifiers such as 'naive Bayes' on most practical data sets.[220][221]

One simple algorithm (dubbed 'fire together, wire together') is to increase the weight between two connected neurons when the activation of one triggers the successful activation of another.
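
That rule is essentially one line of code: strengthen the weight between two units whenever both are active in the same step. The learning rate and binary firing pattern below are illustrative assumptions.

```python
# Hebbian "fire together, wire together" sketch: the weight between two units
# grows whenever both are active in the same step. Values are illustrative.
def hebbian_update(weight, pre_active, post_active, lr=0.1):
    """Increase the connection strength only when both neurons fire together."""
    return weight + lr * (1.0 if pre_active and post_active else 0.0)

w = 0.0
activity = [(1, 1), (1, 0), (0, 1), (1, 1), (1, 1)]   # (pre, post) firing pattern
for pre, post in activity:
    w = hebbian_update(w, pre, post)

print(round(w, 1))   # 0.3: three co-activations strengthened the connection
```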

In the 2010s, advances in neural networks using deep learning thrust AI into widespread public consciousness and contributed to an enormous upshift in corporate AI spending.

The main categories of networks are acyclic or feedforward neural networks (where the signal passes in only one direction) and recurrent neural networks (which allow feedback and short-term memories of previous input events).
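
A tiny sketch of that structural difference: the feedforward function below has no memory between calls, while the recurrent one threads a hidden state from each input to the next. Weights and sizes are arbitrary illustrations.

```python
# Feedforward vs. recurrent sketch: the first function has no memory between
# calls; the second carries a hidden state through the input sequence.
import math

def feedforward(x, w1=0.8, w2=-0.5):
    h = math.tanh(w1 * x)          # signal flows strictly forward
    return math.tanh(w2 * h)

def recurrent(sequence, w_in=0.8, w_rec=0.5):
    h = 0.0                        # hidden state acts as a short-term memory
    outputs = []
    for x in sequence:
        h = math.tanh(w_in * x + w_rec * h)   # feedback from the previous step
        outputs.append(round(h, 3))
    return outputs

print(feedforward(1.0))
print(recurrent([1.0, 0.0, 0.0]))  # the earlier input still echoes in later steps
```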

Neural networks can be applied to the problem of intelligent control (for robotics) or learning, using such techniques as Hebbian learning ('fire together, wire together'), GMDH or competitive learning.[226]

However, some research groups, such as Uber, argue that simple neuroevolution to mutate new neural network topologies and weights may be competitive with sophisticated gradient descent approaches.

For example, a feedforward network with six hidden layers can learn a seven-link causal chain (six hidden layers + output layer) and has a 'credit assignment path' (CAP) depth of seven.

Deep learning has transformed many important subfields of artificial intelligence, including computer vision, speech recognition, natural language processing and others.[235][236][234]

In 2006, a publication by Geoffrey Hinton and Ruslan Salakhutdinov introduced another way of pre-training many-layered feedforward neural networks (FNNs) one layer at a time, treating each layer in turn as an unsupervised restricted Boltzmann machine, then using supervised backpropagation for fine-tuning.[242]

Over the last few years, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks that contain many layers of non-linear hidden units and a very large output layer.[243]

In 1992, it was shown that unsupervised pre-training of a stack of recurrent neural networks can speed up subsequent supervised learning of deep sequential problems.[253]

The most common areas of competition include general machine intelligence, conversational behavior, data-mining, robotic cars, and robot soccer as well as conventional games.[272]

The 'imitation game' (an interpretation of the 1950 Turing test that assesses whether a computer can imitate a human) is nowadays considered too exploitable to be a meaningful benchmark.[273]

High-profile examples of AI include autonomous vehicles (such as drones and self-driving cars), medical diagnosis, creating art (such as poetry), proving mathematical theorems, playing games (such as Chess or Go), search engines (such as Google search), online assistants (such as Siri), image recognition in photographs, spam filtering, and predicting flight delays.[279]

Social media sites are overtaking TV as a source of news for young people, and news organizations are increasingly reliant on social media platforms for distribution.[284]

In 2016, a groundbreaking study in California found that a mathematical formula developed with the help of AI correctly determined the accurate dose of immunosuppressant drugs to give to organ patients.[287]

Another study is using artificial intelligence to monitor multiple high-risk patients by asking each patient numerous questions based on data acquired from live doctor-to-patient interactions.[290]

In one study using transfer learning, the machine performed a diagnosis similarly to a well-trained ophthalmologist and could generate a decision within 30 seconds on whether or not the patient should be referred for treatment, with more than 95% accuracy.[291]

The team supervised the robot while it performed soft-tissue surgery, stitching together a pig's bowel during open surgery, and doing so better than a human surgeon, the team claimed.[292]

However, Google has been working on an algorithm intended to eliminate the need for pre-programmed maps and instead create a device able to adjust to a variety of new surroundings.[298]

Some self-driving cars are not equipped with steering wheels or brake pedals, so there has also been research focused on creating an algorithm that is capable of maintaining a safe environment for the passengers in the vehicle through awareness of speed and driving conditions.[299]

Financial institutions have long used artificial neural network systems to detect charges or claims outside of the norm, flagging these for human investigation.

For example, AI-based buying and selling platforms have changed the law of supply and demand in that it is now possible to easily estimate individualized demand and supply curves and thus individualized pricing.

Other theories where AI has had an impact include rational choice, rational expectations, game theory, the Lewis turning point, portfolio optimization and counterfactual thinking.

The cybersecurity arena faces significant challenges in the form of large-scale hacking attacks of different types that harm organizations of all kinds and create billions of dollars in business damage.

This system will involve the use of cameras to ascertain traffic density and accordingly calculate the time needed to clear the traffic volume, which will determine the signal duration for vehicular traffic across streets.[314]

Intelligence technologies enable coordination of sensors and effectors, threat detection and identification, marking of enemy positions, target acquisition, and coordination and deconfliction of distributed Joint Fires between networked combat vehicles and tanks, including within Manned and Unmanned Teams (MUM-T).[318]

It is possible to use AI to predict or generalize the behavior of customers from their digital footprints in order to target them with personalized promotions or build customer personas automatically.[327]

Moreover, the application of Personality computing AI models can help reduce the cost of advertising campaigns by adding psychological targeting to more traditional sociodemographic or behavioral targeting.[329]

Irakli Beridze, Head of the Centre for Artificial Intelligence and Robotics at UNICRI, United Nations, has said: 'I think the dangerous applications for AI, from my point of view, would be criminals or large terrorist organizations using it to disrupt large processes or simply do pure harm.'

He argues that sufficiently intelligent AI, if it chooses actions based on achieving some goal, will exhibit convergent behavior such as acquiring resources or protecting itself from being shut down.

If this AI's goals do not fully reflect humanity's—one example is an AI told to compute as many digits of pi as possible—it might harm humanity in order to acquire more resources or prevent itself from being shut down, ultimately to better achieve its goal.

If the AI in that scenario were to become superintelligent, Bostrom argues, it may resort to methods that most humans would find horrifying, such as inserting 'electrodes into the facial muscles of humans to cause constant, beaming grins' because that would be an efficient way to achieve its goal of making humans smile.[355]

For this danger to be realized, the hypothetical AI would have to overpower or out-think all of humanity, which a minority of experts argue is a possibility far enough in the future to not be worth researching.[363][364]

Algorithms have a host of applications in today's legal system already, assisting officials ranging from judges to parole officers and public defenders in gauging the predicted likelihood of recidivism of defendants.[369]

It has been suggested that COMPAS assigns an exceptionally elevated risk of recidivism to black defendants while, conversely, ascribing low risk estimate to white defendants significantly more often than statistically expected.[369]

Jobs at extreme risk range from paralegals to fast food cooks, while job demand is likely to increase for care-related professions ranging from personal healthcare to the clergy.[375]

Research in this area includes machine ethics, artificial moral agents, and friendly AI, as well as ongoing discussion of building a human rights framework for AI.[377]

The field of machine ethics is concerned with giving machines ethical principles, or a procedure for discovering a way to resolve the ethical dilemmas they might encounter, enabling them to function in an ethically responsible manner through their own ethical decision making.[382]

The field was delineated in the AAAI Fall 2005 Symposium on Machine Ethics: 'Past research concerning the relationship between technology and ethics has largely focused on responsible and irresponsible use of technology by human beings, with a few people being interested in how human beings ought to treat machines.

In contrast to computer hacking, software property issues, privacy issues and other topics normally ascribed to computer ethics, machine ethics is concerned with the behavior of machines towards human users and other machines.

Research in machine ethics is key to alleviating concerns with autonomous systems—it could be argued that the notion of autonomous machines without such a dimension is at the root of all fear concerning machine intelligence.

Humans should not assume machines or robots would treat us favorably because there is no a priori reason to believe that they would be sympathetic to our system of morality, which has evolved along with our particular biology (which AIs would not share).

I think the worry stems from a fundamental error in not distinguishing the difference between the very real recent advances in a particular aspect of AI, and the enormity and complexity of building sentient volitional intelligence.'[385]

The philosophical position that John Searle has named 'strong AI' states: 'The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.'[388]

The technological singularity is the hypothesized point at which accelerating progress in technologies causes a runaway effect wherein artificial intelligence exceeds human intellectual capacity and control, thus radically changing or even ending civilization.

Ray Kurzweil has used Moore's law (which describes the relentless exponential improvement in digital technology) to calculate that desktop computers will have the same processing power as human brains by the year 2029, and predicts that the singularity will occur in 2045.[394]

A survey of economists showed disagreement about whether the increasing use of robots and AI will cause a substantial increase in long-term unemployment, but they generally agree that it could be a net benefit, if productivity gains are redistributed.[397]

In the 1980s, artist Hajime Sorayama's Sexy Robots series was painted and published in Japan, depicting the actual organic human form with lifelike muscular metallic skins; his later book The Gynoids was used by or influenced movie makers, including George Lucas and other creatives.

Sorayama never considered these organic robots to be a real part of nature but always an unnatural product of the human mind, a fantasy existing in the mind even when realized in actual form.

Part 2. Full Text of Announcement

Purpose

The purpose of this Funding Opportunity Announcement (FOA) is to invite Cooperative Agreement (U24) applications for the continued development and sustainment of high value informatics research resources to improve the acquisition, management, analysis, and dissemination of data and knowledge across the cancer research continuum including cancer biology, cancer treatment and diagnosis, early cancer detection, risk assessment and prevention, cancer control and epidemiology, and/or cancer health disparities.

In order to be successful, the proposed sustainment plan must provide clear justification for why the research resource should be maintained and how it has benefitted and will continue to benefit the cancer research field.

At the same time, it confronts researchers, including bench biologists and clinicians, with significant challenges to access data, analyze data, and ultimately transform discovery into new knowledge and clinical practice.

These challenges are especially prominent in the field of cancer research where complexity and heterogeneity of the disease translate to complex data generation conditions and high data management and analysis overhead, a condition that creates significant barriers to knowledge discovery and dissemination.

At the intersection of biology, physics, chemistry, medicine, mathematics, statistics, computer science, and information technology, biomedical informatics involves the development and application of computational tools to support the organization and understanding of biomedical information, so that new insight and knowledge can be discerned.

Moreover, ITCR provides support for informatics resources across the development lifecycle, including the development of innovative methods and algorithms, early-stage software development (current FOA), advanced stage software development, and sustainment of high-value resources on which the community has come to depend.

Companion FOAs of the ITCR program include:

Specific Research Objectives

This FOA invites applications to support the sustained operations of informatics technology resources that support a wide range of cancer research, including discovery biology, population studies, as well as clinical and translational research.

Some examples of informatics technologies that may be appropriate for this FOA include, but are not limited to, the following:

Examples of activities appropriate to the sustained operations of informatics technology in support of research include:

Applications in support of informatics technologies that address under-represented areas in the program portfolio are of particular interest.

Although a letter of intent is not required, is not binding, and does not enter into the review of a subsequent application, the information that it contains allows IC staff to estimate the potential review workload and plan the review.

Final decisions for the release of set-aside funds will be contingent upon 1) adjustment of the collaborative project based on peer-review comments, and 2) post-award but pre-fund-release assessment of the collaborative project’s value for advancement of the developed technology by the ITCR Steering Committee.

In support of this requirement, investigators should describe their and collaborators’ abilities and plans for facilitating the collaborative activities that will enhance the utility and/or interoperability of the informatics technology that is developed in response to this FOA.

In addition, for applications involving clinical trials: Are the scientific rationale and need for a clinical trial to test the proposed hypothesis or intervention well supported by preliminary data, clinical and/or preclinical studies, or information in the literature or knowledge of biological mechanisms?

For trials focusing on clinical or public health endpoints, is this clinical trial necessary for testing the safety, efficacy or effectiveness of an intervention that could lead to a change in clinical practice, community behaviors or health care policy?

In addition, for applications involving clinical trials: With regard to the proposed leadership for the project, do the PD/PI(s) and key personnel have the expertise, experience, and ability to organize, manage and implement the proposed clinical trial and meet milestones and timelines?

  Does the application challenge and seek to shift current research or clinical practice paradigms by utilizing novel theoretical concepts, approaches or methodologies, instrumentation, or interventions?

In addition, for applications involving clinical trials: Does the design/research plan include innovative elements, as appropriate, that enhance its sensitivity, potential for information or potential to advance scientific knowledge or clinical practice?

Specific for this FOA: Are there plans to evolve the research resource to address emerging needs of the targeted research communities and ensure that the research resource maintains relevance to the research it supports?

In addition, for applications involving clinical trials: Does the application adequately address the following, if applicable?

Study Design

Is the study design justified and appropriate to address primary and secondary outcome variable(s)/endpoints that will be clear, informative and relevant to the hypothesis being tested?

Given the methods used to assign participants and deliver interventions, is the study design adequately powered to answer the research question(s), test the proposed hypothesis/hypotheses, and provide interpretable results?

If the project involves human subjects and/or NIH-defined clinical research, are the plans to address 1) the protection of human subjects from research risks, and 2) inclusion (or exclusion) of individuals on the basis of sex/gender, race, and ethnicity, as well as the inclusion or exclusion of individuals of all ages (including children and older adults), justified in terms of the scientific goals and research strategy proposed?

Study Timeline Specific to applications involving clinical trials Is the study timeline described in detail, taking into account start-up activities, the anticipated rate of enrollment, and planned follow-up assessment?

Does the project incorporate efficiencies and utilize existing resources (e.g., CTSAs, practice-based research networks, electronic medical records, administrative database, or patient registries) to increase the efficiency of participant enrollment and data collection, as appropriate?

For research that involves human subjects but does not involve one of the categories of research that are exempt under 45 CFR Part 46, the committee will evaluate the justification for involvement of human subjects and the proposed protections from research risk relating to their participation according to the following five review criteria: 1) risk to subjects, 2) adequacy of protection against risks, 3) potential benefits to the subjects and others, 4) importance of the knowledge to be gained, and 5) data and safety monitoring for clinical trials.

For research that involves human subjects and meets the criteria for one or more of the categories of research that are exempt under 45 CFR Part 46, the committee will evaluate: 1) the justification for the exemption, 2) human subjects involvement and characteristics, and 3) sources of materials.

  When the proposed project involves human subjects and/or NIH-defined clinical research, the committee will evaluate the proposed plans for the inclusion (or exclusion) of individuals on the basis of sex/gender, race, and ethnicity, as well as the inclusion (or exclusion) of individuals of all ages (including children and older adults) to determine if it is justified in terms of the scientific goals and research strategy proposed.

  The committee will evaluate the involvement of live vertebrate animals as part of the scientific assessment according to the following criteria: (1) description of proposed procedures involving animals, including species, strains, ages, sex, and total number to be used;

  Reviewers will assess whether materials or procedures proposed are potentially hazardous to research personnel and/or the environment, and if needed, determine whether adequate protection is proposed.

  For Resubmissions, the committee will evaluate the application as now presented, taking into consideration the responses to comments from the previous scientific review group and changes made to the project.

  Reviewers will assess whether the project presents special opportunities for furthering research programs through the use of unusual talent, resources, populations, or environmental conditions that exist in other countries and either are not readily available in the United States or augment existing U.S. resources.

Reviewers will assess the information provided in this section of the application, including 1) the Select Agent(s) to be used in the proposed research, 2) the registration status of all entities where Select Agent(s) will be used, 3) the procedures that will be used to monitor possession, use and transfer of Select Agent(s), and 4) plans for appropriate biosafety, biocontainment, and security of the Select Agent(s).

  Reviewers will comment on whether the following Resource Sharing Plans, or the rationale for not sharing the following types of resources, are reasonable: (1) Data Sharing Plan;

  For projects involving key biological and/or chemical resources, reviewers will comment on the brief plans proposed for identifying and ensuring the validity of those resources.

  Reviewers will consider whether the budget and the requested period of support are fully justified and reasonable in relation to the proposed research.

Data and Safety Monitoring Requirements: The NIH policy for data and safety monitoring requires oversight and monitoring of all NIH-conducted or -supported human biomedical and behavioral intervention studies (clinical trials) to ensure the safety of participants and the validity and integrity of the data.

Investigational New Drug or Investigational Device Exemption Requirements: Consistent with federal regulations, clinical research projects involving the use of investigational therapeutics, vaccines, or other medical interventions (including licensed products and devices for a purpose other than that for which they were licensed) in humans under a research protocol must be performed under a Food and Drug Administration (FDA) investigational new drug (IND) or investigational device exemption (IDE).

This means that recipients of HHS funds must ensure equal access to their programs without regard to a person’s race, color, national origin, disability, age and, in some circumstances, sex and religion.

Thus, criteria in research protocols that target or exclude certain populations are warranted where nondiscriminatory justifications establish that such criteria are appropriate with respect to the health or safety of the subjects, the scientific study design, or the purpose of the research.

The Federal awarding agency will consider any comments by the applicant, in addition to other information in FAPIIS, in making a judgement about the applicant’s integrity, business ethics, and record of performance under Federal awards when completing the review of risk posed by applicants as described in 45 CFR Part 75.205 “Federal awarding agency review of risk posed by applicants.” This provision will apply to all NIH grants and cooperative agreements except fellowships.

HHS provides general guidance to recipients of FFA on meeting their legal obligation to take reasonable steps to provide meaningful access to their programs by persons with limited English proficiency.

The following special terms of award are in addition to, and not in lieu of, otherwise applicable U.S. Office of Management and Budget (OMB) administrative guidelines, U.S. Department of Health and Human Services (DHHS) grant administration regulations at 45 CFR Part 75, and other HHS, PHS, and NIH grant administration policies.

The administrative and funding instrument used for this program will be the cooperative agreement, an 'assistance' mechanism (rather than an 'acquisition' mechanism), in which substantial NIH programmatic involvement with the awardees is anticipated during the performance of the activities.

Consistent with this concept, the dominant role and prime responsibility resides with the awardees for the project as a whole, although specific tasks and activities may be shared among the awardees and the NIH as defined below.

The PD(s)/PI(s) will have the primary responsibility for:

NIH staff have substantial programmatic involvement that is above and beyond the normal stewardship role in awards, as described below:

An NCI Program staff member(s) acting as a Project Scientist(s) will have substantial programmatic involvement that is above and beyond the normal stewardship role in awards, as described below.

The main responsibilities of substantially involved NCI staff members include the following activities:

Areas of Joint Responsibility include:

Steering Committee: The ITCR Steering Committee will be composed of the following voting members:

Each voting member will have one vote.

Primary responsibilities of the ITCR Steering Committee include, but are not limited to, establishing procedures for the solicitation, evaluation and recommendation to awardees of collaborative/joint projects to be pursued with support of the set-aside funds from individual U01 and U24 awards.

This special dispute resolution procedure does not alter the awardee's right to appeal an adverse action that is otherwise appealable in accordance with PHS regulation 42 CFR Part 50, Subpart D and DHHS regulation 45 CFR Part 16.

In accordance with the regulatory requirements provided at 45 CFR 75.113 and Appendix XII to 45 CFR Part 75, recipients that have currently active Federal grants, cooperative agreements, and procurement contracts from all Federal awarding agencies with a cumulative total value greater than $10,000,000 for any period of time during the period of performance of a Federal award, must report and maintain the currency of information reported in the System for Award Management (SAM) about civil, criminal, and administrative proceedings in connection with the award or performance of a Federal award that reached final disposition within the most recent five-year period.

As required by section 3010 of Public Law 111-212, all information posted in the designated integrity and performance system on or after April 15, 2011, except past performance reviews required for Federal procurement contracts, will be publicly available.

Part 2. Full Text of Announcement

The overarching goals of the Alzheimer's Disease Sequencing Project (ADSP) are to: (1) identify new genes involved in Alzheimer’s disease and Alzheimer's disease-related dementias (AD/ADRD), (2) identify gene alleles contributing to increased risk for or protection against the disease, (3) provide insight as to why individuals with known risk factor genes escape from developing AD, and (4) identify potential avenues for therapeutic approaches to and prevention of the disease.

This study of human genetic variation and its relationship to health and disease involves a large number of study participants from ethnically diverse populations and is capturing not only common single nucleotide variations, but also rare copy number and structural variants that are increasingly thought to play an important role in complex disease.

By 2023, the total number of ADSP subjects with whole genome sequencing (WGS) is expected to reach approximately 50,000, including multi-ethnic cohorts from global regions such as Central and South America and Asia.

In addition to the ADSP genetic and phenotypic data, joint calling of ADSP data with data from other NIH-funded large-scale sequencing projects, including harmonized phenotypic data, is now feasible.

In January 2019, the NIA National Advisory Council on Aging (NACA) approved an initiative to apply cognitive systems (artificial intelligence (AI), machine learning (ML), and deep learning (DL)) approaches to the analysis of the ADSP genetic and related data.

New approaches to harmonization of phenotypic and endophenotypic data related to large-scale genetic studies are needed, including methods to combine data from a number of study cohorts with data that are similar, but not the same, as well as methods to incorporate functional status (both self-reported and objectively measured).

Success in this effort would generate harmonization strategies useful not only to the investigators of currently participating cohorts, but potentially to newly recruited or newly constituted cohorts in the future.

The need for this effort is urgent based on the number of subjects with WGS data that will be available soon and because successful analysis of the genetic data by cognitive systems approaches and by other secondary analysis approaches being used by the AD scientific community depends on the availability of these data.

Context for the Study Design: The ADSP Harmonization Consortium (ADSP-HC)

This FOA requires that competitive applicants, with deep understanding of all types of ADSP data, present plans and propose techniques for data harmonization in target domains that are common across studies.

The intention is that, under a single cooperative agreement (U24), this initiative will support phenotypic data harmonization on subjects with genetic data, and these data will become a long-lived “legacy” resource.

Studies under this FOA will bring together a single vanguard network of researchers with deep understanding of ADSP data and expertise in genetics, epidemiology, and clinical specialties who will work with the ADSP investigators and with study cohort leads engaged in data harmonization efforts.

Cohorts considered eligible for this activity are those that have genetic data in the ADSP (epidemiology cohorts, case-control, family-based, Alzheimer’s Disease Centers, and convenience cohorts) and related study cohorts with genetic/genomic data.

The team will focus on cohorts where genetic data that includes genome wide association study (GWAS) data, whole exome (WES) or whole genome sequence (WGS), and robust phenotypic data are available in order to better understand subtypes of phenotypes (endophenotypes) of AD and ADRD.

The ADSP Harmonization Consortium (ADSP-HC), a component of the ADSP-FUS, should generate harmonized data sets that will be shared through a central data repository for data (genetic, genomic, annotation, analysis, statistical, and phenotypic) collected by other NIA-funded studies with the capability to work in the cloud environment.

Milestones will be designed to deliver mechanisms to provide genetically and phenotypically harmonized data sets generated under agreed-upon principles that will meet the needs of genetic/genomic research on AD and related neurodegenerative diseases, with an emphasis on deep endophenotypes.

Given the desire to leverage existing investments, investigators that already have strong knowledge of multiple existing cohorts and ADSP efforts and have been productively engaged in relevant activities would be strongly encouraged to apply.

Endophenotypic data to be harmonized include, but are not limited to, cognitive data, structural imaging data, functional imaging data, longitudinal clinical data, neuropathologic data, cardiovascular risk data, and biomarker data.

Deposition of harmonized phenotypic data and related descriptive data files and code books, data dictionaries, and other related supporting materials to be made available to the NIA Genetics of Alzheimer’s Disease Data Storage Site (NIAGADS) for sharing with the research community.

Types of Data to be Harmonized

Please contact the Program Officer for this announcement for specific details on available data for this effort.

Although not an exhaustive list, any or all of the following types of data and data sets may be included in analytical plans that are appropriate for this FOA:

Endophenotypic Data Available

There are several major areas of phenotype data to be harmonized with high priority: In the longer term, there are several types of data that may be included as resources in the effort:

Approaches to Phenotypic Data Harmonization

NIH policy will be considered at all stages of the effort, as will any legal or data sharing restrictions, such as the European General Data Protection Regulation (EU GDPR).

Where possible, team members will participate in community genomics standards groups such as the Global Alliance for Genomics and Health (GA4GH) and NIH efforts where institutes are developing standard programmatic interfaces for managing, describing, and annotating phenotypic data.

The ADSP-HC, in consultation with NIA, will determine the harmonization approach to be used, create as many harmonized variables as possible, and create study-specific variables to allow for later pooled analysis.

Feasibility to harmonize cohorts by rapid turnaround will be established for a small set of phenotypes, including cognitive measures, MRI, other neurologic phenotypes, demography, and risk factor data.

The successful application will present a cost-effective method to store and share harmonized data and harmonized summary data through sub-contracts to appropriate sites, engaging existing NIA-funded infrastructure wherever possible.

The successful application will include mechanisms to ensure funding through subcontracts for those who provide cohort data from original studies, those who distribute data, those who aggregate data, and the central data distribution analysis site.

NIAGADS will ensure compliance with human subjects data sharing, data transfer agreements, and related documents needed for qualified access before data are distributed to the research community.

NIAGADS will track data harmonization efforts, work with domain experts, share workflows, and provide outcome data to the ADSP-HC and the research community.

Summary

NIA intends to fund a single group of ADSP-HC collaborators who have a deep understanding of ADSP genetic and phenotypic data to facilitate and support the harmonization and analysis of large-scale phenotypic data for the next phase of the ADSP activities.

This milestone-driven effort will generate harmonized, ethnically diverse data sets that will need continual curation and updating as new cohorts or types of phenotypic data are added to the ADSP.

Investigators will coordinate efforts through NIAGADS to generate data that are consistent in presentation to the research community, and to integrate at NIAGADS ADSP harmonized cohort study data and related data files necessary for their research.

Retention of harmonized data in a single repository/federated repository for ready access by investigators will be a significant step toward the feasibility of advanced data analysis approaches.

Applicant Organizations

Applicant organizations must complete and maintain the following registrations as described in the SF 424 (R&R) Application Guide to be eligible to apply for or receive an award.

Although a letter of intent is not required, is not binding, and does not enter into the review of a subsequent application, the information that it contains allows IC staff to estimate the potential review workload and plan the review.

PD(s)/PI(s) must have demonstrated experience and an ongoing record of accomplishments in effectively managing large amounts of diverse types of data in order to discover risk and protective genetic factors for complex diseases.

PD(s)/PI(s) should be facile in the curation of large data sets, including multiple layers of complex phenotypic/endophenotypic data relevant to genetic analysis, such as clinical and neuropathology data elements and related data.

The PD(s)/PI(s) should be skilled in understanding and management of clinical measures that may be found across many studies, such as diagnosis, cognitive testing measures, and selected biomarker and imaging measures.

The PD(s)/PI(s) should have a solid understanding of quantitative neuropathology data and comorbid conditions, including measures of various proteinopathies and vascular infarcts, and must have data management skills appropriate to such data.

For multi-PD/PI applications, investigators must have complementary and integrated expertise and skills in order to provide appropriate leadership approach, governance, plans for conflict resolution, and organizational structure to the AD genetics and genomics data and data management infrastructure.

PD(s)/PI(s) should define levels of experience in effectively managing large amounts of diverse types of data in order to discover risk and protective genetic factors for complex diseases and in coordinating collaborative (basic or clinical) research.

For multi-PD/PI applications, explain complementary and integrated expertise and skills in order to provide appropriate leadership approach, governance, plans for conflict resolution, and organizational structure to the AD genetics and genomics data and data management infrastructure.

Approach

PD(s)/PI(s) should define approaches to curation of large data sets, including multiple layers of complex phenotypic/endophenotypic data relevant to genetic analysis, such as clinical and neuropathology data elements and related data, and quantitative neuropathology data and comorbid conditions, including measures of various proteinopathies and vascular infarcts.

Innovation

Define any novel organizational concepts in the field of AD phenotypic data management strategies or in instrumentation in coordinating the AD research projects that the ADSP-HC will serve.

How familiar are the PD(s)/PI(s) with ADSP genetic and phenotypic data, and do they have experience handling AD genetics, genomics, and phenotypic data and effectively managing large amounts of diverse types of data in order to discover risk and protective genetic factors for complex diseases?

How facile are the PD(s)/PI(s) in curation of large data sets, including multiple layers of complex phenotypic/endophenotypic data relevant to genetic analysis, such as clinical and neuropathology data elements and related data?

How facile are the PD(s)/PI(s) in understanding and management of clinical measures that may be found across many studies, such as diagnosis, cognitive testing measures, and selected biomarker and imaging measures?

Does the application challenge and seek to shift current research or clinical practice paradigms by utilizing novel theoretical concepts, approaches or methodologies, instrumentation, or interventions?

If the project involves human subjects and/or NIH-defined clinical research, are the plans to address 1) the protection of human subjects from research risks, and 2) inclusion (or exclusion) of individuals on the basis of sex/gender, race, and ethnicity, as well as the inclusion or exclusion of individuals of all ages (including children and older adults), justified in terms of the scientific goals and research strategy proposed?

For research that involves human subjects but does not involve one of the categories of research that are exempt under 45 CFR Part 46, the committee will evaluate the justification for involvement of human subjects and the proposed protections from research risk relating to their participation according to the following five review criteria: 1) risk to subjects, 2) adequacy of protection against risks, 3) potential benefits to the subjects and others, 4) importance of the knowledge to be gained, and 5) data and safety monitoring for clinical trials.

For research that involves human subjects and meets the criteria for one or more of the categories of research that are exempt under 45 CFR Part 46, the committee will evaluate: 1) the justification for the exemption, 2) human subjects involvement and characteristics, and 3) sources of materials.

When the proposed project involves human subjects and/or NIH-defined clinical research, the committee will evaluate the proposed plans for the inclusion (or exclusion) of individuals on the basis of sex/gender, race, and ethnicity, as well as the inclusion (or exclusion) of individuals of all ages (including children and older adults) to determine if it is justified in terms of the scientific goals and research strategy proposed.

The committee will evaluate the involvement of live vertebrate animals as part of the scientific assessment according to the following criteria: (1) description of proposed procedures involving animals, including species, strains, ages, sex, and total number to be used;

Reviewers will assess whether materials or procedures proposed are potentially hazardous to research personnel and/or the environment, and if needed, determine whether adequate protection is proposed.

If the Revision application relates to a specific line of investigation presented in the original application that was not recommended for approval by the committee, then the committee will consider whether the responses to comments from the previous scientific review group are adequate and whether substantial changes are clearly evident.

Reviewers will assess the information provided in this section of the application, including 1) the Select Agent(s) to be used in the proposed research, 2) the registration status of all entities where Select Agent(s) will be used, 3) the procedures that will be used to monitor possession, use, and transfer of Select Agent(s), and 4) plans for appropriate biosafety, biocontainment, and security of the Select Agent(s).

This means that recipients of HHS funds must ensure equal access to their programs without regard to a person’s race, color, national origin, disability, age and, in some circumstances, sex and religion.

Thus, criteria in research protocols that target or exclude certain populations are warranted where nondiscriminatory justifications establish that such criteria are appropriate with respect to the health or safety of the subjects, the scientific study design, or the purpose of the research.

The Federal awarding agency will consider any comments by the applicant, in addition to other information in FAPIIS, in making a judgement about the applicant’s integrity, business ethics, and record of performance under Federal awards when completing the review of risk posed by applicants as described in 45 CFR Part 75.205 “Federal awarding agency review of risk posed by applicants.”

HHS provides general guidance to recipients of FFA on meeting their legal obligation to take reasonable steps to provide meaningful access to their programs by persons with limited English proficiency.

The following special terms of award are in addition to, and not in lieu of, otherwise applicable U.S. Office of Management and Budget (OMB) administrative guidelines, U.S. Department of Health and Human Services (DHHS) grant administration regulations at 45 CFR Part 75, and other HHS, PHS, and NIH grant administration policies.

Consistent with this concept, the dominant role and prime responsibility resides with the awardees for the project as a whole, although specific tasks and activities may be shared among the awardees and the NIH as defined below.

Institutions providing data will retain custody of, and primary rights to, the site-specific data developed under their individual awards, in keeping with Institutional Review Board approval, and subject to Government rights of access, consistent with current HHS, PHS, and NIH policies.

The PD(s)/PI(s) will administer the establishment, operation, and quality control of harmonized phenotypic data, including the development of procedures for assuring data quality control and procedures for transfer of data generated by NIH-funded investigators into the database.

The PD(s)/PI(s) are responsible for working cooperatively with study sites and sponsoring organizations and for overseeing the implementation of, and adherence to, common protocols, as well as assuring quality control of the data collected.
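As an illustration of the kind of automated quality-control check that could run before a batch of phenotypic records is transferred into a shared database, here is a minimal sketch. The required fields, value ranges, and record structure are assumptions for the example only; they do not represent the ADSP-HC's or NIAGADS's actual procedures.

```python
# A minimal, illustrative quality-control report for a batch of phenotypic
# records, using hypothetical field names and rules.
import pandas as pd

REQUIRED_COLUMNS = ["subject_id", "age", "sex", "diagnosis_harmonized"]

def qc_report(df: pd.DataFrame) -> dict:
    """Summarize basic quality problems in a batch of phenotypic records."""
    report = {
        "missing_columns": [c for c in REQUIRED_COLUMNS if c not in df.columns],
        "duplicate_subject_ids": int(df["subject_id"].duplicated().sum())
        if "subject_id" in df.columns else None,
        "rows_with_missing_values": int(df[REQUIRED_COLUMNS].isna().any(axis=1).sum())
        if all(c in df.columns for c in REQUIRED_COLUMNS) else None,
    }
    # Range check on age as an example of a domain-specific rule.
    if "age" in df.columns:
        report["ages_out_of_range"] = int(((df["age"] < 0) | (df["age"] > 120)).sum())
    return report

# Example usage with toy records containing a duplicate ID, a missing
# diagnosis, and an implausible age.
batch = pd.DataFrame({
    "subject_id": ["A1", "A1", "A3"],
    "age": [72, 72, 130],
    "sex": ["F", "F", "M"],
    "diagnosis_harmonized": ["AD", "AD", None],
})
print(qc_report(batch))
```

A report like this could gate the transfer step, so that only batches passing the agreed-upon checks move into the shared database.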

NIH staff have substantial programmatic involvement that is above and beyond the normal stewardship role in awards, as described below. The designated NIA Project Scientist will have scientific involvement during the conduct of this activity, providing technical assistance, advice, and coordination and assisting in those aspects of the award described below.

The NIA Project Scientist will monitor the deposition of phenotypic data into NIAGADS to ensure that NIA-funded investigators have appropriately deposited data and have properly acknowledged the use of the cohorts and related phenotypic data in the publication of their work.

Additionally, the NIA Program Official will be responsible for normal program stewardship, including assessing the progress toward the accomplishment of specified milestones, and for recommending release of additional funds to the project, and will be named in the award notice.

The Executive Committee will serve as the main decision-making body for the shared aspects of the study and will devise common protocols related to data harmonization and sharing among collaborators, stakeholders, and the AD research community.

Members of the Executive Committee for the data storage site will contribute to the effort by accessing and assessing appropriate phenotype data and providing expertise in the harmonization of phenotypic data from specific AD cohorts.

This special dispute resolution procedure does not alter the awardee's right to appeal an adverse action that is otherwise appealable in accordance with PHS regulation 42 CFR Part 50, Subpart D and DHHS regulation 45 CFR Part 16.

In accordance with the regulatory requirements provided at 45 CFR 75.113 and Appendix XII to 45 CFR Part 75, recipients that have currently active Federal grants, cooperative agreements, and procurement contracts from all Federal awarding agencies with a cumulative total value greater than $10,000,000 for any period of time during the period of performance of a Federal award, must report and maintain the currency of information reported in the System for Award Management (SAM) about civil, criminal, and administrative proceedings in connection with the award or performance of a Federal award that reached final disposition within the most recent five-year period.

As required by section 3010 of Public Law 111-212, all information posted in the designated integrity and performance system on or after April 15, 2011, except past performance reviews required for Federal procurement contracts, will be publicly available.

Your own blog with GitHub Pages and fast_template (4 part tutorial)

As governments consider new uses of technology, whether that be sensors on taxi cabs, police body cameras, or gunshot detectors in public places, this raises issues around surveillance of vulnerable populations, unintended consequences, and potential misuse.

In 2013, Oakland announced plans for a new Domain Awareness Center (DAC), which would deploy over 700 cameras throughout schools and public housing, along with facial recognition software, automated license plate readers (ALPRs), storage capacity for 300 terabytes of data, and a centralized facility with live monitoring.

Through the advocacy of local citizens, the plans were dramatically scaled back and the Oakland Privacy Commission was formed, which continues to provide valuable insight into potential government decisions and purchases.

For instance, cell-site simulators (often referred to as sting-rays), which help police locate a person’s cell phone, were protected by particularly strong NDAs, in which police had to agree that it was better to drop a case than to reveal that a cell-site simulator had been used in apprehending the suspect.

“Now we have third-party intermediary, they have a kind of privacy shield, they’re not subject to state public record laws, and they have departments sign contracts that they are going to keep this secret.”

Project Green Light is a public-private partnership in Detroit in which high-definition surveillance cameras outside businesses stream live data to police, and participating businesses are prioritized by police over non-participants.

Black people are disproportionately likely to be stopped by police (even though when police search Black, Latino and Native American people, they are less likely to find drugs, weapons or other contraband compared to when they search white people), disproportionately likely to be written up on minor infractions, and thus disproportionately likely to have their faces appear in police face databases (which are unregulated and not audited for mistakes).

At the CADE Tech Policy Workshop, she shared how Project Green Light makes her feel less safe and gave a more hopeful example of how to increase safety: give people chairs to sit on their front porches and encourage them to spend more time outside talking with their neighbors.

Instead, people were putting bars on their doors and windows, fearing one another. Young people went door to door and offered free chairs to neighbors if they would agree to sit on their front porches while children walked to and from school.

As Zeynep Tufekci wrote in Wired, sociologists distinguish between high-trust societies (in which people can expect most interactions to work and to have access to due process) and low-trust societies (in which people expect to be cheated and that there is no recourse when you are wronged).

In the case of police body cameras, this lack of choice and control is worsened by the fact that Axon (previously known as Taser) has a monopoly on police body cameras: since it has relationships with 17,000 of the 18,000 police departments in the USA, cities may not even have much choice.

In many cases, cities may want to have fewer options or collect less data, which goes against the prevailing tech approach which Mozilla Head of Policy Chris Riley described as “collect now, monetize later, store forever just in case”.

AI in Healthcare: Real-World Machine Learning Use Cases

Levi Thatcher, PhD, VP of Data Science at Health Catalyst, will share practical AI use cases and distill the lessons into a framework you can use when evaluating ...

2. The Advent of AI in Healthcare

Once thought of as a futuristic threat to humankind, artificial intelligence is now a part of everyday life. In healthcare, AI is changing the game with its applications in ...

Mayo Clinic Minute: How artificial intelligence could improve outcomes for stroke patients

People use artificial intelligence – or AI – any time they ask Siri, Alexa or Google to help them find something. But AI is also changing how health care providers ...

Getting Started with Healthcare.ai

The healthcare.ai packages are designed to streamline healthcare machine learning. They do this by including functionality specific to healthcare, as well as ...
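For readers who want a feel for the kind of workflow such packages streamline, here is a generic sketch using scikit-learn on a synthetic data set; it deliberately does not use the healthcare.ai API, and the feature names and outcome are hypothetical.

```python
# A generic scikit-learn sketch of a tabular clinical-prediction workflow,
# with synthetic data and hypothetical feature names (not the healthcare.ai API).
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500
data = pd.DataFrame({
    "age": rng.integers(30, 90, n),
    "a1c": rng.normal(6.5, 1.0, n),
    "prior_admissions": rng.poisson(1.0, n),
})
# Synthetic label: readmission risk rises with age and prior admissions.
risk = 0.02 * data["age"] + 0.5 * data["prior_admissions"] - 2.5
data["readmitted"] = (rng.random(n) < 1 / (1 + np.exp(-risk))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    data.drop(columns="readmitted"), data["readmitted"], random_state=0)

model = Pipeline([
    ("impute", SimpleImputer()),           # fill in missing clinical values
    ("clf", RandomForestClassifier(random_state=0)),
])
model.fit(X_train, y_train)
print("AUROC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

Healthcare-specific tooling typically wraps steps like these (imputation, model training, evaluation) behind a simpler interface tuned to clinical data.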

MD vs. Machine: Artificial intelligence in health care

Recent advances in artificial intelligence and machine learning are changing the way doctors practice medicine. Can medical data actually improve health care?

CME Preview: Current Applications and Future of Artificial Intelligence in Cardiology 2019

Paul Friedman, M.D., Chair of the Mayo Clinic Department of Cardiovascular Medicine, invites you to attend Current Applications and Future of Artificial Intelligence ...

HSS Minute: Artificial Intelligence

How can artificial intelligence be used when evaluating patient data? Hear from HSS radiologist Dr. Hollis Potter. "HSS Minute" is a video series geared towards ...

Artificial Intelligence in Health Care: Mayo Clinic Radio

Dr. Tufia Haddad, a Mayo Clinic oncologist and the physician leader for Mayo Clinic's collaboration with IBM Watson, discusses artificial intelligence and ...

Artificial Intelligence in Cardiology: Introduction to A.I.

Mayo Clinic cardiologist Francisco Lopez-Jimenez, M.D., discusses artificial intelligence in cardiology. To learn more, visit Artificial ..

Katherine Chou, Google - Stanford Medicine Big Data | Precision Health 2018

Precision Health is a fundamental shift to more proactive and personalized health care that empowers people to lead healthy lives. It is in this spirit of possibility ...