AI News, Events
National experts chart roadmap for AI in medical imaging
The report is based on the outcomes of a workshop on the future of AI in medical imaging, which brought together experts in the field and was hosted at the National Institutes of Health in Bethesda, Maryland.
The collaborative report underscores the commitment of standards bodies, professional societies, governmental agencies, and private industry to work together toward a set of shared goals in service of patients, who stand to benefit from AI's potential to bring about innovative imaging technologies.
The report describes innovations that would help produce more publicly available, validated, and reusable data sets against which to evaluate new algorithms and techniques, noting that, to be useful for machine learning, these data sets require methods to rapidly create labeled or annotated imaging data.
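As a purely illustrative sketch (no field names or labels here come from the report), a labeled imaging data set entry for machine-learning evaluation can be as simple as a record tying an image identifier to a label and its annotator, plus a basic sanity check over the collection:

```python
from dataclasses import dataclass

# Hypothetical minimal record for one labeled medical image.
# Field names are illustrative, not drawn from any standard.
@dataclass
class AnnotatedImage:
    image_id: str   # stable identifier for the image file
    modality: str   # e.g. "CT", "MRI", "X-ray"
    label: str      # finding assigned by the annotator
    annotator: str  # who produced the label (for auditing)

def label_distribution(records):
    """Count how often each label occurs -- a first sanity check
    before using a data set to evaluate an algorithm."""
    counts = {}
    for r in records:
        counts[r.label] = counts.get(r.label, 0) + 1
    return counts

records = [
    AnnotatedImage("img-001", "CT", "nodule", "reader-A"),
    AnnotatedImage("img-002", "CT", "normal", "reader-A"),
    AnnotatedImage("img-003", "MRI", "nodule", "reader-B"),
]
print(label_distribution(records))  # {'nodule': 2, 'normal': 1}
```

Recording the annotator alongside the label is what makes checks such as inter-reader agreement possible later, which is one reason rapid annotation methods still need provenance.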
NIH is the primary federal agency conducting and supporting basic, clinical, and translational medical research, and is investigating the causes, treatments, and cures for both common and rare diseases.
Internal Workshop: Data Science & Artificial Intelligence
The University of Twente has a number of strategic, cross-disciplinary research programs that aim to strengthen our (inter)national position. An important domain in which we want to improve our reputation is Data Science & Artificial Intelligence (DS&AI).
The DS&AI domain encompasses many fields of research including, but not limited to, (complex) networks, anomaly detection, sensory data processing and interpretation, data fusion and integration, geospatial data analysis, data-driven modeling, information retrieval, machine learning, privacy-preserving data processing, social sensing, explainable AI, human-systems interaction, neuromorphic computing, AI hardware accelerators, spatiotemporal scene understanding, and natural language processing.
Likewise, application areas appear in abundance, as witnessed by the numerous data-oriented projects across all our faculties, many of which have strong links to modern artificial intelligence through the methods and techniques applied.
discussions in small groups
16:45 Plenary feedback and discussion
17:15 Wrap-up, conclusions, and planning follow-up
17:30 Drinks and snacks
The workshop is intended for all scientific staff involved in research at the assistant/associate/adjunct/full professor level.
The European Perspective on Responsible Computing
We live in a digital world where, every day, we interact with digital systems, whether through a mobile device or from inside a car.
As a consequence, ethical issues—privacy ones included (for example, unauthorized disclosure and mining of personal data, access to restricted resources)—are emerging as matters of utmost concern since they affect the moral rights of each human being and have an impact on the social, economic, and political spheres.
Regulation (EC) 45/2001 establishes the rules for data protection in the EU institutions and the creation of the European Data Protection Supervisor (EDPS) as independent supervisory authority to monitor and ensure people's right to privacy when EU institutions and bodies process their personal data.
The widespread use of AI techniques in the implementation of these systems has exacerbated the problem, contributing to the creation of systems and technologies whose behavior is intrinsically opaque.1,2,14 In this article, we will use the notion of autonomous technology rather than AI technology.
Institutional, social, and scientific entities and boards constantly feed the debate, advocating codes of ethics for developers and proposing regulations from governmental bodies.1,2,3,12,13,14,15,21,22 Admittedly, this debate is mostly concentrated in Western countries, although with different regulatory outcomes.
However, at least in Western countries, there is growing consensus that it is time to take action to address the harms of autonomous technologies15 and that such action must eventually have a regulatory nature and become part of public policy.18,19 In this respect, Europe is certainly far ahead in both thinking and regulation.
In its strategy for 2015–2019, the EDPS sets out the goal of developing an ethical dimension to data protection.4 To reach this goal, it established the Ethics Advisory Group (EAG) with the mandate to steer a reflection on the ethical implications of the digital world emerging from present technological trends.
The EDPS Opinion 4/2015, "Toward a new digital ethics,"3 identifies the fundamental right to privacy and the protection of personal data as core elements of the new digital ethics necessary to preserve human dignity, as stated in Article 1 of the EU Charter of Fundamental Rights.
In its 2018 report,6 the EAG has provided a broader set of reflections on the notion of digital ethics that address the "fundamental questions about what it means to make claims about ethics and human conduct in the digital age, when the baseline conditions of humanness are under the pressure of interconnectivity, algorithmic decision-making, machine-learning, digital surveillance, and the enormous collection of personal data."
The EDPS further urges an overall rethinking of the values around which the digital society is to be structured.5 Computer scientists, among other societal actors, are called to join this effort by contributing theories, methods, and tools for building trustworthy, society-friendly systems.
Luciano Floridi, a professor of philosophy and the ethics of information at Oxford and director of the Digital Ethics Lab of the Oxford Internet Institute, defines digital ethics7 as the branch of ethics that aims at formulating and supporting morally good solutions through the study of moral problems relating to personal data, AI algorithms, and corresponding practices and infrastructures.
In the space that is left open by regulation, the actors of the digital world, for example, companies, citizens, and individuals, should exploit digital ethics in order to forge and characterize their identity and role in the digital world.
The DECODE project was selected in response to a call that stated the following objective: "The goal is to provide SMEs, social enterprises, industries, researchers, communities and individuals with a new development platform, which is intrinsically protective of the digital sovereignty of European citizens."
Indeed, there is no general consensus on which ethical principles (personal ethics settings versus mandatory ethics setting) need to be embedded, and how, in the control software of autonomous vehicles.9,10 In 2016, the German Federal Ministry of Transport and Digital Infrastructures appointed an ethical committee that produced a recommendation report resulting in 20 ethics rules for automated and connected vehicular traffic.11 In particular, rules 4 and 6 mention the ethical principle of safeguarding the freedom of individuals to make responsible decisions and the need to balance that with the freedom and safety of others.
The separation of concerns implied by the above notion of digital ethics suggests an overall framework in which the autonomy of the system is delimited by hard ethics requirements, users are empowered with their own soft ethics, and the interactions between the system and each user are further constrained by their soft ethics requirements.
(See the intersection between soft and hard ethics in the accompanying figure.) In such a framework, it should also be possible to deal with liability issues in a fine-grained way by distributing responsibility between the system and the user(s) according to hard and soft ethics.
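As an illustration only (this sketch is not from the article or its references, and all rule contents and names are hypothetical), the layered constraint checking suggested by this framework can be rendered in a few lines: hard ethics rules bound what the system may ever do, and each user's soft ethics rules further restrict interactions within that bound.

```python
# Hypothetical hard ethics rules: system-wide and non-negotiable.
HARD_RULES = [
    lambda action: action != "disclose_personal_data",
]

def allowed(action, soft_rules):
    """An action is permitted only if it passes every hard rule
    and every soft (per-user) rule -- soft ethics can only narrow,
    never widen, what hard ethics permits."""
    return all(rule(action) for rule in HARD_RULES + soft_rules)

# One user opts out of profiling; another adds no extra constraints.
cautious_user = [lambda action: action != "profile_user"]
default_user = []

print(allowed("profile_user", cautious_user))           # False: blocked by a soft rule
print(allowed("profile_user", default_user))            # True: hard rules alone allow it
print(allowed("disclose_personal_data", default_user))  # False: blocked by a hard rule
```

The asymmetry in the sketch also mirrors the fine-grained liability idea: an action blocked only by a soft rule points to the user's own settings, while one blocked by a hard rule points to the system-wide requirements.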
While verifying the compliance of autonomous systems with hard ethics already raises huge scientific interest and great concern (given the use of opaque AI techniques),1,2,14 defining the scope of soft ethics and characterizing individual ones is a daunting task.
Building systems that embody ethical principles by design may also confer a competitive advantage in the market, as predicted in the recent Gartner Top 10 Strategic Technology Trends for 2019.23 Computer scientists alone, however, cannot solve the scientific and technical challenges ahead.
On Tuesday, June 2, 2020
You and AI Presented by Professor Brian Cox
Throughout 2018, we've brought you the world's leading thinkers on artificial intelligence. Now we're calling on you to pose your questions to our panel of ...
Prof. Brian Cox - Machine Learning & Artificial Intelligence - Royal Society
Produced by the Royal Society; more info can be found at ... Brian Edward Cox is a physicist who ..
Artificial Intelligence, the History and Future - with Chris Bishop
Chris Bishop discusses the progress and opportunities of artificial intelligence research. The last ..
You and AI – The History, Capabilities and Frontiers of AI
Demis Hassabis, world-renowned British neuroscientist, artificial intelligence (AI) researcher and the co-founder and CEO of DeepMind, explores the ...
Brian Cox presents Science Matters - Machine Learning and Artificial intelligence
We're beginning to see more and more jobs being performed by machines; even creative tasks like writing music or painting can now be carried out by a ...
Demis Hassabis: creativity and AI – The Rothschild Foundation Lecture
Recorded at the Royal Academy of Arts on 17 September 2018: Demis Hassabis, Co-Founder and CEO of DeepMind, draws upon his eclectic experiences as ...
What is AI?
Technology with AI at its heart has the power to change the world, but what exactly is Artificial Intelligence? The Royal Society is a Fellowship of many of the ...
You and AI - with Jim Al-Khalili at the Manchester Science Festival
We asked the public to send in their questions to our panel of experts, to find out what challenges and opportunities they think AI will present us with in the next ...
UK Parliament's Artificial Intelligence Committee - Dec 19th, 2017
The witnesses are: Professor David Edgerton, Hans Rausing Professor of the History of Science and Technology, and Professor of Modern British History, King's ...