AI News: Ethics of Artificial Intelligence
Ethics of Artificial Intelligence Explored in Research Library Issues
The latest issue of Research Library Issues (RLI) opens a conversation about the ethical implications of artificial intelligence (AI) in the context of knowledge production, dissemination, and preservation.
Geneva Henry, dean of Libraries and Academic Innovation at The George Washington University, ties it all together with an article on the role of the research library in formulating and implementing institutional policy based on the needs of the users, and in the context of public policy.
The Association fosters the open exchange of ideas and expertise, promotes equity and diversity, and pursues advocacy and public policy efforts that reflect the values of the library, scholarly, and higher education communities.
ARL forges partnerships and catalyzes the collective efforts of research libraries to enable knowledge creation and to achieve enduring and barrier-free access to information.
Making an Ethical Artificial Intelligence
The future of medicine could come down to automation, with artificial intelligence increasingly being used as a substitute for human professionals.
It was her interest and specialty in public health ethics that led her to question the future of artificial intelligence (AI) in healthcare, asking about the ethical, legal and social implications of the technology and how we might bring the public into the conversation.
“A lot of people are talking about AI and what it can do, but not many people are talking to members of the public about the potential upsides and downsides,” she said. “We need to bring the public into the conversation about AI, take their viewpoints seriously and give them an opportunity to think about what these technological changes might mean in the future.”
“The ethical, legal and social implications (ELSI) of using artificial intelligence (AI) in health and social care” project will develop the first Australian academic survey on artificial intelligence.
“If we are able to develop artificial intelligence to replace a clinician, it raises more complicated questions, such as ‘What will a doctor be?’ and ‘Who is responsible for the decisions the AI machine makes, and who takes responsibility if they are wrong?’” Another identified risk is the potential for bias.
A 2019 study, “Artificial Intelligence: American Attitudes and Trends,” conducted by the University of Oxford, found that support for developing AI is “greater among those who are wealthy, educated, male or have experience with technology.” And a combination of data bias and human bias is often embedded in the resulting systems.
“We need to think about how this may reinforce existing prejudices, inequities and unfairness in systems, and look at how AI can be developed to address these ethical implications.” Beyond bias, other ethical issues at the forefront of AI include data privacy and confidentiality, with the potential for leaks of confidential patient and doctor information.
In collaboration with more than 500 citizens, experts, public policy makers, industry stakeholders and professional associations, the organisation declared 10 principles surrounding the ethical treatment of AI, with the aim of “supporting the common good, and guiding social change by making recommendations with a strong democratic legitimacy.” These values included well-being, autonomy, intimacy and privacy, solidarity, democracy, equity, inclusion, caution, responsibility and environmental sustainability.
“I can certainly see that it could go very wrong, and I can also see that there are ways it could be beneficial. We’re trying to be prospective about it, trying to think forwards, not just look at where we are now or at the past.” So, can artificial intelligence ever replace the work of humans?
The Ethics of A.I. Doesn’t Come Down to ‘Good vs. Evil’
These two diametrically opposed positions summarize the binary core of how we look at artificial intelligence (A.I.) and its applications: good or bad?
This might be a gross oversimplification, but when I explain ethics to my 6- and 9-year-olds, I tell them that ethical behavior is behavior that makes sense toward building a better place, a better society, and a better world—taking into account the overall good.
Whether it’s an airplane autopilot or a computer playing chess, if it’s man-made and can emulate intelligent behavior, it qualifies as A.I.
We have been using cell phones and computers for a few decades now and still don’t fully understand how these tools impact our ecosystem.
The real problem is not the technology itself, but the fact that, as a society, we spend years and billions on R&D but minimal time and funds on understanding the ramifications of the novelty we are about to introduce.
Think about major innovations: industry machinery, planes, trains, automobiles, cigarettes, e-cigarettes, microwaves (to name a few).
We are also very good at getting excited about the innovation, deploying it at scale, and only decades later realizing the unintended harm and damage that we introduced.
Shortly after the brain chip allows people to walk again, it could become a cognitive differentiator that further separates those who can afford it from those who cannot.
While we are amazing at inventing and breaking through, we are not that good at consciously architecting the values and behaviors that we believe to be virtuous, and at introducing things that take us in that direction.
Whereas other forms of technology would take time to roll out, adopt, and use at a wide scale, with tech platforms you have billions of people connected to and by a few platforms.
Algorithmic decision-making has the potential to box people into certain lanes and to remove their ability to override those lanes based on individual exceptions or behavior.
The same candidate who’s unfit from an algorithmic perspective could have a whole range of abilities such as willingness to succeed, intuition, or candor that won’t be captured by the data.
If you play this out, we could live in a world where your future is drawn out the second you are born, based on your location, parents, time of year, name, and countless other variables.
If you compare the level of knowledge and awareness that we have now versus 50 years ago, you will easily notice that key things that are not acceptable today were non-issues back then.
If we had built an algorithm 50 years ago to hire people, that algorithm would reflect that time period’s prejudice and would offer different pay according to gender.
Now that we know better, we would fix the algorithm in order to reflect the values of a society that is concerned about fairness and equality.
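The point about historical hiring algorithms can be made concrete with a minimal, invented sketch: a naive “pay recommender” trained on biased historical records reproduces the historical pay gap, because the gap lives in the data, not in the code. The records, names, and numbers below are all hypothetical.

```python
# Invented historical records: the gender pay gap of the era is baked in.
historical_offers = [
    {"gender": "male", "pay": 60000},
    {"gender": "male", "pay": 62000},
    {"gender": "female", "pay": 48000},
    {"gender": "female", "pay": 50000},
]

def biased_recommendation(candidate_gender: str) -> float:
    """Recommend the historical average pay for the candidate's gender.

    This faithfully 'learns' the past, and therefore reproduces its bias.
    """
    pays = [r["pay"] for r in historical_offers if r["gender"] == candidate_gender]
    return sum(pays) / len(pays)

def fair_recommendation() -> float:
    """One simple fix: drop gender as an input and use the overall average."""
    pays = [r["pay"] for r in historical_offers]
    return sum(pays) / len(pays)
```

Dropping the sensitive attribute is only the crudest fix (bias can persist through correlated variables), but the sketch shows why “fixing the algorithm” really means fixing what it learns from.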
In many cases, those who challenged the prevailing norms began as statistical outliers but gradually grew in numbers and, propelled by the value structure adopted by modern society, shifted to become the statistical norm.
When you add the results of all of these management styles together, it’s likely that goodness and badness will cancel each other out, and you will be left with an end result that is not great… but not terrible, either.
But we are not quite sure how things will play out, and I seriously question our capability to shape the outcome of something that can roll out change in such profoundly powerful ways.
Artificial Intelligence in Medicine Raises Legal and Ethical Concerns
The use of artificial intelligence in medicine is generating great excitement and hope for treatment advances.
For example, by using machine learning, scientists are working to develop algorithms that will help them make decisions about cancer treatment.
They hope that computers will be able to analyze radiological images and discern which cancerous tumors will respond well to chemotherapy and which will not.
Potential for Discrimination
AI involves the analysis of very large amounts of data to discern patterns, which are then used to predict the likelihood of future occurrences.
In medicine, the data sets can come from electronic health records and health insurance claims but also from several surprising sources.
AI can draw upon purchasing records, income data, criminal records and even social media for information about an individual’s health.
As one example, Facebook employs an algorithm that makes suicide predictions based on posts with phrases such as “Are you okay?” paired with “Goodbye” and “Please don’t do this.” This predictive capability of AI raises significant ethical concerns in health care.
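The kind of phrase-based prediction described above can be illustrated with a deliberately simplified sketch. This is not Facebook’s actual system (which relies on trained models); the phrase list, scoring, and threshold below are invented for illustration only.

```python
# Toy sketch of phrase-based risk flagging (invented phrases and threshold).
# A real system would use trained classifiers, not raw keyword counts.
RISK_PHRASES = ["are you okay", "goodbye", "please don't do this"]

def risk_score(post: str) -> int:
    """Count how many known risk phrases appear in a post (case-insensitive)."""
    text = post.lower()
    return sum(1 for phrase in RISK_PHRASES if phrase in text)

def flag_for_review(post: str, threshold: int = 2) -> bool:
    """Flag a post when at least `threshold` risk phrases co-occur."""
    return risk_score(post) >= threshold
```

Even this toy version surfaces the ethical tension: the same mechanism that flags a person in crisis is, structurally, a surveillance mechanism over private speech.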
Companies that hold such data could then sell medical predictions to any interested third parties, including marketers, employers, lenders, life insurers and others.
The Genetic Information Nondiscrimination Act (GINA) prohibits employers and health insurers from considering genetic information and making decisions based on related assumptions about people’s future health conditions.
When it comes to genetic testing, patients are advised to seek genetic counseling so that they can thoughtfully decide whether to be tested and better understand test results.
If the data used to develop an algorithm are flawed – for instance, if they use medical records that contain errors – the algorithm’s output will be incorrect.
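The “flawed data in, flawed output out” point can be shown with a minimal invented example: a single data-entry error in a medical record shifts a computed average enough to flip a screening decision. The readings and the 140 mmHg threshold are illustrative assumptions, not clinical guidance.

```python
# Minimal illustration of garbage in, garbage out (invented readings).
def mean_systolic(readings: list[float]) -> float:
    """Average a patient's systolic blood-pressure readings."""
    return sum(readings) / len(readings)

def needs_follow_up(readings: list[float], threshold: float = 140.0) -> bool:
    """Flag a patient whose average systolic reading exceeds the threshold."""
    return mean_systolic(readings) > threshold

clean_records = [118.0, 122.0, 120.0]    # plausible readings: no flag
flawed_records = [118.0, 122.0, 1200.0]  # 120.0 mistyped as 1200.0: false alarm
```

One mistyped digit turns a healthy average into an alert; in a real pipeline such errors propagate silently into every model trained on the record.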
Yet, to ensure that AI truly promotes patient welfare, physicians, researchers and policymakers must recognize its risks and proceed with caution.