AI News, Ethics of artificial intelligence


Artificial Intelligence in Higher Education: Applications, Promise and Perils, and Ethical Questions

But today we are far from machines that can perform the myriad tasks even babies shift between with ease, although how far away is a matter of considerable debate.

In the last wave of AI enthusiasm, technologists tried to emulate human knowledge by programming extensive rules into computers, a technique called expert systems.

Machine learning has been used to build GPS navigation systems, to make translation and voice recognition far more accurate, and to power visual tools such as facial recognition and the playful filters on Snapchat and Instagram.

Amazon uses artificial intelligence to recommend books, Spotify uses machine learning to recommend songs, and schools use the same techniques to shape students' academic trajectories.

In January 2019, the Wall Street Journal published an article with a very provocative title: 'Colleges Mine Data on Their Applicants.'1 The article discussed how some colleges and universities are using machine learning to infer prospective students' level of interest in attending their institution.

Complex analytic systems calculate individuals' 'demonstrated interest' by tracking their interactions with institutional websites, social media posts, and emails.
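
As a rough illustration of how such a "demonstrated interest" score might be computed, here is a minimal sketch. The event types and weights are invented assumptions for demonstration; actual vendor systems are proprietary and far more elaborate.

```python
# Hypothetical "demonstrated interest" scoring. The event types and weights
# below are illustrative assumptions, not any vendor's actual model.
EVENT_WEIGHTS = {
    "website_visit": 1.0,
    "email_open": 2.0,
    "email_click": 4.0,
    "social_media_follow": 3.0,
    "campus_tour_signup": 10.0,
}

def demonstrated_interest(events):
    """Sum weights over a prospect's tracked interactions.

    `events` is a list of event-type strings; unknown types score zero.
    """
    return sum(EVENT_WEIGHTS.get(e, 0.0) for e in events)

prospect = ["website_visit", "email_open", "email_click", "website_visit"]
score = demonstrated_interest(prospect)  # 1 + 2 + 4 + 1 = 8.0
```

The point of the sketch is that every click and page view becomes an input to a ranking, whether or not the applicant knows they are being scored.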

Schools, particularly in higher education, increasingly rely on algorithms for marketing to prospective students, estimating class size, planning curricula, and allocating resources such as financial aid and facilities.

Finally, one of the most prominent ways that predictive analytics is being used in student support is for early warning systems, analyzing a wide array of data—academic, nonacademic, operational—to identify students who are at risk of failing or dropping out or having mental health issues.
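
The core of such an early-warning system can be sketched as a simple risk score over student features. The features, coefficients, and threshold below are assumptions chosen for illustration; real systems are trained on institutional data and use many more variables.

```python
import math

# Illustrative early-warning sketch: a logistic risk score over a few
# student features. Coefficients here are invented for demonstration.
COEFFS = {"gpa": -1.2, "absences": 0.15, "lms_logins_per_week": -0.1}
INTERCEPT = 2.0

def dropout_risk(student):
    """Return a probability-like risk score in (0, 1)."""
    z = INTERCEPT + sum(COEFFS[k] * student[k] for k in COEFFS)
    return 1.0 / (1.0 + math.exp(-z))

def flag_at_risk(students, threshold=0.5):
    """IDs of students whose risk score meets the alert threshold."""
    return [s["id"] for s in students if dropout_risk(s) >= threshold]

students = [
    {"id": "a", "gpa": 2.0, "absences": 10, "lms_logins_per_week": 1},
    {"id": "b", "gpa": 3.8, "absences": 0, "lms_logins_per_week": 5},
]
flagged = flag_at_risk(students)  # ["a"]
```

Note that the threshold is a policy choice, not a technical one: where it is set determines which students receive intervention and which are overlooked.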

Educational software assesses students' progress and recommends, or automatically delivers, specific parts of a course for students to review or additional resources to consult.

Here I'm using the phrase to talk about the different ways that instructional platforms, typically those used in a flipped, online, or blended environment, can automatically help users tailor different pathways or provide them with feedback according to the particular errors they make.
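
The error-specific feedback described above can be sketched as a simple routing table. The error taxonomy and remediation resources are invented for illustration; real platforms infer error types from item-response data.

```python
# Sketch of error-specific feedback routing in an adaptive platform.
# The error taxonomy and resources below are hypothetical.
REMEDIATION = {
    "sign_error": "Review: adding and subtracting negative numbers",
    "order_of_operations": "Review: order-of-operations walkthrough",
    "misread_question": "Tip: restate the problem in your own words",
}
DEFAULT = "See the worked examples for this unit"

def feedback_for(error_type):
    """Map a diagnosed error type to a remediation resource."""
    return REMEDIATION.get(error_type, DEFAULT)
```

Even in this toy form, the design choice is visible: someone must decide which errors are recognized and what pathway each one triggers.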

So this is one promise of AI: that it will show us things we can't assess or even envision given the limitations of human cognition and the difficulty of dealing with many different variables and a wide array of students.

As these systems become more effective, requiring less assistance from humans and no longer requiring students to be in the same geographical location, more students will gain access to better-quality educational opportunities and to networks of peers, perhaps closing some of the achievement gaps that persist in education.

For example, AI learning systems that have been trained on students in a particular kind of college or university in California may not have the same outcomes or reflect the same accuracy for students in another part of the country.

Scholars looking at the use of facial recognition by companies such as Google, IBM, Microsoft, and Face++ have shown that in many cases, these tools have been developed using proprietary data or internal data based on employees.

In one study, the facial recognition tools had nearly 100 percent accuracy for light-skinned men but only 65 percent accuracy for dark-skinned women.
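
The kind of audit behind that finding is conceptually simple: disaggregate a classifier's accuracy by demographic group rather than reporting one overall number. Here is a minimal sketch with made-up data; the group labels and predictions are illustrative only.

```python
# Minimal subgroup-accuracy audit: compare a classifier's accuracy across
# demographic groups instead of reporting a single aggregate. Data is invented.
def accuracy_by_group(records):
    """records: iterable of (group, predicted, actual) tuples."""
    totals, correct = {}, {}
    for group, pred, actual in records:
        totals[group] = totals.get(group, 0) + 1
        if pred == actual:
            correct[group] = correct.get(group, 0) + 1
    return {g: correct.get(g, 0) / totals[g] for g in totals}

records = [
    ("group_x", 1, 1), ("group_x", 0, 0), ("group_x", 1, 1),
    ("group_y", 1, 0), ("group_y", 0, 0), ("group_y", 1, 0),
]
# A single aggregate accuracy (4/6) hides that group_x scores 1.0
# while group_y scores only 1/3.
```

An overall accuracy figure can look excellent while one subgroup is served very poorly, which is exactly what the facial recognition study exposed.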

Excluding a problematic or protected class of information from algorithms is not a good solution because there are so many proxies for things like race and gender in our society that it is almost impossible to remove patterns that will break down along these lines.
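
The proxy problem can be demonstrated concretely: drop the protected column, and a correlated feature still encodes much of it. In this sketch, a hypothetical ZIP code stands in for the dropped attribute; the data and the majority-vote reconstruction rule are invented for illustration.

```python
from collections import Counter, defaultdict

# Sketch of the proxy problem: even after removing a protected attribute,
# a correlated feature (here a hypothetical ZIP code) can reconstruct it.
applicants = [
    {"zip": "10001", "race": "A"}, {"zip": "10001", "race": "A"},
    {"zip": "10001", "race": "B"},
    {"zip": "60601", "race": "B"}, {"zip": "60601", "race": "B"},
    {"zip": "60601", "race": "A"},
]

def proxy_recovery_rate(rows, proxy_key, protected_key):
    """How often the majority protected value in each proxy bucket matches
    the true value, i.e. how well the proxy alone recovers the dropped column."""
    buckets = defaultdict(Counter)
    for r in rows:
        buckets[r[proxy_key]][r[protected_key]] += 1
    majority = {k: c.most_common(1)[0][0] for k, c in buckets.items()}
    hits = sum(1 for r in rows if majority[r[proxy_key]] == r[protected_key])
    return hits / len(rows)
```

In this toy data, guessing from ZIP code alone recovers the removed attribute two-thirds of the time, versus a 50 percent baseline, so a model trained without the protected column can still learn patterns that break down along it.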

Quite the contrary: Amazon used artificial intelligence to detect those characteristics that were most indicative of a successful employee, incorporated those characteristics into its algorithm, and then applied the algorithm to applicants.

For example, one predictive analytics tool estimated that 80 percent of the students in an organic chemistry class would not complete the semester.6 This was not news to the professors, who still wondered what to do.

A second peril in the use of artificial intelligence in higher education consists of the various legal considerations, mostly involving different bodies of privacy and data-protection law.

Federal student-privacy legislation is focused on ensuring that institutions (1) get consent to disclose personally identifiable information and (2) give students the ability to access their information and challenge what they think is incorrect.7 The first is not much of an issue if institutions are not sharing the information with outside parties or if they are sharing under one of the exceptions in the Family Educational Rights and Privacy Act (FERPA), in which case an institution does not have to get explicit consent from students.

The second requirement—providing students with access to the information that is being used about them—is going to be an increasingly interesting issue.8 I believe that as the decisions being made by artificial intelligence become much more significant and as students become more aware of what is happening, colleges and universities will be pressured to show students this information.

By choosing the variables to be fed into admission systems or financial aid systems or student information systems, these AI tools are creating rules about what matters in higher education.

But educators often overlook that fact when they adopt technology, not understanding that doing so is in some ways the equivalent of imposing an entirely different rubric, or set of standards, on academic attainment.9

Optimizing learning outcomes (for example, additional skills acquisition, better grades, or increased retention) may crowd out more abstract educational goals, such as promoting citizens capable of self-governance or nurturing creativity.

The use of predictive analytics and early warning systems is often touted as a way to promote student retention by drawing attention to struggling or at-risk students.

The idea was to encourage them to drop out before the university was required to report its enrollment numbers to the federal government, thereby creating better retention numbers and improving its rankings.

According to the president, his plan promoted the institutional interest in better statistics and was also in the students' best interest by preventing them from wasting money on tuition.10 Clearly, this raises deeper questions about what the institutional and educational enterprise is and should be.

In addition, the committee recommended learning-data privacy practices that service providers can implement in the areas of ownership, usage rights, opt-in, interoperable data, data without fees, transparency, service provider security, and campus security.12

The Ethics of Artificial Intelligence | Leah Avakian | TEDxYouth@EnglishCollege

In today's ever-changing and growing world, artificial intelligence is quickly becoming more integrated within our everyday lives. What happens when we give ...

The ethical dilemma we face on AI and autonomous tech | Christine Fox | TEDxMidAtlantic

The inspiration for Kelly McGillis' character in Top Gun, Christine Fox is the Assistant Director for Policy and Analysis of the Johns Hopkins University Applied ...

Artificial intelligence and its ethics | DW Documentary

Are we facing a golden digital age or will robots soon run the world? We need to establish ethical standards in dealing with artificial intelligence - and to answer ...

Do you know AI or AI knows you better? Thinking Ethics of AI (original version)

This is an English/French version of the video, with subtitles embedded in the video. A multilingual version where you can activate subtitles in Chinese, English, ...

Nick Bostrom - The Ethics of The Artificial Intelligence Revolution

Link to the panel discussion: Nick Bostrom is a Swedish philosopher at the University of Oxford known for his ..

HEWLETT PACKARD ENTERPRISE - Moral Code: The Ethics of AI

The implications and promises of artificial intelligence (AI) are unimaginable. Already, the now ubiquitous functions of AI have changed our lives ...

Artificial Intelligence: The Ethical and Legal Debate

Artificial intelligence is making our lives easier, but raises many ethical and legal questions. A meeting of experts in Brussels, organised by the European ...

The Future of Artificial Intelligence and Ethics on the Road to Superintelligence

The progress of technology over time, the human brain Vs the future, and the future of artificial intelligence. Article: ...

Is Developing Artificial Intelligence (AI) Ethical? | Idea Channel | PBS Digital Studios

Viewers like you help make PBS (Thank you). Support your local PBS Member Station here: If you're even the slightest bit ..

Morality and Artificial Intelligence: The Science and Beyond | Devin Gonier | TEDxAustinCollege

As Artificial Intelligence (AI) becomes more prevalent and powerful in our society, it becomes essential to look to its moral consequences. How can a machine ...