AI News: Machine Learning and Artificial Intelligence

Research more, search less

Microsoft Academic understands the meaning of words; it doesn’t just match keywords to content.

For example, when you type “Microsoft,” it knows you mean the institution, and shows you papers authored by researchers affiliated with it.

Artificial Intelligence and Ethics

On March 18, 2018, at around 10 p.m., Elaine Herzberg was wheeling her bicycle across a street in Tempe, Arizona, when she was struck and killed by a self-driving car.

And beyond these larger social and economic considerations, data scientists have real concerns about bias, about ethical implementations of the technology, and about the nature of interactions between AI systems and humans if these systems are to be deployed properly and fairly in even the most mundane applications.

Artificial intelligence can aggregate and assess vast quantities of data that are sometimes beyond human capacity to analyze unaided, thereby enabling AI to make hiring recommendations, determine in seconds the creditworthiness of loan applicants, and predict the chances that criminals will re-offend.

During the past two years, self-driving cars that rely on rules and training data to operate have caused fatal accidents when confronted with unfamiliar sensory feedback or inputs their guidance systems couldn’t interpret.

Despite notable advances in areas such as data privacy (see “The Privacy Tools Project,” January-February 2017) and a clearer understanding of the limits of algorithmic fairness, the realization that ethical concerns must in many cases be considered before a system is deployed has led to the formal integration of an ethics curriculum—taught by philosophy postdoctoral fellows and graduate students—into many computer-science classes at Harvard.

(Exploitation of this input vulnerability is the subject of “AI and Adversarial Attacks.”) In other words, AI lacks common sense and the ability to reason—even if it can also make incredible discoveries that no human could, such as detecting third- or higher-order interactions (when three or more variables must interact in order to have an effect) in complex biological networks.
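
What a “third- or higher-order interaction” means can be made concrete with a toy example. The sketch below is invented for illustration, not drawn from the research mentioned above: the outcome depends on the parity of three binary variables taken together, so no single variable carries any signal by itself, yet a flexible model still recovers the relationship.

```python
# Toy illustration (invented, not from the article): a third-order interaction.
# The label is the parity of three binary variables, so each variable alone
# looks uninformative, but a flexible model captures the joint effect.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(5000, 5))   # five binary variables, e.g. gene switches
y = X[:, 0] ^ X[:, 1] ^ X[:, 2]          # outcome needs all three together

for j in range(5):                       # marginal correlations are all near zero
    print(f"corr(x{j}, y) = {np.corrcoef(X[:, j], y)[0, 1]:+.3f}")

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("training accuracy:", model.score(X, y))   # near 1.0: interaction captured
```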

This has proved especially valuable to the field because it led her to reflect deeply on the nature of human-computer interaction, and later, in the course of imagining a future when computers and humans might work together, to propose theoretical models for collaborative AI systems designed to work on teams with people.

She envisions a future that combines the speed and statistical prowess of intelligent computers with innate human talents, not one that pits machines and humans against each other—the way the relationship is often framed in descriptions of AI systems beating world champions in chess and go, or replacing people in the workplace.

When Grosz began experimenting with team-based AI systems in health care, she and a Stanford pediatrician started a project that coordinates care for children with rare diseases who are tended by many people besides parents, including medical experts, home-care aides, physical therapists, and classroom teachers.

The care spans years, she says, and “no human being I’ve ever encountered can keep track of 15 other people and what they are doing over long periods of time.” Grosz, with doctoral student Ofra Amir (now a faculty member at the Technion), began by analyzing how the patient-care teams worked, and developed a theory of teamwork to guide interactions between the human members and an AI system designed to coordinate information about the children’s care.

“What we’re trying to do, on the theoretical end, is to understand better how to share information” in that multi-member team environment, “and then build tools, first for parents, and then for physicians.” One of the key tenets she and her colleague, Bar-Ilan University professor Sarit Kraus, developed is that team members should not take on tasks they lack the requisite knowledge or capability to accomplish.

This is a feature of good human teamwork, as well as a key characteristic of “intelligent systems that know their limits.” “The problem, not just with AI, but a lot of technology that is out in the world, is that it can’t do the job it has been assigned”—online customer service chatbots interacting via text that “are unable to understand what you want” being a case in point.

If students from related fields such as statistics or applied mathematics are included, the total enrollment substantially exceeds that of top-ranked economics. “Most of these ethical challenges have no single right answer,” she points out, “so just as [the students] learn fundamental computing skills, I wanted them to learn fundamental ethical-reasoning skills.” In the spring of 2017, four computer-science courses included some study of ethics.

That fall, there were five, then eight by spring 2018, and now 18 in total, spanning subjects from systems programming to machine learning and its effects on fairness and privacy, to social networks and the question of censorship, to robots and work, and to human-computer interaction.

“My fantasy,” says Grosz, “is that every computer-science course, with maybe one or two exceptions, would have an ethics module,” so that by graduation, every concentrator would see that “ethics matters everywhere in the field—not just in AI.” She and her colleagues want students to learn that in order to tackle problems such as bias and the need for human interpretability in AI, they must design systems with ethical principles in mind from the start.

The problem of fairness in autonomous systems featured prominently at the inaugural Harvard Data Science Conference (HDSC) in October, where Colony professor of computer science David Parkes outlined guiding principles for the study of data science at Harvard: it should address ethical issues, including privacy (see “The Watchers,” January-February 2017, page 56).

There are lots of reasons why someone might want to “look under the hood” of an AI system to figure out how it made a particular decision: to assess the cause of biased output, to run safety checks before rollout in a hospital, or to determine accountability after an accident involving a self-driving car.

Assistant professor of computer science Finale Doshi-Velez demonstrated by projecting onscreen a relatively simple decision tree, four layers deep, that involved answering questions based on five inputs.
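
To make the interpretability point concrete, here is a minimal sketch of such a model, assuming invented feature names and synthetic data rather than Doshi-Velez’s actual example: a depth-four tree over five inputs whose entire decision logic can be printed and read line by line.

```python
# A hedged illustration, not Doshi-Velez's actual model: a depth-4 decision
# tree over five hypothetical inputs. Every prediction reduces to at most
# four human-readable yes/no questions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
feature_names = ["age", "income", "debt", "tenure", "score"]   # invented inputs
X = rng.normal(size=(1000, 5))
y = (X[:, 1] - X[:, 2] + 0.5 * X[:, 4] > 0).astype(int)        # synthetic label

tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
print(export_text(tree, feature_names=feature_names))          # the full rule set
```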

Whenever there is a diverse population (differing by ethnicity, religion, or race, for example), explained McKay professor of computer science Cynthia Dwork during an HDSC talk about algorithmic fairness, an algorithm that determines eligibility for, say, a loan should treat each group the same way.
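
One simple way to make “treat each group the same way” testable is demographic parity: the rate of favorable decisions should be comparable across groups. That is only one of several formalizations studied in the fairness literature; the sketch below, using invented data, shows just the check itself.

```python
# Minimal demographic-parity check on invented loan decisions.
import numpy as np

def approval_rates(decisions, groups):
    """Approval rate per group; a large gap flags a parity violation."""
    return {g: decisions[groups == g].mean() for g in np.unique(groups)}

decisions = np.array([1, 0, 1, 1, 0, 0, 0, 0, 1, 1])   # 1 = loan approved
groups    = np.array(["A", "A", "A", "B", "B", "B", "B", "A", "A", "B"])

print(approval_rates(decisions, groups))   # e.g. {'A': 0.6, 'B': 0.4}
```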

“Say we have a labor-market disparity that persists without any sort of machine-learning help, and then here comes machine learning, and it learns to re-inscribe those inequalities.” Their solution, which uses tools from economics and sociology to understand disparities in the labor market, pushes the thinking about algorithmic fairness beyond computer science to an interdisciplinary, systems-wide view of the problem.

Because humans are “self-interested, independent, error-prone, and not predictable,” no algorithm can be designed to ensure fairness in every situation, so she started thinking about how to take bias out of the training data—the real-world information inputs that a hiring algorithm would use.
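
One published technique in this spirit is reweighing (Kamiran and Calders, 2012): each group-and-outcome combination in the training data is weighted so that group membership and outcome become statistically independent. The sketch below illustrates that general idea with invented data; it is not necessarily the approach developed in the work described here.

```python
# Reweighing sketch (after Kamiran & Calders, 2012): weight each (group, label)
# cell by P(group)*P(label) / P(group, label) so the reweighted data shows no
# association between group membership and outcome.
import numpy as np

def reweigh(groups, labels):
    weights = np.empty(len(labels))
    for g in np.unique(groups):
        for y in np.unique(labels):
            mask = (groups == g) & (labels == y)
            expected = (groups == g).mean() * (labels == y).mean()
            observed = mask.mean()
            weights[mask] = expected / observed if observed > 0 else 0.0
    return weights

groups = np.array(["A"] * 6 + ["B"] * 4)            # invented hiring data
labels = np.array([1, 1, 1, 1, 0, 0, 1, 0, 0, 0])   # 1 = hired
w = reweigh(groups, labels)
print(w)   # pass as sample_weight when fitting: model.fit(X, labels, sample_weight=w)
```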

Suppose many of the minority group’s members don’t go to college, reasoning that “it’s expensive, and because of discrimination, even if I get a degree, the chances of my getting a job are still low.” Employers, meanwhile, may believe that “people from minority groups are less educated, and don’t perform well, because they don’t try hard.” The point Chen and Hu make is that even though a minority-group member’s decision not to attend college is rational, based on existing historical unfairness, that decision reinforces employers’ preconceived ideas about the group as a whole.
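
The dynamic can be caricatured in a few lines of code. The toy model below is a hedged sketch, not Chen and Hu’s actual formalization: employer belief drives hiring odds, hiring odds drive the payoff of getting a degree, and observed degree rates feed back into belief, so two groups that start with different employer beliefs get locked into different long-run outcomes.

```python
# Toy feedback loop (a sketch, not Chen and Hu's model): initial belief gaps
# between groups are amplified and frozen in by rational responses on both sides.
import math

def degree_rate(hire_prob):
    # Steep payoff curve: degrees are worth pursuing mainly when hiring is likely.
    return 1 / (1 + math.exp(-10 * (hire_prob - 0.5)))

for group, belief in [("majority", 0.6), ("minority", 0.4)]:
    for _ in range(20):
        belief = degree_rate(belief)    # belief tracks observed degree rates
    print(f"{group}: long-run belief ~ {belief:.2f}")
# majority ends high (~0.99), minority ends low (~0.01): the gap is re-inscribed.
```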

“I think that that’s a particularly naïve way of thinking of technology.” Whether the technology is meant to provide facial recognition to identify crime suspects from video footage, or education tailored to different learning styles, or medical advice, Hu stresses, “What we need to think about is how technologies embed particular values and assumptions.

Exposing that is a first step: realizing that it’s not the case that there are some ethical questions, and some non-ethical questions, but really that, in everything we design…there are always going to be normative questions at hand, every step of the way.” Integrating that awareness into existing coursework is critical to ensuring that “the world that we’re building, with ubiquitous technology, is a world that we want to live in.”  

Take It From a Futurist: How Artificial Intelligence and Blockchain Will Change HR

Machine learning, artificial intelligence, blockchain—these emerging technologies are shaking up industries across the board, but many HR professionals are still wary about applying them to the work they do, says HR digital transformation strategist Sherryanne Meyer.

“Data management has to be taken seriously not only in terms of ensuring consistent data definitions and data entry culture, but also protecting the data,” she says.

To help HR professionals get started, we talked with Meyer about applications of machine learning, AI and blockchain that organizations can start to test today, and the data best practices for effective execution.

If an employee needs development in an area that doesn't explicitly require training, or someone shows potential in an area not currently related to his or her job, it's cumbersome to find the right training.

A learning platform with AI at its core can take in employee data from performance reviews, see the areas where someone needs development, and generate a list of suggested courses for that employee from those available in the organization's learning system.
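
As a hedged sketch of how such a suggestion engine might work (the course catalog, review text, and matching method below are all invented for illustration), one could score catalog courses against the development areas mentioned in a review:

```python
# Hypothetical course-suggestion sketch: rank an invented course catalog by
# text similarity to the development areas in a performance review.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

courses = {
    "Negotiation Basics": "negotiation influence stakeholder communication",
    "Intro to SQL": "sql queries databases reporting data analysis",
    "Leading Small Teams": "leadership delegation feedback coaching teams",
}
review = "needs development in data analysis and building reports from databases"

matrix = TfidfVectorizer().fit_transform(list(courses.values()) + [review])
scores = cosine_similarity(matrix[-1:], matrix[:-1]).ravel()

for name, score in sorted(zip(courses, scores), key=lambda t: -t[1]):
    print(f"{score:.2f}  {name}")   # highest-scoring courses first
```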

According to a recent survey from IT service management company Sierra-Cedar, 8 percent of organizations adopted machine learning in 2018 and 21 percent said they're evaluating the technology for future use.

The report also notes that early forms of machine learning may be mistaken for AI in HR organizations, and that machine learning may already be embedded in existing technology without HR teams fully realizing it.

In layman's terms, machine learning allows computers to complete tasks independently of humans, and they get better at doing those tasks over time.
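
A toy example of that definition, using an invented synthetic task: the same kind of classifier, updated on a growing stream of examples, tends to make fewer mistakes on held-out data as time goes on.

```python
# Toy "learning over time": a linear classifier updated batch by batch
# generally improves on data it has never seen.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 3))
y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(int)   # synthetic task
X_stream, y_stream, X_test, y_test = X[:1500], y[:1500], X[1500:], y[1500:]

model = SGDClassifier(random_state=0)
for start in range(0, 1500, 300):                      # data arrives in batches
    stop = start + 300
    model.partial_fit(X_stream[start:stop], y_stream[start:stop],
                      classes=np.array([0, 1]))
    print(f"after {stop:4d} examples: test accuracy {model.score(X_test, y_test):.2f}")
```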

Today, managers at retail stores develop staff schedules based on the shopping season and sales generated in previous years. “But machine learning can bring other data points into staffing decisions, and connect data across sources,” Meyer explains.
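
A hedged sketch of that idea, with all numbers and feature names invented: instead of scheduling from last year's sales alone, a small model can weigh several data sources at once.

```python
# Hypothetical staffing model combining several data sources; the features
# and figures are invented for illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression

# columns: last_year_sales, foot_traffic_forecast, local_event (0/1), temp_f
X = np.array([
    [12000,  800, 0, 70],
    [15000,  950, 1, 68],
    [ 9000,  600, 0, 90],
    [18000, 1100, 1, 65],
    [11000,  750, 0, 72],
    [16000, 1000, 1, 60],
])
staff_needed = np.array([6, 9, 4, 11, 5, 10])

model = LinearRegression().fit(X, staff_needed)
next_week = np.array([[14000, 900, 1, 66]])
print("suggested staff:", round(float(model.predict(next_week)[0])))
```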

Machine Learning & Artificial Intelligence: Crash Course Computer Science #34

So we've talked a lot in this series about how computers fetch and display data, but how do they make decisions on this data? From spam filters and self-driving ...

AI vs Machine Learning vs Deep Learning | Machine Learning Training with Python | Edureka

This Edureka Machine Learning tutorial ...

Machine Learning Vs Artificial Intelligence? Same or Different?

Greetings, friends. In this video I talk to you about artificial intelligence and machine learning ...

Artificial Intelligence vs Machine Learning - Gary explains

Read more: andauth.co/AIvsML | The terms artificial intelligence and machine learning are often used interchangeably these days, but there are some important ...

Machine Learning Lectures | Introduction to Machine Learning in Hindi | ML #1

Machine learning introduction: ...

Machine Learning and Artificial Intelligence

The science and ethics behind ML and AI.

The 7 Steps of Machine Learning (AI Adventures)

How can we tell if a drink is beer or wine? Machine learning, of course! In this episode of Cloud AI Adventures, Yufeng walks through the 7 steps involved in ...

What is Machine Learning? (AI Adventures)

Got lots of data? Machine learning can help! In this episode of Cloud AI Adventures, Yufeng Guo explains machine learning from the ground up, using concrete ...

Artificial Intelligence vs Machine Learning vs Data Science vs Deep Learning

For more information, please visit ...

Which Programming Language for AI? | Machine Learning

How to learn AI for free: ... Future updates: ... Developers who are moving towards artificial intelligence and machine learning ...