AI News, Stanford Institute for Human-Centered Artificial Intelligence

What does human-centric AI mean to management?

Is the managerial motivation behind artificial intelligence simply to reduce headcount by replacing employees with machines?

If MIT defines human-centric AI as “the design, development, and deployment of (information) systems that learn from and collaborate with humans in a deep, meaningful way,” then it is just as important for management to learn where and how machine intelligence can enhance human potential.

Data ethics involves the study and adoption of data practices, algorithms, and applications that respect fundamental individual rights and societal values.[ii] Although ethics is not an attribute of either data or algorithms, ethical challenges inherently arise wherever managers rely on data to make business decisions.

Artificial intelligence cannot be isolated from the larger economic and social challenges it has been designed to address, because both the data and the algorithms reflect the visions, biases, and logic of human decision-making. Do these visions, biases, and logic faithfully reflect the reality of business, and if not, how do they color managerial views of customers, organizations, and markets?

Finally, should our commitment to data-driven decision-making elevate “scientism” over other forms of human intelligence?[iii]

Stanford University’s Human-Centered Artificial Intelligence Institute (HAI) is an excellent example of both how difficult and how important examining ethical bias has become.

Based on the position that “designers of AI must be broadly representative of humanity,” the University launched HAI this past year to “advance AI research, education, policy, and practice to improve the human condition.”[iv] Yet when the institute revealed the 120 faculty and tech leaders leading the initiative, all were educated in the world’s top business and engineering schools, over 80 percent were white, and almost as many were male.

Tidd and Bessant suggest that innovation today implies applying a product, process, position, or paradigm to create value by solving economic or social problems.[v] Exploring how management can use machine intelligence to foster innovation offers organizations a major opportunity to move the discussion of AI beyond data and algorithms and toward how such technologies can be applied profitably to their business.

Course discussions can analyze the current state of machine intelligence, which at best is able to master repetitive tasks (Narrow AI), and the promise of “super artificial intelligence” capable of reformulating the economic, social, and political problems organizations are trying to solve.

In less than two decades, Alibaba has produced ample evidence of its capacity to innovate: its platform-based business model was projected to generate $100 billion in revenue this year alone from its investments in retail, the internet, and technology.[vi] Alibaba’s management has redefined a “new retail” sector by leveraging Internet technologies and data intelligence to develop an open, technology-dependent, and value-centered ecosystem.[vii]

Its vision of “smart business” is based on matching human and machine intelligence: each core business process is coordinated in an online network and uses machine-learning technology to leverage data efficiently in real time.

The management of digital assets today requires a deep understanding of the semantics of the closed taxonomies and open folksonomies inherent in valuing digital resources.[viii] Accounting for value rather than cost leads management to reflect on the link between AI and productivity, which in turn can lead to a redefinition of the production boundaries that determine the organization’s core activities in a digital economy.
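
As a rough illustration of the distinction drawn above, the short Python sketch below (a hypothetical example, not taken from the cited source) contrasts a closed taxonomy, where assets may only be classified with terms from a controlled vocabulary, with an open folksonomy, where users attach free-form tags and meaning emerges from usage. The vocabulary, asset identifiers, and function names are invented for illustration.

    # Hypothetical sketch: closed taxonomy vs. open folksonomy for digital assets.
    from collections import defaultdict

    # Closed taxonomy: a fixed vocabulary maintained by the organization.
    CONTROLLED_VOCABULARY = {"contract", "invoice", "marketing-image", "product-video"}

    def classify(catalog: dict, asset_id: str, category: str) -> None:
        """Assign an asset to a category, rejecting terms outside the controlled vocabulary."""
        if category not in CONTROLLED_VOCABULARY:
            raise ValueError(f"'{category}' is not in the controlled vocabulary")
        catalog[asset_id] = category

    def tag(folksonomy: defaultdict, asset_id: str, user_tag: str) -> None:
        """Attach a free-form user tag to an asset; no vocabulary is enforced."""
        folksonomy[asset_id].add(user_tag.strip().lower())

    if __name__ == "__main__":
        catalog = {}
        folksonomy = defaultdict(set)
        classify(catalog, "asset-001", "invoice")   # accepted: term is part of the taxonomy
        tag(folksonomy, "asset-001", "Q3")          # accepted: folksonomies take any tag
        tag(folksonomy, "asset-001", "Urgent")
        print(catalog)                              # {'asset-001': 'invoice'}
        print(dict(folksonomy))                     # {'asset-001': {'q3', 'urgent'}}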

Women In AI – Where Are They & How Are They Succeeding

According to new research by the World Economic Forum, only 22% of artificial intelligence (AI) professionals globally are women.

During a recent two-year sabbatical from Stanford, Fei-Fei Li was the vice president and chief scientist of AI and machine learning at Google Cloud, the company’s enterprise cloud computing division.

She continues to lead the way in developing AI responsibly, stating at a Capitol Hill hearing titled “Artificial Intelligence – With Great Power Comes Great Responsibility” that “With proper guidance, AI will make life better. But without it, the technology stands to widen the wealth divide even further, make tech even more exclusive, and reinforce biases we’ve spent generations trying to overcome.” Check out her non-profit AI4ALL, an organization working to increase diversity and inclusion in artificial intelligence.

Her studies focus on social robotics and human-robot interaction, with the goal of contributing to the quality of life through education, health, wellbeing and emotive connection and engagement.

Her focus is on studying the social implications of data systems, machine learning, and artificial intelligence, and her recent writings address data bias and fairness, predictive analytics and due process, and algorithmic accountability and transparency.

Introducing the Stanford Institute for Human-Centered Artificial Intelligence

The emergence of artificial intelligence has the potential to radically alter how we live our lives. This new era can bring us closer to our shared dream of creating ...

2019 Human-Centered Artificial Intelligence Symposium

Artificial intelligence will be the most consequential technology of the 21st century—augmenting human capabilities, transforming industries and economies, and ...

AI Ethics, Policy, and Governance at Stanford - Day One

Join the Stanford Institute for Human-Centered Artificial Intelligence (HAI) via livestream on Oct. 28-29 for our 2019 fall conference on AI Ethics, Policy, and ...

Stanford HAI 2019 Fall Conference - AI in Government

David Engstrom, Professor of Law, Associate Dean for Strategic Initiatives, Bernard D. Bergreen Faculty Scholar, Stanford University; Sharon Bradford Franklin, ...

AI Ethics, Policy, and Governance at Stanford - Day Two

Join the Stanford Institute for Human-Centered Artificial Intelligence (HAI) via livestream on Oct. 28-29 for our 2019 fall conference on AI Ethics, Policy, and ...

Yuval Noah Harari in Conversation with Fei-Fei Li, Moderated by Nicholas Thompson

The rapid development and deployment of artificial intelligence may determine the fate of human agency and the prospects of democracy in the 21st century.

Stanford HAI 2019 Fall Conference - Owning AI: Intellectual Property for Artificial Intelligence

Ryan Abbott, Professor of Law and Health Sciences, University of Surrey School of Law; Adjunct Assistant Professor of Medicine, David Geffen School of ...

Stanford's AI4All Brings AI Research to Youngsters

Fr. Robert Ballecer, Lou Maresca, and Curt Franklin discuss AI4All, previously Stanford Artificial Intelligence Laboratory's Outreach Summer Program, and its ...

The Future of Artificial Intelligence

Stanford Professors Fei-Fei Li and Silvio Savarese, and senior research scientist Juan Carlos Niebles, discuss deep learning and AI's role in driverless cars, ...

Episode 11 - Fei-Fei Li: Human-Centered AI

Fei-Fei Li is a professor at Stanford University and one of the world's pioneering researchers in AI. Her work focuses on human-centered artificial intelligence.