Code of Ethics in Artificial Intelligence (AI) – Key Traits
Did you know that organizations have started paying attention to whether their AI/machine learning (ML) models make unbiased, safe, and trustworthy predictions grounded in ethical principles?
Have you imagined scenarios in which customers accuse your organization of favoring one group of customers (perhaps their competitors), file a lawsuit, and bring reputational damage and losses to your business?
An organization that does business ethically is governed by a set of principles, such as honesty and integrity, that guide how it conducts its business activities.
In this post, you learned about implementing a code of ethics for artificial intelligence (AI) / machine learning models. It is important to set ethical AI principles to ensure AI/ML models make safe, unbiased, and trustworthy predictions for the end customers and partners they affect.
The Ethics of Artificial Intelligence for Business Leaders – Should Anyone Care?
Some members of the AI research community believe that concerns such as AI bias (by race, gender, or other criteria), personal privacy, and algorithmic transparency (clarity about the rules and methods by which machines make decisions) are the more pressing issues of the day.
In December 2016, IEEE published version 1 of a report titled “Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Artificial Intelligence and Autonomous Systems.” Hearing about the new report sparked my interest (as did joining the “Well-being” committee on a number of phone calls and brainstorms), but I realized that if I wanted to cover this research, I’d have to find a way to do so that would bring insights and value to my business readers.
I reached out to the Chairs of the Standards Working Groups in charge of putting together the document, and I asked them just two simple questions. My goal wasn’t to prove or disprove the usefulness of AI’s ethical concerns for businesspeople, but rather to provide a platform for our business leader readers here at TechEmergence to explore them.
The IEEE Global Initiative’s mission is as follows: To ensure every stakeholder involved in the design and development of autonomous and intelligent systems is educated, trained, and empowered to prioritize ethical considerations so that these technologies are advanced for the benefit of humanity.
Without further ado, we’ll dive into the “meat” of the P7000 Standards Working Groups and the Chairs’ respective statements about the potential business relevance of some of AI’s ethical concerns. Standard summary: P7000 outlines an approach for identifying and analyzing potential ethical issues in a system or software program from the onset of the effort.
The project also offers ways to provide transparency and accountability for a system to help guide and improve it, such as incorporating an event data recorder in a self-driving car or accessing data from a device’s sensors.
And as data becomes the “new electricity” in which companies increasingly put their faith to spur growth and defend their margins, the consequences of how data about individuals, or their proxies, is used increasingly influence our lives.
To assure accountability, the burden of proof is reversed, and the GDPR also introduces new fines of up to 4% of global annual turnover or 20 million euros, whichever is higher.
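The fine ceiling described above is just the maximum of two quantities. As an illustrative sketch (the function name is mine, not from the GDPR or the article), a compliance model might encode it as:

```python
def gdpr_max_fine(global_turnover_eur: float) -> float:
    """Upper bound on a GDPR fine for the most serious infringements:
    the higher of 4% of global annual turnover or EUR 20 million."""
    return max(0.04 * global_turnover_eur, 20_000_000.0)

# For a company with EUR 1 billion turnover, 4% (EUR 40M) exceeds EUR 20M,
# so the 4% ceiling applies.
print(gdpr_max_fine(1_000_000_000))  # 40000000.0
```

For smaller companies (turnover below EUR 500 million), the flat EUR 20 million ceiling is the binding one.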
From the P7002 current outline: “The objective is to define a process model that defines how best to perform a QA-oriented, repeatable process regarding the use case and data model (including metadata) for data-privacy considerations regarding products, services and systems utilizing employee, customer or other personal data. By providing specific steps, diagrams and checklists, users of this standard will be able to perform a requirements-based conformity assessment on the specific privacy practices and/or mechanisms.”
Article 35 of the GDPR states: “Where a type of processing in particular using new technologies, and taking into account the nature, scope, context and purposes of the processing, is likely to result in a high risk to the rights and freedoms of natural persons, the controller shall, prior to the processing, carry out an assessment of the impact of the envisaged processing operations on the protection of personal data.”
The standard will also include benchmarking procedures and criteria for selecting validation data sets, establishing and communicating the application boundaries for which the algorithm has been designed, and guarding against unintended consequences.
The requirements specification provided by the P7003 standard will allow creators to communicate to users, and regulatory authorities, that up-to-date best practices were used in the design, testing and evaluation of the algorithm to attempt to avoid unintended, unjustified and inappropriate differential impact on users.
Since the standard aims to allow for the legitimate ends of different users, such as businesses, it should assist businesses in assuring customers that they have taken steps to ensure fairness appropriate to their stated business aims and practices.
As a practical example, an online retailer developing a new product recommendation system might use the P7003 standard as follows: early in the development cycle, after outlining the intended functions of the new system, P7003 guides the developer through a process of considering the likely customer groups to identify whether there are subgroups that will need special consideration (e.g.
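P7003's requirements are still being drafted, but one common check such an assessment could include is the "four-fifths rule" used in US employment-discrimination analysis: no subgroup's positive-outcome rate should fall below 80% of the highest subgroup's rate. The sketch below illustrates that general technique; the function names, threshold, and data are my own assumptions, not part of the standard.

```python
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, recommended: bool) pairs.
    Returns each group's rate of positive outcomes."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, recommended in records:
        totals[group] += 1
        positives[group] += int(recommended)
    return {g: positives[g] / totals[g] for g in totals}

def passes_four_fifths(rates, threshold=0.8):
    """Disparate-impact check: every group's rate must be at least
    `threshold` times the highest group's rate."""
    best = max(rates.values())
    return all(rate >= threshold * best for rate in rates.values())

data = [("A", True), ("A", True), ("A", False),
        ("B", True), ("B", False), ("B", False)]
rates = selection_rates(data)     # {"A": 2/3, "B": 1/3}
print(passes_four_fifths(rates))  # False: 1/3 is below 0.8 * 2/3
```

A failing check does not prove unfairness by itself, but it flags a subgroup for the kind of special consideration the standard describes.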
The standard will help provide clarity and recommendations both for how employees can share their information in a safe and trusted environment as well as how employers can align with employees in this process while still utilizing information needed for regular work flows.
This standard hopes to educate government and industry on why it is best to put mechanisms into place to enable the design of systems that will mitigate the ethical concerns when AI systems can organize and share personal information on their own.
Consumers are starting to worry about the lack of control over their data and the lack of insight they have into the rationale and mechanisms that define their agency in the big data environment, and they’ve begun to act by seeking out services and businesses that provide them with control and agency.
For instance, a robot nanny could inform parents about the time it spent with their kids and activities performed, warning about unhealthy situations where the child lacks human contact, which can compromise their cognitive development.
Standard summary: P7008 establishes a delineation of typical nudges (currently in use or that could be created) that contains concepts, functions and benefits necessary to establish and ensure ethically driven methodologies for the design of the robotic, intelligent and autonomous systems that incorporate them.
Why are these standards relevant to business people now? The use of AI nudging in computers, chatbots, and robots is now increasing steadily around the world, in both the private and public sectors: for example, in health and well-being (would AI tools make us drink less alcohol?), education, governmental public policy such as taxes, and private-sector marketing.
Our intention is that this nudging standard will be used in multiple ways and will bring several benefits for real business use cases. Standard summary: P7009 establishes a practical, technical baseline of specific methodologies and tools for the development, implementation, and use of effective fail-safe mechanisms in autonomous and semi-autonomous systems.
The standard includes (but is not limited to): clear procedures for measuring, testing, and certifying a system’s ability to fail safely on a scale from weak to strong, and instructions for improvement in the case of unsatisfactory performance.
Standard summary: P7010 will identify and highlight well-being metrics relating to human factors directly affected by intelligent and autonomous systems and establish a baseline for the types of objective and subjective data these systems should analyze and include (in their programming and functioning) to proactively increase human well-being.
Why are these standards relevant to business people now? While organizations globally are aware of the need to incorporate sustainability measures into their efforts, the reality is that bottom-line, quarterly-driven shareholder growth remains the traditional metric prioritized within society at large.
Where organizations exist in a larger societal ecosystem equating exponential growth with success, as mirrored by GDP or similar financial metrics, these companies will remain under pressure to deliver results that do not fully incorporate societal and environmental measures and goals along with existing financial imperatives.
Along with an increased awareness that incorporating sustainability measures beyond compliance can improve the public perception of an organization’s brand, companies that prioritize holistic well-being are also recognizing where they can save or make money and increase innovation in the process.
For instance, where a companion robot outfitted to measure the emotion of seniors in assisted living situations might be launched with a typical “move fast and break things” technological manufacturing model prioritizing largely fiscal metrics of success, these devices might fail in market because of limited adoption.
However, where they also factor in data aligning with uniform metrics measuring emotion, depression, or other factors (including life satisfaction, affect, and purpose), the device might score very high on a well-being scale comparable to the Net Promoter Score widely used today.
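For reference, the Net Promoter Score mentioned above reduces to a simple formula: the percentage of promoters (survey scores 9-10) minus the percentage of detractors (scores 0-6). A well-being score could be reported on an analogous single-number scale; the sketch below is purely illustrative.

```python
def net_promoter_score(scores):
    """NPS = % promoters (9-10) minus % detractors (0-6),
    computed from responses on a 0-10 survey scale."""
    n = len(scores)
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / n

# Two promoters (10, 9) and two detractors (6, 3) out of six cancel out.
print(net_promoter_score([10, 9, 8, 7, 6, 3]))  # 0.0
```

A hypothetical well-being index could substitute depression or life-satisfaction scores for survey responses and apply the same promoter/detractor logic.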
If the device could significantly lower depression according to metrics from a trusted source like the World Health Organization, academic institutions testing early versions of systems would be more able to attain needed funding to advance autonomous and intelligent well-being study overall.
Similarly, the B-Corporation movement has created a unique legal status for “a new type of company that uses the power of business to solve social and environmental problems.” Focusing on increasing “stakeholder” value versus just shareholder returns, forward-thinking B-Corps are building trust and defining their brands by provably aligning their efforts to holistic metrics of well-being.
In terms of how IEEE P7010 fits into this picture, our goal is to create methodologies that connect data outputs from AI devices or systems to various economic indicators, widening the lens of value beyond purely fiscal issues.
Here’s how the WEF article states the overarching need for business today: The 21st century calls for a new kind of leadership to inspire confidence in the ability of technology to enhance human potential rather than substitute for it.
Anticipatory transparency
The Chairs of the IEEE’s Standards working groups seem to believe that more transparency will be demanded of our common technologies. IEEE P7001 seems to argue that “baking in” transparency (in a clear, structured way) could help companies adjust to what may be inevitable demands for that transparency.
Building trust and goodwill with consumers and regulators
Consumers may become increasingly wary of systems that use their data or influence their behavior via targeted algorithms, and the general demand for clarity and ethical standards may increase.
It may behoove business leaders to comb the sections above and determine areas of focus (such as algorithmic transparency, or use of data for children and students, etc…) that may become pressing for their business, or important in the minds of their stakeholders.
Businesses that want to be part of the conversation around AI ethics are free to contact the IEEE’s working groups (see the links under the summary sections of the working groups above) and consider joining and informing the ongoing dialogue as these standards continue to be developed.
7 Ways to Introduce AI into Your Organization
In the simplest case, cognitive technologies can be just more autonomous extensions of traditional analytics — automatically running every possible combination of predictive variables in a regression analysis, for example.
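The "every possible combination of predictive variables" idea can be sketched as brute-force best-subset selection. The example below (function names and the BIC scoring choice are mine, not from the article) fits an ordinary-least-squares model on each subset of columns and keeps the best-scoring one; note the number of subsets grows as 2^p, so this only works for small variable counts.

```python
import itertools
import numpy as np

def best_subset_regression(X, y, names):
    """Fit OLS on every non-empty subset of columns, scoring each fit
    by BIC so that useless extra variables are penalized, and return
    the names of the winning subset."""
    n, p = X.shape
    best_score, best_names = float("inf"), None
    for k in range(1, p + 1):
        for cols in itertools.combinations(range(p), k):
            A = np.column_stack([np.ones(n), X[:, cols]])  # intercept + chosen columns
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = float(np.sum((y - A @ coef) ** 2))
            score = n * np.log(rss / n) + (k + 1) * np.log(n)  # BIC, up to a constant
            if score < best_score:
                best_score, best_names = score, [names[c] for c in cols]
    return best_names

# Toy data: y depends on x1 and x3 but not on x2.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = 2 * X[:, 0] - X[:, 2] + rng.normal(scale=0.1, size=100)
chosen = best_subset_regression(X, y, ["x1", "x2", "x3"])
print(chosen)
```

In practice, libraries use smarter search (stepwise selection, regularization such as the lasso) rather than literal exhaustive enumeration, but the exhaustive version is the idea the passage describes.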
More complex types of cognitive technology — neural networks and deep learning, natural language processing, and related algorithms — can seem like black boxes even to the data scientists who create them.
Because implementing these technologies is a key factor in deciding how to move forward, I’ve combined the cognitive entry points into three categories: “Mostly Buy,” “Some Buy, Some Build,” and “Mostly Build.” I’m sure there are other angles that a company could take to adopting cognitive technology, but to date these seem to be the most common ones.
On Monday, September 23, 2019
Artificial Intelligence: Building the Business Case for AI (CXOTalk #246)
Artificial intelligence can make companies dramatically more efficient, but investing in the technology can come with risks and complications. Tiger Tyagarajan ...
AI in Business - Paul Daugherty, Chief Innovation / Technology Officer, Accenture (CXOTalk / IPsoft)
AI is one of the most profound technologies of our time, with practical implications for business, society, politics, economics, governance, customer experience, ...
The Hugh Thompson Show: Artificial Intelligence APJ Style
Hugh Thompson, RSA Conference Program Chair, RSA Conference Panelists: Dr Ayesha Khanna, Co-Founder and Chief Executive Officer, ADDO AI Mahmood ...
Microsoft Data Governance Solutions for Financial Services
#204: Artificial Intelligence in Business
204: Artificial Intelligence in Business and Digital Transformation Hype around artificial intelligence and machine learning continues to explode. In this episode ...
David Shrier - AI & The Future Proof Workforce
David is a globally recognized authority on financial innovation, and created and leads the SBS online programs Oxford Fintech and Oxford Blockchain Strategy.
First Principle Thinking for Success and Innovation | Riddhi Mittal | TEDxNMIMSBangalore
With the world confined in the shackles of mental blocks and restraints, there's a need to push aside the dilemma and dubiety to create new possibilities. Be it the ...
Andrew Ng: Artificial Intelligence is the New Electricity
On Wednesday, January 25, 2017, Baidu chief scientist, Coursera co-founder, and Stanford adjunct professor Andrew Ng spoke at the Stanford MSx Future ...
The Ethics and Governance of AI opening event, February 3, 2018
Chapter 1: 0:04 - Joi Ito Chapter 2: 1:03:27 - Jonathan Zittrain Chapter 3: 2:32:59 - Panel 1: Joi Ito moderates a panel with Pratik Shah, Karthik Dinakar, and ...
Data Science and AI in Pharma and Healthcare (CXOTalk #275)
Data, artificial intelligence and machine learning are having a profound influence on healthcare, drug discovery, and personalized medicine. On this episode ...