AI News, US Air Force chief of staff: Our military must harness the potential of ... artificial intelligence

USASOC Hosts the 2018 AI/ML Workshop

The purpose of the workshop was to help USASOC determine how it can use artificial intelligence and machine learning to support future operations in the physical, virtual, and cognitive realms as envisioned by the Army Special Operations Forces Operating Concept (ARSOF OC).

The ARSOF OC is an innovation-driven platform that supports the USASOC Force Modernization Campaign and USASOC's vision to invest in new ideas, concepts, technologies, and capabilities in order to maintain an enduring competitive advantage over our Nation's adversaries.

The ARSOF OC asserts that the force must acquire the means to rapidly identify innovative ideas from across the ARSOF Enterprise and effectively implement them to increase the lethality, mobility, survivability, protection, and influence of the ARSOF Operator in all operating environments.

USASOC Public Affairs (PA) spoke with Mr. Robert Warburg, deputy chief of staff of USASOC's G9 directorate (responsible for developing new ideas, concepts, technologies, and capabilities), who shared his expectations and thoughts before and after the workshop.

- "The workshop will further our understanding of how USASOC might best employ artificial intelligence and machine learning to deliver unmatched special operations capabilities for Joint Force Commanders."

"This concept is important because the December 2017 National Security Strategy stipulates, 'To maintain our competitive advantage, the United States will prioritize emerging technologies critical to economic growth and security, such as data science, encryption, autonomous technologies, gene editing, new materials, nanotechnology, advanced computing technologies, and artificial intelligence.'"

"The Army Special Operations Vision, released in August 2018, states, 'ARSOF will be globally postured and ready to compete, respond, fight, and win against adversaries across the range of military operations, anytime and anywhere, as part of a joint force.

ARSOF will leverage adaptive and innovative institutions, empowered Soldiers, and integrated units capable of delivering unmatched special operations capabilities in order to provide joint force commanders operational options and advantage over our Nation's adversaries.'"

"Specifically, the desired outcomes include setting the conditions for further institutional exploration to: (1) identify game-changing technologies and/or cutting-edge material solutions, (2) foster innovation, (3) deliver actionable outcomes that align with the twelve USSOCOM investment areas, and (4) continue to advance USASOC's upstream force modernization campaign to deliver increased innovation opportunities for the ARSOF warfighter."

The Summary of the National Defense Strategy (NDS), January 2018, explains that the military's efforts must remain consistent with the direction set by the security environment, which is itself shaped by rapid technological advancements and the changing character of war.

"… analytics, artificial intelligence, autonomy, robotics, directed energy, hypersonics, and biotechnology - the very technologies that ensure we will be able to fight and win the wars of the future."

3. Improvements ahead: How humans and AI might evolve together in the next decade

Some say our increasing dependence on these AI and related systems is likely to lead to widespread difficulties. Our question: By 2030, do you think it is most likely that advancing AI and related technology systems will enhance human capacities and empower them?

Or is it most likely that advancing AI and related technology systems will lessen human autonomy and agency to such an extent that most people will not be better off than the way things are today? Please explain why you chose the answer you did and sketch out a vision of how the human-machine/AI collaboration will function in 2030.

Please consider giving an example of how a typical human-machine interaction will look and feel in a specific area, for instance, in the workplace, in family life, in a health care setting or in a learning environment.

In answer to Question One, among the key themes emerging in our December 10, 2018 report from the 979 respondents' overall answers were:

- CONCERNS - Human Agency: Decision-making on key aspects of digital life is automatically ceded to code-driven, “black box” tools.

While some expect new jobs will emerge, others worry about massive job losses, widening economic divides and social upheavals, including populist uprisings.

- Dependence lock-in: Some worry that people’s deepening dependence on machine-driven networks will erode their abilities to think for themselves, take action independent of automated systems and interact effectively with others.

- Mayhem: Some predict further erosion of traditional sociopolitical structures and the possibility of great loss of lives due to accelerated growth of autonomous military applications and the use of weaponized information, lies and propaganda to dangerously destabilize human groups.

- Solutions: Many respondents urged people and institutions to join forces to facilitate the innovation of widely accepted approaches aimed at tackling wicked problems and maintaining control over complex human-digital networks.

- BENEFITS of AI 2030 - New Life and Work Efficiencies: AI will be integrated into most aspects of life, producing new efficiencies and enhancing human capacities. It can optimize and augment people's life experiences, including the work lives of those who choose to work.

- Health Care Improvements: AI can revolutionize medical and wellness services, reduce errors and recognize life-saving patterns, opening up a world of opportunity and options in health care.

A news release with a nutshell version of report findings is available here: https://www.elon.edu/docs/e-web/imagining/surveys/2018_survey/Press_Release%20_AI_and_the_Future_of_Humans.pdf

All credited responses to the question on AI and the Future of Humans: http://www.elon.edu/e-web/imagining/surveys/2018_survey/AI_and_the_Future_of_Humans_credit.xhtml

All anonymous responses to the question on AI and the Future of Humans: http://www.elon.edu/e-web/imagining/surveys/2018_survey/AI_and_the_Future_of_Humans_anon.xhtml

Digital life is augmenting human capacities and disrupting eons-old human activities.

Some 979 technology pioneers, innovators, developers, business and policy leaders, researchers and activists answered this question in a canvassing of experts conducted in the summer of 2018.

Many of these experts said that computers might match or even exceed human intelligence and capabilities on tasks such as complex decision-making, reasoning and learning, sophisticated analytics and pattern recognition, visual acuity, speech recognition and language translation.

They said “smart” systems in communities, in vehicles, in buildings and utilities, on farms and in business processes will save time, money and lives and offer opportunities for individuals to enjoy a more-customized future.

Many focused their optimistic remarks on health care and the many possible applications of AI in diagnosing and treating patients or helping senior citizens live fuller and healthier lives.

They were also enthusiastic about AI’s role in contributing to broad public-health programs built around massive amounts of data that may be captured in the coming years about everything from personal genomes to nutrition.

Some say our increasing dependence on these AI and related systems is likely to lead to widespread difficulties. “Our question: By 2030, do you think it is most likely that advancing AI and related technology systems will enhance human capacities and empower them?

Or is it most likely that advancing AI and related technology systems will lessen human autonomy and agency to such an extent that most people will not be better off than the way things are today?”

Overall, and despite the downsides they fear, 63% of respondents in this canvassing said they are hopeful that most individuals will be mostly better off in 2030, and 37% said people will not be better off.

Sonia Katyal, co-director of the Berkeley Center for Law and Technology and a member of the inaugural U.S. Commerce Department Digital Economy Board of Advisors, predicted, “In 2030, the greatest set of questions will involve how perceptions of AI and their application will influence the trajectory of civil rights in the future.

Questions about privacy, speech, the right of assembly and technological construction of personhood will all re-emerge in this new AI context, throwing into question our deepest-held beliefs about equality and opportunity for all.

Bryan Johnson, founder and CEO of Kernel, a leading developer of advanced neural interfaces, and OS Fund, a venture capital firm, said, “I strongly believe the answer depends on whether we can shift our economic systems toward prioritizing radical human improvement and staunching the trend toward human irrelevance in the face of AI.

Marina Gorbis, executive director of the Institute for the Future, said, “Without significant changes in our political economy and data governance regimes [AI] is likely to create greater economic inequalities, more surveillance and more programmed and non-human-centric interactions.

Via screen and projection, the voice, appearance and behaviors of bots will be indistinguishable from those of humans, and even physical robots, though obviously non-human, will be so convincingly sincere that our impression of them as thinking, feeling beings, on par with or superior to ourselves, will be unshaken.

(Raw, unaided human speech and demeanor will seem embarrassingly clunky, slow and unsophisticated.) Aided by their access to vast troves of data about each of us, bots will far surpass humans in their ability to attract and persuade us.

Able to mimic emotion expertly, they’ll never be overcome by feelings: If they blurt something out in anger, it will be because that behavior was calculated to be the most efficacious way of advancing whatever goals they had ‘in mind.’

Andrew McLaughlin, executive director of the Center for Innovative Thinking at Yale University, previously deputy chief technology officer of the United States for President Barack Obama and global public policy lead for Google, wrote, “2030 is not far in the future.

AI will drive a vast range of efficiency optimizations but also enable hidden discrimination and arbitrary penalization of individuals in areas like insurance, job seeking and performance assessment.”

Amy Webb, founder of the Future Today Institute and professor of strategic foresight at New York University, commented, “The social safety net structures currently in place in the U.S. and in many other countries around the world weren’t designed for our transition to AI.

As we move farther into this third era of computing, and as every single industry becomes more deeply entrenched with AI systems, we will need new hybrid-skilled knowledge workers who can operate in jobs that have never needed to exist before.

Barry Chudakov, founder and principal of Sertain Research, commented, “By 2030 the human-machine/AI collaboration will be a necessary tool to manage and counter the effects of multiple simultaneous accelerations: broad technology advancement, globalization, climate change and attendant global migrations.

William Uricchio, media scholar and professor of comparative media studies at MIT, commented, “AI and its related applications face three problems: development at the speed of Moore’s Law, development in the hands of a technological and economic elite, and development without benefit of an informed or engaged public.

2) surveillance and data systems designed primarily for efficiency, profit and control are inherently dangerous; 3) displacement of human jobs by AI will widen economic and digital divides, possibly leading to social upheaval;

and 5) citizens will face increased vulnerabilities, such as exposure to cybercrime and cyberwarfare that spin out of control and the possibility that essential organizations are endangered by weaponized information.

They are, however, continually becoming more powerful thanks to developments in machine learning and natural language processing and advances in materials science, networking, energy-storage and hardware capabilities.

The systems underpinning today’s global financial markets, businesses, militaries, police forces, and medical, energy and industrial operations are all dependent upon networked AI of one type or another.

Here is a selection of responses from these experts that touch on this: An anonymous respondent summed up the concerns of many, writing, “The most-feared reversal in human fortune of the AI age is loss of agency.

Baratunde Thurston, futurist, former director of digital at The Onion and co-founder of comedy/technology start-up Cultivated Wit, said, “For the record, this is not the future I want, but it is what I expect given existing default settings in our economic and sociopolitical system preferences.

Given that the biggest investments in AI are on behalf of marketing efforts designed to deplete our attention and bank balances, I can only imagine this leading to days that are more filled but lives that are less fulfilled.

China’s monitoring of populations illustrates what this could look like in authoritarian and Western countries, with greater facial recognition used to identify people and affect their privacy.

Given that algorithms often have identifiable biases (e.g., favoring people who are white or male), they likely also have biases that are less well-recognized, such as biases that are negative toward people with disabilities, older people or other groups.
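One way such biases can at least be surfaced is a simple screening heuristic that compares a system's positive-outcome rates across demographic groups and flags ratios below four-fifths (echoing the "four-fifths rule" used in disparate-impact analysis). The sketch below is illustrative only: the decision data is invented, and real bias audits are far more involved.

```python
# Hypothetical sketch: measuring group disparity in automated decisions.
# The approval data below is invented; the 4/5 threshold echoes the
# common "four-fifths rule" from disparate-impact analysis.

def approval_rate(decisions):
    """Fraction of positive (1) outcomes in a list of 0/1 decisions."""
    return sum(decisions) / len(decisions)

def disparate_impact(group_a, group_b):
    """Ratio of the lower group's approval rate to the higher one's."""
    ra, rb = approval_rate(group_a), approval_rate(group_b)
    hi, lo = max(ra, rb), min(ra, rb)
    return lo / hi if hi else 1.0

# 1 = approved, 0 = denied, for two hypothetical demographic groups
group_a = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1]   # 80% approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1]   # 40% approved

ratio = disparate_impact(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.40 / 0.80 = 0.50
if ratio < 0.8:
    print("warning: disparity exceeds the four-fifths guideline")
```

A check like this only catches disparities along attributes someone thought to measure, which is exactly the point raised above: biases against less-monitored groups can go unnoticed.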

Thomas Schneider, head of International Relations Service and vice-director at the Federal Office of Communications (OFCOM) in Switzerland, said, “AI will help mankind to be more efficient, live safer and healthier, and manage resources like energy, transport, etc., more efficiently.

(We have seen this with every new technology: It can and will be used for good and bad.) Much will depend on how AI is governed: If we have an inclusive and bottom-up governance system of well-informed citizens, then AI will be used for improving our quality of life.

and competition through social pressure and control, we may risk a loss of individual fundamental freedoms (including but not limited to the right to a private life) that we have fought for in the last decades and centuries.”

Many current AI systems (including adaptive content-presentation systems and so-called recommender systems) try to avoid information and choice overload by replacing our decision-making processes with algorithmic predictions.

For example, Facebook’s current post-ranking systems will eventually turn us all into cat-video-watching zombies, because they follow our behavioral patterns, which may not be aligned with our preferences.
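The misalignment described above can be sketched in a few lines: a ranker that optimizes purely for observed clicks keeps surfacing whatever gets clicked most, regardless of what users would say they value. The function name and data below are invented for illustration; real ranking systems are vastly more complex.

```python
# Hypothetical sketch: a ranker that orders posts purely by predicted
# engagement (past click-through rate). All names and data are invented.

def rank_by_engagement(posts, clicks, impressions):
    """Order posts by observed click-through rate, highest first."""
    def ctr(post):
        shown = impressions.get(post, 0)
        return clicks.get(post, 0) / shown if shown else 0.0
    return sorted(posts, key=ctr, reverse=True)

posts = ["cat video", "news analysis", "friend's update"]
clicks = {"cat video": 90, "news analysis": 5, "friend's update": 10}
impressions = {"cat video": 100, "news analysis": 100, "friend's update": 100}

# Behavioral data alone pushes the cat video to the top, regardless of
# what the user would report actually valuing.
print(rank_by_engagement(posts, clicks, impressions))
```

Nothing in this objective represents the user's stated preferences; the feedback loop rewards whatever was clicked before, which is the pattern the quote warns about.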

Peter Reiner, professor and co-founder of the National Core for Neuroethics at the University of British Columbia, commented, “I am confident that in 2030 both arms of this query will be true: AI-driven algorithms will substantially enhance our abilities as humans and human autonomy and agency will be diminished.

On the one hand, if corporate entities retain unbridled control over how AI-driven algorithms interact with humans, people will be less well off, as the loss of autonomy and agency will be largely to the benefit of the corporations.

One could even parse this further, anticipating that certain decisions can be comfortably left in the hands of the AI-driven algorithm, with other decisions either falling back on humans or arrived at through a combination of AI-driven algorithmic input and human decision making.

Oscar Gandy, emeritus professor of communication at the University of Pennsylvania, responded, “AI systems will make quite substantial and important contributions to the ability of health care providers to generate accurate diagnoses of maladies and threats to my well-being, now and in the future.

They will also be able to exercise varying degrees of control over most human communications, financial transactions, transportation systems, power grids and weapon systems.

Driven by the convenience of connectivity, the greed that underlies business expansion and the pipedreams of muddle-headed people who confuse machine-like intelligence with biological intelligence, we will continue to build AIs we can barely understand and to expand the InterNest in which they will live –

An attorney specializing in policy issues for a global digital rights organization commented, “I’m not sure, even today, whether the tech advances of the last 12 years have been net positive over the global population.

Brian Behlendorf, executive director of the Hyperledger project at The Linux Foundation and expert in blockchain technology, wrote, “I am concerned that AI will not be a democratizing power, but will enhance further the power and wealth of those who already hold it.

Eileen Donahoe, executive director of the Global Digital Policy Incubator at Stanford University, commented, “While I do believe human-machines collaboration will bring many benefits to society over time, I fear that we will not have made enough progress by 2030 to ensure that benefits will be spread evenly or to protect against downside risks, especially as they relate to bias, discrimination and loss of accountability by that time.”

David Bray, executive director of People-Centered Internet, commented, “Hope: Human-machine/AI collaborations extend our abilities of humans while we (humans) intentionally strive to preserve values of respect, dignity and agency of choice for individuals.

Machines bring together different groups of people and communities and help us work and live together by reflecting on our own biases and helping us come to understand the plurality of different perspectives of others.

Fear: Machines amplify existing confirmation biases and other human characteristics, resulting in sensationalist, emotion-ridden news and other communications that get page views and ad clicks yet lack nuance of understanding, leading to tribalism and a devolution of open societies and pluralities to the detriment of the global human condition.”

Bernie Hogan, senior research fellow at Oxford Internet Institute, wrote, “The current political and economic climate suggests that existing technology, especially machine learning, will be used to create better decisions for those in power while creating an ever more tedious morass of bureaucracy for the rest.

We see few examples of successful bottom-up technology, open-source technology and hacktivism relative to the encroaching surveillance state and attention economy.”

Dan Buehrer, a retired professor of computer science formerly with the National Chung Cheng University in Taiwan, warned, “Statistics will be replaced by individualized models, thus allowing control of all individuals by totalitarian states and, eventually, by socially intelligent machines.”

Nathalie Marechal, doctoral candidate at the University of Southern California’s Annenberg School for Communication who researches the intersection of internet policy and human rights, said, “Absent rapid and decisive actions to rein in both government overreach and companies’

There are already plenty of examples about how decision-making is biased using big data, machine learning, privacy violations and social networks (just to mention a few elements) and one can see that the common citizen is unaware of how much of his/her will does not belong to him/her.

The socio-political implications of this breed deep primitive superstition, racial hatred toward whites and Asians who are seen as techno-colonialists and the growth of kleptocracies amid the current mushrooming of corruption.”

I published pioneering quantitative research on internet addiction and dependency in 1996, and followed up 15 years later with a related, updated research talk on the future of AI and internet dependency at a UNESCO-sponsored conference on information literacy in Morocco.

The internet is moving into the human body, and, in that process, societal statuses are altered, privileging some while abandoning others in the name of emerging technologies, and the global order is restructuring to the same effect.

Alan Mutter, a longtime Silicon Valley CEO, cable TV executive and now a teacher of media economics and entrepreneurism at the University of California, Berkeley, said, “The danger is that we will surrender thinking, exploring and experimentation to tools that hew to the rules but can’t color outside the lines.

Dan Geer, a respondent who provided no identifying details, commented, “If you believe, as do I, that having a purpose to one’s life is all that enables both pride and happiness, then the question becomes whether AI will or will not diminish purpose.

Cristobal Young, an associate professor of sociology at Cornell University specializing in economic sociology and stratification, commented, “I mostly base my response [that tech will not leave most people better off than they are today] on Twitter and other online media, which were initially praised as ‘liberation technology.’

Leadership in Lucerne, Switzerland, wrote, “The affordances of digital technologies bind people into information networks such that the network becomes the actor and intelligence as well as agency are qualities of the network as a whole and not any individual actors, whether human or non-human.

Networks will have access to much more information than do any present-day actors and therefore be able to navigate complex environments, e.g., self-driving cars, personal assistants, smart cities.

To ensure the use of these technologies for good instead of evil it will be necessary to dismantle and replace current divides between government and governed, workers and capitalists as well as to establish a working global governance.”

We will know by then, for example, how successful self-driving cars are going to be, and the problems inherent in handing off control from humans to machines in a variety of areas will also have become clearer.

Our institutions are insufficiently nimble to keep up with the policy questions that arise and attempts to regulate new industries are subverted by corrupt money politics at both the federal and state levels.”

A professor at a major U.S. university and expert in artificial intelligence as applied to social computing said, “As AI systems take in more data and make bigger decisions, people will be increasingly subject to their unaccountable decisions and non-auditable surveillance practices.

Justin Reich, executive director of MIT Teaching Systems Lab and research scientist in the MIT Office of Digital Learning, responded, “Systems for human-AI collaborations will be built by powerful, affluent people to solve the problems of powerful, affluent people.

So while AI might get increasingly good at extracting value from people, or manipulating people’s behavior toward more consumption and compliance, much less attention will likely be given to how AI can actually create value for people.

Technologists who are using emotional analytics, image-modification technologies and other hacks of our senses are destroying the fragile fabric of trust and truth that is holding our society together at a rate much faster than we are adapting and compensating –

The sophisticated tech is affordable and investible in the hands of very few people who are enriching themselves and growing their power exponentially, and these actors are NOT acting in the best interest of all people.”

Collin Baker, senior AI researcher at the International Computer Science Institute at the University of California, Berkeley, commented, “I fear that advances in AI will be turned largely to the service of nation states and mega-corporations, rather than be used for truly constructive purposes.

Devin Fidler, futurist and founder of Rethinkery Labs commented, “If earlier industrialization is any guide, we may be moving into a period of intensified creative destruction as AI technologies become powerful enough to overturn the established institutions and the ordering systems of modern societies.

If the holes punched in macro-scale organizational systems are not explicitly addressed and repaired, there will be increased pressures on everyday people as they face not only the problems of navigating an unfamiliar new technology landscape themselves, but also the systemic failure of institutions they rely on that have failed to adapt.”

As AI systems become more complex and are given increasingly important roles in the functioning of day-to-day life, we should ask ourselves what are we teaching our artificial digital children?

Peter Asaro, a professor at The New School and philosopher of science, technology and media who examines artificial intelligence and robotics, commented, “AI will produce many advantages for many people, but it will also exacerbate many forms of inequality in society.

It is likely to benefit a small group who design and control the technology greatly, benefit a fairly larger group of the already well-off in many ways, but also potentially harm them in other ways, and for the vast majority of people in the world it will offer few visible benefits and be perceived primarily as a tool of the wealthy and powerful to enhance their wealth and power.”

Stephanie Perrin, president of Digital Discretion, a data-privacy consulting firm, wrote, “There is a likelihood that, given the human tendency to identify risk when looking at the unknown future, AI will be used to attempt to predict risk.

This will find its way into public-space surveillance systems and employee-vetting systems (note the current court case in which LinkedIn is suing data scrapers who offer to predict ‘flight risk’).

While this might possibly introduce a measure of safety in some applications, the impact of fear that comes with unconscious awareness of surveillance will have a severe impact on creativity and innovation.

The advance of AI technologies is just going to continue this trend, unless quite draconian political changes are effected that bring transnational companies under proper democratic control.”

Decisions may also be made (even more than today) based on a vast array of collected data and if we are not careful we will be unable to control the flows of information about us used to make those decisions or to correct misunderstandings or errors which can follow us around indefinitely.

Imagine being subject to repeated document checks as you travel around the country because you know a number of people who are undocumented immigrants and your movements therefore fit the profile of an illegal immigrant.

In the present era of extreme inequality and climate catastrophe, I expect technologies to be used by employers to make individual workers more isolated and contingent, by apps to make users more addicted on a second-by-second basis, and by governments for surveillance and increasingly strict border control.”

Spafford, internet pioneer and founder and executive director emeritus of the Center for Education and Research in Information Assurance and Security, commented, “Without active controls and limits, the primary adopters of AI systems will be governments and large corporations.

Michael Muller, a researcher in the AI interactions group for a global technology solutions provider, said it will leave some people better off and others not, writing, “For the wealthy and empowered, AI will help them with their daily lives –

responded, “Tech design and policy affects our privacy in the United States so much so that most people do not think about the tracking of movements, behaviors and attitudes from smartphones, social media, search engines, ISPs [internet service providers] and even Internet of Things-enabled devices.

Until tech designers and engineers build privacy into each design and policy decision for consumers, any advances with human-machine/AI collaboration will leave consumers with less security and privacy.”

Goldhaber, an author, consultant and theoretical physicist who wrote early explorations on the digital attention economy, said, “For those without internet connection now, its expansion will probably be positive overall.

For the rest we will see an increasing arms race between uses of control, destructive anarchism, racism, etc., and ad hoc, from-below efforts at promoting social and environmental good.

Ian Peter, pioneer internet activist and internet rights advocate, said, “Personal data accumulation is reaching a point where privacy and freedom from unwarranted surveillance are disappearing.

Michael Zimmer, associate professor and privacy and information ethics scholar at the University of Wisconsin, Milwaukee, commented, “I am increasingly concerned that AI-driven decision making will perpetuate existing societal biases and injustices, while obscuring these harms under the false belief such systems are ‘neutral.’”

Nigel Hickson, an expert on technology policy development for ICANN based in Brussels, responded, “I am optimistic that AI will evolve in a way that benefits society by improving processes and giving people more control over what they do.

Furthermore, and relatedly, the capabilities of mass dataveillance in private and public spaces are ever-expanding, and their uptake in states with weak civil society organs and minimal privacy regulation is troubling.

In short, dominant global technology platforms show no signs of sacrificing their business models that depend on hoovering up ever more quantities of data on people’s lives then hyper-targeting them with commercial messages;

and across the world, political actors and state security and intelligence agencies then also make use of such data acquisitions, frequently circumventing privacy safeguards or legal constraints.”

But moving a decision such as health care or workplace performance to AI turns it into a data-driven decision driven by optimization of some function, which in turn demands more data.

A senior researcher and programmer for a major global think tank commented, “I expect AI to be embedded in systems, tools, etc., to make them more useful.

Jenni Mechem, a respondent who provided no identifying details, said, “My two primary reasons for saying that advances in AI will not benefit most people by 2030 are, first, there will continue to be tremendous inequities in who benefits from these advances, and second, if the development of AI is controlled by for-profit entities there will be tremendous hidden costs and people will yield control over vast areas of their lives without realizing it.

The examples of Facebook as a faux community commons bent on extracting data from its users and of pervasive internet censoring in China should teach us that neither for-profit corporations nor government can be trusted to guide technology in a manner that truly benefits everyone.

Suso Baleato, a fellow at Harvard University’s Institute of Quantitative Social Science and liaison for the Organization for Economic Cooperation and Development (OECD)’s Committee on Digital Economy Policy, commented, “The intellectual property framework impedes the necessary accountability of the underlying algorithms, and the lack of efficient redistributive economic policies will continue amplifying the bias of the datasets.”

Sasha Costanza-Chock, associate professor of civic media at MIT, said, “Unfortunately it is most likely that AI will be deployed in ways that deepen existing structural inequality along lines of race, class, gender, ability and so on.

Society, wrote, “AI technologies run the risk of providing a comprehensive infrastructure for corporate and state surveillance more granular and all-encompassing than any previous such regime in human history.”

Zoetanya Sujon, a senior lecturer specializing in digital culture at the University of the Arts London, commented, “As the history of so many technologies shows us, AI will not be the magic solution to the world’s problems or to symbolic and economic inequalities.

My fear: Will the benefits of more-powerful artificial intelligence accrue to the human race as a whole, or only to the thin layer at the top of the social hierarchy that owns the new advanced technologies?”

We can see the tip of this iceberg now with health insurance companies today scooping up readily available, poorly protected third-party data that will be used to discriminate.”

A senior data analyst and systems specialist expert in complex networks responded, “Artificial intelligence software will implement the priorities of the entities that funded development of the software.

In other cases, software will operate to the benefit of a large company but to the detriment of consumers (for example, calculating a price for a product that will be the highest that a given customer is prepared to pay).

A digital rights activist commented, “AI is already (through facial recognition, in particular) technologically laundering longstanding and pervasive bias in the context of police surveillance.

One of the chief fears about today’s technological change is the possibility that autonomous hardware and software systems will cause millions of people globally to lose their jobs and, as a result, their means for affording life’s necessities and participating in society.

Over time I think AI/machine learning strategies will become merely tools embedded in ever-more-complex technologies for which human control and responsibility will become clearer.”

Robert Atkinson, president of the Information Technology and Innovation Foundation, wrote about how advances in AI are essential to expanded job opportunities: “The developed world faces an unprecedented productivity slowdown that promises to limit advances in living standards.

But I’m optimistic in the long term that we’ll work out how to get machines to do the dirty, dull, dangerous and difficult, and leave us free to focus on all the more-important and human parts of our lives.”

Some fear the collapse of the middle class and social and economic upheaval if most of the world’s economic power is held by a handful of technology behemoths that are reaping the great share of financial rewards in the digital age while employing far fewer people than the leading companies of the industrial age.

A fairly large share of these experts warn that if steps are not taken now to adjust to this potential future, AI’s radical reduction in human work will be devastating.

David Cake, a leader with Electronic Frontiers Australia and vice-chair of the ICANN Generic Names Supporting Organization Council, wrote, “The greatest fear is that the social disruption due to changing employment patterns will be handled poorly and lead to widespread social issues.”

James Hendler, professor of computer, web and cognitive sciences and director of the Rensselaer Polytechnic Institute for Data Exploration and Application, wrote, “I believe 2030 will be a point in the middle of a turbulent time when AI is improving services for many people, but it will also be a time of great change in society based on changes in work patterns that are caused, to a great degree, by AI.

On the one hand, for example, doctors will have access to information that is currently hard for them to retrieve rapidly, resulting in better medical care for those who have coverage, and indeed in some countries the first point of contact in a medical situation may be an AI, which will help with early diagnoses/prescriptions.

On the other hand, over the course of a couple of generations, starting in the not-too-distant future we will see major shifts in work force with not just blue-collar jobs, but also many white-collar jobs lost.

Instead of relying on human expertise and context knowledge, many tasks will be handled directly by clients using AI interfaces or by lower-skilled people in service jobs, boosted by AI.

For AI to significantly benefit the majority, it must be deployed in emergency health care (where quicker lab work, reviews of medical histories or potential diagnoses can save lives) or in aid work (say, to coordinate shipping of expiring food or medicines from donors to recipients in need).”

Nathaniel Borenstein, chief scientist at Mimecast, wrote, “Social analyses of IT [information technology] trends have consistently wildly exaggerated the human benefits of that technology, and underestimated the negative effects.

I foresee a world in which IT and so-called AI produce an ever-increasing set of minor benefits, while simultaneously eroding human agency and privacy and supporting authoritarian forms of governance.

I also see the potential for a much worse outcome in which the productivity gains produced by technology accrue almost entirely to a few, widening the gap between the rich and poor while failing to address the social ills related to privacy.

Andrea Romaoli Garcia, an international lawyer active in internet governance discussions, commented, “AI will improve the way people make decisions in all industries because it allows instant access to a multitude of information.

Future human-machine interaction (AI) will only be positive if richer countries develop policies to help poorer countries to develop and gain access to work and wealth.”

Jeff Johnson, computer science professor at the University of San Francisco, previously with Xerox, HP Labs and Sun Microsystems, responded, “I believe advances in AI will leave many more people without jobs, which will increase the socioeconomic differences in society, but other factors could help mitigate this, e.g., adoption of guaranteed income.”

Hassaan Idrees, an electrical engineer and Fulbright Scholar active in creating energy systems for global good, commented, “I believe human-machine interaction will be more of [a] utility, and less fanciful than science fiction puts it.

For the developing countries, however, whose labor force is mostly unskilled and whose exports are largely low-tech, AI implies higher unemployment, lower income and more social unrest.

Like most innovations, I expect AI to leave our poor even poorer and our rich even richer, increasing the numbers of the former while consolidating power and wealth in an ever-shrinking group of currently rich people.”

A professional working on the setting of web standards wrote, “Looking ahead 12 years from now, I expect that AI will be enhancing the quality of life for some parts of some populations, and in some situations, while worsening the quality of life for others.

So many people included comments and concerns about the future of jobs for humans in their wide-ranging responses to this canvassing that a later section of this report has more expert opinions on this topic.

While these experts expect AI to augment humans in many positive ways, some are concerned that a deepening dependence upon machine-intelligence networks will diminish crucial human capabilities.

Some maintain there has already been an erosion of people’s abilities to think for themselves, to take action independent of automated systems and to interact effectively face-to-face with others.

Charles Ess, an expert in ethics and professor with the department of media and communication at the University of Oslo, said, “It seems quite clear that evolving AI systems will bring about an extraordinary array of options, making our lives more convenient.

The risk is of our offloading various cognitive practices and virtues to the machines, and thereby becoming less and less capable of exercising our own agency, autonomy and, most especially, our judgment (phronesis).

Daniel Siewiorek, a professor with the Human-Computer Interaction Institute at Carnegie Mellon University, predicted, “The downside: isolating people, decreasing diversity, a loss of situational awareness (witness GPS directional systems) and ‘losing the receipt.’

In the latter case, as we layer new capabilities on older technologies if we forget how the older technology works we cannot fix it and layered systems may collapse, thrusting us back into a more-primitive time.”

Garland McCoy, founder and chief development officer of the Technology Education Institute, wrote, “I am an optimist at heart and so believe that, given a decade-plus, the horror that is unfolding before our eyes will somehow be understood and resolved.

The lack of physical, embodied interaction is almost guaranteed to result in social loneliness and anomie, and associated problems such as suicide, which are already on the rise in the United States.”

Ebenezer Baldwin Bowles, author, editor and journalist, responded, “If one values community and the primacy of face-to-face, eye-to-eye communication, then human-machine/AI collaboration in 2030 will have succeeded in greatly diminishing the visceral, primal aspects of humanity.

Is it truly easier and safer to look into a screen and listen to an electronically delivered voice, far away on the other side of an unfathomable digital divide, instead of looking into another’s eyes, perhaps into a soul, and speaking kind words to one another, and perhaps singing in unison about the wonders of the universe?

A principal design researcher at one of the world’s largest technology companies commented, “Although I have long worked in this area and been an optimist, I now fear that the goal of most AI and UX is geared toward pushing people to interact more with devices and less with other people.

As a social species that is built to live in communities, reductions in social interaction will lead to erosion of community and rise in stress and depression over time.

Michael Dyer, an emeritus professor of computer science at the University of California, Los Angeles, commented, “As long as GAI (general AI) is not achieved then specialized AI will eliminate tasks associated with jobs but not the jobs themselves.

Nancy Greenwald, a respondent who provided no identifying details, wrote, “Perhaps the primary downside is overreliance on AI, which 1) is only as good as the algorithms created (how are they instructed to ‘learn’?) and 2) has the danger of limiting independent human thinking.

Valarie Bell, a computational social scientist at the University of North Texas, commented, “As a social scientist I’m concerned that never before have we had more ways in which to communicate and yet we’ve never done it so poorly, so venomously and so wastefully.

It is important that all technologies and applications are backed up with social policies and systems to support meaning and connection, or else even effective AI tools might be isolating and even damaging on aggregate.”

Some of these experts are particularly worried about how networked artificial intelligence can amplify cybercrime, create fearsome possibilities in cyberwarfare or enable the erosion of essential institutions and organizations.

Anthony Nadler, assistant professor of media and communication studies at Ursinus College, commented, “The question has to do with how decisions will be made that shape the contingent development of this potentially life-changing technology.

Snow, an innovation officer with the U.S. Air Force, wrote, “Facets, including weaponized information, cyberbullying, privacy issues and other potential abuses that will come out of this technology will need to be addressed by global leaders.”

People find ways to apply technologies to enhance the human spirit and the human experience, yet others can use technologies to exploit human fears and satisfy personal greed.

That said, the changes will represent a significant step toward what I call a DigiTransHuman Future, where the utility of humans will increasingly be diminished as this century progresses, to the extent that humans may become irrelevant or extinct, replaced by DigiTransHumans and their technologies/robots that will appear and behave just like today’s humans, except at very advanced stages of humanoid development.

Dan Schultz, senior creative technologist at Internet Archive, responded, “AI will no doubt result in life-saving improvements for a huge portion of the world’s population, but it will also be possible to weaponize in ways that further exacerbate divides of any kind you can imagine (political, economic, education, privilege, etc.).

Sam Gregory, director of WITNESS and digital human rights activist, responded, “Trends in AI suggest it will enable more individualized, personalized creation of synthetic media filter bubbles around people, including the use of deepfakes and related individualized synthetic audio and video micro-targeting based on personal data and trends in using AI-generated and directed bots.

Miguel Moreno-Muñoz, a professor of philosophy specializing in ethics, epistemology and technology at the University of Granada in Spain, said, “There is a risk of overreliance on systems with poorly experienced intelligence augmentation due to pressure to reduce costs.

Rall, a professor of arts and social sciences at Southern Cross University in Australia, responded, “The basic problem with the human race and its continued existence on this planet is overpopulation and depletion of the Earth's resources.

Patrick Lambe, a partner at Straits Knowledge and president of the International Society for Knowledge Organization’s Singapore chapter, wrote, “I chose the negative answer not because of a dystopian vision for AI itself and technology interaction with human life, but because I believe social, economic and political contexts will be slow to adapt to technology’s capabilities.

There will be some capability enhancement (e.g., medicine), but on the whole technology contributions will continue to add negative pressures to the other environmental factors (employment, job security, left-right political swings).

Alexey Turchin, existential risks researcher and futurist, responded, “There are significant risks of AI misuse before 2030 in the form of swarms of AI-empowered drones or even non-aligned human-level AI.”

Many respondents sketched out overall aspirations: Andrew Wycoff, the director of OECD’s directorate for science, technology and innovation, and Karine Perset, an economist in OECD’s digital economy policy division, commented, “Twelve years from now, we will benefit from radically improved accuracy and efficiency of decisions and predictions across all sectors.

The growing consensus that AI should benefit society at large leads to calls to facilitate the adoption of AI systems to promote innovation and growth, help address global challenges, and boost jobs and skills development, while at the same time establishing appropriate safeguards to ensure these systems are transparent and explainable, and respect human rights, democracy, culture, nondiscrimination, privacy and control, safety, and security.

Given the inherently global nature of our networks and the applications that run across them, we need to improve collaboration across countries and stakeholder groups to move toward common understanding and coherent approaches to key opportunities and issues presented by AI.

Arthur Bushkin, an IT pioneer who worked with the precursors to the Advanced Research Projects Agency Network (ARPANET) and Verizon, wrote, “The principal issue will be society’s collective ability to understand, manage and respond to the implications and consequences of the technology.”

Susan Mernit, executive director of The Crucible and co-founder and board member of Hack the Hood, responded, “If AI is in the hands of people who do not care about equity and inclusion, it will be yet another tool to maximize profit for a few.”

A number of these experts said ways must be found for people of the world to come to a common understanding of the evolving concerns over AI and digital life and to reach agreement in order to create cohesive approaches to tackling AI’s challenges.

In my area, health, there is tremendous potential in the confluence of advances in big data analysis and genomics to create personalised medicine and improve diagnosis, treatment and research.

It’s also important to have an honest dialogue between the experts, the media and the public about the use of our personal data for social-good projects, like health care, taking in both the risks of acting –

If so, the pervasiveness of AI/robotics in the future will diminish any negative impact and create a huge synergy among people and environment, improving people’s daily lives in all domains while achieving environment sustainability.”

Fiona Kerr, industry professor of neural and systems complexity at the University of Adelaide, commented, “The answer depends very much on what we decide to do regarding the large questions around ensuring equality of improved global health;

through the growth of understanding of the neurophysiological outcomes of human-human and human-technological interaction, which allows us to best decide what not to technologize, when a human is more effective, and how to ensure we maximise the wonders of technology as an enabler of a human-centric future.”

The question is whether we will find ways to increase trust and the possibilities for productive cooperation among people or whether individuals striving for power will try to dominate by decreasing trust and cooperation.

At the same time the concomitant increase in the levels of education and health will allow us to develop new social philosophies and rework our polities to transform human well-being.

Wangari Kabiru, author of the MitandaoAfrika blog, based in Nairobi, Kenya, commented, “In 2030, advancing AI and tech will not leave most people better off than they are today, because our global digital mission is not strong enough and not principled enough to assure that ‘no, not one is left behind.’

The full benefits of human-machine/AI collaboration can only be experienced when academia, civil society and other institutions are vibrant, enterprise is human-values-based, and governments and national constitutions and global agreements place humanity first.

A professor expert in AI, connected to a major global technology company’s projects in AI development, wrote, “Precision democracy will emerge from precision education, to incrementally support the best decisions we can make for our planet and our species.

As with the current development of precision health as the path from data to wellness, so too will artificial intelligence improve the impact of human collaboration and decision-making in sustaining our planet.

Bert Huang, an assistant professor in the department of computer science at Virginia Tech focused on machine learning, wrote, “AI will cause harm (and it has already caused harm), but its benefits will outweigh the harm it causes.

That said, the [historical] pattern of technology being net positive depends on people seeking positive things to do with the technology, so efforts to guide research toward societal benefits will be important to ensure the best future.”

Susan Etlinger, an industry analyst for Altimeter Group and expert in data, analytics and digital strategy, commented, “In order for AI technologies to be truly transformative in a positive way, we need a set of ethical norms, standards and practical methodologies to ensure that we use AI responsibly and to the benefit of humanity.

AI technologies have the potential to do so much good in the world: identify disease in people and populations, discover new medications and treatments, make daily tasks like driving simpler and safer, monitor and distribute energy more efficiently, and so many other things we haven’t yet imagined or been able to realize.

Bryan Johnson, founder and CEO of Kernel, a leading developer of advanced neural interfaces, and OS Fund, a venture capital firm, said, “We could start with owning our own digital data and the data from our bodies, minds and behavior, and then follow by correcting our major tech companies’

By utilizing blockchain or similar technologies and adopting progressive ideals toward citizens and their data, as demonstrated by countries like Estonia, we can usher in genuine digital democracy in the age of the algorithm.

Greg Lloyd, president and co-founder at Traction Software, presented a future scenario: “By 2030 AIs will augment access and use of all personal and networked resources as highly skilled and trusted agents for almost every person –

Certified agents will be granted access to personal or corporate resources, and within those bounds will be able to converse, take direction, give advice and act like trusted servants, advisers or attorneys.

Tracey Lauriault, assistant professor of critical media and big data at Carleton University’s School of Journalism and Communication, commented, “[What about] regulatory and policy interventions to protect citizens from potentially harmful outcomes, AI auditing, oversight, transparency and accountability?

Without some sort of principles of a systems-based framework to ensure that AI remains ethical and in the public interest, in a stable fashion, then I must assume that AI will impede agency and could lead to decision-making that can be harmful, biased, inaccurate and not able to dynamically change with changing values.

We should propose a code of ethics for AI to ensure that each type of application is oriented toward the well-being of the user: 1) do no harm to the user, 2) ensure benefits go to the user, 3) do not misuse his or her freedom, identity and personal data, and 4) decree as unfair any clause alienating the user’s independence or weakening his or her rights of control over privacy in use of the application.

Gary Kreps, distinguished professor of communication and director of the Center for Health and Risk Communication at George Mason University, wrote, “The tremendous potential for AI to be used to engage and adapt information content and computer services to individual users can make computing increasingly helpful, engaging and relevant.

Tech in general and AI in particular will promote the advancement of humanity in every area by allowing processes to scale efficiently, reducing the costs and making more services available to more people (including quality health care, mobility, education, etc.).

These tools, much like the internet itself, will allow people to do this ever more cheaply, quickly and in a far-reaching and easily replicable manner, with exponentially negative impacts on the environment.

Preventing this in its worst manifestations will require global industry regulation by government officials with hands-on experience in working with AI tools on the federal, state and local level, and transparent audits of government AI tools by grassroots groups of diverse (in every sense of the term) stakeholders.”

I trust the work by industry, academia and civil society to continue to play an important role in moderating the technology, such as pursuing understanding of the potentially costly personal, social and societal influences of AI.

Peter Stone, professor of computer science at the University of Texas at Austin and chair of the first study panel of the One Hundred Year Study on Artificial Intelligence (AI100), responded, “As chronicled in detail in the AI100 report, I believe that there are both significant opportunities and significant challenges/risks when it comes to incorporating AI technologies into various aspects of everyday life.

Anita Salem, systems research and design principal at SalemSystems, warned of a possible dystopian outcome: “Human-machine interaction will result in increasing precision and decreasing human relevance unless specific efforts are made to design in ‘humanness.’

This population will need to be controlled and AI will provide the means for this control: law enforcement by drones, opinion manipulation by bots, cultural homogeny through synchronized messaging, election systems optimized from big data and a geopolitical system dominated by corporations that have benefited from increasing efficiency and lower operating costs.”

While the benefits of AI/automation will accrue very quickly for the 1%, it will take longer for the rest of the populace to feel any benefits, and that’s ONLY if our representative leaders DELIBERATELY enact STRONG social and fiscal policy.

Any company using AI technologies should be heavily taxed, with that money going into strong social welfare programs like job retraining and federal jobs programs.

Martin Geddes, a consultant specializing in telecommunications strategies, said, “The unexpected impact of AI will be to automate many of our interactions with systems where we give consent and to enable a wider range of outcomes to be negotiated without our involvement.

Lindsey Andersen, an activist at the intersection of human rights and technology for Freedom House and Internews, now doing graduate research at Princeton University, commented, “Already, there is an overreliance on AI to make consequential decisions that affect people’s lives.

If we have not dealt with these problems through smart regulation, consumer/buyer education and establishment of norms across the AI industry, we could be looking at a vastly more unfair, polarized and surveilled world in 2030.”

Yeseul Kim, a designer for a major South Korean search firm, wrote, “The prosperity generated by and the benefits of AI will promote the quality of living for most people only when its ethical implications and social impacts are widely discussed and shared inside the human society, and only when pertinent regulations and legislation can be set up to mitigate the misconduct that can be brought about as the result of AI advancement.

If these conditions are met, computers and machines can process data at unprecedented speed and at an unrivaled precision level, and this will improve the quality of life, especially in medical and healthcare sectors.

I have no doubt that advances in AI will enhance human capacities and empower some individuals, but this will be more than offset by the fact that artificial intelligence and associated technological advances will mean far fewer jobs in the future.

This can be avoided by policies that provide for basic human needs and encourage a new definition of work, but the behavior to date by politicians, governments, corporations and economic elites gives me little confidence in their ability to lead us through this transition.”

These risks are ever-present and can be mitigated through societal awareness and education, and through regulation that identifies entities that become very powerful thanks to a specific technology or technologies, and which use that power to further strengthen themselves.

Sam Gregory, director of WITNESS and digital human rights activist, responded, “We should assume all AI systems for surveillance and population control and manipulation will be disproportionately used and inadequately controlled by authoritarian and non-democratic governments.

To fight back against this dark future we need to get the right combination of attention to legislation and platform self-governance right now, and we need to think about media literacy to understand AI-generated synthetic media and targeting.

Jonathan Kolber, futurist, wrote, “My fear is that, by generating AIs that can learn new tasks faster and more reliably than people can do, the future economy will have only evanescent opportunities for most people.

If, however, we fail to implement a market-oriented universal basic income or something equally effective, vast multitudes will become unemployed and unemployable without means to support themselves.

I foresee mostly positive results from AI so long as there are enough safeguards to protect against automated execution of tasks in areas with ethical considerations, such as decisions that may have life-or-death implications.

Danny O'Brien, international director for a nonprofit digital rights group, commented, “I'm generally optimistic about the ability of humans to direct technology for the benefit of themselves and others.

Bryan Alexander, futurist and president of Bryan Alexander Consulting, responded, “I hope we will structure AI to enhance our creativity, to boost our learning, to expand our relationships worldwide, to make us physically safer and to remove some drudgery.”

Scott Burleigh, software engineer and intergalactic internet pioneer, wrote, “Advances in technology itself, including AI, always increase our ability to change the circumstances of reality in ways that improve our lives.

A longtime Silicon Valley communications professional who has worked at several of the top tech companies over the past few decades responded, “AI will continue to improve *if* quality human input is behind it.

A changemaker working for digital accessibility wrote, “There is no reason to assume some undefined force will be able to correct for or ameliorate the damage of human nature amplified with power-centralizing technologies.

An information-science futurist commented, “I fear that powerful business interests will continue to put profits above all else, closing their eyes to the second- and third-order effects of their decisions.

A share of these experts suggest the creation of policies, regulations or ethical and operational standards should shift corporate and government priorities to focus on the global advancement of humanity, rather than profits or nationalism.

In light of current events, it’s hard to be optimistic that such an agenda will have the resources necessary to keep pace with transformative uses of AI throughout ever-increasing aspects of society.

To course-correct in time it's necessary for the general public to develop a deep appreciation about why leading ideologies concerning the market, prosperity and security are not in line with human flourishing.”

Benjamin Shestakofsky, an assistant professor of sociology at the University of Pennsylvania specializing in digital technology’s impacts on work, said, “Policymakers should act to ensure that citizens have access to knowledge about the effects of AI systems that affect their life chances and a voice in algorithmic governance.

Charles Zheng, a researcher into machine learning and AI with the National Institute of Mental Health, wrote, “To ensure the best future, politicians must be informed of the benefits and risks of AI and pass laws to regulate the industry and to encourage open AI research.

In fact the experience in China has shown how this technology can be used to take away the freedoms and rights of the individual for the purposes of security, efficiency, expediency and whims of the state.

John Willinsky, professor and director of the Public Knowledge Project at Stanford Graduate School of Education, said, “Uses of AI that reduce human autonomy and freedom will need to be carefully weighed against the gains in other qualities of human life (e.g., driverless cars that improve traffic and increase safety).

My hope, however, is that these deliberations are not framed as collaborations between what is human and what is AI but will be seen as the human use of yet another technology, with the wisdom of such use open to ongoing human consideration and intervention intent on advancing that sense of what is most humane about us.”

Anthony Picciano, a professor of education in the City University of New York’s Interactive Technology and Pedagogy program, responded, “I am concerned that profit motives will lead some companies and individuals to develop AI applications that will threaten, not necessarily improve, our way of life.

Bill Woodcock, executive director at Packet Clearing House, the research organization behind global network development, commented, “In short-term, pragmatic ways, learning algorithms will save people time by automating tasks like navigation, package delivery and shopping for staples.

But that tactical win comes at a strategic loss as long as the primary application of AI is to extract more money from people, because that puts them in opposition to our interests as a species, helping to enrich a few people at the expense of everyone else.

For the developing countries, however, whose labor force is mostly unskilled and whose exports are largely low-tech, AI implies higher unemployment, lower income and more social unrest.

For example, automatic real-time translation systems (e.g., Google’s Babel fish) would allow people who don’t speak a foreign language to find work in the tourism industry.”

Joe Whittaker, a former professor of sciences and associate director of the NASA GESTAR program, now associate provost at Jackson State University, said, “Actions should be taken to make the internet universally available and accessible, provide the training and know-how for all users.”

David Schlangen, a professor of applied computational linguistics at Bielefeld University in Germany, responded, “If the right regulations are put in place and ad-based revenue models can be controlled in such a way that they cannot be exploited by political interest groups, the potential for AI-based information search and decision support is enormous.

David Zubrow, associate director of empirical research at Carnegie Mellon University’s Software Engineering Institute, said, “How the advances are used demands wisdom, leadership and social norms and values that respect and focus on making the world better for all;

Melo, an associate professor of computer science at Instituto Superior Técnico in Lisbon, Portugal, responded, “I expect that AI technology will contribute to render several services (in health, assisted living, etc.) more efficient and humane and, by making access to information more broadly available, contribute to mitigate inequalities in society.

However, in order for positive visions to become a reality, both AI researchers and the general population should be aware of the implications that such technology can have, particularly in how information is used and the ways by which it can be manipulated.

Doug Schepers, chief technologist at Fizz Studio, said, “AI/ML, in applications and in autonomous devices and vehicles, will make some jobs obsolete, and the resulting unemployment will cause some economic instability that impacts society as a whole, but most individuals will be better off.

The social impact of software and networked systems will get increasingly complex, so ameliorating that software problem with software agents may be the only way to decrease harm to human lives, but only if we can focus the goal of software to benefit individuals and groups rather than companies or industries.”

Universities have to rethink what types of graduates to prepare, especially in the areas of health, law and engineering, where the greatest impact is expected, since the labor displacement of doctors, engineers and lawyers is already a reality with the systems now being developed.”

Andrian Kreye, a journalist and documentary filmmaker based in Germany, said, “If humanity is willing to learn from its mistakes with low-level AIs like social media algorithms there might be a chance for AI to become an engine for equality and progress.

An anonymous respondent wrote, “There are clearly advances associated with AI, but the current global political climate gives no indication that technological advancement in any area will improve most lives in the future.

A senior strategist in regulatory systems and economics for a top global telecommunications firm wrote, “If we do not strive to improve society, making the weakest better off, the whole system may collapse.

The greatest share of participants in this canvassing said automated systems driven by artificial intelligence are already improving many dimensions of their work, play and home lives and they expect this to continue over the next decade.

While they worry over the accompanying negatives of human-AI advances, they hope for broad changes for the better as networked, intelligent systems are revolutionizing everything, from the most pressing professional work to hundreds of the little “everyday”

An associate professor at a major university in Israel wrote, “In the coming 12 years AI will enable all sorts of professions to do their work more efficiently, especially those involving ‘saving life’: individualized medicine, policing, even warfare (where attacks will focus on disabling infrastructure and less in killing enemy combatants and civilians).

An assistant professor of artificial intelligence at Tilburg University in the Netherlands wrote, “Even though I see many ethical issues, potential problems and especially power imbalance/misuse issues with AI (not even starting on singularity issues and out-of-control AI), I do think AI will change most lives for the better, especially looking at the short horizon of 2030, because even bad effects of AI can be considered predominantly ‘good’

For example, the Cambridge Analytica case has shown us the huge privacy issues of modern social networks in a market economy, but, overall, people value the extraordinary services Facebook offers to improve communication opportunities, sharing capabilities and so on.”

Foghlú, engineering director and DevOps Code Pillar at Google’s Munich office, said, “The trend is that AI/ML models in specific domains can out-perform human experts (e.g., certain cancer diagnoses based on image-recognition in retina scans).

Mike Osswald, vice president of experience innovation at Hanson Inc., commented, “I’m thinking of a world in which people’s devices continuously assess the world around them to keep a population safer and healthier.

Thinking of those living in large urban areas, with devices forming a network of AI input through sound analysis, air quality, natural events, etc., that can provide collective notifications and insight to everyone in a certain area about the concerns of environmental factors, physical health, even helping provide no quarter for bad actors through community policing.”

Although AI will be disruptive through 2030 and beyond, meaning that there will be losers in the workplace and growing reasons for concern about privacy and AI/cyber-related crime, on the whole I expect that individuals and societies will make choices on use and restriction of use that benefit us.

Dana Klisanin, psychologist, futurist and game designer, predicted, “People will increasingly realize the importance of interacting with each other and the natural world and they will program AI to support such goals, which will in turn support the ongoing emergence of the ‘slow movement.’

For example, grocery shopping and mundane chores will be allocated to AI (smart appliances), freeing up time for preparation of meals in keeping with the slow food movement.

Liz Rykert, president at Meta Strategies, a consultancy that works with technology and complex organizational change, responded, “The key for networked AI will be the ability to diffuse equitable responses to basic care and data collection.

Nelson, a technology policy expert for a leading network services provider who worked as a technology policy aide in the Clinton administration, commented, “Most media reports focus on how machine learning will directly affect people (medical diagnosis, self-driving cars, etc.) but we will see big improvements in infrastructure (traffic, sewage treatment, supply chain, etc.).”

Gary Arlen, president of Arlen Communications, wrote, “After the initial frenzy recedes about specific AI applications (such as autonomous vehicles, workplace robotics, transaction processing, health diagnoses and entertainment selections), specific applications will develop –

For example, I expect our understanding of self and freedom will be greatly impacted by an instrumentation of a large part of memory, through personal logs and our data exhaust being recognized as valuable just like when we shed the term ‘junk DNA.’

Networked AI will bring us new insights into our own lives that might seem as far-fetched today as it would have been 30 years ago to say, ‘I’ll tell you what music your friends are discovering right now.’

Menasce, professor of computer science at George Mason University, commented, “AI and related technologies coupled with significant advances in computer power and decreasing costs will allow specialists in a variety of disciplines to perform more efficiently and will allow non-specialists to use computer systems to augment their skills.

David Wells, chief financial officer at Netflix at the time he responded to this study, responded, “Technology progression and advancement has always been met with fear and anxiety, giving way to tremendous gains for humankind as we learn to enhance the best of the changes and adapt and alter the worst.

James Kadtke, expert on converging technologies at the Institute for National Strategic Studies at the U.S. National Defense University, wrote, “Barring the deployment of a few different radically new technologies, such as general AI or commercial quantum computers, the internet and AI [between now and 2030] will proceed on an evolutionary trajectory.

Tim Morgan, a respondent who provided no identifying details, said, “Human/AI collaboration over the next 12 years will improve the overall quality of life by finding new approaches to persistent problems.

We will use these adaptive algorithmic tools to explore whole new domains in every industry and field of study: materials science, biotech, medicine, agriculture, engineering, energy, transportation and more.

David Cake, a leader with Electronic Frontiers Australia and vice-chair of the ICANN GNSO Council, wrote, “In general, machine learning and related technologies have the capacity to greatly reduce human error in many areas where it is currently very problematic and make available good, appropriately tailored advice to people to whom it is currently unavailable, in literally almost every field of human endeavour.”

Rather, we have learned to automate processes in which neural networks have been able to follow data to its conclusion (which we call ‘big data’) unaided and uncontaminated by human intuition, and sometimes the results have surprised us.

The ability for narrow AI to assimilate new information (the bus is supposed to come at 7:10 but a month into the school year is known to actually come at 7:16) could keep a family connected and informed with the right data, and reduce the mental load of household management.”
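The bus example above amounts to learning a recurring event's real time from observations. A minimal sketch of that idea, in pure Python (the five-observation threshold and times are invented for illustration, not any respondent's actual system):

```python
from statistics import median

def estimated_arrival(scheduled, observed):
    """Best guess for a recurring event's real time: trust the schedule
    until enough sightings accumulate, then use their median."""
    def to_min(t):               # "7:16" -> 436 minutes past midnight
        h, m = t.split(":")
        return int(h) * 60 + int(m)
    def to_str(n):
        return f"{n // 60}:{n % 60:02d}"
    if len(observed) < 5:        # too little data: keep the timetable
        return scheduled
    return to_str(round(median(to_min(t) for t in observed)))

# A month into the school year, the 7:10 bus is really a 7:16 bus.
sightings = ["7:15", "7:16", "7:17", "7:16", "7:14", "7:18", "7:16"]
print(estimated_arrival("7:10", sightings))  # 7:16
```

The median is used rather than the mean so a single outlier sighting does not skew the notification.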

In public service, a turbulent environment has created a situation where knowledge overload can seriously degrade our ability to do the things that are essential to implement policies and serve the public good.

Robert Stratton, cybersecurity expert, said, “While there is widespread acknowledgement in a variety of disciplines of the potential benefits of machine learning and artificial intelligence technologies, progress has been tempered by their misapplication.

As more-rigorous practitioners begin to gain comfort and apply these tools to other corpora it’s reasonable to expect some significant gains in efficiency, insight or profitability in many fields.

A data analyst for an organization developing marketing solutions said, “Assuming that policies are in place to prevent the abuse of AI and programs are in place to find new jobs for those who would be career-displaced, there is a lot of potential in AI integration.

For example, AI can be trained to identify and codify qualitative information from surveys, reviews, articles, etc., far faster and in greater quantities than even a team of humans can.

By having AI perform these tasks, analysts can spend more time parsing the data for trends and information that can then be used to make more-informed decisions faster and allow for speedier turn-around times.

What I know from my work in user-experience design and in exposure to many different Fortune 500 IT departments working in big data and analytics is that the promise and potential of AI and machine learning is VASTLY overstated.

The AI and machine learning code will be there, in a pocket here, a pocket there, but system-wide, it is unlikely to be operating reliably as part of the background radiation against which many of us play and work online.”

An anonymous respondent wrote, “While various deployments of new data science and computation will help firms cut costs, reduce fraud and support decision-making that involves access to more information than an individual can manage, organisations, professions, markets and regulators (public and private) usually take many more than 12 years to adapt effectively to a constantly changing set of technologies and practices.

For example, many organisations will be under pressure to buy and implement new services, but unable to access reliable market information on how to do this, leading to bad investments, distractions from core business, and labour and customer disputes.”

Daniel Berninger, an internet pioneer who led the first VoIP deployments at Verizon, HP and NASA, currently founder at Voice Communication Exchange Committee (VCXC), said, “The luminaries claiming artificial intelligence will surpass human intelligence and promoting robot reverence imagine exponentially improving computation pushes machine self-actualization from science fiction into reality.

Clay Shirky, writer and consultant on the social and economic effects of internet technologies and vice president at New York University, said, “All previous forms of labor-saving devices, from the lever to the computer, have correlated with increased health and lifespan in the places that have adopted them.”

Jamais Cascio, research fellow at the Institute for the Future, wrote, “Although I do believe that in 2030 AI will have made our lives better, I suspect that popular media of the time will justifiably highlight the large-scale problems: displaced workers, embedded bias and human systems being too deferential to machine systems.

The steady removal of human emotion-driven discrimination will rebalance social organizations, creating truly equitable opportunity for all people for the first time in human history.

The results will be primarily positive but will produce problems both in the process of change and in totally new types of problems that will result from the ways that people do adapt the new technology-based processes.”

Mark Crowley, an assistant professor, expert in machine learning and core member of the Institute for Complexity and Innovation at the University of Waterloo in Ontario, Canada, wrote, “While driving home on a long commute from work the human will be reading a book in the heads-up screen of the windshield.

Yvette Wohn, director of the Social Interaction Lab and expert on human-computer interaction at the New Jersey Institute of Technology, said, “One area in which artificial intelligence will become more sophisticated will be in its ability to enrich the quality of life so that the current age of workaholism will transition into a society where leisure, the arts, entertainment and culture are able to enhance the well-being of society in developed countries and solve issues of water production, food growth/distribution and basic health provision in developing countries.”

An anonymous respondent wrote, “There will be an explosive increase in the number of autonomous cognitive agents (e.g., robots), and humans will interact more and more with them, unaware, most of the time, whether they are interacting with a robot or with another human.

Michael Wollowski, associate professor of computer science and software engineering at Rose-Hulman Institute of Technology and expert in the Internet of Things, diagrammatic systems, and artificial intelligence, wrote, “Assuming that industry and government are interested in letting the consumer choose and influence the future, there will be many fantastic advances of AI.

A student and teaching assistant actively researching future human-machine symbiosis at the University of Edinburgh commented, “2030 is not that far away, so there is no room for extremely utopian/dystopian hopes and fears.

Given that AI is already used in everyday life (social-media algorithms, suggestions, smartphones, digital assistants, health care and more), it is quite probable that humans will live in a harmonious co-existence with AI as much as they do now –

Charlie Firestone, communications and society program executive director and vice president at the Aspen Institute, commented, “I remain optimistic that AI will be a tool that humans will use, far more widely than today, to enhance quality of life such as medical remedies, education and the environment.

For example, the AI will help us to conserve energy in homes and in transportation by identifying exact times and temperatures we need, identifying sources of energy that will be the cheapest and the most efficient.
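The energy idea above — identifying the cheapest times to consume power — reduces, in its simplest form, to scheduling a deferrable load in the lowest-cost window. A minimal sketch (the hourly prices and three-hour window are invented for illustration):

```python
# Schedule a deferrable task (e.g., charging, laundry) in the cheapest
# contiguous window of hourly electricity prices.
def cheapest_window(prices, hours):
    """Return the start index of the cheapest window of length `hours`."""
    costs = [sum(prices[i:i + hours])
             for i in range(len(prices) - hours + 1)]
    return costs.index(min(costs))

hourly = [0.30, 0.28, 0.12, 0.10, 0.11, 0.25, 0.35, 0.40]
start = cheapest_window(hourly, 3)
print(start)  # hours 2-4 carry the lowest total cost
```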

Lima, an associate professor of computer science at Instituto Superior Técnico in Lisbon, Portugal, said, “Overall, I see AI-based technology relieving us from repetitive and/or heavy and/or dangerous tasks, opening new challenges for our activities.

I envisage autonomous mobile robots networked with a myriad of other smart devices, helping nurses and doctors at hospitals in daily activities, working as a ‘third hand’

Steven Polunsky, director of the Alabama Transportation Policy Research Center at the University of Alabama, wrote, “AI will allow public transportation systems to better serve existing customers by adjusting routes, travel times and stops to optimize service.

will also write up analyses based on parameters elicited from conversation and imbue these analyses with different political (left/right) and linguistic (aggressive/mild) slants, chosen by the human, using advances in language generation, which are already well under way.

I often collect files of material on my cloud drive that I found interesting or needed to read later, and these agents would be able to summarize and engage me in a discussion of these materials, very much like an intellectual companion.

As always, we should worry what the availability of such agents might mean for normal human social interaction, but I can also see many advantages in freeing up time for socializing with other humans as well as enriched interactions, based on knowledge and science, assisted by our new intellectual companions.”

Lawrence Roberts, designer and manager of ARPANET, the precursor to the internet and Internet Hall of Fame member, commented, “AI voice recognition, or text, with strong context understanding and response will allow vastly better access to website, program documentation, voice call answering, and all such interactions will greatly relieve user frustration with getting information.

HCI once held that our ability to gain the benefit from computers would be limited by the total amount of time people can spend sitting in front of a screen and inputting characters through a keyboard.

Joseph Konstan, distinguished professor of computer science specializing in human-computer interaction and AI at the University of Minnesota, predicted, “Widespread deployment of AI has immense potential to help in key areas that affect a large portion of the world's population, including agriculture, transportation (more efficiently getting food to people) and energy.

Even as soon as 2030, I expect we’ll see substantial benefits for many who are today disadvantaged, including the elderly and physically handicapped (who will have greater choices for mobility and support) and those in the poorest part of the world.”

Having said that, there will be major short-term disruptions in the labor market and smart governments should begin to plan for this by considering changes to unemployment insurance, universal basic income, health insurance, etc.

I would say there is almost zero chance that the U.S. government will actually do this, so there will be a lot of pain and misery in the short and medium term, but I do think ultimately machines and humans will peacefully coexist.

Undoubtedly, new ways of using machines and new machine capabilities will be used to create economic activities and services that were either a) not previously possible, or b) previously too scarce and expensive, and now can be plentiful and inexpensive.

It is very good at pattern matching, but human intelligence goes far beyond pattern matching and it is not clear that computers will be able to compete with humans beyond pattern matching.

One could argue that much of the populist uprising we are experiencing globally finds its roots in the current displacements caused by machine learning, as typified by smart manufacturing.

Marek Havrda, director at NEOPAS and strategic adviser for the GoodAI project, a private research and development company based in Prague that focuses on the development of artificial general intelligence and AI applications, explained the issue from his point of view, “The development and implementation of artificial intelligence has brought about questions of the impact it will have on employment.

Apart from the ability to deploy AI, super-labour will be characterised by creativity and the ability to co-direct and supervise safe exploration of business opportunities together with perseverance in attaining defined goals.

at all aspects from product design to marketing and after-sales care, three people could create a new service and ensure its smooth delivery for which a medium-size company would be needed today.

is accessible to all citizens in absolute terms (e.g., having enough to finance public service and other public spending) which would make everyone better off than in pre-AI age, than the relative inequalities.”

Although in the past, too, it seemed as if these technologies would leave people unemployed and useless, human ingenuity and the human spirit always found new challenges that could best be tackled by humans.”

Even though people are concerned about computers replacing the jobs of humans the best-case scenario is that technology will be augmenting human capabilities and performing functions that humans do not like to do.

A principal architect for a major global technology company responded, “AI is a prerequisite to achieving a post-scarcity world, in which people can devote their lives to intellectual pursuits and leisure rather than to labor.

Reducing tedium will require changes to the social fabric and economic relationships between people as the demand for labor shrinks below the supply, but if these challenges can be met then everyone will be better off.”

Tom Hood, an expert in corporate accounting and finance, said, “By 2030, AI will stand for Augmented Intelligence and will play an ever-increasing role in working side-by-side with humans in all sectors to add its advanced and massive cognitive and learning capabilities to critical human domains like medicine, law, accounting, engineering and technology.

Imagine a personal bot powered by artificial intelligence working by your side (in your laptop or smartphone) making recommendations on key topics by providing up-to-the-minute research or key pattern recognition and analysis of your organization’s data?

One example is a CPA in tax given a complex global tax situation amid constantly changing tax laws in all jurisdictions who would be able to research and provide guidance on the most complex global issues in seconds.

A professor of computer science and expert in systems who works at a major U.S. technological university wrote, “By 2030, we should expect advances in AI, networking and other technologies enabled by AI and networks, e.g., the growing areas of persuasive and motivational technologies, to improve the workplace in many ways beyond replacing humans with robots.”

concerns about the potential negative impact of AI on the socioeconomic future if steps are not taken soon to begin to adjust to a future with far fewer jobs for humans.

Wout de Natris, an internet cybercrime and security consultant based in Rotterdam, Netherlands, wrote, “Hope: Advancement in health care, education, decision-making, availability of information, higher standards in ICT-security, global cooperation on these issues, etc.

Fear: Huge segments of society, especially the middle classes who carry society in most ways, e.g., through taxes, savings and purchases, will be rendered jobless through endless economic cuts by industry, followed by governments due to lower tax income.

Alex Halavais, an associate professor of social technologies at Arizona State University, wrote, “AI is likely to rapidly displace many workers over the next 10 years, and so there will be some potentially significant negative effects at the social and economic level in the short run.”

We may be at a tipping point in recognizing that social inequities need to be addressed, so, say, a decreased need for human labor due to AI will result in more time for leisure, education, etc., instead of increasing wealth inequity.”

A professor at the University of Wisconsin-Milwaukee responded, “Just as automation left large groups of working people behind even as the United States got wealthier as a country, it is quite likely that AI systems will automate the service sector in a similar way.

Fleischmann, an associate professor at the University of Texas at Austin’s School of Information, responded, “In corporate settings, I worry that AI will be used to replace human workers to a disproportionate extent, such that the net economic benefit of AI is positive, but that economic benefit is not distributed equally among individuals, with a smaller number of wealthy individuals worldwide prospering, and a larger number of less wealthy individuals worldwide suffering from fewer opportunities for gainful employment.”

Gerry Ellis, founder and digital usability and accessibility consultant at Feel The BenefIT, responded, “Technology has always been far more quickly developed and adopted in the richer parts of the world than in the poorer regions where new technology is generally not affordable.

A European computer science professor and expert in machine learning commented, “The social sorting systems introduced by AI will most likely define and further entrench the existing world order of the haves and the have-nots, making social mobility more difficult and precarious given the unpredictability of AI-driven judgements of fit.

The level of flexibility designed in to allow for changes in normative perceptions and judgements will be key to ensuring that AI driven-systems support rather than obstruct productive social change.”

Stephen McDowell, a professor of communication at Florida State University and expert in new media and internet governance, commented, “Much of our daily lives is made up of routines and habits that we repeat, and AI could assist in these practices.

They predict a rise in access to various tools, including digital agents that can perform rudimentary exams with no need to visit a clinic, a reduction in medical errors and better, faster recognition of risks and solutions.

Leonard Kleinrock, Internet Hall of Fame member and co-director of the first host-to-host online connection and professor of computer science at the University of California, Los Angeles, predicted, “As AI and machine learning improve, we will see highly customized interactions between humans and their health care needs.

This mass customization will enable each human to have her medical history, DNA profile, drug allergies, genetic makeup, etc., always available to any caregiver/medical professional that they engage with, and this will be readily accessible to the individual as well.

My hope and expectation is that intelligent agents will be able to assess the likely risks and the benefits that ensue from proposed treatments and procedures, far better than is done now by human evaluators, such humans, even experts, typically being poor decision makers in the face of uncertainty.

Granted, there may be large-scale problems caused by AI and robots, e.g., massive unemployment, but the recent trends seem to indicate small improvements such as health monitor apps outlined above, would be more easily developed and deployed successfully.”

Gabor Melli, senior director of engineering for AI and machine learning for Sony PlayStation, responded, “My hope is that by 2030 most of humanity will have ready access to health care and education through digital agents.”

With AI, we can program algorithms to help refine those decision-making processes, but only when we train the AI tools on human thinking, a tremendous amount of real data and actual circumstances and experiences.

While mammography guidelines have changed to try to reflect this reality, strong human emotion powered by anecdotal experience leaves some practitioners unwilling to change their recommendations based on evidence and advocacy groups reluctant to change their stance based on public outcry.

AI will redefine our understanding of health care, optimizing existing processes while simultaneously redefining how we answer questions about what it means to be healthy, bringing care earlier in the cycle due to advances in diagnostics and assessment, i.e.

Eduardo Vendrell, a computer science professor at the Polytechnic University of Valencia in Spain, responded, “In the field of health, many solutions will appear that will allow us to anticipate current problems and discover other risk situations more efficiently.

Monica Murero, director of the E-Life International Institute and associate professor in sociology of new technology at the University of Naples Federico II in Italy, commented, “In health care, I foresee positive outcomes in terms of reducing human mistakes, that are currently still creating several failures.

Also, I foresee an increased development of mobile (remote) 24/7 health care services and personalized medicine thanks to AI and human-machine collaboration applied to the field.”

An expert in communication said, “Life expectancy is increasing (globally) and human-machine/AI collaboration will help older people to manage their lives on their own by taking care of them, helping them in the household (taking out the garbage, cleaning up, etc.) as well as keeping them company –

In health care, for example, it will help doctors more accurately diagnose and treat disease and continually monitor high-risk patients through internet-connected medical devices.

It will bring health care to places with a shortage of doctors, allowing health care workers to diagnose and treat disease anywhere in the world and to prevent disease outbreaks before they start.”

I imagine people entering a government office or health facility where people with eye- or ear-related disabilities could effortlessly interact to state their necessities and resolve their information needs.”

Joe Whittaker, a former professor of sciences and associate director of the NASA GESTAR program, now associate provost at Jackson State University, responded, “My hope is that AI/human-machine interface will become commonplace especially in the academic research and health care arena.

Jay Sanders, president and CEO of the Global Telemedicine Group, responded, “AI will bring collective expertise to the decision point, and in health care, bringing collective expertise to the bedside will save many lives now lost by individual medical errors.”

John Lazzaro, retired professor of electrical engineering and computer science, University of California, Berkeley, commented, “When I visit my primary care physician today, she spends a fair amount of time typing into an EMS application as she’s talking to me.

The overall hopes for the future of health care are tempered by concerns that there will continue to be inequities in access to the best care and worries that private health data may be used to limit people’s options.

AI could, effectively, manage long-term health care costs by offering lesser treatment (and sub-optimal recovery rates) to individuals perceived to have a lower status.

An AI could subjectively evaluate that the patient has little interest in their own health and withhold more expensive treatment options leading to a shorter lifespan and an overall cost saving.”

Timothy Graham, a postdoctoral research fellow in sociology and computer science at Australian National University, commented, “In health care, we see current systems already under heavy criticism (e.g., the My Health Record system in Australia, or the NHS Digital program), because they are nudging citizens into using the system through an ‘opt-out’

Given the health care industry’s inherent profit motives it would be easy for them to justify how much cheaper it would be to simply have devices diagnose, prescribe treatment and do patient care, without concern for the importance of human touch and interactions.

Lou Gross, professor of mathematical ecology and expert in grid computing, spatial optimization and modeling of ecological systems at the University of Tennessee, Knoxville, said, “I see AI as assisting in individualized instruction and training in ways that are currently unavailable or too expensive.

Micah Altman, a senior fellow at the Brookings Institution and head scientist in the program on information science at MIT Libraries, wrote, “These technologies will help to adapt learning (and other environments) to the needs of each individual by translating language, aiding memory and providing us feedback on our own emotional and cognitive state and on the environment.

At the same time, child labor will be reduced because robots will be able to perform the tasks far cheaper and faster, forcing governments in Asia to find real solutions.”

Classes will, by 2030, be predominantly augmented-reality-based, with a full mix of physical and virtual students in classes presented in virtual classrooms by national and international universities and organizations.

The driving need will be expansion of knowledge for personal interest and enjoyment as universal basic income or equity will replace the automated tasks that had provided subsistence jobs in the old system.”

Jennifer Groff, co-founder of the Center for Curriculum Redesign, an international non-governmental organization dedicated to redesigning education for the 21st century, wrote, “The impact on learning and learning environments has the potential to be one of the most positive future outcomes.

Some high school- and college-level teaching will be conducted partially by video and AI-graded assignments, using similar platforms to the MOOC [massive open online courses] models today, with no human involvement, to deal with increasing costs for education (‘robo-TA’).”

Joe Whittaker, a former professor of sciences and associate director of the NASA GESTAR program, now associate provost at Jackson State University, responded, “Huge segments of society will be left behind or excluded completely from the benefits of digital advances –

AI can help customize curricula to each learner and guide/monitor their journey through multiple learning activities, including some existing schools, on-the-job learning, competency-based learning, internships and such.

consultant and analyst also said that advances in education have been held back by entrenched interests in legacy education systems, writing, “The use of technology in education is minimal today due to the existence and persistence of the classroom-in-a-school model.

In education and training, AI learning systems will recognize learning preferences, styles and progress of individuals and help direct them toward a personally satisfying outcome.

The expert predictions reported here about the impact of the internet between 2018 and 2030 came in response to questions asked by Pew Research Center and Elon University’s Imagining the Internet Center in an online canvassing conducted between July 4, 2018, and Aug. 6, 2018.

For this project, we invited more than 10,000 experts and members of the interested public to share their opinions on the likely future of the internet, and 985 responded to at least one of the questions we asked.

Or is it most likely that advancing AI and related technology systems will lessen human autonomy and agency to such an extent that most people will not be better off than the way things are today?”

Please consider giving an example of how a typical human-machine interaction will look and feel in a specific area, for instance, in the workplace, in family life, in a health care setting or in a learning environment.

Because people’s level of expertise is an important element of their participation in the conversation, anonymous respondents were given the opportunity to share a description of their internet expertise or background and this was noted where relevant in this report.

2018 Air Warfare Symposium - Spark Tank Final Competition

LT Col Dave Harden (AFWERX - Emcee); Judges: Hon Heather Wilson (SecAF), Gen David Goldfein (CSAF), Milo Medin (Alphabet, Inc.), and Jay Harrison ...

Starr Forum: Artificial Intelligence and National Security Law: A Dangerous Nonchalance

March 06, 2018 A conversation with James E Baker, former chief judge of the US Court of Appeals for the Armed Forces and a national security law expert.

Charles Krauthammer - Constitution Day Celebration 2011

Charles Krauthammer speaks at the Hillsdale College Constitution Day Celebration on September 18, 2011. His speech is titled, "Why We Celebrate ...

Building a Community of National Security Entrepreneurs

The National Security Strategy and the National Defense Strategy each call for a strong National Defense Innovation Base comprised of the whole of American ...

US Navy (USN) Asst. Secretary James "Hondo" Geurts / David Bray (CXOTALK #296)

How can the government innovate at scale while remaining agile and cost-effective? CXOTalk host, Michael Krigsman, speaks with two prominent leaders to ...

DR. JOHN BRANDENBURG : MARS & THE SECRET SPACE PROGRAM

TONIGHT I will be talking with Dr. John Brandenburg, well known physicist about the battles in the past on Mars and the secret space program. Short bio: DR.

NASA Provides Coverage of the National Space Council Meeting

NASA's Kennedy Space Center in Florida hosted a meeting of the National Space Council, chaired by Vice President Mike Pence on Wednesday, Feb. 21.

LIVE: Confirmation hearing for Supreme Court nominee Judge Brett Kavanaugh (Day 1)

Confirmation hearing for Supreme Court nominee Judge Brett Kavanaugh. Full video here: