$2 Million Donated to Keep Artificial General Intelligence Beneficial and Robust

$2 million has been allocated to fund research that anticipates the development of artificial general intelligence (AGI) and explores how it can be designed to be beneficial.

“Things may move very quickly and we need research in place to make sure they go well.” Grant topics include: training multiple AIs to work together and learn from humans about how to coexist, training AI to understand individual human preferences, understanding what “general” actually means, incentivizing research groups to avoid a potentially dangerous AI race, and many more.

As the request for proposals stated, “The focus of this RFP is on technical research or other projects enabling development of AI that is beneficial to society and robust in the sense that the benefits have some guarantees: our AI systems must do what we want them to do.” FLI hopes that this round of grants will help ensure that AI remains beneficial as it becomes increasingly intelligent.

“We’ve identified a set of questions that we think are among the most important to tackle for securing robust governance of advanced AI, and strongly believe that with focused research and collaboration with others in this space, we can make productive headway on them.” -Allan Dafoe

“We are excited about this project because it provides a first unique and original opportunity to explicitly study the dynamics of safety-compliant behaviours within the ongoing AI research and development race, and hence potentially leading to model-based advice on how to timely regulate the present wave of developments and provide recommendations to policy makers and involved participants.”

2018 International AI Safety Grants Competition

For many years, artificial intelligence (AI) research has been appropriately focused on the challenge of making AI effective, with significant recent success, and great future promise.

In an open letter in 2015, a large international group of leading AI researchers from academia and industry argued that this success makes it important and timely to also research how to make AI systems robust and beneficial, and that this includes concrete research directions that can be pursued today.

The first Asilomar Principle is that “The goal of AI research should be to create not undirected intelligence, but beneficial intelligence,” and the second states that “Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies…”  The aim of this request for proposals is to support research that serves these and other goals indicated by the Principles.

This 2018 grants competition is the second round of the multi-million dollar grants program announced in January 2015, and will give grants totaling millions more to researchers in academic and other nonprofit institutions for projects up to three years in duration, beginning September 1, 2018.

Following the launch of the first round, the field of AI safety has expanded considerably in terms of institutions, research groups, and potential funding sources entering the field.

However, there are still relatively few resources devoted to issues that will become crucial if/when AI research attains its original goal: building artificial general intelligence (AGI) that can (or can learn to) outperform humans on all cognitive tasks (see Asilomar Principles 19-23).

Successful grant proposals will either relate directly to AGI issues, or clearly explain how the proposed work is a necessary stepping stone toward safe and beneficial AGI.

As with the previous round, grant applications will be subject to a competitive process of confidential expert peer review similar to that employed by all major U.S. scientific funding agencies, with reviewers being recognized experts in the relevant fields.

Proposals will be evaluated according to how topical and impactful they are.

TOPICAL: This RFP is limited to research that aims to help maximize the societal benefits of AGI, explicitly focusing not on the standard goal of making AI more capable, but on making it more robust and/or beneficial.

Very roughly, the expected split of funding is ~70% computer science and closely related technical fields and ~30% economics, law, ethics, sociology, policy, education, and outreach.

IMPACTFUL: Proposals will be rated according to their expected positive impact per dollar, taking all relevant factors into account. Strong proposals will make it easy for FLI to evaluate their impact by explicitly stating what they aim to produce (publications, algorithms, software, events, etc.) and when (after the 1st, 2nd, and 3rd year, say).

Preference will be given to proposals whose deliverables are made freely available (open access publications, open source software, etc.) where appropriate.

We wish to enable research that, because of its long-term focus or its non-commercial, speculative, or non-mainstream nature, would otherwise go unperformed due to lack of available resources.

To save time for both you and the reviewers, applications will be accepted electronically through a standard form on our website and evaluated in a two-part process, as follows:

INITIAL PROPOSAL — DUE FEBRUARY 25, 2018, 11:59 PM Eastern Time.

FULL PROPOSAL — DUE MAY 20, 2018: Completed full proposals will undergo a competitive process of external and confidential expert peer review, evaluated according to the criteria described in Section III.

A review panel of scientists in the relevant fields will be convened to produce a final rank ordering of the proposals, which will determine the grant winners; the panel will also make budgetary adjustments if necessary.

FLI is an independent, philanthropically funded nonprofit organization whose mission is to catalyze and support research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course considering new technologies and challenges.

 Proposals on the requested topics are all germane to the RFP, but the list is not meant to be either comprehensive or exclusive: proposals on other topics that similarly address long-term safety and benefits of AI are also welcomed.

December 20, 2017: RFP is released
February 25, 2018 (by 11:59 PM EST): Initial Proposals due
March 23, 2018: Full Proposals invited
May 20, 2018 (by 11:59 PM EST): Full Proposals (invite only) due
July 31, 2018: Grant Recommendations are publicly announced

Given this, if you are awarded a grant, your institution must (a) prove its equivalency to a nonprofit institution by providing the institution’s establishing law or charter, a list of key staff and board members, and a signed affidavit (for public universities), and (b) comply with the U.S. Patriot Act.

Please contact FLI if you have any questions about whether your institution is eligible, to get a list of organizations that can help administer your grant, or if you want to review the affidavit that public universities must fill out.

The committee is likely to recommend that some grants not be renewed, some be renewed at a reduced level, some be renewed at the same level, and some be offered the opportunity for increased funding in later years.

April 2017 Update on Grant to the Future of Life Institute for Artificial Intelligence RFP

In the first half of 2015, the Future of Life Institute (FLI) issued a request for proposals1 (RFP) to gather research proposals aimed at ensuring that artificial intelligence systems are robust and beneficial.

Because we view this grant as primarily aimed at helping grow and develop the field of AI safety, it is somewhat difficult to assess how effectively the grant has achieved our goals.

A major reason we considered contributing to the funding available to the RFP was our hope that FLI would be willing to allow us, as potential funders, to participate in the RFP process more than it would otherwise have reason to.

Note that in this section, and throughout this page, we mostly avoid naming specific projects, to allow us to state our rough impressions about the RFP as a whole without risking unwarranted reputational or other damage to specific projects.

We also believe the RFP process may have had a number of more indirect positive effects on field growth. We are not completely confident which projects would counterfactually not have been funded without our grant.

Since our impression is that four projects that ultimately received grants would likely not have reached the second round of review without Dario’s participation in the first-round selection panel, we think it is reasonable to largely attribute those projects getting funding to our involvement.

AI Safety Research

➣ The Control Problem in AI, by the Strategic AI Research Centre: This was an intensive workshop at Oxford, with a large number of participants, and covered, among many other things, goals and principles of AI policy and strategy, value alignment for advanced machine learning, the relative importance of AI vs. other x-risk, geopolitical strategy, government involvement, analysis of the strategic landscape, theory and methods of communication and engagement, the prospects of international-space-station-like coordinated AGI development, and an enormous array of technical AI control topics.

The brainstorming sessions included “rapid problem attacks” on especially difficult issues, a session drafting various “positive visions” for different AI development scenarios, and a session (done in partnership with Open Philanthropy) which involved brainstorming ideas for major funders interested in x-risk reduction.

Embedded Machine Learning, by Dragos Margineantu (Boeing), Rich Caruana (Microsoft Research), and Thomas Dietterich (Oregon State University): This workshop took place at the AAAI Fall Symposium (Arlington, VA, November 12-14, 2015) and covered unknown unknowns in machine learning and, more generally, issues at the intersection of software engineering and machine learning, including verification and validation.

I talked about AI applications in science (bird migration, automated scientist), law enforcement (fraud detection, insider threat detection), and sustainability (managing invasive species).

Many topics were discussed, including (a) the impact of AI on the future of employment and economic growth, (b) social intelligence and human-robot interaction, (c) the time scales of AI risks: short term, medium term, and very long term, (d) the extent to which mapping the brain will help us understand how the brain works, (e) the future of US Federal funding for AI research and especially for young faculty, (f) the challenges of creating AI systems that understand and exhibit ethical behavior, (g) the extent to which AI should be regulated either by government or by community institutions and standards, and (h) how to develop appropriate “motivational systems” for AI agents.

Among the questions discussed were (a) how to estimate causal effects under various kinds of situations (A/B tests, domain adaptation, observational medical data), (b) how to train classifiers to be robust in the face of adversarial attacks (on both training and test data), (c) how to train reinforcement learning systems with risk-sensitive objectives, especially when the model class may be misspecified and the observations are incomplete, and (d) how to guarantee that a learned policy for an MDP satisfies specified temporal logic properties.
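To give a concrete flavor of question (b), here is a minimal sketch of one standard approach, training a classifier on a mix of clean and gradient-based (FGSM-style) adversarial examples. The sketch assumes PyTorch, and the model, data, and epsilon value are hypothetical placeholders rather than anything produced at the workshop.

```python
# Minimal sketch of adversarial training against an FGSM-style attack.
# Purely illustrative: the model, data, and epsilon are hypothetical placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F


def fgsm_perturb(model, x, y, epsilon=0.1):
    """Shift x by epsilon in the sign of the loss gradient (a one-step attack)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    return (x_adv + epsilon * grad.sign()).detach()


def adversarial_training_step(model, optimizer, x, y, epsilon=0.1):
    """One optimization step on a 50/50 mix of clean and perturbed examples."""
    model.train()
    x_adv = fgsm_perturb(model, x, y, epsilon)
    optimizer.zero_grad()
    loss = 0.5 * F.cross_entropy(model(x), y) + 0.5 * F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    # Toy classifier and random data, only to make the sketch self-contained.
    model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
    x, y = torch.randn(32, 20), torch.randint(0, 2, (32,))
    print(adversarial_training_step(model, optimizer, x, y))
```

Stronger, multi-step attacks follow the same pattern, which is part of why robustness to adaptive adversaries remains an open research question rather than a solved problem.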

MIRI also ran four research retreats, internal workshops exclusive to MIRI researchers. Participants worked on questions of self-reference in type theory and automated theorem provers, with the goal of studying systems that model themselves. Participants:

Andrew Critch (MIRI), Patrick LaVictoire (MIRI), Abram Demski (USC Institute for Creative Technologies), Andrew MacFie (Carleton University), Daniel Filan (Australian National University), Devi Borg (Future of Humanity Institute), Jaan Altosaar (Google Brain), Jan Leike (Future of Humanity Institute), Jim Babcock (unaffiliated), Matthew Johnson (Harvard), Rafael Cosman (unaffiliated), Stefano Albrecht (UT Austin), Stuart Armstrong (Future of Humanity Institute), Sune Jakobsen (University College London), Tom Everitt (Australian National University), Tsvi Benson-Tilsen (UC Berkeley), Vadim Kosoy (Epicycle).

Participants at another workshop, consisting of MIRI staff and regular collaborators, worked on a variety of problems related to MIRI’s Agent Foundations technical agenda, with a focus on decision theory and the formal construction of logical counterfactuals.
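For readers unfamiliar with the area, the flavor of these self-reference questions is captured by Löb's theorem, included below purely as background; it is not a result of the retreats themselves.

```latex
% Löb's theorem, for a theory $T$ extending Peano Arithmetic with a standard
% provability predicate $\Box$; included purely as background on self-reference.
\[
  \text{If } T \vdash \Box P \rightarrow P, \text{ then } T \vdash P,
\]
% or, in its formalized version,
\[
  T \vdash \Box(\Box P \rightarrow P) \rightarrow \Box P.
\]
```

Informally, a system that can trust its own proofs of P as grounds for P can already prove P outright, which is one source of difficulty for agents that reason about themselves or about successors built from their own source code.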

Control and Responsible Innovation in the Development of Autonomous Systems Workshop, by The Hastings Center: The four co-chairs (Gary Marchant, Stuart Russell, Bart Selman, and Wendell Wallach) and The Hastings Center staff (particularly Mildred Solomon and Greg Kaebnick) designed this first workshop, which focused on exposing participants to relevant research progressing in an array of fields, stimulating extended reflection on key issues, and beginning the process of dismantling intellectual silos and loosely knitting the represented disciplines into a transdisciplinary community.

We hope that scientific and intellectual leaders, new to the challenge and participating in the second workshop, will take on the development of beneficial, robust, safe, and controllable AI as a serious research agenda.

Future of Life Institute — Artificial Intelligence Risk Reduction

The Open Philanthropy Project recommended $1,186,000 to the Future of Life Institute (FLI) to support research proposals aimed at keeping artificial intelligence robust and beneficial.

The proposals that were funded by the RFP span a wide range of approaches, including research on ensuring that advanced AI systems that may be developed in the future are aligned with human values, managing the economic impacts of AI, and controlling autonomous weapons systems.

Following the conference, Elon Musk announced a $10 million donation to FLI to support “a global research program aimed at keeping AI beneficial to humanity.”1 Soon thereafter, FLI issued a Request for Proposals (RFP) to solicit proposals aiming to make AI systems robust and beneficial,2 and published alongside it a document expanding on research priorities within this area.3 The goal of the RFP was to allocate $6 million of Musk’s donation to the most promising proposals submitted, in two categories: “project grants” and “center grants”.4 We see this RFP as an important step in the development of the nascent field of AI safety research.

It represents the first set of grant opportunities explicitly seeking to fund mainstream academic work on the subject, which we feel makes it an unusual opportunity for a funder to engage in early-stage field-building.

Telling FLI about our planned recommendation at this stage was intended to assist FLI in planning the review of second round proposals, and as an expression of good faith while we played an active role in the RFP process.

FLI’s announcement of the grants gives the following overview of the awardees:5 The winning teams, chosen from nearly 300 applicants worldwide, will research a host of questions in computer science, law, policy, economics, and other fields relevant to coming advances in AI.

FLI’s announcement also lists the 37 projects being funded.

With the research field of AI safety at such an early stage, we feel that it would be premature to have confident expectations of which research directions will end up being relevant and important once AI systems become advanced enough to pose significant risks.

Some of these directions - which we were glad to see represented among the successful projects - include transparency and meta-reasoning, robustness to context changes, calibration of uncertainty, and general forecasting.6 We also believe that early attempts to explore what it looks like when AI systems attempt to learn what humans value may be useful in informing later work, if more advanced and general AI (which may need to have some internal representation of human values, in a way current systems do not) is eventually developed.
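As a purely illustrative sketch of what “learning what humans value” can look like with today’s tools, the toy example below fits a reward model to pairwise preference data using a Bradley-Terry-style loss. It assumes PyTorch, and the network, the random “preference” data, and all names are hypothetical rather than drawn from any funded project.

```python
# Toy sketch: fit a reward model to pairwise human preferences using a
# Bradley-Terry-style loss. Illustrative only; model and data are made up.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical reward model: maps a 10-dimensional outcome description to a scalar score.
reward_model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Stand-in preference data: in each pair, a human preferred outcome A over outcome B.
preferred = torch.randn(256, 10)
rejected = torch.randn(256, 10)

for step in range(200):
    r_a = reward_model(preferred).squeeze(-1)
    r_b = reward_model(rejected).squeeze(-1)
    # Bradley-Terry: P(A preferred over B) = sigmoid(r_a - r_b); maximize its log-likelihood.
    loss = -F.logsigmoid(r_a - r_b).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# After training, the model assigns higher scores to the kinds of outcomes humans
# preferred, giving a crude learned stand-in for "what humans value" in this toy setting.
```

The design choice worth noting is that the model never sees a numeric reward, only comparisons; people are often better at comparing outcomes than at assigning them consistent scores.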

Elon Musk’s donation was the only other source of funding for this RFP, and this contribution was capped at $6 million from the outset.7 We considered the possibility that this amount would be increased late in the process if the quality of submissions was higher than expected.

Given that the proposals submitted could last one, two, or three years and generally requested an even amount of funding across their duration, this restriction meant that it would be essentially impossible for the RFP to allocate its entire budget if it funded any one- or two-year projects.

This included ensuring that talented applicants did not have their proposals rejected in a way that could cause them to feel demoralized or unmotivated to do research related to AI safety in the future, and funding the set of proposals that would best ensure the productive development of the field as a whole.

We had two other notable reservations while making this grant. As a more minor consideration, we share some of the challenges of working with FLI as an indication of reservations we had during the process of considering the grant, though we don’t consider these to be major issues for the grant going forward.

Changes in funding in the AI safety field

In 2016, grants from the Future of Life Institute (FLI) triggered growth in smaller-scale technical AI safety work.[1] Industry invested more over 2016, especially at Google DeepMind and potentially at OpenAI.[2] Because of their high salary costs, the monetary growth in spending at these firms may overstate the actual growth of the field.

FLI grantee projects will be coming to a close over the year, which may mean that technical hires trained through those projects become available to join larger centers.

If technical research consolidates into a handful of major teams, it might become easier to maintain an open dialogue between research groups, but it might decrease individual incentives to collaborate across groups, because researchers would already have enough collaboration opportunities locally.

Although little can be said about 2018 at this point, the current round of academic grants, which supports FLI grantees as well as FHI, ends in 2018, potentially creating a funding cliff.

(Though FLI has just announced a second funding round, and the MIT Media Lab has just announced a $27m center whose exact plans remain unspecified.)[4]

Estimated spending in AI safety, broken down by field of work

In 2014, the field of research was not very diverse.

It was roughly evenly split between work at FHI on macrostrategy (with limited technical work) and work at MIRI, which followed a relatively focused technical research agenda that placed little emphasis on deep learning.

MIRI remains the only non-profit doing technical research and continues to be the largest research group with 7 research fellows at the end of 2016 and a budget of $1.75m.

Google DeepMind probably has the second largest technical safety research group, with between 3 and 4 full-time-equivalent (FTE) researchers at the end of 2016 (most of whom joined at the end of the year), though OpenAI and Google Brain probably have 0.5-1.5 FTEs.[5] FHI and SAIRC remain the only large-scale AI strategy centers.
