MIRI December 2016 Newsletter
- On 20 November 2017
We’re in the final weeks of our push to cover our funding shortfall, and we’re now halfway to our $160,000 goal.
General updates
We teamed up with a number of AI safety researchers to help compile a list of recommended AI safety readings for the Center for Human-Compatible AI.
2017 Updates and Strategy
MIRI Strategy
In our last strategy update (August 2016), Nate wrote that MIRI’s priorities were to make progress on our agent foundations agenda and begin work on our new “Alignment for Advanced Machine Learning Systems” agenda, to collaborate and communicate with other researchers, and to grow our research and ops teams.
AI successes like AlphaGo indicate that it’s easier to outperform top humans in domains like Go (without any new conceptual breakthroughs) than might have been expected. This lowers our estimate of the number of significant conceptual breakthroughs needed to rival humans in other domains.
Our research priorities are somewhat different, since shorter timelines change what research paths are likely to pay out before we hit AGI, and also concentrate our probability mass more on scenarios where AGI shares various features in common with present-day machine learning systems.
As an example, Nate is spending less time on staff management and other administrative duties than in the past (having handed these off to MIRI COO Malo Bourgon) and less time on broad communications work (having delegated a fair amount of this to me), allowing him to spend more time on object-level research, research prioritization, and more targeted communications. I’ll lay out what these updates mean for our plans in more concrete detail below.
Work related to this exploratory investigation will be non-public-facing at least through late 2017, in order to lower the risk of marginally shortening AGI timelines (which can leave less total time for alignment research) and to free up researchers’ attention from having to think through safety tradeoffs for each new result. We’ve worked on non-public-facing research before, but this will be a larger focus in 2017.
1 means “limited progress”, 2 “weak-to-modest progress”, 3 “modest progress”, 4 “modest-to-strong progress”, and 5 “sizable progress”:
- Logical uncertainty and naturalized induction (2015 progress: 5).
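The scale above amounts to a simple lookup table; as a trivial restatement (the dictionary name is mine, not MIRI’s):

```python
# The five-point progress scale used in MIRI's self-assessments.
PROGRESS_SCALE = {
    1: "limited progress",
    2: "weak-to-modest progress",
    3: "modest progress",
    4: "modest-to-strong progress",
    5: "sizable progress",
}

print(PROGRESS_SCALE[5])  # sizable progress
```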
Jessica, Sam, and Scott have recently been working on the problem of reasoning procedures like Solomonoff induction giving rise to misaligned subagents (e.g., here), and considering alternative induction methods that might avoid this problem. In decision theory, a common thread in our recent work is that we’re using probability and topological fixed points in settings where we used to use provability.
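The shift from provability to probability and fixed points can be illustrated with a toy example (my own sketch, not a construction from MIRI’s papers): instead of asking whether an agent provably takes an action, one asks for an action probability p that is a fixed point of the agent’s response map f, i.e., f(p) = p. For any continuous f mapping [0, 1] into [0, 1], such a fixed point exists by Brouwer’s fixed-point theorem, and a simple bisection can locate one:

```python
import math

def fixed_point(f, lo=0.0, hi=1.0, tol=1e-9):
    """Find p with f(p) = p for continuous f: [lo, hi] -> [lo, hi].

    Bisection on g(p) = f(p) - p. Since g(0) = f(0) >= 0 and
    g(1) = f(1) - 1 <= 0, the invariant g(lo) >= 0 >= g(hi) is
    maintained and a root exists in [lo, hi] at every step.
    """
    g = lambda p: f(p) - p
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Toy "response map": a smoothed do-the-opposite-of-the-prediction rule.
# Provability-style reasoning gets stuck here (any definite prediction is
# self-refuting), but the continuous map has a fixed point at p = 0.5.
f = lambda p: 1 / (1 + math.exp(10 * (p - 0.5)))
p_star = fixed_point(f)  # ~0.5 by symmetry
```

This is only meant to convey the flavor of “probabilistic fixed points where provability fails”; the actual results alluded to (reflective oracles, logical inductors) use far more sophisticated machinery.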
Targeted outreach and closer collaborations
Our outreach efforts this year are mainly aimed at exchanging research-informing background models with top AI groups (especially OpenAI and DeepMind), AI safety research groups (especially the Future of Humanity Institute), and funders / conveners (especially the Open Philanthropy Project).
Relatively general algorithms (plus copious compute) were able to surpass human performance on Go, going from incapable of winning against the worst human professionals in standard play to dominating the very best professionals in the space of a few months.
The relevant development here wasn’t “AlphaGo represents a large conceptual advance over previously known techniques,” but rather “contemporary techniques run into surprisingly few obstacles when scaled to tasks as pattern-recognition-reliant and difficult (for humans) as professional Go”.
The publication of “Concrete Problems in AI Safety” last year, for example, caused us to reduce the time we were spending on broad-based outreach to the AI community at large in favor of spending more time building stronger collaborations with researchers we knew at OpenAI, Google Brain, DeepMind, and elsewhere.
We generally support a norm under which research groups weigh the costs and benefits of publishing results that could shorten AGI timelines, and err on the side of keeping potentially AGI-hastening results proprietary when uncertainty is high, unless there are strong positive reasons to disseminate the results under consideration.
Additionally, the ranking is based on the largest technical result we expect in each category, and emphasizes depth over breadth: if we get one modest-seeming decision theory result one year and ten such results the next year, those will both get listed as “modest progress”.
Research into risks from artificial intelligence
Potential for huge impact, but also a large chance of your individual contribution making little or no difference.
Ratings (given graphically in the original): career capital, direct impact, earnings, advocacy potential, ease of competition, job satisfaction. Our reasoning for these ratings is explained below.
Key facts on fit: strong technical abilities (at the level of a top-20 CS or math PhD program), a high degree of interest in the topic, and comfort with self-direction, as the field has little structure.
Updates to the research team, and a major donation
Concretely, we have already made several plan adjustments as a consequence, including moving forward with more confidence on full-time researcher hires, trialing more software engineers, and deciding to run only one fundraiser this year, in the winter. This is likely a one-time outlier donation, similar to the $631k in cryptocurrency donations we received from Ripple developer Jed McCaleb in 2013–2014.
Looking forward at our funding goals over the next two years: while we still have some uncertainty about our 2018 budget, our current point estimate is roughly $2.5M.
This would be sufficient for our 2018 budget even if we expand our engineering team more quickly than expected, and would give us a bit of a buffer to account for uncertainty in our future fundraising (in particular, uncertainty about whether the Open Philanthropy Project will continue support after 2017).
On a five-year timescale, our broad funding goals are as follows. On the low end, once we finish growing our team over the course of a few years, our default expectation is that our operational costs will be roughly $4M per year, mostly supporting researcher and engineer salaries.
On the high end, it’s possible to imagine scenarios involving an order-of-magnitude increase in our funding, in which case we would develop a qualitatively different set of funding goals reflecting the fact that we would most likely substantially restructure MIRI.
While we consider it reasonably likely that we are in a good position for this, we would instead recommend that donors direct additional donations elsewhere if we concluded that our donors (or other organizations) are in a better position than we are to respond to surprise funding opportunities in the AI alignment space.
Our five-year plans are largely based on assumptions about multiple-year funding flows, so how aggressively we decide to plan our growth in response to this new donation depends largely on whether we can sustainably raise funds at the level of the above goal in future years (e.g., it depends on whether and how other donors change their level of support in response).
To reduce the uncertainty going into our expansion decisions, we’re encouraging more of our regular donors to sign up for monthly donations or other recurring giving schedules; under 10% of our income currently comes from such donations, which limits our planning capabilities. We also encourage supporters to reach out to us about their future donation plans, so that we can answer questions and make more concrete and ambitious plans.
We are continuing to review applicants, and in light of the generous support we recently received and the strong pool of applicants so far, we are likely to trial more candidates than we’d planned previously.
However, we did not see enough progress on AAMLS problems over the last year to conclude that we should currently prioritize this line of research over our other work (e.g., our agent foundations research on problems such as logical uncertainty and counterfactual reasoning).
In comparison, problems in the agent foundations agenda have seen more progress:
- Logical uncertainty (definability of truth, reflective oracles, logical inductors)
- Decision theory (modal UDT, reflective oracles, logical inductors)
- Vingean reflection (model polymorphism, logical inductors)
One thing to note about these problems is that they were formulated on the basis of a strong intuition that they ought to be solvable.
One way the AAMLS agenda differs from our other research (e.g., the agent foundations agenda) is that it attempts to solve the whole alignment problem (including goal specification) given access to resources such as powerful reinforcement learning.
From our increased financial security and ability to more ambitiously pursue our plans, to the new composition and focus of the research team, the new engineers who are spending time with us, and the growth of the research that they’ll support, things are moving forward quickly and with purpose.
In particular, an important source of variance in our plans is how our new non-public-facing research progresses, where we’re likely to take on more ambitious growth goals if our new work looks like it’s going well.
We would also likely increase our reserves in this scenario, allowing us to better adapt to unexpected circumstances, and there is a smaller probability that we would use these funds to grow moderately more than currently planned without a significant change in strategy.
- On 26 February 2021
Nate Soares: "Ensuring Smarter-than-Human Intelligence has a Positive Outcome" | Talks at Google
Nate Soares is the Executive Director of the Machine Intelligence Research Institute (MIRI), an organization focused on ensuring that smarter-than-human intelligence has a positive outcome.
The jobs we'll lose to machines -- and the ones we won't | Anthony Goldbloom
Machine learning isn't just for simple tasks like assessing credit risk and sorting mail anymore -- today, it's capable of far more complex applications, like grading essays and diagnosing...
Humans Need Not Apply