MIRI October 2016 Newsletter

The following newsletter was originally posted on MIRI’s website.

Our big announcement this month is our paper “Logical Induction,” introducing an algorithm that learns to assign reasonable probabilities to mathematical, empirical, and self-referential claims in a way that outpaces deduction.
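To give a concrete feel for that claim, here is a toy illustration — emphatically not the paper's algorithm, just the phenomenon it formalizes: a reasoner can assign a sensible probability to a digit-of-pi claim long before it has the resources to deduce the answer. The digit position, the claimed digit, and the third-party mpmath dependency are all assumptions made for the sake of the example.

```python
# Toy illustration only -- NOT the logical induction algorithm from the paper.
# It shows the phenomenon the paper formalizes: assigning a reasonable
# probability to a mathematical claim before deduction can settle it.
from mpmath import mp, nstr  # third-party: pip install mpmath

position = 1000       # hypothetical claim: "the 1000th decimal digit of pi is 7"
claimed_digit = "7"

# Before computing anything, pi's digits are (empirically) equidistributed,
# so credence 1/10 is already "reasonable" -- this outpaces deduction.
prior = 1 / 10
print(f"P(claim) before any computation: {prior}")

# Deduction eventually catches up: compute the digit outright.
mp.dps = position + 20                        # working precision
digits = nstr(mp.pi, position + 10, strip_zeros=False)
actual = digits.split(".")[1][position - 1]   # 1000th digit after the point
posterior = 1.0 if actual == claimed_digit else 0.0
print(f"digit is {actual}; P(claim) after deduction: {posterior}")
```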

General updates

We wrote up a more detailed fundraiser post for the Effective Altruism Forum, outlining our research methodology and the basic case for MIRI.

Security Mindset and the Logistic Success Curve

The Open Philanthropy Project usually prefers not to provide more than half of an organization’s funding, to facilitate funder coordination and ensure that organizations it supports maintain their independence.

2016 in review

We were also awarded a $75,000 grant from the Center for Long-Term Cybersecurity to pursue a corrigibility project with Stuart Russell and a new UC Berkeley postdoc, but we weren’t able to fill the intended postdoc position in the relevant timeframe and the project was canceled.

How Does Recent AI Progress Affect The Bostromian Paradigm?

Rob Bensinger says: October 31, 2016 at 5:35 pm

See also Nate on the EA Forum: Loosely speaking, we can imagine the space of all smarter-than-human AI systems as an extremely wide and heterogeneous space, in which “alignable AI designs” is a small and narrow target (and “aligned AI designs” smaller and narrower still).

MIRI’s goal is more or less “find research directions that are likely to make it easier down the road to develop AI systems that can complete some limited (superhumanly difficult) task without open-endedly optimizing the universe”, and our strategy is more or less “develop a deeper conceptual understanding of the relevant kinds of intelligent systems, so that researchers aren’t flying blind”.

There aren’t many good tools yet for formally characterizing general reasoning systems. If you want to set that problem aside entirely, then saying “Well, pretend it can successfully brute-force-search through all mathematical claims for some true ones” is one way of saying “Pretend we knew how to formally pinpoint a really capable general reasoning system.”
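For concreteness, here is a minimal sketch of that brute-force idealization, restricted to propositional logic — the variable set, the nesting depth, and the use of Python's eval are choices made purely for illustration: enumerate claims exhaustively and keep the ones that check out as true.

```python
# Minimal sketch of the idealization: brute-force-search through all
# (propositional, depth-bounded) claims and keep the true ones. Real general
# reasoners are nothing like this; assuming such a search is one way of
# assuming away the hard formalization problem.
from itertools import product

VARS = ("p", "q")

def formulas(depth):
    """Enumerate propositional formulas over VARS up to the given nesting depth."""
    if depth == 0:
        yield from VARS
        return
    subs = list(formulas(depth - 1))
    yield from subs
    for f in subs:
        yield f"(not {f})"
    for f, g in product(subs, repeat=2):
        yield f"({f} or {g})"
        yield f"({f} and {g})"

def is_true(formula):
    """'Deduction' by brute force: check the formula under every assignment."""
    return all(
        eval(formula, {"__builtins__": {}}, dict(zip(VARS, values)))
        for values in product((True, False), repeat=len(VARS))
    )

# Brute-force search through all depth <= 2 claims for the true ones.
truths = [f for f in dict.fromkeys(formulas(2)) if is_true(f)]
print(len(truths), truths[:3])  # e.g. '(p or (not p))' appears among them
```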