MIRI’s 2016 Fundraiser
Our 2016 fundraiser is underway! Unlike in past years, we’ll only be running one fundraiser in 2016, from Sep. 16 to Oct. 31. Employer matching and...
CSRBAI talks on agent models and multi-agent dilemmas
We’ve uploaded the final set of videos from our recent Colloquium Series on Robust and Beneficial AI (CSRBAI) at the MIRI office, co-hosted with the Future of Humanity Institute. A full list of CSRBAI...
October 2016 Newsletter
Our big announcement this month is our paper “Logical Induction,” introducing an algorithm that learns to assign reasonable probabilities to mathematical, empirical, and self-referential claims in a...
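For readers who want the formal core, the paper’s key definition can be paraphrased as follows (a rough restatement, not the paper’s exact wording): a market $\overline{\mathbb{P}} = (\mathbb{P}_1, \mathbb{P}_2, \ldots)$ satisfies the logical induction criterion if there is no efficiently computable trader $\overline{T}$ that exploits $\overline{\mathbb{P}}$, i.e., no polynomial-time trading strategy that, buying and selling shares in logical sentences at the prices $\mathbb{P}_n$, acquires holdings whose value is unbounded above while remaining bounded below.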
MIRI AMA, and a talk on logical induction
Nate, Malo, Jessica, Tsvi, and I will be answering questions tomorrow at the Effective Altruism Forum. If you’ve been curious about anything related to our research, plans, or general thoughts, you’re...
White House submissions and report on AI safety
In May, the White House Office of Science and Technology Policy (OSTP) announced “a new series of workshops and an interagency working group to learn more about the benefits and risks of artificial...
Post-fundraiser update
We concluded our 2016 fundraiser eleven days ago. Progress was slow at first, but our donors came together in a big way in the final week, nearly doubling our total. In the end, donors raised...
November 2016 Newsletter
Post-fundraiser update: Donors rallied late last month to get us most of the way to our first fundraiser goal, but we ultimately fell short. This means that we’ll need to make up the remaining $160k...
December 2016 Newsletter
We’re in the final weeks of our push to cover our funding shortfall, and we’re now halfway to our $160,000 goal. For potential donors who are interested in an outside perspective, Future of Humanity...
AI Alignment: Why It’s Hard, and Where to Start
Back in May, I gave a talk at Stanford University for the Symbolic Systems Distinguished Speaker series, titled “The AI Alignment Problem: Why It’s Hard, And Where To Start.” The video for this talk is...
New paper: “Optimal polynomial-time estimators”
MIRI Research Associate Vadim Kosoy has developed a new framework for reasoning under logical uncertainty, “Optimal polynomial-time estimators: A Bayesian notion of approximation algorithm.” Abstract:...
January 2017 Newsletter
Eliezer Yudkowsky’s new introductory talk on AI safety is out, in text and video forms: “The AI Alignment Problem: Why It’s Hard, and Where to Start.” Other big news includes the release of version 1...
Response to Cegłowski on superintelligence
Web developer Maciej Cegłowski recently gave a talk on AI safety (video, text) arguing that we should be skeptical of the standard assumptions that go into working on this problem, and doubly skeptical...
New paper: “Toward negotiable reinforcement learning”
MIRI Research Fellow Andrew Critch has developed a new result in the theory of conflict resolution, described in “Toward negotiable reinforcement learning: Shifting priorities in Pareto optimal...
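In outline (a simplified sketch of the result, using notation of our own choosing rather than the paper’s): when two principals with differing beliefs $P_1, P_2$ jointly delegate to one agent, any Pareto-optimal policy behaves as if maximizing a weighted sum of their expected utilities,

$\pi^* \in \arg\max_\pi \; w_1 \, \mathbb{E}^\pi_{P_1}[U_1] + w_2 \, \mathbb{E}^\pi_{P_2}[U_2],$

where each principal’s effective weight is rescaled over time in proportion to how well $P_i$ predicted the observations so far. Priorities thus shift toward whichever principal’s beliefs prove more accurate.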
CHCAI/MIRI research internship in AI safety
We’re looking for talented, driven, and ambitious technical researchers for a summer research internship with the Center for Human-Compatible AI (CHCAI) and the Machine Intelligence Research Institute...
February 2017 Newsletter
Following up on a post outlining some of the reasons MIRI researchers and OpenAI researcher Paul Christiano are pursuing different research directions, Jessica Taylor has written up the key...
Using machine learning to address AI risk
At the EA Global 2016 conference, I gave a talk on “Using Machine Learning to Address AI Risk”: It is plausible that future artificial general intelligence systems will share many qualities in common...
March 2017 Newsletter
Research updates: New at IAFF: Some Problems with Making Induction Benign; Entangled Equilibria and the Twin Prisoners’ Dilemma; Generalizing Foundations of Decision Theory. New at AI Impacts: Changes...
New paper: “Cheating Death in Damascus”
MIRI Executive Director Nate Soares and Rutgers/UIUC decision theorist Ben Levinstein have a new paper out introducing functional decision theory (FDT), MIRI’s proposal for a general-purpose decision...
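Schematically (a simplified rendering of the idea, not the paper’s full formalism): where causal decision theory intervenes on the agent’s physical act, FDT evaluates actions by intervening on the output of the decision procedure itself,

$\mathrm{FDT}(P, x) \;=\; \arg\max_{a \in \mathcal{A}} \; \mathbb{E}\big[V \,\big|\, \mathrm{do}\big(\mathrm{FDT}(P, x) = a\big)\big],$

so that the agent treats every physical instantiation and prediction of its decision function as varying together with its choice.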
2016 in review
It’s time again for my annual review of MIRI’s activities. In this post I’ll provide a summary of what we did in 2016, see how our activities compare to our previously stated goals and predictions,...
Two new researchers join MIRI
MIRI’s research team is growing! I’m happy to announce that we’ve hired two new research fellows to contribute to our work on AI alignment: Sam Eisenstat and Marcello Herreshoff, both from Google....