=An intelligence explosion might be imminent=

In 1965, I.J. Good proposed that machines would one day be smart enough to make themselves smarter. Having made themselves smarter, they would spot still further opportunities for improvement, quickly leaving human intelligence far behind. The Singularity Institute aims to reduce the risk of a catastrophe resulting from an intelligence explosion. The danger stems from the fact that human survival [requires scarce resources]: resources for which intelligent machines may have [other uses]. The Singularity Institute's primary approach to reducing artificial intelligence (AI) risk has therefore been to promote the development of AI with [benevolent motivations] that are reliably stable under [self-improvement]. We call this goal "Friendly AI."

The Singularity Institute needs money and people if it is going to succeed. Here's what we're focusing on:

=Add skilled researchers to our research staff=

In the coming year, the Singularity Institute intends to add skilled researchers to work on our [open problems in creating friendly artificial intelligence]. Right now, these problems are highly theoretical: we haven't figured out how to make an AI at all, let alone how to make it friendly or stable. Our current research focuses on friendliness and goal stability because it would be dangerous to know how to build an AI without knowing how to make it friendly or stable. And it is theoretical because we only get one chance -- since the world might be at stake, we can't just try things and see what works.

To this end, we are currently seeking researchers with skills in mathematics, theoretical computer science, and philosophy. Our [Research Associates] and [Visiting Fellows] are unpaid, but we support them with room and board in Singularity Institute housing, usually large shared apartments in Berkeley. Our [resident faculty] are paid a living wage on a case-by-case basis.

Supporting our researchers is the single greatest expense at the Singularity Institute, and it's the most critical one. The more great minds we have working on these problems, the faster we can figure them out.

=Improve the open problems document=

In the coming year, one of our researchers will spend significant time fleshing out the [list of open problems]. This list is a critical tool for attracting new researchers with fresh ideas to the Singularity Institute, because people who already happen to be working on related problems will discover that humanity can benefit from their research. This serendipitous effect can only occur, however, if those researchers can read a description of an open problem and quickly determine whether their own research relates to Friendly AI. Since those researchers have other work to do, the open problems document needs to be concise yet detailed, and exceptionally well written for a wide audience.

=Write content for Less Wrong=

Less Wrong is a community website about the study of rationality: understanding how humans think and decide, drawing on cognitive science and social psychology, and using that knowledge to think more accurately and make better decisions. The Singularity Institute created and currently runs Less Wrong. Our staff produces much of the content, but many unaffiliated people also write for the site. The Less Wrong community self-selects for people who are interested in rationality, and its content trains these people to become better rationalists.
Less Wrong is now the primary way the Singularity Institute recruits researchers, because the tools of rationality are critically important when thinking about the direction of future technologies such as AI. There are many cognitive pitfalls -- for example, misplaced optimism about whether an AI is safe could have catastrophic consequences -- and Less Wrong readers are well trained to recognize and avoid them.

We continue to spend resources promoting the study of rationality on Less Wrong. The most important way to do this is to produce new content, because new content attracts new kinds of people into the community. Less Wrong's reach is still rather small -- on the order of a million unique visitors since it was created, only a small percentage of whom are regular visitors -- so we expect that we can grow the community to include many more people, if only we can get them to discover it.

=Compile a textbook about rationality=

Eliezer Yudkowsky was the author of most of the content on Less Wrong when it was created in 2009. He wrote the Sequences, a series of blog posts about rationality comprising over a million words. Now he is compiling the Sequences into a mainstream rationality textbook. Once the textbook exists, rationality can become a subject of formal training in universities, community colleges, and seminar series, and a large new pool of rational thinkers will grow. This new pool of rationalists will be another source of excellent research candidates, and it will have little overlap with Less Wrong.

=Make the case to optimal philanthropists=

The optimal philanthropy community is composed of people committed to making the world a better place by giving money to highly effective charities. Effectiveness is measured in lives saved or significantly improved, with a focus on transparency and measurable positive impact. Typically, the community relies on a few research organizations such as [GiveWell] to do trustworthy research, discovering and recommending a small number of charities considered the most effective known. GiveWell moved almost two million dollars last year, and they are growing very rapidly.

In the past, these research organizations have recommended charities focused on improving health in third-world countries. We at the Singularity Institute believe we can make a compelling case that the expected value of funding our present research is higher than that of any other charity in the world, including the most effective known health-related charities. If we can convince GiveWell, or another research organization, to recommend us to their donors (even as an honorable mention), we could direct large amounts of money to the Singularity Institute. So during this year, some of our staff will spend significant time making this argument.