Limits of Solomonoff Induction

Joel Artturi Saarinen

Abstract: Solomonoff’s theory of inductive inference is often regarded as a gold standard for formal theories of learning. However, several results have shown that Solomonoff’s predictor fails to converge in a wider range of circumstances than originally thought. This is exemplified by Hutter and Muchnik’s result that this learning failure occurs for a specific type of data string that is Martin-Löf random relative to the uniform Lebesgue measure, when the latter is assumed to be the underlying distribution generating the data, and by Hutter and Lattimore’s result that for any predictor there exists a certain type of data string relative to the uniform Lebesgue measure for which convergence fails. This thesis expands upon these results by showing that the argument given by Hutter and Muchnik holds for an arbitrary computable measure satisfying a certain boundedness assumption, and that one of the arguments given by Hutter and Lattimore holds for this same measure, delivering an additional blow to the standing of the Solomonoff inductor as a gold standard for learning. The thesis closes with a brief philosophical discussion of whether, given such failures of seemingly optimal learners and skepticism about the possibility of finding an optimal learner in the first place, there might be an alternative standard by which to evaluate optimality.
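
For orientation, the convergence property at issue can be sketched as follows. This is a paraphrase of Solomonoff’s classical convergence theorem using standard notation ($M$ for the universal a priori semimeasure, $\mu$ for the true computable measure generating the data), not a statement taken from the thesis itself:

```latex
% Solomonoff's classical result: if the data x_{1:\infty} is sampled
% from a computable measure \mu, then with \mu-probability one the
% predictive probabilities of the universal semimeasure M converge
% to those of \mu:
\[
  \lim_{t\to\infty}
  \bigl( M(x_{t+1} \mid x_{1:t}) - \mu(x_{t+1} \mid x_{1:t}) \bigr) = 0
  \qquad \text{with } \mu\text{-probability } 1.
\]
% The negative results discussed above concern the stronger demand of
% convergence on each individual Martin-Löf random sequence, which
% Hutter and Muchnik showed can fail.
```

The "probability one" qualifier is what leaves room for the failures described above: a set of sequences of measure zero may still contain Martin-Löf random sequences on which convergence breaks down.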