Exploring the Computational Necessity of Dual Processes for Intelligence

Clara Zoe Riedmiller

Abstract: The recent success of deep-learning-based Large Language Models (LLMs) (Grattafiori et al. 2024; T. Brown et al. 2020) across a wide range of tasks has established them as the most promising candidate for achieving human-level Artificial Intelligence (AI). At the same time, their systematic failures on formal reasoning tasks (Xu et al. 2025) reveal a struggle with robust generalization (Marcus 2020). This raises questions about the nature and limitations of their intelligence, which remains poorly understood at a theoretical level (van Rooij et al. 2024; Bender et al. 2021). This thesis investigates how understanding different AI models through the lens of their inference mechanisms can shed light on this issue, drawing on insights from Dual Process Theory (Evans and Stanovich 2013). This framework from Cognitive Science proposes that cognition comprises two types of thinking: System 1, which makes fast, approximate, and implicit inferences, and System 2, which reasons slowly, accurately, and explicitly. We demonstrate that neural network architectures fundamentally align with the characterization of System 1, while symbolic systems align with System 2. To formalize these claims, we apply a complexity-theoretic analysis that allows us to study AI under real-world resource constraints and to assess intelligence in relation to the cost-accuracy tradeoff (Johnson and Payne 1985). This analysis offers a theoretical account of LLMs' struggles with formal reasoning by framing them as System 1 processes (Kambhampati et al. 2024). Further, it argues that interaction with a System 2 component is necessary for intelligent behavior. Together, these examinations provide evidence for the computational necessity of dual processes for intelligence.