The Good, the Bad, and the Difficult: Complexity in a Monotonicity-Grounded Natural Logic for Reasoning with Generalized Quantifiers

Jonathan Frederik Sippel

Abstract: We present a natural logic (NQL) for reasoning with generalized quantifiers that aims to predict mean human success on syllogistic and related reasoning tasks. Natural logics provide inference rules that operate directly on natural language representations, thereby gaining flexibility and expressive power; we argue that this makes NQL more cognitively plausible than competing theories. We further extend NQL to a natural language fragment concerned with quantifier iteration. The inference rules in NQL are assigned weights corresponding to a measure of the complexity of inferences; this weight assignment is motivated by semantic and psychological considerations. While the overarching goal is to align the complexity of sequences of inferences with the cognitive difficulty that reasoners encounter, we also aim to demonstrate that NQL can be used to predict the mean success rates of reasoners on related tasks. The natural logic approach highlights the inferential properties of expressions over their extensional ones and emphasizes how we use natural language in inference. While we show that NQL successfully models inferences with single and iterated quantifiers, we also derive empirically testable hypotheses from NQL's informative weights.
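
To make the abstract's central notion concrete, the following is a minimal illustrative sketch of what a weighted monotonicity rule and the induced complexity measure could look like; the rule format, the weight symbol $w_{\uparrow}$, and the additive complexity measure $c(\pi)$ are assumptions for illustration, not the paper's actual definitions. Since "all" is upward monotone in its second argument, replacing that argument with a superconcept is sound:

\[
\frac{\mathrm{All}(A,\,B) \qquad B \sqsubseteq B'}{\mathrm{All}(A,\,B')}\;\bigl(\uparrow\mathrm{Mon},\ \text{weight } w_{\uparrow}\bigr)
\qquad\qquad
c(\pi) \;=\; \sum_{r \in \pi} w(r)
\]

On one natural reading of the abstract, the predicted difficulty of a reasoning task would then be a function of $c(\pi)$ for the cheapest derivation $\pi$ of its conclusion.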