Interview with Pierre Wagner on logic, philosophy and computer science


Pierre Wagner is Professeur des universités. He specializes in logic and the history and philosophy of logic, as well as in the history of analytical philosophy, with a focus on Rudolf Carnap’s philosophy. Since 2018, he has also been the head of the Institut d’histoire et de philosophie des sciences et des techniques (UMR 8590). His research concerns formal logic and the relations between science and philosophy. He has authored many volumes: Logique et philosophie, published with Ellipses in 2014; La Logique, in the Presses universitaires de France’s Que sais-je? collection in 2007, reprinted in 2024; and La Machine en logique, with the Presses universitaires de France in 1998.

On the need to train philosophy students in logic and computer science

You have long been in charge of the Logique et culture scientifique Licence program. In this undergraduate program, we find courses in logic and introductions to formal and mathematical reasoning — but also a course in philosophy and computer science (at the second-year level). This is something of a unique case among the undergraduate courses on offer for humanities and social science students at Paris 1. Could you tell us more about this course and its aims?

Pierre Wagner: To grasp the raison d’être and aims of the second-year philosophy and computer science course, it is best to situate it in the context of the coursework in logic on offer within the Philosophy UFR. Most philosophy departments offer logic courses, but this usually means only one or two introductory courses. At Paris 1 Panthéon-Sorbonne, the Philosophy UFR proceeds differently. A basic training in logic is mandatory for first-year students. Further optional courses are offered in the second and third years in the Logique et culture scientifique program, and in the Logique et philosophie des sciences Master’s program in Philosophy. Students can then pursue their training with a dissertation in logic, in philosophy of logic, or on another topic drawing on what they have learned in their undergraduate or graduate studies. The advanced training in logic at Paris 1 Panthéon-Sorbonne is rather unique, but it is not a new endeavor. It dates back to the 1970s: at the time, training in contemporary logic — thus in formal and mathematical logic — was developed under the leadership of Roger Martin and Jacques Bouveresse for philosophy students, and for humanities and social science students more generally.

Why teach logic to philosophy students? The answer should be obvious insofar as, traditionally, logic has in one way or another been part of philosophy for many philosophers since Antiquity. The training in logic we offer at the Philosophy UFR is in line with this tradition. Now, of course, the word “logic” does not have the same meaning for all authors within the philosophical tradition. Besides, the content and broad orientations of logic have changed considerably, and particularly so in recent times. Over the past 50 years or so, what one receives as a training in logic has become different, because logic itself and the ways to use it have changed, and that is also true of the connections between logic, computer science and philosophy. It seems obvious that philosophy students need some minimal knowledge of what formal languages, programs, and abstract machines (like Turing machines) are, just as they should know what computer code is and what the connection between reasoning and calculation is, and be aware of our ability to automate intellectual operations, of the tools used to perform such automation, of how ubiquitous algorithms are around us, and so on.
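To make the notion of an abstract machine mentioned here concrete, the kind of Turing machine students encounter can be sketched in a few lines of Python. Everything in this sketch — the sparse-tape representation, the state names, and the binary-increment transition table — is an illustrative assumption, not material from the course itself.

```python
# A minimal one-tape Turing machine simulator (illustrative sketch).

def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=1000):
    """Run the machine until it reaches the 'halt' state.

    transitions: dict mapping (state, symbol) -> (new_state, write_symbol, move)
                 where move is -1 (left) or +1 (right).
    """
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += move
    cells = [tape[i] for i in sorted(tape)]
    return "".join(cells).strip(blank)

# Hypothetical transition table: add 1 to a binary number (head starts at the left).
increment = {
    ("start", "0"): ("start", "0", +1),   # scan right to the end of the input
    ("start", "1"): ("start", "1", +1),
    ("start", "_"): ("carry", "_", -1),   # step back onto the last digit
    ("carry", "1"): ("carry", "0", -1),   # 1 + carry = 0, carry propagates left
    ("carry", "0"): ("halt",  "1", -1),   # 0 + carry = 1, done
    ("carry", "_"): ("halt",  "1", -1),   # overflow: write a new leading 1
}

print(run_turing_machine(increment, "1011"))  # prints "1100" (11 + 1 = 12)
```

The point of such an exercise is less the arithmetic than seeing that a handful of local rules, applied blindly to symbols on a tape, suffices to carry out what we would ordinarily call a calculation.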

The logic and computer science course offered to second-year students aims to introduce them to these issues through basic training and practical exercises in programming, formal languages, algorithms and abstract machines. Students are presented both with basic training in these areas and techniques, and with philosophical reflections on the consequences of their ubiquity, drawing on texts which are read, reviewed and then commented on. Those with a particular interest in these topics can study them more deeply by pursuing their training at the undergraduate and graduate level within the Lophisc program. Some think that logic is at least as useful in philosophy as a propaedeutic instrument encouraging the development of rigorous and precise thinking as it is as a specific tool explicitly tasked with solving philosophical problems. This is certainly true of programming as well, despite the objections of some who favor an approach to philosophy which is more literary than scientific.

A pioneering dissertation on artificial intelligence

In 1994, you defended your doctoral dissertation on the relation between thinking and machines; you later defended your habilitation on Carnap’s project for a logic of science and its historical context. Could you tell us more about your work in these areas and about what made you interested in these topics in the first place?

Pierre Wagner: As before, let me first describe my training and professional trajectory in order to get to the core of the question.

When I began studying philosophy in the 1980s, I was interested both in literature and science. This was the point in time when personal computers were starting to become more widely available, and word processing programs were starting to replace typewriters. It was easy to understand that a change — or, to put it more pointedly, a revolution — was underway, and that philosophers would assuredly have something to say about it. In 1984-1985, prior to my dissertation, I wrote a Master’s thesis on Leibnizian concepts used in data processing under Michel Serres’ supervision. This work required me to read the works of Leibniz in French, German and Latin, but also to learn the basics of programming (in Pascal, Lisp and Prolog), which I did through undergraduate-level courses at Paris VII and at the École normale supérieure. I spent the year 1985 at Stanford University, and I remember how surprised I was when I saw dozens of Macintosh computers available for students to work on in the university libraries and study rooms. The catalogs were digitized as well — while in the libraries of Paris universities, catalog rooms were still filled with filing cabinets where items were to be found on index cards stored in long, thin drawers. At the École normale supérieure, I was a literature student, but I was also acquainted with mathematicians working in logic and with computer science aficionados who spent their evenings in basements, gathered around mainframes cooled by loud fans. I was one of very few literature students to use word processing software to print out my Master’s thesis in philosophy rather than type it on a typewriter.
Afterwards, I began my training in analytical philosophy with the thesis I prepared for the DEA (a graduate Diplôme d’études approfondies, now replaced by the M2, the second-year Master’s diploma) under the supervision of Jacques Bouveresse, and through my discovery of another aspect of logic — the one we find in philosophers who use it to analyze language. During my stay at Stanford, located near Silicon Valley, I attended courses on logic for artificial intelligence taught by Michael Genesereth (the co-author, with Nils J. Nilsson, of a seminal book, Logical Foundations of Artificial Intelligence). There I also met John McCarthy, one of the founders of artificial intelligence. This led, years later, to my doctoral research on the connection between thinking and machines, “Machine et pensée: l’importance philosophique de l’informatique et de l’intelligence artificielle”. When I came back to France — and returned, as well, to the French academic atmosphere — I jumped through some perilous intellectual hoops as I worked on the agrégation. Meanwhile I started to study logic, first obtaining undergraduate and graduate diplomas from Paris 1, then a DEA in Logic and foundations of computer science from Paris VII, studying under professors like René Cori and Michel Parigot.

I was extremely lucky that Jean Mosconi (the author of a masterful and erudite thèse d’État, La constitution de la théorie des automates, which was unfortunately never published) was able to supervise my doctoral dissertation. His deep knowledge of the field was most precious to me. Here, I must point out that the conditions in which doctoral students worked on their dissertations were very different then, in the early 1990s, from what they are now. I worked on my dissertation while teaching philosophy in high school, which meant working through evenings and weekends (and during summer vacations when family obligations allowed it), all the while with no contact whatsoever with other doctoral students or any research group.

Beyond biographical context, the overall purpose of my dissertation was to evaluate the ambitions and means of artificial intelligence at the time — which, of course, is only distantly related to artificial intelligence as it is developed now. The title (and topic) of the dissertation — “Machines and thought: the philosophical significance of computer science and artificial intelligence” — is meant to convey doubts or questions about what could be relevant to philosophy in the technological revolution. Is it truly artificial intelligence itself, as it was developed then? Or is it computer science generally (i.e. ubiquitous algorithms and programs, whether or not they explicitly aim to compete with human intelligence, or with the various manifestations of what we agree to gather under the highly problematic umbrella term of “intelligence”)? In the dissertation, I distinguished between metaphysical, epistemological and logical ways of tackling this issue. This approach to the issue itself gives a sense of what is specifically problematic about it. Which perspective can help us understand the meaning of such technological changes? Is it a metaphysical perspective about thought and a mechanistic philosophy? At the time, Searle’s paper on the Chinese room was making waves in philosophical circles. Should we rather envision the issue through the lens of psychology and cognitive science? Hubert Dreyfus’ texts on the capacities exclusive to the human mind, and on the problem of representing background knowledge, pointed in this direction. Or should we use logic? In this regard, we could have drawn on the philosophical consequences of Gödel’s incompleteness theorem, or on the relations between logic and computation, and between logic and computer science. And, of course, we could have used Turing’s work on the imitation game. This dissertation involved discussions which now date back 30 years.
At that moment, the very first ambitions of artificial intelligence, developed in the 1950s (ambitions for artificial intelligence to be a general kind of intelligence), were being replaced by what we called expert systems (specialized in particular fields like medicine, chemistry, etc.), which raised the question of the specificity of human cognitive capacities. What was at stake was thus whether artificial intelligence itself revealed interesting philosophical problems, or whether we should rather focus on computer science in general.

The expected continuation of my doctoral work would have been postdoctoral research in cognitive science, which was a fertile, quickly-developing field in the early 1990s. Research groups were formed with linguists, neurologists, philosophers, psychologists and computer scientists, hoping to unify research on cognition. I did not, however, follow this path, since I thought it would force me to depart from my original intellectual pursuit, which was properly philosophical. It was never my intention to become a scientist — in whatever field or practice that might be. Instead, I wanted to remain in the field of philosophy in its relation to science. This led me to choose a different topic for my habilitation: I pivoted to the history of philosophy of science, logical empiricism and the philosophy of Rudolf Carnap, and I presented the results of this work much later, in 2009.

The concept of mechanical thinking

Could you tell us more about mechanical thinking? Is mechanical thinking a kind of general thinking which could account for any human thought, or resemble it? Or is it a more specific kind of thinking, like the one involved in symbolization?

Pierre Wagner: This question concerns precisely the conclusions I arrived at in my doctoral research. Once they have defended their dissertation, researchers quite commonly try to publish the results of their research in book form. But it seemed to me that my dissertation was too exploratory in nature to be published as it was. I thus began working on a volume where I discussed the consequences of my conclusions, to expand and build on my doctoral work. This resulted in a book, “La Machine en logique”, published in 1998 in the Presses universitaires de France’s Science, histoire et société collection curated by Dominique Lecourt. It is in this book that I tried to clarify the notion of mechanical thinking. Philosophical perspectives on artificial intelligence (as we thought of it then) were unsatisfying to me, and it seemed obvious that a renewed perspective was both within reach and desirable. Indeed, a great many questions were being posed on the topic of artificial intelligence. Would machines one day be able to do all human intelligence can accomplish? If a machine behaved in such a way as to be indistinguishable from a human being, would that justify our calling it intelligent, or our saying that it could think? Can we use Gödel’s incompleteness theorems to show that some human capacities will always lie beyond machines’ reach? Does the functional structure of computing machines contain solutions to problems in philosophy of mind? What is the meaning of expressions like “machines think”? These were the questions discussed by those interested in artificial intelligence. (Let me quickly point out that the questions most popular today with philosophers working on artificial intelligence, i.e. on ethics, cognitive biases and explicability, are truly different from those.) And these questions were certainly legitimate.
Yet it seemed to me that there were more authentic and more interesting problems which they did not address — that the most difficult of the tasks at hand was to identify issues which did not merely recast in contemporary terms old questions which had been (or could have been) formulated before the first attempts at artificial intelligence. More specifically, when it came to the connection between logic and artificial intelligence, we were then discussing non-monotonic logical systems, problematic logical representations of common and background knowledge, or reasoning in situations of uncertainty. These were the issues discussed in Genesereth and Nilsson’s book on the logical foundations of AI. But to me these issues have to do with logical engineering (i.e. the design of formal systems tailored to a specific practical usage), and do not address the core of the problem. It was much more interesting and more significant to me to try to understand the connection between proofs and programs, or the connection between the logical operations which reduce terms and computation itself. Can we distinguish what belongs to calculation and what belongs to the proof itself in a mathematical demonstration? This is an entirely novel perspective on mechanical thinking and its problems. What is interesting to me is not our focus on human “intelligence” — which, again, is in itself a very problematic term — but rather interrogations on the capacities of machines which stem from abstract models of computation, like those we study in logic.
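The connection between term reduction and computation evoked here can be made concrete with a small sketch: a normal-order beta-reducer for untyped lambda-calculus terms. The tuple encoding of terms and the helper names are illustrative choices, and capture-avoiding substitution is simplified by assuming all bound variables have distinct names.

```python
# Lambda-calculus terms as nested tuples:
#   ("var", name) | ("lam", param, body) | ("app", function, argument)

def subst(term, name, value):
    """Substitute value for a free variable (assumes distinct variable names)."""
    kind = term[0]
    if kind == "var":
        return value if term[1] == name else term
    if kind == "lam":
        _, param, body = term
        return term if param == name else ("lam", param, subst(body, name, value))
    _, f, a = term  # application
    return ("app", subst(f, name, value), subst(a, name, value))

def reduce_once(term):
    """Perform one leftmost-outermost beta step; return None at normal form."""
    kind = term[0]
    if kind == "app":
        f, a = term[1], term[2]
        if f[0] == "lam":                      # a redex: (\x. body) a
            return subst(f[2], f[1], a)
        step = reduce_once(f)
        if step is not None:
            return ("app", step, a)
        step = reduce_once(a)
        if step is not None:
            return ("app", f, step)
    if kind == "lam":
        step = reduce_once(term[2])
        if step is not None:
            return ("lam", term[1], step)
    return None

def normalize(term):
    while (step := reduce_once(term)) is not None:
        term = step
    return term

# Under the Curry-Howard reading, \x. \y. x corresponds to a proof of
# A -> (B -> A); applying it to arguments and reducing is running that proof.
K = ("lam", "x", ("lam", "y", ("var", "x")))
applied = ("app", ("app", K, ("var", "a")), ("var", "b"))
print(normalize(applied))  # ('var', 'a')
```

Each call to reduce_once is a purely mechanical rewriting step, which is one way of seeing why the boundary between "proving" and "calculating" in a demonstration is hard to draw.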

When I used the term “mechanical thinking” in my 1998 book, I was thus referring to a set of abilities which machines could exhibit or do exhibit, and which depend on computation and information-processing capacities. I stress again that this way of addressing the philosophical issues raised by computer science and artificial intelligence was strikingly different from what was discussed in the literature around three decades ago, when I finished my dissertation. Yet most of my research followed another path during the next few years, as I focused on the history of philosophy of science and on the way in which logical empiricists used logic. And so I did not directly pursue research into this notion of mechanical thinking.

One of my doctoral students, Henri Salha, is doing promising and substantial work which introduces what appears to me to be yet another novel perspective on these issues, based on whether, and in what sense, programming is a form of knowledge. Since the dissertation is underway, I cannot, of course, discuss it in more detail. But to my mind, it will significantly renew the general conception of mechanical thinking, and give it a distinct and more specific meaning than what I had in mind in the 1990s.

The work of Turing and Wittgenstein on machines

From a philosophical point of view, what sense can we make of Turing’s work on AI? You wrote about Wittgenstein’s work on machines and thinking, but this work is usually less well known to the general public than Turing’s. Could you also tell us more about Wittgenstein’s ideas on this topic, and why they matter to the future of AI as well?

Pierre Wagner: Turing died in 1954, and we usually hold the general program for artificial intelligence to have been launched at a 1956 conference at Dartmouth College in New Hampshire. This means that whenever we want to discuss Turing’s work on AI, it must be clear that we are discussing texts of Turing’s which were later used to think about issues raised by the first developments of artificial intelligence. Turing’s paper “Computing Machinery and Intelligence” is nonetheless famous among philosophers, and it is a crucial part of the classical topics and problems I covered in my dissertation. We can, however, ask whether this paper is still relevant for artificial intelligence as we think of it today — which is quite different from 20th-century AI. I do think we can answer “yes” to this question. But we still need to determine what issues this text brings up. Turing’s paper is in fact much more concerned with human intelligence than with the artificial intelligence produced by computer technologies. And, of course, Turing is also known for his work on computability theory and on abstract machine models. We can see this work as related to artificial intelligence as well, although in a vastly different, more indirect way. We can establish that indirect connection through a note of Wittgenstein’s in his Remarks on the Philosophy of Psychology. In it, Wittgenstein writes (in the Anscombe translation): “Turing’s ‘Machines’. These machines are humans who calculate. And one might express what he says also in the form of games.” The original German reads: “Diese Maschinen sind ja die Menschen, welche kalkulieren.” The remark can be puzzling, and a published French translation even interpreted it in the opposite sense, whereas the German says: “ces machines sont des hommes qui calculent”, that is, “these machines are humans who calculate”.

I wrote a paper on this very issue in 2005: “Wittgenstein et les machines de Turing”, in the Revue de métaphysique et de morale. My point in it was precisely to clarify this note of Wittgenstein’s about Turing’s machines. Going beyond this particular point, however, the relations between Turing and Wittgenstein turn out to be fairly complex, as some recent research has revealed. Juliet Floyd, a world-renowned Wittgenstein scholar, is the authority on this matter and has written several papers on Wittgenstein and Turing.

Moving from the relation between Wittgenstein and Turing to the more general one between Wittgenstein and artificial intelligence, I will mention an older, remarkable paper which we must discuss when touching on this topic. “Le fantôme dans la machine” was written by Jacques Bouveresse, and published first partially in 1970, then in full in 1971 in La parole malheureuse (Paris: Éditions de Minuit, 1971), a 470-page book; the chapter is almost a book in itself on the thinking of machines, approached from the standpoint of linguistic analysis. It is thus another way of inquiring philosophically into artificial intelligence, based on an analysis of Wittgenstein’s texts on thinking in machines. Drawing on the abundant literature on thinking machines, Bouveresse investigates the kind of meaning one can assign to a claim like “machines think”. Such a claim bears all the usual signs which would normally allow us to say that it is true or false — but some have taken it to be semantically improper. What is at stake here is the logical type of a claim like “machines think”, as well as the “grammar” of the verb “to think”. Of what objects can we legitimately ask whether they think?

Connecting logic and artificial intelligence

What connects logic with artificial intelligence in the days of ChatGPT?

Pierre Wagner: In trying to answer this question, we run into two big problems. First, we have to say what logic is. And next, we have to say what artificial intelligence is. There is likely no simple and enlightening solution to these problems. This points to the fact that the connections between logic and artificial intelligence are diverse and numerous. But most importantly, they vary in depth. During the development of artificial intelligence in the 20th century, the starting point was to create a general problem-solver — as though we could write a program with a general scope which could then be used to solve particular problems. This initial project faced many roadblocks: the role of common sense and background knowledge in problem-solving, the formal representation of knowledge, the way reasoning works when situations are uncertain or when knowledge is imperfect, etc. And to push through these difficulties, we called on logical tools: specific methods for formal representation, non-monotonic logical systems, resolution methods and strategies, etc. If we want to say that these tools all belong to “logic”, then we will be able to say that they are of use to some conception of artificial intelligence. But, of course, this is not at all how we develop artificial intelligence tools based on specific algorithms applied to massive datasets, like the ones we have today. Where is logic in these tools? This may not be a good question to ask at all. We would rather want to ask about the formal methods these recent tools use. The question then becomes whether some formal methods are distinctly and properly “logical”. And to answer such a question, we will likely need to reassess what we mean by “logic”, and to perform some conceptual contortions as well.
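As one concrete instance of the resolution methods mentioned among these logical tools, a minimal propositional refutation prover can be sketched. The clause encoding (frozensets of signed literals) and all names here are illustrative assumptions, not a reference to any particular system.

```python
# Propositional resolution (illustrative sketch).
# A clause is a frozenset of literals; a literal is a (name, polarity) pair.

def resolve(c1, c2):
    """Return the set of resolvents of two clauses."""
    resolvents = set()
    for (name, pol) in c1:
        if (name, not pol) in c2:  # complementary pair found
            resolvent = (c1 - {(name, pol)}) | (c2 - {(name, not pol)})
            resolvents.add(frozenset(resolvent))
    return resolvents

def is_unsatisfiable(clauses, max_rounds=100):
    """Saturate the clause set under resolution; deriving the empty
    clause means the set is unsatisfiable (a refutation is found)."""
    clauses = set(map(frozenset, clauses))
    for _ in range(max_rounds):
        new = set()
        for a in clauses:
            for b in clauses:
                if a == b:
                    continue
                for r in resolve(a, b):
                    if not r:
                        return True  # derived the empty clause
                    new.add(r)
        if new <= clauses:
            return False  # saturated without contradiction
        clauses |= new
    return False

# Example: (p or q), (not p or q), (not q) is jointly contradictory.
kb = [{("p", True), ("q", True)}, {("p", False), ("q", True)}, {("q", False)}]
print(is_unsatisfiable(kb))  # True
```

The brute-force saturation loop is deliberately naive; the "strategies" mentioned above are precisely the refinements (ordering, subsumption, set-of-support) that make such search tractable in practice.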
This bears directly on the first question of this interview, concerning training in computer science within a logic curriculum — it is a question our pedagogical team is asking: what do we need to teach students when we train them in logic?