Over 20 jobs advertised this season at PhilJobs: Jobs for Philosophers list philosophy related to artificial intelligence (AI) among the desired areas of specialization or competence.
There are questions about AI relevant to many subfields of philosophy—ethics, philosophy of science, philosophy of technology, philosophy of mind, philosophy of art, etc. The topic is a hot one in the broader culture owing to the development and popularization of large language models (LLMs) like ChatGPT and other machine-learning-based products and services. Administrators want departments to ride the topicality of the subject in pursuit of enrollments and research dollars. And private industry and government agencies are increasing their funding for AI-related research.
So it’s no surprise that we’d see an increase in the number of AI-related philosophy positions to be filled. But are there enough people specializing in this area to take up these positions?
Additionally, it seems like more and more is being written on questions concerning philosophy and AI across a range of subfields. So we might ask: are there currently enough experts in these areas for the research being produced to be adequately vetted and peer-reviewed?
These questions were prompted by a reader who, in an email, expressed curiosity as to whether, and if so when and for what areas, demand for philosophical expertise has outstripped supply.
I’m wondering whether the apparent AI hiring craze in philosophy over the last few years has any precedent in the discipline. It seems like very few people have written dissertations on AI, and yet there have seemingly been more jobs advertised over the last few years for folks working in AI than for most other sub-areas. Are people with not-very-extensive training in AI ending up in these positions? That seems highly unusual. Relatedly, a friend of mine with no (direct or even indirect) experience in AI is getting requests from top journals to review AI submissions. Is work in AI really being reviewed by non-experts? That also seems highly unusual. Has anything like this happened over the last 70+ years in academic philosophy? What should we as a discipline think about the current situation either way?
One way to approach the precedent question would be to look through some data (see, for example, this post on areas of specialization from last year’s job market). Another would be to recall which areas of specialization it has been especially difficult to hire in, and when.
Discussion—on both the demand for AI and philosophy experts and the question of precedents—is welcome.