Day 1, June 23, 2020
09:30AM - 02:00PM
HSS Conference Room
HOPOS Steering Committee Meeting
03:00PM - 04:45PM
Mentor/Mentee Meetings
05:00PM - 06:30PM
Hive Roof Deck
Welcome Reception
Day 2, June 24, 2020
09:00AM - 10:40AM
SR-4
Corpuscles and Forces in Aristotle and Descartes
Track : Kant and Before
Moderators
Barnaby Hutchins
Aristotle on Corpuscular Structures
Presented by :
Tiberiu Popa, Butler University, Indianapolis, USA
The prevailing view of Aristotle's treatment of material properties is that they hinge solely on uniform mixtures (discussed e.g. in Generation and Corruption I.10) and implicitly on the ratios between the ingredients that exist in those mixtures in potentiality (dunamei). Relatively little attention has been given to the microstructures that he claims are responsible for a host of pervasive properties, from fragility to viscosity. This attitude differs strikingly from a number of influential early modern alchemical works, which place such structural features (e.g. poroi, or very fine channels running through solid bodies) at the center of Aristotle's account of the capacities (dunameis) of uniform bodies. The problem with some of those early modern interpretations, as William Newman, among others, has pointed out, is that they make Aristotle sound more Democritean than they should, while making Democritus' own theory look decidedly Aristotelian. To capture properly the significance of Aristotle's appeal to microstructures in his causal explanations of material properties (articulated in his 'chemistry' as well as in his biological treatises), one needs to resist both temptations: both the view that his theory of mixis is sufficient to account for the enormous variety of material qualities perceptible to us, and an overemphasis on the possible common points between his natural philosophy and early atomism. More specifically, I argue for the reconsideration of two aspects that can shed light on Aristotle's interest in imperceptibly small structures: (1) The scope of his investigation goes far beyond his mention of poroi and of ogkoi (the latter referring to invisibly tiny liquid particles that can penetrate solid bodies) and beyond a few passages in his Meteorology IV.
For instance, he confidently invokes chain-like structures responsible for what we perceive as viscosity; the uniform or uneven distribution of moisture in various metals (explaining meltability or other qualities); conglomerations of minuscule bubbles of air which can cause e.g. the whitening of oil; the actual rather than potential presence of solid particles in blood and milk etc. (2) The final section of my paper will consider the epistemic implications of Aristotle's approach to such microstructural characteristics. While some of the relevant passages – in his Meteorology, but also in Generation of Animals etc. – can be easily reformulated in syllogistic form and occasionally hint at experimental procedures, other passages rely on analogies or on an intuitive grasp of the sort examined towards the end of the Posterior Analytics. His apparent lack of hesitation, however, should not necessarily be equated with an implicit claim of having secured knowledge proper about the imperceptible. Instead, Aristotle takes his explanations of the behavior of uniform bodies to be plausible (and further improvable), as a number of formulations seem to indicate, rather than amounting to knowledge, strictly speaking. This marks a notable departure from Democritus' earlier claim that we can attain genuine knowledge about microstructural features.  
Descartes and the Problem of Force
Presented by :
Stavros Ioannidis, National and Kapodistrian University of Athens, Greece
The aim of this paper is to defend a realist solution to what is known as the 'problem of force' in Cartesian physics, i.e. the problem of how to understand the ontological status of forces in Descartes's Principia. Three main readings have been offered in the recent literature concerning this problem. According to fictionalist accounts (Hatfield 1979, Garber 1993, Ott 2009), bodies do not literally have forces; Descartes uses this notion as a shorthand to refer to the behaviour of bodies due to the action of God. Such a view goes hand in hand with an occasionalist interpretation of body-body causation. According to non-fictionalist accounts, forces have objective existence. These views can in turn be divided into double-aspect views (Gueroult 1980, Gabbey 1980, Des Chene 1996), which view forces as an aspect of God's action in nature, and full-blown realist views (Della Rocca 1999, Schmaltz 2008). Only the latter view forces as real entities that bodies possess, and that can ground causal relations in the world. In the paper, I offer a new defence of the realist view, by analysing the features of the concept of force in the Principia, and by offering a unifying reading of the role of force and similar notions in the demonstrations of the three laws of nature, as well as of Descartes's remarks concerning forces and powers in several key passages in his correspondence. The main claim of the paper is that Descartes has a concept of force in the Principia that is structurally similar to the Aristotelian notion of power. Thus, in contrast to an occasionalist reading of Cartesian physics, body-body causation in the Principia is grounded in forces. In the first part of the paper I defend the thesis of structural similarity between Aristotelian powers and Cartesian forces. Powers in Aristotle and in medieval thinking are principles of change, act by contact, are active or passive, can be potential or actual, are relative (ad aliquid), and are accidents.
I show that Cartesian forces have similar features: they are causes of changes in motion, act by contact, are active or passive, can be potential or actual, are relative, and are modes of bodies. Next, I explain the connections among the Cartesian notions of force, tendency, striving and inclination, focusing mainly on their role in the demonstrations of the three laws of nature in the Principia. In the last part of the paper I defend the thesis of structural similarity against two main criticisms: first, that Descartes cannot be a realist about forces, given his arguments against substantial forms and real qualities; second, that even if forces exist, they are grounded in God. I conclude that talk of forces in the Principia must be understood literally, that forces are modes of bodies grounded in motion, and that they in turn ground body-body causation.
Mechanism and motion: Digby contra Descartes
Presented by :
Laura Georgescu, Rijksuniversiteit Groningen
A lesser-known critic of Cartesian philosophy is Kenelm Digby. In his major work, the Two Treatises, published in the same year (1644) as Descartes' Principles of Philosophy, Digby marshals conceptual and methodological arguments against Descartes's version of mechanical philosophy, broadly understood (for my purposes here) as a natural philosophical system whose goal is to account for all natural phenomena in terms of matter and motion. If one accepts this broad framework, Digby too can be qualified as a mechanical philosopher, as indeed he was taken to be in his own time (e.g. by Boyle, Charleton, Leibniz). Of course, different accounts of what counts as matter and what as motion result in different mechanical systems, and the Digbean mechanical philosophy is radically different from the Cartesian. In fact, Digby articulates his alternative system precisely in relation to, and in opposition to, the available alternatives, of which Descartes' was probably the most systematic at the time, and certainly the most widely known. Thus, it is not surprising that exposing the roadblocks the Cartesian system runs into is one of the methodological strategies running throughout Digby's Two Treatises. To see how this works, in this paper I discuss one of the arguments Digby uses against Cartesian optics – more specifically, against the way in which Descartes formulates his account of reflection and refraction in La Dioptrique (1637). I show three related things. First, that Digby's reductio – or, at least, his attempt at offering a reductio – of Descartes's theory of refraction works by criticising Descartes' conceptualisation and use of one of his exemplars, namely the striking of a tennis ball as a stand-in for how light hits surfaces. We see that in discussing this example Digby seems to ignore the Cartesian distinction between the tendency to motion and the determination of motion of a body.
But, as I show, this is a principled decision, not just a misunderstanding on Digby's part. Second, in making sense of Digby's critique, we begin to see that Descartes and Digby have different presuppositions about how idealisations work, and when they are warranted. Third, I argue that what is at stake for Digby is not so much challenging Descartes' optics itself as challenging the Cartesian account of motion, with the goal of replacing it with an account that defines motion purely in terms of division. It is this reconceptualisation of motion which, for Digby, makes the Cartesian conceptual distinction between determination and tendency to motion redundant – motion, for Digby, being prior to, rather than secondary to, bodies themselves.
09:00AM - 10:40AM
SR-6
Philosophical reflections with and on mathematics in China (Symposium)
Track : Kant and Before
Moderators
Malcolm Forster, Fudan University
The historiography of mathematics in China has emphasized the mathematical achievements to which Chinese writings attest more than it has inquired into the philosophical reflections that practitioners developed in the context of their mathematical activity. However, there is ample evidence that some practitioners of mathematics did not limit themselves to obtaining results. In some cases, their works attest to philosophical reflections on mathematics. In other cases, their works show that these practitioners pondered general philosophical issues, drawing on their mathematical work. The aim of this symposium is to shed light on the theoretical developments to which these mathematical writings attest. A first contribution will focus on the earliest known mathematical documents and explore the reflections on the concept of "category" that they contain. The reflections analyzed cover a period from the 2nd century BCE to the 3rd century CE. They will allow us to examine whether we can document an advancement of philosophical reflection in this respect over this time period. Moreover, we will examine how these reflections can be situated with respect to general philosophical reflections on the same concept, to which several traditions in China attest. A second contribution will concentrate on how practitioners reflected on the "origin of mathematics," drawing on both textual records and material tools of computation. This contribution will raise the general issue of how tools of computation, far from being merely a means to yield results, were also a basis on which theoretical reflection drew. Again, the documents brought into play will cover a long time period, ranging from the 1st to the 13th century. The third contribution will also explore notions of "origin." However, it will mainly focus on a 17th-century scholar, Mei Wending 梅文鼎, who developed his theoretical reflections on mathematics after mathematical books from Europe had been translated into Chinese.
The notion of "origin" Mei introduces relates to a conception of the organization of mathematical knowledge. In which respects does this approach inherit older conceptions from Chinese writings, and in which respects does it attest to changes in the approach to the organization of mathematical knowledge under the influence of newly introduced mathematical writings? How does the understanding of "origin" enable Mei to situate the new knowledge with respect to knowledge available in Chinese writings of the past, and even to integrate the two? These will be some of the issues at stake. We hope that this symposium will inspire the development of new directions of research in the history of mathematics in China.
“Reason/Principle” (li) and “root” (gen): Generality, from a Diagram, of the Knowledge about the Right-angled Triangle in Ancient China (gou-gu) as Clarifying Mei Wending’s (1633-1721) New Practices
Presented by :
Shuyuan Pan, Institute for the History of Natural Sciences
Although during the Ming Dynasty (1368-1644) mainstream mathematical practices emphasized popularizing knowledge and applying it, we can also identify a trend among scholarly mathematicians to seek the reason why known things are as they are ("suoyi ran" 所以然). When, in the early 17th century, mathematical works such as Euclid's Elements were introduced from Europe into China, this trend was therefore strengthened. Mei Wending 梅文鼎 (1633-1721), a famous specialist in astronomical and mathematical studies in the early Qing period, is one of the most representative scholars in this trend. He always compared "the Chinese method" and "the Western method", and finally convinced himself that they were identical with respect to their reason and principle, that is, in the Chinese language of the time, their li 理. In this paper, we will focus on Mei Wending's reflections on the right-angled triangle (gou-gu 句股), a branch of mathematics deriving from the organization of knowledge embodied by Mathematical Procedures in Nine Chapters (Jiuzhang suanshu 九章算術), which persisted in medieval China. Practitioners introduced the base, height and hypotenuse of a right-angled triangle, and the five sums (he 和) and five differences (jiao 较) deriving from them, in order to solve a triangle when only two of those items were known (sometimes the area of the triangle was also added). Previously, Mei Wending was regarded as the first scholar to put forward four new propositions. We will show that the four were in fact already known to Mei's predecessors in the Yuan and Ming dynasties, and we will further investigate his practices in the explanations. There, he emphasized that the reason and principle (li) of the methods for constructing those propositions derived from "ancient diagrams". One of the "ancient diagrams", similar to those Mei drew and used for his explanation, was probably the one showing a smaller square in a corner of a larger square.
It was originally used to show that the square of the hypotenuse can be separated into the square of the base (or the height) and a gnomon whose area is the square of the height (or the base). However, Mei developed a more general understanding from this diagram: given any two squares whose sides are unequal, the difference of their areas is the product of the sum and the difference of the lengths of their two sides. Thanks to this key point, which he called the root (gen 根) of sum and difference, Mei could establish and complete his reasoning about the four propositions. Through this study, we will gain a better understanding of how Chinese scholars shaped new practices out of classical knowledge from the perspective of generality. This might further shed light on why the theory of "the Chinese origins of Western learning" was produced in this academic context.
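In modern algebraic notation (anachronistic, of course, to Mei's diagrammatic reasoning, and with the symbols a, b, c, h chosen here purely for illustration), the identity generalized from the "ancient diagram" can be sketched as follows:

```latex
% For two squares with unequal sides a > b, the gnomon left after
% removing the smaller square from a corner of the larger one has area
%   a^2 - b^2 = (a + b)(a - b).
% Applied to a right triangle with base (gou) b, height (gu) h and
% hypotenuse (xian) c, where c^2 = b^2 + h^2, the square on the
% hypotenuse decomposes into the square on the base plus a gnomon:
\[
  a^2 - b^2 = (a + b)(a - b),
  \qquad
  c^2 - b^2 = (c + b)(c - b) = h^2 .
\]
```

On this rendering, the "root (gen) of sum and difference" amounts to the observation that knowing the sum and the difference of two sides is equivalent to knowing the difference of their squares.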
The Interplay between Counting Rods and Procedural Texts, and its Relation to Approaches to the “Origin of Mathematics” in Ancient China
Presented by :
Yiwen Zhu, Sun Yat-sen University
Historical studies of mathematics in ancient China have shown that before the late 16th century, in specialists' milieus, practitioners used counting rods (suan chou 筭筹) to execute mathematical procedures. However, how these practitioners used counting rods is still not completely clear, due to the lack of direct evidence. My research has led me to put forward the hypothesis that, being the main mathematical tool, counting rods were not simply used to implement procedures recorded in writings. This hypothesis is supported by the following remark: when practitioners used other instruments (e.g. the abacus) to carry out the "same" procedure, the procedure was written in different ways. This suggests that, in addition to being used to execute a procedure given in words, the operation of counting rods had a function in writing the procedure. Therefore, material calculating instruments and the mathematical knowledge recorded in texts might have had a closer relationship than we had realized. From this perspective, my presentation will first investigate the interplay between counting rods and procedural texts in the earliest and most important Chinese mathematical document handed down through the written tradition: The Nine Chapters on Mathematical Procedures (Jiuzhang suanshu 九章筭術). Next, I will discuss the historical evolution of this interplay, drawing on mathematical documents from the Tang (618-907) and Song (960-1279) dynasties. Finally, I will argue that the interplay between calculating tools and procedural texts in fact had a bearing on how, in different periods, practitioners inquired into what for them was the "origin of mathematics". In brief, the aim of this paper is to study the role of mathematical instruments with respect to how practitioners understood and wrote down mathematics in ancient China, as well as how they reflected upon mathematics.
Thinking about “categories lei 類” in Chinese mathematical sources from the 2nd century BCE to the 3rd century CE
Presented by :
Karine Chemla, SPHERE (UMR 7219), CNRS & Université de Paris
Two mathematical books from early imperial China were handed down through the written tradition. The earliest one was titled The Gnomon of the Zhou (周髀), and new arguments have recently been given supporting the view that the book was completed in the 1st century CE. In particular, I have argued that the theoretical development about mathematical activity that this book contains, and that is the only known document of its type from early imperial China, belonged to the most recent layers composing the book. This suggests that this theoretical development was written in the same decades as those during which the second book, The Nine Chapters on Mathematical Procedures (九章算術, hereafter The Nine Chapters), was completed, that is, arguably, the 1st century CE. The theoretical development in The Gnomon of the Zhou foregrounds the concept of "category" (lei 類), and the practice with "categories," as central to an ideal mathematical practice. This concept does not occur in The Nine Chapters. However, the third-century commentator whose commentary was handed down with The Nine Chapters, Liu Hui 劉徽, also gives prominence to the concept of "category" in his interpretation of the canon. Similarly, the third-century commentator on The Gnomon of the Zhou, Zhao Shuang 趙爽, pushes forward the theoretical reflection on "categories" in mathematics. This concept also occurs in a mathematical manuscript recently found in a tomb sealed in or shortly after 186 BCE, which is arguably its earliest known occurrence in relation to mathematics. Interestingly, its use there fits with some of Liu Hui's uses in his commentary on The Nine Chapters. Mathematical practitioners thus seem to have developed a long-term reflection on the concept within their practice of mathematics.
Finally, this concept is central to several philosophical works in China, like the Mohist Canon and the Grand Commentary on the Yijing, which are both quoted by commentators on these mathematical canons, notably in relation to the concept of "category." The questions my talk will address are the following: Does the theorizing about "categories," to which the theoretical development in The Gnomon of the Zhou attests, fit with Liu Hui's interpretation of how "categories" are at play in The Nine Chapters? Can we perceive a history of the reflection on "categories" within mathematics? Does this theorizing about "categories" enable us to better situate the reflections developed by practitioners of mathematics with respect to philosophical traditions? Finally, did the reflections about "categories" that were carried out within mathematics have any impact on any general philosophical approach to this issue? In closing, let me insist that there have been many publications on the notion of "category" and its use in Chinese philosophical writings. None of them has taken into account the reflections on "categories" that mathematical texts offer. I hope that studies like the one I will present can serve to integrate scientific writings in Chinese into the general discussion on the history of philosophy in China.
09:00AM - 10:40AM
SR-7
The Mental and The Physical
Track : After Kant
Moderators
Scott Edgar, Saint Mary's University
Russell’s Pragmatism? On His Representationalism about Consciousness
Presented by :
Alexander Klein, McMaster University
Bertrand Russell published what are easily among the most influential criticisms of pragmatism ever (Russell 1908/1966, 1909/1966). Targeting William James, Russell argued that an idea's truth is not a matter of its utility, but of its correspondence with reality. Yet about a decade after his major attacks, Russell had made a major shift towards James. The shift culminated in Russell's 1921 book, The Analysis of Mind. His transformation didn't concern truth, though; it had to do with the metaphysics of perception. And some recent scholarship has challenged the received view of Russell as an arch anti-pragmatist. For Russell didn't just adopt neutral monism (a move long regarded as inspired by James; see Russell 1959, 13). Russell also shifted towards other philosophical commitments that are tightly connected with pragmatism. For instance, Russell warms to a behaviorist-style account of belief as that upon which we're prepared to act (Russell 1921, lec. 12). This is the account of belief of which Peirce had called pragmatism "scarce more than a corollary" (Peirce 1931-1958, 5.12). Russell also now insists that linguistic meaning must be derived from linguistic usage ("the use of the word comes first," he says; Russell 1921, 165). And Russell would even claim that for a belief to constitute knowledge, it must not only be accurate, but also display "appropriateness, i.e. suitability for realizing one's purpose" (Russell 1921, 261, my italics; for discussions of these passages, see Misak 2018, Misak 2016, Levine 2018). Indeed, these striking changes help bring into focus Frank Ramsey's otherwise incredible (1927) statement: "My pragmatism is derived from Mr. Russell" (quoted at Misak 2016, 173; my italics). But the new scholarship on Russell's turn towards pragmatism raises an important question: why did he maintain his lifelong opposition to the pragmatist theory of truth?
For he continued through the end of his career to hold that truth is a matter of correspondence, not utility (e.g., Russell 1948/2009, 139). My paper addresses this question by examining some deeper aspects of Russell's philosophy of mind, in particular his account of consciousness in his most James-inspired work, The Analysis of Mind. That work of course takes off from metaphysical commitments Russell shares with James. But the theory of consciousness Russell builds atop that shared metaphysic is, I argue, a pioneering version of what we know today as representationalism. In contrast, James developed an approach that looks more like what we now call enactivism, for James treats the mind as an adapted mediator between an organism and its dynamic environment. Russell's commitment to understanding consciousness (and other mental phenomena) in terms of worldly representation helps explain why he would maintain that truth is a matter of correspondence between said representations and the world. So while Russell did drift towards James's mature metaphysics, and even came to appreciate some absolutely central aspects of pragmatism, the two old sparring partners nevertheless offered importantly different conceptions of the mind and, ultimately, related but distinct visions for how to make philosophy more "scientific."
The evolution of Ayer’s views on the mental-physical relation
Presented by :
Gergely Ambrus, Eötvös University (ELTE), Institute of Philosophy
In this presentation I shall discuss the development of Ayer's views on the mind-body relation, from his early Viennese-inspired logical positivism towards a more refined but still consistent empiricism, which presents a unique path in the progression towards the naturalistic metaphysics that became dominant in late 20th-century analytic philosophy. I shall arrange my discussion around three points. First: Ayer's views on the relation between mind and matter were framed by his views on whether experience and phenomenal language are logically private. Since Ayer changed his views on these issues several times, I will track the major changes in order to make clear their relation to his views on the mental-physical relation. Second: Ayer, beginning as an ardent logical positivist, held that the traditional metaphysical mind-body problem is meaningless. According to the phenomenalist view he proposed in Language, Truth and Logic (LTL), the only meaningful question concerning the relation of mind to matter is whether a particular (neutral) experience is a constituent of a 'physical' or a 'mental' logical construction (where this distinction was based on the relations of experiences to each other, and was not considered to express an ontological difference). Later, however, Ayer gave up phenomenalism about the physical world (beginning with "Phenomenalism", 1947) and worked out a 'sophisticated realist' position (The Central Questions of Philosophy, 1973), while in parallel arriving at the view that a public, intersubjectively intelligible phenomenal language is possible, and moreover that this possibility can be reconciled with experiences being logically private (The Problem of Knowledge, 1956).
These moves opened up the 'logical space' for considering some traditional approaches to the nature of the mental-physical relation intelligible, such as materialism or dualism, in contrast with his early views, which judged them nonsensical. Third: although Ayer did not explicitly formulate his later views on the nature of the psycho-physical relation, Feigl (in The "Mental" and the "Physical", 1958/1967, p. 60) interpreted him as coming close to critical realism (a view prevalent in turn-of-the-century German 'scientific philosophy', which, for our purposes, can be taken as similar to Russellian monism). Further, Ayer did criticize some contemporary views on the nature of the mind-body relation: he rejected not only Cartesian substance-dualism, but also Strawson's (1959) linguistic dual-aspect theory, Smart's (1959, 1962) and Armstrong's (1968) reductive materialism, and Kripke's (1972) dualism about qualia; moreover, he also argued against Davidson's (1970) anomalous monism. I shall argue, along with Feigl's interpretation, that it is likely that he remained 'close to critical realism', as such views try to do justice both to the reality and to the epistemological primacy of experience, a view Ayer held from the beginning and never gave up, while also accepting realism about the physical world, a view he arrived at later in his career.
What did a Monad mean in O. F. Müller?
Presented by :
Caroline Angleraux, Università degli Studi di Padova & Paris 1 Panthéon-Sorbonne
In 1870, in a talk for the commemoration of the anniversary of Leibniz's death entitled "Leibnizische Gedanken in der neueren Naturwissenschaft", du Bois-Reymond distinguished the Leibnizian monad from its conceptual natural descendants in Bonnet, Oken or A.F.J.C. Mayer. In particular, du Bois-Reymond insisted that if some could well imagine that O. F. Müller named the simplest living beings of his classification "monads" in reference to Leibniz (or minimally imagine that the monads of Leibniz and those of O. F. Müller shared something in common), this conceptual link was pure nonsense, the product of overly imaginative readers. At best, there was no more to read into this uncertain Müllerian mention of Leibniz than a simple quip. This need to clarify what a monad meant in biology implicitly showed that the conceptual link between the Leibnizian naturalised monad and the Müllerian monad was questionable. This talk aims to clarify what the monad referred to in O. F. Müller. In order to do so, I will first present how O. F. Müller thematised the monad as the simplest living being and as a new genus in his classification. In particular, I will focus on the notion of simplicity inherent to the monad when the monad named the simplest living being. Indeed, in the Müllerian approach, being a simple living being meant having a homogeneous composition, with no well-developed organs, and this definition of simplicity contrasted with the one that the naturalised Leibnizian tradition thematised. While the conceptual descendants of the Leibnizian monads were simple entities with an internal impetus, implicated in ontological issues, the monads in Müller embodied a compositional approach to simplicity within an epistemic framework, with no underlying ontological questioning. Consequently, at first sight, there is no conceptual link between the Leibnizian naturalised monad and the Müllerian monad.
However, in the preface of Vermium terrestrium et fluviatilium (1773), Müller quoted Bonnet's Considérations sur les corps organisés and explicitly relied on Bonnet for theoretical considerations. In his Considérations and his Palingénésie philosophique (both books were in Müller's bibliography), Bonnet developed conceptions based on Leibniz, which he reinterpreted. This acknowledgment by Müller of Bonnet's theoretical supervision thus raises anew the question of the conceptual relationship between the Leibnizian naturalised monad and the Müllerian monad, and between their notions of simplicity. This questioning constitutes the second point of the talk. I will conclude by examining how du Bois-Reymond's strict separation between the Leibnizian legacy and the Müllerian monad highlighted a complex conceptual situation between different approaches to doing biology: while some authors like Oken or J. Müller interpreted the monad of O. F. Müller in an organicist or vitalist way, for du Bois-Reymond the point was to separate the monad in Müller (the naturalist who first introduced the monad into classification) from any philosophical or speculative background, in order to promote a scientific method based solely on empirical observations, within a mechanical and materialistic approach.
09:00AM - 10:40AM
SR-8
The Paradox of Historical Epistemology (Symposium)
Track : After Kant
Moderators
Philippe Huneman, IHPST (CNRS/Université Paris I Panthéon Sorbonne)
The relationship between history and historical epistemology, as it crystallized in the Bachelardian tradition, is based on a paradox: on the one hand, the notion of historical epistemology gives history an essential role, but on the other hand, historical epistemology has had very little to do with the historical discipline as such. A number of questions result from this paradox: Is there a specificity of the history of science in relation to the history of historians? In what respect is the history of science, as conceived by historical epistemologists, still a history? Is the history of science possible according to historical epistemology? What would be the appropriate theory for an epistemological history of science? The three presentations of the symposium "The Paradox of Historical Epistemology" address all these questions. The first presentation, "The intertwinement of history and philosophy of science in Bachelard's epistemology", shows how the paradox of historical epistemology was constituted in Bachelard himself. The second presentation, "The issue of the historicity of sciences among Bachelard's heirs: Canguilhem, Foucault, Granger", insists on how different were the solutions that Bachelard's early heirs sought to this paradox. The third presentation, "Does historical epistemology need a theory of history?", starting from the question of the status of history, focuses on how Hacking sought to overcome this paradox by making history one of his styles of reasoning.
The intertwinement of history and philosophy of science in Bachelard’s epistemology
Presented by :
Lucie Fabry, ENS / Université Paris Nanterre
As Georges Canguilhem and Dominique Lecourt discussed whether Bachelard's work should be referred to as historical epistemology or as epistemological history, they both claimed that Bachelard had established a new kind of connection between history and philosophy of science. The aim of this paper is to identify the nature of this connection and what it implies for Bachelard's practice of history and philosophy. The paper follows this relation back to Bachelard's early work: his two dissertation theses (Bachelard, 1928a, 1928b). His main dissertation, L'Essai sur la connaissance approchée, is rooted in the philosophical tradition of theories of knowledge; his complementary dissertation, L'Étude sur l'évolution d'un problème de physique, presents itself as a historical inquiry. In both works, however, philosophy and history of science permeate each other: L'Essai turns to the history of science in order to illustrate its thesis on the nature of knowledge, and L'Étude draws philosophical conclusions from the history of the physical laws of heat propagation. It appears, nonetheless, that the philosophical theses which stem from the historical study partly contradict the Essai's claims on the nature of knowledge. This is shown by comparing the way Bachelard considers the mathematisation of physics: in the philosophical Essai, deeply influenced by Bergsonism, mathematisation is presented as a structuration and simplification of experience, which should ultimately be overcome for a minute description of the purely given. In the historical Étude, however, Bachelard claims that mathematical physics is richer, not poorer, than direct experience, for mathematics alone opens the path to the progress of knowledge. This paper shows that, paradoxically, Bachelard's later works are closer, in methods and theses, to the historical study than to the philosophical essay.
From the early 1930s onwards, Bachelard regularly criticised the philosophical attitude that consists in turning to the history of science to illustrate a general thesis; he claimed that philosophers should instead derive their philosophy from the study of the history of science (G. Bachelard 2013, 7; 2012, 2–5). To specify what that requirement meant to Bachelard, the author shows that Bachelard's project consists, more specifically, in identifying the conditions for and obstacles to scientific progress. It implies a specific way of writing the history of science, which Bachelard presented as retrospective, normative and recurrent (G. Bachelard 1951, 1972). It also implies a specific use of philosophical concepts, which seeks to clarify the paths to scientific breakthroughs and to promote further progress. As an example of Bachelard's conceptual activity, the paper discusses his notion of the "non-", a concept which analyses the history of non-Euclidean geometries and provides a pattern for the progress of other sciences, Bachelard sketching, for instance, what a non-Lavoisian chemistry might look like (G. Bachelard 2012). Along with the commentary on Bachelard's texts, the consideration of their early reception by Suzanne Bachelard (1970) and Georges Canguilhem (1977) helps elucidate the assumptions that allow the intertwinement of history and philosophy of science in his epistemology.
References
Bachelard, Gaston. 1951. L'activité rationaliste de la physique contemporaine. Bibliothèque de philosophie contemporaine. Paris: Presses universitaires de France.
Bachelard, Gaston. 1972. "L'Actualité de l'histoire des sciences". In L'engagement rationaliste, 137–52. Bibliothèque de philosophie contemporaine. Paris: Presses universitaires de France.
Bachelard, Gaston. 2012. La philosophie du non : essai d'une philosophie du nouvel esprit scientifique. Quadrige. Paris: Presses universitaires de France.
Bachelard, Gaston. 2013. Le nouvel esprit scientifique. Quadrige. Paris: PUF.
Bachelard, Suzanne. 1970. "Épistémologie et histoire des sciences". In Actes du XIIe congrès international d'histoire des sciences, 1. A : Colloques, textes des rapports, 39–51. Paris: A. Blanchard.
Canguilhem, Georges. 1977. "Le Rôle de l'épistémologie dans l'historiographie contemporaine". In Idéologie et rationalité dans l'histoire des sciences de la vie : nouvelles études d'histoire et de philosophie des sciences, 11–29. Problèmes et controverses. Paris: J. Vrin.
The issue of the historicity of sciences among Bachelard’s heirs: Canguilhem, Foucault, Granger
Presented by :
Sophie Roux, ENS
In a book that is now twenty years old, Les Inquiétudes de la raison : épistémologie et histoire en France dans l'entre-deux-guerres, Enrico Castelli Gattinara described the "double articulation" that French philosophers of science after Bachelard established between history and science, history asserting its scientificity and the sciences discovering their historicity. But what exactly was expected from the history of sciences? Was there a connection between the history of sciences and history? What was the historicity of the history of sciences? Focusing on Canguilhem, Foucault and Granger, and using a classical distinction between history as a sequence of events and history as a discipline, I will show that, although all three subscribe to Bachelard's thesis that the history of science is not a history like any other, they have quite different notions of the historicity of the history of sciences. Like Bachelard before him, Canguilhem neglects history as a discipline, but he finds in the set of events that led to the development of the sciences the material for a highly human history. For him, the history of sciences has the allure of an adventure in which the decisions that are taken lead to the emergence of unpredictable innovations and to the institution of norms of truth. If the history of sciences is historical, it is therefore, for Canguilhem, because it requires the same kind of engagement as the one that makes one join the Resistance. Foucault focuses on the emergence of norms of truth as well, but he thinks that these norms are not freely instituted by a subject but imposed by prior mechanisms of power. Under these conditions, he is not interested in the history of sciences properly speaking, but in its theoretical conditions of possibility (the archaeology of forms of knowledge) and in its practical conditions of possibility (the genealogy of mechanisms of power).
If the history of science is historical, it is because it depends on conditions of possibility of this kind, which can be related to the regularities that some forms of history reconstruct. Of the three authors considered, Granger is the only one who explicitly questioned the epistemology of history as a discipline, which he considers singular among the human sciences insofar as it aims, either directly or through superimposed models, to restore the singularity of an experience. The history of sciences consequently seems to be a-historical, since it aims at revealing structures that manifest necessity. Still, the sciences are historical, because, in their development, the anterior cannot take the place of the posterior; but, this development being autonomous, such a historicity cuts them off from history as a discipline rather than bringing them closer to it. Canguilhem, Foucault and Granger attach importance to the history of science because of the question of norms that runs through it: but, for the first, the norms at stake are those that a human subject institutes; for the second, the traces that the mechanisms of power deposit; for the third, the norms that emerge in an autonomous process.
References
Canguilhem, Georges. 2018. Œuvres complètes, 5e tome. Paris: J. Vrin.
Foucault, Michel. 1966. Les Mots et les choses : une archéologie des sciences humaines. Paris: Gallimard.
Foucault, Michel. 1969. L'archéologie du savoir. Paris: Gallimard.
Foucault, Michel. 2001. Dits et écrits, 2 vols. Paris: Gallimard.
Granger, Gilles-Gaston. 1967. Pensée formelle et sciences de l'homme, 2nd ed. Paris: Aubier-Montaigne.
Granger, Gilles-Gaston. 1994. Formes, opérations, objets. Paris: J. Vrin.
Does historical epistemology need a theory of history?
Presented by :
Matteo Vagelli, Université Paris 1 Panthéon-Sorbonne
In its study of science, historical epistemology, rather than relying on methodology or a theory of knowledge, has traditionally mobilized history. For its early practitioners this implied a recognition of the role the history of science played in shaping epistemology, and a profound redefinition of the nature and function of the history of science itself. This redefinition has chiefly been accomplished by taking general history as a negative counterpoint. As a result, historical knowledge and history -- understood as a discipline that aspires to be scientific, to the extent that it aims to produce true statements about the past -- do not seem to have received specific attention from historical epistemologists. My talk aims to investigate the reasons for this apparent neglect. In the first part, I will show how, at least for the early historical epistemologists, history is considered a way of indexing knowledge rather than a kind of knowledge itself. I will illustrate this approach to history by revisiting Canguilhem's interpretation of Bachelard's normative turn in the history of science. According to Canguilhem, a science which does not, "at a certain moment", operate the "recusal of certain conditions of objectivity [...] and their substitution by conditions of objectivity more objectively defined", not only has no history, properly speaking, but is moreover not actually a science (Canguilhem 2018). Understanding the scientific status of a discipline to be based on the progressive refinement of its standards of objectivity makes the scientificity of the human and social sciences, history included, appear particularly uncertain.
Making history the very logic of science thus seems, paradoxically, to hinder the possibility of developing an epistemology of historical knowledge. In the second part, I will take up this problem within a different framework: in his philosophical elaboration of historical epistemology, Hacking qualified what he called the "genetic way of understanding", or "historical explanation by way of development", as one of the least firmly established of the six styles of reasoning characterizing Western science (Hacking 2002, chs. 11-12). Styles of reasoning do not correspond to definite scientific disciplines but are "ways of finding out" deployed by one or more disciplines. Hacking accounts for the lesser stability of the genetico-historical style with respect to the styles of mathematical postulation or experimentation by pointing out that the objects of history are "interactive", or rather moving, categories describing behaviors that change over time. This element of change makes theories about them unstable. The two approaches to history at first seem opposed: whereas the early historical epistemologists seem to preclude analysis of history as a discipline, in the approach advanced by Hacking historical reasoning is identified as one of the sources of empirical knowledge. My claim is that, if we look deeper, we will see that both cases are implicitly underpinned by fundamentally similar epistemologies of history. To excavate these epistemologies, in the third and final part of my talk I draw on three interconnected insights from the analytical philosophy of history (Danto 1965; Mink 1987): the nature of historical explanation, the present-orientedness of historical narratives, and the indeterminacy of the past. Through this comparative analysis, I hope to demonstrate that a theory of history necessarily underpins, even if implicitly, any epistemology claiming to be historical.
References
Canguilhem, Georges. 2018. "Objectivité et historicité de la pensée scientifique". In Œuvres complètes, 5e tome, 200-314. Paris: J. Vrin.
Danto, Arthur C. 1965. Analytical Philosophy of History. Cambridge: Cambridge University Press.
Hacking, Ian. 2002. Historical Ontology. Cambridge, Mass.: Harvard University Press.
Mink, Louis O. 1987. Historical Understanding. Edited by B. Fay, E. O. Golob and R. T. Vann. Ithaca, NY: Cornell University Press.
09:00AM - 10:40AM
SR-9
Weyl
Track : After Kant
Moderators
Matteo Collodel, Independent Scholar
The unavoidable residuum of the ego’s annihilation: Weyl between phenomenology and physical geometry
Presented by :
Casey McCoy, Yonsei University
In Das Kontinuum, Raum-Zeit-Materie, and several other places, Hermann Weyl strikingly and enigmatically describes the coordinate systems of physical geometry as "the unavoidable residuum of the ego's annihilation". The enigma is reduced somewhat by recognizing the influence of Edmund Husserl's phenomenology on Weyl in this period, as has been particularly emphasized by Thomas Ryckman in The Reign of Relativity. However, it is my contention that, though his language is distinctively Husserlian in places, there is no significant Husserlian residuum in Weyl's assertion, much less in his development of a "world geometry". Instead, I suggest that Weyl creatively adapts Husserl's ideas to the context of the mathematical project of developing and generalizing (infinitesimal) geometry. Weyl's contention that coordinate systems are necessary in physical geometry is based on the idea that an observer in spacetime must use a coordinate system for the purpose of measurement. That coordinate systems cannot be the residuum of anything, however, is evident from a modern point of view, that is, from coordinate-free formulations of differential geometry and the spacetime theories based upon them. Although it is often noted that coordinate charts are required for the very definition of a differential manifold, this is in fact a confusion: differential manifolds are equally well constructed by specifying the set of differentiable functions (of some degree) on a topological manifold.
This development of a coordinate-free perspective on physical geometry, I suggest, is actually consonant with Husserl's views on physical geometry, since he would attribute the placing of an observer in such a geometry to remaining in the "natural attitude", hence of no genuine phenomenological significance. As detailed by Ryckman, Weyl held that "phenomenological" reflection leads one to identify "infinitesimal" facts (represented by a coordinate system) as the sole legitimate grounds (Evidenz) of an observer in a physical geometry. I will argue that this is in fact a dogmatic epistemological thesis, not a phenomenological one: genuine phenomenological reflection, I will argue, gives no reason to suppose that what an observer is justified in is merely facts related to her "tangent space". Such a supposition, however, is the basis of a natural generalization and development of the idea of an infinitesimal geometry (Riemann, Levi-Civita, etc.), and it is intimately connected to the so-called "problem of space" (as has been drawn out especially by Erhard Scholz). Furthermore, it is also, perhaps, in better agreement with Fichtean than with Husserlian ideas. (The importance of Fichtean ideas in Weyl's thought has been particularly developed by Norman Sieroka in recent years.) It is these two strands that I will develop by way of providing a better interpretation of Weyl's complex philosophical, physical, and mathematical approach to "world geometry".
Why Did Weyl Think that Quantum Logic is a Formal Pottage?
Presented by :
Iulian Toader, University of Vienna / Institute Vienna Circle / Institute of Philosophy
In a 1987 paper, Bas van Fraassen wrote: "I locate the real beginning of the semantic approach, the place where it became consciously distinct, in E. W. Beth's Natuurphilosophie (1948)... Since his view could be sloganized by saying that the analysis of quantum logic provides the paradigm for the semantic analysis of physical theory, it is clear that his view also had antecedents in the thirties and forties, flourishing perhaps entirely unconsciously of the implied opposition to the approach of the logical positivists. A good example is Hermann Weyl's article 'The Ghost of Modality' (1940)." Later, van Fraassen added: "Weyl gave in rudimentary but prescient form the outline of the semantic analysis that would eventually unify modal, quantum, and intuitionistic logic." (1992) In this paper, I discuss Weyl's analysis of quantum logic (QL). In particular, I focus on his argument that a close analysis of QL shows that "we sold our birthright of reality for the pottage of a nice formal game." His main question in the 1940 paper was: Is there a useful universal logic of modality? In order to answer this, Weyl examined several "models" in which modal operators combine "unambiguously" with logical operators. His inquiry is described as follows: "If in several such models we encounter the same complete set of axioms, then we have reason to believe in the usefulness of a universal logic of modality. In the opposite case our hopes will be nipped in the bud." The existence of different sets of axioms in different such models then shows that there is no universal logic of modality. QL is one of the models examined. Weyl assumed Birkhoff and von Neumann's 1936 approach and so followed them in taking the properties of a quantum system to correspond to closed linear subspaces of a Hilbert space.
That is, he considered "experimental" propositions like "An experiment to measure observable O of the system yields a result within a certain subset of the real numbers" to correspond to closed subspaces. Such a proposition is assigned a certain probability value, which indicates the probability of its being true. Thus, for each state of the system, there is a mapping from the set of experimental propositions to the interval [0,1] of probability values. Then Weyl noted: "the parallelism between the [logical] operators … for sets [i.e., for closed subspaces of the Hilbert space] and for (truth or probability) values, a feature prevailing in classical logic ... breaks down completely in quantum logic." While he was wrong about classical logic (as Carnap would come to show in 1943), Weyl concluded that the broken "parallelism" entails that QL is an incomplete calculus and thus "of very little extrinsic significance, in spite of its attractive intrinsic mathematical features." In particular, he considered its incompleteness a "barrier to objective reality". My own reconstruction of the argument clarifies Weyl's meaning of incompleteness and explains why he took this to have anti-realist consequences.
Towards a new philosophical perspective on Hermann Weyl’s turn to intuitionism
Presented by :
Kati Kish Bar-On, Tel Aviv University
Hermann Weyl's engagement with intuitionistic ideas began in 1910, when he first started to consider constructive methods as a reliable alternative to Cantor's set-theoretical approach (Beisswanger 1965; Scholz 2000). In Das Kontinuum (1918) Weyl adopted a constructivist approach, which he developed into an intuitionistic one three years later in "On the New Foundational Crisis in Mathematics" (1921). However, during the 1920s, Weyl's foundational thinking drifted away from intuitionism in favor of Hilbert's axiomatic program, possibly owing to intuitionism's inability to fully recapture all theories of classical mathematics (Weyl 1927; van Dalen 1995; 2013). Still, Weyl's later works show that he never fully accepted Hilbert's program (Weyl 1927; 1940). Solomon Feferman describes at least two additional changes of heart vis-à-vis constructivism attested by Weyl's later works: one in the late 1930s, when he restated the importance of his early constructive views as essential for a viable solution to the foundational problem, and another in his 1953 lecture, where he described himself as torn between constructivity and axiomatics (Feferman 1998; 2000). Historians have tended to deem Weyl's inclination towards intuitionism "strange" or even "voluntaristic" (Scholz 2000, 2). His retreat from Brouwer's ideas is portrayed as "disillusionment" (Rosello 2012, 147), and he is often described (e.g., by Erhard Scholz) as "wandering" between mathematical approaches and through philosophical fields (Scholz 2004, 1). Such accounts view Weyl's frequent changes of mind as owing to confusion bred by indecisiveness, but they make no attempt to explain why he saw fit to change his mind or the reasons for his undecidedness.
In the current paper, I aim to go beyond the prevailing view of the historical accounts and to reconsider Weyl's "shifting positions" as a symptom of a much deeper, convoluted intrapersonal process of self-deliberation. To do better justice to Weyl's changes of heart, I wish to look briefly at the broader philosophical questions having to do with the rationality of normative framework transitions; in particular, how can committed practitioners come to fault their commitments sufficiently to seek alternatives? First, I shall focus on three central themes that occupied Weyl's thought over the years: the problematic differentiation between the intuitive and the mathematical continuum, the notion of logical existence, and the necessity of both intuitionism and formalism for adequately addressing the foundational crisis of mathematics. Read through the prism of these persistent concerns, Weyl emerges as grappling consistently with fundamental problems rather than as merely confused, as his intuitionistic phase is commonly described. Subsequently, building on Menachem Fisch's model of scientific framework transitions and the special role it accords to normative indecision or ambivalence (Fisch and Benbaji 2011; Fisch 2017), I will examine Weyl's motives for considering such a radical shift in the first place, with a view to showing that Weyl's indecision was the result of a rational process of self-criticism.
10:40AM - 11:00AM
HSS Foyer
Coffee & Tea Break
11:00AM - 12:40PM
SR-4
Mathematics, Laws, & Certainty
Track : Kant and Before
Moderators
Valérie Lynn Therrien, McGill University
Mathematization and the Quaestio de Certitudine Mathematicarum
Presented by :
David Marshall Miller, Iowa State University
One feature of modern science is a commitment to mathematization. All else being equal, scientists prefer theories that allow the mathematical characterization of phenomena. This commitment is grounded in the belief that mathematics is certain, applicable, and productive. That is, mathematics is indubitable; it accurately describes natural phenomena; and it (somehow) expresses the causation of those phenomena. Yet, as participants in the late sixteenth- and early seventeenth-century Quaestio de Certitudine Mathematicarum argued, this tripartite attitude is contrary both to the default Aristotelianism of early modern natural philosophy and to the revived Platonism of its opponents. Aristotelians (like Piccolomini and Pereira) held that the mathematical categories are too impoverished to capture the causal structure of the world: they apply to accidents, not productive natures. Meanwhile, for Platonists (like Barozzi and Biancani) the natural world is too corrupt to exemplify mathematical perfection. Either way, mathematics is of limited value in natural science. So we are faced with a historical question: when and how did the tripartite belief arise? The answer offered in this paper is that Pietro Catena's innovative participation in the Quaestio was of seminal importance. Catena recognizes that Aristotelians and Platonists alike hold (in different ways) that the certainty of mathematics derives from its objects. But insofar as mathematical objects are distinct from natural things, certain knowledge of the former does not apply to the latter. Catena consequently reverses the order of dependence. He holds that mathematical certainty is sui generis: the Euclidean axioms reveal themselves as necessary truths as soon as we express them. Catena then asserts an ontological argument: mathematical truths are necessary, so their objects necessarily exist.
For instance, the proof that the angles of a triangle sum to two right angles entails the actuality of a universal triangle with that property. Moreover, the universals' properties are inherited by particulars under constraining conditions. This is true even of concrete particulars in the natural world, which are mathematical universals under particular physical conditions. Yet this entails that the mathematical properties of a natural object are identical to those of the universal it inherits. The bronze triangle is just a triangle conditioned by bronze, so it possesses the properties of a triangle, including angles that sum to two right angles. In this way, mathematics is rendered applicable to natural science: the study of the rainbow is a particularization of the study of refraction, which is a particularization of pure geometry. Catena's view also yields the productivity of mathematics. The existence of mathematical objects flows from the certainty of mathematical demonstration, so premises produce their conclusions, logically and causally. The logic of mathematics, Catena says, invokes rational causes (causae illativae), which are prior to and more fundamental than the four Aristotelian causes, and which generate the propter quid of scientific demonstrations. Catena's intervention in the early modern Quaestio is at the root of modern mathematization. Contrary to both Aristotelians and Platonists, Catena makes mathematics certain, applicable, and productive.
Laws as Axioms
Presented by :
Michael Jacovides, Purdue University
I defend the analytical interest, for historians of the philosophy of science, of the following conception of laws of nature: a law of nature is an empirically discovered relation between quantities that is suitable to serve as a first principle of a science. Historians of ideas who consider the origin of the concept of a law of nature usually take laws to be general commands from God and treat some juridical connotation as essential to the concept. I'll argue for the greater usefulness of my concept along four desiderata. First, it helps us better understand ancient discoveries of at least three laws of nature: the law of reflection, the law of the lever, and the law of buoyancy. Second, it helps us better understand Roger Bacon's 13th-century application of the word 'lex' to what seem to us to be laws of nature governing optics. Third, it fits better with the way that Newton introduces his laws of motion in the Principia. Fourth, it better corresponds to modern usage among contemporary scientists and philosophers of science; after all, one can today be an atheist and a competent scientist or philosopher of science. Many cultures develop mathematics in various forms, but only in Greece and places influenced by it are proofs put into a format of axioms and theorems. The great success story in this project is mathematics and the mathematical sciences as developed by Euclid, Archimedes, Heron of Alexandria, and Ptolemy. If we think of laws as potential axioms, we can understand why we call the law of the lever and the law of buoyancy 'laws', but not every correct general result in the Archimedean corpus: the laws are more suited to be axioms. In 1959, A. C. Crombie showed that Roger Bacon influentially uses the term 'lex' for laws of reflection, refraction, and the multiplication of species. Crombie asserts that Bacon's use of the term shows a new commitment to mathematizing nature.
Jane Ruby rightly argues that Greek and Arabic writers on optics had already been using mathematics, and that what's new with Bacon is only his use of the word 'lex'; Ruby shows it's just a puffed-up version of the word 'rule' (regula). Not every important general principle is a law of nature, however. To understand why Bacon's leges are laws of nature, it's important to acknowledge that he's working within an empirical axiomatic framework. Newton's Principia is in the same mathematical form as Euclid's Elements and Archimedes' On the Equilibrium of Planes. He introduces his laws of motion under the title Axiomata sive Leges Motus: Axioms, that is to say, Laws of Motion. That equivalence gives us his concept of laws. A similar conception lives on in modern physics textbooks: the laws are the general principles at the beginnings of chapters from which more particular results can be derived. Most philosophers of science could agree that these are the statements that express the laws of nature; their disputes are really about why these statements are true.
Waismann’s Philosophical Method as Mathesis Universalis
Presented by :
Radek Schuster, University Of West Bohemia
Although Waismann's late work, from his Oxford period in the 1940s and 1950s, is generally recognized as a distinctive and valuable contribution to the philosophy of science, the philosophy of language, and meta-philosophy, it has traditionally been perceived as strongly influenced by Wittgenstein. Moreover, Waismann's texts have been read by many as merely derivative of, or even epigonic to, Wittgenstein's own writings. The aim of this paper is, on the one hand, to reconsider Wittgenstein's influence on Waismann and to show such a reading as inappropriate and unjust to Waismann, and, on the other hand, to consider Waismann's work as an attempt to construct a universal science and to evaluate it in the context of the history of philosophy of science. The exposition is based on the three following arguments. First, in his late work, Waismann continued to develop and apply the method of philosophizing that was invented in the course of the collaboration between him, Schlick and Wittgenstein on the book project Logik, Sprache, Philosophie in the 1930s. This method, called "unsere Methode", was the joint tool of the three. Waismann, who was responsible for its articulation, not only included Wittgenstein's voice in it but also made room for Schlick's voice and the spirit of exact discussion within the Vienna Circle. Furthermore, after Schlick's assassination in 1936 and Wittgenstein's having been "led astray", Waismann remained the only one able to utilize this innovative method fruitfully. Second, this philosophical method, as the analysis of meaning and the clarification of ideas, is an essential part of science. The examination of fundamental concepts has been, according to Waismann, the only way out whenever science has come to a crisis in its history. The method is a process in grammar, in a broad sense, and it consists in formulating and bringing to consciousness the rules for the usage of signs.
This philosophical grammar embraces the rules of conventions, definitions and ostensive definitions, the rules of logical inference and mathematical calculation, etc. Waismann contrasts the grammar with the actual use of language. Grammar is, as he puts it, "everything about language which can be fixed before language is applied", or, in other words, "the installation and adjustment of a system of signs, in preparation for their use". Waismann compares the relation between grammar and the application of language to the relation between determining the meter as the unit of length and carrying out a measurement. Third, Waismann's elaboration of the method and its utilization bears comparison with similar famous projects in the history of philosophy of science that sought a scientia universalis by accenting linguistic techniques. In the tradition that goes back to the ars magna of Lullus and continues through the characteristica universalis and calculus ratiocinator of Leibniz to the Begriffsschrift of Frege, Descartes's project of a mathesis universalis seems the most compatible with Waismann's. This claim is supported by examples of the analogous ways in which both Descartes and Waismann dissolve scientific problems.
11:00AM - 12:40PM
SR-6
Biology, Chemistry, Medicine
Track : Kant and Before
Moderators
Takaharu Oda, Trinity College, Dublin
Extracting Spirits: the Role of Distillations in the Emergence of Early Modern Experimental Philosophy
Presented by :
Doina-Cristina Rusu
In this paper, I will investigate the impact that the invention of distillations had on early modern philosophy and especially on the emergence of experimental science. Early modern authors worked within different theoretical frameworks, and this influenced the way in which they interpreted the resulting substance of the distillate: it was thought to be quintessence, subtle matter, oil, tincture, etc., but everyone agreed on its uniqueness. With few exceptions, this topic has been overlooked in the literature. One such exception is Sergius Kodera, who claimed that the discovery of the distilling apparatus in the late Middle Ages provided an empirical model for the physiological process of the formation of spirits in the human body, and that, as a result, the concept of spirit eclipsed the importance accorded to the bodily humours (Kodera, 2012). In this paper, I will show that distillations had a greater impact than previously suggested. Against Kodera's claim, I hold that it is not the medical framework that benefited most from the discovery of the distilling apparatus, but the experimental sciences. This is because the medical spirits were an important part of both medieval medicine and philosophy, because their formation was described as evaporation or rarefaction, and because humours continued to be used in medical explanation. My claim can be subdivided into three parts. First, claiming that the quintessence of things can be extracted from sublunary objects contributed to the rejection of the Aristotelian distinction between the sublunary and supralunary realms. Second, generalizing the existence of a spirit to all material things, and not only animals and humans, led to a revival of the Stoic concept of pneuma as a subtle material body permeating the entire universe. Third, distillations were the proof that this body could be extracted, modified and manipulated in the laboratory.
This was a characteristic that the Aristotelian essences or forms could not possess. This means that, while taking over some of the structural and vital functions of the form, spirits, qua being active, subtle, and material, provided the framework for the development of an experimental philosophy that aims at changing and transforming natural and artificial bodies. In order to support my hypothesis, I will look at the relation between theory and practice in the distillations of Hieronymus Brunschwig and Giambattista della Porta, as well as at the more general experiments with spirits in Francis Bacon.
“Revisiting the history of biology with nutrition: vital mechanisms and the ontology of life”
Presented by :
Cécilia Bognon, UCLouvain, CEFISES
The way in which philosophy of biology crystallized around a core set of concepts and problems tied to evolutionary biology seems problematic (Sarkar and Plutynski 2008, Gayon 2009, Pradeu 2017) in that it does not reflect the wealth and diversity of philosophical problems contained in biology. In that sense, one of the goals of the present talk is to propose some distance with regard to the 'problem-space' that seems to have been imposed on us by a certain history of biology, with its highlighted entities such as the organism or the gene (Huneman and Wolfe 2010, Creager 2017). This approach explicitly inquires into the origins and emergence of biology, as a way of challenging a certain univocity and unidirectionality. In this talk I investigate how life emerged as a problem, or was constituted as a specific ontological category, in the early 18th century, i.e. how life was gradually separated from the inert, and the role played by the conceptualization of metabolic processes in this separation. In this nutrition-centered (rather than reproduction-centered) approach, the contribution of nutrition to the emergence of the ontological problem of life and to the elucidation of the material (chemical) processes that support it necessarily implied the differentiation of mechanisms specific to organic beings, such as intussusception (vs. crystallization). However, the polarization of the ontological opposition between the living and the non-living does not necessarily accompany the constitution of an empirical and experimental biological science. The necessary recognition of the ontological problem of life (as a distinction between living and non-living entities) therefore had to be complemented by a profound redefinition of nutrition and its role in the vital economy – a reconceptualization that required replacing a definition of nutrition as growth and repair with an organizational and productive understanding of nutrition.
Under this condition, nutrition made it possible to grasp the specificity of living organisms as a certain capacity for self-production – a specificity which, if not materially or chemically determined, at least raised the question of the relationship between life and matter, or between the vital and the chemical. This double movement – the emergence of a category of "life" against the backdrop of an ontological crisis, and the emancipation of biological discourse from these same ontological concerns – constitutes the two poles of the history that I would like to contribute to writing, in a broader project, from the standpoint of nutrition. Indeed, studying nutrition, as I will show, should make it possible both to polarize an ontological opposition between the living and the inert through the study of specifically vital mechanisms (intussusception) and, at the same time, to deploy, in the field of biological chemistry, an analysis of the materiality of life. In other words, I argue that the recognition of nutrition and its specific mechanisms provided a ground from which an autonomous biological science could arise that did not imply the transcendent nature of its object (life or the organism).
Immanuel Kant’s Reduction to the Pristine State: Chemical Method in the Critique of Pure Reason
Presented by :
Ashley Inglehart, Union College
Curtis Sommerlatte, Union College
In the preface to the second edition of the Critique of Pure Reason, Kant famously compares the revolution brought about by his transcendental method with that brought about by Copernicus. In presenting the so-called Copernican Analogy, however, Kant mentions another vivid analogy for understanding his own method: the "synthetic procedure" that is exhibited by the chemists in "the experiment of reduction". Despite abundant work on both Kant's Critique and his use of the Copernican Analogy, this chemical analogy has been virtually neglected. This paper aims to elucidate the chemical experiments referenced by Kant and to show how they serve as an analogy for Kant's own synthetic method in the Critique of Pure Reason. We first consider two chemical experiments referenced by Kant in the preface. The first, what Kant calls "the experiment of reduction", is actually the Reductio in Pristinum Statum, popularized a century earlier by the Wittenberg chymist Daniel Sennert. The second is Georg Ernst Stahl's famed phlogiston experiment, in which charcoal is added to lead calx in order to regenerate the metal. Both experiments investigate unobservable matter (corpuscles and phlogiston, respectively) by manipulating a substance through chemical analysis. That analysis is followed by a synthesis with some external component, which results in a reduction to the substance's original state. We then show how these experiments serve as an analogy for Kant's own synthetic method in the Critique and argue for a new interpretation of that method. We argue that it consists in a three-stage process: 1) a hypothesis about unobservable entities; 2) an analysis of a whole into its elements; and 3) a synthesis of those elements back into a whole. In the first stage, Kant assumes the distinction between appearances and things in themselves, a distinction that concerns unobservable entities insofar as our experience does not present us with the latter.
More precisely, Kant draws this distinction by assuming that "objects must conform to our cognition" (Bxvi), such that unobservable things in themselves affect us and prompt our cognitive faculties to represent appearances, which must conform to what those faculties "put into" (Bxviii) those objects. In the second stage, Kant's "Transcendental Analytic" presents a theory according to which our a priori cognition contains "two very heterogeneous elements, namely those of the things as appearances and the things in themselves" (Bxxi fn.). In the third stage, Kant's "Transcendental Dialectic" carries out a synthesis of these two elements, and this synthesis supports the hypothesis that there is a distinction between appearances and things in themselves, despite the latter's unobservability. Kant argues that, without such a distinction, any alleged a priori cognition would be contradictory. We conclude by suggesting how this interpretation promises to explain some revisions to the second edition of the Critique of Pure Reason. Namely, given his newly conceived synthetic method, Kant's revisions at the end of the "Transcendental Analytic" aim to forestall the worry that his acceptance of the distinction between appearances and things in themselves commits him to a form of Berkeleyan idealism.
11:00AM - 12:40PM
SR-7
Time and Entropy
Track : After Kant
Moderators
Klodian Coko, Ben Gurion University Of The Negev
Entropy in works and worldview of Andrei Platonov
Presented by :
Monica Puglia, Università Di Sassari
The Soviet engineer-writer Andrei Platonov (1899-1951) was one of the most important authors of the three decades after the Russian Revolution. Usually studied through the lens of Fëdorov's Common Cause philosophy, which combined religion and science, in our proposal we examine his other side, the purely scientific one, focusing on the obsessive presence of entropy in his works and its influence on his worldview. The flowing of time is deeply connected with the increase of entropy: neither exists without the other. For our purposes, entropy can be seen as a measure of the disorganization of a system and of the unavailability of its energy to do useful work. The second law of thermodynamics asserts the universal trend towards the growth of entropy, that is to say, the rush of everything towards a state of disintegration and homogeneity, of loss of form, meaning and memory. The author's interest in this topic is in line with the theories that animated the scientific debate in those years: being himself an engineer, he was undoubtedly aware of the application of these recent studies to fields like electrification and railway systems, for which the previous fifty years of development of thermodynamics had proved fundamental. In the research proposed here, we will try to show how these scientific notions influenced the structure of Platonov's novels, which are burdened by a vision of the world dominated by entropic dissipation and by resignation to the victory of chaos.
These ideas appear insistently in the novels at various levels: first of all, in the biological life of men, associated with the perception of our own corruption and decomposition, and in the way death is seen, without concern for any individual ending, but perceived as a collective occurrence that unites all men; secondly, in action, controlled by chaos, by inconclusiveness, by the absence of planning, which is as often proclaimed as it is unrealized, dispersed in a chaotic motion of activities and futile endeavours; furthermore, in the casual wandering, so typical of the Russian man, who seeks in space what is impossible to reach in a lifetime; finally, in the language, which stumbles between the concrete and the abstract. Even work itself, the only bulwark against the rise of entropy, rarely manages to hinder the dissipation of forces, because it is often concentrated on useless goals. The only desirable partial victory is that of the species: entropy could be defeated only through a link between generations, according to Fëdorov's utopia, yet at this stage even this seems to offer no solace. In those years, the Communist Party endorsed so-called production novels, loaded with juvenile strength and impetus, with renewing and creative energy. Platonov's works, by contrast, could be called dissipation novels, because they record men's defeat against the laws of nature.
The philosophical underpinning of the absorber theory of radiation
Presented by :
Marco Forgione, University Of South Carolina
The paper offers my view on how the absorber theory of radiation of Wheeler and Feynman (W-F; 1945, 1949) considers both advanced and retarded interactions to account for the radiative damping of an accelerated charged particle. It will be argued that the theory is grounded in the philosophical idea of "overall-processes", for which: (i) the micro-dynamical laws of radiation are time-symmetric, in the sense that emitters and absorbers produce a radiation which is half advanced and half retarded; and (ii) the interaction between absorbers and emitter is a necessary one. The first part of the paper rehearses the derivations of the theory and emphasizes the overall-process intuition. The second part of the paper addresses the philosophical problem of the time-asymmetry of the experimental evidence, which emerges from the time-symmetry of the micro-dynamical laws of radiation. Wheeler and Feynman argued that the emergence of such an asymmetry is due to statistical reasons or, in other words, to a "thermodynamical damping". However, H. Price (1997) has suggested a reformulation of the theory to overcome what he deemed to be the fallacious derivation of the (experimental) time-asymmetry. The paper reconstructs the debate around Price's idea of replacing the original time-symmetry of individual emitters and absorbers with a symmetry for which emitters "emit" retarded radiation and absorbers "emit" advanced radiation. This modification of the original theory carries important consequences from both the physical and the philosophical point of view. With respect to the former, the paper shows that, by following Price's reinterpretation, it becomes unclear where the radiative damping would come from (accounting for it was W-F's original purpose).
With respect to the latter, Price's theory changes the original intuition of 'overall-processes', for which the intertwining of past and future is the basis of radiative phenomena. As further evidence of the importance of the overall-process intuition, the paper considers the extension of the absorber theory to quantum electrodynamics (Davies, 1970). We argue that Davies' theory maintains the original time-symmetry suggested by W-F, and it also shows that the time-asymmetry of the experimental evidence can be traced back to the boundary conditions used for photon interactions. The last section turns to Feynman's later work (path integrals and Feynman diagrams) in order to draw a connection between the overall-process intuition explained above and the more recent developments of quantum mechanics and quantum field theory.
Bibliography:
Davies, P. C. W. (1970). A quantum theory of Wheeler–Feynman electrodynamics. Mathematical Proceedings of the Cambridge Philosophical Society, 68(3), 751–764. Cambridge University Press.
Wheeler, J. A., & Feynman, R. P. (1945). Interaction with the absorber as the mechanism of radiation. Reviews of Modern Physics, 17(2–3), 157.
Wheeler, J. A., & Feynman, R. P. (1949). Classical electrodynamics in terms of direct interparticle action. Reviews of Modern Physics, 21(3), 425.
Price, H. (1997). Time's Arrow and Archimedes' Point: New Directions for the Physics of Time. Oxford University Press.
11:00AM - 12:40PM
SR-8
Politics and Values
Track : After Kant
Moderators
Nadine De Courtenay, University Paris Diderot
Ideology and the Politics of Reason in Early Analytic Philosophy
Presented by :
Bianca Crewe, University Of British Columbia
Intellectual history in the 20th century charts the rise of scientific philosophy, the methodological germ of the dominant philosophical tradition in contemporary Anglo-American institutions. In 1928, the Vienna Circle articulated this methodological commitment in their manifesto, The Scientific Conception of the World, which invokes a cluster of epistemological attitudes including a turn away from metaphysics and a shift towards envisioning philosophy as an activity or method. The work of Hans Reichenbach, a figure peripheral to the Vienna Circle, is illustrative of many of these attitudes. Here, I argue that Reichenbach's vision of philosophy as it appears in The Rise of Scientific Philosophy and his earlier political and pedagogical writing discloses a particular account of rationality and the epistemic agent that is generalizable to much early analytic philosophy. I index this to what John McCumber refers to as "the politics of reason," operative in the ideological and historical context of the Cold War, and particularly visible in attempts on both sides to elaborate the philosophical implications of the science of the time. Against this socio-historical backdrop, and with the aid of Mannheim's sociology of knowledge (particularly his account of conservative and natural law thought styles), I suggest that the account of rationality and the knowing subject operative in Reichenbach's philosophy is compatible with democracy, socialism, and capitalism as principles of economic and social organization, but is incompatible with the normative underpinnings of Marxism, the other ideological pole of the Cold War. These underpinnings involve a distinctive vision of rationality and the relationship between knowledge and social processes, as well as an explicitly political and robustly social account of science.
It is worth noting that, in recent years, feminist philosophy, critical race theory, and critical theory have adopted many of these theoretical positions, even from within the analytic tradition. This points to the broader relevance of this investigation: the historical, political, and ideological context in which analytic philosophy of science coalesced alongside, and in implicit opposition to, Marxist philosophy of science can shed light on contemporary tensions involved in mobilizing analytic philosophy as a tool for social justice.
The Reichenbach Scare: Cold War Reason and Closing the Gemeinschaft Gap
Presented by :
Alan Richardson, University Of British Columbia
Hans Reichenbach is a curious figure in the history of analytic philosophy. In the standard histories of analytic philosophy offered by analytic philosophers, logical empiricism has pride of place, but for reasons unrelated to Reichenbach's concerns; hence, while Carnap always and Schlick often make important appearances, Reichenbach is often marginal to the historical accounts given. There is, however, a genre of history of analytic philosophy, offered by those who do not wish to self-describe as analytic philosophers, in which Reichenbach, by contrast, plays an outsize role: both Philip Mirowski and John McCumber offer accounts of the notion of reason in logical empiricism that connect logical empiricism to rational choice theory and neoclassical economics, and both give Reichenbach pride of place in this story. It is an irony of history that a German Jewish social democrat came to the USA and changed philosophy into a cog in the cold war machine of the USA, but an irony that, on their accounts, we must live with. In this paper I rely especially on McCumber's articulation of rational choice theory in Chapter 3 of his The Philosophy Scare to argue that, contrary to his account of Reichenbach's Rise of Scientific Philosophy in Chapter 4, Reichenbach does not endorse all the elements of rational choice theory. Indeed, Reichenbach's Rise denies both the fixity and the givenness of preferences and, in so doing, denies the implicit individualism of rational choice theory. Moreover, while the preferences of others might not be directly criticizable on ethical grounds for Reichenbach, he does locate the crucial political and ethical questions of community in the differences among preferences and the need to harmonize them. I trace the hybridity of Reichenbach's actual views to their sources in his early attempts to provide an ideology for democratic socialism and in his general engineering conception of scientific philosophy.
My story also has an irony in it: it reveals that the very sort of democratic socialism that American cold war theorists could not adequately theorize (usually assimilating it to communism) remains untheorizable in at least one form of contemporary American criticism of cold war reason (which, by contrast, assimilates it to capitalism).
An Analysis of Knowledge and Valuation and Quine’s Two Dogmas of Empiricism
Presented by :
Robert Sinclair, Soka University, Tokyo
Recent work on C.I. Lewis's 1929 Mind and the World Order (MWO) has resulted in a deeper appreciation of its influence, especially in accounting for one key source of Quine's more 'thorough' pragmatism in his 'Two Dogmas of Empiricism' (TDE). One element notably missing from this discussion concerns the development of Lewis's epistemology after MWO, especially as seen in his Carus Lectures, An Analysis of Knowledge and Valuation (AKV). This presentation seeks to further clarify Quine's route to his arguments in TDE by examining the changes to Lewis's position in AKV, which, it is argued, provide vital background to both Quine's criticisms and his later naturalized conception of knowledge. In AKV we find Lewis developing a more careful account of analyticity in response to the rise of logical empiricism and Quine's developing criticisms. Lewis explicitly rejects his earlier conventionalism, which, it is argued, should be seen as his response to Quine's criticism in 'Truth by Convention', and he further develops his alternative view of analyticity in terms of 'meaning inclusion'. Lewis's pragmatic a priori, which was the centerpiece of MWO and highlighted a basic role for a priori classification in science, is also significantly downplayed in AKV. Furthermore, in order to address logical and epistemological difficulties with his earlier view of empirical givenness, Lewis further refines his account of empirical knowledge by distinguishing between expressive, terminating, and non-terminating statements. This modified empiricist view provides, it is argued, a clear example of the phenomenalist reductionism that Quine would label the second dogma of empiricism. The mature epistemological position of AKV thus contains vivid depictions of the two dogmas that Quine would soon reject.
With this background in place, it is further argued that Quine's own evolving position has more affinity with Lewis's view in MWO than with the revised position of AKV, which, as we have seen, serves as one important basis for his criticisms in TDE. Quine retains elements of the pragmatic a priori, which Lewis appears to reject, while sketching his holistic phenomenalist picture in response to Lewis's reductionism. This phenomenalism would soon be rejected in favor of Quine's naturalized approach to knowledge. These developments represent a decisive break with Lewis's theory of knowledge: a rejection of the epistemic relevance of the analytic-synthetic distinction and a rejection of the phenomenal given in favor of sensory stimulation. Lewis claimed that the radical empiricism that results from the rejection of the analytic-synthetic distinction cannot account for the validity of human knowledge. With the development of Quine's naturalized epistemology, we see the beginning of an alternative theory of knowledge not captured by Lewis's map of possibilities. This paper concludes by briefly examining aspects of Quine's mature epistemology, arguing that, despite his radical break with Lewis's view, the affinities between naturalized epistemology and Lewis's conceptual pragmatism, especially as seen in MWO, remain.
11:00AM - 12:40PM
SR-9
Understanding Science I
Track : After Kant
Moderators
Tommaso Ostillio, University Of Warsaw And Kozminski University
Models of expert authority in STS: A political-theoretic reading of the Third Wave and its critics
Presented by :
Kinley Gillette, University Of British Columbia
Despite the fact that political concerns underlie many Science and Technology Studies (STS) scholars' projects, it is uncommon for these scholars to engage directly with political theory (Durant 2011). For this reason, attempts to read STS political-theoretically (e.g., Thorpe 2008; Durant 2011) are useful for clarifying what is at stake in the discipline's debates. Here, I offer a political-theoretic reading of the Third Wave (Collins and Evans 2002, 2007), as well as contemporary opposition to it (e.g., Jasanoff 2003; Wynne 2003). My reading is, however, not in terms of implicit models of democracy (Durant 2011) or STS's various distinct critiques of liberalism (Thorpe 2008) but rather in terms of models of expert authority. Distinguishing between what I call "deferential" and "democratic" models of expert authority, I argue that the Third Wave embraces the former and thus parallels a neoconservative project in politics. Importantly, this does not entail that the Third Wave is altogether undemocratic; STS scholars' models of expert authority do not necessarily correspond to preferences for or against democracy. In fact, many proponents of the democratization of expert decision-making in particular nonetheless assume deferential models of expert authority. For this reason, these STS scholars, who are prominent critics of the Third Wave, tend to be uneasy about expert authority in the first place (cf. Lövbrand, Pielke, and Beck 2011). In politics, an analogous anti-authority position is often associated with radical democracy (cf. Warren 1996). But what proponents of extensive democratization tend to neglect, whether in politics or STS, is an alternative, distinctly democratic model of authority, which merits less radical-democratic opposition. According to this alternative, the legitimacy of an authority is not "foundational" or "given," and thus unchallengeable, but conditional and, by nature, contestable (Warren 1996). 
Legitimacy, according to this model, comes from the institutionalization of means by which those affected by an authority can hold that authority accountable. In addition to highlighting a neglected alternative to the Third Wave and, relatedly, what the Third Wave and its critics have in common, this political-theoretic reading suggests a way to understand the Third Wave as both continuous with and different from the Second Wave. To begin with, the Third Wave can be interpreted as acquiring a deferential model of expert authority from the Empirical Program of Relativism (EPOR). What distinguishes the Third from the Second Wave is, then, the former's partial move away from a Mannheimian "conservative thought-style" that has long characterized STS (cf. Thorpe 2008). This is informative, not only because it is an account of the Second Wave-Third Wave relationship that is distinct from the Third Wave's own account (e.g., Collins and Evans 2002; Jomisko 2016), but because it illustrates how a move away from a conservative thought-style can nonetheless be part of a neoconservative project.
Addressing “The Social Determinants of Health” in Epidemiology: Something New? Something Old? Something Borrowed?
Presented by :
Elisabeth Stelson, Harvard University T.H. Chan School Of Public Health / Dana Farber Cancer Institute
Over the past two decades, the field of public health in general, and epidemiology in particular, has seen a growing emphasis on what are known as the social determinants of health. Social determinants of health (SDOH) refer to the conditions "in which people are born, grow, live, work, and age" (WHO, 2008). This recent emphasis is exemplified by the World Health Organization's (WHO) formation of the Commission on Social Determinants of Health in 2005, which was charged with developing priorities for SDOH research and practice for the WHO's governmental and nongovernmental partners. The creation of the Commission, as well as the positive response to its mission by researchers and practitioners, indicates that, on the self-understanding of epidemiologists, there has been a noticeable shift from studying biological mechanisms of disease distribution to studying how these biological mechanisms interact with the social, political, and economic structures that inform people's lives. In my presentation, I challenge epidemiologists' self-understanding of SDOH as a new disciplinary focus by examining the historical emergence of the fields of public health and epidemiology in the early 19th and 20th centuries. I will show that awareness of the relationship between social, political, and economic living conditions and the health of populations has been an integral component of the development of statistics, social-spatial mapping, community health practices, and observational research since the inception of the field of public health.
I will illustrate this longstanding understanding of the relationship between living conditions and health through three case examples: 1) the development of vital statistics by William Farr in the mid-19th century; 2) the neighborhood mapping assessments, social services initiatives, and community activism of Settlement House workers and early social workers in the United States; and 3) the influence of Edgar Sydenstricker's research on the association between household conditions, work environments, and health for the United States Public Health Service during the Great Depression. While these tools arguably emerged from disciplines adjacent to public health, it is from these fields that the first epidemiologists and public health practitioners emerged. Even with these case examples illustrating how fundamental the study of SDOH was in the formation of epidemiology as a discipline, the question remains: why did the field forget about its SDOH origins? With this question in mind, the presentation will conclude with a discussion of two potential reasons for this disciplinary amnesia: 1) the prioritization of the positivist biomedical model by research funding agencies since the Cold War; and 2) the concept of "separatism as strategy" used to establish public health as a standalone research and practice field, independent of the disciplines from which it grew.
The Digital Eye: The Value of “Distant Reading” of Philosophical Databases
Presented by :
Christopher Green, York University
The historian of literature Franco Moretti distinguished the conventional historical method of "close reading" – examining a small number of key texts in exquisite detail – from "distant reading" – using digital methods to analyze hundreds or even thousands of relevant texts. Distant reading enables one to include the entirety of a given genre in one's research instead of choosing in advance a small number of "canonical" works, thereby obtaining a more comprehensive view of one's subject. The initial examination of material is necessarily more cursory with distant reading, but one can, at any moment, "zoom in" on any item that requires individual attention. My research group has, for several years, employed a variety of digital methods to study past philosophical and psychological material – mostly from around the turn of the 20th century, when psychology was disentangling itself from philosophy and gaining autonomy as an academic discipline (e.g., Green & Feinerer, 2015; Green, Heidari, et al., 2016; Green, 2017; Green & Martin, 2017). This presentation will begin with a brief review of the various digital methods we have used with diverse historical materials: (1) extensive runs of articles in major journals, (2) membership lists from important scholarly societies, (3) lists of the most prolific authors in several decades, (4) judgments collected online of which historical figures had the greatest impact on their field. The second part of the talk will present some new work we have completed on the journal Philosophy of Science. In this work, we generate a series of networks of all the articles published in the journal over its decades of existence, and we use an algorithm to divide the articles into clusters representing different subdisciplines. Over time, one can see some subdisciplines fade while new ones emerge.
One can also observe which authors were most central to each subdiscipline by the number and strength of their articles' connections to other articles in the subdiscipline. This approach is distinct from the "topic modeling" method employed by Malaterre, Chartier, and Pulizzotto (2019), and it is interesting to compare and contrast the results of each.

References
Green, C. D. & Feinerer, I. (2015). The evolution of the American Journal of Psychology 1, 1887-1903: A network investigation. American Journal of Psychology, 128, 387-401.
Green, C. D., Heidari, C., Chiacchia, D., & Martin, S. M. (2016). Bridge over troubled waters? The most "central" members of psychology and philosophy associations ca. 1900. Journal of the History of the Behavioral Sciences, 52, 279-299.
Green, C. D. (2017). Publish and perish: Psychology's most prolific authors are not always the ones we remember. American Journal of Psychology, 130, 105-119.
Green, C. D. & Martin, S. M. (2017). Historical impact in psychology differs between demographic groups. New Ideas in Psychology, 47, 24-32.
Malaterre, C., Chartier, J-F., & Pulizzotto, D. (2019). What Is This Thing Called Philosophy of Science? A Computational Topic-Modeling Perspective, 1934–2015. HOPOS: The Journal of the International Society for the History of Philosophy of Science.
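The network-clustering-and-centrality pipeline described in the abstract can be sketched in miniature. The following Python sketch is purely illustrative (the article names, edge weights, and threshold are invented, and connected components after pruning weak links stand in for whatever clustering algorithm the authors actually used): it splits a small article-similarity network into clusters and ranks each cluster's most "central" article by summed connection strength.

```python
from collections import defaultdict

# Toy similarity network: nodes are articles, weighted edges encode
# the strength of the connection between two articles.
# All names and weights are hypothetical.
edges = [
    ("A1", "A2", 3), ("A1", "A3", 2), ("A2", "A3", 4),  # one dense group
    ("B1", "B2", 5), ("B1", "B3", 2), ("B2", "B3", 3),  # another dense group
    ("A3", "B1", 1),                                    # weak bridge between them
]

def cluster(edges, threshold=2):
    """Drop edges weaker than `threshold`, then return connected
    components -- a crude stand-in for modularity-based clustering."""
    adj = defaultdict(set)
    for u, v, w in edges:
        if w >= threshold:
            adj[u].add(v)
            adj[v].add(u)
    seen, components = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:  # depth-first traversal of one component
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        components.append(comp)
    return components

def most_central(component, edges):
    """Rank articles by the summed weight of their connections
    to other articles in the same component."""
    strength = defaultdict(int)
    for u, v, w in edges:
        if u in component and v in component:
            strength[u] += w
            strength[v] += w
    return max(component, key=lambda n: strength[n])

for comp in cluster(edges):
    print(sorted(comp), "-> most central:", most_central(comp, edges))
```

With the weak bridge pruned, the two dense groups separate into distinct clusters, mirroring how subdisciplines can be made to emerge from a journal's article network.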
12:45PM - 02:15PM
HSS Foyer
Business Lunch
02:15PM - 04:25PM
SR-4
Aristotelian epistemologies of particulars (Symposium)
Track : Kant and Before
Moderators
Margaret Scharle, Reed College
In an influential and disputed passage at the end of his Posterior Analytics, Aristotle outlines the ascent from sense-perception (αἴσθησις) via memory (μνήμη) and experience (ἐμπειρία) to art (τέχνη) and science (ἐπιστήμη). Aristotle and his followers through the centuries have laboured to spell out how it is possible that at first, individuals and particulars are perceived, whereas universal concepts and propositions are known at the end. To justify Aristotle's scheme, thinkers appealed, for instance, to accounts of intellection, induction, intuition or even divine illumination. Accepting this picture bestows on particulars a vital preliminary rôle on the way to universal knowledge. However, in the process which converts sense-data into universal content, the specificity of the data vanishes. The epistemic status of particular contents thus becomes seemingly problematic in the Aristotelian conception. In this panel, we attempt to offer some new historical perspectives on two related problems: How is it that an Aristotelian scientist is able to obtain knowledge of universals on the basis of experience of particulars? And: To what extent does an Aristotelian philosophy of science leave room for knowledge of particulars as part of science in itself, rather than merely as a preliminary for the acquisition of scientific knowledge proper? Through historical analysis of different philosophers within the Aristotelian tradition, we aim to historicise and problematise these epistemological puzzles. We believe that such an endeavour contributes to a more nuanced evaluation of the question whether Aristotle and his followers were forerunners of or roadblocks to modern science and its avowed attention to the particular, a commitment epitomised, for instance, in Francis Bacon's Novum Organum.

Our session opens with a discussion of knowledge of particulars in Aristotle, with an emphasis on Metaphysics M and its relation to the more widely known discussion in the Posterior Analytics.
The talk asks whether a philosophy of science can be built upon such an epistemology of particulars. Aristotle's own answer will be a foundation for our treatment of later thinkers. The next two talks are dedicated to two thirteenth-century scholastics that feature prominently in the historiography of medieval science: Roger Bacon and Albert the Great. Both thinkers were Aristotelians, but of very different persuasion. The two talks will show the different flavours that an Aristotelian epistemology of particulars could take in medieval Latin scholasticism. Our panel concludes with a paper on epistemologies of matter in the thirteenth- and fourteenth-century Latin tradition; by that time, the concept of matter had given rise to difficult philosophical problems due to the communality of matter to all corporeal substances on the one hand and its simultaneous particularity in each individual on the other hand. The knowability of matter, therefore, was an extremely tricky limit case of the knowability of particulars, thus constituting an epistemological puzzle that would preoccupy many Renaissance scientists.
Aristotle on knowledge of particulars in Metaphysics M.10
Presented by :
Joshua Mendelsohn, Loyola University Chicago
In the Posterior Analytics, Aristotle frequently claims that scientific knowledge is of the universal. This feature of Aristotle's view has presented a roadblock for those wishing to rehabilitate an Aristotelian philosophy of science, in so far as it appears to limit scientific knowledge to the study of generalisations and leaves no room for the scientific study of individuals. In one important passage, however, Aristotle qualifies his claim that scientific knowledge is of the universal. At the conclusion of his discussion of mathematics and Platonism in Metaphysics M.10, he writes: 'it is clear that in one sense scientific knowledge is of the universal, but in another sense it is not' (1087a.24–25). There is a sense, Aristotle maintains in this passage, in which scientific knowledge is in fact 'of a particular' (1087a.18).

In this paper I delineate the respective senses in which Aristotle takes scientific knowledge to be of the universal and of the particular in Metaphysics M.10, and I ask why he takes knowledge in these respective senses to have these respective objects. I argue that Aristotle distinguishes scientific knowledge as a capacity from scientific knowledge in its exercise, and takes them to have different objects according to a like-knows-like principle. Aristotle takes knowledge as a capacity to have a general content because knowledge as a capacity is in principle available to be exercised on multiple occasions, and he holds that only an object that is itself general could be the object of a capacity that can be exercised across multiple occasions. On the other hand, Aristotle takes knowledge in its exercise to be of a particular because an exercise of knowledge is itself a particular event, and he holds that only a particular could be the object of a punctual mental state.
I find reasons for Aristotle's view that the generality or particularity of a mental state must match that of its object in his discussion of cognitive virtues in Nicomachean Ethics VI, and find precedent for this view in the Cratylus and Theaetetus.

I close with a discussion of the extent to which Aristotle's view in Metaphysics M.10 constitutes a revision to his claims about the universality of scientific knowledge in the Posterior Analytics. I argue that Aristotle's claim in Metaphysics M.10 represents a smaller revision to the view of the Posterior Analytics than some commentators have thought. Further, I argue that given the revisions to his view in Metaphysics M.10, Aristotle's position becomes more attractive as a philosophy of science. Rather than restricting knowledge only to universal facts, Aristotle's view leaves room for knowledge of particulars in the exercise of this universal knowledge. On this revised view, scientific knowledge is not so much a matter of turning attention to universals at the expense of particulars as a back-and-forth movement between recognition of general features in particular instances and application of previously learned generalities to new particular cases.
Albert the Great and his contemporaries on the rôle of particulars in a demonstrative science
Presented by :
Dominic Dold, Max Planck Institute For The History Of Science (Berlin)
The medieval savant Albert the Great (d. 1280) has become a well-known figure to historians of science, mainly due to his intellectual breadth and curiosity. With much diligence, he embraced the totality of learning available to him after new (mainly Aristotelian) sources had been translated from both Greek and Arabic in the twelfth and thirteenth centuries. His almost encyclopaedic oeuvre has given him a reputation as a compiler among some scholars of medieval philosophy. While justified to some degree, this assessment does not do proper justice to Albert's originality. For not only did he adduce his own observational examples when commenting on Aristotle's writings on natural philosophy, but he also developed an original version of an Aristotelian philosophy of science which he deemed suitable for conducting research in natural science. Thus many remarks on scientific methodology can be gathered from Albert's commentaries on the Organon (especially the Posterior Analytics) and on Aristotle's biological works. While these passages do not always amount to a coherent picture, it is clear that for Albert, particulars cannot be disregarded in the study of the natural world. Rather, enquiring into particulars and their physical matter is relevant for natural science. In this talk, I investigate Albert's criteria for determining, within natural science, the appropriate cause of why a certain attribute inheres in a subject. This cause, I argue, depends epistemologically on the specific science considered, which in turn corresponds to the ontological division of reality. At the lowest level of this division, particulars and individuals studied in natural science are to be found. Living individuals then possess internal levels of organisation: from the level of substance down to the elements.
For Albert, studying the interplay between these levels of organisation, as well as investigating the causes manifest in a demonstration involving a particular (demonstratio particularis), are important tasks for the natural scientist. The main objective of this talk is to spell out this interplay in more detail.

I conclude by touching upon Latin debates on the theory of demonstration. More specifically, an intense contemporary debate about the nature of the middle term in a demonstration often involved discussions of the demonstratio particularis, where this type of demonstration was used as a test case to understand more deeply whether the middle term of a demonstration was the definition of the subject or the definition of the attribute predicated of the subject. Albert's position was the latter, and I argue that it is compatible with his philosophy of natural science. Moreover, I explore to what degree this position was defensible to his contemporaries.
Scientia and prudentia: Roger Bacon (1220-1292) on knowledge of particulars and the nobility of the practical sciences
Presented by :
Yael Kedar, Tel Hai College/Friedrich-Alexander University Of Erlangen
How can a philosopher establish the knowability of particulars within an Aristotelian frame? And how can knowledge of them become a part of science? This paper presents Roger Bacon's attempt to find a solution to these questions.

Bacon held that all knowledge of the actual material world is acquired by the mediation of species. A species is a form representing a certain feature of an active agent (such as its colour or smell), advancing in the medium and the senses by multiplication (or re-generation) and carrying information which transforms its recipients, making them like its agent in certain respects. This was Bacon's way of explaining both sensation (and eventually knowledge) and physical interactions in the natural world.

All things, including substances, accidents, universals and particulars, produce species. Bacon was a realist concerning universals, and thought they exist within particular material objects. The universals in the mind, so he thought, are the species (or likenesses) of the real, external universals, and they relate to real universals in just the same way as the species of particulars in the soul relate to real particulars. Universals, therefore, are not formed by abstraction from particulars. Rather, their species arrive in the senses, and from there in the intellect, together with the species of particulars.

Although perceived in the same way, particulars have an ontological and epistemological precedence: the perpetuity and ubiquity of the universal is derived from the succession of singulars in all times and places, and hence particulars are recognised first and universals second.
Bacon stressed that both types of species arrive in the intellect, because the intellect cannot work with universal species alone: knowledge of particulars cannot be deduced from their universal, for in this way particulars could not be distinguished from one another; therefore the intellect must hold on to the species of particulars as well.

So there can be knowledge of particulars according to Bacon -- but can this knowledge receive the status of a scientia? Not exactly. Bacon made a distinction between two types of knowledge: scientia and prudentia. Searching for a place for the knowledge of particulars within the Aristotelian outlook, Bacon divided the intellect into speculative and practical parts, which he named the 'speculative' and 'practical' intellects. He identified the latter with the will, and he deemed it truly rational. Accordingly, he divided the sciences into speculative and practical. Aristotle did not think that there could be two arts or sciences of the same subject matter; nevertheless Bacon, relying on the Nicomachean Ethics rather than the Posterior Analytics, made just such a division. Moreover, he reversed the order of the speculative and practical, universal and particular, and gave precedence to the practical and particular. While for Aristotle the most important science was metaphysics, Bacon considered ethics the noblest science of all. Practical truth should prevail, according to Bacon, over theoretical truth. The end of speculative knowledge lies in its practical application, and the measure of how great a science is lies in its utility.
Neither one, nor many -- or both. Matter, enmattered bodies, and the limits of natural philosophy
Presented by :
Nicola Polloni, Humboldt University, Berlin
Within the medieval Aristotelian framework of natural philosophy, the theory of matter plays a most relevant rôle. A principle of the natural world, matter is the most eminent characteristic of natural existence, as natural bodies are always and necessarily enmattered bodies. However, matter is also a central metaphysical feature -- and metaphysically, its functional scope is both wider and different. Tensions arising from the intertwining of the physical and metaphysical implications of matter-theories marked the medieval debate on natural philosophy.

Medieval thinkers followed Aristotle's claim that natural philosophy must consider matter in its study of nature. They also tended to accept a controversial point of the Aristotelian tradition according to which matter enters the definition (and essence) of each natural object. Joined to ontological and physical considerations of the material constitution of the natural object, these doctrinal stances entailed a series of puzzling implications for medieval philosophers, particularly in consideration of 'prime' matter. How can this original matter play a fundamental rôle in individuation and, at the same time, be the substrate common to every corporeal being? How can prime matter even be conceived by the human mind, given that it is completely deprived of forms? My paper focuses on one specific question addressed by the debate on these problematic aspects bordering natural philosophy and ontology: whether matter is to be considered singular and common to every enmattered body, or manifold and particularised within each enmattered body.

My examination is centred on Averroes's discussion of this thorny problem and the impact his position had on some pivotal Latin philosophers of the thirteenth and fourteenth centuries.
Averroes's consideration of the commonality of matter as a most perfect mental image of prime matter -- an image with no extramental correspondence in reality -- would have a long-lasting influence in the Latin tradition up to the Renaissance. On the one hand, its application improved the elaboration of epistemic strategies to establish the conditions of intelligibility of matter. On the other hand, it also widened the disciplinary fragmentation of the notion of matter. In both cases, the position of prime matter as a fringe concept of natural philosophy and its subordinated sciences would open a significant gap (prime matter itself) in the knowledge of the natural world and render the metaphysical notion of matter somewhat redundant as an explanatory device of natural philosophy. These tendencies, I argue, governed the transition to a new notion of matter as 'materiality', whose emergence would be instrumental to the early-modern change of paradigm in natural philosophy.
02:15PM - 04:25PM
SR-6
Newton
Track : Kant and Before
Moderators
Karen Detlefsen, University Of Pennsylvania
Fitter, Stronger, More General: The Multiple Aspirations of Newtonian Induction
Presented by :
Zvi Biener, University Of Cincinnati
Isaac Newton uses multiple comparative concepts to characterize inductive success and failure. Scholars, however, have traditionally tried to reduce the meaning of 'Newtonian induction' to a single defining characteristic: whether it be a simple 'enumerative' induction from instances, induction licensed only by an underlying ontology of primary/inherent qualities, induction as a process of generalization in scope, or induction as a process of greater and greater confidence in increasingly accurate claims. These interpretations often privilege particular texts: some scholars believe the locus of Newtonian induction is Rule 3 of the Regulae Philosophandi, and so emphasize induction from gross matter to micro-matter as the paramount example. Others emphasize Rule 4, and so stress induction's fallible and piecemeal nature. And so on.

I argue in this paper that attempts to find a defining feature of Newtonian induction are misguided. I claim, instead, that Newton understood induction along three irreducible dimensions: strength, generality, and fit. Each corresponds to a different desideratum: increasing strength corresponds to 'simple' enumerative inductive support; increasing generality corresponds to broader and broader scope in space, time, and classes of objects; and increasing fit corresponds to higher and higher precision within inductively made claims. Each can be increased independently of the others. For example, one can increase a claim's 'simple' inductive support without increasing the scope of objects to which it applies, or one can increase a claim's scope without increasing its accuracy. Keeping these dimensions distinct allows us to 1) recognize that Newton purposely emphasized different features of induction in different texts, and thus 2) see that his formulations were not confused, but responsive to different argumentative needs in different contexts.
In particular, it allows us to see how his responses to several challenges set by Huygens and Leibniz, although they seem to miss the philosophical point, actually offer an innovative conception of induction, one that was not easily accepted by Newton's contemporaries. Finally, keeping in mind the multi-faceted nature of Newtonian induction allows us to 3) clarify several recent scholarly debates. I will focus specifically on whether Newton's claim that he has achieved certainty in natural philosophy contradicts his pervasive probabilistic language regarding the status of theoretical claims.
Newton, Tides and the Darker Side of Baconianism
Presented by :
Kirsten Walsh, University Of Exeter
In Book 3 of his Principia, Newton extended his theory of universal gravitation to offer a physical cause for the tides. This theory has been celebrated as one of the major achievements of Newtonian physics, effectively reducing the problem of tides to a mathematical problem and setting the study of tides on a new path.

But this paper isn't about the success of Newtonian physics.

A considerable amount of empirical data underwrites Newton's work on the tides. And while he was working with tidal data from areas such as the eastern section of the Atlantic Ocean, the South Atlantic Sea, and the Chilean and Peruvian shores of the Pacific Ocean, Newton himself never left England. His data was the result of a collective effort on a massive scale, coordinated by the Royal Society under the posthumous directives of Francis Bacon. Here, we find Newtonian physics embedded in rich social, cultural and economic networks. Newton's access to global data was the result of hard work by natural historians, merchants, mariners and priests who participated in the accumulation, ordering and dissemination of this data. Further, the capacity to collect that data itself followed the increasingly global trade networks reaching to and from Europe. And so, while the theory of the tides was considered a major theoretical achievement for Newtonian physics, as an empirical project this case might be considered one of the major achievements of Baconian experimental philosophy.

A closer look at these networks of scholars and merchants working together in the pursuit of global knowledge production, however, reveals a darker side of Baconianism. The collection of tidal data was carried out by the Royal Society in cooperation with the East India Company and the Royal African Company. It is well known that both companies engaged in extractive behaviours in their respective localities, behaviours we now consider morally abhorrent.
While the Royal Society cannot be considered responsible for these acts, we might say that it played a role in legitimising, normalising and even celebrating them.

While it should come as no surprise to historians, philosophers and sociologists of science that knowledge-production and exploitation are interwoven, the connection between natural philosophy and exploitative trade is only rarely made in presentations of the Royal Society's work or of Baconianism generally: science is often viewed as floating serenely and objectively above the darker aspects of early modern society. But the Baconian requirement of information gathering was enabled by, and perhaps itself worked to legitimate, the systems of trade which often represented those darkest parts. This is not to say that the Royal Society explicitly endorsed these features of the early modern world. Rather, the success of such large-scale Baconian projects may have tacitly whitewashed their social and political context.

This paper explores the extent to which these trading empires (and their attendant moral problems) did not merely enable the success of Newton's work on the tides and other Royal Society projects, but often directed and shaped them and, epistemically and morally speaking, potentially sullied them as well.
Newton’s Early Metaphysics of Body: Impenetrability and Action at a Distance
Presented by :
Elliott Chen, University Of California, Irvine
In this paper, I discuss Newton's conception of body in De gravitatione and its relation to the legitimacy of action at a distance. Howard Stein has argued that such a conception privileges contact over distant action: by dint of being impenetrable, bodies must necessarily act through contact; yet there is no analogous property of which action at a distance is a consequence. This paper presents a two-pronged challenge to Stein's reading. I begin by arguing that impenetrability cannot imply action through contact because such an implication hinges on one's laws of motion in three senses: it must be physically possible for contact to occur, the laws must make coherent the notion of a trajectory from which a body deviates, and the necessity of introducing collision dynamics renders impenetrability otiose. I then turn to a close reading of De gravitatione and consider whether Newton himself sees his account of body as establishing contact action as prior to distant action in any sense. Although Newton does see impenetrability as rendering bodily action intelligible, ample room remains for action at a distance once one takes into account certain textual ambiguities and the provisional character of the narrative. By way of substantiating this reading and answering an objection of Stein's, I pivot to Newton's remarks concerning the nature of gravity in his correspondence with Bentley. Although Newton is often held to reject essential gravity as being in conflict with his metaphysical commitments, I offer a more austere reading on which Newton is decrying a kind of action that is unmediated, i.e. alleged to take place without a cause. By contrast, Newton carves out a place for action at a distance mediated by an immaterial agent as a perfectly acceptable explanation of natural phenomena.
Newton's Abductive Methodology
Presented by :
Christian J. Feldbacher-Escamilla, DCLPS: Duesseldorf Center For Logic And Philosophy Of Science
The Newtonian research program consists of the core axioms of the Principia Mathematica, a sequence of force laws and auxiliary hypotheses, and a set of methodological rules. The latter underwent several changes, and so it is sometimes claimed that, historically, Newton and the Newtonians added methodological rules post constructione in order to further support their research agenda.

An argument of Duhem, Feyerabend, and Lakatos aims to provide a theoretical reason why Newton could not have come up with the theory of the Principia in accordance with his own abductive methodology: since Newton's starting point, Kepler's laws, contradicts the law of universal gravitation, he could not have applied the so-called method of analysis and synthesis. In this paper, this argument is examined with reference to the Principia's several editions. Newton's method is characterized, and necessary general background assumptions of the argument are made explicit. Finally, the argument is criticized from the point of view of contemporary philosophy of science.
02:15PM - 04:25PM
SR-7
Empiricism and Determinism
Track : After Kant
Moderators
Marco Giovanelli, Universität Tübingen
For and Against Empiricism: Feyerabend on Naturalizing Epistemology
Presented by :
Jamie Shaw, University Of Toronto
Although Feyerabend is recognized as one of the most vociferous critics of empiricism, two features of his discussions of empiricism are underappreciated. First, Feyerabend does not elaborate one criticism of empiricism, but several. The secondary literature has elaborated in great detail on the more famous arguments that observation statements are theory-laden and on the difficulty of incommensurability (Preston 1997; Farrell 2003; Oberheim 2006). However, Feyerabend also develops another, largely distinct, line of argument: that empiricism is self-refuting. Despite the prominence of this argument in Feyerabend's corpus, it has received scant attention. Second, not only does Feyerabend reject empiricism, but he labels his own view a 'disinfected' empiricism. This suggests that the refutation of empiricism cannot be total, as he retains an element of it in his positive philosophy. In this paper, we address these gaps: we detail Feyerabend's lesser known argument against empiricism and show what empiricism emerges in its place. This sheds light on Feyerabend's stance towards empiricism and reveals the deeper meta-philosophical commitments of his own methodology.

Feyerabend argues that empiricism defines itself by the rejection of synthetic a priori principles and yet simultaneously commits itself to synthetic a priori principles. This argument is scattered across several texts. Since experience can be related to theories in a wide variety of ways (even theology and myth incorporate experience into theorizing in one way or another), the task of empiricism becomes to articulate a specific way that experience relates to theories -- in other words, how observations can, directly or indirectly, test theories. Feyerabend articulates two ways this can be done: 1) philosophical stipulation (e.g., an epistemology or semantics), or 2) an empirical hypothesis about the relationship between experience and theories.
Any resort to (1) requires synthetic a priori principles, since such stipulations are independent of science. Thus empiricism, understood as a philosophical theory, is self-refuting. (2) is the path that Feyerabend takes, and it reveals his commitment to epistemological naturalism: he construes epistemology, and philosophy more generally, as part and parcel of science. While he never explicitly articulates this naturalism, it follows from his earliest papers on meta-philosophy (Feyerabend 1955, 1956). This shows that, despite Feyerabend's silence on his meta-philosophical naturalism, it continues to inform his mature thought.

(2) dictates that empiricism as an empirical research program requires stipulating what physical processes constitute experience, and how these processes can be causally connected to theories. Feyerabend's examples par excellence of this are Aristotle and Bohr, whose physical theories and epistemologies come hand in hand. This has the implication that empiricism may be false in the sense that experience may be irrelevant to theory testing; the human body as a measuring device may become obsolete as science progresses. He provides a hypothetical account of this in his "Science Without Experience" (Feyerabend 1969), where experience is unneeded for theory testing. This retains the empiricist rejection of synthetic a priori principles: all principles, those of epistemology included, can be rejected in light of scientific progress.
Telling Stories in Science: Feyerabend and Thought Experiments
Presented by :
Michael Stuart, University Of Geneva
Paul Feyerabend is sometimes dismissed as a "clown," an "enfant terrible," and the "worst enemy of science," someone who defends voodoo and astrology, attacks strawpeople, misses the point, and has no positive view at all. Others see him as an exciting philosopher who played a crucial role in the development of ideas we now take for granted in philosophy of science, including pluralism, the disunity and value-ladenness of science, feminist philosophy of science, and green philosophy. Feyerabend's legacy is not settled, and new scholarship on his recent (posthumous) publications aims to sway us in one direction or the other.

This paper aims to contribute to that discussion by extracting and appraising Feyerabend's views on scientific thought experiments, tools of imagination that scientists use to achieve various epistemic ends. The history of the philosophy of thought experiments has included careful consideration of Kuhn, Putnam, Duhem, Mach, Lakatos, and other big names of the twentieth century, but so far almost nothing has been written about Feyerabend. This paper aims to extract Feyerabend's views on the topic, since he never set them out explicitly.

Feyerabend's most influential work was Against Method, eight chapters of which present an extended case study with a specific focus on Galileo's thought experiments. And a recently unearthed letter from Feyerabend to Kuhn, written before 1964, contains some of his other explicit views on thought experiments. But Feyerabend's later work on the epistemology of drama is also relevant, including the critical role he saw for stories and myths in science. For the later Feyerabend, all truth is mediated through story-telling. There are better and worse stories, and a good story is one that is interesting, appealing, revealing, comprehensible, coherent and surprising. A myth is a story that has congealed into dogma.
Myths can be useful as organizing principles, but when a single myth dominates in a community, it stifles imagination, and we have an epistemic-cum-ethical duty to overturn it. The thought experiments that Feyerabend focuses on are special kinds of stories that break us out of restrictive myths. Because they are stories, thought experiments cannot be objectively good or bad, since some of their success conditions are sensitive to psychological and social factors.

I conclude by connecting Feyerabend's views to two ideas in the modern literature on scientific thought experiments. First, Feyerabend provides several powerful new arguments against the claim (defended by John D. Norton and others) that the epistemology of thought experiments is just the epistemology of deductive or inductive arguments. Second, Feyerabend's discussion of the epistemic use of drama extends the modern claim that the specifically narrative quality of thought experiments must be taken into account if we want a complete epistemology of thought experiments.
Reassessing Bas van Fraassen’s empiricist philosophy of science
Presented by :
Maarten Van Dyck, Ghent University
Bas van Fraassen's The Empirical Stance (2002) has been, slightly provocatively, characterized as "the most robust piece of transcendental philosophy written in the past 100 or so years" (Richardson 2011). Richardson's analysis was mainly focussed on Van Fraassen's voluntaristic epistemology, but I will use his characterization as an invitation to reassess the whole of Van Fraassen's oeuvre and its place in twentieth-century philosophy of science. My paper has three related aims. Firstly, it will highlight the relations between Van Fraassen's empiricism (a philosophical stance), his voluntarism (a position in epistemology), his empiricist structuralism (an analysis of scientific representation), and his constructive empiricism (a view on the aim of science). While each of these positions has been widely debated, the important relations among them have almost never been analyzed in any detail. I will show how the post-Kantian perspective suggested by Richardson helps bring these relations into focus. Secondly, it will emerge that the historicity of knowledge, which Richardson already identified as central to Van Fraassen's epistemology, is crucial for all aspects of his philosophical thinking. This will allow me to stress the often implicit but overall guiding importance that the history of science has for Van Fraassen's analyses, which are typically focussed more on logical and structural aspects of scientific theorizing. Thirdly, this will be used to reassess the debates on scientific realism and Van Fraassen's central place within them. A rather subtle picture will be sketched of what it means to identify the aim of science for Van Fraassen. This cannot simply be read off from scientific practice, nor can it be straightforwardly established by an epistemological argument; it requires a hermeneutic decision that is informed but not determined by empirical description and epistemological argumentation.
This hermeneutic dimension allows us to reconceive what is at stake in the debates on scientific realism for Van Fraassen. Ultimately it will transpire that Van Fraassen occupies a rather unique place within late twentieth-century philosophy of science, but one that can be naturally connected with the projects of philosophers earlier in the century who tried to interpret science in a post-Kantian context (including the logical empiricists, but also thinkers such as Pierre Duhem and Ernst Cassirer), thus confirming but also extending Richardson's earlier analysis.

REFERENCES
Alan Richardson (2011). "But what then am I, this inexhaustible, unfathomable historical self? Or, upon what grounds may one commit empiricism?" Synthese 178 (1), 143–154.
Scientific determinism revisited
Presented by :
Donata Romizi, University Of Vienna
The received opinion tells us that scientific determinism is rooted (at least historically) in Newtonian or classical physics, that it culminates in the Laplacian dream of an omniscient "intelligence", and that it has been overcome through the emergence of quantum physics (such an account is to be found in most works of reference; see e.g. Weatherford 2005: 209). Some scholars have already shown flaws in this picture. For example, Cassirer (2004 [1937]) and Hacking (1983; 1990, ch. 18) have both suggested (in different ways) that an explicit concept of scientific determinism only emerged in the late 19th century, quite some time after the development of Newtonian mechanics. Earman (1982) and Norton (2008) were quite successful in severing the bond between scientific determinism and classical mechanics (though not from a historical point of view). Van Strien (2014) recently questioned this bond historically as well, suggesting that Laplacian determinism could not really be based on the equations of motion of classical mechanics. Finally, there is quite a lot of evidence that determinism was already being "eroded" (to use Hacking's expression) long before the emergence of quantum physics: see for example Krüger, Daston, and Heidelberger (eds. 1987) on the so-called "probabilistic revolution", or Stöltzner (1999 & 2003) and Coen (2002 & 2007) on the so-called "Vienna Indeterminism". It is time to venture a new overall account of scientific determinism and its history. In my paper, I would like to tackle especially the following three questions. (1) What are we talking about when we refer to a "scientific determinism" prior to the second half of the 19th century? As Cassirer and Hacking have shown, scientific determinism was no identifiable, specific philosophical (or scientific) position until then. So how do we explain its implicit concept?
Relying especially on Pulte (2005) and Schiemann (1997), I would like to suggest that a certain (Aristotelian and rationalistic) conception of the nature of scientific theories and their relationship to reality made scientific determinism almost tautological. The second question to be asked, then, is: (2) how did scientific determinism cease to be tautological and become a specific philosophical position? And (3): why did this happen at a time (the second half of the 19th century) when scientific determinism was being challenged and, in many respects, no longer seemed plausible? Hacking (1983: 457) already pointed to this paradox (without explaining it): scientific determinism apparently emerged at the very time it was being "eroded". How so? Relying on both primary and secondary sources, I would like to show how, in the course of the 19th century, two "parties", pro-determinism and anti-determinism, emerged together in a historical context in which scientists were also acting as "public men" and the question of determinism had become a matter of worldview (or Weltanschauung), entangled with political and religious issues.
02:15PM - 04:25PM
SR-8
Kantian and Neo-Kantian Themes
Track : After Kant
Moderators
Huaping Lu-Adler, Georgetown University
The differential concept and sensible intuition in Salomon Maimon’s Essay on Transcendental Philosophy
Presented by :
Scott Edgar, Saint Mary's University
In Salomon Maimon's Essay on Transcendental Philosophy (1790), the concept of the differential plays a role in his theory of knowledge that is both obscure and apparently central. In Chapter 2 of the Essay, Maimon connects the differential concept to the idea of sensible intuition, arguing that there are sensible intuitions that exist below the level of the subject's conscious awareness, and that those sensible intuitions are "differentials." Multiplicities of these differential sensible intuitions are combined by the imagination, in accordance with a concept of the understanding, to produce determinate objects of intuition. Maimon's use of the differential concept thus appears to do most of the philosophical work in his rejection of the view of things in themselves that he attributes to Kant: namely, the view that things in themselves exist independently of the subject and affect the subject, thereby giving rise to sensations in the subject's consciousness. Whereas Kant (as Maimon reads him) had to appeal to things in themselves to explain how a sensible representation can arise in consciousness when previously there had been none, Maimon thinks he can explain that occurrence as "differentials" being combined to produce the representation in question. However, Maimon's account raises at least two interpretive questions. First, does he intend his use of the term 'differential' literally or metaphorically? That is, does the account he gives of unconscious sensible intuitions as differentials use the differential concept in exactly the same way he uses it in his account of the epistemic foundations of calculus? Or does his use of the term in those two contexts differ? Second, whether Maimon uses the differential concept literally or metaphorically, why does he think the differential concept is a useful one for him to appeal to in his account of sensible intuition? This paper attempts to answer both of these questions.
To do so, it examines Maimon's philosophy of mathematics, and in particular, his remarks on the differential concept, the infinite, the infinitesimal, limit concepts, and the concept of magnitude, including his use of Newton's "Introduction to the Quadrature of Curves." The paper highlights the centrality of philosophy of mathematics for the theoretical philosophy of at least one strain of post-Kantian idealism, and provides background to the account of infinitesimals and intuition developed late in the nineteenth century by the Marburg School neo-Kantian Hermann Cohen. 
Helmholtz’s “Counting and Measuring”: a Contextualized Interpretation
Presented by :
Biying Ling, University Of Chicago
Helmholtz's 1887 "Counting and Measuring" is often juxtaposed with the modernist view of number and arithmetic, while his discussion of the foundations of measurement often fades into the background. More recently, this aspect of his paper has received more attention, and scholars have debated whether Helmholtz should be regarded as a forerunner of the "representational theory of measurement." My paper interprets Helmholtz in his contemporary scientific and technological contexts in support of the representationalist reading. After the 1870s, Helmholtz did extensive research in electricity and magnetism. At the Physikalisch-Technische Reichsanstalt under his leadership, the calibration of electrical standards according to agreements made at the International Electrical Congresses was a main task. By showing the meaning of electrical quantities and the techniques of their measurement, I argue that Helmholtz's views can be understood in more depth in relation to his contemporary scientific practices. More specifically, in late 19th-century electrical measurement, quantities were meaningful insofar as they facilitated the reproduction of well-established laws and phenomena. They were never directly measured, but only indirectly through their effects. Notions such as "current intensity" or "electromotive force" stood for relations between other concepts and measurable quantities, and they were materialized through complex laboratory devices and procedures. The unit was not the starting point of measurement, but the last stage of standardization. Given this context, a theory of measurement that focused on how mathematical operations and relations were expressed through specific experimental operations and occurrences would be more relevant to the scientist than a theory of measurement that focused on the a priori form of magnitudes.
Accordingly, in Helmholtz's theory, the meaning and criteria of measurement boiled down to two questions: the meaning of the equality relation and that of the additive operation "in the realm of facts." Representing physical relationships as magnitudes could only ever be based on empirical knowledge of the behavior of bodies in relation to others. Further comparison between Helmholtz and the Kantian view of quantity (held, e.g., by Kant or Hermann Cohen) highlights Helmholtz's divergence from conventional views of measurement. Both Kant and Cohen regarded the part-whole structure as definitive of magnitudes, which Helmholtz dismissed. Helmholtz's early manuscript resembled the Kantian definition more than his own 1887 version did, suggesting that scientific practices between 1847 and 1887 were a motivating factor for his 1887 views. As mathematics and physics evolved into autonomous realms of knowledge in the 19th century, Helmholtz's 1887 paper can be seen as an attempt to redefine the relationship between the two disciplines.
The Splitting of Truth: Grete Hermann’s Novel Neo-Kantian Approach to Science
Presented by :
Elise Crull, The City College Of New York, CUNY
In the 1930s – a time when monumental figures like Einstein, Schrödinger, Bohr and Heisenberg grappled intensely with the question of how to interpret quantum mechanics – a young doctoral student of Emmy Noether's named Grete Hermann became interested in defending Kant's notion of causality in the face of this new and apparently indeterministic theory.  In 1933 Hermann composed a manuscript on determinism in quantum mechanics which she sent to Dirac and also to Copenhagen, where it was read with interest by Bohr, Heisenberg and von Weizsäcker.  Based on the promise shown in this essay, Hermann was invited to attend Heisenberg's colloquia in Leipzig in the winter term 1934-1935 – an offer she accepted.  Her visit culminated in a lengthy essay published in early spring of 1935 concerning the natural-philosophical foundations of quantum mechanics.  Hermann's 1935 essay is becoming increasingly recognized as one of the first and finest philosophical treatments of quantum mechanics.  Although Hermann's aim was to demonstrate consilience between the principle of causality and quantum theory, she far exceeds this goal: she in fact outlines the contours of a novel Neo-Kantian interpretation of quantum mechanics.  Due to its author's rigorous dual training in mathematics and natural philosophy, this interpretation does particular justice to the intricacies of the theory and offers a view unlike others forming the canon of early interpretations. Not only does Hermann present a fascinating interpretation of quantum mechanics, but along the way she uncovers or makes more perspicuous several key aspects of the theory – including the role of Bohrian complementarity and the relative nature of observational context for maintaining an intuitive picture of physical processes. She is also the first to clearly articulate in print the uniquely quantum-mechanical phenomenon christened 'entanglement' by Schrödinger later that year.  This talk has two aims.
The first is simply to introduce the Neo-Kantian interpretation of quantum mechanics suggested by Hermann in her 1935 essay, highlighting how Hermann's subtle understanding of this new physics led her to suggest that quantum mechanics demands a reconceptualization not just of other scientific theories, but of all domains of human inquiry.  The second aim is to begin a critical analysis of the precise ways in which Hermann's approach departs from other, better-known neo-Kantian interpretations of modern physics – in particular those of Schlick and Carnap. I will argue that Hermann's unique appreciation of, for example, the central role of entanglement and the relative context of observation in quantum mechanics set her on her way toward establishing a general epistemology radically different from those laid down by her more famous contemporaries.
Dilthey and Natorp on Progress in Science
Presented by :
Nabeel Hamid, Concordia University
Around 1900, German philosophy witnessed intense debate on the foundations of empirical science, in both its 'natural' and 'human' scientific modes. In 1910, Paul Natorp and Wilhelm Dilthey published monographs addressing the question of objectively valid knowledge in the exact and the historical sciences, respectively (Die logischen Grundlagen der exakten Wissenschaften; and Der Aufbau der geschichtlichen Welt in den Geisteswissenschaften). The purpose of this paper is to compare the philosophies of science contained in these works. It thus contributes to recent reconsiderations of Dilthey's relation to Neo-Kantian authors such as Cohen, Cassirer, or Windelband (e.g. Damböck 2016; Orth 2016; Kinzel 2019) as one of constructive engagement, and not primarily, as it has traditionally been seen, one of antagonism. A comparison reveals both surprising similarities and important differences. Dilthey shares with Natorp certain key commitments. For instance, both reject Kant's distinction between sensibility and understanding as distinct sources of knowledge in favor of a unified synthetic act. Both also regard this synthetic activity as having produced modern science, natural and historical, which philosophy now seeks to understand. Finally, both deny that the objectivities produced in the actual course of science can be grounded in logical formalisms. This leads them to a genetic theory of knowledge which conceives objectivity as a function of scientific method. To understand science is to understand its process of development; thus Natorp characterizes science as a fieri rather than as a faktum. Dilthey and Natorp diverge, however, on the proper understanding of this development. For Natorp, objectivity results from a law of scientific development.
But grasping science as a process does not mean that scientific knowledge is essentially historical; he is acutely aware of the charges of historicism and psychologism, and their implication of relativism, that threaten genetic approaches to epistemology. According to Natorp, the lawfulness of science does not determine a temporal unfolding of knowledge, but rather expresses a logical series of stages constitutive of scientific reason. This atemporal but processual character is exemplified in the development of modern mathematics, in which Natorp finds a progressive yet still ahistorical foundation for natural science. Dilthey, by contrast, embraces the historicist standpoint. For him, the developmental character of science can only be understood as a temporal process, because all science is ultimately an expression of lived experience. As an essentially human activity, science expresses not only cognitive but also affective and volitional elements, all three of which are intertwined in its actual history. This difference between Natorp's and Dilthey's epistemologies is rooted in their distinct conceptions of the unified synthetic act underlying all knowledge. Whereas Natorp's Grundakt der Erkenntnis is a purely cognitive act, for Dilthey the origins of objectivity lie in a combination of elementary operations which includes judgment, feeling, and will, a totality he sometimes identifies simply as 'life itself'. Consequently, the objectivities studied by the philosopher of science must be construed as only partially cognitive (yet not wholly non-cognitive). For Dilthey, the history of science thus consists in mere development, not in definite progress. 
02:15PM - 04:25PM
SR-9
Carnap
Track : After Kant
Moderators
Christian Damboeck, University Of Vienna / Institute Vienna Circle / Institute Of Philosophy
Carnap, Maxwell, and the Double Nature of the Method by Analogy
Presented by :
Vera Matarese, University Of Bern
The success of the method by analogy, considered by Mach (1905) to be the leitmotif of scientific thinking, can be seen not only from its long history, starting with Aristotle, but also from its application in different scientific disciplines. It is no accident, then, that it has featured as a key topic in works of formal epistemology since the advent of logical positivism. However, it has been argued that the epistemological accounts of the method by analogy developed hitherto do not faithfully capture the process of scientific practice. According to Norton (Ms), as long as we approach scientific reasoning by analogy formally, there will always be a gap between philosophy and science. My talk, which may be regarded as a reply to what I call 'Norton's pessimism', addresses the question of whether formal accounts of the method by analogy can faithfully represent its application in scientific practice. Given that the history of philosophy and the history of physics offer different formal accounts and different applications, I juxtapose the very first formal account of the method by analogy, the one developed by Carnap (1950), with Maxwell's famous use of analogy in his electromagnetic field theory (Maxwell 1865). First, I explain Carnap's account of the method by analogy and show why it is considered inadequate to capture its use in scientific practice. In particular, I focus on two criticisms. The first is that this method is normally applied when one of the analogous systems under scrutiny already features a theory connecting the system's properties by causal relations; Carnap's account, on the contrary, considers only analogues characterized by collections of unrelated properties. The second criticism is that Carnap's account is based on a perfect analogy, not only because the two analogues do not have any differing properties that we know of, but also because their properties are identical rather than merely analogous.
By contrast, in science there are no perfect analogies between the analogues, as they normally differ in many regards, and their properties are only similar, not identical. However, Maxwell's use of an incompressible fluid system to understand electric and magnetic systems shows that there are cases that meet the characterization of Carnap's account. I point out, in fact, that Maxwell did not have a theory of incompressible fluids at hand, and that he constructed the system of incompressible fluids by reproducing the systems under scrutiny; in this way, the analogues shared all their properties. Moreover, even though the analogues differed in virtue of their different physical natures, the identity of their properties was guaranteed at a mathematical level: the analogues, which featured different physical properties, could nevertheless be described by the very same mathematical quantities. In my conclusion, I draw general lessons from this discussion: given that Maxwell's case is representative of instances of scientific modelling, Carnap's account faithfully represents cases of modelling where the method by analogy is employed by creating an imaginary model that features the same properties as the system under investigation and that is not described by any scientific theory.
Sellars on Carnap and Quine on Analyticity and Ontology
Presented by :
Takaaki Matsui, The University Of Tokyo
In recent years, philosophers and historians of philosophy have become more interested in the debate between Carnap and Quine on analyticity and ontology, as well as in Sellars's philosophy. However, work on the Carnap-Quine debate and work on Sellars have been driven by different motivations, and thus only a few attempts have been made to connect these three great philosophers of the mid-twentieth century. This paper aims to bridge the gap by reconstructing how Sellars responded to Carnap and Quine. In particular, it aims to show (1) how Sellars responded to Quine's attack on the analytic-synthetic distinction and Quine's holism, and (2) how Sellars's response to Quine differs from Carnap's. In "Two Dogmas of Empiricism," Quine famously rejected as circular several attempts to define analyticity, and then proposed confirmation holism as an alternative. As for analyticity, Sellars defends it by construing it in terms of semantical rules. As for holism, Sellars does not reject it as such, but only what Quine takes to be its consequences, i.e., the claim that ontological questions are on a par with questions of natural science, and the ontological commitment to abstract entities. I argue that Sellars's defense of analyticity in terms of semantical rules is interesting because his conception of semantical rules is quite different from Carnap's more famous one. Unlike Carnap, Sellars stresses the normative role semantical rules play in our linguistic practice. According to Sellars, semantical rules are embedded in natural languages, though in a vague form, a fact suggested by our practice of sometimes correcting the use of words by others, especially by children. Based on these considerations, Sellars develops a theory of language as a rule-governed phenomenon analogous to games like chess. On his view, analytic sentences in a language L are those sentences that users of L are authorized to assert by the rules of L, without appeal to further empirical ground or evidence.
One advantage of this construal of analyticity is that it enables us to explain the revisability of analytic sentences as analogous to the revisability of the rules of games. I also argue that Sellars's rejection of Quine's commitment to abstract entities suggests an interesting way to avoid Quine's radical, "more thorough pragmatism." Sellars certainly admits that a linguistic framework of abstract entities is indispensable for natural science. He also accepts that there is a sense in which the adoption of a framework of abstract entities can be justified empirically. Further, Sellars himself emphasizes that natural science tells us what really exists. Nevertheless, Sellars rejects Quine's claim that the status of abstract entities is the same as that of theoretical entities in natural science, and thus he also rejects Quine's commitment to abstract entities. The key to Sellars's argument is his emphasis on the role causality plays in explanation. I argue that despite his shared commitments with Quine to holism and naturalism, Sellars suggests an alternative, more realistic picture of the natural world.
The ontology of scientific models. A Carnapian view.
Presented by :
Antonis Antoniou, University Of Bristol
In his seminal book 'Explaining Science' (1988), Giere presented a theory of models as abstract systems for which certain ontological questions should be answered. With this in mind, Thomson-Jones (2010) has more recently described scientific models as 'missing systems': although they are described as real concrete objects, we know that there are no such idealized objects in the actual world fitting the description. The challenge is therefore to find an appropriate way to understand these missing systems. This challenge is often described as 'the problem of the ontology of models', and its crux can be summarised in the following question: [Q] What are models? During the last two decades, several attempts have been made to address this question, each of which comes with its own strengths and weaknesses. For example, following Giere, Psillos (2011) takes models to be real existing abstract objects, whereas Godfrey-Smith (2006) and Frigg (2010) have argued that models are useful fictions which, literally speaking, do not exist. An alternative approach, dating back to Suppes (1960) and van Fraassen (1980), focuses on the mathematical aspect of models and sees them as set-theoretical structures and trajectories in state spaces. The ongoing reflection on the problem of the ontology of models has, unsurprisingly, led to a further discussion regarding the metaphysics of abstract objects and their properties, bringing forward a host of difficult and well-known problems. If models are abstract objects, do these abstract objects really exist? If so, where do they exist? And how is it possible for them to possess physical properties, as some authors have suggested? If not, as the fictionalist suggests, then how do they possess their properties, and how is it possible to compare them with their target systems, as scientists often do?
If, on the other hand, models are mathematical structures, how can they stand in isomorphic relations with real systems, since the latter are clearly not mathematical structures? Criticisms along these lines are often presented as challenges for all three main accounts of the ontology of models, making the problem of ontology seem unresolvable. The aim of this paper is to argue that these ostensibly insurmountable difficulties stem from a false reading of [Q] as a metaphysical question, and thus that they should not be taken as genuine problems. Building on Carnap's views (1950), it will be shown that [Q] is either (i) an internal theoretical question within an already accepted linguistic framework or (ii) an external practical question regarding the choice of the most appropriate form of language for describing and explaining the practice of scientific modelling. These two readings of [Q] jointly provide all the necessary conceptual tools for developing a robust theory of models whilst keeping away from the aforementioned metaphysical puzzles. The further reading of [Q] as an external theoretical question, that is, as a purely metaphysical question regarding the real nature of models, independently of any form of language that might be used to describe them, is misleading and redundant.
Programming tolerance: Carnap and the rise of computer languages
Presented by :
Daniel Kuby, University Of Konstanz
Samuel Hunziker, ETH Zurich
The Principle of Tolerance was first articulated in Rudolf Carnap's Logische Syntax der Sprache in 1934. It would remain one of the cornerstones of Carnap's philosophical attitude for the rest of his life, permeating other notions, like the task of explication and logical reconstruction. While the legacy of the Principle is mainly discussed in the philosophy of logic (logical pluralism), we want to trace a different, and hitherto hidden, legacy, relating the Principle to the birth of programming languages in the United States in the 1950s and 1960s. Our first, systematic, claim is that Carnap put forward the Principle not (only) for logic, but for formal languages; its intended application was logic, because the thesis of Logical Syntax is that logic can be reduced to syntax. It is therefore in line with Carnap's original formulation to uphold the Principle for formal languages in general. As a second step, we will build on available evidence that the conceptualization of (high-level) programming languages as languages in the 1950s built essentially on the conception of formal languages put forward by Carnap and Tarski in the 1930s, in particular the distinction between syntax and semantics (and pragmatics), and the object-/meta-language distinction. The rise of the "language" metaphor supplanted the "translation" metaphor between human operator and machine through "programming notation(s)". A natural question stemming from the previous points is whether the Principle played any role in the early conceptualization of programming languages. To seek an answer to this question, we will look at the trading zone between formal linguistics, programming, and logic. In particular, we will study the discussion around the design of hardware-independent ("universal") computer code like ALGOL.
The idea of coding computer programs that were independent of the specific hardware on which they had to be executed was the specific context in which the notion of a "programming language" first arose. Here, the goals of universality and "problem-orientation" necessitated a trade-off between two aims that in Carnap's conception had been kept separate: the universality of constructed international auxiliary languages, like Esperanto, and the task-oriented design of constructed formal languages.
04:10PM - 04:30PM
HSS Foyer
Coffee & Tea Break
04:30PM - 06:00PM
HSS Auditorium
Keynote Address - Sean Hsiang-lin Lei
Day 3, June 25, 2020
09:00AM - 10:40AM
SR-4
Science and Religion in the Early Modern Period (Symposium)
Track : Kant and Before
Moderators
Christopher Noble, New College Of Florida
This panel examines the relationship between science and religion in the early modern period, with specific focus on Émilie du Châtelet and David Hume. Within the context of a growing interest in Newtonianism, particularly as an experimental method of reasoning and thinking about the world, Émilie du Châtelet and David Hume made important contributions to the relationship between philosophy and science. This panel will consider the interpretation and application of the experimental method of reasoning in the works of both philosophers in relation to other types of reasoning. An important and interesting task for the experimental method was its use in examining, and perhaps establishing, religious claims. Within a philosophical context where science and religion had an enigmatic and complex relationship, particularly on epistemological grounds, the attempt to integrate the experimental method of reasoning into philosophical discussions provides fruitful ground for understanding the relationship between science and religion. The first paper examines the epistemological positions of Philo and Cleanthes in Hume's Dialogues and discusses these positions in relation to the Discontinuity Thesis: the thesis that science and philosophy are fundamentally of a different kind from theological reasonings. The second paper examines Émilie Du Châtelet's argument for a commitment to the Principle of Sufficient Reason from the possibility of scientific reasoning. The third paper examines the role that experimental reasoning plays in Hume's philosophy of religion. Taken together, this panel will discuss the relationship between science and religion with particular attention to notions of scepticism, the Principle of Sufficient Reason, the experimental method and other forms of reasoning, and theology. In so doing, it seeks to contribute to the existing rich discussion of the role of science and religion in the early modern period.
Du Châtelet’s Argument for a Commitment to the Principle of Sufficient Reason
Presented by :
Fatema Amijee, National University Of Singapore
Most commentators have assumed that while Émilie Du Châtelet's Foundations of Physics (1740) is an important and original work that demonstrates her commitment to Leibnizian metaphysics and the Principle of Sufficient Reason (PSR), Du Châtelet herself does not have an original argument for either a commitment to the PSR or its truth. I argue against this widespread assumption and show that implicit in the Foundations is an argument for a commitment to the PSR from the possibility of scientific reasoning. This argument takes as its starting point our commitment to scientific reasoning, and in particular to abductive reasoning in science. It then shows that the PSR is a presupposition of such reasoning. Thus, insofar as we are committed to abductive (and more generally, scientific) reasoning, we are also committed to the PSR. I will show that this argument is both original and distinct from any argument for the PSR presented by Leibniz. I will further argue that the argument is valid, and that a case can be made for its soundness. An argument for a commitment to the PSR, however, need not be an argument for its truth. I will argue that Du Châtelet's argument is merely an argument for a commitment to the PSR rather than an argument for its truth, but that this does not diminish its importance in the rationalist tradition. Finally, I will show how we can extend Du Châtelet's argument to other types of reasoning.
Science and Religion in Hume’s Dialogues
Presented by :
Hsueh Qu, National University Of Singapore
In this paper, I will examine the epistemological positions of Philo and Cleanthes. I find that while Philo's attitude towards scepticism seems to explicitly mirror that of the first Enquiry in EHU 12, Cleanthes' own position seems to correspond to that of the Treatise in THN 1.4.7. In this, I also highlight some differences between Philo and Cleanthes, and also the Enquiry and the Treatise, on scepticism. One key disagreement between the two is Philo's subscription to, and Cleanthes' denial of, what I call the Discontinuity Thesis: the thesis that science and philosophy are fundamentally of a different kind to theological reasonings. Philo asserts this thesis, while Cleanthes challenges it, and accuses the sceptics of inconsistency in continuing to accept abstruse science while nevertheless denying what he takes to be plain and simple theological reasoning. This difference in respect of the Discontinuity Thesis is also a point of difference between the Treatise and the Enquiry. Hume worked on the Dialogues Concerning Natural Religion shortly after the publication of the first edition of the Enquiry, and roughly contemporaneously with his editing and publishing of later editions. Hume first published the first Enquiry in 1748, and would continue publishing later editions until 1753. Meanwhile, on 10 March 1751, Hume refers to a draft of the Dialogues in a letter to Sir Gilbert Elliot of Minto: "You wou'd perceive by the Sample I have given you, that I make Cleanthes the Hero of the Dialogue. Whatever you can think of, to strengthen that Side of the Argument, will be most acceptable to me" (HL i.153–4). Given this timeline, it seems that Hume's endorsed position in the Dialogues would be most closely aligned with the character whose underlying epistemology matched that of the Enquiry. If I am right that this character is Philo, then we have the result that Philo acts as Hume's spokesperson.
And indeed, Hume does continue in the above letter to Elliot: "Had it been my good Fortune to live near you, I shou'd have taken on me the Character of Philo, in the Dialogue, which you'll own I coud have supported naturally enough: And you woud not have been averse to that of Cleanthes" (HL i.154). Of course, the matter is complicated by a variety of factors, not least that Hume also claims to make Cleanthes the hero of the Dialogues. By and large, I set those aside here. I hope in this paper to offer a positive argument for Philo being Hume's voice in the Dialogues, on the basis of a detailed examination of the attitudes to scepticism, science, philosophy, and theology espoused in the Dialogues, as well as the Enquiry.
The role of experimental reasoning in Hume’s philosophy of religion
Presented by :
Daryl Ooi, Independent Scholar
In the Treatise, Hume famously set out to establish a new 'science of man'. Hume argued that Natural Religion is "in some measure dependent on the science of Man" (THN 0.4) and endeavoured to establish a system of the science of Man through the experimental method of reasoning. In this paper, I consider two more specific questions: in what way did Hume think that Natural Religion depended on the science of Man? And can Natural Religion, when studied through the experimental method of reasoning, be considered a reasonable belief? To answer these questions, I will examine several passages from Hume's works, focusing on his later writings. A common response from commentators is that, on Hume's view, the use of the experimental method would likely yield the conclusion that Natural Religion does not amount to a reasonable belief, or, if it does, "this reasonable belief amounts to so very little" that it "is religiously insignificant" (Gaskin 1974). Commentators often point to the weakness that Philo ascribes to the Design argument in the Dialogues to support such a conclusion. Another common response is that for Hume, belief in God is a 'natural belief' and therefore exceeds the scope of the experimental method. In this paper, I will argue against both of these responses. I first examine the notion of 'religion' that Hume employs and argue that our task is complicated by the different uses of the term 'religion' across his works. I then argue that, in employing the experimental method, Hume is confident that some aspects of 'religion' can be established, specifically, that there exists an intelligent author of the universe. However, Hume makes it clear that the experimental method can establish this conclusion, and nothing more. Attempts to infer other attributes of the Deity, including the inference of moral attributes from natural attributes, are bound to fail because they exceed the scope of our faculties and the experimental method.
Nevertheless, I will argue, Hume still leaves open the question of whether other aspects of 'religion' might be held reasonably, namely, through the means of faith. In examining passages such as EHU 10.40 and EHU 12.32, I argue that Hume leaves open the question of the plausibility or truth of these other aspects of 'religion', though he allows that one who commits to such beliefs might be said to be doing so reasonably. On this reading, faith is compatible, or at least not necessarily incompatible, with the experimental method of reasoning.
09:00AM - 10:40AM
SR-6
Experiment and Causes
Track : Kant and Before
Moderators
Troy Vine, Humboldt University Of Berlin
Induction, Reduction, and Action in Francis Bacon’s Philosophy of Nature
Presented by :
Ori Belkind, Tel Aviv University
Francis Bacon is known for his articulation of a ground-breaking experimental philosophy, in which scientific propositions are taken to be derived from sense-impressions using the method of induction. But what does Bacon mean by "induction"? In this paper, I argue that Bacon's notion of induction is misunderstood by most commentators as the method by which regular correlations between two sensible qualities ground certain causal claims or natural laws relating the qualities. The key to understanding Bacon's methodological remarks in the Novum Organum is his assertion that the scientist ought to find relations of dependency between "forms" and "natures". The notion of "nature" comes close to the notion of a "sensible quality", which includes qualities like heat, color, and air pressure. However, I argue that Bacon's notion of "form" does not describe another sensible quality. Rather, it describes active principles present in the corpuscular parts of matter that generate regular changes in corpuscular forms. That is, the notion of "form" represents in Bacon's mind a corpuscular theory that includes active principles of change, or "laws of action". This reading suggests the following claims regarding the proper way to read Bacon's method. First, matter theory, and in particular corpuscularianism, is essential to Bacon's method of induction, and makes possible the inferences from sense-impressions. Second, Bacon's method of induction is intimately linked to the idea of reduction. The claim is that empirical investigation should attempt to reduce sensible qualities to the underlying corpuscular forms and principles of change. Third, while Bacon attempts to raise the status of a science that is based on observations and scientific experiments, his method of induction is not without presuppositions.
While the actual active principles present in corpuscles are unknown, and ought to be investigated via observations and experiments, the investigation is set within the context of a corpuscular reductive program. The reading offered here draws inspiration from Rees's (1977, 1980, 1996) work on Bacon's matter theory and the role of Bacon's alchemy in shaping his understanding of matter. It also draws on Perez-Ramos's (1985, 1988, 1996) work on Bacon's "forms" and how to interpret this notion.
Bibliography
Rees, Graham (1977) "Matter Theory: A Unifying Factor in Bacon's Natural Philosophy?" Ambix: The Journal of the Society for the Study of Alchemy and Early Chemistry 24 (2): 110–25.
--- (1980) "Atomism and 'Subtlety' in Francis Bacon's Philosophy." Annals of Science 37 (5): 549–71.
--- (1996) "Bacon's Speculative Philosophy." In The Cambridge Companion to Bacon, edited by Markku Peltonen, 121–43. Cambridge: Cambridge University Press.
Perez-Ramos, Antonio (1985) "Bacon in the Right Spirit." Annals of Science 42 (6): 603–11.
--- (1988) Francis Bacon's Idea of Science and the Maker's Knowledge Tradition. Oxford: Clarendon Press.
--- (1996) "Bacon's Forms and Maker's Knowledge." In The Cambridge Companion to Bacon, edited by Markku Peltonen, 99–120. Cambridge: Cambridge University Press.
Margaret Cavendish on Experiment and Unaided Experience in Natural Philosophy
Presented by :
Marcus Adams, SUNY Albany
Margaret Cavendish's criticisms of the experimental philosophy in her work Observations upon Experimental Philosophy (1666) are well known. She excoriates those who use the microscope by claiming that they have wasted "their time with useless sports" (1666, 11) and argues that those who practice microscopy are like "boys that play with watry Bubbles, or fling Dust into each others Eyes…." Such "boys" are "worthy of reproof rather than praise" (Ibid.). In the end, she seems to side with speculative philosophy: "…I conclude, that Experimental and Mechanick Philosophy cannot be above the Speculative part, by reason most Experiments have their rise from the Speculative…" (1666, 7). Cavendish makes clear her reason for seeing the microscope as a waste of time. She argues that rather than unlocking nature's secrets, the microscope presents a distorted picture: "the more the figure by Art is magnified, the more it appears mis-shappen from the natural" (Ibid., 9). For example, she argues that "had a young beautiful Lady such a face as the Microscope expresses, she would not only have no lovers, but be rather a Monster of Art, then a picture of Nature…" (1666, 9-10). Given such criticisms, it seems natural to read Cavendish as an armchair speculative philosopher who denigrated the use of experience in natural philosophy. Indeed, she begins Philosophical and Physical Opinions (1666) by offering an account of what she calls the "only matter," which she describes as infinite and existing in different degrees: sensitive matter, rational matter, and inanimate matter (1666, 1). Similarly, she asserts, without argument, that just "[a]s there is but One only Matter, so there is but One only Motion" (Ibid., 4-5). Experience seems to play no role in such arguments. In this paper, I argue that, contrary to such an understanding, Cavendish understands unaided experience as providing a route to establish the principles of natural philosophy.
To substantiate this claim, the paper will trace Cavendish's discussions of the role of experience throughout the Opinions and Observations. Although Cavendish argues that "natural reason is above artificial sense" (1666, 12-13), she also holds that reason frequently gains insight by making an inference from unaided experience, or as she says, it does so "by [an object's] exterior actions, as by several effects." Indeed, rather than using reason alone, or instruments, such as the microscope, Cavendish holds that the "best optick is a perfect natural Eye" (1666, 12). The paper proceeds in three stages. First, I will discuss Cavendish's criticisms of microscopy and the use of experiments generally in natural philosophy. Second, I will show various contexts in which Cavendish accords priority to the experiences of unaided senses as establishing the principles of natural philosophy. Third, I will examine her discussions of the principles of motion and rest in the Opinions (1663), Observations (1666), and Philosophical Letters (1664) to show how this understanding of the interaction between unaided experience and reason operates in her natural philosophy.
Berkeley’s pragmatist theory of causation in De motu
Presented by :
Takaharu Oda, Trinity College, Dublin
This paper argues for a pragmatist theory of causation, based on George Berkeley's eighteenth-century metaphysics of science in his De motu (1721). Therein I read a crucial relationship between two different notions of the term 'cause': the metaphysical and the mechanical. By pragmatism I mean that, in line with mechanical causes or theoretical terms like forces, gravity and attraction, Berkeley takes metaphysical (efficient and final) causes to be causation that should be analysed or defined in terms of human needs or practices. That is, even though the conformity and uniformity of metaphysical causes with laws of nature cannot be absolutely comprehended with respect to divine will (in the theological domain), the metaphysical causation in science is supposed to be what is truly at work or in use. This pragmatism in Berkeley's metaphysics, resting on human and divine minds, can scaffold the utility and success of scientific explanation about physical, sensible qualities within our use (without unobservable, occult qualities). De motu is Berkeley's scientific yet metaphysical commentary on the physical theories of his time, especially Newtonian dynamics and mechanics, which he admired even as he depreciated its notions of absolute space, time, and motion. My explanatory focus is firstly upon his causal distinction between metaphysics, including theology, and mechanics, including dynamics. Drawing such a distinction, Berkeley clearly severs the task of the natural scientist (finding patterns in our ideas, say, of gravity and attraction) from that of the metaphysician (contemplating the causes of those ideas), such that the two tasks should not contradict each other. This is because the two domains of mechanics and (theological) metaphysics seem to deal with matters of disparate kinds.
By contrast, I argue for a feasible bridge between the two domains, because his pragmatist theory of causation, or his sort of inference to the best explanation, is still grounded in a realist fundamental metaphysics ('reason', DM §37) for the conformity and uniformity with laws of nature and motion. Before defending my pragmatist reading, I critically review three major alternative theories of causation in De motu on offer: reductionism (i.e. reducing theoretical notions like forces in dynamics to observational notions about motions of bodies in kinematics), instrumentalism (i.e. empirically holding the utility of dynamics for calculating bodily motions, even though its terms do not possess truth values), and structural realism (i.e. dismissing theoretical entities such as occult qualities, but not their formal structure, for scientific progress). Particularly, pragmatism differs from instrumentalism because, on the instrumentalist reading, theoretical terms like forces are not necessarily true, but can be fictional, for their utility in science. Clarifying why we cannot totally favour any of these readings, I finally vindicate my textual and philosophical rationale for Berkeley's pragmatism about causation in De motu (and also his earlier and later works and manuscripts). As a result, we will be in a new position to understand Berkeley's pragmatic method, or pragmatically scientific theory of causation, in which metaphysical and divine knowledge remains true without our comprehension of it in scientific discourse or inference. Berkeley's pragmatism is then concerned with scientific success grounded in the conforming truth of metaphysical causes.
09:00AM - 10:40AM
SR-7
Koyré, Frank, Cassirer
Track : After Kant
Moderators
Maarten Van Dyck, Ghent University
Art Historians’ Response to Koyré’s idealistic Philosophy of Science
Presented by :
Rémi Carouge, Université De Tours
Alexandre Koyré's account of Modernity as a cosmological and scientific revolution grounded in certain epistemological and metaphysical ideas is so widely known that it constitutes a basis for cultural history and philosophy of science. However, though Koyré never made a secret of his "idealist" conception of science, his own epistemological and metaphysical conceptions are rarely discussed. He vocally maintained his ground as the great change of focus towards the effective cultural and material conditions of scientific production occurred during the 1960s, which led to his categorization as an "internalist". Yet Koyré warmly welcomed the results of the art historian Erwin Panofsky on Galileo's aesthetic stance as the cause of his rejection of Kepler's astronomy. The historian of science Pierre Thuillier rightly implied that Koyré is not so much an internalist as a selective externalist: as it happens, an idealist who conceives science as an essentially theoretical activity, which can only develop from other purely theoretical activities such as theology or metaphysics. Thuillier wondered at the fact that Koyré's history of Modernity never mentioned the names of Brunelleschi or Alberti, historical figures who pioneered modern science as much as the Renaissance of the arts in the early fifteenth century, as Panofsky himself, and numerous art historians since, have argued. Taking this historical moment into account leads to a reassessment of Koyré's conception of science and Modernity without resorting to the methodology of the sociology of science that he always categorically refused. Indeed, following Koyré's own criteria for "Modernity", the most pivotal being the "breaking of the circle" and the "archimedean" conception of knowledge, I argue that the history of science cannot ignore the work of the early fifteenth-century Florentine "artists" whom art historians have thoroughly studied.
This, in turn, as I intend to show, contradicts Koyré's idea of modern science both as a purely theoretical activity and as an expression of theology- and metaphysics-driven thought, revealing modern science instead as a product of humanism, being at once a relativistic conception of reality and a claim to objective knowledge.
Relativity in science and outside: a comparative study between Philipp Frank and Ernst Cassirer
Presented by :
Philippe Stamenkovic, Independent Scholar
Recent literature has noticed how Philipp Frank and Ernst Cassirer both see in Einstein's theory of relativity a vindication of the concept of relativity, not only of knowledge, but more generally of "truth" (in fields other than science). Indeed, for both authors relativization constitutes (scientific, and more generally any) objectivity. There are, however, important differences in their interpretation of the theory of relativity, and in their application of the concept of relativity outside physics. Frank conceives "relativization" as the specification of the conditions of validity of particular statements, not only in physics but also in ethics, politics or everyday experience. He grounds this application of relativity outside physics on his empiricist interpretation of Einstein's theory of relativity, together with a pragmatic, operationalist theory of meaning, which insists on the qualification, and thus relativity, of particular statements. By contrast, Cassirer's idealistic interpretation of Einstein's theory of relativity insists on the invariance of laws, whereas outside physics, and more generally science, Cassirer considers relativization between such "symbolic forms" (science, language, art, myth...), each having its own legitimacy. Frank's interpretation of relativity (both within and outside physics) is fully understandable not only because of his logical empiricist stance, but also given the historical context. In his double fight against philosophical misinterpretations of Einstein's theory and the political totalitarianisms of the day, Frank understandably insists on the relative (i.e. empirically verifiable) features of the theory, and not on its absolute or invariant features (as Cassirer does).
By drawing attention to the concrete consequences of statements, and the need to specify the meaning of the terms used, Frank rightfully wants to debunk any "absolute slogans" which can easily be used to support any line of action. Nevertheless, his application of the concept of relativity outside science, in ethics and politics, is disputable. In this communication, I show that it allows for a kind of moral relativism. Against this, I claim that not all ethical values or principles need to be "relativized" as Frank recommends, but that some can, and indeed must, be upheld absolutely (such as human rights, or the prohibition of torture). What is more, Frank's quick association of idealism with political totalitarianism is unwarranted, and refuted by Cassirer's example. Frank's undue extension of the principle of relativity outside physics stands in sharp contrast to Cassirer's philosophy of symbolic forms, which takes care to preserve, and at the same time limit, the legitimacy of each symbolic form (such as science). By contrast, Frank's expanding conception of logical empiricism (unduly incorporating Cassirer's interpretation of quantum mechanics), and more generally what we might call his scientism (incorporating sociological and psychological considerations into a "general science of human behaviour"), may explain his totalizing application of the concept of relativity. Thus, although Frank's main goal is precisely to show that "relativism" (understood as the application of the concept of relativity outside physics) does not threaten an objective conception of truth or values, Cassirer's conception may appear more convincing in doing so.
Foucault reading Cassirer: the History of Knowledge as a Stance of the Self
Presented by :
Fons Dewulf, Ghent University
In this paper I argue that Ernst Cassirer and Michel Foucault shared a conception of the history of knowledge as an ethos or stance of the self. Their histories of knowledge do not aim to understand the past; rather, they aim to assess how present concerns over what knowledge is emerged from past decisions, and in that process a history of knowledge liberates the self from taking past commitments on knowledge as given and immutable. In 1966 Foucault wrote an emphatically laudatory review of the first French translation of Cassirer's Die Philosophie der Aufklärung, claiming that "it is from [Cassirer's method] that we must now start our work" (Foucault [1966] 2001, 577). Foucault identifies this method as a historico-transcendental project which shows how conceptual norms constitute the place from which knowledge becomes possible (Foucault [1966] 2001, 576). This description fits how Cassirer thinks about his own intellectual project in Das Erkenntnisproblem. There, he aims to unearth the basic system of general concepts and presuppositions with which an era controls and collates its experience and observations. According to Cassirer, this shows how a constitutive system is not a rigid structure of the mind, but a free posit of the understanding (Cassirer [1906] 2009, IX). In the 1980s Foucault explicitly situated himself within the Neokantian tradition and reinterpreted his genealogical project within Kantian terminology in 'Qu'est-ce que les Lumières?'. A crucial element of this text is Foucault's evaluation of the Enlightenment as a stance of the self towards its own history. Foucault describes this stance as "a philosophical life in which the critique of who we are is simultaneously both a historical analysis of the limits with which we are confronted, and an experiment with the possibility of breaching them" (Foucault 2001b, 1396).
In Philosophie der Aufklärung Cassirer similarly summarizes the Enlightenment not as a set of doctrines or rational methods, but as an attitude of measuring oneself against one's past (Cassirer [1932] 2009, XVI). Thus, both for Cassirer and for Foucault, historical philosophy is a stance of the self towards its own history. I claim that Foucault's and Cassirer's projects should be understood as voluntarist projects which emphasize the capacity of the self to choose with which conceptual norms it operates. Both philosophers share this voluntarist project with Hans Reichenbach and Rudolf Carnap. Whereas the latter believe a logical explication of scientific knowledge is the best way to exercise this capacity, the former believe that a history of scientific knowledge is best suited. My novel realignment of Cassirer and Foucault vis-à-vis Carnap and Reichenbach gives better insight into how to understand the various voluntarist epistemological projects of 20th-century philosophy.
Cassirer, Ernst. (1906) 2009. Das Erkenntnisproblem. Hamburg: Meiner Verlag.
---. (1932) 2009. Philosophie der Aufklärung. Hamburg: Meiner Verlag.
Foucault, Michel. 1966. Les Mots et les Choses. Paris: Gallimard.
---. (1966) 2001. "Une Histoire Restée Muette." In Dits et Écrits I, 573–77. Paris: Gallimard.
---. 2001. "Qu'est-ce que les Lumières?" In Dits et Écrits II, 1381–97. Paris: Gallimard.
09:00AM - 10:40AM
SR-8
Pragmatism, Pluralism, Conventionalism
Track : After Kant
Moderators
Alexander Klein, McMaster University
A Comparison of John Dewey and Joseph Rouse
Presented by :
Juho Lindholm, University Of Tartu
In philosophy of science, there seems to have been a practical turn recently, or at least one is beginning to appear. In mainstream analytic philosophy of science, realist and anti-realist alike, science is treated as an abstract system of representations (propositions). Such philosophy of science passes over concrete scientific practices in silence, as if they were merely accidental or irrelevant to understanding scientific knowledge, and as if representations made sense without regard to practice. However, an increasing number of scholars have attempted to describe science as a practice or an ensemble of practices. The significance of representations is not necessarily denied, but their practical function is emphasized. This trend began sometime in the 1970s and 1980s in the work of sociologists of scientific knowledge and others, for example, Latour and Woolgar ([1979] 1986), Knorr-Cetina (1981), Hacking (1983) and Latour (1987). But there have been praxis philosophies before: Marxism, pragmatism, and certain phenomenological philosophies like the early Heidegger ([1927] 1977) and Merleau-Ponty ([1942] 1967; [1945] 2002). Moreover, Ryle ([1949] 1951) made the famous distinction between knowing-how and knowing-that, and Polanyi ([1958] 1962; [1966] 1983) coined the notion of tacit knowledge. Hence the question arises whether these earlier contributions could be relevant to the emerging practice-based philosophy of science. Pragmatism is an especially interesting case in point because of its roots in attempts to understand experimental science philosophically. Peirce, who laid down the fundamental ideas of pragmatism, had first-hand experience of scientific practice.
But Dewey's version, with its strong emphasis on bodily action, stands out as another possible source of ideas. Roughly put, it seems that what John Dewey said about habits in Human Nature and Conduct (1922) and about experience in Experience and Nature ([1925] 1929a), The Quest for Certainty (1929b) and Logic (1938), Joseph Rouse says about practices in Engaging Science (1996) and How Scientific Practices Matter (2002). Hence it is possible that Dewey anticipated some current trends in practice-based philosophy of science. Rouse, however, does not cite Dewey. The most salient difference between Dewey and Rouse is that Dewey's epistemology has a larger scope: it is not confined to philosophy of science but includes philosophy of education and social and moral philosophy (see Dewey 1916; 1920; 1922; 1929b). However, there seems to be no reason why Rouse's ideas could not likewise be extended outside philosophy of science. Moreover, Rouse's notion of practice seems to capture better what Dewey tried to convey with "habit" and "experience." In this presentation, I will examine to what degree Dewey anticipated Rouse's philosophy of science. I will also point out their differences and try to explain them.
Royce and Conventionalism
Presented by :
Michael Futch, University Of Tulsa
This paper is an examination of the relation of Josiah Royce's late writings on the philosophy of science to the thought of Henri Poincaré. In his introduction to The Foundations of Science – a translated collection of three of Poincaré's writings – Royce endorses the thesis that some principles of mechanics are best understood as conventions. Being neither Kantian a priori synthetic propositions nor experimental truths, these conventions are "disguised definitions" that, while suggested by experience, are not the kinds of principles that will be refuted or called into question by subsequent experiments. Royce repeatedly likens Poincaré's views to his own notion of "leading ideas," a notion developed in earlier writings on probability theory. On Royce's telling, leading ideas are like conventions in that they are regulative principles used to organize scientific inquiry but are inoculated against falsification. This makes conventions and leading ideas importantly different from experimental laws, in that the latter are subject to disconfirmation. Explicating the nature of leading ideas by underscoring their similarity to Poincaré's conventions, Royce writes that "these hypotheses are not subject to direct confirmation or refutation by experience . . . . [They] stand in sharp contrast to the scientific hypotheses of the other, and more frequently recognized type, i.e. to hypotheses which can be tested by a definite appeal to experience." Following Poincaré, Royce emphasizes that, though not imposed with the predetermination of "Kant's rigid list of a priori forms," leading ideas are selected not arbitrarily but on the basis of their ability to confer conceptual unity and connectedness on a wide range of phenomena. The focus of this paper will be on the nature of Royce's conventionalism, with the aim of establishing that it is in fact quite different from what one finds in Poincaré.
First, I will exposit Royce's theory of leading ideas and how it is developed in opposition to Peirce's account of fair sampling. I then turn to the ostensible similarity between Royce's theory of leading ideas and Poincaré's conception(s) of conventions. Here, I try to show that, contrary to Royce's own assessments, leading ideas are importantly different from Poincaré's conventions, at least if conventions are understood as either disguised definitions or apparent hypotheses. Instead, to the extent that leading ideas find a counterpart in Poincaré's philosophy, they are better construed as analogous to so-called "indifferent hypotheses." I conclude by arguing that this divergence suggests that there are limitations on the extent to which Royce's philosophy of science can be considered a kind of conventionalism.
Arne Naess: From Pluralism to Deep Ecology
Presented by :
Eric Desjardins, The University Of Western Ontario / Rotman Institute Of Philosophy
Arne Naess was one of the most prolific, broad, and unique thinkers in 20th-century philosophy of science. His corpus is impressively varied, and he was engaged with many movements, from the Vienna Circle to the 'radical environmentalist' programme of the 1980s. Despite this, his work has received little attention from scholars, who have focused almost exclusively on Naess' early contributions in sociolinguistics (Howe 2010; Chapman 2011; Uebel 2011a, 2011b; Radler 2011, 2013). In this paper, we try to connect Naess' earlier philosophy of science with his later involvement with the 'deep ecology' movement. More precisely, we try to understand the extent to which Naess' earlier work shaped his later work in environmental philosophy and its subsequent influence on the deep ecology movement. In Interpretation and Preciseness (Naess 1953), Naess articulates a view of theories as networks starting with highly general and vague sentences (T0 sentences) which can be interpreted, and made more precise, in numerous ways. This network can be inherently self-contradictory if all the branches of interpretation of T0 are considered. While not explicitly stated, this view of theory forms the background of his pluralism in The Pluralist and Possibilist Aspects of the Scientific Enterprise (1972). Here, Naess contends that inquiry should proceed with multiple research programs which are internally pluralistic insofar as they develop by interpreting the T0 sentences differently. However, some theories are more general than others if the T0 of one theory is an interpretation of the T0 of a distinct theory. As such, we can have a pluralism of nested theories within a more general over-arching theory. Our leading question is: how much continuity is there between Naess' early work on theory and pluralism, on the one hand, and his later work in environmental philosophy, on the other? We argue that the three philosophical axes are profoundly interconnected. 
Naess depicts deep ecology (DE) as a movement against an instrumentalist understanding of the relation between humans and nature. The DE platform is thus a philosophical tool that could be used as an alternative by anyone engaged in interacting with and valuing their natural environment. Our goal is to highlight the continuity with the earlier work by suggesting that we can interpret DE as an attempt to elaborate a new theory that does not rest on a purely instrumentalist system of value. Naess proposes to adopt the normative claim "Self-Realization!" as T0 and gives the notion of intrinsic value a central place in this new theory. However, if we understand the DE movement with pluralism in mind, then the DE platform is but one interpretation of "Self-Realization!". Within the movement, here understood as a more encompassing theory, there can remain alternative and sometimes inconsistent series of interpretations. This opens the door to various kinds of deep environmental philosophy, perhaps even kinds based on instrumental values. Our hope is that this analysis not only lends greater unity to the seemingly disparate elements of Naess' framework but also provides the theoretical background necessary for understanding his later philosophy of ecology and environmental ethics.
09:00AM - 10:40AM
SR-9
Logical Empiricism
Track : After Kant
Moderators
Alan Richardson, University Of British Columbia
Feyerabend, Hempel and the Later Reception and Decline of Logical Empiricism
Presented by :
Matteo Collodel, Independent Scholar
As is well known, in the first half of the 1960s, Feyerabend repeatedly targeted two of the most successful outputs of Logical Empiricism (LE), i.e. Nagel's account of inter-theoretic reduction and Hempel's account of scientific explanation. It is less well known, however, that Feyerabend had been particularly familiar with the Vienna Circle tradition and LE since his formative years in post-war Vienna, thanks to his connections with Kraft and Feigl; that a first stage of Feyerabend's offensive against LE dates back to his early intellectual trajectory in the second half of the 1950s, when Feyerabend unsuccessfully tried to find fault with Carnap's two-language model for reconstructing scientific theories and with Carnap's 1956 criterion of empirical meaningfulness; and that in both the first and the second stage of Feyerabend's sustained assault against LE, his criticisms either originated or were articulated in personal dialogue with distinguished representatives of LE. This paper focuses on the second stage of Feyerabend's offensive, examining both Feyerabend's reception of LE and Hempel's response to Feyerabend's challenge. In a series of papers published between 1962 and 1966, Feyerabend identified two basic assumptions of the 'orthodox' accounts of reduction and explanation and deployed a two-pronged attack against them, questioning their descriptive adequacy and their normative desirability. According to Feyerabend, those assumptions were both inaccurate with respect to the history of science, as they did not reflect actual scientific practice, and objectionable with a view to the progress of scientific knowledge, as they would promote a conservative approach favouring well-established theories as against novel ones. 
Not only were Hempel's and Nagel's proposals dismissed but, ultimately, LE itself was under discussion. Feyerabend's persistent criticism shook North American philosophy of science and prompted Hempel's reaction, which appeared in print in the second half of the 1960s. Initially, Hempel readily admitted some of Feyerabend's points, such as the undesirability of the methodological rules exposed by Feyerabend, only to retort that this did not warrant Feyerabend's radical conclusions, as Feyerabend's methodological analysis was 'completely mistaken' and Feyerabend could offer 'no support' for his allegations. This raises interesting historiographical questions about the later reception of LE since, Feyerabend's vantage point notwithstanding, it seems that he substantially misinterpreted the logical empiricist research programme as understood today. On the other hand, Hempel also recognised that the theory-dependence of the meaning of any descriptive term on which Feyerabend insisted, despite having been long acknowledged by LE, could have more far-reaching consequences than previously envisaged. In fact, by the end of the 1960s and under the stimulus of Feyerabend's arguments, Hempel came to make momentous concessions, admitting that LE's standard model for explicating scientific theories was essentially 'misleading', that its account of inter-theoretic reduction was 'an untenable oversimplification', and that its approach to the meaning of scientific terms was actually 'misconceived'. In this respect, there are good reasons to consider the decline of the logical empiricist research programme in the 1970s at least partly a result of Feyerabend's challenging insights.
Reichenbach and Whewell on Measuring Causes by their Effects
Presented by :
Malcolm Forster, Fudan University
How do we learn about the external causes of our sensory experiences?  Presumably, we learn from the effects of those causes on our internal state.  How do we learn that the world we see in a mirror is just a reflection of the world we already know?  Presumably, it has something to do with the way our perception of mirror images correlates with our more normal perception of the same objects.  In 1938, Reichenbach put forward the idea that the correlations of the causally independent views of objects may be analogous to how Copernicus learned that the planets revolve around the Sun.[1]  Reichenbach imagined an observer enclosed in the corner of a cubical world, where objects in the external world cast shadows on the walls of the enclosure.  The problem is to infer the external causes of the shadows from the shadows and the correlations amongst them.  It seems plausible that much more information about the external world would be available if external objects cast shadows on two walls of the enclosure, rather than a single shadow or copies of a single shadow.  And the inferential engine inside our heads tends to favor the conclusion that they are the shadows of a single object, rather than shadows of different objects.  William Whewell (1794-1866) also thought about inferential problems in planetary astronomy.  He began with the axiom that causes are measured by their effects: "Axiom II. Causes are measured by their effects. Every effect, that is, every change in external objects, implies a cause, as we have already said: and the existence of the cause is known only by the effect it produces. Hence the intensity or magnitude of the cause cannot be known in any other manner than by these effects: and, therefore, when we have to assign a measure of the cause, we must take it from the effects produced."[2]  The purpose of this talk is to introduce the idea that Reichenbach's Principle of Common Cause can be generalized in a way that makes sense of both Reichenbach and Whewell: the idea that correlated phenomena (not just correlated events) are indicators of a common cause.  Examples in which the correlations between sets of variables do not reduce to pairwise correlations (Bernstein's paradox) prove that the generalized principle is strictly more general, in an interesting way.  It is also shown that the generalized principle (like the original corrected version of Reichenbach's principle) follows from the Causal Markov Condition, which is the main axiom of the structural theory of causation (aka Bayes causal nets).
[1] Reichenbach, H. (1938). Experience and Prediction. University of Chicago Press, p. 117.  Reichenbach's cubical world is discussed in Sober, Elliott (2011). "Reichenbach's cubical universe and the problem of the external world." Synthese 181 (1): 3-21.
[2] Whewell, William, quoted from William Whewell: Theory of Scientific Method, ed. Robert Butts (1989). Hackett Publishing Company, Indianapolis/Cambridge, p. 81.
How Herbert Feigl entered the Matrix
Presented by :
Christian Damboeck, University Of Vienna / Insitute Vienna Circle / Insitute Of Philosophy
"Imagine that biologists could conserve a cerebral cortex in a nutrient solution and could both supply all sorts of stimuli and derive all motor impulses from it such that one could simulate an entire biography." This is the well-known "brains in a vat" example, famously formulated in Hilary Putnam's Reason, Truth and History (1981) and later exploited in the Matrix movies. However, the present formulation of this example is taken neither from Putnam's book nor from the Hollywood screenplay. It is from a letter that Herbert Feigl sent to Rudolf Carnap as early as spring 1933, in order to indicate "an alleged discrepancy between subjective and intersubjective language." This talk will examine the following points. (a) I will briefly present and reconstruct Putnam's example and philosophical argument from 1981, together with its philosophical motivation and background. (b) The Feigl example from 1933 will be examined and carefully analyzed, against the background of Feigl's philosophical conceptions of this time. (c) The philosophical motivation and background of Feigl in 1933 will be systematically compared with the respective aspects of Putnam's 1981 story. Interestingly, both stories are embedded in a certain conception of the philosophy of language. Therefore, the question arises in what sense these conceptions may overlap or diverge. What is the philosophical goal of the 1933 example, in comparison with the 1981 case? Then, I will (d) speculate how the Feigl example from 1933 could have found its way into Putnam's 1981 book. Putnam himself, to be sure, did not claim that the "brains in a vat" example was his own invention. Rather, he starts the description of the example with the formulation: "Here is a science fiction possibility discussed by philosophers." However, one might suspect that the philosophers who discussed and invented this story did so somewhere in the 1970s, which is obviously not the case. 
Putnam, on the other hand, had already met Feigl (and Carnap) in the 1950s. Thus, it appears useful here to have a look at the discussions that took place then, for example, at the Minnesota Center for Philosophy of Science. The aim is to reconstruct, at least in part, how the 1933 example might have found its way over the decades into Putnam's 1981 book. Though this example is certainly not something that was mainly discussed by philosophers of science in the past, the Feigl case demonstrates that the example originated in a discussion that very well belongs to (general) philosophy of science, for its immediate background in Feigl's letter is a discussion about Carnap's Logical Syntax of Language. Therefore, I will finally (e) try to figure out in what sense the brains in a vat story of Herbert Feigl might be related to the specific discussions that were going on in the Vienna Circle in the early 1930s, in particular, Carnap's second major book project. 
10:40AM - 11:00AM
HSS Foyer
Coffee & Tea Break
11:00AM - 12:40PM
SR-4
Kant
Track : Kant and Before
Moderators
Elise Crull, The City College Of New York, CUNY
The Threefold Concepts of Nature and the special collaboration between Metaphysics and Science in Kant's Metaphysical Foundations of Natural Science (1786).
Presented by :
Francesco Mariani, Università Di Roma La Sapienza
This paper offers some considerations on the threefold concept of Nature and of the laws of Nature in the Metaphysical Foundations of Natural Science (MAN). Only a 'special collaboration' between Metaphysics and Mathematics in the MAN makes possible a synthetic a priori determination of the laws of Nature. In fact, the MAN, far from being a metaphysical deduction of the Newtonian doctrine, constitutes the first passage to a Metaphysics that can be presented as Science: with this work, Kant intends to show how the intellect, even if in a very complex way, is able to determine material Nature a priori, and therefore how an a priori Science of this type is possible for us. From this point of view, the MAN is not only a unique work in the Kantian production, but also a text of remarkable originality and interest in the history of the philosophy of science. We will try, first, to trace three levels of investigation corresponding to three respective meanings of the concept of Nature and, second, to determine the connection and transition between these three levels; in this way we will try to show how the MAN and the KrV represent two complementary levels of the same philosophical investigation of Nature. In the Kantian perspective, the intellect does not limit itself to describing the lawfulness shown by nature but imposes its own lawfulness on it. The intellect is, however, capable of a different lawgiving, according to the type of nature it has as an object (natura materialiter spectata): in other words, the intellect is able to establish a different lawfulness of Nature based on the determination of the nature under investigation. If we consider the MAN, in connection with what is stated in the KrV, we see that Kant delineates at least three meanings of the concept of Nature: I) the concept of nature in general; II) the concept of material nature, as it can be known a priori; III) the concept of material Nature, as its multiplicity can only be known empirically. 
In all three cases, therefore, a specific lawfulness of nature is at stake. While on the transcendental level - that of all objects that can be part of nature - the intellect provides the principles that make nature in general possible, on the metaphysical level - that of material Nature insofar as it can be known a priori, the object of investigation of the MAN - it provides the principles of material nature in general. On the empirical level - that of empirical nature in its multiplicity - finally, the directly legislating capacity of the intellect is lacking, and we can orient ourselves in the face of an infinity of laws of phenomena only by means of the reflective faculty of judgment. The three levels of lawfulness (transcendental, metaphysical, empirical) are hierarchically the condition of a progressive advancement in the investigation of Nature.
Kant and Einstein on the Causal Order of Time
Presented by :
David Hyder, University Of Ottawa
This paper establishes structural links between the theory of time of Kant's Critique of Pure Reason and Einstein's theory of special relativity, including its connection to the family of EPR thought-experiments. These links derive from an internal connection between Kant's theory of relativistic kinematics, outlined in the latter's Metaphysical Foundations of Natural Science (MFNS), and what Robert Palter called its "relativistic analogue," namely the kinematic theory of Einstein's 1905 paper on special relativity. The presentation has five parts: (1) A presentation of Kant's theory of kinematics, and of the underlying theory of the homogeneous manifolds of space and time. This section shows that the MFNS's "Phoronomy" applies the principle of relativity to the space and time manifolds of the "Transcendental Aesthetic" in order to produce a family of "inertial frames," which effectively replace absolute Newtonian space-time. The materials for this approach are located in texts of Euler, Kästner and Lambert, which Kant either owned, or corresponded with the author about, or both. (2) A presentation of two causal principles which, according to Kant, provide an empirical foundation for properties of time that are not accounted for by kinematics alone: "later," "earlier," and "simultaneous." The 2nd Analogy of Experience asserts that events that are "later" are causally dependent on events that are "earlier," but not vice versa. The 3rd Analogy asserts that events that are simultaneous are related to one another bicausally, by forces acting instantaneously, in contrast to the first sort of causal relations, which act over a temporal interval. (3) In the third section, I follow Robert Palter in arguing that the theory presented in (1) is a limiting case of the theory of special relativity. The parallelogram law that Kant derives involves a Euclidean triangle. If it is changed into a hyperbolic one, the kinematics obtained is that of Einstein's 1905 paper. 
I show how this is inevitable given the homology between Einstein's 1905 deduction of his kinematic parallelogram law and the proof offered by Kant to obtain his own. (4) I then show how the two causal laws presented in (2) become incompatible through this change in the space-time geometry. The conjunction of Kant's 2nd Analogy with Kant's kinematics produces a "Law of Mechanics," asserting that all causes of a present event must lie in the past in space. I show that the Principle of Locality is the mathematical dual of this Law, since it is obtained by combining the same law of causality with Lorentz-Einstein kinematics. I then show that this law is not cotenable with the dual obtained by mapping the 3rd Analogy, the Principle of Simultaneity, onto this same space-time, since Locality forbids those instantaneous connections between distant points which the Principle of Simultaneity requires. (5) In a final section, I connect the theory outlined above to more traditional concerns in Kant interpretation, by considering the arguments of the 2nd Analogy concerning the use of symmetric and asymmetric causal-informational relations as a criterion for establishing time-order.
Kant’s Method of Hypothesis and Its Implications for His Views on Race
Presented by :
Huaping Lu-adler, Georgetown University
In three publications from the mid-1770s through the late 1780s, Kant defends a scientific racialism: he reduces all humans to four basic races in terms of skin color (black, red, white, yellow), arguing that this racial feature is necessarily hereditary and so must be rooted somehow in nature. Meanwhile, blatantly racist remarks (e.g. about the inferiority of non-white races) are scattered here and there in Kant's lectures and writings. Some commentators acknowledge that Kant held racist beliefs, only to set these aside as a matter of personal bias that should not affect his major philosophical achievements. Others contend that we must take Kant's racism seriously and reassess his entire philosophy in that light. Neither side, however, has offered a close theoretical analysis of Kant's views on race to determine whether he would actually be committed, as a philosophical conviction, to the belief in a natural and hereditary racial hierarchy, i.e., to scientific racism. I provide such an analysis in connection with Kant's method of hypothesis, which he had established as part of a general scientific methodology before applying it to the issue of race. Here, I give special attention to how Kant compares "hypothesis" with "opinion": suppose one passes a judgment, J, as to why a phenomenon, P, is the case. J is a mere opinion if it offers no sufficient explanation of P; otherwise, it is a hypothesis. Hypotheses and opinions are alike in that we can never be fully certain about either. A hypothesis can nevertheless approximate full certainty. Besides being highly probable, a hypothesis, H, approximates full certainty due to factors such as these: one can show how given phenomena follow from H, without needing any supplementary hypotheses; H coheres with related theories that have already been proven; and no other conceivable hypotheses satisfy those conditions better than H does. 
If one believes H for the reasons just described, this belief is a conviction secured on objective grounds. By contrast, one is merely persuaded if one holds an opinion on subjective grounds, such as its broad acceptance in a community. In these terms we can separate, at least conceptually, Kant's scientific racialism and his racist opinions. He introduced the former as a natural-scientific hypothesis to explain certain physical differences among human beings, and was convinced of its truth only after detailing a proof for it vis-à-vis the alternatives proposed by other natural scientists. By contrast, Kant's ranking of the races in terms of mental qualities was a racist opinion that he formed on the basis of already biased travel logs by his fellow Europeans. As a philosopher, he could not be convinced of such a ranking, not only for the lack of independent objective evidence, but also because it contradicts his own claim (in the three publications mentioned above) that skin color, no one shade of which is better than another, is the only necessarily hereditary racial character. Given this conceptual distinction, I then outline different strategies for answering Kant's hypothetical racialism versus his racist opinions.
11:00AM - 12:40PM
SR-6
Cassirer, Kuhn, and the History & Philosophy of Science (Symposium)
Track : After Kant
Moderators
Paul Hoyningen-Huene, Leibniz University Hannover, University Of Zurich
From early on in his career, Ernst Cassirer was a keen observer of the history of mathematics and the natural sciences, later also of history and other disciplines. He was interested in specific developments in the history of science, but also in the philosophical relevance of such history more generally. In addition, Cassirer was involved in transcendental investigations along neo-Kantian lines. An interesting illustration of both is his "invariance theory of experience", with roots in nineteenth-century geometry and physics, but also in the "critical philosophy" of the Marburg School. In this symposium, we will explore both sides of his combination of studying the history of science and doing a priori philosophy. In the first talk, "Ernst Cassirer on Historical Thought and the Demarcation Problem of Epistemology", it is the apparent tension between the historical/a posteriori and the philosophical/a priori aspects of Cassirer's approach that is addressed. On the one hand, Cassirer insists that we need to start from the "fact of science", and also that there is no way of justification and demarcation in science that is independent from a historically oriented perspective. On the other hand, he argues for a number of a priori elements in knowledge overall. According to the author, this tension can be addressed and partly resolved, and the strengths of Cassirer's resulting position better appreciated, by viewing him as positing certain conceptual dependencies at the core of objectivity, in a way that leaves room for historical insights. The second talk, "Cassirer's Critique of the Arithmetization of the Continuum", addresses how Cassirer positioned himself in a central debate about the history of the calculus and its implications for the philosophy of the Marburg School. From 1906 on, Cassirer acknowledged the arithmetization of analysis as a philosophically significant development in the history of mathematics. 
In addition, attempts by other members of the Marburg School, like Natorp and Gawronski, to resist it by relying on non-Archimedean number fields left him unimpressed. Nevertheless, one can find a lingering dissatisfaction with the arithmetized continuum in several of Cassirer's later writings. After tracing the expressions of this dissatisfaction, the author ties it to a deeper tension in Cassirer's views concerning the relationship of thought and intuition in science. The third talk, "Kuhn and Cassirer on Articulation, Aesthetic Judgments, and the Dynamics of Science", approaches Cassirer's history and philosophy of science by way of Thomas Kuhn's parallel work. The author starts by arguing that Kuhn's notion of "exemplar", together with the related notion of "articulation", deserves more attention. In addition, these notions are closely connected to Cassirer's views on the "unfolding" of core aspects of science and to the idea of "symbolic pregnancy" from his philosophy of symbolic forms. Both symbols, in Cassirer, and exemplars, in Kuhn, are characterized by "pointing beyond themselves" in certain ways. A further connection between Cassirer and Kuhn is that both recognize the importance of "aesthetic" judgments in assessing "articulations". Taken together, this leads to new insight into the dynamics of science.
“Cassirer’s Critique of the Arithmetization of the Continuum”
Presented by :
Marco Giovanelli, Universität Tübingen
As recent literature has rightly shown, Cassirer was possibly the first professional philosopher to have appreciated the conceptual relevance of the modern rigorous "arithmetical" definition of real numbers and the real-number continuum. Already in his Habilitation lecture (1906), Cassirer elevated the 19th-century arithmetization of the continuum, and the complete banishment of spatiotemporal intuition and the infinitesimal from the foundations of analysis, to one of the most significant examples of the historical development of the sciences from the concept of substance to that of function. A year later, Cassirer returned to this issue in an article on Kant and modern mathematics, where he discussed in more detail Weierstrass', Dedekind's, and Cantor's construction of the continuum from arithmetical materials. However, in a footnote Cassirer also complained that the attempt to explicate the continuum concept entirely in discrete terms was epistemologically paradoxical and not fully satisfying. Cassirer planned to write a paper entitled "The Problem of Continuity and the Concept of Limit" to sort out the question. The paper was never written. One can find traces of the issue at stake in the contemporary discussions within the Marburg School. In particular, Gawronski (1909) and Natorp (1910) discussed at length non-standard theories of continua (Veronese, Stolz, Du Bois-Reymond) in the attempt to achieve a richer definition of continuity as a "qualitative totality" that goes beyond "quantitative discretion." But in private correspondence with Natorp, Cassirer seemed to remain somewhat unimpressed. Indeed, there is no trace of Cassirer's discontent with the arithmetization of the continuum in his major monograph Substance and Function (1910). Cassirer's dissatisfaction resurfaced a decade later in his book on relativity (1921). 
Appealing to Poincaré's authority, Cassirer complained that the continuum of real numbers was merely an aggregate of individuals and could not fully express the genuine notion of continuity as an unbroken whole that precedes its parts. He did not develop this critique further; nevertheless, it reappeared once again in his book on quantum mechanics (1936). This time Cassirer echoed Weyl's denunciation of the unbridgeable gap between intuitively given continua (e.g., those of space, time, and motion) and the exact discrete concepts of mathematics (e.g., that of real number). Once again, these remarks remained isolated, and Cassirer never developed a coherent, alternative conception of the continuum. This paper attempts to provide some textual evidence that Cassirer never ceased to yearn for an autonomous definition of the continuum, not explicable in terms of the discrete. The paper concludes that the contest between the continuous and the discrete might be a symptom of a deeper, never resolved tension in Cassirer's work between thought and intuition, the conceptually constructed and the immediately given.
"Ernst Cassirer on Historical Thought and the Demarcation Problem of Epistemology"
Presented by :
Francesca Biagioli, University Of Turin
Cassirer's work has become a classical reference in contemporary history and philosophy of science. Not only did Cassirer make an important contribution to twentieth-century scientific historiography, but he formulated some of the classical theses of his philosophy in historical terms. Notably, Cassirer looked at the transformation of geometry in the modern group-theoretical framework to propose his interpretation of neo-Kantian epistemology as an "invariant theory of experience". While Cassirer acknowledged that such a theory is an infinite task, which cannot be considered accomplished at any given stage in the history of science, he suggested that the principles of actual scientific theories converge towards the ideal of isolating the ultimate common elements of all possible forms of scientific experience. However, the historical aspects of Cassirer's thought are in some tension with his defense of a priori elements of knowledge. Insofar as Cassirer defines the invariants of experience in terms of mathematical structures, his view has been taken to imply that new theories contain older ones only as limiting cases. Insofar as Cassirer suggests a more developmental account of the lawfulness of experience, his view has been taken to abandon the grounds of Kant's transcendental justification of knowledge in favor of a historical dialectic in Hegel's sense. This paper reconsiders Cassirer's approach by drawing attention to how he addressed the demarcation problem of epistemology. Cassirer relied on Hegel for the recognition that true universals are "concrete" in the sense that they express the manifolds of individuals that fall under them. Cassirer emphasized that concrete universality relies on explicitly formulated laws of connection and ultimately on the postulate of the lawfulness of scientific experience. 
In the wake of Marburg neo-Kantianism, he maintained that the preconditions of knowledge have to be explained in a transcendental investigation of the possibility of scientific inquiries, where science is regarded as a fact. The universal invariant theory of experience is distinguished from a historical reconstruction of scientific change in virtue of the transcendental method. At the same time, historical thought plays an essential role in Cassirer's epistemology insofar as he denied that there is an independent way of justifying and delimiting a body of a priori knowledge valid for all time. Not only do attempts to provide such a justification encounter serious difficulties in the face of scientific change, but the object of epistemology in Cassirer's sense is well defined only in terms of the conceptual dependencies that constitute scientific objectivity. He articulated this picture further in the philosophy of symbolic forms by taking into consideration the different forms of understanding at work in historical and scientific disciplines as offering different but complementary perspectives on objectivity. My suggestion is that the advantage of Cassirer's account over alternative conceptions of a priori knowledge is that it posits conceptual dependencies at the core of the notion of objectivity. This allows him to account both for scientific change and for conceptual dependencies from the standpoint of the resulting finished theories.
Cassirer and Kuhn on Articulation, Aesthetic Judgment, and the Dynamics of Science
Presented by :
Erich Reck, University Of California At Riverside
One of the best-known notions from Thomas Kuhn's The Structure of Scientific Revolutions, although still not one of its best understood, is that of "paradigm". In subsequent writings, starting with his postscript to SSR, Kuhn attempted to clarify it by differentiating two senses: "exemplar" (narrow sense) and "disciplinary matrix" (broader sense). In this talk, I will argue that the notion of exemplar deserves further attention. I will do so by focusing on a less explicit but closely related notion in Kuhn's work that has received relatively little attention so far, that of "articulation". As I understand Kuhn's account of science, its development, both in "normal" and in "revolutionary" phases, consists crucially of the "articulation of exemplars". One benefit of emphasizing Kuhn's notion of articulation is to make us appreciate that, and how exactly, science is a temporal, essentially dynamic phenomenon. What Kuhn directs our attention to in science is not whether (supposedly) fully articulated concepts, theses, and theories are "well confirmed" or "true to reality", as traditional epistemology and philosophy of science tend to ask, but how we move from only partly articulated, relatively particular problem solutions to more general, wider-ranging articulations of them, including how to evaluate such moves. To get a better handle on the latter issue, I will turn to Ernst Cassirer, for whom processes of "articulation" and "unfolding" are also central to the development of science. Indeed, for the mature Cassirer this is a phenomenon to be found in symbolic processes more generally. I will argue, more particularly, that Kuhnian exemplars have an important feature Cassirer points to and highlights in his philosophy of symbolic forms, namely that of "symbolic pregnancy". For Cassirer there is always more to a concrete symbol than what is immediately given; it points beyond itself.
Similarly, Kuhnian exemplars, in their never fully articulated (indeed, never fully articulable) form, point beyond themselves. For both thinkers, that is what underlies much of the dynamics of science and reason. In addition, both Kuhn and Cassirer recognize that broadly "aesthetic" judgments play a significant role in the articulation process. Yet another striking notion employed by Cassirer in this general context is that of "style", understood not just in an aesthetic but also an epistemological sense. While that notion is more familiar from its application to the history of art, it will become apparent that for Kuhn and Cassirer concepts such as "style" can also be applied usefully to science and its development, precisely in connection with the "articulation of exemplary achievements". Illustrations can be found in various areas, from the history of astronomy (Kuhn's account of the Copernican Revolution) to the history of mathematics (Cassirer and others on Dedekind and structuralist mathematics).
11:00AM - 12:40PM
SR-7
Pluralism in Science
Track : After Kant
Moderators
Kristian Camilleri, University Of Melbourne
Historical Dimensions of the Debate over Pluralism in Science
Presented by :
David Stump, University Of San Francisco
Advocating pluralism in science has become a popular position in the philosophy of science, whether the argument stems from a claim about what the world is like (Dupré, Cartwright, Waters) or rests on epistemological grounds (Chang, Ruphy). Here I would like to consider some of the origins of pluralism. William James takes a strongly pluralist position and links the debate over pluralism to that between empiricists and rationalists (see especially James 1909). On James' reading, empiricists start from individual phenomena without presuppositions about how connected they are, while rationalists present a priori arguments that everything must be connected and unified into a single coherent system. James' account raises interesting historical questions about the Logical Empiricists'/Logical Positivists' commitment to the unity of science, given their commitment to both empiricism and unity, notwithstanding Carnap's late adoption of the principle of tolerance and logical pluralism. Is there a tension in Logical Empiricism, or can the adoption of both empiricism and (rationalist) unity be seen as a great synthesis of the empiricist and rationalist traditions, and hence an alternative to Kant? The connection between empiricism and pluralism has not been discussed in the recent literature, but it seems clear that James' conception of pluralistic empiricism is more compatible with Neurath's ideal of unity than it is with reductionism. I will explore the connection between pluralism and empiricism and also consider the implications of fallibilism for the debate over pluralism in science. Fallibilism, adopted by James from Peirce, may alone be sufficient to make an epistemological argument for pluralism in science.
Even a monist who also has a strong commitment to realism, that is, even someone who holds that there is one world and that "the ultimate aim of science is to establish a single, complete, and comprehensive account of the natural world,"1 would be required to adopt a pluralist position if they were also a fallibilist. Given that we are unable to know the world with certainty, we are better off leaving multiple accounts of the world open to investigation so as to hedge our bets. Thus, fallibilism, even when conjoined with the metaphysically realist idea that there is one independent world, opens the door to an epistemological argument for pluralism. Empiricism and fallibilism are widely held positions that are associated especially with pragmatism. It seems that they lead naturally to pluralism in science, although the historical connection between these positions has not been explored.
1. Kellert, Stephen H., Helen E. Longino, and C. Kenneth Waters, eds. 2006. Scientific Pluralism. Minneapolis: University of Minnesota Press, p. x.
James, W. (1909). A Pluralistic Universe: Hibbert Lectures at Manchester College on the Present Situation in Philosophy. New York: Longmans, Green, and Co.
Four Traditions in the History of Values in Science
Presented by :
Matthew Brown, The University Of Texas At Dallas
In philosophy of science today, discussions of values in science are increasingly important and accepted. It was not always so. For several decades after the mid-twentieth century, there was near-consensus around the ideal of value-free science, according to which moral, political, and other "non-epistemic" values had no role to play in science proper. Proctor (1991) and Douglas (2009) have told histories of the rise and consolidation of the value-free ideal. Today, 30 years after Longino's The Fate of Knowledge, and 20 years after Douglas's "Inductive Risk and Values in Science," it is time for a history of the counter-movement within philosophy of science, according to which non-epistemic values play a legitimate role in science. I will focus on four origins of thinking about values in science: the pragmatist and Marxist traditions of the early and mid-twentieth century, the feminist tradition arising in the 1970s, and the philosophy of science focused especially on policy and risk assessment beginning in the 1990s. I will emphasize how each of these traditions begins from a different intellectual background or philosophical orientation, and how each responds to different (though overlapping) concerns. These different traditions lead to different approaches to values in science that sometimes complement and sometimes conflict with one another. Early thinkers on values in science from the pragmatist and Marxist traditions were concerned with the influence of values on science only in a secondary way; their primary concern was the potentially beneficial influence of science over the normative fields of ethics and politics, though even within this broadly shared focus their approaches differed significantly. But both traditions tended to see the influence between science and values as mutual or dialectical, and so were more or less committed to a legitimate role for values in science.
These approaches differ as well from the two traditions that have had the greatest influence over contemporary values in science discussions. The first is the feminist science and feminist philosophy of science movement. The second is the focus on risk assessment, the uses of science in policymaking, and the revival of the argument from inductive risk. Those starting with the feminist approaches have tended to focus on social-level norms for values in science, the critique of patriarchy and inegalitarian values in science, and the possibility of a science that was an ally of anti-sexist, anti-racist activism. Those who have come from the tradition concerned with risk and policy, including a significant contribution from environmental philosophy, have tended to emphasize scientific integrity and threats to it, individual-level norms for scientific inquirers, the role of scientific authority, and mechanisms for democratic consultation or deliberation about values. There has, of course, been significant cross-fertilization between these four traditions and their various approaches, as well as philosophers who have integrated across two or more of the traditions in question. That said, there has been insufficient recognition of the role these different traditions and their disparate interests have played in influencing the contemporary lines of debate. 
An Alternative Archaeology of the Science Wars
Presented by :
Philippe Huneman, IHPST (CNRS/Université Paris I Panthéon Sorbonne)
Nowadays, we witness a revival of some of the controversies that divided academia in the eighties around the topics of relativism, science, and social construction. The alliance of feminist epistemologies and Marxist critiques on one side, and post-Kuhnian philosophy of science and Strong Programme-style STS on the other, together with the importation of so-called 'French theory', triggered a critique from analytic philosophers and natural scientists. Its epitome was the "Sokal hoax", which was less a defense of science against relativism than a denunciation of the fetishization of science by French-theory authors. But recently the claim that most of the humanities and social sciences have been invaded by social constructivists and so-called "social justice warriors", and that the authority of science is thereby downsized, has been vehemently raised in a way that recalls those "science wars". The most recent hoax intended to demystify the social sciences, sometimes labelled the "grievance studies hoax", has even been called a "Sokal squared" (notwithstanding Sokal's own reservations about it). In this talk I will hypothesize that the recurrence of these concerns has to be situated within a long-term history of the critique of science, which traces back to the dispute, after the Enlightenment, over the authority of science versus the authority of religious faith. This may seem strange, since most of the critiques of science come from the liberal left, which is most opposed to religion. However, I will claim that those critiques inherit, in a specific way, concepts, problems, and concerns developed by the first generation of philosophers who dealt with the anti-Enlightenment critique of the Enlightenment. The paper will sketch a genealogy that starts from post-Kantianism and Jacobi's considerations about the limits that faith should impose on science (Glauben vs. Wissen). This genealogy then goes through Hegel's Glauben und Wissen (a major early text of his).
I'll consider next the French side of this history, marked by the legacy of Schelling, whose philosophy inspired Félix Ravaisson's theory of habit and practice, and by Maine de Biran's critique of the primacy of pure mind, Maine de Biran being a major figure for the phenomenologist Merleau-Ponty. Bergson's opposition between 'intuition' and 'intelligence', which inherits from Ravaisson and Schelling, reactivated the question of the limits of science with regard to another source of knowledge, detaching this source from faith by seeing it as 'intuition'. A direct genealogy then runs from the Bergsonian idea of 'intuition' as a limit to 'intelligence', and hence to science, to Merleau-Ponty and post-structuralist hermeneutic thinkers like Derrida or Lyotard. Thus, through a long-term history, the 'French theory' side of the radical science critique in the 'science wars' ultimately connects to topics and concerns that appear very different. That is how the 'science wars' inherited their framing of the question from this history of the issue of science facing religious faith. Such a surprising genealogy sheds light on contemporary issues regarding science, emancipation, and religion, and on some of the ideological ambiguities of the current liberal left.
11:00AM - 12:40PM
SR-8
Understanding Science II
Track : After Kant
Moderators
Kinley Gillette, University Of British Columbia
Vagueness and Laws of Nature
Presented by :
Eddy Keming Chen, University Of California, San Diego
It is standard to assume that the fundamental laws of nature, whatever they might be, have to be exact, precise, and unambiguous. After all, the fundamental laws of nature (such as classical mechanics, quantum theory, and general relativity) will be stated in the language of mathematics, which is illustrative of the "ideal language" that is superior to the vague language we use in everyday contexts (for suggestions along this line, see Frege's Begriffsschrift 1879, Russell's logical atomism 1918, and perhaps Leibniz's characteristica universalis 1676). Against this background in the history of philosophy of science, we consider the possibility that some fundamental laws of nature are vague, which we shall refer to as 'nomic vagueness.' We define nomic vagueness as the existence of borderline worlds that are not determinately lawful, in analogy with the definition of the semantic vagueness of baldness as the existence of borderline cases of being bald. Although it is already a feature of the collapse postulate in orthodox quantum theory, nomic vagueness only becomes a serious possibility if we are open to including something like the Past Hypothesis (Boltzmann 1896, Feynman 1965, Albert 2000) among the fundamental laws of nature. The Past Hypothesis, a postulate about the low-entropy macrostate of the early universe, contains macroscopic terms that only vaguely correspond to sets of microstates. Nomic vagueness is accommodated much more naturally by Humean analyses of laws than by anti-Humean ones, and by the semantic approach to vagueness than by the epistemicist approach (in the case of the Past Hypothesis). While nomic vagueness is a robust feature of standard frameworks of Boltzmannian statistical mechanics, it disappears in a new framework of quantum statistical mechanics in which we connect the initial macrostate to the micro-dynamics. Far from making the world fuzzy or indeterminate, quantum theory can bring more exactness to the nomological structure of the world.
The project draws inspiration from, and has implications for, several debates in the philosophical history of laws of nature, vagueness, and precision in science.
Changing the World View by changing tacit knowledge
Presented by :
Eros Carvalho, Federal University Of Rio Grande Do Sul
In The Structure of Scientific Revolutions, Thomas Kuhn claims that a paradigm shift changes how scientists see the world. This is meant to be something stronger than a change in conception, which would be trivial. The very perceptual states of scientists, and not only their beliefs, change when one paradigm is substituted for another. Aristotelian and Newtonian physicists looking at the same portion of the world, e.g. a stone swinging back and forth on a string until it comes to rest, saw different things. Their experiences had different contents. The former saw a stone falling with difficulty towards its natural place of rest, the center of the Earth, while the latter saw a pendulum, a body that would repeat its movement ad infinitum were it not for the friction of the air. The main point here is: the scientist's visual system is changed by a paradigm shift. For Kuhn, this transformation of the scientist's vision is neither a question of belief change nor a question of changing the interpretation of empirical data or observations. What happens is that paradigms determine in some way the perceptual experiences of scientists. In this presentation I aim to clarify in which way paradigms determine the world view. In the literature, we find at least two established readings of Kuhn's radical claims about world change. One is the taxonomic reading, due to Ian Hacking; the other is the phenomenal or Kantian reading, due to Paul Hoyningen-Huene. According to the taxonomic reading, the world of individuals does not change with a paradigm shift; only the world of kinds (the way we divide up the world in an organized taxonomy) changes. According to the phenomenal reading, the world of things-in-themselves does not change; only the world of appearances (the way the scientist's experience is conceptually structured) changes with a paradigm shift.
The former reading preserves our realist and anti-skeptical intuitions: there is a world, the world of individuals, that is independent of our thinking and to which we have cognitive access. But it renders Kuhn's claims too weak, almost trivial: the world changes only in the sense that we organize a set of individuals in different ways, and it is not clear how or whether such an organization has any bearing on perception. The latter reading does justice to Kuhn's claims, since the way the world perceptually shows up to the scientist does change, but it is too idealist: we seem to lose the real world. I propose an intermediate reading, the tacit-knowing reading, inspired by Polanyi's view of this kind of knowing. According to this reading, tacit knowledge, not concepts, shapes perception. At the same time, tacit knowledge is relational and world-involving; it cannot be understood independently of the environment in which scientists are conducting their investigations. By changing our tacit knowledge, our perceptual world changes, as much as the world of mind-independent individuals we attend to.
The Epistemological Limits of Axiomatic Approaches to Economic Equilibrium
Presented by :
Tommaso Ostillio, University Of Warsaw And Kozminski University
The aftermath of the last economic crisis has been a very long moment of self-reflection for almost all economists. Economists have since acknowledged that their oversight of the drivers of the last downturn forcefully reminded them that the time has come for economics to renovate its methodology, for the high sophistication of mathematical economics does not make economists any better at predicting the behavior of firms and consumers. More specifically, driven by the growing influence of behavioral economics, the debate on the methodology of economic analysis has focused on two main epistemological issues: first, the intrinsic inability of economics to deal with uncertainty as effectively as the natural sciences do; second, the ineffectiveness of axiomatic approaches to economic behavior. That is, the main epistemological worry about the actual effectiveness of economic analysis is that rigorous approaches to economic behavior do not grant any effectiveness in predicting and explaining the behavior of constantly evolving socioeconomic trends.
This is because the deduction of economic equilibria from defined sets of axioms imposes defined rational structures on undefined and unstructured socioeconomic trends. Accordingly, it is important to notice that, while a great deal of progress in economic modelling has been made over the last decade thanks to the rapid diffusion of agent-based models of economic behavior, economists' troubles are now bigger than before because of two problematic, interrelated socioeconomic trends that limit the epistemological effectiveness of economic analysis: first, the increasing replacement of physical labor with untaxed automatons powered by AI; second, the substitution of earned income with Universal Basic Income (UBI) policies. Both trends are problematic because each entails a fundamental question: (1) as automatons are perfect substitutes for homo oeconomicus, does this mean that in the long run the behavior of an economy run by automatons is more predictable than the behavior of an economy run by humans? (2) Does UBI (i.e. unearned income) find its rightful spot within the body of economic theory without causing any trouble for economic analysis? Taking (1) and (2) as guiding research questions, this study aims to raise the awareness of economists and philosophers of economics of the threats that the current industrial revolution poses to the methodology of economic analysis. More precisely, this study argues that any attempt to answer (1) and (2) with a positive statement leads to the postulation of an economy with stagnating growth and increasing inequalities. Furthermore, this study argues that if an axiomatic approach to economic equilibrium is undertaken in order to analyze the behavior of an economy that is run only by machines, where humans earn only a UBI and run the government, then it is possible to ascertain the existence of undecidable sentences within such an axiomatic system.
This is because the case of an economy run solely by machines is likely to lead to at least three paradoxical outcomes resulting from the assumption of hyper-efficiency: first, hyperinflation; second, paralyzed credit markets; and, third, ineffective monetary policy.
12:40PM - 02:00PM
Lunch Break
02:00PM - 04:10PM
SR-4
Cosmology, Teleology, Theology
Track : Kant and Before
Moderators
Laura Georgescu, Rijksuniversiteit Groningen
The Teleology of Activity and Ordering in Aristotle’s Natural Science
Presented by :
Margaret Scharle, Reed College
Scholars have failed to disentangle two aspects of Aristotle's god: god is not only perfect activity but also the perfect orderer. Unlike Plato's divine demiurge, who manually labors for the order he imposes on the cosmos, Aristotle's god does no work at all. Instead, Aristotle's god is simply the perfect exemplar that other substances in the universe toil to imitate, and the hierarchy of being reflects their relative success. While unified in the divine, the two aspects of activity and ordering are refracted into two sorts of imitative teleology, both of which are essential to comprehending the unity of Aristotle's natural science. The hierarchy of being reflects not only entities' relative success at imitating the divine activity, but also their relative success at ordering and subordinating the substances below them for their own benefit, and such teleological explanations are central to his natural science. Just as birds develop beaks shaped to accommodate the shape and size of their food (Parts of Animals III.14 674b17-35), and wings to maintain their relationship to their food by migrating into and breeding in areas where such food is available (History of Animals VIII.12 596b21-9), so plants form roots to accommodate the water they take in as nourishment, and those roots grow down instead of up, since water is located in the ground (Physics II.8, 199a29-30). Plants are more limited than animals in their ability to create and maintain the beneficial teleological relationship to the water they take in, since they have less complex parts and a much more limited range of motion than animals. Consequently, their ability to subordinate water is less efficacious than birds' ability to subordinate the lower animals, plants, and elements they take in as food, and plants are therefore more subject to the fluctuations of supply.
As we descend the hierarchy to the lowest level of the inorganic elements, we find the least efficacious of all natural things in the cosmos. Earth, air, fire, and water occupy the lowest position since they possess purely passive natures that can neither create nor administer any teleological relations for which they would serve as beneficiary. By approaching the teleology invoked in natural science from a comprehensive perspective that acknowledges two sorts of imitative teleology, I argue for the unity in the multiplicity of teleologies found at each level of the hierarchy: while each entity takes god as an exemplar, and in this sense there is a unity to their imitation, their expression of that imitation is nonetheless multiple. Scholars' failure to see such unity in multiplicity has led them to mistake characteristics of the animal level of the hierarchy (e.g. being an active mover or a whole of parts) as essential to any natural teleology, and to justify the exclusion of the inorganic elements on the basis of their not having such characteristics. Adopting a more complete perspective that includes both types of imitative teleology, I reveal Aristotle's unified conception of natural science.
A naturalistic understanding of the Chinese Warring States period excavated cosmogonies: the case of the Hengxian
Presented by :
Francesca Puglia, Bern University
Cosmogony in pre-imperial China is a quite controversial topic, and the intense debate on the actual existence of a story of creation has not yet led scholars to a unanimous position. While the effective existence of a cosmogonic approach in Daoist texts such as the Daodejing 道德經 or the Zhuangzi 莊子 is still under discussion, radical positions such as Derk Bodde's (who affirmed that China had little interest in cosmic origins) and Angus Graham's (who assumed that there is no cosmogonic myth in pre-Han China) have been contested thanks to the unearthing of such texts as the Dao Yuan 道原 ("The Origin of the Dao") and the Taiyi sheng shui 太一生水 ("The Great One Generates Water"), and to the discovery of the Warring States period (475–221 BC) Hengxian 恆先 ("The Primeval Constancy") and Fanwu liuxing 凡物流形 ("All Things Flow into Form") from the Shanghai Museum collection. Being approximately contemporaneous with the oldest rediscovered fragments of the Laozi (the Guodian 郭店 Laozi), dating back to the 4th century BC, these sources have shown how the topic of cosmogenesis in early times was both prevalent and diverse. The excavated cosmogonies shed new light on the understanding of the primordial state that preexists the coming into being of material reality, a condition of stillness, simplicity, and vacancy that is a key concept in all Daoist cosmogonies, and, above all, they have given precise descriptions of the process of formation of the manifold reality that immediately follows the beginning of time. The aim of this paper is to give an interpretation of the first section, the cosmogonic lines, of the text entitled Hengxian, based on the assumption that, as shown by Yong-yun Lee regarding the Taiyi sheng shui, the unearthed cosmogonies of the Warring States period are likely to have been written from a purely philosophical and conceptual perspective, rather than being a reflection of contemporary or antecedent religious cults.
While Fritjof Capra's chef d'oeuvre The Tao of Physics draws an interesting and accurate parallel between modern physics' understanding of the microscopic world and the concept of "Oneness" in East Asian philosophies, our intent is to show how the cosmogonic process described in the first section of this text displays a modern, quasi-scientific attitude toward the comprehension of the inner workings of reality. Modern physics' understanding of the first steps of the coming into being of our universe after the Big Bang could thus be a very useful tool for trying to unravel the meaning of the text and the possible scenario of an evolutionary process of generation starting from nothingness, developed and recorded with an evidently naturalistic approach.
Mechanism as a non-exhaustive ontology: Descartes, natural teleology, and life
Presented by :
Barnaby Hutchins
Early modern mechanism tends to be seen as an exhaustive ontology: the natural world consists of efficient-causal mechanisms made up of bits of matter, and nothing more. This goes especially for Descartes, who explicitly commits himself to a particularly austere form of materialism (souls aside), in which the phenomena of the natural world supposedly fully reduce to just the fundamental properties of extended substance. In this talk, I argue that Descartes's mechanism does not exhaust the ontology of his natural world, that mechanism coexists with elements that are not mechanistically reducible, and that this allows him to provide a more comprehensive account of natural phenomena. I also argue that this is compatible with his philosophy, and show how it is. I focus on two non-mechanistic elements, natural teleology and life, arguing that Descartes can reduce neither to the properties of extended substance, and that he nevertheless commits himself to their existence. I then set out two ways in which such irreducible, non-mechanistic elements can coexist with a mechanistic ontology: (1) integration, where natural teleology plays a functional role in accounts of biological phenomena, patching gaps in the explanatory structure of mechanism; and (2) coincidence, where life is eliminated from accounts of biological phenomena, but is treated as existent in a standalone capacity.
Rethinking the History of Philosophy of Science from 1054 CE to 1439 CE
Presented by :
Alberto Bardi, Polonsky Academy For Advanced Study, Van Leer Jerusalem Institute
Two different conceptions of science developed over the long and intricate history of the controversies between Eastern and Western Christianity. It is likely that Western theological conceptions, influenced by Augustine of Hippo and then by scholasticism, contributed significantly to the formation of the modern concept of science, hence triggering the drive towards technological progress (e.g. Francis Bacon). These conceptions, however, were received in Eastern Christianity against a radically different philosophical-theological background, which had built a solid tradition of social groups (e.g. monks and confraternities) characterized by a striving towards the contemplative life. As a consequence, this different mindset set Eastern Christianity apart from the processes triggered by the so-called scientific revolution. The historical-philosophical discrepancies between the two Churches are most evident in the period 1054–1439 CE, that is, from the schism between the Catholic and Orthodox Churches until the conclusion of the Council of Florence, which was the last attempt at unifying the two Churches. Scholarship in this field, though massive in quantity, has focused on theological issues, so that studies on the differences in conceptualizing science in the history of the controversies are still pioneering. Moreover, the extant scholarly contributions concerning issues of philosophy of science in this field have treated science as if it were a disembodied entity, independent of socio-historical contexts. This paper therefore attempts to explore the concept of science in the history of the controversies between the two Churches through a new lens, that is, (1) by focusing on the social and historical issues that play a role in the development of the two different philosophies of science in the controversies of Eastern and Western Christianity, and (2) by exploring what kind of entanglements connect theological and philosophical issues to socio-historical contexts.
Through (1) and (2) we will trace a history of philosophy of science from 1054 CE to 1439 CE.
02:00PM - 04:10PM
SR-6
Leibniz and Beyond
Track : Kant and Before
Moderators
David Hyder, University Of Ottawa
Reality of Motion and Causal Hypotheses in Leibniz
Presented by :
Laurynas Adomaitis, Scuola Normale Superiore
There is an ongoing discussion about the reality of motion in Leibniz. At one end there is an interpretation that affirms the reality of motion in Leibniz. According to the realist position, although motion is in some sense relative, the addition of forces and causes makes motion real. Applied to the orbital motion of the planets, this has been called the Cassirer thesis, after Ernst Cassirer. A milder position is the aetiological reading espoused by Richard Arthur, who argues that we should distinguish between motion treated purely mathematically and motion with respect to causes. Arthur thinks that the addition of a causal story, although it does not make the motion real, makes it possible to determine the one true hypothesis for a given configuration of motions. The aetiological reading differs from the realist in that it holds that the one true hypothesis is determined by its simplicity rather than by causes or forces themselves.

The position advocated in this paper is a third alternative, the instrumentalist reading. I argue that both realism and aetiologism overvalue the importance of causal explanations for Leibniz. The instrumentalist reading claims that even causal explanations are hypothetical and evaluated only by their simplicity or intelligibility. Arthur concedes that Leibniz "talks of 'being permitted to choose' the simplest hypothesis, or of Copernicanism being 'sufficiently corroborated', without calling it true" (Arthur 2013: 103). However, the reason why Leibniz does not call the Copernican hypothesis true is that he distinguishes between being true in a demonstrative sense and being *true in an instrumental sense, and hypotheses cannot pass from being *true to being true.

Realism seems to contradict what Leibniz states in the Principia Mechanica (1676/77(?)).
For example, he claims that "since no hypothesis can be refuted rather than others through certain demonstration, not even by someone omniscient, it follows that none is false rather than others" (A6.3.110, tr. Arthur 115). Leibniz's claim that even omniscience would not allow one to fully determine absolute motion means that there is a certain epistemological obstacle to knowing it. If perfect knowledge is not enough to determine motion absolutely, then absolute motion is absolutely unknowable. Omniscience would certainly involve the knowledge of causes; otherwise it would not be omniscience. So it seems to follow that, according to Leibniz, even the consideration of causes does not absolutely determine motion and does not make it absolute.

Leibniz is open about the fact that causal models are a factor in considering the simplicity of a hypothesis or a frame of motion: "we will be permitted to choose the simpler mode of explaining, which involves reference to a cause from which the remaining changes may be derived more easily" (A6.3.111, tr. Arthur 115). It does not follow from this that they provide absolute certainty, or certainty comparable to that of demonstrations. Both causal and purely geometrical considerations lie within the same instrumental level of certainty provided by the simplicity of hypotheses.
The World of Leibniz: Relationship Between Metaphysics and Physics
Presented by :
Xiaoqian Hu, Indiana University Bloomington, Bloomington, USA
Following Leibniz's publication of the Monadology in 1714, a conversation ensued among physicists regarding the foundations of physics. More recently, in discussions among and between physicists and philosophers, each side has treated Leibniz's Principle of the Identity of Indiscernibles (PII, that there can be no two things which are completely alike), a metaphysical principle, as a contingent truth and a matter of physical fact. In particular, both physicists and philosophers have employed PII to promulgate or refute one theory of physics over another, alluding in their arguments to its empirical character. In this paper, I will argue that the confusion introduced by philosophers and physicists about the status of PII was not a conversation Leibniz envisioned, and that it further obfuscates the relationship between metaphysics and physics that Leibniz saw as crucial for his work as a philosopher-scientist. Furthermore, I aim to explore the relationship between metaphysics and physics in the Leibnizian world. Metaphysics and physics occupy differentiated realms: metaphysics provides the ultimate grounding of the entities and principles that are studied in physics. Metaphysical principles shepherd us in examining physics and the laws of physics because they allow us to conduct experiments and underwrite physical laws. Therefore, metaphysics constrains (places boundaries on) the various (and contradictory) theories advanced and the sorts of experiments we conduct. Hence, it is unwarranted to vindicate or refute a metaphysical principle via the foundations of physics, or to conflate physics with metaphysics. There is no need to do so, since metaphysics is more fundamental than physics in the Leibnizian world.
Leibniz, Lange, and Bilfinger on Whether the Soul is an Automaton
Presented by :
Christopher Noble, New College Of Florida
My paper examines an early eighteenth-century controversy between Joachim Lange and Georg Bernhard Bilfinger revolving around Leibniz's earlier characterization of the nature of the soul as a self-moving "spiritual automaton." This concept figures importantly in Leibniz's "preestablished harmony" between soul and body, according to which soul and body do not interact directly and their correspondence results from the fact that God has arranged for their activities to unfold in parallel. In this regard, Leibniz's hypothesis posits that an individual substance is composed of two distinct self-moving automata: a physico-mechanical bodily automaton and an immaterial perceiving "spiritual" automaton.

Leibniz's characterization of the soul as an "automaton" posed difficulties for later supporters of the preestablished harmony. While souls played an important explanatory role in metaphysics, explaining the presence of unity and activity within nature, Leibniz also wanted to argue that at least human souls are capable of acting freely. However, as critics of the preestablished harmony pointed out, characterizing the soul as an automaton is problematic in this regard, as it seems to involve a comparison between the soul and a machine. If this is right, Lange argues, it entails that the activity of the human soul would not take place freely but rather as the result of necessary, machine-like causation.

Bilfinger responds to Lange by rejecting any association between the soul and a machine, arguing that the term "automaton" is fundamentally ambiguous. While "automaton" commonly refers to a self-moving machine, Bilfinger argues on etymological grounds that it can also simply mean something capable of self-motion. In this sense, to say that the soul is an "automaton" merely indicates that the soul moves itself.
Nevertheless, despite the way that this interpretation absolves Leibniz's concept of the soul from machinic necessity, Bilfinger recommends that we avoid ambiguity by abstaining from characterizing the soul as an "automaton."

I first introduce Leibniz's concept of the spiritual automaton and then outline Lange's criticisms of the preestablished harmony and the "spiritual automaton" in his Causa Dei of 1723. I then detail Bilfinger's response in his Harmonia Praestabilita of 1723 and Dilucidationes Metaphysicae of 1725, and show that Bilfinger's strategy in fact overlooks Leibniz's own explicit affirmation of the comparison between a soul and a machine. Indeed, for Leibniz, a soul carries out a predetermined series of perceptions, just as a well-designed machine is structured to carry out an ordered series of motions. Thus, I argue that Leibniz fully embraced the comparison between a soul and a machine, challenging those supporters of the preestablished harmony who want to affirm a non-deterministic form of human freedom.
The Scientific Methods of Johann Christoph Sturm and Christian Wolff: From Causal Explanation to Quantification
Presented by :
Christian Henkel, University Of Groningen
Johann Christoph Sturm (1635–1703) and Christian Wolff (1679–1754) were immersed in a similar academic setting: both taught mathematics and (natural) philosophy at a German university. Moreover, Wolff was inspired by Sturm's scientific method, and yet he found it incomplete. That is to say, while natural philosophy and mathematics are relatively independent in Sturm, they go hand in hand in Wolff. In this paper, I will investigate the scientific method of Sturm and its advancement by Wolff. In particular, I will explain why Wolff considered the quantification of natural phenomena, which was absent in Sturm, necessary. Accordingly, the paper consists of two parts. In the first part, I will investigate Sturm's approach to conducting natural philosophy, and his eclecticism. I will show that Sturm is an atypical instance of the scientific culture of the 17th century, especially in an academic setting, in that he carefully negotiates between opposing thoughts and explanations. Immersed in the academic context of 17th-century Aristotelian teaching, Sturm carves his own path as a lecturer of natural philosophy, oftentimes significantly diverging from, and eventually reworking, Aristotelian natural philosophy. He thereby responds to the critique of university teaching by philosophers such as Bacon, Hobbes, Descartes and others. In going against the standard teaching of his time, Sturm is led by his own Baconian and eclectic scientific method. He rejects authority and the practice of following one philosopher's opinion only. He takes science to be an open-minded and open-ended collective endeavour. In addition, he selects what he deems best from other authors, creating a coherent system. He accepts the new experiments and instruments of his day, and uses hypothetical reasoning.

Sturm's scientific method in natural philosophy follows a three-step process: (1) A diligent and accurate presentation of the phenomena that need to be explained.
(2) An accurate, meticulous and fair presentation of hypotheses put forward by other philosophers of the most disparate strands. (3) An unemotional assessment of the different hypotheses, selecting what he deems good and true while ridding himself of mere pseudo-explanations.

In the second part of the paper, I will show how Christian Wolff advances and emends Sturm's method. In the 1728 edition of his Psychologia Rationalis, Wolff offers three kinds of knowledge that natural philosophy needs to attain: (1) Historical knowledge, that is, factual knowledge of the natural world and its phenomena, and knowledge of the philosophical theories put forth to explain them. (2) Philosophical knowledge, that is, knowledge of the causes of natural phenomena. (3) Mathematical knowledge, that is, knowledge of the quantitative dimension of the driving forces of nature. While Wolff's first two categories of knowledge square with Sturm's scientific method, he is convinced that the world can and needs to be quantified and measured. It is Newton's Principia mathematica, disregarded by Sturm, which convinces Wolff that the quantification of the natural world is not only necessary but feasible.
02:00PM - 04:10PM
SR-7
Natural Philosophy in the Wake of Newton
Track : Kant and Before
Moderators
Kirsten Walsh, University Of Exeter
D’Alembert’s Dynamics: From “System” to Action Principles
Presented by :
Tzuchien Tho, University Of Bristol
In D'Alembert's polemical exchanges with Euler and Lagrange, the former made clear that even if there was shared agreement on the emerging mathematical descriptions of the variational approach to dynamics, the language of "action" was at best superfluous and at worst a Panglossian obfuscation. However, D'Alembert's conceptual contributions to the history of the principle of least action can be understood in another way. It was precisely D'Alembert's restrictive scruples and anti-metaphysical attitude in the use of physical entities that laid the grounds for a renovation of the concept of a physical system. That is, insofar as D'Alembert's nascent variational method crucially depends on a distinction between an internal system of conserved magnitudes set against (or embedded within) a larger external system of motions, the concept of a physical system steps into the center stage of 18th-century physics, to the benefit of the interpretation of least action principles. As such, D'Alembert's Traité de dynamique provided a context for the neutralization of raging debates about inherent and external forces inherited from the earlier generation (Newton, Leibniz, etc.) by providing an ontology of systems orthogonal to controversies about the dynamic properties of bodies. In providing a metaphysically "neutral" method, D'Alembert provides fertile ground for the propagation of the metaphysically tinged "vis viva" (living force) in other terms.

In this presentation, we shall begin by addressing the metaphysical minimalism of D'Alembert's Traité de dynamique and contextually defining its aims. Secondly, we shall show how D'Alembert uses the notion of "system" to rework the conservation of vis viva so as to remove its metaphysical implications. Finally, we shall examine the relation between the new notion of a "physical system" and its crucial role in the later development of the principle of least action in the work of Lagrange.
Mechanism and Newtonianism in Brook Taylor’s Conception of Isochronism
Presented by :
Iulia Mihai, Ghent University
Brook Taylor (1685–1731), a mathematician and natural philosopher of the Royal Society, conceptualized the oscillations of a taut string in the first successful mathematized approach of its kind in 1714. Here I analyze some of the methodological aspects of Taylor's reconstruction of the string as a mechanical object, especially his conception and justification of isochronism (that is, of vibrations occurring in equal amounts of time).

Whereas many of the subsequent accounts of the vibrating string assume that the force at play is directly proportional to distance, and that this engenders isochronism, Taylor assumes neither. Instead, he argues for both before he sets out to compute the periodic time of vibrations with the help of principles drawn from statics. Here, I reconstruct Taylor's argument in which he moves from the force proportional to distance towards the string's isochronism. I argue that Taylor does not have the mathematical language for such a move, and that instead (1) Taylor relies on a mechanistic conception of the taut string as made up of particles which oscillate under the action of a force directly proportional to distance, and (2) the isochronism of the string is justified through Taylor's close reception of Newton's theory of the hypocycloidal pendulum in Book 1 of the Principia. It follows that Taylor's methodology combines geometry with mechanism, the dominant view of natural philosophy in Britain in the wake of Newton's Principia, in a way that results in a twofold conception of the string. This methodology is unusual if seen from the tradition of solving the vibrating string problem that ensues along the lines of geometry only. But from Taylor's standpoint, the string understood as a geometrical curve allows for the use of various mathematical techniques, while mechanism allows for the theoretical decomposition of the string into particles.
Each of the particles is urged by a force whose effects are not unlike those of the forces directly proportional to distance acting in Newton's hollow globes, which also appear in Newton's study of the hypocycloidal pendulum. I show how by way of this mechanistic route Taylor is able to integrate Newton's mathematical arguments about the isochronism of the hypocycloidal pendulum into his justification of the isochronism of the string, and thus explain the sense in which the string 'moves like a pendulum in a cycloid'. This early episode in the history of harmonic motion reveals how a broad array of tools and concepts are blended into what counted in the beginning of the eighteenth century as an acceptable scientific argument.
“Banishing monsters”: Du Châtelet contra Locke on thinking matter and inherent attraction
Presented by :
Lisa Downing, Ohio State University
Famously, and of course very controversially, Locke defends the epistemic possibility of thinking matter, and motivates that possibility in part by relation to Newtonian attraction: if bodies can attract each other, who knows how far their powers might outrun our conception of body, who knows what else they might be able to do? Du Châtelet is interested in definitively rejecting this move of Locke's: she wants to deny the possibility of thinking matter along with the possibility of attraction being seated in matter as Locke seems to imagine it. (Her inclinations on these issues are thus Leibnizian, though her arguments are not.)

In this paper, I examine her particular argument against thinking matter, looking at its grounding in the metaphysical categories and analysis of chapter 3 ("Of Essence, Attributes and Modes") of her great work, the Institutions de Physique (Foundations of Physics). I then raise some questions about how the argument might be made more complete, which in turn raises further questions about the structure of Du Châtelet's metaphysics and its implications for physics.
Goethe’s Immanent Critique of Newton’s Application of Analysis and Synthesis to Colour
Presented by :
Troy Vine, Humboldt University Of Berlin
Newton's and Goethe's approaches to colour are usually presented as diametrically opposed. Heisenberg, for example, contended that they treat separate domains. In contrast, I argue that Goethe presents an immanent critique of Newton. He does this by taking Newton's criteria for a theory, in particular its being derived directly from experiment without metaphysical assumptions, and showing that Newton fails to construct a theory. Newton's "proof by experiment" is an application of the ancient mathematical method of analysis and synthesis by analogy. Goethe's approach, too, is an attempt to apply analysis and synthesis by analogy, and is thus not an experimental refutation of Newton's results but rather a philosophical critique of Newton's approach.

Unlike Descartes' attempt to derive prismatic colours from properties of hypothetical entities by synthesis alone, Newton developed a scientific method that allows properties of light to be derived directly from experiments using analysis and synthesis. Newton's experimentum crucis, for example, is a "proof by experiment" of the proposition that sunlight is heterogeneous. As light is represented by a geometrical entity, namely a light ray, Newton considered this proposition to be independent of and prior to any postulated metaphysical entities, such as Descartes'. This is the basis of Newton's claim that his theory is categorically different from an hypothesis, and thus a rejection of the metaphysical approach to colour inaugurated by Descartes.

Goethe challenged Newton's reduction of colour to light by showing that Newton had made his own metaphysical assumptions by using a light ray to represent light. By passing a shadow through the prism, Goethe showed that a complementary spectrum appears. Applying Newton's geometrical ray analysis to the complementary spectrum results in a proof by experiment of the heterogeneity of darkness.
Goethe regarded the latter as an absurdity and used it as a reductio ad absurdum of Newton's claim of the heterogeneity of sunlight. While Goethe's argument does not generate a contradiction, it reveals the metaphysical assumptions Newton was making, thereby reducing the status of his theory to that of an hypothesis.
02:00PM - 04:10PM
SR-8
The Human and Life Sciences
Track : After Kant
Moderators
David Stump, University Of San Francisco
Vitalism’s Influence on the Nineteenth Century Debate over Perceptual Qualities
Presented by :
Lydia Patton, Virginia Tech
This paper will focus on a nineteenth-century debate over how to explain qualities of experience and of perceived objects that seem to be traceable not merely to directly perceived qualities of external substantial things, but also to the interaction between the subject's sensorium and external things. These qualities of experience are sometimes called 'secondary qualities'. Physiologists of perception, including the Müller school (Helmholtz, Brücke, and their colleagues), faced a difficult problem in explaining perceived secondary qualities. That debate is often analyzed in terms of varying 20th-century analyses of reductive physicalism, that is, in terms of whether secondary qualities can be explained solely in terms of physical properties.

This paper will focus on a distinct history. Much of the relevant background to the debates over perceptual qualities is found in the long tradition of vitalism, as opposed to mechanism, in the life sciences. Broadly, vitalists argued that there is a vital force, a force which organizes or directs the processes within a living organism, in addition to the specific mechanisms of, say, metabolism within animal bodies. A history centering on reductive physicalism is not incorrect as far as it goes, but it misses crucial elements in the background to the work of Helmholtz in particular.

Helmholtz began his career by rejecting vitalism, in concert with du Bois-Reymond and other members of the Müller school. His arguments against it had a profound, though negative, influence on his weighing of possible explanations, and in particular on his account of what kinds of explanations can be 'scientific'. The account given will focus on Helmholtz's account of explanation in the human and natural sciences and on his physics-based framework for explanation in the physiology of perception.
Building on and responding to earlier work on variants of this topic, including analyses from Chirimuuta, Tolley, de Kock, Heidelberger, Tracz, Hatfield, and Ott, the conclusion of the paper will examine the framework for the analysis of secondary qualities that results from attention to the influence, positive and negative, of the vitalist tradition on late nineteenth century accounts of perception.
Historical, Epistemological and Metaphysical Aspects of Quality in 19th-Century Sense Physiology
Presented by :
Nadia Moro, National Research University Higher School Of Economics, Moscow
Philosophy informed 19th-century scientific investigation into perception. Conversely, sense physiology was ascribed an ambitious philosophical programme: it was expected to explain the boundary between mind and matter, and to provide philosophical conceptions with a scientific basis. This paper argues that investigations of sensory qualities pointed to the conceptual shortcomings of physiological research, possibly opening up discussion of the limits of natural science.

Sense physiology underwent significant methodological changes throughout the 19th century. In this context, the notion of 'specific sense energies', and the claim that it accounted for the perception of the variety of sensory qualities, was widespread among German-language physiologists, psychologists and philosophers, and was occasionally discussed on a European scale. The idea of specific sense energies was first introduced in 1826 by Johannes Müller (1801–1858), who appropriated Aristotle's energeia and Magendie's experiments. Müller claimed that there were specific differences or dispositions ('energies') within the 'nerve substance' and that they were correlated to the perceived quality of the corresponding sensations. Hence, the variety of sensory qualities depended on the specific 'energy' of the affected nerve substance rather than on the nature of the stimuli. Müller's former student Hermann von Helmholtz (1821–1894) put forward an extended version of Müller's theory: he systematised sense qualities (e.g. pitch differences in auditory sensation or colours in sight) and sense modalities (sight, hearing, taste, etc.), and suggested a physiological interpretation of the Kantian a priori. Ewald Hering (1834–1918), one of Helmholtz's chief opponents, endorsed the doctrine of specific energies, which he interpreted as original faculties within an organicist view of living matter.
These are but a few stages in the development of the notion of specific energies, which was discussed until the beginning of the 20th century. Despite their heterogeneity, doctrines of the specific energies shared a problematic aspect in their reference to substance and matter. Proponents of specific energies referred to the 'sense (or nerve) substance', which they posited as the matter and bearer of the sense energy, but which they failed to explain. In fact, the specific energies were claimed to be essentially connected with the sense substance; but they were also considered to be inaccessible to knowledge. As a result, accounts of the dependence between matter and "its" specific energies ultimately lacked metaphysical and epistemological foundation.

The paper analyses, firstly, the explanatory deficits in the physiological theories of Müller, Helmholtz, and Hering concerning the connection between the specific energies and the 'nerve substance'. Secondly, it shows that those deficits may be traced back to biased epistemological and metaphysical assumptions. Finally, the paper argues that the notion of specific sense energies is an expression of the unknown in 19th-century investigation into sensory qualities and perception, as the problematic relationship between specific energy and matter shows.
On the Emergence of Modern Philosophy of Biology
Presented by :
Daniel Nicholson, Konrad Lorenz Institute For Evolution And Cognition Research
This paper aims at nothing less than a re-evaluation of the origins of the philosophy of biology as an academic discipline. The story that is often told is that the field only really emerged in the last third of the twentieth century, owing to the pioneering work of David Hull and Michael Ruse, alongside that of Kenneth Schaffner and William Wimsatt. Hull and Ruse tend to be given special credit for the creation of the discipline due to the publication of two seminal textbooks, The Philosophy of Biology (Ruse 1973) and Philosophy of Biological Science (Hull 1974), which focused on a core set of problems and thereby set the agenda for subsequent philosophical discussions of biology. From the 1970s onwards the field grew rapidly, becoming consolidated as an academic discipline in the 1980s and 1990s, and ultimately developing into the thriving area of research that it is today.

No one has done as much to popularize this account as the story's two lead characters. Repeatedly over many decades, both Hull and Ruse have decried the fact that before they began to work in the field, it "really did not exist as a subject" (Ruse 1979: 785). Already in their respective textbooks, Ruse asserted that "the author of a book on the philosophy of biology need offer no excuse for the subject he has chosen, since few areas of philosophy have been so neglected in the past 50 years" (Ruse 1973: 9), while Hull noted that his book would "take a closer look at that area of science which has been passed over in the rapid extrapolation from physics to the social sciences" (Hull 1974: 6). Much later, Hull recalled that when he received his PhD in 1964, "quite a bit had been written on the history of biology but very little on anything that might be termed the philosophy of biology" (Hull 1994: 375).
Ruse has been even more critical of the state of philosophy of biology in those days, stating that "[o]nly those who were there at the time – around the late 1960s, early 1970s – can know just how bad was much that passed then for the philosophy of biology. Its major merit was that there was so little of it. It was dreadful stuff, marked by an incredibly thin knowledge of biology" (Ruse 2000: 467).

Although Hull and Ruse undoubtedly deserve credit for helping to consolidate the philosophy of biology, careful historical scrutiny of work published during the period when the discipline was supposedly formed reveals a rather different and far more interesting story. As this paper will show, the establishment of the field tells us much more about the interests of the actors involved and the choices they made than about the general standing of philosophical examinations of biology at the time. Understanding this history also enables us to appreciate the surprising degree of contingency involved in defining the narrow set of topics and questions that dominated philosophy of biology discourse until relatively recently.
The Philosophical Impact of Cybernetics on Waddington’s Processual Epigenetics
Presented by :
Flavia Fabris, Konrad Lorenz Institute For Evolution And Cognition Research
This paper analyses Conrad Hal Waddington's systemic and antireductionist approach to biology, and explains its development in the period between the 1930s and 1950s. It argues that: (i) much of Waddington's work on epigenetics was deeply influenced by a process ontology of living systems; (ii) Waddington's process philosophy offered a new rationale for evolutionary biology, distinctly different from the one proposed by the architects of the Modern Synthesis; and (iii) it laid the foundation for the systems approach. This is well-worked territory for historians and philosophers of biology, but this paper will challenge relevant aspects of the received view. Drawing on first-hand study of papers, books, and correspondence from the Waddington archive housed in Edinburgh, the paper establishes a link between Waddington's reasoning and Whitehead's organicism, and argues that it was mainly Waddington's cybernetic reasoning, rather than organicism, that laid the foundation for his novel scientific approach. Waddington's theory of developmental systems was initially entrenched in the general cybernetic framework of communication and control. Waddington took the work of W. Ross Ashby and colleagues beyond their familiar boundaries, toward a cybernetics of biological development that he called epigenetics. Building upon Whitehead's and Thompson's works, and then on Ashby's feedback-control concept, Waddington adumbrated, and then fully presented, the process of genetic assimilation. It will be shown that this link is fundamental to understanding the conceptual dimensions of Waddington's processual epigenetics and to clarifying what contributions it made to contemporary theoretical biology.
02:00PM - 04:10PM
SR-9
Understanding Science III
Track : After Kant
Moderators
Peeter Müürsepp, Tallinn University Of Technology
A Feminist Defense of the Demarcation Problem
Presented by :
Juliana Broad
If the demarcation problem is dead, as philosopher Larry Laudan proposed in 1983, then what I suggest to feminist epistemologists is no less than intellectual grave-robbery. This may sound peculiar: after all, the project of demarcation has proved as injurious to the feminist cause as, I propose, it has the potential to be fruitful. Although feminist epistemologists have taken up the responsibility of creating conceptual frameworks to identify gender biases in science, generating remarkable changes in a number of scientific fields under the label of "feminist science", there has been intense criticism of the validity of feminist science qua science. Critics have even stated that science infused with feminist values would lose science's special epistemic authority while still claiming the status of science. What is more, at the hand of demarcation, feminist epistemologists have been saddled with the casualties of intellectual battle: past attempts at distinguishing science from non-science have pejoratively, if not justifiably, relegated certain modes of inquiry important to feminist theory, such as Freudian psychoanalysis and Marxism, to the realm of pseudoscience, with its attendant stigma. It is therefore perhaps historically justified that feminist epistemologists have eschewed the demarcation problem as an inherently antagonistic project. Indeed, despite resurgent philosophical interest in demarcation, feminists have remained quiet. In this paper, I will establish that feminist epistemologists have abandoned the demarcation project for the wrong reasons, and argue that the analytical tools of feminist epistemology are exceptionally capable of generating a pragmatic demarcation between science and pseudoscience.

I will approach this conclusion in three major parts. In Section 1, I will outline an account of the demarcation problem, tracing its early twentieth-century origins through developments at mid-century.
In Section 2, I will focus on the intellectual commitments of feminist epistemology and discuss their ability to identify biases in science. In Section 3.1, I will illustrate the aversion towards demarcation characteristic of feminist epistemology by providing a close analysis of a paper by Sandra Harding, an influential feminist philosopher of science. Against Harding, I will establish in Section 3.2 that the demarcation problem has been abandoned precipitously. I will suggest in Section 3.3 that a starting point for feminist demarcation can be found in the intellectual tools and frameworks already generated by feminist epistemology. 
The Limits of Limiting Cases
Presented by :
Zachary Shifrel, Arizona State University
Skeptics appeal to the graveyard of once successful scientific theories to cast doubt on the realist's success-to-truth inference. In response, many philosophers employ limiting case arguments that purport to show a thread of continuity between discarded theories and their successors. Structural realists, for example, argue that the invariance of structure under theory change can be shown when we take various limits of (usually physical) theories: modern spacetime theories retrieve Newtonian gravity in the weak-field limit, quantum electrodynamics retrieves Maxwell's equations in the limit of a large number of photons or in the classical ħ → 0 limit, and so on. The realist then alleges that we can make ontological commitments to those robust theory constituents that are recovered in the relevant limit (Ladyman, 2007; Lyons, 2016; Worrall, 1989). I argue that limiting cases cannot do the work that realists want them to, leaving realism vulnerable to historical challenges. More specifically, I aim to show that (i) limiting case arguments often fail to preserve the (empirical, conceptual, structural, etc.) content that realists say they preserve and that (ii) even when limiting case arguments succeed in establishing the invariance of some subset of a theory's content, they serve as a poor means by which to formulate and defend realism. In the former case (i), we are misled by unnoticed historical revisions or by an unjustified focus on theory content that washes out the process of theory construction. Sometimes the limiting case relation does not actually hold between our contemporary theory and the past theory itself, but instead between our theory and a version of the past theory that we have constructed in the present and that is unfaithful to the original. Regarding (ii), I argue that a different variant of realism, namely Williams' effective realism, can better accommodate the historical record (Williams, 2017).
References: Ladyman, J., Ross, D., et al. (2007). Every Thing Must Go: Metaphysics Naturalized. Oxford: Oxford University Press. Lyons, T. D. (2016). Structural realism versus deployment realism: A comparative evaluation. Studies in History and Philosophy of Science. doi: 10.1016/j.shpsa.2016.06.006. Williams, P. (2017). Scientific Realism Made Effective. The British Journal for the Philosophy of Science, 70(1), 209–237. Worrall, J. (1989). Structural realism: The best of both worlds? Reprinted in D. Papineau (Ed.), The Philosophy of Science (1996). Oxford: Oxford University Press.
Rethinking the history of the philosophy of measurement: Norman Campbell (1880-1949) on quantities and the conditions of possibility of mathematical physics.
Presented by :
Nadine De Courtenay, University Paris Diderot
Thanks to the recent revival of the philosophy of measurement and the modernization of metrology, illustrated by the current reform of the International System of Units (SI), measurement could once again become a core issue in the philosophy of science. However, the philosophical emphasis given today to questions of modelling, calibration and standardization tends to set aside the concerns about the nature of quantities and the justification of the mathematical treatment of physical properties that animated the classical founders of the philosophy of measurement at the turn of the 19th and 20th centuries. Campbell's extensive contribution to the philosophy of measurement is, in this respect, either neglected or misconstrued and criticized. In this talk, I will show how the recent developments of metrology help to better appreciate the depth of Campbell's analyses. I will show further how Campbell's insights can be extended to shed new light on certain episodes in the history of philosophy of science, and suggest new directions for today's philosophy of science. In the new SI, a number of base units have been redefined on the model of the definition of the metre: they are now established on the basis of laws of nature in which the value of certain fundamental constants has been frozen. By the same token, quantities that had first been identified by abstraction from experiment, as independent entities of different kinds, are redefined on the basis of laws through which they become conceptually related to one another (fixing the value of the speed of light thus endorses the relativistic synthesis of space and time).
I will suggest that this redefinition process pertains to a relational understanding of the nature of quantities that remains implicit in Helmholtz's article "Counting and Measuring" but emerges as a key result of Campbell's enquiry into the conditions of measurement: abstract quantities, or magnitudes, are defined as the entities that satisfy the laws of measurement necessary to the development of a mathematical physics. The introduction of magnitudes as prescriptions, and no longer as descriptions, entails the simultaneous introduction of errors as a second, indispensable theoretical construct, in order to account for the actual failure of the laws of measurement to be exactly satisfied by the concrete quantities involved in experiments. This relational conception of magnitudes is further unfolded in Campbell's account of the nature of derived magnitudes and in his conception of the "true value" of a magnitude. I will conclude by showing how these insights can lead us to reinterpret the introduction of absolute space and time in Newton's famous scholium, as well as Einstein's holist account of the meaning of theories, within the framework of the philosophy of measurement. Further, if error is the indispensable correlate of the introduction of magnitudes, philosophy of science's focus on truth appears one-sided. Scientific knowledge should rather be thought of as an activity of enquiry constantly engaged in a collective enterprise of corrections. This view has the advantage of promoting a natural articulation between philosophy and history of science.
From random error to systematic error: a history of the concept of bias.
Presented by :
Nicolas Brault, Chair In Plant Breeding, InTerAct UP 2018.C102, UniLaSalle Polytechnic Institute
The concept of bias is now quite popular in science: from cognitive biases in psychology and behavioral economics to the canonical distinction between selection, information and confusion bias in epidemiology, one of the main tasks of many current scientists seems to be to detect and eliminate bias from their practice, in the sense that it would constitute a threat to objectivity and truth. However, the definition of what a bias is remains quite vague and seems to vary between the various disciplines that use it. For example, it is unclear whether the "hindsight bias" (also called the "I-knew-it-all-along" effect), one of the most famous cognitive biases, really has anything to do with the "Berksonian bias", which refers to the non-representativeness of a sample. To solve this problem and define precisely what a bias is, it is thus necessary to go back to the historical origins of the idea and retrace its trajectory through the history of science. To do this, I will track the origin of the idea of systematic error in the works of the founders of the calculus of probabilities in the 18th century (i.e. Thomas Bayes and Thomas Simpson); then I will study the uses of this word in the works of Francis Galton, the biometricians and Ronald Fisher; and, finally, I will examine the uses of this concept by the founders of modern epidemiology (especially Austin Bradford Hill) and of Evidence-Based Medicine (especially David Sackett). My thesis is then twofold: First, the concept of bias was at the beginning a trivial and popular psychological notion, synonymous with prejudice, and progressively became a genuine scientific concept in the context of the probabilistic revolution and its consequences for such disciplines as psychology and medicine.
This evolution has a lot to do with the methodological development of the design of experiments, conducted essentially by Ronald Fisher. My second thesis is that the unity of the concept of bias rests on two fundamental notions: the notion of error, and the notion of systematicity. I'll show that the problem of systematic error could only appear once the problem of random error had been solved, both by the calculus of probabilities and by differential calculus. The problem of systematic error, or bias, is thus the problem that remains unsolved by these two mathematical techniques. Through this opposition between systematic and random error, it will become possible to understand how this concept circulated through many scientific disciplines and how the actors of these disciplines used it in various operational functions relative to the historical contexts and epistemic needs relevant to each discipline. It will become clearer why this concept has been and is still opposed to such different concepts as those of validity, causality, evidence, objectivity and truth. To clarify the situation, I will conclude by distinguishing between four different concepts of bias: the psychological, the statistical, the epidemiological and the medical.
04:10PM - 04:30PM
HSS Foyer
Coffee & Tea Break
04:30PM - 06:00PM
HSS Auditorium
Keynote Address - Christina Thomsen Thörnqvist
Day 4, June 26, 2020
09:30AM - 11:10AM
SR-4
Individuating Bodies from Late Scholastic Aristotelian to Cartesian Philosophy (Symposium)
Track : Kant and Before
Moderators
Doina-Cristina Rusu
The aim of this symposium is to examine late Scholastic debates about the individuation of bodies and how they might inform 17th century approaches to the problem of individuation. Given the intensity of interest in the question of what, if anything, serves as the principle of individuation in medieval and late Scholastic debates, and how critical the principle was to 17th century metaphysical systems, it is surprising how little attention is paid to the subject by 17th century philosophers. The crux of Spinoza's monism is the assumption that the individuation of substances (plural) could only be either by essence or by accident, and since neither is plausible, what must be rejected is the plurality of substances. Hobbes renders the individuation of a living thing, composed of different bodies over time, a matter of its having the same motion, without much concern either for what individuates a body at a time or for what it is for distinct bodies to share a common motion. Leibniz resorts to individuation by accidents, and Descartes is either mute on the subject (e.g. on what individuates souls) or vague, suggesting that bodies are individuated by motion, or by the soul in the case of the human body, or by nothing at all. The problems of individuation become even more vexed in relation to animate bodies, which undergo a continual and, over time, complete change of parts, especially when, as in the case of non-human animate bodies, there are no souls to serve as the criterion of unity and identity over time. Today it is often assumed that late Scholastic theories, which 17th century philosophers inherited, took forms (souls, in the case of living bodies) to be the principles of individuation of all natural substances. Aristotelian bodies are primary substances, which were standardly characterized as compounds of substantial form and prime matter.
Since prime matter, as the potential for any form, is common to all bodies, it seems logical that form does the individuating. This inference can lead current scholars to make certain connections with 17th century philosophy. For example, the purported 'Scholastic view' is sometimes compared with the view, attributed to Descartes and his followers, that the soul/mind individuates whatever matter it is joined to, such that it is one human body. The general observation is made that, with respect to non-living bodies, in the so-called 'mechanical philosophy' motion and its laws take over what is presumed to have been the role of form. The first two papers, by Calvin Normore and Helen Hattab respectively, challenge this common preconception about the Scholastic approach to individuation. Normore examines late 15th and early 16th century arguments surrounding the nominalist position that there is no principle of individuation. Hattab traces the debate among Scholastic defenders of various principles of individuation in late 16th century Padua. The third paper, by Deborah Brown, challenges presumptions about the relationship between Scholastic and modern views by reinterpreting how Descartes conceives of the individuation of bodies against this background.
What Individuates Bodies? Late 16th Century Scholastic Aristotelian Debates
Presented by :
Helen Hattab, University Of Houston
Though Scholastic Aristotelians and their early modern critics conceived of matter and substance differently, both grappled with the problem of what individuates the bodies that natural philosophy studies. This paper argues that the above-mentioned standard connections made between late Scholastic accounts of how bodies are individuated and the views of canonical 17th century philosophers are premature. There is a more complex story that must first be told about the late 16th century debate. My aim is to begin to sketch out this story. First, I show that there was no Scholastic consensus at this time, as the pros and cons of various medieval theories of individuation were heavily debated. Followers of Averroes did hold that form individuated matter in the case of the compound substances of physics. However, several other positions were highly influential, including Avicenna's and Thomas Aquinas' theory that matter affected by quantity is the principle of individuation, as well as John Duns Scotus' positing of an individual difference, called haecceitas, and the hybrid view that form and matter together are responsible for individuation. Second, the fact that this topic takes up increasingly greater amounts of space in commentaries on the relevant passages of Aristotle's works, as well as engendering independent treatises and disputations, indicates that developing a comprehensive account of what individuates bodies, and substances in general, became more important as the 16th century went on. One such treatise, entitled On the Principle of Individuation, was published by Archangelus Mercenarius in 1571 at Padua. Mercenarius (d. 1585) became a professor at Padua after studying in Venice, and was an opponent of Jacopo Zabarella. Mercenarius' treatise is a useful guide to the growing debate on individuation because he structures it as a disputation.
He sequentially presents the arguments for and against Scotus' view, the view that form individuates and the hybrid view before defending his interpretation of the Thomist view that designated matter individuates natural substances. I thus draw on Mercenarius' discussion and related texts to map out the prevailing late 16th century Scholastic positions and arguments on what individuates bodies. This provides a starting point to tell the more complex story of how 16th century Scholastics construed and attempted to solve the problem of individuating bodies.
Nominalism, Composition and Individuation in the later Middle Ages
Presented by :
Calvin Normore, University Of California, Los Angeles
During the later Latin Middle Ages two theses were almost universally accepted: that each thing is individual and that matter is indefinitely divisible. Two other theses were more controversial and usually identified with the Nominalist tradition. One of these is that a whole just is its parts 'taken collectively (coniunctim)'. This paper focuses on the other thesis: that nothing is the thing it is in virtue of relations to anything else. This fourth thesis amounted to a rejection of the claim that there were principles of individuation. Opponents of the Nominalist tradition argued that, taken together, the four theses entail that there are no composite things. This paper explores the Nominalist rejection of principles of individuation and some of the debates this generated. Beginning with a sketch of the origin of these debates in the early 14th century work of William of Ockham, John Buridan, and their critics, it focuses on developments at the end of the 15th and beginning of the 16th centuries in the works of Gabriel Biel (1420-1495) and John Mair (Major) (ca. 1467-1550). Biel is usually taken to be an 'orthodox' Ockhamist, and in his Collectorium in IV Libros Sententiarum Guillelmi Occam he does defend a recognizably Ockhamist approach to the issue of the primitive individuation of substances, but in a new context in which his Albertist, Scotist, and Thomist opponents present arguments with which Ockham did not have to deal. The paper explores some of these arguments and Biel's responses. Biel's near contemporary John Mair, meanwhile, attempts to adjudicate between neo-Scotist accounts of individuation by 'haecceitas' and Buridan's insistence that individuality is primitive, but again in a context which had shifted considerably since the 14th century. These debates illustrate how the Nominalist position on individuation was understood and transformed in the late 15th and early 16th centuries.
Descartes and the Plurality of Bodies
Presented by :
Deborah Brown, The University Of Queensland
This paper examines the problem of individuation in the 17th century, particularly as it pertained to bodies. Descartes provides a focus for the study, since many of the treatments of individuation in the 17th century occur in reaction to the perceived deficiencies of his analysis. It will be argued, however, that Descartes' approach to the subject has been widely misunderstood. Although it is usually supposed, on the basis of key texts from the Principles of Philosophy, that Descartes individuates a body as a collection of matter that moves with a common motion, it will be argued that this proposal was intended neither as a principle of individuation nor as an explanation of the unity of a body. One well-documented problem with such a proposal is that motion, being a mode, cannot individuate that on which it depends, namely a substance, an observation that fuels the monist reading of Descartes, according to which there is just one extended substance (the plenum), all distinctions within which are simply modal. Other problems concern the tension between this criterion and Descartes' identification of space and the body that occupies it. But the problems with such a reading, I shall argue, extend further than either of these considerations. Without an explanation of what counts as a single motion, the common motion criterion cannot be used to discriminate between cases of single bodies moving with single motions and cases of distinct bodies moving with a single motion (assuming there is an interpretation in which the latter represents a class of distinct possibilities).
These considerations (among others) point to the need to rethink the common motion criterion – indeed to rethink whether it was ever intended to function as a criterion of individuation at all, and if not, what other purpose it might have been intended to serve instead. This discussion leads us, finally, to the question why the notion of primitive individuation – one of the solutions favoured by late medieval nominalists – was not more widely countenanced. If it is indeed the only plausible answer to the question why there are many souls rather than one, it is a good question why the same sort of answer could not equally serve as the story for why there is a plurality of bodies rather than one. This paper concludes by defending a reading of Descartes as committed to the primitive individuation of bodies, despite their real and infinite divisibility, against those philosophers who searched instead for a criterion of individuation and, more often than not, did so in vain.
09:30AM - 11:10AM
SR-6
Causal Reasoning
Track : After Kant
Moderators
Alberto Cordero, CUNY Graduate Center & Queens College CUNY, New York
What Capricious Reasoning Means for Philosophy: Mill's Logic and the Problem of Causal Selection
Presented by :
Brian Hanley, University Of Calgary
Philosophers of science are increasingly interested in how scientists select important causes from a background of other factors. Previously, philosophers have noted the significance of causal selection in history and the law (Hart & Honoré 1959, Dray 1978). Recent studies of causal selection have broadened attention to biotechnology (Baxter 2019), medicine (Ross forthcoming), and cell biology (Sterner forthcoming). While philosophical interest continues to grow, it is against the background of a deeply dismissive philosophical tradition. With few exceptions, philosophers working in an analytic tradition have held that causal selection is unprincipled or philosophically uninteresting. Even Hart & Honoré, perhaps the most important figures in the legacy of causal selection, say it is largely irrelevant outside the narrow interests of history and the law. Dismissiveness has been shaped, and solidified, by several influential voices, most prominently David Lewis and John Stuart Mill. However, their remarks are ambiguous and open to interpretation. I show that Lewis may have been more optimistic than most think, and I argue that Mill offers a great source of unappreciated insights. Most trace pessimism about causal selection back to John Stuart Mill's claim that selections are made in a "capricious manner" (1843/1981). However, there are significant discrepancies surrounding what causal selection is about, and what exactly Mill's view on the matter is. Many different types of solutions have been given to fundamentally distinct "The" problems attributed to Mill. There is considerable disagreement about what philosophical problem is posed by selection, and what form, if any, a solution should take. One way to mend these discrepancies is to go back to Mill. In this paper I argue that Mill is misunderstood.
I offer a new interpretation of how he understands the capriciousness of causal selection and the philosophical problem it poses. Philosophers have not appreciated Mill's deeper reasoning, or his broader philosophical motivations. I show that the significance of calling a form of reasoning "capricious" within the context of Mill's deeply naturalistic epistemology has been completely overlooked. I show that the common interpretations of Mill's position are incongruous with his philosophical methodology in System of Logic. In the context of Logic, Mill's problem is more challenging than philosophers have acknowledged. Yet, it offers philosophers a way forward. While some of Mill's reasoning relies on assumptions about causation and science many would find obsolete, I argue that he offers lasting lessons for contemporary philosophers interested in causal selection. Mill gives an insightful analysis of the complex ways that causal selection works across different cases. His analysis reveals the challenges that philosophers must face in order to develop illuminating accounts of causal selection. I explain these lessons, and describe how they offer a fruitful way forward. Mill's analysis can be used as a blueprint for a more coherent philosophical study of causal selection that builds on recent work in philosophy of causation, such as James Woodward's interventionist theory of causation.
W. F. R. Weldon's Philosophy of Science
Presented by :
Yafeng Shan, University Of Kent
W. F. R. Weldon is widely credited as the co-founder of the Biometric School in the study of inheritance at the beginning of the 20th century. With Karl Pearson (the other co-founder of Biometry), Weldon heavily criticised Mendelism. Like Pearson, Weldon favoured a statistical approach to the study of inheritance. Like Pearson, Weldon attempted to develop a theory of inheritance inspired by Francis Galton. Nevertheless, there are substantial differences between Pearson's and Weldon's views on inheritance. Pearson contended that an ideal theory of inheritance should be centred on Galton's law of ancestral heredity, while Weldon was devoted to developing a theory based on both Galton's theory of hereditary process and Galton's law of ancestral heredity. Methodologically, for Pearson the statistical approach was the only approach to the problem of inheritance, while Weldon admitted the priority of the statistical approach but still found it incomplete. These theoretical and methodological disagreements are deeply rooted in the differences between Pearson's and Weldon's philosophies of science. This paper reexamines Weldon's philosophy of science through a detailed analysis of his unfinished and unpublished work, especially Theory of Inheritance and Individuality of the Chromosomes (now stored at UCL Special Collections). In particular, I focus on Weldon's view on causation. Charles Pence (2011) argues that, compared with Pearson's positivist view on causation, Weldon's view is more like a probabilistic account of causation. However, from my reading of Weldon's unpublished manuscripts, I argue that Weldon's view on causation is very close to the Russo-Williamson thesis (also called evidential pluralism), the basic idea of which is that in order to establish a causal claim, one has to have both difference-making evidence (e.g. probabilistic evidence) and mechanistic evidence.
Inductive and Deductive Reasoning in Discovering the Cause of Brownian Movement
Presented by :
Klodian Coko, Ben Gurion University Of The Negev
Brownian movement is the completely irregular movement of microscopic particles of solid matter when suspended in liquids. Although it was known for most of the nineteenth century, it was only at the end of that century that the importance of the phenomenon for the kinetic-molecular theory of matter was recognized. Historians of science have expressed both surprise and lament about the fact that Brownian movement did not play a role in the early development and justification of the kinetic theory of gases. If the liquid's molecular motion had been identified from the beginning as the proper cause of the phenomenon, some of the main philosophical and scientific objections raised against the early kinetic theory could have been answered. Related to this received historiographical position is the claim that most of the nineteenth century experimental investigations on the cause of Brownian movement were of a somewhat lower scientific rigor than the later experiments that successfully established molecular motion as the unique cause of the phenomenon (Brush 1968; Nye 1972; Maiocchi 1990). I present the complexities of the nineteenth century investigations on the cause of Brownian movement and make sense of its late connection with the kinetic-molecular theory of matter. I argue that there was in fact extensive and sophisticated experimental work done on the phenomenon of Brownian movement throughout the course of the nineteenth century. Most investigators were fully aware of the methodological standards of their time and spent much effort to make their work adhere to these standards. They employed two main methodological strategies. The first was the inductive strategy of varying the experimental parameters to identify causal relations. This is the traditional Baconian strategy of varying the circumstances, which was codified in a more rigorous form in the nineteenth century methodological works of John Herschel and John S. Mill.
Its underlying rationale was that all the circumstances that could be varied or excluded without affecting the phenomenon under investigation could not be causes of the phenomenon. On the other hand, all the circumstances whose exclusion or variation had an influence on the phenomenon under investigation were considered to be causal factors. The second was the hypothetico-deductive strategy (or the Method of Hypothesis, as it was called at the time), which made a dynamic re-emergence during the course of the nineteenth century as the proper strategy for validating hypotheses about unobservables. This strategy was mostly exemplified in William Whewell's notion of the Consilience of Inductions. Its underlying rationale was that the ability of a hypothesis (about unobservables) to explain a variety of experimental facts, especially facts that had played no role in the initial formation of the hypothesis, indicated the validity of the hypothesis. Both strategies faced limitations when investigating the cause of Brownian movement. Only the fruitful combination of these (prima facie) antithetical strategies led, at the end of the nineteenth century, to the recognition of molecular motion as the most probable cause of Brownian movement.
09:30AM - 11:10AM
SR-7
Fleck and Kuhn
Track : After Kant
Moderators
Nicolas Brault, Chair In Plant Breeding, InTerAct UP 2018.C102, UniLaSalle Polytechnic Institute
The role of (conceptual) metaphors in Ludwik Fleck’s philosophical writings
Presented by :
Paweł Jarnicki, Warsaw University Of Technology
When Ludwik Fleck wrote his philosophical texts, all authors writing about science used such traditional terms as fact, observation, discovery, truth, etc. Fleck, with his direct experience of laboratory work, observed that these concepts did not fit modern research practice. In order to articulate his experience, he had to refer to the contemporary knowledge of science, i.e. to the above-mentioned concepts. From the perspective of cognitive metaphor theory, Fleck's strategy can be described as follows. Fleck conceptualizes the traditional abstract conceptual domains of philosophy and the history of science (according to Fleck's theory, derived from "popular knowledge") with new metaphors that are better suited to contemporary research practice, and then with the same metaphors he conceptualizes the new concepts he introduces (thought style, thought collective, collective mood, etc.). For example, Fleck conceptualizes the concepts of scientific observation and scientific discovery by means of projections from Gestaltpsychologie, which bring the traditional concepts down to absurdity (you observe what you already have in your mind): scientific discovery is about seeing a new whole; scientists observe what they have learned to see before. Fleck then makes the "readiness to see" certain forms the main component of the thought style. Metaphors of evolution, in turn, are used by Fleck to modify the notion of scientific fact, which turns out to originate, develop and perish. A group of musical metaphors is used to conceptualize the mood, i.e. the condition for the origin of any collective and hence any thought style (these metaphors have almost disappeared from the American translation of Fleck's book). There is also a whole group of physical metaphors (inertia, force field, resistance) and a group of religious metaphors (initiation, esoteric and exoteric circles) that are also important for Fleck's theory.
In his works we will also find a creative development of the traditional metaphor of cognition as a journey (they searched for India, found America) and a polemic with the traditional metaphors of cognition as conquest (veni, vidi, vici epistemology) and of theories as buildings (bricks that do not fit together). Not only do all these metaphors make Fleck's texts read well after 80 years; without many of them, Fleck would not have been able to formulate his theory at all. In my talk I would also like to show that certain aspects of Fleck's work remain unknown to the reader of the English translation of Fleck's writings, because some of these metaphors have disappeared or been significantly transformed in translation.
Arranging the Symposium: The Correspondence between Ludwik Fleck and Tadeusz Bilikiewicz
Presented by :
Artur Koterski, Maria Sklodowska-Curie University
The debate between Ludwik Fleck (microbiologist and philosopher of science) and Tadeusz Bilikiewicz (psychiatrist and historian of medicine) took place shortly before the outbreak of World War II and remained virtually unnoticed until 1978. A wider recognition of their exchange became possible when the English (1990) and German (2011) translations appeared. The polemic consists of four papers; in essence it concerned the understanding of the concept of style, the influence that the intellectual environment exerts upon scientific activity and its products, and the tasks of the sociology of science. Commentators on this dialogue were quick to notice that the claims Fleck made at that time were crucial for the understanding of his position and its non-relativist interpretation; on the other hand, an account of Bilikiewicz's version of constructivism was given only recently, and it presents the disputation in a substantially new light. Our understanding of the debate has been broadened by the newly discovered correspondence between Fleck and Bilikiewicz, as it allows us to answer old questions (most importantly, how the discussion originated and what motives made it happen) that were asked over forty years ago, when the debate was found and reviewed for the first time. Thus, the aim of this paper is twofold. First, to reconstruct briefly the main concerns Fleck had about Bilikiewicz's sociology of science (expounded in his Die Embryologie im Zeitalter des Barock und des Rokoko, 1932), together with the effective rebuttal given by the latter. Second, with that reconstruction in the background, to analyze their epistolary exchanges in search of the objectives and (rather great) expectations that both men linked to the polemic, and to point out the reasons why the project did not work out.
The Genealogy of Kuhn's Metaphysics
Presented by :
Paul Hoyningen-Huene, Leibniz University Hannover, University Of Zurich
Thomas Kuhn's metaphysics is not well understood. Kuhn himself realized that he did not quite understand the metaphysics he was groping for. From The Structure of Scientific Revolutions onwards, he tried to articulate the view that reality is not something fixed once and for all, completely given from outside, but is somehow and to some degree dependent on our "paradigms" (or something of this sort). He really meant "reality," and this is why the thought appeared utterly incoherent to some philosophers, who think of reality as something independent of us, our counterpart, standing opposite us. Many people therefore had extreme difficulty understanding what Kuhn was talking about when he spoke of "world change" due to revolutions, and the result was mostly straightforward dismissal of those passages of his work, or metaphorical or psychological readings. Unfortunately, Kuhn did not finish those chapters of his book The Plurality of Worlds: An Evolutionary Theory of Scientific Development that were planned to deal with his metaphysics. Whatever Kuhn's last word would have been, in this paper I shall try to reconstruct the basic ideas of his metaphysics. However, in order to gain the greatest possible plausibility, I shall start with the historical path along which the idea that reality is, and has to be, purely object-sided, without the slightest subject-sided contributions, was weakened. This path fundamentally starts with Copernicus and his new planetary system. This is significant because the motions of the Sun and the planets, previously seen as purely object-sided, were now seen as containing genetically subject-sided contributions. A very similar process, also at center stage in the constitution of modern science, was the introduction of secondary qualities in the 17th century by Descartes, Galilei, Boyle, Locke, and others. 
There we have the same process: something that had appeared to be purely object-sided lost this status and was seen as also containing subject-sided elements. In these processes, the reality status of something whose reality had previously seemed beyond doubt changed. The reflection of such processes in philosophy culminates in Kant's critical philosophy. Ever since, this kind of thinking has been an indispensable part of Western thought, and it surfaced again very clearly in the development of special relativity and quantum mechanics and in many currents of present-day Western thought. This paper argues that Kuhn continues this tradition. In order to understand Kuhn's intended metaphysics, it is therefore essential to understand the credentials of this tradition, which appear to be unavailable to most realist critics of Kuhn. 
09:30AM - 11:10AM
SR-8
Continental HOPOS
Track : After Kant
Moderators
Fons Dewulf, Ghent University
Heidegger on the Unity and Significance of Science in Being and Time
Presented by :
Paul Goldberg, Boston University
In Being and Time, Heidegger claims that science stands out as a unique and privileged pursuit for human beings ("Dasein" is the term that Heidegger coins to describe entities with our ontological structure). In this essay, I aim to clarify this important but generally overlooked set of claims. As I reconstruct it, Heidegger's argument for the privileged status of science runs as follows: (1) Truth is constitutive of Dasein. (2) Science, a possible pursuit of Dasein's, bears a privileged relationship to truth. (3) Therefore, science is one of Dasein's privileged pursuits. Premise (2) of this argument also advances a claim about the unity of science. For Heidegger, science is distinguished by its privileged relationship to "truth," as he idiosyncratically defines it. Generally, commentators have either ignored or misunderstood this argument. Premise (1) has been treated at length on its own (for instance, by Dahlstrom in his 2001 Heidegger's Concept of Truth and Wrathall in his 2011 Heidegger and Unconcealment), but generally not within a detailed analysis of how it figures into Heidegger's account of the significance of science. And insofar as scholars have treated Heidegger's analysis of the unity and significance of science, they tend to misunderstand it. Almost every major expositor of Heidegger's philosophy of science (for instance, Hubert Dreyfus, Joseph Rouse, William Blattner, Joseph Kockelmans, Jeff Kochan, and Trish Glazebrook, among others) subscribes to a version of what I call the "presence-at-hand thesis" (PHT). In Being and Time, according to PHT theorists, Heidegger thinks that what distinguishes science from other pursuits is that the entities scientists investigate all share the ontological structure that Heidegger calls "presence-at-hand" (Vorhandenheit, roughly the mode of being characterizing isolated, independent substances with properties). 
Instead, I claim, Heidegger thinks what distinguishes science is that it bears a privileged relationship to "truth," in his idiosyncratic sense. I unpack Heidegger's argument about the privileged status of science premise by premise. Premise (1) is based on two independent arguments that Heidegger advances: (1.i) what Heidegger calls discovery (Entdeckung, roughly the intelligibility of entities) and disclosedness (Erschlossenheit, roughly the intelligibility of being) are constitutive of Dasein; and (1.ii) discovery and disclosedness are the most fundamental senses of "truth," insofar as disclosedness and discovery are the necessary conditions for the capacity of a judgment to correspond to a state of affairs. In explicating premise (2), I begin (Section 2.i) by discussing what, for Heidegger, distinguishes science from other pursuits. First, I rebut the PHT theorists. I present evidence that Heidegger thinks that engaging with present-at-hand substances is neither necessary nor sufficient for scientific research. I then argue that Heidegger actually thinks that science is distinguished by its uniquely explicit and systematic orientation toward discovery and disclosedness. Finally (Section 2.ii), I discuss what I see as Heidegger's Aristotelian view that science is one of Dasein's privileged pursuits. On my reading, Heidegger's analysis of the relationship between science, authenticity, and Dasein is inspired by and analogous with Aristotle's account of the connection between episteme, eudaimonia, and human existence.
On Ways to Talk about the Whole: A Preliminary Note on Jacques Merleau-Ponty's Philosophy of Cosmology
Presented by :
Jeesun Rhee, Kangwon National University
Philosophy of cosmology is concerned with cosmological theories and observations, as well as their interpretations and philosophical implications. It is a relatively new subfield of the philosophy of science, and only recently has it gained enough interest for a special issue of Studies in History and Philosophy of Modern Physics to be devoted to it, edited by H. Zinkernagel in 2014 under the title "Philosophical aspects of modern cosmology." In this talk, I will first review the contributions to the SHPMP volume and other survey articles on the subject that have appeared recently. I will then show that these "philosophical moves" in cosmology, or this "cosmological turn" in the philosophy of science, were preceded by the work of the French historian and philosopher of science Jacques Merleau-Ponty (1916–2002). The talk aims to give a glimpse of Merleau-Ponty's philosophy of cosmology, with an emphasis on two main topics: (a) the definition of the very object of cosmology, i.e. the universe; and (b) the scientificity of cosmology and the specificity that distinguishes it from the other sciences.
The ‘philosophy of the concept’ in French philosophy of science from Cavaillès to Foucault
Presented by :
Massimiliano Simons, Ghent University
In the introduction to the English translation of Canguilhem's The Normal and the Pathological, Michel Foucault makes the infamous distinction between two reactions to Husserlian phenomenology in France, contrasting a philosophy of the subject and experience (Sartre, Merleau-Ponty) with a 'philosophy of the concept', associated with Gaston Bachelard, Georges Canguilhem and himself. He traces this second group to the work of Jean Cavaillès, who indeed, in his posthumously published Sur la logique et la théorie de la science (1947), alludes to an alternative understanding of science through a 'dialectics of the concept'. It remains enigmatic, however, what Cavaillès (and Foucault) intended by this distinction. Too often it is either defined purely negatively (as not being a phenomenology that starts from the transcendental subject) or simply equated with a form of structuralism (thus falling prey to all the latter's limitations and problems). This paper explores the origins of such a philosophy of the concept and how it entails a unique approach to the philosophy of science. It aims to do so in two ways. On the one hand, it will focus on Cavaillès' own attempt to develop an alternative philosophy of mathematics and science, and on how this was subsequently developed by authors such as Jean-Toussaint Desanti. On the other hand, the work of Canguilhem will be re-examined, mainly his La formation du concept de réflexe aux XVIIe et XVIIIe siècles (1955), which claims to be a history of concepts, along with the role these ideas subsequently played in the work of Michel Foucault. On this basis, I aim to evaluate how a philosophy of the concept can still be relevant for contemporary science studies, and how it relates to more recent developments such as the practical and social turns.
11:10AM - 12:45PM
HSS Foyer
Lunch Break (coffee and tea provided at 12:30)
12:45PM - 02:55PM
SR-4
The Philosophy of Emilie Du Chatelet (Symposium)
Track : Kant and Before
Moderators
Lisa Downing, Ohio State University
The study of Emilie Du Châtelet's philosophy has seen a resurgence in recent years. Building on the important work of Hagengruber (2012), Reichenberger (2016), and Brading (2019), which seeks to place Du Châtelet's thought within its proper historical and philosophical context in the early Enlightenment, this symposium seeks to illustrate the wide range of her contributions to mid-18th-century philosophy. In Institutions Physiques (Paris, 1742) and other texts, Du Châtelet contributed to a number of significant conversations amongst leading early Enlightenment figures such as Maupertuis, D'Alembert, Euler and the early Kant. In particular, Du Châtelet presented challenging and potentially novel arguments in favor of "les êtres simples" (or simple beings), which can be compared with monads; she articulated a fascinating distinction between mechanical and physical explanations that can be considered an early pluralist conception; and she developed a methodology for the new physics of gravity that stressed the importance of conceptual clarity in advance of solving pressing empirical problems. In each case, what we find in Du Châtelet is a distinctive voice that deserves greater recognition than it has so far received in the literature. Unlike her male colleagues, however, Du Châtelet faced exclusion from the principal intellectual institutions of her day, such as the Académie Royale des Sciences and its sister academy in Berlin. So it is equally important for our panel to discuss not only that exclusion, but also the clever maneuvers she employed to ensure that her work joined the scientific and philosophical conversation of her day. Intriguingly, despite the institutional exclusion she faced, Du Châtelet exerted a significant influence on philosophy during the early Enlightenment in France and Germany. 
This symposium ought to interest anyone who studies the history of the philosophy of science that developed in Continental Europe in the mid-18th century. 
Emilie Du Châtelet on Mechanical Explanation vs. Physical Explanation
Presented by :
Qiu Lin, Duke University
In the second edition of her Foundations of Physics, Du Châtelet advocates a three-fold distinction of explanation: the metaphysical, the mechanical, and the physical. While her use of metaphysical explanation, i.e., explaining something via the Principle of Sufficient Reason, has received some attention in the literature, little has been written about the distinction she draws between mechanical and physical explanations, including their demands, scope, and use in physical theorizing. This paper aims to fill this void, arguing that this distinction is a crucial piece of Du Châtelet's scientific method. According to Du Châtelet, a mechanical explanation is one that 'explains a phenomenon by the shape, size, situation, and so on, of parts', whereas a physical explanation is one that 'uses physical qualities to explain (such as elasticity) … without searching whether the mechanical cause of these qualities is known or not' (Du Châtelet 1742, 181). I will analyze Du Châtelet's views regarding: (1) what counts as a good physical explanation; (2) why a mechanical explanation is not necessary for answering most research questions in physics; and (3) why a good physical explanation is instead sufficient for answering those questions. I argue that in so doing, Du Châtelet is advancing an independent criterion of what counts as a good explanation in physics: on the one hand, it frees physicists from the methodological constraint imposed by mechanical philosophy, which was still an influential school of thought in her time; on the other, it replaces this constraint with the requirement of attention to empirical evidence, for that alone determines which physical qualities are apt to serve as good explanans.
Du Chatelet on Matter in Motion
Presented by :
Andrea Reichenberger, University Of Paderborn
If bodies are considered inert, matter is a heavy mass without action. That sounds very much like Newton's concept of matter. However, Du Châtelet disagrees with Newton regarding the concept of "body," or "matter." According to Du Châtelet, the essence of a body consists in (i) extension, (ii) inertial force, and (iii) moving force (Du Châtelet 1742, §143). Extension is therefore not the only property of a body; the power to act still needs to be added. This power is inherent in matter. This sounds very much like Christian Wolff's concept of matter. For Wolff, extension, moving force and inertial force are the properties which constitute the essence of bodies; they are « phénomènes substantiés, comme les appelle Mr. Wolf » ("substantiated phenomena, as Mr. Wolff calls them"; Du Châtelet 1742, §156). Recently, Marius Stan (2018) has drawn a close connection between Du Châtelet and Wolff. He argues that this result challenges the ruling consensus, which takes her to have been decisively influenced by Leibniz, an idealist. According to Stan, Du Châtelet's view is best understood as "a mixture of realism and idealism". In this paper, I shed a somewhat different light on Du Châtelet's position by addressing the philosophical problem of Zeno's paradoxes of motion and focusing on the largely unknown reception of Du Châtelet in the German Enlightenment. I argue that the distinction between the ideal and the real has to be extended to include the phenomenal and the actual as well. In this context, Johann Heinrich Samuel Formey's plagiarism and Abraham Gotthelf Kästner's criticism of Du Châtelet in the Berlin Academy's prize question on monads (1747) play a decisive role. It is the confounding of the real with the ideal that generates the kinds of problems crystallized in Zeno's paradoxes and in the infinitesimals: whether infinitesimals did or did not exist was a question of fact, not too different from the question of whether material atoms, or physical monads, do or do not exist. 
Du Châtelet's modal metaphysics aimed to answer this question. 
Du Chatelet's Method in Institutions Physiques
Presented by :
Andrew Janiak, Duke University
By the middle of the 18th century, prominent figures like Euler and Kant sought to unify the disparate contributions that Leibniz's metaphysics and Newton's physics made to the understanding of nature. Would a creative approach enable one to overcome the seemingly intractable debate between these two intellectual streams? The importance of this dynamic has undergirded the interpretation that Du Châtelet's Institutions de physique (Paris, 1740) was a creative attempt to provide Newton's physics with a Leibnizian metaphysical foundation. The chapters of the Institutions seemingly support this interpretation: whereas the early chapters concern classic metaphysical topics like essences and modes, the later chapters concern gravitational phenomena. The title adds support: since her work can be translated as the "Foundations of Physics," it is tempting to read the early metaphysical chapters as providing the foundation for the physics of the later ones. In this paper, I argue that this interpretive framework for understanding Du Châtelet's magnum opus is tempting but ultimately misleading. On my alternative interpretation, Du Châtelet does not provide a metaphysical foundation for physics, but rather a method that systematizes physics, tackling metaphysical questions raised by Newton's own theory but left unanswered by it. One example will illustrate this point. At the time of Newton's death in the late 1720s, there was a major problem in understanding his theory: he had claimed that all bodies gravitate toward one another, but failed to clarify precisely what that claim means. He insisted he was not claiming that gravity is "essential to matter," but never explained what "essential" means. Many Newtonians rushed in where Newton feared to tread: since gravity acts on all bodies, they said, it must be essential to matter after all. 
Du Châtelet fills the void left by Newton's abstemious approach to metaphysical issues by explicitly discussing essences at the outset of her work. The goal is not to provide a metaphysical foundation for physics that was foreign to Newton's project, but rather to show that it is the gravitational theory of the new physics itself that raises the question of matter's essence. First, she argues that if essential means intrinsic, then gravity is not essential to matter, because that force depends on distance and is therefore relational (i.e., not intrinsic). Second, if essential signals a kind, Newton's theory cannot show that gravity is essential in the sense of being necessary for something to be material. The reason is that the theory is compatible with the possibility that some kind of medium could undergird gravitational interactions; material bodies in the absence of the medium might not gravitate. This example nicely illustrates Du Châtelet's creative method: she demonstrates how a systematic approach to physics requires one to delve into philosophical problems if the picture of nature presented in that physics is to be rendered clear. The moral of the story is that the much-lauded "non-metaphysical" approach to physics hailed in Newton's revolution was not yet achievable in the mid-18th century.
Gender and Science in Eighteenth-Century France: The Strategies of Émilie Du Châtelet
Presented by :
Karen Detlefsen, University Of Pennsylvania
Émilie Du Châtelet (1706-49) produced much excellent philosophical work in her lifetime, including works in natural philosophy -- physics, optics, and experimental work on the nature of fire -- as well as metaphysics and value theory, broadly conceived. Throughout her lifetime, she displayed, both in the explicit written word and in her actions, an acute understanding of the ways in which her work was affected by her being a woman. From her discussion of the long-term impact upon women's minds of limited early opportunities in education, to her exclusion from institutions such as the Académie Royale des Sciences, Du Châtelet was alert to the obstacles to her full participation in public intellectual life in mid-eighteenth-century France, including in the sciences. This paper engages with two of the strategies Du Châtelet employed to overcome the range of obstacles she faced, with an eye to uncovering her attunement to social phenomena that we now theorize as implicit bias and epistemic injustice. First, in her preface to her translation of Bernard Mandeville's Fable of the Bees, she discusses the intellectual opportunities open to women once their potential genius has been left undeveloped as a result of limited education. One such opportunity is the practice of translation, but as her subsequent translation of the Fable establishes, the liberties taken with that text result in fundamental changes to its philosophical content. The first strategy I explore, then, is her practice of translation -- specifically of Newton's Principia -- with an eye to uncovering the degree to which she used translation to contribute original thinking in natural philosophy. The second strategy is her use of publishing venues to engage in public debates in natural philosophy from which she would otherwise have been precluded owing to her exclusion, for example, from the Académie. 
Most notable is her very public, and highly strategic, engagement in high-profile publications with Dortous de Mairan over the vis viva controversy. The primary aim of this paper is not to interrogate the content of Du Châtelet's own contributions to natural philosophy, though the fact of these contributions will be mentioned throughout. Rather, it is to show how, through her employment of specific strategies, as well as her explicit discussion of the state of women's intellectual opportunities in her era, Du Châtelet displayed an understanding of phenomena that have only recently been theorized in the field of social epistemology. Most notable are the phenomena of epistemic injustice and implicit bias. Du Châtelet thus represents an early stage in feminist philosophy of science, both for the obvious reason that she was a woman confounding sexist ideas about women and science through her excellent work in the sciences, and for the less obvious reason that she provides an early example of a thinker aware of social phenomena that would later be theorized within feminist epistemology of science.
12:45PM - 02:55PM
SR-6
Gödel and Wittgenstein
Track : Kant and Before
Moderators
Erich Reck, University Of California At Riverside
Gödel and Leibniz
Presented by :
Julia Jankowska, University Of Warsaw
Our main objective in this talk is to provide a critical reading of van Atten's interpretation of Gödel's philosophy of mathematics. Van Atten [1] highlights similarities between Gödel's approach to the philosophy of mathematics and that of Husserl. More precisely, according to van Atten, both authors aimed at finding a way to reach the essentially ultimate truth. Gödel searched for the ultimate and unique set of axioms that underlie all mathematical knowledge. The way of attaining such an ultimate and all-encompassing truth was supposed to lead through a special epistemological method. This method, which Gödel hoped to adopt from Husserl, was the phenomenological method. Husserl intended his method for broader philosophical goals; Gödel wanted to apply it in mathematics, and specifically in set theory, which, in his opinion, formed an all-encompassing framework for "the queen of sciences". In the modern debate in the philosophy of set theory, Gödel is seen by van Atten as someone who postulates one set-theoretical universe which, most importantly, can be expressed in one final, closed set of axioms. As such, Gödel, according to van Atten, is an advocate, avant la lettre, of the view opposed to the multiverse approach in the contemporary debate. Gödel indeed believed that all mathematical questions have clear answers, and that these answers can be reached by humans. He also claimed that all important aspects of mathematical reality are potentially well-defined, so that all sentences about them may be judged true or false. However, as we claim in this paper, there exists a non-negligible proto-structuralist aspect of Gödel's philosophy of mathematics which can be derived from Leibniz (here we follow Mugnai [2]). We claim that the most distinctive aspect of Gödel's philosophy consists of a specific combination of a particular kind of Platonism with a structuralist spirit. 
Moreover, we claim that our interpretation explains all the aspects and all the phases of Gödel's thought. We observe that the idea that truths need to be accessible to humans is present already in the philosophy of Leibniz. We also observe that the idea of truth being accessible to humans is coherent with the existence of an ultimate truth despite the inevitability of there being many frames of reference (or "perspectives"). In consequence, the Leibnizian interpretation of Gödel's thought explains the apparent incoherencies, and is also more in the spirit of modern philosophy and less of, as Gödel put it, "a Platonism that cannot satisfy a critical mind". We argue that Gödel's philosophy has much more affinity with Leibniz than with Husserl. The role of Husserl in Gödel's philosophy was quite small: even if Gödel hoped to achieve something in mathematics by using phenomenology, his positive philosophical ideas were free of Husserl's influence.
Bibliography
[1] Mark van Atten, Essays on Gödel's Reception of Leibniz, Husserl, and Brouwer, Springer, 2015.
[2] Massimo Mugnai, "Leibniz and Gödel," in: Gabriella Crocco and Eva-Maria Engelen, eds., Kurt Gödel: Philosopher-Scientist, Presses universitaires de Provence, 2016, pp. 401-416.
Carnap vs. Gödel on the application of mathematics
Presented by :
Jeongmin Lee, Hankyung University
In his unpublished article "Is Mathematics Syntax of Language?," Gödel criticizes what he takes to be Carnap's 'syntactical interpretation' of mathematics. Commentators, including Ricketts and Goldfarb, Awodey and Carus, and more recently Lavers, have all reconstructed Gödel's critique in various ways and explored Carnap's possible responses. After reviewing these reconstructions, I make the following contributions. First, the essence of Gödel's critique is not the argument based on the admissibility of syntactical rules (§11 of Gödel 1953/9-III) and the second incompleteness theorem. Rather, we should take Gödel's main argument against Carnap to be the one based on the application of mathematics and the empirical expectations thereby generated (§§12-14 of Gödel's 1953/9 Draft III). I support this reading by comparing relevant passages of Gödel's various drafts. Understood in this way, Gödel's critique turns out to be an interesting variant of the Quine-Putnam indispensability argument in the philosophy of mathematics. Second, I claim that Carnap has perfectly reasonable responses to Gödel's critique as far as the empirical application of mathematics is concerned. The so-called 'correspondence principle' of Carnap, which has been overlooked in previous discussions, plays a key role in Carnap's view of the application of mathematics. If we take Carnap's correspondence principle into account, Gödel's argument that mathematics has conceptual, if not empirical, content turns out to be an interesting non sequitur. Understood properly with the correspondence principle, Carnap's claim that mathematics has no content is not question-begging by any means.
Turing and Wittgenstein on Logic and Foundations of Mathematics
Presented by :
Zhao Fan, University Of Canterbury
Alan Turing and Ludwig Wittgenstein are certainly two great minds of the twentieth century. The relation between their thought has attracted much attention in the literature, not only because of their fascinating exchanges at Cambridge University in the 1930s, but because of their overlapping interests, in particular in logic and the foundations of mathematics, as well as the concepts of thought and intelligence. However, the literature on the relation between their thought is perplexing, as different readings tell completely different stories about Turing and Wittgenstein. The incompatible reading holds that Turing's and Wittgenstein's views are incompatible. More specifically, it is argued that Wittgenstein denies or would deny Turing's analysis of computation, Turing's thesis, Turing's test for intelligence, and even Turing's proof of the undecidability of the Entscheidungsproblem (cf. Chihara, 1977; Shanker, 1987, 1998; Florez, 2001; Lampert, 2019; Persichetti, 2019). On the other hand, the compatible reading holds that Turing's and Wittgenstein's views are compatible; furthermore, it is argued that Turing and Wittgenstein influenced each other in a significant way (cf. Floyd, 2012, 2016, 2017). Unfortunately, the foci of the incompatible reading and the compatible reading seldom overlap, which makes it even harder to compare and evaluate Turing's and Wittgenstein's thought. In this paper, I argue against the incompatible reading. Given the breadth and complexity of the incompatible reading, I confine myself to Turing and Wittgenstein on logic and the foundations of mathematics, leaving Turing and Wittgenstein on thought and intelligence for another occasion. I will also not address questions about the actual influence between Turing and Wittgenstein, which would require a thorough examination of the history of the 1930s, but I will provide an outline of the background of Turing's and Wittgenstein's activities in that decade. 
My goal here is only to defend the compatible reading of Turing and Wittgenstein by pointing out the misunderstandings involved in the incompatible reading. The first section of this paper explains the incompatible reading of Turing and Wittgenstein on the nature of computation. The incompatible reading argues that since Wittgenstein holds a normative requirement for logic and computation, while Turing provides a descriptive modeling of computation, Turing's analysis of computation would fall under Wittgenstein's criticism. I identify two underlying assumptions of the incompatible reading in interpreting Turing: one is the mind assumption, which holds that Turing's analysis of computation is based on a theory of mind; the other is the empirical justification assumption, which holds that the plausibility of Turing's thesis requires empirical justification. In the second section, I argue that neither assumption is held by Turing in his famous 1936 paper "On Computable Numbers". In the third section, I examine the exchange between Turing and Wittgenstein in Wittgenstein's 1939 lectures on the foundations of mathematics, analyzing their exchanges on computation to further validate the compatible view. In the last section, I briefly discuss issues of contradiction and undecidability and make brief remarks about the difficulties in the incompatible reading.
Wittgenstein and ἄπειρον in mathematics
Presented by :
Valérie Lynn Therrien, McGill University
Wittgenstein's idiosyncratic take on mathematical infinity shall be shown to be a modern application of Aristotle's basic posits concerning the ontology of infinity. We shall show how a reading of Wittgenstein can help to elucidate and clarify Aristotle's position on the notion of the continuum (and vice versa). We hope to show not only that Aristotle's views on the subject are still relevant but also that they have potential applications to modern mathematical theory. To do so, we shall compare Wittgenstein's comments on infinity and the continuum in his Lectures on the Foundations of Mathematics and his Remarks on the Foundations of Mathematics to Aristotle's in Books III and V-VI of the Physics as well as Book VI of the Categories. Aristotle concludes that the possible infinite divisibility of continuous magnitudes does not imply an actual infinite divisibility; essentially, if a continuous magnitude can be divided at any given point, it cannot be divided at all points at the same time. The Aristotelian continuum is therefore irreducible to any other kind of structure: it is decomposable neither into (a) non-void parts lacking a common boundary (« indivisibles » or « atoms ») nor into (b) void parts of extensionless magnitude (« points »; for a point is nothing more than a cut in the line, a mere accident emerging from operations on magnitudes and bearing no actual reality; points subsist only in a potential mode, wherein the infinite division can never be completed to obtain the actual extensionless point). Wittgenstein's view of the algorithmic nature of mathematics is directly linked to his anti-Platonic and anti-Cantorean stance against completed infinity. As such, his position on mathematical infinity is closely related to Aristotle's ἄπειρον, which holds that the infinite is intrinsically in-complete. 
The logical syntax and conceptual analysis of « infinity » preclude any technical use other than as an adjective describing a potentially infinite mathematical process: the possibility of constructing infinitely many series through the deployment of a recursive rule. The concept of ℝ as a continuous, gapless entity is thus born of a semantic and conceptual confusion: « [the] picture of the number line is an absolutely natural one up to a certain point; that is to say so long as it is not used for a general theory of real numbers », for the straight line isn't « composed of points »; rather, the « mathematical rules are the points ».
12:45PM - 02:55PM
SR-7
Themes in Realism
Track : After Kant
Moderators
Seán Muller, University Of Johannesburg
Practical Roots of Practical Realism
Presented by :
Peeter Müürsepp, Tallinn University Of Technology
Practical realism was initiated by the late Estonian philosopher of science and chemistry Rein Vihalemm roughly a decade ago. Its five tenets say the following (in a somewhat abridged version): 1. Science does not represent the world "as it really is" from a god's-eye point of view. 2. The fact that the world is not accessible independently of scientific theories – or, to be more precise, paradigms (practices) – does not mean that Putnam's internal realism or "radical" social constructivism is acceptable. 3. Theoretical activity is only one aspect of science; scientific research is a practical activity whose main form is the scientific experiment, which takes place in the real world as a purposeful and critical theory-guided constructive, as well as manipulative, material interference with nature. 4. Science as practice is also a social-historical activity, which means, amongst other things, that scientific practice includes a normative aspect. 5. Though neither naïve nor metaphysical, it is certainly realism, as it claims that what is "given" in the form of scientific practice is an aspect of the real world. The paper concentrates on two historical roots of practical realism: pragmatism and Marxism. The stress will be on the understanding of practice that Karl Marx presented in his "Theses on Feuerbach", mostly in the first two theses. Marx says nothing about science in particular in the theses. However, his practical approach to human cognition points in the same direction as the current practical approach in the philosophy of science. An important point here is that the well-known Marxist idea (actually introduced by Engels) of practice as the criterion of truth disturbs rather than helps our understanding of Marx's initial notion of practice and its connection with practical realism. 
Marx's own emphasis is not on truth but on creating a working interaction between reality and the human mind, free of scholastic metaphysics, exactly in the sense of the contemporary scientific realist understanding. From the pragmatist side, it is John Dewey whose understanding of science parallels Rein Vihalemm's in several respects. Unlike Marx, Dewey is never really cited in Vihalemm's own work, although his name is mentioned at least once. However, Dewey's position that the end of science is not the contemplation of eternal and immutable truths but rather the intelligent and technical regulation of the human environment and society is seemingly close to the main points of practical realism. Still, we argue that Rein Vihalemm's practical realism cannot be taken as a continuation of Dewey's pragmatism. Vihalemm was an original thinker in his own right. His understanding of science as practice is not in direct accord with the pragmatist approach; it is a contemporary view of science as practice and a normative activity. However, due to the untimely death of the founder of practical realism in 2015, it is already history.  
The Introduction of the Notion of Instrumentalism into the Scientific Realism Debate
Presented by :
Tetsuji Iseda, Kyoto University
Instrumentalism is supposed to be one of the representative anti-realist positions in the scientific realism debate. Instrumentalism is usually understood as the position that regards theoretical terms as merely (or mainly) instrumental for organizing our experience in the observable realm. A strong version of instrumentalism is also supposed to claim that theoretical terms are meaningless and that the unobservable entities they purport to refer to do not exist. However, it is hard to identify exactly who the instrumentalists are. For example, the main figures in the logical positivist movement were not 'instrumentalists' in the above sense, because the existence of observable entities was not taken for granted in their early writings. This feature makes it more appropriate to locate them in the classical realism debate, in which the existence of the world itself, rather than of unobservable entities, is at issue. Then why is instrumentalism regarded as a major position in the debate? One way to approach this question is to trace the lineage of the notion of 'instrumentalism' and see how it became related to this debate. Before the notion of 'instrumentalism' was associated with the scientific realism debate, it was a notion associated with pragmatism, and in particular with Dewey. The scope of Dewey's use of 'instrumental' is much wider, regarding all sorts of knowledge, judgment and action as 'instrumental.' Interestingly, Dewey also discusses the status of unobservable entities in science, and what he says there can be classified as 'instrumentalist' in the present (weak) sense. Then who started to use 'instrumentalism' for a more specific position about theoretical terms? We can name Karl Popper and Ernest Nagel as the two philosophers responsible for this. 
A survey of philosophy of science journals shows that early uses of 'instrumentalism' not associated with pragmatism appear in the British Journal for the Philosophy of Science in the 1950s, and the introduction of the word is attributed to Popper (though Popper himself was aware of the term's use in pragmatism). Many of Popper's characterizations of instrumentalism remind us of what is called instrumentalism in the contemporary debate, but his use of the term was not very precise. Ernest Nagel's Structure of Science, published in 1961, is another important source of the use of 'instrumentalism' in the context of the scientific realism debate. One of the instrumentalists Nagel named was Dewey, and a comparison between Structure and Nagel's comments on Dewey's philosophy of science shows that many aspects of instrumentalism as Nagel characterizes it come from Dewey's philosophy (as Nagel understands it). Interestingly, Nagel thought that instrumentalism is compatible with scientific realism; it was not conceived as an anti-realist position. During the 1960s, other authors such as Paul Feyerabend and J.J.C. Smart started to use the notion of 'instrumentalism' for an anti-realist position. One thing to note is that both Feyerabend and Smart were on the realist side of the debate. To know who exactly the instrumentalists were, we should look at the people they criticized.
The ‘Spiralling’ Decades in Philosophy of Science: Feminist Approaches and the Debate over Scientific Realism
Presented by :
Rasleen Kour, Indian Institute of Technology Ropar
Wylie (1986) argues that the debate over scientific realism is an 'ascending' and never-ending 'spiral' because there are meta-level disagreements between realists and antirealists. These disagreements concern key philosophical goals and conceptions in the debate. Almost ten years later, Oberheim and Huene (1997) arrive at a similar conclusion by attributing 'meta-level incommensurability' to the philosophical conceptions used in the debate. In this period, various feminist approaches within the broad area of philosophy of science (Harding 1986, Longino 1990) started to take root, including Wylie's own work (1996). The intellectual plausibility of the constitutive ideas of scientific realism was questioned by these feminist approaches. Two such crucial ideas are: (i) a non-epistemic conception of truth, and (ii) a conceiver-independent reality. This paper has three parts. In Part 1, it is shown that, historically, feminist philosophies of science[1] have been effective in questioning the allegedly self-justifying realist ideas. The paper looks at the works of Helen Longino, Sandra Harding and Alison Wylie to elucidate their important implications for scientific realism. In Part 2, it is shown that, in complementing their arguments against (i) and (ii), feminist approaches take a course that questions the traditional realist notion of objectivity. In the last three decades, feminist approaches have given normative and prescriptive directions in an effort to make science better. To this end, they suggest procedural-level actions by highlighting the role of non-epistemic values in confirmation, experimenting and testing. The role of non-epistemic values in confirmation has direct implications for the thesis of scientific realism. 
In Part 3, the paper concludes the following: at the meta-level, even though it appears that the scientific realist position and the general assumptions of feminist approaches are inconsistent with each other, this may not be the case if we delve deeper. That is, feminist positions do not necessarily imply antirealism. However, key literature in scientific realism in recent years cocoons itself from feminist approaches by retreating to its own self-justifying ideas and conceptions, at best confronting only traditional problems such as the 'pessimistic induction' or underdetermination (Psillos, 1999). The paper concludes by claiming that scientific realists need to 'descend' to examine feminist critical engagements in philosophy of science and to consider their points of contention with openness, unless they are to regard all of feminist philosophy of science as essentially antirealistic.
Select References:
Oberheim, E. & Huene, P. H. (1997) "Incommensurability, realism, and meta-incommensurability", Theoria 12 (3): 447-465.
Harding, S. (1986) The Science Question in Feminism, Ithaca, NY: Cornell University Press.
Harding, S. (1991) Whose Science? Whose Knowledge?, Ithaca, NY: Cornell University Press.
Longino, H. (1990) Science as Social Knowledge, Princeton, NJ: Princeton University Press.
Longino, H. (2001) The Fate of Knowledge, Princeton: Princeton University Press.
Wylie, A. (1986) "Arguments for Scientific Realism: The Ascending Spiral", American Philosophical Quarterly, 23(3): 287–298.
[1] These positions are now characterized under three broad themes: standpoint theory, postmodernism and feminist empiricism.
Selective Realism in History
Presented by :
Alberto Cordero, CUNY Graduate Center & Queens College CUNY, New York
"Selective realism" (the divide et impera approach) is a variegated family of realist responses to antirealist readings of scientific theories. According to selectivists: (a) theories are not monolithic proposals but intellectual constructs made of posits with various degrees of success with respect to truth; (b) empirically successful theories flourish because the world is as some of the posited theoretical accounts say it is; and (c) recognizing this, scientists grade theory components accordingly. Current selectivism arises most proximately from responses to Laudan's pessimistic inductions from the history of science, but the approach is considerably older, or so I argue in this paper. I trace selectivism to epistemological and methodological schemes on view since Antiquity: in, e.g., Ptolemaic and Copernican astronomy; Galileo's piece-meal approach to the study of nature, as well as his efforts to embrace realism about both the Bible and the heliocentric theory; Newton's proposed reform of natural philosophy; and (at the apex of classical physics) Lorentz's reading of Maxwell's theory. These and numerous other cases, I suggest, show regular recognition by past scientists that successful theories contain both "wheat" and "chaff" that need to be separated from each other, attesting to a selectivist core at work during most of the history of natural philosophy and science. At each stage, this core, together with local background knowledge, guided the gradation of intellectual content and preferred retentions as science advanced. Until about the late Renaissance, the resulting rational gradations emphasized deductive reasoning and meta-empirical certitudes; the retention of intellectual content was poor except at levels guarded against revision by metaphysical or religious convictions. Ptolemaic astronomy (which, contrary to popular opinion, embodied a partially realist stance) exemplifies this stage well. 
At the dawn of modernity, when natural philosophers began to challenge the content and character of traditional knowledge, the gradation strategy reoriented accordingly. I focus on some emblematic episodes: (a) Galileo (the Dialogue and Discourses, as well as his Letter to the Grand Duchess Christina); (b) Newton (Principia, Opticks); and (c) ampliative strategies in the century of Fresnel, Whewell, Maxwell, and Lorentz. Cases such as these, I suggest, show how and why the tenets of today's divide et impera selectivism arose. What counts as acceptable natural philosophy has altered along the way, as has the selectivist emphasis, shifting increasingly towards partial, piece-meal descriptions and theories that provided (and were meant to provide) incomplete understanding of their intended domains. Gradually, it became satisfactory to pursue knowledge through less than apodictic proof, a trend fortified by methods focused on inductive markers of truthful theories. In the early 19th century, the markers of choice were parsimony and fruitfulness, with predictive power gaining favor only later in the century. Recognition of these inductive indicators has led to unprecedented retention, in both quality and quantity, of theory-parts at inductive levels. A complementary question arises, however: if selectivist schemes have long been in the background, why does selectivism seem new? The last section considers this issue and calls attention to the enduring impact of some views from the mid-twentieth century. 
12:45PM - 02:55PM
SR-8
Reflecting upon some philosophical histories of STS in East Asia (Symposium)
Track : After Kant
Moderators
Sophie Roux, ENS
In thinking of HOPOS from a global-history perspective, the important histories of philosophy of science (hereafter POS) and of its neighboring discipline STS in East Asia deserve close study. With the rapid rise in recent decades of technoscientific societies in East Asia, and all their technological controversies, STS has grown up fast, both out of, and in comparison with, its elder sister discipline POS. But what are the philosophical histories of East Asian STS in these decades? That is, what are the POS-informed, or even POS-reconstructed, histories of East Asian STS? In other words, what are the special problematics and connections between the sister disciplines of STS and POS in East Asia? Furthermore, thinking more globally, what is special about them when compared with "Western" mainstream POS or STS? What would be the best positions for East Asian STS and POS in connecting and communicating with their mainstream counterparts? As a first step into this important problematic, we invite POS and STS scholars from China, South Korea and Taiwan to reflect upon the philosophical histories of STS in their respective societies, in terms of the different philosophical assumptions of different periods, the ethical and policy assessments established by governments, gender/feminist POS concerns and their enlightening effect on the development of science, and finally the formation of a certain East Asian gaze in understanding mainstream STS and its genealogical history.
Different Philosophical backgrounds and different STS in China
Presented by :
Bing Liu, Department Of History Of Science, Tsinghua University, China
In the Chinese mainland, STS is a concept introduced from the West. Over recent decades, at the different stages of its development in the Chinese mainland, the philosophical backgrounds of STS have differed, and because of these different philosophical backgrounds and foundations, the content, goals, positions, methods and purports of STS have been presented differently. In this historical development, Marxist philosophy, especially the Marxist philosophy represented by Engels's Dialectics of Nature, the Western mainstream philosophy of science, and later a variety of non-mainstream Western philosophies of science, as well as other related cultural currents, have been closely related to STS research in different periods in China. Given the specific social ideology, social system, academic system and understanding of science and technology, these different philosophies, as the background and foundation of STS in the Chinese mainland, have directly affected the development of STS from 'science, technology and society' to 'science and technology studies'. In fact, STS in the Chinese mainland still exists in many forms, and there is a certain crisis. However, whether for the social or the academic development of the Chinese mainland, the value of STS cannot be ignored. Understanding the relationship between STS and its different philosophical backgrounds and foundations is thus of practical significance for the future development of STS in the Chinese mainland.
Gender/feminist philosophies in mainland Chinese STS
Presented by :
Meifang Zhang, Institute For Cultural Heritage And History Of Science And Technology, USTB.
In the tradition of Western philosophy of science, feminism is a relatively marginal branch, and compared with other branches of the philosophy of science, the repercussions it has caused in mainland Chinese STS also differ in certain respects. The purpose of this article is to explore the influence, status and characteristics of its development in mainland Chinese STS by reviewing the historical process of its introduction into mainland China and summing up the different attitudes of Chinese scholars toward it. STS in mainland China is a relatively loose academic circle, including scholars from disciplines such as the philosophy of science, the history of science, and the sociology of science. Feminism first attracted attention in the field of STS in mainland China in the 1990s. Some scholars in the history of science discussed Western historical research on gender and science from the perspective of historiography. In the first decade of the 21st century, more and more scholars in the philosophy of science began to pay attention to feminist epistemologies, which were widely introduced in mainland China and caused some controversy. After 2010, the related discussions slowly decreased. A few scholars turned to the relationship between gender and technology, while some others began to study the history of science and technology in China from a feminist perspective. Chinese STS scholars hold five kinds of attitudes towards feminist philosophy of science. The first group generally believes that feminist criticism of science brings enlightenment to the development of science. The second group, however, insists that it has a greatly destructive effect on the healthy development of science. The third group thinks that feminist philosophical propositions can support the legitimacy of studying the history of science in non-Western societies. 
The fourth group avoids epistemological problems and focuses on the gender situation in the practice of science and technology. The fifth group is indifferent, not including feminist philosophy of science within the scope of its investigation; most of its members do not believe that it will have any impact on their research. Compared with the influence of traditional logical positivism, analytical philosophy and even the sociology of scientific knowledge, feminist philosophy of science has relatively limited influence in mainland China. Most of the research is limited to commentary, with few original theoretical explorations. Surprisingly, compared with the philosophy of science, scholars in the fields of the history of science, the sociology of science, and science and technology policy research have done more work from a feminist perspective. They have set up a committee on gender and science studies to attract scholars from various fields to jointly promote academic research and equality in practice in mainland China. From this point of view, feminist philosophy of science does have a positive impact on STS scholarship and scientific practice in mainland China.
Some Taiwanese roots in gazing upon a genealogical history of STS
Philosophy of science (POS) in Taiwan was once used by progressive philosophers, in the 1950s, to criticize the status of scientific knowledge in the ideological claims of Taiwan's authoritarian state. After those philosophers were suppressed, the study of POS became formal and epistemological, strictly distinguished from social and political concerns: science was logical and neutral, whereas politics was subjectively social and partial. This bipolar situation continued until the introduction of Kuhn's Structure in Taiwan in the 1980s. Science became part of culture, and cultural elements were seen as often embedded in the sciences. Still, Kuhn's hesitant position on science was to stay put at the crossroads of various possible directions, avoiding going along easily with the new tides of post-structuralism, feminism, and, last but not least, science and technology studies (STS). In 1987, Taiwan lifted its 40-year-long martial law and gradually became a democratic society, while also aspiring to become a technologically oriented society. Social movements exploded after the lifting of martial law, and later more citizen-science movements arose to protest the scientific controversies in waves of new technologies; the decades-long nuclear power controversy is one obvious example. But technical philosophy of science and Kuhnian HPS remained popular in Taiwan's serene academic philosophy of the 90s, distinct from the hot social debates outside the universities. Meanwhile, new and perhaps more radical STS was stepping in. However, even this newly established STS has become more "academic" than in its earlier years, sweeping many of its critical elements under the rug. As a balancing position, my other concern is how to make Taiwan's STS intellectually independent, avoiding being carried away by Taiwan's citizen-science movements and resisting the lure of populism. 
Reflectively, it is these concerns that drove me to write such a book in recent years: A Genealogical History of STS and Its Multiple Constructions (2019). In my detailed study of mainstream STS, three perspectives can be said to have roots in Taiwan's history of POS and its sister disciplines such as STS. First, I avoid the tedious epistemological debates over relativism between POS and STS, and focus instead on why Kuhn was not happy with STS and on their different visions of science. Second, I explore the critical elements in STS' early constructions, e.g., its relationship with the social responsibility of science movement and its critical studies of the US academic-military establishment, including even Kuhn's own activities in Princeton's massive demonstrations in the early 70s. Third, I emphasize the importance of STS' early alliance with UK social anthropology. Working in a declining empire in an increasingly postcolonial world, social anthropologists had learnt to see European civilization and African tribes symmetrically. I believe this symmetrical perspective also crucially informed STS' symmetry principle, and, in turn, it should help us East Asian intellectuals to see mainstream POS or STS and their East Asian counterparts symmetrically.
Ethical and Policy Trajectories of Korean STS
Presented by :
Sang Wook Yi, Hanyang University, Seoul, South Korea
The Korean STS community has grown substantially over the last 20 years thanks to a few contributing factors. These include the establishment of several graduate schools producing young STS scholars and a number of socially sensitive scientific and technological issues such as stem-cell research and the BSE epidemic. A few reflective and evaluative exercises such as ELSI and TA (technology assessment) have been institutionalized since the 2000s by the Korean government for better science communication and preemptive science policy. Ethical and policy considerations have been the focus of STS discussion in South Korea. I shall examine the development of STS in this respect, placing particular emphasis on its philosophy of science aspects. The STS community of South Korea is a mixed group of sociologists, historians and NGO activists, with very few philosophers. Sociologists and NGO activists tend to take a rather simplistic view of social constructivism and to criticize the government's science policy, often without providing realistic policy alternatives. This has resulted in the weakening of their policy influence regarding frontier technologies; I shall discuss their response to climate change to illustrate the point. Philosophy of science, on the other hand, has been helpful in clarifying the issues and suggesting workable policy options in collaboration with researchers and legal experts. In particular, it has stressed the importance of recognizing the 'incommensurate' nature of the various value concerns within TA exercises and the need to make them 'commensurable' in order to arrive at policy suggestions.
12:45PM - 02:55PM
SR-9
HOPOS in Context
Track : After Kant
Moderators
Brian Hanley, University Of Calgary
Natural History as Part of History? The Case of The Hong Kong Story
Presented by :
Stephen Chung-On Ng, The University Of Hong Kong
Time can be perceived differently across disciplines, and these variations often lead to a range of narratives of the past. For example, Natural History and (Human) History, both of which are concerned with events in the past, generally do not overlap because of the different time scales on which events have happened. The central themes studied in the two histories are different, and they are not generally lumped together. The Hong Kong Story, the permanent exhibition of the Hong Kong Museum of History, therefore appears as an anomaly. The Story is set out as a 400-million-year journey showcasing the history and development of Hong Kong. Presumably, the long span of time results from the necessity of curating the Museum's natural history collection, which includes rocks, fossils and animal specimens acquired when the Museum's predecessor stood alone amongst local colonial institutions; displayed in the first gallery, this collection pieces together the successions that shaped the landform and local ecosystem from the Devonian, a geologic period, down to the pre-Neolithic. The rest of The Story, housed in the remaining seven galleries, is human history, realised on a completely different time scale and covering archaeological findings through developments up to the end of colonial rule in 1997. The connection between Hong Kong and South China is emphasised throughout the journey, but the jump of narrative from natural history to human history appears abrupt, if one considers the change in the subject of study and the relative temporal and spatial scales. This leads to an obvious question as to whether natural history can readily be incorporated into the study of the history of a region, and whether it is possible to construct a universal history. This paper aims to examine these questions using The Hong Kong Story as a case study.
Eclipsing Idols: Changing self-conceptions among colonial-era trigonometric surveyors, 1780-1830
Presented by :
Sharad Pandian, Nanyang Technological University
The philosophy of science should not restrict itself to work produced by professional philosophers but should also consider the self-conceptions of agents engaged in scientific activity. In addition to the new theoretical problems that might be discovered, this allows for tracing the conditions under which such accounts are produced and the various ways the accounts are themselves productive. This paper attempts to do so through a close reading of accounts of the trigonometric survey by William Roy in the Philosophical Transactions and William Lambton in Asiatick Researches. William Roy, using money from King George III, manpower from the Ordnance Survey, and a theodolite from the famed London instrument maker Jesse Ramsden, carried out a triangulation from Hounslow Heath to Romney Marsh. He believed that Ramsden's theodolite eliminated average error, that the chain used "would measure distances much more accurately", and that his pyrometer "seems not easy to improve." Lambton, in India, was too far away, did not have connections to the Royal Society, and depended on mistrusted Indian labour. To shore up the credibility of his work, he positioned himself as following Roy's methods as closely as possible. He attributed his interest in the subject directly to Roy's publications and claimed that his chain was made by "the same incomparable artist, Mr. Ramsden" and was "precisely alike, in every respect, with that used by General Roy in measuring his base of verification on Romney marsh". In addition to the value of tracing the context of production and the productivity of these accounts, they also reveal two philosophically interesting lines of inquiry. First, even if a certain method seems promising, results have to be generated to showcase its superiority. This is a problem for new methods because a lot of time-consuming debugging work is usually required. 
This leads to surveyors sometimes adopting creative strategies to defend the worth of their infant approaches. For example, to maintain the superiority of his trigonometric method over Neville Maskelyne's astronomy, Roy misquoted Maskelyne's data, selecting from Maskelyne's paper a number ten seconds larger than the one Maskelyne finally settled on. Two decades later, Lambton stuck to his story of the superiority of his geodetic measurements even while Colin Mackenzie was insisting his own traditional survey values were not inferior to Lambton's.Second, even when everything goes well, the role of the founder is always complicated, both celebrated and devalued. As founder, he will be respected as the originator. But since progress is supposed to be marked by novelty, alongside praise, successors will argue for the irrelevance of the founder's methods in favour of their own. Roy disavowed his previous work as "having been carried on with instruments of the common, or even inferior kind," while Lambton accepted changes to Roy's instruments that he considered improvements. And in 1830, when writing about Lambton's theodolite, George Everest gently admitted that while the instrument had been the best of its kind for field operations in its time, "it would not now, perhaps, be considered a very perfect instrument."
Scientific Progress and Effective, Incorrect Puzzle Solutions: The Case of an 1832 Treatment for Cholera
Presented by :
Seán Muller, University of Johannesburg
Cholera was one of the most devastating diseases of the 18th and 19th centuries, claiming hundreds of thousands of lives. The initially dominant theory of medical science at the time held that sickly vapours ('miasma') were responsible. An alternative theory, based on a rudimentary conception of germ-based disease, competed for influence in the 19th century and ultimately superseded the miasma theory. Various analyses have discussed the dynamics of the contestation between these theories, raising questions about whether the resistance to the germ theory was merited or rather a reflection of an irrational or self-serving commitment to the status quo. Vandenbroucke (2003) has argued, to the contrary, that there were sound reasons for maintaining the miasma theory: "[it] was very comprehensible, and what is more: it worked". Vandenbroucke's argument raises interesting questions about the nature of scientific progress and the efficacy of problem solutions. On the face of it, the argument is directly aligned to conceptions of progress elaborated by Kuhn (1962) and Laudan (1978), in which puzzle- and problem-solving is at the centre of inquiry and, hence, progress. However, I suggest that this alignment is not as straightforward as one might suppose. Specifically, I use Vandenbroucke's claims to tease out different possible interpretations of what it means to provide solutions to scientific problems.

I develop these points further with the aid of a case study of a treatment for cholera proposed by Hall (1832). Hall experimented, in a somewhat unsystematic manner by modern standards, with a variety of different treatments for patients during an outbreak of ship-based cholera. As was typical of the time, Hall supposed that "the exciting cause [of the outbreak] was a miasm radiating from the ships that had been affected with cholera". The treatment that Hall ultimately settled upon, because it appeared to be consistently effective, was an emetic.
This appears curious because, in light of modern medical science, induced vomiting would provide no cure for cholera and would in addition be expected to worsen the dehydration that is the intermediate cause of most cholera deaths.

The apparent resolution to this mystery lies in Hall's post-treatment prescription: that, having ceased vomiting, the patient be given only warm, sugared gruel. This would have served as a relatively effective rehydrating agent and, if heated sufficiently in preparation, would not be contaminated by live cholera bacteria. Thus, if we accept his reported successes, we may conclude that Hall misidentified the causal mechanism.

How should we treat such instances? On the face of it, Hall's research is precisely the kind of gradual progress through puzzle-solving by normal scientists working within a paradigm that Kuhn (1962) described. Yet while Hall's 'solution' was practically correct, it was intellectually wrong. I briefly elaborate a view as to how we ought to assess such cases and what can be learned from them for various conceptions of scientific progress.
The Physicist as Philosopher in Cultural Context
Presented by :
Kristian Camilleri, University of Melbourne
In a paper on 'Physics and Reality' in 1936, Albert Einstein argued that it was not only the right but the duty of the physicist to philosophize, even at the risk of incurring the wrath of the professional philosopher. At a time when "the very foundations of physics itself have become as problematic as they are now", Einstein stressed, "the physicist cannot simply surrender to the philosopher the critical contemplation of theoretical foundations; for he himself knows best and feels more surely where the shoe pinches". It was the physicist who could best judge "how far the concepts which he uses are justified, and are necessities". These sentiments resonated with many physicists of the era. Einstein has long been regarded, along with Niels Bohr, as the epitome of the philosopher-physicist. While much recent scholarship has been devoted to understanding the philosophical views of physicists like Einstein, Bohr, Schrödinger and Weyl, we still have much to learn about the intellectual culture that gave rise to the philosopher-physicist. In this paper I take up this task, examining the social norms and intellectual culture that shaped the various forms of philosophical discourse to which physicists contributed in the German-speaking world during the interwar period. The right to philosophize was not only a mark of prestige and status; it was also, in some sense, an obligation. This typically took the form of public lectures or addresses to non-specialist audiences, which were often later published. But when physicists chose to philosophize, they often sought out the company of scholars and philosophers with whom they felt they could engage in a constructive dialogue. This often took the form of participation in informal discussion groups and local circles, as well as networks of correspondence. In this way physicists navigated the complex and ever-shifting intellectual terrain of the Weimar era, forging their own philosophical identities.
While constrained by what was available to them, physicists chose their own influences.

While many physicists pursued questions of epistemology, some, like Heisenberg and Pauli, pursued their own private philosophical projects that were not devoted to contributing to scientific philosophy. These were rather attempts to forge a personal Weltanschauung in something like Dilthey's sense, which expressed both an understanding of the external world and of the cultural and inner world of the individual. The task here was to acquire an orientation to the world that rendered life meaningful. In contrast to some of the leading members of the Vienna Circle, who saw philosophy as superfluous, or at best as a handmaiden to science, many physicists saw philosophy as the central pillar of the intellectual culture to which they belonged. This helps to explain the poor reception of logical positivism among many of the leading physicists of the era.
06:00PM - 08:00PM
Congress Banquet at Genting Jurong Hotel