NZAP Abstracts
*Toward contrastive explanations in GeoAI
Ben Adams, University of Canterbury
In the last few years, interest in GeoAI has grown as newer machine-learning techniques have shown success when applied to geographic problems. For the most part, this work has focused on training predictive deep-learning models using large data sets. However, these models can be opaque, and the reasoning behind their predictions will not be clear to a human who might want to make informed decisions based on them. I will introduce some recent research on explainable AI, and then discuss how we can build geographic AI systems that better explain their reasoning. In particular, I will focus on contrastive explanations and show how they might work for common cases of GeoAI use, including crime analysis, travel behaviour modelling and population projection.
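To make the general idea concrete (this is not the speaker's method), a contrastive explanation answers "why class A rather than class B for this instance?" One minimal way to compute such an answer for a tabular classifier is to search for the smallest set of feature changes that would flip the prediction to the foil class. The sketch below does this greedily; the toy "crime risk" data, the feature names, and the greedy search are all hypothetical illustrations, and it assumes scikit-learn is available.

```python
# Illustrative sketch only: a crude contrastive explanation for a tabular
# classifier, on hypothetical data. Not the method described in the talk.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical neighbourhood-level features for a toy "crime risk" classifier.
features = ["population_density", "median_income", "street_lighting", "foot_traffic"]
X = rng.normal(size=(500, len(features)))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def contrastive_explanation(x, foil):
    """Greedily find a small set of feature changes that moves the prediction
    to the foil class by shifting features toward the foil-class mean."""
    foil_mean = X[y == foil].mean(axis=0)
    x_new, changed = x.copy(), []
    for _ in range(len(features)):
        if model.predict([x_new])[0] == foil:
            break
        # change the remaining feature whose shift most increases P(foil)
        best_i, best_p = None, -1.0
        for i in range(len(features)):
            if i in changed:
                continue
            trial = x_new.copy()
            trial[i] = foil_mean[i]
            p = model.predict_proba([trial])[0][foil]
            if p > best_p:
                best_i, best_p = i, p
        x_new[best_i] = foil_mean[best_i]
        changed.append(best_i)
    return [(features[i], x[i], x_new[i]) for i in changed]

x = X[0]
fact = model.predict([x])[0]
foil = 1 - fact
print(f"Predicted class {fact} rather than {foil} because of:")
for name, old, new in contrastive_explanation(x, foil):
    print(f"  {name}: {old:.2f} (would need to be around {new:.2f})")
```

The returned features can be read contrastively: the prediction was A rather than B because these features took these values rather than values closer to those typical of the foil class.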
On utilitarian shit stirring
Nicholas Agar, Victoria University of Wellington
This paper explores the phenomenon of moral shit stirring, which involves speech acts that have the grammatical form of moral advice but where there is no intent that recipients act accordingly. I offer moral shit stirring as a counterpart of bullshit as described in Harry Frankfurt’s widely discussed essay “On Bullshit”. Shit stirring does to advice what bullshit does to belief. I discuss two examples and argue that there is currently too much shit stirring in bioethics.
*Protocol and sensor software development for fracture healing
James Atlas, University of Canterbury
The Mechanical Engineering Department at UC has developed a microelectronic strain sensor designed for use with a bone-attached rod in fractures. Research and development is being carried out with the aim of tracking fracture healing progress. When a fracture occurs, a rod is attached to the bone to hold the pieces together. As the fracture heals, the bone becomes stronger, causing less strain on the rod. Patients will be put through periodic tests of walking, standing, etc. to get strain measurements from the rod. There is a need for a machine learning model that uses the strain data from the rod to classify the activities a patient is performing. The purpose of this is to enable comparison of the strain experienced in activities over time, in order to track healing progress. We have developed an initial model for a basic drill-press setup designed to emulate strain on a bone. The model performed successful classification for a drill-press protocol enumerating many possible activities, achieving a cross-validated accuracy of 0.80952. The success of the model demonstrates the applicability of the selected machine learning method, Time Series Forest, in a strain-sensor context. The results suggest that similar models will likely be successful in contributing to the end goal of tracking healing progress for fractured bones.
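For readers unfamiliar with the method named above, the following is a minimal, illustrative sketch (not the authors' actual pipeline or data) of cross-validating a Time Series Forest classifier on synthetic strain-like windows. It assumes the open-source sktime library, whose TimeSeriesForestClassifier implements the Time Series Forest method; the "standing"/"walking" signals, window length, and labels are hypothetical stand-ins for the real sensor data.

```python
# Illustrative sketch only: Time Series Forest activity classification with
# k-fold cross-validation on synthetic "strain" windows. Assumes sktime.
import numpy as np
from sklearn.model_selection import KFold
from sktime.classification.interval_based import TimeSeriesForestClassifier

rng = np.random.default_rng(1)
n_per_class, n_timepoints = 60, 100
t = np.linspace(0, 1, n_timepoints)

# Hypothetical strain windows: "standing" = low-amplitude noise,
# "walking" = periodic loading cycles plus noise.
standing = rng.normal(0.0, 0.05, size=(n_per_class, n_timepoints))
walking = 0.5 * np.sin(2 * np.pi * 5 * t) + rng.normal(0.0, 0.05, size=(n_per_class, n_timepoints))

X = np.vstack([standing, walking])[:, np.newaxis, :]  # (instances, channels, timepoints)
y = np.array(["standing"] * n_per_class + ["walking"] * n_per_class)

# k-fold cross-validation, mirroring the cross-validated accuracy reported above.
scores = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    clf = TimeSeriesForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    scores.append(np.mean(clf.predict(X[test_idx]) == y[test_idx]))

print(f"mean cross-validated accuracy: {np.mean(scores):.3f}")
```

In a real deployment the windows would come from the rod's strain sensor during the periodic walking and standing tests, with one labelled window per activity bout.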
*Same same but different
Christoph Bartneck, New Zealand Human Interface Technology Lab
The idea of robots has inspired humans for generations. The Bank of Asia, for example, commissioned a building that looks like a robot to host its headquarters in Bangkok. This profound interest in creating artificial entities is a blessing and a curse for the study of human-robot interaction. On the one hand it almost guarantees a headline in newspapers, but on the other hand it biases all participants in the study. Still, almost all robots that made it out of the research labs and into the market failed. This talk will try to shed some light on why robots are so (un)popular.
*Building a computer that thinks like the brain
Simon Brown, University of Canterbury
Recent progress in artificial intelligence and machine learning means that humans are now far inferior to computers at playing games like chess and Go. However, the brain is still far more efficient than even the largest supercomputers at performing some types of tasks, such as pattern or image recognition. This has motivated a worldwide effort to build brain-like or ‘neuromorphic’ computers, using a number of different approaches. The focus of neuromorphic computing is on hardware, in contrast to the usual software approaches to AI. I will review some of those approaches, which include the use of traditional silicon transistors to emulate neurons and synapses, and new solid-state devices which have synaptic and neuronal functionality. I will explain how my group has attacked one of the key remaining challenges, which is to achieve truly brain-like networks using self-assembled nano-components. Not only have we been able to build highly complex networks; the dynamical signals within those networks are also remarkably similar to those of the brain.
Greenbeard Theory, meet Simulation Theory: a new account of the evolution of human altruism
Doug Campbell, University of Canterbury
The common human inclination to spend valuable resources helping non-relatives, even in circumstances where reciprocal help is not to be hoped for, presents a longstanding evolutionary mystery. How could a helping gene profit by causing its carriers to ‘gift away’ their fitness? One answer might be that it profits by selectively targeting help towards individuals who, through their own helping behaviour, show that they likely carry copies of the same helping gene. For this to work, some cognitive mechanism would be required by which a donor can compare and contrast her own helping dispositions with those of potential recipients. We human beings have such a mechanism, in the form of our ability to predict and explain each other’s behaviour by a method known as ‘simulation’. Here a theory of human altruism is developed based on these ideas.
*Robots in Nozickland: a cautionary fairytale for our times
Doug Campbell, University of Canterbury
Minarchism is the theory, famously advocated by Robert Nozick, that a national state can be legitimate only if it is a minimal state—i.e., a state that confines itself to protecting its citizens from assault, theft, fraud and breach of contract. It remains a very influential theory on the economic right. In this talk I consider what would happen in a minimal state if inexpensive but highly capable artificially intelligent robots were invented, able to match or exceed human performance in most arenas. I argue that Nozick’s theory explodes under the weight of its own contradictions when the possibility of such machines being created is taken into account.
The battle for meaning: Ryle and Findlay vs Carnap
Max Cresswell, Victoria University of Wellington
One of the most important figures in the emergence of theories of semantics was Rudolf Carnap. This paper explores the hostility of Gilbert Ryle and J. N. Findlay to Carnap, and examines how their writings provide some interesting insights into philosophy during the time of the ‘ordinary language’ movement. I shall make a few brief remarks at the end about the effect both these philosophers had on the development of A. N. Prior’s attitude to logic.
2020, the Innocenty Maria Bocheński Year—his impact on logic in NZ
Aneta Markoska-Cubrinovska, Diane Proudfoot and Jack Copeland, University of Canterbury
Józef Bocheński was born in 1902 near Cracow, Poland. He took his Dominican vows in 1928. Before the war, Bocheński taught logic at the Pontifical University of St. Thomas Aquinas in Rome, and in 1936 was a founding member of the ‘Cracow Circle’ of logicians, philosophers and theologians. War saw him involved in the German-Polish campaign and fighting on the Italian Front as an officer in the Polish army. In 1945 he became Professor of Philosophy at the University of Fribourg in Switzerland. Bocheński remained at Fribourg for the rest of his long life, living in the Dominican Albertinum. He was a leading early pioneer of modal logic, and through Arthur Prior had an influence on the first generation of New Zealand’s logic students. This paper is an affectionate look at Bocheński, his logical work, and his pen-pal relationship with Prior. We also touch on the plans of the newly formed UC–ETHZ–Fribourg Bocheński Project.
*Quagmire & botheration for table & diagram: when numbers mean something more than just that
Giulio Dalla Riva, University of Canterbury
In this talk I’m going to share and analyse my experiences in teaching ethics and data science. As examples, I pick two courses—one at UC and the other at the University of British Columbia—which invited data science students to reflect on the ethical dimension of their work. I claim to have learned some lessons.
*Explaining explainable AI
Tim Dare, University of Auckland, Justine Kingsbury, University of Waikato
There is near consensus in the emerging field of data ethics that processes and systems must be explainable to a wide range of stakeholders. Europe’s General Data Protection Regulation (GDPR) guarantees individuals a ‘right of access’ to ‘knowledge of the logic involved’ in automated decision-making. New Zealand’s Algorithm Charter requires signatories to ‘maintain transparency by clearly explaining how decisions are informed by algorithms’. Are these two different statements of the same requirement, or do they differ? What level of explanation of an automated decision is required, and why is it required? If a machine-learning algorithm reliably produces good outcomes, even though no-one can explain exactly how, mightn’t reliability trump explainability? In this paper we clarify the explainability requirement and examine the justification for it.
Towards a practice-based pluralist theory of cultural knowledge
Gregory W. Dawes, University of Otago
Are we likely to encounter or hear from extraterrestrial intelligent beings? No, we are not. One reason is that they are unlikely to have developed a science like ours that would allow for interplanetary travel or communication. But does this matter? Wouldn’t any science, no matter how alien its starting point, eventually discover the same laws of nature? No, it would not. To defend this claim, I outline the elements of a practice-based, pluralist theory of (cultural) knowledge. This holds that the representations by which we know the world are shaped by the practices in which they are embedded: by their goals, their historical context, their character, and their target domain. So there is no reason to believe that terrestrial and extraterrestrial sciences would converge.
Theorising about conspiracy theories in the time of the novel coronavirus
M. Dentith, University of Waikato and Beijing Normal University
Philosophers have—by and large—argued for particularism about conspiracy theories: we cannot dismiss conspiracy theories out of hand just because they have been pejoratively labelled as such. Rather, we have to assess particular conspiracy theories on their evidential merits. Yet in the age of COVID-19, given the negative social consequences of belief in particular COVID-19 conspiracy theories—notably the potential for widespread community transmission of the novel coronavirus—particularism could be considered intellectually dishonest since it can be charged with ignoring the social context of conspiracy theorising. Looking both at the early work in the philosophy of conspiracy theory—which attempted to identify types of conspiracy theories which we are justified in treating dismissively—and recent work on the ethics of conspiracy theorising, I argue that particularism can accommodate the social context of conspiracy theorising. Furthermore, if we were to endorse a general attitude of scepticism towards these things called ‘conspiracy theories’ we would end up committing ourselves to ignoring the positive social consequence of conspiracy theorising: the detection of actual conspiracies in our polities.
Moral obligations to future generations and the non-identity problem
Heather Dyke, University of Otago
Concern about climate change, rising sea levels, and depletion of the planet’s resources is often expressed in terms of our obligations to future generations. But to whom are these obligations owed? Since future individuals do not now exist, and relations such as ‘owing a moral obligation to’ must obtain between existing individuals, there is a prima facie obstacle to the attribution of these moral obligations. It has been argued that this problem can be resolved if we adopt an eternalist temporal ontology. Future individuals do not now exist, but they tenselessly exist, so they are capable of possessing rights and being owed obligations. I argue that this solution fails for reasons to do with Parfit’s non-identity problem. How can my action harm someone who, had I acted differently, would never have come to exist? I follow Annette Baier and argue that we should think of rights and obligations as possessed by people, not in virtue of their unique individuality, but in virtue of the roles they fill. I argue that this approach can account for our obligations to future generations, and that it does not, like the eternalist approach, succumb to the non-identity problem.
Aviation exceptionalism in the age of Covid
Elisabeth Ellis, James Higham and James Maclaurin, University of Otago
The world has not appreciated the climate risk, political injustice, and threat to fragile international norms posed by aviation exceptionalism. Having committed to an already very weak carbon mitigation effort before the pandemic struck, international aviation has used the Covid-19 crisis to ratchet its emission reduction effort down even further. We set out why the emissions behaviour of the aviation sector is risky and unfair. Using standard scenarios, we demonstrate the scale of the burdens transferred, and anticipated to be transferred, to every other sector by aviation exceptionalism. We note that every other sector has committed to emission reduction under the Paris Agreement; even marine transport is at least in principle committed to emission reduction in line with global climate goals. We argue that the aviation sector is free riding on the other parts of the global economy, operating outside international norms. Finally, we consider how such behaviour has been possible, highlighting the sector’s history, the unusually opaque structure of the ICAO, and the cognitive incentives and biases mediating people’s perceptions of aviation exceptionalism. Though it is too early to say conclusively, the path selected by marine transport demonstrates that it is at least possible to bring free riders on international agreements into compliance by self-organisation, visibility, and informal pressure rather than legal subordination.
*The strange phenomenon of Turing denial
Zhao Fan and Jack Copeland, University of Canterbury
Shortly before the Second World War, Alan Turing invented the fundamental logical principles of the modern digital computer. Turing was, however, a misunderstood and relatively isolated figure, who made little or no attempt to communicate with the main centres of postwar computer development in the United States. He generally received at best a minor mention in histories of the computer written in the 20th and early 21st centuries. All that changed in 2012, Turing’s centenary year, and he is now popularly regarded as the founding father of computer science. But an academic backlash has developed. ‘Turing deniers’ seek to show that Turing made no very significant contribution to the development of computers and computing. We examine the arguments of some leading Turing deniers.
*Autonomous futures: Positioning lethal autonomous weapons in the landscape of future warfare
Amy L. Fletcher, University of Canterbury
The emergence of lethal autonomous weapons (LAWs) will disrupt military strategy and war-fighting in an already tumultuous geopolitical era characterized by a cranky America, an assertive China, a rising India, and a recalcitrant Russia. Already, thirty countries have called for a global ban on LAWs, citing both the humanitarian consequences of ‘robot warfare’ and the need to have a human ‘in the loop’ of any final decision to use lethal force. However, the four countries noted above, though each has a different position on the nuanced specifics of using LAWs, nevertheless do not intend to sign such a ban and are committed to the autonomous war-fighting paradigm in the pursuit of geopolitical dominance. To begin to parse this extraordinarily complex policy domain, this paper asks: how do elite US stakeholders harness particular ideas of the future of warfare to position and legitimize LAWs? The underlying premise of this research is that, while LAWs are tangible technologies that exist in real time and space, concepts such as ‘autonomous warfare’ or ‘robot warfare,’ and the rules and ethics governing them, must be brought into being via elite-level discourse. This project, drawing upon issue-mapping analysis of over 1,000 pertinent mass media articles and policy reports, seeks to determine how elite stakeholders deploy cultural tropes (including popular culture) and future projections to justify ongoing investment in autonomous weapons.
Mixing it up: the unity of propositions and objects in Plato and Davidson
Stephanie Gibbons, University of Waikato
In his book Truth and Predication, Donald Davidson raises Plato’s “third man” problem. Even if we resolve the third man, says Davidson, “The difficulty of avoiding one infinite regress or another might almost be said to be the problem of predication” (p. 79). Plato’s explanation of the unity of the proposition is not quite what Davidson says it is. But Plato’s work on how mixtures are possible, and so how unity (of anything) can occur, might almost be said to be the problem of metaphysics. Can Plato’s solution also help us with Davidson’s problem?
*Minds, Brains, and the Puzzle of Implicit Computation
Randolph Grace, University of Canterbury
Many behavioural and perceptual phenomena, such as spatial navigation and object recognition, appear to require implicit computation—that is, the equivalent of mathematical or algebraic calculation. This capacity is found across a wide range of species, from insects to humans. Why can minds and brains do this? Shepard (1994) gave a possible reason: Because the world is described by Euclidean geometry and physical laws with algebraic structure, natural selection would favour perceptual systems that successfully adapted to those principles; thus, the algebraic and geometric invariants that characterize the external world have been internalized by evolution. Another possibility is suggested by our recent experiments with a novel ‘artificial algebra’ task. Participants learn by feedback and without explicit instruction to make an analogue response based on an algebraic combination of nonsymbolic magnitudes, such as line lengths or brightnesses. Results show ‘footprints’ of mathematical structure—response patterning that is not trained, implying that the participants have generated it themselves. These results suggest that algebraic structure is intrinsic to the mind, offering an alternative explanation for implicit computation. According to our mathematical mind hypothesis, computation is what the brain is, not what the brain does. I conclude by exploring some implications of this view for artificial intelligence, numerical cognition, computational neuroscience, and philosophy of mathematics.
*Not thinking like a young white western secular man—whose intelligence and what intelligence is being artificialized?
Mike Grimshaw, University of Canterbury
This paper takes the form of a thought piece raising the question of diversity in AI. Not only is there a noted lack of diversity in the tech industry; there are also questions to be raised as to what constitutes the ‘intelligence’ in AI. We could—or rather need to—say: non-white, non-male, non-western minds matter.
Liminality and its philosophical use(fulness)
Marco Grix, Massey University Auckland
Liminality (from the Latin līmen, meaning ‘threshold’) is a concept first developed in anthropology to characterise processes and experiences of transition and in-between-ness, especially in tribal communities. For example, during rites of passage the ritual subject undergoes preliminal separation (removal and social detachment), transition (personal eradication, limbo, and reassembly), and postliminal reaggregation (social reincorporation). More recently, the concept has been applied to intentional communities within Western societies. In this paper, I explore liminality and its recent application, especially concerning its potential for use in social and political philosophy.
Beyond ideals of friendship
Simon Keller, Victoria University of Wellington
Since Aristotle, at least, philosophers have usually described good friendship by describing the place that friendship takes in a good human life. This approach informs widespread views about obligations of friendship, the skills and traits of a good friend, and the relationship between friendship and morality. But the approach also has some odd consequences: most obviously, it has populated the literature with pictures of friendship that have virtually no connection with any real human life or real human friendship. I make the case for a different approach to good friendship, on which good friendships are those that make human lives better. My preferred approach, I argue, produces more plausible accounts of the function and ethics of friendship and of what it takes to be a good friend, and it gives reason to think that ideals of friendship are not very interesting.
Self-report: an unstable foundation for aesthetic theorising
Justine Kingsbury, University of Waikato
Aesthetic theorising often takes as a starting point first-person reports of responses to artworks or to other aesthetically interesting objects. Such reports are also used as a test of a theory: if the theory has implications for how most or all viewers will respond to a particular thing, and viewers report not having that response, that tells against the theory. In this talk I use two debates in aesthetics to illustrate this use of self-report – a debate about musical expressiveness and emotional responses to music, and a debate about aesthetic responses to nature. I will argue that self-report is an unstable foundation for aesthetic theorising, and conclude by considering some alternatives.
The identity of indiscernibles and human nature in Spinoza’s Ethics
Michael LeBuffe, University of Otago
Spinoza appears to defend a version of the principle of the identity of indiscernibles at Ethics 1p4: “Two or more distinct things are distinguished from one another either by a difference in the attributes of substances or by a difference in their affections.” This proposition clearly serves as a central premise in the argument to Spinoza’s substance monism, where Spinoza uses it to defend the claim (1p5) that there could be no two substances of the same nature or attribute, a view that implies, for example, that there could be no two thinking substances and there could be no two extended substances. As stated at 1p4, however, the principle applies to any purportedly different things. Why should it not also apply to finite individuals, such as human beings? I argue that it does but that Spinoza nevertheless has the means to individuate human beings in, first, the fact of existence, which distinguishes existing human natures from others that do not exist, and, second, the different situations of existing human beings among other finite individuals in extension and thought.
*What is it like to be a bot?
James Maclaurin, University of Otago
New Zealand’s animal welfare legislation reflects the fact that we, like many other countries, accord moral status to a wide variety of non-human animals. This raises the question of whether we might at some point have to accord weak artificial intelligence some sort of limited moral status. A recent proposal from John Danaher and Rob Sparrow suggests we deploy an ethical equivalent of the Turing test. This paper analyses the idea of ethical behaviour and argues that the proposed test is fundamentally ill-suited to detecting moral status in entities with simple mental and emotional lives.
Causal relativism
Cei Maslen, Victoria University of Wellington
In this paper I discuss whether the context-dependence of singular causal statements is dependence on the assessor’s context or on the speaker’s context. Following MacFarlane’s discussion of knowledge statements, I relate this to the question of whether such context-dependence should be understood as contextualism or as relativism.
Populations and machine-like decomposition
John Matthewson, Massey University
Explanations of population-level phenomena can be given at various grains of description. For example, an explanation of disease incidence might refer to properties of the population as a whole (such as the average level of income in the region), or it might decompose that population into subgroups or individuals (such as the risky actions of people in a particular age bracket). However, Arnon Levy (2014) argues that decomposition only explains an entity’s behaviour to the extent that the entity is “machine-like”, and populations aren’t at all like machines. This appears to rule out decompositional explanations of population-level behaviour. I present two counterexamples to Levy’s arguments, and attempt to reconcile these findings.
The Role of Rules in Wittgenstein’s Later Philosophy of Language
Alex Miller, University of Otago
In their paper “Es braucht die Regel nicht”, Kathrin Glüer and Åsa Wikforss argue against what they describe as “the received view” of Wittgenstein’s much-discussed remarks on rule-following. According to the received view: (RG) Speaking a language is a rule-guided activity. Glüer and Wikforss argue, to the contrary, that “from a broadly later Wittgensteinian picture of language and thought, (RG) proves to be the villain of the piece”. Contrary to the received view, they suggest the later Wittgenstein actually rejects (RG). I will suggest that Glüer and Wikforss are mistaken. In particular, I will argue that there are both philosophical and exegetical reasons to reject their interpretation, and (time-permitting) I’ll give a sketch of an alternative framework for making sense of Wittgenstein’s remarks on rule-following and meaning.
*Using AI to support student engagement in video-based learning
Tanja Mitrovic, University of Canterbury
Video-based learning is widely used in both formal education and informal learning in a variety of contexts. Given the ubiquity of video content, video-based learning is seen as one of the main strategies for providing engaging learning environments. However, numerous studies show that to learn effectively while watching videos, students need to engage deeply with video content. We have developed an active video watching platform (AVW-Space) to facilitate engagement with video content by providing means for constructive learning. The initial studies with AVW-Space on presentation skills showed that only students who commented on videos and who rated comments written by their peers improved their understanding of the target soft skill. In order to foster deeper engagement, we designed a choice architecture and a set of nudges to encourage students to write more and to reflect on their past experience. The nudges are implemented using AI techniques and are generated automatically based on the student’s behaviour while watching videos. We conducted three studies investigating the effect of nudges. The results provide evidence that nudges are effective: students who received nudges wrote more comments, of various types, and of better quality.
*Philosophical prototyping
Jonathan Pengelly, Victoria University of Wellington
Wallach and Allen argue that as artificial moral agents become more sophisticated, their similarities and differences to humans will tell us much about who and what we are. This development, they claim, will be crucial to humanity’s understanding of ethics. This paper agrees that AI technologies have the potential to generate new philosophical insights. To do this however, we must be open to new research methods which effectively utilise the power of these technologies. I propose one such method, philosophical prototyping, showing how it can be used to explore the limits, false and real, of moral theory.
Russell, causality and the solipsistic prison
Charles Pigden, University of Otago
“But now [the universe] has shrunk to be no more than my own reflection in the windows of the soul through which I look out upon the night of nothingness. The revolutions of nebulae, the birth and death of stars, are no more than convenient fictions in the trivial work of linking together my own sensations, and perhaps those of other men not much better than myself. There is no splendour, no vastness, anywhere; only triviality for a moment, and then nothing. Why live in such a world? Why even die?” (Autobiography ch. 11 p. 393). Three doctrines confined Russell to his solipsistic prison:
a) the Fundamental Principle, “that [the] sentences we can understand must [ultimately] be composed of words with whose meaning we are acquainted”;
b) the idea that we only perceive our own perceptions (sense-data); and
c) the Logical Atomist principle that we should substitute logical constructions for inferred entities.
I take it that the basic project of his later philosophy is to escape the prison by retaining a) and b) but dropping principle c). I argue that he has to drop either a) or b) as well.
*Did Turing endorse the computational theory of mind?
Diane Proudfoot, University of Canterbury
Many, if not most, theorists assume that Turing anticipated the computational theory of mind. I argue that his account of intelligence and free will leads to a new objection to computationalism.
Susan Stebbing, Alice Ambrose, Ruth Barcan Marcus: The role of women in symbolic logic
Adriane Rini, Massey University
This talk will explain how what these women were doing in the new symbolic logic of Russell and Whitehead was what no one else at the time was doing. The talk will use the work of these women to illustrate some of what has been missed in existing discussions and histories of women in philosophy. The talk will be accessible to logicians and non-logicians.
Ethical issues with involving kith and kin in tertiary education
Vanessa Scholes
Tertiary education practices often seem to consider learners’ extramural lives to be peripheral to the learning process. It is acknowledged that the people in learners’ lives can impact on their learning process, but this is often seen as problematic: flatmates who frequently invite groups over to party; sick kids; co-workers who don’t pull their weight or bosses who expect too much. Educators help learners overcome these problems through offering, for example, recordings of lectures and extensions on assignments. Institutions providing on-campus learning can afford to keep the learner’s people at the periphery because these institutions can put other people in the learning process – lecturers, tutors, fellow students – to support the learner. In this paper, I argue that off-campus learning provision (e.g., remote or online learning), by contrast, should recruit and make use of the learner’s people in the learning process. I discuss some of the ethical issues this approach raises.
Making sense of animal welfare: A taxonomy of concepts
Asher Soryl, University of Otago Bioethics Centre
Attitudes toward animal welfare vary widely across disciplines, and can be generally divided into two conceptual categories: philosophical concepts of well-being, which aim to describe welfare in terms of what is non-instrumentally good or bad for an animal, and practical welfare concepts, which are conceived of and discussed with explicit regard to real-world circumstances involving animals. Lack of communication between philosophers and scientists using these concepts has created confusion regarding their appropriate usage in different contexts, and to date there have been no significant efforts to describe how they might relate to one another. This paper attempts to chart the relationship between philosophical and practical concepts of animal welfare, proposing a three-tiered system of models, views, and theories which recognises the inherent complexities involved in bridging theory and practice across disciplines.
Crossed wires: blaming artifacts for bad outcomes
Justin Sytsma, Victoria University of Wellington
Philosophers and psychologists often assume that responsibility and blame only apply to certain agents. Sometimes this is nuanced by claiming that there are multiple ordinary concepts of blame and responsibility, with one set being purely descriptive while the other is distinctively moral, and with the latter applying just to certain agents. But do our ordinary concepts of responsibility and blame reflect these assumptions? In this paper, I investigate one recent debate where these assumptions have been applied—the back-and-forth over how to explain the impact of norms on ordinary causal attributions. I investigate one prominent case where it has been found that norms matter for causal attributions, but where it is claimed that responsibility and blame do not apply because the case involves artifacts. Across five studies (total N=1,393) more carefully investigating Hitchcock and Knobe’s (2009) Machine Case, I find that the same norm effect found for causal attributions is found for responsibility and blame attributions, with participants tending to ascribe both to a norm-violating artifact. Further, the evidence suggests that participants do so because they are applying broadly normative, but not distinctively moral, concepts.
*How to make a conscious robot
Justin Sytsma, Victoria University of Wellington
In one sense, robots are strikingly different from us. More different from us than mosquitos, or ferns, or even viruses. They are non-biological, non-living. They are artifacts. In another sense, however, robots can be strikingly similar to us. They can do many of the things that we do. In fact, they’re often created for just that purpose—to take over jobs previously done by humans. Not surprisingly, robots are a common comparison case for thinking about the mind, mental states, and cognitive capacities: they are both different from and similar to us, offering the hope of bringing into focus the role of behavioral cues in our beliefs that something has a mind or mental attributes. But this helpful tool is not without drawbacks. The very fact that robots are seen as different from us, oftentimes as radically other, carries the risk of bias. There is evidence that people are generally disinclined to attribute a range of mental state capacities, such as consciousness, free will, feeling pain, and having emotions, to even extremely sophisticated humanoid robots described as being behaviorally indistinguishable from humans. Does this reflect bias—that even human-like robots are treated as other—or does it reflect something deeper about how we think about minds? Might the same tendencies that lead us to dehumanize members of human outgroups lead us to dehumanize robots, whatever their behavioral abilities? In this paper, we expand on previous work testing judgments about human-like robots, increasing the closeness between the participant and a robot rather than simply the similarity between the robot and other humans. Across three large-scale studies (total N=3624) we find a large effect: when a robot is described as implementing a simulation of the participant’s brain, mental capacities typically denied of robots are ascribed at levels similar to self-attributions, and at much higher levels than when a robot is described as implementing a simulation of a stranger’s brain. Further, the same effect is found when comparing a robot running a simulation of a close friend’s brain versus a stranger’s brain. The results suggest that making a robot that people judge to have the full range of human mental capacities depends not so much on what the robot is capable of doing, but on people taking the robot to be part of their ingroup.
Virtual reality and pictorial seeing
Grant Tavinor, Lincoln University
In this paper I argue that stereoscopic VR headsets involve a kind of picturing in which users see visual scenes through a depictive surface. A problem for this account is to explain whether and how VR visual media involve the “seeing in” typical of some “twofold” theories of picture perception, given that virtual media differ in certain important respects from other pictures. One such difference is that virtual media provide a kind of egocentric seeing that is lacking in customary pictures. I will argue here that this difference can be accommodated by a theory of VR picturing, but that this accommodation may necessitate changing our assumptions about how pictures can function.
Mathematical pluralism and inconsistent arithmetic
Zach Weber, University of Otago
Non-classical mathematics—and inconsistent arithmetic in particular—is mathematical practice based on non-classical logics, such as intuitionistic and paraconsistent logic. Is non-classical mathematics just another legitimate practice among many, or is it a rival of its classical counterpart? We will consider pluralistic and monistic answers to this question, focusing on R. K. Meyer’s claim, circa 1975, to have “overturned” Gödel’s incompleteness theorem using a paraconsistent logic. Did he? Does his claim even make sense? The answers will depend on whether one takes logic to be descriptive or normative.
Philosophy as a vehicle for significant learning experiences
Dan Weijers and Nick Munn, University of Waikato
Philosophy has a PR problem, and traditional methods of teaching philosophy don’t help. But all is not lost. Contrary to popular belief, philosophy courses can [easily?] be delivered in ways that generate significant learning experiences for students. Doing so has the potential to create a virtuous cycle of improvement in the reputation of philosophy as a discipline… and will help students learn more. So, in this paper, we argue that philosophers ought to structure their teaching to maximise the affordance of significant learning experiences, and we offer some suggestions for how to do so, both in person and online.
The prospects of primitivism
Jeremy Wyatt, University of Waikato
Primitivist theories of truth date back at least to the origins of analytic philosophy, being defended by Moore, Russell, and Frege. A number of contemporary philosophers have also defended primitivist truth theories, with Davidson’s and Sosa’s defenses probably being the best known. The most extensive development of primitivism, however, has been offered by Jamin Asay, who contends that the concept truth is primitive while the property truth is non-primitive yet insubstantial. In this talk, my primary aim will be to critically assess Asay’s primitivism. I’ll explain why his signature argument for primitivism is inconclusive and why his views face a pair of formidable challenges. After defending these negative claims, I’ll suggest that the way forward for inquiry about truth is to move away from purely a priori investigations and towards a sort of inquiry which recognizes the critical role of empirical questions about the nature and acquisition of truth.
What is the past?
Benjamin David Young, University of Waikato
Every mental state is about something. Presentists claim only the present exists. Thus, a mental state about something that existed in the past is either about something nonexistent or about the content of one’s thought. If the former, then mental states may be about nonexistent things. If the latter, then the mental state is not about the past but about one’s thoughts about the past. Is the past merely the content of one’s present thoughts or is the past nonexistent, as abstract as ‘the future’? What practical consequences follow, if any, from the ontological status of the past? Just as the future is nonexistent and abstract, so too is the past. In this paper, I propose to answer the question about the past by showing how we can treat the past like the future. If the past is as abstract as the future, then perhaps we have just as much reason to discount moral judgements that respect the past as we have reason to discount moral judgements about the future. I consider whether this conclusion is amenable to normative morality or whether it seems unconvincing in light of common attitudes about the morality of past events and past individuals.