
Workshop Abstracts

Professor Vincent C. Müller

Title: Orthogonality and Existential Risk from AI: Can We Have It Both Ways?

ABSTRACT

There is a hole in the standard argument to the conclusion that AI constitutes an existential risk for the human species – even if its two premises are true: (1) AI may reach superintelligent levels, at which point we humans lose control (the ‘singularity claim’); (2) Any level of intelligence can go along with any goal (the ‘orthogonality thesis’). We find that the singularity claim uses a notion of ‘general intelligence’, while the orthogonality thesis uses a notion of ‘instrumental intelligence’. If that is true, they cannot be joined to support the conclusion that AI constitutes an existential risk. To repair the situation, we try to find a unified notion of intelligence that can be used in both premises, but we fail.

Professor John McDermid

Title: Embodied AI: Autonomous Systems and Ethics

ABSTRACT

Many applications of Artificial Intelligence (AI), and especially Machine Learning (ML), are digital advisory or “recommender” systems, where humans have time to consider the system’s output and to mediate its implementation. Examples of such applications include online shopping and systems that make recommendations related to prison sentences, such as the notorious COMPAS system in the USA.

In contrast, where AI or ML is embodied in a physical system, e.g. a car or a marine vessel, decision-making responsibility is more fully “transferred” from a human to the system – and the time and ability of the human to intercede and mediate its decisions is much more limited. Such autonomous systems can have a direct impact on human health and well-being – they are often referred to as “safety-critical” – so there is clearly an ethical dimension to their deployment.

The talk will briefly outline the technical capabilities and limitations of autonomous systems that embody AI and ML, in a way intended to be accessible to a non-specialist audience. Based on this overview, some areas will be identified where a philosophical perspective, particularly some ideas from practical ethics, could illuminate decisions related to the design, assessment and regulation of autonomous systems.

Dr. Paula Boddington

Title: Philosophy of AI through the theory and practice of dementia

ABSTRACT

This paper considers the potential for fruitful dialogue between work on the care and treatment of people living with dementia and an understanding of ethical and other issues in AI and related technologies. Although these may at first glance appear to be two quite different areas of investigation, they share many underlying philosophical and ethical issues. These include the nature of personhood and its attribution; models for the nature of agency; questions of dehumanisation; and the nature of communication and of human interactions. The paper draws on collaborative work with Dr Katie Featherstone and her team at Cardiff University and their ethnographic studies of hospital wards.

Dr. David Strohmaier

Title: Ontology, neural networks, and the social sciences

ABSTRACT

The ontology of social objects and facts remains a field of continued controversy. This situation complicates the life of social scientists who seek to make predictive models of social phenomena. For the purposes of modelling a social phenomenon, we would like to avoid having to make any controversial ontological commitments. The overwhelming majority of models in the social sciences, including statistical models, are built upon ontological assumptions that can be questioned. Recently, however, artificial neural networks (ANNs) have made their way into the social sciences, raising the question of whether they can avoid controversial ontological assumptions. I argue that neural networks can avoid ontological assumptions to a greater degree than common statistical models in the social sciences. I then go on, however, to claim that ANNs are not ontologically innocent either. The use of ANNs in the social sciences typically introduces ontological assumptions in at least two ways: via the input and via the architecture.

This presentation is based on his latest publication: https://link.springer.com/article/10.1007/s11229-020-03002-6

Dr. Ioannis Votsis

Title: Machine‐Made Jabberwocky?

ABSTRACT

The question of whether machines can be truly creative has been with us at least since the advent of modern computers. Although the tide seems to be turning, the naysayers still represent a sizeable share of the voices out there. On their view, machines are ultimately incapable of the deeply transformational creativity that human beings exhibit, especially that found in the most outstanding among us, e.g. a Mozart, a Dali or an Einstein. In this talk, I subject the kinds of reasons they offer to careful examination. The focal point of my discussion is cases of scientific creativity, but I also make some relevant remarks about cases in the arts and the humanities. By and large, I find the reasons offered by the naysayers wanting and argue that machines should be able to exhibit the same, and indeed even superior, levels of creativity as humans. It is high time that we broadened our horizons and abandoned such antiquated notions as the view that humanity sits at the apex of meaningful existence.

Prof. David Hogg

Title: AI and Common Sense

ABSTRACT

Today’s AI systems have little in the way of common sense. This partly motivates the requirement for AI systems to explain their reasoning, particularly in safety-critical areas such as health care. The talk will examine the prospects for developing AI systems with common sense and the ways in which this might be achieved.
