Papers by Luciano Floridi
Artificial intelligence's impact on healthcare is undeniable. What is less clear is whether it will be ethically justifiable. Just as we know that AI can be used to diagnose disease, predict risk, develop personalized treatment plans, monitor patients remotely, or automate triage, we also know that it can pose significant threats to patient safety and the reliability (or trustworthiness) of the healthcare sector as a whole. These ethical risks arise from (a) flaws in the evidence base of healthcare AI (epistemic concerns); (b) the potential of AI to transform fundamentally the meaning of health, the nature of healthcare, and the practice of medicine (normative concerns); and (c) the 'black box' nature of the AI development pipeline, which undermines the effectiveness of existing accountability mechanisms (traceability concerns). In this chapter, we systematically map (a)-(c) to six different levels of abstraction: individual, interpersonal, group, institutional, sectoral, and societal. The aim is to help policymakers, regulators, and other high-level stakeholders delineate the scope of regulation and other 'softer' governing measures for AI in healthcare. We hope that by doing so, we may enable global healthcare systems to capitalize safely and reliably on the many life-saving and life-improving benefits of healthcare AI.
Wikipedia is an essential source of information online, so efforts to combat misinformation on this platform are critical to the health of the information ecosystem. However, few studies have comprehensively examined misinformation dynamics within Wikipedia. We address this gap by investigating Wikipedia editing communities during the 2024 US Presidential Elections, focusing on the dynamics of misinformation. We assess the effectiveness of Wikipedia's existing measures against misinformation dissemination over time, using a combination of quantitative and qualitative methods to study edits posted on politicians' pages. We find that the volume of Wikipedia edits and the risk of misinformation increase significantly during politically charged moments. We also find that a significant portion of misinformation is detected by existing editing mechanisms, particularly overt cases such as factual inaccuracies and vandalism. Based on this assessment, we conclude by offering some recommendations for addressing misinformation within Wikipedia's editing ecosystem.
Philosophy & Technology
This article argues that the current hype surrounding artificial intelligence (AI) exhibits characteristics of a tech bubble, based on parallels with five previous technological bubbles: the Dot-Com Bubble, the Telecom Bubble, the Chinese Tech Bubble, the Cryptocurrency Boom, and the Tech Stock Bubble. The AI hype cycle shares with them some essential features, including the presence of potentially disruptive technology, speculation outpacing reality, the emergence of new valuation paradigms, significant retail investor participation, and a lack of adequate regulation. The article also highlights other specific similarities, such as the proliferation of AI startups, inflated valuations, and the ethical concerns associated with the technology. While acknowledging AI's transformative potential, the article calls for pragmatic caution, evidence-based planning, and critical thinking in approaching the current hype. It concludes by offering some recommendations to minimise the negative impact of the impending bubble burst, emphasising the importance of focusing on sustainable business models and real-world applications, maintaining a balanced perspective on AI's potential and limitations, and supporting the development of effective regulatory frameworks to guide the technology's design, development, and deployment.
The recent success of Generative AI (GenAI) has heralded a new era in content creation, dissemination, and consumption. This technological revolution is reshaping our understanding of content, challenging traditional notions of authorship, and transforming the relationship between content producers and consumers. As we approach an increasingly AI-integrated world, examining the implications of this paradigm shift is crucial. This article explores the future of content in the age of GenAI, analysing the evolving definition of content, the transformations brought about by GenAI systems, and emerging models of content production and dissemination. By examining these aspects, we can gain valuable insights into the challenges and opportunities that lie ahead in the realm of content creation and consumption and, hopefully, manage them more successfully.
The US Government has stated its desire for the US to be the home of the world's most advanced Artificial Intelligence (AI). Arguably, it currently is. However, a limitation looms large on the horizon as the energy demands of advanced AI look set to outstrip both current energy production and transmission capacity. Although algorithmic and hardware efficiency will improve, such progress is unlikely to keep up with the exponential growth in compute power needed in modern AI systems. Furthermore, even with sufficient gains in energy efficiency, overall use is still expected to increase in a contemporary Jevons paradox. All these factors set the US AI ambition, alongside broader electrification, on a crash course with the US government's ambitious clean energy targets. Something will likely have to give. For now, it seems that the dilemma is leading to a de-prioritization of AI compute allocated to safety-related projects alongside a slowing of the pace of transition to renewable energy sources. Worryingly, the dilemma does not appear to be considered a risk of AI, and its resolution does not have clear ownership in the US Government.
Background: There are more than 350,000 health apps available in public app stores. The extolled benefits of health apps are numerous and well documented. However, there are also concerns that poor-quality apps, marketed directly to consumers, threaten the tenets of evidence-based medicine and expose individuals to the risk of harm. This study addresses this issue by assessing the overall quality of evidence publicly available to support the effectiveness claims of health apps marketed directly to consumers.
Methodology: To assess the quality of evidence available to the public to support the effectiveness claims of health apps marketed directly to consumers, an audit was conducted of a purposive sample of apps available on the Apple App Store.
Results: We find the quality of evidence available to support the effectiveness claims of health apps marketed directly to consumers to be poor. Less than half of the 220 apps (44%) we audited state that they have evidence to support their claims of effectiveness and, of these allegedly evidence-based apps, more than 70% rely on either very low or low-quality evidence. For the minority of app developers that do publish studies, significant methodological limitations are commonplace. Finally, there is a pronounced tendency for apps – particularly mental health and diagnostic apps – to either borrow evidence generated in other (typically offline) contexts or to rely exclusively on unsubstantiated, unpublished user metrics as evidence to support their effectiveness claims.
Conclusions: Health apps represent a significant opportunity for individual consumers and healthcare systems. Nevertheless, this opportunity will be missed if the health apps market continues to be flooded by poor quality, poorly evidenced, and potentially unsafe apps. It must be accepted that a continuing lag in generating high-quality evidence of app effectiveness and safety is not inevitable: it is a choice. Just because it will be challenging to raise the quality of the evidence base available to support the claims of health apps, this does not mean that the bar for evidence quality should be lowered. Innovation for innovation’s sake must not be prioritized over public health and safety.
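As a quick sanity check on the audit figures reported above (assuming the stated 44% and 70% are exact, which the rounded percentages in the abstract may not be), the headline numbers work out as follows:

```python
# Back-of-the-envelope check of the audit figures reported in the Results.
# Assumption: the 44% and 70% figures are exact; the abstract only reports
# rounded percentages, so these counts are approximate.

total_apps = 220                                  # apps audited
claiming_evidence = round(total_apps * 0.44)      # apps stating they have evidence
poor_evidence = round(claiming_evidence * 0.70)   # of those, low or very-low quality

print(claiming_evidence)                    # ~97 apps claim supporting evidence
print(poor_evidence)                        # ~68 of those rely on poor evidence
print(f"{poor_evidence / total_apps:.0%}")  # ~31% of all audited apps
```

In other words, on these assumptions roughly 68 of the 220 audited apps, about a third of the whole sample, both claim evidence and rest that claim on low or very-low quality studies.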
Artificial Intelligence and computer games have been closely related since the first single-player games were made. From AI-powered companions and foes to procedurally generated environments, the history of digital games runs parallel to the history of AI. However, recent advances in language models have made possible the creation of conversational AI agents that can converse with human players in natural language, interact with a game's world in their own right and integrate these capabilities by adjusting their actions according to communications and vice versa. This creates the potential for a significant shift in games' ability to simulate a highly complex environment, inhabited by a variety of AI agents with which human players can interact just as they would interact with the digital avatar of another person. This article begins by introducing the concept of conversational AI agents and justifying their technical feasibility. We build on this by introducing a taxonomy of conversational AI agents in multiplayer games, describing their potential uses and, for each use category, discussing the associated opportunities and risks. We then explore the implications of the increased flexibility and autonomy that such agents introduce to games, covering how they will change the nature of games and in-game advertising, as well as their interoperability across games and other platforms. Finally, we suggest game worlds filled with human and conversational AI agents can serve as a microcosm of the real world.
Over the last decade, the figure of the AI Ethicist has seen significant growth in the ICT market. However, only a few studies have taken an interest in this professional profile, and they have yet to provide a normative discussion of its expertise and skills. The goal of this article is to initiate such a discussion. We argue that AI Ethicists should be experts and use a heuristic to identify them. Then, we focus on their specific kind of moral expertise, drawing on a parallel with the expertise of Ethics Consultants in clinical settings and on the bioethics literature on the topic. Finally, we highlight the differences between Health Care Ethics Consultants and AI Ethicists and derive the expertise and skills of the latter from the roles that AI Ethicists should have in an organisation.
U.S.-China AI competition has created a 'race to the bottom', where each nation's attempts to cut the other off from artificial intelligence (AI) computing resources through protectionist policies come at a cost: greater energy consumption. This article shows that heightened energy consumption stems from six key areas: 1) Limited access to the latest and most energy-efficient hardware; 2) Unintended spillover effects in the consumer space due to the dual-use nature of AI technology and processes; 3) Duplication in manufacturing processes, particularly in areas lacking comparative advantage; 4) The loosening of environmental standards to onshore manufacturing; 5) The potential for weaponizing the renewable energy supply chain, which supports AI infrastructure, hindering the pace of the renewable energy transition; 6) The loss of synergy in AI advancement, including the development of more energy-efficient algorithms and hardware, due to the transition towards a more autarkic information system and trade. By investigating the unintended consequences of the U.S.-China AI competition policies, the article highlights the need to redesign AI competition to reduce unintended consequences on the environment, consumers, and other countries.
This article addresses the question of how ‘Country of Origin Information’ (COI) reports — that is, research developed and used to support decision-making in the asylum process — can be published in an ethical manner. The article focuses on the risk that published COI reports could be misused and thereby harm the subjects of the reports and/or those involved in their development. It supports a situational approach to assessing data ethics when publishing COI reports, whereby COI service providers must weigh up the benefits and harms of publication based, inter alia, on the foreseeability and probability of harm due to potential misuse of the research, the public good nature of the research, and the need to balance the rights and duties of the various actors in the asylum process, including asylum seekers themselves. Although this article focuses on the specific question of ‘how to publish COI reports in an ethical manner’, it also intends to promote further research on data ethics in the asylum process, particularly in relation to refugees, where more foundational issues should be considered.
As the range of potential uses for Artificial Intelligence (AI), in particular machine learning (ML), has increased, so has awareness of the associated ethical issues. This increased awareness has led to the realisation that existing legislation and regulation provides insufficient protection to individuals, groups, society, and the environment from AI harms. In response to this realisation, there has been a proliferation of principle-based ethics codes, guidelines and frameworks. However, it has become increasingly clear that a significant gap exists between the theory of AI ethics principles and the practical design of AI systems. In previous work, we analysed whether it is possible to close this gap between the 'what' and the 'how' of AI ethics through the use of tools and methods designed to help AI developers, engineers, and designers translate principles into practice. We concluded that this method of closure is currently ineffective as almost all existing translational tools and methods are either too flexible (and thus vulnerable to ethics washing) or too strict (unresponsive to context). This raised the question: if, even with technical guidance, AI ethics is challenging to embed in the process of algorithmic design, is the entire pro-ethical design endeavour rendered futile? And, if not, then how can AI ethics be made useful for AI practitioners? This is the question we seek to address here by exploring why principles and technical translational tools are still needed even if they are limited, and how these limitations can potentially be overcome by providing a theoretical grounding for a concept that has been termed 'Ethics as a Service'.
In this article we analyse the role that artificial intelligence (AI) could play, and is playing, to combat global climate change. We identify two crucial opportunities that AI offers in this domain: it can help improve and expand current understanding of climate change, and it can contribute to combatting the climate crisis effectively. However, the development of AI also raises two sets of problems when considering climate change: the possible exacerbation of social and ethical challenges already associated with AI, and the contribution to climate change of the greenhouse gases emitted by training data- and computation-intensive AI systems. We assess the carbon footprint of AI research, and the factors that influence AI's greenhouse gas (GHG) emissions in this domain. We find that the carbon footprint of AI research may be significant and highlight the need for more evidence concerning the trade-off between the GHG emissions generated by AI research and the energy and resource efficiency gains that AI can offer. In light of our analysis, we argue that leveraging the opportunities offered by AI for global climate change whilst limiting its risks is a gambit which requires responsive, evidence-based, and effective governance to become a winning strategy. We conclude by identifying the European Union as being especially well-placed to play a leading role in this policy response and provide 13 recommendations that are designed to identify and harness the opportunities of AI for combatting climate change, while reducing its impact on the environment.
In this article, we compare the artificial intelligence strategies of China and the European Union, assessing the key similarities and differences regarding what the high-level aims of each governance strategy are, how the development and use of AI is promoted in the public and private sectors, and whom these policies are meant to benefit. We characterise China’s strategy by its primary focus on fostering innovation and a more recent emphasis on “common prosperity”, and the EU’s on promoting ethical outcomes through protecting fundamental rights. Building on this comparative analysis, we consider the areas where the EU and China could learn from and improve upon each other’s approaches to AI governance to promote more ethical outcomes. We outline policy recommendations for both European and Chinese policymakers that would support them in achieving this aim.
Digital sovereignty seems to be something very important, given the popularity of the topic these days. True. But it also sounds like a technical issue, which concerns only specialists. False. Digital sovereignty, and the fight for it, touch everyone, even those who do not have a mobile phone or have never used an online service. To understand why, let me start with four episodes. I shall add a fifth shortly. June 18, 2020: The British government, after having failed to develop a centralised coronavirus app not based on the API provided by Google-Apple, gave up, ditched the whole project (Burgess 2020) and agreed to start developing a new app in the future that would be fully compatible with the decentralised solution supported by the two American companies. This U-turn was not the first: Italy (Longo 2020) and Germany (Busvine and Rinke 2020; Lomas 2020) had done the same, only much earlier. Note that, in the context of an online webinar on COVID-19 contact tracing applications, organised by RENEW EUROPE (a liberal, pro-European political group of the European Parliament), Gary Davis, Global Director of Privacy & Law Enforcement Requests at Apple (and previously Deputy Commissioner at the Irish Data Protection Commissioner's Office), stated that
Technologies to rapidly alert people when they have been in contact with someone carrying the coronavirus SARS-CoV-2 are part of a strategy to bring the pandemic under control. Currently, at least 47 contact-tracing apps are available globally. They are already in use in Australia, South Korea and Singapore, for instance. And many other governments are testing or considering them. Here we set out 16 questions to assess whether — and to what extent — a contact-tracing app is ethically justifiable.
Health-care systems worldwide face increasing demand, a rise in chronic disease, and resource constraints. At the same time, the use of digital health technologies in all care settings has led to an expansion of data. For this reason, policy makers, politicians, clinical entrepreneurs, and computer and data scientists argue that a key part of health-care solutions will be artificial intelligence (AI), particularly machine learning. AI forms a key part of the National Health Service (NHS) Long-Term Plan (2019) in England, the US National Institutes of Health Strategic Plan for Data Science (2018), and China’s Healthy China 2030 strategy (2016). The willingness to embrace the potential future of medical care, expressed in these national strategies, is a positive development. Health-care providers should, however, be mindful of the risks that arise from AI’s ability to change the intrinsic nature of how health care is delivered. This paper outlines and discusses these potential risks.
Applications of artificial intelligence (AI) for cybersecurity tasks are attracting greater attention from the private and the public sectors. Estimates indicate that the market for AI in cybersecurity will grow from US$1 billion in 2016 to a US$34.8 billion net worth by 2025. The latest national cybersecurity and defense strategies of several governments explicitly mention AI capabilities. At the same time, initiatives to define new standards and certification procedures to elicit users’ trust in AI are emerging on a global scale. However, trust in AI (both machine learning and neural networks) to deliver cybersecurity tasks is a double-edged sword: it can substantially improve cybersecurity practices, but can also facilitate new forms of attacks on the AI applications themselves, which may pose severe security threats. We argue that trust in AI for cybersecurity is unwarranted and that, to reduce security risks, some form of control to ensure the deployment of ‘reliable AI’ for cybersecurity is necessary. To this end, we offer three recommendations focusing on the design, development and deployment of AI for cybersecurity.
Artificial Intelligence (AI) is already having a major impact on society. As a result, many organizations have launched a wide range of initiatives to establish ethical principles for the adoption of socially beneficial AI. Unfortunately, the sheer volume of proposed principles threatens to overwhelm and confuse. How might this problem of ‘principle proliferation’ be solved? In this paper, we report the results of a fine-grained analysis of several of the highest profile sets of ethical principles for AI. We assess whether these principles converge upon a set of agreed-upon principles, or diverge, with significant disagreement over what constitutes ‘ethical AI.’ Our analysis finds a high degree of overlap among the sets of principles we analyze. We then identify an overarching framework consisting of five core principles for ethical AI. Four of them are core principles commonly used in bioethics: beneficence, non-maleficence, autonomy, and justice. On the basis of our comparative analysis, we argue that a new principle is needed in addition: explicability, understood as incorporating both the epistemological sense of intelligibility (as an answer to the question ‘how does it work?’) and the ethical sense of accountability (as an answer to the question: ‘who is responsible for the way it works?’). In the ensuing discussion, we note the limitations and assess the implications of this ethical framework for future efforts to create laws, rules, technical standards, and best practices for ethical AI in a wide range of contexts.
The article develops a correctness theory of truth (CTT) for semantic information. After the introduction, in section two, semantic information is shown to be translatable into propositional semantic information (i). In section three, i is polarised into a query (Q) and a result (R), qualified by a specific context, a level of abstraction and a purpose. This polarization is normalised in section four, where [Q + R] is transformed into a Boolean question and its relative yes/no answer [Q + A]. This completes the reduction of the truth of i to the correctness of A. In sections five and six, it is argued that (1) A is the correct answer to Q if and only if (2) A correctly saturates (in a Fregean sense) Q by verifying and validating it (in the computer science’s sense of “verification” and “validation”); that (2) is the case if and only if (3) [Q + A] generates an adequate model (m) of the relevant system (s) identified by Q; that (3) is the case if and only if (4) m is a proxy of s (in the computer science’s sense of “proxy”) and (5) proximal access to m commutes with the distal access to s (in the category theory’s sense of “commutation”); and that (5) is the case if and only if (6) reading/writing (accessing, in the computer science’s technical sense of the term) m enables one to read/write (access) s. The last section draws a general conclusion about the nature of CTT as a theory for systems designers, not just systems users.
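The reduction sketched in this abstract can be illustrated with a deliberately simple toy model. This is an illustrative sketch only, not Floridi's formalism: the `system` dictionary and the function names `polarise`, `build_model`, `commutes`, and `is_true` are all invented for the example. The idea it mimics is that the truth of a piece of propositional information i is traded for the correctness of a yes/no answer A to a Boolean question Q, where A is correct just in case reading the proxy model m agrees with reading the distal system s:

```python
# Toy illustration of the CTT reduction: the truth of propositional semantic
# information i becomes the correctness of answer A to Boolean question Q.
# All names here are invented for the sketch; none come from the paper.

def polarise(i: str):
    """Polarise information i into [Q + A]: a Boolean question plus 'yes',
    since i asserts its own content."""
    return f"Is it the case that {i}?", "yes"

def build_model(system: dict, i: str) -> dict:
    """An adequate model m: a proxy of the system s, restricted to the
    fact that Q identifies."""
    return {i: system.get(i, False)}

def commutes(model: dict, system: dict, i: str) -> bool:
    """Proximal access to m agrees with distal access to s (a stand-in
    for the 'commutation' step)."""
    return model[i] == system.get(i, False)

def is_true(i: str, system: dict) -> bool:
    """Truth of i reduces to correctness of A: A = 'yes' correctly
    saturates Q iff m answers 'yes' as well, and m commutes with s."""
    q, a = polarise(i)
    m = build_model(system, i)
    return commutes(m, system, i) and ((a == "yes") == m[i])

# A miniature 'system' s serving as ground truth:
s = {"the battery is flat": True, "the tank is full": False}
print(is_true("the battery is flat", s))  # True
print(is_true("the tank is full", s))     # False
```

The point of the sketch is only to make the shape of the reduction concrete: at no step is "truth" checked directly; the verdict falls out of whether the answer's correctness survives the round trip through the proxy model.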
The paper introduces a new model of telepresence. First, it criticises the standard model of presence as epistemic failure, showing it to be inadequate. It then replaces it with a new model of presence as successful observability. It further provides reasons to distinguish between two types of presence, backward and forward. The new model is then tested against two ethical issues whose nature has been modified by the development of digital information and communication technologies, namely pornography and privacy, and shown to be effective.
by Agnese Bertello
The volume is divided into three parts, each comprising four chapters. The first part focuses on issues pertaining to the concept of Information Warfare and the clarifications that need to be made in order to address its ethical implications. The second part collects contributions focusing on Just War Theory and its application to the case of Information Warfare. The third part adopts alternative approaches to Just War Theory for analysing the ethical implications of this phenomenon. Finally, an afterword by Neelie Kroes, Vice President of the European Commission and European Digital Agenda Commissioner, concludes the volume. Her contribution describes the interests and commitments of the European Digital Agenda with respect to research for the development and deployment of robots in various circumstances, including warfare.
Since the seventies, Information Ethics (IE) has been a standard topic in many curricula. In recent years, there has been a flourishing of new university courses, international conferences, workshops, professional organizations, specialized periodicals and research centres. However, investigations have so far been largely influenced by professional and technical approaches, addressing mainly legal, social, cultural and technological problems. This book is the first philosophical monograph entirely and exclusively dedicated to it.
Floridi lays down, for the first time, the conceptual foundations for IE. He does so systematically, by pursuing three goals:
a) a metatheoretical goal: it describes what IE is, its problems, approaches and methods;
b) an introductory goal: it helps the reader to gain a better grasp of the complex and multifarious nature of the various concepts and phenomena related to computer ethics;
c) an analytic goal: it answers several key theoretical questions of great philosophical interest, arising from the investigation of the ethical implications of ICTs.
Although entirely independent of The Philosophy of Information (OUP, 2011), Floridi's previous book, The Ethics of Information complements it as a new work on the foundations of the philosophy of information.
As the boundaries between life online and offline break down, and we become seamlessly connected to each other and surrounded by smart, responsive objects, we are all becoming integrated into an "infosphere". Personas we adopt in social media, for example, feed into our 'real' lives so that we begin to live, as Floridi puts it, "onlife". Following those led by Copernicus, Darwin, and Freud, this metaphysical shift represents nothing less than a fourth revolution.
"Onlife" defines more and more of our daily activity - the way we shop, work, learn, care for our health, entertain ourselves, conduct our relationships; the way we interact with the worlds of law, finance, and politics; even the way we conduct war. In every department of life, ICTs have become environmental forces which are creating and transforming our realities. How can we ensure that we shall reap their benefits? What are the implicit risks? Are our technologies going to enable and empower us, or constrain us? Floridi argues that we must expand our ecological and ethical approach to cover both natural and man-made realities, putting the 'e' in an environmentalism that can deal successfully with the new challenges posed by our digital technologies and information society.
- Inspires reflection on the ways in which a hyperconnected world forces the rethinking of the conceptual frameworks on which policies are built
- Draws upon the work of a group of scholars from a wide range of disciplines, including anthropology, cognitive science, computer science, law, philosophy, and political science
What is the impact of information and communication technologies (ICTs) on the human condition? In order to address this question, in 2012 the European Commission organized a research project entitled The Onlife Initiative: concept reengineering for rethinking societal concerns in the digital transition. This volume collects the work of the Onlife Initiative. It explores how the development and widespread use of ICTs have a radical impact on the human condition.
ICTs are not mere tools but rather social forces that are increasingly affecting our self-conception (who we are), our mutual interactions (how we socialise), our conception of reality (our metaphysics), and our interactions with reality (our agency). In each case, ICTs have a huge ethical, legal, and political significance, yet one with which we have begun to come to terms only recently.
The impact exercised by ICTs is due to at least four major transformations: the blurring of the distinction between reality and virtuality; the blurring of the distinction between human, machine and nature; the reversal from information scarcity to information abundance; and the shift from the primacy of stand-alone things, properties, and binary relations, to the primacy of interactions, processes and networks.
Such transformations are testing the foundations of our conceptual frameworks. Our current conceptual toolbox is no longer fit to address new ICT-related challenges. This is not only a problem in itself. It is also a risk, because the lack of a clear understanding of our present time may easily lead to negative projections about the future. The goal of The Manifesto, and of the whole book that contextualises it, is therefore that of contributing to the update of our philosophy. It is a constructive goal. The book is meant to be a positive contribution to rethinking the philosophy on which policies are built in a hyperconnected world, so that we may have a better chance of understanding our ICT-related problems and solving them satisfactorily.
The Manifesto launches an open debate on the impacts of ICTs on public spaces, politics and societal expectations toward policymaking in the Digital Agenda for Europe’s remit. More broadly, it helps start a reflection on the way in which a hyperconnected world calls for rethinking the referential frameworks on which policies are built.
The Cambridge Handbook of Information and Computer Ethics provides an ambitious and authoritative introduction to the field, with discussions of a range of topics including privacy, ownership, freedom of speech, responsibility, technological determinism, the digital divide, and online pornography.
'Philosophy and Computing is a stimulating and ambitious book that helps lay a foundation for the new and vitally important field of Philosophy of Information. This is a worthy addition to the brand new and rapidly developing field of Philosophy of Information, a field that will revolutionise philosophy in the Information Age.' - Terrell Ward Bynum, Southern Connecticut State University
'What are the philosophical implications of computers and the internet? A pessimist might see these new technologies as leading to the creation of vast encyclopaedic databases far exceeding the capacities of any individual. Yet Luciano Floridi takes a different view, arguing ingeniously for the optimistic conclusion that the computer revolution will lead instead to a reversal of the trend towards specialisation and a return to the Renaissance mind.' - Donald Gillies, King's College London
'In his seminal book, Philosophy and Computing, Luciano Floridi provides a rich combination of technical information and philosophical insights necessary for the emerging field of philosophy and computing.' - James Moor, Dartmouth College
'Luciano Floridi's book discusses the most important and the latest branches of research in information technology. He approaches the subject from a novel philosophical viewpoint, while demonstrating a strong command of the relevant technicalities of the subject.' - Hava T. Siegelman, Technion
Product Description
Philosophy and Computing is the first accessible and comprehensive philosophical introduction to Information and Communication Technology.
"The Blackwell Guide to the Philosophy of Computing and Information is a rich resource for an important, emerging field within philosophy. This excellent volume covers the basic topics in depth, yet is written in a style that is accessible to non–philosophers. There is no other book that assembles and explains systematically so much information about the diverse aspects of philosophy of computing and information. I believe this book will serve both as an authoritative introduction to the field for students and as a standard reference for professionals for years to come. I highly recommend it." James Moor, Dartmouth College
"There are contributions from a range of respected academics, many of them authorities in their field, and this certainly anchors the work in a sound scholarly foundation. The scope of the content, given the youthfulness of the computing era, is significant. The variety of the content too is remarkable. In summary this is a wonderfully fresh look at the world of computing and information, which requires its own philosophy in testimony that there are some real issues that can exercise the mind." Reference Reviews
"The judicious choice of topics, as well as the degree of detail in the various chapters, are just what it takes neither to deter the average reader requiring this Guide, nor to make it unfeasible to place this volume in the hands of students. Floridi's book is clearly a valuable addition to a worthy series." Pragmatics & Cognition
Product Description
This Guide provides an ambitious state–of–the–art survey of the fundamental themes, problems, arguments and theories constituting the philosophy of computing.
* A complete guide to the philosophy of computing and information.
* Comprises 26 newly–written chapters by leading international experts.
* Provides a complete, critical introduction to the field.
* Each chapter combines careful scholarship with an engaging writing style.
* Includes an exhaustive glossary of technical terms.
* Ideal as a course text, but also of interest to researchers and general readers.
Computing and information, and their philosophy in the broad sense, play a most important scientific, technological and conceptual role in our world. This book collects together, for the first time, the views and experiences of some of the visionary pioneers and most influential thinkers in such a fundamental area of our intellectual development. This is yet another gem in the 5 Questions Series by Automatic Press / VIP.
Floridi's complete and rigorous book constitutes a major contribution for the knowledge of the transmission and influence of Sextus' writings, which makes it an essential work of reference for any study in this field. (The British Journal for the History of Philosophy)
A fascinating read for anyone interested in the history of Scepticism. (Greece & Rome)
Workshop with Luciano Floridi, Professor of Philosophy and Ethics of Information at the University of Oxford, Director of Research at the Oxford Internet Institute, and Copernicus Visiting Professor, IUSS Ferrara 1391
This is a targeted call, by which we intend to recruit researchers in subjects currently underrepresented by our fellowship cohort. Fellowships are available for 3 years with the potential for an additional 2 years of support following interim review. Fellows will pursue research based at the Institute hub in the British Library, London. Fellowships will be awarded to individual candidates and fellows will be employed by a joint venture partner university (Cambridge, Edinburgh, Oxford, UCL or Warwick).
Professor of Philosophy and Ethics of Information at the University of Oxford, Director of Research at the Oxford Internet Institute
Copernicus Visiting Professor, IUSS Ferrara 1391
The Philosophy of Information: An Ethical and Epistemological Challenge. Ferrara, 24-26 March and 28-30 April 2016