
The Ethics of Artificial Intelligence in Defence / Mariarosaria Taddeo.

Material type: Computer file
Language: English
Publisher: New York, NY : Oxford University Press, [2024]
Copyright date: 2024
Edition: First edition
Description: 1 online resource (xxii, 282 pages)
Content type: text
Media type: computer
Carrier type: online resource
ISBN: 9780197745458
Additional physical formats: Print version: Ethics of Artificial Intelligence in Defence.
Contents:
Cover -- The Ethics of Artificial Intelligence in Defence -- Copyright -- Contents -- Preface -- Acknowledgements -- List of Most-Used Abbreviations
1. The Groundwork for an Ethics of Artificial Intelligence in Defence -- 1. Introduction -- 2. Artificial Intelligence and the Predictability Problem -- 2.1. Human-Machine Teaming -- 2.2. Machine Learning -- 2.3. Data Curation -- 2.4. Technical Debt -- 3. The Methodology of Levels of Abstraction -- 4. Ethical Problems of Using AI for Defence Purposes -- 4.1. Sustainment and Support Uses of AI -- 4.2. Adversarial and Non-kinetic Uses of AI -- 4.3. Adversarial and Kinetic Uses of AI -- 5. Conclusion
2. Ethical Principles for AI in Defence -- 1. Introduction -- 2. Ethical Principles for the Use of AI -- 2.1. Responsible Uses of AI -- 2.2. Equitable Uses of AI -- 2.3. Traceability -- 2.4. Reliable and Governable -- 3. From Defence Principles to Practice -- 4. Five Ethical Principles for AI in Defence -- 4.1. Justified and Overridable Uses -- 4.2. Just and Transparent Systems and Processes -- 4.3. Human Moral Responsibility -- 4.4. Meaningful Human Control -- 4.5. Reliable AI Systems -- 5. A Three-Step Methodology to Extract Guidelines from AI Ethics Principles in Defence -- 5.1. Independent, Multistakeholder Ethics Board -- 5.2. Abstraction -- 5.3. Interpretation and Requirements Elicitation -- 5.4. Balancing the Principles -- 6. Conclusion
3. Sustainment and Support Uses of AI in Defence: The Case of AI-Augmented Intelligence Analysis -- 1. Introduction -- 2. Mapping Augmented Intelligence Analysis in Defence -- 3. Ethical Challenges of Augmented Intelligence Analysis -- 3.1. Intrusion -- 3.2. Explainability and Accountability -- 3.3. Bias -- 3.4. Authoritarianism and Political Security -- 4. Conclusion
4. Adversarial and Non-kinetic Uses of AI: Conceptual and Ethical Challenges -- 1. Introduction -- 2. The Weaponisation of AI in Cyberspace -- 2.1. Recommendations -- 3. AI for Adversarial and Non-kinetic Purposes: The Conceptual Shift -- 4. Information Ethics -- 5. Just Non-kinetic Cyberwarfare -- 6. Conclusion
5. Adversarial and Non-kinetic Uses: The Case of Artificial Intelligence for Cyber Deterrence -- 1. Introduction -- 2. Deterrence Theory -- 3. Attribution -- 4. Deterrence Strategies: Defence and Retaliation -- 4.1. Defence in Cyberspace -- 4.2. Retaliation in Cyberspace -- 4.2.1. Control and Risks of Cyber Deterrence by Retaliation -- 5. Credible Signalling -- 6. AI for Cyber Deterrence: A New Model -- 7. Conclusion
6. Adversarial and Kinetic Uses of AI: The Definition of Autonomous Weapon Systems -- 1. Introduction -- 2. Definitions of Autonomous Weapon Systems -- 2.1. Autonomy, Intervention, and Control -- 2.2. Learning Capabilities -- 2.3. Purpose of Deployment -- 3. A Definition of AWS -- 3.1. Autonomous, Self-Learning Weapons Systems -- 3.2. Human Control -- 4. Conclusion
7. Taking a Moral Gambit: Accepting Moral Responsibility for the Actions of Autonomous Weapons Systems -- 1. Introduction -- 2. Moral Responsibility for AI Systems -- 3. Collective and Faultless Distributed Moral Responsibility -- 4. Moral Responsibility for AWS: The Collective Moral Responsibility Approach -- 4.1. Moral Responsibility for AWS: Distributing Moral Responsibility along the Chain of Command -- 4.2. Moral Responsibility for AWS: The Distributed Faultless Moral Responsibility Approach -- 5. Meaningful Moral Responsibility and the Moral Gambit -- 6. Discharging Meaningful Moral Responsibility for the Actions of Non-lethal AWS -- 7. Conclusion
8. Just War Theory and the Permissibility of Autonomous Weapons Systems -- 1. Introduction -- 2. Jus ad bellum and AWS -- 3. The Principle of Necessity -- 3.1. The Principle of Necessity and AWS -- 4. Distinction, Double Effect, and Due Care -- 4.1. AWS, Distinction, and Due Care -- 5. Conclusion
Epilogue -- References -- Index
Summary: The use of Artificial Intelligence (AI) for national defence poses important ethical problems that combine ethical risks related to the use of AI (e.g. enabling human wrongdoing, reducing human control, removing human responsibility, devaluing human skills, and eroding human self-determination) with those that follow from the use of force in warfare, such as respecting human dignity and the risk of breaching the principles of Just War Theory. Because of the range of possible applications and the set of ethical risks and opportunities to address, it is difficult to develop a coherent and systemic ethical analysis of AI in defence. The goal of this chapter is to clarify how this book will do so, by outlining the methodology and the scope of the analysis proposed here. Three aspects are crucial to this end: the definition of AI; the methodology of levels of abstraction; and the identification of three categories of use of AI in defence, namely sustainment and support, adversarial and non-kinetic, and adversarial and kinetic.
This item appears in lists: Electronic Books | الكتب الإلكترونية
Holdings
Item type: Digital resource
Current library: UAE Federation Library
Shelving location: Online Copy
URL: Link to resource
Status: Not for loan
Total holds: 0

Includes bibliographical references and index.

Description based on print version record.

Electronic reproduction. Ann Arbor, MI : ProQuest, 2018. Available via World Wide Web. Access may be limited to ProQuest affiliated libraries.
