Strengthening Deep Neural Networks : making AI less susceptible to adversarial trickery / Warr, Katy

By: Warr, Katy
Material type: Text
Language: English
Publisher: Sebastopol : O'Reilly Media, Incorporated, 2019
Copyright date: ©2019
Edition: First edition
Description: xiii, 227 pages : illustrations ; 24 cm
Content type:
  • text
Media type:
  • unmediated
Carrier type:
  • volume
ISBN:
  • 9781492044956
  • 9781492044925
  • 149204492X
  • 9781492044901
  • 1492044903
LOC classification:
  • QA76.87 .W37 2019
Contents:
Part 1. An introduction to fooling AI. Introduction -- Attack motivations -- Deep neural network (DNN) fundamentals -- DNN processing for image, audio, and video -- Part 2. Generating adversarial input. The principles of adversarial input -- Methods for generating adversarial perturbation -- Part 3. Understanding the real-world threat. Attack patterns for real-world systems -- Physical-world attacks -- Part 4. Defense. Evaluating model robustness to adversarial inputs -- Defending against adversarial inputs -- Future trends : toward robust AI -- Mathematics terminology reference
Summary: As Deep Neural Networks (DNNs) become increasingly common in real-world applications, the potential to "fool" them presents a new attack vector. In this book, author Katy Warr examines the security implications of how DNNs interpret audio and images very differently to humans. You'll learn about the motivations attackers have for exploiting flaws in DNN algorithms and how to assess the threat to systems incorporating neural network technology. Through practical code examples, this book shows you how DNNs can be fooled and demonstrates the ways they can be hardened against trickery.
  • Learn the basic principles of how DNNs "think" and why this differs from our human understanding of the world
  • Understand adversarial motivations for fooling DNNs and the threat posed to real-world systems
  • Explore approaches for making software systems that incorporate DNNs less susceptible to trickery
  • Peer into the future of Artificial Neural Networks to learn how these algorithms may evolve to become more robust
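As a purely illustrative sketch (not taken from the book), the kind of adversarial perturbation covered in Part 2 can be expressed in a few lines of TensorFlow using the Fast Gradient Sign Method; the pretrained model and the epsilon value here are assumptions for demonstration only:

    import tensorflow as tf

    # Pretrained classifier used only for demonstration; any Keras model works.
    model = tf.keras.applications.MobileNetV2(weights="imagenet")
    loss_fn = tf.keras.losses.CategoricalCrossentropy()

    def fgsm_example(image, label, epsilon=0.01):
        # image: float tensor of shape (1, 224, 224, 3), scaled to [-1, 1];
        # label: one-hot tensor of shape (1, 1000) for the true class.
        with tf.GradientTape() as tape:
            tape.watch(image)
            loss = loss_fn(label, model(image))
        # Step each pixel slightly in the direction that increases the
        # classifier's loss: x' = x + epsilon * sign(grad_x loss).
        gradient = tape.gradient(loss, image)
        adversarial = image + epsilon * tf.sign(gradient)
        # Keep the perturbed image within the valid input range.
        return tf.clip_by_value(adversarial, -1.0, 1.0)

The perturbation is small enough to be imperceptible to a human viewer yet can flip the model's prediction, which is the core threat the book examines.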
Holdings
Item type | Current library | Call number | Copy number | Status | Date due | Barcode
Book | UAE Federation Library, General Collection | QA76.87 .W37 2019 | C.1 | Library Use Only | | 30020000207566
Book | UAE Federation Library, General Collection | QA76.87 .W37 2019 | C.2 | Available | | 30020000207565

Includes bibliographical references and index

