Strengthening Deep Neural Networks : making AI less susceptible to adversarial trickery / Warr, Katy
Material type: Text
Language: English
Publisher: Sebastopol : O'Reilly Media, Incorporated, 2019
Copyright date: ©2019
Edition: First edition
Description: xiii, 227 pages : illustrations ; 24 cm
Content type: text
Media type: unmediated
Carrier type: volume
ISBN:
- 9781492044956
- 9781492044925
- 149204492X
- 9781492044901
- 1492044903
LOC classification: QA76.87 .W37 2019
| Item type | Current library | Shelving location | Call number | Copy number | Status | Date due | Barcode |
|---|---|---|---|---|---|---|---|
| Book | UAE Federation Library | General Collection | QA76.87 .W37 2019 | C.1 | Library use only | | 30020000207566 |
| Book | UAE Federation Library | General Collection | QA76.87 .W37 2019 | C.2 | Available | | 30020000207565 |
Browsing UAE Federation Library shelves, shelving location: General Collection:

- QA76.87 .T73 2019 Grokking deep learning /
- QA76.87 .T87 2004 Stochastic models of neural networks /
- QA76.87 .W37 2019 Strengthening Deep Neural Networks : making AI less susceptible to adversarial trickery /
- QA76.88 .H4917 2008 High performance computing and grids in action /
Includes bibliographical references and index
Contents: Part 1. An introduction to fooling AI. Introduction -- Attack motivations -- Deep neural network (DNN) fundamentals -- DNN processing for image, audio, and video -- Part 2. Generating adversarial input. The principles of adversarial input -- Methods for generating adversarial perturbation -- Part 3. Understanding the real-world threat. Attack patterns for real-world systems -- Physical-world attacks -- Part 4. Defense. Evaluating model robustness to adversarial inputs -- Defending against adversarial inputs -- Future trends : toward robust AI -- Mathematics terminology reference
Summary: As Deep Neural Networks (DNNs) become increasingly common in real-world applications, the potential to "fool" them presents a new attack vector. In this book, author Katy Warr examines the security implications of the fact that DNNs interpret audio and images very differently from humans. You'll learn about the motivations attackers have for exploiting flaws in DNN algorithms and how to assess the threat to systems that incorporate neural network technology. Through practical code examples, the book shows how DNNs can be fooled and demonstrates the ways they can be hardened against trickery.

- Learn the basic principles of how DNNs "think" and why this differs from our human understanding of the world
- Understand adversarial motivations for fooling DNNs and the threat posed to real-world systems
- Explore approaches for making software systems that incorporate DNNs less susceptible to trickery
- Peer into the future of Artificial Neural Networks to learn how these algorithms may evolve to become more robust
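The contents above list "Methods for generating adversarial perturbation" (Part 2). As an illustrative aside not taken from the book, the sketch below shows the fast gradient sign method (FGSM), a standard perturbation technique in this area: the toy logistic-regression "model", its random weights, and the epsilon value are all assumptions chosen only for demonstration; a real attack would target a trained DNN.

```python
import numpy as np

# Minimal FGSM sketch against a toy logistic-regression classifier.
# Everything here is hypothetical: weights are random stand-ins for a
# trained model, and x is a stand-in for a flattened 28x28 image.

rng = np.random.default_rng(0)
w = rng.normal(size=784)          # hypothetical trained weights
b = 0.1                           # hypothetical bias
x = rng.uniform(0.0, 1.0, 784)    # stand-in input image in [0, 1]
y = 1.0                           # assumed true label of x

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    # P(class = 1 | x) for the toy model
    return sigmoid(w @ x + b)

# Gradient of the cross-entropy loss with respect to the *input*:
# for logistic regression, dL/dx = (p - y) * w.
p = predict(x)
grad_x = (p - y) * w

# FGSM step: nudge every pixel by epsilon in the direction that
# increases the loss, then clip back to the valid pixel range.
epsilon = 0.1
x_adv = np.clip(x + epsilon * np.sign(grad_x), 0.0, 1.0)

print(f"clean prediction:       {predict(x):.3f}")
print(f"adversarial prediction: {predict(x_adv):.3f}")
```

The key property FGSM illustrates is that the perturbation is bounded per pixel (at most epsilon), so the adversarial input can remain visually close to the original while shifting the model's prediction.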