Explainable AI : interpreting, explaining and visualizing deep learning / Wojciech Samek, Grégoire Montavon, Andrea Vedaldi, Lars Kai Hansen, Klaus-Robert Müller (eds.)

Contributor(s): Wojciech Samek, Grégoire Montavon, Andrea Vedaldi, Lars Kai Hansen, Klaus-Robert Müller (eds.)
Material type: Text
Language: English
Series: Lecture notes in computer science ; 11700 | Lecture notes in computer science
Publisher: Cham, Switzerland : Springer, 2019
Description: xi, 438 pages : illustrations ; 24 cm
Content type:
  • text
  • still image
Media type:
  • unmediated
Carrier type:
  • volume
ISBN:
  • 9783030289539
Other title:
  • Explainable artificial intelligence
Additional physical formats: Print version: Explainable AI.
LOC classification:
  • Q335 .E975 2019
Contents:
  • Towards Explainable Artificial Intelligence / Wojciech Samek, Klaus-Robert Müller
  • Transparency: Motivations and Challenges / Adrian Weller
  • Interpretability in Intelligent Systems: A New Concept? / Lars Kai Hansen, Laura Rieger
  • Understanding Neural Networks via Feature Visualization: A Survey / Anh Nguyen, Jason Yosinski, Jeff Clune
  • Interpretable Text-to-Image Synthesis with Hierarchical Semantic Layout Generation / Seunghoon Hong, Dingdong Yang, Jongwook Choi, Honglak Lee
  • Unsupervised Discrete Representation Learning / Weihua Hu, Takeru Miyato, Seiya Tokui, Eiichi Matsumoto, Masashi Sugiyama
  • Towards Reverse-Engineering Black-Box Neural Networks / Seong Joon Oh, Bernt Schiele, Mario Fritz
  • Explanations for Attributing Deep Neural Network Predictions / Ruth Fong, Andrea Vedaldi
  • Gradient-Based Attribution Methods / Marco Ancona, Enea Ceolini, Cengiz Öztireli, Markus Gross
  • Layer-Wise Relevance Propagation: An Overview / Grégoire Montavon, Alexander Binder, Sebastian Lapuschkin, Wojciech Samek, Klaus-Robert Müller
  • Explaining and Interpreting LSTMs / Leila Arras, José Arjona-Medina, Michael Widrich, Grégoire Montavon, Michael Gillhofer, Klaus-Robert Müller, Sepp Hochreiter, Wojciech Samek
  • Comparing the Interpretability of Deep Networks via Network Dissection / Bolei Zhou, David Bau, Aude Oliva, Antonio Torralba
  • Gradient-Based vs. Propagation-Based Explanations: An Axiomatic Comparison / Grégoire Montavon
  • The (Un)reliability of Saliency Methods / Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, Dumitru Erhan, Been Kim
  • Visual Scene Understanding for Autonomous Driving Using Semantic Segmentation / Markus Hofmarcher, Thomas Unterthiner, José Arjona-Medina, Günter Klambauer, Sepp Hochreiter, Bernhard Nessler
  • Understanding Patch-Based Learning of Video Data by Explaining Predictions / Christopher J. Anders, Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller
  • Quantum-Chemical Insights from Interpretable Atomistic Neural Networks / Kristof T. Schütt, Michael Gastegger, Alexandre Tkatchenko, Klaus-Robert Müller
  • Interpretable Deep Learning in Drug Discovery / Kristina Preuer, Günter Klambauer, Friedrich Rippmann, Sepp Hochreiter, Thomas Unterthiner
  • Neural Hydrology: Interpreting LSTMs in Hydrology / Frederik Kratzert, Mathew Herrnegger, Daniel Klotz, Sepp Hochreiter, Günter Klambauer
  • Feature Fallacy: Complications with Interpreting Linear Decoding Weights in fMRI / Pamela K. Douglas, Ariana Anderson
  • Current Advances in Neural Decoding / Marcel A.J. van Gerven, Katja Seeliger, Umut Güçlü, Yağmur Güçlütürk
  • Software and Application Patterns for Explanation Methods / Maximilian Alber
Summary: The development of "intelligent" systems that can take decisions and perform autonomously might lead to faster and more consistent decisions. A limiting factor for a broader adoption of AI technology is the inherent risk that comes with giving up human control and oversight to "intelligent" machines. For sensitive tasks involving critical infrastructures and affecting human well-being or health, it is crucial to limit the possibility of improper, non-robust and unsafe decisions and actions. Before deploying an AI system, we see a strong need to validate its behavior, and thus establish guarantees that it will continue to perform as expected when deployed in a real-world environment. In pursuit of that objective, ways for humans to verify the agreement between the AI decision structure and their own ground-truth knowledge have been explored. Explainable AI (XAI) has developed as a subfield of AI, focused on exposing complex AI models to humans in a systematic and interpretable manner. The 22 chapters included in this book provide a timely snapshot of algorithms, theory, and applications of interpretable and explainable AI techniques that have been proposed recently, reflecting the current discourse in this field and providing directions for future development. The book is organized into six parts: towards AI transparency; methods for interpreting AI systems; explaining the decisions of AI systems; evaluating interpretability and explanations; applications of explainable AI; and software for explainable AI. -- Provided by publisher
Holdings:
Item type | Current library | Call number | Copy number | Status | Barcode
Book | UAE Federation Library, General Collection | Q335 .E975 2019 | C.1 | Library Use Only | 30020000208120
Book | UAE Federation Library, General Collection | Q335 .E975 2019 | C.2 | Available | 30020000208119

Notes:
  • Includes bibliographical references and indexes
