Applied deep learning : a case-based approach to understanding deep neural networks / by Umberto Michelucci.

By: Michelucci, Umberto
Material type: Text
Language: English
Publisher: Berkeley, CA : Apress : Imprint: Apress, 2018
Edition: 1st ed. 2018
Description: xxi, 410 pages : illustrations ; 27 cm
Content type:
  • text
Media type:
  • unmediated
Carrier type:
  • volume
ISBN:
  • 9781484237892
  • 9781484237908
Subject(s):
LOC classification:
  • Q325.5 .M5344 2018
Contents:
Chapter 1: Introduction -- Chapter 2: Single neurons -- Chapter 3: Fully connected neural network with more neurons -- Chapter 4: Neural network error analysis -- Chapter 5: Dropout technique -- Chapter 6: Hyperparameter tuning -- Chapter 7: TensorFlow and optimizers (gradient descent, Adam, momentum, et cetera) -- Chapter 8: Convolutional networks and image recognition -- Chapter 9: Recurrent neural networks -- Chapter 10: A practical complete example from scratch (putting everything together) -- Chapter 11: Logistic regression implemented from scratch in Python without libraries.
Summary: Work with advanced topics in deep learning, such as optimization algorithms, hyper-parameter tuning, dropout, and error analysis, as well as strategies to address typical problems encountered when training deep neural networks. You'll begin by studying activation functions, mostly with a single neuron (ReLU, sigmoid, and Swish), seeing how to perform linear and logistic regression using TensorFlow, and choosing the right cost function. The next section covers more complicated neural network architectures with several layers and neurons and explores the problem of random initialization of weights. An entire chapter is dedicated to a complete overview of neural network error analysis, giving examples of solving problems originating from variance, bias, overfitting, and datasets coming from different distributions. Applied Deep Learning also discusses how to implement logistic regression completely from scratch without using any Python library except NumPy (see the sketch after this summary), to let you appreciate how libraries such as TensorFlow allow quick and efficient experiments. Case studies for each method are included to put the theory into practice. You'll discover tips and tricks for writing optimized Python code (for example, vectorizing loops with NumPy). You will:
  • Implement advanced techniques in the right way in Python and TensorFlow
  • Debug and optimize advanced methods (such as dropout and regularization)
  • Carry out error analysis (to realize whether you have a bias problem, a variance problem, a data offset problem, and so on)
  • Set up a machine learning project focused on deep learning on a complex dataset
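
As a concrete illustration of the summary's "from scratch with only NumPy" idea, here is a minimal sketch of a single sigmoid neuron trained as a binary logistic-regression classifier with vectorized gradient descent. This is not code from the book: the function names, the toy two-blob dataset, and the hyper-parameter values are illustrative assumptions, and ReLU and Swish are included only as activation-function definitions alongside the sigmoid.

    import numpy as np

    # Activation functions named in the summary.
    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def relu(z):
        return np.maximum(0.0, z)

    def swish(z):
        return z * sigmoid(z)

    def train_logistic_regression(X, y, lr=0.1, epochs=2000):
        # A single sigmoid neuron trained with plain gradient descent on the
        # cross-entropy cost; every step is vectorized, with no Python loop over samples.
        n_samples, n_features = X.shape
        w = np.zeros(n_features)
        b = 0.0
        for _ in range(epochs):
            y_hat = sigmoid(X @ w + b)            # forward pass: one matrix product
            error = y_hat - y                     # per-sample gradient of the cost w.r.t. the pre-activation
            w -= lr * (X.T @ error) / n_samples   # weight update
            b -= lr * error.mean()                # bias update
        return w, b

    # Illustrative toy data: two Gaussian blobs, one per class.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-2.0, 1.0, (50, 2)), rng.normal(2.0, 1.0, (50, 2))])
    y = np.array([0] * 50 + [1] * 50)

    w, b = train_logistic_regression(X, y)
    predictions = (sigmoid(X @ w + b) > 0.5).astype(int)
    print("training accuracy:", (predictions == y).mean())

The same vectorized pattern, replacing per-sample Python loops with NumPy matrix operations, is what the summary refers to as writing optimized Python code.
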
Holdings
Item type | Current library | Call number | Copy number | Status | Date due | Barcode
Book | UAE Federation Library, General Collection | Q325.5 .M5344 2018 | C.1 | Library Use Only | | 30030000001745

