" Deep Learning " is the only comprehensive book in the field of deep learning. Its full name is also called the Deep Learning AI Bible (Deep Learning) . It is edited by three world-renowned experts, Ian Goodfellow, Yoshua Bengio, and Aaron Courville. The book covers background knowledge of mathematics and related concepts. , including related content in linear algebra, probability theory, information theory, numerical optimization and machine learning. At the same time, it also introduces deep learning technologies used by practitioners in the industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling and practical methods, and investigates topics such as natural language processing, Applications in speech recognition, computer vision, online recommendation systems, bioinformatics, and video games. Finally, the deep learning book also provides some research directions, covering theoretical topics including linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, partition functions, approximate inference, and deep generative models, suitable for For use by college students or graduate students in related majors.
You can download and read the Chinese and English PDF versions of "Deep Learning" directly.
For this project's work, you can directly download Deep Learning_Principles and Code Implementation.pdf (the book will be continuously updated).
"Deep Learning" can be said to be an introductory guide to deep learning and artificial intelligence. Many algorithm enthusiasts, machine learning training courses, and interviews with Internet companies refer to this book. However, this book is obscure, and the official code implementation is not provided, so some parts are difficult to understand. This project re-describes the concepts in the book based on mathematical derivation and generation principles , and uses Python (mainly the numpy library) to reproduce the book content ( source-level code implementation. The derivation process and code implementation are placed in the pdf file in the download area , the important part of the implementation code is also placed in the code folder ).
My level is limited, but I sincerely hope this work can help more people learn deep learning algorithms, and I need everyone's advice and help. If you find errors or unclear explanations while reading, please summarize your suggestions and submit them in Issues. If you would like to join this work or have other questions, you can contact me by email. If you use this book in your work or blog, please include a citation link.
During the writing process, I referred to many excellent online works; all reference resources are listed in the reference.txt file.
The goal of this project is to write the book Deep Learning_Principles and Code Implementation.pdf. As you can see in the PDF, every concept involved in "Deep Learning" is given a detailed description, a derivation at the level of principles, and a code implementation. The code does not call any deep learning framework such as TensorFlow, PyTorch, or MXNet, nor even sklearn (the parts of the PDF that use sklearn only verify that the code is correct). All code is implemented from first principles using Python's basic numerical library, NumPy, with detailed comments that match the principle description above each code block, so you can understand it by reading the principles and the code together.
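To give a flavor of this NumPy-only style, here is a minimal, hypothetical sketch (not taken from the book's own code; the function name and data are invented for illustration) of gradient descent for least-squares linear regression:

```python
import numpy as np

def mse_gradient_step(X, y, w, lr=0.1):
    """One gradient-descent step for least-squares linear regression.

    The gradient of (1/n) * ||X @ w - y||^2 with respect to w
    is (2/n) * X.T @ (X @ w - y).
    """
    n = X.shape[0]
    grad = (2.0 / n) * X.T @ (X @ w - y)
    return w - lr * grad

# Tiny illustration: recover the slope of y = 2x.
X = np.array([[1.0], [2.0], [3.0]])
y = np.array([2.0, 4.0, 6.0])
w = np.zeros(1)
for _ in range(200):
    w = mse_gradient_step(X, y, w)
print(w)  # converges to approximately [2.]
```

The same pattern — state the formula in a comment, then implement it line by line with NumPy — is what the book's code sections follow.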
I undertook this work out of personal passion, but completing it requires a great deal of time and energy, and I often write until two or three in the morning. The derivations, code, and figures are all polished slowly, and I will ensure the quality of this work. This project will be updated continuously, and the chapters already uploaded will keep being supplemented with content. If there are concepts you would like to see described, or you find errors while reading, please let me know by email.
Thank you very much for your recognition and promotion. Please look forward to the next update.
My name is Zhu Mingchao, my email is: [email protected]
2020/3:
1. Revised the decision tree section of Chapter 5, adding the principles of ID3 and CART; the code implementation focuses on CART.
2. Chapter 7: added the derivation of the optimal solutions under L1 and L2 regularization (i.e., why L1 yields sparse solutions).
3. Chapter 7: added derivations and code implementations of ensemble learning methods, including Bagging (random forest) and Boosting (AdaBoost, GBDT, XGBoost).
4. Chapter 8: added derivations of Newton's method and the quasi-Newton methods (DFP, BFGS, L-BFGS).
5. Chapter 11: added derivations and code implementations of Bayesian linear regression, Gaussian process regression (GPR), and Bayesian optimization.
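The L1 sparsity result mentioned in item 2 above can be illustrated with the one-dimensional proximal (soft-thresholding) operator. This is an illustrative sketch, not the book's own derivation or code:

```python
import numpy as np

def soft_threshold(z, lam):
    """Closed-form minimizer of 0.5 * (w - z)**2 + lam * |w|, applied elementwise.

    Any coordinate with |z| <= lam is driven exactly to zero, which is
    the mechanism behind the sparsity of L1-regularized solutions.
    """
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

z = np.array([-2.0, -0.3, 0.1, 0.8])
print(soft_threshold(z, 0.5))  # small entries become exactly 0
print(z / (1.0 + 0.5))         # L2-style shrinkage: entries shrink but never reach 0
```

Comparing the two printed lines shows the qualitative difference the book derives: L1 truncates small coefficients to exactly zero, while L2 only scales them down.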
Each subsequent update will be recorded in the update.txt file.
In addition to the conceptual points in the "Deep Learning" book, this project adds supplementary knowledge to each chapter, such as the principles and code implementations of random forest, AdaBoost, GBDT, and XGBoost in the ensemble learning part of Chapter 7, and some current mainstream methods described in Chapter 12. The chapter-level table of contents and the PDF download links are in the table below; for the detailed table of contents within each PDF, please refer to contents.txt. You can download individual chapters via the PDF links below, or download all files directly from the releases page.
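As a taste of the ensemble-learning supplement mentioned above, here is a deliberately tiny, hypothetical bagging sketch: bootstrap resampling plus averaging, with a trivial 1-nearest-neighbor rule standing in for the decision trees of a real random forest. It is illustrative only, not the project's actual code:

```python
import numpy as np

rng = np.random.default_rng(0)

def bagged_predict(X_train, y_train, x, n_estimators=50):
    """Bagging: train each base learner on a bootstrap resample of the
    training set, then aggregate the predictions by averaging."""
    n = len(X_train)
    preds = []
    for _ in range(n_estimators):
        idx = rng.integers(0, n, size=n)      # bootstrap: sample n points with replacement
        Xb, yb = X_train[idx], y_train[idx]
        nearest = np.argmin(np.abs(Xb - x))   # 1-NN base learner on the resample
        preds.append(yb[nearest])
    return float(np.mean(preds))              # aggregate by averaging

X = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.1, 0.9, 2.1, 2.9])
print(bagged_predict(X, y, 1.4))  # averaged prediction from 50 bootstrap learners
```

Averaging over bootstrap resamples reduces the variance of an unstable base learner; Chapter 7 of the PDF derives this effect and replaces the toy learner here with real trees.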
Chinese chapter | English chapter | Download (including derivation and code implementation) |
---|---|---|
Chapter 1 Preface | 1 Introduction | |
Chapter 2 Linear Algebra | 2 Linear Algebra | |
Chapter 3 Probability and Information Theory | 3 Probability and Information Theory | |
Chapter 4 Numerical Calculation | 4 Numerical Computation | |
Chapter 5 Basics of Machine Learning | 5 Machine Learning Basics | |
Chapter 6 Deep Feedforward Network | 6 Deep Feedforward Networks | |
Chapter 7 Regularization in Deep Learning | 7 Regularization for Deep Learning | |
Chapter 8 Optimization in Deep Models | 8 Optimization for Training Deep Models | |
Chapter 9 Convolutional Network | 9 Convolutional Networks | |
Chapter 10 Sequence Modeling: Recurrent and Recursive Networks | 10 Sequence Modeling: Recurrent and Recursive Nets | |
Chapter 11 Practical Methodology | 11 Practical Methodology | |
Chapter 12 Application | 12 Applications | |
Chapter 13 Linear Factor Model | 13 Linear Factor Models | |
Chapter 14 Autoencoders | 14 Autoencoders | |
Chapter 15 Representation Learning | 15 Representation Learning | |
Chapter 16 Structured Probabilistic Model in Deep Learning | 16 Structured Probabilistic Models for Deep Learning | |
Chapter 17 Monte Carlo Method | 17 Monte Carlo Methods | |
Chapter 18 Confronting the Partition Function | 18 Confronting the Partition Function | |
Chapter 19 Approximate Inference | 19 Approximate Inference | |
Chapter 20 Deep Generative Models | 20 Deep Generative Models |
Chapters that have not yet been uploaded will be uploaded in the future.
Thanks for the recognition and promotion of this project.
Writing this project took time and effort. If this project is helpful to you, you can treat the author to an ice cream: