**Description:**

There is a wealth of literature and books available to engineers starting to understand what machine learning is and how it can be used in their everyday work. This presents the problem of where the engineer should start. The answer is often “for a general, but slightly outdated introduction, read this book; for a detailed survey of methods based on probabilistic models, check this reference; to learn about statistical learning, this text is useful”, and so on. This monograph provides the starting point to the literature that every engineer new to machine learning needs. It offers a basic and compact reference that describes key ideas and principles in simple terms and within a unified treatment, encompassing recent developments and pointers to the literature for further study.

*A Brief Introduction to Machine Learning for Engineers* is the entry point to machine learning for students, practitioners, and researchers with an engineering background in probability and linear algebra.

Contents:

__Part I. Basics__

**Chapter 1: Introduction** • What is Machine Learning? • When to Use Machine Learning? • Goals and Outline

**Chapter 2: A Gentle Introduction through Linear Regression** • Supervised Learning • Inference • Frequentist Approach • Bayesian Approach • Minimum Description Length (MDL)\* • Information-Theoretic Metrics • Interpretation and Causality\* • Summary

**Chapter 3: Probabilistic Models for Learning** • Preliminaries • The Exponential Family • Frequentist Learning • Bayesian Learning • Supervised Learning via Generalized Linear Models (GLM) • Maximum Entropy Property\* • Energy-based Models\* • Some Advanced Topics\* • Summary


__Part II. Supervised Learning__

**Chapter 4: Classification** • Preliminaries: Stochastic Gradient Descent • Classification as a Supervised Learning Problem • Discriminative Deterministic Models • Discriminative Probabilistic Models: Generalized Linear Models • Discriminative Probabilistic Models: Beyond GLM • Generative Probabilistic Models • Boosting\* • Summary

**Chapter 5: Statistical Learning Theory\*** • A Formal Framework for Supervised Learning • PAC Learnability and Sample Complexity • PAC Learnability for Finite Hypothesis Classes • VC Dimension and Fundamental Theorem of PAC Learning • Summary


__Part III. Unsupervised Learning__

**Chapter 6: Unsupervised Learning** • *K*-Means Clustering • ML, ELBO and EM • Directed Generative Models • Undirected Generative Models • Discriminative Models • Autoencoders • Ranking\* • Summary


__Part IV. Advanced Modelling and Inference__

**Chapter 7: Probabilistic Graphical Models** • Introduction • Bayesian Networks • Markov Random Fields • Bayesian Inference in Probabilistic Graphical Models • Summary

**Chapter 8: Approximate Inference and Learning** • Monte Carlo Methods • Variational Inference • Monte Carlo-based Variational Inference\* • Approximate Learning\* • Summary


__Part V. Conclusions__

**Chapter 9: Concluding Remarks**


**Appendices** • **Appendix A: Information Measures** • Entropy • Conditional Entropy and Mutual Information • Divergence Measures • **Appendix B: KL Divergence and Exponential Family** • Acknowledgements • References

Target Audience:

This book is the entry point to machine learning for students, practitioners, and researchers with an engineering background in probability and linear algebra.