
Detailed book information


Foundations of Deep Learning

ISBN 9789811682322

Author: Fengxiang He

Publisher: Springer

Availability: 3-6 weeks

Price: 753,90 zł

Please contact us by email to confirm the price before placing an order.


ISBN13: 9789811682322
Author: Fengxiang He
Binding: Hardback
Publication date: 2025-02-02
Pages: 292

Deep learning has significantly reshaped a variety of technologies, such as image processing, natural language processing, and audio processing. The excellent generalizability of deep learning is like a "cloud" to conventional complexity-based learning theory: the over-parameterization of deep learning makes almost all existing tools vacuous. This gap considerably undermines confidence in deploying deep learning in security-critical areas, including autonomous vehicles and medical diagnosis, where small algorithmic mistakes can lead to fatal disasters. This book seeks to explain this excellent generalizability, covering generalization analysis via size-independent complexity measures, the role of optimization in understanding generalizability, and the relationship between generalizability and ethical/security issues.
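For orientation, a typical complexity-based guarantee from statistical learning theory (a standard textbook result, not a formula quoted from this book) states that for a loss bounded in [0, 1] and a hypothesis class \mathcal{H}, with probability at least 1 - \delta over an i.i.d. sample of size n, every h \in \mathcal{H} satisfies

R(h) \;\le\; \hat{R}_n(h) + 2\,\mathfrak{R}_n(\mathcal{H}) + \sqrt{\frac{\log(1/\delta)}{2n}},

where \hat{R}_n is the empirical risk and \mathfrak{R}_n(\mathcal{H}) is the Rademacher complexity. Since \mathfrak{R}_n(\mathcal{H}) typically grows with the number of parameters, such bounds become vacuous for heavily over-parameterized networks, which is the "cloud" described above.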

Efforts to understand this excellent generalizability follow two major paths: (1) developing size-independent complexity measures, which evaluate the "effective" complexity of the hypotheses that can actually be learned, rather than that of the whole hypothesis space; and (2) modelling the hypotheses learned by stochastic gradient methods, the dominant optimizers in deep learning, via stochastic differential equations and the geometry of the associated loss functions. Related works discover that over-parameterization surprisingly brings many good properties to the loss functions. Rising concerns about deep learning focus on ethical and security issues, including privacy preservation and adversarial robustness. Related works also reveal an interplay between these issues and generalizability: good generalizability usually implies a good privacy-preserving ability, while more robust algorithms might generalize worse.
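For orientation, the second path is often formalized in the literature (not necessarily in this book's exact notation) by approximating the discrete SGD update \theta_{k+1} = \theta_k - \eta\, \nabla \hat{L}_{B_k}(\theta_k) with a stochastic differential equation,

d\theta_t = -\nabla L(\theta_t)\, dt + \sqrt{\eta}\, \Sigma(\theta_t)^{1/2}\, dW_t,

where \eta is the learning rate, \Sigma(\theta) is the covariance of the minibatch gradient noise, and W_t is a standard Wiener process; the geometry of the loss surface then enters through \nabla L and \Sigma.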

We expect readers to gain a big picture of the current knowledge in deep learning theory, understand how deep learning theory can guide the design of new algorithms, and identify future research directions. Readers need a background in calculus, linear algebra, probability, statistics, and statistical learning theory.

1. Front Matter

2. Statistical Learning Theory

3. Hypothesis Complexity

4. Algorithmic Stability

5. Capacity and Complexity

6. Stochastic Gradient Descent as an Implicit Regularization

7. The Geometry of the Loss Surfaces

8. Linear Partition in the Input Space

9. Reflecting on the Role of Overparameterization: Is it Solely Harmful?

10. Theoretical Foundations for Specific Architectures

11. Privacy Preservation

12. Algorithmic Fairness

13. Adversarial Robustness
