Deep Learning Techniques for Music Generation - Companion Mini Web Site

This is the Companion Mini Web Site of the book Deep Learning Techniques for Music Generation.
It provides associated resources, such as a course, as well as related documentation (blogs, projects, implementations) by various authors.

Additions to the Book (New)

Since the book was published, new architectures have been proposed, notably the Transformer (popularized and now widely used through Large Language Models (LLMs) such as ChatGPT), as well as diffusion models. The Transformer appeared as the book was being completed, so it was only briefly introduced. It can be seen as an evolution of an existing composite architecture, the RNN Encoder-Decoder, with important additional features, first of all a self-attention mechanism, together with an extensive use of embeddings. Diffusion models may likewise be considered an evolution of an existing composite architecture, namely stacked autoencoders, with an additional feature: denoising (as in denoising autoencoders). We hope (and believe!) that the foundations and analysis presented in the book remain valid. We introduce these two architectures and their associated features as additions on this page.
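As a small illustration (not taken from the book), here is a minimal NumPy sketch of single-head scaled dot-product self-attention, the mechanism mentioned above as the Transformer's key addition. The function name, matrix names, and dimensions are arbitrary choices for the example, not an actual implementation used by the book or its examples.

```python
import numpy as np

def self_attention(X, W_q, W_k, W_v):
    """Single-head self-attention over a sequence X of shape (seq_len, d_model)."""
    Q = X @ W_q                               # queries
    K = X @ W_k                               # keys
    V = X @ W_v                               # values
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)           # pairwise similarity between positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ V                        # each position is a weighted mix of all positions

# Toy usage: 4 time steps (e.g. note embeddings) with model dimension 8
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
W_q, W_k, W_v = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, W_q, W_k, W_v).shape)  # -> (4, 8)
```

In a full Transformer, several such heads are computed in parallel and combined, and the same attention pattern (with queries coming from the decoder) replaces the recurrent connections of the RNN Encoder-Decoder.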

Associated Resources

Recent Additional Content

Courses

Papers

Related Resources


Jean-Pierre Briot, 04/11/2024.