A MUSIC GENERATION BY A COMBINING MODEL OF RESNET AND LSTM NETWORKS
Graduate School of Electrical and Information Engineering, Shonan Institute of Technology, 1-1-25 Tsujido Nishikaigan, Fujisawa 251-8511, Japan.
Abstract
In this paper, a combining model of Residual Neural Networks (ResNet) and Long Short-Term Memory (LSTM) networks is proposed to automatically generate music for the melody part by deep learning, with training data collected from Chopin's piano pieces. First, to generate music for the melody part of a piano piece, a training dataset used for deep learning is prepared. Secondly, experiments on music generation using an LSTM model and a combining model of LSTM and ResNet are presented. Thirdly, the results of music generation by each model are compared and discussed. In conclusion, the principal results are summarized.
Cite This Article as: [Kazuya Ozawa and Hideaki Okazaki (2022); A MUSIC GENERATION BY A COMBINING MODEL OF RESNET AND LSTM NETWORKS. Int. J. of Adv. Res. 10 (Mar). 937-943] (ISSN 2320-5407). www.journalijar.com
Corresponding Author
Graduate School of Electrical and Information Engineering, Shonan Institute of Technology, Japan
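
To make the architecture described in the abstract concrete, the sketch below shows one way a combining model of residual convolutional blocks and an LSTM could be assembled for note-sequence prediction. It is a minimal illustration in PyTorch, not the authors' exact configuration: the layer sizes, the two-block ResNet front end, and the pitch-vocabulary size of 128 are all assumptions made for the example.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """A simple 1-D residual block: two convolutions with a skip connection."""
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        padding = kernel_size // 2
        self.conv1 = nn.Conv1d(channels, channels, kernel_size, padding=padding)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size, padding=padding)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.conv1(x))
        out = self.conv2(out)
        return self.relu(out + x)  # skip connection around the two convolutions

class ResNetLSTM(nn.Module):
    """Residual convolutional front end followed by an LSTM and a per-step note classifier."""
    def __init__(self, num_pitches, channels=64, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(num_pitches, channels)
        self.resblocks = nn.Sequential(ResidualBlock(channels), ResidualBlock(channels))
        self.lstm = nn.LSTM(channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_pitches)

    def forward(self, notes):                    # notes: (batch, seq_len) pitch indices
        x = self.embed(notes).transpose(1, 2)    # (batch, channels, seq_len) for Conv1d
        x = self.resblocks(x).transpose(1, 2)    # back to (batch, seq_len, channels)
        out, _ = self.lstm(x)
        return self.head(out)                    # per-step logits over the pitch vocabulary

# Example: logits for the next note at each step of 8 sequences of length 32.
model = ResNetLSTM(num_pitches=128)
logits = model(torch.randint(0, 128, (8, 32)))   # shape (8, 32, 128)
```

In this reading, the residual blocks extract local melodic patterns from the note embeddings before the LSTM models longer-range temporal structure; training would proceed as ordinary next-note classification with a cross-entropy loss over the pitch vocabulary.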