Yan Meng and Péter Mihajlik 

Adapters in Cross-Language Transfer Learning For Low-Resource Automatic Speech Recognition

In recent years, adapter modules have proved successful in large language models at reducing compute and memory costs during fine-tuning. In this paper, we apply adapters to automatic speech recognition. Specifically, we add adapters to different pre-trained speech recognition models and evaluate their efficiency in cross-language transfer learning. The evaluation covers GPU memory consumption, training duration, and recognition accuracy. By comparing the effect of adapters across models, we further explore the impact of whether the foundational model was (pre-)trained on the target language.
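For readers unfamiliar with the technique, the sketch below shows a typical bottleneck adapter of the kind the abstract refers to: a small down-projection/up-projection pair with a residual connection, inserted into a frozen pre-trained model so that only the adapter parameters are updated. It is a minimal, generic illustration in PyTorch; the class and parameter names (Adapter, bottleneck_dim) are ours and do not reflect the authors' exact implementation.

    import torch
    import torch.nn as nn

    class Adapter(nn.Module):
        """Bottleneck adapter: down-project, nonlinearity, up-project, residual add.

        Only these few parameters are trained; the pre-trained model stays
        frozen, which is what reduces GPU memory use and training time.
        """

        def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
            super().__init__()
            self.down = nn.Linear(hidden_dim, bottleneck_dim)
            self.act = nn.ReLU()
            self.up = nn.Linear(bottleneck_dim, hidden_dim)
            # Zero-initialize the up-projection so the adapter starts out
            # as an identity mapping and fine-tuning begins from the
            # pre-trained model's behavior.
            nn.init.zeros_(self.up.weight)
            nn.init.zeros_(self.up.bias)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return x + self.up(self.act(self.down(x)))

In a typical setup, one such adapter is inserted after each transformer layer of the pre-trained model, the base model's parameters are frozen (requires_grad = False), and the optimizer is given only the adapter parameters.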

DOI: 10.36244/ICJ.2024.4.1


Please cite this paper as follows:

Yan Meng and Péter Mihajlik, "Adapters in Cross-Language Transfer Learning For Low-Resource Automatic Speech Recognition", Infocommunications Journal, Vol. XVI, No 4, December 2024, pp. 2-9, https://doi.org/10.36244/ICJ.2024.4.1