A Short Bio

Yashish M. Siriwardena received the honors degree in biomedical engineering from the University of Moratuwa, Sri Lanka, in 2017. He is currently a Ph.D. candidate in electrical and computer engineering at the University of Maryland, College Park, MD, USA, working with Carol Espy-Wilson and Shihab Shamma.

Yashish’s research is primarily on speech communication. He combines knowledge of digital signal processing, speech science, linguistics, acoustic phonetics, and machine learning to conduct interdisciplinary research in speech production, speech synthesis, and speech inversion. He has also worked on using speech as a behavioral signal for emotion recognition and for the detection and monitoring of mental health conditions.

CV

News

  1. (Oct 2023) I successfully defended my Ph.D. dissertation in Electrical and Computer Engineering at the University of Maryland, College Park. I am currently open to research positions in industry.
  2. (June 2023) I joined the Multimodal Science group at Dolby Laboratories as a Summer Research Intern. Excited to work on a research problem related to speech synthesis and voice conversion.
  3. (May 2023) Three first-author papers accepted for publication at Interspeech 2023:
    • Learning to Compute the Articulatory Representations of Speech with the MirrorNet
    • Speaker-independent Speech Inversion for Estimation of Nasalance
    • Acoustic-to-Articulatory Speech Inversion Features for Mispronunciation Detection of /r/ in Child Speech Sound Disorders (equal contribution with Nina R. Benway)
  4. (May 2023) Our paper “Audio Data Augmentation for Acoustic-to-Articulatory Speech Inversion” has been accepted for publication at the 31st European Signal Processing Conference (EUSIPCO) 2023.
  5. (April 2023) I have been selected for the IEEE ICASSP Rising Star Programme to present my thesis work at ICASSP 2023.
  6. (Feb 2023) Our paper “The Secret Source: Incorporating Source Features to Improve Acoustic-to-Articulatory Speech Inversion” has been accepted for publication at ICASSP 2023.
  7. (Sep 2022) Attended Interspeech 2022 in Incheon, South Korea, to present our paper “Acoustic-to-Articulatory Speech Inversion with Multi-task Learning.”
  8. (May 2022) Our paper “Acoustic-to-Articulatory Speech Inversion with Multi-task Learning” has been accepted for publication at Interspeech 2022.
  9. (Jan 2022) Our paper “The MirrorNet: Learning Audio Synthesizer Controls Inspired by Sensorimotor Interactions” has been accepted for publication at ICASSP 2022.
  10. (Dec 2021) Attended the 181st meeting of the Acoustical Society of America in Seattle, WA, to present our work “Emotion Recognition with Articulatory Coordination Features.”
  11. (July 2021) Our paper “Multimodal Approach for Assessing Neuromotor Coordination in Schizophrenia Using Convolutional Neural Networks” has been accepted for publication at ACM ICMI 2021.
  12. (Dec 2020) Presented our work “Inverted Vocal Tract Variables and Facial Action Units to Quantify Neuromotor Coordination in Schizophrenia” at the 12th International Seminar on Speech Production (ISSP 2020).