
[hts-users:04408] Call for Papers - Special Issue on Biosignal-based Spoken Communication - IEEE/ACM Transactions on Audio, Speech, and Language Processing


(Apologies for cross-posting)
———————
Dear colleagues, 

We are happy to announce the following call-for-papers for a special issue on "Biosignal-based Spoken Communication" in the IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP). 

Best regards

Call for Papers 
Special Issue on Biosignal-based Spoken Communication 
in the IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP) 

Speech is a complex process that emits a wide range of biosignals, including, but not limited to, acoustics. These biosignals, stemming from the articulators, the articulator muscle activities, the neural pathways, or the brain itself, can be used to circumvent limitations of conventional speech processing in particular, and to gain insights into the process of speech production in general. Research on biosignal-based speech capturing and processing is a broad and very active field at the intersection of various disciplines, ranging from engineering, electronics, and machine learning to medicine, neuroscience, physiology, and psychology. Consequently, a variety of methods and approaches are being investigated, all aiming towards the common goal of creating biosignal-based speech processing devices and applications for everyday use, as well as for spoken communication research. We aim to bring together studies covering these various modalities, research approaches, and objectives in a special issue of the IEEE/ACM Transactions on Audio, Speech, and Language Processing entitled Biosignal-based Spoken Communication. 

For this purpose, we invite papers describing previously unpublished work in the following broad areas: 
  • Capturing methods for speech-related biosignals: tracing of articulatory activity (e.g. EMA, PMA, ultrasound, video), electrical biosignals (e.g. EMG, EEG, ECG, NIRS), acoustic sensors for capturing whispered / murmured speech (e.g. NAM microphone), etc. 
  • Signal processing for speech-related biosignals: feature extraction, denoising, source separation, etc. 
  • Speech recognition based on biosignals (e.g. silent speech interfaces, recognition in noisy environments, etc.) 
  • Mapping between speech-related biosignals and speech acoustics (e.g. articulatory-acoustic mapping) 
  • Modeling of speech units: articulatory or phonetic features, visemes, etc. 
  • Multi-modality and information fusion in speech recognition 
  • Challenges of dealing with whispered, mumbled, silently articulated, or inner speech 
  • Neural representations of speech and language 
  • Novel approaches in physiological studies of speech planning and production 
  • Brain-computer interfaces (BCIs) for restoring speech communication 
  • User studies in biosignal-based speech processing 
  • End-to-end systems and devices 
  • Applications in rehabilitation and therapy 

Submission Deadline: November 2016 
Notification of Acceptance: January 2017 
Final Manuscript Due: April 2017 
Tentative Publication Date: First half of 2017 

Editors: 
Tanja Schultz (Universität Bremen, Germany) tanja.schultz@xxxxxxxxxxxxx (Lead Guest Editor) 
Thomas Hueber (CNRS/GIPSA-lab, Grenoble, France) thomas.hueber@xxxxxxxxxxxx 
Dean J. Krusienski (ASPEN Lab, Old Dominion University) dkrusien@xxxxxxx 
Jonathan Brumberg (Speech-Language-Hearing Department, University of Kansas) brumberg@xxxxxx 



Thomas Hueber, PhD
CNRS research fellow
GIPSA-lab, Département Parole et Cognition
961 rue de la Houille Blanche - Domaine universitaire - BP 46
38402 Saint Martin d'Hères CEDEX FRANCE
Tel: +33 (0)4 76 57 49 40
Fax: +33 (0)4 76 57 47 10
E-mail: thomas.hueber@gipsa-lab.grenoble-inp.fr
Web: http://www.gipsa-lab.inpg.fr/~thomas.hueber