[hts-users:01494] HTS-2.1 was released
Dear all,
We are pleased to announce the release of our software
The HMM-based Speech Synthesis System (HTS)
version 2.1, released June 27, 2008,
for HMM-based speech synthesis. We apologize if you receive
multiple copies. Please check
http://hts.sp.nitech.ac.jp/
A brief explanation of this software is given below.
We would appreciate it if you would distribute this email to
anyone who would be interested in this software.
================================================================
The HMM-based Speech Synthesis System (HTS) has been
developed by the HTS working group and others (see Who we are
and Acknowledgments). The training part of HTS has been
implemented as a modified version of HTK and released in the
form of a patch to HTK. The patch code is released under a free
software license. However, it should be noted that once you
apply the patch to HTK, you must obey the license of HTK.
Related publications about the techniques and algorithms used in
HTS can be found on the HTS website.
HTS version 2.1 includes hidden semi-Markov model (HSMM)
training/adaptation/synthesis, speech parameter generation
algorithm considering global variance (GV), SMAPLR/CSMAPLR
adaptation, and other minor new features. Many bugs in HTS
version 2.0.1 were also fixed. The API for the runtime
synthesis module, hts_engine API version 1.0, was also
released. Because hts_engine can run without the HTK library,
users can develop their own open or proprietary software
based on hts_engine.
HTS and the hts_engine API do not include any text analyzer,
but the Festival Speech Synthesis System, the DFKI MARY
Text-to-Speech System, or other text analyzers can be used
with HTS. This
distribution includes demo scripts for training
speaker-dependent and speaker-adaptive systems using the CMU
ARCTIC database (English). Six HTS voices for Festival 1.96 are
also
released. They use the hts_engine module included in Festival.
Each of the HTS voices can be used without any other HTS tools.
For training Japanese voices, a demo script using the Nitech
database is also provided. Japanese voices trained by the demo
script can be used on GalateaTalk, which is a speech synthesis
module of an open-source toolkit for anthropomorphic spoken
dialogue agents developed in the Galatea project. An HTS voice for
Galatea trained by the demo script is also released.
================================================================
Best regards,
Heiga ZEN (Byung Ha CHUN)
--
------------------------------------------------
Heiga ZEN (in Japanese pronunciation)
Byung Ha CHUN (in Korean pronunciation)
Department of Computer Science and Engineering
Nagoya Institute of Technology
Gokiso-cho, Showa-ku, Nagoya 466-8555 Japan
http://www.sp.nitech.ac.jp/~zen
------------------------------------------------