
[hts-users:02069] Re: Audio-Visual speech synthesis

Dear all,
Thank you very much for all these links. The papers were very useful.
Yours sincerely
Girish Malkarnenkar

On Fri, Jul 3, 2009 at 12:26 PM, Junichi Yamagishi <jyamagis@xxxxxxxxxxxx> wrote:

Let me pick out some older ones.

Brand, M.E., "Voice Puppetry", ACM SIGGRAPH, ISBN: 0-201-48560-5, pp. 21-28, August 1999

Although it is not an audio-visual one, this paper
(trajectory HMM plus annealing!) is also nice.

Junichi Yamagishi

On 3 Jul 2009, at 11:14, Heiga ZEN (Byung Ha CHUN) wrote:


Girish Malkarnenkar wrote:

I am trying to synthesise visual speech via HTS by replacing the MGC files with files of facial parameters. I would appreciate it if someone who has done something similar could share their experience.
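[Editor's note: since HTS, like HTK, reads its training features from HTK-format parameter files, substituting facial parameters for MGC features amounts to writing them in the same binary layout. Below is a minimal, hedged sketch of such a writer; the file name, frame shift, and dimensionality are hypothetical examples, not values prescribed by HTS.]

```python
import struct

def write_htk_user_file(path, frames, frame_shift_s=0.005):
    """Write per-frame float vectors as an HTK USER parameter file.

    HTK header (big-endian): nSamples (int32), sampPeriod in 100 ns
    units (int32), sampSize in bytes (int16), parmKind (int16; 9 = USER).
    The data section is big-endian float32.
    """
    n_samples = len(frames)
    dim = len(frames[0])
    samp_period = int(frame_shift_s * 1e7)  # seconds -> 100 ns units
    samp_size = dim * 4                     # bytes per frame (float32)
    parm_kind = 9                           # USER parameter kind
    with open(path, "wb") as f:
        f.write(struct.pack(">iihh", n_samples, samp_period, samp_size, parm_kind))
        for vec in frames:
            f.write(struct.pack(">%df" % dim, *vec))

# Hypothetical example: three frames of 2-dimensional facial parameters
write_htk_user_file("face.mgc", [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]])
```

Once written this way, the files can be pointed to wherever the training scripts previously expected the MGC feature files, with the stream dimensionality in the configuration adjusted to match.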

Bailly, G., O. Govokhina, G. Breton, F. Elisei and C. Savariaux (2008). The trainable trajectory formation model TD-HMM parameterized for the LIPS 2008 challenge. Interspeech, Brisbane, Australia.




Heiga ZEN (Byung Ha CHUN)
Speech Technology Group
Cambridge Research Lab
Toshiba Research Europe
phone: +44 1223 436975



[hts-users:02063] Audio-Visual speech synthesis, Girish Malkarnenkar
[hts-users:02066] Re: Audio-Visual speech synthesis, Heiga ZEN (Byung Ha CHUN)
[hts-users:02067] Re: Audio-Visual speech synthesis, Junichi Yamagishi