[hts-users:02065] Re: Audio-Visual speech synthesis
From: Simon King <Simon.King@xxxxxxxx>
Subject: [hts-users:02064] Re: Audio-Visual speech synthesis
Date: Fri, 3 Jul 2009 10:23:50 +0100
Message-ID: <69BF0721-34D4-444D-BF16-79D6A22B307E@xxxxxxxx>
> On 3 Jul 2009, at 09:05, Girish Malkarnenkar wrote:
>
> > Dear Sir/Madam,
> >
> > I am trying to synthesise visual speech via HTS by replacing the MGC
> > files with facial parameters. I would appreciate it if someone who
> > has done something similar could share their experience.
>
>
> Gregor Hofer, Junichi Yamagishi, and Hiroshi Shimodaira. Speech-driven
> lip motion generation with a trajectory HMM. In Proc. Interspeech
> 2008, pages 2314-2317, Brisbane, Australia, September 2008.
>
> Gregor Hofer, Hiroshi Shimodaira, and Junichi Yamagishi. Speech-driven
> head motion synthesis based on a trajectory model. Poster at SIGGRAPH
> 2007, 2007.
>
> Gregor Hofer, Hiroshi Shimodaira, and Junichi Yamagishi. Lip motion
> synthesis using a context dependent trajectory hidden Markov model.
> Poster at SCA 2007, 2007.
>
> all available from http://www.cstr.ed.ac.uk/publications/users/s0343879.html
This would also be useful:
Shinji Sako, Keiichi Tokuda, Takashi Masuko, Takao Kobayashi,
Tadashi Kitamura, "HMM-based text-to-audio-visual speech
synthesis," International Conference on Spoken Language
Processing (ICSLP2000/INTERSPEECH2000), vol.III, pp.25-28,
Beijing, China, Oct. 16-20, 2000.
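The file swap described in the original question comes down to writing the facial parameter trajectories in the same binary layout the HTS demo scripts use for .mgc files: a headerless stream of little-endian 32-bit floats, one fixed-width vector per frame. A minimal sketch, assuming frame-aligned facial parameters (the function names here are illustrative, not part of HTS):

```python
import struct

def write_hts_param_file(frames, path):
    """Write per-frame float vectors as a headerless little-endian
    float32 stream, the layout HTS demo scripts typically expect
    for .mgc files. Assumes all vectors have the same width and
    the frames are time-aligned with the audio features."""
    with open(path, "wb") as f:
        for vec in frames:
            f.write(struct.pack("<%df" % len(vec), *vec))

def read_hts_param_file(path, dim):
    """Read such a file back into a list of vectors of width dim."""
    with open(path, "rb") as f:
        data = f.read()
    nframes = len(data) // (4 * dim)
    return [list(struct.unpack("<%df" % dim,
                               data[4 * dim * i:4 * dim * (i + 1)]))
            for i in range(nframes)]
```

The dimensionality and frame shift of the replacement stream must of course match what the training configuration (and any delta/delta-delta windows) declares for that stream.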
Keiichi
- References
- [hts-users:02063] Audio-Visual speech synthesis, Girish Malkarnenkar
- [hts-users:02064] Re: Audio-Visual speech synthesis, Simon King