Abstract |
---|
This paper presents the main approaches used to synthesize talking faces, and provides greater detail on a handful of them. An attempt is made to distinguish between facial synthesis itself (i.e. the manner in which facial movements are rendered on a computer screen) and the way these movements may be controlled and predicted using phonetic input. The two main synthesis techniques (model-based vs. image-based) are contrasted and illustrated by brief descriptions of the most representative existing systems. The challenging issues of evaluation, data acquisition, and modeling that may drive future models are also discussed and illustrated by our current work at ICP. |
Year | DOI | Venue
---|---|---
2003 | 10.1023/A:1025700715107 | I. J. Speech Technology

Keywords | Field | DocType
---|---|---
text-to-speech synthesis, audiovisual synthesis, facial animation, talking faces | Speech synthesis, Computer science, Data acquisition, Speech recognition, Text to speech synthesis, Computer facial animation | Journal

Volume | Issue | ISSN
---|---|---
6 | 4 | 1572-8110

Citations | PageRank | References
---|---|---
50 | 2.86 | 36
Authors |
---|
4 |

Name | Order | Citations | PageRank
---|---|---|---
G Bailly | 1 | 95 | 7.83
Maxime Berar | 2 | 50 | 2.86
Frédéric Elisei | 3 | 275 | 25.05
Matthias Odisio | 4 | 99 | 8.60