TOKYO, May 6 (UPI) -- Japanese scientists have refined the production of synthetic vocals by copying the frequencies of human voices, the University of Tokyo says.
Akio Watanabe and Hitoshi Iba have devised an algorithm that takes eight frequency curves derived from human voice samples, seeds them with random parameters, and feeds them into a Vocaloid singing-synthesis program.
A music engineer then adjusts slider bars in the software to rate how well each curve performs, and the best-rated curves are used as "parents" for a new generation of curves.
The second-generation curves are produced through crossover and random mutation, and the process is repeated until frequency curves emerge that yield a synthetic vocal most closely approximating human singing.
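The loop described above is an interactive genetic algorithm, with the engineer's slider ratings serving as the fitness function. The following Python sketch illustrates that structure under stated assumptions: the curve resolution, mutation rate, rating scale, and all function names are illustrative, and the human rating step is simulated with a random score. The article specifies only eight curves per generation, human evaluation via slider bars, crossover, and random mutation.

```python
import random

POPULATION_SIZE = 8      # eight frequency curves per generation (from the article)
CURVE_POINTS = 64        # assumed resolution of each frequency curve
MUTATION_RATE = 0.05     # assumed per-point mutation probability


def random_curve():
    """Seed a frequency curve with random control-point values."""
    return [random.uniform(0.0, 1.0) for _ in range(CURVE_POINTS)]


def rate_by_engineer(curve):
    """Stand-in for the human step: the engineer auditions the curve in the
    Vocaloid software and scores it with slider bars. Simulated here with a
    random score."""
    return random.uniform(0.0, 1.0)


def crossover(parent_a, parent_b):
    """Single-point crossover between two parent curves."""
    cut = random.randrange(1, CURVE_POINTS)
    return parent_a[:cut] + parent_b[cut:]


def mutate(curve):
    """Randomly perturb individual points of the curve, clamped to [0, 1]."""
    return [
        min(1.0, max(0.0, value + random.gauss(0.0, 0.1)))
        if random.random() < MUTATION_RATE else value
        for value in curve
    ]


def evolve(generations=10):
    population = [random_curve() for _ in range(POPULATION_SIZE)]
    for _ in range(generations):
        # The engineer scores each candidate; the best curves become parents.
        scored = sorted(population, key=rate_by_engineer, reverse=True)
        parents = scored[: POPULATION_SIZE // 2]
        # Breed the next generation via crossover and random mutation.
        population = [
            mutate(crossover(*random.sample(parents, 2)))
            for _ in range(POPULATION_SIZE)
        ]
    return max(population, key=rate_by_engineer)


if __name__ == "__main__":
    best = evolve()
    print("Best curve (first 5 points):", [round(v, 3) for v in best[:5]])
```

In a real workflow, the simulated rating function would be replaced by the engineer's actual slider input, which is what makes the search interactive rather than fully automatic.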
The university says the process will replace current approaches that are more labor-intensive and susceptible to human error.
(Photo: Tokyo University students)