Preventing deep fakes

Based on Erogol's idea from Coqui, there should be a way to identify deep fakes in the voice context. After some discussion on Twitter[1], one thing seems beyond doubt: "It's the old story between hackers and the people trying to prevent misuse."

Possible techniques[2]

Which techniques are useful for which purpose, and what are their pros and cons:

Watermark in TTS output

Con: easy to analyse and reproduce for anyone with access to the original source code. A minimal sketch of the idea follows below.
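
The sketch below is one possible illustration, not Coqui's actual scheme: a pseudo-random noise pattern derived from a secret key is mixed into the synthesized waveform at low amplitude, and detection correlates the audio against the same pattern. The key, amplitude, and threshold values are assumptions chosen for the demo.

```python
# Hedged sketch of output watermarking (not Coqui's actual scheme).
import numpy as np

def make_pattern(key: int, num_samples: int) -> np.ndarray:
    """Derive a deterministic +/-1 pattern from a secret key."""
    rng = np.random.default_rng(key)
    return rng.choice([-1.0, 1.0], size=num_samples)

def embed_watermark(audio: np.ndarray, key: int, strength: float = 0.005) -> np.ndarray:
    """Mix the keyed pattern into the waveform at a (hopefully) inaudible level."""
    return audio + strength * make_pattern(key, len(audio))

def detect_watermark(audio: np.ndarray, key: int, threshold: float = 0.0025) -> bool:
    """Correlate against the keyed pattern: clean audio scores near zero,
    watermarked audio scores near `strength`."""
    pattern = make_pattern(key, len(audio))
    score = float(np.dot(audio, pattern) / len(audio))
    return score > threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    speech = rng.normal(0.0, 0.1, 22050)       # stand-in for one second of TTS output
    marked = embed_watermark(speech, key=1234)
    print(detect_watermark(marked, key=1234))  # True
    print(detect_watermark(speech, key=1234))  # False
```

This also illustrates the con above: once the pattern generator and key leak with the source code, the same correlation that detects the watermark lets an attacker subtract it or forge it.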

Watermark in TTS dataset

Models trained on a watermarked dataset can learn to reproduce the watermark themselves, without any watermarking logic appearing in the code (see the sketch below).
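
As a hedged sketch of the dataset variant: every training clip is watermarked before training, using the same keyed pattern as in the previous example. The directory layout, the `soundfile` I/O, and the assumption of mono WAV clips are all illustrative; whether a trained model actually reproduces such a pattern depends on the acoustic model and vocoder.

```python
# Hedged sketch of dataset watermarking (illustrative, not Coqui's pipeline).
from pathlib import Path
import numpy as np
import soundfile as sf

def embed_watermark(audio: np.ndarray, key: int, strength: float = 0.005) -> np.ndarray:
    """Same keyed +/-1 pattern as in the previous sketch."""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=len(audio))
    return audio + strength * pattern

def watermark_dataset(in_dir: str, out_dir: str, key: int) -> None:
    """Watermark every mono WAV clip before it enters the training set."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for wav_path in sorted(Path(in_dir).glob("*.wav")):
        audio, sample_rate = sf.read(wav_path)           # mono float samples in [-1, 1]
        marked = np.clip(embed_watermark(audio, key), -1.0, 1.0)
        sf.write(out / wav_path.name, marked, sample_rate)

# Hypothetical usage on an LJSpeech-style layout:
# watermark_dataset("LJSpeech-1.1/wavs", "LJSpeech-wm/wavs", key=1234)
```

The appeal of this variant is that the watermark lives in the data rather than the code, so releasing the training code reveals nothing; the open question is whether the mark survives the model's lossy reconstruction of the audio.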

References