Convolutional decoding in the presence of synchronization errors
IEEE Journal on Selected Areas in Communications
The multi-bit watermarking method for speech signals in the time-frequency domain
Integrated Computer-Aided Engineering
A framework is proposed for synchronization in feature-based data embedding systems that is tolerant of errors in the estimated features. The method combines feature-based embedding with codes capable of simultaneous synchronization and error correction, thereby allowing recovery both from desynchronization caused by feature-estimation discrepancies between the embedder and the receiver and from alterations in the estimated symbols arising from other channel perturbations. A speech watermark is presented that constitutes a realization of the framework for 1-D signals. The speech watermark employs pitch modification for data embedding and Davey and MacKay's insertion, deletion, and substitution (IDS) codes for synchronization and error recovery. Experimental results demonstrate that the system indeed allows watermark data recovery despite feature desynchronization. The performance of the speech watermark is optimized by estimating the channel parameters required for IDS decoding at the receiver via the expectation-maximization algorithm. In addition, acceptable watermark power levels (i.e., the range of pitch modification that is perceptually tolerable) are determined from psychophysical tests. The proposed watermark demonstrates robustness to low-bit-rate speech coding channels (Global System for Mobile Communications at 13 kb/s and AMR at 5.15 kb/s), which have posed a serious challenge for prior speech watermarks. Thus, the watermark presented in this paper not only highlights the utility of the proposed framework but also represents a significant advance in speech watermarking. Issues in extending the proposed framework to 2-D and 3-D signals and to different application scenarios are identified.
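The insertion/deletion/substitution channel that the abstract refers to can be sketched as follows. This is a minimal illustration of why feature-estimation discrepancies desynchronize the received symbol stream, not the paper's model; the function name and the probability parameters are assumptions for the example.

```python
import random

def ids_channel(bits, p_ins, p_del, p_sub, rng=None):
    """Pass a bit sequence through an illustrative insertion/deletion/
    substitution (IDS) channel of the kind Davey-MacKay codes target.

    Insertions and deletions change the length of the output, so a
    receiver that assumes a fixed symbol alignment loses synchronization;
    substitutions flip symbol values without shifting alignment.
    """
    rng = rng or random.Random()
    out = []
    for b in bits:
        # Zero or more random insertions before each transmitted bit.
        while rng.random() < p_ins:
            out.append(rng.randint(0, 1))
        if rng.random() < p_del:
            continue  # bit deleted: subsequent bits shift position
        if rng.random() < p_sub:
            b ^= 1    # substitution error: value flipped in place
        out.append(b)
    return out
```

With nonzero insertion or deletion probabilities the output length generally differs from the input length, which is exactly the desynchronization that a joint synchronization-and-error-correction code must undo.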