
webrtcvad in Python: Speech Endpoint Detection (VAD)


py-webrtcvad: Speech Endpoint Detection

Algorithm Overview

WebRTC's VAD uses a GMM (Gaussian Mixture Model) to model speech and noise, and decides between the two by comparing their respective probabilities. The advantage of this algorithm is that it is unsupervised and requires no rigorous training. The GMM for noise and speech is:

p(x_k \mid z, r_k) = \frac{1}{\sqrt{2\pi\sigma_z^2}} \exp\!\left(-\frac{(x_k - \mu_z)^2}{2\sigma_z^2}\right)

Here x_k is the selected feature, which in WebRTC's VAD is the sub-band energy, and r_k is the parameter set consisting of the mean μ_z and variance σ_z². z = 0 denotes noise and z = 1 denotes speech.
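For illustration, here is a minimal sketch in plain floating point (not WebRTC's fixed-point implementation) that evaluates this Gaussian likelihood for one feature value under hypothetical noise and speech parameters; all parameter values below are made up:

import math

def gaussian_likelihood(x, mean, std):
    # p(x | z) = 1 / sqrt(2*pi*sigma^2) * exp(-(x - mu)^2 / (2*sigma^2))
    return (math.exp(-(x - mean) ** 2 / (2 * std ** 2))
            / math.sqrt(2 * math.pi * std ** 2))

# Hypothetical model parameters for one sub-band (illustrative values only).
noise_mean, noise_std = 5.0, 2.0     # z = 0: noise
speech_mean, speech_std = 12.0, 4.0  # z = 1: speech

x = 10.0  # observed sub-band energy feature
p_noise = gaussian_likelihood(x, noise_mean, noise_std)
p_speech = gaussian_likelihood(x, speech_mean, speech_std)
print('speech' if p_speech > p_noise else 'noise')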

The C implementation of WebRTC's VAD proceeds through the following steps:

1. Set the mode

Based on the hangover scheme and the individual and global decision thresholds, VAD detection is divided into the following four modes:

0 - quality mode
1 - low bitrate mode
2 - aggressive mode
3 - very aggressive mode
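In py-webrtcvad, the mode is the integer passed to the Vad constructor, or set later with set_mode:

import webrtcvad

vad = webrtcvad.Vad(2)  # 0..3; start in aggressive mode
vad.set_mode(3)         # switch to very aggressive mode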

2. WebRTC's VAD only supports frame lengths of 10 ms, 20 ms, and 30 ms, so the frame length is checked up front and -1 is returned if it does not match.
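In other words, a frame must contain exactly 10, 20, or 30 ms of 16-bit mono PCM samples. A quick sketch of the corresponding length check (the helper names here are mine, not WebRTC's):

def valid_frame_bytes(sample_rate):
    # 16-bit mono PCM: 2 bytes per sample.
    return {int(sample_rate * ms / 1000) * 2 for ms in (10, 20, 30)}

def check_frame(frame, sample_rate):
    # Mirrors the up-front check: invalid lengths are rejected (-1 in the C code).
    return 0 if len(frame) in valid_frame_bytes(sample_rate) else -1

print(valid_frame_bytes(8000))  # {160, 320, 480} bytes = 80/160/240 samples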

3. The core VAD computation only supports an 8 kHz sample rate, so input signals at 16 kHz or 32 kHz must first be downsampled to 8 kHz.
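WebRTC performs this downsampling with its own decimation filters; purely as a rough illustration of the idea (not the actual filter), a naive 2:1 decimation by averaging adjacent sample pairs looks like this, applied once for 16 kHz input and twice for 32 kHz:

def downsample_by_2(samples):
    # Naive 2:1 decimation: average adjacent pairs.
    # WebRTC uses a proper anti-aliasing filter; this is only a sketch.
    return [(samples[i] + samples[i + 1]) // 2
            for i in range(0, len(samples) - 1, 2)]

pcm_16k = [0, 100, 200, 300, 400, 500]
pcm_8k = downsample_by_2(pcm_16k)  # [50, 250, 450]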

4. At the 8 kHz sample rate, processing consists of two steps:

4.1 Compute the sub-band energies

The signal is divided into six sub-bands: 80~250 Hz, 250~500 Hz, 500~1000 Hz, 1000~2000 Hz, 2000~3000 Hz, and 3000~4000 Hz. The energy of each sub-band is computed to form the feature vector (feature_vector).
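WebRTC computes these energies with a time-domain split filter bank; the FFT-based sketch below is only an illustration of the same six-band feature_vector:

import numpy as np

SUB_BANDS = [(80, 250), (250, 500), (500, 1000),
             (1000, 2000), (2000, 3000), (3000, 4000)]

def subband_energies(frame, sample_rate=8000):
    # Illustrative FFT-based version; WebRTC itself uses split filters.
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    return [spectrum[(freqs >= lo) & (freqs < hi)].sum()
            for lo, hi in SUB_BANDS]

frame = np.random.randn(240)  # one 30 ms frame at 8 kHz
feature_vector = subband_energies(frame)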

4.2 Use the Gaussian mixture model to compute the probabilities of speech and non-speech, and determine the signal type with a hypothesis test

First, the Gaussian models are used to compute H0 and H1 of the hypothesis test (h0_test and h1_test in the C code). vadflag is then set by comparing against the decision threshold, after which the parameters needed for the probability computation are updated: the speech means (speech_means), noise means (noise_means), speech standard deviations (speech_stds), and noise standard deviations (noise_stds).
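A much-simplified sketch of that decide-then-update loop follows; the real code works in fixed point with per-band weights, a hangover scheme, and adaptive thresholds, so the names below echo the C identifiers but the logic is heavily reduced:

import math

def log_gaussian(x, mean, std):
    # Log of the Gaussian density defined in the formula above.
    return (-0.5 * math.log(2 * math.pi * std ** 2)
            - (x - mean) ** 2 / (2 * std ** 2))

def vad_decision(x, speech_means, noise_means, speech_stds, noise_stds,
                 threshold=0.0, alpha=0.99):
    # h1_test / h0_test: total log-likelihood under the speech / noise model.
    h1_test = sum(log_gaussian(xi, m, s)
                  for xi, m, s in zip(x, speech_means, speech_stds))
    h0_test = sum(log_gaussian(xi, m, s)
                  for xi, m, s in zip(x, noise_means, noise_stds))
    vadflag = (h1_test - h0_test) > threshold
    # Update the means of whichever model the frame was assigned to
    # (the real code also updates the standard deviations).
    means = speech_means if vadflag else noise_means
    for i, xi in enumerate(x):
        means[i] = alpha * means[i] + (1 - alpha) * xi
    return vadflag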

Example Code

import collections
import contextlib
import sys
import wave

import webrtcvad


def read_wave(path):
    # Read a mono, 16-bit WAV file and return (PCM data, sample rate).
    with contextlib.closing(wave.open(path, 'rb')) as wf:
        num_channels = wf.getnchannels()
        assert num_channels == 1
        sample_width = wf.getsampwidth()
        assert sample_width == 2
        sample_rate = wf.getframerate()
        assert sample_rate in (8000, 16000, 32000)
        pcm_data = wf.readframes(wf.getnframes())
        return pcm_data, sample_rate


def write_wave(path, audio, sample_rate):
    # Write PCM data to a mono, 16-bit WAV file.
    with contextlib.closing(wave.open(path, 'wb')) as wf:
        wf.setnchannels(1)
        wf.setsampwidth(2)
        wf.setframerate(sample_rate)
        wf.writeframes(audio)


class Frame(object):
    # A single frame of audio with its timestamp and duration (seconds).
    def __init__(self, bytes, timestamp, duration):
        self.bytes = bytes
        self.timestamp = timestamp
        self.duration = duration


def frame_generator(frame_duration_ms, audio, sample_rate):
    # Yield successive frames of frame_duration_ms of 16-bit PCM audio.
    n = int(sample_rate * (frame_duration_ms / 1000.0) * 2)  # bytes per frame
    offset = 0
    timestamp = 0.0
    duration = (float(n) / sample_rate) / 2.0
    while offset + n < len(audio):
        yield Frame(audio[offset:offset + n], timestamp, duration)
        timestamp += duration
        offset += n


def vad_collector(sample_rate, frame_duration_ms,
                  padding_duration_ms, vad, frames):
    # Filter out non-voiced frames and yield voiced audio segments.
    # A ring buffer of padding_duration_ms pads each detected segment.
    num_padding_frames = int(padding_duration_ms / frame_duration_ms)
    ring_buffer = collections.deque(maxlen=num_padding_frames)
    triggered = False
    voiced_frames = []
    for frame in frames:
        sys.stdout.write(
            '1' if vad.is_speech(frame.bytes, sample_rate) else '0')
        if not triggered:
            ring_buffer.append(frame)
            num_voiced = len([f for f in ring_buffer
                              if vad.is_speech(f.bytes, sample_rate)])
            # Trigger when more than 90% of buffered frames are voiced.
            if num_voiced > 0.9 * ring_buffer.maxlen:
                sys.stdout.write('+(%s)' % (ring_buffer[0].timestamp,))
                triggered = True
                voiced_frames.extend(ring_buffer)
                ring_buffer.clear()
        else:
            voiced_frames.append(frame)
            ring_buffer.append(frame)
            num_unvoiced = len([f for f in ring_buffer
                                if not vad.is_speech(f.bytes, sample_rate)])
            # Untrigger when more than 90% of buffered frames are unvoiced.
            if num_unvoiced > 0.9 * ring_buffer.maxlen:
                sys.stdout.write('-(%s)' % (frame.timestamp + frame.duration))
                triggered = False
                yield b''.join([f.bytes for f in voiced_frames])
                ring_buffer.clear()
                voiced_frames = []
    if triggered:
        sys.stdout.write('-(%s)' % (frame.timestamp + frame.duration))
    sys.stdout.write('\n')
    if voiced_frames:
        yield b''.join([f.bytes for f in voiced_frames])


def main(args):
    if len(args) != 2:
        sys.stderr.write(
            'Usage: example.py <aggressiveness> <path to wav file>\n')
        sys.exit(1)
    audio, sample_rate = read_wave(args[1])
    vad = webrtcvad.Vad(int(args[0]))
    frames = frame_generator(30, audio, sample_rate)
    frames = list(frames)
    segments = vad_collector(sample_rate, 30, 300, vad, frames)
    for i, segment in enumerate(segments):
        path = 'chunk-%002d.wav' % (i,)
        print(' Writing %s' % (path,))
        write_wave(path, segment, sample_rate)


if __name__ == '__main__':
    main(sys.argv[1:])
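To run the example, pass the aggressiveness (0-3) and the path to a mono, 16-bit WAV file (recording.wav below is just a placeholder name). The script prints a 1/0 per frame plus segment boundary markers, and writes each voiced segment out as a chunk-NN.wav file:

python example.py 3 recording.wav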

Reference:

http://blog.csdn.net/u012931018/article/details/16903027

GitHub repository:

https://github.com/wiseman/py-webrtcvad

