Kaldi: how do you use THCHS-30 after training finishes?

Monophone model training
# Flat start and monophone training, with delta-delta features.
# This script applies cepstral mean normalization (per speaker).
#monophone: train the monophone model
steps/train_mono.sh --boost-silence 1.25 --nj $n --cmd "$train_cmd" data/mfcc/train data/lang exp/mono || exit 1;
#test monophone model
local/thchs-30_decode.sh --mono true --nj $n "steps/decode.sh" exp/mono data/mfcc &
Usage of train_mono.sh:
echo "Usage: steps/train_mono.sh [options] <data-dir> <lang-dir> <exp-dir>"
echo " e.g.: steps/train_mono.sh data/train.1k data/lang exp/mono"
echo "main options (for others, see top of script file)"
The parameter settings below train the basic monophone HMM model: 40 training iterations, with the data re-aligned on the iterations listed in realign_iters.
# Begin configuration section.
cmd=run.pl
scale_opts="--transition-scale=1.0 --acoustic-scale=0.1 --self-loop-scale=0.1"
num_iters=40    # Number of iterations of training
max_iter_inc=30 # Last iter to increase #Gauss on.
totgauss=1000   # Target #Gaussians.
careful=false
boost_silence=1.0 # Factor by which to boost silence likelihoods in alignment
realign_iters="1 2 3 4 5 6 7 8 9 10 12 14 16 18 20 23 26 29 32 35 38";
config=    # name of config file.
power=0.25 # exponent to determine number of gaussians from occurrence counts
norm_vars=false # deprecated, prefer --cmvn-opts "--norm-vars=false"
cmvn_opts=      # can be used to add extra options to cmvn.
# End configuration section.
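To see how totgauss and max_iter_inc interact: train_mono.sh starts from a flat-start model (one Gaussian per HMM state) and grows the mixtures towards totgauss over the first max_iter_inc iterations. A simplified sketch of that logic, based on the standard script (the exact parsing of the gmm-info output here is an assumption):
numgauss=$(gmm-info exp/mono/0.mdl | grep gaussians | awk '{print $NF}')  # initial #Gauss after flat start
incgauss=$[($totgauss-$numgauss)/$max_iter_inc]                           # Gaussians added per iteration
for x in $(seq 1 $num_iters); do
  # ... accumulate statistics and re-estimate the model for iteration $x ...
  [ $x -le $max_iter_inc ] && numgauss=$[$numgauss+$incgauss]             # stop growing after max_iter_inc iterations
done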
thchs-30_decode.sh tests the monophone model. Internally it uses mkgraph.sh to build the full decoding network and write it out as a finite-state transducer (HCLG), and then runs decode.sh on the test data with the language model to compute the WER.
#decode word
utils/mkgraph.sh $opt data/graph/lang $srcdir $srcdir/graph_word || exit 1;
$decoder --cmd "$decode_cmd" --nj $nj $srcdir/graph_word $datadir/test $srcdir/decode_test_word || exit 1
#decode phone
utils/mkgraph.sh $opt data/graph_phone/lang $srcdir $srcdir/graph_phone || exit 1;
$decoder --cmd "$decode_cmd" --nj $nj $srcdir/graph_phone $datadir/test_phone $srcdir/decode_test_phone || exit 1
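Once these decodes have finished, the best WER of each system can be summarized with the usual Kaldi one-liner (this assumes the standard scoring layout, i.e. wer_* files inside each decode directory):
for x in exp/*/decode_test_word exp/*/decode_test_phone; do
  [ -d $x ] && grep WER $x/wer_* 2>/dev/null | utils/best_wer.sh
done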
align_si.sh aligns the given data with the given model. It is normally run before training the next model, taking the previous model as input and writing the alignments to <align-dir>.
#monophone_ali
steps/align_si.sh --boost-silence 1.25 --nj $n --cmd "$train_cmd" data/mfcc/train data/lang exp/mono exp/mono_ali || exit 1;
# Computes training alignments using a model with delta or
# LDA+MLLT features.
# If you supply the "--use-graphs true" option, it will use the training
# graphs from the source directory (where the model is). In this
# case the number of jobs must match with the source directory.
"usage: steps/align_si.sh <data-dir> <lang-dir> <src-dir> <align-dir>"
"e.g.: steps/align_si.sh data/train data/lang exp/tri1 exp/tri1_ali"
"main options (for others, see top of script file)"
" --config <config-file>  # config containing options"
" --nj <nj>  # number of parallel jobs"
" --use-graphs true  # use graphs in src-dir"
" --cmd (utils/run.pl|utils/queue.pl <queue opts>)  # how to run jobs."
Train context-dependent triphone models, using the monophone model (its alignments) as input.
#triphone
steps/train_deltas.sh --boost-silence 1.25 --cmd "$train_cmd" 2000 10000 data/mfcc/train data/lang exp/mono_ali exp/tri1 || exit 1;
#test tri1 model
local/thchs-30_decode.sh --nj $n "steps/decode.sh" exp/tri1 data/mfcc &
The relevant configuration in train_deltas.sh is shown below; its inputs are the number of tree leaves (2000) and the target total number of Gaussians (10000), followed by the data, lang, alignment and output directories.
# Begin configuration.
stage=-4 # This allows restarting after partway, when something went wrong.
config=
cmd=run.pl
scale_opts="--transition-scale=1.0 --acoustic-scale=0.1 --self-loop-scale=0.1"
realign_iters="10 20 30";
num_iters=35    # Number of iterations of training
max_iter_inc=25 # Last iter to increase #Gauss on.
beam=10
careful=false
retry_beam=40
boost_silence=1.0 # Factor by which to boost silence likelihoods in alignment
power=0.25 # Exponent for number of gaussians according to occurrence counts
cluster_thresh=-1 # for build-tree control final bottom-up clustering of leaves
norm_vars=false # deprecated. Prefer --cmvn-opts "--norm-vars=true"
                # use the option --cmvn-opts "--norm-means=false"
cmvn_opts=
delta_opts=
context_opts=   # use "--context-width=5 --central-position=2" for quinphone
# End configuration.
echo "Usage: steps/train_deltas.sh <num-leaves> <tot-gauss> <data-dir> <lang-dir> <alignment-dir> <exp-dir>"
echo "e.g.: steps/train_deltas.sh 2000 10000 data/train_si84_half data/lang exp/mono_ali exp/tri1"
Apply LDA and MLLT transforms to the features and train triphone models on top of the LDA+MLLT features.
LDA+MLLT refers to the way we transform the features after computing the MFCCs: we splice across several frames, reduce the dimension (to 40 by default) using Linear Discriminant Analysis, and then later estimate, over multiple iterations, a diagonalizing transform known as MLLT or CTC.
See http://kaldi-asr.org/doc/transform.html for details.
#triphone_ali
steps/align_si.sh --nj $n --cmd "$train_cmd" data/mfcc/train data/lang exp/tri1 exp/tri1_ali || exit 1;
#lda_mllt
steps/train_lda_mllt.sh --cmd "$train_cmd" --splice-opts "--left-context=3 --right-context=3" 2500 15000 data/mfcc/train data/lang exp/tri1_ali exp/tri2b || exit 1;
#test tri2b model
local/thchs-30_decode.sh --nj $n "steps/decode.sh" exp/tri2b data/mfcc &
The configuration section of train_lda_mllt.sh:
# Begin configuration.
cmd=run.pl
config=
stage=-5
scale_opts="--transition-scale=1.0 --acoustic-scale=0.1 --self-loop-scale=0.1"
realign_iters="10 20 30";
mllt_iters="2 4 6 12";
num_iters=35    # Number of iterations of training
max_iter_inc=25 # Last iter to increase #Gauss on.
dim=40
beam=10
retry_beam=40
careful=false
boost_silence=1.0 # Factor by which to boost silence likelihoods in alignment
power=0.25 # Exponent for number of gaussians according to occurrence counts
randprune=4.0 # This is approximately the ratio by which we will speed up the
              # LDA and MLLT calculations via randomized pruning.
splice_opts=
cluster_thresh=-1 # for build-tree control final bottom-up clustering of leaves
norm_vars=false # deprecated. Prefer --cmvn-opts "--norm-vars=false"
cmvn_opts=
context_opts=   # use "--context-width=5 --central-position=2" for quinphone.
# End configuration.
Speaker adaptive training with feature-space maximum likelihood linear regression (fMLLR).
This does Speaker Adapted Training (SAT), i.e. train on fMLLR-adapted features. It can be done on top of either LDA+MLLT, or delta and delta-delta features. If there are no transforms supplied in the alignment directory, it will estimate transforms itself before building the tree (and in any case, it estimates transforms a number of times during training).
#lda_mllt_ali
steps/align_si.sh --nj $n --cmd "$train_cmd" --use-graphs true data/mfcc/train data/lang exp/tri2b exp/tri2b_ali || exit 1;
#sat
steps/train_sat.sh --cmd "$train_cmd" 2500 15000 data/mfcc/train data/lang exp/tri2b_ali exp/tri3b || exit 1;
#test tri3b model
local/thchs-30_decode.sh --nj $n "steps/decode_fmllr.sh" exp/tri3b data/mfcc &
The configuration of train_sat.sh:
# Begin configuration section.
stage=-5
exit_stage=-100 # you can use this to require it to exit at the
                # beginning of a specific stage. Not all values are supported.
fmllr_update_type=full
cmd=run.pl
scale_opts="--transition-scale=1.0 --acoustic-scale=0.1 --self-loop-scale=0.1"
beam=10
retry_beam=40
careful=false
boost_silence=1.0 # Factor by which to boost silence likelihoods in alignment
context_opts=   # e.g. set this to "--context-width 5 --central-position 2" for quinphone.
realign_iters="10 20 30";
fmllr_iters="2 4 6 12";
silence_weight=0.0 # Weight on silence in fMLLR estimation.
num_iters=35    # Number of iterations of training
max_iter_inc=25 # Last iter to increase #Gauss on.
power=0.2 # Exponent for number of gaussians according to occurrence counts
cluster_thresh=-1 # for build-tree control final bottom-up clustering of leaves
phone_map=
train_tree=true
tree_stats_opts=
cluster_phones_opts=
compile_questions_opts=
# End configuration section.
decode_fmllr.sh: decoding with the speaker-adapted model.
Decoding script that does fMLLR. This can be on top of delta+delta-delta, or LDA+MLLT features.
# There are 3 models involved potentially in this script,
# and for a standard, speaker-independent system they will all be the same.
# The "alignment model" is for the 1st-pass decoding and to get the
# Gaussian-level alignments for the "adaptation model" the first time we
# do fMLLR. The "adaptation model" is used to estimate fMLLR transforms
# and to generate state-level lattices. The lattices are then rescored
# with the "final model".
#
# The following table explains where we get these 3 models from.
# Note: $srcdir is one level up from the decoding directory.
#
#                      Default source:
#
# "alignment model"    $srcdir/final.alimdl              --alignment-model <model>
#                      (or $srcdir/final.mdl if alimdl absent)
# "adaptation model"   $srcdir/final.mdl                 --adapt-model <model>
# "final model"        $srcdir/final.mdl                 --final-model <model>
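For example, the three models can be overridden on the command line. A hypothetical invocation for the tri3b system (the option names come from the table above; the directory paths are assumed from this recipe) might look like:
steps/decode_fmllr.sh --nj $n --cmd "$decode_cmd" \
  --alignment-model exp/tri3b/final.alimdl \
  --adapt-model exp/tri3b/final.mdl \
  --final-model exp/tri3b/final.mdl \
  exp/tri3b/graph_word data/mfcc/test exp/tri3b/decode_test_word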
Train a model on top of existing features (no feature-space learning of any kind is done). This script initializes the model (i.e., the GMMs) from the previous system's model. That is: for each state in the current model (after tree building), it chooses the closest state in the old model, judging the similarities based on overlap of counts in the tree stats.
#sat_ali
steps/align_fmllr.sh --nj $n --cmd "$train_cmd" data/mfcc/train data/lang exp/tri3b exp/tri3b_ali || exit 1;
#quick
steps/train_quick.sh --cmd "$train_cmd" 4200 40000 data/mfcc/train data/lang exp/tri3b_ali exp/tri4b || exit 1;
#test tri4b model
local/thchs-30_decode.sh --nj $n "steps/decode_fmllr.sh" exp/tri4b data/mfcc &
The configuration of train_quick.sh:
# Begin configuration.
cmd=run.pl
scale_opts="--transition-scale=1.0 --acoustic-scale=0.1 --self-loop-scale=0.1"
realign_iters="10 15"; # Only realign twice.
num_iters=20   # Number of iterations of training
maxiterinc=15  # Last iter to increase #Gauss on.
batch_size=750 # batch size to use while compiling graphs... memory/speed tradeoff.
beam=10        # alignment beam.
retry_beam=40
stage=-5
cluster_thresh=-1 # for build-tree control final bottom-up clustering of leaves
# End configuration section.
Windows 10 added a feature called Windows Spotlight, which automatically downloads and rotates the lock-screen wallpaper so you see a different image every time you turn on the computer. These high-resolution lock-screen wallpapers are often beautiful and striking, and well worth keeping. Many people would like to use them as desktop wallpapers but don't know how to save them. In fact the images already sit in a cache folder on your machine; on my PC it is "C:\Users\Anymake\AppData\Local\Packages\Microsoft.Windows.ContentDeliveryManager_cw5n1h2txyewy\LocalState\Assets". The cached files have no extension: simply rename them with a .jpg suffix and you can view them. Tutorials for extracting them by hand are easy to find online.
Copying them manually every time is tedious, though, so I wrote a small automated program that sets each day's newly downloaded Windows Spotlight image as the desktop wallpaper. No extra software is required; the built-in PowerShell and Task Scheduler are enough.
1. Write a script that extracts the images and sets the wallpaper
Create a text file, paste in the code below, and save it with a .ps1 extension as SetWallPaperFromSpotlight.ps1. Right-click it and choose "Run with PowerShell", and you will see the desktop wallpaper change to the newest image. All Spotlight images are copied into a Spotlight folder under your own user profile, e.g. "C:\Users\Anymake\Pictures\Spotlight" on my machine. That gives you a manual way to extract the images and set the newest one as the wallpaper; step 2 below makes the computer do it automatically every day.
# Save the extracted cache images into the folders below
add-type -AssemblyName System.Drawing
New-Item "$($env:USERPROFILE)\Pictures\Spotlight" -ItemType directory -Force
New-Item "$($env:USERPROFILE)\Pictures\Spotlight\CopyAssets" -ItemType directory -Force
New-Item "$($env:USERPROFILE)\Pictures\Spotlight\Horizontal" -ItemType directory -Force
New-Item "$($env:USERPROFILE)\Pictures\Spotlight\Vertical" -ItemType directory -Force
# Copy landscape and portrait images into their own folders
foreach($file in (Get-Item "$($env:LOCALAPPDATA)\Packages\Microsoft.Windows.ContentDeliveryManager_cw5n1h2txyewy\LocalState\Assets\*"))
{
    if ((Get-Item $file).length -lt 100kb) { continue }
    Copy-Item $file.FullName "$($env:USERPROFILE)\Pictures\Spotlight\CopyAssets\$($file.Name).jpg";
}
foreach($newfile in (Get-Item "$($env:USERPROFILE)\Pictures\Spotlight\CopyAssets\*"))
{
    $image = New-Object -comObject WIA.ImageFile;
    $image.LoadFile($newfile.FullName);
    if($image.Width.ToString() -eq "1920"){ Move-Item $newfile.FullName "$($env:USERPROFILE)\Pictures\Spotlight\Horizontal" -Force }
    elseif($image.Width.ToString() -eq "1080"){ Move-Item $newfile.FullName "$($env:USERPROFILE)\Pictures\Spotlight\Vertical" -Force }
}
# Wallpaper-setting function
function Set-Wallpaper
{
    param(
        [Parameter(Mandatory=$true)]
        $Path,
        [ValidateSet('Center', 'Stretch')]
        $Style = 'Center'
    )
    Add-Type @"
using System;
using System.Runtime.InteropServices;
using Microsoft.Win32;
namespace Wallpaper
{
    public enum Style : int
    {
        Center, Stretch
    }
    public class Setter {
        public const int SetDesktopWallpaper = 20;
        public const int UpdateIniFile = 0x01;
        public const int SendWinIniChange = 0x02;
        [DllImport("user32.dll", SetLastError = true, CharSet = CharSet.Auto)]
        private static extern int SystemParametersInfo (int uAction, int uParam, string lpvParam, int fuWinIni);
        public static void SetWallpaper ( string path, Wallpaper.Style style ) {
            SystemParametersInfo( SetDesktopWallpaper, 0, path, UpdateIniFile | SendWinIniChange );
            RegistryKey key = Registry.CurrentUser.OpenSubKey("Control Panel\\Desktop", true);
            switch( style )
            {
                case Style.Stretch :
                    key.SetValue(@"WallpaperStyle", "2") ;
                    key.SetValue(@"TileWallpaper", "0") ;
                    break;
                case Style.Center :
                    key.SetValue(@"WallpaperStyle", "1") ;
                    key.SetValue(@"TileWallpaper", "0") ;
                    break;
            }
            key.Close();
        }
    }
}
"@
    [Wallpaper.Setter]::SetWallpaper( $Path, $Style )
}
$filePath = "$($env:USERPROFILE)\Pictures\Spotlight\Horizontal\*"
$file = Get-Item -Path $filePath | Sort-Object -Property LastWriteTime -Descending | Select-Object -First 1
Set-Wallpaper -Path $file.FullName
# echo $file.FullName
Remove-Item "$($env:USERPROFILE)\Pictures\Spotlight\CopyAssets\*";
The wallpaper-setting code is adapted from: http://www.pstips.net/powershell-change-wallpaper.html
The Spotlight image extraction is based on: /save-win10-spotlight-wallpapers.html
2. Use the built-in Task Scheduler to run the script automatically every day
You must be logged on as an administrator to perform these steps. If you are not logged on as an administrator, you can only change settings that apply to your own user account.
1. By default the execution policy does not allow scheduled tasks to run .ps1 scripts, so first start Windows PowerShell as administrator.
2. Run Set-ExecutionPolicy Unrestricted to change the policy, and type Y to confirm.
3. Open Task Scheduler: click Control Panel, then System and Security, then Administrative Tools, and double-click Task Scheduler. (Administrator rights are required: if you are prompted for an administrator password or confirmation, type the password or provide confirmation.)
Click the Action menu, then click Create Task.
Configure the task as follows:
General: type a task name such as SetWallPaperFromSpotlight and, optionally, a description; check "Run with highest privileges".
Triggers: New - choose "On a schedule" - set a start time such as 7:30:00 - choose a recurrence such as "Daily, recur every 1 day" - check "Enabled". You can also run the script hourly, every half hour, or at whatever frequency you like.
Actions: New - choose "Start a program" - program: powershell; add the script path as the argument, e.g. D:\code\py\SetWallPaperFromSpotlight.ps1 - click OK.
That's it. To check the result, click "Task Scheduler Library" on the left, find the SetWallPaperFromSpotlight task you just created on the right, right-click it and choose "Run"; the wallpaper should change immediately. (An equivalent way to create the task from the command line is sketched below.)
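If you prefer the command line, the same scheduled task can be created from an elevated prompt with schtasks; the script path below is the example path used above, so adjust it to wherever you saved the .ps1 file:
schtasks /Create /TN SetWallPaperFromSpotlight /SC DAILY /ST 07:30 /RL HIGHEST /TR "powershell -ExecutionPolicy Bypass -File D:\code\py\SetWallPaperFromSpotlight.ps1"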
Finally, make sure the Windows Spotlight lock-screen feature is enabled in Windows 10: right-click the desktop > Personalization > Lock screen > under "Background" choose "Windows Spotlight". The system will then download and rotate the lock-screen wallpapers automatically. To make the taskbar color follow the wallpaper, it is best to set the accent color to be picked automatically from the background.
Stage 1: ./cmd.sh and ./path.sh (set the execution paths and the names of the command scripts)
Note: "decode" means decoding, "train" means training.
Stage 2: data preparation
Run local/thchs-30_data_prep.sh
to create wav.scp, utt2spk, spk2utt and text, as well as word.txt and phone.txt:
- Loop over the dev, test and train folders under thchs30/thchs30-openslr/data_thchs30/ and collect the names of the wav audio files (about 6 GB of audio), e.g. C08_559.
- Loop over the same dev, test and train folders and collect the corresponding .wav.trn transcription files.
- The wav file names (e.g. C08_559) are used to generate utt2spk and wav.scp, stored in the dev, test and train folders under egs/thchs30/s5/data/.
- The .wav.trn transcriptions are used to generate phone.txt and word.txt, also stored in the dev, test and train folders under egs/thchs30/s5/data/.
utils/utt2spk_to_spk2utt.pl
generates spk2utt from utt2spk (the same mapping, just indexed by speaker instead of by utterance), again stored in the dev, test and train folders under egs/thchs30/s5/data/.
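For reference, the line formats of these files are as follows (the entries below are illustrative only, using utterance C08_559 of speaker C08):
wav.scp : C08_559 /path/to/data_thchs30/train/C08_559.wav   (utterance-id -> wav path)
utt2spk : C08_559 C08                                       (utterance-id -> speaker-id)
spk2utt : C08 C08_559 C08_560 ...                           (speaker-id -> all of its utterance-ids)
text    : C08_559 <word-level transcript>                   (utterance-id -> transcript)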
(The original post shows a screenshot of the directory listing and of the utt2spk file generated from it; note that C13 and C14 are different speakers.)
Stage 3: generate MFCC features and compute CMVN statistics
MFCC extraction is the speech feature extraction step. Two scripts are involved (their invocations are sketched below):
1. steps/make_mfcc.sh
2. steps/compute_cmvn_stats.sh
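In the THCHS-30 run.sh these two steps are invoked roughly as follows (a sketch; the exact log and feature directory names are assumptions and may differ slightly in your copy of the recipe):
for x in train dev test; do
  steps/make_mfcc.sh --nj $n --cmd "$train_cmd" data/mfcc/$x exp/make_mfcc/$x mfcc/$x || exit 1;
  steps/compute_cmvn_stats.sh data/mfcc/$x exp/mfcc_cmvn/$x mfcc/$x || exit 1;
done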
Stage 4: build a large lexicon that covers the words used in both training and decoding
Comment from the source (it says the same thing as the heading above):
prepare language stuff
build a large lexicon that involves words in both the training and decoding.
Look at the two statements that generate these files: the dict folder is copied from the resource package into the project's data folder.
Inside dict, take a look at lexicon.txt and the phone lists such as nonsilence_phones.txt.
lexicon.txt maps each word to its phone sequence (format: the word followed by its phones; an illustrative entry is shown below).
nonsilence_phones.txt simply lists all of the non-silence phones.
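A hypothetical lexicon entry and phone entry might look like this (the actual symbols come from the THCHS-30 phone set, so treat these as illustrative only):
lexicon.txt           : 今天 j in1 t ian1
nonsilence_phones.txt : in1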
Think of this step as gathering the raw material that the training stages below will build on.
Stage 5: acoustic model training
The steps are: monophone training, tri1 triphone training, tri2b LDA+MLLT feature transforms, tri3b SAT (speaker adaptive training), and tri4b quick training.
After that comes the DNN training.
The directories these steps operate on are:
- data/mfcc/train (the MFCC features)
- data/lang (the language directory: lexicon and phone sets)
- exp/mono_ali (alignments produced by the monophone model)
- exp/tri1 (the triphone model; similarly exp/tri2b, exp/tri3b, exp/tri4b for the later models)
Download the original data
There are three tgz files in total:
data_thchs30.tgz [6.4G] (speech data and transcripts)
test-noise.tgz [1.9G] (standard 0db noisy test data)
resource.tgz [24M] (supplementary resources, incl. lexicon for training data, noise samples)
After downloading, extract them to any directory you like.
I extracted them to:
/media/gsc/kaldi_data/thchs30-openslr
Train the models
On my machine this example lives in:
/home/gsc/kaldi/egs/thchs30/s5
To train on a single PC, change cmd.sh as follows so that everything runs locally with run.pl.
You need at least a 4-core CPU and 4 GB of RAM.
export train_cmd=run.pl
export decode_cmd="run.pl --mem 4G"
export mkgraph_cmd="run.pl --mem 8G"
export cuda_cmd="run.pl --gpu 1"
#export train_cmd=queue.pl
#export decode_cmd="queue.pl --mem 4G"
#export mkgraph_cmd="queue.pl --mem 8G"
#export cuda_cmd="queue.pl --gpu 1"
Modify run.sh (in particular, point its data path at the directory where you extracted the corpus).
Then execute run.sh. Training takes a long time; you can read along in run.sh to follow which stage is currently running.
The script trains several different acoustic models; the simplest one is the monophone model.
You can stop it at any point, but let it run at least until the monophone training has finished, because the online recognition below needs at least one trained model.
Install portaudio:
gsc@X250:~/kaldi/tools$ ./install_portaudio.sh
Then go to kaldi/src and run make ext (see the sketch below).
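Assuming Kaldi is checked out in ~/kaldi, the two build steps together look like this:
cd ~/kaldi/tools && ./install_portaudio.sh
cd ../src && make ext   # make ext builds the extra binaries, including the online decoder online-gmm-decode-faster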
Create the required files
Copy online_demo from egs/voxforge into egs/thchs30, next to s5. Inside online_demo create two folders, online-data and work. Under online-data create audio and models: audio holds the wav files you want to recognize, and under models create tri1. Copy final.mdl and 35.mdl from s5/exp/tri1 into models/tri1, and also copy words.txt and HCLG.fst from s5/exp/tri1/graph_word into the same place. A shell sketch of these steps follows.
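Something like the following (the Kaldi checkout path ~/kaldi is an assumption; adjust it to your own setup):
cd ~/kaldi/egs
cp -r voxforge/online_demo thchs30/              # put online_demo next to s5
cd thchs30/online_demo
mkdir -p online-data/audio online-data/models/tri1 work
cp ../s5/exp/tri1/final.mdl ../s5/exp/tri1/35.mdl online-data/models/tri1/
cp ../s5/exp/tri1/graph_word/words.txt ../s5/exp/tri1/graph_word/HCLG.fst online-data/models/tri1/
# put the wav files you want to recognize under online-data/audio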
Modify run.sh under online_demo:
2. Change the model type:
change ac_model_type=tri2b_mmi to ac_model_type=tri1
3. Change the command line.
(The lines commented out in the screenshot are the command format used for the tri2b-style models.)
4. Run ./run.sh --test-mode live for live online recognition.
The monophone recognition results are largely inaccurate, although the word "为什么" is recognized well.
To run tri2b (tri3b and tri4b work the same way): copy 12.mat from s5/exp/tri2b into models/tri2b, copy final.mat over as well, copy the other corresponding files, and change the command to:
online-gmm-decode-faster --rt-min=0.5 --rt-max=0.7 --max-active=4000 \
--beam=12.0 --acoustic-scale=0.0769 --left-context=3 --right-context=3 $ac_model/final.mdl $ac_model/HCLG.fst \
$ac_model/words.txt '1:2:3:4:5' $trans_matrix;;
I won't paste a screenshot of the results; they are noticeably better than with the previous model, and if you have listened to the wav files you will notice that high-frequency words in them are recognized comparatively more accurately.
Running the DNN model: the nnet1 model must first be converted to nnet2; how to do the conversion is described in the article mentioned above.
syntaxnet (unrelated to the title of this post; feel free to skip)
Download the Chinese model files from the web,
then export the path where they were saved:
gsc@X250:~/envtensorflow/deep_learn/models/syntaxnet$ MODEL_DIRECTORY=~/Downloads/Chinese
Run the following command to see the tokenization and parsing result:
gsc@X250:~/envtensorflow/deep_learn/models/syntaxnet$ echo '然而,中国经历了30多年的改革开放' | syntaxnet/models/parsey_universal/tokenize_zh.sh $MODEL_DIRECTORY | syntaxnet/models/parsey_universal/parse.sh $MODEL_DIRECTORY