
Calculating and Measuring Server Memory Bandwidth, in Detail
Original article. When reposting, please credit: reposted from Erlang非业余研究.
Permalink: 详解服务器内存带宽计算和使用情况测量
A while back we ran into a bottleneck while tuning MySQL and suspected that excessive memory copying was exhausting the memory bandwidth. On Linux there is top for CPU usage and iostat for I/O device usage, but there is no tool for measuring memory usage in this sense.
We could see lots of memcpy and string copying (measurable with systemtap), but plain data-movement operations cannot be counted that way; what we wanted was a hardware-level way to learn how many bytes of reads and writes the CPUs issued to the main memory system over a period of time.
So our memory-measurement goals come down to two points: 1. What is the real memory bandwidth of a server like ours? 2. How much of that bandwidth does our application actually use?
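As an aside, here is the kind of systemtap probe that can surface copy traffic; a rough sketch only (the libc path varies by distro, whether stap can resolve memcpy depends on your debuginfo, and the per-second counter is our own illustration, not the tooling we actually ran):

$ sudo stap -e '
    global bytes
    probe process("/lib64/libc.so.6").function("memcpy") {
      bytes += ulong_arg(3)   # 3rd argument of memcpy(dst, src, n) is the byte count
    }
    probe timer.s(1) { printf("memcpy MB/s: %d\n", bytes/1024/1024); bytes = 0 }
  '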
First, the server's configuration:
$ sudo ~/aspersa/summary
# Aspersa System Summary Report ##############################
11:23:11 UTC (local TZ: CST +0800)
Hostname | my031121.sqa.cm4
Uptime | 13 days, load average: 0.02, 0.01, 0.00
System | Dell Inc.; PowerEdge R710; vNot Specified (<OUT OF SPEC>)
Service Tag | DHY6S2X
Release | Red Hat Enterprise Linux Server release 5.4 (Tikanga)
Kernel | 2.6.18-164.el5
Architecture | CPU = 64-bit, OS = 64-bit
Threading | NPTL 2.5
Compiler | GNU CC version 4.1.2 (Red Hat 4.1.2-44).
SELinux | Disabled
# Processor ##################################################
Processors | physical = 2, cores = 12, virtual = 24, hyperthreading = yes
Speeds | 24x
Models | 24xIntel(R) Xeon(R) CPU X5670 @ 2.93GHz
Caches | 24x12288 KB
# Memory #####################################################
Total | 94.40G
Free | 4.39G
Used | physical = 90.01G, swap = 928.00k, virtual = 90.01G
Buffers | 1.75G
Caches | 7.85G
Used | 78.74G
Swappiness | vm.swappiness = 0
DirtyPolicy | vm.dirty_ratio = 40, vm.dirty_background_ratio = 10
  Locator   Size     Speed             Form Factor  Type           Type Detail
  ========= ======== ================= ============ ============== ===========
  DIMM_A1   8192 MB  1333 MHz (0.8 ns) DIMM         {OUT OF SPEC}  Synchronous
  (plus 11 more 8192 MB DIMMs with identical specs, and 6 empty slots)
This Dell R710 machine has two X5670 CPUs, each with 6 cores and hyperthreading, so 24 logical CPUs in total. It carries 12 DIMMs of 8192 MB (1333 MHz) memory.
Our machine architecture has moved from the old FSB bus design to today's NUMA architecture; thanks to @fcicq for the pointer. See the diagram below (source):
[Figure: Nehalem-style NUMA layout, each CPU with its own 3-channel memory controller, CPUs interconnected via QPI]
We can clearly see that each CPU has its own memory controller wired directly to its memory over 3 channels, while the CPUs connect to each other via QPI. Data flows across both the memory controllers and the QPI links.
Our servers use DDR3 memory, so we need to work out the memory bandwidth under this architecture.
For how to calculate DDR3 memory bandwidth, see here.
From the configuration we can see the server's DIMMs: DIMM_A1 8192 MB 1333 MHz (0.8 ns), 12 of them, 6 attached to each CPU.
Following that article, the calculation is: per channel, 1333 MT/s x 8 bytes (a 64-bit channel) ≈ 10.6 GB/s; the CPU has 3 channels, so one CPU's total memory bandwidth is 10.6 x 3 = 31.8 GB/s; with 2 CPUs, the machine total is 63.6 GB/s.
In theory, anyway (if I've got this wrong, please correct me); we'll measure it in a moment.
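To make the arithmetic explicit, here it is as a few shell one-liners (the numbers come straight from the text above; bc is the only dependency):

$ mts=1333; channels=3; sockets=2
$ echo "$mts * 8 / 1000" | bc -l                         # per channel: 64 bits x 1333 MT/s = 10.664 GB/s
$ echo "$mts * 8 * $channels / 1000" | bc -l             # per CPU: ~31.99 GB/s (the post rounds 10.6 x 3 = 31.8)
$ echo "$mts * 8 * $channels * $sockets / 1000" | bc -l  # whole machine: ~63.98 GB/s (post: 63.6)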
Going by this calculation, memory bandwidth clearly should not be the bottleneck. So where is the problem? Read on!
Next we need a tool that shovels memory around; what this class of tool measures is the maximum memory bandwidth a single thread can drive. Let's pick the simple mbw (mbw, the Memory BandWidth benchmark) and give it a spin:
$ sudo apt-get install mbw
$ mbw -q -n 1 256
0       Method: MEMCPY  Elapsed: 0.19652        MiB: 256.00000  Copy: 1302.7 MiB/s
AVG     Method: MEMCPY  Elapsed: 0.19652        MiB: 256.00000  Copy: 1302.7 MiB/s
0       Method: DUMB    Elapsed: 0.12668        MiB: 256.00000  Copy: 2020.8 MiB/s
AVG     Method: DUMB    Elapsed: 0.12668        MiB: 256.00000  Copy: 2020.8 MiB/s
0       Method: MCBLOCK Elapsed: 0.02945        MiB: 256.00000  Copy: 8692.7 MiB/s
AVG     Method: MCBLOCK Elapsed: 0.02945        MiB: 256.00000  Copy: 8692.7 MiB/s
On the CPU memory-bandwidth measurement front, @王王争 helped enormously and gave me a thorough introduction to PTU (intel-performance-tuning-utility), which can be downloaded here.
After unpacking the binary package, vtbwrun under bin is our hardware-level memory-bandwidth measurement tool. Here is its usage help:
$ sudo ptu40_005_lin_intel64/bin/vtbwrun
***********************************
Performance Tuning Utility
***********************************
Usage: ./ptu [-c] [-i <iterations>] [-A] [-r] [-p] [-w]
-c disable CPU check.
-i <iterations> specify how many iterations PTU should run.
-A Automated mode, no user Input.
-r Monitor QHL read/write requests from the IOH
*************************** EXCLUSIVE ******************************
-p Monitor partial writes on Memory Channel 0,1,2
-w Monitor WriteBack, Conflict event
*************************** EXCLUSIVE ******************************
$ sudo ptu40_005_lin_intel64/bin/vtbwrun -c -A
A screenshot from a run:
[Screenshot: vtbwrun runtime output]
The figure clearly shows System Memory Throughput (MB/s): 13019.45, with QPI traffic also fairly heavy.
Beyond that, we need the CPU topology: which physical CPU, which core, and which hyperthread each OS-level CPU corresponds to. Only with this information can we use taskset to bind the memory benchmark to a chosen CPU and observe memory usage precisely. For CPU topology, see here.
$ sudo ./cpu_topology64.out
Advisory to Users on system topology enumeration
This utility is for demonstration purpose only. It assumes the hardware topology
configuration within a coherent domain does not change during the life of an OS
session. If an OS support advanced features that can change hardware topology
configurations, more sophisticated adaptation may be necessary to account for
the hardware configuration change that might have added and reduced the number
of logical processors being managed by the OS.
User should also be aware that the system topology enumeration algorithm is
based on the assumption that CPUID instruction will return raw data reflecting
the native hardware configuration. When an application runs inside a virtual
machine hosted by a Virtual Machine Monitor (VMM), any CPUID instructions
issued by an app (or a guest OS) are trapped by the VMM and it is the VMM's
responsibility and decision to emulate/supply CPUID return data to the virtual
machines. When deploying topology enumeration code based on querying CPUID
inside a VM environment, the user must consult with the VMM vendor on how an VMM
will emulate CPUID instruction relating to topology enumeration.
Software visible enumeration in the system:
Number of logical processors visible to the OS: 24
Number of logical processors visible to this process: 24
Number of processor cores visible to this process: 12
Number of physical packages visible to this process: 2
Hierarchical counts by levels of processor topology:
 # of cores in package 0 visible to this process: 6 .
        # of logical processors in Core 0 visible to this process: 2 .
        # of logical processors in Core 1 visible to this process: 2 .
        # of logical processors in Core 2 visible to this process: 2 .
        # of logical processors in Core 3 visible to this process: 2 .
        # of logical processors in Core 4 visible to this process: 2 .
        # of logical processors in Core 5 visible to this process: 2 .
 # of cores in package 1 visible to this process: 6 .
        # of logical processors in Core 0 visible to this process: 2 .
        # of logical processors in Core 1 visible to this process: 2 .
        # of logical processors in Core 2 visible to this process: 2 .
        # of logical processors in Core 3 visible to this process: 2 .
        # of logical processors in Core 4 visible to this process: 2 .
        # of logical processors in Core 5 visible to this process: 2 .
Affinity masks per SMT thread, per core, per package:
Individual:
P:0, C:0, T:0 --> 1
P:0, C:0, T:1 --> 1z3
Core-aggregated:
P:0, C:0 --> 1001
Individual:
P:0, C:1, T:0 --> 4
P:0, C:1, T:1 --> 4z3
Core-aggregated:
P:0, C:1 --> 4004
Individual:
P:0, C:2, T:0 --> 10
P:0, C:2, T:1 --> 1z4
Core-aggregated:
P:0, C:2 --> 10010
Individual:
P:0, C:3, T:0 --> 40
P:0, C:3, T:1 --> 4z4
Core-aggregated:
P:0, C:3 --> 40040
Individual:
P:0, C:4, T:0 --> 100
P:0, C:4, T:1 --> 1z5
Core-aggregated:
P:0, C:4 --> 100100
Individual:
P:0, C:5, T:0 --> 400
P:0, C:5, T:1 --> 4z5
Core-aggregated:
P:0, C:5 --> 400400
Pkg-aggregated:
P:0 --> 555555
Individual:
P:1, C:0, T:0 --> 2
P:1, C:0, T:1 --> 2z3
Core-aggregated:
P:1, C:0 --> 2002
Individual:
P:1, C:1, T:0 --> 8
P:1, C:1, T:1 --> 8z3
Core-aggregated:
P:1, C:1 --> 8008
Individual:
P:1, C:2, T:0 --> 20
P:1, C:2, T:1 --> 2z4
Core-aggregated:
P:1, C:2 --> 20020
Individual:
P:1, C:3, T:0 --> 80
P:1, C:3, T:1 --> 8z4
Core-aggregated:
P:1, C:3 --> 80080
Individual:
P:1, C:4, T:0 --> 200
P:1, C:4, T:1 --> 2z5
Core-aggregated:
P:1, C:4 --> 200200
Individual:
P:1, C:5, T:0 --> 800
P:1, C:5, T:1 --> 8z5
Core-aggregated:
P:1, C:5 --> 800800
Pkg-aggregated:
P:1 --> aaaaaa
APIC ID listings from affinity masks
 0, Affinity mask - apic id 20
 1, Affinity mask - apic id 0
 2, Affinity mask - apic id 22
 3, Affinity mask - apic id 2
 4, Affinity mask - apic id 24
 5, Affinity mask - apic id 4
 6, Affinity mask - apic id 30
 7, Affinity mask - apic id 10
 8, Affinity mask - apic id 32
 9, Affinity mask - apic id 12
10, Affinity mask - apic id 34
11, Affinity mask - apic id 14
12, Affinity mask - apic id 21
13, Affinity mask - apic id 1
14, Affinity mask - apic id 23
15, Affinity mask - apic id 3
16, Affinity mask - apic id 25
17, Affinity mask - apic id 5
18, Affinity mask - apic id 31
19, Affinity mask - apic id 11
20, Affinity mask - apic id 33
21, Affinity mask - apic id 13
22, Affinity mask - apic id 35
23, Affinity mask - apic id 15
Package 0 Cache and Thread details
Box Description:
Cache  is cache level designator
Size   is cache size
OScpu# is cpu # as seen by OS
Core   is core#[_thread# if > 1 thread/core] inside socket
AffMsk is AffinityMask(extended hex) for core and thread
CmbMsk is Combined AffinityMask(extended hex) for hw threads sharing cache
CmbMsk will differ from AffMsk if > 1 hw_thread/cache
Extended Hex replaces trailing zeroes with 'z#'
where # is number of zeroes (so '8z5' is '0x800000')
L1D is Level 1 Data cache, size(KBytes)= 32,  Cores/cache= 2, Caches/package= 6
L1I is Level 1 Instruction cache, size(KBytes)= 32,  Cores/cache= 2, Caches/package= 6
L2 is Level 2 Unified cache, size(KBytes)= 256,  Cores/cache= 2, Caches/package= 6
L3 is Level 3 Unified cache, size(KBytes)= 12288,  Cores/cache= 12, Caches/package= 1
[Box diagram: package 0, per-cache OS cpu numbers for c0_t0/c0_t1 ... c5_t0/c5_t1; all 12 hw threads share the L3]
CmbMsk|555555
Combined socket AffinityMask= 0x555555
Package 1 Cache and Thread details
[Box diagram: package 1, same legend and layout as package 0]
CmbMsk|aaaaaa
Combined socket AffinityMask= 0xaaaaaa
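The "extended hex" in this output compresses trailing zeroes ('8z5' means 0x800000, as the legend says). A tiny bash helper to expand it; the function name and approach are ours, purely for illustration:

$ xhex() {                       # <digits>z<count>  ->  0x<digits> followed by <count> zeroes
    local s=$1
    if [[ $s == *z* ]]; then
      printf '0x%s%0*d\n' "${s%z*}" "${s#*z}" 0
    else
      printf '0x%s\n' "$s"
    fi
  }
$ xhex 8z5
0x800000
$ xhex 1z3
0x1000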
# Or, simplest of all, let Erlang tell us
Erlang R14B04 (erts-5.8.5) [64-bit] [smp:2:2] [rq:2] [async-threads:0] [hipe] [kernel-poll:false]
Eshell V5.8.5  (abort with ^G)
1> erlang:system_info(cpu_topology).
[{processor,[{core,[{thread,{logical,1}},
{thread,{logical,13}}]},
{core,[{thread,{logical,3}},{thread,{logical,15}}]},
{core,[{thread,{logical,5}},{thread,{logical,17}}]},
{core,[{thread,{logical,7}},{thread,{logical,19}}]},
{core,[{thread,{logical,9}},{thread,{logical,21}}]},
{core,[{thread,{logical,11}},{thread,{logical,23}}]}]},
{processor,[{core,[{thread,{logical,0}},
{thread,{logical,12}}]},
{core,[{thread,{logical,2}},{thread,{logical,14}}]},
{core,[{thread,{logical,4}},{thread,{logical,16}}]},
{core,[{thread,{logical,6}},{thread,{logical,18}}]},
{core,[{thread,{logical,8}},{thread,{logical,20}}]},
{core,[{thread,{logical,10}},{thread,{logical,22}}]}]}]
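If neither tool is at hand, the same mapping can also be read straight out of sysfs (standard Linux paths, nothing specific to our boxes):

$ for c in /sys/devices/system/cpu/cpu[0-9]*; do
    printf "%s: package %s, core %s\n" "${c##*/}" \
      "$(cat $c/topology/physical_package_id)" "$(cat $c/topology/core_id)"
  done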
# And let's knock together a shell script that runs several mbw instances pinned to the given CPUs
$ cat run_mbw.sh
#!/bin/bash
# Usage: ./run_mbw.sh <from> <to> <increase>
# Start one background mbw, pinned with taskset, on every logical CPU in the range.
for i in $(seq $1 $3 $2)
do
    taskset -c $i mbw -q -n 9999 256 >/dev/null &   # iteration count and size were lost from the post; these are placeholders
done
$ chmod +x run_mbw.sh
Bear in mind that on this box CPU0 owns the odd logical CPU numbers and CPU1 the even ones.
So running ./run_mbw.sh from to increase does exactly what we need.
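For example, given the odd/even mapping above (the exact arguments of the original runs were lost, so these are just the natural choices):

$ ./run_mbw.sh 1 23 2    # one mbw per odd logical CPU, i.e. all of CPU0
$ ./run_mbw.sh 0 22 2    # one mbw per even logical CPU, i.e. all of CPU1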
With these tools in hand we can run experiments. Take a sip of water and carry on:
$ sudo ./run_mbw.sh
The CPU bindings are what we expected; see the figure:
[Screenshot: mbw processes pinned to the chosen logical CPUs]
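The bindings can also be double-checked from the shell rather than a screenshot:

$ for pid in $(pgrep mbw); do taskset -p $pid; done    # prints each mbw process's affinity mask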
And the memory bandwidth? See the figure:
[Screenshot: vtbwrun memory throughput]
Something is off: the memory traffic is spread across both CPUs, and there is QPI traffic as well.
$ sudo ./run_mbw.sh
Now every CPU is maxed out. What does the memory bandwidth look like? See the figure:
[Screenshot: vtbwrun with all CPUs loaded]
From this we can tell the machine's maximum memory bandwidth is about 32 GB/s.
Which is odd. Why does it behave this way? The CPUs are not consuming their own local memory as expected, and QPI consumption is also heavy.
The root cause: we had turned NUMA off at OS boot time, to keep mysqld from triggering swap when using large amounts of memory.
Let's confirm:
# cat /proc/cmdline
ro root=LABEL=/ numa=off console=tty0 console=ttyS1,115200
With the cause identified, the rest is easy. No problem: we find another machine where NUMA was left on and repeat the experiment:
$ cat /proc/cmdline
ro root=LABEL=/ console=tty0 console=ttyS0,9600   # NUMA is indeed not disabled; this machine has 2 nodes
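The node layout can be confirmed directly as well (assuming numactl is installed; the expected output is paraphrased, not captured from this box):

$ numactl --hardware    # should begin with "available: 2 nodes (0-1)"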
$ sudo ~/aspersa/summary
# Aspersa System Summary Report ##############################
12:26:18 UTC (local TZ: CST +0800)
Hostname | my031089
Uptime | 18 days, 10:33, load average: 0.04, 0.01, 0.00
System | Huawei Technologies Co., Ltd.; Tecal RH2285; vV100R001 (Main Server Chassis)
Service Tag | B4001897
Release | Red Hat Enterprise Linux Server release 5.4 (Tikanga)
Kernel | 2.6.18-164.el5
Architecture | CPU = 64-bit, OS = 64-bit
Threading | NPTL 2.5
Compiler | GNU CC version 4.1.2 (Red Hat 4.1.2-44).
SELinux | Disabled
# Processor ##################################################
Processors | physical = 2, cores = 8, virtual = 16, hyperthreading = yes
Speeds | 16x
Models | 16xIntel(R) Xeon(R) CPU E5620 @ 2.40GHz
Caches | 16x12288 KB
# Memory #####################################################
Total | 23.53G
Free | 8.40G
Used | physical = 15.13G, swap = 20.12M, virtual = 15.15G
Buffers | 863.79M
Caches | 4.87G
Used | 8.70G
Swappiness | vm.swappiness = 60
DirtyPolicy | vm.dirty_ratio = 40, vm.dirty_background_ratio = 10
  Locator   Size     Speed             Form Factor  Type           Type Detail
  ========= ======== ================= ============ ============== ===========
  (6 DIMMs)          1066 MHz (0.9 ns) DIMM         {OUT OF SPEC}  Other
  (plus 6 empty slots)
Erlang R14B03 (erts-5.8.4) [64-bit] [smp:16:16] [rq:16] [async-threads:0] [hipe] [kernel-poll:false]
Eshell V5.8.4  (abort with ^G)
1> erlang:system_info(cpu_topology).
[{node,[{processor,[{core,[{thread,{logical,4}},
{thread,{logical,12}}]},
{core,[{thread,{logical,5}},{thread,{logical,13}}]},
{core,[{thread,{logical,6}},{thread,{logical,14}}]},
{core,[{thread,{logical,7}},{thread,{logical,15}}]}]}]},
{node,[{processor,[{core,[{thread,{logical,0}},
{thread,{logical,8}}]},
{core,[{thread,{logical,1}},{thread,{logical,9}}]},
{core,[{thread,{logical,2}},{thread,{logical,10}}]},
{core,[{thread,{logical,3}},{thread,{logical,11}}]}]}]}]
This is a Huawei-built machine with two E5620 CPUs, 16 logical CPUs in total; logical CPUs 4-7 and 12-15 map to physical CPU0, and logical CPUs 0-3 and 8-11 map to physical CPU1.
$ ./run_mbw.sh 4 5 1
The memory bandwidth at this point:
[Screenshot: vtbwrun output]
From the figure, the 2 mbw instances bound to CPU0 consume 10.4 GB/s of bandwidth, with nothing on CPU1 and nothing on QPI: exactly as expected.
Keep adding load:
$ ./run_mbw.sh 6 7 1
Memory consumption reaches 16 GB/s; push harder.
$ ./run_mbw.sh 12 15 1
The thread count goes up but memory bandwidth holds steady, which means we have hit the ceiling: this CPU tops out at about 16 GB/s.
Let's try the same on the other CPU, CPU1:
$ ./run_mbw.sh 0 3 1
$ ./run_mbw.sh 8 11 1
All the load applied at once; see the screenshot:
[Screenshot: vtbwrun with both sockets loaded]
From the figure, CPU0 and CPU1 each consume close to 15 GB/s, for a total of about 30 GB/s.
At this point it is quite clear how to calculate our servers' memory bandwidth and how to measure how much of it is in use. Fun, isn't it?
Many thanks to the Internet for letting us solve problems this quickly.
BTW: the measured bandwidth is a factor of two below theory. Did I miscalculate somewhere? Experts, please set me straight, thanks!
Postscript: the Huawei machine's memory runs at 1066 MHz, so the theoretical per-channel bandwidth is 1066 MT/s x 8 bytes ≈ 8.52 GB/s, or 25.56 GB/s per CPU.
The screenshot shows CPU0 reaching 17.17 GB/s, which is 67.2% of the theoretical peak.
That indirectly confirms the DDR bandwidth arithmetic; thanks again to @王王争.
Have fun!