
LeetCode solutions
You have solved 277/277 problems. My two main goals: keep the code short, and keep the time and space complexity low.
4Sum
Used a multimap; time complexity O(n^2*log(n)); no deduplication pass is needed after collecting all the answers.
Valid quadruples fall into three classes:
a<=b<c<=d: enumerate c and d, then check whether a pair summing to target-c-d exists
a<=b=c<=d: enumerate b and d, then check whether target-b-b-d exists
a=b=c<=d: enumerate a, then check whether target-a-a-a exists
Count the three classes separately; a careful implementation never produces the same candidate twice, so no deduplication is needed.
Alien Dictionary
For each pair of words, after removing their longest common prefix, the next pair of letters gives an ordering between those two letters. The longest common prefix of any two words equals the minimum of the longest common prefixes of all adjacent word pairs between them, so the information from adjacent pairs already implies the information for every pair. It therefore suffices to derive the topological ordering from adjacent word pairs only.
Basic Calculator
This is an operator-precedence grammar and can be evaluated with an operator-precedence parser. A special case of this family of algorithms, the shunting-yard algorithm, fits this problem. For convenience, I assume an implicit NUL ('\0') character at both the beginning and the end of the string. The grammar used is:

S := %x00 E %x00
E := E "+" E
E := E "-" E
E := ( E )
E := 1*DIGIT

When comparing two operators, distinguish the left-hand side from the right-hand side: the left-hand operator is the one processed earlier, the right-hand operator is the incoming one. Compare the in-stack precedence (isp) of the left-hand operator with the incoming precedence (icp) of the right-hand operator. An immediate value can be treated as having a very large icp, so it is reduced immediately after being shifted.
A linear-time solution; see the linked reference.
Closest Binary Search Tree Value II
Problem: find the k nodes of a BST whose keys are closest to a target value. All nodes with keys less than or equal to the target can be represented by a set of subtrees with their right children removed; these subtrees lie on a single chain and can be maintained with a stack, which yields an iterator over predecessors. Producing the k predecessors of the target takes \(O(depth+k)\). Similarly, the nodes greater than the target can be organized into another iterator. Merging the two iterators yields the k closest values.
Contains Duplicate III
Partition the elements into buckets of width t+1: two elements in the same bucket differ by at most t, and a pair with difference at most t can also straddle two adjacent buckets. For each element of the vector, compute the index j of the bucket it falls into and inspect the three buckets j-1, j, j+1 to check whether any of the preceding elements still inside the index window differs from it by at most t.
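As a concrete illustration, here is a minimal C++ sketch of the bucket idea, assuming the usual LeetCode signature containsNearbyAlmostDuplicate(nums, k, t); the window bookkeeping keeps the last k elements.

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>
using namespace std;

bool containsNearbyAlmostDuplicate(vector<int>& nums, int k, int t) {
  if (k < 1 || t < 0) return false;
  int64_t w = (int64_t)t + 1;                        // bucket width
  auto id = [&](int64_t x) { return x >= 0 ? x / w : (x + 1) / w - 1; };
  unordered_map<int64_t, int64_t> bucket;            // bucket id -> value stored there
  for (int i = 0; i < (int)nums.size(); i++) {
    int64_t x = nums[i], j = id(x);
    if (bucket.count(j)) return true;                // same bucket: difference <= t
    if (bucket.count(j - 1) && x - bucket[j - 1] <= t) return true;
    if (bucket.count(j + 1) && bucket[j + 1] - x <= t) return true;
    bucket[j] = x;
    if (i >= k) bucket.erase(id(nums[i - k]));       // keep a window of the last k elements
  }
  return false;
}
```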
Course Schedule
Topological sort. A vertex becomes a candidate once its in-degree drops to 0; the in-degree vector itself can be threaded into a singly linked list of candidates, so no extra queue or stack is needed, reducing the space used.
Dungeon Game
Dynamic programming: for each cell, record the minimum HP needed to reach the end starting from that cell, and recur from (m-1,n-1) back to (0,0).
Note that each cell really has two states: entering the cell (before its effect is applied) and leaving it (after its effect is applied). Either formulation works. Since the program usually has three parts, initializing the boundary conditions, the state transition, and extracting the answer from the state, it is worth weighing the two if you want short code; here they end up about the same length.
Expression Add Operators
Find Minimum in Rotated Sorted Array
The method is binary search; to keep the code short, the following observations help. If a subarray contains adjacent elements with a[i] > a[i+1], then a[i+1] is the minimum of the whole array. If a subarray a[i..j] satisfies a[i] > a[j], then there exists i <= k < j with a[k] > a[k+1]. So for the subarray a[l..h], check whether a[l] > a[m] or a[m] > a[h] holds; if either does, half of the candidates can be discarded. Otherwise the inversion is the wrap-around pair (a[h], a[l]), i.e. a[l] is the minimum.
Find Minimum in Rotated Sorted Array II
Watch out for a few special cases, such as [1,0,1,1,1] and [1,1,1,0,1].
For the subarray a[l..h], let m = floor((l+h)/2) (note that m may equal l). Check whether a[l] > a[m] or a[m] > a[h] holds; if either does, half of the candidates can be discarded. If neither holds, then a[l] <= a[m] <= a[h]: if a[h] > a[l] then a[l] is the minimum; otherwise a[l] = a[m] = a[h], and the range can be narrowed to a[l+1..h-1].
Another approach compares only a[m] with a[h].
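A minimal sketch of the "compare a[m] with a[h]" variant mentioned above; it also copes with duplicates, so it covers the II version:

```cpp
#include <vector>
using namespace std;

int findMin(vector<int>& a) {
  int l = 0, h = (int)a.size() - 1;
  while (l < h) {
    int m = l + (h - l) / 2;
    if (a[m] < a[h]) h = m;            // minimum lies in a[l..m]
    else if (a[m] > a[h]) l = m + 1;   // minimum lies in a[m+1..h]
    else h--;                          // a[m] == a[h]: drop one duplicate safely
  }
  return a[l];
}
```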
Find Peak Element
Binary search: repeatedly shrink the interval to a subinterval that still contains a candidate peak.

int l = 0, h = a.size();
while (l < h-1) {
  int m = l+h >> 1;
  if (a[m-1] > a[m]) h = m;
  else if (m+1 == h || a[m] > a[m+1]) l = h = m;
  else l = m+1;
}
return l;
First Missing Positive
O(1) space complexity.
Fraction to Recurring Decimal
Note that INT_MIN/(-1) overflows.
Graph Valid Tree
Criteria for deciding whether a graph on \(n\) nodes is a tree:
It has \(n-1\) edges and contains no simple cycle; this can be checked with the union-find algorithm.
It is connected and contains no simple cycle; this can be checked with a graph traversal.
Invert Binary Tree
Take any binary tree traversal algorithm and replace the node visit with swapping the left and right children. For O(1) space, borrow the idea of Morris pre/in-order traversal, whose core trick is to tell how many times the current node has been reached, and hence what to do next (explore the left subtree or visit the node), by checking whether the right child of the rightmost node of the left subtree points back to the current node. Swapping subtrees, however, moves that rightmost node, so neither the pre-order nor the in-order variant works, but post-order does. Morris post-order traversal (perhaps not the standard name) is described in the linked reference; whereas the pre/in-order visit applies to a single node, the post-order visit processes the right chain of the current node's left subtree and needs two reversals to emit nodes in post-order. Here we only need every node to be visited once, not strict post-order, so the reversals can be dropped.
O(1) space complexity.
Jump Game II
Dynamic programming optimized with an output-restricted queue; since the DP values are monotonically increasing, a BFS-like sweep can be used, giving O(1) space complexity.
Linked List Cycle II
Cycle detection in a singly linked list: Brent's cycle detection algorithm or Floyd's cycle detection algorithm.
If you only need to decide whether a cycle exists, without locating its start, pointer reversal also works: reverse the list while traversing it; reaching NULL means there is no cycle, while arriving back at the head means there is one. If there is a cycle, its direction is reversed after the algorithm finishes, which can be undone by running the reversal once more. Here is an implementation of this method:
bool pointerReversal(ListNode *head) {
  if (! head) return false;
  ListNode *x = head->next, *y, *z = head;
  while (x && x != head) {
    y = x->next;
    x->next = z;   // reverse the link as we walk
    z = x;
    x = y;
  }
  return x == head;
}
Longest Palindromic Substring
Manacher's algorithm computes, in linear time, the longest palindromic substring centered at every position. The linked description feels the clearest to me, but for the implementation I prefer the one I saw in a stringology book.
Majority Element
Boyer-Moore majority vote algorithm: repeatedly find two distinct elements and discard both; doing so never loses the answer. When only one kind of element is left in the container, it is the majority.
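A minimal sketch of the vote, in the usual candidate/counter form of the pair-discarding idea described above:

```cpp
#include <vector>
using namespace std;

int majorityElement(vector<int>& a) {
  int cand = 0, cnt = 0;
  for (int x : a) {
    if (cnt == 0) { cand = x; cnt = 1; }   // start a new candidate
    else if (x == cand) cnt++;
    else cnt--;                            // discard one candidate copy and x
  }
  return cand;                             // a majority is assumed to exist
}
```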
Majority Element II
An extension of the Boyer-Moore majority vote algorithm that finds all elements occurring more than floor(N/K) times: repeatedly find K pairwise-distinct elements and discard them; whatever remains is the set of candidates. Time complexity O(N*K).
Maximum Gap
A pigeonhole argument: compute the range d, then split the values into buckets of size ceil(d/(n-1)); the answer must be the difference of two elements in different buckets.
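A minimal sketch of the bucket argument, assuming the LeetCode signature maximumGap(nums): with bucket width ceil(d/(n-1)) the answer never comes from two elements of the same bucket, so it suffices to track each bucket's min and max and scan adjacent non-empty buckets.

```cpp
#include <algorithm>
#include <climits>
#include <vector>
using namespace std;

int maximumGap(vector<int>& a) {
  int n = (int)a.size();
  if (n < 2) return 0;
  int lo = *min_element(a.begin(), a.end());
  int hi = *max_element(a.begin(), a.end());
  if (lo == hi) return 0;
  long long w = ((long long)hi - lo + n - 2) / (n - 1);   // ceil(d/(n-1))
  int m = (int)(((long long)hi - lo) / w) + 1;            // number of buckets
  vector<int> bmin(m, INT_MAX), bmax(m, INT_MIN);
  for (int x : a) {
    int j = (int)((x - (long long)lo) / w);
    bmin[j] = min(bmin[j], x);
    bmax[j] = max(bmax[j], x);
  }
  int ans = 0, prev = bmax[0];                            // bucket 0 holds lo, so it is non-empty
  for (int j = 1; j < m; j++) {
    if (bmin[j] == INT_MAX) continue;                     // skip empty buckets
    ans = max(ans, bmin[j] - prev);
    prev = bmax[j];
  }
  return ans;
}
```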
Maximum Subarray
Kadane’s algorithm
Meeting Rooms II
In this problem two segments conflict iff their overlap has length greater than 0. This is an interval graph and the quantity asked for is its chromatic number; interval graphs are a subclass of perfect graphs, so the chromatic number equals the size of a maximum clique, i.e. the maximum number of segments overlapping at a point, which can be computed by sorting all the endpoints. Alternatively, sort by start point and use a greedy algorithm: maintain a collection of segment sets, consider the segments in increasing order of start point, put each one into any set it does not conflict with, and open a new set if there is none. The number of sets at the end is the answer. In the implementation, represent each set by its largest right endpoint and keep the sets in a min binary heap: if the current segment's start point is smaller than the root, push a new element, otherwise update the root.
Merge k Sorted Lists
Tournament sort, which can be implemented with a priority_queue.
Min Stack
Use two stacks. The original stack S holds all the elements; whenever a new element is less than or equal to the current minimum, push a copy of it onto a second stack S'. On pop, if the popped element is also on top of S', pop it from S' as well.
Another method records each new element's difference from the current minimum, so each element needs only 1 extra bit. Unfortunately the C++ version gets Memory Limit Exceeded, which feels unreasonable.
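A minimal sketch of the two-stack variant described first (the difference-encoding trick is not shown):

```cpp
#include <stack>
using namespace std;

class MinStack {
  stack<int> s, s2;                 // s2 keeps a copy of every running minimum
public:
  void push(int x) {
    s.push(x);
    if (s2.empty() || x <= s2.top()) s2.push(x);
  }
  void pop() {
    if (s.top() == s2.top()) s2.pop();
    s.pop();
  }
  int top() { return s.top(); }
  int getMin() { return s2.top(); }
};
```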
Minimum Window Substring
Missing Number
Scan from left to right; whenever an element does not equal its index, swap it to its correct position and repeat.
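A minimal sketch of the in-place swapping idea, assuming the LeetCode signature missingNumber(nums) with n distinct values drawn from 0..n:

```cpp
#include <algorithm>
#include <vector>
using namespace std;

int missingNumber(vector<int>& a) {
  int n = (int)a.size();
  for (int i = 0; i < n; i++)
    while (a[i] < n && a[i] != i)
      swap(a[i], a[a[i]]);          // each swap places one value into its home slot
  for (int i = 0; i < n; i++)
    if (a[i] != i) return i;        // first slot whose value is not its index
  return n;                         // 0..n-1 all present, so n is missing
}
```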
Use bitmasks to record the columns and the two diagonal directions that are already attacked.
Number of 1 Bits
Compute the popcount: __builtin_popcount(), or any of the bit twiddling hacks found online.
One Edit Distance
An edit distance of 1 means one of three cases: S has one more character than T, T has one more character than S, or S and T have the same length and differ in exactly one position. After stripping the longest common prefix and the longest common suffix, all three checks reduce to testing whether the longer of the remaining strings has length 1.
Palindrome Partitioning
A dynamic program of the form f[i][j] = calc(f[ii][jj] : i <= ii <= jj <= j) can be computed as follows.

for (int i = n; --i >= 0; )
  for (int j = i; j < n; j++) {
    // calc [i,j]
  }
Palindrome Partitioning II
Compute the minimum number of palindromes the string can be partitioned into. O(n^2) time; the space can be reduced to O(n).
I could not find an algorithm faster than O(n^2). A related problem is testing whether a string can be split into k palindromes: for k <= 4 there are linear-time algorithms (see Theorem 8.17 of the book Text Algorithms); for k > 4 I am not sure.
Perfect Squares
By Lagrange's four-square theorem the answer is between 1 and 4, and Legendre's three-square theorem decides whether it is 4. If it is less than 4, an \(O(\sqrt{n})\) two-pointer scan checks whether n is a perfect square or a sum of two perfect squares; if neither, the answer is 3.
另有期望\(O((\log n)^2)\)的算法,参见M. O. Rabin, J. O. Shallit, Randomized Algorithms in Number Theory, Communications on Pure and Applied Mathematics 39 (1986), no. S1, pp. S239–S256. doi:10.1002/cpa.
Recover Binary Search Tree
Use Morris in-order traversal to find the adjacent out-of-order pairs.
If the two swapped elements are adjacent in the in-order sequence there is one adjacent inversion (e.g. 0 1 2 3 4 -> 0 1 3 2 4); otherwise there are two (e.g. 0 1 2 3 4 5 -> 0 4 2 3 1 5).
Remove Nth Node From End of List
Using pointers-to-pointers often simplifies the implementation.
Regular Expression Matching
Let P be the pattern and T the text; f[i][j] records whether the first i characters of P can match the first j characters of T. Compute f[i][*] from f[i-1][*]. This can also be viewed as running a compactly represented Thompson's automaton, with time complexity O(|P|*|T|).
Remove Element
Modify the array in place; I like to call this kind of operation deflate.
Repeated DNA Sequences
A Rabin-Karp-style rolling hash works: compute the hash of every substring of length 10 and check for repetitions. With only four characters and length 10, a polynomial hash code suffices. Alternatively, use a suffix tree or suffix array and look for two suffixes whose longest common prefix is at least 10. Be careful not to output the same substring twice.
Reverse Bits
Hacker’s Delight (2nd) 7.1 Reversing Bits and Bytes。
Rotate Array
The classic three-reversal trick, or split the array into gcd(k,n) cycles and rotate each cycle by one position.
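A minimal sketch of the three-reversal variant (the gcd(k,n)-cycle variant is not shown):

```cpp
#include <algorithm>
#include <vector>
using namespace std;

void rotate(vector<int>& a, int k) {
  int n = (int)a.size();
  if (n == 0) return;
  k %= n;
  reverse(a.begin(), a.end());          // whole array
  reverse(a.begin(), a.begin() + k);    // first k elements
  reverse(a.begin() + k, a.end());      // remaining n-k elements
}
```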
Similar to the same fringe problem; one can experiment with generators, coroutines, lazy evaluation, continuations, and so on. For O(1) space complexity, use Morris in-order traversal or a similar technique.
Set Matrix Zeroes
O(1) space complexity. Use two flags to record whether row 0 and column 0 need to be cleared; then use row 0 to mark which columns need clearing and column 0 to mark which rows need clearing.
lower_bound from C++ <algorithm>.
Shortest Palindrome
Find the longest palindromic prefix of the string, which can be done with Manacher's algorithm; copy the remaining part, reverse it, and prepend it to obtain the shortest palindrome. Another way to find the longest palindromic prefix: concatenate the string, a separator character, and the reversed string, then compute the border with the Morris-Pratt algorithm; the border length is the length of the longest palindromic prefix.
Single Number III
The xor of all the numbers equals the xor of the two numbers that appear once; call it k. Then k & -k has a 1 in some bit where the two numbers differ. Partition all elements by that bit into two groups, each of which contains exactly one number that appears once.
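A minimal sketch of the xor-partition idea; x is accumulated in a 64-bit integer so that x & -x is well defined even when x equals INT_MIN:

```cpp
#include <vector>
using namespace std;

vector<int> singleNumber(vector<int>& a) {
  long long x = 0;
  for (int v : a) x ^= v;            // xor of the two unique numbers
  long long low = x & -x;            // a bit where the two numbers differ
  int p = 0, q = 0;
  for (int v : a) {
    if (v & low) p ^= v;             // group with the bit set
    else q ^= v;                     // group with the bit clear
  }
  return {p, q};
}
```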
Sort Colors
The Dutch national flag problem. If the requirements are relaxed, the Bentley-McIlroy three-way partitioning scheme uses fewer swaps.
Hacker's Delight (2nd) 11.1.1. Newton's method. 46340 = floor(sqrt(INT_MAX)).
Scramble String
I used O(n^3) space and O(n^4) time; there should be a better algorithm, but I have not found one in the literature I checked.
I used a rather involved natural merge sort; a quicksort implementation would be much simpler.
Strobogrammatic Number III
Define a function that counts the strobogrammatic numbers smaller than a given number; the answer is then obtained by subtraction. To compute this function, note that the strobogrammatic numbers smaller than the given number high fall into two classes:
Those with fewer digits, which can be viewed as having leading zeros; these are easy to count.
Those with the same number of digits. Enumerate the length of the common prefix with high, then count the strobogrammatic numbers whose next digit is smaller than the corresponding digit of high. Since an \(n\)-digit strobogrammatic number has only \(ceil(n/2)\) independent digits, the enumerated common prefix length must be less than \(ceil(n/2)\).
Sudoku Solver
Reduce it to an exact cover problem and solve it with dancing links + Algorithm X.
Trapping Rain Water
Two pointers squeezing inward from both ends; O(1) space.
A dynamic-programming-like idea: subtrees can be reused.
Wiggle Sort
Scan the adjacent pairs from left to right; if a pair violates the required order, swap the two elements. The swap never breaks the relation of the previous pair.
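A minimal sketch of that single pass; it produces a[0] <= a[1] >= a[2] <= a[3] ... in place:

```cpp
#include <algorithm>
#include <vector>
using namespace std;

void wiggleSort(vector<int>& a) {
  for (int i = 0; i + 1 < (int)a.size(); i++)
    // even positions want a[i] <= a[i+1], odd positions want a[i] >= a[i+1]
    if ((i % 2 == 0) ? (a[i] > a[i + 1]) : (a[i] < a[i + 1]))
      swap(a[i], a[i + 1]);
}
```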
The decreasing runs of the preorder sequence form left-child chains of the BST.
Maximal Rectangle
Scan the board row by row; for each row \(i\) maintain three quantities:
\(h_j\): the number of consecutive 1s in column \(j\) ending at row \(i\): h[j] = a[i][j] == 0 ? 0 : h[j]+1;
If \(a_{i,j}=1\), let \(l_j\) be \(min(k : h_{k+1\ldots j}\ge h_j)\); if \(a_{i,j}=0\), set it to \(j\)
If \(a_{i,j}=1\), let \(r_j\) be \(max(k : h_{j\ldots k-1}\ge h_j)\); if \(a_{i,j}=0\), set it to \(j\)
\(l\) is computed as follows:

if a[i][j] == 0:
  l[j] = j
elif h[j] == 1:
  l[j] = j - (length of the run of 1s ending at a[i][j]) + 1
else:
  # the l[j] on the right-hand side below is the l[j] computed for row i-1
  l[j] = min(l[j], j - (length of the run of 1s ending at a[i][j]) + 1)

\(r\) is computed analogously. For each column \(j\), \((r_j-l_j+1)\times h_j\) is a candidate answer.
#define ROF(i, a, b) for (int i = (b); --i >= (a); )
#define FOR(i, a, b) for (int i = (a); i < (b); i++)
#define REP(i, n) for (int i = 0; i < (n); i++)

class Solution {
public:
  int maximalRectangle(vector<vector<char> > &a) {
    if (a.empty()) return 0;
    int m = a.size(), n = a[0].size(), ans = 0;
    vector<int> h(n), l(n), r(n, n-1);
    REP(i, m) {
      int ll = -1;
      REP(j, n) {
        h[j] = a[i][j] == '1' ? h[j]+1 : 0;
        if (a[i][j] == '0') ll = j;
        l[j] = h[j] ? max(h[j] == 1 ? 0 : l[j], ll+1) : j;
      }
      int rr = n;
      ROF(j, 0, n) {
        if (a[i][j] == '0') rr = j;
        r[j] = h[j] ? min(h[j] == 1 ? n-1 : r[j], rr-1) : j;
        ans = max(ans, (r[j]-l[j]+1)*h[j]);
      }
    }
    return ans;
  }
};
潘宇超 2008
#define ROF(i, a, b) for (int i = (b); --i >= (a); )
#define FOR(i, a, b) for (int i = (a); i < (b); i++)
#define REP(i, n) for (int i = 0; i < (n); i++)

class Solution {
public:
  int maximalRectangle(vector<vector<char> > &a) {
    if (a.empty()) return 0;
    int m = a.size(), n = a[0].size(), ans = 0;
    vector<int> h(n), l(n), r(n, n-1);
    REP(i, m) {
      REP(j, n) {
        h[j] = a[i][j] == '1' ? h[j]+1 : 0;
        l[j] = j;
        while (l[j] && h[l[j]-1] >= h[j])
          l[j] = l[l[j]-1];
      }
      ROF(j, 0, n) {
        r[j] = j;
        while (r[j]+1 < n && h[j] <= h[r[j]+1])
          r[j] = r[r[j]+1];
        ans = max(ans, (r[j]-l[j]+1)*h[j]);
      }
    }
    return ans;
  }
};
ACRush, from a TopCoder SRM
Scan the board row by row; for each row maintain \(h_j\): the number of consecutive 1s in column \(j\) ending at that row.
For each row \(i\), sort the columns by \(h\) in decreasing order and compute candidate answers:

x = [0,0,...,0]
foreach j, h[j]:  # iterate over h[j] in decreasing order
  x[j] = 1
  ans = max(ans, (max(k : x[j..k] are all 1) - min(k : x[k..j] are all 1) + 1) * h[j])

The pseudocode above can be evaluated in \(O(n)\) time per row. View the elements of \(x\) that are set to 1 as a collection of sets, with adjacent elements belonging to the same set. Setting an element of \(x\) to 1 creates a new set, which is merged with the neighboring sets on its left and right if they exist.
Disjoint sets could be handled with the union-find algorithm, but this special case has an \(O(n)\) method: in each set (a maximal run of 1s) the left end and the right end point to each other, and the pointers of the other elements are arbitrary. When a new set is created, if the left or right neighboring set exists, update the pointers at their two ends.
#define ROF(i, a, b) for (int i = (b); --i >= (a); )
#define FOR(i, a, b) for (int i = (a); i < (b); i++)
#define REP(i, n) for (int i = 0; i < (n); i++)
#define REP1(i, n) for (int i = 1; i <= (n); i++)

class Solution {
public:
  int maximalRectangle(vector<vector<char> > &a) {
    if (a.empty()) return 0;
    int m = a.size(), n = a[0].size(), ans = 0;
    vector<int> h(n), p(n), b(m+1), s(n);
    REP(i, m) {
      REP(j, n)
        h[j] = a[i][j] == '1' ? h[j]+1 : 0;
      // counting sort: s[] lists the columns in ascending order of h[j]
      fill(b.begin(), b.end(), 0);
      REP(j, n)
        b[h[j]]++;
      REP1(j, m)
        b[j] += b[j-1];
      REP(j, n)
        s[--b[h[j]]] = j;
      // p[] links the two ends of every run of activated columns
      fill(p.begin(), p.end(), -1);
      ROF(j, 0, n) {
        int x = s[j], l = x, r = x;
        if (x && p[x-1] != -1) {
          l = p[x-1];
        }
        if (x+1 < n && p[x+1] != -1) {
          r = p[x+1];
        }
        p[l] = r;
        p[r] = l;
        ans = max(ans, (r-l+1)*h[x]);
      }
    }
    return ans;
  }
};
Maintaining the histogram with a stack
Scan the board row by row; for each row maintain \(h_j\): the number of consecutive 1s in column \(j\) ending at that row.
For each row \(i\), maintain a stack of \(h\) values that increase from bottom to top. Scan the columns \(j\) from left to right; before pushing \(h_j\), pop elements to restore the stack property; after the pops, the top of the stack marks how far left column \(j\) can "see"; then push \(h_j\).
Other methods
Enumerate two columns and find the largest all-1 subrectangle between them: \(O(n^3)\).
Let \(s_{i,j}\) be the number of 1s in the rectangle spanned by the top-left corner of the matrix and \((i,j)\). Enumerate every subrectangle \(((i,j),(ii,jj))\), count the 1s inside it as \(s_{ii,jj}-s_{ii,j-1}-s_{i-1,jj}+s_{i-1,j-1}\), and check whether it is all 1s: \(O(n^4)\).
Treat the 0s as obstacle points and look for the largest subrectangle containing none of them. Sort the obstacle points by x-coordinate. Enumerate an obstacle point as the left boundary of a candidate rectangle and sweep the obstacle points to its right, maintaining the obstacle points whose y-coordinates are closest above and below the enumerated point; the enumerated point, the sweep line, and the upper and lower obstacle points determine the boundary of a candidate rectangle. If the number of 0s is \(m\), this takes \(O(m^2)\), worst case \(O(n^4)\).
copy($('td:has(.ac) ~ td:nth-child(3) a').map(function(_,e){
var id = $(e).parent().prev().text();
var h=e.href.replace(/htt.*problems/,'/leetcode');h=h.substr(0,h.length-1);
var title=e.textContent, href=e.href, name=h.replace(/.*\//,'');
return '|'+id+'|['+title+']('+href+')|['+name+'.cc]('+name+'.cc)|'}).toArray().join('\n'))
require 'mechanize'
agent = Mechanize.new
page = agent.get '/accounts/login/'
doc = page.form_with {|form|
  form['login'] = 'MaskRay'
  form['password'] = getpass
}.submit.parser
total = doc.css('td:nth-child(3)').size
solved = doc.css('td:has(.ac)').size
puts "You have solved #{solved}/#{total} problems."
for a in doc.css('td:nth-child(3) a')
  id = a.parent.previous_element.text
  href = a['href']
  name = href.sub(/\/problems\/(.*)\//, '\1')
  title = a.text
  puts "|#{id}|[#{title}](#{href})|[#{name}.cc](#{name}.cc)|"
end
Cisco IOS Quality of Service Solutions Configuration Guide, Release 12.2 - Policing and Shaping Overview [Cisco IOS Software Releases 12.2 Mainline]
Policing and Shaping Overview
Cisco IOS QoS offers two kinds of traffic regulation mechanisms—policing and shaping.
The rate-limiting features of committed access rate (CAR) and the Traffic Policing feature provide the functionality for policing traffic. The features of Generic Traffic Shaping (GTS), Class-Based Shaping, Distributed Traffic Shaping (DTS), and Frame Relay Traffic Shaping (FRTS) provide the functionality for shaping traffic.
Note To identify the hardware platform or software image information associated with a feature, use the Feature Navigator . You can access Feature Navigator at.
You can deploy these features throughout your network to ensure that a packet, or data source, adheres to a stipulated contract and to determine the QoS to render the packet. Both policing and shaping mechanisms use the traffic descriptor for a packet—indicated by the classification of the packet—to ensure adherence and service. (See the chapter
in this book for a description of a traffic descriptor.)
Policers and shapers usually identify traffic descriptor violations in an identical manner. They usually differ, however, in the way they respond to violations, for example:
oA policer typically drops traffic. (For example, the CAR rate-limiting policer will either drop the packet or rewrite its IP precedence, resetting the type of service bits in the packet header.)
oA shaper typically delays excess traffic using a buffer, or queueing mechanism, to hold packets and shape the flow when the data rate of the source is higher than expected. (For example, GTS and Class-Based Shaping use a weighted fair queue to delay packets in order to shape the flow, and DTS and FRTS use either a priority queue, a custom queue, or a FIFO queue for the same, depending on how you configure it.)
Traffic shaping and policing can work in tandem. For example, a good traffic shaping scheme should make it easy for nodes inside the network to detect misbehaving flows. This activity is sometimes called policing the traffic of the flow.
This chapter gives a brief description of the Cisco IOS QoS traffic policing and shaping mechanisms. Because policing and shaping all use the token bucket mechanism, this chapter first explains how a token bucket works. This chapter includes the following sections:
A token bucket is a formal definition of a rate of transfer. It has three components: a burst size, a mean rate, and a time interval (Tc). Although the mean rate is generally represented as bits per second, any one of these values may be derived from the other two by the relation shown as follows:
mean rate = burst size / time interval
Here are some definitions of these terms:
oMean rate—Also called the committed information rate (CIR), it specifies how much data can be sent or forwarded per unit time on average.
oBurst size—Also called the Committed Burst (Bc) size, it specifies in bits (or bytes) per burst how much traffic can be sent within a given unit of time to not create scheduling concerns. (For a shaper, such as GTS, it specifies bits per burst; for a policer, such as CAR, it specifies bytes per burst.)
oTime interval—Also called the measurement interval, it specifies the time quantum in seconds per burst.
By definition, over any integral multiple of the interval, the bit rate of the interface will not exceed the mean rate. The bit rate, however, may be arbitrarily fast within the interval.
A token bucket is used to manage a device that regulates the data in a flow. For example, the regulator might be a traffic policer, such as CAR, or a traffic shaper, such as FRTS or GTS. A token bucket itself has no discard or priority policy. Rather, a token bucket discards tokens and leaves to the flow the problem of managing its transmission queue if the flow overdrives the regulator. (Neither CAR nor FRTS and GTS implement either a true token bucket or true leaky bucket.)
In the token bucket metaphor, tokens are put into the bucket at a certain rate. The bucket itself has a specified capacity. If the bucket fills to capacity, newly arriving tokens are discarded. Each token is permission for the source to send a certain number of bits into the network. To send a packet, the regulator must remove from the bucket a number of tokens equal in representation to the packet size.
If not enough tokens are in the bucket to send a packet, the packet either waits until the bucket has enough tokens (in the case of GTS) or the packet is discarded or marked down (in the case of CAR). If the bucket is already full of tokens, incoming tokens overflow and are not available to future packets. Thus, at any time, the largest burst a source can send into the network is roughly proportional to the size of the bucket.
Note that the token bucket mechanism used for traffic shaping has both a token bucket and a data buffer; if it did not have a data buffer, it would be a policer. For traffic shaping, packets that arrive that cannot be sent immediately are delayed in the data buffer.
For traffic shaping, a token bucket permits burstiness but bounds it. It guarantees that the burstiness is bounded so that the flow will never send faster than the token bucket's capacity, divided by the time interval, plus the established rate at which tokens are placed in the token bucket. See the following formula:
(token bucket capacity in bits / time interval in seconds) + established rate in bps =
maximum flow speed in bps
This method of bounding burstiness also guarantees that the long-term transmission rate will not exceed the established rate at which tokens are placed in the bucket.
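To make the conform/exceed decision concrete, here is a small generic token-bucket sketch in C++. It is an illustration of the mechanism described above, not Cisco's implementation; the names, fields, and units are assumptions.

```cpp
#include <algorithm>

struct TokenBucket {
  double rate_bps;      // mean rate: tokens (bits) added per second
  double burst_bits;    // bucket capacity (normal burst size)
  double tokens;        // current fill level, in bits
  double last_time_s;   // timestamp of the last update

  // Returns true if the packet conforms (enough tokens have accumulated).
  bool conform(double now_s, int packet_bits) {
    // Refill the bucket, capped at its capacity.
    tokens = std::min(burst_bits, tokens + (now_s - last_time_s) * rate_bps);
    last_time_s = now_s;
    if (packet_bits <= tokens) {
      tokens -= packet_bits;   // conform: consume tokens and send
      return true;
    }
    return false;              // exceed: a policer would drop or mark down the packet
  }
};
```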
CAR embodies a rate-limiting feature for policing traffic, in addition to its packet classification feature discussed in the chapter
in this book. The rate-limiting feature of CAR manages the access bandwidth policy for a network by ensuring that traffic falling within specified rate parameters is sent, while dropping packets that exceed the acceptable amount of traffic or sending them with a different priority. The exceed action for CAR is to drop or mark down packets.
The rate-limiting function of CAR does the following:
oAllows you to control the maximum rate of traffic sent or received on an interface.
oGives you the ability to define Layer 3 aggregate or granular incoming or outgoing (ingress or egress) bandwidth rate limits and to specify traffic handling policies when the traffic either conforms to or exceeds the specified rate limits.
Aggregate bandwidth rate limits match all of the packets on an interface or subinterface. Granular bandwidth rate limits match a particular type of traffic based on precedence, MAC address, or other parameters.
CAR is often configured on interfaces at the edge of a network to limit traffic into or out of the network.
CAR examines traffic received on an interface or a subset of that traffic selected by access list criteria. It then compares the rate of the traffic to a configured token bucket and takes action based on the result. For example, CAR will drop the packet or rewrite the IP precedence by resetting the type of service (ToS) bits. You can configure CAR to send, drop, or set precedence.
Aspects of CAR rate limiting are explained in the following sections:
CAR utilizes a token bucket measurement. Tokens are inserted into the bucket at the committed rate. The depth of the bucket is the burst size. Traffic arriving at the bucket when sufficient tokens are available is said to conform, and the corresponding number of tokens are removed from the bucket. If a sufficient number of tokens are not available, then the traffic is said to exceed.
Traffic matching entails identification of traffic of interest for rate limiting, precedence setting, or both. Rate policies can be associated with one of the following qualities:
oIncoming interface
oAll IP traffic
oIP precedence (defined by a rate-limit access list)
oMAC address (defined by a rate-limit access list)
oMultiprotocol Label Switching (MPLS) experimental (EXP) value (defined by a rate-limit access list)
oIP access list (standard and extended)
CAR provides configurable actions, such as send, drop, or set precedence when traffic conforms to or exceeds the rate limit.
Note Matching to IP access lists is more processor-intensive than matching based on other criteria.
CAR propagates bursts. It does no smoothing or shaping of traffic, and therefore does no buffering and adds no delay. CAR is highly optimized to run on high-speed links—DS3, for example—in distributed mode on Versatile Interface Processors (VIPs) on the Cisco 7500 series.
CAR rate limits may be implemented either on input or output interfaces or subinterfaces including Frame Relay and ATM subinterfaces.
Rate limits define which packets conform to or exceed the defined rate based on the following three parameters:
oAverage rate. The average rate determines the long-term average transmission rate. Traffic that falls under this rate will always conform.
oNormal burst size. The normal burst size determines how large traffic bursts can be before some traffic exceeds the rate limit.
oExcess Burst size. The Excess Burst (Be) size determines how large traffic bursts can be before all traffic exceeds the rate limit. Traffic that falls between the normal burst size and the Excess Burst size exceeds the rate limit with a probability that increases as the burst size increases.
The maximum number of tokens that a bucket can contain is determined by the normal burst size configured for the token bucket.
When the CAR rate limit is applied to a packet, CAR removes from the bucket tokens that are equivalent in number to the byte size of the packet. If a packet arrives and the byte size of the packet is greater than the number of tokens available in the standard token bucket, extended burst capability is engaged if it is configured.
Extended burst is configured by setting the extended burst value greater than the normal burst value. Setting the extended burst value equal to the normal burst value excludes the extended burst capability. If extended burst is not configured, given the example scenario, the exceed action of CAR takes effect because a sufficient number of tokens are not available.
When extended burst is configured and this scenario occurs, the flow is allowed to borrow the needed tokens to allow the packet to be sent. This capability exists so as to avoid tail-drop behavior, and, instead, engage behavior like that of Random Early Detection (RED).
Here is how the extended burst capability works. If a packet arrives and needs to borrow n number of tokens because the token bucket contains fewer tokens than its packet size requires, then CAR compares the following two values:
oExtended burst parameter value.
oCompounded debt. Compounded debt is computed as the sum over all a_i, where:
–a_i indicates the actual debt value of the flow after packet i is sent. Actual debt is simply a count of how many tokens the flow has currently borrowed.
–i indicates the ith packet that attempts to borrow tokens since the last time a packet was dropped.
If the compounded debt is greater than the extended burst value, the exceed action of CAR takes effect. After a packet is dropped, the compounded debt is effectively set to 0. CAR will compute a new compounded debt value equal to the actual debt for the next packet that needs to borrow tokens.
If the actual debt is greater than the extended limit, all packets will be dropped until the actual debt is reduced through accumulation of tokens in the token bucket.
Dropped packets do not count against any rate or burst limit. That is, when a packet is dropped, no tokens are removed from the token bucket.
Note Though it is true the entire compounded debt is forgiven when a packet is dropped, the actual debt is not forgiven, and the next packet to arrive to insufficient tokens is immediately assigned a new compounded debt value equal to the current actual debt. In this way, actual debt can continue to grow until it is so large that no compounding is needed to cause a packet to be dropped. In effect, at this time, the compounded debt is not really forgiven. This scenario would lead to excessive drops on streams that continually exceed normal burst. (See the example in the following section.)
Testing of TCP traffic suggests that the chosen normal and extended burst values should be on the order of several seconds worth of traffic at the configured average rate. That is, if the average rate is 10 Mbps, then a normal burst size of 10 to 20 Mbps and an Excess Burst size of 20 to 40 Mbps would be appropriate.
Cisco recommends the following values for the normal and extended burst parameters:
normal burst = configured rate * (1 byte)/(8 bits) * 1.5 seconds
extended burst = 2 * normal burst
With the listed choices for parameters, extensive test results have shown CAR to achieve the configured rate. If the burst values are too low, then the achieved rate is often much lower than the configured rate.
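A small worked calculation of the recommended parameters (an illustrative computation, not a Cisco tool): at a configured rate of 10 Mbps the formulas above give a normal burst of 1,875,000 bytes and an extended burst of 3,750,000 bytes.

```cpp
#include <cstdio>

int main() {
  double configured_rate_bps = 10e6;                          // 10 Mbps
  double normal_burst_bytes = configured_rate_bps / 8 * 1.5;  // rate * (1 byte)/(8 bits) * 1.5 s
  double extended_burst_bytes = 2 * normal_burst_bytes;       // 2 * normal burst
  std::printf("normal burst   = %.0f bytes\n", normal_burst_bytes);
  std::printf("extended burst = %.0f bytes\n", extended_burst_bytes);
}
```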
This example shows how the compounded debt is forgiven, but the actual debt accumulates.
For this example, assume the following parameters:
oToken rate is 1 data unit per time unit
oNormal burst size is 2 data units
oExtended burst size is 4 data units
o2 data units arrive per time unit
After 2 time units, the stream has used up its normal burst and must begin borrowing one data unit per time unit, beginning at time unit 3:
Time    DU arrivals    Actual Debt       Compounded Debt
-------------------------------------------------------
1       2              0                 0
2       2              0                 0
3       2              1                 1
4       2              2                 3
5       2              3 (temporary)     6 (temporary)
At this time a packet is dropped because the new compounded debt (6) would exceed the extended burst limit (4). When the packet is dropped, the compounded debt effectively becomes 0, and the actual debt is 2. (The values 3 and 6 were only temporary and do not remain valid in the case where a packet is dropped.) The final values for time unit 5 follow. The stream begins borrowing again at time unit 6.
Time    DU arrivals    Actual Debt       Compounded Debt
-------------------------------------------------------
5       2              2                 0
6       2              3                 3
7       2              4 (temporary)     7 (temporary)
At time unit 7, another packet is dropped and the debt values are adjusted accordingly.
Time    DU arrivals    Actual Debt       Compounded Debt
-------------------------------------------------------
7       2              3                 0
Because CAR utilizes a token bucket, CAR can pass temporary bursts that exceed the rate limit as long as tokens are available.
Once a packet has been classified as conforming to or exceeding a particular rate limit, the router performs one of the following actions on the packet:
oTransmit—The packet is sent.
oDrop—The packet is discarded.
oSet precedence and transmit—The IP Precedence (ToS) bits in the packet header are rewritten. The packet is then sent. You can use this action to either color (set precedence) or recolor (modify existing packet precedence) the packet.
oContinue—The packet is evaluated using the next rate policy in a chain of rate limits. If there is not another rate policy, the packet is sent.
oSet precedence and continue—Set the IP Precedence bits to a specified value and then evaluate the next rate policy in the chain of rate limits.
For VIP-based platforms, two more actions are possible:
oSet QoS group and transmit—The packet is assigned to a QoS group and sent.
oSet QoS group and continue—The packet is assigned to a QoS group and then evaluated using the next rate policy. If there is not another rate policy, the packet is sent.
A single CAR rate policy includes information about the rate limit, conform actions, and exceed actions. Each interface can have multiple CAR rate policies corresponding to different types of traffic. For example, low priority traffic may be limited to a lower rate than high priority traffic. When there are multiple rate policies, the router examines each policy in the order entered until the packet matches. If no match is found, the default action is to send.
Rate policies can be independent: each rate policy deals with a different type of traffic. Alternatively, rate policies can be cascading: a packet may be compared to multiple different rate policies in succession.
Cascading of rate policies allows a series of rate limits to be applied to packets to specify more granular policies (for example, you could rate limit total traffic on an access link to a specified subrate bandwidth and then rate limit World Wide Web traffic on the same link to a given proportion of the subrate limit) or to match packets against an ordered sequence of policies until an applicable rate limit is encountered (for example, rate limiting several MAC addresses with different bandwidth allocations at an exchange point). You can configure up to 100 rate policies on a subinterface.
CAR and VIP-distributed CAR can only be used with IP traffic. Non-IP traffic is not rate limited.
CAR or VIP-distributed CAR can be configured on an interface or subinterface. However, CAR and VIP-distributed CAR are not supported on the following interfaces:
oFast EtherChannel
oAny interface that does not support Cisco Express Forwarding (CEF)
CAR is only supported on ATM subinterfaces with the following encapsulations: aal5snap, aal5mux, and aal5nlpid.
Note CAR provides rate limiting and does not guarantee bandwidth. CAR should be used with other QoS features, such as distributed weighted fair queueing (WFQ) (DWFQ), if premium bandwidth assurances are required.
Traffic policing allows you to control the maximum rate of traffic sent or received on an interface, and to partition a network into multiple priority levels or class of service (CoS).
The Traffic Policing feature manages the maximum rate of traffic through a token bucket algorithm. The token bucket algorithm can use the user-configured values to determine the maximum rate of traffic allowed on an interface at a given moment in time. The token bucket algorithm is affected by all traffic entering or leaving (depending on where the traffic policy with Traffic Policing is configured) and is useful in managing network bandwidth in cases where several large packets are sent in the same traffic stream.
The token bucket algorithm provides users with three actions for each packet: a conform action, an exceed action, and an optional violate action. Traffic entering the interface with Traffic Policing configured is placed in to one of these categories. Within these three categories, users can decide packet treatments. For instance, packets that conform can be configured to be transmitted, packets that exceed can be configured to be sent with a decreased priority, and packets that violate can be configured to be dropped.
Traffic Policing is often configured on interfaces at the edge of a network to limit the rate of traffic entering or leaving the network. In the most common Traffic Policing configurations, traffic that conforms is transmitted and traffic that exceeds is sent with a decreased priority or is dropped. Users can change these configuration options to suit their network needs.
The Traffic Policing feature supports the following MIBs:
oCISCO-CLASS-BASED-QOS-MIB
oCISCO-CLASS-BASED-QOS-CAPABILITY-MIB
This feature also supports RFC 2697, A Single Rate Three Color Marker.
For information on how to configure the Traffic Policing feature, see the chapter
in this book.
Traffic policing allows you to control the maximum rate of traffic sent or received on an interface. Traffic policing is often configured on interfaces at the edge of a network to limit traffic into or out of the network. Traffic that falls within the rate parameters is sent, whereas traffic that exceeds the parameters is dropped or sent with a different priority.
Packet marking allows you to partition your network into multiple priority levels or classes of service (CoS), as follows:
oUse traffic policing to set the IP precedence or differentiated services code point (DSCP) values for packets entering the network. Networking devices within your network can then use the adjusted IP Precedence values to determine how the traffic should be treated. For example, the DWRED feature uses the IP Precedence values to determine the probability that a packet will be dropped.
oUse traffic policing to assign packets to a QoS group. The router uses the QoS group to determine how to prioritize packets.
The following restrictions apply to the Traffic Policing feature:
oOn a Cisco 7500 series router, traffic policing can monitor CEF switching paths only. In order to use the Traffic Policing feature, CEF must be configured on both the interface receiving the packet and the interface sending the packet.
oOn a Cisco 7500 series router, traffic policing cannot be applied to packets that originated from or are destined to a router.
oTraffic policing can be configured on an interface or a subinterface.
oTraffic policing is not supported on the following interfaces:
–Fast EtherChannel
–Any interface on a Cisco 7500 series router that does not support CEF
On a Cisco 7500 series router, CEF must be configured on the interface before traffic policing can be used.
For additional information on CEF, refer to the Cisco IOS Switching Services Configuration Guide.
Cisco IOS QoS software has three types of traffic shaping: GTS, class-based, and FRTS. All three of these traffic shaping methods are similar in implementation, though their CLIs differ somewhat and they use different types of queues to contain and shape traffic that is deferred. In particular, the underlying code that determines whether enough credit is in the token bucket for a packet to be sent or whether that packet must be delayed is common to both features. If a packet is deferred, GTS and Class-Based Shaping use a weighted fair queue to hold the delayed traffic. FRTS uses either a custom queue or a priority queue for the same, depending on what you have configured.
This section explains how traffic shaping works, then it describes the Cisco IOS QoS traffic shaping mechanisms. It includes the following sections:
For description of a token bucket and explanation of how it works, see the section
earlier in this chapter.
Traffic shaping allows you to control the traffic going out an interface in order to match its flow to the speed of the remote target interface and to ensure that the traffic conforms to policies contracted for it. Thus, traffic adhering to a particular profile can be shaped to meet downstream requirements, thereby eliminating bottlenecks in topologies with data-rate mismatches.
The primary reasons you would use traffic shaping are to control access to available bandwidth, to ensure that traffic conforms to the policies established for it, and to regulate the flow of traffic in order to avoid congestion that can occur when the sent traffic exceeds the access speed of its remote, target interface. Here are some example reasons why you would use traffic shaping:
oControl access to bandwidth when, for example, policy dictates that the rate of a given interface should not on the average exceed a certain rate even though the access rate exceeds the speed.
oConfigure traffic shaping on an interface if you have a network with differing access rates. Suppose that one end of the link in a Frame Relay network runs at 256 kbps and the other end of the link runs at 128 kbps. Sending packets at 256 kbps could cause failure of the applications using the link.
A similar, more complicated case would be a link-layer network giving indications of congestion that has differing access rates on different attached DTE; the network may be able to deliver more transit speed to a given DTE device at one time than another. (This scenario warrants that the token bucket be derived, and then its rate maintained.)
oIf you offer a subrate service. In this case, traffic shaping enables you to use the router to partition your T1 or T3 links into smaller channels.
Traffic shaping prevents packet loss. Its use is especially important in Frame Relay networks because the switch cannot determine which packets take precedence, and therefore which packets should be dropped when congestion occurs. Moreover, it is of critical importance for real-time traffic such as Voice over Frame Relay that latency be bounded, thereby bounding the amount of traffic and traffic loss in the data link network at any given time by keeping the data in the router that is making the guarantees. Retaining the data in the router allows the router to prioritize traffic according to the guarantees it is making. (Packet loss can result in detrimental consequences for real-time and interactive applications.)
Traffic shaping limits the rate of transmission of data. You can limit the data transfer to one of the following:
oA specific configured rate
oA derived rate based on the level of congestion
As mentioned, the rate of transfer depends on these three components that constitute the token bucket: burst size, mean rate, measurement (time) interval. The mean rate is equal to the burst size divided by the interval.
When traffic shaping is enabled, the bit rate of the interface will not exceed the mean rate over any integral multiple of the interval. In other words, during every interval, a maximum of burst size can be sent. Within the interval, however, the bit rate may be faster than the mean rate at any given time.
One additional variable applies to traffic shaping: Be size. The Excess Burst size corresponds to the number of noncommitted bits—those outside the CIR—that are still accepted by the Frame Relay switch but marked as discard eligible (DE).
In other words, the Be size allows more than the burst size to be sent during a time interval in certain situations. The switch will allow the packets belonging to the Excess Burst to go through but it will mark them by setting the DE bit. Whether the packets are sent depends on how the switch is configured.
When the Be size equals 0, the interface sends no more than the burst size every interval, achieving an average rate no higher than the mean rate. However, when the Be size is greater than 0, the interface can send as many as Bc + Be bits in a burst, if in a previous time period the maximum amount was not sent. Whenever less than the burst size is sent during an interval, the remaining number of bits, up to the Be size, can be used to send more than the burst size in a later interval.
You can specify which Frame Relay packets have low priority or low time sensitivity and will be the first to be dropped when a Frame Relay switch is congested. The mechanism that allows a Frame Relay switch to identify such packets is the DE bit.
You can define DE lists that identify the characteristics of packets to be eligible for discarding, and you can also specify DE groups to identify the data-link connection identifier (DLCI) that is affected.
You can specify DE lists based on the protocol or the interface, and on characteristics such as fragmentation of the packet, a specific TCP or User Datagram Protocol (UDP) port, an access list number, or a packet size.
As mentioned, GTS, Class-Based Shaping, DTS, and FRTS are similar in implementation, sharing the same code and data structures, but they differ in regard to their CLIs and the queue types they use.
Here are a few ways in which these mechanisms differ:
oFor GTS, the shaping queue is a weighted fair queue. For FRTS, the queue can be a weighted fair queue (configured by the frame-relay fair-queue command), a strict priority queue with WFQ (configured by the frame-relay ip rtp priority command in addition to the frame-relay fair-queue command), custom queueing (CQ), priority queueing (PQ), or FIFO.
oFor Class-Based Shaping, GTS can be configured on a class, rather than only on an access control list (ACL). In order to do so, you must first define traffic classes based on match criteria including protocols, ACLs, and input interfaces. You can then apply traffic shaping to each defined class.
oFRTS supports shaping on a per-DLCI basis; GTS and DTS are configurable per interface or subinterface.
oDTS supports traffic shaping based on a variety of match criteria, including user-defined classes, and DSCP.
The following table summarizes these differences.
Table 11 Differences Between Shaping Mechanisms

GTS
oCommand-Line Interface: Applies parameters per subinterface; traffic group command supported
oQueues Supported: WFQ per subinterface

Class-Based Shaping
oCommand-Line Interface: Applies parameters per interface or per class
oQueues Supported: CBWFQ inside GTS

DTS
oCommand-Line Interface: Applies parameters per interface or subinterface
oQueues Supported: WFQ, strict priority queue with WFQ, CQ, PQ, first-come first-served (FCFS) per VC

FRTS
oCommand-Line Interface: Classes of parameters; applies parameters to all virtual circuits (VCs) on an interface through inheritance mechanism; no traffic group command
oQueues Supported: WFQ, strict priority queue with WFQ, CQ, PQ, FCFS per VC
You can configure GTS to behave the same as FRTS by allocating one DLCI per subinterface and using GTS plus backward explicit congestion notification (BECN) support. The behavior of the two is then the same except for the different shaping queues used.
Traffic shaping smooths traffic by storing traffic above the configured rate in a queue.
When a packet arrives at the interface for transmission, the following sequence happens:
1. If the queue is empty, the arriving packet is processed by the traffic shaper.
–If possible, the traffic shaper sends the packet.
–Otherwise, the packet is placed in the queue.
2. If the queue is not empty, the packet is placed in the queue.
When packets are in the queue, the traffic shaper removes the number of packets it can send from the queue every time interval.
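A small simulation of that behavior (an illustration, not Cisco's code; the packet sizes and Bc value are made up): each loop iteration stands for one interval Tc, and at most Bc bits leave the queue per interval while the rest stay buffered.

```cpp
#include <cstdio>
#include <queue>

int main() {
  const int bc_bits = 8000;                  // committed burst (Bc) per interval
  std::queue<int> q;                         // shaping queue of packet sizes, in bits
  for (int i = 0; i < 6; i++) q.push(4000);  // a burst of six 4000-bit packets arrives at once
  for (int tick = 1; !q.empty(); tick++) {   // one iteration = one interval Tc
    int budget = bc_bits;
    int sent = 0;
    while (!q.empty() && q.front() <= budget) {
      budget -= q.front();                   // spend tokens on the head-of-line packet
      q.pop();
      sent++;
    }
    std::printf("interval %d: sent %d packets\n", tick, sent);
  }
}
```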
GTS shapes traffic by reducing outbound traffic flow to avoid congestion by constraining traffic to a particular bit rate using the token bucket mechanism. (See the section
earlier in this chapter.)
GTS applies on a per-interface basis and can use access lists to select the traffic to shape. It works with a variety of Layer 2 technologies, including Frame Relay, ATM, Switched Multimegabit Data Service (SMDS), and Ethernet.
On a Frame Relay subinterface, GTS can be set up to adapt dynamically to available bandwidth by integrating backward explicit congestion notification (BECN) signals, or set up simply to shape to a specified rate. GTS can also be configured on an ATM/ATM Interface Processor (AIP) interface to respond to the Resource Reservation Protocol (RSVP) feature signalled over statically configured ATM permanent virtual circuits (PVCs).
GTS is supported on most media and encapsulation types on the router. GTS can also be applied to a specific access list on an interface.
Note GTS is not supported on Multilink PPP (MLP) interfaces.
Figure 12 shows how GTS works.
Figure 12 Generic Traffic Shaping
For information on how to configure GTS, see the chapter
in this book.
Traffic shaping allows you to control the traffic going out an interface in order to match its transmission to the speed of the remote, target interface and to ensure that the traffic conforms to policies contracted for it. Traffic adhering to a particular profile can be shaped to meet downstream requirements, thereby eliminating bottlenecks in topologies with data-rate mismatches.
For information on how to configure Class-Based Shaping, see the chapter
in this book.
Class-Based Shaping can be enabled on any interface that supports GTS. Using the Class-Based Shaping feature, you can perform the following tasks:
oConfigure GTS on a traffic class. Configuring GTS to classes provides greater flexibility for configuring traffic shaping. Previously, this ability was limited to the use of ACLs.
oSpecify average rate or peak rate traffic shaping. Specifying peak rate shaping allows you to make better use of available bandwidth by allowing more data than the CIR to be sent if the bandwidth is available.
oConfigure class-based weighted fair queueing (CBWFQ) inside GTS. CBWFQ allows you to specify the exact amount of bandwidth to be allocated for a specific class of traffic. Taking into account available bandwidth on the interface, you can configure up to 64 classes and control distribution among them, which is not the case with flow-based WFQ.
Flow-based WFQ applies weights to traffic to classify it into conversations and determine how much bandwidth each conversation is allowed relative to other conversations. These weights, and traffic classification, are dependent on and limited to the seven IP Precedence levels.
CBWFQ allows you to define what constitutes a class based on criteria that exceed the confines of flow. CBWFQ allows you to use ACLs and protocols or input interface names to define how traffic will be classified, thereby providing coarser granularity. You need not maintain traffic classification on a flow basis. Moreover, you can configure up to 64 discrete classes in a service policy.
Peak and average traffic shaping is configured on a per-interface or per-class basis, and cannot be used in conjunction with commands used to configure GTS from previous versions of Cisco IOS. These commands include the following:
otraffic-shape adaptive
otraffic-shape fecn-adaptive
otraffic-shape group
otraffic-shape rate
Adaptive traffic shaping for Frame Relay networks is not supported using the Class-Based Shaping feature. To configure adaptive GTS for Frame Relay networks, you must use the commands from releases prior to Release 12.1(2) of Cisco IOS software.
The DTS feature provides a method of managing the bandwidth of an interface to avoid congestion, to meet remote site requirements, and to conform to a service rate that is provided on that interface.
DTS uses queues to buffer traffic surges that can congest a network and send the data in to the network at a regulated rate. This ensures that traffic will behave to the configured descriptor, as defined by the CIR, Bc, and Be. With the defined average bit rate and burst size that is acceptable on that shaped entity, you can derive a time interval value.
The Be size allows more than the Bc size to be sent during a time interval under certain conditions. Therefore, DTS provides two types of shape commands: average and peak. When shape average is configured, the interface sends no more than the Bc size for each interval, achieving an average rate no higher than the CIR. When the shape peak command is configured, the interface sends Bc plus Be bits in each interval.
In a link layer network such as Frame Relay, the network sends messages with the forward explicit congestion notification (FECN) or BECN if there is congestion. With the DTS feature, the traffic shaping adaptive mode takes advantage of these signals and adjusts the traffic descriptors, therefore regulating the amount of traffic entering or leaving the interface accordingly.
DTS provides the following key benefits:
oOffloads traffic shaping from the Route Switch Processor (RSP) to the VIP.
oSupports up to 200 shape queues per VIP, supporting up to OC-3 rates when the average packet size is 250 bytes or greater and when using a VIP2-50 or better with 8 MB of SRAM. Line rates below T3 are supported with a VIP2-40.
oConfigures DTS at the interface level or subinterface level.
oShaping based on the following traffic match criteria:
–Access list
–Packet marking
–Input port
–Other matching criteria. For information about other matching criteria, see the section
in the chapter
in this book.
oOptional configuration to respond to Frame Relay network congestion (indicated by the presence of BECN or ForeSight signals) by reducing the shaped-to rate for a period of time until congestion is believed to have subsided. Supports FECN, BECN, and ForeSight Frame Relay signalling.
This feature runs on Cisco 7500 series routers with VIP2-40, VIP2-50, or greater.
For information on how to configure DTS, see the chapter
in this book.
DTS does not support the following:
oFast EtherChannel interfaces, Multilink PPP (MLP), tunnels and dialer interfaces
Note Hierarchical DTS (that is, DTS configured in both a parent-level policy and a child-level policy), is not supported on subinterfaces.
oAny VIP below a VIP2-40
Note A VIP2-50 is strongly recommended when the aggregate line rate of the port adapters on the VIP is greater than DS3. A VIP2-50 card is required for OC-3 rates.
Distributed Cisco&Express Forwarding (dCEF) must be enabled on the interface before DTS can be enabled.
A policy map and class maps must be created before DTS is enabled.
Cisco has long provided support for FECN for DECnet and OSI, and BECN for Systems Network Architecture (SNA) traffic using Logical Link Control, type 2 (LLC2) encapsulation via RFC 1490 and DE bit support. FRTS builds upon this existing Frame Relay support with additional capabilities that improve the scalability and performance of a Frame Relay network, increasing the density of VCs and improving response time.
As is also true of GTS, FRTS can eliminate bottlenecks in Frame Relay networks that have high-speed connections at the central site and low-speed connections at branch sites. You can configure rate enforcement—a peak rate configured to limit outbound traffic—to limit the rate at which data is sent on the VC at the central site.
Using FRTS, you can configure rate enforcement to either the CIR or some other defined value such as the excess information rate on a per-VC basis. The ability to allow the transmission speed used by the router to be controlled by criteria other than line speed (that is, by the CIR or the excess information rate) provides a mechanism for sharing media by multiple VCs. You can allocate bandwidth to each VC, creating a virtual time-division multiplexing (TDM) network.
You can also define PQ, CQ, and WFQ at the VC or subinterface level. Using these queueing methods allows for finer granularity in the prioritization and queueing of traffic, providing more control over the traffic flow on an individual VC. If you combine CQ with the per-VC queueing and rate enforcement capabilities, you enable Frame Relay VCs to carry multiple traffic types such as IP, SNA, and Internetwork Packet Exchange (IPX) with bandwidth guaranteed for each traffic type.
Using information contained in the BECN-tagged packets received from the network, FRTS can also dynamically throttle traffic. With BECN-based throttling, packets are held in the buffers of the router to reduce the data flow from the router into the Frame Relay network. The throttling is done on a per-VC basis and the transmission rate is adjusted based on the number of BECN-tagged packets received.
With the Cisco FRTS feature, you can integrate ATM ForeSight closed loop congestion control to actively adapt to downstream congestion conditions.
In Frame Relay networks, BECNs and FECNs indicate congestion. BECN and FECN are specified by bits within a Frame Relay frame.
FECNs are generated when data is sent out a congested interface; they indicate to a DTE device that congestion was encountered. Traffic is marked with BECN if the queue for the opposite direction is deep enough to trigger FECNs at the current time.
BECNs notify the sender to decrease the transmission rate. If the traffic is one-way only (such as multicast traffic), there is no reverse traffic with BECNs to notify the sender to slow down. Thus, when a DTE device receives an FECN, it first determines if it is sending any data in return. If it is sending return data, this data will get marked with a BECN on its way to the other DTE device. However, if the DTE device is not sending any data, the DTE device can send a Q.922 TEST RESPONSE message with the BECN bit set.
When an interface configured with traffic shaping receives a BECN, it immediately decreases its maximum rate by a large amount. If, after several intervals, the interface has not received another BECN and traffic is waiting in the queue, the maximum rate increases slightly. The dynamically adjusted maximum rate is called the derived rate.
The derived rate will always be between the upper bound and the lower bound configured on the interface.
For information on configuring Frame Relay and FRTS, see the , Release 12.4T.
FRTS applies only to Frame Relay PVCs and switched virtual circuits (SVCs).
FRTS is not supported on the Cisco 7500 series router.
