Traffic Control

tc, qdiscs, classes, filters

This is by no means comprehensive. I may add to this when I get more of a chance. There are other resources such as Wonder Shaper or the ADSL Bandwidth Management HOWTO (though I feel that they are inadequate or employ the wrong strategies).

tc, qdiscs, classes, filters, oh my!

tc, the traffic control tool, is used to configure the Linux kernel to accomplish the shaping, scheduling, policing, and dropping of packets.

Each interface has a root qdisc. By default it uses the pfifo_fast algorithm (in our case, it will be configured to use HTB). Think of the root qdisc as the main container in which everything resides. Inside the root qdisc, we can classify various types of traffic into classes and attach them to the root handle. After the classes have been defined, filters are used to match packets and direct them into the right classes.
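You can inspect the qdisc currently attached to an interface (and its packet counters) at any time; this is also a handy way to confirm your changes later:

tc -s qdisc show dev eth0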

Using tc

Some example tc commands to create a root handle and attach some classes to it:

  • tc qdisc add dev eth0 root handle 1: htb default 60 - creates a root handle attached to eth0 using the HTB qdisc; unclassified packets go to classid 1:60 by default
  • tc class add dev eth0 parent 1: classid 1:1 htb rate 116kbit - create a parent class using HTB with a maximum (capped) rate of 116 kilobits/s (about 90% of 128 kilobits/s)
  • tc class add dev eth0 parent 1:1 classid 1:10 htb rate 25kbit ceil 116kbit prio 0 - create a leaf class using HTB with a guaranteed minimum of 25 kilobits/s, a maximum ceiling of 116 kilobits/s, and a priority of zero (highest - note that spare bandwidth is offered to classes in order of priority)
  • tc qdisc add dev eth0 parent 1:10 handle 10: sfq perturb 10 - attach another qdisc handle to class 1:10 using SFQ - this implements the SFQ round robin qdisc inside the HTB that kicks in only when the queue is saturated to ensure that no one "conversation" hogs the pipe.

Note: You always need a root handle, and classes can only be attached to handles. The reason dev (eth0) is specified in each command is that you can attach handles and classes with the same names to each network interface (if you have multiple interfaces). Also, all of the commands below depend on the classes created by the commands above.
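If you make a mistake and want to start over, deleting the root qdisc tears down the whole hierarchy (all attached classes and qdiscs go with it):

tc qdisc del dev eth0 root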

Some example tc commands to filter packets to the desired classes:

  • Create a filter that classifies acknowledgement packets (small ACK packets):
    tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 \
    match ip protocol 6 0xff \
    match u8 0x05 0x0f at 0 \
    match u16 0x0000 0xffc0 at 2 \
    match u8 0x10 0xff at 33 \
    flowid 1:10
  • Create a filter that classifies SSH packets:
    tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 \
    match ip dport 22 0xfffe flowid 1:10

Note: You don't actually need any tool other than tc to do traffic shaping. However, since its syntax is arcane and not generally considered human readable, I recommend using iptables (with CLASSIFY if available) even though this adds some overhead. Please don't ask me any questions about writing other tc filters to classify packets (I copied and pasted the above rules from other docs).
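To list the filters currently attached to the root handle (useful when checking which rule traffic is hitting), use:

tc filter show dev eth0 parent 1: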

Using iptables (marking) with tc

Some example iptables commands to do similar as above:

  • Mark acknowledgement (ACK) packets of an established session between 40 and 100 bytes:
    iptables -t mangle -A PREROUTING -p tcp --tcp-flags ALL ACK -m state --state ESTABLISHED -m length --length 40:100 -j MARK --set-mark 20
  • Mark SSH packets that start new sessions with a packet length between 40 and 68 bytes:
    iptables -t mangle -A PREROUTING -p tcp --dport 22 --syn -m state --state NEW -m length --length 40:68 -j MARK --set-mark 22

Now use the fw filter to classify the marked packets:

  • Create a filter that classifies packets based on the fwmark (20) on the packet as belonging to classid 1:10:
    tc filter add dev eth0 parent 1:0 prio 0 protocol ip handle 20 fw flowid 1:10
  • Create a filter that classifies packets based on the fwmark (22) on the packet as belonging to classid 1:10:
    tc filter add dev eth0 parent 1:0 prio 0 protocol ip handle 22 fw flowid 1:10

Note: The advantage of marking packets with iptables is that the marking facilities are generally supported by stock kernels. If you have a kernel that supports iptables classify, use that instead (see below).
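To confirm that packets are actually being marked, check the rule counters in the mangle table:

iptables -t mangle -L PREROUTING -v -n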

Using iptables (classify)

Mmm, classify:

  • Classify packets of an established session between 40 and 100 bytes:
    iptables -t mangle -A POSTROUTING -p tcp --tcp-flags ALL ACK -m state --state ESTABLISHED -m length --length 40:100 -j CLASSIFY --set-class 1:10
  • Classify SSH packets that start new sessions with a packet length between 40 and 68 bytes:
    iptables -t mangle -A POSTROUTING -p tcp --dport 22 --syn -m state --state NEW -m length --length 40:68 -j CLASSIFY --set-class 1:10

Note:

  • Verify that /lib/modules/<kernel version>/kernel/net/ipv4/netfilter/ipt_CLASSIFY.o and /usr/lib/iptables/libipt_CLASSIFY.so exist to see if you have the required kernel support (note that the path may vary depending on your distribution).
  • If you do not have the kernel support and are planning to compile your own kernel, download the source for iptables and the latest patch-o-matic, apply the CLASSIFY patch, enable it in the .config, and also remember to recompile iptables.
  • The CLASSIFY target only works in the mangle table and the POSTROUTING chain.
  • It's possible to use both the iptables classifier and the tc filters together. Any traffic that is not classified by iptables will go through the tc filters (see the sketch below).
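As a minimal sketch of combining the two (the ports and classids here are only illustrative, and assume classes such as 1:10 and 1:30 from the sample script below), iptables could classify interactive SSH while a plain tc u32 filter picks up traffic it leaves alone, such as DNS:

iptables -t mangle -A POSTROUTING -o eth0 -p tcp --dport 22 -m length --length 40:100 -j CLASSIFY --set-class 1:10
tc filter add dev eth0 protocol ip parent 1:0 prio 5 u32 match ip dport 53 0xffff flowid 1:30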

Some sample scripts

/etc/init.d/shaper:

	#!/bin/sh
	# init script written by shane at knowplace dot org
	# this script only creates the qdiscs and classes required for shaping, it
	# does NOT create the necessary filters
	INTERFACE='eth0'
	rc_done="  done"
	rc_failed="  failed"
	return=$rc_done
	TC='/sbin/tc'
	tc_reset ()
	{
	# Reset everything to a known state (cleared)
	$TC qdisc del dev $INTERFACE root 2> /dev/null > /dev/null
	}
	tc_status ()
	{
	echo "[qdisc - $INTERFACE]"
	$TC -s qdisc show dev $INTERFACE
	echo "------------------------"
	echo
	echo "[class - $INTERFACE]"
	$TC -s class show dev $INTERFACE
	}
	tc_showfilter ()
	{
	echo "[filter - $INTERFACE]"
	$TC -s filter show dev $INTERFACE
	}
	case "$1" in
	start)
	echo -n "Starting traffic shaping"
	tc_reset
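	# U320 is a convenience prefix for composing u32 filter commands (not used below)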
	U320="$TC filter add dev $INTERFACE protocol ip parent 1:0 prio 0 u32"
	#
	# dev eth0 - creating qdiscs & classes
	#
	$TC qdisc add dev $INTERFACE root handle 1: htb default 60
	$TC class add dev $INTERFACE parent 1: classid 1:1 htb rate 116kbit
	$TC class add dev $INTERFACE parent 1:1 classid 1:10 htb rate 32kbit ceil 116kbit prio 0
	$TC class add dev $INTERFACE parent 1:1 classid 1:20 htb rate 22kbit ceil 116kbit prio 1
	$TC class add dev $INTERFACE parent 1:1 classid 1:30 htb rate 22kbit ceil 116kbit prio 2
	$TC class add dev $INTERFACE parent 1:1 classid 1:40 htb rate 20kbit ceil 116kbit prio 3
	$TC class add dev $INTERFACE parent 1:1 classid 1:50 htb rate 18kbit ceil 116kbit prio 4
	$TC class add dev $INTERFACE parent 1:1 classid 1:60 htb rate 2kbit ceil 116kbit prio 5
	$TC qdisc add dev $INTERFACE parent 1:10 handle 10: sfq perturb 10
	$TC qdisc add dev $INTERFACE parent 1:20 handle 20: sfq perturb 10
	$TC qdisc add dev $INTERFACE parent 1:30 handle 30: sfq perturb 10
	$TC qdisc add dev $INTERFACE parent 1:40 handle 40: sfq perturb 10
	$TC qdisc add dev $INTERFACE parent 1:50 handle 50: sfq perturb 10
	$TC qdisc add dev $INTERFACE parent 1:60 handle 60: sfq perturb 10
	tc_status
	;;
	stop)
	echo -n "Stopping traffic shaper"
	tc_reset || return=$rc_failed
	echo -e "$return"
	;;
	restart|reload)
	$0 stop && $0 start || return=$rc_failed
	;;
	stats|status)
	tc_status
	;;
	filter)
	tc_showfilter
	;;
	*)
	echo "Usage: $0 {start|stop|restart|stats|filter}"
	exit 1
	esac
	test "$return" = "$rc_done" || exit 1
	

Script placeholder:

I'm too lazy to write a custom script for you. =) I use the narc-custom.conf file to insert custom commands via narc to classify traffic. Below are some example commands. They are provided here only to show you what could be done. Please feel free to adapt them to your own needs.

# give "overhead" packets highest priority
iptables -t mangle -A POSTROUTING -o eth0 -p tcp --syn -m length --length 40:68 -j CLASSIFY \
--set-class 1:10
iptables -t mangle -A POSTROUTING -o eth0 -p tcp --tcp-flags ALL SYN,ACK -m length --length 40:68 \
-j CLASSIFY --set-class 1:10
iptables -t mangle -A POSTROUTING -o eth0 -p tcp --tcp-flags ALL ACK -m length --length 40:100 \
-j CLASSIFY --set-class 1:10
iptables -t mangle -A POSTROUTING -o eth0 -p tcp --tcp-flags ALL RST -j CLASSIFY --set-class 1:10
iptables -t mangle -A POSTROUTING -o eth0 -p tcp --tcp-flags ALL ACK,RST -j CLASSIFY \
--set-class 1:10
iptables -t mangle -A POSTROUTING -o eth0 -p tcp --tcp-flags ALL ACK,FIN -j CLASSIFY \
--set-class 1:10
# interactive SSH traffic
iptables -t mangle -A POSTROUTING -o eth0 -p tcp --sport ssh -m length --length 40:100 \
-j CLASSIFY --set-class 1:20
iptables -t mangle -A POSTROUTING -o eth0 -p tcp --dport ssh -m length --length 40:100 \
-j CLASSIFY --set-class 1:20
# interactive mail or web traffic
iptables -t mangle -A POSTROUTING -o eth0 -p tcp -m multiport --sport http,pop,imap,https,imaps \
-j CLASSIFY --set-class 1:30
# dns lookups
iptables -t mangle -A POSTROUTING -o eth0 -p tcp --dport domain -j CLASSIFY --set-class 1:30
# ICMP, UDP
iptables -t mangle -A POSTROUTING -o eth0 -p udp -j CLASSIFY --set-class 1:40
iptables -t mangle -A POSTROUTING -o eth0 -p icmp -m length --length 28:1500 -m limit \
--limit 2/s --limit-burst 5 -j CLASSIFY --set-class 1:40
# bulk traffic
iptables -t mangle -A POSTROUTING -o eth0 -p tcp --sport ssh -m length --length 101: \
-j CLASSIFY --set-class 1:50
iptables -t mangle -A POSTROUTING -o eth0 -p tcp --dport ssh -m length --length 101: \
-j CLASSIFY --set-class 1:50
iptables -t mangle -A POSTROUTING -o eth0 -p tcp --sport 25 -j CLASSIFY --set-class 1:50
iptables -t mangle -A POSTROUTING -o eth0 -p tcp --dport 6667 -j CLASSIFY --set-class 1:50

Traffic shaping with tc (from OpenVZ Wiki)

Packet routes

First of all, a few words about how packets travel from and to a VE. Suppose we have Hardware Node (HN) with a VE on it, and this VE talks to some Remote Host (RH). HN has one "real" network interface eth0 and, thanks to OpenVZ, there is also "virtual" network interface venet0. Inside the VE we have interface venet0:0.

    venet0:0               venet0    eth0
VE >------------->-------------> HN >--------->--------> RH
venet0:0               venet0    eth0
VE <-------------<-------------< HN <---------<--------< RH

Limiting outgoing bandwidth

We can limit VE outgoing bandwidth by setting the tc filter on eth0.

DEV=eth0
tc qdisc del dev $DEV root
tc qdisc add dev $DEV root handle 1: cbq avpkt 1000 bandwidth 100mbit
tc class add dev $DEV parent 1: classid 1:1 cbq rate 256kbit allot 1500 prio 5 bounded isolated
tc filter add dev $DEV parent 1: protocol ip prio 16 u32 match ip src X.X.X.X flowid 1:1
tc qdisc add dev $DEV parent 1:1 sfq perturb 10

X.X.X.X is the IP address of the VE.
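Once traffic is flowing, the per-class statistics show whether the VE's packets are hitting the shaping class:

tc -s class show dev eth0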

Limiting incoming bandwidth

This can be done by setting the tc filter on venet0:

DEV=venet0
tc qdisc del dev $DEV root
tc qdisc add dev $DEV root handle 1: cbq avpkt 1000 bandwidth 100mbit
tc class add dev $DEV parent 1: classid 1:1 cbq rate 256kbit allot 1500 prio 5 bounded isolated
tc filter add dev $DEV parent 1: protocol ip prio 16 u32 match ip dst X.X.X.X flowid 1:1
tc qdisc add dev $DEV parent 1:1 sfq perturb 10

Note that X.X.X.X is the IP address of the VE.

Limiting VE to HN talks

As you can see, the two filters above don't limit VE-to-HN traffic: a VE can send as much traffic to the HN as it wishes. To impose such a limit from the HN, use a tc policer on venet0:

DEV=venet0
tc filter add dev $DEV parent 1: protocol ip prio 20 u32 match u32 1 0x0000 police rate 2kbit buffer 10k drop flowid :1

Limiting packets per second rate from VE

To prevent DoS attacks from the VE, you can limit the packets-per-second rate using iptables.

DEV=eth0
iptables -I FORWARD 1 -o $DEV -s X.X.X.X -m limit --limit 200/sec -j ACCEPT
iptables -I FORWARD 2 -o $DEV -s X.X.X.X -j DROP

Here X.X.X.X is the IP address of the VE.
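The packet and byte counters on these two rules show how much of the VE's traffic is being accepted and how much is being dropped:

iptables -L FORWARD -v -n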

An alternate approach using HTB

For details, refer to the HTB Home Page.

#!/bin/sh
#
# Incoming traffic control
#
VE_IP1=$1
VE_IP2=$2
DEV=venet0
#
tc qdisc del dev $DEV root
#
tc qdisc add dev $DEV root handle 1: htb default 10
#
tc class add dev $DEV parent 1: classid 1:1 htb rate 100mbit burst 15k
tc class add dev $DEV parent 1:1 classid 1:10 htb rate 10mbit ceil 10mbit burst 15k
tc class add dev $DEV parent 1:1 classid 1:20 htb rate 20mbit ceil 20mbit burst 15k
tc class add dev $DEV parent 1:1 classid 1:30 htb rate 30mbit ceil 30mbit burst 15k
#
tc qdisc add dev $DEV parent 1:10 handle 10: sfq perturb 10
tc qdisc add dev $DEV parent 1:20 handle 20: sfq perturb 10
tc qdisc add dev $DEV parent 1:30 handle 30: sfq perturb 10
#
if [ -n "$VE_IP1" ]; then
tc filter add dev $DEV protocol ip parent 1:0 prio 1 u32 match ip dst "$VE_IP1" flowid 1:20
fi
if [ -n "$VE_IP2" ]; then
tc filter add dev $DEV protocol ip parent 1:0 prio 1 u32 match ip dst "$VE_IP2" flowid 1:30
fi
#
echo;echo "tc configuration for $DEV:"
tc qdisc show dev $DEV
tc class show dev $DEV
tc filter show dev $DEV
#
# Outgoing traffic control
#
DEV=eth0
#
tc qdisc del dev $DEV root
#
tc qdisc add dev $DEV root handle 1: htb default 10
#
tc class add dev $DEV parent 1: classid 1:1 htb rate 100mbit burst 15k
tc class add dev $DEV parent 1:1 classid 1:10 htb rate 10mbit ceil 10mbit burst 15k
tc class add dev $DEV parent 1:1 classid 1:20 htb rate 20mbit ceil 20mbit burst 15k
tc class add dev $DEV parent 1:1 classid 1:30 htb rate 30mbit ceil 30mbit burst 15k
#
tc qdisc add dev $DEV parent 1:10 handle 10: sfq perturb 10
tc qdisc add dev $DEV parent 1:20 handle 20: sfq perturb 10
tc qdisc add dev $DEV parent 1:30 handle 30: sfq perturb 10
#
if [ -n "$VE_IP1" ]; then
tc filter add dev $DEV protocol ip parent 1:0 prio 1 u32 match ip src "$VE_IP1" flowid 1:20
fi
if [ -n "$VE_IP2" ]; then
tc filter add dev $DEV protocol ip parent 1:0 prio 1 u32 match ip src "$VE_IP2" flowid 1:30
fi
#
echo;echo "tc configuration for $DEV:"
tc qdisc show dev $DEV
tc class show dev $DEV
tc filter show dev $DEV

Sample traffic shaping script

The following is a sample traffic shaping init script. These configurations correspond to those used in the refresh-qos-routes.pl script described above.

Example 4. traffic.sh

			#!/bin/sh
			#
			# traffic - script that configures network traffic shaping
			extif=eth0
			# line speed of the network interface
			ifrate=1000mbit
			# maximum traffic rate
			maxrate=500mbit
			# shaped limits
			localrate=500mbit
			i2rate=300mbit
			i1rate=35mbit
			TC=/sbin/tc
			start() {
			# clear existing rules
			$TC qdisc del dev $extif root 2>/dev/null
			# root qdisc 1:0
			$TC qdisc add dev $extif root handle 1: \
			htb default 12
			# root class 1:1
			$TC class add dev $extif parent 1:0 classid 1:1 \
			htb rate $maxrate
			# class 1:10 -- local destinations
			$TC class add dev $extif parent 1:1 classid 1:10 \
			htb rate $localrate
			# class 1:11 -- I2 destinations
			$TC class add dev $extif parent 1:1 classid 1:11 \
			htb rate $i2rate
			# class 1:12 -- non-I2 destinations
			$TC class add dev $extif parent 1:1 classid 1:12 \
			htb rate $i1rate
			# qdisc defs for classes
			$TC qdisc add dev $extif parent 1:10 handle 10: \
			sfq quantum 1514b perturb 15
			$TC qdisc add dev $extif parent 1:11 handle 11: \
			sfq quantum 1514b perturb 15
			$TC qdisc add dev $extif parent 1:12 handle 12: \
			sfq quantum 1514b perturb 15
			# filter for 1:10 -- local destinations
			$TC filter add dev $extif parent 1:0 protocol ip pref 100 \
			route to 2 flowid 1:10
			# filter for 1:11 -- I2 routes
			$TC filter add dev $extif parent 1:0 protocol ip pref 100 \
			route to 5 flowid 1:11
			}
			stop() {
			# clear existing rules
			$TC qdisc del dev $extif root 2>/dev/null
			}
			status() {
			echo "qdisc:"
			$TC qdisc show dev $extif
			echo "filter:"
			$TC filter show dev $extif parent 1:
			echo "class:"
			$TC class show dev $extif
			}
			counts() {
			echo "qdisc:"
			$TC -s qdisc show dev $extif
			echo "class:"
			$TC -s class show dev $extif
			}
			case "$1" in
			start)
			start
			;;
			stop)
			stop
			;;
			status)
			status
			;;
			counts)
			counts
			;;
			restart)
			stop
			start
			;;
			*)
			echo $"Usage: $0 {start|stop|restart|status|counts}"
			exit 1
			esac
			exit 0
			
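The route to 2 and route to 5 filters in this script match routing realms rather than addresses; the realms are expected to be assigned when the routes themselves are installed (here, by the refresh-qos-routes.pl script). A minimal sketch of that idea, with purely illustrative prefixes, gateways, and realm numbers:

# tag an example local prefix as realm 2 and an example I2 prefix as realm 5
ip route add 192.0.2.0/24 dev eth0 realm 2
ip route add 198.51.100.0/24 via 192.0.2.1 realm 5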

UDP "connection" limiting

Because the firewall in StarV3 is based on Linux's iptables, iptables commands can be used to limit UDP "connections". The method for implementing this varies slightly depending on whether you're applying it at the CPE or AP level.

On CPE:
iptables -A FORWARD -p udp -m state --state NEW -m limit --limit 5/second --limit-burst 5 -j ACCEPT
iptables -A FORWARD -p udp -m state --state NEW -j DROP
On AP:
iptables -A FORWARD -p udp -s <cust IP> -m state --state NEW -m limit --limit 5/second --limit-burst 5 -j ACCEPT
iptables -A FORWARD -p udp -s <cust IP> -m state --state NEW -j DROP
iptables -A FORWARD -p udp -d <cust IP> -m state --state NEW -m limit --limit 5/second --limit-burst 5 -j ACCEPT
iptables -A FORWARD -p udp -d <cust IP> -m state --state NEW -j DROP
As to what the code is doing:
  • iptables -A FORWARD: Append a rule into the FORWARD firewall chain
  • -p udp: Loads the udp protocol match extensions for this rule
  • -s or -d <cust IP>: Will only match packets with a source (-s) or destination (-d) IP of the specified address, depending on the rule. If you do not do this on the AP side, all UDP packets passing through the AP will be checked against the same bucket, with the result that either so little UDP gets through that UDP traffic stops up completely, or so much gets through that you aren't limiting filesharing/virus traffic at all.
  • -m state: Calls the connection state match.
  • --state NEW: Parameter defining what state(s) to match, in this case NEW. The other valid states for conntrack are ESTABLISHED, RELATED, and INVALID
  • -m limit: Calls the limit matching
  • --limit 5/second: Sets the rate at which the allowed number of new connections refreshes, in this case 1 connection every 1/5 of a second (or 5/second). Rates must be defined as /second, /minute, /hour, or /day, and will refresh steadily over time (such that 1/second and 60/minute will be functionally equivalent)
  • --limit-burst 5: Sets the "size of the bucket", or the number of connections to initially allow through prior to checking against the limit rate. By default, this will be 5 if you do not set this parameter.
On the client device, you can leave the -s/-d parameters out and insert the rule only once, because the client is the only customer running through the CPE. On the AP, those parameters and the "dual rule" are essential. The way this works is to create a "bucket" that holds a certain number of allowed connections (as specified by --limit-burst). When a packet comes through the firewall, the connection tracking software looks at the source and destination IPs and compares them to the connection table to determine if it has seen any traffic between these two IPs on the specified ports. If it has, the packet is given the ESTABLISHED state and will not hit the above rules. If it has not, the packet is given the NEW state and will hit the above rules. The firewall then checks the "bucket" to see if there are any tokens available. If there are, the firewall accepts the packet and decreases the number of tokens in the "bucket" by 1. If there are not, the firewall drops the packet (without sending notification to the original sender).
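As a rough sketch of how the AP-side rules could be wrapped up so they are easy to re-apply per customer (the script and the CUST_IP variable are placeholders, not part of StarV3):

#!/bin/sh
# apply the per-customer UDP "connection" limit in both directions
CUST_IP=$1
for dir in -s -d; do
    iptables -A FORWARD -p udp $dir "$CUST_IP" -m state --state NEW \
        -m limit --limit 5/second --limit-burst 5 -j ACCEPT
    iptables -A FORWARD -p udp $dir "$CUST_IP" -m state --state NEW -j DROP
done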

In testing this on someone for whom our non-Star packet shaper had recorded 10,000 UDP Skype data flows within the past hour, the firewall rule dropped about 60% of their traffic in the initial five minutes, and then their computer adjusted and stopped opening as many "new connections". This cut their recorded Skype data flows down under 1,000 when we checked the packet shaper again a little over an hour later. The customer did not call in to complain about not being able to use the Internet, so I believe the rule was able to contain their connections without making the downloads for them completely unusable.

PPP Load Balancer Script

PPP Load Balancer Script - 0.1.1

Features

  • Designed for use with PPP connections (ADSL - PPPoE, dial-up - PPP)
  • Performs load balancing automatically
  • Handles connections that drop while in use (cron is used to help with the polling)
  • No limit on the number of PPP connections; connections can be added or removed while the script is running

Limitations

  • cron is used for the polling, and its minimum polling interval is 1 minute, so there is a small delay before the load balancing is adjusted

Source

http://neutron.debianclub.com/neutron/projects/ppp-balance/ppp-balance.s...

Deployment

Since this script is still under development, there may be undiscovered errors. However, after testing it for some time, the script works well, so it has been released to the public for the benefit of anyone interested. It may be used for study or even commercially, but please keep the license that identifies the developer, as encouragement to keep producing good work.

How to use

Prepare the system (Debian GNU/Linux) by installing the necessary packages, and verify that the PPP connections work normally before putting the script to use.

# aptitude install iproute

Check whether the system already has its PPP connections up. In my case they are PPPoE connections over ADSL (setting up the PPP connections themselves is not covered here).

# ifconfig
...
lo        Link encap:Local Loopback  
inet addr:127.0.0.1  Mask:255.0.0.0
...
...
ppp0      Link encap:Point-to-Point Protocol  
inet addr:118.173.xxx.xxx  P-t-P:118.173.xxx.xxx  Mask:255.255.255.255
...
...
ppp1      Link encap:Point-to-Point Protocol  
inet addr:118.173.xxx.xxx  P-t-P:118.173.xxx.xxx  Mask:255.255.255.255
...
...

You can see that there are two connected interfaces, ppp0 and ppp1, and these are the targets of the load balancing. The next step is to fetch the script:

# cd /usr/local/bin
# wget http://neutron.debianclub.com/neutron/projects/ppp-balance/ppp-balance.sh

After that, run the script:

# ./ppp-balance.sh
...
Mon Apr 28 16:46:24 ICT 2008: Updating default route ...
Mon Apr 28 16:46:24 ICT 2008: /sbin/ip route add default scope global equalize  
nexthop via 118.173.xxx.xxx dev ppp0 weight 1 
nexthop via 118.173.xxx.xxx dev ppp1 weight 1

Verify that the load balancing is set up correctly:

# ip route
118.173.xxx.xxx dev ppp0  proto kernel  scope link  src 118.173.xxx.xxx 
118.173.xxx.xxx dev ppp1  proto kernel  scope link  src 118.173.xxx.xxx 
172.30.8.0/21 dev eth2  proto kernel  scope link  src 172.30.8.2 
default equalize 
nexthop via 118.173.xxx.xxx  dev ppp0 weight 1
nexthop via 118.173.xxx.xxx  dev ppp1 weight 1

If the line "default equalize ..." appears, the load balancing has been set up successfully and the connection can be used as normal:

# host www.google.com
www.google.com is an alias for www.l.google.com.
www.l.google.com has address 74.125.19.103
www.l.google.com has address 74.125.19.99
www.l.google.com has address 74.125.19.147
www.l.google.com has address 74.125.19.104

That completes the testing. The next step is to make the system poll periodically to check whether any connection has dropped or changed.

Create /etc/cron.d/ppp-balance:

*/1 *     * * *     root  /usr/local/bin/ppp-balance.sh >/dev/null 2>&1

Restart cron:

# /etc/init.d/cron restart