2009.11.27 22:29

Building an LVS System with ipvsadm

What is a Linux Virtual Server?

It is a way of operating a service in which, when a single server can no longer handle a growing number of users, two or more servers are set up and the load is balanced across them.

 

Commonly you can load-balance with DNS round-robin, but an LVS system supports not just round-robin but four scheduling methods in total (each maps to an ipvsadm scheduler flag; see the example after the list), so it can respond flexibly to the operator's situation.

 

1. Round-robin

Requests arriving at the load balancer are assigned to each server in turn, one after another.

This method can also be implemented easily with DNS alone.

 

2. Weighted round-robin

The operation itself is round-robin, but when the servers being balanced have different specifications, each server is given a different weight and requests are assigned accordingly.

 

3. Least connection

Among the servers, the one currently handling the fewest requests is selected and the request is assigned to it. The advantage of this method is that it reflects the servers' current load dynamically when distributing requests.

 

4. Weighted least connection

This follows the least-connection method, but, as with weighted round-robin, each server can be given a different weight when requests are assigned.
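For reference, these four methods correspond directly to ipvsadm scheduler names, selected with the -s option when a service is added (a quick illustration using the VIP configured later in this post; pick whichever scheduler fits):

# -s rr  : round-robin
# -s wrr : weighted round-robin
# -s lc  : least connection
# -s wlc : weighted least connection
ipvsadm -A -t 192.168.10.10:80 -s wrr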

 

An LVS system can be configured with NAT, DR (Direct Routing), or IP tunneling; here we build it using the DR method.

 

For the basic DR (Direct Routing) topology diagram, please refer to the link below.

http://wiki.kldp.org/Translations/html/Virtual_Server-KLDP/VS-DRouting.gif
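In case the link goes away, the gist of the DR layout is roughly this (my own sketch, not the linked diagram): requests arrive at the director on the VIP, the director forwards them to a real server at the MAC level, and the real server replies to the client directly.

              clients
                 |
          [LVS director]  VIP 192.168.10.10 (eth0:1)
            /        \
       [Web1]        [Web2]     <- each also holds the VIP locally
   192.168.10.20  192.168.10.30
            \        /
       replies return directly to the clients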

 

 

Now let's build the LVS system.

 

Everything below is described on CentOS, a Red Hat-based distribution.

We will cover one LVS server load-balancing two web servers.

 

Virtual IP = 192.168.10.10

Real IP(Web1) = 192.168.10.20

Real IP(Web2) = 192.168.10.30

 

Install ipvsadm on the LVS server using the yum command, then run it to check:

 

#yum -y install ipvsadm

# ipvsadm 
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port          Forward Weight ActiveConn InActConn

#

 

If you see this output, the installation succeeded.

 

Now let's configure the Virtual IP that the LVS server will use.

 

# ifconfig eth0:1 192.168.10.10 netmask 255.255.255.0 up

 

Because network settings made with ifconfig disappear when the server reboots,

create the /etc/sysconfig/network-scripts/ifcfg-eth0:1 file as well.
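Its contents would be as follows (the same kind of file is shown in the next post; here with this post's VIP):

DEVICE=eth0:1
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.10.10
NETMASK=255.255.255.0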

 

Also, adjust the kernel settings so that packets can be forwarded through this system to the other servers.

 

#vi /etc/sysctl.conf

net.ipv4.ip_forward=1

 

#sysctl -p
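To confirm that forwarding is actually on (1 means enabled):

# cat /proc/sys/net/ipv4/ip_forward
1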

 

Now register a new service with the ipvsadm we just installed:

 

#ipvsadm -A -t 192.168.10.10:80 -s rr

This registers TCP port 80 on 192.168.10.10 as a new service using the round-robin (rr) scheduler.

For detailed command options, check ipvsadm --help.

For reference, all four scheduling methods mentioned above can be applied here.

e.g.) ipvsadm -A -t 192.168.10.10:80 -s wlc (weighted least connection)

 

Once the service has been added, register the Real IPs of the servers that will take the balanced load:

 

#ipvsadm -a -t 192.168.10.10:80 -r 192.168.10.20:80 -g

#ipvsadm -a -t 192.168.10.10:80 -r 192.168.10.30:80 -g

#ipvsadm -L

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
   -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.10.10:http rr
   -> 192.168.10.20:http          Route   1      0          0
   -> 192.168.10.30:http          Route   1      0          0

 

That completes the LVS server settings. However, this information is lost when the machine reboots.

Let's do a little extra work so the settings survive a reboot.

 

#/etc/rc.d/init.d/ipvsadm save

#chkconfig ipvsadm on

Note) Running the ipvsadm save command stores the current settings in /etc/sysconfig/ipvsadm.
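For reference, the saved file holds the rules in ipvsadm's restore format; for this setup it should look roughly like the following (an illustration only, the exact formatting can differ slightly between versions):

-A -t 192.168.10.10:http -s rr
-a -t 192.168.10.10:http -r 192.168.10.20:http -g -w 1
-a -t 192.168.10.10:http -r 192.168.10.30:http -g -w 1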

 

The LVS server configuration is now finished.

 

Next, set up the two web servers that will actually serve the content.

The procedure is identical on both, so only one is described.

 

First, on each web server as well, you must set up a virtual device and add the Virtual IP.

The method is the same as described above for the LVS server; refer to it.

 

At this point a question arises. We have to add a virtual device and assign the Virtual IP on the real servers too,

and this can cause an ARP problem. So what is ARP?

 

What is ARP?

ARP is the protocol that resolves an IP address to its MAC address. Network equipment fundamentally communicates by MAC address, so an IP address must be translated to a MAC address before communication can take place.

When a client looks up the virtual IP, two or more hosts on the same network are now holding that Virtual IP, so in some cases a Real Server may answer the request itself. If that happens, load balancing no longer works, so
we have to use arptables_jf to stop the real servers from answering.
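Incidentally, you can observe this symptom from another machine on the same LAN with arping (my addition, assuming the arping utility is installed; not part of the original steps). If replies come back from more than one MAC address, several hosts are answering for the VIP:

# arping -I eth0 -c 3 192.168.10.10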

#yum -y install arptables_jf

#arptables -A IN -d 192.168.10.10 -j DROP
#arptables -A OUT -d 192.168.10.10 -j mangle --mangle-ip-s 192.168.10.20
#/etc/rc.d/init.d/arptables_jf save
#chkconfig arptables_jf on

 

Note) The arptables_jf save command likewise records the rules in the /etc/sysconfig/arptables file.
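For reference, a commonly used alternative on the real servers is to put the VIP on the loopback device and suppress ARP replies with kernel parameters instead of arptables (a sketch of that approach on a reasonably recent kernel, not what this post uses):

# ifconfig lo:0 192.168.10.10 netmask 255.255.255.255 up
# sysctl -w net.ipv4.conf.all.arp_ignore=1
# sysctl -w net.ipv4.conf.all.arp_announce=2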

 

We have now built an LVS system in a Direct Routing configuration using ipvsadm.

P.S.) If all you need is the plain rr (round-robin) method, a DNS round-robin setup honestly feels much easier and more convenient. But being able to apply the four scheduling methods above to match your operating situation is a genuine advantage, and when hardware L4 switches are too expensive to bring in, an LVS system is well worth considering.


Source: 리눅스포탈 (Linux Portal)


2009.09.24 13:21

Building a Virtual Server with ipvsadm on CentOS

I had previously built a Linux Virtual Server on CentOS using piranha-gui, but it never quite clicked... so after some searching I built the Virtual Server with ipvsadm instead.

Assume the virtual server is built with the following IPs:
Virtual IP : 192.168.1.10
Real IP : 192.168.1.20
Real IP : 192.168.1.21


First of all, judging from the experience of just building this, load-balancing two real servers seems to require three machines in total, including the one running ipvsadm (when using the direct routing method).

Virtual Server
First, install ipvsadm. In my case I installed the package using yum.
# yum install ipvsadm
# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
#

If the output looks like this, everything is fine.

Now configure the virtual IP. Assuming the network device is eth0, create a virtual device named eth0:1.
# ifconfig eth0:1 192.168.1.10 netmask 255.255.255.0 up
# ifconfig

This leaves a network device named eth0:1 in place. To bring it up automatically after a reboot, create the following file:
# vi /etc/sysconfig/network-scripts/ifcfg-eth0:1
DEVICE=eth0:1
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.1.10
NETMASK=255.255.255.0


The Virtual Server has to forward connections arriving at the virtual IP on to the real servers, so the ip_forward option must be enabled.
File to edit: /etc/sysctl.conf
Set net.ipv4.ip_forward=1, then apply:
# sysctl -p
or
# sysctl -w net.ipv4.ip_forward=1


Now add a new service to ipvsadm.
# ipvsadm -A -t 192.168.1.10:80 -s wlc

For details on the options, run ipvsadm --help.
The options above mean the following:
-A : add a new service
-t : TCP service
-s : scheduler; wlc = Weighted Least Connection

Now set up the real servers that actually provide the service.
The required package is arptables_jf; install it with yum as well.
# yum install arptables_jf

The reason for installing arptables_jf is as follows (I found this by searching, too).
The Virtual IP must also be configured on each Real Server; once it is, when a client queries that virtual IP there are two or more hosts on the same network holding it, so in some cases a Real Server may answer directly. Since that defeats the load balancing, arptables_jf is used to keep the real servers from answering.
Apply the arptables_jf settings:
# arptables -A IN -d <virtual_ip> -j DROP
# arptables -A OUT -d <virtual_ip> -j mangle --mangle-ip-s <real_ip>
# service arptables_jf save
# chkconfig arptables_jf on

That is, to configure the Real Server at 192.168.1.20, the commands become:
# arptables -A IN -d 192.168.1.10 -j DROP
# arptables -A OUT -d 192.168.1.10 -j mangle --mangle-ip-s 192.168.1.20

Now add the network device:
# ifconfig eth0:1 192.168.1.10 netmask 255.255.255.0

To have this device also come up automatically at boot, create the corresponding file under /etc/sysconfig/network-scripts.
Its contents were already shown above, so they are omitted here.

Once everything up to this point is done, register the real servers with ipvsadm.
On the Virtual Server:
# ipvsadm -a -t 192.168.1.10:80 -r 192.168.1.20 -g

Here the -g option means the real server is added using direct routing.
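For reference, -g is one of three packet-forwarding methods ipvsadm can attach to a real server; the other two are shown below (see ipvsadm --help):

-g : gatewaying (direct routing)
-m : masquerading (NAT)
-i : ipip encapsulation (tunneling)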

Perform the same steps on the remaining Real Server, then finally save the ipvsadm configuration and enable the service.

# service ipvsadm save
# chkconfig ipvsadm on
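With both real servers registered, you can check the final table from the Virtual Server; it should look roughly like this (illustrative output, addresses shown numerically with -n):

# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.1.10:80 wlc
  -> 192.168.1.20:80              Route   1      0          0
  -> 192.168.1.21:80              Route   1      0          0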


That's it.


Source: http://www.bongbong.net

2009.02.02 09:48

LVS (Linux Virtual Server) Case-by-Case Installation

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
PB'S NUTSHELL HOWTO FOR PIRANHA/LVS/NAT
This represents my amateur experience learning
and installing Piranha/LVS and making it work.
I do not claim this will work for you, but it
might help. Contact the piranha-list@redhat.com
if you need help.

Revised: Wed Apr 23, 2003. kernel-2.4.20-9 (RH9.0 no more ip_vs)
Revised: Thu Mar 06, 2003. kernel-2.4.18-26.7.x (RH7.3/7.2)
Revised: Tue Nov 10, 2002. Kernel 2.4.18-18.7 (RH7.3)
Revised: Mon Nov 11, 2002. Kernel 2.4.18-17.7.x notes.
Revised: Tue Oct 29, 2002. RH8.0 v. 7.3 notes.
Revised: Wed Jul 8, 2002. Heartbeat ADDENDUM 3 added.
Revised: Wed Jun 19, 2002. Kernel re-compiling note.
Revised: Tue Jun 18, 2002. Red Hat 7.3 and new IPVSADM
Revised: Tue May 21, 2002. Note on newest RPM and ftp site.
Revised: Fri Apr 11, 2002. General editing for LVS archives.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

NEWS FLASHES!!!

Red Hat Linux 9.0 has been released with kernel 2.4.20-8
and errata update 2.4.20-9. Sadly, Red Hat has chosen
to remove ip_vs from its kernel and that means you need
to re-compile kernels with ip_vs back in, or you need
to buy Red Hat Enterprise Linux AS (Advanced Server).
Additionally, see ADDENDUM 5 for some tips on compiling
ip_vs into the kernel, as well as one source for
obtaining re-compiled kernels in RPM format with ip_vs
happily put back, and some other modules removed to
enhance performance. Additionally, Addendum 2 now
contains some hints on using initscripts instead of
the rc.lvs script set forth in this howto.

Red Hat kernel 2.4.18-26.7.x was released for Red Hat
Linux 7.2 and 7.3 which fixes Gigabit/Broadcom/TG3
network card driver issue. Piranha was tested with
this kernel and working fine.

Red Hat kernel 2.4.18-18.7 was released for Red Hat Linux
7.2 and 7.3, and kernel 2.4.18-18.0 was released for Red
Hat Linux 8.0. This is a fix to a local DOS attack bug.

Red Hat 7.2 and 7.3 now has kernel errata version 2.4.18-18.7.x
which you can install and works with Piranha.

RedHat 8.0 uses Apache 2.0 and apparently some updated
PHP code which breaks the Piranha-GUI. Therefore until
RedHat fixes Piranha-GUI to work with it, I suggest
sticking with the following combination that requires no
re-compiling at all:

RedHat 7.3 + errata including kernel 2.4.18-10,
and Apache 1.3.23-14, and Piranha 1.7.0-3, Ipvsadm 1.20-7,
scsi_reserve and scsi_reserve_devel 0.5.3-9.

While RedHat 8.0 now comes with the ipvsadm 1.21-4 RPM, and it appeared to
set all the routes correctly as used on RedHat 7.3, it
nevertheless complains about an ipvs version mismatch
(0.9.7 versus the 1.0.4 it requires). You can fix the ipvs module version
by adding the new one and recompiling the kernel if you like to do that.
Also, I tried RedHat 8.0, and uninstalling httpd 2.0.40-8 and
re-installing apache 1.3.23-14, but the PHP code Piranha_GUI uses
still did not work right, so for Nutshell install purposes, I
would not use RedHat 8.0 right now with Piranha.

Fix for Piranha-GUI not displaying CURRENT LVS ROUTING TABLE.
See Appendix at the end of this document.

Update versions ipvsadm-1.20-7 RH7.3 and ipvsadm-1.21-4 RH8.0 now available.
Version piranha-0.7.0-3.i386.rpm is still the latest available.
Previous piranha-0.6.1-3 and piranha-0.6.0-19 still available.
scsi-reserve and scsi_reserve_devel 0.5.3-9 still current version.

Red Hat 7.3's 2.4.18-4 kernel is using ipvs 0.9.7 patch. Therefore,
you need to use newer version of ipvsadm. There is a modification for
ipvsadm release from linuxvirtualserver.org with patch to support Red
Hat 7.2/7.3 system. You can get both source and i386 RPM from here:

http://www.academy.rpi.edu/~yua/open_source/piranha/

Definitely get ipvsadm 1.20-7 RPM from here. If you update to ipvs
1.0.4 then get ipvsadm 1.21-4 from the RedHat 8.0 distribution.

This should work well for RedHat kernel recompiles:
* get the default .config file for the RedHat 7.3 kernel
* grab the 2.4.18 kernel from ftp.kernel.org
* copy your 7.3 .config over to the 2.4.18 source tree
* apply all the patches you need, config as necessary, then recompile.
(This may also work with your 7.2 .config).
This is a way to get a RedHat-like kernel using the latest
patches without conflict.

Please refer to the new heartbeat section - ADDENDUM 3
at the end of this document.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

AND NOW TO THE INSTALL! PIRANHA/LVS SERVICES NEEDS:

Install Red Hat 7.2 or 7.3, then download and install the following RPM's from these websites:
http://ftp.linux.org.uk/pub/linux/piranha/7.2/
http://freshmeat.net/releases/70660/
http://www.academy.rpi.edu/~yua/open_source/piranha/

piranha-0.7.0-3.i386.rpm
scsi-reserve-0.7-6.i386.rpm
scsi-reserve-devel-0.7-6.i386.rpm
ipvsadm-1.20-7.i386.rpm (RH 7.2/7.3 with 2.4.9-[31|34] kernel)
ipvsadm-1.21-4.i386.rpm (RH 7.2/7.3 with 2.4.18-[18.7|18.8] kernel)
(Note: Kernel 2.4.18-18.8 is the Red Hat 8.0 update. 8.0 not
recommended for use with Piranha-GUI at this time.)

CONFIGURATION INSTRUCTIONS:

Fix Docs:

cd to /etc/sysconfig/ha/web and rename docs.html and
create a symbolic link as follows:
ln -s /usr/share/doc/piranha-0.x.x/docs docs.html
(depending on piranha version you installed)
so the Piranha GUI documentation link will work.

Fix Piranha-GUI Output of Routing table:

With Red Hat v7.3 the Piranha-GUI works nicely EXCEPT it fails
to display the "CURRENT LVS ROUTING TABLE". See Addendum 4 at
the end of this document for the fix.


IP FORWARDING:

Make your /etc/sysctl.conf file look like this:
# Enables packet forwarding
net.ipv4.ip_forward = 1
# Enables source route verification
net.ipv4.conf.default.rp_filter = 1
# Disables the magic-sysrq key
kernel.sysrq = 0


CONFIGURATION FILES AND STARTUP SCRIPTS:

Put this rc.lvs (which I created) in your /etc/rc.d/
directory and make the rc.local file run it.
NOTE: See Sebastien's comments in the Addendum 2
(below) for alternative initscripts method. I
found the rc.lvs method more instructive for
beginners, since it puts it all together in one
script.
~~~~~~rc.lvs~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#!/bin/sh

###################################################
## LVS (ip_vs / piranha) Setup Requires this.
###################################################

# Flush previous rules
iptables -t nat -F -v


# Turn on IP Forwarding or set in /etc/sysctl.conf
echo 1 >/proc/sys/net/ipv4/ip_forward


# EXAMPLES Kernel 2.4 should use this.
# modprobe iptable_nat
# iptables -t nat -A POSTROUTING -s n.n.n.n/24 -j MASQUERADE
modprobe iptable_nat
iptables -v -t nat -A POSTROUTING -s 184.226.10.0/24 -j MASQUERADE


# EXAMPLES Firewall Marks - Only if used in lvs.cf.
# iptables -F -t mangle -v
# iptables -t mangle -A PREROUTING -i eth0 -p tcp -s 0.0.0.0/0 -d 10.11.12/24 --dport 21 -j MARK --set-mark 2 -v
# iptables -t mangle -A PREROUTING -i eth0 -p tcp -s 0.0.0.0/0 -d 10.11.12/24 --dport ftp -j MARK --set-mark 2 -v
# iptables -t mangle -A PREROUTING -i eth0 -p tcp -s 0.0.0.0/0 -d 10.11.12/24 --dport 10000:20000 -j MARK --set-mark 2 -v
# iptables -L -t mangle -v


# EXAMPLES Another way to add a load-balanced service to several real hosts (scheduler wlc or rr).
# ipvsadm -A -t 207.175.44.110:80 -s rr
# ipvsadm -a -t 207.175.44.110:80 -r 192.168.10.1:80 -m
# ipvsadm -a -t 207.175.44.110:80 -r 192.168.10.2:80 -m
# ipvsadm -a -t 207.175.44.110:80 -r 192.168.10.3:80 -m
# ipvsadm -a -t 207.175.44.110:80 -r 192.168.10.4:80 -m
# ipvsadm -a -t 207.175.44.110:80 -r 192.168.10.5:80 -m
#
# Need this to make smtp port 25 route with masq. Run after loading lvs.
# smtp masqued routing does not work without this.
ipvsadm -A -t 184.226.13.26:25 -s wlc
ipvsadm -a -t 184.226.13.26:25 -r 184.226.10.37:25 -m
ipvsadm -a -t 184.226.13.26:25 -r 184.226.10.38:25 -m


# Run lvs manually or uncomment. This will also spawn nanny daemons.
# Or let pulse daemon start lvs and nanny and networking, and comment
# out the lvs statement.
# /usr/sbin/pulse
# /usr/sbin/pulse --forceactive
/usr/sbin/lvs --configfile=/etc/sysconfig/ha/lvs.cf


# Start the Piranha-GUI - Do this last.
# Access with http://piranhahostname:3636
/etc/rc.d/init.d/piranha-gui start

# Show the state of the routing table.
/sbin/ipvsadm -L

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
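To have rc.local run it at boot as described above, something like this would do (a sketch; adjust the path if yours differs):

chmod +x /etc/rc.d/rc.lvs
echo "/etc/rc.d/rc.lvs" >> /etc/rc.d/rc.local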

You can edit the /etc/sysconfig/ha/lvs.cf file with
the Piranha GUI using URL
http://yourPiranhaHostname:3636 but you need to first
run "piranha-passwd YourPassword". Then the ID is
"piranha" with that password, when you access Piranha
GUI.
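So the first-time sequence would look something like this (the password is a placeholder):

# set the GUI password, then start the GUI and browse to
# http://yourPiranhaHostname:3636, logging in as user "piranha"
piranha-passwd MySecretPW
/etc/rc.d/init.d/piranha-gui start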

Or you can edit your /etc/sysconfig/ha/lvs.cf
manually, but not recommended unless you know what you
are doing.
But it's pretty straightforward. Here's a sample
which uses one LVS server to load balance to one real
server (and later two real servers). Note you
can only use NAT routing. You CANNOT use IP TUNNELING,
nor DIRECT ROUTING, as the stock kernel has an ARP
problem. There is a patch, but I cannot help with
that.
My setup laid out herein uses NAT only.

~~~~~~~~~~~~~lvs.cf sample~~~~~~~~~~~~~~~~~~~~~
serial_no = 89
primary = 184.226.13.25
service = lvs
backup_active = 0 <== make 1 if a backup LVS server exists.
backup = 0.0.0.0 <== only put the true IP of the LVS backup here if one exists.
heartbeat = 1
heartbeat_port = 539
keepalive = 6
deadtime = 18
network = nat
nat_router = 184.226.10.22 eth1:1
nat_nmask = 255.255.255.0
reservation_conflict_action = preempt
debug_level = NONE
virtual piranha_http {
active = 1
address = 184.226.13.26 eth0:1
vip_nmask = 255.255.255.0
port = 80
send = "GET / HTTP/1.0rnrn"
expect = "HTTP"
load_monitor = uptime
scheduler = wlc
protocol = tcp
timeout = 6
reentry = 15
quiesce_server = 0
server real1.foobar.com {
address = 184.226.10.56
active = 1
weight = 1
}
server real2.foobar.com {
address = 184.226.10.57
active = 1
weight = 1
}
}
virtual piranha_smtp {
active = 1
address = 184.226.13.26 eth0:2 <==note you can run multiple failover ip's.
vip_nmask = 255.255.255.0
port = 25
send = "/etc/sysconfig/ha/test_smtp.sh %h"
expect = "OK"
load_monitor = uptime
scheduler = wlc
protocol = tcp
timeout = 6
reentry = 15
quiesce_server = 0
server real1.foobar.com {
address = 184.226.10.37
active = 1
weight = 1
}
server real2.foobar.com {
address = 184.226.10.38
active = 1
weight = 1
}
}
virtual piranha_https {
active = 1
address = 184.226.13.26 eth0:3
vip_nmask = 255.255.255.0
port = 443
persistent = 900
load_monitor = uptime
scheduler = wlc
protocol = tcp
timeout = 6
reentry = 15
quiesce_server = 0
server real1.foobar.com {
address = 184.226.10.27
active = 1
weight = 1
}
server real2.foobar.com {
address = 184.226.10.28
active = 1
weight = 1
}
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The /etc/sysconfig/ha/test_smtp.sh script. Note you can
change port 25 to other ports as well, like LDAP port
389 and rename it to test_ldap.sh. I've also added
an ldapsearch test method for ldap ports.

#!/bin/sh

# Depending on which port you are testing, uncomment one of the following
# and edit it for your port. Or you can create your own kind of test.
# If you do not use a script, or a character string in lvs.cf to test
# ports, the nanny daemon still processes the ports and load balancing
# still occurs to active ports, and inactives are removed from the
# (NAT) routing table.

# Port 25 for smtp - using telnet test.
TEST=`printf "quit\n" | telnet $1 25 2>/dev/null |grep -i "connected"|wc -l` > /dev/null 2>&1

# Port 25 for smtp - using netcat (nc) test.
# TEST=`printf "helo foobar.com\nmail from: PortTest@foobar.com\nquit\n" | nc -v $1 25 2>/dev/null |grep -i "sender ok"|wc -l` > /dev/null 2>&1

# Port 389 for ldap
# TEST=`printf "\n" | ldapsearch -x -l 3 -h $1 -p 389 -LLL -s base -b "o=ROOT" dn 2>/dev/null |grep -i "ROOT"|wc -l` > /dev/null 2>&1

if [ $TEST -eq 1 ]; then
echo "OK"
else
echo "FAIL"
fi

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
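You can sanity-check the script by hand before wiring it into lvs.cf; nanny substitutes the real server's address for %h, so call it the same way (10.37 is one of the smtp real servers above):

# /etc/sysconfig/ha/test_smtp.sh 184.226.10.37
OK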
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Load Monitor Note:
I used "uptime" as the load monitor in the
lvs.cf script, which required you load
snmp and rpc.rstatd on your Linux boxes
or other OS. Make sure you have the following
start scripts:

/etc/rc.d/init.d/rpc.statd
from rpm package rusers-server-0.17-12

/etc/rc.d/init.d/snmpd
from rpm package ucd-snmp-4.2.3-1.7.2.3

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In the lvs.cf above take note of the Piranha/LVS
server's eth0 / eth0:1 (also eth0:2, eth0:3, etc.
since Piranha can support High Availability failover
of multiple virtual interfaces) is the public side,
and eth1 / eth1:1 for the private network side. eth0 is
the host address, eth0:1 is the VIRTUAL address (which
you put in your DNS for this host); eth1 is a host
address and eth1:1 is the NAT ROUTER address for
private network side. The gateway for each eth card is
the one your LAN/WAN Admin gives you.

On the private network are REAL SERVERS 1, 2, etc.
(only 1 is in the lvs.cf above). Each real server has
eth0 as host address, eth0:1, eth0:2 etc. for each
service, in my case for http 80, smtp 25 and https
443.
(USING MULTIPLE VIRTUAL ETHERNETS ON THE REAL SERVERS
IS NOT REQUIRED, I simply set it up that way, one IP
per service Port, but you do not need to do it that way.)

The GATEWAY for the REAL SERVERS must be the IP
ADDRESS OF THE Piranha/LVS SERVER'S NAT ROUTER
address.

You can setup the ethernets and gateways on RH7.2 in
the /etc/sysconfig/network (for default gateway) and
/etc/sysconfig/networking/ and
/etc/sysconfig/network-scripts/ dirs where you will
see scripts like ifcfg-eth0, ifcfg-eth0:1 and so on.
/etc/rc.d/init.d/network restart/stop/start will reset
your network, as well as ifdown eth0:x and ifup
eth0:x.
Check your routing with route -e.

The IP ADDRESSES for ethernet cards for the
Piranha/LVS server on the PRIVATE NETWORK **MUST BE*
the same network as the ethernet cards in the REAL
SERVERS which are on the private network. The IP
ADDRESSES for the eth cards for the Piranha/LVS PUBLIC
SIDE NETWORK should be a different network.

Graphically speaking:

PUBLIC NETWORK (ie. 13.0)
|
eth0 host addr public net side
eth0:1 virtual addr public net
[LVS SERVER]
eth1 host addr private side
eth1:1 NAT Router addr private side
|
PRIVATE NETWORK (ie. 10.0)
|
eth0 http 80 service on private net
eth0:1 smtp 25 service "
eth0:2 https 443 service "
[REAL SERVER]

The above setup is reflected in the lvs.cf file.
(But again, I am not sure if each service needs
to have its own IP address like I am showing, I just
got it to work this way.)

You can access the LVS Howto's in the Piranha GUI for
more on iptables and ipvsadm commands and firewall
marks, but I included helpful examples in the rc.lvs
above which are both from those howto's and from the
piranha-list red hat help.



THE APACHE HTTPD.CONF FILE FOR WEBMAIL:
This only applies for a WEBMAIL service like Silkymail from
Cyrusoft.com or other similar web browser-based mail.
The httpd.conf file should contain the following entries
so that Piranha can successfully route port 80 over to 443:

~~~~~~~~~~~~~~~~~~~~~~~~~~~
## Note the real2 entries here would be "real1" on real1
## and the 10.x IP addresses refering to real2 here would
## also be replaced with real1's on real1.
## All else is kept the same like 13.x address and other hostnames.
## Note webmail is a cname (alias) for real1 or real2.foobar.com
## as would be smtp.foobar.com.

NameVirtualHost 184.226.10.27
NameVirtualHost 184.226.13.26

<VirtualHost 184.226.10.27:443>
ServerName webmail.foobar.com
SSLFlag on
SSLCertificateKeyFile
/usr/local/cyrusoft/silkymail/openssl/certs/webmail.foobar.com.key
# Signed certificate for the server
SSLCertificateFile
/usr/local/cyrusoft/silkymail/openssl/certs/webmail.foobar.com.crt
</VirtualHost>

<VirtualHost 184.226.10.27 184.226.10.27:80 webmail.foobar.com foobar.com
184.226.13.26 184.226.13.26:80 >
ServerName webmail.foobar.com
Redirect / https://webmail.foobar.com:443/
</VirtualHost>
~~~~~~~~~~~~~~~~~~~~~~~~~~~~



SILKYMAIL default.php3:
The
/usr/local/cyrusoft/silkymail/www/htdocs/silkymail/imp/config/defaults.php3

should contain this entry on both real1 and real2:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/* Server Specific Configuration */
$default->localhost = "webmail.foobar.com";
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~



FINAL NOTES:
The above setup got my http, smtp and https requests
NAT-routing with masquerade through my Piranha/LVS
server to my one backend Real Server (and later my
two backend real servers) running http and
smtp. You should of course add a 2nd real server to
make this worthwhile, and think about a 2nd
Piranha/LVS server with heartbeat failover.

Killing the lvs daemon can be done by first doing a
"ps afx | grep lvs" then "kill -15 <parent pid for lvs>" and
that will also kill lvs and the nanny daemons. (Sounds
like a singing group.) As far as I can
tell you need to do this anytime you change the
lvs.cf, and there's no way to make it -SIGHUP, as it
leaves the nanny's out to lunch.





====================================================================================
== ADDENDUM 1:
== My final working system running on Piranha, with real servers Real1, Real2
== over-all setup and methods of management.
====================================================================================

SYSTEM IP-ADDR PORT SERVICE ETHERNET-DEVICE

piranha 13.25 n/a public host address eth0
piranha 13.26 Virtual IP Address (this is what DNS uses for smtp & webmail) eth0:1
piranha 10.21 n/a private host address eth1
piranha 10.22 n/a NAT Router Address eth1:1

real1 10.57 80 http (webmail) eth0
real1 10.28 443 https (webmail secure) eth0:1
real1 10.38 25 smtp (sendmail) eth0:2

real2 10.56 80 http (webmail) eth0
real2 10.27 443 https (webmail secure) eth0:1
real2 10.37 25 smtp (sendmail) eth0:2

HOW IT ALL WORKS: The current system piranha.foobar.com accepts requests
for smtp/http/https services on IP 13.26
and routes them via Weighted Least Connections load balancing scheme
(though that can be adjusted to other
schemes if ever needed) via the NAT Router IP 10.22 to the
appropriate service on the "Real Servers" Real1 and
Real2, based on the PORT. (Down the road, we will eventually have two
Piranha boxes (Piranha and Piranha2)
which will have a heartbeat between then, so if one fails, the other
will pickup the Virtual IP Address and the
NAT Router Address to continue services... and manual failback will be
required. Both Piranha boxes would be
configured with identical lvs.cf configurations and network setups,
except the Virtual IP and NAT Router will
only be on one box at a time. )


=================================================================================

=== MANAGING PIRANHA SERVICES
=================================================================================

==> Access Piranha.foobar.com via ssh with the same pw as Real1/Real2. To
access Real1/Real2 you
need to first ssh to piranha, then ssh to real1/real2.


==> You can check the ports that are "up" as follows:
(1) ssh to piranha.foobar.com and type "ipps localhost" to see which
ports Piranha
itself is running, should be the following (note: 80,443,25 are not
there
which is correct! ):
22 ssh
111 sunrpc
199 smux
(2) Also from piranha.foobar.com type "ipps real2" and "ipps real1" and
this time you
should see ports 80, 443 and 25 up.
22 ssh
25 smtp <===
80 http <===
111 sunrpc
199 smux
443 https <===
587 submission


By The Way Credit for IPPS:
-------------------------------------------
IPPS 1.0 IP Port Scanner
(C) 1999 Victor STANESCU <bruno@lmn.pub.ro>
-------------------------------------------


==> While ssh'd to piranha, run a "ps afx" from a shell terminal; you
should see the
following LVS, NANNY and PIRANHA_GUI daemons running as follows (it will look a lot more orderly on
your shell screen):

#piranha: ps afx

26166 ? S 0:00 /usr/sbin/lvs --configfile=/etc/sysconfig/ha/lvs.cf
26169 ? S 0:00 _ /usr/sbin/nanny -c -h 184.226.10.56 -p 80 -s GET / HTTP/1.0\r\n\r\n -x HTTP -a 15
26170 ? S 0:00 _ /usr/sbin/nanny -c -h 184.226.10.57 -p 80 -s GET / HTTP/1.0\r\n\r\n -x HTTP -a 15
26174 ? S 0:00 _ /usr/sbin/nanny -c -h 184.226.10.37 -p 25 -e /usr/sbin/test_smtp.sh %h -x OK -a 2
26176 ? S 0:00 _ /usr/sbin/nanny -c -h 184.226.10.38 -p 25 -e /usr/sbin/test_smtp.sh %h -x OK -a 2
26188 ? S 0:00 _ /usr/sbin/nanny -c -h 184.226.10.27 -p 443 -a 15 -I /sbin/ipvsadm -t 6 -w 1 -V 13
26205 ? S 0:00 _ /usr/sbin/nanny -c -h 184.226.10.28 -p 443 -a 15 -I /sbin/ipvsadm -t 6 -w 1 -V 13
26214 ? S 0:00 /usr/sbin/piranha_gui -D HAVE_PHP4 -f /etc/sysconfig/ha/conf/httpd.conf
26217 ? S 0:00 _ /usr/sbin/piranha_gui -D HAVE_PHP4 -f /etc/sysconfig/ha/conf/httpd.conf
14382 ? S 0:00 _ /usr/sbin/piranha_gui -D HAVE_PHP4 -f /etc/sysconfig/ha/conf/httpd.conf



====> STOPPING AND RESTARTING PIRANHA SERVICES

Killing the daemons:

kill -15 26217 26166
(note that these two PID's are the parent for Piranha_GUI and for LVS)

Restarting the Daemons:
/etc/rc.d/rc.lvs



=============================================================================
== ADDENDUM 2: Answers to some user questions
=============================================================================

Here's Sebastien's fine comments on somewhat different ways to setup
your Piranha server boot scripts and so on:

> You would add to your lvs.cf config file and rc.lvs
> script. That is, all protocols (ports) need to be
> added to those files. See my example lvs.cf and rc.lvs
> on my website.

I wouldn't suggest using your rc.lvs because of several points :
* it uses ipvsadm whereas it shouldn't (it's piranha's role)
* it mixes virtual server commands, filtering commands, routing
commands
* it starts daemons which do have initscripts

From my point of view:
* everything related to iptables should be moved to
/etc/sysconfig/iptables and the iptables service should be activated
* sysctl variables should be set in /etc/sysctl.conf only
* no ipvsadm stuff, piranha will manage it by itself, just activate the
pulse service
* piranha-gui is also a service, activate it

> Also, if your users are using http to get to a LOGIN webpage, you
> also have to consider adding persistent = 900 to your lvs.cf for
> http service, but that is another story.

This is based on the assumption that session tracking is real server
bound, which is obviously and fortunately not always true. Just think
DB
or file sharing.

> Since you are starting "lvs" daemon by executing the "pulse" daemon
> (which is also true in my rc.lvs script)

It's not ! The uncommented line is :
/usr/sbin/lvs --configfile=/etc/sysconfig/ha/lvs.cf

(EDITORS NOTE: Actually the example rc.lvs script states that you can comment out
either line you like).

--
Sébastien Bonnet
Centre de contacts - Experian France



Here are some answers to Scott's questions from 3-9-2002:

> scsi-reserve-0.7-6.i386.rpm
> scsi-reserve-devel-0.7-6.i386.rpm

What are these used for? (Out of curiosity)

ANSWER: Piranha RPM's require them for install. Please check with Redhat for anything additional.

> Make your /etc/sysctl.conf file look like this:
> # Disables packet forwarding
> net.ipv4.ip_forward = 1

"Disables" should be "Enables" for consistency.

ANSWER: "1" is the meaningful part, and it ENABLES packet forwarding.

> # Enables source route verification
> net.ipv4.conf.default.rp_filter = 1
> # Disables the magic-sysrq key
> kernel.sysrq = 0

I would - as a general rule - suggest to enable this, it makes debugging so
much easier.

> virtual piranha_smtp {
> active = 1
> address = 184.226.13.26 eth0:1
> vip_nmask = 255.255.255.0
> port = 25
> send = "GET / HTTP/1.0\r\n\r\n"
> expect = "HTTP"

This doesn't make any sense. You are talking HTTP to an SMTP port ;-)

ANSWER: Please see my latest lvs.cf file in this how-to which shows how
to make a shell script to test a port. Your question was asked when I
had an older lvs.cf in place.


=============================================================================
== ADDENDUM 3: Piranha/LVS Heartbeat tips
=============================================================================

I got some input from Mike McLean and Brian Gray at Red Hat. I felt
it should be shared with you, as it definitely reduced the confusion
factor for me. A thank you to the piranha-list@redhat.com.

The Piranha_GUI lets you edit the /etc/sysconfig/ha/lvs.cf or you can do it
manually to add these heartbeat requirements. You edit this file, then
copy it exactly as-is to the backup LVS as well.
The "pulse" daemon is what needs to be run to make LVS failover work.
You will need to start this daemon on both Piranha/LVS routers
with "/etc/rc.d/init.d/pulse start". Based on my over-all Piranha/LVS
solution discussed in this document, you can add that pulse start
statement to your /etc/rc.d/rc.lvs script which is started by
/etc/rc.d/rc.local.

Here's the settings in lvs.cf related to heartbeat:

primary = IP Address of host's public side interface - usually on eth0
backup = IP Address of public side host interface - also usually on eth0

Example:

primary = 184.226.13.25
backup = 184.226.13.180

Special tips: the above IP's are NOT the public IP of the Piranha/LVS service
but of the hosts addresses on the public side only, usually on eth0 interface.
You usually put the Piranha/LVS service IP address on virtual interface eth0:1.
The primary and backup Piranha/LVS hosts determine who they are by comparing
these addresses. In other words, the lvs.cf should be identical on both systems!

Set this as follows to switch on heartbeat - of course only applies if "pulse"
daemon is running - and you select a port, the default is 539:

heartbeat = 1
heartbeat_port = 539

The following additional settings are in seconds, keepalive is how long
between heartbeats, and deadtime is how long the LVS waits until it
decides to take over as primary:

keepalive = 6
deadtime = 18

This was another stumper for me - however it turns out the nat_router
IP address ought to be THE SAME on both primary and backup Piranha/LVS
systems:

nat_router = 184.226.10.22 eth1:1

Optional, if you want LVS to keep the lvs.cf in sync on both LVS systems -
I'm afraid I do not know anything else about this setting yet:

rsh_command = rsh -or- ssh


Special tip: Piranha (assumedly the pulse daemon) will handle starting
and stopping alias network interface devices (meaning devices eth0:1 the Piranha/LVS
public service IP and eth1:1 the private side NAT Router IP). DO NOT
set up any external scripts to configure these devices - Piranha does it
based on lvs.cf.

NEW TIP: Piranha/LVS can failover MULTIPLE IP ADDRESSES. That is, if you
need one load balanced service on eth0:1 and another on eth0:2
you can set those virtual devices in the lvs.cf file for each
virtual server, and Piranha will enable those devices either from
PULSE daemon or running LVS daemon directly.

* * *

The preceding was based on Piranha/LVS's own support for heartbeat (ie. pulse)
and failover methodology. Because I was confused by certain specifics
which I've now clarified above, I previously wrote my own Perl script
which you can download from
<http://peterbaitz.com/rc.d/rc.PiranhaLVS.PBnutshellscripts-070802.tgz>
and unzip/untar it as follows:
gunzip -c rc.PiranhaLVS.PBnutshellscripts-070802.tgz | tar xvf -
and you will find a README in it. To the point, the rc.lvs.failover.pl
is my script. Just edit it, change the variables to match your system,
and run it on both primary and backup Piranha/LVS systems AS AN ALTERNATIVE
to the Piranha/LVS settings/setup discussed beforehand.
With my Perl script solution, you do need to setup the virtual
interfaces to be in place, but not to activate on bootup.
My rc.lvs and rc.lvsdown scripts (which start and stop the
Piranha/LVS services) use the ifup and ifdown commands to
bring up or shut down those virtual interfaces eth0:1 and eth1:1.
If you think my ping solution is not elegant, make no mistake, the
Piranha/LVS pulse solution is exactly that - a ping solution too.
Theirs may be a little faster at failover, however; mine would take
a minute to check, and two minutes between checks - unless you
change the defaults in my script.

=================================================================
== Addendum 4 - Fix for Piranha-GUI not displaying routing table
=================================================================

With Red Hat v7.3 the Piranha-GUI works nicely EXCEPT it fails
to display the "CURRENT LVS ROUTING TABLE". Just comes up blank.
Even though it does properly display "CURRENT LVS PROCESSERS".
The simple fix is this:

cd /bin
ln -s /sbin/ipvsadm ipvsadm
chmod +s /sbin/ipvsadm

This adds a symbolic link /bin/ipvsadm to the file
system, and adds user/group execution (+s) permissions
to /sbin/ipvsadm. It worked for me!!!

Another reported way to fix this is as follows:

perl -pi -e "s,/sbin/ipvsadm,/bin/ipvsadm,g"
/etc/sysconfig/ha/web/secure/control.php3


=================================================================
== Addendum 5 - Tips for compiling ip_vs into Red Hat 9.0
=================================================================
Red Hat Linux 9.0 kernel-2.4.20-9 with ip_vs 1.0.8 recompiled
back in, downloadable from:

http://peter.baitz.com/kernel9/
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
No responsibility or liability assumed for any damages from use
of this kernel. Assumed you know how to test it first.
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

If you want to recompile ip_vs into the kernel yourself you can
refer to these websites:

http://sources.redhat.com/piranha/
http://www.redhat.com/docs/manuals/haserver/RHHAS-1.0-Manual/ch-lvs.html
http://www.linuxvirtualserver.org


THE FOLLOWING IS FROM EMAIL ON THE lvs-users@LinuxVirtualServer.org LIST:

Well, i have updated my patches at

http://mail.incredimail.com/howto/lvs/install/

and split them to

http://mail.incredimail.com/howto/lvs/install/RedHat7.3/
(now including latest 1.0.8), and
http://mail.incredimail.com/howto/lvs/install/RedHat9.0/

The latter including rebuilt kernels with ipvs 1.0.8, and the ipvsadm
rpm.

the INSTALL document describes the process of taking the original
kernel.src.rpm, and producing the desired, ipvs-patched kernel. Worth
reading for RH users.

another option for rh 9 users is to use an rh 8 kernel - this is also
in the
document...

On another note, while building the ipvsadm package, i ran into 3 problems:

1. make rpm in the ipvsadm directory runs "rpm -ba" to build the rpms. this
will not work on RH 8 and up. the command should be "rpmbuild -ba" - this is
supported in the RH 7 distros - it is backwards compatible, and should be
a safe change.

2. since redhat's kernel-source package doesn't install its source
files to /usr/src/linux, but to /usr/src/linux-2.4, i use "make
KERNELSOURCE=/usr/src/linux-2.4" to build the module on 7.3, but i
cannot pass this on while building the ipvsadm rpm. There should be
some mechanism in place to deal with that.

3. documentation :

a. the minihowto states that redhat's kernel includes ipvs - this is no
longer true ;(
b. the howto includes my note about redhat's kernel and heartbeat here

http://www.linux-vs.org/Joseph.Mack/HOWTO/LVS-HOWTO.unsupported.html#RH_prepatched_LVS_from_the_mailing_list

this is no longer true, since kernel 2.4.18-27

Comments and suggestions are welcome,

Alex.

----- Original Message -----
From: "Alex Kramarov"
To: "LinuxVirtualServer.org users mailing list."
<lvs-users@LinuxVirtualServer.org>
Sent: Friday, April 11, 2003 7:05 PM
Subject: Re: [ANNOUNCE] ipvs 1.0.8


> I have already done extensive research about redhat 9 kernel and lvs
>
> 1. although pre-RH 9 redhat kernels did include ipvs, rh 9 doesn't. rh9 is
> desktop oriented, they want us to purchase RHAS to have ipvs
>
> 2. rh 9 is heavily patched; the most intrusive patches are the o1 scheduler
> and the nptl patches. these interfere with a lot of kernel work; ipvs and uml
> are the ones i met problems with in the last week.
>
> 3. you should not use the kernel-source rpm to build your own kernel, you
> should get the kernel.srpm, install it into /usr/src/redhat, and modify the
> spec file to include the ipvs patch. i was able to do that by adding the
> ipvs patch, and removing O1 (which includes some preempt pieces) and nptl
> related patches from the spec file (i also removed the lowlat patches,
> although they don't collide with ipvs). these redhat optimisations are
> good for desktops, but they are affecting lvs director performance by as
> much as 25 percent - see the list archives.
>
> 4. i will post my spec file, and the prebuilt RH 9 kernel rpms and srpms
> for the i686 platform later, in the standard place.
>
> http://mail.incredimail.com/howto/lvs/install
>
> i will also see if the modules compile standalone with the regular redhat
> 9 kernel - they did compile with the 7.3 kernel.
>
> Note. although i don't see any impact of removing the nptl patches from
> the kernel on any functionality or stability of the machines i use, if
> anyone has some input on this, i would be happy to hear it, since glibc is
> compiled with nptl support on rh 9, and i am not going to recompile that !
>
> thank you.
>
> Alex.
>
> P.S. RH 7.3 is still my favourite distro for servers; too bad it has only
> 8 more months of support ...


----------------------------------------------------------------------------

there are several patches you could apply to the client kernel, though i
believe you are referring to the hidden patch, which is most common. I
didn't want to include the hidden patch in these rpms, since the clients
will probably need to run the kernel redhat expects, with nptl, since you
will probably run userspace daemons, so you are better off taking the
original srpm and only adding to it the hidden patch, or any other client
side patch you want.
I don't use DR, so i don't have the need for such a kernel, and i don't
compile it, but i provide some instructions here :

http://mail.incredimail.com/howto/lvs/install/RedHat9.0/kernel-2.4.20-9
hidden/INSTALL

Alex.


=================================================================
== Addendum 6 - How to start or stop a program from lvs.cf
=================================================================
Example:

lvs.cf

serial_no = 22
primary = 10.0.13.5
service = fos
backup_active = 1
backup = 10.0.13.6
heartbeat = 1
heartbeat_port = 539
keepalive = 6
deadtime = 18
network = direct
reservation_conflict_action = preempt
debug_level = 2
failover spdb {
address = 10.0.13.30 aft0:1
vip_nmask = 255.255.255.0
active = 1
timeout = 6
send_program = "/scripts/check_pgsql_alive"
expect = "OK"
start_cmd = "/scripts/startdb"
stop_cmd = "/scripts/stopdb"
}


#startdb
cd /var/lib/pgsql/data
su - postgres -c "/var/lib/pgsql/data/start_db"
echo "started db"

#stopdb
cd /var/lib/pgsql/data
su - postgres -c "/var/lib/pgsql/data/stop_db"
echo "stopped db"

This was from: "Edward Croft" <ecroft@openratings.com>


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
T H E E N D
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Peter Baitz (PB)
computer systems engineer
& systems administrator
peterbaitz@yahoo.com
http://peterbaitz.com/lvs.html
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
