
LVS (Linux Virtual Server) Installation by Case

by 날으는물고기 2009. 2. 2.


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
PB'S NUTSHELL HOWTO FOR PIRANHA/LVS/NAT
This represents my amateur experience learning
and installing Piranha/LVS and making it work.
I do not claim this will work for you, but it
might help. Contact the piranha-list@redhat.com
if you need help.

Revised: Wed Apr 23, 2003. kernel-2.4.20-9 (RH9.0 no more ip_vs)
Revised: Thu Mar 06, 2003. kernel-2.4.18-26.7.x (RH7.3/7.2)
Revised: Tue Nov 10, 2002. Kernel 2.4.18-18.7 (RH7.3)
Revised: Mon Nov 11, 2002. Kernel 2.4.18-17.7.x notes.
Revised: Tue Oct 29, 2002. RH8.0 v. 7.3 notes.
Revised: Wed Jul 8, 2002. Heartbeat ADDENDUM 3 added.
Revised: Wed Jun 19, 2002. Kernel re-compiling note.
Revised: Tue Jun 18, 2002. Red Hat 7.3 and new IPVSADM
Revised: Tue May 21, 2002. Note on newest RPM and ftp site.
Revised: Fri Apr 11, 2002. General editing for LVS archives.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

NEWS FLASHES!!!

Red Hat Linux 9.0 has been released with kernel 2.4.20-8
and errata update 2.4.20-9. Sadly, Red Hat has chosen
to remove ip_vs from its kernel and that means you need
to re-compile kernels with ip_vs back in, or you need
to buy Red Hat Enterprise Linux AS (Advanced Server).
Additionally, see ADDENDUM 5 for some tips on compiling
ip_vs into the kernel, as well as one source for
obtaining re-compiled kernels in RPM format with ip_vs
happily put back, and some other modules removed to
enhance performance. Additionally, Addendum 2 now
contains some hints on using initscripts instead of
the rc.lvs script set forth in this howto.

Red Hat kernel 2.4.18-26.7.x was released for Red Hat
Linux 7.2 and 7.3, which fixes a Gigabit/Broadcom/TG3
network card driver issue. Piranha was tested with
this kernel and works fine.

Red Hat kernel 2.4.18-18.7 was released for Red Hat Linux
7.2 and 7.3, and kernel 2.4.18-18.0 was released for Red
Hat Linux 8.0. This is a fix to a local DOS attack bug.

Red Hat 7.2 and 7.3 now have kernel errata version 2.4.18-18.7.x,
which you can install and which works with Piranha.

RedHat 8.0 uses Apache 2.0 and apparently some updated
PHP code which breaks the Piranha-GUI. Therefore, until
RedHat fixes the Piranha-GUI to work with it, I suggest
sticking with the following combination, which requires no
re-compiling at all:

RedHat 7.3 + errata including kernel 2.4.18-10,
and Apache 1.3.23-14, and Piranha 0.7.0-3, ipvsadm 1.20-7,
scsi_reserve and scsi_reserve_devel 0.5.3-9.

While RedHat 8.0 now comes with an ipvsadm 1.21-4 RPM, and it
appeared to set all the routes correctly as used on RedHat 7.3, it
nevertheless complains about an ipvs version mismatch
(0.9.7 versus the 1.0.4 it requires). You can fix the ipvs module version
by adding the new one and recompiling the kernel, if you like to do that.
Also, I tried RedHat 8.0 with httpd 2.0.40-8 uninstalled and
apache 1.3.23-14 re-installed, but the PHP code the Piranha-GUI uses
still did not work right, so for Nutshell install purposes, I
would not use RedHat 8.0 right now with Piranha.

Fix for Piranha-GUI not displaying CURRENT LVS ROUTING TABLE.
See Appendix at the end of this document.

Updated versions ipvsadm-1.20-7 (RH7.3) and ipvsadm-1.21-4 (RH8.0) are now available.
Version piranha-0.7.0-3.i386.rpm is still the latest available.
Previous piranha-0.6.1-3 and piranha-0.6.0-19 are still available.
scsi-reserve and scsi_reserve_devel 0.5.3-9 are still the current versions.

Red Hat 7.3's 2.4.18-4 kernel uses the ipvs 0.9.7 patch. Therefore,
you need to use a newer version of ipvsadm. There is a modified
ipvsadm release from linuxvirtualserver.org with a patch to support Red
Hat 7.2/7.3 systems. You can get both the source and i386 RPM from here:

http://www.academy.rpi.edu/~yua/open_source/piranha/

Definitely get the ipvsadm 1.20-7 RPM from here. If you update to ipvs
1.0.4, then get ipvsadm 1.21-4 from the RedHat 8.0 distribution.

This should work well for RedHat kernel recompiles:
* get the default .config file for the RedHat 7.3 kernel
* grab the 2.4.18 kernel from ftp.kernel.org
* copy your 7.3 .config over to the 2.4.18 source tree
* apply all the patches you need, config as necessary, then recompile.
(This may also work with your 7.2 .config).
This is a way to get a RedHat-like kernel using the latest
patches without conflict.
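
A minimal sketch of that recipe as commands (paths and patch names
here are illustrative, not from the original howto):

cd /usr/src/linux-2.4.18
cp /path/to/redhat-7.3.config .config     # the default RH 7.3 kernel .config
patch -p1 < /path/to/your-patch.diff      # apply the patches you need
make oldconfig                            # answer prompts for any new options
make dep bzImage modules
make modules_install install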

Please refer to the new heartbeat section - ADDENDUM 3
at the end of this document.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

AND NOW TO THE INSTALL! THE PIRANHA/LVS SERVICE NEEDS:

Install Red Hat 7.2 or 7.3, then download and install the following
RPMs from these websites:
http://ftp.linux.org.uk/pub/linux/piranha/7.2/
http://freshmeat.net/releases/70660/
http://www.academy.rpi.edu/~yua/open_source/piranha/

piranha-0.7.0-3.i386.rpm
scsi-reserve-0.7-6.i386.rpm
scsi-reserve-devel-0.7-6.i386.rpm
ipvsadm-1.20-7.i386.rpm (RH 7.2/7.3 with 2.4.9-[31|34] kernel)
ipvsadm-1.21-4.i386.rpm (RH 7.2/7.3 with 2.4.18-[18.7|18.8] kernel)
(Note: Kernel 2.4.18-18.8 is the Red Hat 8.0 update. 8.0 not
recommended for use with Piranha-GUI at this time.)
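
For example, once downloaded (pick the ipvsadm RPM that matches your
kernel, per the notes above):

rpm -ivh piranha-0.7.0-3.i386.rpm scsi-reserve-0.7-6.i386.rpm scsi-reserve-devel-0.7-6.i386.rpm
rpm -ivh ipvsadm-1.20-7.i386.rpm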

CONFIGURATION INSTRUCTIONS:

Fix Docs:

cd to /etc/sysconfig/ha/web, rename docs.html, and
create a symbolic link as follows:
ln -s /usr/share/doc/piranha-0.x.x/docs docs.html
(depending on the piranha version you installed)
so the Piranha GUI documentation link will work.
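
For example (a sketch assuming piranha-0.7.0-3; adjust the version
to match your install):

cd /etc/sysconfig/ha/web
mv docs.html docs.html.orig
ln -s /usr/share/doc/piranha-0.7.0-3/docs docs.html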

Fix Piranha GUI Output:

With Red Hat v7.3 the Piranha-GUI works nicely EXCEPT it fails
to display the "CURRENT LVS ROUTING TABLE". See Addendum 4 at
the end of this document for two reported fixes.

IP FORWARDING:

Make your /etc/sysctl.conf file look like this:
# Enables packet forwarding
net.ipv4.ip_forward = 1
# Enables source route verification
net.ipv4.conf.default.rp_filter = 1
# Disables the magic-sysrq key
kernel.sysrq = 0
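
To apply the settings immediately without a reboot, and verify:

sysctl -p /etc/sysctl.conf
cat /proc/sys/net/ipv4/ip_forward    # should print 1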


CONFIGURATION FILES AND STARTUP SCRIPTS:

Put this rc.lvs (which I created) in your /etc/rc.d/
directory and make the rc.local file run it.
NOTE: See Sebastien's comments in Addendum 2
(below) for an alternative initscripts method. I
found the rc.lvs method more instructive for
beginners, since it puts it all together in one
script.
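
For example, to hook it in (assuming you save the script below as
/etc/rc.d/rc.lvs):

chmod +x /etc/rc.d/rc.lvs
echo "/etc/rc.d/rc.lvs" >> /etc/rc.d/rc.local
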
~~~~~~rc.lvs~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#!/bin/sh

###################################################
## LVS (ip_vs / piranha) Setup Requires this.
###################################################

# Flush previous rules
iptables -t nat -F -v


# Turn on IP Forwarding or set in /etc/sysctl.conf
echo 1 >/proc/sys/net/ipv4/ip_forward


# EXAMPLES Kernel 2.4 should use this.
# modprobe iptable_nat
# iptables -t nat -A POSTROUTING -s n.n.n.n/24 -j MASQUERADE
modprobe iptable_nat
iptables -v -t nat -A POSTROUTING -s 184.226.10.0/24 -j MASQUERADE


# EXAMPLES Firewall Marks - Only if used in lvs.cf.
# iptables -F -t mangle -v
# iptables -t mangle -A PREROUTING -i eth0 -p tcp -s 0.0.0.0/0 -d 10.11.12/24 --dport 21 -j MARK --set-mark 2 -v
# iptables -t mangle -A PREROUTING -i eth0 -p tcp -s 0.0.0.0/0 -d 10.11.12/24 --dport ftp -j MARK --set-mark 2 -v
# iptables -t mangle -A PREROUTING -i eth0 -p tcp -s 0.0.0.0/0 -d 10.11.12/24 --dport 10000:20000 -j MARK --set-mark 2 -v
# iptables -L -t mangle -v


# EXAMPLES Another way to add a load-balanced service to several real hosts. wlc rr
# ipvsadm -A -t 207.175.44.110:80 -s rr
# ipvsadm -a -t 207.175.44.110:80 -r 192.168.10.1:80 -m
# ipvsadm -a -t 207.175.44.110:80 -r 192.168.10.2:80 -m
# ipvsadm -a -t 207.175.44.110:80 -r 192.168.10.3:80 -m
# ipvsadm -a -t 207.175.44.110:80 -r 192.168.10.4:80 -m
# ipvsadm -a -t 207.175.44.110:80 -r 192.168.10.5:80 -m
#
# Need this to make smtp port 25 route with masq. Run after loading lvs.
# smtp masqued routing does not work without this.
ipvsadm -A -t 184.226.13.26:25 -s wlc
ipvsadm -a -t 184.226.13.26:25 -r 184.226.10.37:25 -m
ipvsadm -a -t 184.226.13.26:25 -r 184.226.10.38:25 -m


# Run lvs manually or uncomment. This will also spawn nanny daemons.
# Or let pulse daemon start lvs and nanny and networking, and comment
# out the lvs statement.
# /usr/sbin/pulse
# /usr/sbin/pulse --forceactive
/usr/sbin/lvs --configfile=/etc/sysconfig/ha/lvs.cf


# Start the Piranha-GUI - Do this last.
# Access with http://piranhahostname:3636
/etc/rc.d/init.d/piranha-gui start

# Show the state of the routing table.
/sbin/ipvsadm -L

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
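
If everything came up, the final "ipvsadm -L" in the script should
report your virtual services, roughly like this (illustrative output
only; the version, names and counters will differ on your system):

IP Virtual Server version 0.9.7 (size=65536)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port          Forward Weight ActiveConn InActConn
TCP  184.226.13.26:smtp wlc
  -> 184.226.10.37:smtp          Masq    1      0          0
  -> 184.226.10.38:smtp          Masq    1      0          0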

You can edit the /etc/sysconfig/ha/lvs.cf file with
the Piranha GUI using the URL
http://yourPiranhaHostname:3636, but you need to first
run "piranha-passwd YourPassword". The ID is then
"piranha" with that password when you access the
Piranha GUI.

Or you can edit your /etc/sysconfig/ha/lvs.cf
manually, though that is not recommended unless you know
what you are doing.
But it's pretty straightforward. Here's a sample
which uses one LVS server to load balance to one real
server (and later two real servers). Note you
can only use NAT routing. You CANNOT use IP TUNNELING
nor DIRECT ROUTING, as the stock kernel has an ARP
problem. There is a patch, but I cannot help with
that.
My setup laid out herein uses NAT only.

~~~~~~~~~~~~~lvs.cf sample~~~~~~~~~~~~~~~~~~~~~
serial_no = 89
primary = 184.226.13.25
service = lvs
backup_active = 0 <== make 1 if a backup LVS server exists.
backup = 0.0.0.0 <== only put the true IP of the LVS backup here if one exists.
heartbeat = 1
heartbeat_port = 539
keepalive = 6
deadtime = 18
network = nat
nat_router = 184.226.10.22 eth1:1
nat_nmask = 255.255.255.0
reservation_conflict_action = preempt
debug_level = NONE
virtual piranha_http {
     active = 1
     address = 184.226.13.26 eth0:1
     vip_nmask = 255.255.255.0
     port = 80
     send = "GET / HTTP/1.0\r\n\r\n"
     expect = "HTTP"
     load_monitor = uptime
     scheduler = wlc
     protocol = tcp
     timeout = 6
     reentry = 15
     quiesce_server = 0
     server real1.foobar.com {
         address = 184.226.10.56
         active = 1
         weight = 1
     }
     server real2.foobar.com {
         address = 184.226.10.57
         active = 1
         weight = 1
     }
}
virtual piranha_smtp {
     active = 1
     address = 184.226.13.26 eth0:2 <== note you can run multiple failover IPs.
     vip_nmask = 255.255.255.0
     port = 25
     send = "/etc/sysconfig/ha/test_smtp.sh %h"
     expect = "OK"
     load_monitor = uptime
     scheduler = wlc
     protocol = tcp
     timeout = 6
     reentry = 15
     quiesce_server = 0
     server real1.foobar.com {
         address = 184.226.10.37
         active = 1
         weight = 1
     }
     server real2.foobar.com {
         address = 184.226.10.38
         active = 1
         weight = 1
     }
}
virtual piranha_https {
     active = 1
     address = 184.226.13.26 eth0:3
     vip_nmask = 255.255.255.0
     port = 443
     persistent = 900
     load_monitor = uptime
     scheduler = wlc
     protocol = tcp
     timeout = 6
     reentry = 15
     quiesce_server = 0
     server real1.foobar.com {
         address = 184.226.10.27
         active = 1
         weight = 1
     }
     server real2.foobar.com {
         address = 184.226.10.28
         active = 1
         weight = 1
     }
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The /etc/sysconfig/ha/test_smtp.sh script. Note you can
change port 25 to other ports as well, like LDAP port
389, and rename it to test_ldap.sh. I've also added
an ldapsearch test method for ldap ports.

#!/bin/sh

# Depending on which port you are testing, uncomment one of the following
# and edit it for your port. Or you can create your own kind of test.
# If you do not use a script, or a character string in lvs.cf to test
# ports, the nanny daemon still processes the ports and load balancing
# still occurs to active ports, and inactives are removed from the
# (NAT) routing table.

# Port 25 for smtp - using telnet test.
TEST=`printf "quit\n" | telnet $1 25 2>/dev/null | grep -i "connected" | wc -l` > /dev/null 2>&1

# Port 25 for smtp - using netcat (nc) test.
# TEST=`printf "helo foobar.com\nmail from: PortTest@foobar.com\nquit\n" | nc -v $1 25 2>/dev/null | grep -i "sender ok" | wc -l` > /dev/null 2>&1

# Port 389 for ldap
# TEST=`printf "\n" | ldapsearch -x -l 3 -h $1 -p 389 -LLL -s base -b "o=ROOT" dn 2>/dev/null | grep -i "ROOT" | wc -l` > /dev/null 2>&1

if [ $TEST -eq 1 ]; then
echo "OK"
else
echo "FAIL"
fi

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
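
You can sanity-check the script by hand before nanny uses it (nanny
substitutes each real server's address for %h; the host below is from
this howto's setup):

/etc/sysconfig/ha/test_smtp.sh 184.226.10.37
OK
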
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Load Monitor Note:
I used "uptime" as the load monitor in the
lvs.cf script, which required you load
snmp and rpc.rstatd on your Linux boxes
or other OS. Make sure you have the following
start scripts:

/etc/rc.d/init.d/rpc.statd
from rpm package rusers-server-0.17-12

/etc/rc.d/init.d/snmpd
from rpm package ucd-snmp-4.2.3-1.7.2.3

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In the lvs.cf above, take note that the Piranha/LVS
server's eth0 / eth0:1 (also eth0:2, eth0:3, etc.,
since Piranha can support High Availability failover
of multiple virtual interfaces) is the public side,
and eth1 / eth1:1 is the private network side. eth0 is
the host address; eth0:1 is the VIRTUAL address (which
you put in your DNS for this host); eth1 is a host
address and eth1:1 is the NAT ROUTER address for the
private network side. The gateway for each eth card is
the one your LAN/WAN Admin gives you.

On the private network are REAL SERVERS 1, 2, etc.
(only 1 is in the lvs.cf above). Each real server has
eth0 as host address, eth0:1, eth0:2 etc. for each
service, in my case for http 80, smtp 25 and https
443.
(USING MULTIPLE VIRTUAL ETHERNETS ON THE REAL SERVERS
IS NOT REQUIRED, I simply set it up that way, one IP
per service Port, but you do not need to do it that way.)

The GATEWAY for the REAL SERVERS must be the IP
ADDRESS OF THE Piranha/LVS SERVER'S NAT ROUTER
address.
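
For example, on real1 (a sketch using the addresses in this howto;
184.226.10.22 is the NAT router defined in lvs.cf):

# /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=real1.foobar.com
GATEWAY=184.226.10.22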

You can set up the ethernets and gateways on RH7.2 in
/etc/sysconfig/network (for the default gateway) and in the
/etc/sysconfig/networking/ and
/etc/sysconfig/network-scripts/ dirs, where you will
see scripts like ifcfg-eth0, ifcfg-eth0:1 and so on.
/etc/rc.d/init.d/network restart/stop/start will reset
your network, as will ifdown eth0:x and ifup
eth0:x.
Check your routing with route -e.
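
A sketch of the virtual-interface file for the public VIP (values
from the lvs.cf above; ONBOOT=no because pulse brings the VIP up
itself - see the special tip in Addendum 3):

# /etc/sysconfig/network-scripts/ifcfg-eth0:1
DEVICE=eth0:1
IPADDR=184.226.13.26
NETMASK=255.255.255.0
ONBOOT=no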

The IP ADDRESSES for the ethernet cards for the
Piranha/LVS server on the PRIVATE NETWORK **MUST BE**
on the same network as the ethernet cards in the REAL
SERVERS which are on the private network. The IP
ADDRESSES for the eth cards on the Piranha/LVS PUBLIC
SIDE NETWORK should be a different network.

Graphically speaking:

PUBLIC NETWORK (ie. 13.0)
|
eth0 host addr public net side
eth0:1 virtual addr public net
[LVS SERVER]
eth1 host addr private side
eth1:1 NAT Router addr private side
|
PRIVATE NETWORK (ie. 10.0)
|
eth0 http 80 service on private net
eth0:1 smtp 25 service "
eth0:2 https 443 service "
[REAL SERVER]

The above setup is reflected in the lvs.cf file.
(But again, I am not sure if each service needs
to have its own IP address like I am showing, I just
got it to work this way.)

You can access the LVS Howtos in the Piranha GUI for
more on iptables and ipvsadm commands and firewall
marks, but I include helpful examples in the rc.lvs
above which are both from those howtos and from the
piranha-list Red Hat help.



THE APACHE HTTPD.CONF FILE FOR WEBMAIL:
This only applies to a WEBMAIL service like Silkymail from
Cyrusoft.com or other similar web browser-based mail.
The httpd.conf file should contain the following entries
so that Piranha can successfully route port 80 over to 443:

~~~~~~~~~~~~~~~~~~~~~~~~~~~
## Note the real2 entries here would be "real1" on real1,
## and the 10.x IP addresses referring to real2 here would
## also be replaced with real1's on real1.
## All else is kept the same, like the 13.x address and other hostnames.
## Note webmail is a cname (alias) for real1 or real2.foobar.com,
## as would be smtp.foobar.com.

NameVirtualHost 184.226.10.27
NameVirtualHost 184.226.13.26

<VirtualHost 184.226.10.27:443>
ServerName webmail.foobar.com
SSLFlag on
SSLCertificateKeyFile
/usr/local/cyrusoft/silkymail/openssl/certs/webmail.foobar.com.key
# Signed certificate for the server
SSLCertificateFile
/usr/local/cyrusoft/silkymail/openssl/certs/webmail.foobar.com.crt
</VirtualHost>

<VirtualHost 184.226.10.27 184.226.10.27:80 webmail.foobar.com foobar.com
184.226.13.26 184.226.13.26:80 >
ServerName webmail.foobar.com
Redirect / https://webmail.foobar.com:443/
</VirtualHost>
~~~~~~~~~~~~~~~~~~~~~~~~~~~~



SILKYMAIL defaults.php3:
The
/usr/local/cyrusoft/silkymail/www/htdocs/silkymail/imp/config/defaults.php3

file should contain this entry on both real1 and real2:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/* Server Specific Configuration */
$default->localhost = "webmail.foobar.com";
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~



FINAL NOTES:
The above setup got my http, smtp and https requests
NAT-routing with masquerade through my Piranha/LVS
server to my one backend Real Server (and later my
two backend real servers) running http and
smtp. You should of course add a 2nd real server to
make this worthwhile, and think about a 2nd
Piranha/LVS server with heartbeat failover.

Killing the lvs daemon can be done by first doing a
"ps afx | grep lvs" then "kill -15 <parent pid for lvs>", and
that will also kill lvs and the nanny daemons. (Sounds
like a singing group.) As far as I can
tell you need to do this anytime you change the
lvs.cf, and there's no way to make it reload with -SIGHUP,
as that leaves the nannies out to lunch.
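
As a one-liner (a sketch; the grep pattern is illustrative, so
eyeball the PID first on a production box):

kill -15 `ps afx | grep '[l]vs --configfile' | awk '{print $1}'`
/etc/rc.d/rc.lvs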





====================================================================================
== ADDENDUM 1:
== My final working system running on Piranha, with real servers Real1, Real2
== over-all setup and methods of management.
====================================================================================

SYSTEM    IP-ADDR  PORT  SERVICE                                                        ETHERNET-DEVICE

piranha   13.25    n/a   public host address                                            eth0
piranha   13.26    n/a   Virtual IP Address (this is what DNS uses for smtp & webmail)  eth0:1
piranha   10.21    n/a   private host address                                           eth1
piranha   10.22    n/a   NAT Router Address                                             eth1:1

real1     10.57    80    http (webmail)                                                 eth0
real1     10.28    443   https (webmail secure)                                         eth0:1
real1     10.38    25    smtp (sendmail)                                                eth0:2

real2     10.56    80    http (webmail)                                                 eth0
real2     10.27    443   https (webmail secure)                                         eth0:1
real2     10.37    25    smtp (sendmail)                                                eth0:2

HOW IT ALL WORKS: The current system piranha.foobar.com accepts requests
for smtp/http/https services on IP 13.26
and routes them via the Weighted Least Connections load-balancing scheme
(though that can be adjusted to other
schemes if ever needed), via the NAT Router IP 10.22, to the
appropriate service on the "Real Servers" Real1 and
Real2, based on the PORT. (Down the road, we will eventually have two
Piranha boxes (Piranha and Piranha2)
which will have a heartbeat between them, so if one fails, the other
will pick up the Virtual IP Address and the
NAT Router Address to continue services... and manual failback will be
required. Both Piranha boxes would be
configured with identical lvs.cf configurations and network setups,
except the Virtual IP and NAT Router will
only be on one box at a time.)


=================================================================================

=== MANAGING PIRANHA SERVICES
=================================================================================

==> Access Piranha.foobar.com via ssh with the same pw as Real1/Real2.
To access Real1/Real2 you need to first ssh to piranha, then ssh to
real1/real2.


==> You can check the ports that are "up" as follows:
(1) ssh to piranha.foobar.com and type "ipps localhost" to see which
ports Piranha itself is running; it should be the following (note:
80, 443 and 25 are not there, which is correct!):
22 ssh
111 sunrpc
199 smux
(2) Also from piranha.foobar.com type "ipps real2" and "ipps real1";
this time you should see ports 80, 443 and 25 up.
22 ssh
25 smtp <===
80 http <===
111 sunrpc
199 smux
443 https <===
587 submission


By The Way Credit for IPPS:
-------------------------------------------
IPPS 1.0 IP Port Scanner
(C) 1999 Victor STANESCU <bruno@lmn.pub.ro>
-------------------------------------------


==> While ssh'd to piranha, run a "ps afx" from a shell terminal;
you should see the following LVS, NANNY and PIRANHA_GUI daemons
running (it will look a lot more orderly on your shell screen):

#piranha: ps afx

26166 ?  S  0:00 /usr/sbin/lvs --configfile=/etc/sysconfig/ha/lvs.cf
26169 ?  S  0:00  _ /usr/sbin/nanny -c -h 184.226.10.56 -p 80 -s GET / HTTP/1.0\r\n\r\n -x HTTP -a 15
26170 ?  S  0:00  _ /usr/sbin/nanny -c -h 184.226.10.57 -p 80 -s GET / HTTP/1.0\r\n\r\n -x HTTP -a 15
26174 ?  S  0:00  _ /usr/sbin/nanny -c -h 184.226.10.37 -p 25 -e /usr/sbin/test_smtp.sh %h -x OK -a 2
26176 ?  S  0:00  _ /usr/sbin/nanny -c -h 184.226.10.38 -p 25 -e /usr/sbin/test_smtp.sh %h -x OK -a 2
26188 ?  S  0:00  _ /usr/sbin/nanny -c -h 184.226.10.27 -p 443 -a 15 -I /sbin/ipvsadm -t 6 -w 1 -V 13
26205 ?  S  0:00  _ /usr/sbin/nanny -c -h 184.226.10.28 -p 443 -a 15 -I /sbin/ipvsadm -t 6 -w 1 -V 13
26214 ?  S  0:00 /usr/sbin/piranha_gui -D HAVE_PHP4 -f /etc/sysconfig/ha/conf/httpd.conf
26217 ?  S  0:00  _ /usr/sbin/piranha_gui -D HAVE_PHP4 -f /etc/sysconfig/ha/conf/httpd.conf
14382 ?  S  0:00  _ /usr/sbin/piranha_gui -D HAVE_PHP4 -f /etc/sysconfig/ha/conf/httpd.conf



====> STOPPING AND RESTARTING PIRANHA SERVICES

Killing the daemons:

kill -15 26217 26166
(note that these two PIDs are the parents for Piranha_GUI and for LVS)

Restarting the Daemons:
/etc/rc.d/rc.lvs



=============================================================================
== ADDENDUM 2: Answers to some user questions
=============================================================================

Here's Sebastien's fine comments on somewhat different ways to setup
your Piranha server boot scripts and so on:

> You would add to your lvs.cf config file and rc.lvs
> script. That is, all protocols (ports) need to be
> added to those files. See my example lvs.cf and rc.lvs
> on my website.

I wouldn't suggest using your rc.lvs because of several points:
* it uses ipvsadm whereas it shouldn't (that is piranha's role)
* it mixes virtual server commands, filtering commands, routing
commands
* it starts daemons which do have initscripts

From my point of view:
* everything related to iptables should be moved to
/etc/sysconfig/iptables and the iptables service should be activated
* sysctl variables should be set in /etc/sysctl.conf only
* no ipvsadm stuff, piranha will manage it by itself, just activate the
pulse service
* piranha-gui is also a service, activate it
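
(EDITOR'S NOTE: A sketch of the initscripts approach Sebastien
describes, using the standard Red Hat service tools:

chkconfig iptables on
chkconfig pulse on
chkconfig piranha-gui on
service iptables start
service pulse start
service piranha-gui start

The iptables rules themselves go in /etc/sysconfig/iptables, which
the iptables service loads at start.)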

> Also, if your users are using http to get to a LOGIN webpage, you
> also have to consider adding persistent = 900 to your lvs.cf for
> the http service, but that is another story.

This is based on the assumption that session tracking is real server
bound, which is obviously and fortunately not always true. Just think
DB
or file sharing.

> Since you are starting "lvs" daemon by executing the "pulse" daemon
> (which is also true in my rc.lvs script)

It's not ! The uncommented line is :
/usr/sbin/lvs --configfile=/etc/sysconfig/ha/lvs.cf

(EDITOR'S NOTE: Actually the example rc.lvs script states that you can
comment out either line you like.)

--
Sébastien Bonnet
Centre de contacts - Experian France



Here are some answers to Scott's questions from 3-9-2002:

> scsi-reserve-0.7-6.i386.rpm
> scsi-reserve-devel-0.7-6.i386.rpm

What are these used for? (Out of curiosity)

ANSWER: Piranha RPM's require them for install. Please check with Redhat for anything additional.

> Make your /etc/sysctl.conf file look like this:
> # Disables packet forwarding
> net.ipv4.ip_forward = 1

"Disables" should be "Enables" for consistency.

ANSWER: "1" is the meaningful part, and it ENABLES packet forwarding.

> # Enables source route verification
> net.ipv4.conf.default.rp_filter = 1
> # Disables the magic-sysrq key
> kernel.sysrq = 0

I would - as a general rule - suggest enabling this; it makes debugging
so much easier.

> virtual piranha_smtp {
> active = 1
> address = 184.226.13.26 eth0:1
> vip_nmask = 255.255.255.0
> port = 25
> send = "GET / HTTP/1.0rnrn"
> expect = "HTTP"

This doesn't make any sense. You are talking HTTP to an SMTP port ;-)

ANSWER: Please see my latest lvs.cf file in this how-to which shows how
to make a shell script to test a port. Your question was asked when I
had an older lvs.cf in place.


=============================================================================
== ADDENDUM 3: Piranha/LVS Heartbeat tips
=============================================================================

I got some input from Mike McLean and Brian Gray at Red Hat. I felt
it should be shared with you, as it definitely reduced the confusion
factor for me. A thank you to the piranha-list@redhat.com.

The Piranha_GUI lets you edit the /etc/sysconfig/ha/lvs.cf or you can do it
manually to add these heartbeat requirements. You edit this file, then
copy it exactly as-is to the backup LVS as well.
The "pulse" daemon is what needs to be run to make LVS failover work.
You will need to start this daemon on both Piranha/LVS routers
with "/etc/rc.d/init.d/pulse start". Based on my over-all Piranha/LVS
solution discussed in this document, you can add that pulse start
statment to your /etc/rc.d/rc.lvs script which is started by
/etc/rc.d/rc.local.
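
For example, in /etc/rc.d/rc.lvs:

# /usr/sbin/lvs --configfile=/etc/sysconfig/ha/lvs.cf   <== comment out; pulse starts lvs
/etc/rc.d/init.d/pulse start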

Here's the settings in lvs.cf related to heartbeat:

primary = IP Address of host's public side interface - usually on eth0
backup = IP Address of public side host interface - also usually on eth0

Example:

primary = 184.226.13.25
backup = 184.226.13.180

Special tips: the above IP's are NOT the public IP of the Piranha/LVS service
but of the hosts addresses on the public side only, usually on eth0 interface.
You usually put the Piranha/LVS service IP address on virtual interface eth0:1.
The primary and backup Piranha/LVS hosts determine who they are by comparing
these addresses. In other words, the lvs.cf should be identical on both systems!

Set this as follows to switch on heartbeat - of course it only applies
if the "pulse" daemon is running - and you select a port; the default is 539:

heartbeat = 1
heartbeat_port = 539

The following additional settings are in seconds, keepalive is how long
between heartbeats, and deadtime is how long the LVS waits until it
decides to take over as primary:

keepalive = 6
deadtime = 18

This was another stumper for me - however it turns out the nat_router
IP address ought to be THE SAME on both primary and backup Piranha/LVS
systems:

nat_router = 184.226.10.22 eth1:1

Option if you want LVS to keep the lvs.cf in sync on both LVS systems -
I'm afraid I do not know anything else about this setting yet:

rsh_command = rsh -or- ssh


Special tip: Piranha (assumedly the pulse daemon) will handle starting
and stopping alias network interface devices (meaning devices eth0:1 the Piranha/LVS
public service IP and eth1:1 the private side NAT Router IP). DO NOT
set up any external scripts to configure these devices - Piranha does it
based on lvs.cf.

NEW TIP: Piranha/LVS can failover MULTIPLE IP ADDRESSES. That is, if you
need one load balanced service on eth0:1 and another on eth0:2
you can set those virtual devices in the lvs.cf file for each
virtual server, and Piranha will enable those devices either from
PULSE daemon or running LVS daemon directly.

* * *

The preceding was based on Piranha/LVS's own support for heartbeat (ie. pulse)
and failover methodology. Because I was confused by certain specifics
which I've now clarified above, I previously wrote my own Perl script
which you can download from
<http://peterbaitz.com/rc.d/rc.PiranhaLVS.PBnutshellscripts-070802.tgz>
and unzip/untar it as follows:
gunzip -c rc.PiranhaLVS.PBnutshellscripts-070802.tgz | tar xvf -
and you will find a README in it. To the point, the rc.lvs.failover.pl
is my script. Just edit it, change the variables to match your system,
and run it on both primary and backup Piranha/LVS systems AS AN ALTERNATIVE
to the Piranha/LVS settings/setup discussed beforehand.
With my Perl script solution, you do need to set up the virtual
interfaces to be in place, but not to activate on bootup.
My rc.lvs and rc.lvsdown scripts (which start and stop the
Piranha/LVS services) use the ifup and ifdown commands to
bring up or shut down those virtual interfaces eth0:1 and eth1:1.
If you think my ping solution is not elegant, make no mistake, the
Piranha/LVS pulse solution is exactly that - a ping solution too.
Theirs may be a little faster at failover, however; mine would take
a minute to check, and two minutes between checks - unless you
change the defaults in my script.

=================================================================
== Addendum 4 - Fix for Piranha-GUI not displaying routing table
=================================================================

With Red Hat v7.3 the Piranha-GUI works nicely EXCEPT it fails
to display the "CURRENT LVS ROUTING TABLE". Just comes up blank.
Even though it does properly display "CURRENT LVS PROCESSERS".
The simple fix is this:

cd /bin
ln -s /sbin/ipvsadm ipvsadm
chmod +s /sbin/ipvsadm

This adds a symbolic link /bin/ipvsadm to the file
system, and sets the setuid/setgid (+s) bits on
/sbin/ipvsadm. It worked for me!!!

Another reported way to fix this is as follows:

perl -pi -e "s,/sbin/ipvsadm,/bin/ipvsadm,g" /etc/sysconfig/ha/web/secure/control.php3


=================================================================
== Addendum 5 - Tips for compiling ip_vs into Red Hat 9.0
=================================================================
Red Hat Linux 9.0 kernel-2.4.20-9 with ip_vs 1.0.8 recompiled
back in is downloadable from:

http://peter.baitz.com/kernel9/
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
No responsibility or liability is assumed for any damages from use
of this kernel. It is assumed you know how to test it first.
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

If you want to recompile ip_vs into the kernel yourself you can
refer to these websites:

http://sources.redhat.com/piranha/
http://www.redhat.com/docs/manuals/haserver/RHHAS-1.0-Manual/ch-lvs.html
http://www.linuxvirtualserver.org


THE FOLLOWING IS FROM EMAIL ON THE lvs-users@LinuxVirtualServer.org LIST:

Well, i have updated my patches at

http://mail.incredimail.com/howto/lvs/install/

and split them to

http://mail.incredimail.com/howto/lvs/install/RedHat7.3/
(now including latest 1.0.8), and
http://mail.incredimail.com/howto/lvs/install/RedHat9.0/

The latter includes rebuilt kernels with ipvs 1.0.8, and the ipvsadm
rpm.

The INSTALL document describes the process of taking the original
kernel.src.rpm and producing the desired, ipvs-patched kernel. Worth
reading for RH users.

Another option for RH 9 users is to use an RH 8 kernel - this is also
in the document...

On another note, while building the ipvsadm package, i ran into 3
problems:

1. "make rpm" in the ipvsadm directory runs "rpm -ba" to build the rpms.
This will not work on RH 8 and up; the command should be "rpmbuild -ba".
This is supported in the RH 7 distros - it is backwards compatible, and
should be a safe change.

2. Since redhat's kernel-source package doesn't install its source files
to /usr/src/linux, but to /usr/src/linux-2.4, i use
"make KERNELSOURCE=/usr/src/linux-2.4" to build the module on 7.3, but i
cannot pass this on while building the ipvsadm rpm. There should be some
mechanism in place to deal with that.

3. Documentation:

a. the mini-howto states that redhat's kernel includes ipvs - this is no
longer true ;(
b. the howto includes my note about redhat's kernel and heartbeat here

http://www.linux-vs.org/Joseph.Mack/HOWTO/LVS-HOWTO.unsupported.html#RH_prepatched_LVS_from_the_mailing_list

this is no longer true, since kernel 2.4.18-27

Comments and suggestions are welcome,

Alex.
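
(EDITOR'S NOTE: A rough sketch of the kernel.src.rpm rebuild process
Alex describes; the spec file name and paths follow Red Hat
conventions and may vary by release:

rpm -ivh kernel-2.4.20-9.src.rpm        # unpacks under /usr/src/redhat
cd /usr/src/redhat/SPECS
vi kernel-2.4.spec                      # add the ipvs patch; remove the O(1)/nptl patches
rpmbuild -ba --target i686 kernel-2.4.spec
)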

----- Original Message -----
From: "Alex Kramarov"
To: "LinuxVirtualServer.org users mailing list."
<lvs-users@LinuxVirtualServer.org>
Sent: Friday, April 11, 2003 7:05 PM
Subject: Re: [ANNOUNCE] ipvs 1.0.8


> I have already done extensive research about redhat 9 kernel and lvs
>
> 1. although pre-RH 9 redhat kernels did include ipvs, rh 9 doesn't. rh 9
> is desktop oriented; they want us to purchase RHAS to have ipvs
>
> 2. rh 9 is heavily patched; the most intrusive patches are the o1
> scheduler and the nptl patches. these interfere with a lot of kernel
> work; ipvs and uml are the ones i met problems with in the last week.
>
> 3. you should not use the kernel-source rpm to build your own kernel;
> you should get the kernel.srpm, install it into /usr/src/redhat, and
> modify the spec file to include the ipvs patch. i was able to do that by
> adding the ipvs patch, and removing O1 (which includes some preempt
> pieces) and nptl related patches from the spec file (i also removed the
> lowlat patches, although they don't collide with ipvs). these redhat
> optimisations are good for desktops, but they are affecting lvs director
> performance by as much as 25 percent - see the list archives.
>
> 4. i will post my spec file, and the prebuilt RH 9 kernel rpms and
> srpms for the i686 platform later, in the standard place.
>
> http://mail.incredimail.com/howto/lvs/install
>
> i will also see if the modules compile standalone with the regular
> redhat 9 kernel - they did compile with the 7.3 kernel.
>
> Note. although i don't see any impact of removing the nptl patches from
> the kernel on any functionality or stability of the machines i use, if
> anyone has some input on this, i would be happy to hear it, since glibc
> is compiled with nptl support on rh 9, and i am not going to recompile
> that !
>
> thank you.
>
> Alex.
>
> P.S. RH 7.3 is still my favourite distro for servers, too bad it has
> only 8 more months of support ...


----------------------------------------------------------------------------

There are several patches you could apply to the client kernel, though i
believe you are referring to the hidden patch, which is most common. I
didn't want to include the hidden patch in these rpms, since the clients
will probably need to run the kernel redhat expects, with nptl, since
you will probably run userspace daemons; so you're better off taking the
original srpm and only adding to it the hidden patch, or any other
client-side patch you want. I don't use DR, so i don't have the need for
such a kernel, and i don't compile it, but i provide some instructions
here:

http://mail.incredimail.com/howto/lvs/install/RedHat9.0/kernel-2.4.20-9hidden/INSTALL

Alex.


=================================================================
== Addendum 6 - How to start or stop a program from lvs.cf
=================================================================
Example:

lvs.cf

serial_no = 22
primary = 10.0.13.5
service = fos
backup_active = 1
backup = 10.0.13.6
heartbeat = 1
heartbeat_port = 539
keepalive = 6
deadtime = 18
network = direct
reservation_conflict_action = preempt
debug_level = 2
failover spdb {
     address = 10.0.13.30 aft0:1
     vip_nmask = 255.255.255.0
     active = 1
     timeout = 6
     send_program = "/scripts/check_pgsql_alive"
     expect = "OK"
     start_cmd = "/scripts/startdb"
     stop_cmd = "/scripts/stopdb"
}


#!/bin/sh
# startdb
cd /var/lib/pgsql/data
su - postgres -c "/var/lib/pgsql/data/start_db"
echo "started db"

#!/bin/sh
# stopdb
cd /var/lib/pgsql/data
su - postgres -c "/var/lib/pgsql/data/stop_db"
echo "stopped db"

This was from: "Edward Croft" <ecroft@openratings.com>


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
T H E E N D
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Peter Baitz (PB)
computer systems engineer
& systems administrator
peterbaitz@yahoo.com
http://peterbaitz.com/lvs.html
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~