OCFS2
http://oss.oracle.com/projects/ocfs2/
OCFS2 is a cluster file system that allows simultaneous access from many nodes. We will create it on our drbd device so that it can be accessed from both nodes at the same time. While configuring OCFS2 we provide information about the nodes that will later access the file system. Every node that has an OCFS2 file system mounted must regularly write into the file system's meta-data, letting the other nodes know that it is still alive.
Installation
sudo apt-get install ocfs2-tools ocfs2console
Configuration
Edit /etc/ocfs2/cluster.conf as follows
sudo vi /etc/ocfs2/cluster.conf
#/etc/ocfs2/cluster.conf
node:
ip_port = 7777
ip_address = 192.168.0.128
number = 0
name = node1
cluster = ocfs2
node:
ip_port = 7777
ip_address = 192.168.0.129
number = 1
name = node2
cluster = ocfs2
cluster:
node_count = 2
name = ocfs2
Reconfigure ocfs2-tools with the following command, accepting the default values:
sudo dpkg-reconfigure ocfs2-tools
sudo /etc/init.d/o2cb restart
sudo /etc/init.d/ocfs2 restart
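dpkg-reconfigure stores its answers in /etc/default/o2cb; a rough sketch of the relevant settings (the variable names come from ocfs2-tools, the threshold value may differ on your release):
#/etc/default/o2cb
O2CB_ENABLED=true
O2CB_BOOTCLUSTER=ocfs2
O2CB_HEARTBEAT_THRESHOLD=31
You can check that the cluster came online with
sudo /etc/init.d/o2cb status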
drbd8
Installation
http://en.wikipedia.org/wiki/Drbd
The advantage of drbd8 over drbd7 is that it allows a drbd resource to be “primary” on both nodes, so it can be mounted read-write on both. We will build the drbd8 module and load it into the kernel. For that we need the packages “build-essential” and “linux-headers-xen”.
sudo apt-get install drbd8-utils build-essential linux-headers-xen
sudo update-modules
sudo modprobe drbd
This builds the drbd kernel module kernel/drivers/block/drbd.ko against the currently running kernel. A default configuration file is installed as /etc/drbd.conf.
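To confirm that the module is actually loaded, you can run (standard commands, shown here only as a sanity check):
lsmod | grep drbd
cat /proc/drbd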
Configuration
Edit the /etc/drbd.conf
sudo vi /etc/drbd.conf
#/etc/drbd.conf
global {
usage-count yes;
}
common {
syncer { rate 10M; }
}
resource r0 {
protocol C;
handlers {
pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt -f";
pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt -f";
local-io-error "echo o > /proc/sysrq-trigger ; halt -f";
outdate-peer "/usr/sbin/drbd-peer-outdater";
}
startup {
}
disk {
on-io-error detach;
}
net {
allow-two-primaries;
after-sb-0pri disconnect;
after-sb-1pri disconnect;
after-sb-2pri disconnect;
rr-conflict disconnect;
}
syncer {
rate 10M;
al-extents 257;
}
on node1 {
device /dev/drbd0;
disk /dev/sda3;
address 192.168.0.128:7788;
meta-disk internal;
}
on node2 {
device /dev/drbd0;
disk /dev/sda3;
address 192.168.0.129:7788;
meta-disk internal;
}
}
The “allow-two-primaries” option in the net section of drbd.conf allows the resource to be mounted as “primary” on both nodes. Copy /etc/drbd.conf to node2 and restart drbd on both nodes with the following commands.
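One way to copy the file, assuming you can ssh to node2 as root (adjust user and hostname to your setup):
scp /etc/drbd.conf root@node2:/etc/drbd.conf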
Note: heartbeat must already be installed at this point, because drbd's permission setup needs the heartbeat user. Install these packages before starting drbd:
sudo apt-get install heartbeat-2 heartbeat-2-gui
sudo /etc/init.d/drbd restart
!!!!!!
If drbd cannot create its meta-data on the partition because it already detects some existing file system on it, it suggests overwriting the beginning of the partition with zeros (adjust the device names to your own partitions):
dd if=/dev/zero bs=1M count=1 of=/dev/sdb2; sync
dd if=/dev/zero bs=1M count=1 of=/dev/sda2; sync
drbdadm create-md r0
drbdadm -- -o primary r0
mkfs /dev/drbd0
!!!!!!
If you check the status, it looks like this:
sudo /etc/init.d/drbd status
drbd driver loaded OK; device status:
version: 8.0.3 (api:86/proto:86)
SVN Revision: 2881 build by root@node1, 2008-01-20 12:48:36
0: cs:Connected st:Secondary/Secondary ds:UpToDate/UpToDate C r---
ns:143004 nr:0 dw:0 dr:143004 al:0 bm:43 lo:0 pe:0 ua:0 ap:0
resync: used:0/31 hits:8916 misses:22 starving:0 dirty:0 changed:22
act_log: used:0/257 hits:0 misses:0 starving:0 dirty:0 changed:0
Change the resource to “primary” with the following command on both nodes:
sudo drbdadm primary r0
and check the status again
sudo /etc/init.d/drbd status
drbd driver loaded OK; device status:
version: 8.0.3 (api:86/proto:86)
SVN Revision: 2881 build by root@node1, 2008-01-20 12:48:36
0: cs:Connected st:Primary/Primary ds:UpToDate/UpToDate C r---
ns:143004 nr:0 dw:0 dr:143004 al:0 bm:43 lo:0 pe:0 ua:0 ap:0
resync: used:0/31 hits:8916 misses:22 starving:0 dirty:0 changed:22
act_log: used:0/257 hits:0 misses:0 starving:0 dirty:0 changed:0
As you can see, the resource is “primary” on both nodes. The drbd device is now accessible under /dev/drbd0.
---
A word of warning: my first drbd setup fell apart badly and at first I could not recover it. syslog only reported a drbdadm split-brain, which in itself did not help; the state became StandAlone on both machines. The following sequence recovered it:
drbdadm disconnect all # on both machines
drbdadm pri-lost r0 # on both machines
drbdadm connect r0
---
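For reference, the DRBD 8 documentation describes a manual split-brain recovery in which you pick one node as the victim whose local changes are discarded (a sketch; r0 is our resource from drbd.conf):
drbdadm secondary r0 # on the victim node
drbdadm -- --discard-my-data connect r0 # on the victim node
drbdadm connect r0 # on the surviving node, if it is StandAlone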
File system
We can now create a file system on /dev/drbd0 with the following command:
sudo mkfs.ocfs2 /dev/drbd0
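mkfs.ocfs2 also takes options; for example -N sets the number of node slots and -L a volume label (both are standard mkfs.ocfs2 options, the label name is just an example):
sudo mkfs.ocfs2 -N 2 -L drbd0 /dev/drbd0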
This can be mounted on both nodes simultaneously with
sudo mkdir /drbd0
sudo mount.ocfs2 /dev/drbd0 /drbd0
Now we have a common storage area that drbd keeps synchronized between both nodes.
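A quick way to verify the synchronization (the file name is arbitrary):
sudo touch /drbd0/testfile # on node1
ls /drbd0 # on node2: testfile should appear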
Init script
We have to make sure that after a reboot the system sets the drbd resource to “primary” again and mounts it on /drbd0 before starting Heartbeat and the Xen machines.
Edit /etc/init.d/mountdrbd.sh
sudo vi /etc/init.d/mountdrbd.sh
#!/bin/sh
# /etc/init.d/mountdrbd.sh
drbdadm primary r0
mount.ocfs2 /dev/drbd0 /drbd0
Make it executable and add a symbolic link to it under /etc/rc3.d/S99mountdrbd.sh
sudo chmod +x /etc/init.d/mountdrbd.sh
sudo ln -s /etc/init.d/mountdrbd.sh /etc/rc3.d/S99mountdrbd.sh
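Alternatively, update-rc.d can create the runlevel links for you (it registers the script for the default runlevels with sequence number 99):
sudo update-rc.d mountdrbd.sh defaults 99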
Actually this step could also be integrated into Heartbeat by adding the appropriate resources to its configuration, but for the time being we do it with a script.
Heartbeat2
http://www.linux-ha.org/Heartbeat
Installation
Now we can install and set up Heartbeat 2. We already installed these packages before starting drbd:
sudo apt-get install heartbeat-2 heartbeat-2-gui
Edit /etc/ha.d/ha.cf
sudo vi /etc/ha.d/ha.cf
#/etc/ha.d/ha.cf
crm on
bcast eth0
node node1 node2
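Heartbeat also requires an authentication file, /etc/ha.d/authkeys, readable only by root; a minimal example (the passphrase is a placeholder, pick your own):
sudo vi /etc/ha.d/authkeys
#/etc/ha.d/authkeys
auth 1
1 sha1 SomeSecretPassphrase
sudo chmod 600 /etc/ha.d/authkeys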
Then restart heartbeat2 with
sudo /etc/init.d/heartbeat restart
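Since we run with “crm on”, you can check that both nodes joined the cluster with crm_mon from the heartbeat-2 package (-1 prints the status once and exits):
sudo crm_mon -1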
From this point on I chose a completely different path, because stonith caused continuous reboots on my setup and I could not break out of the loop. So I replaced the CRM configuration with a plain haresources one. That, however, does not manage OCFS2, so it is not the best solution: after a reboot the OCFS2 setup has to be redone. In the end I followed this guide:
http://www.howtoforge.org/set-up-a-loadbalanced-ha-apache-cluster-ubunt…
Comments
I built a cluster like this recently, except for the OCFS part. The drbd developers do not currently recommend primary-primary with a shared-disk file system for production use:
A consequence of mirroring data on the block device level is that you can only access your data (using a file system) on the active node. This is not a shortcoming of DRBD; it is caused by the nature of most file systems (ext3, XFS, JFS, ext4, ...). These file systems are designed for one computer accessing one disk, so they cannot cope with two computers accessing one (virtually) shared disk.
In spite of this limitation, there are still a few ways to access the data on the second node:
- Use DRBD on logical volumes and use LVM's capabilities to take snapshots on the standby node, and access the data via the snapshot.
- DRBD's primary-primary mode with a shared disk file system (GFS, OCFS2). These systems are very sensitive to failures of the replication network. Currently we cannot generally recommend this for production use.
-- trey @ gépház
Besides, GFS2 is more reliable than OCFS, at least one thorough test on the net suggests so.
Are your drbd kernel module and user-space tools the same version? Sight unseen, I would say they are not. Check it, and if necessary download the newer version and rebuild.
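A quick way to compare the two versions (both commands exist on a stock Ubuntu install; /proc/drbd shows the module version, dpkg the userland package):
cat /proc/drbd
dpkg -l drbd8-utils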
On the other hand, OCFS2 was considerably easier to put into service.
+1 for OCFS2.