All about DRBD

OK, I've got it partially sorted out.

DRBD starts. The filesystem works too, i.e. /dev/drbd0 gets mounted at /storage.
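(A quick way to double-check that state; a minimal sketch, assuming the DRBD resource is named drbd1 as in the config below:)

[code]# DRBD connection state and roles (DRBD 8.x)
cat /proc/drbd
drbdadm state drbd1   # 'drbdadm role' on newer versions; expect Primary/Secondary

# confirm the filesystem is mounted
mount | grep /storage[/code]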

The DRBD resource is defined outside the group:

[code]<master_slave id="master-slave-drbd0">
  <meta_attributes id="ma-master-slave-drbd0">
    <attributes>
      <nvpair id="ma-master-slave-drbd0-1" name="clone_max" value="2"/>
      <nvpair id="ma-master-slave-drbd0-2" name="clone_node_max" value="1"/>
      <nvpair id="ma-master-slave-drbd0-3" name="master_max" value="1"/>
      <nvpair id="ma-master-slave-drbd0-4" name="master_node_max" value="1"/>
      <nvpair id="ma-master-slave-drbd0-5" name="notify" value="yes"/>
      <nvpair id="ma-master-slave-drbd0-6" name="globally_unique" value="false"/>
      <nvpair name="target_role" id="ma-master-slave-drbd0-7" value="#default"/>
    </attributes>
  </meta_attributes>
  <primitive id="drbd0" class="ocf" provider="heartbeat" type="drbd">
    <instance_attributes id="instance-attr-drbd0">
      <attributes>
        <nvpair id="instance-attr-drbd0-1" name="drbd_resource" value="drbd1"/>
      </attributes>
    </instance_attributes>
    <operations>
      <op id="ms-drbd0_monitor" name="monitor" interval="10" timeout="20" start_delay="1m" role="Started" disabled="false" on_fail="restart"/>
    </operations>
  </primitive>
  <instance_attributes id="master-slave-drbd0">
    <attributes>
      <nvpair id="master-slave-drbd0-target_role" name="target_role" value="started"/>
    </attributes>
  </instance_attributes>
</master_slave>[/code]
while the filesystem resource is defined inside the group cluster_1:

[code]<group id="cluster_1">
  <primitive class="ocf" provider="heartbeat" type="Filesystem" id="fs0">
    <meta_attributes id="ma-fs0">
      <attributes>
        <nvpair name="target_role" id="ma-fs0-1" value="#default"/>
      </attributes>
    </meta_attributes>
    <operations>
      <op id="fs0_1" name="monitor" interval="30s" timeout="10s"/>
    </operations>
    <instance_attributes id="ia-fs0">
      <attributes>
        <nvpair id="ia-fs0-1" name="fstype" value="ext3"/>
        <nvpair id="ia-fs0-2" name="directory" value="/storage"/>
        <nvpair id="ia-fs0-3" name="device" value="/dev/drbd0"/>
      </attributes>
    </instance_attributes>
    <meta_attributes id="fs0-meta-options">
      <attributes>
        <nvpair id="fs0-meta-options-timeout" name="timeout" value="10s"/>
      </attributes>
    </meta_attributes>
    <instance_attributes id="fs0">
      <attributes>
        <nvpair name="target_role" id="fs0-target_role" value="started"/>
      </attributes>
    </instance_attributes>
  </primitive>[/code]
The constraints for this look like this:

[code]<constraints>
  <rsc_order id="promote_drbd0_before_group" action="start" from="cluster_1" type="after" to_action="promote" to="master-slave-drbd0"/>
  <rsc_colocation id="fs0_on_drbd0-stopped" to="master-slave-drbd0" to_role="stopped" from="fs0" score="-infinity"/>
  <rsc_colocation id="fs0_on_drbd0-slave" to="master-slave-drbd0" to_role="slave" from="fs0" score="-infinity"/>
  <rsc_colocation id="group_where_drbd0_is" to="master-slave-drbd0" to_role="master" from="cluster_1" score="infinity"/>
</constraints>
</configuration>[/code]
So, up to this point everything works.

Now I need several virtual IP addresses, four to be exact.

I created a separate XML file in which I defined the resources:

[code]<resources>
  <group id="cluster_1">
    <primitive class="ocf" id="IP0" provider="heartbeat" type="IPaddr2">
      <operations>
        <op id="IP0_mon" interval="10s" name="monitor" timeout="5s"/>
      </operations>
      <instance_attributes id="IP0_inst_attr">
        <attributes>
          <nvpair id="IP0_attr_0" name="ip" value="192.168.4.200"/>
          <nvpair id="IP0_attr_1" name="netmask" value="24"/>
          <nvpair id="IP0_attr_2" name="nic" value="eth1"/>
        </attributes>
      </instance_attributes>
    </primitive>
    <primitive class="ocf" id="IP1" provider="heartbeat" type="IPaddr2">
      <operations>
        <op id="IP1_mon" interval="10s" name="monitor" timeout="5s"/>
      </operations>
      <instance_attributes id="IP1_inst_attr">
        <attributes>
          <nvpair id="IP1_attr_0" name="ip" value="192.168.4.301"/>
          <nvpair id="IP1_attr_1" name="netmask" value="24"/>
          <nvpair id="IP1_attr_2" name="nic" value="eth1"/>
        </attributes>
      </instance_attributes>
    </primitive>
    <primitive class="ocf" id="IP2" provider="heartbeat" type="IPaddr2">
      <operations>
        <op id="IP2_mon" interval="10s" name="monitor" timeout="5s"/>
      </operations>
      <instance_attributes id="IP2_inst_attr">
        <attributes>
          <nvpair id="IP2_attr_0" name="ip" value="192.168.4.302"/>
          <nvpair id="IP2_attr_1" name="netmask" value="24"/>
          <nvpair id="IP2_attr_2" name="nic" value="eth1"/>
        </attributes>
      </instance_attributes>
    </primitive>
    <primitive class="ocf" id="IP3" provider="heartbeat" type="IPaddr2">
      <operations>
        <op id="IP3_mon" interval="10s" name="monitor" timeout="5s"/>
      </operations>
      <instance_attributes id="IP3_inst_attr">
        <attributes>
          <nvpair id="IP3_attr_0" name="ip" value="192.168.4.303"/>
          <nvpair id="IP3_attr_1" name="netmask" value="24"/>
          <nvpair id="IP3_attr_2" name="nic" value="eth1"/>
        </attributes>
      </instance_attributes>
    </primitive>
  </group>
</resources>[/code]
When I add this with:

everything gets inserted into cib.xml just fine, but after a restart the whole cluster_1 group does not start. So DRBD works, but neither the filesystem nor the IP addresses come up.

I think the nvpair entries for each of the four IP addresses are correct. Could the problem be with type="IPaddr2"?

this:

you simply can't have that. IP octets only go up to .255 :smiley: Besides that, if it doesn't start, run

and read what it tells you.

A typo; it should be 201, 202, 203.
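In other words, only the last octets were wrong; the corrected nvpairs would look like this:

[code]<nvpair id="IP1_attr_0" name="ip" value="192.168.4.201"/>
<nvpair id="IP2_attr_0" name="ip" value="192.168.4.202"/>
<nvpair id="IP3_attr_0" name="ip" value="192.168.4.203"/>[/code]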

And the best part of the whole situation: after almost 10 years of experience in networking, I nonchalantly went and tried to configure this.

Hero.

Thanks @maher_

So,

[code]root@node1:/storage# crm_verify -LV
crm_verify[6899]: 2010/11/16_08:57:41 WARN: unpack_rsc_op: Processing failed op VIP_0_monitor_0 on node1: Error
crm_verify[6899]: 2010/11/16_08:57:41 WARN: unpack_rsc_op: Processing failed op VIP_0_stop_0 on node1: Error
crm_verify[6899]: 2010/11/16_08:57:41 WARN: unpack_rsc_op: Compatability handling for failed op VIP_0_stop_0 on node1
crm_verify[6899]: 2010/11/16_08:57:41 WARN: unpack_rsc_op: Processing failed op VIP_0_monitor_0 on node2: Error
crm_verify[6899]: 2010/11/16_08:57:41 WARN: unpack_rsc_op: Processing failed op VIP_0_stop_0 on node2: Error
crm_verify[6899]: 2010/11/16_08:57:41 WARN: unpack_rsc_op: Compatability handling for failed op VIP_0_stop_0 on node2
crm_verify[6899]: 2010/11/16_08:57:41 WARN: native_color: Resource VIP_0 cannot run anywhere
crm_verify[6899]: 2010/11/16_08:57:41 ERROR: native_create_actions: Attempting recovery of resource VIP_0
crm_verify[6899]: 2010/11/16_08:57:41 WARN: custom_action: Action VIP_0_stop_0 (unmanaged)
crm_verify[6899]: 2010/11/16_08:57:41 WARN: custom_action: Action VIP_0_stop_0 (unmanaged)
Warnings found during check: config may not be valid[/code]
config:

[code]<resources>
  <group id="cluster_1">
    <primitive class="ocf" id="VIP_0" provider="heartbeat" type="IPaddr2">
      <operations>
        <op id="VIP_0_mon" interval="10s" name="monitor"/>
      </operations>
      <instance_attributes id="VIP_0_inst_attr">
        <attributes>
          <nvpair id="VIP_0_attr_0" name="ip" value="192.168.4.200"/>
          <nvpair id="VIP_0_attr_1" name="netmask" value="24"/>
          <nvpair id="VIP_0_attr_2" name="nic" value="eht1"/>
        </attributes>
      </instance_attributes>
    </primitive>[/code]
Could this be a problem with the different configurations for IPaddr and IPaddr2?

And if I set in the monitor op

interval="10s"

does the timeout have to be larger than the interval?

EDIT: same problem with type="IPaddr"

OK, me being an idiot again.

eht1, of course, does not exist.
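(Worth noting: after a fix like this the old failed monitor/stop operations usually stay recorded and block the resource, so they have to be cleaned up before the cluster retries it. A minimal sketch, assuming Pacemaker's crm_resource is available:)

[code]# clear the recorded failures for VIP_0 so the cluster retries it
crm_resource -C -r VIP_0

# or via the crm shell, if installed:
crm resource cleanup VIP_0[/code]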

A new problem.

The IP addresses are sorted out, all good.

crm_mon:

[code]============
Last updated: Tue Nov 16 18:40:38 2010
Current DC: node1 (edec6395-0329-408e-9b47-c1fbc17e8cc7)
2 Nodes configured.
2 Resources configured.

Node: node2 (295aa315-0dcf-4a0b-914e-977fc9b4c985): online
Node: node1 (edec6395-0329-408e-9b47-c1fbc17e8cc7): online

Master/Slave Set: master-slave-drbd0
drbd0:0 (heartbeat::ocf:drbd): Started node1
drbd0:1 (heartbeat::ocf:drbd): Master node2
Resource Group: cluster_1
fs0 (heartbeat::ocf:Filesystem): Started node2
VIP_0 (heartbeat::ocf:IPaddr2): Started node2
VIP_1 (heartbeat::ocf:IPaddr2): Started node2
VIP_2 (heartbeat::ocf:IPaddr2): Started node2
VIP_3 (heartbeat::ocf:IPaddr2): Started node2[/code]
When I insert the apache resource with cibadmin:

[code]<resources>
  <group id="cluster_1">
    <primitive class="ocf" provider="heartbeat" id="apache_resource" type="apache">
      <operations>
        <op id="apache_mon" interval="60s" name="monitor" timeout="30s"/>
      </operations>
      <instance_attributes id="apache_res_attr">
        <attributes>
          <nvpair name="configfile" value="/etc/apache2/apache2.conf" id="apache_res_attr_0"/>
          <nvpair name="httpd" value="/usr/sbin/apache2" id="apache_res_attr_1"/>
          <nvpair name="statusurl" value="http://localhost/server-status" id="apache_res_attr_2"/>
        </attributes>
      </instance_attributes>
    </primitive>
  </group>
</resources>[/code]
crm_mon reports this:

[code] apache_resource (heartbeat::ocf:apache): Stopped

Failed actions:
    apache_resource_start_0 (node=node1, call=24, rc=-2): Timed Out
    apache_resource_start_0 (node=node2, call=28, rc=-2): Timed Out[/code]
I get similar errors with the tomcat resources and the mysql resource, i.e. with resources that rely on an external config file.

crm_verify:

[code]root@node2:/opt/heartbeat_config# crm_verify -LV
crm_verify[14105]: 2010/11/16_18:45:11 WARN: unpack_rsc_op: Processing failed op apache_resource_start_0 on node1: Timed Out
crm_verify[14105]: 2010/11/16_18:45:11 WARN: unpack_rsc_op: Compatability handling for failed op apache_resource_start_0 on node1
crm_verify[14105]: 2010/11/16_18:45:11 WARN: unpack_rsc_op: Processing failed op apache_resource_start_0 on node2: Timed Out
crm_verify[14105]: 2010/11/16_18:45:11 WARN: unpack_rsc_op: Compatability handling for failed op apache_resource_start_0 on node2
crm_verify[14105]: 2010/11/16_18:45:11 WARN: native_color: Resource apache_resource cannot run anywhere
Warnings found during check: config may not be valid[/code]
Any ideas?
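(One way to narrow down a start timeout like this is to run the OCF agent by hand with the same parameters the cluster passes in and watch where it hangs. A sketch, assuming the usual Heartbeat agent layout; paths may differ on your install:)

[code]# feed the agent the same parameters as in the CIB
export OCF_ROOT=/usr/lib/ocf
export OCF_RESKEY_configfile=/etc/apache2/apache2.conf
export OCF_RESKEY_httpd=/usr/sbin/apache2
export OCF_RESKEY_statusurl=http://localhost/server-status

/usr/lib/ocf/resource.d/heartbeat/apache start; echo "exit code: $?"

# the agent polls statusurl until it answers, so mod_status must be reachable:
a2enmod status && /etc/init.d/apache2 reload[/code]

A start that blocks on the statusurl check is one common way to hit exactly this kind of timeout.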

[quote=Amar]A new problem.

The IP addresses are sorted out, all good.

crm_mon:

[code]============
Last updated: Tue Nov 16 18:40:38 2010
Current DC: node1 (edec6395-0329-408e-9b47-c1fbc17e8cc7)
2 Nodes configured.
2 Resources configured.

Node: node2 (295aa315-0dcf-4a0b-914e-977fc9b4c985): online
Node: node1 (edec6395-0329-408e-9b47-c1fbc17e8cc7): online

Master/Slave Set: master-slave-drbd0
drbd0:0 (heartbeat::ocf:drbd): Started node1
drbd0:1 (heartbeat::ocf:drbd): Master node2
Resource Group: cluster_1
fs0 (heartbeat::ocf:Filesystem): Started node2
VIP_0 (heartbeat::ocf:IPaddr2): Started node2
VIP_1 (heartbeat::ocf:IPaddr2): Started node2
VIP_2 (heartbeat::ocf:IPaddr2): Started node2
VIP_3 (heartbeat::ocf:IPaddr2): Started node2[/code]
When I insert the apache resource with cibadmin:

[code]<resources>
  <group id="cluster_1">
    <primitive class="ocf" provider="heartbeat" id="apache_resource" type="apache">
      <operations>
        <op id="apache_mon" interval="60s" name="monitor" timeout="30s"/>
      </operations>
      <instance_attributes id="apache_res_attr">
        <attributes>
          <nvpair name="configfile" value="/etc/apache2/apache2.conf" id="apache_res_attr_0"/>
          <nvpair name="httpd" value="/usr/sbin/apache2" id="apache_res_attr_1"/>
          <nvpair name="statusurl" value="http://localhost/server-status" id="apache_res_attr_2"/>
        </attributes>
      </instance_attributes>
    </primitive>
  </group>
</resources>[/code]
crm_mon reports this:

[code] apache_resource (heartbeat::ocf:apache): Stopped

Failed actions:
    apache_resource_start_0 (node=node1, call=24, rc=-2): Timed Out
    apache_resource_start_0 (node=node2, call=28, rc=-2): Timed Out[/code]
I get similar errors with the tomcat resources and the mysql resource, i.e. with resources that rely on an external config file.

crm_verify:

[code]root@node2:/opt/heartbeat_config# crm_verify -LV
crm_verify[14105]: 2010/11/16_18:45:11 WARN: unpack_rsc_op: Processing failed op apache_resource_start_0 on node1: Timed Out
crm_verify[14105]: 2010/11/16_18:45:11 WARN: unpack_rsc_op: Compatability handling for failed op apache_resource_start_0 on node1
crm_verify[14105]: 2010/11/16_18:45:11 WARN: unpack_rsc_op: Processing failed op apache_resource_start_0 on node2: Timed Out
crm_verify[14105]: 2010/11/16_18:45:11 WARN: unpack_rsc_op: Compatability handling for failed op apache_resource_start_0 on node2
crm_verify[14105]: 2010/11/16_18:45:11 WARN: native_color: Resource apache_resource cannot run anywhere
Warnings found during check: config may not be valid[/code]
Any ideas?[/quote]
Take a look at /var/log/messages, syslog and daemon, and also check the debug log… there you'll find the whole mile-long procedure. Simply follow which error shows up after apache starts (you made a mistake somewhere in the config, either apache's or heartbeat's; the log will tell you a lot).
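(A sketch of how to follow that in practice; the exact log files depend on the distro and on how ha.cf/logd is set up:)

[code]tail -f /var/log/syslog /var/log/daemon.log | grep -iE 'apache|error|warn'[/code]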

[code]Master/Slave Set: master-slave-drbd0
drbd0:0 (heartbeat::ocf:drbd): Master node2
drbd0:1 (heartbeat::ocf:drbd): Started node1[/code]
So, this is what I have in crm_mon.

The slave node does not start as slave; it just shows "Started". As a consequence I have the following:

[code]root@node2:/opt/heartbeat_config# cat /proc/drbd
version: 8.0.11 (api:86/proto:86)
GIT-hash: b3fe2bdfd3b9f7c2f923186883eb9e2a0d3a5b1b build by phil@mescal, 2008-02-12 11:56:43
 0: cs:StandAlone st:Primary/Unknown ds:UpToDate/DUnknown r---
    ns:0 nr:0 dw:124 dr:3209 al:0 bm:0 lo:0 pe:0 ua:0 ap:0
    resync: used:0/31 hits:0 misses:0 starving:0 dirty:0 changed:0
    act_log: used:0/257 hits:31 misses:0 starving:0 dirty:0 changed:0[/code]
This is the resource in cib.xml:

[code]<master_slave id="master-slave-drbd0">
  <meta_attributes id="ma-master-slave-drbd0">
    <attributes>
      <nvpair id="ma-master-slave-drbd0-1" name="clone_max" value="2"/>
      <nvpair id="ma-master-slave-drbd0-2" name="clone_node_max" value="1"/>
      <nvpair id="ma-master-slave-drbd0-3" name="master_max" value="1"/>
      <nvpair id="ma-master-slave-drbd0-4" name="master_node_max" value="1"/>
      <nvpair id="ma-master-slave-drbd0-5" name="notify" value="yes"/>
      <nvpair id="ma-master-slave-drbd0-6" name="globally_unique" value="false"/>
      <nvpair name="target_role" id="ma-master-slave-drbd0-7" value="#default"/>
    </attributes>
  </meta_attributes>
  <primitive id="drbd0" class="ocf" provider="heartbeat" type="drbd">
    <instance_attributes id="instance-attr-drbd0">
      <attributes>
        <nvpair id="instance-attr-drbd0-1" name="drbd_resource" value="drbd1"/>
      </attributes>
    </instance_attributes>
    <operations>
      <op id="ms-drbd0_monitor" name="monitor" interval="10" timeout="20" start_delay="1m" role="Started" disabled="false" on_fail="restart"/>
    </operations>
  </primitive>
  <instance_attributes id="master-slave-drbd0">
    <attributes>
      <nvpair id="master-slave-drbd0-target_role" name="target_role" value="started"/>
    </attributes>
  </instance_attributes>
</master_slave>[/code]
and the constraints:

[code]<constraints>
  <rsc_order id="promote_drbd0_before_group" action="start" from="cluster_1" type="after" to_action="promote" to="master-slave-drbd0"/>
  <rsc_colocation id="fs0_on_drbd0-stopped" to="master-slave-drbd0" to_role="stopped" from="fs0" score="-infinity"/>
  <rsc_colocation id="fs0_on_drbd0-slave" to="master-slave-drbd0" to_role="slave" from="fs0" score="-infinity"/>
  <rsc_colocation id="group_where_drbd0_is" to="master-slave-drbd0" to_role="master" from="cluster_1" score="infinity"/>
</constraints>
</configuration>[/code]

Configure wait-for-connection and wait-for-degraded/outdated, I think that's what the options are called in drbd.conf… What happened here is that the wait-for-connection timeout expired and DRBD simply doesn't connect any more. Configure it to wait indefinitely for the connection and it will work. If you installed DRBD 8.2 or 8.3, the OCF script from Linbit should have come with it (class="ocf" provider="linbit" type="drbd") <- use that one…
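(For reference, a minimal sketch of what that looks like in the startup section of drbd.conf; the option names are from DRBD 8.x, where 0 means wait forever:)

[code]startup {
    wfc-timeout       0;    # wait indefinitely for the peer on a clean boot
    degr-wfc-timeout  120;  # wait 120s when the cluster boots degraded
}[/code]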

The DRBD is an older one, so no OCF script came with it. Thanks for the quick reply; the problem was a split-brain, which is why DRBD refused to connect. I resolved the split-brain and for now added stonith via ssh.
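(For anyone hitting the same thing, a sketch of the usual manual split-brain recovery on DRBD 8.x; drbd1 is the resource name from the config above, and the node you run the first two commands on loses its divergent changes:)

[code]# on the node whose data should be discarded:
drbdadm secondary drbd1
drbdadm -- --discard-my-data connect drbd1

# on the surviving node, if it is StandAlone:
drbdadm connect drbd1[/code]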

Since both nodes are just VMs on Xen servers, we decided on a custom-made stonith script that will run on the Xen servers. A colleague is already writing it, so I'll post it here if everything works out.

Just a short update.

All problems solved, in the following way:

  1. Ubuntu was uninstalled from all machines.
  2. All Ubuntu installation media, Ubuntu stickers, manuals and books were thrown in the trash.
  3. Use of Ubuntu was banned by company policy.
  4. Using the word Ubuntu was banned; the penalty is immediate dismissal.
  5. Debian Squeeze was installed on all machines.
  6. Everything (drbd, pacemaker/corosync, stonith) was configured without a hitch in one day.

EDIT: it's spelled corosync, you hick, not cronosync

Congratulations :slight_smile:

This should be framed somewhere :smiley:

Come on, someone make this thread a sticky, pretty please :smiley:

[quote=Amar]Just a short update.

All problems solved, in the following way:

  1. Ubuntu was uninstalled from all machines.
  2. All Ubuntu installation media, Ubuntu stickers, manuals and books were thrown in the trash.
  3. Use of Ubuntu was banned by company policy.
  4. Using the word Ubuntu was banned; the penalty is immediate dismissal.
  5. Debian Squeeze was installed on all machines.
  6. Everything (drbd, pacemaker/corosync, stonith) was configured without a hitch in one day.

EDIT: it's spelled corosync, you hick, not cronosync[/quote]
ufff… your word against Ubuntu's :slight_smile:

Well now, our goal wasn't a pretty desktop with ultra 3D effects and nice colors, but a stable cluster solution. For the former, Ubuntu is Tito; for the latter, it doesn't come up to Debian's knees. If you think it does, by all means install what I tried and then get back to me.

Eh, Linux is Linux; I know from experience it's not impossible, on some distros it's maybe a bit "harder". Anyway, sorry for the offtopic; the mud-slinging annoyed me a little, so I flew in with a sliding tackle :wink:

No worries; I'm not saying it's impossible, only that it's incomparably harder than on, say, Debian. And that's from experience too, recent experience at that :).

I'm not slinging mud at Ubuntu; I'm just saying it's not the ideal solution for servers.

EDIT:

Some things may actually be impossible. Take our situation: we have all our servers as VMs on Xen machines. In Ubuntu's case, paravirtualization is supported up to version 9.04; none of the newer versions are supported by Citrix. A similar story with Debian: 32-bit Lenny is supported but 64-bit is not, and Squeeze isn't supported either. Now, paravirtualization can also be done by hand. In the case of Ubuntu 10.04 and 10.10, I still mourn the lost week of my life. In the case of Squeeze, everything was finished yesterday in about an hour.

Well, lately there are quite a few companies offering Ubuntu specifically as a server platform; for example, see these two screenshots, not to list any more…

Before any deployment, it's best to check which distro is certified (where possible) for the given software, or which distro the developers recommend (what they're keen on :)).

Fair point, but look, I expressed myself wrongly above: the issue isn't Xen but Ubuntu itself. Up to version 9.04, Ubuntu supported Xen; you could install a xen-kernel normally and then do the paravirtualization on the Xen machine. From version 9.10, Ubuntu stopped supporting Xen and offers KVM as the replacement. That sounds great, except that KVM is about as stable as the Bosnian economy.
Now, we as a company have been working with Xen for a long time, and we don't want to change that part just because someone at Canonical decided that Xen is pointless.

For comparison, while installing Squeeze it automatically detected that it was running as a Xen VM, so Debian itself did half the paravirtualization work. I only had to install a new kernel (because the businesscard.iso simply ships with the standard kernel) and run a couple more commands on the Xen machine itself. All together it took about half an hour.
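(Since only a couple of commands are mentioned: a rough sketch of what that typically involves, assuming a 32-bit Squeeze domU on Citrix XenServer; the package and parameter names here are illustrative and may differ:)

[code]# inside the Squeeze VM: install a Xen-aware kernel
apt-get install linux-image-xen-686

# on the XenServer host: switch the VM to paravirtualized boot via pygrub
xe vm-param-set uuid=<vm-uuid> PV-bootloader=pygrub[/code]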

Whereas I lost a solid week trying to do exactly the same for Ubuntu 10.04 and 10.10.

Now, I won't praise my own horse and claim that Xen is the best virtualization solution, but the fact is that it's a very widespread one, and Canonical's decision to drop support for it puzzles me.