raid10 mdadm ?

 ( vajtsz | Tuesday, 8 January 2008 - 13:44 )

Hello

No matter how much I searched for information, I just couldn't figure out how to create sw raid10 under Debian.
I mean without stacking raid1 volumes into a raid0 - that way it can be done, I know.

I tried it under Ubuntu and there it works in a single step; what's more, when I look at the kernel config there, the raid10 option is present. But when I download the latest kernel from kernel.org, I can't find that option anywhere.
Do I need some kernel patch? Or what?
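
(A quick way to check whether a given kernel can do md raid10 at all - the module and file names below are the usual Debian defaults, so adjust them if yours differ:)

#modprobe raid10
#cat /proc/mdstat
(the "Personalities" line should list [raid10])
#grep MD_RAID10 /boot/config-`uname -r`
(CONFIG_MD_RAID10=y or =m means the running kernel was built with it)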

Vajtsz

My problem with this was that sw raid10 interprets things in its own peculiar way: it is not obvious which raid members it counts as the raid1 halves and which ones it puts into the raid0.

Anyway, it seemed to me that it wants all 4 partitions right at creation time.
Something like this:

#mknod /dev/md0 b 9 0
#mdadm --create /dev/md0 -v --raid-devices=4 --chunk=32 --level=raid10 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

mdadm: layout defaults to n2
mdadm: size set to <however much>K
mdadm: array /dev/md0 started.

# cat /proc/mdstat

Personalities : [raid1] [raid10]
md0 : active raid10 sdd1[3] sdc1[2] sdb1[1] sda1[0]
1465143808 blocks 32K chunks 2 near-copies [4/4] [UUUU]
[>....................] resync = 0.2% (3058848/1465143808) finish=159.3min speed=152942K/sec

That's roughly how it should go...

Of course I built the raid1 first too, and that's what should now be converted into this "proper" raid10 - but I don't have enough drives for that.
I'll try re-creating it with missing members... huh, that's going to be exciting.
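
(Roughly the plan with the missing members - only a sketch: the device names are examples, it assumes the default n2 layout where adjacent member slots form the mirror pairs, and it assumes the old raid1 currently lives on sdc1+sdd1, so double-check with mdadm --detail before trusting it with real data:)

#mdadm --create /dev/md0 --level=raid10 --layout=n2 --raid-devices=4 /dev/sda1 missing /dev/sdb1 missing
(the array starts degraded but usable; copy the data over from the old raid1, then stop the raid1 and hand its disks to the raid10:)
#mdadm /dev/md0 --add /dev/sdc1
#mdadm /dev/md0 --add /dev/sdd1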

Oh, and one more thing: if I understood correctly, a raid10 array cannot be the root?

So does it really handle raid10, or does it just build it as raid1+0 itself?
That's the question - and with it whether my first comment was justified.

--
drk

I read somewhere that it doesn't treat it as raid1+0 but rather "places the data here or there based on its own peculiar criteria". I know that sounds funny, but I can't find the place where it was nicely written up. If I find it, I'll post it.

So the answer is: yes, it really does handle it!

But first, a piece on which is better, raid1+0 vs. raid0+1: http://aput.net/~jheiss/raid10/

And then it is you who decides which one it does:
-----------------------------------cuthere----------------------
7. Which RAID10 layout scheme should I use
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
RAID10 gives you the choice between three ways of laying out the blocks on
the disk. Assuming a simple 4 drive setup with 2 copies of each block, then
if A,B,C are data blocks, a,b their parts, and 1,2 denote their copies, the
following would be a classic RAID1+0 where 1,2 and 3,4 are RAID0 pairs
combined into a RAID1:

near=2 would be (this is the classic RAID1+0)

hdd1 Aa1 Ba1 Ca1
hdd2 Aa2 Ba2 Ca2
hdd3 Ab1 Bb1 Cb1
hdd4 Ab2 Bb2 Cb2

offset=2 would be

hdd1 Aa1 Bb2 Ca1 Db2
hdd2 Ab1 Aa2 Cb1 Ca2
hdd3 Ba1 Ab2 Da1 Cb2
hdd4 Bb1 Ba2 Db1 Da2

far=2 would be

hdd1 Aa1 Ca1 .... Bb2 Db2
hdd2 Ab1 Cb1 .... Aa2 Ca2
hdd3 Ba1 Da1 .... Ab2 Cb2
hdd4 Bb1 Db1 .... Ba2 Da2

Where the second set starts half-way through the drives.

The advantage of far= is that you can easily spread a long sequential read
across the drives. The cost is more seeking for writes. offset= can
possibly get similar benefits with large enough chunk size. Neither upstream
nor the package maintainer have tried to understand all the implications of
that layout. It was added simply because it is a supported layout in DDF and
DDF support is a goal.
-----------------------------------cuthere----------------------

So you can essentially weigh how much trouble a single drive failure causes!? (Either way, the failed drive has to be replaced urgently.)
So the question is which you find more appealing: near, offset or far. http://en.wikipedia.org/wiki/Non-standard_RAID_levels#Linux_MD_RAID_10
I haven't decided yet either.
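
(Whichever you pick, it can be verified afterwards; something like this should show it - for the default it prints "Layout : near=2", and /proc/mdstat reports the same thing as "2 near-copies", like in the paste above:)

#mdadm --detail /dev/md0 | grep -i layout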

(a bit more reading:)
------------cuthere----------
RAID10 provides a combination of RAID1 and RAID0, and is sometimes known as RAID1+0. Every datablock is duplicated some number of times, and the resulting collection of datablocks is distributed over multiple drives.

When configuring a RAID10 array it is necessary to specify the number of replicas of each data block that are required (this will normally be 2) and whether the replicas should be 'near', 'offset' or 'far'. (Note that the 'offset' layout is only available from 2.6.18).

When 'near' replicas are chosen, the multiple copies of a given chunk are laid out consecutively across the stripes of the array, so the two copies of a datablock will likely be at the same offset on two adjacent devices.

When 'far' replicas are chosen, the multiple copies of a given chunk are laid out quite distant from each other. The first copy of all data blocks will be striped across the early part of all drives in RAID0 fashion, and then the next copy of all blocks will be striped across a later section of all drives, always ensuring that all copies of any given block are on different drives.

The 'far' arrangement can give sequential read performance equal to that of a RAID0 array, but at the cost of degraded write performance.

When 'offset' replicas are chosen, the multiple copies of a given chunk are laid out on consecutive drives and at consecutive offsets. Effectively each stripe is duplicated and the copies are offset by one device. This should give similar read characteristics to 'far' if a suitably large chunk size is used, but without as much seeking for writes.

It should be noted that the number of devices in a RAID10 array need not be a multiple of the number of replicas of each data block, though there must be at least as many devices as replicas.

If, for example, an array is created with 5 devices and 2 replicas, then space equivalent to 2.5 of the devices will be available, and every block will be stored on two different devices.

Finally, it is possible to have an array with both 'near' and 'far' copies. If an array is configured with 2 near copies and 2 far copies, then there will be a total of 4 copies of each block, each on a different drive. This is an artifact of the implementation and is unlikely to be of real value.
---------cuthere---------

-p, --layout=
This option configures the fine details of data layout for raid5, and raid10 arrays, and controls the failure modes for faulty.

The layout of the raid5 parity block can be one of left-asymmetric, left-symmetric, right-asymmetric, right-symmetric, la, ra, ls, rs. The default is left-symmetric.

When setting the failure mode for faulty the options are: write-transient, wt, read-transient, rt, write-persistent, wp, read-persistent, rp, write-all, read-fixable, rf, clear, flush, none.

Each mode can be followed by a number which is used as a period between fault generation. Without a number, the fault is generated once on the first relevant request. With a number, the fault will be generated after that many requests, and will continue to be generated every time the period elapses.

Multiple failure modes can be current simultaneously by using the "--grow" option to set subsequent failure modes.

"clear" or "none" will remove any pending or periodic failure modes, and "flush" will clear any persistent faults.

To set the parity with "--grow", the level of the array ("faulty") must be specified before the fault mode is specified.

Finally, the layout options for RAID10 are one of 'n', 'o' or 'f' followed by a small number. The default is 'n2'.

n signals 'near' copies. Multiple copies of one data block are at similar offsets in different devices.

o signals 'offset' copies. Rather than the chunks being duplicated within a stripe, whole stripes are duplicated but are rotated by one device so duplicate blocks are on different devices. Thus subsequent copies of a block are in the next drive, and are one chunk further down.

f signals 'far' copies (multiple copies have very different offsets). See md(4) for more detail about 'near' and 'far'.

The number is the number of copies of each datablock. 2 is normal, 3 can be useful. This number can be at most equal to the number of devices in the array. It does not need to divide evenly into that number (e.g. it is perfectly legal to have an 'n2' layout for an array with an odd number of devices).

So then it should be created roughly like this:

mdadm --create /dev/md0 --chunk=1024 --level=raid10 --layout=f2 --raid-devices=4 /dev/sd[abcd]1

(You'll have to play with the layout [n2, f2, o2] - which one wins is up to you; n2 is the classic raid1+0, and I don't like that one.)
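
(A crude way to compare them - just a sketch, the device names are examples and this of course wipes whatever is on those partitions; re-create the array with each layout and read it sequentially, ideally after the initial sync has finished so the resync doesn't skew the numbers:)

#mdadm --create /dev/md0 --chunk=1024 --level=raid10 --layout=n2 --raid-devices=4 /dev/sd[abcd]1
#dd if=/dev/md0 of=/dev/null bs=1M count=4096
#mdadm --stop /dev/md0
#mdadm --zero-superblock /dev/sd[abcd]1
(then the same with --layout=f2 and --layout=o2; a dd read only shows the sequential side, write behaviour is best tested with the real workload)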

Hi,

As far as I know for certain, raid10 = 2*raid1 > raid0
- so that would already make it raid20!!! :D:D:D -
or raid0+1, which in turn is 2x raid0 > raid1

As a reference: http://en.wikipedia.org/wiki/Nested_RAID_levels.

IMHO in Ubuntu they at most simplify it so the user doesn't have to think through what is actually happening - those are the kinds of reasons I can imagine.

--
drk

So then raid0+1 is 2*raid0 + raid1 = raid1? OMFG MEGALOL

I downloaded the newest kernel (2.6.23.12), and raid10 is in it (CONFIG_MD_RAID10).

Yes, I was the culprit again: the reason raid10 wasn't "visible" to me is that it is experimental, and I hadn't enabled showing experimental drivers in menuconfig...
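
(For reference, in 2.6.23 the option depends on the experimental stuff, so roughly this pair has to end up in the .config - =y instead of =m if you want it built in rather than as a module:)

CONFIG_EXPERIMENTAL=y
CONFIG_MD_RAID10=m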

If you have 4 disks, I think I'd rather go raid6 at that point. Then it doesn't matter which 2 of them die, the data is always still there.

Though it's a question how fast sw raid6 is, since its algorithm is non-trivial. I do know that in degraded mode reconstruction can be quite CPU-time hungry; the question is how expensive writes are.
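
(For comparison, creating it as raid6 is just a different level - a sketch with example device names; with 4 disks the usable capacity is the same two disks' worth as with the raid10 above, but any two of the four may fail:)

#mdadm --create /dev/md0 --level=raid6 --raid-devices=4 /dev/sd[abcd]1
#cat /proc/mdstat
(the resync speed it reports, and the raid6 algorithm benchmark the kernel prints in dmesg when the module loads, give a rough idea of what the parity math costs on the given CPU)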