I take it back

The war on noise continues apace. Next up, remove Jimmy from service, replacing it with George.
# fdisk -H 224 -S 56 /dev/sdd  

# fdisk -H 224 -S 56 /dev/sde
# sfdisk -l /dev/sdd

Disk /dev/sdd: 182401 cylinders, 255 heads, 63 sectors/track
Warning: The partition table looks like it was made
for C/H/S=*/224/56 (instead of 182401/255/63).
For this listing I'll assume that geometry.
Units = cylinders of 6422528 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks  Id  System
/dev/sdd1          0+ 233598  233599- 1465132900 fd  Linux raid autodetect
/dev/sdd2          0       -       0          0   0  Empty
/dev/sdd3          0       -       0          0   0  Empty
/dev/sdd4          0       -       0          0   0  Empty

sdd and sde are a pair of WD15EARSs, with 4k sectors. Normally I use sfdisk -d to copy the partition table from one disk to the other, but that failed on the 4k sectors.
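The -H 224 -S 56 geometry is the usual alignment trick for 4k-sector drives: it makes the sectors-per-cylinder count divisible by 8, so cylinder-aligned partitions start on 4 KiB boundaries. A quick sanity check against the numbers sfdisk printed above:

```shell
# Sanity-check the -H 224 -S 56 geometry trick for 4 KiB-sector drives.
heads=224
sectors=56
sector_bytes=512            # logical sector size the drive reports

# Bytes per cylinder; sfdisk reported "cylinders of 6422528 bytes".
cyl_bytes=$((heads * sectors * sector_bytes))
echo "cylinder size: $cyl_bytes bytes"

# Sectors per cylinder must divide evenly by 8 (8 x 512 B = 4 KiB)
# for cylinder-aligned partitions to land on 4 KiB boundaries.
spc=$((heads * sectors))
echo "sectors/cylinder: $spc, remainder mod 8: $((spc % 8))"
```

That prints a cylinder size of 6422528 bytes, matching the sfdisk listing, with a remainder of 0.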
mdadm -C -n 2 -l 1 /dev/md2 /dev/sdd1 /dev/sde1

pvcreate /dev/md2
pvs -o name,pe_start
vgcreate -s 32M T00 /dev/md2
lvcreate -l 99%VG --name LV00 T00
mkfs -t ext4 -E stride=32 -m 1 -O extents,uninit_bg,dir_index,filetype,has_journal /dev/T00/LV00
tune4fs -c 0 -i 0 /dev/T00/LV00

I got the formatting commands from I Do Linux. I read up on all those mkfs options. I'd never have guessed they were the "best" options to use.
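The stride option is the one worth understanding: it tells ext4 how many filesystem blocks make up one RAID chunk, so it can spread metadata sensibly. A stride of 32 with ext4's 4 KiB blocks implies a 128 KiB chunk; that chunk size is my assumption, since the real value comes from mdadm --detail /dev/md2 (and on a two-disk RAID1 there's no striping, so stride matters far less than on RAID5/6 anyway):

```shell
# stride = RAID chunk size / ext4 block size, both in the same units.
# chunk_kib is an assumption; check the real value with:
#   mdadm --detail /dev/md2   (or cat /proc/mdstat)
chunk_kib=128               # assumed RAID chunk size in KiB
block_kib=4                 # ext4 block size in KiB (mkfs default here)

stride=$((chunk_kib / block_kib))
echo "stride=$stride"       # 32, matching -E stride=32 above
```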

The tune4fs call basically turns off the fsck that happens automatically every X days or Y mounts; fsck is SLOW, and the automatic fsck always happens when you least want it. And with journals, UPSes and so on, an unsafe shutdown isn't supposed to happen.

I left some space on the VG free so I could do snapshots.

But then I got to thinking. Blocking off one large LV means that the file server VM gets access to the entire disk and all that space can't be used by a different VM or the host for another purpose. With LVM, I can grow the LV if I need it. So why not give it 500G at a time?
e4fsck -f /dev/T00/LV00 

resize4fs -p /dev/T00/LV00 500G
lvresize -L 500G -t -v /dev/T00/LV00
resize4fs -p /dev/T00/LV00

This was as much a test of ext4 resizing as anything else. And it worked flawlessly. Btw, fsck on an empty FS is fast.
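For scale: from the partition table above, each mirror half is 1465132900 1-KiB blocks, and RAID1 usable space equals one half. A rough count of how many more 500G handouts that plan allows:

```shell
# Headroom for the 500G-at-a-time plan, from the sfdisk #blocks column.
part_bytes=$((1465132900 * 1024))      # 1 KiB blocks per mirror half
gib=$((part_bytes / 1024 / 1024 / 1024))
echo "usable array size: ${gib} GiB"   # RAID1: same as one partition

echo "full 500G steps left after the first: $(( (gib - 500) / 500 ))"
```

So after the first 500G there's room for one more full 500G extension, plus a partial one.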

So what does it look like:
# df -h

Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/G00-root  572G  537G  5.8G  99% /
/dev/md0              950M   31M  870M   4% /boot
tmpfs                 5.9G     0  5.9G   0% /dev/shm
/dev/mapper/T00-LV00  493G  493G     0 100% /T00


Well, no. I wanted to make sure the 500G as resize4fs sees it and 500G as lvresize sees it were the same thing, so I filled the FS to the brim:
# dd if=/dev/zero of=/T00/HUGE

dd: writing to `/T00/HUGE': No space left on device
1031717425+0 records in
1031717424+0 records out
528239321088 bytes (528 GB) copied, 6608.14 seconds, 79.9 MB/s

See that? Bear in mind that this dd was happening during the initial RAID build. My hypothesis: the mobo in Corey (ASUS M2N-MX SE), which I was doing the previous tests on, SUCKS. And the mobo in George (ASUS DSBV-DX) does not.
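The dd numbers hang together, too. dd writes 512-byte blocks by default, lvresize's 500G means 500 GiB, and dd's rate is in decimal megabytes; the ~8 GB gap between the LV and what root could write is ext4 overhead (journal, inode tables, group descriptors):

```shell
# Verify dd's reported byte count and throughput.
records=1031717424
bytes=$((records * 512))               # dd's default block size is 512 B
echo "written: $bytes bytes"           # 528239321088, as dd reported

lv_bytes=$((500 * 1024 * 1024 * 1024)) # lvresize -L 500G means 500 GiB
echo "FS overhead: $((lv_bytes - bytes)) bytes"

# dd reports decimal megabytes: bytes / seconds / 1e6.
awk 'BEGIN { printf "%.1f MB/s\n", 528239321088 / 6608.14 / 1e6 }'
```

The last line prints 79.9 MB/s, exactly what dd said.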

# time rm -f /T00/HUGE

real 0m31.020s
user 0m0.000s
sys 0m20.302s
