
2025/02/03

grub2 vs xfs

So I just tried

# grub2-install --boot-directory=/boot2 /dev/sdb1
Installing for i386-pc platform.
grub2-install: error: hd0 appears to contain a xfs filesystem which isn't known to reserve space for DOS-style boot.  Installing GRUB there could result in FILESYSTEM DESTRUCTION if valuable data is overwritten by grub-setup (--skip-fs-probe disables this check, use at your own risk).

And then I did

# grub2-install --boot-directory=/boot2 /dev/sdb1 --skip-fs-probe
Installing for i386-pc platform.
grub2-install: warning: File system `xfs' doesn't support embedding.
grub2-install: warning: Embedding is not possible.  GRUB can only be installed in this setup by using blocklists.  However, blocklists are UNRELIABLE and their use is discouraged..
grub2-install: error: will not proceed with blocklists.

But of course I'm an idiot: I don't want to install the grub loader on sdb1, I want to install it on sdb, where the BIOS can actually find it.

# grub2-install --boot-directory=/boot2 /dev/sdb
Installing for i386-pc platform.
Installation finished. No error reported.
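
If you want that second boot directory to actually contain a menu, you'll probably also want to generate a config into it. Something like this should do it (a sketch; I'm assuming grub2-install created the usual /boot2/grub2 layout):

# grub2-mkconfig -o /boot2/grub2/grub.cfg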

2024/04/23

thunderbird vs self-signed certs

A default dovecot install on AlmaLinux 9 creates a self-signed SSL certificate. Thunderbird is now very picky about SSL certs. It used to tell you a certificate wasn't valid and let you create an exception. Now it just spins and does nothing. You will see the following in your dovecot logs:

Apr 23 18:47:42 sHOST dovecot[12484]: imap-login: Disconnected: Connection closed: SSL_accept() failed: error:0A000412:SSL routines::sslv3 alert bad certificate: SSL alert number 42 (no auth attempts in 0 secs): user=<>, rip=CLIENTIP, lip=HOSTIP, TLS handshaking: SSL_accept() failed: error:0A000412:SSL routines::sslv3 alert bad certificate: SSL alert number 42, session=<BNl+V8sWFOgKAAAF>

I spent 4-5 hours running around in circles trying to find a solution.

The first step is to get the certificate somewhere Thunderbird can import it from: tell dovecot to listen on port 443 (https) by adding the following lines to the service imap-login stanza in /etc/dovecot/conf.d/10-master.conf:

service imap-login {
  # ...existing listeners stay as they are...
  inet_listener https {
    port = 443
    ssl = yes
  }
}

Note that you could also set up lighttpd to serve up the cert.

Restart dovecot with:

systemctl restart dovecot

Test the above with:

openssl s_client -connect YOURHOST:443

Then, in Thunderbird, you go into Hamburger > Preferences > Privacy & Security > (scroll way down) > Manage Certificates... In the Certificate Manager window, select the Servers tab, click Add Exception... and enter https://YOURHOST:443. Then click Get Certificate and Confirm Security Exception.

We now have an exception for YOURHOST:443, but we want YOURHOST:993 (if you are using SSL/TLS) or YOURHOST:143 (if you are using STARTTLS). To fix the port number, you need to close Thunderbird, then modify the Thunderbird profile directly. Under Linux, this is ~USER/.thunderbird/SOMETHING-NON-OBVIOUS. I had a half dozen directories. To find the one you want:

cd ~/.thunderbird
find . -name cert_override.txt | xargs ls -l --sort=time

The most recently modified file is the one you want to edit. Look for the line that reads:

YOURHOST:443    OID.2.16.840.1.101.3.4.2.1      HEX-STRING-HERE U       BASE64-STRING-HERE

Change the :443 on that line to :993 (for SSL/TLS) or :143 (for STARTTLS).
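
If you'd rather not hand-edit the file, a sed one-liner along these lines should work (YOURHOST as above; the .bak keeps a backup):

sed -i.bak 's/^YOURHOST:443/YOURHOST:993/' cert_override.txt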

You can confirm you have the correct line by comparing the HEX-STRING-HERE with your dovecot cert's SHA256 fingerprint:

openssl x509 -sha256 -in /etc/pki/dovecot/certs/dovecot.pem -noout -fingerprint

2022/02/25

mecab-devel, where are you?

To compile MySQL from the SRPM on AlmaLinux 8, you need mecab-devel, which doesn't seem to exist. After some digging around, this is the solution I found:

sudo yum --enablerepo=powertools group install "Development Tools"
sudo yum install make gcc-c++ rpm-build

mkdir -pv ~/rpmbuild/{BUILD,BUILDROOT,RPMS,SOURCES,SPECS,SRPMS}
cd ~/rpmbuild/SOURCES
wget 'https://drive.google.com/uc?export=download&id=0B4y35FiV1wh7cENtOXlicTFaRUE' -O mecab-0.996.tar.gz
cd ~/rpmbuild/SPECS
wget https://git.almalinux.org/rpms/mecab/raw/branch/c8-stream-8.0/SPECS/mecab.spec

rpmbuild -ba mecab.spec

cd ~/rpmbuild/RPMS/x86_64/
sudo yum install mecab*.rpm
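
With mecab and mecab-devel installed, the MySQL SRPM should now rebuild. Roughly (the SRPM filename below is only an example; use whatever you actually downloaded):

rpmbuild --rebuild mysql-8.0.28-1.el8.src.rpm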

This isn't perfect. Why would someone host their code on Google Drive? But it seems this is what the author wanted.

2022/01/27

virsh vs old VMs

So I upgraded my VM server: fresh new NVMe drives in RAID1 with AlmaLinux 8 on them. I then copied all my VMs over, updated their machine types to pc or q35 as needed, and launched them to test them. Most booted up fine. Three of them failed with the following error:

qemu-kvm: block/io.c:1438: bdrv_aligned_preadv: Assertion `(offset & (align - 1)) == 0' failed.

Fortunately they weren't important VMs, so I could wait until today to fix the problem. The solution was to use qemu-img to rewrite the disk images.

cd /kvm/VM/pool
mkdir bad
mv vda.img bad
qemu-img convert bad/vda.img vda.img -p -O qcow2

That should work for most people. Note that I assume your images are qcow2 (which they should be). If you have some raw images, you could change qcow2 to raw above, or change type='raw' to type='qcow2' via virsh edit VM.
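
Either way, you can sanity-check the result with qemu-img before booting the VM; check also verifies the qcow2 metadata:

qemu-img info vda.img
qemu-img check vda.img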

Here we verify the new images.

modprobe nbd max_part=8
qemu-nbd --connect=/dev/nbd0 /kvm/VM/pool/vda.img
# disconcertingly, Alma will autoactivate any VGs on the image.  Otherwise we'd do
partx -a /dev/nbd0
pvscan /dev/nbd0p2
vgchange -ay VGNAME
# now things should be available
fsck -f /dev/nbd0p1
fsck -f /dev/mapper/VGNAME-lvname
vgchange -an VGNAME
qemu-nbd --disconnect=/dev/nbd0

My image had /boot as partition 1 and / as an LV in partition 2.

2021/12/14

If you're like me, you run libvirt on a headless server and look at VM consoles with virt-viewer. You also probably see the following warnings:

Gtk-Message: 14:08:40.348: Failed to load module "canberra-gtk-module"
Gtk-Message: 14:08:40.348: Failed to load module "pk-gtk-module"
Gtk-Message: 14:08:40.477: Failed to load module "canberra-gtk-module"
Gtk-Message: 14:08:40.477: Failed to load module "pk-gtk-module"

The solution is as follows:

yum install PackageKit-gtk3-module libcanberra-gtk3

2021/04/14

Sendmail smart relay with TLS and plain auth

Instructions on how I set up a sendmail smart relay with TLS and plain authentication on CentOS 6.

First, make sure you have enough installed:

yum -y install ca-certificates sendmail sendmail-cf

Create /etc/mail/authinfo:

AuthInfo:YOUR.HOST.COM    "U:YOUR-USER@YOUR.HOST.COM" "I:YOUR-USER" "P:YOUR-PASSWORD" "M:LOGIN PLAIN"

Replace YOUR.HOST.COM, YOUR-USER and YOUR-PASSWORD with the correct stuff. LOGIN PLAIN stays as-is if you are using plaintext logins. Make sure to chmod 0600 this file.

Add the following to /etc/mail/sendmail.mc, making sure you use m4's dumbass `quotation' style:

define(`SMART_HOST', `YOUR.HOST.COM')dnl
define(`RELAY_MAILER',`esmtp')dnl
define(`RELAY_MAILER_ARGS', `TCP $h 587')dnl
FEATURE(`authinfo')dnl
define(`confCACERT_PATH', `/etc/pki/tls/certs')dnl
define(`confCACERT', `/etc/pki/tls/certs/ca-bundle.crt')dnl

Note that the above is TCP port 587, which you might need to change.
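
Also, depending on your /etc/mail/Makefile, the authinfo map may not get rebuilt automatically. If sendmail complains about a missing authinfo.db, build it by hand:

makemap hash /etc/mail/authinfo < /etc/mail/authinfo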

Finally, you restart sendmail and test as you normally would:

chmod 0600 /etc/mail/authinfo
service sendmail restart
echo "Testing" | mail -s "Test 1" somebody@example.com
tail -F /var/log/maillog

2020/03/30

Laser printer vs UPS

As an extra wrinkle to the previous post, the new Lexmark is on a UPS. Yes, I can hear you screaming "NEVER PUT A LASER PRINTER ON A UPS" but I'm a professional, I know what I'm doing. Specifically, I over-engineered my solution.

The problem with a laser printer on a UPS is that you might be tempted to print something during a blackout, and your UPS's battery will never supply the current your fuser needs. My solution is to build a plug with relays connected to an Arduino connected to a computer connected to the UPS. When power goes out, apcupsd on the computer runs a small script that sends a command to the Arduino via USB to turn off the relays. The Arduino also has a momentary switch. When I push the switch, the Arduino turns the relays on. I also have a small script that sends the command to turn the relays on.

I probably could have avoided the Arduino and used a transistor latch. Power off turns the latch and relays off. Only a button press turns the latch and relays on. But then I couldn't do the following.

One annoying """feature""" of the Lexmark is that it has AirPrint and Wi-Fi. I have yet to find out how to turn these off, so the printer spends most of its time turned off. So I wrote a CUPS backend that checks if the printer is on, sends the Arduino the "turn on" command if it isn't, then chains to the normal socket backend to actually send the data to the printer.

#!/bin/bash

HWEL=10.0.0.68
LOG=/tmp/hwel-driver.log

# a single-argument invocation just gets handed off to the stock socket backend
if [[ $# == 1 ]] ; then
    exec /usr/lib/cups/backend/socket "$@"
fi

function aping () {
    local host=$1
    if ping -c 1 -q $host >/dev/null ; then
        return 0
    fi
    return 1
}

function ping_wait () {
    local host=$1
    while true ; do
        if ping -c 1 -q $host >/dev/null ; then
            return
        fi
        sleep 2
    done
}


# refuse to print unless apcupsd reports the UPS is on mains power
status=$(/sbin/apcaccess status localhost:3552 | grep STATUS | cut -d: -f 2)
if [[ $status =~ ONLINE ]] ; then
    echo "INFO: hwel battery status=$status" | tee -a $LOG >&2
else
    echo "ERROR: hwel battery status=$status" | tee -a $LOG >&2
    exit 17
fi

# if the printer doesn't answer pings, have the Arduino power it up, then wait for it to come online
aping $HWEL || sudo /home/fil/bin/hwel-on
ping_wait $HWEL

export DEVICE_URI=socket://$HWEL:9100
exec /usr/lib/cups/backend/socket "$@"

I put this script in /usr/lib/cups/backend/hwel and tell CUPS to use a new URI:

lpadmin -p hwelraw -v hwel://hwel.localdomain:9100
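
One thing CUPS is picky about, if memory serves: the backend has to be executable and owned by root, or cupsd will ignore it:

chown root:root /usr/lib/cups/backend/hwel
chmod 755 /usr/lib/cups/backend/hwel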

Yes, my printer is called Hwel. Yes, I name all my printers after Discworld dwarfs.

Windows 10 vs Lexmark

So I have a new Lexmark MB2236adw laser printer. For some reason beyond my ken, Windows 10 keeps saying it's offline, when clearly it isn't. Printer works fine from Linux, so I decided that Windows is going to print via CUPS.

First, create a raw printer on your CUPS host:

lpadmin -p hwelraw -v socket://hwel.localdomain:9100 -E \
     -D "Lexmark on UPS (raw)" -L "Philip's Office" -o raw

Then add the following to cupsd.conf to allow printing via the network:

<Location /printers>
    Order allow,deny
    Allow localhost
    Allow 10.0.0.*
</Location>

Remember to reload CUPS:

service cups reload

Open up port 631 in iptables.
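
If you're still on plain iptables, something along these lines should do it (adjust the source network to match your LAN):

iptables -I INPUT -p tcp --dport 631 -s 10.0.0.0/24 -j ACCEPT
service iptables save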

Now, on the Windows computer, install the Lexmark drivers.

Then you have to turn on IPP: Control Panel > Programs > Turn Windows Features On or Off > Print and Document Services > Internet Printing Client must be checked.

Now it's time to Add a printer, using the "The printer that I want isn't listed" button. You are going to be using an http URL that will look like http://scott:631/printers/hwelraw. Replace scott with your Linux computer's name and hwelraw with the CUPS queue name. Select the printer driver installed previously, print your test page and bask in the glory of getting a computer to do your bidding.

As a bonus step, print a 2-page document to make sure Windows can print double-sided, because my new printer does cool things like that.

2020/01/16

daemontools vs selinux

While CentOS ships with a policy module for daemontools, it expects you to install things in /admin and /supervised. I don't; I put things in /var/daemontools/admin and /var/daemontools/supervised. So I spent far too much time trying to make a policy module that would work with my setup. My initial attempt worked and gave me hope:

cp /usr/share/selinux/devel/include/contrib/daemontools.{if,te,fc} .
joe daemontools.fc # changed all the dirs to /var/daemontools
make -f /usr/share/selinux/devel/include/Makefile
semodule -i daemontools.pp

However, this replaced the default daemontools module and might mess with other systems. I'm probably the last person to care about daemontools outside of qmail. Also, I wanted to learn some selinux.

My next thought was that I could create a daemontools_quaero policy module that gave the executables an fcontext from daemontools module. This didn't work and I don't know why.

After turning to IRC and much messing around and back and forth, grift led me to the following solution:

cat - > daemontools_quaero.cil <<'CIL'
(filecon "/var/daemontools/admin/daemontools-0.76/command/envdir" file (system_u object_r bin_t ((s0)(s0))))
(filecon "/var/daemontools/admin/daemontools-0.76/command/envuidgid" file (system_u object_r bin_t ((s0)(s0))))
(filecon "/var/daemontools/admin/daemontools-0.76/command/fghack" file (system_u object_r bin_t ((s0)(s0))))
(filecon "/var/daemontools/admin/daemontools-0.76/command/multilog" file (system_u object_r bin_t ((s0)(s0))))
(filecon "/var/daemontools/admin/daemontools-0.76/command/pgrphack" file (system_u object_r bin_t ((s0)(s0))))
(filecon "/var/daemontools/admin/daemontools-0.76/command/setlock" file (system_u object_r bin_t ((s0)(s0))))
(filecon "/var/daemontools/admin/daemontools-0.76/command/setuidgid" file (system_u object_r bin_t ((s0)(s0))))
(filecon "/var/daemontools/admin/daemontools-0.76/command/softlimit" file (system_u object_r bin_t ((s0)(s0))))
(filecon "/var/daemontools/admin/daemontools-0.76/command/svc" file (system_u object_r bin_t ((s0)(s0))))
(filecon "/var/daemontools/admin/daemontools-0.76/command/svok" file (system_u object_r bin_t ((s0)(s0))))
(filecon "/var/daemontools/admin/daemontools-0.76/command/svscan" file (system_u object_r bin_t ((s0)(s0))))
(filecon "/var/daemontools/admin/daemontools-0.76/command/svscanboot" file (system_u object_r bin_t ((s0)(s0))))
(filecon "/var/daemontools/admin/daemontools-0.76/command/supervise" file (system_u object_r bin_t ((s0)(s0))))
(filecon "/var/daemontools/supervised/prog-log" file (system_u object_r bin_t ((s0)(s0))))
(filecon "/var/daemontools/supervised/prog-user" file (system_u object_r bin_t ((s0)(s0))))
(filecon "/var/daemontools/supervised/sudo-user" file (system_u object_r bin_t ((s0)(s0))))
(filecon "/var/daemontools/supervised/.+/env" dir (system_u object_r svc_conf_t ((s0)(s0))))
(filecon "/var/daemontools/supervised/.+/run" file (system_u object_r bin_t ((s0)(s0))))
(filecon "/var/daemontools/supervised/.+/log/env" dir (system_u object_r svc_conf_t ((s0)(s0))))
(filecon "/var/daemontools/supervised/.+/log/run" file (system_u object_r bin_t ((s0)(s0))))
CIL
sudo semodule -i daemontools_quaero.cil
sudo restorecon -RvF /var/daemontools/admin/daemontools-0.76/command /var/daemontools/supervised/
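
You can spot-check that the new contexts took with matchpathcon and ls -lZ:

matchpathcon /var/daemontools/admin/daemontools-0.76/command/svscan
ls -lZ /var/daemontools/supervised/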

And there was much rejoicing.

Along the way, I discovered semodule -E, matchpathcon and the policy language as well as sesearch, seinfo, ps auxZ, ls -lZ.

2019/10/30

daemontools vs systemd

Here's the bare minimum to get daemontools running under systemd:

cat <<CNF > /usr/lib/systemd/system/daemontools.service
[Unit]
Description=DJB daemontools
After=network.target

[Service]
ExecStart=/var/daemontools/command/svscanboot
Restart=always

[Install]
WantedBy=multi-user.target
CNF
systemctl enable daemontools
systemctl start daemontools
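
If systemctl enable complains that it can't find the unit, a daemon-reload after writing the file usually sorts that out:

systemctl daemon-reload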

Note that the default daemontools install would put the executable in /command/svscanboot.

2018/07/17

CPUs are computers

So I wanted to get a Shuttle DX30 working under LTSP. First problem: it uses a pxe-client-id that's 9 bytes long. CentOS's dhcpd throws away anything that doesn't have a 17-byte pxe-client-id. I have no idea why. I tried poking around the RFCs but didn't find much of interest.

Once I patched dhcpd, PXE booting worked and I managed to load a kernel. Except it was slow.

Very.

Very.

Very.

Slow.

Just how slow? It took over 30 minutes to get to an XDM prompt slow.

I saw Machine check events going past. To track those down, I put my laptop's 2.5 inch HDD into it. This was also slow. While not as bad, it was clearly not working properly. But I got mcelog to grab the following:

mcelog: failed to prefill DIMM database from DMI data
mcelog: mcelog read: No such device
Hardware event. This is not a software error.
MCE 0
CPU 0 BANK 4 
ADDR fef5d200 
TIME 1531800239 Tue Jul 17 00:03:59 2018
MCG status:
MCi status:
Error overflow
Uncorrected error
MCi_ADDR register valid
Processor context corrupt
MCA: Internal unclassified error: 408
Running trigger `unknown-error-trigger'
STATUS e600000000020408 MCGSTATUS 0
MCGCAP c07 APICID 0 SOCKETID 0 
CPUID Vendor Intel Family 6 Model 92

After some messing around, cursing the Gods, discussing Intel NUCs on #ltsp, and a general lack of sleep, I found the solution: kernel-ml

yum -y  --enablerepo=elrepo-kernel install kernel-ml
joe /etc/grub.conf # set default=0

It should be noted that it took over 10 minutes for dracut to create the initrd for kernel-ml. But once I booted into the new kernel, everything was fine. I installed kernel-lt as a test, since installing a kernel exercises both the CPU and the disk; that install took 2 minutes, which is annoying but expected. For reference, installing kernel-ml on my desktop takes 1.5 minutes, but it has an SSD.

Booting into kernel-lt was a failure: 9m54s to install kernel-ml-4.17.5-1.el6.elrepo.x86_64 from it. I booted back into kernel-ml, removed and reinstalled kernel-ml-4.17.5-1.el6.elrepo.x86_64, and it took 1m52s. So kernel-ml is the clear winner.

Now to find out how to install kernel-ml into LTSP.

2017/05/10

vmware-vdiskmanager and CentOS 6

This is how you install vmware-vdiskmanager on CentOS 6. I needed to do this so I could read my old vmware-server vmdk.

First, go to this old KB article and download 1023856-vdiskmanager-linux.7.0.1.zip. It's at the bottom, in the Attachments section. Now you do the following little dance:

unzip 1023856-vdiskmanager-linux.7.0.1.zip
cp 1023856-vmware-vdiskmanager-linux.7.0.1  /usr/local/sbin/vmware-vdiskmanager 
chmod +x  /usr/local/sbin/vmware-vdiskmanager
yum -y install zlib.i686 glibc.i686 openssl098e.i686
mkdir -pv /usr/lib/vmware/lib
cd /usr/lib/vmware/lib/
ln -s /usr/lib/libcrypto.so.0.9.8e 
ln -s libcrypto.so.0.9.8e libcrypto.so.0.9.8 
ln -s libcrypto.so.0.9.8e libcrypto.so.0
ln -s libcrypto.so.0.9.8e libcrypto.so
ln -s /usr/lib/libssl.so.0.9.8e
ln -s libssl.so.0.9.8e libssl.so.0.9.8
ln -s libssl.so.0.9.8e libssl.so.0
ln -s libssl.so.0.9.8e libssl.so

That fucking around in /usr/lib/vmware/lib is because even though VMware claims this is a static binary, it in fact dynamically loads crypto libraries at run time from non-standard places.

You can now convert your split vmdk to a single file and mount it:

vmware-vdiskmanager -r sda.vmdk -t 0 sda-single.vmdk
modprobe nbd max_part=8
qemu-nbd -r --connect=/dev/nbd0 sda-single.vmdk
kpartx -a /dev/nbd0
vgscan
vgchange -a y YOURVG
mount -o ro /dev/mapper/YOURVG-YOURLV /mnt

Aren't you glad you created a unique VG for each of your VMs?

To unmount:

umount /mnt
vgchange -a n YOURVG
kpartx -d /dev/nbd0
qemu-nbd -d /dev/nbd0

2017/05/01

daemontools, system V init and mysql

This is how you set up a babysitter for a service started with System V init scripts using DJB's daemontools. We can't just put service $service start into a run file, because Sys V init scripts start up background daemons. We have to use the daemon's PID file to watch what's going on.

First, I create mysql-babysit. I'm using mysql as an example. For other services, adjust $service and $pidfile.

# mkdir /var/daemontools/supervised/mysql-babysit
# cd /var/daemontools/supervised/mysql-babysit
# cat <<'SH' > mysql-babysit
#!/bin/bash

service=mysql

datadir=/var/lib/mysql
pidfile=$datadir/$(hostname).pid


##################
sleepPID=
function sig_finish () {
    echo $(date) $service "$1"
    service $service stop
    [[ $sleepPID ]] && kill $sleepPID
}
trap 'sig_finish TERM' TERM
trap 'sig_finish KILL' KILL


##################
echo $(date) $service start

service $service start

if [[ -f $pidfile ]] ; then
    pid=$(< $pidfile)
    if [[ $pid ]] ; then
        while grep -q $service /proc/$pid/cmdline 2>/dev/null ; do
            sleep 60 & sleepPID=$!
            wait $sleepPID
        done
        echo $(date) $service exited
        exit 0
    fi
fi
echo $(date) $service failed to start
sleep 5
exit 3
SH

Next we create and activate the run script:

cd /var/daemontools/supervised/mysql-babysit
# cat <<'SH' >run
#!/bin/bash

exec /var/daemontools/supervised/mysql-babysit/mysql-babysit
SH
# chmod +x run mysql-babysit
# chkconfig mysql off
# service mysql stop
# cd ../../service
# ln -s ../supervised/mysql-babysit

We can control mysql with:

svc -d /var/daemontools/supervised/mysql-babysit # shutdown mysql
svc -u /var/daemontools/supervised/mysql-babysit # startup mysql
killall mysqld # restart mysql
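
And svstat will confirm that supervise has picked it up:

svstat /var/daemontools/supervised/mysql-babysit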

2017/03/22

Wacom Intuos Photo and CentOS 6

I've had a Wacom Intuos 5x4 since 1998 or so. But support for the serial protocol it used has disappeared. What's more, my tablet was getting really crusty after nearly 20 years of use. So I went and bought a Wacom Intuos Photo, which has a smaller, wider surface (which is useful given I have 2 screens) but can also use a finger instead of a pen.

Of course, the new tablet didn't work out of the box. Well, it nearly worked.

Back in the day, I patched wacom_drv for XFree86 to get it working. Things have changed greatly since then. On modern Linux, the driver is in the kernel (wacom.ko), which creates input and event devices. X.org then uses HAL to enumerate input devices, and HAL also provides hints on how to configure them. The long and short of it is that we no longer need to mess around in xorg.conf when we change hardware. However, it gets very hard to debug when one of those layers does something annoying.

The easy way to get a Wacom Intuos Photo, Draw or Art to work on CentOS 6 is to install the backports of the linuxwacom drivers. If you are running the stock 2.6.32 kernel, everything will Just Work.

I'm using elrepo's 4.10 kernel-ml. This gets lm-sensors working for my motherboard and removes an annoying bug with my PS/2 keyboard.

In the 4.10 kernel, the wacom driver recognizes the tablet and does its job: creating 3 input event IDs, one for the pad, one for the stylus and one for finger touch. However, lshal is rejecting the finger touch. I traced it down to HAL_PROP_BUTTON_TYPE not being set when hald-probe-input is called. This means the stylus automatically works in X.org, but touch doesn't.

To get finger touch to work in X.org, I had to force things. First, I need to create a symlink to the finger event ID using udev, then a partial config file for X.org:

/usr/local/lib/udev/wacom-type.sh will output a short name for each device it is called on. Make sure this script is executable!

#!/bin/bash

name=$(cat /sys/$DEVPATH/device/name)
# echo "$DEVPATH=$name" >>/tmp/wacom-dev.txt

shopt -s nocasematch

if [[ $name =~ Finger ]] ; then
    echo finger
elif [[ $name =~ Pen ]] ; then
    echo pen
elif [[ $name =~ Pad ]] ; then
    echo pad
else
    echo unknown
fi

exit 0

/etc/udev/rules.d/99-wacom.rules convinces udevd to call the above when the tablet is detected. It also convinces udevd to create a symlink in /dev/input/wacom-finger. Note that I restrict to 056a:033c, which is a Wacom Intuos Draw/Photo/Art small version. You can find the USB ID of your tablet with lsusb.

#
# Will create /dev/input/wacom-finger, I hope
#
ACTION!="add|change", GOTO="my_wacom_end"
KERNEL!="event*", GOTO="my_wacom_end"

ENV{ID_VENDOR_ID}!="056a", GOTO="my_wacom_end"
ENV{ID_MODEL_ID}=="033c", PROGRAM=="/usr/local/lib/udev/wacom-type.sh", SYMLINK+="input/wacom-%c"

LABEL="my_wacom_end"

Test the above by running udevadm control --reload-rules, unplugging the tablet, waiting, plugging the tablet back in, then ls -l /dev/input, and you should see:

lrwxrwxrwx  1 root root      7 Mar 22 15:50 wacom-finger -> event11
lrwxrwxrwx  1 root root      7 Mar 22 15:50 wacom-pad -> event12
lrwxrwxrwx  1 root root      7 Mar 22 15:50 wacom-pen -> event10

The numbers after event will change each time you reboot or replug the tablet.
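
If you get tired of physically replugging the tablet while testing the rules, udevadm can replay the add events for you (a sketch):

udevadm control --reload-rules
udevadm trigger --action=add --subsystem-match=input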

/etc/X11/xorg.conf.d/wacom.conf will finally convince X.org to use the Wacom finger event id as a touch pad.

Section "InputDevice"
    Identifier "Finger"
    Driver "wacom"
    Option "Vendor" "Wacom"
    Option "AutoServerLayout" "on"
    Option "Type" "touch"
    Option "Device" "/dev/input/wacom-finger"
    Option "Mode" "Absolute"
    Option "Touch" "on"
    Option "Gesture" "off"
#    Option "Tilt" "on"
    Option "Threshold" "20"
    Option "Suppress" "6"
    Option "USB"    "On"
EndSection

I'd like to very much thank whot and jigpu who spent an impressive amount of time helping me over IRC.

UPDATE: 24 hours later, I have found a problem with the approach. If you unplug and replug the tablet, the Finger event ID will change. And while the wacom-finger symlink will be updated, X.org will not know that it's changed and hold onto the old event ID. This means finger will no longer work after replugging the tablet, at least until you restart X.org.

UPDATE: 1 year later and kernel-ml has gone to 4.16, which won't work. 4.15.15 is the last version that does work.

2017/03/20

Linux console

So I've finally upgraded Corey. By "upgrade", I mean "replaced every last component except the PSU." So basically it's a replacement.

This means I'm now running CentOS 6 on my desktop ("So soon!?" Shut up). It also means I have to fix all the little annoying things about CentOS 6. One of which is that modern kernels use a framebuffer, which switches the console to illegibly small text. The solution is to put video=640x480 or video=800x600 on your kernel command line, ideally in grub.conf.
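
Rather than hand-editing grub.conf, grubby can append the argument to every installed kernel for you; something like this should do it:

grubby --update-kernel=ALL --args="video=800x600"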

2014/08/05

Do not try this at home

What you are about to see is bad and wrong. There are many better and easier ways to do this. I'm documenting it here so you can see how hoary it is.

Say you have a system set up with RAID1 on / and /boot, 2 active, 1 spare disks. This system can function even if reduced to a single active disk. Surely one can clone the system just by rebuilding the arrays with new disks?

Short answer is "Yes".

The longer answer is "No, don't do that. Use Clonezilla."

The reason you shouldn't do it this way is that Linux RAID (aka mdadm aka md) uses UUIDs to identify arrays. It also has a field that contains the hostname the array was created on. What's more, LVM has volume group names. These all need to be unique if the original and the clone arrays are ever going to appear on the same system. Even if you are sure they never will, they'll eventually end up side by side in some admin software you are going to try out some day.

But can't you change the UUIDs, VGs, hostname and so on? Yes, you can. I just did it for a client. It was more pain than it was worth.

The following walk-through assumes my normal setup: the first partition of each disk is part of a RAID1, 3 active, no spares, that goes on /boot (called md1 or md126). The second partition of each disk is part of a RAID1, 2 active, no spares, that goes on / (called md0 or md127).

DO NOT TRY THIS AT HOME. I am a professional sysadmin with years of experience fucking up working systems. The following walk-through is provided without any warranty as to applicability or suitability to any sane or useful or safe task. Back up your data. Verify your backup. RAID is not a backup. YMMV. HTH. HAND.

  1. Make sure the arrays are fully in sync.
  2. Do a clean shutdown.
  3. Make sure you can boot from the first and second drives of the array.
  4. Remove the first active drive and the spare drive from the computer. Label them well and set them aside.
  5. Disconnect the second drive. This will be the first drive of the new clone.
  6. Boot from a LiveDVD or USB or something. You will need a distro that has mdadm, uuidgen and lvm. I used the CentOS 6.5 LiveDVD.
  7. telinit 1 # single user mode
    pstree # make sure nothing unwanted is running
    killall dhclient # kill everything unwanted.  You will need udevd
  8. Plug the old-second-new-first drive in and wait for things to settle
  9. Run cat /proc/mdstat and make sure your arrays are inactive. They will have (S) to mean they need to sync with something.
  10. This is the hairy bit:
    # get /boot working
    mdadm --stop /dev/md126
    mdadm --assemble --update=uuid --uuid=$(uuidgen) /dev/md126 /dev/sda1
    mdadm --stop /dev/md126
    mdadm --assemble --update=name --name=$(hostname):1 /dev/md126 /dev/sda1
    mdadm --stop /dev/md126
    mdadm --assemble /dev/md126 /dev/sda1 --run
    tune2fs -U $(uuidgen) /dev/md126
    # get / working
    mdadm --stop /dev/md127
    mdadm --assemble --update=uuid --uuid=$(uuidgen) /dev/md127 /dev/sda2
    mdadm --stop /dev/md127
    mdadm --assemble --update=name --name=$(hostname):0 /dev/md127 /dev/sda2
    mdadm --stop /dev/md127
    mdadm --assemble /dev/md127 /dev/sda2 --run
    # activate LVM on /dev/md127
    vgscan
    # rename VG
    vgrename OLDVG NEWVG
    # mount /
    vgchange -a y NEWVG
    tune2fs -U $(uuidgen) /dev/mapper/NEWVG-root
    mount /dev/mapper/NEWVG-root /mnt
    # mount /boot
    mount /dev/md126 /mnt/boot
  11. Now comes the really annoying part: you have to update /etc/fstab (CentOS 6 has the UUID of the /boot array), /boot/grub/grub.conf (CentOS 6 has the UUID of the / array and the VG of /) and possibly /boot/grub/initramfs-MUTTER.img to use the new UUIDs. The really fun part (for me) is that the LiveDVD doesn't have joe. So I had to write the UUID down on a piece of paper, then type it into grub.conf.

    You can find the UUID of an array with

    mdadm --detail /dev/md0
    If you want more flexibility, do
    mount --bind /proc /mnt/proc
    mount --bind /dev /mnt/dev
    mount --bind /sys /mnt/sys
    mount --bind /tmp /mnt/tmp
    chroot /mnt
    This will allow you to run mkinitrd if you need to. Note that this assumes your live DVD has a kernel that is compatible with your Linux distro.
  12. Reboot to new system. Keep your fingers crossed.
  13. Now you just insert your 2 other disks and run
    sfdisk -d /dev/sda | sfdisk /dev/sdb
    sfdisk -d /dev/sda | sfdisk /dev/sdc
    mdadm --add /dev/md126 /dev/sdb1
    mdadm --add /dev/md126 /dev/sdc1
    # wait until rebuild is finished
    cat <<GRUB | grub
    device (hd0) /dev/sdb
    root (hd0,0)
    setup (hd0)
    GRUB
    cat <<GRUB | grub
    device (hd0) /dev/sdc
    root (hd0,0)
    setup (hd0)
    GRUB
    mdadm --add /dev/md127 /dev/sdb2
    mdadm --add /dev/md127 /dev/sdc2
    
  14. Ask yourself - was this really worth it? Wouldn't Clonezilla have been so much easier?

That didn't seem too hard, you might be saying. What I'm omitting is that when I booted to the new array, I got a lot of checksum errors and a failed fsck. I did fsck -y /dev/md127 a bunch of times until it came up clean.

Also - how do I get my arrays back to md1 and md0? The old method (--update=super-minor) no longer works.

2014/06/12

End of an era

A hard drive was dying on Billy.

Thu Jun 12 19:15:31 EDT 2014
19:15:31 up 1316 days, 19:15,  2 users,  load average: 0.97, 0.97, 0.77

But the server is rented. So while I would have tried to do a hot swap, iWeb wisely wanted to do a shutdown.

Thu Jun 12 20:36:03 EDT 2014
20:36:03 up 8 min, 12 users,  load average: 2.79, 1.28, 0.56

Oh well.

2014/03/20

25,000 Linux/UNIX Servers Infected with Malware

And this is a big reason why you pay a real sysadmin to do your system administration.

In short, people were installing WordPress badly (friends don't let friends use PHP). They were allowing password authenticated ssh login over the internet. They were doing chmod 0777 ~apache/html_docs. They were doing other highly unsafe things.

If you can't see the problem with these things, then you need to talk to a professional sysadmin.

2014/03/18

I am INVINCIBLE!

Nothing is beyond me. When it comes to computers, I am all conquering.

That actually might be an exaggeration. But I just pulled off a stunt that really impressed me.

I'm moving all my systems from CentOS 5 to CentOS 6. (Why so soon? Shut up.) In the process, I need to move my VMs from VMware Server 1 (Seriously? Shut up.) to KVM (libvirt specifically). For the most part, I'm actually starting up whole new VMs and reconfiguring them. But I still might want to look at my old data, fetch old files, and what not. This means being able to read VMware's vmdk files. This is "easy":

modprobe nbd max_part=8
qemu-nbd -r --connect=/dev/nbd0 /vmware/files/sda.vmdk
kpartx -a /dev/nbd0
vgscan
vgchange -a y VolGroup00
mount -o ro /dev/mapper/VolGroup00-LogVol00 /mnt/files

There are 3 complications to this:

First, you can't have multiple VGs with the same name active at once. The workaround is to only mount one at a time. You can rename VGs with vgrename, but that's a job for another day.

Next, I chose to have my vmdk split into multiple 2GB files. This makes copying them around so much more fun. But qemu only understands monolithic files, so you need vmware-vdiskmanager to convert them. Specifically:

vmware-vdiskmanager -r /vmware/files/sda.vmdk -t 0 /vmware/files/single.vmdk

Lastly (and this is the main point of this post) CENTOS 6 DOESN'T SHIP WITH NBD! After WTFing about as hard as I could, I googled around for one. Someone must have needed nbd at some point, surely. The only solution I found was to recompile the kernel from scratch. Which is stupid. As a workaround, I used kernel-lt from elrepo. But the real solution would be a kmod. I thought doing a kmod would be hard, so I set aside a few hours. Turns out it's really easy and I got it right on the first try.

tl;dr - rpm -ivh kmod-nbd-0.0-1.el6.x86_64.rpm

I based my kmod on kmod-jfs from elrepo.

  1. Install the kmod-jfs SRPM;
  2. Copy jfs-kmod.spec to nbd-kmod.spec;
  3. Copy kmodtool-jfs-el6.sh to kmodtool-nbd-el6.sh
  4. Edit nbd-kmod.spec. You have to change kmod_name and the %changelog section. You might also want to change kversion to your current kernel (uname -r). If not, you need to add --define "kversion $(uname -r)" when running rpmbuild;
  5. Create nbd-0.0.tar.bz2;
  6. Build, install and test the new module.
    rpmbuild -ba nbd-kmod.spec
    rpm -ivh ~/rpmbuild/RPMS/x86_64/kmod-nbd-0.0-1.el6.x86_64.rpm
    modprobe nbd
    ls -l /dev/nbd*
  7. FLAWLESS VICTORY!

The hard part (of course) is that I wasn't sure what to put in nbd-0.0.tar.bz2. The contents of jfs-0.0.tar.bz2 just look like the files from fs/jfs in the kernel tree with Kconfig and Makefile added on. So I pulled down the kernel SRPM and did a rpmbuild -bp on that (just comment out all the BuildRequires that give you grief; you aren't doing a full build). Then I poked around for nbd in ~/rpmbuild/BUILD/vanilla-2.6.32-431.5.1.el6/. Turns out there's only nbd.c and nbd.h. So that goes in the pot. I copied over the Makefile from jfs, modifying it slightly because jfs is spread over multiple source files. Kconfig looked like kernel configuration vars. I just copied BLK_DEV_NBD out of vanilla-2.6.32-431.5.1.el6/drivers/block/Kconfig.

This entire process took roughly 1 hour. It worked on the first try. Of course, all the magic is in kmodtool-nbd-el6.sh. But I was expecting a lot of pain. Instead, it worked on the first try. I was so surprised I did modprobe -r nbd ; ls -l /dev/nbd* just to make sure I wasn't getting a false positive.

2014/01/24

Postfix mail relaying

RHEL 6 (and CentOS) has moved from sendmail to postfix. This is for the most part a Good Thing; sendmail was a mess. However, it means I have to learn some new stuff. Specifically, how to convince postfix to relay email through my ISP's SMTP server.

First, I have a VM that does relaying for all my other computers. On this VM, I set up postfix to relay all mail:

# /etc/postfix/main.cf
myorigin = awale.qc.ca
relayhost = smtp.cgocable.ca
inet_interfaces = all
mynetworks = 10.0.0.0/24, 127.0.0.0/8

myorigin means user@localhost becomes user@awale.qc.ca.

relayhost is where email is relayed to.

inet_interfaces means postfix will listen for SMTP on all of the VM's networks (the default is only localhost).

mynetworks means postfix will trust any email coming from a host on my LAN. Yes, this is not very secure. But I trust my LAN implicitly. I have Wi-Fi on a separate subnet, so anything on my LAN will have to be physically connected to it.

On other computers/VMs, I just need myorigin and relayhost. The latter points to the postfix VM, not COGECO.
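
For illustration, the client-side main.cf boils down to something like this (the relay hostname here is made up; point it at whatever your relay VM is actually called):

# /etc/postfix/main.cf on the other machines
myorigin = awale.qc.ca
relayhost = [relay.awale.qc.ca]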