ssh "$host" "commands;" 2>&1 | while read -r line ; do
    # react to any error messages or messages from commands in $line
done
For instance, say you were running x11vnc on a remote host.  x11vnc has the annoying habit of using a port other than the one you specify if the one you want is already taken.  Very annoying.  So:
ssh $host "x11vnc ...." 2>&1 | while read line ; do
if [[ $line =~ 'PORT=([[:digit:]]+)' ]] ; then
port=${BASH_REMATCH[1]}
# now set up some sort of port forwarding so that $port is a sane, known port
ssh -L '*:9600:localhost'$port $host
fi
done
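x11vnc announces the port it actually bound by printing a line like the one below on stdout (the number here is just an example), which is what the regex picks up into BASH_REMATCH[1]:

PORT=5901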
This has some problems, in that the second ssh can survive x11vnc exiting.  I thought "hey, how about $!?" but saving the PID has its own problems; say the second ssh exits before its time.  The PID you saved could have been reused, and the kill you'd want to do would provoke hilarity.  While complaining about this on IRC, a wise soul suggested that I open a lock file; any process that still has that file open must then be killed.  I don't need locking, and didn't want to learn about flock in bash right away, so what I roughly did was:
child_kill () {
    # nothing to do if we never created a lock file
    if [[ ! $LOCKFILE ]] ; then
        return 0
    fi
    # lsof -F reports each process holding the file as a line of the form "p<PID>"
    lsof -F '' "$LOCKFILE" | while read -r ppid ; do
        # again, the regex has to be unquoted for bash to treat it as a regex
        if [[ $ppid =~ ^p([[:digit:]]+)$ ]] ; then
            pid=${BASH_REMATCH[1]}
            if [[ $pid != $$ ]] ; then
                kill -HUP $pid
            fi
        fi
    done
    rm -f "$LOCKFILE"
}
LOCKFILE=$(mktemp -p /tmp)   # no "local" here; it only works inside a function
trap "child_kill" EXIT
ssh ... | while read -r line ; do
    ....
    # open the lock file on a spare fd inside a subshell; the backgrounded
    # ssh inherits that fd, so lsof on $LOCKFILE can find it later
    ( exec 123>"$LOCKFILE"
      ssh -L ..... "$host" &
    )
done
child_kill
I wish there were a better way to deal with lsof's output, but this works, so why complain?
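That said, lsof does have a -t flag that prints nothing but PIDs, one per line, which would make the parsing go away entirely; roughly (assuming your lsof supports -t):

# -t prints only the PIDs of processes holding the file, so no field parsing is needed
for pid in $(lsof -t "$LOCKFILE") ; do
    if [[ $pid != $$ ]] ; then
        kill -HUP $pid
    fi
done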
What's more, I wish I could use SSH's ControlMaster to make the second connection that much faster, but the quick testing I did with OpenSSH 4.9p1 failed.  Bugger.
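For reference, the multiplexing setup I had in mind looks roughly like this (the control socket path is just an example):

# start a master connection in the background, with a control socket
ssh -M -S /tmp/ctl-%r@%h:%p -fN "$host"
# later connections reuse the master's existing TCP session via the same socket
ssh -S /tmp/ctl-%r@%h:%p -L "*:9600:localhost:$port" "$host"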
 