
2023/06/01

AI genius part 2

Another REST server generated by AI:

use strict;
use warnings;
use HTTP::Server;
use JSON;
my $port = 8080;
my $server = HTTP::Server->new(
    Host => 'localhost',
    Port => $port,
);
$server->add_handler('/', sub {
    my $request = shift;
    my $method = $request->method();
    if ($method eq 'GET') {
        # GET request
        my $data = get_data();
        return JSON::encode($data);
    } elsif ($method eq 'POST') {
        # POST request
        my $content_type = $request->header('Content-Type');
        if ($content_type =~ /application\/json/) {
            my $body = $request->body();
            my $data = JSON::decode($body);
            process_data($data);
            return 'Data processed successfully';
        } else {
            return 'Invalid content type';
        }
    } else {
        return 'Invalid method';
    }
});
$server->start;
sub get_data {
    # Return some sample data
    my $data = {
        name => 'John Doe',
        age => 30,
        email => 'johndoe@example.com',
    };
    return $data;
}
sub process_data {
    # Process the data and do something with it
    my $data = shift;
    print "Received data: ", JSON::encode($data), "\n";
}

It suffers from the slight problem that HTTP::Server doesn't exist.
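
For the record, here is a minimal sketch of roughly the same server written against modules that do exist: HTTP::Daemon (from libwww-perl) and JSON::PP. It is single-process and blocking, and the data and routes are just placeholders, but at least it compiles and runs.

#!/usr/bin/perl
# A sketch, not the AI's code: HTTP::Daemon + JSON::PP instead of the
# non-existent HTTP::Server.
use strict;
use warnings;
use HTTP::Daemon;
use HTTP::Response;
use JSON::PP qw( encode_json decode_json );

my $d = HTTP::Daemon->new( LocalAddr => 'localhost', LocalPort => 8080 )
            or die "Can't listen: $!";
print "Listening at ", $d->url, "\n";

while( my $conn = $d->accept ) {
    while( my $req = $conn->get_request ) {
        if( $req->method eq 'GET' ) {
            my $data = { name => 'John Doe', age => 30, email => 'johndoe@example.com' };
            $conn->send_response( HTTP::Response->new( 200, 'OK',
                    [ 'Content-Type' => 'application/json' ],
                    encode_json( $data ) ) );
        }
        elsif( $req->method eq 'POST'
                and ( $req->header( 'Content-Type' ) || '' ) =~ m{application/json} ) {
            my $data = decode_json( $req->content );
            # ... do something with $data here ...
            $conn->send_response( HTTP::Response->new( 200, 'OK', [],
                    'Data processed successfully' ) );
        }
        else {
            $conn->send_error( 400 );
        }
    }
    $conn->close;
}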

2023/05/29

AI genius

This is some AI generating a Perl preforking HTTP server.

use IO::Prefork;
use LWP::UserAgent;
use Time::HiRes qw(gettimeofday);
use Socket;

my $prefork = IO::Prefork->new();
$prefork->listen(4000, '127.0.0.1');

sub handle_request {
    my ($req, $env) = @_;
    my $now = gettimeofday();
    print STDERR "Request received: $req\n";
    
    # fork a new process for this request
    my $pid = fork();
    if ($pid == 0) {
        # child process, handle the request
        eval {
            $prefork->handler($req, $env);
        };
        if (WANTED_EXIT_CODE) {
            exit(WANTED_EXIT_CODE);
        }
    } else {
        # parent process, wait for child to finish and handle its result
        my $child_exit_code = wait();
        die "Child process did not exit cleanly: $child_exit_code" if ($child_exit_code != 0);
        
        # send the result of the request back to the client
        my $response = $prefork->result;
        print STDERR "Sending response back to client\n";
        print $response;
    }
    
    # log the time it took to handle this request
    my $elapsed_time = $now[3] - $now[2];
    print STDERR "Request processed in $elapsed_time seconds\n";
}

$prefork->process({ 'id' => 1, 'cmd' => sub { exec("/usr/bin/php-fpm")->out(0); } });

First problem: IO::Prefork doesn't exist. And that functionality wouldn't go in the IO:: namespace anyway.

Second problem: It seems to want to do the forking in handle_request(), which would imply it's forking during each request, not before requests, as asked. I say "seems" because handle_request() isn't actually used anywhere.

The cherry on top is the last line, where it just execs php-fpm anyway.
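
For comparison, a real preforking server in Perl is usually a few lines on top of something that does exist, such as Net::Server::PreFork: the children are forked up front and each one accepts connections itself. A rough sketch (the echo handler is just filler):

package MyServer;
use strict;
use warnings;
use base 'Net::Server::PreFork';

# Called once per connection, inside a pre-forked child; the client
# socket is tied to STDIN/STDOUT.
sub process_request {
    my $self = shift;
    while( my $line = <STDIN> ) {
        last if $line =~ /^\s*$/;      # blank line ends the request
        print "echo: $line";
    }
}

MyServer->run(
    host        => '127.0.0.1',
    port        => 4000,
    min_servers => 5,
    max_servers => 20,
);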

2022/02/09

Attribute::Handlers vs Exporter

In Perl, how does one export an attribute handler?

Do the following:

package MyAttrDecl;
use strict;
use warnings;
use Attribute::Handlers;

require Exporter;
our @EXPORT = qw( _ATTR_CODE_MyAttr );
our @ISA = qw( Exporter );

  
sub MyAttr :ATTR(CODE) 
{
    my ($package, $symbol, $referent, $attr, $data, $phase, $filename, $linenum) = @_;
    my $name = *{$symbol}{NAME};
    warn "Adding an attribute to package=$package name=$name";
}

1;

The important part is _ATTR_CODE_MyAttr. Obviously you change this to match your code.

The above is called with the following:

use MyAttrDecl;

sub new :MyAttr( "/pos/v1" )
{
    return bless {}, shift;
}

2021/11/15

CentOS 6 vs CPAN and Let's Encrypt

Here is the magic to get CPAN CLI to work with https.

# cpan

cpan[1]> o conf urllist https://www.perl.com/CPAN
Please use 'o conf commit' to make the config permanent!

cpan[2]> o conf urllist                                 
    urllist           
        0 [https://www.perl.com/CPAN]
Type 'o conf' to view all configuration items

cpan[3]> o conf commit
commit: wrote '/usr/share/perl5/CPAN/Config.pm'

If it is giving you problems with SSL certificate verification, then you have to upgrade openssl and ca-certificates to their latest versions. Perl also maintains its own SSL certificates in Mozilla::CA, so you might need to do

SSL_CERT_FILE=/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem cpan Mozilla::CA

2016/06/22

Minimal Perl

Setting up Perl on CentOS 6. I'm putting this here so that I can find it easily.

yum install perl perl-CPAN
cpan 
# make everything automatic
o conf prerequisites_policy follow
o conf build_requires_install_policy yes
o conf commit
q
cpan local::lib
cpan Bundle::CPAN # keep an eye on this because Readline wants you to hit enter
cpan App::cpanminus

Now we can install Imager (say)

sudo yum install giflib-devel libjpeg-devel libpng-devel libtiff-devel freetype-devel t1lib-devel
cpanm Imager

2016/03/23

Someone broke the build

One can no longer cleanly do cpan Bundle::CPAN on a fresh install of CentOS 6. Some dependencies don't install properly. I had to do the following:

cpan CPAN::Meta::YAML Parse::CPAN::Meta 
cpan Test::YAML
cpan Compress::Raw::Zlib
cpan Spiffy Test::Base
cpan Module::Metadata  CPAN::Meta Perl::OSType version
cpan Compress::Raw::Bzip2
cpan Sub::Identify
cpan SUPER
cpan Test::MockModule
cpan Bundle::CPAN

At least I didn't have to go into /root/.cpan and install things by hand.

2014/04/30

Furthermore.

A corollary to my previous dictum that a method may make a decision OR do something is that you want to cut a larger task into smaller pieces. And each piece generally looks like the following:

sub doing_something {
    my( $self ) = @_;
    $self->prepare_something;
    if( $self->is_it_time_to_do_something ) {
        $self->before_something;
        $self->something;
        $self->after_something;
    }
    $self->unprepare_something;
}

In the above, something is just the name of the particular small piece of the larger task. The prepare_something/unprepare_something calls are there to avoid all possible side-effects in is_it_time_to_do_something. The before_something/after_something calls are there for logging, timing, transactions and other "admin" actions that aren't related to something itself.
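
A hypothetical concrete version (the method names are made up), where something is sending a report:

sub send_report {
    my( $self ) = @_;
    $self->connect_to_mailer;                   # prepare_something
    if( $self->report_is_due ) {                # the decision, side-effect free
        $self->log_start( 'send_report' );      # before_something: admin only
        $self->mail_report;                     # something: the actual work
        $self->log_end( 'send_report' );        # after_something: admin only
    }
    $self->disconnect_from_mailer;              # unprepare_something
}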

I feel like I've been infected by all the Java I did last spring.

2014/04/09

Parsing HTTP::Request->content with CGI.pm

Back in the mists of time, when the Web was young and unconquered, Lincoln Stein wrote a module for Perl that would allow people to easily deal with parameters handed to a CGI program and to generate HTML. This module eventually grew to include not one but several kitchen sinks. It includes its own autoload mechanism, its own file handle class and more. It Just Works when called from FastCGI, PerlEx, mod_perl and others.

While CGIs have all but disappeared, this module is still very useful for handling all the finicky edge cases of dealing with HTTP request content. But if you write your own web server environment, using CGI.pm to parse the HTTP content can be hard. You basically have to fake it out.

This is how you get the params from a GET request.

# $req is a HTTP::Request object
local $ENV{REQUEST_METHOD} = 'GET';
local $CGI::PERLEX = $CGI::PERLEX = "CGI-PerlEx/Fake";
local $ENV{CONTENT_TYPE} = $req->header( 'content-type' );
local $ENV{'QUERY_STRING'} = $req->uri->query;
my $cgi = CGI->new();

# Now use $cgi as you wish

And here we parse the params from a POST request. Note that POST requests can be big. Very big. If you aren't careful, they will fill up your memory. Always check Content-Length before reading in a POST request. In the following code, all the content has already been written to a file.

# $req is a HTTP::Request object
# $file is a filename that contains the unparsed request content
local $ENV{REQUEST_METHOD} = 'POST';
local $CGI::PERLEX = $CGI::PERLEX = "CGI-PerlEx/Fake";
local $ENV{CONTENT_TYPE} = $req->header( 'content-type' );
local $ENV{CONTENT_LENGTH} = $req->header( 'content-length' );
local $CGITempFile::TMPDIRECTORY = "/YOUR/TEMP/DIR/HERE";
# CGI->read_from_client reads from STDIN
my $keep = IO::File->new( "<&STDIN" ) or die "Unable to reopen STDIN: $!";
open STDIN, "<$file" or die "Reopening STDIN failed: $!";
my $cgi = CGI->new();
open STDIN, "<&".$keep->fileno or die "Unable to reopen $keep: $!";
undef $keep;
unlink $file;

# Now use $cgi as you wish

The fun is that CGI will only read POST data from STDIN, so we have to redirect that to our file, saving and restoring the previous STDIN.

The above code also works when you are uploading a file with multipart/form-data which is how I got caught up in all this kerfuffle.

It's really too bad that one can't just do

my $cgi = CGI->new( $req );
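
Short of patching CGI.pm, the closest one can get is a helper of one's own that bundles the two recipes above. The sub name and interface here are mine, not CGI.pm's; $file is only needed for POST and is the file the unparsed content was already spooled to.

# A hypothetical wrapper, not part of CGI.pm.
use CGI ();
use IO::File;

sub cgi_from_request
{
    my( $req, $file ) = @_;

    local $CGI::PERLEX = $CGI::PERLEX = "CGI-PerlEx/Fake";
    local $ENV{REQUEST_METHOD} = $req->method;
    local $ENV{CONTENT_TYPE}   = $req->header( 'content-type' );
    local $ENV{QUERY_STRING}   = $req->uri->query || '';

    return CGI->new() if $req->method eq 'GET';

    # POST: CGI->new reads the content from STDIN, so point STDIN at $file.
    # Set $CGITempFile::TMPDIRECTORY here too if you need a custom temp dir.
    local $ENV{CONTENT_LENGTH} = $req->header( 'content-length' );
    my $keep = IO::File->new( "<&STDIN" ) or die "Unable to dup STDIN: $!";
    open STDIN, "<", $file or die "Reopening STDIN failed: $!";
    my $cgi = CGI->new();
    open STDIN, "<&".$keep->fileno or die "Unable to restore STDIN: $!";
    undef $keep;
    unlink $file;
    return $cgi;
}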

2013/12/13

Fun with perl

Want to help me out? Run the following code:

#!/usr/bin/perl

use strict;
use warnings;

use Data::Dump qw( ddx );
BEGIN { ddx "DIE=", $SIG{__DIE__}; };
use Test::More tests => 1;
BEGIN { ddx "DIE=", $SIG{__DIE__}; };

diag "Perl: ", sprintf "%vd", $^V;
diag "Test::More: $Test::More::VERSION";
pass "one test";

What do you expect $SIG{__DIE__} to contain? Acceptable answers are undef, "" and "DEFAULT". And the answer varies according to what version of perl and what version of Test::More you are running.

Example output:

# DIE=""
1..1
# DIE=sub { "???" }
# Perl: 5.004_05
# Test::More: 0.47
ok 1 - one test
# die:7: ("DIE=", undef)
1..1
# die:9: ("DIE=", "")
# Perl: 5.8.8
# Test::More: 1.001002
ok 1 - one test
# die:7: ("DIE=", undef)
1..1
# die:9: ("DIE=", undef)
# Perl: 5.10.1
# Test::More: 1.001002
ok 1 - one test
1..1
# -:9: ("DIE=", undef)
# Perl: 5.16.2
# Test::More: 1.001002
ok 1 - one test

And yes, I know 5.004 dates from the previous century.

2013/02/19

PAR Wins!

New client, they set up a SuSE Enterprise Linux VM for me to install on. I approach this with trepidation. SLES is distinct from OpenSUSE, so I can't set up a VM at home for staging. Turns out my fears were misplaced; a PAR built on CentOS 5.8 runs on SLES 11.2 with only a minor problem. Colour me Impressed.

The minor problem was that some libraries aren't the same version. But a simple -l db -l gnutls -l gnutls-extra -l gnutls-openssl works around this. Of course, PAR::Packer only looks in LD_LIBRARY_PATH for libraries. MUST NOT BE SIDETRACKED INTO FIXING THIS.

Here's how to check shared libraries. First you build your PAR. Then extract it:

mkdir t
cd t
unzip ../your.par

Now scan that with the following code:

#!/usr/bin/perl

use strict;
use warnings;

use File::Finder;

my %already;
foreach my $lib ( File::Finder->name("*.so")->in('lib') ) {
    foreach my $line ( qx(ldd $lib) ) {
        chomp( $line );
        next unless $line =~ /(.+)\s+=>\s+(.+)\s\(0x/;
        my( $so, $file ) = ( $1, $2 );
        next if $file =~ m(^/lib);  # assume anything in /lib and /lib64 are OK
        next if $already{$file}++;
        print "rpm -qf $file # $lib\n";
    }
}

This will produce some commands that you then paste into a command prompt on the remote computer. If one of them fails, you will have to include that library in the PAR. Note that sometimes one distro will put a library in /usr/lib64 and another distro will put it in /lib64, which might cause false negatives. I wonder what the overhead of including EVERY library in the PAR would be. MUST NOT BE SIDETRACKED.

2013/01/25

And now for something silly

I've mentioned Gunilla before. Here is a small Perl program, based on code by Yaakov.

Usage:

set-message
    Sets the message on default printer to "Insert Coin"
set-message something
    Sets the message on default printer to "something"
set-message 10.0.0.12 "Hello World"
    Sets the message on printer at 10.0.0.12 to "Hello World"

I made my life harder (of course) by supporting \n. To do this, the program needs to know the width of your display ($WIDTH). Adding something like ~/bin/gunilla $(date +%A\\n%Y/%m/%d) to a crontab might be useful.

#!/usr/bin/perl

use strict;
use warnings;
use IO::Socket;

# Configuration:
my $WIDTH = 16;
my $peeraddr = 'gunilla';

my $rdymsg;
if( @ARGV == 0 ) {
    $rdymsg = 'Insert Coin';
}
elsif( @ARGV == 1 ) {
    $rdymsg = $ARGV[0];
}
elsif( @ARGV == 2 ) {
    ( $peeraddr, $rdymsg ) = @ARGV;
}
else {
    die "usage: $0 [] [\"\"]\n";
}
 
$rdymsg =~ s{^(.*)(\n|\\n)}{newline($1)}e;

my $socket = IO::Socket::INET->new(
        PeerAddr  => $peeraddr,
        PeerPort  => "9100",   
        Proto     => "tcp",    
        Type      => SOCK_STREAM
    ) or die "Could not create socket: $!";

my $data = <<EOJ;
\e%-12345X\@PJL JOB
\@PJL RDYMSG DISPLAY="$rdymsg"
\@PJL EOJ
\e%-12345X
EOJ
   
print $socket $data;

sub newline
{
    my( $text ) = @_;
    my $l = $WIDTH - length $text;
    $l = 1 if $l < 1;
    return $text. ( ' ' x $l );   
}

2012/06/09

An object method may make a decision or change something

Lucs pushed me to start a small series of posts on programming dictums. I'm going to start with the one I thought up myself and post a few more as I think of them/get them written.

The first deals with OO. I do 90% of my coding in OO. In fact, one of my profs back in CEGEP commented that my assembler programs had an OO style.

So, the first dictum is something I realised a few years ago: An object method may make a decision or change something.

I never quite got the hang of MVC because I couldn't understand what goes in the Controller and what goes in the Model. The answer, of course, is related to side-effects. Side-effects and decisions should not be mixed up in one method.

Ideally one should have one method per decision. If the method needs to load some data to make the decision, the loading code of course goes in a separate method. These methods may remember their decisions, but that's it as far as side effects go. I like to give these methods names like needs_barcode(), has_done_something(), is_ajax() but that's personal taste. These methods will tend to go in a Controller or be used by the Controller (if one is doing MVC, that is).

"Change something" means any side-effect: saving a file, setting a variable, munging some data. Ideally one wants to cut the side-effects up into small methods that handle related data. These methods will tend to go in a Model.

Done right, top-level methods look like the following made-up example.

if( $self->is_ajax ) {
    $self->output_json;
}
else {
    if( not $self->has_sent_header ) {
        $self->output_header;
    }
    $self->output_html;
    if( $self->is_streaming ) {
        $self->queue_next;
    }
}

The benefit of dividing the code as I propose shows up when it comes to implementing sub-classes. Fine-grained methods allow one to overload just the behaviour necessary, without (worst case) copying huge methods just to tweak a small part. It also makes it harder to stumble on hidden interdependencies and unintended consequences.

What's more, smaller methods are always a good thing. Small chunks are easier to stare at and prove to oneself that, yes, they are doing what you intended them to do. They also improve readability, create smaller units to build tests around and make refactoring code easier. In fact, I find that a good half of refactoring is taking large methods and cutting them up into smaller chunks.

2011/09/14

Pretty lights

In a fit of weakness, I bought 2 silly USB mail lights. These are basically a PIC that talks USB on one side and turns coloured LEDs on and off on the other. They come with software that is supposed to check if you have mail and change colour depending on what's going on. Of course, the supplied software is Windows-only and doesn't even work, it seems.

Not that I care about a light that tells me if I have e-mail pending. But controlling pretty lights from the computer? I'm all over that. Someone had already written a script in Python to control these things. I ported it to Perl because, well, because I like Perl.

To use it, you must have Device::USB installed. And perl, of course.

http://pied.nu/Perl/setcolour

Doing setcolour will show you all devices installed

Doing setcolour 1294:1320 1 will set all mail devices to colour 1, which is blue on one of my devices, green on the other.

Doing setcolour 2-7 2 will set one mail device to colour 2, which is red on both my devices.
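
For the curious, the device-listing part boils down to something like the following sketch against Device::USB's libusb-style API. This is not the actual script; that lives at the URL above.

#!/usr/bin/perl
use strict;
use warnings;
use Device::USB;

# Walk the USB bus and print vendor:product IDs, which is how you spot
# the 1294:1320 mail lights in the first place.
my $usb = Device::USB->new;
foreach my $dev ( $usb->list_devices ) {
    printf "%04x:%04x\n", $dev->idVendor, $dev->idProduct;
}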

2011/02/03

ZeroMQ

Chatting away on #POE last night, someone mentioned ZeroMQ. I'd heard of it before, but this time it stuck. I read the book. And I'm hooked. I keep thinking of new ways to use it. And old problems that could be better solved with ZeroMQ. POEx::ZeroMQ might be in my near future.

2011/01/21

Perl signals and fork aren't safe.

My very first Perl bug ever!

Long story short: there is a race condition between the safe signal handlers (which just update a flag) and pp_fork(), which invokes the system fork(). pp_fork() doesn't reset all the flags, so the signals could be dispatched in the child process, even if they were meant for the parent.

What boggles me is that this bug is easily over 10 years old. I find it hard to believe that no one noticed it before. Of course, it will take a while for a patch to make it into perl. And all the perls installed out there still have the bug.

The work around is to block signals, fork(), then unblock signals.
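
A sketch of that workaround with POSIX signal masks; the signal list is whatever your program actually handles:

use strict;
use warnings;
use POSIX qw( :signal_h );

# Block the signals we care about, fork, then unblock in both processes.
my $sigset = POSIX::SigSet->new( SIGINT, SIGTERM, SIGCHLD, SIGHUP );
sigprocmask( SIG_BLOCK, $sigset )
        or die "Unable to block signals: $!";

my $pid = fork();
defined $pid or die "fork() failed: $!";

sigprocmask( SIG_UNBLOCK, $sigset )
        or die "Unable to unblock signals: $!";

if( $pid == 0 ) {
    # child: signals meant for the parent can no longer be dispatched here
    # ... do the child's work ...
    exit 0;
}
# parent carries on, with its signal handlers intact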

I (or someone) will write IPC::SafeFork to do the above, but also to draw attention to the issue.

2011/01/13

Perl curmudgeon

Back last century, I read p5p summaries, I participated in YAPC, I was active on #perl. I loved it when randal or tchrist would show us other, better, cleverer ways of doing things. I wrote some ambitious code and had some ambitious plans for all-dancing, all-singing frameworks. At one point I even looked into how to automagically build apps with Data::Flow.

Write the minimum of code and it would All Just Work.

Of course, I got over that. No framework can be all-dancing. There's a joke about a Clippy-a-like for an IDE: "I see you are writing a framework. Would you like me to a- erase this and Google for something that does what you want?"

But that's not all: I have a colocated server that runs some applications for clients. These apps were written for perl 5.004_05. I have neither the time nor the desire to get them running with a modern perl or mod_perl. But for years, all the code I wrote had to work with 5.004. And I insisted that POE would too.

And it now seems everyone is using Catalyst and Moose. And the shiny new things like "say" and "given/when." And perl 5.12. I JUST STARTED USING 5.8!

Of course "just" means a few years ago.

I think I've become a Perl curmudgeon, a more recent version of my associate Jean-Phillipp. When I first started working with him, he was using a modem to get to his customers, telnet and rlogin to communicate between servers, SCO and BASIC for everything. He's now accepted ssh (though I spotted an instance of rlogin last summer), gets me to install and sysadmin Linux. But still uses BASIC.

I guess I'm showing my years. "say" just looks ugly as a language construct; I have no problem typing "\n". Moose rubs me all wrong, despite me being very much aware of how much it is needed.

And while I've done a good deal of JavaScript recently, I haven't done any real AJAX, ie bidirectional. What they call comet.

In my defence, I've only really started 2 large projects in the last 6 years. These projects have taken most of my time. And with large projects, you should stick with The Way It Is Done or you get schizophrenic code. Not that my code isn't schizophrenic, but I've decided that just because you discovered some cool new way of doing something, you shouldn't use it the same day in an existing project. Magic is very hard to maintain.

What's more my toes are cold.

Now that I've identified the above, maybe I can change some things and not be as hide-bound.

2010/12/16

Why I hate the CPAN

(OK, actually I love the CPAN, but now that I have your attention...)

Every time I write and upload a new distro, I dream of how it's going to make some toiling programmer's life wonderful, the way POE, LWP and other bits of the CPAN give my life meaning. IKC and then POE::Component::Generic were my bids at taking over the world. Or at least a certain part of it.

However uploading a distro to the CPAN is something of a commitment. My code is going to live for several years/decades, if not forever. I also do not want to be like the CPAN authors that annoy me, with inadequate test coverage, poor doco and annoying holes in their feature set.

These three points intertwine.

I'll be revising the doco and come to a point where I write "doing X is not advised/not yet implemented." Or I'll write "doing X and Y together has not been tested."

But then I kick myself with "Why isn't it implemented? Why don't I write a unit test for that edge case?" The implementation would take 20 minutes, the test cases for that new feature 30 minutes. A new paragraph of doco and now I've lost my place in my original task of revising the documentation.

What's more, I won't do it just once, but 4 times. 8 times. Before you know it, I've spent 2 days improving a module that originally took 2 hours to write.

These aren't just random numbers. The initial draft of POEx::HTTP::Server took 2 hours Monday morning, of which 30 minutes were wasted because I didn't adequately remember how POE::Session::Multiplex works. I spent the rest of the day improving on that start, then 2 more days adding features, tweaking POE::Session::Multiplex and POE::Component::Daemon because of misfeatures I'd discovered.

But backing up further, I wrote POEx::HTTP::Server because last Friday I thought to myself "Now that Sphinx integration is nearly complete, what will I work on next?" And the answer was of course SPEED SPEED SPEED! So I went to look at lighttpd as a front end, with a FastCGI or SCGI call to the app server for dynamic content.

So while my real goal was to get version 2.5 of the Document Warehouse, newly known as Quaero, ready as soon as possible, I ended up spending a week implementing some technology from 1990. Or 1997, depending on how you look at it. And it's not over; today I realised there was a race condition in my code. A race that IKC probably shares. So a day or three tracking that down.

Still, Firebug shows 19ms response times from POEx::HTTP::Server. Of which only 5ms are "waiting", aka POEx::HTTP::Server generating the content, the other 12 being DNS and network overhead.

Which is damn sweet.

2010/11/05

One for the Google bot

If you get the following really strange message:
Can't coerce array into hash at /usr/lib/perl5/5.8.8/ExtUtils/Install.pm line 94.

Just do
touch Makefile.PL; make ; make install

Yes, you need 2 makes: the first one will cause the Makefile to be rebuilt with the same params as used the first time; the second one will do the install you wanted in the first place.

Now the longer question is "WHAT CHANGED?" And I have no answer to that.