A corollary to my previous dictum that a method may make a decision OR do something is that you want to cut a larger task into smaller pieces. Each piece generally looks like the following:

sub doing_something {
    my( $self ) = @_;
    if( $self->is_it_time_to_do_something ) {
        $self->prepare_something;
        $self->before_something;
        $self->something;
        $self->after_something;
        $self->unprepare_something;
    }
}

In the above, something is just the name of the particular small piece of the larger task. The prepare_something/unprepare_something calls are there to keep all possible side effects out of is_it_time_to_do_something. I would use before_something/after_something for logging, timing, transactions, and other "admin" actions that aren't related to something itself.
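As a concrete sketch of the convention above (the class and all its method names are made up for illustration), here is a toy cache flusher where the predicate stays side-effect free and the admin hooks just log:

```perl
use strict;
use warnings;

package Cache::Flusher {
    sub new { bless { dirty => 1, log => [] }, shift }

    # The predicate makes the decision and nothing else -- no side effects.
    sub is_it_time_to_flush { $_[0]{dirty} }

    # "Admin" hooks, unrelated to the work itself: logging, timing, etc.
    sub before_flush { push @{ $_[0]{log} }, 'flush started' }
    sub after_flush  { push @{ $_[0]{log} }, 'flush finished' }

    # Set-up/tear-down that keeps side effects out of the predicate.
    sub prepare_flush   { $_[0]{handle} = 'open' }
    sub unprepare_flush { delete $_[0]{handle} }

    # The actual work.
    sub flush { $_[0]{dirty} = 0 }

    sub doing_flush {
        my( $self ) = @_;
        if( $self->is_it_time_to_flush ) {
            $self->prepare_flush;
            $self->before_flush;
            $self->flush;
            $self->after_flush;
            $self->unprepare_flush;
        }
        return;
    }
}

my $flusher = Cache::Flusher->new;
$flusher->doing_flush;
print "@{ $flusher->{log} }\n";   # flush started flush finished
```

Calling doing_flush a second time does nothing, because the predicate now says no.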

I feel like I've been infected by all the Java I did last spring.


Parsing HTTP::Request->content with CGI.pm

Back in the mists of time, when the Web was young and unconquered, Lincoln Stein wrote a module for Perl that would allow people to easily deal with parameters handed to a CGI program and to generate HTML. This module eventually grew to include not one but several kitchen sinks: it has its own autoload mechanism, its own file handle class, and more. It Just Works when called under FastCGI, PerlEx, mod_perl, and others.

While CGIs have all but disappeared, this module is still very useful for handling all the finicky edge cases of dealing with HTTP request content. But if you write your own web server environment, using CGI.pm to parse the HTTP content can be hard. You basically have to fake it out.

This is how you get the params from a GET request:

# $req is a HTTP::Request object
# The double assignment quiets the "used only once" warning.
local $CGI::PERLEX = $CGI::PERLEX = "CGI-PerlEx/Fake";
local $ENV{CONTENT_TYPE} = $req->header( 'content-type' );
local $ENV{QUERY_STRING} = $req->uri->query;
my $cgi = CGI->new();

# Now use $cgi as you wish
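Put together as a self-contained sketch (the URL and parameter names are mine; I also set REQUEST_METHOD explicitly, which keeps CGI.pm on its normal GET path even outside a web server):

```perl
use strict;
use warnings;
use CGI;
use HTTP::Request;

# Hypothetical request, purely for illustration.
my $req = HTTP::Request->new( GET => 'http://example.com/search?q=perl&limit=10' );

local $CGI::PERLEX = $CGI::PERLEX = "CGI-PerlEx/Fake";
local $ENV{REQUEST_METHOD} = 'GET';    # my addition, not in the recipe above
local $ENV{CONTENT_TYPE}   = $req->header( 'content-type' ) // '';
local $ENV{QUERY_STRING}   = $req->uri->query // '';
my $cgi = CGI->new();

print scalar $cgi->param( 'q' ), "\n";        # perl
print scalar $cgi->param( 'limit' ), "\n";    # 10
```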

And here we parse the params from a POST request. Note that POST requests can be big. Very big. If you aren't careful, they will fill up your memory. Always check Content-Length before reading in a POST request. In the following code, the request content has already been written to a file.

# $req is a HTTP::Request object
# $file is a filename that contains the unparsed request content
# The double assignment quiets the "used only once" warning.
local $CGI::PERLEX = $CGI::PERLEX = "CGI-PerlEx/Fake";
local $ENV{CONTENT_TYPE} = $req->header( 'content-type' );
local $ENV{CONTENT_LENGTH} = $req->header( 'content-length' );
# CGI->read_from_client reads from STDIN, so save the real STDIN...
my $keep = IO::File->new( "<&STDIN" ) or die "Unable to save STDIN: $!";
# ...point STDIN at the file holding the request content...
open STDIN, "<$file" or die "Reopening STDIN failed: $!";
my $cgi = CGI->new();
# ...and restore the original STDIN afterwards.
open STDIN, "<&".$keep->fileno or die "Unable to restore STDIN: $!";
undef $keep;
unlink $file;

# Now use $cgi as you wish
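About that Content-Length warning earlier: here is a minimal sketch of such a guard (the cap and the helper name are my own inventions), which refuses oversized bodies up front and spools the rest to a temp file in bounded chunks instead of slurping it into memory:

```perl
use strict;
use warnings;
use File::Temp qw( tempfile );

my $MAX_CONTENT = 10 * 1024 * 1024;    # arbitrary 10 MB cap

# Spool a request body from $in to a temp file, enforcing the cap.
# Returns the temp file name, or dies if the declared length is too big.
sub spool_body {
    my( $in, $content_length ) = @_;
    die "Content-Length $content_length exceeds cap\n"
        if $content_length > $MAX_CONTENT;

    my( $out, $file ) = tempfile( UNLINK => 0 );
    my $remaining = $content_length;
    while( $remaining > 0 ) {
        my $read = read( $in, my $buf, $remaining < 8192 ? $remaining : 8192 );
        die "Short read: expected $remaining more bytes\n" unless $read;
        print {$out} $buf;
        $remaining -= $read;
    }
    close $out or die "Closing $file failed: $!";
    return $file;
}
```

The returned file name is exactly what the $file in the POST recipe above expects.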

The fun part is that CGI.pm will only read POST data from STDIN, so we have to point STDIN at our file, saving and restoring the original STDIN around the call.
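Here is the same save-and-restore dance isolated from CGI.pm, so it's easier to see on its own (the file contents are made up):

```perl
use strict;
use warnings;
use IO::File;
use File::Temp qw( tempfile );

# A temp file standing in for the spooled request content.
my( $tmp, $file ) = tempfile( UNLINK => 1 );
print {$tmp} "fake request body";
close $tmp;

# Save the current STDIN by duplicating its file descriptor...
my $keep = IO::File->new( "<&STDIN" ) or die "Unable to dup STDIN: $!";

# ...point STDIN at the file...
open STDIN, '<', $file or die "Reopening STDIN failed: $!";

# ...read from "STDIN" (this is where CGI->new would do its reading)...
my $body = do { local $/; <STDIN> };

# ...and restore the original STDIN from the saved descriptor.
open STDIN, "<&" . $keep->fileno or die "Restoring STDIN failed: $!";
undef $keep;

print $body, "\n";   # fake request body
```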

The above code also works when you are uploading a file with multipart/form-data, which is how I got caught up in all this kerfuffle.

It's really too bad that one can't just do

my $cgi = CGI->new( $req );
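...but one could wrap the two recipes into something close to it. This cgi_from_request helper is my own hypothetical glue, not a real CGI.pm API, and I set REQUEST_METHOD explicitly to keep CGI.pm on its normal parsing paths:

```perl
use strict;
use warnings;
use CGI;
use HTTP::Request;
use IO::File;
use File::Temp qw( tempfile );

# Hypothetical helper: build a CGI object from an HTTP::Request.
sub cgi_from_request {
    my( $req ) = @_;

    local $CGI::PERLEX = $CGI::PERLEX = "CGI-PerlEx/Fake";
    local $ENV{REQUEST_METHOD} = $req->method;
    local $ENV{CONTENT_TYPE}   = $req->header( 'content-type' ) // '';

    if( $req->method eq 'POST' ) {
        local $ENV{CONTENT_LENGTH} = $req->header( 'content-length' )
                                     // length $req->content;

        # Spool the content to a temp file and point STDIN at it.
        my( $fh, $file ) = tempfile( UNLINK => 1 );
        print {$fh} $req->content;
        close $fh or die "Closing $file failed: $!";

        my $keep = IO::File->new( "<&STDIN" ) or die "Unable to dup STDIN: $!";
        open STDIN, '<', $file or die "Reopening STDIN failed: $!";
        my $cgi = CGI->new();
        open STDIN, "<&" . $keep->fileno or die "Restoring STDIN failed: $!";
        return $cgi;
    }

    local $ENV{QUERY_STRING} = $req->uri->query // '';
    return CGI->new();
}
```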