ActivePerl Migration Win 2003 Server to Win 2008 Server

Klaus

Hello,

I hope this post is not OT in c.l.p.m.

I have got a couple of medium-sized (1000+ lines of code) Perl
programs (64-bit) running on Windows Server 2003; the perl version
is:

C:\>perl -v
This is perl, v5.10.1 built for MSWin32-x64-multi-thread
(with 2 registered patches, see perl -V for more detail)
Binary build 1007 [291969] provided by ActiveState http://www.ActiveState.com
Built Jan 27 2010 14:12:21

I am using ADODB to connect to MSAccess databases, I read
ActiveDirectory entries, and I use StorageCentral to set up storage
quotas for 1000+ directories.
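
For context, the ADODB part is nothing special; stripped down it looks
roughly like this (the provider string, .mdb path and table name are
placeholders, not my real code):

use strict;
use warnings;
use Win32::OLE;

# Open the Access database via ADO. A 64-bit perl needs a 64-bit
# OLEDB provider (e.g. ACE, "Microsoft.ACE.OLEDB.12.0"); Jet 4.0
# only exists as 32-bit.
my $conn = Win32::OLE->new('ADODB.Connection')
    or die "ADODB.Connection: ", Win32::OLE->LastError;
$conn->Open('Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\data\quota.mdb');
die "Open failed: ", Win32::OLE->LastError if Win32::OLE->LastError;

my $rs = $conn->Execute('SELECT COUNT(*) AS n FROM quota_table');
print "rows: ", $rs->Fields->Item('n')->Value, "\n";
$conn->Close;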

Now I have to migrate Windows Server from 2003 to 2008.

Before I embark on this adventure, I would like to ask for the wisdom
of the perl user community:

- Does anybody have experience with migrating perl programs from
Windows Server 2003 to 2008 ?

- Are there any pitfalls I need to be aware of ?

- Do you recommend upgrading from perl 5.10 (x64) to the newer perl
5.12 (x64) at the same time ?
 
Dr.Ruud

Ben said:
For instance, I just upgraded to 5.12 and
found that Data::Alias no longer works and isn't likely to be fixed.

I still have good hopes that it will be fixed. It is a real favorite
module of mine. When used well, it saves you quite some CPU-cycles and
memory.
 
Klaus

Ben said:
Certainly not :).

Thanks :)

Ben said:
While the upgrade *should* be painless, IME it isn't always and you
should make sure to test everything properly with the new perl before
putting it into production. For instance, I just upgraded to 5.12 and
found that Data::Alias no longer works and isn't likely to be fixed.

I am not using Data::Alias, but I have a couple of in-house modules
that certainly push the limits of what is possible under Perl 5.10
(x64) and Windows Server 2003.

Your advice is very well taken, and I will test properly with Perl
5.12 (x64) and Windows Server 2008.
 
Dr.Ruud

Ben said:
I don't think so. It's only ever a syntactic convenience over using
references, it's just a really nice one.

That's a lie! :)

Data::Alias does many things at compile time (so no ENTER/LEAVE
overhead that other aliasing solutions, like Lexical::Alias, need).

And you can make all (or many) values of a (huge) hash be a single SV
(undef, for example).

It is/was a great module. It should be resurrected.

Though the new ':=' operator is promising too.

Data::Alias gives things like

alias my %bar = %{ $foo->{ bar } };

which will also set $foo->{ bar } to {} if it was undef.
Etc.
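
A quick sketch of that (untested as typed, made-up data):

perl -Mstrict -MData::Alias -MData::Dumper -wle'
    my $foo = {};                         # no "bar" key yet
    alias my %bar = %{ $foo->{ bar } };   # autovivifies $foo->{ bar } to {}
    $bar{ x } = 1;                        # writes straight through the alias
    print Dumper( $foo );                 # shows bar => { x => 1 }
'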
 
Dr.Ruud

Dr.Ruud said:
That's a lie! :)

Data::Alias does many things at compile time (so no ENTER/LEAVE
overhead that other aliasing solutions, like Lexical::Alias, need).

And you can make all (or many) values of a (huge) hash be a single SV
(undef, for example).

It is/was a great module. It should be resurrected.

Though the new ':=' operator is promising too.

Data::Alias gives things like

alias my %bar = %{ $foo->{ bar } };

which will also set $foo->{ bar } to {} if it was undef.
Etc.

An example of the Etc.


perl -Mstrict -MData::Alias -wle '
    #alias
    my $s = "foo,bar,baz";
    my @s;
    for my $i ( 0 .. 2 ) {
        alias $s[ $i ] = substr( $s, $i * 4, 3 );
    }
    eval { $s[ 1 ] = "xyz"; 1 } or warn "w: ", $@;
    print $s;
    print "@s";
'
foo,xyz,baz
foo xyz baz


Also try the above with the '#' removed from line 2.
 
jl_post

Dr.Ruud said:
It is/was a great module. It should be resurrected.


Data::Alias certainly is a great module. I've loved it ever since
Ben Morrow introduced me to it. When used correctly, it makes code
much more readable (and comprehensible to those who struggle with
references).

Programmers new to Perl often discover that @arrays and %hashes
aren't passed in and out of functions in quite the same way that
scalars are. As a result, they're more likely to declare many of
their @arrays and %hashes as global variables (or to avoid
declarations and "use strict" altogether) than to deal with passing in
references and correctly dereferencing them with $ref->{...} and $ref-
[...] syntax (which is tricky to get right when not familiar with
them).
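
For example, a contrived sketch like this reads a lot more naturally to
a newcomer than passing \%config around and dereferencing it everywhere:

perl -Mstrict -MData::Alias -wle'
    sub count_keys {
        alias my %h = %{ shift() };   # alias the caller hash, no copying
        return scalar keys %h;
    }
    my %config = ( a => 1, b => 2, c => 3 );
    print count_keys( \%config );     # prints 3
'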

For this reason I think that Data::Alias should be made a core
module of Perl. Perl programmers (and Perl programs) as a whole will
be better off once they all have access to the Data::Alias module.

But that's just my opinion. ;)

-- Jean-Luc
 
Dr.Ruud

jl_post said:
It is/was a great module. It should be resurrected.


Data::Alias certainly is a great module. I've loved it ever since
Ben Morrow introduced me to it. When used correctly, it makes code
much more readable (and comprehensible to those who struggle with
references).

Programmers new to Perl often discover that @arrays and %hashes
aren't passed in and out of functions in quite the same way that
scalars are. As a result, they're more likely to declare many of
their @arrays and %hashes as global variables (or to avoid
declarations and "use strict" altogether) than to deal with passing in
references and correctly dereferencing them with $ref->{...} and $ref-
[...] syntax (which is tricky to get right when not familiar with
them).

For this reason I think that Data::Alias should be made a core
module of Perl. Perl programmers (and Perl programs) as a whole will
be better off once they all have access to the Data::Alias module.

But that's just my opinion. ;)

I agree to the "core module" part, but not to the syntax parts.
I would never recommend Data::Alias for such a reason. To me it
is mainly a module that brings better performance.


alias my %foo = %{ $table->{ foo } };

allows one to write

$foo{ key }

instead of

$table->{ foo }{ key }

and that performs better.


perl -Mstrict -MData::Dumper -MData::Alias -MBenchmark=cmpthese -wle'
    my $href;
    alias my %foo = %{ $href->{ foo } };

    $foo{ test }= 0;
    #print "\n", Dumper( $href );

    cmpthese( -1, {
        plain => sub { $href->{ foo }{ test }= 1 },
        alias => sub { $foo{ test }= 2 },
    });

    #print "\n", Dumper( $href );
'

           Rate   plain   alias
plain 2194985/s      --    -54%
alias 4812084/s    119%      --
 
Uri Guttman

R> I agree with the "core module" part, but not with the syntax parts.
R> I would never recommend Data::Alias for such a reason. To me it
R> is mainly a module that brings better performance.

it is still just syntax sugar.


R> $foo{ key }

R> instead of

R> $table->{ foo }{ key }

i never write deep hashes like that more than one time. i take refs to
the deeper level and use that. so the speed savings can be done without
this module. it may be nice and cute but it isn't any major win in speed
or syntax. you can't use it to hide from refs all the time. and it makes
the code maintainer learn all its tricks before he can work on the code.
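
i mean something like this (a trivial made-up sketch):

perl -Mstrict -wle'
    my $table = { foo => { key => "old" } };
    my $foo = $table->{ foo };        # take the ref to the deeper level once
    $foo->{ key } = "new";            # then work through it
    print $table->{ foo }{ key };     # prints "new", same hash
'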

uri
 
Dr.Ruud

Uri said:
it is still just syntax sugar.

Or you are just wrong.

Uri said:
i never write deep hashes like that more than one time. i take refs to
the deeper level and use that. so the speed savings can be done without
this module.

perl -Mstrict -MData::Alias -MBenchmark=cmpthese -wle'
    my $h;
    alias my %alias = %{ $h->{ foo } };
    $alias{ test }= 0;
    my $plain= $h->{ foo };

    cmpthese( -1, {
        plain => sub { $plain->{ test }= 1 },
        alias => sub { $alias{ test }= 2 },
    });
'
           Rate   plain   alias
plain 3276800/s      --    -28%
alias 4549606/s     39%      --


(just because dereferencing has runtime costs)

Uri said:
it may be nice and cute but it isn't any major win in speed
or syntax. you can't use it to hide from refs all the time. and it makes
the code maintainer learn all its tricks before he can work on the code.

What a waste of cycles, then, that you never found out how much more
there is to it.
 
sln

Klaus said:
- Do you recommend upgrading from perl 5.10 (x64) to the newer perl
5.12 (x64) at the same time ?

I would say absolutely do not do this at the same time.
It complicates the issue.
Certify the existing production code on 2008 with the
version of 5.10 that is compatible with 2008
(if there is such a thing; for Perl I don't think there is
that granularity at the OS level, only 32/64 bit, and I
think the builds use the 2003 server SDK headers/libs).


-sln
 
Uri Guttman

R> perl -Mstrict -MData::Alias -MBenchmark=cmpthese -wle'
R> my $h;
R> alias my %alias = %{ $h->{ foo } };
R> $alias{ test }= 0;
R> my $plain= $h->{ foo };

R> cmpthese( -1, {
R> plain => sub { $plain->{ test }= 1 },
R> alias => sub { $alias{ test }= 2 },
R> });
R> '
R> Rate plain alias
R> plain 3276800/s -- -28%
R> alias 4549606/s 39% --


how often would you need to do a single level hash lookup to save here?
if i needed it so often i would copy to a scalar or make a ref to
it. again, not often so it isn't a win unless you have very odd or bad
old code.

R> (just because dereferencing has runtime costs)

and this is doing a dereference under the hood. hash lookups are a
different matter. compare alias to $ref = \$plain->{test} and
${$ref}. those should be about the same.
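
something like this (untested, i will leave the numbers to you):

perl -Mstrict -MData::Alias -MBenchmark=cmpthese -wle'
    my $h;
    alias my %alias = %{ $h->{ foo } };
    my $plain = $h->{ foo };
    my $sref  = \$plain->{ test };    # a ref straight to the one slot

    cmpthese( -1, {
        plain => sub { $plain->{ test } = 1 },
        alias => sub { $alias{ test } = 2 },
        sref  => sub { ${ $sref } = 3 },
    });
'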

uri
 
Dr.Ruud

Uri said:
and this is doing a dereference under the hood. hash lookups are a
different matter. compare alias to $ref = \$plain->{test} and
${$ref}. those should be about the same.

You really still don't get it. But I never get tired of promoting
Data::Alias, so no problem here.

Data::Alias does many things at compile time, by changing the op-tree.
No "dereference under the hood"!


Another nice (memory saving) functionality of Data::Alias:

perl -Mstrict -MData::Alias -MDevel::Size=size,total_size -MBenchmark=cmpthese -wle'
    my ( %plain, %slice, %alias );
    my @keys= "aaaa" .. "zzzz";
    my $VALUE= "foo";

    print "keys: ", scalar @keys;

    cmpthese( -1, {
        slice => sub { @slice{ @keys }= ($VALUE) x @keys },
        plain => sub { $plain{ $_ }= $VALUE for @keys },
        alias => sub { alias $alias{ $_ }= $VALUE for @keys },
    });

    print "";
    for ( [ plain => \%plain ], [ slice => \%slice ], [ alias => \%alias ] ) {
        print sprintf qq{%s k=%s, v=%s}, $_->[0], size( $_->[1] ),
            total_size( $_->[1] ) - size( $_->[1] );
    }
'
keys: 456976
         Rate  slice  alias  plain
slice  3.33/s     --   -13%   -19%
alias  3.81/s    14%     --    -8%
plain  4.13/s    24%     8%     --

plain k=13978588, v=12795328
slice k=13978588, v=12795328
alias k=13978588, v=28

Try also with slice initialised to (), then make the $VALUE undef.
You'll find that slice is twice as fast, but still uses (for no good
reason) quite some memory for all the undef-SVs.

This is a nice way to implement sets, where all you need is exists().
Or any other situation where the values come in packs.
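
For example, a quick set sketch (made-up keys):

perl -Mstrict -MData::Alias -wle'
    my %seen;
    my $NOTHING;                      # one shared undef SV for all values
    alias $seen{ $_ } = $NOTHING for qw( foo bar baz );
    print exists $seen{ bar } ? "bar is in the set" : "bar is not";
'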

Cheers!
 
Uri Guttman

R> You really still don't get it. But I never get tired promoting
R> Data::Alias, so no problem here.

R> Data::Alias does many things at compile time, by changing the op-tree.
R> No "dereference under the hood"!

ok, i didn't know about its guts. but the fact that it messes with
perl's guts scares me too. as someone pointed out it broke under a newer
perl. that is not the kind of module i want to depend upon.

uri
 
Dr.Ruud

Uri said:
ok, i didn't know about its guts. but the fact that it messes with
perl's guts scares me too. as someone pointed out it broke under a newer
perl. that is not the kind of module i want to depend upon.

Well, don't get too afraid of op-tree manipulations yet, because there
is a whole new set of tools coming to do just that, though in a better,
more controllable and sustainable way, and even in Perl (compare
Devel::Declare).

That is also more or less the way that Data::Alias should get fixed.
It has needed fixing after several Perl releases, and because it isn't
in core, that work tends to lag behind.
 
Dr.Ruud

Dr.Ruud said:
Another nice (memory saving) functionality of Data::Alias:

perl -Mstrict -MData::Alias -MDevel::Size=size,total_size -MBenchmark=cmpthese -wle'
    my ( %plain, %slice, %alias );
    my @keys= "aaaa" .. "zzzz";
    my $VALUE= "foo";

    print "keys: ", scalar @keys;

    cmpthese( -1, {
        slice => sub { @slice{ @keys }= ($VALUE) x @keys },
        plain => sub { $plain{ $_ }= $VALUE for @keys },
        alias => sub { alias $alias{ $_ }= $VALUE for @keys },
    });

    print "";
    for ( [ plain => \%plain ], [ slice => \%slice ], [ alias => \%alias ] ) {
        print sprintf qq{%s k=%s, v=%s}, $_->[0], size( $_->[1] ),
            total_size( $_->[1] ) - size( $_->[1] );
    }
'
keys: 456976
         Rate  slice  alias  plain
slice  3.33/s     --   -13%   -19%
alias  3.81/s    14%     --    -8%
plain  4.13/s    24%     8%     --

plain k=13978588, v=12795328
slice k=13978588, v=12795328
alias k=13978588, v=28

Try also with slice initialised to (), then make the $VALUE undef.
You'll find that slice is twice as fast, but still uses (for no good
reason) quite some memory for all the undef-SVs.

This is a nice way to implement sets, where all you need is exists().
Or any other situation where the values come in packs.

Cheers!

You can do similar things with arrays:

perl -Mstrict -MData::Alias -MDevel::Size=size,total_size -wle'

    my ( $MAX, $undef, @plain, @sparse, @alias )= ( 1234567 );

    $plain[ $_ ]= $undef for 0 .. $MAX;
    $sparse[ $MAX ]= $undef;
    alias $alias[ $_ ]= $undef for 0 .. $MAX;

    my %existing;

    print "";
    for ( [ plain => \@plain ],
          [ spars => \@sparse ],
          [ alias => \@alias ],
    ) {
        my ( $k, $v ) = @$_;
        $existing{ $k }= scalar grep exists $v->[ $_ ], 0 .. $#$v;
        print sprintf qq{%s meta=%s B, data=%s B, k#=%s},
            $k,
            size( $v ),
            total_size( $v ) - size( $v ),
            $existing{ $k };
    }
'

plain meta=8388740 B, data=14814816 B, k#=1234568
spars meta=4938420 B, data=12 B, k#=1
alias meta=8388740 B, data=12 B, k#=1234568


So use sparse arrays if not many slots will be occupied,
and use Data::Alias if you have a lot of equal values in the array.

But also look into the Vector modules, and into PDL,
and find out what fits best in your context.
 
l v

Klaus said:
Hello,

I hope this post is not OT in c.l.p.m.

I have got a couple of medium-sized (1000+ lines of code) Perl
programs (64-bit) running on Windows Server 2003; the perl version
is:

C:\>perl -v
This is perl, v5.10.1 built for MSWin32-x64-multi-thread
(with 2 registered patches, see perl -V for more detail)
Binary build 1007 [291969] provided by ActiveState http://www.ActiveState.com
Built Jan 27 2010 14:12:21

I am using ADODB to connect to MSAccess databases, I read
ActiveDirectory entries, and I use StorageCentral to set up storage
quotas for 1000+ directories.

Now I have to migrate Windows Server from 2003 to 2008.

Before I embark on this adventure, I would like to ask for the wisdom
of the perl user community:

- Does anybody have experience with migrating perl programs from
Windows Server 2003 to 2008 ?

- Are there any pitfalls I need to be aware of ?

- Do you recommend upgrading from perl 5.10 (x64) to the newer perl
5.12 (x64) at the same time ?

I would keep the Perl version the same during the server migration /
upgrade so as not to complicate the migration.

I'm sure I'll catch some flak over this. In the past, on older Windows
OSes, I would install Perl in the same drive and directory location on
the 2008 box as it was installed on the 2003 box, then copy the 2003
Perl installation to the 2008 box. I never encountered any problems with
this procedure.
 
Mart van de Wege

l v said:
I would keep the Perl version the same during the server migration /
upgrade so as not to complicate the migration.

I'm sure I'll catch some flak over this.

Why?

It's common sense in doing systems administration: never execute more
than one change at a time. If something goes wrong, you don't have to
first debug which change caused the problem, you can just roll back the
last change.

Mart
 
Martijn Lievaart

Mart van de Wege said:
Why?

It's common sense in doing systems administration: never execute more
than one change at a time. If something goes wrong, you don't have to
first debug which change caused the problem, you can just roll back the
last change.

Not quite, it's always a risk/benefit trade off. If rollback is easy and
the risks are low, it may very well be beneficial to bother your users
only once and do all changes at once.

And as you tested all changes beforehand, the risks should be low.

(I currently work at a bank where every environment has a complete DTAP
street and this holds, and at another gig where you are lucky if a test
environment can be freed up, all IT is outsourced, and changes normally
go sour. Guess which one follows which change model...)

M4
 
Mart van de Wege

Martijn Lievaart said:
Not quite, it's always a risk/benefit trade off. If rollback is easy and
the risks are low, it may very well be beneficial to bother your users
only once and do all changes at once.

Well, nothing precludes you from rolling out your changes in a single
change window, of course.

But you'd still be smart to execute each part in sequence and only
continue to the next change after confirming functionality.

So in the OP's case, I'd do the OS upgrade, test if the application still
works, then the Perl upgrade, and test again (or Perl first, depending on
how easy a rollback is).

Martijn Lievaart said:
And as you tested all changes beforehand, the risks should be low.

Heh. If only. Experience tells me that there is always a chance that
something that worked in testing falls flat in production.

Mart
 
Martijn Lievaart

Mart van de Wege said:
Heh. If only. Experience tells me that there is always a chance that
something that worked in testing falls flat in production.

In some environments where I work, it is more than "a chance". :)

M4
 
