From: Peng Yu
I have the following code. There is a chance that the two print
lines (marked with '#') may not print consecutively. Is there a way
to lock so that I can guarantee they write consecutively?

$ cat main.pl
#!/usr/bin/env perl

use strict;
use warnings;

use Parallel::ForkManager;

open OUT, ">main.txt";

my $pm = new Parallel::ForkManager(5);

foreach my $n (1..10) {
    my $pid = $pm->start and next;
    print $n, "\n";
    sleep rand(1);
    print OUT $n;#
    print OUT $n;#
    $pm->finish;
}

$pm->wait_all_children;
From: Ben Morrow

Quoth Peng Yu <pengyu.ut(a)gmail.com>:
> I have the following code. There is a chance that the two print
> lines (marked with '#') may not print consecutively. Is there a way
> to lock so that I can guarantee they write consecutively?
>
> $ cat main.pl
> #!/usr/bin/env perl
>
> use strict;
> use warnings;
>
> use Parallel::ForkManager;
>
> open OUT, ">main.txt";

*Always* check the return value of open. Use 3-arg open, and keep
your filehandles in lexical variables:

open my $OUT, ">", "main.txt"
    or die "can't write to 'main.txt': $!";

> my $pm = new Parallel::ForkManager(5);
>
> foreach my $n (1..10) {
>     my $pid = $pm->start and next;
>     print $n, "\n";
>     sleep rand(1);
>     print OUT $n;#
>     print OUT $n;#

Since you are writing, you also need to close the filehandle and check
for errors. Since perl buffers writes, you need to do this *inside* the
lock. (If you want to keep the file open, you can call IO::Handle::flush
inside the lock instead.)

>     $pm->finish;
> }
>
> $pm->wait_all_children;

There are lots of ways. The simplest is to flock a file; however, you
can't just use the main.txt file you've already got open, due to the
way fork(2) and flock(2) interact: a forked child inherits the
parent's open file description, and flock locks belong to that
description, so the children would all share one lock instead of
excluding one another (at least on systems with proper BSD flock
semantics).

use Fcntl qw/:flock/;

...
sleep rand(1);
open my $LOCK, ">", "main.txt.lock"
    or die "can't open main.txt.lock: $!";
flock $LOCK, LOCK_EX or die "can't lock main.txt.lock: $!";
print $OUT "...";
# closing flushes the buffered output while we still hold the lock
close $OUT or die "can't write to main.txt: $!";
# close will release the lock
close $LOCK;

(You can get all those 'or die's supplied automatically by using the
'autodie' module.)
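
If you want to keep main.txt open instead of closing it on every
iteration, here is a minimal sketch of the flush variant mentioned
above (assuming $OUT was opened once, before the loop):

use Fcntl qw/:flock/;
use IO::Handle;

...
open my $LOCK, ">", "main.txt.lock"
    or die "can't open main.txt.lock: $!";
flock $LOCK, LOCK_EX or die "can't lock main.txt.lock: $!";
print $OUT "...";
# force the buffered output out while we still hold the lock
$OUT->flush or die "can't write to main.txt: $!";
close $LOCK; # releases the lock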

Alternatively, if you open the file separately in each child, you *can*
flock the $OUT filehandle; however, you will need to be careful to seek
to the end of the file under the lock or open in append mode.
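
A minimal sketch of that variant, assuming each child runs this
between $pm->start and $pm->finish:

use Fcntl qw/:flock/;

...
# each child gets its own open file description, so flock on $OUT
# really does exclude the other children
open my $OUT, ">>", "main.txt"
    or die "can't append to main.txt: $!";
flock $OUT, LOCK_EX or die "can't lock main.txt: $!";
print $OUT $n;
print $OUT $n;
# closing flushes the buffer and releases the lock in one step
close $OUT or die "can't write to main.txt: $!";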

Other locking primitives include fcntl file locking, which will allow
you to lock against other forked processes so long as you are careful
never to reopen the same file; SysV semaphores (IPC::Semaphore), which
are usually supported on Unix systems but not usually elsewhere; and
POSIX.1b semaphores (POSIX::RT::Semaphore), which are not yet
universally supported. Your system may also support other lock types.
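
For instance, a minimal sketch of a SysV semaphore used as a mutex
(Unix only; where exactly the pieces sit in your loop is an
assumption):

use IPC::SysV qw/IPC_PRIVATE IPC_CREAT S_IRUSR S_IWUSR/;
use IPC::Semaphore;
use IO::Handle;

# in the parent, before forking: one semaphore, initialised to 1
my $sem = IPC::Semaphore->new(IPC_PRIVATE, 1,
        S_IRUSR | S_IWUSR | IPC_CREAT)
    or die "can't create semaphore: $!";
$sem->setval(0, 1) or die "can't initialise semaphore: $!";

# in each child, around the two writes:
$sem->op(0, -1, 0); # P: block until we can take the mutex
print $OUT $n;
print $OUT $n;
$OUT->flush or die "can't write to main.txt: $!";
$sem->op(0, 1, 0);  # V: release the mutex

# in the parent, after wait_all_children:
$sem->remove;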

Ben

From: Eric Pozharski
with <15fe947f-67be-42f9-8ac2-1d19bc593654(a)t2g2000yqe.googlegroups.com> Peng Yu wrote:
> I have the following code. There is a chance that the two print
> lines (marked with '#') may not print consecutively. Is there a way
> to lock so that I can guarantee they write consecutively?
*SKIP*
> sleep rand(1);

Please read the very last paragraph of 'perldoc -f sleep' carefully:
sleep truncates its argument to an integer, so 'sleep rand(1)' is
always 'sleep 0'.

% perl -wle 'print "$_ -> ", sleep $_ foreach map rand 1, 0 .. 10'
0.040502967153774 -> 0
0.135818231941329 -> 0
0.57742617841917 -> 0
0.999873014489772 -> 0
0.560751288326248 -> 0
0.480306753175668 -> 0
0.0107077151972952 -> 0
0.870398831594763 -> 0
0.131003141627307 -> 0
0.957688366820321 -> 0
0.803041790395394 -> 0
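
If a fractional pause is what you actually want (I'm guessing at
intent here), Time::HiRes gives you a drop-in replacement:

use Time::HiRes qw/sleep/;

sleep rand(1); # now really pauses for up to one second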

(That's just my $0.02; others have already shown you the light)

*CUT*

--
Torvalds' goal for Linux is very simple: World Domination
Stallman's goal for GNU is even simpler: Freedom