From: Rahul on
Joe Beanfish <joe(a)nospam.duh> wrote in news:i2v1ja
$bjv(a)news.thunderstone.com:

> Note that this may slam the NFS server and/or use a lot of your net
> bandwidth.
>
> If your mounted systems are the same architecture and version as
> the host it looks like you could write a shell script wrapper that
> would run locate with the option(s) to use the database on the mounted
> system. Run once normally for local files. Run once for each mounted
> filesystem telling it to use the mounted database.
>

Thanks for all the tips! In the interest of not slamming the NFS server, I
decided the cleanest approach was to index once on the storage server
(mlocate's updatedb, run locally) and then have all the remote systems use
that same mlocate.db via NFS.

This is what I did (in case it helps anyone):

On storage server:
cp /var/lib/mlocate/mlocate.db /opt/tmp/ ##put in a daily cron job
chgrp slocate /opt/tmp/mlocate.db ##else locate on remote cannot read db

[ /opt/tmp is exported to all remote systems. ]
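
For what it's worth, the cron piece is just a trivial wrapper around those
two commands. A minimal sketch (the script name and the export options are
only examples, adjust for your own setup):

#!/bin/sh
# /etc/cron.daily/share-mlocate-db -- refresh the shared locate database
# after the regular updatedb run has rebuilt /var/lib/mlocate/mlocate.db
cp /var/lib/mlocate/mlocate.db /opt/tmp/mlocate.db
chgrp slocate /opt/tmp/mlocate.db

and the matching line in /etc/exports on the storage server (read-only is
enough for the clients):

/opt/tmp    *(ro)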

On remote system:
alias locate 'locate -d /opt/tmp/mlocate.db'
locate '*foofile*'
/home/foouser/foofile
/home/baruser/foofile
[snip]
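
The alias above is tcsh syntax, by the way; on bash it needs an '='. And
since -d replaces the default database, a remote box that also wants to
find its own local files can give -d a :-separated list. A sketch, assuming
the remote system still has its usual /var/lib/mlocate/mlocate.db:

# bash equivalent of the alias above
alias locate='locate -d /opt/tmp/mlocate.db'

# or search the remote system's own database as well as the shared one
alias locate='locate -d /var/lib/mlocate/mlocate.db:/opt/tmp/mlocate.db'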

Seems to work. It would have been more elegant to link the database into
/opt/tmp rather than copy it, but neither kind of link works here: a soft
link gets resolved on the remote system against its own local filesystem,
and a hard link can't cross filesystems. But if there's a way around this
I'd love to know.
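
To spell out what fails, with hypothetical commands on the storage server:

# soft link: NFS clients resolve the symlink against their own local
# filesystem, so they end up looking for /var/lib/mlocate/mlocate.db on
# the remote box itself rather than on the storage server
ln -s /var/lib/mlocate/mlocate.db /opt/tmp/mlocate.db

# hard link: fails with EXDEV ("Invalid cross-device link") because
# /var/lib and /opt/tmp are on different filesystems here
ln /var/lib/mlocate/mlocate.db /opt/tmp/mlocate.db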

Thanks again for all the pointers!

--
Rahul
From: Rahul on
Chris Davies <chris-usenet(a)roaima.co.uk> wrote in
news:pemai7xg5h.ln2(a)news.roaima.co.uk:

> Is the filesystem containing your home directory NFS mounted as a
> symbolic link from something under /net (or one of the other
> directories listed in PRUNEPATHS)?
>

Nope. Not the case.
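
For anyone wondering, this is easy enough to double-check (assuming the
stock mlocate config location):

grep -E 'PRUNEPATHS|PRUNEFS' /etc/updatedb.conf
mount | grep home    # shows how the home filesystem is really mounted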

--
Rahul