From: Henrik Carlqvist on
Chris Vine <chris(a)cvine--nospam--.freeserve.co.uk> wrote:
> I tend to use sshfs, which is a FUSE file system. Its advantage is
> that, unlike with NFS, the user's status is entirely determined by her
> log-on credentials on the server, rather than by credentials on the
> client machine.

Thanks for the tip about sshfs. Its strength compared with NFS is really
that the user is determined by the server. However, for my use that also
becomes a weakness.

On my machines we not only have home directories automounted by NFS;
common project directories are mounted this way as well. Several users
might be logged in to the same machine at any time, accessing files from
the same project.

If I understand sshfs correctly, a share mounted at /proj/project1 would
always be accessed as the ssh user who mounted the share?

NFS is not the perfect solution for us either. One problem we have with
NFS is that the NFS protocol only allows up to 16 Unix groups for each
user. If the NFS server is running Linux, the --manage-gids option to
mountd is a workaround, but not all our NFS servers run Linux.
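For reference, a sketch of how that workaround is wired up on a Linux
server (the rc.nfsd path is where Slackware keeps its NFS startup script;
adjust for other distributions):

```shell
# With --manage-gids, rpc.mountd ignores the (at most 16) group ids
# that fit in the AUTH_SYS credential sent by the client and instead
# looks up the user's full group list on the server itself.
# On Slackware this line would go in /etc/rc.d/rc.nfsd:
/usr/sbin/rpc.mountd --manage-gids
```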

If I needed to mount directories from my home machine when connecting my
laptop to the Internet, sshfs would seem like the best choice to me.
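(For that laptop case, a minimal sketch; the hostname and paths are made
up:)

```shell
# Mount a directory from the home machine over ssh. Every operation
# on the mount runs server-side as the ssh user "henrik", no matter
# which local account touches the files.
sshfs henrik@home.example.org:/home/henrik /mnt/home

# Unmount with fusermount when done.
fusermount -u /mnt/home
```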

regards Henrik
--
The address in the header is only to prevent spam. My real address is:
hc3(at)poolhem.se Examples of addresses which go to spammers:
root(a)localhost postmaster(a)localhost

From: Chris Vine on
On Wed, 30 Jun 2010 07:54:47 +0200
Henrik Carlqvist <Henrik.Carlqvist(a)deadspam.com> wrote:
> On my machines we not only have home directories automounted by
> NFS; common project directories are mounted this way as well.
> Several users might be logged in to the same machine at any time,
> accessing files from the same project.
>
> If I understand sshfs correctly, a share mounted at /proj/project1
> would always be accessed as the ssh user who mounted the share?

Yes, that's right (which is why allow_other is usually not a wise mount
option on home directories). In essence, as you note elsewhere, user
identity and permissions are not transitive with sshfs. That is why
sshfs is secure and NFS is not.

In your scenario you could have a master user and group (say
"project"), mount with the allow_other option, and make all your users
who want to access the project files members of the project group (with,
as necessary, both read and write access for members of the group) and
able to log in to the file server via ssh. As I mentioned, the point to
note here though is that sshfs does not have file locking with respect
to concurrent writes and/or read-writes. However, for this kind of
usage NFS looks secure enough with root_squash, and that of course does
do locking.
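Roughly, the server-side setup might look like this (user, group, host
and path names are all invented):

```shell
# Shared group for the project; each collaborator joins it.
groupadd project
usermod -aG project alice
usermod -aG project bob

# Group-own the tree, give the group read/write access, and set the
# setgid bit on directories so newly created files inherit the group.
chgrp -R project /proj/project1
chmod -R g+rw /proj/project1
find /proj/project1 -type d -exec chmod g+s {} +

# The master user mounts the share; allow_other lets other local
# users use the mount point (their operations still run as the
# mounting user on the server).
sshfs project@fileserver:/proj/project1 /proj/project1 -o allow_other
```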

Chris
From: Helmut Hullen on
Hello, Henrik,

You wrote on 22.06.10:

>>> Try passing "-o vers=3" to the mount command.

[...]

> So it seems as if Slackware 13.1 by default uses NFS version 4 which
> was not supported at all in your earlier versions of Slackware. If
> this is the solution it is also possible to add "vers=3" to the line
> in /etc/fstab.

By the way: "vers=3" solved some problems in the "rsnapshot"
environment too ...
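As an illustration, the /etc/fstab form of that workaround might look
like this (server name and paths are invented):

```
# /etc/fstab: pin the mount to NFS version 3 instead of the v4 default
fileserver:/export/proj  /proj  nfs  vers=3,rw  0 0
```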

Best regards
Helmut

"Ubuntu" - an African word, meaning "Slackware is too hard for me".

From: Grant on
On 04 Jul 2010 13:42:00 +0200, Helmut(a)Hullen.de (Helmut Hullen) wrote:

>Hello, Henrik,
>
>You wrote on 22.06.10:
>
>>>> Try passing "-o vers=3" to the mount command.
>
>[...]
>
>> So it seems as if Slackware 13.1 by default uses NFS version 4 which
>> was not supported at all in your earlier versions of Slackware. If
>> this is the solution it is also possible to add "vers=3" to the line
>> in /etc/fstab.
>
>By the way: "vers=3" solved some problems in the "rsnapshot"
>environment too ...

Doesn't surprise me; AFAIK nfs-4 is experimental, they're still trying
to get nfs-3 working properly ;) I've had zero problems with nfs-3 for
a long time now.

Grant.
From: Grant on
On Mon, 21 Jun 2010 15:47:09 +0000, Robby Workman <newsgroups(a)rlworkman.net> wrote:

>On 2010-06-20, john <here(a)home.hams> wrote:
>> I can't mount the two exported NFS directories on my file storage box
>> like I would usually do with prior Slackware versions. The nfs-utils on
>> the server side are vers. 1.0.7, and nfs-utils-1.2.2 on the Slackware
>> 13.1 install. It still mounts the directories on a 13.0 install with no
>> problems. I'm using a simple hosts list in /etc/hosts on all machines.
>> Has anyone else experienced a problem, or do I need to mount the
>> remote directories differently?
>
>
>Try passing "-o vers=3" to the mount command.
>
>-RW

Just got bitten with this with a custom kernel on slack64-13.1:

root(a)pooh64:~# mount /home/common/
mount.nfs: an incorrect mount option was specified
root(a)pooh64:~# mount -o vers=3 /home/common/
[okay]

Why default to nfs4? It seems to be a compile-time option for mount.nfs,
since the man page says there are supposed to be both a mount.nfs and a
mount.nfs4. Slackware64-13.1 only has the /sbin/mount.nfs :(
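For anyone following along, a quick way to see which mount helpers your
own install shipped:

```shell
# nfs(5) describes both mount.nfs and mount.nfs4;
# list whichever are actually installed:
ls -l /sbin/mount.nfs*
```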

Did somebody at Slackware stuff this one up?

Sure, I can work around it by enabling kernel nfs4 support, but I like
to stay where things are reliable.

Grant.