From: xetum on
I'm implementing a simple remote shell for educational purposes:
client: arsh serveraddr port
server: arshd

client: selects between the descriptors STDIN_FILENO and the socket
server: redirects STDIN_FILENO, STDOUT_FILENO and STDERR_FILENO to the socket

The problem arises when the server executes a program with STDIN
redirection, e.g.:
arsh$ cat <input.txt
cat reads lines until a ^D; then the client shuts down the socket for
writing (SHUT_WR) so that cat detects an EOF. The server child process
executing cat finishes, but when the server tries to read the next
command line after the prompt (arsh$), it sees that the socket is
closed and exits unexpectedly.

Is there a way to send an EOF through a socket while keeping it open?
For example, reading from the console (instead of a socket) works that
way.
Example:

#include <stdio.h>

int main(void) {
    int ch = getchar();
    printf("ch = %d\n", ch);
    int ch1 = getchar();
    printf("ch1 = %d\n", ch1);
    return 0;
}

Pressing ^D, then 'a' and Enter, produces:

ch = -1
a
ch1 = 97

i.e. data can still be entered after an EOF has been delivered on STDIN.

I'd appreciate any clues about how this is solved in, e.g., ssh.

Best regards
From: Rainer Weikusat on
xetum <francesc.oller(a)upc.edu> writes:
> I'm implementing a simple remote shell for educational purposes:
> client: arsh serveraddr port
> server: arshd

[...]

> The problem arises when the server executes a program with STDIN
> redirection, e.g.:
> arsh$ cat <input.txt
> cat reads lines until a ^D; then the client shuts down the socket for
> writing (SHUT_WR) so that cat detects an EOF. The server child process
> executing cat finishes, but when the server tries to read the next
> command line after the prompt (arsh$), it sees that the socket is
> closed and exits unexpectedly.
>
> Is there a way to send an EOF through a socket while keeping it open?

The socket doesn't care. The problem is likely that the ^D is
processed by the terminal driver on the client side, but it needs to
be processed on the server side. For this to happen, you need to
configure your local terminal not to use canonical mode but to pass
control characters through uninterpreted, and the remote application
needs to use a so-called 'pseudo terminal' (pty) so that the terminal
driver on the remote machine processes them instead. AFAIK, this means
the general structure of your application needs to be like the ASCII
art image below:

 ------                                         ------                    -----
|client| - - - - - socket connection - - - - - |server| . . pty link . . |shell|
 ------                                         ------                    -----
 terminal in 'raw' mode                         pty master device         pty slave device
From: xetum on
On 18 March, 13:22, Rainer Weikusat <rweiku...(a)mssgmbh.com> wrote:
> [...]
>
> AFAIK, this means the general structure of your application needs to
> be like the ASCII art image below:
>
>  ------                                         ------                    -----
> |client| - - - - - socket connection - - - - - |server| . . pty link . . |shell|
>  ------                                         ------                    -----
>  terminal in 'raw' mode                         pty master device         pty slave device

Ah..., yes, pseudoterminals. Indeed, that works, thanks, but now I
have another problem. For testing, I've coded this simple forkpty
example:

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/select.h>
#include <pty.h>                /* forkpty(); link with -lutil */

/* SysError, RTError, tty_raw and tty_reset are helpers defined elsewhere */

#define BUF_SIZE 128
#define max(i, j) (((i) > (j)) ? (i) : (j))

void selector(int fd) {
    char buf[BUF_SIZE];
    int n;

    fd_set ss, ret;
    FD_ZERO(&ss);
    FD_SET(fd, &ss);
    FD_SET(STDIN_FILENO, &ss);

    while (true) {
        ret = ss;
        if (select(max(fd, STDIN_FILENO) + 1, &ret, NULL, NULL, NULL) == -1)
            SysError("selector:select");
        if (FD_ISSET(fd, &ret)) {
            if ((n = read(fd, buf, BUF_SIZE)) > 0) {
                write(STDOUT_FILENO, buf, n);
                printf("n = %d writing to STDOUT\n", n);
            } else if (n == 0) {
                printf("fd closed\n");
                break;
            } else {
                printf("errno = %d\n", errno);
                SysError("selector:read:fd");
            }
        } else if (FD_ISSET(STDIN_FILENO, &ret)) {
            if ((n = read(STDIN_FILENO, buf, BUF_SIZE)) > 0) {
                write(fd, buf, n);
                printf("n = %d writing to fd\n", n);
            } else if (n == 0) {
                printf("STDIN closed\n");
                break;
            } else
                SysError("selector:read:STDIN_FILENO");
        }
    }
}

int main(int argc, char *const argv[])
{
    int pty;

    tty_raw(STDIN_FILENO);
    if (atexit(tty_reset) != 0)
        RTError("atexit: can't register tty_reset");
    switch (forkpty(&pty, NULL, NULL, NULL)) {
    case -1: SysError("pty:forkpty");
    case 0:
        //console();
        _exit(EXIT_SUCCESS);
    default:
        selector(pty);
    }
    return 0;
}

The child attached to the pty slave just exits. select() in the parent
wakes up with fd (the pty master) ready, and read() should return 0 to
indicate EOF (stdout closed at the pty slave), shouldn't it? Instead
it returns -1 with errno = EIO (I/O error). All the pty example code
I've seen handles this read() == -1 not as an error but as normal
program termination, but then how can one tell EOF apart from a true
error?

Regards, Francesc Oller