
Implementing Bash shell-like functionality via TCP sockets



I would like to implement something like a Bash shell session between two machines that are connected via a TCP socket and communicate using JSON-RPC.

 

For my specific need, the socket client will act as the machine that implements the Bash shell, and the socket server will act as the machine which accesses that shell.  Upon creating a connection, the socket client machine will keep the connection open indefinitely and operate asynchronously.  Commands to be run on the socket client will ultimately be provided by a user via a web browser.  Each socket client will have a hard-coded GUID used to identify it, and will send this GUID to the server upon connection.  The socket client will be running as a user other than root, but must be able to execute any command, such as installing software using yum/apt.  The socket server machine will have user credentials which have been added to the sudoers list on all socket clients.  Network configuration prevents the client from being logged onto remotely, so this implementation basically acts as a backdoor.

 

I envision the workflow as follows:

  1. The socket server machine will also operate an HTTP server.
  2. A user, via a web browser, submits a command to the server's HTTPS server (strong authentication will be implemented).
  3. The HTTP server opens a socket client connection to the socket server (which, as indicated previously, runs on the same machine as the HTTP server) and passes the command to it along with a means to identify a given socket client.  The HTTP server does not respond to the web browser until it receives a response from the socket client (or a given amount of time expires).
  4. The socket server passes the command to the appropriate socket client along with a message ID (all messages will be acknowledged and transmitted asynchronously), using SSL.
  5. The socket client blindly executes the command and sends the output back to the socket server, which returns it to the HTTP server's socket client connection (which is then closed), which in turn returns it to the HTTP server, which in turn returns it to the web browser.

While I haven't yet implemented the above, I see no immediate roadblocks.  I've never done this before, however, nor even heard of anyone doing so, and I am uncertain how to create a persistent shell connection and how to allow the socket client to execute commands with root permissions.
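
For illustration, I picture the messages looking something like this (the method and field names are only a sketch of what I have in mind; nothing is settled):

on connect, client -> server:  {"jsonrpc": "2.0", "method": "identify", "params": {"guid": "hard-coded-guid"}, "id": 1}
command,    server -> client:  {"jsonrpc": "2.0", "method": "exec", "params": {"command": "uname -a"}, "id": 2}
response,   client -> server:  {"jsonrpc": "2.0", "result": {"exit_code": 0, "stdout": "Linux ...", "stderr": ""}, "id": 2}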

 

Regarding the "persistent shell connection": normally one logs onto a given machine as a given user, and can then substitute user (su) and perform tasks allowed by that user.  I don't think this concept can be implemented under my scenario.  Agree?  Would I need to pass the client's root password upon every communication between the socket server and socket client, and have the client execute the command using sudo?  Now that I think about it, this would require the user running the socket client to be added to the sudoers list, right?  I guess this would be okay, but it does make me a bit nervous.  Also, I am uncertain how to handle passwords on the clients, and question how I can use a different password for each client.
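
For example, I am wondering whether something along these lines would let me avoid passing passwords around at all (just a sketch; the user name and commands are made up):

# hypothetical /etc/sudoers.d/ entry on each client (added via visudo), letting the
# client's non-root user run specific commands without a password prompt:
#
#     agent ALL=(root) NOPASSWD: /usr/bin/yum, /usr/bin/apt-get
#
# the client process (running as "agent") could then run commands non-interactively:
sudo -n yum -y install httpd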

 

Sorry for rambling.  If you see any immediate pitfalls, or know of a totally different way to implement this functionality, please comment.

 

Thank you


Sounds like you're trying to reinvent SSH. Don't do that.

 

If you need to work around a firewall, use tunneling, don't invent your own protocol.

 

I agree that trying to reinvent SSH will likely lead to misery.

 

I am not responsible for the firewall, but I must support the client, and I also maintain the server.

 

The maintainer of the firewall is not willing to configure the firewall to allow external direct access to the client.

 

The firewall is set up to allow outgoing access on a given port from a given local static IP (that of the socket client) to a given remote IP (the socket server), and presumably to receive direct communication from the server after the socket is established (I will be testing this next week).  I can get them to also allow outgoing access on another port if necessary for tunneling.

 

Please confirm that this can be accomplished using tunneling.  Also, can you point me in the right direction, as I have never implemented tunneling (should I just google "use tunneling to get around firewall", or is there something more specific)?


"client blindly executes the command, and sends the output back" is basically the entire purpose of SSH.

 

You don't need direct access to the client - just a way to establish a connection. If that goes through a firewall with port forwarding, whatever. In fact that's a fairly standard practice.

What type of connection?  All I have is an established socket connection.  Looking at http://php.net/manual/en/function.ssh2-connect.php, I see how to create a new SSH connection from a client to a server.  I can certainly have the server send a message via the socket connection asking the client to create such a connection, but I don't know what to do next.  I need the server to access the client's shell, not the other way around.


Remember what I said a while back about the terms "client" and "server"? How the client is the one doing the connecting and the server is the one receiving the connection? And that the words themselves have no bearing on any other purposes of the machines in question?

 

Client and server are not things. They are roles.

 

SSH has its own take on the client and server concept. One machine runs an SSH service that receives connections, so it is the server. Another machine has an SSH-capable program that connects to whatever computer, so it is the client.

 

Your ser-- er, big fancy machine would be acting as the SSH client and it would connect to whatever other machine is acting as the SSH server. How those two interact in this protocol you're working with is completely separate.


Yes, I do remember what you said a while back about the terms "client" and "server"!

 

But the big fancy machine can't establish a new connection; it can only utilize an existing socket connection established by the other small thing.

 

How can the big fancy machine act as a client?


Theoretically you could have your client connect to your server, set up a simple pipe between it and a new SSH session, and have your server continue the process from there:

server <-> client's communications socket <-> client's SSH connection <-> client's SSH server

But PHP doesn't provide a way to do that.  So unless you want to write and build a PHP extension (probably mostly just copy and paste, though), I think your hands are tied: the server tells the client which commands to execute.

 

You can still reconsider who's responsible for determining how to execute commands. I still suggest the abstract "give me a list of files in this directory" approach, but if there are too many possibilities to account for then I'd have the client decide how to handle the commands. You still have the same problem of determining whether a command can be, or was successfully, executed, but that is a limitation of the client's environment and so should be the client's responsibility.
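
For example, the client-side handling could be as simple as a small dispatcher like this (the command names are invented, purely to show the idea):

#!/bin/sh
# rough sketch of a client-side dispatcher: map abstract requests to real commands
case "$1" in
    list-files)   ls -la -- "$2" ;;
    disk-usage)   df -h ;;
    install-pkg)  sudo -n yum -y install "$2" ;;
    *)            echo "unknown command: $1" >&2; exit 1 ;;
esac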


No, you definitely don't want that.

 

I haven't followed your project, but if all you want to do is make an SSH connection through that firewall, this shouldn't be an issue.

# on a small machine: establish tunnel to the big machine
# if the big machine sends data to its local port 2222, it will be forwarded to port 22 of the small machine
ssh -R 2222:localhost:22 user@big-machine

# on the big machine: establish SSH connection to small machine through existing tunnel
ssh -p 2222 user@localhost

Since the connection is initiated by the small machine, it should go through the firewall.

 

Surely there are existing tools which make this more streamlined, reconnect if the connection is lost, etc.  It's also possible to restrict the shell by sending all commands to a "proxy" script/program which then decides what to do.  This is how some custom remote shells are implemented: they use standard SSH, but you cannot actually execute arbitrary commands, only predefined ones.
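
For instance, something along these lines might work (assuming autossh is available on the small machine; the proxy-script path is made up):

# keep the reverse tunnel up and re-establish it if it drops (run on the small machine)
autossh -M 0 -N -o ServerAliveInterval=30 -o ServerAliveCountMax=3 -R 2222:localhost:22 user@big-machine

# optionally, in the small machine's ~/.ssh/authorized_keys, force every login made with
# the big machine's key through a fixed proxy script instead of a full shell; the script
# can inspect $SSH_ORIGINAL_COMMAND to see what was requested:
#     command="/usr/local/bin/command-proxy",no-port-forwarding ssh-rsa AAAA... user@big-machine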


(3 weeks later...)


Question...

 

What if there is more than one small machine, and one of them has already established a tunnel to the big machine using ssh -R 2222:localhost:22 user@big-machine?  Would a second small machine be able to establish a similar tunnel while the first small machine already has one?  What then would be the effect of doing ssh -p 2222 user@localhost?


-R 2222:localhost:22 will take traffic to port 2222 on the remote side (i.e., the big machine's 2222) and forward it to localhost:22 on the local side (i.e., the small machine's 22).

 

Since only one thing can listen on a port at a time, and the first client has already claimed remote port 2222, you'd have to pick a different port for the second client to use, like 2223.  Picking a random number within a certain range (like 2200-2500) may be good enough for your purposes.
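
For example (the second port is arbitrary):

# each small machine claims its own remote port on the big machine
ssh -R 2222:localhost:22 user@big-machine    # run on small machine 1
ssh -R 2223:localhost:22 user@big-machine    # run on small machine 2

# on the big machine, the port then selects which small machine you reach
ssh -p 2222 user@localhost    # shell on small machine 1
ssh -p 2223 user@localhost    # shell on small machine 2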


In order to prevent the big machine from complaining that a little machine's RSA identity (host key) has changed, must one create a different user on the big machine for each small machine, or is there another option (other than giving all the little machines the same identity or deleting known_hosts on the big machine before each attempt)?
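
Or could something like ssh's HostKeyAlias option keep the host keys separate, e.g. (just a guess on my part, untested):

# give each small machine its own alias in the big machine's known_hosts,
# so their differing host keys don't collide under "localhost"
ssh -p 2222 -o HostKeyAlias=small-machine-1 user@localhost
ssh -p 2223 -o HostKeyAlias=small-machine-2 user@localhost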

