[07:53] <InHisName2> Morning
[10:24] <rmg51> Morning
[13:19] <teddy-dbear> Morning peoples, dogs, turkey, hamsters and everything else
[13:24] <JonathanD> hiya
[13:28] <teddy-dbear> o/
[13:37] <InHisName2> jthan: why did you drop the "j" for 3 seconds --> missed typing a key?
[13:37] <InHisName2> Morning again
[16:25] <ChinnoDog> hi
[16:45] <lazyPower> o/
[17:19] <ChinnoDog> Is there a way to channel a terminal program or a screen session through a unix socket or named pipe? I know this seems like a strange question.
[17:58] <ChinnoDog> This is too complicated. Maybe I would rather not try to do that.
[17:59] <square-r00t> screen sessions already use sockets
[18:00] <square-r00t> /var/run/screen
[18:00] <ChinnoDog> oh.
[18:00] <ChinnoDog> So how do I connect to screen via socket then?
[18:02] <ChinnoDog> Meaning... If I have access to the file system containing the screen socket but screen is not running on the system I can see it from, how do I connect to it?
[18:04] <square-r00t> when you start a screen session, it opens a new socket.
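[editor's note] A sketch of the per-session sockets being described (session name "demo" is made up; the socket directory varies by system, often /run/screen/S-<user> or /var/run/screen):

```shell
# Start a detached session named "demo"; screen creates a socket for it
screen -S demo -d -m sleep 600

# List sessions; the output also names the socket directory,
# e.g. "1 Socket in /run/screen/S-alice."
screen -ls

# Reattach through that session's socket
screen -r demo
```

Note that screen can only reattach to sessions it started itself: the socket speaks screen's private protocol, it is not a general-purpose pipe into an arbitrary terminal.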
[18:05] <square-r00t> i'm going to take a guess here and assume you're trying to attach to a process that was initiated remotely and not within screen/dtach?
[18:06] <ChinnoDog> Actually no but.. can I do that?
[18:07] <ChinnoDog> That would be even better if I could run processes that aren't attached to a screen or terminal and connect to them later.
[18:10] <square-r00t> no, you can't. hah
[18:11] <ChinnoDog> This conversation is creating more questions than answers now. What I am trying to do is figure out how to isolate CLI programs inside docker containers.
[18:11] <square-r00t> oh, ps auxf|less
[18:11] <square-r00t> hit / (search)
[18:11] <square-r00t> search for the dock name
[18:11] <square-r00t> ps auxf prints all processes, with their full command lines, in "tree" mode
[18:12] <ChinnoDog> I don't understand how that helps me
[18:12] <square-r00t> so you can see what processes another process spawns
[18:12] <square-r00t> if it's running from the dock, you'll be able to find the process of the dock, and check out the children processes
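[editor's note] The ps invocation being suggested, spelled out (the "docker" search pattern is just an example):

```shell
# a: all users' processes; u: user-oriented columns;
# x: include processes without a controlling terminal;
# f: ASCII-art "forest" showing parent/child relationships
ps auxf | less            # press / inside less to search, e.g. /docker

# Non-interactive variant: a process plus the two lines after it,
# which in forest mode are usually its children
ps auxf | grep -A2 '[d]ocker'
```

The `[d]ocker` bracket trick keeps the grep process itself out of its own results.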
[18:12] <ChinnoDog> How do I communicate with a process that is running in a container if it doesn't have network ports or a unix socket in the filesystem?
[18:12] <square-r00t> *most* docks, iirc, spawn them that way
[18:12] <square-r00t> simple answer, you don't
[18:13] <square-r00t> you can send KILL sigs and that's it
[18:13] <square-r00t> (e.g. HUP, USR1, etc.)
[18:13] <ChinnoDog> Not a dock, Docker
[18:13] <square-r00t> http://www.linux.org/threads/kill-commands-and-signals.4423/
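[editor's note] A minimal example of the signal-only interaction described above (the `sleep` stand-in is arbitrary; HUP, USR1, KILL, etc. are sent the same way):

```shell
# Start a background process to signal
sleep 300 &
pid=$!

# Politely ask it to terminate
kill -TERM "$pid"

# Reap it; wait's status for a signal-killed child is 128 + signal number
wait "$pid" 2>/dev/null
echo "exit status: $?"    # 143 = 128 + 15 (SIGTERM)
```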
[18:14] <square-r00t> oh... the hell. looks like some kind of UML clone
[18:14] <square-r00t> the process tree *should* still show the process
[18:15] <square-r00t> ultimately, What Are you Trying to Specifically Do(TM)
[18:15] <ChinnoDog> The only connection I have to processes running in docker containers is network ports and file system access
[18:16] <square-r00t> through the docker interface/API, sure
[18:16] <ChinnoDog> Enable me to run multiple versions of a CLI environment at the same time by isolating the applications within containers
[18:17] <square-r00t> but unless it's running in a paravirt mode or fully virtualized hardware (and it doesn't look like Docker does that; it looks more like chroots) you should be able to still see and interact (via SIG at least) the processes within the container
[18:17] <square-r00t> but grain of salt, i'm not using docker or anything
[18:18] <ChinnoDog> But that won't let me attach it to an arbitrary terminal or screen.. will it?
[18:25] <ChinnoDog> It seems to me that to do this properly will create far too much overhead. I /could/ run an sshd in every container and then redirect users that ssh in to the correct container
[18:25] <ChinnoDog> That will create 1 sshd per application though plus 1 just to get in.
[18:26] <ChinnoDog> It might be better to abandon Docker for this. If I do that though I will need a safe way to update the system.
[18:28] <ChinnoDog> Actually, this could be a good use for btrfs snapshots.
[18:29] <ChinnoDog> To perform safe modifications I could snapshot the root file system, chroot into the snapshot to perform and test updates, and then remount root to the new snapshot.
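[editor's note] The snapshot / chroot / switch idea sketched as shell. Paths and subvolume names here are hypothetical, and it assumes root on btrfs plus your distro's package commands; run as root:

```shell
# 1. Snapshot the live root subvolume (btrfs snapshots are writable by default)
btrfs subvolume snapshot / /snapshots/root-next

# 2. Bind-mount what a chroot needs, then test the updates inside it
mount --bind /dev  /snapshots/root-next/dev
mount --bind /proc /snapshots/root-next/proc
mount --bind /sys  /snapshots/root-next/sys
chroot /snapshots/root-next /bin/sh -c 'apt-get update && apt-get -y upgrade'

# 3. If the result looks good, make the snapshot the default root for next boot
subvol_id=$(btrfs subvolume list / | awk '/root-next$/ {print $2}')
btrfs subvolume set-default "$subvol_id" /
```

The running root stays untouched until reboot, which is what makes the test-then-commit workflow possible.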
[18:29] <ChinnoDog> square-r00t: What do you think?
[18:30] <square-r00t> ChinnoDog: that's what you gotta do, re: sshd
[18:30] <ChinnoDog> I don't want to run that many sshds
[18:31] <square-r00t> but no, unless it has some sort of hypervisor or such you're not going to be able to get a tty (or pty) to that container
[18:41] <square-r00t> you might want to check out virtuozzo/openvz
[18:41] <square-r00t> it doesn't quite need a hypervisor like fully virtualized hardware, and you can easily enter containers
[18:42] <ChinnoDog> That will have even more overhead. Resources for an entire system in every VM.
[18:42] <square-r00t> bad news is they're full OS installs for each container, not just a chrooted application
[18:42] <square-r00t> it's not virt'd hardware
[18:42] <square-r00t> it just takes up more disk space
[18:42] <square-r00t> would use the same ram/cpu per container as docker
[18:42] <ChinnoDog> I know it is paravirt but every system is going to be running all the normal processes
[18:42] <square-r00t> (roughly)
[18:43] <square-r00t> no, it's *not* paravirt.
[18:43] <square-r00t> that's what i'm saying
[18:43] <square-r00t> vz = chroots with some isolated device special files, etc.
[18:44] <square-r00t> each container's going to be running about 4-5 processes as overhead, and that's all running via host kernel hooks
[18:44] <square-r00t> (so yeah, you need a custom kernel)
[18:44] <square-r00t> but you can't have your cake and eat it too
[18:46] <square-r00t> (paravirt means it's able to, and does, run its own kernel inside the container on fully virtualized hardware)
[18:46] <square-r00t> point being, you need to decide what you actually wanna do, because you're not gonna have a way of getting interactive shell with processes running inside a container without modifying the kernel
[18:46] <ChinnoDog> That is also more overhead than I want. My alternate plan seems more practical here. Abandon containers entirely and do system updates using file system snapshots. It is not the same level of isolation but it won't take down running processes and I can test the results before I commit.
[18:47] <square-r00t> (shrug)
[18:47] <ChinnoDog> I don't want more to manage, I want less to manage.
[18:48] <square-r00t> lol. sounds like your solution has a hell of a lot more to manage than me, but what do i know; i'm a linux sysadmin
[18:48] <square-r00t> s/than/to/
[18:49] <ChinnoDog> Snapshots allow me to manage one system. With OpenVZ I would have to manage one system per user. With containers I would have to manage one container per app, plus additional apps to connect the containers.
[18:49] <ChinnoDog> None of these approaches is perfect.
[18:50] <square-r00t> btrfs is also beta, you'd be hacking the thing together yourself, and you won't be able to run them in parallel effectively. to me, that's a bigger nightmare
[18:51] <ChinnoDog> Seems stable enough to me. I've been running it for years.
[18:52] <ChinnoDog> I had more problems with "stable" reiserfs code than I ever did with btrfs.
[18:52] <square-r00t> in-house always has the benefit of doing it "exactly" the way you want something done (assuming the skillsets are available to make that happen), but at the cost of admin/dev time.
[18:52] <square-r00t> that's because reiserfs is trash
[18:53] <square-r00t> but 1.) that's anecdotal evidence, 2.) still says nothing about the other two major issues i presented
[18:53] <square-r00t> granted, you still haven't told me *why* you're doing this, so i have no idea of the use case
[18:54] <ChinnoDog> To host an application on the internet for users to access.
[18:54] <ChinnoDog> Since I am hosting it stability as well as density is important.
[18:55] <square-r00t> stability, density, convenience; choose two
[19:01] <MutantTurkey> two?
[19:02] <MutantTurkey> you're lucky if you get one in the end!