[00:20] <Keybuk> ion: talking of which
[00:20] <Keybuk> http://upstart.at/git/?p=scott/intendant.git;a=summary
[00:20] <Keybuk> could do with some testing
[00:21] <ion> woot
[00:21] <Keybuk> basically you build it then do
[00:21] <Keybuk> sudo ./intendant /some/binary
[00:21] <ion> I’ll try it out after some sleep.
[00:21] <Keybuk> in theory no binary should be able to escape its supervision
[00:21] <Keybuk> if you can try it with a few favorite daemons and mail me the logs, that'd be great
[00:21] <Keybuk> (since it's also for comparing techniques)
[00:21] <ion> Alright
[00:26] <ion> What’s the clone URL? I guessed git://upstart.at/scott/intendant.git but it didn’t work.
[00:26] <SeveredCross> Hey folks, how do I set the working directory for an upstart job?
[00:27] <Keybuk> SeveredCross: "chdir /working/directory" in the .conf file
[00:27] <SeveredCross> Awesome, thanks.
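The `chdir` stanza Keybuk mentions goes into the job's `.conf` file under `/etc/init/`. A minimal sketch, assuming a made-up job name, daemon path, and working directory (written to a local file here for illustration; on a real system it would be e.g. `/etc/init/myapp.conf`):

```shell
# Hypothetical Upstart job file showing the "chdir" stanza.
# Job name, daemon, and directory are all invented for this sketch.
cat > ./myapp.conf <<'EOF'
description "example daemon with a working directory"

start on runlevel [2345]
stop on runlevel [!2345]

chdir /srv/myapp
exec /usr/sbin/myappd
EOF
```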
[00:27] <Keybuk> ion: oh, does it not say? git://upstart.at/git/scott/intendant.git iirc
[00:27] <ion> That was my second guess and it didn’t work either. :-)
[00:28] <Keybuk> weird
[00:29] <ion> When using gitweb, one can add a file named ‘cloneurl’ to the root of each repository and the UI’ll display the contents.
[00:30] <ion> That is, to the directory where ‘description’ goes as well.
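ion's tip can be sketched as follows: gitweb reads a file named `cloneurl` (one URL per line) from the bare repository directory, alongside `description`. The repository path below is a local stand-in, not the real server layout:

```shell
# Sketch: give gitweb a clone URL to display by dropping a "cloneurl"
# file next to "description". REPO is a hypothetical stand-in path.
REPO=./intendant.git
mkdir -p "$REPO"
echo 'Intendant supervision testbed' > "$REPO/description"
echo 'git://upstart.at/scott/intendant.git' > "$REPO/cloneurl"
cat "$REPO/cloneurl"
# prints git://upstart.at/scott/intendant.git
```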
[00:33] <Keybuk> try now
[00:36] <ion> Works with /git/. (Btw, isn’t that redundant? When i’ve been running git-daemon i haven’t had the need to include that part in the clone URLs.)
[00:36] <Keybuk> yeah, turns out that is an option to git-daemon
[00:36] <Keybuk> I added --base-path=/git --base-path-relaxed now
[00:36] <Keybuk> so both should work
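The fix Keybuk describes would look roughly like this as a git-daemon invocation (the export directory is assumed): `--base-path=/git` makes `git://host/foo.git` resolve under `/git`, and `--base-path-relaxed` falls back to the literal path when that lookup fails, so `git://host/git/foo.git` also works.

```shell
# Hypothetical git-daemon invocation matching the behavior described:
# --base-path=/git      : git://host/x.git maps to /git/x.git
# --base-path-relaxed   : if that fails, try the literal path too,
#                         so git://host/git/x.git also resolves
git daemon --export-all \
    --base-path=/git \
    --base-path-relaxed \
    /git
```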
[00:41] <ion> Yeah, works now.
[06:17] <twb> My new upstart job needs DNS resolution to be working before it starts
[06:17] <twb> What even do I "start on" ?
[06:19] <twb> (This is on lucid, btw)
[06:36] <SeveredCross> Probably networking.
[06:36] <SeveredCross> I don't think there's anything more fine-grained that would get you /just/ DNS.
[06:37] <twb> It didn't like networking
[06:37] <twb> Based on what vsftpd does, I used start on (runlevel [2345] and net-device-up IFACE!=lo)
[06:37] <twb> Which assumes there's only one iface, but that's OK for my immediate purposes
[06:38] <twb> IIUC networking didn't work because the *network* comes up before dhclient finishes setting up the configuration recommended to it by the dhcp server
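twb's workaround can be sketched as a job file: gate the start condition on both the runlevel and a non-loopback interface coming up, since a plain `start on networking` can fire before dhclient has finished writing the DHCP-supplied resolver config. The job name and daemon below are invented, and the file is written locally for illustration:

```shell
# Hypothetical job following twb's "start on" workaround for needing
# DNS resolution; on a real system this would be /etc/init/<job>.conf.
cat > ./needs-dns.conf <<'EOF'
description "job that needs working DNS resolution"

start on (runlevel [2345] and net-device-up IFACE!=lo)
stop on runlevel [!2345]

exec /usr/sbin/mydaemon
EOF
```

Note this assumes a single non-loopback interface, as twb points out in the log.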
[06:39] <twb> UGH, but now it isn't terminating properly
[06:40] <twb> http://paste.debian.net/108031/
[06:40] <twb> http://paste.debian.net/108033/
[06:45] <JanC> you could probably hook into a dhclient exit hook
[06:45] <JanC> or something like that
[06:48] <twb> there should already be one...
[06:49] <twb> at least, i thought there was
[06:49] <twb> although come to think of it, the host I'm testing is in the dmz and has static networking
[06:58] <twb> For some reason http://paste.debian.net/108034/ STARTS UP again during the shutdown sequence
[07:09] <twb> I give up.  I'll just remove the options where collectd needs name resolution, and hard-code IPs into its config file.
[07:12] <SeveredCross> Ooh, I've been looking at deploying collectd, heh.
[07:12] <SeveredCross> I want to collect data from my machines, relay it to a central box, and make pretty graphs so I can monitor them.
[07:13] <twb> That's what I'm doing
[07:13] <twb> I'll pastebin my build scripts for you
[07:13] <SeveredCross> Oh, nice, thanks.
[07:14] <twb> http://paste.debian.net/108035/ is the hub
[07:14] <twb> http://paste.debian.net/108036/ is the spokes
[07:14] <twb> (Note that the hub starts out as a spoke, so really hub = spoke+hub)
[07:15] <twb> And the latter should also have an "apt-get install collectd-core" at the top.
[07:15] <SeveredCross> Nice.
[07:15] <SeveredCross> Thanks, I'll save those and work off of them.
[07:17] <twb> Unless your spokes are all LXC containers, you'll want to add in a bunch more plugins, e.g. disk and network
[07:17] <SeveredCross> Yeah, disk, network, processes and memory are what I want to monitor.
[07:17] <twb> Ref. http://collectd.org/wiki/index.php/Table_of_Plugins
[07:18] <SeveredCross> I'll probably end up graphing the data with rrdtool on the hub side, but I'm not sure how that's going to work
[07:18] <twb> In my code, you can see I use collection3 to do it
[07:18] <SeveredCross> Yeah.
[07:18] <twb> It's at least as nice as munin
[07:19] <SeveredCross> I'm just unsure if it's going to display all the machines in the same graph for any particular data type, or if it'll do graph per spoke per data type.
[07:19] <twb> SeveredCross: collection3 can display all hosts for a graph, or all graphs for a host (or subsets of either).
[07:19] <SeveredCross> Nice.
[07:19] <twb> Each image is of one datum of one host
[07:20] <SeveredCross> I'd like to at some point write a frontend for it that I can just connect to a collectd hub and it'll get the data, but that's far off in the future.
[07:20] <twb> SeveredCross: well, collection3 does that -- you connect by way of HTTP :-P
[07:21] <SeveredCross> :P
[07:21] <twb> Or you could rsync the rrd databases over ssh, I guess
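The alternative twb mentions would look something like this; the hostname and paths are invented, and on a default Debian/Ubuntu install collectd's RRD files live under `/var/lib/collectd/rrd/`:

```shell
# Sketch of pulling the hub's RRD databases over ssh with rsync.
# Host and destination are hypothetical; rsync runs over ssh by default.
rsync -az --delete \
    hub.example.com:/var/lib/collectd/rrd/ \
    ./collectd-rrd/
```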
[07:22] <SeveredCross> Yeah, but that's failure-prone, and why do that when I have a fancy VPN connection to the hub. :>
[07:22] <twb> What, you run rsh instead of ssh on your private networks?
[07:23] <SeveredCross> Heh, no, but I'm forgetful, and I don't like having lots of files on my home machine that I don't need.
[19:54] <ion> keybuk: You’ve got mail.
[20:54] <Keybuk> ion: thx!