[01:06]  * thumper takes a deep breath
[01:07]  * thumper dives into local provider syslog
[03:26]  * thumper shakes his head
[03:26]  * thumper sighs
[03:26]  * thumper thinks...
[03:27] <axw> what'cha doing thumper?
[03:27] <thumper> axw: trying to make the syslog udp port configurable
[03:27] <thumper> all done for the machines
[03:27] <thumper> but the deployer is a problem
[03:27] <thumper> as we need api calls to work it out
[03:28] <thumper> then I notice that the deployer makes multiple api calls instead of just one specialist call
[03:28] <thumper> trying to work out how far to go with this...
[03:28] <thumper> the State address call methods are icky
[03:28] <thumper> since we ask for Addresses and APIAddresses, we get the environment config twice
[03:29] <thumper> since I also want to get the SyslogPort from config
[03:29] <thumper> that would be another
[03:29] <thumper> ideally we should just have one call...
[03:29] <thumper> I don't want to shave the whole yak
[03:29] <axw> heh
[03:30] <thumper> we are incredibly inefficient in the calls
[03:30] <thumper> but it is way less than the network lag
[03:30] <thumper> so no one cares
[03:33] <thumper> aargghh...
[03:33] <thumper> stabby stabby
[03:34] <thumper> state/address.go:103 c.f. :89
[03:35]  * thumper sighs
[03:37]  * thumper gets out the mega-razor with the number one clip
[03:46] <axw> thumper: I'm looking at finishing off synchronous bootstrap, specifically the cancellation behaviour
[03:46] <axw> I'm thinking of modifying the Environ interface to be like this: http://paste.ubuntu.com/6446222/
[03:46] <axw> does that look sensible?
[03:46]  * thumper looks
[03:47] <axw> the returned function will do the synchronous bit, and cmd/juju will install a signal handler to tomb.Kill
[03:47] <thumper> so the finalisation function is to call back to kill the environment instance
[03:48] <axw> no, it's to SSH to the machine and install the agent
[03:49] <axw> but it takes a tomb so it can be cancelled
[03:49] <axw> thumper: I made it a callback so the signal handling can be in the CLI
[03:51] <thumper> ok, I think
[03:51] <thumper> yeah, seems ok to me
[03:52] <axw> okey dokey, will plough ahead - ta
[04:04] <thumper> well, that is the hind quarters shaved
[04:04] <thumper> I should stop shaving
[04:06]  * thumper is reminded yet again how terrible our cloudinit tests are
[04:08] <thumper> I'm almost prepared to pay someone to fix these tests
[04:17] <thumper> fark
[04:18]  * thumper has a nearly hairless yak
[04:33] <thumper> wow...
[04:33] <thumper> that "simple" refactoring is: 25 files changed, 194 insertions(+), 58 deletions(-), total diff size of 846 lines
[04:33] <thumper> and that is before I start what the branch needs
[04:33]  * thumper creates another pipe
[04:36]  * thumper takes a breath again and tries to work out what the original aim was before the hairy yak raised its ugly head
[04:57]  * thumper goes swimming
[04:57] <wallyworld> thumper: how close are you to pushing a kvm broker implementation?
[04:58] <thumper> wallyworld: FWIW, I was able to deploy ubuntu to a local kvm environment
[04:58] <thumper> but no logging
[04:58] <wallyworld> \o/
[04:58] <thumper> so I need to fix that
[04:58] <wallyworld> ok, just checking
[04:58] <thumper> wallyworld: a day or two
[04:58] <wallyworld> i have a branch ready to plug into that
[04:58]  * thumper nods
[04:58] <wallyworld> ie the NewKvmBroker() is a place holder
[04:58] <thumper> :)
[04:58] <wallyworld> and just now pushed a branch to clean up provisioner
[04:59] <thumper> coolio
[04:59]  * thumper has swimming coaching in 15 minutes, so I need to get a move on
[04:59] <thumper> ciao
[04:59] <wallyworld> later
[09:17] <jam> mgz: poke
[09:42]  * fwereade is experiencing some intestinal discomfort and is going to lie down for a bit
[09:47] <TheMue> fwereade: get well soon
[09:49] <jam> fwereade: rest well, feel better
[10:34] <rogpeppe1> rebooting
[10:45] <jam> standup time: https://plus.google.com/hangouts/_/calendar/am9obi5tZWluZWxAY2Fub25pY2FsLmNvbQ.mf0d8r5pfb44m16v9b2n5i29ig
[10:45] <jam> fwereade: (if you're feeling better) ^^
[10:45] <jam> mgz, TheMue, rogpeppe, dimitern: ^^
[10:50] <jamespage> hey juju devs - I'm seeing this issue in one of the QA labs:
[10:50] <jamespage> https://bugs.launchpad.net/juju-core/+bug/1178312
[10:50] <_mup_> Bug #1178312: ERROR state: TLS handshake failed: x509: certificate signed by unknown authority <config> <ui> <juju-core:Triaged> <https://launchpad.net/bugs/1178312>
[10:50] <jamespage> whenever I bootstrap a server
[10:58] <jam> jamespage: have you set "ssl-hostname-verification: false" in your env config ?
[11:01] <jamespage> jam: makes no difference - its a verification failure between the juju client and mongodb
[11:02] <jam> jamespage: so do you actually match the description of "you have an outdated .pem on a second machine from the one that did the actual bootstrap" ?
[11:02] <jamespage> jam: no this is all on a single client endpoint
[11:02] <jam> if you're doing a fresh bootstrap and it still fails, then it isn't exactly that bug
[11:03] <jam> though we might have another one
[11:03] <jamespage> jam: probably a new bug then
[11:03] <jam> you *might* have a local ".pem" file that is actually invalid
[11:03] <jamespage> jam: where do I find those?
[11:03] <jam> jamespage: it would be in ~/.juju/$ENVNAME.pem I believe. New versions of Juju put the cert contents into the $ENVNAME.jenv file
[11:04] <jam> but I *think* it tries to use a .pem if it already existed
[11:04] <jam> I would have thought you couldn't bootstrap if you had an invalid one, but there have been weirder things
[11:04] <jam> jamespage: a pastebin of the output of "juju bootstrap --debug && juju status --debug" might be helpful
[11:05] <jam> jamespage: for example, is someone trying to hijack your connection and redirecting you to another machine, or running HTTP on a HTTPS port etc
[11:06] <jamespage> jam; http://paste.ubuntu.com/6447597/
[11:10] <jamespage> jam: and http://paste.ubuntu.com/6447604/
[11:10] <jamespage> jam: I could see stuff in the .jenv
[11:10] <jamespage> so I scrubbed .juju/environments and re-tried - same problem
[11:10] <jamespage> jam: I can see the client connections being rejected by mongod on the bootstrap node once its up
[11:11] <jamespage> (i.e. I can ssh to it)
[11:13] <jam> jamespage: can you paste the /var/log/cloud-init-output.log
[11:14] <jam> and possibly /var/log/juju/machine-0.log
[11:14] <jamespage> jam: sure - can do
[11:14] <jam> and if we're being complete /var/lib/juju/**/agent.conf (i think it is /var/lib/juju/agents/machine-0/agent.conf but I'm not positive)
[11:15] <jam> jamespage: This is a little bit concerning: 2013-11-20 11:09:15 DEBUG juju.environs.configstore disk.go:77 Making /var/lib/jenkins/.juju/environments 2013-11-20 11:09:15 INFO juju.environs open.go:156 environment info already exists; using New not Prepare 2013-11-20 11:09:15 DEBUG juju.provider.maas environprovider.go:33 opening environment "maas".
[11:15] <jam> "Using New not Prepare"
[11:15] <jam> I'm guessing that is because you have to sync-tools first?
[11:17] <rogpeppe> jam: ~/.juju/$ENVNAME.pem is no longer used AFAIK
[11:17] <rogpeppe> jam: it's all kept in the .jenv file now
[11:18] <jam> rogpeppe: I realize we don't create it, but I thought we might use it if it existed
[11:18] <jam> as part of the migration strategy
[11:19] <jam> rogpeppe: as in, if it exists, we'll put it into the .jenv when we create it
[11:19] <jamespage> jam: http://paste.ubuntu.com/6447635/
[11:19] <rogpeppe> jam: hmm, yes we probably do, if there's no .jenv file
[11:19] <jamespage> cloud-init-output
[11:20] <jam> jamespage: so I think I can get what I was looking for from cloud-init. Maybe the .jenv next (unless you've already nuked it)
[11:20] <rogpeppe> jamespage: i understand the issue here - we should do a better job of knowing when to retry a connection
[11:20] <jam> rogpeppe: well the problem is that we just bootstrapped and then failed
[11:20] <jam> to do status
[11:20] <rogpeppe> jam: from a different machine, right?
[11:21] <jam> rogpeppe: no, from the same machine
[11:21] <jamespage> jam: http://paste.ubuntu.com/6447662/
[11:21] <jamespage> jenv
[11:21] <jam> as in "juju bootstrap && juju status" => bad TLS cert
[11:21] <rogpeppe> jamespage: ah, ok, i'm looking at bug #1178312 - is that not what we're talking about here?
[11:21] <_mup_> Bug #1178312: ERROR state: TLS handshake failed: x509: certificate signed by unknown authority <config> <ui> <juju-core:Triaged> <https://launchpad.net/bugs/1178312>
[11:21] <jamespage> rogpeppe, I thought so but it appears not
[11:22] <jam> rogpeppe: so bug #1178312 is about "I bootstrapped again and now other people can't talk to it because the cert doesn't match"
[11:22] <jam> rogpeppe: *but* I think james has a different bug which is "I bootstrapped, and I can't connect to the thing I just started"
[11:23] <rogpeppe> jam: if that's the case, that definitely is a bug
[11:23] <jam> jamespage: so if I decode the data in cloud-init-output.log it *doesn't* match the data in your .jenv file
[11:24] <jamespage> jam: well that sucks :-)
[11:24] <jam> jamespage: you're sure this is a matching "destroy-environment" into "bootstrap" into "status" ?
[11:24] <jam> If you just nuke the file and bootstrap again, it may fail to bootstrap but still generate a new cert, etc.
[11:25] <jamespage> jam: cloud-init-output and jenv definitely match
[11:26]  * rogpeppe wishes that certificates were not encoded in opaque asn1 format
[11:26] <jamespage> http://paste.ubuntu.com/6447671/
[11:26] <jamespage> thats status
[11:27] <jamespage> the bootstrap was already OK
[11:27] <jam> jamespage: "was already ok" ?
[11:28] <jamespage> jam: I'd given you the output of the environment I just bootstrapped already
[11:28] <jam> jamespage: if you nuked ~/.juju/environments/* then we don't have the cert recorded anymore
[11:28] <jamespage> jam: no - that was prior to this run through
[11:28] <jam> jamespage: k, so you nuked it, then ran bootstrap and status
[11:28] <jamespage> yup
[11:28] <rogpeppe> i agree with jam - the cert and private key in the cloudinit output don't seem to match the .jenv contents
[11:29] <rogpeppe> jamespage: you didn't just check the first few lines, did you?
[11:29] <jamespage> OK - lemme teardown, scrub and do it again
[11:29] <jam> rogpeppe: but does it match the stateserver key or does it match the cacert key
[11:29] <jam> I'm checking that one
[11:30] <rogpeppe> jam: the certificate should be the same in both cases, i think
[11:30] <jam> rogpeppe: I thought we had a CA key that then generated other keys for all the agents
[11:31] <jam> regardless something funny
[11:31] <rogpeppe> jam: we do, but i'm just looking at the certificate which signs that key
[11:31] <jam> because .jenv(ca-cert) matches the first ~5 lines of cloud-init-output cacert.decode('base64')
[11:31] <jam> but they *don't* match the remaining lines
[11:31] <jam> which seems surprising
[11:31] <rogpeppe> jam: i mean which is signed by that key, i guess
[11:32] <rogpeppe> jam: the first 5 lines are boilerplate
[11:32] <rogpeppe> jam: the actual rsa key comes later on in the cert data
[11:32] <rogpeppe> jam: that's why it would be nice to have human-readable certs...
[11:34] <jamespage> jam, rogpeppe: http://paste.ubuntu.com/6447700/
[11:35] <jamespage> bootstrap-jenv-status - all appear to be a consistent key
[11:35] <rogpeppe> jamespage: so is it working now?
[11:35] <jamespage> just waiting for the bootstrap unit to startup
[11:35] <jamespage> (it's maas on physical hardware)
[11:36] <jam> rogpeppe: well, I can load it into python and OpenSSL.crypto.load_certificate(OpenSSL.crypto.FILETYPE_PEM, cacert) but I'm not sure that helps much :)
[11:37] <rogpeppe> jam: i've used openssl before, i think
[11:37] <jam> rogpeppe: I just mean I can get it into a parsed form, but I'm not really sure what to do from there
[11:37] <rogpeppe> jam: dump it out as text...
[11:39] <rogpeppe> jam: it would be really nice if we generated the env uuid client-side and encoded in the certificate somewhere
[11:40] <jamespage> wtf - now its working
[11:40] <rogpeppe> jam: here's the textual form of the cert, BTW: http://paste.ubuntu.com/6447718/
[11:40] <jamespage> the bootstrap node landed on a different machine
[11:40] <jamespage> wonder if we have something wonky in DNS/DHCP world
[11:40] <rogpeppe> jamespage: i have a faint suspicion as to what might have been going on for you
[11:40] <rogpeppe> jamespage: or... maybe not :-)
[11:41] <rogpeppe> jamespage: if you really were trying to talk to the wrong machine, then everything was working as designed.
[11:41] <rogpeppe> jamespage: except that it would be nice to have some better error messages
[11:41] <jamespage> rogpeppe, I don't think that was the case
[11:42] <jamespage> all of the other servers in the lab are powered off apart from the bootstrap node
[11:42] <rogpeppe> jamespage: that's a fair indication :-)
[11:42] <rogpeppe> jamespage: so, i suspect that you accidentally bootstrapped twice
[11:42] <rogpeppe> jamespage: each bootstrap generates its own CA cert
[11:43] <rogpeppe> jamespage: if you wipe ~/.juju/environments, it can't know that it's a bad idea to try to bootstrap again
[11:43] <rogpeppe> jamespage: (if a .jenv file already exists, it won't recreate CA cert, BTW)
[11:43] <jamespage> sure
[11:44] <jamespage> maybe
[11:44] <jamespage> lets see
[11:44] <jam> rogpeppe: but if you actually wiped it, then it wouldn't know what machine was started earlier, and it would either (a) look up the provider-state to see if it already existed or (b) generate a new control bucket which means it would bootstrap another machine)
[11:44]  * jamespage runs the full test cycle again
[11:45] <rogpeppe> jamespage: it would look up the provider-state, which in this case would have the old machine, right?
[11:45] <rogpeppe> s/jamespage/jam/
[11:45] <jamespage> phew
[11:45] <jamespage> :-)
[11:46] <rogpeppe> jam: which makes me think that it would be good to keep the env uuid in the provider state for sanity checking
[11:46] <rogpeppe> jam: not that we can do that currently, of course
[11:47] <jam> rogpeppe: but *bootstrap* would fail with "already bootstrapped" not succeed and just have generated new cruft
[11:47] <jam> rogpeppe: I have no problem with the sanity checks.
[11:47] <rogpeppe> jam: it would, yes, but if you ignored that error message and typed "juju status", wouldn't you get the symptoms we've seen above?
[11:48] <jam> rogpeppe: and certainly for bug #1178312 we should notice that it is a TLS error (not a failed to connect but a I didn't get what I thought would be there) and die immediately rather than retrying
[11:48] <_mup_> Bug #1178312: ERROR state: TLS handshake failed: x509: certificate signed by unknown authority <config> <ui> <juju-core:Triaged> <https://launchpad.net/bugs/1178312>
[11:48] <jam> rogpeppe: you would if james hadn't already pasted the bootstrap output without a failure message
[11:48] <rogpeppe> jam: which also comes back to: if we've generated a .jenv file when bootstrapping, and the bootstrap fails, we should remove the .jenv file
[11:48] <jam> rogpeppe: the original bootstrap: http://paste.ubuntu.com/6447604/
[11:49] <jam> rogpeppe: I'm a little concerned about: juju.provider.maas environ.go:193 picked arbitrary tools
[11:49] <jam> but that shouldn't affect anything with mongo
[11:50] <rogpeppe> jam: presumably that's not the *original* bootstrap, otherwise there wouldn't have been a .jenv file already there
[11:52] <jam> rogpeppe: so to follow the conversation, james had done one in the past, then nuked it and deleted everything from ~/.juju/environments then did it again, and the bootstrap + status + cloud-init-output (etc) were all from that pristine try
[11:52] <jamespage> urg - its back
[11:52] <rogpeppe> jamespage: you've got the same problem again?
[11:52] <jamespage> juju/maas picked a different node (the one that it picked earlier) and I get this issue
[11:52] <jam> jamespage: how are you killing the system ?
[11:52] <jam> with "juju destroy-environment" /
[11:52] <jam> ?
[11:53] <jamespage> yes
[11:54] <rogpeppe> jam: there's actually something useful we can determine from the certs - we can find out when the cert was created
[11:54] <jam> jamespage: so if all of this is fresh, the contents of cloud-init-output.log and .jenv give us enough info to see if they are actually not matching.
[11:54] <rogpeppe> jamespage: ok, so, from scratch now: could you paste the contents of the .jenv file and the contents of the cloud-init-output log file on the bootstrap node, please?
[11:55] <jam> preferably in the same paste so we don't get confused later :)
[11:55] <jamespage> http://paste.ubuntu.com/6447758/
[11:55] <jamespage> jenv
[11:55] <jamespage> cloud-init-output: http://paste.ubuntu.com/6447759/
[11:56] <jamespage> jam, rogpeppe: all yours ^^
[11:58] <rogpeppe> jamespage: the bootstrap node was bootstrapped on the 8th november
[11:58] <rogpeppe> jamespage: it has not just been created
[11:58] <rogpeppe> jamespage: i guess that means that destroy-environment has not destroyed it correctly
[11:59] <TheMue> rogpeppe: any chance to take a quick look on https://codereview.appspot.com/24040044/
[11:59] <TheMue> ?
[11:59] <rogpeppe> jamespage: which is, i suppose, only to be expected, because we're using the new metadata to try to destroy the old environment
[11:59] <jamespage> urgh - OK - lemme go check wtf is going on
[11:59] <jamespage> its being powercycled but it looks like its not being re-installed
[12:01] <rogpeppe> jam: would it be a terrible security hole if we allowed unauthenticated access to an API to find out its uuid?
[12:02] <jam> rogpeppe: the client would still have to allow a TLS connection to a machine it doesn't recognize
[12:02] <rogpeppe> jam: not necessarily
[12:02] <rogpeppe> jam: we could allow http access for just that info
[12:02] <rogpeppe> jam: although... https and http are on different ports, aren't they?
[12:03] <rogpeppe> jam: or, actually, it's just a client-side thing isn't it?
[12:04] <jam> so I think you can technically do both on the same port, but it is really hard to do
[12:04] <jam> because https is just TLS and raw HTTP underneath
[12:04] <jam> and I *think* the server inits the TLS data
[12:04] <rogpeppe> jam: we could just provide a UUID method alongside Login, and the client could fall back to trying https allowing any cert to make a decent error message
[12:05] <jam> rogpeppe: well, we can assume that the UUID is wrong if the TLS cert doesn't match
[12:05] <jam> I don't think reporting a UUID actually helps *users*
[12:05] <jam> as they don't know WTF
[12:06] <jam> it is
[12:06] <rogpeppe> jam: well, if the TLS cert doesn't match, it may be because someone's trying to break in, i suppose
[12:07] <jam> rogpeppe: if they can break in, then unauthenticated UUID isn't going to help us
[12:08] <rogpeppe> jam: another possibility is that the client could look at the certificate presented by the server and use it to tell the user when the environment was bootstrapped
[12:09] <rogpeppe> jam: that's probably more useful
[12:09] <rogpeppe> jam: but i do think that env uuids will become more useful (and recognised) as time goes on and multi-environment setups become more common.
[12:10] <jam> rogpeppe: it is still a UUID which means random hex bytes
[12:10] <jam> not "my environment named X"
[12:10] <jam> I think env name + timestamp might make way more sense to the user
[12:11] <rogpeppe> jam: we have to think carefully about the role of environment names
[12:11] <rogpeppe> jam: currently there's nothing stopping an environment having different names when used by different clients, i think
[12:13] <rogpeppe> jam: currently an environment certificate does include the name that the environment was given, but that's by no means unique
[12:14] <jam> rogpeppe: sure, but it isn't quite about uniqueness, it is about giving them something that they can understand
[12:14] <jam> and UUID is not that
[12:14] <jam> they can lookup a UUID somewhere, and kind-sorta-squint to see if it looks like this other one
[12:14] <jam> but it doesn't have *meaning*
[12:15] <rogpeppe> jam: i'm concerned it might be misleading
[12:16] <rogpeppe> jam: for example, if i send someone a .jenv file and they store it as "foo.jenv" but the original bootstrapper named it "bar.jenv", the messages will print "bar" but the user would expect "foo"
[12:16] <rogpeppe> jam: because that's their local name for the environment
[12:16] <jam> rogpeppe: if you send it to them, it has a name already
[12:17] <jam> so "yes" with a big "but not really"
[12:17] <rogpeppe> jam: depends how you send it to them
[12:17] <jam> rogpeppe: 90% of all ways involve a filename  (yes you could paste it somewher)
[12:18] <rogpeppe> jam: yeah, i was thinking pastebin
[12:19] <rogpeppe> jam: and i might *need* to rename it - for example, if you sent me an "ec2.jenv", i'd need to rename it so it didn't clash with my own env of that name
[12:19] <rogpeppe> jam: i think this may tie in with user auth stuff
[12:20] <rogpeppe> jam: for instance, if the name was "john meinel's ec2", that would be more useful
[12:27]  * TheMue => lunch
[13:34] <bac> hi jam, i have some questions about juju local's use of mongo.  do you know much about how it works?
[13:34] <jam> bac: I know a bit at least
[13:34] <jam> what's up?
[13:34] <adeuring> rogpeppe: could you please have a look here: https://codereview.appspot.com/29680043 ?
[13:34] <jam> adeuring: I reviewed it already
[13:34] <jam> though you're welcome to ask for another
[13:35] <bac> jam i see it creates /etc/init/juju-db-bac-blah.  is that supposed to be removed when the environment is destroyed?
[13:35] <bac> jam i ask b/c it lingered and upon reboot prevented my *real* /etc/init/mongodb.conf from launching the mongo i need for other work
[13:35] <adeuring> jam: thanks, that was fast -- should have looked at the page first ;)
[13:35] <jam> bac: I believe so. I know axw__ has been doing some work in that area.
[13:35] <jam> adeuring: I just happened to see the email come in
[13:36] <bac> jam: if that mongo instance is ephemeral, why put entries in /etc/init?
[13:36] <jam> bac: because it isn't
[13:37] <jam> bac:  that is the central DB talking about all of the instances you have on your machine
[13:37] <jam> if you reboot, it is supposed to come up again
[13:37] <bac> jam, ah, ok
[13:37] <bac> jam, well it needs to play nice with real mongo
[13:38] <bac> jam, now that i understand more i can file a better bug.  thank you.
[13:38] <jam> bac: so you can certainly file a bug that we shouldn't be setting something in /etc/default/mongodb. We *do* it because everywhere but local that machine isn't running a mongo otherwise
[13:38] <jam> and installing the mongodb-server automatically starts an instance we won't use
[13:38] <jam> without that line
[13:38] <jam> bac: I believe the fix is to have juju depend on a "juju-db" package which is mongo without the default instance
[13:38] <jam> bac: you can poke jamespage for more details there
[13:39] <bac> jam, i don't think /etc/default/mongodb is the problem but the /etc/init/juju-db-<user> one
[13:39] <jam> bac: we write "don't start" to /etc/default/mongodb when we create ours
[13:39] <bac> jam: a good bug report is descriptive not prescriptive so i'll leave the solution to y'all.  :)
[13:39] <bac> (or so preaches bigjools)
[13:40] <jamespage> jam, bac: yeah - not done that yet
[13:40] <jam> jamespage: but you agree on the proposed "how it should be fixed" ?
[13:40] <jam> (we stop writing to /etc/default/mongodb, but depend on a different package)
[13:40] <jamespage> jam: broadly yes
[13:40] <jamespage> binary only package +1
[13:41] <jam> bac: so even if we do/don't remove that file, you need to fix /etc/default/mongodb
[13:41] <jam> if you want an actual Mongo running
[13:41] <jam> they *can* coexist, but we have the problem that on "normal" nodes we were starting 2 processes and only wanted 1
[13:42] <jam> but that does impact a local test when you actually do want the other one to run
[13:42] <bac> jam, i have no /etc/default/mongodb.
[13:42] <bac> jam the problem i see is /etc/init/juju-db starts before /etc/init/mongodb and the latter sees an instance running and does nothing
[13:46] <jam> jamespage,bac: ok, so this is slightly different. I know we do write /etc/default/mongodb but that *might* only be triggered in non-local.
[13:46] <jam> jamespage: the issue bac is pointing to is that the default /etc/init/mongodb just uses 'start-stop-daemon'
[13:46] <jam> which seems like it just does a ps to see if anything named "mongodb" is running
[13:46] <jam> and doesn't notice that the one that is running is using a different config
[13:46] <jamespage> hmm
[13:46] <rogpeppe> bac: i'm slightly surprised that /etc/init/mongodb sees an instance running - how does it know that?
[13:46] <jamespage> that sucks a bit
[13:47] <bac> jam, jamespage: bug 1253084 filed.  please augment as needed.
[13:47] <_mup_> Bug #1253084: local use of mongo prevents default mongodb from starting <juju-core:New> <https://launchpad.net/bugs/1253084>
[13:47] <rogpeppe> jam: ah, i see
[13:47] <jam> rogpeppe: right "man start-stop-daemon"
[13:47] <rogpeppe> jam: that sounds like definite crack to me
[13:47] <jam> rogpeppe: it works well if you have things controlled by one file and really want just one running
[13:49] <rogpeppe> jam: if /etc/init/mongodb will only work properly if there's no other mongodb started anywhere, then we have a problem
[13:49] <jam> jamespage: so if we called the binary only thing /usr/bin/jujudb we would get around that, but I'm not sure that is great, either
[13:49] <jamespage> jam: its fixable
[13:50] <rogpeppe> why is /etc/init/mongodb using start-stop-daemon anyway?
[13:50] <rogpeppe> it *is* the daemon... or should be
[13:50] <jam> rogpeppe: cause its a shortcut for doing that sort of thing
[13:50] <jam> it handles the pid file, etc.
[13:50] <jam> rogpeppe: it is "/etc/init/mongodb.conf" which isn't a daemon
[13:51] <rogpeppe> jam: it bypasses all the nice namespacing that upstart gives you
[13:51] <jam> rogpeppe: I can imagine it was because someone ported an /etc/init.d script
[13:51] <rogpeppe> jam: sounds very plausible
[13:52] <rogpeppe> jam: i suppose what i mean is that upstart should be the one keeping track of started processes and daemons.
[13:53] <jam> rogpeppe: sounds like, but I think jamespage would know best here
[13:53] <jamespage> rogpeppe, yup
[13:54] <rogpeppe> jamespage: that /etc/init/mongodb uses start-stop-daemon seems pretty much like a straight-up bug to me
[13:54] <jamespage> the fix is a hack - you just tell start-stop-daemon to look for a process name that will never exist
[13:54] <jamespage> rogpeppe, that is not
[13:54] <jamespage> a bug
[13:54] <jam> jamespage: that doesn't prevent it from double starting, does it?
[13:55] <jamespage> upstart manages the process via start-stop-daemon just fine
[13:55] <jam> so if you did "start mongodb; start mongodb" you'd end up with 2 of them ?
[13:55] <jamespage> all its doing is changing the uid
[13:55] <jamespage> (unlike the juju mongodb which just runs as root :-))
[13:55] <jamespage> jam: ^^ that will get picked up in the MIR work this cycle
[13:56] <jam> jamespage: k. Because juju mongodb doesn't need root (AFAICT)
[13:56] <jamespage> yup
[13:56] <jamespage> I think I raised that bug :-)
[13:56] <jam> jujud probably does
[13:56] <jamespage> agreed
[13:56] <jamespage> that's OK tho
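For readers without the file to hand: from memory, the packaged mongodb upstart job is shaped roughly like this (a paraphrase, not the exact file), which is where both mechanisms discussed above live: the /etc/default/mongodb file that juju overwrites with ENABLE_MONGODB="no", and the start-stop-daemon call whose running-process check gets confused by juju's own mongod.

```
# /etc/init/mongodb.conf (paraphrased sketch, not the exact packaged job)
start on runlevel [2345]
stop on runlevel [06]

script
  ENABLE_MONGODB="yes"
  # juju writes ENABLE_MONGODB="no" here so this job stays down on
  # machines where jujud runs its own mongod.
  if [ -f /etc/default/mongodb ]; then . /etc/default/mongodb; fi
  if [ "$ENABLE_MONGODB" = "yes" ]; then
    exec start-stop-daemon --start --quiet --chuid mongodb \
        --exec /usr/bin/mongod -- --config /etc/mongodb.conf
  fi
end script
```

jamespage's proposed tweak below is to add `--name mongodb-server` to that start-stop-daemon line: since no process will ever actually be named mongodb-server, the already-running check never matches a stray mongod, and the job always starts its own.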
[13:57] <jcastro> http://summit.ubuntu.com/uds-1311/2013-11-20/
[13:58] <jcastro> rogpeppe, or fwereade ^^^
[13:58] <jcastro> we have a session in 2 hours
[13:58] <rogpeppe> jcastro: "juju code dev update" ?
[13:59] <jcastro> yeah basically what users can expect for 14.04
[14:00] <rogpeppe> jcastro: fwereade has a better bird's eye view than me at this point
[14:00] <jam> jamespage: so there isn't a way to ask upstart to change the UID ?
[14:00] <rogpeppe> jamespage: so could mongodb.conf just use sudo -u mongodb /usr/bin/mongodb ... ?
[14:01] <jamespage> rogpeppe, that is exceptionally bad as it starts a user session
[14:01] <jamespage> (try shutting down a desktop when you do that)
[14:01] <jam> rogpeppe: http://superuser.com/questions/213416/running-upstart-jobs-as-unprivileged-users
[14:01] <jam> new versions can use "setuid"
[14:01] <jamespage> jam: there is a bit globally for the configuration
[14:01] <rogpeppe> jamespage: no way to avoid that?
[14:01] <jam> but maybe we need to still support P?
[14:01] <jamespage> so you can do mkdir -p /var/lib/mongodb in pre-start
[14:02] <jam> (which it does)
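On an upstart new enough to have the stanza (per the superuser link above), the same privilege drop without start-stop-daemon is just setuid plus a pre-start; a sketch only, with the caveat noted in the comment:

```
# sketch: requires an upstart with the setuid stanza
setuid mongodb

pre-start script
  # note: with setuid, pre-start also runs as the unprivileged user,
  # so this only works once /var/lib/mongodb's parent is writable or
  # the directory already exists
  mkdir -p /var/lib/mongodb
end script

exec /usr/bin/mongod --config /etc/mongodb.conf
```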
[14:02] <jcastro> fwereade, are you available in 2 hours?
[14:04] <rogpeppe> jcastro: he wasn't too well earlier. i'm ok doing it if he can't make it.
[14:05] <rogpeppe> jcastro: or perhaps jam might want to
[14:05] <jcastro> I don't care who it is as long as they can just talk about juju
[14:05] <jcastro> jam is fine! :)
[14:05] <jam> rogpeppe: jcastro: its pretty late for me
[14:05] <fwereade> jcastro, rogpeppe: well, I just crawled up to tell people I wasn't likely to come back today
[14:05] <jcastro> ok
[14:05] <rogpeppe> fwereade: np
[14:05] <jcastro> is there anyone available?
[14:05] <rogpeppe> fwereade: could you maybe link me to our current roadmap document, if there is one?
[14:06] <rogpeppe> fwereade: i'm not quite sure what's likely to get in 14.04 currently
[14:06] <fwereade> rogpeppe, I'm pulling all that back together into something resembling sanity but it's not currently a doc
[14:06] <jam> jcastro: so the ones you want someone for are at 16:05 UTC and 18:05 UTC, except there are 2 Juju ones at 18:05 (I guess one is GUI)
[14:06] <fwereade> rogpeppe, I will give you the overview
[14:06] <rogpeppe> fwereade: thanks
[14:06] <jam> rogpeppe: there is also https://docs.google.com/a/canonical.com/spreadsheet/ccc?key=0At5cjYKYHu9odDJTenFhOGE2OE16SERZajE5XzZlRVE&usp=drive_web#gid=0
[14:07] <jam> which isn't 100% up to date, but can give some hints
[14:07] <jam> rogpeppe: the one at 18:05 looks more like a jamespage one
[14:07] <jam> "what do we need to do to get Juju into main"
[14:08] <jam> http://summit.ubuntu.com/uds-1311/meeting/22112/juju-core-development-update/ looks like a summary of what is going on
[14:16] <natefinch> jam, rogpeppe, fwereade: I'm also available to go to meetings, since they're more in my time zone
[14:28] <rogpeppe> jamespage, jam: ISTM that another and slightly easier fix to /etc/init/mongodb.conf would be to pass "-u mongodb" to start-stop-daemon
[14:34] <jamespage> rogpeppe, "--name mongodb-server" will also work as no process will ever be named mongodb-server
[14:34] <jamespage> which avoids problems if someone wants to run multiple mongod processes under the mongodb user
[14:35] <rogpeppe> jamespage: it's not clear to me how --exec interacts with --name interacts with --start
[15:26] <jcsackett> sinzui, abentley: can one of you look at https://code.launchpad.net/~jcsackett/charmworld/better-sparklines/+merge/195972 when you have time?
[15:45] <abentley> jcsackett: looking
[15:47] <jcsackett> abentley: thanks
[15:48] <abentley> jcsackett: r=me.
[15:48] <jcsackett> abentley: thanks!
[16:04] <rogpeppe> jcsackett: have you got a hangout link?
[16:05] <rogpeppe> oops
[16:05] <rogpeppe> s/jsackett/jcastro/
[16:05] <rogpeppe> jcastro: ^
[16:05] <jcsackett> :-P
[16:05] <jcsackett> happens all the time. :-)
[16:05] <rogpeppe> jcsackett: i can see why
[16:05] <jcastro> sec
[16:06] <jcastro> https://plus.google.com/hangouts/_/7ecpjmodidvjime8kk2kb7s28k?authuser=0&hl=en
[16:06] <jcastro> rogpeppe, he's the good looking one
[16:37] <jam> good job rogpeppe
[16:37] <rogpeppe> jam: thanks. did i sound reasonably coherent?
[16:37] <jam> yeah
[16:37] <jam> I felt you did well
[16:38] <rogpeppe> jam: phew :-)
[18:04] <jamespage> who from the juju-dev team is attending the Juju activities for 14.04 session starting shortly?
[18:05] <natefinch> jamespage: I can attend
[18:05] <jamespage> excellent
[18:05] <jamespage> #ubuntu-uds-servercloud-1
[19:05] <rogpeppe> g'night all
[19:06] <natefinch> night roger
[19:25] <sinzui> natefinch, I think Bug #1057665 is fix committed in trunk
[19:25] <_mup_> Bug #1057665: juju destroy-environment is terrifying; please provide an option to neuter it <canonical-webops> <destroy-environment> <pyjuju:Fix Committed by hazmat> <juju-core:In Progress by natefinch> <https://launchpad.net/bugs/1057665>
[19:28] <jam> sinzui: it depends how you interpret that bug. I think they are actually asking for a sort of "termination protection" so that you can't just destroy-environment it
[19:28] <jam> we do make it slightly better by requiring you to name it
[19:29] <jam> we could spin off another bug about termination protection
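[editor's note] The safeguard jam mentions, requiring the environment to be named, can be illustrated with a short CLI sketch; the environment name `my-test-env` is hypothetical:

```
# destroy-environment requires the target environment to be named
# explicitly, so a bare invocation cannot wipe out the current
# environment by accident
juju destroy-environment my-test-env
```

The "termination protection" idea discussed here would be a further, separate safeguard on top of this, which is why jam suggests spinning it off into its own bug.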
[19:29] <sinzui> jam, I agree with what is being asked for, but my understanding was that we decided on an argument change
[19:30] <sinzui> jam, why are you awake?
[19:30] <jam> it isn't midnight here yet
[19:30] <jam> 30 min to go
[19:31] <sinzui> natefinch, jam, I need to decide if the bug is fix committed, or deferred to the next milestone.
[19:31] <sinzui> moving it to the next is easy
[19:31] <jam> sinzui: I'm fine closing that one, but I'd like to capture the thought about a second command
[19:32] <jam> whether we *do* that one we can decide later
[19:33] <sinzui> jam, do you know if Bug #1236691 is really In Progress?
[19:33] <_mup_> Bug #1236691: null provider bootstrap fails if default-series does not match target <ssh-provider> <juju-core:In Progress by axwalk> <https://launchpad.net/bugs/1236691>
[19:33] <jam> I think axw was looking at it
[19:34] <jam> but it shouldn't block the release if it doesn't land
[19:34] <sinzui> thank you
[19:38] <jam> sinzui: have a good night
[19:38]  * jam heads to bed
[19:39] <sinzui> goodnight
[19:42] <natefinch> sinzui: hey, I stepped out for a bit.  Sounds like you and jam figured it out, though
[19:44] <natefinch> sinzui: in my opinion, the change I made addresses the problem they stated they had.  I didn't implement their solution to the problem, but that's a different story, as jam said.  I think the feature they proposed is interesting, but problematic to implement, and therefore seems unlikely to be implemented any time soon, with the priorities and manpower we have right now.
[19:54] <sinzui> natefinch, I agree. I prefer bug reports to state the problem, not prescribe a solution. The latter is almost always a unique feature, while the former is easily grouped and duped to form a solution
[19:57] <natefinch> sinzui: yep, exactly
[20:07] <thumper> o/
[20:08] <natefinch> thumper: howdy
[20:09] <thumper> hey
[20:17] <natefinch> thumper: do you know if the mouse pointer will resize along with the font setting?  I haven't rebooted since I changed the font setting, so it might not have gotten applied... but that's one thing that is surprisingly annoying on the high res screen - trying to find and use a really tiny pointer.
[20:18] <thumper> IIRC the mouse pointer is special, and drawn by the graphics driver, most likely fixed size
[20:18] <natefinch> thumper: boo.  I looked for an accessibility thing for that and also failed... hoped there'd be a "bigger mouse pointer for people with bad eyesight" but no luck
[21:52] <hazmat> is there api support yet for deploy ... more specifically charm upload?
[21:53] <hazmat> hmm..
[21:53] <hazmat> conn.PutCharm
[21:53] <natefinch> hazmat: https://blueprints.launchpad.net/juju-core/+spec/t-cloud-juju-cli-api
[21:53] <natefinch> says dimiter is working on deploy
[21:54] <thumper> HA!
[21:54] <thumper> I think I found the bug where the local provider doesn't start properly
[21:54] <thumper> just encountered it.
[21:55]  * thumper adds it to the FIX IT stack
[21:55] <hazmat> natefinch, thanks
[21:56] <natefinch> hazmat: no problem
[21:57] <natefinch> gah... mgo refuses to dial into mongo if I pass it --replSet ... mongo sees the connection and then the connection is immediately dropped.  Sigh.  Gotta figure out what's going on in mgo.
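[editor's note] One known gotcha with `--replSet` (which may or may not be what natefinch is hitting) is that a replica-set member refuses ordinary client traffic until the set has been initiated, and mgo's default `Dial` fails while trying to sync against such a server. A sketch of the usual check, with the port, dbpath and set name assumed:

```
# start mongod as a replica-set member; until the set is initiated, the
# member reports itself as not master and ordinary sessions fail
mongod --dbpath /tmp/db --replSet juju --port 27017

# from another terminal: initiate the single-member replica set
mongo --port 27017 --eval 'rs.initiate()'
```

Before initiation, setting `Direct: true` in `mgo.DialInfo` is the usual way to reach the bare member from Go rather than relying on cluster discovery.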
[22:00] <natefinch> ... but not today.   later all
[22:59] <davecheney> sinzui: ping
[22:59] <davecheney> you were going to send me details on the jenkins setup
[23:31] <sinzui> yep
[23:57] <davecheney> sinzui: ta muchly