[06:40] <marcoceppi> jamespage: ping
[06:41] <jamespage> marcoceppi, hey
[06:41] <marcoceppi> jamespage: thanks for your help yesterday, we had a question today about the ceph charm during our charm school
[06:42] <jamespage> marcoceppi, fire away
[06:42] <marcoceppi> fsid config option says "fsid of the ceph cluster. To generate a suitable value use `uuid`. This configuration element is mandatory and the service will fail on install if it is not provided."
[06:43] <marcoceppi> Why not just have the charm generate an fsid using uuid during a hook instead of having the charm fail deployment? Or, why have the charm fail at all, why not just silently wait until it's set?
[06:43] <jamespage> marcoceppi, because it has to be consistent across all of the monitor nodes and there is no reliable way to do that in hooks (even cluster hooks)
[06:44] <marcoceppi> jamespage: you can't have peer election of fsid?
[06:44] <jamespage> marcoceppi, so injecting it as config (as well as the monitor secret) is a simpler and more reliable approach
[06:44] <marcoceppi> jamespage: gotcha, any feedback on the second half of their question?
[06:44] <jamespage> marcoceppi, well I wanted to ensure that a charm user knows that they have not done something required
[06:45] <jamespage> so erroring out the hook is pretty much the only way to provide feedback!
[06:45] <marcoceppi> jamespage: right
[06:45] <jamespage> marcoceppi, I guess I could have provided default values
[06:46] <jamespage> but I wanted to avoid there being 100's of ceph clusters all with the same fsid and secret :-)
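A minimal sketch of the approach jamespage describes: generate the values up front and inject them as config at deploy time. The config key names (fsid, monitor-secret) and the use of ceph-authtool are assumptions here, not confirmed in the log:

    # generate a cluster fsid and a monitor secret on the client machine
    FSID=$(uuid)
    SECRET=$(ceph-authtool --gen-print-key)   # assumes ceph-common is installed locally

    # write them into a deployment config file and deploy three monitor nodes
    cat > ceph.yaml <<EOF
    ceph:
      fsid: $FSID
      monitor-secret: $SECRET
    EOF
    juju deploy -n 3 --config ceph.yaml ceph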
[06:46] <marcoceppi> jamespage: yeah, it rubs me the wrong way because charms should provide sane defaults
[06:46] <marcoceppi> but I understand why it was done this way
[06:46] <jamespage> marcoceppi, if we had a nice way of doing atomic sharing of stuff like this across peers then it would be a different approach
[06:47] <marcoceppi> they were just laughing at the idea of having to run uuid, and I didn't know enough about the charm to say why it didn't just do that
[06:47] <marcoceppi> jamespage: what happens if you change the fsid after initial deployment? does it ruin everything or will the charm properly accept it?
[06:47] <marcoceppi> can ceph handle a change in fsid*
[06:47] <jamespage> marcoceppi, no it can't
[06:48] <marcoceppi> Okay, so that's why
[06:48] <jamespage> hence the requirement for immutable config - I have expressed this to the juju-core team
[06:48] <marcoceppi> jamespage: what you have now is fine, I can echo that sentiment saying "this affects users"
[06:48] <marcoceppi> give us immutable configs!
[06:48] <marcoceppi> jamespage: thanks for the information
[06:48] <jamespage> please!
[06:48] <jamespage> marcoceppi, oh - the other thing is that changing source does not actually do anything post-install
[06:50] <jamespage> marcoceppi, I know some charms (like the openstack ones) support upgrades through that route
[06:50] <jamespage> ceph does not
[07:01] <marcoceppi> jamespage: thank you sir!
[07:02] <jamespage> marcoceppi, np
[08:24] <_mup_> Bug #1212146 was filed: JuJu fails installing a unit (Failure: exceptions.KeyError: 'JUJU_ENV_UUID') <juju> <juju-jitsu> <maas> <twisted> <juju:New> <https://launchpad.net/bugs/1212146>
[15:15] <utlemming> what is the PPA where Juju pulls its mongodb from for juju-core?
[15:22] <jamespage> utlemming, ppa:juju/stable
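For reference, pulling that mongodb build by hand looks roughly like this; the package name is an assumption, not confirmed in the log:

    sudo add-apt-repository ppa:juju/stable
    sudo apt-get update
    sudo apt-get install mongodb-server   # assumed package; the PPA carries the build juju-core uses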
[15:22] <utlemming> jamespage: thank you kindly Mr. Page
[15:23] <jamespage> sidnei, for the life of me I could not work out why the lxc provider was using an apt proxy of "http://localhost:8000"
[15:23] <jamespage> sidnei, but then I realised it was using the setting from the host
[15:23] <jamespage> sidnei, which in my case refers back to itself
[15:23] <jamespage> ...
[15:35] <sidnei> jamespage: you can change the host to use 10.0.3.1 which is the lxc bridge
[15:35] <jamespage> sidnei, yeah - testing that now
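A sketch of what sidnei suggests: point apt at the proxy via the lxc bridge address so both the host and the containers can reach it. Port 8000 is squid-deb-proxy's default and matches the localhost:8000 setting above; the file path is an assumption:

    # write the proxy setting on the host; the lxc containers inherit it from the host's apt config
    echo 'Acquire::http::Proxy "http://10.0.3.1:8000";' | \
        sudo tee /etc/apt/apt.conf.d/01proxy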
[15:35] <sidnei> jamespage: im open to suggestions on how to improve that. hazmat suggested just assuming that if something is listening on 10.0.3.1:3142 we use it blindly, which i'm not a big fan of. squid-deb-proxy-client would auto-detect a proxy but that might be too magic.
[15:37] <jamespage> sidnei, hmm
[15:37] <sidnei> maybe the answer is just documenting it
[15:38] <jamespage> sidnei, in my setup I actually run squid-deb-proxy on my laptop
[15:39] <jamespage> and I have it configured to use a peer squid-deb-proxy that I have at home, if it's contactable
[15:39] <jamespage> otherwise it just goes direct to the archive
[15:39] <jamespage> sidnei, the nice thing is that everything is local off SSD with no network in the way
[15:39] <jamespage> sidnei, I'd be open to adding squid-deb-proxy to the juju-local package on the list of Depends
[15:41] <sidnei> jamespage: i don't have any preference, but pyjuju had apt-cacher-ng instead
[15:42] <jamespage> sidnei, I never liked that personally
[15:42] <jamespage> sidnei, squid-deb-proxy is in main, apt-cacher-ng is not
[15:42] <sidnei> that's a pretty good argument :)
[15:43] <sidnei> apt-cacher-ng has some advantages: you can tell it to mirror a whole repo, and there's a way to purge unused files manually. but again, someone advanced enough to care can set that up themselves
[15:44] <jamespage> sidnei, I agree it's more repository aware
[15:44] <jamespage> sidnei, but me likes squid :-)
[15:44] <jamespage> sidnei, I swung big time when I realised it did intelligent peering
[15:46] <jamespage> sidnei, although apt-cacher-ng is nicer in that it will map all *.archive.ubuntu.com to a single source
[15:46] <sidnei> *yes*
[15:48] <jamespage> oh - and welcome back debug-hooks
[15:48]  * jamespage had missed that
[15:50] <sidnei> jamespage: ultimately, you're in a better position than me to advise how to auto-configure the containers and whether to pick squid-deb-proxy or apt-cacher-ng.
[15:53] <jamespage> sidnei, OK - lemme give it some thought
[16:00] <arosales> jcastro, do you want to fire up the G+ for the weekly charm meeting?
[16:00] <jcastro> DSJFGHWERTHJFWEDFSA
[16:00] <jcastro> I totally forgot
[16:01] <jcastro> ugh
[16:01] <jcastro> I am totally unprepared
[16:01] <jcastro> one sec.
[16:02] <jcastro> https://plus.google.com/hangouts/_/62e91c8590f44673e739cf992bc2ac06bfebbc3d?authuser=0&hl=en
[16:02] <arosales> Weekly charm meeting URL: https://juju.ubuntu.com/community/weekly-charm-meeting/
[16:03] <arosales> ether pad: http://pad.ubuntu.com/7mf2jvKXNa
[16:03] <arosales> For folks interested in joining our weekly charm meeting
[16:03] <arosales> ^^
[16:20] <ev> is there some trick to testing relation-get/relation-list outside of normal execution in gojuju? I vaguely recall being able to set the right envvars in pyjuju, but the socket path doesn't seem to stay around in gojuju
[16:21] <ev> just trying to test a `for x in $(relation-list); do relation-get private-address $x; done`
[16:21] <sidnei> ev, debug-hooks is back
[16:22] <ev> sidnei: I've never entirely understood debug-hooks. I run it, I get tmux, and then I'm supposed to add relations and things to see them pop up in additional windows?
[16:23] <sidnei> ev, correct. you trigger any hook and it will pop a new window, where you can do some manual stuff and optionally call ./hooks/<hook>
[16:23] <ev> but I'd like to just work with the current arrangement, rather than having to do an add-unit to force it to give me relation-get and relation-list
[16:23] <ev> is this not possible?
[16:23] <sidnei> it may be possible, i just never bothered
[16:24]  * ev shrugs
[16:24] <ev> I'll give that a try
[16:24] <ev> thanks!
[16:27] <ev> that worked. Thanks again, sidnei
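Roughly what that session looks like; the unit and hook names below are hypothetical, and inside the window debug-hooks opens the hook tools are already on the PATH:

    # attach to the unit; juju opens a new tmux window whenever a hook fires
    juju debug-hooks mysql/0

    # inside the window opened for e.g. a db-relation-changed event:
    for x in $(relation-list); do
        relation-get private-address "$x"
    done

    # optionally run the real hook before exiting the window
    ./hooks/db-relation-changed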
[16:28] <sidnei> \o/
[18:50] <jcastro> arosales: ok I think I found a problem with the HP instructions
[18:50] <jcastro> but it might be just me
[18:50] <jcastro> in the instructions the tenant-name is like evilnick-project1
[18:50] <jcastro> but in my HP cloud panel it's a long number
[19:02] <jcastro> arosales: https://bugs.launchpad.net/juju-website/+bug/1212396
[19:02] <_mup_> Bug #1212396: HP Cloud instructions need update <Juju Website:New for evilnick> <https://launchpad.net/bugs/1212396>
[19:03] <arosales> jcastro, are you looking at the tenant name or id?
[19:03] <arosales> jcastro, in the HP console go to
[19:03] <jcastro> yeah
[19:03] <arosales> Account --> Manage Projects
[19:03] <jcastro> right
[19:03] <jcastro> that wasn't there before
[19:04] <jcastro> which is why the docs are out of date
[19:04] <arosales> you should see the project name, and then the ID
[19:04] <jcastro> I know where it is now
[19:04] <arosales> ya, the console has changed since the docs were screen-capped
[19:04] <arosales> jcastro, what you want here is the "name" not the ID though
[19:04] <jcastro> right
[19:05] <arosales> jcastro, thanks for updating it. Let me know if you need me to validate anything
[19:06] <jcastro> it's working for me now, did a deploy, etc.
[19:06] <jcastro> I wonder if we can set a watch on an HP cloud console page
[19:06] <jcastro> or if this will just be the nature of the beast
[21:18] <marcoceppi> jcastro: tenant name, tenant id, and project name all refer to the same project (though the values differ); the one to use is the "Project Name" now in HP Cloud
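For context, the field in question lives in environments.yaml; a rough sketch of an HP Cloud entry from that era (key names other than tenant-name, and all values, are placeholders/assumptions):

    hpcloud:
      type: openstack
      tenant-name: "evilnick-project1"   # the Project *name* from Account -> Manage Projects, not the numeric ID
      username: "someuser"
      password: "somepassword"
      region: "az-1.region-a.geo-1"
      auth-url: "https://region-a.geo-1.identity.hpcloudsvc.com:35357/v2.0/"
      control-bucket: "juju-some-unique-bucket"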
[22:36] <adam_g> uh. with juju-core, how do i remove a relation between two services when a previous relation-joined hook has failed?
[22:36] <adam_g> agent-state-info: 'hook failed: "relation-joined"
[22:37] <adam_g> remove-relation command succeeds, but the relation isn't actually removed. and i am unable to re-add it
[22:37] <thumper> :(
[22:37] <thumper> sounds like a bug
[22:38] <thumper> adam_g: is there a --force or something?
[22:38] <adam_g> thumper, doesn't look like it
[22:38] <adam_g> FWIW, i see the unit attempting to fire the departed and broken hooks, but in this case the charm does not implement either
[22:54] <marcoceppi> adam_g: we ran into this yesterday. juju resolved the service
[22:55] <adam_g> marcoceppi, huh?
[23:18] <marcoceppi> adam_g: if a service is in an error state you can't process any more actions against it until the service/unit has been resolved. Meaning the remove-relation was accepted by the bootstrap node and is queued until you resolve the relation error on the unit.
[23:19] <marcoceppi> running juju resolved against the unit until it drops out of error would then allow it to continue processing the queued events (the removal), which would then allow you to re-add the relation
[23:19] <adam_g> marcoceppi, oh, i read your last comment differently. yeah, i believe i tried to resolve the error and remove the relation
[23:19] <marcoceppi> adam_g: I noticed, yesterday, that we had to run resolved a few times to get a unit back to health, fwiw
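Putting marcoceppi's advice together, the recovery sequence looks something like this; the service and unit names are hypothetical:

    # mark the failed hook as resolved; repeat until the unit drops out of the error state
    juju resolved nova-compute/0
    juju status nova-compute/0   # check whether it still shows 'hook failed'

    # the earlier remove-relation is then processed, after which the relation can be re-added
    juju add-relation nova-compute ceph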