=== CyberJacob|Away is now known as CyberJacob
=== CyberJacob is now known as CyberJacob|Away
=== defunctzombie is now known as defunctzombie_zz
=== CyberJacob|Away is now known as CyberJacob
=== CyberJacob is now known as CyberJacob|Away
=== defunctzombie_zz is now known as defunctzombie
=== tasdomas_afk is now known as tasdomas
=== CyberJacob|Away is now known as CyberJacob
=== defunctzombie is now known as defunctzombie_zz
[06:40] jamespage: ping
[06:41] marcoceppi, hey
[06:41] jamespage: thanks for your help yesterday, we had a question today about the ceph charm during our charm school
[06:42] marcoceppi, fire away
[06:42] fsid config option says "fsid of the ceph cluster. To generate a suitable value use `uuid`. This configuration element is mandatory and the service will fail on
[06:42] install if it is not provided."
[06:43] Why not just have the charm generate an fsid using uuid during a hook instead of having the charm fail deployment? Or, why have the charm fail at all, why not just silently wait until it's set?
[06:43] marcoceppi, because it has to be consistent across all of the monitor nodes and there is no reliable way to do that in hooks (even cluster hooks)
[06:44] jamespage: you can't have peer election of fsid?
[06:44] marcoceppi, so injecting it as config (as well as the monitor secret) is a simpler and more reliable approach
[06:44] jamespage: gotcha, any feedback on the second half of their question?
[06:44] marcoceppi, well I wanted to ensure that a charm user knows that they have not done something required
[06:45] so erroring out the hook is pretty much the only way to provide feedback!
[06:45] jamespage: right
[06:45] marcoceppi, I guess I could have provided default values
[06:46] but I wanted to avoid there being 100s of ceph clusters all with the same fsid and secret :-)
[06:46] jamespage: yeah, it rubs me the wrong way because charms should provide sane defaults
[06:46] but I understand why it was done this way
[06:46] marcoceppi, if we had a nice way of doing atomic sharing of stuff like this across peers then it would be a different approach
[06:47] they were just laughing at the idea of having to run uuid, and I didn't know enough about the charm to say why it didn't just do that
[06:47] jamespage: what happens if you change the fsid after initial deployment? does it ruin everything or will the charm properly accept it?
[06:47] can ceph handle a change in fsid*
[06:47] marcoceppi, no it can't
[06:48] Okay, so that's why
[06:48] requirement for immutable config - I have expressed this to the juju-core team
[06:48] jamespage: what you have now is fine, I can echo that sentiment saying "this affects users"
[06:48] give us immutable configs!
[06:48] jamespage: thanks for the information
[06:48] please!
[06:48] marcoceppi, oh - the other thing is that changing source does not actually do anything post install
[06:50] marcoceppi, I know some charms (like the openstack ones) support upgrades through that route
[06:50] ceph does not
[07:01] jamespage: thank you sir!
[07:02] marcoceppi, np
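For reference, a rough sketch of the flow jamespage describes: generate the fsid once with `uuid` and inject it as deploy-time config rather than expecting each node to invent its own. Only the `fsid` option and the `uuid` tool are confirmed in the conversation above; the `monitor-secret` option name and the ceph-authtool hint are assumptions, so check the charm's config.yaml before relying on them.

    sudo apt-get install uuid                    # provides the `uuid` command the config description refers to
    cat > ceph.yaml <<EOF
    ceph:
      fsid: $(uuid)                              # must be identical across all monitor nodes
      monitor-secret: <key from ceph-authtool>   # assumed option name; placeholder value
    EOF
    juju deploy --config ceph.yaml -n 3 ceph     # three units for a monitor quorum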
=== CyberJacob is now known as CyberJacob|Away
=== defunctzombie_zz is now known as defunctzombie
[08:24] <_mup_> Bug #1212146 was filed: JuJu fails installing a unit (Failure: exceptions.KeyError: 'JUJU_ENV_UUID')
=== defunctzombie is now known as defunctzombie_zz
=== defunctzombie_zz is now known as defunctzombie
=== defunctzombie is now known as defunctzombie_zz
=== jam1 is now known as jam
[15:15] what is the PPA where Juju pulls its mongodb from for juju-core?
=== tasdomas is now known as tasdomas_afk
[15:22] utlemming, ppa:juju/stable
[15:22] jamespage: thank you kindly Mr. Page
[15:23] sidnei, for the life of me I could not work out why the lxc provider was using an apt proxy of "http://localhost:8000"
[15:23] sidnei, but then I realised it was using the setting from the host
[15:23] sidnei, which in my case refers back to itself
[15:23] ...
[15:35] jamespage: you can change the host to use 10.0.3.1 which is the lxc bridge
[15:35] sidnei, yeah - testing that now
[15:36] jamespage: I'm open to suggestions on how to improve that. hazmat suggested just assuming that if something is listening on 10.0.3.1:3142 just use that blindly, which I'm not a big fan of. squid-deb-client would auto-detect a proxy but that might be too magic.
[15:37] sidnei, hmm
[15:37] maybe the answer is just documenting
[15:38] sidnei, in my setup I actually run squid-deb-proxy on my laptop
[15:39] and I have it configured to use a peer squid-deb-proxy that I have at home if it's contactable
[15:39] otherwise it just goes direct to the archive
[15:39] sidnei, the nice thing is that everything is local off SSD with no network in the way
[15:39] sidnei, I'd be open to adding squid-deb-proxy to the juju-local package on the list of Depends
[15:41] jamespage: i don't have any preference, but pyjuju had apt-cacher-ng instead
[15:42] sidnei, I never liked that personally
[15:42] sidnei, squid-deb-proxy is in main, apt-cacher-ng is not
[15:42] that's a pretty good argument :)
[15:43] apt-cacher-ng has some advantages: you can tell it to mirror a whole repo, and there's a way to purge unused files manually, but again, someone that's advanced enough to care can set that up themselves
[15:44] sidnei, I agree it's more repository aware
[15:44] sidnei, but me likes squid :-)
[15:44] sidnei, I swung big time when I realised it did intelligent peering
[15:46] sidnei, although apt-cacher-ng is nicer in that it will map all *.archive.ubuntu.com to a single source
[15:46] *yes*
[15:48] oh - and welcome back debug-hooks
[15:48] * jamespage had missed that
[15:50] jamespage: ultimately, you're in a better position than me to advise how to auto-configure the containers and whether to pick squid-deb-proxy or apt-cacher-ng.
[15:53] sidnei, OK - lemme give it some thought
[16:00] jcastro, do you want to fire up the G+ for the weekly charm meeting?
[16:00] DSJFGHWERTHJFWEDFSA
[16:00] I totally forgot
[16:01] ugh
[16:01] I am totally unprepared
[16:01] one sec.
[16:02] https://plus.google.com/hangouts/_/62e91c8590f44673e739cf992bc2ac06bfebbc3d?authuser=0&hl=en
[16:02] Weekly charm meeting URL: https://juju.ubuntu.com/community/weekly-charm-meeting/
[16:03] etherpad: http://pad.ubuntu.com/7mf2jvKXNa
[16:03] For folks interested in joining our weekly charm meeting
[16:03] ^^
[16:20] is there some trick to testing relation-get/relation-list outside of normal execution in gojuju? I vaguely recall being able to set the right envvars in pyjuju, but the socket path doesn't seem to stay around in gojuju
[16:21] just trying to test a for x in relation list; do relation-get private-address $x; done
[16:21] ev, debug-hooks is back
[16:22] sidnei: I've never entirely understood debug-hooks. I run it, I get tmux, and then I'm supposed to add relations and things to see them pop up in additional windows?
[16:23] ev, correct. you trigger any hook and it will pop a new window, where you can do some manual stuff and optionally call ./hooks/
[16:23] but I'd like to just work with the current arrangement, rather than having to do an add-unit to force it to give me relation-get and relation-list
[16:23] is this not possible?
[16:23] it may be possible, i just never bothered
[16:24] * ev shrugs
[16:24] I'll give that a try
[16:24] thanks!
[16:27] that worked. Thanks again, sidnei
[16:28] \o/
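For anyone trying to reproduce what ev ends up doing: once `juju debug-hooks` opens a window for a relation hook, the hook tools are on the PATH with the relation context already set, so a corrected version of the one-liner above works as-is. The unit and service names below are illustrative, not from the log.

    # on the workstation: attach to the unit and wait for a hook to fire
    juju debug-hooks mysvc/0

    # in the tmux window that opens for a relation hook, the hook
    # environment is already set up, so the hook tools just work:
    for unit in $(relation-list); do
        relation-get private-address "$unit"   # print each remote unit's private address
    done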
=== CyberJacob|Away is now known as CyberJacob
=== defunctzombie_zz is now known as defunctzombie
=== defunctzombie is now known as defunctzombie_zz
=== defunctzombie_zz is now known as defunctzombie
[18:50] arosales: ok I think I found a problem with the HP instructions
[18:50] but it might be just me
[18:50] in the instructions the tenant-name is like evilnick-project1
[18:50] but in my HP cloud panel it's a long number
[19:02] arosales: https://bugs.launchpad.net/juju-website/+bug/1212396
[19:02] <_mup_> Bug #1212396: HP Cloud instructions need update
[19:03] jcastro, are you looking at the tenant name or id?
[19:03] jcastro, in the HP console go to
[19:03] yeah
[19:03] Account --> Manage Projects
[19:03] right
[19:03] that wasn't there before
[19:04] which is why the docs are out of date
[19:04] you should see the project name, and then the ID
[19:04] I know where it is now
[19:04] ya, the console has changed since the docs were screen-capped
[19:04] jcastro, what you want here is the "name", not the ID though
[19:04] right
[19:05] jcastro, thanks for updating it. Let me know if you need me to validate anything
[19:06] it's working for me now, did a deploy, etc.
[19:06] I wonder if we can set a watch on an HP cloud console page
[19:06] or if this will just be the nature of the beast
=== dosaboy_1 is now known as dosaboy
=== tasdomas_afk is now known as tasdomas
=== tasdomas is now known as tasdomas_afk
[21:18] jcastro: tenant name, tenant id, and project name are all the same thing (though different values); it's the "Project Name" now in HP Cloud
[22:36] uh. with juju-core, how do i remove a relation between two services, when a previous relation-joined hook had failed?
[22:36] agent-state-info: 'hook failed: "relation-joined"'
[22:37] the remove-relation command succeeds, but the relation isn't actually removed, and i am unable to re-add it
[22:37] :(
[22:37] sounds like a bug
[22:38] adam_g: is there a --force or something?
[22:38] thumper, doesn't look like it
[22:39] FWIW, i see the unit attempting to fire the departed and broken hooks, but in this case the charm does not implement either
[22:54] adam_g: we ran into this yesterday. juju resolved the service
[22:55] marcoceppi, huh?
=== CyberJacob|Away is now known as CyberJacob
[23:18] adam_g: if a service is in an error state you can't process any more actions against it until the service/unit has been resolved. Meaning the remove-relation was accepted by the bootstrap node and is queued until you resolve the relation error on the unit.
[23:19] running juju resolved against the machine until it drops out of error would then allow it to continue processing the queued events (the removal), which would then allow you to re-add the relation
[23:19] marcoceppi, oh, i read your last comment differently. yeah, i believe i tried to resolve the error and remove the relation
[23:19] adam_g: I noticed, yesterday, that we had to run resolved a few times to get a unit back to health, fwiw
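A condensed sketch of the recovery sequence marcoceppi describes, assuming a unit mysvc/0 stuck on a failed relation-joined hook and a relation to an illustrative service othersvc:

    juju status                             # look for agent-state-info: 'hook failed: "relation-joined"'
    juju resolved mysvc/0                   # may need repeating until the unit leaves the error state
                                            # (juju resolved --retry mysvc/0 re-runs the failed hook instead)
    juju remove-relation mysvc othersvc     # the queued removal can now actually be processed
    juju add-relation mysvc othersvc        # re-add once the old relation is really gone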