[01:13] So hallyn was telling me on Friday about using KVM with the local provider; is it also possible to use this with the manual provider, i.e. to control a remote KVM host?
[01:14] Or alternatively, change the container type on a manual host after the environment has been bootstrapped?
[01:23] if you put MAAS in a container and use the maas provider, set up ssh keys for the maas user, you can specify the qemu URL to a remote machine for the power control, just need ssh
[01:24] juju add-machine would spin up a KVM instance on that server
[01:31] spaok: Yeah - I've already done that; but I'd like to avoid having to add KVM instances to the MAAS controller
=== rmcall_ is now known as rmcall
[06:58] Good morning Juju world!
[08:01] hi kjackal
[08:02] hi SDBStefano, what's up?
[08:02] how could I create a charm for Xenial instead of trusty?
[08:02] in the metadata.yaml you set the proper series, let me find an example
[08:03] SDBStefano: https://github.com/juju-solutions/layer-apache-bigtop-base/blob/master/metadata.yaml#L10
[08:05] SDBStefano: although the charm build command, when there is a single series, decides to put it under the trusty build path (but that might be some misconfiguration I have on my side)
[08:06] so, I have added "series: - xenial" into the yaml file
[08:07] and I did the build
[08:07] are you saying that the stuff under the trusty directory is now for Xenial?
[08:09] SDBStefano: when you do a charm build, the first line should say where the output directory is
[08:10] SDBStefano: go ahead and open the /metadata.yaml. It should say that the series is xenial, right?
[08:11] ok, it creates a new directory named 'builds'; there the metadata.yaml contains "series": ["xenial"]
[08:12] so, now I should deploy using 'juju deploy $JUJU_REPOSITORY/builds/use --series xenial'
[08:12] SDBStefano: awesome! then this charm is xenial. Yes, you can deploy it now!
[08:14] I have to remove the previous app, but it has an error: - ude error 1 ude local 6 ubuntu
[08:14] so 'juju remove-application ude' is not removing the app
[08:15] is it possible to force the removal?
[08:17] you can remove-machine where the unit is with --force
[08:17] SDBStefano: ^
[08:18] yes, it worked, I'm deploying, thanks for helping
[08:19] you can also deploy the same application with a different name like so: juju deploy $JUJU_REPOSITORY/builds/use myappname --series xenial
[08:19] SDBStefano: ^
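
For reference, the series selection being discussed lives in the charm's top-level metadata.yaml (or its base layer). A minimal sketch for a xenial-only charm follows; the summary and description are placeholders, and "ude" is simply the application name used in the conversation above:

    name: ude
    summary: Example application
    description: |
      Example charm built for xenial only.
    series:
      - xenial

As noted above, charm build prints its output directory on its first line, and the metadata.yaml inside that build directory should then list xenial as the only series.
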
=== tinwood_ is now known as tinwood
[10:24] jamespage, good morning!
[10:25] morning neiljerram
[10:25] how are things?
[10:25] Quite well, thanks!
[10:25] You suggested that I ping you here in your review of https://review.openstack.org/#/c/382563/
[10:25] * magicaltrout knows from experience, anyone who greats another user specifically whilst saying good morning in reality has an issue or favour to ask..... "quite well" or not ;)
[10:26] neiljerram, yes!
[10:26] s/greats/greets
[10:26] So doing that. I think your proposal is basically right, so just looking for pointers on how to start reworking the code into the correct form...
[10:27] magicaltrout, I just thought that 'good morning' was a little more friendly than 'ping'!
[10:27] neiljerram, so since the original calico integration was written, we've done quite a bit of work to minimize the amount of vendor-specific code that is required in the core principal charms
[10:27] neiljerram, you already have a subordinate for nova-compute
[10:27] neiljerram, that's been complemented with the same approach for neutron-api
[10:28] jamespage, yes, agreed, and makes sense.
[10:28] neiljerram, so that all of the SDN-specific bits can reside in an SDN-specific charm
[10:28] neiljerram, right, and the nice bit of this is that gnuoy's just completed some work to make writing those a whole lot easier
[10:28] jamespage, Ah, nice.
[10:28] neiljerram, we've done quite a bit of work on making reactive and layers work for openstack charming this cycle
[10:28] neiljerram, so your charm can be quite minimal
[10:29] neiljerram, https://github.com/openstack/charm-neutron-api-odl has been refactored as an example for reference
[10:29] neiljerram, but we also have a template for charm create
=== caribou_ is now known as caribou
[10:29] gnuoy`, ^^ is that live yet?
[10:30] neiljerram, I'd love to get a neutron-api-calico charm up and running, so we can deprecate the neutron-api bits for the ocata release and remove them next cycle
[10:31] jamespage, the template? That is very nearly ready. I just need to run the guide using the template to check they are both in sync and work
[10:31] jamespage, Just as a thought, might it even make sense to have a single 'neutron-calico' charm that provides both the compute and the server function? I assume it can detect at runtime which charm it is subordinate to? If it's subordinate to nova-compute, it would provide the compute function; if it's subordinate to neutron-api, it would provide the server-side function.
[10:31] neiljerram, that's absolutely fine
[10:31] +1
[10:31] neiljerram, neutron-calico already exists right?
[10:32] jamespage, Thanks. So, is there an example of another SDN subordinate charm that already uses gnuoy's new facilities?
[10:32] jamespage, Yes, neutron-calico already exists (for the compute function).
[10:32] neiljerram, https://github.com/openstack/charm-neutron-api-odl does
[10:32] neiljerram, so, re making neutron-calico a 'does both' type charm
[10:32] neiljerram, right now it's not possible to upgrade from a non-reactive charm to a reactive charm
[10:33] neiljerram, neutron-calico is an older style python charm I think
[10:33] neiljerram, I'm still smoothing off the rough corners but https://github.com/gnuoy/charm-guide/blob/master/doc/source/new-sdn-charm.rst may help
[10:33] jamespage, TBH we've never really tested upgrading yet at all.
[10:33] neiljerram, how many live deployments are you aware of using calico deployed via juju?
[10:33] haha good point
[10:34] neiljerram, in which case I'd take the hit and move to the new layers+reactive approach now
[10:34] jamespage, Just two: OIL, and a Canonical customer that I'm not sure I can name here.
[10:34] neiljerram, ok OIL is manageable
[10:34] neiljerram, I'll poke on the other one :-)
[10:35] gnuoy`, what do you think of the single subordinate doing both roles approach discussed above?
[10:35] jamespage, I think you're right about neutron-calico being an older style charm. So perhaps it would be a simpler first step to make a separate neutron-api-calico, in the most up-to-date style (reactive)
[10:35] jamespage, I'm fine with that
[10:36] neiljerram, ack
[10:36] gnuoy`, that charm-guide update is for the hypervisor integration - do we have the equiv for the API?
[10:37] jamespage, yep, https://review.openstack.org/#/c/387238/
=== gnuoy` is now known as gnuoy
[10:38] gnuoy, sorry I meant the neutron-api subordinate charm version
[10:39] jamespage, no, not atm.
[10:39] gnuoy, ok that's what neiljerram will be after
[10:39] ack
[10:39] neiljerram, you might need to give us a week or two to pull that bit into shape
[10:40] jamespage, then https://github.com/openstack/charm-neutron-api-od is the best bet
[10:40] * https://github.com/openstack/charm-neutron-api-odl
[10:40] jamespage, But I could start by looking at https://github.com/openstack/charm-neutron-api-odl for inspiration, and ask any questions here?
[10:40] neiljerram, ^^ yeah, that by-example approach is our current doc - but we'll be working on that
[10:40] neiljerram, yeah that's fine - but we've moved openstack charm discussion over to #openstack-charms
[10:40] but either is still fine
[10:40] jamespage, Ah OK, I'll go there now...
[10:41] neiljerram, ta
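
For background on the subordinate pattern discussed above: a subordinate charm is flagged in its metadata.yaml and needs at least one container-scoped relation so Juju co-locates it with its principal. The sketch below is purely illustrative for a hypothetical neutron-api-calico subordinate; it uses the generic juju-info interface, whereas a real SDN subordinate such as charm-neutron-api-odl would declare the appropriate neutron plugin interfaces instead:

    name: neutron-api-calico
    summary: Calico SDN configuration for the neutron-api principal charm
    description: |
      Subordinate charm carrying the Calico-specific configuration,
      deployed onto the same machine as neutron-api.
    subordinate: true
    series:
      - xenial
    requires:
      container:
        interface: juju-info
        scope: container
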
[10:52] Hi kjackal, I deployed a local charm, but it's stuck at:
[10:52] UNIT WORKLOAD AGENT MACHINE PUBLIC-ADDRESS PORTS MESSAGE ude/11 waiting allocating 20 10.15.177.80 waiting for machine MACHINE STATE DNS INS-ID SERIES AZ 20 pending 10.15.177.80 juju-19dc1e-20 xenial
[10:53] ok, it has just changed into:
[10:53] MACHINE STATE DNS INS-ID SERIES AZ 20 started 10.15.177.80 juju-19dc1e-20 xenial
[10:53] so it seems very slow
=== saibarspeis is now known as saibarAuei
[12:26] bdx, do I remember you trying to get Juju working with Digital Ocean?
[13:24] Hmm...
[13:24] Why is my "juju status" telling me:
[13:24] Unit Workload Agent Machine Public address Ports Message
[13:24] sme/5* maintenance idle 5 10.190.134.241 Enabling Apache2 modules
[13:25] But as far as I can see, all of my tasks have been done.
[13:25] So the last one should set the state to "active"
[13:25] How can I track it?
[13:32] Spaulding: is there any info in juju debug-log
[13:33] kjackal:
[13:33] unit-sme-5: 14:31:32 INFO unit.sme/5.juju-log Invoking reactive handler: reactive/sme.py:95:enable_apache_mods
[13:33] Hm... it looks like reactive ran this task again
[13:34] that's why my status changed...
[13:40] Spaulding - that's the rub. You cannot guarantee ordering of the methods in reactive unless it's a tightly controlled flow through states. If you want something to be run only once, it would be a good idea to decorate it with a @when_not, and then subsequently set the state it's decorating against. Do note that you'll have to handle removal of that state if you ever want to re-execute that body of code again.
[13:46] lazyPower: yeah, I already noticed that
[13:47] and I'm using when & when_not
[13:47] it's like puppet
[13:50] lazyPower: I found it!
[13:50] @when_not('http_mods_enabled')
[13:50] it should be sme.http ...
[13:50] Spaulding - Sounds like you're on the right track :)
[13:51] because of you guys! :)
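
The run-once pattern lazyPower describes, sketched with charms.reactive; the 'sme.http-mods-enabled' state name, the module list, and the handler body are illustrative guesses rather than the contents of Spaulding's actual charm:

    # Illustrative sketch of the @when_not / set_state run-once pattern.
    import subprocess

    from charms.reactive import when_not, set_state
    from charmhelpers.core import hookenv


    @when_not('sme.http-mods-enabled')     # runs only while this state is unset
    def enable_apache_mods():
        hookenv.status_set('maintenance', 'Enabling Apache2 modules')
        subprocess.check_call(['a2enmod', 'ssl', 'rewrite'])
        # Setting the state stops the dispatcher from re-invoking this handler
        # on later hook executions.
        set_state('sme.http-mods-enabled')

To make the handler eligible to run again later (for example after a configuration change), some other handler has to clear the state with charms.reactive.remove_state('sme.http-mods-enabled').
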
=== coreycb` is now known as coreycb
[14:01] s/greats/greets
[14:17] o/ magicaltrout
[14:51] The 'juju machines' command does not list any machines, but trying to add one leads to the following error message: ERROR machine is already provisioned
[14:52] fbarilla: ah, is the machine you're trying to add in use in another model?
[14:55] I've two models, 'controller' and 'default'. Neither of them lists the machine I want to add
[14:56] In the 'controller' model I've the LXD container where juju has been bootstrapped
[15:02] lazyPower: I broke it! :(
[15:02] https://gist.github.com/pananormalny/765622f3d2c332bd9dece6f35b9ff267
[15:02] maybe someone can spot an issue?
[15:03] it's running in a loop - again... :/
[15:03] the main idea was to run those tasks one by one...
[15:04] from the top to the beginning... in that order
[15:04] to the bottom**
[15:07] Spaulding - I left a few comments on the gist, nothing stands out other than the state of sme.ready being commented out, and instead being set on 331
[15:08] Spaulding I don't see a remove_state however, and that's the other tell-tale, as removing states causes the reactive dispatcher to re-test the state queue to determine what needs to be run, which makes it possible to enter an infinite loop if you're not careful with how you're decorating the methods.
[15:08] but that doesn't appear to be the case, so that last message is more FYI than anything else.
[15:09] lazyPower: about jinja2, I'll probably use it
[15:09] but basically I would like to have some "prototype" on Juju before OpenStack Barcelona
[15:09] so I'm trying to do this ASAP
[15:10] After that I'll have more time to do it properly... now it's just a proof of concept
[15:11] right, just a suggestion
[15:11] it's not wrong to put heredocs in there, it's just not best practice.
[15:11] I know
[15:11] It's a dirty way - but in this case - it's working
[15:12] Spaulding - get me an update on that gist with the output of charms.reactive get_states
[15:12] I imagine what's happened is this isn't a fresh deploy, and you've modified state progression and run an upgrade-charm and now it's misbehaving - is that consistent with what's happened?
[15:13] basically - I'm trying every time to deploy it from scratch
[15:15] Spaulding - `juju run --unit sme/0 "charms.reactive get_states" `
[15:15] assuming sme is the name of your charm and it's unit 0
[15:15] lazyPower: will do
[15:16] lazyPower: is there any other way to remove an application if it fails at the install hook?
[15:16] juju remove-machine # --force
[15:16] cause right now I'm destroying the model
[15:16] that'll strip the underlying machine out from under the unit, and the charm should finish removal on its own
[15:16] I tried with force with 2.0rc3... couldn't get that working...
[15:16] maybe it's fixed(?) now...
[15:18] http://paste.ubuntu.com/23339079/
[15:18] seems like it's functioning fine in 2.0.0
[15:19] notice that it leaves/orphans the application/charm for a short while before reaping the charm.
=== saibarAuei is now known as saibarspeis
=== saibarspeis is now known as saibarAuei
=== saibarAuei is now known as saibarspeis
[16:14] petevg: I added a bunch of comments in reply to and based on your review on https://github.com/juju-solutions/matrix/pull/2
[16:14] bcsaller: Our feedback awaits you. :)
[16:15] cory_fu, petevg: thank you both
[16:15] np
[16:16] cory_fu: thx. Reading your comments ...
[16:19] cory_fu: Do you know what might be causing this? http://pastebin.ubuntu.com/23339455/
[16:19] From running charm build
[16:22] aisrael: Not sure, but I'd guess that the log message contains utf8-encoded data and should maybe be .decode('utf8')ed before being logged?
[16:22] aisrael: Can you drop a breakpoint into /usr/lib/python2.7/dist-packages/charmtools/utils.py and see what the output var holds?
[16:25] cory_fu: Not yet. I'm doing a charm school/training and they hit it. I'll dig in deeper, though. Thanks!
=== dames is now known as thedac
[16:26] aisrael: I'd look for unicode characters in their yaml files, then. metadata.yaml and layer.yaml specifically. Otherwise, I'm not really sure
[16:27] Weird. We haven't touched those at all.
[16:35] cory_fu: looks like it may be related to an actions.yaml that's UTF-8-encoded
[16:35] aisrael: Strange. Well, we should definitely handle that better
[16:36] is there any particular reason we're limiting that to ascii? (just curious)
[16:36] cory_fu: definitely. Once I confirm, I'll file a bug
[16:37] Thanks
[16:45] It definitely looks like a locale/encoding issue. I had them send me the charm and I built it locally with no problem
[16:45] Hrm
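
On the charm build traceback above: assuming the failure is a Python 2 bytes/unicode mismatch when charm-tools logs process output, a defensive guard would look something like the sketch below (this is a hypothetical helper, not the actual code in charmtools/utils.py):

    # Hypothetical helper: decode subprocess output before logging it, so
    # non-ASCII bytes (e.g. from a UTF-8 encoded actions.yaml) don't raise
    # UnicodeDecodeError under Python 2's default ascii codec.
    def log_output(log, output):
        if isinstance(output, bytes):
            output = output.decode('utf-8', errors='replace')
        log.debug(output)
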
[17:08] Is anyone having an issue with 2.0 and lxd where the agents never report as started? This just started for me with the GA release
[17:10] Seems to be an issue with the ssh keys
[17:11] cory_fu: I've been having successful deploys so far
[17:13] cory_fu, cmars: should we also set status to 'blocked', or 'error' here -> http://paste.ubuntu.com/23339654/
[17:14] bdx: I'd say "blocked", yeah
[17:15] cory_fu, cmars: or is setting the supported series in metadata enough?
[17:15] probably both for good measure?
[17:15] bdx, cmars: Setting the supported series in the metadata could (probably would) be overwritten by the charm layer.
[17:15] bdx, cory_fu please open an issue. IIRC I think you can force the series with juju deploy --series
[17:16] cory_fu, I've noticed something with that.. if you have multiple layers that both specify the same series in metadata, the series gets output twice, and the CS rejects that
[17:16] cory_fu, I think I opened a bug..
[17:17] (but I forget where.. so many projects)
[17:17] bdx, interested to get your thoughts on https://github.com/cmars/layer-lets-encrypt/issues/1 as well
[17:18] I think we might be able to make this a little more reusable without introducing too much complexity
[17:18] cmars: https://github.com/juju/charm-tools/issues/257 I do think the series list needs to be de-duped
[17:18] cory_fu, thanks :)
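
On the 'blocked' question above, a hedged sketch of how a layer might surface an unsupported series as a blocked status rather than an error; the state name, the supported-series tuple, and the use of host.lsb_release() are assumptions for illustration, not the contents of bdx's paste:

    # Illustrative only: mark the unit blocked when the series isn't supported.
    from charms.reactive import when_not, set_state
    from charmhelpers.core import hookenv, host

    SUPPORTED_SERIES = ('xenial',)   # assumed list for this example


    @when_not('layer.series-checked')
    def check_series():
        series = host.lsb_release()['DISTRIB_CODENAME']
        if series not in SUPPORTED_SERIES:
            hookenv.status_set('blocked', 'Unsupported series: %s' % series)
            return
        set_state('layer.series-checked')
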
[17:45] Odd. It seems to only be trusty lxd instances that get stuck in pending. Guess I'll try torching my trusty image
[18:02] * magicaltrout hands cory_fu the matches
[18:03] magicaltrout: Thanks, but it didn't help
[18:26] awww
[18:35] cory_fu, xenial host and trusty container?
[18:35] cmars: Yes
[18:35] cmars: It looks like the trusty image isn't getting the lxdbr0 interface for some reason, but I can't find anything obvious in any of the logs I could think of to check
[18:36] cory_fu, one thing it could be is a known systemd issue
[18:36] Oh?
[18:36] cory_fu: 22-09-2016 15:23:18 < stgraber!~stgraber@ubuntu/member/stgraber: cmars: if you mask (systemctl mask) the systemd units related to binfmt (systemctl -a | grep binfmt) and reboot, then the problem is gone for good. This is related to systemd's use of automount and not to binfmt-misc (which just ships a bunch of binfmt hooks)
[18:37] cory_fu, a telltale symptom is that the trusty container gets stuck in mountall
[18:37] cmars: Huh. Worth a shot. FWIW, this only started with the GA. It was fine on the last RC
[18:37] cmars: How do I tell where it's stuck?
[18:38] cory_fu, a ps -ef would show a mountall process, and /var/log/upstart/mountall.log shows some binfmt errors
[18:38] cory_fu, a quickfix is this (pardon my language): https://github.com/cmars/tools/blob/master/bin/lxc-unfuck
[18:38] but masking the systemd units is a better solution
[18:39] cory_fu, unless this is a completely different issue, in which case, forget everything I've said :)
[18:39] cmars: Yep, that seems to be exactly the issue!
[18:40] cory_fu: http://paste.ubuntu.com/23340056/
[18:41] cory_fu: can't seem to replicate :-(
[18:42] bdx: That looks like what I'm seeing. The trusty machine never goes out of pending
[18:42] cmars: I'm not really familiar with systemd mask. Do you have the exact commands handy, or can you point me to some docs?
[18:43] cory_fu: it started .... http://paste.ubuntu.com/23340072/
[18:43] bdx: Oh, well, it never does for me. But it sounds like cmars has the solution for me
[18:43] strange ... nice
[18:46] cory_fu, this *should* do it: sudo systemctl mask $(systemctl -a | awk '/binfmt/{print $2}')
[18:46] cory_fu, this might break binfmt on your host; if you run .NET exe binaries directly, for example, that might stop working
[18:47] cmars: I don't think I do, but good to know. No way to have both work, I take it?
[18:47] cory_fu, ideally, someone would fix this issue in systemd or binfmt or wherever it needs to be done
[18:48] cmars: I don't think that awk is right. It just gives me "loaded" four times
[18:48] cory_fu, what do you get from: systemctl -a | grep binfmt ?
[18:48] I assume it should be $1 instead
[18:49] cmars: http://pastebin.ubuntu.com/23340090/
[18:49] cory_fu, on my machine, $1 is a big white dot for some reason. could be my terminal type?
[18:49] cory_fu, yep, $1 for you
[18:49] Could be. I just have leading whitespace.
[18:49] cmars: In case this causes issues, what would be the command to undo this?
[18:50] cory_fu, systemctl unmask the units that were masked
[18:50] Ok, thanks
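
Collecting the binfmt workaround from the exchange above into one place (these are the commands as discussed; which awk field holds the unit name depends on how systemctl formats its output on your machine, so check the grep output first):

    # See which units are involved and which column carries the unit name:
    systemctl -a | grep binfmt

    # Mask the binfmt-related units (use $1 or $2 to match your output),
    # then reboot, per the stgraber quote above:
    sudo systemctl mask $(systemctl -a | awk '/binfmt/{print $1}')

    # To revert later, unmask the same units:
    sudo systemctl unmask $(systemctl -a | awk '/binfmt/{print $1}')
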
=== alexisb is now known as alexisb-afk
=== med_` is now known as medberry
[19:06] hi all :)
[19:06] I followed these instructions: https://jujucharms.com/docs/stable/getting-started
[19:07] it's pretty clear, except what exactly does that 20GB mean?
[19:08] troontje: during the lxd init step?
[19:08] exactly
[19:08] when setting the loop device
[19:08] troontje: the size of the loopback device to use for storage on LXD machines
[19:08] when LXD boots machines, they'll all be allocated a slice of that storage
[19:09] marcoceppi: ah ok, so that 20GB will be shared by all the machines together?
[19:09] troontje: exactly
[19:10] marcoceppi: I was trying to install openstack base with that 20GB; although it does not fail, it just does not continue the process
[19:10] I first thought that value was per machine
[19:11] troontje: yeah, it's for all machines, and each machine boots with like 8GB of storage from that 20
[19:11] haha, well I can't blame it that it sort of failed
=== alexisb-afk is now known as alexisb
[19:17] marcoceppi: ok, so I set everything back and the canvas is clean again
[19:17] How can I increase that value for the storage?
[19:18] destroying it and making it new is no problem btw
[19:19] bcsaller, petevg: Resolved merge conflicts and added TODO comment re: the model connection. Going to merge now
[19:20] cmars: what happens when you don't have an 'A' record pre-created -> http://paste.ubuntu.com/23340240/
[19:21] cmars: or created pointing at another ip :/
[19:21] bdx, pretty sure in that case that the standalone method will fail
[19:22] yea
[19:22] we should call that out in the layer le readme
[19:22] bdx, ack, good call
[19:23] bcsaller, petevg: Merged
[19:28] bcsaller, cory_fu: Just pushed a small fix to master: glitch now grabs context.juju_model rather than context.model (good name change; just needed to update glitch's assumptions).
[19:31] how can I increase that zfs disk for Juju?
[19:57] cmars: it would be worth noting that layer le cannot be used on lxd/container deploys
[19:57] bdx, true, true. wish there was a way to expose containers through the host machine..
[19:58] cory_fu: can you have a hyphen in auto_accessors? https://github.com/juju-solutions/interface-dfs/blob/master/requires.py#L25
[19:58] kwmonroe: Hyphens are translated to underscores
[19:59] cool, thx cory_fu
=== med_ is now known as Guest62846
[20:01] cmars: That systemctl mask fix worked perfectly. Thanks!
[20:01] bdx, cmars, you could use it with some tweaking of the juju-default lxd profile, but it's not likely to do what you want.
[20:01] cory_fu, sure thing
[20:02] jrwren, ah, that's true. I do some lxd bridging at home, but that doesn't always work on public clouds. ipv4 addresses are in short supply
[20:05] cmars, jrwren: I'm thinking layer:letsencrypt might be best consumed by a public-facing endpoint
[20:06] the public endpoint (nginx/haproxy) could also be a reverse proxy
[20:06] cmars, jrwren: so it could be used as an ssl termination + reverse proxy
[20:07] bdx, a frontend that negotiates certs for its backends, and then does a passthrough to them (haproxy's tcp mode)... interesting
[20:07] cmars: exactly
[20:09] ^ is a skewed model bc it takes for granted that a user would be deploying lxd containers on the aws provider :/
[20:12] but not in all cases
[20:12] just the one I want
[20:12] bdx, I think I'd like to keep layer:lets-encrypt lightweight and usable in standalone web apps like mattermost.. but also usable in such a frontend charm as you describe
[20:13] entirely
[20:13] bdx, with the layer:nginx part removed, I think we'll have something reusable to this end
[20:13] right
[20:15] bdx: did you see the fixes for lxd/local?
[20:15] it was indeed to tune kernel settings
[20:15] bdx: ping if you are still hitting the 8 lxd limit
[20:15] bdx, it'd be really cool if the frontend could operate as a DNS server too. then it could register subdomains AND obtain certs
[20:16] arosales: no I haven't .... just the warning to reference the production lxd docs when bootstrapping .... is this what you are talking about?
[20:17] cmars: that's a great idea ... just what I've been thinking too
[20:24] arosales, rick_h_: I've seen and heard bits and pieces about the rbd lxd backend, and also that getting rbd backend compat with nova-lxd isn't on the current roadmap. Can you comment on the status/progress of the lxd rbd backend work to any extent?
[20:40] bdx: yes, you have to tune settings on the host to spawn more than 8 lxd
[20:41] bdx: re nova-lxd rockstar may have more info
[20:41] bdx: but for general bootstrapping with lxd make sure to tune your host settings per the bootstrap info
[20:42] if you want > 8
[20:42] :-)
[20:47] is there a way to rename a model?
[20:48] jhobbs: no, not currently
[20:48] rick_h_: ok, thanks
[20:49] * magicaltrout went to production with a model called Jeff recently.....
[20:50] magicaltrout: even for testing I would have thought of a more whimsical model name than Jeff
[20:51] ah well, when you're trying to get LXD containers running inside Mesos my ability to come up with names fails me
[21:04] mesos is taxing :-)
[21:10] arosales: I deployed the entirety of openstack (16 units?) onto LXD at the charmer's summit without tuning lxd at all...
[21:15] icey: you were using ubuntu desktop which has different default /etc/sysctl
[21:16] touche bdx :)
[21:16] :)
[21:16] icey: I have as well. It depends on your host settings. We do know on stock ubuntu server that the host settings need to be adjusted
[22:44] lazyPower: ping
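
For anyone hitting the "more than 8 containers" limit mentioned above: the tuning lives in the host's sysctl settings. The values below are indicative of what the LXD production-setup guidance recommended around this time; treat them as an assumption and check the current LXD documentation before applying them:

    # /etc/sysctl.conf additions for hosts running many LXD containers
    fs.inotify.max_queued_events = 1048576
    fs.inotify.max_user_instances = 1048576
    fs.inotify.max_user_watches = 1048576
    vm.max_map_count = 262144

    # apply without a reboot:
    #   sudo sysctl -p
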