=== rmcall_ is now known as rmcall
=== dpm is now known as dpm-afk
=== frankban|afk is now known as frankban
=== zeus is now known as Guest71546
=== marcoceppi_ is now known as marcoceppi
[12:09] marcoceppi: hey. I'm just sorting the ntp related charms so good timing. I've just refreshed cs:~ntp-team/ntpmaster ready for promulgation. ntp next, so I'll review (ha) and merge your mp first
[12:11] (still not sold on this -team vs -charmers, since the set of maintainers of the software vs the charm are totally distinct)
[12:11] stub: I agree, but UX and design folks said -charmers rated poorly given they didn't understand what that was
[12:12] stub: and the majority of charms we'll want to get upstream to take over - or at least help maintain
[12:12] k
[12:15] Still feels rude to be making false claims, since the charm store isn't the only place this is visible. Doubt it makes trouble in reality though.
[12:17] The first real conflict will be between the snap package maintainers and charm maintainers for some product, since I doubt an upstream will take on both simultaneously
[12:21] Real solution seems to be to use the team's displayname rather than the id, but the charm store might not have that information since it is syncing team membership via openid extensions rather than querying the Launchpad API.
=== rogpeppe1 is now known as rogpeppe
[12:30] stub: yeah, but worth bringing up
[12:30] I filed a bug :)
=== dpm-afk is now known as dpm
[13:03] hi, I have a question regarding the juju store and resources
[13:06] I'm developing a charm that uses 3 resources.
[13:06] I've pushed my charm onto cs:~6wind/trusty/virtual-accelerator-12
[13:06] I'm trying to charm-release it, but it complains that resources are missing from the publish request:
[13:07] do I really need to send resources to the store along with my charm? (I would have to use boilerplate files, as my resources contain credentials for using our proprietary software)
[13:11] pascalmazon: you can just upload empty files
[13:11] or some placeholder
[13:11] and have your charm check the content
[13:12] did the juju api for deploy change recently requiring series to not be empty?
[13:17] stokachu: following trunk or in rc1 from b18?
[13:17] rc1 from b18
[13:17] stokachu: not aware of any changes, but to narrow it down I'd have to check the commits tbh
[13:17] we were very careful on the path to rc there
=== jasondotstar_ is now known as jasondotstar
[13:17] ok np
[13:18] it's nbd i can fix it in our api code
[13:19] rick_h_: http://paste.ubuntu.com/23242144/
[13:19] that's the api server error when we tried to deploy that charm via conjure-up
[13:19] not sure why that doesn't default to xenial
[13:19] https://api.jujucharms.com/charmstore/v5/canonical-kubernetes/archive/bundle.yaml is the bundle
[13:19] this works in regular juju deploy though
=== Guest71546 is now known as zeus
[13:20] rick_h_: stokachu what about multi-series subordinates?
[13:21] the kubernetes worker looks like it's xenial only
[13:21] stokachu: hmm, so looks like a change from a long while ago: https://github.com/juju/juju/commit/ff86e5c5413b2920986dc2769d57c6adadf8237f
[13:21] the bundle has a series: xenial defined
[13:22] maybe we just need to specify that in our api call
[13:22] stokachu: I see, so maybe there's something with that not carrying through the bundle.
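A minimal sketch of the placeholder-resource workflow suggested above (13:07-13:12): attach empty files so charm release stops complaining, and have the charm refuse to proceed until real content arrives. The resource name "credentials" and the revision numbers are illustrative, not the charm's actual resources; the charm attach / charm release syntax is the charm-tools flavour of the era.

    # attach a zero-byte placeholder for each declared resource, then release
    touch placeholder
    charm attach cs:~6wind/trusty/virtual-accelerator-12 credentials=./placeholder
    charm release cs:~6wind/trusty/virtual-accelerator-12 --resource credentials-0

    # inside a hook: block politely until a real (non-empty) resource is attached
    RES_PATH="$(resource-get credentials)" || { status-set blocked "credentials resource missing"; exit 0; }
    if [ ! -s "$RES_PATH" ]; then
        status-set blocked "placeholder credentials resource; attach the real file"
        exit 0
    fi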
[13:23] yea i think we just need to pull the default series in the bundle and make sure it's set for each deploy call that doesn't have a series in the charm id
[13:23] im just surprised i didn't hit this earlier
[13:23] stokachu: yea, maybe the bundle normally had series in the charm urls?
[13:23] yea i bet all the other bundles we use have the series in the charm urls
[14:03] hey tvansteenburgh got a hot second?
[14:04] charles3: yup
=== charles3 is now known as lazyPower
[14:04] tvansteenburgh: it's been a while since i've tried to co-locate a service in a bundle. I'm getting this return from amulet when trying to deploy the bundle: 2016-09-27 13:59:53 Invalid application placement easyrsa to lxd:etcd/0
[14:04] is this known behavior, or should I file a bug about this?
[14:05] lazyPower: you have latest deployer?
[14:05] double checking, 1 sec
[14:06] i didn't, it just pulled an update. However i get the same result
[14:06] lazyPower: gimme a min
[14:07] ack. https://gist.github.com/4447433ddce4729c88a737524ed7f0c9 -- bundle for reference
[14:07] magicaltrout: now that our k8s formation has kind of settled, is it time to get some of your mesos in my kubernetes? or is it time to get some of my kubernetes in your mesos
[14:13] lazyPower: s/applications/services/
[14:15] ah, same bug that bit amulet bit deployer?
[14:15] s/bug/change/
[14:17] lazyPower: yeah.
[14:17] lazyPower: thanks for the heads-up, i'll file a bug
[14:20] tvansteenburgh: does deployer/amulet also need to be updated for the new nomenclature of colocation? s/lxc/lxd?
[14:20] http://paste.ubuntu.com/23242376/
[14:22] lazyPower: no, that was already done
[14:22] ok cool. I'll just update the bundle for now. If you're busy i can also file that bug about s/application/services/
[14:22] lazyPower: already filed, thanks
[14:22] you da man
[14:23] lazyPower: your placement should work if you s/applications/services. let me know if it doesn't
[14:26] hey whats up everyone?
[14:27] I've got some nonsense going on here around aws spaces and subnets
[14:27] check it out
[14:28] tvansteenburgh: doesn't appear to - that output was with this bundle https://gist.github.com/3bcb688d317589e502a41c734f28f734
[14:28] well, i commented out the lxd to get this run goin, but i digress, it was uncommented and complained.
[14:29] lazyPower: ok, looking
[14:32] Hi. we created a charm that will install and configure one of our storage drivers as a backend for openstack cinder. It also modifies the nova.conf file. I integrated my charm with the Openstack bundle. From the relations section, how can I provide relations from our charm to the cinder and nova services separately?
[14:32] So who's knowledgeable about multi-series charms? (do they work in juju1? -- is there an incompat between juju1 and juju2 with the metadata.yaml format with series a list vs string?)
[14:34] here is my space and subnet
[14:34] http://paste.ubuntu.com/23242403/
[14:35] lutostag: so Juju 1.25 (.4+ I think) should auto pick the first one in the list and run with that
[14:35] previously, as you can see here -> http://paste.ubuntu.com/23242441/
[14:35] lutostag: but it's not a fully supported feature as that was a 2.0 feature
[14:37] bdx_: ? what's up?
[14:38] well I guess juju status doesn't show the private ip, here is a screen shot of the aws console showing the instance is in the correct subnet/space -> https://postimg.org/image/pkhhfjzvh/
[14:38] is there an equivalent to "juju resolved --retry" in juju 2.0 rc1?
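For reference, a sketch of the co-location placement being debugged above (14:04-14:23), written with the older "services:" key that juju-deployer (and therefore amulet) still expected at the time -- the s/applications/services workaround tvansteenburgh suggests. Charm URLs and the rest of the bundle are illustrative, not lazyPower's actual gist.

    cat > bundle.yaml <<'EOF'
    series: xenial
    services:            # deployer-era spelling of what newer bundles call applications
      etcd:
        charm: cs:etcd
        num_units: 1
      easyrsa:
        charm: cs:easyrsa
        num_units: 1
        to:
          - lxd:etcd/0   # easyrsa in a container on etcd's machine
    EOF
    juju deploy ./bundle.yaml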
[14:39] so what's going on here is my instances will no longer deploy to the space/subnet I have defined in my model
[14:39] rock__: there is a cinder-client layer that should provide most if not all of that template for you
[14:40] I run the same command I ran to get the 0th instance deployed to my defined space/subnet on subsequent instance deploys, but my instances are getting deployed to random subnets now
[14:40] hml: just juju resolved should auto retry.
[14:40] bdx_: does it show in the yaml output?
[14:40] bdx_: /me goes to look there was a bug about not all addresses showing in status
[14:41] rick_h_: thanks
[14:41] rick_h_: this should answer all of your questions -> http://paste.ubuntu.com/23242465/
[14:42] bdx_: what's juju status --format=yaml
[14:42] rick_h_: http://paste.ubuntu.com/23242474/
[14:43] lazyPower: OK. Thank you.
[14:43] bdx_: so the machine has multiple addresses, you're not seeing the one you want that's on the space correct?
[14:44] lazyPower: sounds good although we have a go live this week. The stuff still remains pretty much as it was; I have plans to circle back around to it next week
[14:44] rick_h_: more than that
[14:44] rick_h_: not only is the instance not deploying to the specified space/subnet, I'm getting the private ip in juju status
[14:45] instead of the public like the other instances
[14:45] instance*
[14:45] rick_h_: this is a wild one .... sucks it had to show itself when I'm in florida setting up our dev teams with juju :-(
[14:46] bdx_: yes, looks like: https://bugs.launchpad.net/juju/+bug/1512875 as far as the reporting
[14:46] Bug #1512875: juju 1.25.0 using MAAS 1.9-beta2 juju incorrectly reports the private address
[14:46] rick_h_: that's 1 bug that I'm experiencing for sure
[14:46] maybe, bah
[14:47] bdx_: yea, the other thing is that you're using spaces with just machines. So are you using the spaces constraint with add-machine?
[14:47] or some variant of it
[14:47] rick_h_: yea, I am
[14:48] yea, so there's a few things. 1) juju not picking the 'preferred' address for the display. That's a known issue. 2) That we only show one address in status and should show all addresses a machine has. 3) not sure why the spaces constraint would get you a machine not in the space :/
[14:49] hey, I deployed Openstack but neutron-gateway did not deploy l2, dhcp or metadata
[14:49] I was getting machines in the correct space up till this morning when I went to run a demo
[14:49] any known issue?
[14:49] MrDan: yes, known issue with the Neutron gateway charm and not liking the bridged interfaces setup by RC1
[14:50] bdx_: yea, sorry, that one doesn't ring any bells and not sure we've seen that one.
[14:50] what can I do as a workaround?
[14:50] MrDan: https://bugs.launchpad.net/juju/+bug/1627037
[14:50] Bug #1627037: rc1 bridges all nics, breaks neutron-gateway
[14:50] rick_h_: ok, I'll file a bug later today when I have a min
[14:50] rick_h_: thanks
[14:51] bdx_: k, sorry man
[14:51] MrDan: there's a couple of notes in the bug. We're working on updating for RC2 and working with the charmers of neutron gateway to correct it soon.
[14:51] MrDan: best thing is to backtrack RC or look at the maas hacks in the bug.
[14:52] ok, thanks
[14:53] magicaltrout: no rush, we're still launching ourselves. today is the day
[14:56] just insinuated the board at the ASF was an old boys club, I expect rockets to land on my house shortly
[14:56] good bye all
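A short sketch of the two checks rick_h_ walks bdx_ through above: asking for the space explicitly at add-machine time, and reading the yaml output since tabular status shows only one address per machine. The space name is made up.

    juju add-machine --constraints "spaces=my-app-space"   # request a machine in that space
    juju machines --format=yaml                            # per-machine view of addresses
    juju status --format=yaml                              # full output, all machine addresses listed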
[15:00] can the neutron-gateway be worked around if I have only one NIC configured on that host?
[15:01] MrDan: I don't think so as it'll still see a bridge on that one interface
[15:01] MrDan: and will refuse to use it
[15:06] ah, so the issue is that the public interface, eth0 in my case, should not be bridged as the neutron-gateway charm skips bridged networks
[15:08] MrDan: right, and juju auto bridges so that things work as expected when deployed into containers/etc.
[15:09] MrDan: the long term fix is to represent L2 interfaces in the model, but until then the charmers are looking to drop the bridge for neutron specifically
=== daniel1 is now known as Odd_Bloke
[15:52] i see, so basically right now openstack is not deployable atm with the latest packages
[16:08] MrDan: yeah, if you can beta18 is a good candidate
[16:20] marcoceppi: where can we download beta18?
[16:21] hml: let me check if it's still in the ppa
[16:24] hml: we can also look to get you a binary if you need, or build it from the tag? https://github.com/juju/juju/tree/juju-2.0-beta18
[16:25] thedac, thanks for landing the mysql c-h sync @ https://code.launchpad.net/~1chb1n/charms/trusty/mysql/newton/+merge/306554 - marcoceppi, what do we need to do to get that rev into the cs?
[16:25] rick_h_: okay, if it's not in the ppa i'll build one - question though…
[16:25] hml: shoot
[16:25] marcoceppi, it looks like i've got perms to charm push it, just not sure of the expected process/flow on that one
[16:26] rick_h_: when I do a bootstrap - it appears that juju is looking to download the latest, which makes me nervous
[16:26] hml: hmm, if it's custom built I thought it would not auto use the matching tools
[16:26] anastasiamac: can you speak to the changes wallyworld did here? ^
[16:27] rick_h_: i'm using a custom build right now and got "cmd cmd.go:129 Looking for packaged Juju agent version 2.0-rc2 for amd64"
[16:27] rick_h_: though in the end it uses my local juju build
[16:27] hml: :/ hmm maybe it's just part of the Id process?
[16:27] hml: it used to be you used --upload-tools to make sure it used your local binary
[16:27] hml: if it's finding the right binary I think that's working as intended then
[16:27] rick_h_: those changes are black magic for me, sorry :D
[16:27] * rick_h_ doesn't have a ton of experience with the new flow there
[16:27] anastasiamac: k
[16:27] anastasiamac: ty
[16:28] rick_h_: natefinch has had more exposure and may be of more help here ^^
[16:28] exposure even :D
[16:28] rick_h_: it does in the end… but if i'm using the installed version of juju - and it's looking to upgrade every time - eeks. :-)
[16:31] hml: I have a beta18 binary if you'd like
[16:31] marcoceppi: cool, how can I get it?
=== xnox_ is now known as xnox
=== beisner- is now known as beisner
=== med_ is now known as Guest44064
=== zeus is now known as Guest66430
=== Guest66430 is now known as zeus`
=== zeus` is now known as zeus
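For anyone in hml's position, a rough sketch of building from the tag rick_h_ links above and bootstrapping with the locally built binary. It assumes a Go workspace of the era and glosses over juju's dependency pinning, so treat it as a starting point rather than a recipe.

    go get -d github.com/juju/juju/...
    cd "$GOPATH/src/github.com/juju/juju"
    git checkout juju-2.0-beta18
    go install github.com/juju/juju/...   # dependency pinning (godeps) elided here
    # bootstrap with the local build; controller/cloud arguments omitted since
    # their form shifted across the 2.0 pre-releases
    juju bootstrap --upload-tools         # the flag the RCs replace, as the next message explains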
[17:06] hi, sorry, was at lunch... rick_h_, anastasiamac, hml - use --build-agent to force juju to upload a locally built jujud
[17:06] (it always rebuilds, so you need the source, and need to be able to build it, but otherwise, works like --upload-tools)
=== stokachu_ is now known as stokachu
=== aluria` is now known as aluria
=== kragniz1 is now known as kragniz
=== zeus- is now known as zeus`
=== Spads_ is now known as Spads
=== spammy is now known as Guest51211
=== zeus- is now known as zeus`
=== zeus` is now known as zeus
=== tvansteenburgh1 is now known as tvansteenburgh
=== med_ is now known as Guest79278
=== frankban is now known as frankban|afk
=== med_ is now known as Guest40876
=== beisner- is now known as beisner
[19:30] In juju 2, what is the correct way to remove an application and a unit that's in error, say an install hook error?
=== med_ is now known as Guest45789
[19:43] hatch: you have to resolve it first. --retry is now built into resolved
[19:44] hatch: so you can juju resolved app/unit
[19:44] hatch: or juju resolved --no-retry if you don't want to bother
[19:45] rick_h_: so when I tried `juju resolved app/unit` it just kept retrying the hook
[19:45] I had to run --no-retry
[19:46] so to remove a unit in error I had to run `juju resolved app/unit --no-retry` a few times after `juju remove-application app`
[19:46] is this intentional?
[19:46] it's quite unintuitive
[19:47] hatch: so the normal wish is to retry. The thing is that it goes through each hook
[19:47] right, but the application has already been marked to be destroyed
[19:48] so why would we care if the unit is any good?
[19:48] hatch: so if you fail/retry a config-changed and then it hits a relation hook it'll get stuck and you have to resolve again
[19:48] hatch: right, but destroying it invokes hooks
[19:48] hatch: so it's stuck going through them thus the few resolved tries
[19:48] would it make sense to have a warning returned when you destroy an application on how to actually get it gone if any of the units are in error?
[19:49] hatch: the thing is that it's all async. when you go to destroy it, you don't really know what's up.
[19:49] hmm
[19:49] hatch: there might be a case in there to figure out sometimes, but not consistent
[19:50] yeah...ok
[19:50] this is an interesting problem
[19:50] maybe an always-on notice
[19:50] "if something fails when tearing down, do x"
[19:50] hatch: maybe push charm authors to test their charms :)
[19:51] hatch: there's definitely room for improvement
[19:51] something for 2.1
[19:51] lol
[19:51] yeah I was just trying what I usually do 'spam resolved'
[19:51] but that didn't work because I needed --no-retry
[19:51] hatch: right, that's what's new in rc1
[19:51] so I thought that was odd
[19:51] in the 'normal' case that makes sense
[19:51] hatch: normally you had to --retry but that's now the default because usually, you want that on
[19:52] for retry to be the default
[19:52] right
[19:52] so maybe if the application is in a dying status and units are in error when you run `resolved` it just does that
[19:52] or would you want to potentially still run the hook?
=== valeech_ is now known as valeech
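A compact replay of the teardown sequence hatch describes above (19:43-19:46), under the rc1 behaviour where resolved retries by default; "my-app" is a stand-in name.

    juju remove-application my-app      # marks the app dying; teardown hooks still run
    juju status my-app                  # watch for units stuck in error on the way down
    juju resolved my-app/0 --no-retry   # skip re-running the failed hook
    # repeat the resolved call if a later hook in the teardown path errors as well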
[19:55] hatch: well for things that might need to clean up, hit an API when going down, etc. It makes sense to keep hook exec'ing as it goes down
[19:55] "hey, I'm going down, take me out of the load balancer"
[19:56] yeah, that's an excellent point
[19:56] tough problem here
[19:56] heh
[19:56] touch ux problem that is
[19:56] touch
[19:56] tough
[19:56] lol
[19:56] * hatch can't type
=== JoseeAntonioR is now known as jose
[20:38] relation_set still works in reactive land right?
=== scuttle is now known as scuttlemonkey
[21:04] hey arosales did that ibm talk you gave get recorded?
=== Guest51211 is now known as spammy
[21:38] magicaltrout: he's out at strata this week
[21:38] I see some stills from it but no media that i can tell. I'll sync with james over that and see if we have any assets from that talk
[21:41] ta
[21:41] looked interesting that's all
[21:41] slides, video whatever
[21:57] does anyone know if you can create LXD machines from a bundle.yaml for juju deploy without having to specify machines? seems just doing a constraint for container=lxd doesn't work
[22:18] spaok: so you want to create empty unused machines?
=== hatch_ is now known as hatch
[22:19] naw, I want juju deploy to create containers for the services it's deploying, I know I can use the to:\n - lxd:0 type of syntax, but that requires a machine to be defined first
[22:21] spaok: it does, because the placement of containers on machines is typically important
[22:21] spaok: if you use the GUI to generate the bundle placement then you might find that easier
=== Guest45789 is now known as med_
[22:22] we are trying to set up automation for openstack deployment, so we want to just target machines tagged in maas, but we don't want to have to pre-populate a bundle file with machines, cause the counts may change
[22:24] ahhh, I'm not sure of any way around that to be honest. It might be worth an email to the list or a feature request
[22:25] spaok: check the constraints docs
[22:25] rick_h_: ya, gone through it backwards, forwards, up and down
[22:25] can't find a way
[22:25] spaok: i think there's a :lxd syntax for --to
[22:25] spaok: hmm ok.
[22:26] there is, but to use it you need to define machines
[22:26] ideally if I could use constraints + to: lxd
[22:27] spaok: so you want each unit on a new machine but in a container?
[22:27] spaok: ah sorry, not constraints docs but the placement docs
[22:27] basically ya
[22:28] it works with the direct commands
[22:28] for instance, juju deploy cs:xenial/glance --to lxd --constraints tags=lxc,rack2
[22:29] will create a container on the machine tagged lxc
[22:29] i see gotcha
[22:30] juju deploy --to=lxd:[node name]
[22:30] MrDanDan: question is, how do I do it in a bundle.yaml without needing to define all the machines
[22:30] and specify the app after, of course
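To close the loop on spaok's question: per the discussion there is no bundle syntax that creates containers without a machines section; the closest is to give the machines entries the same MAAS tag constraints rather than naming specific nodes. A sketch using the glance example from above; the "services:" spelling matches the store bundles of the era and the file name is made up.

    # direct form spaok already has working:
    juju deploy cs:xenial/glance --to lxd --constraints tags=lxc,rack2

    # bundle form: container placement needs a machine entry to anchor to
    cat > glance-bundle.yaml <<'EOF'
    series: xenial
    machines:
      "0":
        constraints: tags=lxc,rack2
    services:
      glance:
        charm: cs:xenial/glance
        num_units: 1
        to:
          - lxd:0
    EOF
    juju deploy ./glance-bundle.yaml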