=== admcleod_ is now known as admcleod [08:42] gnuoy`, https://review.openstack.org/#/c/323264/ if you please :-) [08:43] jamespag`, good morning to you too :) [08:43] morning === jamespag` is now known as jamespage [08:43] that's better [08:46] ta === gnuoy` is now known as gnuoy [12:13] Hi, I wonder if anyone can help me out a bit here please: I'm trying to use juju with LXD seeing as though juju 2.0 no longer has a local provider. However, I've cleary done something wrong as I keep hitting ERROR invalid config: can't connect to the local LXD server: Response was missing `api_compat` - if anyone has any ideas whyt this is happening, could you let me know? thx [12:34] lazyPower, ping? [12:54] anyone have experience with the relation side of charms.reactive ? [13:24] lazyPower, ping? [14:14] beisner, I've got the tempest charm working with charms.openstack now: https://github.com/openstack-charmers/charm-tempest/pull/8 [14:15] tinwood, excellent! (however, we need that dev to occur @ https://github.com/openstack/charm-tempest) [14:15] hmm, beisner, when did it move? [14:16] beisner, and more to the point, why is the old one still there? Very confusing. [14:16] tinwood, Thu. that was sort of my main topic of convo last wk ;-) and yep, we need to retire the old repo. [14:17] beisner, I didn't realise they had shifted over yet. Okay, I'll drop the PR there, and retry it on the other one (plus try to sync it, if needed). [14:17] tinwood, much appreciated [14:20] gnuoy, jamespage - we got the openstack-charmers/hacluster sync'd into the gerrit repo on Fri. I think we'll have some maint/housekeeping @ LP and openstack-charmers GH, yah? also as tinwood points out, with tempest also moved into gerrit, we should rm the repo from openstack-charmers. just lmk if/what you want me to tackle on that. [14:23] beisner, okay, on the new one: https://github.com/openstack/charm-tempest/pull/1 [14:24] beisner, or should this now be in gerrit? === dames is now known as thedac [14:27] tinwood, it's gerrit. [14:27] beisner, so it's a git review one now? [14:27] tinwood, indeed [14:27] * tinwood sigh [14:30] beisner, thanks for getting the charm repo populated, how did you manage it? [14:31] tasdomas o/ hey sorry i didnt see the pings this morning. What can I help you with? [14:35] gnuoy, yw, happy to do it. mea culpas and begging in openstack-infra ;-) only infra-root members can do it. one dev pushed back telling us to just do 1 big commit. but persistence... [14:36] and ellipsis [14:36] beisner, cool, welll thank you so much for getting it done. It would have been a shame to loose the commit history [14:36] gnuoy, indeed. welcome so much ;-) [14:36] lazyPower - so I'm still working on relation departure handling with charms.reactive [14:37] lazyPower - and I think there's a subtle bug in the conversation code [14:37] lazyPower - it's impossible to get remote data from a relation that is being departed from [14:37] even though the remote information is still available via the command line tools [14:38] really? conversation.get_remote() fails when juju relation-get works? [14:38] lazyPower, yes [14:38] lazyPower I think it's because get_remote relies on the list of units in the relation [14:39] oo we def. need to take a look at that. Do you mind filing a bug against charms.reactive? 
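A minimal sketch of the interim workaround implied by the exchange above: if conversation.get_remote() comes back empty for a departing unit while `juju relation-get` still works, the same data can be read through charm-helpers inside the -departed hook. The helper name below is illustrative, not part of charms.reactive.

```python
from charmhelpers.core import hookenv

def departing_unit_data():
    # In a -relation-departed hook, JUJU_REMOTE_UNIT identifies the unit that is
    # leaving; its settings are still readable from the relation data bag, the
    # same data `juju relation-get` shows on the command line.
    unit = hookenv.remote_unit()
    rid = hookenv.relation_id()
    return hookenv.relation_get(unit=unit, rid=rid)
```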
https://github.com/juju-solutions/charms.reactive [14:40] lazyPower - will do [14:41] lazyPower - another question - what could be the reason for charms.reactive trying to pull in relation information from relations that have already been departed from? [14:41] i'm goign to have to tap cory_fu_ to answer that one. I'm not super familiar with that area of the code [14:43] tasdomas, lazyPower: The first thing that jumps to mind is if an interface layer uses the -broken hook, it will often get states set on conversations attached to a remote-unit of None, which will then always act like it's set and can't be removed [14:44] cory_fu - so is the proper approach then to ignore -broken hooks in the interface layer? [14:46] Yes. They don't give you any useful information in reactive anyway. They fire when all related units are gone, which is easy to detect in reactive by the lack of {relation_name}.joined state [14:46] oo cory_fu_ - thats a problematic revelation [14:46] https://github.com/juju-solutions/interface-etcd/pull/5/files [14:47] i was unable to discern the departing unit without using that [14:47] is there an alternative pattern i can use? [14:47] lazyPower: Just change "broken" there to "departed" [14:47] ok so departed only runs on the departing unit? [14:47] Correct [14:48] i'm pretty sure i had this flipped and it tanked caused every unit the self unregister [14:48] so/the self/to self/ [14:48] I sent an email about this to the Juju list a while back, but clearly I need to push it more. Broken is broken in reactive. [15:04] beisner, for tempest charm: https://review.openstack.org/325966 Move files to new layered location [15:06] tinwood, [testenv:pep8] needs to be the tox enviro name for lint [15:07] if we are to keep this in line with other openstack projects and other os-charms [15:07] beisner, kk [15:27] cory_fu_ - yeah i just flipped it from broken to departed and i get behavior i absolutely dont want [15:27] it nukes everything but the leader [15:32] gnuoy, tinwood - fyi transitory os-charmer hacluster and tempest gh repos deleted. ✓ [15:32] beisner, yay! Thanks, it will help to reduce my confusion! [15:56] hi there, I'm trying to get sosreport ready for Juju 2.0 and otherwise improve out automated logging plugin [15:56] I want to capture juju debug-log -n 1000, but the syntax appears to be materially different between juju 1 and 2 [15:57] for juju 2.0 - juju debug-log -T -n 1000 [15:57] for juju 1.25 - juju debug-log -n 1000 [15:58] and if you do with -T on 1.25 it fails, if you do without -T on 2.0 it will hang indefinitely [15:58] * gQuigs really doesn't want to have to special case versions; [16:01] gQuigs: you're going to pretty much have to, 2.0 is a backwards breaking release [16:03] trying to install JUJU 2.0 as nonroot [16:03] marcoceppi: this is the first change I've ran into that actually breaks anything [16:03] but getting error [16:04] gQuigs: wait until you run `juju get` ;) [16:04] err, now juju get-config === arturt_ is now known as arturt [16:05] marcoceppi: could the tail be only when run in an interactive session? [16:05] running commands that just don't work is fine.. 
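cory_fu_'s point above (that -broken only fires once every related unit is gone, which reactive charms can already detect from the missing .joined state) translates to a charm-layer handler like the following minimal sketch. The 'cluster.joined' state name is illustrative, and in practice this would be guarded by another state so it does not also match before the first unit has ever joined.

```python
from charmhelpers.core import hookenv
from charms.reactive import when_not

@when_not('cluster.joined')
def no_peers_remaining():
    # No conversation holds the .joined state any more, i.e. every related unit
    # has departed, which is the condition a -broken handler was meant to catch.
    hookenv.log('all related units have departed')
```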
it's the hanging indefinitely that really gets us [16:05] so we can go both juju get and juju get-config and not care the output failed [16:06] gQuigs: open a bug, not a core developer so i can't say for sure [16:06] marcoceppi: will do, thanks [16:15] when installing juju 2.0 getting error as "ERROR unable to contact api server after 1 attempts: cannot load cookies: open /home/charm/.go-cookies: permission denied" [16:18] reported - https://bugs.launchpad.net/juju/+bug/1589581 [16:18] Bug #1589581: Consistant basic use of debug-log between 1.25 and 2.0 [16:18] bah humbug [16:18] there we go [16:19] I think I ask this every time I report a bug.. can we close the old project https://launchpad.net/juju? [16:22] anita_ - sounds like you may have run juju with sudo at somepoint and logged in? [16:23] interestingly enough my .go-cookies is owned by root with permissions: 0600 [16:24] lazyPower_:I followed the document juju2.0 [16:24] But I logged in as root and then su - [16:25] anita_ https://jujucharms.com/docs/devel/getting-started [16:25] Hmm for me -rw------- 1 root root 5 Jun 6 02:49 .go-cookies [16:26] Ok I will follow this === lazyPower changed the topic of #juju to: || Welcome to Juju! || Docs: http://jujucharms.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://review.juju.solutions || Unanswered Questions: http://goo.gl/dNj8CP || Youtube: https://www.youtube.com/c/jujucharms || Juju 2.0 beta8 release notes: https://jujucharms.com/docs/devel/temp-release-notes [16:27] I did exactly like this [16:27] somewhere i missed something? [16:27] my suggestion would be to remove the .go-cookies and re-login [16:28] Ok [16:28] let me try that way [16:29] just to see when i changed the owner of .go-cookies, bootstrap is successful [16:29] any idea how to delete a controller [16:30] juju destroy-controller? [16:31] yep [16:31] ok [16:31] juju help commands will lend a hand from here [16:31] k [16:35] I delete .go-cookies and re-login as nonroot after i destroy default controller [16:37] is anyone not seeing the navigation toolbar @ https://jujucharms.com/docs/devel/getting-started [16:38] now its successful [16:38] arosales confirmed, i am not seeing the nav bar [16:38] arosales: yes it's erroring in the JS [16:38] anita_ great :) [16:38] lazyPower: rick_h_ thanks for confirming [16:38] rick_h_: need me to file a bug, or are folks already working on it? [16:38] Then the same step i should follow as mentioned in doc [16:39] arosales: I'd file a bug and see if either something in docs broke things or the GUI folks did [16:39] before bootstrap, i will delete the .go-cookies and re-login? [16:39] is it a correect method? [16:39] rick_h_: will do, thanks [16:40] also i have done one more extra step comapred to doc, i.e : sudo chgrp lxd /var/lib/lxd/unix.socket [16:41] is this two steps correct? [16:42] lazyPower_: please confirm if the above two steps that followed for nonroot juju2.0 installation is valid? one is sudo chgrp lxd /var/lib/lxd/unix.socket and then deleted .go-cookies and relogin as nonroot [16:43] how's it going everyone? happy monday! [16:43] anita_ i'm not sure where that .go-cookies came from. so its hard for me to say. I would say thats a valid working fix for the error you encountered, but i dont think its required every time you setup juju, no. [16:43] does anyone know the status of cross-model-relations? [16:44] ok thanks a lot [16:44] rick_h_: ^ [16:45] bdx: on this roadmap's cycle of work [16:45] rick_h_: nice! thats exciting! 
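Back on the debug-log difference gQuigs hit above: juju 2.0 tails by default and needs -T to exit, while 1.25 rejects -T. A hedged sketch of the version special-casing a sosreport-style plugin could use (the function name and the `juju --version` probe are assumptions, not sosreport API):

```python
import subprocess

def capture_debug_log(lines=1000):
    version = subprocess.check_output(['juju', '--version']).decode().strip()
    if version.startswith('2.'):
        # juju 2.0: -T stops debug-log from tailing (hanging) indefinitely
        cmd = ['juju', 'debug-log', '-T', '-n', str(lines)]
    else:
        # juju 1.25: -T is not accepted, and plain -n already exits
        cmd = ['juju', 'debug-log', '-n', str(lines)]
    return subprocess.check_output(cmd).decode()
```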
[16:45] thx [16:46] bdx: hello and happy monday indeed. It seems summer is upon us as well [16:47] summer is relative [16:48] magicaltrout: ha, is it raining for you today? [16:49] june gloom [16:49] i left london last week in rain, i arrive in san diego for the week and its grey-ish [16:49] arosales: yea .... Portland winters last a long, dark, rainy 9 months .... so pumped for the sun [16:50] its great for programming though [16:50] magicaltrout *hattip* glad you enjoyed [16:51] indeed lazyPower had it on a few times now [16:52] marcoceppi - curious: i've got a "top" layer which charm-proofs ok -- but what's are the intents and expectations for charm-proof wrt layers/interfaces? [16:53] beisner: proof is really only designed to run against the built layer [16:53] we've not really gotten a layer/interface proof yet [16:54] bdx: I hear you're getting lots of sun as of late [16:54] * marcoceppi opens a feature for it on charm-tools [16:55] marcoceppi, ack thx. well fwiw, proof passes on an unbuilt layer currently, which may actually be a bad thing? [16:56] beisner: not really [16:56] but not all layers are expected to pass proof, though I suppose top layers are [16:56] beisner: but intermediate layers may not have a complete metadata.yaml, for example [16:56] marcoceppi, indeed [17:18] is there a juju command to generate dot files? [17:20] aha, in a side channel I learned of https://code.launchpad.net/juju-viz [17:21] That looks neat skay, what does it do? [17:23] lazyPower: I think it will generate a dot file with a graph showing a juju deployment, but I haven't tried it out yet, just got the url just now [17:24] lazyPower: I am working some changes to a charm and a mojo spec that will remove a website from our environment, and having a diagram to show what the current state is will make it easier for people [17:24] lazyPower: I have a doc, but it has lots of words. no pictures. generating a dot file will really help [17:25] (I am not great at using drawing programs) [17:26] i completely understand skay :) [17:26] they are tedious at best [17:26] have you thought about generating a bundle/export and sending it through svg.juju.solutions? [17:27] aha, in that repo in bin there is juju-dotty.py for the curious. it takes output from juju status [17:27] lazyPower: no. I am not familiar with how to do that [17:27] we use the raw bundle files in merges to visualize our prs. eg https://github.com/juju-solutions/bundle-beats-core/pull/1 [17:28] pardon the unwiedly url - but the markdown in the pr comment shows the magic: http://svg.juju.solutions/?bundle-file=https://raw.githubusercontent.com/juju-solutions/bundle-beats-core/cc14520c94ed69b29c667c3d59d189ce3a6166ee/bundle.yaml [17:29] Anyone else seeing the side-bar missing on https://jujucharms.com/docs/devel/getting-started [17:29] i suppose this may not be as useful, i think the api requires a url to curl to fetch the bundle [17:29] cory_fu_ - there's an open bug about it [17:29] Ah, ok [17:29] cory_fu_ https://github.com/juju/docs/issues/1141 [17:30] cory_fu_ if you have a sec i'm curious if there's another way i can get the unit thats actually leaving the relationship other than using the -broken hook as its broken. 
[17:31] -broken will never give you a departing unit because it's only called after every single remote unit is gone [17:31] What you actually want is the -departed hook, which behaves exactly like you're thinking -broken does [17:31] i have code that may be side-effecting into working [17:32] lazyPower: wow that is pretty [17:32] That is, within the -departed hook, conv.units (or hookenv.remote_unit()) will contain the unit that is leaving [17:32] And if you add a state to the conversation, it will only apply to the unit that is departing [17:32] lazyPower: ^ [17:33] lazyPower: /me searches. https://github.com/marcoceppi/svg.juju.solutions yes? looks like it [17:33] cory_fu_ do you have an example? https://github.com/chuckbutler/interface-etcd/blob/c58568c4ffaa9099deea8f20199d9ec501f5aeff/peers.py -- this conversation is scope unit [17:33] * skay reads readme and churckles. hehe [17:34] kwmonroe: I did see that bdx said "working on a complete fix now" in https://github.com/jamesbeedy/layer-puppet-agent/issues/2 -- I mentioned that during daily and said that the commit he referenced before that looks to me like it should resolve the issue, so I'm not sure what "complete solution" is required [17:34] kwmonroe: I'd say go ahead and test the current fix [17:35] cool [17:36] lazyPower: In your etcd peers.py, the only thing you should need to do is change -relation-broken on line 28 to -relation-departed. It should function correctly with just that change, though you should also add a "dismiss" method that removes that state once it's been processed. Let me find you an example where we use the same pattern [17:38] lazyPower: https://github.com/juju-solutions/interface-mapred-slave/blob/9493fab49a447a317b276523595e40df064299d5/requires.py#L31 [17:38] lazyPower: And it's used in the charm (now in upstream!) at https://github.com/apache/bigtop/blob/master/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/reactive/resourcemanager.py#L155 [17:40] cory_fu_ - this is different behavior in peers i think. Every unit is getting the .departed state set [17:40] its not scoped between just the departing unit and the leader. every peer is getting that state and in this case telling itself ot unregister and terminate the application. [17:41] when i switch from -broken to -departed [17:42] lazyPower: You are incorrect [17:42] :) [17:42] cory_fu_ - hang on i'll gather logs and make my case [17:42] https://github.com/juju-solutions/layer-etcd/blob/master/reactive/etcd.py#L315 [17:44] lazyPower: That handler will match if *any* unit has the departed state. I think what you actually want there is @when_not('cluster.joined') [17:44] charm build seems to not recognize / use the series: key in the metadata.yaml [17:44] well for one [17:44] i love the .dismiss method on the interface [17:44] i'm clearly doing interface state handling wrong [17:45] lazyPower: Ok, I think I see the problem [17:46] You're using the pattern of setting a state on join and then immediately removing it. You want to do that because you want to track the indvidual unit that is new, which is understandable. However, because you're not leaving a state set on the unit, you have no way of tracking when there are still units remaining [17:47] So you're trying to use -broken to detect when all units are gone. Also understandable. However, that doesn't really play well with reactive because when -broken fires, there's no longer any remote units to be participating in conversations. 
You end up setting a state on a None unit conversation, which will cause problems for you later because you can never remove it [17:48] also a good point [17:49] lazyPower: What I would recommend is to have a .joined state that you don't remove in provide_cluster_details whose purpose is to keep track of all units that are still joined. You then remove it in -departed, and you can detect "no remaining peers" by @when_not('cluster.joined') [17:50] You can keep the .joining state to make it easy to tell which unit is new. But I'm not sure what the .declare_self state is for [17:50] Can you explain that? [17:52] cory_fu_ - sure, the leader acts as the single unit we care about during turn up. It performs a single node bootstrap and waits for peers. When a peer knows its about to come online, it has to talk to the leader first. declare_self triggers the peer saying "Hey i'm here with this detail" and the leader responds with the static initial cluster configuration. [17:52] lazyPower: Also, to clarify why I said you were incorrect before. The state does not get attached to every peer; rather, it gets attached to a single peer and that is sufficient to trigger the @when('cluster.departed') test. Does that match up with what you were seeing in your logs? [17:53] this was the source of a lot of headaches before, so this may need additional love in the layer, but peers were racing, and static configuration requires staggered start. [17:53] so you have ot gate based on direct communication with the etcd application running on the leader [17:54] Hrm. This is all symptomatic of the failure of the conversation model. It's too divergent from how people are used to thinking about relations and is causing more confusion than clarity. :( [17:56] i'll give it another go based on our findings here. There's a goldmine of feedback here [17:57] In the next pass, I want to make the underlying process more explicit by dropping the idea of "conversations". A unit joins the relation, so you attach a state to it. Handlers match states and the relation classes group all units that are in that state and let you operate on them individually or as a group. (That's basically how it works now, but we tried to abstract some of that out in the Conversation model and ended up making [17:57] it more opaque) [17:58] from what i understood of that, it makes sense :) [17:58] or, alternatively, we can do some better docs and maybe some example interface seminars for these use cases [17:59] has anyone ever got the error "ERROR Get https://.... x509: certificate has expired or is not yet valid" when bootstrapping lxd (with master) [17:59] our current interface docs were primarily cut/paste and remixed by me, so there's some gaps in there [17:59] hatch not with the lxd provider no, i have with juju stable releases and the local provider [17:59] lazyPower: do you recall what you did to fix it? [18:00] devel side bar is back [18:00] thanks matthelmke and evilnickveitch :-) [18:00] :) [18:00] https://jujucharms.com/docs/devel/ that is if folks were wondering about the context [18:01] hatch https://bugs.launchpad.net/juju-core/+bug/1245550 - ooolllddd bug [18:01] Bug #1245550: ERROR TLS handshake failed: EOF waiting for stateserver [18:02] oh heh [18:02] hmm [18:03] lazyPower: do you know where this certificate is stored? So that I can check the date? 
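Pulling together the pattern cory_fu_ recommends above (a persistent .joined state, a .departed state scoped to the leaving unit, and a dismiss() helper in the style of interface-mapred-slave), a hedged interface-layer sketch; the class and interface names are illustrative rather than the actual interface-etcd code:

```python
from charms.reactive import RelationBase, hook, scopes


class EtcdPeers(RelationBase):
    scope = scopes.UNIT

    @hook('{peers:etcd}-relation-joined')
    def joined(self):
        conv = self.conversation()
        conv.set_state('{relation_name}.joined')    # stays set while the unit is related

    @hook('{peers:etcd}-relation-departed')
    def departed(self):
        conv = self.conversation()
        conv.remove_state('{relation_name}.joined')
        conv.set_state('{relation_name}.departed')  # scoped to the departing unit only

    def dismiss(self):
        # called by the charm layer once it has handled the departure
        for conv in self.conversations():
            conv.remove_state('{relation_name}.departed')
```

The charm layer then reacts to 'cluster.departed' to unregister just that peer, calls dismiss() afterwards, and detects "no peers left" with @when_not('cluster.joined'), with no -broken handler at all.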
[18:03] or maybe delete it :) [18:03] not with juju2, check the $JUJU_DATA Directory [18:03] $home/.local/share/juju [18:03] btw $JUJU_DATA isn't defined [18:03] :) [18:04] its in the variable reference guide [18:05] hatch https://jujucharms.com/docs/devel/reference-environment-variables right thurrrr at the top [18:05] ahh maybe because it's built and not installed [18:26] skay: there's a charm for that too, as well as http://svg.juju.solutions [18:27] marcoceppi: I didn't realize that was a real url I could use. I did a search and found your repo and was trying to install it [18:28] spiffy [18:29] skay: oh yeah, it's just a web service [18:29] marcoceppi: I started down a rabbit hole of finding jujusvg [18:30] skay: you can, with juju 2.0-beta8, just `juju deploy cs:~marcoceppi/xenial/charm-svg` (https://jujucharms.com/u/marcoceppi/charm-svg/3) [18:30] skay: it was such a pain in the ass to setup, I ended up writing a charm for it [18:36] should I be able to generate svg based on juju status --format yaml? [18:37] skay: not really, though you should be able to [18:37] skay: it'd be a nice feature [18:37] marcoceppi: it complained about icons and didn't generate an svg. I didn't want to go down a rabbithole on that if it wasn't intended that way [18:37] marcoceppi: yeah, it would be neat [18:37] skay: it expects a v3 bundle [18:38] skay: are these local charms? [18:39] marcoceppi: private ones [18:39] marcoceppi: some are private, some publi [18:42] skay: all the private ones won't show up [18:42] * skay nods [18:52] cory_fu, kwmonroe: sup [18:52] yo yo bdx [18:52] hey! I've a few things to run by you [18:52] concerning layer-puppet [18:53] specifically puppet-agent [18:54] sure thing bdx [18:54] kwmonroe, cory_fu, firstly -> http://apt.puppetlabs.com/ [18:55] fwiw, i think layer-puppet should go away in favor of puppet-agent as the base layer.. and a new layer-puppet be a charm that includes puppet-agent and is like a deployable puppet service. [18:55] what I'm running into is that puppet3 and puppet4 are not both supproted equally on a per-release basis [18:56] kwmonroe: entirely, thats how I've been using it .... I'm running into complications though [18:56] e.g. xenial isn't supported for puppet3, but it is for puppet4 [18:56] ugh, lame [18:57] the puppet3 debs are the puppetlabs-release-.deb [18:57] found here apt.puppetlabs.com [18:58] it looks puppet4 is packaged for all releases since precise [18:59] kwmonroe, I guess my question is, should I make 'puppet-version' config default to puppet4 [18:59] and then throw a conditional for puppet3 that xenial isn't supported? [19:00] or should I throw away everything puppet3 [19:00] and make the charm only puppet4 [19:00] ? [19:00] bdx: we need 3 for bigtop [19:00] darn [19:00] admcleod: mind if I enquire as to why? [19:01] bdx: it doesnt work with 4 :} [19:01] bdx: what's the puppet4 package name? i don't see stuff like puppet_4.x for trusty: http://apt.puppetlabs.com/pool/trusty/main/p/puppet/ [19:01] or am i looking in the wrong place? 
[19:01] kwmonroe [19:01] kwmonroe: just go to the base url [19:02] anything with 'puppetlabs-release-pc1-.deb' is puppet4 [19:02] admcleod: how does bigtop currently use puppet-agent/puppet-master [19:03] puppet3 -> puppetlabs-release-.deb [19:03] puppet4 -> puppetlabs-release-pc1-.deb [19:04] bdx: its masterless - i cant remember the exact issue, let me test it again now just incase i was wrong [19:05] bdx: i don't see a big deal with defaulting to 4 as long as there's a config opt to specify 3 if we need it. [19:06] admcleod, kwmonroe: I think masterless and masterfull puppet should be separate layers/charms .... the deps are entirely different, so is the implimentation and use case [19:06] kwmonroe, ok, just add a conditional to check for xenial/vivid/wily for puppet3? [19:06] bdx: well. we dont need any deps for masterless, we just need to install the package. so ideally 'if puppet master not defined, dont configure it - assume its masterless' [19:09] admcleod, kwmonroe: should I just set status to 'blocked' if ubuntu_release is xenial/vivid/wily and puppet-version => 3? [19:09] bdx: i think you mean == 3 there [19:10] totally my bad [19:10] yea [19:10] bdx: you can either be nice and detect that series won't have the package available, or you can just let apt fail and be like "you should have read the readme where i talked about puppet3 only available < vivid" [19:10] gotcha [19:11] bdx: because who knows -- maybe somebody has a xenail ppa with puppet3 that they have configured as an install_source.. we shouldn't block them just in case puppet3/xenial is available in their configured env. [19:11] totally [19:13] I'd like clarification on why bigtop doesn't work with puppet 4. Are there significant syntax changes that make the bigtop manifests not work? [19:13] admcleod: ^ [19:14] cory_fu_: testing [19:14] kwmonroe: thats the other thing .... because of the ubuntu release in the deb url, the functionality of layer-apt is limited e.g. install_sources config can't be used :-( [19:15] so bdx, i like the idea of a single puppet layer that can work in master/less mode, but i don't know enough to know how disjoint those 2 modes are. if it's totally separate code paths depending on the config, that's one thing.. but if it's master = masterless + some, then i think we can still have a single puppet layer. [19:15] kwmonroe: here the interesting part [19:16] kwmonroe: puppet-agent for puppet3 depends on puppet AND puppet-common [19:16] puppet-agent for puppet4 depends on neither [19:18] kwmonroe, the dependency for masterless puppet is included by in the default archives [19:18] thedac, can you land this? testing ci/push flow on hacluster post-move. https://review.openstack.org/#/c/325478/ [19:19] hi, with juju 2.0 is it possible to use a bundle with stages like was supported with juju-deployer. 'juju deploy bundle.yaml stage1' isn't working for me in this example https://pastebin.canonical.com/158108/ [19:21] thedac, also this one plz & thx: https://review.openstack.org/#/c/324795 [19:22] mattrae: no, the built in bundle support is basic and does not do that. [19:22] kwmonroe: the masterless puppet dep doesn't change between 3 and 4, unless you add the deb sources for puppet3, which just gives you the full set of puppet releases [19:22] mattrae: the deployer was updated to work with 2.0 if you need it [19:23] rick_h_: ahh cool, thanks! 
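For the series check bdx and kwmonroe settle on above (default to puppet 4 and, if being "nice", block when puppet 3 is requested on a series apt.puppetlabs.com does not publish it for), a hedged sketch; the helper name and config handling follow the discussion rather than the published layer:

```python
from charmhelpers.core import hookenv, host

PUPPET3_UNSUPPORTED = ('vivid', 'wily', 'xenial')

def puppet_version_supported():
    series = host.lsb_release()['DISTRIB_CODENAME']
    version = str(hookenv.config('puppet-version'))
    if version == '3' and series in PUPPET3_UNSUPPORTED:
        hookenv.status_set('blocked',
                           'puppet 3 packages are not published for %s' % series)
        return False
    return True
```

Per kwmonroe's caveat, the alternative is to skip the check and let apt fail, so a user pointing install_sources at their own puppet 3 archive for xenial is not blocked.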
[19:23] but adding puppet4 deb has no effect on masterless puppet deps, and as it looks to me masterless puppet3 doesnt need deb sources add either [19:28] bdx: cory_fu_ hrm looks like it might just be a sys path problem [19:30] admcleod, cory_fu, kwmonroe: cloud-init doesn't have support for puppet4 yet .... [19:30] Why does cloud-init need to support puppet4? [19:30] bdx: cory_fu_ kwmonroe right, puppet3 installs /usr/bin/puppet, 4 is /opt/puppetlabs/bin/puppet which isnt added to juju sys path so the hook fails [19:31] I was just saying ... a lot of people use cloud-init to automate puppet getting provisioned on their infra [19:32] totally [19:32] Really? I wasn't aware that cloud-init had any tie-ins with puppet, but then, I don't know that much about cloud-init [19:32] theres a cloud-init puppet module [19:32] I see [19:33] it allows me to spinup instances and have them auto puppet from my puppet master [19:33] very usefull [19:34] it's use should be replaced by layer-puppet [19:34] for my infra at least [19:35] useful* [19:35] bdx: cory_fu_ kwmonroe /opt/puppetlabs/bin is added to /etc/environment but i guess its not re-sourced after its added. so hooks fail, and juju run fails. === redir is now known as redir_lunch [19:38] yeah, I think the cloud-init puppet module needs a similar fix [19:38] a drive-by o/ howdy -> bdx [19:39] beisner! you got me excited last week with all that multi-hypervisor support talk [19:40] beisner: by the way, who is heading that initiative, you? [19:40] bdx: cory_fu_ kwmonroe created a symlink but it fails because theres no /etc/puppet/hiera.yaml. this will need more testing and i have to EOD, can pick it up again tomorrow morning [19:40] bdx! yo [19:41] admcleod: sweet, I'll have pushed some new up by the am, I'll ping you tomo [19:43] bdx: AFAIR, there are quite a few breaking changes between 3 and 4 - not just the config but also how relaxed it is about broken syntax and other issues. i seem to recall it not being a simple upgrade [19:44] bdx, the collective 'we' os-charmers really. we've got test bundles and a readme documenting the process. [19:45] admcleod: yea, its not, we just underwent the upgrade here at dhc .... huge refactoring to get our puppetstack code base to puppet4 [19:46] beisner: where is this happening? [19:46] bdx: yeah ive mostly respressed the experience :} [19:47] bdx, PoC @ http://bazaar.launchpad.net/~ost-maintainers/openstack-charm-testing/trunk/view/head:/README-multihypervisor.md which exercises nova kvm and nova lxd in a single deploy, firing up both types of instances. [19:48] beisner: has juju the comprehension of multiple image uuid's now, or is that part of the WIP? [19:48] bdx that example is completely independent of juju features (and currently only really validated @ 1.25.x, but i see no reason 2.x would be an issue, known bugs notwithstanding) [19:49] bdx, but: i believe there are some juju features growing to support the flip side of that. ie. juju deploying on top of that multi-hv cloud. right now, i've only validated that the cloud itself listens and responds appropriately to `nova boot` type of operations. [19:50] nice [19:50] awesome [19:51] bdx, so tldr: glance images are tagged with hypervisor types, then when a user nova boots one of those images, it 'just works' [19:53] beisner: nice, so you are just adding '--property hypervisor_type', or is there other config? 
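On the PATH problem admcleod traces above (puppet 4 installs to /opt/puppetlabs/bin and only appends that to /etc/environment, which a running hook never re-sources), a hedged sketch of how a layer could make `puppet` resolvable inside the hook; a symlink into /usr/local/bin is the persistent alternative admcleod tried:

```python
import os

PUPPET4_BIN = '/opt/puppetlabs/bin'

def ensure_puppet_on_path():
    # Extend PATH for the current hook process only; changes to /etc/environment
    # do not take effect until a new login shell is started.
    path = os.environ.get('PATH', '')
    if PUPPET4_BIN not in path.split(os.pathsep):
        os.environ['PATH'] = PUPPET4_BIN + os.pathsep + path
```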
[19:53] that's the one bdx [19:53] bdx, worth having a look at the test bundle though too [19:53] yea, I am now [19:54] thats a good repo [19:54] great* [19:54] excellent examples [19:55] bdx, that's where openstack bundles start life. then we test, generally automate, and selectively publish once everyone's convinced. ;-) just bear in mind anything in there is subject to breakage as that's where we iterate and dev things. [19:56] entirely [20:08] beisner: I've been running openstack on lxd all on one node, I have it currently supporting a copy of all of our staging infra .... its f***ing awesome .... check it [20:09] http://imghub.org/image/U3D4 [20:10] http://imghub.org/image/UvyQ [20:11] http://imghub.org/image/UD10 [20:12] beisner: lxd openstack bundle + dvr + vlan tenant nets .... bet I'm the first :-) [20:15] 10x 128GB Samsung 850 Pros under the raidz :-) [20:17] http://imghub.org/image/UnxF === redir_lunch is now known as redir [20:29] bdx, sweet! [21:24] hi marcoceppi, any known issues with md rendering @ charm store? i've got this [1] which renders a code block all in one line (i believe unexpectedly) [2]. [21:24] [1] https://raw.githubusercontent.com/openstack/charm-tempest/master/README.md [21:24] [2] https://jujucharms.com/u/openstack-charmers-next/tempest/xenial/0 [21:25] ie. i've used 3 diff md tools and they all render it as two lines in the code block [21:48] beisner: ya it is known issue, let me see if I can find the bug link [21:50] arosales, ack thx for the info. i'm not too worried about it. mainly wanted to confirm that it's a known thing vs. adjusting the .md content specifically for the cs:. [21:53] beisner: https://github.com/CanonicalLtd/jujucharms.com/issues/276 [21:53] beisner: thanks for confirming it is a known issue or if a bug needed to be filled [21:53] https://mail.google.com/mail/u/1/#inbox [21:53] oops === freyes__ is now known as freyes === alexisb is now known as alexisb-afk [23:22] hi, i'm deploying services to lxc containers using juju 2.0 beta8 and maas 2.0 beta6. i see the containers are getting an interface bridged to physical interfaces that are configured. one issue i'm seeing is even though i have a default gateway configured on one of the networks, its not getting added to the container. https://pastebin.canonical.com/158125/ [23:23] when i add the default gateway manually its working. i'm wondering what i'm missing to have the default gateway added [23:35] mattrae: hmm, interesting not sure what the correct answer here is. Suggest if no one else chimes in to give the juju mail a try and possible get more eyes on it [23:38] mattrae: have you configured the maas space to use the gateway? [23:44] mattrae: I'm sure you've already seen this, but the bottom half of this page might hook you up if not -> https://maas.ubuntu.com/docs2.0/rack-configuration.html [23:48] I need to debug a service that is stuck in a "pending" state. How do I do that? [23:48] Specifically I'm trying to deploy a custom (local) charm I wrote. [23:49] I have all the start-up lifecycle hooks "install config-changed start", but nothing in them. [23:49] gugpe: is the unit or the machine stuck in pending [23:50] gugpe: I am assuming you have a machine but the unit for the given charm[service] is stuck in pending, correct? [23:50] gugpe: also what version of juju 1.x or 2.x? [23:51] Correct, the lxc machine is pending [23:51] 2.0-beta8-xenial-amd64 [23:51] Both machine and unit. [23:53] sudo juju deploy /home/wurde/GitLab/charm-gitlabomnibus [23:53] that's the command I'm sending. 
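Back on the multi-hypervisor flow beisner describes above (glance images tagged with a hypervisor type so that `nova boot` "just works"), a hedged sketch of the tagging step driven through the OpenStack CLI; the image name and hypervisor value are placeholders:

```python
import subprocess

def tag_image_hypervisor(image, hypervisor_type):
    # e.g. tag_image_hypervisor('xenial-lxd', 'lxd'); assumes OpenStack
    # credentials are already exported in the environment.
    subprocess.check_call(['openstack', 'image', 'set',
                           '--property', 'hypervisor_type=%s' % hypervisor_type,
                           image])
```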
[23:54] What's the best way to debug this? [23:55] gugpe: don't be executing this as root for a start [23:55] gugpe: there is a debug flag '--debug' I thing [23:55] think [23:56] Ok. I needed to set ownership of controllers.yaml. I'll look for the --debug flag. [23:59] The message JUJU-STATUS allocating Waiting for agent initialization to finish [23:59] I'm running on xenial series