[08:42] <jamespag`> gnuoy`, https://review.openstack.org/#/c/323264/ if you please :-)
[08:43] <gnuoy`> jamespag`, good morning to you too :)
[08:43] <jamespag`> morning
[08:43] <jamespage> that's better
[08:46] <jamespage> ta
[12:13] <autonomouse> Hi, I wonder if anyone can help me out a bit here please: I'm trying to use juju with LXD, seeing as juju 2.0 no longer has a local provider. However, I've clearly done something wrong as I keep hitting ERROR invalid config: can't connect to the local LXD server: Response was missing `api_compat` - if anyone has any ideas why this is happening, could you let me know? thx
[12:34] <tasdomas> lazyPower, ping?
[12:54] <tasdomas> anyone have experience with the relation side of charms.reactive ?
[13:24] <tasdomas> lazyPower, ping?
[14:14] <tinwood> beisner, I've got the tempest charm working with charms.openstack now: https://github.com/openstack-charmers/charm-tempest/pull/8
[14:15] <beisner> tinwood, excellent!  (however, we need that dev to occur @ https://github.com/openstack/charm-tempest)
[14:15] <tinwood> hmm, beisner, when did it move?
[14:16] <tinwood> beisner, and more to the point, why is the old one still there?  Very confusing.
[14:16] <beisner> tinwood, Thu.  that was sort of my main topic of convo last wk ;-)    and yep, we need to retire the old repo.
[14:17] <tinwood> beisner, I didn't realise they had shifted over yet.  Okay, I'll drop the PR there, and retry it on the other one (plus try to sync it, if needed).
[14:17] <beisner> tinwood, much appreciated
[14:20] <beisner> gnuoy, jamespage - we got the openstack-charmers/hacluster sync'd into the gerrit repo on Fri.    I think we'll have some maint/housekeeping @ LP and openstack-charmers GH, yah?    also as tinwood points out, with tempest also moved into gerrit, we should rm the repo from openstack-charmers.  just lmk if/what you want me to tackle on that.
[14:23] <tinwood> beisner, okay, on the new one: https://github.com/openstack/charm-tempest/pull/1
[14:24] <tinwood> beisner, or should this now be in gerrit?
[14:27] <beisner> tinwood, it's gerrit.
[14:27] <tinwood> beisner, so it's a git review one now?
[14:27] <beisner> tinwood, indeed
[14:27]  * tinwood sigh
[14:30] <gnuoy> beisner, thanks for getting the charm repo populated, how did you manage it?
[14:31] <lazyPower> tasdomas o/ hey sorry i didnt see the pings this morning. What can I help you with?
[14:35] <beisner> gnuoy, yw, happy to do it.   mea culpas and begging in openstack-infra ;-)   only infra-root members can do it.  one dev pushed back telling us to just do 1 big commit.  but persistence...
[14:36] <beisner> and ellipsis
[14:36] <gnuoy> beisner, cool, well thank you so much for getting it done. It would have been a shame to lose the commit history
[14:36] <beisner> gnuoy, indeed.  welcome so much ;-)
[14:36] <tasdomas> lazyPower - so I'm still working on relation departure handling with charms.reactive
[14:37] <tasdomas> lazyPower - and I think there's a subtle bug in the conversation code
[14:37] <tasdomas> lazyPower - it's impossible to get remote data from a relation that is being departed from
[14:37] <tasdomas> even though the remote information is still available via the command line tools
[14:38] <lazyPower> really? conversation.get_remote() fails when juju relation-get works?
[14:38] <tasdomas> lazyPower, yes
[14:38] <tasdomas> lazyPower I think it's because get_remote relies on the list of units in the relation
[14:39] <lazyPower> oo we def. need to take a look at that. Do you mind filing a bug against charms.reactive? https://github.com/juju-solutions/charms.reactive
[14:40] <tasdomas> lazyPower - will do
[14:41] <tasdomas> lazyPower - another question - what could be the reason for charms.reactive trying to pull in relation information from relations that have already been departed from?
[14:41] <lazyPower> i'm going to have to tap cory_fu_ to answer that one. I'm not super familiar with that area of the code
[14:43] <cory_fu_> tasdomas, lazyPower: The first thing that jumps to mind is if an interface layer uses the -broken hook, it will often get states set on conversations attached to a remote-unit of None, which will then always act like it's set and can't be removed
[14:44] <tasdomas> cory_fu - so is the proper approach then to ignore -broken hooks in the interface layer?
[14:46] <cory_fu_> Yes.  They don't give you any useful information in reactive anyway.  They fire when all related units are gone, which is easy to detect in reactive by the lack of {relation_name}.joined state
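cory_fu_'s point, that "all related units are gone" is simply the absence of any joined state, can be modelled standalone. A sketch only: the class and the 'cluster' relation name are illustrative, and a real interface layer would subclass charms.reactive.RelationBase rather than use this stub:

```python
# Illustrative model of per-unit relation states, not real charms.reactive
# code. It demonstrates why a -broken hook is unnecessary in reactive:
# "all related units gone" is just the absence of any joined state.

class RelationStates:
    def __init__(self, relation_name):
        self.relation_name = relation_name
        self.joined_units = set()  # units with '{relation_name}.joined' set

    def on_joined(self, unit):
        # -relation-joined: mark this remote unit as present
        self.joined_units.add(unit)

    def on_departed(self, unit):
        # -relation-departed fires once per departing unit, so we can
        # remove exactly that unit (which -broken can never tell us)
        self.joined_units.discard(unit)

    @property
    def any_joined(self):
        # A handler gated on @when_not('cluster.joined') would run
        # exactly when this is False -- no -broken hook needed.
        return bool(self.joined_units)


rel = RelationStates('cluster')
rel.on_joined('etcd/1')
rel.on_joined('etcd/2')
rel.on_departed('etcd/1')
assert rel.any_joined        # etcd/2 is still related
rel.on_departed('etcd/2')
assert not rel.any_joined    # the reactive equivalent of "broken"
```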
[14:46] <lazyPower> oo cory_fu_ - thats a problematic revelation
[14:46] <lazyPower> https://github.com/juju-solutions/interface-etcd/pull/5/files
[14:47] <lazyPower> i was unable to discern the departing unit without using that
[14:47] <lazyPower> is there an alternative pattern i can use?
[14:47] <cory_fu_> lazyPower: Just change "broken" there to "departed"
[14:47] <lazyPower> ok so departed only runs on the departing unit?
[14:47] <cory_fu_> Correct
[14:48] <lazyPower> i'm pretty sure i had this flipped and it tanked, causing every unit to self-unregister
[14:48] <cory_fu_> I sent an email about this to the Juju list a while back, but clearly I need to push it more.  Broken is broken in reactive.
[15:04] <tinwood> beisner, for tempest charm: https://review.openstack.org/325966 Move files to new layered location
[15:06] <beisner> tinwood, [testenv:pep8] needs to be the tox enviro name for lint
[15:07] <beisner> if we are to keep this in line with other openstack projects and other os-charms
[15:07] <tinwood> beisner, kk
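The kind of stanza beisner means, sketched for tox.ini (illustrative only; real openstack charm projects pin their own deps and flags):

```ini
# Sketch only -- the lint env is named pep8, matching other openstack projects.
[testenv:pep8]
deps = flake8
commands = flake8 {posargs}
```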
[15:27] <lazyPower> cory_fu_ - yeah i just flipped it from broken to departed and i get behavior i absolutely dont want
[15:27] <lazyPower> it nukes everything but the leader
[15:32] <beisner> gnuoy, tinwood - fyi transitory os-charmer hacluster and tempest gh repos deleted. ✓
[15:32] <tinwood> beisner, yay!  Thanks, it will help to reduce my confusion!
[15:56] <gQuigs> hi there, I'm trying to get sosreport ready for Juju 2.0 and otherwise improve our automated logging plugin
[15:56] <gQuigs> I want to capture juju debug-log -n 1000, but the syntax appears to be materially different between juju 1 and 2
[15:57] <gQuigs> for juju 2.0 - juju debug-log -T -n 1000
[15:57] <gQuigs> for juju 1.25 - juju debug-log -n 1000
[15:58] <gQuigs> and if you run it with -T on 1.25 it fails; without -T on 2.0 it will hang indefinitely
[15:58]  * gQuigs really doesn't want to have to special case versions;
[16:01] <marcoceppi> gQuigs: you're going to pretty much have to, 2.0 is a backwards breaking release
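The special-case can at least be kept tiny. A sketch (hypothetical helper; it assumes the juju version string is already known, e.g. from `juju version`, and the flags are taken from the lines above):

```python
def debug_log_args(juju_version, lines=1000):
    """Build the debug-log invocation for a given juju version.

    juju 1.x tails by default and has no -T flag; juju 2.x needs -T
    to avoid tailing indefinitely (per the discussion above).
    """
    major = int(juju_version.split('.')[0])
    args = ['juju', 'debug-log', '-n', str(lines)]
    if major >= 2:
        args.insert(2, '-T')
    return args

assert debug_log_args('1.25.5') == ['juju', 'debug-log', '-n', '1000']
assert debug_log_args('2.0-beta8') == ['juju', 'debug-log', '-T', '-n', '1000']
```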
[16:03] <anita_> trying to install JUJU 2.0 as nonroot
[16:03] <gQuigs> marcoceppi: this is the first change I've run into that actually breaks anything
[16:03] <anita_> but getting error
[16:04] <marcoceppi> gQuigs: wait until you run `juju get` ;)
[16:04] <marcoceppi> err, now juju get-config
[16:05] <gQuigs> marcoceppi: could the tail be only when run in an interactive session?
[16:05] <gQuigs> running commands that just don't work is fine.. it's the hanging indefinitely that really gets us
[16:05] <gQuigs> so we can run both juju get and juju get-config and not care that the output failed
[16:06] <marcoceppi> gQuigs: open a bug, not a core developer so i can't say for sure
[16:06] <gQuigs> marcoceppi: will do, thanks
[16:15] <anita_> when installing juju 2.0 getting error as "ERROR unable to contact api server after 1 attempts: cannot load cookies: open /home/charm/.go-cookies: permission denied"
[16:18] <gQuigs> reported - https://bugs.launchpad.net/juju/+bug/1589581
[16:18] <mup> Bug #1589581: Consistant basic use of debug-log between 1.25 and 2.0 <pyjuju:New> <https://launchpad.net/bugs/1589581>
[16:18] <gQuigs> bah humbug
[16:18] <gQuigs> there we go
[16:19] <gQuigs> I think I ask this every time I report a bug.. can we close the old project https://launchpad.net/juju?
[16:22] <lazyPower> anita_ - sounds like you may have run juju with sudo at somepoint and logged in?
[16:23] <lazyPower> interestingly enough my .go-cookies is owned by root with permissions: 0600
[16:24] <anita_> lazyPower_: I followed the juju 2.0 document
[16:24] <anita_> But I logged in as root and then su - <nonroot user>
[16:25] <lazyPower> anita_ https://jujucharms.com/docs/devel/getting-started
[16:25] <anita_> Hmm for me -rw------- 1 root  root       5 Jun  6 02:49 .go-cookies
[16:26] <anita_> Ok I will follow this
[16:27] <anita_> I did exactly like this
[16:27] <anita_> somewhere i missed something?
[16:27] <lazyPower> my suggestion would be to remove the .go-cookies and re-login
[16:28] <anita_> Ok
[16:28] <anita_> let me try that way
[16:29] <anita_> just to note: when i changed the owner of .go-cookies, bootstrap is successful
[16:29] <anita_> any idea how to delete a controller
[16:30] <anita_> juju destroy-controller?
[16:31] <lazyPower> yep
[16:31] <anita_> ok
[16:31] <lazyPower> juju help commands will lend a hand from here
[16:31] <anita_> k
[16:35] <anita_> I deleted .go-cookies and re-logged in as nonroot after destroying the default controller
[16:37] <arosales> is anyone not seeing the navigation toolbar @ https://jujucharms.com/docs/devel/getting-started
[16:38] <anita_> now its successful
[16:38] <lazyPower> arosales confirmed, i am not seeing the nav bar
[16:38] <rick_h_> arosales: yes it's erroring in the JS
[16:38] <lazyPower> anita_ great :)
[16:38] <arosales> lazyPower: rick_h_ thanks for confirming
[16:38] <arosales> rick_h_: need me to file a bug, or are folks already working on it?
[16:38] <anita_> Then should i follow the same steps as mentioned in the doc?
[16:39] <rick_h_> arosales: I'd file a bug and see if either something in docs broke things or the GUI folks did
[16:39] <anita_> before bootstrap, i will delete the .go-cookies and re-login?
[16:39] <anita_> is that the correct method?
[16:39] <arosales> rick_h_: will do, thanks
[16:40] <anita_> also i have done one more extra step compared to the doc, i.e.: sudo chgrp lxd /var/lib/lxd/unix.socket
[16:41] <anita_> are these two steps correct?
[16:42] <anita_> lazyPower_: please confirm whether the two steps I followed for the nonroot juju 2.0 installation are valid? one is sudo chgrp lxd /var/lib/lxd/unix.socket and then deleting .go-cookies and re-logging in as nonroot
[16:43] <bdx> how's it going everyone? happy monday!
[16:43] <lazyPower> anita_ i'm not sure where that .go-cookies came from. so its hard for me to say. I would say thats a valid working fix for the error you encountered, but i dont think its required every time you setup juju, no.
[16:43] <bdx> does anyone know the status of cross-model-relations?
[16:44] <anita_> ok thanks a lot
[16:44] <bdx> rick_h_: ^
[16:45] <rick_h_> bdx: on this roadmap's cycle of work
[16:45] <bdx> rick_h_: nice! thats exciting!
[16:45] <bdx> thx
[16:46] <arosales> bdx: hello and happy monday indeed. It seems summer is upon us as well
[16:47] <magicaltrout> summer is relative
[16:48] <bdx> magicaltrout: ha, is it raining for you today?
[16:49] <magicaltrout> june gloom
[16:49] <magicaltrout> i left london last week in rain, i arrive in san diego for the week and its grey-ish
[16:49] <bdx> arosales: yea .... Portland winters last a long, dark, rainy 9 months .... so pumped for the sun
[16:50] <bdx> its great for programming though
[16:50] <lazyPower> magicaltrout *hattip* glad you enjoyed
[16:51] <magicaltrout> indeed lazyPower had it on a few times now
[16:52] <beisner> marcoceppi - curious:  i've got a "top" layer which charm-proofs ok -- but what are the intents and expectations for charm-proof wrt layers/interfaces?
[16:53] <marcoceppi> beisner: proof is really only designed to run against the built layer
[16:53] <marcoceppi> we've not really gotten a layer/interface proof yet
[16:54] <arosales> bdx: I hear you're getting lots of sun as of late
[16:54]  * marcoceppi opens a feature for it on charm-tools
[16:55] <beisner> marcoceppi, ack thx.  well fwiw, proof passes on an unbuilt layer currently, which may actually be a bad thing?
[16:56] <marcoceppi> beisner: not really
[16:56] <marcoceppi> but not all layers are expected to pass proof, though I  suppose top layers are
[16:56] <marcoceppi> beisner: but intermediate layers may not have a complete metadata.yaml, for example
[16:56] <beisner> marcoceppi, indeed
[17:18] <skay> is there a juju command to generate dot files?
[17:20] <skay> aha, in a side channel I learned of https://code.launchpad.net/juju-viz
[17:21] <lazyPower> That looks neat skay, what does it do?
[17:23] <skay> lazyPower: I think it will generate a dot file with a graph showing a juju deployment, but I haven't tried it out yet, just got the url just now
[17:24] <skay> lazyPower: I am working on some changes to a charm and a mojo spec that will remove a website from our environment, and having a diagram to show what the current state is will make it easier for people
[17:24] <skay> lazyPower: I have a doc, but it has lots of words. no pictures. generating a dot file will really help
[17:25] <skay> (I am not great at using drawing programs)
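What skay wants reduces to very little code. A hedged sketch (not what juju-dotty actually does -- just the same idea): turn service relation pairs pulled out of `juju status --format yaml` into Graphviz DOT:

```python
def to_dot(relations):
    """Render service relation pairs as an undirected Graphviz graph.

    relations: iterable of (service_a, service_b) pairs, e.g. parsed
    out of the `relations` section of `juju status --format yaml`.
    """
    lines = ['graph deployment {']
    for a, b in sorted(relations):
        # quote names so service names with hyphens stay valid DOT
        lines.append('  "%s" -- "%s";' % (a, b))
    lines.append('}')
    return '\n'.join(lines)


print(to_dot([('wordpress', 'mysql'), ('wordpress', 'haproxy')]))
```

Feeding the result to `dot -Tsvg` (or `fdp`/`neato`) gives the deployment diagram.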
[17:26] <lazyPower> i completely understand skay  :)
[17:26] <lazyPower> they are tedious at best
[17:26] <lazyPower> have you thought about generating a bundle/export and sending it through svg.juju.solutions?
[17:27] <skay> aha, in that repo in bin there is juju-dotty.py for the curious. it takes output from juju status
[17:27] <skay> lazyPower: no. I am not familiar with how to do that
[17:27] <lazyPower> we use the raw bundle files in merges to visualize our prs.  eg https://github.com/juju-solutions/bundle-beats-core/pull/1
[17:28] <lazyPower> pardon the unwieldy url - but the markdown in the pr comment shows the magic:  http://svg.juju.solutions/?bundle-file=https://raw.githubusercontent.com/juju-solutions/bundle-beats-core/cc14520c94ed69b29c667c3d59d189ce3a6166ee/bundle.yaml
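For reference, the "magic" in that URL is just a bundle-file query parameter; building it programmatically is one line (a sketch; it assumes the bundle URL is publicly fetchable, since the service curls it):

```python
try:
    from urllib.parse import urlencode   # python 3
except ImportError:
    from urllib import urlencode         # python 2

def svg_url(bundle_file_url):
    # svg.juju.solutions fetches the bundle itself, so bundle_file_url
    # must be reachable from the service (e.g. a raw GitHub URL).
    return 'http://svg.juju.solutions/?' + urlencode(
        {'bundle-file': bundle_file_url})
```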
[17:29] <cory_fu_> Anyone else seeing the side-bar missing on https://jujucharms.com/docs/devel/getting-started
[17:29] <lazyPower> i suppose this may not be as  useful, i think the api requires a url to curl to fetch the bundle
[17:29] <lazyPower> cory_fu_ - there's an open bug about it
[17:29] <cory_fu_> Ah, ok
[17:29] <lazyPower> cory_fu_ https://github.com/juju/docs/issues/1141
[17:30] <lazyPower> cory_fu_ if you have a sec i'm curious if there's another way i can get the unit thats actually leaving the relationship other than using the -broken hook as its broken.
[17:31] <cory_fu_> -broken will never give you a departing unit because it's only called after every single remote unit is gone
[17:31] <cory_fu_> What you actually want is the -departed hook, which behaves exactly like you're thinking -broken does
[17:31] <lazyPower> i have code that may be side-effecting into working
[17:32] <skay> lazyPower: wow that is pretty
[17:32] <cory_fu_> That is, within the -departed hook, conv.units (or hookenv.remote_unit()) will contain the unit that is leaving
[17:32] <cory_fu_> And if you add a state to the conversation, it will only apply to the unit that is departing
[17:32] <cory_fu_> lazyPower: ^
[17:33] <skay> lazyPower: /me searches. https://github.com/marcoceppi/svg.juju.solutions yes? looks like it
[17:33] <lazyPower> cory_fu_  do you have an example? https://github.com/chuckbutler/interface-etcd/blob/c58568c4ffaa9099deea8f20199d9ec501f5aeff/peers.py  -- this conversation is scope unit
[17:33]  * skay reads readme and chuckles. hehe
[17:34] <cory_fu_> kwmonroe: I did see that bdx said "working on a complete fix now" in https://github.com/jamesbeedy/layer-puppet-agent/issues/2 -- I mentioned that during daily and said that the commit he referenced before that looks to me like it should resolve the issue, so I'm not sure what "complete solution" is required
[17:34] <cory_fu_> kwmonroe: I'd say go ahead and test the current fix
[17:35] <kwmonroe> cool
[17:36] <cory_fu_> lazyPower: In your etcd peers.py, the only thing you should need to do is change -relation-broken on line 28 to -relation-departed.  It should function correctly with just that change, though you should also add a "dismiss" method that removes that state once it's been processed.  Let me find you an example where we use the same pattern
[17:38] <cory_fu_> lazyPower: https://github.com/juju-solutions/interface-mapred-slave/blob/9493fab49a447a317b276523595e40df064299d5/requires.py#L31
[17:38] <cory_fu_> lazyPower: And it's used in the charm (now in upstream!) at https://github.com/apache/bigtop/blob/master/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/reactive/resourcemanager.py#L155
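The -departed-plus-dismiss pattern from those links, modelled standalone (illustrative names; in charms.reactive the state lives on a per-unit conversation, which a plain dict stands in for here):

```python
# Standalone model of the -departed + dismiss pattern linked above.
# Not real charms.reactive code: a dict of unit -> set-of-states
# stands in for per-unit conversations.

class DepartedTracker:
    def __init__(self):
        self.states = {}  # unit -> set of states

    def departed(self, unit):
        # -relation-departed handler: flag exactly the leaving unit
        self.states.setdefault(unit, set()).add('cluster.departed')

    def dismiss(self, unit):
        # called by the charm layer once the departure is processed,
        # so the departed handler does not keep re-firing
        self.states.get(unit, set()).discard('cluster.departed')

    def departing_units(self):
        return [u for u, s in self.states.items()
                if 'cluster.departed' in s]


t = DepartedTracker()
t.departed('etcd/2')
assert t.departing_units() == ['etcd/2']   # only the leaving unit is flagged
t.dismiss('etcd/2')
assert t.departing_units() == []
```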
[17:40] <lazyPower> cory_fu_ - this is different behavior in peers i think. Every unit is getting the .departed state set
[17:40] <lazyPower> its not scoped between just the departing unit and the leader. every peer is getting that state and in this case telling itself to unregister and terminate the application.
[17:41] <lazyPower> when i switch from -broken to -departed
[17:42] <cory_fu_> lazyPower: You are incorrect
[17:42] <cory_fu_> :)
[17:42] <lazyPower> cory_fu_ - hang on i'll gather logs and make my case
[17:42] <cory_fu_> https://github.com/juju-solutions/layer-etcd/blob/master/reactive/etcd.py#L315
[17:44] <cory_fu_> lazyPower: That handler will match if *any* unit has the departed state.  I think what you actually want there is @when_not('cluster.joined')
[17:44] <icey> charm build seems to not recognize / use the series: key in the metadata.yaml
[17:44] <lazyPower> well for one
[17:44] <lazyPower> i love the .dismiss method on the interface
[17:44] <lazyPower> i'm clearly doing interface state handling wrong
[17:45] <cory_fu_> lazyPower: Ok, I think I see the problem
[17:46] <cory_fu_> You're using the pattern of setting a state on join and then immediately removing it.  You want to do that because you want to track the individual unit that is new, which is understandable.  However, because you're not leaving a state set on the unit, you have no way of tracking when there are still units remaining
[17:47] <cory_fu_> So you're trying to use -broken to detect when all units are gone.  Also understandable.  However, that doesn't really play well with reactive because when -broken fires, there's no longer any remote units to be participating in conversations.  You end up setting a state on a None unit conversation, which will cause problems for you later because you can never remove it
[17:48] <lazyPower> also a good point
[17:49] <cory_fu_> lazyPower: What I would recommend is to have a .joined state that you don't remove in provide_cluster_details whose purpose is to keep track of all units that are still joined.  You then remove it in -departed, and you can detect "no remaining peers" by @when_not('cluster.joined')
[17:50] <cory_fu_> You can keep the .joining state to make it easy to tell which unit is new.  But I'm not sure what the .declare_self state is for
[17:50] <cory_fu_> Can you explain that?
[17:52] <lazyPower> cory_fu_ - sure, the leader acts as the single unit we care about during turn up. It performs a single node bootstrap and waits for peers. When a peer knows its about to come online, it has to talk to the leader first. declare_self triggers the peer saying "Hey i'm here with this detail" and the leader responds with the static initial cluster configuration.
[17:52] <cory_fu_> lazyPower: Also, to clarify why I said you were incorrect before.  The state does not get attached to every peer; rather, it gets attached to a single peer and that is sufficient to trigger the @when('cluster.departed') test.  Does that match up with what you were seeing in your logs?
[17:53] <lazyPower> this was the source of a lot of headaches before, so this may need additional love in the layer, but peers were racing, and static configuration requires staggered start.
[17:53] <lazyPower> so you have to gate based on direct communication with the etcd application running on the leader
[17:54] <cory_fu_> Hrm.  This is all symptomatic of the failure of the conversation model.  It's too divergent from how people are used to thinking about relations and is causing more confusion than clarity.  :(
[17:56] <lazyPower> i'll give it another go based on our findings here. There's a goldmine of feedback here
[17:57] <cory_fu_> In the next pass, I want to make the underlying process more explicit by dropping the idea of "conversations".  A unit joins the relation, so you attach a state to it.  Handlers match states and the relation classes group all units that are in that state and let you operate on them individually or as a group.  (That's basically how it works now, but we tried to abstract some of that out in the Conversation model and ended up making it more opaque)
[17:58] <lazyPower> from what i understood of that, it makes sense :)
[17:58] <lazyPower> or, alternatively, we can do some better docs and maybe some example interface seminars for these use cases
[17:59] <hatch> has anyone ever got the error "ERROR Get https://.... x509: certificate has expired or is not yet valid" when bootstrapping lxd (with master)
[17:59] <lazyPower> our current interface docs were primarily cut/paste and remixed by me, so there's some gaps in there
[17:59] <lazyPower> hatch not with the lxd provider no, i have with juju stable releases and the local provider
[17:59] <hatch> lazyPower: do you recall what you did to fix it?
[18:00] <arosales> devel side bar is back
[18:00] <arosales> thanks matthelmke and evilnickveitch :-)
[18:00] <matthelmke> :)
[18:00] <arosales> https://jujucharms.com/docs/devel/ that is if folks were wondering about the context
[18:01] <lazyPower> hatch https://bugs.launchpad.net/juju-core/+bug/1245550 - ooolllddd bug
[18:01] <mup> Bug #1245550: ERROR TLS handshake failed: EOF waiting for stateserver <bootstrap> <ui> <juju-core:Fix Released> <https://launchpad.net/bugs/1245550>
[18:02] <hatch> oh heh
[18:02] <hatch> hmm
[18:03] <hatch> lazyPower: do you know where this certificate is stored? So that I can check the date?
[18:03] <hatch> or maybe delete it :)
[18:03] <lazyPower> not with juju2, check the $JUJU_DATA Directory
[18:03] <lazyPower> $home/.local/share/juju
[18:03] <hatch> btw $JUJU_DATA isn't defined
[18:03] <hatch> :)
[18:04] <lazyPower> its in the variable reference guide
[18:05] <lazyPower> hatch https://jujucharms.com/docs/devel/reference-environment-variables right thurrrr at the top
[18:05] <hatch> ahh maybe because it's built and not installed
[18:26] <marcoceppi> skay: there's a charm for that too, as well as http://svg.juju.solutions
[18:27] <skay> marcoceppi: I didn't realize that was a real url I could use. I did a search and found your repo and was trying to install it
[18:28] <skay> spiffy
[18:29] <marcoceppi> skay: oh yeah, it's just a web service
[18:29] <skay> marcoceppi: I started down a rabbit hole of finding jujusvg
[18:30] <marcoceppi> skay: you can, with juju 2.0-beta8, just `juju deploy cs:~marcoceppi/xenial/charm-svg` (https://jujucharms.com/u/marcoceppi/charm-svg/3)
[18:30] <marcoceppi> skay: it was such a pain in the ass to setup, I ended up writing a charm for it
[18:36] <skay> should I be able to generate svg based on juju status --format yaml?
[18:37] <marcoceppi> skay: not really, though you should be able to
[18:37] <marcoceppi> skay: it'd be a nice feature
[18:37] <skay> marcoceppi: it complained about icons and didn't generate an svg. I didn't want to go down a rabbithole on that if it wasn't intended that way
[18:37] <skay> marcoceppi: yeah, it would be neat
[18:37] <marcoceppi> skay: it expects a v3 bundle
[18:38] <marcoceppi> skay: are these local charms?
[18:39] <skay> marcoceppi: private ones
[18:39] <skay> marcoceppi: some are private, some public
[18:42] <marcoceppi> skay: all the private ones won't show up
[18:42]  * skay nods
[18:52] <bdx> cory_fu, kwmonroe: sup
[18:52] <kwmonroe> yo yo bdx
[18:52] <bdx> hey! I've a few things to run by you
[18:52] <bdx> concerning layer-puppet
[18:53] <bdx> specifically puppet-agent
[18:54] <kwmonroe> sure thing bdx
[18:54] <bdx> kwmonroe, cory_fu, firstly -> http://apt.puppetlabs.com/
[18:55] <kwmonroe> fwiw, i think layer-puppet should go away in favor of puppet-agent as the base layer.. and a new layer-puppet be a charm that includes puppet-agent and is like a deployable puppet service.
[18:55] <bdx> what I'm running into is that puppet3 and puppet4 are not both supported equally on a per-release basis
[18:56] <bdx> kwmonroe: entirely, thats how I've been using it .... I'm running into complications though
[18:56] <bdx> e.g. xenial isn't supported for puppet3, but it is for puppet4
[18:56] <kwmonroe> ugh, lame
[18:57] <bdx> the puppet3 debs are the puppetlabs-release-<ubuntu version>.deb
[18:57] <bdx> found here apt.puppetlabs.com
[18:58] <bdx> it looks like puppet4 is packaged for all releases since precise
[18:59] <bdx> kwmonroe, I guess my question is, should I make 'puppet-version' config default to puppet4
[18:59] <bdx> and then throw a conditional for puppet3 that xenial isn't supported?
[19:00] <bdx> or should I throw away everything puppet3
[19:00] <bdx> and make the charm only puppet4
[19:00] <bdx> ?
[19:00] <admcleod> bdx: we need 3 for bigtop
[19:00] <bdx> darn
[19:00] <bdx> admcleod: mind if I enquire as to why?
[19:01] <admcleod> bdx: it doesnt work with 4 :}
[19:01] <kwmonroe> bdx: what's the puppet4 package name?  i don't see stuff like puppet_4.x for trusty: http://apt.puppetlabs.com/pool/trusty/main/p/puppet/
[19:01] <kwmonroe> or am i looking in the wrong place?
[19:01] <bdx> kwmonroe
[19:01] <bdx> kwmonroe: just go to the base url
[19:02] <bdx> anything with 'puppetlabs-release-pc1-<release>.deb' is puppet4
[19:02] <bdx> admcleod: how does bigtop currently use puppet-agent/puppet-master
[19:03] <bdx> puppet3 -> puppetlabs-release-<release>.deb
[19:03] <bdx> puppet4 -> puppetlabs-release-pc1-<release>.deb
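bdx's naming scheme as a helper function (a sketch; 'pc1' is puppetlabs' Puppet Collection 1 repo, which is what carries puppet4):

```python
def puppetlabs_release_deb(series, puppet_version):
    """Release-deb filename per the scheme above:
    puppet3 -> puppetlabs-release-<series>.deb
    puppet4 -> puppetlabs-release-pc1-<series>.deb
    """
    prefix = 'pc1-' if puppet_version == 4 else ''
    return 'puppetlabs-release-%s%s.deb' % (prefix, series)

assert puppetlabs_release_deb('trusty', 3) == 'puppetlabs-release-trusty.deb'
assert puppetlabs_release_deb('xenial', 4) == 'puppetlabs-release-pc1-xenial.deb'
```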
[19:04] <admcleod> bdx: its masterless - i cant remember the exact issue, let me test it again now just in case i was wrong
[19:05] <kwmonroe> bdx: i don't see a big deal with defaulting to 4 as long as there's a config opt to specify 3 if we need it.
[19:06] <bdx> admcleod, kwmonroe: I think masterless and masterful puppet should be separate layers/charms .... the deps are entirely different, so are the implementation and use case
[19:06] <bdx> kwmonroe, ok, just  add a conditional to check for xenial/vivid/wily for puppet3?
[19:06] <admcleod> bdx: well. we dont need any deps for masterless, we just need to install the package. so ideally 'if puppet master not defined, dont configure it - assume its masterless'
[19:09] <bdx> admcleod, kwmonroe: should I just set status to 'blocked' if ubuntu_release is xenial/vivid/wily and puppet-version => 3?
[19:09] <kwmonroe> bdx: i think you mean == 3 there
[19:10] <bdx> totally my bad
[19:10] <bdx> yea
[19:10] <kwmonroe> bdx: you can either be nice and detect that series won't have the package available, or you can just let apt fail and be like "you should have read the readme where i talked about puppet3 only available < vivid"
[19:10] <bdx> gotcha
[19:11] <kwmonroe> bdx: because who knows -- maybe somebody has a xenial ppa with puppet3 that they have configured as an install_source.. we shouldn't block them just in case puppet3/xenial is available in their configured env.
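kwmonroe's "be nice and detect" option might look like this (hypothetical helper for the layer; the series list comes from the discussion above, and a configured install_source bypasses the check so a custom ppa still works):

```python
# Hypothetical check for the puppet layer. Per the discussion,
# puppetlabs ships no puppet3 packages for vivid/wily/xenial, but a
# user-configured install_source might, so we only warn, never block.

PUPPET3_MISSING_SERIES = {'vivid', 'wily', 'xenial'}

def puppet3_likely_unavailable(series, install_source=None):
    if install_source:
        # trust the operator's configured source (e.g. a custom ppa)
        return False
    return series in PUPPET3_MISSING_SERIES


assert puppet3_likely_unavailable('xenial')
assert not puppet3_likely_unavailable('trusty')
assert not puppet3_likely_unavailable('xenial',
                                      install_source='ppa:me/puppet3')
```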
[19:11] <bdx> totally
[19:13] <cory_fu_> I'd like clarification on why bigtop doesn't work with puppet 4.  Are there significant syntax changes that make the bigtop manifests not work?
[19:13] <cory_fu_> admcleod: ^
[19:14] <admcleod> cory_fu_: testing
[19:14] <bdx> kwmonroe: thats the other thing .... because of the ubuntu release in the deb url, the functionality of layer-apt is limited e.g. install_sources config can't be used :-(
[19:15] <kwmonroe> so bdx, i like the idea of a single puppet layer that can work in master/less mode, but i don't know enough to know how disjoint those 2 modes are.  if it's totally separate code paths depending on the config, that's one thing.. but if it's master = masterless + some, then i think we can still have a single puppet layer.
[19:15] <bdx> kwmonroe: here the interesting part
[19:16] <bdx> kwmonroe: puppet-agent for puppet3 depends on puppet AND puppet-common
[19:16] <bdx> puppet-agent for puppet4 depends on neither
[19:18] <bdx> kwmonroe, the dependency for masterless puppet is included in the default archives
[19:18] <beisner> thedac, can you land this?  testing ci/push flow on hacluster post-move.  https://review.openstack.org/#/c/325478/
[19:19] <mattrae> hi, with juju 2.0 is it possible to use a bundle with stages like was supported with juju-deployer. 'juju deploy bundle.yaml stage1' isn't working for me in this example https://pastebin.canonical.com/158108/
[19:21] <beisner> thedac, also this one plz & thx:  https://review.openstack.org/#/c/324795
[19:22] <rick_h_> mattrae: no, the built in bundle support is basic and does not do that.
[19:22] <bdx> kwmonroe: the masterless puppet dep doesn't change between 3 and 4, unless you add the deb sources for puppet3, which just gives you the full set of puppet releases
[19:22] <rick_h_> mattrae: the deployer was updated to work with 2.0 if you need it
[19:23] <mattrae> rick_h_: ahh cool, thanks!
[19:22] <bdx> but adding the puppet4 deb has no effect on masterless puppet deps, and as it looks to me masterless puppet3 doesnt need deb sources added either
[19:28] <admcleod> bdx: cory_fu_ hrm looks like it might just be a sys path problem
[19:30] <bdx> admcleod, cory_fu, kwmonroe: cloud-init doesn't have support for puppet4 yet ....
[19:30] <cory_fu_> Why does cloud-init need to support puppet4?
[19:30] <admcleod> bdx: cory_fu_ kwmonroe right, puppet3 installs /usr/bin/puppet, 4 is /opt/puppetlabs/bin/puppet which isnt added to juju sys path so the hook fails
[19:31] <bdx> I was just saying ... a lot of people use cloud-init to automate puppet getting provisioned on their infra
[19:32] <bdx> totally
[19:32] <cory_fu_> Really?  I wasn't aware that cloud-init had any tie-ins with puppet, but then, I don't know that much about cloud-init
[19:32] <bdx> theres a cloud-init puppet module
[19:32] <cory_fu_> I see
[19:33] <bdx> it allows me to spin up instances and have them auto puppet from my puppet master
[19:33] <bdx> very useful
[19:34] <bdx> its use should be replaced by layer-puppet
[19:34] <bdx> for my infra at least
[19:35] <admcleod> bdx: cory_fu_ kwmonroe /opt/puppetlabs/bin is added to /etc/environment but i guess its not re-sourced after its added. so hooks fail, and juju run fails.
[19:38] <bdx> yeah, I think the cloud-init puppet module needs a similar fix
[19:38] <beisner> a drive-by o/ howdy -> bdx
[19:39] <bdx> beisner! you got me excited last week with all that multi-hypervisor support talk
[19:40] <bdx> beisner: by the way, who is heading that initiative, you?
[19:40] <admcleod> bdx: cory_fu_ kwmonroe created a symlink but it fails because theres no /etc/puppet/hiera.yaml. this will need more testing and i have to EOD, can pick it up again tomorrow morning
[19:40] <beisner> bdx! yo
[19:41] <bdx> admcleod: sweet, I'll have pushed some new code up by the am, I'll ping you tomo
[19:43] <admcleod> bdx: AFAIR, there are quite a few breaking changes between 3 and 4 - not just the config but also how relaxed it is about broken syntax and other issues. i seem to recall it not being a simple upgrade
[19:44] <beisner> bdx, the collective 'we' os-charmers really.  we've got test bundles and a readme documenting the process.
[19:45] <bdx> admcleod: yea, its not, we just underwent the upgrade here at dhc .... huge refactoring to get our puppetstack code base to puppet4
[19:46] <bdx> beisner: where is this happening?
[19:46] <admcleod> bdx: yeah ive mostly repressed the experience :}
[19:47] <beisner> bdx, PoC @ http://bazaar.launchpad.net/~ost-maintainers/openstack-charm-testing/trunk/view/head:/README-multihypervisor.md   which exercises nova kvm and nova lxd in a single deploy, firing up both types of instances.
[19:48] <bdx> beisner: does juju understand multiple image UUIDs now, or is that part of the WIP?
[19:48] <beisner> bdx that example is completely independent of juju features (and currently only really validated @ 1.25.x, but i see no reason 2.x would be an issue, known bugs notwithstanding)
[19:49] <beisner> bdx, but:  i believe there are some juju features growing to support the flip side of that.  ie. juju deploying on top of that multi-hv cloud.   right now, i've only validated that the cloud itself listens and responds appropriately to `nova boot` type of operations.
[19:50] <bdx> nice
[19:50] <bdx> awesome
[19:51] <beisner> bdx, so tldr:  glance images are tagged with hypervisor types, then when a user nova boots one of those images, it 'just works'
[19:53] <bdx> beisner: nice, so you are just adding '--property hypervisor_type', or is there other config?
[19:53] <beisner> that's the one bdx
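The tagging beisner confirms could look like the following command sketch. Hedged: `hypervisor_type` is the property named in the discussion, but the image names, the `lxd` value, and the flavor are assumptions, and matching only happens if the nova scheduler's `ImagePropertiesFilter` is enabled.

```console
$ openstack image set --property hypervisor_type=qemu xenial-kvm-image
$ openstack image set --property hypervisor_type=lxd xenial-lxd-image
# nova then schedules the boot onto a matching hypervisor:
$ openstack server create --image xenial-lxd-image --flavor m1.small my-lxd-guest
```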
[19:53] <beisner> bdx, worth having a look at the test bundle though too
[19:53] <bdx> yea, I am now
[19:54] <bdx> that's a great repo
[19:54] <bdx> excellent examples
[19:55] <beisner> bdx, that's where openstack bundles start life.  then we test, generally automate, and selectively publish once everyone's convinced.  ;-)   just bear in mind anything in there is subject to breakage as that's where we iterate and dev things.
[19:56] <bdx> entirely
[20:08] <bdx> beisner: I've been running openstack on lxd all on one node, I have it currently supporting a copy of all of our staging infra .... it's f***ing awesome .... check it
[20:09] <bdx> http://imghub.org/image/U3D4
[20:10] <bdx> http://imghub.org/image/UvyQ
[20:11] <bdx> http://imghub.org/image/UD10
[20:12] <bdx> beisner: lxd openstack bundle + dvr + vlan tenant nets .... bet I'm the first :-)
[20:15] <bdx> 10x 128GB Samsung 850 Pros under the raidz :-)
[20:17] <bdx> http://imghub.org/image/UnxF
[20:29] <beisner> bdx, sweet!
[21:24] <beisner> hi marcoceppi, any known issues with md rendering @ charm store?  i've got this [1] which renders a code block all in one line (i believe unexpectedly) [2].
[21:24] <beisner> [1] https://raw.githubusercontent.com/openstack/charm-tempest/master/README.md
[21:24] <beisner> [2] https://jujucharms.com/u/openstack-charmers-next/tempest/xenial/0
[21:25] <beisner> ie. i've used 3 diff md tools and they all render it as two lines in the code block
[21:48] <arosales> beisner: ya it is a known issue, let me see if I can find the bug link
[21:50] <beisner> arosales, ack thx for the info.  i'm not too worried about it.  mainly wanted to confirm that it's a known thing vs. adjusting the .md content specifically for the cs:.
[21:53] <arosales> beisner: https://github.com/CanonicalLtd/jujucharms.com/issues/276
[21:53] <arosales> beisner: thanks for confirming it is a known issue rather than a bug that needed to be filed
[23:22] <mattrae> hi, i'm deploying services to lxc containers using juju 2.0 beta8 and maas 2.0 beta6. i see the containers are getting an interface bridged to physical interfaces that are configured. one issue i'm seeing is even though i have a default gateway configured on one of the networks, its not getting added to the container. https://pastebin.canonical.com/158125/
[23:23] <mattrae> when i add the default gateway manually its working. i'm wondering what i'm missing to have the default gateway added
[23:35] <arosales> mattrae: hmm, interesting, not sure what the correct answer here is. Suggest if no one else chimes in to give the juju mailing list a try and possibly get more eyes on it
[23:38] <bdx> mattrae: have you configured the maas space to use the gateway?
[23:44] <bdx> mattrae: I'm sure you've already seen this, but the bottom half of this page might hook you up if not -> https://maas.ubuntu.com/docs2.0/rack-configuration.html
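Following on from bdx's suggestion, a hedged sketch of where mattrae's missing default gateway would be set (assuming the MAAS 2.0 CLI; `$PROFILE`, the subnet id, and the gateway address are placeholders):

```console
$ maas $PROFILE subnets read            # check whether gateway_ip is set on the subnet
$ maas $PROFILE subnet update 2 gateway_ip=10.245.0.1
```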
[23:48] <gugpe> I need to debug a service that is stuck in a "pending" state. How do I do that?
[23:48] <gugpe> Specifically I'm trying to deploy a custom (local) charm I wrote.
[23:49] <gugpe> I have all the start-up lifecycle hooks "install config-changed start", but nothing in them.
[23:49] <arosales> gugpe: is the unit or the machine stuck in pending
[23:50] <arosales> gugpe: I am assuming you have a machine but the unit for the given charm[service] is stuck in pending, correct?
[23:50] <arosales> gugpe: also what version of juju 1.x or 2.x?
[23:51] <gugpe> Correct, the lxc machine is pending
[23:51] <gugpe> 2.0-beta8-xenial-amd64
[23:51] <gugpe> Both machine and unit.
[23:53] <gugpe> sudo juju deploy /home/wurde/GitLab/charm-gitlabomnibus
[23:53] <gugpe> that's the command I'm sending.
[23:54] <gugpe> What's the best way to debug this?
[23:55] <bdx> gugpe: don't execute this as root, for a start
[23:55] <bdx> gugpe: there is a debug flag, '--debug', I think
[23:56] <gugpe> Ok. I needed to set ownership of controllers.yaml. I'll look for the --debug flag.
[23:59] <gugpe> The juju status message is: allocating - Waiting for agent initialization to finish
[23:59] <gugpe> I'm running on xenial series
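A hedged sketch of the usual first steps for a machine stuck in pending on juju 2.0-beta (the machine id is hypothetical; run as your own user, not root):

```console
$ juju status --format=yaml      # the machine section often shows a provisioning error
$ juju debug-log --replay        # controller and machine agent logs from the start
$ juju show-machine 0            # details for the stuck machine
```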