[07:48] <stub> cory_fu: Are you working on actions for reactive? I'm going to want them sooner rather than later and might give it a go.
[09:49] <Prabakaran> I am not able to install charm tools on Linux c277-pkvm-vm54 3.16.0-30-generic #40~14.04.1-Ubuntu SMP Thu Jan 15 17:42:36 UTC 2015 ppc64le ppc64le ppc64le GNU/Linux architecture. Could someone please advise me on this?
[09:51] <Prabakaran> I am getting this issue http://pastebin.ubuntu.com/16093003/
[09:51] <Prabakaran> while installing charm tools
[10:06] <nottrobin> Is the new resource stuff introduced here https://insights.ubuntu.com/2016/02/15/introducing-juju-resources/ documented more completely anywhere?
[11:31] <marcoceppi> Prabakaran: it's a known issue
[11:36] <marcoceppi> nottrobin: not at the moment, any questions?
[12:19] <cory_fu> stub: I am not, but also want them so if you have the time to work on them, please do
[12:23] <nottrobin> marcoceppi: am I correct in thinking I could use resource-set on the host and resource-get in the charm hook to pass, say, a URL of where to download code from?
[12:24] <nottrobin> marcoceppi: to basically replace the functionality provide by content-fetcher: https://jujucharms.com/u/gnuoy/content-fetcher/precise/11#charm-config-archive_location
[12:26] <marcoceppi> nottrobin: not quite, think of it more like this: the charm describes a set of resources it needs. Then, when deployed from the charm store, where the charm would have been uploaded with a binary copy of some or all of the resources it describes, Juju delivers those resources to disk. Then you, an operator, can push new versions of one or more of the resources to the deployed charm. It's not a content fetcher so much as a content
[12:26] <marcoceppi> delivery mechanism. `wget https://payload.tld/thing.tar.gz; juju attach app payload-thing=./thing.tar.gz`
[12:27] <magicaltrout> see marcoceppi it needs more functionality! :P
[12:28] <marcoceppi> nottrobin: an example of this is the charm-svg charm, which provides https://svg.juju.solutions - it needs two resources, the golang binary for converting a bundle to an svg and the bottle.py web app. So instead of having the charm just wildly grab these from the internet, where you have problems like version drift between units and unreliable services, I just deploy the charm with the resources from my machine (or the charm store) after I've gotten/
[12:28] <marcoceppi> verified them
[12:29] <marcoceppi> magicaltrout: yes, it could be expanded a bit, where the charm could say "here's where you get it" but you still have problems with that model
[12:29] <marcoceppi> magicaltrout: this just makes it a really straightforward tuple of charm-version and resourceN-version; it's a locked, tested, known-working set, with a mechanism for you to update
[12:30] <magicaltrout> this is true. You could have a mandatory checksum or something to go with it?
[12:30] <nottrobin> marcoceppi: could you dump a few commands for how you'd then deploy the charm-svg charm with its resource into a pastebin?
[12:30] <marcoceppi> magicaltrout: we're not out to solve every problem, just solve MVP and iterate on feedback ;) we'll never get everything everyone wants on the first stab
[12:30] <nottrobin> marcoceppi: I'm trying to understand how we could use this
[12:31] <marcoceppi> nottrobin: absolutely, since I have to redeploy in like 20 mins
[12:31]  * marcoceppi grumbles about betas in production
[12:31] <nottrobin> marcoceppi: at present, we have 2 charms we use, wsgi-app and content-fetcher, which have a setting to allow us to tell it where to get the code, from the internet
[12:31] <magicaltrout> indeed, I'm just ribbing you. But it would be cool to be able to ship a charm and have the server grab the resources so it can deploy locally, with manual intervention
[12:31] <nottrobin> marcoceppi: we put the code in a tarball which we upload to swift with a unique ID
[12:31] <nottrobin> marcoceppi: then we do a "juju set archive_location={new_url}"
[12:32] <nottrobin> marcoceppi: so I'd like to understand how that process could be replaced by resources
[12:34] <marcoceppi> nottrobin: oh, absolutely
[12:59] <tvansteenburgh> anyone have an example of a charm that works with upstart or systemd, depending on the os release it's deployed on?
[12:59] <jcastro> postgres would be my first guess
[13:00] <marcoceppi> tvansteenburgh: why not just install upstart and use upstart on all machines?
[13:01] <marcoceppi> tvansteenburgh: systemd will wrap it, so it really doesn't matter
[13:01] <marcoceppi> tvansteenburgh: just make sure you apt install upstart
[13:01] <tvansteenburgh> marcoceppi: cool, didn't know it was that easy, will do, thanks
[13:07] <mattyw> evilnick, ping?
[13:07] <evilnick> mattyw pong
[13:08] <mattyw> evilnick, hey there, I know you're busy, but quick question: the docs at https://jujucharms.com/docs only list 1.25. Anyone that gets there from xenial is going to be quite confused; is there a plan to add a 2.0 tab?
[13:09] <evilnick> mattyw, the 2.0 docs will go live, probably at some point before the 2.0 release
[13:10] <evilnick> as there are still command changes etc going on, it would possibly be more confusing to have broken docs...
[13:10] <evilnick> mattyw^
[13:12] <mattyw> evilnick, ok ack
[13:12] <mattyw> cherylj, ping?
[13:21] <tvansteenburgh> stub: in the apt layer, is there a state that gets set when everything in `extra_packages` has been installed?
[13:27] <cherylj> hey mattyw, what's up?
[13:31] <mattyw> cherylj, hey hey, sent you an email :)
[13:32] <stub> tvansteenburgh: @when_not('apt.queued_installs') is the best you can do at the moment
[13:33] <tvansteenburgh> stub: ok thanks
[13:33] <stub> (which is true when everything queued has been installed, not just extra_packages)
[13:34] <tvansteenburgh> stub: but could it also be true before anything is queued
[13:38] <tvansteenburgh> stub: i guess the `hookenv.atstart(configure_sources)` means the apt stuff will always run *before* my charm code
[13:40] <tvansteenburgh> stub: so, if not('apt.queued_installs'), i can assume everything has been installed, not that nothing has been queued yet
[13:40] <stub> tvansteenburgh: Yes. Which can be annoying: if you have handlers adding apt sources, they will likely get invoked after an attempt is made to install packages declared in extra_packages, which is not ideal.
[13:41] <tvansteenburgh> stub: ack, thanks for clarifying
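The semantics stub and tvansteenburgh settle on here can be sketched as a small simulation (only the state names come from the apt layer; `pending`, `queue_install`, and `process_queue` are made-up stand-ins for the real charms.reactive machinery, not its API):

```python
# Toy model of the apt layer's install queue and its reactive states.
states = set()
pending = []
installed = []

def queue_install(packages):
    # the apt layer queues package names and raises apt.queued_installs
    pending.extend(packages)
    states.add('apt.queued_installs')

def process_queue():
    # runs at hook start (via hookenv.atstart), before any charm handlers
    while pending:
        pkg = pending.pop(0)
        installed.append(pkg)                  # stand-in for apt-get install
        states.add('apt.installed.%s' % pkg)
    states.discard('apt.queued_installs')

queue_install(['curl', 'git'])   # e.g. extra_packages from layer options
process_queue()

# A handler guarded with @when_not('apt.queued_installs') is now eligible:
# because the queue drains before charm code runs, the state's absence means
# everything queued was installed, not merely that nothing was queued yet.
assert 'apt.queued_installs' not in states
assert installed == ['curl', 'git']
```

The last point is exactly tvansteenburgh's observation: since `configure_sources` runs via `atstart`, charm handlers never observe the "nothing queued yet" window.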
[13:47] <ReSam> I'm having problems deploying xenial/keystone-0 on juju2-beta5: "Ports which should be open, but are not: 5000, 35357"
[13:50] <jamespage> marcoceppi, urulama: hey - does the juju gui install automatically on a controller node for 2.0 yet?
[13:50] <urulama> jamespage: yes it does
[13:51] <jamespage> urulama, hmm - an special port I have to use?
[13:51] <jamespage> any rather?
[13:51] <urulama> jamespage: https://blog.jujugui.org/2016/04/15/juju-2-0-beta-4-now-with-embedded-gui/
[13:51] <urulama> jamespage: no, just type juju gui --show-credentials
[13:51] <urulama> it'll connect to a model you're in atm
[13:52] <jamespage> urulama, awesome, thank you!
[14:34] <tvansteenburgh> cory_fu: in light of the current behavior of config.changed.* states, do you have a suggestion for a way to work around the problem you illustrated here (https://github.com/juju-solutions/layer-basic/pull/61)?
[14:35] <tvansteenburgh> cory_fu: b/c that's pretty much how i want to organize my code, but i don't want install() called twice, obviously
[14:35] <cory_fu> tvansteenburgh: You can see the approach I used in https://code.launchpad.net/~johnsca/layer-ibm-base/fix-multi-call/+merge/292845
[14:35] <cory_fu> That should be future-proof if that PR gets accepted
[14:35] <tvansteenburgh> cory_fu: cool, thanks!
[14:36] <cory_fu> But with the current behavior, you could actually leave out the .new. state.  The extra state makes it more wordy, but at the same time, I think it makes it a bit more clear.  *shrug*
[14:37] <tvansteenburgh> cory_fu: so it's the config.set. that's fixing the problem, right?
[14:38] <cory_fu> No, that's replacing the "if" inside the handler.  It's the @when('config.changed.curl_url') and the removal of the "only do this once" state (ibm-base.curl.resource.fetched)
[14:38] <tvansteenburgh> cory_fu: oic
[14:38] <cory_fu> Basically, when the config changes (from being nothing, or to a new value), do the install
[14:38] <tvansteenburgh> yeah, ok, i get it now
[14:39] <cory_fu> It will do it at least once (because the config is new, or "changed" from "non-existent") and then won't do it again unless the config value changes
[14:39] <tvansteenburgh> check for config.changed instead of maintaining your own "installed" state, in my case
[14:39] <cory_fu> Yep
[14:39] <tvansteenburgh> cool, thanks
[14:39] <cory_fu> np
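The pattern cory_fu outlines — keying off `config.changed.<key>` instead of maintaining your own "already installed" state — might look like this in miniature (hypothetical handler and key names; here the changed state is passed in by hand, whereas charms.reactive computes it for real by diffing against the previous hook run):

```python
installs = []

def install_from_url(url):
    installs.append(url)    # stand-in for the real fetch/install work

def hook_invocation(states, config):
    # charms.reactive sets config.changed.<key> when the value differs
    # from the previous hook run -- including on the very first run,
    # where the value "changes" from non-existent
    if 'config.changed.curl_url' in states:
        install_from_url(config['curl_url'])

# first hook run: the value is new, so the changed state is set
hook_invocation({'config.changed.curl_url'}, {'curl_url': 'http://a'})
# a later run with an unchanged value: state absent, no reinstall
hook_invocation(set(), {'curl_url': 'http://a'})
# the operator changes the config: the install runs again
hook_invocation({'config.changed.curl_url'}, {'curl_url': 'http://b'})
assert installs == ['http://a', 'http://b']
```

This is why the install runs "at least once" but is never called twice for the same value, which is the behavior tvansteenburgh wanted.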
[14:41] <jcastro> lazyPower: http://pastebin.ubuntu.com/16095681/
[14:41] <jcastro> in the swarm charm it fails to install docker-engine
[14:42] <lazyPower> swap to xenial?
[14:42] <jcastro> oh, I would have thought the charm would have declared the series no?
[14:42] <lazyPower> it does, it says its trusty compat
[14:44] <lazyPower> oh i'm full of lies :)
[14:44] <jcastro> it's their package, investigating
[14:51] <jcastro> lazyPower: ok so do you want me to use xenial with this or do you plan on supporting trusty?
[14:51] <lazyPower> Trusty was the target for the spec. I can land xenial support in a couple days using our docker.io package vs upstream's
[14:51] <lazyPower> the only reason docker-engine is being used in trusty is because docker 1.6 is quite old, has CVEs, and should go sit in the corner.
[14:55] <jcastro> lazyPower: hey is it possible to use the apt layer to install docker? install_docker.sh is ;_;
[14:56] <lazyPower> jcastro - file a bug
[14:56] <jcastro> on the charm?
[14:57] <jcastro> yeah so it looks like it's a circular thing, the service won't start because dpkg is unconfigured, and dpkg won't complete because the service won't start
[14:58] <lazyPower> if you deployed on xenial, i'm not surprised :)
[14:58] <jcastro> it's on trusty
[14:58] <lazyPower> thats completely new
[14:58] <lazyPower> what substrate?
[14:58] <jcastro> lxd
[14:59] <lazyPower> jorge
[14:59] <lazyPower> unless you apply the docker profile, use the docker.io package, and a few other small tweaks - docker/swarm will not run in lxd as it stands today (by default)
[14:59] <jcastro> ah!
[14:59] <jcastro> I ran right into bruzerville and didn't even notice!
[14:59] <lazyPower> jcastro well - it's like this. when KVM was removed as a provider you have 2 choices
[14:59] <lazyPower> setup maas, or go to the cloud :)
[15:00] <lazyPower> i defer to you, to pick your poison
[15:00] <jcastro> sure, are you testing on aws or gcloud?
[15:00] <jcastro> I'll use the one you're not using
[15:00] <lazyPower> i've done my primary testing on aws, gcloud would be some welcome re-check of results
[15:00] <jcastro> ack
[15:03] <marcoceppi> jcastro: I am so close to autopkgtest for charm and charm-tools
[15:03] <jcastro> \o/
[15:05] <marcoceppi> jcastro: eco-wx?
[15:05] <jcastro> omw
[16:08] <marcoceppi> lazyPower cory_fu https://github.com/juju-solutions/layer-basic/pull/63
[16:09] <cory_fu> marcoceppi: What is the reason for that?
[16:11] <lazyPower> cory_fu - i found a pattern that causes it to choke
[16:11] <lazyPower> i'm implicitly caching a value in config()
[16:12] <cory_fu> Example?
[16:12] <lazyPower> cory_fu https://github.com/juju-solutions/layer-beats-base/blob/master/lib/elasticbeats.py#L17
[16:12] <lazyPower> thing is, i had no idea i could cache in config
[16:13] <cory_fu> Oh yeah, that was a feature tvansteenburgh added before we had unitdata.  I'd prefer you use unitdata, but I guess the whitelist is reasonable anyway
[16:13] <lazyPower> cory_fu - i was improperly thinking config just returned a dict
[16:14] <lazyPower> it's a class, and has its own behaviors. so i kind of stumbled into this by forcefully stuffing things into config to render a template :P
[16:16] <cory_fu> lazyPower: Yeah, it's mostly a dict, but it does let you change the values; I'm not sure that's a good pattern to encourage, though.  I'd prefer it to be immutable
[16:16] <cory_fu> Otherwise, you'll see things coming out of hookenv.config() that don't match what you see from `juju get`
[16:16] <lazyPower> cory_fu - yeah its not too ugly to make a separate context object and clone in the stuff you care about
[16:16] <lazyPower> which i think i'll do
[16:16] <cory_fu> Yeah
[16:17] <cory_fu> But anyway, the PR has been merged
[16:17] <marcoceppi> cory_fu: thanks
[16:26] <marcoceppi> cory_fu: I'm going to align the deps with what's in xenial, because we can just backport xenial packages to trusty ppa
[16:29] <cory_fu> +1
[16:29] <marcoceppi> cory_fu: so a few things got bumped, will be in my test suite fixes
[16:45] <marcoceppi> cory_fu: could I get a review of this but hold off on merge? https://github.com/juju/charm-tools/pull/195
[16:47] <cory_fu> marcoceppi: Looks reasonable to me.
[16:47] <marcoceppi> cory_fu: cool, I wasn't 100% sure tbh
[16:47] <marcoceppi> waiting for autopkgtests to finish, but hopefully this addresses that import issue
[16:51] <nottrobin> marcoceppi: did you get a chance to create that pastebin to illustrate resources?
[16:52] <marcoceppi> nottrobin: sorry, not yet. fighting meetings most of my day so far
[16:55] <nottrobin> marcoceppi: no rush. just ping me when/if you do get around to it
[17:38] <Prabakaran> Hello team, whenever I do a juju add-machine command, it creates a xenial series machine instead of a trusty series container... see the output of juju status http://pastebin.ubuntu.com/16108619/ which has trusty series for machine 0 and xenial series for machine 1. Could you advise me on how to resolve this issue?
[17:42] <Prabakaran> uname -a is Linux c277-pkvm-vm54 3.16.0-30-generic #40~14.04.1-Ubuntu SMP Thu Jan 15 17:42:36 UTC 2015 ppc64le ppc64le ppc64le GNU/Linux
[17:50] <lazyPower> Prabakaran - if you're on juju 2.0 you can add a machine with --series=trusty
[17:51] <lazyPower> otherwise i believe it's constraints="series=trusty"
[17:52] <Prabakaran> i am using 1.25.5-trusty-ppc64el
[18:15] <LiftedKilt> deploying the openstack-lxd bundle, I get errors in my ceph cluster about too many PGs per OSD. Is there a way to tune that?
[18:21] <lazyPower> cory_fu https://github.com/juju-solutions/layer-beats-base/pull/2
[18:22] <cory_fu> lazyPower: Whoa.  Why are you doing manual hooks?
[18:22] <lazyPower> i need that value set, and it was quicker to drop that as the hook file in bash?
[18:23]  * cory_fu scowls at lazyPower.
[18:23] <lazyPower> it's been like that since i wrote it :P  guess it only sneaks past if it's not in an MP?
[18:25] <cory_fu> I never reviewed that layer
[18:26] <cory_fu> We're supposed to have a fancy new RQ that lets us review layers.
[18:26]  * cory_fu glances at tvansteenburgh.
[18:26] <tvansteenburgh> cory_fu: i'm making the charm as we speak
[18:26] <cory_fu> Yay!
[18:27] <tvansteenburgh> cory_fu: although the first rev is just for charms
[18:27] <cory_fu> lazyPower: There is a juju-info layer, but it needs a minor modification for your use-case: https://github.com/juju-solutions/interface-juju-info
[18:28] <lazyPower> i...i made that interface...
[18:28] <lazyPower> wait, so i can add behavior to that?
[18:28]  * lazyPower sighs
[18:29] <lazyPower> okay give me 30 minutes to unwind this and update the interface + beats-base + filebeat and re-run the tests
[18:32] <cory_fu> lazyPower: It's not the end of the world if you want to punt on that to get it ready.  That hook is pretty trivial
[18:32] <lazyPower> ok
[18:32] <lazyPower> thats my preferred solution at this juncture
[18:32] <lazyPower> if you bug it though, i swear i'll circle back and get it fixed
[18:32] <cory_fu> But I'm going to open an issue to fix it later
[18:32] <lazyPower> !
[18:32] <lazyPower> perfect
[18:39] <lazyPower> cory_fu https://github.com/juju-solutions/layer-filebeat/pull/4
[18:49] <lazyPower> cory_fu - and if thats g2g https://bugs.launchpad.net/charms/+bug/1560166
[18:49] <mup> Bug #1560166: New charm proposal: Filebeat <Juju Charms Collection:Fix Committed> <https://launchpad.net/bugs/1560166>
[18:50] <cory_fu> lazyPower: I'm going to be a few minutes on those.  Getting pulled in a bunch of directions
[18:50] <lazyPower> no rush. just getting the roadmap laid out so it's easy to follow :)
[19:52] <cory_fu> lazyPower: One comment on https://github.com/juju-solutions/layer-filebeat/pull/4
[19:58] <lazyPower> cory_fu - awesome, almost ready to hand off topbeat and circle back to that rev comment
[20:02] <lazyPower> cory_fu - replied on the comment
[20:02] <lazyPower> but that's in the wrong spot, gah
[20:04] <lazyPower> fixed and re-pushed
[20:14] <kwmonroe> cory_fu: is the reactive .py processed top-down?  i have "@when x; def slow()" followed by "@when y; def fast()".  x and y are totally independent and both set. if i move "@when y" before my "@when x", will it execute first?
[20:19] <cory_fu> kwmonroe: There are no guarantees about what order handlers are executed in
[20:23] <kwmonroe> cory_fu: is that because there is no guarantee about the order hooks are fired? if so, what if for a given hook both slow() and fast() should be executed? is there nothing i can do to make fast() go first?
[20:24] <magicaltrout> ask it nicely?
[20:24] <cory_fu> lazyPower: You could use @when_any for "or"
[20:25] <lazyPower> oh thats a good point
[20:25] <kwmonroe> magicaltrout: i feel like we're going to have a nice time in vancouver.
[20:25]  * lazyPower updates
[20:25] <cory_fu> lazyPower: Although, with the issues with @when_file_changed, it's probably better that you're doing it that way
[20:25] <lazyPower> yeah?
[20:25] <lazyPower> well, i can leave it as is until when_file_changed gets some love
[20:26] <cory_fu> lazyPower: Actually, I'd almost recommend not using @when_file_changed at all.  :(
[20:26] <magicaltrout> looking forward to it kwmonroe :P
[20:26] <lazyPower> thats a pretty big warning for something we're still shipping
[20:26] <cory_fu> Using it by itself is probably fine, but it can get tripped up if states are removed
[20:26] <lazyPower> hmm
[20:26] <lazyPower> ok yeah its not doing any state modification in that method body
[20:26] <cory_fu> I know.  I just haven't had time to address the issues with it
[20:26] <lazyPower> is that the only side effect it has? or does it get tripped up if another decorated method removes state?
[20:27] <lazyPower> sorry i wasn't meaning to be critical either...
[20:27] <cory_fu> lazyPower: No, I mean if any other handlers in the same execution queue remove states, it can cause the @when_file_changed handler to get dropped
[20:27] <lazyPower> ah ok
[20:28] <cory_fu> lazyPower: https://github.com/juju-solutions/charms.reactive/issues/44
[20:29] <cory_fu> kwmonroe: There's no guarantee because all the handlers are put in a pool and pulled out at random when their conditions match.  If you want one handler to go before another, you should chain them together with a state (or have one call the other directly)
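cory_fu's advice — chain handlers together with a state rather than relying on file order — can be demonstrated with a toy dispatcher. Nothing below is the real charms.reactive API; it just mimics the pool-and-match behavior he describes, with `when`, `dispatch`, and the state names all invented for illustration:

```python
import random

# Handlers sit in a pool and are pulled out in no guaranteed order
# whenever all of their required states are set.
states = set()
handlers = []

def when(*required):
    def register(fn):
        handlers.append((set(required), fn))
        return fn
    return register

def dispatch():
    ran = set()
    progress = True
    while progress:
        progress = False
        pool = [(req, fn) for req, fn in handlers
                if req <= states and fn not in ran]
        random.shuffle(pool)    # order within a pass is unspecified
        for req, fn in pool:
            fn()
            ran.add(fn)
            progress = True

order = []

@when('config.ready')
def fast():
    order.append('fast')
    states.add('fast.done')    # chain: this state gates slow()

@when('fast.done')
def slow():
    order.append('slow')

states.add('config.ready')
dispatch()
# despite the shuffled pool, chaining guarantees fast() runs before slow()
assert order == ['fast', 'slow']
```

Had `slow()` been gated only on `config.ready`, the shuffle could run it first; the explicit `fast.done` state is what imposes the ordering.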
[20:30] <lazyPower> cory_fu - so, full disclosure
[20:30] <lazyPower> i have the same method(s) in topbeat
[20:30] <lazyPower> should i refactor these then? if you're not happy with that decorator going in we can change it
[20:31] <lazyPower> i'd like to know before i kick off this test, wait it out, and have to re-test it. it delays ~15 minutes per run
[20:32] <cory_fu> lazyPower: Is there any other handler besides render_filebeat_template that you expect to update that file?
[20:32] <gnuoy> Is there a way to trigger an update of charm-tools in the ppa? I'm hoping to get my hands on a fix that landed yesterday
[20:32] <lazyPower> cory_fu - nope. I can just put it in that body and eliminate the event handler
[20:34] <cory_fu> Yeah, +1.  Inline the restart and avoid when_file_changed
[20:35] <tvansteenburgh> gnuoy: i believe marcoceppi is the trigger
[20:35] <cory_fu> You can manually call charms.reactive.helpers.any_file_changed(['file.yaml']) to prevent unnecessary restarts
[20:35] <gnuoy> tvansteenburgh, ah, ok, thanks.
[20:35] <cory_fu> lazyPower: ^
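The helper cory_fu points to works by hashing the files and comparing against hashes recorded on the previous call. A self-contained approximation (the real `charms.reactive.helpers.any_file_changed` persists its hashes in unitdata so they survive between hook invocations; the `_seen` dict here is a stand-in):

```python
import hashlib
import os
import tempfile

_seen = {}  # path -> hash from the previous call (real helper: unitdata)

def any_file_changed(paths):
    changed = False
    for path in paths:
        h = hashlib.md5()
        if os.path.exists(path):
            with open(path, 'rb') as f:
                h.update(f.read())
        digest = h.hexdigest()
        if _seen.get(path) != digest:
            changed = True
        _seen[path] = digest
    return changed

# e.g. guard a service restart so rewriting an identical config is a no-op
with tempfile.NamedTemporaryFile('w', suffix='.yaml', delete=False) as f:
    f.write('filebeat: {}\n')
    cfg = f.name

assert any_file_changed([cfg]) is True    # first sighting counts as changed
assert any_file_changed([cfg]) is False   # unchanged since the last check
os.unlink(cfg)
```

Calling it inline after rendering, as suggested above, avoids the `@when_file_changed` handler-dropping issue while still skipping restarts when the rendered file is byte-identical.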
[20:36] <marcoceppi> gnuoy: build from trunk
[20:36] <marcoceppi> gnuoy: we're in the midst of getting an MRE for charm-tools, but I want to hold off having too much package drift until then
[20:39] <bsod90> Hi guys! I'm a newbie to juju and I'm looking for some general advice on how to debug issues with it. In particular, I'm trying to deploy the openstack base bundle on 4 maas machines and I'm stuck in a state like this: http://d.pr/i/142ox where some services came up, but some did not, and nothing is happening anymore. What are my next steps to debug this? I looked into "juju status", but it's not very verbose.
[20:39] <bsod90> What are the common places to start collecting more detailed information on what's going on? Thank you for any help.
[20:39] <magicaltrout> bsod90: juju debug-log will help you
[20:39] <marcoceppi> bsod90: can you put the output of juju status into a pastebin? (paste.ubuntu.com for example)
[20:40] <magicaltrout> nice screenshot
[20:40] <magicaltrout> looks like a mindmap
[20:41] <bsod90> magicaltrout: thank you, it looks much more verbose. marcoceppi: one moment
[20:43] <bsod90> marcoceppi: http://pastebin.com/7B9RMMK7
[20:43] <lathiat> bsod90: juju status --format=tabular is also slightly easier to look at for a summary (but the same info mostly, more concise)
[20:46] <marcoceppi> lathiat: that is format tabular
[20:47] <marcoceppi> bsod90: you've got a few things going on here
[20:47] <lathiat> oh, hes using juju2 :-)
[20:47] <lathiat> ++ for the change
[20:49] <marcoceppi> bsod90: how did you try to deploy openstack?
[20:49] <bsod90> I can see that machine 2 is still in "pending" state (whatever that means) and in debug-log I noticed a message saying "setting machine addresses: cannot set machine addresses of machine 2: state changing too quickly" popping up sometimes. I'm trying to correlate this with potential network issues.
[20:49] <bsod90> marcoceppi: just found the bundle on jujucharms
[20:50] <bsod90> I've added it to 'admin' model
[20:50] <bsod90> and then moved contents of one of 'new' nodes to 0 (to collocate it with juju itself)
[20:50] <marcoceppi> bsod90: yeah, you need more than 4 machines for that
[20:50] <bsod90> that's because I have only 4 machines available
[20:51] <marcoceppi> bsod90: that bundle wants like 8 machines
[20:51] <marcoceppi> bsod90: have you tried the openstack-installer?
[20:51] <marcoceppi> bsod90: it's basically juju with smarter placement :)
[20:52] <bsod90> hmm. in description it says " This bundle is designed to run on bare metal using Juju with MAAS (Metal-as-a-Service); you will need to have setup a MAAS deployment with a minimum of 4 physical servers prior to using this bundle."
[20:52] <bsod90> but anyways, let me try openstack-installer then
[20:52] <bsod90> thank you!
[20:52] <marcoceppi> bsod90: http://www.ubuntu.com/download/cloud/install-openstack-with-autopilot
[20:56] <bsod90> just to confirm: my end goal is to have nova-compute providing me with lxd containers on top of my physical nodes, with the ability to add/remove nodes in future, of course. Am I approaching it right by trying to set up maas and juju and whatever else it needs, or is it better to set up just the needed openstack components manually?
[20:57] <lathiat> bsod90: maas and juju will do exactly that, you really need 4 actual nodes though .. colocating services on the juju node is not recommended
[20:58] <lathiat> bsod90: openstack-base will work on 4 nodes + bootstrap server
[21:00] <bsod90> lathiat: but the juju controller node is not going to require a lot of resources, right? I can try finding some cheap piece of hardware around and just adding it to my cluster..
[21:00] <lathiat> bsod90: in your current deployment there were a couple of errors; you would need to review the debug logs to understand what went wrong and try to figure out why.. mainly you had nova-compute/2 and openstack-dashboard/0 fail to install entirely, and the mysql shared-db relations failed for some reason
[21:00] <lathiat> bsod90: yeah it doesn't need a lot.. for testing at home I run this in a VM on some other server (not used for juju) to save a physical node
[21:01] <lathiat> as well as maas
[21:01] <lathiat> fine for testing
[21:02] <marcoceppi> bsod90: you could create a KVM instance for the bootstrap node
[21:02] <bsod90> hmm. that's feasible, I can easily allocate a VM here. but how do I tell juju to use the VM for controller and maas for the rest?
[21:02] <marcoceppi> bsod90: we do that a lot when we have limited hardware
[21:02] <lathiat> bsod90: you can manually enroll it in maas with libvirt driver
[21:02] <marcoceppi> bsod90: you can actually put 90% of the openstack services in containers, either LXC or KVM
[21:02] <lathiat> alongside your physical machines
[21:03] <marcoceppi> bsod90: so use 3 nodes for nova-compute, and ceph, then everything else into containers
[21:04] <lathiat> bsod90: putting aside the fact this breakage may be related to something like using machine 0.. we can review the debug-log to see what happened for some of these errors.  e.g. juju debug-log --include unit-glance-0 --replay -T
[21:05] <lathiat> also worth looking at unit-nova-compute-2 and openstack-dashboard-0 as they failed to install for some reason
[21:06] <bsod90> marcoceppi: I actually have 5 physical nodes. one already occupied by maas. I can manually setup LXD or KVM on it or even manually provision a VM on VMware (completely separate from my setup). I just need to know how can I use that together with maas
[21:06] <bsod90> lathiat: one sec, let me collect the logs
[21:15] <marcoceppi> bsod90: create the virtual machines, let me get you a template, then you more or less add them to maas with the libvirt backend
[21:15] <marcoceppi> bsod90: https://maas.ubuntu.com/docs/nodes.html#virtual-machine-nodes
[21:16] <bsod90> lathiat: glance: http://paste.ubuntu.com/16119929/ (quite big one) nova: http://paste.ubuntu.com/16119940/
[21:18] <lathiat> mm so nova-compute is failing on unit-get private-address
[21:19] <lathiat> mysql seems somehow the grant was not made
[21:19] <bsod90> marcoceppi: would that require my VMs to be in the same management network (when MAAS is running DHCP) as the other 4 physical nodes?
[21:19] <lathiat> bsod90: i might suggest perhaps giving trusty a try, instead of xenial.. the xenial stuff has just freshly landed in the last few days, which means (a) it's not so fleshed out yet and (b) i haven't yet tried a full xenial deploy to see if I'd run into such issues
[21:20] <marcoceppi> bsod90: yes, but you could bridge into that network
[21:20] <marcoceppi> bsod90: I'm looking for the script i use for this
[21:20] <lathiat> marcoceppi: the virtual-maasers one?
[21:20] <lazyPower> cory_fu another one for ya https://github.com/juju-solutions/layer-topbeat/pull/3
[21:21] <lazyPower> cory_fu - if you like that approach i'll follow on filebeat with it
[21:25] <bsod90> lathiat: I see. Yeah, I have already experienced some of the xenial freshness :) I'm a bit concerned about the trusty kernel for LXD, whether our stuff will work on it or not (basically, we need to use cgroups inside the container; I've tested that with LXD & xenial and it works, and I know that a similar thing with docker & trusty doesn't work). Anyways, let me first provide it with all needed
[21:25] <bsod90> nodes, so it has 4 fresh physicals just for openstack. Then I'll retry the installation and continue debugging from there.
[21:27] <bdx> https://insights.ubuntu.com/2015/11/08/deploying-openstack-on-maas-1-9-with-juju/ -> click the "read original article" link at the bottom of the page
[21:27] <bdx> sketch
[21:27] <bdx> someone get a handle on ^ fast
[21:30] <lathiat> bsod90: could be a good idea, actually these issues are quite possibly related.. the mysql grant possibly didn't work right for the same reason that the private-address lookup is failing
[21:32] <lathiat> im not 100% sure about how private-address is working.. maybe marcoceppi knows
[21:33] <cory_fu> lazyPower: +1
[21:33] <lazyPower> incoming Pr for filebeat then - let me rev the store and get you a bundle
[21:33] <lazyPower> this should round out beats-core
[22:11] <admcleod1> how do i, with an amulet test, wait for all units of a service to have a specific status message?
[22:12] <cory_fu> admcleod1: You want https://pythonhosted.org/amulet/amulet.html#amulet.sentry.Talisman.wait_for_messages
[22:12] <admcleod1> thanks cory_fu !