[00:17]  * arosales reads back scroll
[00:19] <magicaltrout> its not very complex
[00:19] <magicaltrout> arrange meeting with c0s
[00:22] <arosales> c0s we can install the payload from Bigtop, and that's something we should think of for power8
[00:22] <arosales> c0s the mechanism to easily change the payload is resources
[00:24] <arosales> The main thing is to ensure the target install source is always available, or the user gets an install hook error and thinks it was juju, not the target install host
[00:24] <arosales> c0s we can also scale services with add unit and remove unit we just need to model that correctly in the charm
[00:24] <arosales> But good questions
[00:25] <magicaltrout> he's also very much left :P
[00:25] <arosales> I would be interested in knowing more about your use cases
[00:25] <arosales> magicaltrout: no wonder auto complete wasn't finding him
[00:25] <arosales> :-)
[00:26] <magicaltrout> hehe
[00:26] <arosales> rick_h_: I'll catch up tomorrow when cos is back in channel
[00:26] <rick_h_> arosales: rgr
[00:26] <rick_h_> arosales: ty
[00:27] <magicaltrout> you should leverage his friendship with Roman to get pivotal onboard :)
[00:39] <magicaltrout> mostly because it would get me out of writing a fat bunch of charms that I don't have time to write but really want :)
[08:00] <axino> cory_fu: https://github.com/juju-solutions/layer-basic/pull/49 is broken I think, the apt_install doesn't silently fail on trusty
[08:00] <magicaltrout> lazyPower: let me know when the kibana stuff gets merged so I can test beats again! :)
[10:44] <jamespage> gnuoy, quite a big one - https://review.openstack.org/#/c/295714/
[10:45] <gnuoy> glad to see it though
[10:56] <dimitern> jamespage, hey when you can I'd like to investigate the issue from yesterday with the maas multi-nic containers
[12:00] <jamespage> gnuoy, and its buddy - https://review.openstack.org/#/c/295745
[12:16] <stub> I've got a production environment running 1.25.4, proposed stream. Future upgrades should hopefully only be to official releases. Can I set the agent-stream back now, or do I need to wait until I upgrade?
[13:38] <skay> I have an environment where a charm radically changed and when we ran upgrade-charm I don't think it cleaned up all the old state from the previous charm
[13:39] <skay> what's the best way to get rid of that service in order to install it from scratch?
[13:39] <skay> which destroy command should I call? do I need to call multiple ones or will one of them cascade down them for me?
[13:40] <skay> it's gunicorn and is a subordinate
[13:41] <skay> destroy-service seems intuitive, but I'd like a sanity check
[13:54] <tvansteenburgh> skay: yes, that's it
[13:54] <skay> tvansteenburgh: thanks :thumbs:
[13:54] <skay> (:thumbs: is actually bd but people mistake it for batman's mask)
[14:32] <lazyPower> magicaltrout: ack, lemme give it a little longer for manjo
[14:45] <lazyPower> aisrael - the guy we met from treasure data was Eduardo right?
[14:45] <aisrael> lazyPower: Yep
[14:45]  * lazyPower grins
[14:46] <lazyPower> sam just put me back in touch with him today
[14:46] <aisrael> Excellent!
[15:02] <magicaltrout> aye
[15:31] <lazyPower> mbruzek - if you have a minute, i just sent you a doc with our status update
[15:31] <lazyPower> can you proof that before i ping the list with it?
[15:46] <mbruzek> lazyPower: sure
[15:57] <lazyPower> mbruzek - my mail got rejected due to the attachments, so i'm converting this into a blog post
[15:58] <mbruzek> lazyPower: I removed some contractions and added a few comments.
[15:58] <mbruzek> Was my review too late?
[15:58] <lazyPower> ack
[15:58] <lazyPower> not at all
[15:58] <lazyPower> i'll get them folded in, i gotta head to a meeting with weave, but i'll incorp those before publishing
[15:58] <lazyPower> thanks for taking a look!
[15:58] <mbruzek> yar
[16:22] <jcastro> bdx: heya, mind PR'ing your slides or a link to this: https://github.com/juju/presentations
[16:22] <jcastro> I'm going to start pushing more presentations/talks into that repo
[16:25] <lazyPower> jcastro - are you exporting our docs slides to like ODP and uploading as well or?
[16:27] <jcastro> lazyPower: so I have a place to put html slides, and a place for PDFs and other outputs
[16:28] <jcastro> I figure we can use the readme for links to slides/vids, etc.
[16:28] <lazyPower> ah that or we can pdf upload
[16:28] <jcastro> and I am about to make a folder for things like "talk titles and submissions"
[16:28] <lazyPower> but i'm not really a fan of keeping pdfs in git as they are blobs and just bloat the repo
[16:28] <lazyPower> so nvm carry on sir
[16:28] <jcastro> instead of "hey chuck mail me the last 10 submissions for devops days you sent in"
[16:28] <lazyPower> omg i love this idea
[16:28] <lazyPower> who's the genius i need to hug?
[16:28] <jcastro> I will put tips and tricks there too
[16:28] <jcastro> like terms to use, terms to avoid, etc.
[17:46] <narindergupta> jamespage: some time back you were working on the ovs charm with DPDK? will you please give me some pointers so that i can give it a try with JOID?
[17:46] <jamespage> narindergupta, I've not started on that yet
[17:47] <narindergupta> jamespage: oh ok
[17:47] <narindergupta> jamespage: do you know anyone else did?
[17:47] <jamespage> no one else has done that yet...
[17:47] <jamespage> narindergupta, its on the list of things todo still, just not got to it yet...
[17:48] <narindergupta> jamespage: ok thanks for information
[18:19] <arosales> aside from 'kill-controller', do any folks have hints on how to reclaim a juju 2.0 environment?
[18:19]  * arosales stuck in this loop http://paste.ubuntu.com/15473933/
[18:20]  * arosales may need to check in #juju-dev
[18:26] <arosales> fyi fix was to rm ~/.local/share/juju/models/cache.yaml   _if_ you only have the one controller you care about. If you have other controllers you care about then you need to remove the offending lines
[18:26] <arosales> may need to clean up ~/.local/share/juju/controllers.yaml as well
[18:27] <arosales> but I am now able to bootstrap again
[18:27] <arosales> thanks to cherylj
[18:27] <magicaltrout> thats lazyPower 's favourite fix
[18:27] <arosales> lazyPower: has a pretty good toolbox
[18:27] <lazyPower> hi
[18:27] <lazyPower> what did i do?
[18:27] <lazyPower> i'm actually working on a python script to nuke a leftover controller, but i dont want to publish it
[18:28] <arosales> all the things
[18:28] <lazyPower> because hand editing cache.yaml is frightening
[18:28] <lazyPower> and i dont want to advocate anyone do this
[18:28] <arosales> lazyPower: I think it is a bug they should address, if not we'll have the juju clean up script again
[18:28] <lazyPower> bugs open
[18:28] <lazyPower> 1 sec let me find the link
[18:28] <arosales> cherylj: ping me if you need me to open a bug on it.
[18:28] <arosales> lazyPower: oh is a bug open already on it?
[18:28] <arosales> lazyPower: I basically hit http://paste.ubuntu.com/15473933/
[18:28] <lazyPower> https://bugs.launchpad.net/juju-core/+bug/1560191
[18:28] <mup> Bug #1560191: kill-controller is hinky without a model-controller behind it <juju-core:New> <https://launchpad.net/bugs/1560191>
[18:28] <cherylj> several, I'm sure
[18:29] <cherylj> that's technically a different issue
[18:29] <cherylj> this is the source of arosales' problem:  https://bugs.launchpad.net/juju-core/+bug/1543223
[18:29] <mup> Bug #1543223: kill-controller fails on missing volume <ci> <juju-release-support> <kill-controller> <juju-core:Triaged> <https://launchpad.net/bugs/1543223>
[18:29] <cherylj> see also, bug #1555744
[18:29] <mup> Bug #1555744: kill-controller / destroy-controller prevents reuse of controller name <docteam> <juju-release-support> <juju-core:Invalid by wallyworld> <https://launchpad.net/bugs/1555744>
[18:30] <arosales> lazyPower: thanks for filing the bug, I added a comment
[18:30] <lazyPower> cherylj - nice, thats the other side of it that i've hit
[18:31] <lazyPower> thanks :D
[18:31]  * cherylj cries seeing arosales' suggestion to rm cache.yaml in a bug
[18:31] <cherylj> heh
[18:31] <arosales> ok I posted in both bugs
[18:32] <arosales> but not 1555744 cause I didn't want to spam all the bug reports :-)
[18:32] <arosales> cherylj: hey it worked
[18:32] <arosales> lol :-)
[18:32] <magicaltrout> you may cry
[18:32] <arosales> cherylj: I did state "given I only had 1 controller I cared about"
[18:33] <magicaltrout> i usually remove the whole .juju folder :P
[18:33] <arosales> but I guess I should have noted that was the nuclear option
[18:33]  * cherylj weeps uncontrollably 
[18:33] <cherylj> ;)
[18:33] <arosales> ah well I don't feel as bad now magicaltrout :-)
[18:33] <magicaltrout> it happens often enough
[18:33] <magicaltrout> normally user error :P
[18:34] <arosales> it's commonly pilot error for me and not a bug
[18:35] <magicaltrout> although i have one production system stuck on 2.0alpha1
[18:35] <magicaltrout> which i'm not allowed to break
[18:35] <magicaltrout> because I have no way of upgrading it \o/
[18:36] <lazyPower> those are always fun, snowflakes that we create in alpha/beta land
[18:36] <lazyPower> i've been fighting with myself to not setup any 2.0 beta controllers for running systems due to that very reason
[18:36] <magicaltrout> yeah its like the bastard child, its 2.0 with 1.2 configuration setup
[18:36] <lazyPower> but with all the goodness thats in here, its really hard to do it
[18:36] <lazyPower> *to not
[18:41] <magicaltrout> you  know when its beer o'clock and you can't drink cause you have a conference call in 20 minutes..........
[18:41] <magicaltrout> plus you need to write a proposal to build a SQL over JSON interface
[18:42] <magicaltrout> grr
[18:45] <marcoceppi> magicaltrout: that's the best time to have a beer
[18:45] <magicaltrout> hehe
[18:45] <magicaltrout> it wouldn't be the first time
[19:10] <marcoceppi> cory_fu: which is better?
[19:10] <marcoceppi> @when('nginx.available', 'charm-svg.running')
[19:10] <marcoceppi> @when('nginx.available')
[19:10] <marcoceppi> @when('charm-svg.running')
[19:11] <marcoceppi> separation or as *args?
[19:11] <cory_fu> They are equivalent and it's up to your personal aesthetic.
[19:11] <marcoceppi> cool beans, thanks
[19:11] <marcoceppi> cory_fu: same for @when_not?
[19:11] <cory_fu> marcoceppi: The main difference is that the order of args from @when decorators is a little confusing when split.  It basically goes bottom to top, left to right
[19:12] <marcoceppi> cory_fu: yeah, I've encountered that from the patch library
[19:12] <cory_fu> marcoceppi: All decorators are ANDed together.  The only decorators that do not also AND their args are @when_any and @when_not_all
[19:12] <marcoceppi> cool
[19:13] <marcoceppi> I think I'll leave them split for now, it's easier to talk to
[19:15] <cory_fu> I tend to group the ones that are either related or the same for two similar blocks, and split ones that vary between two blocks.  So, if two blocks have shared preconditions, but one is @when('foo') and the other is @when_not('foo'), I will group the shared preconditions @when('bar', 'qux') but split the changing state @when('foo')
[19:15] <cory_fu> If that makes any sense
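The decorator semantics discussed above can be sketched with a plain-Python stand-in (this is not the real charms.reactive implementation; `active_states` and the handler names are illustrative): stacked @when decorators and a single @when with several args both AND their preconditions together, which is why the two spellings are equivalent.

```python
# Simplified model of charms.reactive's @when: a handler runs only if
# every named state is active.  Stacking decorators just ANDs more
# preconditions on, so the combined and stacked forms behave the same.

active_states = set()

def when(*states):
    def decorator(fn):
        def wrapper(*args, **kwargs):
            if all(s in active_states for s in states):
                return fn(*args, **kwargs)
            return None  # preconditions not met: handler skipped
        return wrapper
    return decorator

@when('nginx.available', 'charm-svg.running')   # combined form
def start_combined():
    return 'started'

@when('nginx.available')                        # stacked form
@when('charm-svg.running')
def start_stacked():
    return 'started'
```

With only one of the two states set, neither handler fires; once both are set, both fire, so the choice between the forms really is aesthetic.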
[19:15] <marcoceppi> cory_fu: I've been thinking about the config stuff, how they're basically using states for events
[19:16] <marcoceppi> cory_fu: would it be better to standardize that as charms.reactive.emit? so you could emit('state') instead of set_state and then removing it?
[19:16] <marcoceppi> (where emit would basically say, at end of state execution, remove this state)
[19:17] <cory_fu> There was a discussion about that on one of the issues or PRs
[19:18] <cory_fu> So, emit would be shorthand for "set_state(event); hookenv.atexit(remove_state, event)"
[19:18] <marcoceppi> cory_fu: right now I'm wrestling with the update-status hook. I'd like to just make a @when('update-status') or something similar which would be triggered during the update-status hook, but also something I could poke at from methods by just emitting that state
[19:18] <marcoceppi> cory_fu: basically
[19:19] <cory_fu> marcoceppi: I tend not to collect status reporting into a single handler, though sometimes it is more useful to do so
[19:19] <cory_fu> bcsaller: Thoughts on ^
[19:19] <cory_fu> ?
[19:22] <cory_fu> marcoceppi: Here was the other comment touching on events vs states: https://github.com/juju-solutions/charms.reactive/issues/44#issuecomment-176278218
[19:23] <bcsaller> remove at end of hook is only one possible semantic; sometimes that same spelling might be intended to mean "remove once processed". If the hook fails, either the trigger condition needs to regenerate or the cleanup event was wrong. IMO it's better to detect state (de)activation and decouple it from hook context
[19:26] <cory_fu> bcsaller: If the hook fails, it is likely that the states will not be flushed and things will re-run from the initial state on hook retry, though that's not guaranteed
[19:26] <bcsaller> atexit can make that tricky
[19:27] <cory_fu> How so?
[19:28] <cory_fu> atexit isn't run on error
[19:31] <bcsaller> cory_fu: not on any sys.exit?
[19:32]  * TheMue listens trying to learn a bit
[19:32] <TheMue> hi Cory, hi Benjamin
[19:34] <bcsaller> hi
[19:35] <TheMue> bcsaller: even after leaving Canonical, Juju fascinates me and I advertise it *bg*
[19:36] <bcsaller> excellent
[19:36] <marcoceppi> o/ TheMue
[19:37] <cory_fu> bcsaller: https://github.com/juju-solutions/charms.reactive/blob/master/charms/reactive/__init__.py#L74
[19:37] <cory_fu> Only if there are no errors and it exits 0
[19:39] <bcsaller> cory_fu: ahh, thanks, didn't recall we were using our own impl of that
[19:39] <TheMue> marcoceppi: people around me are always fascinated about its possibilities. my talks in the past have been more about the technology (just core *g*), but now maybe I should place talks about charms
[19:39] <bcsaller> cory_fu: I still tend to think that trying to couple states to hook invocations isn't needed
[19:42] <cory_fu> bcsaller: In general, I agree and would go even further to say that it's a bad idea.  But there are also cases where we want to ensure that a given state can be handled by every handler that wants to and is cleaned up afterward.
[19:42] <cory_fu> That can be tricky to do, because no single handler should remove the state in that case since it might block other handlers from responding.
[19:43] <cory_fu> The end of the hook context becomes a work-around in that case.  Really what it means is that "all handlers that are going to trigger given the current overall state have done so"
[19:43] <bcsaller> cory_fu: which should be something we can detect, record and clear automatically
[19:44] <cory_fu> The only way we can detect it automatically is when the dispatch loop terminates, and the very next thing that is run then is atexit
[19:48] <cory_fu> bcsaller: Merlijn had another interesting take on it with the idea of having an "event" that handlers only got one "bite at the apple", as it were.  So if their other conditions didn't match when it was emitted, they didn't see it later if their preconditions did change.  That's something we can't really model now at all
[19:50] <bcsaller> cory_fu: do you have time for a hangout at some point?
[19:50] <cory_fu> Sure
[19:52] <falanx> hi, how can we specify juju to deploy 16.04 on MAAS instead of the default 14.04?
[20:13] <marcoceppi> falanx: that depends on the charm, most of the charms are targeted at 14.04 (trusty) - what are you trying to deploy?
[20:14] <marcoceppi> stokachu: got a min for a quick review? https://github.com/battlemidget/juju-layer-nginx/pull/4
[20:15] <falanx> marcoceppi: we are trying to deploy the controller
[20:15] <marcoceppi> falanx: are you using 1.25 or 2.0?
[20:15] <falanx> 2.0
[20:19] <stokachu> marcoceppi: merged, thanks!
[20:20] <marcoceppi> falanx: you should just be able to do `juju bootstrap --bootstrap-series=xenial`
[20:20] <stokachu> marcoceppi: what do you think about me moving those layers under the juju-solutions org?
[20:21] <marcoceppi> stokachu: the nginx one?
[20:21] <stokachu> and nodejs
[20:21] <stokachu> to start with
[20:21] <marcoceppi> I don't have a problem, but they're fine in your namespace
[20:21] <stokachu> ok thats cool
[20:21] <marcoceppi> stokachu: I do have some plans to spruce up the nginx layer, but it breaks compat so I'm not sure how to handle this gracefully
[20:21] <marcoceppi> cory_fu: ^
[20:22] <stokachu> i thought there was some talk of versioned layers?
[20:22] <LiftedKilt> falanx, marcoceppi: adding bootstrap-series results in "ERROR cmd supercommand.go:448 failed to bootstrap model: no matching tools available"
[20:22] <stokachu> LiftedKilt: --upload-tools
[20:22] <marcoceppi> ninja'd
[20:23] <LiftedKilt> stokachu: results in same error
[20:23] <marcoceppi> LiftedKilt: weird.
[20:23] <LiftedKilt> I'm running: juju bootstrap juju2 dr --upload-tools --bootstrap-series=xenial --debug
[20:24] <marcoceppi> LiftedKilt: what does `juju version` say?
[20:24] <LiftedKilt> marcoceppi: 2.0-beta2-wily-amd64
[20:25] <marcoceppi> LiftedKilt: can you pastebin `juju list-clouds` ?
[20:25] <stokachu> ugh i can reproduce
[20:25] <marcoceppi> huzzah
[20:25] <marcoceppi> cherylj: is it possible to bootstrap a xenial controller?
[20:26] <cherylj> marcoceppi: sure is.  What cloud?
[20:26] <LiftedKilt> cherylj: MAAS
[20:26] <falanx> a private cloud
[20:27] <LiftedKilt> MAAS version 1.9.1+bzr4543-0ubuntu1 (wily1), for what it's worth
[20:27] <cherylj> ah, if you can't get to the streams for tools, you'll need to --upload-tools
[20:28] <marcoceppi> cherylj: so, even with upload-tools the bootstrap fails
[20:28] <cherylj> LiftedKilt: are you bootstrapping from an ubuntu machine?
[20:28] <cherylj> you may not have an up-to-date distro info
[20:28] <LiftedKilt> cherylj: from a 15.10 machine, yes
[20:28] <cherylj> which would cause that error
[20:29] <cherylj> LiftedKilt: do you have Xenial in /usr/share/distro-info/ubuntu.csv?
[20:30] <LiftedKilt> 16.04 LTS,Xenial Xerus,xenial,2015-10-22,2016-04-21,2021-04-21
[20:30] <cherylj> LiftedKilt: can you send me a paste of the bootstrap --upload-tools --debug?
[20:31] <LiftedKilt> marcoceppi, cherylj: actually it fails with the same error for bootstrap-series=wily as well
[20:31] <LiftedKilt> cherylj: sure
[20:31] <cherylj> ah, there's some weird behavior with bootstrap-series
[20:31] <cherylj> LiftedKilt: try default-series=xenial
[20:32] <LiftedKilt> http://pastebin.com/L7ziCLJx
[20:32] <LiftedKilt> cherylj: I thought default series was for charms?
[20:32] <cherylj> --config default-series=xenial
[20:32] <cherylj> LiftedKilt: it will also be used when adding machines
[20:33] <LiftedKilt> cherylj: perfect - it's bootstrapping now
[20:33] <cherylj> yay!
[20:34] <cherylj> LiftedKilt: there are already bugs open about the bootstrap-series not working.  I can find them if you'd like
[20:35] <LiftedKilt> cherylj: no that's fine - as long as I can get around it I'm happy
[20:36] <LiftedKilt> cherylj, marcoceppi thanks for the assistance!
[20:36] <marcoceppi> thanks cherylj
[20:36] <cherylj> anytime!
[20:37] <cory_fu> marcoceppi: Sorry, was on a call.  Were you asking me about how we deal with breaking changes in layers?  You were one of the opponents to versioned layers, so you tell me.  ;)
[20:37] <marcoceppi> cory_fu: I was just supporting your redirect ;)
[20:38] <cory_fu> TBH, I'm still on the fence wrt versioning for base or interface layers.
[20:39] <marcoceppi> tbh, I think it's up to the charm author to deal with it, but maybe supporting "major" revision, or backward incompat versions, might be a compromise
[20:39] <marcoceppi> where each version is basically an epoch
[20:40] <hatch> can I specify the lxc profile to use when using the juju 2 lxd provider?
[20:42] <cory_fu> marcoceppi: For interface layers, there's also the difficulty that we now have two things that could potentially be versioned: the interface protocol and the interface layer's API. Now that we have interface layers, it's actually less of a concern for the protocol to change, as long as the layer manages the complexity of maintaining backwards compat, but it's a huge deal if the layer API changes in a breaking way
[20:43] <marcoceppi> cory_fu: right, given the simplicity of the interface (key val comm) backwards compat isn't nearly as complex as a code change breakage
[20:43] <cory_fu> If we're talking about doing "epoch" versions, then each epoch version change is something like a fork
[20:43] <marcoceppi> cory_fu: more or less, yes
[20:52] <c0s> cory_fu: with all that you just said wrt protocols, interfaces, and versions - how do you envision communicating the underlying components' APIs to the client software?
[20:52] <c0s> say, if a juju bundle includes component_7 (where 7 is the next revision), then how would I know that my software will work with it?
[20:53] <c0s> it == the component that component_7 charm represents
[20:53] <c0s> am I making any sense?
[21:24] <cory_fu> c0s: So, I'm not sure I understand your question, but in particular, bundles don't contain layers (what I assume you mean by components), they just contain charms.  Charms are built from layers, and some of those layers (interface layers) are responsible for managing the communication protocol of interfaces and providing a defined, documented API to charms.
[21:24] <cory_fu> c0s: But, also, layers are not combined or updated at deploy time, only at build time (which could be thought of as the "compile" phase for the charm)
[21:25] <c0s> I guess I am looking a bit deeper into the relation between the layer (sorry for mis-using the terminology) and the actual software the layer is deploying/managing
[21:25] <cory_fu> So, regardless of whether those layers are versioned or just updated at every build, it's the job of the charm author to verify that the newly built charm works as expected, and is why we want bundles (and sometimes charms) to include tests
[21:26] <c0s> right, that's how you produce the stack
[21:27] <c0s> now, when I (as a user) deploy your stack (or bundle in the juju-speak), how do I know that my hdfs app relying on the 2.2.0 API version will work? Does the version of the underlying component (software) get exposed somehow?
[21:28] <cory_fu> c0s: You would know that it works because it should be tested before it is published to the charm store
[21:29] <c0s> say, if I deploy using packages I can do apt show package-name and get some info
[21:29] <c0s> wait, this is my client application. You can not possibly claim that you test _all_ applications out there
[21:30] <c0s> I am not questioning the integrity of the bundle
[21:30] <magicaltrout> charms that are in the recommended namespace have been tested and validated
[21:30] <magicaltrout> so if you install a charm not in the recommended namespace YMMV
[21:31] <urulama|afk> just a note to all people using direct publishing. charm store on production was updated and you'll need new charm command that marcoceppi will make ready soon
[21:31] <c0s> I think we are still talking past each other. I guess that's my English as a second language
[21:31] <cory_fu> c0s: I think there's some disconnect on what you and I mean by API
[21:31] <c0s> yeha
[21:31] <c0s> yeah, I am sure. By API I mean the interfaces of the software you're deploying with charms. I am not talking about juju interfaces
[21:32] <c0s> say, HDFS open() API
[21:32] <cory_fu> There is the API that the layers use to work together (basically, the states that they set using set_state and watch using @when, etc)
[21:32] <c0s> or better yet - truncate, which exists in some versions of HDFS and not others.
[21:32] <c0s> and if I write software to work with an HDFS cluster it will fail if I am calling truncate which isn't there
[21:33] <cory_fu> Ok, so in that case, the HDFS charm would be responsible for making sure it only installs a version of HDFS that it knows how to work with.  And the big data charms report the version of Hadoop they install to ensure they all have the same version
[21:33] <c0s> just for the sake of argument: I know this call exists in HDFS x.y.z. If I deploy apache-core-batching bundle - is it easy for me to figure out what version of HDFS is coming with it?
[21:34] <c0s> ok, good - now we are on the same page.
[21:34] <c0s> So, charms have a way to communicate the versions of underlying software (components). Good
[21:35] <c0s> let's get back to the original discussion of how charms should be versioned, shall we? ;)
[21:35] <cory_fu> c0s: Well, they can communicate the version to other charms that connect to them, but it's up to the charm author (well, actually, the interface author) to make that a part of the interface protocol
[21:35] <c0s> but what about the user? Will he have any way of finding out the version?
[21:36] <c0s> That's all I am trying to figure out
[21:36] <c0s> in other words: is there an analog of apt show packagename ?
[21:36] <cory_fu> The version might be a config option on the charm, or it might be hard-coded in the charm and documented in the README
[21:36] <c0s> ok, so I need to look into a particular charm README to find what versions it packs, right?
[21:37] <cory_fu> Generally, yes.
[21:37] <c0s> ok, got it.
[21:37] <c0s> now the last one, I promise - currently, bundles aren't versioned, per se, right?
[21:37] <c0s> although layers (or charms) are.
[21:38] <cory_fu> Most charms are hard coded to install a specific version of the software, though many do allow you to specify either a version or a source URL for the software so that you can control the version to some degree
[21:38] <c0s> ok, got it
[21:38] <cory_fu> Bundles and charms have revisions, and bundles can specify what revision of each charm they deploy (which is required for recommended bundles)
[21:38] <c0s> thanks
[21:39] <c0s> yet again ;) - revisions aren't directly visible to the user, right?
[21:39] <cory_fu> Bundles can also specify config for charms, so if the software version is configurable in a charm, the bundle can specify that
[21:39] <c0s> ok, thanks
[21:40] <cory_fu> Revisions are visible to the user, yes.  https://jujucharms.com/u/bigdata-dev/apache-hadoop-namenode/trusty/4 is revision 4 of that charm
[21:40] <cory_fu> aka cs:~bigdata-dev/trusty/apache-hadoop-namenode-4
[21:40] <cory_fu> You can also leave the revision off to get the latest
[21:41] <c0s> cool, thanks
[21:48] <c0s> I guess you guys are going to hate me soon
[21:48] <c0s> ;)
[21:50] <c0s> In the current design, if I want to take advantage of existing 3rd party deployment code (e.g. a bigtop puppet recipe), I would have to wrap it in a Python reactive script, right?
[21:55] <magicaltrout> c0s: lazypower did a cool talk about juju leveraging ansible
[21:55] <magicaltrout> which could be repurposed for puppet or chef
[21:56] <magicaltrout> https://www.youtube.com/watch?v=0eymk93lY8k
[21:56] <c0s> as in "I can write ansible scripts which Juju will be able to reuse for the deployment" or something else?
[21:56] <magicaltrout> yeah, as in, I have ansible already but want to leverage juju's deployment capabilities so make a few changes and deploy my ansible code via juju
[21:56] <magicaltrout> or something like that
[21:57] <c0s> makes sense, thanks for the link
[21:59] <cory_fu> c0s: You can also do reactive handlers in bash, but you still do end up having to call out to your existing cfg mgmt tool from a handler, yes
[22:00] <c0s> yup, that's what I thought. Thanks!
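That "call out to your existing cfg mgmt tool from a handler" shape can be sketched as follows (the decorator here is a simplified stand-in for charms.reactive's @when_not, and the manifest/module paths are illustrative, not real Bigtop paths):

```python
# A reactive-style handler that delegates the real work to existing
# third-party config management (e.g. a Bigtop puppet manifest) rather
# than reimplementing the deployment logic inside the charm.

import shutil
import subprocess

states = set()

def when_not(state):
    # Simplified stand-in for charms.reactive's @when_not.
    def decorator(fn):
        def wrapper():
            if state not in states:
                return fn()
            return None  # state already set: handler skipped
        return wrapper
    return decorator

def puppet_apply_cmd(manifest, modulepath):
    # The charm only assembles the call; puppet does the deployment.
    return ['puppet', 'apply', '--modulepath', modulepath, manifest]

@when_not('bigtop.installed')
def install_bigtop():
    cmd = puppet_apply_cmd('site.pp', 'puppet/modules')  # illustrative paths
    if shutil.which('puppet'):      # guard so the sketch degrades gracefully
        subprocess.check_call(cmd)
    states.add('bigtop.installed')
    return cmd
```

The same pattern works for chef or ansible, which is what the talk linked above demonstrates: juju handles placement and relations, the existing tooling handles the install.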
[22:03] <falanx> Why does the lxd charm only have options for lvm and btrfs?  Wasn't zfs also supported?
[22:09] <LiftedKilt> jamespage, falanx: What would it take to enable zfs support in the lxd charm for xenial?
[22:42] <marcoceppi> tvansteenburgh: I've done some long-needed triage in the charm-tools repo, cleaned up the milestones, and made sure things were assigned: https://github.com/juju/charm-tools/milestones. Unless you're going to be working any more on these tonight, I'm going to cut a 2.0.0 since the new charm command is ready
[22:45] <lazyPower> marcoceppi - yeah man you filled me up with notices :D
[22:45] <lazyPower> but look at all that progress!
[22:59] <blahdeblah> Hi all; any charmers able to look at http://juju-ci.vapour.ws:8080/job/charm-bundle-test-aws/3277/ and http://juju-ci.vapour.ws:8080/job/charm-bundle-test-aws/3278/ and confirm that this is just broken test infrastructure?  I can't see anything in the logs which indicates it's a problem with my MP.
[22:59] <blahdeblah> *MPs
[23:20] <lazyPower> blahdeblah - yeah looks like the security group cleanup at the testrun start is what caused that
[23:21] <lazyPower> the output looks fine on 3278 otherwise
[23:21] <lazyPower> boto.exception.EC2ResponseError: EC2ResponseError: 400 Bad Request
[23:21] <blahdeblah> lazyPower: thanks - anything I need to do to make sure the MPs don't end up in limbo?
[23:21] <lazyPower> blahdeblah - i'll poke someone to take a look at this tomorrow and follow up - but it LGTM otherwise, i see your bundle stood up and got 1 test passed.
[23:22] <lazyPower> is it listed on review.juju.solutions?
[23:22] <blahdeblah> yep
[23:22] <lazyPower> i dont see anything from you in there :/
[23:22] <blahdeblah> 2nd & 3rd from the bottom
[23:22] <lazyPower> oh probably not sharing irc handle w/ launchpad
[23:23] <lazyPower> derp
[23:23] <lazyPower> Yeah, so long as they are in there it'll get reviewed. Typically we kick the ci runs off the morning of our review time to get fresh results
[23:23] <blahdeblah> cool
[23:23] <blahdeblah> thanks
[23:23] <lazyPower> so i wouldn't stress over that initial result :( i can kick it again if you like, maybe its been resolved
[23:24] <tvansteenburgh> marcoceppi: cool, i won't be getting any more done tonight, release away
[23:25] <marcoceppi> tvansteenburgh: cool, thanks
[23:31] <arosales> kwmonroe: cory_fu: I am still not able to deploy realtime-syslog-analytics with juju 2.0 beta2 in aws-east1
[23:31]  * arosales is going to try a different region
[23:31] <arosales> http://paste.ubuntu.com/15475911/ is what I see --- stuck in waiting to agent init to finish . . .
[23:35] <magicaltrout> well arosales if the machine doesn't come up you won't get the agent init to finish
[23:35] <lazyPower> arosales - have you done this?
[23:35] <magicaltrout> you sure you're not getting something like the AWS instance upper limit errors?
[23:35] <lazyPower> arosales 'juju retry-provisioning #'
[23:36] <lazyPower> default upper limit is 15 iirc, bumpable to 25
[23:44] <arosales> sorry I was looking at my aws account
[23:44] <arosales> magicaltrout: ya it feels like an aws limit, but I have bootstrapped this before
[23:44] <arosales> and my amazon instance doesn't show any limit issues
[23:44] <arosales> perhaps my sec groups limit is getting close . . .
[23:45] <arosales> lazyPower: I haven't tried that yet, I was going to see if I had better luck in us-west-2
[23:46] <arosales> 500 sec group limit, I am at 162 so ok there
[23:46] <arosales> and 20 instance limit and I have 0 current, so I am ok there too
[23:47] <magicaltrout> surely juju debug-log gives some clue as to the failure cause?
[23:48] <arosales> not really
[23:48] <magicaltrout> excellent! ;)
[23:49] <lazyPower> arosales i had some issues with local charms earlier this week but i was hard to reliably reproduce
[23:50] <lazyPower> none of those charms are local right?
[23:50] <lazyPower> i realize this was at the infra provider level, but, just crossing off the list
[23:50]  * arosales still investigating though
[23:50] <arosales> lazyPower: all charm store charms
[23:50] <lazyPower> mfw using juju 1.25 where juju dhx -s still works
[23:50] <magicaltrout> semi off topic but there's 2 tools ubuntu is missing by default. mosh for those of us moving around and pastebinit to dump charm logs to pastebin without messing around
[23:51]  * lazyPower falls in love with dhx all over again, after all these weeks
[23:52] <arosales> magicaltrout: it would be nice to pass to cloud-init via juju a metadata file that includes user tools they like seeing in their environments like the ones you mention
[23:52] <magicaltrout> yeah arosales thats a cool idea