[00:07] <arosales> Hello
[00:08] <arosales> GlueCon was an interesting conference. Seems Juju fits into a lot of the use cases and problem scenarios represented
[00:10] <arosales> There is a similar conference in November; folks interested in presenting their work should submit CFPs to
[00:10] <arosales> http://defragcon.com/
[00:37] <godleon> Hi all, can I use remote LxD to be Juju's cloud provider ?
[00:44] <rick_h_> godleon: no, it does not yet support remote lxd endpoints, just the local one
[00:46] <godleon> rick_h: oh ok, I found Juju consumes as many physical machines as the number of charms I drag and drop into the Juju GUI.
[01:54] <x58> arosales: How's it going?
[02:07] <lazyPower> x58 o/
[02:08] <x58> lazyPower: Heya :-D
[02:08] <lazyPower> Groovy, mattrae tipped me off you might be here :)
[02:08] <x58> Yeah, I posted some commentary ^^^
[02:09] <x58> Let me know if you need me to grab that backlog and repeat it.
[02:09] <godleon> Does juju-deployer support juju 2.0 ?
[02:09] <lazyPower> Ah i see
[02:10] <lazyPower> but what I've seen out of the new layer is they reconcile when the leader sets the info after both have completed their membership add; that initial bring-up failure is ok
[02:11] <x58> On our machines we haven't had them cluster yet :-(
[02:12] <lazyPower> I understand, I think it's a race condition in the charm's logic
[02:12] <x58> The leader wins, and then one will do member add, and never seem to join the cluster, then the other will member add, and that one can't cluster because two nodes can't be in "new member" state at the same time.
[02:12] <lazyPower> ah
[02:12] <lazyPower> good insight!
[02:13] <x58> Also, each of the new nodes has to have INITIAL_PEERS or whatever that variable is (I don't have it in front of me at the moment) of all the other nodes in the cluster
[02:13] <x58> when you try to add two at the same time, they won't ever get a proper peerlist.
[02:13] <x58> and fail to cluster.
[02:13] <x58> You have to stagger them...
[02:15] <x58> On the old charm, not the layered one, I tried adding a random sleep, and that worked about 30% of the time, mostly because I can't guarantee that one happens after the other
[02:16] <x58> So while 1 might be sleeping 20 seconds, the other might sleep for 10, and that second one may have taken longer to install/bring up, and then no cluster ;-)
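The failure mode described above can be sketched with a quick simulation (purely illustrative, not charm code; the sleep and install-time ranges are made-up assumptions): two independent random sleeps only serialize the two units when the gap between the sleeps exceeds the first unit's install/bring-up time.

```python
import random

def units_staggered(trials=10000, seed=42):
    """Estimate how often two independent random sleeps actually
    serialize two units, i.e. the first unit finishes bring-up
    before the second one starts registering."""
    random.seed(seed)
    ok = 0
    for _ in range(trials):
        windows = []
        for _unit in range(2):
            sleep = random.uniform(0, 30)    # invented random stagger
            install = random.uniform(5, 25)  # invented bring-up time
            windows.append((sleep, sleep + install))
        first, second = sorted(windows)
        # serialized only if the first unit finishes before the second starts
        if first[1] <= second[0]:
            ok += 1
    return ok / trials

rate = units_staggered()
```

With these particular invented ranges the success rate lands near 30%, which is in line with the anecdote above; the point is that uncoordinated sleeps cannot guarantee ordering.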
[02:16] <x58> etcd is just a giant pain ;-)
[02:16] <lazyPower> I can do a better job of coordinating the registration (new state) and when we attempt to start etcd post rendering the defaults file. Each unit knows their identifier in that string, so it makes sense for me to figure out if we have registered with the leader and are really ready to attempt turn up.
[02:17] <x58> I am getting my feet wet with Charms... so I'd definitely be interested in seeing how that is done.
[02:17] <x58> but that sounds like it would potentially work.
[02:17] <lazyPower> well let's start here :) https://github.com/juju-solutions/layer-etcd
[02:17] <lazyPower> I'll ping you on the PR so you can get some eyes on it
[02:17] <x58> Oh, I'm familiar with that code :P
[02:17] <x58> @bertjwregeer
[02:18] <lazyPower> \o/
[02:18] <x58> BTW, pip shouldn't be installed on Xenial, since it has pip3 installed by default.
[02:18] <x58> I had to rm -rf pip*.tar.gz from wheelhouse
[02:18] <lazyPower> pip's coming from layer-basic
[02:18] <x58> Ah
[02:18] <lazyPower> we just merged or had a great discussion about merging something that fixes that
[02:18]  * lazyPower goes and checks
[02:19] <lazyPower> yep! https://github.com/juju-solutions/layer-basic/pull/70 landed
[02:19] <x58> Awesome.
[02:26] <arosales> x58: hello, going good just got some dinner
[02:27] <x58> arosales: I'm working on Charms, helping debug a bunch of the OpenStack ones :P
[02:27] <x58> Didn't even have to join your team ;-)
[02:29] <x58> arosales: Mad respect for what you and your team do though. It's making my life easier :-D
[02:30] <arosales> x58: we are one big team here, hopefully the charms also help you out as well :-)
[02:31] <arosales> x58: glad you popped in here
[02:31] <x58> arosales: mattrae asked me to, to be able to chat with lazyPower about the etcd charm =)
[02:33] <arosales> Good suggestion by mattrae
[02:33] <arosales> Lots of collective knowledge here
[02:34] <x58> Yeah, I'll stick around :-)
[02:34] <arosales> x58: also trying to make the openstack charm dev process more straightforward
[02:35] <arosales> Getting some docs around the dev workflow now that one can deploy to LXD and dev on their desktop or laptop
[02:36] <x58> Sounds fantastic :-D
[02:36] <x58> The current openstack charms are interesting though, they spawn an insane number of processes if you have a ton of cores on a node... an overkill amount of processes :P
[02:36] <arosales> x58: do you know of the openstack irc charm meeting?
[02:37] <x58> I do not.
[02:38] <arosales> That process spawning is a function of the openstack services though, I think
[02:38] <x58> Well, the charm has a multiplier on it that is set to 2 by default: 2x the number of CPU cores on a node.
[02:39] <x58> Our nodes are 80 core monsters... so 160 keystone WSGI processes get started :P
[02:42] <x58> arosales: When is the openstack irc charm meeting?
[02:42] <arosales> Fit Nova?
[02:43] <x58> Fit nova?
[02:43] <x58> It's the APIs right now, nova-api, glance-api, keystone-api, all of those have multipliers on them. Led to some interesting process lists ;-)
[02:43] <arosales> x58:  I was just looking for the meeting time, and couldn't find it. gnuoy leads it and I'll ask him to repost to the list
[02:44] <arosales> Sorry I meant is that multiplier on the nova charm ?
[02:44] <x58> Yeah
[02:45] <x58> I am not at work, so I can't give you the list of them, but we ended up setting them to 1, we'd like to set that to 0.5 since we are still running so many more API processes than required for the openstack cluster.
[02:46] <x58> I think Matt may have entered info into Salesforce for that, since the multiplier only takes integers.
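The arithmetic behind those process counts can be sketched as follows (worker_count is a hypothetical helper for illustration, not the actual charm code; real charms derive this from a worker-multiplier config option):

```python
def worker_count(cpu_cores, multiplier):
    """Number of API worker processes for a given core count and
    worker multiplier; at least one worker is always started."""
    return max(1, int(cpu_cores * multiplier))

# 80-core node with the default multiplier of 2 -> 160 workers
default = worker_count(80, 2)

# Same node with a fractional multiplier of 0.5 -> 40 workers,
# which an integer-only config option cannot express.
fractional = worker_count(80, 0.5)
```

This is why an integer-only multiplier is a problem on large nodes: the smallest nonzero value, 1, still yields one worker per core.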
[02:47] <arosales> Ok I'll take a look also
[02:48] <stub> aisrael: The pull request at https://github.com/juju-solutions/charms.reactive/pull/51 fixes this and a number of other import related glitches, so you can do 'import reactive.storage' or whatever when you need
[02:49] <stub> aisrael: That said, best practice seems to be to put your 'api' things in lib/charms somewhere, which can be imported just fine without the patch
[02:50] <arosales> godleon: juju deployer has been updated for 2.0 but native juju deploy should just work now
[02:52] <stub> aisrael: I need both, so I have a workaround in the PG charm. If you create a lib/reactive/__init__.py file like http://bazaar.launchpad.net/~postgresql-charmers/postgresql-charm/built/view/head:/lib/reactive/__init__.py, it gets invoked when you do an 'import reactive.whatever' and extends the search path to find the real reactive package (using standard mechanisms, not the horrible symlink hack I first came up with)
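The workaround stub describes can be sketched as follows (an assumption on my part that the "standard mechanisms" means pkgutil.extend_path; the directory layout here is invented for the demo, and the real file is at the Launchpad link above): an `__init__.py` whose package extends its own `__path__` so that modules in any other `reactive` directory on `sys.path` become importable as `reactive.<module>`.

```python
import os
import sys
import tempfile
import textwrap

# Two separate 'reactive' directories, mimicking a charm that has
# both lib/reactive (a real package) and hooks/reactive (plain
# handler modules). Paths are temp dirs invented for the demo.
base = tempfile.mkdtemp()
lib = os.path.join(base, "lib", "reactive")
hooks = os.path.join(base, "hooks", "reactive")
os.makedirs(lib)
os.makedirs(hooks)

# The package's __init__.py extends its search path via pkgutil,
# the standard alternative to symlink hacks.
with open(os.path.join(lib, "__init__.py"), "w") as f:
    f.write(textwrap.dedent("""\
        from pkgutil import extend_path
        __path__ = extend_path(__path__, __name__)
    """))

# A handler module living in the *other* reactive directory.
with open(os.path.join(hooks, "storage.py"), "w") as f:
    f.write("ANSWER = 42\n")

sys.path[:0] = [os.path.join(base, "lib"), os.path.join(base, "hooks")]

# 'reactive' resolves to lib/reactive, but extend_path lets the
# import machinery also find hooks/reactive/storage.py.
import reactive.storage
```

The effect is that `import reactive.storage` works even though `storage.py` does not live inside the package that owns the `reactive` name.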
[02:53] <stub> lazyPower: ^
[02:54] <godleon> arosales: do you mean I can just use juju deploy to replace juju-deployer?
[02:55] <x58> YEs
[02:55] <arosales> godleon: correct
[02:56] <godleon> ok....... it seems juju-deployer will phase out in the future......?
[02:56] <arosales> godleon: bundle inheritance is the one feature that didn't come across that people do ask about
[02:57] <arosales> Juju deployer should phase out
[02:57] <arosales> Dev will be focused on native deploy
[03:13] <godleon> arosales: I got it. Thanks! :)
[03:14] <arosales> godleon: np
[03:14] <godleon> lazyPower: thanks for your information about juju and kube. :)
[03:20] <lazyPower> godleon np
[06:58] <godleon> Hi all, is it possible the ssh key didn't get injected into a machine when deploying it?
[07:00] <godleon> I found I failed deploying a charm because of "hook failed: install", and when I executed "juju debug-hooks xxxx/x", I got permission denied as the response.
[07:01] <godleon> and I also found all charms I tried to deploy on that machine hang for a long time, and then can not be fixed by juju resolved
[07:43] <jamespage> gnuoy, if you have cycles: https://review.openstack.org/#/c/320450/
[07:49] <Alex_____> Hi, could anybody point me to the docs on using spot instances with juju on AWS please?
[08:11] <kjackal> Hey Alex_____ , I do not think we support provisioning of spot instances. (At least I couldn't find any docs for this)
[08:12] <kjackal> Alex_____: How/why are you asking? I mean, how did this question come about?
[08:32] <kjackal> Hey gnuoy, gavriil here has some questions on Openstack and MaaS and he could use our help!
[08:32] <gnuoy> hi gavriil
[08:32] <gavriil> hello
[08:34] <gavriil> i am interested in a "cloud platform" which will spawn and manage virtual machines for me
[08:35] <gnuoy> gavriil, Openstack sounds like a good candidate
[08:36] <gavriil> yes, indeed. I tried to install it manually on my 2 desktop pcs but i failed.
[08:37] <gnuoy> gavriil, what problem(s) did you hit?
[08:38] <gavriil> i used tutorial and the official documentation, but each time i hit different bugs that i couldn't overcome. Some of them were shared by other users and weren't resolved yet.
[08:38] <gavriil> tutorials*
[08:39] <gavriil> and others maybe caused by my novice level of networking skills
[08:40] <gnuoy> gavriil, have you looked at conjure-up ?
[08:40] <gnuoy> http://conjure-up.io/
[08:44] <gavriil> no, does it have any specific requirement?
[08:50] <gavriil> https://www.dropbox.com/s/gein73gkjlcoihr/systemCloud.pdf?dl=0 this is my setup.
[08:56] <gnuoy> gavriil, I believe it can be used to either deploy Openstack fully in containers on your laptop or to utilise maas and give you some options about placing services. tbh I haven't used it.
[08:57] <gnuoy> gavriil, tbh, I think the next steps would probably be to go through the process (with or without conjure-up) and shout when you hit an issue
[09:04] <gavriil> my pcs are i5 4590 3.3ghz 16 gb ram and i7 860 2.8ghz 16 ram . Each one has 2 wired network interfaces and one wireless.
[09:04] <gavriil> is it enough for a two node installation?
[09:04] <gavriil> the second pc has 12 gb ram*
[09:07] <jamespage> gnuoy, https://code.launchpad.net/~james-page/charm-helpers/swift-dev-support/+merge/295918
[09:10] <gnuoy> gavriil, it depends on whether those two nodes need to supply the maas server and the juju bootstrap node or if you have other machines (or kvms) for those.
[09:12] <gnuoy> jamespage, +1 can I leave you to merge it?
[09:12] <jamespage> gnuoy, yah
[09:24] <jamespage> gnuoy, and another - https://code.launchpad.net/~james-page/charm-helpers/is-ip-ipv6/+merge/295920
[09:27] <gnuoy> jamespage, +1
[09:28] <jamespage> gnuoy, ta  - landed
[09:28] <gavriil> gnuoy, i can try to host the maas server on my VM or buy another low-end desktop PC (i3) with enough ram. Can you suggest how to set up my machines and what network configuration is needed (if needed), before registering them to MaaS.
[09:47] <gnuoy> gavriil, you could have the two servers maas is going to manage and maas on a dedicated network. The second nic on each of the servers maas is going to manage could be used to route traffic in and out of openstack once it's installed.
[10:04] <gavriil> gnuoy, correct me if i got it wrong, i need 3 machines for maas deployment, one for maas controller and 2 for running openstack.  My router provides dhcp (internet connection) and all 3 machines should be connected to it. The first step is the installation of the maas controller which will automatically detect the dhcp of the router and start managing it. After that i will register the remaining two pcs to MaaS.
[10:05] <gavriil> The last step is the use of juju to spawn containers that will host the openstack services?
[10:08] <gavriil> gnuoy, can you please fill me in on any important logical steps that i probably omitted?
[10:20] <gnuoy> gavriil, the logical steps seem fine. I don't think maas is going to automatically detect the dhcp of the router and start managing it, it'll manage whatever IP range you give it. Also, the nodes under maas control do need internet access but you may choose to proxy that internet access through another machine, like the maas controller.
[10:21] <gnuoy> gavriil, usually I'd have maas on a dedicated vlan, something like https://docs.google.com/drawings/d/12YgpEucC0OADkVrVzwNXe1V7lwrvcsBfsTwPmz-h6Co/edit?usp=sharing
[10:30] <jamespage> gnuoy, can you take a look at https://review.openstack.org/#/c/322035/ pls
[10:41] <gavriil> gnuoy, two parts are not clear to me. First, the nodes under maas control are connected to the first network, so they can have internet access through the router. So why should i proxy the internet access through another machine? Second, in your drawing there are 2 networks. Is the maas network virtual or physical, and what is its purpose? Thank you very much for your help!
[11:04] <Alex_____> kjackal: sorry, was AFK. The question comes from the idea that Juju can help data scientists and engineers save money on AWS by avoiding the EMR fee and using the cheapest on-demand option, which is the spot instance one. Could you point me to the codebase which does the AWS provisioning? I'd love to understand how hard it would be to add spot instance support
[11:05] <kjackal> Alex_____ the provisioners (!?) live in juju-core
[11:05] <kjackal> let me see if I can spot them
[11:06] <kjackal> Alex_____: have a look here: https://github.com/juju/juju/tree/master/provider/ec2
[11:06] <Alex_____> kjackal: thanks! if I think more about it - how do you think, would it be possible to work around by manually provisioning N spot instances and then using Juju just to deploy everything on running instances?
[11:08] <kjackal> This is an option, yes! However, I have a feeling that adding support for provisioning EC2 spots is a contribution that the juju-core devs will find hard to resist :)
[11:09] <Alex_____> that sounds awesome :) Can I help somehow to make it even more attractive for the juju-core devs?
[11:09] <Alex_____> I'm looking at https://jujucharms.com/docs/1.24/config-manual is that the right place for described workaround?
[11:12] <kjackal> Alex_____ : The _not_recommended_way_ is what digital ocean is doing https://jujucharms.com/docs/1.24/config-digitalocean source in here https://github.com/kapilt/juju-digitalocean
[11:13] <kjackal> Alex_____: again, if you are going to spend time automating the process of spot instances, you are strongly advised to do it in juju-core and offer it as a contribution
[11:14] <kjackal> there are certain limitations when using the manual provider (e.g. you cannot add units)
[11:15] <Alex_____> kjackal: got it. Thanks for letting me know, that helps a lot!
[11:16] <kjackal> Alex_____ Actually there is an email that just landed on the juju list that you might want to keep an eye on. The title is "Juju support for Synnefo"; there, the people from a cloud called okeanos are interested in contributing a provider for their cloud
[11:18] <kjackal> I am very curious what the recommended way to approach this kind of problem is. Perhaps Alex_____ you could also ask on the list about the proper way to extend an existing provider
[11:18] <kjackal> I am sorry I cannot answer this, as I am not from the juju-core team
[11:20] <Alex_____> kjackal: thank you so much for pointing this all out! I appreciate it, kjackal. It is exactly the information I was looking for. I will keep an eye on the mentioned thread as well as ask this question on the list
[11:31] <Alex_____> I'm building use cases for a workshop about open source data analytics tools that people can use for their hobby projects at home and cost is very important factor here
[11:31] <kjackal> So Alex_____ why do you need EC2 spots?
[11:31] <kjackal> Sorry go on
[11:31] <Alex_____> sure! so the idea is: with the recent adoption of Juju in the Apache Software Foundation (BigTop especially, Zeppelin, etc) more and more people will start looking into it as an open source option for their hobby/part-time projects, and those are people who love open source and are cheapskates (in a good way), so this value proposition will be dear to their hearts (I'm one of them :) ).
[11:32] <kjackal> Sounds cool!
[11:32] <Alex_____> And then the same people will bring it through the doors of their organizations to daytime jobs later on. So it should help the adoption
[11:38] <Alex_____> Having an AWS provisioner that supports something like `add-unit` with spot instances for adding more workers to the instance group would be dope
[11:42] <gnuoy> gavriil, the first nic is used by maas to provision the servers (dhcp, pxe etc) and when the servers are installing, traffic like package updates is routed through the maas node; the second nic is not used at all. In fact the second nic could be attached to any network you like, since I was thinking it would be used to route traffic in and out of the vms that you spin up within openstack (they would act as the ext-port https://api.jujucharms.com/charmstore/v5/trusty/neutron-gateway-5/archive/config.yaml)
[11:44] <jcastro> Alex_____: https://bugs.launchpad.net/juju/+bug/945862
[11:44] <mup> Bug #945862: Support for AWS "spot" instances <pyjuju:Confirmed> <https://launchpad.net/bugs/945862>
[11:45] <jcastro> I agree 100%, I'll talk to the core team about it the next time we meet
[11:45] <jcastro> Alex_____: it would help us out tremendously if you could add a comment to the bug with some of the things you've outlined here in IRC.
[11:46] <Alex_____> jcastro: sure! I always fancied that nice launchpad account but was always lazy to register :)
[11:47] <magicaltrout> also point out you have a beard
[11:47] <magicaltrout> you'll get more kudos
[11:47] <jcastro> Alex_____: the reason I ask is it's one thing if I ask, like when I filed a bug. But it's a totally different priority when someone who uses it in real life +1's a bug
[11:48] <Alex_____> jcastro: makes perfect sense, I
[11:53] <jcastro> but yes, I have wanted this feature for a very long time, so I look forward to having evidence that someone would use it
[11:53] <jcastro> for example, in a bundle, it would be nice to do something like:
[11:53] <jcastro> servicename:
[11:54] <jcastro>    allow_spot: true
[11:54] <jcastro>    min_ondemand: 3
[11:54] <jcastro> so if I add units past three, they go to spot instances, but I want enough ondemand instances to keep the service up and reliable
[11:54] <jcastro> but other than that, go as cheap as possible
[11:55] <jcastro> and then of course, we could have constraints on cost, just like we do for cpu and memory
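The hypothetical knobs sketched above (allow_spot and min_ondemand are invented names from this conversation, not real Juju bundle options) would reduce to a placement rule along these lines:

```python
def instance_types(unit_count, allow_spot, min_ondemand):
    """Label each unit 'ondemand' or 'spot': the first min_ondemand
    units stay on-demand for reliability, and any units beyond that
    go to cheaper spot capacity when allow_spot is set."""
    kinds = []
    for i in range(unit_count):
        if allow_spot and i >= min_ondemand:
            kinds.append("spot")
        else:
            kinds.append("ondemand")
    return kinds

# 5 units with allow_spot: true, min_ondemand: 3
plan = instance_types(5, allow_spot=True, min_ondemand=3)
# -> ['ondemand', 'ondemand', 'ondemand', 'spot', 'spot']
```

Cost constraints would then layer on top of this the same way cpu and memory constraints already do.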
[11:55] <Alex_____> yup, that sounds like a perfect plan
[11:56] <magicaltrout> it makes sense
[11:56] <magicaltrout> especially on services that can fail hard
[12:07] <Alex_____> jcastro magicaltrout did my best at https://bugs.launchpad.net/juju/+bug/945862
[12:07] <mup> Bug #945862: Support for AWS "spot" instances <pyjuju:Confirmed> <https://launchpad.net/bugs/945862>
[12:14] <gennadiy> hi everybody. we have openstack with disabled security groups - `enable_security_group = False`. so in this case juju can't create a machine because it requires a security group. do we have some juju parameter to prevent security group creation?
[12:20] <gennadiy> another question: do we have the possibility to specify the number of network interfaces?
[12:23] <gnuoy> bryan_att, Hi, I've deployed successfully on trusty http://paste.ubuntu.com/16729946/ . Creating and listing a datasource is the full extent of my testing though :-)
[12:50] <SaMnCo> marcoceppi cory_fu: ping
[12:50] <SaMnCo> I used your advice from yesterday, nearly flawless victory, so thanks
[12:51] <SaMnCo> but I have a problem with arguments of the provide function that are optional
[12:51] <SaMnCo> I don't understand how to pass them in bash
[12:51] <SaMnCo> essentially, I have 2 sets of arguments that can be used, combined or not
[12:52] <SaMnCo> both optional
[12:52] <SaMnCo> relation_call doesn't let me specify the names of the arguments, so I am sort of out of business
[12:52] <SaMnCo> any thoughts?
[12:54] <cholcombe> dosaboy, do you have experience with sphinx docs?
[12:55] <dosaboy> cholcombe: not recently
[12:56] <cholcombe> dosaboy, ok.  i was wondering why my :param list: blah blah isn't parsing properly with sphinx
[12:56] <cholcombe> it's so picky about syntax it seems
[13:09] <gnuoy> jamespage, you've given yourself a +1 on https://review.openstack.org/#/c/322035/1 , I don't think that's really the done thing, is it?
[13:10] <jamespage> gnuoy, I was using that to make your life easier - those are ones I think are ready to go
[13:10] <jamespage> but I'd not told you that yet :-)
[13:10] <jamespage> gnuoy, https://review.openstack.org/#/q/status:open+topic:charmhelpers-resync
[13:10] <gnuoy> ah, ok
[13:11] <gnuoy> jamespage, what did you and beisner agree was the minimum for osci to run to approve a charm helper sync?
[13:12] <jamespage> gnuoy, we've agreed it change by change - but fwiw I think these are OK with a smoke only
[13:12] <gnuoy> jamespage, yep, +1
[13:17] <jamespage> gnuoy, https://review.openstack.org/#/c/322035/ is ready as well + 3 more on the general sync list
[13:33] <marcoceppi> SaMnCo: I was under the impression you could pass key=val to relation_call but I could be wrong
[13:33] <SaMnCo> I tried that but it didn't work, I thought it would as well
[13:33] <SaMnCo> Maybe I should try from scratch
[13:33] <SaMnCo> let me do that
[13:38] <SaMnCo> marcoceppi: would something around those lines be better?
[13:38] <SaMnCo> https://www.irccloud.com/pastebin/NArzBWDD/script
[13:39] <marcoceppi> SaMnCo: that wouldn't really work, I don't think, due to lack of context
[13:40] <icey> we don't seem to be testing charm-tools with python3?
[13:46] <jamespage> gnuoy, https://review.openstack.org/#/q/topic:charmhelpers-resync+status:open more ready to go
[13:53] <cholcombe> jamespage, cinder question.  It looks like huawei needs an xml configuration file.  The HuaweiSubordinateContext that I return to Cinder is just a json blob.  Will cinder take care of writing that xml file or do I need to patch that also?
[13:54] <jamespage> cholcombe, the subordinate is responsible for writing anything that is specific to it
[13:54] <cholcombe> jamespage, gotcha.  ok that's fine
[13:54] <jamespage> cholcombe, so for ceph - > ceph.conf and the client keys
[13:54] <jamespage> cinder-ceph that is
[13:54] <cholcombe> jamespage, i'm just going off of your vmware cinder driver code.  It looks like it returns a context back to cinder
[13:55] <jamespage> cholcombe, it might pass some data back to cinder, but that's cause it need to be written into cinder.conf
[13:55] <jamespage> sticking to the principle that only a single charm can own a file, it must be done that way
[13:55] <cholcombe> ok i think i know what it needs then. Just the basic use this driver, here's the config file, etc
[13:55] <jamespage> cholcombe, most likely
[13:59] <cholcombe> how do we unit test layered charms that depend on packages that are installed via the wheelhouse?
[14:00] <lazyPower> I'm pretty sure you can just pipe in the dependency list to tox...
[14:00] <jamespage> cholcombe, by building the charm and writing amulet tests for it...
[14:00] <icey> jamespage: issue is with dependencies
[14:01] <jamespage> cholcombe, oh sorry - yes I see - well talk with tinwood and gnuoy - they have this figured out
[14:02] <jamespage> cholcombe, icey: but broadly it involves building a tox virtual env, installing what you know is needed for unit testing and executing some unit tests...
[14:02] <icey> jamespage: it gets more painful, trust me :)
[14:03] <jamespage> icey, cholcombe: all I'm saying is don't repeat thinking that might have already been done in this space :-)
[14:04] <cholcombe> jamespage, yup :)
[14:05] <icey> jamespage: a big part of my issues seems to be that I'm migrating our python2 unit tests to python3+layers
[14:07] <jamespage> icey, quite possibly
[14:26] <magicaltrout> I was about to jokingly ask marcoceppi where you submit papers for the charmer summit
[14:26] <magicaltrout> and then saw the button on the website.....
[14:26] <marcoceppi> magicaltrout: haha already a step ahead!
[14:26] <magicaltrout> :P
[14:27] <magicaltrout> I'm on a talk submission afternoon
[14:27] <marcoceppi> magicaltrout: well I may have one more for you
[14:28] <magicaltrout> hook me up
[14:28] <jcastro> marcoceppi: hey any word from design for summit.j.s?
[14:28] <marcoceppi> jcastro: not yet
[14:28] <jcastro> I would like to totally announce that badboy
[14:30] <jamespage> gnuoy, three more to go if you have two ticks - https://review.openstack.org/#/q/topic:charmhelpers-resync+status:open
[14:41] <magicaltrout> there you go, partner summit proposal submitted
[15:03] <cory_fu> SaMnCo: I'm actually out today, but since I'm here for a second, I can tell you that reactive uses charmhelpers' cli module, and it looks like any params with default values ought to be treated as options that must be provided as --var_name=value
[15:04] <SaMnCo> cory_fu: I'll try that
[15:04] <SaMnCo> thanks, and sorry to put you out of your vacation
[15:04] <cory_fu> SaMnCo: That said, the calling convention for methods via the CLI is necessarily more restricted than in Python, so there may be things that you simply cannot pass in from bash that you could from Python.  It would be up to the interface layer author to keep that in mind, I guess
[15:04] <cory_fu> No worries.  :)
[15:05] <cory_fu> Anyway, back to day off.  \o
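The calling convention cory_fu describes can be illustrated generically with plain argparse (this is a sketch of the idea that defaulted parameters become --options, not the actual charmhelpers cli module; build_parser and register are invented names):

```python
import argparse
import inspect

def build_parser(func):
    """Build an argparse parser from a function signature:
    parameters without defaults become positional arguments,
    parameters with defaults become --options."""
    parser = argparse.ArgumentParser(prog=func.__name__)
    for name, param in inspect.signature(func).parameters.items():
        if param.default is inspect.Parameter.empty:
            parser.add_argument(name)
        else:
            parser.add_argument("--" + name, default=param.default)
    return parser

def register(unit, port="2379"):
    """Example target: 'unit' is required, 'port' is optional."""
    return unit, port

parser = build_parser(register)
# Required arg positionally, defaulted arg as --port=value
args = parser.parse_args(["etcd/0", "--port=2380"])
```

Anything beyond this shape, e.g. *args or arbitrary keyword dicts, is exactly the kind of thing that cannot cross the bash boundary cleanly.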
[15:05] <magicaltrout> jammy git
[15:14] <gennadiy> hi. we have openstack with disabled security groups - `enable_security_group = False`. so in this case juju can't create a machine because it requires a security group. do we have some juju parameter to prevent security group creation?
[15:14] <gennadiy> another question: do we have the possibility to specify the number of network interfaces?
[15:54] <cholcombe> there should be an award for converting old charms to layered :)
[16:12] <SaMnCo> cory_fu, marcoceppi : the --var_name=value doesn't work
[16:13] <marcoceppi> cholcombe: absolutely
[16:14] <SaMnCo> https://www.irccloud.com/pastebin/5h6hqAme/call_api
[16:26] <lazyPower> cory_fu marcoceppi  - btw charm build -r is like my new favorite thing ever
[16:38] <bryan_att> gnuoy: have some time to talk about how you did that? I'm getting an error:  https://www.irccloud.com/pastebin/7GH0InRK/
[16:46] <bryan_att> gnuoy: nevermind - it's the version issue again. I used the other command format, trying now.
[16:46] <marcoceppi> lazyPower: what's the -r do?
[16:46] <marcoceppi> ah, the reporting
[16:46] <lazyPower> marcoceppi it generates a report of what changed (the delta) and runs `charm proof` on the assembled charm
[16:46] <marcoceppi> we should consider making that the default
[17:05] <lazyPower> +1 that sounds good to me
[17:27] <bryan_att> https://www.irccloud.com/pastebin/ltEyPcEd/gnuoy%3A%20having%20some%20issues%20with%20the%20install%20per%20the%20current%20repo
[17:28] <bryan_att> gnuoy: see the previous post - I put the note in the filename field... having some issues with the install per the current repo
[18:03] <x58> jamespage: Thanks for all the help with the RabbitMQ charm :-)
[18:21] <bdx> big-data: Any current plans for packetbeat?
[19:00] <bryan_att> gnuoy: ping
[19:44] <lazyPower> bdx yes, and dockerbeat
[19:44] <lazyPower> bdx - I've been swamped with this etcd rework this week but I have plans to release layers for both packetbeat and dockerbeat in the next 2 weeks and propose them against the beats-core stack
[19:46] <lazyPower> s/stack/bundle
[21:05] <magicaltrout> it was better when the charmstore login was broken
[21:05] <magicaltrout> at least I didn't have to look at my face every time I go there
[21:12] <x58> Is there a way to tell juju deploy to only deploy a single machine at a time when deploying a bundle?
[21:28] <x58> Running into a bug I think is related to how fast Juju is asking machines to be deployed in MaaS: https://bugs.launchpad.net/maas/+bug/1586540
[21:28] <mup> Bug #1586540: MaaS 2.0 beta 5 fails to assign IP address to nodes when multiple nodes go into deploying at once <cpec> <juju> <maas2.0> <MAAS:New> <https://launchpad.net/bugs/1586540>
[23:19] <x58> Nevermind, that wasn't the bug we thought it was. I feel bad.
[23:19] <x58> Here's a new bug instead: https://bugs.launchpad.net/maas/+bug/1586555
[23:19] <mup> Bug #1586555: MaaS 2.0 BMC information not removed when nodes are removed <cpec> <MAAS:New> <https://launchpad.net/bugs/1586555>