[00:07] Hello
[00:08] GlueCon was an interesting conference. Seems Juju fits into a lot of use cases and problem scenarios represented
[00:10] There is a similar conference in November; folks interested in presenting their work should submit CFPs to
[00:10] http://defragcon.com/
[00:37] Hi all, can I use remote LXD as Juju's cloud provider?
[00:44] godleon: no, it does not yet support remote lxd endpoints, just the local one
[00:46] rick_h: oh ok, I found juju consumes as many physical machines as the number of charms I drag and drop into the juju gui.
[01:54] arosales: How's it going?
[02:07] x58 o/
[02:08] lazyPower: Heya :-D
[02:08] Groovy, mattrae tipped me off you might be here :)
[02:08] Yeah, I posted some commentary ^^^
[02:09] Let me know if you need me to grab that backlog and repeat it.
[02:09] Does juju-deployer support juju 2.0 ?
[02:09] Ah i see
[02:10] but what i've seen out of the new layer is they reconcile when the leader sets the info after both have completed their membership add, that initial bring-up failure is ok
[02:11] On our machines we haven't had them cluster yet :-(
[02:12] I understand, i think it's a race condition in the charm's logic
[02:12] The leader wins, and then one will do member add, and never seem to join the cluster, then the other will member add, and that one can't cluster because two nodes can't be in "new member" state at the same time.
[02:12] ah
[02:12] good insight!
[02:13] Also, each of the new nodes has to have INITIAL_PEERS or whatever that variable is (I don't have it in front of me at the moment) of all the other nodes in the cluster
[02:13] when you try to add two at the same time, they won't ever get a proper peer list.
[02:13] and fail to cluster.
[02:13] You have to stagger them...
[02:15] On the old charm, not the layered one, I tried adding a random sleep, and that worked about 30% of the time, mostly because I can't guarantee that one happens after the other
[02:16] So while one might be sleeping 20 seconds, the other might sleep for 10, and that second one may have taken longer to install/bring up, and then no cluster ;-)
[02:16] etcd is just a giant pain ;-)
[02:16] I can do a better job of coordinating the registration (new state) and when we attempt to start etcd post rendering the defaults file. Each unit knows their identifier in that string, so it makes sense for me to figure out if we have registered with the leader and are really ready to attempt turn-up.
[02:17] I am getting my feet wet with Charms... so I'd definitely be interested in seeing how that is done.
[02:17] but that sounds like it would potentially work.
[02:17] well, let's start here :) https://github.com/juju-solutions/layer-etcd
[02:17] I'll ping you on the PR so you can get some eyes on it
[02:17] Oh, I'm familiar with that code :P
[02:17] @bertjwregeer
[02:18] \o/
[02:18] BTW, pip shouldn't be installed on Xenial, since it has pip3 installed by default.
[02:18] I had to rm -rf pip*.tar.gz from wheelhouse
[02:18] pip's coming from layer-basic
[02:18] Ah
[02:18] we just merged or had a great discussion about merging something that fixes that
[02:18] * lazyPower goes and checks
[02:19] yep! https://github.com/juju-solutions/layer-basic/pull/70 landed
[02:19] Awesome.
[02:26] x58: hello, going good just got some dinner
[02:27] arosales: I'm working on Charms, helping debug a bunch of the OpenStack ones :P
[02:27] Didn't even have to join your team ;-)
[02:29] arosales: Mad respect for what you and your team do though.
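[Editor's note: a minimal sketch of the staggered registration lazyPower describes above, so only one unit is ever in etcd's "new member" state at a time. This is not the actual layer-etcd fix; the 'cluster' relation name and the leader-settings key are assumptions, but the hookenv calls are real charmhelpers API.]

```python
# Hedged sketch: serialize etcd member registration through the Juju
# leader. Peers mark themselves 'registered' on the peer relation; the
# leader hands out a single registration token at a time.
from charmhelpers.core import hookenv


def grant_next_registration():
    """Run on the leader: grant the registration token to one peer."""
    if hookenv.leader_get('registration_token'):
        return  # a unit is still mid-registration; everyone else waits
    for rid in hookenv.relation_ids('cluster'):       # assumed peer relation
        for unit in hookenv.related_units(rid):
            if hookenv.relation_get('registered', unit, rid) != 'true':
                # Grant exactly one unit the right to member-add itself.
                hookenv.leader_set(registration_token=unit)
                return


def may_attempt_turn_up():
    """Run on a peer: only run `etcdctl member add` and start the
    service while this unit holds the token."""
    return hookenv.leader_get('registration_token') == hookenv.local_unit()
```

Once a unit finishes joining it would set `registered=true` on the peer relation; the leader then clears the token and grants the next waiter, which avoids the two-units-in-new-member-state race discussed above.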
It's making my life easier :-D
[02:30] x58: we are one big team here, hopefully the charms also help you out as well :-)
[02:31] x58: glad you popped in here
[02:31] arosales: mattrae asked me to, to be able to chat with lazyPower about the etcd charm =)
[02:33] Good suggestion by mattrae
[02:33] Lots of collective knowledge here
[02:34] Yeah, I'll stick around :-)
[02:34] x58: also trying to make the openstack charm dev process more straightforward
[02:35] Getting some docs around the dev workflow now that one can deploy to LXD and dev on their desktop or laptop
[02:36] Sounds fantastic :-D
[02:36] The current openstack charms are interesting though, they spawn some insane amount of processes if you have a ton of cores on a node... overkill amount of processes :P
[02:36] x58: do you know of the openstack irc charm meeting?
[02:37] I do not.
[02:38] That process spawning is a function of the openstack services though, I think
[02:38] Well, the charm has a multiplier on it by default that is set to 2, 2x the amount of CPU cores on a node.
[02:39] Our nodes are 80 core monsters... so 160 keystone WSGI processes get started :P
=== redir is now known as redir_afk
[02:42] arosales: When is the openstack irc charm meeting?
[02:42] Fit Nova?
[02:43] Fit nova?
[02:43] It's the APIs right now, nova-api, glance-api, keystone-api, all of those have multipliers on them. Led to some interesting process lists ;-)
[02:43] x58: I was just looking for the meeting time, and couldn't find it. gnuoy leads it and I'll ask him to repost to the list
[02:44] Sorry, I meant is that multiplier on the nova charm?
[02:44] Yeah
[02:45] I am not at work, so I can't give you the list of them, but we ended up setting them to 1; we'd like to set that to 0.5 since we are still running so many more API processes than required for the openstack cluster.
[02:46] I think Matt may have entered info into Salesforce for that, since the multiplier only takes integers.
[02:47] Ok I'll take a look also
[02:48] aisrael: The pull request at https://github.com/juju-solutions/charms.reactive/pull/51 fixes this and a number of other import-related glitches, so you can do 'import reactive.storage' or whatever when you need
[02:49] aisrael: That said, best practice seems to be to put your 'api' things in lib/charms somewhere, which can be imported just fine without the patch
[02:50] godleon: juju deployer has been updated for 2.0 but native juju deploy should just work now
[02:52] aisrael: I need both, so I have a workaround in the PG charm. If you create a lib/reactive/__init__.py file like http://bazaar.launchpad.net/~postgresql-charmers/postgresql-charm/built/view/head:/lib/reactive/__init__.py, it gets invoked when you do an 'import reactive.whatever' and extends the search path to find the real reactive package (using standard mechanisms, not the horrible symlink hack I first came up with)
[02:53] lazyPower: ^
[02:54] arosales: do you mean I can just use juju deploy to replace juju-deployer?
[02:55] Yes
[02:55] godleon: correct
[02:56] ok....... it seems juju-deployer will phase out in the future......?
[02:56] godleon: bundle inheritance is the one feature that didn't come across that people do ask about
[02:57] Juju deployer should phase out
[02:57] Dev will be focused on native deploy
[03:13] arosales: I got it. Thanks! :)
[03:14] godleon: np
[03:14] lazyPower: thanks for your information about juju and kube. :)
[03:20] godleon np
[06:58] Hi all, is it possible the ssh key didn't get injected into the machine when deploying it?
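[Editor's note: the "standard mechanisms" in the lib/reactive/__init__.py workaround above are most likely pkgutil's path extension. This is a hedged guess at what the linked file does, not a verbatim copy of it.]

```python
# lib/reactive/__init__.py -- sketch of the described workaround.
# When 'import reactive.whatever' resolves to this stub package first,
# extend its search path so Python also finds the charm's real
# reactive/ directory (the standard mechanism, not a symlink hack).
from pkgutil import extend_path

__path__ = extend_path(__path__, __name__)
```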
[07:00] I found I failed deploying a charm because of "hook failed: install", and when I executed "juju debug-hooks xxxx/x", I got permission denied as the response.
[07:01] and I also found all charms I tried to deploy on that machine hang for a long time, and then can not be fixed by juju resolved
=== frankban|afk is now known as frankban
[07:43] gnuoy, if you have cycles: https://review.openstack.org/#/c/320450/
[07:49] Hi, could anybody point me to the docs on using spot instances with juju on AWS please?
[08:11] Hey Alex_____ , I do not think we support provisioning of spot instances. (At least couldn't find any docs for this)
[08:12] Alex_____: How/why are you asking? I mean, how did this question come about?
[08:32] Hey gnuoy, gavriil here has some questions on Openstack and MaaS and he could use our help!
[08:32] hi gavriil
[08:32] hello
[08:34] I am interested in a "cloud platform" which will spawn and manage virtual machines for me
[08:35] gavriil, Openstack sounds like a good candidate
[08:36] yes, indeed. I tried to install it manually on my 2 desktop pcs but I failed.
[08:37] gavriil, what problem(s) did you hit?
[08:38] I used tutorials and the official documentation, but each time I hit different bugs that I couldn't overcome. Some of them were shared by other users and hadn't been resolved yet.
[08:39] and others were maybe caused by my novice level of networking skills
[08:40] gavriil, have you looked at conjure-up?
[08:40] http://conjure-up.io/
[08:44] no, does it have any specific requirement?
[08:50] https://www.dropbox.com/s/gein73gkjlcoihr/systemCloud.pdf?dl=0 this is my setup.
[08:56] gavriil, I believe it can be used to either deploy Openstack fully in containers on your laptop or to utilise maas and give you some options about placing services. tbh I haven't used it.
[08:57] gavriil, tbh, I think the next steps would probably be to go through the process (with or without conjure-up) and shout when you hit an issue
[09:04] my pcs are an i5 4590 3.3GHz with 16 GB RAM and an i7 860 2.8GHz with 12 GB RAM. Each one has 2 wired network interfaces and one wireless.
[09:04] is it enough for a two node installation?
[09:07] gnuoy, https://code.launchpad.net/~james-page/charm-helpers/swift-dev-support/+merge/295918
[09:10] gavriil, it depends on whether those two nodes need to supply the maas server and the juju bootstrap node or if you have other machines (or kvms) for those.
[09:12] jamespage, +1 can I leave you to merge it?
[09:12] gnuoy, yah
[09:24] gnuoy, and another - https://code.launchpad.net/~james-page/charm-helpers/is-ip-ipv6/+merge/295920
[09:27] jamespage, +1
[09:28] gnuoy, ta - landed
[09:28] gnuoy, I can try to host the maas server on my VM or buy another low-end desktop PC (i3) with enough ram. Can you suggest how to set up my machines and what network configuration is needed (if any), before registering them to MaaS?
[09:47] gavriil, you could have the two servers maas is going to manage and maas on a dedicated network. The second nic on each of the servers maas is going to manage could be used to route traffic in and out of openstack once it's installed.
[10:04] gnuoy, correct me if I got it wrong: I need 3 machines for a maas deployment, one for the maas controller and 2 for running openstack. My router provides dhcp (internet connection) and all 3 machines should be connected to it. The first step is the installation of the maas controller, which will automatically detect the dhcp of the router and start managing it.
After that I will register the remaining two pcs to MaaS.
[10:05] The last step is the use of juju to spawn containers that will host the openstack services?
[10:08] gnuoy, can you please fill me in on any important logical steps that I probably omitted?
[10:20] gavriil, the logical steps seem fine. I don't think maas is going to automatically detect the dhcp of the router and start managing it; it'll manage whatever IP range you give it. Also, the nodes under maas control do need internet access, but you may choose to proxy that internet access through another machine, like the maas controller.
[10:21] gavriil, usually I'd have maas on a dedicated vlan, something like https://docs.google.com/drawings/d/12YgpEucC0OADkVrVzwNXe1V7lwrvcsBfsTwPmz-h6Co/edit?usp=sharing
[10:30] gnuoy, can you take a look at https://review.openstack.org/#/c/322035/ pls
[10:41] gnuoy, Two parts are not clear to me. First, the nodes under maas control are connected to the first network, so they can have internet access through the router. So why should I proxy the internet access through other machines? Second, in your drawing there are 2 networks. Is the maas network virtual or physical, and what is its purpose? Thank you very much for your help!
[11:04] kjackal: sorry, was AFK. The question comes from the idea that juju can help data scientists and engineers save money on AWS by avoiding the EMR fee and using the cheapest option, which is spot instances. Could you point me to the codebase which does the AWS provisioning? I'd love to understand how hard it would be to add spot instance support
[11:05] Alex_____ the provisioners (!?) live in juju-core
[11:05] let me see if I can spot them
[11:06] Alex_____: have a look here: https://github.com/juju/juju/tree/master/provider/ec2
[11:06] kjackal: thanks! if I think more about it - how do you think, would it be possible to work around it by manually provisioning N spot instances and then using Juju just to deploy everything on the running instances?
[11:08] This is an option, yes! However, I have a feeling that adding support for provisioning EC2 spots is a contribution that the juju-core devs will find hard to resist :)
[11:09] that sounds awesome :) Can I help somehow to make it even more attractive for the juju-core devs?
[11:09] I'm looking at https://jujucharms.com/docs/1.24/config-manual - is that the right place for the described workaround?
[11:12] Alex_____ : The _not_recommended_way_ is what digital ocean is doing https://jujucharms.com/docs/1.24/config-digitalocean source in here https://github.com/kapilt/juju-digitalocean
[11:13] Alex_____ Again Alex, if you are to spend time automating the process of spot instances you are strongly advised to do it in juju-core and offer it as a contribution
[11:14] there are certain limitations when using the manual provider (e.g. you cannot add-unit)
[11:15] kjackal: got it. Thanks for letting me know, that helps a lot!
[11:16] Alex_____ Actually there is an email that just landed on the juju list that you might want to keep an eye on. The title is "Juju support for Synnefo"; there the people from a cloud called okeanos are interested in contributing a provider for their cloud
[11:18] I am very curious what the recommended way to approach this kind of a problem is. Perhaps Alex_____ you could also ask on the list about the proper way to extend an existing provider
[11:18] I am sorry I cannot answer this, as I am not from the juju-core team
[11:20] kjackal: thank you so much for pointing this all out! I appreciate it, kjackal.
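[Editor's note: a hedged sketch of the workaround discussed above - provision spot instances yourself, then enlist them with Juju's manual provider. The AMI, key pair, bid price, and region below are placeholders; the manual-provider caveats kjackal mentions (e.g. add-unit) still apply.]

```python
# Sketch: request EC2 spot instances via boto3, then hand the running
# machines to Juju with `juju add-machine ssh:ubuntu@<ip>`.
import subprocess
import time

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')
resp = ec2.request_spot_instances(
    SpotPrice='0.10',                 # placeholder bid
    InstanceCount=3,
    LaunchSpecification={
        'ImageId': 'ami-xxxxxxxx',    # placeholder Ubuntu AMI
        'InstanceType': 'm3.large',
        'KeyName': 'juju-key',        # placeholder key pair
    },
)
req_ids = [r['SpotInstanceRequestId'] for r in resp['SpotInstanceRequests']]

# Poll until every spot request has been fulfilled with an instance.
while True:
    reqs = ec2.describe_spot_instance_requests(
        SpotInstanceRequestIds=req_ids)['SpotInstanceRequests']
    if all(r.get('InstanceId') for r in reqs):
        break
    time.sleep(15)

# Enlist each instance as a manually provisioned Juju machine (the
# instances may need a moment before a public IP is attached).
ids = [r['InstanceId'] for r in reqs]
for reservation in ec2.describe_instances(InstanceIds=ids)['Reservations']:
    for inst in reservation['Instances']:
        subprocess.check_call(
            ['juju', 'add-machine', 'ssh:ubuntu@' + inst['PublicIpAddress']])
```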
It is exactly the information I was looking for. I would keep an eye on the mentioned thread as well as ask this question on the list
[11:31] I'm building use cases for a workshop about open source data analytics tools that people can use for their hobby projects at home, and cost is a very important factor here
[11:31] So Alex_____ why do you need EC2 spots?
[11:31] Sorry, go on
[11:31] sure! so the idea is: with the recent adoption of Juju in the Apache Software Foundation (BigTop especially, Zeppelin, etc.) more and more people will start looking into it as an open source option for their hobby/part-time projects, and those are people who love open source and are cheapskates (in a good way), so this value proposition will be dear to their hearts (I'm one of them :) ).
[11:32] Sounds cool!
[11:32] And then the same people will bring it through the doors of their organizations to daytime jobs later on. So it should help the adoption
[11:38] Having an AWS provisioner that supports something like `add-unit` with spot instances for adding more workers to the instance group would be dope
[11:42] gavriil, the first nic is used by maas to provision the servers (dhcp, pxe etc) and when the servers are installing, traffic like package updates is routed through the maas node; the second nic is not used at all. In fact the second nic could be attached to any network you like, since I was thinking it would be used to route traffic in and out of the vms that you spin up within openstack (they would act as the ext-port https://api.jujucharms.com/charmstore/v5/trusty/neutron-gateway-5/archive/config.yaml)
[11:44] Alex_____: https://bugs.launchpad.net/juju/+bug/945862
[11:44] Bug #945862: Support for AWS "spot" instances
[11:45] I agree 100%, I'll talk to the core team about it the next time we meet
[11:45] Alex_____: it would help us out tremendously if you could add a comment to the bug with some of the things you've outlined here in IRC.
[11:46] jcastro: sure! I always fancied a nice launchpad account but was always too lazy to register :)
[11:47] also point out you have a beard
[11:47] you'll get more kudos
[11:47] Alex_____: the reason I ask is it's one thing if I ask, like when I filed a bug. But it's a totally different priority when someone who uses it in real life +1's a bug
[11:48] jcastro: makes perfect sense, I
[11:53] but yes, I have wanted this feature for a very long time, so I look forward to having evidence that someone would use it
[11:53] for example, in a bundle, it would be nice to do something like:
[11:53] servicename:
[11:54]   allow_spot: true
[11:54]   min_ondemand: 3
[11:54] so if I add units past three, then go spot instances, but I want enough on-demand instances to keep the service up and reliable
[11:54] but other than that, go as cheap as possible
[11:55] and then of course, we could have constraints on cost, just like we do for cpu and memory
[11:55] yup, that sounds like a perfect plan
[11:56] it makes sense
[11:56] especially on services that can fail hard
[12:07] jcastro, magicaltrout: did my best at https://bugs.launchpad.net/juju/+bug/945862
[12:07] Bug #945862: Support for AWS "spot" instances
[12:14] hi everybody. we have openstack with security groups disabled - `enable_security_group = False`. so in this case juju can't create a machine because it requires a security group. do we have some juju parameters to prevent security group creation?
[12:20] another question: do we have the possibility to specify the count of network interfaces?
[12:23] bryan_att, Hi, I've deployed successfully on trusty http://paste.ubuntu.com/16729946/ . Creating and listing a datasource is the full extent of my testing though :-)
[12:50] marcoceppi cory_fu: ping
[12:50] I used your advice from yesterday, nearly flawless victory, so thanks
[12:51] but I have a problem with arguments of the provide function that are optional
[12:51] I don't understand how to pass them in bash
[12:51] essentially, I have 2 sets of arguments that can be used, combined or not
[12:52] both optional
[12:52] relation_call doesn't let me specify the names of the arguments, so I am sort of out of business
[12:52] any thoughts?
[12:54] dosaboy, do you have experience with sphinx docs?
[12:55] cholcombe: not recently
[12:56] dosaboy, ok. i was wondering why my :param list: blah blah isn't parsing properly with sphinx
[12:56] it's so picky about syntax it seems
[13:09] jamespage, you've given yourself a +1 on https://review.openstack.org/#/c/322035/1 , I don't think that's really the done thing, is it?
[13:10] gnuoy, I was using that to make your life easier - those are ones I think are ready to go
[13:10] but I'd not told you that yet :-)
[13:10] gnuoy, https://review.openstack.org/#/q/status:open+topic:charmhelpers-resync
[13:10] ah, ok
[13:11] jamespage, what did you and beisner agree was the minimum for osci to run to approve a charm-helper sync?
[13:12] gnuoy, we've agreed it change by change - but fwiw I think these are OK with a smoke only
[13:12] jamespage, yep, +1
[13:17] gnuoy, https://review.openstack.org/#/c/322035/ is ready as well + 3 more on the general sync list
=== bladernr` is now known as bladernr
[13:33] SaMnCo: I was under the impression you could pass key=val to relation_call but I could be wrong
[13:33] I tried that but it didn't work, I thought it would as well
[13:33] Maybe I should try from scratch
[13:33] let me do that
[13:38] marcoceppi: would something along those lines be better?
[13:38] https://www.irccloud.com/pastebin/NArzBWDD/script
[13:39] SaMnCo: that wouldn't really work, I don't think, due to lack of context
[13:40] we don't seem to be testing charm-tools with python3?
[13:46] gnuoy, https://review.openstack.org/#/q/topic:charmhelpers-resync+status:open more ready to go
[13:53] jamespage, cinder question. It looks like huawei needs an xml configuration file. The HuaweiSubordinateContext that I return to Cinder is just a json blob. Will cinder take care of writing that xml file or do I need to patch that also?
[13:54] cholcombe, the subordinate is responsible for writing anything that is specific to it
[13:54] jamespage, gotcha. ok that's fine
[13:54] cholcombe, so for ceph -> ceph.conf and the client keys
[13:54] cinder-ceph that is
[13:54] jamespage, i'm just going off of your vmware cinder driver code. It looks like it returns a context back to cinder
[13:55] cholcombe, it might pass some data back to cinder, but that's because it needs to be written into cinder.conf
[13:55] sticking to the principle that only a single charm can own a file, it must be done that way
[13:55] ok i think i know what it needs then. Just the basic "use this driver, here's the config file", etc.
[13:55] cholcombe, most likely
[13:59] how do we unit test layered charms that depend on packages that are installed via the wheelhouse?
[14:00] I'm pretty sure you can just pipe in the dependency list to tox...
[14:00] cholcombe, by building the charm and writing amulet tests for it...
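[Editor's note: on cholcombe's sphinx question above - `:param list: blah` is parsed as a parameter literally named "list". Sphinx's info field lists want the type either inline before the name or in a separate :type: field. A small illustration with hypothetical names:]

```python
def add_osds(devices, reformat=False):
    """Add OSD devices to the cluster (hypothetical example).

    :param list devices: block devices to initialise (type given inline)
    :param reformat: zap existing partitions first
    :type reformat: bool
    :returns: None
    """
```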
[14:00] jamespage: issue is with dependencies
[14:01] cholcombe, oh sorry - yes I see - well talk with tinwood and gnuoy - they have this figured out
[14:02] cholcombe, icey: but broadly it involves building a tox virtualenv, installing what you know is needed for unit testing and executing some unit tests...
[14:02] jamespage: it gets more painful, trust me :)
=== scuttle|afk is now known as scuttlemonkey
[14:03] icey, cholcombe: all I'm saying is don't repeat thinking that might have already been done in this space :-)
[14:04] jamespage, yup :)
[14:05] jamespage: a big part of my issues seem to be that I'm migrating our python2 unit tests to python3+layers
[14:07] icey, quite possibly
[14:26] I was about to jokingly ask marcoceppi where you submit papers for the charmer summit
[14:26] and then saw the button on the website.....
[14:26] magicaltrout: haha already a step ahead!
[14:26] :P
[14:27] I'm on a talk submission afternoon
[14:27] magicaltrout: well I may have one more for you
[14:28] hook me up
[14:28] marcoceppi: hey any word from design for summit.j.s?
[14:28] jcastro: not yet
[14:28] I would like to totally announce that badboy
[14:30] gnuoy, three more to go if you have two ticks - https://review.openstack.org/#/q/topic:charmhelpers-resync+status:open
[14:41] there you go, partner summit proposal submitted
[15:03] SaMnCo: I'm actually out today, but since I'm here for a second, I can tell you that reactive uses charmhelpers's cli module, and it looks like any params with default values ought to be treated as options that must be provided as --var_name=value
[15:04] cory_fu: I'll try that
[15:04] thanks, and sorry to pull you out of your vacation
[15:04] SaMnCo: That said, the calling convention for methods via the CLI is necessarily more restricted than in Python, so there may be things that you simply cannot pass in from bash that you could from Python. It would be up to the interface layer author to keep that in mind, I guess
[15:04] No worries. :)
[15:05] Anyway, back to day off. \o
[15:05] jammy git
[15:14] hi. we have openstack with security groups disabled - `enable_security_group = False`. so in this case juju can't create a machine because it requires a security group. do we have some juju parameters to prevent security group creation?
[15:14] another question: do we have the possibility to specify the count of network interfaces?
[15:54] there should be an award for converting old charms to layered :)
[16:12] cory_fu, marcoceppi : the --var_name=value doesn't work
[16:13] cholcombe: absolutely
[16:14] https://www.irccloud.com/pastebin/5h6hqAme/call_api
[16:26] cory_fu marcoceppi - btw charm build -r is like my new favorite thing ever
=== frankban is now known as frankban|afk
[16:38] gnuoy: have some time to talk about how you did that? I'm getting an error: https://www.irccloud.com/pastebin/7GH0InRK/
[16:46] gnuoy: nevermind - it's the version issue again. I used the other command format, trying now.
[16:46] lazyPower: what's the -r do?
[16:46] ah, the reporting
[16:46] marcoceppi it generates a report of what changed (the delta) and runs `charm proof` on the assembled charm
[16:46] we should consider making that the default
[17:05] +1 that sounds good to me
[17:27] https://www.irccloud.com/pastebin/ltEyPcEd/gnuoy%3A%20having%20some%20issues%20with%20the%20install%20per%20the%20current%20repo
[17:28] gnuoy: see the previous post - I put the note in the filename field...
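[Editor's note: a loose illustration of cory_fu's reading above of how charms.reactive exposes methods through charmhelpers' cli module - parameters with defaults surface as optional --flags. The function and argument names are hypothetical, and note that SaMnCo reports above that --var_name=value still failed in his case.]

```python
# Sketch of the described mapping: charmhelpers.cli builds an argparse
# command from the function signature, so 'relation_name' becomes a
# positional CLI argument while 'device' and 'size' become optional
# --device/--size options.
from charmhelpers.cli import cmdline


@cmdline.subcommand()
def provide(relation_name, device=None, size=None):
    """Hypothetical interface method exposed on the command line."""
    return {'relation': relation_name, 'device': device, 'size': size}
```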
having some issues with the install per the current repo
[18:03] jamespage: Thanks for all the help with the RabbitMQ charm :-)
[18:21] big-data: Any current plans for packetbeat?
[19:00] gnuoy: ping
[19:44] bdx yes, and dockerbeat
[19:44] bdx - i've been swamped with this etcd rework this week but i have plans on releasing layers for both packetbeat and dockerbeat in the next 2 weeks and proposing them against the beats-core stack
[19:46] s/stack/bundle
=== scuttlemonkey is now known as scuttle|afk
[21:05] it was better when the charmstore login was broken
[21:05] at least i didn't have to look at my face every time i go there
[21:12] Is there a way to tell juju deploy to only deploy a single machine at a time when deploying a bundle?
[21:28] Running into a bug I think is related to how fast Juju is asking for machines to be deployed in MaaS: https://bugs.launchpad.net/maas/+bug/1586540
[21:28] Bug #1586540: MaaS 2.0 beta 5 fails to assign IP address to nodes when multiple nodes go into deploying at once
[23:19] Nevermind, that wasn't the bug we thought it was. I feel bad.
[23:19] Here's a new bug instead: https://bugs.launchpad.net/maas/+bug/1586555
[23:19] Bug #1586555: MaaS 2.0 BMC information not removed when nodes are removed