=== natefinch-afk is now known as natefinch
[03:34] Can anyone confirm that relation-list no longer includes the departing unit in a -departed hook, and since which version of Juju?
[03:35] cory_fu: (^^ from your issue comment, which might help me in other ways)
[06:47] wallyworld: would it be alright to land https://github.com/juju/bundlechanges/pull/24 ? We've just about finished implementing the changes in the GUI
=== Makyo|away is now known as Makyo
[07:17] Makyo: that would be great
[07:18] wallyworld: ty!
[07:18] Makyo: juju master pulls in a specific rev off that repo so I'll update the deps once that lands
[07:18] wallyworld: ah, thanks, that makes sense
[07:22] wallyworld: landed, ty
[07:22] Makyo: awesome, thank you for doing the GUI changes
=== julenlar is now known as julenl
[11:02] hi everybody,
[11:02] does Windows support subordinate charms now?
[11:51] Really juju newbie question: if I need to make one change to a single config file in an already deployed charm (adding a new parameter to nova.conf), what is the syntax to do that? Or do I just ssh in and change it manually?
[11:58] ChrisHolcombe, https://review.openstack.org/#/c/328374/3
[11:59] some comments; I think we should bump the default as well if our reference deployments need a larger value than 300 seconds...
[12:19] @xilet seems you should make a new version of the charm and upgrade it: juju upgrade-charm
[12:20] xilet, don't ssh and change it manually - the next time a config-changed runs (like on a reboot) your change will be overwritten
[12:20] xilet, what are you trying to do?
[12:20] still working on the iscsi issue, so trying to add in volume_drivers=iscsi=nova.virt.libvirt.volume.LibvirtNetVolumeDriver
[12:20] gennadiy, I can't think why windows would not support subordinate charms
[12:21] it doesn't work for me
[12:21] xilet, you might be able to poke that in using config-flags
[12:21] it's ok for a linux machine
[12:22] juju set nova-compute config-flags="volume_drivers=iscsi=nova.virt.libvirt.volume.LibvirtNetVolumeDriver"
[12:22] but that only writes to the DEFAULT section so it might not do the trick
[12:22] xilet, this is for the userspace iscsi support right?
[12:22] yeah
[12:22] ahh thanks, that is the syntax I was looking for
[12:22] xilet, this might make a nice first contribution for you, to make it an official feature
[12:23] juju set nova-compute userspace-iscsi=true
[12:23] is so much nicer!
[12:23] if I can get this working, I will happily do so
[12:23] xilet, prove the theory and then if you'd like to work on a more formal config option let me know
[12:23] they are normally quite easy to add for this sort of thing
[12:32] good to know; clearing out the old tests, then I will go through and see if it can access the disks from openstack
[12:33] Hello! Can I configure Juju to boot new machines with a keypair via the OpenStack provider? Currently machines added to OpenStack via Juju do not have any keypair.
[12:37] @ionutbalutoiu for a linux machine or windows?
[12:37] for a linux machine it will add the juju key
[12:38] Neither one gets a nova keypair. But for Linux, I think the juju key is authorized via metadata.
[12:38] But for Windows, you don't have any option.
[12:38] juju uses keys from - ~/.local/share/juju/ssh/
[12:38] for windows - i have asked this question yesterday
[12:39] http://ask.cloudbase.it/question/1239/jujuopenstack-how-to-get-windows-machine-password/
[12:40] also i have asked this question on the juju mailing list.
[12:40] i have got this answer
[12:40] You should be able to do something like: juju run --machine "net user JujuAdministrator "
[12:40] but i haven't tested it
[12:40] now we use a workaround - our charm adds a local admin
[12:40] Yes. This is only possible with Juju 2.0
[12:41] for stable version 1.25.5 you cannot set/retrieve any password for a Windows machine unless you have access to the console.
[12:41] Juju run on Windows doesn't work on versions of Juju < 2.0
[12:42] After you boot a Windows machine with Nova, you can retrieve the Admin password via "nova get-password "
[12:42] but that's possible only if you boot the machine with a keypair. It would be nice if Juju had a config option, so that every machine it boots uses that configured keypair.
[12:44] yes, i know.
[12:44] so our workaround is to create our own user on the machine from the charm
[12:45] it's very easy - https://github.com/cloudbase/juju-powershell-modules
[12:45] there is a New-LocalAdmin function there
[12:45] I know. I'm working at Cloudbase. :)
[12:47] Still, that's sort of a hack. I still think that adding a config option for a keypair in the case of the OpenStack provider would make things cleaner.
[12:54] heyyyy powershell modules
[12:54] right on, i forgot those were published
[12:57] kjackal: im interested in the 'slow sync' messages in the hbase log during the smoke-test - have you seen these?
[13:06] admcleod: nope, I suspect what they mean but have never seen them
[13:06] let me look in the hbase docs/list
[13:09] admcleod: something like this: https://issues.apache.org/jira/browse/HBASE-11240
[13:09] https://github.com/juju-solutions/interface-etcd-proxy/pull/2 - could use a quick CR on this if anyone has the bandwidth
[13:09] i'm down a team-mate for spot reviews
[13:11] kjackal: right. >> "abnormal datanode" <<
[13:13] yes, basically, HBase writes everything in a write-ahead log and then keeps it in memory. There are phases when everything that is in memory has to be written down to secondary storage (compaction phases). I guess this message means that the secondary storage takes too long
[13:13] admcleod: ^
[13:19] kjackal: right, which means...
[13:41] hi lazyPower, do you have any windows subordinate charms in the juju store or on github?
[13:47] gennadiy - I don't believe we do, i think you're pioneering
[13:47] gennadiy - the only windows charms I have interfaced with were principal charms provided by cloudbase
[13:54] seems i was wrong about the windows subordinate issue. i have just deployed an empty subordinate charm - everything is ok
[14:00] gennadiy - when you say "I have deployed an empty subordinate charm" - do you mean on a windows series?
[14:01] yes, i used this one - https://github.com/cloudbase/windows-charms-boilerplate
[14:01] i'm not surprised that would work, if it has no hooks. The agent would skip all hooks.
[14:01] and it would appear to be fine, when it really just no-op'd
[14:02] gennadiy - have you logged a bug with the charm that appears to be broken? I don't have the time right now to dig into it but I'm interested in following the conversation
[14:03] it's my charm. i'm creating 2 charms for windows. one is principal, another is subordinate
[14:03] ah, well that does present certain... hurdles
[14:03] stub: I confirmed it yesterday in 2.0-beta8
[14:05] admcleod, kjackal: I don't know if you noticed that I moved the card, but the plugin ready issue was, I think, best fixed in charms.reactive, if you want to take a look: https://github.com/juju-solutions/charms.reactive/pull/71
[14:07] cory_fu: ah yeah ok
[14:08] Technically, the issue was a subtlety of Python, where `None and <x>` returns None and not False like you would expect. That, combined with using None as a default value to mean something significant, made for unexpected behavior
[14:13] cory_fu: Ta. I'll assume it is a 2.0 feature for now.
[14:13] stub: "Feature"
[14:14] Seems like a bug to me, but at least I was able to work around it
[14:14] Though I suppose it depends on how strictly you take the fact that the hook is -departed and not -departing
[14:14] cory_fu: Oh, I was thinking it might be part of a fix. Before, in a departed hook neither end had any idea which unit was departing.
[14:15] That seems like a round-about way of communicating it.
=== fwereade_ is now known as fwereade
[14:16] I mean, not wrong, per se, but then you still have to jump through the hoop of comparing JUJU_REMOTE_UNIT to the relation-list to figure out if it's you or the other unit
[14:16] cory_fu: So I guess I shouldn't rely on the behavior and should wait for a real fix to https://bugs.launchpad.net/juju-core/+bug/1417874
[14:16] Bug #1417874: Impossible to cleanly remove a unit from a relation
[14:17] cory_fu - the one thing i was curious about in that pr is that we're catching CalledProcessError
[14:17] does relation_get ever return > 0 except in that scenario?
[14:18] seems like we could be hiding other failures and skipping them due to that one little stanza. But it seems nitty and i don't have a solid use case for where that's bad
[14:20] lazyPower: Well, no charm that I've ever seen captures CalledProcessError around the call to relation-get and I've never seen it fail in another circumstance. But you're right that it's perhaps a bit heavy-handed. Ideally, hookenv.relation_get would capture the stderr output on failure and we could inspect that, but that would require changes in charmhelpers.
[14:21] I'm not about that life at the moment
[14:21] carry on as you were sir
[14:21] lazyPower: We could also assume that the units list cleanup is functional and remove the check entirely. I'm not against that; I added the except before I was sure if I could reasonably do the cleanup
[14:22] That's the one niggly thing, and it's really a nit
[14:22] without empirical evidence that "hey this fails under x condition"
[14:24] I did have the same concern, so I'm not against changing it.
[14:28] :O Does this mean you're rubbing off on me cory_fu?
[14:28] :)
[14:29] cory_fu, lazyPower, marcoceppi: So I knocked up https://github.com/stub42/ReactiveCharmingWorkflow a while ago, to codify and improve the process I'd embedded in the PostgreSQL charm Makefile. And per the 'Future' section in that, I'm thinking of making this process simpler with some git plugins.
[14:29] export JUJU_REPOSITORY=$CHARM_ROOT/repo
[14:29] i want that to go away
[14:30] it's a legacy concept and we should kill it with fire
[14:30] cory_fu, lazyPower, marcoceppi: Does this seem like I'm going in the right direction, or do people have reasons to prefer multiple repos for the source layer and built charm?
[14:30] lazyPower: Yes, but for now I need both Juju 1 and Juju 2. Plugins will be targeted for Juju 2 though.
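A minimal sketch of the Python subtlety cory_fu describes at [14:08] — the `is_ready`/`flag` names here are invented for illustration, not charms.reactive code — showing how `None and <x>` short-circuits to None rather than False once None is also being used as a meaningful default:

```python
# Hypothetical illustration of the pitfall from [14:08]; names are made up.

def is_ready(flag=None):
    # Intent: flag=None means "not explicitly set"; otherwise return whether
    # the flag is truthy AND the subsystem has been initialized.
    initialized = True
    return flag and initialized  # BUG: if flag is None, this returns None, not False


print(is_ready(True))        # True
print(is_ready(False))       # False
print(is_ready())            # None -- surprises callers expecting a bool
print(is_ready() is False)   # False, because None is not False
```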
[14:31] from what im reading so far you're on the same path i've been using
[14:31] --no-local-layers before building for a publish and testing that artifact pre-push, this is all like the workflow i wanted for CI too
[14:32] i like your push/publish hack too
[14:32] stub: I'm traveling, but will review in a bit and leave comments, thanks for typing this up!
[14:32] I've opened issues on github to hopefully avoid that hack ;)
[14:33] yeah LGTM stub - there are a few things in here i'll comment on later but the initial scan was good
[14:33] there are some conventions in here we could probably put into layer-basic as a Makefile and make this useful to more people out of the box
[14:33] I'm also considering whether I should generate the branch name to contain the built version of a layer (so the master branch gets built to master-built)
[14:33] but that's up to cory_fu
[14:33] and other maintainers
[14:34] Well, I think the git bits are ugly enough to put into a git plugin, which I could package or snap
[14:34] snap ftw
[14:35] git plugins should at least make things readable and avoid the ugly underlying details.
[14:36] fair, i use a shell hack i got from gary bernhardt's dotfiles for a git pretty tree. it's invaluable
[14:36] (git clones into temp directories etc. to ensure clean publication, committing uncommitted changes to the built branch before building)
[15:02] cory_fu: teach me some python.. are double-underscored vars the convention for defining globals? eg https://github.com/juju-solutions/charms.reactive/pull/71/files
[15:25] It should really be "TOGGLE = object()". You only want magic strings if you need a readable string representation
[15:26] kwmonroe: What stub just said. I went with strings for debugging, but object() would have worked just as well and perhaps been less misleading
[15:28] kwmonroe: Also, leading / trailing underscores are a convention in Python for representing internal things. Double underscores are usually reserved for Python-internal things, so I probably should have used single underscores, but it's just a string value and not an identifier, so it's slightly less of an issue
[15:28] stub, kwmonroe: Feel free to nack the PR and I'll update it
[15:29] I'm not going to be that pedantic
[15:32] thx stub and cory_fu.. i didn't read that PR right from the get-go.. i think i was thinking the *var* was __TOGGLE__ and didn't pick up that was just a string.. anyway, __SORRY__.
[15:34] kwmonroe: Yeah, I think the way I wrote that encourages that confusion. I'm up for changing it to object() to avoid that
[15:48] kwmonroe, stub: Updated. Thanks
[17:44] hey guys
[17:44] having an issue deploying openstack using juju charms and maas
[17:46] i am seeing the same IP assigned to juju-br0 and the interface that it is connected to
[17:46] in /etc/network/interfaces the interface, e.g. eth1, is set as inet manual
[17:48] and in the same file /etc/network/interfaces.d/*.cfg is sourced; a file eth1.cfg exists there in which eth1 is set as inet dhcp
[17:48] saudk: what version of maas?
[17:49] 1.9.3+bzr4577-0ubuntu1~trusty1
[18:18] hey cory_fu, i'm reviewing https://github.com/juju-solutions/charms.reactive/pull/72, but don't understand why -departed has to be handled in __init__.py, or for that matter why there's no handling of -joined that does rel.conversation().join().
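A short sketch of the sentinel idiom stub and cory_fu discuss at [15:25]-[15:28]; `_TOGGLE` and `set_flag` are hypothetical names, not the real charms.reactive identifiers. A unique `object()` can never collide with a value a caller might legitimately pass, which avoids the None-as-default trap illustrated earlier:

```python
# Sentinel-object sketch (illustrative names only, not the actual
# charms.reactive API).

_TOGGLE = object()  # unique marker; no caller-supplied value can ever be it


def set_flag(value=_TOGGLE):
    if value is _TOGGLE:
        # Caller passed nothing at all: toggle the current state.
        return 'toggled'
    # Caller passed an explicit value -- including None, False, or '' --
    # so honor it rather than treating it as "use the default".
    return 'set to {!r}'.format(value)


print(set_flag())       # 'toggled'
print(set_flag(None))   # "set to None" -- None is now a legitimate value
print(set_flag(False))  # "set to False"
```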
[18:19] kwmonroe: join happens implicitly in RelationBase.__init__: https://github.com/juju-solutions/charms.reactive/blob/master/charms/reactive/relations.py#L156
[18:20] The reason we need to call .depart() is that we're now depending on the internal list of units to be accurate, since Juju no longer includes the departing unit in relation-list
[18:20] yup, got it
[18:21] It's actually better for that list to be accurate, anyway, for charm use
[18:21] kwmonroe: Give me a second to push up a minor edit to that. Going to add logging to that try / except
[18:21] ack
[18:26] kwmonroe: Updated
[18:27] kwmonroe: You'll notice that it was always my intention to have depart() called implicitly
[18:29] yes i will notice that cory_fu, but no one else will since the todo is going away.
[18:29] :)
[18:29] frankly, that change is probably the only thing i'm qualified to review here ;)
[18:31] kwmonroe - i dub you the pr poobah
[18:31] you're now qualified to review everything by virtue of being the poobah
[18:31] we're doomed
[18:32] i remember back in the day (like monday), when the curtain looked so pretty. why oh why did i stick my head behind there?
[18:33] :) well your self-confidence rating certainly shines through
[18:48] i can be poobah now lazyPower! i totally get it. i was missing the link between get_remote and relation_get -- it's related_units (and relation-list therein) causing the problem.
[18:51] yep, that niggly little workflow we used to use back in the day
[19:14] cory_fu - correct me if i'm wrong, but there's a key in layer.yaml i can use to control omission of files right?
[19:14] eg: ignores: ['known-weird-module-that-doesnt-play-well-with-other-modules.py']
[19:15] Yes, ignores, but be aware of this issue: https://github.com/juju/charm-tools/issues/220
[19:16] lazyPower: The TL;DR of that issue is that ignores is currently not scoped to an individual layer
[19:16] It will prevent a file with that name from being included in the final charm, period.
[19:17] Also, that behavior is likely to change, and the calling convention of ignores might change, too
[19:19] ok
[19:20] i need to exclude a test in the final product that's fine for the layer (for now)
[19:20] so this is a good enough impl, i'm opting for it
[19:20] except that it doesn't appear to work with pathed files
[19:23] http://paste.ubuntu.com/17376175/
[20:34] hey what's up guys? Is there a way to update a resource without pushing the charm too?
[20:41] bdx - see: charm attach --help
[20:46] lazyPower: http://paste.ubuntu.com/17379141/
[20:46] :-(
[20:46] bdx are you trying to push this to the store or to your controller?
[20:47] the 502 bad gateway is troubling... but first things first :)
[20:50] lazyPower: to the store
[20:54] bdx ok that's odd.
[20:54] jrwren - Charm store status looks good to you yeah?
[20:54] * lazyPower remembers he needs to finish a reply to the etcd email now
=== firl_ is now known as firl
[21:27] i love a prime, that's primest
[21:40] lazyPower: The fix for the -relation-departed issue turned out to be a bit of a lame duck, and I had to go back and take a different approach
[21:40] https://github.com/juju-solutions/charms.reactive/pull/75/files if you want to review
[21:41] kwmonroe spotted my mistake and is also reviewing
[21:43] you made a mistake cory_fu ?!
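A rough, non-authoritative sketch of the unit bookkeeping behind the PR #75 review above, and cory_fu's earlier point at [18:20] that calling .depart() keeps the internal unit list accurate now that relation-list omits the departing unit. The class and method names are simplified stand-ins, not the actual charms.reactive implementation:

```python
# Illustration only; this is NOT the real charms.reactive code.
import os


class Conversation:
    """Tracks which remote units are still participating in a relation."""

    def __init__(self):
        self.units = set()

    def join(self, unit):
        # Called implicitly when a remote unit shows up (-joined / -changed).
        self.units.add(unit)

    def depart(self, unit):
        # Called implicitly on -departed; keeps self.units accurate even when
        # `relation-list` no longer reports the departing unit.
        self.units.discard(unit)

    def get_remote(self, fetch, key):
        # Only query units we still consider part of the conversation,
        # mirroring the "if unit not in self.units: continue" guard in PR #75.
        return {unit: fetch(unit, key) for unit in self.units}


conv = Conversation()
conv.join('db/0')
conv.join('db/1')
# In a real -departed hook, the departing unit comes from the environment:
conv.depart(os.environ.get('JUJU_REMOTE_UNIT', 'db/1'))
print(conv.units)  # {'db/0'} (assuming JUJU_REMOTE_UNIT is unset here)
```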
[21:43] sad times
[21:43] As far as making mistakes goes, I'm on fire today
[21:43] * magicaltrout spots smoke on the horizon
[21:44] https://i.imgur.com/F0NtTsP.gif
[21:45] it's like me on a daily basis
[21:45] of course as IT professionals, we never openly admit to the weird stuff that happens daily! ;)
[21:59] cory_fu: the "if unit not in self.units, continue" is to keep us from calling relation_get on globally scoped relation ids?
[21:59] (referring to https://github.com/juju-solutions/charms.reactive/pull/75/files#diff-3c5467f4229b8dd06bdf1c43813c03d8R623)
[22:00] kwmonroe: Not just GLOBAL. Not every unit on a given relation will be in the state that the conversation is representing. Some might still be waiting on the unit to finish setting up and providing its relation data
[22:00] well, i guess not just a global rel id, but more .... yeah, what you said.
[22:03] I'm not sure I trust Travis at this point. :p
[22:05] lol cory_fu
[22:05] don't be mad
[22:05] https://github.com/johnsca/charms.reactive/blob/812a06f657a0b6fb3f1488f91a1a0a0aa13fe761/charms/reactive/relations.py#L19
[22:06] Oh snap. Linty linty
[22:06] that's your old friend F401
[22:06] Stupid Travis
[22:06] truth
[22:06] May the force-push be with me
[22:08] looks good
[22:10] cory_fu: cool for 0.4.4?
[22:10] Ok, are we finally ready for a release, then? I need to go on my damned vacation. :)
[22:10] ha
[22:11] kwmonroe: You want me to release it, or you want to try your hand now that you're a maintainer on pypi?
[22:11] the latter
[22:11] thedac, interesting MR here
[22:12] ugh, my first objective as maintainer will be to remove the pesky 'make test' prereq of 'make release'. this is taking too long.
[22:12] thedac, there's actually a 3rd option but it's a PITA to set up
[22:13] Only takes 6 seconds (times 2, I guess) for me. How long does it take for you, kwmonroe?
[22:13] like at least 10 seconds
[22:13] ha
[22:13] cholcombe: hey, which MR are we talking about?
[22:13] anyway, she's away cory_fu. go forth and vacation.
[22:13] thedac, the dns HA for rgw
[22:13] got it
[22:13] thedac, so the other possibility here is we could use bgp
[22:14] cholcombe: just keep in mind this is one of many https://review.openstack.org/#/q/topic:dnsha
[22:14] kwmonroe: Sweet, thanks. Be sure to let admcleod and kjackal know they can rebuild hbase and test it again
[22:14] i see
[22:14] haha guyzzzz
[22:14] Actually, we'll need to rebuild hadoop-plugin to fix it
[22:14] what lazyPower? you see something?
[22:14] lazyPower: You talking to us?
[22:14] if you see something, you have to say something.
[22:14] yes, one could use BGP, but these are charm-handled HA options
[22:14] thedac, interesting. so the charm handles the vip?
[22:15] lazyPower: Don't listen to kwmonroe. You had your chance to review and you blew it. ;)
[22:15] Yes, you have to add it to the config but yes hacluster/corosync handles that
[22:15] cory_fu kwmonroe - nahhh i'm just looking over this thread of review comments ;)
[22:15] Ah, lol
[22:15] solid comedic gold considering i know how today has been :P
[22:16] Alright, I'm out. Everyone have a good weekend!
[22:16] cheers cory_fu
[22:16] adios
[22:16] thedac, i've had a lot of good experience with floating VIPs using ctdb. It's very solid
[22:16] (it's only Wed.)
[22:16] lol
[22:16] thedac, corosync i have no experience with so i'll do the best i can reviewing
[22:17] youth of today
[22:17] always on holiday
[22:17] lazyPower: I'm out for the rest of the week
[22:17] oo snap
[22:17] pretty sure it's thanksgiving this week
[22:17] (jackie face)
[22:18] cholcombe: keep in mind the corosync VIP solution was already there (it is just indented). I added the DNS bit which is a call out to charmhelpers which has already been reviewed. You can also just do a +1 and wait for a second opinion.
[22:18] thedac, cool
[22:18] > You can also just do a +1 and wait for a second opinion.
[22:18] pretty much exactly what i do
[22:18] cholcombe - you're getting solid guidance here. i approve
[22:19] lazyPower, :)
[22:19] :)
[22:22] thedac, looks reasonable. I just have a question on the hooks.py code
[22:22] shoot?
[22:24] thedac, i'm just wondering, if the iface turns up as None on the vip, whether we should error or log there. The code currently skips that case
[22:24] thedac, I know it's not your code
[22:24] looking
[22:29] lazyPower: the only thing holding kubes-core from passing cwr is a "charm push . cs:~containers/bundle/kubernetes-core" from a recent https://github.com/juju-solutions/bundle-kubernetes-core dir. feeling frisky? (i am, but i'm not in ~containers)
[22:29] so, good point. What I would like to eventually do is move the VIP code to charmhelpers just like update_dns_ha_resource_params. Then we could fix it in one place. Right now I am trying to change as little as possible so that only the additional feature is added, in case it needs to be reverted for any reason.
[22:30] Was that weaselly enough :)
[22:34] thedac, fair enough :)
[22:46] kwmonroe - i can't fix it like that
[22:46] kwmonroe - not without landing the pile of CRs i linked elsewhere
[22:46] i'd love to publish though :)
[22:48] ah, ack lazyPower. i thought it was just a case of the bundle not using the latest rev of the kubes charm.
[22:56] that indeed is the case, but it's more involved :)
[23:09] :P
[23:11] primey wimey spacey wacey amirite?
=== jcsackett_ is now known as jcsackett
=== markthomas_ is now known as markthomas
=== StoneTable is now known as aisrael
=== thomi_ is now known as thomi
=== X-Istence is now known as x58
[23:18] bdx - i circled back and it appears to work now? http://paste.ubuntu.com/17383707/
[23:29] lazyPower: did it not work for you earlier either?
[23:30] bdx - i didn't try at that time, i was judging based on the output. can you give it another go?
[23:30] it may have been transient
[23:30] yea, omp
[23:32] lazyPower: yeah, it's working now
[23:32] lazyPower: trickery
[23:32] ok cool. sorry that happened :( i got no response when i pinged so i circled back
[23:32] hey thanks!
[23:32] np mate
[23:32] glad we got it sorted
=== verterok` is now known as verterok