=== scuttlemonkey is now known as scuttle|afk
=== urulama_eod is now known as urulama
=== TheRealMue is now known as TheMue
[07:40] can juju work with mixed environments? Multiple networks, different hypervisors etc., or would I have to show it one API? (for example, set up OpenStack under Juju)
[07:43] and another question: what's the story for doing orchestrated, rolling upgrades with Juju? At the moment I have an Ansible setup that will take a batch of instances, remove them from load balancing and silence monitoring, upgrade each of them, wait for OK from monitoring, and only then put them back into the LB. Can I do something like that with Juju?
[08:15] jamespage, a couple of charmsync'y reviews if you have a moment:
[08:15] https://code.launchpad.net/~gnuoy/charms/precise/rabbitmq-server/sync-charmhelpers/+merge/239343
[08:15] https://code.launchpad.net/~gnuoy/charms/trusty/mysql/fix-source/+merge/239344
[08:16] gnuoy, +1 to both - marked as such
[08:16] gnuoy, how are we across the OS charms with regards to release to stable today?
[08:17] gnuoy, I had a walk through some proposals last night
[08:20] jamespage, ta
[08:43] jamespage, I don't know of any that need to go in before the big switch.
[08:55] gnuoy, https://docs.google.com/a/canonical.com/spreadsheets/d/1jMjgmlH0gcKRecsLVIZTK47X8Su8ASR4loVDmmCROOw/edit#gid=0
[08:58] jamespage, I'm not sure what 'done' indicates
[08:58] gnuoy, MPs reviewed
[08:58] anything merged if required
[09:01] jamespage, is the initial in the done column indicating who has reviewed the list or who has done the individual MP reviews?
[09:01] gnuoy, it's who has reviewed and said nothing needs to be landed
[09:07] gnuoy, https://code.launchpad.net/~gnuoy/charms/trusty/nova-cloud-controller/next-multi-console-fix/+merge/233612
[09:18] jamespage, I don't get your comment there. It just runs on the leader
[09:19] gnuoy, your proposed change relies on hook execution to move console-auth around
[09:19] if the leader dies, nothing else will take over, right?
[09:20] true
[09:26] hmm, is askubuntu.com a better place for questions about overall Juju usage (orchestration, assumptions about environment etc.)?
=== beisner-afk is now known as beisner
[13:00] Hey, hoping for a bit of help on an openstack install…
[13:00] Anyone here able to provide help with that?
[13:07] Hi all, wondering if anyone knows how to make a dedicated network for nova cinder iscsi traffic on juju
[13:19] at what times is this channel more lively?
[13:20] seems busier to me in west coast afternoon
[13:21] and what timezone would that be? GMT+7?
[13:22] gmt-8
[13:23] thanks, I'll try nagging people then :-)
[13:23] :)
[13:24] fwiw; I think folks from UTC-5 ish monitor this channel too
[13:25] * thedellster slaps head should have asked ktosiek's question
[13:25] thedellster: I knew how to answer his :P
[13:26] ;) will revert back at a more friendly west coast hour.
[13:28] tell me about it, this east-coast stuff is too early for me
[13:29] I’ve always thought that I should live on the east coast and say I lived on the west coast. That way I could start working at 12pm and tell everyone it’s 9am.
[13:29] hahaha
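Juju (as of this log) has no built-in batched rolling-upgrade workflow, so the usual answer to the 07:43 question is a script driving the juju CLI — ktosiek reaches the same conclusion himself at 17:43 below. A hypothetical sketch follows: the service names, monitoring check, and batching scheme are all invented, and since relations in Juju 1.x are service-level, the app has to be deployed as several services to drain it from the load balancer in batches.

```python
# Hypothetical orchestration of the 07:43 rolling-upgrade flow by
# shelling out to the juju 1.x CLI. Service names (app-a, app-b,
# haproxy, nagios) and the health check are made up for illustration.
import subprocess
import sys
import time

def juju(*args):
    """Run a juju CLI command, raising on non-zero exit."""
    subprocess.check_call(('juju',) + args)

def monitoring_green(service):
    """Placeholder: poll your external monitoring for this service."""
    time.sleep(30)
    return True

for service in ('app-a', 'app-b'):          # one batch per service
    juju('remove-relation', service, 'haproxy')   # drain from the LB
    juju('remove-relation', service, 'nagios')    # silence monitoring
    juju('upgrade-charm', service)                # roll out the new version
    juju('add-relation', service, 'nagios')       # re-enable monitoring
    if not monitoring_green(service):
        sys.exit('%s unhealthy after upgrade, stopping' % service)
    juju('add-relation', service, 'haproxy')      # back into rotation
```

Because the script exits on the first unhealthy batch, the process stops as early as possible, which was the other requirement raised at 17:41.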
[13:50] coreycb, I'd like to take https://code.launchpad.net/~corey.bryant/charms/trusty/nova-cloud-controller/fix-step/+merge/238476 for release
[13:50] could you update in line with my comments pls
[13:51] jamespage, yes
[13:56] jamespage, so, my main goal of adding that was so that I could test upgrades with ppa:ubuntu-cloud-archive/juno-staging prior to it landing in the UCA
[13:57] jamespage, it should probably throw an exception, but then it would have to get hacked to let the code continue each release for early migration testing.. not a big deal I guess
[13:58] coreycb, my intent is that we have update populated by b1 every cycle for the CA
[13:58] jamespage, I like that. alright, I'll fix this up.
[14:03] tvansteenburgh, have you seen this before from charmhelpers? https://bugs.launchpad.net/charm-helpers/+bug/1384723
[14:03] Bug #1384723: charmhelpers.fetch.SourceConfigError: Unknown source: u'None'
[14:04] mattyw: introduced by a commit on 8/21
[14:05] unknown sources used to just no-op silently, now they raise. jamespage fixed a particular case ('distro') on Monday, but i think we need to restore the original behaviour. too much stuff breaking on this now
[14:06] tvansteenburgh, +1
[14:06] tvansteenburgh, gnuoy fixed it actually
[14:07] stub: are you interested in looking into this? iirc you're using the SourceConfigError?
[14:11] gnuoy, 2014-10-23 14:10:31 INFO unit.nova-cloud-controller/0.shared-db-relation-changed context.go:473 oslo.config.cfg.ConfigFilesNotFoundError: Failed to read some config files: /etc/neutron/neutron.conf
[14:11] from nova-cc
[14:11] gnuoy, neutron.conf_unused
[14:11] explosion
[14:12] ?????
[14:13] gnuoy, the nova-cc charm always does the neutron migration right?
[14:13] argh
[14:13] I see
[14:13] but if the neutron-api charm is related, it disables its neutron.conf and stops generating it
[14:13] yep
[14:13] gnuoy, hence the explosion
[14:15] mattyw: it appears to be an easy fix since SourceConfigError is only caught by tests, i'll make time for it today if you or stub can't
[14:18] tvansteenburgh, did you have in mind anything more complicated than: if source is None or source == "None": ... ?
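The fix mattyw proposes here — which tvansteenburgh confirms just below ("replace the `raise SourceConfigError` with `pass`, so that anything undefined is a no-op") — amounts to something like this minimal illustration. It is not the actual charmhelpers.fetch code; the handler branches and `_add_ppa` helper are invented for the sketch.

```python
# Minimal illustration of the no-op behaviour being agreed on: an unset
# source (u'None') or any unrecognised source is silently ignored
# instead of raising, restoring the pre-8/21 behaviour.
class SourceConfigError(Exception):
    """Kept for callers and tests that still import it."""

def add_source(source, key=None):
    if source is None or source == 'None':
        return  # unset config option: do nothing
    if source.startswith('ppa:'):
        _add_ppa(source)
    elif source == 'distro':
        pass  # stock archive, nothing to add
    else:
        # previously: raise SourceConfigError('Unknown source: %s' % source)
        pass  # anything undefined is a no-op

def _add_ppa(source):
    """Hypothetical stand-in for the real apt-add-repository call."""
    print('would add %s' % source)
```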
[14:18] jamespage, I could keep generating the file but remove the neutron-server service from the resource map
[14:18] mattyw: replace the `raise SourceConfigError` with `pass`
[14:19] gnuoy, yes
[14:19] gnuoy, and the plugin as well
[14:19] mattyw: so that anything undefined is a no-op
[14:19] gnuoy, but that feels ugly
[14:19] gnuoy, also we have a divergent charm issue in mysql - https://code.launchpad.net/~charmers/charms/trusty/mysql/trunk
[14:19] jamespage, it is fugly but I don't see another way
[14:19] tvansteenburgh, if you're happy with that I'll submit something now for it
[14:20] mattyw: it's not awesome but that's the way it was before 8/21, so +1
[14:21] gnuoy, gah - the precise charm does not have the allowed hosts fix either
[14:21] gnuoy, you focus on nova-cc
[14:21] ack
[14:26] jamespage, updated https://code.launchpad.net/~corey.bryant/charms/trusty/nova-cloud-controller/fix-step/+merge/238476
[14:28] tvansteenburgh, https://code.launchpad.net/~mattyw/charm-helpers/unknown-source-noop/+merge/239374
[14:28] mattyw: grep for SourceConfigError, I believe there are a few tests that will need adjusting
[14:29] tvansteenburgh, ah dammit - I thought I'd added those
[14:30] coreycb, the intent of that change is to enforce upgrade paths via the CA only, right?
[14:30] jamespage, correct
[14:30] jamespage, which, it's the only charm that has that afaik
[14:32] jamespage, it started as a workaround for upgrading to the juno staging ppa because the open() in step_upgrade was exploding
[14:35] tvansteenburgh, sorry about that, forgot to commit the test again - I'll get this right one of these days https://code.launchpad.net/~mattyw/charm-helpers/unknown-source-noop/+merge/239374
[14:35] jamespage, eh, not good. upgrading nova-cc to openstack-origin=cloud:trusty-juno/proposed also blows up on open(/etc/apt/sources.list.d/cloud-archive.list) -- the file doesn't exist
[14:36] coreycb, so this is a first upgrade from trusty to juno CA right?
[14:36] jamespage, yeah, I'd only tested using the ppa before
[14:36] mattyw: cool, thanks! will review shortly
[14:37] tvansteenburgh, no hurry, and feel free to throw it away and fix it another way, just thought I'd do something quick while I was in that part of the code
[14:43] jamespage, lp:~gnuoy/charms/trusty/nova-cloud-controller/fix-db-migrations should fix it, testing now
=== jaywink_ is now known as jaywink
[14:50] tvansteenburgh, I think I found a bug in charm tools
[14:50] W: README.md includes line 6 of boilerplate README.ex
[14:50] W: README.md includes line 8 of boilerplate README.ex
[14:51] when I run it on my newly made README.md
[14:51] but lines 6 and 8 are newlines.
[14:51] jcastro: it's just a confusing message
[14:51] it's matching lines 6 and 8 of the boilerplate of matchable lines
[14:51] so not newlines and not headers
[14:51] any line over X characters
[14:52] oh, do you think it's the headers?
[14:52] like #Usage and so on?
[14:52] jcastro: more than likely. that happens to me constantly.
[14:52] I didn't realize this was still happening to people
[14:52] Does anyone else find the screen keybindings for tmux in `juju debug-hooks` annoying? If you prefer the default tmux keybindings, 'juju ssh $unit touch .tmux.conf && juju debug-hooks $unit' will make it use the default key bindings.
[14:52] we should fix the consistent matches, or at the very least the messaging
[14:53] (You can also upload a custom .tmux.conf file that way, using juju scp.)
[14:53] I would just ignore anything in # and ##
[14:53] cory_fu: good fodder for a quick post on the solutions blog.
[14:53] those are supposed to be templated titles anyway
[14:53] Hey, good idea. Thanks
[14:54] jcastro: yeah, I'll patch that with everything else tvansteenburgh has unless he beats me to it
[14:55] ok so I'll push it anyway?
[14:55] marcoceppi: it's all you
[14:56] we'll roll a release after this charmschool thing
[14:59] marcoceppi, are you planning on doing your shift today or bumping it due to other stuff?
[14:59] jcastro: I'm probably going to bump to later today
[14:59] because if you are, the top three new charms on the top of the queue have good sign-off from corey, I think you can just do a final check and promulgate
[14:59] jcastro: sweet, thanks for the heads up
[14:59] They Look Easy(tm)
[14:59] lol, probably hard.
=== roadmr is now known as roadmr_afk
=== roadmr_afk is now known as roadmr
[15:09] jamespage, I pushed a fix to https://code.launchpad.net/~corey.bryant/charms/trusty/nova-cloud-controller/fix-step/+merge/238476 that fixes the open() issue
[15:10] jamespage, I'll run a full upgrade test now to juju set nova-cloud-controller openstack-origin=cloud:trusty-juno/proposed
[15:10] jamespage, https://code.launchpad.net/~gnuoy/charms/trusty/nova-cloud-controller/fix-db-migrations/+merge/239389
[15:10] jamespage, I mean, I'll run a full upgrade test now to cloud:trusty-juno/proposed for all the charms
[15:13] mbruzek, fyi, hpcc has a green box in the review queue, I didn't notice that before
[15:19] marcoceppi, got a sec or are you charm schooling?
[15:20] jcastro: schooling
[15:23] cory_fu, ping?
[15:23] mattyw: Hey, how's it going?
[15:23] cory_fu, not bad thanks, hope you're well?
[15:24] cory_fu, thanks for your review of my mongodb auth branch. I started looking at it but I had a question that I put on the review: https://code.launchpad.net/~mattyw/charms/precise/mongodb/auth_experiment/+merge/162887
[15:24] tvansteenburgh, ok I've already fixed like 5 or 6, incoming into the queue
[15:24] tvansteenburgh, this is much better than before. <3
[15:26] mattyw: I think your security concerns are valid
[15:26] jcastro: \o/
[15:27] mattyw: the only issue with this model is that it has the potential to cause a temporary intermittent outage should the password change while the webapp is running and delivering.
[15:27] mattyw: I agree, it's less secure than not storing it, but the password is also available on the relation, so it's possible to retrieve it that way (though ever so slightly more difficult). whit and I were just discussing this morning the need for a better credentials management system for charms
[15:27] mattyw: suggest to store a hash of the password to disk and validate against that hash. eg: if sha1sum(password) != cached_sha1_of_password
[15:28] cory_fu: please update the docs with that information: https://juju.ubuntu.com/docs/authors-hook-debug.html https://github.com/juju/docs/blob/master/src/en/authors-hook-debug.md
[15:28] mattyw: I'm not averse to recreating the password, and I gave the merge my +1. You could also just touch a flag file to indicate that you'd already set the password for that relation.
[15:29] jrwren: did you ever get an answer re: your merge not showing up in the queue?
[15:29] my merges are also not showing up in the queue
[15:29] lazyPower: You can't retrieve the password to test against the hash from the providing side of the relation
[15:30] lazyPower: nope.
[15:30] cory_fu: ah good point.
[15:30] I think changing it each time is fine. It was more of a passing thought, really.
[15:30] cory_fu: my only concern is that it will cause an outage
[15:30] I just thought it might be surprising for the admin if it changed out from under them.
[15:30] if your app cannot connect to mongo, your users will see a 500 error, or your daemon may panic
[15:30] lazyPower, cory_fu I don't think there will be an outage if the relation changes at the moment, it just adds a new user, the old one is still valid
[15:31] ahhh, ok.
[15:31] True
[15:31] mattyw: good insight. ty for clarification
[15:31] tbh i had not looked @ the branch as of yet
[15:31] jamespage, gnuoy: unit tests added to https://code.launchpad.net/~corey.bryant/charms/trusty/nova-cloud-controller/fix-step/+merge/238476
[15:31] lazyPower, cory_fu It's still not great - but in a different way ;)
[15:31] was going based on the meta discussion in here
[15:31] * lazyPower returns to trolling in silence
[15:32] lazyPower, I prefer that approach :)
[15:32] So that leads to a potential proliferation of users. So, the option of storing a flag (either in config, StoredContext, or a flag file) saying that you'd created a user and set the password for a given relation would solve that without needing to store the password on disk
[15:32] jrwren: link me to your branch again if you don't mind
[15:32] i'll take a look - i'm going to be context switching to the queue as soon as ingest lands this last bundle
[15:33] cory_fu, that's a good idea, sounds sensible
[15:33] :)
[15:33] cory_fu, I'll make that change - thanks very much
[15:33] coreycb, that looks good - did it test OK?
[15:33] jamespage, yep tested ok
[15:33] cory_fu, I'll also take a look at the amulet tests - I don't have much experience with those, but I need to get some
[15:34] lazyPower: https://code.launchpad.net/~evarlast/charms/trusty/elasticsearch/add-version-config
[15:34] jamespage, but ci caught me red-handed with unit test issues! those are fixed in that mp
[15:34] mattyw: i'm available to help with them as i wrote them :)
[15:34] jrwren: ta!
[15:34] coreycb, indeed!
[15:34] merged - thanks!
[15:34] lazyPower, that would be perfect, can I go as far as putting something in the diary?
[15:35] Hi, hoping someone can give me some guidance on setting up multiple interfaces on juju openstack. Particularly an ISCSI network so that storage traffic goes out the right nic.
[15:35] mattyw: not parsing re: diary? Do you mean on the calendar?
[15:35] lazyPower, yeah - that's what I meant
[15:35] lazyPower, can you not read minds yet?
[15:35] mattyw: It shouldn't be too difficult. Just adding another line in validate_relationships()
[15:35] :)
[15:35] mattyw: i'm close... my psychic helmet is in the shop this week for upgrades.
[15:36] but yeah that sounds good.
[15:36] lazyPower, awesome
[15:36] cory_fu: i've been lending myself out to people getting their hands dirty with amulet as we've been changing the story around amulet quite a bit lately
[15:36] mattyw: And also not a big issue. I don't think the tests currently check the actual relation data, just the relation existence
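The flag-file approach cory_fu suggests at 15:32 — chosen over lazyPower's hash check, since per 15:29 the providing side of the relation cannot retrieve the password to re-hash it — could look roughly like the sketch below. The state directory and helper names are invented for illustration and are not the mongodb charm's actual code.

```python
# Rough sketch of the 15:32 flag-file idea: record that a user has
# already been created for a relation, without writing the password to
# disk. Paths and names are hypothetical.
import os

STATE_DIR = '/var/lib/juju/mongodb-charm'  # hypothetical location

def _flag_path(relation_id):
    return os.path.join(STATE_DIR, 'user-created-%s' % relation_id)

def user_already_created(relation_id):
    return os.path.exists(_flag_path(relation_id))

def mark_user_created(relation_id):
    if not os.path.isdir(STATE_DIR):
        os.makedirs(STATE_DIR)
    open(_flag_path(relation_id), 'w').close()

# In the relation-joined hook (pseudo-flow):
#   if not user_already_created(relation_id):
#       create_user_and_set_password()   # existing charm logic
#       mark_user_created(relation_id)
```

This avoids both storing the password on disk and the proliferation of users cory_fu flags at 15:32, at the cost of a little on-disk state per relation.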
[15:36] older info still works but there's enough knowledge now for patterns to be recommended, such as your unit testing template
[15:36] cory_fu, that would be nice, last time I was unable to even run the tests, so I'd like to at least get that far
[15:36] and how to approach the tests from a bundle standpoint vs a charm standpoint
[15:37] mattyw: This might be of help: http://blog.juju.solutions/cloud/juju/2014/10/02/charm-testing.html
[15:38] I was able to run up to and including the 200_deploy test without issue with bundletester
[15:38] Though 200_relate_ceilometer.test had an issue
[15:38] (Unrelated to your change)
[15:41] cory_fu, awesome, thanks very much, I'll take a look
[15:42] np, thanks for working on this. :)
=== roadmr is now known as roadmr_afk
[15:57] Hey all, quick question. Trying to define more than two networks on juju and openstack outside of public and private. Is this possible currently, or do I need to configure the hosts using another method once juju deploys them? Online for ceph I saw the following answer posted: “Currently Juju has a very simple networking model, which assumes only a "private" (inside the cloud environment) and "public" (externally accessible) networks.” Is this true for all openstack services?
=== roadmr_afk is now known as roadmr
[16:05] Is there a better forum or place I should go for information relating to this? I’ve looked online and haven’t really been able to find anything.
[16:09] gnuoy, how's your nova-cc testing coming along?
[16:28] jamespage, I did a deploy with it and added the comment to the mp
[16:29] gnuoy, link? sorry - that will be folderized somewhere which takes time :-)
[16:29] jamespage, https://code.launchpad.net/~gnuoy/charms/trusty/nova-cloud-controller/fix-db-migrations/+merge/239389
[16:30] cory_fu: wait, the ceilometer test had an issue?
[16:30] cory_fu: do you recall what that issue was?
[16:30] gnuoy, +1
[16:31] gnuoy, pls land :-)
[16:31] lazyPower: I thought it seemed like an environment issue or I would have noted it on the review, but I don't recall exactly
[16:31] ok. phew
[16:31] marcoceppi, re mysql - was it intentional that the precise and trusty branches have diverged?
[16:32] cory_fu: checking myself now - thanks for the tip. last thing i want is jamespage pinging me that we broke openstack dependencies
[16:32] jamespage: no, I'm trying to get the fixes out the door today but it's not looking good
[16:32] * lazyPower hides
[16:32] :)
[16:32] lazyPower: I was just running bundletester, so it should be easy enough to replicate
[16:32] ack, that's what i'm doing on HP Cloud
[16:33] jamespage: going to simply submit some small fixes for the charm today, ones that move to charm helpers completely, implement better configuration, and add support for oracle/5.6/5.7
[16:33] marcoceppi, awesome
=== roadmr is now known as roadmr_afk
[16:40] jamespage, merged
[16:41] gnuoy, awesome, thank you!
[17:09] gnuoy, OK, working through each charm for release now
[17:13] gnuoy, for reference i'm saving the existing trunk charm to /old-stable under ~openstack-charmers
[17:14] jamespage, ack
[17:25] gnuoy, all done
[17:25] gnuoy, must remember that precise hacluster != trusty hacluster
[17:25] \o/
[17:25] * jamespage had to do some reverting
[17:28] Anyone have good resources on using Juju with existing puppet manifests?
[17:40] it looks more lively, so I'll try now - can I use Juju to set up a deploy like this: for each batch of app hosts, silence (disable?) their monitoring, remove them from the load balancer, upgrade the app, re-enable monitoring, wait for monitoring to turn green, put them back in the load balancer
=== roadmr_afk is now known as roadmr
[17:41] I'd like the whole process to stop as early as possible if something goes wrong, too. I can script juju calls if that cannot be done inside Juju itself, but I'm not sure how disabling only some units would work
[17:43] hmm, or am I thinking in the wrong categories... I can split the app into a few groups, and then for each, unlink them from the LB, do the upgrade (which should make Juju unlink/link with monitoring by itself, right?), check if it's OK with an external tool, and put them back into the LB
[17:45] adjohn: We don't have a good example for that, but i'm happy to work with you to develop that story
[17:46] adjohn: the most helpful tips I can give you are to split your manifest resources out into the contexts that the hooks themselves handle, and execute the manifests as standalone 'recipes' in each of the hook contexts. How familiar with chef are you? i've got a few examples leveraging chef that should translate fairly well.
[17:58] lazyPower: pretty familiar with Chef, that would be helpful!
[17:59] adjohn: ok, i have some patterns here that should help then - it's leveraging chef-solo, and each hook is outlined as a resource in the cookbook - https://code.launchpad.net/~charmers/charms/precise/rails/trunk
[18:00] if you come up with any specific questions feel free to ping me
[18:00] Thanks, will do!
[18:16] jrwren: will review your branch after i wrap up stub's pending review of postgres. so you're up next
[18:16] but a cursory review looks good - i just want to do some deploy testing
[18:18] lazyPower: any idea why it missed the queue?
[18:18] jrwren: the ingest is turning into chunky salsa on a bug - which is preventing items that come after it from ingesting.
[18:23] jrwren: i'll probably put in some OT on helping get a fix for this. I have an idea that may help prevent this from happening in the future - creating a "problematic items" report so we can skip items if they fail ingest for whatever reason - but continue on the ingestion path. There's probably always going to be something that winds up being a twit and doesn't ingest for w/e reason. Character encoding, weird formatting of a date, something.
[18:39] jcastro: jrwren review queue is fixed
[18:39] there's a ton of stuff in the review queue now
[18:39] <3 you all
[18:39] marcoceppi, huh, I had like 10 that should be in there now
[18:39] jcastro: the ingest is still running
[18:39] oh, is it still ingesting?
[18:39] ack
[18:40] we're moving away from celery and on to a better delayed-task tool in the next week
[18:40] so errors will get caught quicker
[18:43] marcoceppi, my metadata spam compels you
[19:45] stub: are you around?
[19:56] lazyPower: some interesting stuff in the testing page: http://reports.vapour.ws/charm-tests/charm-bundle-test-1256-results
[19:56] stub: landed https://code.launchpad.net/~stub/charms/precise/postgresql/integration/+merge/233666
[19:56] filing a bug re: tests for follow-up work.
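The chef-solo pattern lazyPower points adjohn at above (17:59) — each Juju hook delegating to a recipe scoped to that hook context — can be illustrated with a small shim like the following. The cookbook and recipe names, and the solo.rb path, are hypothetical and not taken from the rails charm; only the hook-per-recipe structure is the pattern being described.

```python
#!/usr/bin/env python
# hooks/install - invented shim illustrating the chef-solo pattern:
# the hook's only job is to run chef-solo with a recipe matching the
# hook name. Symlink every hook (config-changed, start, ...) to this
# file and each gets its own recipe.
import os
import subprocess
import sys

HOOK = os.path.basename(sys.argv[0])  # e.g. 'install', 'config-changed'

subprocess.check_call([
    'chef-solo',
    '-c', os.path.join(os.environ['CHARM_DIR'], 'solo.rb'),
    '-o', 'recipe[myapp::%s]' % HOOK.replace('-', '_'),  # hypothetical cookbook
])
```

The same shape should translate to existing puppet manifests: split the manifest into per-hook classes and have each hook run `puppet apply` against the class for its context.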
[19:57] tvansteenburgh output is looking better ^
[20:00] stub: https://bugs.launchpad.net/charms/+source/postgresql/+bug/1384894
[20:00] Bug #1384894: Tests Fail Consistently
=== roadmr is now known as roadmr_afk
=== roadmr_afk is now known as roadmr
[21:31] let's say I have a simple app on top of PostgreSQL with hot standby, what's the chance of split brain? Would putting 1 app instance and 1 postgres instance on a separate partition cause it?
[21:33] oh, never mind, the postgresql node won't upgrade to master without confirmation from the juju main node
[21:35] hmm, which makes the state server a single point of failure... how would it handle being brought up after the environment changed?
[21:35] ktosiek: you can scale the state server out to avoid it being a single point of failure
[21:36] right, but my second question still stands: what would happen if something changed (like nodes failing) while the state server was failing over?
[21:40] Since all state servers are in sync, if one state server is failing the others would know that the postgresql server was failing, as there's a call home every X period of time. So the state would be updated accordingly regardless of the state server's state, as long as there was still a state server running
[21:40] I need to look into the clustering stuff a bit more, but that's my understanding as not a juju dev
[21:41] yeah, so the state server is used as a source of truth for charms with HA. Sounds sensible
[21:43] thanks, I'm starting to feel I've got the basic architecture :-)
=== arosales_ is now known as arosales
[22:06] marcoceppi: latent response - yeah i'm aware. I put this on amir's radar and will be revisiting this again. most of it is cosmetic - but there are some legitimate things bothering me re: namenode relationship-changed failed.
[22:08] Seems like you guys are in the middle of some serious dev/testing. I have an openstack maas juju environment up, and need to do some tweaking with the network settings on cinder and nova, and integrate with a third-party cinder driver. I realize you guys are probably less inclined to answer admin-type questions, but I’ve really tried engaging multiple sources for more info, including the openstack team at HP, online documentation etc., and am coming up short on answers. Is there a good forum to get help on? Or should I come back at a later time? Thanks for any response.
[22:12] thedellster: I'd try askubuntu.com or even stackoverflow as a last resort
[22:14] Thanks ktosiek…
[22:14] thedellster: the mailing list is also a great way to get latent responses to questions - juju@lists.ubuntu.com
[22:15] not everyone monitors the irc channel, but there are a fair number of eyes monitoring the mailing list
[22:15] Ah great!
[22:21] thedellster: and most of the OpenStack charmers are on UK time :)
[22:22] the mailing list is a great place to start, if you send in your question I'll make sure to poke them tomorrow to take a look. Ask Ubuntu is another great place as well (a bit more SEO than a mailing list)
[22:22] Writing the email now…
[22:23] Might stick around till UK time
[22:23] thanks again everyone!
[22:23] o/
=== roadmr is now known as roadmr_afk
[22:41] * thedellster sends email, crosses fingers. Preparing to drink pot of coffee for 4am UK time handoff.
[22:46] thedellster: maybe take a nap beforehand so you're fresh and ready to go when they arrive.
[22:47] Lol, a very diplomatic way to get me out of here…. Thanks again, I’ll wait on the mailing list reply, and maybe come back at 4am…
[22:48] replacing tequila with coffee
[22:48] thanks for the help
=== roadmr_afk is now known as roadmr