[00:11] magicaltrout: no worries, you can leave them running for a few weeks before I start to ask questions
[00:11] esp. since I can see what's running on them, and anything with dcos in it intrigues me
=== terje is now known as Guest91970
[07:38] how can I use "juju upgrade-juju" to upgrade my machines from beta5 to beta6?
[13:33] jamespage, thedac, tinwood, beisner, wolsen: I've moved our layers and interfaces onto https://github.com/openstack-charmers and repointed our entries on http://interfaces.juju.solutions/
[13:40] I'm trying to install glance - but it fails at setting up the mysql db with: "Host 1.2.3.4 is not allowed to connect to this MySQL server". But the IP mentioned is exactly the IP of the machine...
=== terje is now known as Guest26969
[13:52] seems to be this bug: https://bugs.launchpad.net/charms/+source/mysql/+bug/1305582
[13:52] Bug #1305582: relation with mysql fail when mysql and glance are deployed on the same node
[14:11] jamespage, hypothetically, instead of two network cards, could two different vlans be used on a single network card for openstack deployment?
[14:12] aka foo.2001 & foo.2002, instead of two connections?
[14:12] * xnox ponders how good support for vlans is in the juju/lxd/openstack world =)
[14:24] hello there
[14:25] i get the following
[14:25] 2016-04-26 14:24:36 ERROR juju.cmd supercommand.go:429 cannot assign unit "mysql/0" to machine 1: machine is not alive
[14:25] i installed the machines manually with ubuntu 14.04.4
[14:26] and after that added them with juju add-machine ssh:ubuntu@192.168.7.10
[14:26] why isn't this working?
[14:27] Is there a workaround for bootstrapping xenial on aws? Bootstrap can't find any xenial images
[14:28] aisrael: I asked on the ML, Mark said there will be images later this week hopefully
[14:29] hi
[14:30] Can I ask some support questions here, or is there a different channel?
[14:32] SaltySolomon: you can try, this is the correct place
[14:33] We are currently trying to set up openstack with maas and juju; we got maas set up and working, but everything else is bugging out when we try to install juju
[14:33] you might struggle with openstack questions, I believe most of them are in Austin
[14:33] magicaltrout: thanks!
[14:34] Bootstrapping Juju takes ages
[14:34] SaltySolomon: where does the bugging start? does bootstrap complete?
[14:35] well, it says bootstrapping juju, but it doesn't really finish, does it normally take a long time?
[14:36] SaltySolomon: shouldn't be more than 10 mins
[14:36] SaltySolomon: can you try to bootstrap again, with the --debug flag, and paste the output in paste.ubuntu.com and link here?
[14:36] SaltySolomon: also, what version of juju (juju version)
[14:37] Okay, we got the version that is included with the liberty release on ubuntu server 15.10
[14:44] marcoceppi: Regarding the new charm push and charm publish commands.
[14:44] marcoceppi: Are there any special concerns when the charm was in the charm store from the old way and we want to push/publish using the new way?
[14:45] kwmonroe: ^
[15:05] mbruzek - just be aware that it disables the VCS ingestion route for that charm. Otherwise nope, it just continues on. You may need to tweak the ACLs on first upload... but that's true with any charm push
[15:06] lazyPower: I thought there was some caveat with publish. That we could not go back to the old way or something else... Like what if a charm is already in the store and the author pushes to the private namespace. Is there some kind of link problem there?
[15:06] aisrael, xenial AMIs should be working now
[15:06] mbruzek - that's what i was referring to, that it disables VCS ingestion
[15:07] thanks rcj
[15:07] rcj: Is that a recent fix?
[15:07] mbruzek - once you charm push, you won't be able to go back to pushing to launchpad. You will have to charm push future revisions
[15:07] lazyPower: ack
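The push/publish flow being discussed looks roughly like this — a minimal sketch in which the charm name, series, and namespace are placeholders, and exact flags may vary between charm-tools releases:

    # Push the local charm directory to a personal namespace; the store
    # replies with the new revision, e.g. cs:~mbruzek/trusty/foo-0.
    charm push . cs:~mbruzek/trusty/foo

    # Publish that revision so it is deployable without the revision number.
    charm publish cs:~mbruzek/trusty/foo-0

    # Loosen the ACLs if the first upload came out too restrictive,
    # as lazyPower warns above.
    charm grant cs:~mbruzek/trusty/foo everyone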
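Earlier in the log ([13:40]), glance's relation to mysql failed with "Host 1.2.3.4 is not allowed to connect" (bug #1305582, mysql and glance on the same node). A manual workaround, sketched here only: grant the unit's address access by hand on the mysql unit. The database, user, and password below are illustrative; the real values come from the relation settings.

    # On the mysql unit, allow the glance unit's IP to connect.
    mysql -u root -p -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'1.2.3.4' IDENTIFIED BY 'relation-password'; FLUSH PRIVILEGES;"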
[15:15] rcj: bootstrapping xenial working <3
[15:17] \o/
[15:25] aisrael: \o/
[15:31] SaltySolomon: could you run `juju version`? that could be a number of different releases
[15:32] gnuoy, ack. Are we moving to a pull request workflow for them? e.g. fork and PR?
[15:33] 1.25.5-wily-amd64
=== Salty is now known as SaltySolomon
[16:06] tinwood, moving to openstack gerrit eventually
[16:13] marcoceppi, http://paste.ubuntu.com/16065407/ with version 1.25.5-wily-amd64
[16:14] SaltySolomon: ah, this is the openstack installer, not too familiar with this, stokachu ^?
[16:15] SaltySolomon: your maas api key is incorrect
[16:15] SaltySolomon: you copy it from the maas ui under your user's account
[16:15] We triple-checked it and it is the right one
[16:15] SaltySolomon: did you create your own api key or let maas generate it
[16:15] create our own
[16:15] ah, that's why
[16:15] once we had the w
[16:15] we check for ':' in the api key
[16:16] Well, it generated a new one for us
[16:16] It wasn't the default one
[16:16] So we let maas generate it for us
[16:16] so the api key should be 'abcd:abcd:abde'
[16:17] Also once we had a typo in it
[16:17] Then it didn't find any machines; later we used the right key and it even deployed on a machine with juju
[16:18] SaltySolomon: ok, so does your api key have that format i described? 'aaaa:bbbbb:ccccc'
[16:18] yes it does
[16:19] ok, you want to `sudo openstack-install -u` and then remove ~/.cloud-install
[16:19] and start again, entering that new api key
[16:24] okay, it finds the maas nodes
[16:24] gnuoy, okay, but for now?
[16:35] MAAS web interface says it is deployed, the node output too, but the installer is, well, stalling
[16:35] stokachu ^
[16:36] SaltySolomon: can you pastebin ~/.cloud-install/commands.log
[16:37] http://paste.ubuntu.com/16065965/
[16:37] Here you go stokachu
[16:38] SaltySolomon: so it's probably still going through the cloud-init stuff with the system
[16:38] has to do apt upgrade etc
[16:38] the client has no connection to the inet
[16:39] do you have a proxy?
[16:40] okay, back; it does have internet access, so it is probably the patching
[16:40] ok
[16:40] yea, it could take a few minutes to do all that
[16:42] Node is currently just saying "Installation finished"
[16:44] Okay, wrong, that was some old stuff; we got the same error as before, will post you a pastebin in a sec
[16:45] http://paste.ubuntu.com/16066144/ here you go stokachu
[16:46] SaltySolomon: looks like it's failing to upload the juju tools during bootstrap
[16:46] SaltySolomon: you sure those deployed nodes can reach the internet?
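That check can be scripted from one of the deployed nodes — a small sketch, assuming the usual ubuntu login on MAAS-deployed machines; the node IP is a placeholder:

    # Hop onto a deployed node and verify it can reach the tools mirror
    # that juju bootstrap downloads from.
    NODE_IP=10.0.0.5   # placeholder: one of the MAAS-deployed nodes
    ssh "ubuntu@$NODE_IP" \
        'curl -sv --max-time 30 https://streams.canonical.com/juju/tools/ -o /dev/null &&
         echo "streams.canonical.com reachable" ||
         echo "blocked: check default gateway, DNS, and proxy/firewall rules"'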
[16:57] tvansteenburgh, kwmonroe: https://github.com/juju-solutions/bundle-apache-processing-mapreduce/pull/4
[17:01] cory_fu: i'd like to know why it's passing tests before fixing it
[17:01] tvansteenburgh: Yeah, that's very strange. I really don't understand it
[17:04] tvansteenburgh: But it's definitely passing: http://reports.vapour.ws/all-bundle-and-charm-results/charm-bundle-test-parent-6892/charm/aws/16
[17:10] tvansteenburgh: I'd like to point out that the message the stack trace is waiting for doesn't match the message in the current version of the test
[17:10] Not that it matters, since the current version of the test also doesn't match what the slave should be setting
[17:24] I published a charm (to my personal namespace), but forgot to change bzr-owner before I published, so now it's showing up in the ~charmers namespace. I've changed bzr-owner now, re-promulgated, but not sure if there's anything else I need to do to fix it.
[17:24] i.e., it's still showing up in the wrong place in the store
[17:27] stokachu, we tried again with packet forwarding activated; this time the error is: http://paste.ubuntu.com/16067371/
[17:28] SaltySolomon: so you're still getting this error: "Failed to connect to streams.canonical.com port 443: Connection timed out" while fetching tools from https://streams.canonical.com/juju/tools/agent/1.25.5/juju-1.25.5-trusty-amd64.tgz
[17:28] something is still blocking you from accessing streams.canonical.com
[17:37] aisrael: bzr-owner only affects what Juju Cards show as the source, and is only a stop-gap until they fix Cards. However, you have to set the bzr-owner for each revision that you push
[17:37] cory_fu: Ok. Any idea why it'd show up in ~charmers vs. my namespace?
[17:38] Was it previously promulgated in ~charmers? You may have to unpromulgate it from there
[17:38] What's the charm ID?
[17:38] I don't think it was promulgated to ~charmers
[17:38] it should be cs:~aisrael/trusty/plex-0
[17:39] but my page in the cs points it to cs:trusty/plex-0
[17:45] ok, I think I unpromulgated it.
[17:45] aisrael: Yeah, it's not promulgated now, but https://jujucharms.com/plex/trusty/0 is still showing up and I don't know why
[17:46] https://jujucharms.com/plex/ rightly 404s
[17:46] https://jujucharms.com/u/aisrael/plex/trusty/0 looks right as well
[17:46] but this works: https://jujucharms.com/plex/trusty/0
[17:46] I have no idea where the first is getting ~charmers. Only thing I can possibly see is that, in the perms, write is set to charmers instead of you
[17:47] Yeah. It shouldn't
[17:47] and https://jujucharms.com/u/aisrael/ shows it's owned by charmers
[17:47] Hmm. write perms..
[17:47] aisrael: Also note, user pages seem to be a bit out of whack with the new publishing process: https://github.com/CanonicalLtd/jujucharms.com/issues/249
[18:14] @stokachu, thanks a ton, it was exactly that problem, the nodes couldn't reach the internet
[18:15] cool, glad to hear
[18:43] kwmonroe, kjackal, admcleod1 (if you're around): Does this look ok to you? https://github.com/juju-solutions/layer-apache-hadoop-namenode/commit/e6ad8fd44f119fdc411c6104498276aa4d2ac1a8
[18:44] The "update on fencing" sort of worked, but only for the unit that was transitioning.
[18:44] I could maybe make it work for both, but it would be a bit hairy. But I'm not sure I like the every-minute-cron solution, either
[18:45] lgtm
[18:46] do we have any other option at this time?
[18:46] Let's break it fast :)
[18:47] I could make the fencing solution more complex (it would require doing an SSH to the other NN to launch juju-run from there to update the status)
[18:47] Maybe we can do that in the future
[18:48] +1 for pushing this out now and improving it as we go
[18:52] yeah cory_fu, cron */1 kinda boo, but i think it's a good stab for now. also i thought i was reading some hardcore productive code for a minute there. then i realized that whole nn_status file is just for status. good grief! and nice job :)
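For reference, the every-minute cron being debated would look something like this — a sketch only, with a hypothetical unit name and status message rather than the actual contents of the layer:

    # /etc/cron.d/nn-status -- written onto the namenode machine by the charm.
    # juju-run re-enters the unit's hook context, which makes the status-set
    # hook tool available outside of a regular hook.
    */1 * * * * root juju-run namenode/0 'status-set active "namenode ready"' >/dev/null 2>&1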
[19:54] bdx - MFW i think to myself "Man, why didn't we make a function to copy out certificates to path..."
[19:54] bdx then i recall doing a review... and found out you contributed this in tlslib :D
[19:54] high five sir
[20:17] anyone here gonna help with the openstack keystone charm? or should I head over to some openstack channel?
[20:17] keystone fails with "hook failed: config-changed", and specifically the function _ensure_initial_admin seems to fail... (it's tried 3 times before giving up)
[20:21] and: INFO worker.uniter.jujuc server.go:173 running hook tool "status-set" ["blocked" "Ports which should be open, but are not: 5000, 35357"]
[20:22] do we have a centos image we can deploy on ec2 with juju?
[20:23] i want to test some compatibility
=== scuttle|afk is now known as scuttlemonkey
[21:51] I've been trying to run juju bootstrap --config default-series=xenial lxd-test lxd --debug from behind a work proxy, which ultimately fails with the message: 2016-04-26 21:13:42 ERROR cmd supercommand.go:448 failed to bootstrap model: cannot start bootstrap instance: can't get info for image 'ubuntu-xenial': not found. This is running inside a fresh Ubuntu 16.04 VM. I have the same setup at home, and all works perfectly. Suggestions?
[21:51] aww a123
[21:51] you need to grab an image i suspect
[21:53] a123: i'm not sure about the syntax, i think it's changed a bit, but try something like
[21:53] lxd-images import ubuntu trusty amd64 --sync --alias ubuntu-trusty
[21:53] or
[21:53] lxd-images import ubuntu xenial amd64 --sync --alias ubuntu-xenial
[21:55] Ah. ok. I'll give that a try. I didn't realize bootstrap would check locally. I'm curious though, why are there 2 different methods of grabbing images? Why doesn't bootstrap use lxd-images...?
[21:59] a123: dunno :)
[21:59] LXD local does work though, i have a bootstrapped setup here
[22:00] is your bootstrap using the newly released 2.0?
[22:00] i just installed a fresh xenial image and did apt-get
[22:01] i've been running a self-built LXD juju for a while and i did notice a few differences in the bootstrap
[22:01] although it could just be me getting old
[22:02] ok. sounds close enough to what I did. apt-get followed by setting up zfs as a backing store, then juju bootstrap.... As I mentioned, works great w/out being behind a proxy.
[22:02] well
[22:02] The image import syntax has changed a bit, so working on that now.
[22:03] also check
[22:03] lxc image list
[22:04] it will also try and hook up to streams.canonical.com for the tools
[22:04] if that fails i'm pretty sure it will be sad as well
[22:04] i'm not sure what error you'd get there
[23:03] admcleod1: https://github.com/juju-solutions/bundle-apache-processing-mapreduce/pull/5 look ok to you?
[23:04] admcleod1: Also, if you're ok w/ the HA as it stands now, I'd like to get it merged. Maybe test it some more today and let me know if you hit anything, and if not I'll merge it tomorrow?
[23:05] I'm thinking we should squash the merges, though, because the commit histories are quite loquacious.
[23:06] check out the big word!
[23:07] :)
[23:07] Alright, well I'm EOD. Have a good one!
[23:07] boooo
[23:08] :) Gotta go celebrate a friend's b-day
[23:08] i'm too old for those
[23:08] have fun!
=== blr_ is now known as blr
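On the proxy-blocked bootstrap discussed above ([21:51] onward), the usual approach is to hand the proxy settings to both LXD and the new model and pre-fetch the image — a rough sketch assuming an early juju 2.0/LXD 2.0 setup; the proxy URL is a placeholder, and the core.proxy_* keys may not exist in older LXD releases:

    # Placeholder proxy; substitute the real work proxy.
    export http_proxy=http://proxy.example.com:3128
    export https_proxy=http://proxy.example.com:3128

    # Let the LXD daemon itself use the proxy for image downloads.
    lxc config set core.proxy_http "$http_proxy"
    lxc config set core.proxy_https "$https_proxy"

    # Pre-fetch the image bootstrap looks for, then confirm it is cached.
    lxd-images import ubuntu xenial amd64 --sync --alias ubuntu-xenial
    lxc image list

    # Pass the proxy into the model as well, so the tools fetch from
    # streams.canonical.com can get out too.
    juju bootstrap --config default-series=xenial \
        --config http-proxy="$http_proxy" \
        --config https-proxy="$https_proxy" \
        lxd-test lxd --debug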