=== brandon is now known as Guest50075
[07:27] gnuoy`, unit test failures on the quantum-gateway dvr merge
[07:28] jamespage, ack, I'll take a look
[07:28] gnuoy`, needs some further mocking by the looks of things
=== brandon is now known as Guest20723
[07:33] gnuoy`, nova-compute +1 and merged
[07:34] jamespage, fantastic, ta
[07:34] gnuoy`, I added the snippet to the kilo template as I did it
[07:34] libbo but trivial
[07:34] kk
[07:45] gnuoy`, one trivial comment on neutron-api - take a look - I've tested it locally with the unit tests - seems OK
[07:45] will do, thanks
[07:50] jamespage, +1 your proposed change to my neutron-api mp
[07:51] jamespage, quantum-gateway unit tests are fixed too
[07:53] jamespage, I'd like to talk to you about the neutron-network-service relation in the neutron-openvswitch charm nut we can do that later
[07:53] s/nut/bur/
[07:53] urgh, typing is hard
[07:57] gnuoy`, sure let's do that in a sec
[08:14] gnuoy`, it seems odd to have nova-cc related to neutron-openvswitch
[08:16] jamespage, the neutron metadata service needs keystone credentials, so this operates in the same way as the quantum-gateway charm does, i.e. it gets keystone creds from nova-cc. However, I do accept that this is really an abuse and suboptimal. Currently, the keystone charm will not issue creds without the client registering an endpoint, which neutron-ovs doesn't do. I was thinking that the longer term solution here is to amend the keystone charm to allow you to join the identity-service relation without specifying an endpoint and get creds back
[08:18] gnuoy`, the alternative is for the neutron-api charm to pass those over?
[08:18] an alternative rather
[08:18] jamespage, yes, that would work
[08:19] gnuoy`, that would be preferable to having another relation IMHO
[08:19] jamespage, fine by me. I'll make that so
[08:19] gnuoy`, +1
[08:19] :q
[08:23] jamespage: i've done a bit of a refactor of the cred gen code in keystone with the aim of following up with the ability for the identity relation to be able to hand out creds without necessarily adding an endpoint
[08:23] jamespage: gnuoy suggested perhaps a new relation
[08:23] jamespage: not sure what the best approach is yet
=== axw_ is now known as axw
[09:47] gnuoy`: +1 for amending the keystone service
[09:48] at the moment what I do is use keystone-admin and use keystone client to get my credentials
[09:49] dosaboy: sounds good ;-)
[09:50] apuimedo, yes, I think the keystone charm needs to grow that feature
[09:51] indeed
[09:53] gnuoy`: why is nova-api-metadata not run by the nova-cloud-controller charm?
[10:03] apuimedo, I am struggling to come up with a sensible reason
[10:04] ;-)
[10:39] gnuoy`, legacy mode good for review?
[10:39] dosaboy, your keystone refactoring landed btw
[10:39] dosaboy, and some feedback on your hacluster one
[10:39] jamespage, just having a moment of doubt as to whether it'll block the packages being installed
[10:40] apuimedo, gnuoy`: re the nova-api-metadata agent not running on the nova-cc - by placing it on the edges alongside the hypervisors, we avoid a) pushing all traffic to a single set of services and b) having to make them HA as well
[10:41] jamespage, but it currently runs on the neutron-gateway
[10:41] not on the edges
[10:42] gnuoy`, unless you run nova-network - in which case it does run on the hypervisors
[10:42] gnuoy`, for neutron right now it sits alongside the neutron-metadata agents on the gateway nodes
[10:42] gnuoy`, dvr changes that again right?
[10:42] (unless you're using dvr)
[10:42] yeah
[10:42] spot on
[10:42] that way the neutron-metadata agent is only dependent on the api service running locally - so it's quick
[10:43] and resilient to a whole node failure
[10:43] single service failures can still create problems tho
[10:43] jamespage: got it thanks, fixing now
[10:44] ok
[10:44] thanks
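
[editor's note: a minimal sketch of the approach agreed at 08:18-08:19 above - the neutron-api charm forwarding keystone credentials to neutron-openvswitch over their existing relation, instead of adding a nova-cc relation. The relation name 'neutron-plugin-api' and the credential keys are illustrative assumptions, not the charms' confirmed interface; only the charmhelpers calls are real.]

    # Hypothetical hook snippet for the neutron-api charm: push keystone
    # credentials (already received over its identity-service relation) out
    # to every related neutron-openvswitch unit, avoiding an extra
    # nova-cc <-> neutron-openvswitch relation.
    from charmhelpers.core.hookenv import relation_ids, relation_set

    def forward_keystone_creds(creds):
        # creds: dict such as {'service_username': ..., 'service_password': ...,
        # 'service_tenant': ...} captured from the identity-service relation.
        for rid in relation_ids('neutron-plugin-api'):  # assumed relation name
            relation_set(relation_id=rid, **creds)
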
[10:53] jamespage: why is it that on the quantum-gateway charm there's only the template for nova.conf for havana but not for icehouse/juno?
[10:53] apuimedo, the template loader is OS series prioritized
[10:54] the nova.conf in havana is ok for icehouse and juno
[10:54] we only create a new template for a specific OS version if it's really required
[10:54] I thought as much, but I wanted to confirm :P
[10:54] (as for kilo)
[10:54] on my charms I follow a default on /templates plus overrides in subdirs
[10:54] it's on the gateway charm so it can sit alongside neutron metadata proxies in l3 and dhcp namespaces
[10:55] (if needed)
[10:55] jamespage, legacy mode is good for review
[10:55] gnuoy`, ack - doing so now
[10:55] ta
[10:55] jamespage: well, it's there also because nova-api-metadata needs it to exist
[10:55] right?
[11:00] yeah
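
[editor's note: a sketch of the fallback behaviour jamespage describes at 10:53-10:54 - assumed semantics, not the actual charmhelpers loader: templates live in per-release subdirectories and the loader walks back from the deployed release until it finds a match, which is why havana/nova.conf also serves icehouse and juno until a kilo override is added.]

    import os

    # OpenStack releases in age order; the loader prefers the newest
    # template directory at or below the deployed release.
    RELEASES = ['essex', 'folsom', 'grizzly', 'havana', 'icehouse', 'juno', 'kilo']

    def find_template(templates_dir, release, name):
        """Return the most specific template for `release`, falling back
        through older releases (e.g. kilo -> juno -> icehouse -> havana)."""
        for rel in reversed(RELEASES[:RELEASES.index(release) + 1]):
            candidate = os.path.join(templates_dir, rel, name)
            if os.path.exists(candidate):
                return candidate
        raise LookupError('no template %s at or below %s' % (name, release))
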
[11:10] gnuoy`, merged legacy mode support
[11:10] ta
[11:22] jamespage: the legacy mode support is so that the charms that are refactored still play nice with those that had relations with the pre-refactor versions?
[11:22] yup
[11:23] we'll leave legacy mode on by default for this release, and then turn it off for 15.07
[11:48] gnuoy`, OK - I think I've reviewed what I can - I'm going to rebase my 0mq branches next
[11:48] gnuoy`, are you looking at le next?
[11:49] neutron-openvswitch dvr fixes next, then le, was my plan
[12:04] The charm boilerplate from 'charm create' does a 'pip install charmhelpers'. How does that work with the python-apt dependency? Last I heard, you couldn't install that with pip.
[12:22] stub: you can't, unfortunately, I thought python-apt was installed on the images already?
[12:22] * marcoceppi_ goes to verify
[13:29] I'm playing around with the GCE provider in the latest ppa:juju/devel package, and it looks like the bootstrap node doesn't have port 17070 opened up to the world.
[13:29] Is this a known bug?
[14:02] Odd_Bloke: yep #1436191
[14:02] Bug #1436191: gce: bootstrap instance has no network rule for API
[14:03] Odd_Bloke: already fixed
[14:03] ericsnow: Any easy way to get the fixed code?
[14:04] Odd_Bloke: we have a docker container with trunk
[14:04] aisrael: have you started publishing your nightlies?
[14:05] lazyPower: Ah, cool; link?
[14:05] Odd_Bloke: yep, see http://reviews.vapour.ws/r/1282
[14:05] Odd_Bloke: docker run -ti -v $HOME/.juju-trunk:/home/ubuntu/.juju adamisrael/juju-trunk
[14:06] Odd_Bloke: the key thing is to change the first arg to OpenPorts to env.globalFirewallName()
[14:06] i'm not sure that has the fix however - confirming with aisrael that he's still tracking nightlies - we just started publishing these last week at our sprint
[14:06] OK, cool; I'm just playing around so I'll hold off until aisrael confirms.
[14:11] Odd_Bloke: let me or wwitzel3 know if you have any questions or run into trouble (we wrote the provider)
=== cmagina_ is now known as cmagina
[15:17] apuimedo: o/ I understand amulet is giving you some fuss?
[15:18] jcastro: et al, can someone look over my answer to see if I can improve it? http://askubuntu.com/questions/603317/is-masas-juju-or-the-charm-responsible-for-ssh-keygen-on-nodes/603381#603381
[15:19] looking
[15:20] lazyPower: luqas was the one that experienced the issue
[15:21] he'll be able to detail it better
[15:21] apuimedo: sure thing :) If i can get the error message and a sneak peek at the test code we should be able to triage/address it
[15:23] lazyPower: hi, when trying to use the placement option for deployment in amulet I get an error in juju-deployer
[15:23] let me find it
[15:27] lazyPower: http://paste.ubuntu.com/10712730/
[15:28] but I've seen there was a fix for that in https://bugs.launchpad.net/juju-deployer/+bug/1383336
[15:28] Bug #1383336: TypeError "takes exactly 2 arguments (4 given)" raised while deploying
[15:28] brb
[15:30] luqas: do you have juju-deployer installed via pip?
[15:30] luqas: i do believe this fix was released in the pip package, but has not yet been ported to the repository package. bit of a mismatch atm
[15:33] lazyPower: yes, that can be, I have the repository package, will try with the pip one and be back if it still shows, thank you
[15:33] np
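
[editor's note: per the 15:30 exchange just above, the placement fix had been released on PyPI but not yet in the Ubuntu archive package, so the workaround luqas is about to try amounts to something like:]

    sudo apt-get remove juju-deployer   # drop the older archive package
    sudo pip install -U juju-deployer   # pick up the PyPI release with the fix
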
[15:49] Hi folks, has anyone got Juju to deploy to a subscription in Azure with EA? Either via the VMDepot VM or the Azure CLI tools? I'm having problems.
[15:56] joec1: i'm not sure i'm parsing what you're asking. EA = Ensure availability?
[15:57] Hi @lazyPower. Sorry, I meant Azure with a Microsoft Enterprise Agreement
[15:57] I'm not sure if that even matters but I can't get Juju to deploy using any documented methods
[15:58] joec1: yeah, i'm looking here @ the azure EA marketing page and this looks mostly like a pricing structure model, not a different provider setup. So I'm going to make the blanket statement of it should just work
[15:58] what issue are you running into during bootstrapping?
[16:01] joec1: can you confirm you were following the instructions located here: bzr merge lp:~canonical-ci-engineering/charms/trusty/logstash/local-tarball
[16:01] I'm just attempting starting from scratch again but a few days ago I got the following:
[16:01] ERROR failed to bootstrap environment: PUT request failed: BadRequest - XML Schema validation error in network configuration at line 39,18. (http code 400: Bad Request)
[16:01] gah, paste fail
[16:01] https://jujucharms.com/docs/1.20/config-azure
[16:02] yes I followed those instructions (even though there is an error in the openssl generation commands)
[16:02] joec1: can you file a bug about the openssl generation error here? https://github.com/juju/docs/issues
[16:02] forget the openssl error, that was my fault
[16:03] i'm bootstrapping now to try and reproduce
[16:04] joec1: i've found a relevant thread on this bug and it looks like it's due to storage configuration
[16:04] https://bugs.launchpad.net/juju-core/+bug/1304778
[16:04] Bug #1304778: ERROR PUT request failed: BadRequest - XML Schema validation error in network configuration at line 54,18. (http code 400: Bad Request)
[16:05] brilliant!
[16:05] I'll test with trunk, thanks so much
[16:06] not a problem, i can confirm it bootstraps appropriately on 1.23-beta
[16:06] I did see that bug but my eyes jumped over the second reported error - I thought it was just about storage not network
[16:06] thanks again
[16:06] cheers
[16:10] ahhh
[16:12] I'm using juju 1.22 stable from the repos, that bug was committed to 1.20 I think. Still, will try with 1.23-beta....
[16:12] I don't believe it's a core bug, i may be wrong
[16:12] it seems strange that I'm able to bootstrap if it were a core bug. I know that azure is a fairly finicky provider - it's very particular about how you have it configured
[16:30] strikov: http://people.canonical.com/~ubuntu-archive/proposed-migration/update_excuses.html
[16:37] @lazyPower - I don't suppose you have any easy-to-follow instructions for building juju trunk do you? I've already hit a problem following the instructions in http://bazaar.launchpad.net/~go-bot/juju-core/trunk/view/head:/README
[16:39] # launchpad.net/juju-core/testing/filetesting
[16:39] ../src/launchpad.net/juju-core/testing/filetesting/filetesting.go:194: cannot use checkers.Satisfies (type check.Checker) as type gocheck.Checker in function argument:
[16:39] check.Checker does not implement gocheck.Checker (wrong type for Info method)
[16:39] have Info() *check.CheckerInfo
[16:39] want Info() *gocheck.CheckerInfo
=== natefinch is now known as natefinch-afk
[16:41] apologies for the spam
[16:57] joec1: i do, 1 moment. there's 2 methods you can follow
[16:58] 1) you can use the dockerbox aisrael is publishing/maintaining of nightly builds
[16:58] @lazyPower thanks its ok
[16:58] I used github instead of launchpad.net
[16:58] or 2) you can build from source following a tutorial here: http://marcoceppi.com/2014/11/compiling-juju-core-from-source/
[16:58] really appreciate the help thanks! :)
[16:58] cheers :)
[17:22] Fiddlesticks! I get the same error using juju trunk
=== natefinch-afk is now known as natefinch
[17:41] joec1: let's recap your config
[17:42] sure
[17:42] can you nuke the sensitive bits and pastebin me your config?
[17:42] will do
[17:42] 1 sec
[17:51] @lazyPower http://pastebin.com/GyExhKR5
[17:52] shows the error log also. I have to add --upload-tools as it complains: "Juju cannot bootstrap because no tools are available for your environment. You may want to use the 'agent-metadata-url' configuration setting to specify the tools location."
[17:56] I've also attempted using "--constraints instance-type=Small" but get the same XML BadRequest error
[17:57] joec1: You need to add a couple of options to your ~/.juju/environments.yaml, under the provider you're trying to bootstrap
[17:57] agent-metadata-url: https://streams.canonical.com/juju/tools
[17:57] agent-stream: devel
[17:58] aisrael: o/
[17:58] will try that now thanks @aisrael
[17:58] aisrael: is your nightly docker image still the go-to place to get trunk's code? Odd_Bloke was asking earlier.
[17:58] lazyPower: Yep, it sure is!
[17:58] Odd_Bloke: ^ seems like you're g2g, aisrael is on the case.
[18:02] @aisrael juju can't parse those options in environments.yaml
[18:02] "YAML error: line 458: found character that cannot start any token"
[18:03] joec1: What error are you getting? Can you pastebin your environments.yaml (with the sensitive bits removed)?
[18:03] here is the one I just made: http://pastebin.com/GyExhKR5
[18:04] which provider are you using?
[18:04] joec1: apologies for the delay, in a conf call - give me a few and i'll be responsive again
[18:04] aisrael: this is for azure
[18:04] aisrael: thanks for taking a look
[18:04] * lazyPower got busy all of a sudden
[18:05] :) np really appreciate your help
[18:06] joec1: Based on that error, I suspect you have an error in the lines you just added. They should be added under test-juju01
[18:06] yep, that's where i added them
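
[editor's note: an illustrative fragment of what aisrael describes at 17:57 and 18:06 - the two agent options nested under the environment itself (the user's is named test-juju01), not at the top level of ~/.juju/environments.yaml. The surrounding azure keys are elided; only the two agent-* lines are quoted from the chat.]

    environments:
      test-juju01:
        type: azure
        # ... existing azure settings (location, management credentials, etc.) ...
        agent-metadata-url: https://streams.canonical.com/juju/tools
        agent-stream: devel
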
[18:06] joec1: just for grins can you attempt to bootstrap in the US West region? i can confirm this is the group i'm using as well - and this may be region specific.
[18:07] will do
[18:07] i see you're using the EU group - if we can isolate that this may be region specific i can get a bug tailored to the issue
[18:10] ...creating storage account.....
[18:11] same error unfortunately
[18:11] I changed my environments.yaml file to reflect the new region and the new storage account created in that region
[18:12] region being "West US"
[18:12] a couple of things:
[18:12] If you're still getting a yaml parsing error, then there's something wrong with the yaml.
[18:13] OK, I didn't try with the agent-stream settings, will do now
[18:14] the error I'm getting is this currently: "2015-03-31 18:11:05 ERROR juju.cmd supercommand.go:430 failed to bootstrap environment: PUT request failed: BadRequest - XML Schema validation error in network configuration at line 39,18. (http code 400: Bad Request)"
[18:15] OK, juju doesn't like TAB indentation :(
[18:17] however, after adding agent-metadata-url: https://streams.canonical.com/juju/tools and agent-stream: "released" I still get the XML BadRequest error
[18:18] juju status complains that it can't connect to the API server without admin-secret
[18:20] joec1: so you've already bootstrapped, these issues are coming from juju deploy ?
[18:20] FYI, I'm trying to start a Juju environment in an already set up Azure service that has VMs and a local network configured already, could that be the problem?
[18:20] no, I haven't bootstrapped at all
[18:20] that does sound like it could be part of the issue
[18:20] the existing VMs not so much
[18:20] but altered networking - indeed
[18:21] i would have thought that changing the region to US West (so long as this was still vanilla networking, et al.) would have been successful. Did those networking changes propagate globally?
[18:22] mmm not sure
[18:22] i'm not sure how I can check because the Azure web portal doesn't appear to differentiate
[18:24] I'd imagine the local network config would propagate so VMs can be moved between regions easily
[18:24] I've got to go for 30 minutes, back soon!
=== joec1 is now known as joec1afk
[18:26] ack, cheers joec1afk
[18:32] lazyPower: aisrael: The .juju/environments.yaml written out by that Docker container is (a) owned by root:root and (b) has agent-stream at the top level, which doesn't appear to apply to an environment manually added.
[18:32] (i.e. I had to move the agent-stream declaration into my gce environment mapping)
[18:32] Odd_Bloke: interesting, we pass a -v to volume mount our $JUJU_HOME which should have copied your local environments.yaml
[18:34] lazyPower: I actually did (copy-pasting from earlier), -v $HOME/.juju-trunk:/home/ubuntu/.juju.
[18:35] But I wouldn't have had a config pointing at the devel tools anyway. :)
[18:35] lazyPower: I usually point it to a fresh juju path, so as not to trample over an existing environment
[18:36] weirdness, #disclaimer - i haven't used the nightly image yet - but this is good feedback if it's being silly on volume mounts.
[18:36] Odd_Bloke: I'll take a look at that. I thought I'd fixed the top level thing. :/
[18:38] aisrael: I exited out of the pretty curses interface (because it didn't have a GCE option).
[18:38] Odd_Bloke: did you exit out of juju-quickstart?
[18:39] Odd_Bloke: ahh. That'd definitely cause the error with agent-stream being nested incorrectly.
[18:39] aisrael: I did indeed. #prebugging
[18:41] Odd_Bloke: thanks for that! I've added issues to the project. I'll see to getting those fixed.
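
[editor's note: two YAML gotchas from this stretch of the log, illustrated together - joec1's 18:15 discovery that YAML indentation must be spaces, never tabs, and Odd_Bloke's 18:32 point that agent-stream only takes effect inside the environment mapping it applies to. The snippet is illustrative, not a pasted config.]

    environments:
      gce:                    # indent with spaces only; a TAB here produces
        type: gce             # "found character that cannot start any token"
        agent-stream: devel   # per-environment, not at the top level
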
=== joec1afk is now known as joec1
[18:42] aisrael: Thanks!
[18:43] Next question: I'm trying out deploying Jenkins; I've juju deploy'd, and I have agent-status 'executing' and workload-status 'maintenance'. Should I translate this as "patience, my young padawan"?
[18:48] the more I think about my issue the more I think it has something to do with the EA subscriptions
[18:48] for example, the Azure Market doesn't work for me in the default Azure portal
[18:50] OK, I've now got agent-status 'executing' and workload-status 'active', but Jenkins is not running on jenkins/0.
[18:51] I think it's just patience at this point
[18:51] how long has it been?
[18:52] 2015-03-31 18:43:49 INFO unit.jenkins/0.start logger.go:40 * Starting Jenkins Continuous Integration Server jenkins
[18:52] 2015-03-31 18:43:49 INFO unit.jenkins/0.start logger.go:40 ...done.
[18:52] So ~10 minutes.
[18:52] `sudo service jenkins status` reports "Jenkins Continuous Integration Server is not running"
[18:53] did you deploy the slave too?
[18:54] Nope; shall I?
[18:54] I assume so, it's what the instructions say
[18:54] "To deploy Jenkins server you will also need to deploy the jenkins-slave charm."
[18:54] *shuffles feet* *avoids eye contact*
[18:54] https://jujucharms.com/jenkins/
[18:54] after that you expose it and it should work
[18:58] bah, I give up for now, many thanks again @lazyPower and @aisrael for offering assistance
[18:59] Rut roh: http://paste.ubuntu.com/10713842/
[19:00] I suspect that's a problem with GCE firewalls.
[19:01] How can I get juju to retry the hook (so I can try to fix it manually)?
[19:03] juju resolved --retry
[19:13] OK, I think that failure is happening because the Jenkins server isn't running.
[19:14] And Jenkins isn't running because of... a buffer overflow. \o/
[19:15] http://paste.ubuntu.com/10713939/ to be exact.
=== scuttle|afk is now known as scuttlemonkey
[19:32] Odd_Bloke, what size is the instance?
[19:35] jcastro: A GCE g1-small, which only has 1.7GB of RAM.
[19:35] So that certainly seems a likely culprit.
[19:35] yeah that's my first guess, out of RAM, but that's a guess
[20:41] ericsnow: wwitzel3: The europe-west1-a zone has been deprecated and removed in GCE, but juju just tried to use it.
[20:42] Odd_Bloke: it only tries zones that GCE offers (we get a list from the GCE API at runtime)
[20:43] ericsnow: http://paste.ubuntu.com/10714510/
[20:45] Odd_Bloke: yeah, dimitern ran into the same thing and we decided not to worry about it since the zone will be gone before 1.23 is released and we only try zones GCE tells us about at runtime
[20:45] Odd_Bloke: however, it is a pain
[20:46] Ah, OK, cool.
[20:46] Odd_Bloke: for now you can use a different region
=== scuttlemonkey is now known as scuttle|afk
[20:47] Odd_Bloke: I imagine this could be a problem in the future if GCE deprecates any other zones
[20:48] Odd_Bloke: we could add code to filter out known deprecated zones but that is a maintenance burden we didn't want to take on if we didn't have to
[20:51] ericsnow: The API returns the deprecation info: https://cloud.google.com/compute/docs/reference/latest/zones#resource
[20:53] Odd_Bloke: thanks for pointing that out, we must have missed it
[20:53] Odd_Bloke: could you open a bug for this?
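
[editor's note: a sketch of the filtering Odd_Bloke suggests at 20:51, using the deprecation info the GCE zones API already returns. Illustrative Python against the google-api-python-client, not the actual fix, which landed in juju-core's Go provider (see 22:10 below); assumes ambient GCE credentials and ignores result pagination.]

    from googleapiclient import discovery

    def usable_zones(project):
        """Yield the names of zones GCE has not marked as going away."""
        compute = discovery.build('compute', 'v1')
        result = compute.zones().list(project=project).execute()
        for zone in result.get('items', []):
            state = zone.get('deprecated', {}).get('state')
            if state in ('DEPRECATED', 'OBSOLETE', 'DELETED'):
                continue  # e.g. europe-west1-a at the time of this log
            yield zone['name']
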
[20:55] ericsnow: I could reopen https://bugs.launchpad.net/juju-core/+bug/1436655 ?
[20:55] Bug #1436655: gce provider should stop using deprecated zone europe-west1-a
[20:55] Odd_Bloke: that would be perfect
[20:55] thanks
[20:56] ericsnow: Actually, I can't change the status there; shall I comment and you reopen?
[20:56] Odd_Bloke: sounds good
[20:59] Odd_Bloke: done; thanks for looking into this.
[20:59] ericsnow: No worries; thanks for writing the provider! :)
[20:59] wwitzel3: could you take a look at #1436655?
[21:00] Bug #1436655: gce provider should stop using deprecated zone europe-west1-a
=== natefinch is now known as natefinch-dinnne
=== natefinch-dinnne is now known as natefinch-dinner
[21:03] ericsnow: yeah, I can take a look, I need a break from the CS stuff anyway
[21:03] wwitzel3: :)
[21:03] wwitzel3: the fix shouldn't be too bad
[21:06] jcastro: Jenkins still falls over in the same way on a 12G RAM instance. :(
[21:06] (And the same version doesn't do so when I run it in a similar GCE instance not via Juju)
[21:29] Odd_Bloke, hmm no clue then, I'm off to dinner so perhaps post to the list?
[21:32] ericsnow: I'm only working on one right now, I haven't started on the other yet
[21:32] wwitzel3: k
[21:33] Is there a way to stop juju from destroying a maas node when no services are deployed to it? I just destroyed the services running on a machine and it seems like it's freed immediately? It's pretty annoying to have to wait 10 minutes for maas to re-deploy the node
[22:09] ericsnow: is NewZone purely for testing purposes?
[22:09] wwitzel3: not sure
[22:09] * ericsnow takes a look
[22:10] ericsnow: I don't see it being used anywhere but tests, but just wanted to make sure
[22:10] wwitzel3: yeah, looks like it is just for testing
[22:10] ericsnow: I've got a fix, I inspect the deprecatedStatus of the zone and bubble that up via the availZoneUp method, I also annotate the default error if Google suggests a replacement
[22:11] wwitzel3: cool
[22:16] is there a way to get juju to reread the JUJU_DEV_FEATURE_FLAG env variable or do you have to destroy the environment and re-create it?
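
[editor's note: an illustrative rendering of wwitzel3's 22:10 fix description - reject a deprecated zone and mention Google's suggested replacement when one is given. A Python stand-in for the actual juju-core Go change; field names follow the GCE zone resource linked at 20:51.]

    def check_zone(zone):
        """Raise if GCE has deprecated this zone, naming any replacement."""
        dep = zone.get('deprecated')
        if not dep:
            return  # zone is fully available
        msg = 'zone %s is %s' % (zone['name'], dep.get('state', 'DEPRECATED'))
        if dep.get('replacement'):
            # GCE suggests a successor zone, e.g. europe-west1-b
            msg += '; suggested replacement: %s' % dep['replacement']
        raise ValueError(msg)
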