[00:23] hi davecheney, I see a 1.14 branch and series was created shortly after 1.13.3. Will the two fixes be merged into both 1.14 and trunk so that the release can be cut?
[00:25] I'm trying to deploy charms with juju 1.13.3 to a maas environment, and while the first charm allocates resources from maas, the next charm i try to deploy doesn't even though there are nodes in the ready state
[00:26] has anyone seen this before?
[00:31] zradmin: you may have clock/timing issues with oauth
[00:32] so time between the bootstrap node and the maas region controller is off?
[00:32] zradmin: does the debug-log complain of oauth problems?
[00:33] yes, it could be
[00:33] do both nodes have direct access to the internet?
[00:33] yes
[00:33] they should be set to pool.ntp.org
[00:34] what does debug-log say?
[00:35] it's still scrolling, i had the bug in 1.12 where the all-machines.log grew exponentially
[00:35] right, don't use 1.12
[00:36] yeah, i just found all the threads on that today, it was driving me crazy
[00:36] I got off that as soon as I could.
[00:36] 1.13.3 is what I'm working with
[00:37] yeah i'm on that as well
[00:37] just a second, i deleted the log file and am reattempting to deploy the charm to get fresh data
[00:38] you don't need to delete the log. I've never worried about that
[00:38] start up another terminal and do "watch debug-log" from your root node
[00:39] I run 3 terminals when I'm deploying
[00:39] 1 for commands, 1 doing a "watch juju status" and 1 doing "watch juju debug-log"
[00:39] that works well for me
[00:39] cool
[00:40] this is a sample of the garbage I'm getting in the log
[00:40] I sometimes have a 4th window to juju ssh to whatever node I'm working with
[00:40] ceph3:2013-09-12 00:38:09 ERROR juju runner.go:211 worker: exited "api": websocket.Dial wss://juju.unity:17070/: dial tcp 10.10.33.1:17070: connection refused
[00:40] ceph3:2013-09-12 00:38:09 INFO juju runner.go:245 worker: restarting "api" in 3s
[00:40] ceph3:2013-09-12 00:38:12 INFO juju runner.go:253 worker: start "api"
[00:40] ceph3:2013-09-12 00:38:12 INFO juju apiclient.go:106 state/api: dialing "wss://juju.unity:17070/"
[00:40] ceph3:2013-09-12 00:38:12 ERROR juju apiclient.go:111 state/api: websocket.Dial wss://juju.unity:17070/: dial tcp 10.10.33.1:17070: connection refused
[00:40] ceph3:2013-09-12 00:38:12 ERROR juju runner.go:211 worker: exited "api": websocket.Dial wss://juju.unity:17070/: dial tcp 10.10.33.1:17070: connection refused
[00:40] ceph3:2013-09-12 00:38:12 INFO juju runner.go:245 worker: restarting "api" in 3s
[00:40] ceph3:2013-09-12 00:38:06 INFO juju runner.go:253 worker: start "api"
[00:40] ceph3:2013-09-12 00:38:06 INFO juju apiclient.go:106 state/api: dialing "wss://juju.unity:17070/"
[00:40] ceph3:2013-09-12 00:38:06 ERROR juju apiclient.go:111 state/api: websocket.Dial wss://juju.unity:17070/: dial tcp 10.10.33.1:17070: connection refused^CConnection to juju.unity closed.
[00:40] ARRGGGH
[00:40] don't paste in here...
[00:40] whoops
[00:40] sorry about that
[00:40] use pastebin
[00:40] ok
[00:40] pastebin.ubuntu.com
[00:41] http://pastebin.ubuntu.com/6094948/
[00:41] my apologies to everyone else in the room
[00:41] you are having connection issues
[00:42] the ceph service in the log deployed correctly and seems to be checking in to juju
[00:42] but that was deployed under 1.12 and then i upgraded the tools
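For reference, the three-terminal workflow kurt_ describes boils down to the following (a sketch; note that `juju debug-log` already streams the consolidated log, so strictly only `status` needs `watch`):

    # terminal 1: issue commands, e.g.
    juju deploy ceph
    # terminal 2: poll overall environment state
    watch juju status
    # terminal 3: follow the consolidated log from the bootstrap node
    juju debug-log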
[00:43] should i just destroy it again and start clean from 1.13?
[00:43] I think you may need to destroy-env - but I can't answer for certain
[00:43] ok, I'll try that first
[00:43] also - from your node, make sure you can ping out to some internet hosts
[00:44] why is 1.12 listed under juju/stable? I've had nothing but issues since testing with it
[00:44] the python client never gave me too many issues
[00:45] I'm not with canonical, can't help you with that
[00:45] sorry
[00:45] zradmin: the juju team has chosen to do evens for stable, odds for unstable, and as I understand it they just haven't done a 1.14 release yet..
[00:45] ah yeah, I meant to ask "do you know why"
[00:45] sarnold: isn't 1.14 for saucy?
[00:45] .. never mind the results of annoying bugs in the stable series :) but the intention was for 1.13 to be less stable than 1.12 because it was under more active development
[00:46] ah ok
[00:46] 1.12 is dead though, I think I heard
[00:46] yeah probably best to think that :)
[00:46] 1.13.3 is still the most recent in saucy, I hope they can fix that up before too much longer
[00:47] it's been working pretty well for me so far
[00:47] cool, I'm hoping i can get my test openstack environment up and running soon
[00:48] ok now when trying to destroy the environment I'm getting a 409 conflict message
[00:49] did you juju resolve before trying to do so?
[00:49] ah…wait
[00:49] you are destroying your environment completely
[00:54] yeah I'm destroying each service individually right now and then seeing if it will let me destroy the environment
[00:56] it's not
[00:57] getting the maas 409 error still
[00:57] ok, after removing all the nodes from maas it let me destroy the environment
[00:59] ok well this is going to take a bit while the environment bootstraps itself again, but thank you kurt_ and sarnold for the assistance!
[00:59] (and the lesson in pastebin)
[01:00] zradmin: I figured you may need to do that. you may be having some strange connectivity issues
[01:01] good luck zradmin :)
[01:02] sarnold: what does "agent-state-info: 'hook failed: "relation-changed"'" mean?
[01:02] getting that from nova-cloud-controller after having deployed some other services
[01:03] kurt_: eek, no idea, sorry
[01:03] sarnold: ok, thnx anyways
[01:04] i had an issue with that under 0.7, but got past it. which service were you relating to nova-ccc when it gave you that message?
[01:06] zradmin: it happened sometime after deploying nova-compute and adding the relations
[01:08] kurt_: was it connected to keystone/rabbitmq etc. already? Also, are you following this guide: https://wiki.ubuntu.com/ServerTeam/OpenStackHA
[01:08] zradmin: I'm loosely following that guide
[01:09] kurt_: if you're following the HA guide, my biggest problem stemmed from ceph not actually setting up the osd's properly, but for some reason it let me stand up the rest of the services with no noticeable errors. The root of that was essentially that the services never started on the VIPs, so nova-ccc was the first service to report a problem for me
[01:10] yes, that's a common problem a few of us have run into. it's about topology too
[01:11] I'm actually trying to consolidate services down to as few nodes as possible
[01:11] and I'm definitely not doing HA, and I'm 100% on VMs in VMware Fusion
[01:12] yeah i was running a lot of the core api services on vms as well
[01:13] I've gotten very close, but not having the topology right has bitten me more than once
[01:13] I've gotten everything done minus the ability to spin up VMs
[01:13] in openstack I mean
[01:13] I wasn't using ceph though...
[01:16] i see. I was able to upload images on my last attempt and start creating instances, but they would never finish deploying - quantum was my issue i think.
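The teardown sequence zradmin eventually landed on looks roughly like this (a sketch; the service names are placeholders for whatever `juju status` lists):

    # destroy each deployed service individually first
    juju destroy-service ceph
    juju destroy-service mysql
    # if destroy-environment still returns a 409 conflict, release the
    # allocated nodes in MAAS, then tear the environment down
    juju destroy-environment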
[03:52] with juju-core i cannot bootstrap on a maas install when i'm behind a proxy. It used to work (after some change in MAAS) with pyjuju
[03:52] cloud-init error: https://pastebin.canonical.com/97399/
[03:52] any idea what it could be and if there's a way to fix it?
[03:56] melmoth: that log is from pyjuju not juju-core
[03:58] hmmm, so, i ended up with juju py on my bootstrap node...
[03:58] huh, actually, nope
[03:58] but there's nothing like juju installed on the bootstrap node
[03:59] I can tell by the log format, and that it mentions py files not go files
[04:00] well, it mentions 2013-09-12 03:19:23 ERROR juju supercommand.go:235 command failed: no reachable servers
[04:01] but still, no version of juju seems to be installed, and i did not spot any error (like an apt-get install failing) before that one
[04:01] ahhh
[04:01] Sep 12 03:08:00 bootstrap [CLOUDINIT] cc_apt_update_upgrade.py[WARNING]: Source Error: ppa:juju/stable:add-apt-repository failed
[04:01] ah so it did
[04:02] ah, it is the cloud-init python failure
[04:02] * thumper sighs
[04:03] it was not able to install the ppa for juju-core, most probably because the gpg key stuff failed behind a proxy (i had to change that in maas, it used to work with pyjuju)
[04:05] ahhh, i think i know, my previous change only added the ppa:juju/pkgs and here i think it's trying ppa:juju/devel
[04:51] where can i find the list of commands that cloud-init feeds to the bootstrap node?
[04:59] melmoth: on the machine or in the code?
[04:59] in the code, so i can change it.
[04:59] melmoth: mostly in juju-core/environs/cloudinit/cloudinit.go
[04:59] thanks
[05:00] is it like python, compiled on the fly so i can change it without repackaging the whole thing?
[05:00] no
[05:00] go is a compiled language
[05:00] grumble
[05:00] and it creates a statically linked executable
[05:01] I know that there is effort around making sure that juju works in private clouds
[05:01] with firewalls etc
[05:01] so please document your issues to the juju mailing list
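What cloud-init is failing on above is essentially the PPA add. A sketch of reproducing it by hand on the bootstrap node, assuming a proxy at a placeholder address (`add-apt-repository` fetches the PPA signing key from the Ubuntu keyserver, which is the step that typically breaks behind a proxy):

    export http_proxy=http://proxy.example.com:3128
    export https_proxy=$http_proxy
    # this is roughly the step the cloud-init warning says failed:
    add-apt-repository ppa:juju/stable
    apt-get update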
[05:45] any charmers about who feel like reviewing my squid-reverseproxy fixes? they're pretty minor, but actually let the charm work on juju > 0.7
[06:10] bradm: this one?
[06:10] https://code.launchpad.net/~charmers/charms/precise/squid-reverseproxy/trunk/+merge/185202
[06:11] diff, he is empty
[06:14] davecheney: huh
[06:15] davecheney: I did the merge proposal against http://bazaar.launchpad.net/~brad-marshall/charms/precise/squid-reverseproxy/http-port-config/revision/42, I thought
[06:15] davecheney: I must have screwed it up somehow
[06:15] * bradm retries.
[06:17] davecheney: https://code.launchpad.net/~brad-marshall/charms/precise/squid-reverseproxy/http-port-config/+merge/185204
[06:19] davecheney: looks better?
[06:20] * davecheney looks
[06:52] davecheney: let me know if there's a problem with that merge
[06:53] bradm: change looks good
[06:53] i'll have to wait for marcoceppi
[06:53] i'm just a baby charmer
[06:53] i still have my training wheels attached
[06:53] davecheney: cool
[06:53] davecheney: I'm redoing the python-moinmoin charm in python too, using charmhelpers
[06:53] noice
[06:55] davecheney: https://code.launchpad.net/~brad-marshall/charms/precise/squid-reverseproxy/fixed-ports-path/+merge/184023 is still open too, if you can get someone to merge it that'd be great :)
[06:57] bradm: looking
[06:57] small changes are good
[06:57] let's do more of those
[06:57] ok, same deal
[06:57] need marcoceppi to show me the ropes
[06:57] davecheney: I figure small, bite-sized chunk changes make it easier on everyone
[06:58] davecheney: since it's obvious what they're doing, and I'm new at this too :)
[07:35] I have a charm which populates data into a database when the db-relation-{joined,changed} hooks are fired. The process of loading the data can take > 30mins. The hook does not return until the load completes, however juju status reports that the hook has successfully completed well before it actually has. Currently the charm doesn't log anything while the load is running, if that's relevant. Is this the expected behaviour?
[07:37] I'm looking into implementing multiplayer functionality into an Ubuntu Touch game. What charms should I be looking at?
[07:55] gnuoy: I wondered whether a timeout might be involved, but as far as I can see, it's just Cmd.Wait when running hooks and no other logic
[07:59] wellsb: what kind of thing are you after? I don't think there are any charms for game servers, unless nyancat counts. Looking at any charm which uses expose should give you some ideas though.
[08:01] mgz: I'm thinking about something like the channels api google app engine provides. Perhaps I can use node.js in tandem w/ haproxy to create a socket server? Then my clients can connect to that? I really don't have much experience in this area
[08:04] I guess without a pomelo or maple charm, this isn't really possible
[08:12] wellsb: personally, I'd have a charm specific to your game server, rather than looking for a generic game-server charm that you then customise
[08:13] so, you'd write a charm that installs and uses nodejs/pomelo, rather than trying to have a generic pomelo charm with enough configurability to work with any game
[08:18] gnuoy: This is Bug #1200267, or at least a facet of it.
[08:18] <_mup_> Bug #1200267: Expose when stable state is reached
[08:19] stub, that's the badger by the looks of it
[08:19] gnuoy: juju status just tells you that the -joined hook has successfully run, which it probably has (given it probably needed to wait until the db's -joined hook had run and databases exist etc.)
[08:21] stub, thanks
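On gnuoy's point that the charm logs nothing during the long load: a sketch of a hook that reports progress via the `juju-log` hook tool, so the work at least shows up in `juju debug-log` (`load_data` is a placeholder for the charm's real import step):

    #!/bin/sh
    # hooks/db-relation-changed (sketch)
    juju-log "starting data load; this can take 30+ minutes"
    # emit a heartbeat every minute while the loader runs in the foreground
    ( while true; do sleep 60; juju-log "data load still running..."; done ) &
    ticker=$!
    load_data                     # placeholder for the charm's own loader
    status=$?
    kill "$ticker" 2>/dev/null
    juju-log "data load finished (exit $status)"
    exit "$status"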
[09:41] hi, i'm trying to add some nagios functionality to a gerrit charm, and i need some advice. I see other charms like memcached, postgres... that are using nagios plugins for it, but there isn't a nagios plugin for gerrit, what should be the best way to proceed?
[09:43] yolanda: you can monitor gerrit's basic availability using the http plugin
[09:44] that sounds like a good starting point at least
[09:45] lifeless, that's for http, and if i want to check the ssh port maybe i use the check_tcp one?
[10:55] davecheney: you around?
[12:29] evilnickveitch, ping
[12:31] fwereade_, hi
[12:34] evilnickveitch, I wanted to check in quickly about the docs I proposed to see whether they were seeming sane?
[12:34] fwereade_, it looks good so far, I haven't finished them all yet :)
[12:34] you did quite a bit of work there
[12:35] evilnickveitch, cool, I feel like I should do more really
[12:35] there are a few XXXX type bits
[12:35] but really the current needs landing I think
[12:35] mgz, fuck, yeah, I just remembered I didn't do sample charm metadata and config files
[12:35] mgz, were there more you spotted?
[12:35] just the examples I think
[12:37] mgz, thanks, well spotted
[12:37] evilnickveitch, but also to say that there's an effort underway to get all useful-for-developers docs collected in one place
[12:38] fwereade_, yeah, i know that, we are working out how that can be done
[12:38] evilnickveitch, and that initial indications suggest that something like restructured text docs in juju-core itself may be the most suitable source format
[12:38] evilnickveitch, but, yeah, the important thing is that you're aware
[12:39] evilnickveitch, anyway I tried to cover all the stuff I could think of that's relevant for charm authors
[12:40] evilnickveitch, the major holes are the subordinates page (which I think I recognise as basically the original spec document) and the implicit relations page (which I couldn't really make head or tail of)
[12:40] evilnickveitch, but I didn't touch those for fear of never finishing
[12:41] heh, I think m_3 already went over subordinates, but once the dust settles on all the new bits we should reappraise it
[12:46] evilnickveitch, ah, cool, I may be talking about the docs as of a few versions ago
[13:52] stub: I'm also curious if run should be in its own module
[13:53] which sort of goes along with your API stability question
[13:54] wedgwood: yeah, it does look a little lonely
[13:56] it should go in with the fixture - nothing else about it is charm specific so its only purpose in charm-helpers is to support the fixture.
[13:57] stub: ok, so while I'm doing a proper review...
[13:58] in keeping with the 1.0 goals, I think both modules can be combined and they need more docs. An example in the module docstring would be excellent.
[14:04] stub: ah, and you'll also need to handle python-fixtures installation.
[14:06] wedgwood: I need to declare the dependency if it is in contrib? Or is this because you want it moved to core?
[14:07] stub: I think that there will be things in core that handle their own dependency installation. like the fetch and archive modules. I want to keep the actual dependencies (as in setup.py) down
[14:09] stub: see charmhelpers.fetch.bzrurl
[14:09] I don't think there is any sane way I can help with the python-fixtures dependency apart from documenting it.
[14:11] stub: ^^ and also, I don't mean for it to be in charmhelpers.core.testing, just at charmhelpers.testing
[14:12] oh, yeah. that is better.
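Circling back to the gerrit monitoring question above: a minimal sketch of the two checks suggested there (hostname and ports are assumptions; 8080 and 29418 are gerrit's conventional http and ssh ports, and the plugin paths follow the standard nagios-plugins package):

    # basic http availability, per lifeless's suggestion
    /usr/lib/nagios/plugins/check_http -H gerrit.example.com -p 8080 -u /
    # and the ssh port, per yolanda's check_tcp idea
    /usr/lib/nagios/plugins/check_tcp -H gerrit.example.com -p 29418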
[14:19] stub: If I understand the use well enough, I *think* the API is solid. Adding additional kwargs to handle variations on placement shouldn't break anything.
[14:23] juju - how to set environment variable before running script inside hooks/install | http://askubuntu.com/q/344687
[14:53] wedgwood: Ta. I'll do those changes tomorrow.
[14:54] stub: cool. don't know if you noticed that I commented on the MP. thanks man and have a good night.
[14:55] wedgwood: yes, just saw the notification come through.
[14:56] o/
[15:47] does anybody know if relation-ids can return relation ids in a broken state?
[15:47] or are all relation ids that are returned guaranteed to be in a working state?
[15:47] ahasenack: there was talk on the list about this.
[15:48] marcoceppi: I was wondering if I could rely on relation-ids to know if a relation is established or not
[15:48] ahasenack: I'm not 100% sure, looking in the archives
[16:11] do you guys have any idea how I could end up with this error on the agent of a lxc machine:
[16:11] ERROR juju machine.go:286 running machine 1 agent on inappropriate instance: machine-0:26b172cc-.....
[16:12] where machine-0:26b172cc is the Nonce that I got
[16:15] avoine: how can we reproduce that error?
[16:18] I guess you should have the error when deploying using lxc
[16:19] marcoceppi: I have a service that will only start (initscript-wise) after a db relation is joined
[16:19] marcoceppi: was wondering what's the best way to track that
[16:19] marcoceppi: touch a file at the end of db-relation-changed for example?
[16:19] marcoceppi: the problem is actually in config-changed, it tries to start the service at the end. But the hook execution order at deploy time is
[16:20] marcoceppi: install hook and then config-changed hook
[16:20] so it's that run of config-changed where the start will fail, because it's not related to the db yet
[16:21] ahasenack: you can list relations and check if there is a db one
[16:21] ahasenack: that's how I do it, touch files, etc
[16:21] avoine: the question then becomes, will relation-ids only return established relations, or does it include broken ones in its output? Relations with errors
[16:21] ahasenack: I touch a file, then run config-changed at the end of every hook, which will run hooks/start
[16:21] marcoceppi: ok
[16:27] relation_get_all() in the haproxy charm loops over relations and fetches relation information for each one
[16:27] then uses that to configure haproxy.cfg
[16:27] marcoceppi: I'll dig some more and come back with a bug report if I find something
[16:28] avoine: cool, I've not seen that error, but if you run everything with like --debug and -v you should get plenty of information
[16:28] ok
[16:28] thanks
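A sketch of the touch-a-flag pattern marcoceppi describes above (the flag path and service name are placeholders; `$CHARM_DIR` and `juju-log` are standard in the hook environment, and hooks run with the charm directory as the working directory):

    #!/bin/sh
    # hooks/db-relation-changed (tail end): record that the db relation
    # exists, then re-run config-changed so the service can start
    touch "$CHARM_DIR/.db-related"
    exec hooks/config-changed

    #!/bin/sh
    # hooks/config-changed: only start the service once the flag exists
    if [ -f "$CHARM_DIR/.db-related" ]; then
        service myapp start
    else
        juju-log "db relation not established yet; deferring service start"
    fi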
[18:40] marcoceppi: can you tell me how multihomed AND statically assigned IP address systems are achieved with maas and juju (ie. for openstack)?
[18:40] I was considering putting this out there to ask ubuntu, but thought maybe someone here could answer this
[18:45] kurt_: did you ever sort your quantum thing?
[18:45] jcastro: not yet.
[18:47] jcastro: did you get everything working yesterday?
[18:49] no, it's on my todo this weekend
[18:49] jcastro: do you have any ideas on my question? should I put it out there to ask ubuntu? I'll bet jamespage could answer it easily.
[18:50] yeah
[18:50] marcoceppi: 33 unanswered `juju` questions
[18:50] oh hey sinzui
[18:51] we should put unanswered questions from askubuntu tagged with "juju" in the review queue as well
[18:51] just like a link to: http://askubuntu.com/questions/tagged/juju?sort=unanswered&pageSize=50
[18:51] jcastro: i've got the review queue bearing down on me. Questions will have to wait
[18:52] jcastro, ack, jcsackett, can you report a bug about that so that we can include it in your efforts
[18:52] marcoceppi: I was mentioning you as sort of "just nod and validate my idea!"
[18:52] sinzui: rock on ... lp:charmworld?
[18:52] * marcoceppi nods to jcastro
[18:52] yep
[19:01] jcastro: was the problem you saw yesterday in adding a relationship between nova-compute and nova-cloud-controller? I keep seeing "agent-state-info: 'hook failed: "relation-changed"'"
[19:02] both can be deployed without issues, but as soon as I try to join them, nova-cloud-controller errors out
[19:03] http://pastebin.ubuntu.com/6098340/
[19:03] kurt_: there is a debug thing
[19:03] kurt_: I think you'll need to do that to see what's failing
[19:03] I'm not even getting past install hooks on some of them, but remember I'm on the local provider, there's a bunch of issues there left to resolve
[19:03] this weekend I'm going to try to fire up the bundle on HP
[19:04] lifeless: you are talking about the debug hooks, right?
[19:04] I was considering trying that next
[19:04] kurt_: no, the watch-all-the-logs and related drop-into-a-shell-when-a-hook-fires thing.
[19:04] jcastro will know what I'm blathering about
[19:05] juju debug-logs
[19:05] yes, I do that
[19:05] output is above in pastebin :)
[19:06] kurt_: we're in your neck of the woods next month, I am wondering if we should just get together for beers with your stuff
[19:06] and get you sorted for real
[19:06] ah yeah sure :)
[19:06] where and when are you talking?
[19:08] lifeless: I was referring to debug-hooks - I am wondering how useful that will be here
[19:08] yikes, debug-hooks, not -logs
[19:08] sorry, long day!
[19:09] ah…ok, that makes more sense :D
[19:09] kurt_: week of 21 October, though hopefully it won't be this same issue, heh
[19:11] sinzui: there's a card on our kanban for askubuntu now.
[19:12] thank you jcsackett
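For anyone following along, the drop-into-a-shell tool being discussed is `juju debug-hooks`. A sketch of using it on the failing unit from kurt_'s paste (the unit name is assumed from the log):

    # attach to the unit; juju opens a tmux session there and drops you
    # into a shell when the next hook fires
    juju debug-hooks nova-cloud-controller/0
    # in a second terminal, re-queue the failed hook so it fires again
    juju resolved --retry nova-cloud-controller/0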
[19:34] is there a way in 1.13 to destroy subordinate services yet?
[19:36] zradmin: yeah, you've been able to destroy subs for some time
[19:36] zradmin: just remove the relation
[19:37] marcoceppi: odd, i removed the relation and the subordinate service and the main service have been stuck in a dying state for hours now
[19:38] zradmin: can I see the juju status output?
[19:39] marcoceppi: here it is http://pastebin.ubuntu.com/6098492/
[19:40] zradmin: it says the agents are stopped
[19:41] zradmin: anyways, run `juju resolved mysql-hacluster/0; juju resolved mysql-hacluster/1`
[19:41] zradmin: that should finish the removal of the subs, and then the final cleanup of the juju status
[19:41] zradmin: whenever a unit (or sub) is in an error state all future events for that unit are queued and event processing is stopped, even on a destroy-service or removal of a sub
[19:41] zradmin: you need to mark the error as resolved in order for juju to process the next event
[19:42] zradmin: that's why you see it in life: dying but it's not dead, because it's stuck with an error
[19:42] zombie!
[19:43] I would yell at evilnickveitch because this isn't in the docs yet, but he just quit
[19:44] marcoceppi: ah ok, that makes sense - still relearning all the new changes in the go rewrite. in .7 it just seemed to do everything instantly
[19:45] marcoceppi: that worked btw, so tyvm!
[19:45] zradmin: yeah, that was a bug (technically) that has been fixed in the rewrite
[19:45] zradmin: you're welcome!
[20:11] Do debug-hooks work in 1.13.3? It appears I need to manually set a bunch of environment variables. Maybe it's not intended to work with the add-relation hooks?
[20:22] trying to deploy an haproxy for mysql I'm getting an error now where corosync isn't starting because it's missing some principal? i've got the VIP etc set in my config file so i don't know what is missing to finish starting the service properly. here's the debug-log section that's relevant http://pastebin.ubuntu.com/6098666/
[21:46] zradmin: haproxy for mysql, I don't think those two play with each other
[21:49] Juju debug-hooks for add-relation? | http://askubuntu.com/q/344862
[21:52] marcoceppi: it's worked in the past, but it configures in active/passive mode (it's also what's in the public documentation I'm following on https://wiki.ubuntu.com/ServerTeam/OpenStackHA)
[21:53] marcoceppi: i got a little farther with it, apparently maas and juju now deploy the nics as bridges (for lxc support maybe?) so i had to adjust the config for that but the VIP didn't come up
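To restate marcoceppi's explanation as a repeatable recipe: when a service is stuck in `life: dying`, clear the error on each of its units so the queued removal events can be processed (a sketch; the unit names are the ones from the log above):

    # clear the error state on each stuck unit; the queued events,
    # including the removal, then run
    for unit in mysql-hacluster/0 mysql-hacluster/1; do
        juju resolved "$unit"
    done
    # the service should now move from dying to dead and drop out of status
    juju status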