=== CyberJacob is now known as CyberJacob|Away
=== zz_mwhudson is now known as mwhudson
=== claytonk is now known as ninjix
=== ninjix is now known as claytonk
=== mwhudson is now known as zz_mwhudson
=== timrc is now known as timrc-afk
[05:18] hey guys, anyone around?
[05:30] bcsaller: ping
=== CyberJacob|Away is now known as CyberJacob
[08:54] Hi.. Need a bit of help with a local installation of openstack.. when i try to bootstrap i get the following http://paste.ubuntu.com/6755106/
[09:51] juju on openstack, bootstrap fails -> caused by: the configured region "regionOne" does not allow access to all required services, namely: compute, object-store
[11:08] hi marcoceppi, can You confirm, that wordpress + memcached deployment works properly on Windows Azure? I'm deploying: mysql + wordpress + add-relation, and then deploy memcached + add relation with wordpress. After all I have: agent-state-info: 'hook failed: "cache-relation-changed"'
[11:09] if I run juju resolved --retry wordpress/0 it is going to run, but in WP I have "Plugin settings are not yet saved for the site, please save settings! » WP-FFPC Settings"
[11:09] and "Memcached cache backend activated but no PHP memcached extension was found.
[11:09] Please either use different backend or activate the module!
[11:10] also Memcached configuration in WP-FFPC is for 127.0.0.1
[11:44] local openstack -> ERROR failed to list contents of container: juju caused by: request (http://172.16.93.211:8080/swift/v1/juju?delimiter=&marker=&prefix=tools%2Freleases%2Fjuju-) returned unexpected status: 204; error info:
=== amol_ is now known as amol
=== timrc-afk is now known as timrc
[14:23] I'm having troubles with 1.17.0 and the local provider not booting the lxc containers. They sit in a "pending" state.
[14:33] hello, can anyone point me to the actual link of the hooks section mentioned in https://juju.ubuntu.com/docs/authors-charm-components.html ? the one in there is just a link to the same document
[14:35] perrito666, https://juju.ubuntu.com/docs/authors-charm-hooks.html
[14:35] lazypower: thank you very much
[14:35] perrito666, no problem. Thank you for pointing out the mis-link :)
=== TheLordOfTime is now known as teward
[15:11] michal_s: memcached is broken in wordpress
[15:38] marcoceppi: thanx for info. There is information about memcached in WordPress charm readme, so maybe there should be no info about it there? ;)
[15:39] michal_s: well, it used to work, and it should just be fixed
[15:39] there's a bug about it on the wordpress charm
[15:39] oh, ok :)
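For reference, the sequence michal_s describes above maps to roughly the following juju 1.x commands; the service names come from the conversation, and the final resolved --retry is the workaround michal_s mentions at 11:09, not a fix for the broken cache relation.

    juju deploy mysql
    juju deploy wordpress
    juju add-relation wordpress mysql
    juju deploy memcached
    juju add-relation wordpress memcached
    # once the cache-relation-changed hook fails, retry it:
    juju resolved --retry wordpress/0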
[17:04] hazmat: ping, ev told me you may be interested by http://paste.ubuntu.com/6757161/
[17:08] vila, i am, it looks like a bug in core
[17:09] vila, i almost always do juju-deployer -TW
[17:09] it will stream what's happening back to the client
[17:10] hazmat: what I understood (incompletely ;) is that this happened when a service has a hook error, this seems to block/upset juju-deployer leading to that traceback
[17:11] vila, deployer should be auto resolving errors when doing destroy
[17:11] but noted
[17:11] hazmat: the hook failed was gunicorn wsgi-file-relation-broken and the charm was expecting the service to exist and destroy it
[17:11] hazmat: so the log was suggesting that this hook was run twice
[17:11] hazmat: oh, noted
[17:11] vila, can you pastebin juju-deployer -TW against that env if it's still active
[17:12] actually with trunk (1.17) we have a simpler way of cleanly doing this via terminate-machine --force
[17:13] hazmat: not active anymore, will ping you when I encounter it again (and I run lp:juju-deployer) err, wait, 1.17 refers to ?
[17:14] hazmat: ha, juju, yeah, 1.17.0-0ubuntu1~ubuntu13.10.1~juju1 here
[17:14] yup .. refers to juju-version
[17:15] hazmat: ok, thanks for the feedback so far, will come back to you when I've more meat ;)
[17:17] vila, np
[17:21] cjohnston: meet hazmat who asked for -TW output
[17:21] hazmat: meet cjohnston how just reproduce the traceback in an active env \o/
[17:21] s/jow/who/
[17:21] it's running -TW now
[17:21] s/how/who/ with new fingers
[17:33] vila: hazmat https://pastebin.canonical.com/103023/
[17:44] cjohnston, thanks, that clarifies it a bit, does look like a bug in deployer to me
=== natefinch is now known as natefinch-lunch
[17:47] hazmat: ok.. do you need a bug filed?
[17:47] cjohnston, that would be great
[17:48] cjohnston, also which version of deployer are you using?
[17:48] hazmat: trunk I believe
[17:48] cool
[17:49] cjohnston, hmm.. that's not trunk.. it looks like a package install
[17:49] ubuntu@tarmac:~/projects/amulet$ aptitude search deployer
[17:49] p juju-deployer
[17:52] would it be packaged under anything else hazmat ? I'm guessing it comes from http://bazaar.launchpad.net/~canonical-ci-engineering/ubuntu-ci-services-itself/trunk/view/head:/tests/run#L75
[17:53] cjohnston, that doesn't match your traceback paths
[17:53] oh.. nevermind it does
[17:53] i was looking at jujuclient
[17:54] cool
[17:55] hazmat: https://bugs.launchpad.net/juju/+bug/1269519
[17:55] <_mup_> Bug #1269519: juju-deployer -T fails with jujuclient.EnvError:
[17:55] <_mup_> Bug #1269519 was filed: juju-deployer -T fails with jujuclient.EnvError:
[17:55] cjohnston, thanks.
[17:57] cjohnston, also this is a random issue?
[17:57] cjohnston, or reproducible?
[17:57] hazmat: I've reproduced it twice, vila at least once
[17:58] cjohnston, any chance i can get a copy of machine-0.log from that env
[17:58] should be in machine 0 @ /var/log/juju/
=== bjf is now known as bjf[afk]
[17:59] cjohnston, via chinstrap is fine.. talking with a core dev about as well
[17:59] hazmat: I can give you access if you like
[18:00] cjohnston, sure that works.. i just need a copy of the file, not intending to touch the live env
[18:00] but if direct access is easier sure
[18:00] hazmat: ubuntu@10.55.32.15 should get you there
[18:03] cjohnston, got it thanks
[18:04] :-)
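A minimal sketch of the teardown path hazmat recommends above, assuming a juju-deployer recent enough to take the -T/-W flags and a 1.17 client for the --force variant; the machine id in the last line is only an example.

    # tear down the deployment, streaming events back (-W) and
    # terminating the machines as well (-T)
    juju-deployer -TW
    # with juju 1.17+ a stuck machine can also be removed directly:
    juju terminate-machine --force 2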
[18:08] hazmat: just making sure you don't think you'll need anything else before I blow it up
[18:09] cjohnston, all good, thanks
=== zz_mwhudson is now known as mwhudson
[18:21] hazmat: fwiw, I did just reproduce it again, so each time I've run it it's given the error
[18:23] cjohnston, noted
[18:26] cjohnston, one more question.. what version of jujuclient are you using (looks like that one is from package)
[18:29] 0.0.7+bzr12-0~bzr16~precise1ubuntu1~0.IS.12.04
[18:31] cjohnston, bug filed against pyjuju ;-)
[18:31] the deployer one you filed
=== mwhudson is now known as zz_mwhudson
[18:45] marcoceppi, I found a silly charm proof bug I think
[18:46] it thinks line 11 on the memcached charm readme is boilerplate, but it is not
[18:46] jcastro: cool, I'm about to roll a release so I'll take a look
[18:46] do a fresh pull of memcached, I just pushed up the readme
[18:46] it was an OG clint charm, so no readme at all. :p
[18:50] marcoceppi, memcached with wordpress being broken, is that a wordpress charm bug or a memcached charm bug?
[18:50] jcastro: wordpress
[18:52] marcoceppi, hah man, guess what my next audit is
[18:52] hadoop
[18:53] have funnnn
[18:54] jcastro: I have no idea why it says that
[18:54] marcoceppi, is it saying it for you?
[18:54] yeah, but line 11 is the deploy line :\
[18:55] I marked the charm as passing proof because it does
[18:55] for all intents and purposes, even though it does not, heh, and it's only a W: anyway
[18:55] I just found it strange
=== natefinch-lunch is now known as natefinch
[18:59] jcastro: found the problem
[19:00] you have this sentence in the readme: Though this will be listed in the charm store itself don't assume a user will know that, so include that information here:
[19:00] which is why it's matching
[19:00] so it's saying that boilerplate line 11 is in the readme
[19:00] OH!
[19:00] fixing
[19:00] I'll fix the message for this
[19:01] hey also
[19:01] can you cut something out of README.ex while you are there?
[19:01] cut ## Charm contact and everything below that
[19:02] k
[19:40] marcoceppi: heard you were ripping me a new one on the storage charm? :)
[19:46] Thats not what I said!
[19:46] oy
[19:46] dpb1: prepare yourself ;)
[19:47] hah
[19:48] no, it's not bad. as a tl;dr I find the structure interesting, have some concerns about data integrity, a little swamped, haven't been able to pound out a formal review response
[19:49] marcoceppi: heh, I know the feeling (swamped). We already have some follow-on branches, including one transforming it into python with charmhelpers and tests. btw.
[19:49] dpb1: oh, then by all means, feel free to push those up for review :D
[19:50] OK, sure. Let me get an ETA
[19:54] marcoceppi: still a bit off on that python conversion. WIP. Just expect we will follow up with it shortly. But the general structure and purpose wont'
[19:54] ... won't change.
[19:54] marcoceppi: IOW, sorry for the noise. lol
[19:54] dpb1: awesome, I'll simply move the bug to incomplete to remove it from the queue for now
[19:55] make sure to put the bug back at either "new" or "fix committed" when ready for review
[19:55] marcoceppi: sure, that is fine. Do you have even partial thoughts typed up? would be good not to cycle on those
[19:59] dpb1: I don't have anything typed really :\
[19:59] ok
[20:00] will ping when we are ready.
=== zz_mwhudson is now known as mwhudson
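For anyone following along, the proof run jcastro and marcoceppi were debugging above looks roughly like this; the lp:charms/memcached branch location is an assumption, and it assumes charm-tools is installed.

    bzr branch lp:charms/memcached
    cd memcached
    # charm proof checks the charm in the current directory; W: lines are
    # warnings (like the boilerplate false positive above), E: lines are errors
    charm proof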
[20:07] marcoceppi, https://bugs.launchpad.net/charms/+source/memcached/+bug/1269537
[20:07] <_mup_> Bug #1269537: Charm needs a peer relation
=== mwhudson is now known as zz_mwhudson
[21:01] hazmat: deployer can be a bit aggressive with agent state down for machines coming online
[21:01] is there a way to increase its timeout threshold?
[21:03] marcoceppi, there's a couple of timeout params that can be passed via cli params
[21:05] hazmat: would that fall under REL_WAIT ?
[21:05] it's getting tripped up before relations though
[21:07] marcoceppi, it's triggering against -t, --timeout
[21:07] it's a global timeout for the entire deployment
[21:07] says default is 45 mins? these are dying after about 5
[21:07] with the following error
[21:08] * marcoceppi gets it
[21:13] hazmat: jujuclient.EnvError:
[21:13] it seems agent state of a machine stays in "down" for about 45 seconds before moving to started
[21:13] marcoceppi, i had a long discussion with roger about that..
[21:14] hazmat: ah, I thought I saw some of it earlier today in here
[21:14] the state watcher was stopped, there was a corresponding conversation in juju-dev as well
[21:14] https://bugs.launchpad.net/juju-core/+bug/1269519
[21:14] <_mup_> Bug #1269519: Error on allwatcher api
[21:34] this really breaks automated testing :\
=== rogpeppe2 is now known as rogpeppe
[22:06] marcoceppi, yeah.. so in terms of trying to debug it and getting a fix from core, i think we need to turn the log level back to high
[22:06] hazmat: I'm open to do whatever to help fix it
[22:07] I have a way to replicate it though
[22:09] marcoceppi, you do? do tell.. rogpeppe mentioned he has a branch in the review queue that turns up logging around api conn behavior a bit https://codereview.appspot.com/52850043
=== bic2k-duo is now known as bic2k
[22:10] hazmat: both lazypower and mbruzek can replicate it
[22:10] marcoceppi: i wanna know!
[22:10] marcoceppi, but how?
[22:10] indeed, i was blaming the fact that i'm on plattered disks so the file copy took longer to happen than expected...
[22:10] but i have no evidence to back that up
[22:12] hazmat, i'm available for stack traces, just let me know what you need and i'll make myself available.
[22:14] * hazmat can't remember the syntax for turning up the logging level
[22:14] thumper should know
[22:14] there's a doc somewhere around here too that has that info
[22:14] https://lists.ubuntu.com/archives/juju/2013-September/002998.html
[22:15] there might be something newer, but that's easily accessible in my history :) hehe
[22:15] marcoceppi, if only it was the docs..
[22:15] * marcoceppi makes a bug
[22:17] sarnold, thanks
[22:28] sarnold, thanks, bookmarked
[23:32] lazypower, so re reproducing.. do you have a deployer file you can share?
[23:33] You bet, give me another 10 minutes to wrap up this call and I'll start
=== zz_mwhudson is now known as mwhudson
[23:39] hazmat, the output is going to have some chatter from amulet, is that going to be a problem?
[23:40] lazypower, that's fine.. what we're interested in is actually the log from the state server, machine-0.log, there in /var/log/juju
[23:40] lazypower, basically bootstrap, turn the logging way up, and then do your reproduce, and then send over the machine-0.log
[23:41] Ok, that helps. I just cranked the debug level to DEBUG, running the sequence now.
[23:41] cool
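Condensed, the capture sequence hazmat asks for above looks something like the following; the logging-config value is the one hazmat confirms a few lines further down, and the scp step assumes the log is readable by the default user.

    juju bootstrap
    juju set-env "logging-config==DEBUG"
    # ... reproduce the failure (lazypower drives this via an amulet run) ...
    juju scp 0:/var/log/juju/machine-0.log .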
[23:44] hazmat, i dont think i did it right
[23:44] http://paste.ubuntu.com/6759208/
[23:44] i see a ton of the provider output, but not so much on the state server.
[23:44] i take that back, i had not scrolled back far enough in history to see it.
[23:45] lazypower, yeah.. that looks correct
[23:45] it's the apiserver output that's of interest
[23:46] The behavior sequence is the containers go from pending => down => started
[23:46] when they hit down is when the state server bails
[23:47] lazypower, i don't see the symptom we're looking for in that log file.. i
[23:47] lazypower, ie. deployer exiting with an EnvError relating to 'state watcher was stopped'
[23:48] That bubbles up in the juju stacktrace of the actively running command
[23:48] let me wipe and restart
[23:51] hazmat, juju set-env 'logging-config==DEBUG;juju.api=DEBUG'
[23:51] is that the correct debug tuning line i want to run?
[23:53] lazypower, juju set-env "logging-config==DEBUG" should do
[23:53] http://paste.ubuntu.com/6759241/
[23:53] there's the parent running command stack trace
[23:53] lazypower, the log level in that pastebin looked correct, it had debug api messages from the state server
[23:54] http://paste.ubuntu.com/6759246/
[23:54] there's the machine log for machine-0
[23:56] hazmat, i just thought of something that may be causing this as well, my lxc containers are bridged and set to pull from my network DHCP server. That may or may not be relevant
[23:57] lazypower, that log looks truncated, the timestamps don't quite match up even taking into account utc between the host and container
[23:58] i just don't see evidence of the deployer api connection in that log
[23:59] I dont either, which concerns me too
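lazypower's bridged-LXC note above usually corresponds to host configuration along these lines; this is an assumption about his setup, not something taken from the log.

    # /etc/lxc/default.conf on the host: containers get a veth on the LAN
    # bridge (br0 here) and pull their addresses from the network's DHCP server
    lxc.network.type = veth
    lxc.network.link = br0
    lxc.network.flags = up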