[00:11] marcoceppi, here am i
[00:11] phe_13: are you running this command from your machine, or the AWS machine?
[00:12] my machine
[00:13] Okay, do you have your juju environment setup to connect to AWS?
[00:14] https://juju.ubuntu.com/docs/getting-started.html#configuring-your-environment-using-ec2
[00:14] sorry man, thats my AWS machine, but it in my own DMZ
[00:16] # juju bootstrap
[00:16] Could not find AWS_ACCESS_KEY_ID
[00:16] 2012-06-13 21:16:24,605 ERROR Could not find AWS_ACCESS_KEY_ID
[00:19] phe_13: what version of Ubuntu are you running?
[00:21] debian wheezy
=== hasp is now known as hasp[afk]
=== almaisan-away is now known as al-maisan
[07:40] #join pocoo
=== al-maisan is now known as almaisan-away
=== mrevell_ is now known as mrevell
[11:04] hey, https://juju.ubuntu.com/ says "If you are testing Ubuntu 12.04"... I think we should apply s/testing/using/
[11:04] how can I fix that?
[11:11] juju does not work well when co-installed with virtualbox
[11:12] juju bootstrap fails because virbr0 is already created
[11:46] aujuju: Invalid SSH key error in juju when using it with MAAS
=== almaisan-away is now known as al-maisan
=== _mup__ is now known as _mup_
[13:07] I'm guessing it is, but is the transmission of config data between the juju client to the cloud (and subsequently to the deployed instances) encrypted?
[13:08] bbcmicrocomputer: data between client and bootstrap node is encrypted to my understanding, however I'm not sure about node -> node communication (though I would assume it is)
[13:08] via ssh from you to the bootstrap node yes, between clients no idea
[13:08] heya marcoceppi
[13:08] o/
[13:09] imbrandon, marcoceppi: thanks
[13:09] imbrandon: I've got fixes for the wordpress session issue *maniacal laugh*
[13:09] ann yea i figured out what it was tooo
[13:09] ahh*
[13:10] the secret keys were diffrent on the nodes
[13:10] yup
[13:10] \o/ soo easy.
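[Editor's note: the "Could not find AWS_ACCESS_KEY_ID" error above generally means juju found neither credentials in ~/.juju/environments.yaml nor the AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment variables. A hedged sketch of the shape of the file the linked getting-started page describes, for the EC2 provider of that era; every value here is a placeholder, not a real key or bucket name:]

```yaml
environments:
  sample:
    type: ec2
    access-key: <your-AWS-access-key-id>      # or export AWS_ACCESS_KEY_ID
    secret-key: <your-AWS-secret-access-key>  # or export AWS_SECRET_ACCESS_KEY
    control-bucket: juju-<some-unique-name>   # S3 bucket juju uses for state
    admin-secret: <some-random-string>
    default-series: precise
```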
[13:10] yea , when i noticed that i was like yay
[13:10] heh
[13:10] I should have generic wp charm ready end of next week
[13:11] sweet, i am hoping to have the drupal one tomarrow ( drupal 7 not 6 )
[13:11] 6 is in the store
[13:11] but not "great"
[13:11] imbrandon: is there an nginx proxy charm yet?
[13:11] thats comming with the 7 charm
[13:11] cool
[13:11] its really a group of charms
[13:12] nginx and nginx proxy and drupal and drupal-site
=== zyga is now known as zyga-afk
=== al-maisan is now known as almaisan-away
[14:14] imbrandon: we need a way to define a strong primary<->subordinate relationship.
[14:14] imbrandon: I like the way you're going w/ drupal/nginx .. but its going to make the setup pretty non-intuitive.. we need "stacks"
[14:15] yea i was working on a dependancy hack
[14:15] but yea we need a real way
[14:16] the problem i came accross in the depend hack was it made it hard to use the charm outside of that stack
[14:17] e.g. nginx-proxy dont HAVE to use nginx as the server etc
[14:18] SpamapS ( or hazmat ) yall know what the deal is with the docs build, i see the error but no idea why it dident build
[14:24] error?
[14:25] yea something about the makefile conflict, i am guessing thats it
[14:25] one sec
=== hasp[afk] is now known as hasp
[14:30] erm cant fin it in LP right now
[14:31] i was checking earlier that the merge went ok after hazmat approved it yesterday
[14:31] and noticed it dident build , some page listed a makefile conflict but now i cant find it , heh
[14:39] hrm, I'm running a juju bootstrap in a maas environment, and when I run juju -v status, it says it's trying to SSH connect to remote port 2181; why would that be?
[14:39] SSH is running on port 22 of the targeted nodes, leading to the connection for juju status timing out.
[14:40] zookeeper
[14:41] nothing on the targeted node is listening to port 2181 however.
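[Editor's note: the port-2181 question above comes up because `juju status` talks to ZooKeeper on the bootstrap node through an SSH tunnel, and 2181 is ZooKeeper's standard client port; ordinary deployed nodes never listen on it. A small sketch for checking whether that port is actually reachable on the bootstrap node (the hostname is a placeholder, not from the log):]

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout):
            return True
    except OSError:
        return False

# juju tunnels to ZooKeeper on the *bootstrap* node, so check there,
# not on the service units:
#   port_open("bootstrap-node.example.com", 2181)
```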
=== med_ is now known as medberry
=== medberry is now known as med__
=== med__ is now known as med_
=== zyga-afk is now known as zyga
[14:50] SpamapS: +1000 on the missing-hook idea
[14:55] imbrandon: yeah, that sounds awesome
=== med_ is now known as med_out
=== med_out is now known as med_away
=== med_away is now known as med_
=== hazmat is now known as kapilt
[15:38] hello. i'm trying to find out about the status of rackspace support
[15:39] there is a ticket from 2011, but that's all i could find
[15:41] lucian: no native openstack provider yet... you still need to have the ec2 api enabled. so no love on the rackspace cloud proper yet
[15:41] (afaik)
[15:41] m_3: ok, thanks
[15:42] np
[15:59] 'morning all
[16:03] negronjl: mornin
[16:03] 'morning m_3
[16:03] heya
[16:04] 'morning imbrandon
=== salgado is now known as salgado-lunch
[17:24] congrats on getting juju into Debian unstable..good work guys
[17:32] SpamapS: whoohoo!! ^
[17:35] hspencer, :)
[17:35] SpamapS, siir
=== salgado-lunch is now known as salgado
[17:41] hspencer: thanks :)
[17:43] jcastro: ping
[17:43] SpamapS, hello sirr
[17:44] koolhead17: howdy
[17:44] I'm curious, does juju on debian bootstrap debian nodes?
[17:45] no
[17:45] debian lacks a few things
[17:45] ah ok, just curious
[17:45] actually its possible that the local provider could be made to do it
[17:45] I haven't looked at the lxc debian template to see
[17:45] huh
[17:46] But the code itself calls 'lxc-create -t ubuntu' so.. no ;)
[17:46] ah ok cool
[17:46] having to rewrite charms would be a waste of effort anyway
[17:54] shazzner: I think at least some charms will work fine crossing over from debian and ubuntu
[17:54] shazzner: but yeah, I don't see much point honestly
[17:55] Maybe if somebody wants to spin up on architectures that ubuntu doesn't have
=== tobin__ is now known as tobin
=== kapilt is now known as hazmat
[19:17] jcastro: any update on the HP thing?
I got a call from someone in their engineering team yesterday randomly
[19:29] jcastro's at a conference today
[19:53] bkerensa: I can confirm you are on the list
[19:53] robbiew: thanks
[19:53] and we received confirmation from HP that they have you
[19:54] robbiew: So I can start using a instance now?
[19:54] as to why someone from engineering would call...no idea...job offer? :)
[19:54] hmm
[19:54] one sec
[19:55] bkerensa: now THAT I don't know...let me check with our internal liason...one sec
[19:58] bkerensa: not getting a response, I shoot him an email and let you know
[19:58] kk
[20:11] bkerensa: our internal HP contact just responded and said he'll follow up...translation, no one knows :/
[20:12] :)
=== salgado is now known as salgado-afk
[22:30] SpamapS, around?
[22:43] mars: I am, wassup?
[22:45] Hey SpamapS, I replied to bug 1006553, and I have a live runaway process on my system right now. I was wondering if you needed to gather any other feedback while I have it?
[22:45] <_mup_> Bug #1006553: Juju uses 100% CPU after host reboot < https://launchpad.net/bugs/1006553 >
[22:46] SpamapS, it isn't hard to reproduce, takes about a day, but I thought if you needed more info, a live discussion would speed things up. But if you prefer to keep it in the bug, that's cool too.
[22:47] mars: yeah hm
[22:47] mars: can you strace -f $thepid -o /tmp/foo.txt .. wait about 5 seconds, then pastebin that file?
[22:48] oh now I see your reply, reading
[22:49] just for fun, the 5 second tracelog is 8.2M :)
[22:49] mars: takes a day is a bit weird
[22:50] SpamapS, well, it doesn't start as soon as the system is booted. I have to wait for the process to go nuts. I haven't measured exactly, but 24 hours is enough.
[22:50] thats very weird
[22:52] SpamapS, fwiw, zookeeper has a cron entry in cron.daily
[22:53] mars: zookeeper or zookeeperd ?
(meaning, the package names)
[22:53] zookeeper
[22:54] * SpamapS checks that out
[22:57] mars: well that doesn't seem to cause the issue
[22:57] mars: in fact that just exits immediately
[23:01] SpamapS, what limits the machine agent connection loop? You said yours tries every few seconds, whereas mine is in a busy-wait loop
[23:05] mars: I think mine is blocked on something else
[23:05] mars: is anything landing in $datadir/machine-agent.output ?
[23:06] SpamapS, nope
[23:06] mars: actually it might even be /tmp/juju-$user-$envname-machine-agent.output
[23:06] the only file I have is machine-agent.log, which I posted to the original bug report
[23:06] mars: do you have the file in /tmp tho?
[23:08] SpamapS, you mean, my data directory? Yes, that is: /tmp/local-juju/mars-local/machine-agent.log
[23:11] mars: no the upstart job seems to redirect output to a special file
[23:12] mars: check /etc/init/juju-mars-local-machine-agent.conf
[23:12] mars: it should be redirecting output somewhere. Check that file.
[23:14] mars: I'm trying to get a way for you to run the agent in the python debugger
[23:15] mars: hopefully you can run it that way, and when it goes wack-o again, ctrl-c will drop you wherever it is polling
[23:15] hazmat: ^^ your expertise in python debugging would be helpful here :)
[23:15] jimbaker: ^^
[23:15] SpamapS, found it: machine-agent.output is empty. file-storage.output has Python exceptions in it, but it isn't growing.
[23:19] hrm.. debugger doesn't actually help that much because of twisted
[23:21] You need to embed one of those SIGUSR1 "dump trace and exit" hooks :)
[23:21] mars: can you pastebin 'sudo lsof -n -p ...'
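[Editor's note: the /etc/init/juju-mars-local-machine-agent.conf file discussed above is an upstart job. Its actual contents are not in the log; the sketch below is purely hypothetical, showing the general shape of such a job so the later remark about removing the 'start on ...' stanza (to stop the agent auto-starting at boot before ZooKeeper is up) has something concrete to point at. The module path and output filename are guesses:]

```
# /etc/init/juju-mars-local-machine-agent.conf -- hypothetical sketch
start on runlevel [2345]        # <- the stanza proposed for removal below
stop on runlevel [!2345]

exec python -m juju.agents.machine --nodaemon \
    >> /tmp/juju-mars-local-machine-agent.output 2>&1
```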
whatever the pidof is
[23:21] mars: yeah
[23:24] SpamapS, http://pastebin.ubuntu.com/1041631/
[23:31] This looks promising: http://stackoverflow.com/questions/132058/showing-the-stack-trace-from-a-running-python-application
[23:37] mars: the second answer looks helpful
[23:39] I think threading may be getting int he way here too
[23:43] mars: can you attach to the process with gdb -p $thepid and do a 'bt' then 'thread 2' then 'bt' ?
[23:44] mars: In mine, there are three threads (1 2 3) and 2 are inside libzookeeper
[23:45] mars: thanks for going through this btw
[23:45] SpamapS, np
[23:46] SpamapS, same here, three threads, Py_Main, and two in libzookeeper
[23:47] SpamapS, one of them is in setsockopt, called from zookeeper_interest in libzookeeper. Is that what you see?
[23:49] mars: no actually
[23:50] #0 0x00007ffc5c18db03 in __GI___poll (fds=, nfds=, timeout=) at ../sysdeps/unix/sysv/linux/poll.c:87
[23:51] mars: I'm beginning to think this is some weird libzookeeper bug
[23:51] mars: either way, I think we should actually just take out the 'start on ...' from the local provider agents until zookeeper is started as well
[23:52] SpamapS, if it is Python, I can just hack a fix in there to test it out
[23:58] mars: anyway, I think the right fix is to not start the agents on reboot
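[Editor's note: the SIGUSR1 "dump trace" hook mentioned in the log (and the StackOverflow thread linked above) can be sketched roughly as below. This is an illustrative sketch, not juju's actual code. `sys._current_frames()` covers all threads, which matters here since two of the three threads sat inside libzookeeper:]

```python
import signal
import sys
import traceback

def dump_all_stacks():
    """Return a formatted stack trace for every running thread."""
    chunks = []
    for thread_id, frame in sys._current_frames().items():
        chunks.append("Thread %#x:\n%s"
                      % (thread_id, "".join(traceback.format_stack(frame))))
    return "\n".join(chunks)

def install_stack_dumper(signum=signal.SIGUSR1):
    """Print all thread stacks to stderr when `signum` arrives.

    Trigger from outside with:  kill -USR1 <pid>
    """
    def handler(sig, frame):
        sys.stderr.write(dump_all_stacks() + "\n")
    signal.signal(signum, handler)
```

One caveat consistent with what the log found: CPython only runs Python signal handlers on the main thread, between bytecodes, so if the spin is entirely inside a C extension such as libzookeeper, the handler may never get a chance to fire; attaching gdb, as done above, is then the more reliable tool.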