[00:07] SpamapS: that makes alot of sense [00:08] zirpu: i've always used whatever the current 15inch MBP is and been very happy the last 4 or so years [00:11] hm. i'm not a mac fan. but i have had mbp's at various companies. [00:12] i still hate case-insensitive filesystem with a flaming irrational passion of a sloth. [00:12] weird, why doesn't hp cloud recommend using python-novaclient ? :-P [00:12] * SpamapS reluctantly installs fog [00:12] MiGhTilY i say! [00:12] SpamapS: you can [00:12] SpamapS: i use it [00:13] imbrandon: it wants a username [00:13] SpamapS: its in the docs to setup either, but they are ruby fans [00:13] SpamapS: yea one sec its like ur email:tenneid [00:13] or something [00:13] one sec [00:14] export OS_TENANT_NAME="clint.byrum@canonical.com-tenant1" [00:14] that? [00:14] SpamapS: it appeared the admin node was checking for all those nodes. Is there not a hard requirement for at least 10 nodes? And can you point me to reference for that hack please? [00:14] no [00:14] burnbrighter: there's no reference to that hack [00:14] burnbrighter: because it is.. the suck ;) [00:14] burnbrighter: but for testing w/ < 10 boxes... [00:14] gotcha [00:15] burnbrighter: basically you edit ~/.juju/environments.yaml and add 'placement: local' to the environment .. [00:15] burnbrighter: but, this sends *EVERYTHING* to node 0 [00:15] interesting [00:15] burnbrighter: and it will install things in parallel, which will break things (apt-get can't run in parallel for instance) [00:15] burnbrighter: so you kind of have to set it, install anything you want on node 0 one at a time, and then un-set it. [00:16] anyway, I'm late.. gotta run [00:16] but, how would you do, say, just 5 nodes for example? [00:16] anyways, thnx [00:16] burnbrighter: you can squeeze almost everything onto that one node [00:18] ok, thanks [00:18] I'm running this all on esxi / vsphere server, so adding nodes and cloning them wasn't a problem anyways :) [00:46] anyone around to support MAAS with juju? [02:15] m_3: ping [02:15] m_3: i was just thinking ( not looked at the code yet so you might already do this ) but the juju charm [02:16] would it mayeb be a good idea to have it install by "hand" into a dediacated python virtualenv [02:16] so it cant mess up the "host" juju or mess with the versions etc if the [02:17] charms are frozen etc [02:17] e.g. a seperation of the juju charm installation and the juju on the system already if its a subordiante of like an existing webserver/appserver services grp [02:18] just a thought ... i'll peek into the code and see if u are already doing that or not [02:46] you could create a virtual env from a pip freeze and a cache dir easily w/o network access being needed. [02:46] i actually like that idea. [02:47] imbrandon: yeah that's planned [02:47] rockin , just crossed my mind ( btw i do have it as a sub on my webservers so i can call "juju" from hooks :) [02:47] imbrandon: maybe even use that to provide the tmux session you leave up and drive the env with [02:47] shhhh [02:48] yea good idea [02:48] imbrandon: that's sort of turning into a best practice for groups to manage an env.... two environments... and [02:48] but please think of us screen users too, tmux normally gets apt-get purged from me :) [02:49] easier than talking each member of the team through setup on frozen client branch [02:49] yea good call [02:49] imbrandon: ha! [02:49] yeah, was a screen die-hard for years... 
but I've totally drunk the cool-aide [02:49] man i cant get it to work like screen [02:49] (helps to have the same bindings) [02:49] really? [02:49] if i could i probably would, but there is a few things that i cant make work [02:49] i just took my bindings with me. [02:50] not the bindings [02:50] more about the window setup [02:50] i'm all about full screen. :-) [02:50] and making the shared screens work "right" [02:50] multiuser acts pretty differently, but imo tmux works better for that [02:50] well mine are full but i keep 5 or so windows open [02:50] i haven't used shared screens (yet). [02:51] i.e., all users looking at same screen as I switch around [02:51] m_3: yea thats what i cant get to work [02:51] on screen works perfect [02:51] tmux wont do it [02:51] there's a way to get it to behave like screen tho [02:51] alias tmux=screen [02:51] no no i dont think you get it [02:51] so it's said. :-) [02:51] where each user canbe looking at diff screens [02:51] my screen does like you say is easy in tmux [02:52] but i cant even get tmux to do it [02:52] i WANT that in tmuc [02:52] so client screens should switch when the master screen switches? [02:52] no [02:52] zirpu: that's tmux default, but opposite of screen default [02:52] i flip arround to whatever i want [02:52] and so can anyoone else [02:52] ah [02:53] or we can all goto screen 0 [02:53] and share [02:53] imbrandon: right... I'd have to dig for that... it's possible [02:53] yea if you can help me figure that out i'd be a happy tmux user [02:54] hm. so attach -r puts client in read only mode. [02:54] like i have byobu set 4 windows on login, and if you and me both ssh in we can see each other type if on the same window but i can flip to others and so can you [02:54] but it can't switch screens. weird. [02:54] mysetup^^ [02:54] read-only mode is kind of useless. [02:54] nah [02:54] great for classroom [02:55] well, ok. there's that. [02:55] or logs reads [02:55] for a dedicated log window [02:56] * zirpu does not know how all these kids live in their gui worlds. :-) [02:56] yeah, I use one-way shared screens more than I ever thought I would [02:56] but yea m_3 eveyone keep saying that WHY they swiched to tmux cuz that was hard in screen but to me thats default how screen works not hard and tmux i cant make do it [02:57] so readonly switches screens w/ the master screen. i see what you meant now. [02:57] and i like multi bars at the bottom of screens in byobu but i think i can make tmux do that too [02:57] imbrandon: http://paste.ubuntu.com/1063589/ is my conf... I'll get back to you on users watching separate windows in one session [02:57] m_3: rockin [02:57] ty [02:58] totally does the bars too [02:58] btw, whats the format for .byobu/windows.tmux ? i would need to convert my .byobu/windows file ( screen based ) [02:59] for the default windows on startup [03:00] way different scripting mechanism I think... there're a couple of different schools on that... I use tmuxinator [03:00] m_3: http://paste.ubuntu.com/1063593/ thats my .byobu/windows file on my OS X box [03:00] 'mux blog' or 'mux charmtester'.... 
can start with 'mux copy juju mynewcharm; mux mynewcharm' [03:01] tmuxinator is a gem so I assume it's good on osx [03:01] likely , i use a TON of ruby on OSX [03:01] as much as most ubuntu users use python :) [03:01] heh [03:02] trying to bring some of it with my to juju/ubuntu , still havent got a good juju-hook capfile balance yet [03:02] but i'll get there [03:02] s/my/me [03:03] imbrandon: equiv is http://paste.ubuntu.com/1063597/ [03:03] i keep wanting to get at cloud-init tho, juju should really provide a way for us to do that, i dont care that it got shot down on the ML [03:03] that'd be 'mux play' [03:04] its that or i just use hax and write a wrapper in jitsu [03:04] heh [03:04] * m_3 off in tweakage-land :) [03:04] hahaha yea i tweek my setups CONSTANTLY [03:04] like i'm never done [03:04] ever [03:04] jitsu is all things unloved and evil... but working :) [03:05] so anything goes [03:05] yea , perfect home for it [03:05] if i have to resort to it [03:05] i'd rather not tho cuz it will likely take a patch to juju to accomplish cleanly [03:05] and that makes me mad to have to resort yo [03:05] ah, right [03:05] to* [03:06] but i certainly will to show the utility of such [03:06] :) [03:06] crap, sorry forgot to reset my away mgs... hope you're not getting to much noise back [03:06] nope [03:06] well i dunno actually [03:06] cool [03:06] i have msgs like that off [03:06] hehe [03:07] joins parts quits aways etc etc [03:07] all prety much off [03:07] right [03:07] too many "dead" channels i idle in that would fill my disk with logs [03:07] if i dident :) [03:08] m_3: see my newest widget ? http://bholtsclaw.github.com/showdown/ [03:09] think i'm gonna make a charm out of the backend stuff i need to host those , so i can have them all central and easy to put all on one page etc etc [03:09] cool [03:10] cuz its like a mix of nodejs and mustash templates and php ( on my.phpcloud.com ) git and github pages [03:10] you should add other keys in there so we can have multiple peeps manage it [03:10] heh in otherwords i got em all spread out [03:10] and freeze the juju branch [03:10] hmmm yea [03:10] good idea [03:10] great idea actuallu [03:11] etc etc all the production best practices we reommend.... (nother test case) [03:11] * imbrandon puts that on the very short-term todo's [03:11] yea , seriously , good call [03:11] perhaps even use lp:charms/juju in the "control" environment... where it gets left open with a tmux serssion any of the team can attach/drive [03:12] (the latters still a maybe... not nec recommended practice yet) [03:12] just really needed it for scale testing [03:12] ok i need a little food and mt dew then to get started on that, i think that will be a good project tonight ... ( i dident wake up till 6pm local so i'll be up all night heh ) [03:12] and sort of need it after the fact for plumber's summit [03:12] ahh right [03:12] cool later man [03:13] yea i'll be back in ~20ish min, just need a lil snack, but i'll br round all night [03:13] and gonna work on just that, got me kinda fired up about the idea [03:13] ;) [03:13] cool.. I'll be in and out [03:13] kk [03:15] yea i like the idea of a tmux session with charms/juju installed on the bootstrap node , then the "team" can just ssh into the bootsrap and drive from a juju window there [03:15] and not need anything but their ssh key on that bootstrap [03:15] no local juju or env.y etc [03:15] ... [03:16] imbrandon: I'm thinking separate envs... 
[03:16] yea ok fooood then that is gonna become a reality with the widgets and button as a testcase [03:16] summit and summit-control [03:16] ahhh [03:16] yea [03:16] Anyone know about troubleshooting why the mysql charm instance didn't come up in juju/maas? I can reach the node via regular ssh, but agent shows the node is down. [03:16] yea i liky that too [03:16] both would have 'authorized-keys:' in the env with everybody's keys [03:17] right [03:17] this is for openstack-dashboard btw [03:17] burnbrighter: hrm nothing in juju-debug log? [03:17] err juju debug-log [03:17] burnbrighter: not really no... I've seen machines with services running great but the juju aagents report it down [03:17] nothing extraordinary I can see [03:17] rebooting (those ec2 instances) worked to reset the agents [03:18] this is maas [03:18] right [03:18] I've only seen that prob in ec2.... and pretty rare [03:18] restart the agent ? ( or reboot the node ) [03:18] bbiab [03:18] how does one restart the agent (me=noob) [03:18] yeah, first try taking a look in the provisioning agent's log to see that the service was started up correctly [03:18] I did restart [03:19] machines: [03:19] 1: [03:19] agent-state: not-started [03:19] dns-name: maas07 [03:19] instance-id: /MAAS/api/1.0/nodes/node-4ddba634-bfdb-11e1-8909-005056a44c88/ [03:19] instance-state: unknown [03:19] services: [03:19] mysql: [03:19] charm: cs:precise/mysql-2 [03:19] relations: [03:19] shared-db: [03:19] - glance [03:19] - keystone [03:19] - nova-cloud-controller [03:19] - nova-compute [03:19] - nova-volume [03:19] units: [03:19] mysql/1: [03:19] agent-state: pending [03:19] machine: 1 [03:19] public-address: null [03:20] burnbrighter: did it ever say 'started'? [03:20] no [03:20] looking in provision log now.. [03:20] hmmmm... 
yeah, dig for the maas version of the provisioning-agent log [03:21] 2012-06-27 23:01:35,045:901(0x7f45fbcc2700):ZOO_INFO@zookeeper_init@727: Initiating client connection, host=localhost:2181 sessionTimeout=10000 watcher=0x7f45f8fce6b0 sessionId=0 sessionPasswd= context=0x3352b20 flags=0 [03:21] and then the machine-agent log on that box [03:21] 2012-06-27 23:01:35,045:901(0x7f45f74ca700):ZOO_ERROR@handle_socket_error_msg@1579: Socket [127.0.0.1:2181] zk retcode=-4, errno=111(Connection refused): server refused to accept the client [03:21] 2012-06-27 23:01:38,381:901(0x7f45f74ca700):ZOO_INFO@check_events@1585: initiated connection to server [127.0.0.1:2181] [03:21] 2012-06-27 23:01:38,403:901(0x7f45f74ca700):ZOO_INFO@check_events@1632: session establishment complete on server [127.0.0.1:2181], sessionId=0x1383109683e0001, negotiated timeout=10000 [03:21] 2012-06-27 23:11:55,399: juju.agents.provision@INFO: Stopping provisioning agent [03:21] then unit-agent there too [03:21] refused...hmmm [03:22] 2012-06-27 23:13:31,767:886(0x7fa12a2f8700):ZOO_INFO@zookeeper_init@727: Initiating client connection, host=localhost:2181 sessionTimeout=10000 watcher=0x7fa1271b56b0 sessionId=0 sessionPasswd= context=0x1db9090 flags=0 [03:22] 2012-06-27 23:13:31,768:886(0x7fa125b00700):ZOO_ERROR@handle_socket_error_msg@1579: Socket [127.0.0.1:2181] zk retcode=-4, errno=111(Connection refused): server refused to accept the client [03:22] 2012-06-27 23:13:35,105:886(0x7fa125b00700):ZOO_INFO@check_events@1585: initiated connection to server [127.0.0.1:2181] [03:22] 2012-06-27 23:13:35,133:886(0x7fa125b00700):ZOO_INFO@check_events@1632: session establishment complete on server [127.0.0.1:2181], sessionId=0x138311458a10000, negotiated timeout=10000 [03:22] 2012-06-27 23:13:35,216: juju.agents.machine@INFO: Machine agent started id:0 [03:22] zk's down maybe? [03:22] how can I check please? [03:22] ps awux | grep zoo [03:23] on that node? [03:23] netstat -lnp | less [03:23] on the maas server [03:23] ubuntu@maas07:/var/log/juju$ ps awux | grep zoo [03:23] 107 925 0.2 0.8 1835488 33088 ? Ssl 23:13 0:01 /usr/bin/java -cp /etc/zookeeper/conf:/usr/share/java/jline.jar:/usr/share/java/log4j-1.2.jar:/usr/share/java/xercesImpl.jar:/usr/share/java/xmlParserAPIs.jar:/usr/share/java/zookeeper.jar -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false -Dzookeeper.log.dir=/var/log/zookeeper -Dzookeeper.root.logger=INFO,ROLLINGFILE org.apache.zookee [03:23] erver.quorum.QuorumPeerMain /etc/zookeeper/conf/zoo.cfg [03:23] ubuntu 1205 0.0 0.0 9376 932 pts/0 S+ 23:22 0:00 grep --color=auto zoo [03:23] it must be up, right? because other agents are running fine [03:23] burnbrighter: this is where I'm out of my depth wrt maas.... sorry, I'm not sure exactly [03:24] I'll assume that the maas server is set up as the bootstrap node and runs the provistioning agent and zookeeper [03:24] m_3: thanks anyways. this is tricky [03:24] (note the provisioning agent is attempting to connect to localohost:zk [03:25] eh, localhost on its own zookeeper? [03:25] and getting refused for some reason [03:25] um yeah, I see that - connection to localhost 2181 [03:26] right... in std juju, the provisioning agent and zk live together on the bootstrap node [03:26] provisioning agent should be able to subscribe to zk changes and spin up machines accordingly afaik [03:26] I think in maas, and I may be wrong - the bootstrap node becomes the first node bootstrapped [03:27] ah, yeah, that's the question... 
does the maas server do this or does it actually still use some _other_ bootstrap node per environment [03:28] I know it pulls the ephemeral image from the primary maas node, but where does it get that info? [03:28] from the first bootstrapped node, or the primary maas node? [03:28] ugh [03:29] um... dunno [03:29] I don't know if a maas env is treated as a single juju environment [03:30] lemme dig a bit [03:30] are you on the canonical team? [03:30] this is known stuff though... might try #ubuntu-server or #juju again tomorrow [03:30] yeah, I'm on the server team, but haven't worked with maas yet [03:30] bigjools may know too - but I think he draws the line when it gets deeper in to the juju side [03:31] I think there's a #maas here too... looking [03:31] yeah - that's my primary channel [03:31] ah, see you're already there :) [03:32] but now that I've got the maas side stable, I've been focusing on trying to get the openstack stuff working [03:32] burnbrighter: yeah... that's probably our most complex stack to date on juju [03:33] and the only one I REALLY want working :) [03:33] lol [03:33] the 'not-started' sounds like a lower-level problem than the charm though [03:33] this has been a 3 week endeavor for me [03:33] wow [03:34] both cheez0r and I have both been trying to get it working [03:34] adam_g: you up still? [03:34] burnbrighter: they've got this going regularly in a hw lab [03:35] but I'm wondering if there was special tweaking for the mysql stuff [03:35] or if there are open issues [03:35] dunno [03:35] yeah [03:36] admittedly, I'm not up on the juju stuff yet enough to be dangerous with troubleshooting [03:36] try to bring it up to a 'started' state before adding any relations [03:36] how do you bring up individual nodes in started state? [03:36] I know I've heard of ordering deps in openstack charm relations [03:36] * imbrandon returns [03:37] ju do the deploys [03:37] no add-relation calls until you see them in a 'started' state [03:37] m_3: i thought about that at one time, me and marcoceppi brefly talked about it at UDS etc [03:37] but i decided that hiabu was a better way [03:37] ( pending rename ) [03:38] (there's a jitsu tool to script all of this... 'apt-get install juju-jitsu; jitsu watch -h'... eventually... but for now just do it in your scripts) [03:38] ok, so destroy the individual services, but don't add the relation calls? [03:38] eh, re-add after destroying then wait till they individually come up? [03:38] then add relations? [03:38] burnbrighter: yeah, so at this point mysql hasn't been "started" [03:39] so you should be able to remove-relation's [03:39] then destroy that service [03:39] then terminate that machine [03:39] then deploy the service [03:39] now, wait until it's 'started' [03:39] so first try to remove relations, destroy mysql service - terminate machine is new for me [03:39] then go in and 'add-relation' [03:40] I'll try that - remove all mysql related relations I take it? [03:40] burnbrighter: so I would totally recomend restarteing from scratch... but you don't _really_ have to do that since the service has never been in a started state [03:40] * imbrandon just thought about something from the backlog ... is tmux in ruby ? if so i'd drop screen a moment and deal with "issue" just for that alone [03:40] so the relation hooks on either side have never really fired [03:41] imbrandon: dunno [03:41] does destroying the service basically start things from scratch too? 
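For readers following along, here is the recovery sequence m_3 lays out above, condensed into a shell sketch. It is only an illustration: the service names and machine number are the ones that appear elsewhere in this log, and the "wait" step is plain re-running of status, not a juju feature of this era.

# drop every relation touching the stuck service first
juju remove-relation keystone mysql
juju remove-relation glance mysql

# destroy the service, then throw away the machine it was assigned to
juju destroy-service mysql
juju terminate-machine 1      # machine number taken from `juju status`

# redeploy and re-run status until the unit reports agent-state: started
juju deploy mysql
juju status mysql

# only then wire the relations back up
juju add-relation keystone mysql
juju add-relation glance mysql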
[03:42] burnbrighter: destroying the service _and_ terminating that machine will start that service from scratch [03:42] ie. without destroying the rm -rf "destroy-environment" [03:42] ok [03:42] burnbrighter: but won't require you to bring your whole stack down [03:42] right [03:42] cool, I can try that [03:42] thank you [03:43] I'd recommend that just for sanity's sake as the 'reboot' option... but that's more of a pain with some providers than others [03:43] and if I thought any relation hooks got fired, I'd say... destroy-environment [03:43] but it doesn't look like that here [03:44] you can tell by sshing to that machine and 'mysql -uroot; show databases' or something like that [03:44] should be empty [03:44] yeah, I didn't even see mysql coming up on that node, so that will be a good test [03:44] yup [03:45] not sure exactly how to terminate-machine in maas... that's equiv to a re-install [03:45] yeah - I learned this the hard way - you have to terminate the juju stack before trying to do anything on the maas side [03:46] 'juju terminate-machine ' on most providers totally kill the instance... so you're starting with a freshubuntu server next time [03:46] your maas nodes end up pulling the ephemeral image, but the maas nodes are based on the precise image, I believe [03:47] * m_3 _really_ needs to learn maas... gotta bump that up higher in the queue [03:47] its so very cool. Its just raw [03:48] a little baking and it will make a really nice pie [03:48] cool [03:49] ok, let me try your suggestions [03:49] ok, I'm on UTC+10 this week so I'll be in and out for several hours still [04:04] still not starting... [04:05] http://pastebin.ubuntu.com/1063654/ [04:05] burnbrighter: wow.... hmmmm [04:06] any of that look interesting? [04:06] not really... that's during an expose [04:06] I'm not exposing, just deploying [04:06] expose is a separate step... much like 'add-relation' [04:06] right [04:07] I didn't expose, I just added the service [04:07] just ran "juju deploy mysql [04:07] " [04:07] it's marked as exposed somehow... but it's not surprising that that's barfing if the machine's not coming up [04:08] ok, 'juju deploy mysql' is perfect [04:08] here is exactly what I did: 1. remove relations: [04:08] so now what did you do to throw that machine away? [04:08] juju remove-relation keystone mysql [04:08] juju remove-relation nova-cloud-controller mysql [04:08] juju remove-relation nova-volume mysql [04:08] juju remove-relation nova-compute mysql [04:08] juju remove-relation glance mysql [04:08] then I removed the service ie. juju destroy-service mysql [04:09] then do a status to see the service sitting all alone (no relations) [04:09] did that, yes [04:09] then destroy-service... sounds good [04:09] now for terminate-machine [04:09] then I terminated the node it was running on [04:09] then I went straight in to deploy - [04:09] does that just put the existing server into WOL sleep? [04:10] or does it _rebuild_ it? [04:10] well, there are VMs ;) [04:10] s/there/they are/ [04:10] oops, sorry not following [04:10] ok, so you're deploying maas on a bunch of vms [04:10] yes [04:10] which to this point and not without a lot of headaches was working :) [04:11] ok, so when that machine is terminated, does it destroy that instance _and_ image? [04:11] the headaches were all procedural though [04:11] or does it put it to sleep or just stop it? 
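A quick way to do the "did anything actually happen on that box" check m_3 mentions just above, as a hedged sketch; the unit name is whatever `juju status` reports (mysql/1 earlier in this session).

# get a shell on the unit's machine via juju
juju ssh mysql/1

# then, on that machine: if the charm hooks never ran, only the stock schemas exist
mysql -uroot -e 'SHOW DATABASES;'
# and the unit directory (where hook state and logs live) should be empty or absent
sudo ls /var/lib/juju/units/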
[04:11] it appears to remove the instance [04:11] from juju [04:11] let me check if the machine is actually down [04:11] it is not [04:12] ah, but it really needs to kill the instance and image and reprovision a new machine [04:12] so my guess is if I "add" it back it will be back in the line up [04:12] but why do I need to do that? [04:12] burnbrighter: so I think that whatever is causing this to show up in a not-started state is written into the image [04:13] it wouldn't be in a pristine copy of the image [04:13] but it's in that actual one [04:13] so, check this - when I asked for mysql to be redeployed - it went to a whole different node - my node 12 and got the same results [04:13] (total guess... but that's what I'm trying to determine by doing a 'terminate-machine') [04:13] oh wow [04:13] ok, so scratch that.... [04:14] mysql: [04:14] charm: cs:precise/mysql-2 [04:14] relations: {} [04:14] units: [04:14] mysql/2: [04:14] agent-state: pending [04:14] machine: 12 [04:14] public-address: null [04:14] 12: [04:14] agent-state: not-started [04:14] dns-name: maas07 [04:14] instance-id: /MAAS/api/1.0/nodes/node-4ddba634-bfdb-11e1-8909-005056a44c88/ [04:14] instance-state: unknown [04:17] burnbrighter: is the uuid different? [04:17] dunno how it reuses instances [04:18] oh actually it's the same [04:18] maybe definging the same instance as a new machine id [04:18] or mabe is a new instance from the same image [04:19] sorry for the typing... high latency hotel connection [04:19] I know how that goes :) [04:20] I just restarted the old node that I previously terminated and its rebootstapping and adding itself back to juju [04:21] yeah, looks like it didn't kill that instance... just re-used it [04:21] seems like it migrated it to another node? [04:21] so can you add new nodes into maas without terminating the whole juju environment? [04:21] sure [04:21] they had the same instance-id though [04:22] (from earlier in the channel) [04:22] just different machine_id [04:22] what does that mean though - the instance ID is the same? [04:22] which seems strange, but I don't know [04:23] perhaps maas is trying to intelligently re-use instances [04:23] but if there's a catastrophic config error in one... there's gotta be a way to tell it to wipe it and start over from a fresh install [04:23] it appears to me, juju runs a layer above maas [04:24] so juju AFAICT has little knowledge of maas [04:24] yes, but 'juju terminate-machine 12' in all other providers gives you a completely fresh ubuntu server instance [04:25] but there's _definitely_ problems re-using intances [04:25] i.e., if the charm has an error... does something stupid with config [04:25] there're plenty of situations where that's unrecoverable on _that_ instance [04:25] ah freakin weird [04:26] and a need to re-provision that from scratch [04:26] mysql just came up [04:26] ha [04:26] so don't relate anything yet [04:26] let's figure out what's up [04:26] mysql: [04:26] charm: cs:precise/mysql-2 [04:26] relations: {} [04:26] units: [04:26] mysql/2: [04:26] agent-state: started [04:26] machine: 12 [04:26] public-address: maas07.localdomain [04:27] strange... take a look in the various logs [04:27] looking [04:27] on the instance, there should be something like /var/lib/juju/units/mysql-xxx/charm.log [04:27] might be in /var/log/juju... different on different providers (#$$%#@!) 
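The log locations being discussed, gathered in one place. This is only a sketch of where to look; as noted above, the exact layout differs between providers and juju versions.

# on the affected machine
sudo ls /var/log/juju/                                   # machine- and unit-agent logs on most setups
sudo ls /var/lib/juju/units/                             # one directory per unit once its agent starts
sudo tail -n 50 /var/lib/juju/units/mysql-*/charm.log    # per-charm hook output, when present

# from the client side, the firehose view of all hook output
juju debug-log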
[04:31] only a machine-log [04:31] http://pastebin.ubuntu.com/1063669/ [04:32] there should be a charm log there _somewhere) [04:32] but here is the debug log from the main node, more interesting [04:32] http://pastebin.ubuntu.com/1063670/ [04:33] I think that's the charm log [04:33] right? [04:35] wow [04:36] there're a couple of strange things going on here [04:37] what are you seeing? [04:37] 1.) debconf is trying to use a Dialog frontend... should be noninteractive [04:37] 2.) mysql revision is just whack [04:37] I'm testing a charmstore deploy here [04:38] while I do that, can you please grab a local copy of the charm? 'charm get mysql' or 'bzr branch lp:charms/mysql' [04:38] ohhhh wait - I saw somewhere something about needing to declare a terminal! [04:39] like Xvfb or something like that... [04:40] juju should be setting DEBIAN_FRONTEND=noninteractive [04:40] no, I was mistaken - that was in the openstack.cfg [04:40] nice find - [04:41] does that need to be fixed in the build? [04:42] it's in juju bzr544 /usr/share/pyshared/juju/hooks/invoker.py line 219 [04:43] you are speaking greek to me :) [04:45] should I destroy msql-service? [04:45] uh destroy-service mysql? [04:46] sorry... just checking out the versioning problem [04:46] ok, so you said the openstack notes said something about needing a specific terminal? [04:46] no worries. I'm happy to test out whatever you have... [04:47] no, forget that [04:47] ok [04:47] this is what I'm looking at [04:47] https://help.ubuntu.com/community/UbuntuCloudInfrastructure [04:47] look under Deploying Ubuntu Cloud Infrastructure with Juju [04:47] but that's unrelated to what we are doing [04:48] that's specific for the nova-volume charm [04:48] oh but looky there [04:48] right under that [04:48] ok, so the debconf errors are a red herring... np [04:48] step 2 [04:50] http://paste.ubuntu.com/1063695/ is a successful ec2 mysql spinup... we can compare [04:51] your log looks ok as far as I can see [04:51] so maybe it was just timing that was giving problems [04:51] you can either kill it and see if it spins up again with relations [04:51] that could be... but did you look at that step 2? [04:51] or just add the relations from here to see if they come up [04:54] they deploy from the local instance [04:54] so, how do I then tell afterward if the relations were added correctly? [04:54] burnbrighter: no, that one was deployed from the charm store [04:54] mine was you mean, right? [04:55] burnbrighter: any failed relations will have an 'error' state [04:55] my mysql charm was deployed from the charm store [04:55] vs the way they say I should be doing it which is by downloading the charm then installing locally, correct? [04:58] burnbrighter: that's what I was testing out... to see if I got any differences [04:58] ah, ok - see anything? [04:59] burnbrighter: in general, yes you want to deploy from a local charm repository... mostly just to freeze the charms you're working with [04:59] burnbrighter: but for what you're doing... just getting things up and running... the charm store should be fine [05:00] what about "DEBIAN_FRONTEND=noninteractive" ? [05:01] another question I have, is now that I terminated that instance, my instance numbering is off [05:02] ie. 0,2,3,4,5,6,7,8,9,10,11 [05:02] thats normal. they only ise [05:02] notice "1", the node I terminated before is now missing and was put back in as 12 [05:02] rise* [05:02] e.g i have an env with 3 machines and they are 0 5 and 11 [05:02] ok, no way to go backwards for linearity? 
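To make the "deploy from a local charm repository" advice above concrete, a minimal sketch using the commands already mentioned (`charm get` from charm-tools, or a bzr branch). The `--repository`/`local:` form is the documented pyjuju syntax of this era; adjust the series and paths to taste.

mkdir -p ~/charms/precise
cd ~/charms/precise
charm get mysql                  # or: bzr branch lp:charms/mysql mysql

# deploy the frozen local copy instead of the charm store revision
juju deploy --repository ~/charms local:precise/mysql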
[05:03] ok, low on the priority list I guess. no big deal [05:03] nope, but its highly irrelivent if you are thinking in juju terms of service mgmt, forget the machine is there :) [05:04] right... [05:05] hmmm, yeah, that puked [05:06] burnbrighter: only way to rewind ids is destroy-environment [05:06] that's just zookeeper sequence ids [05:06] no way [05:06] the relations barfed it.... or it barfed on its own? [05:07] glance puked mostly [05:07] oh [05:07] :( [05:07] also saw this a lot [05:07] 2012-06-28 05:05:17,750 unit:glance/1: hook.output INFO: db_changed: DB_HOST || DB_PASSWORD not yet set. Exit 0 and retry [05:07] burnbrighter: that's normal [05:07] and this [05:07] 2012-06-28 05:05:14,025 unit:mysql/2: hook.output INFO: DATABASE||REMOTE_HOST||DB_USER not set. Peer not ready? [05:08] burnbrighter: that too [05:08] maybe its a timing thing too [05:08] I should let all of the relations finish one at a time? instead of just pasting them all in? [05:08] those messages are part of the normal relation exchange between the two sides [05:09] I will put it in to pastebin and let you see what I am seeing ;) [05:09] burnbrighter: no,all at once _should_ be fine [05:09] I remember seeing something with relation weights to order them though [05:09] what openstack scripts are you working from? [05:10] http://pastebin.ubuntu.com/1063712/ [05:10] just basically following the procedure you I posted in above, but minus the local repository creating thing [05:10] I should follow it more closely [05:11] there is likely a reason its set up the way it it [05:11] it is rather [05:12] damn... that sure seems like the relation barfed [05:12] could be residual state from the earlier mysql instance fail [05:12] i.e., worth trying a total destroy-environment again [05:13] well, this looks good: [05:13] mysql: [05:13] charm: cs:precise/mysql-2 [05:13] relations: [05:13] shared-db: [05:13] - glance [05:13] resolved --retry ? [05:13] - keystone [05:13] - nova-cloud-controller [05:13] - nova-compute [05:13] - nova-volume [05:13] units: [05:13] mysql/2: [05:13] agent-state: started [05:13] machine: 12 [05:13] public-address: maas07.localdomain [05:13] but I'm really concerned that destroy-environment and then reboostrapping is re-using instances [05:13] yes, that looks like it's in a good state [05:14] lemme read back through the log to see if it fixed itself [05:14] still not authenticating though... [05:14] m_3: iirc it is, and intentionaly iirc inless you also terminate machine [05:14] i ran into that issue myself [05:14] unless* [05:15] mbrandon: is that directed to me? you think auth won't work off the bat? [05:15] no no [05:15] m3 [05:15] ah, ok, sorry [05:15] np [05:15] :) [05:15] imbrandon: not sure what terminate-machine is doing in maas [05:15] it was re: bootstraping reusing instances [05:16] it appeared to not be a real flush [05:16] ahh [05:16] yea last time this came up i was told it was that way due to the time it takes ec2 to spin up a new instance [05:17] so it reused it unless you explisitly say destro it [05:17] iirc [05:18] i'm not greatly familiar with that tho so i very well could be thinking of something else or just flat mistaken [05:18] :) [05:19] * imbrandon goes back to making the PoC juju-control charm [05:19] imbrandon: no, that's the issue... but re-using hardware saves time for maas.. 
but can also intro config problems from the instance's previous life [05:25] burnbrighter: ok, so based on the log content, I'd expect that the good status you're seeing is a lie [05:25] yeah, what I kind of thought [05:26] ok, so where are we... [05:26] I do see mysql appears to be up and running, but the errors don't paint a rosy picture [05:26] at this point, I'm thinking a destroy-enviropnment is called for [05:27] I guess its time to blow everything away - can I do that short of destroying the whole environment? It's kind of a pain to re-bootstrap everything [05:27] ie. destroy-service? [05:27] or is the only sure way to start from scratch? [05:27] yeah, you're right... re-boostrapping is a pain [05:28] perhaps do the destroy-service dance again (removing relations first) [05:28] m_3: i am now [05:28] but this time, make sure there's no way that maas can re-use that nstance [05:28] after a destroy-service [05:28] ah, but it worked the second time around [05:29] oh... [05:29] I see what you are saying... [05:29] do a terminate-machine and add the manual --with-vengeance option on there :) [05:29] I don't know what that needs to look like in maas [05:29] I think the only way to guarantee is to destroy-env right? [05:29] other than perhaps virsh destroy or even undefine [05:30] this is esxi :) [05:30] no, I don't know that destroy-environment really flushes the instances with maas [05:30] ok, I'll try destroy-service first [05:30] ok [05:30] follow the RTFM print [05:31] :) [05:31] I'll dig a bit later today and see what the code does with a terminate-instance [05:31] if you are around, I'll let you know how it goes [05:31] thnx heeps for your help [05:32] I'll bet they're erring on the "more re-use" side so they come back up quicker [05:32] burnbrighter: happy to... we'll figure it out [05:33] burnbrighter: the traceback in http://pastebin.ubuntu.com/1063712/ looks like a dns problem [05:34] dns? [05:34] m_3: g'day [05:34] SpamapS: yo [05:35] burnbrighter: i suspect glance is trying to reach a mysql host at maas07.localdomain, , but that hostname is not resolvable from that machine [05:36] burnbrighter: if you still have that up, can you confirm that? ssh to the machine and see if you can reach that host [05:36] adam_g: where is it getting the .localdomain - is that added by default? [05:36] yes, but I have it on the primary node's host file [05:37] burnbrighter: the .localdomain is the default, yeah. is MAAS handling your DNS or is that handled somewhere else? [05:38] dnsmasq runs on the primary node [05:38] but yeah, you're right, its not working... [05:38] ubuntu@maas07:~$ nslookup maas02.localdomain [05:38] Server: 172.16.100.11 [05:38] Address: 172.16.100.11#53 [05:38] ** server can't find maas02.localdomain: NXDOMAIN [05:39] ok, it is going to the right node though... [05:39] i dont follow [05:39] uh sorry [05:39] 172.16.100.11 is my maas01 - my primary maas node [05:39] and that's where dnsmasq is [05:40] burnbrighter: can the maas managed nodes reach each other via their maas00*.localdomain hostname? [05:41] not localdomain, but short name because its in the host file [05:42] I guess if I update the domain in the host file it will work [05:42] let me try that [05:42] burnbrighter: you'll need to ensure they can reach each other via their FQDN [05:42] yeah - is that .localdomain configurable? where does that come from? 
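The DNS requirement adam_g is pointing at, as a checklist you can run from any managed node. The address in the last step is a placeholder, not one taken from this environment; it only illustrates the hosts-file workaround burnbrighter ends up using on the box running dnsmasq.

# every node must resolve every other node's FQDN, not just its short name
nslookup maas07.localdomain      # NXDOMAIN here is exactly the failure seen above
ping -c1 maas07.localdomain

# stopgap on the dnsmasq host: add FQDN entries where dnsmasq can read them
echo '172.16.100.107  maas07.localdomain maas07' | sudo tee -a /etc/hosts
sudo service dnsmasq restart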
[05:42] burnbrighter: i personally dont know how to go about getting that right in MAAS, but it looks like thats the main issue [05:43] I think I know how to fix it from maas, but I would rather fix it on the juju side [05:43] wonder if that could be why the service got stuck in a 'not-started' state to begin with [05:44] that would make sense [05:44] seems like nothing would work [05:44] though [05:44] burnbrighter: i believe you can edit the nodes' hostnames via the web interface [05:44] eh...which web interface?? [05:44] maas? [05:44] burnbrighter: MAAS [05:44] in the node list, you have the option of editting nodes, and you can update hostnames there [05:45] I have done that - and its set to maas.local - but that is not getting to juju's configuration [05:45] somehow the nodes are getting ... hmmm [05:45] thinking [05:45] you know, my node aren't set [05:46] ok [05:46] nope, they aren't [05:46] so my choice goes back to setting it in the host files [05:46] burnbrighter: localdomain is the default domain name used when none is specified, unrelated to juju, maas, etc. [05:47] yup, that's making sense [05:47] I don't have a domain specified in maas [05:47] its only in the host file [05:47] so that does make sense [05:47] adam_g: thanks for catching the dns... missed that [05:49] yeah, ive seen similar issues. im sure someone else knows how to fix that properly in MAAS [05:50] From what bigjools said, they are getting rid of the dns configuration in the next release [05:50] not sure about the dependencies though... [05:50] maybe putting that back on the user [05:51] that'd make things easier for sure. :) [05:56] ok, now its working... [05:56] ubuntu@maas07:~$ nslookup maas02.localdomain [05:56] Server: 172.16.100.11 [05:56] Address: 172.16.100.11#53 [05:56] Name: maas02.localdomain [05:56] Address: 172.16.100.102 [05:56] ubuntu@maas07:~$ ping maas02.localdomain [05:56] PING maas02.localdomain (172.16.100.102) 56(84) bytes of data. [05:56] 64 bytes from maas02.localdomain (172.16.100.102): icmp_req=1 ttl=64 time=0.362 ms [05:56] 64 bytes from maas02.localdomain (172.16.100.102): icmp_req=2 ttl=64 time=0.228 ms [05:57] * SpamapS shakes head [05:57] DNS is such a CF [05:58] burnbrighter: what is the state of things now? where did you leave it? [05:58] Now do you think its a matter of rebuilding relations, or destroying the service and starting from scratch with the service stack? [05:58] i think its probably safe to remove relations and re-add, without tearing down. [05:58] ok, I will try that... [05:59] burnbrighter: to confirm though, remove all relations and add the glance <-> mysql one, see that it works [06:00] burnbrighter: this might be helpfuul: http://paste.ubuntu.com/1063760/ [06:02] http://pastebin.ubuntu.com/1063763/ [06:03] ah that's way different than what I was following [06:04] I was just doing this... [06:04] juju add-relation keystone mysql [06:04] juju add-relation nova-cloud-controller mysql [06:04] juju add-relation nova-volume mysql [06:04] juju add-relation nova-compute mysql [06:04] juju add-relation glance mysql [06:06] burnbrighter: can you just confirm glance <-> db is okay, 'sudo glance-manage db_version' on the glance node? [06:08] sure, hang on [06:09] ubuntu@maas09:~$ sudo glance-manage db_version [06:09] 13 [06:10] burnbrighter: cool, looks good [06:10] is there a default u/p for openstack-dashboard? [06:10] or is it admin and whatever you configure in to openstack.cfg? 
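With name resolution fixed, the re-check adam_g asks for looks roughly like this; unit names are whatever `juju status` shows (glance/1 in this session).

# add the database relation back and watch for 'started' with no error states
juju add-relation glance mysql
juju status mysql

# then, on the glance unit, confirm the schema migrations actually ran
juju ssh glance/1
sudo glance-manage db_version    # a small integer (13 in this session) means the db is wired up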
[06:12] burnbrighter: yeah, its the admin username and password configured in the deployment config [06:13] but when that's not working, db issues are suspect [06:13] I'm not even sure I see anything hitting the db [06:13] burnbrighter: logging into the dashboard? no, thats probably not db related [06:14] interesting, ok [06:14] burnbrighter: can you paste bin `juju status`? [06:14] sure [06:14] (wife is home, need to run in a min) [06:15] adam_g : http://pastebin.ubuntu.com/1063775/ [06:15] m_3: use juju to deploy -> MAAS using -> EC2 instsnces as Nodes -> to build an Openstack Cloud -> that you use juju to bootrap a juju-control environment -> juju-control installs juju and sets up tmux charm-tools jitsu environments.yaml and ssh-import-keys to bootstrap a child env -> juju deploys byobu-classroom to child env -> byobu-classroom creates a LXC based public bootstrap and ajaxterm node for all to marvel -> Internets break from all the [06:18] burnbrighter: i do not know why, but it shows keystone as 'pending' with no machine. perhaps its just a slow machine? wait for that to come up and 'started' before trying anything. they dont call that 'keystone' for nothing :) [06:18] question for you ... [06:18] sorry, it has a machine but it is not started [06:18] that relation list you sent me is vastly different than the one from the maas documenation [06:19] is that what I should be using? [06:19] I'm not sure I have all of those things deployed [06:19] burnbrighter: which are you referencing? [06:19] http://paste.ubuntu.com/1063760/ [06:19] burnbrighter: i meant, which maas documentation [06:20] oh.. [06:20] hold on [06:20] https://help.ubuntu.com/community/UbuntuCloudInfrastructure [06:21] burnbrighter: [06:22] just compared, they are basically the same. the order is a bit different, and in the one i pasted it is specifying the interface for each relation (eg, :identity-service) which is not needed [06:22] anyway, i have to run [06:22] ok [06:22] thnx [06:22] burnbrighter: now that your DNS is sorted, id suggest trying again from scratch (sucks, i know) [06:23] ping me tomorrow if youre still having problems, see ya [06:24] Can I ask here about juju building problems? [06:24] adam_g: thanks - what timezone / times are you usually around? [06:28] burnbrighter: he's us-west-coast [06:28] imbrandon: sounds just pathologically like inception [06:29] * koolhead11 pokes m_3 [06:29] we should start an inception-count or someting... inception depth [06:29] hey koolhead11! [06:29] alo21: yes you can and should ask about those. :) [06:30] gonna go grab food...bbiab [06:30] SpamapS: I have problem building juju from source [06:30] can someone help me? [06:34] alo21: its python.. [06:34] alo21: what source are you trying to build? [06:35] or are you trying to play with the go port? [06:35] SpamapS: I downloaded juju from apt-get source [06:35] alo21: ah [06:36] SpamapS: and then I run sudo pbuilder build *.dsc [06:38] why does everybody use pbuilder? sbuild is so much better.. :p [06:38] m_3: hahah +1 on inception-count or similar :) [06:38] imbrandon: sooo close [06:38] ? [06:38] imbrandon: hpcloud [06:38] SpamapS: I do not know what is sbuilder [06:38] oh nice, i am getting to know the nova api better [06:39] imbrandon: I think there's something wonky in the swift+nova [06:39] :( [06:39] SpamapS: could youprovide me an useful link about it? 
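For reference, the build alo21 is attempting, spelled out as a sketch. It assumes a stock precise box with deb-src lines enabled; sbuild (SpamapS's preference) follows the same pattern with its own chroot setup.

sudo apt-get install pbuilder
sudo pbuilder create --distribution precise    # one-time chroot setup

apt-get source juju                            # pulls the packaging plus the .dsc
sudo pbuilder build juju_*.dsc                 # results land under /var/cache/pbuilder/result/ by default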
[06:40] SpamapS: re pbuilder amost ALL the docs dating back centuries recoment pbuilder is why [06:40] :) [06:40] yeah [06:40] alo21: I'm about to pass out, so sorry, no. :-/ [06:40] but pbuilder + some custom scripting and lvm snapshots can be beuitiful :) [06:41] heh [06:41] alo21: what are you looking for ? [06:41] imbrandon: all built in to sbuild [06:41] it even has btrfs support now [06:41] SpamapS: yea i'm mostly rembering back to my MOTU days and pbuilder barely existed let along all the cool stuff now [06:41] imbrandon: I am trying to build juju from source [06:41] cowbuilder [06:41] heh [06:42] alo21: ok . where ya hitting a snag ? [06:42] ahh, there's a messed up assumption in the openstack provider... [06:42] it assumes all regions will be the same [06:42] SpamapS: nice . something i can hack out ? [06:42] but object-store is in region-1.geo-1 while compute is az-3.region-a.geo-1 [06:42] imbrandon: what "ya" means? [06:42] imbrandon: I'm about done with it actually [06:43] ya == you [06:43] where are you having truble [06:43] SpamapS: nice , kk [06:44] imbrandon: http://pastebin.ubuntu.com/1063813/ [06:45] if you are on Ubuntu alo21 copy and paste this into terminal and it should build from source, you can customize it from there ... "mkdir ~/juju-build && cd ~/juju-build && bzr branch lp:juju && cd juju && python setup.py build" [06:45] * imbrandon looks at pastebin [06:47] imbrandon: done, then? [06:47] yea no idea about your pastebin, one it needs a little more context but it looks to be a pbuilder issue not juju build issue and the above command should help you there, as for pbuilder #ubuntu-motu should be able to help sort that [06:47] done then what ? [06:48] imbrandon:I am wondering if I have to run other commands? [06:49] oh no, just copy what i had in quotes and paste it as one line [06:49] without the quotes [06:49] and it will run it all but stop if there is an error [06:49] due to the && vs ; [06:50] but yea thats it, it should tell you where it built to, but i think `pwd`/build is the default iirc [06:50] imbrandon: I did not recive ant errors [06:51] then you are good, you just compiled juju [06:51] imbrandon: ok, thnaks [06:51] thanks* [06:52] if you want to do it for hacking on it a little in a saner way not to mess up your system accidenly [06:52] there is a section in the docs [06:52] for devlopers install that will guide you [06:53] alo21: http://jujucharms.com/docs/drafts/developer-install.html [07:01] imbrandon: I think most of the probs are due to hpcloud being diablo [07:01] 2012-06-28 00:00:53,347 DEBUG openstack: 201 '201 Created\n\n\n\n ' [07:01] 2012-06-28 00:00:53,348 INFO 'bootstrap' command finished successfully [07:01] * SpamapS crosses fingers [07:01] sweet [07:02] yea i've been going over their documentation , they have alot of "extensions to the openstack api" but also "noteable diffrences" and "stuff we dont implement" as well :( [07:03] i mean its cool but more like a frankenstack not openstack [07:03] * imbrandon ponders is canonistack is similar [07:03] if* [07:04] imbrandon: yeah, they're really big on "nobody will ever run vanilla openstack" [07:04] which means "openstack will go nowhere" [07:04] yea , thats a sure fire way to solidify that [07:04] who is they in this context tho, canonical ? 
[07:04] hrm, ssh not coming up [07:05] imbrandon: no, HP [07:05] oh [07:05] right [07:05] <-- says canonical as "we" [07:05] right, figured that but i thoght maybe the IS team was they in that sentace :) [07:05] e.g assumed canonistack admins [07:06] no its more the providers who think they must be able to differentiate outside the normal process [07:06] Canonistack might be the only semi-public vanilla openstack ever ;) [07:06] Oh wow.. still apt-getting stuff [07:07] key gen seemed to take a long time [07:07] yea seriously , thats one reason i loved linode for so long and i think RACK is just a big linode from what i have seen, but provide powerfull stuff at good price and dont cripple it cuz someone could take advantage just fire the bad apples as customers [07:07] I suspect we dont' have the same pseudo-randomness hack on their cloud as we do on EC2 [07:08] i dunno its official ami's from canonical [07:08] Yeah that would be unfortunate if it weren't [07:08] a few new ones even poped up this week for older versions [07:08] Cpu(s): 0.0%us, 0.2%sy, 0.0%ni, 49.9%id, 49.9%wa, 0.0%hi, 0.0%si, 0.0%st [07:08] disk seems SLOW [07:08] yea , disk io is horrid [07:08] BUT [07:09] cdn and nic speed is AWESOME [07:09] :) [07:09] good tactic to get people to just buy big mem-tastic instances [07:09] their network is much faster to me than aws , but their cpu and disk io blows [07:10] thus i create like lots of xsmall instances to do the job as a cluster :) [07:10] heh [07:10] ugh yeah wowowow is I/O slow [07:10] imbrandon: anyway, let me push up a branch that works [07:10] their xsmall like rackspaces isnt crippled either [07:10] sweet, if you could pastebin a config too sanitised [07:11] thats one area I'm still a little bit unsure of [07:11] and i'll buy u a crown and coke next meetup :) [07:11] because I think the env vars are used and I may have polluted it [07:11] mmcrown [07:11] * SpamapS could use some whiskey about now [07:11] heh [07:11] forgot to do my SRU's and juju patch pilot today.. er.. yesterday [07:11] seriously dpkg is just SUCKING ASS [07:12] unpacking java took almost 2 minutes [07:12] yea my env on my mac ( what i'm booted into atm ) is very poluted , heh its survived me hacking the hello out of /etc/profile and then in place upgrades from 10.4 to 10.8 preview 4 currently [07:12] heh [07:13] SpamapS: yea once i got mine all setup with "base install" i used the nova api to create a personal base image [07:13] that i use to fire new ones up with, so i'm hoping we can provide a ami-id to the hp provider [07:13] heh [07:13] imbrandon: I think we should build that into juju, all providers have a way to turn a running instance into an image [07:14] that would fall into "we shoudlent care about the env" so i gave up afrer being shut down 2 times [07:14] wasent gonna battle mr juju over it [07:14] but i totally agree its esential in the long run [07:14] SpamapS: don't be sad, you always go on friday remember? [07:15] i mean we can either not care about the env and let something like puppet provsion for us , or we can care and provide juju ways to do it [07:16] but it cant be no you cant touch the metal and expect users to take us serious [07:16] and not hack it [07:16] jcastro: true. I'm out Friday tho, so tomorrow is it. :) [07:16] heh [07:16] jcastro: btw, are your slides still shared on U1? [07:17] heya jcastro :) [07:17] jcastro: I need new ones [07:17] SpamapS: for this weekend we went with dropbox [07:17] err, week. 
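The "personal base image" trick imbrandon describes a little above, sketched with the python-novaclient commands of this era; the server and image names are made up for illustration.

nova list                                          # find the instance you set up by hand
nova image-create juju-base-node my-base-image     # snapshot it ('juju-base-node' is a stand-in name)
nova image-list                                    # wait for the new image to show as ACTIVE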
[07:17] OpenStack LA will have 50 people tomorrow night [07:17] ohhhh i got a new set of slide templates i made that are SLICK AS HELL [07:17] I can send you my whole deck.js folder [07:17] and I am going to show them this openstack provider... [07:17] i need to dig em out for you guys [07:17] I hope for them to all show up in #juju and DEMAND that it be merged *IMMEDIATELY* [07:17] because honestly, even with the problems it has [07:17] its amazing [07:17] *amazing* [07:18] ?? [07:18] tho hpcloud is embarassing itself right now [07:18] sarcastic > [07:18] ? [07:18] jcastro: I am bootstrapping on hpcloud right now [07:18] SpamapS: will this work on RACK and DREAM openstacks too ? [07:18] i will be as soon as he pushes the button [07:18] :) [07:19] in theory [07:19] is DREAM open for beta? [07:19] I should get an account, since they're sponsoring tomorrow night ;) [07:19] lp:~clint-fewbar/juju/openstack_provider_fixes [07:19] jcastro: yea whats up with those accounts [07:19] working [07:19] cloud-init boot finished at Thu, 28 Jun 2012 07:19:12 +0000. Up 1077.37 seconds [07:19] *pathetic* [07:19] imbrandon: big company, slow gears [07:21] slow gears, but fast IO! [07:21] SpamapS: I'll tarball up our slides and send them over [07:34] SpamapS: 'lo, so, you're reviewing charms atm ? [07:35] lifeless: tomorrow morning.. about to sign off after I see mysql deploy on hpcloud [07:37] lifeless: but I will definitely take a look at your thing. :) 12 items in the queue, but half of them are adding maintainer [07:37] \o/ [07:38] * SpamapS hopes HPCloud will address their IO problems soon [07:39] 2012-06-28 07:36:52,593 - DataSourceEc2.py[CRITICAL]: giving up on md after 120 seconds [07:39] le sigh [07:45] cloud-init boot finished at Thu, 28 Jun 2012 07:44:48 +0000. Up 251.99 seconds [07:45] Wow, night and day there [08:00] more diablo lameness.. [08:00] secgroups aren't automatically setup to be able to talk freely amongst their members [08:00] and in fact, have no way to do that. :-P [08:09] * SpamapS heads to bed [08:11] yea there is a way, i had to do it when i created a databse sec grup [08:11] but no way on the console or their cli too [08:12] l [08:12] need to use the os cli nova tool [08:12] and not the one in the repos [08:12] imagine that they extended that too [08:12] :( [08:12] anyhow i'll shoot u a message with where i found the info [08:13] gnight [08:13] jcastro: found it, putting a Ubuntu logo on it and uploading, its "almost" ready for wide use, i was gonna unleash it in a few days after a little more markup cleanup [08:14] but if you like it , use it now, nothing major changing , only cleanup and some new images to replace the pixelated logos [08:14] :) [08:14] * imbrandon gets the url [08:29] jcastro: http://api.websitedevops.com/slides/template/ [08:30] then watch that on your ipad, then iphone, then driod phone , then kindle , then laptop :) [08:30] bad ass, telling you [08:30] all kinda features :) [08:31] imbrandon : c00l [08:31] just need to finish it :) [08:31] ejat: ? [08:31] heh [08:32] * ejat is that a template u guys use to present juju [08:32] or just a general ubuntu template.. [08:32] not yet. when i finish it in the next few days hopefully :) [08:32] \0/ [08:32] neither atm, its about 98% done :) [10:32] aujuju: "init: juju-..." errors in syslog after uninstalling juju === voidspac_ is now known as voidspace [10:47] is it possible for 'juju ssh' to somehow bypass the host authenticity check? 
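Regarding the security-group complaint above (members of a group not being allowed to talk to each other): the workaround imbrandon alludes to with the upstream nova CLI looks roughly like this. The group name is a stand-in; juju creates its own per-environment groups.

nova secgroup-list
# allow any member of the group to reach any other member on all TCP and UDP ports
nova secgroup-add-group-rule juju-myenv juju-myenv tcp 1 65535
nova secgroup-add-group-rule juju-myenv juju-myenv udp 1 65535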
[10:48] I mean, I really have no idea what the fingerprint of this thing is, and I don't know how I'd find out using a trusted channel. [10:49] jml: yea , but you do it with ssh configs not juju specific [10:49] one sec i'll get you the snipit i use [10:51] http://paste.ubuntu.com/1064070/ [10:52] ... [10:52] imbrandon: so, I'm not crazy enough to want to completely disable strict host key checking. [10:52] now i'm not advocating that its safe to ignore those warnings, but i feel you, and i personaly do, but i verify my machines in other ways [10:52] well you can narrow thats to ec2 as well [10:52] with the host * [10:53] host *.ec2.amazonaws.com [10:53] or something [10:53] anyhow, thats one solution, least get ya in the right direction for what would need to be done [10:55] btw not all of that may be needed [10:55] i just grabed that from the user i have setup to control my juju instances [11:01] imbrandon: what I meant was, why doesn't juju do this: https://code.launchpad.net/~jml/juju/no-host-key-check/+merge/112540 [11:03] not sure, imho thats a good fix , one note, you can also specify a temp known_hosts file when using that flag as well [11:03] like /dev/null [11:03] so the remark in your comment becomes moot [11:04] but either way, yours or mine, does the exact same thing, just looks scarier in my ssh config :) [11:06] another solition too would be to add that feature into jitsu , then when you wrap juju in jitsu as per normal it can have an alias for ssh with that config set [11:07] imho that would be the way to get the ball rolling anyhow, SpamapS has said in the past that its a good testing ground before getting things into core [11:12] meh. [11:15] :) === mrevell_ is now known as mrevell [13:30] 'juju status' ... look for the IP somewhere. try it in the web browser. Internal server error. 'juju status', this time look for the instance number. 'juju ssh instance/num'. manually type 'sudo cat /var/log/apache2/error.log'. [14:06] aujuju: Can I specify tighter security group controls in EC2? [14:19] jml: sorry to be so harsh ;) [14:21] jml: btw, the trusted channel you can use for ec2 is euca-get-console-output [14:21] jml: but it lags by 2 - 5 minutes [14:21] SpamapS: and lxc? [14:21] jml: eventually juju will record the fingerprint in ZK as soon as the machine agent starts up [14:21] jml: for lxc you have the file on disk ;) [14:21] cat the console log [14:22] SpamapS: hang on, walk me through this [14:22] I type 'juju ssh instance/N' [14:22] I'm asked "is this *really* the host you want?" [14:23] anyone around to help a maas / juju noob [14:23] and then, for an lxc instance, how do I actually verify that? [14:23] jcharette: we can try. Note there is also a #maas channel [14:24] SpamapS: thanks, i'll let you finish with jml and then go from there [14:24] i've asked for help on maas as well [14:24] jml: the container's root filesystem is accessible to you (via sudo) .. so you can just add the public key directly to known_hosts... [14:25] jml: it will be at /var/lib/lxc/juju-envname-machineid/rootfs/etc/ssh/id_rsa.pub or something like that [14:26] SpamapS: and what assurance does that actually grant me? [14:26] jml: that your container wasn't mitm'd on your own box ;) [14:27] jml: another better way is to just turn off strict host key checking on 192.168.122.* [14:27] jml: you can also direct known_hosts to /dev/null [14:27] so you don't get warned about changed keys [14:29] SpamapS: ok, I'll do that. 
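What "I'll do that" amounts to: an ~/.ssh/config stanza in the spirit of imbrandon's paste and SpamapS's suggestion, scoped to cloud and local-LXC addresses rather than disabling checking globally. The host patterns are illustrative and may need widening or narrowing for your provider.

cat >> ~/.ssh/config <<'EOF'
Host *.amazonaws.com 192.168.122.*
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
EOF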
[14:29] * jml subverts things [14:29] SpamapS: I'm going to read some more documentation on maas, may be back soon with q's [14:31] jml: I do think the known_hosts file needs to be rethought for the cloud. It was created at a time where creating/destroying "servers" was unheard of. [14:31] jml: seems like that bit of the cloud is ripe for an API. [14:32] SpamapS: I don't see how that could work. [14:32] In fact that would be a cool feature to add to openstack. Let the guests inject fingerprints when they boot up.. then have it available as part of the instance info available via the API [14:32] SpamapS: Because the API would also be subject to MITM [14:32] API is HTTPS [14:33] so it at least has the CA system protecting it [14:34] hmm. [14:34] jml: the API must be trusted, as it is standing in for the sysadmin of old, racking boxes and laying down images. :) [14:34] and then things like 'juju ssh' would inspect that API & write to a tmp known_hosts and ? [14:35] SpamapS: "must be trusted" as in "you have no alternative but to trust it" or "it must be trustworthy"? [14:36] jml: it must be trustworthy [14:38] jml: well hopefully we could add something to ssh that stands in for known_hosts, not a file. Something like --host-key="...content of public key" [14:38] jml: but with what we have today, yes, just a temporary known hosts of 1 [14:43] SpamapS: I guess I'm going to hold out for the future then. [15:12] it's trusted, so you better hope it is trustworthy [15:20] jml: for the present, ~/.ssh/config works great [15:36] SpamapS: is there an easy way to get the IP address of my just-deployed service? [15:37] SpamapS: I get it that juju is supposed to work for a jillion instances, but I still want to be able to open a web browser for the thing I just deployed [16:55] jml: there's a few jitsu commands to help [16:55] jml: jitsu get-unit-info [16:57] jml: I do think there is a need for charms to be able to talk back to the admins more [16:57] jml, juju status $service | grep public-address | head -n1? [16:58] james_w: oh right, status can be restricted to service [16:58] yeah [16:58] SpamapS, james_w: thanks [17:02] SpamapS: there seems to be a general pattern of "Q. X is tedious with jitsu. Any tips? A. Use jitsu." [17:02] SpamapS: why is jitsu separate to juju? [17:09] james_w: juju status libdep-service 2>/dev/null | grep public-address | head -n1 | cut -d':' -f2 | sed -e 's/ //g' [17:15] it seems as if my db-relation-changed hook isn't being called when the relation is added... or something? [17:17] jml, the relation isn't in an error state? [17:18] * jml head desks [17:18] james_w: no, I don't think so. juju status doesn't say 'error' anywhere and the logs don't have any obvious errors [17:18] of course one of the many errors I'm dismissing as false positives might be actual errors [17:18] umm... [17:18] right, so I ran debug-hooks [17:19] and then I hit C-d to quit tmux [17:19] and now I can't do debug-hooks on that instance? [17:19] and now it works? [17:19] *sigh* [17:20] And then I just had debug-hooks running, and then I added the relation (juju add-relation postgresql:db libdep-service) and nothing happened in the debug-hooks session [17:22] hmm, I don't know what that could be [17:22] you're not currently executing a hook in tmux? [17:22] that caught me out once [17:22] as hooks are serialized [17:24] james_w: no, I wasn't. Perhaps "install" isn't finished... although how the instance got into the 'started' state without that I won't know. [17:24] I've trashed my environment. 
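Tying off the "how do I get the IP of what I just deployed" thread from earlier in this stretch: the one-liners quoted above, wrapped into a tiny helper. jitsu's `get-unit-info` does the same job when juju-jitsu is installed; the function name here is made up.

# minimal sketch: first public-address reported for a service
unit_address() {
    juju status "$1" 2>/dev/null \
        | grep public-address \
        | head -n1 \
        | cut -d':' -f2 \
        | sed -e 's/ //g'
}

unit_address libdep-service    # e.g. open "http://$(unit_address libdep-service)/" in a browser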
[17:15] it seems as if my db-relation-changed hook isn't being called when the relation is added... or something? [17:17] jml, the relation isn't in an error state? [17:18] * jml head desks [17:18] james_w: no, I don't think so. juju status doesn't say 'error' anywhere and the logs don't have any obvious errors [17:18] of course some of the many errors I'm dismissing as false positives might be actual errors [17:18] umm... [17:18] right, so I ran debug-hooks [17:19] and then I hit C-d to quit tmux [17:19] and now I can't do debug-hooks on that instance? [17:19] and now it works? [17:19] *sigh* [17:20] And then I just had debug-hooks running, and then I added the relation (juju add-relation postgresql:db libdep-service) and nothing happened in the debug-hooks session [17:22] hmm, I don't know what that could be [17:22] you're not currently executing a hook in tmux? [17:22] that caught me out once [17:22] as hooks are serialized [17:24] james_w: no, I wasn't. Perhaps "install" isn't finished... although how the instance got into the 'started' state without that I don't know. [17:24] I've trashed my environment. Perhaps that will help. [17:28] yay, the hook ran this time [17:28] now I'm getting an internal server error, but that's probably my fault. [18:00] james_w, around? [18:00] hi sidnei [18:00] james_w, i see you added an upstart script to the txstatsd charm, i'm considering adding it to the txstatsd packaging instead, with a few changes. === dpb__ is now known as Guest1484 [18:02] sidnei, sounds ok [18:04] james_w, so generally, the way we're running txstatsd and carbon-cache for u1 is to run multiple processes on a single machine, to take advantage of all cores. got any handy examples of spawning multiple daemons from an upstart script, controlled by say /etc/default/$daemon having 'NPROCS = X'? [18:05] then i guess we'd control that by doing relation-set nprocs=X [18:05] sidnei, I don't have an example [18:06] ok, thanks! === zyga is now known as zyga-afk [19:04] james_w, http://upstart.ubuntu.com/cookbook/#instance fyi [19:04] cool
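The cookbook link at 19:04 answers the 18:04 question only in outline, so here is a hedged sketch of that instance-stanza pattern: a dispatcher job reads NPROCS from /etc/default and starts N instances of a worker job. Job names, paths, and the exec line are placeholders, not the actual txstatsd charm or packaging:

    # /etc/init/txstatsd.conf -- dispatcher (hypothetical)
    description "spawn NPROCS txstatsd workers"
    start on runlevel [2345]
    task
    script
        NPROCS=1
        [ -r /etc/default/txstatsd ] && . /etc/default/txstatsd   # may set NPROCS=X
        for i in $(seq 1 "$NPROCS"); do
            start txstatsd-worker N="$i" || true
        done
    end script

    # /etc/init/txstatsd-worker.conf -- one instance per worker (hypothetical)
    description "txstatsd worker"
    instance $N
    stop on runlevel [!2345]
    respawn
    exec /usr/bin/txstatsd --instance "$N"   # placeholder command; real flags differ

A charm could then write NPROCS into /etc/default/txstatsd from a config or relation value, which lines up with the 18:05 'relation-set nprocs=X' idea.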
[19:52] jml: I've had the exact same experience you just had. I'd bet money you had a hook stuck in a command, so the next hook could not run. [19:53] jml: and to answer your earlier question about why jitsu is not part of juju: juju has stringent requirements for code review and design, so it takes time for fresh new ideas to trickle into juju [19:53] jml: whereas jitsu allows rapid iteration and scratching of itches [19:53] * SpamapS starts reviewing charms [20:11] does the ppa juju package have the fix for lxc not using sudo to bootstrap? [20:12] the 0.5+bzr531-0ubuntu1 version doesn't work for lxc for me unless i'm root. and i have pwdless sudo. [20:12] * zirpu should probably just live on the edge and run from trunk. [20:16] zirpu: I wasn't aware any fixes were done around that [20:16] zirpu: the PPA is trunk [20:16] something I'd actually like to change [20:17] but.. so many ideas, so little time [20:18] yup, sometimes i wish there were 3 of me [20:18] maybe 4 [20:18] :) [20:26] SpamapS, planning to try juju for google cloud? [20:27] i'm pretty sure my clone and i would attempt to kill each other. :-) === imbrando1 is now known as imbrandon [20:32] koolhead17: looks like it is a different API, so no [20:33] SpamapS, k === marrusl is now known as marrusl-ebayqbr === marrusl-ebayqbr is now known as marrusl [21:24] hi, anyone feel like debugging a juju on osx install problem? [21:25] totally clean 10.7/homebrew, brew install gets almost to the end and then: [21:25] Finished processing dependencies for juju==0.5 [21:26] build.rb:49: in 'initialize': Bad file descriptor === james_w` is now known as james_w [21:29] imbrandon: around? [21:29] dk1: imbrandon wrote that recipe, he should be able to help [21:29] imbrandon: ^^ [21:39] sorry [21:39] like right in the middle of something [21:39] but yea i can fix you up in a lil bit [21:40] lemme get off the phone and such [21:40] dk1: ^^ [21:44] hey [21:44] ok [21:44] dk1: between now and then this will fix you up ( you made it far enough in the process ) ... open term and run "mkdir /tmp/juju && cd /tmp/juju && bzr branch lp:juju && cd juju && sudo python setup.py install && cd ~" that _should_ fix you up from that point and i'll look deeper into the issue here in a few [21:44] sec [21:44] just copy/paste that [21:45] running now [21:48] ok, so after that running juju gives me: [21:48] ImportError: No module named zookeeper [21:49] brew uninstall zookeeper && brew install zookeeper --python [21:49] imbrandon: *** [21:49] No such file or directory [21:49] .. and [21:49] already installed [21:50] when tried running separately [21:50] you typo'd [21:50] nope [21:50] Uninstalling /usr/local/Cellar/zookeeper/3.4.3... [21:51] Error: No such file or directory [21:51] ahh your ZK barfed on install the first time [21:51] seems likely [21:51] brew unlink zookeeper [21:51] same error [21:52] cd /usr/local/Cellar [21:52] rm -rf zookeeper [21:52] ok [21:52] ok [21:52] cd ~ && try installing again [21:52] with --python [21:53] bombs out [21:53] at.. [21:53] ok, there is a problem with the zk formula then [21:53] and i/we dont maintain that BUT you can try ... [21:54] ld: duplicate symbol _hashtable_iterator_key [21:54] wait do you have xcode and the cli tools etc [21:54] installed [21:54] yep [21:55] morning 'all [21:55] k yea its likely a zk formula problem, the only other thing you can try atm is this ( without me getting on my osx box and digging in more heh ) [21:55] heya lifeless [21:55] try "brew install zookeeper --HEAD --python" [21:55] rm -rf like before if needed [21:56] did i mention i HATE zookeeper [21:56] and i see no reason that should be on the client anyhow /me grumbles [21:57] brb , lil boys room callin my name [21:57] bombs out [21:58] with a brand new error [21:58] autoreconf -if [21:58] frak, ok pastebin the output of "brew doctor" and i'll be back in a few [21:58] oh yea [21:58] you need that installed [21:58] should have told ya [21:58] it grabbed all of that for me [21:59] not sure what the autoreconf pkg is called but its in brew [21:59] just bombs with: [21:59] configure.ac:37: warning: macro 'AM_PATH_CPPUNIT' not found in library [21:59] oh ok , yea zk is being a bitch, you said clean 10.7 ? no python from brew or gcc from brew or nothing ? [22:00] there's a /usr/bin/python and /usr/bin/gcc, but neither in the Cellar [22:00] toss "brew doctor" output in paste.ubuntu.com and i'll brb, really got to head for a sec [22:01] k thats fine [22:04] paste.ubuntu.com/1065093/ [22:06] open /etc/paths and move the line with /usr/local/bin to the top, save, exit term [22:07] reopen term, then try the zk install with --python , then with --HEAD [22:07] and afk again [22:07] brb [22:16] sec [22:21] if that dont work, screw brew and http://paste.ubuntu.com/1065123/ [22:22] * imbrandon is going to write a custom installer for the 0.5.1.x release [22:22] damn, afk AGAIN [22:26] .x ? [22:26] imbrandon: oh right you want .bzrrev [22:26] whatever would we do if it were git tho? ;-) [22:29] lifeless: reviewing opentsdb now [22:32] lifeless: thats one hell of a README :) [22:42] lifeless: review posted [22:49] SpamapS: hahah , easy [22:49] SpamapS: JUJU_BUILD_REV=`git log -n 1 --pretty=format:"%h"` :) [22:51] * imbrandon uses that for client website builds that use git or HG to print in the footer , normally a CI server is pushing it out so its very very nice [22:52] %h is ? [22:52] bholtsclaw@ares:/usr/local/Library$ git log -n 1 --pretty=format:"%h" [22:52] 2c9d87e [22:52] short hash [22:52] a hash that will always be higher than previous hashes?
[22:53] uniq [22:53] need higher [22:53] not sure about higher but thats solved with the [22:53] rev number [22:53] if its like our workflow anyhow the rev num would work from github etc [22:53] you just can't depend on it from diff repos [22:54] but if you build from the same one always its fine :) [22:54] yeah [22:54] same restriction for bzr really [22:54] crap, 3 hours till presentation and I literally have 1 slide [22:55] and really to be "the best" would be a combo, like [22:55] only one charm in the review queue ... nice [22:55] 0.5.1.N-%h [22:55] or something , that way devs could easily tie it to a hash locally [22:56] and N will always be higher [22:56] negronjl: thats either very good (we're on top of things) or very bad (we're sinking!) [22:56] hah [22:56] scratch that .... no charms in the queue .... nice/scary (depending on your point of view) [22:56] that reminds me i was gonna do some docs merges and i got one more build workaround to maybe make 0.6.4 work [22:56] I better get to write more charms STAT so we have some stuff there :) [23:03] HOLY CRAP, i just hit the jackpot with the OSX build, i def got to write a native installer but thats easy [23:03] http://pypi.python.org/pypi/zc-zookeeper-static [23:03] ^^ statically compiled zkpython 3.4.3 for osx [23:03] YAY! [23:04] bout time something went right [23:09] SpamapS: thanks [23:10] oh that is SOOO much easier ... [23:18] imbrandon: sweet [23:19] yea , this is gonna make the osx installer solid and not a thorn in my side [23:19] heh [23:19] now does the client REALLY need zookeeper local or just the bindings, remember no lxc here [23:19] just bindings [23:20] ROCK! [23:20] i think i just squealed like a lil girl [23:21] oh hell yea then no brew needed at all since bzr has a native osx .dmg [23:21] hells yea [23:22] that makes my day [23:22] hell week [23:22] in fact ... let me ponder on this a few moments ... [23:23] * imbrandon opens lp:juju/setup.py [23:23] crap still need brew for gcc [23:24] damn [23:24] or xtools [23:24] oh no there is a cli dmg install of that too [23:24] * imbrandon goes to look [23:25] imbrandon: no luck [23:25] haha, i found some good news [23:26] ? [23:26] sudo easy_install zc-zookeeper-static [23:26] let that finish [23:26] ah [23:26] then ping me [23:26] builds fast i tested it here [23:27] running [23:27] this is one of the new retina mbps, fairly fast [23:27] done [23:27] nice [23:27] sweet [23:27] ok type "python" [23:27] get a prompt [23:27] yep [23:27] then "import zookeeper" [23:27] enter [23:27] ok [23:28] error ?
or another prompt [23:28] prompt [23:28] looks like it imported [23:28] sweet, ctrl-D [23:28] ok [23:28] then try "juju bootstrap" [23:29] no envs configured [23:29] ROCKIN [23:29] better than before [23:29] definitely [23:29] ok go read the docs and enjoy juju [23:29] you're good [23:29] haha [23:29] :) [23:29] awesome [23:29] i'll fix up the installer for next time but yea [23:29] remember there is no lxc on osx [23:30] so everything is ec2 or other remote envs [23:30] :) [23:30] this is minute 5 of juju for me [23:30] trust me you dont want lxc anyhow , vbox would be a better local provider imho [23:30] :) [23:31] do i need a local linux env for anything [23:31] if i'm just writing scripts and pushing them out to test [23:31] yea just sign up for an amazon account and go to town tho, you're a normal install at this point [23:31] nope , only need linux if you want lxc ( chroots ) local [23:32] cool [23:32] imbrandon: closer to jails ;)
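For reference, the working OS X client recipe the exchange above converges on, condensed into one hedged sketch (it assumes Xcode's command line tools and the native bzr .dmg are already installed, and there is no local lxc provider on OS X):

    # prebuilt zookeeper python bindings, instead of the flaky Homebrew formula
    sudo easy_install zc-zookeeper-static
    # the juju client itself, from trunk
    mkdir /tmp/juju && cd /tmp/juju && bzr branch lp:juju && cd juju && sudo python setup.py install && cd ~
    # sanity checks
    python -c 'import zookeeper'   # should return silently
    juju bootstrap                 # complains about no configured environments until
                                   # ~/.juju/environments.yaml points at ec2 or another remote provider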
[23:32] and what are people using for deploying big rails apps [23:32] and like i said personally i dont use/like them anyhow [23:32] upstart, runit? [23:32] forever , god [23:32] i see god used a lot [23:32] or upstart [23:33] in production? [23:33] dunno anyone that dont run nginx+passenger for ruby in prod :) [23:33] dk1: if you want the offline experience, simplest to just fire up an ubuntu server VM and use the local provider inside that [23:35] not usually offline, only rationale would be if the testing loop was massively faster [23:35] not really once you have a good env setup [23:35] if you're destroying and creating over and over then yea [23:35] maybe [23:36] but once the base is down pat, and it's just charm upgrades [23:36] etc [23:36] then much easier the "normal" way imho [23:36] we're probably going to jruby+trinidad soon anyway [23:36] but we're talking 10 min diff here [23:36] not hours [23:37] dk1: its only faster if you are on a really fast SSD.. otherwise its faster to just terminate/start EC2 m1.smalls while developing [23:37] dk1: and even if you're on a fast SSD.. its still sometimes slower because many m1.smalls are going to still be more scalable than your one 4-core i5 ;) [23:38] think this question will probably answer itself by tomorrow [23:38] thanks imbrandon & guys [23:39] i'll put it like this, i have only fired up a lxc container ONE time ever and didn't even let it finish [23:39] dk1: np [23:39] pop in anytime ya need a hand, normally someone around [23:39] and i'm normally the token mac fella [23:39] :) [23:39] but really once you're at this point its all the same [23:39] you're past the mac specifics [23:39] SpamapS: dunno how fast you think ec2 is ;) [23:40] SpamapS: but, local openstack is about 30 times faster for me [23:40] local openstack sure, not 2 or 3 containers on your laptop [23:40] :) [23:40] imbrandon: you haven't met my laptop :) [23:40] heh [23:40] imbrandon: 8GB ram, intel SSD, i7 4-hardware threads. [23:41] same [23:41] sounds heavy [23:41] well 16gb [23:41] but yea [23:41] same [23:41] is my daily machine [23:41] imbrandon: desktop is 16GB, 8 hardware threads; raid 10, its where my local openstack is. [23:41] dk1: its about 1kg [23:41] :) [23:41] dk1: x201s [23:42] eww but sooo ugly , i'll stick to my mbp :) [23:42] you can toss 'em across the room tho [23:42] ah, didn't know you could get an s with a qc [23:43] 16gb retinas aren't shipping yet anyway, they just gave me a loaner 8gb [23:46] with sublime text open x2 + adobe ps 6 + adobe flashbuilder 5 + xcode + sourcetree + god knows how many term tabs + screen windows + chrome w/ 100 tabs combined + firefox with 4 or 5 tabs + Espresso + Mail.app + cyberduck + gradient + itunes + Proseql + adium , and then whatever random app i might have temporarily open [23:46] thats what eats my 16gb of ram quickly ^^ [23:46] thats actually what i have open now, and only been on the osx partition an hour [23:46] heh [23:51] imbrandon: one more thing, don't know what Formulas are yet, but the jujutools.github.com site links to a directory with text that implies there should be more than one juju.rb file in it [23:52] there will be when i put charm-tools there [23:52] and a few other goodies you'll soon want to have [23:52] :) [23:53] just not got one of those roundtuits yet, maybe a good project this weekend [23:54] SpamapS: import subprocess [23:54] _proc = subprocess.Popen('git log --no-color -n 1 --date=iso', [23:54] shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE) [23:54] try: [23:54] GIT_REVISION_DATE = [x.split('Date:')[1].split('+')[0].strip() for x in [23:54] _proc.communicate()[0].splitlines() if x.startswith('Date:')][0] [23:54] except IndexError: [23:54] GIT_REVISION_DATE = 'unknown' [23:54] :)
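The snippet pasted at 23:54 arrives mangled by the chat's line wrapping; reassembled (Python 2 era, like the juju 0.5 code under discussion) it looks roughly like this, with the 22:49 short-hash variant added for comparison. Only the names from the paste are original; everything else is an assumption:

    import subprocess

    # Date of the last commit, per the 23:54 paste (note: split('+') only strips
    # positive UTC offsets from the --date=iso output).
    _proc = subprocess.Popen('git log --no-color -n 1 --date=iso', shell=True,
                             stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    try:
        GIT_REVISION_DATE = [x.split('Date:')[1].split('+')[0].strip()
                             for x in _proc.communicate()[0].splitlines()
                             if x.startswith('Date:')][0]
    except IndexError:
        GIT_REVISION_DATE = 'unknown'

    # The 22:49 idea: a short hash suitable for embedding in a build/version string.
    GIT_SHORT_HASH = subprocess.Popen('git log -n 1 --pretty=format:%h', shell=True,
                                      stdout=subprocess.PIPE).communicate()[0].strip()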