[03:15] SpamapS: adam_g had documentation publicly available AFAIK
[03:31] is there a readme or docs for setup with openstack?
[04:25] would it be possible to use a redhat-based ami rather than an ubuntu one?
[05:01] mrsipan_: would need cloud-init
[05:08] SpamapS, would it be the only needed package?
[05:09] mrsipan_: it's a pretty huge need, and there will be some things that won't work.. like PPAs
[05:09] right
[05:09] mrsipan_: juju seeds new nodes with cloud-init to say things like "install these packages" so it can start its agents
[05:12] makes sense, I guess the cloud-init is also required to get the ec2 user (meta) data for bootstrapping
[05:12] and all the written formulas currently expect .deb packages
[05:12] err, charms
[05:12] well, they expect apt
[05:12] fair enough
[05:13] and they make heavy use of pre-seeding when appropriate.
[05:13] suffice to say, the charms would have to grow OS independence
[05:13] mrsipan_: there is no technical reason it won't work. There is, however, a lot of duplication of effort needed.
[05:13] right
[05:15] wow, we definitely need to fix this bit where txzookeeper can't reconnect on session expiry
[05:15] 2011-10-18 05:12:24,050: twisted@ERROR: Failure: zookeeper.SessionExpiredException: session expired
[05:15] its t3h suck
[11:27] http://cloud.ubuntu.com/2011/10/ubuntu-cloud-deployment-with-orchestra-and-juju/
[11:41] kim0: awesome man!
[11:45] kim0: does this work in kvm virtual machines as well, or is it limited to real iron?
[11:48] kim0: should you maybe add that we're targeting to be "production-ready" by 12.04 (and hence still in development mode for orchestra/juju)?
[11:51] TeTeT: not sure about that; because of needing 6 servers, I couldn't play with the whole thing
[11:51] m_3: yeah, makes sense .. I'll add that
[11:56] kim0: yeah, I face the same limitation, I don't have access to 6 servers, but might be able to get 6 vms on 3 computers up and running or so
[11:57] TeTeT: generally I think it's easy to configure openstack to use qemu (inside a VM), yes .. it's only a single flag in the nova config file .. so it "should" work, yeah .. just didn't do it
[11:57] kim0: ok, will give it a try eventually :)
[11:58] cool :)
[11:58] TeTeT: if you do .. be sure to share your experience on cloud.u.c :)
[11:59] kim0: lol, i'll send you an email and you can feel free to publish it at most ;)
[11:59] TeTeT: if you have a blog, I'm happy to add it though :)
[11:59] kim0: I don't blog
[11:59] wise man :)
[12:15] offhand, does anyone know why we have separate 'environment_name' and 'config' attrs on a provider? we pass the config around everywhere, but in general you need a provider to actually find out the env name
[12:27] fwereade: isn't the environment_name used for naming things?
[12:28] fwereade: hmm, i think i've probably misunderstood the question
[12:38] rog, sorry, missed you
[12:39] fwereade: np
[12:39] rog: I just had an urge to name something based on the environment_name
[12:39] rog: as we do for, say, security groups
[12:39] fwereade: yeah, that's what i was thinking of
[12:39] rog: but that takes the environment name from the provider, not from the config dict
[12:40] ah. aren't they the same?
[12:40] rog: nah, the provider object has .environment_name and .config
[12:40] I grabbed juju yesterday and get an error that store.juju.ubuntu.com is not resolvable when deploying... anyone else see that?
[12:40] fwereade: isn't the former initialised from the latter?
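
(The seeding described at [05:09] is plain cloud-init user-data; a minimal sketch of the shape of such a cloud-config follows. The package name and command are illustrative assumptions, not what juju actually generates.)

    #cloud-config
    apt_update: true              # refresh the package index before installing
    packages:
      - python-twisted            # assumption: stand-in for the agents' dependencies
    runcmd:
      - [sh, -c, "echo start the juju machine agent here (hypothetical)"]
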
[12:40] mjfork: we've had some trouble getting the online docs to auto-update
[12:41] mjfork: stick "local:" on the front of the charm name
[12:41] juju deploy --repository=blah local:mysql
[12:42] rog: the provider is initialised by an Environment object, which passes in environment_name and config, yes
[12:42] tells me that the charm was not found in the repository
[12:42] i see a directory with the name
[12:42] mjfork: is the repo flat, or does it have an "oneiric" subdirectory with the charms?
[12:43] i am in the oneiric subdir
[12:43] do i need to go up a level?
[12:43] that did it
[12:45] fwereade: maybe the reason is that the environment name isn't actually part of the config for the given environment
[12:45] fwereade: it's outside of it in the yaml
[12:45] i don't see why it shouldn't be put in there though
[12:46] rog: I think that's the reason indeed
[12:47] rog: anyway, not something I actually need, just an idle curiosity
[12:47] mjfork: glad that helped
[12:47] fwereade: yeah. it's an interesting oddity.
[13:58] how do i tell juju which key pair to use?
[14:01] mrsipan, you can set authorized-key-path for the environment in environments.yaml
[14:01] fwereade: thanks
[14:01] mrsipan, sorry, authorized-keys-path
[14:01] mrsipan, pointing to a .pub file
[14:02] cool
[14:17] what user does juju use to ssh into instances?
[14:19] mrsipan, "ubuntu", as I recall
[14:20] mrsipan, are you having problems?
[14:20] fwereade, I keep getting ERROR SSH authorized/public key not found
[14:20] when trying to bootstrap
[14:20] mrsipan, odd
[14:21] mrsipan, if you don't specify anything for authorized-keys-path it should just use whatever it finds in your ~/.ssh
[14:22] mrsipan, any of
[14:22] If your version of Ubuntu is not listed above, it is no longer supported and does not receive security or critical fixes.
[14:22] oops, sorry
[14:22] key_names = ["id_dsa.pub", "id_rsa.pub", "identity.pub"]
[14:22] do you have any of those set up?
[14:22] I have id_rsa.pub
[14:24] mrsipan, would you run bootstrap with "-v", and pastebin me the output please
[14:24] sure
[14:25] does 11.10 have xen built in?
[14:25] or do I have to use kvm
[14:26] fwereade: http://paste.ubuntu.com/712039/
[14:27] * fwereade makes thinking noises
[14:28] should have done some research before asking; both are supported
[14:30] mrsipan, and you saw that error just the same before and after adding an authorized-keys-path?
[14:31] fwereade: yes, I did, I can try again with the authorized-key-path option
[14:31] mrsipan: the alternative is to set "authorized-keys" in the environment, with the contents of your .pub file
[14:31] (and delete authorized-keys-path)
[14:31] k, will do that
[14:32] mrsipan: I should note that both those options should have "key", singular, not "keys" in their names
[14:32] mrsipan: but they don't, and it's confusing
[14:33] fwereade, k, thanks for the clarification
[14:34] mrsipan: I'm sorry, but I need to get to a shop before it closes -- I'll be back later, but you might want to follow up with someone else if you keep having problems
[14:34] sorry to abandon you :(
[14:34] fwereade, np, thanks a lot for your help
[14:38] i was working with SpamapS to replicate the OpenStack keynote demo, everything starts ok, but terasort.sh fails with connection refused on port 8020
[14:40] i believe it's a hadoop thing, but not sure
[15:08] mjfork: hey
[15:08] mjfork: if terasort.sh is failing then something went wrong starting hadoop
[15:09] what machine should be listening on 8020?
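
(Pulling the two environments.yaml threads above together — the environment name sitting outside the per-environment config ([12:45]), and authorized-keys-path pointing at a .pub file ([14:01]) — here is a sketch. The ec2 keys shown are typical for this era of juju but should be checked against the docs; all values are placeholders.)

    environments:
      sample:                                    # the environment_name: a sibling of, not part of, the config dict
        type: ec2
        control-bucket: juju-example-bucket      # placeholder
        admin-secret: example-secret             # placeholder
        authorized-keys-path: ~/.ssh/id_rsa.pub  # must point at a .pub file ([14:01])
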
[15:09] * SpamapS really needs to finish polishing up those instructions and blogify it
[15:09] mjfork: namenode
[15:09] m_3: ^^ mjfork is in need of guidance using your hadoop charms.
[15:10] SpamapS: if i run juju status I can see the data cluster, jobmonitor, and namenode
[15:10] all with a status of up
[15:10] juju ssh namenode/1
[15:11] if I run a ps, i don't see any hadoop process
[15:11] nor anything bound to that port
[15:11] mjfork: java
[15:12] mjfork: dig around in /var/lib/juju/units/namenode-0/charm.log
[15:12] on the controller of namenode?
[15:12] on namenode/1
[15:13] actually make it /var/lib/juju/units/namenode-1/charm.log
[15:15] unknown host exception
[15:15] the host name was never reset
[15:15] hm?
[15:15] mjfork: can you pastebin the log?
[15:16] but cloud-init set /etc/hosts to have server_252
[15:16] ew
[15:16] server_252 .. that's no good
[15:16] mjfork: should be server-252 .. I wonder if your nova-network is misconfigured
[15:16] i wonder why cloud-init didn't set the hostname
[15:17] let me fix that
[15:17] may do it
[15:17] I'm getting address 'store.juju.ubuntu.com' not found when deploying the example, any ideas why?
[15:18] it seems a dns resolution error
[15:18] if your charm is local, prefix it with local:
[15:18] i ran into this earlier this AM
[15:18] mjfork, kthx
[15:29] SpamapS: i rebooted that node to pick up the hostname change
[15:30] and it says waiting for unit to come up... but the system is booted
[15:30] reboots don't work
[15:30] doh
[15:30] mjfork, SpamapS: hi guys
[15:30] ok
[15:30] so rebooting breaks it?
[15:30] what's the alternative to get the hostname set right?
[15:35] mjfork: so hadoop had a problem with the 127.0.1.1 entry that cloud-init puts in /etc/hosts
[15:35] mjfork: rebooting is a top priority on the production bugs list (for obvious reasons) ;)
[15:36] mjfork: hadoop requires the hostname to resolve to a real (non-loopback) interface
[15:36] mjfork: are you seeing this problem? i.e., `netstat -lnp | less` and see hadoop binding to localhost only?
[15:37] hadoop wasn't bound at all
[15:37] presumably because the host name didn't resolve
[15:37] mjfork: hmmmm.... that's strange
[15:38] using openstack + oneiric guest + cloud-init
[15:38] mjfork: so let's start on the namenode/0
[15:38] mjfork: I would try destroying the namenode service, terminating the machine it's running on, and then deploying namenode again.
[15:39] yeah, worth starting fresh
[15:42] SpamapS: hadoop refuses to bind based on ip address
[15:42] <_mup_> juju/ssh-passthrough r403 committed by jim.baker@canonical.com
[15:42] <_mup_> Initial commit
[15:43] SpamapS, mjfork: so we need some sort of name that will resolve across the openstack install
[15:44] it's actually a bug I'd like to push upstream
[15:45] juju must keep VMs around after destroying a service?
[15:46] we had no problems with hostnames in our openstack.. they were server-### .. and resolved fine
[15:46] every VM I log into has ubuntu as hostname
[15:46] mjfork: yeah, the reason for that is that it's very likely that all services will deploy inside a container in the VM, for quick cleanup/re-use
[15:46] mjfork: I say very likely because it's not part of the code.. yet. ;)
[15:47] <_mup_> juju/ssh-passthrough r404 committed by jim.baker@canonical.com
[15:47] <_mup_> Merged trunk
[15:47] Would be cool to use containers that way now.. without network namespaces.
[15:48] how did you work around it?
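
(To make the hostname problem above concrete: the names and address below are illustrative, but the shape matches what's described at [15:16] and [15:35]-[15:36].)

    # What cloud-init wrote on the broken unit -- the underscore name came from
    # a misconfigured nova-network, and hadoop won't bind via a loopback mapping:
    127.0.0.1   localhost
    127.0.1.1   server_252
    # What hadoop needs: the unit's hostname resolving to a real interface:
    10.0.0.252  server-252
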
[15:48] i must be missing something for the hostname not to be set right, but the /etc/hosts is
[15:48] mjfork: what's written in /etc/hosts once the unit is up?
[15:50] mjfork: _'s in hostnames would be an openstack/DHCP server problem
[15:50] mjfork: you may need to look at nova-network's configuration
[15:52] ec2metadata --local-hostname returns server_221
[15:52] so it does look like an OS problem
[16:00] mjfork: yeah I'm sure there's some default hostname template somewhere that needs changing
[16:01] yah, found the bug report
[16:01] it's 1 line!
[16:02] if i could contrib code i would :-)
[16:06] shoot
[16:06] still didn't set hostname
[16:08] mjfork: what do `hostname -f` and `hostname -f | xargs dig +short` return?
[16:09] realized i probably didn't set it in the right spot
[16:09] trying again
[16:10] gotcha
[16:18] hostname -f says Name or service not known
[16:24] mjfork: dang...
[16:25] mjfork: what gets written into /etc/hosts by cloud-init? i.e., typically a 127.0.1.1 entry
[16:29] yep
[16:29] shoot... still says server_268
[16:29] instead of with -
[16:30] i do see i am running an older build
[16:30] should upgrade
[16:38] older build of what?
[16:41] OpenStack
[16:41] i see my RPMs have a date of 0727
[16:41] sigh
[16:41] <_mup_> juju/ssh-passthrough r405 committed by jim.baker@canonical.com
[16:41] <_mup_> Testing to verify passthrough of args
[16:46] jimbaker: yay, I'll actually use that quite a bit
[16:49] where can i view a list of merge requests that i have been asked to review?
[16:50] i'm sure i was told a URL recently, but i stupidly didn't write it down!
[16:50] * rog wishes that launchpad emails were more easily filterable
[16:54] rog: the first link in the channel topic, http://j.mp/juju-florence
[16:55] <_mup_> juju/ssh-passthrough r406 committed by jim.baker@canonical.com
[16:55] <_mup_> Refactoring
[16:57] bcsaller: that only seems to show the person that's submitted the code, not who's been asked to review it.
[16:57] rog: people don't typically get asked, it's a pull process
[16:58] bcsaller: ok. i must be misremembering.
[16:59] m_3, good to know. i think it will be a very nice feature, being able to create a tunnel (no need to expose) or change config options, or do stuff like juju ssh unit/0 cat foo
[17:01] rog, i explicitly asked on one of the go reviews for you
[17:01] but typically, we go to the kanban view and pick items off the review queue there
[17:01] hazmat: ah, that's what i'm remembering. it got buried in my email and i can't remember which one it was.
[17:01] jimbaker: tunnels on the fly to unexposed services is the big one (-L8888:localhost:80)
[17:02] m_3, sounds good. it should land in review this afternoon, just need to write a few more tests
[17:02] hazmat: thanks
[17:02] m_3, then i will add support for juju scp
[17:03] jimbaker, when's the provisioning agent network extraction coming onto your plate?
[17:06] <_mup_> Bug #877597 was filed: ec2 bootstrap fails when specifying instance type < https://launchpad.net/bugs/877597 >
[17:07] hazmat, what do you mean? i can certainly work on that, but i'm not certain what the specific bug/feature is
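
(The ssh-passthrough branch in the _mup_ commits above targets usage like the following; all three forms are taken from the conversation at [16:59], [17:01], and [17:02], and describe a branch under review rather than a released interface.)

    juju ssh unit/0 cat foo               # run a one-off command on a unit ([16:59])
    juju ssh -L8888:localhost:80 unit/0   # on-the-fly tunnel to an unexposed service ([17:01])
    juju scp                              # planned follow-on ([17:02]); arguments not yet specified
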
[17:08] jimbaker, re the expose-retry review comments
[17:09] specifically [2]
[17:09] hazmat, ok, i think you mean bug 873108
[17:09] <_mup_> Bug #873108: Move firewall mgmt support in provisioning agent into a separate class < https://launchpad.net/bugs/873108 >
[17:10] sounds like a good one, so i put it in WIP
[17:16] jimbaker, awesome thanks
[17:20] hazmat, bcsaller: let me know if you'd like explicit examples to drive the colocated services features
[17:21] they're probably pretty obvious though
[17:23] shouldn't the focus be on the production bugs?
[17:24] Like.. handling reboots.. HA for bootstrap.. colo'd services..
[17:24] exposing/unexposing seems to work right now.. :p
[17:24] i'm off now. travelling to a conference in madrid for thurs and fri, so won't be online much. see y'all monday.
[17:26] later rog... enjoy madrid!
[17:29] SpamapS: so in the course of cleaning up charms...
[17:29] there's a littering of one-off hooks for infrastructure aspects like logging, monitoring, etc
[17:30] we'll split those off into separate charms (munin-node, log-source, rsync-source, nfs-client, etc) once colocation lands
[17:31] or packages
[17:32] They only need to be charms if they have network duties
[17:32] but they're pretty ugly atm...
[17:32] trying to keep templates inline and stuff so they're easy to move around (i.e., _one_ hook)
[17:32] oh, hmmmm
[17:32] that's what I was getting around to asking.... are there any better ways to do this?
[17:33] but even a single-package solution, like munin-node, requires config right?
[17:34] does that belong in the "primary" service on that machine?
[17:55] SpamapS, the focus is on production issues and bugs generally, but co-location is a key missing feature to being able to use juju in the real world imo
[18:18] hazmat: I consider co-location production critical. :-D
[18:31] hazmat: I didn't see that bcsaller was actually working on co-location when I whined about it. ;-)
[18:32] SpamapS: I'll be pushing an updated spec for colo soon (hopefully by end of day)
[18:32] w00t
[20:49] SpamapS: i think most people do
[21:20] is anyone else having problems bootstrapping from today's ppa in ec2?
[21:36] m_3, testing..
[21:37] hazmat: digging through the logs on the bootstrap instance to see where it's hanging up
[21:41] m_3, do you just get an instance that's stuck bootstrapping?
[21:42] hmm.. no that works
[21:42] * hazmat tries a deploy
[21:42] bootstrap comes up, but then status can't get to it
[21:42] is status returning for you?
[21:43] m_3, it is
[21:43] m_3, i'm running against trunk
[21:43] damn
[21:44] the ppa's 0.5+bzr408-1juju1~oneiric1
[21:44] m_3, yeah.. same rev
[21:44] * m_3 double-checking paths, envs, yamls
[21:46] * hazmat does a reset on his env to double check
[21:46] hazmat: thanks for the confirm... I must be on drugs... still can't connect before timeout
[21:46] brb
[21:47] m_3, can you pastebin the console log or the cloud-init log
[21:51] hazmat: your keys are added ubuntu@ec2-174-129-179-28.compute-1.amazonaws.com
[21:54] m_3, thanks
[21:55] m_3, machine looks okay on first glance
[21:55] I know, it totally does =><=
[21:56] m_3, what's the error on status, do you have multiple envs?
[21:56] the zk tree is fine as well
[21:56] I do have multiple envs
[21:57] m_3, possible you booted one and stat'd the other?
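
(Given the multiple-envs suspicion at [21:56]-[21:57], one sanity check is to pin every command to a named environment. This assumes the -e/--environment flag of juju of this era; "sample" is a placeholder for a name from environments.yaml.)

    juju bootstrap -e sample   # bootstrap a specific environment by name
    juju status -e sample      # run status against that same environment
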
[21:57] m_3, outside of that we're into the realm of tcp issues; ssh is running/working, zk is running/initialized
[21:58] perhaps an ssh host fingerprint mismatch on the server, but that should have an error message
[21:58] watching from the aws console too
[21:58] cleaned up s3
[22:00] http://paste.ubuntu.com/712524/
[22:22] m_3, it looks like cloud-init hadn't finished installing the keys
[22:26] hazmat: same setup (acct, version, env) worked fine from a different VM... dunno what happened to my laptop between yesterday and today
[23:50] ugh.. watches definitely have no delivery guarantees
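
(For reference, the checks walked through in this last exchange, expressed as commands. The log paths are standard Ubuntu/cloud-init locations; <bootstrap-host> is a placeholder.)

    # on the bootstrap instance (e.g. ssh ubuntu@<bootstrap-host>):
    tail -n 50 /var/log/cloud-init.log   # had cloud-init finished installing keys? ([22:22])
    cat ~ubuntu/.ssh/authorized_keys     # are the expected public keys present? ([21:51])
    # locally, if a stale host-key fingerprint is suspected ([21:58]):
    ssh-keygen -R <bootstrap-host>
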