[03:15] <RoAkSoAx> SpamapS: adam_g had documentation publicly available AFAIK
[03:31] <backburner2> is there a readme or docs for setup with openstack?
[04:25] <mrsipan_> would it be possible to use redhat based ami rather than a ubuntu one?
[05:01] <SpamapS> mrsipan_: would need cloud-init
[05:08] <mrsipan_> SpamapS, would it be the only need package?
[05:08] <mrsipan_> needed*
[05:09] <SpamapS> mrsipan_: it's a pretty huge need, and there will be some things that won't work.. like PPAs
[05:09] <mrsipan_> right
[05:09] <SpamapS> mrsipan_: juju seeds new nodes with cloud-init to say things like "install these packages" so it can start its agents
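The seeding SpamapS describes is ordinary cloud-init user-data. A minimal sketch of the shape (package names and the runcmd are made up for illustration, not juju's real template):

```shell
# Hypothetical user-data of the sort juju hands to cloud-init on a new
# node: refresh apt, install prerequisite packages, then start agents.
cat > user-data.example <<'EOF'
#cloud-config
apt_update: true
packages:
  - bzr
  - default-jre-headless
runcmd:
  - [sh, -c, "echo juju agent would start here"]
EOF
head -1 user-data.example   # prints "#cloud-config"
```

This is why a Red Hat AMI is a big lift: the image must ship cloud-init and an apt-compatible way to satisfy those package requests.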
[05:12] <mrsipan_> makes sense, I guess the cloud-init is also required to get the ec2 user (meta) data for bootstrapping
[05:12] <bcsaller> and all the written formula currently expect .deb packages
[05:12] <bcsaller> err charms
[05:12] <SpamapS> well, they expect apt
[05:12] <bcsaller> fair enough
[05:13] <SpamapS> and they make heavy use of pre-seeding when appropriate.
[05:13] <SpamapS> suffice to say, the charms would have to grow OS independence
[05:13] <SpamapS> mrsipan_: there is no technical reason it won't work. There is, however, a lot of duplication of effort needed.
[05:13] <mrsipan_> right
[05:15] <SpamapS> wow, we definitely need to fix this bit where txzookeeper can't reconnect on session expiry
[05:15] <SpamapS> 2011-10-18 05:12:24,050: twisted@ERROR: Failure: zookeeper.SessionExpiredException: session expired
[05:15] <SpamapS> its t3h suck
[11:27] <kim0> http://cloud.ubuntu.com/2011/10/ubuntu-cloud-deployment-with-orchestra-and-juju/
[11:41] <m_3> kim0: awesome man!
[11:45] <TeTeT> kim0: does this work in kvm virtual machines as well, or limited to real iron?
[11:48] <m_3> kim0: should you maybe add that we're targeting to be "production-ready" by 12.04 (and hence still in development mode for orchestra/juju)?
[11:51] <kim0> TeTeT: not sure about that, because of needing 6 servers, I couldn't play with the whole thing
[11:51] <kim0> m_3: yeah makes sense .. I'll add that
[11:56] <TeTeT> kim0: yeah, I face the same limitation, I don't have access to 6 servers, but might be able to get 6 vms on 3 computers up and running or so
[11:57] <kim0> TeTeT: generally I think it's easy to configure openstack to use qemu (inside a VM) yes .. it's only a single flag in nova config file .. so "should" work yeah .. just didn't do it
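The single nova flag kim0 mentions is presumably `libvirt_type` (an assumption; check your nova documentation for your release). It switches compute from hardware-accelerated kvm to plain qemu emulation so instances can run inside a VM:

```shell
# Assumed nova flag for running OpenStack compute inside a VM:
# software emulation instead of kvm (diablo-era flagfile syntax).
cat > nova-flag.example <<'EOF'
--libvirt_type=qemu
EOF
cat nova-flag.example   # prints "--libvirt_type=qemu"
```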
[11:57] <TeTeT> kim0: ok, will give it a try eventually :)
[11:58] <kim0> cool :)
[11:58] <kim0> TeTeT: if you do .. be sure to share your experience on cloud.u.c :)
[11:59] <TeTeT> kim0: lol, i'll send you an email and you can feel free to publish it at most ;)
[11:59] <kim0> TeTeT: if you have a blog, I'm happy to add it though :)
[11:59] <TeTeT> kim0: I don't blog
[11:59] <kim0> wise man :)
[12:15] <fwereade> offhand, does anyone know why we have separate 'environment_name' and 'config' attrs on a provider? we pass the config around everywhere, but in general you need a provider to actually find out the env name
[12:27] <rog> fwereade: isn't the environment_name used for naming things?
[12:28] <rog> fwereade: hmm, i think i've probably misunderstood the question
[12:38] <fwereade> rog, sorry, missed you
[12:39] <rog> fwereade: np
[12:39] <fwereade> rog: I just had an urge to name something based on the environment_name
[12:39] <fwereade> rog: as we do for, say, security groups
[12:39] <rog> fwereade: yeah, that's what i was thinking of
[12:39] <fwereade> rog: but that takes the environment name from the provider, not from the config dict
[12:40] <rog> ah. aren't they the same?
[12:40] <fwereade> rog: nah, the provider object has .environment_name and .config
[12:40] <mjfork> I grabbed juju yesterday and get an error that store.juju.ubuntu.com is not resolvable when deploying... anyone else see that?
[12:40] <rog> fwereade: isn't the former initialised from the latter?
[12:40] <fwereade> mjfork: we've had some trouble getting the online docs to auto-update
[12:41] <fwereade> mjfork: stick "local:" on the front of the charm name
[12:41] <fwereade> juju deploy --repository=blah local:mysql
[12:42] <fwereade> rog: the provider is initialised by an Environment object, which passes in environment_name and config, yes
[12:42] <mjfork> tells me that the charm was not found in the repository
[12:42] <mjfork> i see a directory with the name
[12:42] <fwereade> mjfork: is the repo flat, or does it have an "oneiric" subdirectory with the charms?
[12:43] <mjfork> i am in the oneiric subdir
[12:43] <mjfork> do i need to go up a level?
[12:43] <mjfork> that did it
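The layout issue mjfork hit is worth spelling out: `--repository` must point at the directory above the series subdirectory, not at the series directory itself. A sketch (the repo path and charm are illustrative):

```shell
# Local charm repos are organised as <repo>/<series>/<charm>/.
mkdir -p myrepo/oneiric/mysql
# Deploy from the repo root, not from inside oneiric/:
#   juju deploy --repository=myrepo local:mysql
ls myrepo/oneiric   # prints "mysql"
```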
[12:45] <rog> fwereade: maybe the reason is that the environment name isn't actually part of the config for the given environment
[12:45] <rog> fwereade: it's outside of it in the yaml
[12:45] <rog> i don't see why it shouldn't be put in there though
[12:46] <fwereade> rog: I think that's the reason indeed
[12:47] <fwereade> rog: anyway, not something I actually need, just an idle curiosity
[12:47] <fwereade> mjfork: glad that helped
[12:47] <rog> fwereade: yeah. it's an interesting oddity.
[13:58] <mrsipan> how do i tell juju which key pair to use?
[14:01] <fwereade> mrsipan, you can set authorized-key-path for the environment in environments.yaml
[14:01] <mrsipan> fwereade: thanks
[14:01] <fwereade> mrsipan, sorry, authorized-keys-path
[14:01] <fwereade> mrsipan, pointing to a .pub file
[14:02] <mrsipan> cool
[14:17] <mrsipan> what user does juju use to ssh into instances?
[14:19] <fwereade> mrsipan, "ubuntu", as I recall
[14:20] <fwereade> mrsipan, are you having problems?
[14:20] <mrsipan> fwereade, I keep getting ERROR SSH authorized/public key not found
[14:20] <mrsipan> when trying to bootstrap
[14:20] <fwereade> mrsipan, odd
[14:21] <fwereade> mrsipan, if you don't specify anything for authorized-keys-path it should just use whatever it finds in your ~/.ssh
[14:22] <fwereade> mrsipan, any of
[14:22] <fwereade> If your version of Ubuntu is not listed above, it is no longer supported and does not receive security or critical fixes.
[14:22] <fwereade> oops, sorry
[14:22] <fwereade>     key_names = ["id_dsa.pub", "id_rsa.pub", "identity.pub"]
[14:22] <fwereade> do you have any of those set up?
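The `key_names` list fwereade pasted implies a lookup roughly like this shell sketch (a loose reimplementation for illustration; juju's actual code is Python and may differ):

```shell
# Try each default public-key name, in order, in a stand-in ~/.ssh
# directory (a tempdir here so the sketch doesn't touch real keys).
sshdir=$(mktemp -d)
touch "$sshdir/id_rsa.pub"          # pretend the user has an RSA key
found=""
for k in id_dsa.pub id_rsa.pub identity.pub; do
  if [ -f "$sshdir/$k" ]; then
    found="$sshdir/$k"
    break
  fi
done
echo "${found##*/}"   # prints "id_rsa.pub"
```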
[14:22] <mrsipan> I have id_rsa.pub
[14:24] <fwereade> mrsipan, would you run bootstrap with "-v", and pastebin me the output please
[14:24] <mrsipan> sure
[14:25] <wckd> does 11.10 have xen built in?
[14:25] <wckd> or do I have to use kvm
[14:26] <mrsipan> fwereade: http://paste.ubuntu.com/712039/
[14:27]  * fwereade makes thinking noises
[14:28] <wckd> should have done some research before asking; both are supported
[14:30] <fwereade> mrsipan, and you saw that error just the same before and after adding a authorized-keys-path?
[14:31] <mrsipan> fwereade: yes, I did, I can try again with the authorized-key-path option
[14:31] <fwereade> mrsipan: the alternative is to set "authorized-keys" in the environment, with the contents of your .pub file
[14:31] <fwereade> (and delete authorized-keys-path)
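Put together, the two alternatives fwereade describes look like this in environments.yaml (the environment name and key contents are placeholders):

```shell
# Write an illustrative environments.yaml fragment showing both options.
cat > environments.yaml.example <<'EOF'
environments:
  sample:
    type: ec2
    # option 1: point at a public key file
    authorized-keys-path: ~/.ssh/id_rsa.pub
    # option 2: paste the key itself (use one option or the other)
    # authorized-keys: ssh-rsa AAAA...placeholder... user@host
EOF
grep -c 'authorized-keys' environments.yaml.example   # prints "2"
```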
[14:31] <mrsipan> k, will do that
[14:32] <fwereade> mrsipan: I should note that both those options should have "key", singular, not "keys" in their names
[14:32] <fwereade> mrsipan: but they don't, and it's confusing
[14:33] <mrsipan> fwereade, k, thanks for the clarification
[14:34] <fwereade> mrsipan: I'm sorry, but I need to get to a shop before it closes -- I'll be back later, but you might want to follow up with someone else if you keep having problems
[14:34] <fwereade> sorry to abandon you :(
[14:34] <mrsipan> fwereade, np, thanks a lot for your help
[14:38] <mjfork> i was working with SpamapS to replicate the OpenStack keynote demo, everything starts ok, but terasort.sh fails with connection refused on port 8020
[14:40] <mjfork> i believe its a hadoop thing, but not sure
[15:08] <SpamapS> mjfork: hey
[15:08] <SpamapS> mjfork: if terasort.sh is failing then something went wrong starting hadoop
[15:09] <mjfork> what machine should be listening on 8020?
[15:09]  * SpamapS really needs to finish polishing up those instructions and blogify it
[15:09] <SpamapS> mjfork: namenode
[15:09] <SpamapS> m_3: ^^ mjfork is in need of guidance using your hadoop charms.
[15:10] <mjfork> SpamapS: if i run juju status I can see the data cluster, jobonitor, and namenode
[15:10] <mjfork> all with a status of up
[15:10] <mjfork> juju ssh namenode/1
[15:11] <mjfork> if I run a ps, i don't see any hadoop process
[15:11] <mjfork> nor anything bound to that port
[15:11] <SpamapS> mjfork: java
[15:12] <SpamapS> mjfork: dig around in /var/lib/juju/units/namenode-0/charm.log
[15:12] <mjfork> on the controller of namenode?
[15:12] <SpamapS> on namenode/1
[15:13] <SpamapS> actually make it /var/lib/juju/units/namenode-1/charm.log
[15:15] <mjfork> unknown host excption
[15:15] <mjfork> the host name was never reset
[15:15] <SpamapS> hm?
[15:15] <SpamapS> mjfork: can you pastebin the log?
[15:16] <mjfork> but cloud-init set /etc/hosts to have server_252
[15:16] <SpamapS> ew
[15:16] <SpamapS> server_252 .. thats no good
[15:16] <SpamapS> mjfork: should be server-252 .. I wonder if your nova-network is misconfigured
[15:16] <mjfork> i wonder why cloud-init didn't set hostname
[15:17] <mjfork> let me fix that
[15:17] <mjfork> may do it
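The underlying symptom, for anyone skimming: the metadata service hands back a hostname containing an underscore, which is not a legal character in a DNS label; the hyphenated form is what nova should be handing out. A toy illustration:

```shell
# Underscores are invalid in DNS hostnames; nova should serve the
# hyphenated form. (server_252 is the example from the channel.)
bad="server_252"
good=$(printf '%s' "$bad" | tr '_' '-')
echo "$good"   # prints "server-252"
```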
[15:17] <mrsipan> I'm getting address 'store.juju.ubuntu.com' not found, when deploying the example, any ideas why?
[15:18] <mrsipan> it seems a dns resolution error
[15:18] <mjfork> if your charm is local. prefix it with local:
[15:18] <mjfork> i ran into this earlier this AM
[15:18] <mrsipan> mjfork, kthx
[15:29] <mjfork> SpamapS: i rebooted that node to pick up hostname change
[15:30] <mjfork> and it says waiting for unit to come up... but the system is booted
[15:30] <SpamapS> reboots don't work
[15:30] <mjfork> doh
[15:30] <m_3> mjfork, SpamapS: hi guys
[15:30] <mjfork> ok
[15:30] <mjfork> so rebooting breaks it?
[15:30] <mjfork> what's the alternative to get the hostname set right?
[15:35] <m_3> mjfork: so hadoop had a problem with the 127.0.1.1 entry that cloudinit puts in /etc/hosts
[15:35] <SpamapS> mjfork: rebooting is a top priority on the production bugs list (for obvious reasons) ;)
[15:36] <m_3> mjfork: hadoop requires the hostname to resolve to a real (non-loopback) interface
[15:36] <m_3> mjfork: are you seeing this problem?  i.e., `netstat -lnp | less` and see hadoop binding to localhost only?
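A sketch of the /etc/hosts shape m_3 is describing (the address 10.0.0.12 and the hostname are illustrative): hadoop needs the machine's name mapped to a real interface address, not to the 127.0.1.1 entry cloud-init writes:

```shell
cat > hosts.example <<'EOF'
127.0.0.1 localhost
# 127.0.1.1 server-252   <- drop this: hadoop refuses loopback-only names
10.0.0.12 server-252
EOF
grep '^10' hosts.example   # prints "10.0.0.12 server-252"
```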
[15:37] <mjfork> hadoop wasn't bound at all
[15:37] <mjfork> presumably because the host name didn't resolve
[15:37] <m_3> mjfork: hmmmm.... that's strange
[15:38] <mjfork> using openstack + oneiric guest + cloud-init
[15:38] <m_3> mjfork: so let's start on the namenode/0
[15:38] <SpamapS> mjfork: I would try destroying the namenode service, terminating the machine its running on, and then deploying namenode again.
[15:39] <m_3> yeah, worth starting fresh
[15:42] <m_3> SpamapS: hadoop refuses to bind based on ip address
[15:42] <_mup_> juju/ssh-passthrough r403 committed by jim.baker@canonical.com
[15:42] <_mup_> Initial commit
[15:43] <m_3> SpamapS, mjfork: so we need some sort of name that will resolve across the openstack install
[15:44] <m_3> it's actually a bug I'd like to push upstream
[15:45] <mjfork> juju must keep VMs around after destroying service?
[15:46] <SpamapS> we had no problems with hostnames in our openstack.. they were server-### .. and resolved fine
[15:46] <mjfork> every VM I log into has ubuntu as hostname
[15:46] <SpamapS> mjfork: yeah, the reason for that is that its very likely that all services will deploy inside a container in the VM, for quick cleanup/re-use
[15:46] <SpamapS> mjfork: I say very likely because its not part of the code.. yet. ;)
[15:47] <_mup_> juju/ssh-passthrough r404 committed by jim.baker@canonical.com
[15:47] <_mup_> Merged trunk
[15:47] <SpamapS> Would be cool to use containers that way now.. without network namespaces.
[15:48] <mjfork> how did you work around it?
[15:48] <mjfork> i must be missing something for the hostname not to be set right, but the /etc/hosts is
[15:48] <m_3> mjfork: what's written in /etc/hosts once the unit is up?
[15:50] <SpamapS> mjfork: _'s in hostnames would be an openstack/DHCP server problem
[15:50] <SpamapS> mjfork: you may need to look at nova-network's configuration
[15:52] <mjfork> ec2metadata --local-hostname returns server_221
[15:52] <mjfork> so it does look like an OpenStack problem
[16:00] <SpamapS> mjfork: yeah I'm sure there's some default hostname template somewhere that needs changing
[16:01] <mjfork> yah, found bug report
[16:01] <mjfork> it's 1 line!
[16:02] <mjfork> if i could contrib code i would :-)
[16:06] <mjfork> shoot
[16:06] <mjfork> still didn't set hostname
[16:08] <m_3> mjfork: what's `hostname -f` and `hostname -f | xargs dig +short` return?
[16:09] <mjfork> realized i probably didn't set it in right spot
[16:09] <mjfork> trying again
[16:10] <m_3> gotcha
[16:18] <mjfork> hostname -f says Name or service not known
[16:24] <m_3> mjfork: dang...
[16:25] <m_3> mjfork: what gets written into /etc/hosts by cloudinit?  i.e., typically a 127.0.1.1 entry
[16:29] <mjfork> yep
[16:29] <mjfork> shoot...still says server_268
[16:29] <mjfork> instead of with -
[16:30] <mjfork> i do see i am running older build
[16:30] <mjfork> should upgrade
[16:38] <SpamapS> older build of what?
[16:41] <mjfork> OpenStack
[16:41] <mjfork> i see my RPMs have a date of 0727
[16:41] <mjfork> sigh
[16:41] <_mup_> juju/ssh-passthrough r405 committed by jim.baker@canonical.com
[16:41] <_mup_> Testing to verify passthrough of args
[16:46] <m_3> jimbaker: yay, I'll actually use that quite a bit
[16:49] <rog> where can i view a list of merge requests that i have been asked to review?
[16:50] <rog> i'm sure i was told a URL recently, but i stupidly didn't write it down!
[16:50]  * rog wishes that launchpad emails were more easily filterable
[16:54] <bcsaller> rog: the first link in the channel topic, http://j.mp/juju-florence
[16:55] <_mup_> juju/ssh-passthrough r406 committed by jim.baker@canonical.com
[16:55] <_mup_> Refactoring
[16:57] <rog> bcsaller: that only seems to show the person that's submitted the code, not who's been asked to review it.
[16:57] <bcsaller> rog: people don't typically get asked, its a pull process
[16:58] <rog> bcsaller: ok. i must be misremembering.
[16:59] <jimbaker> m_3, good to know. i think it will be a very nice feature, being able to create a tunnel (no need to expose) or change config options, or do stuff like juju ssh unit/0 cat foo
[17:01] <hazmat> rog, i explicitly asked on one of the go  reviews for you
[17:01] <hazmat> but typically, we go to the kanban view and pick items off the review queue there
[17:01] <rog> hazmat: ah, that's what i'm remembering. it's got buried in my email and i can't remember which one it was.
[17:01] <m_3> jimbaker: tunnels on the fly to unexposed services is the big one (-L8888:localhost:80)
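For context, the passthrough feature under discussion would let extra arguments flow through to ssh, so both of these become possible (syntax taken from m_3's and jimbaker's examples in-channel; commented out since they need a live juju environment):

```shell
# On-the-fly tunnel to an unexposed service, and a one-off remote command:
#   juju ssh wordpress/0 -L 8888:localhost:80
#   juju ssh unit/0 cat foo
# The -L spec is plain ssh forwarding: local port -> remote host:port.
printf 'forward local %s to remote %s\n' 8888 localhost:80
```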
[17:01] <hazmat> rog, its go-new-revisions
[17:02] <jimbaker> m_3, sounds good. it should land in review this afternoon, just need to write a few more tests
[17:02] <rog> hazmat: thanks
[17:02] <jimbaker> m_3, then i will add support for juju scp
[17:03] <hazmat> jimbaker, whens the provisioning agent network extraction coming onto your plate?
[17:06] <_mup_> Bug #877597 was filed: ec2 bootstrap fails when specifying instance type <juju:New> < https://launchpad.net/bugs/877597 >
[17:07] <jimbaker> hazmat, what do you mean? i can certainly work on that, but i'm not certain what the specific bug/feature is
[17:08] <hazmat> jimbaker, re the expose-retry review comments
[17:09] <hazmat> specifically [2]
[17:09] <jimbaker> hazmat, ok, i think you mean bug 873108
[17:09] <_mup_> Bug #873108: Move firewall mgmt support in provisioning agent into a separate class <juju:New> < https://launchpad.net/bugs/873108 >
[17:10] <jimbaker> sounds like a good one, so i put it in WIP
[17:16] <hazmat> jimbaker, awesome thanks
[17:20] <m_3> hazmat, bcsaller: let me know if you'd like explicit examples to drive the colocated services features
[17:21] <m_3> they're probably pretty obvious though
[17:23] <SpamapS> shouldn't the focus be on the production bugs?
[17:24] <SpamapS> Like.. handling reboots.. HA for bootstrap.. colo'd services..
[17:24] <SpamapS> exposing/unexposing seems to work right now.. :p
[17:24] <rog> i'm off now. travelling to a conference in madrid for thurs and fri, so won't be online much. see y'all monday.
[17:26] <m_3> later rog... enjoy madrid!
[17:29] <m_3> SpamapS: so in the course of cleaning up charms...
[17:29] <m_3> there's a littering of one-off hooks for infrastructure aspects like logging, monitoring, etc
[17:30] <m_3> we'll split those off into separate charms (munin-node, log-source, rsync-source, nfs-client, etc) once colocation lands
[17:31] <SpamapS> or packages
[17:32] <SpamapS> They only need to be charms if they have network duties
[17:32] <m_3> but they're pretty ugly atm...
[17:32] <m_3> trying to keep templates inline and stuff so they're easy to move around (i.e., _one_ hook)
[17:32] <m_3> oh, hmmmm
[17:32] <m_3> that's what I was getting around to asking.... are there any better ways to do this?
[17:33] <m_3> but even a single package solution, like munin-node requires config right?
[17:34] <m_3> does that belong in the "primary" service on that machine?
[17:55] <hazmat> SpamapS, the focus is on production issues and bugs generally, but co-location is a key missing feature to being able to use juju in the real world imo
[18:18] <SpamapS> hazmat: I consider co-location production critical. :-D
[18:31] <SpamapS> hazmat: I didn't see that bcsaller was actually working on co-location when I whined about it. ;-)
[18:32] <bcsaller> SpamapS: I'll be pushing an updated spec for colo soon (hopefully by end of day)
[18:32] <SpamapS> w00t
[20:49] <zul> SpamapS: i think most people do
[21:20] <m_3> is anyone else having problems bootstrapping from today's ppa in ec2?
[21:36] <hazmat> m_3, testing..
[21:37] <m_3> hazmat: digging through the logs on the bootstrap instance to see where it's hanging up
[21:41] <hazmat> m_3, do you just get an instance that's stuck bootstrapping?
[21:42] <hazmat> hmm.. no that works
[21:42]  * hazmat tries a deploy
[21:42] <m_3> bootstrap comes up, but then status can't get to it
[21:42] <m_3> is status returning for you?
[21:43] <hazmat> m_3, it is
[21:43] <hazmat> m_3, i'm running against trunk
[21:43] <m_3> damn
[21:44] <m_3> ppa's 0.5+bzr408-1juju1~oneiric1
[21:44] <hazmat> m_3, yeah.. same rev
[21:44]  * m_3 double-checking paths, envs, yamls
[21:46]  * hazmat does a reset on his env to double check
[21:46] <m_3> hazmat: thanks for the confirm... I must be on drugs... still can't connect before timeout
[21:46] <m_3> brb
[21:47] <hazmat> m_3, can you pastebin the console log or the cloud-init log
[21:51] <m_3> hazmat: your keys are added ubuntu@ec2-174-129-179-28.compute-1.amazonaws.com
[21:54] <hazmat> m_3, thanks
[21:55] <hazmat> m_3, machine looks okay on first glance
[21:55] <m_3> I know, it totally does =><=
[21:56] <hazmat> m_3, what's the error on status, do you have multiple envs?
[21:56] <hazmat> the zk tree is fine as well
[21:56] <m_3> I do have multiple envs
[21:57] <hazmat> m_3, possible you booted one and stat'd the other?
[21:57] <hazmat> m_3, outside of that we're into the realm of tcp issues, ssh is running/working, zk is running/initialized
[21:58] <hazmat> perhaps an ssh host fingerprint mismatch on the server, but that should have an error message
[21:58] <m_3> watching from awsconsole too
[21:58] <m_3> cleaned up s3
[22:00] <m_3> http://paste.ubuntu.com/712524/
[22:22] <hazmat> m_3, it looks like cloud init hadn't finished installing the keys
[22:26] <m_3> hazmat: same setup (acct,version,env) worked fine from a different VM... dunno what happened to my laptop between yesterday and today
[23:50] <hazmat> ugh.. watches definitely have no delivery guarantees