[12:05] <niemeyer> Hello!
[12:05] <niemeyer> How's everybody on that fine Monday morning?
[12:18] <hspencer> mornin niemeyer
[12:18] <hspencer> barely wakin up
[12:18] <hspencer> lol
[12:19] <niemeyer> hspencer: Hehe, that's how Mondays are
[12:19] <niemeyer> :)
[12:19] <hspencer> lol
[12:19] <hspencer> yea
[12:41]  * nijaba welcomes dxd828
[12:45] <koolhead11> hi all
[12:46] <nijaba> Hello koolhead11
[12:46] <koolhead11> hey nijaba . how are you?
[12:47] <nijaba> koolhead11: doing good, thanks.  you?
[12:47] <koolhead11> am good too. time to update owncloud charm from 2 to version 3
[12:47] <koolhead11> reading and testing it
[12:48] <nijaba> nice
[13:46] <heylukecarrier> Hi, I'm trying to set up Juju with the local environment type and am getting errors from libvirt during the bootstrap phase (virsh net-start default needs to be run as root, but Juju's bootstrap command doesn't run it as root). Should I run juju bootstrap as root? I'm guessing this would cause permissions issues?
[13:48] <fwereade_> heylukecarrier, that's funny, I thought it would ask you for your password at the point it actually needed root access, would you pastebin a transcript please?
[13:50] <heylukecarrier> fwereade_, here's a paste of the bootstrap run: http://pastebin.com/NvXZvTyZ
[13:51] <heylukecarrier> when I ran it as root (with sudo), it complained about missing keys, but worked perfectly after destroying the environment, generating the key and running the command again
[13:54] <fwereade_> heylukecarrier, this suggests that you may need to reboot, after installing the dependencies, before it will work
[13:54] <fwereade_> http://askubuntu.com/questions/65359/how-do-i-configure-juju-for-local-usage
[13:54] <fwereade_> heylukecarrier, have you tried that?
[13:58] <heylukecarrier> fwereade_, I've just followed those instructions and have the same problem at the same point. When I ran juju bootstrap as root, I could carry on
[13:58] <koolhead11> hazmat: around?
[13:58] <koolhead11> heylukecarrier: pastebin your environments.yaml
[13:59] <koolhead11> are you creating the yaml as the root user by any chance :P
[14:15] <heylukecarrier> fwereade_, http://pastebin.com/hpjw0FtQ
[14:15] <heylukecarrier> koolhead11, I'm not entirely insane :3
[14:15] <heylukecarrier> since I've only just installed virsh, would a reboot be in order?
[14:15] <koolhead11> heylukecarrier: :D
[14:16] <fwereade_> heylukecarrier, sorry, that was what I suggested, I think I was unclear
[14:16] <fwereade_> heylukecarrier, apparently you sometimes need it after installing the dependencies
[14:16] <heylukecarrier> Okay, rebooted, trying again now
[14:18] <heylukecarrier> Wow, so simple, it works fine. I think this should be emphasised in the documentation somewhere, though :)
[14:19] <fwereade_> heylukecarrier, good point, thanks :)
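For reference, a local-provider ~/.juju/environments.yaml of the kind being discussed might look roughly like this (a sketch based on the python-era juju local provider; the path, secret, and environment name are placeholders, not taken from the log):

```yaml
environments:
  local:
    type: local
    # directory juju uses for local state; any writable path works
    data-dir: /home/USER/juju-local
    admin-secret: some-made-up-secret
    default-series: oneiric
```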
[14:23] <heylukecarrier> Another stupid question; is there any way to increase the verbosity of the debug log?
[14:23] <_mup_> Bug #923746 was filed: local provider docs should mention needing to reboot <juju:New> < https://launchpad.net/bugs/923746 >
[14:23] <hazmat> g'morning
[14:24] <fwereade_> heylukecarrier, --level DEBUG may help
[14:25] <hazmat> heylukecarrier, your host is oneiric and you're using the oneiric juju package?
[14:25] <fwereade_> heylukecarrier, no wait, that's the default already
[14:26] <fwereade_> heylukecarrier, I think that's as verbose as it gets -- what do you need more visibility into?
[14:26] <hazmat> heylukecarrier, juju will ask for sudo as needed
[14:26] <hazmat> heylukecarrier, what's the locale on your system?
[14:26] <heylukecarrier> hazmat, correct. There are long periods of time where there is absolutely no output, and even using the --rewind switch doesn't reveal anything. I'm just interested in knowing what it's doing
[14:26] <fwereade_> hazmat, morning :)
[14:27] <heylukecarrier> hazmat, it's en_GB.UTF-8. There is some output, just very little for debug mode
[14:27] <hazmat> heylukecarrier, after bootstrap, during the first deploy it's creating an lxc container via lxc's lxc-ubuntu template, which runs debootstrap on a directory to set up a minimal environment; juju then uses lxc-clone to copy that container over for other containers. the initial debootstrap can take a few minutes
[14:28] <hazmat> during bootstrap on local, juju starts a zookeeper instance (running as regular user), starts a machine agent (running as root, this creates the lxc containers), ensures that the default libvirt network is up and running, and sets up a web server (on the libvirt network) for distributing files to the containers
[14:29] <heylukecarrier> hazmat, the bootstrap was failing initially because virsh can't configure the network stack without a reboot after the initial install
[14:29] <hazmat> fwereade_, g'morning :-)
[14:30] <heylukecarrier> once I rebooted, lxc finished successfully and the install worked. Thanks for the explanation though; I guess lxc was running
[14:33] <jcastro> SpamapS: I saw a bunch of charm- bug activity after I EODed on Friday, anything new get promulgated?
[14:35] <koolhead11> hazmat: in case you will have time today, i can get the doc updated :)
[14:36] <koolhead11> hola jcastro
[14:42] <hazmat> heylukecarrier, you have to login/logout for the libvirt install to fully take effect, it wants the user to be a member of the libvirt group
[14:43] <fwereade_> hazmat, oh, that's the reason? good to know
[14:46] <fwereade_> hazmat, could we just `newgrp libvirtd` for the same effect?
[14:47] <koolhead11> heylukecarrier: After installing it, you may have to reboot (I had to, else libvirt couldn't create the network bridge, indicating it was already in use). i think that askubuntu page had the info
[14:47] <hazmat> fwereade_, that looks promising
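The group-membership issue hazmat and fwereade_ are discussing can be checked by hand along these lines (a sketch; the group name `libvirtd` is the Ubuntu default of the era, and whether a reboot is avoidable depends on the session):

```shell
# Group changes only apply to new sessions, hence the login/logout advice.
id -nG | grep -qw libvirtd || echo "current session is not in the libvirtd group yet"

# fwereade_'s suggestion: pick up the new group without a full logout
newgrp libvirtd

# Verify the default libvirt network that juju's local provider needs
virsh net-list --all
sudo virsh net-start default   # only if "default" is not already active
```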
[14:48] <fwereade_> koolhead11, it's really very non-obvious there ;)
[14:48]  * koolhead11 is confused.
[14:48] <koolhead11> monday blues
[15:11] <heylukecarrier> fwereade_, koolhead11: thanks both; very helpful!
[15:11] <fwereade_> heylukecarrier, a pleasure :)
[15:12] <koolhead11> heylukecarrier: welcome!! :)
[15:48] <gary_poster> Hi.  Can two juju instances in the same cluster talk to one another over a given port without exposing the port?  I would have expected so, from past-but-maybe-forgotten-non-juju ec2 experience, but at least with a lxc-based juju cluster it seems like we might have to expose the port.  Can anyone verify expectations in this regard?
[15:48]  * gary_poster will go look for docs about that; I think others have, but I haven't tried yet.
[15:53] <gary_poster> OK, seems like the port is actually connecting fine.  go telnet.  some other problem
[16:03] <SpamapS> gary_poster: expose only controls access in and out of the cloud provider.
[16:05] <SpamapS> gary_poster: also LXC has no ability to control the firewall as of yet, so expose/unexpose are noops on the local provider.
[16:05] <gary_poster> SpamapS, expose: thanks, right, that's what I figured/hoped.  We should expect no firewalls within the provider itself, was my basic assertion/question-in-the-form-of-a-statement.
[16:05] <gary_poster> lxc expose noop: heh, ok
[16:13] <SpamapS> jcastro: re bug activity on Friday.. no promulgation yet.. but very close. :)
[16:14] <jcastro> SpamapS: good, just wanted to make sure I didn't miss a new person to ship swag to!
[16:37] <_mup_> Bug #923821 was filed: juju installs lxc/libvirt on all machines <juju:New> < https://launchpad.net/bugs/923821 >
[18:27] <grapz> Hi. Is it possible to override the default instance type when launching a new unit?
[18:32] <m_3> grapz: yes, but it's an all-or-nothing setting for an environment... add an entry to the environment in ~/.juju/environments.yaml that looks like 'default-instance-type: m1.large'
[18:32] <m_3> grapz: there's also 'default-image-id: ami-5b94ca1e'
[18:34] <m_3> grapz: there's a feature request for per-unit overrides
[18:34] <grapz> m_3, ahh, just what I was about to reply with, that there should be a per-unit override :)
[18:34] <smoser> fwereade_, could you please open a bug against libvirt-bin or lxc for bug 923746
[18:34] <_mup_> Bug #923746: local provider docs should mention needing to reboot <juju:Fix Released by fwereade> < https://launchpad.net/bugs/923746 >
[18:34] <m_3> really it's going to look like constraints per service unit iirc
[18:35] <smoser>   that it can fail with "error: internal..." is not really acceptable
[18:35] <m_3> "try to provide >=2 cpus, 1G <= RAM <= 15G, etc."
[18:36] <grapz> m_3, that sounds great
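Putting m_3's two settings together, the EC2 environment entry in ~/.juju/environments.yaml would look roughly like this (the instance type and AMI are the ones from the log; the environment name, keys, bucket, and secret are placeholders):

```yaml
environments:
  sample:
    type: ec2
    access-key: YOUR-ACCESS-KEY
    secret-key: YOUR-SECRET-KEY
    control-bucket: juju-sample-bucket
    admin-secret: some-made-up-secret
    # environment-wide; there is no per-unit override yet (see above)
    default-instance-type: m1.large
    default-image-id: ami-5b94ca1e
```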
[18:41] <arosales> jcastro: the Juju events page is nice, thanks for setting that up.
[18:42] <jcastro> arosales: heh, now to find someone in europe!
[18:59] <fwereade_> smoser, sure, I think it's actually just a matter of `newgrp libvirtd` once it's installed
[19:00] <SpamapS> fwereade_: I just realized you sent me a branch to test out last week, and I haven't even looked at it yet. Is that still relevant?
[19:01] <fwereade_> SpamapS, it should be: I think hazmat has been poking at the unit-relation-status-not-showing business (hm I should have a word about that)
[19:01] <fwereade_> SpamapS, but yes: giving it a kick around would be very much appreciated
[19:01] <koolhead17> hi all
[19:07] <hazmat> fwereade_, i never reproduced that
[19:07] <hazmat> fwereade_, on local provider, killing unit agents at will, i did trigger some issues killing off a machine agent
[19:07] <hazmat> but unrelated to status problems with units
[19:10] <fwereade_> hazmat, hmm, funny -- sorry, I noticed you saying that just as I was falling asleep and forgot about it
[19:11] <fwereade_> hazmat, what went wrong with the machine agent?
[19:12] <hazmat> fwereade_, an error around a node existing which the code expected to create; i didn't dig much further into it, it's easy to reproduce though
[19:12] <fwereade_> hazmat, so once you bounced a unit agent with relations up subsequent statuses would still have (say) relations: db: up, rather than relations: {} ?
[19:12] <hazmat> fwereade_, yup
[19:13] <fwereade_> hazmat, interesting, I'll try to take a quick look at the machine agent tonight (contrary to appearances, I'm not really on atm)
[19:13] <fwereade_> hazmat, but, hmm, I'm confused by the status stuff
[19:14] <hazmat> fwereade_, was there a particular sequence to reproduce, and i assume you have juju-origin pointing at your branch?
[19:14] <fwereade_> hazmat, yeah, I had juju-origin pointing at my branch and I never needed to do anything more than kill a unit agent
[19:15] <fwereade_> hazmat, I'll try again and note exactly what I'm doing
[19:16] <fwereade_> hazmat, btw, I'm getting a tiny bit fretful about fix-charm-upgrade... is there something horribly wrong with it? ;)
[19:22] <hazmat> fwereade_, i'm a bit concerned about the complexity of what's being introduced
[19:22] <hazmat> fwereade_, do you have a moment to chat about it?
[19:22] <hazmat> fwereade_, if we assume for the moment a WAL transition log, what about the previous cut of this branch isn't re-entrant? what value are the additional states and transitions contributing?
[19:26] <hazmat> right now it's a separation of upgrade into three distinct steps, while managing cross-step coordination of the executor. if we take download, extract, and execute as a single transition, with an accompanying error transition that effectively retries the operation, what do we lack that needs the additional states and transitions? the WAL transition log would encapsulate the marker on the started-state workflow variables marking the transition..
[19:27] <SpamapS> WAL seems like overkill, shouldn't there just be two things.. intended_charm_version and agent_reported_charm_version .. the first is updated by admins, the second by agents after they complete an upgrade.
[19:27] <hazmat> SpamapS, if something dies mid transition, we have to record some state so that on recovery we proceed with the in-flight operation
[19:27] <SpamapS> "recovery" ?
[19:27] <SpamapS> just do it all again
[19:28] <SpamapS> call stop, extract new charm, call upgrade-charm, call start
[19:28] <hazmat> SpamapS, exactly, but we need to know what we should do again in some cases, though you're right it should be apparent
[19:28] <hazmat> if we clear charm-upgrade post success
[19:28] <SpamapS> trying to pinpoint where in flight to start seems like folly
[19:29] <SpamapS> oddly enough, dpkg handles this quite nicely.. and we should just copy it
[19:30] <SpamapS> but a single upgrade-charm hook won't suffice if we copy dpkg
[19:30] <hazmat> the wal is generally helpful across any transition (the action boundary is transient)
[19:30] <SpamapS> has to be prerm, postrm , preinst, and postinst :P
[19:30] <hazmat> SpamapS, its even simpler than that, we just need to record transitions on disk, else its transient memory
[19:30] <SpamapS> I really dislike using disk for this
[19:30] <SpamapS> zookeeper is the system of truth
[19:31] <hazmat> fair enough
[19:31] <SpamapS> The thing I've always liked about juju is that the agents just try and make zookeeper's intentions true, or report an error
[19:31] <SpamapS> anyway I'm not deep enough in this problem to have a strong opinion
[19:31] <SpamapS> just asking pointed questions.. I'm sure you guys have a handle on it. :)
[19:32]  * SpamapS goes outside to experience a rare day of fresh air in LA for lunchtime. :)
[19:32] <hazmat> it's more that zookeeper currently encodes user and environment intentions but doesn't capture all of the unit agent's state
[19:33] <hazmat> SpamapS, good questions, enjoy ;-)
[19:33] <hazmat> SpamapS, i hear they got oxygen bars out there, air o'clock ;-)
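The idea hazmat is describing — durably recording an in-flight transition so a restarted agent knows what to resume — can be sketched in a few lines of Python (a toy illustration of the concept only, not juju's actual implementation; the file path and transition names are made up):

```python
import json
import os

class TransitionLog:
    """Toy write-ahead marker: persist the transition *before* acting on
    it, so a restarted agent can see what was in flight and redo it."""

    def __init__(self, path):
        self.path = path

    def begin(self, transition):
        # Record intent durably before performing the transition.
        with open(self.path, "w") as f:
            json.dump({"transition": transition}, f)

    def commit(self):
        # Success: clear the marker (cf. "clear charm-upgrade post success").
        if os.path.exists(self.path):
            os.remove(self.path)

    def pending(self):
        # On recovery, return the in-flight transition, if any.
        if not os.path.exists(self.path):
            return None
        with open(self.path) as f:
            return json.load(f)["transition"]

log = TransitionLog("/tmp/unit-transitions.json")
log.begin("charm-upgrade")
# ...if the agent dies here, a restarted agent sees the pending marker...
assert TransitionLog("/tmp/unit-transitions.json").pending() == "charm-upgrade"
log.commit()
```

SpamapS's counter-position — keep the authoritative intent in zookeeper and just redo the whole operation — avoids this marker entirely, at the cost of needing every step to be safely repeatable.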
[19:44] <grapz> what are the system requirements for the zookeeper node? should it be ok to run it on a t1.micro if you only have a couple of machines?
[19:45] <lifeless> sure
[19:45] <lifeless> zookeeper doesn't need all that much
[19:51] <grapz> great
[20:38] <SpamapS> We should probably try and do some tests to see at what point ZK exhausts all the memory on each of the instance types.. I would bet that it's 1000s of units/relations/etc.
[21:03]  * SpamapS rings the bell, BONG! BONG! BONG BONG BONG BONG! promulgation!
[21:04] <SpamapS> oops-tools has been promulgated. :)
[21:05] <SpamapS> hazmat: how long does the charm browser take to notice that?
[21:08] <lifeless> SpamapS: with an amqp relation ?
[21:09] <SpamapS> lifeless: not yet, just pgsql
[21:10] <lifeless> how do you inject an external pgsql server
[21:10] <SpamapS> lifeless: you don't. subordinate charms will be how that's done..
[21:10]  * SpamapS says that about everything
[21:11] <lifeless> SpamapS: uhm, really?
[21:12] <lifeless> SpamapS: there's no way to say 'I have a machine twice as large as any instance and it runs pgsql for my cloud'
[21:12] <SpamapS> lifeless: yeah, you'd have a little charm that just contains the configs for your external pgsql server and feeds them in
[21:12] <SpamapS> lifeless: right now to do that you'd eat up a machine just to run the hooks to say that
[21:13] <SpamapS> lifeless: subordinates are the main "must have" for 12.04, and bcsaller is hard at work implementing it.. I think. :)
[21:14] <bcsaller> yes :)
[21:14]  * SpamapS ^5's bcsaller
[21:14] <SpamapS> bcsaller: stop reading IRC and get back to work
[21:14] <bcsaller> ha
[21:14]  * SpamapS lashes bcsaller with the whip
[21:14] <SpamapS> ;-)
[21:15]  * SpamapS lashes and ^5's with love, of course
[21:54] <hazmat> SpamapS, 5m
[21:59]  * m_3 wants to bash bash
[21:59] <m_3> grrrrr
[22:02] <hazmat> lifeless, the resource constraint stuff is aimed at letting you target service deployment to appropriate resources
[22:02] <SpamapS> m_3: 'I've had bigger chunks 'o korn in mah csh' -- Fat Bastard to Dash
[22:06] <hazmat> lifeless, if you're curious this is the spec.. https://code.launchpad.net/~fwereade/juju/placement-spec/+merge/84443
[22:06] <gary_poster> hi.  If you have relatively lightweight data that you care about persistently (like a database or perhaps just history/log files) what is the intended juju story?  Are we supposed to use a hook to go stash this information somewhere, or is this considered to be "not juju's problem"? (we should configure backups separately if we have any need like that)
[22:07] <SpamapS> gary_poster: right now juju does nothing except it will require you to explicitly terminate a machine so you don't lose your data.
[22:07] <gary_poster> SpamapS, so the intent is that you write a terminate hook to stash what you want?
[22:08] <SpamapS> gary_poster: no, the intent is you backup your data somehow.. we haven't defined how that happens.
[22:09] <gary_poster> SpamapS, got it, thanks
[22:09] <SpamapS> gary_poster: subordinate charms will allow some interesting innovation there.. you could make 'bacula-agent' a subordinate charm to a service, and then relate it to the 'bacula-director' service which would be able to do regular backups.
[22:10] <gary_poster> hm, yeah.  Being able to know "we're about to go down so you better do a last minute stash" might still be valuable in that scenario
[22:11] <lifeless> hazmat: well the thing is inside canonical DC we have existing things that are really not cloud ready
[22:12] <lifeless> hazmat: 500GB databases, not suitable for S3 or even EBS style deploys
[22:12] <lifeless> hazmat: so gluing those into juju as external endpoints would allow us to start using juju early, while the rest of the bits come up to what we need
[22:14] <_mup_> juju/refactor-machine-agent r449 committed by jim.baker@canonical.com
[22:14] <_mup_> Initial commit
[22:19] <hazmat> lifeless, fair enough, at the cost of an additional instance, you could wire up the external service with a charm... else you're just talking about hard-wiring the external service into say the app charm.
[22:20] <hazmat> we don't have a better story for modeling external system services integration atm
[22:24] <hazmat> m_3, is that jenkins charm test using ec2 nodes with local?
[22:24] <hazmat> or are they serially executing on the jenkins server?
[22:28] <m_3> hazmat: ec2 w/local
[22:42] <SpamapS> lifeless: to add to what hazmat is saying.. for stuff that is not cloud-ready .. you just write a subordinate charm like 'prod-database-client' that does what the regular postgresql charm would do, except on a remote system instead of locally.
[22:45] <SpamapS> lifeless: when subordinate charms are "deployed" they don't actually go anywhere until they are related to a non-subordinate charm, then the two get deployed together.
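As a concrete sketch of the 'prod-database-client' idea SpamapS describes — a charm that speaks the pgsql interface but proxies to an external server — its metadata might look like this. The subordinate feature was still being implemented at this point in the log, so the exact syntax below (the `subordinate` flag and container-scoped relation) is an assumption, not something from the discussion:

```yaml
# metadata.yaml for a hypothetical prod-database-client subordinate charm
name: prod-database-client
summary: Feed an existing external pgsql server into juju relations
subordinate: true
requires:
  # container-scoped relation: deployed alongside its principal service
  host:
    interface: juju-info
    scope: container
provides:
  db:
    interface: pgsql
```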
[23:00] <lifeless> SpamapS: hazmat: thanks
[23:04] <grapz> When I create new EC2 environments now, I just keep getting 'Invalid SSH key' when doing any operations against it. It worked perfectly earlier today. Can't really find what I might have screwed up.
[23:09] <SpamapS> grapz: can you pastebin the exact error?
[23:11] <grapz> SpamapS, http://pastebin.com/FzMRd9GR
[23:14] <SpamapS> grapz: btw, if you use 'pastebinit' it will use paste.ubuntu.com and we won't have to be inundated with ads. ;)
[23:15] <grapz> oh, didn't know that, thanks for the tip :)
[23:15] <SpamapS> grapz: can you ssh to that machine directly with just ssh -l ubuntu ec2-79-125-72-14.eu-west-1.compute.amazonaws.com  ?
[23:16] <grapz> SpamapS, no
[23:16] <grapz> Permission denied (publickey)
[23:40] <SpamapS> grapz: ok, your key didn't get onto your new instances then. Perhaps they have errors on their console?
[23:40] <SpamapS> grapz: euca-get-console-output would show that
[23:43] <hazmat> grapz, are you running on the same machine/account that it was bootstrapped from.. if not then the ssh private key used to connect to the environment may not be available to ssh.. juju will by default use one found in ~/.ssh when executing the bootstrap command
[23:43] <grapz> hazmat, yes, I'm on the same user/machine, so that shouldn't be the issue
[23:44] <grapz> SpamapS, I see some stuff about saving keys to /etc/ssh and then it lists the SSH fingerprints
[23:50] <SpamapS> grapz: that's the host keys, different keys
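The checks SpamapS and hazmat walk through amount to the following sequence (the hostname is the one from the log; the instance ID is a placeholder):

```shell
# 1. Try a direct ssh as the ubuntu user; -v shows which private keys
#    the client offers, which is what matters for "Permission denied (publickey)"
ssh -v -l ubuntu ec2-79-125-72-14.eu-west-1.compute.amazonaws.com

# 2. If that fails, inspect the instance console output for cloud-init
#    errors around importing the public key (not the host-key lines)
euca-get-console-output i-xxxxxxxx   # instance id is a placeholder

# 3. Confirm the key pair juju picked up at bootstrap is still in ~/.ssh
ls -l ~/.ssh
```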
[23:56] <grapz> ok
[23:56] <grapz> i'll continue looking at this tomorrow - need to get some sleep now
[23:58] <SpamapS> grapz: cheers!