[01:53] <_mup_> Bug #921895 was filed: local provider should offer ability to turn on '--force-unsafe-io' for dpkg to speed up package installs. <juju:New> < https://launchpad.net/bugs/921895 >
[09:08] <mpl> rog: and tomorrow is?
[09:08] <mpl> hi all
[09:08] <rog> mpl: almost the weekend!
[09:10] <mpl> hehe, that's right.
[09:10] <mpl> hopefully I'll finally have some time for some juju hacking then
[13:05] <hazmat> gary_poster, ping
[13:05] <gary_poster> hey hazmat
[13:07] <hazmat> gary_poster, i think i might have a line on the problem you were experiencing, but one question first.. what ubuntu version are you running on the host?
[13:08] <gary_poster> hazmat, did you see the email I sent you?  I think it was explainable simply by my environment.yaml (but maybe I'm wrong :-) ).  My host is precise.
[13:13] <hazmat> gary_poster, cool, so i think the problem is the 'distro' version of juju differs between precise and oneiric significantly. the oneiric containers are getting a much older version of juju.. if you add a line to your environment with 'juju-origin': 'ppa' that should resolve it.
[13:15] <gary_poster> hazmat, cool, yeah, it did, thanks.  Like I said in email, I also added an edit to http://askubuntu.com/questions/65359/how-do-i-configure-juju-for-local-usage to hopefully help others.  I actually explicitly had "juju-origin: distro" because of cargo-cult (or at least following-the-fine-manual-cult).
[13:15] <hazmat> doh
[13:15] <gary_poster> :-)
[13:49] <smoser> hazmat, ping
[13:49] <hazmat> smoser, pong
[13:50] <smoser> local provider.... if i just juju deploy say N things at once
[13:50]  * hazmat nods
[13:50] <smoser> is there anything that will protect the initial create from happening N times ?
[13:50] <hazmat> smoser, serial instantiation
[13:50] <smoser> i think in juju terms, that is the creation of "master template" that i'm talking about.
[13:50] <smoser> are you saying I should do that serially? or you're saying juju covers that for me..
[13:51] <hazmat> smoser, indeed, juju should cover that for you
[13:51] <hazmat> it will create the unit containers serially not in parallel
[13:52] <hazmat> smoser, question for you re getting a cloud image suitable for lxc
[13:52] <smoser> so i'm just curious, where/how is that accomplished?
[13:52] <smoser> cloud images should boot in lxc.
[13:52] <hazmat> afaics, i need to qemu-nbd or guestfs to mount the qcow2 and then copy it over to an fs dir or lvm mount
[13:52] <smoser> ah. don't use the disk image
[13:52] <smoser> and you might be duplicating effort
[13:52] <hazmat> smoser, what should i use?
[13:53] <smoser> utlemming might also be looking at creating 'ubuntu-cloud-image' (ie, in 'lxc-create -t ubuntu-cloud-image')
[13:53] <smoser> the easiest consumable thing for you is the .tar.gz file
[13:53] <hazmat> it should be an additional option to the ubuntu template
[13:53] <hazmat> maybe not
[13:54] <hazmat> smoser, what format is that in? tar.gz?
[13:54] <hazmat> i mean the files inside
[13:54] <smoser> you'll unfortunately have to extract it (tar -Sxvzf image.tar.gz "*.img" && mount -o loop *.img /mnt && rsync -a ....)
[13:54] <smoser> .img file, kernel, ramdisk.
[13:54] <smoser> we used to create a .img.tar.gz, which was just a sparse partition image compressed with sparse tar
[13:54] <smoser> but stopped doing that.
[13:55] <smoser> so you'll waste the download of the compressed kernel and ramdisk
[13:55] <smoser> more annoying (to me) is that the .tar.gz isn't immediately useful like the qemu image is.
[13:56] <smoser> you can use the qemu image, if you'd like. but using the partition image will mean you don't have to deal with qemu-nbd
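[Editor's note] The .tar.gz route smoser describes (sparse-extract the partition image, loop-mount it, rsync into a container rootfs) can be sketched as below. To keep the sketch runnable without root or network access, it first synthesizes a throwaway "cloud image" tarball; the artifact name is a hypothetical stand-in, and the root-only mount/rsync steps are shown as comments:

```shell
# Sketch of the .tar.gz pipeline from the discussion above.
# The tarball is synthesized locally so the extract step can actually run;
# with a real download you would start at the "tar -Sxzf" line.
set -e
workdir=$(mktemp -d) && cd "$workdir"

# -- stand-in for the downloaded artifact (hypothetical name) --
truncate -s 64M oneiric-server-cloudimg-amd64.img   # sparse partition image
touch kernel ramdisk
tar -Sczf oneiric-server-cloudimg-amd64.tar.gz ./*.img kernel ramdisk
rm ./*.img kernel ramdisk

# -- the actual extraction pipeline --
tar -Sxzf oneiric-server-cloudimg-amd64.tar.gz --wildcards "*.img"  # -S keeps it sparse
ls -l ./*.img
# sudo mount -o loop ./*.img /mnt            # partition image: no qemu-nbd needed
# sudo rsync -a /mnt/ /var/lib/lxc/tmpl/rootfs/   # copy into the container rootfs
# sudo umount /mnt
```

Note that modern GNU tar needs `--wildcards` for glob matching on extraction; the bare `"*.img"` in smoser's original one-liner relied on older default behavior.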
[13:57] <smoser> hazmat, if it would make your life massively simpler, i would consider adding a filesystem contents tarball
[13:58] <smoser> although i worry about somewhat arbitrarily adding things to the downloads.
[13:59] <smoser> hazmat, SpamapS pointed me at http://paste.ubuntu.com/816820/ which he started on
[14:12] <hazmat> smoser, thanks, thats useful
[14:16] <smoser> hazmat, so were you looking to create an lxc template script?
[14:20] <hazmat> smoser, i was just looking at the problem, we could either bypass and do the download in juju or use a new template script
[14:20] <smoser> i think the template script in lxc is generally more useful
[14:20] <smoser> it serves a wider audience.
[14:20] <hazmat> probably won't fly with others, but i was leaning towards trying to do the download in juju to be able to give the user some feedback
[14:21] <hazmat> true
[14:21] <smoser> a middle of the road solution..
[14:21] <smoser> would be to let juju do the download (duplicating download code)
[14:21] <smoser> and then let it pass '--image' to lxc
[14:22] <smoser> although.. i don't know.
[16:01] <dpb_> Was looking for information on how to properly configure .juju/environments.yaml for an openstack deployment.  I understand the basic stuff (url, key, secret key), but what about control-bucket and admin-secret?
[16:02] <dpb_> Also, do I need a special branch?  or would the packages on oneiric suffice?
[16:05] <m_3> dpb_: control-bucket and admin-secret can be anything afaik... juju generates it when you try to run 'juju' without an environments.yaml file
[16:06] <m_3> dpb_: I typically let vi replace it with `echo "some junk" | md5sum` when creating a new env
[16:06] <jimbaker> m_3, the control bucket simply needs to be unique globally for s3
[16:07] <m_3> right.. thanks
[16:07] <dpb_> m_3: OK, great, thanks for the info
[16:08] <m_3> dpb_: here's a cleaned-up example I use atm http://paste.ubuntu.com/817806/
[16:09] <dpb_> m_3: awesome, thanks!
[16:10] <m_3> dpb_: obviously, make the default match.. that's a typo
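[Editor's note] jimbaker's point (the control-bucket only needs to be globally unique in S3) plus m_3's md5sum trick can be scripted; a sketch, where the `juju-` prefix is just a readable convention and not anything juju requires:

```shell
# Sketch: generate a globally-unique control-bucket and a random
# admin-secret for environments.yaml, in the spirit of m_3's
# `echo "some junk" | md5sum` trick.
control_bucket="juju-$(head -c 32 /dev/urandom | md5sum | cut -d' ' -f1)"
admin_secret="$(head -c 32 /dev/urandom | md5sum | cut -d' ' -f1)"
printf 'control-bucket: %s\nadmin-secret: %s\n' "$control_bucket" "$admin_secret"
```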
[16:15] <jorge> hi folks! I'm testing juju with a diablo openstack installation. when i run 'juju status', i get an error but the command does what is expected.
[16:15] <jorge> root@sold016:~# juju status
[16:15] <jorge> 2012-01-26 14:07:54,476 INFO Connecting to environment.
[16:15] <jorge> 2012-01-26 14:07:59,576 ERROR SSH forwarding error: bind: Cannot assign requested address
[16:16] <jorge> machines:
[16:16] <jorge>   0: {dns-name: 172.16.0.2, instance-id: i-0000001b}
[16:16] <jorge> services: {}
[16:16] <jorge> 2012-01-26 14:08:05,313 INFO 'status' command finished successfully
[16:16] <jorge> Is that normal?
[16:21] <m_3> jorge: it's normal for some networking setups
[16:22] <jorge> ok
[16:22] <m_3> jorge: you have to associate a public address with the bootstrap instance
[16:22] <jorge> now I lost my connection.
[16:22] <m_3> then it should respond to subsequent juju commands
[16:22] <jorge> Cannot connect to machine i-0000001b (perhaps still initializing): could not connect before timeout after 2 retries
[16:23] <koolhead17> m_3: is there a proper doc on this part available somewhere? i tried my best and failed to get it working.
[16:23]  * m_3 looking for using openstack with juju docs
[16:23] <m_3> koolhead17: ha!
[16:24] <koolhead17> and i thought it's happening because the whole openstack infra is behind a proxy
[16:24] <koolhead17> even nova
[16:24] <m_3> koolhead17: the setup can vary... greatly
[16:24] <koolhead17> m_3: yeah i have not been successful with my deployment :(
[16:25] <m_3> koolhead17: lemme see if I can find my notes for the openstack cloud I'm using now
[16:25] <koolhead17> although juju works with my LXC
[16:26] <jorge> i'm using proxy too
[16:27] <jorge> when running the euca* commands you cannot have the http_proxy variables set up.
[16:27] <m_3> so I'm using euca2ools... that's probably the first place to start
[16:28] <m_3> from a machine where you can hit the endpoint urls directly
[16:29] <m_3> (so either your endpoints need to be public or your client machine needs to be in a visible network segment)
[16:31] <SpamapS> jorge: if there is a box in between you and the instances that you can SSH bounce off of, you don't need a public address, btw
[16:31] <SpamapS> jorge: you just have to setup ~/.ssh/config to bounce properly
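[Editor's note] SpamapS's bounce setup is plain OpenSSH client configuration; a minimal ~/.ssh/config sketch, where "bastion.example.com" is a hypothetical stand-in for the box you can bounce off of and 172.16.0.0/24 is the private range from this discussion:

```
# ~/.ssh/config (sketch): reach instances on the private 172.16.0.0/24
# network by hopping through a bastion host first.
Host 172.16.0.*
    User ubuntu
    ProxyCommand ssh -W %h:%p bastion.example.com
```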
[16:32] <jorge> some information more:
[16:33] <jorge> when I run juju bootstrap and after juju status (the first time), all worked fine.
[16:33] <jorge> after some minutes, it stops working.
[16:34] <jorge> now I can ssh to the instance using just the ssh command, with the ip of the instance. i'm on the controller and the instance is running on it. all on the same host.
[16:34] <koolhead17> jorge: you're executing all this from the same internal network as the instances?
[16:35] <koolhead17> hey SpamapS
[16:35] <jorge> yes. my instances are in 172.16.0.0/24 and i'm executing these commands from the network controller (from the host with ip 172.16.0.1)
[16:36] <jorge> the connectivity to the instance is ok using ssh from the command line. I can log in with that.
[16:36] <jorge> see that
[16:36] <jorge> juju -v status
[16:36] <jorge> DEBUG Spawning SSH process with remote_user="ubuntu" remote_host="172.16.0.2" remote_port="2181" local_port="41754".
[16:36] <SpamapS> koolhead17: o/
[16:37] <SpamapS> jorge: right, thats so juju can talk to zookeeper over a secure connection
[16:37] <koolhead17> jorge: k. does it try to connect with user ubuntu
[16:38] <koolhead17> also hope you're using smoser`s cloud-image for the instances
[16:38] <jorge> 2181 is zookeeper?
[16:40] <jorge> let me think... shouldn't i create a rule to permit this port? maybe iptables is dropping this connection.
[16:40] <jorge> euca-authorize -P ....
[16:41] <jorge> i'm going to start tcpdump in the instance to see the connections attempts.
[16:45] <SpamapS> jorge: no, you don't need to euca-authorize 2181
[16:45] <SpamapS> jorge: its connecting to it on 127.0.0.1 *through* an ssh tunnel
[16:45] <jorge> humm, ok
[16:46] <SpamapS> jorge: can you pastebin the whole failure? like 'juju -v status 2>&1 | pastebinit'
[16:47] <negronjl> m_3: ping
[16:47] <m_3> negronjl: hey
[16:52] <jorge> SpamapS: http://pastebin.com/SHmAszL7
[16:53] <koolhead17> SpamapS: i was getting similar error
[17:01] <SpamapS> jorge: is there something listening on port 54122 ?
[17:03] <jorge> SpamapS: no. I've tried many times and juju tries to use other ports and the problem is the same.
[17:03] <jorge>  DEBUG Spawning SSH process with remote_user="ubuntu" remote_host="172.16.0.2" remote_port="2181" local_port="50093".
[17:04] <jorge> ERROR SSH forwarding error: bind: Cannot assign requested address
[17:05] <SpamapS> jorge: try ssh -v -L 50093:127.0.0.1:2181 ubuntu@172.16.0.2
[17:06] <jorge> Ok, just a minute because i destroy the environment. It is starting again.
[17:19] <jorge> SpamapS: ssh -v -L 45039:127.0.0.1:2181 ubuntu@172.16.0.2 WORKED!!! I've changed the port because I've started the env again.
[17:20] <jorge> but juju status return error.
[17:20] <SpamapS> jorge: ok, thats good, and on that box, do you see zookeeper running?
[17:21] <jorge> SpamapS: yes, a java process and port 2181 opened.
[17:21] <SpamapS> jorge: ok, same juju status error though?
[17:24] <jorge> SpamapS: yes
[17:25] <SpamapS> jorge: really puzzling
[17:25] <jcastro> hazmat: I need to drop off for another call, thanks for inviting me!
[17:25] <hazmat> jcastro, thanks for joining
[17:54] <jorge> SpamapS: intermittent problems. I run juju status and on the first attempt i can see an error, on the second it worked. and, trying one more time, error. see http://pastebin.com/T4xE3KAG
[17:57] <SpamapS> jorge: I wonder if this is somehow related to the fact that you're running this on the network controller
[17:57]  * hazmat lunches
[18:25] <jcastro> http://pad.ubuntu.com/charmschool
[19:39] <_mup_> juju/deploy-invalid-conf r448 committed by kapil.thangavelu@canonical.com
[19:39] <_mup_> validate config before deploying service.
[19:58] <hazmat> bcsaller, jimbaker could i get a +1 on this trivial?  its a fix for bug 903149
[19:58] <_mup_> Bug #903149: juju fails silently with empty revision file. <juju> <juju:New> < https://launchpad.net/bugs/903149 >
[19:58] <hazmat> http://paste.ubuntu.com/818065/
[20:00] <bcsaller> hazmat: so you don't trap ServiceConfigError anymore? That's more than just revision and can stop the processing of the repository, right?
[20:01] <hazmat> bcsaller, CharmError is a base class for serviceconfigerror
[20:01] <hazmat> its a more generic handling case
[20:01] <bcsaller> ahh, ok
[20:01] <bcsaller> +1
[20:22] <jcastro> niemeyer: I have this work item on the community charm docs for you "[niemeyer] drive dicussion about interface documentation on juju mailing list"
[20:22] <jcastro> have we done this yet?
[20:33] <niemeyer> jcastro: yo
[20:33] <niemeyer> jcastro: We haven't, but I'm on a roll on some development here ATM.. would you mind to ping me about this tomorrow?
[20:34] <jcastro> sure
[20:34] <niemeyer> jcastro: Thanks!
[20:36] <jcastro> SpamapS: you've got one WI too
[20:36] <jcastro> [clint-fewbar] add README to 'charm create' template
[21:25] <_mup_> juju/repo-find-report-charm-error r448 committed by kapil.thangavelu@canonical.com
[21:25] <_mup_> [trivial] Repositories report/log charm structural errors. [r=bcsaller][f=901495]
[21:29] <SpamapS> jcastro: ACK, that one should be done b4 FF
[21:29] <jcastro> SpamapS: ok, just keeping an eye on our burndown, ta.
[22:16] <statik> hey niemeyer, newbie question about lbox
[22:16] <statik> I tried goinstalling lbox on precise, and got some dependency package? errors
[22:16] <statik> I think I'm running the golang packages from precise. http://pastebin.ubuntu.com/818207/
[22:17] <statik> any tips on how to get lbox installed? I'm probably missing something simple
[22:43] <sidnei> bcsaller, hey, around?
[22:43] <bcsaller> sidnei: hey, whats up?
[23:19] <niemeyer> statik: Hey
[23:19] <niemeyer> statik: Ah, yes, I see
[23:20] <niemeyer> statik: You'll have to install golang-tip from the ppa
[23:21] <niemeyer> statik: Optionally, you can install lbox from the PPA as well
[23:21] <niemeyer> statik: pre-built
[23:26] <SpamapS> Watching debug-log .. its kind of a bummer that the output of the hooks is double-newlined
[23:29] <SpamapS> hrm.. upgrade-charm needs to re-join all peer relationships
[23:29] <SpamapS> Otherwise if you change joined/changed hooks.. they won't be re-run
[23:30] <adam_g> any way to specificy an environments.yaml located somewhere other than ~/.juju/environments.yaml?
[23:32] <adam_g> ya, specificy
[23:52] <m_3> adam_g: you might get away with resetting HOME for a subshell of juju commands... it'd move .ssh too I guess though
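[Editor's note] m_3's HOME trick can be done per-command without a persistent subshell; a sketch, with the alternate directory hypothetical, and with the same caveat that ~/.ssh resolution moves along with HOME:

```shell
# Sketch: juju reads $HOME/.juju/environments.yaml, so overriding HOME
# for a single invocation points it at an alternate config tree.
# Caveat (per m_3): ~/.ssh lookups move too.
alt=$(mktemp -d)
mkdir -p "$alt/.juju"
echo 'environments: {}' > "$alt/.juju/environments.yaml"  # stand-in config
HOME="$alt" sh -c 'ls "$HOME/.juju/"'   # any command run this way sees the alternate tree
# e.g.: HOME="$alt" juju status
```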