=== defunctzombie_zz is now known as defunctzombie
[01:04] hello
[01:04] is anyone using juju with openstack?
[01:04] by that I mean using openstack as the environment...
[01:06] mpae: yes, I am
[01:07] hloeung: question, if you don't mind... I'm getting a "could not find AWS_ACCESS_KEY_ID" error when I try to bootstrap, where did you find this?
[01:08] or do you already have amazon credentials...
[01:09] mpae: from the sound of it, it sounds like you're using the Go version of juju?
[01:09] hloeung: actually I'm not... I had that problem even when trying to use maas with the Go version
[01:09] so I switched to the non-Go version early on
[01:10] err, the Python version :)
[01:10] mpae: hmm, interesting. What does 'juju --version' return?
[01:11] juju 0.7
[01:11] hey marcoceppi, did you get to take a look at my charm?
[01:11] hloeung: I'm also getting a warning about the OpenStack identity service not using secure transport, I don't know if that's significant to this
[01:12] mpae: hmm, so looking at one of the environments we've deployed using the Python version, AWS_ACCESS_KEY_ID isn't set
[01:13] hloeung: juju -v bootstrap shows that it's using "auth-mode 'userpass'"
[01:14] hloeung: I followed the instructions provided here https://juju.ubuntu.com/docs/config-openstack.html to configure the environments.yaml
[01:14] hloeung: is there a different guide I should be following for the Python version?
[01:16] mpae: I'd try removing auth-mode
[01:16] hloeung: but I don't have that anywhere in my environments.yaml
[01:17] jose: not yet
[01:17] hloeung: do you have the same options specified in your environment as what's specified in https://juju.ubuntu.com/docs/config-openstack.html ?
[01:17] ok :)
[01:17] hloeung: and are you using https or http for the keystone URL?
[01:18] mpae: here's an example of my juju environment - http://pastebin.ubuntu.com/5857107/
[01:18] mpae: HTTPS for keystone
[01:19] hloeung: thanks for that, I see you're using the openstack type rather than openstack_s3
[01:19] hloeung: you also don't have the keystone URL, is that a copy-paste omission?
[01:21] mpae: the environment variable OS_AUTH_URL sets my keystone URL
[01:21] hloeung: I see, I have that set as well, so I guess I don't need it
[01:22] hloeung: I mean I don't need it in environments.yaml
[01:22] mpae: right
[01:22] mpae: let me know how you go
[01:22] well, different error now :)
[01:23] hloeung: different error now but progress :) I'm going to try digging into this a bit, thank you
[01:24] mpae: heh, cool :)
[01:30] hloeung: what version of openstack are you using? the examples on the juju website say to use openstack_s3 for an openstack deployment
[01:36] this would be so much easier if the juju credentials download in horizon was working :'(
[01:39] dang... all I had to do was download the EC2 credentials and add the ec2rc.sh to my profile
[01:39] now it no longer complains about AWS credentials
[01:39] and I can use the openstack_s3 provider type just fine
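For context, a minimal sketch of the kind of environments.yaml stanza being discussed above, assuming the openstack_s3 provider type that mpae ended up using. The key names and values are illustrative, not mpae's or hloeung's actual configuration, and should be checked against https://juju.ubuntu.com/docs/config-openstack.html for the juju version in use; the idea is that bootstrap then picks up credentials from the shell environment rather than complaining about AWS_ACCESS_KEY_ID.

# Illustrative sketch only -- check key names against the docs page above.
environments:
  my-openstack:
    type: openstack_s3                    # provider type suggested by the docs for Python juju
    control-bucket: juju-some-unique-bucket-name
    admin-secret: some-random-secret
    default-series: precise
    # No keystone/auth URL here: it can come from OS_AUTH_URL in the shell
    # environment, and the EC2-style credentials come from the ec2rc.sh file
    # downloaded from Horizon and sourced in the profile.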
[02:07] <_mup_> Bug #1199205 was filed: juju boostrap does not work if keystone url does not have trailing /
=== defunctzombie is now known as defunctzombie_zz
[02:10] * davecheney facepalm
=== defunctzombie_zz is now known as defunctzombie
=== defunctzombie is now known as defunctzombie_zz
=== CyberJacob|Away is now known as CyberJacob
=== defunctzombie_zz is now known as defunctzombie
=== defunctzombie is now known as defunctzombie_zz
[06:28] simple MAAS setup in ubuntu 13.04 and virtual box | http://askubuntu.com/q/318032
=== CyberJacob is now known as CyberJacob|Away
=== mwhudson_ is now known as mwhudson
[08:06] I need a way to get a list of all of a unit's peers. I can get info about a new one in peer-relation-joined all right.
[08:06] but I need this information at any time, so I have to store those peers somewhere.
[08:06] how can I "lock" this somewhere for both reading and writing so I'm sure I change things atomically there?
[08:07] if I don't use any locking mechanism I lose data, because when I add n units I end up with n hooks changing the data there without taking care of each other.
[08:07] is there a way to get either a write _and_ read lock in Python? (I only manage to make a write lock with fcntl but I don't understand how to block reading too)
[08:08] or a thread-like semaphore within hook code?
[08:26] melmoth: You have "relation-list" inside a relation hook. That will give you the names of all the peers (minus the one the hook is running on).
[08:27] henninge, only when this hook is fired by a relation hook
[08:27] and I need the data at another stage, so I need to store it when I have access to it.
[08:27] AFAIK, yes.
[08:27] getting the data is not a problem, it's storing it in a safe way.
[08:28] You can store the data in the charm directory.
[08:28] I think that is ok to do.
[08:28] or some other place.
[08:29] Maybe some other place is better. ;-)
[08:42] henninge, the problem is concurrent access. When I have n units added at the same time, I need a way to synchronise access to wherever the data is stored
[08:43] I think I'll just use a simple lock file. if it's there, let's wait. if it's not, let's put it there and do the job
[08:50] melmoth: All hooks on one machine run in one thread.
[08:50] So I don't think there is concurrency.
[08:50] (I assume they run in one thread, actually)
[08:51] there is.
[08:51] ok
[08:51] yesterday I added like 5 units, saw in the log that all the units were seen, but in the resulting file I wrote, some were missing
[08:52] the only way I can explain that is a race condition between the moment the file was read by one hook, and the moment it was written again with the new data
[08:52] I bet between those two moments, another unit did the same stuff and I lost one entry
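A minimal sketch of the lock-file approach melmoth is describing, assuming Python hooks and that every hook goes through the same helper; the file name and helper are hypothetical, not taken from melmoth's charm. A single exclusive flock held across the whole read-modify-write is enough to prevent the lost updates described above, so no separate read lock is needed.

# Hypothetical sketch: serialize access to a shared peers file so concurrent
# hooks cannot interleave their read-modify-write cycles and lose entries.
import fcntl
import json
import os

# CHARM_DIR is set by juju when running hooks; fall back to "." for testing.
PEERS_FILE = os.path.join(os.environ.get("CHARM_DIR", "."), "peers.json")

def add_peer(unit_name):
    with open(PEERS_FILE, "a+") as f:
        # An exclusive lock blocks any other hook that also calls flock on
        # this file, whether it wants to read or write, so the whole
        # read-modify-write below is effectively atomic between hooks.
        fcntl.flock(f, fcntl.LOCK_EX)
        f.seek(0)
        raw = f.read()
        peers = json.loads(raw) if raw.strip() else []
        if unit_name not in peers:
            peers.append(unit_name)
        f.seek(0)
        f.truncate()
        json.dump(peers, f)
        # The lock is released when the file is closed.

A hook could call this from peer-relation-joined with the remote unit name (for example from the JUJU_REMOTE_UNIT environment variable), and read the file under the same lock whenever the list is needed later.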
=== CyberJacz is now known as CyberJacob
=== CyberJacob is now known as Guest68009
=== _mup__ is now known as _mup_
=== LanaDelRey is now known as Catbuntu
[12:24] mgz, how are we doing with regard to the 1.11.2 release of juju-core? I'd like to get something uploaded to saucy and I just resolved the golang 1.1.1 issue in the interim
[12:25] jamespage: I'll poke dave, I don't feel we're blocked on anything now
=== teknico1 is now known as teknico
=== teknico is now known as teknico-phone
=== wendar_ is now known as wendar
=== jcastro changed the topic of #juju to: Share your infrastructure, win a prize: https://juju.ubuntu.com/charm-championship/ || Review Calendar: http://goo.gl/uK9HD || Review Queue: http://jujucharms.com/review-queue
=== jcastro changed the topic of #juju to: Share your infrastructure, win a prize: https://juju.ubuntu.com/charm-championship/ || Review Calendar: http://goo.gl/uK9HD || Review Queue: http://jujucharms.com/review-queue || http://jujucharms.com || Reviewer: m_3
=== Guest39539 is now known as med_
=== defunctzombie_zz is now known as defunctzombie
=== defunctzombie is now known as defunctzombie_zz
=== defunctzombie_zz is now known as defunctzombie
=== scuttlemonkey_ is now known as scuttlemonkey
=== Catbuntu is now known as Guest56834
=== HelenCrowley is now known as Catbuntu
[20:20] quiet today
=== koolhead17 is now known as koolhead17|zzZZ
[21:05] exit
[21:05] ;-)
=== Guest68009 is now known as Guest68009|Away
[22:03] anyone have any ideas on https://lists.ubuntu.com/archives/juju/2013-July/002657.html
[22:34] marcoceppi: hey, what did you mean by 'change configuration'?
=== defunctzombie is now known as defunctzombie_zz
[22:46] also, do I need to design my own icon?
=== defunctzombie_zz is now known as defunctzombie
=== defunctzombie is now known as defunctzombie_zz
[23:42] jose: if a user runs `juju set postfix hostname="something else"`
[23:43] right now nothing happens; this triggers the hooks/config-changed hook in the charm though
[23:43] Basically you'd just put the bulk of hooks/install into hooks/config-changed
[23:44] jose: not your own, but you should try to find the logo for postfix and put it on the charm icon template
[23:44] haha, http://www.postfix.org/mysza.gif
[23:45] Yup :) that should make for an interesting charm icon
[23:46] perhaps a touch less professional-looking than exim or sendmail
[23:46] ah but still better than qmail :) http://www.qmail.org/Q.6.01.Logo.lg.jpg
[23:46] Well, not much we can do about that
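Picking up the config-changed discussion from 23:42-23:43 above, a minimal sketch of what a hooks/config-changed for the postfix charm might look like. The use of Python and the specific postconf/service commands are assumptions for illustration, not the charm's actual implementation; only config-get is a standard juju hook tool.

#!/usr/bin/env python
# Hypothetical hooks/config-changed sketch for a postfix charm: re-apply the
# configurable parts of the install logic so `juju set postfix hostname=...`
# actually takes effect.
import subprocess

def config_get(key):
    # config-get is the juju hook tool for reading charm config values.
    return subprocess.check_output(["config-get", key]).decode().strip()

def main():
    hostname = config_get("hostname")
    if hostname:
        # Update main.cf and reload postfix so the new hostname is used.
        subprocess.check_call(["postconf", "-e", "myhostname=%s" % hostname])
        subprocess.check_call(["service", "postfix", "reload"])

if __name__ == "__main__":
    main()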