[00:12] enmand: where?
[00:32] Where what?
[00:33] Where did I read stuff? I don't remember, really; mostly blogs
=== jamespag` is now known as jamespage
[08:31] hazmat: ping, I tried using the charm update; charm getall, to obtain all the charms, but the wordpress/mysql is still not working
[12:54] shang, as I recall, the problem was the wordpress formula hadn't been updated to use open-port/close-port... so it wasn't accessible via the internet
[12:55] shang, as of bzr rev 50, charm rev 31 of the formula, that should be fixed.. i.e. per this change https://bazaar.launchpad.net/~charmers/charm/oneiric/wordpress/trunk/revision/50
[12:56] shang, you can verify the charm formula in juju status
[13:03] <_mup_> Bug #876488 was filed: juju subcommand 'debug-report' to collect logs from relevant locations < https://launchpad.net/bugs/876488 >
[14:21] hazmat: ping
[14:26] robbiew, pong
[15:32] is there a guide that talks about openstack environments.yaml config?
[15:38] mjfork: adam_g is probably the best person to ask
[15:39] ok, can wait for him to rebutn
[15:39] returen
[15:39] sigh...return
[15:44] mjfork: I have one working with openstack - it's much the same as ec2 (same provider) but you need to specify the ami, ec2-uri and s3-uri manually
[15:54] mjfork: using juju against openstack, or deploying openstack with orchestra?
[15:54] if the former, then jamespage's advice is correct. If the latter, then there's a guide in the works that I think RoAkSoAx is working on
[15:55] RoAkSoAx: ^^ you got some docs on orchestra+juju+openstack deployment yet?
[16:02] robbiew: ping
[16:03] I am doing it against openstack
[16:03] m_3: we got a call, right?
[16:03] what is the control-bucket?
[16:03] random md5?
[16:08] mjfork: I usually let juju generate it for me
[16:08] think it's just gotta be unique per account
[16:08] ok
[16:10] mjfork, m_3: just a note: afaik s3 bucket names have to be globally unique
[16:27] globally unique in S3 itself
[16:29] SpamapS, yep
[16:30] need to be away for now; back later on
[16:38] hrm... local provider is having issues running on an ec2 instance
[16:39] SpamapS: bummer... I was totally wanting to play with that sometime
[16:39] I have an m2.xlarge with /var/lib/lxc on tmpfs ..
[16:39] but virsh net-start default is failing
[16:40] sudo juju bootstrap worked. :(
[16:40] wow... hmmmm
[16:40] so something broken in the way sudo is obtained
[16:41] virbr shouldn't need a reboot after installing libvirt-bin
[16:41] might tweak interfaces in /etc/libvirt/network/default.xml (something like that)
[16:41] no, it works as root
[16:41] something else is broken
[16:42] have to logout/back in
[16:42] to get libvirtd privs
[16:43] my personal experience is that I had to reboot to get the virtual networking to work properly, but I didn't investigate further
[16:43] sounds like he got it... a groups thing
[16:43] I did not
[16:43] it's fine now
[16:44] just had to logout/back in
[16:44] awesome
[16:44] cool potential for test frameworks
[16:45] I got my laptop up to a load factor of 12+ yesterday
[16:45] You know, the more I think about it, the more I think we should just put the first unit on the bootstrap machine.
[16:45] man that stack came up fast!!
[16:45] little warm on the lap though
[16:46] maybe try a bag of frozen peas? ;)
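A minimal sketch of the openstack environments.yaml discussed above (15:44-16:10): jamespage's setup uses the ec2 provider pointed at an openstack cloud. Only ec2-uri, s3-uri, control-bucket, and the manually specified ami come from the log; the remaining field names, hosts, ports, and values are assumptions based on the era's ec2 provider config.

```sh
# Hypothetical sketch, not a verified config. Hosts/ports are placeholders;
# 8773 and 3333 are the usual nova EC2-API and nova-objectstore ports.
cat > ~/.juju/environments.yaml <<'EOF'
juju: environments

environments:
  openstack:
    type: ec2                               # same provider as ec2
    control-bucket: juju-openstack-1f8a9c   # must be unique (globally, on real S3)
    admin-secret: some-random-string        # any random string, unique per environment
    access-key: <EC2_ACCESS_KEY>
    secret-key: <EC2_SECRET_KEY>
    ec2-uri: http://openstack-api:8773/services/Cloud
    s3-uri: http://openstack-api:3333
    default-image-id: ami-00000001          # the "ami" you must specify manually
EOF
```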
[16:46] ha
[16:46] suspend barfs the stack though
[16:46] yeah, the single laptop disk tends to make running 5+ containers send this one to 11
[16:46] +1 on overloading bootstrap in localdev
[16:46] yeah, zookeeper doesn't like the suspend/resume
[16:47] well with local dev there's no need
[16:47] all units go on machine 0 ;)
[16:47] yeah, good point
[16:57] #ubuntu-classroom now for a juju session, btw. :)
[17:45] SpamapS, it's the groups and the shell
[17:46] the user executing the bootstrap has to be a member of the libvirt group
[17:46] but on initial installation and creation of the group, the executing shell doesn't have the group
[17:47] bootstrap doesn't actually create any containers, just the network, zk, machine agent
[17:47] all it takes to fix the groups is to execute a new shell
[18:00] wow
[18:00] local provider on tmpfs .. LIGHTNING
[18:01] SpamapS, :-) sweet, did you just tmpfs /var/lib/lxc ?
[18:02] yep
[18:02] SpamapS: maybe we should recommend that for devel
[18:02] tmpfs 14G 2.3G 12G 16% /var/lib/lxc
[18:02] m2.xlarge.. mmmmm
[18:02] bcsaller, it was pretty fast on your ssd
[18:02] I have about 40 more minutes with it before I get charged $0.50 more
[18:04] now I need to figure out why mediawiki in the charm collection doesn't have the config.yaml I added to it. :-P
[18:04] hazmat: yeah, but the apt-get install phase still takes too long
[18:06] thanks for the session SpamapS
[18:06] but after juju deploy --repository charms local:mediawiki I got state: null
[18:06] how can I start it?
[18:06] elopio: it's probably still deploying
[18:06] elopio, null typically means it's pending
[18:06] elopio: I cheated and put it on a giant tmpfs ..
[18:07] elopio, we're working on making that more obvious in the status output
[18:07] debug-log also needs to show the local provider's unit logs
[18:07] SpamapS, there is no local provider ;-)
[18:07] SpamapS, hazmat: ok :) So I'll wait.
[18:08] SpamapS, I mean there's no provisioning agent running
[18:08] $ ls ~/src/juju/trunk/juju/providers/
[18:08] common dummy.py dummy.pyc ec2 __init__.py __init__.pyc local orchestra tests
[18:08] SpamapS, the machine agent is just deploying the units
[18:08] in an lxc container, it will work the same on ec2 or orchestra
[18:08] when we enable lxc there
[18:08] So the machine agent should be sharing them?
[18:08] SpamapS, 'sharing' means what?
[18:09] they are all units assigned to a single machine
[18:09] hazmat: sharing the logs
[18:10] SpamapS, each unit has its own logs in the container .. I changed the location to be a bit more FHS compliant.. /var/log/juju/unit-name.log
[18:10] elopio: you should be able to see the logs under ~/.juju/data/$USERNAME-local/units
[18:10] SpamapS, there's a symbolic link for convenience in the data-dir
[18:10] elopio: something like mediawiki-0/unit.log
[18:10] elopio: have to be root tho
[18:10] hazmat: I really like the idea of debug-log being comprehensive
[18:11] SpamapS, it is comprehensive for all agents
[18:11] Err, but it doesn't show me the unit.log stuff
[18:11] SpamapS, but in this case there is no provider agent, and no place to log to
[18:11] SpamapS, hmm
[18:11] SpamapS, I have no ~/.juju/data directory.
[18:11] SpamapS, it should; if not, it's a bug
[18:12] elopio, it's whatever directory you're using as data-dir in the local provider config in environments.yaml
[18:12] elopio: the example environments.yaml I pasted had /home/ubuntu/.juju/data .. so you may have it there
[18:13] ahh, ok.
[18:13] now my units directory is empty.
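A sketch of the tmpfs trick SpamapS describes above: backing /var/lib/lxc with RAM so local-provider containers come up fast. The 14G size matches the df output quoted in the log; size it to your own RAM.

```sh
# Speed up the local provider by putting containers on tmpfs.
# Anything on tmpfs is lost on reboot/suspend, so dev use only.
sudo mount -t tmpfs -o size=14G tmpfs /var/lib/lxc

# Verify the mount, as in the df line quoted above.
df -h /var/lib/lxc
```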
[18:15] elopio: check machine-agent.log in the directory above then
[18:15] elopio, it takes a little while the first time ever, as the system needs to download and debootstrap a base distribution
[18:17] ok, I'm getting close to my error: http://paste.ubuntu.com/711155/
[18:18] SpamapS, hazmat ^^
[18:19] elopio, what's the output of lxc-ls? and do you have a /master-customize.log ?
[18:19] elopio: and this is on oneiric, not natty, right?
[18:20] hazmat: it looks like lxc-create for the master just failed outright
[18:21] I wouldn't expect a customize log, it didn't get that far
[18:23] hazmat, there's no output of lxc-ls. And I don't have a master-customize.log
[18:23] bcsaller, yes, oneiric.
[18:24] time to check in to my flight, bbiab
[18:25] elopio: maybe try 'sudo lxc-create -t ubuntu -n test-lxc -- -r oneiric'
[18:25] elopio: that should verify that lxc-create *can* work on your system. ;)
[18:27] SpamapS, yes, it can.
[18:27] I did it all again, and now my log says Creating master container...
[18:28] I think that I started juju without sudo.
[18:28] you should not
[18:28] just $ juju bootstrap
[18:28] it doesn't need to run as root
[18:28] it will use sudo when it needs it
[18:28] now I did $ sudo juju bootstrap, and it seems to be working
[18:28] That isn't necessary, I'm sure it was some other problem.
[18:28] but glad it's working in some capacity
[18:29] let's see what the log says after creating the container.
[18:30] well, but anyway, this juju thing rocks.
[18:30] SpamapS, I'd just add sudo in front of all the commands in your script :p
[18:32] yes, it's working now. I have the units/mediawiki-0 directory, and the log says it's downloading packages.
[18:33] thank you people!
[18:34] elopio: btw, I just updated the mediawiki charm with the config.yaml that was missing.. lets you change the name, skin, logo, and admin user/pass
[18:34] SpamapS, yes, I asked about the password but it seems my question didn't get through.
[18:35] so I assume that there's a config.yaml for the mysql charm where you can change the password too.
[18:35] elopio: if you bzr update in the mediawiki dir, you should get revision 80 .. which allows 'juju set mediawiki admins="user:pass"'
[18:35] elopio: for mysql you don't actually need root access ever. ;)
[18:36] elopio: the charm has it, but doesn't expose it
[18:36] SpamapS, but what if I want to pimp my mysql?
[18:37] well, I suppose I should make a charm for that too.
[18:37] elopio: you can ssh in
[18:37] elopio: and yeah, anything you need to tune should be in config.yaml as a tunable
[18:37] I've been meaning to go through all the mysql tuning parameters and put them into the mysql charm
[18:44] mediawiki up and running \o/
[18:44] awesome.
[18:46] elopio: bonus points if you get haproxy in front of it, and a mysql slave added. ;)
[18:49] SpamapS, ha, I'll ask for some vacation to keep playing with juju. I guess that my boss will say no :p
[18:50] lunch's over, so I'll get back to the things I should be doing. But I hope to talk to you again.
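A sketch of the full session above, including SpamapS's "bonus points" suggestion of haproxy in front and a database behind. The deploy and set commands are taken from the log; the relation endpoints and the haproxy/mysql wiring are assumptions about the era's charm collection.

```sh
# Hypothetical end-to-end sketch (juju CLI of this era), assuming a local
# charm repository in ./charms containing mediawiki, mysql, and haproxy.
juju bootstrap

juju deploy --repository charms local:mediawiki
juju deploy --repository charms local:mysql
juju deploy --repository charms local:haproxy

# Wire them up. "mediawiki:db" disambiguates the endpoint, since the charm
# reportedly also had a slave relation to mysql (hence the "ambiguous
# endpoints" error mentioned at the end of the log).
juju add-relation mediawiki:db mysql
juju add-relation mediawiki haproxy

# Set the admin user/pass exposed by the charm's config.yaml (rev 80+);
# "user:pass" is a placeholder.
juju set mediawiki admins="user:pass"

# Make the site reachable through the proxy.
juju expose haproxy
```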
[18:55] looking at environments.yaml, does s3 on the API node have to listen on a public ip?
[19:06] mjfork, re s3 and openstack, the s3 url needs to be accessible to the juju client and the machine nodes
[19:06] s/machine/virtual
[19:06] ok, can I use objectstore
[19:11] mjfork, either the nova/objectstore/s3server.py or swift with the s3 middleware should work
[19:11] we've primarily done testing with the nova s3server
[19:15] I just config'ed objectstore to listen on 0.0.0.0
[19:18] hazmat: I assume you can use a regular user in the environments file?
[19:19] mjfork, not sure what you mean by regular user?
[19:19] doesn't have to be an admin user
[19:20] mjfork, as long as the openstack credentials are authorized to create machines, it should be fine
[19:20] objectstore/s3server.py doesn't do any actual auth on the s3 side of it
[19:27] what is admin-secret?
[19:27] I am getting unauthorized CreateSecurityGroup
[19:27] guessing I need to assign some special permission in nova (using keystone for auth)
[19:31] admin-secret is just any random string unique to the environment
[19:32] mjfork, ^ ... also out of curiosity, what version of openstack are you using?
[19:32] Diablo
[19:34] got bootstrap to run; I needed to run nova-manage role add juju cloudadmin
[19:35] (I also did netadmin, itsec, not sure if all were needed tho)
[19:37] SpamapS, re the libvirt group membership missing.. did bootstrap have an error?
[19:49] it looks like it caused an error on network start
[20:02] hazmat: logout/back in solved all problems I had
[21:13] <_mup_> juju/test-api r237 committed by kapil.thangavelu@canonical.com
[21:13] <_mup_> merge trunk
[23:00] btw, the error message for ambiguous endpoints is AWESOME now
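The role grants mjfork describes above, collected into one place. The cloudadmin command is quoted verbatim from the log; as he notes, netadmin and itsec may not all be needed, and "juju" is the nova user name from his cloud.

```sh
# On a Diablo cloud where bootstrap fails with an unauthorized
# CreateSecurityGroup: grant the nova user used by juju more roles.
# cloudadmin alone may be sufficient.
nova-manage role add juju cloudadmin
nova-manage role add juju netadmin
nova-manage role add juju itsec
```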