=== koolhead17 is now known as koolhead17|zzZZZ
[02:35] Is there a default charm repository installed with Juju on Oneiric?
[02:36] Or does it rely on store.juju.ubuntu.com? (Which doesn't seem to exist?)
[06:37] enmand_, there are local repos
[06:37] in oneiric ootb
[06:40] hazmat: backportpackage can fix that btw
[06:41] hazmat: I actually just finished backporting the oneiric txaws to ppa:clint-fewbar/fixes
[06:46] hazmat: the whole dh_python2 mess does make things hard to backport tho
[07:26] hey guys
[07:26] where can i see what charms are available in the juju repo
[08:10] * eagles0513875 waves to fwereade in here :)
[13:16] hazmat, where are the local repos in Oneiric? Are they in a package besides juju?
[16:16] hazmat: try logging out of the labs webpage and log back in
[16:18] Ryan_Lane: euca-run-instances -k wmflabs-mmm-20111016 -t c1.medium ami-00000004
[16:22] hazmat: what error are you getting when trying to log into canonical-bridge?
[16:23] enmand_: there's a package in ppa:juju/pkgs called 'charm-tools' that will include a command, 'charm getall'
[16:23] enmand_: it uses bzr to checkout all of the charms from https://launchpad.net/charm
[16:26] Ah, OK
[16:26] So, there is no default charm set for Oneiric?
[16:27] SpamapS: morning
[16:33] SpamapS: is it easy to enable lp review features for lp:charm?
[16:36] heh. I figured it out
[16:36] bad config
[16:36] I'm working on fixing it
[16:42] if puppet will ever actually run :(
[16:46] m_3: it's working now
[16:48] m_3: enable them? err.. they're built in to it.
[16:50] m_3: what features are you looking for?
[16:52] SpamapS: just a +1 from anyone in charmers before promulgation
[16:53] I guess the lp review stuff comes automatically with merge proposals
[16:53] SpamapS: but we don't have such things for charms atm
[16:55] SpamapS: (features like submit for review, pending review state, pending review queue for charmers, etc)
[16:56] Ryan_Lane: testing now...
[16:56] m_3: well if we were more careful and didn't just push to lp:charm/foo then we could use reviews
[16:57] charm-tools still uses bound branches.. which it probably shouldn't.. :-P
[16:57] m_3: the review stuff is easily enforceable by policy
[16:57] and I confirmed, if you log out and log in, it changes your access and secret key
[16:57] working on fixing that now
[16:58] SpamapS: gotcha... can we do this alongside "charmstore" landing?
[16:58] Ryan_Lane: gotcha
[16:59] Ryan_Lane: s/euca-run-instances/euca-ran-instances/ !
[17:00] m_3: the charm store stuff I don't know about.. but if bzr branches are in use, the review stuff is built in
[17:00] sweet
[17:00] SpamapS: cool
[17:01] total "Dude, where's my car?" moment :)
[17:02] m_3: I believe the charm store bits will always push to a personal branch... but I really don't know
[17:02] NO WHAT DOES IT *SAY*
[17:02] ;)
[17:08] the use of bzr in the charm store will be transparent
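A minimal sketch of the merge-proposal review flow SpamapS describes above, as opposed to committing straight to a shared lp:charm/foo branch; lp:charm/foo is just the placeholder name used in the discussion, and the personal branch name below is hypothetical:

    bzr branch lp:charm/foo foo
    cd foo
    # hack, then: bzr commit -m "describe the change"
    bzr push lp:~your-launchpad-id/charm/foo/some-fix
    # propose a merge back into lp:charm/foo, either from the branch page on
    # Launchpad or via the launchpad plugin:
    bzr lp-propose-merge lp:charm/foo

A +1 from someone in charmers on the resulting merge proposal then becomes the review gate before promulgation, which is the workflow m_3 is asking about.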
[17:09] Ryan_Lane, for some reason i get .. Permission denied (publickey).
[17:09] on ssh attempts
[17:10] hmm
[17:11] Ryan_Lane, i had a look from m_3's login shell it all looks normal
[17:11] lemme look at logs
[17:11] Invalid user kapil from 12.70.135.2 ;)
[17:11] RoAkSoAx, never mind
[17:11] Ryan_Lane, yeah.. just saw that
[17:11] doh
[17:11] user fail ;-)
[17:11] :D
[17:12] your novarc is likely broken too
[17:12] lemme fix the problem I'm having, and fix that for you
[17:12] Ryan_Lane, oh? yeah.. i'm unable to connect to the swift storage it seems
[17:12] swift storage?
[17:12] we don't have swift storage
[17:13] but your access and secret keys are bad
[17:14] Ryan_Lane, oh.. is there a s3server (from nova) running?
[17:14] ah
[17:14] only if that's the object store
[17:14] and that would be glance
[17:14] err
[17:14] wait. no
[17:14] glance is just for service images
[17:14] glance is the image store, it layers on top of the object store
[17:14] we have no object store
[17:15] no volume support right now either
[17:15] did you guys need volume support for this?
[17:15] Ryan_Lane, that's problematic for juju, we use the objectstore to distribute charms to the instances.. nova includes a very simple s3server that just stores things in a directory
[17:15] Ryan_Lane, we don't need the volume support
[17:16] it's nice to have, but not required, the object store is
[17:16] they got rid of the objectstore in cactus, I believe
[17:16] ah. wait
[17:16] there's a nova-objectstore package still around
[17:16] gimme a sec
[17:17] Ryan_Lane, it's in the source tree at nova/objectstore/s3server.py
[17:17] yep
[17:17] * hazmat updates his branch
[17:17] I didn't have the package installed
[17:17] it's now on virt1
[17:18] Ryan_Lane, cool.. currently the generated novarc is referencing a dead s3 server url .. do you know what the correct one is/will be?
[17:18] it's correct now
[17:19] I just brought the service up
[17:19] Ryan_Lane, awesome thanks
[17:19] yw
[17:19] lemme know if it has any issues
[17:19] I need to fix your credentials, likely
[17:20] Ryan_Lane, yeah.. that seems to be the remaining issue
[17:20] hazmat: btw, I tried txaws against ceph's RADOS .. worked well except creating buckets.. but I think that may have been a lighttpd fail, not RADOS
[17:21] SpamapS, nice
[17:21] SpamapS, is there a charm for that?
[17:21] ceph that is
[17:21] hazmat: yes, the ceph charm
[17:21] but it's kind of.. in flux. ;)
[17:21] * hazmat checks charm world
[17:22] Should work for a single node, or 3 node cluster. The difficulty is elasticity.. ceph is just growing things to make that easy
[17:22] hazmat: I need to fix the code that keeps changing your credentials, then I'll fix your credentials. heh
[17:22] Ryan_Lane, sounds good
[17:23] SpamapS, fair enough.. i'd prefer gluster for most distributed fs usages now.. i still think of ceph as more on the experimental side
[17:23] ie. only widely deployed by its creating org
[17:24] hazmat: the CEPH guys would agree for the mounted FS case. But their objectstore is apparently already seeing extremely heavy use.
[17:24] SpamapS, outside of dreamhost?
[17:24] hazmat: no
[17:24] :)
[17:24] which has a team of 40 devs to support it ;-)
[17:24] er, I think it's closer to 4
[17:24] oh.. gustavo mentioned they were hiring like crazy for it
[17:25] hiring and having are two different things. :)
[17:25] They've only just recently started treating CEPH as more than an experiment
[17:26] Honestly, with storage, I'm not sure super automatic elasticity is all that awesome of an idea.
[17:26] if your CPU bound thing has a problem coming up or down, oh noes, it goes slower
[17:27] if your I/O bound thing loses data... you are screwed
[17:27] So I may just make the ceph charm deploy ceph and build the config file, but let admins do the work of adding/removing nodes
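For anyone reproducing the nova-objectstore setup from the 17:16-17:19 exchange, this is roughly the shape of it; the package name matches the discussion, but the exact service invocation is from memory and may differ on other releases:

    sudo apt-get install nova-objectstore   # wraps nova/objectstore/s3server.py
    sudo service nova-objectstore restart   # serves an S3-style API, port 3333 by default
    curl -i http://localhost:3333/          # quick check that the endpoint answers

juju only needs this (or any other S3-compatible store) to distribute charms to instances, so the full swift stack is not required.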
[17:31] ah.. charm world doesn't pick it up because it's not a trunk branch
[17:32] we'd have to introduce an extra namespace layer to allow for deploying charm branches
[17:33] no don't do that ;)
[17:33] I forgot I haven't promulgated it yet
[17:33] Because it's changing a lot
[17:34] SpamapS: do you have an environments.yaml entry from openstack? i.e., wanna see the s3-uri for the nova objectstore
[17:35] s3-uri: http://x.x.x.x:3333
[17:35] m_3, the one in my home dir should be fine
[17:35] SpamapS: does it require an additional path like /services/Eucalyptus does?
[17:35] on the gateway
[17:35] no path specified at all in mine
[17:35] gotcha... cool... just checking we're trying to call the right thing
[17:35] it
[17:36] m_3, it's a credential problem
[17:36] hazmat: right
[17:39] mo credentials, mo problems
[17:42] man, I need a giant RAM disk to do local deploys on
[17:43] SpamapS: dude, local deploys rock!
[17:43] Yeah, but now that I don't have to wait for amazon..
[17:43] but yeah, dualcore/8G on the laptop doesn't cut it
[17:43] I have to wait for my local disk
[17:44] I know
[17:44] it's always something
[17:44] If I had 8G I could make 4G for deploys :)
[17:44] wanna replace my cd-rom with SSD
[17:44] I've been shopping for SSD's for that very reason.
[17:44] there's a kit for that in the mbp
[17:45] Maybe I should enable write caching on my laptop disk
[17:46] that actually helps quite a bit. :)
[17:47] just have to remember to turn it off after deploy ;)
[17:48] http://virt1.wikimedia.org:3333/
[17:48] shows that we can create the bucket at least
[17:49] SpamapS, it doesn't take that long for local deploys.. it's mostly just the package install, ssd makes it rock... i'm still waiting for a native 7mm ssd to come on the market
[17:49] hazmat: but then you're shortening your SSD's life
[17:50] SpamapS, not concerned, i got it to use it ;-)
[17:50] http://pastebin.com/EUaafbPA
[17:51] Hrm.. I dunno. at $500 for a big one.. I want to use it for more than a year. :-P
[17:51] grabbing food for a sec
[17:51] * SpamapS is now wondering if his assumptions of endurance are false tho
[17:51] SpamapS, the wear leveling is pretty good on the devices, and the sandforce controllers are pretty awesome about dedup
[17:53] ok, so most can sustain 20GB/day for 5 years
[17:54] * SpamapS is ordering now.. F'it
[17:57] SpamapS, just make sure you're compatible size-wise with your laptop
[17:57] ta
[17:58] hmm. the only thing that I see so far that'll deliver a 403 is if the object already exists
[17:59] hazmat: yeah I've been looking into it
[17:59] yeah. it'll only give a 403 if the file or directory exists
[17:59] did you guys do a test write into the file you need to create with juju?
[17:59] err. object
[18:00] odd. it's giving a 405...
[18:00] I don't even see that in the code
[18:01] Ryan_Lane, i'm still getting errors on auth.. 405 might be coming from pylons
[18:04] hazmat: btw, ceph is now the second time where I have 'relation-set' the base64 of a file on disk.. I wonder if we can't get a 'relation-file /etc/hosts' to make sharing files easier
[18:06] hazmat: errors on auth where?
[18:06] to nova?
[18:06] Ryan_Lane, to s3server
[18:06] ah. right
[18:06] yeah
[18:06] * hazmat checks if novarc has changed
[18:07] oh. wait
[18:07] let me fix your secret and access keys
[18:08] * m_3 looks for variation of s3cmd that'll work with nova object-store
[18:08] Ryan_Lane, cool thanks
[18:08] m_3: it works fine but you have to skip verifying your settings and manually edit the config
[18:09] m_3: ~/.s3cfg .. it's obvious where to change the hostnames
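A sketch of the relevant ~/.s3cfg lines for pointing s3cmd at the nova objectstore, using the host and port from the discussion above; run s3cmd --configure first, answer no to the connection test, then edit the file. The key names below are standard s3cmd options, but the values are assumptions for this particular setup:

    access_key = <EC2_ACCESS_KEY from your novarc>
    secret_key = <EC2_SECRET_KEY from your novarc>
    host_base = virt1.wikimedia.org:3333
    host_bucket = virt1.wikimedia.org:3333   # path-style buckets, no virtual hosting
    use_https = False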
[18:09] SpamapS: thanks
[18:09] m_3: note that txaws comes with some handy commands for using S3
[18:10] hazmat: heh. seems your keys are fine
[18:10] pylons giving back 405?
[18:10] hmm
[18:10] I dunno what that is
[18:12] http://pastebin.com/8PrAvyCf
[18:13] * hazmat tries with s3cmd
[18:13] m_3, it's a hidden option only in the config file of s3cmd
[18:15] Ryan_Lane, it seems to be fine manually... i'll dig into debugging it
[18:15] oh
[18:15] that's the problem
[18:15] ?
[18:15] me and m_3 are probably using the same bucket ;-)
[18:16] oh no way
[18:16] hmm
[18:16] nope that's not it
[18:16] * hazmat goes back to drawing board
[18:18] it's possible this is missing function calls you need
[18:18] this is the super simple, kind of shitty object store
[18:18] we're using that one in canonistack
[18:18] it seems it doesn't even have authentication
[18:19] Its shittiness is worked around in txaws and juju quite a bit ;)
[18:19] heh
[18:19] it should have auth
[18:19] it's used for image uploads
[18:19] glance is used for image uploads
[18:19] and glance doesn't have auth either ;)
[18:20] Hmmmmm.. right.. I thought somebody told me that this was used to facilitate those image uploads
[18:20] we are using cactus
[18:20] Oh
[18:20] snap
[18:20] and anything works?
[18:20] you know Diablo's out.. ;)
[18:20] cactus is perfectly stable for us ;)
[18:20] We had to fix quite a few diablo bugs for juju to work
[18:20] I wasn't comfortable upgrading a few days before the hackathon ;)
[18:20] Yeah
[18:20] it's non-trivial
[18:21] Big component of the essex design summit was how to do upgrades
[18:21] vast understatement?
[18:22] the objectstore hasn't changed at all I believe
[18:22] it is deprecated
[18:23] ah.. it's cactus
[18:24] hazmat: I wonder if the nova bugs with groups were all introduced during diablo
[18:24] swift on an instance and we can just use that s3-uri?
[18:24] if they were in cactus too.. there will be problems. :p
[18:25] bugs with groups?
[18:25] Yeah nova couldn't handle the group management that juju does for firewall management
[18:25] ah
[18:25] security groups you mean?
[18:26] instances would fail to start because of at least 1 bug
[18:26] yeah
[18:26] ah
[18:26] I haven't seen any security group issues yet
[18:27] they may have been introduced in diablo
[18:27] Ryan_Lane, juju creates a security group that allows intra-group traffic, it broke diablo for a while
[18:27] Ryan_Lane, there are a couple of bugs fixed wrt juju usage in the diablo release
[18:28] ah. ok
[18:28] i'm going to see how far i get with it
[18:28] * Ryan_Lane nods
[18:42] <_mup_> Bug #875903 was filed: Zookeeper errors in local provider cause strange status view and possibly broken topology < https://launchpad.net/bugs/875903 >
[18:44] so the 401 unauthorized is actually from trying to describe the security groups
[18:45] SpamapS, re that bug, did you hibernate?
[18:46] SpamapS, the session expiration is fatal atm, and a hibernate will trigger it, since there isn't any heartbeat and then the clock advances
[18:47] status should be verifying against the agent presence nodes
[18:48] hazmat: no, just been fiddling with it for 1 or 2 hours straight
[18:48] else it's just reporting against recorded state
[18:48] hazmat: I saw this one before too
[18:48] hazmat: I bootstrapped quite recently
[18:49] anyway, time for weekend stuff
[18:49] SpamapS, cheers
[18:52] SpamapS: later man... thanks!
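The security group checks that follow can be reproduced by hand with euca2ools; this is only a rough approximation of the intra-group setup described at 18:27, with a placeholder group name and account id rather than the groups juju actually creates:

    euca-add-group -d "juju-style test group" juju-test
    euca-authorize -P tcp -p 22 -s 0.0.0.0/0 juju-test        # ssh from anywhere
    # all intra-group traffic, the kind of rule that tripped up early diablo builds:
    euca-authorize -o juju-test -u <account-id> juju-test
    euca-describe-groups juju-test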
[18:53] hazmat: euca-describe-groups is authorized from the cli
[18:53] m_3, yeah.. saw that
[18:56] could also euca-authorize -P tcp -p 22 -s 0.0.0.0/0 junk-group
=== Ryan_Lane1 is now known as Ryan_lane
=== Ryan_lane is now known as Ryan_Lane
[20:39] will juju work with openstack?
[20:47] backburner: yes, it's been tested with diablo I think
[21:11] There are nova-cloud-controller and nova-compute charms, I believe
[21:11] I haven
[21:11] I haven't been able to find information or documentation on deploying Ubuntu Cloud Infrastructure and OpenStack yet though
[21:11] In Oneiric, I mean
[21:26] enmand_: juju can deploy openstack using those charms... it can also deploy services _on_ openstack
[23:19] m_3, yeah, I found the openstack charms and all, and I read some of the deploying on OpenStack stuff
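For anyone deploying services _on_ an OpenStack cloud as discussed above, a sketch of what the environments.yaml entry for juju's EC2-compatible provider looks like; apart from s3-uri, which appears earlier in the log, the key names and values here are from memory and worth checking against the juju documentation for the release in use:

    environments:
      openstack:
        type: ec2
        access-key: <EC2_ACCESS_KEY from your novarc>
        secret-key: <EC2_SECRET_KEY from your novarc>
        ec2-uri: http://x.x.x.x:8773/services/Cloud
        s3-uri: http://x.x.x.x:3333
        control-bucket: <a unique bucket name>
        admin-secret: <a long random string>
        default-series: oneiric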