[00:21] weblife: pretty sure you need the mongo from raring
[00:22] or one from our backports ppa
[00:22] to the best of my knowledge 10gen do not ship an ssl enabled version
[00:25] davecheney: Correct! I just spent the last half hour figuring this out, wasn't documented too well. But apparently no packages are except for the enterprise version. (bye bye mongodb 2.6) Although, I do think I see how I can, but that will be for a later date.
[00:26] weblife: mongodb 2.4 + ssl is available in ppa:juju/stable if you are running < raring
[00:28] davecheney: I just removed the 10gen package. The juju-local package pulls from there (I believe)
[00:32] weblife: i cannot confirm that
[00:32] it would only do so if there was a conflict with the name of the package
[00:44] arg
[00:45] weblife: i have no experience with the 10gen provided package, so much so that I didn't even know it existed
[00:46] getting error on 'sudo juju bootstrap': error: net: no such interface.
[00:46] weblife: have you renamed lxcbr0 to something else?
[00:46] davecheney: :) I find it funny mongodb recommends that package over the apt-get package even though it's not the current stable
[00:47] weblife: Sod's law says that 90% of documentation is out of date
[00:47] weblife: 10gen might change their mind when 14.04 is out
[00:47] davecheney: no... Think it was because of my initial install conflicts with the 10gen package
[00:47] 2.4 hasn't been backported to 12.04
[00:48] that makes sense then, as to why they do
[00:56] darn. tried uninstalling all the packages and re-installing. but still the same error. Need a break from this madness. Maybe I will reboot and everything will work like magic. :) hahahaha (like this is windows)
[01:00] if the error is about interfaces
[01:00] it is unrelated to mongodb
[01:00] 19:46 < weblife> getting error on 'sudo juju bootstrap': error: net: no such interface.
[01:00] maybe because I'm on Saucy
[01:00] 19:46 < davecheney> weblife: have you renamed lxcbr0 to something else?
[01:01] davecheney: No, never touched it. Just let juju-local get it going
[01:01] ifconfig -a
[01:01] do you have lxcbr0?
[01:01] https://bugs.launchpad.net/juju-core/+bug/1206959
[01:01] <_mup_> Bug #1206959: error from no installed lxc is obscure
[01:05] Oh yeah, that's strange.
[01:05] I run lxc and it says it's not installed. Then I try and install it and it says it's already the newest version
[01:07] lxc may not be a command
[01:07] dpkg -L lxc
[01:07] juju-local installed it and started it: lxc start/running - lxc-net stop/pre-start, process 17104. But it isn't showing in ifconfig
[01:07] ifconfig -a
[01:08] lxc-net should be started for the bridge to be created, right?
[01:10] yeah, it's acting funky on me. I am going to switch kernels real quick. I am using a special version to fix my network interface because I am on a surface pro.
[01:13] weblife: mate, i think you're doing things which are too advanced for juju. We know there are some limitations on the environment the local provider expects to run in, and I think you're probably going to trip over those.
[01:13] Hey hey. It was my kernel
[01:13] how much memory does that surface pro have? how much memory will mongo take?
[01:14] That sucks, makes connecting to the web a pain without it.
[01:14] davecheney: is 1.14 making its way to precise?
[01:14] but you can connect to irc fine? :)
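A quick way to run the checks davecheney walks through above, in one place (a sketch; it assumes the default bridge name lxcbr0 that the local provider expects):

    # is the lxc package actually installed, and does it ship files?
    dpkg -L lxc
    # is the bridge present? juju's local provider expects lxcbr0
    ifconfig -a | grep lxcbr0
    # lxc-net is the job that creates the bridge
    sudo service lxc-net restart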
[01:14] kurt_: it will be in ppa:juju/stable
[01:14] I have the 128 GB model, only 30 GB reserved for my ubuntu, but a 60 GB SD for most storage
[01:15] getting things into the precise main archives is a very arduous process
[01:16] juju probably more than most, converting from python to go doesn't easily fit in the SRU guidelines :) hehe
[01:16] davecheney: sorry, I'm not fully aware of the release process yet.
[01:16] kurt_: me too
[01:16] I really only care that it makes its way to precise
[01:16] the most expedient way is ppa:juju/stable for, well, stable
[01:16] and ppa:juju/devel for devel
[01:16] kurt_: i recommend you address your precise question to the mailing list
[01:17] i honestly don't have the answers for you on that one
[01:17] sarnold: yeah, once I connect it's no problem for the most part. Just have to deal with disconnects/reconnects too often. Which does actually affect irc a little, especially if someone sends me a msg in that small period
[01:17] and multiple net interfaces that I need to shut down
[01:17] weblife: aha :)
[01:18] davecheney: ok. I'm trying hard to wrangle all the bits I need to successfully deploy openstack with gui with juju, etc. It's been a major challenge
[01:18] weblife: also, https://bugs.launchpad.net/juju-core/+bug/1216775
[01:18] <_mup_> Bug #1216775: cmd/juju: local provider doesn't give a clear explanation when lxc is not configured correctly
[01:18] kurt_: i'm sorry to hear that
[01:19] obviously we don't intend it to be that hard
[01:19] i'd like to hear more about the background of your problem
[01:19] especially to find out where our documentation led you astray
[01:19] davecheney: it is what it is. It's like a snapshot in time with the half-life of strontium 238
[01:19] davecheney: when I figure out the exact reason I will submit a bug report. Just glad it's working now.
[01:21] this is awesome though. No more long waits on aws to set up
[01:25] kurt_: we're going to cut a new stable 1.14 soon (i.e. less than a week, hopefully less than 48 hours)
[01:25] which we recommend everyone upgrade to
[01:25] it'll be 1.14.0
[01:30] davecheney: ok. hopefully that's on precise. :D
[01:30] kurt_: can you please say that again
[01:30] davecheney: I'm looking forward to getting the debug-log back.
[01:31] kurt_: me too
[01:34] davecheney: I opened another bug too
[01:34] actually 3 today
[01:34] kurt_: yup
[01:34] thank you
[01:34] we can't fix problems we don't know about
[01:35] i appreciate you eating our dogfood
[02:41] hey guys, I am trying to deploy an openstack HA with the juju guide posted on the ubuntu website. I am up to deploying the mysql instances, but the virtual IP never comes online and when i run a juju debug-log i get this line about corosync not configuring:
[02:41] 2013-08-29 15:20:02,364 unit:mysql-hacluster/3: unit.hook.api INFO: Unable to configure corosync right now, bailing
[02:41] does anyone have any ideas?
[02:41] hmm, which guide is saying to use juju 0.7?
[02:42] the ubuntu HA
[02:42] just a sec, I'll grab the url
[02:43] https://wiki.ubuntu.com/ServerTeam/OpenStackHA
[02:43] or have the charms moved on to the go implementation of juju now?
[02:44] I think it comes through from jamespage's opus
[02:44] zradmin: you can use any IP address. Try deploying openstack in a non-HA scenario
[02:45] the vip will be ignored
[02:46] zradmin: try using juju 1.12
[02:46] ok
[02:47] thanks
[02:47] zradmin: np
[03:26] hazmat, fyi hitting some issues with darwin + juju-core + MAAS today. gonna look closer at it tomorrow, but i periodically fail to reset the environment. in case you have any ideas in the meantime: http://paste.ubuntu.com/6042664/
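Stepping back to the PPA discussion above, getting the stable juju onto a precise box looks roughly like this (a sketch; the juju-core package name is assumed from the period):

    sudo add-apt-repository ppa:juju/stable   # or ppa:juju/devel for devel
    sudo apt-get update
    sudo apt-get install juju-core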
[03:27] adam_g, hmm.. that looks like a maas or api binding issue wrt the oauth token
[03:27] what's even more strange is that it's coming back on the cli vs in the provisioner
[03:29] wonder if it's the same issue as https://bugs.launchpad.net/juju-core/+bug/1215670
[03:29] <_mup_> Bug #1215670: After exhausing OpenStack quotas, juju status output polluted
[03:32] adam_g, not exactly, that's a recorded error async from the provisioning agent recorded into state. deploy is the initiation of the async command, records to state; provider output recorded to state shouldn't be on the cli till the provisioning agent has a chance to process the state change.
[03:32] hmm.. it could be some sort of provider credential validation early on that's failing.
[03:33] adam_g, how reproducible is it?
[03:34] adam_g, there's some sensitivity with the token and clock skew between the client and the server/maas afaicr
[03:34] hazmat, seems to be 1/2 of the deployments i kick off. FWIW, MAAS has always been a bit unreliable
[03:35] it's just that the previous deployer / darwin's py env is more tolerant
[03:35] adam_g, fair enough, but really we should be filing bugs on these rather than papering them over
[03:36] but yeah.. we can do something similar here if necessary, although ideally via a flag, 'cause i'd like to get them fixed if they occur.
[03:37] hazmat, yeah, agreed.
=== tasdomas_afk is now known as tasdomas
[07:13] jamespage, ping
=== melmoth_ is now known as melmoth
[10:06] raywang, hello
[10:07] hi jamespage, i saw you wrote the ceph charms, just want to make sure, can the OSD daemon go on the system disk, e.g. /dev/sda?
[10:07] raywang, nope
[10:07] the ceph-mon daemons will run on the system disk
[10:07] currently the OSDs require a dedicated disk each
[10:08] raywang, I've just proposed some changes to allow you to run OSDs on the system disk - but they are still pending review
[10:09] raywang, https://code.launchpad.net/~james-page/charms/precise/ceph/dir-support/+merge/182607
[10:09] and
[10:09] https://code.launchpad.net/~james-page/charms/precise/ceph-osd/dir-support/+merge/182608
[10:09] you can provide configuration such as "osd-devices: /mnt/osd-local" for example
[10:10] and the charm will use that for the OSD filesystem.
[10:10] raywang, specifying osd-devices: /dev/sda will just not do anything - the charm will recognise that this disk is in use and ignore it
[10:11] jamespage, weird, i only have one disk (/dev/sda), and i set osd-devices: /dev/sdb; it fails, but it can be started successfully
[10:11] unit.hook.api@INFO: Path /dev/sdb does not exist - bailing
[10:11] yeah - osd-devices is a whitelist
[10:11] unitworkflowstate: transition configure (started -> started
[10:11] if any of the devices is already in use it just ignores it - same for if the device is not found
[10:12] raywang, so you probably have a ceph cluster right now with no actual storage - just ceph MONs :-)
[10:13] jamespage, it won't report an error without actual storage?
[10:13] raywang, no
[10:14] raywang, running the ceph charm without storage does actually make sense if you are using dedicated servers for MONs - but only for large clusters!
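For reference, a minimal sketch of the osd-devices whitelist discussed above as it might appear in a deployment config (the paths are the ones from the log; directory paths only work with the pending dir-support branches linked above):

    ceph:
      # one dedicated block device per OSD; in-use or missing devices are ignored
      osd-devices: /dev/sdb
    ceph-osd:
      # with the dir-support branches, a directory on the system disk works too
      osd-devices: /mnt/osd-local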
[10:14] jamespage, is there any problem with no actual storage?
[10:14] raywang, if you log in to one of the nodes and do "sudo ceph -s" it should tell you if the cluster is running
[10:15] ok
[10:15] raywang, well it won't store anything until you have added some nodes with storage
[10:15] raywang, the ceph-osd charm can do that as well
[10:15] 2013-08-30 06:15:13.030434 7f16ac7d1780 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication
[10:15] 2013-08-30 06:15:13.030476 7f16ac7d1780 -1 ceph_tool_common_init failed.
[10:15] raywang, I suspect that you are not bootstrapped - how many service units are you running?
[10:16] jamespage, for mon x 3, osd x 3
[10:16] hmm
[10:16] raywang, can you check for running ceph-mon daemons as well please
[10:18] jamespage, well, I run "sudo service ceph-all start", and only two processes are running
[10:18] 3793 ? Ssl 0:08 /usr/bin/python -m juju.agents.unit --nodaemon --logfile /var/lib/juju/units/ceph-1/charm.log --session-file /var/run/juju/unit-ceph-1-agent.zksession
[10:18] 7064 ? S 0:00 /usr/bin/python /var/lib/juju/units/ceph-1/charm/hooks/mon-relation-joined
[10:18] raywang, which ones
[10:19] ah - right - so I see mon-relation-joined still running
[10:19] that's the peer relation between ceph units that performs the bootstrap
[10:19] jamespage, but from juju status, everything is started
[10:19] any indication as to what it's doing?
[10:19] jamespage, well, how do i know? :)
[10:20] raywang, look at the debug-log
[10:20] juju debug-log
[10:20] ok
[10:20] or look at the unit log on the service unit directly - it's somewhere under /var/lib/juju (for the version of juju you are using)
[10:20] raywang, the ceph charms have been fully tested under the new version btw
[10:21] jamespage, i don't think there is any output about ceph now
[10:21] raywang, can you pastebin me "ps -aef" please
[10:21] ok
[10:24] jamespage, http://pastebin.ubuntu.com/6043572/
[10:25] raywang, hmm - i suspect the charm hook is blocking, waiting for the ceph-mons to start up
[10:25] raywang, can you pastebin /etc/ceph/ceph.conf and check in /var/log/ceph for errors please
[10:25] something has stopped ceph-mon from starting up correctly I suspect
[10:26] jamespage, but from the juju status, everything is started
[10:26] jamespage, is it a fake started?
[10:26] raywang, no - it just means that no hooks have failed
[10:26] raywang, some are still trying to run!
[10:26] ok
[10:26] I should probably include a timeout on the poll for bootstrap - right now the charm will wait for ever
[10:27] jamespage, ceph.conf -> http://pastebin.ubuntu.com/6043577/
[10:27] hah
[10:28] jamespage, but what's weird is, i add-relation ceph glance, and I can glance image-create without real storage...
[10:28] ceph.conf looks OK
[10:28] raywang, yeah - remember hooks can fire at any point in time
[10:29] raywang, as ceph is not bootstrapped yet it won't give out information to glance
[10:29] so glance is still using local storage right now
[10:30] jamespage, there is just one line in /var/log/ceph/ceph-mon.OS01-07.log
[10:30] what does it say?
[10:30] 2013-08-30 03:33:29.453655 7ff412f31780 0 store(/var/lib/ceph/mon/ceph-OS01-07) created monfs at /var/lib/ceph/mon/ceph-OS01-07 for OS01-07
[10:30] hmm
[10:30] check in /var/log/upstart as well
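Collecting the checks jamespage suggests above into one place (a sketch; the log paths are the ones from this exchange):

    # is the cluster bootstrapped and healthy? run on a ceph node
    sudo ceph -s
    # are the mon daemons actually running, or is a hook still blocking?
    ps -aef | grep ceph-mon
    # charm-side view, then daemon-side logs
    juju debug-log
    tail /var/log/ceph/ceph-mon.*.log
    sudo zgrep ceph /var/log/upstart/*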
[10:31] jamespage, if glance doesn't get information from ceph, it will use local storage as the fallback option?
[10:31] raywang, yes
[10:31] jamespage, that's great :)
[10:31] and beware - it won't transfer images when ceph does get related!
[10:33] jamespage, you mean even if I register the image in glance, i also cannot boot VMs?
[10:33] I mean glance won't transfer images that got dumped on local storage into ceph once the relation is fully configured
[10:34] jamespage, sudo zgrep ceph /var/log/upstart/ returns nothing
[10:34] hmm
[10:34] ok, got it
[10:34] raywang, which ceph version are you using?
[10:34] but as long as glance gets related to ceph, it will pass the image to ceph later on?
[10:35] jamespage, from grizzly cloud archive
[10:35] raywang, if you upload images to glance after it's been configured to use ceph - yes
[10:35] 0.56.6-0ubuntu1~cloud0
[10:35] raywang, should be OK - I was testing that yesterday
[10:35] jamespage, ceph not starting might be because i only have one disk?
[10:36] raywang, no - the ceph-mons should start up without extra disks
[10:36] something is blocking it
[10:36] but i use the local.yaml from openstackHA, and it tells ceph to start the osd daemon too.
[10:37] raywang, how much memory does your service unit have?
[10:37] just trying to think what is blocking this
[10:37] jamespage, 196G
[10:37] blimey
[10:37] should be OK then
[10:37] and E7 CPU
[10:38] can you pastebin me the relevant bits from your local.yaml file please
[10:38] I'll try to reproduce
[10:38] ok
[10:38] thanks
[10:40] jamespage, well thank you -> http://pastebin.ubuntu.com/6043604/
[10:40] jamespage, the reason why I have only one disk is because the HP DL580 G7 must have RAID for disks; i only have two disks, so there is only one disk available to the system
[10:44] raywang, yeah - that's OK for MONs
[10:44] raywang, upstream best practice is to have a dedicated device for each OSD
[10:45] jamespage, so one disk is not working with the ceph-osd charm, right?
[10:45] raywang, not right now - you would need to use the branches I linked to above
[10:46] raywang, general rule is 1 core/1GB/1OSD/1Disk
[10:46] roughly
[10:46] ok
[10:47] jamespage, i'm sorry, which branch did you link above? I can't find it
[10:47] https://code.launchpad.net/~james-page/charms/precise/ceph-osd/dir-support/+merge/182608
[10:47] https://code.launchpad.net/~james-page/charms/precise/ceph/dir-support/+merge/182607
[10:47] ah i see
[10:49] raywang, odd
[10:49] first line of my ceph-mon.*.log is
[10:49] 2013-08-30 10:46:14.333465 7fb27b326780 0 ceph version 0.56.6 (95a0bda7f007a33b0dc7adf4b330778fa1e5d70c), process ceph-mon, pid 17563
[10:49] second line is:
[10:49] 2013-08-30 10:46:14.336048 7fb27b326780 0 store(/var/lib/ceph/mon/ceph-juju-serverstack-machine-1) created monfs at /var/lib/ceph/mon/ceph-juju-serverstack-machine-1 for juju-serverstack-machine-1
[10:50] anything wrong?
[10:50] raywang, no - started OK on my test rig
[10:51] jamespage, maybe I need to add an extra disk and re-deploy ceph and ceph-osd?
[10:51] raywang, I'm just wondering whether it's something to do with the name of your host
[10:52] server is OS01-07 right?
[10:52] jamespage, OS01-07 is one of the ceph-mon
[11:42] sidnei, ah - I just bumped into your backports of juju-deployer for the stable PPA
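The glance caveat above, restated as commands (a sketch; the key point is ordering - images uploaded before the relation settles stay on glance's local storage and are never migrated into ceph):

    juju add-relation ceph glance
    juju debug-log   # watch the relation hooks settle before uploading images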
[12:05] hi folks, is this expected? http://paste.ubuntu.com/6043878/ (i.e. that setting a config value to "" doesn't work)
[12:06] mthaddon: which version of juju?
[12:06] marcoceppi: 1.13.2-4~1703~raring1
[12:10] * mthaddon has to grab some food, will bbiab
[12:11] mthaddon: So, I think there was a change committed for this, I thought it was in 1.13, let me find the mailing list
[12:19] marcoceppi: cool, thanks
[12:22] marcoceppi: isn't that the change that setting "" actually sets "" vs the default?
[12:22] rick_h: yeah, I can't find the mailing list post about it
[12:23] I know I saw talk about it in #juju-gui, but don't recall seeing a mailing list post. Maybe I missed it
[12:23] rick_h: somewhere, someone discussed it
[12:25] mthaddon: I think the end result is it's a known issue and should be fixed soon
[12:25] not sure what to do about it until then
[12:25] marcoceppi: I'll file a bug so we can track it at least - thanks
[12:27] https://bugs.launchpad.net/juju-core/+bug/1218877
[12:27] <_mup_> Bug #1218877: Can't set config options to empty values
[12:34] mthaddon, rick_h, marcoceppi: we already have https://bugs.launchpad.net/juju-core/+bug/1194945 for this missing feature
[12:34] <_mup_> Bug #1194945: juju set is overloaded
[12:35] TheMue: thanks, I couldn't quite find that bug
[12:35] ah, thanks TheMue
[12:35] cool, thx, will merge the two
[12:35] a new unset command has already been merged and will now be released
[12:36] next step will be the setting of string options to empty strings. but we have to take care with the API, which still unsets the option
[12:36] TheMue: I've observed the same behaviour when passing a yaml file as the config - will your work cover that too?
[12:37] mthaddon: have to discuss with fwereade, but imho it should
[12:38] TheMue: do you have an idea of the timeframe for that, as it's affecting a service we want to bring online in production and I'd like to know whether to try and work around it another way or not
[12:38] TheMue: AIUI your charm package change does do that, right? the problem is that it changes public api behaviour and *that* needs to be worked around
[12:39] fwereade: exactly
[12:40] mthaddon: I think it'll be in next week
[12:40] ok, thanks
[13:00] evilnickveitch: I'm adding a page to the Get Started section at the bottom of the getting started page, but I don't know what class to give the link... I can't find a pattern - for example, local is page-item-20 and openstack is page-item-3596
[13:01] (at the very bottom of https://juju.ubuntu.com/docs/getting-started.html)
[13:02] natefinch: where?
[13:02] this stuff: http://pastebin.ubuntu.com/6044074/
[13:02] natefinch: OH, in the footer. If you're going to edit the
[13:03] natefinch: don't edit the footer directly in the boilerplate
[13:03] also, that's just auto-generated wordpress crap
[13:03] oh
[13:03] yeah, we sync the header and footer with the wordpress install using a build script that replaces the boilerplate in all the pages
[13:04] ahh, well that's good. Certainly better than copy & paste by hand :)
[13:04] how do I add a section to that list, then?
[13:04] natefinch: you need WP admin access
[13:04] we can add it in later
[13:05] marcoceppi: that's cool
[13:05] natefinch: just add it to the doc sprint spreadsheet and I'll make sure it gets done
[13:05] marcoceppi: will do
[13:08] can we not version the boilerplate? is having a separate build step that painful?
[13:08] mgz: boilerplate is versioned
[13:09] sorry, as in, can we please not include the boilerplate .tpl in all the doc files we actually care about?
[13:09] mgz: template/ directory has the boilerplate templates
[13:09] mgz: eventually, yes, but for now it's not a focus of this sprint IIRC
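To make the bug above concrete, the failing case looks roughly like this (service and option names are hypothetical):

    # expected to set the option to an empty string; at this point juju
    # still treats it as "revert to the default" (bugs #1218877 / #1194945)
    juju set mysql some-option=""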
[13:10] morning!
[13:10] evilnickveitch: where's the hangout?
[13:10] mgz: we really _really_ need content
[13:10] or should I just pick a todo item?
[13:11] jcastro: the hangout was about an hour ago ;)
[13:11] marcoceppi: I know, I know :)
[13:11] oh, I thought we would just be hanging out all day and people coming in and out?
[13:12] jcastro: oh, I'm game for that
[13:12] jcastro: that too, I guess we just use the one in the calendar?
[13:13] * marcoceppi sits in the hangout
[13:13] got it
[13:14] https://plus.google.com/hangouts/_/552b1b180f1908b8a7ba7f65402ab654f5b73847?authuser=1
[13:58] heya folks
[13:58] am having some trouble with juju-deployer hanging after deploying services
[13:59] using an lxc env
[14:01] running with -v, I get a string of "Delta unit: u1-psearch-fe/0 change:installed" type messages, and then it hangs
[14:02] if I ctrl-c, the traceback looks like: http://paste.ubuntu.com/6044247/
[14:02] have tried this on 2 different hosts with 2 different configs
[14:03] bloodearnest: can you paste what juju-deployer command / flags you're using?
[14:03] marcoceppi, sure
[14:08] arosales, jcastro, hi. fwiw I'm going to be proposing to mramm (next week probably?) that the problem-driven services journey that I advocated in yesterday's "awesome first 30 min" vUDS session be hanging off of jujucharms.com - something like discourse.jujucharms.com and openstack.jujucharms.com and so on. Maybe I'm crazy, but I wanted to let you know in case you or anyone else wanted to try to stop or redirect me off early. :-) I don't think that should stop any plans you guys are doing, if plans are already being made. We can move content easily enough, if we all end up having enough of a consensus to do so.
[14:08] Also, I'm hoping Makyo will be available to start on the doc sprint in an hour or two, after his timezone comes around.
[14:10] marcoceppi, juju-deployer -v -s 5 -W -L -c $DEFAULT -c $CONFIG -c $SECRETS $TARGET
[14:11] Can't destroy my environment
[14:11] error: gomaasapi: got error back from server: 409 CONFLICT
[14:11] even after deleting node from maas
[14:12] we need a hammer mode for destroy-environment, i.e. a -f switch :D
[14:12] gary_poster, if you put together a doc I would be happy to review it
[14:12] cool arosales
[14:12] gary_poster, +1 on creative thinking on any idea to improve the first 30 min ux
[14:12] arosales, heh, ack
[14:13] gary_poster, also good to hear about Makyo and the docs sprint
[14:13] One more coffee and I'll be good.
[14:14] :-) cool
[14:21] evilnickveitch, https://code.launchpad.net/~a.rosales/juju-core/correct-exposing-link/+merge/183186
[14:22] https://code.launchpad.net/~jorge/juju-core/add-to/+merge/183190
[14:22] blam!
[14:22] jcastro, arosales yay! you guys are the best with all your exciting additions
[14:22] * arosales starting with the low hanging fruit first
=== freeflying is now known as freeflying_away
=== tasdomas is now known as tasdomas_afk
[14:42] evilnickveitch, ISTM that there's so much overlap between authors-charm-anatomy and authors-charms-in-action that they should basically be merged, would you concur?
[14:45] jcastro, marcoceppi: ^?
[14:47] fwereade, there is a fair amount, yes...
[14:48] evilnickveitch, jcastro, marcoceppi: possibly into anatomy (the files in the charm, and what hooks are run when) and hook-environment (the hook tools and env vars and other relevant bits)?
[14:49] fwereade_: I feel like that might be a lot of content. Hook environment might be strong enough to stand on its own
[14:49] but to the former, yes I think files of a charm and hook execution plan make sense together
[14:50] fwereade_, yeah, I am restructuring some of the docs anyway. I think the anatomy should contain the descriptive bits, and we should have an additional page for further hook-related bits
[14:50] * marcoceppi 's opinion
[14:50] marcoceppi, sorry, yeah, there was an implied leading "and then broken down again: " that I somehow forgot to type
[14:54] evilnickveitch, yeah, maybe just a listing of the valid ones in the first doc (along with the rest of the anatomy) and a link to the more detailed treatment
[14:55] marcoceppi, evilnickveitch: ok, I'll set about breaking those two into three, thanks
[14:55] ok, cool
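For readers skimming, bloodearnest's juju-deployer invocation above, lightly annotated (flag meanings beyond -v and -c are left as used; the $VARS are shell variables from their setup):

    # -v: verbose (the "Delta unit: ..." messages mentioned above)
    # each -c stacks another deployment config on top of the previous one
    # $TARGET names the deployment to run from those configs
    juju-deployer -v -s 5 -W -L -c $DEFAULT -c $CONFIG -c $SECRETS $TARGET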
[15:18] evilnickveitch: for some reason this didn't find its way into the footer template
[15:18] evilnickveitch: for jorge's page, add to the bottom of the page right above
[15:19] evilnickveitch: going to keep digging into the style issues
[15:39] evilnickveitch: fix for funky css colors. https://code.launchpad.net/~marcoceppi/juju-core/fix-css-prettyprint/+merge/183220
[15:45] evilnickveitch, https://code.launchpad.net/~makyo/juju-core/enable-viewboxing/+merge/180354 re: saving icon SVGs with the viewbox attr.
[15:52] hey TheMue
[15:52] I am writing a nodejs example app page for the docs
[15:52] and I need to link to the local config page in the docs that is currently in progress
[15:52] Makyo, hi - i thought that got merged. let me check
[15:53] what's the filename of the html you will use so I can link it?
[15:55] jcastro: it is config-local.html
[15:55] thanks
[15:56] jcastro: yw
[16:05] morning
[16:09] adam_g, did you end up filing a bug on the maas unauthorized thing?
[16:12] adam_g, didn't see it .. i filed bug 1218997
[16:12] <_mup_> Bug #1218997: maas throws unauthorized sometimes for no reason
=== TheRealMue is now known as TheMue
[16:41] https://code.launchpad.net/~jorge/juju-core/nodeapp-example/+merge/183230
[16:41] BAM!
[16:41] bcsaller: hey, I hear you're the one who volunteered to test docs?
[16:44] utlemming: ^^^
[16:44] sorry, mixed up my Bens
[16:45] jcastro: ack, looking
[16:45] http://pastebin.ubuntu.com/6044806/
[16:45] has a rendered version so you don't have to html yourself
[16:57] m_3: which branch should I MP on for the rack charm?
[17:09] can someone help me get my stupid apache working so I can test my docs?
[17:14] evilnickveitch, jcastro, marcoceppi: little help? I tried symlinking to the docs directory from /var/www and that didn't work, so then I used marco's apache config... but I'm still just getting 403s when I try to view the docs
[17:14] does LXC on precise not work for the local provider?
[17:15] it's stuck in pending and hasn't created the container for unit 1
[17:15] natefinch: every directory between "/" and "htmldocs" needs to have r/x permissions for world
[17:16] natefinch, you can't symlink, you need to configure apache. Hopefully
[17:16] natefinch, does the log have anything useful?
[17:16] utlemming: it does work, but I don't know if it's been tested thoroughly. We used it at a charm school using 1.13.1 on precise without much incident
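marcoceppi's permissions hint above, spelled out (a sketch using natefinch's paths from later in the log; adjust to wherever the docs actually live):

    # apache needs r/x for world on every directory from / down to the docroot
    chmod o+x /home/nate /home/nate/docs
    chmod -R o+rX /home/nate/docs/htmldocs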
[17:22] utlemming, might want to check the machine logs inside of JUJU_HOME
[17:22] marcoceppi: well, we've upgraded from a 403 to an internal server error :)
[17:22] natefinch: what's the apache2 error log look like?
[17:22] hazmat: thanks
[17:23] marcoceppi: /home/nate/docs/htmldocs/.htaccess: Invalid command 'RewriteEngine', perhaps misspelled or defined by a module not included in the server configuration
[17:23] natefinch, nginx ftw ;-)
[17:23] natefinch: ah, this is easy. `sudo a2enmod rewrite`
[17:24] hazmat: heh... I'm sure that has its own problems ;)
[17:24] well, I second that. nginx ftw.
[17:24] but IS uses apache, got to replicate production :)
[17:24] jcastro: Nice. If Mims approves my merge, you will also be able to run a default ("It's working") app or load the most current verified stable version or PPA.
[17:25] marcoceppi, why is there a rewrite rule?
[17:25] hazmat: from old docs to new docs
[17:25] marcoceppi: brilliant, it's working. Thanks a lot.
[17:25] marcoceppi, gotcha
[17:26] natefinch, ok, even simpler.. cd htmldocs && python -m SimpleHTTPServer
[17:26] weblife: which charm?
[17:26] hazmat: actually, that's brilliant, especially for people like me that are just working on the docs temporarily
[17:27] hazmat: +1000
[17:27] jcastro: sorry, was responding to your doc. Node-app
[17:27] ah, awesome!
[17:27] marcoceppi: there should be a doc on how to contribute to the docs, and include that as an easy way to test them
[17:27] natefinch: there is a doc for contributing to the docs, feel free to add that in there :D
[17:28] natefinch: https://juju.ubuntu.com/docs/contributing.html
[17:28] * marcoceppi spies a lot of things that need to be fixed
[17:28] marcoceppi: oh, sweet. EvilNick should have mentioned that ;)
[17:29] https://code.launchpad.net/~web-brandon/charms/precise/node-app/install-fix
[17:34] Pretty sure we're gonna be in Syria shortly. Assad won't sit at a table for talks. Naive Obama.
[17:38] if we're doing politics we should be working on the drupal charm since it's running most of the usg agency sites
[17:39] minus the plone ones like fbi/cia
[17:39] lol
[17:39] * sarnold idly wonders who's brave enough to run on joomla!
[17:47] sarnold, most of them are behind cdns with WAF appliances.
[17:48] hazmat: I hope fairly draconian..
[17:49] sarnold, i haven't seen any joomla, but drupal has become the de facto cms around the usg.. doe, whitehouse, nasa, etc.
[17:50] hazmat: a lot of DC runs on Drupal, sadly
[17:50] or fortunately, depending on your skill set
[17:50] the revenge of php
[17:53] jcastro: renamed
[17:53] <3
[17:53] jcastro: I'll send curtis mail asking to flush web caches
[17:54] lp:charms/rails is lp:~charmers/charms/precise/rails/trunk
[17:54] and neither lp:charms/rack nor lp:~charmers/charms/precise/rack/trunk exist anymore
[17:55] old rails is held at lp:~mark-mims/charms/precise/rails/trunk
[18:21] jcastro: lp:~marcoceppi/charm-tools/python-port
[18:21] python charmtools/getall.py /path/to/where/you/want/them/all
[18:22] not sure who's looking at doc merge proposals right now, but here's mine: https://code.launchpad.net/~natefinch/juju-core/win-getting-started/+merge/183246
[18:32] is there any way to configure which bridge the local provider uses?
[18:43] utlemming: I think that's been a matter of discussion in here the last few days; I think the conclusion was the bridge name is more or less fixed as an assumption somewhere.
[18:43] sarnold: that is unfortunate, as lxc lets you define the bridge in /etc/default/lxc and /etc/lxc/lxc.conf
[18:44] utlemming: yeah, and I could easily see wanting to use hand-managed lxc instances for one set of tasks and juju-managed instances for another set of tasks...
[18:46] sarnold: my use case is the Juju Cloud Image for developers. I was going to create a bridge, bind eth0 to that bridge and then use that bridge as the LXC bridge, and then use host-networking for Virtualbox. It would effectively allow people a developer environment and let them interface with services from the host.
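For the curious, the two files utlemming names above are where lxc itself reads its bridge name; juju's local provider, per the discussion, assumes the default lxcbr0. A quick way to see what lxc is configured with (a sketch):

    # lxc's own bridge setting (juju's local provider expects lxcbr0 regardless)
    grep -i bridge /etc/default/lxc /etc/lxc/lxc.conf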
[18:54] I'm an undergrad looking to include juju in my senior thesis. I was wondering if anyone here knew of a place I could get a small number of servers for testing purposes
[19:01] dalek49: If you're on linux, you can use the local provider to set up services on your local machine using lxc containers. Otherwise... no? If you can cobble together some cheap/free computers off of craigslist, you can install MaaS on them, which treats them just like a cloud, which juju can then use.
[19:02] dalek49: http://maas.ubuntu.com/
[19:06] Yeah, I've been working on local mode
[19:06] I wasn't aware that I could put maas on my own cluster
[19:36] any idea what's going on with the local provider here: http://paste.ubuntu.com/6045349/
[19:36] I can't "juju ssh" or get a juju debug-log, but I can do deployments
[19:39] utlemming: you can't ssh to a machine number in local, this is a known issue
[19:39] marcoceppi: ah, thank you
[19:39] utlemming: you can't debug-log either, because there is no bootstrap node
[19:39] utlemming: you can however `tail -f ~/.juju/local/log/unit*.log`
[19:40] utlemming: all local provider logs are stored in the ~/.juju directory to make them easily accessible
[19:40] utlemming: juju ssh juju-gui/0 instead*
[19:41] sweet, thanks for the help
[19:58] evilnickveitch: charm-tools documentation https://code.launchpad.net/~marcoceppi/juju-core/charm-tools/+merge/183265
=== BradCrittenden is now known as bac
[20:09] I haven't looked into it yet, but I was wondering if it would be possible to pass a tar file on the deployment line? If not, I could see some usefulness in it, fyi...
[20:10] hazmat: how's the deployer docs coming along?
[20:11] weblife: not at the moment
[20:11] jcastro, stuck in a meeting
[20:11] jcastro, already 11m over and then back into docs
[20:11] * jcastro nods
[20:12] weblife: but it's a really interesting idea
[20:13] evilnickveitch: https://code.launchpad.net/~natefinch/juju-core/win-getting-started/+merge/183246
[20:16] jcastro: just tested that my doc is correct
[20:16] TheMue: rock and roll!
[20:16] jcastro: now adding it to the menu as it is a new document, and then pushing it
[20:16] jcastro: yeah
[20:17] ok, a bunch of us landed stuff, nice!
[20:17] I'll go ahead and mark your section as DONE then
[20:17] \o/
[20:26] https://code.launchpad.net/~charmers/juju-core/docs/+merge/183188
[20:26] ^^ this should be deleted, right?
[20:27] stupid connection
[20:29] off to see cafe tauba
[20:47] jcastro: so, proposed
[20:51] so guys, i'll be stepping out. liked this sprint. have a nice weekend
[20:59] marcoceppi: can you repaste me the hangout URL?
[20:59] jcastro: https://plus.google.com/hangouts/_/552b1b180f1908b8a7ba7f65402ab654f5b73847
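marcoceppi's local-provider workarounds above, collected in one place (assuming the environment is named "local", as it is later in the log):

    # no bootstrap node under the local provider, so debug-log won't work;
    # tail the unit logs instead
    tail -f ~/.juju/local/log/unit-*.log
    # ssh by unit name rather than machine number
    juju ssh juju-gui/0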
[21:04] 4 branches in the queue, 15 merges today
[21:04] go docs
[21:29] why does juju depend on mongo?
[21:32] dalek49: juju stores state in mongo
[21:32] is there some documentation on that?
[21:33] dalek49: http://blog.labix.org/2013/06/25/the-heart-of-juju has some of the basic design ideas. The 'state server' is mongo backed.
[21:35] dalek49: I'm not sure if there's real docs on the state server in the full docs at https://juju.ubuntu.com as it's more user docs vs technical 'how it works'
[21:39] dalek49: we don't really cover it too much, as it could change in the future (we've already moved from originally using ZooKeeper)
[21:41] evilnickveitch,
[21:41] https://code.launchpad.net/~a.rosales/juju-core/docs-update-scaling/+merge/183284
[21:41] \o/
[21:42] more merges!
[21:44] I'm getting ssh: connection refused when I run debug-log. Anyone run into this?
[21:44] dalek49: local provider?
[21:45] marcoceppi: yes. It's local
[21:46] dalek49: debug-log doesn't work on local.
[21:46] dalek49: however, all the logs are stored in ~/.juju/local/log/
[21:47] dalek49: where "local" is the name of your local environment
[21:48] marcoceppi: many thanks
[21:49] np, if you want the debug-log experience of having a bunch of text rush at you at once, you can run `tail -f ~/.juju/local/log/unit-*.log`
=== kentb is now known as kentb-out
[23:05] hazmat, new issue https://bugs.launchpad.net/juju-core/+bug/1219116
[23:05] <_mup_> Bug #1219116: juju-deployer fails against juju-core: dial tcp 127.0.0.1:17070: connection refused
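As a footnote on that last bug: 17070 is the juju-core API port the deployer is failing to reach. A quick hand-check for whether anything is listening there (a sketch, not part of the bug report):

    # should connect when a state server is up and serving the API
    nc -vz 127.0.0.1 17070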