[00:21] <davecheney> weblife: pretty sure you need the mongo from raring
[00:22] <davecheney> or one from our backports ppa
[00:22] <davecheney> to the best of my knowledge 10gen do not ship an ssl enabled version
[00:25] <weblife> davecheney: Correct!  I just spent the last half hour figuring this out, wasn't documented too well.  But apparently no packages are except for the enterprise version. (bye bye mongodb 2.6)  Although, I do think I see how I can, but that will be for a later date.
[00:26] <davecheney> weblife: mongodb 2.4 + ssl is available in ppa:juju/stable if you are running < raring
[00:28] <weblife> I just removed the 10gen package. The juju-local package pulls from there(I believe)
[00:28] <weblife> davecheney: I just removed the 10gen package. The juju-local package pulls from there(I believe)
[00:32] <davecheney> weblife: i cannot confirm that
[00:32] <davecheney> it would only do so if there was a conflict with the name of the package
[00:44] <weblife> arg
[00:45] <davecheney> weblife: i have no experience with the 10gen provided package, so much so that I didn't even know it existed
[00:46] <weblife> getting error on 'sudo juju bootstrap': error: net: no such interface.
[00:46] <davecheney> weblife: have you renamed lxcbr0 to something else ?
[00:46] <weblife> davecheney:  :)  I find it funny mongodb recommends that package over the apt-get package even though its not the current stable
[00:47] <davecheney> weblife: Sod's law says that 90% of documentation is out of date
[00:47] <davecheney> weblife: 10gen might change their mind when 14.04 is out
[00:47] <weblife> davecheney: no... Think it was because of my initial install conflicts with the 10gen package
[00:47] <davecheney> 2.4 hasn't been backported to 12.04
[00:48] <weblife> then it makes sense why they do
[00:56] <weblife> darn.  tried un-installing all the packages and re-installing. but still same error.  Need a break from this madness.  Maybe I will reboot and everything will work like magic. :) hahahaha (like this is windows)
[01:00] <davecheney> if the error is about interfaces
[01:00] <davecheney> it is unrelated to mongodb
[01:00] <davecheney> 19:46 < weblife> getting error on 'sudo juju bootstrap': error: net: no such interface.
[01:00] <weblife> maybe because I'm on Saucy
[01:00] <davecheney> 19:46 < davecheney> weblife: have you renamed lxcbr0 to something else ?
[01:01] <weblife> davecheney: No never touched it.  Just let juju-local get it going
[01:01] <davecheney> ifconfig -a
[01:01] <davecheney> do you have lxcbr0 ?
[01:01] <davecheney> https://bugs.launchpad.net/juju-core/+bug/1206959
[01:01] <_mup_> Bug #1206959: error from no installed lxc is obscure <juju-core:Fix Committed by axwalk> <https://launchpad.net/bugs/1206959>
[01:05] <weblife> Oh yeah thats strange.
[01:05] <weblife> I run lxc and it says it's not installed.  Then I try to install it and it says it's already the newest version
[01:07] <davecheney> lxc may not be a command
[01:07] <davecheney> dpkg -L lxc
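davecheney's check can be scripted; a minimal sketch, where `check_bridge` is a hypothetical helper name (not part of juju or lxc), and the bridge name lxcbr0 is the default the local provider expects:

```shell
# Sketch of the lxcbr0 check discussed above; `check_bridge` is a
# made-up helper, not a juju or lxc command.
check_bridge() {
  # $1: output of `ip -o link` (or `ifconfig -a`); prints present/missing
  case "$1" in
    *lxcbr0*) echo present ;;
    *)        echo missing ;;
  esac
}

check_bridge "$(ip -o link 2>/dev/null)"
```

If it prints `missing`, the "error: net: no such interface" from `juju bootstrap` is about the bridge, not mongodb.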
[01:07] <weblife> juju-local installed it and started it: lxc start/running - lxc-net stop/pre-start, process 17104.  But it isn't showing in ifconfig
[01:07] <davecheney> ifconfig -a
[01:08] <sarnold> lxc-net should be started for the bridge to be created, right?
[01:10] <weblife> yeah its acting funky on me.  I am going to switch kernels real quick.  I am using a special version to fix my network interface because I am on a surface pro.
[01:13] <davecheney> weblife: mate, i think you're doing things which are too advanced for juju. We know there are some limitations on the environment the local provider expects to run in, and I think you're probably going to trip over those.
[01:13] <weblife> Hey hey.  It was my kernel
[01:13] <sarnold> how much memory does that surface pro have? how much memory will mongo take?
[01:14] <weblife> That sucks; it makes connecting to the web a pain without it.
[01:14] <kurt_> davecheney: is 1.14 making its way to precise?
[01:14] <sarnold> but you can connect to irc fine? :)
[01:14] <davecheney> kurt_: it will be in ppa:juju/stable
[01:14] <weblife> I have the 128 GB model, only 30 GB reserved for my Ubuntu, but a 60 GB SD card for most storage
[01:15] <davecheney> getting things into the precise main archives is a very arduous process
[01:16] <sarnold> juju probably more than most, converting from python to go doesn't easily fit in the SRU guidelines :) hehe
[01:16] <kurt_> davecheney: sorry, I'm not fully aware of the release process yet.
[01:16] <davecheney> kurt_: me too
[01:16] <kurt_> I really only care it makes its way to precise
[01:16] <davecheney> the most expedient way is ppa:juju/stable for well, stable
[01:16] <davecheney> and ppa:juju/devel for devel
[01:16] <davecheney> kurt_: i recommend you address your precise question to the mailing list
[01:17] <davecheney> i honestly dont have the answers for you on that one
[01:17] <weblife> sarnold: yeah once I connect its no problem for the most part.  Just have to deal with disconnects / reconnects too often.  Which does actually affect irc a little, especially if someone sends me a msg in the small period
[01:17] <weblife> and multiple net interfaces that I need to shut down
[01:17] <sarnold> weblife: aha :)
[01:18] <kurt_> davecheney: ok.  I'm trying hard to wrangle all the bits I need to successfully deploy openstack with gui with juju, etc.  It's been a major challenge
[01:18] <davecheney> weblife: also, https://bugs.launchpad.net/juju-core/+bug/1216775
[01:18] <_mup_> Bug #1216775: cmd/juju: local provider doesn't give a clear explanation when lxc is not configured correctly <papercut> <juju-core:Triaged> <https://launchpad.net/bugs/1216775>
[01:18] <davecheney> kurt_: i'm sorry to hear that
[01:19] <davecheney> obviously we don't intend it to be that hard
[01:19] <davecheney> i'd like to hear more about the background of your problem
[01:19] <davecheney> especially to find out where our documentation lead you astray
[01:19] <kurt_> davecheney: it is what it is.  It's like a snapshot in time with the half-life of strontium 238
[01:19] <weblife> davecheney: when I figure out the exact reason I will submit a bug report.  Just glad its working now.
[01:21] <weblife> this is awesome though.  No more long waits on aws to setup
[01:25] <davecheney> kurt_: we're going to cut a new stable 1.14 soon (ie, less than a week, hopefully less than 48 hours)
[01:25] <davecheney> which we recommend everyone upgrade to
[01:25] <davecheney> it'll be 1.14.0
[01:30] <kurt_> davecheney: ok. hopefully that's on precise. :D
[01:30] <davecheney> kurt_: can you please say that again
[01:30] <kurt_> davecheney: I'm looking forward to getting the debug-log back.
[01:31] <davecheney> kurt_: me too
[01:34] <kurt_> davecheney: I opened another bug too
[01:34] <kurt_> actually 3 today
[01:34] <davecheney> kurt_: yup
[01:34] <davecheney> thank you
[01:34] <davecheney> we can't fix problems we don't know about
[01:35] <davecheney> i appreciate you eating our dogfood
[02:41] <zradmin> hey guys, I am trying to deploy an openstack HA with the juju guide posted on the ubuntu website. I am up to deploying the mysql instances, but the virtual IP never comes online and when i run a juju debug-log i get this line about corosync not configuring:
[02:41] <zradmin> 2013-08-29 15:20:02,364 unit:mysql-hacluster/3: unit.hook.api INFO: Unable to configure corosync right now, bailing
[02:41] <zradmin> does anyone have any ideas?
[02:41] <thumper> hmm, which guide is saying to use juju 0.7?
[02:42] <zradmin> the ubuntu HA
[02:42] <zradmin> just a sec, I'll grab the url
[02:43] <zradmin> https://wiki.ubuntu.com/ServerTeam/OpenStackHA
[02:43] <zradmin> or have the charms moved on to the go implementation of juju now?
[02:44] <kurt_> I think it comes through from jamespage's opus
[02:44] <kurt_> zradmin: you can use any IP address.  Try deploying openstack in a non-HA scenario
[02:45] <kurt_> the vip will be ignored
[02:46] <kurt_> zradmin: try using juju 1.12
[02:46] <zradmin> ok
[02:47] <zradmin> thanks
[02:47] <kurt_> zradmin: np
[03:26] <adam_g> hazmat, fyi hitting some issues with darwin + juju-core + MAAS today.  gonna look closer at it tomorrow, but i periodically fail to reset the environment. in case you have any ideas in the meantime: http://paste.ubuntu.com/6042664/
[03:27] <hazmat> adam_g, hmm.. that looks like a maas or api binding issue wrt to oauth token
[03:27] <hazmat> what's even more strange is that its coming back on the cli vs in the provisioner
[03:29] <adam_g> wonder if its the same issue as https://bugs.launchpad.net/juju-core/+bug/1215670
[03:29] <_mup_> Bug #1215670: After exhausing OpenStack quotas, juju status output polluted <juju-core:Triaged> <https://launchpad.net/bugs/1215670>
[03:32] <hazmat> adam_g, not exactly; that's an error recorded asynchronously by the provisioning agent into state. deploy initiates the async command and records to state; provider output recorded to state shouldn't show up on the cli until the provisioning agent has had a chance to process the state change.
[03:32] <hazmat> hmm.. it could be some sort of provider credential validation early that's failing.
[03:33] <hazmat> adam_g, how reproducible is it?
[03:34] <hazmat> adam_g, there's some sensitivity with the token and clock skew between the client and the server/maas afaicr
[03:34] <adam_g> hazmat, seems to be 1/2 of the deployments i kick off. FWIW, MAAS has always been a bit unreliable
[03:35] <adam_g> its just that the previous deployer / darwin's py env is more tolerant
[03:35] <hazmat> adam_g, fair enough, but really we should be filing bugs on these rather than papering them over
[03:36] <hazmat> but yeah.. we can do something similar here if necessary, although ideally via a flag, because i'd like to get them fixed if they occur.
[03:37] <adam_g> hazmat, yeah, agreed.
[07:13] <raywang> jamespage, ping
[10:06] <jamespage> raywang, hello
[10:07] <raywang> hi jamespage, i saw you wrote the ceph charms, just want to make sure, does the OSD daemon go on the system disk, e.g. /dev/sda?
[10:07] <jamespage> raywang, nope
[10:07] <jamespage> the ceph-mon daemons will run on the system disk
[10:07] <jamespage> currently the OSDs require a dedicated disk each
[10:08] <jamespage> raywang, I've just proposed some changes to allow you to run OSDs on the system disk - but they are still pending review
[10:09] <jamespage> raywang, https://code.launchpad.net/~james-page/charms/precise/ceph/dir-support/+merge/182607
[10:09] <jamespage> and
[10:09] <jamespage> https://code.launchpad.net/~james-page/charms/precise/ceph-osd/dir-support/+merge/182608
[10:09] <jamespage> you can provide configuration such as "osd-devices: /mnt/osd-local" for example
[10:10] <jamespage> and the charm will use that for the OSD filesystem.
[10:10] <jamespage> raywang, specifying osd-devices: /dev/sda will just not do anything - the charm will recognise that this disk is in use and ignore it
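As a sketch of the config jamespage describes (the service block layout is illustrative, and directory paths only work once the pending dir-support branches above land):

```yaml
# Illustrative ceph charm config; osd-devices is a whitelist, and
# entries that are missing or already in use are silently ignored.
ceph:
  osd-devices: /dev/sdb          # a spare, dedicated block device
  # osd-devices: /mnt/osd-local  # a directory, once dir-support lands
```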
[10:11] <raywang> jamespage, weird, i only have one disk (/dev/sda), and i set osd-devices: /dev/sdb, it fails, but it can be started successfully
[10:11] <raywang> unit.hook.api@INFO: Path /dev/sdb does not exist - bailing
[10:11] <jamespage> yeah - osd-devices is a whitelist
[10:11] <raywang> unitworkflowstate: transition configure (started -> started
[10:11] <jamespage> if any of the devices is already in use it just ignores it - same for if the device is not found
[10:12] <jamespage> raywang, so you probably have a ceph cluster right now with no actual storage - just ceph MON's :-)
[10:13] <raywang> jamespage, it won't report error without actual storage?
[10:13] <jamespage> raywang, no
[10:14] <jamespage> raywang, running the ceph charm without storage does actually make sense if you are using dedicated servers for MON's - but only for large clusters!
[10:14] <raywang> jamespage, is there any problem with no actual storage ?
[10:14] <jamespage> raywang, if you login to one of the nodes and do "sudo ceph -s" it should tell you if the cluster is running
[10:15] <raywang> ok
[10:15] <jamespage> raywang, well it won't store anything until you have added some nodes with storage
[10:15] <jamespage> raywang, the ceph-osd charm can do that as well
[10:15] <raywang> 2013-08-30 06:15:13.030434 7f16ac7d1780 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication
[10:15] <raywang> 2013-08-30 06:15:13.030476 7f16ac7d1780 -1 ceph_tool_common_init failed.
[10:15] <jamespage> raywang, I suspect that you are not bootstrapped - how many service units are you running?
[10:16] <raywang> jamespage, for mon x 3,  osd x 3
[10:16] <jamespage> hmm
[10:16] <jamespage> raywang, can you check for running ceph-mon daemons as well please
[10:18] <raywang> jamespage, well, I ran "sudo service ceph-all start", only two processes are running
[10:18] <raywang> 3793 ?        Ssl    0:08 /usr/bin/python -m juju.agents.unit --nodaemon --logfile /var/lib/juju/units/ceph-1/charm.log --session-file /var/run/juju/unit-ceph-1-agent.zksession
[10:18] <raywang>  7064 ?        S      0:00 /usr/bin/python /var/lib/juju/units/ceph-1/charm/hooks/mon-relation-joined
[10:18] <jamespage> raywang, which ones
[10:19] <jamespage> ah - right - so I see mon-relation-joined still running
[10:19] <jamespage> thats the peer relation between ceph units that performs the bootstrap
[10:19] <raywang> jamespage, but from juju status, everything is started
[10:19] <jamespage> any indication as to what its doing
[10:19] <jamespage> ?
[10:19] <raywang> jamespage, well, how do i know? :)
[10:20] <jamespage> raywang, look at the debug-log
[10:20] <jamespage> juju debug-log
[10:20] <raywang> ok
[10:20] <jamespage> or look at the unit log on the service unit directly - it somewhere under /var/lib/juju (for the version of juju you are using)
[10:20] <jamespage> raywang, the ceph charms have been fully tested under the new version btw
[10:21] <raywang> jamespage, i don't think there is any output about ceph now
[10:21] <jamespage> raywang, can you pastebin me "ps -aef" please
[10:21] <raywang> ok
[10:24] <raywang> jamespage, http://pastebin.ubuntu.com/6043572/
[10:25] <jamespage> raywang, hmm - i suspect the charm hook is blocking waiting for the ceph-mons to startup
[10:25] <jamespage> raywang,  can you pastebin /etc/ceph/ceph.conf and check in /var/log/ceph for errors please
[10:25] <jamespage> something has stopped ceph-mon from starting up correctly  I suspect
[10:26] <raywang> jamespage, but from the juju status, everything is started,
[10:26] <raywang> jamespage, is it a fake started?
[10:26] <jamespage> raywang, no - it just means that no hooks have failed
[10:26] <jamespage> raywang, some are still trying to run!
[10:26] <raywang> ok
[10:26] <jamespage> I should probably include a timeout on the poll for bootstrap - right now the charm will wait forever
[10:27] <raywang> jamespage, ceph.conf -> http://pastebin.ubuntu.com/6043577/
[10:27] <raywang> hah
[10:28] <raywang> jamespage, but what's weird is, i add-relation ceph glance, and I can glance image-create without real storage...
[10:28] <jamespage> ceph.conf looks OK
[10:28] <jamespage> raywang, yeah - remember hooks can fire at any point in time
[10:29] <jamespage> raywang, as ceph is not bootstrapped yet it won't give out information to glance
[10:29] <jamespage> so glance is still using local storage right now
[10:30] <raywang> jamespage, there is just one line in /var/log/ceph/ceph-mon.OS01-07.log
[10:30] <jamespage> what does it say?
[10:30] <raywang> 2013-08-30 03:33:29.453655 7ff412f31780  0 store(/var/lib/ceph/mon/ceph-OS01-07) created monfs at /var/lib/ceph/mon/ceph-OS01-07 for OS01-07
[10:30] <jamespage> hmm
[10:30] <jamespage> check in /var/log/upstart as well
[10:31] <raywang> jamespage, if glance doesn't get information from ceph, it will use local storage as the fallback option?
[10:31] <jamespage> raywang, yes
[10:31] <raywang> jamespage, that's great :)
[10:31] <jamespage> and beware - it won't transfer images when ceph does get related!
[10:33] <raywang> jamespage, you mean even if I register the image in glance, i still can not boot VMs?
[10:33] <jamespage> I mean glance won't transfer images that got dumped on local storage into ceph once the relation is fully configured
[10:34] <raywang> jamespage,  sudo zgrep ceph /var/log/upstart/ return nothing
[10:34] <jamespage> hmm
[10:34] <raywang> ok, got it
[10:34] <jamespage> raywang, which ceph version are you using?
[10:34] <raywang> but as long as glance gets related to ceph, it will pass the image to ceph later on?
[10:35] <raywang> jamespage, from grizzly cloud archive
[10:35] <jamespage> raywang, if you upload images to glance after its been configured to use ceph - yes
[10:35] <raywang> 0.56.6-0ubuntu1~cloud0
[10:35] <jamespage> raywang, should be OK - I was testing that yesterday
[10:35] <raywang> jamespage, might ceph not be starting because i only have one disk?
[10:36] <jamespage> raywang, no - the ceph-mon's should startup without extra disks
[10:36] <jamespage> something is blocking it
[10:36] <raywang> but i use the local.yaml from openstackHA, and it tells ceph to start an osd daemon too.
[10:37] <jamespage> raywang, how much memory does your service unit have?
[10:37] <jamespage> just trying to think what could be blocking this
[10:37] <raywang> jamespage, 196G
[10:37] <jamespage> blimey
[10:37] <jamespage> should be OK then
[10:37] <raywang> and E7 CPU
[10:38] <jamespage> can you pastebin me the relevant bits from your local.yaml file please
[10:38] <jamespage> I'll try to reproduce
[10:38] <raywang> ok
[10:38] <jamespage> thanks
[10:40] <raywang> jamespage, well thank you -> http://pastebin.ubuntu.com/6043604/
[10:40] <raywang> jamespage, the reason why I got only one disk is because the HP DL580 G7 must have RAID for its disks; i only have two disks, so there is only one disk available to the system
[10:44] <jamespage> raywang, yeah - thats OK for MON's
[10:44] <jamespage> raywang, upstream best practice is to have a dedicated device for each OSD
[10:45] <raywang> jamespage, so only disk is not working with ceph-osd charm, right?
[10:45] <jamespage> raywang, not right now - you would need to use the branches I linked to above
[10:46] <raywang> s/only/one/
[10:46] <jamespage> raywang, general rule is 1 core/1GB/1OSD/1Disk
[10:46] <jamespage> roughly
[10:46] <raywang> ok
[10:47] <raywang> jamespage, i'm sorry which branch you linked above?  I can't find it
[10:47] <jamespage> https://code.launchpad.net/~james-page/charms/precise/ceph-osd/dir-support/+merge/182608
[10:47] <jamespage> https://code.launchpad.net/~james-page/charms/precise/ceph/dir-support/+merge/182607
[10:47] <raywang> ah i see
[10:49] <jamespage> raywang, odd
[10:49] <jamespage> first line of my ceph-mon.*.log is
[10:49] <jamespage> 2013-08-30 10:46:14.333465 7fb27b326780  0 ceph version 0.56.6 (95a0bda7f007a33b0dc7adf4b330778fa1e5d70c), process ceph-mon, pid 17563
[10:49] <jamespage> second line is:
[10:49] <jamespage> 2013-08-30 10:46:14.336048 7fb27b326780  0 store(/var/lib/ceph/mon/ceph-juju-serverstack-machine-1) created monfs at /var/lib/ceph/mon/ceph-juju-serverstack-machine-1 for juju-serverstack-machine-1
[10:50] <raywang> anything wrong?
[10:50] <jamespage> raywang, no - started OK on my test rig
[10:51] <raywang> jamespage, maybe I need to add an extra disk to re-deploy ceph and ceph-osd?
[10:51] <jamespage> raywang, I'm just wondering whether its something to do with the name of your host
[10:52] <jamespage> server is OS01-07 right?
[10:52] <raywang> jamespage, OS01-07 is one of the ceph-mon
[11:42] <jamespage> sidnei, ah - I just bumped into your backports of juju-deployer for the stable PPA
[12:05] <mthaddon> hi folks, is this expected? http://paste.ubuntu.com/6043878/ (i.e. that setting a config value to "" doesn't work)
[12:06] <marcoceppi> mthaddon: which version of juju?
[12:06] <mthaddon> marcoceppi: 1.13.2-4~1703~raring1
[12:10]  * mthaddon has to grab some food, will bbiab
[12:11] <marcoceppi> mthaddon: So, I think there was a change committed for this, I thought it was in 1.13, let me find the mailing list
[12:19] <mthaddon> marcoceppi: cool, thanks
[12:22] <rick_h> marcoceppi: isn't that the change that setting "" actually sets "" vs the default?
[12:22] <marcoceppi> rick_h: yeah, I can't find the mailing list post about it
[12:23] <rick_h> I know I saw talk about it in #juju-gui, but don't recall seeing a mailing list post. Maybe I missed it
[12:23] <marcoceppi> rick_h: somewhere, someone discussed it
[12:25] <marcoceppi> mthaddon: I think the end result is it's a known issue and should be fixed soon
[12:25] <marcoceppi> not sure what to do about it until then
[12:25] <mthaddon> marcoceppi: I'll file a bug so we can track it at least - thanks
[12:27] <mthaddon> https://bugs.launchpad.net/juju-core/+bug/1218877
[12:27] <_mup_> Bug #1218877: Can't set config options to empty values <canonical-webops> <juju-core:New> <https://launchpad.net/bugs/1218877>
[12:34] <TheMue> mthaddon, rick_h, marcoceppi: we already have https://bugs.launchpad.net/juju-core/+bug/1194945 for this missing feature
[12:34] <_mup_> Bug #1194945: juju set is overloaded <juju-core:In Progress by themue> <https://launchpad.net/bugs/1194945>
[12:35] <marcoceppi> TheMue: thanks, I couldn't quite find that bug
[12:35] <rick_h> ah, thanks TheMue
[12:35] <mthaddon> cool, thx, will merge the two
[12:35] <TheMue> a new unset command has already been merged and will now be released
[12:36] <TheMue> next step will be the setting of string options to empty strings. but we have to take care with the API, which still unsets the option
[12:36] <mthaddon> TheMue: I've observed the same behaviour when passing a yaml file as the config - will your work cover that too?
[12:37] <TheMue> mthaddon: have to discuss with fwereade, but imho it should
[12:38] <mthaddon> TheMue: do you have an idea of timeframe for that, as it's affecting a service we want to bring online in production and I'd like to know whether to try and work around it another way or not
[12:38] <fwereade> TheMue: AIUI your charm package change does do that, right? the problem is that it changes public api behaviour and *that* needs to be worked around
[12:39] <TheMue> fwereade: exactly
[12:40] <TheMue> mthaddon: I think it's in next week
[12:40] <mthaddon> ok, thanks
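For reference, the yaml form mthaddon mentions would look roughly like this (service and option names are made up; per the bugs above, an empty string is currently treated as an unset rather than an empty value):

```yaml
# Hypothetical settings file passed to juju via a --config flag;
# until the fix lands, "" here reverts the option to its default
# instead of setting it to an empty string.
myservice:
  some-option: ""
```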
[13:00] <natefinch> evilnickveitch: I'm adding a page to the Get Started section at the bottom of the getting started page, but I don't know what class to give the link... I can't find a pattern; for example, local is page-item-20 and openstack is page-item-3596
[13:01] <natefinch> (at the very bottom of https://juju.ubuntu.com/docs/getting-started.html)
[13:02] <marcoceppi> natefinch: where?
[13:02] <natefinch> this stuff: http://pastebin.ubuntu.com/6044074/
[13:02] <marcoceppi> natefinch: OH, in the footer. If you're going to edit the
[13:03] <marcoceppi> natefinch: don't edit the footer directly in the boiler plate
[13:03] <marcoceppi> also, that's just auto-generated wordpress crap
[13:03] <natefinch> oh
[13:03] <marcoceppi> yeah, we sync the header and footer with the wordpress install using a build script that replaces the boilerplate in all the pages
[13:04] <natefinch> ahh, well that's good. Certainly better than copy & paste by hand :)
[13:04] <natefinch> how do I add a section to that list, then?
[13:04] <marcoceppi> natefinch: you need WP admin access
[13:04] <marcoceppi> we can add it in later
[13:05] <natefinch> marcoceppi: that's cool
[13:05] <marcoceppi> natefinch: just add it to the doc sprint spreadsheet and I'll make sure it gets done
[13:05] <natefinch> marcoceppi: will do
[13:08] <mgz> can we not version the boilerplate? is having a separate build step that painful?
[13:08] <marcoceppi> mgz: boilerplate is versioned
[13:09] <mgz> sorry, as in, can we please not include the boilerplate .tpl in all the doc files we actually care about?
[13:09] <marcoceppi> mgz: template/ directory has the boilerplate templates
[13:09] <marcoceppi> mgz: eventually,  yes, but for now it's not a focus of this sprint IIRC
[13:10] <jcastro> morning!
[13:10] <jcastro> evilnickveitch: where's the hangout?
[13:10] <marcoceppi> mgz: we really _really_ need content
[13:10] <jcastro> or should I just pick a todo item?
[13:11] <marcoceppi> jcastro: the hangout was about an hour ago ;)
[13:11] <mgz> marcoceppi: I know, I know :)
[13:11] <jcastro> oh I thought we would just be hanging out all day and people coming in and out?
[13:12] <marcoceppi> jcastro: oh, I'm game for that
[13:12] <mgz> jcastro: that too, I guess we just use the one in the calendar?
[13:13]  * marcoceppi sits in the hangout
[13:13] <jcastro> got it
[13:14] <jcastro> https://plus.google.com/hangouts/_/552b1b180f1908b8a7ba7f65402ab654f5b73847?authuser=1
[13:58] <bloodearnest> heya folks
[13:58] <bloodearnest> am having some trouble with juju-deployer hanging after deploying services
[13:59] <bloodearnest> using an lxc env
[14:01] <bloodearnest> running with -v, I get a string of  "Delta unit: u1-psearch-fe/0 change:installed" type messages, and then hang
[14:02] <bloodearnest> if I ctrl-c, the traceback looks like: http://paste.ubuntu.com/6044247/
[14:02] <bloodearnest> have tried this on 2 different hosts with 2 different configs
[14:03] <marcoceppi> bloodearnest: can you paste what juju deployer command / flags you're using?
[14:03] <bloodearnest> marcoceppi, sure
[14:08] <gary_poster> arosales, jcastro, hi. fwiw I'm going to be proposing to mramm (next week probably?) that the problem-driven services journey that I advocated in yesterday's "awesome first 30 min" vUDS session be hanging off of jujucharms.com--something like discourse.jujucharms.com and openstack.jujucharms.com and so on.  Maybe I'm crazy, but I wanted to let you know in case you or anyone else wanted to try to stop or redirect me off
[14:08] <gary_poster> early. :-)  I don't think that should stop any plans you guys are doing, if plans are already being made.  We can move content easily enough, if we all end up having enough of a consensus to do so.
[14:08] <gary_poster> Also, I'm hoping Makyo will be available to start on doc sprint in an hour or two, after his timezone comes around.
[14:10] <bloodearnest> marcoceppi, juju-deployer -v -s 5 -W -L -c $DEFAULT -c $CONFIG -c $SECRETS $TARGET
[14:11] <kurt_> Can't destroy my environment
[14:11] <kurt_> error: gomaasapi: got error back from server: 409 CONFLICT
[14:11] <kurt_> even after deleting node from maas
[14:12] <kurt_> we need a hammer mode for destroy-environment. ie. -f switch :D
[14:12] <arosales> gary_poster, if you put together a doc I would be happy to take a review
[14:12] <gary_poster> cool arosales
[14:12] <arosales> gary_poster, +1 on creative thinking on any idea to improve the first 30 min ux
[14:12] <gary_poster> arosales, heh, ack
[14:13] <arosales> gary_poster, also good to hear about Makyo and docs sprint
[14:13] <Makyo> One more coffee and I'll be good.
[14:14] <gary_poster> :-) cool
[14:21] <arosales> evilnickveitch, https://code.launchpad.net/~a.rosales/juju-core/correct-exposing-link/+merge/183186
[14:22] <jcastro> https://code.launchpad.net/~jorge/juju-core/add-to/+merge/183190
[14:22] <jcastro> blam!
[14:22] <evilnickveitch> jcastro, arosales yay! you guys are the best with all your exciting additions
[14:22]  * arosales starting with the low hanging fruit first
[14:42] <fwereade_> evilnickveitch, ISTM that there's so much overlap between authors-charm-anatomy and authors-charms-in-action that they should basically be merged, would you concur?
[14:45] <fwereade_> jcastro, marcoceppi: ^?
[14:47] <evilnickveitch> fwereade, there is a fair amount yes...
[14:48] <fwereade_> evilnickveitch, jcastro, marcoceppi: possibly into anatomy (the files in the charm, and what hooks are run when) and hook-environment (the hook tools and env vars and other relevant bits)?
[14:49] <marcoceppi> fwereade_: I feel like that might be a lot of content. Hook environment might be strong enough to stand on its own
[14:49] <marcoceppi> but to the former, yes I think files of a charm and hook execution plan make sense together
[14:50] <evilnickveitch> fwereade_, yeah, I am restructuring some of the docs anyway. I think the anatomy should contain the descriptive bits, and we should have additional pages for further hook-related bits
[14:50]  * marcoceppi 's opinion
[14:50] <fwereade_> marcoceppi, sorry, yeah, there was an implied leading "and then broken down again: " that I somehow forgot to type
[14:54] <fwereade_> evilnickveitch, yeah, maybe just a listing of the valid ones in the first doc (along with the rest of the anatomy) and a link to the more detailed treatment
[14:55] <fwereade_> marcoceppi, evilnickveitch: ok, I'll set about breaking those two into three, thanks
[14:55] <evilnickveitch> ok, cool
[15:18] <marcoceppi> evilnickveitch: for some reason this didn't find its way into the footer template
[15:18] <marcoceppi> evilnickveitch: for jorge's page, add <script src="https://google-code-prettify.googlecode.com/svn/loader/run_prettify.js?skin=sunburst"></script> to the bottom of the page right above </footer>
[15:19] <marcoceppi> evilnickveitch: going to keep digging in to the style issues
[15:39] <marcoceppi> evilnickveitch: fix for funky css colors. https://code.launchpad.net/~marcoceppi/juju-core/fix-css-prettyprint/+merge/183220
[15:45] <Makyo> evilnickveitch, https://code.launchpad.net/~makyo/juju-core/enable-viewboxing/+merge/180354 re: saving icon SVGs with viewbox attr.
[15:52] <jcastro> hey TheMue
[15:52] <jcastro> I am writing a nodejs example app page for the docs
[15:52] <jcastro> and I need to link to the local config page in the docs that is currently in progress
[15:52] <evilnickveitch> Makyo, hi - i thought that got merged. let me check
[15:53] <jcastro> what's the filename of the html you will use so I can link it?
[15:55] <TheMue> jcastro: it is config-local.html
[15:55] <jcastro> thanks
[15:56] <TheMue> jcastro: yw
[16:05] <weblife> morning
[16:09] <hazmat> adam_g, did you end up filing a bug on the maas unauthorized thing?
[16:12] <hazmat> adam_g, didn't see it .. i filed bug 1218997
[16:12] <_mup_> Bug #1218997: maas throws unauthorized sometimes for no reason <MAAS:New> <https://launchpad.net/bugs/1218997>
[16:41] <jcastro> https://code.launchpad.net/~jorge/juju-core/nodeapp-example/+merge/183230
[16:41] <jcastro> BAM!
[16:41] <jcastro> bcsaller: hey I hear you're the one who volunteered to test docs?
[16:44] <jcastro> utlemming: ^^^
[16:44] <jcastro> sorry, mixed up my Bens
[16:45] <utlemming> jcastro: ack, looking
[16:45] <jcastro> http://pastebin.ubuntu.com/6044806/
[16:45] <jcastro> has a rendered version so you don't have to html yourself
[16:57] <jcastro> m_3: which branch should I MP on for the rack charm?
[17:09] <natefinch> can someone help me get my stupid apache working so I can test my docs?
[17:14] <natefinch> evilnickveitch, jcastro, marcoceppi: little help?  I tried symlinking to the docs directory from /var/www and that didn't work, so then I used marco's apache config... but I'm still just getting 403s when I try to view the docs
[17:14] <utlemming> does LXC on precise not work for the local provider?
[17:15] <utlemming> its stuck in pending and hasn't created the container for unit 1
[17:15] <marcoceppi> natefinch: every directory between "/" and "htmldocs" needs to have r/x permissions for world
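marcoceppi's point can be sketched as follows; the throwaway temp tree stands in for a real /home/&lt;user&gt;/docs/htmldocs path:

```shell
# Every directory from the top of the tree down to htmldocs needs
# o+rx so the apache worker (running as another user) can traverse
# the path. The temp tree here stands in for ~/docs/htmldocs.
base=$(mktemp -d)
mkdir -p "$base/docs/htmldocs"
for d in "$base" "$base/docs" "$base/docs/htmldocs"; do
  chmod o+rx "$d"
done
```

On a real system the loop would run over the actual path components instead (and the parent directories of $HOME are usually world-traversable already).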
[17:16] <evilnickveitch> natefinch, you can't symlink, you need to configure apache. Hopefull
[17:16] <evilnickveitch> natefinch, does the log have anything useful?
[17:16] <marcoceppi> utlemming: it does work, but I don't know if it's been tested thoroughly. We used it at a charm school using 1.13.1 on precise without much incident
[17:22] <hazmat> utlemming, might want to check the machine logs inside of JUJU_HOME
[17:22] <natefinch> marcoceppi: well, we've upgraded from a 403 to an internal server error :)
[17:22] <marcoceppi> natefinch: what's apache2 error log look like?
[17:22] <utlemming> hazmat: thanks
[17:23] <natefinch> marcoceppi:  /home/nate/docs/htmldocs/.htaccess: Invalid command 'RewriteEngine', perhaps misspelled or defined by a module not included in the server configuration
[17:23] <hazmat> natefinch, nginx ftw ;-)
[17:23] <marcoceppi> natefinch: ah, this is easy. `sudo a2enmod rewrite`
[17:24] <natefinch> hazmat: heh... I'm sure that has its own problems ;)
[17:24] <marcoceppi> well, I second that. nginx ftw.
[17:24] <marcoceppi> but IS uses apache, got to replicate production :)
[17:24] <weblife> jcastro:  Nice.  If Mims approves my merge, you will also be able to run a default (it's a working app) or load the most current verified stable version or a PPA.
[17:25] <hazmat> marcoceppi, why is there a rewrite rule?
[17:25] <marcoceppi> hazmat: from old docs to new docs
[17:25] <natefinch> marcoceppi: brilliant, it's working.  Thanks a lot.
[17:25] <hazmat> marcoceppi, gotcha
[17:26] <hazmat> natefinch, ok even simpler.. cd htmldocs && python -m SimpleHTTPServer
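hazmat's one-liner is Python 2; on Python 3 the equivalent is `python3 -m http.server`. Done programmatically, a sketch (directory and port are placeholders, not anything from the chat):

```python
# Python 3 equivalent of `cd htmldocs && python -m SimpleHTTPServer`:
# serve a directory of static docs over HTTP.
import functools
import http.server

def make_doc_server(directory, port=0):
    """Return an HTTP server rooted at `directory` (port 0 = pick a free port)."""
    handler = functools.partial(
        http.server.SimpleHTTPRequestHandler, directory=directory)
    return http.server.ThreadingHTTPServer(("127.0.0.1", port), handler)

if __name__ == "__main__":
    srv = make_doc_server("htmldocs", 8000)  # "htmldocs" is illustrative
    print("serving on http://127.0.0.1:%d/" % srv.server_address[1])
    srv.serve_forever()
```

For one-off doc previews the stock module invocation is all you need; the function form only helps if you want to embed the server in a test harness.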
[17:26] <jcastro> weblife: which charm?
[17:26] <natefinch> hazmat: actually, that's brilliant, especially for people like me that are just working on the docs temporarily
[17:27] <marcoceppi> hazmat: +1000
[17:27] <weblife> jcastro: sorry, was responding to your doc.  Node-app
[17:27] <jcastro> ah, awesome!
[17:27] <natefinch> marcoceppi: there should be a doc on how to contribute to the docs, and include that as an easy way to test them
[17:27] <marcoceppi> natefinch: there is a doc for contributing to the docs, feel free to add that in there :D
[17:28] <marcoceppi> natefinch: https://juju.ubuntu.com/docs/contributing.html
[17:28]  * marcoceppi spies a lot of things that need to be fixed
[17:28] <natefinch> marcoceppi: oh, sweet.  EvilNick should have mentioned that ;)
[17:29] <weblife> https://code.launchpad.net/~web-brandon/charms/precise/node-app/install-fix
[17:34] <weblife> Pretty sure we're gonna be in Syria shortly. Assad won't sit at a table for talks. Naive Obama.
[17:38] <hazmat> if we're doing politics we should be working on the drupal charm since its running most of the usg agency sites
[17:39] <hazmat> minus the plone ones like fbi/cia
[17:39] <weblife> lol
[17:39]  * sarnold idly wonders who's brave enough to run on joomla!
[17:47] <hazmat> sarnold, most of them are behind cdns with WAF appliances.
[17:48] <sarnold> hazmat: I hope fairly draconian..
[17:49] <hazmat> sarnold, i haven't seen any joomla, but drupal has become the de facto cms around the usg.. doe, whitehouse, nasa, etc.
[17:50] <marcoceppi> hazmat: a lot of DC runs on Drupal, sadly
[17:50] <marcoceppi> or fortunately, depending on your skill set
[17:50] <hazmat> the revenge of php
[17:53] <m_3> jcastro: renamed
[17:53] <jcastro> <3
[17:53] <m_3> jcastro: I'll send curtis mail asking to flush web caches
[17:54] <m_3> lp:charms/rails is lp:~charmers/charms/precise/rails/trunk
[17:54] <m_3> and neither lp:charms/rack nor lp:~charmers/charms/precise/rack/trunk exist anymore
[17:55] <m_3> old rails is held at lp:~mark-mims/charms/precise/rails/trunk
[18:21] <marcoceppi> jcastro: lp:~marcoceppi/charm-tools/python-port
[18:21] <marcoceppi> python charmtools/getall.py /path/to/where/you/want/them/all
[18:22] <natefinch> not sure who's looking at doc merge proposals right now, but here's mine: https://code.launchpad.net/~natefinch/juju-core/win-getting-started/+merge/183246
[18:32] <utlemming> is there any way to configure which bridge the local provider uses?
[18:43] <sarnold> utlemming: I think that's been a matter of discussion in here the last few days; the conclusion was that the bridge name is more or less fixed as an assumption somewhere.
[18:43] <utlemming> sarnold: that is unfortunate, as lxc lets you define the bridge in /etc/default/lxc and /etc/lxc/lxc.conf
[18:44] <sarnold> utlemming: yeah, and I could easily see wanting to use hand-managed lxc instances for one set of tasks and juju-managed instances for another set of tasks...
[18:46] <utlemming> sarnold: my use case is the Juju Cloud Image for developers. I was going to create a bridge, bind eth0 to that bridge, use that bridge as the LXC bridge, and then use host-networking for Virtualbox. It would effectively give people a developer environment and let them interface with services from the host.
[18:54] <dalek49> I'm an undergrad looking to include juju in my senior thesis.  I was wondering if anyone here knew of a place I could get a small number of servers for testing purposes
[19:01] <natefinch> dalek49: If you're on linux, you can use the local provider to set up services on your local machine using lxc containers.  Otherwise... no?  If you can cobble together some cheap/free computers off of craigslist, you can install MaaS on them, which treats them just like a cloud, which juju can then use.
[19:02] <natefinch> dalek49: http://maas.ubuntu.com/
[19:06] <dalek49> Yeah, I've been working on local mode
[19:06] <dalek49> I wasn't aware that I could put maas on my own cluster
[19:36] <utlemming> any idea what's going on with the local provider here: http://paste.ubuntu.com/6045349/
[19:36] <utlemming> I can't "juju ssh" or get a juju debug-log, but I can do deployments
[19:39] <marcoceppi> utlemming: you can't ssh to a machine number in local, this is a known issue
[19:39] <utlemming> marcoceppi: ah, thank you
[19:39] <marcoceppi> utlemming: you can't debug-log either, because there is no bootstrap node
[19:39] <marcoceppi> utlemming: you can however `tail -f ~/.juju/local/log/unit*.log`
[19:40] <marcoceppi> utlemming: all local provider logs are stored in the ~/.juju directory to make them easily accessible
[19:40] <marcoceppi> utlemming: juju ssh juju-gui/0 instead*
[19:41] <utlemming> sweet, thanks for the help
[19:58] <marcoceppi> evilnickveitch: charm-tools documentation https://code.launchpad.net/~marcoceppi/juju-core/charm-tools/+merge/183265
[20:09] <weblife> I haven't looked into it yet, but I was wondering if it would be possible to pass a tar file in the deployment line? If not, I could see some usefulness in it, fyi...
[20:10] <jcastro> hazmat: how's deployer docs coming along?
[20:11] <marcoceppi> weblife: not at the moment
[20:11] <hazmat> jcastro, stuck in a meeting
[20:11] <hazmat> jcastro, already 11m over and then back into docs
[20:11]  * jcastro nods
[20:11] <jcastro> TheMue: how'd you do today? Anything to land?
[20:12] <marcoceppi> weblife: but it's a really interesting idea
[20:13] <natefinch> evilnickveitch: https://code.launchpad.net/~natefinch/juju-core/win-getting-started/+merge/183246
[20:16] <TheMue> jcastro: just tested if my doc is correct
[20:16] <jcastro> TheMue: rock and roll!
[20:16] <TheMue> jcastro: now adding it to the menu as it is a new document and then pushing it
[20:16] <TheMue> jcastro: yeah
[20:16] <jcastro> ok, a bunch of us landed stuff, nice!
[20:17] <jcastro> I'll go ahead and mark your section as DONE then
[20:17] <jcastro> \o/
[20:26] <jcastro> https://code.launchpad.net/~charmers/juju-core/docs/+merge/183188
[20:26] <jcastro> ^^ this should be deleted right?
[20:27] <weblife> stupid connection
[20:29] <weblife> off to see cafe tauba
[20:47] <TheMue> jcastro: so, proposed
[20:51] <TheMue> so guys, i'll be stepping out. liked this sprint. have a nice weekend
[20:59] <jcastro> marcoceppi: can you repaste me the hangout URL?
[20:59] <marcoceppi> jcastro: https://plus.google.com/hangouts/_/552b1b180f1908b8a7ba7f65402ab654f5b73847
[21:04] <hazmat> 4 branches in the queue, 15 merges today
[21:04] <hazmat> go docs
[21:29] <dalek49> why does juju depend on mongo?
[21:32] <rick_h> dalek49: juju stores state into mongo
[21:32] <dalek49> is there some documentation on that?
[21:33] <rick_h> dalek49: http://blog.labix.org/2013/06/25/the-heart-of-juju has some of the basic design ideas. The 'state server' is mongo backed.
[21:35] <rick_h> dalek49: I'm not sure if there's real docs on the state server in the full docs https://juju.ubuntu.com as it's more user docs vs technical 'how it works'
[21:39] <marcoceppi> dalek49: we don't really cover it too much, as it could change in the future (we've already moved from originally using ZooKeeper)
[21:41] <arosales> evilnickveitch,
[21:41] <arosales> https://code.launchpad.net/~a.rosales/juju-core/docs-update-scaling/+merge/183284
[21:41] <marcoceppi> \o/
[21:42] <marcoceppi> more merges!
[21:44] <dalek49> I'm getting ssh: connection refused when I run debug-log.  Anyone run into this?
[21:44] <marcoceppi> dalek49: local provider?
[21:45] <dalek49> marcoceppi: yes. It's local
[21:46] <marcoceppi> dalek49: debug-log doesn't work on local.
[21:46] <marcoceppi> dalek49: however, all the logs are stored in ~/.juju/local/log/
[21:47] <marcoceppi> dalek49: where "local" is the name of your local environment
[21:48] <dalek49> marcoceppi: many thanks
[21:49] <marcoceppi> np, if you want the debug-log experience of having a bunch of text rush at you at once, you can run `tail -f ~/.juju/local/log/unit-*.log`
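marcoceppi's `tail -f` can be approximated in a few lines of Python if you want the same multi-file follow behaviour without shelling out. A sketch under the assumption that the caller globs the log paths first (the helper name is illustrative):

```python
# Rough Python equivalent of `tail -f ~/.juju/local/log/unit-*.log`:
# start at end-of-file for each log and yield lines as they are appended.
import os
import time

def follow(paths, poll=0.5):
    """Yield (path, line) pairs for new lines appended to any of `paths`."""
    handles = [open(p) for p in paths]
    for f in handles:
        f.seek(0, os.SEEK_END)  # skip existing content, like tail -f
    def lines():
        while True:
            seen = False
            for f in handles:
                line = f.readline()
                if line:
                    seen = True
                    yield f.name, line.rstrip("\n")
            if not seen:
                time.sleep(poll)
    return lines()
```

Usage would be something like `follow(glob.glob(os.path.expanduser("~/.juju/local/log/unit-*.log")))`; in practice the shell one-liner from the chat is the simpler choice.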
[23:05] <adam_g> hazmat, new issue https://bugs.launchpad.net/juju-core/+bug/1219116
[23:05] <_mup_> Bug #1219116: juju-deployer fails against juju-core: dial tcp 127.0.0.1:17070: connection refused <juju-core:New> <https://launchpad.net/bugs/1219116>