[04:25] <Kupo24z> Hey all, I've setup a flat-networking openstack cluster with juju however the networking config area seems to be missing in horizon. Nova network shows running on all compute nodes however. Any ideas?
[05:09] <jose> negronjl: ping
[05:15] <negronjl> jose: hey
[05:15] <jose> negronjl: hey! I've been working on the Seafile charm you made, and I've got it finished now. The only thing missing is a copyright file, which would have to come from you
[05:16] <jose> I don't know if you could put one in your branch so I can pull it and take care of sending it to the Charm Store
[05:16] <negronjl> jose: submit it and I'll add it
[05:17] <jose> negronjl: you mean, should I push my branch and you'll put an MP to add it?
[05:18] <negronjl> jose: you can add this one: https://github.com/haiwen/seafile/blob/master/LICENCE.txt
[05:19] <negronjl> jose: also, add your name to the copyright ( for the charm )
[05:19] <jose> negronjl: I just did some tweaks, you're actually the author :)
[05:19] <negronjl> jose: that should suffice then, submitted for review and we can see how it goes
[05:19] <negronjl> jose: add my name to the copyright as well ... push it and I'll review it
[05:20] <negronjl> jose: or just MP the thing and I'll take it from there
[05:20] <jose> ok, I'm finishing up the README and the icon and I'm pushing it
[05:20] <negronjl> jose ... oh ... and thanks for fixing it :)
[05:20] <jose> no worries :)
[05:50] <jose> negronjl: https://code.launchpad.net/~jose/charms/precise/seafile/trunk if you want to take a look
[05:52] <negronjl> jose: can you do an MP? That way I'll do the review and we can go from there
[05:52] <jose> sure
[05:53] <jose> (the bug with charmers for inclusion in the charm store is already open)
[05:54] <negronjl> jose: I'll take a look at it in a bit
[05:54] <jose> great, thanks :)
[06:18] <negronjl> jose: Thanks for the changes.  I've merged them
[06:18] <jose> np, looking forward to seeing it on the store soon
[06:19] <negronjl> jose: I added you to the maintainers as well.
[06:20] <jose> awesome, thanks
[06:20] <jose> I'm planning on adding memcached and postfix support in the future, looks promising
[07:54] <benonsoftware> Hiya
[07:54] <benonsoftware> I'm currently making a charm for folding@home / origami and I'm not sure if I understand the 'provides:' section in metadata.yaml
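For reference, the `provides:` section of metadata.yaml declares the relations a charm offers to other charms, keyed by relation name with an interface type. A minimal sketch (the relation name below is a hypothetical example for a folding@home charm, not from the log):

```yaml
# metadata.yaml (fragment) -- relation name is hypothetical
provides:
  website:            # relation name; used in hook filenames (website-relation-joined, etc.)
    interface: http   # interface type; only relations with matching interfaces can be joined
```

Two charms can only be related when one side `provides:` and the other `requires:` the same interface type.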
[13:51] <Kupo24z> Hey all, anyone know how to add a fixed range in juju openstack without it being overwritten (nova.conf)?
[14:42] <timrc> Hrm when I deploy a local jenkins environment using a mixture of precise and trusty containers, the precise containers just hang :( 1.18.1-trusty-amd64
[14:43] <timrc> I tried creating an lxc container by hand with -r precise to confirm that was the problem
[14:43] <timrc> The precise containers are forever in a "Pending" state
[14:48] <jose> I think there was a bug about that
[14:57] <mattyw> anyone seen this error before? WARNING failed to write bootstrap-verify file: cannot make S3 control bucket: A conflicting conditional operation is currently in progress against this resource. Please try again.
[14:57] <mattyw> during a bootstrap
[14:59] <jose> mattyw: that's an S3 error, is your bucket name unique and you have enough privileges for bucket creation?
[15:00] <mattyw> jose, everything was working until I tried switching regions about 30 mins ago, I switched back now and get this problem
[15:00] <mattyw> I've logged into my account, deleted all buckets
[15:01] <jose> according to Amazon, that's a 409 Conflict error, with code OperationAborted
[15:02] <smarter> so, I'm running juju on precise and doing a local deployment with saucy machines, and they all fail with some apparmor error: http://sprunge.us/XgPJ
[15:02] <smarter> previously, I was deploying with raring machines and it worked fine
[15:03] <jose> mattyw: apparently bug #1183571 is related to that
[15:03] <_mup_> Bug #1183571: bootstrap fails: A conflicting conditional operation... <cmdline> <hours> <juju-core:Triaged> <https://launchpad.net/bugs/1183571>
[15:03] <mattyw> jose, I changed my control bucket id and it seems to work ok
[15:03] <jose> awesome then
[15:03] <smarter> any idea?
[15:03] <mattyw> jose, thanks for the link to the bug
[15:09] <mattyw> anyone seen this error when trying to deploy a local charm? ERROR error uploading charm: cannot update uploaded charm in state: not okForStorage
[15:14] <jcastro> lazyPower, you're on review this week iirc?
[15:14] <lazyPower> jcastro: yeah, it's going to be a repeat of the last 2 weeks - if you run into some high priority stuff please make a note on the board for me and i'll hit it up EOD
[15:14] <jcastro> This afternoon I'll try to hit the easy ones and at least +1
[15:15] <lazyPower> Much appreciated my man
[16:13] <smarter> is there any way to disable apparmor in the local instances to work around http://sprunge.us/XgPJ ?
[16:28] <smarter> oh apparently, the problem was fixed in dbus: https://launchpad.net/ubuntu/saucy/+source/dbus/+changelog
[16:28] <smarter> is there any way to run apt-get update && apt-get dist-upgrade on the lxc containers spawned by juju, before anything else?
[16:31] <marcoceppi> smarter: you can run juju add-machine to enlist a bunch of machines for deploying to, then run an ssh loop to run those commands
[16:33] <smarter> okay that could work, is there any way to automate that?
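One way to script marcoceppi's suggestion: enlist the machines up front, then loop over them with `juju ssh`. This is a sketch — the machine numbers are hypothetical, and `juju` is stubbed with an echo when it isn't on PATH so the sketch can be dry-run:

```shell
# Sketch of marcoceppi's suggestion: add machines, then loop over them
# with `juju ssh` to upgrade packages before deploying anything.
# Machine numbers are hypothetical; adjust them to your environment.
# When juju isn't installed, stub it out so this can be dry-run.
command -v juju >/dev/null 2>&1 || juju() { echo "[dry-run] juju $*"; }

# Enlist three empty machines (containers, under the local provider).
for i in 1 2 3; do
    juju add-machine
done

# Upgrade each machine before deploying charms to it.
for m in 1 2 3; do
    juju ssh "$m" 'sudo apt-get update && sudo apt-get -y dist-upgrade'
done
```

Note that `juju add-machine` assigns machine numbers itself, so in practice you would read them back from `juju status` rather than hard-coding them as above.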
[17:00] <smarter> wait actually saucy already has dbus 1.6.12-0ubuntu10 so that's probably not the cause of my problem
[17:11] <jcastro> marcoceppi, reminder to put your juju logging plugin thing in github pls
[17:16] <smarter> marcoceppi: I can't even juju ssh to the lxc to disable apparmor, because ssh to lxc containers is broken...
[17:16] <marcoceppi> smarter: no it's not. What version of juju are you using?
[17:16] <marcoceppi> that was fixed in 1.18
[17:16] <smarter> 1.16.x
[17:16] <smarter> ah
[17:16] <smarter> maybe I should try 1.18 again, but it broke other stuff iirc
[17:16] <smarter> sigh
[17:17] <lazyPower> marcoceppi: i wish juju pprint was included by default
[17:17] <lazyPower> such a handy plugin
[17:21] <jcastro> ERROR state/api: websocket.Dial wss://10.0.3.1:17070/: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "juju-generated CA for environment \"local\"")
[17:22] <jcastro> anyone  see that before? Fresh deploy on LXC on trusty
[17:22] <jcastro> lazyPower, actually I'd prefer normal juju status to be not so wordy by default
[17:25] <jose> guys, what should be the process in case I want to be the maintainer of a charm?
[17:27] <marcoceppi> jose: is the charm unmaintained?
[17:28] <jose> marcoceppi: no, but the maintainer hadn't pushed a fix for ~1.5 years
[17:28] <marcoceppi> jose: we don't really have good criteria for what constitutes an unmaintained charm, which one is it?
[17:29] <jose> marcoceppi: owncloud
[17:38] <jcastro> jose, I would snag it, but send nathwill a courtesy email
[17:38] <jose> jcastro: will do
[17:47] <cory_fu> Why do I occasionally get this error: http://pastebin.ubuntu.com/7256493/
[17:47] <cory_fu> Seems like it's failing on the log command
[17:47] <cory_fu> But it only happens every once in a while
[18:04] <smarter> okay, so I have this error http://sprunge.us/XgPJ when trying to do a local deployment from a precise host to either saucy or trusty instances
[18:04] <smarter> but raring works fine
[18:05] <jamespage> this might be a dumbass question but why, when I do juju upgrade-charm for a charm deployed from a local branch, does juju not just replace what's on disk in the units with what I have locally?
[18:12] <marcoceppi> it pretty much does that, in that it zips the contents, uploads to the juju storage service, then has all the units pull the zip down, extract it over your CHARM_DIR, and run the upgrade-charm hook
[18:12] <marcoceppi> jamespage: ^
[18:12] <jamespage> marcoceppi, hmm
[18:14] <jamespage> marcoceppi, I'm having to use --switch to force a wholesale replacement of the charm
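The iteration loop being discussed looks roughly like the sketch below — the repository path and charm name are hypothetical examples, and `juju` is stubbed with an echo when absent so the sketch can be dry-run:

```shell
# Hypothetical local-charm iteration loop (path and charm name are
# examples, not from the log). Stub juju when absent for dry-runs.
command -v juju >/dev/null 2>&1 || juju() { echo "[dry-run] juju $*"; }

# Initial deploy from a local charm repository.
juju deploy --repository="$HOME/charms" local:precise/mycharm

# ...edit the charm under ~/charms/precise/mycharm...

# Normal path: re-zip, upload, extract over CHARM_DIR, run the
# upgrade-charm hook on every unit.
juju upgrade-charm --repository="$HOME/charms" mycharm

# jamespage's workaround: force a wholesale replacement of the charm.
juju upgrade-charm --switch=local:precise/mycharm mycharm
```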
[19:01] <cory_fu> Getting an error when trying to commit in bzr:
[19:01] <cory_fu> bzr: ERROR: Cannot lock LockDir(http://bazaar.launchpad.net/~johnsca/charms/precise/apache-allura/refactoring-with-tests/.bzr/branch/lock): Transport operation not possible: http does not support mkdir()
[19:02] <cory_fu> How do I fix that?
[19:04] <cory_fu> I did set my launchpad-login already
[19:10] <cory_fu> Ah.  bzr bind lp:... instead of just setting the push branch, fixed it
[19:20] <cory_fu> I'm now consistently getting this error: http://pastebin.ubuntu.com/7257083/
[19:20] <cory_fu> The juju-log helper works fine, and then a few steps later the hook suddenly fails.
[19:20] <cory_fu> Here is the hook for reference: http://bazaar.launchpad.net/~johnsca/charms/precise/apache-allura/refactoring-with-tests/view/head:/hooks/install
[19:26] <cory_fu> mbruzek, can you perhaps help me with that ^?
[19:26] <mbruzek> Yeah
[19:26] <mbruzek> Just dug up the url for you
[19:26] <mbruzek> cory_fu, https://docs.google.com/a/canonical.com/document/d/19JyDGyiVqFi4K66yCp3iYH-TbrZvBmZceq-LnmPbj0w/edit#
[19:26] <cory_fu> Thanks
[19:43] <mbruzek> cory_fu, Do you want to get on a G+ hangout ?
[19:43] <cory_fu> Sure
[19:46] <timrc> marcoceppi, All but one of my services are starting when I bootstrap and deploy a new local environment.  Looks like juju is only able to start one machine.  The rest sit in pending forever.  Have you encountered this recently?
[19:46] <timrc> marcoceppi, experiencing this with both 1.17.7 and 1.18.1
[19:46] <marcoceppi> timrc: I have not, but I have not used local in quite a while
[19:46] <timrc> ah hm
[19:46] <marcoceppi> timrc: what does all-machine.log in ~/.juju/local/log look like?
[19:48] <timrc> marcoceppi, I see a lot of this: machine-0: 2014-04-15 19:39:05 WARNING juju.worker.instanceupdater updater.go:231 cannot get instance info for instance "": no instances found
[20:19] <timrc> marcoceppi, http://pastebin.ubuntu.com/7257403/ -- I removed just the bit of log that shows the one service deploying to the only machine that started
[20:20] <timrc> marcoceppi, machine-0 and machine-1 seem to start without issue, but machine-2 and on do not... no mention of them in the log either
[20:36] <timrc> marcoceppi, Looks like you're having some connection issues... did you get my pastebin?
[21:15] <lazyPower> jcastro: VEM charm is promulgated
[21:15] <lazyPower> any other high priority items in the queue that need a callout? otherwise i'm going top to bottom
[21:16] <jcastro> lazyPower, those mysql ones would be nice
[21:16] <lazyPower> jcastro: the 3 you linked earlier have already been rev'd/acked
[21:17] <jcastro> oh awesome, <3
[21:17] <lazyPower> :) Context switching all day my man. i know what's up
[21:28] <jose> lazyPower: hey, mind reviewing an MP I just did to unattended-upgrades? it's pretty simple and straightforward, just 3 lines modified
[21:28] <jose> well, 4
[21:28] <lazyPower> jose: link to MP?
[21:28] <jose> lazyPower: https://code.launchpad.net/~jose/charms/precise/unattended-upgrades/add-categories-readme-markdown/+merge/215966
[21:30] <lazyPower> jose: nix the $'s from the commands in the markdown
[21:30] <lazyPower> use CODE output, which you have, and i'll ack this
[21:31] <jose> ok, I'll remove it
[21:34] <jose> pushed
[21:41] <jamespage> marcoceppi, figured out my problem - an old maas install suffering from the nonces issue was creating problems
[21:43] <marcoceppi> huh
[21:50] <jamespage> gnz
[21:50] <jamespage> lazyPower, thanks for the review
[21:50] <lazyPower> jamespage: thanks for the high quality submission
[21:50] <lazyPower> :)
[21:51] <jamespage> lazyPower, can't take all the credit - yolanda and ivoks both worked on that one as well
[21:51] <lazyPower> the merge history was looonnnnggg on that one.
[21:53] <jamespage> lazyPower, fwiw that charm also works just fine on trusty as well
[21:53] <jamespage> it can deal with 12.04 rabbit and 14.04 rabbit
[21:54] <jose> lazyPower: thank you :)
[21:55] <marcoceppi> jamespage: good to know
[22:00] <lazyPower> hey mbruzek
[22:00] <mbruzek> yes?
[22:01] <lazyPower> i'm down to the new tomcat submission. Just so i'm 100% clear - this is a brand new charm intended to deprecate the tomcat6 and tomcat7 charms, correct?
[22:01] <lazyPower> not an update to either/or
[22:01] <mbruzek> new charm based on Robert Ayres' code
[22:01]  * lazyPower cracks knuckles
[22:01] <lazyPower> here we go then, let the review...COMMENCE!
[22:01] <lazyPower> btw ty for this <3
[22:02] <jose> :P
[22:07] <fro0g> is there a Juju changelog document somewhere describing what's new in the different versions?
[22:10] <jose> fro0g: I think http://changelogs.ubuntu.com/changelogs/pool/universe/j/juju-core/juju-core_1.18.1-0ubuntu1/changelog is what you're looking for
[22:17] <fro0g> jose: indeed, thanks!
[22:40] <smarter> is "juju debug-log" supposed to work with juju 1.18 and local deployments now?
[22:40] <rick_h_> smarter: I think with 1.19 now
[22:41] <smarter> ok :)
[23:30] <lazyPower> Congrats to mbruzek on his newly promulgated Tomcat charm! This is another example of a high quality submission - https://bugs.launchpad.net/charms/+bug/1295710
[23:30] <_mup_> Bug #1295710: Create a new Apache Tomcat Juju Charm. <Juju Charms Collection:Fix Released> <https://launchpad.net/bugs/1295710>
[23:37]  * jose claps
[23:38] <davecheney> lazyPower: what happens if something dies while obtain_tomcat_lock is held ?
[23:41] <lazyPower> davecheney: actually - good question. It's a sentinel file, so you'd have to manually release the lock.
[23:41] <davecheney> lazyPower: what does the lock, lock ?
[23:43] <lazyPower> davecheney: as I understand the source, it locks tomcat from doing things while it's reconfiguring. So the app server continues to churn away while the hooks do their thing; afterwards it recycles the app server
[23:43] <davecheney> lazyPower: this could be fixed by running each hook in that big block in a ( ) block
[23:44] <lazyPower> why "fixed"? I don't think there's anything wrong with it.
[23:45] <lazyPower> i may be getting tripped up on semantics though - if that's a feature you would want out of the tomcat charm, i'd file it so the author can implement it.
[23:46] <davecheney> if a hook fails
[23:46] <davecheney> it'll leave tomcat locked
[23:46] <davecheney> how can an administrator see this?
[23:47] <lazyPower> let me redeploy it and evaluate the behavior
[23:48] <lazyPower> davecheney: also thanks for the feedback
[23:48] <davecheney> lazyPower: no probs
[23:59] <lazyPower> davecheney: subsequent runs take control of the file descriptor lock, and if a hook fails, it's shown in the status output
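A rough illustration of the `( )` subshell idea davecheney raised: hold the lock on a file descriptor inside a subshell so the kernel releases it automatically when the subshell exits, even if the hook body fails partway through. The tomcat charm itself uses a sentinel file; `flock(1)` is swapped in here purely for illustration, and the lock path is made up:

```shell
# Sketch of the `( )` subshell fix: the lock lives on fd 9, which is
# closed (and thus released) when the subshell exits for any reason.
# (flock(1) used for illustration; lock path is hypothetical.)
LOCK=/tmp/tomcat-charm.lock

(
    flock -n 9 || exit 1
    echo "reconfiguring tomcat"     # hook body would run here
    false                           # simulate the hook failing mid-run
) 9>"$LOCK"

# The failed subshell is gone, and so is its lock -- a later hook run
# can acquire it immediately instead of finding tomcat stuck locked:
( flock -n 9 && echo "lock released" ) 9>"$LOCK"
```

With a plain sentinel file, by contrast, a crashed hook leaves the file behind and an administrator has to delete it by hand, which matches lazyPower's description above.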