[02:03] lazyPower: do you have some time to review the charm now? [02:03] going over it as we speak [02:03] my brain is burnt from homework and it needs some relaxing [02:03] cool [02:03] have you started classes already? [02:03] i thought you weren't headed back until next month [02:03] yeah, 8 days ago [02:03] right on [02:19] lazyPower: any updates? [02:20] jose: i'm a little concerned that we are version locking the deployment [02:20] hmm, let me check something and I'll be back in a minute [02:20] (possible answer to that) === wendar_ is now known as wendar [02:21] KABOOM! [02:22] owncloud-latest is the same as the latest version, not only for upgrades [02:22] let me fix that part of the charm (or see if I can do it) [02:25] looks like a simple fix [02:29] jose: md5 checking as well, so what i think we need to do is either pull it from the source tree and provide git tag/branch checkout [02:30] or we need to default to from source and provide ppa installation options since there is a suse build tree [02:30] this will help "future proof" the charm [02:30] and we can work on the plugin manifests getting disabled from there [02:30] hmm, I was thinking of pulling both owncloud-latest.tar.bz2 and owncloud-latest.tar.bz2.md5 to get the latest version on install [02:33] hmm... [02:33] ok i'll allow it [02:33] * lazyPower does the juju blessing on jose's efforts [02:33] :P [02:34] ok, let's test-deploy [02:35] jose: link me the diff [02:35] i've got a 4.x already populated with plugins and data ready to test the migration [02:35] lazyPower: if you do upgrade-charm it should be good to go [02:36] the only move I'm doing from the branch pushed is the install hook [02:36] so you're only changing the package names in the source? [02:36] on the install hook, yes [02:36] ah ok, got it [02:36] just re-merged and i saw the change [02:37] cool! 
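[Editor's note: a minimal sketch of the install-hook idea discussed above — fetch owncloud-latest.tar.bz2 plus its published .md5 and verify before unpacking. The download URL is an assumption; `verify_md5` is the reusable piece. As noted later in the log, this only works while upstream keeps the .md5 file current.]

```shell
#!/bin/sh
# Hypothetical source URL (not confirmed in the log):
SOURCE="https://download.owncloud.org/community/owncloud-latest.tar.bz2"

verify_md5() {
    # $1 = downloaded tarball, $2 = path to the published .md5 file
    # The .md5 file's first field is the expected hash.
    expected=$(cut -d' ' -f1 "$2")
    actual=$(md5sum "$1" | cut -d' ' -f1)
    [ "$expected" = "$actual" ]
}

# In the install hook this would look roughly like:
#   wget -q "$SOURCE" "$SOURCE.md5"
#   verify_md5 owncloud-latest.tar.bz2 owncloud-latest.tar.bz2.md5 || exit 1
#   tar xjf owncloud-latest.tar.bz2
```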
[02:39] jose: have you noticed that after you deploy the unit, and set a username, you're still prompted to create a user on first run? [02:39] lazyPower: yep, I think those variables should be eliminated [02:40] we have to keep them around due to backwards compatibility. There's either a) a way to fix them, or b) we need to update the config.yaml to note they are deprecated [02:40] my inclination is towards a) [02:41] ok, let me check that before I do the deploy [02:41] jose: awesome. validated the upgrade is non-destructive [02:42] and looking at the plugin manifests, they change from version to version, so the re-enablement of the plugins is a non-issue from what i can see. i'm still nosing about [02:43] and your charm upgrades are compliant with: http://doc.owncloud.org/server/6.0/admin_manual/maintenance/update.html [02:43] * lazyPower thumbs up [02:43] cool [02:43] now I'm checking that username/password thing [02:44] let me try again with a multi-user install, and validate we aren't going to hurt anyone that's got more than one user, since that's the last thing i can think of as an edge case [02:45] lazyPower: did you do the deploy as a standalone instance or with mysql? [02:45] both [02:46] because as far as I can see here, the admin and pass config options only work with mysql, they're called for on the db-relation-changed hook [02:46] right [02:46] it's called out in the README [02:47] tbh i had not reviewed the readme this go-around. That's why it wasn't nacked on the first review. [02:47] so, disregard the instructions above [02:49] ok, no touching user/pass! [02:49] deploying a fresh instance with the charm [02:52] lazyPower: a bit offtopic, do you know how to make 'juju stauts' an alias for 'juju status'? 
:P [02:53] not with the space no, but i have however set an alias as js='juju status' [02:55] juju stauts is a common typo of mine [02:57] i'd just alias it to something shorter to type [02:57] all my common commands are shortened like that in a ~/.bash_aliases file i version [02:57] yep, js may be a good idea [03:03] alright, i've validated with nfs, mysql, and standalone options from -current to your proposed MP [03:03] everything seems to be in order [03:03] houston, we have a problem [03:03] what'd you find? [03:03] yesterday I found a bug on owncloud itself, the -latest.tar.bz2.md5 hash is not updated, which would cause an endless loop in the md5 check [03:04] we would need to hardcode the md5 until they fix it [03:04] leave it as the orig tarball [03:04] i'll have another charmer check in the morning, and once it's acked i'll merge it [03:04] like 6.0.2 instead of latest? [03:04] yeah, you said 6.0.2 is the same as latest at present [03:04] yep [03:05] I'd need to change that in both install and upgrade-charm then [03:05] you can revert the patch that applies the -latest [03:05] oh, there isn't, I did everything in one shot [03:06] I'll just push [03:07] let me do a final test [03:11] i just left a comment that it looks good to me. I don't want to merge this as it's 11:10, and i'm pretty tired. I may have missed something but i checked it with all the configurable relationship options [03:11] i'm confident this is a high quality patch and will have no problems being merged [03:11] it's good :) [03:11] great work Jose [03:11] thanks lazyPower :) [03:12] alright man, that's me. [03:12] i'm out [03:12] you have a good night! 
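[Editor's note: the alias trick discussed above, sketched out. An alias name can't contain a space, so 'juju stauts' can't be aliased directly; a short alias in ~/.bash_aliases works, and a small wrapper function can rewrite the typo. The `fix_typo` helper is invented here for illustration.]

```shell
# ~/.bash_aliases — short alias, as lazyPower described:
alias js='juju status'

# Maps the common 'stauts' typo to 'status'; everything else passes through.
fix_typo() {
    case "$1" in
        stauts) echo status ;;
        *) echo "$1" ;;
    esac
}

# Hypothetical wrapper: rewrites the subcommand before calling the real juju.
juju() {
    sub=$(fix_typo "$1")
    shift
    command juju "$sub" "$@"
}
```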
=== vladk|offline is now known as vladk === vladk is now known as vladk|offline === CyberJacob|Away is now known as CyberJacob === liam_ is now known as Guest89383 === CyberJacob is now known as CyberJacob|Away === vladk|offline is now known as vladk === vladk is now known as vladk|offline [08:49] !list === vladk|offline is now known as vladk === vladk is now known as vladk|lunch [11:51] hello, i'm trying to get the LXC Provider to work for me. I have had some trouble with lxc but everything seemed to work, until! i reboot the machine (laptop). I then cannot connect to my cloud anymore. If i destroy and bootstrap the environment again it works. but, shouldn't that environment survive reboots? The error i get after reboot of Host Machine http://dpaste.com/1776409/ [12:04] hi all, I'm trying to install the Vagrant image with Juju pre-installed : precise-server-cloudimg-amd64-juju-vagrant-disk1.box from https://juju.ubuntu.com/docs/config-vagrant.html but it looks like they're no longer available. Has the link changed? [12:07] will1: i was having the same problem. I'm trying the ones for raring now: http://cloud-images.ubuntu.com/vagrant/raring/current/raring-server-cloudimg-amd64-juju-vagrant-disk1.box [12:09] manuel_: I thought I looked in the raring directory. but that's cool, raring will work for me. many thanks, much appreciated :-) === vladk|lunch is now known as vladk [13:45] jose, while we finish up the review, you want to do the honors blogging the announcement of the updated owncloud charm? [13:52] jose, lazyPower: hey the owncloud guys publish ubuntu packages [13:53] that means we could use those instead of the tarballs, so we don't have to update the charm on every owncloud release [13:53] I'll file a bug [13:54] jcastro: yeah from the suse build service right? 
[13:54] i was looking for a PPA [13:54] yeah [13:55] https://bugs.launchpad.net/charms/+source/owncloud [13:55] we can resolve some of these bugs [13:59] jcastro: I'd love to, but I'm at university now, if you give me a couple hours I'll be happy to do it [13:59] jose, I mean eventually! === mattgriffin is now known as mattgriffin-afk [16:24] What could be the reason that a juju status shows all agent-state as down on my machines? [16:44] I'm trying to use juju-local.. my bootstrap works, but then when I try to run juju-deployer it tries to deploy, but I get stuck with juju status looking like http://paste.ubuntu.com/7235656/ [16:44] any ideas? [16:49] I'm back home, blogging now [16:49] cjohnston: it looks like lxc can't start virtual machines, do you have your cgroup mounted and all dependencies installed? [16:51] cjohnston: maybe you could try with this command: sudo lxc-create -t ubuntu [16:51] to see if lxc is working [16:51] avoine: I installed juju-local.. I do have other containers that I have created and they work [16:51] cjohnston, is this a trusty host deploying precise? [16:51] gnuoy: yes [16:51] cjohnston, I raised https://bugs.launchpad.net/juju-core/+bug/1306537 this morning [16:52] <_mup_> Bug #1306537: LXC provider fails to provision precise instances from a trusty host [16:52] gnuoy: ack.. things are working for ev and vila, I guess I'm the unlucky one [16:53] cjohnston, well, if you get a fix I'd love to hear about it === vladk is now known as vladk|offline [16:53] avoine: I don't see any docs about other deps or cgroups or anything like that? [16:54] cjohnston, have you tried deploying a trusty unit out of interest? 
[16:55] gnuoy: just the bootstrap [16:55] I can fire up cs:trusty/ubuntu no problem [16:55] let me see if I can get trusty to work [16:57] fwiw this is what my lxc env looks like having deployed a trusty and precise service http://pastebin.ubuntu.com/7235731/ [16:57] #2 is exactly what mine was looking like [17:02] gnuoy: how long did it take for cs:trusty/ubuntu to come up? [17:04] gnuoy: it came up.. :-/ [17:04] pretty much instant [17:12] gnuoy: did things work for you two days ago? [17:12] jcastro, lazyPower: would http://software.opensuse.org/download/package?project=isv:ownCloud:community&package=owncloud work well for replacing the tarball on the owncloud charm? [17:13] cjohnston, I couldn't say exactly when it broke but it was pretty recent [17:13] jose: it could [17:13] and probably should. Offer it as an option [17:13] gnuoy: there were a few lxc related changes that landed yesterday [17:13] to use the PPA or to use Source [17:13] lazyPower: got it! [17:13] there are going to be nuanced differences in the packages [17:13] i believe the ppa will install it to /usr and symlink, i may be wrong. [17:14] gnuoy: sorry.. a few days ago [17:16] I'll check how it works and then modify the charm if it seems appropriate [17:18] jose, yeah that's what I was thinking [17:18] cool, I'm deploying an ubuntu instance and will check how it behaves [18:02] Woo, looks like we have changes on the vagrant images [18:02] Reminder, charm school in ~60 minutes [18:03] 'how to approve an MP jose is about to submit' === roadmr is now known as roadmr_afk === CyberJacob|Away is now known as CyberJacob [18:29] hey jcastro, i think our charm school is going to need to be cancelled. I've run into a blocker. Do we have another idea we can sub in? [18:30] do we have a mac available? 
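[Editor's note: the "offer the PPA as an option" idea discussed above would surface as a charm config option. A hypothetical config.yaml fragment — the option name and values are invented for illustration, not taken from the charm:]

```yaml
options:
  install-source:
    type: string
    default: tarball
    description: |
      Where to install ownCloud from. "tarball" keeps the current
      upstream-tarball behaviour; "ppa" uses the openSUSE Build Service
      packages for Ubuntu instead, so the charm needs no update on every
      ownCloud release.
```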
[18:30] so hey basically, the juju-specific box went away [18:30] so our docs don't even link to the right boxes right now (fix is in a PR, just added it) [18:31] lazyPower, can you resolve your issue in the next 15 minutes? [18:31] But those boxes don't have juju installed and bootstrapped already [18:31] jcastro: actively trying to do so [18:31] it's installed, just not bootstrapped === vladk|offline is now known as vladk [18:32] marcoceppi, the boxes disappeared, I don't think Ben is working today so I have no idea where they're supposed to be [18:32] i'm getting lxc container errors on this box when i attempt to boot services. this is emanating from both the precise64 and trusty64 box [18:32] let me fetch a 32box and see if they exhibit the same behavior [18:34] man, the entire vagrant page in the docs is depending on the juju boxes existing [18:35] I wonder how long they have been missing and we didn't notice. :-/ [18:36] I'd like to punt this to next friday, we need to get these images back from ben [18:36] I mean, we could do it with the vanilla image, but we'd end up doing port forwarding config and a bunch of metawork before we even get started [18:37] jcastro: that's not going to be awesome [18:37] jcastro: the containers on precise32 are forever pending... still waiting to see if there's a change [18:37] but it's not looking good [18:37] ok so let's punt one week, I'd rather not suck. [18:38] i'm game for that idea. We can even do a how to MacGyver your own juju dev environment [18:38] which i wouldn't mind doing [18:38] it's fun [18:39] ok we have a machine up. the boot time was 8 minutes on this vagrant image with 2gb of ram allocated [18:41] hey so, idea [18:41] is there a place we can run a test that returns every link in j.u.c/docs that 404s? [18:42] jcastro: just on time before the tweet went off [18:43] for that you mean, every link on that page or also on the subpages? [18:44] I mean every link in the docs, all of them. 
[18:44] so that when something moves or breaks, we know about it [18:44] oh, automatically [18:44] I could've used a link checker manually [18:45] jcastro: i have a utility i wrote [18:45] let me fish that up after i'm out of this meeting [18:47] yeah it just needs to be automatic and on a production box, not some VPS, etc. [18:48] maybe on the same box that builds the docs [18:48] evilnickveitch, what do you think? [18:48] jcastro: it's a ruby script, so it'll probably get nacked [18:49] yup [18:49] I'll ask around, maybe someone on webops can recommend something, surely we can't be the only ones with broken URLs [18:54] jcastro, yes, I think I mentioned this months ago. There is a lint.py tool in the tools dir that checks for bug refs [18:54] ideally we should add it to that [18:54] ah! Perfect [18:54] I'll file a bug [18:55] jcastro, ok [19:18] jcastro, cool - I just discovered the footer has a load of broken links [19:18] nice! [19:37] jcastro: marcoceppi: can either of you help me with swift? [19:37] I'm trying to learn how to set up a bucket on canonistack [19:37] * jcastro is dumb wrt. swift === roadmr_afk is now known as roadmr === CyberJacob is now known as CyberJacob|Away [20:23] I'm getting a connection refused error whenever I try to juju status or juju destroy-environment on my local env [20:23] Should I manually delete .juju/local and .juju/environments/local.jenv? [20:37] lazyPower: do you have a pointer to your "clean up local provider files" [20:38] @cory_fu2 first off do a lxc ls [20:39] make sure there are no running containers left behind. The script we wrote was pretty destructive on active environments and didn't segregate against anyone doing work with another platform like docker. 
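[Editor's note: returning to the automated doc link check discussed earlier in the log (the real check was expected to land in the docs' lint.py; this shell version just sketches the idea). `extract_links` pulls absolute hrefs out of a built HTML page; `check_links` then reports any that return 404. Both function names are invented here.]

```shell
#!/bin/sh
# Pull absolute (http...) hrefs out of an HTML file, de-duplicated.
extract_links() {
    grep -o 'href="http[^"]*"' "$1" | sed 's/^href="//; s/"$//' | sort -u
}

# HEAD-check every link in the page and report the dead ones.
check_links() {
    extract_links "$1" | while read -r url; do
        if [ "$(curl -s -o /dev/null -w '%{http_code}' "$url")" = "404" ]; then
            echo "BROKEN: $url"
        fi
    done
}
```

Run against every built page in a cron job on the docs-build box, this would catch cases like the missing Vagrant boxes and the broken footer links mentioned above.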
[20:40] cory_fu2: aside from that, you manually delete some files and there's an AU post on it, let me fish that up [20:40] http://askubuntu.com/questions/403618/how-do-i-clean-up-a-machine-after-using-the-local-provider [20:41] @cory_fu2 ^ === Ursinha is now known as Ursinha-afk === Ursinha-afk is now known as Ursinha === cory_fu2 is now known as cory_fu === hatch__ is now known as hatch] === hatch] is now known as hatch === vladk is now known as vladk|offline [22:34] after updating to juju 1.18 and a clean local bootstrap i tried to deploy apache2 but i'm getting this error in machine 1 "agent-state-info: '(error: container failed to start)'" [22:34] marcoceppi: after updating to juju 1.18 and a clean local bootstrap i tried to deploy apache2 but i'm getting this error in machine 1 "agent-state-info: '(error: container failed to start)'" [23:23] themonk: 1.18.1 was just released [23:24] try that and see if it's still broken [23:36] lazyPower: have a second? === timrc is now known as timrc-afk [23:50] marcoceppi: ping
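[Editor's note: the local-provider cleanup steps from the AskUbuntu post above, sketched as a guarded script. Per lazyPower's warning, the destructive part should never run while containers still exist; `safe_to_clean` is an invented helper, and the exact file list comes from the discussion, not from official docs.]

```shell
#!/bin/sh
# Guard: $1 is the output of `sudo lxc-ls`; an empty listing means no
# containers are left behind and it's safe to delete the state files.
safe_to_clean() {
    [ -z "$1" ]
}

# Intended usage (not run here — requires sudo and destroys local state):
#   if safe_to_clean "$(sudo lxc-ls)"; then
#       rm -rf ~/.juju/local ~/.juju/environments/local.jenv
#   else
#       echo "LXC containers still present; destroy them first" >&2
#   fi
```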