[08:20] <AskUbuntu> Enterprise Support for Juju | http://askubuntu.com/q/488769
[14:40] <jamespage> fwereade, this might seem like an odd question, but would a change to 'private-address' on a relation persist OK?
[14:40] <jamespage> i.e. if I wanted to change it to something other than the juju provided private-address?
[15:10] <Pa^2> Success!
[15:31] <schegi> jamespage, out there??
[15:31] <jamespage> schegi, yup
[15:33] <fwereade> jamespage, yes it would
[15:33] <fwereade> jamespage, we don't overwrite it
[15:33] <jamespage> fwereade, awesome
[15:34] <fwereade> jamespage, and I don't *think* we ever will, I think my last chat with hazmat agreed there
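Since the value persists and juju won't overwrite it, a charm hook can set its own value with the standard relation-set hook tool. A minimal sketch, assuming a relation hook context; the interface name is illustrative:

```shell
#!/bin/sh
# Inside a relation hook: pick an address of our own choosing
# (eth1 here is a hypothetical example interface).
ADDR=$(ip -4 addr show eth1 | awk '/inet /{sub(/\/.*/, "", $2); print $2}')

# relation-set writes to the relation this hook is running for;
# per the discussion above, juju will not overwrite it later.
relation-set private-address="$ADDR"
```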
[15:34] <schegi> thinking about digging a bit into this charm stuff on the weekend in order to modify the ceph/ceph-osd charms so that they support external journal devices. as far as i understood the whole thing up to now, the most relevant things should be the hooks, and the files under upstart provide the configuration, right?
[15:49] <jamespage> schegi, the specific osd journal device is specified when initializing the OSD device in the charm code under hooks
[15:49] <jamespage> osdize_device
[15:49] <jamespage> it calls ceph-disk-* to do that
[15:50] <jamespage> schegi, it's awesome that you want to hack on this!
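For reference, the osdize_device code path described above boils down to a ceph-disk invocation; a hedged sketch of what an external journal would look like at that level (device paths are purely illustrative):

```shell
# ceph-disk prepare takes an optional journal device after the data
# device; pointing it at a separate disk gives an external journal.
ceph-disk prepare --cluster ceph /dev/sdb /dev/ssd1   # data dev, journal dev

# Then activate the newly prepared data partition.
ceph-disk activate /dev/sdb1
```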
[16:00] <jamespage> niedbalski, hmm - I'm not sure that the change to rabbitmq to bump the open files ulimit is working:
[16:00] <jamespage> Max open files            1024                 4096                 files
[16:01] <schegi> yeah, already seen the call to ceph-disk. what's not completely clear to me is the way the charm iterates over the given devices, especially when doing this on different nodes. let's see what i can do next week.
[16:18] <niedbalski> jamespage, i tested that using pam / ssh / login and it worked fine, did you reboot the instance/server?
[16:19] <jamespage> niedbalski, just looking at a completely fresh install - the nfiles limits appear not to be set even before reboot
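One way to check and raise the limit on Ubuntu's rabbitmq-server package without a reboot; a sketch assuming the upstart-era /etc/default mechanism:

```shell
# Check the limit the running broker actually got (beam is the
# Erlang VM process that rabbitmq runs in).
cat /proc/$(pgrep -f beam | head -n1)/limits | grep 'open files'

# The Ubuntu init script sources /etc/default/rabbitmq-server before
# starting the daemon, so a ulimit there takes effect on restart.
echo 'ulimit -n 4096' >> /etc/default/rabbitmq-server
service rabbitmq-server restart
```

Note that pam_limits (limits.conf) only applies to login sessions, which may be why a daemon started at boot keeps the 1024 default.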
[16:46] <frobware> I was trying to deploy wordpress on arm64 but get: "ERROR charm not found: cs:trusty/wordpress". It is available from precise but I get other problems choosing that. Is this simply not supported right now?
[16:48] <sarnold> frobware: correct; you can see a list of charms available for trusty at http://manage.jujucharms.com/charms/trusty
[16:49] <frobware> sarnold: thank you.  I tried mysql (which worked) then thought I'd tie that to wordpress... and got caught out.
[16:50] <sarnold> frobware: you ought to be able to mix trusty mysql with precise wordpress
[16:51] <frobware> sarnold: the error I get is (from juju status):  "3":
[16:51] <frobware>     agent-state-info: '(error: index file has no data for cloud {RegionOne http://192.168.1.100:5000/v2.0}
[16:51] <frobware>       not found)'
[16:51] <frobware>     instance-id: pending
[16:51] <frobware>     series: precise
[16:51] <sarnold> frobware: yikes, that one's beyond me :) sorry
[16:52] <frobware> sarnold: for ARMv8 I'm somewhat surprised to think you could mix precise and trusty. seems too old...
[16:52] <sarnold> frobware: is armv8 arm64?
[16:52] <frobware> sarnold: correct. AKA aarch64
[16:53] <sarnold> the arch so nice they named it thrice
[16:54] <sarnold> frobware: yeah, that looks to be true, precise php5 is only available on amd64, armel, armhf, i386, powerpc
[17:24] <rick_h_> lazypower: around? I need a favor
[17:25] <rick_h_> marcoceppi: or maybe you know, I'm trying to find a couple of charms that have some papercuts that need fixing.
[17:25] <rick_h_> marcoceppi: preferably ones with tests to update with it
[17:38] <frobware> is there a means in the juju-gui to deploy to a machine id when using openstack? I was trying to avoid spinning up / tearing down instances whilst experimenting. You can do it with "juju deploy --to", just wondered if there's a way to do this in the browser.
[17:39] <bac> frobware: not yet, it is under development
[17:39] <frobware> bac, ok, thx.
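For the CLI route mentioned above, --to takes a placement directive (juju 1.x syntax); machine ids are examples:

```shell
# Place the unit on an existing machine instead of provisioning a new one.
juju deploy mysql --to 3

# Or place it in a fresh LXC container on that machine.
juju deploy wordpress --to lxc:3
```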
[17:39] <rick_h_> frobware: we're working on that feature right now. If you go to the url https://xxxx/:flags:/mv you get a new machine view option
[17:39] <rick_h_> that allows deploying a service, and then placing it
[17:40] <rick_h_> frobware: it's not yet complete, but I think it will do what you want in the current state. If this is not a real live/testable environment would love to have you peek at it sometime and any feedback is valuable
[17:40] <frobware> rick_h_: trying now...
[17:41] <rick_h_> frobware: note that this is only on the latest release of the charm and that even better would be to use the charm's config to set the release to 'develop' which will pull from the latest trunk codebase.
[17:41] <rick_h_> juju-gui-source="develop"
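With juju 1.x that charm config option can be set from the CLI; a sketch:

```shell
# Switch an already-deployed juju-gui service to track trunk.
juju set juju-gui juju-gui-source=develop

# Or set it at deploy time via a config file whose juju-gui entry
# contains "juju-gui-source: develop" (filename is an example).
juju deploy juju-gui --config gui-config.yaml
```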
[17:42] <frobware> rick_h_: using your URL with bits, I see "Machines".  I'm assuming 0 (i.e., 1 machine) is where it would deploy to, unless I "Add Machine".  Correct?
[17:42] <rick_h_> frobware: correct
[17:42] <rick_h_> frobware: if you deploy any services using the normal url, they'll show up in your environment there on each machine they're on
[17:43] <rick_h_> frobware: and those machine numbers should match your 'juju status' output
[17:47] <frobware> rick_h_: hehe. so I got a little lost. Have a new machine id "6" for mysql service (which is for oneiric)
[17:47] <rick_h_> frobware: lol, ok. Is that what you wanted to have?
[17:47] <frobware> rick_h_: Machine id 6 listed. Hardware details not listed. Cannot see a way to remove the machine.
[17:48] <rick_h_> frobware: if at any point you can reload the browser window and it should reset
[17:48] <rick_h_> frobware: yea, removing is the task we're working on now :)
[17:48] <rick_h_> frobware: so you have this deployed to the environment or just in your GUI window?
[17:48] <frobware> rick_h_: both, trying to remove via the command line...
[17:49] <frobware> rick_h_: you see, this is why I don't like leaving the church... (terminal)
[17:49] <frobware> rick_h_, :)
[17:50] <rick_h_> frobware: hah, I'm with you. Sorry to use you as a test pilot so. As we're working on this I was curious what someone new would do with it right off.
[17:50] <frobware> rick_h_: right, strike N. Selected mysql (trusty this time). I see it listed under "Unplaced units".
[17:50] <rick_h_> frobware: I'd be curious how you went from 1 machine to machine 6 if you have time to put it in a pastebin or anything, but if not thanks for checking it out briefly
[17:51] <rick_h_> frobware: right, you can then drag/drop it or click the arrow icon next to it to place it on a machine
[17:51] <frobware> rick_h_: is there a log I can cut & paste for you?  I was clicking a bit madly.
[17:51] <rick_h_> frobware: hmm, not really, don't worry about it.
[17:52] <frobware> rick_h_: so I dragged it "right" and ended up with 2 machines. Getting closer. ;)
[17:52] <rick_h_> frobware: woot
[17:53] <frobware> rick_h_: the UI says 2 machines. juju status doesn't. Is there an "apply" button?
[17:53] <rick_h_> frobware: and down at the bottom, if you hit the deploy button it should say it's going to create a machine, deploy mysql, add a mysql unit?
[17:53] <rick_h_> frobware: yep, in that footer
[17:53] <frobware> rick_h_: so now I have "Deployment summary: 1 unit added, 1 machine added."
[17:54] <rick_h_> frobware: ok, and then if you confirm there, it should go do it
[17:54] <rick_h_> frobware: and after a few seconds juju status should show the new stuff going on and in progress
[17:54] <frobware> rick_h_: what does "root-level" machine imply?
[17:54] <rick_h_> frobware: not in an lxc or kvm container on that machine
[17:54] <rick_h_> frobware: but on the base install of it
[17:54] <frobware> rick_h_: so on the host?  ;)
[17:54] <rick_h_> there was an email thread going around on that wording. You just confirmed I'm wrong :)
[17:54] <rick_h_> frobware: yes
[17:55] <frobware> rick_h_: not obvious. root has too (way too) many connotations.
[17:55] <rick_h_> understood, that was the argument against
[17:55] <frobware> rick_h_: so if I confirm now adding 1 machine on the "root" means I will get... 2 machines. or 1?
[17:56] <rick_h_> frobware: so you will get one new machine, and then have 2 total (since you have the bootstrap node/machine #0 currently)
[17:56] <rick_h_> frobware: if I'm understanding where you're at correctly
[17:57] <frobware> rick_h_: yes. but I was hoping/trying to deploy on the machine that is running the UI
[17:57] <rick_h_> frobware: oh! ok. what cloud is this?
[17:57] <rick_h_> frobware: ok, so you've got machine 0, which is the bootstrap node and running the juju-gui currently?
[17:57] <frobware> rick_h_: local deployment of openstack (via devstack) to which I added juju.
[17:58] <rick_h_> frobware: ok, so if you reload the page, and go back to the start, deploy mysql, get an unplaced unit. Then click on machine 0, and then drag/drop the unplaced unit on the 'add container' header you can deploy mysql in a container on that machine
[17:58] <rick_h_> frobware: but now we're running closer to the edge of how much work is released in your version of the GUI vs in trunk but not released yet
[17:59] <frobware> rick_h_: it would be neat if I could drag from the left directly on to the Environment column (and 0 in my case/example)
[18:00] <rick_h_> frobware: to the container column?
[18:00] <frobware> rick_h_: given what I see, the "Environment column".  In there I see "Environment" and below that "0" - which I assume is the node running the UI.
[18:01] <rick_h_> frobware: ok gotcha. Yea, that's the machines. We can't do the 'hulk smash' colocating over the api to juju so we have to walk you through creating a container
[18:01] <rick_h_> frobware: it sounds like we're hitting the limits of what we've got for you to use at the moment vs what you want to do.
[18:02] <frobware> rick_h_: sure. right, let me try once more... if you have the patience!
[18:02] <rick_h_> frobware: so to your original question, we're working on it and getting there, but not quite yet. :)
[18:02] <rick_h_> frobware: sure thing, I'm taking up your time here.
[18:03] <frobware> rick_h_: I have my unplaced unit, click on the arrow indicating left->right, move to... (choosing) 0, choose location: choosing 0/bare metal.
[18:03] <rick_h_> frobware: yea, that'll bomb
[18:04] <frobware> rick_h_: ah!
[18:04] <rick_h_> frobware: juju won't let us, we've removed that option this week
[18:04] <rick_h_> frobware: but not in your version of the GUI codebase
[18:04] <frobware> rick_h_: my clicking is obsolete!
[18:04] <rick_h_> lol
[18:05] <frobware> rick_h_: so can I just drag onto the "0" in the Environment column?
[18:05] <rick_h_> frobware: so click the machine first
[18:05] <frobware> rick_h_: which gives me "0/lxc/new0".  what's that? ;)
[18:05] <rick_h_> frobware: that means it's going to create a new lxc container on machine 0
[18:06] <rick_h_> $machine0/lxc/$container0
[18:06] <rick_h_> kind of thing
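The same placement can be expressed on the CLI; the 0/lxc/N naming above matches what --to produces:

```shell
# Deploy into a new LXC container on machine 0 (the bootstrap node here).
juju deploy mysql --to lxc:0

# juju status will then show the unit on a machine named like 0/lxc/0.
# kvm:0 would be the container alternative mentioned below.
```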
[18:06] <frobware> rick_h_: so can I change the type?
[18:06] <rick_h_> frobware: yes, kvm is the other type
[18:06] <rick_h_> frobware: you should have gotten a drop down to select the type?
[18:07] <frobware> rick_h_: it does drop down, but no choices. I'm just clicking the "0/lxc/new0" bits...
[18:07] <frobware> rick_h_: incidentally, the trash icon doesn't appear to do anything when I click on it.
[18:08] <rick_h_> frobware: hmm, ok so yea that's now a drop down in the latest code.
[18:08] <rick_h_> frobware: yea, the devs are working on the 'remove' right now as we speak
[18:09] <rick_h_> frobware: you can see the diff in trunk http://comingsoon.jujucharms.com/:flags:/mv/
[18:09] <rick_h_> frobware: if you first deploy one service, deploy, and confirm, then deploy a second one, switch to machine view, and you create a container it has a choice
[18:10] <rick_h_> frobware: come back next week? lol
[18:10] <frobware> rick_h_: you're right. time for beer!
[18:11] <rick_h_> frobware: thanks for experimenting though. Great feedback
[18:11] <frobware> rick_h_: one other bit of feedback. Once you click on "deploy" would you (I would??) expect the state to change, i.e., should I be able to click deploy again?
[18:12] <rick_h_> frobware: which deploy do you mean?
[18:12] <rick_h_> frobware: the one in the footer where you get the confirmation step?
[18:13] <frobware> rick_h_: bottom right button.  I click "Deploy", it says "Confirm", I do.  And the state goes back to "Deploy". I wasn't sure, then, whether it had registered my action.
[18:13] <rick_h_> frobware: ah, yes there's a fix to grey it out and change it since after confirmation you've got no pending changes
[18:13] <frobware> rick_h_: and if I do click and confirm again, what is it going to do?
[18:13] <frobware> rick_h_: ^^^ right
[18:13] <rick_h_> frobware: it 'resets' but it only says "deploy" once you have something to deploy
[18:14] <frobware> rick_h_: yep, no pending changes, no possible action.
[18:14] <rick_h_> http://comingsoon.jujucharms.com/:flags:/mv/ note it's greyed out and disabled
[18:14] <rick_h_> by default
[18:15] <frobware> rick_h_: thanks for your help and patience...
[18:15] <rick_h_> frobware: right back at you
[18:15] <frobware> rick_h_: back to my cli. :)
[18:15] <rick_h_> enjoy! /me goes back to his mutt window to do email
[18:31] <lazypower> rick_h_: whats the favor? :)
[18:32] <rick_h_> lazypower: I'm looking for charms with papercuts that I can give new starts to fix
[18:32] <rick_h_> lazypower: as part of a 'welcome to juju' program
[18:32] <lazypower> Oh!
[18:33] <rick_h_> lazypower: so wonder if you guys have anything in your heads, a charm with tests would be great
[18:33] <rick_h_> so that they could get the full experience of debugging, fixing, adding a test, and such
[18:33] <rick_h_> lazypower: but something small, don't want them to spend a week on it
[18:33] <lazypower> ok that's going to be tricky as we have a handful of charms with tests. let me cross ref launchpad with the audit spreadsheet
[18:34] <rick_h_> lazypower: ok, well the test thing is icing on the cake if you've got it
[18:34] <rick_h_> lazypower: I went through the charm bugs but so many look like they might not even apply any more
[18:34] <rick_h_> lazypower: sent you a link
[18:36] <lazypower> yeah this is going to be fun, jose has been really good for sniping a lot of the low hanging fruit
[18:36] <rick_h_> lazypower: ok, well if not no biggie.
[18:37] <rick_h_> lazypower: don't waste a bunch of time on it, but if you think of anything let me know and I'll get a new person on it for you for free :)
[18:37] <lazypower> <3
[18:37] <jose> rick_h_: is there any specific service in which you would like to see tests?
[18:38] <rick_h_> jose: no, this is more about them just having something low hanging to check out a charm, walk through it, debug, and see how they tick
[18:38] <rick_h_> jose: nothing in mind in particular
[18:38] <jose> and that program is going to be like a mentoring program or a show?
[18:38] <rick_h_> jose: I'm trying something new
[18:39] <rick_h_> jose: we've got 2 new starters on monday, and want to intro them to juju before we give them bugs in our code to fix
[18:39] <rick_h_> jose: and looking at and figuring out a charm is part of that intro
[18:39] <jose> well, if you need a hand with mentoring or something similar let me know, if you want to host hangouts also let me know and I'll get you set up with ubuntuonair
[18:39] <rick_h_> jose: naw, thanks though
[18:39] <jose> :)
[20:21] <Pa^2> Any thoughts on why a Nova-Compute might agent-state-info: 'hook failed: "install"' ?
[20:22] <Pa^2> Environment: local v.1.18.4.1
[20:43] <Pa^2> I will put this up if anyone wants to take a look: all-machines.log ( http://paste.ubuntu.com/7712947/ )
[20:43] <Pa^2> ...now it is Miller time.
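The usual way to dig into a failed install hook like the one above; a sketch using standard juju 1.x debugging commands (unit name assumed from the nova-compute service):

```shell
# See the hook's output in the aggregated log.
juju debug-log | grep nova-compute

# Re-run hooks interactively inside the unit to poke at the failure.
juju debug-hooks nova-compute/0

# Or, after fixing the underlying problem, retry the failed hook.
juju resolved --retry nova-compute/0
```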
[22:23] <jose> tvansteenburgh: ping
[23:25] <tvansteenburgh> jose: pong
[23:26] <jose> tvansteenburgh: hey, I'm having some problems with charm test not running the relation hooks, do you know how I might be able to fix it?
[23:26] <tvansteenburgh> paste the test?
[23:26] <tvansteenburgh> i'll have a quick look but fair warning, i'm gonna have to leave in about 5 min
[23:27] <jose> tvansteenburgh: the test itself?
[23:27] <jose> sure
[23:27] <tvansteenburgh> yeah, the test itself
[23:27] <jose> http://bazaar.launchpad.net/~jose/charms/precise/chamilo/add-tests/view/head:/tests/100-deploy
[23:28] <tvansteenburgh> so what behavior are you seeing
[23:29] <jose> tvansteenburgh: there's a db-relation-changed hook on the chamilo charm and it's not being run even though the relation is there
[23:34] <tvansteenburgh> jose: sorry, i don't see anything obviously long, and i have to leave for an appointment. ping me if you figure it out, otherwise i'll dig deeper when i get home later
[23:34] <tvansteenburgh> s/long/wrong/
[23:34] <tvansteenburgh> sorry i couldn't be more help right now, good luck!
[23:34] <jose> tvansteenburgh: not a problem, just wanted to give the heads up :)
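For what it's worth, a couple of hedged ways to check from the CLI whether the relation actually fired (service/unit names taken from the test above):

```shell
# Confirm the relation exists from the chamilo unit's point of view;
# relation-ids is a hook tool, runnable here via juju run's hook context.
juju run --unit chamilo/0 'relation-ids db'

# Watch for the hook executing in the logs while the test runs.
juju debug-log | grep db-relation-changed
```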