[00:03] Noticed when creating a new charm with the python template, charm-helpers is installed via PyPI. Does that obviate the need for charm_helpers_sync and charm-helpers.yaml?
[00:19] sebas538_: want to see something cool?
=== CyberJacob is now known as CyberJacob|Away
[01:31] According to http://www.percona.com/doc/percona-xtradb-cluster/5.5/installation.html, percona-xtradb-cluster 5.5 does not work with AppArmor, yet an AppArmor profile is created when deploying the percona-cluster charm. Could this possibly be the reason why my cluster is not coming up?
[01:34] designated: it's doubtful, that's a core openstack-charmers charm
[01:39] designated: check for DENIED lines in the logs
=== CyberJacob|Away is now known as CyberJacob
[01:53] sarnold: which logs specifically?
[01:54] designated: /var/log/syslog or /var/log/audit/audit.log (if auditd is installed)
[01:55] also it doesn't matter what I configure ha-bindiface with, it still seems to use the addresses from a different interface when configuring wsrep_cluster_address=gcomm:// in my.cnf
[01:58] sarnold: auditd isn't running and there are no DENIED messages anywhere in /var/log/syslog
[01:58] designated: okay, then it's not apparmor's fault :)
[01:59] Evening everyone.
[02:01] can anyone confirm whether the configuration of "ha-bindiface" within the charm is in any way associated with what's supposed to get configured for wsrep_cluster_address in my.cnf?
[02:02] the charm seems to be configuring wsrep_cluster_address with the addresses resolved from the hostnames, which may be a problem with the multicast requirement.
[02:05] also, should corosync_bindiface under the hacluster charm match the vip_iface for the percona-cluster charm?
[02:05] i need more documentation.
[02:08] https://bugs.launchpad.net/charms/+bug/1245095 ???
[02:08] Bug #1245095: rabbitmq-server charm ha-bindiface default breaks rabbitmq-hacluster subordinate charm
[02:08] Maybe or not maybe?
[02:13] LinstatSDR: possibly, I'll look into that.
[02:15] Okay. Sorry I can't help you, designated.
=== CyberJacob is now known as CyberJacob|Away
=== kadams54 is now known as kadams54-away
=== kadams54-away is now known as kadams54
=== urulama is now known as urulama|out
=== liam_ is now known as Guest72494
[07:26] .
=== erkules_ is now known as erkules
=== CyberJacob|Away is now known as CyberJacob
=== lifeless_ is now known as lifeless
[10:17] hi
[10:55] hi mwak o/
[10:56] how are you marcoceppi ?
[10:56] good, and you?
[10:58] good
[10:58] looking to have snappy on Online Labs
=== CyberJacob is now known as CyberJacob|Away
=== kadams54 is now known as kadams54-away
=== kadams54-away is now known as kadams54
=== kadams54 is now known as kadams54-away
=== CyberJacob|Away is now known as CyberJacob
[13:48] jcastro: i'm going to have a conference call with the rackspace enterprise account lead on using juju for deployment on rackspace cloud. i'm wondering if there is any progress on that topic?
[13:49] jcastro: i'm referring to the question and information you provided on askubuntu http://askubuntu.com/questions/166102/how-do-i-configure-juju-for-deployment-on-rackspace-cloud
[13:56] schkovich: Rackspace's API is still too far off trunk AFAIK
[13:56] marcoceppi: they are suggesting using Heat instead of juju
[13:57] however in my understanding those are different tools
[13:57] schkovich: I'm sure they are, and it is a bit different. Heat is OpenStack-specific for starters; it's more like CloudFormation for Amazon
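Going back to the percona-cluster question above: a minimal sketch of the AppArmor check sarnold suggests at [01:39] and [01:54]. The log paths are the standard Ubuntu locations; the aa-status call is an extra step, not from the discussion, for confirming whether a mysqld/percona profile is actually loaded:

    # Look for AppArmor denials in the usual places
    grep DENIED /var/log/syslog
    grep DENIED /var/log/audit/audit.log   # only present if auditd is installed

    # List loaded profiles and their enforcement mode; if no mysqld/percona
    # profile shows up here, AppArmor is unlikely to be the culprit
    sudo aa-status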
[13:57] and i don't like to be pushed ;)
[13:57] there's a bit of overlap, sure, but that's because we're both operating in the orchestration namespace
[13:58] Is it possible to change the timestamp in Juju logs from UTC to the TZ used by the machine?
[13:59] yeah, there is overlap no doubt
[14:00] what i will try to find later today is whether rackspace private cloud would play nicely with juju
[14:00] otherwise i will have no other option but to persuade my company to move to ubuntu cloud ;)
[14:00] wallyworld: I'm not really convinced that https://github.com/juju/juju/pull/1323 really fixes the problem.
[14:01] wallyworld: the code still assumes that the ubuntu user uses /home/ubuntu, for example. It should use ~ubuntu, etc.
[14:01] wallyworld: and IIRC, the original issue is that the "ubuntu user" in a local environment means something completely different. It should be ignored completely in the local environment case.
[14:02] rbasak: it fixes the issue of determining if the ubuntu user exists before doing a chown. in the master branch "id ubuntu" is used instead of grepping the passwd file. but i see that ~ubuntu is better than /home/ubuntu
[14:03] wallyworld: it should not be attempting to do a chown on anything in ~ubuntu (or /home/ubuntu) whether or not the ubuntu user exists, when in a local environment.
[14:03] In a local environment, the "ubuntu" user is not special. It's just another user.
[14:03] OTOH, in a cloud environment, the "ubuntu" user definitely is special and it's fine for Juju to use/clobber it.
[14:04] wallyworld: you're just swapping one failure case for another here.
[14:04] i see, that does make sense. it may not be trivial to implement
[14:04] wallyworld: now, if I add "adduser ubuntu" on my laptop, but try a local environment from my "rbasak" user, it'll still fail.
[14:05] that's sort of an edge case you think?
[14:05] i.e. i would prefer not to block 1.21
[14:05] I think it's probably more common than you expect.
[14:05] we can fix for 1.22
[14:05] For example, when running the installer for a desktop system, I might just type "ubuntu/ubuntu" to get started. I used to do that, actually.
[14:05] The failure case that caused people to hit this bug seems to be similar.
[14:05] schkovich: you could still use Rackspace, you'd just have to write some code around juju to talk to their API
[14:06] but that's a bit of extra work
[14:06] wallyworld: sure, I'm not asking you to block anything. Just please consider that patch to be a workaround that swaps one failure case for another, and not a proper fix.
[14:06] wallyworld: sorry to pop into the conversation, but perhaps uid=$(id -u ${jetty_user}) could be more suitable than grepping the passwd file
[14:06] rbasak: given 1.21 is already really late, we will definitely fix properly for 1.22
[14:06] The standard way is "getent passwd <username>"
[14:07] That's quite common in maintainer scripts.
[14:07] marcoceppi: i'm using the manual environment for the moment, which is working
[14:07] in master, i use "id ubuntu", but can change to getent
[14:07] I don't see an actual grep of the passwd file though.
[14:07] id should be no worse I think.
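The user checks being compared above, side by side; a sketch only, assuming a stock Ubuntu system:

    # getent consults NSS, so it also finds users from LDAP/SSSD and the like,
    # which is why it's the standard form in maintainer scripts:
    if getent passwd ubuntu >/dev/null; then
        echo "ubuntu user exists"
    fi

    # 'id' behaves equivalently for an existence check and exits non-zero
    # for a missing user:
    id ubuntu >/dev/null 2>&1 && echo "ubuntu user exists"

    # Resolving the home directory from the passwd entry, rather than
    # hard-coding /home/ubuntu (rbasak's ~ubuntu point):
    ubuntu_home=$(getent passwd ubuntu | cut -d: -f6)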
[14:07] schkovich: right, so you could build around that, there are examples of this with DigitalOcean and Online Labs
[14:08] schkovich: https://github.com/kapilt/juju-digitalocean
[14:08] schkovich: yeah, the passwd-file grep is a bit hacky
[14:08] rbasak: so to summarise, no chown for the local provider
[14:08] schkovich: I mean, it's more work but it's possible to build on top of juju for stuff not directly supported in juju yet
[14:09] wallyworld: for now, that's fine. In the long term, don't touch the ubuntu user or ~ubuntu at all in the local provider.
[14:09] marcoceppi: i will take a look into that
[14:10] rbasak: ok, will do. what will ship in 1.21 is that the chown will still be done, but based on the ubuntu user existing, not a /home/ubuntu check. agreed that's suboptimal as you say
[14:10] wallyworld: ack. Thank you for working on this!
[14:10] will fix properly in 1.22
[14:10] marcoceppi: unfortunately there is pressure to get things done :(
[14:10] np, sorry for not getting it right first up
[14:10] it was a last minute fix based on comments i saw in the bug
[14:10] schkovich: that's understandable, I may take a look at this over the holiday break myself, but I don't have a rackspace account atm
[14:12] marcoceppi: i can't give u access to the company account, but it sounds like an interesting project i could work on in my free time
[14:13] schkovich: oh, I wouldn't need/want access. I'd just open an account. Simply stating it would be a learning curve
[14:14] marcoceppi: if you do, please contact me, i would luv to get involved
[14:15] marcoceppi: same applies the other way around, if i start working on a rackspace plugin i will contact you :)
[14:15] schkovich: awesome, I'm always in here, feel free to give me a ping
[14:16] marcoceppi: i will, since i'm not a regular here u can google me by nick
[14:19] you could do the manual provider with rackspace
[14:19] but then again, it's manual and that takes away like half the reason
[14:19] jcastro: i already did that :)
[14:20] jcastro: and i agreed with marcoceppi that it would be nice to have a rackspace plugin
[14:21] jcastro: we might start the project over the coming holidays
[14:22] jcastro: i will try to get as much information as possible later today while on the call with the rackspace guys
[14:22] jcastro: information on the rackspace api of course :)
[14:24] brb
[14:24] if you need help the core guys are in #juju-dev
[14:37] jcastro: thank you for the tip :)
[15:49] Morning guys.
[15:57] o/ LinstatSDR
[15:57] hi :)
=== CyberJacob is now known as CyberJacob|Away
[17:54] mwenning: ping
[17:55] lazypower, pong
[17:56] lazyPower, pong
[17:59] ahoy mwenning, are you still working on the dell openmanage charm?
[18:01] lazyPower, I've been tied up with other stuff. When I get a chance to try it, juju usually foils my attempts :-(
[18:01] aisrael: ^
[18:01] thanks mwenning, was just circling back as we came across the bug during triage
[18:02] mwenning: Would it be okay to assign the open MP to you in Launchpad?
[18:02] mwenning: if so, what's your lp name?
[18:03] lazyPower, understand. I'm not happy about it, but Dell certs and bugs are 1st priority.
[18:03] mwenning
[18:03] MP?
[18:03] mwenning: judgement free zone here, i completely understand priorities :)
[18:04] lazyPower, 1) what is MP, 2) any progress on the amulet bug?
[18:04] 1) Merge Proposal - but i think aisrael meant bug.
[18:04] 2) i don't recall the bug - can you refresh me?
[18:05] mwenning: lazyPower: Yes, sorry. https://bugs.launchpad.net/charms/+bug/1325700
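Returning to the Rackspace thread from earlier: schkovich's working setup at [14:07] is the manual provider. A rough sketch of that workflow on juju 1.x, assuming an environments.yaml entry named rackspace-manual whose bootstrap-host points at an existing Rackspace instance (the environment name and addresses are placeholders):

    # Bootstrap onto the pre-provisioned bootstrap-host over ssh
    juju bootstrap -e rackspace-manual

    # Enlist further existing Rackspace machines by address
    juju add-machine ssh:ubuntu@10.20.30.41 -e rackspace-manual

    # Deploy onto an enlisted machine as usual
    juju deploy mysql --to 1 -e rackspace-manual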
[18:05] Bug #1325700: New Charm: Dell OpenManage Server Administrator (OMSA)
[18:06] lazyPower, https://bugs.launchpad.net/amulet/+bug/1375344
[18:07] Bug #1375344: Amulet fails to bring up a machine to run relation-sentry running openmanage charm
[18:07] mwenning: i'd say so! amulet no longer uses relation sentries.
[18:07] tvansteenburgh: we need to do a triage on the launchpad bugs for amulet
[18:08] lazyPower, okeydokey.
[18:09] lazyPower: ack
[18:09] lazyPower, IA I can give it a try this week again.
[18:09] mwenning: sorry about the lack of response on that bug - i think we moved all the bugs over to github
[18:15] lazyPower: o/
[18:15] sebas5384: o/
[18:16] sebas5384: did you see the video i shot to the list? Seems like it's right up your alley
[18:16] lazyPower: didn't see it :(
[18:16] paste it here again please :)
[18:16] https://www.youtube.com/watch?v=bCvl-TsxVXA&feature=gp-n-y&google_comment_id=z12rdtcw0zyxifibb04cfv0pbwq4h5jy1j4
[18:16] lazyPower: oooh yes! i saw your email about it
[18:17] but didn't see the video yet
[18:17] :)
[18:17] i'm gonna watch it then
[18:21] lazyPower, did we put your new video up on insights yet?
[18:21] jcastro: not that i'm aware of
[18:26] lazyPower, always ping me when you push a new vid so I can add it
[18:26] lazyPower, any plans on adding etcd and flannel charms to proper trusty?
[18:26] ack, will do jcastro. i'll add it to my workflow
[18:26] yep, it's part of a master plan that's brewing
[18:29] lazyPower: nice!!
[18:35] sebas5384: glad you liked it :)
[18:40] lazyPower: do you think flannel can do the networking of the containers created by juju in the vagrant flow?
[18:41] lazyPower: i tried some other stuff but i can't make it work
[18:41] definitely my knowledge about networking is really poor
[18:41] sebas5384: it only enables private networking via the tun/tap device - it hasn't solved public-interface reachability. And it's using the same basic principle you were describing to me
[18:41] i think the better bet here, that's fully portable with our vagrant experience, is a vpn service on the vagrant machine
[18:42] aisrael: you've recently done some work on this, what do you think?
[18:42] lazyPower: yeah, i installed this https://openvpn.net/index.php/access-server/download-openvpn-as-sw.html
[18:42] sebas5384: aisrael recently published an article about networking with vagrant on yosemite, did you see it?
[18:43] but i didn't test it yet
[18:43] lazyPower: no oO !
[18:43] where!?
[18:43] http://www.adamisrael.com/blog/2014/12/12/sshuttle-workaround-for-os-x-10-10-yosemite-juju-and-vagrant/
[18:43] looks like neither option is required on osx
[18:43] it's routing
[18:44] hummmm
[18:45] * sebas5384 reading
[18:48] lazyPower: so after that route i can ping the private container ip?
[18:48] sebas5384: as i understand it, it's setting up a route that will direct all 10.0.3.x traffic to the LXC bridge in the vagrant image
[18:49] i haven't tested it myself, but it's generated a bit of buzz since he published the article
[18:49] lazyPower: of course!! it's what i've been looking for since i ran the first vagrant up with juju in it
[18:49] hehe
[18:49] i'm going to test it!
[18:50] * sebas5384 testing
[18:50] let us know how it works out for you - if it performs as expected - let's get some feedback on the list :)
[18:50] or maybe some of that social network lovin <3
[18:51] lazyPower: yeah! sure :)
[18:55] lazyPower: sebas5384 Yes, please let me know if you run into any trouble with it!
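A sketch of the workaround in aisrael's post, as lazyPower describes it at [18:48]: point 10.0.3.x traffic at the vagrant box so it reaches the LXC bridge. The box's host-only address (172.16.250.15) and the forwarded ssh port (2222) are assumed here to be the juju-vagrant defaults and may differ on your box:

    # Option 1: a static route on the OS X host, sending the LXC subnet
    # to the vagrant box's host-only address (assumed 172.16.250.15)
    sudo route -n add -net 10.0.3.0/24 172.16.250.15

    # Option 2: sshuttle, tunnelling the same subnet over the box's
    # forwarded ssh port (vagrant's default is 2222)
    sshuttle -r vagrant@127.0.0.1:2222 10.0.3.0/24

    # Either way, the containers should now answer directly
    ping 10.0.3.1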
[19:03] Hi, I have an idea for a new charm
[19:05] I'm using the MMS Automation Agent to deploy and manage my mongodb cluster. juju could provide the machines, install the mms agent and configure it
[19:06] Tug: it's completely possible
[19:06] I was using the mongodb charm but it still isn't stable enough for me. MMS proved to work really great
[19:07] and then I have access to the monitoring and backup services, so that's a bonus
[19:10] marcoceppi, yes and it's a really simple charm as there is a debian package for the agent. The charm could then check that each machine can communicate with the others, that the directory for the database is created, etc.
[19:27] Tug: that would be an excellent charm for the store, i'm sure 10gen would love to see something like that
[19:29] the problem is optimizing the machine power
[19:29] How so?
[19:30] config servers could be deployed on less powerful machines
[19:30] mongod needs persistent storage and lots of RAM
[19:30] whereas mongos might be optimized for cpu
[19:30] well, that's part of the benefit of using constraints when you're deploying your machines - if you were to build the MMS Agent charm - you can then specify lower constraints for the config servers
[19:31] and make that the standard by offering up a bundle of the configuration
[19:31] juju deploy my-mms-charm configsvr --constraints "mem=1G root-disk=8G" is an easy way to specify this declaratively on the command line
[19:32] ah yes you can have multiple services defined for the same charm, I forgot
[19:32] you're still deploying the same charm, you just have a different name - so your peering will be different, but if you're exposing the relationships properly between the services it shouldn't be an issue at all. And I do believe that MMS configures all this during runtime anyhow, as they're doing reconciler-based mongo orchestration.
[19:34] one last issue is that the type of instance that will be deployed on the machine is not configurable by the agent. It can only be done from the mms web interface at the moment.
[19:34] maybe we should ask mongodb for this feature first
[19:35] Tug: sounds reasonable
[19:37] lazyPower, I'm going to file the request in mongodb's jira right now :)
[19:52] hey jcastro
[19:52] look at this https://github.com/rethinkdb/rethinkdb/issues/3410
[19:53] yeah I suggested that
[19:53] oh, i thought this was organic
[19:53] * lazyPower snaps
[19:53] i got real excited there for a second
[20:03] lazyPower: It Freaking Works !!
[20:03] GREAT SUCCESS \O/
[20:03] * sebas5384 excited!
[20:03] now that's what i was talking about :D
[20:04] this is going to change everything around here :)
[20:04] sebas5384: Excellent!
[20:04] thanks aisrael !!!
[20:04] sebas5384: My pleasure! I love how fast it is interacting with my juju containers now, too.
[20:04] that should be in the juju vagrant box
[20:05] sebas5384: That's something I'm going to look into.
[20:05] aisrael: yeah i would love to see more love around containers with juju
[20:05] sebas5384: oh yeah?
[20:05] aisrael: good to know
[20:06] sebas5384: hang out with us in #juju-edge, we're talking about workloads and containers and that story
[20:06] lazyPower: yes! like using lxd directly or something like that
[20:06] lazyPower: done!
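To make lazyPower's constraints point at [19:31] concrete: the same charm deployed under several service names, each with constraints matched to its mongo role. The charm name my-mms-charm is the hypothetical one from the discussion, and juju expects lowercase constraint keys with G/M suffixes:

    # One charm, three service names, three sets of constraints
    juju deploy my-mms-charm mongod --constraints "mem=8G root-disk=100G"
    juju deploy my-mms-charm configsvr --constraints "mem=1G root-disk=8G"
    juju deploy my-mms-charm mongos --constraints "cpu-cores=4"

Because each deploy gets its own service name, each role gets its own machines and its own peer relation, exactly as described above.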
[20:18] On a MAAS server, where can I find the partman-auto/method rules referenced in /etc/maas/preseed/preseed_master?
=== kadams54 is now known as kadams54-away
[20:53] drbidwell: no idea, they may be stored in the datastore (postgresql)
[20:59] aisrael: what would be the best approach to ssh into the containers? because the ssh keys aren't on the host :P
[21:00] sebas5384: I'm still doing that from inside vagrant. There might be a way of adding the ubuntu user key to your local keys, but I haven't explored that yet.
[21:01] aisrael: yeah I think there's a way to add an ssh key for juju to manage it
[21:01] https://juju.ubuntu.com/docs/howto-authorised-keys.html ?
[21:06] it's not clear to me how to use it
=== kadams54-away is now known as kadams54
[21:09] I know the vagrant image uses an insecure key, so in theory you could copy the keys to your local ~/.ssh, ssh-add them, and ssh directly into the containers
[21:12] Yes, that'll work.
[21:12] sebas5384: you should be able to add your personal key to the deployment
[21:12] I copied the ssh keys from the vagrant user, renamed them to id_vagrant and ssh-added them.
[21:12] I can then ssh ubuntu@10.0.3.x and access a container
[21:13] so you are using like ssh -i ... ?
[21:14] marcoceppi: yeah! but how?
[21:14] after copying the keys (id_rsa, id_rsa.pub) to ~/.ssh and renaming them, run ssh-add id_vagrant
[21:14] sorry for the noob question
[21:14] aisrael: ohhh I see
=== sebas538_ is now known as sebas5384_
=== mwhudson_ is now known as mwhudson
=== sebas5384_ is now known as sebas5384
[21:46] marcoceppi: Any idea where I could find information about MAAS internal workings, short of reading all of the code?
=== kadams54 is now known as kadams54-away
=== kadams54-away is now known as kadams54
=== kadams54 is now known as kadams54-away
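Putting aisrael's steps from [21:09]-[21:14] together; a sketch run from the host next to the Vagrantfile, assuming the route/sshuttle workaround above is already in place and using an example container address:

    # Copy the vagrant user's (insecure) key out of the box and load it
    vagrant ssh -c 'cat ~/.ssh/id_rsa' > ~/.ssh/id_vagrant
    chmod 600 ~/.ssh/id_vagrant
    ssh-add ~/.ssh/id_vagrant

    # Now ssh straight into a container from the host
    ssh ubuntu@10.0.3.5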