/srv/irclogs.ubuntu.com/2014/12/17/#juju.txt

blrNoticed when creating a new charm with the python template, charmhelpers are installed via pypi. Does that obviate the need for charm_helpers_sync and charm-helpers.yaml?00:03
lazyPowersebas538_: want to see something cool?00:19
=== CyberJacob is now known as CyberJacob|Away
designatedAccording to: http://www.percona.com/doc/percona-xtradb-cluster/5.5/installation.html, percona-xtradb-cluster 5.5 does not work with app armor, yet there is an app armor profile created when deploying percona-cluster charm.  Could this possibly be the reason why my cluster is not coming up?01:31
lazyPowerdesignated: it's doubtful, that's a core openstack-charmers charm01:34
sarnolddesignated: check for DENIED lines in the logs01:39
=== CyberJacob|Away is now known as CyberJacob
designatedsarnold: which logs specifically?01:53
sarnolddesignated: /var/log/syslog or /var/log/audit/audit.log (if auditd is installed)01:54
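sarnold's suggested check, as a runnable sketch. The log line below is synthetic and LOG is an assumed variable; on a real box, point it at /var/log/syslog (or /var/log/audit/audit.log when auditd is running):

```shell
# Sketch of the AppArmor-denial check suggested above.
# The sample log line is synthetic, just to show what a hit looks like.
LOG="${LOG:-/tmp/sample-syslog}"
printf 'kernel: audit: type=1400 apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld"\n' > "$LOG"
grep 'apparmor="DENIED"' "$LOG" || echo "no AppArmor denials logged"
```

If that grep stays empty against the real logs, AppArmor is not what is blocking the cluster.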
designatedalso it doesn't matter what I configure ha-bindiface with, it still seems to use the addresses from a different interface when configuring wsrep_cluster_address=gcomm:// in my.cnf01:55
designatedsarnold: auditd isn't running and there are no DENIED messages anywhere in /var/log/syslog01:58
sarnolddesignated: okay, then it's not apparmor's fault :)01:58
LinstatSDREvening everyone.01:59
designatedcan anyone confirm whether the configuration of "ha-bindiface" within the charm is in any way associated with what's supposed to get configured for wsrep_cluster_address in my.cnf?02:01
designatedthe charm seems to be configuring wsrep_cluster_address with the addresses resolved from the hostnames, which may be a problem with the multicast requirement.02:02
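For reference, the setting under discussion is a line the charm templates into my.cnf; a hand-written sketch (addresses are placeholders, not taken from this log) looks like:

```ini
# Illustrative my.cnf fragment only -- the percona-cluster charm
# generates this line itself; the addresses here are placeholders.
[mysqld]
wsrep_cluster_address=gcomm://10.0.0.11,10.0.0.12,10.0.0.13
wsrep_cluster_name=my_wsrep_cluster
```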
designatedalso, should corosync_bindiface under the hacluster charm match the vip_iface for the percona-cluster charm?02:05
designatedi need more documentation.02:05
LinstatSDRhttps://bugs.launchpad.net/charms/+bug/1245095 ???02:08
mupBug #1245095: rabbitmq-server charm ha-bindiface default breaks rabbitmq-hacluster subordinate charm <openstack> <Juju Charms Collection:Opinion> <https://launchpad.net/bugs/1245095>02:08
LinstatSDRMaybe or not maybe?02:08
designatedLinstatSDR: possibly, I'll look into that.02:13
LinstatSDROkay. Sorry I cannot help you designated.02:15
=== CyberJacob is now known as CyberJacob|Away
=== kadams54 is now known as kadams54-away
=== kadams54-away is now known as kadams54
=== urulama is now known as urulama|out
=== liam_ is now known as Guest72494
marcoceppi.07:26
=== erkules_ is now known as erkules
=== CyberJacob|Away is now known as CyberJacob
=== lifeless_ is now known as lifeless
mwakhi10:17
marcoceppihi mwak o/10:55
mwakhow are you marcoceppi ?10:56
marcoceppigood, and you?10:56
mwakgood10:58
mwaklooking to have snappy on online labs10:58
=== CyberJacob is now known as CyberJacob|Away
=== kadams54 is now known as kadams54-away
=== kadams54-away is now known as kadams54
=== kadams54 is now known as kadams54-away
=== CyberJacob|Away is now known as CyberJacob
schkovichjcastro: im going to have a conference call with rackspace enterprise account lead on using juju for deployment on rackspace cloud. im wondering if there is any progress on that topic?13:48
schkovichjcastro: im referring to question and information you provided on askubuntu http://askubuntu.com/questions/166102/how-do-i-configure-juju-for-deployment-on-rackspace-cloud13:49
marcoceppischkovich: Rackspace's API is still too far off trunk AFAIK13:56
schkovichmarcoceppi: they are suggesting using heat instead of juju13:56
schkovichhowever in my understanding those are different tools13:57
marcoceppischkovich: I'm sure they are, and it is a bit different. Heat is openstack specific for starters, it's more like cloud formations for amazon13:57
schkovichand i don't like to be pushed ;)13:57
marcoceppithere's a bit of overlap, sure, but that's because we're both operating in the orchestration namespace13:57
cjohnstonIs it possible to change the timestamp in Juju logs from UTC to the TZ used by the machine?13:58
schkovichyeah, there is overlap no doubt13:59
schkovichwhat i will try to find later today is if rackspace private cloud would play nicely with juju14:00
schkovichotherwise i will have no other option but to persuade my company to move to ubuntu cloud ;)14:00
rbasakwallyworld: I'm not really convinced that https://github.com/juju/juju/pull/1323 really fixes the problem.14:00
rbasakwallyworld: the code still assumes that the ubuntu user uses /home/ubuntu, for example. It should use ~ubuntu, etc.14:01
rbasakwallyworld: and IIRC, the original issue is that the "ubuntu user" in a local environment means something completely different. It should be ignored completely in the local environment case.14:01
wallyworldrbasak: it fixes the issue of determining if the ubuntu user exists before doing a chown. in the master branch "id ubuntu" is used instead of grepping the passwd file. but i see that ~ubuntu is better than /home/ubuntu14:02
rbasakwallyworld: it should not be attempting to do a chown on anything in ~ubuntu (or /home/ubuntu) whether or not the ubuntu user exists, when in a local environment.14:03
rbasakIn a local environment, the "ubuntu" user is not special. It's just another user.14:03
rbasakOTOH, in a cloud environment, the "ubuntu" user definitely is special and it's fine for Juju to use/clobber it.14:03
rbasakwallyworld: you're just swapping one failure case for another here.14:04
wallyworldi see, that does make sense. it may not be trivial to implement14:04
rbasakwallyworld: now, if I add "adduser ubuntu" on my laptop, but try a local environment from my "rbasak" user, it'll still fail.14:04
wallyworldthat's sort of an edge case you think?14:05
wallyworldie i would prefer not to block 1.2114:05
rbasakI think it's probably more common that you expect.14:05
wallyworldwe can fix for 1.2214:05
rbasakFor example, when running the installer for a desktop system, I might just type "ubuntu/ubuntu" to get started. I used to do that, actually.14:05
rbasakThe failure case that caused people to hit this bug seems to be similar.14:05
marcoceppischkovich: you could still use RackSpace, you'd just have to write some code around juju to talk to their API14:05
marcoceppibut that's a bit of extra work14:06
rbasakwallyworld: sure, I'm not asking you to block anything. Just please consider that patch to be a workaround that swaps one failure case for another, and not a proper fix.14:06
schkovichwallyworld: sorry to pop into the conversation but perhaps uid=$(id -u ${jetty_user}) could be more suitable than grepping the passwd file14:06
wallyworldrbasak: given 1.21 is already really late, we will definitely fix properly for 1.2214:06
rbasakThe standard way is "getent passwd <user>"14:06
rbasakThat's quite common in maintainer scripts.14:07
schkovichmarcoceppi: im using manual environment for the moment which is working14:07
wallyworldin master, i use "id ubuntu", but can change to getent14:07
rbasakI don't see an actual grep in the password file though.14:07
rbasakid should be no worse I think.14:07
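The three approaches mentioned here differ in where they look: grepping /etc/passwd sees only the local file, while id and getent also consult NSS sources such as LDAP. A small comparison sketch (using root so it runs anywhere; swap in ubuntu on a cloud instance):

```shell
# Compare the user-existence checks discussed above, least to most robust.
user="root"
grep -q "^${user}:" /etc/passwd && echo "grep: found"    # local file only
id -u "$user" >/dev/null 2>&1 && echo "id: found"        # consults NSS too
getent passwd "$user" >/dev/null && echo "getent: found" # maintainer-script standard
getent passwd no-such-user >/dev/null || echo "getent: missing user detected"
# getent also yields the real home directory, i.e. ~user rather than a
# hard-coded /home/user:
echo "home: $(getent passwd "$user" | cut -d: -f6)"
```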
marcoceppischkovich: right, so you could build around that, there's examples of this with DigitalOcean and Online-Labs14:07
marcoceppischkovich: https://github.com/kapilt/juju-digitalocean14:08
wallyworldschkovich: yeah, the grep passwd file is a bit hacky14:08
wallyworldrbasak: so to summarise, no chown for local provider14:08
marcoceppischkovich: I mean, it's more work but it's possible to build on top of juju for stuff not directly supported in juju yet14:08
rbasakwallyworld: for now, that's fine. In the long term, don't touch the ubuntu user or ~ubuntu at all in the local provider.14:09
schkovichmarcoceppi: i will take a look into that14:09
wallyworldrbasak: ok, will do. what will ship in 1.21 is that the chown will still be done, but based on the ubuntu user existing, not a /home/ubuntu check. agreed that's sub optimal as you say14:10
rbasakwallyworld: ack. Thank you for working on this!14:10
wallyworldwill fix properly in 1.2214:10
schkovichmarcoceppi: unfortunately there is pressure to get things done :(14:10
wallyworldnp, sorry for not getting it right first up14:10
wallyworldit was a last minute fix based on comments i saw in the bug14:10
marcoceppischkovich: that's understandable, I may take a look at this over the holiday break myself, but I don't have a rackspace account atm14:10
schkovichmarcoceppi: i can't give u access to the company account but it sounds like an interesting project i could work on in my free time14:12
marcoceppischkovich: oh, I wouldn't need/want access. I'd just open an account. Simply stating it would be a learning curve14:13
schkovichmarcoceppi: if i do please contact me, i would luv to get involved14:14
schkovichmarcoceppi: same applies other way around, if i start working on rackspace plugin i will contact you :)14:15
marcoceppischkovich: awesome, I'm always in here, feel free to give me a ping14:15
schkovichmarcoceppi: i will, since im not regular here u can google me by nick14:16
jcastroyou could do the manual provider with rackspace14:19
jcastrobut then again, it's manual and that takes away like half the reason14:19
schkovichjcastro: i already did that :)14:19
schkovichjcastro: and i agreed with marcoceppi that it would be nice to have rackspace plugin14:20
schkovichjcastro: we might start the project over the coming holidays14:21
schkovichjcastro: i will try to get as much information as possible later today while on call with rackspace guys14:22
schkovichjcastro: information on rackspace api of course :)14:22
schkovichbrb14:24
jcastroif you need help the core guys are in #juju-dev14:24
schkovichjcastro: thank you for the tip :)14:37
LinstatSDRMorning guys.15:49
marcoceppio/ LinstatSDR15:57
LinstatSDRhi :)15:57
=== CyberJacob is now known as CyberJacob|Away
lazyPowermwenning: ping17:54
mwenninglazypower, pong17:55
mwenninglazyPower, pong17:56
lazyPowerahoy mwenning, are you still working on the dell openmanage charm?17:59
mwenninglazyPower, I've been tied up with other stuff.  When I get a chance to try it, juju usually foils my attempts :-(18:01
lazyPoweraisrael: ^18:01
lazyPowerthanks mwenning, was just circling back as we came across the bug during triage18:01
aisraelmwenning: Would it be okay to assign the open MP to you in Launchpad?18:02
aisraelmwenning: if so, what's your lp name?18:02
mwenninglazyPower, understand.  I'm not happy about it, but Dell certs and bugs are 1st priority.18:03
mwenningmwenning18:03
mwenningMP?18:03
lazyPowermwenning: judgement free zone here, i completely understand priorities :)18:03
mwenninglazyPower, 1) what is MP , 2) any progress on the amulet bug?18:04
lazyPower1) Merge Proposal - but i think aisrael meant bug.18:04
lazyPower2) i don't recall the bug - can you refresh me?18:04
aisraelmwenning: lazyPower: Yes, sorry. https://bugs.launchpad.net/charms/+bug/132570018:05
mupBug #1325700: New Charm: Dell OpenManage Server Administrator (OMSA) <Juju Charms Collection:In Progress> <https://launchpad.net/bugs/1325700>18:05
mwenninglazyPower, https://bugs.launchpad.net/amulet/+bug/137534418:06
mupBug #1375344: Amulet fails to bring up a machine to run relation-sentry running openmanage charm <Amulet:New> <https://launchpad.net/bugs/1375344>18:07
lazyPowermwenning: id say so! amulet no longer uses relation sentries.18:07
lazyPowertvansteenburgh: we need to do a triage on the launchpad bugs for amulet18:07
mwenninglazyPower, okeydokey.18:08
tvansteenburghlazyPower: ack18:09
mwenninglazyPower, IA I can give it a try this week again.18:09
lazyPowermwenning: sorry about the lack of response on that bug - i think we moved all the bugs over to github18:09
sebas5384lazyPower: o/18:15
lazyPowersebas5384: o/18:15
lazyPowersebas5384:  did you see the video i shot to the list? Seems like its right up your alley18:16
sebas5384lazyPower: didn't see it :(18:16
sebas5384paste it here again please :)18:16
lazyPowerhttps://www.youtube.com/watch?v=bCvl-TsxVXA&feature=gp-n-y&google_comment_id=z12rdtcw0zyxifibb04cfv0pbwq4h5jy1j418:16
sebas5384lazyPower: oooh yes! i saw your email about it18:16
sebas5384 but didn't see the video yet18:17
sebas5384:)18:17
sebas5384i'm gonna watch it then18:17
jcastrolazyPower, did we put your new video up on insights yet?18:21
lazyPowerjcastro: not that i'm aware of18:21
jcastrolazyPower, always ping me when you push a new vid so I can add it18:26
jcastrolazyPower, any plans on adding etcd and flannel charms to proper trusty?18:26
lazyPowerack, will do jcastro. i'll add it to my workflow18:26
lazyPoweryep, its part of a master plan thats brewing18:26
sebas5384lazyPower: nice!!18:29
lazyPowersebas5384: glad you liked it :)18:35
sebas5384lazyPower: do you think flannel can do the networking of the containers created by juju in the vagrant flow?18:40
sebas5384lazyPower: i tried some other stuff but, i can't make it work18:41
sebas5384definitely, my knowledge of networking is really poor18:41
lazyPowersebas5384: it only enables private networking via the tun/tap device - it hasn't solved public-interface reachability. And it's using the same basic principle you were describing to me18:41
lazyPoweri think the better bet here, that's fully portable with our vagrant experience, is a vpn service on the vagrant machine18:41
lazyPoweraisrael: you've recently done some work on this, what do you think?18:42
sebas5384lazyPower: yeah, i installed this https://openvpn.net/index.php/access-server/download-openvpn-as-sw.html18:42
lazyPowersebas5384:  aisrael recently published an article about networking with vagrant on yosemite, did you see it?18:42
sebas5384but i didn't test it yet18:43
sebas5384lazyPower: no oO !18:43
sebas5384where!?18:43
lazyPowerhttp://www.adamisrael.com/blog/2014/12/12/sshuttle-workaround-for-os-x-10-10-yosemite-juju-and-vagrant/18:43
lazyPowerlooks like neither option is required on osx18:43
lazyPowerits routing18:43
sebas5384hummmm18:44
* sebas5384 reading18:45
sebas5384lazyPower: so after that route i can ping the private container ip ?18:48
lazyPowersebas5384: as i understand it, it's setting up a route that will direct all 10.0.3.x traffic to the LXC bridge in the vagrant image18:48
lazyPoweri haven't tested it myself, but it's generated a bit of buzz since he published the article18:49
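The route the article sets up boils down to one command. Both addresses below are assumptions (10.0.3.0/24 is the default lxcbr0 subnet; 172.16.250.15 is the host-only IP the juju vagrant boxes typically use), so this sketch echoes the command instead of running it:

```shell
# Route all LXC-subnet traffic through the vagrant VM (OS X flavour).
# Both addresses are assumptions -- verify with ifconfig inside the box.
LXC_SUBNET="10.0.3.0/24"     # default lxcbr0 subnet
VAGRANT_IP="172.16.250.15"   # assumed host-only IP of the juju vagrant box
echo "sudo route -n add $LXC_SUBNET $VAGRANT_IP"   # echoed, not executed here
```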
sebas5384lazyPower: of course!! it's what I'm looking for since i ran the first vagrant up with juju in it18:49
sebas5384hehe18:49
sebas5384i'm going to test it!18:49
* sebas5384 testing18:50
lazyPowerlet us know how it works out for you - if it performs as expected - lets get some feedback on the list :)18:50
lazyPoweror maybe some of that social network lovin <318:50
sebas5384lazyPower: yeah! sure :)18:51
aisraellazyPower: sebas5384 Yes, please let me know if you run into any trouble with it!18:55
TugHi, I have an idea for a new charm19:03
TugI'm using MMS Automation Agent to deploy and manage my mongodb cluster. juju could be providing the machines, install the mms agent and configure it19:05
marcoceppiTug: it's completely possible19:06
TugI was using the mongodb charm but it still isn't stable enough for me. MMS proved to work really great19:06
Tugand then I have access to the monitoring and backup services so that's a bonus19:07
Tugmarcoceppi, yes and it's a really simple charm as there is a debian package for the agent. The charm could then check that each machine can communicate with the others, that the directory for the database is created etc.19:10
lazyPowerTug: that would be an excellent charm for the store, i'm sure 10gen would love to see something like that19:27
Tugthe problem is optimizing the machine power19:29
lazyPowerHow so?19:29
Tugconfig servers could be deployed on less powerful machines19:30
Tugmongod needs persistent storage and lots of RAM19:30
Tugwhereas mongos might be optimized for cpu19:30
lazyPowerwell, thats part of the benefit of using constraints when you're deploying your machines - if you were to build the MMS Agent charm - you can then specify lower constraints for the config servers19:30
lazyPowerand make that the standard by offering up a bundle of the configuration19:31
lazyPowerjuju deploy my-mms-charm configsvr --constraints="mem=1G root-disk=8G" is an easy way to specify this declaratively on the command line19:31
Tugah yes you can have multiple services defined for the same charm, I forgot19:32
lazyPoweryou're still deploying the same charm, you just have a different name - so your peering will be different, but if you're exposing the relationships properly between the services it shouldn't be an issue at all. And I do believe that MMS configures all this during runtime anyhow as they're doing a reconciler-based mongo orchestration.19:32
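Sketched as a juju-deployer bundle, the split lazyPower describes might look like this; the charm URL, service names, and sizes are all hypothetical:

```yaml
# Hypothetical bundle: the same (imagined) mms-agent charm deployed
# under two service names with different constraints.
mms-cluster:
  services:
    mongod-data:
      charm: cs:~someone/trusty/mms-agent   # placeholder charm URL
      num_units: 3
      constraints: "mem=8G root-disk=100G"
    mongod-config:
      charm: cs:~someone/trusty/mms-agent
      num_units: 3
      constraints: "mem=1G root-disk=8G"
```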
Tugone last issue is that the type of instance that will be deployed on the machine is not configurable by the agent. It can only be done from the mms web interface at the moment.19:34
Tugmaybe we should ask mongodb for this feature first19:34
lazyPowerTug: sounds reasonable19:35
TuglazyPower, I'm going to file the request in mongodb's jira right now :)19:37
lazyPowerhey jcastro19:52
lazyPowerlook at this https://github.com/rethinkdb/rethinkdb/issues/341019:52
jcastroyeah I suggested that19:53
lazyPoweroh, i thought this was organic19:53
* lazyPower snaps19:53
lazyPoweri got real excited there for a second19:53
sebas5384lazyPower: It Freaking Works !!20:03
lazyPowerGREAT SUCCESS \O/20:03
* sebas5384 excited!20:03
sebas5384now that's what i was talking about20:03
sebas5384:D20:03
sebas5384this is going to change everything around here20:04
sebas5384:)20:04
aisraelsebas5384: Excellent!20:04
sebas5384thanks aisrael !!!20:04
aisraelsebas5384: My pleasure! I love how fast it is interacting with my juju containers now, too.20:04
sebas5384that should be in the juju vagrant box20:04
aisraelsebas5384: That's something I'm going to look into.20:05
sebas5384aisrael: yeah i would love to see more love around containers with juju20:05
lazyPowersebas5384: oh yeah?20:05
sebas5384aisrael: good to know20:05
lazyPowersebas5384: hang out with us in #juju-edge, we're talking about workloads and containers and that story20:06
sebas5384lazyPower: yes! like using the lxd directly or something like that20:06
sebas5384lazyPower: done!20:06
drbidwellOn a MAAS server where can I find the partman-auto/method rules referenced in /etc/maas/preseed/preseed_master?20:18
=== kadams54 is now known as kadams54-away
marcoceppidrbidwell: no idea, they may be stored in the datastore (postgresql)20:53
sebas5384aisrael: what would be the best approach to ssh into the containers? because the ssh keys aren't in the host :P20:59
aisraelsebas5384: I'm still doing that from inside vagrant. There might be a way of adding the ubuntu user key to your local keys, but I haven't explored that yet.21:00
sebas5384aisrael: yeah I think there's a way to add an ssh key for juju's to manage it21:01
sebas5384https://juju.ubuntu.com/docs/howto-authorised-keys.html ?21:01
sebas5384it's not clear to me how to use it21:06
=== kadams54-away is now known as kadams54
aisraelI know the vagrant image uses an insecure key, so in theory you could copy the keys to your local ~/.ssh, ssh-add them, and ssh directly into the containers21:09
aisraelYes, that'll work.21:12
marcoceppisebas5384: you should be able to add your personal key to the deployment21:12
aisraelI copied the ssh keys from the vagrant user, renamed to id_vagrant and ssh-added them.21:12
aisraelI can then ssh ubuntu@10.0.3.x and access a container21:12
sebas538_so you are using like ssh -i .... ?21:13
sebas538_marcoceppi: yeah! but how21:14
sebas538_?21:14
aisraelafter copying and renaming the keys (id_rsa, id_rsa.pub) and copying to ~/.ssh, run ssh-add id_vagrant21:14
sebas538_sorry for the noob question21:14
sebas538_aisrael: ohhh I see21:14
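aisrael's workflow, sketched as shell. The source path is an assumption (vagrant keeps its well-known insecure key in ~/.vagrant.d by default; your Vagrantfile may use a per-machine key), and ssh-add only runs when an agent is up:

```shell
# Sketch of the key-reuse workflow described above. Paths are assumptions;
# adjust SRC to wherever your setup keeps the vagrant private key.
SRC="${VAGRANT_KEY:-$HOME/.vagrant.d/insecure_private_key}"
DEST="${DEST:-$HOME/.ssh/id_vagrant}"
mkdir -p "$(dirname "$DEST")"
if [ -f "$SRC" ]; then
  install -m 600 "$SRC" "$DEST"               # copy with key-safe permissions
  [ -n "$SSH_AUTH_SOCK" ] && ssh-add "$DEST"  # load it if an agent is running
fi
echo "now try: ssh -i $DEST ubuntu@10.0.3.x"
```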
=== sebas538_ is now known as sebas5384_
=== mwhudson_ is now known as mwhudson
=== sebas5384_ is now known as sebas5384
drbidwellmarcoceppi: Any idea where I could find information about MAAS internal workings short of reading all of the code?21:46
=== kadams54 is now known as kadams54-away
=== kadams54-away is now known as kadams54
=== kadams54 is now known as kadams54-away

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!