[05:02] <nevermam> Thanks arosales and tvansteenburgh..I will go through the charm store best practices, and look forward to the new additions to the review docs by arosales
[07:31] <stub> jamespage, Tribaal : I don't want it to be opt in, as I'm also hoping that juju-log, leader-set and everything else will grow --file for similar reasons.
[07:33] <stub> jamespage, Tribaal : Maybe that requires bug fixes in juju-core. Maybe the haproxy charm is relying on weird quoting behavior it can no longer rely on. It would be nice to know.
[07:33] <stub> c/haproxy/hacluster/
[07:52] <Tribaal> stub: jamespage: I'd like it to be default as well, it just makes more sense to me.
[07:52] <Tribaal> (design the problem away)
[07:57] <stub> Tribaal, jamespage : I've set the bug to incomplete and added a comment. Without an actual example of the problem, it is just going to get broken again.
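The quoting hazard stub is describing can be sketched in plain shell without juju at all: a value containing an embedded newline, quotes, and shell metacharacters survives a file round-trip verbatim, which is the motivation for a hypothetical `--file` option on `leader-set`, `juju-log`, and friends (note: `--file` is being requested in this discussion, not an existing flag).

```shell
#!/bin/sh
# A value with an embedded newline, quotes, and a command substitution,
# the kind of data that gets mangled when passed through extra shell layers.
value='line1
"quoted" $(rm -rf /)'

# File round-trip: write it out and read it back verbatim.
printf '%s' "$value" > /tmp/demo-value
roundtrip=$(cat /tmp/demo-value)

# The round-tripped value is byte-for-byte identical.
[ "$roundtrip" = "$value" ] && echo file-roundtrip-ok
```

Passing the same value as a command-line argument through any intermediate shell evaluation (the bug being discussed for hacluster) risks re-interpreting the quotes and `$(...)`, which a file-based interface designs away.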
[13:16] <arosales> nevermam, thanks for the ping here. Feel free to ping if you have any other questions
[13:35] <arosales> lazyPower, aisrael   upper-- found some interesting bits with vagrant workflow (https://jujucharms.com/docs/devel/howto-vagrant-workflow)
[13:35] <arosales> lazyPower, aisrael https://gist.github.com/anonymous/508af728f342ecc7da56
[13:37] <lazyPower> ah, the genghis workflow. Probably needs some attention, the last time I'm aware that was touched was early 2014. I'm not surprised there's some bitrot there.
[13:37] <arosales> lazyPower, if those comments look ok perhaps upper-- would be interested in a merge request (ref https://jujucharms.com/docs/devel/contributing)
[13:37]  * arosales grabs some coffee
[13:40] <lazyPower> arosales: seems to be in order. If they get a PR issued against the docs to update those bits, aisreal or myself can run through them for validation and get it merged.
[13:40] <lazyPower> upper--: ^ if you've got the time for a PR against the docs, it would be much appreciated.
[13:40] <beisner> coffee indeed...
[13:41] <upper--> lazyPower: sure, ill have a look when i get a minute :)
[13:43] <pmatulis> tried to deploy n=4 nova-compute but 2 got stuck in 'allocating'. tried to remove machines but same 2 machines associated with stuck services now stuck 'pending / dying'. what to do? - http://paste.ubuntu.com/11412201/
[13:52] <mbruzek> pmatulis: I haven't had much experience on maas
[13:52] <mbruzek> pmatulis: But if that were local provider I would think your vm image is corrupted.
[13:53] <mbruzek> pmatulis: Can you `juju destroy-environment maas --force` and start over?
[13:53] <pmatulis> mbruzek: i'll keep that as a last resort. but thanks
[13:55] <mbruzek> pmatulis: OK.  Again it reminds me of a local issue where the lxc images were only half downloaded.   A destroy-environment --force and deleting the cloud images off my machine usually clear up a problem like that.  Since this is maas it may not work.
[13:59] <lazyPower> pmatulis: do your nodes provision properly through MAAS without juju spinning them up?
[13:59] <lazyPower> pmatulis: also, which version of maas/juju?
[14:06] <pmatulis> lazyPower: i haven't tried installing a pure 'buntu with this maas setup yet
[14:07] <pmatulis> lazyPower: maas 1.7.4+bzr3366-0 + juju 1.23.3-0ubuntu1
[14:08] <pmatulis> going to try 'juju remove-machine <#> --force 1'
[13:59] <lazyPower> pmatulis: that should drop it from the juju env and force power it off, you don't need to pass 1 to --force though
[14:09] <lazyPower> juju destroy-machine # --force should do what you want
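The force-removal commands discussed here can be sketched as below; the machine number `5` and environment name `maas` are placeholders for whatever `juju status` shows in your environment.

```shell
# Force-remove a single stuck machine from the environment
# (drops it from juju state and powers it off; no trailing "1" needed):
juju remove-machine 5 --force

# Last resort, as mbruzek suggested above: tear down the whole
# environment named "maas" and start over:
juju destroy-environment maas --force
```

`--force` skips the normal clean shutdown of units on the machine, so it is for machines that are already wedged, not routine removal.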
[14:09] <pmatulis> ah ok
[14:09] <lazyPower> pmatulis: however, i would ensure that booting the machine w/ maas works before you go the juju integration route, that will help you flush out any problems w/ the underlying maas provider
[14:09] <pmatulis> lazyPower: yep, good idea
[14:10] <lazyPower> and if the machines boot properly through maas, we can start looking at the cloud-init logs emitted from the machine to identify the issue juju may be causing. typically this is due to networking config
[14:10] <lazyPower> at least in my experience
[14:10] <lazyPower> but i've also got a weird vmaas setup that is hinky because I like to do things without reading instructions first :)
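A sketch of the log-checking step lazyPower describes, assuming the stuck machine is number `3` (hypothetical) and the standard Ubuntu cloud-init log locations:

```shell
# Tail the cloud-init output on the suspect machine to see where
# provisioning hung (networking config is a common culprit):
juju ssh 3 -- 'tail -n 50 /var/log/cloud-init-output.log'

# Look for explicit errors in the main cloud-init log:
juju ssh 3 -- 'grep -i error /var/log/cloud-init.log'

# Juju's own agent logs on the machine live here:
juju ssh 3 -- 'ls /var/log/juju/'
```

If `juju ssh` cannot reach the machine at all, that itself points at the MAAS/networking layer rather than the charm.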
[14:11] <pmatulis> thing is, 2 of my nova-compute services came up. just 2 got borked
[14:11] <pmatulis> i didn't test anything though, just according to 'juju status'
[14:11] <lazyPower> hmm
[14:12] <lazyPower> that may or may not be symptomatic of the machines being bum in maas
[14:12] <lazyPower> i would still give the broken machines a cold boot directly from the maas ui and see what happens
[14:14] <pmatulis> fyi, entire environment is housed in a single openstack instance (acting as kvm hypervisor)
[14:15] <pmatulis> the load went over 9 on it when i deployed these 4 nova-compute. may be part of the problem. dunno
[15:37] <lazyPower> that's possible pmatulis
[15:37] <lazyPower> if it couldn't get the requested resources the VM boots may have gotten hung up, i know that happens to me occasionally in our over-subscribed openstack instance
[15:38] <lazyPower> typically killing the machine and requesting it again has some merit; it works in 90% of the cases where that happens, but it's not ideal :)
[15:49] <pmatulis> lazyPower: ok, --force worked on those 2 machines; they're gone and the maas GUI shows them as 'Ready' instead of 'Failed something or other'
[15:50] <pmatulis> will try to deploy one at a time
[15:50] <lazyPower> pmatulis: ok good, so it seems like it might have just been intermittent
[15:50] <lazyPower> exactly my suggestion
[15:50] <lazyPower> take it one at a time and see if they come online as they should
[16:04] <evilnickveitch> just FYI, as there is now content for 1.23 in docs (GCE got added earlier) the new default version for stable docs is 1.23
[16:04] <evilnickveitch> Anybody wanting to check out the combination of Juju and Google Compute Engine should take a look at: https://jujucharms.com/docs/stable/config-gce
[16:05] <jcastro> Nice work dude!
[16:08] <evilnickveitch> jcastro, thx
[16:25] <hatch> I have a service that appears to be stuck 'dying' with two units - is there a way I can force it to destroy?
[16:29] <jcastro> without just killing the machine out from underneath it?
[16:30] <hatch> jcastro: trying that right now...doesn't appear to be destroying the machine either
[16:32] <jcastro> I assume you tried to destroy the service first of course?
[16:33] <hatch> jcastro: yeah - it doesn't appear to be doing anything
[16:33] <hatch> no errors
[16:38] <jcastro> anything in the machine logs?
[16:39] <hatch> jcastro: when I try to destroy the unit I get this error
[16:39] <hatch> ERROR no units were destroyed: state changing too quickly; try again soon
[16:39] <hatch> it's been in pending for a long time
[16:39] <jcastro> state changing too quickly sounds odd to me
[16:41] <hatch> yeah... especially since it's not :
[16:41] <hatch> :)
[16:45] <hatch> jcastro: on the machine there is this error in the juju logs
[16:45] <hatch> FATAL: Could not load /lib/modules/3.13.0-53-generic/modules.dep: No such file or directory
[16:45] <jcastro> no clue on that one, pastebin then to the list?
[16:46] <hatch> yeah I can do that
[16:46] <hatch> thanks for the help
[17:25] <lazyPower> hatch: is there a subordinate service on the unit?
[17:25] <hatch> lazyPower: there wasn't
[17:26] <hatch> it was just mysql
[17:26] <lazyPower> weird
[17:26] <lazyPower> is this 1.24?
[17:26] <hatch> 1.24-beta5-trusty-amd64
[17:26] <lazyPower> hmm ok, i'm on beta4 atm
[17:26] <lazyPower> i haven't run into that but i'll keep my eyes peeled
[17:26] <lazyPower> link to bug if you file one plz, i'd like to track that
[17:27] <hatch> well I'm not sure how to reproduce it
[17:27] <hatch> so the bug report is going to suck :)
[17:27] <hatch> but I guess maybe someone can trace it back
[17:32] <firl> anyone able to help me verify if what I am seeing might be a bug or not with my juju / openstack set up?
[17:46] <hatch> lazyPower: https://bugs.launchpad.net/juju-core/+bug/1459761 it's not much but maybe it'll help :)
[17:46] <mup> Bug #1459761: Unable to destroy service/machine/unit <juju-core:New> <https://launchpad.net/bugs/1459761>
[17:47] <marcoceppi> firl: we can try, what are you seeing?
[17:48] <firl> Used juju to deploy juju gui
[17:48] <firl> worked fine, then used juju gui to deploy owncloud. The box came up fine and everything
[17:48] <firl> the security group wasn’t configured to expose the ports though, had to manually configure it
[17:48] <marcoceppi> firl: did you tell juju to expose owncloud?
[17:49] <firl> that would probably do it
[17:49] <firl> nope I didn’t,
[17:49] <marcoceppi> firl: in the GUI if you click on OwnCloud you'll see at the bottom left of the inspector the Expose option. In the command line it's just "juju expose owncloud"
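The expose step marcoceppi describes, as a quick command-line sketch (`owncloud` being the service name firl deployed):

```shell
# Open the ports the owncloud charm declares in its security group:
juju expose owncloud

# Verify: the service should now show "exposed: true" in status output:
juju status owncloud

# And the reverse, if you later want to close it off again:
juju unexpose owncloud
```

This is why editing the security group by hand worked but isn't needed: `expose` is juju's supported way to punch the charm-declared ports through the provider firewall.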
[17:49] <firl> yeah; completely user error
[17:50] <marcoceppi> no worries
[17:50] <upper--> lazyPower: i submitted a pull request for the docs, not sure if i have to tell you or someone will just notice
[17:51] <firl> yep that did it, thanks for letting me waste your time marcoceppi
[17:51] <marcoceppi> firl: it's never a waste! Glad it was something simple
[17:51] <firl> haha
[17:51] <lazyPower> upper--: we'll take a look shortly, thanks for the contribution!
[17:51] <lazyPower> hatch: awesome, cheers
[17:51] <marcoceppi> upper--: #431?
[17:52] <upper--> uhhh..
[17:53] <arosales_> aisrael, have you tried quickstart in vagrant?
[17:53] <upper--> marcoceppi: yes
[17:53] <marcoceppi> upper--: thanks for the contribution, one of us will take a look soon for sure
[17:54] <arosales> if you are having issues with quickstart in vagrant perhaps give deployer a try
[17:55] <arosales> specifically deployer -c <./path/to/bundle.yaml>
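The deployer invocation arosales is suggesting, spelled out (the binary is typically installed as `juju-deployer`, and `./path/to/bundle.yaml` is a placeholder for your actual bundle file):

```shell
# Deploy a bundle with juju-deployer against the current environment;
# -c points at the bundle/deployment config file:
juju-deployer -c ./path/to/bundle.yaml
```

This bypasses quickstart entirely, which is useful for isolating whether the vagrant/quickstart combination or the bundle itself is the problem.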
[17:55] <arosales> upper--, checking with aisrael or tvansteenburgh to see if they have had any issues with quickstart in vagrant
[17:55] <upper--> ill try deployer
[17:57] <arosales> upper--, hmm but your manual deploy should have worked if you mirrored the bundle yaml
[17:59] <arosales> upper--, checking with asanjar on the timing
[18:01] <jose> jcastro: want me to host office hours under ubuntuonair?
[18:06] <jcastro> that sounds awesome to me
[18:06] <jcastro> marcoceppi: you in for office hours?
[18:06] <marcoceppi> sure
[18:07] <jcastro> jose: dang, forgot to post a reminder to the list, on it now.
[18:07] <jose> :P
[18:07] <jose> cool, will be ready by then
[18:13] <aisrael> arosales: upper-- I've used quickstart under vagrant, but not with bundles (successfully)
[18:13] <arosales> aisrael, interesting something we should put a card out for to test
[18:14] <arosales> ie testing vagrant/quickstart/bundles
[18:14] <upper--> ok..
[18:14] <upper--> ill see if i can get maas working properly
[18:14] <arosales> upper--, suggest to stick with deployer -c <bundle-file>
[18:14] <arosales> for now
[18:14] <aisrael> arosales: definitely. I'm finding the documentation re launching from a bundle to be lacking
[18:14] <arosales> upper--, also may be worth comparing to AWS
[18:15] <upper--> arosales: okay.. yeah or aws
[18:15] <arosales> and isolate if it is the charm or environment
[18:15] <upper--> im about done for the day though
[18:15] <arosales> upper--, understood
[18:15] <arosales> been a good whack today
[18:15] <arosales> aisrael, thanks
[18:27] <jcastro> jose: can you make the IRC channel for the webapp go here instead of #u-o-a?
[18:27] <jose> jcastro: definitely, just a matter of changing a line of html
[18:31] <jcastro> jose: thanks for hosting, I was dreading getting made fun of by marco for not doing the onair properly.
[18:32] <jose> what? what did you do to the page?
[18:32] <jose> did you break it again?
[18:32]  * jose scratches head
[19:52] <jose> so, who's gonna join me today?
[19:54] <jose> I know everyone wants to be on air, but don't get too excited
[19:58] <jose> for anyone that wants to join, here's the link: https://plus.google.com/hangouts/_/hoaevent/AP36tYezWWMOvDIBerQVTv3ZjW4XO3xnVhVshfCm-HXeQlkdqnVBtg
[20:00] <jcastro> ok everyone, office hours starts in 30 seconds or so
[20:00] <jcastro> https://plus.google.com/hangouts/_/hoaevent/AP36tYezWWMOvDIBerQVTv3ZjW4XO3xnVhVshfCm-HXeQlkdqnVBtg
[20:00] <jcastro> if you want to participate
[20:00] <jcastro> http://ubuntuonair.com if you want to just listen in
[20:07] <aisrael> ^^ that link isn't working for me. The video says "An error occurred. Please try again later."
[20:07] <jcastro> I'll take questions in here if you have any
[20:08] <jcastro> aisrael: try a hard refresh
[20:08] <aisrael> nada
[20:08] <jcastro> jose: ^^
[20:09] <aisrael> jcastro: it just started working
[20:09] <jose> aisrael: try again, youtube errored because we hadn't started yet (a bit of lag)
[20:09] <aisrael> http://imgur.com/gallery/q9njg40
[20:09] <jose> refresh and try again
[20:09] <hazmat> jcastro: question: are there any good presentations on juju for a business type audience / ie. high level benefits / who's using it / etc
[20:14] <hazmat> lazyPower: nice recovery (re kubes)
[20:15] <hazmat> lazyPower: re kube is there a way (document?) to run kube e2e on the charms?
[20:16] <hazmat> lazyPower: flannel doesn't necessarily do overlays, in some cases it's directly modifying the iaas substrate networking
[20:17] <lazyPower> hazmat: thats fair
[20:17] <lazyPower> hazmat: i wasn't ready at all to give that update, so i shot from the hip
[20:17] <hazmat> lazyPower: kube is targeting 1.0 end of june.. they're already in feature freeze
[20:18] <lazyPower> hazmat: also regarding kube e2e, that's coming in an action. we just landed the petstore validation, so e2e is expected (come pass or fail) by the end of the cycle. I'm guessing in 2 to 3 weeks time.
[20:18] <kwmonroe> the core apache bits we're working on for big data: https://jujucharms.com/u/bigdata-dev/apache-core-batch-processing
[20:20] <jcastro> hazmat: after kevin finishes we'll move on to your questions
[20:20] <jcastro> sorry I missed them the first time
[20:22] <hazmat> jcastro: no worries, lazyPower got to them already.. and re presentations .. email to the list would be great.
[20:24] <hazmat> jcastro: new question.. where is kafka in the big data landscape?
[20:24] <hazmat> i just see mine and samuel's old charms.
[20:24] <hazmat> its an integral part of many data flow pipelines
[20:25] <hazmat> jcastro: what's the state of monitoring services in the charm ecosystem?
[20:25] <jcastro> keep the questions coming, this is awesome!
[20:27] <hazmat> re monitoring what about grafana / statsd / heka / collectd / prometheus?
[20:27] <hazmat> ;-)
[20:27] <hazmat> pls let's all leave nagios in the 90s where it belongs ;-)
[20:27] <jcastro> heh
[20:29] <hazmat> marcoceppi: is the monitors interface blogged about or documented?
[20:29] <hazmat> sounds like it's trying to be a generic interface that different tools could plug into?
[20:29] <hazmat> irc lag kills
[20:34] <hazmat> marcoceppi: what's the state of networking?
[20:35] <hazmat> leader elections.. yipee!
[20:35] <hazmat> everyone rewrite your charms ;-)
[20:36] <hazmat> or even a db upgrade from a common webapp.
[20:37] <hazmat> the container networking in aws is kind of bogus, only works with the default vpc
[20:40] <hazmat> marcoceppi: the aws tagging thing is pretty big as well
[20:40] <hazmat> that's how you get chargeback / measure the cost of an environment in aws.
[20:42] <jrwren> are there docs for how to update charms to use those new states?
[20:44] <hazmat> 'need to know basis' lol
[20:45]  * hazmat translates: i could tell you how to get status output on your env, but i'd have to terminate your environment after
[20:45] <lazyPower> sounds about right
[20:46] <hazmat> sweet!
[20:46] <hazmat> re dockercon, link?
[20:46]  * hazmat already signed up
[20:47] <hazmat> jcastro: link? i don't see anything on insights.canonical.com
[20:48] <jrwren> what was that charm with the status examples?
[20:48] <kev_> What is the best way to bridge the private interface to make it accessible to the public network
[20:48] <hazmat> er.. i mean nothing on insights.ubuntu.com that i see
[20:48] <jcastro> https://insights.ubuntu.com/event/conducting-systems-and-services-an-evening-about-orchestration/
[20:48] <hazmat> jcastro: thanks
[20:50] <jcastro> we haven't pushed it hard yet due to openstack news
[20:50] <jcastro> but we'll be pushing/tweeting it way more over the next few weeks
[20:53] <hazmat> jcastro: tweeted already https://twitter.com/kapilvt/status/604027510624501760
[20:56] <jcastro> <3
[22:13] <marcoceppi> jcastro: you still around?