[04:40] <jose> rick_h_: event runs from 14 to 20 UTC, you can choose your slot!
[07:24] <ezobn> Hi All, after a fresh install of Juno OpenStack on trusty (openstack-origin:trusty-juno) the neutron security group service is not started. How can I run it? And why is there no neutron-server installed on the controller node (installed by the charm)?
[08:02] <vogonpoetry> Trying to make changes to the cinder-vmware charm so it works with another storage driver. Looking at it, I think I mainly need to edit cinder_context to take in the new config keys defined in config.yaml… Is there more to it than that? If so, where else should I look?
[08:03] <vogonpoetry> and many thanks for any help
[09:34] <cargill_> hi, when I deployed a bundle and a service is not started, can I start it from the gui?
[09:36] <cargill_> the debug-log does not mention any errors, however it does not mention starting that either
[09:47] <cargill_> hmm, yet stat says "agent-state: started"
[13:41] <jose> tvansteenburgh: hey, have a min?
[14:03] <tvansteenburgh> jose: yep
[14:03] <jose> tvansteenburgh: quick question, will bundletester deploy the local charm or the one in the store as a default?
[14:03] <jose> cory_fu: those tests for chamilo are now done, should I do an MP against your branch or against the store?
[14:04] <tvansteenburgh> jose: bundletester just finds tests and executes them. what's deployed would depend on what's in the amulet test or the bundle file
[14:05] <jose> tvansteenburgh: got it, thanks
[14:10] <rick_h_> jose: where do I sign up for a slot?
[14:10] <jose> rick_h_: with me
[14:10] <rick_h_> jose: ah ok.
[14:11] <rick_h_> jose: put me down for 15:00 friday?
[14:11] <jose> rick_h_: sure, what would be a short description of the presentation?
[14:12] <rick_h_> jose: I'm going to ping alexisb and see if she wants to dual up again or not.
[14:12] <jose> rick_h_: got it, lemme know! :)
[14:12] <rick_h_> jose: so "What's new and upcoming in the work of Juju UI Engineering" can work for now
[14:12] <cory_fu> jose: If you're happy with the services framework implementation of the charm, I'd say MP it against trunk
[14:12] <rick_h_> jose: and for a description just something about the latest in the progress of the Juju GUI, jujucharms.com, and juju-quickstart.
[14:13] <rick_h_> jose: and if alexisb can join then I'll ask you to amend it to be more general juju
[14:13] <jose> rick_h_: awesome, sec to give you the link...
[14:14] <jose> rick_h_: http://summit.ubuntu.com/uos-1411/meeting/22387/whats-new-and-upcoming-in-the-work-of-juju-ui-engineering/
[14:14] <jose> cory_fu: ok, MP against trunk is open
[14:14] <rick_h_> jose: ty
[14:34] <cory_fu> tvansteenburgh: Why is bundletester pinned to bzr==2.6.0 instead of bzr>=2.6.0?  My system apparently has 2.7.0dev1 and it's complaining
[14:35] <tvansteenburgh> cory_fu: that's how i inherited it, and i haven't tested it with anything else
[14:35] <tvansteenburgh> you didn't install it in a virtualenv?
[14:39] <cory_fu> tvansteenburgh: Yes, but I had to use --system-site-packages to get python-apt to work, which meant that my system-level installed bzr made it so I couldn't install the right version of bzr into the virtualenv
[14:39] <cory_fu> I know I had this working at one point, so I'm not sure how I managed it before
[14:39] <tvansteenburgh> pip install -I
[14:40] <tvansteenburgh> (capital i)
[14:41] <cory_fu> Ah!  That works.  I thought there was such an option, but it's not listed on pip --help
[14:41] <tvansteenburgh> pip help install :)
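The fix tvansteenburgh suggests can be sketched as follows; the `--system-site-packages` flag and the bzr pin come from the conversation above, while the virtualenv path is illustrative:

```shell
# Create the virtualenv with system site-packages visible so the
# system-installed python-apt can be imported from inside it:
virtualenv --system-site-packages ~/venv/bundletester
. ~/venv/bundletester/bin/activate

# -I (--ignore-installed) makes pip install the pinned bzr into the
# virtualenv even though the system copy is already importable:
pip install -I bzr==2.6.0
```

As noted, `-I` is documented under `pip help install` rather than the top-level `pip --help`.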
[14:41] <cory_fu> Gah
[14:41] <cory_fu> :)
[14:41] <cory_fu> Thanks
[14:41] <tvansteenburgh> np
[14:51] <cargill_> hi, is it possible to run juju local environment when the host is an lxc container already?
[14:52] <lazyPower> cargill_: lxc in lxc gets a bit hairy, and isn't really recommended.
[14:54] <LinStatSDR> sorry for the delay, lazyPower is correct.
[14:54] <cargill_> lazyPower: I've been using vagrant until now, but the fact that everything is routed through 10.0.3.1 which breaks some relations, notably postgresql, is quite annoying, not to mention slow and memory-hungry
[14:55] <lazyPower> cargill_: i understand your frustration. aisrael is working with our team that maintains the images to address most of those issues. there's also work being done to provide a docker-based image for the workflow (albeit a much lower priority than fixing vagrant papercuts presently)
[14:56] <cargill_> and I'm running Debian, which juju is not really happy with and I haven't had the time to find out how to fix that yet
[14:56] <aisrael> cargill_: the issue with postgresql and routing should be fixed in the latest vagrant box images
[14:56] <cargill_> I've downloaded one yesterday, is that new enough?
[14:56] <LinStatSDR> cargill_ what version of Debian?
[14:56] <cargill_> jessie
[14:56] <LinStatSDR> okay
[14:57] <lazyPower> cargill_: where did you fetch the box from?  aisrael - have we updated the docs with the latest box url(s)?
[14:57] <aisrael> cargill_: yes. Are you seeing the routing issue with that box?
[14:58] <cargill_> the amd64 one linked in https://juju.ubuntu.com/docs/config-vagrant.html
[14:59] <cargill_> aisrael: it's this issue, isn't it? 'FATAL: no pg_hba.conf entry for host "10.0.3.1"'
[14:59] <aisrael> cargill_: Yep, that's the issue
[15:00] <LinStatSDR> ;(
[15:00] <aisrael> lazyPower: looks like the doc is pointing to an image from August
[15:00] <aisrael> cargill_: Could you try an image from here? http://cloud-images.ubuntu.com/vagrant/trusty/current/
[15:00] <aisrael> I'll get the docs updated
[15:00] <lazyPower> aisrael: i suspected as much, we should try to automate that part of the docs or point them at the /current/ images so it's always up to date
[15:01] <LinStatSDR> juju does need the updating
[15:01] <aisrael> lazyPower: Yeah, it should just point to current. I'll see if we can get that stale image removed, too.
[15:01] <lazyPower> aisrael: offtopic to whats going on here - will you be available to help run a session at UDS over the dev workflow with vagrant? ~ 20 minutes give or take
[15:02] <cargill_> I'd still prefer to try the LXCception approach, with 4GB of memory, VirtualBox is not a nice neighbour
[15:02] <aisrael> lazyPower: depends on the day. My schedule this week is going to be somewhat challenging
[15:03] <cargill_> and I'm mostly after testing out stuff locally, not production use
[15:03] <cargill_> what are the common issues with that approach?
[15:05] <lazyPower> cargill_: I've run into issues with cgroups failing the upstart task - and didn't pursue it any further
[15:05] <lazyPower> cargill_: and to note, i haven't actually tried to run juju within that lxc container
[15:06] <lazyPower> cargill_: however if you come up with a working solution - i'm all for talking to you about your approach and documenting it for science.
[15:07] <LinStatSDR> For science!
[15:07] <cargill_> at the moment, I'm getting 'juju.container.lxc clonetemplate.go:167 container failed to start: container failed to start' in the container, nothing more descriptive in the juju debug-log -l TRACE
[15:08] <cargill_> but I haven't tried to create a container yet, this is where I was before I asked here
[15:08] <lazyPower> cargill_: if i were to guess at the culprit - i'd say it's networking
[15:08] <LinStatSDR> i dislike generic failure debug statements
[15:09] <cargill_> yeah, I'm trying to get all the information there is on LXC in LXC and see if I can get an lxc container start manually, to make sure that works
[15:09] <ktosiek>  /join #nagare
[15:09] <ktosiek> dang it
[15:09] <lazyPower> cargill_: https://www.stgraber.org/2012/05/04/lxc-in-ubuntu-12-04-lts/ - container nesting.
[15:09] <LinStatSDR> hmm 12.04
[15:09] <lazyPower> the info is a bit old :(
[15:10] <LinStatSDR> Yes, needs a bit of updating lol
[15:11] <lazyPower> https://help.ubuntu.com/lts/serverguide/lxc.html#lxc-basic-usage - that may be more up to date.
[15:11] <lazyPower> under nesting - it talks about a profile to be used for nesting
[15:11] <lazyPower> lxc.aa_profile = lxc-container-default-with-nesting
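For reference, enabling that nesting profile on an existing container is a one-line config change plus a restart; a sketch assuming the stock Ubuntu 14.04 LXC layout, with `<container>` as a placeholder name:

```shell
# Allow this container to run containers of its own (nesting):
echo 'lxc.aa_profile = lxc-container-default-with-nesting' | \
    sudo tee -a /var/lib/lxc/<container>/config

# The AppArmor profile is applied at container start, so restart it:
sudo lxc-stop -n <container>
sudo lxc-start -n <container> -d
```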
[15:12] <LinStatSDR> LazyPower, that is much newer. For 14.04
[15:13] <cargill_> ok, so the host being debian, I don't have to do this part, right? since systemd manages its cgroups, would it interfere?
[15:13] <LinStatSDR> Does it "have" to be debian :D
[15:14] <cargill_> LinStatSDR: kinda has, yes, I'm not reinstalling my machine just to have juju running on it :)
[15:15] <cargill_> (although I can get rid of systemd, it is a bit painful sometimes)
[15:15] <LinStatSDR> Hehe, I'm just busting your chops cargill_. That, and I seem to have fewer "problems", if you will, running Ubuntu for juju
[15:16] <lazyPower> cargill_: since we're not officially moving to systemd for another few cycles, i don't think anyone here has really worked with juju under a systemd supervisor.
[15:16] <lazyPower> so we won't have much in terms of info in that regard
[15:17] <cargill_> yeah, I thought as much, thanks anyway
[15:18] <LinStatSDR> =)
[15:19] <cargill_> does the lxc.mount.auto = cgroup line go into /etc/lxc.conf or the container's own config file in /var/lib/lxc/<container>/config?
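A hedged note on that open question: in LXC 1.x the per-container config file is the usual place for such settings, while the system-wide default config only seeds containers created afterwards; a sketch, with `<container>` as a placeholder:

```shell
# Per-container setting: append to the container's own config file,
# which affects that existing container on its next start:
echo 'lxc.mount.auto = cgroup' | sudo tee -a /var/lib/lxc/<container>/config

# /etc/lxc/default.conf, by contrast, is only copied into the configs
# of containers created after the change; it does not touch existing ones.
```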
[16:10] <lazyPower> mhall119: ping - we've got all 5 of our sessions in pending on summit.
[16:13] <lazyPower> oo, i missed one. 6 sessions
[16:16] <mhall119> lazyPower: in pending? then you need a track lead to approve them
[16:16] <lazyPower> mhall119: do I ping antonio for that? sorry for my daftness - this is my first time doing the scheduling.
[16:16] <mhall119> jose: gaughen: marcoceppi: ^^
[16:16] <mhall119> lazyPower: antonio or one of the ones I just pinged
[16:16]  * lazyPower facepalms
[16:16] <lazyPower> you sent that in the email - sorry
[16:17] <mhall119> it's okay :)
[16:28] <cargill_> hmm, juju-gui is started, but keeps on responding with a 301 to https:// + whatever I put in the Host header, then does not want to speak TLS, is there something I'm missing?
[16:29] <lazyPower> rick_h_: have you guys done any experimentation with the gui being nested in lxc? I think i know the answer to this...
[16:30] <lazyPower> rick_h_: for clarity - Debian host => Parent JUJU Environment container => juju gui is container in a container.
[16:30] <cargill_> ah, it was trying to redirect me to port 443, just that it got confused
[16:31] <rick_h_> lazyPower: no, the big thing is how do you get to it via the network? It needs to have the ability to hit the juju api websocket
[16:31] <rick_h_> cargill_: cool, yea one of the guys on the team is currently working on making the charm take a port as a config param to run on
[16:31] <rick_h_> which should help it colocate better with other services soon. Should be released next week
[16:31] <lazyPower> rick_h_: good question - i'm not sure - cargill_ is pioneering this
[16:32] <rick_h_> lazyPower: yea, never tried it I guess. Nothing that I know of shouldn't work, but it all depends on the networking setup from what I can think of
[16:33] <lazyPower> I haven't either tbh - but this sounds like a compelling alternative to using vagrant if the networking re-config is trivial.
[16:33] <cargill_> after I got the cgroups remounted in the container as well, juju seems to be mostly happy (apart from me setting the apt proxy wrong and the install of juju-gui failing because of that :))
[16:33] <rick_h_> lazyPower: huh? How does it help with vagrant? Does vagrant just run the lxc?
[16:33] <lazyPower> rick_h_: vagrant is a heavier weight alternative when i can just share resources with my HOST
[16:34] <rick_h_> lazyPower: ok, but I'm missing the point. If you're on linux and have lxc why do lxc in lxc?
[16:34] <lazyPower> rick_h_: all that i *really* want this for is doing the testing setup for charm reviews - as my workstation is polluted beyond recognition from all the junk in 00-setup from the charms.
[16:34] <rick_h_> you can do multiple envs in lxc?
[16:34] <lazyPower> isolate all that business so i can just wipe out the lxc container when i'm done and call it.
[16:35] <rick_h_> lazyPower: right but do an env and just destroy-environment --force?
[16:35] <lazyPower> and i dont need nested containers for that.
[16:35] <lazyPower> now that i think about it, i've been undeniably lazy - i could just snapshot a testing container, and put my environments.yaml in there - use it for testing with cloud hosts
[16:36] <rick_h_> lazyPower: ok, I'm all for helping you solve a problem with the gui. Just not understanding the problem atm so forgive me.
[16:36] <rick_h_> lazyPower: give me a setup you want to work and I'll see what we can do
[16:37] <lazyPower> rick_h_: well my use case is crazy different than what cargill_ is doing, and cargill_ would be a better source of truth on that matter than I.
[16:37] <lazyPower> i pinged you to see if you guys had ever had a crazy notion to try that - but most of us that i'm aware of, have not gone the route of nested lxc.
[16:38] <rick_h_> lazyPower: right, we've not really. lxc is usually 'contained' enough for our needs.
[16:38] <lazyPower> yo dawg, i heard you like to contain things, so we spun up containers in your containers so you can container while you container.
[16:40] <LinStatSDR> lol
[16:40] <LinStatSDR> lazyPower +1
[16:51] <cory_fu> Just submitted my dhx (debug-hooks-ext) plugin pull request: https://github.com/juju/plugins/pull/32  I am interested in feedback, but I think it makes for a much improved charm debugging experience
[17:10] <cargill_> is there a way to get the public-address of a service either in the GUI or through the juju command?
[17:11] <cargill_> apart from juju stat, which is a bit hard to parse in the shell
[17:13] <cargill_> actually, not through the gui, because I want to get the gui service address to set up networking after I bootstrap a new environment...
[17:16] <lazyPower> cargill_: juju run --unit service/# "unit-get public-address"
[17:17] <cory_fu> cargill_: juju status service | grep public-address | awk '{print $NF}'
[17:17] <lazyPower> if you're on one of the beta builds, we have newer options for juju status as well
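The two approaches above, side by side; the grep/awk step is shown against a canned status line standing in for real `juju status` output, so the parsing itself is visible (myservice/0 is an illustrative unit name):

```shell
# Ask the unit itself, as lazyPower suggests:
#   juju run --unit myservice/0 "unit-get public-address"

# Or scrape "juju status" output, as cory_fu suggests; a canned line
# stands in for the real output here:
line='    public-address: 10.0.3.15'
addr=$(printf '%s\n' "$line" | grep public-address | awk '{print $NF}')
echo "$addr"   # prints 10.0.3.15
```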
[17:17] <cargill_> lazyPower: thanks
[17:24] <cargill_> yay, juju feels much snappier, does not eat my memory for breakfast and no problems with cross-machine connections going through 10.0.3.1
[17:30] <cargill_> thanks for your help everyone
[17:43] <lazyPower> No problem cargill_ - if you have any notes i'd love to see them.
[17:43] <lazyPower> its an interesting approach for sure
[18:25] <jose> mhall119, lazyPower: scheduling? shoot a PM
[18:25] <lazyPower> jose: i sent an email follow up - let me fwd it to ya
[18:25] <jose> cool
[18:26] <lazyPower> jose: you've got mail - and there are 6 sessions proposed on summit
[18:26] <jose> got it
[18:27] <jose> lazyPower: have you already proposed the sessions on summit or created a blueprint?
[18:27] <jose> don't see them
[18:27] <lazyPower> jose: no blueprints - the only session we have for planning is the open feedback - and what project would you target that against as it encompasses charms and juju?
[18:28] <lazyPower> jose: however mbruzek created 6 sessions that are in pending
[18:28] <mhall119> jose: you don't see pending meetings in http://summit.ubuntu.com/uos-1411/review/ ?
[18:28] <jose> lazyPower, mhall119: was looking at scheduling, sorry
[18:31] <jose> lazyPower: 17 UTC wed, charm testing: slot is plenary. 15 UTC fri, open feedback, slot is taken by rick_h_
[18:32] <lazyPower> jose: can you just shift them down into open slots and I'll update our tentative slots on the calendar?
[18:32] <lazyPower> jose: what i'm more concerned with is keeping the order that we have them scheduled in, as it's a progressive build on the prior
[18:32] <jose> ok
[18:32] <lazyPower> the times are adjustable however
[18:33] <jose> lazyPower: charm testing is on the 18UTC slot on wed, open feedback on the 14 UTC slot on Fri
[18:33] <jose> lazyPower: actually, you can choose between 14UTC fri or 18 UTC fri
[18:34] <lazyPower> jose: 18UTC sounds like a better timeslot as it's after standup.
[18:34] <jose> ok
[18:35] <jose> lazyPower, mbruzek: all meetings approved and scheduled
[18:35] <lazyPower> jose: thanks for the follow up o/
[18:35] <jose> 5 slots are still open if anyone wants to take them
[18:36] <jose> np :)
[18:38] <bidwell> Is this the place where one might get help with getting juju+maas to bootstrap?
[18:39] <lazyPower> bidwell: we can certainly try - whats the situation?
[18:41] <bidwell> I have a maas server with 6 machines behind it that have been provisioned with ubuntu 14.04.1 and returned to the ready state (but still up).  When I run "juju bootstrap" from the maas server it runs for 30 minutes and then says "ERROR bootstrap failed: waited for 30m0s without being able to connect: /var/lib/juju/nonce.txt does not exist"
[18:42] <bidwell> I can ssh to them as ubuntu@host.domain from the maas server and sudo once there.  I am not sure what I am missing.
[18:50] <lazyPower> oh, interesting
[18:50] <lazyPower> missing nonce.txt huh?
[18:53] <lazyPower> bidwell: i'm not positive on why this is the case but i'm going through questions on AU with similar symptoms
[18:53] <lazyPower> bidwell: can you get me the output from juju -v --debug bootstrap -e maas (or your maas environment name)
[18:54] <lazyPower> preferably in a pastebin
[19:02] <themonk> lazyPower, hi
[19:03] <jose> themonk: hey, need any help?
[19:03] <themonk> jose, : hi yes :)
[19:03] <jose> what's up?
[19:05] <themonk> jose, i have submitted my charm for review but i am just waiting. typically, how long does a review take? and can you look at my page and tell me if there is anything i did wrong?
[19:06] <jose> themonk: let me check if it's on the queue
[19:06] <jose> themonk: which charm is it?
[19:06] <themonk> gluu-server
[19:06] <lazyPower> hah
[19:07] <jose> that's lazyPower
[19:07] <lazyPower> jose: i have that one locked but have since gotten pre-occupied
[19:07] <jose> cool then!
[19:07] <lazyPower> if you want it i can unlock it and you can get the first round
[19:07] <jose> definitely
[19:07] <jose> I've got some time now
[19:07] <lazyPower> allrighty - incoming c-c-c-c-c-c-combo-breaker
[19:08] <lazyPower> themonk: time to review is subjective - it depends on what's in the queue and so forth - as you're an ISV we'll give ya some express privileges ;)
[19:08] <lazyPower> jose: unlocked - have at it
[19:08] <jose> cool, checking now
[19:08] <themonk> lazyPower, :)
[19:09] <marcoc> whit, how do you deploy cloud foundry?
[19:10] <whit> marcoc, usually I follow the readme, but execute the actual deployment command by hand
[19:10] <jose> themonk: mind if I PM?
[19:10] <whit> marcoc, what are you seeing?
[19:11] <whit> marcoc, or was your question even more general?
[19:11] <marcoc> whit, I have bootstrap, what's next is my question
[19:12] <themonk> jose, ?
[19:12] <jose> themonk: wanna ask a couple of questions about the charm, and was wondering if it was fine with you if I sent a private message
[19:12] <whit> marcoc, alright!  I suggest checking out the source from launchpad and following the instructions in the README
[19:13] <themonk> jose, ok np
[19:13] <whit> marcoc, let me grab you a link
[19:13] <marcoc> whit, and the source is?
[19:14] <whit> marcoc, https://code.launchpad.net/~cf-charmers/charms/trusty/cloudfoundry/trunk
[19:14] <marcoc> whit, ta
[19:15] <whit> marcoc, holler if you have any issues.  Where are you planning on deploying?
[19:15] <jose> marcoceppi, lazyPower, mbruzek: auth request to push https://code.launchpad.net/~ibm-demo/charms/trusty/mediawiki/trunk/+merge/240072
[19:15] <marcoc> AWS west-2
[19:16] <lazyPower> jose: go for it
[19:16] <jose> lazyPower: thanks
[19:16] <whit> marcoc, ok cool.  be sure to set a constraint for instance-type=m3.medium
[19:17] <marcoc> instance-type works as a --constraint?
[19:17] <whit> marcoc, iirc, the deploy script included will create a "dense" placement
[19:17] <whit> marcoc, only on aws, but yeah
[19:18] <whit> marcoc, you can also pick a size of cpu or memory and get the same effect.  main thing is to avoid getting hung up on limit m1.smalls
[19:18] <whit> *limited
[19:18] <whit> marcoc, it should spin up 8 machines iirc
[19:22] <marcoc> oh crap, where do I set the constraint? before running cfdeploy?
[19:23] <whit> marcoc, before
[19:23] <whit> marcoc, generally we bootstrap with it
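What "bootstrap with it" looks like in practice; a sketch assuming the juju 1.x CLI, with the m3.medium value taken from whit's earlier advice:

```shell
# Set the constraint at bootstrap time so every new machine gets it
# (instance-type is an AWS-specific constraint key):
juju bootstrap --constraints "instance-type=m3.medium"

# Or apply it to an already-bootstrapped environment before deploying:
juju set-constraints "instance-type=m3.medium"
```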
[19:23] <marcoc> oops
[19:23] <marcoc> juju deployer -T
[19:23] <whit> marcoc, anyway, it will fail fairly fast. what version of  juju are you using?
[19:23] <whit> marcoc, exactly
[19:24]  * whit should alias that to juju undeploy
[19:24] <marcoc> 1.21-beta1
[19:24] <marcoc> whit, I wrote a juju-reset plugin
[19:24] <whit> marcoc, ah nice
[19:25] <whit> marcoc, 1.21-b may not have the m1.small issue
[19:25]  * whit crosses fingers
[19:27] <whit> marcoc, how's reinvent?
[19:28] <marcoc> hasn't started yet, but it's pretty hectic
[19:30] <bidwell> my 'juju -v --debug bootstrap -e maas' pastebin should be at http://pastebin.com/fxMRSJM
[20:14] <marcoc> marcoc, how long should this take?
[20:14] <marcoc> I got an internet window
[20:14] <marcoc> to a thing with tabs
[20:14] <marcoc> but it's been spinning for a while