=== CyberJacob is now known as CyberJacob|Away
=== kadams54-away is now known as kadams54
=== kadams54 is now known as kadams54-away
=== menn0 is now known as menn0-afk
=== underyx is now known as Guest2269
=== whit_ is now known as Guest44905
=== blackboxsw is now known as blackboxsw_away
=== menn0-afk is now known as menn0
=== kadams54-away is now known as kadams54
=== urulama__ is now known as urulama
=== roadmr is now known as roadmr_afk
=== roadmr_afk is now known as roadmr
[04:40] rick_h_: event runs from 14 to 20 UTC, you can choose your slot!
=== kadams54 is now known as kadams54-away
=== roadmr is now known as roadmr_afk
=== roadmr_afk is now known as roadmr
[07:24] Hi all, after a fresh install of Juno OpenStack on trusty (openstack-origin: trusty-juno) the neutron security group service is not started. How can I run it? And why is there no neutron-server installed on the controller node (installed by charm)?
=== CyberJacob|Away is now known as CyberJacob
[08:02] Trying to make the cinder-vmware charm work with another storage driver. Looking at it, I think I mainly need to edit cinder_context to take in the new config keys defined in config.yaml… Is there more to it than that? If so, where else should I look?
[08:03] and many thanks for any help
=== alexlist` is now known as alexlist
=== CyberJacob is now known as CyberJacob|Away
=== roadmr is now known as roadmr_afk
=== roadmr_afk is now known as roadmr
[09:34] hi, when I deploy a bundle and a service is not started, can I start it from the GUI?
[09:36] the debug-log does not mention any errors, but it does not mention starting it either
[09:47] hmm, yet status says "agent-state: started"
=== liam_ is now known as Guest26825
[13:41] tvansteenburgh: hey, have a min?
=== underyx_ is now known as underyx
[14:03] jose: yep
[14:03] tvansteenburgh: quick question, will bundletester deploy the local charm or the one in the store by default?
[14:03] cory_fu: those tests for chamilo are now done, should I do an MP against your branch or against the store?
[14:04] jose: bundletester just finds tests and executes them. what's deployed would depend on what's in the amulet test or the bundle file
[14:05] tvansteenburgh: got it, thanks
[14:10] jose: where do I sign up for a slot?
[14:10] rick_h_: with me
[14:10] jose: ah ok.
[14:11] jose: put me down for 15:00 Friday?
[14:11] rick_h_: sure, what would be a short description of the presentation?
[14:12] jose: I'm going to ping alexisb and see if she wants to dual up again or not.
[14:12] rick_h_: got it, lemme know! :)
[14:12] jose: for now, "What's new and upcoming in the work of Juju UI Engineering" can work
[14:12] jose: If you're happy with the services framework implementation of the charm, I'd say MP it against trunk
[14:12] jose: and for a description, just something about the latest in the progress of the Juju GUI, jujucharms.com, and juju-quickstart.
[14:13] jose: and if alexisb can join then I'll ask you to amend it to be more general Juju
[14:13] rick_h_: awesome, sec to give you the link...
[14:14] rick_h_: http://summit.ubuntu.com/uos-1411/meeting/22387/whats-new-and-upcoming-in-the-work-of-juju-ui-engineering/
[14:14] cory_fu: ok, MP against trunk is open
[14:14] jose: ty
[14:34] tvansteenburgh: Why is bundletester pinned to bzr==2.6.0 instead of bzr>=2.6.0? My system apparently has 2.7.0dev1 and it's complaining
[14:35] cory_fu: that's how I inherited it, and I haven't tested it with anything else
[14:35] you didn't install it in a virtualenv?
[14:39] tvansteenburgh: Yes, but I had to use --system-site-packages to get python-apt to work, which meant my system-level bzr kept me from installing the right version of bzr into the virtualenv
[14:39] I know I had this working at one point, so I'm not sure how I managed it before
[14:39] pip install -I
[14:40] (capital i)
[14:41] Ah! That works.
I thought there was such an option, but it's not listed in pip --help
[14:41] pip help install :)
[14:41] Gah
[14:41] :)
[14:41] Thanks
[14:41] np
[14:51] hi, is it possible to run the juju local environment when the host is an LXC container already?
[14:52] cargill_: lxc in lxc gets a bit hairy, and isn't really recommended.
[14:54] sorry for the delay, lazyPower is correct.
[14:54] lazyPower: I've been using vagrant until now, but the fact that everything is routed through 10.0.3.1, which breaks some relations, notably postgresql, is quite annoying, not to mention slow and memory-hungry
[14:55] cargill_: I understand your frustration. aisrael is working with our team that maintains the images to address most of those issues. there's also work being done to provide a docker-based image for the workflow (albeit a much lower priority than fixing vagrant papercuts presently)
[14:56] and I'm running Debian, which juju is not really happy with, and I haven't had the time to find out how to fix that yet
[14:56] cargill_: the issue with postgresql and routing should be fixed in the latest vagrant box images
[14:56] I downloaded one yesterday, is that new enough?
[14:56] cargill_: what version of Debian?
[14:56] jessie
[14:56] okay
[14:57] cargill_: where did you fetch the box from? aisrael - have we updated the docs with the latest box url(s)?
[14:57] cargill_: yes. Are you seeing the routing issue with that box?
[14:58] the amd64 one linked in https://juju.ubuntu.com/docs/config-vagrant.html
[14:59] aisrael: it's this issue, isn't it? 'FATAL: no pg_hba.conf entry for host "10.0.3.1"'
[14:59] cargill_: Yep, that's the issue
[15:00] ;(
[15:00] lazyPower: looks like the doc is pointing to an image from August
[15:00] cargill_: Could you try an image from here?
http://cloud-images.ubuntu.com/vagrant/trusty/current/
[15:00] I'll get the docs updated
[15:00] aisrael: I suspected as much, we should try to automate that part of the docs or point them at the /current/ images so it's always up to date
[15:01] the juju docs do need updating
[15:01] lazyPower: Yeah, it should just point to current. I'll see if we can get that stale image removed, too.
[15:01] aisrael: off-topic to what's going on here - will you be available to help run a session at UDS on the dev workflow with vagrant? ~20 minutes give or take
[15:02] I'd still prefer to try the LXCception approach; with 4GB of memory, VirtualBox is not a nice neighbour
[15:02] lazyPower: depends on the day. My schedule this week is going to be somewhat challenging
[15:03] and I'm mostly after testing out stuff locally, not production use
[15:03] what are the common issues with that approach?
[15:05] cargill_: I've run into issues with cgroups failing the upstart task - and didn't pursue it any further
[15:05] cargill_: and to note, I haven't actually tried to run juju within that lxc container
[15:06] cargill_: however if you come up with a working solution - I'm all for talking to you about your approach and documenting it, for science.
[15:07] For science!
[15:07] at the moment, I'm getting 'juju.container.lxc clonetemplate.go:167 container failed to start: container failed to start' in the container, nothing more descriptive in the juju debug-log -l TRACE
[15:08] but I haven't tried to create a container yet, this is where I was before I asked here
[15:08] cargill_: if I were to guess at the culprit - I'd say it's networking
[15:08] I dislike generic failure debug statements
[15:09] yeah, I'm trying to get all the information there is on LXC in LXC and see if I can get an lxc container to start manually, to make sure that works
[15:09] /join #nagare
[15:09] dang it
[15:09] cargill_: https://www.stgraber.org/2012/05/04/lxc-in-ubuntu-12-04-lts/ - container nesting.
[15:09] hmm 12.04
[15:09] the info is a bit old :(
[15:10] Yes, needs a bit of updating lol
[15:11] https://help.ubuntu.com/lts/serverguide/lxc.html#lxc-basic-usage - that may be more up to date.
[15:11] under nesting - it talks about a profile to be used for nesting
[15:11] lxc.aa_profile = lxc-container-default-with-nesting
[15:12] lazyPower, that is much newer. For 14.04
[15:13] ok, so the host being Debian, I don't have to do this part, right? since systemd manages its cgroups, would it interfere?
[15:13] Does it "have" to be Debian :D
[15:14] LinStatSDR: kinda has to, yes, I'm not reinstalling my machine just to have juju running on it :)
[15:15] (although I can get rid of systemd, it is a bit painful sometimes)
[15:15] Hehe, I'm just busting your chops cargill_. That, and I seem to see fewer "problems", if you will, running Ubuntu for juju
[15:16] cargill_: since we're not officially moving to systemd for another few cycles, I don't think anyone here has really worked with juju under a systemd supervisor.
[15:16] so we won't have much in terms of info in that regard
[15:17] yeah, I thought as much, thanks anyway
[15:18] =)
[15:19] does the lxc.mount.auto = cgroup line go into /etc/lxc.conf or the container's own config file in /var/lib/lxc/<name>/config?
=== robbiew is now known as robbiew-afk
=== robbiew-afk is now known as robbiew
[16:10] mhall119: ping - we've got all 5 of our sessions in pending on summit.
[16:13] oo, I missed one. 6 sessions
[16:16] lazyPower: in pending? then you need a track lead to approve them
[16:16] mhall119: do I ping antonio for that? sorry for my daftness - this is my first time doing the scheduling.
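[Editor's note] To sketch the nesting settings discussed around 15:11–15:19: on the LXC shipped with trusty (1.0.x), both lines belong in the individual container's own config file, not a global /etc/lxc.conf. The container name `parent` below is an assumption for illustration:

```
# /var/lib/lxc/parent/config — excerpt (container name is hypothetical)
lxc.aa_profile = lxc-container-default-with-nesting
lxc.mount.auto = cgroup
```

On a Debian host without AppArmor, the lxc.aa_profile line should be a no-op; the cgroup mount is the part cargill_ later reports needing (see the 16:33 message about remounting cgroups).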
[16:16] jose: gaughen: marcoceppi: ^^
[16:16] lazyPower: antonio or one of the ones I just pinged
[16:16] * lazyPower facepalms
[16:16] you sent that in the email - sorry
[16:17] it's okay :)
[16:28] hmm, juju-gui is started, but keeps responding with a 301 to https:// + whatever I put in the Host header, then does not want to speak TLS, is there something I'm missing?
[16:29] rick_h_: have you guys done any experimentation with the gui being nested in lxc? I think I know the answer to this...
[16:30] rick_h_: for clarity - Debian host => parent juju environment container => juju gui is a container in a container.
[16:30] ah, it was trying to redirect me to port 443, it just got confused
[16:31] lazyPower: no, the big thing is how do you get to it via the network? It needs to have the ability to hit the juju api websocket
[16:31] cargill_: cool, yeah, one of the guys on the team is currently working on making the charm take a port as a config param to run on
[16:31] which should help it colocate better with other services soon. Should be released next week
[16:31] rick_h_: good question - I'm not sure - cargill_ is pioneering this
[16:32] lazyPower: yeah, never tried it I guess. Nothing shouldn't work that I know of, but it all depends on the networking setup from what I can think of
[16:33] I haven't either tbh - but this sounds like a compelling alternative to using vagrant if the networking re-config is trivial.
[16:33] after I got the cgroups remounted in the container as well, juju seems to be mostly happy (apart from me setting the apt proxy wrong and the install of juju-gui failing because of that :))
[16:33] lazyPower: huh? How does it help with vagrant? Does vagrant just run the lxc?
[16:33] rick_h_: vagrant is a heavier-weight alternative when I can just share resources with my host
[16:34] lazyPower: ok, but I'm missing the point. If you're on linux and have lxc, why do lxc in lxc?
[16:34] rick_h_: all that I *really* want this for is the testing setup for charm reviews - as my workstation is polluted beyond recognition from all the junk in 00-setup from the charms.
[16:34] you can do multiple envs in lxc?
[16:34] isolate all that business so I can just wipe out the lxc container when I'm done and call it.
[16:35] lazyPower: right, but do an env and just destroy-environment --force?
[16:35] and I don't need nested containers for that.
[16:35] now that I think about it, I've been undeniably lazy - I could just snapshot a testing container, and put my environments.yaml in there - use it for testing with cloud hosts
[16:36] lazyPower: ok, I'm all for helping you solve a problem with the gui. Just not understanding the problem atm, so forgive me.
[16:36] lazyPower: give me a setup you want to work and I'll see what we can do
[16:37] rick_h_: well, my use case is crazy different from what cargill_ is doing, and cargill_ would be a better source of truth on that matter than I.
[16:37] I pinged you to see if you guys had ever had a crazy notion to try that - but most of us that I'm aware of have not gone the route of nested lxc.
[16:38] lazyPower: right, we've not really. lxc is usually 'contained' enough for our needs.
[16:38] yo dawg, i heard you like to contain things, so we spun up containers in your containers so you can container while you container.
[16:40] lol
[16:40] lazyPower +1
[16:51] Just submitted my dhx (debug-hooks-ext) plugin pull request: https://github.com/juju/plugins/pull/32 I am interested in feedback, but I think it makes for a much improved charm debugging experience
=== kadams54 is now known as kadams54-away
=== scott is now known as Guest4737
[17:10] is there a way to get the public-address of a service either in the GUI or through the juju command?
[17:11] apart from juju status, which is a bit hard to parse in the shell
[17:13] actually, not through the gui, because I want to get the gui service address to set up networking after I bootstrap a new environment...
[17:16] cargill_: juju run --unit service/# "unit-get public-address"
[17:17] cargill_: juju status service | grep public-address | awk '{print $NF}'
[17:17] if you're on one of the beta builds, we have newer options for juju status as well
[17:17] lazyPower: thanks
=== LinStatSDR_ is now known as LinStatSDR
[17:24] yay, juju feels much snappier, does not eat my memory for breakfast, and no problems with cross-machine connections going through 10.0.3.1
[17:30] thanks for your help everyone
[17:43] No problem cargill_ - if you have any notes I'd love to see them.
[17:43] it's an interesting approach for sure
=== kadams54-away is now known as kadams54
[18:25] mhall119, lazyPower: scheduling? shoot a PM
[18:25] jose: I sent an email follow-up - let me fwd it to ya
[18:25] cool
[18:26] jose: you've got mail - and there are 6 sessions proposed on summit
[18:26] got it
[18:27] lazyPower: have you already proposed the sessions on summit or created a blueprint?
[18:27] don't see them
[18:27] jose: no blueprints - the only session we have for planning is the open feedback - and what project would you target that against, as it encompasses charms and juju?
[18:28] jose: however mbruzek created 6 sessions that are in pending
[18:28] jose: you don't see pending meetings in http://summit.ubuntu.com/uos-1411/review/ ?
[18:28] lazyPower, mhall119: was looking at scheduling, sorry
[18:31] lazyPower: 17 UTC Wed, charm testing: slot is plenary. 15 UTC Fri, open feedback, slot is taken by rick_h_
[18:32] jose: can you just shift them down into open slots and I'll update our tentative slots on the calendar?
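[Editor's note] The grep/awk one-liner suggested at 17:17 can be tried offline against a canned `juju status` snippet; the service name and address below are made up for illustration:

```shell
# Stand-in for live `juju status <service>` output (hypothetical values)
status='services:
  juju-gui:
    units:
      juju-gui/0:
        agent-state: started
        public-address: 10.0.3.15'

# Same pipeline as above: find the public-address line and
# print its last whitespace-separated field
echo "$status" | grep public-address | awk '{print $NF}'
# prints 10.0.3.15
```

Note this naive pipeline prints one address per matching line, so on a multi-unit service it returns several addresses; the `juju run --unit` form at 17:16 targets a single unit instead.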
[18:32] jose: what I'm more concerned with is keeping the order we have them scheduled in, as it's a progressive build on the prior
[18:32] ok
[18:32] the times are adjustable, however
[18:33] lazyPower: charm testing is in the 18 UTC slot on Wed, open feedback in the 14 UTC slot on Fri
[18:33] lazyPower: actually, you can choose between 14 UTC Fri or 18 UTC Fri
[18:34] jose: 18 UTC sounds like a better timeslot as it's after standup.
[18:34] ok
[18:35] lazyPower, mbruzek: all meetings approved and scheduled
[18:35] jose: thanks for the follow-up o/
[18:35] 5 slots are still open if anyone wants to take them
[18:36] np :)
[18:38] Is this the place where one might get help with getting juju+maas to bootstrap?
[18:39] bidwell: we can certainly try - what's the situation?
[18:41] I have a maas server with 6 machines behind it that have been provisioned with ubuntu 14.04.1 and returned to the ready state (but still up). When I run "juju bootstrap" from the maas server it runs for 30 minutes and then says "ERROR bootstrap failed: waited for 30m0s without being able to connect: /var/lib/juju/nonce.txt does not exist"
=== CyberJacob|Away is now known as CyberJacob
[18:42] I can ssh to them as ubuntu@host.domain from the maas server and sudo once there. I am not sure what I am missing.
[18:50] oh, interesting
[18:50] missing nonce.txt huh?
[18:53] bidwell: I'm not positive on why this is the case, but I'm going through questions on AU with similar symptoms
[18:53] bidwell: can you get me the output from juju -v --debug bootstrap -e maas (or your maas environment name)
[18:54] preferably in a pastebin
[19:02] lazyPower, hi
[19:03] themonk: hey, need any help?
[19:03] jose: hi yes :)
[19:03] what's up?
[19:05] jose, I have submitted my charm for review but I am just waiting. typically how long does a review take? and can you look at my page and tell me if there's anything I did wrong?
[19:06] themonk: let me check if it's in the queue
[19:06] themonk: which charm is it?
[19:06] gluu-server
[19:06] hah
[19:07] that's lazyPower
[19:07] jose: I have that one locked but have since gotten pre-occupied
[19:07] cool then!
[19:07] if you want it I can unlock it and you can get the first round
[19:07] definitely
[19:07] I've got some time now
[19:07] allrighty - incoming c-c-c-c-c-c-combo-breaker
[19:08] themonk: time to review is subjective - it depends on what's in the queue and so forth - as you're an ISV we'll give ya some express privileges ;)
[19:08] jose: unlocked - have at it
[19:08] cool, checking now
[19:08] lazyPower, :)
[19:09] whit, how do you deploy Cloud Foundry?
[19:10] marcoc, usually I follow the readme, but execute the actual deployment command by hand
[19:10] themonk: mind if I PM?
[19:10] marcoc, what are you seeing?
[19:11] marcoc, or was your question even more general?
[19:11] whit, I have bootstrapped, what's next is my question
[19:12] jose, ?
[19:12] themonk: wanted to ask a couple of questions about the charm, and was wondering if it was fine with you if I sent a private message
[19:12] marcoc, alright! I suggest checking out the source from launchpad and following the instructions in the README
[19:13] jose, ok np
[19:13] marcoc, let me grab you a link
[19:13] whit, and the source is?
[19:14] marcoc, https://code.launchpad.net/~cf-charmers/charms/trusty/cloudfoundry/trunk
[19:14] whit, ta
[19:15] marcoc, holler if you have any issues. Where are you planning on deploying?
[19:15] marcoceppi, lazyPower, mbruzek: auth request to push https://code.launchpad.net/~ibm-demo/charms/trusty/mediawiki/trunk/+merge/240072
[19:15] AWS west-2
[19:16] jose: go for it
[19:16] lazyPower: thanks
[19:16] marcoc, ok cool. be sure to set a constraint for instance-type=m3.medium
[19:17] instance-type works as a --constraint?
[19:17] marcoc, iirc, the deploy script included will create a "dense" placement
[19:17] marcoc, only on aws, but yeah
[19:18] marcoc, you can also pick a size of cpu or memory and get the same effect. main thing is to avoid getting hung up on limited m1.smalls
[19:18] marcoc, it should spin up 8 machines iirc
[19:22] oh crap, where do I set the constraint? before running cfdeploy?
[19:23] marcoc, before
[19:23] marcoc, generally we bootstrap with it
[19:23] oops
[19:23] juju deployer -T
[19:23] marcoc, anyway, it will fail fairly fast. what version of juju are you using?
[19:23] marcoc, exactly
[19:24] * whit should alias that to juju undeploy
[19:24] 1.21-beta1
[19:24] whit, I wrote a juju-reset plugin
[19:24] marcoc, ah nice
[19:25] marcoc, 1.21-b may not have the m1.small issue
[19:25] * whit crosses fingers
[19:27] marcoc, how's re:Invent?
[19:28] hasn't started yet, but it's pretty hectic
[19:30] my 'juju -v --debug bootstrap -e maas' pastebin should be at http://pastebin.com/fxMRSJM
=== urulam___ is now known as urulama___
=== kadams54 is now known as kadams54-away
=== kadams54-away is now known as kadams54
[20:14] marcoc, how long should this take?
[20:14] I got an internet window
[20:14] to a thing with tabs
[20:14] but it's been spinning for a while
=== kadams54 is now known as kadams54-away
=== kadams54-away is now known as kadams54
=== LinStatSDR_ is now known as LinStatSDR
=== bidwell is now known as drbidwell
=== kadams54 is now known as kadams54-away
=== kadams54-away is now known as kadams54
=== CyberJacob is now known as CyberJacob|Away
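[Editor's note] A sketch of how the constraint recommended around 19:16 would be set with juju 1.x, per whit's "generally we bootstrap with it". The environment name `amazon` is an assumption, and these commands obviously need a configured juju environment to run:

```
# at bootstrap time (environment name is hypothetical):
juju bootstrap -e amazon --constraints "instance-type=m3.medium"

# or, on an already-bootstrapped environment, before deploying:
juju set-constraints -e amazon "instance-type=m3.medium"
```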