[01:11] <lutostag> any way to make juju create a privileged lxd container rather than a userspace one?
[01:11] <lutostag> (perhaps constraints)?
[02:51] <rick_h_> lutostag: you have to change the lxd profile
[02:51] <rick_h_> lutostag: there's a conversation on how to allow customizing this at deploy time for an application but it's not implemented yet
[02:52] <pragsmike> hi everyone!
[02:53] <rick_h_> lutostag: https://jujucharms.com/u/james-page/openstack-on-lxd mentions customizing the lxd profile used in the "LXD profile for Juju" section
[02:53] <rick_h_> howdy pragsmike
[02:54] <rick_h_> pragsmike: the bug to track I was telling you about is: https://bugs.launchpad.net/juju/+bug/1566791
[02:54] <mup> Bug #1566791: VLANs on an unconfigured parent device error with "cannot set link-layer device addresses of machine "0": invalid address <2.0> <4010> <cpec> <network> <juju:In Progress by dimitern> <https://launchpad.net/bugs/1566791>
[03:19] <tonyanytime> Hello all, new to juju, stuck with a dying juju-gui: the container doesn't exist, and juju won't remove the machine or let me add it again. Kind of stuck in between. Any ideas?
[10:20] <chaitu> Hi all, I am deploying an autopilot setup. The first deploy succeeded, but then my controller node crashed. Can someone suggest whether there is any way to add a controller back to the same cluster, or to redeploy the same controller?
[11:09] <bloodearnest_> marcoceppi, cory_fu: hey guys - quick question: is it possible to add a --no-install-recommends to the basic layer's install of python3-pip?
[11:11] <bloodearnest> reasons are many. a) it's only 3 packages, rather than ~43 b) I don't really want a full compiler toolchain installed by default on all charms using layer:basic! :D
[11:11] <bloodearnest> is there a reason we pull in recommends (which includes build-essential)
[11:11] <bloodearnest> ?
[13:24] <bloodearnest> is there a reason not to have apt_install do --no-install-recommends by default?
[13:48] <Anita_> Hi
[13:48] <Anita_> Hi Matt
[13:50] <Anita_> Hi Matt Bruzek
[14:03] <balloons> ping mgz
[14:03] <mgz> balloons: yo
[14:12] <Anita_> sudo apt-get install juju-local gives error : The following packages have unmet dependencies:  juju-local : Depends: lxc (>= 1.0.0~alpha1-0ubuntu14) but it is not going to be installed               Depends: lxc-templates but it is not going to be installed E: Unable to correct problems, you have held broken packages.
[14:12] <Anita_> any idea
[14:12] <Anita_> trying to install 1.25
[14:23] <marcoceppi> bloodearnest: the pip modules embedded are source wheels, and need to be built on the machine in case of architecture dependencies
[14:23] <marcoceppi> bloodearnest: feel free to open a bug though
[14:24] <bloodearnest> marcoceppi, this is apt installing python3-pip
[14:24] <marcoceppi> bloodearnest: yes, we pip install those wheelhouses from the wheelhouse directory in a charm
[14:28] <bloodearnest> marcoceppi, yes, but the initial pip (to pip install the bundled pip), we install python3-pip, python3-setuptools and python3-yaml:
[14:28] <bloodearnest> https://github.com/juju-solutions/layer-basic/blob/master/lib/charms/layer/basic.py#L46
[14:28] <bloodearnest> and when I say install I mean apt install
[14:28] <bloodearnest> which is 45 packages
[14:28] <bloodearnest> in xenial anyway
[14:29] <marcoceppi> bloodearnest: right, I understand your point, I don't think I'm making my counterpoint very clear. I'm blitzing, the charmer summit starts in 1.5 hours; can I recommend opening a bug on layer-basic so we can continue the conversation there?
[14:35] <venom3> Hello!
[14:35] <venom3>  I have a question about Juju gui.
[14:35] <bloodearnest> marcoceppi, ack, ta
[14:37] <venom3> I need to enable insecure mode into embedded juju gui (I'm not using the juju-gui charm)
[14:37] <marcoceppi> venom3: that's a good question, urulama ^ any opinions?
[14:38] <bloodearnest> marcoceppi, I think I follow now, sorry
[14:42] <marcoceppi> bloodearnest: it's unfortunate, 46 extra packages adds quite a bit to instance boot time
[14:42] <bloodearnest> yeah :(
[14:56] <bloodearnest> marcoceppi, might it be possible to have a layer option that says: build-required? And only pull in the extras if that is true?
[14:56] <marcoceppi> bloodearnest: it'd be better if we auto-detected if the module needed to be compiled
[14:57] <bloodearnest> right
[14:57] <marcoceppi> not sure if that's possible
[14:57] <bloodearnest> or just build a snap :)
[14:57] <bloodearnest> ah, same problem, of course
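The flag bloodearnest is asking about would look roughly like this in layer-basic's bootstrap step (a sketch of the idea, not the actual implementation):

```shell
# Install only the hard dependencies, skipping Recommends
# (which on xenial pulls in build-essential and dozens of extra packages).
apt-get install --yes --no-install-recommends python3-pip python3-setuptools python3-yaml
```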
[15:01] <urulama> venom3: you mean GUI in the controller with "juju gui" command?
[15:04] <venom3> urulama: yes. I need an http address, but  when I type "juju gui --no-browser", I get https:...
[15:04] <urulama> frankban: do we support this? ^
[15:04] <frankban> urulama, venom3: no, the GUI is only served via https
[15:05] <pragsmike> I've had problems with the gui being rejected by my browser because the certificate too closely resembles one that the browser thinks it has seen recently
[15:06] <venom3> frankban, urulama: sorry, but I read in "https://blog.jujugui.org/" that this feature was re-enabled
[15:06] <venom3> posted by "jeffpihach 8:30 pm on February 17, 2016"
[15:07] <frankban> venom3: that's about the juju-gui charm
[15:08] <frankban> venom3: not the GUI as provided directly by Juju
[15:09] <venom3> frankban: thanks. So do you suggest deploying this charm to enable the http protocol?
[15:10] <venom3> In this case, could I have any conflict or other problems?
[15:10] <frankban> venom3: so after you deploy the charm from https://jujucharms.com/juju-gui/ you could in theory just set "juju set juju-gui secure=false" to be able to access the GUI from http. this is of course highly insecure
[15:10] <frankban> and discouraged
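Spelled out, the sequence frankban describes is roughly the following (the config subcommand name shifted across Juju versions, so treat it as approximate):

```shell
# Deploy the standalone juju-gui charm, then turn off TLS on it.
# Serving the GUI over plain http is insecure and discouraged.
juju deploy juju-gui
juju set juju-gui secure=false
```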
[15:10] <frankban> venom3: can I ask what's your use case?
[15:15] <venom3> frankban: yes, of course. we deployed maas and juju in a private infrastructure. Everything is internal. The first idea was to use nginx as a proxy, but we have problems with https (ok, i admit my lack of knowledge).
[15:18] <venom3> Maas was a joke ('proxy_pass http://192.168.110.1/MAAS/;')
[15:25] <frankban> venom3: cool
[15:27] <venom3> frankban: we resolved by iptables
[15:27] <frankban> venom3: so you can use the GUI directly from the controller? much better!
[15:40] <pragsmike> venom3 what was the issue?  couldn't you use the https: URI?
[16:01] <venom3> urulama, frankban: thank you for your time, really (i don't know what's happened, I've lost connection). Bye.
[16:12] <hatch> venom3: why was the https url not working for you?
[16:22] <venom3> hatch: we tried to use nginx as a reverse proxy. It was simple for the http protocol exposed by MAAS, but a pain for https from juju-gui. The idea is to set juju gui insecure. This is not a problem, because it is behind a secure network.
[16:22] <hatch> venom3: alright no problem - I was just curious if there is anything we could do to make this easier
[16:23] <hatch> but it's definitely a workflow we don't recommend :)
[16:23] <hatch> thanks
[16:27] <venom3> hatch: no problem. I read the documentation and I know your recommendation. We have only been using Juju for a short time, so we have plenty of information to acquire. It's worth the effort, because Openstack is a pain without tools like this
[16:28] <venom3> And the community is doing a great work!
[16:28] <hatch> :) glad you like it!
[16:30] <venom3> glad you all exist, really.
[16:31] <devop01> i have a fresh xenial install with the juju dev ppa and it can't seem to deploy a local charm.  I give it the path and it says it can't find it
[16:32] <hatch> devop01: Juju 2?
[16:32] <devop01> hatch: yes
[16:33] <devop01> hatch: i don't get any debug logs or anything indicating what i did wrong
[16:33] <hatch> devop01: if you navigate to the root path of your charm you should be able to go `juju deploy .`
[16:34] <hatch> what command were you trying to run?
[16:34] <devop01> hatch: juju deploy . also doesn't work.
[16:34] <hatch> really...can you paste the error?
[16:34] <devop01> i was trying to run `juju deploy ./repos/ceph-mon` which is my local copy
[16:45] <hatch> devop01: did you get it solved?
[16:46] <devop01> hatch: no i'm not sure how to get it to deploy local charms.  I'm on juju 2 beta18.  Deploying anything from the store seems to work fine
[16:46] <hatch> devop01: can you paste the error message that it outputs?
[16:47] <devop01> hatch: I think i buggered something with the charm.  I tried a different local charm and it deployed ok.
[16:48] <hatch> devop01: alright, that was going to be my next suggestion :)
[16:48] <hatch> you can use charm proof to see what you might be missing
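For reference, the check hatch mentions looks like this (requires the charm-tools package; the path is the one from the conversation above):

```shell
# Static checks on a local charm; warnings/errors often explain deploy failures.
cd ./repos/ceph-mon
charm proof
```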
[16:49] <devop01> :)
[16:55] <devop01> mskalka: https://jujucharms.com/docs/2.0/clouds-LXD
[17:14] <x58> juju status says "ERROR connection is shut down" how do I restart this connection?
[17:29] <Brochacho> What's the lxc image alias for xenial?
[17:47] <catbus1> Is there any change on the charmstore in the last two weeks that causes this issue: http://pastebin.ubuntu.com/23170237/
[17:48] <PCdude> catbus1: u realise u are using a beta version? I have had trouble before with beta versions of JUJU. My advice would be to stay with 1.25.6 until 2.0 comes out of beta
[17:49] <x58> catbus1: beta17 is out... you might want to grab that and try again.
[17:51] <catbus1> x58: I can do that, but the same version worked 2 weeks ago.
[17:54] <PCdude> x58: that one is meant for ubuntu 16.10
[17:54] <PCdude> catbus1: I dont think u are using 16.10?
[17:54] <catbus1> PCdude: no, we aren't using 16.10.
[17:55] <PCdude> catbus1: then beta15 is the latest rn
[17:56] <x58>   Version table:
[17:56] <x58>      2.0-beta18-0ubuntu1~16.04.1~juju1 500
[17:56] <x58>         500 http://ppa.launchpad.net/juju/devel/ubuntu xenial/main amd64 Packages
[17:56] <x58>         500 http://ppa.launchpad.net/juju/devel/ubuntu xenial/main i386 Packages
[17:56] <x58> It's in the devel PPA for JuJu
[17:56] <x58> xenial-updates has beta15...
[17:57] <PCdude> x58: very strange, I cant even find that version on the launchpad site of juju itself
[17:58] <x58> sudo add-apt-repository ppa:juju/devel
[17:58] <x58> sudo apt-get update
[17:58] <x58> PCdude: You mean right here: https://launchpad.net/~juju/+archive/ubuntu/devel
[17:59] <PCdude> x58: this one: https://launchpad.net/juju-core
[17:59] <PCdude> x58: even if u click further through, I cant find beta18
[17:59] <PCdude> but indeed I can find it after adding the dev PPA
[18:01] <x58> beta15 is the latest in xenial updates, wonder if they only show "stable" releases on juju-core
[18:03] <x58> catbus1: there were changes to the charmstore IIRC. I can't find the bug report at the moment.
[18:04] <hatch> catbus1: you'll want to update to the most recent version of Juju - there were charmstore changes made that require a more recent version of Juju than you're using
[18:04] <hatch> assuming you want to stick with Juju 2
[18:05] <catbus1> hatch: x58: understood, updated to beta18 now and trying again
[18:05] <hatch> great, hope it works for you
[18:10] <catbus1> hatch: x58: it's working, didn't throw out that error anymore.
[18:10] <hatch> great glad to hear it
[18:52] <mbruzek> hmo: http://ppa.launchpad.net/juju/devel/ubuntu/pool/main/j/juju-core/juju-core_2.0-beta15-0ubuntu1~16.04.1~juju1.debian.tar.xz
[21:17] <ChrisHolcombe> i'm running into an issue where i can't deploy a charm from the local dir.  I'm seeing: juju.cmd.juju.application deploy.go:791 cannot interpret as local bundle: read .: is a directory.  I'm on 16.04 with juju 2 beta 18
[21:18] <hatch> ChrisHolcombe: it appears that Juju doesn't see the path you're passing in as a local charm
[21:19] <ChrisHolcombe> hatch, is there something special i need to do to make it see it differently?
[21:19] <hatch> is it possible that your charm is invalid? Have you run charm proof on it?
[21:19] <hatch> there isn't
[21:19] <ChrisHolcombe> charm proof passes
[21:19] <hatch> it should "just work" :)
[21:19] <hatch> well then!
[21:19] <ChrisHolcombe> the only thing i get is a W for no copyright file
[21:19] <ChrisHolcombe> no biggie :)
[21:19] <hatch> haha right
[21:19] <hatch> you could also try via the GUI using a zip of the charm
[21:20] <ChrisHolcombe> hmm ok
[21:20] <hatch> I'm assuming that you're just running `juju deploy .` ?
[21:20] <ChrisHolcombe> hatch, yup
[21:22] <ChrisHolcombe> hatch, deploying full path with ./{path} doesn't work either
[21:22] <hatch> typically when I've seen that problem it's been because of a proof issue
[21:22] <ChrisHolcombe> yeah
[21:23] <rick_h_> hatch: no, proof and deploy aren't connected
[21:23] <rick_h_> so not sure wtf, ChrisHolcombe what's the ls of the directory look like?
[21:23] <hatch> no they aren't, but I've usually found that proof will catch why deploy doesn't work :)
[21:23] <rick_h_> ChrisHolcombe: I think it keys off either bundle.yaml or metadata.yaml
[21:24] <hatch> for some reason it's thinking you're trying to deploy a bundle
[21:24] <ChrisHolcombe> rick_h_, nothing special.  has a hooks dir, metadata.yaml, config.yaml.  All the usual pieces.  This deployed fine back on juju 1.25.x
[21:24] <rick_h_> ChrisHolcombe: right, there was a change; in 2.0 you have to include the ./ in the path
[21:25] <rick_h_> ChrisHolcombe: maybe go up one dir and try juju deploy ./dirname ?
[21:25] <ChrisHolcombe> rick_h_, yeah i tried that also.
[21:25] <ChrisHolcombe> rick_h_, i also tried that :D
[21:25] <rick_h_> ChrisHolcombe: heh ok. have to check
[21:25]  * rick_h_ goes to download a charm and try
[21:26] <ChrisHolcombe> rick_h_, https://gist.github.com/cholcombe973/7c33286c38bc36caf233bfc7a511c2ed
[21:27] <hatch> ohh
[21:27] <hatch> yeah, reading the source here, i suppose that's expected
[21:28] <elopio> \o/ quassel installed in canonistack with juju. Sooo nice.
[21:28] <ChrisHolcombe> hatch, ok cool so the real error is it can't find my charm?
[21:28] <hatch> yeah
[21:28] <hatch> fwiw I'm not a core dev, I'm just reading the source :D
[21:28] <ChrisHolcombe> yup
[21:28] <ChrisHolcombe> i can sorta read Go
[21:29] <ChrisHolcombe> hatch, it looks like maybeReadLocalBundle falls through to maybeReadLocalCharm
[21:29] <hatch> yeah, I had a laugh at the names
[21:29] <hatch> hah
[21:29] <rick_h_> ChrisHolcombe: hatch yea, there were some recent changes in that to clean it up a bit when the output got cleaned up
[21:30] <hatch> it looks like it's uploading as revision 3 then trying to deploy revision 9
[21:30] <hatch> charms?revision=3&schema=local&series=trusty
[21:30] <rick_h_> ChrisHolcombe: was the charm previously installed?
[21:30] <hatch> Deploying charm "local:trusty/gluster-charm-9".
[21:30] <hatch> so revision drift it appears?
[21:31] <ChrisHolcombe> hatch, it seems to rev the revision every time i try to deploy it
[21:31] <rick_h_> there's a bug around the upgrade of a local: charm if I recall
[21:31] <rick_h_> it should be +1'ing the revision automatically each deploy
[21:31] <hatch> ahh this might be it then
[21:31] <rick_h_> since it's local and not tied to the charmstore
[21:31] <ChrisHolcombe> rick_h_, ok
[21:31] <ChrisHolcombe> rick_h_, could i deploy from the cs and then upgrade via local?
[21:32] <rick_h_> ChrisHolcombe: you can use --switch, but I'm not sure there. I just tried with the ubuntu charm and deployed it three times ok
[21:32] <rick_h_> ChrisHolcombe: can you create a new model and try again?
[21:32] <ChrisHolcombe> hmm i haven't used that before.  let me try
[21:32] <rick_h_> ChrisHolcombe: maybe there's something in the history there in that model that's causing something. Just to narrow down wtf
[21:34] <ChrisHolcombe> rick_h_, i have a revision file that says 3.  I'm not sure i remember where that came from.  It hasn't been touched in a year
[21:34] <hatch> Ohh that might be where the 3 is coming from then
[21:34] <hatch> :)
[21:34] <rick_h_> ChrisHolcombe: it's probably coming from the revision file
[21:34] <rick_h_> ChrisHolcombe: as the start revision
[21:35] <ChrisHolcombe> ok
[21:35] <rick_h_> my ubuntu deploy started at rev 7
[21:35] <ChrisHolcombe> rick_h_, right.  depends on if you've deployed that before in the model i think
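The start revision rick_h_ mentions lives in a plain-text `revision` file at the charm root; a quick way to check for it (the path here is hypothetical):

```shell
# An old-style "revision" file holds a bare integer that seeds the local revision.
cat ./my-charm/revision
# Deleting it lets juju manage the revision itself (worth a try, not guaranteed to fix this).
rm ./my-charm/revision
```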
[21:40] <ChrisHolcombe> rick_h_, hacky workaround didn't work haha.  i tried to deploy from the cs and upgrade from local
[22:13] <lp_sprint> perrito666: tweet me that pic plz
[22:13] <perrito666> lp_sprint: https://launchpad.net/juju/+milestone/2.0-beta18
[22:14] <catbus1> Hi, how do I check if the lxd containers are coming up? All the containers are in 'error' state in the machine section of juju status, and they show 'waiting for agent init to complete' for a while.
[22:15] <hatch> catbus1: I believe that this is a known issue right now
[22:16] <hatch> there is a workaround, one moment
[22:16] <hatch> catbus1: well to answer your question you can run `lxc list` :)
[22:16] <catbus1> lxc list shows empty list
[22:16] <catbus1> running juju beta12 though
[22:16] <hatch> ohh, ok I'm not sure, are you able to upgrade to b18?
[22:17] <catbus1> yes
[22:17] <hatch> lots has changed since b12 :)
[22:18] <catbus1> will do
[22:18] <hatch> catbus1: feel free to ping me if the issue persists with b18
[22:27] <catbus1> mliberte: hey
[22:27] <mliberte> hello
[22:35] <catbus1> ping
[22:35] <catbus1> mliberte: ping
[22:35] <catbus1> all set
[22:42] <catbus1> hatch: err.. with b18, only one machine was picked up by maas to deploy (should be 4), and juju status shows machines 1, 2, and 3 in an error state.
[22:43] <hatch> these are still lxd? Are they on Xenial?
[22:44] <catbus1> one sec
[22:44] <hatch> if they are, you may have to run
[22:44] <hatch> juju set-model-config enable-os-refresh-update=false
[22:44] <hatch> juju set-model-config enable-os-upgrade=false
[22:44] <hatch> I'm not sure if that bug had been fixed or not yet
[22:45] <hatch> (assuming they are stuck in pending)
[22:45] <catbus1> hatch: the juju controller shows maas isn't able to find machines matching the tag constraints, but we only specify machine 0 with tags, not 1, 2, or 3.
[22:45] <hatch> hmm, that is out of my area of expertise :)
[22:47] <lazy_sprint> perrito666: https://bugs.launchpad.net/juju/+bug/1595720
[22:47] <mup> Bug #1595720: Problems using `juju ssh` with shared models <ssh> <usability> <juju:Triaged> <https://launchpad.net/bugs/1595720>
[22:47] <catbus1> will add the same tag to other machines to see if it works
[22:54] <catbus1> can't reproduce the issue anymore.