[00:00] hmm rookie mistake time? :-/
[00:01] Budgie^Smore - i've lied to you
[00:01] it's not base64 encoded when looking at the secret via the webui
[00:02] OK *phew* don't scare me like that :P
[00:04] then it doesn't make sense why it is serving the wrong cert even after removing and readding the service :-/
[00:05] not to mention removing and readding the secret, simplifying the secret name, etc.
[00:06] Budgie^Smore - when you're in a better position to debug, let's try to get you some TLC and get you unblocked.
[00:06] i'm going to be sticking around late tomorrow by about an hour or so, and i'm happy to help debug/reproduce then
[00:06] think you'll have the time for that Budgie^Smore?
[00:07] that works, I am going to try and get my virtualized environment up today
[00:07] ok, that sounds great then
[00:07] i'll see you tomorrow evening :)
[00:08] first step in solving a problem is figuring out if you have a problem at all; better to do that from a simplified cluster to rule out complexities
[00:09] I need to do it anyway since I want to be able to basically make a master node made up of VMs (for now) so that I can just spin up worker nodes... call me crazy but I want to containerize everything I can :P
[00:14] I would probably containerize kubernetes masters if I could :P but that is totally just crazy talk
[00:15] Budgie^Smore why do you think that's crazy talk? they are really gung-ho for self-hosted at the developer watering holes
[00:15] i'm not sold on the idea myself, but if it's done well i can see why it's attractive
[00:15] OK, so not crazy talk then
[00:15] nah, just mostly crazy if you forget the triple-o work back in the day and how big of a sideshow that was
[00:16] not that it wasn't a feat of engineering, but that it was clunky and not really intuitive
[00:16] ok i'm going to shut up now, i feel like i'm dissing someone else's work...
[00:16] * lazyPower checks out for the evening
[00:17] it is all about making sure that the services are all hardened correctly; I am not totally sold on the idea for the masters, but everything else, yes
=== Guest3904 is now known as medberry
=== medberry is now known as med_
=== mbarnett_ is now known as mbarnett
=== JoseeAntonioR is now known as jose
=== stormmore_ is now known as Budgie^Smore
=== antdillon_ is now known as antdillon
=== lukasa_ is now known as lukasa
=== petevg_ is now known as petevg
=== Lukewh_ is now known as Lukewh
=== arosales_ is now known as arosales
=== zeestrat_ is now known as zeestrat
=== nottrobin_ is now known as nottrobin
=== WillMoogle_ is now known as WillMoogle
[05:01] giving my laptop a real workout tonight: backing up a couple of VMs to USB, and then I am going to do a system reset and image it so I can wipe the hard drive and put Ubuntu on it!
=== bradm_ is now known as bradm
=== frankban|afk is now known as frankban
[08:03] Good morning Juju world!
[09:13] Hi nobuto, do you have a charm that uses the apache-php layer?
[09:19] kjackal: not really. but I demoed how to write a new charm in front of customers along with the official doc (https://jujucharms.com/docs/stable/developer-layer-example) and it never worked with xenial.
[09:19] that's why I made a pull request to the layer.
[09:21] nobuto: I see. thank you for the PR. I am testing it right now with a dummy charm. Hopefully I am going to merge it. Let's see!
[09:21] kjackal: thanks
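Circling back to the TLS-secret confusion at the top of the log: a quick way to see what the cluster actually holds is to decode the secret by hand, since data values are base64-encoded at the API level whatever a web UI shows. A minimal sketch, assuming kubectl access and a hypothetical secret name my-tls-secret:

    # dump the raw secret; .data values are base64-encoded
    kubectl get secret my-tls-secret -o yaml
    # decode the cert and confirm the subject/expiry actually being served
    kubectl get secret my-tls-secret -o jsonpath='{.data.tls\.crt}' \
        | base64 --decode | openssl x509 -noout -subject -enddate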
[09:34] nobuto: do you have a valid apache.yaml?
[09:45] nobuto: I am getting this exception: http://pastebin.ubuntu.com/24000015/ Could be because the apache.yaml I found on the web is not valid
[09:47] Hey. Can we configure Juju 2.0 with a MAAS server?
[09:51] pranav__: yes
[09:52] pranav__: https://jujucharms.com/docs/2.0/clouds-maas
[10:00] kjackal: let me prepare an apache.yaml for you to test.
[10:01] @kjackal many thanks!
[10:02] Hi juju World!!!
[10:04] the charm release command is giving the error "ERROR cannot release charm or bundle: unauthorized: access denied for user"
[10:05] but I am able to run the push command and it's giving a revision too
[10:05] url: cs:~ibmcharmers/xenial/ibm-db2-2 channel: unpublished
[10:09] kjackal: how about this? https://github.com/nobuto-m/layer-vanilla/blob/xenial/apache.yaml
=== caribou_ is now known as caribou
[10:23] nobuto: I got another exception this time http://pastebin.ubuntu.com/24000102/
[10:23] kjackal: I used the example vanilla layer, so the checksum might be different. let me check.
[10:24] But this is past the patch you are proposing, so i will merge the PR. However, this layer seems rather old, and you might not want to use it for demos
[10:27] kjackal: right. the vanilla layer needs to be updated to use the new apache.yaml syntax: https://github.com/nobuto-m/layer-vanilla/blob/xenial/apache.yaml
[10:28] kjackal: anyway, our new-charm development story on jujucharms.com might need a facelift.
=== coreycb` is now known as coreycb
=== Anita is now known as Guest34515
[12:54] the channel-wise charm revoke command is not working. how can we fix that?
=== freyes__ is now known as freyes
[13:03] how do I create services in a bundle so more than one can be installed on a single machine?
[13:04] right now, it seems to be ignoring the "to" field and spawning new machines for services
[13:05] the channel-wise charm revoke command is not working. Please advise
[13:20] kklimonda: the to field should do it. could you share your bundle?
[13:20] Guest34515: are you getting an error message? Could you describe what's not working?
[13:31] @marcoceppi: ah, I see - it seems that if I add a new service to the bundle file, "to" is ignored
[13:39] (add a new service and deploy again without cleaning the environment)
[13:40] kklimonda, is this with conjure-up?
[13:40] no, pure juju 2.0
[13:40] ok
[13:42] perhaps there is a different way to incrementally work on bundles?
[13:42] so I can write them one application at a time
=== anita is now known as Guest26106
[14:20] how do I revoke specific revisions of a charm in the charm store?
[14:41] how do I revoke specific revisions of a charm in the charm store?
[14:41] please help
[14:48] Guest26106: are you getting an error message? Could you describe what's not working?
=== frankban is now known as frankban|afk
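A minimal sketch of the placement kklimonda is after, in the bundle syntax of the juju 2.0 era (hypothetical charms and machine numbers; the top-level key was still "services" at the time):

    cat > bundle.yaml <<'EOF'
    services:
      mysql:
        charm: cs:xenial/mysql
        num_units: 1
        to: ["0"]              # place on machine 0
      wordpress:
        charm: cs:xenial/wordpress
        num_units: 1
        to: ["lxd:0"]          # container on the same machine
    machines:
      "0":
        series: xenial
    EOF
    juju deploy ./bundle.yaml

As the thread suggests, this held for a fresh deploy; re-deploying an amended bundle into a live model is where "to" was reportedly ignored.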
[15:23] good morning
[15:23] I am wanting to bootstrap juju to perform a canvas install of openstack, not through autopilot
[15:23] the issue I am having is the internal network created by the lxd bridge on the bootstrapped juju machine
[15:23] 10.0.0.0/24
=== scuttle|afk is now known as scuttlemonkey
[15:26] bildz: does that collide with your current network?
[15:27] no, it's NAT'ed and preventing root-based containers from getting a routable IP
[15:28] I need the LXD bridge to use the correct DHCP, instead of the default bridge network
[15:34] \o/
[15:37] ok guys, how the heck do we reconfigure lxdbr0 to use my existing network (and dhcp) when deploying an app, instead of going rogue and using its own defined and useless 10.x.x.x network
[15:40] marcoceppi: h00pz and I work next to each other
[15:43] bildz h00pz is this on maas?
[15:44] the hosts are deployed by maas, but we're avoiding autopilot as it sucks ass at placement
[15:44] we stood up a standalone juju controller and 'juju add-machine ssh:' for all the computes
[15:45] then we added the openstack env and created the lxd containers for the various services, but when it came to the lxd networking it went and used 10.x.x.x
[15:45] we would like to know how to change that lxd networking to use the same network and dhcp as the hosts they will be on
[15:50] marcoceppi: any idea how to hack the lxd bridges?
[16:26] kwmonroe: cory_fu: an easy to merge update on the readme https://github.com/juju-solutions/layer-cwr/pull/86
[16:37] I posted this in #netfilter and #lxdcontainers, possibly someone here has some insight ...
[16:37] having some issues getting packets through to lxd containers via iptables nat, wondering if someone might shed some light on my attempt to nat from host to container
[16:38] I'm applying a prerouting rule on my external interface in order to nat through the host to a lxd container on lxdbr0, the prerouting rule is "iptables -t nat -A PREROUTING -i ens3 -p tcp --dport 6379 -j DNAT --to 10.0.0.160:6379"
[16:38] the packets don't seem to be making it through to the container though.... I'm wondering if there are any tricks of the trade I'm missing here?
[16:40] trying to introspect the ufw rules docker creates in an attempt to recreate them
[16:45] bbcmicrocomputer: which contrail charms on jujucharms.com are most up to date?
[16:50] https://api.jujucharms.com/charmstore/v5/cassandra-29/archive/config.yaml - why is install_sources type "string" when the default value is actually a list?
[16:54] kklimonda: https://jujucharms.com/u/sdn-charmers/
[16:54] thanks
[16:54] kklimonda: bundles are here - http://bazaar.launchpad.net/~sdn-charmers/+junk/contrail-deployer/files/head:/bundles/
[16:55] kklimonda: works best with Contrail 3.2/3.1 commercial packages from Juniper
[16:55] have you looked into the dpdk vrouter?
[16:55] (they require a license)
[16:55] kklimonda: these charms don't support dpdk
[16:55] Contrail 4.x charms from Juniper should do (due April)
[16:56] sigh, I need R3.1 with dpdk and juju - fortunately I'm familiar with contrail itself, so I just have two unknowns
[17:44] anastasiamac: upgrading to juju 2.1-rc1 to fix bug https://bugs.launchpad.net/juju/+bug/1605241 causes juju to no longer be able to bootstrap a localhost environment. It just says: cloud localhost not found
[17:44] Bug #1605241: lxd instances not starting
[17:49] is there a method in charmhelpers that returns the primary network interface?
[17:51] ahh, just found it - charmhelpers.contrib.network.ip.get_iface_addr()
[17:52] even better - charmhelpers.contrib.network.ip.get_iface_from_addr(addr), nice
[18:21] Zic - you around?
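On the DNAT question above: the PREROUTING rule alone is often not enough, since the translated packets still have to clear the FORWARD chain and the kernel must be routing between interfaces. A sketch under the log's assumptions (ens3 external, container at 10.0.0.160 behind lxdbr0):

    # rewrite inbound tcp/6379 on ens3 to the container, as in the log
    iptables -t nat -A PREROUTING -i ens3 -p tcp --dport 6379 \
        -j DNAT --to-destination 10.0.0.160:6379
    # let the translated packets cross from ens3 to the lxd bridge
    iptables -A FORWARD -i ens3 -o lxdbr0 -p tcp -d 10.0.0.160 --dport 6379 -j ACCEPT
    # and make sure the kernel forwards packets at all
    sysctl -w net.ipv4.ip_forward=1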
[18:46] Juju Show #6 in 14 min
[18:47] get your popcorn
[18:47] * jrwren fetches popcorn
[18:49] thedac: arosales marcoceppi lazyPower and anyone else I'm missing: the HO url will be https://hangouts.google.com/hangouts/_/ytl/jtR7zxxKKNe2lyJ_QtFXBACCGGaZLppiaK6hWnMYUyI=?eid=103184405956510785630
[18:50] and the viewing url is: https://www.youtube.com/watch?v=K-cWDvM2zts
[18:50] * thedac nods
[18:51] rick_h: ack omw
[18:53] rick_h: I am getting "you do not have access to this page"
[18:53] thedac: https://hangouts.google.com/hangouts/_/ytl/jtR7zxxKKNe2lyJ_QtFXBACCGGaZLppiaK6hWnMYUyI=?eid=103184405956510785630&hl=en_US&authuser=0 is my full url
[18:54] thedac: last time folks had to remove the hl and authuser keywords, maybe try setting authuser to the right one for your account
[18:54] ok
[18:54] * rick_h also manually invites you in via email
[18:54] thanks
[18:55] doesn't modern technology suck?! ;)
[18:56] 5 minute warning
[18:56] magicaltrout: at times... then it does magical things and tries to make up for it
[18:56] just fyi, still not able to get in. No email received
[18:56] Sure this is not restricted to a specific circle?
[18:56] thedac: try https://hangouts.google.com/call/kwu2kkxx5ve5rdlgkdcznmwoeae
[18:57] That seems to be working
[18:57] arosales: marcoceppi lazyPower and anyone else ^
[19:04] youtube playlist
[19:04] https://youtu.be/OBseJVHuVXI?list=PLW1vKndgh8gJS4upPNaXiYYHnCmFdWk03
[19:10] sat in a darpa webex and office hours, this is like multitasking overload
[19:11] Can I get a link to "watch" the juju show?
[19:12] mbruzek: https://www.youtube.com/watch?v=K-cWDvM2zts
[19:12] Thanks magicaltrout
[19:15] marcoceppi: is it like brackets?
[19:16] magicaltrout: lolz re darpa webex :-)
[19:17] mbruzek: hah, you wish
[19:27] how do you remove a storage pool from juju lol? I can't seem to find the cmd
[19:29] cholcombe: looking here https://jujucharms.com/docs/stable/charms-storage
[19:29] I think it is also dependent on when you want to remove the pool.
[19:30] arosales: yeah i was looking there
[19:30] everything there is about creating storage pools. nothing about removing them
[19:30] is it tied to the controller maybe?
[19:30] ya, I am also looking for a remove
[19:31] https://insights.ubuntu.com/2017/02/10/webinar-getting-started-with-the-canonical-distribution-of-kubernetes/
[19:32] cholcombe: may have to ping in #juju-dev
[19:32] arosales: ok
[19:32] cholcombe: I think wallyworld and axw were working on storage and there were some gaps
[19:32] cholcombe: would be interested in what you find out for 2.1 support
[19:32] arosales: yeah i've talked with axw a few times. I might ping him on this
[19:32] cholcombe: +1
[19:33] I think they come online here in a bit
[19:33] ~3-4 hours
[19:33] I think you can catch them more easily on west coast time
[19:33] cholcombe: sorry I don't have a better answer
[19:33] arosales: no worries
[19:35] cholcombe: asked folks in the hangout too, but no command that we were able to find
[19:36] arosales: yeah i don't think it exists. that is really strange
[19:37] axw: the destroy is coming? ^
[19:37] interesting, juju is hardcoded to connect to streams.canonical.com, even if local mirrors for tools and images are configured
[19:38] that was a cause of a very long delay between juju deploy and MAAS starting to provision a machine
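For reference, the mirror settings being discussed are plain model-config keys; a sketch of pointing a bootstrap at local simplestreams mirrors, with a hypothetical cloud name and URLs (though, per kklimonda's observation, 2.1-era juju apparently still contacted streams.canonical.com regardless):

    juju bootstrap my-maas maas-controller \
        --config agent-metadata-url=http://mirror.internal/juju/tools \
        --config image-metadata-url=http://mirror.internal/juju/images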
[19:42] arosales: do you know the env variable i have to set to allow loopback devices in lxc with juju?
[19:49] axw: ^^
[19:56] cholcombe: is that ENV or LXD profile?
[20:00] arosales: i thought i saw in a PR that you could set an env variable for juju and it would set the profile on creation
[20:00] arosales: http://reviews.vapour.ws/r/1154/
[20:01] looks like it says if StorageConfig.AllowMount is true then it sets it
[20:01] cholcombe: ah ok :-)
[20:02] good to know, thanks cholcombe
[20:02] arosales: https://github.com/juju/juju/pull/1826/files#diff-abd61728f26e92bea6ee732aa19f7808R17
[20:04] cholcombe: good stuff
[20:05] we need to document this. this was entirely too hard to find
[20:13] cholcombe: could you file a bug @ https://github.com/juju/docs/issues/
[20:13] arosales: yup will do
[20:53] Hello all, I followed the "kubernetes cluster Easy Way" tutorial and decided to add deis workflow to it. The helm install failed at the deis install step with the following message: Error: forwarding ports: error upgrading connection: Upgrade request required. Any suggestions on how to proceed?
[20:54] rahworks - the "upgrade request" bits are due to the APILB charm. It's a layer-7 router and it doesn't support SPDY, which kubernetes requires. There's a bug to replace this with an ELB as i know you are an AWS shop - https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/183
[20:54] rahworks - additionally we are cycling towards replacing it with HAProxy for the non cloud-koolaid version of that service
[20:56] rahworks - there is a published workaround for the helm installer failure because of the apilb - https://kubernetes.io/docs/getting-started-guides/ubuntu/troubleshooting/#common-problems
[20:56] lazypower thanks for the links, will take a look.
[21:02] I love deploying to containers, because it "boots" so darn fast. But it seems I always need a "base" charm installed on a physical system before I can install additional charms to containers on that physical system. Is there a dummy juju charm I could install to set up a physical system, then install the charms I actually care about as containers?
[21:04] ravenpi - juju deploy ubuntu
[21:04] D'oh! It is safe to say I would never have thought of that. Too darn obvious. :-) Thanks!!
[21:04] anytime :) we've hidden it in plain sight.
[21:05] +1
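Spelled out, the "hidden in plain sight" trick above looks like this; a sketch assuming the ubuntu charm lands on machine 0, with mysql/wordpress as stand-in workloads:

    juju deploy ubuntu                # claims a physical machine, say machine 0
    juju deploy mysql --to lxd:0      # the charms you actually care about...
    juju deploy wordpress --to lxd:0  # ...each in its own container on that machine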
[21:08] rahworks - you're going to find another issue after that helm workaround though
[21:08] rahworks - the deis chart is going to try to provision a service of type LoadBalancer, which will never be satisfied. bdx and i spoke to this last night
[21:09] ohh ok...
[21:09] rahworks - we'll need to submit a patch for an alternative path (users on bare metal will love us for this) where it uses the NodePort service type, or we'll have to manually configure an HAProxy alternative to forward the deis requests to the router
[21:09] rahworks - additionally, if you have ingress=true on your workers, the deployment will never complete, as the ingress controller is occupying port 80, which the deis router wants.
[21:10] One extra step I did do was to allow all tcp traffic from my workstation... via sg
[21:10] rahworks - are you trying this in lxd?
[21:12] just applying the deis install to the canonical-kube install... nothing lxd specific
[21:12] ok, that statement of "allow all traffic from my workstation" caught me off guard, not sure how that fits into the order of operations here.
[21:13] ohh... well after i updated the config to point to port 6443, the security group setup for the master defaults to blocking that port.
[21:14] rahworks - you should be able to juju expose kubernetes-master and get that unblocked. it's just defaulted to unexposed in HA formation to help isolate traffic away from it (read: slightly more secure by default)
[21:14] ohh ok..
=== frankban|afk is now known as frankban
[21:28] lazyPower: container networking in 2.1 you say???
[21:28] bdx - kick the tires and tell us what works for you and what doesn't :)
[21:29] omg - on it
[21:33] bdx https://lists.ubuntu.com/archives/juju/2017-February/008595.html
[22:18] evilnick___: so yeah, that setting doesn't seem to do it
[22:19] is there a way to list the model config values currently set?
[22:19] cholcombe, :(
[22:19] yes, just juju model-config
[22:19] evilnick___: cool. do i need to create the model before this'll take?
[22:20] cholcombe, you can set it as a default for models, then create a new model
[22:20] i'm not sure if that will make a difference
[22:20] ok, then yeah it doesn't work :-/
[22:22] ah. in that case maybe it didn't make it to LXD
[22:22] evilnick___: yeah, wallyworld just commented
[22:23] that stinks. i really want to test gluster on lxd so i can have it grab some floating ip's from my bridge and use them
[22:23] allocating elastic ip's on ec2 is a pain
[22:23] there are security issues with loop mounts
[22:23] cholcombe, well, sorry about that, but at least I can now turn that issue into removing the reference from the storage page.
[22:23] wallyworld: yeah i saw in the commit
[22:23] it was enabled with lxc because those were privileged
[22:23] i see
[22:24] wallyworld: can we at least give people the option to use said unsafe thing?
[22:24] there are plans to come up with a solution, but there are things to consider so we don't do bad things
[22:24] i'm totally onboard with this being bad in production, but for dev it's difficult to work around
[22:24] yeah, understood
[22:25] it's something that fell off the "most important thing to do next" list
[22:25] yeah, i understand
[22:25] i'll prod the right folks to look into it
[22:25] wallyworld: are there any block devices that'll work on my local lxd?
[22:26] without loop devices, i don't think so
[22:26] gah
[22:27] leave it with me and i'll dig to get a proper answer
[22:27] ok
[22:29] wallyworld: maybe a zfs provider :)
[22:29] that would be sweet for local
[22:29] yeah, it would be
[22:30] LXD is getting new storage APIs this cycle - juju was going to support those but we've had a team restructure and may need to drop that work
[22:30] ah interesting. i'll talk to rockstar about it and see what he knows
[22:31] so it is recognised we have work to do, but not the people to do it as the next priority at the moment
=== frankban is now known as frankban|afk
[23:14] cholcombe: there's no command to remove pools, and the config attribute we used to have for loop devs no longer exists in juju 2
[23:14] cholcombe: you would have to make the necessary changes to the lxd profile by hand
[23:14] axw: i see
[23:14] can i set it globally for lxd?
[23:15] cholcombe: you can set it in the default profile
[23:15] cholcombe: juju also creates profiles for each model, so you can set it on a per-model basis
[23:15] axw: cool. if that was in the docs that'd be super helpful :D
[23:15] i wouldn't have to bug ya
[23:16] cholcombe: I'll find out exactly what profile changes are required and update the docs issue
[23:16] axw: awesome! I'm looking forward to test driving this
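A sketch of the by-hand profile change axw describes, on the assumption (exactly what the docs issue was filed to pin down) that loop mounts need a privileged container plus the loop device mapped in:

    # set on the default profile, or on the per-model profile juju creates
    lxc profile set default security.privileged true
    # expose a loop device to containers using that profile (assumed to be required)
    lxc profile device add default loop0 unix-block path=/dev/loop0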
=== mwhudson_ is now known as mwhudson
[23:43] cholcombe: https://github.com/juju/docs/issues/1665
[23:43] axw: :D
=== mwhudson is now known as Guest32650
[23:43] * cholcombe high-fives axw
[23:43] cholcombe: btw, LXD is adding a storage API that we'll make use of when we're both ready
[23:43] cool
[23:43] cholcombe: so you'll be able to add volumes into a container programmatically
[23:43] sounds good to me!
=== externalreality_ is now known as externalreality
=== mwhudson_ is now known as mwhudson
[23:53] Is it recommended to deploy charms one at a time, or is it better to use bundles?
[23:54] I've been trying to get a modified openstack bundle to deploy properly, but I'm wondering if it's smarter to build it piece by piece
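Both routes use the same primitives, so a middle path is to deploy a trimmed-down bundle and then grow the model one application at a time; a sketch with a hypothetical bundle path and charm choices:

    juju deploy ./openstack-base.yaml        # start from a minimal, known-good bundle
    juju deploy cs:xenial/glance --to lxd:1  # then add pieces individually
    juju add-relation glance keystone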