[01:12] question: can you move a container in juju from one machine to the next via the command line, by chance?
[01:13] so I am about to put a PoC using Juju onto AWS and am trying to figure out what the min instance size should be?
[02:18] Budgie^Smore: t2.medium is pretty good for most things, depends on the workload
[02:19] well I was just going to put the juju controller on it and maybe a kubernetes-master node
[02:20] it is probably only going to manage a few other slave nodes
[02:26] Budgie^Smore: so, t2.medium should be okay for controller, m3.medium might be more stable
[02:27] as for kubernetes-master / worker the master doesn't need too much if you're going to be doing a small number of worker nodes. The workers are really up to you depending on how many containers you want to pack in per node
[02:30] yeah I am looking at doing c4.4xlarge or c4.8xlarge for the slaves
[02:31] I should be able to put the master on the same instance as the juju controller, right?
[03:57] lol managed to blue screen the work laptop! new windows 10!
[06:53] is there a way for models to share machines?
[07:43] hey marcoceppi_ just came across your name on a GitHub "issue" in relation to using AWS ELB as a substitute for kubeapi-load-balancer, definitely would get a thumbs up from me for that :)
=== frankban|afk is now known as frankban
[08:32] Good morning Juju world
=== rvba` is now known as rvba
[12:36] I have been trying to deploy 10 nodes with Juju/MAAS and I've gotten to the point where I bootstrap juju onto MAAS as a cloud, but it doesn't indicate machine status when HA is enabled. Is this normal?
[12:38] fang64: could you elaborate on machine status?
[12:39] marcoceppi: when I type juju machines it indicates ha status is 1/3
[12:39] this is after I've enabled ha, and the hosts are done being deployed.
[12:40] marcoceppi: I mean controllers sorry, wrong command
[12:41] it does show the controller has 3 machines, but HA is yellow with 1/3
[12:42] I assume it means high availability isn't working? because it's not indicating 3/3? maybe it's my lack of understanding
[12:44] marcoceppi_: this is what I see, http://i.imgur.com/onIDdtL.png
[12:46] fang64: yes, it should work its way to 3/3 as HA kicks in and the db is replicated/etc
[12:46] it's been a day
[12:46] I don't think it takes that long for ha to enable?
[12:46] fang64: k, so something is up. You'll have to check the debug-log or at this point the logs on the machines
[12:47] fang64: no, definitely not
[12:47] ok, so it's broken
[12:47] I just wanted to make sure I wasn't misunderstanding.
[12:47] no, definitely not. That's why it shows yellow in that you've asked for 3 controller nodes in HA, but only have one functioning
[12:48] alright well I'll take a look, really I am trying to get to a point where I can deploy openstack, but I kept running into issues with bootstrap so I suspect I have some network issues.
[12:48] fang64: oh, ok.
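A rough composite of the commands discussed above, for bootstrapping with an instance-size constraint and then growing the controller to HA; the cloud/region, controller name and instance types are illustrative, not recommendations:

    # bootstrap on AWS with a specific instance type for the controller machine
    juju bootstrap aws/us-east-1 poc --bootstrap-constraints "instance-type=t2.medium"
    # (or bootstrap against a MAAS cloud the same way), then grow the controller
    juju enable-ha -n 3
    juju controllers               # the HA column should work its way from 1/3 to 3/3
    juju status -m controller      # the three controller machines and their state
    juju debug-log -m controller   # first place to look if HA sticks at 1/3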
[12:50] I don't know if anyone can answer this question: if I am deploying to MAAS as I am now, is juju using lxc containers or is it just installing the charms to physical hosts?
[12:50] fang64: so it depends on how you're installing
[12:51] I just created maas as a cloud
[12:51] fang64: the bundles that the openstack folks use, use lxd containers on the physical hosts to help spread things out
[12:51] in juju and then told it to bootstrap
[12:51] fang64: so nothing there is lxd centric
[12:51] ah ok
[12:51] fang64: you'll see in the bundle things getting told to go to 0:lxd or the like
[12:52] fang64: to help colocate the openstack services on deploy, but until you do that nothing is lxd unless told to be
[12:52] ok, so in my case with juju bootstrap as it stands now it's provisioning the hosts as the failover
[12:52] no containers being used to do that?
[12:52] so juju bootstrap will go ask MAAS for one node, download the jujud binary to it, and start the service
[12:53] fang64: no, no containers being used to do that
[12:53] ah ok, that's something I didn't understand initially when I looked at this, so juju is running on the host os, and when I enabled ha it grabbed 2 more
[12:53] fang64: exactly
[12:54] fang64: to have HA you need three machines so that you're not going to fall over if a disk dies/etc
[12:54] that makes more sense, I was just a little confused what it's actually doing with MAAS
[12:55] Another thing I was curious about: which is responsible for networking configuration? Juju or MAAS?
[12:55] or is it a combination of both, or it depends on what bundle or charm you are using?
[12:57] rick_h: I appreciate the help, I'm going to try and figure out what's borked on my networking
[13:10] Is there a way to react when the controller loses its connection to an agent?
[13:11] for example now it says: agent lost, see 'juju show-status-log my-instance/3'
[13:23] anrah: If it doesn't recover, I think it means either your network is blocked between the controller and that unit, or one or more jujud agents have crashed and you have a Juju bug to deal with
[13:24] anrah: If you find a dead juju service, you can try restarting it to see if it sorts itself out.
[13:25] I mean that if i have a case where the server dies for reason X
[13:25] meaning some sort of self-healing where juju would spin up a replacement unit
[13:26] I think that requires external monitoring and some magic with libjuju for python..
[13:27] Yes. I don't think juju ever spins up new units without user input.
[13:29] If you run juju recursively, you can even charm that :) I think some people do it for autoscaling.
[13:31] (probably easier now with 2.0 - you just need your monitoring charm given credentials for the main controller and it can administer itself and all the other models)
[13:31] anrah: there's a basic autoscale/autoheal demo here: https://github.com/juju/python-libjuju/blob/autoscaler/examples/autoscale.py
[13:32] you could extend it to deal with more healing conditions
[13:32] stub: Yeah, I have built something like that for autoscaling
[13:33] next step is to make it work with autohealing
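A rough, manual version of the "agent lost" recovery described above; nothing here is automatic, and the unit/machine names plus the jujud service names are assumptions (Juju 2.x on systemd usually names them jujud-machine-<n> and jujud-unit-<app>-<n>), so verify them on your own machines:

    juju show-status-log my-instance/3                        # what the agent was doing before it went quiet
    juju debug-log --replay --no-tail | grep my-instance/3    # the controller-side view of the same unit
    # if the machine is still reachable, try kicking the unit agent back to life
    juju ssh my-instance/3 'sudo systemctl restart jujud-unit-my-instance-3'
    # if the box is really dead, heal it by hand -- juju will not replace it on its own
    juju add-unit my-instance
    juju remove-machine 3 --force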
[13:34] tvansteenburgh: Do you know if there is a best practice for adding unit tests to layers that don't mess up the main charm's tests?
[13:35] stub: sorry, i dunno
[13:36] I think I either need to exclude them from charm build, or stick them in a non-standard directory
=== ant_ is now known as ant__
[15:20] stub, should be able to exclude unit/tests/file-by-name iirc
[15:20] or some such
[17:12] Hi, lads. How do I use 'Login with USSO' in juju2's gui? I bootstrapped a 2.0 environment, did 'juju gui --show-credentials', opened the juju-gui URL in the browser, and when I click on 'Login with USSO' I get 'authentication failed: no credentials provided'
[17:13] i am logged in with ubuntu single-sign-on
[17:13] I am also logged into the charmstore
[17:13] Mmike that's a great question.
[17:14] rick_h - is there any feedback/guidance here re USSO login support? or who should i be pinging to find out for mmike?
[17:14] Mmike: hmm, that's only available if it's configured to use an external identity provider.
[17:15] Mmike: not useful for most cases unless you've bootstrapped that way
[17:15] rick_h: oh
[17:15] Mmike: the --show-credentials should show you the username/password to use in the gui
[17:15] rick_h: yup, those work, I just wanted to see how 'USSO' would work
[17:15] Mmike: honestly, I think that button should be hidden unless the controller supports it.
[17:15] Mmike: I'll bring it up with the team
[17:15] rick_h: that was my thinking too!
[17:16] rick_h: how do I bootstrap with an external identity provider configured?
[17:16] that's a controller option, or?
[17:16] Mmike: it's a config. I'll have to see if I can find it, sec
[17:17] thnx!
[17:17] Mmike: https://lists.ubuntu.com/archives/juju/2016-September/007843.html
[17:17] Mmike: beware "here be dragons" as it works but has side effects and such
[17:17] Mmike: we were just working on things like show-model listing users with access/etc this morning
[17:18] Mmike: so it's there but there's some things that it causes to act a bit wonky
[17:18] I see
[17:18] rick_h: thank you for that info
[17:18] Mmike: np, hope that helps
[17:18] thanks rick_h
[17:18] I'm asking because on https://blog.jujugui.org/ it's mentioned that if you log in with sso you get additional options, etc...
[17:18] so I was wondering how to get there
[17:19] but, yea - disabling that button if the controller doesn't support it would be excellent
[17:19] Mmike: what additional options?
[17:19] * rick_h skims
[17:20] Mmike: not sure I can think of any additional options you get tbh
[17:20] it's the same juju/gui/etc
[17:21] rick_h: the video says that 'you can log in to the controller using sso, and then you get an additional canvas to select models, etc, etc'
[17:21] Mmike: oh hmm, will have to watch that I guess.
[17:21] Mmike: I mean it just shows models you have access to/etc
[17:23] rick_h: yup, maybe the video is confusing or giving information that's specific to a particular type of controller
[17:25] What's the bash equivalent of charmhelper.config's .changed?
[17:34] aisrael: You using reactive? There is a state set that you can check with `charms.reactive is_state config.changed.foo`
[17:34] aisrael: I don't know if there's a CLI for the actual method on Config.changed
[17:39] cory_fu, Yeah, this is a pure-bash charm, but I'm going to convert it to a layer, I think
=== natefinch is now known as natefinch-afk
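A minimal sketch of how the reactive-state check mentioned above might look from a bash layer; the `port` option name is made up, and it assumes `charms.reactive is_state` reports the result through its exit status, so double-check that against your charms.reactive version:

    #!/bin/bash
    # react only when the 'port' config option changed since the last hook run
    if charms.reactive is_state 'config.changed.port'; then
        status-set maintenance "reconfiguring for new port"
        # ...re-render the config file and restart the service here...
    fi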
[17:40] lazyPower: (I prefer IRC over Slack :D) the last time you said to me that Canonical Kubernetes was not compatible with Helm, is that so?
[17:41] lazyPower: about my issue with Vitess posted in their Slack, the guys from Vitess pointed me to https://github.com/youtube/vitess/tree/master/helm/vitess (work-in-progress)
[17:41] it's a Helm package :s
[17:41] helm charts* don't know the right word :)
[17:41] Zic - we certainly do support helm, but there is a known issue with the current incantation of the kube-api-loadbalancer
[17:42] the supported workaround is to either update or clone your kubeconfig and point it directly at one of your kubernetes-master units, and expose the kubernetes-master unit. Proxying through the master load balancer will cause you heartburn and trigger false positive failures with helm.
[17:42] oh, I can do that
[17:42] Zic - we have an open PR to actually put that in the upstream docs for the CDK
[17:43] thanks to SaMnCo for that submission *hattip*
[17:43] I already access my kubernetes-master directly on my LAN
[17:43] and even the kube-api-loadbalancer is exposed only on the LAN
[17:43] ok, if you're already pointing directly at your master, you should be g2g
[17:43] so I just need to update my kubeconfig
[17:43] and if that's not the case, i want all your bugs and feedback around this so we can triage accordingly
[17:43] hehe :D
[17:43] yep, just point kubeconfig at the correct port/ip for a master and you should be g2g
[17:44] Zic - not sure if you saw but we just landed *everything* upstream yesterday around 7pm CST
[17:45] https://github.com/kubernetes/kubernetes/pull/40324
[17:45] lazyPower: as my Canonical Kubernetes cluster runs perfectly fine, I'm throwing all of my ninja-power at making Vitess work in K8s... except I discovered today that their config (used in their official deployment guide) is not resilient at all \o/
[17:46] so... no, I didn't see anything from the tech-world except my rage and tears with Vitess these last 2 days :)
[17:46] but noted, I will take a look :)
[17:46] well, i'm sorry to hear about the tears, but it's awesome to hear that we are empowering you
[17:46] makes my own day to day tears worthwhile :)
[17:46] huhu :)
[17:48] simply put: canonical-kubernetes does the job perfectly, and when I saw some guys raging against kubeadm which can't do the same, or even trying to build K8s from scratch, I thank Juju :)
[17:48] building K8s from scratch is very instructive anyway
[17:48] but I need a tool to industrialize it in my company
[17:49] I appreciate that feedback, we've been cycling hard to remove the barrier to going to prod with kubernetes
[17:49] that's for the right part o/
[17:49] so much so that we're going hard in the paint, my nice grey paintjob has streaks of red and white all up and down the side
[17:49] for the wrong part (and that's totally offtopic here): Vitess is not ready for a production-grade experience at this time
[17:49] it will need some tricks to be
[17:49] the good news is that this K8s cluster is not entirely dedicated to Vitess :p
[17:50] we're looking to leave beta soon
[17:50] so, be prepared for bulletproofing in the coming iterations
[17:50] Zic - did you perhaps update to 1.5.2 with last week's update?
[17:51] my main awaited feature is haproxy replacing nginx in kube-api-loadbalancer :)
[17:51] we just had a brief meeting about that this morning, we're looking to do layer 4 routing instead of layer 7 via nginx. so that haproxy replacement can't come soon enough
[17:51] nope, I think I'm on 1.5.1 for now, I didn't have the time to upgrade with my Vitess problems :(
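The kubeconfig workaround described above — talking to a kubernetes-master unit directly instead of going through kube-api-loadbalancer — might look roughly like this; the context/cluster names, the 6443 API port and the CA path are assumptions, so reuse whatever your existing kubeconfig already has:

    juju expose kubernetes-master           # make the master reachable (skip if it is already on your LAN)
    juju status kubernetes-master           # note a master unit's address
    kubectl config set-cluster cdk-direct \
        --server=https://<master-address>:6443 \
        --certificate-authority=/path/to/ca.crt
    kubectl config set-context cdk-direct --cluster=cdk-direct --user=admin
    kubectl config use-context cdk-direct   # helm now talks to the master directly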
[17:51] Ack. when you do i'm highly interested in a) which approach you took to do the upgrade and b) how that experience went for you
[17:52] lazyPower: layer 4 is OK also, in fact it's just the ability to take a master offline easily that I need
[17:52] so capturing any feedback for us would be highly useful in that context
[17:52] (without touching the nginx vhost)
[17:52] that's our goal, to not have the master exposed at all, so you can isolate it in some network segment and sleep easy
[17:54] to describe our infra for now: I'm using 3 masters, 5 etcd, 1 kube-api-loadbalancer (with the easyrsa charm also on it), 6 physical workers, 3 AWS EC2 instances, 2 DRBD filers for PVs with NFS, all in a private LAN
[17:54] to expose publicly, I set up a public haproxy which has Ingress as a backend
[17:54] (3 haproxy, with a heartbeat VIP)
[17:54] oh hey that's a nice setup
[17:54] * lazyPower nods
[17:55] i also see you took etcd durability very seriously and gave it proper fault tolerant pooling
[17:55] yes, as you advised me the first time I came here :)
[17:55] :D
[17:55] (that was for the first pizza i owe you :p)
[17:56] but yeah, I don't use public Ingress because if I bring up a public ethernet interface on my kubernetes-worker, all my NodePorts will be exposed, as K8s does not have a setting like "NodePort only bind on private address"
[17:56] so I preferred to put a public LB in front of Ingress and NodePorts
[17:56] right
[17:56] i think there was discussion on the k8s project around that
[17:56] NodePort on Interface
[17:56] but i haven't been tracking that too closely, and it may have been tabled
[17:57] yes, I saw an issue on GitHub but not much news since 2015
[17:58] yeah, that sounds about right
[17:58] might be worth poking it to see if there can be some renewed interest around it
[17:59] the only negative effect of that is that our customer has full capability with kubectl to expose his services privately through Ingress and NodePorts
[17:59] but for public parts, it's HAProxy VMs which are fully managed through Puppet
[18:00] so, we're going to augment ingress with configmaps, which should make it more durable for you
[18:00] so our customer needs to file a ticket for every public exposure
[18:00] as in, you can expose interesting things like ssh services for a private gogs instance
[18:00] (not so important as he only exposes 80 and 443, and we do the ssl offloading with a wildcard cert)
[18:00] and it'll proxy that ssh connection through the ingress controller
[18:00] cool :)
[18:00] as it stands today we're only concerned with web traffic on that ingress controller
[18:00] yep
[18:00] but we do realize and understand there is another class of workloads that needs to be supported
[18:00] I use NodePorts for all other concerns
[18:01] and it's a bit slower going but i'm tracking that
[18:01] yeah
[18:01] i do the same in my homelab
[18:01] but not as practical as Ingress
[18:02] yep
[18:02] I discovered something like https://traefik.io/ also
[18:02] there's been some effort around an haproxy ingress controller as well
[18:02] i've used traefik, it's great
[18:02] i'm not positive that it's been updated with socket support yet though
[18:02] which can empower my customer for public exposing
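To illustrate the NodePort behaviour described above: a NodePort service listens on its allocated port on every address of every worker, so bringing up a public NIC exposes it too. A small sketch, with a made-up deployment name and port:

    kubectl expose deployment web --port=80 --type=NodePort
    kubectl get svc web                          # note the allocated port in the 30000-32767 range
    # that port answers on any worker address, public or private alike,
    # hence the separate HAProxy layer in front for public traffic
    curl http://<any-worker-address>:<nodeport>/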
[18:02] are you aware if they added that in recent revisions?
[18:02] no not at all, I'm at the point of just reading their homepage :D
[18:02] i'd like to include that in a workloads repository so end users can mix/match their ingress controllers via namespace
[18:03] eg: namespace=customer you get the default nginx controller, namespace=alpha1 - you launch and use a traefik ingress controller until you're ready to promote to namespace=customer
[18:04] and can move all that stuff with it, it's just kind of a nice-to-have, and would be great to be tuneable with some curated manifests
[18:04] but that's all pie in the sky at this time, i haven't had time to dedicate to it
[18:05] I'm planning to finish this resilient Vitess cluster, upgrading to K8s 1.5.2 through Juju and after that, powering off/rebooting all the parts randomly :D
[18:05] @lazyPower pleasure :) just the first of (I hope) a long list
[18:06] @Zic, let me know if you have issues around helm
[18:06] if all stays OK (I already did some tests :p) I will owe you 3 pizzas total
[18:06] if all goes wrong, I will jump through the ground-floor window
[18:06] (and then, debug :p)
[18:06] also, if you are looking into scaling SQL, we have a Charm Partner (ScaleDB) who does just that
[18:07] not sure they have a lot of k8s stuff, but still worth looking at.
[18:08] noted :)
[18:08] and, last but not least, I'm trying to get good low-level sysadmin feedback on Juju usage to document
[18:09] so I'd be happy to discuss your XP, in French or English ;)
[18:13] anyone here who knows a bit about ceph: how do I create a directory without creating a drive mount point? http://paste.ubuntu.com/23864768/
[18:14] for some reason ceph-disk keeps pushing a drive creation, but I only want a directory to be used
[18:17] ping cholcombe and icey ^
[18:19] Teranet: can you share a bundle that you're using to deploy that? from the bit of logs, it looks like it _should_ just work so it would be a bug
[18:20] it's not a bug, I was just looking at the container and I do see /srv/osd is created and already has data in it
[18:20] I am thinking it's still looking for an empty one
[18:31] ok I am deploying a bigger OS.yaml which I already custom built in a lot of ways
[18:31] want me to share the whole OS.yaml?
[18:36] ah Teranet, ceph-osd doesn't behave too well inside of a container
[18:37] so I should redeploy it outside of the container?
[18:38] or can I just redeploy the 3 ceph-osd units somehow to the boxes instead of containers?
[18:39] what is wrong with ceph-osd in a container? if your lxc/lxd is on zfs, make sure to set use-direct-io: false and don't expect /dev osd-devices to work. Other than that it seems to work for me for dev/test.
[18:41] Teranet: is your lxd zfs or btrfs backed?
[18:42] Teranet: I get those errors with a zfs-backed container and the default use-direct-io value. Set use-direct-io to false for the ceph-osd charm.
[18:44] jrwren: I suppose that's relevant too; I wouldn't run ceph-osd inside of a container unless it _is_ just for testing
[18:45] let me double check
[18:54] how can I check if it's zfs again?
[18:54] fdisk doesn't help much there
[18:56] lxc info | grep storage
[18:56] should tell you
[18:57] zfs
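Given the zfs-backed LXD confirmed above, the suggested workaround boils down to something like this; the use-direct-io option name comes from the chat itself, but check it against the ceph-osd charm's config, and note that older 2.0.x clients spell the command `juju set-config` rather than `juju config`:

    lxc info | grep storage                     # confirm the backing store (zfs here)
    juju config ceph-osd use-direct-io=false    # or: juju set-config ceph-osd use-direct-io=false
    juju ssh ceph-osd/0 'grep -r "journal dio" /etc/ceph'   # the rendered ceph.conf should now carry 'journal dio = false'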
[19:02] ok so if I put them on the host now, how can I move them?
[19:04] or do I need to kill those 3 containers with ceph-osd on them first and then redeploy them?
[19:06] Teranet: probably, yes, or you could set the config, check in /etc/ceph for 'journal dio = false'
=== redir is now known as redir_exercise
[19:17] concerning the hosted controller
[19:17] I don't seem to be able to connect to a model on the hosted controller via libjuju
[19:17] see http://paste.ubuntu.com/23865066/
[19:18] ^ succeeds when run against my own controller, but fails when run against the hosted controller
[19:18] tvansteenburgh
[19:18] ^^
[19:18] tvansteenburgh: have you tried using libjuju against the hosted controller?
[19:19] bdx: yeah. it *can* work, but it relies on valid macaroons existing on the host
[19:20] bdx: do you have the juju cli installed on the host?
[19:21] tvansteenburgh: yea, does the fact that the juju cli can access the model validate that the macaroons are valid?
[19:21] yeah
[19:22] bdx: can you deploy to the model with the cli?
[19:22] if you can, then libjuju should work too
[19:22] heya Zic
[19:22] we should find time this week or next to have you sync with the team in a hangout, would love to get a laundry list of feedback from you
[19:22] bdx: this is a limitation of libjuju until we add support for obtaining and discharging macaroons
[19:24] tvansteenburgh: I run that script ^ against a lxd controller and it does not error, then I switch controllers via `juju switch jujucharms.com`, following which I select one of my models, and run the script again and it fails
[19:25] bdx: is there a traceback?
[19:25] due to auth errors, see http://paste.ubuntu.com/23865120/
[19:27] bdx: are you logged in, i.e. juju login
[19:27] yea
[19:28] tvansteenburgh: http://paste.ubuntu.com/23865139/
[19:28] bdx: do you have a ~/.go-cookies file?
[19:28] yeah ... want me to squash it and try again?
[19:29] not yet
[19:31] so I just created a controller and ran "juju ssh -m controller 0". while it appears that it is logged in, it doesn't give me a prompt, and hitting enter gives me "-bash: line 1: $'\r': command not found". any ideas what is going on?
[19:32] bdx: http://pythonhosted.org/juju/narrative/model.html#connecting-with-macaroon-authentication
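For the macaroon-auth limitation discussed above, the prerequisite checklist on the host running a libjuju script against the shared controller looks roughly like this; the model and script names are placeholders:

    juju login                      # refresh the macaroons for the external identity provider
    juju switch jujucharms.com      # point the CLI at the hosted controller
    juju status -m mymodel          # if this works, the cached macaroons are valid
    ls -l ~/.go-cookies             # libjuju reuses the macaroons cached in this file
    python3 my-libjuju-script.py    # hypothetical script using the macaroon-auth flow in the doc linked above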
[19:36] tvansteenburgh: YES! that did it
[19:36] stormmore - that's most definitely a bug, but i'm not certain what happened
[19:36] tvansteenburgh: thank you
[19:37] stormmore: are you on windows?
[19:37] bdx: \o/ np
[19:37] lazyPower I came across https://bugs.launchpad.net/juju-core/+bug/1468752 so you are right, it is a bug :)
[19:37] icey yes
[19:37] oo fantastic find
[19:37] bummer that it's a bug, but glad it's not pioneering territory
[19:37] stormmore: yeah, that was the bug I was curious about :)
=== scuttlemonkey is now known as scuttle|afk
[19:45] so what it's telling me is that this is a stupid Windows bug without a workaround, but the fix is in 2.2 which is currently in alpha!
[19:47] I am going to see if I can install and bootstrap from bash / Ubuntu on Windows 10
[19:50] that is if I can figure out how to get it upgraded to Xenial
=== redir_exercise is now known as redir
[20:10] tvansteenburgh: I may have been a bit premature in my rejoicing
[20:10] tvansteenburgh: I'm getting "Fatal error on SSL transport" now ...
[20:12] bdx: that's almost always a side effect of another problem, got a traceback?
[20:12] tvansteenburgh: my simple script http://paste.ubuntu.com/23865377/
[20:12] tvansteenburgh: traceback <- http://paste.ubuntu.com/23865383/
[20:16] bdx: can you set the logging level to DEBUG and paste the full output?
[20:16] tvansteenburgh: http://paste.ubuntu.com/23865408/
[20:18] bdx: did you change line 41 or 43?
[20:18] tvansteenburgh: do you think I need to be supplying a 'cacert'?
[20:18] tvansteenburgh: both
[20:20] bdx: yeah, try sending the cert
[20:20] i'm not convinced that's the problem but it won't hurt
[20:21] tvansteenburgh: http://paste.ubuntu.com/23865441/
[20:21] sad
[20:21] heh
[20:22] nevermind!
[20:22] any word on this bug? https://bugs.launchpad.net/juju/+bug/1614364
[20:27] tvansteenburgh: does libjuju use the asyncio ssl protocol for ssl connections?
[20:29] bdx: no
[20:29] well it might indirectly
[20:30] tvansteenburgh: this https://github.com/python/asyncio/blob/master/asyncio/sslproto.py
[20:30] no
[20:30] tvansteenburgh: can you replicate this on your end?
[20:31] bdx: not yet. it works for me. still looking
[20:40] bdx: change line 41 to: logging.basicConfig(level=logging.DEBUG)
[20:40] then paste me the full output
[20:41] bdx: also, print the value of your model_uuid to make sure it's actually set
[20:41] tvansteenburgh: I'm setting MODEL_UUID inline in my testing
[20:42] ok i thought you were getting it from the env
[20:46] http://paste.ubuntu.com/23865573/
[20:46] that's better
[20:48] bdx: yeah, so that means whatever macaroons you have locally aren't sufficient
[20:51] bdx: which is strange since you can run juju status on the model
[20:57] tvansteenburgh: I launch a fresh xenial container `lxc launch ubuntu:16.04 libjujutest`, exec in, `su ubuntu`, install juju, libjuju, asyncio via `sudo apt install juju python3-pip && sudo -H pip3 install juju asyncio`, run the script and get the error
[20:58] tvansteenburgh: should the jujuclient version <-> juju controller version mismatch have anything to do with this?
[21:00] bdx: when using macaroon auth (shared controller), libjuju can currently only connect if there are valid macaroons in ~/.go-cookies already
[21:00] * login to the controller:model via the juju cli, then run the script and still get the error
[21:00] correction
[21:00] my bad
[21:00] I am logging in first
[21:02] bdx: i'm trying to figure out what's different. when i run your script against jimm, it works. on every model i try
=== tvansteenburgh1 is now known as tvansteenburgh
[21:02] bdx: try `juju deploy ubuntu`, then run your script
[21:06] I keep getting "ERROR cannot update credentials for aws: timeout acquiring mutex" when trying to add credentials or change the default aws region :-/
=== scuttle|afk is now known as scuttlemonkey
[21:07] tvansteenburgh: deployed ubuntu, no change, except that now the script will error every other try, and just hangs the other ~50% of the time it seems ... when I Ctrl+C out of what seems to be a hang, I get this http://paste.ubuntu.com/23865667/
[21:07] which has an interesting error at the bottom "juju.errors.JujuAPIError: unknown version (1) of interface "Client""
[21:08] yeah
[21:16] bdx: my models are 2.0.1, yours are 2.0.2.1
[21:17] that shouldn't matter, but...
[21:18] nope, works for me on 2.0.2.1 too
=== menn0_ is now known as menn0
[21:31] anyone familiar with the openstack base bundle available to answer a few questions?
[22:00] looks like I am going to just spin up an Ubuntu VM since Bash on Windows has problems with the mutex when trying to do stuff with juju and the windows version of juju has problems with ssh to the controller
[22:14] mskalka: What's up? I've got some minutes before EOD.
[22:15] mskalka: If no one is around here, make sure to check out #openstack-charms
[22:19] zeestrat: thanks for the reply. I'm running into an odd issue deploying openstack on aws. It looks like the deployment is hung up on cinder and glance connecting to the mysql db
[22:20] I had it working last week just as a POC after a few tweaks to where it landed the charms (the ceph-mon charms did not like being in lxd for example) but never ran into this issue
[22:25] mskalka: Hmmm. Unfortunately I only have experience with deploying it to MAAS on bare metal so I probably won't be too helpful. Ping the guys in #openstack-charms with the output of "juju status --format yaml" and perhaps some logs.
[22:25] zeestrat: No worries! Thanks for the help, I'll ping #openstack-charms tomorrow morning
[22:26] Cool. I think most of them are in an EMEA timezone.
[22:27] copy, thanks
=== mskalka is now known as mskalka|afk
[23:07] do we have any Juju OpenStack guys here? I am looking for some logging from OpenStack, but now that it's all on Juju / MAAS I'm not sure where it would be logging to for neutron
=== frankban is now known as frankban|afk
[23:43] Teranet: the #openstack-charms channel might be best
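A rough way to gather the kind of information asked for above before heading to #openstack-charms; the application names (cinder, glance, mysql, neutron-gateway) are the usual openstack-base ones and the log path is the stock Ubuntu packaging location, so adjust to your own bundle:

    juju status --format yaml > status.yaml                              # full model state to attach
    juju debug-log --replay --no-tail | grep -iE 'cinder|glance|mysql'   # hook/relation errors for the stuck services
    juju ssh neutron-gateway/0 'sudo ls -l /var/log/neutron/'            # neutron writes its own logs on the unit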