[01:27] <sfeole> hello juju world, quick question.  When writing layered charms can I write multiple functions for 1 decorator?
[01:27] <sfeole> @when(foo)
[01:27] <sfeole>     def runme():
[01:27] <sfeole>   def runmetoo():
[01:28] <sfeole> if anyone knows ?
[01:45] <stokachu> sfeole, im guessing you'd put @when(foo) before both of those functions
[01:45] <sfeole> stokachu, yea, but the 2nd function did not appear to run
[01:46] <stokachu> interesting maybe reactive is just one state per function
[01:47] <stokachu> why not just write several functions and include them under a parent function that runs when that state is emitted?
[01:47] <sfeole> stokachu, yea, i can do that
[01:47] <sfeole> stokachu, thanks for that advice
[01:47] <stokachu> sfeole, sorry i know it's not what you were asking
[01:47] <sfeole> i'll try it
[01:47] <stokachu> but i can't think of another way
[01:47] <sfeole> stokachu, no, but it should work now that I think about it
[01:48] <stokachu> ok cool
[01:48] <sfeole> stokachu, :)
[01:48] <sfeole> stokachu, all the examples online always show 1 Decorator for 1 Function
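For reference, both working patterns can be sketched in plain Python. This toy `when` registry is illustrative only (not the real charms.reactive API): each decorated function registers as its own handler, so repeating the decorator on each function works, and so does stokachu's fan-out suggestion. A `def` nested inside another function is never registered, which is why the second function in the paste above never ran.

```python
# Toy stand-in for a reactive-style @when decorator (illustrative only):
# each decorated function is registered as a separate handler.
HANDLERS = {}

def when(state):
    def register(fn):
        HANDLERS.setdefault(state, []).append(fn)
        return fn
    return register

# Pattern 1: repeat the decorator on each function -- both are registered.
@when('foo')
def runme():
    return 'runme'

@when('foo')
def runmetoo():
    return 'runmetoo'

# Pattern 2 (stokachu's suggestion): one decorated parent that fans out.
@when('bar')
def run_all():
    return [runme(), runmetoo()]

def dispatch(state):
    # Run every handler registered for a state.
    return [fn() for fn in HANDLERS.get(state, [])]

print(dispatch('foo'))  # both 'foo' handlers fire
```

In the real library the same rule holds: a decorator applies only to the single function immediately below it, so each handler needs its own `@when(...)` line.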
[04:31] <surf> Hi all, I want to deploy services using juju on the cloud (openstack). Does juju deploy services in linux containers on top of an openstack instance, or does it deploy them directly on the openstack instance?
[05:29] <Ankammarao> Hi juju world!!!
[05:30] <Ankammarao> when i try to push the charm to the charm store i am getting the error "ERROR cannot post archive: unauthorized: access denied for user"
[05:30] <Ankammarao> and also charm whoami only showing the user name
[07:48] <stub> skayskay: oh, so what is the environment? Its coming from an update to the snap-layer, which ensures squashfuse is installed so snaps work under lxd (it is in process of becoming a required dependency of snapd, but I needed the work around yesterday)
[07:49] <stub> skayskay: hmm... so trusty. I guess I'll need to detect the release and only install under xenial
[07:49] <stub> at least until the snapd backports are all sorted
[07:54] <kjackal> Good morning Juju world!
[08:01] <stub> skayskay: Not that that should discourage you from updating to Xenial :-P
[09:24] <Ankammarao> #ubuntu
[09:28] <deanman> kwmonroe, hey, are you around ?
[10:10] <Ankammarao> Hi , is there any command to revoke or hide the old versions of the charm in the charm store
[12:57] <icey> apparently is-leader cannot be called from within the collect-metrics hook? https://bugs.launchpad.net/charms/+source/ceph-mon/+bug/1663584
[12:57] <mup> Bug #1663584: metrics collection hook fails <ceph-mon (Juju Charms Collection):New> <https://launchpad.net/bugs/1663584>
[13:22] <anita_> Hi, is there any method to delete any particular version of a charm from charm store? or the full charm itself?
[13:26] <magicaltrout> i don't believe you can remove them, but you can make them unavailable to users
[13:26] <magicaltrout> you couldn't delete them a while ago, but i've not looked recently
[13:28] <anita_> magicaltrout_: I tried to revoke a particular version
[13:28] <anita_> but i can still read that charm version
[13:29] <anita_> magicaltrout_: could you please let me know the command?
[13:29] <magicaltrout> thats it
[13:29] <magicaltrout> but you will be able to see it, did you check by logging out of the charmstore?
[13:30] <anita_> let me try, i remember I tried that way, but still able to read
[13:30] <anita_> let me recheck
[13:32] <anita_> Yeah, with sign out not able to see the charm
[13:32] <anita_> but with sign-in able to read that charm :(
[13:32] <magicaltrout> yeah you can
[13:32] <magicaltrout> but no one else can
[13:32] <anita_> oh is it?
[13:33] <magicaltrout> so thats as close to it being deleted as you can get
[13:33] <anita_> ok
[13:33] <anita_> Thanks a lot
[13:33] <magicaltrout> no probs
[13:38] <gaurangt> hi, how do we specify resources in the bundle file?
[13:45] <anrah> gaurangt: From my knowledge that is not possible
[13:45] <anrah> https://bugs.launchpad.net/juju/+bug/1623217 is filed for that issue
[13:45] <mup> Bug #1623217: juju bundles should be able to reference local resources <juju:Triaged> <https://launchpad.net/bugs/1623217>
[13:51] <gaurangt> anrah, yeah, this looks to be the same requirement.
[13:51] <gaurangt> thanks for pointing out
[13:52] <lazyPower> Zic o/
[13:53] <gaurangt> anrah, one more thing , if I have already deployed the charm and I need to add a relation to that charm in my bundle (which will be subsequently deployed), is that possible?
[13:55] <magicaltrout> no gaurangt
[13:55] <magicaltrout> but you could just take that bundle and add your charm to it and post a new bundle to the charm store
[13:56] <gaurangt> magicaltrout, oh ok.
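magicaltrout's workaround above (fold the already-deployed charm into the bundle so the relation can be declared there) might look like this minimal, hypothetical bundle.yaml sketch; the charm names and the `db` endpoints are placeholders, and bundles of this era used the `services:` key (later renamed `applications:`):

```yaml
# Hypothetical bundle.yaml: include the already-deployed application
# in the bundle so the relation can be declared alongside new services.
services:
  existing-app:                   # the charm you deployed manually
    charm: cs:xenial/existing-app
    num_units: 1
  new-app:
    charm: cs:xenial/new-app
    num_units: 1
relations:
  - ["existing-app:db", "new-app:db"]
```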
[13:56] <Zic> lazyPower: \o
[13:57] <lazyPower> Zic - awesome. glad you're about. I'm about to push the proposed changes for your multi-master scenario
[13:57] <gaurangt> I've a specific requirement where I need to deploy one charm manually first and then do some manual stuff on the machines and then deploy the next bundle.
[13:57] <lazyPower> Zic - however, there's a bit of complexity for me to get it from my hands to yours. a) the kubedns addon template was updated, and its breaking deployments if i rebuild the charms from the current source tree.
[13:58] <Zic> lazyPower: cool! I have some hours in front of me to test
[13:58] <lazyPower> b) its non-retroactive, so it would require a fresh deploy... or we would need to riff out how to approach this in a sensible manner for your existing deployment.
[13:58] <Zic> I can reinstall it, sure
[13:59] <lazyPower> Zic ok, sorry about the inconvenience there. I think before this goes GA, we might want to talk about if this should have some auto-magic behind it.
[14:00] <lazyPower> i still have to sort A before i can get it in your hands though. CDK will fail to complete the setup while that KubeDNS addon template has the optional config map bits added, i might just try to fetch the 1.5.2 release addon template and munge it in there after a manual build
[14:00] <lazyPower> Zic - however, this tested well in my initial tests. So i'm kind of excited to see if this resolves the multi-master crypto issues for you
[14:01] <lazyPower> magicaltrout o/
[14:02] <magicaltrout> now then squire
[14:02] <lazyPower> magicaltrout  glad to see you made it somewhere safely :)  I'm looking forward to the next summit/conference. Those gents @ the pub were ww1 actors, and were doing re-enactments. I got a nickle tour of Gent + free phillofal for the trouble.
[14:02] <lazyPower> \o/
[14:02] <magicaltrout> lol
[14:03] <magicaltrout> that was very weird
[14:03] <lazyPower> i also suddenly realize i have no idea how to spell fallafel
[14:03] <Zic> falafel :p
[14:03] <lazyPower> ^
[14:03] <lazyPower> that
[14:03] <jrwren> well, now I know what I want for lunch.
[14:04] <lazyPower> Yeah, they were some cool dudes. I thought for sure they were luring me away from the hotel to do nasty things... but nope. they fed me and gave me a history lesson and sent me back on my way
[14:06] <admcleod> phillofaldelphia
[14:17] <Mmike> Hi, lads. I'm trying to run some amulet tests on trusty, but I can't install amulet because it depends on python3-amulet which depends on python3-libcharmstore, which is no more in trusty
[14:21] <Zic> lazyPower: hmm, some bad news sorry: my customer is actively working right now on the cluster to prepare their apps to be "k8s-ified", so I can't reinstall it today as they will work to the end of the day and into the night (deadlines are approaching...) monday I'm out of the office and I will be back only on tuesday :/
[14:21] <lazyPower> Mmike - Sorry you've hit that. Have you filed a bug? It would be good to capture that feedback so I can shop it with the maintainers
[14:21] <lazyPower> Zic - ah, that is a bit troubling but not an ultimate blocker. this just relieves some of the pressure I put on myself to get this in your hands -today-
[14:22] <Mmike> lazyPower: nope, was hoping I did something wrong - will file one shortly
[14:22] <lazyPower> Zic - by then we might even have a sensible approach to updating the existing deploy so its just juju upgrade-charm
[14:22] <Mmike> lazyPower: I'm filing that against amulet, right?
[14:23] <lazyPower> Mmike correct - https://github.com/juju/amulet
[14:24] <Mmike> ha!
[14:24] <Zic> lazyPower: sorry :( I expected I could reinstall the cluster this afternoon (it's 15:23 here o/) and leave the cluster operational for tonight and the week-end, but the customer is actively working this afternoon :(
[14:24] <lazyPower> Zic  no problem, no problem at all
[14:24] <Mmike> lazyPower: so, just to make clear - no bug in launchpad, but create issue in github?
[14:24] <lazyPower> i was just trying to haul tail to get you un-blocked on this
[14:25] <lazyPower> Mmike - yeah, i'm fairly certain they dont look at launchpad for amulet bugs (they might, but i'm erring on what i know)
[14:25] <Zic> lazyPower: I poweroff-ed my two extra masters for now, as it's not in prod and only one master is enough to let them k8s-ify their apps
[14:25] <lazyPower> Zic - good plan
[14:25] <Mmike> lazyPower: ack, thnx
[14:25] <Zic> so they are not hurt by the bug for now
[14:26] <lazyPower> yeah that crypto bug was fairly simple to squash i think in how i approached it
[14:26] <Zic> I can power-on them if you want to test your patch on an already deployed cluster
[14:26] <lazyPower> Zic - just FYI, there's a notion of leadership in charms. I used the leader to generate those files and push them to the followers.  So the leader is the only one that generates, and the followers just vacuum that in from the leader-data, and blindly write the contents to the correct filepaths.
[14:27] <lazyPower> its a sort of naive approach, but i think it's elegant enough that we can fix your deployed units with a simple state toggle
[14:27] <lazyPower> however, if you have a beefy machine, it might be good to test this in a LXD environment before we go mucking with your deployed units
[14:27] <lazyPower> s/environment/model/
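The leader/follower pattern described above can be simulated in a few lines of plain Python. This is a toy sketch, not the real charm code: `leader_set`/`leader_get` here are local stand-ins for Juju's leader-data helpers, and the cert contents are placeholders.

```python
import os
import tempfile

LEADER_DATA = {}  # stands in for Juju's leader-data bucket

def leader_set(**kwargs):
    # Only the leader may write to leader-data.
    LEADER_DATA.update(kwargs)

def leader_get(key):
    # Any unit (leader or follower) may read leader-data.
    return LEADER_DATA.get(key)

def leader_generate_certs():
    # The leader alone generates the shared material (placeholder contents).
    leader_set(server_cert='CERT-PEM', server_key='KEY-PEM')

def follower_write_certs(destdir):
    # Followers vacuum the data in and blindly write it to the
    # expected filepaths, so every unit ends up with identical files.
    for name in ('server_cert', 'server_key'):
        with open(os.path.join(destdir, name), 'w') as f:
            f.write(leader_get(name))

leader_generate_certs()
with tempfile.TemporaryDirectory() as d:
    follower_write_certs(d)
    print(open(os.path.join(d, 'server_cert')).read())  # prints CERT-PEM
```

The design point is the one lazyPower makes: exactly one unit generates, so followers can never end up with mismatched crypto material.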
[14:29] <Zic> I can pop some VMs on the same ESX which host the current control plane of the k8s cluster, yes
[14:29] <lazyPower> fantastic
[14:30] <lazyPower> Zic - ok, lets plan on doing that, and we'll do some testing there before we touch your deployed cluster. I'd like to white glove that deployment as much as possible as it would be your n'th re-deploy at this time.
[14:30] <lazyPower> i dont enjoy making extra work for my users
[14:31] <lazyPower> ^ you heard it here first folks
[14:31] <Zic> :D
[14:32] <Zic> I'm preparing the new VM, just one is needed?
[14:33] <lazyPower> Zic - yeah, make it beefy. You'll want bare minimum 4 cores 8gb of ram and ~ 50gb of disk space if you're going to deploy CDK in LXD
[14:33] <magicaltrout> don't believe it for a second
[14:35] <Zic> oki
[14:47] <Zic> lazyPower: it's fun to do things in reverse -> I began by testing CDK on VMs and baremetal, and this will be the first time I test it in LXD :)
[14:47] <Zic> (will be the first time I use LXD anyway)
[14:47] <lazyPower> Zic - its great :) you'll have some shuffling to do
[14:47] <Zic> I read about LXD and what differences it exposes to docker or rkt
[14:47] <lazyPower> i'll give you instructions when its ready, you'll have to use conjure-up to deploy initially then intercept and update the charms
[14:48] <lazyPower> Zic - apples and pineapples my friend. machine containers vs app containers
[14:48] <Zic> it seems to be the common choice for those who have only worked with VMs in recent years
[14:48] <lazyPower> you get a full init system and the containers look/act just like a "real" linux
[14:48] <Zic> the migration seems to be easier for VM -> microservice with LXD
[14:48] <lazyPower> so its more than just a process with an ip address
[14:48] <lazyPower> YES
[14:48] <lazyPower> exactly
[14:48] <lazyPower> lift and shift is the primary value proposition
[14:48] <Zic> I read right \o/
[14:48] <lazyPower> there's more but we'll leave it at that :)
[14:50] <magicaltrout> why can't C++ code just resolve its dependencies properly in any IDE :sob:
[14:55] <Zic> magicaltrout: to make you Go? (I'm a C/C++ guy, but that would be the answer from my pro-Go teammate)
[14:55] <magicaltrout> hehe
[14:55] <magicaltrout> i've never really done anything in either, they all make me sad
[14:55] <Zic> maybe I will be part of the Go-sect (oh, I mean Gopher!) this year if I can free some time after the K8S project :p
[14:56] <magicaltrout> i blame kjackal_
[14:56] <kjackal_> good call!
[14:56] <magicaltrout> thanks
[15:07] <lazyPower> Zic - here's the magic if you're interested https://github.com/chuckbutler/kubernetes/commit/3320fc04015411cdc9ad44d98210ada5137537e3
[15:08] <Zic> oh this part is in Python ?
[15:12] <lazyPower> Zic - yep, the entirety of the kubernetes charms are python
[15:15] <Zic> interesting, so I can actually read it and understand the entire charm :p
[15:15] <Zic> thought it was Go too, or a specific YAML descriptor
[15:16] <magicaltrout> na most charms are python
[15:16] <magicaltrout> a few in bash
[15:16] <magicaltrout> juju core is Go
[15:16] <Zic> yeah, the first time I discovered Juju, it was also in Python IIRC
[15:17] <Zic> but the first time I used it, it was recoded to Go :)
[15:17] <Zic> and as all new tools from Canonical seem to be in Go these days...
[15:18] <Zic> (LXD, snappy, juju, ...)
[15:25] <magicaltrout> yeah but they aren't crazy enough to get the public to develop in Go ;)
[15:25] <Zic> magicaltrout: hehe :p
[15:26] <SimonKLB> if ~/charms/deps/layer/X is already populated and the repository is updated the new commits don't seem to be pulled when running charm build, am i doing something wrong?
[15:26] <lazyPower> SimonKLB - allow me to introduce you to the best flag ever when having these issues
[15:26] <lazyPower> SimonKLB - when building, pass --no-local-layers
[15:26] <SimonKLB> hehe one step ahead of you :)
[15:26] <lazyPower> `charm build --no-local-layers`
[15:26] <SimonKLB> same thing then
[15:27] <lazyPower> argh
[15:27] <lazyPower> ok i'm no help then
[15:27]  * lazyPower dies a little inside
[15:27] <SimonKLB> haha, ive never had the issue before, so i wonder if its something introduced recently
[15:27] <SimonKLB> either that or im doing something odd
[15:28] <Zic> magicaltrout: few years back, I was not curious about Go at all for two reasons: 1) I have skills in C and Python, and that seemed enough to me for a "system programming language" 2) the only well-known project in Go was Docker and, as a sysadmin, I'm not a huge fan of it (even though I didn't try it, I tend to prefer the LXD approach from my PoV)
[15:29] <Zic> magicaltrout: but as more and more tools in Ubuntu seems to go to Go (...), I'm planning to really take a look at Go this year :)
[15:29] <SimonKLB> lazyPower: this isnt something youve stumbled upon before btw?
[15:29] <SimonKLB> if youve had a long-running charmbox
[15:29] <magicaltrout> i'll learn it, as soon as I've got LXD into Mesos, completed my machine learning course, onboarded my new employee, got some more work in the pipeline and taken a holiday
[15:30] <SimonKLB> and build a charm with a layer that has been updated
[15:30] <lazyPower> SimonKLB - nah, i ran into stale stuff because i had local paths when i was building
[15:30] <lazyPower> but that flag fixed me up, and also, i always map in my charm/layer repo
[15:30] <SimonKLB> lazyPower: yea ive had that problem as well, thats why i knew about the --no-local-layers flag
[15:30] <lazyPower> so its whatever i have on the host. so the length of the session of charmbox isn't so much a factor
[15:30] <Zic> magicaltrout: my last concern is that, it seems Go is much loved when C/C++ hurts you
[15:31] <Zic> for personal development, I really love C, as I'm not a dev, just a sysadmin so when I'm developing, it's mainly for myself
[15:31] <SimonKLB> lazyPower: yea right, i think i actually have it mounted as a volume as well, still though, its never been any problem getting it up to date when building
[15:31] <Zic> no deadline, no teammate's unreadable code :)
[15:31] <magicaltrout> "i really love C as I'm not a dev...." said 1 person ever
[15:31] <SimonKLB> lazyPower: i just tried deleting the docker folder from deps/layers and then it was cloned fresh
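The workaround SimonKLB landed on can be written down as a short recipe. This is a sketch under assumptions from the conversation (that `charm build` caches fetched layers under `deps/layer/<name>` next to the charm source); removing the cached copy forces a fresh clone on the next build:

```shell
# Assumed layout from the discussion above: stale layer cached at
# deps/layer/docker. Delete it so charm build re-clones it fresh,
# and skip any local copies of layers while building.
rm -rf deps/layer/docker
charm build --no-local-layers
```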
[15:33] <Zic> magicaltrout: :D
[15:33] <lazyPower> SimonKLB - thats weird
[15:34] <Zic> magicaltrout: I'm saying that because I know that, all I'm developing can easily be in Go (or even in Python) without any downside
[15:34] <magicaltrout> aye Zic I know what you mean
[15:34] <magicaltrout> I do java cause its what they taught us at uni
[15:34] <magicaltrout> thats pretty much the only reason
[15:34] <lazyPower> all that java
[15:34] <Zic> magicaltrout: I just did it in C because I like it, and I know it's not a good reason :)
[15:34] <Zic> (from a professional PoV)
[15:35] <magicaltrout> my boss chewed me out a few weeks ago for mocking PHP developers
[15:35] <magicaltrout> so I'm no longer allowed to mock languages
[15:35] <Zic> :p
[15:35] <magicaltrout> I still think PHP developers need to get out more
[15:36] <lazyPower> magicaltrout - you should replace your boss with a very intricate webservice (in your language of choice) (i'm only half kidding)
[15:36] <magicaltrout> he is a webservice
[15:36] <magicaltrout> i got chewed out over email
[15:36] <mbruzek> Nothing wrong with Java
[15:36] <Zic> for what I'm doing, Go (or Python) seems to be a better choice than C in fact, but as I'm developing for myself and not professionally, I have more fun developing in C, as it really feels like speaking low-level to the machine
[15:37]  * magicaltrout will happily go as abstracted as required for it to be easy :P
[15:37] <magicaltrout> but for machine learning stuff its all python these days
[15:37] <magicaltrout> and charms
[15:37] <lazyPower> ^
[15:37] <lazyPower> That
[15:37] <magicaltrout> so my python fu is slowly growing
[15:38] <lazyPower> "and charms"
[15:38] <lazyPower> thats what i like to hear
[15:38] <Zic> yeah, it's the TL;DR : I now know that I can code Juju charms in Python or Bash :)
[15:38] <Zic> so my Go's learning can wait a little more :p
[15:38] <magicaltrout> well when i get this mesos stuff building I'll have the best container stack on Jujucharms.com :
[15:38] <magicaltrout> :P
[15:39] <lazyPower> Zic - i'm happy to provide all the distraction of learning go that you require if its to charm stuff up
[15:39] <lazyPower> magicaltrout - thats a pipedream sir, CDK is clearly > mesos. We'll do the pepsi challenge if you require
[15:39] <magicaltrout> hehe
[15:39] <magicaltrout> we'll see
[15:39] <magicaltrout> we'll see
[15:39] <lazyPower> indeed
[15:40] <SimonKLB> lazyPower: another weird one, after juju upgrade-charm im seeing "could not download resource: HTTP request failed: resource "X" not found" in the logs
[15:40] <magicaltrout> those who like following the crowd use CDK... those who like doing science and getting stuff done, use Mesos! ;)
[15:40] <SimonKLB> however, i do have the resource locally
[15:41] <SimonKLB> i.e, /var/lib/juju/agents/unit-charmname-1/resources/X exist
[15:41] <Zic> lazyPower: I didn't learn Go before because it was too "have a foot in each camp" : I have C for my self-pleasure to code, and I have Python/Bash for my work (as a sysadmin)
[15:41] <Zic> lazyPower: but today, as Go is more mature, I see it everywhere
[15:41] <lazyPower> SimonKLB - when you upgrade a charm (if local or --switch) it will drop the resource from the controller
[15:41] <lazyPower> SimonKLB - which means you need to re-attach
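The re-attach step lazyPower mentions is a single CLI call. Hedged sketch with placeholder names (`charmname`, `X`, the file path): in Juju 2.x of this era the subcommand was `juju attach` (later renamed `juju attach-resource`), and `juju resources <app>` lists what the controller currently holds.

```shell
# Re-upload the resource that upgrade-charm dropped from the controller
# (application, resource name, and path are placeholders).
juju attach charmname X=./path/to/X

# Verify the new resource revision is attached.
juju resources charmname
```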
[15:41] <Zic> so... yeah, I plan to learn Go somewhere in 2017 :p
[15:42] <Zic> I heard that days will be switched to 27 hours instead of 24 this year
[15:42] <SimonKLB> lazyPower: ahaaa!
[15:42] <Zic> :>
[15:42] <lazyPower> Zic - dont torment me with a good time
[15:45] <bryan_att> hi all - looking for where I should go for conjure-up support. I've deployed it but cannot access horizon (it's unclear what the URL for horizon should be, and the default or "/dashboard" on the deployed host do not work)
[15:46] <lazyPower> bryan_att - you're in the correct place, if not here then #openstack-charms. but stokachu and mmcc are the primary authors of conjure-up
[15:46] <rick_h> bryan_att: what substrate did you deploy to? bryan_att is the horizon exposed and have something of an address you can reach from where you're at?
[15:46] <stokachu> o/
[15:46] <lazyPower> and with a response time like that ^ you're in good hands
[15:47] <bryan_att> rick_h: not sure what a substrate is, but it's Xenial minimal server with updates/upgrades only
[15:47] <Zic> if I was a developer, I would take time to learn as many languages as I found "cool"... as a sysadmin, I just stuck with "know one language for each task: C for system programming, Python for scripting", but I've revised my mind and will make an exception for Go, as I see more and more sysadmins with Go in their skill set
[15:47] <rick_h> bryan_att: did you go to lxd, maas, or something else?
[15:47] <stokachu> bryan_att, did you select localhost?
[15:47] <stokachu> bryan_att, openstack with novalxd?
[15:47] <bryan_att> rick_h: yes that was the only option
[15:48] <bryan_att> stokachu: yes, all the instructions as stated on the quickstart page
[15:48] <stokachu> bryan_att, can you paste.ubuntu.com your `juju status` output
[15:49] <bryan_att> stokachu: http://paste.ubuntu.com/23967373/
[15:50] <stokachu> bryan_att, looks like openstack-dashboard/0*    active    idle   14       10.0.8.149      80/tcp,443/tcp  Unit is ready
[15:50] <stokachu> so http://10.0.8.149/horizon
[15:50] <stokachu> but it also looks like some of the ceph stuff isn't up yet
[15:51] <Zic> lazyPower: my VM is ready by the way, ping me when I can begin the test :p
[15:51] <Zic> I'm here for the two next hours
[15:52] <lazyPower> Zic - i'm still waiting on a good build of bins from the master branch. we're blocked on https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/213 otherwise
[15:52] <lazyPower> the build routine, since we're building all archs and all components in a single job, takes upwards of an hour and a half
[15:52] <Zic> ok
[15:53] <lazyPower> i'm 35 minutes into a 1.5 hour build
[15:53] <lazyPower> so only 60 minutes to go \o/
[15:53] <lazyPower> then i'll be pending the upgrade test. this is likely to wait until Monday for you
[15:53] <lazyPower> oorrrr
[15:53] <lazyPower> yeah, probably monday now that i think about it more
[15:53] <lazyPower> and i should have the upgrade logic sorted by then as well
[15:54] <Zic> s/monday/tuesday/ actually :p but even if I'm on a weekend, I will take a look because I'm too curious even when I'm away from the PC :)
[15:56] <Zic> lazyPower: on a related subject, at first I thought my kube-dns CLBO-ing sometimes (~10 times, then returning to a normal state) was maybe not related to this issue, but since I poweroff-ed my two extra masters, I haven't had a single CLBO of kube-dns
[15:56] <Zic> maybe this fix... will fix this too
[15:57] <Zic> as the kube-dns CLBO event is tied to the readiness/liveness check, which queries the API (and maybe gets the certificate error? I haven't found a way to confirm this)
[15:57] <Zic> we'll see
[15:58] <bryan_att> stokachu: how do I access horizon at http://10.0.8.149/horizon if that is not a routable subnet on my local net? Also see the next paste which is the result on the host itself:
[15:59] <bryan_att> stokachu: attempt to access horizon https://www.irccloud.com/pastebin/EhMwXriO/
[15:59] <stokachu> bryan_att, you can use sshuttle to setup a connection to that 10.0.8.0 subnet
[16:00] <stokachu> bryan_att, so sshuttle -r user@hostmachine 10.0.8.0/24
[16:00] <bryan_att> stokachu: ok but to get this clearer, is it assumed that conjure-up is being deployed on a desktop host, or is it designed to be deployed on a server? If the latter and we need to use another tool to access it (e.g. sshuttle), that would be good to clarify in the docs.
[16:01] <stokachu> bryan_att, yea that depends, if you're installing openstack with novalxd on localhost it is assumed that it's running on your laptop
[16:01] <stokachu> bryan_att, but we should document the sshuttle way b/c some users do ssh into another machine and test it
[16:02] <bryan_att> stokachu: ok, so for my use case, I'm deploying this on an Intel NUC5i7 with 16GB and a xenial-server minimal install, then accessing it via other machines on my local net to deploy workloads / tests etc using the OSC.
[16:02] <stokachu> bryan_att, https://github.com/conjure-up/conjure-up/issues/672
[16:03] <stokachu> bryan_att, yea you'll want to use sshuttle to access that openstack
[16:03] <stokachu> you'll also need to setup a second sshuttle session to access the actual compute nodes deployed via openstack
[16:03] <stokachu> openstack on novalxd is meant mostly for development
[16:03] <stokachu> we have another one that runs on MAAS and is meant for production
[16:05] <bryan_att> stokachu: this is also for development, but I need an OpenStack deployment that is more "real" than devstack which cannot do what I need for tests, e.g. per the OPNFV Models project https://wiki.opnfv.org/display/models/Testing - e.g. I need to bring up 4 VMs, driven by Tacker as VNFM
[16:06] <bryan_att> stokachu: I am working with narindergupta to get the OPNFV JuJu installer (JOID) running on newton but also need something lighter, so I am trying conjure-up to see what it can and can't do.
[16:06] <stokachu> bryan_att, ah ok
[16:07] <stokachu> narindergupta, can you walk bryan_att through using sshuttle to access his openstack environment?
[16:09] <stokachu> bryan_att, im working on more openstack spells if there is something you need specifically let me know
[16:10] <lazyPower> Zic - makes sense to me. seems like whichever scheduler scheduled the kubedns pod doesn't always correlate with whichever api-server was handling the request. and when thats the case, if the crypto keys were mismatched it makes sense it failed because it uses the default service token to do that auth request
[16:10] <Zic> lazyPower: just saw mbruzek comment on your fix : https://github.com/chuckbutler/kubernetes/commit/3320fc04015411cdc9ad44d98210ada5137537e3#commitcomment-20834883
[16:10] <Zic> do you plan to migrate to snap instead of apt in future releases?
[16:11] <lazyPower> Zic - we're not using apt now ;) we're using tarball packages
[16:11] <lazyPower> Zic - but yeah, we're in the process of snapping up kubernetes
[16:11] <mbruzek> Zic we are looking into snaps as the future yes.
[16:11] <Zic> cool
[16:12] <Zic> lazyPower: oh yeah, for the k8s part, but some other parts like etcd fetch the package from the deb archive, and you're stuck with the frozen archive version
[16:12] <lazyPower> yeah, sabdfl himself is actually working on the etcd snap
[16:12] <Zic> (except security/fix upgrade)
[16:12] <lazyPower> i'm a bit concerned about the snap refresh happening under the charm and getting newer versions than the charm is ready for, but we'll jump off that bridge when we come to it i guess
[16:14] <magicaltrout> i like the fact your jump off bridges
[16:14] <magicaltrout> not cross them....
[16:14] <magicaltrout> your/you
[16:14] <lazyPower> intentional magicaltrout  ;)
[16:14] <lazyPower> because the day i hit that issue, i'm going to commit seppuku
[16:15] <Zic> lazyPower: it's a bit offtopic here, but I don't track the news about snappy in Ubuntu -> is Ubuntu Desktop fully built on snappy, with no .deb as part of the future?
[16:15] <Zic> or is snap just limited to "some packages which move a lot"?
[16:15] <lazyPower> Zic - we're not quite there yet, but yes, snaps are the future of ubuntu.
[16:15] <magicaltrout> automated under the hood rollout of all your container tech where you can't yet peg versions... what could possibly go wrtong \o/
[16:15] <lazyPower> magicaltrout precisely
[16:15] <Zic> lazyPower: with a total replace of deb or just as a "sidekick" for some packages?
[16:15] <Zic> (as it is actually used, in fact)
[16:15] <lazyPower> Zic - hard to say, but i think its where we'd like to go is fully snapped.
[16:16] <lazyPower> we've recently added classic-mode snaps to make "unconfined" snaps much easier to build and an acceptable delivery pattern, where the strict confinement is like final-boss mode of snaps.
[16:17] <lazyPower> there are some corner cases that dont lend themselves very nicely to strict confinement, like the CNI plugin structure of kubernetes
[16:17] <lazyPower> but we're working through that and talking with the snappy developers to find a good path
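The classic-vs-strict distinction described above maps to a single field in a snap's build metadata. A minimal, hypothetical snapcraft.yaml fragment (the name and summary are placeholders):

```yaml
# Hypothetical snapcraft.yaml fragment: 'classic' requests the
# unconfined mode lazyPower describes, 'strict' is full confinement.
name: my-snap          # placeholder
version: '1.0'
summary: example snap
confinement: classic   # or: strict, devmode
```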
[16:18] <Zic> I need to refresh myself on all the Ubuntu technologies :p it's been quite a while since I read about Mir, Snappy, ...
[16:19] <Zic> recently I updated my info about Unity 8 and... *suspense* Juju \o/
[16:20] <Zic> I was quite impressed/worried that Ubuntu is going to JavaScript applications (through Qt/QML) for desktop
[16:20] <lazyPower> Zic - the best place to get info about snappy is from the snappy mailing list. There's many threads daily from developers integrating and making snap packages.
[16:21] <lazyPower> you'll find some neat tricks, future-talk, and get the scoop on all things snappy
[16:21] <lazyPower> i really like the new format of ubuntu-core with the USSO centric bootstrap where on first boot it prompts for your user credentials and fetches ssh keys, so you dont have an insecure default username/password, which helps eliminate vectors that contribute to things like the mirai botnet
[16:22] <lazyPower> its a *lot* like the cloud-init story where it fetches ssh keys on bootstrap and doesn't ship with default credentials. but has a fancy TUI for all of it
[16:23] <Mmike> Hello, again! I've deployed my juju env on amazon AWS but the 'login with sso' button in juju-gui still yields with 'authentication failed: no credentials provided'. Do I need to configure something else for this to work?
[16:23] <Mmike> rick_h: ^^
[16:23] <rick_h> Mmike: is this something you bootstrapped?
[16:23] <Mmike> rick_h: it is
[16:24] <rick_h> Mmike: then the sso button doesn't do anything. I've filed a bug on the GUI on this and we're looking at updating that
[16:24] <Mmike> rick_h: I did 'add-credentials' for aws and ...
[16:24] <Mmike> oh
[16:24] <Mmike> rick_h: thank you
[16:24] <Mmike> rick_h: do you have the bug url handy, maybe
[16:24] <Mmike> ?
[16:25] <rick_h> Mmike: https://github.com/juju/juju-gui/issues/2360
[16:25] <Zic> lazyPower: I'm so attached to the APT/.deb architecture, but the ideas that snappy exposes seem so cool that yeah, I'm excited about how it will emerge in future Ubuntu releases
[16:25] <lazyPower> Zic its already available by default in xenial+
[16:25] <Mmike> rick_h: thnx, much appreciated
[16:25] <Zic> lazyPower: yeah, but not so much integrated for GUI apps (the only case I used it for now :/)
[16:26] <lazyPower> Zic - as a test, you can sudo snap install charm
[16:26] <lazyPower> that will get you the latest/greatest charm-tools in snap format
[16:26] <lazyPower> Zic - and i dont know about that :) there's quite a few GUI apps in the snap store as well
[16:27] <Zic> lazyPower: the confinement of these GUI apps in snap breaks some Desktop integration like the Unity HUD, the GTK decorator, the clipboard, etc.
[16:27] <lazyPower> ah yeah
[16:27] <Zic> but it's a work in progress I guess
[16:27] <lazyPower> that is a challenge
[16:27] <lazyPower> https://uappexplorer.com/apps?type=snappy_application
[16:27] <lazyPower> is what i was about to link as counter point
[16:27] <Zic> and the old XOrg is not helping on this part
[16:27] <Zic> I think it will be easier with Mir
[16:27] <lazyPower> but i cede to your findings
[16:28] <Zic> the only technology I'm feared that Canonical push is Mir actually (instead of Wayland)
[16:28] <Zic> I know that they choose Mir instead of Wayland because of Ubuntu Touch
[16:28] <Zic> but I feared that, in coming future, Ubuntu will be too different than other distribution who choose Wayland
[16:29] <Zic> for all the other tech (juju, snappy, LXD, Unity) I'm glad about the choices made
[16:30] <Zic> I'm just a bit curious/worried about Mir
[16:35] <magicaltrout> ooh nice
[16:35] <magicaltrout> kjackal_: got mesos master + slave running docker containers locally using the universal stuff
[16:35] <magicaltrout> don't need marathon after all
[16:36] <magicaltrout> i'll take a stab at finding out what goes on under the hood next week.
[16:37] <Zic> magicaltrout: (offtopic again, sorry) -> we never used Mesos directly at my company, only via DC/OS, and... I was not a fan at all
[16:38] <Zic> I mention that because the customer who had his mind set on DC/OS is now migrating to K8S through CDK :)
[16:39] <Zic> not the same tech at all, but he dropped DC/OS after seeing K8S
[16:39] <magicaltrout> well whatever works :)
[16:40] <magicaltrout> we use a lot of Mesos at NASA as it allows us to deploy non containerised workloads to it
[16:40] <Zic> magicaltrout: in fact, I think I had a bad experience with Mesos but it's DC/OS's fault, I never used "raw" Mesos
[16:40] <magicaltrout> but I'd also like to bring juju to Mesos as a "cloud" which is why I'm plugging LXD into it, but you know, Kubernetes has a lot of fans
[16:40] <magicaltrout> but I use DC/OS on my consultancy servers to deploy all my stuff
[16:41] <magicaltrout> i'm more than happy with it, but i think the earlier versions were a bit funky
[16:41] <magicaltrout> and TBH vanilla mesos is easy to stand up anyway
[16:41] <Zic> yeah, my main concern with DC/OS was that when it blocked or crashed, the first version only prompted you with a "Come visit us on Slack"
[16:41] <magicaltrout> lol
[16:41] <magicaltrout> okay its not that bad :P
[16:42] <lazyPower> who doesn't like joining private slack instances? *eyeballs the 10 orgs he's idling in and hasn't spoken to in 6+ days*
[16:42] <magicaltrout> indeed
[16:42] <Zic> also, we found that one of the GUIs that was normally protected by the DC/OS OAuth was actually accessible if you forged a special link that bypassed the reverse proxy...
[16:43] <Zic> we dropped it early in our PoC, don't know if it's better now :)
[16:43] <magicaltrout> Zic: if you deploy on-prem and don't lock down the ports there's a shitload open to the world :)
[16:43] <Zic> yep
[16:43] <magicaltrout> dunno why they don't resolve that, but I'm not a Mesosphere dev, so I don't care :)
[16:43] <Zic> ^^
[16:43] <magicaltrout> iptables-restore < lockdown-mesos.save :)
[16:44] <magicaltrout> anyway, I think the container orch landscape is big enough for a few platforms, clearly k8s is winning, but I don't think Mesos will go anywhere in the near future
[16:44] <magicaltrout> or maybe it will and something else will come along, but clearly there's a lot of scope for different platforms
[16:45] <magicaltrout> they'll all do similar stuff at the end of the day
[16:45] <Zic> it's not the same hardware abstraction between k8s and Mesos; I think there is a place for both of them
[16:45] <lazyPower> ^
[16:45] <lazyPower> that
[16:45] <Zic> Mesos presents your hardware as a "bunch of resources"
[16:45] <Zic> K8S orchestrates containers
[16:45] <Zic> it's not the same approach, to me
[16:45] <magicaltrout> yup
[16:45] <magicaltrout> thats true
[16:45] <lazyPower> Zic - however k8s has support for resources, *and* cri will change that story
[16:45] <magicaltrout> personally, I like containers, but i like raw resource as well ;)
[16:45] <lazyPower> assuming lxd makes its way into CRI
[16:46] <lazyPower> which i heard once, but haven't heard anything about since
[16:46] <lazyPower> so we can't really commit to hearsay
[16:46] <Zic> FYI, I was not a fan of containers at all before K8S, because I saw containers as "devs want to do silly things on my machines"
[16:46] <magicaltrout> well everyone seems to think it will happen, without actually committing to it
[16:46] <magicaltrout> and your commander in chief was like "we'll build the stuff, and someone else will plug it in"
[16:46] <magicaltrout> who knows
[16:46] <Zic> with K8S, containers in prod are a viable thing
[16:47] <Zic> but since K8S, I've just embraced the philosophy of microservices in prod
[16:48] <magicaltrout> depends what you deploy though doesn't it
[16:48] <Zic> before, to me it was just a "lab for devs", and if a Docker image went to prod I was like "meh, it's running in the container, I don't know, poke the dev"
[16:48] <magicaltrout> you can't classify a 100 node hadoop cluster as a microservice :)
[16:48] <lazyPower> ^
[16:48] <lazyPower> that
[16:48] <Zic> since K8S, we're more in the devops approach here (full collaboration between dev & ops)
[16:50] <Zic> (we didn't wait for K8S for that in fact :p but we expect K8S to make running containers in prod better)
[16:50] <lazyPower> magicaltrout - and i get the point. Everyone wants it, and I'd love to have the time to work on that and get my hands dirty with the lxd team. but a) i'm not a go developer (yet) and b) without an official roadmap item, i cannot commit to anything happening anywhere. but i would suspect it's brewing somewhere in the corners of canonical. If not it'll be one of those "why haven't we done this yet? who's responsible?" and then i'll get to stand on the carpet somewhere and answer to the bosses.
[16:51] <magicaltrout> indeed
[16:51] <magicaltrout> i don't doubt it
[16:52] <Zic> lazyPower: slap me if I'm being indiscreet, but do you work directly at Canonical?
[16:52] <Zic> I thought so, but I saw in your slides that you put your @ubuntu.com mail, not the @canonical.com one :)
[16:52] <lazyPower> Zic - yep. I'm co-architect of CDK with mbruzek
[16:52] <Zic> so I'm asking :x
[16:52] <lazyPower> Zic - i value the community contribution more than the company contribution. I will be an ubuntu community member longer than I will be a canonical employee (is my justification for that)
[16:53] <Zic> lazyPower: like this kind of spirit :)
[16:53] <Zic> (as an Ubuntu Member too :p)
[16:53] <lazyPower> i have no plans on going anywhere, but we are who we are because we have awesome community members
[16:53] <lazyPower> like yourself, who aren't afraid to break things and let us know how we can do better and even sometimes contribute those better ideas
[16:54] <lazyPower> so, rather than identify via my job, i'll identify via our contributions
[16:54] <lazyPower> i also chose the ubuntu membership cloak instead of the canonical cloak (if you /whois me)
[16:54] <lazyPower> little things like that :)
[16:55] <lazyPower> i mean magicaltrout is about as much of an ubuntu member as any of us employed here :)  speaking of magicaltrout  - have you applied for membership?
[16:55] <magicaltrout> dunno what you're talking about
[16:55] <magicaltrout> i did get a canonical rucksack yesterday though
[16:55] <magicaltrout> which was nice
[16:55] <lazyPower> https://wiki.ubuntu.com/Membership
[16:55] <Zic> lazyPower: my company wants to buy official support from Canonical for CDK, which is cool (as it's how Canonical can make money and continue to support Ubuntu), but I prefer chatting directly with the Juju team rather than going through commercial support :)
[16:55] <magicaltrout> you can do both Zic
[16:55] <Zic> I will :p
[16:56] <magicaltrout> that way lazyPower gets paid :P
[16:56] <lazyPower> \o/
[16:56] <lazyPower> and i like getting paid
[16:56] <lazyPower> not gonna lie
[16:56] <magicaltrout> pays for beer
[16:56] <Zic> but in fact, I will reserve lazyPower for myself, and leave the Canonical support to my teammates *evil laugh*
[16:57] <bryan_att> stokachu: if I get it up and running, let me see what services are included and I'll get back to you. Apart from the basics, I do need Heat at least.
[16:59] <Zic> (you can keep lazyPower's body, I'm just reserving his mind)
[16:59] <lazyPower> this got awkward fast
[16:59] <Zic> \o/
[17:00] <Zic> as it sounds awkward in French, it's even worse in English
[17:01] <lazyPower> https://imgflip.com/i/1jdjzm
[17:01] <magicaltrout> don't worry lazyPower Zic told us he uses C for self pleasure earlier....
[17:01] <Zic> lazyPower: even though I prefer IRC over Slack, I miss /giphy in IRC :)
[17:05] <Zic> lazyPower: anyway, do you think we can obtain commercial support for a multi-master environment? as it's not officially marked as production-ready
[17:05] <Zic> I don't even know if I will put multimaster in prod or just in preprod
[17:05] <lazyPower> Zic - that's one of our line items for GA: to have HA masters sorted
[17:05] <Zic> (as we're planning to have a separate cluster for preprod)
[17:06] <lazyPower> the only thing we don't have that i'm aware will be requested is federated clusters
[17:06] <lazyPower> and i think once we finish our plumbing, get the upgrade story bulletproof, and have HA masters, we're basically at GA.
[17:06] <lazyPower> and our upgrades are looking pretty good so far, there's more work to be done
[17:07] <lazyPower> but the 1.5.x to 1.6.x upgrade will be the final boss test of that, and then it's time to rubber-stamp
[17:07] <Zic> will gladly report any troubles I run into, of course :D
[17:07] <lazyPower> :) we appreciate it
[17:12] <Zic> I'll let you know who the customer is when it goes to prod :)
[17:13] <Zic> (I think some of you may already know it)
[19:04] <stormmore> howdy juju world!
[19:07] <lazyPower> hey stormmore
[19:07] <lazyPower> o/
[19:08] <lazyPower> stormmore - also i know you were tracking this. HA master fixes incoming https://github.com/chuckbutler/kubernetes/commit/3320fc04015411cdc9ad44d98210ada5137537e3
[19:10] <stormmore> lazyPower o/ good to have you back :)
[19:10] <lazyPower> :D glad to be back. even if only for half a day. i'm about to bounce to get my new glasses (no more orange tape!)
[19:11] <stormmore> lol I have http://www.clicmagneticglasses.com/ which freak some people out :)
[19:12] <lazyPower> oh man i want some of these
[19:12] <lazyPower> next year when the benefits have re-upped i might do this
[19:12] <rick_h> yea, at first those were crazy but then I thought...that's a damn good idea
[19:12] <stormmore> lazyPower, you will crack up at this. I am trying to architect a standalone master node!
[19:13] <lazyPower> stormmore - why would i crack up at this? you can even do it as a phaux HA with lxd and a reverse proxy
[19:13] <stormmore> rick_h and lazyPower I love mine :) have to order direct if you want anything more than a pair of readers
[19:14] <stormmore> oh really! I was just thinking of creating a base Ubuntu install with MAAS and KVM, then having VMs for Juju and the k8s master, then adding other hardware nodes for the workers
[19:16] <stormmore> lazyPower I am thinking of trying to make it useable for 1) air-gapped rooms and 2) to bootstrap additional data centers easily
[19:17] <stormmore> lazyPower do you have a link for phaux HA?
[19:18] <lazyPower> stormmore - nah i just cooked it up in my head. the premise is deploying to LXD on the unit, and setting up a reverse proxy for the apiserver endpoint
[19:18] <lazyPower> so you could in theory, poke individual containers with upgrades and what not, and lose a container and still remain online.
[19:18] <lazyPower> simulated HA via a single point of failure
[19:18] <stormmore> a true DC in a box idea then :)
[19:19] <lazyPower> stormmore - thats exactly what we did in Gent, ran a bunch of deployments in lxd
[19:19] <lazyPower> people were kind of blown away that you can simulate network partitions and what not on a single box
[19:19] <stormmore> I thought about that but I wasn't sure how to get the juju controller to handle the LXD and the ability to add hardware nodes through MaaS
[19:20] <lazyPower> juju deploy --to lxd:# kubernetes-master
[19:20] <lazyPower> the networking there should work pretty well as spaces are fully supported on MAAS
[19:21] <lazyPower> however we need to investigate extra bindings in the kube charms for that to be truly useful
[19:21] <magicaltrout> if i get lxd in mesos i'm going to name it.... Fauxpenstack
[19:21] <stormmore> oh I get that part but how do you configure the local juju controller ... yes I could use manual for it but then how do I get juju to add-node using maas
[19:21] <lazyPower> next cycle maybe, we're pretty up to the gills in terms of features for this cycle.
[19:21] <stormmore> it is basically the 1 controller - 2 cloud problem
[19:21] <lazyPower> you could just juju deploy ubuntu to get a clean server install
[19:21] <lazyPower> then use that machine # to start colocating the lxd services
[19:22] <lazyPower> and scale out using juju add-unit ubuntu
[19:22] <lazyPower> it's not directly straightforward, but it would work
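As a hedged sketch of the workflow described above (claim clean metal via the ubuntu charm, colocate services in LXD on that machine, then scale out) - the machine number and application names are illustrative, and the commands are guarded so they only actually run against a live controller:

```shell
# Guarded so this is safe to execute anywhere; the juju branch assumes a
# bootstrapped controller (e.g. on a MAAS cloud) and machine 0 (illustrative).
if command -v juju >/dev/null 2>&1 && juju status >/dev/null 2>&1; then
  juju deploy ubuntu                        # claim a clean metal machine from the provider
  juju deploy --to lxd:0 kubernetes-master  # colocate the control plane in a container on machine 0
  juju add-unit ubuntu                      # scale out with another metal machine
else
  echo "no live juju controller; commands shown for illustration only"
fi
DEMO_DONE=yes
```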
[19:22] <stormmore> at least from my understanding you would have to manually add each node to juju
[19:22] <lazyPower> stormmore - lets follow up on this next week for a "for fun" session
[19:23] <stormmore> lazyPower sounds "fun" ;-)
[19:23] <lazyPower> i bet we can get you moving with minimal fuss modeling that as a distributed lxd service
[19:23] <lazyPower> across many physical units
[19:23] <stormmore> for the time being I am going to have VM and Physical nodes handled by MaaS
[19:23] <bdx> lazyPower: "modeling that as a distributed lxd service" - I would love to know what you are talking about here
[19:23] <bdx> :-)
[19:24] <lazyPower> bdx nobody poked you ;)
[19:24] <lazyPower> <3
[19:24] <bdx> I heard distributed lxd service and I came running
[19:24] <lazyPower> bdx yeah man, there's a lot we can do here, i'm sure we'll find an end of the sidewalk at some point but if we're as far along with networking in the MAAS substrate as I think, this should be completely doable with minimal mods to the charms.
[19:24] <lazyPower> and iirc, thats our furthest running story with regards to juju spaces networking
[19:25] <stormmore> I am not sure if it would need to be a fully distributed lxd service (not that that wouldn't be cool too) but the ability to put all the management services onto the master node in lxd containers and add other k8s worker nodes that are hw/MAAS driven is basically what I am wondering about
[19:25] <lazyPower> stormmore - completely doable. my manual environment is like that
[19:26] <lazyPower> i have 3 workers that are just spare hardware, all the management/control-plane is either smashed on the metal or in lxd
[19:26] <lazyPower> i don't recommend smashing on metal unless you like pain in the future
[19:26] <stormmore> lazyPower yeah but it is manual, would be nice to use MaaS for the hw nodes ;-)
[19:26] <lazyPower> same principle
[19:26] <lazyPower> should be a similar path to success
[19:27] <lazyPower> if you *need* to have the metal unit represented first, you can juju deploy ubuntu, that will give you a clean metal ubuntu image and you can start modeling there
[19:27]  * lazyPower checks the juju help-commands to see if there's one to just request a machine via the provider
[19:27] <lazyPower> add-machine                Start a new, empty machine and optionally a container, or add a container to a machine.
[19:28] <lazyPower> stormmore - juju add-machine --help
[19:28] <lazyPower> absolutely no shenanigans needed. add-machine can request clean metal from the provider.
[19:28] <lazyPower> well "clean metal" being vm, container, metal, et al.
[19:29] <stormmore> lazyPower I got juju running on localhost using juju deploy manual/localhost my-cluster but I couldn't figure out how to then point that controller to a maas cloud
[19:29] <lazyPower> look at juju add-user/ juju grant
[19:29] <lazyPower> you can take the output there and add it to your other juju workstation to control the controller.
[19:31] <stormmore> ah you are still thinking that I would be managing this master from another system :)
[19:32] <lazyPower> are you meaning a self hosted full stack juju bit?
[19:32] <stormmore> the workflow I am looking at accomplishing is to bootstrap juju on localhost and use that controller to hook into MAAS to add additional hosts. I believe there is a 1-controller-per-cloud rule
[19:32] <lazyPower> oooo
[19:32] <lazyPower> yeah your adds would be manual then
[19:32] <lazyPower> and that's less than optimal
[19:33] <lazyPower> at least if this is how i think it is
[19:33] <stormmore> that is why I am thinking using MaaS and KVMs for the components other than k8s worker nodes
[19:34] <stormmore> at least MaaS can handle both VMs and HW in the same controller
[19:35] <stormmore> the end state (hopefully) will be that the master node can act as a client as well when needed
[19:36] <stormmore> or be removed from the environment once the subservices have been hardened into the environment
[19:43] <stormmore> the messy part is I am considering using Ansible to orchestrate the bootstrapping of this node as it doesn't need to be bootstrapped itself
[19:50] <lazyPower> you could probably get away with a pretty short bash script
[19:53] <lazyPower> but i digress i need to jet to run some errands. keep me in the loop stormmore and i'm happy to lend a hand/input where applicable
[19:53] <lazyPower> cheers o/ have a great weekend everyone
[19:58] <derekcat> Anyone know where Juju keeps its known_hosts file?  It keeps telling me to delete the offending key in /tmp/ssh_known_hosts[numbers] when I try to juju ssh to a unit...  Machines originally added via: juju add-machine ssh:ubuntu@[ip address]
[19:58] <derekcat> The /tmp/ssh_known_hosts file appears to be a very temporary file... gone by the time the ssh attempt fails.
[20:32] <rahworks> hey sup everyone, I can't seem to login to jujucharms.com. can someone help me out
[20:34] <rahworks> This is what i see when i try to login to juju charms.com
[20:34] <rahworks> http://imgur.com/a/YZILS
[20:41] <magicaltrout> yeah i think the SSO is having a fit
[20:41] <magicaltrout> i can't get into the wiki either
[20:42] <bdx> lazyPower: we are trying to install deis on cdk and getting some crazy errors ...
[20:42] <bdx> lazyPower: have you installed deis on cdk successfully?
[20:47] <bdx> lazyPower: were hitting this https://github.com/conjure-up/conjure-up/issues/520
[20:47] <bdx> lazyPower: trying your workaround now
[20:48] <stokachu> bdx, if you get it working can you post the steps in that bug?
[20:48] <stokachu> so i can automate it
[21:03] <bdx> stokachu: yes ... we are so close ...
[21:04] <stokachu> thanks
[21:04] <bdx> stokachu: `deis register deis.<mydomainname>.com` is whats failing us now ... I'll add to the bug
[21:05] <stokachu> bdx, perfect
[21:42] <derekcat> : Anyone know where Juju keeps its known_hosts file for manual/local machines?
[22:00] <derekcat> It works if I run juju ssh --no-host-key-checks postgresql/14
[22:01] <derekcat> but otherwise, it spits this at me:
[22:01] <derekcat> Add correct host key in /tmp/ssh_known_hosts736182584 to get rid of this message.
[22:01] <derekcat> Offending RSA key in /tmp/ssh_known_hosts736182584:7
[22:03] <derekcat> Very similar to https://bugs.launchpad.net/juju/+bug/1646322  except I'm using the manual provider instead of OpenStack, so the suggested solution doesn't apply here...
[22:04] <mup> Bug #1646322: juju scp/ssh known hosts errors <landscape> <juju:New> <https://launchpad.net/bugs/1646322>
[22:04] <derekcat> mup: haha nice timing.  Everything is in the same VLAN/subnet in my case..
[22:04] <mup> derekcat: In-com-pre-hen-si-ble-ness.
[22:04] <derekcat> mup: lol
[22:04] <mup> derekcat: Roses are red, violets are blue, and I don't understand what you just said.
[22:06] <derekcat> mup: my problem machine, juju-controller, and the machine I'm running commands from are all in the same network space. The other 5 machines in the model are all working fine, so I'm not sure what happened here..
[22:06] <mup> derekcat: I apologize. I'm a program with a limited vocabulary.
[22:08] <derekcat> mup: well that's ok, maybe the extra verboseness will help other people ^_-
[22:08] <mup> derekcat: I really wish I understood what you're trying to do.
[22:14] <derekcat> Also, no idea how the fingerprint could've changed as it doesn't appear to have been redeployed since I initially added the machine
[22:14] <derekcat> >_<
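For stale host keys in a persistent known_hosts file (for the manual provider, the entry usually lives in ~/.ssh/known_hosts on the client rather than in juju's regenerated /tmp/ssh_known_hosts* files), the standard fix is `ssh-keygen -R`. A self-contained sketch on a throwaway file, with an illustrative host IP and paths:

```shell
# Build a throwaway known_hosts entry (real key material so ssh-keygen can parse it)
rm -f /tmp/demo_key /tmp/demo_key.pub /tmp/demo_known_hosts /tmp/demo_known_hosts.old
ssh-keygen -q -t ed25519 -N '' -f /tmp/demo_key
printf '10.0.0.42 %s\n' "$(cut -d' ' -f1,2 /tmp/demo_key.pub)" > /tmp/demo_known_hosts
# Remove the offending host entry, as the ssh error message suggests
ssh-keygen -R 10.0.0.42 -f /tmp/demo_known_hosts
grep -c '10.0.0.42' /tmp/demo_known_hosts || true   # prints 0: no matches remain
```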
[22:44] <lazyPower> bdx looks like you didn't apply the workaround. if you see port 443 - you're using the API load balancer, which doesn't support SPDY, which in turn will fail the deis setup.
[22:48] <lazyPower> bdx - https://kubernetes.io/docs/getting-started-guides/ubuntu/troubleshooting/#common-problems
[22:49] <bdx> lazyPower: thanks ... we did apply the workaround though ... it was failing the same way on 80 and 443
[22:49] <lazyPower> bdx but the API endpoint should be 6443
[22:49] <lazyPower> not 443
[22:49] <lazyPower> 80 will always fail, as the apiserver requires strict tls key authentication
[22:49] <bdx> ahhhh
[22:49] <lazyPower> 443 might have worked, but the layer7 load balancer would be the blocker there and yield unhelpful error messaging
[22:49] <bdx> I see that now
[22:50] <lazyPower> so i suspect something didn't happen how we expected it to and that config is incorrect
[22:50] <bdx> I see, that would make perfect sense
[22:50] <lazyPower> https://kubernetes.io/docs/getting-started-guides/ubuntu/troubleshooting/#common-problems <- has the steps to manually fix this using some jq kung-fu and editing your kubeconfig
[22:50] <lazyPower> sorry :( that should have worked, i did test it
[22:50] <lazyPower> but i haven't tested it recently
[22:50] <lazyPower> so it's likely that something has changed and it's botching the "fixing" of the config file
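The linked troubleshooting page does this edit with jq; the same port fix can be sketched with sed on a toy kubeconfig fragment. The file path and server address below are made up - the real file is the kubeconfig your kubectl uses:

```shell
# Toy kubeconfig fragment pointing at the load balancer on 443
cat > /tmp/kubeconfig.demo <<'EOF'
clusters:
- cluster:
    server: https://10.0.0.5:443
  name: juju-cluster
EOF
# Repoint the client at the apiserver's own port, 6443
sed -i 's#\(server: https://[^:]*\):443#\1:6443#' /tmp/kubeconfig.demo
grep 'server:' /tmp/kubeconfig.demo   # now shows https://10.0.0.5:6443
```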
[22:51] <lazyPower> if this works i'll file a bug to investigate the config regeneration path and see if something's funky in there or if this is unrelated and we have something else at play here, but deis has been deployed successfully via helm on CDK. Ben was a wizard at that and stokachu has been working through this as well (but i'm not certain if it was successful as I haven't followed up)
[22:52] <bdx> lazyPower: I'll investigate this and get back to you
[22:52] <bdx> lazyPower: thanks for following up
[22:52] <lazyPower> np :)
[22:52] <lazyPower> i just happened to stop in before i head out for the night. glad i caught you before we missed each other
[22:53] <bdx> aha niccceeee