[08:39] <kjackal> good morning Juju world!
[10:56] <chrome0> I'm trying to upgrade juju 1.25.6 -> 1.25.10 but it seems to fail right at the beginning, without even starting the upgrade. http://paste.ubuntu.com/24220847/
[10:57] <chrome0> Any idea what's going on here? At machine #0 I can't see anything that even registers as an upgrade attempt
[10:59] <kklimonda> if one of the machines fails during bundle deployment, can I tell juju to retry this one machine and all units that depend on it?
[11:10] <kjackal> kklimonda: this is a rather charm-specific issue. The charms should stay in waiting/blocked state and react to other charms' states
[11:11] <kjackal> kklimonda: so as soon as you resolve the issue with the machine/charm the rest of the bundle should react to that
[11:12] <kklimonda> but what if MAAS has failed to deploy the machine?
[11:12] <kklimonda> I can't retry it, as juju plumbing would be missing
[11:16] <ybaumy> juhu juju
[11:17] <ybaumy> how do i change the cpu overcommitment values in nova.conf .. it says that its maintained by juju
[11:17] <ybaumy> i want a 20:1 ratio
[11:19] <ybaumy> i know thats high but its just for test instances
[11:22] <jianghuaw_> Hi, I deployed a bundle with maas; but one of the machines got broken (stuck with no access to this machine). So I marked this machine as broken. Now how can I proceed with the deployment?
[11:25] <jianghuaw_> Any advice? Is it possible to make it allocate a new machine and proceed with the remaining application deployment?
[11:28] <kjackal> jianghuaw_ kklimonda: you can try a juju retry-provisioning. If that fails you can deploy the failing application on another machine and re-add the relations
[11:28] <kjackal> if any
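For reference, the recovery kjackal describes might look like this; the machine number and application names below are placeholders, and the syntax assumes Juju 2.x:

```shell
# Find the machine stuck in a provisioning error:
juju status --format=short
# Ask the provider to provision that machine again (e.g. machine 3):
juju retry-provisioning 3
# If it keeps failing, place the unit on another machine and re-add relations:
juju add-unit my-app --to 4
juju add-relation my-app other-app
```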
[11:29] <kklimonda> kjackal: thanks, does it work for both machines and lxc containers?
[11:29] <kklimonda> looks like what I'm looking for
[11:30] <jianghuaw_> kjackal, thanks.
[11:30] <kjackal> kklimonda: yes works for any provider. The bundle is just a "script"
[11:32] <kjackal> ybaumy: I do not see anything in https://jujucharms.com/nova-compute/266 . You could ask at #openstack-charms
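One hedged pointer for ybaumy: in the OpenStack charms, allocation ratios are typically scheduler-side options rather than nova-compute ones. Whether the deployed charm revision exposes such an option, and its exact name, should be checked first; `cpu-allocation-ratio` below is an assumption:

```shell
# Check which options the charm actually exposes before setting anything
# (Juju 2.0/2.1 command names; newer clients use `juju config`):
juju get-config nova-cloud-controller | grep -i ratio
# If a cpu-allocation-ratio option exists, the 20:1 ratio would be set there:
juju set-config nova-cloud-controller cpu-allocation-ratio=20
```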
[12:29] <cnf> hmm, ok
[12:33] <cnf> jamespage: this is the result from your bundle http://termbin.com/dy1u did i do something silly? you were going to test it, right?
[12:34] <jamespage> cnf: I think what I said was I don't have anywhere to test this right now :(
[12:34] <jamespage> cnf: juju status --format=yaml might tell us more
[12:34] <cnf> ah, ok :P
[12:35] <cnf> message: '{"hostname": ["Node with this Hostname already exists."]}' o,O
[12:36] <cnf> that's on an lxd container
[12:36] <cnf> also
[12:36] <cnf>          current: provisioning error
[12:36] <cnf>           message: 'unable to setup network: host machine "0" has no available device
[12:36] <cnf>             in space(s) "space-openstack-mgmt"'
[12:36] <cnf> but it does?
[12:38] <cnf> http://termbin.com/va82
[12:42] <cnf> oh, nm
[12:42] <cnf> it seems MAAS is being weird again?
[12:42] <cnf> wtf?
[12:43] <cnf> jamespage: ok, back to #maas it seems omO
[12:52] <cnf> or is it
[12:52] <cnf> bah! i don't know
[12:53] <cnf> this is frustrating
[12:53] <icey> is there a good way to make an interface that's local to a specific charm rather than a global interface on interfaces.juju.solutions?
[12:57] <marcoceppi> icey: yeah, just put it in INTERFACE_PATH (and set that environment variable)
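A minimal sketch of marcoceppi's suggestion, assuming a charm-tools (`charm build`) workflow and an in-tree interfaces directory; the paths are illustrative:

```shell
# Keep the interface next to the charm source and point charm build at it.
# charm build looks for $INTERFACE_PATH/<interface-name>/interface.yaml.
export INTERFACE_PATH="$PWD/interfaces"
charm build
```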
[12:58] <icey> marcoceppi: ideally I'd love to have it in tree for this charm
[12:58] <marcoceppi> icey: why?
[12:58] <marcoceppi> kind of defeats the purpose of interop
[12:58] <icey> marcoceppi: it's an existing interface; I'm porting a charm (old, bash + python) into layers + reactive. The interface is very explicitly only for this application
[12:59] <marcoceppi> icey: why even have an interface at all
[12:59] <marcoceppi> if nothing will ever connect to it
[12:59] <icey> marcoceppi: there are things that connect to it, but nothing else will provide it
[12:59] <icey> I suppose I can make a full interface and host it
[13:00] <marcoceppi> icey: well the interface on interfaces.juju.solutions include the provides and requires parts
[13:00] <icey> marcoceppi: yeah
[13:00] <marcoceppi> icey: yeah, I would go the full monty if other things consume (or provide) it
[13:00] <icey> the other things consuming it already have written their code to do that ;-)
[13:01] <marcoceppi> icey: but new applications can now use the layer you produce :)
[13:01] <icey> indeed marcoceppi
[13:05] <marcoceppi> stokachu: https://github.com/conjure-up/conjure-up/issues/750 any thoughts?
[13:07] <stokachu> marcoceppi: what does 'sudo apt-get update' give you? does it work?
[13:08] <marcoceppi> stokachu: no, 403 as well
[13:08] <stokachu> hmm
[13:08] <stokachu> so we rely on either that ppa or the snap version of lxd
[13:09] <stokachu> marcoceppi: are you behind a proxy or anything?
[13:09] <marcoceppi> at home
[13:09] <stokachu> maybe that needs a refresh
[13:11] <marcoceppi> oddly enough, works elsewhere
[13:11] <marcoceppi> nvm
[13:11] <stokachu> got it?
[13:12] <marcoceppi> actually, nope
[13:13] <marcoceppi> okay, I think I figured it out
[13:13] <stokachu> what was it?
[13:14] <marcoceppi> bad apt proxy config
[13:14] <marcoceppi> well, apt proxy config, which was fine, but not for ppas
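The situation marcoceppi hit can be handled with apt's per-host proxy overrides, so PPA traffic skips a proxy that only caches archive mirrors. Host and port below are made up:

```
// /etc/apt/apt.conf.d/01proxy (example values only)
Acquire::http::Proxy "http://proxy.local:8000";
// Send PPA traffic direct instead of through the proxy:
Acquire::http::Proxy::ppa.launchpad.net "DIRECT";
```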
[13:15] <stokachu> ah ok
[13:16] <marcoceppi> now I have to fight with snapd
[13:23] <jrwren> marcoceppi: apt-cacher-ng ?
[13:23] <marcoceppi> squid-deb-proxy
[14:38] <cnf> hmz ok
[14:38] <cnf> let's try this again :(
[14:39] <cnf> it seems juju breaks my network config on maas nodes
[14:39] <cnf> and then complains about it
[14:40] <cnf> :(
[14:41] <cnf> it tries to move an ip to a bridge
[14:41] <cnf> and then fails?
[14:42] <cnf> can anyone help debug this?
[14:43] <cnf> i'm stuck at http://termbin.com/3z4n
[14:48] <cnf> hmm, this is really frustrating
[14:55] <cnf> jamespage: are you available?
[14:57] <cnf> hmm, how do i upgrade a controller?
[14:57] <cnf> 2.1.2 has something about one of the errors
[15:03] <cnf> >,<
[15:04] <cnf> ugh, wtf
[15:04] <cnf> upgrade instructions don't work
[15:06] <rick_h> cnf: you did juju switch controller and juju upgrade-juju?
[15:06] <cnf> rick_h: yes, upgrade-juju doesn't work
[15:07] <rick_h> what does it say?
[15:07] <cnf> $ juju upgrade-juju
[15:07] <cnf> no prepackaged tools available, using local agent binary 2.1.2.1
[15:07] <cnf> ERROR no matching tools available
[15:07] <rick_h> cnf: hmm, what did you deploy from?
[15:08] <cnf> 2.1.2-sierra-amd64
[15:08] <cnf> it is what i used to bootstrap
[15:08] <cnf> well, i used 2.1.1 to bootstrap
[15:10] <rick_h> cnf: hmm, so it couldn't find a 2.1.2 for the architecture and tried to use yours but you're on osx which of course can't be uploaded.
[15:10] <cnf> right
[15:10] <rick_h> cnf: what is this installed on?
[15:11] <cnf> an ubuntu vm
[15:11] <rick_h> cnf: the controllers?
[15:11] <rick_h> not sure why it wouldn't be able to upgrade an ubuntu vm to 2.1.2. What version is it at now? juju show-controller xxxxx
[15:12] <cnf> 2.1.1
[15:12] <rick_h> on a xenial VM?
[15:12] <rick_h> ubuntu xenial that is
[15:12] <cnf> yes
[15:13] <cnf> i'm trying to upgrade because https://jujucharms.com/docs/devel/reference-release-notes says "[juju] Handle 'Node with this Hostname already exists' errors when provisioning containers. LP:#1670873"
[15:13] <mup> Bug #1670873: juju fails when requesting an IP for a container when retrying after lxd forkstart <oil> <oil-2.0> <juju:Fix Released by jameinel> <juju 2.1:Fix Released by jameinel> <https://launchpad.net/bugs/1670873>
[15:13] <cnf> and my deploy is failing, and this is one of the errors in my logs
[15:13] <cnf> don't know if this will fix it, but hey
[15:13] <rick_h> cnf: k, that sounds good. I'm not sure why the upgrade wouldn't find the 2.1.2 agents though.
[15:14] <cnf> uhu
[15:14] <rick_h> cnf: so the controller is a VM, how did you get it setup? With the manual provider? I thought I saw something about MAAS earlier?
[15:14] <cnf> juju model-config shows the right proxies are set
[15:14] <cnf> rick_h: juju bootstrap
[15:14] <cnf> it's a KVM running on the MAAS controller, and added as a MAAS machine
[15:14] <rick_h> cnf: on the vm itself using localhost?
[15:15] <cnf> so it is a VM, but it looks like a maas machine to juju
[15:15] <rick_h> cnf: ah ok, so yea using the maas provider bits
[15:15] <cnf> yes
[15:15] <cnf> so i use my laptop to run the juju command against a maas / juju setup in the lab
[15:15] <rick_h> balloons: have a sec? can you think of why the controller wouldn't find agents for 2.1.2 on xenial/maas? ^
[15:15] <rick_h> cnf: makes sense
[15:16] <balloons> rick_h, what does the log show?
[15:16] <cnf> which log?
[15:16] <balloons> bootstrap -- try running it with --debug too
[15:16] <balloons> the obvious answer is if the maas doesn't have outside internet access
[15:17] <cnf> http://termbin.com/p59f
[15:17] <rick_h> balloons: it's already bootstrapped and running
[15:17] <cnf> balloons: the bootstrap was done 2 weeks or so ago
[15:17] <rick_h> balloons: so he's trying to juju upgrade-juju to go from 2.1.1 to 2.1.2
[15:22] <balloons> what's the controller version?
[15:22] <balloons> Are we sure it thinks it's 2.1.1?
[15:23] <rick_h> balloons: using show-controller that's what cnf says?
[15:23] <cnf> and it was installed 2 or 3 weeks ago :P
[15:24] <cnf>  agent-version: 2.1.1
[15:25] <balloons> ack
[15:27] <rick_h> cnf: can you run the 'juju upgrade-juju' with --debug on it and see if it outputs something helpful on where it's going and we can verify it can reach it?
[15:28] <cnf> rick_h: i did, and pasted the output above
[15:28] <cnf> http://termbin.com/p59f
[15:28] <rick_h> oh, sorry I missed it
[15:30] <balloons> it's weird because I don't see it doing anything beyond starting the initial search
[15:31] <cnf> yeah
[15:31] <cnf> and juju model-config shows the right proxies are set
[15:32] <balloons> cnf, can you bootstrap a new 2.1.2 controller? Just to make sure you can see streams?
[15:32] <cnf> balloons: streams?
[15:32] <cnf> btw, "juju status" says "upgrade available: 2.1.2"
[15:36] <cnf> balloons: and bootstrapping a new controller would be about an hour work
[15:36] <cnf> i have nothing ready to take it, atm
[15:36] <balloons> cnf, no worries. Don't want to try that then
[15:37] <cnf> i'll put that on the last resort list :P
[15:37] <cnf> it's possible, but i'd rather go for the easer debugging first, if we can
[15:38] <balloons> cnf, sync-tools may also be an ok test
[15:38] <balloons> if you run with juju upgrade-juju --dry-run or juju upgrade-juju --dry-run --agent-version 2.1.2
[15:38] <balloons> what happens?
[15:39] <cnf> $ juju upgrade-juju --dry-run --agent-version 2.1.2
[15:39] <cnf> upgrade to this version by running
[15:39] <cnf>     juju upgrade-juju --agent-version="2.1.2"
[15:39] <cnf> :P
[15:40] <balloons> juju sync-tools --public --debug --version 2.1 --local-dir=. --dry-run --stream=released
[15:41] <cnf> http://termbin.com/qx52
[15:42] <balloons> k, so it can see agents just fine
[15:45] <cnf> balloons: 16:44:55 DEBUG juju.environs.simplestreams simplestreams.go:454 skipping index "file:///Users/cnf/tools/streams/v1/index2.json" because of missing information: "content-download" data not found
[15:45] <cnf> is that normal?
[15:46] <balloons> cnf, do you have local streams? Where did that come from?
[15:46] <cnf> balloons: i took away the --dry-run from that last command
[15:50] <andrew-ii> A model can't reasonably share machines, right? Like, two models can't coexist?
[15:52] <cnf> hmm
[15:53] <balloons> cnf, juju show-controller
[15:54] <cnf> balloons: https://bpaste.net/show/60fec9e338f8
[15:55] <rick_h> andrew-ii: no, the little watchers running on there would probably get pretty confused
[15:56] <andrew-ii> rick_h: Thanks - I was pretty sure it was nonsensical, but I didn't find the verbiage (obvious as it may be)
[16:00] <cnf> balloons: i'm both glad and worried this stumps you as well :P
[16:01] <cnf> glad because it means i wasn't doing obviously stupid stuff, and worried because debugging it is going to be a pita
[16:02] <balloons> cnf, do you know the history of the controller? How was it created, and what's happened to it along the way?
[16:02] <cnf> balloons: yeah, it's all me
[16:02] <cnf> i bootstrapped it 2 or 3 weeks ago
[16:03] <cnf> from this very machine
[16:03] <balloons> using the 2.1.1 client right?
[16:04] <balloons> cnf, juju model-defaults
[16:04] <cnf> yes
[16:04] <cnf> http://termbin.com/hoqz
[16:05] <cnf> model-config has proxy overrides
[16:06] <balloons> cnf, how about model-config then as well :-)
[16:06] <cnf> http://termbin.com/detm
[16:07] <Budgie^Smore> o/ juju world
[16:09] <balloons> cnf, juju model-config logging-config=juju.apiserver=trace
[16:09] <balloons> juju model-config -m controller logging-config=juju=trace
[16:10] <cnf> ok, and then juju upgrade-juju again?
[16:11] <balloons> yea, with debug. I don't think in this case it will show anything more
[16:11] <balloons> However, we're not getting a good return on finding the agents from streams.
[16:11] <balloons> Have you upgraded controllers before?
[16:11] <cnf> it doesn't show anything more
[16:12] <cnf> no
[16:12] <cnf> balloons: i'm brand baby new to juju / maas
[16:12] <balloons> There may be an issue with it not respecting proxy on upgrade
[16:12] <cnf> (been quite a frustrating experience so far :( )
[16:12] <balloons> It's clear I think that it's not seeing streams. It should return something
[16:12] <balloons> It's possible to bootstrap another controller and migrate your workload, or do use sync-tools and manually push the agent to the controller to upgrade it
[16:13] <cnf> i did sync-tools, so i have them all locally now
[16:13] <cnf> where should they be on the controller?
[16:13] <cnf> (though it sounds like a bug of sorts)
[16:14] <balloons> cnf, yes, you found a bug indeed I believe. I'll have to repro it, but I'm thinking that's it
[16:15] <cnf> $ pwd
[16:15] <cnf> ubuntu@juju-controller:/var/lib/juju/tools$ ls
[16:15] <cnf> 2.1.1-xenial-amd64  machine-0
[16:15] <cnf> btw
[16:15] <cnf> that is on the controller
[16:16] <balloons> right, so we'll need to make a local stream, then tell the controller about it, and upgrade using it
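The local-stream mechanism balloons is describing would look roughly like this. Treat it purely as a sketch: the `juju metadata` plugin subcommand name is per the 2.x docs, the IP and directory are placeholders, and whether `agent-metadata-url` can be changed on a running controller is exactly what's in question here:

```shell
# Build simplestreams metadata over the agents fetched with sync-tools:
juju metadata generate-tools -d ~/local-tools
# Serve the tree somewhere the controller can reach:
(cd ~/local-tools && python3 -m http.server 8080) &
# Point the controller model at the local stream and retry the upgrade:
juju model-config -m controller agent-metadata-url=http://<laptop-ip>:8080/tools
juju upgrade-juju -m controller
```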
[16:16] <cnf> ok
[16:18] <balloons> ok, so you ran synctools and have them all in a local folder?
[16:18] <Budgie^Smore> lazyPower, you about today? do you remember us talking about elasticsearch integration?
[16:19] <cnf> balloons: http://termbin.com/mq29
[16:21] <lazyPower> Budgie^Smore: in what context?
[16:21] <cnf> http://termbin.com/rdvo is cleaner :P
[16:23] <Budgie^Smore> lazyPower, from memory (so don't quote me on this, slept a few times since then), you were already testing it as part of the CDK bundle
[16:23] <lazyPower> Budgie^Smore: when we first released the CDK bundle we had elastic beats + elasticsearch + kibana as part of the bundle
[16:23] <lazyPower> so the integration points already exist
[16:24] <Budgie^Smore> so add charm, relate and deploy?
[16:24] <lazyPower> That should be the basics, yeah
[16:24] <lazyPower> Budgie^Smore: give me a bit, I'm in a meeting
[16:25] <lazyPower> Budgie^Smore: but you should be able to deploy beats-core and add the beats=>kube relations, configure the beats and that should be the basics of the operation though.
[16:25] <lazyPower> there's a new elasticsearch charm incoming from bdx that targets ES 5.x
[16:25] <lazyPower> might be worth tracking that work as well
[16:25] <Budgie^Smore> no worries, about to head to the office
[16:30] <balloons> cnf, sorry just a moment
[16:30] <cnf> sure
[16:44] <balloons> cnf, which model are you in?
[16:44] <cnf> controller
[16:45] <cnf> or i should say admin/controller i guess
[16:49] <balloons> cnf, so my idea isn't workable; there's no avoiding bootstrapping another controller
[16:49] <cnf> :(
[16:50] <balloons> I beat on it a bit, but apart from manually placing the tools and editing the db, it's not going to happen
[16:50] <cnf> do you know what the bug is?
[16:50] <cnf> i don't fancy doing this again next time :/
[16:50] <balloons> cnf, I'd encourage you to post to the list to get feedback on others in locked down maas environments on how they manage things
[16:50] <balloons> the collective knowledge is better than me
[16:51] <cnf> i'm not quite sure what to post
[16:51] <cnf> besides "it doesn't work"
[16:51] <balloons> cnf, well you shouldn't be doing any of this. upgrade-juju should just work
[16:51] <cnf> yes, it should :P
[16:51] <cnf> but if i don't know why it doesn't, i don't trust it will work next time
[16:52] <balloons> cnf, can you ssh into the controller
[16:52] <cnf> yes
[16:52] <balloons> we actually never proved the controller has a good proxy
[16:52] <balloons> juju ssh -m controller 0
[16:52] <balloons> then try grabbing from streams.canonical.com
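The check balloons suggests, spelled out; the proxy URL is a placeholder, and the streams path assumes the standard released-agents location:

```shell
juju ssh -m controller 0
# then, on the controller, fetch the agent stream index through the proxy:
https_proxy=http://proxy.local:3128 \
  curl -sI https://streams.canonical.com/juju/tools/streams/v1/index2.json
```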
[16:53] <balloons> cnf, when you bootstrap, the agent is uploaded by the client to the controller. The deployed machines then get the agent from the controller. So it's possible the controller doesn't have access at all
[16:53] <cnf> curl can get to it just fine
[16:54] <cnf> unless juju just ignores the set proxies
[16:54] <balloons> via proxy yes?
[16:54] <cnf> yes
[16:54] <balloons> are we sure the proxy set in juju is correct?
[16:54] <cnf> uhm, no?
[16:54] <cnf> i don't know what "correct" is besides "juju model-config"
[16:55] <cnf> it's set in /etc/apt/apt.conf.d/95-juju-proxy-settings
[16:55] <cnf> and it's set in the env
[16:55] <cnf> so i don't know what else should be set
[16:56] <cnf> unless the tool that does the downloading from the stream ignores this?
[16:57] <balloons> ahh, we did confirm juju status shows upgrade available yea?
[16:58] <balloons> so the proxy works, the values set in model-config match your expectations, and juju can kind of see the upgrade, since it tells you it exists
[16:58] <cnf> yes
[16:58] <cnf> controller  dsmaas-controller  dsmaas        2.1.1    upgrade available: 2.1.2
[16:59] <cnf> right
[17:02] <balloons> cnf, did you bootstrap with those proxy settings?
[17:02] <cnf> yes
[17:03] <balloons> did you show me juju model-config -m controller?
[17:03] <kklimonda> who is conjure-up targeted at?
[17:04] <kklimonda> (for example I seem to be missing the point of deploying OpenStack with one command, given how complicated software it is, and how much planning ahead is required)
[17:04] <kklimonda> is this for demos and lab?
[17:05] <balloons> stokachu, ^^
[17:05] <bdx> kklimonda: getting the initial base infrastructure stack deployed successfully is one thing, maintaining it over time is another
[17:06] <cnf> balloons: yes,  but i can show it again :P
[17:06] <balloons> cnf, ty :-)
[17:06] <cnf> balloons: http://termbin.com/69cn
[17:07] <bdx> kklimonda: the ability to spin up openstack, or any other complex software stack with a single command is really the polish on the block
[17:07] <balloons> Anyways, if you wouldn't mind posting to the mailing list about your issues, that would be lovely. It would also be useful for you to ask how folks best handle upgrades in these situations, though, it should work.
[17:07] <kklimonda> bdx: yes - but that's my point. what does it bring to the table over juju?
[17:08] <kklimonda> it just seems to be another layer of indirection
[17:08] <bdx> kklimonda: its a layer of useability
[17:09] <cnf> i don't know the mailing list
[17:09] <cnf> i'm generally not a fan of mailing lists
[17:10] <balloons> cnf, ahh. Well, you certainly don't have to post. I can file a bug about it, but it may be useful for you to do so, so you can track it: https://bugs.launchpad.net/juju/+filebug
[17:11] <cnf> what do i call it?
[17:11] <cnf> juju upgrade-juju fails ?
[17:11] <bdx> kklimonda: I don't want the users of my charms maintaining yaml configs all over the place; it's easier for me, and my users, if I create spells for these infrastructure stacks so that the deploys can be interactive and intuitive
[17:11] <cnf> it's so generic :/
[17:11] <balloons> cnf, juju upgrade-juju doesn't honor proxy settings
[17:12] <bdx> kklimonda: especially for openstack .... your config.yaml for an openstack bundle can end up being 1000+ lines
[17:12] <balloons> I wonder if I can repro quickly actually
[17:12] <cnf> bdx: been trying to deploy openstack with juju for 3 weeks, little polish to that :(
[17:12] <cnf> balloons: are you sure it's the proxy ?
[17:12] <balloons> cnf, did you try conjure-up, heh?
[17:12] <kklimonda> bdx: yes, but that configuration (and the decisions behind it) still have to be made
[17:12] <balloons> cnf, your log indicates you don't get anything back from the version check
[17:13] <balloons> cnf, it should return, nothing to upgrade, or XXXX found. You get nothing and it drops to trying a locally built one
[17:13] <bdx> cnf: I have successfully deployed openstack with juju in a myriad of different ways, let me know if you need some insight, I would be glad to give you some pointers if needed
[17:14] <cnf> bdx: i can't get anything sensible out of juju so far
[17:14] <zeestrat> kklimonda: As another side of the tale, we find the extra layer of abstraction to give little value so we stick to Juju.
[17:14] <cnf> can't even upgrade it, it seems
[17:15] <bdx> kklimonda: conjure-up also allows for different types of provisioning automation not available via vanilla juju
[17:15] <bdx> kklimonda: e.g. lxd-profiles
[17:16] <cnf> balloons: https://bugs.launchpad.net/juju/+bug/1674759
[17:16] <mup> Bug #1674759: juju upgrade-juju doesn't honor proxy settings <juju:New> <https://launchpad.net/bugs/1674759>
[17:16] <cnf> sorry if it's a bit succinct, i'm tired and hungry atm
[17:16] <balloons> cnf, I know we didn't want to bootstrap a controller, but I would encourage you to try upgrading with https://jujucharms.com/docs/2.1/models-migrate. It's likely how you should manage models in production anyway
[17:16] <balloons> ie, bootstrap a 2.1.2 controller, then migrate your models. Finally, teardown the 2.1.1 controller
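Sketched out, with cloud, controller, and model names as placeholders:

```shell
# Bootstrap a fresh controller with a 2.1.2 client:
juju bootstrap dsmaas new-controller
# Move each model across, then retire the old controller:
juju migrate default new-controller
juju destroy-controller dsmaas-controller
```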
[17:16] <cnf> balloons: well, that assumes you have spare machines to do this :P
[17:17] <balloons> cnf, yea, that's the blessing and curse
[17:17] <cnf> i'll have to do that twice
[17:17] <cnf> once away, once back
[17:17] <balloons> why twice?
[17:17] <cnf> and bootstrapping a new machine takes 20 minutes on its own
[17:17] <cnf> balloons: once to a machine, then back to the vm
[17:17] <bdx> kklimonda: look at the example of the kubernetes spell for lxd provider .... conjure-up lends to some really cool extended functionality where you can use pre/post scripts to modify things outside of your juju environment
[17:18] <cnf> conjure-up looks like even more magic ontop of juju
[17:18] <cnf> i can't get juju to behave sanely, i don't want even more magic personally
[17:18] <bdx> cnf, kklimonda: https://github.com/conjure-up/spells/blob/master/canonical-kubernetes/steps/00_pre-deploy
[17:19] <bdx> it allows you to do cool and important things that you guys aren't really looking at or taking into account
[17:19] <cnf> bdx: that looks ugly, what's that for?
[17:20] <balloons> ty for the bug report cnf
[17:20] <bdx> cnf: that is a conjure-up pre-deploy script - it runs prior to conjure-up deploying your juju stuff to configure the lxd profile https://github.com/conjure-up/spells/blob/master/canonical-kubernetes/steps/lxd-profile.yaml#L6
[17:20] <cnf> o,O
[17:21] <bdx> cnf: things like this (customizing lxd profiles) are a huge hassle as well as a blocker for people trying to deploy things to containers that need special modifications
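For context, what that spell automates is roughly the following, done by hand; the profile name and keys mirror the linked lxd-profile.yaml, but the exact values are assumptions:

```shell
# Kubernetes-in-LXD needs privileged containers and extra kernel modules:
lxc profile set juju-default security.privileged true
lxc profile set juju-default linux.kernel_modules \
  ip_tables,ip6_tables,netlink_diag,nf_nat,overlay
```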
[17:22] <cnf> and how do i install conjure-up?
[17:23] <bdx> cnf, kklimonda: if you aren't wise to all of things you would need to do to lxd to make it support deploying kubernetes ... these things are encapsulated in the conjure-up workflow/spells
[17:23] <cnf> i don't care about lxd, much
[17:23] <bdx> cnf: are you familiar with snaps?
[17:23] <cnf> no
[17:23] <bdx> cnf: you are trying to ride the horse without knowing well the saddle
[17:24] <cnf> ...
[17:24] <kklimonda> bdx: I mean, I understand what you're saying but someone will have to understand all this once the deployment is done
[17:25] <cnf> conjure-up is available on both Ubuntu Trusty 14.04 LTS and Ubuntu Xenial 16.04 LTS
[17:25] <cnf> no ubuntu here
[17:25] <lazyPower> work is underway to port it to MacOS, but that's still pending.
[17:25] <bdx> cnf: you are going to have a tough time trying to run all/any of this from non Ubuntu Xenial
[17:25] <cnf> bdx: so no thank you
[17:26] <bdx> cnf: conjure-up is delivered as a snap - which isn't a thing on non-ubuntu systems
[17:26] <lazyPower> bdx: thats a lie
[17:26] <bdx> :-0
[17:26] <lazyPower> bdx: snap is supported on centos, debian, arch, sles
[17:26] <bdx> oooh
[17:26] <bdx> thanks LP!
[17:26] <lazyPower> bdx: <3 happy to alley oop some knowledge
[17:26] <bdx> cnf: srry
[17:26] <cnf> so you don't know your own tools...
[17:27] <cnf> either way, as long as juju isn't reliable, no amount of magic on top of it will make me trust it
[17:28] <bdx> cnf: I'm just a community member, I don't work for canonical .... I try to know the tools as well as possible, I didn't know snap was supported across other os's srry
[17:29] <bdx> kklimonda: have to know what?
[17:29] <cnf> you don't deploy / run openstack without knowing the details
[17:29] <bdx> cnf: what details?
[17:30] <cnf> yeah, you don't run openstack, i take it
[17:30] <cnf> balloons: where is the mailing list, btw?
[17:30] <bdx> yeah ... I have been since it was released to the public ... before there was automation
[17:31] <kklimonda> bdx: for example, even if conjure spell manages apparmor profiles on host machines for the user, that doesn't mean this is removing any burden - you still have to understand what has been changed, and why, or you'll have a bad day later.
[17:31] <balloons> cnf, https://lists.ubuntu.com/mailman/listinfo/juju
[17:31] <cnf> bdx: in production, with people using it?
[17:31] <bdx> cnf: yea man
[17:31] <bdx> deployed via juju
[17:32] <cnf> and you suggest people deploy openstack without knowing about how it works? what network is used for that
[17:32] <cnf> or what hardware is holding ceph data etc
[17:32] <bdx> cnf: that is the admins responsibility to track that information
[17:32] <cnf> balloons: oh, you need to join it?
[17:32] <cnf> o,O
[17:32] <bdx> cnf: juju/maas really help there too
[17:32] <lazyPower> cnf: our goal is not to remove the knowledge requirement for management over time, it's to remove the knowledge barrier to get started. We still advocate you read the book at least once, though we want to abstract away that requirement.
[17:32] <balloons> cnf, well juju@lists.ubuntu.com if you just want to send a mail. But replies posted only to the list obviously won't reach you
[17:32] <cnf> bdx: for the past 3 weeks, juju / maas has been gettin in my way
[17:32] <cnf> not helping me
[17:34] <bdx> cnf: sorry to hear that, let me know if you need some guidance
[17:34] <cnf> to be quite honest, if i can't get something working soon, my advice here will be to NOT use juju / canonical for openstack deployment
[17:34] <cnf> bdx: feel free to start with https://bugs.launchpad.net/juju/+bug/1674759
[17:34] <mup> Bug #1674759: juju upgrade-juju doesn't honor proxy settings <juju:New> <https://launchpad.net/bugs/1674759>
[17:35] <bdx> kklimonda: so if you are deploying kubernetes to lxd, you are probably just looking for a POC anyway right?
[17:36] <kklimonda> bdx: well, conjure-up seems like a pretty cool POC/lab deployment tool
[17:36] <kklimonda> I don't deny that :)
[17:36] <cnf> balloons: thanks for the help so far, i'm calling it a day
[17:36] <cnf> i'm tired, frustrated and hungry
[17:36] <cnf> balloons: not a good state to deal with this :P
[17:36] <balloons> cnf, you are most welcome. Sorry to hear about your troubles :-(
[17:38] <bdx> kklimonda: I can't speak for others, or for how the tool is intended to be used, but I find it most useful for getting my initial stack deployed, then I switch over to pure juju for the rest of the lifecycle ops
[17:42] <bdx> cnf: did you try `juju upgrade-juju --build-agent`
[17:43] <rick_h> bdx: seems like a legit bug in the upgrade command not able to go through the proxy that's set
[17:43] <rick_h> bdx: and an osx client can't provide agents for ubuntu xenial
[17:43] <bdx> rick_h: ahh .. I forgot the osx thing , darn
[17:44] <rick_h> bdx: yea, nasty bug basically
[17:44] <bdx> ok then
[17:44] <bdx> shoot
[18:07] <stormmore> o/ juju world
[19:23] <andrew-ii> bdx: you mentioned a good book to read? One that would make sense with version 2?
[19:30] <marcoceppi> stokachu cory_fu any progress on osx + conjure?
[19:47] <bdx> andrew-ii: I mentioned a book?
[19:47] <andrew-ii> Might have been a figure of speech
[19:48] <andrew-ii> Basically, I jumped into maas/juju whole hog back before 2 was released, and never really got a cloud up. So it's been hard to get started.
[19:48] <andrew-ii> I've tried conjure-up, but I'm just too ignorant to troubleshoot it, I think
[19:49] <andrew-ii> So I'm slowly building openstack up manually, and that's been really informative
[19:51] <bdx> andrew-ii: yeah ... that is probably the biggest backwards whale there is ... juju makes openstack really niceee, but you have to know a bit about openstack for it all to make sense
[19:52] <bdx> andrew-ii: you should imagine juju as something that takes away a lot of the pain points, that being said, you still need to be entirely and overly familiar with the actual cloud substrate if you plan on supporting/operating/maintaining it to any degree
[19:53] <andrew-ii> I've certainly felt that :)
[19:53] <bdx> which it sounds like you are touching up in some of those areas for sure
[19:53] <andrew-ii> You'd be amazed how much you can learn by ramming your head against an immovable object for a while
[19:55] <bdx> haha ... right ... some may call it psychotic, but I'll stand by it
[19:55] <andrew-ii> It's been a blast
[19:55] <andrew-ii> Completely useless
[19:55] <andrew-ii> But fun
[19:57] <andrew-ii> Though it looks like I'll be able to use it for a simple test environment, so I'm actually excited to see that work!
[19:57] <bdx> andrew-ii: have you gotten a POC deploy up via conjure-up or the openstack-base-bundle / openstack-lxd-base-bundle?
[19:58] <andrew-ii> Every time I tried conjure-up, I ended up reinstalling the maas controller
[19:58] <bdx> andrew-ii: I see. Did you find out why that was happening?
[19:59] <andrew-ii> So I was following the setup on one of the Juju dev blogs, and it has a lot of network config
[20:00] <andrew-ii> It seems like conjure-up didn't really respect that and, combined with some strange dns/routing issues, just sorta confused it
[20:00] <andrew-ii> I'm sure it was recoverable, but I didn't know how or what was *really* wrong
[20:01] <bdx> andrew-ii: yeah, (are you reading Dimiter's blog?) a good amount of that has been simplified/made possible via the maas gui now
[20:01] <andrew-ii> Yes!
[20:01] <andrew-ii> The new maas 2 interface is a joy
[20:01] <andrew-ii> And that has greatly alleviated a bunch of the rebuild/configuration pain
[20:01] <bdx> andrew-ii: fair enough - right
[20:01] <andrew-ii> And I've almost gotten openstack manually deployed
[20:02] <andrew-ii> It just hung on some relations that... well, should have been ok
[20:02] <bdx> ooooh, you are building it manually via juju
[20:02] <andrew-ii> So I'm rebuilding it tonight to see what I missed the first time in my config
[20:02] <bdx> lol
[20:02] <bdx> ok
[20:02] <bdx> gotcha
[20:02] <andrew-ii> Yeah - one of my machines seems to be slightly junk
[20:02] <andrew-ii> Since it won't always be able to talk to the cloud images
[20:03] <andrew-ii> So I need to do a lot of checks before I can let a bundle loose, or it just sorta stalls
[20:03] <bdx> hmmm .... I see
[20:04] <andrew-ii> I'll admit, it's weird
[20:04] <andrew-ii> One machine basically only starts LXD containers once I destroy it once and rebootstrap
[20:05] <bdx> as opposed to?
[20:05] <andrew-ii> Seems there's some screwy TLS handshake issue that won't work because I think I have a dead network card
[20:05] <andrew-ii> well, one or two of the network connections are slightly garbage - both trouble machines are SuperMicro
[20:05] <andrew-ii> Bonding helped, but not perfectly
[20:06] <bdx> lol oh man
[20:06] <bdx> yeah ... that will be a stick in your spokes for sure
[20:06] <andrew-ii> Basically, it seems to be a hilarious series of nonsensical bumps :P
[20:07] <bdx> aweee, I'm sorry man ... I know how that goes ... just gotta take the good with the bad and roll on
[20:07] <hatch> Hi everyone, we're aware of a service outage on jujucharms.com and are currently working quickly on a resolution.
[20:07] <andrew-ii> Ain't no thing. I'm pretty zen about it
[20:07] <andrew-ii> I knew going in that bucking for the 2.0+ releases was riding the bleeding edge, and so naturally I got a bit cut
[20:07] <bdx> andrew-ii: when I've been in similar situations, I create a working source of truth and make small incremental changes from that starting point
[20:08] <bdx> yeah
[20:08] <andrew-ii> MAAS is stable and seems great now, so now I'm just learning juju, and soon OpenStack :)
[20:10] <bdx> there ya go .... the best thing being, if/when you get stuck you know you have a whole community of engineers here to lend a hand - keep up the good work!
[20:18] <hatch>  jujucharms.com outage has been resolved, thanks for your patience :)
[20:20] <magicaltrout> I AM NOT PATIENT!
[20:20] <magicaltrout> but i didn't notice
[20:20] <magicaltrout> so i figure its okay
[20:20] <hatch> lol
[20:21] <hatch> that's the best kind of outage
[21:31] <stormmore> lazyPower, I think I finally broke my AWS cluster badly
[21:32] <magicaltrout> try switching it off and on again
[21:32] <magicaltrout> or take the floppy disk out
[21:33] <stormmore> magicaltrout, I did that made things worse :P
[21:34] <magicaltrout> did you defragment the drive? or mark the bad sectors? :P
[21:36] <stormmore> :P doubt that is the problem magicaltrout ... getting told by the master that it is waiting for kube-system pods to start and the workers are waiting for kubectl!
[21:37] <magicaltrout> ah nice
[21:41] <stormmore> at this point I am thinking about destroying and recreating it but wanted to know if lazyPower would like to get some failure data before I actually do destroy the cluster
[21:42] <magicaltrout> when you say waiting for kubectrl you mean the executable isn't available?
[21:43] <stormmore> not sure at this point, "Waiting for kubelet to start." is what juju status says about the node
[21:43] <magicaltrout> oh
[21:44] <tvansteenburgh> stormmore: juju ssh to the worker node, then `journalctl -u kubelet`
[21:44] <stormmore> well for the workers. "Waiting for kube-system pods to start" is what the state is for the master
[21:44] <tvansteenburgh> journalctl should have some info in there about why kubelet won't start
[21:47] <tvansteenburgh> stormmore: for the master, juju debug-log --replay -i unit-kubernetes-master-0 # pastebin that somewhere
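(A sketch of the debugging sequence tvansteenburgh describes above; the worker unit name `kubernetes-worker/0` is an assumption for illustration, the master unit id is the one given in the conversation - adjust both to match your `juju status` output.)

```shell
# SSH into the stuck worker unit (unit name assumed; check `juju status`)
juju ssh kubernetes-worker/0

# On the worker, see why the kubelet service won't start;
# the last messages before the restart loop are usually the relevant ones
journalctl -u kubelet --no-pager | tail -n 50

# Back on the client machine, replay the full unit log for the master
# and save it somewhere you can pastebin
juju debug-log --replay -i unit-kubernetes-master-0 > master.log
```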
[21:47] <stormmore> tvansteenburgh, http://paste.ubuntu.com/24224523/ the journalctl -u kubelet from a worker
[21:48] <tvansteenburgh> that's the whole thing?!
[21:48] <stormmore> no but it gets into a loop by the looks of it
[21:49] <tvansteenburgh> that last error msg might be relevant but it's truncated
[21:50] <tvansteenburgh> stormmore: i gotta step away for a bit, bbl
[21:51] <stormmore> no worries, yeah I thought that... might try and remove that deployment but not sure how I can without the cluster being healthier
[21:51] <stormmore> I am about to step away and take this home with me
[22:40] <Budgie^Smore> ok I am home
[22:44] <stokachu> kklimonda: conjure-up is way more than a poc tool
[22:44] <stokachu> kklimonda: you can deploy to localhost if you want to
[22:44] <stokachu> kklimonda: but you aren't limited to just localhost
[22:47] <stokachu> you dont have to alter lxd profiles etc when deploying kubernetes to aws
[22:47] <stokachu> so not knowing about lxd profiles won't stop you in learning how to deploy kubernetes
[22:47] <stokachu> and if you have all day to read through 5 pages of documentation to set up a cluster then feel free