[03:31] <justicefries> just as a wild-eyed thought
[03:33] <justicefries> i wish juju was structured a little bit more like Kubernetes where when I created a charm, I was just submitting a manifest rather than a charm with code, and my code was something more like third party resources.
[03:33] <justicefries> how this would work in practice, I couldn't tell you.
[06:24] <lazyPower> justicefries well we can skinny up that charm with layers
[06:25] <lazyPower> but that only goes so far. I guess in reality if you wanted to get *really* persnickety, you could use just metadata that declares resources, and a single hook, but that's really not what we're here to do. Operational code is often large because it has to handle the complexities of interactions, and what that effected change means in the context of your workload.
[06:26] <lazyPower> also note: its midnight here so i hope i'm not babbling gibberish
[06:28] <junaidal1> tvansteenburgh: Any idea, when networks spaces feature is added to juju-deployer? https://bugs.launchpad.net/juju-deployer/+bug/1642157
[06:28] <mup> Bug #1642157: deployer doesn't support juju 2.0 network-spaces feature <juju-deployer:New> <https://launchpad.net/bugs/1642157>
[07:36] <kjackal> Good morning Juju world!
[08:17] <junaidali> Good morning kjackal
[09:48] <SimonKLB> just deployed openstack bundle with conjureup, instead of deploying them on lxd containers inside the machines it deployed all the charms on different machines
[09:48] <SimonKLB> what couldve happened to cause this?
[09:49] <SimonKLB> this is on maas btw
[09:51] <junaidali> can you share your bundle? is it openstack base bundle?
[09:51] <SimonKLB> junaidali: yes, "OpenStack base for MAAS"
[09:53] <junaidali> SimonKLB: this bundle https://jujucharms.com/openstack-base/ right?
[09:53] <SimonKLB> junaidali: afaik
[09:53] <SimonKLB> junaidali: it's the pre-defined one in conjure-up, i've not added a custom bundle
[09:54] <junaidali> SimonKLB: is there a machine section in that bundle?
[09:55] <SimonKLB> junaidali: yes
[09:55] <SimonKLB> junaidali: but it might be due to the fact that it says "lxc:1" instead of "lxd:1" come to think of it
[09:55] <SimonKLB> junaidali: ive seen something similar in the past
[09:56] <SimonKLB> junaidali: never had any trouble deploying the bundle manually with juju deploy, but conjure-up seems to be tricked by it
[09:56] <stokachu> SimonKLB, whats juju status look like?
[09:57] <junaidali> SimonKLB: I've not used conjure up but manually deploying a bundle works even if there is lxc:<machine number>. Juju deploy command auto deploys to lxd
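[Editor's note: the lxc:/lxd: placement being discussed looks like this in a bundle's machines/placement sections. A minimal sketch, not the actual openstack-base bundle; charm and machine numbers are illustrative. "lxc:1" is the legacy Juju 1.x spelling, which juju deploy on 2.x treats as an LXD container placement.]

```yaml
# Minimal bundle sketch showing container placement.
machines:
  "1":
    series: xenial
applications:
  mysql:
    charm: cs:mysql
    num_units: 1
    to:
    - lxd:1    # deploy into an LXD container on machine 1 ("lxc:1" in old bundles)
```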
[09:57] <SimonKLB> junaidali: yes
[09:58] <SimonKLB> stokachu: http://paste.ubuntu.com/23786000/
[09:58] <SimonKLB> 18 machines heh...
[09:59] <SimonKLB> i wouldnt be surprised if it was the lxc/lxd bug
[09:59] <stokachu> SimonKLB, yea gimme a few finishing something up and ill check it
[09:59] <SimonKLB> stokachu: cool, thanks
[09:59] <mskalka> SimonKLB mind showing the bundle.yaml?
[09:59] <SimonKLB> mskalka: https://api.jujucharms.com/charmstore/v5/openstack-base/archive/bundle.yaml
[10:00] <mskalka> ok so you're definitely on the base bundle.. that's quite odd
[10:01] <SimonKLB> mskalka: well, that's just my guess tbh, conjure-up have some bundles pre-defined
[10:01] <SimonKLB> mskalka: the one i deployed was named "OpenStack base for MAAS"
[10:01] <SimonKLB> mskalka: i assume that is the one i linked to
[10:01] <stokachu> SimonKLB, you're also on 2.1-rc1 it looks like
[10:01] <SimonKLB> stokachu: yea, thats right
[10:01] <mskalka> try pulling down the bundle from https://jujucharms.com/openstack-base/ and deploying it from local
[10:01] <stokachu> ok just wanted to get the right juju version
[10:02] <SimonKLB> mskalka: will do
[10:03] <SimonKLB> anyone got a nice one-liner to empty a model of all its applications?
[10:04] <mskalka> can't guarantee anything but it might rule out conjure-up shenanigans
[10:04] <SimonKLB> mskalka: yea
[10:04] <mskalka> other than killing the controller? Not really
[10:04] <stokachu> just destroy-model
[10:04] <stokachu> juju destroy-model
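[Editor's note: stokachu's suggestion in full. The controller and model names below are placeholders; this destroys the machines, applications, and storage in that model but leaves the controller running.]

```shell
# Tear down everything in one model (controller:model are placeholders):
juju destroy-model mycontroller:mymodel -y
juju models    # confirm the model is gone
```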
[10:05] <SimonKLB> stokachu: ah, conjure-up created a model of its own
[10:05] <stokachu> SimonKLB, :)
[10:08] <SimonKLB> mskalka: stokachu how do i use conjure-up with a local bundle.yaml?
[10:15] <stokachu> SimonKLB, cp -a ~/.cache/conjure-up-spells/openstack-base ~/openstack-base
[10:16] <stokachu> then put your bundle in openstack-base
[10:16] <stokachu> and you can run conjure-up -d <path_to_spell>/openstack-base
[10:21] <SimonKLB> stokachu: ah, just deployed it through juju deploy now, works like it should - conjure-up will have to wait for another day
[10:21] <SimonKLB> there is an issue ticket with a similar experience that i had, i commented my problem as well https://github.com/conjure-up/conjure-up/issues/553
[10:50] <stokachu> SimonKLB, yea i've got someone on it
[10:50] <stokachu> hopefully will have it fixed today
[10:50] <SimonKLB> stokachu: great to hear!
[13:28] <tvansteenburgh> junaidali: probably whenever someone submits a patch. i may get to it eventually but i doubt it'll be soon
[13:40] <pmatulis> hi, using ppa:juju/stable i install juju and juju-deployer but i get a juju 1.x type error. i notice that juju-deployer actually gets installed from universe. i think there is a packaging snafu. see http://paste.ubuntu.com/23786563/
[14:03] <tvansteenburgh> pmatulis: ftr, the most up-to-date juju-deployer is in ppa:tvansteenburgh/ppa
[14:04] <pmatulis> tvansteenburgh, oh
[14:57] <cory_fu> rick_h: https://github.com/juju/python-libjuju/blob/e13c7c82396da82780c0b18c32f88c79647d09a6/examples/deploy.py (once https://github.com/juju/python-libjuju/pull/46 is merged)
[15:09] <tvansteenburgh> cory_fu: shouldn't it be loop.run(main())
[15:09] <tvansteenburgh> or loop.run_until_complete(step())
[15:10] <tvansteenburgh> i guess the former so that args can be passed
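[Editor's note: the entrypoint pattern being discussed, reduced to a runnable stdlib sketch. The actual python-libjuju calls live in the linked deploy.py and are shown here only as comments; the point is tvansteenburgh's observation that the coroutine must be driven by the event loop rather than called bare.]

```python
import asyncio

async def main():
    # In the real deploy.py this body would be python-libjuju calls, e.g.:
    #   model = Model()
    #   await model.connect_current()
    #   await model.deploy('ubuntu')
    #   await model.disconnect()
    return "deployed"

# Drive the coroutine to completion with run_until_complete:
loop = asyncio.new_event_loop()
try:
    result = loop.run_until_complete(main())
finally:
    loop.close()
print(result)  # -> deployed
```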
[16:46] <Catalys> Good afternoon, is manage.jujucharms.com down? (Reason my Openstack single node installation is failing)
[16:47] <magicaltrout> yup
[17:10] <smgoller> how can I force destroy a model? I've got a model that's stuck on "Waiting on model to  be removed, 8 machine(s)...
[17:10] <magicaltrout> can you remove-machine?
[17:10] <smgoller> no
[17:10] <magicaltrout> unlucky
[17:10] <magicaltrout> dunno then
[17:12] <lazyPower> smgoller juju remove-machine # --force is your saving grace here as magicaltrout pointed out.
[17:13]  * magicaltrout gets lucky \o/
[17:13] <lazyPower> sometimes, i've seen slow clouds cause issues where models can take up to 20 minutes to be reaped, and force-removing the machines seemed to do the trick
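[Editor's note: the force-removal being suggested, as a sketch. Machine numbers are placeholders; repeat per stuck machine.]

```shell
# Force-remove a stuck machine so model teardown can finish:
juju remove-machine 8 --force
juju status    # watch for the machine list to drain
```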
[17:13] <smgoller> this is a maas cluster, and i've run that command and the machine is still not gone.
[17:14] <lazyPower> its the controller doing the right thing trying to execute all the extant removal hooks, (*-departed, *-broken, stop), and then issue a removal request via the cloud api.
[17:14] <smgoller> i released the machines on the maas side now so I'm just trying to figure out how to clean up without killing the controller
[17:14] <lazyPower> smgoller - is this 2.0.2?
[17:14] <smgoller> yes
[17:14] <lazyPower> smgoller - can you tail your controller logsink file and see if there's anything pertinent spamming in the controller logs so we can file a bug?
[17:15] <lazyPower> this smells of a regression but we're going to need a root cause to be helpful
[17:15] <smgoller> where is that located?
[17:16] <lazyPower> you can juju switch controller && juju ssh 0 && tail -f /var/log/juju/*.log
[17:16] <lazyPower> er, that last && won't work as a chain, you'll probably need to put that in the ssh session
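[Editor's note: the corrected form of the command above. The && chain hands control to the interactive ssh session, so pass the tail as the ssh command instead.]

```shell
# Tail the controller logs from your workstation:
juju switch controller
juju ssh 0 'tail -f /var/log/juju/*.log'   # quoted so the glob expands remotely
```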
[17:16] <smgoller> i know what's wrong
[17:16] <smgoller> it's not juju
[17:16] <magicaltrout> you coming to gent this year lazyPower ?
[17:16] <lazyPower> magicaltrout yessir
[17:16] <magicaltrout> excellent
[17:16] <lazyPower> smgoller oh ok! do tell if you get it resolved
[17:17] <smgoller> controller can't resolve the hostname of the maas server
[17:17] <smgoller> is the actual problem
[17:17] <lazyPower> magicaltrout i took a look at the mesos/k8s integration bits over holiday. we might actually get to that this year
[17:17] <smgoller> so i'm good. thanks!
[17:17] <lazyPower> smgoller oh awesome! thats good to hear (not that its a quibble, but that its  not a regression)
[17:17] <smgoller> yes :)
[17:17] <magicaltrout> nice one lazyPower I've got one of my guys working very slowly on LXC support for Mesos
[17:17] <lazyPower> oh snap
[17:18] <smgoller> i prefer problems to be mine instead of the software's i'm using
[17:18] <lazyPower> lex dee in mesos :D
[17:18] <magicaltrout> I need to sort out my Mesos charms and get them up to date as well
[17:18] <magicaltrout> then bring in flannel
[17:18] <magicaltrout> they have support in there somewhere
[17:18] <lazyPower> magicaltrout yep, lmk when you've got that in the pipeline and i can try to schedule out some time to collab so we can do an external scheduler relation
[17:20] <magicaltrout> yeah that would be good. I moved all my existing stuff to DC/OS as a test platform, but DC/OS is just Marathon with a fancy UI and some authentication, so I reckon we can do that nicely in Juju, but then it doesn't have the universe packager and stuff, but i reckon if LXC works as a cloud provider in Mesos, who cares! ;)
[17:21] <lazyPower> ^
[17:21] <lazyPower> that
[17:22] <lazyPower> we can do universe packager as a stretch goal whenever someone wants it eh?
[17:22] <magicaltrout> indeed its only marathon apps wrapped up in a git repo
[17:22] <magicaltrout> still need to write my cfgmgmtcamp talk =/
[17:22] <magicaltrout> when don't I need to write a talk?
[17:23] <magicaltrout> and submit some to apachecon
[17:23] <jrwren> only during the 1hr you are actually giving a talk do you need to not write a talk. :)
[17:23] <magicaltrout> aye, even then though....
[17:25] <magicaltrout> lazyPower: got any cool K8S demo stuff? I have a conference I'm sponsoring full of developers in a few weeks, gonna take down a screen and stuff
[17:26] <lazyPower> yeah, i wrote up an ARK server workload
[17:26] <lazyPower> i hear games steal the show when you demo neat stuff with moving game servers
[17:26] <magicaltrout> there will certainly be game and app developers there
[17:26] <lazyPower> k let me get you my resource files
[17:26] <magicaltrout> banging
[17:26] <lazyPower> you'll need to drop the $20 on steam for the game tho
[17:26] <lazyPower> prepare your wallet
[17:26] <magicaltrout> i'm over it
[17:27] <lazyPower> that soon huh?
[17:27] <magicaltrout> lol, if it gets me work :P
[17:27] <lazyPower> magicaltrout https://gist.github.com/99a68eb6602b2a9502f79315517969b2
[17:28] <lazyPower> you'll want to scrub that private registry image link, incoming second gist with the Docker bits to build the server image
[17:28] <lazyPower> https://gist.github.com/c3c3c5be88ec31b288a18b6577a8c832
[17:31] <magicaltrout> excellent
[17:31] <magicaltrout> thanks lazyPower
[17:31] <lazyPower> np, lmk if you have any issues with that
[17:31] <lazyPower> it's oddly specific to my homelab
[17:31] <lazyPower> so there may be dragons in there i'm not aware of
[17:33] <bugg> oops
[17:33] <bugg> don't kill tmux
[17:35] <magicaltrout> I'll spin it up in AWS next week lazyPower and let you know if anything starts crying
[17:36] <magicaltrout> looking forward to it, not sponsored an event before. Gonna take along a bunch of Juju stuff, K8S, build pipelines for auto build and deployment, data processing etc
[17:36] <magicaltrout> see if it gets any love
[17:37] <magicaltrout> sam is going to sort me out with some marketing stuff as well now I have the master services agreement signed
[17:39] <lazyPower> nice
[17:40] <lazyPower> you should get some love with that stack, being able to deploy it and demonstrate it at the drop of a hat is a nice feather to have on your tail
[19:21] <stokachu> aluria, can we get the review queue in the topic changed to review.jujucharms.com?
[19:22] <aluria> jam, jcastro: ^
[19:23] <aluria> stokachu: I'm not a jujudev -- but cheers :)
[19:24] <stokachu> ah just showed you the last person to update the topic
[19:43] <Teranet> ok who has OpenStack on Juju experience? I have a container issue and don't know how to tackle it
[20:05] <stokachu> jcastro, thanks
[20:06] <pmatulis> Teranet, just ask a well-formulated question and see if someone can answer
[20:19] <admcleod_> Teranet: whats the issue?
[21:05] <Teranet> Hi, I have a hook error which I don't know how to fix: hook failed: "cloud-compute-relation-changed" for nova-compute:cloud-compute
[21:06] <Teranet> This is on nova-cloud-controller
[21:06] <Teranet> and also neutron-gateway is blocked
[21:06] <Teranet> all other containers look green and ok
[21:15] <kwmonroe> Teranet: does debug-log give you any more details? "juju debug-log --replay" will give you the whole log; if that's too much info, you can restrict it to unit with "juju debug-log -i unit-nova-compute -XX --replay", where 'XX' is the unit number of your failing unit.
[21:16] <kwmonroe> er, that should be "-i unit-nova-compute-XX" <-- no space between the unit and unit number
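[Editor's note: kwmonroe's suggestion with the corrected unit-name form. The unit number 0 is a placeholder for the failing unit.]

```shell
# Replay the full debug log:
juju debug-log --replay
# Or restrict it to the failing unit (note: no space before the unit number):
juju debug-log -i unit-nova-compute-0 --replay
```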
[21:36] <admcleod> Teranet: if you find anything with kwmonroe 's suggestion could you pastebin and give us the link?
[21:41] <Teranet> yes of course, sorry, totally asleep today, up since 4AM :-(
[21:42] <Teranet> Here is the pastebin result: http://paste.ubuntu.com/23788886/
[21:42] <Teranet> and it's not clear to me what the real issue is
[21:43] <Teranet> it's persistent but not willing to tell me what's wrong with it.
[21:44] <Teranet> new link : http://paste.ubuntu.com/23788899/
[21:44] <Teranet> I added a full status overview of the juju containers as well
[21:45] <Teranet> it's just I am pulling my hair out because I don't really see how to see the real error or how to fix it. :-(
[21:45] <Teranet> Which drives me crazy
[21:46] <admcleod> teranet: if you "juju ssh" to one rabbit node, can you ping the hostname of the other rabbit node?
[21:46] <Teranet> Yes don't worry about rabbitMQ for now
[21:47] <Teranet> that's a man-made issue :-)
[21:50] <admcleod> Teranet: well it looks like a problem with reverse dns on the nova cloud controller node, you might also want to ask the same question / paste the logs in openstack-charms
[21:52] <kwmonroe> yeah Teranet, #openstack-charms might be a better place.. admcleod, just out of morbid curiosity, does nova-cloud-controller :: nova-compute have the same nonsensical rev dns requirement that namenode :: datanode has?  any way to be "ip-only" in an openstack env?
[21:53] <admcleod> kwmonroe: i think there is a requirement for forward and reverse to work and match up
[21:54] <admcleod> kwmonroe: so.. i assume, don't know for sure, even if you used IPs it would do both checks
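[Editor's note: a quick way to verify the forward/reverse DNS requirement being discussed. The hostname and IP below are placeholders; run this from the failing node.]

```shell
# Forward lookup: name -> address
host nova-cloud-controller.example.com
# Reverse lookup: address -> name (PTR record)
host 10.20.0.11
# Both must resolve, and the names should match; otherwise hooks that
# do hostname lookups can fail with NXDOMAIN, as seen in the paste.
```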
[21:55] <elbalaa> hi all, is juju dead?
[21:57] <Teranet> what forward and reverse?? Juju doesn't control DNS, nor does MAAS or OpenStack
[21:58] <kwmonroe> elbalaa: my juju lives.  strong like bull.
[21:58] <admcleod> Teranet: the hook is failing because of an NXDOMAIN error
[21:58] <kwmonroe> Teranet: line 220 of your first paste
[21:58] <Teranet> where do you see the NXDOMAIN ?? ok let me check
[21:59] <admcleod> Teranet: also i think MAAS should be handling the DNS
[21:59] <admcleod> Teranet: but i have very little maas knowledge
[21:59] <Teranet> MAAS only does DHCP but won't do DNS
[22:00] <Teranet> our Corp is too big and complex, which does not allow dynamic DNS
[22:01] <admcleod> Teranet: are you following any particular set of instructions for this deployment?
[22:05] <admcleod> Teranet: https://github.com/conjure-up/conjure-up/issues/487
[22:05] <Teranet> Yes, I'm following the latest OpenStack/Ubuntu deployment setup, which I'm rewriting right now for cluster environments like we use at Rackspace and IBM
[22:06] <admcleod> Teranet: i was wondering what that setup mentions about DNS. also see the link i just posted
[22:06] <admcleod> elbalaa: is that a philosophical question?
[22:07] <Teranet> ... one sec let me check
[22:09] <Teranet> I don't see where to repoint DNS to MAAS I do see certain records in MAAS DNS but that's it
[22:12] <elbalaa> admcleod: wow juju does windows
[22:12] <admcleod> Teranet: just before, you said 'MAAS only does DHCP but won't do DNS' - has it been disabled? you may also want to ask about this in #maas.