=== thumper is now known as thumper-dogwalk
=== babbageclunk is now known as babbageclunk-run
=== thumper-dogwalk is now known as thumper
[03:31] just as a wild-eyed thought
[03:33] i wish juju was structured a little bit more like Kubernetes, where when I created a charm I was just submitting a manifest rather than a charm with code, and my code was something more like third-party resources.
[03:33] how this would work in practice, I couldn't tell you.
[06:24] justicefries: well, we can skinny up that charm with layers
[06:25] but that only goes so far. I guess in reality, if you wanted to get *really* persnickety, you could use just a metadata file that declared resources, and a single hook, but that's really not what we're here to do. Operational code is often large to handle the complexities of interactions and what that effected change means in the context of your workload.
[06:26] also note: it's midnight here, so i hope i'm not babbling gibberish
[06:28] tvansteenburgh: Any idea when the network spaces feature will be added to juju-deployer? https://bugs.launchpad.net/juju-deployer/+bug/1642157
[06:28] Bug #1642157: deployer doesn't support juju 2.0 network-spaces feature
[07:36] Good morning Juju world!
[08:17] Good morning kjackal
=== deanman_ is now known as deanman
[09:48] just deployed the openstack bundle with conjure-up; instead of deploying them in lxd containers inside the machines, it deployed all the charms on different machines
[09:48] what could've happened to cause this?
[09:49] this is on maas btw
[09:51] can you share your bundle? is it the openstack base bundle?
[09:51] junaidali: yes, "OpenStack base for MAAS"
[09:53] SimonKLB: this bundle, https://jujucharms.com/openstack-base/, right?
[09:53] junaidali: afaik
[09:53] junaidali: it's the pre-defined one in conjure-up, i've not added a custom bundle
[09:54] SimonKLB: is there a machines section in that bundle?
[09:55] junaidali: yes
[09:55] junaidali: but it might be due to the fact that it says "lxc:1" instead of "lxd:1", come to think of it
[09:55] junaidali: i've seen something similar in the past
[09:56] junaidali: never had any trouble deploying the bundle manually with juju deploy, but conjure-up seems to be tricked by it
[09:56] SimonKLB, what's juju status look like?
[09:57] SimonKLB: I've not used conjure-up, but manually deploying a bundle works even if there is lxc:. The juju deploy command auto-deploys to lxd
[09:57] junaidali: yes
[09:58] stokachu: http://paste.ubuntu.com/23786000/
[09:58] 18 machines heh...
[09:59] i wouldn't be surprised if it was the lxc/lxd bug
[09:59] SimonKLB, yea gimme a few, finishing something up, and i'll check it
[09:59] stokachu: cool, thanks
[09:59] SimonKLB: mind showing the bundle.yaml?
[09:59] mskalka: https://api.jujucharms.com/charmstore/v5/openstack-base/archive/bundle.yaml
[10:00] ok so you're definitely on the base bundle.. that's quite odd
[10:01] mskalka: well, that's just my guess tbh, conjure-up has some bundles pre-defined
[10:01] mskalka: the one i deployed was named "OpenStack base for MAAS"
[10:01] mskalka: i assume that is the one i linked to
[10:01] SimonKLB, you're also on 2.1-rc1 it looks like
[10:01] stokachu: yea, that's right
[10:01] try pulling down the bundle from https://jujucharms.com/openstack-base/ and deploying it from local
[10:01] ok, just wanted to get the right juju version
[10:02] mskalka: will do
[10:03] anyone got a nice one-liner to empty a model of all its applications?
[10:04] can't guarantee anything but it might rule out conjure-up shenanigans
[10:04] mskalka: yea
[10:04] other than killing the controller? Not really
[10:04] just destroy-model
[10:04] juju destroy-model
[10:05] stokachu: ah, conjure-up created a model of its own
[10:05] SimonKLB, :)
[10:08] mskalka: stokachu: how do i use conjure-up with a local bundle.yaml?
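For reference, the lxc:/lxd: suspicion raised above (09:55) is easy to check and fix in a local copy of a bundle before deploying it. A minimal sketch, assuming GNU sed on Ubuntu; the bundle contents below are a made-up example, not the real openstack-base bundle:

```shell
# Sketch: rewrite old-style lxc: placement directives to lxd: in a local
# bundle.yaml (Juju 2.x uses the lxd container type). This bundle is an
# illustrative stand-in, not the actual openstack-base bundle.
cat > bundle.yaml <<'EOF'
applications:
  mysql:
    charm: cs:mysql
    num_units: 1
    to:
    - lxc:1
machines:
  "1":
    series: xenial
EOF

# Replace every lxc: placement with lxd: (GNU sed, in-place)
sed -i 's/lxc:/lxd:/g' bundle.yaml

# Confirm the placements now target lxd containers
grep -n 'lxd:' bundle.yaml
```

The corrected local bundle can then be deployed with `juju deploy ./bundle.yaml`.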
[10:15] SimonKLB, cp -a ~/.cache/conjure-up-spells/openstack-base ~/openstack-base
[10:16] then put your bundle in openstack-base
[10:16] and you can run conjure-up -d ~/openstack-base
[10:21] stokachu: ah, just deployed it through juju deploy now, works like it should - conjure-up will have to wait for another day
[10:21] there is an issue ticket describing an experience similar to mine; i commented my problem as well: https://github.com/conjure-up/conjure-up/issues/553
[10:50] SimonKLB, yea i've got someone on it
[10:50] hopefully will have it fixed today
[10:50] stokachu: great to hera!
[10:50] hear*
=== zz_CyberJacob is now known as CyberJacob
=== jamespag` is now known as jamespage
=== lukasa_ is now known as lukasa
[13:28] junaidali: probably whenever someone submits a patch. i may get to it eventually but i doubt it'll be soon
[13:40] hi, using ppa:juju/stable i install juju and juju-deployer but i get a juju 1.x type error. i notice that juju-deployer actually gets installed from universe. i think there is a packaging snafu. see http://paste.ubuntu.com/23786563/
[14:03] pmatulis: ftr, the most up-to-date juju-deployer is in ppa:tvansteenburgh/ppa
[14:04] tvansteenburgh, oh
[14:57] rick_h: https://github.com/juju/python-libjuju/blob/e13c7c82396da82780c0b18c32f88c79647d09a6/examples/deploy.py (once https://github.com/juju/python-libjuju/pull/46 is merged)
[15:09] cory_fu: shouldn't it be loop.run(main())
[15:09] or loop.run_until_complete(step())
[15:10] i guess the former so that args can be passed
[16:46] Good afternoon, is manage.jujucharms.com down? (Reason: my OpenStack single-node installation is failing)
[16:47] yup
[17:10] how can I force destroy a model? I've got a model that's stuck on "Waiting on model to be removed, 8 machine(s)..."
[17:10] can you remove-machine?
[17:10] no
[17:10] unlucky
[17:10] dunno then
[17:12] smgoller: juju remove-machine # --force is your saving grace here, as magicaltrout pointed out.
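A sketch of the force-removal sequence just suggested; the machine number and model name are illustrative, and these commands only make sense against a live controller:

```shell
# Sketch: force-remove a machine that is blocking model teardown
# (machine number 8 is illustrative).
juju remove-machine 8 --force

# Re-check; any remaining stuck machines can be force-removed the same way
juju status

# If the whole model is being torn down anyway ("mymodel" is a made-up name):
juju destroy-model mymodel
```

Force removal skips the normal departed/broken/stop hook sequence, so it is a last resort when the controller cannot reach the cloud API for a clean release.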
[17:13] * magicaltrout gets lucky \o/
[17:13] sometimes, i've seen slow cloud cause issues where models can take up to 20 minutes to be reaped, and force removing the machines seemed to have done the trick
[17:13] this is a maas cluster, and i've run that command and the machine is still not gone.
[17:14] it's the controller doing the right thing, trying to execute all the extant removal hooks (*-departed, *-broken, stop), and then issue a removal request via the cloud api.
[17:14] i released the machines on the maas side now, so I'm just trying to figure out how to clean up without killing the controller
[17:14] smgoller - is this 2.0.2?
[17:14] yes
=== med_ is now known as Guest87783
[17:14] smgoller - can you tail your controller's log file and see if there's anything pertinent spamming in the controller logs, so we can file a bug?
[17:15] this smells of a regression, but we're going to need a root cause to be helpful
[17:15] where is that located?
[17:16] you can juju switch controller && juju ssh 0 && tail -f /var/log/juju/*.log
[17:16] er, that last && won't work as a chain; you'll probably need to run that in the ssh session
[17:16] i know what's wrong
[17:16] it's not juju
[17:16] you coming to gent this year lazyPower ?
[17:16] magicaltrout: yessir
[17:16] excellent
[17:16] smgoller: oh ok! do tell if you get it resolved
[17:17] the controller can't resolve the hostname of the maas server
[17:17] is the actual problem
[17:17] magicaltrout: i took a look at the mesos/k8s integration bits over the holiday. we might actually get to that this year
[17:17] so i'm good. thanks!
[17:17] smgoller: oh awesome!
that's good to hear (not that it's a quibble, but that it's not a regression)
[17:17] yes :)
[17:17] nice one lazyPower. I've got one of my guys working very slowly on LXC support for Mesos
[17:17] oh snap
[17:18] i prefer problems to be mine instead of the software's i'm using
[17:18] lex dee in mesos :D
[17:18] I need to sort out my Mesos charms and get them up to date as well
[17:18] then bring in flannel
[17:18] they have support in there somewhere
[17:18] magicaltrout: yep, lmk when you've got that in the pipeline and i can try to schedule out some time to collab so we can do an external scheduler relation
[17:20] yeah, that would be good. I moved all my existing stuff to DC/OS as a test platform, but DC/OS is just Marathon with a fancy UI and some authentication, so I reckon we can do that nicely in Juju. It doesn't have the universe packager and stuff, but i reckon if LXC works as a cloud provider in Mesos, who cares! ;)
[17:21] ^
[17:21] that
[17:22] we can do the universe packager as a stretch goal whenever someone wants it, eh?
[17:22] indeed, it's only marathon apps wrapped up in a git repo
[17:22] still need to write my cfgmgmtcamp talk =/
[17:22] when don't I need to write a talk?
[17:23] and submit some to apachecon
[17:23] only during the 1hr you are actually giving a talk do you need to not write a talk. :)
[17:23] aye, even then though....
[17:25] lazyPower: got any cool K8S demo stuff? I have a conference I'm sponsoring full of developers in a few weeks, gonna take down a screen and stuff
[17:26] yeah, i wrote up an ARK server workload
[17:26] i hear games steal the show when you demo neat stuff with moving game servers
[17:26] there will certainly be game and app developers there
[17:26] k, let me get you my resource files
[17:26] banging
[17:26] you'll need to drop the $20 on steam for the game tho
[17:26] prepare your wallet
[17:26] i'm over it
[17:27] that soon huh?
[17:27] lol, if it gets me work :P
[17:27] magicaltrout: https://gist.github.com/99a68eb6602b2a9502f79315517969b2
[17:28] you'll want to scrub that private registry image link; incoming second gist with the docker bits to build the server image
[17:28] https://gist.github.com/c3c3c5be88ec31b288a18b6577a8c832
[17:31] excellent
[17:31] thanks lazyPower
[17:31] np, lmk if you have any issues with that
[17:31] it's oddly specific to my homelab
[17:31] so there's maybe dragons in there i'm not aware of
[17:33] oops
[17:33] don't kill tmux
=== bugg is now known as magicaltrout
=== magicaltrout is now known as magicaltrout1
=== magicaltrout1 is now known as magicaltrout
[17:35] I'll spin it up in AWS next week lazyPower and let you know if anything starts crying
[17:36] looking forward to it. not sponsored an event before - gonna take along a bunch of Juju stuff, K8S, build pipelines for auto build and deployment, data processing etc
[17:36] see if it gets any love
[17:37] sam is going to sort me out with some marketing stuff as well, now I have the master services agreement signed
[17:39] nice
[17:40] you should get some love with that stack; being able to deploy it and demonstrate it at the drop of a hat is a nice feather to have on your tail
[19:21] aluria, can we get the review queue in the topic changed to review.jujucharms.com?
[19:22] jam, jcastro: ^
[19:23] stokachu: I'm not a jujudev -- but cheers :)
[19:24] ah, just showed you as the last person to update the topic
=== jcastro changed the topic of #juju to: Welcome to #juju || Review Queue: http://review.jujucharms.com || Summit: http://summit.jujucharms.com
=== jcastro changed the topic of #juju to: Welcome to #juju || Review Queue: http://review.jujucharms.com || Summit: http://summit.jujucharms.com || Docs: http://jujucharms.com/docs || FAQ: http://goo.gl/MsNu4I || Youtube: https://www.youtube.com/c/jujucharms
[19:43] ok, who has OpenStack on Juju experience?
I have a container issue and don't know how to tackle it
[20:05] jcastro, thanks
[20:06] Teranet, just ask a well-formulated question and see if someone can answer
[20:19] Teranet: what's the issue?
=== admcleod_ is now known as admcleod
[21:05] Hi, I have a hook error which I don't know how to fix: hook failed: "cloud-compute-relation-changed" for nova-compute:cloud-compute
[21:06] This is on Nova-Cloud-Controller
[21:06] and also neutron-gateway is blocked
[21:06] all other containers look green and ok
[21:15] Teranet: does debug-log give you any more details? "juju debug-log --replay" will give you the whole log; if that's too much info, you can restrict it to a unit with "juju debug-log -i unit-nova-compute -XX --replay", where 'XX' is the unit number of your failing unit.
[21:16] er, that should be "-i unit-nova-compute-XX" <-- no space between the unit and unit number
[21:36] Teranet: if you find anything with kwmonroe's suggestion, could you pastebin it and give us the link?
[21:41] yes, of course. sorry, totally asleep today, up since 4AM :-(
[21:42] Here is the pastebin result: http://paste.ubuntu.com/23788886/
[21:42] and it's not clear to me what the real issue is
[21:43] it's like persistent but not willing to tell me what's wrong with it.
[21:44] new link: http://paste.ubuntu.com/23788899/
[21:44] I added a full status overview of the juju containers as well
[21:45] it's just, I am pulling my hair out because I don't really see how to find the real error or how to fix it. :-(
[21:45] Which drives me crazy
[21:46] Teranet: if you "juju ssh" to one rabbit node, can you ping the hostname of the other rabbit node?
[21:46] Yes, don't worry about rabbitMQ for now
[21:47] that's a man-made issue :-)
[21:50] Teranet: well, it looks like a problem with reverse dns on the nova-cloud-controller node; you might also want to ask the same question / paste the logs in #openstack-charms
[21:52] yeah Teranet, #openstack-charms might be a better place..
admcleod, just out of morbid curiosity, does nova-cloud-controller :: nova-compute have the same nonsensical rev dns requirement that namenode :: datanode has? any way to be "ip-only" in an openstack env?
[21:53] kwmonroe: i think there is a requirement for forward and reverse to work and match up
[21:54] kwmonroe: so.. i assume, don't know for sure, even if you used IPs it would do both checks
[21:55] hi all, is juju dead?
[21:57] what forward and reverse?? Juju doesn't control DNS, nor does MAAS or OpenStack
[21:58] elbalaa: my juju lives. strong like bull.
[21:58] Teranet: the hook is failing because of an NXDOMAIN error
[21:58] Teranet: line 220 of your first paste
[21:58] where do you see the NXDOMAIN?? ok, let me check
[21:59] Teranet: also, i think MAAS should be handling the DNS
[21:59] Teranet: but i have very little maas knowledge
[21:59] MAAS only does DHCP but won't do DNS
[22:00] our corp is too big and complex, which does not allow dynamic DNS
[22:01] Teranet: are you following any particular set of instructions for this deployment?
[22:05] Teranet: https://github.com/conjure-up/conjure-up/issues/487
[22:05] Yes, I follow the latest OpenStack/Ubuntu deployment setup, which I'm rewriting right now for cluster environments like we use at Rackspace and IBM
[22:06] Teranet: i was wondering what that setup mentions about DNS. also see the link i just posted
[22:06] elbalaa: is that a philosophical question?
[22:07] ... one sec, let me check
[22:09] I don't see where to repoint DNS to MAAS. I do see certain records in the MAAS DNS, but that's it
[22:12] admcleod: wow, juju does windows
[22:12] Teranet: just before, you said 'maas only does DHCP but won't do DNS' - has it been disabled? you may also want to ask about this in #maas.
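Putting kwmonroe's debug-log invocations (21:15, with the 21:16 correction applied) in one place; the unit number 0 is illustrative, and these need a live model to run against:

```shell
# Corrected debug-log invocations from the discussion above
# (unit number 0 is illustrative).
juju debug-log --replay                           # replay the whole log
juju debug-log -i unit-nova-compute-0 --replay    # restrict to one unit
```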
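The forward/reverse DNS agreement admcleod describes (21:53) can be spot-checked from any node with standard resolver tools. A sketch using getent, where "localhost" stands in for a real node hostname:

```shell
# Spot-check that forward and reverse DNS agree for a host, the kind of
# check whose failure produces the NXDOMAIN seen in the paste above.
# "localhost" stands in for a real node hostname.
NODE=localhost

# Forward lookup: name -> address
ADDR=$(getent hosts "$NODE" | awk '{print $1; exit}')
echo "forward: $NODE -> $ADDR"

# Reverse lookup: the address should map back to the same name
getent hosts "$ADDR"
```

On a MAAS deployment the nodes would normally use the MAAS region controller as their resolver, so an NXDOMAIN in either direction points at the resolver configuration rather than at Juju itself.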