[00:40] hi juju team, are we working on a kubernetes provider by any chance? Just wondering what's our take on instantiating a workload on top of kubernetes.
[00:42] narindergupta: juju deploy canonical-kubernetes
[00:43] tvansteenburgh: i understand, but anything on deploying on top of kubernetes like we do it on lxd or openstack etc....
[00:49] tvansteenburgh: thanks
[06:57] Good morning Juju world!
=== frankban|afk is now known as frankban
[11:23] jamespage: Hello, I am trying to find the official documentation for the Mitaka to Newton upgrade. I am not sure if there is one such document which is publicly available. Could you please let me know where I can find the docs for the M->N upgrade?
[11:25] jamespage: If I just change the source to openstack-origin=cloud:xenial-newton, will that be enough?
[11:25] armaan: yup
[11:30] jamespage: great, thanks :)
[11:32] Hello again. I'm about to demo an installation of a hadoop/spark bundle with juju and wonder if someone has a working example I can use? Preferably some failsafe bundle....
[11:33] https://jujucharms.com/hadoop-spark/ canonical are currently working on a jaas tutorial for that, erik_lonroth_
[11:33] it works pretty well
[11:38] Great, I'll give it a go and see
[11:38] thanx!
[11:56] I tried deploying it but it fails with errors on the namenode and spark...
[11:59] weird
[11:59] I think it might be some apt things...
[11:59] we spun it up 30 minutes ago without any issues
[12:01] I'll push up a debug-log soon
[12:03] https://pastebin.com/WpxCbx7K
[12:06] I will mention that we have a proxy..... It's been the source of a great many problems so far.
[12:07] does look nasty
[12:07] you should have a chat with the lovely kwmonroe
[12:09] erik_lonroth_: magicaltrout I am giving it a try now
[12:09] goooooooo kjackal
[12:09] he is also lovely
[12:09] but a little bit crazy
[12:10] did you get a car magicaltrout?
[12:10] i have a car
[12:10] its a 2nd hand vw golf
[12:10] the same one i had in pasadena
[12:11] disaster....
[12:11] i'm thinking about a 2nd hand ford mustang import
[12:13] my neighbor just got a new mustang, pretty nice
[12:13] long live the stick!
[12:14] maybe i'll get a huge rick_h style RV instead....
[12:15] Fun at the campground https://goo.gl/photos/C9v1yhoAg2PhnF5u7
[12:16] Actually https://goo.gl/photos/978vPSWqGgP3u6Ai6
[12:16] do you have a stupendous pickup to go with it rick_h?
[12:16] I believe that's the law in the us
[12:16] ah
[12:16] only moderate
[12:16] magicaltrout: but of course
[12:17] Tough to fuel up sometimes https://goo.gl/photos/VD1cZZ6uXk8dSdmr9 lol
[12:18] not like the pickups we saw when cruising around florida though
[12:18] some of those were bonkers
[12:18] erik_lonroth_: spark bundle seems to deploy fine over here
[12:19] erik_lonroth_: let me see the repo it is trying to get bigtop packages from
[12:24] magicaltrout: i assure you, there is nothing moderate about that pickup and that it is stupendous.
[12:24] erik_lonroth_: the repo for bigtop is this one: http://bigtop-repos.s3.amazonaws.com/releases/{version}/{dist}/{release}/{arch}
[12:25] ha!
[12:25] erik_lonroth_: is it possible your firewall/proxy is blocking this repo?
[13:06] It is the prime suspect.
[13:06] I'll investigate that first as that is normally our problem.
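If the proxy is the suspect, a quick probe from one of the units can confirm it before digging further. A minimal sketch in Python, assuming the squid.internal:3128 address kwmonroe uses as an example further down this log; the {version}/{dist}/{release}/{arch} parts are placeholders from the repo template above and need real values before this returns anything meaningful:

    import urllib.request

    # Assumption: proxy address borrowed from the example later in this log;
    # substitute your own.
    proxy = urllib.request.ProxyHandler({
        'http': 'http://squid.internal:3128',
        'https': 'http://squid.internal:3128',
    })
    opener = urllib.request.build_opener(proxy)

    # Placeholders from the Bigtop repo template; fill in before running.
    url = 'http://bigtop-repos.s3.amazonaws.com/releases/{version}/{dist}/{release}/{arch}'

    try:
        resp = opener.open(url, timeout=10)
        print('repo reachable through proxy, HTTP', resp.getcode())
    except Exception as exc:
        print('repo blocked or unreachable:', exc)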
[13:18] Could it be related to another error I found that looks like this, from something called "Bigtop"? https://pastebin.com/wzVYUtc5
[13:18] bigtop is the apache hadoop distribution fyi
[13:18] unit-namenode-0: 14:01:54 INFO unit.namenode/0.install FileNotFoundError: [Errno 2] No such file or directory: Path('/etc/default/bigtop-utils') -> Path('/etc/default/bigtop-utils.bak')
[13:19] I think it's likely to have failed prior to that
[13:19] like the apt failures
[13:20] that error is because there is nothing to copy
[13:20] because bigtop doesn't seem to have been installed
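In other words, the FileNotFoundError is a secondary symptom: the charm is backing up /etc/default/bigtop-utils, and that file only exists once the bigtop packages have actually installed. A guard of roughly this shape (a sketch, not the charm's actual code) is what turns the crash into a useful message:

    from pathlib import Path
    import shutil

    src = Path('/etc/default/bigtop-utils')
    if src.exists():
        # Keep a backup before the charm rewrites the file.
        shutil.copy2(src, src.with_suffix('.bak'))
    else:
        # Nothing to copy: the apt/bigtop install step likely failed earlier.
        print('bigtop-utils missing; check whether the bigtop packages installed')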
=== scuttle|afk is now known as scuttlemonkey
[14:24] good evening everyone, can someone help me understand how to deploy landscape-client on a virtual server where a service is already installed? its README is not very clear. thanks in advance
[14:25] I've also got a post here https://askubuntu.com/questions/918493/deploy-of-landscape-client-on-a-node
=== Guest6511 is now known as med_
[14:36] hey erik_lonroth_ - the big data charms/bundles do require external apt access. if you're behind a proxy, you can configure a model to use them: https://jujucharms.com/docs/stable/models-config
[14:37] erik_lonroth_: as an example: juju add-model mymodel --config http-proxy=http://squid.internal:3128 --config https-proxy=http://squid.internal:3128 --config no-proxy=127.0.0.1
[14:38] you may be able to set those vars on an existing model with "juju model-config -m mymodel foo=bar"
[14:40] Hey guys, what networks does the juju controller need to be on for an openstack deploy to work?
[14:40] Does it need to share all networks that the nodes for the deploy themselves are on?
[14:45] see i told you kwmonroe was very nice
[14:45] much more useful than that kjackal
[14:46] lol
[14:46] you're pretty lovely too magicaltrout
[14:47] aww, only cause rick_h files feature requests on my behalf, saves my fingers from RSI
[14:50] magicaltrout: heh, glad to be of service
[14:51] vlad_: are you deploying the openstack or on the openstack?
[14:52] vlad_: generally it needs binds to all interfaces on the machine it's deployed to, and it needs to share a common interface with the nodes
[14:52] rick_h: Deploying the openstack
[14:52] vlad_: there's an open bug about getting juju to support specifying the network to use for juju communication
[14:53] rick_h: Makes more sense now that I was able to get the system to boot up using one network that the controller shared with the rest
[14:54] rick_h: Think I could get around this issue as long as the openstack nodes and the controller share at least one subnet?
[14:58] vlad_: I think so, but the question is going to be some juju quirks as to how it'll pick which subnet to talk across, and making sure it's in the right place in the list on the nodes to be picked up as a common place to chat
[14:59] vlad_: beisner and others from the openstack teams probably have more experience with those quirks than I can think of off the top of my head
[15:00] rick_h: Good point. To be honest you guys have been the most helpful IRC to date for me anyway, which I really really appreciate by the way. I think my biggest issue is that I'm trying to set this up in a system that's not physically set up right, and that's why I keep hitting quirks
[15:00] vlad_: yea, there's a lot of moving parts to work through for sure
[15:01] vlad_: there's some charms that admcleod has that help test the network among nodes in maas (/me isn't sure what you're putting this OS on)
[15:02] * rick_h takes this opportunity to poke admcleod about the blog/demo stuff on them.
[15:03] https://jujucharms.com/u/admcleod/woodpecker and https://jujucharms.com/u/admcleod/magpie
[15:06] rick_h: Awesome, thanks. I'm deploying everything onto xenial I believe
[15:07] rick_h: Yeah this is strange, I've taken out the bindings and all config references in juju of this network and it keeps using it for some reason
[16:01] hello!
[16:01] is there a xenial docker charm laying around somewhere?
[17:11] if i call relation_list() on a reactive charm when I only have 1 unit, is it expected that it will fail?
[17:11] fail as in traceback? no.
[17:20] jrwren: i get error code 2 back saying: error: no relation id specified
[17:20] it ran relation-list --format=json
[17:20] i'm running juju 2.1.3
=== frankban is now known as frankban|afk
[17:23] here's what i'm seeing: https://gist.githubusercontent.com/cholcombe973/2ffd92cc7afc50fa01299600c05b3c09/raw/4d84efb719a9299c2a0febbd7ef0459334250f63/gistfile1.txt . I'm deploying a new gluster charm with 1 unit
[17:24] i'm calling relation_list() and it blows up
[17:24] i see the same thing in the debug-hooks for the install hook
[17:24] maybe that's the issue. i'm in the install hook and it's running this
[17:25] IMO definitely a reactive bug.
[17:25] or a charmhelpers bug.
[17:25] jrwren: yeah, one of them is broken
[17:25] part of the point of charmhelpers is to wrap those calls and make it easy for a charm writer to never have a traceback surface.
[17:33] jrwren: indeed
[17:57] bdx: got your metadata pr merged. should be g2g if you rebuild now
[17:58] bdx: anything you find in the store is going to be old at this point. I'd like to ideally set up a job to build and publish that charm to edge on repo-update. I haven't had the bandwidth to circle back to that effort though.
[18:00] cholcombe: i don't see relation_list in this: https://pythonhosted.org/charmhelpers/api/charmhelpers.core.hookenv.html
[18:01] so i suspect reactive, or you're invoking some wrapper that's calling that via subprocess?
[18:13] lazyPower: i'm calling related_units which calls relation-list on the cli
[18:13] cholcombe: did you capture that output? i would have suspected relation-list would return empty, but may be > 0 during the install hook
[18:13] lazyPower: yeah, it's in the paste above
[18:13] afaik the charm has no notion of any relations that early in the invocation, as no other events have been processed other than *possibly* storage-attached in some charms.
[18:14] yeah, i prob messed up somehow and it's stuck in the install hook
[18:14] returned non-zero exit status 2 -- ok, so have you attempted to invoke?
[18:14] sorry, incomplete sentence
[18:14] Have you tried to invoke that command in a shell in that hook context?
[18:14] i did
[18:14] same error
[18:14] it gave you a python traceback?
[18:15] yeah
[18:15] i also have the cli output but it's identical
[18:15] i meant have you invoked relation-list --format=json on the cli
[18:15] relation-list is a golang app, if you're getting python something's really wrong.
[18:15] oh.
[18:15] yeah i did, but i threw that terminal away
[18:15] ok, i want to do 2 things, which is why i'm asking
[18:15] ok
[18:16] i'll try again in a sec
[18:16] 1) make sure that traceback isn't masking a bug, 2) capture the behavior so we can patch or document charm-helpers so this doesn't bite other people
[18:16] i suspect you'll need to capture this in a try/except block and handle the CalledProcessError exception to keep going today.
[18:16] that or guard against invoking that during install
[18:17] but i don't like option 2, because it's not indicative as to why that's the case.
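The stop-gap lazyPower is describing looks roughly like this. A minimal sketch, assuming the failing call is charmhelpers' related_units() as discussed above; the log message is illustrative, not from the gluster charm:

    from subprocess import CalledProcessError
    from charmhelpers.core.hookenv import related_units, log

    try:
        # Fails with exit status 2 ("no relation id specified") when there
        # is no relation context, e.g. during the install hook.
        units = related_units()
    except CalledProcessError:
        log('related_units called outside a relation context; skipping')
        units = []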
[18:17] lazyPower: yeah. i'll get the cli output in a sec
[18:18] ty <3
[18:18] i remember it giving the same output though. it says no relation id specified
[18:18] and then echo $? says 2
[18:19] lazyPower: thx
[18:19] cholcombe: relation-list --help?
[18:19] i've gotta wait for a vm to start up again to try
[18:19] 1 min
[18:19] i think you need to run relation-ids to get the relationship id, and then plug that into relation-list to list the sessions of that relation type
[18:20] so either a param was missing, or something is really funky in ch
[18:20] the related_units code doesn't do a try/except in the charmhelpers source
[18:20] it just expects it to work
[18:20] yeah, i'm not surprised. i think the thought was better to raise an error than silently fail
[18:20] i see
[18:20] relation id shows as None in the install hook
[18:21] ok, give me 1 sec, let me trap a unit in a hook and get some more detail
[18:21] 1 sec
[18:22] cholcombe: you're forcing me to reactivate dead braincells in this domain :) I haven't had to think about this since the move to reactive
[18:22] hahaha
[18:22] lazyPower: sorry man
[18:22] you're all good bruv, we need to know this stuff too
[18:23] i've just been spoiled
[18:23] i'm building up the new gluster charm as reactive, that's why i'm asking
[18:24] So, why are you probing for relation ids instead of using the conversation objects?
[18:25] this level of tracking is handled for you when you use the newer style interfaces. you can just invoke object.conversations() and count how many occurrences you have. if you need more detail, the conversation object has it in the dict.
[18:27] lazyPower: hmm alright. yeah i'm prob going about this the wrong way
[18:27] it's because the old charm was classic and i'm trying to convert it
[18:27] cholcombe: however, let's teach you :)
[18:27] relation-ids kube-api-endpoint
[18:27] kube-api-endpoint:8
[18:28] ok
[18:28] you start with the relation-ids command to get your relationship id, you have to specify the interface
[18:28] or is it relation name?
[18:28] it's relation name
[18:28] you use that id at the end of the name, so in this example it's 8
[18:28] relation-list -r 8
[18:28] kubeapi-load-balancer/0
[18:29] when you invoke relation-list -r it tells you what units are attached in the scope of that id
[18:29] right
[18:29] i think i had that written down somewhere in a cheat sheet
[18:29] i thought there was a command that gave you -all- relations in a list plus id, but that doesn't appear to be the case.
[18:29] again, dead braincells, i might be conflating it with something else
[18:29] so, i'm going to presume that the related_units() invocation needs the relationship name
[18:30] ok
[18:30] and that should get you what you're looking for
[18:30] lemme pull up the source and verify
[18:30] ok
[18:30] lazyPower: +1 :)
[18:31] Hey guys, if one of my units fails to deploy is there an easy way to have juju just rerun it? For example my openstack-dashboard failed to deploy but nothing else on that machine had errors
[18:32] vlad_: it should auto-retry with a backoff timer by default unless you disabled that behavior in model config
[18:32] cholcombe: where did you import relation_list() from?
[18:32] from charmhelpers core hookenv
[18:32] i believe
[18:32] that method doesn't exist
[18:32] what are you actually calling again? :)
[18:32] it's lost in scrollback to me
[18:33] haha
[18:33] um
[18:33] related_units i think it's called
[18:33] def related_units(relid=None):
[18:33] yeah sure does, it wants the ID of that relationship
[18:33] http://bazaar.launchpad.net/~charm-helpers/charm-helpers/devel/view/head:/charmhelpers/core/hookenv.py#L428
[18:34] that's why it's blowing up on you. the method incorrectly defaults to None and masks the error. it should have faulted at the parameter level and forced you to provide one
[18:34] right, or it asks for relation_id which turns out to be None in the install hook
[18:34] yeah
[18:34] you're not in a relationship context, so that seems to jive with what we're seeing
[18:34] right
[18:34] i'm always breaking stuff :-/
[18:34] all of these were intended to be used in the scope of a relationship context, and overridden when you were outside of it to invoke relations during non-relationship events.
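In hookenv terms, "plugging the id in" looks like the following. A minimal sketch of the pattern lazyPower walks through above; 'peer' is a stand-in for whatever the relation is named in the charm's metadata.yaml, and the example values in the comments are hypothetical:

    from charmhelpers.core.hookenv import related_units, relation_ids

    peers = []
    for rid in relation_ids('peer'):      # e.g. ['peer:8']; empty in the install hook
        peers.extend(related_units(rid))  # e.g. ['gluster/1', 'gluster/2']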
[18:35] nah, it's just a confusing mess bruv. i feel your pain
[18:35] haha
[18:35] relationships are still one of the most advanced concepts we have
[18:35] the reactive model has made it easier, but it's not perfect
[18:36] so, i would encourage you to write a proper interface using the reactive model, and use the interface object you get back to perform the checks you're trying to do
[18:36] i presume this is for peering and minimum node count?
[18:36] yeah
[18:36] that's correct
[18:36] yeah, you're gonna need to write a peer interface, which both provides and requires
[18:36] so you get to mash that logic up into a single interface
[18:36] lemme link you to the etcd peer interface
[18:36] ok
[18:36] it may help
[18:36] if it doesn't, i take no credit/blame ;)
[18:36] wolsen wrote a peer interface and i'm trying to wrap my head around it still
[18:36] https://github.com/juju-solutions/interface-etcd/blob/master/peers.py
[18:37] yeah, this is too light to actually be helpful
[18:37] i forgot i'm using leadership to coordinate the cluster
[18:38] i'm only using peering to do detail probing and control states.
[18:38] that's ok
[18:38] I think cholcombe's code will use essentially the same level of details
[18:38] there's a whole heap of logic that's not represented in here because i used a different mechanism
[18:38] oh ok :) well, neat
[18:38] cholcombe: best of luck to you :)
[18:38] thanks :)
[18:39] vlad_: that being said, juju resolved openstack-dashboard/0 will retry the hook execution that last failed
=== scuttlemonkey is now known as scuttle|afk
[23:23] Hi All, What is the difference between Juju and DCOS?
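Circling back to the peer-interface thread above: a peer interface of the kind linked there (etcd's peers.py, counting conversations for a minimum-node check) can stay quite small. A minimal sketch only, mirroring that pattern; the GlusterPeer class name, the 'gluster' interface name, and the peer_count() helper are all hypothetical, not code from the gluster or etcd charms:

    from charms.reactive import RelationBase, hook, scopes

    class GlusterPeer(RelationBase):
        # Peer conversations are per-unit, so each peer shows up once.
        scope = scopes.UNIT

        @hook('{peers:gluster}-relation-joined')
        def joined(self):
            self.conversation().set_state('{relation_name}.joined')

        @hook('{peers:gluster}-relation-departed')
        def departed(self):
            self.conversation().remove_state('{relation_name}.joined')

        def peer_count(self):
            # One conversation per peer unit: the minimum-node-count
            # check becomes a simple length test in the charm layer.
            return len(self.conversations())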