=== rodlogic is now known as Guest90697
=== rodlogic is now known as Guest28228
[01:43] Is it weird of me that whenever I see $LAYER_PATH, I think SLAYER_PATH, and start thinking it's about metal? Slayer? Excellent! (insert Bill + Ted sound bite here)
[01:48] blahdeblah: you've found out our secret!
[01:48] rick_h_: :-D
[01:48] * blahdeblah cranks up some metal
=== Garyx_ is now known as Garyx
=== frankban|afk is now known as frankban
=== danilos` is now known as danilos
[08:20] Hello there. I have the status: "agent is lost, sorry! See 'juju status-history ceph-osd/3'" && "agent is not communicating with the server"
[08:20] I have restarted the agent on the servers but that doesn't seem to do the trick
[08:21] i can ping the bootstack server from the client-machine
[08:22] BlackDex: what command did you use to restart the agent
[08:22] that's a known issue, you can restart the main jujud agent (not the unit-specific part) to fix it iirc
[08:23] lathiat: on the machines 'sudo initctl list | grep juju | cut -d" " -f1 | xargs -I{} sudo service {} restart'
[08:25] also a restart of the server isn't working
[08:26] on xenial?
[08:26] trusty 14.04.4
[08:29] what services does that restart, and in what order.. try running it as 'sudo echo service {} restart' instead to see
[08:30] sudo service jujud-unit-ntp-6 restart
[08:30] sudo service jujud-unit-ceph-osd-0 restart
[08:30] sudo service jujud-machine-8 restart
[08:30] sudo service jujud-unit-ntp-1 restart
[08:30] sudo service jujud-unit-ceph-mon-0 restart
[08:30] hmm... two ntp's :p
[08:33] ah, that's because of ceph-mon and ceph-osd
[08:33] on the same machine
[08:33] the strange thing is, it worked before
[08:33] try restarting the machine agent first maybe
[08:36] hmm
[08:36] for some reason it has the wrong IP
[08:36] wrong subnet
[08:38] the bootstack server has two interfaces, one external and one internal
[08:38] and now it uses the external interface
[08:39] i mean bootstrap
[09:01] lathiat: I'm now setting the correct ip address in the agent.conf and restarting the agents
[09:01] that should work
[09:01] strange that it got changed
[09:01] that fixed it :)
[09:02] ok interesting
[09:02] the ip of the api server, or something else?
[09:03] yea, of the bootstrap node
=== Garyx_ is now known as Garyx
[09:51] Hello all! We have a CI environment configured using the jenkins and jenkins-slave charms. However, we have some credentials that the slaves need access to that, currently, we just manually provision to the slaves.
[09:51] We're using mojo to deploy our environment(s), so we do have a good way of managing secrets.
[09:51] But we don't have a good way of getting those secrets on to hosts.
[09:52] My first thought for addressing this is to have a subordinate charm which is configured with, effectively, a mapping from file name -> content.
[09:53] Does that sound like a reasonable solution? Does anyone know of anything like this that already exists?
[10:41] lathiat: Strange. The bootstrap server has 2 IPs
[10:42] one external and one internal
[10:42] i need to force bootstrap to push the internal IP instead of the external as IP to the agents
[10:45] BlackDex: interesting, can you show me some more details on pastebin?
[10:56] um
[10:57] is there a way to change the bootstrap IP?
[10:57] let's say you need to change it because of infra changes
[11:07] http://paste.ubuntu.com/16316990/
=== Garyx_ is now known as Garyx
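A minimal sketch of the workaround BlackDex lands on above: point each agent at the bootstrap node's internal API address and restart the machine agent before the unit agents. The agent.conf path and the apiaddresses key are the usual juju 1.x layout on trusty with upstart; the two IP addresses are placeholders and the machine number is just the one from the discussion, so substitute your own.

```bash
# see which API address each agent on this machine is currently using
sudo grep -A2 apiaddresses /var/lib/juju/agents/*/agent.conf

# swap the external address for the internal one (example addresses only)
sudo sed -i 's/192.0.2.10/10.20.0.10/' /var/lib/juju/agents/*/agent.conf

# machine agent first, then the unit agents
sudo service jujud-machine-8 restart
for job in $(sudo initctl list | awk '/^jujud-unit/ {print $1}'); do
    sudo service "$job" restart
done
```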
[12:01] beisner, hey - I keep hitting an issue where the neutron-gateway is given an extra nic, but the ext-port config is not set...
[12:01] in o-c-t
[12:01] have you seen that?
[12:02] it causes failures in all of the tempest tests that check connectivity
[12:02] Odd_Bloke - that does sound reasonable but it's fiddly to have to configure a subordinate. I'm wondering if you can't use native jenkins Credentials on the leader, that way the slaves can use jenkins primitives to surface those credentials?
[12:08] lazyPower: Does the jenkins charm support loading credentials from configuration?
[12:10] Odd_Bloke - i think the only credentials it exposes at present is the admin username/pass... to make it consistent it would probably need to be extended. at one point we were using some python helpers in there to configure the instance
[12:10] but to be completely fair i haven't dug deep in the jenkins charm in > 4 months so i'm not sure what's going on in there these days.
[12:10] * lazyPower looks
[12:10] lazyPower: I guess it also doesn't have any plugin-specific functionality ATM?
[12:11] (Looking at the documented config options at least)
[12:11] right i'm seeing the same
[12:12] jcastro - didn't we engage with cloudbees/jenkins upstream wrt the jenkins charm?
[12:13] Odd_Bloke - well, my suggestion seems to involve a lot of engineering work, disregard me :)
[12:13] lazyPower: ^_^
[12:13] but looking it over, it doesn't seem we're exposing a path to complete what you're trying to do with the charms.
[12:14] and that a subordinate would be a workaround... but i'm hesitant to say that's the best path forward.
[12:16] Jenkins is a bit of a weird one.
[12:16] Because the files we want on disk are really part of the "workload" that Jenkins runs in jobs.
[12:16] Rather than the Jenkins workload itself.
[12:16] yep
[12:16] but, we do need to support rapidly deploying a workload on top of jenkins i feel
[12:17] We were looking at https://jujucharms.com/u/matsubara/ci-configurator/trusty/0
[12:18] we've run into this very scenario quite a bit with our app-container workloads (kubernetes/swarm) - and layer-docker gave us an entrypoint into deploying/modeling workloads on top. we're discussing what a layer for communicating with k8's and deploying k8s workloads looks like.. but it does somewhat decouple the workload from the charm as its runtime could be one of n units running kubernetes...
[12:18] But we were discouraged from having Jenkins configuration changes deployed by doing a charm upgrade, for reasons I can't remember.
[12:18] almost seems like having jenkins as a layer, and a supporting library to model your workloads on top, then derive from layer-jenkins to get your app runtime going, with a thin layer configuring jobs on top to make up your charm
[12:19] i don't know that i like that either.. advocating for snowflake jenkins instances
[12:20] i haven't had enough coffee... i'm going to go grab a cuppa and keep noodling this
[12:20] :)
[12:21] one question tho: are these credentials for members participating in the relation (eg: jenkins/0 and jenkins-slave/2) or are they unrelated units?
[12:21] the dhcp leases are not initially set correctly when using LXD as provider - not sure if this is an lxc issue or specific to juju because of some hostname managing?
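A quick way to see the symptom simonklb describes here: compare what the bridge's dnsmasq recorded in its lease file against the container names juju/LXD expect. The lease filename comes from the discussion; /var/lib/misc is the stock Ubuntu location, 10.0.3.1 is the default lxcbr0 address, and the juju-* hostname below is a placeholder.

```bash
# hostnames dnsmasq actually handed out on lxcbr0
cat /var/lib/misc/dnsmasq.lxcbr0.leases

# container names as LXD knows them, for comparison
lxc list

# ask the bridge's dnsmasq directly for one of the expected juju-* names
host juju-xxxx-machine-1 10.0.3.1
```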
[12:21] lazyPower: So the way we currently model our Jenkins stuff is that all jobs can run on all slaves, so we'd want the credentials on all slaves.
[12:22] Odd_Bloke - that makes this significantly easier
[12:22] you're now talking about a relation/interface modification
[12:22] or a new one
[12:22] lazyPower: (Though our slaves are deployed in a different environment to our master, so they aren't actually related :( )
[12:22] o
[12:22] * lazyPower snaps
[12:22] has anyone else experienced any problems with DNSes not working straight away when deploying a new charm?
[12:23] simonklb - i can't point to a specific instance no, but having a bug to reproduce/reference would be helpful. If you file it targeted at juju-core and it belongs in lxd we can re-target the bug for you.
[12:26] lazyPower: yea, the bug is kind of unspecific, the only thing I've found is that the dnsmasq.lxcbr0.leases file is not updated to the juju-xxx-xxx-.. hostnames right away
[12:26] that sounds like a likely culprit, yeah
[12:27] are you on juju 2.0-beta6/beta7?
[12:27] 2.0-beta6-xenial-amd64
[12:27] yep, ok
[12:27] can you bug that for me? :)
[12:27] i'll poke cherylj about it once it's filed and we can take a look
[12:28] thanks, I'll try to describe it :)
[12:28] btw do you know if there is any work done on the lxd profiles?
[12:28] i know it's been mentioned more than once
[12:28] I saw that docker support is not working with LXD until that is implemented
[12:28] we have specific profiles for docker based workloads in lxd, there's some networking
[12:29] and a few other concerns i've heard about wrt profiles, but i'm not 100% clear on what work is being done there
[12:29] lazyPower: this is the bug thread I saw https://bugs.launchpad.net/juju-core/+bug/1565872
[12:29] Bug #1565872: Juju needs to support LXD profiles as a constraint
[12:30] yeah :) that was my team poking about our workload support in lxd
[12:31] lazyPower: we did, why what's up?
[12:31] jcastro - just curious on if upstream will be looking at the charms, and i knew we had WIP there. Odd_Bloke is deploying a jenkins workload, and the charms have an end of the sidewalk
[12:32] we're supposed to be maintaining it in their namespace
[12:32] ooo jamespage still owns this charm
[12:33] ?
[12:33] jenkins?
[12:35] popey: sudo apt install juju
[12:35] jamespage - yep
[12:36] popey o/
[12:37] huh
[12:37] lazyPower, guess that's the side effect of being a 0 day adopter
[12:38] you forget about all that stuff you wrote :-)
[12:38] haha
[12:38] lazyPower, I think you can blame me for ganglia as well
[12:38] lazyPower, dosaboy and beisner have something a bit more recent re jenkins - we should fold that work in and make it generally consumable if that's not already the case...
[12:39] marcoceppi: yeah, tried that, but it all failed so I gave up and spun up a digital ocean droplet
[12:39] Odd_Bloke ^
[12:39] popey: wait, what do you mean it all failed?
[12:39] the guide as it is written doesn't work on 16.04
[12:39] get-started?
[12:39] the one i linked to above, yes
[12:40] * marcoceppi stomps off to correct things
[12:40] I didn't have time to debug it so switched to real hardware, sorry.
[12:45] popey: s/real hardware/other people's hardware/ ;)
[12:45] s/other people's hardware/working infrastructure/
[12:49] lazyPower: https://bugs.launchpad.net/juju-core/+bug/1579750
[12:49] Bug #1579750: Wrong hostname added to dnsmasq using LXD
[12:50] let me know if you need any more info
[12:50] We'll follow up on the bug if that's the case simonklb :) Thanks for filing this
[12:50] does anyone know if anyone's tried adding http/2 support into the apache2 charm?
[12:50] nottrobin - i don't believe so.
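Independent of the charm itself, the server-side part of http/2 support looks roughly like the sketch below, assuming an apache build that actually ships mod_http2 (as noted a little further on, the stock xenial package was built without it, so this only applies to a rebuilt or backported apache2).

```bash
# enable the http2 module and advertise h2/h2c alongside HTTP/1.1
sudo a2enmod http2
echo 'Protocols h2 h2c http/1.1' | sudo tee /etc/apache2/conf-available/http2.conf
sudo a2enconf http2
sudo apache2ctl configtest && sudo service apache2 reload
```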
[12:51] lazyPower: thanks. do you know if anyone's working on a xenial version of apache2
[12:52] I haven't seen anything yet. Might be worthwhile to ping the list so it gets a broader set of eyes on it.
[12:53] lazyPower: I guess maybe not: https://code.launchpad.net/charms/xenial/+source/apache2
[12:53] nottrobin - right, but the charm store no longer 100% relies on launchpad since the new store went live, so that could live somewhere in github or bitbucket for all we know :)
[12:54] still a good idea to ping the list, someone else may be working on that very thing
[12:54] lazyPower: oh right. didn't know that
[12:54] lazyPower: where's the list? I'm not sure if I'm subscribed
[12:54] juju@lists.ubuntu.com
[12:54] https://lists.ubuntu.com/mailman/listinfo/juju
[12:55] nottrobin: the effort involved is probably not huge? so if you're keen, I'd go ahead and xenialize, and then link on the list.
[12:56] mgz: yeah I'm thinking of giving that a go
[12:56] mgz: enabling http/2 is more difficult as even in xenial the default version is compiled without http/2 support
[12:56] yeah, but that can be another branch after.
=== lazyPower changed the topic of #juju to: || Welcome to Juju! || Docs: http://jujucharms.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://review.juju.solutions || Unanswered Questions: http://goo.gl/dNj8CP || Youtube: https://www.youtube.com/c/jujucharms || Juju 2.0 beta6 release notes: https://jujucharms.com/docs/devel/temp-release-notes
=== rodlogic is now known as Guest30270
[13:01] mgz: yeah of course
[13:01] but that's still my ultimate goal
[13:27] hi Odd_Bloke, jamespage, lazyPower - afaik, the trusty/jenkins (python rewrite) has all previously pending bits landed. i don't believe there is any secret delivery mechanism, though i do also have that need. right now, we push things into units post-deploy. a generic-ish subord or secrets layer sounds interesting.
[13:30] jamespage, re: o-c-t, ack. also seeing all tempest tests fail the nova instance ssh check. have you diag'd yet? if not, about to dive in.
[13:33] beisner - i think if that's the case i'm more inclined to agree with a subordinate approach vs a layer, unless the layer is abstract enough to accommodate the volume of secrets a large deployment would need
[13:34] similar to what kwmonroe and mbruzek pioneered with the interface: java work. an interface with a complementary framework to deliver java.... only we get an interface and a complementary subordinate to deliver n^n secrets.
[13:36] lazyPower, a swiss-army secrets-lander does sound nice. i wonder if #is is doing anything along these lines with basenode or similar charms?
[13:36] they might be
[13:37] it seems like yes, but i can't put my finger on one atm
=== rodlogic is now known as Guest93463
=== scuttle|afk is now known as scuttlemonkey
[14:04] beisner, o-c-t fixed up - the set call was not actually using the value provided.
[14:05] wow wth lol
[14:07] jamespage, thx for the fix!
=== scuttlemonkey is now known as scuttle|afk
[14:09] stub: ping to make sure you are aware of https://bugs.launchpad.net/postgresql-charm/+bug/1577544
[14:09] Bug #1577544: Connection info published on relation before pg_hba.conf updated/reloaded
[14:11] beisner, np
=== rodlogic is now known as Guest80695
=== scuttle|afk is now known as scuttlemonkey
[14:19] how can I debug the Juju controller/api?
[14:19] it's been dying on me several times just today
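The controller-debugging question never gets a direct answer in-channel, so here is a hedged first-stop sketch for it. /var/log/juju is juju's standard log location on the controller machine; checking free space under /var/lib/juju is only a guess at this point, though it turns out to be the actual problem further down.

```bash
# on the controller/bootstrap machine (machine 0)
tail -n 200 /var/log/juju/machine-0.log   # API server and agent errors land here
df -h /var/lib/juju                       # controller state (including mongodb) lives here
sudo service jujud-machine-0 status       # is the agent staying up at all?
```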
[14:27] jamespage, for cinder it looks like we've settled on a model of 1 charm per driver? Is that correct?
[14:28] cholcombe, yes - each is a backend, and can be plugged in alongside other drivers
[14:28] mbruzek - have a minute to look over something with me?
[14:28] jamespage, ok cool. i might have to do some copy pasta for huawei then
[14:28] yar
[14:28] cholcombe, I'd prefer we did not
[14:29] cholcombe, the right way is to write a new cinder-backend base layer and use that :-)
[14:29] oh?
[14:29] ah yes that's what i meant hehe
[14:29] then the next one that comes along is minimal effort
[14:29] cholcombe, ok good :-)
[14:29] cholcombe, you might be trail blazing a bit here
[14:29] Good morning
[14:30] mbruzek https://github.com/chuckbutler/layer-etcd/commit/1b8648d41fa336496efbff47e6a643e550361ee6
[14:30] this is a little frameworky script i threw together to lend a hand preparing resources offline. similar to what a make fat target would do, but the intention is not to include these files with the charm traditionally via charm push, but to attach them as resources
[14:31] I wanted your input on this before i ran down the remaining changes, moving it from curl'ing on the unit to using resource-get, etc.
[14:32] popey: what was the issue specifically? I would like to fix it
[14:33] jcastro: the doc says to install an unavailable package
[14:34] 11:20 < popey> https://jujucharms.com/get-started - these instructions do not work on 16.04
[14:34] 11:21 < popey> E: Unable to locate package juju-quickstart
[14:34] 11:23 < popey> installing juju-core results in no juju binary being installed either.
[14:34] beisner, we need to switch the post-deploy thingy to use the mac of the added interface, to deal with the network renaming in xenial...
[14:34] ack
[14:35] popey: ok, I see the problem, marco's fixing it now, ta
[14:35] jamespage, yes, what's there was a quick fix. we really need to pluck the mojo utils thing into a library proper and be able to use that outside of both o-c-t and mojo.
[14:35] that page is supposed to be revved. :-/
[14:35] jamespage, b/c it is fixed much more elegantly and pythonically in the mojo helpers
[14:35] beisner, I'm sure it is ...
[14:35] to use mac
[14:36] I got this from juju-db on machine-0: checkpoint-server: checkpoint server error: No space left on device
[14:36] still got space left on the host machine
[14:44] popey: that page was for trusty, apparently nobody bothered to check if it worked with xenial
[14:44] oops
[14:45] jcastro: it doesn't :)
[14:47] balloons: heya, we should add that to the release checklist.
[15:01] looks like the container gets full, tracked it down to the /var/lib/juju/db folder
[15:01] a bunch of collection-*.wt files
[15:03] can anyone help me find out what is being written?
[15:03] this happens when I deploy a new charm
=== rodlogic is now known as Guest20915
=== ejat is now known as fenris-
=== fenris- is now known as ejat
=== ejat is now known as fenris-
=== fenris- is now known as ejat
=== rodlogic is now known as Guest12850
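A rough sketch for the "what is being written" question above: check which WiredTiger collection files under juju's mongodb data directory are actually growing while the charm deploys. The /var/lib/juju/db path and the collection-*.wt names come straight from the log; the commands are plain disk-usage inspection.

```bash
# total size of the controller's mongodb data directory
sudo du -sh /var/lib/juju/db

# largest collection files first (glob expanded as root)
sudo sh -c 'ls -lhS /var/lib/juju/db/collection-*.wt | head'

# watch growth while deploying the charm
sudo watch -n 5 'du -s /var/lib/juju/db'
```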
[16:29] ahasenack: you know that issue with lxc instances of xenial on trusty with juju/maas I was talking to you about last week? It looks like it was choking on systemd with a non-systemd host. So I tried to update my lxc-host in maas to deploy with xenial and hit something I think you've run into also: https://bugs.launchpad.net/maas/+bug/1419041
[16:29] Bug #1419041: bootstrap failing with gomaasapi 400 when updating MAAS images
[16:29] ahasenack: oh wait, no, I think this is the one: https://bugs.launchpad.net/maas/+bug/1537095
[16:29] Bug #1537095: [1.9.0] 400 Error juju bootstrapping missing images
[17:07] Hi
[17:09] Is it possible to change the status of a bug from Fix Released to Fix Committed?
=== rodlogic is now known as Guest15218
=== frankban is now known as frankban|afk
[17:17] For a recommended charm, if I need to do some updates, can't I use the same bug? I changed the status from Fix Released to Fix Committed, but it doesn't get reflected in the Review Queue.
[17:40] suchvenu: no, to do updates you need to open a merge request
[17:43] Hi Marcoceppi
[17:43] ok, this is the branch where I have pushed my deployable charm
[17:44] https://code.launchpad.net/~ibmcharmers/charms/trusty/ibm-db2/trunk
[17:44] and the layered charm is at lp:~ibmcharmers/charms/trusty/layer-ibm-db2/trunk
[17:45] suchvenu: but you're trying to update db2 in the charmstore, correct?
[17:45] yes right. It doesn't need any review?
[17:48] So I need to merge from the ibmcharmers branch to the charmers branch, right?
[17:48] suchvenu: yes
[17:48] suchvenu: in doing so, that will create a merge that will automatically appear in the review queue
[17:49] Will it be updated in the queue then?
[17:49] oh ok
[17:49] and no need to attach a bug?
[17:49] suchvenu: correct
[17:50] bugs are for charms that don't exist in the system yet
[17:50] ok
[17:50] And any reviewers to be added?
[17:52] I had changed the bug status from Fix Released to Fix Committed. Should I change it back?
[17:53] suchvenu: link?
[17:55] https://code.launchpad.net/~ibmcharmers/charms/trusty/ibm-db2/trunk
[18:01] https://bugs.launchpad.net/charms/+bug/1477057
[18:01] Bug #1477057: New Charm: IBM DB2
[18:07] Marcoceppi, should I change the status of the bug here?
=== rodlogic is now known as Guest1573
[18:17] suchvenu: you don't need to touch the bug anymore
[18:18] suchvenu: you need to go here, https://code.launchpad.net/~ibmcharmers/charms/trusty/ibm-db2/trunk and click "Propose for merging"
[18:20] which should be the target branch?
[18:21] suchvenu: cs:charms/trusty/ibm-db2
[18:21] err
[18:21] lp:charms/trusty/ibm-db2
[18:28] Do I need to add any reviewers?
[18:38] suchvenu: No, you do not have to add reviewers. This merge request will get picked up in the review queue
[18:45] suchvenu: As long as charmers are on the merge you should be good
[18:47] https://code.launchpad.net/~ibmcharmers/charms/trusty/ibm-db2/trunk/+merge/294153
[18:48] created merge request
[18:57] Thanks Marcoceppi for your help!
=== rodlogic is now known as Guest97418
=== rodlogic is now known as Guest56148
=== rodlogic is now known as Guest3957
[22:26] dpb: tribaal: +2 on the swift-storage change after manual tests run
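For reference on the ibm-db2 review flow above: the web "Propose for merging" button is the route marcoceppi describes; a roughly equivalent command-line path via bzr's launchpad plugin is sketched below, assuming it is run from a local checkout of the ~ibmcharmers trunk branch.

```bash
# inside a local copy of lp:~ibmcharmers/charms/trusty/ibm-db2/trunk
bzr lp-propose-merge lp:charms/trusty/ibm-db2
# the resulting merge proposal is what the review queue picks up
```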