/srv/irclogs.ubuntu.com/2016/05/09/#juju.txt

=== rodlogic is now known as Guest90697
=== rodlogic is now known as Guest28228
blahdeblahIs it weird of me that whenever I see $LAYER_PATH, I think SLAYER_PATH, and start thinking it's about metal?  Slayer? Excellent! (insert Bill + Ted sound bite here)01:43
rick_h_blahdeblah: you've found out our secret!01:48
blahdeblahrick_h_: :-D01:48
* blahdeblah cranks up some metal01:48
=== Garyx_ is now known as Garyx
=== frankban|afk is now known as frankban
=== danilos` is now known as danilos
BlackDexHello there. I have the status: "agent is lost, sorry! See 'juju status-history ceph-osd/3'" && "agent is not communicating with the server"08:20
BlackDexI have restarted the agent on the servers but that doesn't seem to do the trick08:20
BlackDexi can ping the bootstack server from the client-machine08:21
lathiatBlackDex: what command did you use to restart the agent08:22
lathiatthats a known issue you can restart the main jujud agent (not the unit specific part) to fix iirc08:22
BlackDexlathiat: on the machines 'sudo initctl list | grep juju | cut -d" " -f1 | xargs -I{} sudo service {} restart'08:23
BlackDexalso a restart of the server isn't working08:25
lathiaton xenial?08:26
BlackDextrusty 14.04.408:26
lathiatwhat services does that restart, and in what order? if you run sudo echo service {} restart instead08:29
BlackDexsudo service jujud-unit-ntp-6 restart08:30
BlackDexsudo service jujud-unit-ceph-osd-0 restart08:30
BlackDexsudo service jujud-machine-8 restart08:30
BlackDexsudo service jujud-unit-ntp-1 restart08:30
BlackDexsudo service jujud-unit-ceph-mon-0 restart08:30
BlackDexhmm... two ntp's :p08:30
BlackDexah, thats because of ceph-mon and ceph-osd08:33
BlackDexon the same machine08:33
BlackDexthe strange thing is, it worked before08:33
lathiattry restarting the machine agent first maybe08:33
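lathiat's suggestion (machine agent before unit agents) can be sketched as a small helper. The service names below are the ones pasted in this conversation; the actual restart line is left commented since it only applies on a trusty/upstart Juju machine:

```shell
# order_agents: given a newline-separated list of jujud services, print the
# machine agent(s) first, then the unit agents -- the order suggested above.
order_agents() {
    printf '%s\n' "$1" | grep '^jujud-machine-'
    printf '%s\n' "$1" | grep '^jujud-unit-'
}

services="jujud-unit-ntp-6
jujud-unit-ceph-osd-0
jujud-machine-8
jujud-unit-ceph-mon-0"

order_agents "$services"
# On the machine itself (trusty/upstart) you would then run:
#   order_agents "$(sudo initctl list | grep juju | cut -d' ' -f1)" \
#     | xargs -I{} sudo service {} restart
```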
BlackDexhmm08:36
BlackDexfor some reason it has the wrong IP08:36
BlackDexwrong subnet08:36
BlackDexthe bootstack server has two interfaces, one external and one internal08:38
BlackDexand now it uses the external interface08:38
BlackDexi mean bootstrap08:39
BlackDexlathiat: I'm now setting the correct ip address in the agent.conf and restarting the agents09:01
BlackDexthat should work09:01
BlackDexstrange that it got changed09:01
BlackDexthat fixed it :)09:01
lathiatok interesting09:02
lathiatthe ip of the api server, or something else?09:02
BlackDexyea, of the bootstrap node09:03
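The fix BlackDex describes — rewriting the controller address in each agent's agent.conf and restarting — might look like this sketch. The IP addresses are documentation examples, not values from the log, and the glob path is the usual agent.conf location on a 1.x machine:

```shell
# fix_agent_conf: replace the external controller address with the internal
# one in a single agent.conf (both addresses here are made-up examples).
INTERNAL_IP="10.0.0.1"
EXTERNAL_IP="203.0.113.5"

fix_agent_conf() {
    sed -i "s/${EXTERNAL_IP}/${INTERNAL_IP}/g" "$1"
}

# On each affected machine you would then do something like:
#   for c in /var/lib/juju/agents/*/agent.conf; do fix_agent_conf "$c"; done
#   sudo initctl list | grep juju | cut -d" " -f1 | xargs -I{} sudo service {} restart
```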
=== Garyx_ is now known as Garyx
Odd_BlokeHello all!  We have a CI environment configured using the jenkins and jenkins-slave charms.  However, we have some credentials that the slaves need access to that, currently, we just manually provision to the slaves.09:51
Odd_BlokeWe're using mojo to deploy our environment(s), so we do have a good way of managing secrets.09:51
Odd_BlokeBut we don't have a good way of getting those secrets on to hosts.09:51
Odd_BlokeMy first thought for addressing this is to have a subordinate charm which is configured with, effectively, a mapping from file name ->09:52
Odd_Blokecontent.09:52
Odd_BlokeDoes that sound like a reasonable solution?  Does anyone know of anything like this that already exists?09:53
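As a sketch of Odd_Bloke's idea, a subordinate's config-changed hook could decode a filename→content mapping into place. Nothing here is an existing charm: the `files` config option and the `write_secret` helper are hypothetical, and `config-get` is only available inside a hook context:

```shell
# write_secret: write one base64-encoded secret to a path with tight perms.
write_secret() {
    mkdir -p "$(dirname "$1")"
    printf '%s' "$2" | base64 -d > "$1"
    chmod 600 "$1"
}

# In a hypothetical config-changed hook, iterate "path=base64" pairs taken
# from a charm config option:
#   for entry in $(config-get files); do
#       write_secret "${entry%%=*}" "${entry#*=}"
#   done
```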
BlackDexlathiat: Strange. The bootstrap server has 2 ip's10:41
BlackDexone external and one internal10:42
BlackDexi need to force bootstrap to push the internal IP instead of the external as IP to the agents10:42
lathiatBlackDex: interesting, can you show me some more details on pastebin?10:45
BlackDexum10:56
BlackDexis there a way to change the bootstrap IP?10:57
BlackDexlets say you need to change because of infra changes10:57
ejathttp://paste.ubuntu.com/16316990/11:07
=== Garyx_ is now known as Garyx
jamespagebeisner, hey - I keep hitting an issue where the neutron-gateway is given an extra nic, but the ext-port config is not set...12:01
jamespagein o-c-t12:01
jamespagehave you seen that?12:01
jamespageit causes failures in all of the tempest tests that check connectivity12:02
lazyPowerOdd_Bloke - that does sound reasonable but it's fiddly to have to configure a subordinate. I'm wondering if you can't use native jenkins Credentials on the leader, that way the slaves can use jenkins primitives to surface those credentials?12:02
Odd_BlokelazyPower: Does the jenkins charm support loading credentials from configuration?12:08
lazyPowerOdd_Bloke - i think the only credentials it exposes at present is the admin username/pass... to make it consistent it would probably need to be extended. at one point we were using some python helpers in there to configure the instance12:10
lazyPowerbut to be completely fair i haven't dug deep in the jenkins charm in > 4 months so i'm not sure whats going on in there these days.12:10
* lazyPower looks12:10
Odd_BlokelazyPower: I guess it also doesn't have any plugin-specific functionality ATM?12:10
Odd_Bloke(Looking at the documented config options at least)12:11
lazyPowerright i'm seeing the same12:11
lazyPowerjcastro - didn't we engage with cloudbees/jenkins upstream wrt the jenkins charm?12:12
lazyPowerOdd_Bloke - well, my suggestion seems to involve a lot of engineering work, disregard me :)12:13
Odd_BlokelazyPower: ^_^12:13
lazyPowerbut looking it over, it doesn't seem we're exposing a path to complete what you're trying to do with the charms.12:13
lazyPowerand that a subordinate would be a workaround... but i'm hesitant to say thats the best path forward.12:14
Odd_BlokeJenkins is a bit of a weird one.12:16
Odd_BlokeBecause the files we want on disk are really part of the "workload" that Jenkins runs in jobs.12:16
Odd_BlokeRather than the Jenkins workload itself.12:16
lazyPoweryep12:16
lazyPowerbut, we do need to support rapidly deploying a workload on top of jenkins i feel12:16
Odd_BlokeWe were looking at https://jujucharms.com/u/matsubara/ci-configurator/trusty/012:17
lazyPowerwe've run into this very scenario quite a bit with our app-container workloads (kubernetes/swarm) - and layer-docker gave us an entrypoint into deploying/modeling workloads on top.  we're discussing what a layer for communicating with k8's and deploying k8s workloads looks like.. but it does somewhat decouple the workload from the charm as its runtime could be one of n units running kubernetes...12:18
Odd_BlokeBut we were discouraged from having Jenkins configuration changes deployed by doing a charm upgrade, for reasons I can't remember.12:18
lazyPoweralmost seems like having jenkins as a layer, and a supporting library to model your workloads on top, then derive from layer-jenkins to get your app runtime going, with a thin layer configuring jobs on top to make up your charm12:18
lazyPoweri dont know that i like that either.. advocating for snowflake jenkins instances12:19
lazyPoweri haven't had enough coffee... i'm going to go grab a cuppa and keep noodling this12:20
Odd_Bloke:)12:20
lazyPowerone question tho: are these credentials to members participating? (eg: jenkins/0  and jenkins-slave/2) or are they unrelated units?12:21
simonklbthe dhcp leases are not initially set correctly when using LXD as provider - not sure if this is an lxc issue or specific to juju because of some hostname management?12:21
Odd_BlokelazyPower: So the way we currently model our Jenkins stuff is that all jobs can run on all slaves, so we'd want the credentials on all slaves.12:21
lazyPowerOdd_Bloke - that makes this significantly easier12:22
lazyPoweryou're now talking about a relation/interface modification12:22
lazyPoweror a new one12:22
Odd_BlokelazyPower: (Though our slaves are deployed in a different environment to our master, so they aren't actually related :( )12:22
lazyPowero12:22
* lazyPower snaps12:22
simonklbhas anyone else experienced problems with DNSes not working straight away when deploying a new charm?12:22
lazyPowersimonklb - i cant point to a specific instance no, but having a bug to reproduce/reference would be helpful. If you file it targeted at juju-core and it belongs in lxd we can re-target the bug for you.12:23
simonklblazyPower: yea, the bug is kind of unspecific, the only thing I've found is that the dnsmasq.lxcbr0.leases file is not updated to the juju-xxx-xxx-.. hostnames right away12:26
lazyPowerthat sounds like a likely culprit, yeah12:26
lazyPowerare you on juju 2.0-beta6/beta7?12:27
simonklb2.0-beta6-xenial-amd6412:27
lazyPoweryep, ok12:27
lazyPowercan you bug that for me? :)12:27
lazyPoweri'll poke cherylj about it once it's filed and we can take a look12:27
simonklbthanks, I'll try to describe it :)12:28
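A quick check for the symptom simonklb describes — whether the container's juju-* hostname made it into the lease file — might look like this. The lease-file path is the lxcbr0 default mentioned above; the container name is a made-up example:

```shell
# has_lease: does the dnsmasq leases file mention the given hostname?
has_lease() {
    grep -q "$2" "$1"
}

# On the LXD host (lease path from this conversation; name hypothetical):
#   has_lease /var/lib/misc/dnsmasq.lxcbr0.leases juju-abc-machine-1 \
#     || echo "no lease recorded for that hostname yet"
```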
simonklbbtw do you know if there is any work done on the lxd profiles?12:28
lazyPoweri know its been mentioned more than once12:28
simonklbI saw that docker support is not working with LXD until that is implemented12:28
lazyPowerwe have specific profiles for docker based workloads in lxd, there's some networking12:28
lazyPowerand a few other concerns i've heard about wrt profiles, but i'm not 100% clear on what work is being done there12:29
simonklblazyPower: this is the bug thread I saw https://bugs.launchpad.net/juju-core/+bug/156587212:29
mupBug #1565872: Juju needs to support LXD profiles as a constraint <adoption> <juju-release-support> <lxd> <juju-core:Triaged> <https://launchpad.net/bugs/1565872>12:29
lazyPoweryeah :) that was my team poking about our workload support in lxd12:30
jcastrolazyPower: we did, why what's up?12:31
lazyPowerjcastro - just curious whether upstream will be looking at the charms, and i knew we had WIP there. Odd_Bloke is deploying a jenkins workload, and the charms have an end of the sidewalk12:31
jcastrowe're supposed to be maintaining it in their namespace12:32
lazyPowerooo jamespage still owns this charm12:32
jamespage?12:33
jamespagejenkins?12:33
marcoceppipopey: sudo apt install juju12:35
lazyPowerjamespage - yep12:35
lazyPowerpopey o/12:36
jamespagehuh12:37
jamespagelazyPower, guess that's the side effect of being a 0 day adopter12:37
jamespageyou forget about all that stuff you wrote :-)12:38
lazyPowerhaha12:38
jamespagelazyPower, I think you can blame me for ganglia as well12:38
jamespagelazyPower, dosaboy and beisner have something a bit more recent re jenkins - we should fold that work in and make it generally consumable if thats not already the case...12:38
popeymarcoceppi: yeah, tried that, but it all failed so I gave up and spun up a digital ocean droplet12:39
lazyPowerOdd_Bloke ^12:39
marcoceppipopey: wait, what do you mean it all failed?12:39
popeythe guide as it is written doesn't work on 16.0412:39
marcoceppiget-started?12:39
popeythe one i linked to above, yes12:39
* marcoceppi stomps off to correct things12:40
popeyI didn't have time to debug it so switched to real hardware, sorry.12:40
marcoceppipopey: s/real hardware/other people's hardware/ ;)12:45
popeys/other people's hardware/working infrastructure/12:45
simonklblazyPower: https://bugs.launchpad.net/juju-core/+bug/157975012:49
mupBug #1579750: Wrong hostname added to dnsmasq using LXD <juju-core:New> <https://launchpad.net/bugs/1579750>12:49
simonklblet me know if you need any more info12:50
lazyPowerWe'll follow up on the bug if thats the case simonklb :) Thanks for filing this12:50
nottrobindoes anyone know if anyone's tried adding http/2 support into the apache2 charm?12:50
lazyPowernottrobin - i don't believe so.12:50
nottrobinlazyPower: thanks. do you know if anyone's working on a xenial version of apache212:51
lazyPowerI haven't seen anything yet. Might be worth while to ping the list so it gets a broader set of eyes on it.12:52
nottrobinlazyPower: I guess maybe not: https://code.launchpad.net/charms/xenial/+source/apache212:53
lazyPowernottrobin - right, but the charm store no longer 100% relies on launchpad since the new store went live; that could live somewhere in github or bitbucket for all we know :)12:53
lazyPowerstill a good idea to ping the list, someone else may be working on that very thing12:54
nottrobinlazyPower: oh right. didn't know that12:54
nottrobinlazyPower: where's the list? I'm not sure if I'm subscribed12:54
lazyPowerjuju@lists.ubuntu.com12:54
lazyPowerhttps://lists.ubuntu.com/mailman/listinfo/juju12:54
mgznottrobin: the effort involved is probably not huge? so if you're keen, I'd go ahead and xenialize, and then link on the list.12:55
nottrobinmgz: yeah I'm thinking of giving that a go12:56
nottrobinmgz: enabling http/2 is more difficult as even in xenial the default version is compiled without http/2 support12:56
mgzyeah, but that can be another branch after.12:56
=== lazyPower changed the topic of #juju to: || Welcome to Juju! || Docs: http://jujucharms.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://review.juju.solutions || Unanswered Questions: http://goo.gl/dNj8CP || Youtube: https://www.youtube.com/c/jujucharms || Juju 2.0 beta6 release notes: https://jujucharms.com/docs/devel/temp-release-notes
=== rodlogic is now known as Guest30270
nottrobinmgz: yeah of course13:01
nottrobinbut that's still my ultimate goal13:01
beisnerhi Odd_Bloke, jamespage, lazyPower - afaik, the trusty/jenkins (python rewrite) has all previously pending bits landed.  i don't believe there is any secret delivery mechanism, though i do also have that need.  right now, we push things into units post-deploy.  a generic-ish subord or secrets layer sounds interesting.13:27
beisnerjamespage, re: o-c-t, ack.  also seeing all tempest tests fail the nova instance ssh check.  have you diag'd yet?  if not, about to dive in.13:30
lazyPowerbeisner - i think if thats the case i'm more inclined to agree with a subordinate approach vs a layer, unless the layer is abstract enough to accommodate the volume of secrets a large deployment would need13:33
lazyPowersimilar to what kwmonroe and mbruzek pioneered with the interface: java work.  an interface with a complementary framework to deliver java... only we get an interface and a complementary subordinate to deliver n^n secrets.13:34
beisnerlazyPower, a swiss-army secrets-lander does sound nice.  i wonder if #is is doing anything along these lines with basenode or similar charms?13:36
lazyPowerthey might be13:36
beisnerit seems like yes, but i can't put my finger on one atm13:37
=== rodlogic is now known as Guest93463
=== scuttle|afk is now known as scuttlemonkey
jamespagebeisner, o-c-t fixed up - the set call was not actually using the value provided.14:04
beisnerwow wth lol14:05
beisnerjamespage, thx for the fix!14:07
=== scuttlemonkey is now known as scuttle|afk
tvansteenburghstub: ping to make sure you are aware of https://bugs.launchpad.net/postgresql-charm/+bug/157754414:09
mupBug #1577544: Connection info published on relation before pg_hba.conf updated/reloaded <PostgreSQL Charm:New> <https://launchpad.net/bugs/1577544>14:09
jamespagebeisner, np14:11
=== rodlogic is now known as Guest80695
=== scuttle|afk is now known as scuttlemonkey
simonklbhow can I debug the Juju controller/api?14:19
simonklbit's been dying on me several times just today14:19
cholcombejamespage, for cinder it looks like we've settled on a model of 1 charm per driver?  Is that correct?14:27
jamespagecholcombe, yes - each is a backend, and can be plugged in alongside other drivers14:28
lazyPowermbruzek - have a minute to look over something with me?14:28
cholcombejamespage, ok cool. i might have to do some copy pasta for huawei then14:28
mbruzekyar14:28
jamespagecholcombe, I'd prefer we did not14:28
jamespagecholcombe, the right way is to write a new cinder-backend base layer and use that :-)14:29
cholcombeoh?14:29
cholcombeah yes that's what i meant hehe14:29
jamespagethen the next one that comes along is minimal effort14:29
jamespagecholcombe, ok good :-)14:29
jamespagecholcombe, you might be trail blazing a bit here14:29
thedacGood morning14:29
lazyPowermbruzek https://github.com/chuckbutler/layer-etcd/commit/1b8648d41fa336496efbff47e6a643e550361ee614:30
lazyPowerthis is a little frameworky script i threw together to lend a hand preparing resources offline. similar to what a make fat target would do, but the intention is not to include these files with the charm traditionally via charm push, but to attach them as resources14:30
lazyPowerI wanted your input on this before i ran down the remaining changes, moving it from curl'ing on the unit to using resource-get, etc.14:31
jcastropopey: what was the issue specifically? I would like to fix it14:32
popeyjcastro: the doc says to install an unavailable package14:33
popey11:20 < popey> https://jujucharms.com/get-started - these instructions do not work on 16.0414:34
popey11:21 < popey> E: Unable to locate package juju-quickstart14:34
popey11:23 < popey> installing juju-core results in no juju binary being installed either.14:34
jamespagebeisner, we need to switch post-deploy thingy to use mac of added interface, to deal with the network renaming in xenial...14:34
jcastroack14:34
jcastropopey: ok, I see the problem, marco's fixing it now, ta14:35
beisnerjamespage, yes, what's there was a quick fix.  we really need to pluck the mojo utils thing into a library proper and be able to use that outside of both o-c-t and mojo.14:35
jcastrothat page is supposed to be revved. :-/14:35
beisnerjamespage, b/c it is fixed much more elegantly and pythonic in the mojo helpers14:35
jamespagebeisner, I'm sure it is ...14:35
beisnerto use mac14:35
simonklbI got this from juju-db on machine-0: checkpoint-server: checkpoint server error: No space left on device14:36
simonklbstill got space left on the host machine14:36
jcastropopey: that page was for trusty, apparently nobody bothered to check if it worked with xenial14:44
popeyoops14:44
popeyjcastro: it doesn't :)14:45
jcastroballoons: heya, we should add that to the release checklist.14:47
simonklblooks like the container gets full, located it to the /var/lib/juju/db folder15:01
simonklba bunch of collection-*.wt files15:01
simonklbanyone can help me to find what is being written?15:03
simonklbthis happens when I deploy a new charm15:03
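To see which files are eating the space simonklb reports under /var/lib/juju/db, a sorted `du` is usually enough. Sketch only — the path comes from the log, the helper name is made up:

```shell
# biggest_entries: list the largest entries under a directory, biggest first
# (sizes in KiB; second argument limits how many lines are shown).
biggest_entries() {
    du -sk "$1"/* | sort -rn | head -n "${2:-5}"
}

# On the controller (machine 0), as root:
#   biggest_entries /var/lib/juju/db 20
```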
=== rodlogic is now known as Guest20915
=== ejat is now known as fenris-
=== fenris- is now known as ejat
=== ejat is now known as fenris-
=== fenris- is now known as ejat
=== rodlogic is now known as Guest12850
plarsahasenack: you know that issue with lxc instances of xenial on trusty with juju/maas I was talking to you about last week? It looks like it was choking on systemd with a non-systemd host. So I tried to update my lxc-host in maas to deploy with xenial and hit something I think you've run into also: https://bugs.launchpad.net/maas/+bug/141904116:29
mupBug #1419041: bootstrap failing  with gomaasapi 400 when updating MAAS images <landscape> <oil> <oil-bug-1372407> <MAAS:Fix Released> <https://launchpad.net/bugs/1419041>16:29
plarsahasenack: oh wait, no I think this is the one: https://bugs.launchpad.net/maas/+bug/153709516:29
mupBug #1537095: [1.9.0] 400 Error juju bootstrapping missing images <landscape> <oil> <MAAS:In Progress by allenap> <MAAS 1.9:New> <https://launchpad.net/bugs/1537095>16:29
suchvenuHi17:07
suchvenuIs it possible to change the status of a bug from Fix Released to Fix Committed?17:09
=== rodlogic is now known as Guest15218
=== frankban is now known as frankban|afk
suchvenuFor a recommended charm, if I need to do some updates can't I use the same bug?  I changed the status from Fix Released to Fix Committed, but it doesn't get reflected in the Review Queue.17:17
marcoceppisuchvenu: no, to do updates you need to open a merge request17:40
suchvenuHi Marcoceppi17:43
suchvenuok, This is the stream where I have pushed my deployable charm17:43
suchvenuhttps://code.launchpad.net/~ibmcharmers/charms/trusty/ibm-db2/trunk17:44
suchvenuand layered charm is at lp:~ibmcharmers/charms/trusty/layer-ibm-db2/trunk17:44
marcoceppisuchvenu: but you're trying to update db2 in the charmstore, correct?17:45
suchvenuyes right. It doesn't need any review?17:45
suchvenuSo i need to merge from ibmcharmers branch to charmers branch , right ?17:48
marcoceppisuchvenu: yes17:48
marcoceppisuchvenu: in doing so, that will create a merge that will automatically appear in the review queue17:48
suchvenuWill it be updated in the queue then?17:49
suchvenuoh ok17:49
suchvenuand no need to attach a bug ?17:49
marcoceppisuchvenu: correct17:49
marcoceppibugs are for charms that don't exist in the system yet17:50
suchvenuok17:50
suchvenuAnd any reviewers to be added ?17:50
suchvenuI had changed the bug status from Fix Released to Fix Committed. Should I change it back?17:52
marcoceppisuchvenu: link?17:53
suchvenuhttps://code.launchpad.net/~ibmcharmers/charms/trusty/ibm-db2/trunk17:55
suchvenuhttps://bugs.launchpad.net/charms/+bug/147705718:01
mupBug #1477057: New Charm: IBM DB2 <Juju Charms Collection:Fix Committed> <https://launchpad.net/bugs/1477057>18:01
suchvenuMarcoceppi, should I change the status of the Bug here?18:07
=== rodlogic is now known as Guest1573
marcoceppisuchvenu: you don't need to touch the bug anymore18:17
marcoceppisuchvenu: you need to go here, https://code.launchpad.net/~ibmcharmers/charms/trusty/ibm-db2/trunk and click "Propose for merging"18:18
suchvenuwhich should be the target branch ?18:20
marcoceppisuchvenu: cs:charms/trusty/ibm-db218:21
marcoceppierr18:21
marcoceppilp:charms/trusty/ibm-db218:21
suchvenuDo i need to add any reviewers ?18:28
mbruzeksuchvenu: No you do not have to add reviewers. This Merge Request will get picked up in the review queue18:38
mbruzeksuchvenu: As long as charmers are on the merge you should be good18:45
suchvenuhttps://code.launchpad.net/~ibmcharmers/charms/trusty/ibm-db2/trunk/+merge/29415318:47
suchvenucreated merge request18:48
suchvenuThanks Marcoceppi for your help!18:57
=== rodlogic is now known as Guest97418
=== rodlogic is now known as Guest56148
=== rodlogic is now known as Guest3957
thedacdpb: tribaal: +2 on the swift-storage change after manual tests run22:26

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!