blahdeblah | Is it weird of me that whenever I see $LAYER_PATH, I think SLAYER_PATH, and start thinking it's about metal? Slayer? Excellent! (insert Bill + Ted sound bite here) | 01:43 |
rick_h_ | blahdeblah: you've found out our secret! | 01:48 |
blahdeblah | rick_h_: :-D | 01:48 |
* blahdeblah cranks up some metal | 01:48 |
BlackDex | Hello there. I have the status: "agent is lost, sorry! See 'juju status-history ceph-osd/3'" && "agent is not communicating with the server" | 08:20 |
BlackDex | I have restarted the agent on the servers but that doesn't seem to do the trick | 08:20 |
BlackDex | i can ping the bootstack server from the client-machine | 08:21 |
lathiat | BlackDex: what command did you use to restart the agent | 08:22 |
lathiat | that's a known issue; you can restart the main jujud agent (not the unit-specific part) to fix it, iirc | 08:22 |
BlackDex | lathiat: on the machines 'sudo initctl list | grep juju | cut -d" " -f1 | xargs -I{} sudo service {} restart' | 08:23 |
BlackDex | also a restart of the server isn't working | 08:25 |
lathiat | on xenial? | 08:26 |
BlackDex | trusty 14.04.4 | 08:26 |
lathiat | what services does that restart, and in what order.. if you run like sudo echo service {} restart instead | 08:29 |
BlackDex | sudo service jujud-unit-ntp-6 restart | 08:30 |
BlackDex | sudo service jujud-unit-ceph-osd-0 restart | 08:30 |
BlackDex | sudo service jujud-machine-8 restart | 08:30 |
BlackDex | sudo service jujud-unit-ntp-1 restart | 08:30 |
BlackDex | sudo service jujud-unit-ceph-mon-0 restart | 08:30 |
BlackDex | hmm... two ntp's :p | 08:30 |
BlackDex | ah, thats because of ceph-mon and ceph-osd | 08:33 |
BlackDex | on the same machine | 08:33 |
BlackDex | the strange thing is, it worked before | 08:33 |
lathiat | try restarting the machine agent first maybe | 08:33 |
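A rough sketch of that ordering on trusty/upstart, using the service names BlackDex listed above (the machine and unit numbers will differ per host):

    # restart the machine agent first, then the unit agents
    sudo service jujud-machine-8 restart
    for svc in $(sudo initctl list | awk '/^jujud-unit/ {print $1}'); do
        sudo service "$svc" restart
    done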
BlackDex | hmm | 08:36 |
BlackDex | for some reason it has the wrong IP | 08:36 |
BlackDex | wrong subnet | 08:36 |
BlackDex | the bootstack server has two interfaces, one external and one internal | 08:38 |
BlackDex | and now it uses the external interface | 08:38 |
BlackDex | i mean bootstrap | 08:39 |
BlackDex | lathiat: I'm now setting the correct ip address in the agent.conf and restarting the agents | 09:01 |
BlackDex | that should work | 09:01 |
BlackDex | strange that it has been adapted | 09:01 |
BlackDex | that fixed it :) | 09:01 |
lathiat | ok interesting | 09:02 |
lathiat | the ip of the api server, or something else? | 09:02 |
BlackDex | yea, of the bootstrap node | 09:03 |
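A rough sketch of the fix BlackDex describes, assuming Juju 1.x agents on trusty, where each agent's configuration lives under /var/lib/juju/agents/<tag>/agent.conf and the controller address appears in its apiaddresses list (the addresses below are purely illustrative):

    # on each affected machine: swap the external address for the internal one
    sudo sed -i 's/203.0.113.10/10.20.0.10/g' /var/lib/juju/agents/*/agent.conf
    # then bounce all juju agents so they reconnect
    sudo initctl list | awk '/^jujud/ {print $1}' | xargs -I{} sudo service {} restart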
Odd_Bloke | Hello all! We have a CI environment configured using the jenkins and jenkins-slave charms. However, we have some credentials that the slaves need access to that, currently, we just manually provision to the slaves. | 09:51 |
Odd_Bloke | We're using mojo to deploy our environment(s), so we do have a good way of managing secrets. | 09:51 |
Odd_Bloke | But we don't have a good way of getting those secrets on to hosts. | 09:51 |
Odd_Bloke | My first thought for addressing this is to have a subordinate charm which is configured with, effectively, a mapping from file name -> | 09:52 |
Odd_Bloke | content. | 09:52 |
Odd_Bloke | Does that sound like a reasonable solution? Does anyone know of anything like this that already exists? | 09:53 |
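One shape such a subordinate could take — purely hypothetical, nothing like it is referenced in the channel — is a single config option holding a JSON map of path to base64-encoded content, written out by config-changed:

    #!/bin/bash
    # hooks/config-changed of a hypothetical "secret-files" subordinate charm.
    # Assumes a charm config option "files" containing JSON like
    # {"/etc/jenkins/creds.pem": "<base64>", ...}; all names are illustrative.
    set -eu
    config-get files | python3 -c '
    import base64, json, os, sys
    raw = sys.stdin.read().strip()
    mapping = json.loads(raw) if raw else {}
    for path, encoded in mapping.items():
        d = os.path.dirname(path)
        if d:
            os.makedirs(d, exist_ok=True)
        with open(path, "wb") as fh:
            fh.write(base64.b64decode(encoded))
        os.chmod(path, 0o600)
    '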
BlackDex | lathiat: Strange. The bootstrap server has 2 ip's | 10:41 |
BlackDex | one external and one internal | 10:42 |
BlackDex | i need to force bootstrap to push the internal IP instead of the external as IP to the agents | 10:42 |
lathiat | BlackDex: interesting, can you show me some more details on pastebin? | 10:45 |
BlackDex | um | 10:56 |
BlackDex | is there a way to change the bootstrap IP? | 10:57 |
BlackDex | lets say you need to change because of infra changes | 10:57 |
ejat | http://paste.ubuntu.com/16316990/ | 11:07 |
jamespage | beisner, hey - I keep hitting an issue where the neutron-gateway is given an extra nic, but the ext-port config is not set... | 12:01 |
jamespage | in o-c-t | 12:01 |
jamespage | have you seen that? | 12:01 |
jamespage | it causes failures in all of the tempest tests that check connectivity | 12:02 |
lazyPower | Odd_Bloke - that does sound reasonable but it's fiddly to have to configure a subordinate. I'm wondering if you can't use native jenkins Credentials on the leader, that way the slaves can use jenkins primitives to surface those credentials? | 12:02 |
Odd_Bloke | lazyPower: Does the jenkins charm support loading credentials from configuration? | 12:08 |
lazyPower | Odd_Bloke - i think the only credentials it exposes at present is the admin username/pass... to make it consistent it would probably need to be extended. at one point we were using some python helpers in there to configure the instance | 12:10 |
lazyPower | but to be completely fair i haven't dug deep in the jenkins charm in > 4 months so i'm not sure whats going on in there these days. | 12:10 |
* lazyPower looks | 12:10 |
Odd_Bloke | lazyPower: I guess it also doesn't have any plugin-specific functionality ATM? | 12:10 |
Odd_Bloke | (Looking at the documented config options at least) | 12:11 |
lazyPower | right i'm seeing the same | 12:11 |
lazyPower | jcastro - didn't we engage with cloudbees/jenkins upstream wrt the jenkins charm? | 12:12 |
lazyPower | Odd_Bloke - well, my suggestion seems to involve a lot of engineering work, disregard me :) | 12:13 |
Odd_Bloke | lazyPower: ^_^ | 12:13 |
lazyPower | but looking it over, it doesn't seem we're exposing a path to complete what you're trying to do with the charms. | 12:13 |
lazyPower | and that a subordinate would be a workaround... but i'm hesitant to say thats the best path forward. | 12:14 |
Odd_Bloke | Jenkins is a bit of a weird one. | 12:16 |
Odd_Bloke | Because the files we want on disk are really part of the "workload" that Jenkins runs in jobs. | 12:16 |
Odd_Bloke | Rather than the Jenkins workload itself. | 12:16 |
lazyPower | yep | 12:16 |
lazyPower | but, we do need to support rapidly deploying a workload on top of jenkins i feel | 12:16 |
Odd_Bloke | We were looking at https://jujucharms.com/u/matsubara/ci-configurator/trusty/0 | 12:17 |
lazyPower | we've run into this very scenario quite a bit with our app-container workloads (kubernetes/swarm) - and layer-docker gave us an entrypoint into deploying/modeling workloads on top. we're discussing what a layer for communicating with k8s and deploying k8s workloads looks like.. but it does somewhat decouple the workload from the charm as its runtime could be one of n units running kubernetes... | 12:18 |
Odd_Bloke | But we were discouraged from having Jenkins configuration changes deployed by doing a charm upgrade, for reasons I can't remember. | 12:18 |
lazyPower | almost seems like having jenkins as a layer, and a supporting library to model your workloads on top, then derive from layer-jenkins to get your app runtime going, with a thin layer configuring jobs on top to make up your charm | 12:18 |
lazyPower | i dont know that i like that either.. advocating for snowflake jenkins instances | 12:19 |
lazyPower | i haven't had enough coffee... i'm going to go grab a cuppa and keep noodling this | 12:20 |
Odd_Bloke | :) | 12:20 |
lazyPower | one question tho: are these credentials going to units participating in the relation? (eg: jenkins/0 and jenkins-slave/2) or are they unrelated units? | 12:21 |
simonklb | the dhcp leases are not initially set correctly when using LXD as provider - not sure if this is a lxc issue or specific to juju because of some hostname management? | 12:21 |
Odd_Bloke | lazyPower: So the way we currently model our Jenkins stuff is that all jobs can run on all slaves, so we'd want the credentials on all slaves. | 12:21 |
lazyPower | Odd_Bloke - that makes this significantly easier | 12:22 |
lazyPower | you're now talking about a relation/interface modification | 12:22 |
lazyPower | or a new one | 12:22 |
Odd_Bloke | lazyPower: (Though our slaves are deployed in a different environment to our master, so they aren't actually related :( ) | 12:22 |
lazyPower | o | 12:22 |
* lazyPower snaps | 12:22 |
simonklb | anyone else experienced any problems with DNSes not working straight away when deploying a new charm? | 12:22 |
lazyPower | simonklb - i can't point to a specific instance no, but having a bug to reproduce/reference would be helpful. If you file it targeted at juju-core and it belongs in lxd we can re-target the bug for you. | 12:23 |
simonklb | lazyPower: yea, the bug is kind of unspecific, the only thing I've found is that the dnsmasq.lxcbr0.leases file is not updated to the juju-xxx-xxx-.. hostnames right away | 12:26 |
lazyPower | that sounds like a likely culprit, yeah | 12:26 |
lazyPower | are you on juju 2.0-beta6/beta7? | 12:27 |
simonklb | 2.0-beta6-xenial-amd64 | 12:27 |
lazyPower | yep, ok | 12:27 |
lazyPower | can you bug that for me? :) | 12:27 |
lazyPower | i'll poke cherylj about it once it's filed and we can take a look | 12:27 |
simonklb | thanks, I'll try to describe it :) | 12:28 |
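For anyone trying to reproduce this, the leases file simonklb refers to normally lives under /var/lib/misc on the host (the exact path can vary by distro and bridge name):

    # watch for the juju-* hostnames to show up in the lxcbr0 leases
    watch -n2 'cat /var/lib/misc/dnsmasq.lxcbr0.leases'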
simonklb | btw do you know if there is any work done on the lxd profiles? | 12:28 |
lazyPower | i know its been mentioned more than once | 12:28 |
simonklb | I saw that docker support is not working with LXD until that is implemented | 12:28 |
lazyPower | we have specific profiles for docker based workloads in lxd, there's some networking | 12:28 |
lazyPower | and a few other concerns i've heard about wrt profiles, but i'm not 100% clear on what work is being done there | 12:29 |
simonklb | lazyPower: this is the bug thread I saw https://bugs.launchpad.net/juju-core/+bug/1565872 | 12:29 |
mup | Bug #1565872: Juju needs to support LXD profiles as a constraint <adoption> <juju-release-support> <lxd> <juju-core:Triaged> <https://launchpad.net/bugs/1565872> | 12:29 |
lazyPower | yeah :) that was my team poking about our workload support in lxd | 12:30 |
jcastro | lazyPower: we did, why what's up? | 12:31 |
lazyPower | jcastro - just curious whether upstream will be looking at the charms, and i knew we had WIP there. Odd_Bloke is deploying a jenkins workload, and the charms have an end of the sidewalk | 12:31 |
jcastro | we're supposed to be maintaining it in their namespace | 12:32 |
lazyPower | ooo jamespage still owns this charm | 12:32 |
jamespage | ? | 12:33 |
jamespage | jenkins? | 12:33 |
marcoceppi | popey: sudo apt install juju | 12:35 |
lazyPower | jamespage - yep | 12:35 |
lazyPower | popey o/ | 12:36 |
jamespage | huh | 12:37 |
jamespage | lazyPower, guess that's the side effect of being a 0 day adopter | 12:37 |
jamespage | you forget about all that stuff you wrote :-) | 12:38 |
lazyPower | haha | 12:38 |
jamespage | lazyPower, I think you can blame me for ganglia as well | 12:38 |
jamespage | lazyPower, dosaboy and beisner have something a bit more recent re jenkins - we should fold that work in and make it generally consumable if thats not already the case... | 12:38 |
popey | marcoceppi: yeah, tried that, but it all failed so I gave up and spun up a Digital Ocean droplet | 12:39 |
lazyPower | Odd_Bloke ^ | 12:39 |
marcoceppi | popey: wait, what do you mean it all failed? | 12:39 |
popey | the guide as it is written doesn't work on 16.04 | 12:39 |
marcoceppi | get-started? | 12:39 |
popey | the one i linked to above, yes | 12:39 |
* marcoceppi stomps off to correct things | 12:40 |
popey | I didn't have time to debug it so switched to real hardware, sorry. | 12:40 |
marcoceppi | popey: s/real hardware/other people's hardware/ ;) | 12:45 |
popey | s/other people's hardware/working infrastructure/ | 12:45 |
simonklb | lazyPower: https://bugs.launchpad.net/juju-core/+bug/1579750 | 12:49 |
mup | Bug #1579750: Wrong hostname added to dnsmasq using LXD <juju-core:New> <https://launchpad.net/bugs/1579750> | 12:49 |
simonklb | let me know if you need any more info | 12:50 |
lazyPower | We'll follow up on the bug if thats the case simonklb :) Thanks for filing this | 12:50 |
nottrobin | does anyone know if anyone's tried adding http/2 support into the apache2 charm? | 12:50 |
lazyPower | nottrobin - i don't believe so. | 12:50 |
nottrobin | lazyPower: thanks. do you know if anyone's working on a xenial version of apache2 | 12:51 |
lazyPower | I haven't seen anything yet. Might be worthwhile to ping the list so it gets a broader set of eyes on it. | 12:52 |
nottrobin | lazyPower: I guess maybe not: https://code.launchpad.net/charms/xenial/+source/apache2 | 12:53 |
lazyPower | nottrobin - right but the charm store no longer 100% relies on launchpad since the new store went live; that could live somewhere in github or bitbucket for all we know :) | 12:53 |
lazyPower | still a good idea to ping the list, someone else may be working on that very thing | 12:54 |
nottrobin | lazyPower: oh right. didn't know that | 12:54 |
nottrobin | lazyPower: where's the list? I'm not sure if I'm subscribed | 12:54 |
lazyPower | juju@lists.ubuntu.com | 12:54 |
lazyPower | https://lists.ubuntu.com/mailman/listinfo/juju | 12:54 |
mgz | nottrobin: the effort involved is probably not huge? so if you're keen, I'd go ahead and xenialize, and then link on the list. | 12:55 |
nottrobin | mgz: yeah I'm thinking of giving that a go | 12:56 |
nottrobin | mgz: enabling http/2 is more difficult as even in xenial the default version is compiled without http/2 support | 12:56 |
mgz | yeah, but that can be another branch after. | 12:56 |
=== lazyPower changed the topic of #juju to: || Welcome to Juju! || Docs: http://jujucharms.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://review.juju.solutions || Unanswered Questions: http://goo.gl/dNj8CP || Youtube: https://www.youtube.com/c/jujucharms || Juju 2.0 beta6 release notes: https://jujucharms.com/docs/devel/temp-release-notes
nottrobin | mgz: yeah of course | 13:01 |
nottrobin | but that's still my ultimate goal | 13:01 |
beisner | hi Odd_Bloke, jamespage, lazyPower - afaik, the trusty/jenkins (python rewrite) has all previously pending bits landed. i don't believe there is any secret delivery mechanism, though i do also have that need. right now, we push things into units post-deploy. a generic-ish subord or secrets layer sounds interesting. | 13:27 |
beisner | jamespage, re: o-c-t, ack. also seeing all tempest tests fail the nova instance ssh check. have you diag'd yet? if not, about to dive in. | 13:30 |
lazyPower | beisner - i think if that's the case i'm more inclined to agree with a subordinate approach vs a layer, unless the layer is abstract enough to accommodate the volume of secrets a large deployment would need | 13:33 |
lazyPower | similar to what kwmonroe and mbruzek pioneered with the interface: java work. an interface with a complementary framework to deliver java.... only we get an interface and a complementary subordinate to deliver n^n secrets. | 13:34 |
beisner | lazyPower, a swiss-army secrets-lander does sound nice. i wonder if #is is doing anything along these lines with basenode or similar charms? | 13:36 |
lazyPower | they might be | 13:36 |
beisner | it seems like yes, but i can't put my finger on one atm | 13:37 |
jamespage | beisner, o-c-t fixed up - the set call was not actually using the value provided. | 14:04 |
beisner | wow wth lol | 14:05 |
beisner | jamespage, thx for the fix! | 14:07 |
tvansteenburgh | stub: ping to make sure you are aware of https://bugs.launchpad.net/postgresql-charm/+bug/1577544 | 14:09 |
mup | Bug #1577544: Connection info published on relation before pg_hba.conf updated/reloaded <PostgreSQL Charm:New> <https://launchpad.net/bugs/1577544> | 14:09 |
jamespage | beisner, np | 14:11 |
simonklb | how can I debug the Juju controller/api? | 14:19 |
simonklb | it's been dying on me several times just today | 14:19 |
cholcombe | jamespage, for cinder it looks like we've settled on a model of 1 charm per driver? Is that correct? | 14:27 |
jamespage | cholcombe, yes - each is a backend, and can be plugged in alongside other drivers | 14:28 |
lazyPower | mbruzek - have a minute to look over something with me? | 14:28 |
cholcombe | jamespage, ok cool. i might have to do some copy pasta for huawei then | 14:28 |
mbruzek | yar | 14:28 |
jamespage | cholcombe, I'd prefer we did not | 14:28 |
jamespage | cholcombe, the right way is to write a new cinder-backend base layer and use that :-) | 14:29 |
cholcombe | oh? | 14:29 |
cholcombe | ah yes that's what i meant hehe | 14:29 |
jamespage | then the next one that comes along is minimal effort | 14:29 |
jamespage | cholcombe, ok good :-) | 14:29 |
jamespage | cholcombe, you might be trail blazing a bit here | 14:29 |
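What that could look like in practice, as a sketch — the cinder-backend layer name is the one being proposed above, not necessarily something that exists yet, and the charm name is illustrative:

    # layer.yaml of a hypothetical cinder-huawei charm reusing a shared base layer
    cat > layer.yaml <<'EOF'
    includes:
      - layer:basic
      - layer:cinder-backend   # assumed shared layer carrying the cinder backend plumbing
    EOF
    charm build   # charm-tools assembles the layers into a deployable charm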
thedac | Good morning | 14:29 |
lazyPower | mbruzek https://github.com/chuckbutler/layer-etcd/commit/1b8648d41fa336496efbff47e6a643e550361ee6 | 14:30 |
lazyPower | this is a little frameworky script i threw together to lend a hand preparing resources offline. similar to what a make fat target would do, but the intention is not to include these files with the charm traditionally via charm push, but to attach them as resources | 14:30 |
lazyPower | I wanted your input on this before i ran down the remaining changes, moving it from curl'ing on the unit to using resource-get, etc. | 14:31 |
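For context, a rough sketch of the flow being described, assuming an etcd tarball resource declared in the charm's metadata.yaml (names and versions are illustrative; juju attach was the 2.0-beta-era command for uploading resources):

    # operator side: upload the pre-fetched blob as a charm resource
    juju attach etcd etcd=./etcd-v2.3.0-linux-amd64.tar.gz
    # hook side: fetch the attached resource instead of curl'ing from the unit
    tarball=$(resource-get etcd) || { status-set blocked "etcd resource missing"; exit 0; }
    tar -xzf "$tarball" -C /opt/etcd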
jcastro | popey: what was the issue specifically? I would like to fix it | 14:32 |
popey | jcastro: the doc says to install an unavailable package | 14:33 |
popey | 11:20 < popey> https://jujucharms.com/get-started - these instructions do not work on 16.04 | 14:34 |
popey | 11:21 < popey> E: Unable to locate package juju-quickstart | 14:34 |
popey | 11:23 < popey> installing juju-core results in no juju binary being installed either. | 14:34 |
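For reference, juju-quickstart never made it to xenial; what generally works there (hedged — package layout as of the 2.0 betas) is just the juju metapackage:

    sudo apt update
    sudo apt install juju    # on 16.04 this pulls in the juju 2.0 beta packages
    juju version             # sanity check before bootstrapping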
jamespage | beisner, we need to switch post-deploy thingy to use mac of added interface, to deal with the network renaming in xenial... | 14:34 |
jcastro | ack | 14:34 |
jcastro | popey: ok, I see the problem, marco's fixing it now, ta | 14:35 |
beisner | jamespage, yes, what's there was a quick fix. we really need to pluck the mojo utils thing into a library proper and be able to use that outside of both o-c-t and mojo. | 14:35 |
jcastro | that page is supposed to be revved. :-/ | 14:35 |
beisner | jamespage, b/c it is fixed much more elegantly and pythonic in the mojo helpers | 14:35 |
jamespage | beisner, I'm sure it is ... | 14:35 |
beisner | to use mac | 14:35 |
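A small sketch of the MAC-based lookup being discussed, which sidesteps xenial's predictable-interface renaming (the MAC value is an example):

    MAC="52:54:00:12:34:56"   # take this from the interface that was added
    for dev in /sys/class/net/*; do
        if [ "$(cat "$dev/address")" = "$MAC" ]; then
            basename "$dev"   # prints whatever name udev assigned, e.g. ens7
        fi
    done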
simonklb | I got this from juju-db on machine-0: checkpoint-server: checkpoint server error: No space left on device | 14:36 |
simonklb | still got space left on the host machine | 14:36 |
jcastro | popey: that page was for trusty, apparently nobody bothered to check if it worked with xenial | 14:44 |
popey | oops | 14:44 |
popey | jcastro: it doesn't :) | 14:45 |
jcastro | balloons: heya, we should add that to the release checklist. | 14:47 |
simonklb | looks like the container gets full; I traced it to the /var/lib/juju/db folder | 15:01 |
simonklb | a bunch of collection-*.wt files | 15:01 |
simonklb | can anyone help me find what is being written? | 15:03 |
simonklb | this happens when I deploy a new charm | 15:03 |
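A quick way to confirm where the space is going with the LXD provider, assuming the controller runs in a local container (container names will differ per deployment):

    # from the LXD host: find the controller container, then check its disk usage
    lxc list | grep juju
    lxc exec <controller-container> -- df -h /
    lxc exec <controller-container> -- du -sh /var/lib/juju/db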
plars | ahasenack: you know that issue with lxc instances of xenial on trusty with juju/maas I was talking to you about last week? It looks like it was choking on systemd with a non-systemd host. So I tried to update my lxc-host in maas to deploy with xenial and hit something I think you've run into also: https://bugs.launchpad.net/maas/+bug/1419041 | 16:29 |
mup | Bug #1419041: bootstrap failing with gomaasapi 400 when updating MAAS images <landscape> <oil> <oil-bug-1372407> <MAAS:Fix Released> <https://launchpad.net/bugs/1419041> | 16:29 |
plars | ahasenack: oh wait, no I think this is the one: https://bugs.launchpad.net/maas/+bug/1537095 | 16:29 |
mup | Bug #1537095: [1.9.0] 400 Error juju bootstrapping missing images <landscape> <oil> <MAAS:In Progress by allenap> <MAAS 1.9:New> <https://launchpad.net/bugs/1537095> | 16:29 |
suchvenu | Hi | 17:07 |
suchvenu | Is it possible to change the status of a bug from Fix Released to Fix Committed? | 17:09 |
suchvenu | For a recommended charm, if I need to do some updates can't I use the same bug? I changed the status from Fix Released to Fix Committed, but it doesn't get reflected in the Review Queue. | 17:17 |
marcoceppi | suchvenu: no, to do updates you need to open a merge request | 17:40 |
suchvenu | Hi Marcoceppi | 17:43 |
suchvenu | ok, this is the branch where I have pushed my deployable charm | 17:43 |
suchvenu | https://code.launchpad.net/~ibmcharmers/charms/trusty/ibm-db2/trunk | 17:44 |
suchvenu | and layered charm is at lp:~ibmcharmers/charms/trusty/layer-ibm-db2/trunk | 17:44 |
marcoceppi | suchvenu: but you're trying to update db2 in the charmstore, correct? | 17:45 |
suchvenu | yes right. It don;t need any review ? | 17:45 |
suchvenu | So i need to merge from ibmcharmers branch to charmers branch , right ? | 17:48 |
marcoceppi | suchvenu: yes | 17:48 |
marcoceppi | suchvenu: in doing so, that will create a merge that will automatically appear in the review queue | 17:48 |
suchvenu | Will it be updated in the queue then? | 17:49 |
suchvenu | oh ok | 17:49 |
suchvenu | and no need to attach a bug? | 17:49 |
marcoceppi | suchvenu: correct | 17:49 |
marcoceppi | bugs are for charms that don't exist in the system yet | 17:50 |
suchvenu | ok | 17:50 |
suchvenu | And any reviewers to be added? | 17:50 |
suchvenu | I had changed the bug status from Fix Released to Fix Committed. Should I change it back? | 17:52 |
marcoceppi | suchvenu: link? | 17:53 |
suchvenu | https://code.launchpad.net/~ibmcharmers/charms/trusty/ibm-db2/trunk | 17:55 |
suchvenu | https://bugs.launchpad.net/charms/+bug/1477057 | 18:01 |
mup | Bug #1477057: New Charm: IBM DB2 <Juju Charms Collection:Fix Committed> <https://launchpad.net/bugs/1477057> | 18:01 |
suchvenu | marcoceppi, should I change the status of the bug here? | 18:07 |
marcoceppi | suchvenu: you don't need to touch the bug anymore | 18:17 |
marcoceppi | suchvenu: you need to go here, https://code.launchpad.net/~ibmcharmers/charms/trusty/ibm-db2/trunk and click "Propose for merging" | 18:18 |
suchvenu | which should be the target branch? | 18:20 |
marcoceppi | suchvenu: cs:charms/trusty/ibm-db2 | 18:21 |
marcoceppi | err | 18:21 |
marcoceppi | lp:charms/trusty/ibm-db2 | 18:21 |
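If the web UI is awkward, the same proposal can also be made from a local checkout with bzr's Launchpad plugin — a sketch, assuming the branch is already pushed as above:

    bzr branch lp:~ibmcharmers/charms/trusty/ibm-db2/trunk ibm-db2
    cd ibm-db2
    bzr lp-propose-merge lp:charms/trusty/ibm-db2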
suchvenu | Do I need to add any reviewers? | 18:28 |
mbruzek | suchvenu: No you do not have to add reviewers. This Merge Request will get picked up in the review queue | 18:38 |
mbruzek | suchvenu: As long as charmers are on the merge you should be good | 18:45 |
suchvenu | https://code.launchpad.net/~ibmcharmers/charms/trusty/ibm-db2/trunk/+merge/294153 | 18:47 |
suchvenu | created merge request | 18:48 |
suchvenu | Thanks Marcoceppi for your help! | 18:57 |
thedac | dpb: tribaal: +2 on the swift-storage change after manual tests run | 22:26 |