[01:40] <mup> Bug #1647897 opened: Juju bootstrap proxy support <juju-core:New> <https://launchpad.net/bugs/1647897>
[09:56] <gsamfira> hello folks. We have about 120 nodes constantly being deployed and redeployed by juju in a CI
[09:56] <gsamfira> this environment has been online for a while now
[09:56] <gsamfira> as a result, some collections have grown quite a bit
[09:56] <gsamfira> one of them is: presence.beings
[09:56] <gsamfira> juju:PRIMARY> db.presence.beings.count()
[09:56] <gsamfira> 1646080
[09:57] <gsamfira> the other was logs.logs (which had about 14,000,000 entries)
[09:57] <perrito666> gsamfira: hey
[09:57] <gsamfira> hey perrito666
[09:57] <perrito666> gsamfira: logs should be rotated, something is wrong there
[09:57] <gsamfira> I don't think juju got a chance to rotate
[09:58] <gsamfira> the db was under huge load
[09:58] <perrito666> gsamfira: but the rotation is done by juju
[09:58] <perrito666> and by rotation I mean just deletion
[09:58] <gsamfira> mostly because of a bunch of queries against presence.beings
[09:58] <gsamfira> which had no index on model-uuid
[09:58] <gsamfira> and did a COLLSCAN for every query
[09:59] <perrito666> gsamfira: this is 2.x?
[09:59] <gsamfira> I can imagine. But juju kind of stopped working, erroring out with i/o error while talking to mongo
[09:59] <gsamfira> 2.0.1
[09:59] <gsamfira> i/o error and i/o timeout
[10:00] <gsamfira> I had to drop all connections to the state machine port, go into the database and create an index in presence.beings
[10:00] <gsamfira> db.presence.beings.createIndex({"model-uuid": 1})
[10:00] <gsamfira> also on txns
[10:00] <gsamfira> db.txns.createIndex({"s": 1})
[10:00] <gsamfira> just to get rid of a lot of spam in syslog about COLLSCANs
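The difference those two createIndex calls make can be sketched outside Mongo. This is an illustrative simulation only (made-up in-memory data, not Juju's actual documents): a query without an index must touch every document, like Mongo's COLLSCAN, while an index is essentially a lookup table keyed on the queried field.

```python
# Simulated collection: 1000 docs spread over 50 model UUIDs (invented data).
docs = [{"model-uuid": f"uuid-{i % 50}", "seq": i} for i in range(1000)]

# Without an index: a full scan (COLLSCAN) touches all 1000 documents.
collscan = [d for d in docs if d["model-uuid"] == "uuid-7"]

# With an index (analogous to createIndex({"model-uuid": 1})): build the
# lookup table once, then each query touches only the matching documents.
index = {}
for d in docs:
    index.setdefault(d["model-uuid"], []).append(d)
ixscan = index.get("uuid-7", [])

assert collscan == ixscan  # same results, far less work per query
```

With ~1.6 million documents in presence.beings, every unindexed query repeating that full scan explains both the load and the syslog spam.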
[10:01] <perrito666> thumper: babbageclunk any of you might be interested in this?
[10:01] <gsamfira> also dropped all logs and saved 1.3 GB of disk space
[10:01] <perrito666> gsamfira:  lol
[10:01] <thumper> o/
[10:01] <perrito666> so, for how long has this been running?
[10:01] <gsamfira> the state machine is a 16 CPU core, 32 GB of RAM VM hosted on RAID10 10kRPM SAS disks
[10:01] <thumper> gsamfira: logs is a capped collection so size bound
[10:02] <thumper> presence is bollocks
[10:02] <mup> Bug #1647897 changed: Juju bootstrap proxy support <juju-core:Invalid> <https://launchpad.net/bugs/1647897>
[10:02] <thumper> and grows forever
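Since presence.beings grows without bound, a stopgap until Juju prunes it itself is periodic deletion of stale entries. A hypothetical sketch of that idea (in-memory stand-in for the collection; the cutoff and field names are made up, not Juju's actual schema):

```python
# Stand-in for presence.beings: monotonically increasing _id per entry.
beings = [{"_id": i, "model-uuid": "m1"} for i in range(100)]

# Keep only entries at or above some cutoff sequence; everything older goes.
# The rough Mongo equivalent would be a remove() on {_id: {$lt: cutoff}}.
cutoff = 90
beings = [b for b in beings if b["_id"] >= cutoff]
```

This is the same shape of workaround as dropping the logs: reclaim space manually until the server rotates/prunes on its own.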
[10:02] <gsamfira> also might be worth mentioning we have enabled HA on this particular setup
[10:02] <gsamfira> thumper: ouch
[10:02] <gsamfira> one low-hanging fruit we can implement is simply creating indexes on the stuff we query
[10:03] <gsamfira> especially if we query those frequently
[10:03] <gsamfira> the environment has been up for a couple of months I think
[10:03] <gsamfira> lemme check
[10:05] <gsamfira> since October. Roughly 2 months
[10:05] <gsamfira> perrito666: ^
[10:06] <perrito666> gsamfira: tx that is a nice point of data
[10:06] <gsamfira> http://paste.ubuntu.com/23592735/ <-- this might also be of interest
[10:06] <gsamfira> this is a CI environment, a lot of units get torn down and spun up again
[10:06] <gsamfira> so there is a lot of traffic
[10:08] <gsamfira> perrito666:  juju:PRIMARY> db.statuseshistory.count()
[10:08] <gsamfira> 1810513
[10:08] <gsamfira> so you get an idea
[10:08] <gsamfira> :)
[10:09] <perrito666> gsamfira: that is quite a reasonable size for status :) because its also capped but that one seems to be working
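Why a capped collection like statuseshistory stays "a reasonable size" while presence.beings does not can be sketched with a bounded FIFO buffer. Illustrative only: the cap value here is arbitrary, not Juju's or Mongo's actual setting.

```python
from collections import deque

# Capped-collection behaviour: once the cap is hit, each new insert
# evicts the oldest entry, so the collection is size-bound by design.
cap = 5
history = deque(maxlen=cap)
for i in range(12):
    history.append({"status": f"event-{i}"})

# Only the most recent `cap` entries survive; the rest were evicted FIFO.
```

An uncapped collection under the same insert rate just keeps all 12 (or 1.8 million) entries, which is exactly the presence.beings failure mode.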
[10:10] <gsamfira> yup. The environment seems stable again after creating the indexes and cleaning the logs
[10:11] <perrito666> gsamfira: that is good to know :) a quick workaround and low hanging fruit all together
[10:12] <gsamfira> I'll create a patch for the indexes later this week
[10:17] <mgz> voidspace: http://streams.canonical.com/juju/tools/agent/2.0.2/
[10:20] <mup> Bug #1357760 changed: ensure-availability (aka HA) should work with manual provider <cloud-installer> <ha> <landscape> <manual-provider> <manual-story> <juju:Triaged> <juju-core:Won't Fix> <https://launchpad.net/bugs/1357760>
[10:20] <mup> Bug #1493058 changed: ensure-availability fails on GCE <docteam> <enable-ha> <gce-provider> <ha> <jujuqa> <juju:Invalid by thumper> <juju-core:Won't Fix> <juju-core 1.24:Won't Fix> <juju-core 1.25:Won't Fix> <https://launchpad.net/bugs/1493058>
[10:20] <mup> Bug #1512569 changed: UniterSuite.TestRebootNowKillsHook fails with: uniter still alive <ci> <test-failure> <unit-tests> <juju:Triaged> <juju-core:Won't Fix> <https://launchpad.net/bugs/1512569>
[10:20] <mgz> alexisb: can you backport your fix for bug 1631369 for 2.0 branch as well?
[10:20] <mup> Bug #1631369: ExpireSuite.TestClaim_ExpiryInFuture_TimePasses took way too long <ci> <intermittent-failure> <regression> <unit-tests> <juju:In Progress by alexis-bruemmer> <https://launchpad.net/bugs/1631369>
[10:21] <alexisb> mgz, sure
[10:44] <axw_> perrito666: I've created https://github.com/juju/juju/tree/feature-persistent-storage
[10:44] <perrito666> axw_: ack
[10:44] <perrito666> axw_: we should propose against that I presume
[10:45] <axw_> perrito666: yup, so we don't mess up 2.1
[10:45] <perrito666> we don't mess up things, we awesome-ize them
[10:47] <mup> Bug #1557726 changed: Restore fails on some openstacks like prodstack <backup-restore> <jujuqa> <openstack-provider> <juju:Fix Released by hduran-8> <juju 2.0:Fix Released by reedobrien> <juju-core:Won't Fix> <https://launchpad.net/bugs/1557726>
[12:46] <natefinch> thumper: https://bugs.launchpad.net/juju/+bug/1648063
[12:47] <mup> Bug #1648063: kill-controller removes machines from migrated model <model-migration> <juju:Triaged> <https://launchpad.net/bugs/1648063>
[12:56] <voidspace> jam: from within a hook context (debug-hooks) is there a way to get a list of all the bindings defined for the charm?
[12:57] <voidspace> frobware: ^^^ do you know?
[13:00] <jam> voidspace: generally it will be the things listed under "provides" or "requires" in charm-metadata.yaml
[13:00] <frobware> voidspace: https://github.com/frobware/testcharms
[13:01] <voidspace> jam: referring to the charm is what I was hoping to avoid
[13:01] <voidspace> jam: but thanks
[13:01] <voidspace> :-)
[13:15] <rick_h> voidspace: :/ had hoped but nothing here of use https://jujucharms.com/docs/2.0/authors-hook-environment
[13:16] <voidspace> rick_h: it would be a nice tool to have
[13:16] <voidspace> rick_h: however...
[13:16] <voidspace> rick_h: I have now torn down that environment and frobware has a test charm that I can deploy locally
[13:16] <voidspace> rick_h: that defines useful stuff explicitly for playing with bindings
[13:17] <rick_h> voidspace: yea, all good. Just noting that I can't find anything useful for what you were asking
[13:17] <voidspace> rick_h: thanks for looking
[13:17] <voidspace> rick_h: but frobware has useful tools for playing with network-get
[13:25] <natefinch> Review me? 2 line change: https://github.com/juju/juju/pull/6670
[13:56] <thumper> babbageclunk: https://github.com/juju/juju/pull/6669
[15:59] <natefinch> perrito666: http://reports.vapour.ws/releases/4631/job/functional-backup-restore/attempt/4900
[16:50] <mup> Bug #1625624 changed: juju 2 doesn't remove openstack security groups <ci> <landscape> <openstack-provider> <sts> <juju:Fix Committed by gz> <juju 2.0:Fix Committed by gnuoy> <juju-core:Fix Released by gnuoy> <https://launchpad.net/bugs/1625624>