[00:01] <mwhudson> yeah, i was profiling the linker and it stabs you in the face really
[00:02] <davecheney> it's all work that the 1.4 compilers never did
[00:02] <mwhudson> indeed
[00:02] <davecheney> and in my testing, that accounts for a 3x slowdown
[00:03] <mwhudson> GOGC=off makes things about 40% faster in my tests i think
[00:03] <mwhudson> varies a bit with case, of course
[00:03] <davecheney> yup
[00:03] <mwhudson> not sure what a good sort of gc is for this sort of thing really
[00:03] <davecheney> none, it's a pathological case
[00:04] <davecheney> gc only works for allocations you intend to free
[00:04] <davecheney> preferably shortly
[00:04] <mwhudson> yeah
[00:04] <davecheney> i suspect the biggest problem to GOGC=off or using off heap memory will be political
[00:04] <mwhudson> generational would help a bit, but all the new Nodes get old->young pointers immediately
[00:04] <mwhudson> which kinda screws over the generational hypothesis
[00:05] <davecheney> generational collectors are good for avoiding heap fragmentation
[00:05] <davecheney> apart from that, they actually don't help
[00:05] <davecheney> as anything which uses a lot of memory, by definition, has big data structures which are long lived
[00:05] <davecheney> think Redis, memcached, Cassandra
[00:05] <mwhudson> they help if you generate piles of short lived garbage
[00:05] <davecheney> all of those get promoted, or are reachable from the promoted set
[00:05] <mwhudson> e.g. if you are writing python
[00:06] <davecheney> they are good for request/response servers
[00:06]  * mwhudson spots some java experience
[00:06] <davecheney> where they generate a lot of allocations relative to the incoming connection, then free them at the end
[00:06] <davecheney> mwhudson: i'll show you the place that Java touched me, later
[00:10] <davecheney> welp, keith is interested at least
[00:10] <davecheney> i don't really care about the solution
[00:10] <davecheney> only that they engage with the idea that the gc is not helping the compiler
[00:39] <davecheney> menn0: thumper http://paste.ubuntu.com/11234864/
[00:39] <davecheney> uh oh
[00:39] <davecheney> this is our old friend
[00:39]  * menn0 looks
[00:40] <menn0> davecheney: where are you seeing that?
[00:40] <davecheney> failure on ppc64
[00:40] <davecheney> .../jujud/agent
[00:54] <menn0> davecheney: is it consistent?
[01:39] <davecheney> menn0: yup
[01:40] <davecheney> i smell a data race
[01:41] <thumper> anastasiamac, wallyworld: I need to run off in about 10 minutes, pepper is booked in for a hair cut
[01:41] <thumper> :)
[01:52] <mwhudson> davecheney: turns out being in a hangout makes compiles even slower
[01:55] <natefinch> hangouts DEFINITELY slow down compilation (and everything else)
[01:55] <davecheney> menn0: go test -race .../jujud/agent
[01:55] <davecheney> OK: 80 passed, 1 skipped
[01:55] <davecheney> PASS
[01:55] <davecheney> Found 13 data race(s)
[01:56] <natefinch> since it's a prime number, they cancel out, right?
[02:00] <davecheney> natefinch: so, should I add more races, or take some away ?
[02:00] <natefinch> davecheney: no no, if you fix one, it won't be prime, and they won't cancel out anymore
[02:00]  * natefinch is sure that's a thing.
[02:01] <davecheney>   github.com/juju/juju/apiserver.(*changeCertConn).Read()
[02:01] <davecheney>       <autogenerated>:37 +0xa3
[02:01] <davecheney> did someone recently add support to the apiserver to change certificates on the fly ?
[02:01] <natefinch> davecheney: on the upside, coverage of jujud/agent is better than I thought, 78.5% - which means we're probably only missing ~3 data races not covered in tests
[02:02] <davecheney> natefinch: superb
[02:09] <menn0> davecheney: wallyworld added the cert swapping thing
[02:09] <menn0> it's needed to support upgrades IIRC
[02:09] <wallyworld> recently = 1.22
[02:09] <wallyworld> it's needed to support secure connections to state servers from cloud nodes
[02:11] <davecheney> excellent
[02:11] <davecheney> it's not clear if those races are just in the tests
[02:11]  * davecheney throws table at mocking functions at test time
[02:12] <davecheney> but it's certainly a big problem
[02:12] <davecheney> data race == program is unknowable
[02:12] <davecheney> that could explain the runtime crashes we see
[02:17] <davecheney> https://bugs.launchpad.net/juju-core/+bug/1456851
[02:17] <mup> Bug #1456851: cmd/jujud/agent: multiple data races detected <juju-core:Confirmed> <https://launchpad.net/bugs/1456851>
[02:42] <mup> Bug #1222413 changed: openstack provider Instances suppresses errors <openstack-provider> <tech-debt> <juju-core:Fix Released by gz> <juju-core 1.24:Fix Released by gz> <https://launchpad.net/bugs/1222413>
[02:42] <mup> Bug #1450129 changed: vsphere provider is missing firewaller, networking implementations <tech-debt> <vsphere-provider> <juju-core:Fix Released by ericsnowcurrently> <juju-core 1.24:Fix Released by ericsnowcurrently> <https://launchpad.net/bugs/1450129>
[02:42] <mup> Bug #1450701 changed: Juju CLI compatibility option <status> <juju-core:Fix Released by wallyworld> <juju-core 1.24:Fix Released by wallyworld> <https://launchpad.net/bugs/1450701>
[02:42] <mup> Bug #1451283 changed: deployer sometimes fails with a unit status not found error <blocker> <ci> <intermittent-failure> <landscape> <regression> <juju-core:Fix Released by wallyworld> <juju-core 1.24:Fix Released by wallyworld> <https://launchpad.net/bugs/1451283>
[02:42] <mup> Bug #1452114 changed: Unnecessary errors emitted during init system discovery <systemd> <upstart> <vivid> <juju-core:Fix Released by wwitzel3> <juju-core 1.23:Fix Released by wwitzel3> <juju-core 1.24:Fix Released by wwitzel3> <https://launchpad.net/bugs/1452114>
[02:42] <mup> Bug #1452535 changed: default storage constraints are not quite correct <juju-core:Fix Released by wallyworld> <juju-core 1.24:Fix Released by wallyworld> <https://launchpad.net/bugs/1452535>
[02:42] <mup> Bug #1453801 changed: /var/spool/rsyslog grows without bound <stakeholder> <juju-core:Fix Released by axwalk> <juju-core 1.22:Fix Committed by axwalk> <juju-core 1.23:Fix Committed by axwalk> <juju-core 1.24:Fix Released by axwalk> <https://launchpad.net/bugs/1453801>
[02:42] <mup> Bug #1454043 changed: InstancePoller compares wrong Address list and always requests updated state Addresses <cpec> <network> <stakeholder> <juju-core:Fix Released by thumper>
[02:42] <mup> <juju-core 1.22:Fix Committed by thumper> <juju-core 1.23:Fix Committed by thumper> <juju-core 1.24:Fix Released by thumper> <https://launchpad.net/bugs/1454043>
[02:42] <mup> Bug #1454676 changed: failed to retrieve the template to clone - 500 Internal Server error - error creating container juju-trusty-lxc-template - <oil> <juju-core:Fix Released by wallyworld> <juju-core 1.24:Fix Released by wallyworld> <https://launchpad.net/bugs/1454676>
[02:42] <mup> Bug #1454829 changed: 1.20.x client cannot communicate with 1.22.x env <blocker> <compatibility> <status> <juju-core:Fix Released by wallyworld> <juju-core 1.22:Fix
[02:42] <mup> Committed by wallyworld> <juju-core 1.23:Fix Committed by wallyworld> <juju-core 1.24:Fix Released by wallyworld> <https://launchpad.net/bugs/1454829>
[02:42] <mup> Bug #1456851 was opened: cmd/jujud/agent: multiple data races detected <juju-core:Confirmed> <https://launchpad.net/bugs/1456851>
[02:44] <menn0> thumper: ok to merge this? https://github.com/juju/txn/pull/10
[02:44] <menn0> no landing bot it seems
[02:44] <menn0> thumper: i have permission to merge, just want someone else to agree
[02:46] <davecheney> thumper: um, we have a serious problem
[02:46] <davecheney> the cert change listener basically doesn't work
[02:47] <davecheney> and cannot be fixed in its current form
[02:53] <davecheney> thumper: menn0 https://bugs.launchpad.net/juju-core/+bug/1456857
[02:53] <mup> Bug #1456857: apiserver: updateCert has data race, corrupts certificate information <juju-core:New> <https://launchpad.net/bugs/1456857>
[02:54] <menn0> wallyworld: ^^^
[03:12] <mup> Bug #1415176 changed: debug-hooks exit 1 , doesn't mark hook as failed <cts> <debug-hooks> <juju-core:Fix Released by hduran-8> <juju-core 1.23:Fix Released by hduran-8> <juju-core 1.24:Fix Released by hduran-8> <https://launchpad.net/bugs/1415176>
[03:12] <mup> Bug #1420057 changed: agents see "too many open files" errors after many failed API attempts <juju-core:Fix Released by dave-cheney> <juju-core 1.22:Fix Committed
[03:12] <mup> by dave-cheney> <juju-core 1.23:Fix Committed by dave-cheney> <juju-core 1.24:Fix Released by dave-cheney> <https://launchpad.net/bugs/1420057>
[03:12] <mup> Bug #1429790 changed: debug-hooks not working with manually provisioned machines <debug-hooks> <manual-provider> <juju-core:Fix Released by alesstimec> <juju-core 1.24:Fix Released by alesstimec> <https://launchpad.net/bugs/1429790>
[03:12] <mup> Bug #1437266 changed: Bootstrap node occasionally panicing with "not a valid unit name" <deploy> <destroy-machine> <destroy-service> <juju-core:Fix Released by themue> <juju-core 1.24:Fix Released by themue> <https://launchpad.net/bugs/1437266>
[03:12] <mup> Bug #1441206 changed: Container destruction doesn't mark IP addresses as Dead <destroy-unit> <network> <juju-core:Fix Released by mfoord> <juju-core 1.24:Fix Released by mfoord> <https://launchpad.net/bugs/1441206>
[03:12] <mup> Bug #1441913 changed: juju upgrade-juju failed to configure mongodb replicasets <canonical-is> <mongodb> <upgrade-juju> <juju-core:Fix Released by menno.smits> <juju-core 1.23:Fix Committed by menno.smits> <juju-core 1.24:Fix Released by menno.smits> <https://launchpad.net/bugs/1441913>
[03:12] <mup> Bug #1442012 changed: persist iptables rules / routes for addressable containers across host reboots <addressability> <network> <juju-core:Fix Released by dooferlad> <juju-core 1.23:Fix Committed by dooferlad> <juju-core 1.24:Fix Released by dooferlad> <https://launchpad.net/bugs/1442012>
[03:12] <mup> Bug #1444861 changed: Juju 1.23-beta4 introduces ssh key bug when used w/ DHX <blocker> <dhx> <regression> <ssh> <juju-core:Fix Released by hduran-8> <juju-core 1.23:Fix Released by hduran-8> <juju-core 1.24:Fix Released by hduran-8> <https://launchpad.net/bugs/1444861>
[03:12] <mup> Bug #1446264 changed: joyent machines get stuck in provisioning <bootstrap> <joyent-provider> <reliability> <repeatability> <juju-ci-tools:Fix Released by gz> <juju-core:Fix Released by gz> <juju-core 1.23:Fix Committed by gz> <juju-core 1.24:Fix Released by gz> <https://launchpad.net/bugs/1446264>
[03:12] <mup> Bug #1449301 changed: storage: storage cannot be destroyed <storage> <tech-debt> <juju-core:Fix Released by axwalk> <juju-core 1.24:Fix Released by axwalk> <https://launchpad.net/bugs/1449301>
[03:12] <mup> Bug #1449390 changed: storage: charms must wait for storage to be attached before running "install" hook <storage> <tech-debt> <juju-core:Fix Released by axwalk> <juju-core 1.24:Fix Released by axwalk> <https://launchpad.net/bugs/1449390>
[03:12] <mup> Bug #1449822 changed: storage: storage-detached should be storage-detaching <storage> <tech-debt> <juju-core:Fix Released by axwalk> <juju-core 1.24:Fix Released by axwalk> <https://launchpad.net/bugs/1449822>
[03:12] <mup> Bug #1450118 changed: vsphere provider should use OVA instead of OVF from cloud images. <tech-debt> <vsphere-provider> <juju-core:Fix Released by ericsnowcurrently> <juju-core 1.24:Fix Released by ericsnowcurrently> <https://launchpad.net/bugs/1450118>
[03:12] <mup> Bug #1451674 changed: Broken DB field ordering when upgrading to Juju compiled with Go 1.3+ <golang> <upgrade-juju> <vivid> <juju-core:Fix Released by menno.smits> <juju-core 1.23:Fix Released by menno.smits> <juju-core 1.24:Fix Released by menno.smits> <https://launchpad.net/bugs/1451674>
[03:12] <mup> Bug #1452113 changed: log files are lost when agents are restarted under systemd  <regression> <systemd> <vivid> <juju-core:Fix Released by ericsnowcurrently> <juju-core 1.23:Fix Released by ericsnowcurrently> <juju-core 1.24:Fix Released by ericsnowcurrently> <https://launchpad.net/bugs/1452113>
[03:12] <mup> Bug #1452207 changed: worker/uniter: charm does not install properly if storage isn't provisioned before uniter starts <storage> <juju-core:Fix Released by axwalk> <juju-core 1.24:Fix Released by axwalk> <https://launchpad.net/bugs/1452207>
[03:12] <mup> Bug #1452511 changed: jujud does not restart after upgrade-juju on systemd hosts <regression> <systemd> <vivid> <juju-core:Fix Released by ericsnowcurrently> <juju-core 1.23:Fix Released by ericsnowcurrently> <juju-core 1.24:Fix Released by ericsnowcurrently> <https://launchpad.net/bugs/1452511>
[03:12] <mup> Bug #1454481 changed: juju log spams ERROR juju.worker.diskmanager lsblk.go:111 error checking if "sr0" is in use: open /dev/sr0: no medium found <juju-core:Fix Released by axwalk> <juju-core 1.24:Fix Released by axwalk> <https://launchpad.net/bugs/1454481>
[03:12] <mup> Bug #1454599 changed: firewaller gets an exception if a machine is not provisioned <cpec> <stakeholder> <juju-core:Fix Released by hduran-8> <juju-core 1.22:Fix
[03:12] <mup> Committed by hduran-8> <juju-core 1.23:Fix Committed by hduran-8> <juju-core 1.24:Fix Committed by hduran-8> <https://launchpad.net/bugs/1454599>
[03:12] <mup> Bug #1454870 changed: Client last login time writes should not use mgo.txn <tech-debt> <juju-core:Fix Released by thumper> <juju-core 1.22:Fix Committed by thumper> <juju-core 1.23:Fix Committed by thumper> <juju-core 1.24:Fix Released by thumper> <https://launchpad.net/bugs/1454870>
[03:12] <mup> Bug #1456857 was opened: apiserver: updateCertificate has data race, corrupts certificate information <juju-core:New> <https://launchpad.net/bugs/1456857>
[03:12] <thumper> spammy mup
[03:13] <thumper> menn0: my guess is no bot
[03:13] <thumper> menn0: so just merge it (assuming all tests pass :-)
[03:19] <menn0> thumper: yep they do
[03:30] <davecheney> protip: whenever you use PatchValue, you're probably creating a data race
[03:30] <davecheney> please don't use PatchValue
[03:34] <wallyworld> davecheney: what do you mean doesn't work? if it didn't work, lxc image caching would not work
[03:35] <wallyworld> data race != doesn't work
[03:35] <marcoceppi> hey, emergency for demo
[03:35] <marcoceppi> getting this error
[03:35] <marcoceppi> WARNING failed to load charm at "/home/ubuntu/charms/trusty/rally": YAML error: line 20: did not find expected key
[03:35] <marcoceppi> kind of cryptic error
[03:36] <wallyworld> what version of juju? what is charm yaml?
[03:37] <marcoceppi> 1.24-beta3
[03:37] <marcoceppi> https://github.com/juju-solutions/rally
[03:39] <wallyworld> marcoceppi: sadly the error is from inside the yaml lib and it has been provided with no context :-(
[03:40] <wallyworld> marcoceppi: looks like action yaml
[03:40] <marcoceppi> wallyworld: yeah, missing "
[03:41] <marcoceppi> wallyworld: thanks@
[03:41] <marcoceppi> never have seen that error before
[03:41] <wallyworld> np
[03:41] <marcoceppi> and proof didn't pick it up
[03:41] <wallyworld> something to fix :-)
[03:42] <mup> Bug #1454466 was opened: Deployment times out waiting for relation convergence - neutron-gateway in installing state <oil> <juju-core:New> <juju-deployer:Invalid> <neutron-gateway (Juju Charms Collection):New> <https://launchpad.net/bugs/1454466>
[03:46] <natefinch> waigani_: you around?
[03:47] <waigani_> natefinch: yep
[03:47] <natefinch> waigani_: #1 - thanks for fixing my dumb windows bug in the log rotation tests
[03:47] <waigani_> natefinch: np :)
[03:48] <natefinch> waigani_: #2 - since you wrote a script to run CI, maybe you can help me get my CI script running.... what should "job_name" be?  my new test is log_rotation.py - does that mean the job name should be log_rotation?  or log_rotation.py  or something else entirely?
[03:49] <waigani_> natefinch: from what I can see, job_name is just used to name the environment
[03:49] <waigani_> natefinch: so the CI guys can keep track of what envs are for what jobs I'm guessing
[03:50] <natefinch> ahh ok.. do you know if there's a way to run just my one test, and not all of CI?
[03:50] <waigani_> natefinch: what's your one test?
[03:51] <jam> menn0: I had a couple quick thoughts about your scanner patch, are you around?
[03:51] <waigani_> natefinch: is your script up somewhere I can have a squizz?
[03:51] <menn0> jam: yep I'm here
[03:52] <natefinch> waigani_: I wrote a python script following the pattern set by some of the other tests, like assess_bootstrap.py
[03:52] <natefinch> waigani_: here's the code: https://gist.github.com/natefinch/e377eacd6b2316b2a884
[03:52] <jam> menn0: so one thing I was thinking about is that since we have to read the whole DB, doing it every 2hrs really is a bit too often (i think). so I was trying to think of ways to make the trigger smarter
[03:52] <jam> menn0: one that I thought could be really good is to track how many txns are in the collection and only prune when they grow by a certain amount
[03:52] <jam> like say 2x
[03:53] <jam> menn0: as this is really "don't let TXNs grow without bound and take up 99.99% of the total DB size"
[03:53] <jam> but since there is one TXN for every other doc, it is fair to expect TXN to be as much as 50% of the total DB size.
[03:53] <jam> ideally we would record the size of the collection after the last pruning
[03:53] <natefinch> waigani_: it requires a new test charm that exists here for now: https://github.com/natefinch/fill-logs
[03:53] <jam> and then only once it has bloated do the next GC
[03:54] <jam> as a poor man's approximation we could track that info in memory
[03:54] <natefinch> waigani_: just needs to be manually copied into repository (again, just for now)
[03:54] <jam> (always GC on the first inspection, track the final size of the DB, and then only GC again once the count() is 2x the original count())
[03:54] <jam> or whatever is the cheapest thing to measure.
[03:55] <jam> We could do db.txn.stats() but I'm not sure if that shrinks after a big prune
[03:55] <menn0> jam: hmmm ok
[03:55] <natefinch> waigani_: ah crap, the script requires modifications to the ci-tools that aren't pushed yet
[03:55] <menn0> jam: i'm pretty sure the stats tell you the allocated size and the in-use size
[03:55] <waigani_> natefinch: so, when it's ready, the test charm should probably live here: lp:juju-ci-tools/repository
[03:56] <jam> menn0: so db.collection.stats(), if it accurately tracks what pages are in use, would probably be a very cheap check
[03:56] <natefinch> waigani_: yep, got that. But I need to be able to test it to know if it's ready :)    Actually, I've done a lot of manual testing on the charm, so pretty sure it's fine.
[03:56] <jam> menn0: db.txn.count() *could* be cheap depending on how mongo tracks documents.
[03:56] <menn0> jam: count is very cheap
[03:56] <menn0> jam: it's tracked separately
[03:56] <jam> menn0: I think in mongo 3 because of MVCC it changes to be not so cheap
[03:57] <natefinch> waigani_: deploy_job is giving me ImportError: No module named boto   .... where do I get boto, pip?
[03:57] <menn0> jam: that makes sense
[03:57] <waigani_> natefinch: sure. With the JES stuff, I took the approach of writing a new job - using deploy_job.py as a template
[03:57] <jam> menn0: anyway, if count is cheap today we can go with it
[03:57] <natefinch> waigani_: oh interesting
[03:57] <jam> as we're a fair bit off from mongo 3, perhaps add a code comment to re-evaluate whether this stays cheap.
[03:57]  * menn0 nods
[03:57]  * natefinch grumbles that this whole "write a CI test" thing would be a hell of a lot easier if there were documentation about how to do it.
[03:58] <waigani_> natefinch: so I've got deploy_jes_job.py - which builds ontop of deploy_job.py
[03:58] <menn0> jam: well the pruning change - mostly as you reviewed it - is merging for 1.22 as we speak :)
[03:58] <menn0> jam: but i can iterate
[03:58] <waigani_> natefinch: so maybe you could do something similar?
[03:59] <menn0> jam: basically what you're after is: only prune if there's actual useful gains to be made
[03:59] <menn0> so that we're not loading the whole DB unnecessarily
[03:59] <jam> menn0: k. so my thoughts are generally that we want to have *some* GC so that things don't grow without bound, but obviously we can functionally cope with a fair amount of garbage, and we don't want to saturate our system just checking for garbage that isn't there.
[03:59] <jam> menn0: right
[03:59] <jam> menn0: the fact is that our current GC is really expensive because we aren't doing it incrementally
[04:00] <jam> menn0: I was going to say we could just drop the poll time to 1/day or 1/week even
[04:00] <jam> but doing it when we expect to be able to clean things seems a better path
[04:00] <natefinch> waigani_: yeah, I can look into that
[04:01] <natefinch> I love the way every time something goes wrong with a python script I get a huge useless stack trace
[04:01] <menn0> jam: so to recap: track the count of the txns collection after each prune, and only try to prune if the count grows to 2x the previous value
[04:02] <menn0> jam: (and prune the first time if there's no count recorded)
[04:03] <wallyworld> natefinch: as opposed to a panic in go? the stack trace is useful for diagnosis :-)
[04:03] <natefinch> so, the CI script gives me "ImportError: No module named boto"; pip install boto gives me "ImportError: cannot import name IncompleteRead"  ...  my kingdom for a statically linked binary that just f'ing works.
[04:03] <jam> menn0: right. IMO ideally we would save the count after the last GC so that we don't always GC on startup
[04:03] <jam> but we can live with that
[04:03] <natefinch> wallyworld: a stack trace from pip is useless to the end user
[04:03] <natefinch> (i.e. me)
[04:03] <natefinch> it's just ugly
[04:04] <wallyworld> so are go panics
[04:04] <jam> menn0: also, I wanted to make sure that you don't GC immediately on startup (while load is the greatest on the machine), but I'm pretty sure you don't
[04:04] <natefinch> wallyworld: your code shouldn't panic unless there's something hugely drastically wrong
[04:04] <jam> menn0: can you make sure there is a test that you don't GC immediately ?
[04:04] <menn0> jam: the first prune doesn't happen until 2hrs after startup anyway
[04:04] <natefinch> wallyworld: like, programmer error, generally
[04:04] <natefinch> wallyworld: python scripts throw exceptions if you look at them the wrong way
[04:04] <wallyworld> natefinch: same with python - the programmer is just lazy not to deal with the error
[04:04] <menn0> jam: and if the count-at-last-prune is kept in the DB then it'll only happen when we want it to
[04:04] <natefinch> wallyworld: then almost every python programmer ever is lazy
[04:05] <jam> menn0: so in a healthy system (once we've fixed the address updater bug), I don't think we'll generate much garbage.
[04:05] <jam> menn0: like, I would expect it to take us weeks to actually grow to 2x
[04:05] <wallyworld> natefinch: and go programmers aren't?
[04:05] <menn0> jam: agreed
[04:05] <jam> menn0: I'd probably like INFO level logs that a GC is actually started (since we wouldn't be GC every 2 hrs)
[04:05] <jam> menn0: the only reason not to record it in the DB is that we don't have a great place to put the info.
[04:06] <menn0> jam: there is a log at debug but I can bump it
[04:06] <natefinch> wallyworld: sure they are.  But the default in go is not to show a huge useless ugly stack trace to your users.
[04:06] <menn0> jam: I might add a txns.prune collection or something
[04:06] <natefinch> wallyworld: regardless.... do you know how to fix this problem?  I presume the proper way to get boto is through pip, and yet my pip seems sad.
[04:06] <menn0> jam: or better yet txns.gc
[04:06] <wallyworld> natefinch: same with python if you wrap main in a try catch, all of 3 lines of code
[04:07] <waigani_> natefinch: as far as running one test, looking at assess_bootstrap.py you can execute that directly
[04:07] <jam> menn0: so I think an actually informative single-line message every 2 hrs would be ok. "checked for pruning transactions, found 1M current vs 0.5M old"
[04:07] <menn0> jam: with just a single doc
[04:07] <waigani_> natefinch: python assess_bootstrap.py $(which juju) local
[04:07] <wallyworld> natefinch: i've not used boto sorry
[04:07] <waigani_> natefinch: that works for me - it's currently bootstrapping on my local machine
[04:07] <jam> menn0: so if we're going to create a table, I think recording the results of actual GC runs would also be useful
[04:07] <jam> for being able to track "how fast am I growing garbage, how often is GC actually running, etc"
[04:07] <natefinch> waigani_: ah, ok.  I tried to ask mgz and sinzui that... but I think I confused them by not wanting to run everything
[04:07] <jam> menn0: *not* recording every 2hrs, but recording the actual successful runs.
[04:08] <jam> menn0: thoughts?
[04:08] <natefinch> waigani_: do you know how to get boto?
[04:08] <menn0> jam: yeah I guess that would be nice
[04:08] <menn0> jam: although that collection would grow without bounds  :)
[04:08] <jam> menn0: so we *could* just put that info into logs and then get it from log scraping
[04:09] <jam> menn0: but with log rotation you are likely to never have enough history to be useful.
[04:09] <menn0> jam: yeah... i'm not sure it'll be that valuable in the db
[04:09] <jam> menn0: it does, hence the "only record successful runs"
[04:09] <jam> menn0: I would also be ok with capping the amount of history if we are worried about it
[04:09] <waigani_> natefinch: I must have already had that, I didn't hit that err. Have you tried pip?
[04:09] <jam> say 1000 successful GC runs should be big enough for anyone :)
[04:10] <natefinch> waigani_: yeah, my pip is broken, too.  Google/stackoverflow says easy_install -U pip  .... which is hilarious
[04:10] <menn0> jam: until it's not :)
[04:10] <jam> menn0: hence why I used that phrasing
[04:10] <menn0> jam: but seriously... that's not a major concern
[04:10] <waigani_> haha
[04:10] <jam> menn0: so some of it is protecting us against the unknown
[04:10] <jam> knowing that X grows without bound and we may have an env that runs for 10 years
[04:11] <jam> menn0: also multi-environment state servers magnify this sort of problem.
[04:11] <jam> menn0: that's actually one of the bigger reasons to not read-the-world every 2 hrs
[04:11] <jam> menn0: as you start removing the constraint that the DB is small.
[04:11] <menn0> jam: yep - agreed
[04:11] <natefinch> https://twitter.com/gardaud/status/357638468572151808
[04:12] <jam> natefinch: how do you get easy_install "apt-get install easy_install" :)
[04:12] <jam> (not actually true)
[04:12] <natefinch> jam: right?  :) I forget how I got easy_install... I'm sure I got it just to install pip
[04:14] <jam> natefinch: I believe easy_install is a single python file so many people 'wget' it
[04:14] <jam> natefinch: in proper security fashion: wget https://bootstrap.pypa.io/ez_setup.py -O - | python
[04:14] <jam> https://pypi.python.org/pypi/setuptools
[04:15] <jam> natefinch: I guess at least now they have you download it over HTTPS it used to be raw HTTP I believe.
[04:15] <natefinch> jam: well, at least that
[04:15] <jam> natefinch: though they also give you the "--no-check-certificate" and "--insecure" version of installation just to make sure you can root yourself to the world.
[04:17] <jam> menn0: does it seem reasonable to do the "only GC if big enough" ? I don't want to spend huge amounts of time on it, but it seemed a pretty cheap win
[04:17] <menn0> jam: i think it's worth doing
[04:18] <menn0> jam: i'd feel a lot less worried about this going out if that was in place
[04:20] <menn0> jam: aside from the potential i/o load i was concerned about what this would do to mongodb cached pages. it could hurt performance for a while.
[04:20] <menn0> (after each prune)
[04:28] <jam> menn0: did gustavo ever comment on the pruning changes to mgo/txn ?
[04:28] <jam> menn0: IIRC we also had some patches to PruneMissing that would also GC the txn-queues
[04:28] <jam> rogpeppe I think had done that.
[04:28] <jam> anyway /me needs to go run some errands. be back later
[04:29] <menn0> jam: unfortunately i haven't been able to get a response from gustavo so far
[04:30] <menn0> jam: i have a PR that fixes PurgeMissing for the "huge txn-queues" situation (and various drive-by fixes)
[04:30] <menn0> jam: and he hasn't looked at the pruning fixes
[04:55] <wallyworld> axw_: reviewed, i've been adding todo cards to the backlog, some of which your branch now obsoletes. there's a few still to add. one i highlighted as high priority is an api compatibility issue if we ship 1.24 with it
[04:56] <axw_> wallyworld: thanks. I will take a look after I address your comments
[04:56] <wallyworld> axw_: ok, np, i'm relocating so will be afk for a bit
[04:57] <axw> okey dokey
[05:14] <natefinch> ahh, python... http://cdn.meme.am/instances/500x/62360284.jpg
[05:26] <natefinch> if python were compiled, I'd be in bed by now :/
[05:29] <natefinch> sonofa .... juju action evidently does not take the -e <environment> flag
[07:26] <rogpeppe> mgz: hiya
[08:45] <dimitern> mgz, ping
[09:02] <TheMue> hmm, got no camera and sound, will restart browser
[09:03] <TheMue> aaargh, fan spins up and browser blocks :(
[09:28] <mup> Bug #1456957 was opened: rsyslog worker should not add machines that are not ready yet <cpec> <logging> <rsyslog> <juju-core:Triaged> <https://launchpad.net/bugs/1456957>
[10:26] <TheMue> dooferlad: is that you http://www.reddit.com/r/golang/comments/36lf6o/golang_and_openid/ ?
[10:26] <dooferlad> TheMue: no
[10:26] <TheMue> dooferlad: sadly so far no good answer there
[10:49] <mup> Bug #1456989 was opened: cloud-init 0.6.3 on precise generates invalid apt-get install command line <cloud-init> <precise> <regression> <juju-core:Triaged by dimitern> <juju-core 1.24:Triaged by dimitern> <https://launchpad.net/bugs/1456989>
[11:14] <mgz> dimitern: hey
[11:14] <dimitern> mgz, hey, I've filed the bug above ^^ which might be causing some CI failures wrt precise
[11:16] <mgz> dimitern: interesting - for real deployment right, not unit tests?
[11:16] <dimitern> mgz, yes
[11:16] <dimitern> mgz, also I wanted to ask about that PR you reverted yesterday re init discovery
[11:17] <mgz> 1.24/master have been fine on CI since the centos change - and we do have precise testing charms
[11:17] <mgz> dimitern: see the last 1.23 run for full breakage
[11:18] <mgz> I only reverted on 1.24 for the release, so 1.23 and master runs will still be borked
[11:18] <dimitern> mgz, I'm running into an issue where I get http://paste.ubuntu.com/11243099/
[11:18] <mgz> http://reports.vapour.ws/releases/2666
[11:18] <mgz> dimitern: yup, that's the deployment breakage
[11:18] <dimitern> mgz, so far only on 1.24 - and seems related to https://github.com/juju/juju/pull/2359
[11:19] <dimitern> mgz, what's interesting is that the same code is also on master (https://github.com/juju/juju/pull/2358), but I'm not seeing the same issue
[11:19] <mgz> dimitern: I don't see that *after* the revert
[11:19] <dimitern> mgz, the "[[: not found" error?
[11:19] <mgz> dimitern: fun, we probably need a master through CI then
[11:20] <mgz> dimitern: the breakage at all, the last 1.24 run was clean (ish - precise unit tests still failed, known flakiness)
[11:20] <dimitern> mgz, yeah, I'm currently trying to file a bug for that, but I'm having a bit of trouble pinning it down exactly
[11:21] <mgz> dimitern: one reason I just did the revert... I should have filed a bug after though
[11:21] <dimitern> mgz, it seems that when I reproduce the issue, replacing [[ and ]] with [ and ] in that init discovery script solves the errors
[11:21] <mgz> I wonder if this is a shebang issue
[11:22] <dimitern> mgz, and the reason is simple - http://paste.ubuntu.com/11243163/
[11:22] <mgz> I *thought* we were careful to use bash for everything though
[11:22] <dimitern> mgz, the script is rendered with a #!/usr/bin/env bash shebang
[11:22] <dimitern> mgz, however runcmd in cloud-init starts with a #!/bin/sh shebang
[11:23] <dimitern> you see where this is going..
[11:23] <mgz> mehe, okay, well, that's the bug then
[11:24] <dimitern> the repercussions are potentially enormous - any runcmd script that requires bash in cloud-init user-data has to be pre-rendered somewhere and executed, rather than included inline like $(...)
[11:24] <mgz> we can also just not use bashisms
[11:25] <dimitern> damn right :)
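The failure mode above can be reproduced in one line; this is a hypothetical repro, not the actual init discovery script:

```shell
# cloud-init's runcmd executes under /bin/sh (dash on Ubuntu), where the
# bash-only [[ builtin does not exist, even if the embedded script starts
# with #!/usr/bin/env bash. Under dash this first command fails with
# "[[: not found":
sh -c 'if [[ upstart = upstart ]]; then echo bashism; fi' || true

# POSIX-compatible test that behaves identically under sh, dash, and bash:
sh -c 'if [ upstart = upstart ]; then echo posix-ok; fi'
```

The same substitution ([ for [[, = for ==) is what made the script run cleanly when reproducing the bug above.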
[11:56] <dimitern> mgz, FYI - bug 1456989
[11:56] <mup> Bug #1456989: cloud-init 0.6.3 on precise generates invalid apt-get install command line <cloud-init> <precise> <regression> <juju-core:Triaged by dimitern> <juju-core 1.24:Triaged by dimitern> <https://launchpad.net/bugs/1456989>
[11:56] <mgz> dimitern: I saw, thanks
[11:57] <dimitern> mgz, no, sorry not that one
[11:57] <mgz> so, my inclination is to go ahead and back the change out on 1.23 and master as well
[11:57] <mgz> I saw mup say it in another channel :)
[11:57] <mgz> bug 1457011
[11:57] <mup> Bug #1457011: init system discovery script fails with: [[: not found <cloud-init> <compatibility> <regression> <juju-core:Triaged> <juju-core 1.24:Triaged> <https://launchpad.net/bugs/1457011>
[11:57] <dimitern> mgz, that one yes
[12:01] <mup> Bug #1457011 was opened: init system discovery script fails with: [[: not found <cloud-init> <compatibility> <regression> <juju-core:Triaged> <juju-core 1.24:Triaged> <https://launchpad.net/bugs/1457011>
[12:31] <mup> Bug #1457022 was opened: state server panic: "rescanned document misses transaction in queue" <cpec> <mongodb> <juju-core:In Progress by fwereade> <juju-core 1.22:In Progress by fwereade> <juju-core 1.23:In Progress by fwereade> <juju-core 1.24:In Progress by fwereade> <https://launchpad.net/bugs/1457022>
[12:52] <mattyw> perrito666, https://github.com/juju/charm/pull/129
[12:56] <perrito666> mattyw: you are losing communication skills :p
[12:56] <perrito666> ah, you want me to merge that :p
[12:57] <mattyw> perrito666, just letting you know it's been updated :)
[12:57] <mattyw> perrito666, I've pinged you about others that I think can be closed
[12:58] <perrito666> mattyw: yes, sorry, github notifications get lost in the sea of notifications
[12:58]  * perrito666 sees no other pings from mattyw 
[12:58] <mattyw> perrito666, I prefer calling them github "notifications"
[13:00] <perrito666> mattyw: I call them spam
[13:01] <mup> Bug #1457031 was opened: Juju cannot deploy to any substrate <blocker> <bootstrap> <ci> <regression> <juju-core:Triaged> <juju-core 1.23:Triaged> <https://launchpad.net/bugs/1457031>
[13:01] <perrito666> mattyw: done
[13:04] <perrito666> well look at that, this call is full
[13:06] <TheMue> perrito666: yes, just tried it too
[13:06] <perrito666> odd
[13:07] <TheMue> perrito666: have been in another meeting so far and now cannot jump into this one
[13:07] <perrito666> seems that google is not smart enough to let you know when you invite more people than the call allows
[13:08] <perrito666> and there isn't a way to join as just a spectator
[13:10] <mup> Bug #1457031 changed: Juju cannot deploy to any substrate <blocker> <bootstrap> <ci> <regression> <juju-core:Triaged> <juju-core 1.23:Triaged> <https://launchpad.net/bugs/1457031>
[13:11] <perrito666> ohhh, what now
[13:13] <TheMue> perrito666: see canonical #juju, they're trying something with bundling lines
[13:16] <mup> Bug #1457031 was opened: Juju cannot deploy to any substrate <blocker> <bootstrap> <ci> <regression> <juju-core:Triaged> <juju-core 1.23:Triaged> <https://launchpad.net/bugs/1457031>
[13:28] <katco> fyi perrito666's internet has gone down
[13:40] <fwereade> katco, ericsnow, wwitzel3: sorry, can we push our meeting back 30 mins please?
[13:42] <katco> fwereade: certainly
[13:46] <mup> Bug #1457022 changed: state server panic: "rescanned document misses transaction in queue" <cpec> <mongodb> <juju-core:In Progress by fwereade> <juju-core 1.22:In Progress by fwereade> <juju-core 1.23:In Progress by fwereade> <juju-core 1.24:In Progress by fwereade> <mgo:In Progress by fwereade> <https://launchpad.net/bugs/1457022>
[13:57] <natefinch> python peeps... is there some static analysis tool that'll tell me when I've typoed function names etc?  You know, the stuff you get for free from a compiler?
[13:57] <ericsnow> natefinch: pyflakes
[13:57] <Spads> flake8
[13:57] <wwitzel3> tests?
[13:57] <ericsnow> natefinch: yeah, that one's better ^^
[13:58] <Spads> natefinch: as a bonus, flake8 can do McCabe cyclomatic complexity metrics
[13:58] <ericsnow> wwitzel3: +1 :)
[13:58] <natefinch> wwitzel3: I was waiting for that one... but I'm *writing* tests... where do the tests end?
[13:58] <natefinch> how do I test the tests?
[13:58] <natefinch> of the tests
[13:58] <ericsnow> natefinch: it's tests all the way down
[13:58] <natefinch> evidently
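What pyflakes/flake8 catch for free can be sketched with only the stdlib ast module; this is a toy illustration of the undefined-name check, not how pyflakes is actually implemented (real pyflakes also tracks scopes, comprehensions, del, star-imports, etc.):

```python
import ast
import builtins

def undefined_names(source: str) -> set:
    """Toy pyflakes-style check: collect every name that gets bound
    (assignments, def/class names, imports, function args) and report
    loaded names that were never bound and aren't builtins."""
    tree = ast.parse(source)
    bound = set(dir(builtins))
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            bound.add(node.name)
        elif isinstance(node, ast.Name) and isinstance(node.ctx, ast.Store):
            bound.add(node.id)
        elif isinstance(node, ast.arg):
            bound.add(node.arg)
        elif isinstance(node, (ast.Import, ast.ImportFrom)):
            for alias in node.names:
                bound.add(alias.asname or alias.name.split(".")[0])
    loads = {n.id for n in ast.walk(tree)
             if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Load)}
    return loads - bound

# A typoed call to format() that a compiler would flag, but Python only
# notices at runtime:
print(undefined_names("def greet(name):\n    return formt(name)\n"))  # {'formt'}
```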
[13:58] <mup> Bug #1457022 was opened: state server panic: "rescanned document misses transaction in queue" <cpec> <mongodb> <juju-core:In Progress by fwereade> <juju-core 1.22:In Progress by fwereade> <juju-core 1.23:In Progress by fwereade> <juju-core 1.24:In Progress by fwereade> <mgo:In Progress by fwereade> <https://launchpad.net/bugs/1457022>
[13:59] <natefinch> if I have to do sudo pip install,  does that mean that I've screwed up my environment?
[14:00] <natefinch> or is that correct?
[14:00] <wwitzel3> natefinch: no, it just means you haven't explicitly isolated your environment and you are installing packages into the system Python
[14:00] <natefinch> if I don't sudo, I get some massive traceback
[14:01] <mup> Bug #1457022 changed: state server panic: "rescanned document misses transaction in queue" <cpec> <mongodb> <juju-core:In Progress by fwereade> <juju-core 1.22:In Progress by fwereade> <juju-core 1.23:In Progress by fwereade> <juju-core 1.24:In Progress by fwereade> <mgo:In Progress by fwereade> <https://launchpad.net/bugs/1457022>
[14:02] <perrito666> yey, internet is sort of back
[14:06] <dimitern> natefinch, pip is supposed to be run inside a virtualenv
[14:06] <dimitern> natefinch, that might be the problem
[14:09]  * dimitern waves at voidspace 
[14:09] <natefinch> I really don't care enough to mess with virtualenv
[14:09] <perrito666> and of course, you look for something about vi in google and it returns answers about rick_h_ very often
[14:10] <perrito666> btw rick_h_ your screencast about bundle jugler is down
[14:10] <perrito666> natefinch: you will be sorry whenever you try to do something else and your system python is all screwed
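The no-sudo setup being suggested is a couple of commands with the stdlib venv module; a sketch (--without-pip just keeps it offline-friendly here; normally you'd omit it and run ./demo-env/bin/pip install <pkg> instead of sudo pip):

```shell
# Create an isolated environment so pip installs land here, not in the
# system Python, and sudo is never needed:
python3 -m venv --without-pip ./demo-env

# The venv's interpreter reports a prefix different from the base install:
./demo-env/bin/python -c 'import sys; print(sys.prefix != sys.base_prefix)'
# prints True
```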
[14:11] <natefinch> perrito666: I'll just complain about how much python sucks and make you fix it ;)
[14:11] <natefinch> s/make you/ask you nicely to/
[14:12] <perrito666> you cannot do that anymore, now wallyworld does
[14:12] <perrito666> :p
[14:12] <wallyworld> what?
[14:12] <dimitern> :D
[14:12] <wallyworld> python is awesome, way better than Go
[14:12] <dimitern> it feels like it's friday
[14:12] <TheMue> ouch
[14:12] <natefinch> wallyworld: I don't need 3 different package installer / environment handlers to run simple go code
[14:13] <wallyworld> me either
[14:13]  * TheMue fetches some cakes and coke and then watches the fight
[14:13] <natefinch> easy_install, pip, virtualenv
[14:13] <wallyworld> you don't have to have all those
[14:13] <dimitern> nope, just the first two - the last is not a package manager
[14:13] <wallyworld> i never have
[14:14] <natefinch> except people complain that I don't have it, implying that to do it the "right way" I should be using it.
[14:14] <dimitern> you can always use tarballs, like in the good old slackware days
[14:14] <wallyworld> at least python has such things available
[14:14] <natefinch> wallyworld: go doesn't have them because you don't need them
[14:14] <wallyworld> lol
[14:15] <wallyworld> yeah, just pull everything from tip, what could possibly go wrong
[14:15] <wallyworld> who needs package management
[14:15] <wallyworld> or config management
[14:15] <wallyworld> or versioning
[14:15] <katco> wallyworld: to be fair, go's solution to that is linking
[14:16] <katco> wallyworld: it just hasn't landed yet
[14:16] <wallyworld> don't get me started on static linking
[14:16] <natefinch> wallyworld: what do you mean by config management?
[14:16] <perrito666> katco: nanan, invalid point, if it is not there it's not there
[14:16] <katco> perrito666: it's there, just not in a release yet :)
[14:16] <dimitern> just wait until 1.5 is out
[14:16] <wallyworld> and you can still statically link bad code without proper versioning
[14:17]  * perrito666 has dealt with academics too much to accept the answer "theoretically this is the solution, we just need to wait until the computer able to run it exists"
[14:17] <dimitern> not only that - you would be able to do it on ppc64 and arm64 as well :D
[14:17]  * wallyworld is too tired to argue anymore, need sleep
[14:18] <katco> tc wallyworld
[14:18] <wallyworld> next time we can discuss over drinks :-)
[14:18] <katco> :)
[14:18] <TheMue> just when it began to get funny
[14:19] <wallyworld> TheMue: it's past 12am here :-) you can pick up the discussion and preach how good erlang is :-)
[14:20] <dimitern> i'm outta here :D
[14:20] <TheMue> wallyworld: good idea, or pony (but it's very young)
[14:20] <TheMue> wallyworld: thanks for this great idea
[14:20] <wwitzel3> the fastest way to ruin a perfect language is to program something in it
[14:20] <TheMue> so *, I'm open
[14:20] <wallyworld> lol
[14:21] <TheMue> did I ever mention Smalltalk?
[14:21] <TheMue> *duck*
[14:22] <wwitzel3> I <3 Smalltalk, my first real programming job was Smalltalk.
[14:22]  * TheMue hugs wwitzel3
[14:23] <perrito666> mm, I think I lost the chance to insert the classic  C is the only real language
[14:23] <perrito666> and get the classic answer
[14:23] <perrito666> C is ASM for the weak
[14:23] <katco> i'm enjoying learning common lisp. it's a neat language
[14:29] <TheMue> I've once done Scheme and liked it. Always wanted to do Common Lisp too.
[14:29] <perrito666> wasn't that a Katy Perry song?
[14:29] <perrito666> "I did Scheme and I liked it."
[14:30] <katco> TheMue: if you do, do yourself a favor and get quicklisp first: https://www.quicklisp.org/beta/
[14:30] <TheMue> katco: will try to remember
[14:30] <katco> TheMue: CL has some cruft from the spec being ratified in the 80's, but it's actually a very practical language
[14:30] <katco> lots of libraries
[14:30] <TheMue> katco: currently I'm looking into pony http://www.ponylang.org/
[14:31] <katco> yeah i saw your tweet and took a peek at it
[14:31] <TheMue> katco: that's an actor based language, very clean
[14:31] <katco> ericsnow: hey sorry for the time change. standup time
[14:32] <ericsnow> katco: trying...hangouts is misbehaving for me
[14:32] <katco> ericsnow: ah ok
[14:46] <mup> Bug #1457068 was opened: bootstrap failed, no tools available <juju-core:New> <https://launchpad.net/bugs/1457068>
[14:51] <katco> fwereade: we're ready to argue!
[14:51] <katco> ;)
[14:51] <katco> https://plus.google.com/hangouts/_/canonical.com/moonstone?authuser=1
[15:05] <katco> fwereade: there?
[15:37] <natefinch> evilnickveitch: you around?
[15:38] <mup> Bug #1457089 was opened: reboot request in charm hook context is silently ignored in the case of actions <juju-core:New> <juju-core 1.24:New> <https://launchpad.net/bugs/1457089>
[15:42] <evilnickveitch> natefinch, yup
[15:44] <natefinch> evilnickveitch: nevermind, my question was answered by this bug: https://github.com/juju/docs/issues/405  Add "1.23" docs and update "devel" to 1.24.
[15:44] <evilnickveitch> ok, cool
[15:51] <natefinch> wwitzel3, ericsnow: is there documentation for GCE that should be going into jujucharms.com/docs?
[15:51] <ericsnow> natefinch: just what's in the release notes
[15:51] <natefinch> ericsnow: we need to convert that into a markdown document to put up on the webpage
[15:53] <natefinch> katco: ^^
[15:53] <natefinch> katco: sorry, the lack of docs is my fault, since it happened on my watch.
[15:56] <katco> natefinch: thx for the ping, i'll add it to the backlog
[16:05] <katco> ericsnow: wwitzel3: fwereade: such good conversation/work. i feel good about this direction.
[16:14] <wwitzel3> katco: me too
[16:16] <wwitzel3> jam: we never got a chance to meet with all the back-to-back meetings
[16:27] <Johncr1> juju cannot be installed because of a possible issue with python-pygment package.
[16:29] <ericsnow> mattyw: ping
[16:29] <rick_h_> perrito666: :(
[16:29] <mattyw> ericsnow, pong
[16:29] <ericsnow> mattyw: could you take another look at http://reviews.vapour.ws/r/1733/?
[16:30] <ericsnow> mattyw: also http://reviews.vapour.ws/r/1728/
[16:30] <mattyw> ericsnow, would be my pleasure
[16:30] <ericsnow> mattyw: thanks!
[16:32] <ericsnow> wwitzel3: I'm going to take lunch early so I'll be back in about 1.5 hours
[16:33] <mattyw> ericsnow, you mention an upcoming proper fix in the pr. when you say in that comment it just needs to be none windows, for now, do you mean until that fix?
[16:33] <ericsnow> mattyw: correct
[16:33] <wwitzel3> ericsnow: sounds good
[16:34] <mattyw> ericsnow, can you mention the bug number in that comment, and say it will change when a fix for that bug lands?
[16:34] <ericsnow> mattyw: sure
[16:34] <mattyw> ericsnow, I'll take another quick look after that but basically LGTM
[16:35] <ericsnow> mattyw: thanks again
[16:43] <katco> ericsnow: are you looking into bug 1457031?
[16:43] <mup> Bug #1457031: Juju cannot deploy to any substrate <blocker> <bootstrap> <ci> <regression> <juju-core:Triaged> <juju-core 1.23:Triaged> <https://launchpad.net/bugs/1457031>
[16:43] <katco> wwitzel3: has anyone reached out to natefinch to help with those CI tests?
[16:43] <wwitzel3> katco: I can after I finish stuffing my face with food
[16:44] <katco> wwitzel3: your priorities are correct sir! ;)
[16:44] <natefinch> food sounds like a good idea, I'll do that too
[16:45] <ericsnow> katco: I will after lunch
[16:49] <mattyw> ericsnow, reviewed
[16:50] <katco> ericsnow: cheers
[16:51] <mattyw> evilnickveitch, ping?
[16:56]  * perrito666 tries for week 3 to obtain a more decent internet provider... the one I tried today cannot give me details over the internet; they asked me to make a phone call...
[16:56] <mup> Bug #1457122 was opened: local data dir handling for init services should be handled independently <tech-debt> <juju-core:Triaged> <https://launchpad.net/bugs/1457122>
[16:56] <mup> Bug #1457124 was opened: Panic: FilterSuite.TearDownTest <ci> <intermittent-failure> <unit-tests> <juju-core:Incomplete> <juju-core 1.23:Triaged> <https://launchpad.net/bugs/1457124>
[17:00] <evilnickveitch> mattyw, pong
[17:30] <wwitzel3> natefinch: ping
[17:31] <natefinch> wwitzel3: let's jump on moonstone
[17:31] <wwitzel3> natefinch: sounds good
[17:49] <natefinch> wwitzel3: let me know if you have any questions or need help getting the environment set up
[17:49] <wwitzel3> natefinch: yeah, I closed the hangout because it was sucking CPU
[17:49] <natefinch> wwitzel3: basically the test just deploys my charm, runs the "add 300 megs of data to the unit agent log" action, and then runs the "return the size of all unit agent logs" action... and verifies the output
[17:49] <natefinch> wwitzel3: totally understand
[17:50] <wwitzel3> natefinch: so what should I be passing as an env?
[17:51] <wwitzel3> natefinch: also JUJU_REPOSITORY , I assume that is pointing to your charm?
[17:55] <natefinch> wwitzel3: so, I just run from the juju-ci-tools directory.  You need to checkout lp:juju-ci-tools/repository under the juju-ci-tools directory
[17:55] <natefinch> then JUJU_REPOSITORY=./repository works
[17:55] <natefinch> wwitzel3: you need to copy the charm dir under ./repository/trusty
[17:56] <natefinch> er the fill-logs charm dir that is
[17:56] <natefinch> and env is the name of an environment in your environments.yaml that you would like to deploy to
[17:57] <wwitzel3> natefinch: got it, ok, running it now
[17:57] <wwitzel3> natefinch: and what is the issue that needs resolving, it isn't clear from the LP ticket
[17:58] <natefinch> wwitzel3: this is just a CI test for log rotation that I'm writing... right now it's timing out while running one of the actions... probably just not waiting long enough for the action to finish
[18:03] <natefinch> wwitzel3: there's action_fetch and action_do that I added in jujupy.py which could potentially have problems as well... though they seem to be fine.
[18:03] <natefinch> wwitzel3: just pushed a fix to the juju-ci-tools branch I'm working on
[18:05] <natefinch> wwitzel3: ha, now it just passes entirely
[18:05] <wwitzel3> natefinch: nice :)
[18:07] <wwitzel3> natefinch: I'm getting a regex issue atm, but haven't tried your latest fix
[18:07] <natefinch> wwitzel3: huh, I thought I fixed all the regex issues
[18:08] <wwitzel3> natefinch: Exception: Rotated unit log name '/var/log/juju/unit-fill-logs-0.log' does not match pattern '/var/log/juju/unit-fill-logs-0-(.+?)\.log'.
[18:09] <natefinch> wwitzel3: oh yeah, that was what I fixed
[18:09] <natefinch> haha sorry
[18:09] <wwitzel3> natefinch: ok, cool, running it now
[18:09] <wwitzel3> natefinch: if I get a successful run then LGTM
[18:10] <wwitzel3> natefinch: and I did .. no failures here
[18:10] <natefinch> I have to add the machine log rotation checks, but that'll be mostly copy and paste
[18:10] <natefinch> (and modify the regexes etc.)
[18:11] <natefinch> and/or abstract out the differences
[18:12] <natefinch> wwitzel3: anyway, I can finish that up, Thanks for verifying
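The check that was tripping the regex error above can be sketched like this; the pattern is copied from the error message, while the timestamped filename is a made-up example of a rotated name, not output from a real machine:

```python
import re

# Pattern from the CI test's error message: a rotated unit log should carry
# a "-<suffix>" before .log, while the live log has no suffix at all.
pattern = re.compile(r"/var/log/juju/unit-fill-logs-0-(.+?)\.log")

live = "/var/log/juju/unit-fill-logs-0.log"
rotated = "/var/log/juju/unit-fill-logs-0-2015-05-20T18-09-12.000.log"

# The live log must NOT match the rotated-log pattern; feeding it in as if
# it were rotated is exactly what produced the exception above.
print(bool(pattern.match(live)))     # False
print(bool(pattern.match(rotated)))  # True
```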
[18:13] <jam> wwitzel3: yeah, the meetings today took up our normal slots. are you around now?
[18:14] <wwitzel3> jam: yep
[18:14] <wwitzel3> natefinch: cool, np
[18:15] <wwitzel3> natefinch: let me know if you need me to do any more verifying
[18:15] <wwitzel3> jam: https://plus.google.com/hangouts/_/canonical.com/moonstone?authuser=0
[18:50] <ericsnow> wwitzel3: back
[18:50] <ericsnow> wwitzel3: I have a couple bugs to look at really quickly
[18:56] <wwitzel3> ericsnow: in moonstone talking to jam about the spec
[18:56] <ericsnow> wwitzel3: k
[19:59] <perrito666> I just heard this from a sales person at my ISP "We will need to replace your modem for one larger so the 50M we will provide you fit" (it is not awkwardly translated, in spanish the person actually spoke about volume)
[20:00]  * perrito666 was forced to buy the biggest residential connection available to get 7M upload
[20:01] <natefinch> heh
[20:01] <natefinch> sorry
[20:01] <natefinch> I think the CI test I just wrote actually found a bug in lumberjack
[20:03] <natefinch> quite by accident, but still.. handy
[20:23] <natefinch> https://github.com/natefinch/lumberjack/issues/12
[20:23] <natefinch> well, guess I know what I'm working on tonight
[20:23] <natefinch> Going to go make dinner, will be back in ~4.5 hours
[20:23] <natefinch> (when the kids are asleep)
[20:32] <mup> Bug #1457205 was opened: Subordinate charm Action data not reported by API <juju-core:New> <https://launchpad.net/bugs/1457205>
[21:01] <katco> wwitzel3: hey where did you and natefinch-afk leave the CI tests?
[21:33] <wallyworld> menn0: cherylj: is your work for a. txn fixes, and b. file handle leaks committed to 1.22?
[21:33] <wallyworld> i see a txn fix merged to 1.22
[21:34] <katco> wallyworld: see #juju@can
[21:34] <menn0> wallyworld: yes I committed a fix for the txns issue which is good enough
[21:34] <menn0> wallyworld: as it was merging jam started talking to me about some improvements
[21:34] <wallyworld> menn0: ok, i'll mark the bug as fix committed, ty
[21:34] <wallyworld> for 1.22 at least
[21:35] <menn0> wallyworld: no hang on :)
[21:35] <wallyworld> ok
[21:35] <menn0> wallyworld: i've almost got the improvements ready
[21:35] <wallyworld> rightio, we are waiting on another fix anyway
[21:35] <menn0> wallyworld: i think it's worth getting those in to 1.22 as well
[21:35] <wallyworld> ack
[21:36] <menn0> it significantly lowers the performance hit of the pruning change
[21:36] <menn0> wallyworld: are we aiming for the next 1.22 release today-ish?
[21:36] <katco> menn0: i don't think so
[21:37] <wallyworld> menn0: sorta - we are waiting for william's fix so it will likely be a bit later than just 1 day
[21:38] <menn0> wallyworld, katco: cool. well this is my top priority regardless. I'll definitely be done with this today. (for 1.22 at least if not all the branches)
[21:38] <wallyworld> ty :-)
[21:38] <katco> menn0: you are, as always, awesome :D
[21:41] <mup> Bug #1457218 was opened: failing windows unit tests <juju-core:In Progress by ericsnowcurrently> <juju-core 1.23:In Progress by ericsnowcurrently> <juju-core 1.24:In Progress by ericsnowcurrently> <https://launchpad.net/bugs/1457218>
[21:59] <ericsnow> could I have someone look at the patches I have up for review (for critical bugs): http://reviews.vapour.ws/r/1737/ and http://reviews.vapour.ws/r/1738/
[21:59] <ericsnow> I could also use a review on http://reviews.vapour.ws/r/1728/
[22:35] <mup> Bug #1457225 was opened: Upgrading from 1.20.9 to 1.23.3 works, but error: runner.go:219 exited "machiner": machine-0 failed to set status started: cannot set status of machine "0": not found or not alive <cts> <sts-stack> <juju-core:New> <https://launchpad.net/bugs/1457225>
[22:39] <perrito666> why, why oh god why is it so hard to write a proper unit test :(
[22:47] <mup> Bug #1457225 changed: Upgrading from 1.20.9 to 1.23.3 works, but error: runner.go:219 exited "machiner": machine-0 failed to set status started: cannot set status of machine "0": not found or not alive <cts> <sts-stack> <juju-core:New> <https://launchpad.net/bugs/1457225>
[22:56] <mup> Bug #1457225 was opened: Upgrading from 1.20.9 to 1.23.3 works, but error: runner.go:219 exited "machiner": machine-0 failed to set status started: cannot set status of machine "0": not found or not alive <cts> <sts-stack> <juju-core:New> <https://launchpad.net/bugs/1457225>
[23:39] <cherylj> Can I get a review for the file handle leak bug 1454687: http://reviews.vapour.ws/r/1740/
[23:39] <mup> Bug #1454687: add NX 842 hw compression patches <architecture-ppc64> <bot-comment> <bugnameltc-124979> <severity-medium> <targetmilestone-inin1510> <linux (Ubuntu):Triaged by arges> <https://launchpad.net/bugs/1454687>
[23:39] <cherylj> oops, wrong bug
[23:39] <cherylj> bug 1454697
[23:39] <mup> Bug #1454697: jujud leaking file handles <cpec> <stakeholder> <juju-core:Triaged> <juju-core 1.22:In Progress by cherylj> <juju-core 1.23:New> <juju-core 1.24:New> <https://launchpad.net/bugs/1454697>