[00:16] <menn0> thumper: can you have another look at http://reviews.vapour.ws/r/1975/ pls when you're back?
[00:35] <davecheney> thumper: james fixed the worker/provisioner race
[00:35] <davecheney> https://github.com/juju/juju/pull/2566
[00:35] <davecheney> so i guess that means we don't need to argue about gocheck maintainership
[01:16] <thumper> menn0: ack
[01:27] <thumper> hmm... looks like reviewboard host is out of disk space
[01:28] <davecheney> rup row
[01:28] <davecheney> score one for the cloud
[01:28] <thumper> the cloud, where everything just works
[01:34] <davecheney> yup, as well as it does right here in your home
[01:34] <davecheney> or your money back
[01:40] <axw> wallyworld: I'm not really sure what we need to say about iaas resource tagging beyond what's in the release notes. do you have any ideas?
[01:41] <wallyworld> axw: from memory the release notes seemed to cover it. they explained what was done and how to add custom tags etc. so maybe just copy to a PR on the doc project
[01:41] <axw> wallyworld: ok
[01:42] <wallyworld> we just need to make sure the doc work is scheduled - hence the PR / bug being done
[01:43] <axw> wallyworld: do I specifically need to do a PR? I'm not sure where this would best go, so ok if I just create an issue on the project with the text, and then keep an eye on it?
[01:43] <wallyworld> yeah, that will be fine
[01:45] <menn0> thumper: is the RB machine also the main Jenkins host?
[01:45] <thumper> NFI
[01:46] <menn0> thumper: just checked, they are
[01:46] <axw> menn0: reviews. and juju-ci. both resolve to the same
[01:46] <menn0> axw: yep, i just checked the same thing :)
[01:47] <axw> :)
[01:47] <menn0> so if RB is having trouble then Jenkins will be as well
[01:47] <thumper> hazaah
[01:48] <menn0> thumper: i'm on the host now... root volume is certainly full
[01:48] <thumper> menn0: go delete some stuff
[01:49] <thumper> pretty sure we don't need /var
[01:49] <thumper> :-)
[01:49] <menn0> i'm just looking for where the space is being used
[01:49]  * thumper chuckles to himself
[01:49] <menn0> it's painfully slow
[01:49] <thumper> hmm
[01:49] <thumper> need coffee
[01:58] <menn0> thumper: you there?
[01:58] <thumper> yup
[01:58] <thumper> ugh... snow
[01:58] <davecheney> thumper: ready when you are
[01:58] <menn0> thumper: the culprit is the logs for the juju env that hosts the various CI services (reviews, CI proxy, reports)
[01:59] <thumper> hah
[01:59] <thumper> no rotation?
[01:59] <menn0> thumper: the disk isn't really big enough to support the way Juju rotates the logs
[01:59] <thumper> heh, oh the irony
[01:59] <menn0> thumper: there's several units each with 2 backups of 300MB plus the current log file
[02:00] <menn0> the disk is only 7GB total
[02:00] <menn0> thumper: also the logs are full of: exited "rsyslog": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "juju-generated CA for environment \"rsyslog\"")
[02:00] <menn0> weee
[02:01]  * thumper stabs rsyslog
[02:01] <menn0> wallyworld/thumper/davecheney: do you remember if the above has been fixed/
[02:01] <menn0> ?
[02:01] <thumper> I keep seeing it...
[02:01] <wallyworld> hmmm
[02:02] <wallyworld> i thought it had been
[02:02]  * menn0 is compressing the backup logs
[02:03] <menn0> that's better... 3.5GB free
[02:04] <menn0> wallyworld, thumper: this env is 1.21. I don't think the problem has been fixed there.
[02:05] <menn0> regardless, the disk is never going to be big enough
[02:05] <menn0> i'll shoot out an email
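The manual cleanup menn0 describes above can be sketched as a small script. This is a hedged reconstruction: the `/var/log/juju` path, the rotated-file naming pattern, and the `compress_rotated` helper are assumptions for illustration, not details taken from the log.

```shell
# Sketch of reclaiming disk from rotated Juju logs (path and naming
# pattern are assumptions; compress backups, never the live log file).
compress_rotated() {
    # Compress rotated backups like unit-foo.log.1, skipping anything
    # already gzipped and the current .log file itself.
    find "$1" -name '*.log.[0-9]*' ! -name '*.gz' -exec gzip {} \; 2>/dev/null
}
compress_rotated /var/log/juju || true
# Check how much space came back.
df -h /var/log/juju 2>/dev/null || true
```

With several units each keeping 2 backups of ~300MB on a 7GB root volume, compressing mostly-text logs is what took the host from full to ~3.5GB free.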
[02:05] <anastasiamac> menn0: tyvm for fixing/cleaning up the machine :D
[02:06] <menn0> anastasiamac: np
[02:45] <mup> Bug #1467362 opened: utils/ssh: data race in test <juju-core:New> <https://launchpad.net/bugs/1467362>
[03:36] <davecheney> thumper: menn0 git push --set-upstream origin fixedbugs/1465115
[03:36] <mup> Bug #1465115: api: data race in test <intermittent-failure> <race-condition> <unit-tests> <juju-core:In Progress by dave-cheney> <https://launchpad.net/bugs/1465115>
[03:36] <davecheney> thumper: menn0 https://github.com/juju/juju/pull/2618
[03:36] <davecheney> you'll love this one
[03:37]  * menn0 looks
[03:40] <menn0> davecheney: LGTM
[03:40] <menn0> davecheney: nasty problem
[03:42] <davecheney> yeah, that is a terrible footgun in the http api
[03:55] <mwhudson> davecheney: so the situation with rugby is that it doesn't have any kind of useful outbound network access at all, right?
[03:55] <mwhudson> not even via proxy
[04:00] <mup> Bug #1467372 opened: api/cleaner: data race in test <juju-core:New> <https://launchpad.net/bugs/1467372>
[04:03] <davecheney> mwhudson: yup
[04:03] <davecheney> it broke a few weeks ago
[04:03] <davecheney> this isn't the first time it broke
[04:03] <mwhudson> davecheney: nice
[04:04] <davecheney> but this was the first time I no longer had the strength to complain on #is
[04:04] <davecheney> mwhudson: if you would take up the charge this time, I would be indebted
[04:04] <mwhudson> davecheney: i'm happy to (tomorrow) if you can tell me what should be working
[04:05] <davecheney> there is a proxy
[04:05] <davecheney> it runs on batuan
[04:05] <davecheney> well it used to
[04:05] <davecheney> but it doesn't now
[04:05] <mwhudson> davecheney: https_proxy=http://squid.internal:3128/ ?
[04:05] <davecheney> this proxy isn't monitored by is
[04:05] <davecheney> so it shits itself every now and then
[04:05] <mwhudson> i see
[04:05] <davecheney> and needs to be manually unshitted
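A minimal health probe for an unmonitored proxy like this might look as follows. The `squid.internal:3128` address is the one mwhudson quotes above; the `probe_proxy` helper, the target URL, and the timeout are illustrative assumptions, not anything IS actually ran.

```shell
# Hedged sketch of a proxy health check for an unmonitored squid.
probe_proxy() {
    # "$@" is the fetch command; report health based on its exit status.
    if "$@" >/dev/null 2>&1; then
        echo "proxy ok"
    else
        echo "proxy down"
    fi
}
export https_proxy=http://squid.internal:3128/
# curl exits non-zero if the proxy is unreachable or the fetch fails.
probe_proxy curl -fsS --max-time 10 https://github.com/
```

Wired into cron or Nagios, a probe like this would turn "needs to be manually unshitted" into an alert instead of a broken CI run.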
[04:12] <mup> Bug #1467372 changed: api/cleaner: data race in test <juju-core:New> <https://launchpad.net/bugs/1467372>
[04:15] <mup> Bug #1467372 opened: api/cleaner: data race in test <juju-core:New> <https://launchpad.net/bugs/1467372>
[04:20] <mwhudson> davecheney: you got some arm64 hw recently, right?
[04:20] <mwhudson> that iirc you weren't very impressed with
[04:27] <davecheney> yes, and yes
[04:30] <mup> Bug #1447234 changed: juju prints "error" when deploying yet no units are in error <deployer> <lxc> <reliability> <juju-core:Expired> <https://launchpad.net/bugs/1447234>
[04:30] <mup> Bug #1467374 opened: worker/uniter/filter: ci test failure <juju-core:New> <https://launchpad.net/bugs/1467374>
[04:34] <mwhudson> davecheney: what was it?
[04:34]  * mwhudson disappears, will read backlog later
[04:40] <davecheney> mwhudson: xgene
[05:21] <mup> Bug #1467379 opened: "attachmentcount" field not set when upgrading from 1.24 <storage> <juju-core:Triaged> <https://launchpad.net/bugs/1467379>
[08:40] <jam> dimitern: I'm going to miss standup, I have to run an errand
[08:46] <dimitern> jam, ok, np
[09:06] <voidspace> dimitern: ping
[09:06] <voidspace> dimitern: standup?
[09:09] <fwereade> is anyone free to take a look at RB? [Errno 28] No space left on device: '/tmp/reviewboard.pcPtS2'
[09:12] <voidspace> dimitern: https://github.com/juju/juju/pull/2598
[10:56] <axw> evilnick: I possibly just inadvertently switched a checkbox on https://github.com/juju/docs/issues/444
[10:56] <axw> evilnick: which one left as an exercise to the reader (I don't know which one it was :/)
[10:56] <evilnick> axw hehehe. Thanks!
[10:57] <axw> axw: sorry about that. didn't realise clicking them did things.
[10:57] <evilnick> those things are a mixed blessing
[10:57] <axw> untracked things at that
[10:57] <evilnick> it's okay, we are nearly done with them anyhow - it will be pretty easy for me to tell what is changed
[10:58] <axw> cool
[11:50] <tasdomas> is the reviewboard server out of disk space? I see this message: '[Errno 28] No space left on device: '/tmp/reviewboard.iG8Eys'' on http://reviews.vapour.ws/r/1963/diff/#
[12:32] <Muntaner> hi jujuers
[12:32] <Muntaner> I'm having problems with a bootstrap
[12:33] <Muntaner> http://paste.ubuntu.com/11755548/
[12:39] <dooferlad> dimitern / TheMue: could you take a look at https://github.com/juju/juju/pull/2621 please? ReviewBoard hasn't found it (still out of disk space?) so please review on Github.
[12:39] <mgz> Muntaner: what did you do to add an ubuntu image to your deployment and register it with simplestreams?
[12:41] <mgz> Muntaner: I'm presuming you've read and followed jujucharms.com/docs/stable/howto-privatecloud
[12:41] <dimitern> dooferlad, looking
[12:46] <TheMue> dooferlad: *click*
[12:47] <Muntaner> mgz, does juju work with the new vivid cloud ubuntu images?
[12:47] <Muntaner> I mean, the 15.04
[12:47] <Muntaner> because it worked flawlessly with the old ones (14.04), but with the new I'm getting strange errors
[12:48] <dimitern> dooferlad, done
[12:48] <mgz> Muntaner: yes, but in that bootstrap it's not finding the image stream at all
[12:48] <Muntaner> mgz, it was an error in the metadatas
[12:49] <Muntaner> mgz, now I managed to go on: I get another error that I'm pasting
[12:49] <Muntaner> mgz, http://paste.ubuntu.com/11756595/
[12:49] <Muntaner> seems like it is searching for metadata for a 14.04 version, why does it?
[12:51] <Muntaner> sorry, I pasted the same stuff twice
[12:52] <TheMue> dooferlad: agreeing to dimitern comments ;)
[12:53] <mgz> Muntaner: looks like you are trying to bootstrap trusty and have no trusty images
[12:53] <Muntaner> mgz, mmmh
[12:53] <Muntaner> I'm not trying to bootstrap trusty: I wanna vivid
[12:54] <Muntaner> in my environments.yaml, I got a default-series: vivid
[12:54] <mgz> what's default-series in your environments.yaml?
[12:54] <Muntaner> mgz -> http://paste.ubuntu.com/11756623/
[12:54] <Muntaner> I got vivid
[12:58] <Muntaner> mgz, also with juju bootstrap --debug --series=vivid --upload-tools I got the same result
[13:00] <mgz> Muntaner: yeah, --series doesn't do that
[13:01] <Muntaner> mgz, sooo ...
[13:03] <Muntaner> mgz, maybe I solved
[13:03] <Muntaner> via tools-metadata-url: https://streams.canonical.com/juju/tools/
[13:04] <mgz> yeah, you also want to be able to access vivid tools, but doesn't seem to be where it's getting stuck from the logs
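The two settings Muntaner ended up needing can be sketched as an environments.yaml fragment. Only `default-series: vivid` and the `tools-metadata-url` value come from the conversation; the environment name, `type`, and any auth settings are placeholders, not his actual config.

```yaml
# Sketch of the relevant juju 1.x environments.yaml keys (placeholders
# everywhere except the two values discussed above).
environments:
  my-openstack:
    type: openstack
    default-series: vivid
    tools-metadata-url: https://streams.canonical.com/juju/tools/
    # ...auth-url, credentials, etc. go here...
```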
[13:04] <Muntaner> mgz, are you a developer?
[13:05] <Muntaner> aw yes, we are in dev chan
[13:06] <Muntaner> I think that juju logs need to be revisited
[13:06] <Muntaner> in other situations, I'm having a lot of trouble understanding what isn't working
[13:06] <mgz> simplestreams is unreadable junk
[13:06] <mgz> and why we're still logging config contents >_<
[13:07] <Muntaner> mgz, will juju never work with containers? :)
[13:07] <mgz> ? it does.
[13:08] <mgz> so, can you now bootstrap or are you still stuck on simplestreams?
[13:08] <Muntaner> mgz, sorry, got confused! seems to be bootstrapping, but now it's stuck at Installing package: cloud-image-utils
[13:09] <Muntaner> maybe I've got some issues in my openstack
[13:09] <dooferlad> dimitern: replied to that review (https://github.com/juju/juju/pull/2621)
[13:12] <dimitern> dooferlad, replied
[13:13] <dooferlad> dimitern: thanks. Will fix up as suggested
[13:14] <dimitern> dooferlad, cheers
[13:19] <dooferlad> dimitern: by the way, I will probably be a bit late for the networking knowledge sharing because I need to pick my daughter up.
[13:23] <Muntaner> maybe I'm having problems with security groups...
[13:23] <Muntaner> guys, shall I open some ports in my default security group?
[13:24] <Muntaner> because the environment gets bootstrapped
[13:24] <Muntaner> but it gets stuck at the apt-get upgrade...
[13:24] <Muntaner> can't ssh to machines, can't get ssh status
[13:24] <Muntaner> juju status*
[13:24] <dimitern> dooferlad, ok, no worries
[13:24] <Muntaner>  DEBUG juju.api apiclient.go:337 error dialing "wss://192.168.0.97:17070/environment/46eefeea-bd6a-43a9-8571-23a841643c0f/api", will retry: websocket.Dial wss://192.168.0.97:17070/environment/46eefeea-bd6a-43a9-8571-23a841643c0f/api: dial tcp 192.168.0.97:17070: connection refused
[13:32] <Muntaner> guys, anybody can help me in understanding why it's getting stuck at Installing package: cloud-image-utils
[13:32] <Muntaner> ?
[13:42] <Muntaner> mmmh...
[13:46] <Muntaner> where can I find the tgz of the tools?
[13:49] <natefinch> Muntaner: I haven't done this personally, but I think this section shows the relevant topics: https://jujucharms.com/docs/stable/howto-privatecloud#image-metadata
[13:50] <Muntaner> natefinch, I think I fixed the metadata problem
[13:51] <natefinch> Muntaner: cool
[13:52] <Muntaner> natefinch, the host machine got lost in this:
[13:52] <Muntaner> in the /var/log/cloud-init-output.log, I got this:
[13:53] <Muntaner> http://paste.ubuntu.com/11756909/
[13:53] <Muntaner> a lot of
[13:54] <natefinch> Muntaner: hmm
[13:59] <natefinch> Muntaner: do the machines have outside access to ubuntu package archives?
[14:00] <Muntaner> natefinch, naturally, yes
[14:02] <Muntaner> natefinch, now it works, maybe I had some network hiccups
[14:02] <Muntaner> a non juju-related question: does skype work for you? for me, it crashes after 5 seconds under xubuntu and fedora
[14:04] <natefinch> Muntaner: I don't use skype, just google hangouts.  Works very reliably.  Not sure if they use similar technology.
[14:18] <Muntaner> natefinch, I got a problem with endpoints
[14:18] <Muntaner> who is telling juju machine 0 the endpoints?
[14:19] <Muntaner> because it is looking for "controller:8774"
[14:19] <Muntaner> but naturally, the vm can't know who is "controller"
[14:21] <natefinch> Muntaner: I'm not sure where that information is coming from.  That's not hardcoded or anything (neither is that port, I don't see 8774 in the code at all)
[14:22] <Muntaner> natefinch, I think it is asking my openstack "what are your endpoints?"
[14:22] <mgz> Muntaner: it's in your keystone config
[14:22] <Muntaner> so I probably neet to change them
[14:22] <Muntaner> need*
[14:22] <Muntaner> it's fine :)
[14:30] <voidspace> dimitern: dooferlad: network problems are due to ethernet-over-power hardware problems
[14:30] <voidspace> still seeing if they can be resolved or if I need new hardware
[14:31] <Muntaner> guys
[14:31] <Muntaner> I'm trying to deploy juju-gui
[14:31] <Muntaner> on my fresh vivid environment
[14:31] <Muntaner> does it exist for vivid?
[14:31] <Muntaner> 'cos I'm getting a sad "ERROR juju.cmd supercommand.go:430 cannot resolve charm URL "cs:vivid/juju-gui": charm not found"
[14:35] <natefinch> Muntaner: I don't think the gui charm exists for vivid.  Most charms don't exist for vivid.  rick_h_ would know ^
[14:36] <Muntaner> aw!
[14:37] <natefinch> Muntaner: you could always copy the charm and put it on launchpad under your own name for vivid... in fact, I wouldn't be surprised if someone else already had
[14:37] <mgz> you can always download a trusty charm, rename the series, and try deploying from local:
[14:37] <natefinch> that too ^
[14:37] <rick_h_> natefinch: correct, we're only LTS
[14:38] <natefinch> It's sort of unfortunate that we tie charms to series so tightly, when a lot of the time they work fine in other series.
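mgz's trick of renaming the series can be sketched against the juju 1.x local-repository layout. This is a hedged sketch: the `make_local_repo` helper is invented for illustration, the charm files themselves are elided, and the final `juju deploy` is commented out because it needs a live environment.

```shell
# Sketch of staging a trusty charm under a vivid series directory in a
# juju 1.x local repository (layout assumed; make_local_repo is invented).
make_local_repo() {
    # $1: repository root; $2: series; $3: charm name
    mkdir -p "$1/$2/$3"
}
REPO=$(mktemp -d)
make_local_repo "$REPO" vivid juju-gui
# ...copy the downloaded trusty charm's files into $REPO/vivid/juju-gui...
# Then, against a live environment:
#   juju deploy --repository="$REPO" local:vivid/juju-gui
```

The repository path and `local:<series>/<charm>` URL are what make juju treat the copied charm as a vivid charm, which is the whole trick.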
[14:48] <dimitern> voidspace, oh boy :/ one drawback of ethernet-over-power
[14:48] <voidspace> dimitern: yeah
[14:48] <voidspace> dimitern: a router or network card can just as easily fail too though
[14:48] <voidspace> dimitern: I don't think they're inherently unreliable, just one extra piece that can go wrong
[14:48] <voidspace> dimitern: looks like the remote one has just died
[14:49] <voidspace> dimitern: a system reset on the main one hasn't helped and according to the diagnostic tool I have the remote one isn't working
[14:49] <voidspace> isn't being detected at all
[14:50] <Muntaner> hey guys
[14:50] <Muntaner> last thing :)
[14:50] <dimitern> voidspace, but what's the problem you've discovered?
[14:50] <Muntaner> I should deploy a private local bundle on my fresh juju
[14:50] <Muntaner> I've got my bundle.yaml there
[14:50] <voidspace> dimitern: well, as the remote unit doesn't work I have no network
[14:50] <Muntaner> what was the command?
[14:50] <voidspace> dimitern: and my networking configuration for the machine requires eth0 to be connected
[14:50] <dimitern> voidspace, remote unit being the other end of the EoP ?
[14:51] <voidspace> dimitern: yep
[14:51] <mgz> Muntaner: I should have mentioned earlier, but you'd really be better off in #juju rather than here
[14:51] <voidspace> dimitern: the end my desktop is connected to
[14:51] <dimitern> voidspace, right
[14:51] <dimitern> voidspace, can you use a cable instead?
[14:53] <mup> Bug #1467556 opened: TestMachineAgentRunsEnvironStorageWorker fails <ci> <intermittent-failure> <test-failure> <juju-core:Incomplete> <juju-core 1.24:Triaged> <https://launchpad.net/bugs/1467556>
[14:53] <Muntaner> mgz, sorry
[14:59] <voidspace> dimitern: my desktop is upstairs, so no
[14:59] <voidspace> dimitern: I can go back to wifi and order a replacement -
[15:01]  * TheMue is shortly afk to ride back home in a dry moment ;)
[15:02] <dimitern> voidspace, I see, well I hope you figure out how to fix it :)
[15:03] <voidspace> dimitern: I can probably set up a virtual maas with maas 1.8 to test code against until the replacement arrives
[15:03] <voidspace> dimitern: I can't run my "real maas" without working ethernet
[15:03] <dimitern> voidspace, sounds good, and I can give you a hand with testing on both my maas-es
[15:04] <voidspace> dimitern: cool
[15:04] <voidspace> dimitern: thanks
[15:07] <dimitern> dooferlad, can you update bug 1463480 if there's anything you've missed ?
[15:07] <mup> Bug #1463480: Failed upgrade, mixed up HA addresses <blocker> <canonical-bootstack> <ha> <upgrade-juju> <juju-core:Triaged> <juju-core 1.22:Triaged> <juju-core 1.24:Triaged> <hacluster (Juju Charms Collection):New> <https://launchpad.net/bugs/1463480>
[16:26] <mup> Bug #1467590 opened: Running out of disk space blocks interacting with env on cli <juju-core:New> <https://launchpad.net/bugs/1467590>
[17:02] <katco`> ericsnow: natefinch: planning meeting
[17:03] <natefinch> katco: coming
[21:29] <katco> ericsnow: wwitzel3: so i've never actually worked on a hook before. where are those defined?
[21:30] <katco> ericsnow: wwitzel3: gh.com/juju/charm?
[21:30] <katco> /hooks?
[21:30] <ericsnow> katco: what do you mean by "hook"?
[21:30] <katco> ericsnow: this is for the launch command
[21:31] <ericsnow> katco: we aren't adding any hooks
[21:50] <perrito666> katco: hooks are in the charm package
[22:00] <katco> ericsnow: k, think i've found it: uniter/runner/jujuc/. however, where should ours live under process?
[22:00] <ericsnow> katco: we already wrote all that
[22:00] <ericsnow> katco: process/context
[22:01] <ericsnow> katco: see register.go for a hook context command
[22:01] <ericsnow> katco: launch will be very similar
[22:01] <katco> ericsnow: awesome. ty
[22:01] <ericsnow> katco: :)
[22:31] <thumper> wallyworld: I thought you said this was fixed? http://reports.vapour.ws/releases/2801/job/run-unit-tests-precise-i386/attempt/2149
[22:32] <wallyworld> it was - i checked the commits
[22:32] <wallyworld> if it's still broken there's maybe a regression or another problem?
[22:32] <wallyworld> in a meeting, will check soon
[22:36] <mup> Bug #1467690 opened: inconsistent juju status from cli vs api <canonical-bootstack> <juju-core:New> <https://launchpad.net/bugs/1467690>
[22:41] <mwhudson> davecheney: rugby can talk to its proxy again
[23:05] <wallyworld> thumper: i had a look at the logs for that test - i call bullshit
[23:05] <wallyworld> i can't see an int overflow there
[23:05] <thumper> cmd/juju/addrelation_test.go:18: undefined: CmdBlockHelper
[23:05] <wallyworld> there's a uniter test failure relating to an upgrade test
[23:05] <thumper> I'll look, but curious that it doesn't fail all the time
[23:06] <wallyworld> yeah, go figure
[23:26] <axw> wallyworld: conn died
[23:35] <davecheney> mwhudson: thanks
[23:36] <davecheney> mwhudson: actually
[23:36] <davecheney> it's  not working for me
[23:36] <mwhudson> hm
[23:36] <mwhudson> i managed to clone go a few minutes ago
[23:36] <davecheney> are you still in #is
[23:36] <mwhudson> i never leave!
[23:46] <davecheney> welcome to canonical, where raising RTs is for the weak