[00:26] <thumper> wallyworld: I need an extra 10 minutes to kick out the in-laws
[00:26] <thumper> wallyworld: chat after that?
[00:26] <wallyworld> sure
[00:59] <axw> wallyworld: ping
[01:00] <wallyworld> axw: hey, just talking to tim, see you in standup hangout soon?
[01:00] <axw> wallyworld: sure, I'll wait in there
[03:14] <wallyworld> axw: could you take a look at http://reviews.vapour.ws/r/963/ for me?
[03:14] <axw> looking
[03:14] <wallyworld> ty
[03:15] <wallyworld> axw: also, eric raised a point about the rootfs provider - it won't work as written on windows
[03:15] <axw> wallyworld: yeah, storage in general is not expected to work on windows in the first iteration AFAIK
[03:16] <axw> we will need to disable it in tests if they're going to break
[03:16] <wallyworld> good yes, i couldn't quite recall if we agreed to that or not
[03:16] <wallyworld> tests shouldn't break as it's all stubbed out
[03:16] <axw> 99% sure that Mark agreed to that
[03:16] <wallyworld> yes i do recall that now
[03:27] <axw> wallyworld: reviewed
[03:27] <wallyworld> ty
[03:33] <axw> wallyworld: I don't think the error is to do with arch mismatch... I created an env on AWS and it's missing the tools symlink for the container
[03:33] <wallyworld> axw: hmmm. that doesn't make sense based on the code
[03:33] <wallyworld> the symlink might be missing
[03:33] <axw> wallyworld: the container was started with the correct arch.
[03:33] <wallyworld> but the tools selection is broken
[03:33] <axw> wallyworld: where?
[03:34] <wallyworld> yes, the container image itself will be correct
[03:34] <wallyworld> in provisioner_task
[03:34] <wallyworld> possibleTools, err := task.toolsFinder.FindTools(
[03:35] <wallyworld> this used to use the HasTools interface implemented by the container provisioner to look up the tools
[03:36] <axw> HasTools was a hack. what's there will work if the constraints have the correct arch... which seems to be the case since it's picking the right arch
[03:40] <wallyworld> but the constraints won't always have the correct arch
[03:41] <wallyworld> they may be empty
[03:42] <axw> wallyworld: perhaps there's two bugs, I'll keep looking
[03:42] <wallyworld> axw: the constraints will be empty if you just bootstrap with nothing else
[03:42] <wallyworld> then the new machine 1 will have m.Constraints() without an Arch
[03:43] <axw> wallyworld: I thought Juju might add the host machine's arch as a constraint for LXC, doesn't look like it does tho
[03:43] <wallyworld> yeah
[03:43] <wallyworld> that's the issue I think
[03:43] <wallyworld> and the HasTools thing hacked around that
[03:52] <axw> wallyworld: sorry I got confused, thinking tools would be in the same location as the host, like in local
[03:52] <wallyworld> ah right
[03:52] <axw> just the one bug
[03:53] <axw> wallyworld: so... rather than reinstating HasTools, I'd prefer to just set Constraints.Arch to the host's if it's not already set. IIRC there was a bug caused by HasTools, where you couldn't create, say, i386 containers on amd64 hosts
[03:53] <axw> which is why it was removed in the first place
[03:54] <wallyworld> that solution of setting Arch was what occurred to me also - but only for containers
[03:55] <axw> wallyworld: actually... I think the LXC provisioner should just filter by the host arch. that's what we'd do for any other provider right?
[03:55] <axw> give it all we know about, and let it make the informed decision
[03:55] <wallyworld> that would work, yeah
[03:56] <wallyworld> i think setting the arch seemed ok because it would constrain the search
[03:56] <wallyworld> but it's cleaner for sure to let the provisioner decide
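The first option discussed above (defaulting an empty arch constraint to the host's arch for containers) can be sketched roughly as follows. The `Constraints` type and `withHostArch` helper here are simplified stand-ins for illustration, not juju's actual `constraints` package:

```go
package main

import "fmt"

// Constraints is a cut-down stand-in for juju's constraints value
// (hypothetical; the real type carries many more fields).
type Constraints struct {
	Arch string // empty means "no arch constraint"
}

// withHostArch returns the constraints with Arch defaulted to the host's
// architecture when unset, so tools selection matches the container host.
// An explicit arch (e.g. i386 on an amd64 host) is left alone.
func withHostArch(cons Constraints, hostArch string) Constraints {
	if cons.Arch == "" {
		cons.Arch = hostArch
	}
	return cons
}

func main() {
	fmt.Println(withHostArch(Constraints{}, "amd64").Arch)             // amd64
	fmt.Println(withHostArch(Constraints{Arch: "i386"}, "amd64").Arch) // i386
}
```

The alternative axw prefers keeps the constraints untouched and has the container provisioner itself filter the candidate tools by the host arch, which avoids baking a default into state.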
[04:19] <axw> wallyworld: http://reviews.vapour.ws/r/965/
[04:19] <wallyworld> looking
[04:19] <wallyworld> axw: I've updated the apt review from before also
[04:20] <axw> wallyworld: already LGTMd
[04:20] <wallyworld> ah ty :-)
[04:21] <wallyworld> axw: there's also a names one https://github.com/juju/names/pull/41 and one eric looked at but which i'd like your review of pretty please
[04:21] <axw> will do
[04:31] <wallyworld> axw: done with a suggestion
[04:31] <axw> ta
[04:43] <wallyworld> axw: the filesystem tag uses the datasource name as the tag id (seemed reasonable at the time). the broad definition was to accommodate the behaviour where the datasource name is used as the dir name if no location is specified. I did think about using a numeric sequence value but then there's the issue of how to generate the numbers. i'd have to use a mongo sequence or something, or just count the number of filesystem docs
[04:43] <wallyworld> i can change to number though
[04:44] <axw> wallyworld: datasource name?
[04:44] <axw> as in ... the pool name?
[04:44] <wallyworld> in the charm metadata
[04:45] <wallyworld> storage name
[04:45] <axw> wallyworld: I'm a little wary of that. volumes can be disassociated from a storage instance, it may end up being a requirement for filesystems too
[04:46] <wallyworld> yeah, fair enough. i can make it like volume will be once your current branch lands (I didn't see it when I did this so didn't appreciate the machine name aspect)
[04:46] <wallyworld> i do prefer opaque identifiers
[05:54] <anastasiamac_> wallyworld:
[05:54] <wallyworld> yo
[05:57] <anastasiamac_> wallyworld: looks like u've quit on PM
[05:57] <wallyworld> sigh, stupid irc
[05:57] <wallyworld> i'm here
[05:58] <wallyworld> got to go get kid from train station in a few minutes
[07:47] <wallyworld> axw: i've updated the juju/names branch to make fs tag more like volume tag
[07:48] <axw> wallyworld: thanks, will take a look
[07:56] <axw> wallyworld: LGTM
[07:57] <wallyworld> axw: ty. btw, what did you think about eric's comment re: Path vs Location?
[07:58] <axw> wallyworld: Path sounds fine for FilesystemAttachment. It really only needs to be Location on the StorageAttachment, which normalises volume/filesystem
[07:58] <wallyworld> sounds good to me
[07:59] <wallyworld> i sometimes use location in conversation but path is better
[08:29] <TheMue> morning
[08:29] <dimitern> o/
[09:09] <dooferlad> morning
[09:13] <TheMue> dooferlad: o/
[09:14] <dimitern> TheMue, dooferlad, o/
[09:14]  * dimitern bbiab
[10:01] <dimitern> TheMue, dooferlad omw - ~3m
[10:57] <mattyw> fwereade, do you have 10 minutes?
[11:14] <dimitern> TheMue, reviewed
[11:14] <TheMue> dimitern: thx
[11:15] <TheMue> dimitern: and regarding the reviewer please take a look at the calendar. today michael and william are on review (ok, michael isn't available)
[11:16] <TheMue> dimitern: I for example have been last Friday and will be next Friday again
[11:16] <dimitern> TheMue, ah! yeah - I was thinking about last friday
[11:16] <dimitern> :)
[11:16] <dimitern> TheMue, but james is today
[11:16] <TheMue> dimitern: yep, our new ocr
[11:17] <TheMue> dimitern: btw, you've got a review too ;)
[11:17] <dimitern> TheMue, thank you for your review
[11:17] <TheMue> dimitern: h5
[11:17]  * dimitern h5s :)
[11:19] <TheMue> dimitern: thanks for the name retry pattern hint, this is exactly what I've looked for
[11:20] <dimitern> TheMue, yw, I had to look for it too, so decided to save you the trouble heh
[11:20] <TheMue> dimitern: great :)
[11:21] <dimitern> dooferlad, OCR PTAL: http://reviews.vapour.ws/r/961/
[11:21] <dimitern> (as you might see sometimes)
[11:22] <dooferlad> dimitern: looking
[11:22] <dimitern> dooferlad, ta!
[11:41] <dimitern> mgz, hey, are you around?
[12:59] <dimitern> dooferlad, hey, thanks for the review - I've responded. Is this the only question you have, everything else is clear? :)
[13:02] <dooferlad> dimitern: yep, everything else is fine.
[13:02] <dimitern> dooferlad, awesome!
[13:03] <dimitern> dooferlad, the state code is one of the gnarliest places btw - beware
[13:04] <dooferlad> dimitern: Heh, it did look a bit... interesting.
[13:04] <dooferlad> dimitern: not too scary though!
[13:04]  * dooferlad goes for lunch
[13:06] <dimitern> dooferlad, ;)
[13:30] <perrito666> natefinch: around?
[13:38] <jw4> sinzui: can I JFDI a backout merge?
[13:39] <jw4> it's actually one of the suspects in blocking bug 1423782 (but I don't think it's the culprit in that case)
[13:39] <mup> Bug #1423782: ppc64el /usr/bin/ld: error in $WORK/github.com/juju/juju/cmd/juju/_obj/exe/a.out <ci> <gccgo> <ppc64el> <regression> <test-failure> <juju-core:Triaged> <https://launchpad.net/bugs/1423782>
[13:56] <sinzui> jw4 fixes-1423782 is what the lander is looking for. Use __JFDI__ when it is clear something has to merge to unblock other priorities
[13:56] <jw4> sinzui: okay; I don't think this backout does fix that though
[13:57] <jw4> sinzui: I just noticed it was referenced in the bug
[13:58] <jw4> sinzui: I think in this case it is blocking other priorities... i.e. removing a wrong implementation to make way for the right one?
[13:58] <sinzui> jw4, agreed, __JFDI__
[13:58] <jw4> sinzui: ta
[14:40] <stokachu> does anyone know what this error means: WARNING juju.replicaset replicaset.go:87 Initiate: fetching replication status failed: cannot get replica set status: can\'t get local.system.replset config from self or any seed (EMPTYCONFIG)
[14:40] <aisrael> I seem to be running into a new (to me) issue in trunk. I have a service in an error state, but `juju resolved` doesn't see it being in an error state.
[14:40] <aisrael> http://pastebin.ubuntu.com/10325205/
[14:41] <perrito666> stokachu: your mongo rs.config() is empty, there was most likely an error while creating the base replicaset
[14:41] <stokachu> perrito666: so this happens randomly, whereas if i re-run the bootstrap it works the second time around
[14:43] <stokachu> just prior to that error the logs shows several mongo connection failures due to connection refused but then the connection succeeds and thats when the above error is shown
[14:43] <stokachu> this only happens when I bootstrap a bare metal from maas
[14:44] <perrito666> stokachu: sounds to me that I already heard this story, I believe there is even a bug about it, if not perhaps there should be
[15:02] <marcoceppi> is there any way to tell if trunk is blocked?
[15:04] <jw4> marcoceppi: I use https://api.launchpad.net/devel/juju-core?ws.op=searchTasks&status%3Alist=Triaged&status%3Alist=In+Progress&status%3Alist=Fix+Committed&importance%3Alist=Critical&tags%3Alist=regression&tags%3Alist=ci&tags_combinator=All
[15:05]  * marcoceppi registers isjujucoreblocked.com
[15:05] <jw4> marcoceppi: if you do that then use the official blocker check script... I'll find that for you after my stand up
[15:05] <marcoceppi> jw4: I found it in juju-ci-tools
[15:05] <marcoceppi> http://bazaar.launchpad.net/~juju-qa/juju-ci-tools/trunk/view/head:/check_blockers.py
[15:06] <jw4> yep
[15:06] <marcoceppi> sweet
[15:23] <ericsnow> marcoceppi: I just go here: https://bugs.launchpad.net/juju-core/+bugs?field.searchtext=&orderby=-importance&field.status%3Alist=NEW&field.status%3Alist=CONFIRMED&field.status%3Alist=TRIAGED&field.status%3Alist=INPROGRESS&field.status%3Alist=FIXCOMMITTED&field.status%3Alist=INCOMPLETE_WITH_RESPONSE&field.status%3Alist=INCOMPLETE_WITHOUT_RESPONSE&field.importance%3Alist=CRITICAL&field.tag=ci+regression+&field.tags_combinator=ALL
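A blocker check like the check_blockers.py script linked above essentially asks whether that Launchpad searchTasks query returns any matching bugs. A rough Go sketch of just the decision logic, assuming the standard Launchpad collection JSON shape with a `total_size` field (that field name is an assumption here; a real check would fetch the query URL first):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// isBlocked reports whether a searchTasks response body contains any
// matching bugs, i.e. whether trunk should be considered blocked.
func isBlocked(body []byte) (bool, error) {
	var resp struct {
		TotalSize int `json:"total_size"`
	}
	if err := json.Unmarshal(body, &resp); err != nil {
		return false, err
	}
	return resp.TotalSize > 0, nil
}

func main() {
	// Canned example response; a real check would GET the searchTasks URL
	// with the critical/ci/regression filters shown above.
	blocked, _ := isBlocked([]byte(`{"total_size": 1, "entries": []}`))
	fmt.Println(blocked)
}
```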
[15:24] <dimitern> sinzui, I think we should try reverting https://github.com/juju/juju/commit/0621a1ad40ba15004ef6c3c5e4da22661243041f - since ppc64el tests started failing after it, and the other suspect commit jw4 already reverted
[15:25] <sinzui> dimitern, I am just reviewing 1.22 results to verify the linker issue isn't present in 1.22
[15:26] <dimitern> sinzui, sure
[15:26] <dimitern> sinzui, I spent the last hour looking at various bugs for gccgo-ppc64 - in the gcc bug tracker, in lp, etc. as it does seem like a compiler issue
[15:26] <sinzui> dimitern, no gcc/linker errors in 1.22, so I am sure the machine is good. Revert
[15:27] <dimitern> sinzui, cool, I'll propose it then
[15:38] <dimitern> sinzui, reverted and it's already getting tested - http://juju-ci.vapour.ws:8080/job/github-merge-juju/2210/console
[15:38] <dimitern> sinzui, provided it lands ok, can you trigger the ppc64 job manually to see if it worked and save time?
[15:41] <sinzui> dimitern, I cannot trigger manually.
[15:42] <sinzui> dimitern, I think wallyworld's fix for ppc lxc has bad tests: http://juju-ci.vapour.ws:8080/job/run-unit-tests-trusty-ppc64el/2398/consoleText
[15:43] <sinzui> ^ mgz, do you agree? I think we have another regression in 1.22, but one that is just a test
[15:43] <dimitern> sinzui, looking
[15:47] <mgz> sinzui: yup, failed across 3+ runs
[15:47] <mgz> looks like it'll be trivial to fix though
[15:49] <mgz> yeah, the startInstance helper on that test class just hardcodes some pretend tools as quantal-amd64
[15:50] <dimitern> mgz, sinzui, actually the tests are bad, because with axw's patch all tests now take the current machine's arch
[15:51] <dimitern> i.e. tests need better isolation
[15:51] <sinzui> mgz, dimitern natefinch : I just reported bug 1423950 about the ppc64 test failures just added to 1.22
[15:51] <mup> Bug #1423950: lxc-broker_test tests fail on ppc64el <ci> <ppc64el> <regression> <test-failure> <juju-core:Triaged> <juju-core 1.22:Triaged> <https://launchpad.net/bugs/1423950>
[15:51] <mgz> worker/provisioner/lxc-broker_test.go l96
[15:51] <mgz> either that also needs to take current arch or the test needs to mock/isolate the tool selection
[15:52] <dimitern> mgz, the latter is better I think
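Isolating the test from the host arch usually means patching the relevant global for the duration of the test and restoring it afterwards (which is also what the TearDownTest discussion later in this log is about). A minimal, self-contained illustration of the patch-and-restore idea; the names here are stand-ins, not juju's real `version.Current` or its testing `PatchValue` helper:

```go
package main

import "fmt"

// currentArch stands in for a package-level global like version.Current.Arch.
var currentArch = "ppc64el"

// patchValue sets *target to val and returns a func that restores the old
// value, so a test can pin the arch and clean up after itself.
func patchValue(target *string, val string) (restore func()) {
	old := *target
	*target = val
	return func() { *target = old }
}

func main() {
	restore := patchValue(&currentArch, "amd64")
	fmt.Println(currentArch) // amd64 while the test body runs
	restore()
	fmt.Println(currentArch) // ppc64el again for later tests
}
```

Without the restore step, a later test in the same process would silently see the patched value, which is the failure mode dimitern warns about below.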
[15:56] <dimitern> sinzui, I commented on that bug, however I don't think 1.23 is affected as there's not such patch there
[15:58] <sinzui> dimitern, good point. But note that 1.23 wasn't affected by the precise bug until someone ported it there. We need to stop the ppc fix from being ported without this fix also being present
[15:58] <dimitern> sinzui, I agree and will add another comment for this
[15:59] <sinzui> dimitern, I removed 1.23
[15:59] <sinzui> dimitern, I don't see a pr/review of a port of the ppc fix, so I wont worry
[16:04] <dimitern> sinzui, cheers - I've commented on both bugs - 1420049 and 1423950
[16:11] <sinzui> dimitern, mgz: I think I want to cry: the joyent trusty deploy is failing. the units cannot contact the state server to download the agent
[16:11] <sinzui> + printf Attempt 5 to download tools from %s...\n https://10.112.3.32:17070/tools/1.22-beta4-trusty-amd64
[16:11] <sinzui> I can only think to try the test again to get a friendly network
[16:12] <mgz> urk
[16:14] <dimitern> sinzui, aw, that's the "i'll give you a machine on a random network" type of joyent issue
[16:15] <sinzui> dimitern, exactly. we broadened the network we will accept, but joyent went broader
[16:16] <sinzui> dimitern, I can see in recent days, 1 in 4 attempts pass (but different branches). I will try again for a 4th attempt in the sequence
[16:17] <dimitern> sinzui, that's not a real solution, but if it makes the job pass more often, yeah
[16:18] <dimitern> unfortunately the "real" solution is not something joyent seems to support via their api
[16:18] <sinzui> dimitern, this isn't a juju regression, and I want to show there isn't one looking at the repeatability
[16:19] <dimitern> sinzui, I agree
[16:20] <sinzui> dimitern, joyent deploy precise passed the first time (with the new fix), quickstart passed for the last two revisions, and that test is pretty unreliable, and the upgrade trusty also passed. so I am sure juju is good
[16:20] <dimitern> sinzui, the revert of #1615 landed - can you give me an eta when the ppc64 tests job will run?
[16:21] <dimitern> sinzui, that's good news indeed
[16:21] <sinzui> dimitern, within the hour. we are waiting for 1.22 to conclude
[16:21] <dimitern> sinzui, ok, thanks
[16:23] <sinzui> dimitern, and we get a pass, statistically consistent with 1 in 4 getting a good network
[16:24] <sinzui> dimitern, natefinch : can you ask someone to fix the broken tests for bug 1423950?
[16:24] <mup> Bug #1423950: lxc-broker_test tests fail on ppc64el <ci> <ppc64el> <regression> <test-failure> <juju-core 1.22:Triaged> <https://launchpad.net/bugs/1423950>
[16:32] <dimitern> sinzui, sure
[16:47] <marcoceppi> jw4 ericsnow: http://juju.fail/status.json ;)
[16:53] <wwitzel3> hah I was right
[16:53] <dimitern> sinzui, it seems the ppc64-slave is free - how does it decide to try a new master commit?
[16:54] <aisrael> I'm running into an issue w/trunk. I have a service in an error state, but `juju resolved` reports it isn't in an error state. Seems like this change happened within the past week.
[16:54] <sinzui> dimitern, when all tests for a revision have run, a new revision will be tested. we are waiting on maas
[16:54] <aisrael> http://pastebin.ubuntu.com/10325205/
[16:54] <dimitern> sinzui, I see, ok
[16:56]  * dimitern steps out for a bit
[17:35] <natefinch> dimitern, sinzui:  Sorry, was out this morning.  I can take a look at that bug, if you want?
[17:36] <sinzui> natefinch, thank you
[17:42] <perrito666> is there a way to run the whole test suite suppressing logs? I just want to know which packages fail
[17:53] <dooferlad> sinzui: https://github.com/juju/juju/pull/1645 should fix Bug #1423950 but needs a review
[17:53] <mup> Bug #1423950: lxc-broker_test tests fail on ppc64el <ci> <ppc64el> <regression> <test-failure> <juju-core 1.22:In Progress by dooferlad> <https://launchpad.net/bugs/1423950>
[17:53] <dooferlad> TheMue: Could you take a look? ^^
[17:54] <sinzui> natefinch, ^
[17:54] <TheMue> dooferlad: yep, will do
[17:59] <TheMue> dooferlad: looks simple and ok, sadly I don't know how to test it better than letting CI do it
[17:59] <dooferlad> TheMue: indeed.
[17:59] <TheMue> dooferlad: so I'll give it a ship-it
[17:59] <dooferlad> TheMue: shipped!
[18:00] <TheMue> dooferlad: :)
[18:03] <dimitern> perrito666, there is
[18:05] <dimitern> perrito666, I found out yesterday - export TEST_LOGGING_CONFIG=CRITICAL or something then run them; however some of them fail when the <root> logger is not at least at DEBUG
[18:05] <perrito666> dimitern: tx
[18:05] <dimitern> dooferlad, re that fix
[18:05] <dooferlad> dimitern: yep
[18:05] <dimitern> dooferlad, I think you should restore this change at some point - i.e. in the suite's TearDownTest
[18:06] <dooferlad> can do
[18:06] <dimitern> dooferlad, back to what version.Current.Arch was; otherwise other tests can fail
[18:07] <dimitern> dooferlad, have you run make check?
[18:07] <dooferlad> dimitern: yes
[18:08] <dimitern> dooferlad, well if it passes, that's good news - nothing apparently broken, but I still think it should be restored - maybe in a follow-up?
[18:10] <dimitern> dooferlad, also, instead of "amd64" I'd use the arch.AMD64 const
[18:10] <dooferlad> dimitern: just testing restoring it in func (s *lxcProvisionerSuite) TearDownTest(c *gc.C)
[18:10] <dooferlad> dimitern: OK, will change that.
[18:10] <dimitern> dooferlad, despite that, great job for doing it so quickly! :)
[18:10] <dimitern> dooferlad, cheers
[18:14] <natefinch> dooferlad: agreed, great job
[18:14] <dooferlad> dimitern, natefinch: thanks!
[18:15] <dooferlad> dimitern: can the followup wait until Monday. It felt urgent to get a fix in, but I would like to finish promptly today.
[18:15] <dooferlad> dimitern: (insert ? where needed in above)
[18:15] <dimitern> sinzui, most jobs have disappeared from jenkins, which I presume is due to not being publicly visible; anyway any update on the ppc64 job?
[18:16] <sinzui> dimitern, I sure can
[18:16] <dimitern> dooferlad, it can wait, but please add a card and comment on the bug about it
[18:17] <dooferlad> dimitern: cool. It isn't writing the code, it is waiting for the tests to run :-) I will leave it going and see if I can get it checked in later.
[18:18] <dimitern> dooferlad, you can then integrate axw's fix and yours + the follow up to port that fix to trunk
[18:18] <dimitern> dooferlad, sure, np
[18:18] <sinzui> dimitern, It is visible now
[18:20] <dimitern> sinzui, I can see it - thanks
[18:23] <sinzui> dimitern, We are using 10.0.40.x addresses in the network we test maas kvm with.
[18:23] <sinzui> dimitern, do we need to change that to avoid the lxcbr0 issue?
[18:24] <dimitern> sinzui, the one I fixed recently?
[18:24] <dimitern> (too many lxcbr0 issues in the past 2 months :/)
[18:25] <sinzui> dimitern, yes, we aren't using lxcbr0, but we are using that address range on an eth1
[18:25] <sinzui> we can change the range
[18:25] <dimitern> sinzui, the issue was that 10.0.3.* sorts before any 10.0.X.* where X > 3
[18:26] <dimitern> sinzui, so if you're hitting that issue I'd suggest changing the range to 10.0.2.*
[18:27] <dimitern> sinzui, what juju version is the ci environment using?
[18:27] <marcoceppi> alexisb: aisrael and I are seeing a weird issue in trunk. We'd like to debug to see if it's a bug or not, anyone we can chat with?
[18:27] <sinzui> dimitern, thank you. I think this means our addresses are okay
[18:28] <sinzui> juju.api api.go:273 connecting to API addresses: [10.0.80.120:17070]
[18:29] <dimitern> sinzui, ah, now I got it - your lxcbr0 address is from 10.0.40.x ? yep, then it's fine
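The ordering problem dimitern describes (a 10.0.3.* address sorting before any 10.0.X.* with X > 3, so the lxcbr0 bridge default could be picked first) can be illustrated with a numeric octet sort. This is only a sketch of the ordering behaviour, not juju's actual address-selection code:

```go
package main

import (
	"fmt"
	"sort"
	"strconv"
	"strings"
)

// octets parses a dotted-quad IPv4 address into its numeric octets.
func octets(addr string) [4]int {
	var out [4]int
	for i, p := range strings.SplitN(addr, ".", 4) {
		out[i], _ = strconv.Atoi(p)
	}
	return out
}

// firstAddress returns the numerically smallest address, showing why a
// naive "pick the first sorted address" favours the container bridge.
func firstAddress(addrs []string) string {
	sorted := append([]string(nil), addrs...)
	sort.Slice(sorted, func(i, j int) bool {
		a, b := octets(sorted[i]), octets(sorted[j])
		for k := 0; k < 4; k++ {
			if a[k] != b[k] {
				return a[k] < b[k]
			}
		}
		return false
	})
	return sorted[0]
}

func main() {
	// The lxcbr0 default (10.0.3.1) wins over the 10.0.40.x test range.
	fmt.Println(firstAddress([]string{"10.0.80.120", "10.0.40.5", "10.0.3.1"}))
}
```

This is also why the suggestion above is to move a clashing range to 10.0.2.*, which sorts before the bridge address instead of after it.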
[18:29] <alexisb> marcoceppi, any of the US based developers
[18:29] <marcoceppi> natefinch: halp
[18:29] <alexisb> natefinch, wwitzel3, cherylj, katco, ericsnow_afk
[18:29] <natefinch> marcoceppi: heh
[18:30] <marcoceppi> aisrael: ^
[18:30] <natefinch> whazzup?
[18:48] <marcoceppi> natefinch: "I seem to be running into a new (to me) issue in trunk. I have a service in an error state, but `juju resolved` doesn't see it being in an error state. http://pastebin.ubuntu.com/10325205/"
[18:55] <natefinch> marcoceppi: wacky
[18:56] <marcoceppi> natefinch: yeah, aisrael is experiencing it atm
[18:56] <marcoceppi> with trunk from this am
[18:57] <natefinch> marcoceppi: I'll do some poking around and see if I can repro
[18:57] <marcoceppi> natefinch: ta
[19:16] <dimitern> I'd appreciate a review on http://reviews.vapour.ws/r/973/
[19:18] <TheMue> dimitern: already started ;)
[19:20] <dimitern> TheMue, cheers!
[19:21] <dimitern> TheMue, it's a long diff, but it should be easy to follow
[19:21] <TheMue> dimitern: so far yes, otherwise I'll ping you here
[19:24] <dimitern> ta!
[19:38] <jw4> marcoceppi: cool!
[19:43] <mbruzek> ping jog
[19:43] <jog> hi mbruzek
[19:43] <mbruzek> I can't get to http://reports.vapour.ws/charm-summary/kubernetes
[19:44] <mbruzek> Is it just me or is that page down?
[19:44] <jog> mbruzek, please use http://reports.vapour.ws/charm-tests-by-charm/kubernetes
[19:44] <jog> mbruzek, I update the site this morning
[19:45] <jog> mbruzek, the change was to merge the old charm reporting page with that we've recently been working on.
[19:45] <mbruzek> OK
[19:57] <TheMue> dimitern: you've got a review. really like this PR
[19:58] <dimitern> TheMue, thank you! It *did* take a week to get this good :)
[19:58] <TheMue> dimitern: was worth it
[19:58] <dimitern> TheMue, indeed!
[19:59] <TheMue> dimitern: I'll now give my PR another merge kick and then enjoy my Glenrothes Single Malt :D
[19:59] <dimitern> TheMue, trunk is still blocked though
[19:59] <dimitern> TheMue, otherwise, enjoy! :)
[19:59] <TheMue> dimitern: oh, ok, then I'll do it tomorrow and now only enjoy *hicks*
[20:00]  * TheMue wishes the channel a nice weekend
[20:05] <whit> mbruzek, https://raw.githubusercontent.com/whitmo/bundle-kubernetes/master/specs/cs-latest-release-v0.9.3.yaml
[20:05] <mbruzek> jog I think pointing to the actual file in s3 would be better.
[20:06] <whit> jog, ie grabbing the file when the tests run and sticking it in s3 to refer to later
[20:06] <whit> tvansteenburgh mentioned doing this before
[20:07] <whit> jog, when the bundle is in source control, there is no guarantee that the file you link to is actually the file from the test run
[20:08] <jog> whit, agreed, I'll link to what we archive in S3
[20:10] <tvansteenburgh> jog, whit: i'm not uploading the bundle file to s3 yet but i'll try to get that done this afternoon
[20:15] <whit> tvansteenburgh, thanks!
[20:35] <natefinch> perrito666: you have a shipit
[20:36] <perrito666> natefinch: you have $drink on my tab
[20:36]  * perrito666 cries
[20:39] <jog> mbruzek, whit, arosales, removing kubernetes test data older than Feb. 18, is A-OK with everyone?
[20:39] <whit> jog +1
[20:39] <mbruzek> jog that is my understanding yes
[20:44] <arosales> jog, yes please do
[20:45] <arosales> aisrael, did you get the help you were looking for on your development
[20:45] <aisrael> arosales: natefinch was going to see if he could reproduce it; I rolled back to binaries from the end of Jan to get unblocked in the meantime
[20:48] <arosales> natefinch, thanks for the help there, getting that moving forward will help us in our charm action development
[20:52] <arosales> aisrael, if we don't have a resolution today lets open a bug so we can bring other folks in if necessary
[20:52] <aisrael> arosales: ack
[20:52] <arosales> aisrael, thanks
[21:23] <natefinch> arosales, marcoceppi, aisrael: the good news is that I can reproduce it, so now it's just a matter of figuring out what's wrong, which I imagine will be pretty obvious.  Probably won't be able to entirely track it down before EOD, but most likely by early Monday.... either way, filing a bug would be appreciated
[21:24] <marcoceppi> natefinch: thanks, are we exercising that mechanism in testing?
[21:24] <jw4> natefinch, is this the error state bug that marcoceppi pastebinned earlier?
[21:24] <natefinch> jw4: yes
[21:25] <jw4> are you able to reproduce with the latest in master?
[21:25] <aisrael> natefinch: ack, I'll file a bug report shortly
[21:25] <natefinch> marcoceppi: probably?  I'll have to look at the tests.  But obviously there's a codepath that is not being tested.
[21:25] <jw4> (I'm just afraid that it might incidentally touch a PR that I just reverted this morning)
[21:25] <natefinch> jw4: tested it with latest master as of half hour ago-ish
[21:25] <jw4> natefinch: whew (for me)
[21:25] <arosales> natefinch, thanks
[21:37] <aisrael> https://bugs.launchpad.net/juju-core/+bug/1424069
[21:37] <mup> Bug #1424069: juju resolve doesn't recognize error state <juju-core:New> <https://launchpad.net/bugs/1424069>
[21:41] <alexisb> perrito666, ping
[21:41] <alexisb> if you are still around :)
[21:41] <perrito666> alexisb: barely
[21:41] <perrito666> tell me
[21:42]  * perrito666 braces
[21:42] <alexisb> perrito666, yeah I always come in with fun requests
[21:43] <alexisb> perrito666, I need a volunteer to work with eco team on a windows workload demo
[21:43] <alexisb> eco will be leading but we need a core dude to help if questions come up
[21:43] <perrito666> I am interested
[21:43] <perrito666> how urgent is this
[21:43] <mbruzek> alexisb: windows?  What is that
[21:44] <alexisb> well we have one week to get a working demo
[21:44] <alexisb> but I want to get all the techie names lined up today so I can get the effort organized
[21:45] <alexisb> mbruzek, it is for david duffey, just coming down the pipeline
[21:45] <alexisb> perrito666, no need for work until monday, but I do want to get everyone identified that will be enlisted to help
[21:46] <alexisb> and from the core front it should be mostly helping with questions and orchestrating getting gsamfira and team involved if needed
[21:46] <alexisb> perrito666, you have been working closely with cloudbase so you are my first ask :)
[21:47] <perrito666> alexisb: sure, whatever I dont know I will most likely know who to ask
[21:47] <alexisb> perrito666, cool
[21:47]  * alexisb goes to send email and get the ball rolling
[21:48] <perrito666> Ill try to run the whole tutorial they sent to get a working workload at home for reference
[21:49] <perrito666> well after not dodging that bullet I am EOW, see you all on monday
[21:49] <perrito666> :p
[21:50] <jw4> perrito666: o/
[22:00] <alexisb> bye perrito666 and thanks!