thumper | davecheney: I'm looking at the peergrouper tests; they now seem to pass when run individually, but fail when I run all juju tests | 01:03 |
---|---|---|
thumper | davecheney: I'm expecting it is impacted by load | 01:03 |
thumper | davecheney: using your stress.sh script | 01:03 |
thumper | but I need something to stress either cpu or disk | 01:03 |
thumper | do you have something? | 01:03 |
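The kind of load generator being asked for here can be sketched in a few lines of Go; this is only a rough illustration (the flags, buffer sizes, and loop counts are invented), not the stress.sh script mentioned above.

```go
// burn.go — throwaway load generator; everything here is illustrative.
package main

import (
	"crypto/sha256"
	"flag"
	"os"
	"runtime"
)

func main() {
	disk := flag.Bool("disk", false, "hammer disk instead of cpu")
	flag.Parse()

	if *disk {
		// Repeatedly write and delete a scratch file to generate disk load.
		buf := make([]byte, 1<<20)
		for {
			f, err := os.CreateTemp("", "burn")
			if err != nil {
				panic(err)
			}
			for i := 0; i < 64; i++ {
				f.Write(buf)
			}
			f.Close()
			os.Remove(f.Name())
		}
	}

	// Otherwise spin one hashing loop per CPU.
	for i := 0; i < runtime.NumCPU(); i++ {
		go func() {
			var sum [32]byte
			for {
				sum = sha256.Sum256(sum[:])
			}
		}()
	}
	select {} // block forever; kill it when the test run is done
}
```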
thumper | wallyworld: we need to chat, re: simplestreams and lxd | 01:04 |
wallyworld | "we need to talk" | 01:04 |
wallyworld | i hate those 4 words | 01:04 |
thumper | "please come to the office?" | 01:05 |
wallyworld | thumper: did you want to talk now? | 01:05 |
thumper | today | 01:05 |
thumper | not necessarily now | 01:05 |
wallyworld | thumper: ok, give me 15? | 01:05 |
thumper | sure, np | 01:05 |
wallyworld | thumper: talk now? | 01:33 |
thumper | 1:1? | 01:33 |
thumper | ugh | 02:28 |
thumper | I have found the race in the peergrouper | 02:28 |
davecheney | thumper: ?? | 02:33 |
davecheney | do tell | 02:33 |
thumper | there are timing issues between the various goroutines it starts up | 02:34 |
davecheney | yup, so when you run go test ./... | 02:34 |
davecheney | you have 4/8 other test jobs running at one time | 02:34 |
davecheney | timing goes off | 02:34 |
thumper | sometimes under heavy load, the peer grouper will attempt to determine the leader before it realises it has any machines | 02:34 |
thumper | on successful runs, the machine watchers have fired before the other change | 02:35 |
thumper | so it knows about the machines | 02:35 |
thumper | on unsuccessful runs, it doesn't | 02:35 |
thumper | so all the machines are "extra" | 02:35 |
thumper | and have nil vote, so fail | 02:35 |
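A stripped-down illustration of the shape of that race (all names invented; this is not the real peergrouper code): the timer-driven update and the watcher's first event race each other in the worker's select loop, and whichever wins decides whether the update sees any machines.

```go
// race.go — toy model of the ordering problem described above.
package main

import (
	"fmt"
	"time"
)

type worker struct {
	machines map[string]bool // filled in only by machine-watcher events
}

func (w *worker) updatePeerGroup() {
	// If the watcher event hasn't been processed yet, this decides on an
	// empty picture: every replica-set member looks "extra" with a nil vote.
	fmt.Printf("deciding with %d machines known\n", len(w.machines))
}

func main() {
	w := &worker{machines: make(map[string]bool)}

	machineChanges := make(chan []string, 1)
	machineChanges <- []string{"0", "1", "2"} // the watcher's initial event
	update := time.After(0)                   // the periodic update timer
	time.Sleep(10 * time.Millisecond)         // both channels are now ready

	for i := 0; i < 2; i++ {
		select {
		case ids := <-machineChanges: // on good runs this wins...
			for _, id := range ids {
				w.machines[id] = true
			}
		case <-update: // ...under load this can win first, and the run fails
			w.updatePeerGroup()
		}
	}
}
```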
thumper | I'm now attempting to work out the best place to sync the workers... | 02:38 |
thumper | and best way how to... | 02:38 |
thumper | uurrgghh | 02:42 |
thumper | pretty sure that's a bit bollocks | 02:42 |
thumper | davecheney: using a *machine as a key in a map? | 02:42 |
davecheney | could be reasonable | 02:46 |
davecheney | assuming that nobody ever creates a machine | 02:46 |
davecheney | which could be a problem | 02:47 |
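The fragility davecheney is pointing at is easy to show: a pointer-keyed map only works if the exact same pointer is reused for a given machine. The `machine` type below is an invented stand-in, not the juju state type.

```go
package main

import "fmt"

type machine struct{ id string }

func main() {
	seen := map[*machine]bool{}

	m1 := &machine{id: "0"}
	seen[m1] = true

	// A second value describing the same logical machine is a different key,
	// so the lookup misses even though the contents are identical.
	m2 := &machine{id: "0"}
	fmt.Println(seen[m1], seen[m2]) // true false
}
```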
thumper | shit | 03:06 |
thumper | I can't work out how to sync these things | 03:06 |
thumper | davecheney: got a few minutes? | 03:06 |
davecheney | thumper: hey | 04:21 |
davecheney | sorry, i was at the shops | 04:22 |
davecheney | still there ? | 04:22 |
thumper | yeah, but sent an email | 04:22 |
thumper | I've given up on the peergrouper | 04:22 |
thumper | it is a big pile of assumptions I don't understand | 04:22 |
davecheney | \o/ | 04:22 |
davecheney | no | 04:22 |
davecheney | /o\ | 04:22 |
davecheney | it sounds like it needs more synchronisation | 04:23 |
davecheney | if parts of the peergrouper assume something | 04:23 |
thumper | I added what I thought would be enough | 04:23 |
thumper | but no | 04:23 |
davecheney | that needs to be replaced with explicit coordination | 04:23 |
thumper | yes, I agree with that last statement | 04:23 |
davecheney | the worrying part is i think we can assume it will fail ~100% of the time in the field | 04:23 |
davecheney | given it only just passes under controlled conditions | 04:23 |
davecheney | this should probably be a build blocker | 04:24 |
thumper | the big problem, as best I can tell, is that whenever the timer goes off for it to update itself, it assumes it knows the current state of the machines | 04:24 |
thumper | which it does not | 04:24 |
davecheney | wrooooong | 04:24 |
davecheney | that's impossible | 04:24 |
thumper | because those changes come in asynchronously | 04:24 |
thumper | and it isn't querying | 04:25 |
davecheney | /me facepalm | 04:25 |
thumper | I think that what it should do, is explicitly query all machines at the point of trying to decide | 04:25 |
thumper | and not rely on just change notifications | 04:25 |
davecheney | i think it's worse than that | 04:26 |
davecheney | you cannot query a machine | 04:26 |
davecheney | then do something with that information | 04:26 |
davecheney | an unlimited amount of time can pass between statements | 04:26 |
davecheney | any information you retrieve has to be assumed to be stale | 04:26 |
thumper | well, in practice, it isn't infinite | 04:26 |
davecheney | you have a distributed locking problem | 04:26 |
thumper | but it certainly isn't zero | 04:26 |
davecheney | s/infinite/unbounded | 04:27 |
thumper | I think that for any point in time, it should ask for the current state of the machines it cares about, and use that consistent information to make the decision | 04:27 |
thumper | to the best of its ability | 04:27 |
thumper | rather than the inconsistent picture it currently has | 04:27 |
thumper | but I have no more fucks to give | 04:28 |
davecheney | is there a way to query the state of all machines atomically | 04:29 |
davecheney | or is it N+1 style ? | 04:29 |
thumper | yes, I believe there is an API call to get the machine info | 04:29 |
thumper | if there isn't it is easy to add one | 04:29 |
thumper | as atomically as mongo gives us | 04:30 |
thumper | anyway | 04:30 |
thumper | dirty reads and all that :) | 04:30 |
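A rough sketch of the approach being argued for here: take one bulk snapshot of the machines at the point of deciding (as atomic as mongo allows), make the whole decision from that snapshot, and treat it as best-effort since it may already be stale. Every type and method name below is hypothetical, not juju's actual API.

```go
package main

import "fmt"

type machineDoc struct {
	ID        string
	WantsVote bool
}

// st stands in for whatever backs state access; one bulk query is preferable
// to N+1 per-machine reads so the snapshot is as consistent as the store allows.
type st struct{ docs []machineDoc }

func (s *st) AllPeerMachines() ([]machineDoc, error) { return s.docs, nil }

func decide(s *st) error {
	snapshot, err := s.AllPeerMachines()
	if err != nil {
		return err
	}
	// The snapshot may already be stale by the time it is used, so the
	// decision is best-effort: be prepared to be wrong and to run again.
	voters := 0
	for _, m := range snapshot {
		if m.WantsVote {
			voters++
		}
	}
	fmt.Printf("deciding from a snapshot of %d machines, %d voters\n", len(snapshot), voters)
	return nil
}

func main() {
	s := &st{docs: []machineDoc{{"0", true}, {"1", true}, {"2", false}}}
	if err := decide(s); err != nil {
		fmt.Println("error:", err)
	}
}
```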
davecheney | so, this is going to work 99% of the time | 04:31 |
davecheney | except the time when it fails because everything is going up and down like a yoyo | 04:31 |
davecheney | in the 99% case, you don't need atomics or any of that jazz 'cos it's approximately steady state | 04:31 |
davecheney | in the 1% case, when we _really_ need it to work | 04:32 |
davecheney | it's not going to | 04:32 |
davecheney | at all | 04:32 |
davecheney | this is a poor outcome | 04:32 |
thumper | heh | 04:33 |
* thumper nods | 04:33 | |
thumper | davecheney: the problem, as I see it, is that on any server under load, as it probably will be at startup, the peergrouper will fail the first time through its loop and get restarted | 04:35 |
thumper | eventually it'll probably get settled | 04:35 |
thumper | but geez | 04:35 |
thumper | how not to do something | 04:35 |
davecheney | yeah, that's what I was grasping at | 04:37 |
davecheney | under steady state, it'll work just fine | 04:37 |
davecheney | which is useless | 04:37 |
davecheney | and under load, it'll freak out | 04:37 |
davecheney | which is useless | 04:37 |
davecheney | hmmm | 04:37 |
* thumper is done | 04:45 | |
thumper | laters | 04:45 |
wallyworld | axw_: if you have time at any point, could you take a look at http://reviews.vapour.ws/r/3046/ and http://reviews.vapour.ws/r/3104 for me? not urgent, just if/when you have some time | 05:44 |
axw_ | wallyworld: ok, probably not till later on | 05:45 |
wallyworld | np, no rush | 05:45 |
axw_ | just wrapping up azure changes to support legacy | 05:45 |
wallyworld | awesome, can definitely wait till after that | 05:45 |
=== urulama__ is now known as urulama | ||
axw_ | wallyworld: are you around? | 08:24 |
axw_ | wallyworld: never mind, self approving my merge of master into azure-arm-provider | 08:27 |
axw_ | mgz_: are you able to add "azure-arm-provider" as a feature branch to CI? | 08:28 |
axw_ | mgz_: or is it automatic...? | 08:29 |
wallyworld | axw_: it's automatic, but unless you ask, it won't get to the top of the queue | 08:38 |
wallyworld | axw_: sorry, was eating | 08:38 |
axw_ | wallyworld: thanks | 08:44 |
axw_ | wallyworld: FYI, PR to merge the azure-arm provider into the feature branch: https://github.com/juju/juju/pull/3701 | 09:07 |
axw_ | wallyworld: warning, it's extremely large | 09:07 |
wallyworld | axw_: ty, will look | 09:11 |
mwhudson | oh, not that arm | 09:29 |
frobware | dimitern, ping 1:1? | 09:34 |
dimitern | frobware, hey, oops - omw | 09:34 |
dimitern | voidspace, jam, fwereade, dooferlad, standup? | 10:00 |
voidspace | omw | 10:00 |
dimitern | jamespage, gnuoy, juju/os call? | 10:31 |
jamespage | dimitern, 2 mins | 10:31 |
dimitern | np | 10:32 |
frobware | dimitern, I moved the openstack meeting to 16:30, but that may be too late for you. | 11:15 |
dimitern | frobware, it's fine for me as scheduled | 11:24 |
frobware | dimitern, thanks & appreciated | 11:24 |
dimitern | voidspace, reviewed | 11:31 |
frobware | dimitern, voidspace, dooferlad: http://reviews.vapour.ws/r/3102/ | 11:56 |
dimitern | frobware, looking | 11:56 |
dimitern | frobware, btw updated http://reviews.vapour.ws/r/3088/ to fix the mac address issue with address-allocation enabled for kvm | 12:07 |
dimitern | and tested it to work | 12:07 |
frobware | dimitern, just saw it. checking my change against voidspace's change at the moment. | 12:07 |
dimitern | frobware, hmm, so you decided to go for the full mile there - always using addresses instead of hostnames if possible, even in status | 12:11 |
frobware | dimitern, if there's an address that is not resolvable you cannot connect to the machine. | 12:12 |
dimitern | (rather than just for mongo peer host/ports) | 12:12 |
frobware | dimitern, we're fixing the wrong bug, IMO. We need to fix maas. | 12:12 |
frobware | dimitern, see the commit message for why you need to drop unresolvable names | 12:13 |
dimitern | frobware, yeah, fair enough | 12:13 |
dimitern | frobware, there is however a ResolveOrDropHostnames that does almost the same thing in hostport.go | 12:14 |
frobware | dimitern, the trouble is that it resolves | 12:14 |
frobware | dimitern, let's chat instead. HO? | 12:15 |
dimitern | frobware, ok, I'm joining the standup one | 12:15 |
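Roughly, the distinction under discussion: ResolveOrDropHostnames resolves names and keeps what resolves, whereas frobware's change avoids resolution entirely by preferring addresses. A sketch of the latter follows; this is not the actual hostport.go code, and the function name is invented.

```go
package main

import (
	"fmt"
	"net"
)

// preferAddresses keeps entries that are already IP literals and drops
// everything else, so nothing ever blocks on (or fails) DNS resolution.
func preferAddresses(hostPorts []string) []string {
	var out []string
	for _, hp := range hostPorts {
		host, _, err := net.SplitHostPort(hp)
		if err != nil {
			continue
		}
		if net.ParseIP(host) != nil {
			out = append(out, hp)
		}
	}
	return out
}

func main() {
	fmt.Println(preferAddresses([]string{"10.0.0.1:17070", "unresolvable-node.maas:17070"}))
	// -> [10.0.0.1:17070]
}
```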
frobware | dimitern, voidspace, dooferlad: "maas-spaces" feature branch created | 12:53 |
dimitern | frobware, awesome! let's get cranking :) | 12:53 |
frobware | dimitern, T-3 weeks... | 12:58 |
dimitern | frobware, yeah, it's not a lot, is it :/ | 13:06 |
=== akhavr1 is now known as akhavr | ||
mup | Bug #1514857 opened: cannot use version.Current (type version.Number) as type version.Binary <juju-core:Incomplete> <juju-core lxd-provider:Triaged> <https://launchpad.net/bugs/1514857> | 14:31 |
mup | Bug #1514857 changed: cannot use version.Current (type version.Number) as type version.Binary <juju-core:Incomplete> <juju-core lxd-provider:Triaged> <https://launchpad.net/bugs/1514857> | 14:34 |
mup | Bug #1514857 opened: cannot use version.Current (type version.Number) as type version.Binary <juju-core:Incomplete> <juju-core lxd-provider:Triaged> <https://launchpad.net/bugs/1514857> | 14:37 |
dimitern | whaaaat?! | 14:49 |
dimitern | damn...why did I spend almost a week fixing 1.24 | 14:49 |
dimitern | no 1.24.8 :( | 14:49 |
voidspace | dimitern: yeah, shame | 15:04 |
voidspace | dimitern: and there won't be a version of 1.24 with containers and "ignore-machine-addresses" working | 15:04 |
dimitern | voidspace, if 1.24 dies quickly, that won't be a big deal :) | 15:05 |
voidspace | dimitern: hopefully | 15:05 |
mup | Bug #1502306 changed: cannot find package gopkg.in/yaml.v2 <blocker> <ci> <regression> <juju-core:Invalid> <juju-core lxd-provider:Fix Released> <https://launchpad.net/bugs/1502306> | 15:19 |
mup | Bug #1514874 opened: Invalid entity name or password error, causes Juju to uninstall <sts> <juju-core:New> <https://launchpad.net/bugs/1514874> | 15:19 |
mup | Bug #1514877 opened: Env not found immediately after bootstrap <blocker> <ci> <regression> <test-failure> <juju-core:Incomplete> <juju-core controller-rename:Triaged> <https://launchpad.net/bugs/1514877> | 15:19 |
katco | fwereade: hey ran into bug 1503039 last friday while writing a reactive charm. any reason not to set that env. variable all the time? | 15:21 |
mup | Bug #1503039: JUJU_HOOK_NAME does not get set <charms> <docs> <hooks> <juju-core:Triaged> <https://launchpad.net/bugs/1503039> | 15:21 |
mup | Bug #1514874 changed: Invalid entity name or password error, causes Juju to uninstall <sts> <juju-core:New> <https://launchpad.net/bugs/1514874> | 15:22 |
mup | Bug #1514877 changed: Env not found immediately after bootstrap <blocker> <ci> <regression> <test-failure> <juju-core:Incomplete> <juju-core controller-rename:Triaged> <https://launchpad.net/bugs/1514877> | 15:22 |
mup | Bug #1502306 opened: cannot find package gopkg.in/yaml.v2 <blocker> <ci> <regression> <juju-core:Invalid> <juju-core lxd-provider:Fix Released> <https://launchpad.net/bugs/1502306> | 15:22 |
mup | Bug #1502306 changed: cannot find package gopkg.in/yaml.v2 <blocker> <ci> <regression> <juju-core:Invalid> <juju-core lxd-provider:Fix Released> <https://launchpad.net/bugs/1502306> | 15:25 |
mup | Bug #1514874 opened: Invalid entity name or password error, causes Juju to uninstall <sts> <juju-core:New> <https://launchpad.net/bugs/1514874> | 15:25 |
mup | Bug #1514877 opened: Env not found immediately after bootstrap <blocker> <ci> <regression> <test-failure> <juju-core:Incomplete> <juju-core controller-rename:Triaged> <https://launchpad.net/bugs/1514877> | 15:25 |
fwereade | katco, nah, go ahead and set it always | 15:26 |
fwereade | katco, it was originally just for debug-hooks, when you wouldn't know | 15:27 |
fwereade | katco, and you *can* always look at argv[0] | 15:27 |
fwereade | katco, but better just to be consistent across the board | 15:27 |
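A minimal sketch of what fwereade describes, for a hook written in Go: prefer JUJU_HOOK_NAME when the agent sets it, and fall back to argv[0] otherwise (the helper name is invented).

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func hookName() string {
	if name := os.Getenv("JUJU_HOOK_NAME"); name != "" {
		return name
	}
	// Hooks are invoked by their file name (e.g. "config-changed"), so
	// argv[0] is a reasonable fallback when the variable isn't set.
	return filepath.Base(os.Args[0])
}

func main() {
	fmt.Println("running hook:", hookName())
}
```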
katco | fwereade: kk ty just wanted to check | 15:27 |
fwereade | katco, cheers | 15:28 |
* fwereade gtg out, back maybe rather later | 15:28 | |
* katco waves | 15:28 | |
mup | Bug #1514874 changed: Invalid entity name or password error, causes Juju to uninstall <sts> <juju-core:New> <https://launchpad.net/bugs/1514874> | 15:31 |
mup | Bug #1514877 changed: Env not found immediately after bootstrap <blocker> <ci> <regression> <test-failure> <juju-core:Incomplete> <juju-core controller-rename:Triaged> <https://launchpad.net/bugs/1514877> | 15:31 |
mup | Bug #1502306 opened: cannot find package gopkg.in/yaml.v2 <blocker> <ci> <regression> <juju-core:Invalid> <juju-core lxd-provider:Fix Released> <https://launchpad.net/bugs/1502306> | 15:31 |
katco | ericsnow: did you use git mv for your cleanup patch? | 15:33 |
ericsnow | katco: yep | 15:34 |
ericsnow | katco: the GH diff is a little easier to follow | 15:34 |
mup | Bug #1502306 changed: cannot find package gopkg.in/yaml.v2 <blocker> <ci> <regression> <juju-core:Invalid> <juju-core lxd-provider:Fix Released> <https://launchpad.net/bugs/1502306> | 15:34 |
mup | Bug #1514874 opened: Invalid entity name or password error, causes Juju to uninstall <sts> <juju-core:New> <https://launchpad.net/bugs/1514874> | 15:34 |
mup | Bug #1514877 opened: Env not found immediately after bootstrap <blocker> <ci> <regression> <test-failure> <juju-core:Incomplete> <juju-core controller-rename:Triaged> <https://launchpad.net/bugs/1514877> | 15:34 |
katco | ericsnow: i wish RB would detect that and show just the diffs instead of all green | 15:34 |
ericsnow | katco: yep, me too | 15:35 |
perrito666 | ahh RB the source of most of our wishes :p | 15:35 |
marcoceppi_ | alexisb: is anyone working on this? https://bugs.launchpad.net/juju-core/+bug/1488139 will it actually make it to alpha2? | 15:38 |
mup | Bug #1488139: juju should add nodes IPs to no-proxy list <network> <proxy> <juju-core:Triaged> <https://launchpad.net/bugs/1488139> | 15:38 |
alexisb | cherylj, ^^^ | 15:38 |
voidspace | dimitern: ping | 15:59 |
voidspace | dimitern: for "pick provider first" for addresses the upgrade step is AddPreferredAddressesToMachine | 16:00 |
voidspace | dimitern: that's the same upgrade function used to add preferred addresses to machines in the first place | 16:00 |
dooferlad | pro tip: if you uninstall maas, make sure that you get rid of maas-dhcp | 16:00 |
voidspace | dimitern: 1.25 already calls this as an upgrade step, so I assert that the backport to 1.25 doesn't need to add a new upgrade step... | 16:00 |
voidspace | dooferlad: :-) | 16:00 |
dooferlad | two DHCP servers on the same network result in such fun :-( | 16:00 |
voidspace | dooferlad: there are about seven billion maas packages | 16:00 |
dooferlad | voidspace: indeed. I think it didn't get maas-dhcp when I uninstalled because by default it isn't installed with the maas metapackage | 16:01 |
dimitern | voidspace, that sounds good | 16:19 |
perrito666 | well, in a whole new way of creepiness, google now adds the flight to your personal calendar when you get your plane tickets via email | 16:33 |
perrito666 | even though, the email was not the usual plain text reservation | 16:34 |
marcoceppi_ | we need help: our websocket connection keeps dying during a deployment, tanking charm testing for power8. | 16:41 |
marcoceppi_ | these are the last few lines of the log | 16:41 |
marcoceppi_ | http://paste.ubuntu.com/13216926/ | 16:42 |
marcoceppi_ | INFO juju.rpc server.go:328 error closing codec: EOF | 16:42 |
marcoceppi_ | what does that mean^? | 16:42 |
natefinch | marcoceppi_: I think in this case, EOF should be treated like "not an error" | 16:48 |
natefinch | marcoceppi_: yeah, looking at the code, that just means it probably was already closed | 16:48 |
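The pattern natefinch describes, sketched in isolation (this is not the actual juju/rpc code): an EOF from closing a connection that the other side already shut down is logged at most, never treated as a failure.

```go
package main

import (
	"errors"
	"io"
	"log"
)

func closeCodec(c io.Closer) {
	if err := c.Close(); err != nil && !errors.Is(err, io.EOF) {
		// Only a non-EOF error is worth reporting; EOF usually just means
		// the other side already closed the connection.
		log.Printf("error closing codec: %v", err)
	}
}

type fakeConn struct{}

func (fakeConn) Close() error { return io.EOF }

func main() {
	closeCodec(fakeConn{}) // prints nothing: EOF is treated as "not an error"
}
```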
marcoceppi_ | well, we've been wrestling with this for a few days now, and we're stuck: every time, after a few mins, the websocket abruptly closes and tanks the python websocket library, which kills python-jujuclient, which kills amulet | 16:51 |
marcoceppi_ | so we're unable to run charm tests on our power8 maas | 16:51 |
marcoceppi_ | I'm prepared to provide anyone willing to help logs or whatever else is needed. I've exhausted my troubleshooting | 16:52 |
natefinch | alexisb: ^ | 16:54 |
alexisb | marcoceppi_, is this related to the bug you pointed at earlier? | 16:58 |
marcoceppi_ | alexisb: it's a machine running behind the great canonical firewall; we've got some things punched through, and for the rest we're using an http proxy. It seems this breakage always happens around the same time, so we're removing as much of the proxy as possible to test further | 16:59 |
marcoceppi_ | alexisb: long story short, not sure if this is related; we've manually no-proxy listed /everything/ for the environment, so while that bug will help, it's not likely to resolve whatever we're hitting | 17:00 |
alexisb | so marcoceppi_ do you have a system we can triage? | 17:07 |
marcoceppi_ | alexisb: yes, but it's behind the vpn and some special grouping, though I may be able to get someone access if they aren't in that group | 17:08 |
marcoceppi_ | alexisb: ignore, yes we have a system to triage | 17:08 |
alexisb | katco, can you get someone on your team to work w/ marcoceppi_ please | 17:09 |
alexisb | marcoceppi_, we will need to make sure there is a bug open to track status | 17:09 |
katco | alexisb: yep | 17:10 |
alexisb | thanks | 17:10 |
katco | marcoceppi_: is there already a bug for this? | 17:10 |
marcoceppi_ | alexisb: I'll file a bug, though I'm not sure what to call the problem | 17:10 |
marcoceppi_ | we're not even able to diagnose the source of the problem | 17:10 |
katco | marcoceppi_: that's ok, we can iterate on the title :) | 17:11 |
marcoceppi_ | katco: https://bugs.launchpad.net/juju-core/+bug/1514922 | 17:12 |
mup | Bug #1514922: Deploying to maas ppc64le with proxies kills websocket <juju-core:New> <https://launchpad.net/bugs/1514922> | 17:12 |
katco | marcoceppi_: can you also update the bug with the details of what you've been discussing here, and any relevant logs? | 17:12 |
katco | marcoceppi_: (ty for filing a bug) | 17:13 |
rick_h__ | urulama: frankban ^ did we see something with the websockets closing on us? | 17:14 |
rick_h__ | urulama: frankban please see if this sounds familiar at all and with our 'ping' and such | 17:14 |
urulama | looking | 17:14 |
urulama | well, it was through apache ... not sure what is meant by proxy in the bug? apache reverseproxy? | 17:15 |
marcoceppi_ | katco: updated | 17:15 |
katco | marcoceppi_: ty sir | 17:15 |
frankban | rick_h__, urulama it does not look familiar | 17:16 |
rick_h__ | frankban: ok, thanks | 17:16 |
marcoceppi_ | fwiw, I can connect and deploy just fine to the environment, it's when we keep a persistent websocket connection open that it tanks after a few mins of websocketing, or whatever websockets do | 17:17 |
marcoceppi_ | this build script works without issue on all other testing substrates | 17:17 |
katco | marcoceppi_: so it's *just* ppc? | 17:18 |
marcoceppi_ | katco: well it's the only maas we haver access to | 17:18 |
marcoceppi_ | it just so happens to be ppc64le | 17:18 |
katco | marcoceppi_: gotcha... what do you mean when you say it works on all other testing substrates? | 17:18 |
marcoceppi_ | katco: gce, aws, openstack, etc | 17:19 |
marcoceppi_ | katco: this job runs all our other charm testing substrates, which are public clouds and local | 17:19 |
katco | marcoceppi_: ah ok | 17:19 |
natefinch | it's unfortunate that our only MAAS environment is also on a wacky architecture | 17:20 |
marcoceppi_ | well, it's not the only maas environment for testing, juju ci has a few they use. It's the only maas environ we have for charm testing, and it's maas because no public cloud has power8 yet | 17:20 |
natefinch | marcoceppi_: it's a shame it's the only MAAS environment *you* have for testing, then :) | 17:21 |
marcoceppi_ | hah, yes. | 17:21 |
mup | Bug #1514922 opened: Deploying to maas ppc64le with proxies kills websocket <juju-core:New> <https://launchpad.net/bugs/1514922> | 17:25 |
marcoceppi_ | katco: this seems to be related to http-proxy juju environment stuff. We removed all but the apt-*-proxy keys and the websocket didn't die | 17:44 |
katco | marcoceppi_: hm ok thanks that helps | 17:45 |
mup | Bug #1514616 opened: juju stateserver does not obtain updates to availability zones <kanban-cross-team> <landscape> <juju-core:New> <https://launchpad.net/bugs/1514616> | 17:55 |
marcoceppi_ | katco: it appears setting apt-http-proxy and other env variables does not do what is expected | 17:59 |
marcoceppi_ | hum | 18:01 |
marcoceppi_ | nvm | 18:01 |
marcoceppi_ | katco alexisb this isn't a priority for today, there are too many sharp sticks in our eyes to get a clear enough vision on this | 18:08 |
marcoceppi_ | for today anymore* | 18:09 |
marcoceppi_ | but it's very much a problem we will need fixed for 1.26 | 18:09 |
marcoceppi_ | If getting on a hang out to explain this more helps, lmk | 18:09 |
cmars | perrito666, can I get a review of http://reviews.vapour.ws/r/3041/ ? it's a bugfix for LP:#1511717 backported to 1.25 | 18:24 |
mup | Bug #1511717: Incompatible cookie format change <blocker> <ci> <compatibility> <regression> <juju-core:Fix Released by cmars> <juju-core 1.25:In Progress by cmars> <juju-core 1.26:Fix Committed by cmars> <https://launchpad.net/bugs/1511717> | 18:24 |
katco | marcoceppi_: just lmk when you get a better idea of what's going on | 18:31 |
marcoceppi_ | katco: we have no idea what's going on. We just know it's not getting resolved in 2 hours time | 18:32 |
cherylj | ericsnow: ping? | 18:34 |
ericsnow | cherylj: hey | 18:34 |
cherylj | hey ericsnow :) got a question for you about systemd | 18:34 |
ericsnow | cherylj: sure | 18:34 |
cherylj | ericsnow: was there a reason you linked the service files, rather than copying them over? just out of curiosity | 18:35 |
ericsnow | cherylj: was trying to stick just to the systemd API rather than copying any files | 18:36 |
cherylj | ericsnow: okay, I was just wondering. I've seen 2 bugs of people doing things we wouldn't expect that causes problems with just using links. | 18:36 |
cherylj | I'm okay with making those special cases work around juju :) | 18:37 |
ericsnow | cherylj: sounds good | 18:37 |
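For illustration, the two installation styles being contrasted, sketched with invented paths and helpers (the real code goes through the systemd API rather than touching the filesystem directly): a symlinked unit breaks if its target moves, a copied unit does not.

```go
package main

import (
	"io"
	"os"
	"path/filepath"
)

const systemdDir = "/etc/systemd/system"

// linkUnit leaves the unit file where the agent wrote it and just points
// systemd at it; if the original location is removed or remounted, the link
// dangles and the service breaks.
func linkUnit(src string) error {
	return os.Symlink(src, filepath.Join(systemdDir, filepath.Base(src)))
}

// copyUnit duplicates the file into the systemd directory, so the service
// keeps working even if the original file goes away.
func copyUnit(src string) error {
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(filepath.Join(systemdDir, filepath.Base(src)))
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	// Either way, systemd still needs a daemon-reload (or the equivalent
	// API call) before it will see the new unit.
	_ = linkUnit
	_ = copyUnit
}
```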
cherylj | thanks! | 18:37 |
ericsnow | cherylj: np :) | 18:37 |
perrito666 | cmars: sure you can | 18:55 |
perrito666 | sorry was afk for a moment | 18:55 |
perrito666 | cmars: shipit | 18:58 |
cmars | perrito666, thanks! | 19:12 |
=== urulama is now known as urulama__ | ||
natefinch | and.... master is blocked, dangit | 20:56 |
natefinch | ericsnow, wwitzel3: can you guys review http://reviews.vapour.ws/r/3103/ real quick? It's best to look at the PR (https://github.com/juju/juju/pull/3698) rather than reviewboard, because 99% of the code has already been reviewed, only a few small tweaks need to be reviewed (everything but the cherry-picked merge). | 21:02 |
ericsnow | natefinch: looking | 21:04 |
natefinch | I just made the worker into a singular worker and updated a test to check that. The last commit is really just redoing work in the first commit, because I cherry-picked the merge afterward (a result of me doing things in the wrong order, but it didn't seem worth the trouble to redo it in the right order) | 21:06 |
ericsnow | natefinch: LGTM | 21:06 |
natefinch | ericsnow: thanks :) | 21:07 |
natefinch | katco, ericsnow: ug, looking at the failures on the lxd branch, I think it's just that some stuff changed out from underneath us... but when I rebase, I get 332 merge conflicts :/ | 21:37 |
ericsnow | natefinch: the patch I have up for review fixes most of those errors | 21:38 |
ericsnow | natefinch: http://reviews.vapour.ws/r/3101/ | 21:38 |
katco | natefinch: we rebased off the last bless of master | 21:38 |
katco | natefinch: i.e. we're intentionally behind master | 21:38 |
natefinch | ahh | 21:39 |
katco | ericsnow: the patch you have up fixes the things that cursed our branch? | 21:43 |
ericsnow | katco: several of them | 21:43 |
ericsnow | katco: oh | 21:44 |
natefinch | the one I was looking at was this one: https://bugs.launchpad.net/juju-core/+bug/1514857 | 21:44 |
ericsnow | katco: not that cursed our branch | 21:44 |
mup | Bug #1514857: cannot use version.Current (type version.Number) as type version.Binary <blocker> <ci> <regression> <test-failure> <juju-core:Incomplete> <juju-core lxd-provider:Triaged> <https://launchpad.net/bugs/1514857> | 21:44 |
ericsnow | katco: rather, the Wily test failures (which will curse our branch soon enough) | 21:44 |
katco | ericsnow: ah ok. natefinch looks like you're still good to look at the curses | 21:45 |
natefinch | katco: ok | 21:45 |
natefinch | my problem is figuring out why there's a compile issue. Seems like we got half of a change or something | 21:50 |
ericsnow | natefinch: looks like katco didn't use the merge bot :P | 21:54 |
katco | ericsnow: i did not. is it causing problems? | 21:55 |
ericsnow | katco: yeah, the merge broke some code | 21:55 |
ericsnow | katco: the merge bot would have caught it | 21:55 |
katco | ericsnow: oh oops :( sorry natefinch | 21:57 |
katco | ericsnow: natefinch: the idea was to get a bless anyway, so skipped the bot. shouldn't have done that | 21:57 |
natefinch | katco, ericsnow: http://reviews.vapour.ws/r/3110/ | 21:59 |
ericsnow | natefinch: LGTM | 22:00 |
natefinch | I gotta run, it's time o'clock, as my 2 year old would say. But I can land this later, or someone else can $$merge$$ as they wish | 22:01 |
natefinch | everything compiles now... there are some maas timeouts, but I'm guessing those are spurious. | 22:02 |
=== natefinch is now known as natefinch-afk | ||
natefinch-afk | back later. have a lot of work time left for today. | 22:02 |
mup | Bug #1515016 opened: action argument with : space is incorrectly interpreted as json <juju-core:New> <https://launchpad.net/bugs/1515016> | 22:20 |
=== akhavr1 is now known as akhavr | ||
katco | ericsnow: wwitzel3: natefinch-afk: please don't forget to update your bugs with status for the day | 22:37 |
ericsnow | katco: will do | 22:37 |
wwitzel3 | katco: rgr | 22:38 |
wallyworld | axw_: perrito666: give me a minute | 23:15 |
anastasiamac | wallyworld: k | 23:15 |