beisner | sinzui, so in the openstack provider attempt, juju fires up a vivid instance, and i can see the nova console log that it's booted and ready with whatever userdata was passed to it. but there is a key mismatch. | 00:00 |
beisner | sinzui, with the maas deploy, i get the same symptom (key issue), and it's much harder to get console output. | 00:00 |
beisner | er rather, maybe not a key mismatch, but definitely a key issue. | 00:01 |
sinzui | beisner, I don't think key issues are series issues | 00:01 |
beisner | sinzui, all those woes go away when I set default-series: utopic or trusty in the environments.yaml. | 00:01 |
sinzui | beisner, I think you found a bug :) | 00:03 |
beisner | sinzui, ok, what can i do/collect to raise a helpful/meaningful bug on this? | 00:04 |
beisner | ie. is there a way to get more verbosity from juju bootstrap? | 00:04 |
sinzui | beisner, the cloud-init-output log and maybe a machine-0 log if it gets that far | 00:05 |
beisner | sinzui, unit 0 never comes alive according to juju | 00:05 |
beisner | and juju debug-log is no help at that stage | 00:05 |
sinzui | beisner, I often ssh into the machine the moment I see the ip is open and tail the /var/log/cloud-init-output.log | 00:06 |
beisner | sinzui, ah cool. i'll dive in a bit more. appreciate the guidance. | 00:07 |
wwitzel3 | ericsnow: just realized I forgot to hit publish on that review, you have a review from me now. | 00:07 |
ericsnow | wwitzel3: thanks | 00:07 |
wwitzel3 | ericsnow: mostly minor, I found it all easy to follow, no comments about the service stuff since it is pretty generic boilerplate-y and we talked about it before | 00:09 |
ericsnow | wwitzel3: cool | 00:09 |
wallyworld | thumper: you around? | 00:20 |
wallyworld | sinzui: so we should be able to get 1.22 out now right? since the precise upgrade issue is only 1.23? | 00:28 |
sinzui | wallyworld, no, because we never got a pass for 1.22 | 00:32 |
wallyworld | sinzui: assuming ci becomes happy | 00:32 |
sinzui | wallyworld, 1.22-beta4 has NEVER passed CI | 00:32 |
wallyworld | :-( | 00:32 |
sinzui | wallyworld, if it does pass, then I release | 00:32 |
wallyworld | let's hope this one works | 00:33 |
davecheney | thumper: ping | 00:41 |
* thumper is here now | 00:44 | |
wallyworld | thumper: can i grab 5 when you are free? | 00:47 |
thumper | wallyworld: sure, I'll be done chugging lunch in about 5 minutes | 00:48 |
wallyworld | np | 00:48 |
thumper | it's a banana protein smoothie | 00:48 |
wallyworld | mmmmm | 00:48 |
anastasiamac | 5min for smoothie? | 00:48 |
anastasiamac | thy* | 00:49 |
davecheney | thumper: shall we meet in the 1:1 hangout ? | 00:49 |
thumper | davecheney: yep, how about in 11 minutes? | 00:49 |
davecheney | thumper: go talk to wallyworld first | 00:49 |
thumper | ta | 00:49 |
davecheney | i'll see you in the hangout whenever | 00:49 |
thumper | wallyworld: our 1:1? | 00:53 |
wallyworld | yup | 00:53 |
wallyworld | sinzui: i just looked at the dashboard and the latest 1.23 run had local-upgrade-precise-amd64 passing | 00:58 |
wallyworld | have you upped the timeout already? | 00:58 |
sinzui | wallyworld, I did, that was my proof | 01:13 |
wallyworld | sinzui: logs are needed to help see where the time is going | 01:14 |
wallyworld | i can't see any linked to the dashboard | 01:15 |
sinzui | wallyworld, I can give you the failures, but the passes might be more informative since I also gave the slaves more time to collect | 01:15 |
sinzui | s/slave/services | 01:15 |
wallyworld | sinzui: could you attach both to the bug for me? | 01:16 |
sinzui | wallyworld, I will see what I can do. I don't want to make them public if they contain credentials | 01:16 |
wallyworld | sinzui: make the bug private? | 01:16 |
sinzui | wallyworld, I cannot because that will hide the critical blockers | 01:18 |
wallyworld | sinzui: oh, maybe send privately? | 01:18 |
sinzui | wallyworld, this wouldn't be awkward if the credentials for reports.vapour.ws hadn't also failed this weekend | 01:18 |
=== kadams54-away is now known as kadams54 | ||
sinzui | wallyworld, did you see smoser's ruling on apt-get dist-upgrade | 01:25 |
wallyworld | sinzui: one sec, just finishing standup | 01:25 |
wallyworld | sinzui: read scott's bug comments, we should be ok - we don't use proposed pocket with precise | 01:43 |
wallyworld | that i know of | 01:43 |
alexisb | menn0, thumper is this doc up to date??: | 01:49 |
alexisb | https://docs.google.com/a/canonical.com/document/d/1jsuoTbXZbj3wtoXCpc5MGFVvwuofYx3hLw2eNoXEmE0/edit#heading=h.aby6yid7wq2d | 01:49 |
* menn0 looks | 01:49 | |
menn0 | alexisb: yes, except that now we have a working proof of concept of logging to MongoDB and have run scaling tests (see your email) | 01:50 |
sinzui | wallyworld, we never use proposed | 01:50 |
menn0 | alexisb: I have been working on turning the POC into production code but whether it gets merged is somewhat dependent on whether the performance hit is deemed ok | 01:51 |
alexisb | menn0, yep understood, I just need a way to capture the work for logging that can be shared with those that are interested | 01:51 |
sinzui | wallyworld, but juju does do apt-get upgrade, which didn't give us an updated cloud-utils. apt-get install then did a remove | 01:51 |
menn0 | alexisb: ok. let me know if you need to know more. | 01:52 |
alexisb | ie answer the question for "why is logging for JES important and requires work" | 01:52 |
alexisb | nope I think that gets me what i need | 01:52 |
alexisb | thanks | 01:52 |
wallyworld | sinzui: so maybe the fix to put cloud-utils and cloud-image-utils on the one line is not required anymore | 01:52 |
sinzui | wallyworld, it is required because we haven't changed anything else to ensure removals cannot happen | 01:53 |
wallyworld | ok | 01:53 |
menn0 | wallyworld, sinzui: you guys seem to be discussing the same part of the code that I just filed a bug about. (bug 1424892) | 01:53 |
mup | Bug #1424892: rsyslog-gnutls is not installed when enable-os-refresh-update is false <cloud-init> <juju-core:New> <https://launchpad.net/bugs/1424892> | 01:53 |
wallyworld | menn0: nah, different | 01:54 |
sinzui | menn0, yes | 01:54 |
wallyworld | this is the deb packaging issue which affected cloud-utils | 01:54 |
* thumper sees that menn0 has answered all of alexisb's questions | 01:54 | |
thumper | coffee time then... | 01:54 |
wallyworld | menn0: your issue is the behaviour of the flags to disable apt | 01:54 |
sinzui | menn0, apt-get update finds the new packages (in cloud-* example) but apt-get upgrade will not install them because upgrade is not permitted to install new deps! | 01:55 |
sinzui | menn0, but apt-get dist-upgrade can install new deps | 01:55 |
wallyworld | menn0: when upgrading, do you recall if the state server rejects connections until all nodes in the env are deemed to have upgraded? | 01:55 |
menn0 | wallyworld: not quite ... let me quickly look at the code | 01:56 |
menn0 | wallyworld: a state server will accept API connections once it itself has upgraded | 01:59 |
menn0 | wallyworld: but state servers always upgrade first, before other nodes | 01:59 |
wallyworld | menn0: ok, ta. i'm looking into the CI blocker where precise upgrades time out | 01:59 |
wallyworld | there's a bucket load of mongo connection failures that go on and on | 01:59 |
wallyworld | but state server probably not at fault if it accepts connections after it has finished | 02:01 |
menn0 | wallyworld: do you have some logs handy? | 02:01 |
wallyworld | menn0: i'll forward an email. sinzui had to increase timeout from 10 mins to 20 to make CI local precise upgrades pass. but just precise | 02:02 |
menn0 | wallyworld: that is certainly odd. | 02:02 |
wallyworld | indeed | 02:02 |
wallyworld | precise runs a slightly different mgo version | 02:02 |
wallyworld | that's all i can think of off hand | 02:02 |
sinzui | wallyworld, hp's swift isn't publishing - the publish failed. I am getting out the hammers | 02:03 |
wallyworld | what's that about hp? | 02:03 |
sinzui | wallyworld, timeouts uploading | 02:04 |
wallyworld | oh joy | 02:04 |
sinzui | wallyworld, I switched the job to not rebuild, just try to publish what the previous job made | 02:05 |
wallyworld | ok | 02:05 |
=== kadams54 is now known as kadams54-away | ||
beisner | wallyworld, sinzui - back for a bit, raised a bug on that vivid thing. should be readily reproducible but holler if there are any ?s. https://bugs.launchpad.net/juju-core/+bug/1424900 | 02:14 |
mup | Bug #1424900: Bootstrapping Vivid: ERROR failed to bootstrap environment, Permission denied (publickey), ci-info: no authorized ssh keys fingerprints found for user ubuntu <openstack> <uosci> <juju-core:New> <https://launchpad.net/bugs/1424900> | 02:14 |
wallyworld | thank you | 02:14 |
wallyworld | sinzui: menn0: with those logs, i see the first 4 minutes spinning up the state server and machines 1,2, then the state server upgrade completes in about a minute, then we see a tonne of connection terminated errors lasting several minutes, so it seems there's an issue with the worker nodes upgrading | 02:19 |
menn0 | wallyworld: yep, i'm looking at those logs now | 02:19 |
menn0 | wallyworld: the machine-0 logs are all perfectly normal | 02:19 |
menn0 | wallyworld: but the machine-1 logs indicate that the agent never restarted into the new version | 02:20 |
wallyworld | menn0: in the machine 1 log, i see a 3 minute gap fetching tools | 02:20 |
menn0 | wallyworld: it sees the need to upgrade but never seems to reboot | 02:20 |
wallyworld | menn0: the test may have timed out before machine 1 could restart | 02:21 |
wallyworld | take off the 3 minutes to fetch tools | 02:21 |
sinzui | menn0, it's about timing. I was on the machine when machine 1 was shut down because of a timeout. it was upgrading. so I hacked the job on the precise slave to give it 20 minutes to see a pass. | 02:21 |
wallyworld | and it probably would have been ok | 02:21 |
menn0 | but 20 mins is just crazy | 02:22 |
sinzui | menn0, all machines normally upgrade in less than 60 seconds. so 18 minutes is scary | 02:22 |
wallyworld | sinzui: this is what i see as the issue | 02:22 |
wallyworld | 2015-02-23 18:36:07 INFO juju.worker.upgrader upgrader.go:201 fetching tools from "https://10.0.0.191:17070/environment/329350b1-edf2-4d62-8156-7338a12d3808/tools/1.23-alpha1.1-precise-amd64" | 02:22 |
wallyworld | 2015-02-23 18:39:52 INFO juju.utils http.go:66 hostname SSL verification disabled | 02:22 |
wallyworld | that almost 4 minute gap | 02:22 |
menn0 | looking at the timestamps, the machine was going VERY slowly | 02:22 |
menn0 | 30s between seeing the need to upgrade and then /starting/ to download the tools | 02:23 |
sinzui | menn0, yep. I rebooted the machine too. It is fast when I use it | 02:23 |
menn0 | sinzui, wallyworld: the agent is still starting up when it sees the need to upgrade. the timings between the various workers starting up are rather wide, like the system was crawling. | 02:24 |
wallyworld | yes | 02:25 |
wallyworld | it just seems machine 1 is very, very slow | 02:25 |
sinzui | menn0, it wasn't/isn't | 02:25 |
menn0 | sinzui: it might not be, but it just looks that way based on the logs | 02:27 |
sinzui | menn0, I also cleared /var/cache/lxc/cloud-precise | 02:28 |
wallyworld | i'm not sure what to do now to diagnose further - if it really is just precise, that makes it very hard to reason about | 02:28 |
sinzui | we are still waiting for the same job to run with 1.22 to compare | 02:28 |
wallyworld | i guess we see how 1.22 comes out | 02:29 |
menn0 | wallyworld: we could spin up a precise instance on ec2 | 02:29 |
wallyworld | we could, i might do that | 02:30 |
menn0 | wallyworld, sinzui: as an example, you can see the difference in the "slow down" when you look at the deployer worker in the logs | 02:33 |
menn0 | wallyworld, sinzui: before the API disconnection (due to the state server upgrading itself), the deployer worker starts 1s after its parent worker (api-post-upgrade) | 02:34 |
menn0 | wallyworld, sinzui: at the bottom of the logs it starts 5.5 mins after its parent worker | 02:35 |
sinzui | wow | 02:35 |
menn0 | wallyworld, sinzui: that's pretty strange | 02:35 |
sinzui | that is indeed the difference we feel watching it | 02:36 |
sinzui | menn0, the upgrade job just ran with 1.22. | 02:37 |
wallyworld | hmmm. maybe something is thrashing the disk? or eating the cpu? | 02:37 |
menn0 | wallyworld: that's what i'm thinking. and that thing could even be something in Juju itself. | 02:37 |
sinzui | 2015-02-24 02:35:45 INFO juju.cmd.juju upgradejuju.go:214 started upgrade to 1.22-beta4.1 | 02:37 |
sinzui | and status shows it complete at | 02:37 |
sinzui | 2015-02-24 02:36:09 | 02:37 |
menn0 | sinzui: what instance spec is being used for the precise tests? | 02:37 |
menn0 | sinzui: that's the kind of timing I would have expected | 02:38 |
wallyworld | but is the above for the state server? or a machine? | 02:38 |
wallyworld | ah | 02:38 |
wallyworld | the cmd | 02:39 |
sinzui | menn0, the precise slave has 8G ram with 4 vcpu | 02:39 |
sinzui | menn0, at this moment it has lots of resources free, but it was busy last hour building and testing | 02:40 |
sinzui | menn0, there was nothing for us to cleanup when we started investigating. the machine was fast for us | 02:40 |
wallyworld | sinzui: the precise slave is dedicated to running the juju test in question? | 02:41 |
sinzui | wallyworld, it is | 02:41 |
menn0 | sinzui: i'm looking at jenkins. are the latest successes because of the extended timeout? | 02:43 |
sinzui | menn0, the 2 master ones are | 02:43 |
menn0 | sinzui: kk | 02:44 |
sinzui | menn0, the 1.22 passed normally | 02:44 |
menn0 | sinzui: 18mins is just nuts | 02:46 |
sinzui | menn0, sure, but not for maas. I wondered if recent changes need new deps and extra work for precise | 02:46 |
menn0 | wallyworld: are you spinning up a precise instance? i'd like to poke around as the upgrade happens | 02:47 |
wallyworld | menn0: just resetting by source tree | 02:47 |
wallyworld | my | 02:47 |
menn0 | sinzui: perhaps bit I can't imagine what would cause this | 02:47 |
menn0 | but | 02:47 |
menn0 | sinzui: one thing that could be helpful is if the env had debug logging turned on. | 02:55 |
menn0 | sinzui: it starts off in debug but switches to info for the root logger | 02:56 |
menn0 | 2015-02-23 18:32:05 DEBUG juju.worker.logger logger.go:45 reconfiguring logging from "<root>=DEBUG" to "<root>=INFO;unit=DEBUG" | 02:56 |
sinzui | menn0, I can do that now that ci is locked down | 02:56 |
sinzui | menn0, I can switch it now, then we wait for 1.23 to test | 02:56 |
menn0 | wallyworld: meant to say... all those API disconnect messages in the machine-0 logs are just due to the repeated "juju status" polling that the test script does. that's fairly normal. | 02:57 |
sinzui | menn0, debug is in place | 02:57 |
menn0 | sinzui: awesome | 02:57 |
wallyworld | menn0: i have a bootstrapped precise instance - did you want me to add your ssh public key? | 02:58 |
menn0 | wallyworld: please | 02:58 |
* thumper tries for focus for an hour | 02:59 | |
thumper | if it is urgent, text me | 02:59 |
* thumper doesn't expect anything urgent | 02:59 | |
wallyworld | menn0: 54.82.35.127 i'll start a precise worker machine | 03:00 |
wallyworld | menn0: i'm a tool - i bootstrapped 1.23 not 1.21 | 03:02 |
wallyworld | ffs | 03:02 |
menn0 | it's so hard to get good help these days... | 03:02 |
wallyworld | forgot to type /usr/bin/juju | 03:03 |
wallyworld | sigh | 03:03 |
menn0 | wallyworld: has that host gone away? I can't connect to it now. | 03:14 |
wallyworld | menn0: yeah, almost done starting a 1.21 host, sorry | 03:15 |
wallyworld | it's been slow to come up | 03:15 |
wallyworld | menn0: machine 0 is 54.158.193.40 | 03:17 |
wallyworld | machine 1 is 54.204.193.53 | 03:17 |
menn0 | wallyworld: I thought you were just going to test with the local provider on a single precise machine | 03:19 |
wallyworld | menn0: i was curious to see how precise in general went | 03:19 |
wallyworld | but we can do both | 03:20 |
menn0 | wallyworld: is my key there? | 03:20 |
wallyworld | yep | 03:20 |
wallyworld | do you want to use machine as host for a local env | 03:20 |
wallyworld | machine 0 | 03:20 |
wallyworld | mongo would need to be stopped | 03:21 |
wallyworld | i should just fire up a new machine | 03:21 |
menn0 | wallyworld: I was using the wrong key... i have a personal one and a canonical one. fixed now | 03:21 |
menn0 | wallyworld: how about I fire up the machine for the local test | 03:23 |
wallyworld | ok | 03:23 |
wallyworld | menn0: upgrade on aws precise was fast, so it has to be a resource contention issue | 03:29 |
menn0 | wallyworld: ok. a useful data point. i'm just setting up this other instance now. | 03:29 |
menn0 | sinzui: SSD or magnetic storage? | 03:29 |
sinzui | menn0, I think the latter. the machine is in Hp | 03:31 |
menn0 | sinzui: cool. i'll go with that. | 03:31 |
menn0 | wallyworld: it's 54.190.88.226. installing 1.21 now. | 03:41 |
wallyworld | ok, does it have my ssh key imported? | 03:41 |
sinzui | wallyworld, all the maas deploy jobs are still failing. | 03:42 |
wallyworld | hmmm | 03:42 |
wallyworld | i wonder what's different compared with clouds | 03:43 |
wallyworld | the same scripts should work on both | 03:43 |
wallyworld | is there a url with logs? | 03:43 |
sinzui | wallyworld, no, we cannot get logs for maas because they have unresolvable dns | 03:46 |
wallyworld | oh joy, this will be fun to solve | 03:46 |
sinzui | wallyworld, I am attempting to get into the maas to find the names for the units, then use virt-viewer to connect directly to the console | 03:46 |
sinzui | I don't have creds for maas 1.7 and 1.8 though | 03:47 |
menn0 | wallyworld: well i'm seeing a problem with the upgrade on precise but it doesn't look the same as what happen in the logs from sinzui | 04:28 |
menn0 | wallyworld: machine-0 fails to upgrade because of: | 04:29 |
menn0 | "set AvailZone in instanceData" failed: failed verification of local provider prerequisites: | 04:29 |
menn0 | cloud-image-utils must be installed to enable the local provider: | 04:29 |
menn0 | sudo apt-get install cloud-image-utils | 04:29 |
menn0 | wallyworld: shouldn't the upgrade step handle that itself? | 04:29 |
wallyworld | that's expected | 04:30 |
wallyworld | that package has to be on the host machine | 04:30 |
wallyworld | as do one or two others | 04:30 |
wallyworld | hmmm, maybe | 04:30 |
wallyworld | but in general | 04:30 |
wallyworld | the local provider checks for needed packages and prints those messages if they are not there | 04:30 |
menn0 | I installed juju-local so shouldn't this already have been taken care of | 04:30 |
menn0 | ? | 04:31 |
menn0 | wallyworld: maybe it got uninstalled by another operation? | 04:31 |
wallyworld | cloud-image-utils isn't a prereq of juju local, maybe it should be? | 04:31 |
menn0 | wallyworld: i'll work around this for now so that I can get to the actual problem that CI is seeing | 04:32 |
wallyworld | i think in trusty cloud-utils includes cloud-image-utils | 04:32 |
wallyworld | i can't wait for precise to go away, but still 2 years left | 04:32 |
sinzui | wallyworld, I cannot copy this crap gui's cloud-init-output | 04:33 |
wallyworld | sigh | 04:33 |
wallyworld | is there an obvious error? | 04:34 |
sinzui | wallyworld, I can say it is the same error as before where the apt-line is garbled | 04:35 |
wallyworld | i'm trying to spin up maas locally but the bootstrap node refuses to transition from Deploying | 04:35 |
wallyworld | that doesn't make sense as there should be no quotes there now | 04:35 |
wallyworld | and why just maas and not AWS or HP etc | 04:35 |
sinzui | and it died as I almost got the log | 04:37 |
sinzui | wallyworld, as per the bug | 04:37 |
sinzui | util.py[WARNING]: Failed to install packages: ['bridge-utils', 'curl cpu-checker bridge-utils rsyslog-gnutls cloud-utils cloud-image-utils'] | 04:37 |
wallyworld | sigh | 04:38 |
* sinzui double checks commit | 04:38 | |
wallyworld | seems bridge-utils is added twice which is messing it up | 04:38 |
sinzui | wallyworld, are we testing your commit | 04:38 |
wallyworld | yes on aws | 04:39 |
wallyworld | works perfectly | 04:39 |
sinzui | wallyworld, apt doesn't care about duplicates on the command line | 04:39 |
wallyworld | sure, but adding twice seems to mess up juju's rendering of the cmd line | 04:39 |
wallyworld | sinzui: can i get access to your maas? | 04:40 |
wallyworld | if i can't reproduce, i can't fix | 04:40 |
sinzui | wallyworld, sure but it is so poorly documented I cannot offer much | 04:40 |
wallyworld | maybe pm or email the ip of the controller and auth key | 04:41 |
menn0 | wallyworld, sinzui: i haven't been able to repro this precise upgrade timeout issue yet but I still have a few ideas | 04:48 |
wallyworld | sinzui: ah, i see where bridge-utils is added twice - it's hard coded in the maas provider | 05:01 |
wallyworld | that's why it is failing on maas | 05:02 |
sinzui | wallyworld, really? | 05:02 |
sinzui | wallyworld, jog hacked the local machine to capture logs from machine-0. This isn't too helpful since the failures are on machines 1 and 3 | 05:03 |
wallyworld | sinzui: yes, on machine 0 we run the scripts ourselves, not cloud init | 05:03 |
menn0 | sinzui, wallyworld: ok, i'm stumped on how to reproduce this precise issue | 05:07 |
wallyworld | sinzui: it may be that the quickest thing to do is to go back to installing one package at a time now that cloud-image-utils has been moved into the correct repo | 05:07 |
menn0 | sinzui, wallyworld: every upgrade works quickly for me | 05:07 |
sinzui | wallyworld, yes, with the caveat that we must also install cloud-utils to force the upgrade | 05:08 |
wallyworld | sinzui: yep, i'll install cloud-utils along with the other ones we expect | 05:08 |
sinzui | wallyworld, or we do dist-upgrade (but I think that is risky for today) | 05:09 |
wallyworld | yeah, i'll keep it simple | 05:09 |
wallyworld | menn0: i think juju might be ruled out - something must be thrashing the machine though | 05:09 |
menn0 | wallyworld: either that or i'm missing some aspect of the test setup | 05:10 |
menn0 | i'm about to EOD | 05:10 |
wallyworld | ok, thanks for looking | 05:11 |
wallyworld | sinzui: do i need to do apt-get install --target-release precise-updates/cloud-tools cloud-utils ? | 05:15 |
wallyworld | or can i leave off the --target-release bit | 05:16 |
wallyworld | i think i need it right? for precise? | 05:16 |
menn0 | sinzui, wallyworld: of course I just realised that I was testing with a 1.23 that was a little behind the times and didn't include some of the recent commits that might be contributing to the issue | 05:25 |
* menn0 facepalms | 05:25 | |
menn0 | sinzui, wallyworld: it might be worth repeating what i've done... | 05:25 |
wallyworld | hard to get good help :-) | 05:25 |
menn0 | touche | 05:25 |
wallyworld | i gotta fix this other one first :-( | 05:25 |
menn0 | but I really need to EOD or my wife is going to get pissed | 05:26 |
wallyworld | np, leave it to us | 05:29 |
jog | hi wallyworld | 05:50 |
jog | wallyworld, I was just looking at the MaaS test results for 1.22 revision 79e5ea8a and still see the failure mentioned in bug 1424695. | 05:53 |
mup | Bug #1424695: maas cloud-init cannot download agent from state-server <ci> <maas-provider> <network> <regression> <trusty> <juju-core:In Progress by wallyworld> <juju-core 1.22:In Progress by wallyworld> <https://launchpad.net/bugs/1424695> | 05:53 |
jog | wallyworld, it looks like you through that commit should have fixed that bug? | 05:54 |
jog | thought even | 05:54 |
jog | wallyworld, looks like maybe you dropped for a bit, did you see my comment above? | 05:58 |
wallyworld | jog: no | 05:58 |
jog | wallyworld, I was just looking at the MaaS test results for 1.22 revision 79e5ea8a and still see the failure mentioned in bug 1424695. | 05:58 |
mup | Bug #1424695: maas cloud-init cannot download agent from state-server <ci> <maas-provider> <network> <regression> <trusty> <juju-core:In Progress by wallyworld> <juju-core 1.22:In Progress by wallyworld> <https://launchpad.net/bugs/1424695> | 05:58 |
jog | looks like you expected that commit to resolve that quoted package string issue? | 05:59 |
wallyworld | jog: yes, sadly maas fails where the other clouds are ok, so i've marked bug as in progress again. i don't have a working maas to test with | 05:59 |
jog | wallyworld, you have access to finfolk.internal ? | 06:00 |
wallyworld | no, i tried and couldn't get in | 06:00 |
* jog wonders what happened to core vmaas setup on gremlin.internal that was setup during the Brussels sprint. | 06:16 | |
dimitern | jog, well, we were supposed to use it for ipv6 work of maas and juju, but we didn't need to | 06:22 |
dimitern | jog, IIRC the maas guys used it for qa stuff (or was it finfolk?) | 06:22 |
jog | dimitern, finfolk is used by juju-qa but I have an extra env setup for debugging for anyone on core that needs it | 06:23 |
dimitern | jog, right, that's good to know then :) | 06:24 |
wallyworld | dimitern: hi, i'm a little concerned that the vet warnings can be suppressed. shouldn't we be fixing the warnings? those provisioner ones are annoying :-) | 06:42 |
dimitern | wallyworld, they should've been fixed yesterday by jw4 | 06:43 |
dimitern | wallyworld, but maybe it bounced due to the blocker | 06:43 |
dimitern | wallyworld, nope - it did bounce - https://github.com/juju/juju/pull/1654 | 06:44 |
dimitern | wallyworld, and these appear for go1.4 only, which we don't officially support yet ;) | 06:45 |
dimitern | wallyworld, go vet in 1.4 is dumber than 1.2 - "%q" is reported for a type with String() method | 06:45 |
dimitern | wallyworld, I'm not sure how much you've seen of my comments | 06:56 |
wallyworld | dimitern: my stupid connection to free node keeps dropping, so none sorry :-( | 06:56 |
dimitern | wallyworld, I thought so :) | 06:57 |
dimitern | wallyworld, basically go vet 1.4 is dumber than go vet 1.2 (which is still the official go version we're using) - "%q" for a type with String() is reported | 06:57 |
dimitern | wallyworld, and there's this PR which bounced due to the blocker - https://github.com/juju/juju/pull/1654 which fixes the warnings | 06:58 |
wallyworld | wtf, go vet go dumber | 06:58 |
wallyworld | what are they thinking | 06:58 |
wallyworld | i'm on go 1.3.x | 06:58 |
dimitern | they were *not* thinking :) | 06:58 |
wallyworld | dimitern: with the blockers - we tried and couldn't reproduce the precise upgrade one today | 06:59 |
wallyworld | we contend it's a machine thrashing issue on the test vm | 06:59 |
dimitern | wallyworld, the slowdown or the quoted packages? | 06:59 |
wallyworld | slowdown | 07:00 |
wallyworld | with the packages one, i changed the behaviour but maas still failed (only maas, not aws etc) | 07:00 |
wallyworld | so i'm looking to revert the apt behaviour to be as per 1.21 since as of today or yesterday the cloud archive has been updated | 07:01 |
dimitern | wallyworld, hmm.. maas took longer than usual to upgrade according to the bug reports | 07:01 |
wallyworld | and we don't need to install cloud-utils and cloud-image-utils together | 07:01 |
wallyworld | oh, we were working on the assumption of local taking too long, that's what curtis told us | 07:01 |
dimitern | one thing occurred to me - for precise, are we making sure we add the cloud-tools pocket? | 07:02 |
wallyworld | dimitern: yeah, that's added | 07:03 |
dimitern | there's the MaybeAddCloudToolsWhatever in cloudinit that gets called for a given series - but IIRC it was put in an if block with enableAotUpdate | 07:03 |
wallyworld | dimitern: and i just confirmed, doing apt-get install cloud-utils by itself breaks | 07:03 |
wallyworld | but apt-get install cloud-utils cloud-image-utils works | 07:03 |
wallyworld | but setting that up breaks on maas | 07:03 |
dimitern | wallyworld, due to cloud-init getting removed? | 07:03 |
wallyworld | yeah | 07:03 |
dimitern | can you confirm cloud-image-utils comes from the cloud-tools pocket on precise? | 07:04 |
dimitern | it has to be fixed there | 07:04 |
wallyworld | dimitern: actually, it may be that we are only adding the cloud-tools pocket when bootstrapping | 07:05 |
wallyworld | that would explain things | 07:05 |
dimitern | wallyworld, that's wrong if we do | 07:05 |
dimitern | wallyworld, I'll have a look what triggers it | 07:05 |
wallyworld | dimitern: i have a branch which i was going to propose on gh to revert the apt behaviour to be more like 1.21, but with cloud-utils added (it needs to be installed to get the right version). bootstrap is fine, but adding a new machine fails. it may be due to cloud-tools pocket not being added except for at bootstrap. but i have to go out to a Foo Fighters concert, will be back later. here's the branch. https://github. | 07:09 |
wallyworld | com/wallyworld/juju/tree/revert-apt-install-method | 07:09 |
dimitern | wallyworld, sure, I'll look into it | 07:09 |
wallyworld | the above branch reverts the behaviour of multiple packages on the one apt-get line | 07:09 |
dimitern | wallyworld, enjoy the concert ;) | 07:09 |
wallyworld | ty | 07:09 |
wallyworld | dimitern: if you run up a machine, apt-cache policy cloud-utils should show 0.27 | 07:10 |
wallyworld | on bootstrap and a worer node | 07:10 |
wallyworld | worker | 07:10 |
wallyworld | and cloud-image-utils should be there too | 07:10 |
dimitern | wallyworld, ok, will check both | 07:10 |
wallyworld | i'll check back later in a few ours | 07:10 |
wallyworld | tyvm | 07:11 |
dimitern | np | 07:11 |
hazmat | is gwacl's primary project site still launchpad.net/gwacl? | 08:58 |
jam | dimitern: did you see william today? | 09:10 |
dimitern | jam, not yet | 09:10 |
jam | wallyworld: did you want to discuss ensure-ha --to ? | 09:20 |
jam | natefinch: /wave | 10:32 |
natefinch | jam: howdy | 10:32 |
natefinch | jam: gimme 2 minutes to go get my coffee? | 10:32 |
jam | k | 10:33 |
jam | natefinch: I don't hear you | 10:40 |
perrito666 | morning | 11:20 |
perrito666 | natefinch_: still up? | 11:59 |
natefinch_ | perrito666: I am here... got up for an early meeting | 11:59 |
natefinch_ | perrito666: probably will have to go soon as the kids are stirring | 12:00 |
perrito666 | natefinch_: I cannot help but notice you are OCR today and I have a very small change here http://reviews.vapour.ws/r/995/ | 12:01 |
natefinch_ | perrito666: ship it! | 12:03 |
perrito666 | tx | 12:04 |
perrito666 | I should have known that the trick was to find you half asleep | 12:04 |
perrito666 | 3 blocking bugs? oh what have I done to deserve this | 12:05 |
natefinch_ | doh | 12:05 |
* dimitern nailed the cause of the blocker | 12:10 | |
* dimitern hates cloud-init more than before now | 12:11 | |
jam | fwereade: greetings | 12:22 |
fwereade | jam, heyhey | 12:22 |
jam | perrito666: doesn't removing restore break compat with older servers? | 12:23 |
jam | fwereade: so I commented on your last request. I was wondering if we would still want the server to give the official time. | 12:27 |
fwereade | jam, so the server would always return what you asked for? | 12:28 |
fwereade | jam, I'm open to being convinced, but I can't see what we'd use it for today | 12:29 |
fwereade | jam, and if we need it in the future it's a new api version anyway surely? | 12:29 |
jam | fwereade: the client could request an amount, the server may upgrade that to a longer amount and replies with the actual amount | 12:31 |
jam | fwereade: has better future compat if we change the minimum timeout, clients that were asking for something short still work | 12:31 |
fwereade | jam, surely even if we're just changing that behaviour we'd be adding a version anyway? | 12:35 |
jam | fwereade: then why worry about it at all? I thought the point of client requests was to allow flexing it | 12:36 |
jam | I don't think we'd *have* to change the version just because the boundary ranges change, if we make it clear | 12:36 |
jam | that said, we don't need to spend hours on this | 12:36 |
fwereade | jam, because 30s was a magic number embedded deep inside the server that I suspect is more than usually vulnerable to changes without proper foresight | 12:37 |
fwereade | jam, it's the client that needs 30s, and it will still be the client who needs more or less time in future, I think? | 12:38 |
fwereade | jam, boundary values perhaps, so long as we're making them looser, I suppose, but that's still a bit risky for my tastes | 12:38 |
perrito666 | jam: mm, you have a good point I think I can make it use both ways according to the server version | 12:39 |
fwereade | jam, "may I be leader for the next X seconds [yes|no]" STM to be simple and not simplistic -- and, hmm, does not actually preclude creating a longer lease internaly -- it just means that the client couldn't take advantage of that knowledge to space out its requests more | 12:41 |
jam | fwereade: so I don't think any client is going to make hard time constraints. They might be able to say "it will take approx X" but I doubt anyone can guarantee that they won't take longer than that | 12:42 |
fwereade | jam, sorry, what won't take longer? | 12:42 |
fwereade | jam, time to renew the lease? | 12:43 |
jam | fwereade: whatever thing they are running that they are expecting to hold the lease | 12:43 |
fwereade | jam, hence putting it in the client's control, and having them request a 60s and renew it in 30s, independent of what else is happening | 12:43 |
fwereade | jam, based on user feedback what is desired/required is a guarantee that a successful is-leader call gives you 30s grace in which you're sure you're the leader | 12:44 |
fwereade | jam, worst-case you request *just* before a failing renew, and you *still* get 30s of grace from that success | 12:45 |
jam | fwereade: as in we won't nominate a new leader for 30s after the last one expired ? | 12:45 |
fwereade | jam, no? we won't nominate a new one until immediately after the last one expires | 12:46 |
fwereade | jam, that's why I ask for 60s and guarantee 30 | 12:46 |
fwereade | jam, and refresh every 30 | 12:46 |
fwereade | jam, even when you fail your nth request, your n-1th one is still good, leaving you time to react | 12:47 |
fwereade | jam, while you are still leader and nobody else is stepping on your toes | 12:47 |
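A minimal Go sketch of the lease pattern fwereade outlines above: request a 60s lease and refresh every 30s, so even a failed nth refresh leaves roughly 30s of guaranteed grace from the previous grant. claimLease is a hypothetical stand-in, not juju's actual leadership API.

```go
package main

import "time"

// claimLease is a hypothetical stand-in for the real leadership claim call.
func claimLease(duration time.Duration) bool { return true }

// trackLeadership keeps renewing a 60s lease every 30s. Even if the nth
// claim fails, the (n-1)th grant still covers about 30s, giving the unit
// time to run a best-effort leader-deposed hook.
func trackLeadership(stop <-chan struct{}) {
	const lease = 60 * time.Second
	const refresh = 30 * time.Second
	for {
		if !claimLease(lease) {
			return // deposed; the previous grant still provides the grace period
		}
		select {
		case <-time.After(refresh):
		case <-stop:
			return
		}
	}
}

func main() {
	stop := make(chan struct{})
	go trackLeadership(stop)
	time.Sleep(100 * time.Millisecond)
	close(stop)
}
```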
jam | fwereade: what is triggering this polling ? | 12:48 |
jam | if it's juju calling a leader hook, can't we easily be blocked waiting for some other hook to finish? | 12:48 |
jam | is it *while* running a given hook we expect them to split off a thread/process to call back into us to ensure that they're still active? | 12:49 |
fwereade | jam, the leadership tracker will be running anyway, independently of anything else -- the hook tool asks the context which asks the tracker if it can be leader | 12:49 |
fwereade | jam, nobody seems to want that | 12:50 |
fwereade | jam, best-effort leader-deposed execution once we're outside the reported grace period seems sufficient | 12:50 |
jam | fwereade: it *feels* to me like someone would write "ok, I need to reconfigure my workload, ask for leader for X seconds, start reconfiguring, oh reconfiguring took too long, but I'm stuck in that process" | 12:51 |
jam | who/what would actually refresh the leadership request | 12:51 |
fwereade | jam, the refreshing is continuous anyway while you're leader whether you're running a hook or not | 12:52 |
jam | fwereade: k, then I don't see any need to set any time, Juju refreshes at an interval of X | 12:52 |
fwereade | jam, I'm not so certain that we'll never need to tweak that... | 12:54 |
jam | fwereade: you just said that if we need to tweak it, it would be a version bump (IMO) | 12:54 |
jam | my point was, if we want to make it variable, make it easy to be variable and still compatible | 12:54 |
fwereade | jam, I thought that was what I was doing :) | 12:54 |
jam | or just make it fixed and we bump the API version when we need to change ti | 12:54 |
jam | it | 12:55 |
jam | fwereade: you made it variable from one side, but not the other | 12:55 |
jam | fwereade: if this is being confusing or feeling like I'm antagonizing you, I'm not trying to, I'm happy with a JFDI here. | 12:56 |
jam | but it seemed odd to say that only one side would get to change without an API version bump | 12:56 |
jam | when it isn't hard to make it graceful either way | 12:56 |
fwereade | jam, I know you're not, but I must admit I am a little confused so there's probably something worth figuring out | 12:57 |
jam | fwereade: I think my gut reaction is "why is this an error, and why can't we just request 0s and get told what the lease time is" | 12:57 |
fwereade | jam, my contention is that it is hard to make it graceful if we allow the server any more than a yes/no :) | 12:57 |
jam | asking for too long I could accept, asking for too short seems like "nope, just take this longer one" | 12:58 |
jam | fwereade: but I was thinking it was being exposed to the charm itself | 12:58 |
jam | and that charmers were going to have to say "I'm goingto run this op, I think it will take 3 minutes, give me a 3min lease" | 12:58 |
jam | which lead to the other problems | 12:58 |
fwereade | jam, if I ask to be leader for 60s and the server makes me leader for 300s, I'm still definitely leader for 60s | 12:58 |
jam | (who can actually refresh, etc) | 12:58 |
fwereade | jam, as far as I'm aware that was never on the table | 12:59 |
fwereade | jam, nobody asked for it, and it adds complexity and temptation to ask for infuriatingly long lease times that then lead to poor experiences when those units fail | 13:00 |
jam | fwereade: that and missed guestimates that then lead to a need to have a separate process that is refreshing. I think the point is Juju is doing all the refreshing (which actually has its own problem of Juju being up and happy, but the charm code stuck in an infinite loop) | 13:00 |
jam | fwereade: but that's probably still a decent place to be. | 13:01 |
wallyworld | dimitern: looks like you found a fix for the cloud-utils issue. i didn't think to set apt-get update = true in cloud init for precise. how does that interact with the apt disable update settings? did you use GetPreparePackages() to selectively include target-series for precise? | 13:23 |
=== Murali_ is now known as Murali | ||
dimitern | wallyworld, it overrides the apt-get update disable flag | 13:27 |
dimitern | wallyworld, it has to otherwise it won't work | 13:27 |
wallyworld | dimitern: ah ok, and we can't add the cloud-tools repo without updating i assume | 13:27 |
dimitern | wallyworld, I'm using your approach with GetPreparePackages from utils/apt, but rather than joining all with " ", I'm calling AddPackage for each one | 13:27 |
wallyworld | dimitern: yeah, that's exactly what i did in my latest branch i pushed before i left | 13:28 |
dimitern | wallyworld, yes - if update is off, no apt-sources or packages are installed | 13:28 |
dimitern | wallyworld, there's a quirk though due to cloud-init | 13:28 |
wallyworld | dimitern: i discovered you had to do the packages one by one otherwise it would be sad | 13:28 |
wallyworld | Foo Fifgters were farking awesome btw, bloody excellent concert | 13:29 |
wallyworld | gad, can't type | 13:29 |
wallyworld | Foo Fighters | 13:29 |
dimitern | wallyworld, sweet! :) | 13:29 |
wallyworld | by ears are ringing :-) | 13:29 |
wallyworld | my | 13:29 |
dimitern | wallyworld, btw - this is the meat of my patch http://paste.ubuntu.com/10389087/ (apart from a similar if (in the beginning) in the templateUserData func in the lxc-broker) | 13:30 |
wallyworld | looking | 13:30 |
wallyworld | dimitern: yeah, that matches my understanding also | 13:31 |
wallyworld | dimitern: i can review once you propose | 13:32 |
dimitern | wallyworld, sure, I'll propose soon, but I wanted to test on a precise host just to be sure | 13:32 |
wallyworld | np | 13:32 |
dooferlad | dimitern: hangout? | 14:01 |
dooferlad | dimitern: MAAS+Juju Network interlock | 14:02 |
dimitern | dooferlad, uh, yeah - omw, thanks! | 14:02 |
=== ChanServ changed the topic of #juju-dev to: https://juju.ubuntu.com | On-call reviewer: see calendar | Open critical bugs: 1424695 1424777 | ||
=== kadams54-away is now known as kadams54 | ||
mgz | gsamfira: are you around? I've got a fix for windows testing stuff I'd like you to look at | 14:20 |
dimitern | mgz, hey | 14:50 |
dimitern | mgz, you'd probably guess what I'll ask about :) | 14:50 |
mgz | dimitern: I hope to have an update, various things are borked | 14:53 |
mgz | dimitern: who's ocrs today? | 14:53 |
dimitern | mgz, natefinch_ and anastasiamac according to the calendar | 14:53 |
dimitern | mgz, ok, thanks | 14:53 |
mgz | hm, I wonder if I'm in the middle of those two | 14:54 |
mgz | natefinch_: can I have a review plz? https://github.com/juju/testing/pull/52 | 14:54 |
natefinch_ | mgz: looking | 15:05 |
=== natefinch_ is now known as natefinch | ||
alexisb | dimitern, howdy, sorry I was late for our 1x1 but I am on the hangout now whenever you are ready | 15:06 |
dimitern | alexisb, oh, omw | 15:06 |
natefinch | mgz: this is a lot more complex than it seems to need to be. If we know we don't need a whitelist on anything other than windows, why not just put a if runtime.GOOS != "windows" { return } at the top, and let the rest of the function be windows only? | 15:08 |
mgz | look at the context above, there's also a JUJU_MONGOD variable that's whitelisted everywhere | 15:09 |
mgz | (and potential for more I guess) | 15:10 |
natefinch | mgz: oops, missed the append of the testingVariables, sorry | 15:10 |
mgz | I could make it shorter by just checking that, but it's written in such a way as to be extensible | 15:10 |
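A hedged Go sketch of the whitelist pattern under review: JUJU_MONGOD is allowed on every platform, with a few extra names only on windows. The windows-only names here are illustrative assumptions, not the actual list from the juju/testing change.

```go
package main

import (
	"fmt"
	"os"
	"runtime"
	"strings"
)

// filteredEnv keeps only whitelisted environment variables: JUJU_MONGOD on
// every platform, plus a few assumed names only on windows, mirroring the
// extensible structure discussed above.
func filteredEnv() []string {
	allowed := map[string]bool{"JUJU_MONGOD": true}
	if runtime.GOOS == "windows" {
		for _, name := range []string{"PATH", "TEMP", "SYSTEMROOT"} { // illustrative only
			allowed[name] = true
		}
	}
	var out []string
	for _, kv := range os.Environ() {
		name := strings.SplitN(kv, "=", 2)[0]
		if allowed[name] {
			out = append(out, kv)
		}
	}
	return out
}

func main() {
	fmt.Println(filteredEnv())
}
```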
natefinch | yeah... it would probably be a lot easier if I weren't looking at it in a diff | 15:17 |
katco | dimitern: ping | 15:34 |
dimitern | katco, pong | 15:35 |
katco | dimitern: it looks like v2 S3 signing is slightly different than standard v2 signing | 15:35 |
katco | dimitern: amazon states: "Amazon S3 now supports the latest Signature Version 4. This latest signature version is supported in all regions and any new regions after January 30, 2014 will support only Signature Version 4." | 15:35 |
ericsnow | natefinch: 1-on-1? | 15:36 |
katco | dimitern: are you ok with shifting s3 to only using v4? porting the special v2 signing stuff into our standard v2 signing method would be a bit of a pain | 15:36 |
dimitern | katco, let me assimilate this for a moment :) | 15:36 |
katco | dimitern: not a problem, please take your time :) | 15:36 |
dimitern | katco, so for v2 we can do that later I guess (drop the special signing and leave the post-jan-2014 one only) | 15:38 |
dimitern | katco, for v3 we need to make it work as needed, because we'll be switching to v3 pretty soon | 15:38 |
katco | dimitern: you're talking goose versions now? | 15:38 |
dimitern | katco, no - about goamz | 15:39 |
katco | dimitern: ack sorry... wires crossed. that's what i meant :) | 15:39 |
dimitern | katco, ok, sgtm then | 15:39 |
dimitern | katco, what's your immediate plan? | 15:39 |
katco | dimitern: so some clarification | 15:39 |
katco | dimitern: we want v3 of goamz in juju by this friday for the feature freeze | 15:39 |
katco | dimitern: this is to support efforts in the china region | 15:40 |
katco | dimitern: so goamz v3 would drop s3 signing v2 in favor of standard v4, and we would put that into juju by friday | 15:40 |
katco | dimitern: are we on the same page? | 15:40 |
dimitern | katco, sgtm, however I have one request | 15:40 |
katco | dimitern: sure thing | 15:41 |
dimitern | katco, I've already ported what was sensible to port from v1 to v2 | 15:41 |
dimitern | katco, would you be so kind to do the same for v2 to v3? | 15:41 |
katco | dimitern: with some guidance, sure. are the change-sets fairly self contained? | 15:41 |
dimitern | katco, it shouldn't be a lot anyway, and I can help | 15:41 |
dimitern | katco, I think so | 15:42 |
katco | dimitern: yeah sure thing then. let me get the live tests working and then i'll work on that | 15:42 |
dimitern | katco, cheers! | 15:42 |
katco | dimitern: same to you! tyvm o/ | 15:42 |
katco | dimitern: i would like to buy you a beer in nuremberg :) | 15:43 |
dimitern | katco, \o/ I'll hold you to this though :P | 15:44 |
katco | dimitern: it will be my pleasure! :D | 15:44 |
dimitern | ;) | 15:44 |
katco | i have a stein that my mom got me... i'm wondering if i should bring it | 15:45 |
jw4 | dimitern, wallyworld ; fwiw it seems that go vet in 1.4.2 is saner than 1.4.1 | 15:45 |
jw4 | dimitern, wallyworld but in any case the PR that *did* land allows you to set the environment variable IGNORE_VET_WARNINGS="some non-zero string" which will cause the pre-push hook to report but not fail on go vet warnings | 15:46 |
dimitern | jw4, that's fine - I'd prefer the flexibility of being able to ignore it, if needed | 15:47 |
jw4 | dimitern: yep | 15:48 |
natefinch | ericsnow: yeah, we can pop into moonstone | 15:51 |
ericsnow | natefinch: k | 15:51 |
stokachu | if I wanted to change the hostname during juju kvm creation do i need to modify the cloud-init config for that or is there an easier way | 15:51 |
jw4 | fwereade: still working on upgrade steps, but I wanted to get this wip in front of you sooner rather than later: http://reviews.vapour.ws/r/997/ | 15:52 |
dimitern | sinzui, I've marked bug 1424695 as duplicate of bug 1424777 as it's caused by the same issue | 15:56 |
mup | Bug #1424695: maas cloud-init cannot download agent from state-server <ci> <maas-provider> <network> <regression> <trusty> <juju-core:In Progress by wallyworld> <juju-core 1.22:In Progress by wallyworld> <https://launchpad.net/bugs/1424695> | 15:56 |
mup | Bug #1424777: local-provider precise failed to upgrade <ci> <local-provider> <precise> <regression> <upgrade-juju> <juju-core:In Progress by dimitern> <juju-core 1.22:In Progress by dimitern> <https://launchpad.net/bugs/1424777> | 15:56 |
sinzui | dimitern, hurray of sorts | 15:57 |
=== ChanServ changed the topic of #juju-dev to: https://juju.ubuntu.com | On-call reviewer: see calendar | Open critical bugs: 1424695 | ||
dimitern | sinzui, indeed :) | 15:57 |
dimitern | it's like this I believe | 16:03 |
=== ChanServ changed the topic of #juju-dev to: https://juju.ubuntu.com | On-call reviewer: see calendar | Open critical bugs: 1424777 | ||
stokachu | dimitern: is there a way to customize the hostname during a kvm CreateMachine? | 16:04 |
dimitern | stokachu, I don't know for sure, but I doubt it | 16:05 |
dimitern | stokachu, we're not setting anything specific in cloud-init for hostname | 16:05 |
stokachu | dimitern: is that file generated dynamically? | 16:06 |
stokachu | the cloud-init file | 16:06 |
dimitern | stokachu, well, yes | 16:06 |
dimitern | stokachu, before provisioning a machine | 16:06 |
stokachu | problem is all KVM creations have a hostname of 'ubuntu' | 16:06 |
dimitern | stokachu, that comes from the ubuntu-cloud image IIRC | 16:07 |
stokachu | yea i was hoping there was a way to change that prior to provisioning | 16:07 |
dimitern | stokachu, that's the first time I've heard someone wanting this btw - it's good you've filed a bug for it | 16:08 |
stokachu | it only affects local provider, trying to deploy ceph units, it fails b/c all 3 units have same hostname | 16:09 |
natefinch | mgz: gave you a review, just one minor tweak requested, but otherwise LGTM | 16:09 |
stokachu | dimitern: https://bugs.launchpad.net/juju-core/+bug/1326091 | 16:11 |
mup | Bug #1326091: deploying into kvm with local provider, hostnames are not unique <kvm> <local-provider> <juju-core:Triaged> <https://launchpad.net/bugs/1326091> | 16:11 |
dimitern | stokachu, cheers, if it's a blocker for you guys, I'd suggest pinging alexisb | 16:11 |
alexisb | yes stokachu can you please send me mail | 16:13 |
mgz | natefinch: thanks! I'll adjust and land | 16:14 |
stokachu | alexisb: will do thanks | 16:15 |
stokachu | dimitern: thank you too | 16:15 |
dimitern | stokachu, no worries | 16:20 |
dimitern | wallyworld, if you're still here - http://reviews.vapour.ws/r/998/ | 16:20 |
dimitern | ^ fixes the blocker bug | 16:20 |
dimitern | natefinch, axw, others? ^^ | 16:20 |
cmars | hi dimitern, updated http://reviews.vapour.ws/r/974/, does it still look good to land? | 16:23 |
natefinch | dimitern: looking | 16:24 |
dimitern | cmars, hey! yes indeed - I've checked it already yesterday | 16:24 |
dimitern | natefinch, thanks | 16:24 |
cmars | dimitern, awesome, thanks for the review! | 16:24 |
dimitern | cmars, np | 16:24 |
perrito666 | brb | 16:25 |
=== kadams54 is now known as kadams54-away | ||
=== kadams54-away is now known as kadams54 | ||
natefinch | dimitern: is it possible to have --target-release series package1 package2 package3? And if so, will that still do the right thing with this code (i.e., they'll be joined by a space as the package name)? | 16:38 |
dimitern | natefinch, no it's not | 16:38 |
dimitern | natefinch, as far as I could understand from apt-get docs | 16:39 |
dimitern | natefinch, and even if it was, it still won't work due to the way cloud-init 0.6.3 passes them to apt-get | 16:40 |
dimitern | natefinch, e.g. "--target-release rel pkg1 pkg2 pkg3" | 16:40 |
dimitern | natefinch, well, actually - sorry, it *is* possible | 16:41 |
dimitern | natefinch, apt-get accepts that, but not cloud-init | 16:41 |
natefinch | dimitern: gah, what a pain in the ass. | 16:42 |
dimitern | natefinch, in reality, --target-release is just a hint to the apt policy engine to decide which candidates to prefer | 16:42 |
dimitern | natefinch, oh tell me about it :) I've been on it since 8 am | 16:43 |
natefinch | dimitern: I wonder if we should be checking to make sure that they don't do --target-release series pkg1 pkg2 ... but maybe that's being too smart? | 16:47 |
dimitern | natefinch, I have a couple of panics in place if for that reason | 16:48 |
dimitern | s/if// | 16:48 |
dimitern | natefinch, all tests pass - both make check and the live ones I've described | 16:49 |
dimitern | natefinch, I suppose you could argue if 99% of the existing tests pass, it's a bad thing and we need better ones | 16:49 |
dimitern | natefinch, but I'd rather land this and unblock everyone in a few hours, then I'm happy to improve the tests as a follow-up | 16:50 |
dimitern | natefinch, I'm porting the same fix to trunk and fully intend to retry the same live tests before proposing it | 16:51 |
=== kadams54 is now known as kadams54-away | ||
* dimitern is sick of blockers already - let's not add to that :) | 16:51 | |
dimitern | katco, still around? | 16:52 |
dimitern | katco, is this ready to land https://github.com/go-amz/amz/pull/27/ ? it looks so to me - if it is, i'll go ahead and merge it | 16:53 |
katco | dimitern: no not quite yet | 16:55 |
dimitern | katco, ok, np | 16:56 |
katco | dimitern: almost there | 16:56 |
* dimitern steps out for a while - will be back soon | 16:56 | |
natefinch | dimitern: gave you a ship it | 16:58 |
=== kadams54-away is now known as kadams54 | ||
alexisb | natefinch, can you edit the HA doc now? | 18:11 |
dimitern | natefinch, thanks! | 18:15 |
mgz | man, I got review 999? | 18:32 |
mgz | one off, one off | 18:32 |
jw4 | mgz: quick think of something else to PR | 18:32 |
mgz | review please: reviews.vapour.ws/r/999/ | 18:32 |
mgz | natefinch: ^ | 18:39 |
cmars | is landing still blocked? | 19:02 |
cmars | ok, nvm | 19:03 |
natefinch | alexisb: still view only | 19:08 |
natefinch | mgz: ship it | 19:08 |
alexisb | natefinch, ack, I dont have permission to give you write access we will have to bug wallyworld | 19:09 |
natefinch | alexisb: yeah, figured. No problem. I meant to ask for write access from him last night and forgot. | 19:11 |
dimitern | cmars, it is, I'm still testing the port of the fix for trunk, but 1.22 should get unblocked at least (not that it matters I guess) | 19:19 |
cmars | dimitern, got it, thanks for fixing! | 19:20 |
dimitern | cmars, np | 19:20 |
dimitern | natefinch, FYI, there's the tech-debt bug 1425245 I filed for your suggestion | 19:21 |
mup | Bug #1425245: improve cloud-init tests after the fix for bug #1424777 unblocks CI <tech-debt> <juju-core:Triaged by dimitern> <juju-core 1.22:Triaged by dimitern> <https://launchpad.net/bugs/1425245> | 19:21 |
natefinch | dimitern: thanks :) | 19:21 |
dimitern | :) | 19:23 |
hazmat | axw: llgoi ever get announced? | 19:50 |
* hazmat switches to pm | 19:54 | |
natefinch | thumper: do you have a link to the doc about CLI 2.0? | 20:01 |
natefinch | (asking for a friend) | 20:01 |
thumper | natefinch: it isn't written yet :) | 20:01 |
natefinch | thumper: heh | 20:01 |
natefinch | thumper: thought it was already being worked on | 20:01 |
natefinch | (ish) | 20:02 |
=== kadams54 is now known as kadams54-away | ||
dimitern | natefinch, PTAL http://reviews.vapour.ws/r/1001/ - fix for the blocker for trunk | 20:27 |
dimitern | thumper, ^^ | 20:45 |
thumper | dimitern: ack | 20:46 |
dimitern | thumper, I'm way past EOD, so I'm going - please add $$fixes-1424777$$ if it's ok | 20:46 |
thumper | dimitern: do you have the link for the 1.22 version? | 20:46 |
thumper | dimitern: and thanks for the fix | 20:47 |
dimitern | thumper, https://github.com/juju/juju/pull/1670/ | 20:47 |
thumper | ta | 20:47 |
dimitern | thumper, np | 20:47 |
perrito666 | every time I unplug my external monitor my computer hangs... how sad | 21:39 |
thumper | perrito666: tell Trevinho in #ubuntu-desktop | 21:52 |
thumper | perrito666: tell him I sent you :-) | 21:52 |
thumper | perrito666: although I'm not sure he'd be on right now as he lives in Italy | 21:52 |
thumper | perrito666: but he is known to work weird hours | 21:52 |
perrito666 | thumper: why does that make me think that I will get something thrown at my head | 21:52 |
thumper | :-) | 21:52 |
thumper | he's a good guy, and one of the current maintainers of unity 7 | 21:53 |
perrito666 | thumper: anyway I am using vivid,so that is most likely my fault | 21:53 |
thumper | perrito666: assuming you're using unity | 21:53 |
perrito666 | thumper: it is unity, version is a mystery, whatever is shipped with vivid | 21:53 |
cmars | why am I still getting "Build failed: Does not match ['fixes-1424777']" | 22:25 |
perrito666 | cmars: can I be a smartass? | 22:30 |
perrito666 | :p | 22:30 |
perrito666 | cmars: jokes aside, that bug must be marked as critical and not fix committed? | 22:31 |
cmars | perrito666, it's fix committed though | 22:32 |
cmars | https://bugs.launchpad.net/juju-core/+bug/1424777 | 22:32 |
mup | Bug #1424777: local-provider precise failed to upgrade <ci> <local-provider> <precise> <regression> <upgrade-juju> <juju-core:Fix Committed by dimitern> <juju-core 1.22:Fix Committed by dimitern> <https://launchpad.net/bugs/1424777> | 22:32 |
ericsnow | cmars: it has to be "fixed released" before it's unblocked | 22:33 |
jw4 | marcoceppi made this recently: http://juju.fail/status.json | 22:33 |
thumper | cmars: we need to make sure the ci test that it is supposed to fix is actually fixed | 22:33 |
thumper | sinzui: ping | 22:33 |
thumper | sinzui: can we see if dimiter's patch has fixed the precise upgrade? | 22:33 |
cmars | jw4, marcoceppi that's pretty neat | 22:35 |
jw4 | cmars: I believe it actually uses the same mechanism the CI bot uses | 22:36 |
marcoceppi | cmars: yeah, I was just about to put a small webpage in front of it, but status.json will always be available for consumption | 22:36 |
marcoceppi | jw4: it does, about to push the code up which generates this page | 22:36 |
jw4 | marcoceppi: cool | 22:36 |
sinzui | thumper, it does, but aws is ill, preventing an official blessing, but we are switching everything we can off of aws | 22:40 |
thumper | sinzui: cool | 22:41 |
katco | wallyworld: you around yet? | 23:32 |
wallyworld | yup | 23:32 |
katco | wallyworld: got time for a quick hangout? have a series of s3 questions | 23:32 |
wallyworld | sure | 23:32 |
katco | wallyworld: sweet... 1:1? | 23:32 |
wallyworld | yup | 23:32 |
perrito666 | ahaa I believe it's chrome that goes crazy when X is resized | 23:34 |