[00:02] <thumper> davecheney: does this bug still happen? https://bugs.launchpad.net/ubuntu/+source/apparmor/+bug/1304167
[00:02] <_mup_> Bug #1304167: syntax error, trusty beta-2 cloud image <apparmor (Ubuntu):Confirmed> <https://launchpad.net/bugs/1304167>
[00:02] <wallyworld> thumper: maybe you were smoking weed when you wrote the email
[00:02] <thumper> davecheney: seems like a quite major bug if so
[00:02] <thumper> wallyworld: nah...
[00:02] <thumper> although I am wondering if it would help
[00:02] <wallyworld> couldn't hurt :-)
[00:02] <thumper> ha
[00:03] <davecheney> thumper: would it be possible for you to log
[00:04] <davecheney> "%T", err
[00:04] <thumper> davecheney: sure
[00:04] <davecheney> thanks
[00:04] <davecheney> thumper: yes, the bug is still open
[00:04] <davecheney> it has screwed LXC on any platform that uses apparmor
[00:05] <thumper> :-(
[00:05] <davecheney> thumper: when you run the destroy-environment, you're not in that directory are you
[00:06] <davecheney> ie; mkdir /tmp/t
[00:06] <davecheney> cd /tmp/t
[00:06] <davecheney> rmdir /tmp/t
[00:06] <thumper> davecheney: no
[00:06] <davecheney> ok, just checking
[00:07] <davecheney> http://gcc.gnu.org/releases.html
[00:07] <davecheney> gcc 4.9 released
[00:07] <davecheney> but not really
[00:07] <thumper> if you destroy too close to bootstrap, you don't get it
[00:08] <davecheney> thumper: hmm ok
[00:08] <thumper> oh...
[00:08] <thumper> I think I know what it could be...
[00:09] <davecheney> thumper: hold pls
[00:09] <thumper> when we kill the machine agent with pkill
[00:09] <thumper> it cleans up after itself
[00:09] <thumper> we then have a race
[00:09] <davecheney> thumper: right, so things are racing on the directory listing
[00:09] <thumper> the agent is trying to remove some files
[00:10] <thumper> and then so does the destroy command
[00:10] <davecheney> http://golang.org/src/pkg/os/error_unix.go
[00:10] <davecheney> so is the agent removing ~/.juju/local ?
[00:10] <davecheney> ie it's not a file
[00:10] <davecheney> but the top level directory itself ?
[00:11] <davecheney> so os.RemoveAll goes to remove ~/.juju/local
[00:11] <davecheney> and the whole thing has been deleted already ?
[00:11] <thumper> not all of it...
[00:11] <thumper> but some of it
[00:11] <thumper> oh...
[00:11] <thumper> yeah, sometimes all of it
[00:11] <thumper> yeah...
[00:11] <thumper> it does
[00:12] <thumper> *os.SyscallError
[00:12] <thumper> they are racing to remove the datadir
[00:13] <davecheney> thumper: ok, that should be possible to make a repro
[00:13] <davecheney> i'll do that while i'm waiting for gccgo to compile
[00:13] <thumper> davecheney: what do you think should happen?
[00:13] <davecheney> 10:12 < thumper> *os.SyscallError
[00:13] <davecheney> ^ is that %T ?
[00:13] <thumper> yeah
[00:14] <davecheney> cheeky bugger
[00:14] <davecheney> thumper: leave it with me
[00:14] <davecheney> raise an issue maybe
[00:14] <davecheney> i need to make a repro
[00:14] <thumper> davecheney: you see it as a golang bug?
[00:15] <davecheney> thumper: it won't fit through http://golang.org/src/pkg/os/error_unix.go
[00:16] <davecheney> http://play.golang.org/p/mp5i8GFL47
[00:16]  * davecheney goes to find out where that os.SyscallError is coming from
[00:18] <davecheney> thumper: for the moment you're going to have to code around it
[00:18] <davecheney> this won't be fixed in 1.2
[00:18] <davecheney> dir_unix.go
[00:18] <davecheney> 41:                             return names, NewSyscallError("readdirent", errno)
[00:18] <davecheney> this is where it's coming from
[00:19]  * davecheney feels very depressed
[00:19] <davecheney> it's just bugs, bugs, and more bugs
[00:20] <thumper> davecheney: I'll work around it
[00:21] <thumper> davecheney: we already ignore errors from two other things that we are racing with
[00:30] <davecheney> thumper: i'll get a repro quick smart
[00:30] <davecheney> i can see where it happens
[00:37] <waigani> morning davecheney.
[00:37] <waigani> davecheney: when I run make check on vm I get the following: http://pastebin.ubuntu.com/7246968
[00:37] <waigani> any hints?
[00:38] <waigani> thumper: wip on jujud isolation: https://codereview.appspot.com/87130045
[00:38] <waigani> thumper: cmd/juju and environs/bootstrap are now passing
[00:39] <waigani> environs/sync is going to take a bit more thought
[00:39] <waigani> and right now I'm too hungry to think
[00:39] <davecheney> waigani: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1304754
[00:39] <_mup_> Bug #1304754: gccgo on ppc64el using split stacks when not supported <ppc64el> <trusty> <gccgo-4.9 (Ubuntu):Confirmed> <https://launchpad.net/bugs/1304754>
[00:40] <waigani> davecheney: reading
[00:43] <davecheney> waigani: short version
[00:43] <davecheney> downgrading to an older kernel works around the problem
[00:43] <davecheney> but isn't a fix
[00:43] <waigani> davecheney: yep, thanks
[00:44] <waigani> I neeeeed food. bbl
[00:46] <davecheney> thumper: if err, ok := err.(*os.SyscallError); ok { if os.IsNotExist(err.Err) }
[00:46] <davecheney> or something
[00:56] <thumper> axw: just saw your answer too
[00:57] <thumper> axw: however the error that is being returned isn't os.IsNotExist
[00:57] <thumper> axw: as the race is being caught elsewhere
[00:59] <axw> thumper: ah, maybe in the Readdir then
[00:59] <axw> anyway, there's definitely a race, and you should ignore it I think
[00:59] <davecheney> thumper: lucky(~/devel/issue) % go run issue.go
[00:59] <davecheney> 2014/04/14 10:58:58 creating temporary directories rooted at "/tmp/issue015782153"
[00:59] <davecheney> 2014/04/14 10:58:59 preparing workers
[00:59] <davecheney> 2014/04/14 10:58:59 release the swarm
[00:59] <davecheney> 2014/04/14 10:58:59 unexpected error: *os.SyscallError, "readdirent: no such file or directory"
[00:59] <thumper> ah... read-dir-int
[00:59] <davecheney> 2014/04/14 10:58:59 unexpected error: *os.SyscallError, "readdirent: no such file or directory"
[00:59] <davecheney> 2014/04/14 10:58:59 unexpected error: *os.SyscallError, "readdirent: no such file or directory"
[00:59] <davecheney> 2014/04/14 10:58:59 unexpected error: *os.SyscallError, "readdirent: no such file or directory"
[00:59] <thumper> not re-addir-int
[00:59] <davecheney> thumper: raising an issue
[00:59] <thumper> axw: yeah, that's it
[01:00] <thumper> I couldn't parse the smashedtogetherwords
[01:05] <mwhudson> heh i finally have results for waigani and he's gone
[01:05] <mwhudson> but i think his problem was actually the "things randomly die on ppc" bug...
[01:06] <mwhudson> things all in all don't look too bad on arm64 actually
[01:08] <davecheney> thumper: https://code.google.com/p/go/issues/detail?id=7776&thanks=7776&ts=1397437695
[01:09] <thumper> mwhudson: \o/
[01:09] <mwhudson> not actually good
[01:09] <mwhudson> just not terrible
[01:10] <davecheney> mwhudson: /usr/include/features.h:374:25: fatal error: sys/cdefs.h: No such file or directory
[01:10] <davecheney> any suggestions which package contains this header
[01:10] <mwhudson> uh, no, looks basic though
[01:10] <mwhudson> hm
[01:11] <mwhudson> dpkg -S sez libc6-dev-i386
[01:11] <mwhudson> which seems a bit random
[01:11] <davecheney> % dpkg -S /usr/include/sys/cdefs.h
[01:11] <davecheney> libc6-dev-i386: /usr/include/sys/cdefs.h
[01:11] <davecheney> yeah
[01:11] <mwhudson> ah
[01:11] <mwhudson> um
[01:11] <davecheney> mwhudson: this is compiling gcc 4.9
[01:12] <mwhudson> "real" libc6-dev installs it to /usr/include/$triplet/sys/cdefs.h
[01:12] <mwhudson> davecheney: from upstream or the deb?
[01:12] <davecheney> mwhudson: upstream
[01:12] <davecheney> mwhudson: our deb produces broken binaries
[01:13] <mwhudson> davecheney: on powerpc64 i assume?
[01:13] <mwhudson> um, that sounds like something doko should know about :)
[01:13] <mwhudson> is this the split stack thing?
[01:13] <davecheney> yup
[01:14] <mwhudson> i guess libc6-dev-i386 must be some kind of pre-multiarch thing
[01:14]  * davecheney tries patching in some of the arguments from /usr/bin/gcc -v
[01:14] <mwhudson> davecheney: "dpkg --listfiles libc6-dev | grep cdefs.h" on your platform?
[01:15] <davecheney> $ dpkg --listfiles libc6-dev | grep cdefs.h
[01:15] <davecheney> /usr/include/powerpc64le-linux-gnu/sys/cdefs.h
[01:15] <davecheney> maybe ./configure got the triplet wrong
[01:16] <davecheney> well, i was wondering why this was such a good compile box
[01:16] <davecheney> clock           : 4284.000000MHz
[01:16] <davecheney> ziiing
[01:21] <davecheney> gcc, just keep adding flags until it compiles
[01:22] <davecheney> nope
[01:22] <davecheney> still broke
[01:22] <davecheney> fuck this
[01:22] <davecheney> i'm using symlinks
[01:34] <davecheney> wow. such multiarch
[01:40] <davecheney> mwhudson: ok, here is what I think
[01:40] <davecheney> gccgo on ppc is correctly detecting that split stacks are not supported
[01:40] <davecheney> and using the default 'large' stack model
[01:40] <davecheney> but .. the stack is still too small
[01:41] <davecheney> i'm bt'ing in gdb and at stack frame 1475 with no end in sight
[01:42] <mwhudson> haha
[01:42] <mwhudson> ok
[01:42] <mwhudson> so stack overflow?
[01:42] <mwhudson> hmm
[01:42] <davecheney> make that stack frame 3,300
[01:42] <mwhudson> is this on the altstack?  i.e. while handing a signal?
[01:42] <davecheney> so, in summary, gccgo doesn't give a clean indication when you fall off the end of the stack
[01:42] <davecheney> mwhudson: nope, with split stacks disabled
[01:42] <davecheney> you get a c style stack per goroutine
[01:42] <mwhudson> davecheney: that's not what i mean
[01:42] <mwhudson> sure
[01:43] <mwhudson> but signals are handled on a different stack again
[01:43] <mwhudson> (sigaltstack and all that)
[01:43] <mwhudson> i think those stacks are smaller?
[01:43] <mwhudson> anyways
[01:43] <davecheney> mwhudson: i'm going to say, conditionally, yes
[01:43] <davecheney> mwhudson: the sig handler gets a SEGV
[01:43] <mwhudson> davecheney: it's easy ish to make the stacks bigger i think
[01:43] <davecheney> and it blames the topmost stack frame for hitting a nil
[01:44] <mwhudson> i found the code that was allocating them
[01:44] <davecheney> when actually all it did was call a function
[01:44] <mwhudson> yeah, well, if you fall off the end of the stack it's certainly going to break
[01:44] <davecheney> mwhudson: are you adding -fsplit-stack on aarch64 ?
[01:44] <mwhudson> davecheney: no
[01:44] <davecheney> shit, 5,000 stack frames
[01:45] <davecheney> how in gods name could juju use so much stack ...
[01:45] <mwhudson> could this "just" be application infinite recursion for some reason?
[01:45] <mwhudson> or does the backtrace look reasonable?
[01:46] <davecheney> mwhudson: the latter
[01:46] <davecheney> maybe a dozen frames
[01:46] <davecheney> this is going to be an 8mb stack
[01:46] <davecheney> 18,000 stack frames
[01:47] <davecheney> #31380 0x000000001000522c in main.count ()
[01:47] <davecheney> #31381 0x0000000010005854 in main.main ()
[01:47] <_mup_> Bug #31381: POMsgSet.active_texts assumes POFile.pluralforms is an int <lp-translations> <oops> <Launchpad itself:Fix Released by matsubara> <https://launchpad.net/bugs/31381>
[01:47] <_mup_> Bug #31380: source package sort by version doesn't cope with invalid version numbers <lp-foundations> <oops> <Launchpad itself:Fix Released by kiko> <https://launchpad.net/bugs/31380>
[01:47] <mwhudson> that doesn't sound reasonable
[01:47] <mwhudson> lolmup
[01:47] <davecheney> #-1
[01:47] <mwhudson> although, eh, i guess it works well enough on platforms that do have split stacks
[01:48] <davecheney> mwhudson: most gccgo developers are on amd64
[01:48] <davecheney> when I say most
[01:48] <davecheney> i mean
[01:48] <mwhudson> all 1 of them?
[01:48] <davecheney> everyone except you and me and some neckbeard using mips
[01:49] <mwhudson> strange this doesn't happen on arm64 though
[01:49]  * davecheney goes to talk to ian taylor
[01:49] <davecheney> mwhudson: gccgo src/test/peano.go
[01:49] <davecheney> ./a.out
[01:49] <mwhudson> i wouldn't have thought that stack frames would be much bigger on that
[01:49] <mwhudson> well yes, that fails on arm64 too
[01:49] <davecheney> i wonder if it is unrelated
[01:49] <davecheney> that gives a straight segfault
[01:49] <davecheney> and the go handler doesn't catch it
[01:49] <davecheney> i wonder if we're barking up the wrong tree
[01:53] <davecheney> mwhudson: i'm thinking these are two different issues
[01:53] <davecheney> [492932.974051] a.out[25065]: bad frame in setup_rt_frame: 000000c20ffaf0e0 nip 0000000010004e0c lr 00000000100051fc
[01:53] <davecheney> ^ this is what running off the stack looks like
[01:53] <davecheney> note nip
[01:54] <davecheney> [2028013.988376] jujud[400]: bad frame in setup_rt_frame: 0000000000000000 nip 0000000000000000 lr 0000000000000000
[01:54] <davecheney> ^ this is what a juju segfault on a bad kernel looks like
[01:54] <davecheney> nip and lr are 0
[01:54] <davecheney> something branched to 0 and nuked the lr for good measure
[01:55] <mwhudson> well, once you have a disagreement over whether a bit of memory is stack or not, it's not exactly predictable what happens next
[01:55] <davecheney> true
[01:55] <davecheney> but why is the ip 0
[01:55] <davecheney> both cases this is unmapped memory
[01:56] <mwhudson> because something stomped over the link register on the stack, so it branched to lala land when trying to do a procedure return?
[01:56] <mwhudson> i don't know the ppc abi but i certainly saw that sort of thing a lot on arm64
[01:56] <davecheney> mwhudson: anything with a LR is probably going to act the same
[01:57] <mwhudson> also
[01:57] <davecheney> mwhudson: ok, so if we're not running off the end of the stack
[01:57] <davecheney> and i'm pretty sure we're not
[01:57] <davecheney> then why does the kernel page size affect the result
[02:25] <davecheney> $ pmap -x 969
[02:25] <davecheney> 969:   /var/lib/juju/tools/machine-0/jujud machine --data-dir /var/lib/juju --machine-id 0 --debug
[02:25] <davecheney> Address           Kbytes     RSS   Dirty Mode  Mapping
[02:25] <davecheney> total kB               0       0       0
[02:25] <davecheney> ---------------- ------- ------- -------
[02:25] <davecheney> well, thanks
[02:28] <davecheney> thumper: juju status returns 0 if there are hook errors
[02:28] <davecheney> axw: sorry, maybe this question is best addressed to you
[02:30] <axw> is that a problem?
[02:30] <davecheney> axw: dunno
[02:30] <davecheney> depends what we've promised status will do
[02:30] <davecheney> i know that people want to be able to say 'is this environment ok'
[02:30] <davecheney> $ pmap -x 969
[02:30] <davecheney> 969:   /var/lib/juju/tools/machine-0/jujud machine --data-dir /var/lib/juju --machine-id 0 --debug
[02:30] <davecheney> Address           Kbytes     RSS   Dirty Mode  Mapping
[02:30] <davecheney> sorry
[02:31] <davecheney> ---------------- ------- ------- -------
[02:31] <davecheney> $ pmap -x 969
[02:31] <davecheney> 969:   /var/lib/juju/tools/machine-0/jujud machine --data-dir /var/lib/juju --machine-id 0 --debug
[02:31] <davecheney> Address           Kbytes     RSS   Dirty Mode  Mapping
[02:31] <davecheney> oh for fucks sake
[02:31] <davecheney> ---------------- ------- ------- -------
[02:31] <davecheney> total kB               0       0       0
[02:31] <davecheney>   PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
[02:31] <davecheney>   969 root      20   0 1413376 515456  19136 S   9.6  6.2   0:18.51 /var/lib/juju/tools/machine-0/jujud machine --data-dir /var/lib/juju --machine-id 0 --debug
[02:31] <axw> yeah I can see the use case, but AFAIK it always just returned 0
[02:31] <davecheney> axw: i think this might be related
[02:31] <davecheney> heavy use of the api server causes RES to rise
[02:36] <davecheney> oh god
[02:36] <davecheney> i hate everything
[02:36] <davecheney> upstart isn't logging the stderr of jujud-machine-0
[02:41] <davecheney> :cry: SIGQUIT doesn't do what I think on gccgo
[03:06] <thumper> wallyworld: hangout died
[03:06] <thumper> wallyworld, axw, waigani: I figured I was done anyway :-)
[03:06] <axw> thumper: will take a look at your CL after I finish up on this HA thing
[03:06] <thumper> axw: ack
[03:07] <thumper> axw: I first read that as "hating"
[03:07] <axw> heh
[03:07] <thumper> made me chuckle
[03:07]  * thumper goes for a brief lie down before his head explodes
[03:13] <waigani> wallyworld: I found the mockable BuildToolsTarball, what was the other one? bundleTools?
[03:13] <wallyworld> yeah BundleTools
[03:13] <wallyworld> in environs/tools
[03:13] <waigani> that isn't mockable?
[03:14] <waigani> environ/tools/build.go:205
[03:14] <wallyworld> you just need to introduce a var
[03:14] <wallyworld> make the method lower case
[03:14] <wallyworld> make te var upper case
[03:15] <waigani> ah sure, make it mockable - no problem
[03:19] <davecheney> mwhudson: https://bugs.launchpad.net/juju-core/+bug/1307282
[03:19] <_mup_> Bug #1307282: cmd/jujud: gccgo api server consumes ~500mb of ram on machine-0 <gccgo> <ppc64el> <juju-core:Triaged> <https://launchpad.net/bugs/1307282>
[03:22] <davecheney> ERROR loaded invalid environment configuration: storage-port: expected int, got float64(8040)
[03:22] <davecheney> ERROR loaded invalid environment configuration: storage-port: expected int, got float64(8040)
[03:22] <davecheney> did this get fixed ?
[03:24] <davecheney> waigani: can you send me `uname -a` from your vm ?
[03:24] <waigani> davecheney: Linux winton-09 3.13.0-24-generic #46-Ubuntu SMP Thu Apr 10 19:09:21 UTC 2014 ppc64le ppc64le ppc64le GNU/Linux
[03:26] <davecheney> waigani: interesting
[03:26] <davecheney> i'm trying a -24 kernel and I can't get it to crash
[03:26] <davecheney> waigani: did you just upgrade to that kernel ?
[03:26] <waigani> hmmm
[03:27] <davecheney> waigani: uptime
[03:27] <waigani> davecheney:  03:27:40 up  1:09,  2 users,  load average: 0.00, 0.01, 0.05
[03:28] <waigani> I did a restart, to see if that helped at all
[03:28] <waigani> ran make check after, same problem
[03:28] <davecheney> waigani: ok
[03:28] <davecheney> thanks, that makes it concrete
[03:28] <davecheney> dmesg
[03:28] <davecheney> ^^
[03:29] <waigani> davecheney: http://pastebin.ubuntu.com/7247924/
[03:29] <davecheney> waigani: ta
[03:29] <davecheney> i should have said
[03:29] <davecheney> dmesg | tail
[03:29] <davecheney> waigani: could I ask you to check again
[03:30] <waigani> davecheney: http://pastebin.ubuntu.com/7247927/
[03:30] <davecheney> sorry
[03:30] <davecheney> the test
[03:30] <davecheney> not the dmesg
[03:30] <waigani> ah right
[03:30] <davecheney> what i'm looking for is a line like
[03:30] <davecheney> (no worries, this was my fault)
[03:30] <davecheney> 11:54 < davecheney> [2028013.988376] jujud[400]: bad frame in setup_rt_frame: 0000000000000000 nip 0000000000000000 lr 0000000000000000
[03:30] <davecheney> ^ should see something like this
[03:31] <waigani> okay, I'll paste when done and keep an eye out for a line like that
[03:39] <davecheney> waigani: can you ssh-import-id dave-cheney on your vm
[03:39] <davecheney> so I can stooge around your /var/log/
[03:39] <davecheney> and see what kernel you were running before reboot
[03:40] <waigani> davecheney: already done, your public key is on the vm
[03:41] <davecheney> danka
[03:41] <davecheney> waigani: i have a theory that -24 kernel fixes the issue
[03:41] <davecheney> it's not much of a theory atm
[03:41] <waigani> davecheney: http://pastebin.ubuntu.com/7247954/
[03:42] <waigani> davecheney: I have a theory that I did something stupid
[03:42] <waigani> not so much a theory as a constant axiom
[03:43] <davecheney> waigani: ubuntu@winton-09:/var/log$ grep '\-generic' dmesg.0 dmesg
[03:43] <davecheney> dmesg.0:[    0.000000] Linux version 3.13.0-20-generic (buildd@denneed04) (gcc version 4.8.2 (Ubuntu 4.8.2-17ubuntu1) ) #42-Ubuntu SMP Fri Mar 28 09:55:49 UTC 2014 (Ubuntu 3.13.0-20.42-generic 3.13.7)
[03:43] <davecheney> dmesg.0:[    0.000000] Kernel command line: BOOT_IMAGE=/boot/vmlinux-3.13.0-20-generic root=UUID=30486aa4-f767-4397-ab88-dd0e02e66651 ro console=hvc0 earlyprintk
[03:43] <davecheney> dmesg:[    0.000000] Linux version 3.13.0-24-generic (buildd@fisher04) (gcc version 4.8.2 (Ubuntu 4.8.2-19ubuntu1) ) #46-Ubuntu SMP Thu Apr 10 19:09:21 UTC 2014 (Ubuntu 3.13.0-24.46-generic 3.13.9)
[03:43] <davecheney> dmesg:[    0.000000] Kernel command line: BOOT_IMAGE=/boot/vmlinux-3.13.0-24-generic root=UUID=30486aa4-f767-4397-ab88-dd0e02e66651 ro console=hvc0 earlyprintk
[03:44] <davecheney> looks like you were running -20, then you got -24 when you rebooted
[03:44] <davecheney> waigani: dmesg ?
[03:45] <waigani> davecheney: http://pastebin.ubuntu.com/7247958/
[03:45] <waigani> sorry, is that what you meant?
[03:45] <davecheney> waigani: yup
[03:45] <davecheney> interesting
[03:45] <davecheney> all previous panics of this class leave a message in dmesg
[03:46] <davecheney> ok, there could be two unrelated issues
[03:46] <davecheney> waigani: could you log a bug for http://pastebin.ubuntu.com/7247954/
[03:46] <davecheney> tag it gccgo ppc64el
[03:46] <waigani> davecheney: yep, gladly :)
[03:47] <davecheney> waigani: ta
[03:47] <waigani> davecheney: I'll just double check that I have not done something stupid in the code. It *should* be latest trunk
[03:47] <davecheney> waigani: nah
[03:47] <davecheney> this isn't you
[03:47] <davecheney> the panic is happening in /usr/bin/go
[03:47] <davecheney> if you want to investigate
[03:48] <davecheney> apt-get source gccgo-go
[03:48] <waigani> right, that is what stumps me
[03:48] <davecheney> then have a look at that line in build.go
[03:48] <davecheney> waigani: i ran into that about a week ago
[03:48] <davecheney> that was when the floor fell out from under me
[03:48] <waigani> lol
[03:48] <waigani> yep, I know that one
[03:53] <waigani> davecheney: https://bugs.launchpad.net/juju-core/+bug/1307289
[03:53] <_mup_> Bug #1307289: Go panics when running tests on ppc64 <gccgo> <ppc64el> <juju-core:New> <https://launchpad.net/bugs/1307289>
[03:53] <davecheney> waigani: jolly good
[03:57] <davecheney> axw: ERROR loaded invalid environment configuration: storage-port: expected int, got float64(8040)
[03:57] <davecheney> ERROR loaded invalid environment configuration: storage-port: expected int, got float64(8040)
[03:57] <davecheney> ^ did this get fixed recently
[03:57] <davecheney> or should I log a bug
[03:58] <thumper> axw: do you really think that two filtering methods is better than one with a bool?
[03:58] <thumper> axw: I'll write it and look at the diff
[03:59] <axw> thumper: I really do. With that approach you can see without a doubt that nothing can change the behaviour at runtime; with the bool you need to ensure that nothing changes it
[04:00] <thumper> ok
[04:01] <axw> davecheney: wallyworld fixed that already I think
[04:01] <davecheney> axw: right
[04:01] <davecheney> this is 1.17.8 (ish)
[04:01] <axw> yeah, fixed in 1.18.1 I believe
[04:01] <wallyworld> yeah, fixed in trunk
[04:01] <davecheney> i think I saw a branch last week
[04:01] <davecheney> right o
[04:04] <thumper> axw: like this http://paste.ubuntu.com/7247988/ ?
[04:05] <axw> thumper: yup
[04:05] <axw> thumper: comment on countedFilterLine needs fixing
[04:49] <davecheney> ping jam ?
[04:50] <jam> davecheney: /wave
[04:50] <davecheney> jam: i think we're eating the elephant from different ends
[04:50] <davecheney> wrt to the api server memory usage
[04:51] <jam> I'm not sure I understand
[04:52] <jam> thumper: I'm around whenever you would like to hangout
[04:52] <davecheney> jam: ok
[04:52] <davecheney> in trying to trace down the panics i'm seeing
[04:52] <davecheney> i've sort of discovered just how much memory jujud consumes
[04:52] <davecheney> it's horrific
[04:53] <jam> my initial results showed about 0.5MB per agent, which wasn't great, but wasn't terrible. but when something gets into a bad situation, I see memory spike terribly
[04:53] <davecheney> jam: gccgo
[04:54] <davecheney> it's more like 250mb per agent
[04:54] <davecheney> two agents per machine
[04:54] <davecheney> at a minimum
[04:54] <jam> wow...
[04:54] <davecheney> its complicated
[04:54] <jam> that's way different
[04:54] <davecheney> gccgo when not using split stacks
[04:54] <davecheney> allocates an 8 mb stack from the heap
[04:54] <davecheney> so that puts the heap under a lot of pressure
[04:55] <davecheney> even if large amounts of that 8mb stack remain uncommitted
[04:55] <jam> yeah, 8MB per goroutine would be really bad for how much we use it
[04:55] <davecheney> i'm also seeing strange things that make me think when a client disconnects
[04:56] <jam> so I think we have a bug that if a client disconnects in a bad way, it cascades into causing an APIServer restart, but I haven't tracked down the exact issues yet.
[04:56] <davecheney> we're not releasing all the server side resources used by the client
[04:56] <davecheney> in my test
[04:56] <davecheney> 3 machines
[04:56] <davecheney> on the manual provider
[04:56] <jam> It might just be that it leaves resources behind, right
[04:56] <davecheney> killing the agents on the service units
[04:56] <davecheney> causes memory usage to almost double
[04:57] <davecheney> with gc and 8k stacks, you won't feel a few leaked goroutines
[04:57] <davecheney> with 8mb stacks
[04:57] <davecheney> yup, you'll feel it
[04:58] <davecheney> $ grep -c goroutine /tmp/out
[04:58] <davecheney> 247
[04:58] <davecheney> ^ starts at 169 for 4 agents
[04:58] <davecheney> after a few restarts of the agents we're up to 247
[04:59] <thumper> jam: ok, with you in 1m
[05:43] <davecheney> axw: http://paste.ubuntu.com/7248220/
[05:43] <davecheney> i don't get it
[05:43] <davecheney> i did destroy-machine as requested
[05:43] <davecheney> the agents are stopped
[05:43] <davecheney> but I can't destroy the environment
[05:48] <axw> davecheney: umm
[05:49] <axw> davecheney: if they never disappear from state, seems that's a bug. but you can do destroy-machine --force to clean up manually
[05:50] <davecheney> axw: right
[05:51] <jam> thumper: the connection seems to have died
[05:52] <thumper> jam: google tells me my connectivity is experiencing issues
[05:57] <davecheney> axw: --force doesn't give me any love
[06:00] <axw> davecheney: did it return an error or anything?
[06:00] <axw> or just silence?
[06:02] <davecheney> silence
[06:02] <axw> davecheney: the provisioner should remove the machine from state when it's dead... it's entirely possible that someone has changed the provisioner so that it doesn't work with manual anymore
[06:03] <axw> we need a "no provider left behind" act
[06:04]  * davecheney reaches for rm 
[06:04] <axw> davecheney: destroy-environment --force should work as a last resort, if all the machines really are cleaned up
[06:25] <davecheney> ok, some good news, 3.13.0-24 may fix the issue
[06:26] <davecheney> oh
[06:26] <davecheney> nope
[06:26] <davecheney> hmm
[06:26] <davecheney> hard to tell
[06:26] <davecheney> need more information
[06:43] <rogpeppe> mornin' all
[06:43] <jam> morning rogpeppe
[06:43] <davecheney> 'moin
[06:44] <rogpeppe> jam: hiya
[06:44] <rogpeppe> davecheney: yo!
[06:55] <axw> morning rogpeppe
[06:55] <rogpeppe> axw: hiya
[06:55] <axw> rogpeppe: landed the EnsureAvailability MP. is there something else you'd like me to look at now?
[06:56] <davecheney> evening
[06:56] <rogpeppe> axw: there is one thing that would be awesome if we could do
[06:56] <rogpeppe> axw: currently we can't upgrade to a HA environment
[06:57] <axw> ok
[06:57] <rogpeppe> axw: because there is no mongo user configured on the admin database
[06:57] <rogpeppe> axw: we need to change EnsureMongoServer to add one
[06:57] <rogpeppe> axw: (if necessary)
[06:58] <axw> rogpeppe: I guess there's a tonne of other things that need to be done for upgrades too, though? like rewriting mongo scripts? or has nate done that already?
[06:59] <rogpeppe> axw: the mongo upstart script is already written when necessary (well, actually, it's been disabled for the moment, pending this)
[06:59] <axw> I see
[06:59] <axw> ok, I will take a look
[06:59] <rogpeppe> axw: to add the admin user, while the service is stopped, we need to start the mongod in non-authenticated mode
[06:59] <rogpeppe> axw: then add the admin user in that mode
[07:00] <axw> thanks
[07:00] <rogpeppe> axw: before tearing mongod down again and starting it up normally
[07:00] <rogpeppe> axw: i did manually verify that that does actually work, but i'm afraid i can't remember the exact steps i used
[07:11] <wallyworld> rogpeppe: hiya, i have a reflection question for you if you have a moment
[07:11] <rogpeppe> wallyworld: sure
[07:12] <wallyworld> i have a reflect.Value
[07:12] <wallyworld> i want to create a nil value pointer
[07:12] <wallyworld> eg reflect.ValueOf((*string)(nil))
[07:12] <wallyworld> if it were for a *string
[07:12] <wallyworld> but i want to do it dynamically
[07:13] <wallyworld> reflect.New(val.Type().Elem()) gives me a pointer to a zero value
[07:13] <wallyworld> but i want a pointer to nil that i can use with value.Set()
[07:13] <wallyworld> make sense?
[07:13] <rogpeppe> wallyworld: what would the code in normal Go look like? use T for the type of the value
[07:14] <wallyworld> var foo *T
[07:14] <wallyworld> foo = nil
[07:14] <wallyworld> foo is a field of a struct
[07:15] <wallyworld> i have it working using a switch on the field Kind and using reflect.ValueOf((*string)(nil))
[07:15] <wallyworld> but i want to do it without that
[07:15] <rogpeppe> wallyworld: so you want a nil value of the same type as a pointer to the type of the field?
[07:16] <wallyworld> yeah, i think so, so that a call to value.Set() works
[07:16] <rogpeppe> wallyworld: do you want to actually set the value of the field in the struct?
[07:16] <wallyworld> yep
[07:17] <rogpeppe> wallyworld: i don't think you want a pointer, in that case
[07:17] <wallyworld> reflect.ValueOf(*mystruct).Elem().FieldByName(fieldName) is what i use to get the value
[07:17] <rogpeppe> wallyworld: right, well you can just call Set on the result of that
[07:17] <wallyworld> so if val is the result of the above
[07:17] <wallyworld> i call Set() yes
[07:17] <wallyworld> but i can't find out what to pass to Set()
[07:18] <rogpeppe> wallyworld: a reflect.Value of the same type as the field...
[07:18] <wallyworld> reflect.New(val.Type().Elem())  gives a pointer to "" for example
[07:18] <wallyworld> i want to do it dynamically
[07:18] <rogpeppe> wallyworld: are you just trying to set the field to nil?
[07:18] <wallyworld> yes
[07:19] <wallyworld> i thought i'd need value.Set()
[07:20] <rogpeppe> wallyworld: val := reflect.ValueOf(mystructptr).Elem().FieldByName(fieldName); val.Set(reflect.Zero(val.Type()))
[07:20] <jam> dimitern: morning. We can do a 1:1 if you would like, though officially that's natefinch's responsibility now.
[07:20] <wallyworld> but reflect.Zero() gives me "" doesn't it?
[07:21] <wallyworld> rogpeppe: ah, it seems to have worked
[07:21] <wallyworld> thank you. for some reason i was thinking reflect.Zero() would give me the wrong thing
[07:22] <jam> wallyworld: you did "reflect.New(val.Type().Elem())"
[07:22] <jam> note that Elem is an element of the pointed to type
[07:22] <jam> vs
[07:22] <jam> reflect.New(val.Type())
[07:22] <jam> val.Type() is a pointer, val.Type().Elem() is the actual object
[07:22] <jam> and the Zero of a pointer is nil
[07:22] <jam> the Zero of a string is ""
[07:22] <wallyworld> ah ffs, stupid mistake, thanks
[07:22] <jam> (11:18:25 AM) wallyworld: reflect.New(val.Type().Elem())  gives a pointer to "" for example
[07:24] <rogpeppe> yeah, New is exactly equivalent to the language primitive "new"
[07:30] <dimitern> jam, oh is that so
[07:30] <dimitern> jam, well, i can join the regular meeting?
[08:23] <jam1> fwereade: looks like we made our N^2 problem with CharmURL worse in 1.18 because of the changes to Upgrade now watching the machine's agent version.
[08:23] <jam1> This one may not matter *quite* as much in practice, if you aren't deploying multiple units to machines.
[08:24] <jam1> But in my sim tests, we wake up the Upgrader even more often than we wake up the CharmURL
[08:53] <fwereade> jam1, ha
[08:54] <fwereade> jam1, yeah, I think we write something extra to the machine doc now -- dimitern, do I recall correctly?
[08:54] <fwereade> dimitern, btw can we please undo those errors changes? I added a note to the review but it was already landed ofc
[08:54] <jam1> fwereade: well, we also wake up every 15 min because the instance poller claims the machine has a new address
[08:55] <fwereade> jam1, yeah, indeed
[08:55] <dimitern> fwereade, I'm working on that now as a follow-up
[08:56] <fwereade> jam1, I cannot figure out how to schedule those sorts of fixes though -- unless we carve out X% of time for paying down tech debt and classify it as that
[08:56] <jam1> fwereade: well, if we have a client that wants us to scale to 10000 units, we can bill them for it, as well
[08:56] <jam1> fwereade: ATM, I'm mostly focused on "this is where we're at"
[08:58] <fwereade> jam1, I guess :)
[08:58] <fwereade> jam1, clarity on that front is indeed helpful
[08:59] <jam1> fwereade: "juju status" with 10k machines actually is doing ok performance wise, but nobody wants 10,000 lines of output
[08:59] <fwereade> jam1, indeed
[08:59] <jam1> so there are quite a few things that would need tweaking to scale to that level
[09:00] <jam1> fwereade: though for *testing* purposes, the N^2 stuff bites me in the ass a lot. 'juju add-unit" to add another 100 units each to 19 machines takes: 200s, 400s, 1200s, 2800s, and I'll let you know when it finishes seconds.
[09:01] <rogpeppe> jam1: 1-1?
[09:01] <fwereade> jam1, yeah -- I kinda feel like those sorts of issues are... they should work properly *now*
[09:01] <jam1> rogpeppe: I just need to switch machines, 1 sec
[09:02] <fwereade> jam1, but, ehh, prioritisation :/
[09:03] <jam> dimitern: so it looks like Canonical admin got it backwards, it's actually you on my team and roger's on nate's team.
[09:04] <jam> dimitern: so I think everyone is still on the same standup for now
[09:06] <dimitern> jam, what team am i supposed to be on?
[09:10] <jam> dimitern: so looking at Alexis's email about Nate and Ian, you're on my team
[09:12] <dimitern> jam, yeah, I thought so
[09:56] <rogpeppe> jam: you've frozen...
[09:56] <jam> rogpeppe: I got logged out of my google account somehow
[09:56] <jam> end of month?
[09:56] <rogpeppe> jam: perhaps
[09:58] <perrito666> morning
[10:00] <jam> mgz: 1:1? (just running to the restroom myself)
[10:00] <mgz> sure, I'll wait for you there
[10:01] <mgz> ...the hangout, not the restroom
[10:07] <waigani> wallyworld: I can get TestUpgradeJujuWithRealUpload to pass by patching sync.BuildToolsTarball but not when I patch envtools.BundleTools
[10:08] <waigani> wallyworld: here is my attempt at mocking out bundleTools: http://pastebin.ubuntu.com/7248910/
[10:08] <wallyworld> what is the error?
[10:08] <waigani> wallyworld: ... and http://pastebin.ubuntu.com/7248930/
[10:08] <waigani> wallyworld: error uploading tools: no tools uploaded
[10:12] <wallyworld> waigani: why is the bundle tools mock uploading tools as well?
[10:12] <wallyworld> it shouldn't be doing that
[10:13] <waigani> wallyworld: good question! I just read the logic, let me give that another go ...
[10:13] <wallyworld> that is my guess as to what the error is, as there would be no metadata or anything
[10:13] <waigani> wallyworld: I basically ripped the logic out of BuildToolsTarball
[10:13] <wallyworld> upload tools needs the tarball and also metadata
[10:14] <waigani> wallyworld: right, let me try again
[10:30] <jamespage> evilnickveitch, the links on https://juju.ubuntu.com/docs/ looked foobared to me - are you aware?
[10:30] <evilnickveitch> jamespage, ooh. they were working yesterday. let me have a look
[10:31] <perrito666> fwereade: morning, are you around?
[10:32] <evilnickveitch> jamespage, hmm. seem to be working for me - was there a particular page or link that wasn't working for you?
[10:33] <jamespage> evilnickveitch, the links on the lhs of the page don't appear for me
[10:33] <evilnickveitch> jamespage, the links are pasted in by a bit of javascript at the end of the page
[10:33] <evilnickveitch> so either the js isn't loading
[10:34] <evilnickveitch> because something is messed up on that page, or the page isn't loaded
[10:34] <jamespage> hmm
[10:34] <evilnickveitch> have you tried refreshing etc?
[10:35] <evilnickveitch> are you sure page has finished loading? some external assets take a while to load sometimes, and the link JS is right at the end
[10:37] <TheMue> evilnickveitch: quick test here on FF shows no links either
[10:37] <TheMue> evilnickveitch: jamespage is right
[10:38] <evilnickveitch> TheMue, jamespage okay, I guess mine was fetching from cache. i will check into it
[10:43] <evilnickveitch> TheMue, jamespage okay, I found the problem, some wonky HTML which prevents the rest of the page loading, it's only on the front page, the others should work fine
[10:43] <evilnickveitch> I will fix it ASAP
[10:44] <TheMue> evilnickveitch: Great, thanks.
[10:45] <mgz> evilnickveitch: do you not validate? :P
[10:46] <evilnickveitch> mgz, it was the stupid linter that caused the problem :P
[10:46] <mgz> evilnickveitch: :D
[10:47] <fwereade> perrito666, sorry, I completely missed you there
[10:55] <perrito666> fwereade: happens :)
[10:57] <perrito666> fwereade: still missing the transaction hooks tests, but here it is: https://codereview.appspot.com/86430043 - I did ignore some of your comments because they broke functionality :) but I am willing to re-try once I make sure this goes the right way (although my assert is either broken, or it is exposing an existing error that was not being caught, because I am failing 5 tests)
[10:58] <fwereade> perrito666, cheers, I'll take a look
[10:59] <waigani> wallyworld: I exported tools.archive: http://pastebin.ubuntu.com/7249092 (tests pass now)
[10:59] <wallyworld> great, in standup, will look later
[11:00] <waigani> ah
[11:00] <wallyworld> had a quick look, looks nice and simple
[11:00] <wallyworld> like i'd hoped
[11:01] <waigani> yeah, just hope it's okay that I've made Archive public - adding noise to the API?
[11:01] <waigani> anyway, I'll leave it for the review
[11:04] <fwereade> perrito666, rogpeppe has a deepcopy package that may help with cloning
[11:04] <rogpeppe> fwereade, perrito666: it doesn't work any more
[11:04] <fwereade> rogpeppe, bah
[11:04] <perrito666> fwereade: ah, might be much better than the by-hand copy I am doing ther...
[11:04] <perrito666> rogpeppe: :(
[11:04] <rogpeppe> it was trying to be too clever
[11:05] <rogpeppe> perrito666: what are you copying?
[11:05] <perrito666> rogpeppe: units and machines
[11:05] <rogpeppe> perrito666: why?
[11:06] <rogpeppe> perrito666: is it just for testing?
[11:06] <perrito666> rogpeppe: sorry I was listening on the other side :) no, not just for testing
[11:07] <perrito666> trying to get a copy that is guaranteed not to change while I am working on it in certain circumstances (I am just making a method of something previously done by hand)
[11:27] <rogpeppe> perrito666: what are you actually trying to do?
[11:28] <fwereade> rogpeppe, clone state.Machine/Unit -- I commented that it'd be nice to do it properly
[11:28] <rogpeppe> fwereade: ah
[11:28] <fwereade> rogpeppe, there are a few places we do it in varyingly hackish ways iirc
[11:29] <evilnickveitch> TheMue, jamespage docs should be working now
[11:29] <jamespage> evilnickveitch, ta - next question - do release notes get published on /docs ?
[11:30] <evilnickveitch> jamespage, very good question - not as yet, but I do have a branch that will add them to the reference section. At least for the ones I can find
[11:30] <evilnickveitch> Check back after 7.30pm
[11:31] <jamespage> evilnickveitch, its something that ceph does quite well upstream
[11:31] <evilnickveitch> jamespage, cool, I will check out what they do. I was just intending to dump them all in newest first order with an index of links at the top
[11:32] <rogpeppe> fwereade, perrito666: two thoughts: 1) we could probably avoid doing a deep copy of the machineDoc, as we don't allow mutation of pieces inside its components
[11:33] <fwereade> rogpeppe, I suspect that statement is only mostly accurate
[11:33] <rogpeppe> fwereade, perrito666: 2) if we decided to, it would be easy (but not greatly efficient) to clone by serialising/deserialising through bson
[11:34] <fwereade> rogpeppe, perrito666: ha, I could live with that
[11:35] <rogpeppe> fwereade: tbh i think it's reasonable to have methods that return mutable values with a stipulation that you should not modify the contents
[11:36] <rogpeppe> fwereade: (i presume you're thinking about the Jobs method here)
[11:38] <rogpeppe> fwereade: if we did that, then Clone could be ultra cheap
[11:41] <mgz> rogpeppe: got around to finishing the last few test failures: https://codereview.appspot.com/87540043
[11:42] <rogpeppe> mgz: thanks. looking.
[11:43] <rogpeppe> mgz: LGTM
[11:43] <mgz> rogpeppe: thanks!
[11:48] <rogpeppe> oops, upgrade-juju seems to have killed its own environment
[11:49]  * rogpeppe hates it when that happens
[11:53] <rogpeppe> hmm, this is the second time this morning i've had a live bootstrap fail with this error:
[11:53] <rogpeppe> 2014-04-14 11:53:05 ERROR juju.cmd supercommand.go:299 cannot write file "tools/releases/juju-1.19.0.1-precise-amd64.tgz" to control bucket: Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed.
[12:15] <fwereade> rogpeppe, perrito666: if the clone were an internal method with "do not modify result" I'd be fine
[12:16] <fwereade> rogpeppe, perrito666: if it's exported there's just way too much opportunity to screw up at a distance
[12:16] <rogpeppe> fwereade: i'm thinking of the Jobs method only
[12:16] <fwereade> rogpeppe, does Jobs not copy? it should ;p
[12:17]  * perrito666 sees another ball coming his way :p
[12:17] <rogpeppe> fwereade: well, if Jobs copies, then why do we need a deep copy of the machine doc?
[12:17] <fwereade> rogpeppe, plenty of methods mutate bits and pieces of state
[12:17] <rogpeppe> fwereade: (it doesn't, BTW, but it probably should)
[12:18] <rogpeppe> fwereade: which methods mutate stuff that's pointed to by the machine doc, rather than machine doc fields themselves?
[12:18] <fwereade> rogpeppe, and considering current cases misses the point; things will change and if we expose this functionality without insulating the objects frm one another we *will* screw it up
[12:19] <rogpeppe> fwereade: if we make all methods return copies of the underlying data, what is there to screw up?
[12:22] <rogpeppe> fwereade: AFAICS this should be fine: func (m *Machine) Clone() *Machine { m1 := *m; return &m1}
[12:23] <fwereade> rogpeppe, various methods write to the document on success
[12:23] <rogpeppe> fwereade: that's fine
[12:24] <rogpeppe> fwereade: the document is stored in the Machine as a value type. as long as none of our Machine methods change things that are pointed to by things in the machine doc, we're ok
[12:24] <fwereade> rogpeppe, we can be sure that none of them will ever change, say, a slice on the document?
[12:24] <rogpeppe> fwereade: that's not a hard invariant to maintain (it's local)
[12:24] <rogpeppe> fwereade: we can be sure they don't now, and it's not hard to verify that in the future
[12:25] <fwereade> rogpeppe, my experience is that it's a very difficult invariant to maintain, even with a team of ultra-smart people half the size of this one
[12:25] <rogpeppe> fwereade: i think it's better than adding to memory pressure and writing a bunch more code that needs to be maintained every time a field is changed.
[12:26] <fwereade> rogpeppe, if we're exporting a Clone method, that clone method must deep-copy the data
[12:26] <fwereade> rogpeppe, if it's not exported I'm willing to be a bit laxer
[12:27] <fwereade> rogpeppe, not because it won't screw us, because it *will*
[12:27] <fwereade> rogpeppe, but because at least the scope of the weirdness will be small enough that we'll have a chance of dealing with it
[12:27] <rogpeppe> fwereade: tbh, i would prefer us to make Machine etc immutable
[12:27] <rogpeppe> fwereade: i don't think we gain much by having methods mutate our local idea of state
[12:29] <fwereade> rogpeppe, that's probably a reasonable position, especially considering current usage, but it's not really on the table at the moment
[12:29] <fwereade> rogpeppe, in terms of potentially fiddly changes, errgo has a much bigger payoff ;p
[12:29]  * fwereade needs to go to the airport, hadn't realised he was flying so early
[12:29]  * fwereade will say hi again this evening if he can
[12:29] <natefinch> rogpeppe: gotta help with my daughter for a bit, probably be 45-60 mins
[12:30] <rogpeppe> fwereade: where are you going?
[12:30] <rogpeppe> natefinch: ok
[12:30] <perrito666> rogpeppe: well have to wait until he returns to know :p
[12:45] <rogpeppe> ha, i have a machine where provisioning failed (amazon says "Server.InternalError: Internal error on launch") but i can't call retry-provisioning because the machine isn't in an error state
[12:52] <dimitern> rogpeppe, mgz, errors package improvements - https://codereview.appspot.com/87560043 - it's a bit big, but most changes are renames
[12:53] <mgz> ...scary
[12:56] <rogpeppe> dimitern: the Suffix field looks like it's not used - is it?
[13:01] <rogpeppe> dimitern: similarly ArgsConstructor doesn't appear to be used
[13:01] <dimitern> rogpeppe, it's used in tests only
[13:02] <rogpeppe> dimitern: right. i'm not sure we need to pollute the production code with test-specific functionality.
[13:02] <dimitern> rogpeppe, allErrors is unexported - how does it pollute?
[13:02] <rogpeppe> dimitern: it makes the code more complex
[13:03] <dimitern> rogpeppe, so you're saying let's have 2 almost identical []struct{} defined - one for testing, the other for production?
[13:03] <rogpeppe> dimitern: i don't think you need the table at all in the production code - i'm just writing up a suggestion
[13:04] <dimitern> rogpeppe, if it stays like this there's less chance of forgetting to add a new error type to allErrors and have it tested
[13:04] <dimitern> rogpeppe, ok, thanks
[13:06] <sinzui> jamespage, Do you know who I can show Bug #1305280 to get an apparmor issue addressed?
[13:06] <_mup_> Bug #1305280: juju command get_cgroup fails when creating new machines, local provider arm32  <armhf> <local-provider> <lxc> <packaging> <juju-core:Triaged> <apparmor (Ubuntu):New> <https://launchpad.net/bugs/1305280>
[13:07] <jam> hi sinzui, I had some CI things I wanted to work with you on
[13:07] <sinzui> hi jam
[13:08] <jam> sinzui: specifically, looking at the log files, you're using "juju-1.18.0" to do "scp"
[13:08] <jam> which is known broken for you
[13:08] <jam> and we released 1.18.1 with that specific fix for you
[13:08] <jam> though beyond that, "juju scp" always requires the API server to be functioning, which is what is breaking in the "upgrade" test
[13:08] <jam> so it might be nice if we tried to use raw "scp" if we can.
[13:08] <jam> either try to raw scp first, or try "juju scp" first and fall back
[13:09] <jam> sinzui: we can get the API IP address from the environment/foo.jenv file
[13:09] <sinzui> jam, yes, abentley and I discussed the fallback
[13:09] <jam> (If we've ever connected successfully, we'll be caching the value there, and we'd like to get to the point where we cache it at the end of bootstrap)
[13:10] <jam> sinzui: I'm trying to debug the local bootstrap problem. I haven't reproduced it yet, but I'm currently on Trusty
[13:10] <sinzui> jam, and we can also update to 1.18.1 today
[13:10] <jam> so I have to fire up a Precise instance first
[13:11] <sinzui> jam, interesting. http://ec2-54-84-137-170.compute-1.amazonaws.com:8080/job/aws-upgrade-trusty/ shows trusty's upgrade failing in parallel with precise's
[13:12]  * sinzui starts update and upgrade
[13:14] <sinzui> jam, I can re-run an upgrade test for the cloud and series of your choice
[13:14] <jam> sinzui: but that is not local
[13:14] <sinzui> 1.18.1 is installed
[13:15] <sinzui> now
[13:15] <jam> I'm just starting with trying to fix the local-deploy issue
[13:15] <jam> so I can try to not fire up a remote machine just to debug upgrade
[13:18] <jam> sinzui: the main question about local right now is that probably the version of mongod running on precise is different from trusty
[13:18] <jam> so while I think we also have an upgrade bug
[13:18] <jam> It might be that bootstrap is failing because trusty has 2.4.9 (which works for us), and Precise is running 2.4.6 or something
[13:19] <jam> I just realized my test won't work, as local under LXC doesn't work
[13:21] <rogpeppe> dimitern: reviewed (kinda)
[13:22] <dimitern> rogpeppe, cheers, I have the next one for you btw :) - it's tiny https://codereview.appspot.com/87470044
[13:23] <rogpeppe> dimitern: i agree that the mgo/txn docs could be clearer, BTW
[13:24] <dimitern> rogpeppe, no doubt about it
[13:24] <rogpeppe> dimitern: LGTM
[13:25] <dimitern> rogpeppe, ta!
[13:44] <dimitern> rogpeppe, updated https://codereview.appspot.com/87560043 - it's nicer now I think
[13:44] <rogpeppe> axw: ha, it seems that 7 maximum parallel try attempts is way too small for real world API dialling
[13:45] <rogpeppe> BTW I now have a functional environment where I destroyed the bootstrap instance
[13:45]  * axw tries to remember why it's 7
[13:45] <axw> cool :)
[13:46]  * dimitern will bbiab (1h)
[13:47] <rogpeppe> axw: in my environment with 3 state servers, i see 21 addresses cached in the .jenv file...
[13:47] <axw> rogpeppe: I guess I was thinking one per state server, but we will need more for each address type...
[13:47] <rogpeppe> axw: and because each dial attempt takes ages to time out, we don't get to try the second valid address until it has.
[13:48] <rogpeppe> axw: i'm tempted to just allow unlimited concurrent dials
[13:49] <axw> rogpeppe: how do you have 21 addresses for 3 state servers?
[13:49] <axw> why so many?
[13:49] <rogpeppe> axw: http://paste.ubuntu.com/7249818/
[13:49] <rogpeppe> axw: machine-local addresses, ipv6 addresses, etc
[13:50] <axw> rogpeppe: we are ignoring the machine-local ones, right?
[13:50] <mgz> that number should get filtered a little, yeah
[13:50] <axw> I should know the answer to this :)
[13:50] <rogpeppe> axw: no, not in api dial, because we don't currently store the metadata in the jenv
[13:50] <rogpeppe> axw: that needs to be fixed
[13:50] <axw> ah right, yeah
[13:51] <axw> rogpeppe: so really, I think we'd have 2*state-server for both public and internal, if we had that
[13:51] <rogpeppe> still, the point remains that you probably want to try dialling all your api server addresses at once, because sod's law says that the one you don't try is the only one that works
[13:52] <axw> true. there's the private-inside-private scenario to cater for
[13:52] <rogpeppe> axw: probably 4, because DNS-name vs numeric
[13:52] <axw> rogpeppe: I was thinking public IP & name, but yes we do need to try private too
[13:52] <rogpeppe> axw: yup
[13:52] <jam> jamespage: sinzui: do we know why cloud-archive:tools only has juju-1.16.3 ?
[13:53] <jamespage> jam: it's called a blocked SRU
[13:53] <jamespage> can't get into cloud-tools before you go into saucy
[13:54] <axw> rogpeppe: I guess we can do unlimited... if we get in trouble, we could try a more complicated initially-short but expanding timeout
[13:55] <jamespage> jam: we don't have an MRE yet so I have to detail how to test every bug in full - see bug 1277526
[13:55] <_mup_> Bug #1277526: [SRU] juju-core 1.16.6 point release tracker <juju-core (Ubuntu):Fix Released> <juju-core (Ubuntu Saucy):In Progress by james-page> <juju-core (Ubuntu Trusty):Fix Released> <https://launchpad.net/bugs/1277526>
[13:55] <jam> jamespage: ouch. Going from 1.16.6 => 1.18.X is going to be a massive PITA for that.
[13:55] <jamespage> jam: there is no SRU for 1.16.6 -> 1.18.x
[13:56] <jam> jamespage: I realize there isn't (yet), but wouldn't the plan be to have the stable version of Juju in cloud-archive:tools ?
[13:56] <jamespage> jam: that happens when cloud-archive:tools gets superseded/replaced by the trusty version
[13:57] <jamespage> jam: actually - while I'm thinking about this - how does backup/restore work on 1.16.6?
[13:57] <jamespage> I see the update-bootstrap-node stuff mgz did in the bug list
[13:57] <jam> jamespage: AFAIK it works through all the 1.16's because that is what we wrote it against.
[13:58] <jamespage> jam: but for 1.16.x there is no backup or restore plugin?
[13:58] <jam> jamespage: I think it was added in 1.16.5 ?
[13:58] <jamespage> really?
[13:58] <jam> jamespage: we added it for CTS
[13:59] <mgz> yeah, it was a bit of a fudge for minor version
[14:02] <natefinch> rogpeppe: hey, sorry, that took a lot longer than expected, obviously.
[14:02] <rogpeppe> natefinch: i'm just about to go to lunch
[14:03] <natefinch> rogpeppe: ok, where are we right now?
[14:03] <rogpeppe> natefinch: i've had a mostly-success with my integrated branch
[14:04] <rogpeppe> natefinch: two things we need to fix: agent.Config.StateInfo needs to return localhost always
[14:04] <rogpeppe> natefinch: api.Open should try all addresses concurrently
[14:04] <rogpeppe> natefinch: oh, and one other one line fix
[14:05] <rogpeppe> natefinch: APIWorker needs to fetch agent config again after dialling
[14:06] <natefinch> rogpeppe: ok, I can start working on those.  Should I branch off your branch or just do that in a new branch off trunk?
[14:06] <rogpeppe> natefinch: i'd just do new branches off trunk
[14:07] <rogpeppe> natefinch: they're all trivial
[14:07] <natefinch> rogpeppe: yep, ok
[14:15] <jam> jamespage: "juju-local" doesn't seem to depend on rsyslog-gnutls
[14:16] <jam> ah, maybe it does now, but upgrade didn't do it?
[14:16] <jam> weird
[14:16] <jamespage> it does
[14:16] <jam> jamespage: I thought I did apt-get upgrade, but I had to "apt-get install juju-local" again to get it
[14:17] <jam> jamespage: anyway, it looks like 1.18.1 does depend on it, so thanks for that, sorry about the confusion
[14:17] <jamespage> np
[14:26] <jam> sinzui: I'm unable to reproduce the "local bootstrap" failure with trunk and cloud-archive:tools version of mongo (2.4.6)
[14:26] <jam> I see the replicaSet line, but it doesn't fail
[14:32] <sinzui> jam, I don't know which bug you are working on. The lxc bug I know of is about apparmor: bug 1305280
[14:32] <_mup_> Bug #1305280: juju command get_cgroup fails when creating new machines, local provider arm32  <armhf> <local-provider> <lxc> <packaging> <juju-core:Invalid> <apparmor (Ubuntu):New> <https://launchpad.net/bugs/1305280>
[14:32] <jam> sinzui: https://bugs.launchpad.net/juju-core/+bug/1306212
[14:32] <_mup_> Bug #1306212: juju bootstrap fails with local provider <bootstrap> <ci> <local-provider> <regression> <juju-core:In Progress by jameinel> <https://launchpad.net/bugs/1306212>
[14:33] <jam> sinzui: since I can't reproduce that right now, I'm switching to https://bugs.launchpad.net/juju-core/+bug/1307450
[14:33] <_mup_> Bug #1307450: upgrading from 1.18.1 to 1.19 (trunk) fails (API server stops responding) <ci> <juju-core:Triaged by jameinel> <https://launchpad.net/bugs/1307450>
[14:34] <sinzui> jam: please do
[14:38] <jam> sinzui: so offhand, we have a different bug, which is that "juju upgrade-juju --upload-tools" doesn't end up putting the tools where the agents can find them. :(
[14:39] <sinzui> damn
[14:40] <jam> sinzui: it looks like it uploads the tools, but doesn't make it readable
[14:40] <sinzui> jam, We would be happy if local-provider honours tools-metadata-url. We want to set it to a testing stream since local has to use streams to get tools for different series
[14:40] <sinzui> jam, but I won't redirect you from delivering the fastest fix
[14:41] <jam> sinzui: well this is testing "juju-1.19.0 upgrade-juju --upload-tools"
[14:41] <jam> which should be working, but something isn't right
[14:43] <alexisb> morning all (and good evening)
[14:44] <jam> sinzui: sorry I couldn't get any farther on this, but I have to EOD
[14:44] <jam> wallyworld wanted to pick it up in the morning
[14:44] <sinzui> Thank you for your time jam
[14:44] <jam> sinzui: and I think he was the one who did the changes to "upload-tools" so he probably has better insight there
[14:47] <natefinch> alexisb: morning alexis  (I think the convention is just to use the greeting relative to your own time zone... everyone knows what you mean :)
[14:47] <jam> morning alexisb
[14:48] <jam> you're up awfully early
[14:48] <jam> sinzui: launchpad Q. If I have sensitive all-machines.log, can I upload it as a private attachment?
[14:52] <sinzui> jam No private attachment :(
[14:53] <jam> sinzui: fortunately VIM can global search for the secrets and replace them with XXX without too much trouble
[14:57] <rogpeppe> axw: ping
[14:57] <rogpeppe> alexisb: hiya
[14:58] <jam> sinzui: hmm... It looks like "juju bootstrap" started creating i386 instances, and you can't "upgrade-juju --upload-tools" with a 64-bit version
[14:58] <jam> it will let you, but it can't find the i386 tools (for obvious reasons)
[14:58] <axw> rogpeppe: hey
[14:59] <alexisb> jam, not that early 8am for me
[14:59] <jam> sinzui: can you check if your 1.18.1 bootstrapped instances are i386 ?
[14:59] <jam> it was for me
[14:59] <jam> which is also a bug
[14:59] <rogpeppe> axw: the existing code doesn't seem to mention juju-mongodb
[14:59] <rogpeppe> axw: do you know how we should tell if it's available?
[14:59] <rogpeppe> axw: (looking at your comments on https://codereview.appspot.com/86920043 )
[15:00] <jam> alexisb: well, you were on a bit earlier, but I did the math wrong. 11 hours makes you 1 hour closer, not 1 hour farther away
[15:00] <axw> rogpeppe: right. no, I don't. I guess it just hasn't been done yet - so that can be TODO
[15:00] <jam> I thought it was 5:30 ish
[15:00] <rogpeppe> axw: ok, cool
[15:01] <axw> rogpeppe: this upgrade thing is a massive PITA. may take me a little while yet to come up with a nice solution
[15:01] <rogpeppe> axw: where do the main difficulties lie?
[15:02] <axw> rogpeppe: upgrade steps require API server & state, API server dies when state gets bounced
[15:02] <rogpeppe> axw: don't do it in upgrade steps
[15:02] <rogpeppe> axw: do it in EnsureMongo
[15:02] <jam> sinzui: so I have a bit more I can try to go on tomorrow, or *maybe* later tonight depending on how things go.
[15:02] <rogpeppe> axw: where we're already stopping and restarting the service
[15:03] <sinzui> jam, okay. I am still looking for  the arch that was used
[15:03] <axw> rogpeppe: I *think* there's a problem then that server.pem may not exist
[15:03] <axw> err
[15:03] <axw> maybe not that one
[15:03] <axw> there was another file that was created on upgrade
[15:03] <jam> sinzui: I think we have a 1.18.2 Critical bug that 1.18.X no longer prefers amd64
[15:03] <rogpeppe> axw: EnsureMongoServer is responsible for writing out the files that mongo requires, so we *should* be ok, i think
[15:03] <axw> rogpeppe: anyway. I did start down that path... I'll keep looking into it tomorrow
[15:04] <axw> ok
[15:04] <rogpeppe> axw: thanks a lot
[15:04] <jam> I *think* someone commented that it was because of PPC/ARM64 enablement
[15:04] <jam> (we can't force amd64, so we let the cloud tell us what to use, but that means if both i386 and amd64 are available we now do i386, when we should do amd64 if possible)
[15:04] <axw> sleepy time.. night all
[15:04] <jam> sinzui: I do believe you can force it with: juju bootstrap --constraints="arch=amd64"
[15:05] <rogpeppe> axw: BTW the reason for moving InitiateMongoServer into peergrouper is...
[15:05] <jam> and now, I really must go spend time with my family :)
[15:05] <rogpeppe> too late!
[15:09] <sinzui> jam. CI has started a new round of tests. These will use 1.18.1. I will watch them for arch mismatches
[15:11] <rogpeppe> natefinch: ping
[15:11] <natefinch> rogpeppe: hi
[15:12] <rogpeppe> natefinch: hangout?
[15:12] <natefinch> rogpeppe: sure
[15:12] <rogpeppe> natefinch: https://plus.google.com/hangouts/_/canonical.com/juju-core?authuser=1
[15:39] <rogpeppe> could someone have a look at this please? we've addressed comments, but it still needs a LGTM and it's a major blocker for HA. https://codereview.appspot.com/86920043/
[15:41] <sinzui> jam: I don't see an arch mismatch deploying 1.18.1. CI doesn't use upload-tools when deploying stable (since upload-tools is officially a developer feature)
[15:41]  * sinzui tries locally
[15:41] <natefinch> dimitern, mgz, jam, ping on the review above that roger posted
[15:57] <rogpeppe> trivial review anyone? https://codereview.appspot.com/87560044
[15:58] <rogpeppe> dimitern, mgz, jam: ^
[16:03] <dimitern> rogpeppe, looking
[16:03] <rogpeppe> dimitern: ta!
[16:04] <dimitern> rogpeppe, i'd swap you for https://codereview.appspot.com/87560043 :)
[16:05] <rogpeppe> dimitern: will do, after i've finished investigating this issue
[16:06] <dimitern> rogpeppe, sure, np - just reminding
[16:06] <dimitern> rogpeppe, LGTM
[16:07] <rogpeppe> dimitern: we really really need a review of https://codereview.appspot.com/86920043/ if you could muster the energy for it
[16:07] <rogpeppe> dimitern: but thanks for that review too :-)
[16:08] <dimitern> rogpeppe, looking that one as well
[16:08] <rogpeppe> dimitern: much appreciated
[16:33] <jam> rogpeppe: on https://codereview.appspot.com/87560044/ is there something about direct State destruction that we lose with your patch?
[16:33] <rogpeppe> jam: no
[16:33] <rogpeppe> jam: AFAIK
[16:36] <rogpeppe> jam: we only connect to the API if we don't use --force, and in that case we really want to use the usual API connection methods
[16:53] <dimitern> rogpeppe, natefinch, that HA CL LGTM with some trivials
[16:54] <rogpeppe> dimitern: thanks muchly
[16:54] <dimitern> rogpeppe, i'll poke you again about https://codereview.appspot.com/87560043 though :) (last time for today)
[16:54] <rogpeppe> dimitern: ok, will look now :-)
[16:55] <dimitern> rogpeppe, tyvm!
[16:58] <rogpeppe> dimitern: the only comment i might have would be that it might be more idiomatic to have the error types themselves as pointer types, embedding wrapper as a value
[16:59] <rogpeppe> dimitern: in fact, i think that's definitely worth doing
[16:59] <rogpeppe> dimitern: because it means that %#v will work better on errors
[16:59] <rogpeppe> dimitern: so: type notFound struct{wrapper}
[16:59] <rogpeppe> dimitern: and func (*notFound) new( etc
[17:44] <dimitern> rogpeppe, ok, that sgtm
[17:45] <dimitern> rogpeppe, did I see LGTM as well? :)
[17:45] <rogpeppe> dimitern: i really think those tests could use sorting out
[17:45] <dimitern> rogpeppe, which ones?
[17:45] <rogpeppe> dimitern: i've been struggling to understand the logic
[17:45] <rogpeppe> dimitern: errors_test.go
[17:45] <rogpeppe> dimitern: after some effort, i think i've managed to tease out a suggestion
[17:45] <dimitern> rogpeppe, for each error in allErrors I add like 20ish cases
[17:46] <dimitern> rogpeppe, I didn't want to repeat the same tests for all types and possibly miss something along the way
[17:46] <rogpeppe> dimitern: i know, but the logic is quite a bit more complex than it needs to be
[17:46] <rogpeppe> dimitern: lines 180 to 190 are really hard to follow
[17:47] <rogpeppe> dimitern: and the errorSatisfier type doesn't seem to be doing much any more
[17:47] <dimitern> rogpeppe, I confess I kept it only for the String() method
[17:48] <rogpeppe> dimitern: yeah, it feels like a weird holdover
[17:48] <rogpeppe> s/holdover/relic/
[17:48] <rogpeppe> dimitern: and you don't even need the String method for what you're using it for
[17:49] <dimitern> rogpeppe, I need a way to compare 2 satisfiers (== or !=) and i can't do it with func pointers it seems
[17:49] <rogpeppe> dimitern: you could have two nested loops over allErrors
[17:50] <dimitern> rogpeppe, isn't that worse than using reflect?
[17:50] <rogpeppe> dimitern: then you just need to compare indexes (or perhaps pointers if you prefer)
[17:50] <rogpeppe> dimitern: it's certainly simpler
[17:50] <rogpeppe> dimitern: so i think it's better
[17:51] <dimitern> rogpeppe, but I have test.satisfier and allErrors[i].satisfier
[17:51] <rogpeppe> dimitern: you don't need test.satisfier
[17:51] <dimitern> rogpeppe, I can't just compare them and the indexes don't matter
[17:51] <rogpeppe> dimitern: the only reason you have that is that you're mixing in nil satisfier tests
[17:51] <rogpeppe> dimitern: they don't really fit, and they complicate all the logic
[17:51] <dimitern> rogpeppe, hmm..
[17:52] <dimitern> rogpeppe, I guess I can make a separate set of tests + loop in another test case for nils
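The nested-loop shape rogpeppe is proposing, as a runnable sketch. The table entries (`errorf`/`satisfier` pairs) and the two error types are hypothetical stand-ins for the real errors_test.go table; the point is that comparing loop indexes replaces both reflect and the `errorSatisfier` String() comparison:

```go
package main

import "fmt"

// entry pairs a hypothetical error constructor with its checker.
type entry struct {
	name      string
	errorf    func(string) error
	satisfier func(error) bool
}

type notFoundError struct{ msg string }
type unauthorizedError struct{ msg string }

func (e *notFoundError) Error() string     { return e.msg }
func (e *unauthorizedError) Error() string { return e.msg }

var allErrors = []entry{
	{"NotFound",
		func(m string) error { return &notFoundError{m} },
		func(err error) bool { _, ok := err.(*notFoundError); return ok }},
	{"Unauthorized",
		func(m string) error { return &unauthorizedError{m} },
		func(err error) bool { _, ok := err.(*unauthorizedError); return ok }},
}

func main() {
	// Two nested loops over allErrors: each satisfier must accept
	// exactly the error made by its own entry, so the expected
	// result is simply i == j.
	for i, maker := range allErrors {
		err := maker.errorf("pow")
		for j, checker := range allErrors {
			got, want := checker.satisfier(err), i == j
			if got != want {
				panic(fmt.Sprintf("%s vs %s: got %v, want %v",
					maker.name, checker.name, got, want))
			}
		}
	}
	fmt.Println("ok")
}
```

The nil-satisfier cases dimitern mentions would live in a separate test rather than being mixed into this table.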
[17:52] <rogpeppe> dimitern: i'd move the contextf tests into their own function too
[17:52] <rogpeppe> dimitern: it's really a totally independent function
[17:52] <dimitern> rogpeppe, but it needs to loop over allErrors as well
[17:53] <dimitern> rogpeppe, ok, can be done separately I agree
[17:53] <rogpeppe> dimitern: not necessarily
[17:53] <rogpeppe> dimitern: its logic is independent of allErrors
[17:53] <rogpeppe> dimitern: you do need to check that each error implements the newer interface, but that's easy to check statically
[17:54] <dimitern> rogpeppe, the origin of this CL is the behavior of ErrorContextf - I need to check each error type is preserved
[17:54] <rogpeppe> dimitern: fair enough. but that's a very simple test and loop over allErrors.
[17:54] <dimitern> rogpeppe, yeah, but that's an implementation detail that you, as a user of Contextf doesn't need to know
[17:55] <dimitern> rogpeppe, exactly
[17:55] <jam> natefinch: https://codereview.appspot.com/87570043/ <= log the version of mongo as we create the upstart job
[17:56] <dimitern> rogpeppe, ok, I appreciate your comments and will look at it a bit later or tomorrow
[17:56]  * dimitern reached eod
[17:56] <rogpeppe> dimitern: np, sorry for the push-back.
[17:57] <dimitern> rogpeppe, not to worry - it was useful :)
[18:05] <jam> sinzui: just in case it wasn't clear, "juju upgrade-juju --upload-tools" failed because bootstrap picked an i386, but upload-tools can only upload the amd64 that I'm running.
[18:05] <jam> so it was a combination of bug #1304407
[18:05] <_mup_> Bug #1304407: juju bootstrap defaults to i386 <amd64> <apport-bug> <ec2-images> <metadata> <trusty> <juju-core:Triaged> <juju-core 1.18:Triaged> <juju-core (Ubuntu):New> <https://launchpad.net/bugs/1304407>
[18:06] <jam> and bug #1282869
[18:06] <_mup_> Bug #1282869: juju bootstrap --upload-tools does not honor the arch of the machine being created <bootstrap> <constraints> <ppc64el> <upload-tools> <juju-core:Fix Released by wallyworld> <https://launchpad.net/bugs/1282869>
[18:06] <sinzui> o O (clue x 4)
[18:07] <jam> sinzui: so I'm going to try it again and see if I can reproduce the failing to upgrade (for the right reason)
[18:07] <jam> sinzui: though it looks like bug #1282869 isn't quite complete, as we fixed "bootstrap" but not "upgrade-juju"
[18:07] <_mup_> Bug #1282869: juju bootstrap --upload-tools does not honor the arch of the machine being created <bootstrap> <constraints> <ppc64el> <upload-tools> <juju-core:Fix Released by wallyworld> <https://launchpad.net/bugs/1282869>
[18:12] <jam> sinzui: I reproduced the "cannot upgrade to 1.19.0" bug: 2014-04-14 18:11:40 ERROR juju runner.go:220 worker: exited "state": cannot log in to admin database as "machine-0": unauthorized mongo access: auth fails
[18:12] <jam> natefinch: ^^
[18:26] <jam> rogpeppe: if you're still around, found the upgrade bug
[18:26] <jam> specifically, 1.19.0 always tries to login to the "admin" db
[18:26] <rogpeppe> jam: really? cool.
[18:26] <jam> but on an upgrade, it doesn't have rights as machine-0
[18:26] <rogpeppe> jam: oh of course, dammit
[18:27] <jam> rogpeppe: so... do we back out logging into admin, do we make it "try but be ok if it fails" ?
[18:27] <jam> rogpeppe: if we aren't going to do the full "upgrade support for HA" then we need to put in hacks
[18:27] <rogpeppe> jam: i think we've got to do the latter
[18:27] <rogpeppe> jam: otherwise HA won't work even when not upgraded
[18:28] <jam> rogpeppe: so out of curiosity, why are we doing "admin := session.DB(AdminUser)"? I realize the name of the db is "admin", but that shouldn't be AdminUser, should it?
[18:28] <rogpeppe> jam: hmm
[18:29] <jam> rogpeppe: it is just that we're using the "admin" as the name of the user as the name of the DB
[18:29] <jam> mostly just a constant that "works" but isn't actually the right named constant
[18:30] <rogpeppe> jam: yeah, it does seem odd
[18:31] <jam> rogpeppe: k, the other Database names are just hard-coded strings in the function, so I'll follow suit for clarity
[18:33] <rogpeppe> jam: sgtm
[18:33] <rogpeppe> jam: personally i like hard-coded strings anyway - i think they're often clearer
[18:35] <rogpeppe> jam: DB(AdminUser) does seem wrong to me. i don't know what i was thinking.
[18:36] <jam> rogpeppe: is there an obvious way how to remove an agent from admin? (I'd like to add a test that we come up ok when we can't access 'admin' as we'd run into after upgrade)
[18:37] <rogpeppe> pwd
[18:37] <jam> afaict we don't do anything with the "admin" db we just logged into
[18:37] <jam> at least not directly
[18:37] <jam> the other DB objects in that func are put into the State object
[18:37] <rogpeppe> jam: st.db.SessionDB("admin").RemoveUser(AdminUser)
[18:38] <jam> rogpeppe: thanks
[18:38] <jam> well, in this case "RemoveUser(info.Tag)" aka ("machine-0")
[18:38] <rogpeppe> jam: yeah
[18:38] <rogpeppe> jam: no, we don't do anything with the admin db
[18:39] <rogpeppe> jam: but we do need access to it for manipulating the replica set
[18:39] <jam> rogpeppe: right, it allows you to call particular functions *if* you're logged in
[18:39] <rogpeppe> jam: yeah
[18:39] <jam> side-effect is on Mongo side
[18:41] <jam> rogpeppe: presumably we also need to change State.setMongoPassword to allow for AddUser on the "admin" table to fail?
[18:41] <jam> or we shouldn't ever be creating one of those
[18:42] <jam> since we can't be in HA we shouldn't be adding any machines that would want to
[18:42] <rogpeppe> jam: yeah
[18:43] <rogpeppe> jam: we should change EnsureAvailability to fail if we're not in replica set mode
[18:43] <rogpeppe> jam: that way people can't get themselves into a nasty twist
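rogpeppe's guard, sketched as a plain function so the precondition is explicit. The parameter names are hypothetical, and in the real code the replica-set check would come from querying mongo's replica set status rather than a bool; the odd-count check is also an assumption about how state-server counts are validated:

```go
package main

import (
	"errors"
	"fmt"
)

// ensureAvailability refuses to act when mongo is not running in
// replica set mode, so users upgraded from pre-HA deployments fail
// fast instead of getting into a nasty twist.
func ensureAvailability(inReplicaSetMode bool, numStateServers int) error {
	if !inReplicaSetMode {
		return errors.New("cannot ensure availability: mongo is not running in replica set mode")
	}
	if numStateServers%2 != 1 {
		return fmt.Errorf("number of state servers must be odd, got %d", numStateServers)
	}
	// ... add/promote state-server machines here ...
	return nil
}

func main() {
	fmt.Println(ensureAvailability(false, 3)) // fails fast pre-upgrade
	fmt.Println(ensureAvailability(true, 3))  // proceeds
}
```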
[18:57] <jam> rogpeppe: &mgo.LastError{Err:"not authorized to remove from admin.system.users",
[18:57] <rogpeppe> jam: hmm
[18:57] <rogpeppe> jam: i suppose it might have removed the user anyway
[18:58] <jam> that is after trying to do:
[18:58] <jam> 	adminDB := s.state.db.Session.DB("admin")
[18:58] <jam> 	password := testing.FakeConfig()["admin-secret"].(string)
[18:58] <jam> 	adminDB.Login(AdminUser, password)
[18:58] <jam> so theoretically ensuring that I'm admin, though I need to check the err code
[18:58] <jam> auth fails ...
[18:59] <jam> rogpeppe: from what I can tell, TestingInitialize returns State object that isn't actually logged into the Admin db
[18:59] <jam> TestingInfo doesn't have a password
[18:59] <jam> rogpeppe: what is *really* strange is that SetMongoPassword was perfectly happy, which *should* be setting the password in "admin"
[19:00] <jam> so you are authed to add people, but not remove them?
[19:00] <rogpeppe> jam: it does seem odd
[19:00] <rogpeppe> jam: mongo has some weird semantics sometimes
[19:09] <jam> rogpeppe: so I haven't figured out the password for "admin", but I have found that if I call Machine.SetMongoPassword() I can then log into "admin" as "machine-0" with the password I just gave it, and then use *those* credentials to remove the "machine-0" user.
[19:09] <jam> WTWTTWW WTF?
[19:09] <natefinch> lol mongo is wacky
[19:10] <jam> natefinch: yeah, so $CURRENT_USER can add admins, but can't remove them, but you can add one, log in as it, and then do whatever-you-want
[19:13] <jam> natefinch: apparently the model changed in mongo 2.6: http://docs.mongodb.org/manual/reference/method/db.addUser/
[19:13] <jam> rogpeppe: from what I can tell, calling adminDB.RemoveUser("machine-0") removes it completely, and not just from admin
[19:14] <rogpeppe> jam: ha
[19:14] <rogpeppe> jam: so i guess you'll have to remove it, then add it back to the ones you want it on
[19:15] <jam> rogpeppe: actually, looks like I was just screwing up the password, so I need to try again
[19:16] <jam> finally, failing in the way I wanted
[19:16] <jam> and now success
[19:16] <rogpeppe> jam: so mongo wasn't being weird at all, in fact?
[19:17] <jam> rogpeppe: well, I still have to log in as the agent I just created to delete it
[19:17] <jam> that is still weird-as-fuck
[19:17] <rogpeppe> jam: ah yes
[19:17] <jam> but removing it from admin only removes it from admin
[19:22] <sinzui> I think we want a juju-local-kvm package to sort of kvm deps. juju-local is lxc centric
[19:26] <natefinch> rogpeppe: lp:~natefinch/juju-core/043-localstateinfo
[19:31] <jam> natefinch: rogpeppe: It makes me wonder if we couldn't just add ourselves if we weren't in admin to start with ...
[19:32] <rogpeppe> jam: i *think* i tried that
[19:32] <rogpeppe> jam: but try it anyway
[19:34] <jam> rogpeppe: natefinch: sinzui: https://code.launchpad.net/~jameinel/juju-core/soft-login-failure-1307450/+merge/215742
[19:34] <jam> I was able to reproduce the upgrade failure with the local provider
[19:34] <jam> and that patch lets it get further
[19:34] <rogpeppe> jam: codereview?
[19:34] <sinzui> \o/
[19:34] <jam> rogpeppe: lbox is thinking about it
[19:35] <jam> sinzui: not that there won't be any other bugs, but the first one I think I got
[19:35] <jam> rogpeppe: weird, still thinking
[19:35] <jam> rogpeppe: https://codereview.appspot.com/87730043
[19:36] <jam> rogpeppe: natefinch: I'm off to sleep, unfortunately, so if it needs tweaks, I'm sure curtis would appreciate you picking it up from here.
[19:36] <rogpeppe> jam: i like "haven't implemented bug #xxxx" - sounds like we want to implement a bug...
[19:36] <jam> or you can point wallyworld at it when he gets up
[19:36] <natefinch> jam: cool
[19:36] <rogpeppe> jam: thanks
[19:41] <rogpeppe> natefinch: FWIW, the last remaining diffs that we haven't already got branches in progress for: http://paste.ubuntu.com/7251459/
[19:41] <natefinch>  rogpeppe: wow, that's awesome
[21:09] <bac> sinzui: so swift remains dead to us.  jenkins for charmworld uses juju to update to the newly blessed code, so staging is now stuck and useless.
[21:09] <sinzui> bac: yep
[21:09]  * bac is sad
[21:10] <sinzui> bac: our only option at this time would be to replace the stack under our personal credentials, but we also need different public IPs
[21:10] <bac> sinzui: why the last part?
[21:11] <bac> sorry, that was cryptic, sinzui why do we need new public IPs?  because we can't wrest them away from the current assignees?
[21:11] <sinzui> bac: public IPs are not shareable or transferable between accounts
[21:11] <bac> hi thumper
[21:11] <thumper> o/
[21:11] <bac> sinzui: oh.  can they be revoked from orange and given to us?
[21:12] <sinzui> bac: if we wanted to preserve the current IPs we need to revoke them, then hope we get the same ones when we allocate new IPs
[21:12] <bac> oi goi oi
[21:12] <thumper> sinzui: I'm going to test bootstrapping on precise
[21:12] <bac> or, dios mio as they say here
[21:12] <thumper> sinzui: I have a precise machine here
[21:12] <sinzui> bac: my success rate is 25% in my attempts to get an IP I had in another account
[21:12] <thumper> what is our ppa for precise stuff?
[21:13] <bac> sinzui: so, what's another RT to update dns in the grand scheme?
[21:14] <waigani> morning all
[21:14] <bac> sinzui: so it looks like i need to push a change directly to production without running on staging first.  guess i'll wait until the morning.
[21:15] <sinzui> thumper, while you slept I worked out how to use debug-log
[21:16] <thumper> sinzui: okay...
[21:16] <sinzui> bac: yes lets wait till the morning. I can think about how to make the machine do an update like the charm would
[21:16] <sinzui> thumper, I will ping you when I would like your review.
[21:17] <thumper> sinzui: oh... for docs...
[21:17] <thumper> yeah, this documentation thing still slips by me ...
[21:17] <thumper> sinzui: we should get a summary of the help doc into the actual command line help
[21:18] <sinzui> thumper, I agree. Maybe I will make that a topic for vegas
[21:24] <thumper> sinzui: for the local bootstrap test on precise
[21:24] <thumper> sinzui: what is the minimum I need to install?
[21:25] <sinzui> thumper, CI uses real precise + juju + juju-local
[21:25] <thumper> sinzui: juju and juju-local from where?
[21:26] <thumper> sinzui: also, do you know which compiler?
[21:26] <sinzui> thumper, any recent. I have juju 1.19 and juju-local 1.18.1. I haven't changed the last package in a while
[21:26] <thumper> sinzui: as I may need to build additional logging
[21:27] <sinzui> thumper, good question
[21:27] <thumper> sinzui: let me be clear, the precise box I have currently has no juju deps at all
[21:27]  * sinzui looks
[21:27] <thumper> sinzui: I'm assuming there is a ppa
[21:28] <sinzui> $ apt-cache madison golang-go
[21:28] <sinzui>  golang-go | 2:1.1.2-2ubuntu1~ctools1 | http://ubuntu-cloud.archive.canonical.com/ubuntu/ precise-updates/cloud-tools/main amd64 Packages
[21:29] <sinzui> thumper, and if you want very close matches to packages I can offer this... but I assure you I haven't changed local packaging since 1.18.0
[21:29] <sinzui> http://ec2-54-84-137-170.compute-1.amazonaws.com:8080/job/publish-revision/ws/tmp.fQ6PU5ZxX5/
[21:29] <thumper> ah ha
[21:29] <thumper> I have precise-updates/cloud-tools in apt
[21:29]  * thumper installs juju-local
[21:30] <thumper> sinzui: it seems weird to me that jam was able to boot trunk on aws but CI was not
[21:30] <sinzui> thumper, I think you have that reversed
[21:31] <thumper> ah... wat?
[21:31] <sinzui> thumper, http://ec2-54-84-137-170.compute-1.amazonaws.com:8080/
[21:31] <sinzui> CI can deploy fine
[21:31] <thumper> sinzui: what is the current state of the local provider CI tests?
[21:31] <thumper> which one is that
[21:31] <sinzui> jam reports that deploy is using the wrong tools. I have not seen that personally or in CI
[21:31] <thumper> local-deploy is red
[21:32] <sinzui> it has been broken for a few days. it is not "technically" aws as we have done this on canonistack too
[21:34] <rogpeppe> hmm, this is a regrettable error when the machine in question is down (actually, its instance has been destroyed): 2014-04-14 21:32:16 ERROR juju.cmd supercommand.go:299 some agents have not upgraded to the current environment version 1.19.0.3: machine-0
[21:35] <thumper> sinzui: how come the precise-updates cloud tools doesn't have 1.18?
[21:35] <rogpeppe> i think there should probably be a way to force that
[21:35] <rogpeppe> thumper: hiya
[21:35] <thumper> or perhaps another question would be
[21:36] <thumper> why doesn't my machine see it?
[21:36] <thumper> hi rogpeppe
[21:36] <thumper> rogpeppe: I'm wondering if the 'regrettable error' is an understatement for something?
[21:36] <rogpeppe> thumper: well, it means that the environment is now broken - i cannot upgrade it
[21:37] <sinzui> thumper, politics
[21:37] <rogpeppe> thumper: it is an understatement, yeah
[21:37] <thumper> rogpeppe: I suppose an error message that says "you're borked, sucks to be you" wouldn't be appreciated
[21:37] <rogpeppe> thumper: at least then i'd know it was deliberate...
[21:38] <sinzui> thumper, Ubuntu rejected 1.16.4 (they consider backup and restore a feature). jamespage is still trying to get 1.16.6 into archive for precise to ensure they can upgrade, then go to 1.18.0
[21:38] <rogpeppe> thumper: it's an interesting situation actually, because usually i'd be able to do destroy-machine --force, but in this case the machine in question is a state server
[21:38] <sinzui> thumper, we have never said users can upgrade from 1.16.3 to 1.18.0
[21:38] <thumper> sinzui: even in the cloud-tools?
[21:38] <sinzui> It's not our repo
[21:39]  * rogpeppe creates a bug
[21:40] <sinzui> thumper, I talked with a few people today about it. There is a chance 1.18.1 will become official in the archive when trusty is released and customers cannot upgrade to it
[21:40] <rogpeppe> hmm, actually maybe it's just a bug for me at this moment
[21:40] <thumper> sinzui: aargh... that is terrible
[22:01] <thumper> sinzui: ok, can confirm that 1.19.0 bootstraps the local provider on my precise machine
[22:01] <thumper> r2626
[22:01] <thumper> which I can see fails on CI
[22:02] <thumper> sinzui: so the big question now becomes, what is different?
[22:03] <sinzui> thumper, well.
[22:03] <sinzui> what changed in 	lp:juju-core r2593
[22:03] <sinzui> thumper, when CI slows I can run the deploy with --debug
[22:04] <thumper> sinzui: that was when the machine agent became responsible for setting up the mongo upstart script
[22:05] <thumper> sinzui: can you capture the mongo logs from the CI machine?
[22:05] <thumper> I wonder if this is the crash that dave had reported
[22:06] <thumper> sinzui: https://bugs.launchpad.net/juju-core/+bug/1306536
[22:06] <_mup_> Bug #1306536: replicaset: mongodb crashes during test <juju-core:Triaged> <https://launchpad.net/bugs/1306536>
[22:07] <sinzui> bugger, CI is trying the local job more than 5 times
[22:07] <sinzui> thumper, I think it is related since the logs report it http://ec2-54-84-137-170.compute-1.amazonaws.com:8080/job/local-deploy/1174/console
[22:10] <thumper> sinzui: there is also the mongo log file
[22:11] <thumper> sinzui: /var/log/upstart/juju-db-tim-local.log is my file
[22:11] <thumper> sinzui: replace <tim> for ci user, and <local> for the env name
[22:11] <sinzui> thumper, noted.
[22:11] <thumper> sinzui: that way we'll get any extra crash info
[22:12] <sinzui> the local upgrade test is still playing so I cannot start the deploy test
[22:12] <thumper> ack
[22:12] <thumper> surely if the upgrade test is running, then the local provider bootstraps?
[22:12] <thumper> or is it taking a long time to fail?
[22:13] <thumper> sinzui: perhaps also worth noting that my precise machine is running i386
[22:13] <sinzui> thumper, 1.18.1 is good. We can bootstrap with stable, we cannot upgrade to unstable
[22:13] <sinzui> We are amd64
[22:16] <thumper> sinzui: http://paste.ubuntu.com/7252245/
[22:16] <sinzui> I can bootstrap now
[22:17] <thumper> sinzui: where?
[22:17] <sinzui> On the CI machine
[22:17] <thumper> ?!
[22:17] <thumper> what changed?
[22:20] <sinzui> thumper, this is the log of my bootstrap attempt https://pastebin.canonical.com/108508/
[22:20] <sinzui> thumper, I didn't mean CI could pass bootstrap. I meant that the env was free for me to bootstrap
[22:23] <sinzui> thumper, I didn't get logs in a local dir or juju-jenkins-local
[22:24] <sinzui> thumper, maybe this config offends you: https://pastebin.canonical.com/108509/
[22:24]  * thumper looks
[22:24] <thumper> what is test-mode?
[22:25] <thumper> what is bootstrap-timeout in?
[22:25] <rogpeppe> oops, ensure-availability shouldn't have done *that*
[22:26] <sinzui> thumper, this is mongodb-server https://pastebin.canonical.com/108510/
[22:26] <thumper> sinzui: log?
[22:26] <sinzui> thumper, test-mode tells the charm store to not count the deployment
[22:26] <sinzui> no logs
[22:26] <sinzui> ^
[22:27] <sinzui> thumper, bootstrap failures don't seem to ever leave logs
[22:27] <thumper> sinzui: same mongo version
[22:27] <sinzui> hmm
[22:27] <sinzui> thumper, I can try to tail something in another terminal while I bootstrap
[22:28] <thumper> sinzui: can I log into that machine?
[22:29] <sinzui> sure
[22:30] <sinzui> thumper, ssh -i ./cloud-city/staging-juju-rsa jenkins@54.84.137.170
[22:30] <thumper> sinzui: I don't have that identity
[22:30] <sinzui> thumper, the key  is in lp:~sinzui/+junk/cloud-city
[22:30] <sinzui> which is shared with you
[22:30]  * thumper looks
[22:31] <sinzui> That also has the env for everything we test
[22:32] <thumper> sinzui: I'm in
[22:33] <sinzui> thumper, export GOPATH=/var/lib/jenkins/jobs/local-deploy/workspace/extracted-bin/
[22:33] <sinzui> thumper, export JUJU_HOME=~/cloud-city
[22:42]  * rogpeppe has an environment that seems reasonably HA
[22:44] <thumper> rogpeppe: \o/
[22:44] <rogpeppe> thumper: there are still... strangenesses
[22:45] <rogpeppe> thumper: but still, i destroyed the bootstrap instance and everything carried on much as usual
[22:45] <wallyworld> thumper: sinzui: i am going to land john's recent branch "soft-login-failure-1307450" which fixes an issue preventing upgrade from 1.18 to .19 from working
[22:46] <sinzui> rogpeppe, send me a brief summary of how you made it HA via the command line. I think I can reuse the backup restore test to instrument a failure of a machine. I expect with HA, juju status still works after the failure
[22:46] <sinzui> \o/ wallyworld
[22:46] <rogpeppe> sinzui: the requisite branches haven't landed yet
[22:46] <wallyworld> sinzui: well, i'm going by the description - there may be other issues :-)
[22:46] <rogpeppe> sinzui: there's one which isn't ready to be proposed yet
[22:47] <rogpeppe> sinzui: i can push the branch that i'm testing, if you like
[22:49] <rogpeppe> sinzui: essentially i did this: http://paste.ubuntu.com/7252375/
[22:49] <sinzui> rogpeppe, no rush. I am busy preparing for a release and trying to get juju 1.16.6 in the cloud archive
[22:49] <sinzui> rogpeppe, excellent. as I hoped
[22:55]  * rogpeppe grinds to a halt
[22:55] <rogpeppe> g'night all
[22:55] <waigani> night rogpeppe
[22:56] <rogpeppe> waigani: ttfn
[22:56] <waigani> congrats on HA
[22:56] <rogpeppe> it's not there yet!
[22:57] <waigani> congrats on *almost* HA ;)
[22:58] <sinzui> thumper, Can you read my debug-log draft at https://docs.google.com/a/canonical.com/document/d/1BXYrLC78H3H9Cv4e_4XMcZ3mAkTcp6nx4v1wdN650jw/edit
[23:17] <hazmat> why does local provider try to reverse dns on the ip addresses..
[23:18]  * hazmat wonders how he got dns-name: 176.52.236.23.bc.googleusercontent.com.
[23:20] <hazmat> ha.. yummy!
[23:20] <hazmat> rogpeppe, sinzui  if you want an additional tester for that.. send me some instructions
[23:24] <sinzui> hazmat, thank you
[23:27] <hazmat> smoser, you ever seen cloudinit on trusty hang.. i'm in a container.. and the last output is http://pastebin.ubuntu.com/7252494/ but it's blocking the rest of the container startup (ssh, etc).
[23:36] <wallyworld> sinzui: john's branch landed at r2627 so hopefully that might help the upgrade tests pass. we'll see i guess
[23:40] <smoser> hazmat, can you turn cloud-init debug on.
[23:40] <smoser> and get paste.
[23:41] <smoser> hazmat, in /etc/cloud/cloud.cfg.d/05_logging.cfg just turn 'handler_consoleHandler' to be
[23:41] <smoser> level=DEBUG
[23:41] <smoser> rather than
[23:41] <smoser> level=WARNING
[23:41] <smoser> you should see lots more output.
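Put together, smoser's change is a one-line edit to the handler section. Only the `level` line comes from the chat; the section name is as he gives it, and the other keys in that section (present in a stock cloud-init install) are left elided here:

```ini
# /etc/cloud/cloud.cfg.d/05_logging.cfg (fragment)
[handler_consoleHandler]
# ...other handler keys unchanged...
level=DEBUG
```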
[23:42] <smoser> not sure how you ran that though
[23:46] <sinzui> thumper, I think CI will start testing in 15 minutes. Do you want me to disable the local tests so that you can use the env as you like?
[23:46] <thumper> sinzui: yeah, for now would be good
[23:46] <thumper> just otp with alexisb
[23:48] <alexisb> yes sinzui I was distracting thumper, I am done he is all yours now
[23:48] <sinzui> thumper, local is all yours. Say when you are done so that I can re-enable the test
[23:48] <thumper> sinzui: ok
[23:48] <thumper> I'm poking around now
[23:52] <thumper> sinzui: um...
[23:52] <thumper> sinzui: hangout?
[23:52] <hazmat> smoser, ack
[23:52] <hazmat> smoser, it was an old version of trusty i was updating.
[23:53] <hazmat> i'll see if i can reproduce and log
[23:53] <sinzui> thumper, I can 40 minutes. My children want dinner
[23:54] <thumper> ok
[23:54] <thumper> sinzui: can in 40 minutes?
[23:54] <thumper> or only for 40 minutes
[23:54] <thumper> :)