[00:10] <orospakr> I just tried deploying a charm (using juju-core) that requires the oneiric series, but the VMs failed in the "no matching tools" state.  trying `juju sync-tools --all` just produced the same message.
[00:13] <orospakr> here's how the world looks for me right now: https://gist.github.com/orospakr/b3a1950fbe80c61446b4
[00:15] <davecheney> orospakr: we do not support oneiric
[00:15] <davecheney> sorry
[00:15] <davecheney> you won't find any tools
[00:16] <davecheney> yeesh
[00:16] <davecheney> we do not support oneiric
[00:16] <orospakr> huh, okay. thank you.
[00:17] <orospakr> silly question: when you juju bootstrap with juju-core, the bootstrap VM is running juju-core as well?
[00:22] <davecheney> the first machine, machine 0 is running the mongodb service (what we call the state server)
[00:22] <davecheney> it holds the knowledge of everything in that environment
[00:22] <orospakr> ah, I knew about that. I thought a jujud ran on it as well.
[00:23] <davecheney> orospakr: well a jujud is running on that machine
[00:23] <davecheney> i'm not sure which question you are asking
[00:26] <orospakr> does the first node also actively administer the other nodes, in response to your commands with the juju client, in addition to storing state in mongodb?
[00:31] <davecheney> orospakr: yes and no
[00:31] <davecheney> the first node also runs the provisioning agent, which is the one that talks to the provider and creates new machines (nodes) if required
[00:34] <orospakr> ah, that's the thing I'm thinking of. good. :)
[00:36] <orospakr> so, now I've got this other issue: I can't destroy the wedged service/units/machines, because they're all inaccessible to juju.
[00:36] <davecheney> orospakr: can you explain what you mean by inaccessible ?
[00:37] <orospakr> the oneiric-based service is in the "dying" state, and the units and machines alike are in the "pending" state.
[00:37] <orospakr> s/"dying" state/life parameter is "dying"/
[00:40] <davecheney> orospakr: have you tried juju terminate-machine ?
[00:40] <orospakr> aye. [~]$ juju destroy-machine 2 3 4
[00:40] <orospakr> error: no machines were destroyed: machine 2 has unit "couchbase/0" assigned; machine 3 has unit "couchbase/1" assigned; machine 4 has unit "couchbase/2" assigned
[00:40] <orospakr> oof, excuse me. that didn't paste well.
[00:42] <davecheney> i'd start with destroy-unit, destroy-server, etc
[00:42] <davecheney> services
[00:42] <davecheney> but at this point i'd just destroy the environment and start again if that is an option
[00:42] <orospakr> it is, but I'm trying to stick it out on things like this so I can be confident that I can deal with failures when I do use this in production. ;)
[00:44] <davecheney> i think destroy-unit and destroy-service take a --force
[00:44] <davecheney> but they will create dangling references
[01:20] <davecheney> m_3: ping
[02:37] <davecheney> why did the PA restart ? http://paste.ubuntu.com/5597298/
[04:06] <davecheney> m_3-droid: i'm a tit
[04:07] <davecheney> the problem isn't the HP endpoint, it is the number of concurrent connections to the mongo server running on machine/0
[04:08] <davecheney>      ├─mongod───828*[{mongod}]
[04:08] <davecheney>      ├─mongod───9*[{mongod}]
[04:08] <davecheney> oops
[04:08] <davecheney> https://bugs.launchpad.net/juju-core/+bug/1172110
[05:05] <davecheney> shit, we're using 3 mongo connections per machine
[05:18] <jam> davecheney: because of the different agents?
[05:20] <davecheney> i suspect so
[05:20] <davecheney> it isn't exactly 3x
[05:20] <davecheney> 450 connections for 156 machines
[05:21] <davecheney> the mgo driver's peer probing isn't helping
[05:21]  * davecheney really hates that feature
[05:28] <davecheney> jam: https://codereview.appspot.com/8931044/
[05:31] <davecheney> jam: Apr 24 05:30:51 juju-goscale2-machine-0 mongod.37017[9222]: Wed Apr 24 05:30:51 [conn24025] end connection 15.185.174.63:33866 (599 connections now open)
[05:31] <davecheney> Apr 24 05:30:53 juju-goscale2-machine-0 mongod.37017[9222]: Wed Apr 24 05:30:53 [initandlisten] connection accepted from 15.185.173.39:34560 #24049 (600 connections now open)
[05:31] <_mup_> Bug #24049: KDE systemsettings doesn't have a file association component <kde-systemsettings (Ubuntu):Fix Released> <https://launchpad.net/bugs/24049>
[05:31] <davecheney> ^ this is what you get from mongo
[05:31] <davecheney> compared to how much we log
[05:31] <davecheney> it's not a lot
[05:31] <davecheney> #1
[05:31] <davecheney> #1234
[05:31] <davecheney> #12345
[05:31] <_mup_> Bug #12345: isdn does not work, fritz avm (pnp?) <isdnutils (Ubuntu):Fix Released by doko> <https://launchpad.net/bugs/12345>
[05:31] <davecheney> #9999
[05:31] <davecheney> #10000
[05:31] <_mup_> Bug #10000: xserver-common: X crashed (signal 7) while scrolling in Mozilla <xorg (Ubuntu):Invalid by daniels> <xorg (Debian):Fix Released> <https://launchpad.net/bugs/10000>
[05:47] <davecheney> bzzt
[05:47] <davecheney> Apr 24 05:46:53 juju-goscale2-machine-0 mongod.37017[9222]: Wed Apr 24 05:46:53 [initandlisten] connection refused because too many open connections: 819
[05:47] <davecheney> Apr 24 05:46:53 juju-goscale2-machine-0 mongod.37017[9222]: Wed Apr 24 05:46:53 [initandlisten] connection accepted from 15.185.176.240:44746 #55756 (820 connections now open)
[05:47] <_mup_> Bug #55756: installer partman crash selecting swap partition <ubiquity (Ubuntu):Invalid> <https://launchpad.net/bugs/55756>
[05:47] <davecheney> Apr 24 05:46:53 juju-goscale2-machine-0 mongod.37017[9222]: Wed Apr 24 05:46:53 [initandlisten] connection refused because too many open connections: 819
[05:47] <davecheney> Apr 24 05:46:53 juju-goscale2-machine-0 mongod.37017[9222]: Wed Apr 24 05:46:53 [initandlisten] connection accepted from 15.185.177.196:51276 #55757 (820 connections now open)
[05:47] <_mup_> Bug #55757: "Send a mail" to a contact list adds only the first contact to "To:" <deskbar-applet:Fix Released> <deskbar-applet (Ubuntu):Fix Released by desktop-bugs> <https://launchpad.net/bugs/55757>
[05:59] <fwereade> morning everyone
[05:59] <fwereade> how screwed up is everything today?
[07:17] <davecheney> fwereade: we're looking at a hard limit of ~300 machines or 800 agents (the smaller of the two)
[07:17] <fwereade> davecheney, thanks, I just read your email -- definitely mongodb connections?
[07:17] <davecheney> fwereade: absolutely
[07:17] <davecheney> it says so in the mongo log
[07:18] <davecheney> fwereade: also, https://codereview.appspot.com/8931044/
[07:18] <davecheney> not critical
[07:19] <fwereade> davecheney, that is ludicrous, isn't it? surely that's not as far as it can go?
[07:19] <davecheney> the default limit is 80% of nfiles, which is 1024 on ubuntu
[07:19] <davecheney> these defaults can be changed
[07:20] <davecheney> during service deployment we average 3x as many connections as machines
[07:20] <davecheney> the number falls back to 2x after deployment
[07:20] <davecheney> (although that may not have been a successful deployment, as I restarted mongo)
[07:20] <davecheney> 2x sounds logical, 1 machine agent, 1 unit agent per service unit
[07:20] <fwereade> davecheney, yep, agreed
[07:20] <davecheney> the conn limit for mongo can be increased
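Raising that limit is a matter of configuration. The fragment below is a hypothetical illustration only: the job path and the numbers are assumptions, not taken from juju's actual packaging, though `limit nofile` is the standard upstart stanza and `--maxConns` is mongod's own connection cap.

```conf
# Hypothetical upstart job for the state server's mongod
# (e.g. /etc/init/juju-db.conf); path and values are illustrative.
limit nofile 65000 65000
exec /usr/bin/mongod --port 37017 --maxConns 10000
```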
[07:21] <davecheney> fwereade: the other problem is i'm seeing continual cpu spikes as 600 agents wake up
[07:21] <davecheney> and probe each mongodb server
[07:21] <davecheney> this is a very unhelpful feature
[07:22] <fwereade> davecheney, heh, I had been less worried about that in particular but it goes to show you never can tell:/
[07:23] <davecheney> at 2000 agents I estimate the mongodb server will be under constant polling pressure
[07:23] <fwereade> davecheney, *if* we had an internal API that would be a non-issue but it doesn't help us today
[07:23] <davecheney> which will probably fuck us with sockets in TIME_WAIT
[07:23] <davecheney> fwereade: yes, that is the logical solution
[07:23] <davecheney> federated mongodb might help
[07:23] <davecheney> but the polling logic will still be a massive burden
[07:24] <fwereade> davecheney, yeah, doesn't feel like it'd really address the issue
[08:26] <rogpeppe> mornin' all
[08:28] <dimitern> rogpeppe: morning
[08:28] <rogpeppe> dimitern: hiya
[08:29] <rogpeppe> fwereade: hope you've recovered a bit!
[08:30] <fwereade> rogpeppe, yeah, more or less, although I remain baffled at the actual state of play -- I am pondering dave's scale tests for now and hoping that mramm will wake up in an hour or two and bring us the gift of clarity
[08:31] <rogpeppe> fwereade: unfortunately my network connection is still borked, so i missed the kanban meeting yesterday, so i don't really know
[08:34] <rogpeppe> fwereade: i don't think i've seen anything about dave's scale tests other than the conversation just above
[08:34] <dimitern> rogpeppe: I think nobody knows for sure, except probably mgz
[08:35] <fwereade> rogpeppe, I had a brief chat with him this morning, it seems we are hitting mongo connection limits at around 300 machines
[08:35] <rogpeppe> fwereade: interesting. i'm not entirely surprised actually.
[08:36] <rogpeppe> fwereade: need to get that internal API done. and HA on that.
[08:36] <fwereade> rogpeppe, yep
[08:37] <fwereade> rogpeppe, considering possible mitigations in the meantime
[08:37] <rogpeppe> fwereade: anything particular in mind?
[08:37] <fwereade> rogpeppe, still going through internal sanity-vetting ;p
[08:40] <dimitern> hmm.. cmd/juju tests running time seems to have improved slightly
[08:47] <fwereade> mramm, heyhey
[08:49] <mramm> heyhey
[08:50] <mramm> I am trying to get caught up on the packaging issues
[08:50] <fwereade> mramm, great
[08:50] <mramm> I thought things were moving smoothly until late yesterday when antonio informed me that the server guys did not think it would go in
[08:51] <mramm> I still don't have a clear picture from them of what needs to happen, just some hand waving about "issues"
[08:52] <fwereade> mramm, my understanding had been that monday was the actual razor-sharp cutoff in any case
[08:52] <mramm> well, that was my understanding too
[08:53] <mramm> but I thought we'd given them something by then, and I heard from them yesterday that the razor-sharp cutoff is actually today
[08:53] <fwereade> mramm, but, regardless, if there is anything I can do to help I would be happy to; and if you do manage to glean some measure of clarity I, and others, would be most grateful for it
[08:54] <mramm> looks like jamespage is on the case
[08:55] <mramm> if you can join #server on canonical IRC that would be helpful
[09:13] <mramm> so anybody that can help jamespage in some way today gets an extra gold star from me ;)
[09:25] <davecheney> good evening
[09:25] <davecheney> could I draw your attention to
[09:25] <davecheney> https://bugs.launchpad.net/juju-core/+bug/1172122
[09:25] <_mup_> Bug #1172122: state/presence: FAIL: presence_test.go:253: PresenceSuite.TestWatchPeriod <juju-core:New> <https://launchpad.net/bugs/1172122>
[09:25] <davecheney> and
[09:25] <mramm> so, on subject of dave's e-mail
[09:25] <davecheney> https://bugs.launchpad.net/juju-core/+bug/1172121
[09:25] <_mup_> Bug #1172121: environs/maas: multiple test failures <juju-core:Confirmed> <https://launchpad.net/bugs/1172121>
[09:25] <mramm> haha
[09:26] <davecheney> mramm: i can leave again if you want to talk about me
[09:26] <davecheney> i don't mind, i like the idea of being popular
[09:26] <davecheney> also, https://docs.google.com/a/canonical.com/document/d/1p_OzWxqxaXalHBI3ohkUsB9_iQBPSGWwM-qODm5FSbI/edit#
[09:27] <mramm> haha
[09:27] <mramm> so, I think we should talk about the internal API stuff sooner rather than later
[09:27] <mramm> given that we are hitting scalability limits now
[09:29] <fwereade> davecheney, did you update gomaasapi?
[09:29] <davecheney> mramm: SGTM
[09:29] <davecheney> fwereade: probably not
[09:29] <fwereade> davecheney, I think that resolves it
[09:29] <davecheney> right, i'll check that tomorrow
[09:30] <fwereade> davecheney, I'll close it then, reopen if I'm wrong please
[09:32] <davecheney> SGTM
[09:32] <davecheney> what about state/presence ?
[09:33] <fwereade> davecheney, I think I have no option but to do a deep dive there and try to figure out WTF is going on
[09:33] <fwereade> davecheney, occasional failures from there have always been a thing, but not one that ever quite rose high enough to be looked at properly
[09:33] <davecheney> has anyone else seen that problem ?
[09:35] <fwereade> davecheney, it doesn't look entirely unfamiliar, but I thought we had a bug for it already; apparently not
[10:10] <mramm> so, can somebody help jamespage out with updating our "release" package to build from local source rather than latest trunk, so the source gets included in the package
[10:10] <mramm> we can pull the local source from trunk and then build
[10:10] <jamespage> mramm, forget the package
[10:10] <jamespage> I just want a release tarball of juju-core 1.10.0 that contains everything that is juju-core 1.10.0 aside from the packaging
[10:11] <mramm> ahh
[10:11] <mramm> cool
[10:12] <mramm> so rogpeppe, TheMue, fwereade, jam, dimitern:  can one of you get that for jamespage now?
[10:12] <rogpeppe> mramm: ok, i'll do that
[10:13] <mramm> rogpeppe: thanks!
[10:13] <rogpeppe> jamespage: presumably not including binaries, right?
[10:14] <jamespage> rogpeppe, you got it
[10:14] <TheMue> ah, just wanted to ask too
[10:18] <davecheney> if you fancy a break to look at it
[10:21] <mramm> morning mgz!
[10:23] <mgz> hey! seen your email.
[10:24] <jamespage> mgz, hey!
[10:26] <mgz> so, there are a couple of issues for me:
[10:27] <mgz> the update-alternatives bits didn't get merged into dave's packaging branch, but he did then add a manpage, so that needed updating
[10:28] <mgz> I don't have rights to upload to the public bucket on ec2, nor have I done what dave normally does for releases
[10:29] <mgz> and there's a change in 1.10 working around the updates-during-raring to upstart bug that I don't think we want to release with
[10:30] <jamespage> mgz, I've merged all the various bits of packaging - lp:~james-page/ubuntu/raring/juju-core/1.10
[10:30] <jam> mgz: I have some questions for you as well if/when you have some time.
[10:30] <mgz> jam: sure
[10:30] <jamespage> mgz, and raised a FFe - 1172215
[10:30] <jamespage> bug 1172215
[10:30] <_mup_> Bug #1172215: [FFe] Please include juju-core 1.10.0 in Ubuntu 13.04 <Ubuntu:New> <https://launchpad.net/bugs/1172215>
[10:31] <jam> mgz: 1) it is known that 1.10.0 is in the ppa but not uploaded to ec2, so you can't actually bootstrap (no tools found)
[10:31] <mgz> those are the only things I'm aware of.
[10:32] <mramm> davecheney: can you help out with the tools upload stuff if you are still around?
[10:32] <jam> 2) I still can't ssh to the original goose instance, as near as I can tell chinstrap isn't letting ssh get to lcy01 (it is working to lcy02, but I get No route to host trying to get to 10.55.60.94)
[10:32] <mgz> any of the core guys should have rights, but no one responded when I poked the other day
[10:33] <mgz> so maybe they don't?
[10:33] <davecheney> mramm: sure, mgz do you want the creds ?
[10:33] <jam> 3) I tried starting a new tarmac bot with similar config and juju-core doing the bootstrap to lcy02, but it seems charms:precise/tarmac uses puppet, and that just doesn't work now.
[10:33] <davecheney> or I can do the push if you can point me to the deb in the archive
[10:33] <mramm> giving mgz creds makes sense to me
[10:34] <mramm> lp:~james-page/ubuntu/raring/juju-core/1.10 looks like the proposed release
[10:34] <davecheney> sure, the only reason I hesitate is they belong to gustavo
[10:34] <mramm> yea
[10:34] <davecheney> so, be careful with 'em
[10:34] <jam> davecheney: load testing time!
[10:34] <jam> how many CC4.xxlarge can we run? :)
[10:34] <mramm> we ultimately need to take that over and make it something owned by a team, not a person :/
[10:35] <jam> mramm: is it possible to change ownership of an s3 bucket?
[10:36] <mramm> not easily
[10:36] <davecheney> mgz: ceheck your make
[10:37] <davecheney> mramm: jam much easier to change the source in the code
[10:37] <davecheney> mgz: check your mail
[10:37] <mramm> davecheney: right
[10:37] <mramm> you can delete it, and then try to pick the name up under a new account
[10:38] <mramm> but 1) anybody can pick it up when it becomes available
[10:38] <mramm> and 2) it can take up to 24 hours to go back into the available names pool
[10:38] <mgz> jam: one funny thing I noticed was there are a couple of goosebot instances, one of which is shutoff, on lcy01. it's possible the routing is broken just for that host.
[10:39] <mgz> jam: the tarmac puppet charm not working with juju-core is a bug here I guess, unless it does something very bogus
[10:39] <jam> mgz: the shutoff one is the python-juju bootstrap node
[10:40] <jam> I wasn't able to start and get to that one either, but didn't really need to.
[10:40] <mgz> there are two shutoff ones...
[10:42] <TheMue> mramm: that's why I suggest a dns name like tools.juju.ubuntu.com in our code to point to ANY bucket we want (or other server)
[10:43] <rogpeppe> jamespage: i'm pushing a source-only branch containing all the latest source and its dependencies.
[10:43] <mramm> TheMue: file a kanban card for it and add it to tomorrow's agenda ;)
[10:44] <davecheney> TheMue: https://docs.google.com/a/canonical.com/document/d/1p_OzWxqxaXalHBI3ohkUsB9_iQBPSGWwM-qODm5FSbI/edit#
[10:44] <TheMue> mramm: yep, will do
[10:44] <rogpeppe> jamespage: unfortunately my network upload speed is outrageously slow at the moment (about 50Kbits/s) so it will take a while
[10:44] <TheMue> davecheney: agenda is already edited, for a different topic ;)
[10:44] <jamespage> rogpeppe, ack
[10:45] <mgz> jam: seems 10.55.60.94 is indeed unreachable, worth raising with canonistack support
[10:47] <jam> mgz: is that ask web-ops and then get it escalated ?
[10:47] <jam> mgz: do you know if you have to have the puppet agent running to have the script run? (the default in /etc/default/puppet is for the agent not to run)
[10:47] <mgz> for the record, the packaging branch is at lp:~james-page/ubuntu/raring/juju-core/1.10.0 with the trailing .0
[10:51] <rogpeppe> jamespage:  lp:~rogpeppe/+junk/juju-1.10.0-source-only
[10:51] <mgz> jamespage: your packaging branch looks good to me
[10:51] <jamespage> rogpeppe, that really need to be somewhere official
[10:51]  * TheMue is at lunch
[10:51] <jamespage> like lp:juju-core/1.10.0
[10:51] <rogpeppe> jamespage: ah
[10:52] <rogpeppe> i'm worried i might start stepping on someone's toes here - i'm not usually involved with this stuff
[10:52] <mgz> needs a different name if you're putting it under the juju-core project
[10:52] <mgz> as it's packaging
[10:52] <mgz> I can push it somewhere though
[10:53] <jamespage> mgz, no - thats the point
[10:53] <jamespage> I want an upstream release of juju-core
[10:53] <jamespage> forget the packaging
[10:53] <mgz> it's not juju-core 1.10.0
[10:53] <mgz> it's that plus all the deps
[10:53] <jamespage> yes
[10:53] <mgz> okay.
[10:53] <jamespage> juju-core plus the deps that the juju-core dev team say are good for 1.10.0
[10:53] <rogpeppe> basically i fetched all the deps from scratch and removed the .bzr and .hg directories
[10:53] <jamespage> mgz, I want to switch the packaging away from 3.0 (native) - its not required
[10:54] <mgz> to quilt?
[10:54] <jamespage> mgz, yes
[10:54] <jamespage> native rarely makes sense
[10:55] <mgz> I can live with that
[10:55] <jam> mgz: I wonder if you could grab them and use 'bzr-upload' to create light dirs so that it is easy to update them to newer tools.
[10:56] <mgz> so, issue #1 is resolved by james' branch, issue #2 is okay now I have the ec2 creds, last question is if we carry the cloud-init hack or not...
[10:58] <mgz> jam: we could certainly do something more elegant, for now I'm happy with just dumping the code and adding a bunch of fresh unrelated stuff to the repo on launchpad
[10:59] <mramm> be back in a few -- getting breakfast and etc.   Ping me if needed.
[11:00] <mgz> jamespage: probably a question for you as much as any of the juju guys, see <https://codereview.appspot.com/8648047/>
[11:01] <jam> mgz: did you still want the hp-cloud instance runnng?
[11:02] <mgz> jam: nope, I fixed my script so I can restart it myself as needed
[11:03] <jam> mgz:except you couldn't reach http, right?
[11:03] <jam> non-http because of the non-standard port
[11:03] <jam> for keyauth
[11:03] <mgz> well, "myself", provided I remember to do it in advance, otherwise without manual intervention (lesson: use sed -r when being fancy)
[11:18] <rogpeppe> my pesky phone line seems like it might be out for another whole week
[11:20]  * rogpeppe didn't know about sed -r
[11:20] <rogpeppe> i always get bitten by the fact that standard sed doesn't do "proper" regexps, 'cos i'm used to plan 9's sed which does them by default
[11:21] <jamespage> mgz, rogpeppe: which of you two are working on preparing the juju-core 1.10.0 snapshot release and sticking it something official?
[11:21] <jamespage> not clear from my backscroll
[11:22] <rogpeppe> mgz: given my (lack of) current bandwidth, you might be best doing that
[11:22] <jam> rogpeppe: still no home internet? ouch
[11:23] <rogpeppe> jam: yeah. i just talked with the phone company, and they have no idea when it'll be resolved. it's been out since last tues
[11:23] <jam> rogpeppe: for sed, you mean '\d' vs [[:digit:]] ?
[11:23] <rogpeppe> jam: well, there is *some* internet, but the upload speed is stupidly bad, and i've been getting 3-5s ping response times
[11:24] <rogpeppe> jam: no, i mean (foo|bar)
[11:28] <mgz> jamespage: I can push the source roger put up to somewhere... and I guess we can just leave the other questions for now
[11:30] <mgz>  lp:~juju-core/ubuntu/raring/juju-core/1.10.0-source okay?
[11:31] <jam> mgz, dimitern: standup?
[11:31] <jamespage> mgz, not sure why we need the ubuntu/raring prefix
[11:31] <jamespage> its not ubuntu or raring - its juju-core 1.10.0
[11:31] <jam> jamespage: because ~juju-core/juju-core is completely unrelated code.
[11:31] <jam> well, "unrelated"
[11:31] <jam> in that it is just the base tree, not all deps
[11:32] <jamespage> well like I said - I'm good with a tarball published on launchpad.net
[11:32] <jamespage> like we do for juju
[11:33] <mgz> hm, that would be ideal really
[11:33] <jamespage> i'd rather have it that way - otherwise I have to cut my own tarball still
[11:33] <jamespage> which is not always deterministic
[11:33] <jam> mgz: mumble?
[11:34] <mgz> I'm there.
[11:41] <mgz> so, we don't have a 1.10.0 milestone yet? ...ah, the fun is it's half the 2.0 milestone
[11:52] <wallyworld_> fwereade: want to join us on mumble?
[12:02] <mgz> jamespage: https://launchpad.net/juju-core/1.10/1.10.0 has tarball based on the rogpeppe branch
[12:02] <fwereade> wallyworld_, actually, could we have a quick hangout for 15 mins?
[12:02] <mgz> I'll tidy up the rest of the release stuff for now
[12:02] <wallyworld_> sure
[12:02] <fwereade> wallyworld_, I forgot our differing interpretations of wednesday
[12:02] <mgz> blast, no hangout for me
[12:02] <jamespage> mgz, ta muchly
[12:03] <wallyworld_> fwereade: if mgz can't do hangout, can you do mumble?
[12:03] <fwereade> wallyworld_, er maybe, how can I set it up in 2 minutes or less?
[12:03] <rogpeppe> mgz: you might want to include this info somewhere around the place: http://paste.ubuntu.com/5598150/
[12:03] <wallyworld_> fwereade:  not in 2 mins i don't think :-(
[12:04] <rogpeppe> mgz: it's the full list of the revision numbers
[12:04] <wallyworld_> mgz: can you reduce your bandwidth for a hangout?
[12:04] <mgz> fwereade: `sudo apt-get install mumble` then <https://wiki.canonical.com/StayingInTouch/Voice/Mumble> but hangout safer, go for that
[12:04] <fwereade> mgz, cheers, I'll set that up after this then
[12:04] <mgz> wallyworld_: issue is google don't provide arm debs for their binary blobs
[12:04] <wallyworld_> mumble can be fiddly
[12:04] <rogpeppe> mgz: hmm, except the revision of the go tree itself. that should probably be included too, i suppose, although we're not including it in the tarball
[12:05] <rogpeppe> mgz: maybe we should
[12:05] <rogpeppe> mgz: although i suppose we're building against a known go revision
[12:05] <mgz> I can, if needed, cheat and re-up the tarball with fixes
[12:05] <fwereade> wallyworld_, mgz: anyone else I should invite? jam?
[12:05] <mgz> I'm not certain the current dir layout makes sense for instance
[12:05] <wallyworld_> fwereade: he is afk for a little bit, will join when he gets back
[12:05] <mgz> fwereade: jam was interested, but has to go off now
[12:06] <wallyworld_> so invite him
[12:06] <rogpeppe> mgz: i just did it so the root of the tree could be used as $GOPATH
[12:06] <fwereade> mgz, I need to go in 10 mins but I think we can cover some stuff usefully
[12:06] <mgz> yeah, go go, you and ian
[12:06]  * dimitern bbin1h
[12:36] <jamespage> mramm, mgz: I noticed the debian/copyright file was not complete in the packaging - working on that now
[12:36] <jamespage> which means I have to document the copyright and license for all of the bundled projects as well
[12:36]  * jamespage sighs
[12:40] <mgz> ...sorry about that
[12:41] <jamespage> mramm, mgz: OK - I have two problems
[12:41] <jamespage> goose has no explicit Copyright holder
[12:41] <jamespage> and lpad has neither a Copyright holder or license
[12:45] <mgz> fwereade: sorry for the incoming launchpad email surge, you had a bunch of fixed bugs targeted at 2.0 that you actually fixed long ago
[12:45] <mgz> ...is lpad actually a dependency?
[12:45] <mgz> or should we remove it from the tarball?
[12:46] <mgz> goose is fixable
[12:46] <mgz> we've had no external contributions, the copyright holder is just canonical
[12:47] <mgz> jamespage: can you try building what you have as well, to see if tests etc are all fine? I'll upload a -1 tarball with any fixes needed
[12:48] <jamespage> mgz, builds just fine - this is just a distro copyright/license thing
[12:48] <jamespage> mgz, I don't see any tests executing fwiw
[12:49] <mgz> jamespage: where should we put the copyright holder if we're not doing per-file licencing?
[12:49] <mgz> jamespage: I doubt they're run as part of the packaging
[12:50] <mgz> we should probably add that, but not now
[12:50] <jamespage> mgz, good question - most of niemeyer's projects have it in LICENSE
[12:51] <niemeyer> mgz, jamespage: I tend to do it per file as well
[12:51] <niemeyer> mgz, jamespage: and certainly on a LICENSE file or similar
[12:51] <jamespage> niemeyer, I spotted
[12:51] <jamespage> :-)
[12:51] <mgz> in juju-core that seems to just be a copy of the agpl
[12:51] <jamespage> mramm, mgz: LICENSE/LICENCE or suchlike is sufficient - every file is best practice
[12:52] <niemeyer> jamespage: +1
[12:52] <niemeyer> Even because some projects (e.g. goyaml) do have mixed licensing
[13:06]  * jamespage looks at goyaml again
[13:07] <mgz> hm, and now canonistack is refusing to talk to me
[13:20] <mgz> okay, I can't build or upload anything till canonistack is back, so having lunch
[14:51] <mgz> jamespage: okay, have written a hacky script for rolling up tarball
[14:51] <jamespage> mgz, yay for hacky scripts ;-)
[14:51] <mgz> I just need to know, exactly, what change you want me to make for the goose license thing
[14:51] <jamespage> it was missing copyright - hrm
[14:51] <mgz> I currently have COPYING and COPYING.LESSER in the branch
[14:52] <mgz> I don't really want to modify those, as they're just the text from gnu
[14:52] <mgz> not that I know why we have both...
[14:53] <mgz> modifying every darn file in the tree is also not sane, though it's what the gpl generally wants (header on each source file)
[14:53] <jamespage> mgz, indeed - I think just adding a LICENSE file detailing which of those two licenses its licensed under and details of the C holder would be OK
[14:53] <jamespage> for now anyway
[14:53] <jamespage> infact I'm happy to leave what you have in tarball as-is - so long as there is a commit in the bzr branch with the details on it - I can refer to that with a comment
[14:54] <jamespage> time is of the essence and all that
[15:06] <mgz> okay, done, uploading
[15:10] <mgz> this really does need the tests run on it though, as it's pristine from export, rather than copied files from trees that have been tested together
[15:11] <mgz> jamespage: (and everybody else) https://launchpad.net/juju-core/1.10/1.10.0/+download/juju-core_1.10.0-1.tar.gz
[15:22] <jamespage> mgz, wanna check my branch again? just re-cut using that tarball
[15:29] <rogpeppe1> mgz: i ran the basic tests on it, but not live tests.
[15:29] <rogpeppe1> mgz: BTW, i'm pretty sure that davecheney builds from pristine for each release
[15:36] <mgz> jamespage: looking, it seems good, one question for the others
[15:36] <mgz> go.net has lost the html/ package, is that something we used in any way?
[15:41] <m_3> TheMue: I'm gonna turn that into a blog post though.. so give it a couple of days and I'll have a version that's a bit easier to read
[15:42] <mgz> seems safe, I needed exp/html from go trunk long ago, but I think that was for rietveld not juju-core anyway
[15:48] <TheMue> m_3: Great, that's what we need. Thx for your effort.
[15:49] <mgz> jamespage: seems there were some changes to the cert code in your old cut of the source which I'm not clear on the origin of...
[15:50] <mgz> what's there now is all that's been on trunk as far as I can see, and looks okay to me
[15:51] <mgz> anyway, I shall build out of that branch, and upload to public bucket
[15:52] <mgz> I guess we may also want to change the recipe to use this, and rebuild what's in the ppa?
[15:57] <rogpeppe1> simple git question for someone: what's the equivalent of bzr revision-info in git?
[15:59] <mgz> hm, something is not happy
[16:06] <TheMue> rogpeppe1: took a look at git show?
[16:06] <rogpeppe1> TheMue: i think that "git rev-parse HEAD" is what i need
[16:07] <TheMue> rogpeppe1: oh, i'll take a look
[16:07] <rogpeppe1> TheMue: it seems there's no linear idea of commit history in git, unlike hg and bzr. is that right? i.e. no numeric log numbering.
[16:07] <rogpeppe1> TheMue: thanks. i've never used git in anger.
[16:09] <TheMue> rogpeppe1: i've just started for private projects, used hg before
[16:11] <TheMue> rogpeppe1: and yes, it seems to use large numbers like uuids and commit, tree, parent relations
[16:11] <jamespage> mgz, OK- just got a ftbfs on arm
[16:11] <rogpeppe1> jamespage: interesting. what's the error?
[16:12] <jamespage> apologies for the paste:
[16:12] <jamespage> launchpad.net/goyaml
[16:12] <jamespage> # launchpad.net/goyaml
[16:12] <jamespage> src/launchpad.net/goyaml/goyaml.go:89: undefined: newDecoder
[16:12] <jamespage> src/launchpad.net/goyaml/goyaml.go:90: undefined: newParser
[16:12] <jamespage> src/launchpad.net/goyaml/goyaml.go:135: undefined: newEncoder
[16:12] <rogpeppe1> jamespage: ah, i think i know what the issue might be. i wonder if cgo is disabled/not working on arm
[16:13] <jamespage> rogpeppe1, how do I check?
[16:13] <rogpeppe1> jamespage: are you getting the error at a command prompt?
[16:13] <rogpeppe1> jamespage: oh jeeze
[16:13] <rogpeppe1> jamespage: this is an old version of go we're using
[16:14] <rogpeppe1> jamespage: i'm not surprised actually
[16:14] <jamespage> 1.0.2 as in raring
[16:14] <rogpeppe1> jamespage: yeah. hmm.
[16:14] <rogpeppe1> jamespage: davecheney's the man for knowing about go-on-arm stuff.
[16:15] <rogpeppe1> jamespage: if you type "go env" on the arm box, it should have an output line saying something like: CGOENABLED="1"
[16:16] <rogpeppe1> sorry, CGO_ENABLED
[16:16] <jamespage> rogpeppe1, CGO_ENABLED="0"
[16:16] <jamespage> yikes
[16:16] <rogpeppe1> jamespage: right, so that's the issue
[16:16] <jamespage> rogpeppe1, OK - we'll drop arm for juju-core in raring
[16:16] <rogpeppe1> jamespage: +1
[16:16] <rogpeppe1> jamespage: we've done no testing on arm
[16:16] <jamespage> mramm, ^^
[16:17] <mramm> jamespage: no arm is fine
[16:17] <mramm> jamespage: arm support was explicitly pushed back to post 13.04 anyway
[16:18] <mgz> well, nearly very smooth: <http://paste.ubuntu.com/5598730/>
[16:18] <jamespage> mramm, OK
[16:18] <rogpeppe1> jamespage: from http://code.google.com/p/go-wiki/wiki/GoArm : "currently the development version of Go includes better support for linux/arm, including full cgo support, than Go 1.0."
[16:19] <mramm> rogpeppe1:  correct, git has no version numbers, just commits (identified by the hash of the commit contents)
[16:19] <rogpeppe1> mramm: thanks
[16:19] <mramm> commits define their parent commit, and you therefore get a graph of commits
[16:20] <mramm> which git walks to show you a timeline
[16:20] <mgz> and yeah, I can't build juju-core on this box; we don't work on arm currently, but we were also not targeting it
[16:20] <rogpeppe1> mramm: that's true of all of those systems i think - but the linear history is quite nice for referring to a given trunk
[16:20] <rogpeppe1> mgz: you probably could if you used go tip
[16:21] <mramm> rogpeppe1: well, git does not attempt to pretend that there is a linear history
[16:21] <rogpeppe1> mgz: i'd be interested to find out whether it works actually
[16:21] <rogpeppe1> mramm: yeah.
[16:21] <mgz> I'll add that to my list of fun-time things to try :)
[16:21] <rogpeppe1> hmm, i wonder if my san box upstairs would be up to the task...
[16:21] <mramm> in a fully distributed system I have commits in a branch you don't have and vice versa, so linear history is impossible to get right
[16:22] <rogpeppe1> mramm: yeah, but hg and bzr both pretend quite well :-)
[16:25] <mramm> linus was very opposed to pretending
[16:26] <rogpeppe1> mramm: another stupid git question: how do i update the current working tree to a given rev id?
[16:27] <rogpeppe1> mramm: git pull?
[16:27] <fwereade> whoops, not been focusing on this: https://codereview.appspot.com/8939043
[16:27] <fwereade> rogpeppe1, quick look for form's sake please?
[16:27] <mramm> git checkout <sha>
[16:27] <rogpeppe1> fwereade: LGTM trivial
[16:28] <fwereade> rogpeppe1, cheers
[16:28] <rogpeppe1> mramm: thanks
[16:28] <mramm> git pull will grab objects (commits and trees) from the remote repo and pull them down
[16:28] <mramm> git checkout will switch the cwd
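A self-contained sketch of the `git checkout <sha>` half of that answer (toy repo, no remote, so the `git pull` half is only described in a comment):

```shell
tmp=$(mktemp -d)
cd "$tmp"
git init -q
g() { git -c user.name=demo -c user.email=demo@example.com "$@"; }
g commit -q --allow-empty -m "one"
old=$(git rev-parse HEAD)
g commit -q --allow-empty -m "two"
# "git pull" would fetch new objects from a remote; "git checkout <sha>"
# just moves the working tree to the named commit (detached HEAD).
git checkout -q "$old"
git rev-parse HEAD
```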
[16:29] <fwereade> ok, I'm nearly up to date on my reviews, and everyone can start committing their approved bits and pieces to trunk now
[16:29] <fwereade> will try to swing by again later -- if not, ttyall tomorrow
[16:30] <fwereade> dimitern, if C+L sleep early I might ping you for a late beer/catchup re upgrade-charm
[16:30] <fwereade> dimitern, otherwise, maybe 20 mins before the meeting tomorrow?
[16:32] <dimitern> fwereade: sgtm
[16:32] <dimitern> fwereade: when you can
[16:40] <jamespage> mgz, rogpeppe1, fwereade: you guys happy with what we are proposing for release into raring?
[16:40] <rogpeppe1> jamespage: yup, sgtm
[16:40] <rogpeppe1> jamespage: assuming it's essentially still the sources i put together
[16:40] <jamespage> rogpeppe1, it's the re-cut sources mgz did
[16:41] <rogpeppe1> mgz: what did you change?
[16:41] <jamespage> lpad got dropped
[16:41] <jamespage> and some licensing clarification around goose
[16:42] <dimitern> why is juju depending on lpad anyway? it's only used by lbox when interacting with LP
[16:43]  * jamespage shrugs
[16:43] <jamespage> anyway I just uploaded to raring - Daviey and slangasek are lined up for review
[16:44] <dimitern> jamespage: tyvm!
[16:47] <mramm> jamespage: awesome work!
[16:48] <jamespage> let's not get too excited - it's just in the queue!
[16:48] <mramm> thank you very, very much
[16:48] <mramm> haha
[16:48] <mramm> understood
[16:48] <mramm> but it is progress
[16:48] <jamespage> mramm, action prior to next release - get some copyright/license headers in all source across the board please!
[16:48] <Makyo> fwereade, I got your review comments in the middle of submitting.  Would you be alright with a separate branch with those implemented?
[16:49] <mramm> jamespage: will do
[16:49] <Makyo> Er, too late :/
[16:51] <rogpeppe1> dimitern: it's actually just the store code that depends on lpad
[16:52] <rogpeppe1> dimitern: i included it just so i could do go test ./... without errors
[16:52] <dimitern> rogpeppe1: it's about time to separate the store from juju-core now
[16:52] <rogpeppe1> dimitern: it's happened before and it will happen again :-)
[16:53] <dimitern> rogpeppe1: i certainly hope it'll be soon :)
[16:54] <dimitern> Makyo: it's never late for another branch ;)
[16:54] <Makyo> dimitern, More branches, more branches!
[17:28] <Makyo> Getting conflicting reviews on r1192, and I think it should be reverted.  I've never been successful at that.  Can someone help me out?
[17:46] <fwereade> Makyo, sure, that's fine
[17:47] <fwereade> Makyo, sorry about that
[17:47] <fwereade> Makyo, no need to revert if it's already in, just note that you will update in the review please
[17:48] <mattyw> what's the best way for a charm to work out if it's been deployed using py-juju or go-juju other than checking for location of the agent.conf files?
[17:49] <Makyo> fwereade, even with rogpeppe1's comments?
[17:49] <fwereade> Makyo, ah sorry, just saw rog's
[17:49] <fwereade> Makyo, 2 mins thinking time
[17:50] <rogpeppe1> mattyw: JUJU_CONTEXT_ID is a reasonable indication, i think.
[17:50] <rogpeppe1> Makyo: sorry for tardy review - i forgot to submit the comments earlier
[17:51] <Makyo> rogpeppe1, That's okay, they're definitely necessary.  I think reverting might be the best choice, though, to make sure things aren't half-right.
[17:53] <fwereade_> Makyo, yeah, it needs more thought -- sorry, I needed to look up what we did with deploy in that case
[17:55] <fwereade_> Makyo, I think we should keep charm-adding and charm-setting separate, essentially as they are in deploy
[17:58] <Makyo> fwereade_, alright, looking through deploy...
[17:59] <fwereade_> Makyo, trying to figure it out myself
[17:59] <fwereade_> Makyo, rogpeppe1: how do charms get into state from the GUI ie via the API?
[17:59] <rogpeppe1> fwereade_: currently they don't
[18:00] <rogpeppe1> fwereade_: we're restricted to charms in the charm store
[18:00] <mgz> jamespage: what was the reason for juju-core rejection? ...I wish there was something recorded on launchpad
[18:00] <Makyo> fwereade_, rogpeppe1, Correct, no support deploy/upgrade on local yet, though we can see them once they're deployed.
[18:00] <fwereade_> rogpeppe1, ah ok, so the client just has to pass a url known to the charms store?
[18:00] <rogpeppe1> fwereade_: yes
[18:01] <rogpeppe1> fwereade_: my vague plan is to have an optional extra call to upload a charm (probably a sequence of calls so we don't bundle up MB in one json message)
[18:01] <rogpeppe1> fwereade_: it may even end up being better as a PUT
[18:01] <fwereade_> Makyo, rogpeppe1: ok, then are we ok just putting the same restriction on upgrades  for now? ie ServiceName, CharmURL, Force?
[18:02] <Makyo> fwereade_, rogpeppe1, sounds good to me.
[18:02] <rogpeppe1> fwereade_: i think so
[18:02] <fwereade_> Makyo, rogpeppe1: I'm fine punting on local charms today at least, so long as we punt consistently
[18:02] <rogpeppe1> fwereade_: yup
[18:03] <rogpeppe1> i'm done for the day
[18:03] <Makyo> Still on to revert 1192?
[18:03] <rogpeppe1> see y'all tomorrow
[18:03] <Makyo> Later.
[18:04] <gary_poster> Hi all.  I'd like to announce the GUI's compatibility with juju core, but the Raring Juju from the devel PPA fails for me like this: http://pastebin.ubuntu.com/5598967/
[18:04] <gary_poster> Is that known?  Is there some other, better way to suggest that people try out the GUI on juju core?  I didn't figure installing juju from source was the right sales pitch :-)
[18:19] <jamespage> mgz, release team said it was too late
[18:29] <ahasenack> gary_poster: so now --upload-tools is failing too?
[18:30] <gary_poster> ahasenack, as in the pastebin, bootstrap --upload-tools succeeds temporarily, but then I can't deploy anything. :-/
[18:30] <ahasenack> oh, ok, I missed the deploy command
[18:47] <mramm> jamespage: mgz: we are still fighting the good fight.   And if we don't get it in now, we will do everything possible to get it in via backports early  next week
[18:47] <mramm> so the end user visible difference will not be much
[18:52] <mramm> either way after next week a user will be able to sudo apt-get install juju-core, and get our package
[18:52] <mramm> and if it is in backports it will be even easier to get it updated with our monthly releases
[21:25] <davecheney> fwereade_: what is the story with the trunk ?
[21:26] <davecheney> can we land fixes ?
[21:28] <mgz> davecheney: I thought it had been mentioned that it's okay to land again, but I now can't find a reference
[21:28] <mgz> release things have all been branched though
[21:29] <davecheney> mgz: mramm fwereade_ : please email juju-dev with the status of the trunk
[21:29] <davecheney> mgz: i saw you were unassigning issues from 2.0
[21:29] <davecheney> thank you
[21:29] <mramm> feel free to use trunk
[21:29] <davecheney> mramm: email please
[21:29] <mramm> will do
[21:30] <davecheney> just copy and paste this discussion
[21:32] <mramm> it's posted
[21:32] <mramm> and I've updated the agenda so we talk about that
[21:32] <mramm> and so that we talk about the backports stuff
[21:32] <mramm> and general release status
[21:33] <mramm> I will try to write up a release status update e-mail later this evening
[21:33] <mramm> but right now there is a lot up in the air, and I need some time to get all the details organized and written down
[21:34] <mramm> and I have some personal stuff to take care of in a few min, so I don't think it will make it before I leave for that...
[21:37] <davecheney> thanks mark