[00:10] I just tried deploying a charm (using juju-core) that requires the oneiric series, but the VMs failed in the "no matching tools" state. Trying `juju sync-tools --all` just produced the same message. [00:13] here's how the world looks for me right now: https://gist.github.com/orospakr/b3a1950fbe80c61446b4 [00:15] orospakr: we do not support oneiric [00:15] sorry [00:15] you won't find any tools [00:16] yeesh [00:16] huh, okay. thank you. [00:17] silly question: when you juju bootstrap with juju-core, the bootstrap VM is running juju-core as well? [00:22] the first machine, machine 0, is running the mongodb service (what we call the state server) [00:22] it holds the knowledge of everything in that environment [00:22] ah, I knew about that. I thought a jujud ran on it as well. [00:23] orospakr: well a jujud is running on that machine [00:23] i'm not sure which question you are asking [00:26] does the first node also actively administer the other nodes, in response to your commands with the juju client, in addition to storing state in mongodb? [00:31] orospakr: yes and no [00:31] the first node also runs the provisioning agent, which is the one that talks to the provider and creates new machines (nodes) if required [00:34] ah, that's the thing I'm thinking of. good. :) [00:36] so, now I've got this other issue: I can't destroy the wedged service/units/machines, because they're all inaccessible to juju. [00:36] orospakr: can you explain what you mean by inaccessible ? [00:37] the oneiric-based service is in the "dying" state, and the units and machines alike are in the "pending" state. [00:37] s/"dying" state/life parameter is "dying"/ [00:40] orospakr: have you tried juju terminate-machine ? [00:40] aye. [~]$ juju destroy-machine 2 3 4 [00:40] error: no machines were destroyed: machine 2 has unit "couchbase/0" assigned; machine 3 has unit "couchbase/1" assigned; machine 4 has unit "couchbase/2" assigned [00:40] oof, excuse me. that didn't paste well. [00:42] i'd start with destroy-unit, destroy-service, etc [00:42] but at this point i'd just destroy the environment and start again if that is an option [00:42] it is, but I'm trying to stick it out on things like this so I can be confident that I can deal with failures when I do use this in production. ;) [00:44] i think destroy-unit and destroy-service take a --force [00:44] but they will create dangling references [01:20] m_3: ping [02:37] why did the PA restart ? http://paste.ubuntu.com/5597298/ [04:06] m_3-droid: i'm a tit [04:07] the problem isn't the HP endpoint, it is the number of concurrent connections to the mongo server running on machine/0 [04:08] ├─mongod───828*[{mongod}] [04:08] ├─mongod───9*[{mongod}] [04:08] oops [04:08] https://bugs.launchpad.net/juju-core/+bug/1172110 [05:05] shit, we're using 3 mongo connections per machine [05:18] davecheney: because of the different agents?
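The forced teardown discussed above, as a sketch. The ordering (units, then service, then machines) follows the advice given; the --force flags are only as the participants remember them ("i think destroy-unit and destroy-service take a --force"), so treat them as an assumption about this era of juju-core, and expect the dangling references mentioned:

```sh
# Work from the leaves up: units first, then the service, then machines.
juju destroy-unit --force couchbase/0 couchbase/1 couchbase/2
juju destroy-service --force couchbase
juju destroy-machine 2 3 4
# If the environment is still wedged, the blunt instrument always works:
juju destroy-environment
```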
[05:20] i suspect so [05:20] it isn't exactly 3x [05:20] 450 connections == 156 machines [05:21] the mgo driver's peer probing isn't helping [05:21] * davecheney really hates that feature [05:28] jam: https://codereview.appspot.com/8931044/ [05:31] jam: Apr 24 05:30:51 juju-goscale2-machine-0 mongod.37017[9222]: Wed Apr 24 05:30:51 [conn24025] end connection 15.185.174.63:33866 (599 connections now open) [05:31] Apr 24 05:30:53 juju-goscale2-machine-0 mongod.37017[9222]: Wed Apr 24 05:30:53 [initandlisten] connection accepted from 15.185.173.39:34560 #24049 (600 connections now open) [05:31] <_mup_> Bug #24049: KDE systemsettings doesn't have a file association component [05:31] ^ this is what you get from mongo [05:31] compared to how much we log [05:31] it's not a lot [05:31] #1 [05:31] #1234 [05:31] #12345 [05:31] <_mup_> Bug #12345: isdn does not work, fritz avm (pnp?) [05:31] #9999 [05:31] #10000 [05:31] <_mup_> Bug #10000: xserver-common: X crashed (signal 7) while scrolling in Mozilla [05:47] bzzt [05:47] Apr 24 05:46:53 juju-goscale2-machine-0 mongod.37017[9222]: Wed Apr 24 05:46:53 [initandlisten] connection refused because too many open connections: 819 [05:47] Apr 24 05:46:53 juju-goscale2-machine-0 mongod.37017[9222]: Wed Apr 24 05:46:53 [initandlisten] connection accepted from 15.185.176.240:44746 #55756 (820 connections now open) [05:47] <_mup_> Bug #55756: installer partman crash selecting swap partition [05:47] Apr 24 05:46:53 juju-goscale2-machine-0 mongod.37017[9222]: Wed Apr 24 05:46:53 [initandlisten] connection refused because too many open connections: 819 [05:47] Apr 24 05:46:53 juju-goscale2-machine-0 mongod.37017[9222]: Wed Apr 24 05:46:53 [initandlisten] connection accepted from 15.185.177.196:51276 #55757 (820 connections now open) [05:47] <_mup_> Bug #55757: "Send a mail" to a contact list adds only the first contact to "To:" [05:59] morning everyone [05:59] how screwed up is everything today? === ChanServ changed the topic of #juju-dev to: https://juju.ubuntu.com | On-call reviewer: fwereade | Bugs: 2 Critical, 64 High - https://bugs.launchpad.net/juju-core/ [07:17] fwereade: we're looking at a hard limit of ~300 machines or 800 agents (the smaller of the two) [07:17] davecheney, thanks, I just read your email -- definitely mongodb connections? [07:17] fwereade: absolutely [07:17] it says so in the mongo log [07:18] fwereade: also, https://codereview.appspot.com/8931044/ [07:18] not critical [07:19] davecheney, that is ludicrous, isn't it? surely that's not as far as it can go?
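For anyone reproducing the numbers above: mongod reports its own connection tally, so the count can be watched directly. A minimal sketch, assuming the state server's mongod listens on port 37017 as it does in these logs:

```sh
# Ask mongod directly: current vs. available connection slots.
mongo --port 37017 --eval 'printjson(db.serverStatus().connections)'
# Or count established TCP sessions from the OS side:
netstat -tan | grep ':37017' | grep -c ESTABLISHED
```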
[07:19] the default limit is 80% of nfiles, which is 1024 on ubuntu [07:19] these defaults can be changed [07:20] during service deployment we average 3x the number of connections as machines [07:20] the number falls back to 2x after deployment [07:20] (although that may not be a successful deployment, as I restarted mongo) [07:20] 2x sounds logical, 1 machine agent, 1 unit agent per service unit [07:20] davecheney, yep, agreed [07:20] the conn limit for mongo can be increased [07:21] fwereade: the other problem is i'm seeing continual cpu spikes as 600 agents wake up [07:21] and probe each mongodb server [07:21] this is a very unhelpful feature [07:22] davecheney, heh, I had been less worried about that in particular but it goes to show you never can tell :/ [07:23] at 2000 agents I estimate the mongodb server will be under constant polling pressure [07:23] davecheney, *if* we had an internal API that would be a non-issue but it doesn't help us today [07:23] which will probably fuck us with sockets in TIME_WAIT [07:23] fwereade: yes, that is the logical solution [07:23] federated mongodb might help [07:23] but the polling logic will still be a massive burden [07:24] davecheney, yeah, doesn't feel like it'd really address the issue [08:26] mornin' all [08:28] rogpeppe: morning [08:28] dimitern: hiya [08:29] fwereade: hope you've recovered a bit! [08:30] rogpeppe, yeah, more or less, although I remain baffled at the actual state of play -- I am pondering dave's scale tests for now and hoping that mramm will wake up in an hour or two and bring us the gift of clarity [08:31] fwereade: unfortunately my network connection is still borked, so i missed the kanban meeting yesterday, so i don't really know [08:34] fwereade: i don't think i've seen anything about dave's scale tests other than the conversation just above [08:34] rogpeppe: I think nobody knows for sure, except probably mgz [08:35] rogpeppe, I had a brief chat with him this morning, it seems we are hitting mongo connection limits at around 300 machines [08:35] fwereade: interesting. i'm not entirely surprised actually. [08:36] fwereade: need to get that internal API done. and HA on that. [08:36] rogpeppe, yep [08:37] rogpeppe, considering possible mitigations in the meantime [08:37] fwereade: anything particular in mind? [08:37] rogpeppe, still going through internal sanity-vetting ;p [08:40] hmm..
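Both ceilings davecheney mentions can be raised. A sketch of the two knobs as they worked on mongod of this era (the 80%-of-nofile behaviour and the --maxConns flag are from 2.x-series mongod; the values and dbpath here are illustrative, not juju's actual configuration):

```sh
# 1. Raise the open-file limit for the mongod process (via an upstart
#    "limit nofile" stanza or /etc/security/limits.conf in practice),
#    since mongod caps connections at 80% of nofile by default:
ulimit -n 20000
# 2. Raise mongod's own connection cap when starting it:
mongod --port 37017 --maxConns 10000 --dbpath /var/lib/mongodb
```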
cmd/juju tests running time seems to have improved slightly [08:47] mramm, heyhey [08:49] heyhey [08:50] I am trying to get caught up on the packaging issues [08:50] mramm, great [08:50] I thought things were moving smoothly until late yesterday when antonio informed me that the server guys did not think it would go in [08:51] I still don't have a clear picture from them of what needs to happen, just some hand waving about "issues" [08:52] mramm, my understanding had been that monday was the actual razor-sharp cutoff in any case [08:52] well, that was my understanding too [08:53] but I thought we'd given them something by then, and I heard from them yesterday that the razor-sharp cutoff is actually today [08:53] mramm, but, regardless, if there is anything I can do to help I would be happy to; and if you do manage to glean some measure of clarity I, and others, would be most grateful for it [08:54] looks like jamespage is on the case [08:55] if you can join #server on canonical IRC that would be helpful [09:13] so anybody that can help jamespage in some way today gets an extra gold star from me ;) [09:25] good evening [09:25] could I draw your attention to [09:25] https://bugs.launchpad.net/juju-core/+bug/1172122 [09:25] <_mup_> Bug #1172122: state/presence: FAIL: presence_test.go:253: PresenceSuite.TestWatchPeriod [09:25] and [09:25] so, on the subject of dave's e-mail [09:25] https://bugs.launchpad.net/juju-core/+bug/1172121 [09:25] <_mup_> Bug #1172121: environs/maas: multiple test failures [09:25] haha [09:26] mramm: i can leave again if you want to talk about me [09:26] i don't mind, i like the idea of being popular [09:26] also, https://docs.google.com/a/canonical.com/document/d/1p_OzWxqxaXalHBI3ohkUsB9_iQBPSGWwM-qODm5FSbI/edit# [09:27] haha [09:27] so, I think we should talk about the internal API stuff sooner rather than later [09:27] given that we are hitting scalability limits now [09:29] davecheney, did you update gomaasapi? [09:29] mramm: SGTM [09:29] fwereade: probably not [09:29] davecheney, I think that resolves it [09:29] right, i'll check that tomorrow [09:30] davecheney, I'll close it then, reopen if I'm wrong please [09:32] SGTM [09:32] what about state/presence ? [09:33] davecheney, I think I have no option but to do a deep dive there and try to figure out WTF is going on [09:33] davecheney, occasional failures from there have always been a thing, but not one that ever quite rose high enough to be looked at properly [09:33] has anyone else seen that problem ? [09:35] davecheney, it doesn't look entirely unfamiliar, but I thought we had a bug for it already; apparently not [10:10] so, can somebody help jamespage out with updating our "release" package to build from local source rather than latest trunk, so the source gets included in the package [10:10] we can pull the local source from trunk and then build [10:10] mramm, forget the package [10:10] I just want a release tarball of juju-core 1.10.0 that contains everything that is juju-core 1.10.0 aside from the packaging [10:11] ahh [10:11] cool [10:12] so rogpeppe, TheMue, fwereade, jam, dimitern: can one of you get that for jamespage now? [10:12] mramm: ok, i'll do that [10:13] rogpeppe: thanks! [10:13] jamespage: presumably not including binaries, right? [10:14] rogpeppe, you got it [10:14] ah, just wanted to ask too [10:18] if you fancy a break to look at it [10:21] morning mgz! [10:23] hey! seen your email. [10:24] mgz, hey!
[10:26] so, there are a couple of issues for me: [10:27] the update-alternatives bits didn't get merged into dave's packaging branch, but he did then add a manpage, so that needed updating [10:28] I don't have rights to upload to the public bucket on ec2, nor have I done what dave normally does for releases [10:29] and there's a change in 1.10 working around the updates-during-raring to upstart bug that I don't think we want to release with [10:30] mgz, I've merged all the various bits of packaging - lp:~james-page/ubuntu/raring/juju-core/1.10 [10:30] mgz: I have some questions for you as well if/when you have some time. [10:30] jam: sure [10:30] mgz, and raised an FFe - 1172215 [10:30] bug 1172215 [10:30] <_mup_> Bug #1172215: [FFe] Please include juju-core 1.10.0 in Ubuntu 13.04 [10:31] mgz: 1) it is known that 1.10.0 is in the ppa but not uploaded to ec2, so you can't actually bootstrap (no tools found) [10:31] those are the only things I'm aware of. [10:32] davecheney: can you help out with the tools upload stuff if you are still around? [10:32] 2) I still can't ssh to the original goose instance, as near as I can tell chinstrap isn't letting ssh get to lcy01 (it is working to lcy02, but I get No route to host trying to get to 10.55.60.94) [10:32] any of the core guys should have rights, but no one responded when I poked the other day [10:33] so maybe they don't? [10:33] mramm: sure, mgz do you want the creds ? [10:33] 3) I tried starting a new tarmac bot with similar config and juju-core doing the bootstrap to lcy02, but it seems charms:precise/tarmac uses puppet, and that just doesn't work now. [10:33] or I can do the push if you can point me to the deb in the archive [10:33] giving mgz creds makes sense to me [10:34] lp:~james-page/ubuntu/raring/juju-core/1.10 looks like the proposed release [10:34] sure, the only reason I hesitate is they belong to gustavo [10:34] yea [10:34] so, be careful with 'em [10:34] davecheney: load testing time! [10:34] how many CC4.xxlarge can we run? :) [10:34] we ultimately need to take that over and make it something owned by a team, not a person :/ [10:35] mramm: is it possible to change ownership of an s3 bucket? [10:36] not easily [10:36] mgz: check your mail [10:37] mramm: jam much easier to change the source in the code [10:37] davecheney: right [10:37] you can delete it, and then try to pick the name up under a new account [10:38] but 1) anybody can pick it up when it becomes available [10:38] and 2) it can take up to 24 hours to go back into the available names pool [10:38] jam: one funny thing I noticed was there are a couple of goosebot instances, one of which is shutoff, on lcy01. it's possible the routing is broken just for that host. [10:39] jam: the tarmac puppet charm not working with juju-core is a bug here I guess, unless it does something very bogus [10:39] mgz: the shutoff one is the python-juju bootstrap node [10:40] I wasn't able to start and get to that one either, but didn't really need to. [10:40] there are two shutoff ones... [10:42] mramm: that's why I suggest a dns name like tools.juju.ubuntu.com in our code to point to ANY bucket we want (or other server) [10:43] jamespage: i'm pushing a source-only branch containing all the latest source and its dependencies.
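davecheney's suggestion above — bake a team-controlled DNS name into the code rather than a bucket owned by one person — would let tool downloads be repointed without a client release. A sketch with entirely illustrative names (tools.juju.ubuntu.com is from the discussion; the bucket and path are hypothetical):

```sh
# Hypothetical record, managed by the team rather than an individual:
#   tools.juju.ubuntu.com.  IN  CNAME  team-owned-bucket.s3.amazonaws.com.
# Clients hardcoding only the DNS name keep working after a bucket swap:
dig +short tools.juju.ubuntu.com
curl -sI http://tools.juju.ubuntu.com/tools/ | head -n1
```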
[10:43] TheMue: file a kanban card for it and add it to tomorrow's agenda ;) [10:44] TheMue: https://docs.google.com/a/canonical.com/document/d/1p_OzWxqxaXalHBI3ohkUsB9_iQBPSGWwM-qODm5FSbI/edit# [10:44] mramm: yep, will do [10:44] jamespage: unfortunately my network upload speed is outrageously slow at the moment (about 50Kbits/s) so it will take a while [10:44] davecheney: agenda is already edited, for a different topic ;) [10:44] rogpeppe, ack [10:45] jam: seems 10.55.60.94 is indeed unreachable, worth raising with canonistack support [10:47] mgz: is that ask web-ops and then get it escalated ? [10:47] mgz: do you know if you have to have the puppet agent running to have the script run ? (default is that puppet agent in /etc/defaults/puppet is to not run) [10:47] for the record, the packaging branch is at lp:~james-page/ubuntu/raring/juju-core/1.10.0 with the trailing .0 [10:51] jamespage: lp:~rogpeppe/+junk/juju-1.10.0-source-only [10:51] jamespage: your packaging branch looks good to me [10:51] rogpeppe, that really needs to be somewhere official [10:51] * TheMue is at lunch [10:51] like lp:juju-core/1.10.0 [10:51] jamespage: ah [10:52] i'm worried i might start stepping on someone's toes here - i'm not usually involved with this stuff [10:52] needs a different name if you're putting it under the juju-core project [10:52] as it's packaging [10:52] I can push it somewhere though [10:53] mgz, no - that's the point [10:53] I want an upstream release of juju-core [10:53] forget the packaging [10:53] it's not juju-core 1.10.0 [10:53] it's that plus all the deps [10:53] yes [10:53] okay. [10:53] juju-core plus the deps that the juju-core dev team say are good for 1.10.0 [10:53] basically i fetched all the deps from scratch and removed the .bzr and .hg directories [10:53] mgz, I want to switch the packaging away from 3.0 (native) - it's not required [10:54] to quilt? [10:54] mgz, yes [10:54] native rarely makes sense [10:55] I can live with that [10:55] mgz: I wonder if you could grab them and use 'bzr-upload' to create light dirs so that it is easy to update them to newer tools. [10:56] so, issue #1 is resolved by james' branch, issue #2 is okay now I have the ec2 creds, last question is if we carry the cloud-init hack or not... [10:58] jam: we could certainly do something more elegant, for now I'm happy with just dumping the code and adding a bunch of fresh unrelated stuff to the repo on launchpad [10:59] be back in a few -- getting breakfast and etc. Ping me if needed. [11:00] jamespage: probably a question for you as much as any of the juju guys, see [11:01] mgz: did you still want the hp-cloud instance running? [11:02] jam: nope, I fixed my script so I can restart it myself as needed [11:03] mgz: except you couldn't reach http, right? [11:03] non-http because of the non-standard port [11:03] for keyauth [11:03] well, "myself", provided I remember to do it in advance, otherwise without manual intervention (lesson: use sed -r when being fancy) [11:18] my pesky phone line seems like it might be out for another whole week [11:20] * rogpeppe didn't know about sed -r [11:20] i always get bitten by the fact that standard sed doesn't do "proper" regexps, 'cos i'm used to plan 9's sed which does them by default [11:21] mgz, rogpeppe: which of you two are working on preparing the juju-core 1.10.0 snapshot release and sticking it somewhere official?
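The sed aside above, spelled out: GNU sed's default basic-regexp dialect requires backslashed groups and only supports alternation as a GNU extension, which is what keeps biting rogpeppe; -r (-E on BSD sed) switches to extended regexps, closer to what plan 9's sed does by default:

```sh
# BRE: groups must be escaped, and \| alternation is a GNU extension:
echo foobar | sed 's/\(foo\|bar\)/X/g'    # -> XX
# ERE via -r: the "proper" (foo|bar) syntax just works:
echo foobar | sed -r 's/(foo|bar)/X/g'    # -> XX
```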
[11:21] not clear from my backscroll [11:22] mgz: given my (lack of) current bandwidth, you might be best doing that [11:22] rogpeppe: still no home internet? ouch [11:23] jam: yeah. i just talked with the phone company, and they have no idea when it'll be resolved. it's been out since last tues [11:23] rogpeppe: for sed, you mean '\d' vs [[:digit:]] ? [11:23] jam: well, there is *some* internet, but the upload speed is stupidly bad, and i've been getting 3-5s ping response times [11:24] jam: no, i mean (foo|bar) [11:28] jamespage: I can push the source roger put up to somewhere... and I guess we can just leave the other questions for now [11:30] lp:~juju-core/ubuntu/raring/juju-core/1.10.0-source okay? [11:31] mgz, dimitern: standup? [11:31] mgz, not sure why we need the ubuntu/raring prefix [11:31] it's not ubuntu or raring - it's juju-core 1.10.0 [11:31] jamespage: because ~juju-core/juju-core is completely unrelated code. [11:31] well, "unrelated" [11:31] in that it is just the base tree, not all deps [11:32] well like I said - I'm good with a tarball published on launchpad.net [11:32] like we do for juju [11:33] hm, that would be ideal really [11:33] i'd rather have it that way - otherwise I have to cut my own tarball still [11:33] which is not always deterministic [11:33] mgz: mumble? [11:34] I'm there. [11:41] so, we don't have a 1.10.0 milestone yet? ...ah, the fun is it's half the 2.0 milestone [11:52] fwereade: want to join us on mumble? [12:02] jamespage: https://launchpad.net/juju-core/1.10/1.10.0 has a tarball based on the rogpeppe branch [12:02] wallyworld_, actually, could we have a quick hangout for 15 mins? [12:02] I'll tidy up the rest of the release stuff for now [12:02] sure [12:02] wallyworld_, I forgot our differing interpretations of wednesday [12:02] blast, no hangout for me [12:02] mgz, ta muchly [12:03] fwereade: if mgz can't do hangout, can you do mumble? [12:03] wallyworld_, er maybe, how can I set it up in 2 minutes or less? [12:03] mgz: you might want to include this info somewhere around the place: http://paste.ubuntu.com/5598150/ [12:03] fwereade: not in 2 mins i don't think :-( [12:04] mgz: it's the full list of the revision numbers [12:04] mgz: can you reduce your bandwidth for a hangout? [12:04] fwereade: `sudo apt-get install mumble` then but hangout safer, go for that [12:04] mgz, cheers, I'll set that up after this then [12:04] wallyworld_: issue is google don't provide arm debs for their binary blobs [12:04] mumble can be fiddly [12:04] mgz: hmm, except the revision of the go tree itself. that should probably be included too, i suppose, although we're not including it in the tarball [12:05] mgz: maybe we should [12:05] mgz: although i suppose we're building against a known go revision [12:05] I can, if needed, cheat and re-up the tarball with fixes [12:05] wallyworld_, mgz: anyone else I should invite? jam?
[12:05] I'm not certain the current dir layout makes sense for instance [12:05] fwereade: he is afk for a little bit, will join when he gets back [12:05] fwereade: jam was interested, but has to go off now [12:06] so invite him [12:06] mgz: i just did it so the root of the tree could be used as $GOPATH [12:06] mgz, I need to go in 10 mins but I think we can cover some stuff usefully [12:06] yeah, go go, you and ian [12:06] * dimitern bbin1h [12:36] mramm, mgz: I noticed the debian/copyright file was not complete in the packaging - working on that now [12:36] which means I have to document the copyright and license for all of the bundled projects as well [12:36] * jamespage sighs [12:40] ...sorry about that [12:41] mramm, mgz: OK - I have two problems [12:41] goose has no explicit copyright holder [12:41] and lpad has neither a copyright holder nor a license [12:45] fwereade: sorry for the incoming launchpad email surge, you had a bunch of fixed bugs targeted at 2.0 that you actually fixed long ago [12:45] ...is lpad actually a dependency? [12:45] or should we remove it from the tarball? [12:46] goose is fixable [12:46] we've had no external contributions, the copyright holder is just canonical [12:47] jamespage: can you try building what you have as well, to see if tests etc are all fine? I'll upload a -1 tarball with any fixes needed [12:48] mgz, builds just fine - this is just a distro copyright/license thing [12:48] mgz, I don't see any tests executing fwiw [12:49] jamespage: where should we put the copyright holder if we're not doing per-file licensing? [12:49] jamespage: I doubt they're run as part of the packaging [12:50] we should probably add that, but not now [12:50] mgz, good question - most of niemeyer's projects have it in LICENSE [12:51] mgz, jamespage: I tend to do it per file as well [12:51] mgz, jamespage: and certainly in a LICENSE file or similar [12:51] niemeyer, I spotted [12:51] :-) [12:51] in juju-core that seems to just be a copy of the agpl [12:51] mramm, mgz: LICENSE/LICENCE or suchlike is sufficient - every file is best practice [12:52] jamespage: +1 [12:52] Especially since some projects (e.g. goyaml) do have mixed licensing [13:06] * jamespage looks at goyaml again [13:07] hm, and now canonistack is refusing to talk to me [13:20] okay, I can't build or upload anything till canonistack is back, so having lunch === wedgwood_away is now known as wedgwood [14:51] jamespage: okay, have written a hacky script for rolling up the tarball [14:51] mgz, yay for hacky scripts ;-) [14:51] I just need to know, exactly, what change you want me to make for the goose license thing [14:51] it was missing copyright - hrm [14:51] I currently have COPYING and COPYING.LESSER in the branch [14:52] I don't really want to modify those, as they're just the text from gnu [14:52] not that I know why we have both...
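For anyone following along: the point of the source-only tree rogpeppe cut (root usable as $GOPATH, as he says above) is that a from-scratch build needs no network fetches. A sketch of the intended use, against the branch named earlier; the go install invocation assumes the go 1.0-era toolchain of this log:

```sh
# Fetch the self-contained source tree and build with its root as GOPATH:
bzr branch lp:~rogpeppe/+junk/juju-1.10.0-source-only juju-1.10.0
cd juju-1.10.0
export GOPATH=$(pwd)
go install launchpad.net/juju-core/...   # all deps resolve locally
```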
[14:53] modifying every darn file in the tree is also not sane, though it's what the gpl generally wants (header on each source file) [14:53] mgz, indeed - I think just adding a LICENSE file detailing which of those two licenses it's licensed under and details of the copyright holder would be OK [14:53] for now anyway [14:53] in fact I'm happy to leave what you have in the tarball as-is - so long as there is a commit in the bzr branch with the details on it - I can refer to that with a comment [14:54] time is of the essence and all that [15:06] okay, done, uploading [15:10] this really does need the tests run on it though, as it's pristine from export, rather than copied files from trees that have been tested together [15:11] jamespage: (and everybody else) https://launchpad.net/juju-core/1.10/1.10.0/+download/juju-core_1.10.0-1.tar.gz [15:22] mgz, wanna check my branch again? just re-cut using that tarball [15:29] mgz: i ran the basic tests on it, but not live tests. [15:29] mgz: BTW, i'm pretty sure that davecheney builds from pristine for each release [15:36] jamespage: looking, it seems good, one question for the others [15:36] go.net has lost the html/ package, is that something we used in any way? [15:41] TheMue: I'm gonna turn that into a blog post though.. so give it a couple of days and I'll have a version that's a bit easier to read [15:42] seems safe, I needed exp/html from go trunk long ago, but I think that was for rietveld not juju-core anyway [15:48] m_3: Great, that's what we need. Thx for your effort. [15:49] jamespage: seems there were some changes to the cert code in your old cut of the source which I'm not clear on the origin of... [15:50] what's there now is all that's been on trunk as far as I can see, and looks okay to me [15:51] anyway, I shall build out of that branch, and upload to the public bucket [15:52] I guess we may also want to change the recipe to use this, and rebuild what's in the ppa? [15:57] simple git question for someone: what's the equivalent of bzr revision-info in git? [15:59] hm, something is not happy [16:06] rogpeppe1: took a look at git show? [16:06] TheMue: i think that "git rev-parse HEAD" is what i need [16:07] rogpeppe1: oh, i'll take a look [16:07] TheMue: it seems there's no linear idea of commit history in git, unlike hg and bzr. is that right? i.e. no numeric log numbering. [16:07] TheMue: thanks. i've never used git in anger. [16:09] rogpeppe1: i've just started for private projects, used hg before [16:11] rogpeppe1: and yes, it seems to use large numbers like uuids and commit, tree, parent relations [16:11] mgz, OK - just got an ftbfs on arm [16:11] jamespage: interesting. what's the error? [16:12] apologies for the paste: [16:12] launchpad.net/goyaml [16:12] # launchpad.net/goyaml [16:12] src/launchpad.net/goyaml/goyaml.go:89: undefined: newDecoder [16:12] src/launchpad.net/goyaml/goyaml.go:90: undefined: newParser [16:12] src/launchpad.net/goyaml/goyaml.go:135: undefined: newEncoder [16:12] jamespage: ah, i think i know what the issue might be. i wonder if cgo is disabled/not working on arm [16:13] rogpeppe1, how do I check? [16:13] jamespage: are you getting the error at a command prompt? [16:13] jamespage: oh jeeze [16:13] jamespage: this is an old version of go we're using [16:14] jamespage: i'm not surprised actually [16:14] 1.0.2 as in raring [16:14] jamespage: yeah. hmm. [16:14] jamespage: davecheney's the man for knowing about go-on-arm stuff.
[16:15] jamespage: if you type "go env" on the arm box, it should have an output line saying something like: CGO_ENABLED="1" [16:16] rogpeppe1, CGO_ENABLED="0" [16:16] yikes [16:16] jamespage: right, so that's the issue [16:16] rogpeppe1, OK - we'll drop arm for juju-core in raring [16:16] jamespage: +1 [16:16] jamespage: we've done no testing on arm [16:16] mramm, ^^ [16:17] jamespage: no arm is fine [16:17] jamespage: arm support was explicitly pushed back to post 13.04 anyway [16:18] well, nearly very smooth: [16:18] mramm, OK [16:18] jamespage: from http://code.google.com/p/go-wiki/wiki/GoArm : "currently the development version of Go includes better support for linux/arm, including full cgo support, than Go 1.0." [16:19] rogpeppe1: correct, git has no version numbers, just commits (identified by the hash of the commit contents) [16:19] mramm: thanks [16:19] commits define their parent commit, and you therefore get a graph of commits === _mup__ is now known as _mup_ [16:20] which git walks to show you a timeline [16:20] and yeah, I can't build juju-core on this box, we don't work on arm currently, but we were also not targeting it [16:20] mramm: that's true of all of those systems i think - but the linear history is quite nice for referring to a given trunk revision [16:20] mgz: you probably could if you used go tip [16:21] rogpeppe1: well, git does not attempt to pretend that there is a linear history [16:21] mgz: i'd be interested to find out whether it works actually [16:21] mramm: yeah. [16:21] I'll add that to my list of fun-time things to try :) [16:21] hmm, i wonder if my san box upstairs would be up to the task... [16:21] in a fully distributed system I have commits in a branch you don't have and vice versa, so linear history is impossible to get right [16:22] mramm: yeah, but hg and bzr both pretend quite well :-) [16:25] linus was very opposed to pretending [16:26] mramm: another stupid git question: how do i update the current working tree to a given rev id? [16:27] mramm: git pull? [16:27] whoops, not been focusing on this: https://codereview.appspot.com/8939043 [16:27] rogpeppe1, quick look for form's sake please? [16:27] git checkout [16:27] fwereade: LGTM trivial [16:28] rogpeppe1, cheers [16:28] mramm: thanks [16:28] git pull will grab objects (commits and trees) from the remote repo and pull them down [16:28] git checkout will switch the cwd [16:29] ok, I'm nearly up to date on my reviews, and everyone can start committing their approved bits and pieces to trunk now [16:29] will try to swing by again later -- if not, ttyall tomorrow [16:30] dimitern, if C+L sleep early I might ping you for a late beer/catchup re upgrade-charm [16:30] dimitern, otherwise, maybe 20 mins before the meeting tomorrow? [16:32] fwereade: sgtm [16:32] fwereade: when you can [16:40] mgz, rogpeppe1, fwereade: you guys happy with what we are proposing for release into raring? [16:40] jamespage: yup, sgtm [16:40] jamespage: assuming it's essentially still the sources i put together [16:40] rogpeppe1, it's the re-cut sources mgz did [16:41] mgz: what did you change? [16:41] lpad got dropped [16:41] and some licensing clarification around goose [16:42] why is juju depending on lpad anyway? it's only used by lbox when interacting with LP [16:43] * jamespage shrugs [16:43] anyway I just uploaded to raring - Daviey and slangasek are lined up for review [16:44] jamespage: tyvm! [16:47] jamespage: awesome work! [16:48] let's not get too excited - it's just in the queue!
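The arm diagnosis above, condensed: goyaml wraps a C yaml parser via cgo, so when cgo is unavailable its pure-Go side references symbols that were never compiled — hence the undefined newDecoder/newParser/newEncoder errors. The check really is just:

```sh
# CGO_ENABLED="0" means C-backed packages such as goyaml cannot build:
go env | grep CGO_ENABLED
```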
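And mramm's git answers, gathered in one place with rough bzr equivalents; note that plain git fetch is the download-only half of the git pull he describes:

```sh
git rev-parse HEAD          # bzr revision-info: identify the current commit
git log --oneline --graph   # walk the commit graph; no linear revnos exist
git fetch origin            # grab remote commits without touching the tree
git checkout <commit-hash>  # move the working tree to a given revision
```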
[16:48] thank you very, very much [16:48] haha [16:48] understood [16:48] but it is progress [16:48] mramm, action prior to next release - get some copyright/license headers in all source across the board please! [16:48] fwereade, I got your review comments in the middle of submitting. Would you be alright with a separate branch with those implemented? [16:49] jamespage: will do [16:49] Er, too late :/ [16:51] dimitern: it's actually just the store code that depends on lpad [16:52] dimitern: i included it just so i could do go test ./... without errors [16:52] rogpeppe1: it's about time to separate the store from juju-core now [16:52] dimitern: it's happened before and it will happen again :-) [16:53] rogpeppe1: i certainly hope it'll be soon :) [16:54] Makyo: it's never too late for another branch ;) [16:54] dimitern, More branches, more branches! === deryck is now known as deryck[afk] [17:28] Getting conflicting reviews on r1192 and it should be reverted. I've never been successful at that. Can someone help me out? [17:46] Makyo, sure, that's fine [17:47] Makyo, sorry about that [17:47] Makyo, no need to revert if it's already in, just note that you will update in the review please [17:48] what's the best way for a charm to work out if it's been deployed using py-juju or go-juju other than checking for the location of the agent.conf files? [17:49] fwereade, even with rogpeppe1's comments? [17:49] Makyo, ah sorry, just saw rog's [17:49] Makyo, 2 mins thinking time [17:50] mattyw: JUJU_CONTEXT_ID is a reasonable indication, i think. [17:50] Makyo: sorry for the tardy review - i forgot to submit the comments earlier [17:51] rogpeppe1, That's okay, they're definitely necessary. I think reverting might be the best choice, though, to make sure things aren't half-right. [17:53] Makyo, yeah, it needs more thought -- sorry, I needed to look up what we did with deploy in that case [17:55] Makyo, I think we should keep charm-adding and charm-setting separate, essentially as they are in deploy [17:58] fwereade_, alright, looking through deploy... [17:59] Makyo, trying to figure it out myself [17:59] Makyo, rogpeppe1: how do charms get into state from the GUI, ie via the API? [17:59] fwereade_: currently they don't [18:00] fwereade_: we're restricted to charms in the charm store [18:00] jamespage: what was the reason for juju-core rejection? ...I wish there was something recorded on launchpad [18:00] fwereade_, rogpeppe1, Correct, no support for deploy/upgrade on local charms yet, though we can see them once they're deployed. [18:00] rogpeppe1, ah ok, so the client just has to pass a url known to the charm store? [18:00] fwereade_: yes [18:01] fwereade_: my vague plan is to have an optional extra call to upload a charm (probably a sequence of calls so we don't bundle up MB in one json message) [18:01] fwereade_: it may even end up being better as a PUT [18:01] Makyo, rogpeppe1: ok, then are we ok just putting the same restriction on upgrades for now? ie ServiceName, CharmURL, Force? [18:02] fwereade_, rogpeppe1, sounds good to me. [18:02] fwereade_: i think so [18:02] Makyo, rogpeppe1: I'm fine punting on local charms today at least, so long as we punt consistently [18:02] fwereade_: yup [18:03] i'm done for the day [18:03] Still on to revert 1192? [18:03] see y'all tomorrow [18:03] Later. [18:04] Hi all. I'd like to announce the GUI's compatibility with juju core, but the Raring Juju from the devel PPA fails for me like this: http://pastebin.ubuntu.com/5598967/ [18:04] Is that known?
Is there some other, better way to suggest that people try out the GUI on juju core? I didn't figure installing juju from source was the right sales pitch :-) [18:19] mgz, release team said it was too late [18:29] gary_poster: so now --upload-tools is failing too? [18:30] ahasenack, as in the pastebin, bootstrap --upload-tools succeeds temporarily, but then I can't deploy anything. :-/ [18:30] oh, ok, I missed the deploy command [18:47] jamespage: mgz: we are still fighting the good fight. And if we don't get it in now, we will do everything possible to get it in via backports early next week [18:47] so the end-user-visible difference will not be much [18:52] either way after next week a user will be able to sudo apt-get install juju-core, and get our package [18:52] and if it is in backports it will be even easier to get it updated with our monthly releases === flaviami_ is now known as flaviamissi === bac_ is now known as bac === deryck[afk] is now known as deryck [21:25] fwereade_: what is the story with the trunk ? [21:26] can we land fixes ? [21:28] davecheney: I thought it had been mentioned that it's okay to land again, but I now can't find a reference [21:28] release things have all been branched though [21:29] mgz: mramm fwereade_ : please email juju-dev with the status of the trunk [21:29] mgz: i saw you were unassigning issues from 2.0 [21:29] thank you [21:29] feel free to use trunk [21:29] mramm: email please [21:29] will do [21:30] just copy and paste this discussion [21:32] it's posted [21:32] and I've updated the agenda so we talk about that [21:32] and so that we talk about the backports stuff [21:32] and general release status [21:33] I will try to write up a release status update e-mail later this evening [21:33] but right now there is a lot up in the air, and I need some time to get all the details organized and written down [21:34] and I have some personal stuff to take care of in a few min, so I don't think it will make it before I leave for that... [21:37] thanks mark === wedgwood is now known as wedgwood_away === Makyo is now known as Makyo|out
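One loose end from the afternoon: mattyw asked how a charm can tell whether it was deployed by py-juju or go-juju, and rogpeppe1 pointed at JUJU_CONTEXT_ID. A hook-side sketch — treat the variable as a heuristic for this era, since it is set by juju-core's hook environment rather than being a documented contract:

```sh
#!/bin/sh
# Inside a charm hook: juju-core exports JUJU_CONTEXT_ID; py-juju does not.
if [ -n "$JUJU_CONTEXT_ID" ]; then
    juju-log "running under juju-core (go)"
else
    juju-log "probably running under py-juju"
fi
```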