[00:04] axw: so i found the maas storage issue, their regexp for parsing the constraints only allows a-zA-Z0-9 for labels :-(
[00:05] perrito666: u r the shining light in ur neighbourhood :)
[00:34] alexisb: did you follow up for bug 1451626?
[00:34] Bug #1451626: Erroneous Juju user data on Windows for Juju version 1.23 <1.23>
[00:48] katco: ping?
[00:51] Bug #1452745 changed: 386 constant 250059350016 overflows int <386>
[01:38] wallyworld: so, it doesn't have to be the tag. you could generate any old label
[01:38] axw: yeah, i'm doing that now, just testing
[01:38] but still
[01:39] no reason why "-" should be excluded
[01:39] a bit limiting, probably could be extended a bit
[01:39] easiest thing is just to strip the "-" from the tag
[01:39] makes it easy to map back
[01:40] wallyworld: you could just use the tag's Id(), which would return the number
[01:40] (as a string)
[01:40] true, that would work
[02:00] thumper: you around?
[02:00] natefinch: yeah
[02:02] thumper: I'm trying to figure out how to add tests to my latest patch that fixes our broken log rotation. Basically, once the machine agent is up and in some sort of reasonably steady state, I want to check that the default writer that loggo uses is still a lumberjack.Logger (since that was the problem before - it was getting overwritten with os.Stderr)
[02:02] ok...
[02:03] thumper: it's that first part which I don't really understand how to do: "wait for the agent to be up in some kind of steady state"
[02:03] yeah... wouldn't it be good if we had some pub/sub system?
[02:04] I'm not sure there is a fully clean way right now
[02:04] natefinch: although there may be a way to make sure you are past a certain point
[02:04] menn0 has done a lot with jujud agent startup stuff
[02:04] menn0: any ideas?
[02:04] natefinch: there are tests in cmd/jujud/agent that wait for particular workers to start. you could perhaps leverage that?
[02:05] natefinch: I also notice that there is no way to ask for the writer from loggo without removing it
[02:05] thumper: ha yeah, I noticed that. That was cute.
[02:05] thumper: *shrug* why would anyone ever care? :)
[02:05] right... and it protects concurrent writes
[02:06] FSVO protects
[02:06] thumper: I can just remove it and make sure the one that was there was the right one, that's fine.
[02:06] I don't hold strong reasons not to have a query mechanism
[02:07] menn0: I guess if I just look for one of the later workers to be started, that would be fine. really, once we're in the "starting workers" phase, I would hope we're well past anyone mucking with the logger
[02:07] natefinch: once some of the later running workers have started you could do the check
[02:07] of course then we come to the fact that the parent of the parent of the parent of the parent test suite resets the logger to be something other than the default
[02:08] natefinch: ok, well look in machine_test.go for tests that use singularRecord
[02:09] natefinch: it reports creation of the singularRunners and the workers they create (while also letting those things happen, it's pass-through)
[02:09] I'm reading lxd source... and slightly sad
[02:10] we go to the trouble of splitting out errors, cmd, testing, and none of this is used, even though it is all written in go
[02:11] I didn't realize lxd was written in go. It's sad that they're not reusing anything we've done. I can't say I'm terribly surprised, communication across teams is not that great... but it would have been nice if they reached out and said "hey, we're making this thing, any thoughts?"
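A minimal Go sketch of the label workaround from the top of the hour — using the tag's Id() instead of the full tag string to satisfy the MAAS regexp. The VolumeTag here is illustrative (assuming juju/names's API), not the actual provider change:

```go
package main

import (
	"fmt"

	"github.com/juju/names"
)

func main() {
	// MAAS's constraint-label regexp only accepts [a-zA-Z0-9], so a
	// tag string like "volume-0" is rejected because of the "-".
	tag := names.NewVolumeTag("0")
	fmt.Println(tag.String()) // "volume-0" - fails the MAAS label regexp
	// Id() returns just the identifier as a string, which is
	// regexp-safe and still maps back to the tag unambiguously.
	fmt.Println(tag.Id()) // "0"
}
```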
[02:12] menn0: cool, thanks. looking into that now.
[02:13] natefinch: another approach would be to take advantage of the hooks that report when API and State connections are opened up (search for reportOpenedAPI and reportOpenedState and follow it through)
[02:13] natefinch: that might be more straightforward
[02:13] natefinch: once a machine agent is opening up an API connection it's about to start workers and logging init should have been done
[02:18] thumper: could you please have a look at: https://github.com/juju/replicaset/pull/2
[02:18] thumper: I need this to support the replicaset init fixes for 1.24 and up
[02:21] menn0: ack
[02:21] menn0: shipit
[02:21] wallyworld: why did you change the sizes in the test data?
[02:21] disk sizes
[02:21] axw: one sec
[02:22] thumper: thanks. turns out I could merge it myself even
[02:22] \o/
[02:24] * thumper grunts
[02:25] menn0: how do I pass in something like --show-log to the command? That's what happens in the real code, which triggers setting loggo's default writer. This code seems not to be setting that, thus not triggering it, and thus we're not using the right logger.
[02:27] natefinch: hmmm not sure
[02:28] axw: on i386: constant 250059350016 overflows int
[02:29] wallyworld: ah, right
[02:29] so i just knocked off a few 000's
[02:29] wallyworld: hmm, maybe we should be using big.Int then
[02:29] we don't really support i386
[02:29] wallyworld: this might happen IRL?
[02:29] there was talk of dropping it
[02:29] i think it's just there for clients
[02:30] wallyworld: and armhf?
[02:30] hmm
[02:30] wouldn't hurt to change the type i guess
[02:30] also, live test - maas accepts the constraints now but says no machine available
[02:31] so i think the juju side is ok
[02:31] natefinch: i've been looking. the existing machine agent tests exercise the MachineAgent struct but not the cmd stuff, which is a layer up and where show-log is implemented.
[02:31] menn0: ahh, hmm.
[02:32] natefinch: you might need a test that calls cmd/jujud.Main ...
[02:36] menn0: I may be able to reuse the stuff in the bootstrap test
[02:37] natefinch: yep. I was just looking at the tests in main_test.go. There might be some things you can use there.
[02:42] this is open to anyone....... for 50 rupees..... what is the purpose of the ext-port and data-port params in neutron-openvswitch?
[02:42] charmers: ^^
[02:42] openstack-charmers: ^^
[02:42] bdx: charmers are more likely to be watching #juju
[02:43] natefinch: thanks!
[03:12] thumper: where's the lxd code? I'd love to take a peruse
[03:12] github.com/lxc/lxd
[03:13] natefinch: although go get worked, the build failed...
[03:13] haha
[03:13] didn't care because I wasn't wanting to build it, just read it
[03:13] natefinch: some packaging issue
[03:13] and I didn't care enough to dig
[03:42] well, I guess I'm giving up on this for tonight. Maybe I'll just write a CI test for it.
[04:03] axw: you at lunch yet?
[04:07] menn0: or thumper: small review for 1.24? http://reviews.vapour.ws/r/1624/
[04:07] * menn0 looks
[04:08] ty
[04:22] wallyworld: done, sorry
[04:22] it took so long
[04:22] np tyvm
[04:27] Bug #1212689 changed: maas: implement root-disk constraint
[04:33] Bug #1212689 was opened: maas: implement root-disk constraint
[04:39] Bug #1212689 changed: maas: implement root-disk constraint
[05:19] sigh... the leadership feature tests now work on vivid but not on trusty
[05:19] at least I can repro it in a trusty VM
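A quick illustration of the i386 failure axw and wallyworld hit at 02:28 — on 32-bit platforms Go's int is 32 bits, so the untyped constant doesn't fit; widening the declared type (rather than reaching for big.Int) is enough here:

```go
package main

import "fmt"

func main() {
	// var size int = 250059350016 // on 386/armhf: "constant 250059350016 overflows int"
	var size uint64 = 250059350016            // an explicit 64-bit type compiles on every platform
	fmt.Println(size/(1024*1024*1024), "GiB") // 232 GiB
}
```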
[06:07] wallyworld: thanks for doing the root-disk constraint, nice to see
[06:07] axw: yeah :-)
[06:07] about to send an email
[06:08] how was lunch?
[06:08] axw: want to proofread the bit about placement? https://pastebin.canonical.com/131034/
[06:09] wallyworld: was good thanks. sure, I'll take a look
[06:11] wallyworld: maybe you could specify what happens if you try to use storage with an old version of MAAS? (it should fail, if it doesn't already)
[06:13] wallyworld: the last example sounds weird, like you're suggesting you can use cinder with MAAS, which you can't.
[06:13] ah, ok
[06:13] will reword
[06:13] i'll have to check the failure case to see what juju does, can't recall
[06:14] wallyworld: otherwise looks good
[06:14] thanks
[06:14] ta
[06:17] axw: so with an older maas, no volumes are recorded but the node is still allocated. juju will get no volume info back :-(
[06:17] even if we read the acquire node result and then error, the node will still be allocated
[06:18] wallyworld: yeah, we ought to release the node and return an error I think
[06:18] yeah
[06:18] i'll have to do that
[06:18] doesn't necessarily have to be before beta1 IMO
[06:18] agreed, i'll add it to the release notes email
[06:34] axw: http://reviews.vapour.ws/r/1625/
[06:34] wallyworld: that was quick :)
[06:35] that's what she said
[06:37] wallyworld: reviewed
[06:38] axw: ty
[06:39] axw: maas will not partially allocate - it returns a 409
[06:40] wallyworld: ok
[07:27] morning o/
[09:01] TheMue, voidspace: hangout?
[09:01] dooferlad: omw
[09:01] omw
[09:51] Hi, lads. Is 'juju expose' doing anything useful? The docs merely state that the FW rules will be adjusted, but I haven't seen any changes in iptables output after alternating between expose/unexpose.
[09:52] Is there charm-related code that needs to be written for expose/unexpose to work properly? (didn't find any hooks that would relate to that, at least the documentation doesn't mention it)
[10:24] dooferlad: I've had it confirmed that the maas database is kept up to date by inspecting the leases file
[10:24] dooferlad: and it's the leases file that is authoritative
[10:24] voidspace: good stuff. One more piece of the puzzle solved!
[10:25] dooferlad: indeed
[10:25] dooferlad: so it really does seem like "lxc-stop" is *not* doing ifdown. That's what needs debugging.
[10:26] voidspace: though weren't we seeing that just shutting down the container didn't perform ifdown? What about a physical machine? We need to know if it is the init scripts or something container-specific.
[10:33] dooferlad: right, a halt doesn't either - both initiate the shutdown sequence, which is where we were told ifdown was happening.
[10:33] dooferlad: a physical machine doesn't do a DHCP release on shutdown as far as I know
[10:33] we were told specifically that the lxc shutdown does though
[10:34] voidspace: both should act the same. Not releasing on shutdown is bad.
[10:36] dooferlad: when a physical machine shuts down it's usually ephemeral - it will come back at some point
[10:36] dooferlad: anyway, I haven't confirmed this - this is just my understanding
[10:37] voidspace: it makes no difference. Machines should be nice, and if they have borrowed an address, they should give it back.
[11:00] mattyw: I'm sorry for your loss.
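Going back to the 06:17 storage exchange, a hedged sketch of the agreed fix — release the node and error out when an older MAAS allocates it but silently ignores the volume constraints. All types and function names here are hypothetical stand-ins, not juju's actual MAAS provider code:

```go
package main

import (
	"errors"
	"fmt"
)

// node is a hypothetical stand-in for a MAAS allocation result; only
// the shape of the check matters here.
type node struct {
	systemID string
	volumes  []string
}

var errNoStorage = errors.New("this MAAS release does not support storage volumes")

// acquireWithVolumes sketches the behaviour agreed above: an older MAAS
// happily allocates the node but records no volumes, so detect the
// empty volume list, release the node, and return an error instead of
// leaking the allocation.
func acquireWithVolumes(acquire func() (node, error), release func(string) error, wantVolumes bool) (node, error) {
	n, err := acquire()
	if err != nil {
		return node{}, err
	}
	if wantVolumes && len(n.volumes) == 0 {
		if relErr := release(n.systemID); relErr != nil {
			fmt.Printf("warning: failed to release node %s: %v\n", n.systemID, relErr)
		}
		return node{}, errNoStorage
	}
	return n, nil
}

func main() {
	// An "old MAAS" that allocates the node but reports no volumes.
	oldMAAS := func() (node, error) { return node{systemID: "node-42"}, nil }
	release := func(id string) error { fmt.Println("released", id); return nil }
	_, err := acquireWithVolumes(oldMAAS, release, true)
	fmt.Println("err:", err)
}
```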
[11:01] natefinch, if Scotland had got independence they'd be facing an immigration crisis this morning
[11:01] mattyw: ha
[11:01] mattyw: Scotland's like, no way man, those english folk are crazy
[11:02] natefinch, we're all crazy, but not all of us are nice crazy
[11:12] mgz_: so it looks like i'll have to make the test for log rotation into a CI test.... there's just no infrastructure for testing the large swath of code that needs to run for log rotation to work, at least as far as I can tell.
[11:12] perrito666: ^^
[11:13] jam: ^^
[11:13] natefinch: you're worried about scotland? :)
[11:14] I think I'm missing your original request
[11:15] jam: only you can save Scotland!
[11:15] jam: you're our only hope!
[11:15] jam: just talking about testing for the log rotation stuff. it requires more of the real workings of juju to be... working... during tests than we currently are able to set up in an isolated way. So I think a CI test is going to be required to test that we're really-really rotating in practice.
[11:15] jam: mentioning you, since you responded to the review about needing a test for it
[11:15] natefinch: I did think more CI for it
[11:17] jam: yeah, I had hoped I could at least have some kind of basic unit test for it.... but I couldn't make it work in a couple hours of trying. Just trying to keep the feedback loop as short as possible if we break something.
[11:17] jam: probably a CI test would be good regardless, since it really is such a "runtime" thing.
[11:17] jam: and because it's effectively a global variable, anyone from anywhere in the code can screw it up ;)
[11:17] natefinch: yeah, I feel like this is very easy to accidentally mock out the important thing
[11:18] the fact that something is set to 300MB vs 300000MB is important
[11:18] heh yeah
[11:18] so setting it to 3kB to trigger it would be bad
[11:19] I like your idea of a logspam charm
[11:20] that's better than stopping jujud and manually adding a bunch of data to it, which was my idea
[11:59] natefinch: what do you actually need beyond a deploy test, plus adding more content to logs, then verifying they get moved?
[11:59] you can certainly do that with what we have currently
[11:59] you just write a couple of helpers using juju run
[12:03] mgz_: that should do it, I think
[12:06] natefinch: works for me
[12:18] jam: I replied in the spec to your comments, but I don't even know what questions to ask about the networking.
[12:20] wwitzel3: did you ever get your vmaas to use a network accessible from outside the machine running it?
[12:22] perrito666: well it uses private ip addresses, so I just port forward into it.
[12:23] ow, not fun
[12:23] * perrito666 creates some routes by hand to try to solve it
[12:24] aghh this is stupid, the only thing working is irc, something in my isp is messing with anything using websockets
[13:05] which version of mongo are we using?
=== rogpeppe2 is now known as rogpeppe
[13:12] perrito666: 2.4
[13:14] weren't we moving to 2.6?
[13:17] perrito666: various things have been talked about, but 2.4 is what we have
=== anthonyf is now known as Guest11644
[13:44] Bug #1452381 changed: Failed to deploy vivid LXCs on vivid host
[13:44] Bug #1452808 changed: no API addresses to connect to after juju upgrade
[13:44] Bug #1452891 changed: juju determines the unit IP addresss based on the last interface attached in openstack
[13:54] hah, just done the classic
[13:54] ifdown on a machine I'm ssh'd into
[13:54] and now I can't get back in...
[13:54] ah well
[13:55] voidspace: haha... we've all done it
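To make the 11:18 point concrete: rotation lives entirely in the lumberjack writer that the earlier loggo discussion was about, so getting MaxSize (and keeping the writer itself in place) is the whole game. A minimal sketch with an illustrative path and wiring — not juju's actual setup:

```go
package main

import (
	"log"

	"gopkg.in/natefinch/lumberjack.v2"
)

func main() {
	// Once the log file exceeds MaxSize megabytes, lumberjack renames
	// it and starts a fresh one. If anything later swaps this writer
	// for os.Stderr (the bug discussed above), rotation silently stops.
	log.SetOutput(&lumberjack.Logger{
		Filename: "/var/log/juju/machine-0.log", // illustrative path
		MaxSize:  300,                           // 300MB, not 300000MB - the units matter
	})
	log.Println("machine agent log line")
}
```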
[13:56] natefinch: yeah :-)
[13:57] natefinch: lxc-stop (how we gracefully shut down containers) doesn't do a DHCPRELEASE
[13:57] natefinch: an explicit ifdown does
[13:57] natefinch: this is why oil runs out of dhcp leases when they create and destroy lots of containers
[13:57] natefinch: :-/
[13:57] natefinch: we've been told that lxc *does* do an ifdown as part of the shutdown sequence; as far as I can tell that's not true
[13:58] natefinch: still need to work out why it's not true and how to fix it
[13:59] voidspace: wow, yeah, that would be a pretty serious problem
[14:00] voidspace: seems like one of those "we can't possibly be the first people to run into it" kind of problems
[14:02] Bug #1451385 changed: LDS 15.04 - OpenStack - lxc fails to retrieve tmpl to clone
[14:07] natefinch: yeah, happens with the vanilla ubuntu template *and* the juju image
[14:07] natefinch: so not something we've screwed up
[14:08] voidspace: sucks. Hope we can figure out a way around it.
[14:08] natefinch: I can't find other references to the problem though
[14:09] natefinch: looks like "addressable containers" are the way we'll work round it for now
[14:09] natefinch: where we statically allocate and explicitly release the addresses
[14:13] dooferlad: server bits arrived
[14:13] voidspace: :-)
[14:13] dooferlad: tempted by that refurbed PDU you linked to
[14:13] voidspace: Naa, JSON all the way!
[14:13] dooferlad: but two servers plus PDU is just a bit over-budget
[14:13] dooferlad: so yeah, I'll try the RESTful API first
[14:14] dooferlad: can you link to your scripts in that document (if you don't already) so I have something to hack on?
[14:26] voidspace: doc updated.
[14:39] what's the variable you set in a juju environment to turn debugging on/off?
[14:41] katco: juju set-env logging-config="=INFO"
[14:41] katco: you can also set logging-config: "=INFO" in the environment in environments.yaml
[14:42] (before bootstrapping)
[14:47] natefinch: ty
[14:47] ok, my isp recommends I clear the cache of my browser.....
[14:48] perrito666: first, find the start button
[14:49] perrito666: you should really try one of those registry cleaners
[14:49] they actually made me do a netstat and two download speed tests
[14:49] lol
[14:49] which would be nice if my problem wasn't the upload
[14:50] perrito666: did they make you plug your computer directly into the cable modem? My ISPs wouldn't ever talk to me if I had a router between my computer and the cable modem.... even though it was always the cable modem or their hardware that was at fault.
[14:51] natefinch: of course they did
[14:51] and I didn't, but they have no way to know that
[15:15] is there a python equivalent to the go playground? Just like an online repl where you can write some code and run it and share it with people?
[15:15] er, I guess not a repl - I don't need line-by-line values output.
[15:15] https://repl.it/languages/Python3 :)
[15:16] ahh
[15:16] gotcha
[15:16] just like "here's a python script, run it and show me the stdout/stderr"
[15:19] natefinch: http://ideone.com/
[15:31] gsamfira: thanks :)
[15:54] dooferlad: thanks :-)
[16:37] gsamfira, did you get your pull request reviewed for the bug I pinged you about earlier this week?
[16:38] alexisb: not yet, but it's in the queue :)
[16:38] cmars, fyi ^^^
[16:38] TheMue, ^^
[16:38] alexisb: we already spoke :). It's on the radar :D
[16:39] awesome, thanks
[16:39] I will get out of the way then :)
[16:39] gsamfira: #1609?
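The logging-config value from 14:41 is a loggo specification string; a small sketch of what it configures, assuming loggo's documented ConfigureLoggers API (the module names here are illustrative):

```go
package main

import "github.com/juju/loggo"

func main() {
	// "<root>=INFO" sets the default level (the bare "=INFO" form used
	// with juju set-env means the same thing); semicolon-separated
	// module overrides can follow.
	if err := loggo.ConfigureLoggers("<root>=INFO;juju.provisioner=DEBUG"); err != nil {
		panic(err)
	}
	loggo.GetLogger("juju.provisioner").Debugf("shown: this module is at DEBUG")
	loggo.GetLogger("juju.worker").Debugf("suppressed: root level is INFO")
}
```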
[16:40] TheMue: yes :)
[16:41] gsamfira: ok, will take a look. I may just have trouble following the windows specifics :D
[16:41] I am really sorry about the large PR :(. It's one package that replaces the old one.
[16:41] I am here if you have questions
[16:43] TheMue: there are a couple of functions that do some syscalls to specific windows stuff, but I think I commented most of it with links to the MSDN documentation pages
[16:43] gsamfira: ok, will ping you if needed
[16:48] ericsnow: natefinch: if you land the bugs you have in review in the next hour, we can claim 3 extra bugs for this iteration :)
[16:48] :)
[16:48] katco: likely not happening for mine
[16:49] ericsnow: no worries. just would have been nice :)
[17:17] Bug #1453215 was opened: poolSuite tests fail on go 1.3 and gccgo
[17:32] natefinch: planning meeting
[17:38] natefinch, mgz, katco: can any of you review http://reviews.vapour.ws/r/1633/ ?
[17:40] sinzui: ship it!
[17:40] thank you natefinch
[17:47] Bug #1453215 changed: poolSuite tests fail on go 1.3 and gccgo
[18:06] katco, wwitzel3: is it just me, or is ericsnow's sound going in and out?
[18:06] and video
[18:06] same here
[18:07] natefinch: it is going in and out, but I assume the server is recording it
[18:08] ericsnow: sorry eric, you were cutting out pretty badly
[18:08] ericsnow: we were getting every other word
[18:08] or at least i was
[18:08] katco: the whole time I was talking?
[18:09] ericsnow: not the entire time, but a good majority of it
[18:09] katco: switching in and out of screen sharing was probably the culprit
[18:09] ericsnow: possibly
[18:09] ericsnow: well, you can say every word twice :p
[18:09] ericsnow: you look frozen right now
[18:10] katco: just holding very still
[18:11] ericsnow: lol can you hear what nate is saying?
[18:11] katco: loud and clear
[18:11] katco: video too
[18:11] ericsnow: k :)
[18:24] natefinch: we're back in the other meeting
[18:49] in the bit where I made the comment "e.g. like this"
=== anthonyf is now known as Guest82825
[19:35] omg I just tried to /query someone in the next room (the cleaning lady, to be more precise) who does not even have an irc handle
[19:48] lol
[19:51] my brain might have a couple of bad sectors
[20:09] perrito666, iqstat is a helpful tool for that ;)
[20:22] I would kill for a fsck.brain
[20:22] perrito666: http://www.muppetlabs.com/~breadbox/bf/ ?
[20:24] katco: I prefer http://en.wikipedia.org/wiki/Whitespace_%28programming_language%29
[21:12] Bug #1453280 was opened: Juju machine service for Windows creates incorrect tools symlinks
[21:18] jw4, gsamfira: you are now members of ~juju-report-contributors, which entitles you to view http://reports.vapour.ws/releases
[21:19] I can add other people who need to see the CI logs
[21:19] sinzui: cool! thanks!
[21:19] please also add Bogdan Teleaga
[21:20] gsamfira, done
[21:20] thanks!
[21:24] Bug #1453280 changed: Juju machine service for Windows creates incorrect tools symlinks
[21:30] Bug #1453280 was opened: Juju machine service for Windows creates incorrect tools symlinks
[21:47] sinzui: much appreciated!
[22:18] Bug #1453297 was opened: juju not queuing actions after relation-changed hook
[23:24] Bug #1450740 changed: provider/openstack: volumes are recorded with 0 size