[05:21] bigjools: still not seeing any decent way to test my virsh power script, or to make it quite trivial.
=== jtv1 is now known as jtv
[05:22] One thing I could do is allow the caller to override the virsh executable it uses, so that a test can inject “echo” instead. But that leaves baggage in normal execution paths.
[05:23] I could make it a power parameter, I guess.
=== Ursinha` is now known as Ursinha
=== Ursinha is now known as Guest90314
[06:51] jtv: sorry was caught up with things. I don't think you can test the scripts.
[06:51] the approach I took with the wol stuff was to test the templating and to test that the script returns with a 0 code.
[06:57] Thanks — I've got some things I can do now.
[07:01] jtv: since the scripts are intended to be customised, I think unit testing them is not useful
[07:01] they should be QAed instead
[07:03] I'd like to know that at least it makes some kind of syntactic sense to the shell.
[07:03] Which I can, actually, test to some extent.
[07:04] jtv: exit code!
[07:04] Alas, no, not that easy.
[07:04] not for all, no
[07:04] and we really don't want to start up VMs
[07:05] so like I said, I'd really leave it to QA
[07:05] this is why I made this level of separation
[07:06] Well there's one thing I can do in tests that makes it not start up VMs and yet exercises most of the script.
[07:06] One simple thing, that is. Complicated things can do more, I'm sure. :)
[07:07] :)
[07:08] And there. Just found a bug thanks to my test!
[08:52] bigjools: I was just saying that we could stay on and chat about the anonymous-metadata requirement
[08:52] jtv: ah ok I have a call with gavin first, will call you right after
[08:52] OK
[08:54] allenap: I see you've got a call now, so just ping me when you're available to talk about this "migration" problem.
[08:58] rvba: Cool, ta.
[09:34] jtv: ok, wanna call?
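(Editor's aside: the override idea from 05:22 — letting the caller substitute the virsh executable so a test can inject `echo` and nothing actually touches a VM — can be sketched roughly as below. The function name, parameter, and command shape are all illustrative, not MAAS's actual power script.)

```python
import subprocess

def power_on(system_id, virsh="virsh"):
    """Run a power-on command, with the executable overridable.

    In tests, pass virsh="echo" so the command line is merely printed
    back instead of starting a VM, while the exit code stays meaningful.
    (Hypothetical sketch, not the real MAAS power script.)
    """
    return subprocess.run(
        [virsh, "start", system_id], capture_output=True, text=True)

# Test-style usage: inject 'echo' instead of the real virsh.
result = power_on("node-1234", virsh="echo")
print(result.returncode)         # 0 — 'echo' succeeded
print(result.stdout.strip())     # the arguments virsh would have received
```

This also exercises the argument-building path of the script, which is how a bug like the one found at 07:08 can surface without booting anything.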
[09:35] bigjools: otp
[09:36] w/someone else
[09:37] * bigjools is hurt
[09:37] you said you'd only ever otp me
[09:39] uh-oh
[09:40] jtv: I need to head off - maybe you want to talk to Daviey about the requirements for this?
[09:40] Yes, I think I will.
[09:47] o/
[09:48] jtv:
[09:49] Daviey: Hi. It's about the anonymous metadata access… I wanted to be clear about the purpose. Is it _just_ for debugging? Because several of the things that were said sort of suggested that it might be needed for commissioning.
[09:49] Which I hope is not the case, really.
[09:50] But if it's for debugging, we might be able to require run-of-the-mill UI admin authentication.
[09:51] jtv: so..
[09:51] jtv: We have some hardware that can't boot the way MAAS expects it to..
[09:52] I see. This is indeed entirely different from the use-case we had before.
[09:52] Therefore, for 'demo / testing', if we can avoid having to push data to the node, we can get a working setup
[09:52] jtv: yeah, seems to fit the same model tho?
[09:52] Wait — I'd like to comment on those two lines separately.
[09:52] ok
[09:54] About “seems to fit the same model,” I think admin auth makes more sense _for debugging_ than opening up the whole thing — whereas for difficult hardware it may not be very user-friendly.
[09:54] jtv: well, injecting a known user/pass is MUCH easier than a generated-on-demand oauth key
[09:55] The other thing is: what do you mean by “avoid having to push data to the node” exactly, so that I understand correctly? Do you mean that we don't want to push the credentials for the metadata service to the node through its boot params?
[09:55] So.. using admin auth does help.
[09:55] We could probably have a separate key for this.
[09:55] jtv: we don't have access to boot params at deploy time.
[09:55] But it's a security hazard if it's open in production.
[09:55] Crappy hw
[09:55] But then how does the node even know where to get its metadata? We push that in the same way.
[09:55] jtv: yeah, this is for the debug / demo (closed network) workflow
[09:56] jtv: Yeah, knowing where the metadata service is = known constant
[09:56] It's really injecting runtime-generated auth that is the problem
[09:57] (although, we could be really smart and use avahi for auto-detecting where the metadata service is... but really.. overkill.)
[09:57] I don't suppose we could have a single, shared key and differentiate by MAC address? Bit of a hack, and only good for a limited class of networks.
[09:58] hmm
[09:59] Then again, if you're running in the demo config…
[09:59] the MAC address isn't normally exposed in the HTTP request.
[09:59] Argh! How does the node even know how to identify itself? It doesn't receive its system_id either!
[09:59] So.. the MAAS server would need to arp
[09:59] Maybe the node could do it.
[09:59] X-FORWARDED_FOR: $(hostname) .. seems sane?
[10:00] How does it know its hostname at that stage?
[10:00] X-Forwarded-For: rather.
[10:00] jtv: well.. when using the model of a DHCP-allocated hostname.. MAAS is informed of the hostname, and the node knows it
[10:02] DHCP-allocated hostname? Isn't that the node telling the DHCP server what hostname it wants? If so, it'd have to know first — quod non.
[10:02] jtv: hmm.. there are two models.. MAAS controls DHCP, and an existing DHCP
[10:02] this fits the existing-DHCP model quite well.
[10:02] Ouch ouch ouch — you want to make it rely on a given DHCP setup as well? That's just making things worse & worse.
[10:03] it's not all that bad IMO
[10:03] remember, this is a non-production setup
[10:05] jtv: to contrast, this is how openstack does it.. https://github.com/openstack/nova/blob/master/nova/api/ec2/__init__.py#L253
[10:06] So you're talking about setting up a DHCP server with prepared leases with hostnames, for demo purposes? It feels a lot like designing a whole feature specifically for just one demo.
[10:06] jtv: no, this also fits the debug model quite well.
[10:07] Only if you have the hostnames.
[10:07] i can do an out-of-band install of ubuntu server, apt-get install cloud-init, and provided i installed with the hostname MAAS expects, we are GOLD
[10:07] right?
[10:08] jtv: the other solution is to do a reverse DNS lookup based on the IP address, and post that.. That caters for MAAS controlling the DHCP
[10:08] client side ^
[10:12] But why not have the client look up the MAC address it uses for the metadata access?
[10:12] also valid
[10:13] In fact it could just loop over its interfaces until it found a hit.
[10:13] jtv: technically, i'd say MAC address lookup is more insecure :)
[10:13] but security isn't an issue here
[10:13] jtv: yeah, MAC address seems reasonable :)
[10:13] * jtv refreshes his memory on that topic
[10:15] arp -na
[10:15] No I mean, ISTR MAC addresses were significant somehow to the EC2 metadata service. Let me see if it matters to us at all.
[10:16] (As you say, this is identification, not authentication — and yes, we'll still need to worry about how to fit this in with actual authentication)
[10:16] jtv: nah, the MAC address isn't interesting to the metadata service afaik.
[10:17] You're right. We ditched that.
[10:18] Still, it's one of the few things the node does know when it starts commissioning.
[10:45] Daviey: how would we do enlistment on this hardware?
[10:45] (So I get a picture of what process we'd be working towards)
[10:49] jtv: using the cmd line tool, maas-enlist
[10:50] OK so MAAS knows about its MAC addresses anyway.
[10:50] Will it need to have a custom(ized) metadata client?
[10:50] I guess it will — hardcoded base metadata service URL
[10:50] And it can embed its MAC address in the URL as well.
[10:51] jtv: sounds good to me.
[10:51] i think it will be a wrapper around cloud-init TBH
[10:52] running cloud-init post-boot isn't as straightforward as calling a command, sadly
[11:03] Daviey: I think we'd have to make it a whole separate http tree.
Maybe a hidden /metadata//node/
[11:05] jtv: hmm, can't the mac just be passed as a parameter through urls.py to the same function, defaulting to None, which checks the django setting METADATA_MATCH_MAC = True?
[11:05] ahh.. unauth'd url
[11:05] frack
[11:05] Yeah. Want a separate URL anyway, I think.
[11:06] would be nice to be able to reuse the same code..
[11:06] but i have every confidence you'd do the right thing :)
[11:06] Well you have to get the URL from a different source anyway, right?
[11:06] So that might as well include a MAC address.
[11:06] yuppers
[11:07] And I take it the enlistment program sends all of the node's MAC addresses, in which case we don't even need to worry about which one we use.
[11:09] right!
[11:09] maas-enlist defaults to pushing all MACs
[11:10] So then whatever code picks up the metadata URL from your static setup will append one of its MAC addresses as a path component, and presto.
[11:11] \o/
[11:11] sounds like a plan
[11:11] (And it'll skip the oauth bit, but that much was obvious)
[11:11] I can add a setting for just the dev/demo configs.
[11:12] jtv: you rock my world.
[11:12] Well let's wait until it works. :)
[11:27] Daviey: I'll work it out in more detail tomorrow. Until then I remain, your loyal servant &c. :)
[11:30] jtv: thanks.. nn sir
[11:35] nn!
=== Guest90314 is now known as Ursinha
[17:02] hey folks, this isn't a dev question, but I'm struggling with MaaS/dnsmasq right now and I was hoping you might be able to point me in the right direction.
[17:02] I'm trying to stand up a new MaaS: I add 11 nodes to the MaaS, but when I run juju bootstrap, I get an ssh error. When I run verbose, it is trying to connect to the hostname of the node, which fails to resolve.
[17:03] Shouldn't MaaS be adding the hostname to cobbler to add to dnsmasq automatically when the node is finished commissioning?
[17:03] The strangeness comes in where one of my 11 blades seems to resolve but the other 10 do not.
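(Editor's aside on the urls.py exchange at 11:05 above: the "same view, optional MAC, gated by a setting" shape can be illustrated without Django, purely schematically. Every name below — the paths, the setting, the dispatch table — is hypothetical, not MAAS code.)

```python
import re

# Stand-in for a Django setting enabling the anonymous by-MAC tree.
METADATA_MATCH_MAC = True

# Two routes to one handler: the normal (authenticated) tree passes no
# MAC; the anonymous tree captures one from the path.
PATTERNS = [
    (re.compile(r"^/metadata/node/$"), None),
    (re.compile(r"^/metadata/by-mac/(?P<mac>[0-9a-f:]{17})/node/$"), "mac"),
]

def dispatch(path):
    """Return (status, mac) for a request path, per the sketch above."""
    for pattern, group in PATTERNS:
        m = pattern.match(path)
        if m:
            mac = m.group(group) if group else None
            if mac is not None and not METADATA_MATCH_MAC:
                return ("denied", None)
            return ("ok", mac)
    return ("not found", None)
```

As the chat concludes, a separate URL tree (rather than one overloaded view) is the likelier shape, since the anonymous tree skips oauth entirely.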
[17:05] cheez0r: can you ping the bootstrap node's hostname from where you are running juju?
[17:07] roaksoax: no. that's part of the problem. It should have automatically been added via cobbler when I commissioned the node.
[17:07] I'm trying to understand both why the DNS hostname adds aren't working and where I could manually add them
[17:09] cheez0r: are you running an external DNS server or are you using maas-dhcp?
[17:09] using maas-dhcp
[17:09] because of that I was under the impression that MaaS would add hostname entries for each of the nodes as they were commissioned.
[17:09] cheez0r: is the machine where you are running juju using the MAAS server as its DNS server?
[17:09] yes, should be
[17:10] right- I'm following the howto at https://wiki.ubuntu.com/ServerTeam/MAAS/Juju#MAAS:_getting_started_with_Juju
[17:10] cheez0r: make sure that the machine where you are running juju from is using the MAAS server as its DNS server (you probably have to do it manually)
[17:12] how can I do that- the node is set as 'ready' in MaaS, it's fresh out of commissioning. I'm running juju bootstrap from the MaaS node.
[17:12] It looks like a resolution issue from the MaaS node, not from the node itself
[17:12] and none of the nodes resolve from the MaaS node except for one, for whatever reason.
[17:12] They were all added identically.
[17:17] cheez0r: I think I've hit this issue before but can't recall how I fixed it, beyond editing /etc/resolv.conf and adding nameserver W.X.Y.Z
[17:18] right, except that I don't know the IP addresses MaaS has handed out to my servers, to manually configure their hostnames
[17:18] it's resolving via dnsmasq; I just need to add the hostnames to the dnsmasq configuration
[17:18] cheez0r: that's added automatically, unless there's a bug in MAAS where it is not updating cobbler correctly
[17:19] cheez0r: try: sudo cobbler sync
[17:19] right, which I think is the problem
[17:19] I've done cobbler sync about a hundred times so far
[17:19] :p
[17:19] uhmmm i'll try to reproduce as soon as I can to further troubleshoot this
[17:19] Do you know where the cobbler or dnsmasq configuration is located for the hostnames?
[17:20] I can try manually adding them and see if it gets me past this point
[17:20] cheez0r: /var/lib/cobbler/cobbler_hosts
[17:20] cheez0r: what does that file show?
[17:20] cheez0r: does it show hostname/IP combinations?
[17:21] it's empty.
[17:22] cheez0r: so if you do: sudo cobbler system dumpvars --name node-XYZ (or whatever the node is called) | grep dns
[17:22] no output returned
[17:22] system not found
[17:23] cheez0r: so maybe MAAS is not setting the hostname
[17:23] doesn't matter if I use the hostnames I specified or the default
[17:23] cheez0r: you could try: sudo cobbler system edit --name node-XYZ --dns-name node-hostname
[17:23] and then sudo cobbler sync
[17:23] ok let me try that
[17:23] and try to ping it by hostname
[17:24] how can I get a list of nodes cobbler knows about?
[17:24] cheez0r: sudo cobbler system list
[17:24] that's really odd, they all seem to have the same MAC address
[17:25] names are node--d485645878c8
[17:26] the systems in that list seem to reflect the correct dns_name in the config it outputs
[17:27] okay, I'm guessing this is a bug related to my specifying hostnames when adding nodes to MaaS
[17:27] yes apparently so
[17:27] let me delete and re-add a node with no specified hostname and see what it does
[17:27] i'd have to set up a physical cluster to troubleshoot
[17:27] not asking you to do that, but thanks for the thought ;)
[17:27] cheez0r: i know :) I just need to make sure it works well :)
[17:28] well, I'm specifying amd64 arch and was specifying a hostname in the format city_name-dc_name-enclosure#-blade#
[17:29] might be throwing errors on all of the hyphens or some such, but I dunno
[17:29] for hostnames?
[17:29] yes
[17:29] cheez0r: hostnames don't accept underscores, but do accept hyphens
[17:29] no underscores in the actual names
[17:30] like paris-champselysee-enclosure1-blade12
[17:31] alright, yeah I think it might be related to not updating cobbler correctly
[17:31] well I've got one node re-commissioning with the default hostname node- so we'll see if that fixes it
[17:32] cool
[17:43] no change- the newly recommissioned node with the default hostname is still not resolving.
[17:43] cobbler_hosts is still empty
[17:46] the newly recommissioned node still shows up with a funky name in cobbler system list
[17:53] re-commissioning with architecture set to i386 to see if that fixes anything
[17:58] no change, still doesn't resolve
[18:02] Hello, I've run juju bootstrap. This was good; I have seen apt-get running. But juju status doesn't work. I get ERROR Invalid SSH key. ssh ubuntu@hostname works.
[18:02] http://paste.ubuntu.com/1016722/
[18:02] Where does it go wrong in my setup?
[18:02] Soekris: the hostname resolves?
[18:02] or are you doing ssh ubuntu@IP?
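(Editor's aside: the manual repair sequence suggested at 17:23 boils down to two cobbler invocations per node. A small helper that merely assembles those command lines — it deliberately does not run them, since cobbler may not be installed and the node names here are examples:)

```python
def cobbler_dns_fix(node_name, dns_name):
    """Return the two command lines from the 17:23 suggestion:
    set the cobbler system's dns-name, then sync so dnsmasq
    regenerates its host entries.  (Helper name and example
    arguments are the editor's, not from the channel.)"""
    return [
        ["sudo", "cobbler", "system", "edit",
         "--name", node_name, "--dns-name", dns_name],
        ["sudo", "cobbler", "sync"],
    ]

# Example: print the commands one would run for a single node.
for cmd in cobbler_dns_fix("node-abc123", "paris-enclosure1-blade12"):
    print(" ".join(cmd))
```

Pipe each list through `subprocess.run` (or just paste the printed lines into a shell) to apply the fix, then try pinging the node by hostname as suggested.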
[18:05] cheez0r: it resolves
[18:06] ubuntu@s1-cl1-maas works
[18:06] ssh ubuntu@s1-cl1-maas works
[18:06] hrm, interesting
[18:06] I have the same SSH issue, but mine is because cobbler isn't adding the DNS names for some reason
[18:07] I have configured DNS and DHCP on another server
[18:09] It's strange that something so easy can be so difficult :D
[18:10] agreed
[18:13] cheez0r: is it for production or for test purposes?
[18:13] kind of both
[18:14] you can quickly make the entries in /etc/hosts
[18:14] yeah, I think that'll be my workaround right now
[19:48] Strange, only one of the two servers is showing in the juju status. But how? It's a MAAS mystery.
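(Editor's aside: the /etc/hosts workaround agreed on at 18:14 amounts to appending hostname/IP pairs to that file. A trivial sketch of rendering such entries — the hostname comes from the log, the address is a placeholder:)

```python
def hosts_lines(mapping):
    """Render /etc/hosts-style lines from a {hostname: ip} dict,
    sorted by hostname for stable output."""
    return "\n".join(
        "%s\t%s" % (ip, host) for host, ip in sorted(mapping.items()))

# Example with a placeholder address; append the result to /etc/hosts.
print(hosts_lines({"s1-cl1-maas": "10.0.0.11"}))
```

This only papers over the missing cobbler_hosts entries, of course; the underlying bug (cobbler not receiving dns-names from MAAS) still needs fixing.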