[05:21] <jtv1> bigjools: still not seeing any decent way to test my virsh power script, or to make it quite trivial.
[05:22] <jtv> One thing I could do is allow the caller to override the virsh executable it uses, so that a test can inject “echo” instead.  But that leaves baggage in normal execution paths.
[05:23] <jtv> I could make it a power parameter, I guess.
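[Editor's note] jtv's idea of letting the caller override the virsh executable so a test can inject `echo` can be sketched as below. The script body, the `VIRSH` variable, and `run_power_script` are hypothetical illustrations, not actual MAAS code.

```python
# Sketch, assuming a power script whose virsh binary is overridable via
# an environment variable (hypothetical, not real MAAS code).
import os
import subprocess
import tempfile

POWER_SCRIPT = """\
# Hypothetical virsh power-on script: VIRSH defaults to the real binary
# but can be overridden for testing.
VIRSH=${VIRSH:-virsh}
$VIRSH start "$1"
"""

def run_power_script(node, virsh_override=None):
    """Run the power script, optionally substituting the virsh executable."""
    with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
        f.write(POWER_SCRIPT)
        path = f.name
    env = dict(os.environ)
    if virsh_override:
        env["VIRSH"] = virsh_override
    try:
        return subprocess.run(["sh", path, node], env=env,
                              capture_output=True, text=True)
    finally:
        os.unlink(path)

# Injecting echo exercises the script without touching libvirt: the
# command it would have run appears on stdout instead.
result = run_power_script("node-1", virsh_override="echo")
```

This is the "baggage in normal execution paths" trade-off jtv mentions: the override hook exists only so tests can hijack it.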
[06:51] <bigjools> jtv: sorry was caught up with things.  I don't think you can test the scripts.
[06:51] <bigjools> the approach I took with the wol stuff was to test the templating and to test that the script returns with a 0 code.
[06:57] <jtv> Thanks — I've got some things I can do now.
[07:01] <bigjools> jtv: since the scripts are intended to be customised, I think unit testing them is not useful
[07:01] <bigjools> they should be QAed instead
[07:03] <jtv> I'd like to know that at least it makes some kind of syntactic sense to the shell.
[07:03] <jtv> Which I can, actually, test to some extent.
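[Editor's note] The "syntactic sense to the shell" check jtv describes can be approximated with `sh -n`, which makes the shell parse a script without executing anything. The sample scripts below are illustrations, not MAAS power templates.

```python
# Sketch: validate shell syntax without running the script, using sh -n.
import subprocess
import tempfile

def shell_syntax_ok(script_text):
    """Return True if sh can parse the script (sh -n executes nothing)."""
    with tempfile.NamedTemporaryFile("w", suffix=".sh") as f:
        f.write(script_text)
        f.flush()
        proc = subprocess.run(["sh", "-n", f.name],
                              stderr=subprocess.DEVNULL)
        return proc.returncode == 0

well_formed = shell_syntax_ok('echo "powering on"')
broken = shell_syntax_ok('if true; then echo')  # missing fi
```

This catches malformed templates without ever starting a VM, which fits bigjools's constraint above.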
[07:04] <bigjools> jtv: exit code!
[07:04] <jtv> Alas, no, not that easy.
[07:04] <bigjools> not for all, no
[07:04] <bigjools> and we really don't want to start up VMs
[07:05] <bigjools> so like I said, I'd really leave it to QA
[07:05] <bigjools> this is why I made this level of separation
[07:06] <jtv> Well there's one thing I can do in tests that makes it not start up VMs and yet exercises most of the script.
[07:06] <jtv> One simple thing, that is.  Complicated things can do more, I'm sure.  :)
[07:07] <bigjools> :)
[07:08] <jtv> And there.  Just found a bug thanks to my test!
[08:52] <jtv> bigjools: I was just saying that we could stay on and chat about the anonymous-metadata requirement
[08:52] <bigjools> jtv: ah ok I have a call with gavin first, will call you right after
[08:52] <jtv> OK
[08:54] <rvba> allenap: I see you've got a call now, so just ping me when you will be available to talk about this "migration" problem.
[08:58] <allenap> rvba: Cool, ta.
[09:34] <bigjools> jtv: ok, wanna call?
[09:35] <jtv> bigjools: otp
[09:36] <jtv> w/someone else
[09:37]  * bigjools is hurt
[09:37] <bigjools> you said you'd only ever otp me
[09:39] <jtv> uh-oh
[09:40] <bigjools> jtv: I need to head off - maybe you want to talk to Daviey about the requirements for this?
[09:40] <jtv> Yes, I think I will.
[09:47] <Daviey> o/
[09:48] <Daviey> jtv:
[09:49] <jtv> Daviey: Hi.  It's about the anonymous metadata access… I wanted to be clear about the purpose.  Is it _just_ for debugging?  Because several of the things that were said sort of suggested that it might be needed for commissioning.
[09:49] <jtv> Which I hope is not the case, really.
[09:50] <jtv> But if it's for debugging, we might be able to require run-of-the-mill UI admin authentication.
[09:51] <Daviey> jtv: so..
[09:51] <Daviey> jtv: We have some hardware that can't boot the way MAAS expects it to..
[09:51] <jtv> I see.  This is indeed entirely different from the use-case we had before.
[09:52] <Daviey> Therefore, for 'demo / testing' if we can avoid having to push data to the node, we can get a working setup
[09:52] <Daviey> jtv: yeah, seems to fit the same model tho?
[09:52] <jtv> Wait — I'd like to comment on those two lines separately.
[09:52] <Daviey> ok
[09:54] <jtv> About “seems to fit the same model,” I think admin auth makes more sense _for debugging_ than opening up the whole thing — whereas for difficult hardware it may not be very user-friendly.
[09:54] <Daviey> jtv: well, injecting a known user/pass is MUCH easier than a generated on demand oauth key
[09:55] <jtv> The other thing is: what do you mean by “avoid having to push data to the node” exactly, so that I understand correctly?  Do you mean that we don't want to push the credentials for the metadata service to the node through its boot params?
[09:55] <Daviey> So.. using admin auth does help.
[09:55] <jtv> We could probably have a separate key for this.
[09:55] <Daviey> jtv: we don't have access to boot params at deploy time.
[09:55] <jtv> But it's a security hazard if it's open in production.
[09:55] <Daviey> Crappy hw
[09:55] <jtv> But then how does the node even know where to get its metadata?  We push that in the same way.
[09:55] <Daviey> jtv: yeah, this is for debug / demo (closed network) workflow
[09:56] <Daviey> jtv: Yeah, knowing where the metadata service is = known constant
[09:56] <Daviey> It's really injecting runtime-generated auth that's the problem
[09:57] <Daviey> (although, could be really smart and use avahi for auto detecting where the metadata service is... but really.. overkill.)
[09:57] <jtv> I don't suppose we could have a single, shared key and differentiate by MAC address?  Bit of a hack, and only good for a limited class of networks.
[09:58] <Daviey> hmm
[09:59] <jtv> Then again, if you're running in the demo config…
[09:59] <Daviey> the mac address isn't normally exposed in the http request.
[09:59] <jtv> Argh!  How does the node even know how to identify itself?  It doesn't receive its system_id either!
[09:59] <Daviey> So.. the MAAS server would need to arp
[09:59] <jtv> Maybe the node could do it.
[09:59] <Daviey> X-FORWARDED_FOR: $(hostname) .. seems sane?
[10:00] <jtv> How does it know its hostname at that stage?
[10:00] <Daviey> X-Forwarded-For: rather.
[10:00] <Daviey> jtv: well.. when using the model of dhcp allocated hostname.. MAAS is informed of the hostname, and the node knows it
[10:02] <jtv> DHCP-allocated hostname?  Isn't that the node telling the DHCP server what hostname it wants?  If so, it'd have to know first — which it doesn't.
[10:02] <Daviey> jtv: hmm.. there are two models.. MAAS controls dhcp, and an existing dhcp
[10:02] <Daviey> this fits existing dhcp quite well.
[10:02] <jtv> Ouch ouch ouch you want to make it rely on a given DHCP setup as well?  That's just making things worse & worse.
[10:03] <Daviey> it's not all that bad IMO
[10:03] <Daviey> remember, this is a non-production setup
[10:05] <Daviey> jtv: to contrast, this is how openstack does it.. https://github.com/openstack/nova/blob/master/nova/api/ec2/__init__.py#L253
[10:06] <jtv> So you're talking about setting up a DHCP server with prepared leases with hostnames, for demo purposes?  It feels a lot like designing a whole feature specifically for just one demo.
[10:06] <Daviey> jtv: no, this also fits the debug model quite well.
[10:07] <jtv> Only if you have the hostnames.
[10:07] <Daviey> i can do an out-of-band install of ubuntu server, apt-get install cloud-init, providing i installed with the hostname MAAS expects, we are GOLD
[10:07] <Daviey> right?
[10:08] <Daviey> jtv: the other solution is to do a reverse dns lookup based on the ip address, and post that.. That caters for MAAS controlling the dhcp
[10:08] <Daviey> client side ^
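[Editor's note] Daviey's client-side alternative, a reverse DNS lookup on the node's own IP, looks roughly like this. Whether MAAS would accept a PTR-derived hostname is not settled in this conversation; the sketch only shows the lookup itself.

```python
# Sketch: reverse-resolve an IP to the hostname the node would post.
import socket

def hostname_from_ip(ip):
    """Reverse-resolve ip; return None when no PTR record exists."""
    try:
        return socket.gethostbyaddr(ip)[0]
    except (socket.herror, socket.gaierror):
        return None

local_name = hostname_from_ip("127.0.0.1")
```

As Daviey notes, this caters for the case where MAAS controls the DHCP (and hence the reverse zone).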
[10:12] <jtv> But why not have the client look up the mac address it uses for the metadata access?
[10:12] <Daviey> also valid
[10:13] <jtv> In fact it could just loop over its interfaces until it found a hit.
[10:13] <Daviey> jtv: technically, i'd say mac address lookup is more insecure :)
[10:13] <Daviey> but security isn't an issue here
[10:13] <Daviey> jtv: yeah, mac address seems reasonable :)
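[Editor's note] jtv's "loop over its interfaces until it found a hit" can be sketched like this: on Linux each interface exposes its MAC at `/sys/class/net/<iface>/address`. Skipping the loopback's all-zero MAC is an assumption here, not stated MAAS policy.

```python
# Sketch: enumerate interface MAC addresses from sysfs (Linux layout).
import os
import tempfile

def interface_macs(sys_net="/sys/class/net"):
    """Return (interface, mac) pairs, skipping the all-zero loopback MAC."""
    pairs = []
    for iface in sorted(os.listdir(sys_net)):
        try:
            with open(os.path.join(sys_net, iface, "address")) as f:
                mac = f.read().strip()
        except OSError:
            continue
        if mac and mac != "00:00:00:00:00:00":
            pairs.append((iface, mac))
    return pairs

# Demonstrate against a fabricated sysfs tree so the sketch runs
# deterministically without real hardware.
fake_sys = tempfile.mkdtemp()
for iface, mac in [("eth0", "d4:85:64:58:78:c8"), ("lo", "00:00:00:00:00:00")]:
    os.makedirs(os.path.join(fake_sys, iface))
    with open(os.path.join(fake_sys, iface, "address"), "w") as f:
        f.write(mac + "\n")

found = interface_macs(fake_sys)
```

Any of the returned MACs would serve as the identifier, since (per the later exchange) maas-enlist registers all of them.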
[10:13]  * jtv refreshes his memory on that topic
[10:15] <Daviey> arp -na
[10:15] <jtv> No I mean, ISTR MAC addresses were significant somehow to the EC2 metadata service.  Let me see if it matters to us at all.
[10:16] <jtv> (As you say, this is identification, not authentication — and yes, we'll still need to worry about how to fit this in with actual authentication)
[10:16] <Daviey> jtv: nah, mac address isn't interesting to the meta data service afaik.
[10:17] <jtv> You're right.  We ditched that.
[10:18] <jtv> Still, it's one of the few things the node does know when it starts commissioning.
[10:45] <jtv> Daviey: how would we do enlistment on this hardware?
[10:45] <jtv> (So I get a picture of what process we'd be working towards)
[10:49] <Daviey> jtv: using the cmd line tool, maas-enlist
[10:50] <jtv> OK so MAAS knows about its MAC addresses anyway.
[10:50] <jtv> Will it need to have a custom(ized) metadata client?
[10:50] <jtv> I guess it will — hardcoded base metadata service URL
[10:50] <jtv> And it can embed its MAC address in the URL as well.
[10:51] <Daviey> jtv: sounds good to me.
[10:51] <Daviey> i think it will be a wrapper around cloud-init TBH
[10:52] <Daviey> running cloud-init post-boot isn't as straightforward as calling a command, sadly
[11:03] <jtv> Daviey: I think we'd have to make it a whole separate http tree.  Maybe a hidden /metadata/<version>/node/<mac>
[11:05] <Daviey> jtv: hmm, mac can't just be passed as a parameter through urls.py to the same function which defaults to None, and checks django settings if METADATA_MATCH_MAC = True ?
[11:05] <Daviey> ahh.. unauth'd url
[11:05] <Daviey> frack
[11:05] <jtv> Yeah.  Want separate URL anyway, I think.
[11:06] <Daviey> would be nice to be able to reuse the same code..
[11:06] <Daviey> but i have every confidence you'd do the right thing :)
[11:06] <jtv> Well you have to get the URL from a different source anyway, right?
[11:06] <jtv> So that might as well include a mac address.
[11:06] <Daviey> yuppers
[11:07] <jtv> And I take it the enlistment program sends all of the node's MAC addresses, in which case we don't even need to worry about which one we use.
[11:09] <Daviey> right!
[11:09] <Daviey> maas-enlist defaults to pushing all MACs
[11:10] <jtv> So then whatever code picks up the metadata URL from your static setup will append one of its mac addresses as a path component, and presto.
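[Editor's note] The plan the two settle on, a fixed base URL from the static setup plus one of the node's MACs as a path component, composes like this. The `/metadata/<version>/node/<mac>` shape follows jtv's suggestion above; the host name and "latest" version string are placeholders, and the final path was not decided here.

```python
# Sketch: compose the unauthenticated, MAC-identified metadata URL.
def anonymous_metadata_url(base_url, version, mac):
    """Append version and MAC as path components to the known base URL."""
    return "%s/metadata/%s/node/%s" % (base_url.rstrip("/"), version, mac)

url = anonymous_metadata_url("http://maas.example.com", "latest",
                             "d4:85:64:58:78:c8")
```

As noted in the conversation, this identifies rather than authenticates the node, which is why it is confined to dev/demo configs.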
[11:11] <Daviey> \o/
[11:11] <Daviey> sounds like a plan
[11:11] <jtv> (And it'll skip the oauth bit, but that much was obvious)
[11:11] <jtv> I can add a setting for just the dev/demo configs.
[11:12] <Daviey> jtv: you rock my world.
[11:12] <jtv> Well let's wait until it works.  :)
[11:27] <jtv> Daviey: I'll work it out in more detail tomorrow.  Until then I remain, your loyal servant &c.  :)
[11:30] <Daviey> jtv: thanks.. nn sir
[11:35] <jtv> nn!
[17:02] <cheez0r> hey folks, this isn't a dev question, but I'm struggling with MaaS/dnsmasq right now and I was hoping you might be able to point me in the right direction.
[17:02] <cheez0r> I'm trying to stand up a new MaaS, I add 11 nodes to the MaaS, but when I run juju bootstrap, I get an ssh error. When I run verbose, it is trying to connect to the hostname of the node, which fails to resolve.
[17:03] <cheez0r> Shouldn't MaaS be adding the hostname to cobbler to add to dnsmasq automatically when the node is finished commissioning?
[17:03] <cheez0r> The strangeness comes in where one of my 11 blades seems to resolve but the other 10 do not.
[17:05] <roaksoax> cheez0r: can you ping the bootstrap's node hostname from where you are running juju?
[17:07] <cheez0r> roaksoax: no. that's part of the problem. It should have automatically been added via cobbler when I commissioned the node.
[17:07] <cheez0r> I'm trying to understand both why the dns hostname adds aren't working and where I could manually add them
[17:09] <roaksoax> cheez0r: are you running an external dns server or are you using maas-dhcp?
[17:09] <cheez0r> using maas-dhcp
[17:09] <cheez0r> because of that I was under the impression that MaaS would add hostname entries for each of the nodes as they were commissioned.
[17:09] <roaksoax> cheez0r: is the machine where you are running juju, using the maas server as DNS server?
[17:09] <cheez0r> yes, should be
[17:10] <cheez0r> right- I'm following the howto at https://wiki.ubuntu.com/ServerTeam/MAAS/Juju#MAAS:_getting_started_with_Juju
[17:10] <roaksoax> cheez0r: make sure that the machine were you are running juju from is using the maas server as DNS server (you probably have to do it manually)
[17:12] <cheez0r> how can I do that- the node is set as 'ready' in MaaS, it's fresh out of commissioning. I'm running juju bootstrap from the MaaS node.
[17:12] <cheez0r> It looks like a resolution issue from the MaaS node, not from the node itself
[17:12] <cheez0r> and none of the nodes resolve from the MaaS node except for one, for whatever reason.
[17:12] <cheez0r> They were all added identically.
[17:17] <roaksoax> cheez0r: I think I've hit the issue before but can't recall how to fix it; try just editing /etc/resolv.conf and adding nameserver W.X.Y.Z
[17:18] <cheez0r> right, except that I don't know the IP addresses MaaS has handed out to my servers to manually configure their hostnames
[17:18] <cheez0r> it's resolving via dnsmasq; I just need to add the hostnames to the dnsmasq configuration
[17:18] <roaksoax> cheez0r: that's added automatically, unless there's a bug in MAAS where it is not updating cobbler correctly
[17:19] <roaksoax> cheez0r: try: sudo cobbler sync
[17:19] <cheez0r> right, which I think is the problem
[17:19] <cheez0r> I've done cobbler sync about a hundred times so far
[17:19] <cheez0r> :p
[17:19] <roaksoax> uhmmm i'll try to reproduce as soon as I can to further troubleshoot this
[17:19] <cheez0r> Do you know where the cobbler or dnsmasq configuration is located for the hostnames?
[17:20] <cheez0r> I can try manually adding them and see if it gets me past this point
[17:20] <roaksoax> cheez0r: /var/lib/cobbler/cobbler_hosts
[17:20] <roaksoax> cheez0r: what does that file show?
[17:20] <roaksoax> cheez0r: does it show hostname/ip combination?
[17:21] <cheez0r> it's empty.
[17:22] <roaksoax> cheez0r: so if you do: sudo cobbler system dumpvars --name node-XYZ(of whatever node) | grep dns
[17:22] <cheez0r> no output returned
[17:22] <cheez0r> system not found <hostname>
[17:23] <roaksoax> cheez0r: so maybe MAAS is not setting the hostname
[17:23] <cheez0r> doesn't matter if I use the hostnames I specified or the default
[17:23] <roaksoax> cheez0r: you could try: sudo cobbler system edit --name node-XYZ --dns-name node-hostname
[17:23] <roaksoax> and then sudo cobbler sync
[17:23] <cheez0r> ok let me try that
[17:23] <roaksoax> and try to ping it by hostname
[17:24] <cheez0r> how can I get a list of nodes cobbler knows about?
[17:24] <roaksoax> cheez0r: sudo cobbler system list
[17:24] <cheez0r> that's really odd, they all seem to have the same mac address
[17:25] <cheez0r> names are node-<stuff>-d485645878c8
[17:26] <cheez0r> the systems in that list seem to reflect the correct dns_name in the config it outputs
[17:27] <cheez0r> okay I'm guessing this is a bug related to my specifying hostnames when adding nodes to MaaS
[17:27] <roaksoax> yes apparently so
[17:27] <cheez0r> let me delete and readd a node with no specified hostname and see what it does
[17:27] <roaksoax> i'd have to setup a physical cluster to troubleshoot
[17:27] <cheez0r> not asking you to do that, but thanks for the thought ;)
[17:27] <roaksoax> cheez0r: i know :) I just need to make sure it works well :)
[17:28] <cheez0r> well, I'm specifying amd64 arch and was specifying a hostname in the format city_name-dc_name-enclosure#-blade#
[17:29] <cheez0r> might be throwing errors on all of the hyphens or some such, but I dunno
[17:29] <roaksoax> for hostnames?
[17:29] <cheez0r> yes
[17:29] <roaksoax> cheez0r: hostnames don't accept underscores, but do accept hyphens
[17:29] <cheez0r> no underscores in the actual names
[17:30] <cheez0r> like paris-champselysee-enclosure1-blade12
[17:31] <roaksoax> alright, yeah I think it might be related to not updating cobbler correctly
[17:31] <cheez0r> well I've got one node re-commissioning with the default hostname node-<MAC> so we'll see if that fixes it
[17:32] <roaksoax> cool
[17:43] <cheez0r> no change- the newly recommissioned node with the default hostname is still not resolving.
[17:43] <cheez0r> cobbler_hosts is still empty
[17:46] <cheez0r> the newly recommissioned node still shows up with a funky name with cobbler system list
[17:53] <cheez0r> re-commissioning with architecture set to i386 to see if that fixes anything
[17:58] <cheez0r> no change, still doesn't resolve
[18:02] <Soekris> Hello, I've run juju bootstrap. This was good. I have seen apt-get running. But juju status doesn't work. I get ERROR Invalid SSH key. ssh ubuntu@hostname works
[18:02] <Soekris> http://paste.ubuntu.com/1016722/
[18:02] <Soekris> Where does it go wrong in my setup?
[18:02] <cheez0r> Soekris: the hostname resolves?
[18:02] <cheez0r> or are you ssh ubuntu@IP?
[18:05] <Soekris> cheez0r: it resolves
[18:06] <Soekris> ubuntu@s1-cl1-maas works
[18:06] <Soekris> ssh ubuntu@s1-cl1-maas works
[18:06] <cheez0r> hrm interesting
[18:06] <cheez0r> I have the same SSH issue but mine is because cobbler isn't adding the DNS names for some reason
[18:07] <Soekris> I have configured dns and dhcp on another server
[18:09] <Soekris> It's strange that something so easy can be so difficult :D
[18:10] <cheez0r> agreed
[18:13] <Soekris> cheez0r: is it for production or for test purposes?
[18:13] <cheez0r> kind of both
[18:14] <Soekris> you can quickly make the entries in /etc/hosts
[18:14] <cheez0r> yeah I think that'll be my workaround right now
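[Editor's note] The /etc/hosts workaround Soekris suggests amounts to lines like the ones formatted below. The hostname and IP are examples taken from earlier in the log; writing the lines into /etc/hosts itself requires root and is left to the operator.

```python
# Sketch: format static /etc/hosts entries as a stopgap while the
# cobbler/dnsmasq hostname registration is broken.
def hosts_lines(nodes):
    """nodes: mapping of hostname -> IP; return /etc/hosts-style lines."""
    return ["%s\t%s" % (ip, name) for name, ip in sorted(nodes.items())]

lines = hosts_lines({"paris-champselysee-enclosure1-blade12": "10.0.0.12"})
```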
[19:48] <Soekris> Strange, one of the two servers is showing in the juju status. But how is a MAAS mystery