=== CyberJacob is now known as CyberJacob|Away
[03:18] jtv1: could I trouble you for some reviews please?
[03:19] I'm in the process of writing up a review.
[03:20] ta
=== Guest18526 is now known as wallyworld
=== jtv1 is now known as jtv
[05:50] bigjools: found it! “from django.contrib import messages” and then e.g. “messages.error(request, "Aaaigh!")”
[05:50] Now to find the request...
[05:51] perfick
[05:52] Well, still have to find that request.
[05:57] it's passed into the view iirc
[05:58] and api request
[05:58] afk for a few
[06:02] Yes, the view gets it — but the triggers don't. They don't even know whether there is one.
[06:02] Signal handlers, I mean.
[06:02] Not triggers.
[06:27] that's the point of the handlers
[06:28] they're not supposed to care
[06:28] not looked at django docs for signals but I wonder what it does if there's an error
[06:31] Turns out the transaction does commit. Which does not bode well.
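A minimal sketch of the asymmetry jtv runs into above, with hypothetical view and handler names (not MAAS code): the view receives the request and can attach a message, but a post_save handler is only given the sender and instance, so it has no request to hand to the messages framework.

    # Hypothetical names; a sketch only, not MAAS code.
    from django.contrib import messages
    from django.db.models.signals import post_save
    from django.dispatch import receiver
    from django.http import HttpResponse

    def edit_node(request, node_id):
        # The view is handed the request, so it can attach a message.
        messages.error(request, "Aaaigh!")
        return HttpResponse("saved")

    @receiver(post_save)
    def on_node_saved(sender, instance, **kwargs):
        # A signal handler gets sender/instance/kwargs only: there is
        # no request here, and no way to know whether one even exists.
        pass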
[06:31] Meanwhile, my NUCs won't auto-enlist any more. :-(
[06:31] My non-NUC test machines won't even netboot! They complain about "APM not present."
=== CyberJacob|Away is now known as CyberJacob
[06:50] bigjools: thanks for the review of my robustness branch. Much appreciated. Addressing your comments now.
[06:50] jtv: I've seen the "APM not present" message in the lab.
[06:51] rvba: searching for it yielded very little information... it sounded as if some tool suddenly expects APM.
[06:51] Which is strange, given the laws of nature.
[06:52] jtv: it happens when a node is told to power off (which is the default PXE config instruction sent when MAAS doesn't really know what to do) but fails to do so.
[06:52] I mean specifically the one that says time moves in a forward direction.
[06:52] That's what I found by searching. To me though it happens while trying to netboot...
[06:52] Which probably means that the netboot/status combination is unexpected or wrong.
[06:53] This is when trying to auto-enlist... MAAS shouldn't even know the machine exists.
[06:55] jtv: I see nodes being enlisted okay in the current CI run. Could it be a problem with your specific branch?
[06:56] Could be, though I think it's basically a version of trunk
[07:06] Phew. Installing the latest trunk got me past it somehow.
[07:07] Past the "APM not present" problem, that is.
[07:19] blake_r: Hi Blake. I'm having a look at the CI runs and the new maas-integration.TestMAASIntegration.test_imported_boot_resources test takes 20 minutes to complete. Why is this so long? (I'm asking because it's important to keep the total runtime as low as possible)
[08:58] rvba: you said you were seeing static IP addresses in the lab... is that a recent version of trunk?
[09:00] jtv: it's trunk + my robustness changes (which shouldn't interfere with the IP assignment)
[09:02] Hmmm, right. Do you know what the last trunk revision in there was?
[09:02] jtv: 2854
[09:03] Thanks.
[09:03] That's current. So... what in blazes is going on?
[09:10] rvba: sorry, I am being a hardass on your review
[09:11] bigjools: the change you suggest about the netboot flag has nothing to do with my change.
[09:13] bigjools: and I don't really understand why re-assigning a status is dangerous.
[09:14] rvba: I explained why it's bad in the review comments
[09:14] 1. you can end up in bad states. I remember mentioning at the sprint that I don't like that flag any more
[09:15] bigjools: I prefer the 'default' (i.e. what happens to nodes that won't be picked up by the migration) to be that the nodes end up 'Deployed' instead of 'Allocated'.
[09:15] 2. the status change has a race if the MAAS is in use
[09:15] bigjools: I agree that I need to write a migration.
[09:16] the new state is fine, just don't change the values of existing ones
[09:16] bigjools: there is no bad state. netboot is still only relevant for one status. Same as before.
[09:17] bigjools: I think changing the meaning is the safest thing to do here. Let me explain:
[09:17] rvba: EXACTLY!
[09:18] Previously we had one state, 'Allocated', which could mean 3 things: Allocated/Deploying/Deployed. Now, I expect most nodes in the old 'Allocated' state to be effectively in the new 'Deployed' state, and that's why I'd like this migration to be as transparent as possible.
[09:19] bigjools: but it's really a detail. I can write one additional data migration if that's what it takes to get this branch landed.
[09:22] rvba: ok
[09:22] bigjools: re netboot: I'm just saying this branch doesn't make things worse when it comes to the netboot stuff. Let's get it landed and think about whether or not we want to change this afterwards.
[09:22] rvba: that's fine
[09:23] Okay, cool. I'll revert the change to the enum and write this migration then.
[09:24] rvba: excellent
[09:25] bigjools: Don't get me wrong, I appreciate the extra scrutiny on this.
[09:25] rvba: I know :)
[09:25] it's a hairy area, not to be rushed
[09:25] Indeed.
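A minimal sketch of the kind of data migration rvba agrees to write above, shown in current Django migration style; the app label, dependency, and numeric status values are hypothetical stand-ins, not MAAS's real ones.

    # Hypothetical sketch: map the old catch-all 'Allocated' status
    # onto the new 'Deployed' status for existing nodes.
    from django.db import migrations

    OLD_ALLOCATED = 6  # hypothetical numeric value of the old status
    NEW_DEPLOYED = 9   # hypothetical numeric value of the new status

    def allocated_to_deployed(apps, schema_editor):
        # Use the historical model state, not the live model class.
        Node = apps.get_model("maasserver", "Node")
        Node.objects.filter(status=OLD_ALLOCATED).update(status=NEW_DEPLOYED)

    class Migration(migrations.Migration):
        dependencies = [("maasserver", "0001_initial")]
        operations = [
            migrations.RunPython(allocated_to_deployed,
                                 migrations.RunPython.noop),
        ]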
=== jamespag` is now known as jamespage
[10:27] allenap: I have a question re: the MockLiveClusterToRegionRPCFixture… When I set a mock result properly (see http://pastebin.ubuntu.com/8204974/), I get the following error: http://paste.ubuntu.com/8204977/. It's almost as though something is wrapping the list in a tuple and then the whole thing breaks. If I specify the "interfaces" item in the response as just being a single dict, rather than a list of dicts with one element, it works perfectly. HALP?
[10:37] allenap: Aaah, hang on, I hadn't applied your patch, I think I see the problem…
[10:39] allenap: Yeah, I'd not spotted the stray comma on the "interface =" lines. Thanks for that :)
[11:53] rvba: those static IP addresses you saw in the lab... are you very very sure they're static? Because I'm seeing addresses now, but from the dynamic range.
[12:07] jtv: I've got another run in progress, I'll tell you when it gets to the point where static IP addresses are assigned… I didn't check that the addresses I saw were from the static range last time.
[12:08] jtv: I just had a problem in the lab (my nodes didn't get an entry in the zone file) but I think it's caused by the change I'm trying to QA.
[12:09] rvba: thanks — highly interested to see if you meet with more success.
[12:23] jtv: just did another test locally with revision 2857 and my node just got an IP from the static range.
[12:23] Gah.
[12:24] Here, my nodes do get IP addresses, just from the dynamic range.
[12:24] But that's with the REVEAL_IPv6 flag set. I wonder if that makes a difference...
[12:24] What I had before I set it, I believe, was no IP address at all.
[12:42] allenap: your misc-boot-resources-stuff branch removes a lock check... is that intentional?
[12:42] The one where it doesn't import if its lock is currently held?
[12:44] jtv: Yes, it’s superfluous. It tries to get the lock later on. Actually, there’s a chance that it’ll block for a long time (it could have before too; it’s racy). I’ll improve that.
[12:45] The occasional race may not be so bad, but the point was to skip the entire attempt if another thread is already working on a download... is that behaviour still there?
[12:49] jtv: It is, but it may wait 15 seconds before giving up. However, it then joins the lock thread, which will hang around until it can actually get the lock. Perhaps it doesn’t actually need to join the lock thread; that can be left to die on its own. Of course, that’s a leak in itself.
[12:50] As long as it's cleaned up eventually, I guess...
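A minimal sketch of the skip-if-held behaviour jtv is asking about, using a plain threading.Lock and hypothetical function names; the real importer's locking is more involved than this.

    # Hypothetical sketch: skip the import entirely when another thread
    # already holds the lock, instead of queueing up behind it.
    import threading

    _import_lock = threading.Lock()

    def do_download():
        """Stand-in for the actual boot-resources download work."""

    def import_boot_resources():
        # Non-blocking acquire: returns False immediately if held.
        if not _import_lock.acquire(blocking=False):
            return  # another import is already in progress; skip
        try:
            do_download()
        finally:
            _import_lock.release()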
[12:54] rvba: ruddy-cave.maas is Deployed and on, but now I see no IP address for it at all...
[12:54] Ah, one just appeared. And it's dynamic.
[12:56] jtv: I think this is related to the thing I'm testing now (the robustness stuff).
[12:58] The fact that it didn't get a static IP address?
[12:58] Remember, I'm seeing the same thing with trunk in my own setup.
[13:00] Hum, the StaticIPAddress table is empty.
[13:00] That's weird.
[13:01] Yup.
[13:12] Oh this is just horrible.
[13:12] Whether the node claims static IP addresses also seems to depend on its power type.
[13:13] Unknown power type: no static IP.
[13:13] ether_wake and no MAC address set in the power parameters: no static IP.
[13:17] And am I going cross-eyed, or are there both a Node.claim_static_ips and a Node.claim_static_ip_addresses?
[13:23] jtv: this code is a mess :/
[13:24] Yup.
[13:24] Note no docstring.
[13:24] On _create_tasks_for_static_ips.
[13:25] And claim_static_ip_addresses is strangely similar to _create_tasks_for_static_ips.
[13:29] /o\
[13:30] rvba: I also see a lot of special cases for "self.status == NODE_STATUS.ALLOCATED"... I guess those are complicating your life right now.
[13:31] Node.claim_static_ips is going away, eventually.
[13:31] However it’s not in use right now.
[13:31] And claim_static_ip_addresses will be its eventual replacement?
[13:32] jtv: Yep.
[13:32] That'd be worth putting in docstrings.
[13:33] jtv: I’m changing a lot of this code for the RPC work I’m doing, so if you find logical faults please tell me about them; I’ve recreated what was already there, so I may have recreated bugs.
[13:34] Don't be afraid to write that something is unclear. Better than a shared false belief that it was all done deliberately!
[13:51] rvba: it looks as if mac_addresses_on_managed_interfaces is not returning empty... Maybe MACAddress.cluster_interface never got set.
[13:53] jtv: let's check the current run…
[13:54] Nodes are commissioning now…
[13:54] Static addresses should be assigned at the point where the nodes are first started in Allocated state.
[13:55] jtv: when is MACAddress.cluster_interface populated exactly?
[13:55] Good question.
[13:56] I was just trying to find that out actually.
[13:56] NodeGroupHandler.update_leases..?
[13:57] Right, it calls update_mac_cluster_interfaces.
[13:58] jtv: current state in the lab: http://paste.ubuntu.com/8206479/
[13:58] So those cluster interfaces haven't been populated.
[13:59] Apparently not. Looks like a bug to me.
[13:59] Maybe it's just a matter of waiting a bit longer..?
[14:00] BTW I filed bug 1363999 about this.
[14:00] bug 1363999 in MAAS "Not assigning static IP addresses" [Critical,Triaged] https://launchpad.net/bugs/1363999
[14:01] jtv: If the lease table is populated, it means update_leases has been called.
[14:01] leases*
[14:01] Ugh. I hadn't realised the significance of that part.
[14:02] Oh, but careful: that table can contain old leases from deleted nodes.
[14:03] This is from a run in the lab, it's using a clean VM.
[14:03] Damn.
[14:06] jtv, rvba: Do you want any more eyes on the problem?
[14:06] Oh that would be great.
[14:07] We're currently staring at update_mac_cluster_interfaces, in api/node_groups.py.
[14:07] (Huh what, his groggy brain asks him, where did that huge api.py module go?)
[14:08] We have reason to believe that that function runs, but it doesn't appear to be doing this:
[14:08] mac_address.cluster_interface = interface
[14:08] mac_address.save()
[14:10] jtv: I don't understand why we still have MAC.cluster_interface now that the network stuff is unified and we can use the Network<->MACAddress link.
[14:11] They're not quite the same thing. For example, two NGIs can have overlapping IP ranges, which are different subnets that happen not to be connected.
[14:11] It'd be nice to resolve that at some point, but we haven't taken that step yet.
[14:11] I thought we didn't support overlapping IP ranges.
[14:12] For Network we don't.
[14:12] But two cluster interfaces (on different clusters) might still do it.
[14:21] rvba: stupid question perhaps, but... do we even still call the API's update_leases method?
[14:21] I mean, hasn't that been moved to RPC or anything?
[14:22] jtv: well, that's a good question :). Let's have a look at the KB board.
[14:23] jtv: apparently it's been ported to RPC by Julian… but if it is so, why is this method still there?
[14:23] "Periodically upload DHCP leases"...
[14:23] Lots of good questions today.
[14:25] jtv: src/maasserver/rpc/leases.py
[14:25] Doesn't call update_mac_cluster_interfaces :/
[14:26] Well that looks like an explanation.
[14:28] Yep
[14:28] Good find :)
[14:28] Believe me, it gives me no joy. :)
[14:29] Not even the relief I expected from discovering that it's not the IPv6 changes.
[14:29] Now, which poor soul is going to fix it?
[14:30] I might do it since it's blocking my work — but not tonight!
[14:30] * jtv tired
[14:35] blake_r: I can see two reasons why the import is slow: a) you're downloading many images by default (?) or b) you're not using the configured proxy (?).
[14:35] blake_r: my money is on b).
[14:36] rvba: I would go with option b), unless the node by default is supposed to use that
[14:37] blake_r: I don't see the relation to the node… this is all happening on the region.
[14:39] blake_r: btw, did you land the UI for the new image stuff?
[14:41] rvba: sorry, I meant region
[14:41] rvba: no UI yet
[14:41] rvba: only the API
[14:41] rvba: the UI is next
[14:42] rvba: create a bug for not using the proxy and I will fix it this week
[14:42] blake_r: okay, cool.
[15:36] blake_r: https://bugs.launchpad.net/maas/+bug/1364062
[15:36] Ubuntu bug 1364062 in MAAS "New download boot resources method doesn't use the configured proxy" [Critical,Triaged]
=== jfarschman is now known as MilesDenver
[22:27] hi guys
[22:27] any news on maas with arm devices?
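Circling back to bug 1364062 above: a minimal sketch of the general pattern for honouring a configured proxy in a download helper, using only the standard library; the function and its parameter are hypothetical, not the MAAS code blake_r agreed to fix.

    # Hypothetical helper; shows the general pattern only.
    import urllib.request

    def fetch(url, proxy_url=None):
        # Route the request through the configured proxy when one is set;
        # otherwise fall back to a direct connection.
        if proxy_url:
            handler = urllib.request.ProxyHandler(
                {"http": proxy_url, "https": proxy_url})
            opener = urllib.request.build_opener(handler)
        else:
            opener = urllib.request.build_opener()
        with opener.open(url) as response:
            return response.read()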