[07:44] <jtv> Hey... getting that test failure again:
[07:44] <jtv> One or more services are registered; this fixture cannot make a reasonable decision about what to do next.
[07:44] <jtv> I bet allenap knows more about it.
[08:07] <allenap> jtv: Are you getting it sporadically, or is it repeatable?
[09:01] <jtv> allenap: repeatable.
[09:13] <bigjools> allenap: repeatable for me too
[09:13] <jtv> allenap: the services are rpc, rpc-advertise, and nonce-cleanup.
[09:13] <allenap> jtv: Can you point me to a branch and a pastebin?
[09:13] <bigjools> I suspect a different timezone in my db as per the other bug
[09:13] <allenap> of the error.
[09:13] <jtv> allenap: the branch is trunk.
[09:13] <allenap> Ah, right.
[09:13] <jtv> Pasting the error...
[09:17] <jtv> allenap: with a small tweak on my part to produce more helpful output (shortly to be up for review), it's http://pastebin.ubuntu.com/8043707/
[09:19] <jtv> I'm filing a bug.
[09:19] <bigjools> allenap: btw, I'm so happy you pointed out expectThat
[09:19] <bigjools> I can start killing all the crazy tuple comparisons
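(A plain-Python sketch of the idea behind testtools' `expectThat` that bigjools is referring to: check field by field and report every mismatch, rather than asserting equality of whole tuples and getting one opaque diff. `check_fields` and the field names are hypothetical illustrations, not MAAS or testtools code.)

```python
# Hypothetical helper illustrating per-field checks vs. one big
# tuple comparison; not testtools' actual API.

def check_fields(names, actual, expected):
    """Return a readable message for each field that differs."""
    return [
        "%s: got %r, expected %r" % (name, got, want)
        for name, got, want in zip(names, actual, expected)
        if got != want
    ]

node = ("maas-node-1", "allocated", "default")
print(check_fields(("hostname", "status", "zone"),
                   node, ("maas-node-1", "ready", "default")))
# -> ["status: got 'allocated', expected 'ready'"]
```

With testtools, `self.expectThat(...)` goes further: it records each mismatch without aborting the test, so all failures are reported together.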
[09:24] <jtv> allenap, bigjools: I filed this as bug 1356788.
[09:25] <allenap> jtv: Cool. I’m looking at it now.
[09:26] <bigjools> jtv: will look at your branch now
[09:26] <bigjools> I shall leave this here -> https://code.launchpad.net/~julian-edwards/maas/consider-static-range/+merge/230760
[09:53] <allenap> jtv, bigjools: I can reproduce that bug, sometimes. I have no idea how the test passes at all.
[09:56] <bigjools> allenap: winning
[09:56] <allenap> jtv, bigjools: Actually, I’ve figured it out.
[09:56] <jtv> ?
[09:57] <allenap> self.addCleanup(eventloop.loop.reset) should be self.addCleanup(lambda: eventloop.loop.reset.wait(timeout))
[09:58] <jtv> Ah.  You can probably pass additional arguments: self.addCleanup(eventloop.loop.reset.wait, timeout)
[10:00] <bigjools> you can
[10:01] <allenap> Sorry, it should be self.addCleanup(lambda: eventloop.loop.reset().wait(timeout))
[10:02] <allenap> (I missed the call to reset.)
[10:03] <ezobn> Are there any timeouts in MAAS for when it doesn't see an allocated node? In my installation, MAAS put several nodes back into the "Ready" state after some time, when cloud-init on the nodes couldn't connect to MAAS. Can I manually re-allocate those nodes without reinstalling the OS on them?
[10:18] <allenap> ezobn: MAAS is a bit stupid right now; it doesn’t really notice that a node has not come up. Fixing that is something we’re doing right now (along with a *ton* of other reliability-related stuff). I don’t understand your second sentence; can you try rephrasing it? You can’t reallocate nodes to another user without releasing them; the other user will then have to install something.
[10:22] <ezobn> allenap: So that was my guess ... what happened with the domain ;-) Seems I was not right ... Actually, one day all the nodes in MAAS started going to the "Ready" state, so I'm trying to understand why ...
[10:34] <allenap> ezobn: That sounds like it worked?
[10:38] <ezobn> allenap: Yes, but those servers are currently in use, so to continue using them I have to remove MAAS from the boot order ...
[10:45] <ezobn> allenap: or allocate those servers again via MAAS and install the workload again ...
[10:47] <ezobn> allenap: so what really interests me: is there some way to tell MAAS that those servers are allocated, and mark them as allocated to a certain user?
[10:50] <allenap> ezobn: If I understand correctly, then no, I’m afraid not.
[12:36] <ezobn> allenap: sadly ... but do you know in what situation this change of node state from "allocated" to "ready" can happen without a call to "stop node"?