[00:10] <roaksoax> jfkw: also, I /think/ that using a different kernel doesn't cause issues
[00:10] <roaksoax> i /think/
[07:09] <ybaumy> good morning
[07:09] <ybaumy> im getting an error when trying to add vmware chassis
[07:09] <ybaumy>  maas.rpc.cluster: [error] Failed to probe and enlist VMware nodes: argument of type 'NoneType' is not iterable
[07:10] <ybaumy> can somebody help me. this error is new to me
[07:13] <ybaumy> anyone?
[07:18] <ybaumy> what does nonetype mean
[07:18] <ybaumy> maas baum machines add-chassis chassis_type=vmware
[07:18] <ybaumy> ...
[07:18] <ybaumy> and so on
[07:18] <ybaumy> that's what i use and have used before to add nodes
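(Editor's note on the question above: in Python, "argument of type 'NoneType' is not iterable" means a membership test like `x in y` ran against a value that was `None`. A minimal sketch of how MAAS's VMware probe could hit this, assuming the VMware API handed back `None` instead of a list; the `probe` function and `vm_names` parameter are hypothetical, not MAAS's actual code:)

```python
# Sketch of the failure mode, under the assumption that the probe code
# does a membership test against a value that came back as None.
def probe(vm_names):
    # If the VMware side returns no VM list, vm_names may be None
    # rather than an empty list, and the `in` test below raises
    # TypeError: argument of type 'NoneType' is not iterable.
    return "guest1" in vm_names

try:
    probe(None)
except TypeError as e:
    print(e)  # argument of type 'NoneType' is not iterable
```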
[07:38] <ybaumy> hmm would really need help. have to move forward with my setup
[07:38] <ybaumy> else i lose a day or even more
[07:59] <ybaumy> hmm the only thing that changed i guess is that esx 6.5 was used for creating the new machines
[07:59] <ybaumy> is it compatible?
[08:04] <ybaumy> well tried adding a 6.0 compatible machine but still the same error
[08:16] <mup> Bug #1697209 changed: Network testing fail - ntp <MAAS:Confirmed> <https://launchpad.net/bugs/1697209>
[08:21] <ybaumy> could that be a python error
[08:21] <ybaumy> ?
[08:25] <mup> Bug #1697209 opened: Network testing fail - ntp <MAAS:Confirmed> <https://launchpad.net/bugs/1697209>
[08:28] <mup> Bug #1697209 changed: Network testing fail - ntp <MAAS:Confirmed> <https://launchpad.net/bugs/1697209>
[08:52] <mup> Bug #1701476 opened: maas.rpc.cluster: [error] Failed to probe and enlist VMware nodes: argument of type 'NoneType' is not iterable <MAAS:New> <https://launchpad.net/bugs/1701476>
[08:52] <mup> Bug #1701477 opened: maas.rpc.cluster: [error] Failed to probe and enlist VMware nodes: argument of type 'NoneType' is not iterable <MAAS:New> <https://launchpad.net/bugs/1701477>
[08:53] <ybaumy> sorry for the double bug i had a nervous finger ;) and double clicked
[13:08] <mup> Bug #1701477 changed: maas.rpc.cluster: [error] Failed to probe and enlist VMware nodes: argument of type 'NoneType' is not iterable <MAAS:New> <https://launchpad.net/bugs/1701477>
[16:32] <mup> Bug #1701682 opened: selecting “Settings” provides an “Internal server error.” <MAAS:New> <https://launchpad.net/bugs/1701682>
[16:45] <gimmic> I am still chasing a commissioning error with timeouts, even after bumping the ipmi_wait
[16:46] <gimmic> this is new to 2.2 when I upgraded from 1.9 (which had a whole slew of issues on its own)
[16:46] <gimmic> I upgraded the drac7 firmware to address any potential issues, just frustrating because "it worked before"
[16:47] <gimmic> I feel like the timeout is also immediate, even with a huge wait time set.
[16:47] <mup> Bug #1701682 changed: selecting “Settings” provides an “Internal server error.” <MAAS:New> <https://launchpad.net/bugs/1701682>
[16:47] <gimmic> wait_time = (4, 8, 12)   That's a backoff, right? 4sec 8sec 12 sec?
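(Editor's note: roaksoax confirms below that the tuple is a per-BMC backoff. A hedged sketch of how a tuple like `(4, 8, 12)` typically drives retries, waiting 4s, 8s, then 12s between attempts for 24s total; the function and parameter names here are illustrative, not MAAS's actual power-query code:)

```python
import time

def power_query_with_retries(query, wait_time=(4, 8, 12)):
    """Call `query` up to len(wait_time) times, sleeping the given
    number of seconds after each failed attempt. Illustrative sketch
    only -- names are hypothetical, not MAAS internals."""
    result = None
    for wait in wait_time:
        result = query()
        if result is not None:
            return result
        time.sleep(wait)  # back off: 4s, then 8s, then 12s
    return result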
[16:53] <mup> Bug #1701682 opened: selecting “Settings” provides an “Internal server error.” <MAAS:New> <https://launchpad.net/bugs/1701682>
[17:03] <gimmic> Is there any way to template storage settings? Say I have 50 nodes with 3 disks I want to raid0 across..
[17:03] <gimmic> I can do each one manually, I can loop through them with API calls(which I did in 1.9)
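(Editor's note: one way to script the "loop through them with API calls" approach mentioned above is to generate one CLI call per machine. The subcommand layout below (`raids create`, `name=`, `level=raid-0`, `block_devices=`) and the profile name `admin` are assumptions based on the MAAS 2.x CLI; verify against `maas admin raids create --help` before use:)

```python
# Sketch: build the MAAS CLI commands needed to stripe the first three
# disks of each machine into a raid-0. Dry-run only: it prints the
# commands rather than executing them.
def raid0_commands(machines, profile="admin"):
    """machines: list of (system_id, [block_device_ids]) pairs.
    Returns one CLI command string per machine."""
    cmds = []
    for system_id, device_ids in machines:
        devices = ",".join(str(d) for d in device_ids[:3])
        cmds.append(
            f"maas {profile} raids create {system_id} "
            f"name=md0 level=raid-0 block_devices={devices}"
        )
    return cmds

for cmd in raid0_commands([("abc123", [10, 11, 12]), ("def456", [20, 21, 22])]):
    print(cmd)
```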
[17:23] <mup> Bug #1701694 opened: [2.2, 2.3, API] Adding a CentOS image via CLI doesn't categorize it correctly <MAAS:Triaged by ltrager> <MAAS 2.2:Triaged> <https://launchpad.net/bugs/1701694>
[17:49] <roaksoax> gimmic: yes that's it
[17:49] <roaksoax> gimmic: that's per bmc though
[17:52] <roaksoax> gimmic: try this: http://paste.ubuntu.com/24990585/
[17:54] <roaksoax> gimmic: if not, can you change that wait_time = (4, 8, 24) and see what happens ?
[17:55] <gimmic> I had already bumped it to (10, 16, 25) and it appears to resolve
[17:55] <gimmic> these are dell drac7 on shared LOM1
[17:56] <gimmic> Not sure if the problem showed up in 2.2 or when I moved from dedicated drac to a LOM
[17:56] <roaksoax> gimmic: would you be willing to keep it as (4, 8, 24) and see if that works, so I can fix that upstream ?
[17:56] <gimmic> Sure, let me revert and try the fix.
[17:57] <gimmic> 4, 8, 24?
[17:59] <roaksoax> yeah
[18:23] <mup> Bug #1701701 opened: [2.x] maas doesn't warn/block commissioning with 'ssh access enabled' when user has no ssh keys <MAAS:New> <https://launchpad.net/bugs/1701701>
[18:43] <gimmic> roaksoax: did not seem to help. I bumped the timeouts up and it worked.
[18:43] <gimmic> from a UI standpoint, I am not happy about the new 'tabbing' of all the values on hosts
[18:43] <gimmic> more clicks without really any gain imo
[18:44] <gimmic> If anything, the section categories should be anchor links to the long page
[18:44] <gimmic> (interfaces / storage / commissioning etc)
[19:05] <roaksoax> i'll pass that along to design
[19:06] <roaksoax> that, however, improves UI performance
[19:44] <gimmic> I could not get 16.04 LTS to deploy regardless of my storage configuration.. on a whim I tried 14.04 LTS and it deployed out of the box :|
[19:58] <kiko> gimmic, did you get an explanation as to what failed? kernel or platform install issue?
[19:58] <kiko> gimmic, does commissioning work?
[20:00] <gimmic> Yeah. It does look like it has problems with partitioning/disks
[20:00] <gimmic> seemingly regardless of how i set up the storage
[20:01] <kiko> can I see that in a pastebin?
[20:01] <gimmic> "had no syspath" like it isn't seeing root?
[20:01] <gimmic> sure, give me a bit and i'll deploy another
[20:02] <kiko> ValueError: /dev/vgroot-lvroot (dm-0) had no syspath (/sys/class/block/vgroot-lvroot (dm-0))
[20:02] <kiko> something like that?
[20:06] <gimmic> yeah, looks similar
[20:14] <gimmic> kiko: actually, with the default options I'm getting sdb3 busy errors
[20:14] <gimmic> http://paste.ubuntu.com/24991442/
[20:14] <gimmic> which is odd, because I have 3 disks in the box, and sdb/c are just unallocated disks. sda is the only configured one with lvm
[20:16] <gimmic> going to try setting up the raid0 I want to do and see what errs then
[20:53] <gimmic> http://paste.ubuntu.com/24991646/
[20:53] <gimmic> There's an attempt on 16.04 with 3 disks in raid0