[01:13] <mup> Bug #1636969 changed: [1.9] Multiple negative spaces constraints given and rejected by MAAS <ci> <jujuqa> <maas-provider> <networking> <juju:Triaged> <MAAS:Invalid> <MAAS 1.9:Invalid> <https://launchpad.net/bugs/1636969>
[06:11] <mup> Bug #1637401 opened: Re-adding virsh chassis to discover new nodes powers down existing nodes <MAAS:New> <https://launchpad.net/bugs/1637401>
[06:56] <mup> Bug #1637412 opened: machine deploy operation fails if the machine is not allocated <MAAS:New> <https://launchpad.net/bugs/1637412>
[09:57] <gaurangt> we're facing an issue with MAAS while deploying a charm. The storage disk attached to the VM (sdb) is not getting picked up by the charm. The juju storage list always shows the disk status as pending. Any clues on what could be going wrong here?
[09:57] <gaurangt> the MAAS version we're using is 1.9 and that of juju is 2.0.
[10:23] <brendand> gaurangt, i'm not even sure that is a supported config
[10:23] <brendand> juju 2.0 is supposed to work with maas 2.0 afaik
[10:27] <brendand> gaurangt, juju team confirms that it is *supposed* to work, but that storage detection may be broken (particularly with vms)
[10:31] <gaurangt> brendand: oh ok
[10:33] <gaurangt> brendand: We create a storage pool with a tag and then add that tag in MAAS as well. MAAS correctly identifies the VM and starts it.
[10:33] <brendand> more to the point - neither they nor we test that
[10:33] <gaurangt> But for the additional disk which gets attached (sda is used for the OS install, sdb is supposed to be detected by the charm), the juju storage status always comes back as pending
[10:37] <gaurangt> brendand: should we give it a try with maas 2.0? will it work?
[10:39] <brendand> gaurangt, not sure
[11:16] <gaurangt> brendand: sorry, I was disconnected.  Any idea if the juju storage feature would work on MAAS with KVM?
[11:16] <brendand> gaurangt, with 2.0?
[11:17] <brendand> gaurangt, not positive
[11:17] <gaurangt> brendand: yes
[11:17] <brendand> gaurangt, but it might
[11:17] <gaurangt> brendand: oh ok, let me give it a try
[11:18] <gaurangt> brendand: also, one other issue: MAAS deployment times out for the xenial image. cloud-init takes a long time upgrading the kernel and the deployment times out after 40 mins.
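(For reference, the Juju-side setup gaurangt describes would look roughly like the sketch below; the pool name, tag, charm, and storage label are illustrative assumptions, not taken from the channel.)

    # create a MAAS-backed storage pool keyed on a disk tag; "bigdisk" is assumed
    # and must match the tag set on the node's spare disk in MAAS
    juju create-storage-pool bigdisk-pool maas tags=bigdisk
    # deploy a charm that declares a storage entry, here assumed to be named "data"
    juju deploy some-charm --storage data=bigdisk-pool,100G
    # the attachment should move past "pending" once MAAS hands the tagged disk to the unit
    juju storage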
[11:18] <gaurangt> anything I'm missing?
[12:44] <GA> Hi, I posted this question last evening, and had to disconnect (sorry). It's about mounting an NFS share via the preseed file:
[12:46] <GA> I'm trying to pass this:  mount_data: ["curtin", "in-target", "--", "sh", "-c", "mount", "192.168.1.1:", "/data", "/data"] , however, depending on the exact syntax, the deployment either fails or finishes - but the share is not mounted.
[12:46] <GA> any input is most welcome :)
[14:25] <roaksoax> GA: what you're doing there is trying to mount an nfs share in the "chroot"
[14:25] <roaksoax> GA: so if, say, that succeeds, when the machine reboots the nfs share would not be mounted
[14:25] <roaksoax> GA: is that what you're trying to achieve?
[14:26] <GA> roaksoax: the goal is to have a share from a "head node" (this is going to be a test HPC cluster) onto the "compute nodes", it does not have to be in the chroot.
[14:27] <roaksoax> GA: right, but what I mean is that during the installation process of MAAS, we copy the installation image onto the disk and then chroot into it to do installation stuff
[14:27] <GA> and I'd like to have this share mounted w/o manual intervention, i.e. have the image installed and upon boot have the /data share mounted on all "compute nodes"
[14:27] <roaksoax> GA: yeah, so what you are doing there is basically just mounting from NFS in a chroot during the installation process
[14:28] <GA> roaksoax: correct
[14:28] <roaksoax> GA: when the installation finishes, and the machine reboots into the installed OS disk, your mount share won't be there
[14:28] <roaksoax> GA: because you mounted it ephemerally
[14:28] <GA> roaksoax: I see.
[14:29] <GA> roaksoax: to test - I placed a simple script with "echo test > /tmp/test.file" and well.. it was not there when rebooted, so - I assume it's for the same reason.
[14:32] <roaksoax> GA: http://paste.ubuntu.com/23392952/
[14:33] <roaksoax> GA: you could do something like the above
[14:33] <roaksoax> GA: so that after the machine finishes the install and reboots
[14:33] <roaksoax> GA: the NFS share should/will be mounted
[14:34] <roaksoax> http://paste.ubuntu.com/23392954/
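(Both pastes above have since expired; the sketch below shows the kind of curtin late_commands entry they presumably contained. It appends the mount to the installed system's /etc/fstab from "in-target", so it persists after the reboot. The key name and the 192.168.1.1:/data export path are assumptions based on GA's snippet.)

    late_commands:
      # with "sh -c" the whole command must be a single string, which is why the
      # one-word-per-element form above does not behave as expected
      50_nfs_data: ["curtin", "in-target", "--", "sh", "-c",
        "apt-get install -y nfs-common && mkdir -p /data && echo '192.168.1.1:/data /data nfs defaults 0 0' >> /etc/fstab"]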
[15:00] <GA> roaksoax: it's certainly amusing, your solution was offered here by a co-worker.. oh well, I won't go there. :)
[15:00] <roaksoax> :)
[15:01] <GA> roaksoax: I will give it a try, thank you.
[15:10] <roaksoax> np!
[16:13] <mup> Bug #1637570 opened: [2.1] Cavium ThunderX system with 128GigB of memory is reported as having 125.9GigB of memory in MAAS after commissioning <oil> <MAAS:New> <https://launchpad.net/bugs/1637570>