/srv/irclogs.ubuntu.com/2016/10/28/#maas.txt

[01:13] <mup> Bug #1636969 changed: [1.9] Multiple negative spaces constraints given and rejected by MAAS <ci> <jujuqa> <maas-provider> <networking> <juju:Triaged> <MAAS:Invalid> <MAAS 1.9:Invalid> <https://launchpad.net/bugs/1636969>
[06:11] <mup> Bug #1637401 opened: Re-adding virsh chassis to discover new nodes powers down existing nodes <MAAS:New> <https://launchpad.net/bugs/1637401>
[06:56] <mup> Bug #1637412 opened: machine deploy operation fails if the machine is not allocated <MAAS:New> <https://launchpad.net/bugs/1637412>
[09:57] <gaurangt> we're facing an issue with MAAS while deploying a charm. The storage disk attached to the VM (sdb) is not getting picked up by the charm. The juju storage list always shows the disk status as pending. Any clues on what could be going wrong here?
[09:57] <gaurangt> the MAAS version we're using is 1.9 and that of juju is 2.0.
[10:23] <brendand> gaurangt, i'm not even sure that is a supported config
[10:23] <brendand> juju 2.0 is supposed to work with maas 2.0 afaik
[10:27] <brendand> gaurangt, juju team confirms that it is *supposed* to work, but that storage detection may be broken (particularly with vms)
[10:31] <gaurangt> brendand: oh ok
[10:33] <gaurangt> brendand: We create a storage pool with a tag and then add that tag in MAAS as well. MAAS correctly identifies the VM and starts it.
[10:33] <brendand> more to the point - neither they nor we test that
[10:33] <gaurangt> But for the additional disk which gets attached (sda is used for the OS install and sdb is supposed to be detected by the charm), the juju storage status always shows as pending
[10:37] <gaurangt> brendand: should we give maas 2.0 a try? will it work?
[10:39] <brendand> gaurangt, not sure
=== Guest91977 is now known as ahasenack
=== ahasenack is now known as Guest60864
[11:16] <gaurangt> brendand: sorry, I was disconnected. Any idea if the juju storage feature would work on MAAS with KVM?
[11:16] <brendand> gaurangt, with 2.0?
[11:17] <brendand> gaurangt, not positive
[11:17] <gaurangt> brendand: yes
[11:17] <brendand> gaurangt, but it might
[11:17] <gaurangt> brendand: oh ok, let me give it a try
[11:18] <gaurangt> brendand: also, one other issue is that MAAS deployment times out for the xenial image. cloud-init takes a long time upgrading the kernel and it times out after 40 mins.
[11:18] <gaurangt> anything I'm missing?
=== Guest60864 is now known as ahasenack
=== ahasenack is now known as Guest42345
[12:44] <GA> Hi, I posted this question last evening, and had to disconnect (sorry). It's about mounting an NFS share via the preseed file:
[12:46] <GA> I'm trying to pass this:  mount_data: ["curtin", "in-target", "--", "sh", "-c", "mount", "192.168.1.1:", "/data", "/data"] , however, depending on the exact syntax, the deployment either fails, or finishes - but the share is not mounted.
[12:46] <GA> any input is most welcome :)
[14:25] <roaksoax> GA: what you're doing there is trying to mount an nfs share in the "chroot"
[14:25] <roaksoax> GA: so if, say, that succeeds, when the machine reboots the nfs share would not be mounted
[14:25] <roaksoax> GA: is that what you're trying to achieve?
[14:26] <GA> roaksoax: the goal is to have a share from a "head node" (this is going to be a test HPC cluster) onto the "compute nodes"; it does not have to be in the chroot.
[14:27] <roaksoax> GA: right, but what I mean is that during the installation process of MAAS, we copy the installation image onto the disk and then chroot into it to do installation stuff
[14:27] <GA> and I'd like to have this share mounted w/o manual intervention, i.e. have the image installed and upon boot have the /data share mounted on all "compute nodes"
[14:27] <roaksoax> GA: yeah, so what you are doing there is basically just mounting from NFS in a chroot during the installation process
[14:28] <GA> roaksoax: correct
[14:28] <roaksoax> GA: when the installation finishes, and the machine reboots into the installed OS disk, your mount share won't be there
[14:28] <roaksoax> GA: because you mounted it ephemerally
[14:28] <GA> roaksoax: I see.
[14:29] <GA> roaksoax: to test - I placed a simple script with "echo test > /tmp/test.file" and well.. it was not there when rebooted, so - I assume it's for the same reason.
[14:32] <roaksoax> GA: http://paste.ubuntu.com/23392952/
[14:33] <roaksoax> GA: you could do something like the above
[14:33] <roaksoax> GA: so that after the machine finishes install and reboots
[14:33] <roaksoax> GA: the NFS share should/will be mounted
[14:34] <roaksoax> http://paste.ubuntu.com/23392954/
[15:00] <GA> roaksoax: it's certainly amusing, your solution was offered here by a co-worker.. oh well, I won't go there. :)
[15:00] <roaksoax> :)
[15:01] <GA> roaksoax: I will give it a try, thank you.
[15:10] <roaksoax> np!
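(Editor's note: the paste.ubuntu.com links above have expired, so the exact snippets are lost. A minimal sketch of the approach roaksoax describes, a curtin late_commands entry that writes an /etc/fstab line inside the installed image so the NFS share mounts on every boot rather than ephemerally, might look like the following. The export path 192.168.1.1:/data is an assumption carried over from GA's earlier attempt, not the actual paste contents.)

```yaml
# Hypothetical curtin preseed fragment, not the original paste.
# late_commands run after the image has been copied to disk;
# "curtin in-target" executes each command inside the installed
# system's chroot, so the changes persist across the final reboot.
late_commands:
  10_nfs_fstab: ["curtin", "in-target", "--", "sh", "-c",
    "mkdir -p /data && echo '192.168.1.1:/data /data nfs defaults 0 0' >> /etc/fstab"]
  # the installed system also needs the NFS client tooling to mount at boot
  20_nfs_client: ["curtin", "in-target", "--", "apt-get", "install", "-y", "nfs-common"]
```

Unlike a plain `mount` in the chroot (which disappears at reboot, as discussed above), the fstab entry is part of the installed filesystem and is honoured on every subsequent boot.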
[16:13] <mup> Bug #1637570 opened: [2.1] Cavium ThunderX system with 128GigB of memory is reported as having 125.9GigB of memory in MAAS after commissioning <oil> <MAAS:New> <https://launchpad.net/bugs/1637570>
=== Guest42345 is now known as ahasenack
=== ahasenack is now known as Guest79971
=== zz_CyberJacob is now known as CyberJacob

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!