[06:14] Hello folks! I'm in the early stages of a discovery project to spin up MAAS inside a Docker container. Right now I'm seeing some errors related to the tgt service not starting, and some ntp resolution socket errors too - as I'm running in a containerized snap, I theorized that there are some additional channels I could connect for a more relaxed runtime environment
[06:28] the tgt service being the main roadblock as my rackd won't sync images
[07:07] i wondered if I'm hitting the OOM issue that tgt (once) had?
[07:38] I'm hoping to use MAAS as an orchestrator in a lab environment. Hence, I understood that the way to go is to have multiple user accounts, and then acquire hosts to that user. Now, can I as an admin assign nodes to a user, or can this only be done from the user account?
[07:39] Good Morning
[07:40] hmm, looks like the tgt changes for 2.4.0 might help me here
[07:40] i guess there's no snap for that?
[08:03] seffyroff: not yet
[08:03] snaps in 18.04 are currently broken
[08:04] ok thanks for the heads up.
[08:04] I'll try some other approaches to getting this working
[08:06] seffyroff: you should probably look at 2.3.1
[08:06] it had a tgt fix for running in containers (lxc)
[08:06] yup, db migrations happening as we speak
[08:07] thanks!
[08:09] parlos: there's no such way to assign a machine to a user. Users/admins have access to a machine pool, and they can allocate themselves a machine
[08:10] roaksoax: so, if I want to limit what nodes a user can use, I have to log in as that user and acquire the nodes as that user... any plans for that?
[08:39] actually roaksoax I do see a 2.4.0 snap, but I guess it'll sulk if I try to install it on a Xenial host?
[08:40] channels:
[08:40]   stable:    2.3.0-6434-gd354690-snap (1349) 101MB -
[08:40]   candidate: 2.3.1-6470-g036d646-snap (1868) 100MB -
[08:40]   beta:      2.3.1-6470-g036d646-snap (1868) 100MB -
[08:40]   edge:      2.4.0~alpha2-6630-gc925539-snap (1857) 97MB -
[08:47] hm, tgt still wouldn't start from the 2.3.1 snap inside a docker container. It does start if I snap install to the host itself though.
[10:18] seffyroff: the 2.4 snap requires ubuntu core 18
[10:18] is anyone able to answer a basic question about the API?
[10:18] seffyroff: which is still in development
[10:19] noted, thanks
[10:19] looks like i'll go for the package install, which I was leaning towards anyway so I can apply a downstream WOL patch
[10:20] although possibly something similar could be accomplished with snap scriptlets
[10:20] aln: sure, just ask your question and if someone knows the answer, they will reply
[10:21] it'll potentially produce a cleaner container also. So much low-level hackery to get snap running in the container first, then maas requires several additional hacks to work within snap within a container >.<
[10:22] i want to deploy a machine with the api using `/machines/{system_id}/?op=deploy`. i would have expected it to take a boot-resource ID as a parameter but instead it takes distro_series and hwe_kernel.
[10:23] i tried to fetch information about the boot-resource from `/boot-sources/{id}/`, but this does not return any information that seems to fit into distro_series or hwe_kernel.
[10:23] how can i issue a deploy via the api and specify the boot-resource/image i want to use?
[10:23] (deploy works, i just can't get it to use anything other than the default image)
[10:24] oops - second url should be `/boot-resource/{id}/` of course
[10:25] aln: deploy distro_series=xenial
[10:25] aln: hwe_kernel=hwe-16.04
[10:25] aln: "name": "ubuntu/xenial",
[10:26] aln: "subarches": "generic,hwe-p,hwe-q,hwe-r,hwe-s,hwe-t,hwe-u,hwe-v,hwe-w,ga-16.04,hwe-16.04,hwe-16.10"
[10:26] u'name': u'ubuntu/xenial',
[10:26] u'subarches': u'generic,hwe-p,hwe-q,hwe-r,hwe-s,hwe-t,hwe-u,hwe-v,hwe-w,ga-16.04'
[10:26] yeah, sorry haha
[10:26] aln: yeah, although of the subarches you can only use ga-16.04, hwe-16.04, etc
[10:26] at least that's in my streams
[10:27] okay
[10:27] and this works with custom non-linux images too?
[10:27] so, u'name': u'windows/win2012r2'
[10:27] u'subarches': u'generic', u'name': u'windows/win2012r2'
[10:28] i can specify win2012r2 and generic?
[10:28] aln: for non-ubuntu deployments, you don't need to set the hwe_kernel
[10:28] aln: the hwe_kernel is only really valid for ubuntu, because that's the kernel the machine will be running post deployment
[10:28] yes of course, thanks
[10:29] aln: as for windows, distro_series=win2012r2
[10:29] aln: it should work with distro_series=windows/win2012r2
[10:29] but doesn't necessarily
[10:29] yeah, it didn't for me
[10:29] but it is not necessary
[10:29] ok, yeah that seems like a bug
[10:34] roaksoax: thanks for the help, but afraid this didn't work
[10:34] issuing the request with distro_series=windows2012r2 makes it deploy xenial
[10:36] aln: it doesn't deploy xenial, it pxe boots into xenial to deploy windows
[10:36] status: Deploying Ubuntu 16.04 LTS
[10:36] aln: ah! that's stranger
[10:36] strange
[10:36] what version are you using?
[10:36] of maas?
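[Editorial note] The deploy call settled on above can be sketched as a minimal helper that builds the POST form fields for `/machines/{system_id}/?op=deploy`. The endpoint and the `distro_series`/`hwe_kernel` parameter names come from the discussion; the function name and structure are hypothetical, not MAAS's official client, and OAuth signing of the actual request is omitted.

```python
# Sketch: assemble the form fields for a MAAS API deploy request.
# Parameter names (distro_series, hwe_kernel) are as discussed in the
# channel; the helper itself is a hypothetical illustration.

def build_deploy_params(distro_series, hwe_kernel=None):
    """Return POST fields for /MAAS/api/.../machines/{system_id}/?op=deploy."""
    params = {"distro_series": distro_series}
    # hwe_kernel is only meaningful for Ubuntu deployments (e.g. "hwe-16.04",
    # "ga-16.04"); for Windows and other non-Ubuntu images it is omitted.
    if hwe_kernel is not None:
        params["hwe_kernel"] = hwe_kernel
    return params

print(build_deploy_params("xenial", hwe_kernel="hwe-16.04"))
print(build_deploy_params("win2012r2"))
```

These fields would then be sent as an OAuth-signed POST to the machine's endpoint (or via `maas <profile> machine deploy ...` on the CLI).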
[10:36] 2.3.0
[10:38] aln: so I'm looking at our CI, and we use maas machine deploy distro_series=win2012hrv2
[10:38] win2012hvr2
[10:39] so aln we test that path
[10:39] Deploying Ubuntu 16.04 LTS
[10:39] but presumably that's what is after the slash in your boot-resource's name
[10:40] right, so if the boot-resource's name is <os>/<series>, on distro_series= just use the <series>
[10:40] yeah
[10:40] Deploying Ubuntu 16.04 LTS
[10:40] aln: also, if you are talking directly to the API, not via the CLI
[10:41] aln: you should look at: https://github.com/maas/python-libmaas
[10:45] i did have a look at this, but we decided just to interface directly, as we don't need to use it much (='my boss told me to do it this way')
[10:45] gotcha
[10:45] looking through the library for an example of the call being made for deploy, but it's pretty abstracted
[11:17] if anyone reads my messages above and thinks they can help, i made a post here: https://askubuntu.com/questions/1013371/specifying-boot-resource-for-maas-api-deploy
[13:27] Quick question for you guys
[13:27] Is there a newer version of the maas-image-builder?
[13:28] I can't seem to BZR branch it
[13:32] ChanServ Who is allive?
[13:32] *allive
[13:32] *alive
[13:55] i set up some VMs with manual power management and it seems like they never become ready, is this intentional?
[13:57] neferty: they should be ready after commissioning, then you have to turn them off manually
[13:57] i have to turn them off? eh, alright
[13:58] not necessary, it should be on, but after commissioning it reaches the ready state; if not then there should be some error
[13:58] in test or commission
[13:59] oh hold on, i think i misunderstood when a machine is considered ready, my mistake
[16:15] Bug #1754697 opened: [FFe] Standing FFe for MAAS 2.4
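[Editorial note] The rule roaksoax describes for picking `distro_series` from a boot-resource name (e.g. "ubuntu/xenial", "windows/win2012r2": use the part after the slash) can be sketched as below. The helper name is hypothetical; only the name format and the after-the-slash rule come from the conversation.

```python
# Sketch of the mapping discussed above: a boot-resource "name" field like
# "ubuntu/xenial" or "windows/win2012r2" yields the distro_series value
# by taking the portion after the slash.

def distro_series_from_resource_name(name):
    """'ubuntu/xenial' -> 'xenial'; names without a slash pass through."""
    return name.rsplit("/", 1)[-1]

print(distro_series_from_resource_name("ubuntu/xenial"))
print(distro_series_from_resource_name("windows/win2012r2"))
```

Note that the channel log also suggests passing the full "windows/win2012r2" form should work but did not in aln's 2.3.0 setup, which is why the bare series name after the slash is the safer value.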