[02:01] <smoser> nturner, maas provides the install environment with config for curtin.
[02:02] <smoser> the cloud-config-url on the cmdline is actually for the installer environment
[02:02] <smoser> the urls are slightly different between installer environment and installed environment
[02:02] <smoser> the installer environment's user-data is the installer
[02:02] <smoser> the installed environment is configured via config to curtin.
[02:02] <smoser> i think its done via dpkg set selections
[02:03] <smoser> nturner, i'd have to look around more, and what i'd do is https://gist.github.com/smoser/2610e9b78b8d7b54319675d9e3986a1b
[02:03] <smoser> do a deploy, jump in and poke around
[02:03] <smoser> likely the data you're after is in /curtin/config somewhere.
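smoser's "jump in and poke around" suggestion could look something like this; note the /curtin/config location is his guess above, not verified documentation, so treat the path as an assumption:

```shell
#!/bin/sh
# Hypothetical post-deploy inspection of a node; /curtin/config is an
# assumption taken from the discussion, not a documented path.
CURTIN_DIR=/curtin/config
if [ -d "$CURTIN_DIR" ]; then
    ls -la "$CURTIN_DIR"      # list whatever config curtin left behind
else
    echo "$CURTIN_DIR not found on this host"
fi
```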
[06:33] <mup> Bug #1597625 opened: package maas-rack-controller (not installed) failed to install/upgrade: subprocess new pre-installation script returned error exit status 1 <amd64> <apport-package> <xenial> <maas (Ubuntu):New> <https://launchpad.net/bugs/1597625>
[11:12] <mgz> hey guys
[11:12] <mgz> where is the source for the user docs kept?
[11:13] <mgz> eg http://maas.ubuntu.com/docs2.0/install.html
[11:32] <mgz> lp:maas docs/ directory, for irc record
[12:49] <f1gjam> anyone in London got MaaS, Autopilot, Landscape and Openstack running?
[13:34] <geese> question:  Do you absolutely need IPMI/BMC for MaaS to operate?  Or can it gracefully degrade for "plain" systems?
[13:34] <geese> I've got 6 servers, and 2 of them have IPMI
[13:35] <geese> second question: can I provide netboot nbi for mac systems to launch and install linux?  a la: http://www.fink.org/netboot/netbooting.html
[13:35] <geese> I have about 8 2012 mac minis
[13:36] <geese> they're running ESXi now, but I want to build a lab openstack on them
[13:36] <geese> thanks!
[14:32] <mup> Bug #1597779 opened: Wily Deployements are failing in Maas <MAAS:New> <https://launchpad.net/bugs/1597779>
[14:35] <mup> Bug #1597779 changed: Wily Deployements are failing in Maas <MAAS:New> <https://launchpad.net/bugs/1597779>
[14:44] <mup> Bug #1597779 opened: Wily Deployements are failing in Maas <MAAS:New> <https://launchpad.net/bugs/1597779>
[14:44] <mup> Bug #1597787 opened: cannot create more than 4 partitions when disk is configured with dos DiskLabel <MAAS:New> <https://launchpad.net/bugs/1597787>
[14:53] <mup> Bug #1597787 changed: cannot create more than 4 partitions when disk is configured with dos DiskLabel <MAAS:New> <https://launchpad.net/bugs/1597787>
[15:02] <mup> Bug #1597787 opened: cannot create more than 4 partitions when disk is configured with dos DiskLabel <MAAS:New> <https://launchpad.net/bugs/1597787>
[15:26] <mup> Bug #1597779 changed: Wily Deployements are failing in Maas <cloud-init:New> <MAAS:Invalid> <https://launchpad.net/bugs/1597779>
[15:26] <mup> Bug #1597801 opened: cannot create node VLAN interfaces on centos <MAAS:New> <https://launchpad.net/bugs/1597801>
[17:57] <f1gjam> anyone in London got MaaS, Autopilot, Landscape and Openstack running?
[18:57] <f1gjam> mpontillo, kiko - I am about to quit my endeavour to get autopilot running. It just seems too buggy
[19:00] <mpontillo> f1gjam: sorry to hear that; if you can describe the issue I'll see if I can find someone to help
[19:00] <f1gjam> its landscape, it wont install now. I have completely started from scratch, because before landscape would just hang installing openstack at 80% or so
[19:01] <f1gjam> now doing openstack-install
[19:01] <f1gjam> it just crashes
[19:01] <f1gjam> the install just times out
[19:01] <f1gjam> the funny thing is i had it installed
[19:01] <mpontillo> f1gjam: anything in the logs?
[19:02] <f1gjam> only that it is timing out
[19:02] <f1gjam> http://paste.ubuntu.com/18188970/
[19:04] <roaksoax_> dpb1_: ^^
[19:09] <f1gjam> mpontillo, im gonna try mirantis for a bit then come back to MaaS maybe after 2.0 comes out??
[19:09] <f1gjam> thanks for all your help, i do want to complete my doc, just need to get end to end working 100%
[19:11] <mpontillo> f1gjam: should be pretty soon. As for MAAS 2.0, I know the landscape team is now testing with it. Not sure what their timeline is. Thanks for trying MAAS and for all your feedback.
[19:12] <f1gjam> ill be back in a week most likely
[19:12] <f1gjam> i was thinking of just going to the office ;)
[19:13] <f1gjam> and saying hey anyone here got MaaS working :D
[19:13] <mpontillo> f1gjam: lol, hold up a sign
[19:13] <f1gjam> yeah
[19:13] <f1gjam> one man protest
[19:19] <dpb1_> hey f1gjam do you have a juju status from that attempt?
[19:19] <f1gjam> erm
[19:19] <f1gjam> i can check
[19:20] <dpb1_> export JUJU_HOME=~/.cloud-install/juju
[19:20] <dpb1_> juju status
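dpb1_'s two commands combined into one guarded snippet; `~/.cloud-install/juju` is the juju home that openstack-install uses, per this conversation:

```shell
#!/bin/sh
# Point juju at the environment openstack-install created, then check status.
export JUJU_HOME="$HOME/.cloud-install/juju"
if command -v juju >/dev/null && [ -d "$JUJU_HOME" ]; then
    juju status --format=tabular
else
    echo "juju home $JUJU_HOME not set up yet"
fi
```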
[19:21] <f1gjam> its doing something
[19:22] <f1gjam> just hanging
[19:24] <f1gjam> dpb1_, just hanging
[19:24] <dpb1_> is the bootstrap node still 'deployed' in maas?
[19:39] <f1gjam> no
[19:39] <f1gjam> i released it
[19:39] <f1gjam> let me kick the install off again
[19:45] <mup> Bug #1597266 changed: MAAS - multiple DNS zones with the same duplicated zone name <cpec> <maas> <MAAS:New> <https://launchpad.net/bugs/1597266>
[19:48] <mup> Bug #1597266 opened: MAAS - multiple DNS zones with the same duplicated zone name <cpec> <maas> <MAAS:New> <https://launchpad.net/bugs/1597266>
[19:51] <mup> Bug #1597266 changed: MAAS - multiple DNS zones with the same duplicated zone name <cpec> <maas> <MAAS:New> <https://launchpad.net/bugs/1597266>
[20:01] <ahasenack> f1gjam: another good test is to just do a juju bootstrap on that maas, that is, configure a juju env that points at the maas server
[20:01] <ahasenack> f1gjam: and bypass landscape and cloud installer for now
[20:08] <pacavaca_> hey guys. I'll just repeat my yesterday's question, maybe there's someone around to answer it this time. How hard is it to "fork" MAAS, make a few small changes to the REST API (and maybe in UI) and then build custom deb packages, maybe a private PPA (if such a thing even exists)? My changes are really small, but you probably will not want them in upstream. However, they're very important for the internal use
[20:08] <pacavaca_> case where I'm trying to apply MAAS..
[20:13] <f1gjam> ahasenack, yeah i was thinking about that, thats how the 2.0 install would be right?
[20:13] <f1gjam> whats the value of landscape im guessing
[20:14] <ahasenack> f1gjam: you can bypass the cloud-installer if all you want is landscape, it just saves you a few easy configuration steps in landscape
[20:14] <f1gjam> i want openstack installed
[20:14] <f1gjam> the one that is supported for up to 10 machines
[20:15] <ahasenack> f1gjam: I suggest starting with the plain juju + maas combination, see if you can bootstrap a node
[20:15] <ahasenack> and maybe even do a simple "juju deploy ubuntu", then access it ("juju ssh ubuntu/0")
[20:15] <f1gjam> i can definitely bootstrap a node
[20:15] <ahasenack> because we need all that working to proceed
[20:15] <f1gjam> and i definitely had landscape installed
[20:15] <ahasenack> also see if the node can reach the internet
[20:15] <f1gjam> where it hung for me was
[20:15] <ahasenack> ok, that's good
[20:15] <f1gjam> during the openstack setup in landscape it got stuck
[20:15] <f1gjam> i still have those original logs
[20:16] <ahasenack> f1gjam: did you happen to click "cancel" and download logs, or file a bug?
[20:16] <f1gjam> i downloaded
[20:16] <f1gjam> didnt raise a bug as i didnt know what to raise a bug for
[20:17] <ahasenack> f1gjam: file a bug here: https://bugs.launchpad.net/landscape/+filebug
[20:17] <ahasenack> and attach the tarball you have
[20:17] <f1gjam> https://www.dropbox.com/s/w1ftoof4a58lvba/landscape-openstack-autopilot-logs-2016-06-29T00-03-09Z.tar.gz?dl=0
[20:17] <ahasenack> f1gjam: you can try openstack install again from scratch if you want, and if you hit the landscape timeout error again, please leave the environment up and we can check what happened
[20:18] <f1gjam> ok
[20:18] <f1gjam> im doing that now
[20:18] <f1gjam> its deploying landscape as we speak (via autopilot)
[20:18] <f1gjam> so another 5/10 mins max
[20:18] <ahasenack> hm, no, let's fix the terminology
[20:19] <ahasenack> autopilot is a landscape component, and is what deploys openstack
[20:19] <f1gjam> oh
[20:19] <ahasenack> landscape has autopilot and many other things
[20:20] <ahasenack> so you are deploying landscape via openstack-install, if I understood you
[20:20] <f1gjam> yes thats correct
[20:20] <f1gjam> that was failing
[20:20] <ahasenack> landscape being one of the options from within openstack-install
[20:20] <f1gjam> and now ....
[20:20] <f1gjam> i havent changed anything... lets see if it installs
[20:20] <f1gjam> as I did this before and it worked....
[20:20] <f1gjam> the only thing i did was release all servers and start again
[20:20] <ahasenack> in another terminal, can you export JUJU_HOME again and show me a "juju status --format=tabular"?
[20:20] <f1gjam> so if this works, there is a bug somewhere which causes it to randomly fail
[20:21] <f1gjam> http://paste.ubuntu.com/18193535/
[20:23] <f1gjam> https://bugs.launchpad.net/landscape/+bug/1597907
[20:23] <ahasenack> thx
[20:23] <ahasenack> f1gjam: can you check /var/log/syslog on the maas server, see if there are errors or messages about running out of dhcp leases?
[20:24] <f1gjam> ahasenack, here is the doc which i started creating about the whole install... as you can see Landscape was installed. https://www.dropbox.com/s/fxs3voyjgtkrei4/Ubuntu_14.04_Install_Docuement.pdf?dl=0
[20:25] <dpb1_> pacavaca_: I'd suggest you just try it. :)
[20:25] <f1gjam> ahasenack, no errors
[20:25] <f1gjam> i have like over 50 IP address leases available
[20:25] <f1gjam> only error
[20:25] <f1gjam> Jun 30 21:10:11 maas dhcpd: Can't create new lease file: Permission denied
[20:25] <ahasenack> f1gjam: how is your ip separation in the maas cluster interface? In terms of dynamic and static ranges
[20:26] <ahasenack> ok, that error is "normal"
[20:26] <f1gjam> 192.168.10.0 - Public
[20:26] <f1gjam> 192.168.20.0/24 private
[20:26] <f1gjam> DHCP - 192.168.101-150 Dynamic
[20:26] <f1gjam> 192.168.20.51-100 Static
[20:26] <f1gjam> brb
[20:26] <f1gjam> i need 10 mins
[20:32] <mpontillo> pacavaca_: I'm curious what your changes are; do you have a diff? if there is something missing from MAAS that is very important for someone to deploy it, I'd like to better understand what that is
[20:34] <mpontillo> pacavaca_: I've pushed MAAS builds to a personal PPA before, but I don't have instructions handy. I'll try to remember to ping you if that changes
[20:36] <mpontillo> pacavaca_: we have separate branches for packaging that you'd want to pull... this is the packaging branch for trunk; there are others used for various MAAS releases. https://code.launchpad.net/~maas-maintainers/maas/packaging
[20:39] <pacavaca_> mpontillo: So, basically we want to be able to release/acquire machines without re-imaging in between (most of the times). I found in code that it's pretty much enough to copy power_on action in API, rename it to, say boot_local, and before powering node on call set_netboot(False).
[20:40] <pacavaca_> mpontillo: I'm trying to run build locally, is it supposed to generate .deb packages at the end or they're somehow built on launchpad side? (I'm completely unfamiliar with launchpad and ubuntu hacking..)
[20:41] <mpontillo> pacavaca_: as I understand it (though packaging is not my forte) you upload the source and signed .dsc and launchpad builds the package for you
[20:41] <mpontillo> pacavaca_: I think the last time I did it, I followed this guide. http://askubuntu.com/questions/71510/how-do-i-create-a-ppa
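The flow described in that guide roughly amounts to building a signed source package and uploading it; Launchpad then builds the binaries. A sketch, with the PPA name as a placeholder (the real packaging lives in the lp:~maas-maintainers/maas/packaging branch mentioned above):

```shell
#!/bin/sh
# Hypothetical PPA upload flow; run from inside the unpacked source tree
# with a debian/ directory. PPA name below is a placeholder.
PPA="ppa:example-user/maas-hacks"
if command -v debuild >/dev/null && command -v dput >/dev/null; then
    debuild -S -sa                     # build a signed source package (.dsc + .changes)
    dput "$PPA" ../maas_*_source.changes
else
    echo "install devscripts and dput first"
fi
```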
[20:42] <mpontillo> pacavaca_: if you could post your code as a merge proposal, I'd like to get an understanding of the use case and if we could one day support something like that in MAAS
[20:43] <pacavaca_> mpontillo: thank you, will read through it. Sure, let me just first write the change :) The use case I can explain though. It's not exactly what MAAS is designed for, but in my opinion, maas fits there quite well.
[20:44] <mpontillo> pacavaca_: can you tell me more about why you need that type of release/acquire cycle? keep in mind that releasing a node has other side effects, like resetting the storage configuration
[20:48] <ahasenack> f1gjam: I'm reading your doc, I think you need a bigger dynamic range in dhcp
[20:48] <ahasenack> it's used by containers, and we deploy a lot of them
[20:49] <ahasenack> I would suggest 10-149 for dynamic, 150-250 for static
[20:49] <ahasenack> dynamic is also used when enlisting and commissioning nodes
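For the record, ahasenack's suggested split on a single /24 works out like this (quick arithmetic sketch; the .10/.149/.150/.250 boundaries are the ones he proposes above):

```shell
#!/bin/sh
# Size of the suggested ranges: dynamic .10-.149, static .150-.250.
dyn=$((149 - 10 + 1))
sta=$((250 - 150 + 1))
echo "dynamic=$dyn static=$sta total=$((dyn + sta))"
```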
[20:49] <pacavaca_> Basically, we have lots of dev hardware available for different teams to test their code. Right now the team is not so big and everything is managed manually (in chat, in google docs, everywhere). What I'm proposing is sort of a self-service portal, where each developer can come and claim any number of nodes with given capabilities. We also want those nodes to be imaged with our custom image (ubuntu 12.04
[20:49] <pacavaca_> + lots of packages pre-installed + our toolchain). The image is 3.7 GB large and it's quite painful to install it every time a node changes hands, though sometimes people screw up the OS and that's when we really want to re-image. Most of the time they're ok re-using the node after someone else. In fact, that's what everybody does right now and re-imaging is only done in emergency cases through clonezilla.
[20:49] <pacavaca_> In addition, I think if we would have a centralized pool of hw, we can easily re-purpose some dev nodes for other infra needs by installing pre-canned images there (e.g. we need one more LXD host - ok, take one node and flash LXD host image there).
[20:49] <pacavaca_> that's to mpontillo:
[20:49] <dpb1_> ahasenack: http://www.ubuntu.com/download/cloud/install-openstack-with-autopilot <- step 4 now lists the minimum amounts
[20:50] <ahasenack> thx dpb1_
[20:50] <ahasenack> f1gjam: ^
[20:51] <mpontillo> pacavaca_: ah, interesting - yeah, MAAS is really optimized for small images so that's understandable
[20:54] <mpontillo> pacavaca_: would it be sufficient to be able to simply change the owner of a node?
[20:54] <pacavaca_> mpontillo: I discovered MAAS a couple of months ago and "sold" it to the team. Now I've already spent quite a lot of time trying to POC maas (and not only maas) solutions and none of them is 100% finished. MAAS seems closest to the finish, but this small thing is really a show stopper. No one will use it if they need to wait 30 min+ before getting their nodes. So, that's why I thought maybe it's just easier to add
[20:54] <mpontillo> pacavaca_: like, request ownership transfer from another user, or similar?
[20:54] <pacavaca_> missing functionality. It doesn't look too big of a change, but maybe I'm missing something.
[20:54] <pacavaca_> mpontillo: yeah, that would be good too
[20:54] <pacavaca_> maybe even easier than rebooting the node, etc
[20:55] <mpontillo> pacavaca_: right now I don't think MAAS allows you to see nodes someone else has acquired. but I wonder if the API could be used to update the owner
[20:56] <mpontillo> pacavaca_: I guess you would have to reboot it at least once (or kick cloud-init somehow) to grab the new SSH keys
[20:56] <pacavaca_> mpontillo: I haven't seen such an option, but I can re-check. SSH keys are easy in our case: we all use the same key :)
[20:59] <f1gjam> ahasenack, dpb1_ thanks for the info, ill adjust now. BTW the install failed, so i am guessing it is because of the IPs. Although the logs dont make that clear
[20:59] <f1gjam> do you want to see the logs?
[20:59] <ahasenack> f1gjam: the landscape install should work, it doesn't consume many ips
[20:59] <ahasenack> f1gjam: what does juju status --format=tabular have to say now?
[21:00] <f1gjam> http://paste.ubuntu.com/18195325/
[21:00] <mpontillo> pacavaca_: which version of MAAS are you using?
[21:00] <ahasenack> f1gjam: and juju debug-log, is it still printing out messages?
[21:01] <ahasenack> (you can abort with ctrl-c)
[21:01] <ahasenack> it's like tail -f <log>
[21:01] <f1gjam> yes its still printing messages
[21:01] <f1gjam> http://paste.ubuntu.com/18195389/
[21:02] <pacavaca_> mpontillo: 2.0b8. Just found "maas admin machine set-owner-data". This is probably what you're talking about. But I also realized that people who're done using the node should release it, otherwise how would others know which nodes they can acquire?
[21:02] <pacavaca_> and releasing will automatically power it off and then boot from pxe next time
[21:03] <ahasenack> f1gjam: hm, those are bad
[21:04] <ahasenack> f1gjam: can you "juju ssh 0 ps fauxw" and print the output?
[21:04] <ahasenack> paste, I mean
[21:04] <f1gjam> http://paste.ubuntu.com/18195527/
[21:05] <mpontillo> pacavaca_: set-owner-data allows you to add arbitrary key/value pairs to a node, to make notes about it and such
[21:08] <f1gjam> ahasenack, should i start from scratch
[21:08] <f1gjam> release all nodes
[21:08] <f1gjam> re-commission
[21:08] <ahasenack> f1gjam: yeah, and try something simple
[21:08] <ahasenack> f1gjam: juju bootstrap
[21:08] <ahasenack> f1gjam: then juju deploy ubuntu --to lxc:0
[21:08] <ahasenack> f1gjam: that will test bootstrap (which worked so far) and deploying a container on the bootstrap node
[21:09] <ahasenack> which is what we need working to begin with
[21:09] <f1gjam> but release the nodes first?
[21:09] <ahasenack> yes
[21:09] <ahasenack> you will have to configure juju to point at your maas server, have you done that before?
[21:09] <f1gjam> nope
[21:09] <ahasenack> and unset JUJU_HOME, that was just for openstack-install
[21:10] <ahasenack> f1gjam: https://jujucharms.com/docs/stable/config-maas
[21:10] <ahasenack> all you really need is an environments.yaml file with the maas env, those instructions will get you one with a ton of environments
[21:11] <f1gjam> ok node released
[21:11] <f1gjam> updated dhcp
[21:12] <ahasenack> f1gjam: http://pastebin.ubuntu.com/18195888/ is a minimal file, put it in ~/.juju/environments.yaml
[21:12] <ahasenack> … is the ip of your maas server, and the oauth string you can get by logging in to maas and clicking on your name in the top right
[21:12] <ahasenack> it's a string with two :
[21:12] <ahasenack> like: text:text:text
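For the record, a minimal environments.yaml along the lines ahasenack describes might look like this (the server address is a placeholder, and maas-oauth is the three-part `text:text:text` key from the MAAS user page):

```yaml
environments:
  maas:
    type: maas
    # placeholder address; use your own MAAS server's IP
    maas-server: 'http://192.168.20.2/MAAS/'
    # the key:token:secret API key from your MAAS user page
    maas-oauth: 'key:token:secret'
    default-series: trusty
```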
[21:13] <ahasenack> f1gjam: then do juju env -l, to list (should show maas only), juju env (to confirm it's maas)
[21:13] <ahasenack> and then try "juju bootstrap", it should pick a node and start installing
[21:13] <ahasenack> f1gjam: what version of ubuntu are you on, btw? The machine where you ran cloud-installer, and where maas sits
[21:13] <ahasenack> trusty or xenial?
[21:14] <f1gjam> trusty
[21:14] <f1gjam> ok file created
[21:14] <f1gjam> ERROR the name of the environment must be specified
[21:16] <ahasenack> for bootstrap?
[21:17] <f1gjam> yes
[21:17] <ahasenack> did you unset JUJU_HOME?
[21:17] <f1gjam> no
[21:17] <f1gjam> let me do that
[21:17] <ahasenack> do that, or else it will be pointing at the other one created by openstack-install
[21:17] <ahasenack> the default is ~/.juju
[21:18] <f1gjam> done
[21:18] <ahasenack> so if you set JUJU_HOME to something else, that's where it will look
[21:18] <f1gjam> its doing something
[21:18] <ahasenack> ok, what does juju env -l show?
[21:18] <ahasenack> oh, or just go ahead :)
[21:18] <f1gjam> is this basically the CLI install
[21:18] <ahasenack> no
[21:18] <ahasenack> this is the underlying tech that the openstack-install and landscape/autopilot use
[21:19] <ahasenack> it's just easier to debug certain issues if we isolate them like this
[21:19] <f1gjam> i see
[21:19] <ahasenack> did it allocate a maas node?
[21:19] <f1gjam> everytime i hit a bug
[21:19] <f1gjam> i re-started from scratch
[21:19] <ahasenack> this will be stuck for a bit, some minutes
[21:19] <f1gjam> included maas rebuild
[21:19] <f1gjam> that way the doc is tested
[21:19] <ahasenack> but you can watch progress in the maas server
[21:20] <f1gjam> how can you watch
[21:20] <f1gjam> on maaS
[21:20] <ahasenack> go to the node listing
[21:20] <f1gjam> it just says deploying
[21:20] <ahasenack> right
[21:20] <ahasenack> so click on that
[21:20] <ahasenack> there is an event log or something down the page
[21:20] <f1gjam> ok
[21:20] <f1gjam> ok its going through the pxe cycle
[21:20] <f1gjam> 3 network cards
[21:21] <f1gjam> so takes a bit just to get past that
[21:21] <f1gjam> let me remote console on to the machine
[21:21] <ahasenack> I think if you click on "view history" you will get more details, and live refresh
[21:21] <mpontillo> pacavaca_: here's a hack that lets you change the owner. https://paste.ubuntu.com/18196246/
[21:21] <f1gjam> yeah i noticed that
[21:21] <f1gjam> i put in a feature
[21:21] <f1gjam> request
[21:21] <f1gjam> for console view in MaaS
[21:21] <f1gjam> :)
[21:21] <ahasenack> f1gjam: I have to go now, it's 6:21pm here
[21:21] <ahasenack> :)
[21:21] <f1gjam> where are you
[21:21] <mpontillo> pacavaca_: you would change it by doing something like "maas admin machine update 4y3hab owner=user2"
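mpontillo's owner-transfer command, wrapped as a guarded sketch; this only works with the hack from his paste applied, and the profile, system id, and owner values below are the placeholders from his example:

```shell
#!/bin/sh
# Owner-transfer hack (requires mpontillo's patch from the paste above).
# All three values are placeholders taken from the example in the chat.
PROFILE=admin
SYSTEM_ID=4y3hab
NEW_OWNER=user2
if command -v maas >/dev/null; then
    maas "$PROFILE" machine update "$SYSTEM_ID" owner="$NEW_OWNER"
else
    echo "maas CLI not available on this host"
fi
```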
[21:22] <ahasenack> f1gjam: br
[21:22] <f1gjam> br?
[21:22] <ahasenack> f1gjam: yep, Brazil
[21:22] <f1gjam> oh wow
[21:22] <f1gjam> ok mate
[21:22] <f1gjam> thanks for your help
[21:22] <f1gjam> ill be here till around 1/2am UK time
[21:22] <ahasenack> f1gjam: after bootstrap, if that worked, do the "juju deploy ubuntu --to lxc:0"
[21:22] <f1gjam> ill keep posting
[21:22] <f1gjam> ok
[21:22] <f1gjam> but if that works
[21:22] <ahasenack> f1gjam: and after bootstrap is done, you can start doing "juju status --format=tabular" to check on things
[21:22] <f1gjam> ill start from scratch again
[21:22] <pacavaca_> mpontillo: thank you!
[21:23] <ahasenack> and juju ssh ubuntu/0, juju ssh 0, that kind of thing
[21:23] <ahasenack> f1gjam: interesting logs to look at are /var/log/juju/all-machines.log on the bootstrap node, and other logs in /var/log/juju
[21:23] <ahasenack> f1gjam: a good deploy will end with this ubuntu/0 service "started" in juju status and juju debug-log mostly silent
[21:23] <ahasenack> good luck, cya around
[21:24] <mpontillo> pacavaca_: let me know if that works for you, if so it might be an easy change for MAAS 2.1. will need to discuss with the team though
[21:24] <f1gjam> c ya
[21:32] <pacavaca_> mpontillo: I'll take a closer look once I have the buildout build passing locally. But as I said, I think nodes still will need to go through "Release" step, otherwise there's no way for people who want a node to know which nodes are free.
[21:34] <pacavaca_> However I have another idea in mind: what if release did not set netboot to True by default, and it was done by "Deploy" instead, right before deployment. This way power-on would simply power the node on, which seems like a more intuitively expected thing for it to do.
[21:36] <mpontillo> pacavaca_: Hm. Or maybe a "null deployment" option, which would go through the motions but not actually touch the disks.
[21:38] <mpontillo> pacavaca_: you're right that this is a fairly specialized use case; we designed for MAAS having no idea what is currently on the disk of that node; MAAS assumes it could have been anything!
[21:38] <pacavaca_> mpontillo: yeah, something like this. "null deployment" seems more like "just booting" :)
[21:38] <pacavaca_> mpontillo: Yeah, I understand that. But what maas does covers 90% of our use case, so that's why I'm trying to cover the other 10
[21:38] <mpontillo> pacavaca_: well, in addition you'd see the node change status to "Deployed" rather than just staying in Allocated
[21:40] <mpontillo> pacavaca_: in any case, patches to MAAS are welcome if you can generalize them enough. I think the "ownership transfer" idea is safest because then it's explicitly a user giving another user a node, and the original user knows exactly what was on it. when a node is released, as I said, MAAS's assumption is that we don't know what's on the disks and they need
[21:40] <mpontillo> to be reformatted
[21:41] <pacavaca_> mpontillo: actually, simply powering it on and then booting from disk (tried that yesterday by hacking code in the python package directly), that changes state to Deploying. Then it times out though, but I suspect that's because it waits for something from cloud-init to happen and of course it never happens.
[21:41] <mpontillo> pacavaca_: your point about the user not knowing a node is free is a good one though. it's almost a "half deployed" state. except that when I get a test machine I normally want a clean slate =)
[21:41] <pacavaca_> mpontillo: yeah, sure. I'll first try to solve the specific use case and then, if I can generalize it, I'll submit a patch, no doubts
[21:42] <mpontillo> pacavaca_: great, thanks!
[21:45] <pacavaca_> mpontillo: What is the "recommisioning" concept exactly? Isn't it "ensuring that node is in a clean state faster than by re-imaging"?
[21:47] <mpontillo> pacavaca_: commissioning is when MAAS performs a network boot to inventory the host - CPU, RAM, disks, network cards, etc. recommissioning is just doing that again (in case something changed)
[21:48] <pacavaca_> mpontillo: ah, ok then. thanks
[23:09] <mup> Bug #1579073 changed: [ERROR] Failed to probe and enlist VMware nodes: 'vim.Folder' object has no attribute 'summary' <MAAS:New> <MAAS 1.9:New> <MAAS 2.0:New> <https://launchpad.net/bugs/1579073>
[23:19] <f1gjam> ahasenack, im starting from scratch