[16:30] <qhartman> So, I have 7 machines under maas control right now
[16:30] <qhartman> and all of them have been correctly started and configured with juju, and allocated to the correct user
[16:31] <qhartman> this morning I restarted celery in the hopes that it would clear up a problem I'm having with etherwake not working (it didn't) and now all the nodes are in the "ready" state, rather than showing as already allocated
[16:31] <qhartman> is that to be expected if celery restarts?
[16:31] <qhartman> How do I correct it?
[16:49] <jtv> qhartman: absolutely not expected — I can't imagine how that could result from a celery restart.
[16:57] <qhartman> jtv, ok, I didn't think so. I have some other issue going with juju where I hit a bug that nuked my env, so I'm thinking that might be the root cause.
[17:00] <jtv> Yes, that sounds much more probable.  The juju env gave the nodes back to the maas.
[17:04] <qhartman> jtv, yeah. I hadn't noticed that happened when I posted initially, so I'm going to call that the root.
[17:22] <qhartman_too> it does lead me to another question though. If I need to nuke-and-pave a node in maas, what's the "right" way to do it? I have been deleting it from maas and then re-initializing it, but the installer was refusing to install to a non-empty hdd
[17:22] <qhartman_too> so I've been manually wiping the drives of machines before trying to bring them back into maas
[19:52] <Term1nal> So, I got MAAS up and running. I bootstrapped juju, but when I run juju status, it cannot resolve the host, despite the MAAS cluster being set to manage DHCP/DNS
[19:56] <qhartman_too> make sure that the node name in the .juju/environments/your_env.jenv actually resolves correctly
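A quick way to do the check qhartman_too suggests (the env name and hostname below are placeholders; use whatever your generated .jenv actually contains):

```shell
# Find the host juju recorded for the bootstrap node / maas server:
grep -i 'maas-server\|bootstrap' ~/.juju/environments/maas.jenv
# Then confirm that name resolves from the machine running juju:
getent hosts node1.maas.example.com
```

If `getent` returns nothing, fix DNS (or your resolver configuration) before blaming juju.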
[19:57] <qhartman_too> I am running into a problem with wake-on-LAN not actually working all of a sudden
[19:58] <qhartman_too> I rebooted the maas box just in case something got into a weird state, and I can WOL machines from the commandline using etherwake
[19:58] <qhartman_too> and the WOL template in the /etc/maas/templates seems right
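The manual test qhartman_too describes looks roughly like this (MAC address and interface are placeholders). Running it by hand sends the same kind of magic packet the MAAS WOL template generates, which isolates whether WOL itself works independently of MAAS/celery:

```shell
# Send a wake-on-LAN magic packet to the target node's NIC:
sudo etherwake -i eth0 00:11:22:33:44:55
```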
[19:58] <qhartman_too> (and was working)
[19:58] <qhartman_too> but for some reason MAAS can't wake machines to commission them anymore.
[19:59] <qhartman_too> I looked in the celery.log as suggested earlier. Couldn't really make heads or tails of it, but nothing that seemed to mention WOL popped out.
[20:00] <qhartman_too> any suggestions for troubleshooting would be appreciated
[20:21] <Term1nal> qhartman_too: well I got it bootstrapped now, both nodes are allocated, except one of my nodes is always "pending" while the other is "running"
[20:21] <Term1nal> I deployed juju-gui, it went to the "pending" node
[20:21] <Term1nal> so it says the juju-gui agent status is "pending"
[20:21] <Term1nal> (I just went into my pfsense router and set the hostnames in the DNS forwarder so they resolve)
[20:23] <qhartman_too> is the installation actually going on the pending node?
[20:29] <Term1nal> yeah, juju status shows the 2nd node as pending, and the juju-gui agent-state as pending (on machine "1")
[20:29] <Term1nal> I removed it and deployed it --to 0
[20:29] <Term1nal> so it went to the machine that says "ready"
[20:29] <Term1nal> but still pending
[20:29] <Term1nal> oh, no now it says started hmmm
[20:36] <qhartman_too> it does take a bit to get things installed and whatnot
[20:39] <Term1nal> the second node is still pending :(
[20:40] <Term1nal> and I have some now stating that "no matching tools available"
[20:42] <magicrobotmonkey> can you get on a console for the node?
[20:48] <Term1nal> hm, no
[20:48] <Term1nal> having a key issue.
[20:49] <Term1nal> Permission denied (publickey)
[20:50] <magicrobotmonkey> i mean like a management console
[20:50] <magicrobotmonkey> though if its up enough to deny your key, thats probably a good sign
[20:50] <Term1nal> oh, like a local terminal? yeah.
[20:50] <Term1nal> I have it in my physical kvm
[20:58] <Term1nal> I just shitcanned the environment, gonna start over :D
[21:02] <Term1nal> magicrobotmonkey: So how do I fix the key issue?
[21:02] <Term1nal> I don't know how to log into the nodes interactively
[21:03] <Term1nal> I tried using the user/password for my admin account for MAAS
[21:03] <qhartman> you need to set up an ssh key in maas for the user you're running the juju commands as
[21:03] <qhartman> in the maas admin click on your user name and click preferences
[21:03] <qhartman> there should be a place to add an ssh key
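The steps qhartman describes, sketched out (the default key path is assumed; skip the keygen if you already have a keypair):

```shell
# Generate a keypair if you don't have one:
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# Paste the PUBLIC half into MAAS: click your username, then
# Preferences, then add the key under "SSH keys":
cat ~/.ssh/id_rsa.pub
```

MAAS injects that public key into nodes it deploys, which is what makes the later `ssh ubuntu@host` login work.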
[21:04] <Term1nal> Yeah I did, I must've used the wrong key.
[21:04] <magicrobotmonkey> are you sure you were using your key when ssh'ing?
[21:05] <Term1nal> I never SSHed into the nodes directly yet.
[21:05] <qhartman> oh, you'll also need to specify the user "ubuntu"
[21:05] <Term1nal> ah
[21:05] <magicrobotmonkey> ah right
[21:05] <Term1nal> ah that did it
[21:05] <qhartman> ssh -i /path/to/private_key ubuntu@host
[21:05] <magicrobotmonkey> i always forget that at least once
[21:05] <Term1nal> specifying ubuntu worked
[21:05] <qhartman> cool
[21:05] <Term1nal> so, do I want to SU to ubuntu to run the juju bootstrap, etc?
[21:06] <qhartman> no, just as whatever user you've been using
[21:06] <qhartman> ubuntu is just the default username it uses when starting the hosts
[21:06] <Term1nal> I see, ok cool.
[21:06] <Term1nal> Thanks.
[21:06] <qhartman> you can do that if you want, but it's not needed
[21:07] <Term1nal> might be better for the juju channel, but is there a way, using the juju-gui, to specify which node a service is being deployed to?
[21:07] <Term1nal> also, how does one determine the IP of the container?
[21:07] <qhartman> I dunno in the gui
[21:07] <magicrobotmonkey> i just use the cli for that
[21:07] <qhartman> on the cli you do --to N
[21:07] <magicrobotmonkey> its easier
[21:07] <qhartman> where N is the node number
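The `--to N` placement qhartman mentions, as a concrete sketch (the charm names are examples, not from the conversation):

```shell
# Pin units to specific existing machines by number:
juju deploy mysql --to 0            # machine 0 (the bootstrap node)
juju deploy rabbitmq-server --to 1  # machine 1
```

Without `--to`, juju asks MAAS for a fresh node instead of reusing one.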
[21:07] <Term1nal> yeah
[21:07] <Term1nal> what I've been doing.
[21:07] <Term1nal> what about the IP?
[21:07] <magicrobotmonkey> you can script spinning up an environment
[21:07] <magicrobotmonkey> juju status juju-gui
[21:08] <magicrobotmonkey> will tell you the hostname
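To pull out just the address from that status output (field name as reported by juju 1.x status; adjust if your version differs):

```shell
# Full status for the service, including its machine and address:
juju status juju-gui
# The public-address line is the hostname/IP to point a browser at:
juju status juju-gui | grep public-address
```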
[21:08] <Term1nal> ok, neat
[21:08] <Term1nal> this is kind of cool
[21:09] <Term1nal> cept that I did juju destroy-environment, now when I tried to bootstrap, it says it failed :P
[21:09] <Term1nal> job already running, juju-db, failed: rc: 1
[21:10] <qhartman> did you clean up the nodes and re-initialize them in maas?
[21:10] <Term1nal> I did not, do I just commission them again?
[21:10] <magicrobotmonkey> juju usually takes care of that
[21:10] <magicrobotmonkey> if its setup right, it will commission and decommission as needed
[21:10] <qhartman> huh, I have been doing that part by hand
[21:11] <Term1nal> hm
[21:11] <qhartman> How can juju commission nodes if it doesn't know their power settings?
[21:11] <magicrobotmonkey> through maas
[21:11] <qhartman> (unless you're using vms I suppose)
[21:11] <magicrobotmonkey> maas does an excellent job of using ipmi
[21:11] <qhartman> oh, so you fill that in and then just stop short of commissioning
[21:12] <qhartman> huh, I thought they had to be "ready" before juju would touch them
[21:12] <magicrobotmonkey> i don't know but when i set up juju i hooked it up to my maas install and it takes care of everything for me
[21:12] <magicrobotmonkey> its kind of nuts
[21:12] <magicrobotmonkey> except when it doesn't work its kind of hard to debug
[21:12] <qhartman> are you using real hardware?
[21:12] <magicrobotmonkey> yea
[21:13] <qhartman> huh
[21:13] <magicrobotmonkey> some weird old dell stuff
[21:13] <qhartman> and yeah, it's suuuper opaque
[21:13] <Term1nal> my nodes said "ready" but when it got partially through, said juju DB was running, stopping instance, then bootstrap failed.
[21:13] <magicrobotmonkey> yea im working on an openstack deployment
[21:13] <magicrobotmonkey> and maas and juju got me pretty far
[21:13] <qhartman> have you had luck with wol working consistently, or are you using some other power method?
[21:13] <magicrobotmonkey> but the networking stuff has stumped me
[21:13] <magicrobotmonkey> im using ipmi
[21:13] <qhartman> yeah, I'm at about the same spot
[21:13] <magicrobotmonkey> which works great
[21:13] <Term1nal> using WOL myself.. it wasn't working at first, but then magically it worked.
[21:13] <magicrobotmonkey> maas adds its own user when it boots the enlist preseed
[21:14] <magicrobotmonkey> super slick
[21:14] <qhartman> yeah, Term1nal I've had WOL stuff magically stop working
[21:14] <magicrobotmonkey> yea I've never used it
[21:14] <qhartman> when I enlist my HP machines it looks like it tries to IPMI them, but then it complains about no free user spots
[21:14] <Term1nal> lol
[21:14] <magicrobotmonkey> I've switched to attacking it with my established cobbler install and some ansible playbooks i found
[21:14] <Term1nal> well, they both just commissioned
[21:15] <Term1nal> so now I'm gonna run the bootstrap and watch things magically turn on
[21:15] <qhartman> I'm tempted to hook up their iLO ports, but I don't really have the switch space
[21:15] <Term1nal> it's pretty impressive :D
[21:15] <magicrobotmonkey> yea i think you need the ilo ports wired for ipmi?
[21:15] <qhartman> yeah, when I first got this going and the machines all started coming up one after another it was definitely an O_O moment.
[21:16] <magicrobotmonkey> yea same
[21:16] <qhartman> magicrobotmonkey, not sure, it's been awhile since I worked with iLO stuff, and it was always the "deluxe" iLO before, so I'm not sure of the quirks yet
[21:16] <magicrobotmonkey> if only maas was as configurable as cobbler, I'd be sold
[21:16] <qhartman> yeah, I'm still on the fence about the whole maas/juju thing
[21:17] <magicrobotmonkey> heh yea one of my machines randomly complains about not having the license for certain ilo functions
[21:17] <magicrobotmonkey> stupid
[21:17] <qhartman> yeah
[21:19] <magicrobotmonkey> yea I've been pretty happy with cobbler
[21:19] <qhartman> I haven't used it at all
[21:19] <qhartman> I use chef for all my AWS stuff
[21:19] <magicrobotmonkey> cobbler is like maas, for bare metal
[21:19] <qhartman> this is my first foray into config management w/ real hardware
[21:20] <qhartman> always just done it by hand before
[21:20] <qhartman> but if we grow this cluster like I think we will, that won't fly for long
[21:20] <magicrobotmonkey> heh me too then I had 80 nodes to do at once
[21:20] <qhartman> well, "by hand" using PXE and preseeds
[21:20] <qhartman> but still a helluva lot simpler than this
[21:21] <magicrobotmonkey> yea cobbler is more flexible/transparent
[21:21] <Term1nal> does the juju bootstrap do one at a time?
[21:21] <Term1nal> I have 2 nodes, only one powered up and started going.
[21:21] <qhartman> bootstrap should only bring up one node
[21:21] <Term1nal> ah
[21:21] <qhartman> the "machine 0"
[21:21] <magicrobotmonkey> it just powers up one node and install the juju master or whatever on it
[21:21] <Term1nal> then it gets node 1+?
[21:22] <qhartman> once that's up do the "juju deploy..." and it will bring up another
[21:22] <Term1nal> ah
[21:22] <qhartman> so, magicrobotmonkey, if you're happy w/ cobbler, why are you messing w/ maas?
[21:24] <magicrobotmonkey> openstack
[21:24] <magicrobotmonkey> the maas/juju seemed like a good way to get it going
[21:24] <qhartman> yeah
[21:24] <Term1nal> Yeah, I tried doing a foreman/staypuft plugin install of RDO openstack
[21:25] <Term1nal> but getting foreman setup and shit, and installing the staypuft plugin...
[21:25] <qhartman> yeah, I'm not far from giving up on maas / juju and just rolling some shell scripts
[21:25] <qhartman> at least then I'd get some insight into what's going on
[21:26] <qhartman> this just feels like it would be useful long term
[21:26] <Term1nal> only the latest pre-release version of foreman had the staypuft plugin in the repo, but it was an OLD version of the plugin that was not compatible with the version of foreman whose repo it was in...
[21:26] <Term1nal> So I would have to install from source
[21:26] <Term1nal> and it's all ruby, and screw ruby.
[21:27] <Term1nal> best I had so far was packstack (RDO) on CentOS
[21:27] <Term1nal> the only toolset I've tried that got a running openstack platform, even on a single box, in less than an hour.
[21:27] <Term1nal> with only a few commands.
[21:27] <magicrobotmonkey> I'd give cobbbler a look, qhartman
[21:28] <qhartman> yeah?
[21:29] <qhartman> It looks interesting on the surface
[21:29] <magicrobotmonkey> its in between a bunch of shell scripts and maas
[21:29] <magicrobotmonkey> If you're already familiar with pxebooting, it'll be cake for you to get going
[21:30] <qhartman> I actually haven't had much trouble with MAAS, aside from the unreliable WOL, it's the juju that has bugged me
[21:30] <magicrobotmonkey> yea thats pretty much my experience too
[21:30] <qhartman> since my deployment needs aren't quite what they want to do, it's been tough figuring out the right way to tweak things
[21:30] <magicrobotmonkey> i need a primer on whats going on behind the scenes or something
[21:30] <qhartman> yeah, me too
[21:31] <magicrobotmonkey> it probably doesnt help that my first experience with it is attacking a project with the complexity of openstack
[21:31] <qhartman> there are a million how-to's but there's very little (that I've found) that goes under the covers
[21:31] <qhartman> heh
[21:31] <qhartman> <-also
[21:31] <magicrobotmonkey> i think it did an ok job of deploying
[21:31] <magicrobotmonkey> other than some handholding keystone around proxies
[21:31] <magicrobotmonkey> but the networking is as confusing as crap
[21:32] <magicrobotmonkey> I'm starting to think i might have a driver issue
[21:32] <qhartman> yeah, all the stuff it's done right, is like magic, but when things go weird, or don't support being installed on the same host as one another, or some other corner-case I have the knack of finding, it's tricky to pick apart
[21:32] <magicrobotmonkey> exactly
[21:32] <qhartman> and yeah, openstack networking is a PITA
[21:33] <qhartman> all I want is my VMs to be bridged onto the main network, and get their DHCP and DNS handled by the stuff I have in place
[21:33] <magicrobotmonkey> heh all i want is any connectivity from my nodes
[21:33] <qhartman> no SDN, no single router to hide them, none of that
[21:33] <magicrobotmonkey> i dont care how
[21:33] <qhartman> so, if you are using the flatdhcpmanager
[21:34] <qhartman> I've found that the charms don't correctly install the nova-network package on the compute nodes
[21:34] <magicrobotmonkey> yup
[21:34] <qhartman> the OS and juju guys I've talked to swear it's supposed to
[21:34] <magicrobotmonkey> i switched to using neutron and got further
[21:34] <qhartman> but I'll be damned if I can see how
[21:34] <qhartman> Installed those by hand, and it got working
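The by-hand workaround qhartman describes would look something like this on each compute node (package and service names are the standard Ubuntu ones for that era of nova; treat this as a sketch of the workaround, not charm-supported procedure):

```shell
# The charm didn't pull in nova-network, so install and start it manually:
sudo apt-get install -y nova-network
sudo service nova-network restart
```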
[21:34] <magicrobotmonkey> yea it totally doesnt add any bridges
[21:35] <magicrobotmonkey> now im at the point where it gets all set up
[21:35] <magicrobotmonkey> and seems right
[21:35] <magicrobotmonkey> and then my external interface goes dark
[21:35] <qhartman> Yeah, it seems like neutron is supported better, but the last thing I want is all my VMs getting their traffic siphoned through a single host
[21:35] <magicrobotmonkey> yea no kidding
[21:35] <magicrobotmonkey> im still shooting for POC though
[21:35] <magicrobotmonkey> so just anything working would be nice
[21:36] <qhartman> yeah, I've managed to get there a couple times, but I haven't been able to repeat it consistently
[21:36] <magicrobotmonkey> heh
[21:36] <qhartman> last time one of the dhcp servers started talking on the main network and started fucking people up
[21:37] <qhartman> still not sure why
[21:37] <qhartman> I had everything working, and then adding a second compute node made that happen
[21:38] <magicrobotmonkey> haha
[21:38] <magicrobotmonkey> with its own dhcp?
[21:39] <qhartman> apparently. I had left the office already by the time it manifested, so I just shut everything down
[21:39] <qhartman> and have since nuked it all since I knew it was being bad, but not sure where
[21:52] <Term1nal> well I'll ask here since ubuntu-server is dead
[21:55] <jtv> Adding a node should never add a DHCP server...  Only editing the cluster network interfaces should do that.
[21:57] <qhartman> jtv, yeah, my best guess is that adding the second node made juju decide that the dnsmasq needed to be talking on the main interface so the other compute node could reach it
[21:58] <qhartman> and nobody noticed it was causing trouble until their lease expired
[21:58] <jtv> Hmm... maas doesn't run any dnsmasq.
[21:58] <qhartman> jtv, yeah, this has wandered into openstack territory
[21:59] <jtv> That does fit the story better I think.  :)
[21:59] <rvba> jtv: filed https://bugs.launchpad.net/maas/+bug/1317677
[22:00] <jtv> Cool.
[22:09] <Term1nal> hmmm
[22:09] <Term1nal> so I tried to deploy --to 0/lxc/0
[22:09] <Term1nal> do I need to -make- containers first before I can deploy to them?
[22:09] <Term1nal> how's that work?
[22:11] <qhartman> no
[22:11] <qhartman> do juju deploy --to lxc:0
[22:11] <qhartman> and that should do it
[22:11] <qhartman> start a new lxc container on node 0
[22:12] <qhartman> you only use the 0/lxc/0 -style notation when referring to existing nodes/containers
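Putting qhartman's two notations side by side (the service name is a placeholder):

```shell
# "lxc:0" asks juju to CREATE a new container on machine 0:
juju deploy openstack-dashboard --to lxc:0
# "0/lxc/0" names an EXISTING container (machine 0, container 0),
# e.g. when adding a unit to a container that's already there:
juju add-unit openstack-dashboard --to 0/lxc/0
```

Repeating `--to lxc:0` in later deploys keeps stacking fresh containers on machine 0, which is the behavior discussed just below.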
[22:14] <Term1nal> gotcha
[22:14] <Term1nal> so, if I do the deploy lxc:0, does that start it on a new node? can I specify a node to start the lxc on as well?
[22:15] <qhartman> after you have a couple running, examine the output of juju status and it will become clear
[22:15] <qhartman> the 0 refers to the node
[22:15] <Term1nal> OH
[22:15] <Term1nal> so I run lxc:0, that spins up a container ON 0
[22:15] <qhartman> so if you do multiple lxc:0 commands, it will spin up multiple containers on 0
[22:15] <qhartman> yup
[22:16] <Term1nal> oh sweet :3
[22:16] <Term1nal> that's neat.
[22:16] <qhartman> yeah, I have like a dozen lxc's running on my node 0
[22:17] <Term1nal> I deployed mysql and rabbit mq to node 0, and openstack-dashboard to lxc:0
[22:17] <qhartman> sure
[22:17] <qhartman> I did the opposite, gave rabbit and mysql their own containers, and put the dash on the node directly
[22:18] <Term1nal> ah
[22:18] <qhartman> The reason I went that way is that the dash needs to be user-facing
[22:18] <Term1nal> ohhhh
[22:18] <Term1nal> that makes sense
[22:18] <Term1nal> can I move a service into a container?
[22:18] <Term1nal> or just redeploy
[22:19] <qhartman> I think you'd have to destroy it and redeploy it
[22:19] <Term1nal> ok fair enough
[22:19] <qhartman> not sure though
[22:19]  * qhartman is still wearing his newb hat
[22:20] <Term1nal> So you can't have containers user-facing?
[22:20] <Term1nal> don't they get their own virtual IPs or what have you?
[22:20] <qhartman> dunno. they seem to only have single interfaces and they get their IPs from the admin-side network
[22:20] <qhartman> I'm sure it can be changed, but I've no idea how
[22:21] <qhartman> and on my physical boxes, I have eth0 as admin-side, and eth1 as user-side
[22:23] <qhartman> it seems like there should be a maas/juju/openstack channel to talk about the whole stack to help avoid the semi-OT talk in one channel or the other.
[22:24] <Term1nal> yeah I agree :D
[22:24] <Term1nal> it really involves all 3
[22:24] <qhartman> ok, /join #majuos
[22:28] <Term1nal> lol
[22:29] <rvba> allenap: https://bugs.launchpad.net/maas/+bug/1317682