[02:58] Bug #1630817 changed: [ERROR] Unable to find a copy of bootppc64.bin in the SimpleStream or a previously downloaded copy.
[03:30] Hi Kiko
[03:30] How to handle errors in the commissioning script?
=== frankban|afk is now known as frankban
[13:04] Bug #1631358 opened: [2.1] Incorrect logging message - showing SERVICE_STATE.ON
[13:13] Bug #1631358 changed: [2.1] Incorrect logging message - showing SERVICE_STATE.ON
[13:25] Bug #1631358 opened: [2.1] Incorrect logging message - showing SERVICE_STATE.ON
[14:04] Bug #1630745 changed: [2.1b1] Nodes faild to deploy due to 503 error in archive
[14:22] Bug #1630745 opened: [2.1b1] Nodes faild to deploy due to 503 error in archive
[14:25] Bug #1630745 changed: [2.1b1] Nodes faild to deploy due to 503 error in archive
[14:39] * baldpope sigh
[14:39] figured out my networking issue from inside lan - forgot to turn on nat at perimeter firewall..
[14:51] hi, can't restart maas-dhcpd
[14:51] root@hnode1-8:~# service maas-dhcpd restart Failed to restart maas-dhcpd.service: Unit maas-rackd.service is masked.
[14:57] fbarilla: restart maas-rackd
[14:57] fbarilla: that should restart maas-dhcpd
[14:57] Bug #1631403 opened: [2.1b2] Service 'ntp' is SERVICE_STATE.ON but not in the expected state of 'running', its current state is 'exited'.
[14:57] root@hnode1-8:~# service maas-rackd restart Failed to restart maas-rackd.service: Unit maas-rackd.service is masked.
[15:03] Bug #1631403 changed: [2.1b2] Service 'ntp' is SERVICE_STATE.ON but not in the expected state of 'running', its current state is 'exited'.
[15:06] Bug #1631403 opened: [2.1b2] Service 'ntp' is SERVICE_STATE.ON but not in the expected state of 'running', its current state is 'exited'.
[16:15] Bug #1631420 opened: [2.1 UI] Images page "Queued for download" is confusing when selections are not saved
[16:15] Bug #1631421 opened: [2.1b2] failure to deploy all systems that lasted until reboot of maas container
[16:33] hello
[16:34] Is anyone on line?
[16:36] yes
[16:36] braven_, ^
[16:38] I am having an issue. I keep getting a internal error
[16:38] when try to save a interface to dhcp
[16:42] The cluster controller will not accept the changes
=== frankban is now known as frankban|afk
[16:56] Code: Unknown Error
[17:01] braven_, was it you was asking about the secret yesterday?
[17:06] no
[17:07] We have the cluster Controller connected
[17:09] it was working. We tried adjust the dhcp range. It would not save
[17:10] when we try to save the changes to the interface we get "Internal server error."
[17:10] hi there
[17:10] hello
[17:11] i hope someone can shed some light for me, i am trying to install MAAS and it says a few sentences down about the clusters tab, where the heck is it at?
[17:12] what version of MAAS
[17:12] i think it is 2.0, let me check
[17:13] if i'm running ubuntu 16.04 LTS, it should be 2.0 right?
[17:13] yeap.
[17:13] ok then, thats what i have running then.
[17:14] is the clusters tab, no more then or something?
[17:15] I am using the old version because my hardware. But do remember read they change it
[17:15] i also can't seem to get my private network functioning correcty within ubuntu itself either for some reason, would that have something to do with it maybe?
[17:15] They something new called Fabric
[17:16] ok, i have seen that.
[17:16] sean5384, do you have a link to the docs you are reading?
[17:16] Have used the older version of MAAS
[17:16] i was trying to install via the documentation, but you can't follow that now i can see.
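The failed restarts quoted above ("Unit maas-rackd.service is masked") come from systemd refusing to start a masked unit; since the rack controller supervises the DHCP service, the advice in-channel is to unmask and restart maas-rackd rather than maas-dhcpd itself. A minimal recovery sketch, assuming the mask was unintentional and the unit names match the log:

    # See why the unit refuses to start
    sudo systemctl status maas-rackd.service
    # Drop the mask, then restart the rack controller;
    # maas-dhcpd is managed by rackd and should come back with it
    sudo systemctl unmask maas-rackd.service
    sudo systemctl restart maas-rackd.service
    sudo systemctl status maas-dhcpd.service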
[17:16] braven_, which version are you using?
[17:17] 1.8
[17:17] ahhh
[17:18] on which version of ubuntu
[17:18] http://maas.io/docs/en/installconfig-rack for Sean
[17:18] 14.04
[17:18] the hardware I have is not supported by 16
[17:18] i just saw the documentation on top of this site.....
[17:18] lol
[17:19] this makes more sense now.
[17:19] braven_, 1.8 is not really supported. 1.9 is the version in Trusty
[17:19] i was getting distraught because there was no clusters tab at the top of the page anywhere.
[17:19] lol
[17:20] MAAS is awesome when it works. We were able to deploy 60 nodes in about 3 hour
[17:20] wow
[17:20] how do I know what version I am running?
[17:20] i'm trying to get just 5 going right now.
[17:20] braven_, it should say at the bottom of the web ui
[17:21] or you can do dpkg -l | grep maas on the maas server
[17:21] braven_, that largely depends on your hardware/network too, we regularly deploy 40+ nodes in about 15-20 mins
[17:22] haven't timed it though so it could be a bit longer
[17:22] wow
[17:22] I wonder when we configured wrong
[17:23] it will use a lot of bandwidth, is one thing
[17:23] 1.7.6
[17:23] so can we upgrade without loosing all the nodes?
[17:24] braven_, you *should* be able to
[17:27] do know why it might be give that error when I am trying to save a cluster interface
[17:29] could be anything really. haven't used 1.7 much
[17:33] braven_, but some logs would be helpful
[17:35] braven_, if you're willing to upgrade that would be a good first step, then if it still happens, file a bug
[17:39] Hi all, if I want to control VMs on a vCenter server, what has to be on the same subnet as the rack controller, the vCenter appliance, or the individual esxi servers?
[19:39] Nodes won't enlist, even though booted under maas. Metadata service appears unreachable. Version 1.9 on 14.04.
[19:39] Where should I look for that?
[19:54] Is there any help on this channel? Is there somewhere else to ask questions?
=== spammy is now known as Guest28928
=== CyberJacob is now known as Guest90146
=== med_ is now known as Guest94967
=== Guest94967 is now known as medberry
[20:20] with recent update to conjure, i get error after selecting which type of openstack to deploy
[20:25] baldpope: which is?
[20:25] Bug # changed: 1470202, 1473254, 1553647, 1563779, 1563799, 1563807, 1592666, 1604901
[20:27] conjure-up -> openstack for maas -> hawk (only option) -> error
[20:27] unable to list models: error cannot list models: no api addresses
[20:29] baldpope: sounds like a failed bootstrap
[20:29] what does juju status return?
[20:29] error no api address
[20:29] yea your bootstrap failed
[20:30] i'm seeing this on the ubuntu maas host
[20:30] all nodes in ready - none currently deployed - i thought this prepped the first node?
[20:31] or can I make this maas head the juju controller?
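For the 1.7.6-to-1.9 upgrade discussed above, a rough sketch of the usual path on Ubuntu 14.04; the ppa:maas/stable archive name is an assumption from that era, and backing up the MAAS database before upgrading is strongly advised:

    # Confirm what is currently installed
    dpkg -l | grep maas
    # Assumed archive carrying the 1.9 stable series on trusty
    sudo add-apt-repository ppa:maas/stable
    sudo apt-get update
    # Pull in the newer maas packages; existing node records live in the
    # database and should survive the upgrade
    sudo apt-get dist-upgrade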
[20:32] yea so you have a state where juju thinks a controller exist but maas has nothing deployed
[20:32] i would juju kill-controller hawk
[20:32] Unable to open API: no API addresses Unable to connect to the API server, destroying through provider
[20:33] thats fine it'll just kill it manually and delete it from it's database
[20:33] then you can try conjure-up again
[20:40] baldpope: nah we have a seperate queue that handles all that logic so you can proceed until you see the screen showing the bootstrap output
[20:44] looks like it's making progress
[20:44] i see progress in the conjure window - hadn't see that before
[20:45] baldpope: :D
[20:50] stokachu, hm, seems like it's stalled again
[20:50] baldpope: the machine is still in deploying state?
[20:50] nope, deployed
[20:51] what the wait screen showing?
[20:51] oh wait
[20:51] just finihed
[20:51] wow - long time?
[20:51] it's now deploying the rest
[20:51] yea maas is depending on the hardware
[20:52] so if the systems take a minute to boot up then it'll prolong that wait
[20:52] etc
[20:52] gotcha
[20:52] * baldpope w000t-- progress
[20:52] but honestly aws takes about the same amount of time from bootstrap to deploy
[20:52] i don't mind failure as long as I learn something in the progress
[20:53] how many machines do you have available in maas?
[20:53] after the first one deployed?
[20:53] 5 total
[20:53] blade(1-5)
[20:53] ok cool
[20:53] all with 2nics?
[20:53] the first, is where juju (I believe) completed the it's deployment
[20:53] 6 technically, but 2 are plumbed
[20:53] yea the first deployed machine is your controller node
[20:54] ok
[20:54] baldpope: and your network interfaces are coming up eth0/eth1 as well?
[20:54] side note, i'm all in on openstack for the foreseeable future at work - so anything you guys need to test - i'm available
[20:54] i know we fixed that yesterday
[20:55] on blade1, it's shows as br-enp9s0f0
[20:55] as the active interface
[20:55] and then en01, enp9s0f0, enp9s0f1, ens3f1, 2, 3,
[20:55] ok neutron may fail with a 'config-changed' hook that we can fix with juju config neutron-gateway br-ext=en01 etc
[20:56] and then juju resolved neutron-gateway/0
[20:56] you could probably set that now if you want
[20:56] to avoid the hook error
[20:57] so from the maas head, run what?
[20:57] juju config neutron-gateway br-ext=
[21:02] ok, quick question before I do that - the remaining 4 are now in deployed state
[21:04] but everything is in waiting or maintenance state
[21:05] yea they are all going through processing hooks etc
[21:05] all the things needed to get the software up
[21:06] should I still attempt to change the neutron-gateway?
[21:06] yea it should queue it up and apply it
[21:06] k
[21:07] if the charm breaks because of performing that action it is a bug in the charm
[21:07] unless the interface was entered incorrectly
[21:07] and the second int name is the one on the node, correct?
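The recovery pattern suggested above — correct the charm configuration, then ask juju to retry the failed hook — looks roughly like this; the option key and value are left as placeholders because, as the next exchange shows, br-ext turned out not to be a valid option for this charm:

    # See which unit is stuck in an error state
    juju status
    # Apply the corrected charm option (placeholder key/value)
    juju config neutron-gateway <option>=<value>
    # Retry the failed config-changed hook on the gateway unit
    juju resolved neutron-gateway/0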
[21:07] though it should really let you know via status-set blocked
[21:07] baldpope: yea the one on that node that is running neutron-gateway
[21:09] ERROR unknown option "br-ext"
[21:10] juju config neutron-gateway br-ext=enp9s0f1
[21:11] and conjure-up just errored out - neutron-gateway/0: hook failed: config-changed
[21:11] sec
[21:12] baldpope: ah you need to setup bridge-mappings
[21:12] https://jujucharms.com/neutron-gateway/xenial/3#charm-config-bridge-mappings
[21:14] so like if you have a bridge setup on that node you can do
[21:14] juju config neutron-gateway bridge-mappings=mynet:br0
[21:14] stokachu, while setting up a bridge makes sense (and I'd want to get there eventually) I did not setup bridge mode (either in maas or on the switch)
[21:15] i assumed the br-enp9s0f0 was stock from the existing charm?
[21:16] does that bridge exist on the system?
[21:16] if you juju ssh neutron-gateway/0
[21:16] more info as well: https://ask.openstack.org/en/question/94206/how-to-understand-the-bridge_mappings/
[21:17] http://pastebin.ubuntu.com/23290786/
[21:17] yea so you can assign the bridge mapping to br-enp9s0f0
[21:18] baldpope: additional info https://github.com/openstack/charm-neutron-gateway#port-configuration
[21:19] you can access all those config options from conjure-up if you click on configure->advanced configuration
[21:20] silly question stokachu - i feel like I've completely skipped a ton of reading - attempting to just follow the ubuntu docs on deploying open stack - is there a primer I skipped/missed?
[21:20] baldpope: networking is complex and you'll want to be familiar with that
=== Guest28928 is now known as spammy
[21:36] Do target hosts have to have internet access during elistment?
[21:54] stokachu, not gonna lie - a lot to take in
[21:55] baldpope: yea openstack is a complex beast
[21:56] stokachu, for the sake of conversation i've got the 6 ports total, 2 on board, and 4 on the add-on card - the long-term idea being that I'd use the some combination to dedicate shared storage and some remain to dedicate to service networks
[21:57] all that is done through juju ahead of deployment, or post deploy?
[21:59] i can see where that made little sense, i'll try again... dedicate 4 ports to the shared storage, and the 2 onboard to network services
[22:00] with the appropriate lag/bridge ports in place for redundancy and throughput
[22:10] ok, so I did the following: juju config neutron-gateway bridge-mappings=br-enp9s0f0
[22:10] and am re-running conjure against the existing deployment
[22:10] we'll see
[22:11] bridge-mappings appears to be the current non-deprecated command
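Pulling the bridge-mappings discussion together, a hedged sketch of what the neutron-gateway settings for this hardware might look like; the physnet1 label, and the pairing of br-enp9s0f0 with enp9s0f1, are assumptions rather than anything confirmed in the conversation:

    # Map a provider network label to the external bridge on the gateway node
    juju config neutron-gateway bridge-mappings="physnet1:br-enp9s0f0"
    # Optionally bind a physical NIC to that bridge via the charm's
    # port configuration (see the charm README linked above)
    juju config neutron-gateway data-port="br-enp9s0f0:enp9s0f1"
    # Retry the failed hook once the configuration is in place
    juju resolved neutron-gateway/0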