[02:58] <mup> Bug #1630817 changed: [ERROR] Unable to find a copy of bootppc64.bin in the SimpleStream or a previously downloaded copy. <MAAS:New> <https://launchpad.net/bugs/1630817>
[03:30] <Sujeet_> Hi Kiko
[03:30] <Sujeet_> How to handle errors in the commissioning script?
[13:04] <mup> Bug #1631358 opened: [2.1]  Incorrect logging message - showing SERVICE_STATE.ON <MAAS:New> <https://launchpad.net/bugs/1631358>
[13:13] <mup> Bug #1631358 changed: [2.1]  Incorrect logging message - showing SERVICE_STATE.ON <MAAS:New> <https://launchpad.net/bugs/1631358>
[14:04] <mup> Bug #1630745 changed: [2.1b1] Nodes faild to deploy due to 503 error in archive <cdo-qa> <cdo-qa-blocker> <MAAS:Invalid> <https://launchpad.net/bugs/1630745>
[14:22] <mup> Bug #1630745 opened: [2.1b1] Nodes faild to deploy due to 503 error in archive <cdo-qa> <cdo-qa-blocker> <MAAS:Invalid> <https://launchpad.net/bugs/1630745>
[14:39]  * baldpope sigh
[14:39] <baldpope> figured out my networking issue from inside lan - forgot to turn on nat at perimeter firewall..
[14:51] <fbarilla> hi, can't restart maas-dhcpd
[14:51] <fbarilla> root@hnode1-8:~# service maas-dhcpd restart Failed to restart maas-dhcpd.service: Unit maas-rackd.service is masked.
[14:57] <roaksoax> fbarilla: restart maas-rackd
[14:57] <roaksoax> fbarilla: that should restart maas-dhcpd
[14:57] <mup> Bug #1631403 opened: [2.1b2] Service 'ntp' is SERVICE_STATE.ON but not in the expected state of 'running', its current state is 'exited'. <oil> <MAAS:New> <https://launchpad.net/bugs/1631403>
[14:57] <fbarilla> root@hnode1-8:~# service maas-rackd restart Failed to restart maas-rackd.service: Unit maas-rackd.service is masked.
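A masked systemd unit refuses to start until it is unmasked, which would explain both failures above. A minimal sketch of checking and clearing the mask (unit names taken from the paste; masking is often deliberate, so it's worth confirming why it was set before removing it):

```shell
# "masked" appears in the Loaded: line when the unit is masked
systemctl status maas-rackd.service

# Remove the mask, then restart the service
systemctl unmask maas-rackd.service
systemctl restart maas-rackd.service
```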
[15:03] <mup> Bug #1631403 changed: [2.1b2] Service 'ntp' is SERVICE_STATE.ON but not in the expected state of 'running', its current state is 'exited'. <oil> <MAAS:New> <https://launchpad.net/bugs/1631403>
[16:15] <mup> Bug #1631420 opened: [2.1 UI] Images page "Queued for download" is confusing when selections are not saved <MAAS:New> <https://launchpad.net/bugs/1631420>
[16:15] <mup> Bug #1631421 opened: [2.1b2] failure to deploy all systems that lasted until reboot of maas container <oil> <MAAS:New> <https://launchpad.net/bugs/1631421>
[16:33] <braven_> hello
[16:34] <braven_> Is anyone on line?
[16:36] <brendand> yes
[16:36] <brendand> braven_, ^
[16:38] <braven_> I am having an issue. I keep getting an internal error
[16:38] <braven_> when trying to save an interface to DHCP
[16:42] <braven_> The cluster controller will not accept the changes
[16:56] <braven_> Code<UNKNOWN>: Unknown Error
[17:01] <brendand> braven_, was it you who was asking about the secret yesterday?
[17:06] <braven_> no
[17:07] <braven_> We have the cluster Controller connected
[17:09] <braven_> it was working. We tried to adjust the dhcp range. It would not save
[17:10] <braven_> when we try to save the changes to the interface we get "Internal server error."
[17:10] <sean5384> hi there
[17:10] <braven_> hello
[17:11] <sean5384> i hope someone can shed some light for me, i am trying to install MAAS and it says a few sentences down about the clusters tab, where the heck is it at?
[17:12] <braven_> what version of MAAS
[17:12] <sean5384> i think it is 2.0, let me check
[17:13] <sean5384> if i'm running ubuntu 16.04 LTS, it should be 2.0 right?
[17:13] <braven_> yeap.
[17:13] <sean5384> ok then, thats what i have running then.
[17:14] <sean5384> is the clusters tab no more then, or something?
[17:15] <braven_> I am using the old version because of my hardware. But I do remember reading they changed it
[17:15] <sean5384> i also can't seem to get my private network functioning correctly within ubuntu itself either for some reason, would that have something to do with it maybe?
[17:15] <braven_> They have something new called Fabric
[17:16] <sean5384> ok, i have seen that.
[17:16] <brendand> sean5384, do you have a link to the docs you are reading?
[17:16] <braven_> I have used the older version of MAAS
[17:16] <sean5384> i was trying to install via the documentation, but you can't follow that now i can see.
[17:16] <brendand> braven_, which version are you using?
[17:17] <braven_> 1.8
[17:17] <brendand> ahhh
[17:18] <brendand> on which version of ubuntu
[17:18] <braven_> http://maas.io/docs/en/installconfig-rack  for Sean
[17:18] <braven_> 14.04
[17:18] <braven_> the hardware I have is not supported by 16
[17:18] <sean5384> i just saw the documentation on top of this site.....
[17:18] <braven_> lol
[17:19] <sean5384> this makes more sense now.
[17:19] <brendand> braven_, 1.8 is not really supported. 1.9 is the version in Trusty
[17:19] <sean5384> i was getting distraught because there was no clusters tab at the top of the page anywhere.
[17:19] <sean5384> lol
[17:20] <braven_> MAAS is awesome when it works. We were able to deploy 60 nodes in about 3 hours
[17:20] <sean5384> wow
[17:20] <braven_> how do I know what version I am running?
[17:20] <sean5384> i'm trying to get just 5 going right now.
[17:20] <brendand> braven_, it should say at the bottom of the web ui
[17:21] <brendand> or you can do dpkg -l | grep maas on the maas server
[17:21] <brendand> braven_, that largely depends on your hardware/network too, we regularly deploy 40+ nodes in about 15-20 mins
[17:22] <brendand> haven't timed it though so it could be a bit longer
[17:22] <braven_> wow
[17:22] <braven_> I wonder what we configured wrong
[17:23] <brendand> it will use a lot of bandwidth, is one thing
[17:23] <braven_> 1.7.6
[17:23] <braven_> so can we upgrade without losing all the nodes?
[17:24] <brendand> braven_, you *should* be able to
[17:27] <braven_> do you know why it might give that error when I am trying to save a cluster interface
[17:29] <brendand> could be anything really. haven't used 1.7 much
[17:33] <brendand> braven_, but some logs would be helpful
[17:35] <brendand> braven_, if you're willing to upgrade that would be a good first step, then if it still happens, file a bug
[17:39] <smgoller> Hi all, if I want to control VMs on a vCenter server, what has to be on the same subnet as the rack controller, the vCenter appliance, or the individual esxi servers?
[19:39] <batkins61_> Nodes won't enlist, even though booted under maas.  Metadata service appears unreachable.  Version 1.9 on 14.04.
[19:39] <batkins61_> Where should I look for that?
[19:54] <batkins61_> Is there any help on this channel?  Is there somewhere else to ask questions?
[20:20] <baldpope> with recent update to conjure, i get error after selecting which type of openstack to deploy
[20:25] <stokachu> baldpope: which is?
[20:25] <mup> Bug # changed: 1470202, 1473254, 1553647, 1563779, 1563799, 1563807, 1592666, 1604901
[20:27] <baldpope> conjure-up -> openstack for maas -> hawk (only option) -> error
[20:27] <baldpope> unable to list models: error cannot list models: no api addresses
[20:29] <stokachu> baldpope: sounds like a failed bootstrap
[20:29] <stokachu> what does juju status return?
[20:29] <baldpope> error no api address
[20:29] <stokachu> yea your bootstrap failed
[20:30] <baldpope> i'm seeing this on the ubuntu maas host
[20:30] <baldpope> all nodes in ready - none currently deployed - i thought this prepped the first node?
[20:31] <baldpope> or can I make this maas head the juju controller?
[20:32] <stokachu> yea so you have a state where juju thinks a controller exist but maas has nothing deployed
[20:32] <stokachu> i would juju kill-controller hawk
[20:32] <baldpope>  Unable to open API: no API addresses Unable to connect to the API server, destroying through provider
[20:33] <stokachu> that's fine, it'll just kill it manually and delete it from its database
[20:33] <stokachu> then you can try conjure-up again
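The recovery sequence stokachu describes, as a rough sketch (controller name `hawk` comes from the conversation above; exact prompts vary by juju version):

```shell
# Confirm the controller is actually unreachable
juju status

# Remove the dead controller record; since MAAS shows nothing deployed,
# this mostly just cleans up juju's local state
juju kill-controller hawk

# Start the install over
conjure-up
```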
[20:40] <stokachu> baldpope: nah we have a separate queue that handles all that logic so you can proceed until you see the screen showing the bootstrap output
[20:44] <baldpope> looks like it's making progress
[20:44] <baldpope> i see progress in the conjure window - hadn't seen that before
[20:45] <stokachu> baldpope: :D
[20:50] <baldpope> stokachu, hm, seems like it's stalled again
[20:50] <stokachu> baldpope: the machine is still in deploying state?
[20:50] <baldpope> nope, deployed
[20:51] <stokachu> what's the wait screen showing?
[20:51] <baldpope> oh wait
[20:51] <baldpope> just finished
[20:51] <baldpope> wow - long time?
[20:51] <baldpope> it's now deploying the rest
[20:51] <stokachu> yea with maas it depends on the hardware
[20:52] <stokachu> so if the systems take a minute to boot up then it'll prolong that wait
[20:52] <stokachu> etc
[20:52] <baldpope> gotcha
[20:52]  * baldpope w000t-- progress
[20:52] <stokachu> but honestly aws takes about the same amount of time from bootstrap to deploy
[20:52] <baldpope> i don't mind failure as long as I learn something in the process
[20:53] <stokachu> how many machines do you have available in maas?
[20:53] <stokachu> after the first one deployed?
[20:53] <baldpope> 5 total
[20:53] <baldpope> blade(1-5)
[20:53] <stokachu> ok cool
[20:53] <stokachu> all with 2nics?
[20:53] <baldpope> the first is where juju (I believe) completed its deployment
[20:53] <baldpope> 6 technically, but 2 are plumbed
[20:53] <stokachu> yea the first deployed machine is your controller node
[20:54] <stokachu> ok
[20:54] <stokachu> baldpope: and your network interfaces are coming up eth0/eth1 as well?
[20:54] <baldpope> side note, i'm all in on openstack for the foreseeable future at work - so anything you guys need to test - i'm available
[20:54] <stokachu> i know we fixed that yesterday
[20:55] <baldpope> on blade1, it shows as br-enp9s0f0
[20:55] <baldpope> as the active interface
[20:55] <baldpope> and then en01, enp9s0f0, enp9s0f1, ens3f1, 2, 3,
[20:55] <stokachu> ok neutron may fail with a 'config-changed' hook that we can fix with juju config neutron-gateway br-ext=en01 etc
[20:56] <stokachu> and then juju resolved neutron-gateway/0
[20:56] <stokachu> you could probably set that now if you want
[20:56] <stokachu> to avoid the hook error
[20:57] <baldpope> so from the maas head, run what?
[20:57] <stokachu> juju config neutron-gateway br-ext=<second interface name>
[21:02] <baldpope> ok, quick question before I do that - the remaining 4 are now in deployed state
[21:04] <baldpope> but everything is in waiting or maintenance state
[21:05] <stokachu> yea they are all going through processing hooks etc
[21:05] <stokachu> all the things needed to get the software up
[21:06] <baldpope> should I still attempt to change the neutron-gateway?
[21:06] <stokachu> yea it should queue it up and apply it
[21:06] <baldpope> k
[21:07] <stokachu> if the charm breaks because of performing that action it is a bug in the charm
[21:07] <stokachu> unless the interface was entered incorrectly
[21:07] <baldpope> and the second int name is the one on the node, correct?
[21:07] <stokachu> though it should really let you know via status-set blocked
[21:07] <stokachu> baldpope: yea the one on that node that is running neutron-gateway
[21:09] <baldpope> ERROR unknown option "br-ext"
[21:10] <baldpope> juju config neutron-gateway br-ext=enp9s0f1
[21:11] <baldpope> and conjure-up just errored out - neutron-gateway/0: hook failed: config-changed
[21:11] <stokachu> sec
[21:12] <stokachu> baldpope: ah you need to setup bridge-mappings
[21:12] <stokachu> https://jujucharms.com/neutron-gateway/xenial/3#charm-config-bridge-mappings
[21:14] <stokachu> so like if you have a bridge setup on that node you can do
[21:14] <stokachu> juju config neutron-gateway bridge-mappings=mynet:br0
[21:14] <baldpope> stokachu, while setting up a bridge makes sense (and I'd want to get there eventually) I did not setup bridge mode (either in maas or on the switch)
[21:15] <baldpope> i assumed the br-enp9s0f0 was stock from the existing charm?
[21:16] <stokachu> does that bridge exist on the system?
[21:16] <stokachu> if you juju ssh neutron-gateway/0
[21:16] <stokachu> more info as well: https://ask.openstack.org/en/question/94206/how-to-understand-the-bridge_mappings/
[21:17] <baldpope> http://pastebin.ubuntu.com/23290786/
[21:17] <stokachu> yea so you can assign the bridge mapping to br-enp9s0f0
[21:18] <stokachu> baldpope: additional info https://github.com/openstack/charm-neutron-gateway#port-configuration
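Putting stokachu's pointers together, the fix might look like the sketch below. `bridge-mappings` takes `provider-label:bridge` pairs; the `physnet1` label is illustrative (any consistent name works), while `br-enp9s0f0` is the bridge from baldpope's pastebin:

```shell
# Map a provider network label to the OVS bridge on the gateway node
juju config neutron-gateway bridge-mappings="physnet1:br-enp9s0f0"

# Retry the failed config-changed hook on the unit
juju resolved neutron-gateway/0
```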
[21:19] <stokachu> you can access all those config options from conjure-up if you click on configure->advanced configuration
[21:20] <baldpope> silly question stokachu - i feel like I've completely skipped a ton of reading - attempting to just follow the ubuntu docs on deploying open stack - is there a primer I skipped/missed?
[21:20] <stokachu> baldpope: networking is complex and you'll want to be familiar with that
[21:36] <batkins61> Do target hosts have to have internet access during enlistment?
[21:54] <baldpope> stokachu, not gonna lie - a lot to take in
[21:55] <stokachu> baldpope: yea openstack is a complex beast
[21:56] <baldpope> stokachu, for the sake of conversation i've got the 6 ports total, 2 on board, and 4 on the add-on card - the long-term idea being that I'd use some combination dedicated to shared storage and leave some dedicated to service networks
[21:57] <baldpope> all that is done through juju ahead of deployment, or post deploy?
[21:59] <baldpope> i can see where that made little sense, i'll try again... dedicate 4 ports to the shared storage, and the 2 onboard to network services
[22:00] <baldpope> with the appropriate lag/bridge ports in place for redundancy and throughput
[22:10] <baldpope> ok, so I did the following: juju config neutron-gateway bridge-mappings=br-enp9s0f0
[22:10] <baldpope> and am re-running conjure against the existing deployment
[22:10] <baldpope> we'll see
[22:11] <baldpope> bridge-mappings appears to be the current non-deprecated command