[06:37] Bug #1630123 opened: OpenStack base 45 not being deployed with Juju GUI
=== frankban|afk is now known as frankban
[08:49] Hi all
[08:49] I have some questions about openstack on ubuntu
[08:49] I have put it in an askubuntu question
[08:49] http://askubuntu.com/questions/832736/openstack-with-autopilot-some-networking-clear-up
[08:49] Some of them are MAAS related
[11:26] Hi Kiko, Hi Roaksoax
[11:50] Bug #1630123 changed: OpenStack base 45 not being deployed with Juju GUI
[12:49] good morning kiko roaksoax
[12:50] fyi - ran through conjure-up at about 4pm yesterday, 13hrs or so and it's just sitting idle, no progress
[12:50] i'll dig into it more this morning
[12:53] baldpope: what do you mean sitting idle?
[12:53] at what point?
[12:53] finished inputting the maas server ip and the api key
[12:54] baldpope: are you seeing machines being deployed in maas?
[12:54] then the next screen shows which modules will be selected, i presumed the default would be sufficient, so left untouched
[12:55] of the 5 nodes, 1 is deployed (previously in the ready state), the other 4 are sitting in the idle state
[12:55] the deployed host is online, and ssh is on, but I cannot ssh into it - the ssh key I added is not accepted when I attempt to connect as ubuntu
[12:55] ssh key I added to the webui
[12:56] it's probably using the ssh key generated by juju
[12:56] you can do juju switch controller; juju ssh 0
[12:56] from the maas host?
[12:57] yea
[12:57] well wherever you ran conjure-up
[12:57] right
[12:57] sysadmin@ubuntu-ap-brk:~/.local/share/juju/ssh$ juju switch controller
[12:57] finch:admin@local/conjure-up -> finch:admin@local/controller
[12:57] sysadmin@ubuntu-ap-brk:~/.local/share/juju/ssh$ juju ssh 0
[12:57] ERROR no API addresses
[12:57] your bootstrap failed then
[12:57] what does juju models show
[12:58] cannot list models - no api addresses
[12:58] yea your bootstrap failed
[12:58] are you able to deploy a node via the maas ui?
[13:00] i believe so,
[13:00] 1sec
[13:07] yea, was able to deploy a new blade with ubuntu 16.04 lts
[13:08] and i can login with the ssh key assigned in the webui
[13:08] can you access the internet from that node?
[13:08] also what version of juju are you running
[13:09] 2.0-rc2-xenial-amd64
[13:10] apt install worked (presumably through the maas controller) but running lynx www.google.com fails
[13:10] and can you access the internet from that node? or run a `sudo apt update`
[13:10] yea
[13:10] your network isn't configured properly
[13:10] apt update worked
[13:10] that's from the maas proxy
[13:11] do you have IP forwarding enabled on the maas server?
[13:11] am I mistaken? I thought maas also acted as a squid proxy
[13:11] stokachu, shit .. i'll bet not
[13:11] yea, assuming you set up the network config on the maas server properly :)
[13:11] you also want to NAT that traffic
[13:11] so you'll need to add that rule in iptables
[13:12] i have a forward rule, but I don't have any nat rules
[13:12] sigh .. did I miss a step somewhere, thought I followed closely
[13:12] /docs/en/users/#customize-headless-mode
[13:12] err
[13:12] http://paste.ubuntu.com/23275006/
[13:12] baldpope: that's what i use for my private network
[13:13] just change that to whatever network you're using
[13:13] that's in /etc/iptables/rules.save or something?
[13:13] http://paste.ubuntu.com/23275007/
[13:13] see the pre-up line
[13:13] you can add it there
[13:13] or save it
[13:15] do all of your nodes have 2 nics?
[13:15] yea
[13:15] all eth0/eth1?
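(The two paste.ubuntu.com links above are not reproduced in the log. As a rough sketch of the kind of forwarding/NAT setup being described, assuming purely for illustration that eth0 is the MAAS server's uplink and eth1 faces a 192.168.100.0/24 MAAS-managed subnet, the interfaces stanza with the pre-up rule might look like this:)

    # Sketch of /etc/network/interfaces on the MAAS server; illustrative only,
    # not the contents of the actual pastes. Addresses and interface names are assumptions.
    auto eth1
    iface eth1 inet static
        address 192.168.100.1
        netmask 255.255.255.0
        # NAT traffic from the MAAS-managed subnet out of the uplink when eth1 comes up
        pre-up iptables -t nat -A POSTROUTING -o eth0 -s 192.168.100.0/24 -j MASQUERADE

    # IP forwarding also needs to be enabled on the MAAS server, e.g.:
    #   sudo sysctl -w net.ipv4.ip_forward=1
    # and net.ipv4.ip_forward=1 in /etc/sysctl.conf to persist across reboots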
[13:15] well, 6
[13:15] enp9s0f0 and enp9s0f1
[13:16] ok so when you get to the application list make sure to configure your neutron br-ext to be enp9s0f1
[13:16] it defaults to eth1
[13:16] the other 4 not currently plugged in
[13:17] i would plug those in because you can't select the machine you want to use for neutron
[13:18] the nics, you mean?
[13:18] yea
[13:18] well the machine that is housing neutron
[13:19] you can't specifically select that machine, juju will grab one at random
[13:37] stokachu, (or anyone else) if I've got the masq rule in place, do I also need to be running squid, or will it just forward the traffic?
[13:39] wait..
[13:40] ok, i may not have set up networking correctly, the 'internal' network I created is routable through the firewall - it is not required to go through the maas controller
[13:40] and in this case, the individual nodes do not have access out through the firewall -
[13:40] so the question I have now is - should the nodes be required to go through the maas controller, or is it ok for them to have direct access out?
[13:41] the easiest solution is to route everything through your maas server
[13:41] hm
[13:41] that's interesting
[13:41] i would think I would want to use maas for deployment, but not necessarily to act as the end-all routing uplink..
[13:42] it doesn't have to be
[13:42] that's just the easiest solution
[13:42] if that's the case, more care should be taken on the maas controller to use a bridge interface with sub-interfaces for redundancy
[13:43] so eth0 and eth1 in a bridge with br0.1 as the wan and br0.2 as the lan?
[13:43] along with any other sub-interfaces as required
[13:43] yea that will work too
[13:44] hm
[13:44] ok, not going to mess with that just yet
[13:44] I can update the firewall to allow my private segment out
[13:44] but I'm not sure how that resolves the juju deploy?
[13:46] because juju needs to resolve things like streams.canonical.com:443
[13:46] ah
[13:47] i thought that was being done from the maas controller
[13:47] my mistake
[13:55] in my 5 node environment, after the juju controller has already been deployed, am I limited to using the remaining 4 for compute, so the 1 head is lost?
[13:59] yea unfortunately
[13:59] hm
[13:59] juju 2 requires a node for the controller
[13:59] well, that by itself isn't terrible
[14:00] if you plan ahead, you can pick a host that would suffice for juju, but might not be the ideal compute/storage node
[14:00] that's what I thought I was deploying on the maas box (an older dell poweredge)
[14:00] so you can select which machine to perform the bootstrap on
[14:00] well - not exactly
[14:00] JUJU_BOOTSTRAP_TO=host.maas conjure-up -d openstack
[14:00] ?
[14:01] ah
[14:01] sorry - thought you were asking me a question
[14:01] this only works for maas and i haven't documented it yet
[14:01] stokachu, that would work perfectly, if I had a spare host to deploy to
[14:01] most people usually create a VM to house the controller
[14:01] not complaining - just trying to understand the environment
[14:01] and just register that in maas
[14:02] yea, that makes sense
[14:05] stokachu, not sure if it's progressing or not
[14:05] http://imgur.com/LalU5zS
[14:06] did it get past fetching the juju agent yet?
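(The bridge-with-sub-interfaces idea from around 13:42 is only discussed in passing. A minimal sketch of what it might look like in /etc/network/interfaces on the MAAS server, assuming the bridge-utils and vlan packages are installed and using purely hypothetical VLAN IDs and addressing:)

    # Sketch only: both NICs in one bridge, with VLAN sub-interfaces on the bridge
    # (names, VLAN IDs and addresses are assumptions, not from the log)
    auto br0
    iface br0 inet manual
        bridge_ports eth0 eth1

    # VLAN 1 on the bridge as the WAN/uplink side
    auto br0.1
    iface br0.1 inet dhcp
        vlan-raw-device br0

    # VLAN 2 on the bridge as the LAN side facing the MAAS-managed nodes
    auto br0.2
    iface br0.2 inet static
        address 192.168.100.1
        netmask 255.255.255.0
        vlan-raw-device br0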
[14:06] no, been sitting here for the last couple of minutes
[14:06] yea it still can't get out to streams.canonical.com
[14:06] and both the node and maas controller have unfiltered access out
[14:06] hm
[14:08] looking at the firewall, no blocked traffic
[14:10] deploy another node via maas and just see if you can wget from streams.canonical.com
[14:11] like wget http://streams.canonical.com/juju/tools/agent/2.0-rc2/juju-2.0-rc2-xenial-amd64.tgz
[14:14] testing - need a minute to deploy again
[14:15] thanks for taking a few minutes stokachu
[14:15] np
[14:15] i've got pages of notes on my side where I've made mistakes, things I've forgotten after cleaning/reinstalling/deploying
[14:16] will be happy to share any relevant bits once I get it cleaned up and repeatable
[14:16] yea im sure roaksoax and the docs team would want to look at that
[14:46] How does one configure a large number of servers in maas with a specific disk layout?
[14:47] I'd like to have 36 machines configured with the same partition layout
[14:47] I'd rather not have to go through 36 times and do it all
[14:54] stokachu, ok, a bit lost now... i've deployed a new node - i can perform nslookup on www.google.com, with the reply coming from the maas controller, but attempting wget fails, though traffic is not blocked
[14:55] i can ssh directly into the node using the key provided through the webui
[14:55] default route is out through the firewall
[15:30] stokachu, i appear to have a routing issue - have to work on this and report back - but will have to be later...
[15:31] baldpope: ok
[16:40] Bug # changed: 1392763, 1394792, 1459888, 1481285, 1508975, 1589640, 1593388, 1623110, 1623634, 1623878, 1625711, 1625714, 1627019, 1627038, 1627039, 1627363, 1628052, 1628213, 1628298, 1629004, 1629008, 1629011, 1629019, 1629022, 1629045, 1629142, 1629402, 1629491, 1629868, 1629896
=== frankban is now known as frankban|afk
[18:16] Bug #1630343 opened: [2.1] upgrade from 2.0 to 2.1 broken
[20:07] Bug #1629026 changed: [2.1] Images have been imported, but can't add a chassis
[20:07] Bug #1630361 opened: [2.1 ipv6] MAAS should refuse to deploy a host with bad address-family config
[21:19] Bug #1616232 changed: [2.1, 2.0] Installs should use GPT by default if volume is larger than 2TB
[21:19] Bug #1630343 changed: [2.1] upgrade from 2.0 to 2.1 broken
[21:28] Bug #1616232 opened: [2.1, 2.0] Installs should use GPT by default if volume is larger than 2TB
[21:28] Bug #1630343 opened: [2.1] upgrade from 2.0 to 2.1 broken
[21:34] Bug #1616232 changed: [2.1, 2.0] Installs should use GPT by default if volume is larger than 2TB
[21:34] Bug #1630343 changed: [2.1] upgrade from 2.0 to 2.1 broken
[22:19] Bug #1630394 opened: [2.1] Bootloaders not downloaded on initial import
[23:11] Bug #1630398 opened: [2.0] EFI system fails to PXE boot: PXE-E23, Maas server returns TFTP error for bootx64.efi
[23:13] MAAS 2.0 is not seeing all my interfaces on my server. How can I manually add them?
[23:13] clarification, my rack controller can't see all of its interfaces.
=== cyberjacob is now known as zz_cyberjacob
=== zz_cyberjacob is now known as CyberJacob
[23:56] Bug #1630398 changed: [2.0] EFI system fails to PXE boot: PXE-E23, Maas server returns TFTP error for bootx64.efi
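(The routing issue from 14:54-15:30 is left unresolved in the log. A few generic checks one might run in that situation, run from the deployed node and from the MAAS server respectively; these are assumptions about how to debug it, not commands taken from the conversation:)

    # On the deployed node: confirm the default route and test egress to the
    # same URL suggested at 14:11 without downloading the whole tarball
    ip route show default
    wget --spider http://streams.canonical.com/juju/tools/agent/2.0-rc2/juju-2.0-rc2-xenial-amd64.tgz

    # On the MAAS server (only relevant if the nodes are meant to route through it):
    sysctl net.ipv4.ip_forward
    sudo iptables -t nat -L POSTROUTING -n -v   # is a MASQUERADE rule present and matching packets?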