=== BlackDex_ is now known as BlackDex
=== frankban|afk is now known as frankban
[08:30] Good morning Juju world!
=== jamespag` is now known as jamespage
=== petevg_afk is now known as petevg
[17:08] hi everyone, do we have any Juju people here?
[17:12] Teranet: how's it going?
[17:14] good, except my Juju is not doing what it should do
[17:15] Teranet: what does `juju --version` return?
[17:15] juju bootstrap localhost fails
[17:15] 2.0.2-xenial-amd64
[17:15] Error is: ERROR failed to bootstrap model: waited for 20m0s without being able to connect: ssh: connect to host
[17:15] Teranet: ok, have you verified that lxd is configured correctly on your machine?
[17:16] it gets an IP assigned, but when I look at the container it doesn't get the IP
[17:16] Teranet: can you launch a lxd instance manually?
[17:16] Teranet: `lxc launch ubuntu:16.04 ubuntu-test`
[17:16] LXD is running correctly and the bridge is in place
[17:16] ok, let me try
[17:17] sysadmin@sf2-maas00:~$ lxc launch ubuntu:16.04 ubuntu-test
[17:17] Creating ubuntu-test
[17:17] Starting ubuntu-test
[17:17] looks like it just did
[17:18] Teranet: ok, can you try this command for bootstrapping lxd locally: `juju bootstrap lxd lxd-dev`
[17:18] Teranet: you can remove that test container with `lxc delete ubuntu-test --force`
[17:20] ok, will run it, give it a sec
[17:20] ok, same result:
[17:20] sysadmin@sf2-maas00:~$ juju bootstrap lxd lxd-dev
[17:20] Creating Juju controller "lxd-dev" on lxd/localhost
[17:20] Looking for packaged Juju agent version 2.0.2 for amd64
[17:20] To configure your system to better support LXD containers, please see: https://github.com/lxc/lxd/blob/master/doc/production-setup.md
[17:20] Launching controller instance(s) on lxd/localhost...
[17:20] - juju-141282-0 (arch=amd64)
[17:20] Fetching Juju GUI 2.2.5
[17:20] Waiting for address
[17:20] Attempting to connect to 10.5.100.53:22
[17:20] it's like it's not assigning the IP to the VM
[17:21] 12: vethNL6O8E@if11: mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
[17:21] link/ether fe:88:68:61:5d:72 brd ff:ff:ff:ff:ff:ff link-netnsid 0
[17:21] inet6 fe80::fc88:68ff:fe61:5d72/64 scope link
[17:21] valid_lft forever preferred_lft forever
[17:21] 14: vethMBJXX9@if13: mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
[17:21] link/ether fe:f1:f5:1e:81:dc brd ff:ff:ff:ff:ff:ff link-netnsid 1
[17:21] inet6 fe80::fcf1:f5ff:fe1e:81dc/64 scope link
[17:21] valid_lft forever preferred_lft forever
[17:21] I do see IPv6 but no IPv4 IPs assigned
[17:21] Teranet: did your ubuntu-test container get an IP?
[17:22] Teranet: it sounds like you might need to reconfigure your lxd-bridge
[17:22] `lxc list | grep ubuntu-test`
[17:23] nope
[17:23] ahh, that's the issue
[17:23] hold on
[17:24] sysadmin@sf2-maas00:/etc/dhcp$ lxc list | grep ubuntu-test
[17:24] | ubuntu-test | RUNNING | 10.5.100.68 (eth0) | | PERSISTENT | 0 |
[17:24] ahh, ok
[17:24] the ubuntu test, yes
[17:24] stupid eth0
[17:24] that has to be em1, grrr
[17:25] my server runs em1 and em2 as interfaces, not eth0
[17:25] Teranet: are you using lxdbr0 for your lxd bridge?
[17:25] let me verify
[17:25] 4: lxdbr0:
[17:26] Teranet: `cat /etc/default/lxd-bridge`
[17:27] that should produce something similar to -> http://paste.ubuntu.com/23619976/
[17:27] for your IP space
[17:27] there are some critical configs in that file for the lxd-bridge
[17:28] it did: http://paste.ubuntu.com/23619977/
[17:28] IPs have been assigned, but SSH times out :-(
[17:28] oooh
[17:28] Teranet: it's just being slow, I think
[17:29] Teranet: is this running on a laptop with a spinning disk?
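[The pastebin links above will eventually expire. For reference, a `/etc/default/lxd-bridge` for the default NATed setup on LXD 2.0 / Ubuntu 16.04 looks roughly like this; the addresses and ranges below are illustrative, not taken from the pastes:]

```shell
# /etc/default/lxd-bridge (LXD 2.0, managed by the lxd-bridge service)
USE_LXD_BRIDGE="true"
LXD_BRIDGE="lxdbr0"
LXD_CONFILE=""
LXD_DOMAIN="lxd"
## IPv4 address (e.g. 10.0.8.1)
LXD_IPV4_ADDR="10.0.8.1"
LXD_IPV4_NETMASK="255.255.255.0"
## IPv4 network (e.g. 10.0.8.0/24)
LXD_IPV4_NETWORK="10.0.8.0/24"
## DHCP range handed out to containers by lxd-bridge's dnsmasq
LXD_IPV4_DHCP_RANGE="10.0.8.2,10.0.8.254"
LXD_IPV4_DHCP_MAX="250"
## NAT container traffic out the host's uplink
LXD_IPV4_NAT="true"
```

[After editing, `sudo service lxd-bridge restart` (or `sudo dpkg-reconfigure -p medium lxd`, which rewrites this file interactively) applies the change.]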
[17:29] nope
[17:29] runs on a big Dell server with SSDs
[17:29] well, ok then
[17:29] ha
[17:30] Teranet: has your bootstrap finished?
[17:30] nope, it's hanging
[17:30] http://paste.ubuntu.com/23619986/
[17:30] Teranet: if/when you bootstrap again, it might be helpful to pass the --debug flag (my bad for not getting you that with the command initially)
[17:30] like this
[17:31] is ok, let me redo it
[17:32] Teranet: I see the problem
[17:33] your gateway
[17:33] why so?
[17:33] ## IPv4 network (e.g. 10.0.8.0/24)
[17:33] LXD_IPV4_NETWORK="10.5.100.50/24"
[17:34] sysadmin@sf2-maas00:/etc/dhcp$ ip r
[17:34] default via 10.5.100.1 dev em1 onlink
[17:34] 10.5.100.0/24 dev em1 proto kernel scope link src 10.5.100.250
[17:34] 10.5.100.0/24 dev lxdbr0 proto kernel scope link src 10.5.100.50
[17:34] there should be no issue
[17:35] Teranet: you're not NATing IPv4
[17:35] Teranet: `sudo dpkg-reconfigure -p medium lxd`
[17:36] correct, why should I? it's the same network
[17:36] you'd need something to bridge those two interfaces.
[17:36] Teranet: because the container will not know how to talk to the internet
[17:36] those are not the same network (unless there is other config). Those are 2 networks that happen to use the same address space.
[17:37] correct, on purpose
[17:37] jrwren: nice catch
[17:37] because I don't want LXD to use a different network
[17:37] Teranet: then you need to create the bridge on the interface
[17:38] which is
[17:38] Teranet: in that case, you'll need to create your own bridge, add em1 to it, and tell lxd to use it instead of lxdbr0
[17:38] 4: lxdbr0: mtu 1500 qdisc noqueue state UP group default qlen 1000
[17:38] link/ether fe:55:75:16:41:9a brd ff:ff:ff:ff:ff:ff
[17:38] inet 10.5.100.50/24 scope global lxdbr0
[17:38] valid_lft forever preferred_lft forever
[17:38] inet6 fe80::84ba:caff:feba:ff95/64 scope link
[17:38] valid_lft forever preferred_lft forever
[17:38] Teranet: http://jrwren.wrenfam.com/blog/2015/11/10/converting-eth0-to-br0-and-getting-all-your-lxc-or-lxd-onto-your-lan/
[17:40] http://paste.ubuntu.com/23620012/
[17:40] take a look
[17:42] Teranet: em1 needs to be in 'manual' mode
[17:42] ok, let me fix that
[17:43] done, now reboot again?
[17:46] Teranet: did you add lxdbr0 to em1 similar to jrwren's blog post?
[17:46] i never reboot.
[17:47] lxd changes on networks won't go into effect without a hard reboot
[17:47] ifup containerbr should be enough
[17:47] That has not been my experience.
[17:47] that's a fact
[17:47] i've no wish to argue.
[17:48] Teranet: you can use brctl and ifup/ifdown or ifconfig to accomplish these tasks w/o restarting
[17:49] bdx: not under newer ubuntu setups, just fyi
[17:49] Teranet: really?
[17:50] yes, it's an open issue with kernel 4 since they pushed it out, and the fix is in the works
[17:50] Teranet: `sudo ifdown em1 && sudo ifup em1` doesn't do the trick?
[17:50] some stupid register is not properly set, which very often makes the interface restart hang, or just emulates the restart without actually doing a thing
[17:50] crazy
[17:51] terrible! What kernel version? I want to be sure to avoid it.
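[jrwren's post above covers eth0; a minimal ifupdown sketch of the same idea for em1 follows. The bridge name `br0` is illustrative, and whether the bridge uses DHCP or a static address depends on your LAN:]

```shell
# /etc/network/interfaces fragment (Ubuntu 16.04 ifupdown, bridge-utils installed)
# em1 carries no address of its own; the bridge does, per the 'manual' mode note above
auto em1
iface em1 inet manual

auto br0
iface br0 inet dhcp        # or 'static' with your LAN addressing
    bridge_ports em1
    bridge_stp off
    bridge_fd 0
```

[Bring it up with `sudo ifup br0`; containers attached to br0 then sit directly on the LAN.]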
[17:51] yes, it's a known bug in the 4.4 series
[17:51] it's still in the latest 4.4.0-35
[17:51] which I use right now
[17:52] ok, box is back up
[17:52] stupid Dell init takes way too long :-(
[17:53] but wait till you work on Supermicro boxes :-(
[17:53] 10min reboot or longer even
[17:54] somehow I'm on 4.4.0-45-generic, Wed Oct 19 14:12:37 UTC 2016 x86_64. Are you on x86_64?
[17:54] Linux sf2-maas00 4.4.0-53-generic #74-Ubuntu SMP Fri Dec 2 15:59:10 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
[17:55] oh, -35 vs. -53. a bit of dyscalculia
[17:55] LOL, they must have patched the box overnight
[17:56] sorry, we have over 15000 servers here
[17:56] often I get lost as to what's where
[17:58] still looks to hang at the same spot :-(
[17:59] Teranet: so now you need to unconfigure the lxd bridge from lxd's perspective
=== frankban is now known as frankban|afk
[17:59] Teranet: `sudo dpkg-reconfigure -p medium lxd`
[17:59] I think I'd like to run this without a bridge at all, if possible
[18:01] a bridge is the only way to not have LXD use a different network. You can use the defaults with routing and NAT, or you can use a bridge, but I don't know of a way to use networking with LXD without one of those two things.
[18:02] i think I might then have to yell at ubuntu again
[18:02] LOL
[18:03] Teranet: all it takes is -> http://paste.ubuntu.com/23620112/
[18:04] ok, let me try with NAT
[18:04] might solve it
[18:04] Teranet: ^ is direct bridging
[18:04] even though I don't like NAT in a corp network
[18:04] Teranet: ^ isn't NAT
[18:05] I have to, because we have around 300 networks, and within those I must be able to reach each individual LXD when needed
[18:05] so if the network is not proper, we have a bigger issue
[18:06] bdx: if you bridge em1 onto lxdbr0 then the lxd-net service will provide DHCP service out that em1 interface. If you are plugged onto a LAN, you might make your neighbors angry.
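[For the direct-bridging route that paste 23620112 shows (the paste will expire), the usual LXD 2.0 pattern is to disable lxd-bridge's own managed bridge and point LXD at your hand-built bridge instead; a sketch, assuming the `br0` from the ifupdown setup earlier:]

```shell
# /etc/default/lxd-bridge -- direct bridging: no lxd-managed bridge, no lxd DHCP/NAT;
# containers get addresses from the LAN's own DHCP server through br0
USE_LXD_BRIDGE="false"
LXD_BRIDGE="br0"
```

[Then point the default profile's NIC at the bridge: `lxc profile device set default eth0 parent br0`. This is what makes it bridging, not NAT: the containers appear as first-class hosts on the LAN.]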
[18:07] we use a dedicated management LAN and IPAM; I reserved those blocks
[18:07] so they won't collide
[18:07] MAAS DHCP is 180-199 right now
[18:08] and LXD only 51-69
[18:08] so they won't collide in this test setup
[18:08] jrwren: not if lxd's networking is disabled
[18:09] jrwren: http://paste.ubuntu.com/23620112/ works for me when I have external dhcp and I want to extend that to my containers on my host
[18:09] which is what I feel like Teranet is trying to do
[18:09] Teranet: is that correct?
[18:09] bdx: ah, nice.
[18:10] let me check
[18:12] I just adjusted to your settings, bdx; give me a sec
[18:13] FYI my MAAS cluster setup uses em1 on all boxes as management, and em2 is the public trunk with 30 VLANs later on, which is the public-facing network range, so to speak
[18:19] Teranet: I must warn you: if this is a node in your MAAS, you are best off bootstrapping Juju to your maas, then letting maas configure the networks, and deploying containers on top of them with juju using the maas provider
[18:19] how would that be different?
[18:19] shouldn't that be the same procedure?
[18:19] Teranet: trying to manually mangle MAAS-provisioned interfaces will only bring you grief .... does this make sense?
[18:20] Teranet: MAAS has built-in container networking
[18:20] Teranet: If you bootstrap your MAAS, you can then `juju deploy ubuntu --to lxd:`
[18:20] I have MAAS running and this is the cluster node on which I deploy
[18:21] Teranet: or even better, `juju deploy ubuntu --to lxd: --constraints "spaces=,`
[18:22] bdx ??? now you lost me there
[18:22] Teranet: sf2-maas00
[18:23] as far as I recall, no matter if MAAS or not, you deploy the first juju bootstrap and all on the cluster master node
[18:23] is sf2-maas00 a node in your maas?
[18:23] sf2-maas00 is the master of the cluster
[18:23] no, it's the regional controller of the MAAS
[18:23] ahhh, so it's your combined region/rack controller, ok
[18:23] Teranet: have you got any machines checked into your maas?
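[The spaces constraint quoted above was cut off in the log; once Juju is bootstrapped against MAAS, the placement directives look like this in full. The space name `mgmt` is a made-up example here; use a space actually defined in your MAAS:]

```shell
# Deploy a unit inside a LXD container on a freshly allocated MAAS machine
juju deploy ubuntu --to lxd:

# Same, but constrain the host machine to a MAAS network space
juju deploy ubuntu --to lxd: --constraints "spaces=mgmt"
```

[With the maas provider, Juju asks MAAS for a machine matching the constraints and wires the container's bridge onto the right subnet itself, which is why no manual lxd-bridge work is needed in this mode.]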
[18:24] I have 5 nodes ready
[18:24] all same specs
[18:24] Supermicro, 24 cores, 32GB RAM :-( and 500GB SSD for starters
[18:24] Teranet: ok, do you have network spaces/subnets created, and associated with your node interfaces?
[18:24] in your maas
[18:25] all are in the em1 management network 10.5.100.0/24
[18:25] even their IPMI is
[18:26] the idea is to have OpenStack at the end doing all the work
[18:26] Teranet: have you created network spaces/subnets in the context of MAAS and configured your nodes' interfaces to be attached to the networks?
[18:26] Teranet: I've been deploying OpenStack on MAAS for a few years now
[18:27] could you skip juju to get OpenStack up?
[18:27] Teranet: so without any networks configured in maas other than the mgmt net, containers deploy by default to the mgmt network
[18:27] yes, that's what they should do for now
[18:28] Teranet: do you have juju bootstrapped to your maas?
[18:28] ... not 100% sure if I did or not
[18:28] how can I check?
[18:28] Teranet: you would have to `juju bootstrap maas my-maas`
[18:29] https://jujucharms.com/docs/2.0/clouds-maas
[18:29] I just had juju installed and tried to get this piece working
[18:29] so I think I do not have it tied to the MAAS yet
[18:29] Teranet: so you're missing the juju <-> MAAS communication link
[18:30] Teranet: once you get juju bootstrapped to your maas, deploying containers and openstack itself becomes very easy
[18:30] right, but that brings me back to the juju bootstrap failing right from the get-go
[18:30] Teranet: work on bootstrapping your maas
[18:30] Teranet: bootstrapping your maas has nothing to do with containers or the lxd bridge
[18:31] you just need to do this: https://jujucharms.com/docs/2.0/clouds-maas
[18:31] once you have your maas bootstrapped, you can deploy containers to your nodes, and the networking bit gets handled by juju/maas; you don't have to do any of it manually, it's quite nice
[18:33] Teranet: once you have your maas bootstrapped you can just `juju deploy ubuntu --to lxd:` and your container should be reachable from your maas management network
[18:33] ok, now I got this: maas-prod 0 maas Metal As A Service
[18:34] so I added the stuff
[18:34] so now
[18:35] `juju bootstrap maas-prod maas-prod-controller`
[18:36] ok, question: so maas-prod I get, but maas-prod-controller is then sf2-maas00?
[18:36] 'maas-prod-controller' will be the name of your juju controller
[18:36] oh crap, how can I check that again?
[18:37] the juju controller will be selected from your available machines in your maas
[18:37] and then will be named `maas-prod-controller`
[18:37] your juju controller will be created upon bootstrap
[18:38] http://paste.ubuntu.com/23620258/ ???
[18:39] ahhh
[18:39] one last step
[18:40] juju add-credential maas-prod
[18:40] ok, now it asks for a credential name??
[18:41] maas-admin
[18:41] (arbitrary - I think)
[18:41] so the admin user of maas?
[18:41] ehh ... yes
[18:42] that might be best for consistency .... in case you end up with multiple maas users
[18:44] ok .... building now
[18:45] ooh, maas is bootstrapping now?
[18:45] so far:
[18:45] sysadmin@sf2-maas00:~$ juju bootstrap maas-prod maas-prod-controller
[18:45] Creating Juju controller "maas-prod-controller" on maas-prod
[18:45] Looking for packaged Juju agent version 2.0.2 for amd64
[18:45] Launching controller instance(s) on maas-prod...
[18:45] - crcrbk (arch=amd64 mem=32G cores=24)
[18:45] excellent
[18:45] Fetching Juju GUI 2.2.5
[18:46] still fetching
[18:47] Teranet: can you see that one of your nodes has started to deploy in maas?
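[Condensing the MAAS hookup above into one sequence (Juju 2.0). The endpoint URL below is a placeholder for your region controller's address, and the API key is taken from your MAAS user's account page:]

```shell
# Define the MAAS region as a Juju cloud
cat > clouds.yaml <<'EOF'
clouds:
  maas-prod:
    type: maas
    auth-types: [oauth1]
    endpoint: http://10.5.100.250/MAAS
EOF
juju add-cloud maas-prod clouds.yaml

# Attach a credential: Juju prompts for a name (e.g. maas-admin) and the MAAS API key
juju add-credential maas-prod

# Bootstrap a controller onto one of the Ready nodes, then verify
juju bootstrap maas-prod maas-prod-controller
juju controllers
juju status
```

[The node Juju picks for the controller comes out of the pool of Ready machines in MAAS, as discussed above; `sf2-maas00` itself, being the region controller, is not a candidate.]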
[18:47] let me check
[18:48] yes, 1 node turned on and is deploying now
[18:48] ok, somehow I will need to write this down later on LOL
[18:49] because our ubuntu docu is not working at all again :-(
[18:49] go figure LOL
[18:50] Teranet: I compiled a ~200 page doc for my last company that details deploying openstack on maas with juju
[18:51] let me see if I can dig it up for you :-)
[18:51] this I have to put back onto the ubuntu.com manuals LOL
[18:52] Teranet: you can get a default vanilla deploy very easily just using the juju/maas docs I think, but you will want to start creating your own docs for this type of project for sure
[18:52] think wrong :-) that's why I am here, because someone F'd up the docu
[18:52] LOL
[18:53] Teranet: can you point out to me where the doc is failing you?
[18:54] so when I did the apt install of juju after MAAS
[18:54] once that's in, it directs you to a whole bunch of BS
[18:55] it should direct you to OS setup with juju, just like you gave me the stripped-down version of commands
[18:55] Ahh, yes, installing maas has a few extra gotchas that need to be addressed ... did you reference the MAAS 2.0 docs https://maas.ubuntu.com/docs/install.html?
[18:57] correct, that one is out of date as well
[18:57] so we have it on the list to rewrite it
[18:57] because with 16.04 it's simpler now
[18:57] ahh, ok
[18:58] good, my workshop for this is in 3 weeks LOL
[18:58] not today
[19:02] gosh, still building, wtf :-)
[19:05] ok, crap, now what: ERROR failed to bootstrap model: bootstrap instance started but did not change to Deployed state: instance "crcrbk" is started but not deployed
[19:05] debug again?
[19:08] Teranet: are the nodes all set up to boot from LAN first and disk second?
[19:09] Teranet: when I've had that, I've not had the boot order in the BIOS set up right, and so it couldn't deliver the image, reboot, and get the node on disk going
[19:11] yes, network first, then USB if attached, and then RAID SSD
[19:11] and IPMI controlled
[19:11] Teranet: that's your issue
[19:11] Teranet: you have a preconfigured hardware raid?
[19:12] yes
[19:12] RAID is set and the boxes are all in Ready status
[19:12] Teranet: can you configure your raid via maas please
[19:12] it already is
[19:13] Teranet: oh
[19:13] rick_h: what is the status of raid support in maas?
[19:14] Teranet: I would advise you to get this setup in the most default/vanilla way initially
[19:14] vgroot-lvroot
[19:14] 479.0 GB
[19:14] ext4
[19:14] ok
[19:15] Teranet: I ran into a few issues when setting up raid via maas; it had to do with how I was creating vgs and mounting filesystems
[19:15] Teranet: can you make the raid a secondary goal
[19:15] and kill it for now, for the sake of just getting up and running with minimal issues
[19:16] so should I blow the whole MAAS box away and re-add it, or what?
[19:17] Teranet: yea, if you could just reprovision the nodes to not use raid pls
[19:17] ?? it's a HARDWARE raid, just FYI
[19:17] so the box doesn't see it
[19:18] Teranet: yea, that's also an issue
[19:18] Teranet: if it's a hardware raid, then you didn't configure it via maas
[19:18] Teranet: get rid of the raids, let maas manage your disks
[19:19] Teranet: you can use maas to create a raid of those disks for your /
[19:19] I think you don't get it
[19:19] later, after you get everything standing vanilla
[19:19] I won't change it, because that's the default here
[19:19] RAID 1 is a must, and I won't change it
[19:20] Teranet: maas expects to manage your hardware
[19:20] Teranet: if you try to interfere, you will experience difficulties
[19:21] Teranet: you can still have those disks in raid1
[19:21] Teranet: you would just need to do it via maas, not your hba controller
[19:22] and I won't let maas software control my hardware raid
[19:23] Teranet: maas doesn't control your raid
[19:23] Teranet: maas just configures it upon deployment
[19:24] MAAS doesn't see it as a raid, and that's how it has to be
[19:26] Teranet: Ok, I had the same thing going on .... I ended up switching my raid controllers into HBA mode, bc it was just easier to let maas do it, but if you insist on using hardware raid, you might want to run the specs of your setup by the maas team so they can verify MAAS will work as intended on your hardware configuration
[19:27] they are, and I verified them myself
[19:27] so that ain't the issue
[19:27] Teranet: conceptually, what you are thinking should work; I'm just trying to eliminate extra things that might cause issues while getting you up and running initially
[19:28] Teranet: one thing you can do to troubleshoot is to make sure the nodes deploy via maas, w/o juju
[19:28] they do, as I said, all 5 nodes are in Ready mode
[19:28] e.g. just deploy the node from the maas gui, and make sure they deploy successfully
[19:29] so they are deployed via MAAS alone, no problem, with IPMI
[19:29] if that checks out, then juju will be able to deploy them
[19:29] ok, then do you release them before bootstrapping with juju?
[19:31] Teranet: if they deploy successfully via MAAS standalone, then juju bootstrap should work for sure .... so strange it's not working for you
[19:31] no offense here bdx, but this is not how it should work
[19:31] Teranet: how so?
[19:31] I am certain some code has changed in juju again and now the BS doesn't work again
[19:32] I saw some guys pushed code changes in without approval
[19:32] worst comes to worst, I need to roll back the code and ban those guys from doing changes to our code again
[19:38] kjackal, kwmonroe, petevg: New version of BT has been released, with matrix support, xunit reporting, and a few other fixes.
[19:38] cory_fu: sweet!
[19:38] awesome, thanks cory_fu
[19:39] kjackal: Thank tvansteenburgh. He merged and released it. :)
[19:39] then ... thanks tvansteenburgh!
[19:39] * tvansteenburgh waves
=== mskalka is now known as mskalka_
=== mskalka_ is now known as mskalka
[20:37] tvansteenburgh: Man, you are on *top* of the PRs today. :)
[20:37] i try cory_fu
[20:37] you do get special treatment though
[20:38] :)