[02:20] <andrew-ii> I have an LXD container that seems to be live, but juju shows it as pending.
[02:21] <andrew-ii> I'm a bit at a loss for why. The logs don't seem to show anything wrong, I think.
[02:22] <andrew-ii> `juju ssh 0` followed by `sudo lxc list` shows it has IPv4 addresses, but none of them show up in `juju status`
[02:24] <andrew-ii> maas shows the new container's two IP addresses, matching the machine's lxc output, but juju doesn't seem to be aware of them
[02:25] <anastasiamac> andrew-ii: does `juju status --format=yaml` show error?
[02:26] <andrew-ii> I don't think so. Just 'juju-status: current: pending' and 'machine-status: current: running'
[02:28] <andrew-ii> I feel like maybe it's related to maas rack HA, since a few days ago I added a second rack controller (but didn't work on it until today)
[02:29] <andrew-ii> This is with a juju 2.1.2 controller (fresh)
[02:31] <andrew-ii> Oh and maas 2.1.3
[02:32] <anastasiamac> andrew-ii: m not sure what's going on.. and there is nothing in Juju or MAAS logs?
[02:32] <andrew-ii> Not that I saw... I'll check again, though.
[02:33] <anastasiamac> andrew-ii: and u r not using kvm on maas?
[02:33] <andrew-ii> No, should only be lxd, I think
[02:36] <anastasiamac> andrew-ii: file a bug, include ur bootstrap/deploy commands, logs and --format=yaml status output :) any other info like MAAS setup, network topology, etc as u see fit
[02:36] <andrew-ii> Alrighty. I was assuming it was something screwy with how I set it up
[02:37] <andrew-ii> Should I try to rebuild the juju controller from an earlier version?
[02:38] <anastasiamac> if u can confirm if it works with earlier version would b awesome \o/ u could also try later Juju version - 2.2: alpha1 or daily snap...
[02:38] <anastasiamac> it mayb a problem with just 2.1.x
[02:38] <andrew-ii> Alrighty. Lemme gather the logs and such
[02:44] <andrew-ii> Err. Dumb questions asked poorly: how do I use root with `juju scp -- -r root@0:/var/log/juju ./machine-0-juju-logs`?
[04:43] <kwmonroe> andrew-ii: juju run --unit <foo>/0 'cp -a /var/log/juju /home/ubuntu/ridiculous && chown -R ubuntu /home/ubuntu/ridiculous'
[04:43] <kwmonroe> andrew-ii: juju scp -- -r cwr/0:ridiculous /tmp
[04:43] <kwmonroe> it's not pretty andrew-ii, and i can't believe juju scp doesn't work in root context, but it is what it is.
[04:46] <kwmonroe> oh, i said "cwr" in that scp command.. i meant "<foo>", as in, whatever unit you want to scp from.  i just happen to test this in an env with cwr/0 deployed.
[05:32] <ybaumy> i just learned about the existence of foreman. does anyone have experience with which is better: juju or foreman?
[07:34] <zeestrat> ybaumy: I'd compare it more to MAAS. Comes from Red Hat and has been around for a while. Works well with Puppet
[08:04] <kjackal> Good morning Juju world
[09:34] <BlackDex> Hello there. Is it possible to have an bootstrap node in LXD for usage with MAAS?
[09:35] <BlackDex> so i have a maas node, and i create an LXD container on that same node to be the bootstrap node?
[09:40] <cnf> BlackDex: i used kvm
[09:41] <BlackDex> i do that also
[09:41] <BlackDex> but i wondered if i can skip the v in kvm ;)
[09:41] <BlackDex> because of performance
[09:44] <BlackDex> maybe i can do it by first creating a LXD container, and adding that to MAAS
[09:45] <BlackDex> giving it the correct tags and tell juju to use that
[09:47] <cnf> i don't think you can add lxd to maas?
[09:53] <BlackDex> yea you can :)
[09:53] <BlackDex> MAAS Supports LXD :)
[10:24] <andrew-ii> kwmonroe: thanks! The command makes sense and works fine
[11:35] <stub> What hook gets called if I 'juju attach' a resource to an application?
[11:36] <stub> I'm guessing config-changed ?
[11:38] <stub> nope, upgrade-charm according to docs.
[12:06] <marcoceppi> stub: correct, upgrade-charm.
[12:06] <tvansteenburgh> jamespage: where is the ganglia charm repo?
[12:07] <tvansteenburgh> the charm points to https://code.launchpad.net/~charmers/charms/trusty/ganglia/trunk but that hasn't been touched in a long time
[12:56] <jamespage> tvansteenburgh: https://github.com/ganglia-charms
[14:07] <andrew-ii> Another DQAP: Can I deploy onto a controller?
[14:08] <andrew-ii> Mostly because `juju deploy openvpn` is just so convenient...
[14:09] <andrew-ii> Granted, I suspect it adds a ton of crazy to the model, but I've seen references to people finagling odd setups and figured I'd ask.
[14:12] <cnf> morning
[14:13] <cnf> balloons: poke?
[14:13] <marcoceppi> andrew-ii: you can certainly try, there shouldn't be much collision, but it's something you'll want to test first
[14:13] <marcoceppi> andrew-ii: juju deploy --to 0 -m controller openvpn
[14:13] <balloons> cnf, howdy
[14:13] <cnf> ohai! \o
[14:13] <cnf> balloons: so i'm back at it :P
[14:14] <cnf> did you see https://bugs.launchpad.net/juju/+bug/1674759 ?
[14:14] <mup> Bug #1674759: juju upgrade-juju doesn't honor proxy settings <juju:Incomplete> <https://launchpad.net/bugs/1674759>
[14:15] <cnf> anastasiamac: also poke :P
[14:17] <balloons> cnf, looks like it would be worth trying the unstable 2.2 and seeing if things are better
[14:17] <cnf> balloons: cn i upgrade to 2.1.2 with a 2.2 client?
[14:18] <cnf> or are you/they suggesting upgrading the controller to 2.2?
[14:18] <cnf> it's a PoC, so i don't mind overly much
[14:19] <balloons> cnf, you could use upgrade to go backwards, since there wouldn't be anything newer if you did try 2.2-alpha1
[14:19] <balloons> But yea, it would be a new controller
[14:19] <cnf> $ ./juju upgrade-juju
[14:19] <cnf> no prepackaged tools available, using local agent binary 2.2-alpha1.1
[14:19] <cnf> ERROR no matching tools available
[14:21] <cnf> updated the issue as well
[14:22] <cnf> balloons: i'm tempted to just delete the controller, and bootstrap a new one
[14:22] <cnf> but i will probably run into this again, then
[14:24] <balloons> cnf, just changing the client won't change things indeed. You'd have to bootstrap a new controller
[14:25] <cnf> hmm, so my only option is to bootstrap a new controller
[14:25] <cnf> hmm
[14:26] <balloons> as far as verifying if the newer juju fixes the bug yes
[14:26] <balloons> Or bootstrapping a newer controller and doing model migration
[14:26] <cnf> hmm
[14:27] <cnf> i might as well throw everything away, and start new
[14:27] <balloons> lxd might be useful here as well
[14:27] <cnf> balloons: how so?
[14:28] <balloons> if you lack the physical machines
[14:28] <cnf> can i deploy a controller to MAAS using LXD?
[14:31] <balloons> I think actually the easier way to do it is to add a vm using virsh on the maas controller for the juju controller. That's a better way to double dip I think
[14:32] <cnf> that's what i have now
[14:32] <cnf> the juju controller is a KVM
[14:32] <balloons> ahh, awesome
[14:32] <balloons> so can you simply add another kvm?
[14:32] <cnf> on an ESXi vm...
[14:33] <balloons> it would only have to live long enough to migrate your workload. In the interest of checking the bug, you migrate, then attempt to upgrade
[14:33] <cnf> well, "simply"
[14:34] <cnf> but i have nothing deployed atm
[14:34] <cnf> so i might as well just trash this one, and bootstrap again
[14:34] <balloons> ack
[14:34] <cnf> it just has me worried i'll run into it again
[14:35] <cnf> hmm
[14:36] <cnf> also, if i do this, i can no longer help debug the problem
[14:37] <cnf> anyone know when anastasiamac comes online?
[14:40] <balloons> cnf, about 7 hours from now
[14:40] <marcoceppi> cnf: she's in NZ timezone
[14:40] <cnf> hmm, that's a bummer :P
[14:40] <balloons> cnf, presumably you could recreate easily enough
[15:20] <kwmonroe> cory_fu: i don't know enough puppet to grok the syntax on https://issues.apache.org/jira/browse/BIGTOP-2708, but i *think* we're ok because we used role-based bigtop (and not component-based, which is what this is patching).
[15:20] <kwmonroe> having said that, i shall deploy zeppelin and see what happens :)
[15:34] <cnf> hmm, crap, i used the wrong user for the controller again
[15:34] <cnf> can I switch the maas user the juju controller uses?
[15:52] <cnf> hmm, why is juju status --output=yaml empty?
[15:52] <cnf> juju status shows me plenty
[15:55] <cnf> hmz
[15:56] <cnf> wtf
[15:59] <cnf> ugh, i really don't know how to debug this stuff...
[16:00] <cnf> jamespage: what was the right incantation?
[16:04] <jamespage> cnf: --format
[16:04] <jamespage> cnf: --format=yaml
[16:05] <cnf> oh
[16:05] <cnf> i'm blind :P
[16:06] <cnf> so now to find why this isn't working, still :(
[16:07] <cnf> jamespage: http://termbin.com/vz8q see anything obvious?
[16:11] <cnf> juju seems to just break shit, and then stop doing stuff
[16:11] <cnf> wth :(
[16:11] <cnf> maas brings up the machine with all the right IP's
[16:11] <cnf> juju changes the IP's around to bridge interfaces, except doesn't do it right
[16:11] <cnf> and then fails
[16:12] <cnf> i think :(
[16:12] <cnf> can anyone help with this?
[16:12] <jamespage> jam: any chance you can cast your eyes over cnf's problem above?
[16:12] <jamespage> appears to be some sort of network-space device binding allocation lxd type problem
[16:15] <cnf> seems every time i file a bug, and get around it, something else pops up
[16:21] <cnf> jamespage: btw, that person has not called me yet ^^;
[16:21] <cnf> dunno if you made the link to the mails etc, but we met at cfgmgmtcamp
[16:22] <jamespage> cnf: yep I gotcha
[16:28] <ybaumy> jamespage: nice that you are here. maybe you can help. is the following right to do. remember i set the root password for the mysql charm already. now i want .. juju add-machine -n2 ; juju add-unit mysql --to machine1 ; juju add-unit hacluster --to machine2 ; juju add-relation mysql/0 hacluster/0 ; juju add-relation mysql/1 hacluster/0 ???
[16:29] <cnf> so jam is the person i need for this, right?
[16:30] <ybaumy> or is it juju deploy hacluster --to machine2 i guess
[16:35] <cnf> hmm
[16:35] <cnf> i'm at a loss here
[16:38] <Budgie^Smore> lazyPower did you read the backlog in channel?
[16:39] <lazyPower> Budgie^Smore: i did see that you encountered some failure scenario(s) in AWS
[16:41] <ybaumy> cnf: i feel you
[16:41] <Budgie^Smore> lazyPower yup, I need to resize the instances anyway so am going to do a clean install rather than salvage this one, but wondered if your team would like any failure data before I destroyed the cluster
[16:42] <kwmonroe> cnf: i'm pretty sure jam is in UTC+<a couple hours>.  you may have more luck pasting http://termbin.com/vz8q in #openstack-charms.
[16:42] <cnf> kwmonroe: i think it's a juju issue, but no harm, i guess
[16:44] <kwmonroe> cnf: i'm not familiar enough with network space binding to offer much help -- hopefully #openstack-charms has the people with the right know-how for ya.
[16:44] <cnf> i hope so
[16:44] <cnf> i'm getting quite dispirited...
[16:45] <lazyPower> Budgie^Smore: oh heck yeah
[16:45] <lazyPower> Budgie^Smore: can you get us a juju-crashdump of the model?
[16:45] <lazyPower> Budgie^Smore: snap install juju-crashdump --edge && juju-crashdump. I can get you a secure upload point again
[16:45] <lazyPower> need a sec tho i'm in an openstack meeting
[16:45] <Budgie^Smore> yeah I need a min or two to get my laptop up and running, etc.
[16:46] <Budgie^Smore> make that a few more than that since I need more coffee
[16:53] <Budgie^Smore> ok I haven't used snap before and getting an error about requiring classic or confinement override
[16:53] <jamespage> kwmonroe: I'm already a bit stumped on what the problem is with cnf's deployment
[16:55] <cnf> jamespage: bit more reference http://termbin.com/ep44 and http://termbin.com/6o0h
[16:55] <jamespage> ybaumy: kinda - adding units is nearly as you describe
[16:55] <cnf> maas brings it up fine, then juju tries to make the bridges
[16:56] <jamespage> you'll want to do something like juju add-unit --to lxd:<physical machine's id> mysql
[16:56] <jamespage> with regards to hacluster
[16:56] <jamespage> ybaumy: juju deploy hacluster hacluster-mysql
[16:56] <jamespage> juju add-relation hacluster-mysql mysql
[16:57] <Budgie^Smore> lazyPower I added --classic to that snap command
[16:57] <jamespage> ybaumy: hacluster is a subordinate charm so to make units of it, you relate it to a principal charm like percona-cluster
[16:57] <jamespage> or glance or cinder or nova-cloud-controller
[16:58] <ybaumy> ok
[16:58] <ybaumy> gonna make a backup before i try that
[16:59] <zeestrat> jamespage: Do y'all have a reference bundle for OpenStack HA laying around besides https://launchpadlibrarian.net/298175262/bundle.yaml?
[16:59] <zeestrat> If not, then that might help you out, ybaumy.
[17:01] <lazyPower> Budgie^Smore: ah good plan, i forgot that flag
[17:01] <lazyPower> Budgie^Smore: sorry, this is what happens when i triple-task.... :\
[17:02] <ybaumy> zeestrat: i dont want to start over at that point ..
[17:02] <Budgie^Smore> no worries, being able to figure out issues like that is what makes me good at what I do
[17:02] <ybaumy> zeestrat: i know of that template you pasted it before for me
[17:03] <ybaumy> zeestrat: but i have a setup and dont want to start over every 2 days
[17:03] <ybaumy> zeestrat: i already scripted a lot to fit the current environment; i dont know if the scripts will work with that new setup then
[17:05] <zeestrat> ybaumy: My bad. Memory is not what it used to be!
[17:08] <Budgie^Smore> getting a bunch of runtime/cgo: pthread_create failed: Resource temporarily unavailable
[17:08] <ybaumy> zeestrat: yesterday i got a deadline for the POC. and i now have one month to show them that ubuntu/juju/maas is the platform we want to use. else i have to start with redhat or suse
[17:08] <Budgie^Smore> getting a bunch of "runtime/cgo: pthread_create failed: Resource temporarily unavailable:" errors running that command
[17:08] <ybaumy> so i need some support
[17:12] <lazyPower> Budgie^Smore: thats emitting from snapd
[17:12] <lazyPower> Budgie^Smore: so long as the process hasn't returned its still tarballing up the relevant bits of the model and preparing a dump package
[17:13] <lazyPower> Budgie^Smore: i encourage you to extract it and take a quick peek before washing hands, this app dumps the full state of a "crashed model" for post analysis, this may be something you want to have in your toolkit as you're building a pretty heady k8s model right?
[17:14] <Budgie^Smore> lazyPower, where did you want this run? I assumed on the client that was connecting to the controller but I am thinking now you meant a different system since I got told it needs to run as root but root isn't connected to the controller
[17:15] <lazyPower> cory_fu: ^ needs to run as root? huh?
[17:15] <lazyPower> Budgie^Smore: hang on cc'ing a stakeholder on the project
[17:16] <lazyPower> Budgie^Smore: but no, it should be run as your user on your client workstation, that much is correct.
[17:16] <Budgie^Smore> lazyPower oh I was intending on holding onto a copy to look at myself too as yes I want to at least try and avoid this scenario but I need to upgrade the underlying infra anyway and always like to start from a known good state
[17:16] <cory_fu> lazyPower: Trying to get caught up on the backscroll, but what run as root?
[17:16] <cnf> no one that has an idea what is going on here http://termbin.com/vz8q ?
[17:17] <lazyPower> cory_fu: ah, this was returned from juju-crashdump for Budgie^Smore
[17:17] <lazyPower> cory_fu: i was instructing budgie to create a crashdump archive of the model before destroying it so we could do some post-analysis of what went sideways during model deltas.
[17:17] <lazyPower> and apparently its complaining it needs to be run as root? is this new behavior? I've not seen this before.
[17:17] <cory_fu> lazyPower: Hrm.  crashdump shouldn't require root on the local machine, and anything it does on the remote machine should be done with `juju run` so it should have root (I think)
[17:18] <Budgie^Smore> lazyPower, cory_fu here is the full output from juju-crashdump: http://paste.ubuntu.com/24229662/
[17:18] <cory_fu> lutostag: ^
[17:18] <ybaumy> cnf: a lot of waiting for machine... what does maas say?
[17:18] <cnf> ybaumy: all machines up
[17:18] <cnf> it's only "waiting" on lxd containers
[17:18] <lazyPower> oh that bubbled up from snapd again. thats so weird.
[17:18] <cnf> juju is fucking up the networking
[17:18] <lazyPower> Budgie^Smore: did you get a crashdump package in $PWD?
[17:19] <cory_fu> Yeah, I have no idea what those messages are about
[17:19] <cnf> ybaumy: http://termbin.com/ep44 and http://termbin.com/6o0h
[17:19] <Budgie^Smore> lazyPower don't see any files created :-/
[17:19] <cnf> maas makes the net devices, and adds the right IPs
[17:19] <lazyPower> :| thats no bueno, it should have left a crashdump-$datestamp package in $PWD.... ok lets get lutostag in on this one to help if he's around
[17:20] <cnf> juju then tries to create bridges, and move the ip's
[17:20] <cnf> for some reason it fails
[17:20] <lazyPower> thanks cory_fu for taking a look.
[17:20] <cnf> and then complains it can't find ips
[17:20] <ybaumy> cnf: hmm that looks like a mess. i wouldnt know where to start either, sorry
[17:21] <ybaumy> which vlan is the public ip
[17:21] <ybaumy> for the lxd containers
[17:21] <Budgie^Smore> lazyPower, lutostag for the record here is the command I used to install juju-crashdump: $ sudo snap install juju-crashdump --edge --classic
[17:21] <cnf> you get BridgeName:br-enp3s0f0.4013}] devices on host "machine-0" for container "machine-0-lxd-7" with delay=0, acquiring lock "machine-lock" to-bridge="enp3s0f0.4013" --activate --bridge-prefix=br-  --reconfigure-delay=0 /etc/network/interfaces <<'EOF'
[17:21] <cnf> and then a few blank lines
[17:22] <cnf> and then find host bridge for space(s) "space-public" for container "0/lxd/7"), retrying in 10s (3 more attempts)
[17:22] <cory_fu> lazyPower: lutostag has much more understanding of crashdump.  I just use it from time to time.  ;)
[17:22] <cnf> and the same thing again
[17:22] <cnf> >,<
[17:22] <lazyPower> cory_fu: i still associate you as a stakeholder since you pimp it so much :)
[17:22] <lazyPower> and by extension, i now pimp it just as much
[17:22] <Budgie^Smore> lol
[17:22] <ybaumy> you are using maas dhcp right on that vlan?
[17:23] <Budgie^Smore> well apparently you are pimping an STI right now :P
[17:23] <lazyPower> O_O
[17:23] <cnf> ybaumy: it's not dhcp, it's maas assigned
[17:23] <lazyPower> Budgie^Smore: if you snap list, do you have 1.0.0 of juju-crashdump installed?
[17:23] <jamespage> cnf: bridges not appearing right?
[17:24] <jamespage> cnf: beisner just highlighted this problem to me - https://bugs.launchpad.net/juju/+bug/1672327
[17:24] <mup> Bug #1672327: Too long names for bridges <juju:Triaged> <MAAS:Triaged> <https://launchpad.net/bugs/1672327>
[17:24] <Budgie^Smore> lazyPower hey that was the cleanest of the jokes that flashed through my head... anyway http://paste.ubuntu.com/24229699/
[17:25] <cnf> jamespage: well, i'll be...
[17:25] <jamespage> cnf: those were not my words...
[17:25] <jamespage> damn ivoks is not here...
[17:25] <cnf> why do I have to run into every fucking weird bug with this?
[17:25] <jamespage> Dmitrii-Sh: re bug https://bugs.launchpad.net/juju/+bug/1672327
[17:25] <mup> Bug #1672327: Too long names for bridges <juju:Triaged> <MAAS:Triaged> <https://launchpad.net/bugs/1672327>
[17:26] <jamespage> Dmitrii-Sh: did you and ivoks figure out a workaround for that?
[17:26] <Budgie^Smore> cnf, welcome to the club, just ask lazyPower how many weird issues I have hit
[17:26] <ybaumy> jamespage: isnt it possible to use udev for that?
[17:26] <Dmitrii-Sh> jamespage: as a workaround we just renamed the interfaces to something really short
[17:26] <cnf> Dmitrii-Sh: renamed them where?
[17:26] <cnf> in maas?
[17:26] <Dmitrii-Sh> jamespage: e.g. instead of en1s0 use e0
[17:26] <Dmitrii-Sh> cnf: yes
[17:27] <cnf> hmm, ok
[17:27] <cnf> i'm fine with using eth0 and eth1, ffs
[17:27] <jamespage> cnf: we all were :-)
[17:27] <Dmitrii-Sh> cnf: the problem is that UAPI kernel headers have a byte limit of 15
[17:27] <Dmitrii-Sh> cnf: same with libc
[17:27] <cnf> <very quick rant> stupid ass retarded piece of shit systemd!</rant over>
[17:27] <Dmitrii-Sh> cnf: :^)
[17:28] <cnf> ok
[17:28] <cnf> jamespage: well, i'd say both maas and juju should guard against this
[17:28] <cnf> instead of just silently failing
[17:28] <jamespage> don't disagree
[17:28] <lazyPower> yeah, thats all the same here too Budgie^Smore not sure why its tanking :( this bums me out
[17:28] <Dmitrii-Sh> cnf: links to the kernel code and libc are in the comment if needed
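The 15-byte limit Dmitrii-Sh describes can be sketched as follows. This is a minimal illustration of the failure mode behind bug #1672327, assuming only that Juju derives the bridge name by prefixing `br-` (as the log fragments above show) and that Linux caps interface names at IFNAMSIZ (16 bytes including the trailing NUL); the helper function names are made up for the example:

```python
IFNAMSIZ = 16  # kernel limit, includes the trailing NUL byte

def bridge_name(iface: str) -> str:
    """Mimic Juju's bridge naming: prefix the host interface with 'br-'."""
    return "br-" + iface

def is_valid_ifname(name: str) -> bool:
    """An interface name must fit in IFNAMSIZ - 1 = 15 bytes."""
    return len(name.encode()) <= IFNAMSIZ - 1

# "enp3s0f0.4013" is 13 characters, so the interface itself is fine,
# but "br-enp3s0f0.4013" is 16 -- one byte over the limit.
print(is_valid_ifname("enp3s0f0.4013"))               # True
print(is_valid_ifname(bridge_name("enp3s0f0.4013")))  # False
```

This is why renaming the interfaces in MAAS to something short (e.g. `e0`) works around the bug: the derived bridge name then fits under the limit.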
[17:28] <cnf> ok, so destroying the setup
[17:28] <cnf> changing network names
[17:28] <cnf> and  deploying again
[17:29] <Dmitrii-Sh> cnf: y
[17:29] <Budgie^Smore> lazyPower sorry :-/ the sarcastic side of me has a few choice "jokes" for the devs right now ;-)
[17:29] <Budgie^Smore> lazyPower hey at least this one isn't your fault :)
[17:29] <lazyPower> Budgie^Smore: do what you gotta do man ;) i can filter appropriately
[17:29] <cnf> jamespage: asked a maas question over in #maas
[17:29] <lazyPower> ikr!
[17:29] <lazyPower> for a change, i dont feel totally responsible
[17:30] <jamespage> Dmitrii-Sh: how exactly did we do the renames for the interfaces?
[17:30] <Budgie^Smore> lazyPower nope I won't go to that dark place, I am a reforming ops guy :P
[17:30] <ybaumy> i dont know much about that but cant you just modify udev network rules every first boot to rename interfaces
[17:30] <ybaumy> thats what i would do
[17:31] <cnf> ybaumy: modify them how?
[17:32] <cnf> ybaumy: this is all automated deploys
[17:32] <jamespage> ybaumy: that's pretty much exactly what maas will do on deployment
[17:32] <Dmitrii-Sh> jamespage: in short: I don't have a script for it yet. I just renamed them via maas gui. Changed interface names to 2-byte names and then redeployed. Any VLAN interfaces are updated automatically by maas
[17:32] <cnf> Dmitrii-Sh: indeed
[17:32] <ybaumy> jamespage: ah ok
[17:32] <cnf> i'm collecting a LONG list of maas and juju bugs  :/
[17:33] <jamespage> cnf: hearing you
[17:33] <ybaumy> jamespage: i would have modified cloud-init but thats probably what maas does too
[17:34] <cnf> Dmitrii-Sh: do you know how to get back interfaces you deleted in MAAS?
[17:34] <cnf> Dmitrii-Sh: assuming you don't know the MAC addresses anymore :P
[17:35] <jamespage> cnf: I suspect if you recommission in the machine, they will get re-discovered - but I'd defer to those with superior MAAS knowledge to me
[17:35] <cnf> k
[17:36] <Dmitrii-Sh> cnf: better to recommission them, yes. That should boot an ephemeral ubuntu image via PXE get hw data again and bail out
[17:36] <cnf> so br-bond0.4011 is fine, at least
[17:41] <cnf> so anyone know how i can change the user a juju controller uses?
[17:42] <cnf> to talk to maas, that is
[17:44] <jamespage> you have to switch the credential being used... I'd have to dig up the docs
[17:45] <ybaumy> cnf: ~/.local/share/juju there is a credentials file
[17:45] <Budgie^Smore> juju add-credential with the same id for the credential will update it
[17:45] <cnf> ybaumy: yes, which isn't used after bootstrap
[17:45] <ybaumy> cnf: last time i just edited that
[17:46] <ybaumy> or like Budgie^Smore says
[17:47] <cnf> as far as I can see, that doesn't get used after bootstrap
[17:48] <Budgie^Smore> it should be used anytime juju needs to get cloud resources
[17:48] <cnf> it isn't
[17:48] <ybaumy> i thought so too
[17:48] <cnf> try it
[17:48] <cnf> bootstrap with one user
[17:48] <cnf> then change local credentials
[17:48] <cnf> and then deploy something
[17:48] <cnf> they'll be added to maas with the original credentials
[17:49] <ybaumy> hmm
[17:49] <ybaumy> then you have to use the command
[17:49] <ybaumy> sorry then
[17:49] <ybaumy> cant try right now. backing up everything
[17:51] <jamespage> cnf: I think you should be able to do it with 'juju update-credential'
[17:51] <cnf> juju update-credential yeah
[17:51] <cnf> jamespage: that works, indeed
[17:51] <cnf> jamespage: doesn't change what is already running, though :P
[17:51] <cnf> but it is something, thanks
[17:51] <jamespage> cnf: in terms of how that's allocated in maas
[17:51] <jamespage> no I'd not expect it to do that
[17:51] <Budgie^Smore> ok cnf, that is the part that I don't get
[17:51] <cnf> ok, while those HP machines are booting, i'm going for a shower, and look for food
[17:52] <cnf> Budgie^Smore: what?
[17:52] <jamespage> cnf: I'm also concerned you might island existing allocated resources.
[17:52] <ybaumy> cnf: you cant change the owner of an added resource in maas once its added i guess
[17:52] <Budgie^Smore> cnf when you say change what is already running
[17:52] <cnf> jamespage: yeah, and it's the controller :P
[17:53] <cnf> lets see if we can get openstack working, i'll worry about ACL's and accounts later
[17:53] <cnf> bbiab
[17:53] <cnf> jamespage: thanks so far
[17:53] <Budgie^Smore> cnf I probably missed it but what are you trying to accomplish after you change the credentials juju uses?
[17:54] <cnf> Budgie^Smore: juju should use a juju user in talking to maas
[17:54] <cnf> i was logged in with my admin user
[17:56] <Budgie^Smore> cnf, ok I think there maybe a misunderstanding in the difference of cloud credential and juju user here
[17:59] <ybaumy> a juju user is just a user of a cloud but not the cloud user
[17:59] <ybaumy> right?
[18:00] <Budgie^Smore> cnf, they are 2 separate ACL systems, the reason being that clouds' (including MaaS) auth APIs are different and the juju user is for interfacing with juju
[18:01] <Budgie^Smore> cnf, for example for AWS it isn't even a username and password you pass to authenticate, it is 2 keys which are associated with a user account on Amazon's end
[18:03] <sfeole> petevg, ping you around?
[18:04] <petevg> stroke: i am.  What's up?
[18:04] <petevg> sfeole: silly autocorrect munged your name, though.
[18:04] <sfeole> petevg, hey, i wanted to use some exit handlers in libjuju, one that i looked at was atexit,  But i don't think I can properly utilize that handler with asyncio
[18:05] <petevg> sfeole: there are usually asyncio equivalents for that sort of thing.
[18:05] <sfeole> petevg, ahh there are
[18:05] <sfeole> petevg, i'll take a look then
[18:06] <sfeole> petevg, i want to simply destroy a model upon exit
[18:06] <petevg> sfeole: cool. Ping me if you want to bounce anything off of me.
[18:06] <sfeole> petevg, sounds good
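A minimal sketch of the asyncio-friendly pattern petevg points toward: `atexit` hooks fire after the event loop is gone, so a coroutine can't be awaited there; wrapping the workload in `try/finally` inside the coroutine guarantees the teardown runs instead. The `destroy_model` coroutine below is a stand-in stub, not libjuju's actual API:

```python
import asyncio

destroyed = []  # records teardown calls so the pattern is observable

async def destroy_model(name: str) -> None:
    # Stand-in for the real async teardown (libjuju exposes model
    # destruction on its Controller object); this stub just records
    # that cleanup happened.
    destroyed.append(name)

async def main() -> None:
    model_name = "ci-test"  # hypothetical model name
    try:
        pass  # ... drive the test workload against the model here ...
    finally:
        # Runs on normal exit and on exceptions alike -- the
        # asyncio equivalent of an atexit hook.
        await destroy_model(model_name)

asyncio.run(main())
```

If the process can also be killed by signals, the same finally block fires when the task is cancelled, so pairing this with a signal handler that cancels the main task covers that case too.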
[18:12] <cnf> Budgie^Smore: ybaumy so i talk to juju, not to the cloud
[18:12] <cnf> Budgie^Smore: ybaumy so i should not HAVE cloud credentials, juju should
[18:15] <Budgie^Smore> cnf what I haven't tried is if you can talk to juju without being able to talk to the cloud (or MaaS) but I have tried logging in as multiple users to juju from the same machine so that both juju accounts had access to my cloud credentials
[18:15] <cnf> Budgie^Smore: well, if you have several users talking to maas, that gets a mess
[18:15] <cnf> because each need their own cloud credentials?
[18:15] <cnf> then you need to mirror ACLs
[18:15] <cnf> that will become a MESS very very fast...
[18:15] <Budgie^Smore> cnf and in the juju world, models are "owned" by users and you have to give other users access to the model
[18:16] <cnf> Budgie^Smore: right
[18:16] <cnf> Budgie^Smore: but then do the other users also need access to the same underlying cloud resources?
[18:16] <cnf> and what if they don't match?
[18:16] <cnf> so i expect the juju controller to use one user / api key / whatever
[18:16] <cnf> and deal with juju user itself
[18:17] <Budgie^Smore> cnf oh I kinda get that but from a security stand point you want all your ops people to have their own account on both the cloud and juju. this is more important in public clouds though where you already have to set people up there separate from juju
[18:17] <cnf> Budgie^Smore: no, no you really don't
[18:17] <cnf> Budgie^Smore: because ACLs on both will NOT match up
[18:17] <cnf> they won't have the same mechanisms
[18:18] <cnf> ( i mean, you want them to have a cloud account to deal with cloud issues, but not to talk to the cloud through juju)
[18:18] <Budgie^Smore> cnf and there is why you see why they are separate in juju to start with
[18:18] <cnf> Budgie^Smore: what?
[18:19] <cnf> a juju user should NOT! need a cloud account to talk to juju and do stuff
[18:19] <Budgie^Smore> no you want them to use their cloud credentials through juju especially since otherwise you wouldn't be able to tell actions apart
[18:19] <cnf> no, no you really don't...
[18:19] <cnf> that will be a mess
[18:19] <cnf> it just can't work
[18:20] <Budgie^Smore> cnf how are you going to get instances spun up without cloud access?
[18:20] <cnf> Budgie^Smore: JUJU should have cloud access
[18:20] <cnf> not the juju user
[18:20] <cnf> this is how you set up vnf controllers...
[18:20] <Budgie^Smore> cnf how are you going to tell apart who did what through juju to the cloud?
[18:21] <cnf> it just doesn't work if you need credentials all the way down the line...
[18:21] <cnf> Budgie^Smore: in juju...,
[18:21] <cnf> that is why you have a juju controller...
[18:23] <cnf> Budgie^Smore: say i make a budgie/default model, which you have access to
[18:23] <cnf> you need a budgie user on the cloud
[18:23] <Budgie^Smore> cnf but that is not how it works nor would any infosec team I know sign off on it working like that. yes you can use juju to determine some of that but infosec wants to be able to see it at every level and shared access at any level is frowned upon significantly
[18:23] <cnf> then i give bob access to budgie/default
[18:23] <cnf> now the bob user on the cloud needs access to your resources on the cloud
[18:23] <cnf> but the cloud has no granularity, so he has access to ALL your resources?
[18:23] <cnf> Budgie^Smore: uhm, sure infosec would sign off on it, why would they not?
[18:24] <cnf> that's nonsense
[18:24] <cnf> that's how orchestration works
[18:24] <cnf> needing credentials on every single step along the way is unmaintainable
[18:24] <cnf> and as such a security nightmare
[18:25] <cnf> it just does NOT work or scale in any form whatsoever
[18:26] <Budgie^Smore> cnf I would love to get into the ins and outs of why the design is the way it is and why infosec likes it that way, but I do actually have a cluster I need to spin up, sorry. your workaround is to have a shared account on maas that you give out to everyone
[18:27] <cnf> Budgie^Smore: infosec doesn't like it your way :P
[18:27] <cnf> and no way
[18:27] <cnf> no one is getting cloud access
[18:27] <cnf> at all
[18:27] <cnf> not one bit
[18:28] <cnf> if juju needs that, i don't think i'll happen at $currentclient
[18:28] <Budgie^Smore> cnf oh I highly doubt that but then running 30% of the world's Internet might make me jaded about what enterprise infosec people want
[18:29] <Budgie^Smore> cnf oh and don't get me started about auditors - internal and external - and their requirements!
[18:30] <cnf> you run 30% of the worlds internet?
[18:32] <Budgie^Smore> was part of the company that does until recently
[18:33] <cnf> sure
[18:34] <cnf> jamespage: now i'm at message: 'can''t get info for image ''juju/xenial/amd64'': not found' :P
[18:36] <Budgie^Smore>  http://www.reuters.com/article/us-akamai-tech-results-idUSKBN0NJ2IV20150428 - "Akamai, which delivers between 15-30 percent of all Web traffic"
[18:46] <Budgie^Smore> cnf if there is one thing I never lie about it is what I have done in my career. hell it might come across as bragging but truth is it still blows me away what I have accomplished and who I have worked for over the years
[18:51] <Budgie^Smore> think I am going to "test" out JaaS to deploy the new cluster
[18:52] <rick_h> Budgie^Smore: let us know if you hit anything.
[18:52] <Budgie^Smore> rick_h: ack that
[18:52] <rick_h> Budgie^Smore: make sure to login to the website first to make sure your account is ready to go
[18:53] <Budgie^Smore> rick_h: I just logged in through the juju interface before I started modelling right?
[18:53] <rick_h> Budgie^Smore: rgr
[18:54] <Budgie^Smore> rick_h yeah I figured that would be a good first step :)
[18:54] <Budgie^Smore> rick_h I have "played" with the demo for years
[18:54] <rick_h> Budgie^Smore: sec, let me get you the in dev docs branch as well
[18:54] <rick_h> crash course!
[18:55] <rick_h> Budgie^Smore: https://github.com/juju/docs/blob/jaas/src/en/getting-started-jaas.md
[18:55] <rick_h> Budgie^Smore: hah, well the "demo" is going to get more fun for you this time
[18:56] <Budgie^Smore> rick_h come on now, I am a man and you expect me to read a manual ;-) (side note, would be better if the images actually weren't broken links :P)
[18:58] <rick_h> Budgie^Smore: yea, once the branch lands and it's rendered on jujucharms.com/docs it'll be pretty and themed and such
[18:58] <rick_h> Budgie^Smore: just some in-flight stuff as it goes through reviews/etc
[18:58] <Budgie^Smore> rick_h yeah I get that :) just giving you crap
[18:58] <rick_h> Budgie^Smore: bring it on! :P
[18:59] <Budgie^Smore> rick_h I could come work with you as a colleague and not just a user :P
[19:12] <cnf> jamespage: so things are still not coming up
[19:12] <cnf> i think i am missing relations?
[19:37] <Budgie^Smore> I am pondering adding charmscaler to this cluster
[19:38] <Budgie^Smore> wonder how well it would work in AWS though
[19:46] <rick_h> Budgie^Smore: :) https://www.canonical.com/careers
[19:46] <anastasiamac> cnf: m almost here now... it's only just before 6am :D how can I help?
[19:47] <Budgie^Smore> rick_h, *cough* no comment, I am taking the 5th *cough*
[19:47] <cnf> anastasiamac: https://bugs.launchpad.net/juju/+bug/1674759 but i just redeployed :/
[19:47] <mup> Bug #1674759: juju upgrade-juju doesn't honor proxy settings <juju:Incomplete> <https://launchpad.net/bugs/1674759>
[19:48] <cnf> anastasiamac: so take your time :P
[19:48] <cnf> it's almost 21:00 here anyway :P
[19:52] <cnf> now i'm stuck on the next issue :/
[19:52] <Budgie^Smore> rick_h quick question, is it possible to modify the constraints after you have created a machine in a model or is that something that is only done when adding a machine?
[19:53] <rick_h> Budgie^Smore: yea, only done when adding a machine. If you set them on an application level you can change the constraints and new units pick up the new values
[19:54] <Budgie^Smore> rick_h oh I get that, just makes it kinda tricky to use bundles where you might want different constraints than default
[19:54] <Budgie^Smore> rick_h suppose I could just download the model, modify it to what I want and import it back?
[19:56] <rick_h> Budgie^Smore: yea, I think the idea is that you'd swap up any constraints in the bundle yourself as that changes what's setup.
[19:56] <rick_h> Budgie^Smore: at some point we'll allow config/placement/etc overrides during the deploy command
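The distinction rick_h draws above can be sketched as shell commands (a hedged example; the application name `mysql` is hypothetical):

```shell
# Machine constraints are fixed at the moment the machine is added:
juju add-machine --constraints "mem=8G cores=2"

# Application-level constraints can be changed afterwards;
# only units added later pick up the new values:
juju set-constraints mysql mem=16G
juju add-unit mysql    # this new unit requests the updated 16G
```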
[19:56] <Budgie^Smore> rick_h ok, follow up question, does the UI have a way of updating the default constraints?
[19:57] <rick_h> Budgie^Smore: hmm, I'm trying to trace that question. What are the default constraints?
[19:57] <cnf> and i'm betting jamespage has gone home for the day :P
[19:58] <Budgie^Smore> rick_h oh that would be nice, kinda like the UI for doing the placement overrides, etc... I can't remember but the last time I left everything alone I got m3.medium instances which is only 4G of mem
[19:58] <rick_h> Budgie^Smore: so in the GUI you can alter constraints before hitting deploy
[19:58] <Budgie^Smore> rick_h think that is based on the juju controller memory requirements
[19:58] <rick_h> Budgie^Smore: I guess I'm not sure which "UI" you're referring and such.
[19:58] <cnf> and one message: 'can''t get info for image ''juju/xenial/amd64'': not found'
[19:58] <rick_h> Budgie^Smore: well it's a "default" value. Like anything, it needs to not be too crazy for folks trying/testing/etc and those that are running long running production systems
[19:59] <cnf> anyone know what that is about?
[19:59] <Budgie^Smore> rick_h sorry I am a bit old school use UI to mean GUI and CLI for ... well CLI
[19:59] <rick_h> Budgie^Smore: all good
[20:00] <Budgie^Smore> Budgie^Smore ok, if I should be able to alter the constraints in the GUI, I am not finding where; the only option I get when clicking on the machine is destroy
[20:05] <Budgie^Smore> rick_h about the only way I can think of doing it would be to destroy the "new" machines and "add" new ones with the constraints I want baked in and then replacing the charms back on to the newly created machines... oh and wow when I have a brain fart I have a brain fart
[20:06] <rick_h> Budgie^Smore: or download the bundle yaml and go down to the machines and edit the numbers before doing deploy?
[20:06] <Budgie^Smore> yup
[20:07] <Budgie^Smore> seems a little counterproductive when the GUI does give the ability to add constraints that it wouldn't have a way to update predeploy, because technically the machine isn't added until after it is deployed
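rick_h's suggestion of editing the bundle yaml before deploying might look like this (a hedged sketch: the bundle contents are a made-up example, and on some 2.x versions exporting the current model as a bundle was done through the GUI rather than the CLI):

```shell
# Hypothetical bundle.yaml excerpt with machine constraints overridden:
#   machines:
#     "0":
#       constraints: mem=16G cores=4
#   services:
#     mediawiki:
#       charm: cs:mediawiki
#       num_units: 1
#       to: ["0"]
#
# Deploy the edited local bundle instead of the store version:
juju deploy ./bundle.yaml
```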
[20:08] <hatch> Budgie^Smore you are correct - this feature has been on our roadmap BUT if you're feeling adventurous you could file a feature request here: https://github.com/juju/juju-gui/issues  :)
[20:09] <Budgie^Smore> hatch thanks :) I might do that... good knowing someone else thinks it is worth "fixing"
[20:09] <hatch> Budgie^Smore in fact https://github.com/juju/juju-gui/issues/1982 :D
[20:10] <Budgie^Smore> hatch in the meantime I have a "workaround" that will do it anyway
[20:10] <hatch> good good
[20:11] <Budgie^Smore> I have liked and am watching the issue
[20:13] <hatch> great thanks
[20:39] <tvansteenburgh> rick_h: i need some clarification about resources. i thought a resource was attached to a specific revision of a charm, but that doesn't seem to be the case.
[20:42] <rick_h> tvansteenburgh: when you do a release you release with a specific revision of a charm and a specific revision of a resource
[20:43] <rick_h> tvansteenburgh: I'm actually playing with the charm command today and making it part of the release output that it shows what version of the charm and what revision of each resource are in each channel
[20:46] <tvansteenburgh> rick_h: right, that's what i thought. so the resource revision sticks with that charm revision
[20:49] <rick_h> tvansteenburgh: well it's malleable via the release calls
[20:50] <rick_h> tvansteenburgh: so a single charm revision can have a series of resource revisions over time
[20:51] <tvansteenburgh> rick_h: i have a situation where old revisions of a charm are deploying with a much newer resource than they were published with, which was unexpected. trying to figure out how that happened
[20:51] <rick_h> tvansteenburgh: either a bug or someone did new releases with the charm command and updated resources
[20:55] <Budgie^Smore>  rick_h do I need to be running juju 2.2 cli to log into JaaS from the cli?
[20:56] <rick_h> Budgie^Smore: no, 2.x
[20:58] <Budgie^Smore> rick_h using juju login jaas?
[21:01] <Budgie^Smore> rick_h I would get it if I knew where to find the register command I need :)
[21:01] <hatch> Budgie^Smore juju register
[21:01] <rick_h> juju register jimm.jujucharms.com
[21:01] <hatch> juju register jimm.jujucharms.com jaas
[21:02] <hatch> rick_h don't forget the fancy name :)
[21:02] <rick_h> doh!
[21:02] <Budgie^Smore> ah ok, I vaguely remember that from somewhere :)
[21:02] <rick_h> Budgie^Smore: *cough* in that docs page *cough*
[21:03] <Budgie^Smore> rick you sure about that?
[21:03] <hatch> Budgie^Smore were there places you looked for that command? It's possible we should have it in more places, or more accessible places
[21:03] <Budgie^Smore> I was looking at the controller page of the jaas branch for starters rick_h
[21:04] <tvansteenburgh> rick_h: sorry, i don't see how someone could update a resource that was already released ?
[21:04] <Budgie^Smore> rick_h but that is after I scanned that page and couldn't see anything about registering to the controller
[21:05] <tvansteenburgh> rick_h: e.g. `charm release wily/django-42 --resource website-3` <- afaik, neither django-42 nor website-3 can be changed now right?
[21:05] <hatch> Budgie^Smore you're right, it's on a special CLI docs page
[21:05] <rick_h> tvansteenburgh: sec otp
[21:05] <hatch> Budgie^Smore https://github.com/juju/docs/blob/jaas/src/en/jaas-cli.md
[21:06] <hatch> we should probably make these options more obvious at the top of each page
[21:06] <rick_h> tvansteenburgh: I can always do charm release wily/django-42 --resource website-4
[21:06] <hatch> thanks Budgie^Smore
[21:06] <tvansteenburgh> rick_h: ok, i didn't know that
[21:06] <tvansteenburgh> rick_h: but in my case, i don't think someone went through every old charm rev and updated the resource
[21:07] <Budgie^Smore> hatch I keep forgetting that the CLI is its own section, to be honest I almost always expect to see UI and CLI steps in the same place in the docs
[21:07] <tvansteenburgh> technically i see it's possible now, i'll need to check
[21:07] <Budgie^Smore> hatch to me the CLI page is for stuff the CLI can do that the UI can't
[21:07] <rick_h> tvansteenburgh: k, just stating what can be done there.
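rick_h's point above can be sketched: releasing the *same* charm revision again with a newer resource revision changes what the channel serves (the names follow the example already in channel):

```shell
# Initial release binds charm revision 42 to resource revision website-3:
charm release wily/django-42 --resource website-3

# A later release of the SAME charm revision can swap in a newer
# resource revision; the channel then serves website-4:
charm release wily/django-42 --resource website-4
```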
[21:11] <Budgie^Smore> oh and hatch, (based on my login just now) rick_h had it right, you can't pass 'jaas' that way... register seems to take only 1 argument
[21:11] <hatch> heresy!
[21:12] <hatch> Budgie^Smore 2.1.0?
[21:12] <hatch> er, version 2.1.0
[21:12] <hatch> wait you're right
[21:12] <Budgie^Smore> http://paste.ubuntu.com/24230847/
[21:12] <hatch> I failed
[21:12] <hatch> plz disregard me :)
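For reference, the registration flow that worked in the exchange above (per Budgie^Smore's paste, `register` took a single host argument in that juju version):

```shell
# Register the JaaS controller; juju opens a web-based login and
# may prompt for a local controller name interactively:
juju register jimm.jujucharms.com
```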
[21:14] <Budgie^Smore> sometimes you gotta give credit where credit is due
[21:15] <hatch> haha indeed
[21:17] <cory_fu> layer-basic PR for general review: https://github.com/juju-solutions/layer-basic/pull/92
[21:23] <Budgie^Smore> lazyPower I am going to "stop" that cluster until we can determine what we want to get off it and how
[21:23] <tvansteenburgh> ninja'd marcoceppi
[21:24]  * tvansteenburgh notches belt
[21:37] <kwmonroe> marcoceppi: you are awesome... i'm pretty sure you're responsible for charm build messages like "build: layer.yaml includes hbase-quorum which isn't used in metadata.yaml".  saving my hide all day chief!
[21:38] <magicaltrout> i told you not to break it....
[21:38] <Budgie^Smore> OK next crazy question ... where does juju get the 10.0.0.0 IP range when it is deploying lxd containers?
[21:39] <kwmonroe> hell, this juju magic makes it damn near impossible to break things.  #amirite magicaltrout?
[21:39] <cnf> haha!
[21:39] <cnf> funny
[21:49] <Budgie^Smore> ok I have found what I was looking for, what's the best way to override the IP address scheme when deploying LXD into a mixed LXD / machine environment so that they share the same range?
[21:57] <Budgie^Smore> ok I feel like I am missing something basic when it comes to the Juju deployment of LXD containers
[22:04] <smgoller> Hey all, how does juju tell maas about containers it creates?
[22:04] <smgoller> i.e. in the code
[22:05] <rick_h> Budgie^Smore: so having containers on the same host network range is only supported on Maas right now
[22:05] <rick_h> Budgie^Smore: aws only lets a host have 1 MAC address so containers can't have addresses on the same range as the host, for instance
[22:06] <rick_h> Budgie^Smore: there's work to make that work on manual/OpenStack/etc
[22:06] <rick_h> Where there's something to do IP address management on the network.
[22:06] <cnf> does anyone know what this means: 'can''t get info for image ''juju/xenial/amd64'': not found'
[22:09] <cnf> also, how do i kick something to tell it stuff has changed, and it should try again?
[22:11] <Budgie^Smore> rick_h ah ok now that makes sense
[22:11] <kwmonroe> cnf: wadda you mean by stuff?  if a charm is in an error state, you can say "juju resolved <foo>/<x>" to make juju retry the last thing that may have failed.
[22:11] <cnf> it's not in an error state
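kwmonroe's `juju resolved` suggestion, as a sketch (the unit name is hypothetical; as cnf notes, it only applies to units in an error state, not to machines stuck in pending provisioning):

```shell
# Retry the last failed hook on a unit that juju status shows in error:
juju resolved mysql/0
```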
[22:12] <Budgie^Smore> rick_h do all the instances need public IPs or can internal stuff use the VPC's private IP and only give public IPs to instances I want access to?
[22:12] <kwmonroe> rick_h: is the bundle spec public, or rather, can i make a wish list for the next bundle spec?
[22:12] <cnf> it just says 'can''t get info for image ''juju/xenial/amd64'': not found'
[22:12] <kwmonroe> rick_h: my kingdom for "services:  foo:  charm:  foo\n channel: edge"
[22:13] <cnf> so i dunno what the fuck to do :/
[22:13] <rick_h> kwmonroe: file bugs on GH/juju/charm maybe?
[22:13] <cnf> i can't use retry-provisioning, because that doesn't support containers
[22:13] <cnf> so how do i kick it?
[22:14] <kwmonroe> sure rick_h - i don't mind opening an issue, just wanted to get the right place... for ex, https://github.com/juju/charmstore/blob/v5-unstable/docs/bundles.md is the last place i saw reference to version X of a bundle spec.
[22:15] <kwmonroe> if gh/juju/charm is the right place, i'll open my wants there.
[22:15] <rick_h> kwmonroe: yes there's a v6 unstable branch now
[22:15] <cnf> so how do i make it retry provisioning containers? o,O
[22:16] <rick_h> cnf: juju retry-provisioning ?
[22:16] <cnf> rick_h: doesn't support containers!
[22:16] <cnf> error: invalid machine "1/lxd/0" retry-provisioning does not support containers
[22:16] <rick_h> cnf: ah my apologies.
[22:18] <cnf> rick_h: i do apologize if i'm a bit snappy, been running into problems like this for a few weeks now
[22:18] <rick_h> Understand, my fault. I recall that now.
[22:18] <cnf> i keep running into deadlock issues it seems
[22:19] <cnf> not confidence inspiring
[22:20] <cnf> i have stuff that has failed, and no bloody way to fix it >,<
[22:27] <cnf> hmm
[22:27] <rick_h> cnf: I'm at my son's violin. Do me a favor. Let's regroup tomorrow. I'd like to help but not sure about the background of your setup and what state things are in. Maybe tomorrow we can set a base via Hangouts and see where to go forward.
[22:27] <cnf> rick_h: that'd be nice, i'm in CET though
[22:28] <cnf> don't know if your timezone matches up
[22:28] <cnf> rick_h: but now, be at your sons violin
[22:28] <rick_h> EST here
[22:28] <cnf> you ain't getting that back
[22:28] <kwmonroe> oh lordy rick_h.  kid's violin.  you are a brave soul.
[22:28] <rick_h> All good, he's with a teacher
[22:28] <rick_h> I'm sitting outside but we're about to leave. Lessons end in 2min
[22:28] <cnf> rick_h: i'll be at the office between 10:00 and 18:00 CET tomorrow
[22:29] <magicaltrout> Junior Strings used to practice down the hall from the Junior Jazz Band when I was a kid
[22:29] <magicaltrout> it was like strangling cats
[22:29] <rick_h> cnf: ok, so you're some 5hrs ahead of me
[22:29] <cnf> rick_h: i'll go in late(er) and stick around a while
[22:29] <rick_h> cnf: so I'll be a bit late for your morning but will ping when I get my breakfast in front of my computer
[22:29] <cnf> i can make that 11:00 to 19:00 or so
[22:30] <cnf> rick_h: thanks, that'd be appreciated
[22:53] <Budgie^Smore> rick_h one more crazy question if you are still around, is there a way to specify the AWS VPC ID to use in the GUI?
[23:04] <hatch> Budgie^Smore nope
[23:05] <Budgie^Smore> hatch can you do it from the cli using jaas or am I going to need my own controller?
[23:05] <hatch> hmm
[23:06] <Budgie^Smore> I came across a juju bootstrap command that forces the vpc-id using --config
[23:06] <hatch> right....
[23:07] <Budgie^Smore> I am wondering, if I constrain the add-machine to a space that is assigned to a subnet in the right VPC, whether that would be sufficient
[23:09] <hatch> Budgie^Smore so I'm not actually sure, there was some discussion around vpc-id on a per model basis but that's outside of my ballpark
[23:09] <hatch> I can find out tomorrow though when the correct peeps get in
[23:09] <hatch> are you able to file a bug on the GUI project to that effect and I can reply in kind?
[23:09] <Budgie^Smore> hatch I think it would be awesome to have a per model VPC
[23:09] <hatch> tomorrow that is
[23:10] <Budgie^Smore> I will try and get around to that, I need to get this cluster up today if I can
[23:10] <hatch> sure, thanks, I'll make a note none the less to bring it up, so if you're around tomorrow I'll try and get back to you
[23:10] <Budgie^Smore> trying to secure it as best I can at the moment and the legacy stuff is using the default vpc is a mess
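The bootstrap-time route Budgie^Smore mentions could look like this (a hedged sketch: the VPC ID is a placeholder, and since `vpc-id` is an EC2-provider config key set at bootstrap, it implies running your own controller rather than using JaaS):

```shell
# Bootstrap a controller pinned to a specific AWS VPC:
juju bootstrap aws/us-east-1 mycontroller \
    --config vpc-id=vpc-0123abcd
```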
[23:53] <Budgie^Smore> hey hatch so I started "hand building" my model using the CLI and I don't see the model in the jaas version of the GUI
[23:53] <hatch> Budgie^Smore when you run `juju list-controllers` do you see the jaas controller?
[23:53] <hatch> selected
[23:53] <hatch> it should have a * beside it
[23:54] <Budgie^Smore> yeah and I didn't get any errors running juju deploy
[23:54] <Budgie^Smore> jaas*  k8s-aws     gburgess@external  (unknown)                      -         -     -  2.0.0
[23:55] <hatch> can you run `juju list-controllers--refresh`
[23:55] <hatch> it's showing there that you have no models/machines
[23:55] <hatch> those last few dashes
[23:55] <hatch> er
[23:56] <hatch> can you run `juju list-controllers --refresh`
[23:57] <Budgie^Smore> yeah looks like it is having problems allocating a machine, if what I am reading from juju status is anything to go by
[23:57] <Budgie^Smore> ok that command still hasn't returned hatch
[23:57] <hatch> hmm
[23:58] <hatch> it's fast here
[23:58] <hatch> maybe it got hung in the wild wild webs
[23:58] <hatch> :)
[23:59] <hatch> Budgie^Smore when you visit https://jujucharms.com/u/gburgess do you see your model? You may have to log in if you haven't already