[08:39] <magicaltrout> marcoceppi: I suspect there is an issue! http://review.juju.solutions/review/2387 :)
[09:34] <TheMue> marcoceppi: happy birthday
[12:42] <Sophie_> hellooooooooo
[12:43] <Sophie_> I need some help
[12:43] <tvansteenburgh> Sophie_: what's up
[12:44] <Sophie_> hi tvansteenburgh!
[12:44] <tvansteenburgh> o/
[12:44] <Sophie_> I am trying to bootstrap juju and I always get an error
[12:45] <Sophie_> I have maas on ubuntu and one VM as node
[12:45] <Sophie_> for juju
[12:45] <Sophie_> but I get this error ERROR juju.cmd supercommand.go:430 Get http://localhost/MAAS/api/1.0/nodes/?agent_name=f2490a45-5d59-4339-803a-d2ce9d33ca88&id=node-b09f39a4-f590-11e5-9d51-34e6ada748ce&op=list: dial tcp 127.0.0.1:80: connection refused
[12:46] <tvansteenburgh> hrm, seems that it's trying to hit maas on the bootstrap node, and that's not where it is
[12:47] <tvansteenburgh> Sophie_: you are using juju 1.25?
[12:47] <tvansteenburgh> `juju version`
[12:47] <Sophie_> I can log in to maas on localhost/MAAS, if that's what you mean
[12:48] <Sophie_> 1.24.7-trusty-amd64
[12:48] <tvansteenburgh> okay, can i see your environments.yaml file? at least, the section for this maas
[12:49] <Sophie_> ok should I copy it here?
[12:49] <tvansteenburgh> what's the maas-server value?
[12:50] <Sophie_>     maas:
[12:50] <Sophie_>         type: maas
[12:50] <Sophie_>     
[12:50] <Sophie_>         bootstrap-timeout: 2200
[12:50] <Sophie_>         # maas-server specifies the location of the MAAS server. It must
[12:50] <Sophie_>         # specify the base path.
[12:50] <Sophie_>         #
[12:50] <Sophie_>         maas-server: 'http://localhost/MAAS/'
[12:50] <Sophie_>     
[12:50] <Sophie_>         # maas-oauth holds the OAuth credentials from MAAS.
[12:50] <Sophie_>         #
[12:50] <Sophie_>         maas-oauth: 'Etws6esDr9HRvNcKMA:awpPWjxU6WX8AQ7dzK:xgDg5r2pzQqKNsYg4WpMGhxcbDgXZS63'
[12:50] <Sophie_>         authorized-keys-path: ~/.ssh/authorized_keys
[12:52] <tvansteenburgh> okay. so your juju vm is hitting localhost trying to communicate with maas
[12:52] <tvansteenburgh> that's why it doesn't work
[12:52] <tvansteenburgh> you need an ip or hostname that the juju vm can talk to
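The fix tvansteenburgh is pointing at would look roughly like this in the environments.yaml stanza pasted above. This is an editorial sketch, not from the log: `xx.xx.xx.xx` is a placeholder for an address of the MAAS host that is routable from the juju VM, and `<maas-api-key>` stands in for the real OAuth credentials.

```yaml
maas:
    type: maas
    bootstrap-timeout: 2200
    # maas-server must be reachable from the node being bootstrapped,
    # so localhost (which resolves to the guest itself) will not work:
    maas-server: 'http://xx.xx.xx.xx/MAAS/'
    maas-oauth: '<maas-api-key>'
    authorized-keys-path: ~/.ssh/authorized_keys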
[12:52] <Sophie_> maas is on localhost
[12:52] <Sophie_> on my main pc, and the juju node is virtual
[12:52] <Sophie_> I think I tried this but I will try it again
[12:53] <tvansteenburgh> what kind of virtual?
[12:53] <tvansteenburgh> kvm? lxc?
[12:53] <Sophie_> I use vmm
[12:54] <Sophie_> and kvm
[12:54] <tvansteenburgh> ok. well inside the vm, localhost is the guest itself, it can't see outside to the host
[12:54] <Sophie_> virtual machine manager
[12:55] <Sophie_> sorry, didn't get that
[12:55] <tvansteenburgh> you need an http address for maas that you can curl from inside your juju vm
[12:55] <tvansteenburgh> once that works, juju bootstrap will work
[12:55] <Sophie_> my guest can ping the internet; I logged in with ssh, if that's what you mean
[12:56] <tvansteenburgh> if you ssh into your juju vm and curl http://localhost/MAAS/ what do you get?
[12:59] <jacekn> hello. Can somebody give me an update on https://bugs.launchpad.net/charms/+bug/1538573 ?
[12:59] <mup> Bug #1538573: New collectd subordinate charm <Juju Charms Collection:New> <https://launchpad.net/bugs/1538573>
[13:00] <Sophie_> sorry for the delay tvansteenburgh
[13:00] <Sophie_> it says Failed to connect to localhost port 80: Connection refused
[13:00] <tvansteenburgh> Sophie_: right, and that's why bootstrap is failing :)
[13:01] <Sophie_> yeah I changed the environment variable to my maas ip
[13:01] <Sophie_> and still failed
[13:01] <Sophie_> when I curl with the ip it prints nothing btw
[13:02] <Sophie_> I made  maas-server: 'http://xx.xx.xx.xx/MAAS/'  bootstrapped again and failed :/
[13:02] <tvansteenburgh> Sophie_: if you curl that from the vm does it work?
[13:02] <Sophie_> when the vm reboots again it gets stuck here for 120 sec "cloud init nonet" dunno if this is something
[13:03] <Sophie_> it does not give me connection refused
[13:03] <Sophie_> it gives blank page
[13:03] <tvansteenburgh> yeah, so there are some more general networking issues that need to be resolved
[13:03] <Sophie_> like nothing happened
[13:03] <tvansteenburgh> first you need to get your vm and the maas host communicating
[13:04] <tvansteenburgh> jacekn: if no one gets to it before then, i'll be doing some reviews tomorrow morning and can take a look
[13:04] <Sophie_> I think they are, because when I boot the node with pxe it recognizes maas and registers as ready
[13:05] <A-Kaser> Hi
[13:05] <Sophie_> dont know whats wrong with my network
[13:05] <tvansteenburgh> A-Kaser: hi
[13:05] <jacekn> tvansteenburgh: thanks!
[13:06] <tvansteenburgh> Sophie_: i'm not sure how to help you with that part. marcoceppi are you around?
[13:07]  * marcoceppi reads scrollback
[13:09] <Sophie_> ok thank you very much for your time anyway :)
[13:10] <tvansteenburgh> Sophie_: my pleasure, i hope you get it resolved.
[13:11] <lamertje> hi all, how does one expose a previously undefined port after deployment?
[13:15] <tvansteenburgh> lamertje: `juju run --service wordpress "open-port 80"`
[13:17] <lamertje> sweet!! tnx tvansteenburgh!
[13:18] <tvansteenburgh> lamertje: you're welcome :)
[13:23] <marcoceppi> Sophie_: so, this VM that's running MAAS and Juju, has MAAS been configured to talk to other VMs?
[13:23] <Sophie_> hi!
[13:23] <Sophie_> maas is running on my main pc
[13:23] <Sophie_> juju is on VM
[13:24] <Sophie_> and I can ssh from maas to my vm
[13:24] <Sophie_> is this what you mean?
[13:26] <lamertje> In case anyone is looking for the right doc on the commands you can run in your hooks and via juju run --service <servicename>
[13:26] <lamertje> You can find it at: https://jujucharms.com/docs/stable/reference-hook-tools
[13:27]  * lamertje off to play !
[13:29] <marcoceppi> Sophie_: what do you mean Juju runs in a vm? As in you installed Juju in a vm - or you bootstrapped a vm?
[13:30] <Sophie_> i have a VM node and I am trying to bootstrap juju on that node
[13:30] <Sophie_> but it fails every time
[13:53] <marcoceppi> magicaltrout: tests passed \o/ http://review.juju.solutions/review/2387 it just took a while to process
[13:53] <marcoceppi> there's a pretty big backlog
[13:53] <magicaltrout> hehe
[13:53] <magicaltrout> yeah i saw marcoceppi
[13:53] <magicaltrout> thanks a lot
[13:54] <A-Kaser> how to remove a unit in "WORKLOAD-STATE" error?
[13:59] <tvansteenburgh> A-Kaser:  juju resolved wordpress/0 && juju remove-unit wordpress/0
[13:59] <neiljerram> Morning all.
[13:59] <tvansteenburgh> neiljerram: ol
[13:59] <tvansteenburgh> o/
[14:00] <neiljerram> Is there some trick for getting juju-deployer to place units on the machines that my bundle file says?
[14:01] <neiljerram> My bundle file has a "to" key for every charm that it uses.  But juju-deployer only appears to obey that in about half of the cases.
[14:01] <tvansteenburgh> neiljerram: the machines in the bundle are logical machines. they will not always match the physical machine numbers
[14:02] <neiljerram> Ah, thank you.
[14:03] <neiljerram> So in the "machines" section of the YAML, is each key a physical number or a logical number?
[14:03] <tvansteenburgh> neiljerram: np, you're not the first person to be confused by that :)
[14:04] <tvansteenburgh> neiljerram: it's a logical number
[14:04] <neiljerram> OK, and in the value of each "to" key?
[14:04] <neiljerram> And what does "juju status --format=tabular" report?
[14:05] <tvansteenburgh> neiljerram: you match that to the logical number from the machines section
[14:05] <tvansteenburgh> juju status reports the physical machine number
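The logical-versus-physical distinction tvansteenburgh describes can be sketched with a hypothetical bundle fragment (editorial illustration, not neiljerram's actual bundle): the keys under `machines:` and the `to:` targets are logical numbers, which juju-deployer maps onto the physical machine numbers that `juju status` reports.

```yaml
# Hypothetical bundle fragment: "0" and "1" below are logical ids.
machines:
  "0":
    series: trusty
  "1":
    series: trusty
services:
  keystone:
    charm: cs:trusty/keystone
    num_units: 1
    to: ["0"]    # logical machine 0
  mysql:
    charm: cs:trusty/mysql
    num_units: 1
    to: ["1"]    # logical machine 1; juju status may show a different number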
[14:05] <neiljerram> Ah, OK, so your last point is the key one, I think.
[14:05] <tvansteenburgh> yep
[14:06] <neiljerram> So what controls the mapping between logical and physical machines?
[14:06] <A-Kaser> tvansteenburgh: thx
[14:06] <tvansteenburgh> neiljerram: juju-deployer controls it, but the mapping is arbitrary
[14:07] <tvansteenburgh> A-Kaser: you're welcome
[14:08] <neiljerram> Is it possible for me to see what the mapping is, once a deployment is complete?
[14:08] <tvansteenburgh> neiljerram: only by deduction :)
[14:10] <neiljerram> OK, perhaps I should mention the eventual problem, which is that charms end up on the same physical machine, when I need them to be on different physical machines.  Is there a way that I can get juju-deployer to honour that?
[14:10] <tvansteenburgh> neiljerram: that should not happen
[14:11] <tvansteenburgh> neiljerram: can you pastebin a copy of your bundle, and the output of `juju status --format yaml`
[14:11] <neiljerram> Sure, coming up...
[14:13] <neiljerram> Here's the bundle: http://pastebin.com/9As3ECA6  Note that keystone has "to": 2 and mysql has "to": 3.
[14:17] <neiljerram> Here is the juju status: http://pastebin.com/vjQkcc8R  Note that both keystone and mysql have "machine": 2.
[14:18] <tvansteenburgh> neiljerram: looks like a bug to me
[14:18] <neiljerram> :-(
[14:19] <tvansteenburgh> neiljerram: can you file a bug here https://bugs.launchpad.net/juju-deployer
[14:19] <tvansteenburgh> neiljerram: with links to those pastes
[14:19] <neiljerram> Just checking first if my juju packages are up to date...
[14:20] <tvansteenburgh> neiljerram: i'm slammed at the moment, but will try to look into it in the next day or two
[14:20] <tvansteenburgh> neiljerram: yeah, curious if you have the latest deployer
[14:20] <neiljerram> tvansteenburgh, thanks.
[14:21] <tvansteenburgh> neiljerram: juju-deployer 0.6.4 is latest
[14:22] <neiljerram> tvansteenburgh, My package versions are at http://pastebin.com/qzqdMsfY - looks like I have 0.6.4 for juju-deployer, so that should be good.
[14:22] <tvansteenburgh> neiljerram: yeah, okay
[14:26] <neiljerram> tvansteenburgh, Could my problem be the same as https://bugs.launchpad.net/juju-deployer/+bug/1507372 ?
[14:26] <mup> Bug #1507372: Bundle v4 format: service placement when using machines orders deployment incorrectly <juju-deployer:New> <https://launchpad.net/bugs/1507372>
[14:26] <neiljerram> (I could either raise a new bug, or add my report and attachments to that one.)
[14:27] <tvansteenburgh> neiljerram: i think that's a different issue
[14:28] <neiljerram> tvansteenburgh, OK, thanks, I'll raise a new bug then.
[14:29] <tvansteenburgh> neiljerram: thank you
[14:39] <neiljerram> tvansteenburgh, fyi, https://bugs.launchpad.net/juju-deployer/+bug/1563352
[14:39] <mup> Bug #1563352: juju-deployer 0.6.4 does not honor bundle placement directives <juju-deployer:New> <https://launchpad.net/bugs/1563352>
[15:58] <cory_fu> kwmonroe, kjackal_, c0s: Have any of you seen an error like this before?  http://pastebin.ubuntu.com/15552546/
[16:12] <beisner> hi cholcombe, intending to land this one? https://review.openstack.org/#/c/296632/   it'll need a wf+1 along with the cr+2 if so.
[16:13] <cholcombe> beisner, ah ok
[16:18] <kjackal_> cory_fu, where did you see that?
[16:18] <kwmonroe> cory_fu: is the filesystem full on your namenode unit?
[16:19] <cory_fu> It's a fresh deploy of namenode
[16:19] <kjackal_> I have seen this before but it is the internal su command that fails
[16:19] <kjackal_> I mean, we could have more info if we could log in to that unit and run the command manually
[16:20] <kjackal_> that part: /usr/lib/hadoop/bin/hdfs' 'namenode' '-format' '-noninteractive'"
[16:20] <kwmonroe> cory_fu: also check that the ubuntu (or hdfs user, can't remember who does the hdfs format) can get to /usr/local/hadoop/data/cache/hadoop/dfs/name/current
[16:21] <kjackal_> this is also strange: "java.io.IOException: Cannot create directory /usr/local/hadoop/data/cache/hadoop/dfs/name/current"
[16:22] <cory_fu> kjackal_: What's your LP ID?
[16:22] <c0s> cory_fu: looks like this is because of
[16:22] <c0s>   unit-namenode-0[3002]: 2016-03-29 15:55:59 INFO unit.namenode/0.install logger.go:40 java.io.IOException: Cannot create directory /usr/local/hadoop/data/cache/hadoop/dfs/name/current
[16:22] <c0s> seems like a permission issue
[16:22] <cory_fu> c0s: Yeah, but I don't know why it started
[16:23] <c0s> started what? The NN?
[16:23] <kjackal_> cory_fu: kos.tsakalozos
[16:23] <cory_fu> c0s: I don't know why this error just started happening.  I don't know what changed
[16:23] <c0s> ah...
[16:23] <c0s> that I won't tell
[16:24] <c0s> you've asked if I've seen the error. Indeed I have ;)
[16:24] <cory_fu> unit IP is: 54.152.123.208
[16:24] <cory_fu> c0s: :)
[16:24] <c0s> sorry, not much help from me, I guess
[16:24] <cory_fu> I'm in a tmux session
[16:27] <kjackal_> here is the exception https://pastebin.canonical.com/152884/
[16:28] <kjackal_> this is strange https://pastebin.canonical.com/152885/
[16:28] <cory_fu> Hrm.  Yeah
[16:31] <cory_fu> Hrm.  Why are the perms for cache_base in hadoop-base dist.yaml 1775 instead of 0775?
[16:31] <cory_fu> That's almost certainly wrong, since I bet it makes it non-octal
[16:32] <kjackal_> was it working before?
[16:32] <kjackal_> did it just break?
[16:32] <cory_fu> Yes, it was working like 30 minutes ago
[16:33] <cory_fu> kwmonroe: Any explanation for https://github.com/juju-solutions/layer-hadoop-base/commit/8a99d258fe21aaf0ed7277937318fe276c116a7a ?
[16:35] <kwmonroe> cory_fu: 01775 isn't a valid permission.  we wanted sticky bit set, full perms for owner/group, and execute for other.
[16:35] <kwmonroe> so, 1775
[16:36] <cory_fu> The leading 0 is required for it to be read as octal.  https://api.jujucharms.com/charmstore/v5/~bigdata-dev/apache-hadoop-namenode/archive/dist.yaml
[16:36] <kwmonroe> oh ffs
[16:36] <kwmonroe> my very bad then
[16:37] <cory_fu> kwmonroe: Hangout?
[16:37] <kwmonroe> jan 15 commit though?  how has this worked for 10 weeks?
[16:37] <kwmonroe> sure cory_fu
[16:38] <kwmonroe> omw
[16:38] <cory_fu> ￼No idea
[16:38] <kwmonroe> i blame python somehow
[16:42] <lazyPower> kwmonroe - thats bit me more than once too
[16:43] <lazyPower> permissions thanks to os.chmod() gave me severe headaches for a while in the legacy etcd charm.
[16:43] <kwmonroe> i'm fixin to move all my python to popen(sudo chmod).  bash4life.
[16:43] <lazyPower> or, just write the charms in bash
[16:43] <lazyPower> i mean either way
[16:43] <kwmonroe> :)
[16:44]  * lazyPower puts on his "immatroll" hat
[16:49] <cory_fu> kwmonroe, kjackal_: I'm looking at my locally built NN's dist.yaml and cache_dir has: "perms": !!int "775"
[16:49] <cory_fu> Contrast that to the link above, which has: "perms": !!int "509"
[16:49] <cory_fu> It seems like none of my perms are being interpreted as octal any more
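The decimal-vs-octal confusion being debugged here can be reproduced with stdlib integer parsing alone. This is an editorial sketch of the two interpretations; the charm actually goes through ruamel.yaml, where an unprefixed `775` is read as decimal under YAML 1.1 rules while `0775` would be octal.

```python
# The intended mode was octal 775; without a leading 0, a YAML 1.1
# loader reads the bare integer as decimal instead.
as_octal = int("775", 8)     # the intended interpretation
as_decimal = int("775", 10)  # what an unprefixed YAML int becomes

assert as_octal == 0o775 == 509     # matches the "perms": !!int "509" seen above
assert as_decimal == 775 == 0o1407  # nonsense if passed to chmod as a mode
```

This is consistent with the two dist.yaml dumps cory_fu contrasts: one dumper preserved the value as octal (serialized as decimal 509), the other re-read it as decimal 775.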
[16:52] <lazyPower> cory_fu - i noticed that it does drop the type inline when converting using ruamel.  Is that common?
[16:52] <kwmonroe> cory_fu: do me a solid, what are the perms on /usr/lib/hadoop on your deployed NN?
[16:52] <lazyPower> i'd never seen yaml with those definitions inline until we started building charms
[16:52] <cory_fu> kwmonroe: dr----x--t 9 root root 4096 Mar 29 16:06 /usr/lib/hadoop
[16:53] <kwmonroe> heh, yeah, so that's no bueno
[16:53] <c0s> yeah, that's weird set of perms
[16:54] <kwmonroe> cory_fu: so perhaps we need to quote the perms in https://github.com/juju-solutions/layer-hadoop-base/blob/master/dist.yaml
[16:54] <cory_fu> I think this is related to https://github.com/juju/charm-tools/pull/104
[16:55] <kwmonroe> to prevent whomever is converting octal to decimal
[16:59] <cory_fu> Hrm.  My ruamel.yaml got updated in my venv somehow
[17:00] <cory_fu> Downgrading to 0.10.2 fixes this issue
[17:00] <cory_fu> I'm not sure how it got upgraded, though
[17:01] <c0s> spark 1.6.1 upgrade isn't in the public repository, is it?
[17:01] <cory_fu> kwmonroe: Why did we need the sticky bit on cache_base but not any of the other dirs?
[17:02] <c0s> not yet, that is?
[17:02] <pcdummy> hey guys :)
[17:02]  * pcdummy needs to test out JUJU soon.
[17:03] <kwmonroe> c0s: 1.6.1 is in the ~bigdata-dev namespace: https://jujucharms.com/u/bigdata-dev/apache-spark, but you wouldn't know that unless you peeked inside resources.yaml.  somebody forgot to update the readme ;)
[17:03] <cory_fu> kwmonroe: I'm fixing the README as part of my current work
[17:03] <kwmonroe> bless you cory_fu
[17:03] <c0s> ah, cool Thanks kwmonroe
[17:03] <lazyPower> oh hey pcdummy  o/
[17:04] <pcdummy> lazyPower: thanks for bringing me here, can i deploy Juju 2 on a laptop vm machine?
[17:04] <c0s> kwmonroe: so if I do
[17:04] <c0s>   % juju deploy apache-hadoop-spark-zeppelin
[17:04] <c0s> I should get the latest, right?
[17:04] <pcdummy> or do i need >8 GB Ram to test it out.
[17:04] <lazyPower> pcdummy - you sure can. I would highly recommend you trial this on xenial
[17:04] <pcdummy> i'm on xenial
[17:04] <lazyPower> even *better*
[17:05] <cory_fu> c0s: You'd need the u/bigdata-dev/apache-hadoop-spark-zeppelin version instead
[17:05] <c0s> ok, thanks
[17:05] <lazyPower> pcdummy - the release notes in /topic should be enough to get you started
[17:05] <lazyPower> pcdummy and if you have any questions, i'm here to lend a hand along the way
[17:05] <pcdummy> lazyPower: thanks a lot!
[17:06] <lazyPower> anytime :) Happy that I happened to tab into the right room at the right time :D
[17:07] <kwmonroe> cory_fu: we need the sticky bit because i think you can screw up hdfs if non-owners remove cache files.  so for example, since the ubuntu user is in the hadoop group and the dir is 775, that user could remove stuff unless we stickied it.  now, only root and hdfs can remove stuff regardless of group membership.
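kwmonroe's sticky-bit rationale can be checked with the stdlib `stat` module. An illustrative editorial sketch, not from the charm code:

```python
import stat

mode = 0o1775  # sticky bit + rwxrwxr-x, as intended for cache_base
assert mode & stat.S_ISVTX            # the leading 1 is the sticky bit
assert stat.S_IMODE(mode) == 0o1775
# Group members (e.g. the ubuntu user in the hadoop group) can still
# write into the directory, but with the sticky bit set they may only
# delete entries they own.
assert mode & stat.S_IRWXG == stat.S_IRWXG
```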
[17:15] <LiftedKilt> d34dp@@l
[17:16] <lazyPower> LiftedKilt - great movie
[17:21] <beisner> did i leave the stove on?
[17:21] <LiftedKilt> lazyPower also great password for my lab haha
[17:21] <LiftedKilt> lazyPower: oops
[17:21] <lazyPower> i'm constantly doing that, mis alt-tabbing into the wrong app and punching out my password
[17:22] <lazyPower> thankfully its usually in a terminal going nowhere
[17:22] <LiftedKilt> at least it wasn't an important password haha
[17:27] <pcdummy> http://loremflickr.com/1024/768/deadpool,movie/all
[17:33] <c0s> kwmonroe: Actually, it seems that deploy from
[17:33] <c0s>   cs:~bigdata-dev/bundle/apache-hadoop-spark-zeppelin
[17:33] <c0s> still pulls in Spark 1.4.1
[17:33] <c0s> which is probably ok for the immediate needs, but thought you might want to know ;)
[17:35] <c0s> holy crap... https://twitter.com/Smerity/status/714861045420990464 - it took people decades to figure out that parallel execution isn't faster than a sequential one? Oh my ....
[17:36] <kwmonroe> heh - nice c0s ^^
[17:37] <kwmonroe> and yeah, thanks for the heads up on the a-h-s-z bundle :/
[17:39] <c0s> sure
[17:44] <CZauX> Is there a PHP-FPM charm, or how does that fit into things with JuJu?
[17:45] <lazyPower> CZauX - there's a layer for it actually
[17:45] <lazyPower> CZauX - charms recently underwent a renovation, where we now build charms from layers, so everyone can benefit from the learnings in that layer, such as memory optimizations that get repeated on every deployment.
[17:46] <CZauX> How would I add that in the web interface, or is that a command line thing?
[17:49] <lazyPower> its a command line thing used when building charms, so there's no real representation in the GUI of that layer, aside from relations it adds, and configuration options on the final charm
[17:49] <lazyPower> there's a howto that walks through this process, let me fish that up for you
[17:50] <lazyPower> CZauX - https://jujucharms.com/docs/devel/developer-getting-started  is the 10k foot view, and if you want to view the actual instructions for how those layers are written/assembled: https://jujucharms.com/docs/devel/developer-layer-example
[17:51] <beisner> cholcombe, so is https://review.openstack.org/#/c/296632/ ready for the wf+1 (merge that schtuff) ?
[17:51] <cholcombe> beisner, yeah i think so.  icey you agree?
[17:51] <beisner> cholcombe, icey - fyi lgtm
[17:52] <icey> +1 beisner cholcombe, it should now correctly gate what should be passing :)
[17:52] <cholcombe> cool
[17:52] <beisner> boom, wf+1'd
[17:53] <beisner> thanks icey, cholcombe
[17:53] <cholcombe> woo
[17:59] <cory_fu> c0s: Oh, sorry.  I haven't published the update to that bundle yet because I was working on the readme and smoke tests
[18:01] <c0s> no worries cory_fu
[18:02] <cory_fu> c0s: I feel like the real message is that all generalizations are false.  Like everything in life, the real answer is, "it depends."
[18:02] <cory_fu> c0s: (of that tweet)
[18:10] <c0s> :)
[18:10] <c0s> cory_fu: is true.
[18:11] <c0s> However, in parallel computing you can not run your code faster than the slowest sequential part of it. Hence, in many cases, good sequential code will beat the crap out of ... well... crappy parallel code
[18:40] <A-Kaser> kwmonroe: if you update README file on https://jujucharms.com/u/bigdata-dev/apache-spark you could update the Spark version :)
[18:41] <A-Kaser> title said spark 1.4 and resources spark-1.6.1-bin-hadoop2.6
[18:41] <kwmonroe> yup A-Kaser, cory_fu already got it
[18:42] <kwmonroe> thanks :)
[18:42] <A-Kaser> fine
[18:58] <c0s> kwmonroe: is it possible to redeploy just one service in an already deployed bundle?
[18:59] <c0s> Say, Zeppelin behaves funny, so I just want to re-install it. Is it possible at all?
[18:59] <cory_fu> c0s: You can `juju remove-service` and then `juju deploy` that service again.  Are you using quickstart for deploying the bundle?
[19:00] <cory_fu> c0s: Also, be aware that since Zeppelin is a subordinate, you might end up in a situation where you'd need to remove the principal (spark) as well.  But if the charm cleans up properly after itself, you shouldn't
[19:01] <cory_fu> c0s: These instructions look better to you for the smoke-test?  https://github.com/johnsca/bundle-apache-hadoop-spark-zeppelin/blob/readme/README.md
[19:02] <cory_fu> kwmonroe: Can you take a look at the PRs I have open (spark, zeppelin, RM, NN, and that bundle's README), please?
[19:03] <kwmonroe> yup cory_fu
[19:03] <cory_fu> Thanks
[19:05] <skay> cory_fu: do you have a prompt that displays the current juju env?
[19:05] <cory_fu> skay: I do
[19:05] <skay> cory_fu: tell me more
[19:05] <skay> yay
[19:06] <c0s> cory_fu: I am on juju 2.x
[19:06] <cory_fu> c0s: Ah, nice
[19:07] <c0s> cory_fu: I like that juju action do namenode/0 smoke-test thing
[19:08] <c0s> much cleaner than long instructions on how to pull in some files and check them
[19:08] <skay> cory_fu: earlier I was asking, and lazyPower said you might. I want to display the env in a prompt because I have a shared account with someone and multiple envs
[19:08] <skay> and that's just a disaster in the making
[19:09] <cory_fu> skay: I use liquidprompt (https://github.com/nojhan/liquidprompt) and this is my liquid.ps1 file: http://pastebin.ubuntu.com/15554369/  I also have this set up as a once-a-minute cron: http://pastebin.ubuntu.com/15554384/
[19:10] <cory_fu> So that gives me both the current env and the number of machines.  I think I also have juju-machine-count running on demand, too, after some juju operations, but I can't recall how
[19:11] <c0s> cory_fu: if I removed/deployed zeppelin - shall I also add relations manually, etc?
[19:11] <skay> thanks
[19:11] <cory_fu> c0s: Yes.  I'm not sure how 2.0 handles re-deploying a partially deployed bundle, but it's worth trying just `juju deploy <bundle>` again and see if it does the right thing (adding missing services and finishing the relations)
[19:12] <skay> cory_fu: why juju switch versus juju env?
[19:12] <cory_fu> I know quickstart did not, which is why I asked about that
[19:12] <c0s> oh, I guess just add unit
[19:12] <cory_fu> skay: *shrug*  They both do the same thing, and are probably aliases
[19:12] <cory_fu> skay: Also, that will probably have to change for juju 2.0
[19:13] <skay> cory_fu: okay. I wanted to make sure I wasn't missing something about the command
[19:16] <cory_fu> marcoceppi, rick_h_: Will juju 2.0 become the default in trusty once it's released, or will it be Xenial+ only (w/o a ppa)?
[19:17] <cory_fu> Or, rather, will the stable ppa be updated to 2.0 for trusty
[19:22] <rick_h_> cory_fu: yes, but it will be a juju2 package and put into main. I'm not sure what the timeline is on that re: xenial release
[19:23] <cory_fu> rick_h_: Just trying to plan our README and other charm updates
[19:25] <lazyPower> rick_h_ - that differs from what i heard from stokachu just yesterday
[19:25] <rick_h_> lazyPower: :)
[19:25] <lazyPower> i read that the juju2 package was goign away, and a juju-1.25 package would be created
[19:25] <rick_h_> lazyPower: right, but on trusty you already have juju-core
[19:25]  * lazyPower is utterly confused
[19:25] <rick_h_> lazyPower: so on trusty, you'll have juju2 package
[19:26] <rick_h_> lazyPower: you're correct on xenial and that's the reverse story than what cory_fu was asking
[19:28] <lazyPower> ok, i suppose that makes more sense
[19:28] <lazyPower> seems weird that we have 2 install paths between the two LTS's, but i suppose there are concerns far bigger than my understanding of why :)
[19:29] <jcastro> let's pretend you're on 1.25
[19:29] <jcastro> and xenial comes out
[19:29] <jcastro> but I can't upgrade from 1.25 to 2
[19:29] <jcastro> we need to provide both
[19:29] <rick_h_> jcastro: we will
[19:29] <lazyPower> > I can't upgrade from 1.25 to 2 -   when will that scenario come to pass?
[19:30] <rick_h_> jcastro: lazyPower we're working witht he release team and folks to find a path that's acceptable to all
[19:30] <lazyPower> i thought a 1.25 to 2.0 environment jump was the *only* supported path we had to upgrading an older env to the new env
[19:33] <c0s> damn, looks like it is easier to re-bootstrap the controller
[19:34] <cory_fu> c0s: :(
[19:34] <c0s> perhaps it is my ineptness around juju
[19:34] <cory_fu> c0s: If you're on 2.0, you can create new envs w/o rebootstrapping
[19:34] <cory_fu> I'm not sure what the syntax is, though
[19:35] <lazyPower> juju create-model myname
[19:35] <lazyPower> cory_fu ^
[19:35] <stormmore1> I am still trying to get my head around the model concept in juju
[19:36] <c0s> oh... I didn't know that cory_fu
[19:36] <c0s> too late for this time though ;)
[20:01] <deanman> lazyPower: Trying to use docker charm using manual provider on a local VM and it hangs on the "waiting for agent initialisation to finish". Any specific limitation on that charm ?
[20:01] <lazyPower> deanman - which charm specifically?
[20:02] <deanman> cs:trusty/docker-8
[20:02] <lazyPower> i have a todo item to deprecate the older docker charm and instead promote the new layer-docker, as well as supplant that charm.
[20:02] <lazyPower> yeah, thats the older docker charm which uses ansible. Its fine, and works as is with docker versions <= 1.10, when it makes the systemd rollover in xenial that charm will be fully deprecated and not make the leap, in lieu of https://github.com/juju-solutions/layer-docker
[20:03] <lazyPower> deanman - now about the agent being pending - how long have you been waiting? it can take up to ~ 5 minutes with the kvm provider on lower end systems for the vm to come up, and the agent to fully init.
[20:04] <deanman> lazyPower: with the manual provider, is a charm deployed to a KVM container by default?
[20:04] <lazyPower> well manual will enlist anything you point it at so long as it has a passwordless sudo user, and it can communicate with your model-controller.
[20:05] <lazyPower> that can be a kvm instance, a vm over on digital ocean, etc.
[20:06] <deanman> Yeah, my setup is a two VM running locally, one used for state and one used for deploying docker containers, i'm bootstrapping the first with manual provider and then adding the second with "machine add". On juju status everything is up and running
[20:07] <lazyPower> deanman - can i get a pastebin of your juju status output?
[20:07] <deanman> sure, http://pastebin.ubuntu.com/15554799/
[20:10] <c0s> darn... kwmonroe cory_fu is there any way to go from one version of a bundle to a later one without scrapping everything?
[20:10] <c0s> case in point is apache-hadoop-spark-zeppelin to cs:~bigdata-dev/bundle/apache-hadoop-spark-zeppelin
[20:10] <deanman> it seems that it is not a specific charm issue; all charms that i tried to deploy on that second machine failed
[20:11] <magicaltrout> c0s: you can switch charms
[20:11] <magicaltrout> during upgrades
[20:11] <magicaltrout> https://jujucharms.com/docs/1.25/authors-charm-upgrades
[20:12] <magicaltrout> might be what you're after
[20:12] <cory_fu> c0s: You would probably have to do `juju upgrade-charm --switch` on every service, and if there are any differences in the relations, it might fail
[20:12] <lazyPower> deanman           current: failed  -- something happened during the agent provisioning that failed hard
[20:13] <lazyPower> deanman - does juju ssh docker/8 work? if so, lets see if we can get the logs to figure out where the hangup is; they will be located in /var/log/juju/
[20:13] <deanman> lazyPower: You mean the first time when added that machine that something could have gone wrong and agent not installed correctly ?
[20:13] <c0s> I see
[20:13] <c0s> re-deploy it is
[20:13] <stormmore> can anyone help me understand what controllername is in the context of juju bootstrap [options] <controllername> <cloud>[/<region>]?
[20:14] <stormmore> and how does it relate the juju model concept?
[20:14] <lazyPower> stormmore - you can name a controller anything you want so its pertinent to you
[20:14] <kwmonroe> yeah c0s, i would just destroy-model and fire up another.  the relation differences between promulgated and bigdata-dev charms would probably not survive a multi-charm upgrade.  ie, you'd upgrade plugin and immediately break everything else.  then you'd be in resolved --retry hell longer than it would take to just destroy and re-deploy
[20:14] <stormmore> lazyPower: i.e. openstack-cluster, or wordpress-site?
[20:14] <lazyPower> stormmore - so, a controller alias makes it easier to identify when you're working in a multi-environment setup, like deploying on AWS and Azure to keep redundant deployments going in 2 dc's.
[20:15] <deanman> lazyPower: It nags that KVM acceleration can NOT be used and exits with status 1. So weird, why does deploying on the state VM work but not on the main VM?
[20:15] <LiftedKilt> stormmore: a controller manages a set of models
[20:15] <lazyPower> stormmore - that sounds more like a model name than a controller name. In my specific context, i have 3 controllers i use on a daily basis -    juju bootstrap personal aws/us-east-1   as an example.
[20:16] <lazyPower> deanman 1 sec :) i cant believe that the KVM support is what would nuke that agent. it doesn't need KVM extensions in this context unless you told it to deploy --to kvm:1
[20:16] <lazyPower> stormmore - so think of it like this, a model-controller (or just the controller) is what allows you to work with many different models
[20:16] <LiftedKilt> so you could have "prod" and "dev" controllers on a cloud, and then have separate models for your different clusters inside those
[20:16] <deanman> lazyPower: Yeah that's very weird because i didn't specify any container, simply used "juju deploy <charm>", which defaulted to selecting the second VM
[20:17] <lazyPower> you can model your entire web presence in one model, and then deploy an ERP for your accounting/sales team in another model, controlled by the same controller.
[20:17] <lazyPower> deanman - yep, i think we're missing something here, can you pastebin those logs? protip: apt-get install -y pastebinit  makes that a breeze
[20:17] <stormmore> OK I think I got it, so once I have done my initial bootstrap, then I should create a model for instance for openstack
[20:17] <lazyPower> stormmore - i hope that helped clarify, is it starting to make more sense?
[20:17] <lazyPower> yes!
[20:18] <lazyPower> the default model you're presented with is an 'admin' model
[20:18] <lazyPower> its likely you want to deploy things like the juju-gui into this model, and then create different models for any additional application partitions you may want/need
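The controller/model workflow lazyPower describes might look like this on the juju 2.0 beta CLI of the time. Editorial sketch only: command names were still in flux during the betas (for example, `create-model` later became `add-model`), and `personal`, `web`, and `erp` are hypothetical names.

```shell
juju bootstrap personal aws/us-east-1   # "personal" is the controller name
juju create-model web                   # a model managed by that controller
juju switch web
juju deploy wordpress                   # deploys into the "web" model
juju create-model erp                   # a second model on the same controller
```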
[20:18] <stormmore> awesome OK now that kinda makes sense, not concepts I played with in 1.9
[20:18] <stormmore> yeah I have juju-gui deployed :)
[20:19] <stormmore> took a little bit of work (still not figured out how to auto deploy the routing changes I need on the node) but it's working as I intended for the PoC
[20:19] <lazyPower> glad we could shed some light :) it's a bit of a departure from our story to date; it really accelerates testing ideas in juju, not having to wait for a bootstrap when you want to tweak something.
[20:19] <lazyPower> and managing multiple models from a single controller also just plain kicks butt, so there's that too.
[20:20] <stormmore> I do like the model idea and can think of internal use cases, however right now I am just trying to get a MAAS, Juju, Openstack cluster PoC done
[20:20] <lazyPower> pcdummy - speaking of people poking about, did you discover everything you were looking for?
[20:21] <deanman> lazyPower: http://paste.ubuntu.com/15554853/ (docker/8), http://paste.ubuntu.com/15554860/ (redis)
[20:22] <stormmore> lazyPower: do you have any suggestion where adanced routing (i.e. multiple gateways) should be setup so that juju / maas deploys the right post-up and post-down lines for the interfaces?
[20:22] <lazyPower> stormmore - network modelling is something we introduced recently, i'm not positive on how much of that is in maas 1.9
[20:23] <stormmore> lazyPower: just as well I am running 2.0 beta 2
[20:23] <lazyPower> oh wow, you're on bleeding edge then :D
[20:24] <lazyPower> iirc stormmore - you just need to model the networking you're looking to achieve in maas, anything outside of that can be done with network spaces
[20:24] <LiftedKilt> stormmore: are you pre-provisioning your machines in maas and then adding them in juju via ssh?
[20:24] <stormmore> why not, by the time this PoC is production ready it won’t be bleeding edge no more
[20:24] <stormmore> LiftedKilt: no I am letting juju ask maas for resources
[20:24] <LiftedKilt> stormmore, lazyPower: the 2.0 API is out?
[20:25] <lazyPower> LiftedKilt - i'm not entirely sure, last i heard the core team was sprinting on the maas 2.0 / juju 2.0 integration bits
[20:25] <LiftedKilt> err the integration with juju and maas 2.0
[20:25] <lazyPower> when thumper shows up we can poke him for the details
[20:26] <LiftedKilt> lazyPower: that was my understanding as well, and when I attempted a juju2/maas2 deployment a few days ago it wasn't yet operational
[20:26] <lazyPower> deanman - silly question, but can your VM reach 10.0.2.15:17070?
[20:26] <LiftedKilt> hadn't checked to see if it was snuck into juju 2 beta 3
[20:26] <stormmore> I am not using maas 2.0 as I heard rumors that it is removing wakeonlan and for the time being I am reliant on that
[20:26] <lazyPower> deanman - i'm wondering if it's having an issue connecting to the api server. the machine agent appears to have spun up fine; it's the unit agent that i'm concerned with :/
[20:27] <lazyPower> stormmore - ok so you're on maas 1.9, and juju 2.x?
[20:27] <LiftedKilt> stormmore lazyPower: that makes a lot more sense
[20:27] <stormmore> yeah exactly
[20:27] <lazyPower> phwewwwww
[20:27] <lazyPower> ok
[20:27] <lazyPower> when i said you're on bleeding edge, i silently cringed over here
[20:27] <LiftedKilt> haha
[20:27] <lazyPower> as i know thats been a pain point we're working through
[20:27] <stormmore> ok, as bleeding edge as I can be without having to “hack” to get wakeonlan to work again
[20:27] <lazyPower> haha
[20:28] <lazyPower> we're removing WoL support from maas? thats weird
[20:28] <stormmore> yeah I thought that
[20:28]  * lazyPower is missing updates all over the place
[20:28] <LiftedKilt> I hadn't heard about removing wol from maas 2? Maybe it's just broken right now?
[20:29] <lazyPower> That seems more likely than removing it
[20:29] <lazyPower> stormmore - but yeah, model your networking under the associated tabs in maas, set your gateways and cidrs attached to your interfaces
[20:30] <LiftedKilt> WoL is pretty integral to the workflow - pxe/enlist, shutdown, commission, shutdown, deploy
[20:30] <lazyPower> stormmore - then when juju brings up the services it should "just work", as you've modeled it slightly lower in the stack. if you need a custom network that you're not modeling in maas, investigate network spaces; it's a newish feature, supported on the MAAS and AWS substrates that i'm aware of (potentially more)
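[Editor's note] The network spaces feature lazyPower mentions is driven from the CLI roughly as below. Subcommand names shifted during the 1.25 and 2.0-beta series, and the space name here is a made-up example, so treat this as a sketch:

```shell
# Spaces are imported from MAAS subnets; list what juju can see
# (in some releases this is "juju spaces" rather than "list-spaces")
juju list-spaces

# Constrain a deployment to machines with a NIC in a given space
# (space name "db-space" is hypothetical)
juju deploy mysql --constraints spaces=db-space
```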
[20:31] <deanman> lazyPower: well i don't get why it uses that IP address; the VM does indeed report two configured interfaces, eth0 (10.0.2.15) and eth1 (192.168.11.12). I used the latter to add it to the juju environment.
[20:32] <lazyPower> you're using vagrant to spin these up right? i wonder if you don't need to tweak the networking on that so it's only got a single nic
[20:32] <lazyPower> juju will poll the listed interfaces and bind to the first one it lists as the admin interface unless i'm mistaken
[20:32] <deanman> lazyPower: Yeap, vagrant...well it shouldn't get two ifaces.
[20:32] <lazyPower> which means, if you do something like install docker before juju, it'll bind to that docker0 interface, 172.0.4.x - and then things get bundled up like this
[20:33] <lazyPower> *bungled
[20:33] <deanman> lazyPower: well that's the best lead of what's wrong so far.
[20:33] <lazyPower> well, lets hack on that Vagrantfile you're using and make it right :)
[20:33] <thumper> lazyPower: say what?
[20:33] <thumper> oh...
[20:33] <thumper> maas and juju 2.0 love?
[20:33] <lazyPower> thumper <3 hows juju2 / maas2 doin? we're still same place we were right? "we're working on it and you should stop bugging me"
[20:34] <thumper> yeah... won't be any time soon
[20:34] <deanman> lazyPower: but...machine 1 has the same setup, it reports two interfaces, but deployments there work ;-)
[20:34] <thumper> where soon is in the next two / three weeks
[20:34] <thumper> will happen as soon as we can
[20:34] <thumper> we are going flat stick
[20:34] <deanman> machine 0*
[20:34] <lazyPower> k, thanks :) it was a misunderstanding earlier, i thought we had a user on maas 2.0 trying juju 2.0, i silently wept.
[20:34] <thumper> :)
[20:34] <lazyPower> deanman insert confused face
[20:34] <stormmore> lazyPower: when I setup my maas server I set it up to control both nics fully but I ran into a problem trying to deploy until I took one of the gateways away
[20:34] <thumper> lazyPower: FWIW I'm going to create a status google doc on the progress
[20:34] <lazyPower> i'm not sure why that would be the case... i can try to reproduce if i have the disk space
[20:35] <stormmore> lazyPower: ended up having to log into the deployed box create a new routing table and run a few iproute2 commands
[20:35] <lazyPower> hmm, ok - i recommend you post what you're trying to do on the juju mailing list, our networking spaces guru is on EU time, so that gives them a chance to participate in the convo
[20:35] <lazyPower> stormmore ^
[20:35] <LiftedKilt> stormmore: you might need to dpkg-reconfigure maas-region-controller
[20:35] <LiftedKilt> stormmore: and make sure it's on the right address
[20:36] <stormmore> lifeless:
[20:36] <stormmore> whoops
[20:36] <lazyPower> deanman - can you hit me with your Vagrantfile? i'll try to reproduce over here, i'm constrained on disk, but lets give it a go
[20:36] <lazyPower> juju status
[20:36] <lazyPower> gah
[20:37] <stormmore> LiftedKilt: when it happened I went through the whole configuration; as soon as I removed the one gateway address, deployments worked. I suspect it is down to how the rest of my setup is and the constraints I am working within
[20:38] <stormmore> lazyPower: I just double-checked the docs, wakeonlan / ether_wake has been removed from the development branch
[20:39] <lazyPower> stormmore - i just relayed the question to our maas devs, i'll similarly proxy a response back when i have one
[20:40] <stormmore> sweet :) thanks... yet another reason I am enjoying working with these tools :)
[20:40] <LiftedKilt> stormmore: http://paste.ubuntu.com/15555003/
[20:41] <deanman> lazyPower: http://paste.ubuntu.com/15555000/ After vagrant up i would ssh to each machine and install latest juju 1.25.3
[20:41] <LiftedKilt> stormmore: I would take a look at that and see if it might fix your problem - instead of adding the gateways, you add them as post-up routes with metrics
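[Editor's note] LiftedKilt's paste isn't reproduced in the log; as a rough sketch of the approach he describes (post-up routes with metrics instead of a second default gateway), an /etc/network/interfaces stanza might look like this. All addresses are invented, and on a MAAS-deployed node this file is normally rendered by MAAS rather than hand-edited:

```shell
# /etc/network/interfaces fragment (illustrative addresses only).
# eth0 keeps the single default gateway; eth1 gets its routes added
# post-up with a metric rather than a competing "gateway" line.
auto eth1
iface eth1 inet static
    address 192.168.11.12
    netmask 255.255.255.0
    post-up ip route add 192.168.11.0/24 dev eth1 metric 100
    post-up ip route add default via 192.168.11.1 dev eth1 metric 200
    post-down ip route del default via 192.168.11.1 dev eth1
```

The higher metric keeps eth1's default route as a fallback, which avoids the two-equal-gateways ambiguity that was breaking stormmore's deployments.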
[20:41] <lazyPower> ok give me a sec to poke around in here deanman, i'll need to fetch all the base box's and what not
[20:42] <deanman> lazyPower: i really appreciate your help
[20:44] <stormmore> lazyPower: I did consider that method, how would I implement that if one of those gateways happens to also be the maas server? I assume using space config?
[20:45] <lazyPower> LiftedKilt - hmm, how do you model that post-up route w/ maas?
[20:46] <deanman> lazyPower: and this would be my environments.yam file http://paste.ubuntu.com/15555031/
[20:46] <lazyPower> deanman anytime :) sorry you've bumped your head a bit trying to get this going. I'd like to get you sorted so you can hack on projects with us :D
[20:46] <LiftedKilt> lazyPower: if you needed to deploy that to all your nodes? eesh I'm not sure. A custom post deployment script I guess
[20:47] <LiftedKilt> I thought stormmore's problem was multi gateway on the maas server itself though, not the deployed nodes
[20:48] <stormmore> nope, the issue is with the nodes :-/
[20:49] <stormmore> http://paste.ubuntu.com/15555049/ is what I ended up doing on the node manually
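[Editor's note] stormmore's paste isn't reproduced in the log, but the "separate routing table plus a few iproute2 commands" fix he describes is classic policy routing, which generally looks like the sketch below. Every address, device, and table name here is a made-up example:

```shell
# 1. Register a second routing table (id and name are arbitrary)
echo '100 mgmt' | sudo tee -a /etc/iproute2/rt_tables

# 2. Populate it with the second network's routes and its own default
sudo ip route add 10.10.0.0/24 dev eth1 src 10.10.0.5 table mgmt
sudo ip route add default via 10.10.0.1 dev eth1 table mgmt

# 3. Steer traffic sourced from (or destined to) that address through it
sudo ip rule add from 10.10.0.5 table mgmt
sudo ip rule add to 10.10.0.5 table mgmt
sudo ip route flush cache
```

This keeps the main table's single default gateway intact, which matches stormmore's later comment that he prefers a separate table over metric-based gateways.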
[20:50] <stormmore> the MAAS server also has post-up and post-down commands to SNAT for the MAAS mgmt network
[20:52] <LiftedKilt> stormmore: gotcha - yeah configuring the networking on maas nodes is still pretty hazy for me...I'm not sure the best way to do things
[20:53] <lazyPower> oh no :( bad news
[20:53] <lazyPower> roaksoax lazyPower: yes, it is removed
[20:53] <lazyPower> stormmore - it appears you weren't wrong in your concern about WoL going away
[20:53] <LiftedKilt> stormmore: and I feel like I can't really get a grasp of maas' methodology for networking
[20:53] <LiftedKilt> lazyPower: what why?
[20:54] <stormmore> LiftedKilt: it is taking me a bit but I also have a background in networking and vmware which helps me. first thing is to stop thinking of a server as a server but instead consider it a rack in a box
[20:54] <stormmore> probably because WoL is unreliable at best
[20:55] <stormmore> I am only using it cause I don’t have access to help me setup the rpdu configuration
[20:56] <c0s> cory_fu: trying to load some files in Spark using the a-h-s-z bundle from bigdata-dev repo.
[20:56] <c0s> And am getting this
[20:56] <c0s> org.apache.spark.SparkException: Found both spark.driver.extraClassPath and SPARK_CLASSPATH. Use only the former.
[20:56] <c0s> 	at org.apache.spark.SparkConf$$anonfun$validateSettings$6$$anonfun$apply$8.apply(SparkConf.scala:444)
[20:56] <c0s> 	at org.apache.spark.SparkConf$$anonfun$validateSettings$6$$anonfun$apply$8.apply(SparkConf.scala:442)
[20:56] <lazyPower> short answer here LiftedKilt : .. but it is not a robust driver, since it doesn't support power off or status, and requires layer2 network connectivity ..
[20:56] <lazyPower> >  one issue is, it's difficult to know which rack should send the WoL packet, especially in an HA configuration with many racks
[20:57] <LiftedKilt> lazyPower: hmmm - well, that's reasonable
[20:57] <LiftedKilt> lazyPower: and was actually a question I had on my list of things to investigate - how to determine which controller would control wol for each server
[20:59] <LiftedKilt> stormmore: I come from a traditional networking background, and am trying to jump into calico or opencontrail and l3 everywhere. I understand the concepts, but it feels really foreign
[20:59] <cory_fu> c0s: Hrm.  That should be fixed.  We might need to publish a new rev of apache-spark or update the bundle
[21:00] <stormmore> LiftedKilt: totally get that :) that is why it took me a day to remember how to handle multiple default gateways in Linux again with enough confidence to test in a remote DC
[21:01] <stormmore> LiftedKilt: I prefer the separate routing table to multiple metric based gateways, just seems cleaner and possible more efficient
[21:01] <c0s> ok
[21:01] <c0s> so, for now I will try to go around with spark-shell
[21:02] <c0s> cause 1.3.0 Spark (in the currently published bundle) doesn't provide certain things
[21:12] <kwmonroe> c0s: with your a-h-s-z deployment, can you pastebin the output of "juju status"?
[21:12] <kwmonroe> the conflicting SPARK_CLASSPATH and extraClassPath should have been resolved 5 days ago with : https://github.com/juju-solutions/bundle-apache-hadoop-spark-zeppelin/commit/d4d77ba503d7ff409d1f942321cfa54099feeecd
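[Editor's note] The conflict kwmonroe references comes from Spark 1.x refusing to start when both the deprecated SPARK_CLASSPATH environment variable and the spark.driver.extraClassPath conf key are set; the exception text c0s pasted says to keep only the latter. The linked commit fixes this in the charms, but a manual workaround would look roughly like this (the jar path and use of SPARK_HOME are illustrative assumptions, not from the log):

```shell
# Spark 1.x aborts with the SparkException pasted above when both are
# set; keep only spark.driver.extraClassPath. Paths are illustrative.
unset SPARK_CLASSPATH    # drop the deprecated env var for this shell
sed -i '/SPARK_CLASSPATH/d' "$SPARK_HOME/conf/spark-env.sh"
echo 'spark.driver.extraClassPath /opt/extra-jars/*' \
    >> "$SPARK_HOME/conf/spark-defaults.conf"
```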
[21:16] <cory_fu> I have to run to dinner.  Maybe EOD, or I might be back for a bit
[21:17] <kwmonroe> marcoceppi: is there a 'charm show' equivalent for bundles?
[21:17] <admcleod1> c0s: hey what email address should i use for you?
[21:17] <lazyPower> kwmonroe - does that not work with a bundle resource?
[21:17] <c0s> here it is kwmonroe http://paste.ubuntu.com/15555219/
[21:17] <kwmonroe> lazyPower: i got this http://paste.ubuntu.com/15555216/
[21:17] <lazyPower> ah right, that was a bug that was slated to be fixed
[21:18] <lazyPower> we identified that late last week
[21:18] <kwmonroe> ack lazyPower.. is there an open issue?
[21:43] <stormmore> well that is totally weird, I just lost juju-gui after I destroyed a different charm
[21:44] <rick_h_> stormmore: ?!
[21:44] <stormmore> I was working in the juju, decided I wanted to destroy a charm I just deployed and then now I am getting connection time outs on juju-gui
[21:45] <stormmore> working in the juju-gui
[21:45] <rick_h_> you destroyed it in the GUI UI? Or from the cli?
[21:45] <rick_h_> stormmore: ^
[21:46] <stormmore> rick_h_: in the GUI
[21:46] <rick_h_> stormmore: to the same machine/host?
[21:46] <rick_h_> stormmore: if you can replicate/describe please file a bug and we'll get on that as that sounds dangerous. https://github.com/juju/juju-gui/issues
[21:47] <stormmore> rick_h_: yes, I had just deployed neutron-gateway, thought I messed up the constraints, decided to destroy and redeploy, and now the gui is nonresponsive
[21:48] <stormmore> rick_h_: hoping not to recreate :P most likely right now I am going to detroy the controller and start again
[21:48] <rick_h_> stormmore: ok, is this 2.0 beta3?
[21:48] <rick_h_> hatch: ^ fyi
[21:48] <stormmore> rick_h_: beta 2
[21:48] <rick_h_> stormmore: k
[21:49] <stormmore> is 3 out? can I upgrade straight to it, or is it a burn-and-rebuild operation?
[21:51] <stormmore> rick_h_: it might be worth noting, on logging into the node I see that it didn’t in fact destroy the neutron-gateway, going to see if I can clean up using the cli first
[21:53] <rick_h_> stormmore: yes, 3 came out this morning
[21:53] <rick_h_> stormmore: make sure to kill any running controllers
[21:54] <rick_h_> stormmore: it *might work* but I'm not 100% sure if you'll need to kill/restart
[21:55] <stormmore> OK sweet, well I am about to destroy this controller even if I figure out why the juju-gui has stopped responding
[21:55] <rick_h_> stormmore: k, yea upgrade to beta3 and you'll get the latest good stuff. Loads of big improvements in b3
[21:56] <stormmore> sweet :) I have so far been really impressed with each and every upgrade I have done in the last 3 months
[21:56] <stormmore> it pays to have a few servers as a playground for sure :)
[21:57] <rick_h_> playgrounds make life more fun
[21:57] <stormmore> oh yeah :) and right now that is what I have until I get a stable PoC
[21:58] <stormmore> rick_h_: know if there is a way to restart the juju-gui service without restarting the node?
[21:59] <rick_h_> stormmore: yes, there's a service on there called 'guiserver' I think
[21:59] <rick_h_> stormmore: sudo service guiserver restart
[21:59] <rick_h_> maybe?
[21:59] <kwmonroe> hey c0s, GREAT news and bad news... great news:  the latest bigdata-dev a-h-s-z bundle includes charm revisions that fix your classpath conflict: https://jujucharms.com/u/bigdata-dev/apache-hadoop-spark-zeppelin.  bad news:  your deployment is too old (by about 4 hours).
[22:00] <c0s> yeah, well ;)
[22:00] <stormmore> trying :)
[22:00] <c0s> I will continue with what I have - shark-shell works for the poking around
[22:00] <c0s> in am, I will get the latest one and give Z a spin ;)
[22:00] <c0s> thanks kwmonroe
[22:01] <c0s> it took me a bit longer to get up and running all I needed, but you know ;)
[22:01] <kwmonroe> specifically c0s, you need spark-69, zeppelin-49, and plugin-65... for sure continue on if you're able, and when you need a better spark-submit, burn that model and fire up the latest ;)
[22:01] <c0s> yeah, what I have will sustain me through the day ;)
[22:05] <stormmore> last test before I kill this controller: destroy and redeploy juju-gui
[22:07] <stormmore> think http://paste.ubuntu.com/15555560/ says it all, ok killing controllers and upgrading to beta 3
[22:09] <rick_h_> stormmore: ok, so that's good and should be error info in the unit logs there
[22:10] <stormmore> rick_h_: on the node? path?
[22:10] <rick_h_> stormmore: /var/log/juju/unit-XXXXXXX where XXX is the unit name/etc
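[Editor's note] A quick way to pull the failure rick_h_ points at out of those agent logs; the unit name below is an example, not stormmore's actual unit:

```shell
# On the affected machine; unit logs are named unit-<service>-<n>.log
sudo tail -n 50 /var/log/juju/unit-juju-gui-0.log

# or watch all agent logs while reproducing the failure
sudo tail -f /var/log/juju/*.log
```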
[22:15] <stormmore> rick_h_: “subprocess.CalledProcessError: Command '['apt-get', 'update']' returned non-zero exit status 100” seems the pertinent line from the log. think my node network got messed up beyond repair
[22:15] <rick_h_> stormmore: ah, ok so it couldn't apt-get installed
[22:17] <stormmore> rick_h_: that is what it looks like, which would fit with the fact that the routing got hosed; going to separate the gateway onto a different node to rule that problem out
[22:18] <rick_h_> stormmore: gotcha, ok I'm scared less then
[22:18] <hatch> hello
[22:18] <rick_h_> stormmore: the whole "destroy X and Y goes away" freaked me out a bit :)
[22:19] <rick_h_> hatch: carry on, nothing to see here
[22:19]  * hatch carry's on
[22:19] <stormmore> rick_h_: right :) still annoying that destroying X removed some other required setting
[22:20] <rick_h_> stormmore: well, you destroyed something that borked networking
[22:20] <stormmore> yup
[22:37] <LiftedKilt> has anyone else had a problem with destroy-model leaving an empty unusable model in the ui?
[22:49] <stormmore> fun now I need to figure out how to create an internal mirror repo too
[23:08] <rick_h_> LiftedKilt: yes, just got link to the bug on that https://bugs.launchpad.net/juju-core/+bug/1534627
[23:08] <mup> Bug #1534627: Destroyed models still show up in list-models <2.0-count> <conjure> <juju-release-support> <juju-core:Triaged> <https://launchpad.net/bugs/1534627>
[23:13] <stormmore> OK so I created a new controller using beta 3 but machine 0 isn’t showing up and juju list-machines isn’t showing any machines
[23:14] <rick_h_> stormmore: juju list-models
[23:14] <stormmore> # juju list-models
[23:14] <stormmore> NAME      OWNER        LAST CONNECTION
[23:14] <stormmore> admin     admin@local  never connected
[23:14] <stormmore> default*  admin@local  40 seconds ago
[23:14] <rick_h_> stormmore: beta3 fixes it so that there are two models out of the box, the admin model that you don't use and the default one that is the empty default space to work in
[23:14] <rick_h_> stormmore: so create-model or use default to work in; machine 0 is in the admin model and so doesn't show in default and other models
[23:15] <stormmore> rick_h_: so how do I deploy services to machine 0?
[23:16] <LiftedKilt> rick_h_: Thanks for the link
[23:16] <rick_h_> stormmore: you can juju switch admin
[23:17] <rick_h_> stormmore: and deploy on there, but in order to help prevent folks from doing that when you can have many models it's a bit discouraged
[23:17] <rick_h_> stormmore: using lxd containers or the like is much preferred for density, but machine 0 is hosting the state data for all models so deploying/removing on that is potentially messy
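[Editor's note] rick_h_'s suggested workflow, as a short CLI sketch; model names follow the beta 3 layout described above, and the deploy form matches the command stormmore runs later in the log:

```shell
# Deploy into the admin model, matching the old "gui on machine 0"
# examples (discouraged, since machine 0 hosts state for all models)
juju switch admin
juju deploy --to 0 juju-gui

# Preferred: keep workloads in their own model
juju switch default
juju deploy mysql
```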
[23:17] <stormmore> rick_h_: oh I get that, but in most examples, juju-gui usually gets deployed to machine 0
[23:18] <stormmore> rick_h_: not quite ready for lxd, still using trusty
[23:20] <rick_h_> stormmore: gotcha, yes I'd juju switch admin and juju deploy there
[23:20] <rick_h_> stormmore: fyi that there's a new juju gui command that will work so that the gui is always built into juju on bootstrap
[23:20] <rick_h_> stormmore: so it won't be a problem by the next release I believe
[23:20] <stormmore> rick_h_: oh do share :)
[23:21] <rick_h_> stormmore: juju gui --help
[23:23] <stormmore> rick_h_: oh nice, can’t wait now :P
[23:23] <rick_h_> stormmore: so atm you'd have to git clone the juju-gui and do https://pastebin.canonical.com/152913/
[23:23] <rick_h_> stormmore: but soon, bootstrap will get the gui release and auto set it up
[23:26] <stormmore> rick_h_: that will definitely cut out a couple of steps in deployment :)
[23:27] <stormmore> rick_h_: shame I don’t have access to that pastebin ;-)
[23:29] <rick_h_> stormmore: doh, my bad. http://paste.ubuntu.com/15555902/
[23:30] <stormmore> nice, yeah definitely makes sense since I am not sure anyone wouldn’t want the gui to show off the models :)
[23:39] <stormmore> so I am running into a problem deploying juju-gui, keep getting http://paste.ubuntu.com/15555944/
[23:41] <rick_h_> stormmore: hmm, can't get to the charmstore? can you reach api.jujucharms.com/charmstore/v4/trusty/juju-gui-52/archive on the state server?
[23:43] <stormmore> yes
[23:43] <rick_h_> stormmore: can you deploy other services there?
[23:43] <stormmore> rick_h_: haven’t tried yet, let me try something else
[23:44] <rick_h_> the error looks like it's getting a 400 from the https charmstore request, but not sure why it would.
[23:44]  * rick_h_ wonders if it'll do that on any charm or if there's something wrong with the api/gui charm combo
[23:45] <stormmore> rick_h_: looking like it http://paste.ubuntu.com/15555975/
[23:45] <stormmore> rick_h_: I am aware just how stupid it is to deploy mysql on the same box, but hey, it was a test ;-)
[23:46] <rick_h_> stormmore: lol yea, appreciate it
[23:46]  * rick_h_ pokes at charm to see if it's ok
[23:47] <stormmore> just mysql is in “Unit is ready” state
[23:54] <stormmore> 2nd controller destroyed today
[23:55] <rick_h_> stormmore: hmm, so permissions are right on the charm and such. what did you hit?
[23:56] <stormmore> rick_h_: I just ran juju deploy --to 0 juju-gui and it didn’t work :-/
[23:56] <rick_h_> stormmore: yea, trying to see why
[23:57] <stormmore> rick_h_: giving it another shot, seeing if I can replicate from clean deployment
[23:57] <lazyPower> stormmore - thanks for digging deep on this :)
[23:57]  * rick_h_ is bootstrapping to see if he can dupe
[23:58] <stormmore> oh any time :) haven't had this much fun “learning” new tech in awhile
[23:58] <rick_h_> learning == fun :)
[23:59] <lazyPower> ayyyy you kids today and your clouds and learning and modeling. its like you hate the manual side of ops ;)
[23:59] <stormmore> for the record, my original PoC of MAAS and Juju has had the direct consequence of us hiring a contractor to help investigate its use in our lab on a large scale