[07:50] <kjackal> admcleod: yes I saw your comments on Mahout
[07:50] <admcleod> kjackal: cool! :)
[07:50] <kjackal> do you think the execution time was acceptable?
[07:51] <admcleod> hmmm it seemed ok yes
[07:51] <kjackal> ok, cool
[07:51] <admcleod> but it would still be preferable to have a non-hdfs non-yarn smoke test
[07:52] <kjackal> agreed
[07:52] <kjackal> I initially thought that the localhost target was deprecated for Mahout, but I was wrong
[07:53] <admcleod> and there are several references to recommender algorithms that dont require hdfs
[07:53] <kjackal> there are some algorithms that you can run locally
[07:53] <admcleod> yep
[07:53] <kjackal> really?
[07:53] <kjackal> It could be an easy fix then
[07:53] <kjackal> ok, so! My plan is to finish Kafka, and then either Mahout or HBase depending on the feedback I get from you
[07:54] <admcleod> kjackal: this one for example: https://mahout.apache.org/users/classification/twenty-newsgroups.html
[07:54] <kjackal> Nice! I will try to refactor this
[07:55] <kjackal> this=action+amulet
[08:01] <admcleod> kjackal: cool
[08:02] <admcleod> kjackal: https://mahout.apache.org/users/misc/testing.html
[08:04] <kjackal> admcleod (or anyone else): I was looking at flannel yesterday.  https://jujucharms.com/u/hazmat/flannel/trusty/1 I was trying to make a monster VM from smaller ones. Have you seen this before? Wouldn't it be great if there was an option on the manual provider to provision lxc containers inside the machines you give it?
[08:07] <admcleod> kjackal: pretty cool although i dont understand your last sentence
[08:07] <admcleod> kjackal: https://github.com/apache/mahout/blob/b25a70a1bc6b9f8cb6c89947e0eaba5588463652/mr/src/test/java/org/apache/mahout/driver/MahoutDriverTest.java
[08:07] <kjackal> You know how you have the manual provider and you manually give the machines you have access to
[08:08] <kjackal> then if you deploy something it gets deployed to those machines, right?
[08:08] <admcleod> kjackal: right
[08:09] <kjackal> What if you could tell the manual provider to spawn lxc containers inside the manually provided machines
[08:09] <kjackal> based on a round robin or whatever other policy
[08:09] <kjackal> What happened? did we move to IPv6?
[08:10] <admcleod> kjackal: i see, right
[08:10] <kjackal> admcleod: http://pastebin.ubuntu.com/17392595/ look at zkb
[08:11] <admcleod> kjackal: what cloud is that?
[08:11] <kjackal> local with lxc on juju 2.0 deploying apache-zookeeper
[08:12] <admcleod> oh, somethings up with your lxc then
[08:12] <kjackal> next deployment got IPv4 !!!
[08:12] <admcleod> logs please
[08:13] <kjackal> I wouldn't dare!
[08:13] <kjackal> Ahhh ok only because its you!
[08:16] <kjackal> admcleod: http://pastebin.ubuntu.com/17392643/
[08:18] <admcleod> kjackal: if you ssh into it does it actually have an ipv4 address as well?
[08:20] <kjackal> admcleod: looks legit http://pastebin.ubuntu.com/17392669/
[08:21] <admcleod> kjackal: so, if the others have ipv6 also, seems like juju displaying the wrong address
[08:22] <kjackal> yes, probably this
[08:25] <admcleod> kjackal: https://bugs.launchpad.net/juju-core/+bug/1574844
[08:25] <mup> Bug #1574844: juju2 gives ipv6 address for one lxd, rabbit doesn't appreciate it. <conjure> <juju-release-support> <landscape> <lxd-provider> <juju-core:Won't Fix> <rabbitmq-server (Juju Charms Collection):Fix Released by james-page> <https://launchpad.net/bugs/1574844>
[08:28] <kjackal> hm.... I must be running an old juju version
[08:28] <kjackal> beta7
[08:28] <kjackal> there should be at least a beta8, right?
[08:31] <admcleod> kjackal: yes
[08:37] <kjackal> admcleod: were you involved in the IP issues we were seeing because of java?
[08:37] <kjackal> I think kafka could be affected by this: http://stackoverflow.com/questions/1881546/inetaddress-getlocalhost-throws-unknownhostexception
[08:37] <kjackal> admcleod: did we fix this with upgrading to java8?
[08:37] <admcleod> kjackal: i am aware of a few different issues re dns and hostnames and java 7
[08:38] <admcleod> kjackal: is this on lxc also?
[08:38] <kjackal> yes, lxc juju 2.0
[08:38] <Yash> Hello
[08:38] <Yash> I'm facing a problem.
[08:38] <Yash> 2016-06-16 08:36:27 DEBUG juju.api apiclient.go:500 error dialing "wss://[fd4f:23ae:5d73:5c67:216:3eff:febc:8b38]:17070/model/3923551f-dcf9-4ca8-8b32-dc010722721b/api", will retry: websocket.Dial wss://[fd4f:23ae:5d73:5c67:216:3eff:febc:8b38]:17070/model/3923551f-dcf9-4ca8-8b32-dc010722721b/api: dial tcp [fd4f:23ae:5d73:5c67:216:3eff:febc:8b38]:17070: getsockopt: connection refused
[08:38] <Yash> How to solve this?
[08:38] <Yash> I rebooted my machine many times with no luck
[08:39] <Yash> Ubuntu 16.04 and juju 2.0 beta
[08:39] <admcleod> kjackal: right well we came across the problem on joyent since the default joyent resolvers are google's public nameservers, so you can't use InetAddress.getLocalHost() to resolve the local hostname (for example)
[08:39] <kjackal> http://pastebin.ubuntu.com/17392798/
[08:39] <admcleod> kjackal: yeah, cos theres no DNS
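The Java pitfall kjackal links can be reproduced outside the JVM. Here is a minimal Python sketch of the same failure mode, with Python's socket calls standing in for Java's InetAddress.getLocalHost() (this sketch is an illustration, not code from the discussion above):

```python
import socket

def resolve_local_hostname():
    """Mimic Java's InetAddress.getLocalHost(): resolve this machine's own hostname.

    On hosts where the hostname is missing from /etc/hosts and DNS (e.g. a fresh
    lxc container, or a box pointed at public nameservers like the joyent case
    above), resolution fails -- the same condition that makes the JVM throw
    UnknownHostException.
    """
    name = socket.gethostname()
    try:
        return socket.gethostbyname(name)  # e.g. "10.0.3.17"
    except socket.gaierror:
        return None  # Java would raise UnknownHostException here

print(resolve_local_hostname())
```

The usual workaround in charm code was to make sure the unit's hostname maps to an address in /etc/hosts (e.g. a `127.0.1.1 <hostname>` entry) so resolution never has to leave the box.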
[08:40] <Yash> @kjackal are you suggesting anything to me? or other problem?
[08:42] <kjackal> hi Yash I think we have separate problems :)
[08:42] <Yash> ok
[08:43] <admcleod> Yash: how did you bootstrap the environment?
[08:44] <admcleod> Yash: and what cloud/substrate is it?
[08:44] <Yash> juju bootstrap lxd-test localhost
[08:44] <Yash> https://jujucharms.com/docs/devel/getting-started
[08:44] <Yash> It was working fine
[08:44] <Yash> I installed many openstack components
[08:45] <Yash> then pending machine problem.. so I removed those machine and services and rebooted whole machine
[08:45] <Yash> Now I'm trying on desktop
[08:46] <Yash> Single machine with 24 GB RAM and 4 TB disk
[08:46] <admcleod> can you actually telnet to fd4f:23ae:5d73:5c67:216:3eff:febc:8b38 port 17070? is that port open on that ip?
[08:46] <Yash> I found out if we can restart machine 0 it would resolve problem
[08:47] <Yash> but juju 2.0 changed things; there is no /var/lib/juju dir
[08:47] <Yash> instead lxd and lxcfs
[08:48] <Yash> which contains all of it
[08:48] <admcleod> babbageclunk: hello!
[08:48] <babbageclunk> admcleod: hi!
[08:48] <babbageclunk> admcleod: uh oh
[08:48] <admcleod> Yash: unfortunately my experience with juju 2.0 and networking isnt great.. however..
[08:48] <Yash> Any help?
[08:48] <admcleod> ;)
[08:48] <admcleod> Yash: maybe babbageclunk can help ;)
[08:49] <Yash> @babbageclunk can you please suggest anything?
[08:49] <babbageclunk> Yash: so, you're trying to reboot machine 0 but can't?
[08:50] <Yash> Yea
[08:50] <Yash> Trying to solve the issue as per Stack Overflow
[08:50] <Yash> but since 2.0 changed this also
[08:50] <Yash> so can't find proper way
[08:50] <babbageclunk> Yash: can you ssh into the machine with "juju ssh 0"?
[08:51] <Yash> juju is in a hung state so I can't
[08:51] <babbageclunk> And it's all deployed on lxd?
[08:51] <Yash> Yes
[08:52] <babbageclunk> How about rebooting the container with lxc restart?
[08:52] <Yash> How can I do?
[08:53] <Yash> I've 2 weeks exp only
[08:53] <babbageclunk> :)
[08:53] <babbageclunk> I'm pretty new here too.
[08:53] <Yash> ok :)
[08:54] <babbageclunk> Try "sudo lxc restart <container name>"
[08:54] <babbageclunk> you can get the container name from "sudo lxc list"
[08:54] <babbageclunk> It should come back up pretty quickly.
[08:55] <babbageclunk> (Assuming there's not some other problem.)
[08:55] <Yash> ok Let me try..Thanks
[08:58] <babbageclunk> Any luck?
[09:08] <Yash> nope
[09:08] <Yash> It just restarted, but same problem
[09:08] <Yash> DEBUG juju.api apiclient.go:500 error dialing "wss://[fd4f:23ae:5d73:5c67:216:3eff:febc:8b38]:17070/model/3923551f-dcf9-4ca8-8b32-dc010722721b/api", will retry: websocket.Dial wss://[fd4f:23ae:5d73:5c67:216:3eff:febc:8b38]:17070/model/3923551f-dcf9-4ca8-8b32-dc010722721b/api: dial tcp [fd4f:23ae:5d73:5c67:216:3eff:febc:8b38]:17070: getsockopt: connection refused
[09:08] <Yash> How to solve this?
[09:09] <Yash> Worse..I can't use juju so don't know what is problem and how to solve it
[09:10] <Yash> I'm not using MAAS as it's optional for single machine. Right?
[09:11] <Yash> @babbageclunk anything you may suggest?
[09:12] <babbageclunk> Yash: Sorry, on the phone at the moment, davecheney's comment in #juju-dev might be worth a look
[11:04] <admcleod> i want to constrain a particular service deployed in an amulet test so that it has an SSD (i only care about running it on AWS at the moment) - how do i achieve that?
[13:05] <magicalt1out> just told the guy who owns the other 50% of my company I'm leaving to work for NASA full time..... talk about dumping a spanner in the works.....
[13:12] <rick_h_> magicaltrout: congrats?
[13:14] <magicaltrout> thanks rick_h_ , one of those weird things, I don't really want to quit, or at least go very part time
[13:14] <magicaltrout> but how often do you get a job offer from NASA to work on big data stuff?
[13:14] <magicaltrout> especially when you live in the UK
[13:15] <kjackal> nice! you will see really big data there!
[13:15] <kjackal> true big data!
[13:17] <lazyPower> no doubt right? all that historical sensor and probe data to churn through
[13:17] <lazyPower> magicaltrout - im not a data scientist, but the prospects of that are making me jealous
[13:17] <kjackal> What software stack do they have over there in NASA, magicaltrout? In house?
[13:19] <magicaltrout> a lot of Hadoop, SciSpark and IPython/Zeppelin at the mo kjackal
[13:19] <magicaltrout> of course depends what area you work in I guess
[13:20] <kjackal> magicaltrout: nice!
[13:21] <magicaltrout> i'm gonna charm up scispark at some point over the next few months
[13:21] <magicaltrout> I did a big data demo in San Diego last week
[13:21] <magicaltrout> the guys who were there loved it
[13:23] <kjackal> :) thank you magicaltrout
[13:24] <magicaltrout> sadly for the project I joined we're too late for Juju so I'm introducing them to docker
[13:24] <magicaltrout> 1 step at a time I guess
[13:33] <kjackal> magicaltrout: what about going to space? any progress there? :)
[13:36] <magicaltrout> the mrs banned that career move a long time ago
[13:49] <admcleod> magicaltrout: congrats
[14:34] <admcleod> so...
[14:34] <admcleod> i want to constrain a particular service deployed in an amulet test so that it has an SSD (i only care about running it on AWS at the moment) - how do i achieve that?
[14:46] <aisrael> admcleod: Pretty sure you can specify that via constraint. Let me see if I can find an example
[14:46] <admcleod> aisrael: thanks :}
[14:51] <aisrael> admcleod: http://pastebin.ubuntu.com/17397999/
[14:51] <aisrael> basically, you have to force it to one of the aws instance types that has ssd backing
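A small sketch of what "forcing an SSD-backed instance type" amounts to: a constraints mapping rendered into Juju's `key=value` constraints string. Both the `i2.xlarge` instance type (assumed here to be an SSD-backed AWS type) and the helper name are illustrative, not from aisrael's pastebin:

```python
def constraints_str(constraints):
    """Render a dict as a Juju constraints string ("k=v k=v"), sorted for stability."""
    return " ".join("{}={}".format(k, v) for k, v in sorted(constraints.items()))

# Assumption: i2.xlarge is one of the AWS instance types with local SSD backing.
ssd_constraints = {"instance-type": "i2.xlarge", "root-disk": "32G"}
print(constraints_str(ssd_constraints))  # instance-type=i2.xlarge root-disk=32G
```

If memory of the amulet API serves, a dict like this could be passed when adding the service in the test (e.g. `d.add('myservice', constraints=ssd_constraints)`), which is how the pinning aisrael describes would reach AWS.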
[14:52] <admcleod> aisrael: ah yeah cool thatll do, thanks
[15:42] <petevg> lazyPower: I'm review queuing, and have a question about https://bugs.launchpad.net/charms/+bug/1587641, which you +1ed a week ago. It's still in a "new" state. Can I move it to "fix released"?
[15:42] <mup> Bug #1587641: Update for MariaDB charm <Juju Charms Collection:New> <https://launchpad.net/bugs/1587641>
[15:42] <lazyPower> petevg - oh indeed! i missed closing the bug, sorry about that
[15:43] <petevg> No worries.
[15:44] <petevg> lazyPower: I closed out the bug. (Learning how to do so was useful :-) )
[15:44] <lazyPower> \o/ glad my mistake was a learning experience :D
[15:44] <lazyPower> thats the best kind of mistakes to make
[16:46] <balalaika> What's the best way to expose a charms IP to the user?
[16:46] <balalaika> I'm deploying gitlab and I want them to be able to point their domain to the address.
[16:48] <balalaika> I'm aware of unit-get public-address juju helper method.
[16:49] <balalaika> Should I just document in the README that they should inspect the deployed service via CLI?
[17:17] <Prabakaran> For my other charm requirement I had to write this particular template http://paste.ubuntu.com/17332169/ in bash. <lazypower> could you please help me on this? Also, I have to copy some JAR files to the mysql charm container where it is installed. Is it possible for me to do it?
[17:18] <lazyPower> Prabakaran - ok so your charm layer is no longer in python? its in bash now?
[17:19] <Prabakaran> I am asking this for another charm which I am developing, IBM Platform RTM
[17:19] <lazyPower> ah, ok. I was confused on why the last minute language change
[17:20] <Prabakaran> I have noted in my learning section whatever you had sent to me
[17:20] <Prabakaran> it was helpful
[17:20] <Prabakaran> but I need your help in bash also
[17:28] <lazyPower> Prabakaran http://paste.ubuntu.com/17404518/
[17:28] <lazyPower> my bash reactive is questionable at best, fix syntax where applicable
[17:31] <Prabakaran> ya sure.. Thanks for your immediate help on this <lazypower>.. I will implement this.... but adding to that, I have to copy some JAR files to the mysql charm container where it is installed. Is it possible for me to do it?
[17:32] <lazyPower> It is, but i'm unclear on what you'll do after you've sent them to the mysql unit. jars aren't very helpful on their own, right? you'll need to not only copy them but also act on them, such as run the jar files in a JRE
[17:33] <lazyPower> so right, while you need to copy them, it seems that what you'll really want to do is add *another* relationship and interface to mysql specific to your use case, so you can take those actions once you've received the jar files. Otherwise you've put bits on disk and can't do anything with them
[17:33] <lazyPower> kwmonroe - does that sound about right? ^
[17:35] <lazyPower> Prabakaran - before you go off to implement that, i'd like to clarify with kwmonroe as he's got some familiarity with the goals of your charm(s)
[17:36] <kwmonroe> Prabakaran: what jars do you need to copy to the mysql unit?
[17:38] <kwmonroe> if they are something that others could use, i'd suggest opening a feature req against mysql to control the inclusion of those jars as a config opt in mysql.
[17:40] <kwmonroe> or set up a shared filesystem that both mysql and your charm support (nfs, etc).. you could stick the jars out there, but then would probably need to tell mysql to use that shared location as a 'classpath' (if that's even a thing for mysql)
[17:40] <Prabakaran> Sorry, I was wrong.. there was a tar file which has some *.sql files which I need to copy to the container and run all those... but as per the recent chat with <lazypower> we can do this with mysql-client, wherein we don't need to copy those *.sql files out of the tar
[17:41] <Prabakaran> but i asked this for my understanding
[17:42] <kwmonroe> right - no need to copy *.sql files over to the mysql host.  you can execute any *.sql against the mysql charm using the 'mysql -h host -u user -p password <command>' like lazyPower had in that earlier pastebin
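kwmonroe's recipe can be sketched as simply building the remote client invocation. One detail worth noting: the real mysql client wants the password glued to `-p` with no intervening space, and takes a statement via `-e` (or a *.sql file on stdin). The host, user, and password below are made up for illustration:

```python
def mysql_cmd(host, user, password, sql="SHOW DATABASES;"):
    """Build a mysql client invocation that runs SQL against a remote unit.

    Note: -p takes the password with NO space after it, and -e passes a
    statement string (alternatively, feed a *.sql file on stdin).
    """
    return ["mysql", "-h", host, "-u", user, "-p" + password, "-e", sql]

print(mysql_cmd("10.0.3.5", "wordpress", "s3cret"))
```

In a relation hook, host/user/password would come from the relation data the mysql charm publishes, so nothing ever needs to be copied onto the mysql unit itself.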
[17:42] <beisner> lutostag, jcastro - re: the jenkins charm ... is the source of truth for dev @ https://github.com/jenkinsci/jenkins-charm ?    i ask because of this proposal in the queue:  https://code.launchpad.net/~lutostag/charms/trusty/jenkins/xenial/+merge/296222
[17:42] <lazyPower> oooo
[17:42] <lazyPower> beisner - good catch on that one
[17:43] <lazyPower> i'm pretty sure we did move source of truth to upstream
[17:45] <beisner> lutostag, ps tons of thx for fixing up jenkins on X.  that was on my oh-shoot list o' stuff to do.
[17:48] <lutostag> beisner: ah, I had no idea about that
[17:49] <lutostag> I can do a PR through github if that is preferred
[17:49] <beisner> just looked back to confirm.  if ya blinked, you might have missed it.  https://lists.ubuntu.com/archives/juju/2016-February/006611.html
[17:50] <lutostag> indeed, I'll PR there tomorrow/next week
[17:51] <beisner> lutostag, thx again sir :)
[18:03] <Prabakaran> Thanks kevin and lazypower for the explaination
[18:03] <Prabakaran> I have small doubt in mysql charm and its interface.
[18:03] <Prabakaran> Here the mysql charm is a non-layered charm, and it would have exposed hostname, port, username and password using the relation-set command in a relation hook. But how does the mysql charm set/pass those values to the mysql interface, like how we use relation calls in a layered charm to the interface? Can you please explain the flow and how it works?
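A toy sketch of the flow being asked about: what the non-layered mysql charm publishes with `relation-set` in its hook is exactly what the interface layer on the requiring side reads back (via `relation-get` under the hood). Everything here is illustrative; in a real deployment the data bag lives in Juju, not in a dict:

```python
# Toy stand-in for a Juju relation data bag. In a real deployment the values
# come from the hook tools; this dict is purely illustrative.
relation_bag = {}

def relation_set(**kwargs):
    """What mysql's (non-layered) relation hook does: publish its settings."""
    relation_bag.update({k: str(v) for k, v in kwargs.items()})

def relation_get(key):
    """What the requiring side's interface layer reads back."""
    return relation_bag.get(key)

# mysql side publishes...
relation_set(host="10.0.0.5", port=3306, user="wp", password="secret")
# ...and the interface layer on the other side of the relation reads:
print(relation_get("port"))  # prints 3306
```

So a non-layered charm interoperates with a layered one without knowing anything about interfaces: the relation data bag is the common contract, and the interface layer is just a typed wrapper over `relation-get`/`relation-set`.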
[18:06] <beisner> lazyPower, so i wonder:  what does the get-it-into-the-cs story look like for jenkins @ its new home, and what should become of the lp branch?
[18:06] <lazyPower> beisner - we've been dropping lp branches like flies post ingestion kill-off
[18:07] <lazyPower> beisner - get it into the cs right now depends on a manual review and push by a ~charmer until we launch the new rev q, which is still pending iirc
[18:07] <lazyPower> beisner - so if you've got a hot item fix, lmk. otherwise, I'd like to wait for the revq to launch so we have a breadcrumb trail of whats happened.
[18:07] <lazyPower> but thats just me :)
[18:07] <lazyPower> i'm sure there are other opinions out there
[18:08] <beisner> lazyPower, ok.  it may be worth going ahead and doing a charm push from the gh repo and setting the homepage and bugs url metadata so that the cs points to the right place, then nixing the lp branch.  whaddaya say?
[18:09] <lazyPower> beisner - do you have good test run output for me so i get the warm fuzzies?
[18:10] <lazyPower> i can update the store meta no problem, but i want test results before i push :)
[18:10] <beisner> lazyPower, i don't have anything hammering on that charm's dev flow atm.  and... i think it needs tests to be added.
[18:10] <lazyPower> beisner - tests became a mandatory requirement as of the trusty series :\
[18:10] <lazyPower> i cannot in good faith push a charm without tests
[18:11] <beisner> lazyPower, oh it does have these: https://github.com/jenkinsci/jenkins-charm/tree/master/tests
[18:11]  * beisner is with ya lazyPower 
[18:11] <lazyPower> LOL
[18:11] <lazyPower> i wrote these tests
[18:11] <lazyPower> forever ago
[18:11] <beisner> welcome baaaaack
[18:11] <lazyPower> https://github.com/jenkinsci/jenkins-charm/blame/master/tests/100-deploy-trusty#L21
[18:12] <lazyPower> oh man, thats tricky
[18:12] <lazyPower> its deploying a precise series charm to validate leader/follower
[18:13] <lazyPower> beisner - i suggest we pin this for tomorrow, and fold in marco/jcastro if they're around. Otherwise lets vet the charm and make sure its ready for a release
[18:13] <lazyPower> will you have 20/30 minutes tomorrow to do so? we can pair and knock it out quickly
[18:13] <lazyPower> at least the boolean "yes we can push" portion
[18:14] <beisner> lazyPower, yep, no pressure here.  just taking a spin through the queue to see what is in my familiarity zone.  thx for your help.
[18:14] <lazyPower> np beisner, happy to help
[18:14] <lazyPower> beisner - did you see my post to the list re: tags in github?
[18:14] <lazyPower> beisner - as you do a lot of test/release planning, i'd love your feedback on that
[18:16] <beisner> lazyPower, i think that's a good value for projects that have a github dev focus.
[18:17] <beisner> lazyPower, for the openstack charms, github repos are just a sync from cgit @ openstack upstream, and i'm not sure what our tagging abilities are.
[18:17] <lazyPower> tags are independent of github, they are a git native primitive
[18:17] <beisner> we've begun injecting a repo-info file during our build/push/publish automation.  ie.  https://jujucharms.com/u/openstack-charmers-next/neutron-gateway/xenial
[18:17] <lazyPower> so you can tag and push to remote if you have write access to the repository
[18:17]  * lazyPower looks
[18:18] <beisner> lazyPower, that solves one of our big challenges:  "Joe is in possession of a charm.  we don't really know where it came from."
[18:18] <lazyPower> ah man i like that
[18:18] <lazyPower> the repo-info file
[18:18] <lazyPower> its exactly what i would have expected from the revision file
[18:18] <lazyPower> https://api.jujucharms.com/charmstore/v5/~openstack-charmers-next/xenial/neutron-gateway/archive/repo-info  <- that thing
[18:19] <beisner> that's the one
[18:19] <lazyPower> have you already built tooling around this?
[18:19] <lazyPower> and if so, how much of it is specific to the openstack setup?
[18:19] <beisner> we just launched openstack's rendition of layer ci.    it'll be specific in that it is centered around gerrit.
[18:20] <beisner> ie. no git pull requests
[18:20]  * lazyPower snaps
[18:20] <lazyPower> thats such a tease beisner
[18:20] <lazyPower> you show me what i'm looking for in another format, and then pull it away D:
[18:20] <beisner> but yeah, now we can put up a review, tests fly in all sorts of directions, a reviewer can approve and land with a vote, then it pushes/published to CS.
[18:21] <lazyPower> right, makes sense for your use case
[18:21] <lazyPower> ok i'll noodle this s'more and wait for feedback on the ml post, i feel like there's a big gap i'm not seeing in that process though.
[18:21] <lazyPower> and i'm going to regret doing it once i start
[18:22] <beisner> lazyPower, anyway +1 if we can tag revs with cs refs, that will be sweet indeed
[18:22] <beisner> i already had someone ask if we could do that, and owe a bit of research to the idea and whether our tags will survive the flows and syncs through systems outside our direct control.
[18:23] <beisner> i do know this:  we don't have perms to directly tag @ the cgit repos.
[18:23] <beisner> only the bots do
[18:23] <beisner> and a few core/root infra team peeps
[18:23] <lazyPower> ok, so you're mirroring that cgit right?
[18:23] <lazyPower> which means you can carry meta information in your fork
[18:24] <beisner> we aren't mirroring it though.  it happens for anything in https://github.com/openstack/*
[18:24] <beisner> no fork
[18:24] <lazyPower> ah
[18:24] <lazyPower> what a funky setup
[18:24] <lazyPower> i kind of like that its locked down though
[18:24] <beisner> it's a bit weird.  if one were to raise a pr against the gh repo, it will get nack'd and squashed by a bot
[18:25] <lazyPower> as it should if there's a gerrit review process
[18:25] <beisner> yep, there is exactly one way to land a bit. :)
[18:27] <beisner> so it'd be analogous to an enterprise operating their own private internal cgit systems, but also making those repos avail via github for one-way consumption.  a bit funky, yah.
[19:40] <bdx> hey whats up everyone? Whats the reccomended best practice for including pip deps in a charm?
[19:41] <bdx> recommended*
[19:41] <bdx> i can spell
[19:43] <bdx> or, in a layer
[19:43] <bdx> my bad
[19:43] <rick_h_> bdx: so with juju 2.0 I'd say to put together an offline cache of pip deps and zip that as a juju resource
[19:43] <rick_h_> bdx: but maybe others will have other suggestions as well
[19:44] <bdx> rick_h_: thanks
[19:45] <bdx> rick_h_: for example -> https://github.com/DarkHorseComics/layer-whelp/blob/master/lib/charms/layer/whelp_utils.py#L5
[19:45] <rick_h_> bdx: hmm, so that's for a charm. For a layer, that you want to reuse it's more interesting
[19:47] <stub> bdx: If you are writing a reactive charm, embed them by creating a wheelhouse.txt file in your layer.
[19:47] <rick_h_> heh that's what I was looking for. I knew they had some wheel setup for the layers
[19:47] <stub> https://github.com/juju-solutions/layer-basic/blob/master/wheelhouse.txt
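For reference, a wheelhouse.txt is just a pip requirements-style file in the layer root; at `charm build` time the listed packages are vendored into the charm so they install without network access. The package names and pins below are illustrative only, not any real layer's file:

```
# wheelhouse.txt (in the layer root) -- illustrative pins, not a real layer's file
charmhelpers>=0.4.0,<1.0.0
requests>=2.9.0,<3.0.0
```

Layers compose: every layer in the include chain can ship its own wheelhouse.txt, and the build tool merges them.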
[19:48] <huckst> The command `juju destroy-controller` hangs.
[19:48] <huckst> 2.0-beta8-xenial-amd64
[19:48] <bdx> stub: nice, so any layer can define a wheelhouse.txt and those deps will be picked up?
[19:49] <rick_h_> huckst: can you ssh to the controller still?
[19:49] <rick_h_> huckst: anything in those logs that might prove helpful? Any output to judge where/why it's hanging?
[19:49] <bdx> thats what I thought .... but couldn't find any examples of subsequent layers using this so I wasn't sure
[19:49] <bdx> stub, rick_h_: thx
[19:49] <lazyPower> bdx - dont listen to rick. use the wheelhouse.
[19:50] <rick_h_> yea, I was thinking of a string of app deps for a charm
[19:50] <rick_h_> as a resource, I missed the layer specific bit there
[19:50] <huckst> No machine to ssh to.
[19:50] <stub> bdx: I believe so, but haven't used it in anger (I'm still pulling debs from ppas)
[19:51] <huckst> I had setup the gce google controller and the lxd localhost controller. Started working on the lxd controller, forgot about the gce controller until now.
[19:51] <huckst> Then when I went to start using the gce controller, nothing. So I'm trying to drop and re-bootstrap.
[19:51] <rick_h_> huckst: ok, so which one did you destroy-controller on?
[19:51] <huckst> gce google
[19:51] <lazyPower> bdx - i use the wheelhouse dependency chain quite a bit
[19:51] <lazyPower> bdx https://github.com/juju-solutions/layer-docker/blob/master/wheelhouse.txt  - as an example
[19:52] <rick_h_> huckst: ok, and the instances are all shut down?
[19:52] <rick_h_> huckst: if you look in the gce panel? so it ran some output during destroy?
[19:52] <rick_h_> can you pastebin the output?
[19:52] <huckst> Nothing ran in the gce panel.
[19:53] <rick_h_> I understand nothing ran there, but if you had a GCE controller then you had an instance running on the GCE cloud and it would show in the GCE control panel
[19:53] <huckst> Correct. I can confirm it had started weeks ago and was successful at doing some things. But nothing recently.
[19:55] <jhobbs> To make a new team in the charmstore, do I need to do anything other than create the team in launchpad?
[19:55] <magicaltrout> nope
[19:56] <jhobbs> ok, guess it takes a while to sync or something
[19:58] <magicaltrout> where jhobbs, in charm tools?
[19:58] <bdx> lazyPower: niccceeeee
[19:59] <magicaltrout> you have to log out of literally everything and log back in again
[19:59] <magicaltrout> charm tools, jujucharms.com etc
[19:59] <lazyPower> jhobbs - what magicaltrout said, its a known bug. it only syncs groups on login.
[19:59] <jhobbs> magicaltrout: yeah i created a group on launchpad about 15 minutes ago, logged out of charm tools and logged back in and it won't let me push
[19:59] <lazyPower> bdx - hope that gets you unblocked, lmk if you need any further help
[19:59] <jhobbs> ok
[19:59] <magicaltrout> i like to think of it more as an annoying feature ;)
[20:00] <jhobbs> i will log out of more stuff
[20:00] <jhobbs> thanks
[20:00] <lazyPower> magicaltrout - six of one, half dozen of the other ;)
[20:00] <magicaltrout> jhobbs yeah website and everything
[20:00] <rick_h_> yes, once SSO can do the username stuff we can look at actually disconnecting from LP there
[20:00] <magicaltrout> even then it doesn't always work :)
[20:04] <stub> jhobbs: I think you need to relogin to the web interface, not charm tools
[20:18] <jhobbs> yay, working now, thanks everyone
[20:18] <magicaltrout> \o/
[20:19] <arosales> jhobbs: not intuitive at all, but the UI folks are working to make that better
[20:24] <huskc> I was wrong about the gce google controller hung issue. (new to gce) After navigating to the correct dashboard I saw a juju instance running.
[20:25] <huskc> The logs output had an ongoing 'unable to connect to API' error.
[20:27] <huskc> I manually deleted the gce (juju bootstrap) instance, but juju still hangs on destroying the controller locally.
[20:28] <huskc> It doesn't exist, so shouldn't the CLI `juju destroy-controller gce-devenv` just flush what's configured locally?
[20:35] <huskc> I worked around the CLI hangup by manually removing the 'dead' gce-controller from all juju config in ~/.local/share/juju/*
[20:36] <huskc> Now I can successfully re-bootstrap the google cloud.
[20:36] <huskc> It's nice that the config is just YAML and easily modified (and not some SQLite db). ;)
[20:54] <valeech> Is this a good place to get help with this error: “ERROR cannot resolve URL "cs:maas-region": charm or bundle not found” I am trying to deploy MAAS HA with juju following this guide https://maas.ubuntu.com/docs2.0/ha.html. I have a fresh install of xenial and juju 2.0 beta 7.
[21:03] <magicaltrout> valeech: it is, but i've never used MAAS so I'm not the person to ask :)
[21:05] <valeech> magicaltrout: understood. I am pretty new to juju myself. I am wondering if the bundle name changed in some fashion.
[21:05] <magicaltrout> thats easy enough to find out i would hope
[21:06] <magicaltrout> https://jujucharms.com/q/?text=maas-region
[21:07] <magicaltrout> i have no clue about the tutorial but there doesn't seem to be a charmers charm there
[21:07] <magicaltrout> so
[21:07] <magicaltrout> juju deploy cs:~maas-maintainers/trusty/maas-region-3
[21:07] <valeech> excellent! Thanks so much!
[21:07] <magicaltrout> that looks like a likely solution
[23:18] <arosales> nice find magicaltrout, it looks like maas-region is not a recommended charm and thus one would need to prefix the username as suggested
[23:19] <arosales> valeech: thanks for the feedback I'll file a bug on the maas docs
[23:25] <valeech> arosales: where are you seeing that maas-region is not a recommended charm?
[23:28] <arosales> valeech: a recommended charm would have a flat name space like https://jujucharms.com/maas-region/
[23:28] <arosales> and 'juju deploy maas-region' would just work
[23:28] <arosales> in this case it doesn't, as it hasn't been through the review process and thus is not marked as recommended
[23:29] <valeech> arosales: makes sense. thanks for the explanation
[23:29] <arosales> it very well may be a good working charm the MAAS folks would like users to use judging by the name maas-maintainers, but that explains why "juju deploy maas-region" didn't work per the MAAS docs
[23:29] <arosales> valeech: happy deploying
[23:30] <valeech> thanks@!
[23:30] <valeech> !
[23:32] <arosales> valeech: and welcome to the juju community :-) If you don't find an answer here askubuntu.com and the juju mailing list (https://lists.ubuntu.com/mailman/listinfo/juju) are also good resources
[23:32] <valeech> Great! thanks.
[23:33] <valeech> I was having some weird issues on 2.0beta7 so I blew it away and spun up 2.0beta9. Now it’s getting even more strange
[23:34] <valeech> I can’t seem to deploy any apps to my machines. I have 2 machines added to a manual cloud and the juju status shows them as started but I keep getting this error on all charms I load:
[23:35] <valeech> cannot assign unit "wordpress/0" to machine: cannot assign unit "wordpress/0" to new machine or container: cannot assign unit "wordpress/0" to new machine: use "juju add-machine ssh:[user@]<host>" to provision machines
[23:44] <arosales> valeech: to confirm you are using manual provider, correct?
[23:44] <arosales> if so can you pastbin the output of juju status
[23:44] <valeech> juju bootstrap maas manual/10.131.107.128
[23:44] <valeech> like that?
[23:44] <arosales> I think you just need to specify which unit you want to deploy the charm to with --to
[23:45] <arosales> valeech: what does `juju status` currently return?
[23:46] <valeech> http://pastebin.com/93RGJfCw
[23:47] <valeech> juju runs on one VM, and mass2 and maas3 are 2 other VMs
[23:48] <lazyPower> valeech - i dont believe that wordpress is supported on xenial by the charm.
[23:48] <valeech> I am batting a 1000 :)
[23:48] <arosales> valeech: try 'juju deploy --to 0 wordpress wordpress2
[23:49] <lazyPower> valeech - i see your enlisted machines are series: xenial. Juju didn't give you a very helpful error message that it's due to not having a series allocated that the charm can consume :\  but thats the issue here
[23:49] <arosales> lazyPower: one can still deploy trusty images though
[23:49] <lazyPower> arosales true, but valeech will need to manually provision one in maas and add it to his juju env, theres no magic here since its the manual provider
[23:49] <lazyPower> s/his/their/
[23:49] <arosales> lazyPower: ah yes, but the --to would manually place it on the xenial machine
[23:49] <lazyPower> arosales not if its series: trusty, you cant force smash a series difference
[23:50] <arosales> lazyPower: well I think valeech is just trying to test his setup
[23:50] <lazyPower> i do get what you're saying tho :)
[23:50] <arosales> valeech: perhaps try, "juju deploy --to 4 ubuntu"
[23:50] <arosales> and see if at least you can get that to deploy
[23:50] <arosales> lazyPower: that should work, right?
[23:51] <lazyPower> arosales - it should, yeah
[23:51] <arosales> and earlier when I said "--to 0" juju would have balked at that because machine 0 is reserved, so apologies there
[23:52] <valeech> arosales - it did balk. I will try --to 4
[23:52] <bdx> while we are on wordpress ....
[23:52] <valeech> results from "juju deploy --to 4 ubuntu”: ubuntu/0     unknown   idle   4               maas3
[23:52] <bdx> we need a wordpress charm reform effort initiated
[23:53] <arosales> here are the xenial charms available: https://jujucharms.com/store?type=charm&series=xenial
[23:53] <lazyPower> look at those beats!
[23:53] <lazyPower> that just landed today :D
[23:53] <arosales> lazyPower: :-)
[23:53] <arosales> valeech: what is your current juju status post that deploy command?
[23:54] <bdx> lazyPower: Nice!
[23:54] <arosales> post that ubuntu deploy command
[23:54]  * arosales waves to bdx
[23:54] <valeech> so there’s a juju-gui charm for xenial, but isn’t the GUI included in juju 2.0, rendering the need for a charm moot?
[23:54] <lazyPower> bdx - packetbeat landed too,  juju deploy cs:~containers/packetbeat  -- it needs review for promulgation but its done :)
[23:54] <bdx> arosales: whats up!
[23:54] <bdx> lazyPower: so sick! pumped
[23:54] <lazyPower> valeech - correct, juju 2.0-beta6 started shipping with the juju-gui
[23:54] <arosales> valeech: in 2.0 the juju gui comes baked into the controller :-)
[23:54] <valeech>  juju deploy --to 4 ubuntu
[23:54] <valeech> Added charm "cs:ubuntu-0" to the model.
[23:54] <valeech> Deploying charm "cs:ubuntu-0" with the default charm metadata series "xenial".
[23:54] <arosales> just issue "juju gui" and you get it out of the box
[23:55] <arosales> valeech: looking good
[23:55] <lazyPower> bdx - glad you're excited :) Gimme some bugs, i know they're in there
[23:55] <arosales> bdx: finally a maas expert in here :-)
[23:55] <valeech> from juju status:
[23:55] <valeech> ubuntu     unknown  false    jujucharms  ubuntu     0    ubuntu
[23:56] <bdx> lazyPower: nice work seriously
[23:56] <bdx> lazyPower: did you ever figure out the geo_ip?
[23:56] <lazyPower> bdx thanks man :)
[23:56] <lazyPower> i did, but didnt ship with it, because the db is so old
[23:56] <lazyPower> i'd like to engage with elastic to get something more recent in there
[23:56] <bdx> ahh
[23:57] <bdx> arosales: ha - I hope to be
[23:57] <lazyPower> i've got a little video i'm working on to push on my social channels, once its done i'll ship it over to elastic as the intro to the work done, and see if they're interested in collaborating/upstreaming the charms
[23:57] <arosales> valeech: could I get a full pastbin output of juju status, I lost the context if that was machine or application output
[23:57] <valeech> sure..
[23:57] <bdx> arosales: I only have 5 maas deploys
[23:57] <arosales> bdx: only :-)
[23:58] <valeech> http://pastebin.com/DkZ37RT7
[23:58] <bdx> lazyPower: thats a great idea
[23:59] <bdx> valeech: yea - 'juju deploy maas-region' just works if you have the needed configs set