[01:27] <jmartinez916> hello, I was wondering if I could get some advice on a juju bootstrap using lxd?
[07:30] <RAJITH> Hi while connecting to mariadb from remote machine  I am using command: mysql -h hostname -u username -p password -D database name , getting error: ERROR 1130 (HY000): Host '' is not allowed to connect to this MySQL server
[08:13] <magicaltrout> RAJITH: remove the space after the -h I believe
[08:13] <magicaltrout> mysql -hhostname -uusername -ppassword -Ddatabase
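A minimal sketch of the invocation being discussed (host/user/db names are placeholders). Strictly, only `-p` is whitespace-sensitive in the mysql client: `-p password` makes the client prompt for a password and parse `password` as a positional argument, while `-h` and `-u` accept a space. Note too that ERROR 1130 can also be a server-side grants problem, independent of option spacing.

```shell
# Attach the password directly to -p (no space); spaces after -h/-u are fine:
mysql -h dbhost.example.com -u appuser -p'secret' -D appdb

# Long options avoid the ambiguity entirely:
mysql --host=dbhost.example.com --user=appuser --password='secret' --database=appdb

# If the command line is correct, ERROR 1130 usually means the server has no
# grant for this user from this host; on the server, something like:
# mysql> GRANT ALL ON appdb.* TO 'appuser'@'%' IDENTIFIED BY 'secret';
```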
[08:18] <admcleod> magicaltrout: im wondering what your irc nemesis would be called. pragmaticgrizzly? or perhaps, scientificbear.
[08:19] <magicaltrout> or just
[08:19] <magicaltrout> hungryfisherman
[08:19] <magicaltrout> good to know you're hard at work admcleod ;)
[08:22] <magicalt1out> oops byobu kill-server
[08:22] <magicalt1out> wasn't the command I was looking for
[08:30] <admcleod> magicaltrout: its what i do. work hard.
[08:31] <magicalt1out> all things are relative I suppose ;)
[08:31] <admcleod> :P
[08:35] <magicalt1out> admcleod: is the review queue still the same? nothings changed yet?
[08:35] <admcleod> magicalt1out: you mean re the update to the new version? it appears so
[08:36] <magicalt1out> well the way us minions get stuff into it
[08:36] <magicaltrout> Saiku will get its GA in the next few days and I want to get Saiku & Drill signed off if I can
[08:37] <magicaltrout> once I've got Saiku 3.9 released
[08:37] <magicaltrout> because it will provide connectivity to a bunch of charms
[08:39] <admcleod> magicaltrout: i dont think anything has changed yet, but ill ask later when the rq guys are online
[08:39] <magicaltrout> cool, its no biggie just checking
[08:39] <magicaltrout> be nice to finally get Saiku reviewed and signed off
[08:44] <magicaltrout> jcastro: https://highlyscalable.wordpress.com/2013/08/20/in-stream-big-data-processing/ you should work on a Juju version ;)
[08:45] <magicaltrout> this one is a bit old but pretty cool for instream processing background and knowledge
[08:48] <magicaltrout> RAJITH unless its top secret keep it in the channel please
[08:51] <magicaltrout> thats not top secret
[08:52] <magicaltrout> does "hostname" return a valid response?
[08:52] <magicaltrout> and hostname -f for that matter
[09:12] <magicaltrout> sod it lets make saiku reactive before it goes GA
[10:10] <magicaltrout> admcleod: also, I'm assuming if its up to scratch I can submit an interface to the platform somehow?
[10:10] <magicaltrout> it would be good to have a drill layer so that users can hook up to it
[10:51] <SaMnCo> magicaltrout: hey, I'm trying the DCOS charms
[10:51] <SaMnCo> but it doesn't connect the GUI
[10:51] <SaMnCo> I get : channel 3: open failed: connect failed: Connection refused
[10:51] <magicaltrout> you trying a funky ssh tunnel SaMnCo or just juju expose?
[10:51] <admcleod> magicaltrout: well, yeah also im not sure - sorry. ill find out
[10:51] <SaMnCo> (after the SSH port forwarding is setup)
[10:52] <magicaltrout> SaMnCo: on EC2 recently the straight expose works
[10:52] <SaMnCo> ah, I thought I needed the SSH stuff
[10:52] <magicaltrout> yeah that was initially but it seems to have magically rectified itself
[10:52] <magicaltrout> don't ask
[10:53] <magicaltrout> its all in a bit of a state of flux as I was just working on the multi-master
[10:53] <magicaltrout> the problem with that is DCOS don't like to add masters on the fly
[10:53] <magicaltrout> and I don't like that plan
[10:53] <magicaltrout> so I'm messing around trying to figure out which bits of config need prodding to make it realise that masters have been added
[10:53] <SaMnCo> still no luck, I get connection refused on the public ip
[10:55] <magicaltrout> on EC?
[10:55] <magicaltrout> 2
[10:56] <SaMnCo> yeah. Is there a route for the URL?
[10:56] <magicaltrout> nope
[10:57] <magicaltrout> 2 mins just bootstrapping to test
[10:57] <SaMnCo> btw I'm on Xenial
[10:57] <admcleod> SaMnCo: is the daemon bound to all ips?
[10:57] <magicaltrout> ah
[10:57] <magicaltrout> that'll do you
[10:57] <magicaltrout> don't use xenial
[10:57] <magicaltrout> use wily
[10:57] <SaMnCo> ok, any reason for this behavior?
[10:58] <SaMnCo> what's wrong with Xenial?
[10:58] <magicaltrout> there was a bunch of upstart/systemd issues I was fiddling with; it would work on Xenial, but Xenial didn't exist when I first built it
[10:58] <magicaltrout> so I've not updated the xenial image
[10:58] <SaMnCo> right ok
[10:58] <magicaltrout> but wily is updated
[10:58] <magicaltrout> or just push a new xenial build, it should just work
[10:59] <SaMnCo> ok, restarting from scratch then, will let you know how it goes
[11:00] <magicaltrout> yeah i'm just deploying a master
[11:00] <magicaltrout> it should work though
[11:00] <magicaltrout> just not >1 currently
[11:01] <SaMnCo> compared to the native experience with CloudFormation, what diffs should I expect?
[11:02] <SaMnCo> cloud load balancer addition works OOTB?
[11:02] <magicaltrout> all the bits should come up, this is basically the DC/OS advanced installation done with Juju
[11:02] <magicaltrout> not sure about load balancer, didn't check that far down
[11:03] <magicaltrout> spin it up and file issues on github though and I'll get round to them next week hopefully
[11:03] <SaMnCo> so if you create a framework in DC/OS, will it open the AWS firewall afayk?
[11:03] <SaMnCo> will do
[11:04] <magicaltrout> it wont open ports not defined in the charms, which is a bit of an interesting one
[11:04] <magicaltrout> so you need to prod the charm to expose other ports
[11:04] <SaMnCo> hmm
[11:05] <SaMnCo> ok so it's the same gap as the k8s stuff
[11:05] <magicaltrout> yeah
[11:05] <magicaltrout> the lack of mindreading capability
[11:05] <SaMnCo> well, the CloudFormation template does it for both, so it must be possible
[11:06] <magicaltrout> yeah but the firewall in EC2 is managed by Juju, so you'd have to either tell DC/OS to use the same VPC stuff or hook some action up from DC/OS to Juju to expose it
[11:07] <admcleod> get the charm to do aws api calls
[11:07] <SaMnCo> you wouldn't want that, because once you deploy in DC/OS, you're part of the lifecycle of that app. Since Juju won't do any scheduling in there, nor be aware of it, you want DC/OS to be autonomous and talk directly to AWS
[11:08] <SaMnCo> which means the charm needs to convey AWS credentials
[11:08] <SaMnCo> which is not cool since there is no secret management yet
[11:08] <magicaltrout> okay i might have left it in a broken state as the UI doesn't come up ;)
[11:09] <magicaltrout> oh it does
[11:09] <magicaltrout> i'm too quick for it
[11:09] <admcleod> the charm wouldnt necessarily need to convey credentials if juju can allocate roles to instances
[11:11] <SaMnCo> that's right
[11:17] <SaMnCo> interesting, the default CloudFormation for DC/OS is based on CoreOS
[11:17] <magicaltrout> yup
[11:18] <magicaltrout> their vagrant install for testing $hit is pretty handy for debugging my juju hacks as well ;)
[11:21] <SaMnCo> have you had any issue because of the multi AZ default setup of Juju so far?
[11:31] <magicaltrout> nope
[12:17] <magicaltrout> how do I replace an "old charm" with a new charm push setup?
[12:17] <magicaltrout> or don't I?
[12:21] <mbruzek> magicaltrout: I don't think you do, it is just a new version.
[12:22] <magicaltrout> yeah i thought so mbruzek
[12:22] <magicaltrout> i've got some weirdness
[12:23] <magicaltrout> something to do with multiseries charms
[12:23] <magicaltrout> i'm clearly being moronic
[12:23] <mbruzek> Which one?
[12:24] <magicaltrout> http://pastebin.com/zgQ5mQ9v
[12:25] <mbruzek> magicaltrout: Try pushing it without the "trusty" in the name
[12:25] <magicaltrout> ah yeah that works
[12:26]  * magicaltrout gets confused with these new pushing calls
[12:26]  * mbruzek does too
[12:26] <mbruzek> But that is actually a problem I have run into before
[12:27] <magicaltrout> ah cool thats deploying updated code
[12:27] <magicaltrout> thanks mbruzek
[12:27] <mbruzek> You are most welcome magicaltrout
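A hedged sketch of the fix mbruzek describes (charm and user names here are hypothetical): for a multi-series charm, the series comes from metadata.yaml, so the push target URL must omit the series component.

```shell
# metadata.yaml lists the supported series (e.g. trusty and xenial), so push
# without a series in the URL; the store records the series from metadata:
charm push . cs:~magicaltrout/saiku          # not cs:~magicaltrout/trusty/saiku

# A series can then be chosen at deploy time:
juju deploy cs:~magicaltrout/saiku --series trusty
```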
[12:29] <magicaltrout> http://www.bbc.co.uk/news/technology-36711989
[12:29] <magicaltrout> where do you find a spare £40k to build a huge tetris?
[12:51] <lazyPower> neiljerram - o/ ping
[13:28] <shruthima> Hi kevin, regarding the IBM-IM amulet test, we are getting that issue only when we are running the amulet test.
[13:36] <neiljerram> lazyPower, hi
[13:36] <lazyPower> hey neiljerram, just checking in post holiday.
[13:40] <lazyPower> howd things go with the new proxy sub / etcd charm?
[13:41] <neiljerram> lazyPower, thanks. I'm still working on the client charm mods that I need, but so far it looks as though etcd-19 is good.
[13:41] <lazyPower> awesome, so that branch up for review you commented on is g2g from your perspective?
[13:41] <neiljerram> lazyPower, yes
[13:41] <lazyPower> (confirming so i can pilot that to land today)
[13:41] <neiljerram> lazyPower, that would indeed be good
[13:41] <lazyPower> perfect. I'll get on that and ping you when its merged. I'll keep my eye on the issue tracker as well.
[13:42] <lazyPower> i see there was a question about xenial series, that seems to be a byproduct of having been pushed at the /trusty/ series prior. I'll see if we can remove the series from the charm url as it supports trusty/xenial in the same charm but is listed under a single series
[13:42] <lazyPower> Thanks for giving it a go and confirming for me :) Much appreciated!
[13:42] <neiljerram> I think I said before that I was planning etcd-local-proxy as a subordinate charm; but now I'm working on that as a layer, that each of my two client charms will incorporate.
[13:44] <neiljerram> lazyPower, I think the trusty/xenial thing is just a matter of how you publish.  If the charm metadata says both xenial and trusty (as it does?) then I think you should push to a URL that doesn't include the series.
[13:44] <lazyPower> neiljerram - i did that for -3, and it still published under /trusty/
[13:44] <neiljerram> lazyPower, ah OK, must be something else that I don't understand yet, then
[13:44] <lazyPower> i'm going to ping the store api people for a look and will ping back if there's something specific we need to do
[13:45] <lazyPower> its probably pebkac :)
[13:45] <neiljerram> impossible! :-)
[13:50] <magicaltrout> lazyPower: you need to get onboard with the PICNIC acronym, it's far easier to read and confuses people as to why you'd be eating food when making mistakes
[13:51] <lazyPower> Problem In Chair Networking in Computer?
[13:52] <magicaltrout> s/Networking/Not
[13:52] <lazyPower> hah
[13:52] <lazyPower> i love it
[13:52] <lazyPower> deal. all references to the old acronym have been scrubbed
[13:52]  * lazyPower garbage collects
[13:52]  * lazyPower starts swapping due to an old java module
[14:26] <lazyPower> ryebot - if you have a moment can i get you to patch the nits on https://github.com/juju/docs/pull/1100?
[14:28] <ryebot> lazyPower: Yeah, I'll do it asap, thanks for the headsup
[15:39] <shruthima> Hi Team, we have this configuration http://paste.ubuntu.com/18552641/ for Z machines; will Juju 2.0 work properly with this? We need to raise a request for more environments, so we are validating before that ...
[15:43] <shruthima> hi kwmonroe, here is the link to the ibm-im reactive file: http://bazaar.launchpad.net/~salmavar/charms/trusty/ibm-im/ibm-im-branch/view/head:/reactive/ibm-im.sh , can you please suggest why it is hanging while fetching an empty zip from the store?
[15:53] <lazyPower> shruthima - which version of juju?
[15:53] <shruthima> juju 2.0
[15:55] <shruthima> lazypower: juju 2.0
[15:55] <lazyPower> shruthima - which beta?
[15:57] <shruthima> lazypower:2.0-beta10
[16:01] <lazyPower> shruthima - have you tried with 2.0-beta11? that was just released last friday
[16:02] <shruthima> lazypower: no i  have not tried with beta11
[16:14] <babbageclunk> marcoceppi: I'm in the process of adding a new charm hook tool - application-version-set
[16:16] <babbageclunk> marcoceppi: oops, meant to ping you first!
[16:20] <marcoceppi> babbageclunk: cool, do you need something from me?
[16:22] <babbageclunk> marcoceppi: I'm going to add a corresponding function to charmhelpers in hookenv, but I'm not really sure what to do if the tool isn't available. Do you think I should have some fallback behaviour?
[16:23] <marcoceppi> babbageclunk: there's examples of this already, it should just raise a NotImplemented error
[16:23] <babbageclunk> marcoceppi: oh, ok - great!
[16:23] <babbageclunk> marcoceppi: thanks
[16:23] <marcoceppi> babbageclunk: I'll get you an example
[16:24] <marcoceppi> babbageclunk: http://bazaar.launchpad.net/~charm-helpers/charm-helpers/devel/view/head:/charmhelpers/core/hookenv.py#L848
[16:25] <marcoceppi> babbageclunk: tbh, we really don't use min-juju-version, we just work around it in the charms
[16:25] <babbageclunk> marcoceppi: ok, looks straightforward
[16:26] <marcoceppi> babbageclunk: it should be, there's also an interesting example in status-set, where we just write out to juju-log instead
[16:27] <marcoceppi> well, we used to
[16:28] <babbageclunk> marcoceppi: It looks like you still do - or do you mean it used to run juju-log rather than writing to the python log (which I guess gets caught in the hook output?)
[16:29] <marcoceppi> babbageclunk: well it used to just silently fallback to writing to log instead, so no exception raised at all on OSError
[16:29] <marcoceppi> babbageclunk: actually, that's basically what it does not
[16:29] <marcoceppi> now*
[16:29] <marcoceppi> it just silently fails to Juju-log
[16:29] <marcoceppi> instead of raising an exception
[16:30] <babbageclunk> marcoceppi: right. I think I'll do the same thing.
[16:30] <marcoceppi> when status-set is not provided,
[16:30] <marcoceppi> babbageclunk: ack, it's probably the best since it's a set only command
[16:30] <babbageclunk> marcoceppi: cool - thanks for the tips!
[16:30] <marcoceppi> babbageclunk: unless...are you going to be providing an application-version-get command?
[16:31] <babbageclunk> marcoceppi: no, it didn't seem needed (and is a bit subtle)
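A minimal Python sketch of the fallback pattern discussed above, in the style of the charmhelpers `hookenv` status-set example marcoceppi links to. Everything here is an assumption for illustration: the `--` separator, the message format, and the `log()` helper (which stands in for charmhelpers' juju-log wrapper) are not confirmed by the conversation.

```python
# Shell out to the new application-version-set hook tool; on an older Juju
# where the binary is absent, degrade to a log line instead of raising.
import subprocess


def log(message):
    # Stand-in: real charmhelpers shells out to juju-log.
    print(message)


def application_version_set(version):
    """Set the application version, or just log it on older Juju."""
    cmd = ['application-version-set', '--', str(version)]
    try:
        subprocess.check_call(cmd)
    except OSError:
        # Tool missing on this Juju version; since this is a set-only
        # command, silently fall back to logging instead of raising.
        log('application version: {}'.format(version))
```

The silent fallback matches the status-set behaviour described above; the alternative pattern mentioned earlier (raising `NotImplementedError`) would replace the `except OSError` body with a `raise`.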
[16:37] <mbruzek> rjc ping
[16:40] <mbruzek> Does anyone know if the cloud image for xenial was updated recently? I am getting a charm failure because python is not installed (at all!)
[16:42] <Odd_Bloke> mbruzek: When you say Python, do you mean python2?
[16:42] <mbruzek> Odd_Bloke: I sshed to the machine and python was not in the path.
[16:42] <Odd_Bloke> mbruzek: Because I don't believe that's installed in xenial in general (although some specific clouds will have it installed because they have agents etc. that pull it in).
[16:42] <mbruzek> Not python2 nor python3
[16:43] <mbruzek> Odd_Bloke: The install hook failed because it has a python shebang.
[16:44] <mbruzek> The error looked like this in the unit log:
[16:44] <Odd_Bloke> mbruzek: cloud-init uses python3, so it's unlikely the instance doesn't have python3 installed.
[16:44] <mbruzek> 2016-07-05 16:28:30 ERROR juju.worker.uniter.operation runhook.go:107 hook "install" failed: fork/exec /var/lib/juju/agents/unit-was-lib-0/charm/hooks/install: no such file or directory
[16:44] <Odd_Bloke> mbruzek: But note that `python` will never run Python 3.
[16:45] <mbruzek> Odd_Bloke: I am corrected, python3 appears to be there.
[16:46] <mbruzek> But the install file in this charm has #!/usr/bin/python
[16:46] <mbruzek> Which is why the install hook failed.
[16:47] <Odd_Bloke> mbruzek: Python 2 has never been in the xenial cloud image, so I don't think this is a new failure.
[16:47] <Odd_Bloke> mbruzek: (As I said, there are some clouds that will have Python 2, but that's not normal)
[16:48] <cafaroo> Hello everyone! I've been trying to tackle maas and juju for deployment of openstack for way over a month now. Now when I'm trying to bootstrap I get the following error: "2016-07-05 16:38:28 ERROR juju.cmd supercommand.go:429 failed to bootstrap environment: bootstrap instance started but did not change to Deployed state: instance "/MAAS/api/1.0/nodes/node-536ef382-4135-11e6-95ad-000c29f03191/" failed
[16:48] <cafaroo> to deploy". Don't know where to start looking for faults; has anyone had the same error? The nodes are baremetal and I have confirmed that they can reach the internet through maas NAT forwarding.
[16:49] <cafaroo> Hope i am in the right place for this kind of question, any help would be appreciated.
[16:51] <mbruzek> Odd_Bloke: OK I will dig into this more. would changing the shebang to #!/usr/bin/env python  give us python3?
[16:52] <Odd_Bloke> mbruzek: Nope, "python" will always be Python 2.
[16:53] <mbruzek> then we will have several charms that will not run on xenial
[16:55] <mbruzek> cafaroo I don't know what that error message means, there is also a #maas channel on irc if this is a MAAS problem.
[16:56] <mbruzek> cafaroo: you can retry the bootstrap with --debug and -v for verbose.
[16:56] <mbruzek> cafaroo: then pastebin the results
[17:00] <cafaroo> mbruzek: okay I'll do that. I just found an error in cloud-init-output.log. Maybe that's what's causing it.
[17:00] <cafaroo> http://pastebin.com/1bhNfmAP
[17:01] <cafaroo> 'cciss!c0d0' should be my disk, I'll try to format it somehow
[17:17] <cholcombe> rick_h_, what are your thoughts on this: https://github.com/juju/juju/issues/5766
[17:19] <rick_h_> cholcombe: I'd like to compare that to the way the other openstack services handle the manual process to roll.
[17:19] <rick_h_> cholcombe: it's come up about juju managing this, but the "devil" is in the details: what indicates a successful upgrade, how many to upgrade at a time, etc.
[17:20] <rick_h_> cholcombe: with leader election, the new application-version feature going into 2.0, and status, I wonder if there's enough bits in place to build a basic layer to help this.
[17:20] <cholcombe> rick_h_, hmm i'm not sure
[17:20] <cholcombe> i think it would be nice to have config settings that say: upgrade x at once
[17:21] <cholcombe> rick_h_, and a hook that can be called to validate that the upgrade was successful
[17:21] <cholcombe> that should be enough to take care of it
[17:25] <marcoceppi> rick_h_: maybe, but stacks is really the way to handle this
[17:29] <rick_h_> cholcombe: yea, I can see some of that. We can bring it up at next planning sprint.
[17:29] <rick_h_> marcoceppi: yea, but those tend to be across services vs a single service like ceph/etc
[17:30] <rick_h_> marcoceppi: upgrading a single web app rolling/canary style shouldn't need stacks to get involved
[17:30] <marcoceppi> rick_h_: even with ceph, it's still pretty complex application ceph-mon/osd/etc
[17:30] <rick_h_> marcoceppi: right, he's using the external thing that happens to be there to track the state for him
[17:30] <marcoceppi> rick_h_: the new application-version, will that be present in relation data implicitly?
[17:31] <rick_h_> marcoceppi: it's in status atm, we can look to add it across if it's useful.
[17:31] <marcoceppi> I could see http interface growing this, so that haproxy and others could spin off a new lb group as part of a blue/green rollout
[17:31] <rick_h_> marcoceppi: though we're exposing it as an application level entity so guess it's not as helpful here
[17:31]  * rick_h_ is slow thinking on cold meds today
[17:31] <marcoceppi> rick_h_: ah, so only leader sets it?
[17:32] <rick_h_> marcoceppi: the last unit to set it wins atm
[17:32] <marcoceppi> rick_h_: perhaps it should be scoped at leader? since it knows what's going on in the world?
[17:32]  * marcoceppi will stop bombarding cold med rick_h_
[17:56] <petevg> Catching up on conversations ... I left a comment on the rolling upgrade ticket -- I think that the coordinator layer can handle that case.
[19:08] <bryan_att> gnuoy: ping - when I reached out to the Congress team for info about how to upstream charm-congress, they said they have no idea and to reach out to openstack-charmers, which AFAICT is a group you are involved with. How do I get the charm upstreamed to https://github.com/openstack-charmers?
[19:10] <marcoceppi> hey bryan_att, gnuoy is probably EOD now
[19:10] <bryan_att> marcoceppi: thanks, I'll watch for the response tomorrow
[19:11] <marcoceppi> bryan_att: alternatively, since openstack-charmers is a pretty big group, poking juju@lists.ubuntu.com might help get a better response
[19:12] <marcoceppi> bryan_att: can't recall for sure, but I think the github is a mirror of what's in gerrit/openstack git
[19:12] <marcoceppi> and there's some process or another to get that done
[19:12] <bryan_att> marcoceppi: thanks, I'll start there
[19:12] <marcoceppi> thedac tinwood ^^
[19:17] <narindergupta> marcoceppi: thedac tinwood gnuoy jamespage basically bryan_att is looking to push the congress charm into the github projects under openstack.org like we have for the other core charms. He is ready to maintain and upgrade it on an as-needed basis though.
[19:18] <marcoceppi> narindergupta: makes sense, thedac tinwood gnuoy and jamespage (and by proxy, the mailing list) would be the best place to figure that out.
[19:18] <narindergupta> marcoceppi: ok thanks
[19:59] <thedac> bryan_att: Hi, so there are two things we should not confuse. One is charm store inclusion, which we control, and the other is an openstack upstream project, which openstack.org controls. You can have your charm code hosted anywhere and still get included in the charmstore. Whereas to get on openstack.org you need to be your own project with openstack.org.
[20:00] <thedac> So when you are ready for your charm to be reviewed you can let us know here (which you have) or on one of the mailing lists.
[20:01] <bryan_att> The OpenStack Congress project is already there - are you saying that each OpenStack service charm has to be its own project? I would expect, like python-congressclient, that this is just another repo managed by the existing OpenStack Congress team (to which I contribute). Or do you really mean there needs to be a distinct project for this?
[20:01] <thedac> If you let me know where the charm code lives now I can get the ball rolling on the review process
[20:01] <thedac> bryan_att: if congress is already an upstream project having the charm live there as just another repo is fine.
[20:01] <bryan_att> https://github.com/gnuoy/charm-congress is the master of which my repo https://github.com/blsaws/charm-congress is a fork
[20:02] <bryan_att> I work in my fork and sync with gnuoy as needed.
[20:02] <thedac> bryan_att: great. I'll bring this up to gnuoy then if you already have a process working
[20:03] <bryan_att> OK, should I also drop a note to juju@lists.ubuntu.com? that was recommended
[20:04] <bryan_att> note that also at some point I will need this included in the charm store - since OPNFV pulls charms from there for deployment
[20:04] <thedac> bryan_att: right, when you say it is ready we can do the final review and push it to the charm store
[20:05] <thedac> bryan_att: re mailing lists. I think we are moving to openstack-dev with [Charms] in the subject. But we would still see juju@lists.ubuntu.com
[20:05] <bryan_att> Would you suggest I copy both?
[20:06] <thedac> bryan_att: openstack-dev is the primary now for openstack charms
[20:06] <bryan_att> ok, thanks. I'm already on that list
[20:06] <thedac> great
[20:35] <alai> Hi guys, can someone take a peek to see why I can't file a bug on nuage-vsd and nuage-vsc charm?
[20:35] <alai> https://bugs.launchpad.net/charms/+source/nuage-vsd
[20:35] <alai> "nuage-vsd" does not exist in Juju Charms Collection. Please choose a different package. If you're unsure, please select "I don't know"
[20:36] <alai> it also complains when I select 'i don't know'
[20:37] <marcoceppi> alai: that's weird, https://bugs.launchpad.net/charms/+source/nuage-vsd/+filebug that link works for me
[20:39] <alai> marcoceppi, the link works but after hitting the 'submit bug' button it gives the error
[20:40] <marcoceppi> alai: that's odd, not sure why
[20:42] <jhobbs> alai: probably need to ask in #launchpad then
[20:42] <alai> jhobbs, sure i'll ask there
[21:14] <holocrono> anyone using juju 2.0beta11 and a custom image and tools metadata url? I can get juju to use my custom images url, but getting this now for the tools: ERROR juju.environs.config config.go:1130 unknown config field "tools-metadata-url"
[21:14] <holocrono> it looks like this option was removed in 2.0?
[21:15] <mgz> holocrono: it's agent-metadata-url
[21:15] <mgz> tools- spelling has been deprecated for a while
[21:16] <holocrono> i wish i could find some solid docs, i should've been able to figure that out
[21:16] <holocrono> i did see that option though, thanks for pointing it out
[21:17] <mgz> it's the kind of thing that should be release noted, but it's easy to forget to re-mention when the compat naming is eventually removed
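A hedged sketch of the renamed setting in use (cloud/controller names and URLs are placeholders): in juju 2.0 the 1.x `tools-metadata-url` key became `agent-metadata-url`, so a private-cloud bootstrap with locally mirrored streams looks roughly like:

```shell
# Point bootstrap at privately mirrored image and agent (tools) metadata:
juju bootstrap mymaas mycontroller \
    --config image-metadata-url=https://streams.internal.example/images \
    --config agent-metadata-url=https://streams.internal.example/tools
```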
[21:19] <holocrono> mgz: someone was telling me that juju 1 had support for kvm providers, do you know if this is going to be in 2.0?
[21:20] <mgz> depends exactly what you mean
[21:20] <mgz> the old local provider could use kvm instead of lxc
[21:20] <mgz> the new one is just lxd
[21:20] <mgz> I don't think that's changing
[21:21] <mgz> we probably do want to support kvm as a container type for clouds that support it though (which isn't many of the public ones)
[21:21] <holocrono> i'm talking about connecting to a private qemu/kvm host
[21:22] <mgz> I think that works if you're using maas?
[21:22] <mgz> or the manual provider
[21:23] <holocrono> do you have any links to information on this?
[21:23] <mgz> bug 1547665 for the local case
[21:23] <mup> Bug #1547665: juju 2.0 no longer supports KVM for local provider <2.0-count> <juju-core:Triaged> <https://launchpad.net/bugs/1547665>
[21:23] <mgz> I think maas.ubuntu.com/docs has some stuff on vmaas setup
[21:24] <holocrono> thanks!
[21:24] <mgz> jujucharms.com/docs/devel/clouds-manual is what you want for the other
[21:25] <mgz> which doesn't explicitly mention kvm, but does say what requirements 'machines' need to meet to work with juju
[21:25] <mgz> holocrono: I'm interested in your requirements/experiences here, post to the juju list with what you're up to
[21:27] <holocrono> mgz: sure, i'll do that -- thanks again
[21:48] <mskalka> can anyone help me with something? I have two charms that are related, one with provides:unitid and the other with requires:unitid. The relation is labeled properly and matches both metadata files. When I go into one unit with debug-hooks and do "relation-set unitid=foo" it complies just fine, but nothing shows up when I do relation-get, just what looks like the default private-address field
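A hedged note on the behaviour mskalka describes: `relation-set` writes to *this* unit's settings bag, while a bare `relation-get` on the same unit reads the *remote* unit's bag, which is why only `private-address` appears until the other side sets something. In a debug-hooks relation hook context, the own-side data can be inspected by naming the unit explicitly (the `-` key means "all settings"):

```shell
# Inside debug-hooks for a relation hook:
relation-set unitid=foo
relation-get - $JUJU_UNIT_NAME      # this unit's own settings, including unitid
relation-get - $JUJU_REMOTE_UNIT    # what the other side has published
```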
[21:56] <holocrono> mgz: got another question for you :D
[21:57] <holocrono> so it appears that the local agent metadata and file are getting downloaded in the bootstrap, but it's hanging up on trying to download the gui:
[21:58] <holocrono> http://pastebin.com/3rxn1E6n
[21:59] <holocrono> i see some options for setting http proxy for apt on the model, but that's not really what i need here
[21:59] <holocrono> i'd prefer not to use a proxy in any case and provide everything privately anyways
[22:14] <holocrono> how would i modify this setting: https://github.com/juju/juju/blob/master/environs/bootstrap/bootstrap.go#L124
[22:32] <mgz> holocrono: hm, interesting. there's a flag to pass to say no gui, which is probably what you want?
[22:33] <mgz> otherwise need to proxy or mirror streams
[22:34] <mskalka> does anyone know why a charm would fail to report the correct number of units when using relation-list? It spits out the first unit it's related to but none of the others
[22:38] <holocrono> mgz: i'd like to see the gui working.. personally don't care but it's probably the one thing that gets people excited about juju at first glance
[22:39] <holocrono> mgz: i'll make an attempt to mirror it
[22:40] <holocrono> but there isn't a way to override the config? I see something about specifying an environment variable
[22:41] <holocrono> https://github.com/juju/juju/blob/master/environs/bootstrap/bootstrap.go#L542
[22:41] <holocrono> this is the best I can do?
[22:42] <mgz> holocrono: I'm not sure the private cloud case was fully thought out for gui
[22:42] <mgz> we can likely improve it
[22:43] <rick_h_> mgz: holocrono the idea for a private cloud is that you can manually provide the file at any time
[22:43] <rick_h_> mgz: holocrono so you can bootstrap with no-gui and then juju upgrade-gui with any revision you trust
[22:44] <holocrono> ah, yes oka
[22:44] <holocrono> +y - trying this now
[22:44] <rick_h_> you can get any gui release from the streams links or from the GH page https://github.com/juju/juju-gui/releases
[22:45] <rick_h_> the .tar.bz2 for each release
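A sketch of the private-cloud flow rick_h_ describes (the cloud/controller names and the release filename are illustrative, not pinned to a specific version):

```shell
# Bootstrap without fetching the GUI from the network:
juju bootstrap mymaas ctrl --no-gui

# Later, with a GUI release tarball fetched out of band from streams or the
# GitHub releases page and verified/trusted:
juju upgrade-gui ./jujugui-2.1.5.tar.bz2
```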