[09:18] <Sophie_> hellooooo
[09:18] <Sophie_> may I ask a question ?
[09:18] <Sophie_> I am trying to bootstrap juju on maas on a node, but I always get an error that says "dial tcp 127.0.0.1:80: connection refused"
[09:19] <Sophie_> my maas server is on localhost
[13:51] <beisner> hi jcastro, can you squeeze these couple of 1-liners into your review?  tldr; use `charm-proof` now instead of `charm proof`
[13:51] <beisner> https://code.launchpad.net/~1chb1n/charms/trusty/mongodb/update-charm-proof/+merge/290198
[13:51] <beisner> https://code.launchpad.net/~1chb1n/charms/trusty/ubuntu/update-charm-proof/+merge/290196
[13:57] <jcastro> beisner: I've not done code reviews before, that time is usually for doc and AU reviews
[13:57] <jcastro> though these seem so trivial I'll pair up with someone and learn the process
[13:58] <beisner> jcastro, ah gotcha.   & thx :)
[14:34] <arosales> morning o/
[14:39] <marcoceppi> \o
[15:38] <tych0> hi everyone. if i just want a juju xenial instance (on gce, if it matters), how do i do that? juju deploy cs:xenial/ubuntu gives me a "charm not found"
[15:42] <cherylj> tych0: juju add-machine --series xenial should work for you
[15:49] <tych0> cherylj: sweet, thanks!
[15:52] <tych0> hmm, although it puts me in the "error" state
[15:53] <tych0> how do i figure out what went wrong? juju debug-log -n50 -l DEBUG doesn't tell me anything
[15:59] <tych0> ah, juju status --format=yaml says something helpful,
[15:59] <tych0> no "xenial" images in us-central1 with arches [amd64]
[16:00] <cherylj> tych0: are you just trying to get a xenial machine?  I mean, can it be outside of juju?
[16:01] <cherylj> (and btw for the above error - maybe set the image-stream to daily?)
[16:01] <tych0> well, i want juju to know about it so i can test some lxd container type code
[16:01] <tych0> cherylj: yeah, that's what i probably need to do, but i don't know how to do that :)
[16:01] <cherylj> tych0: juju get-model-config | grep image-stream will tell you if / what it's set to
[16:01] <cherylj> tych0: then juju set-model-config image-stream="daily"
[16:01] <cherylj> if you need to change it
[16:02] <tych0> cool, thanks
[16:02] <tych0> how do i delete this failed machine?
[16:02] <tych0> remove-machine :)
[16:03] <tych0> cherylj: hmm. is the google provider busted?
[16:03] <tych0> oh, maybe not
[16:04] <tych0> http://paste.ubuntu.com/15540341/
[16:04] <tych0> Odd_Bloke: ^^
[16:04] <tych0> does that mean the gce daily streams are broken?
[16:21] <jose> beisner: ping
[16:21] <beisner> hi jose
[16:21] <jose> beisner: hey, I see that you did a couple lint fixes on ubuntu and mongodb, are those intended for review by any charmer or by openstack charmers?
[16:22] <jose> I mean, I could take a look, but wouldn't want to touch them if you expect openstack charmers to review
[16:24] <beisner> jose, yep they're not openstack-specific.  i think jcastro was going to have a go with the merge/review, but might be happy to have the first-available reviewer take it instead.  whaddaya say, jorge?
[16:26] <jose> jcastro: if you want me to guide you through the process I'd be happy to
[16:32] <rcj> tych0: that daily should be usable: http://paste.ubuntu.com/15540787/
[16:35] <cholcombe> tinwood, for your merge conflict you can just git review again and gerrit will merge and tell you which files to fix :)
[16:37] <tych0> rcj: ok, so this is a juju bug then?
[16:40] <rcj> tych0, ah, I see the problem
[16:40] <rcj> image is in ubuntu-os-cloud-devel project, not ubuntu-os-cloud
[16:41] <tych0> rcj: ok, what does that mean? :)
[16:45] <rcj> tych0, well, to use the gcloud cli as an example, this would launch that image... gcloud compute instances create "gce-rcj-x2" --zone us-central1-c --machine-type g1-small --network "default" --boot-disk-size 10 --boot-disk-type "pd-ssd" --boot-disk-device-name "gce-rcj-x1" --image /ubuntu-os-cloud-devel/daily-ubuntu-1604-xenial-v20160326
[16:46] <rcj> tych0, the paste you had showed the image in /ubuntu-os-cloud/ which is where we put release images only, not daily.
[16:47] <tych0> rcj: so is it fair to say that,
[16:47] <tych0> ubuntuImageBasePath  = "projects/ubuntu-os-cloud/global/images/"
[16:47] <tych0> we need another constant like that that looks like,
[16:47] <tych0> ubuntuDailyImageBasePath = "projects/ubuntu-os-cloud-devel/global/images/"
[16:47] <tych0> ?
[16:47] <rcj> yes
[16:47] <tych0> cool
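The constant tych0 proposes can be sketched (in Python rather than juju's Go, purely for illustration; the function name and "daily" handling are our assumptions, not juju's actual provider code):

```python
# GCE image project base paths, mirroring the constants discussed above.
UBUNTU_IMAGE_BASE_PATH = "projects/ubuntu-os-cloud/global/images/"
UBUNTU_DAILY_IMAGE_BASE_PATH = "projects/ubuntu-os-cloud-devel/global/images/"

def image_base_path(stream):
    """Return the GCE image project path for a simplestreams stream name.

    Release images live in ubuntu-os-cloud; daily images live in
    ubuntu-os-cloud-devel, per rcj's explanation above.
    """
    if stream == "daily":
        return UBUNTU_DAILY_IMAGE_BASE_PATH
    return UBUNTU_IMAGE_BASE_PATH

print(image_base_path("daily"))  # projects/ubuntu-os-cloud-devel/global/images/
```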
[16:48] <tych0> ah, fuck.
[16:48] <tych0> looks like juju doesn't have access to the stream at that point maybe
[16:50] <tych0> rcj: thanks. i'll try to send a juju patch for this
[16:50] <tych0> i'll close that CPC issue too, one sec
[17:19] <tych0> rcj: cherylj: https://github.com/juju/juju/pull/4902
[17:36] <c0s> kjackal_ kwmonroe cory_fu: I am looking into https://api.jujucharms.com/charmstore/v5/trusty/apache-spark-6/archive/config.yaml and just noticed the "yarn-client" execution mode for Spark
[17:36] <c0s> any reasons yarn-standalone wasn't used?
[17:37] <c0s> this is the _driver_ execution mode
[17:39] <icey> in juju 2, is there still an environment variable that can be set to control the controller/model for juju to connect to?
[17:41] <kwmonroe> c0s: i think the 2 yarn options are yarn-cluster and yarn-client, and we went with client because we wanted to use yarn resources as workers, but keep the spark master separate from yarn.  had we used yarn-cluster, i believe that would use a yarn resource for the spark master as well.
[17:42] <kwmonroe> c0s: unless yarn-standalone is new for spark > 1.5, in which case, i'm not sure what that does.
[17:44] <c0s> kwmonroe: the diff between yarn-client and yarn-standalone is where the _driver_ is running, really.
[17:45] <c0s> in the case of yarn-client, it will be executed on the client host, which might be totally outside of the cluster. hence adding more network traffic, etc.
[17:45] <cory_fu> kwmonroe: https://spark.apache.org/docs/0.8.1/running-on-yarn.html mentions yarn-standalone so it's not new.  But https://spark.apache.org/docs/latest/running-on-yarn.html doesn't mention it.  Is it perhaps an older spelling?
[17:45] <c0s> in yarn-standalone, the driver will be co-located on a thread in Application Manager
[17:46] <c0s> cory_fu: indeed, it isn't a new thing
[17:47] <c0s> I am not sure why Spark docs don't have it any more. In all honesty, I am not sure about many things that the Spark project does nowadays
[17:47] <cory_fu> c0s: Looks like the spelling of the options might have changed to --master yarn --deploy-mode {cluster,client,standalone}?
[17:48] <kwmonroe> c0s: :)  cory_fu, i think you're right.  looks like our latest spark charm isn't up with the times: https://github.com/juju-solutions/layer-apache-spark/blob/master/lib/charms/layer/apache_spark.py#L107
[17:48] <cory_fu> Hrm, no, I think --deploy-mode only accepts {cluster,client}
[17:48] <c0s> oh, that's interesting
[17:48] <kwmonroe> iow, i don't think --master yarn-* is relevant in my read of the latest docs.  it needs to be --master yarn and --deploy-mode cluster|client
[17:49] <c0s> at any rate - if we are thinking of moving away from the yarn deployment model, this whole discussion might be moot
[17:49] <kjackal_> http://spark.apache.org/docs/latest/running-on-yarn.html
[17:49] <c0s> kwmonroe: perhaps --master yarn --deploy-mode cluster replaces the yarn-standalone thing. I could check the source code to make sure, if it is of any interest to anyone ;)
[17:50] <kjackal_> Thank you c0s please keep me in the loop
[17:50] <kwmonroe> it is indeed of interest to me c0s -- i'd like to get our --master fixed, because what we have now shouldn't work (unless i'm missing deprecation warnings)
[17:51] <c0s> indeed so .... https://github.com/apache/spark/pull/95
[17:52] <c0s> yarn-standalone has been renamed
[17:52] <c0s> please accept my apologies for confusing the hell out of everybody
[17:52] <kwmonroe> lol
[17:52]  * c0s ducks
[17:53]  * magicaltrout is permenantly in a state of confusion anywa
[17:53] <c0s> kjackal_: however, it still might worth changing yarn-client to yarn-cluster in the scope of this discussion ;)
[17:53] <kwmonroe> i think we're still not correct c0s.  we pass the execution mode config setting as the --master $var, but in the latest docs from kjackal_, it seems any yarn mode needs --master yarn followed by another setting controlling what type of yarn mode we want.
[17:53] <magicaltrout> permanently
[17:53] <c0s> I think you're right kwmonroe
[17:54] <c0s> magicaltrout: thanks, I feel better for at least not adding to your burden ;)
[17:54] <kjackal_> the "yarn-client" resolves to master=yarn and deploy=client
[17:54] <c0s> yeah, that's what I am saying
[17:54] <kjackal_> let me try to find the doc I read about this
[17:55] <c0s> and deploy, I believe, needs to be cluster
[17:55] <kjackal_> but you are right, it is the preferred way
[17:56] <c0s> I think --deploy-mode client is good for spark-shell and some such nonsense
[17:56] <kjackal_> http://spark.apache.org/docs/latest/submitting-applications.html
[17:57] <kjackal_> yarn-client	Equivalent to yarn with --deploy-mode client, which is preferred to `yarn-client`
[17:57] <kjackal_> yarn-cluster	Equivalent to yarn with --deploy-mode cluster, which is preferred to `yarn-cluster`
[17:58] <cory_fu> That's an awfully non-committal deprecation note
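The deprecation mapping kjackal_ pasted can be sketched as a small normalizer; the function name is ours, not Spark's, and this only models the translation the docs describe:

```python
# Translate legacy "yarn-client"/"yarn-cluster" master strings into the
# modern spark-submit form: --master yarn plus a --deploy-mode value.
def normalize_master(master):
    """Return (master, deploy_mode) for a spark-submit master string."""
    legacy = {
        "yarn-client": ("yarn", "client"),
        "yarn-cluster": ("yarn", "cluster"),
    }
    # Non-legacy values pass through with no implied deploy mode.
    return legacy.get(master, (master, None))

print(normalize_master("yarn-client"))   # ('yarn', 'client')
print(normalize_master("yarn-cluster"))  # ('yarn', 'cluster')
```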
[17:59] <c0s> yeah, also this is what's important really
[17:59] <c0s> --deploy-mode: Whether to deploy your driver on the worker nodes (cluster) or locally as an external client (client) (default: client)  †
[17:59] <c0s> along with the foot-note
[17:59] <kwmonroe> so c0s, you're saying that you want this in cluster mode so both master (driver) and workers are in the yarn cluster.  i'm ok with that, but originally, we thought it might be best for the driver to be outside of yarn.  i'm not sure why.  probably flipped a coin.
[18:00] <c0s> here it is
[18:00] <c0s> † A common deployment strategy is to submit your application from a gateway machine that is physically co-located with your worker machines (e.g. Master node in a standalone EC2 cluster). In this setup, client mode is appropriate. In client mode, the driver is launched directly within the spark-submit process which acts as a client to the cluster. The input and output of the application is attached to the console. Thus, this mode is especially suitable for applications that involve the REPL (e.g. Spark shell).
[18:01] <c0s> as I said earlier: client mode will involve more network traffic and will require a live connection between the client machine and the cluster to avoid losing some output, etc.
[18:02] <c0s> however, it seems that in case where the client machine is co-located with the cluster - the deploy mode doesn't make much of a difference
[18:02] <cory_fu> Doesn't this only affect jobs that are run by the charm, in which case they *would* be co-located within the cluster?
[18:03] <cory_fu> And they mention it being "especially suitable for applications that involve the REPL (e.g. Spark shell)."  That seems like it would include the Zeppelin notebooks, no?
[18:03] <c0s> what if the configuration from such charmed cluster is copied to an external client and used there then?
[18:04] <c0s> if you, however, frame the question like cory_fu does - then it makes sense to use client one, I guess
[18:05] <c0s> again, apologies for crying wolf
[18:05] <cory_fu> c0s: Hrm.  I don't think we should be supporting copying the config like that.  For one thing, it will reference cloud-internal IPs for everything, which wouldn't work outside anyway.  Instead, we should probably have an action or something that gives you the config you need to use the cluster from an external client
[18:09] <c0s> fair enough, cory_fu
[18:27] <kwmonroe> hey cory_fu, i'm working with Guest57293 (that's her given name).  what would cause something like this:
[18:27] <kwmonroe> update-status: File "/usr/local/lib/python3.4/dist-packages/charms/reactive/relations.py", line 255, in conversation | raise ValueError('Unable to determine default scope: no current hook or global scope') | ValueError: Unable to determine default scope: no current hook or global scope
[18:27] <kwmonroe> trying to access a conversation from something like config-changed?
[18:30] <cory_fu> That means your interface layer is not scope=GLOBAL and is using a method that requires a default conversation, like self.conversation(), self.get_remote(), etc.  IOW, you're asking it to tell you something about "this conversation" without it having enough information to know what conversation  you're talking about
[18:30] <cory_fu> The conversation metaphor was intended to simplify things but I think ended up making things more confusing.  :(
[18:32] <cory_fu> I'd have to see the specific code in question, but self.get_remote(), self.set_remote(), or self.conversation() (singular) are the most common cause, and it generally means you need to instead have a "for conv in self.conversations(): conv.get_remote()" or similar loop.
[18:32] <cory_fu> Let me find an example
[18:32] <magicaltrout> a confusing metaphor in Juju?! no way!
[18:32]  * magicaltrout is truely shocked
[18:33] <magicaltrout> -e
[18:36] <cory_fu> I think https://github.com/juju-solutions/interface-mapred/blob/master/provides.py is a fair example.  You'll notice that inside @hook decorated methods, it uses self.conversation().  That's because it's handling a specific relation hook, and so it knows what remote unit it's talking to.  But in the other methods, it has to be aware that it could be talking about several different remote units at once, so it loops over the list of conversations and
[18:36] <cory_fu> handles each one separately
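cory_fu's scoping rule can be illustrated with a toy model (this is not the real charms.reactive API, which lives in charms.reactive.relations; it only mimics the error above):

```python
# Toy model of conversation scoping: conversation() needs a current relation
# hook to know which remote unit "this" conversation is; outside a relation
# hook (e.g. config-changed), you must loop over conversations() instead.
class Relation:
    def __init__(self, convs):
        self._convs = convs          # one conversation per remote unit
        self.current_hook = None     # set by the framework inside relation hooks

    def conversations(self):
        return list(self._convs)

    def conversation(self):
        if self.current_hook is None:
            raise ValueError('Unable to determine default scope: '
                             'no current hook or global scope')
        return self._convs[0]

rel = Relation(convs=["client/0", "client/1"])
# Outside a hook, iterate rather than calling rel.conversation():
for conv in rel.conversations():
    pass  # conv.get_remote(...), conv.set_remote(...), etc., per remote unit
```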
[18:36] <cory_fu> kwmonroe, Guest57293: If you can point me to the code that's causing the error, I can give more specific advice
[18:42] <Guest57293> Hi Cory, you can find the code here : provides.py : http://paste.ubuntu.com/15541870/
[18:42] <Guest57293> requires.py : http://paste.ubuntu.com/15541889/
[18:46] <cory_fu> Guest57293: Ok, so in provides.py, the scope.UNIT seems right because you might be providing NFS storage to several different clients.  However, that means that host_name has to send the hostname_storage to  every client, and nfsclient_ip might return several addresses, one for each client.
[18:47] <cory_fu> Guest57293: So, both of those need to use the "for conv in self.conversations():" loop, and nfsclient_ip I guess needs to return a list instead of one value
[18:49] <Guest57293> ok.
[18:51] <Guest57293> I had one more doubt: I have a relation between an NFS storage server and n nfs clients. For the first unit of nfs client, all states and values are set, which is fine.
[18:52] <Guest57293> But for the second unit , the previous state values are retained, so it directly enters the @when condition. How can I reset my state values and variables ?
[18:53] <Guest57293> for nfs client units
[19:02] <cory_fu> Guest57293: So, there are two approaches to that for the interface layer.  My preferred approach is for the interface layer to set the states according to what is connected / available, and then have the charm use things like "data_changed" to figure out if something changed and needs to be responded to
[19:03] <cory_fu> Guest57293: The other approach is more what you're talking about, which is to set the state when something happens and then remove the states once it's been processed.  In that case,  you'd need to add an "acknowledge_client" method on the interface which would remove the states.
[19:04] <cory_fu> Guest57293: The downside of that approach is that you then can no longer ask the interface layer "what are all the connected nodes", for instance.
[19:04] <cory_fu> But maybe you don't need that and the other approach might simplify your charm.
[19:04] <magicaltrout> marcoceppi: i have a charm in the review queue with ancient tests so it flags red
[19:05] <magicaltrout> a) does that matter? b) can I do anything about it or do I just wait?
[19:05] <marcoceppi> magicaltrout: do you want another test run?
[19:05] <magicaltrout> be nice
[19:05] <magicaltrout> they run in bundletester
[19:05] <Guest57293> thanks cory
[19:05] <magicaltrout> but I have no clue how that translates to the real thing
[19:05] <marcoceppi> magicaltrout: which charm?
[19:05] <marcoceppi> magicaltrout: bundletester is what we run in the test env
[19:06] <magicaltrout> yeah, but a lxc juju 1.2 instance != the canonical test env ;) saikuanalytics-enterprise
[19:06] <marcoceppi> magicaltrout: heh, which charm?
[19:07] <magicaltrout> https://jujucharms.com/u/spicule/saikuanalytics-enterprise/trusty/6
[19:07] <marcoceppi> magicaltrout: I just rekicked the test
[19:08] <marcoceppi> they're old
[19:08] <magicaltrout> ta
[19:08] <magicaltrout> yeah, i assumed that dumping your charm back into the review queue would re-test the charm
[19:08] <magicaltrout> but that assumption was invalid :)
[19:14] <marcoceppi> magicaltrout: it will in new review queue! but not in current one
[19:14] <magicaltrout> supposedly this new review queue will also write my charm for me....
[20:11] <cholcombe> make lint keeps catching me ridin dirty haha
[20:18] <gQuigs> is there a charm config way I missed to disable SSLv3 /RC4 in  juju-gui or openstack-dashboard  (also asked if charms should just do that by default in -dev)
[20:23] <durschatz1> What is the best way to find out the exact steps taken to upgrade the openstack  neutron-dhcp-agent?  I'm not sure who the charmer is to ask.
[20:26] <deanman> I'm having the layer-docker charm fail to install properly during the install hook. Is debug-hooks the proper way to debug this ?
[20:50] <kwmonroe> deanman: you could do debug-hooks on the failed unit, then "juju resolved --retry <unit>" in another terminal to manually run the install hook.  or you could "juju debug-log --include <unit> -n 100" to see the last 100 lines of unit log.
[20:58] <marcoceppi> gQuigs: https://jujucharms.com/juju-gui/#charm-config-secure that seems to be what you want, though it may not work in the gui charm
[21:02] <deanman> kwmonroe: Thanks, i found the problem.
[21:03] <deanman> Is deploying a layer-docker charm inside an lxc container bad practice? I'm trying to learn how to deploy my custom docker images but they seem to fail when deploying on a wily lxc container using the layer-docker charm template.
[21:04] <kwmonroe> deanman: lazyPower or mbruzek know for sure, but i didn't think docker inside lxc containers worked.
[21:05] <kwmonroe> (sorry, i don't know why, but think i've heard that recently.  i should probably be quiet)
[21:05] <lazyPower> deanman - it works only if (the only combo i myself have verified; others may work but i haven't vetted them) you're on xenial installing the docker.io package inside a lxd container
[21:05] <lazyPower> and yeah, docker inside lxd totally works if you do the above :)
[21:05] <deanman> kwmonroe: How would you propose to work a local dev machine then with docker ?
[21:06] <lazyPower> the default fs type needs to be changed to overlay as well, vs the default of aufs. I believe there's still some tweaking that needs to happen to the docker profile.
[21:06] <deanman> I have local bootstrap VM deploying into lxc
[21:06] <lazyPower> ah also, you have to fire up that lxd image w/ the docker profile to disable the appropriate cgroups.
[21:06] <lazyPower> deanman - i haven't had any luck in that method :(
[21:07] <lazyPower> deanman - what we've done in the past, is use juju 1.25 w/ local: kvm
[21:07] <bdx> Is anyone else having issues with filing bugs on launchpad?
[21:07] <mbruzek> deanman: what Ubuntu version is your VM?
[21:08] <bdx> -> https://www.dropbox.com/s/vl3hac0ns9kvcwg/Screen%20Shot%202016-03-28%20at%202.04.56%20PM.png?dl=0
[21:08] <deanman> lazyPower: i think i saw somewhere juju nagging about my cpu not supporting KVM or something (running MacOS and then vagrant to host my juju env)
[21:08] <lazyPower> ah yeah, hypervisor in a hypervisor
[21:09] <mbruzek> bdx: there is a #launchpad channel on IRC
[21:09] <lazyPower> even if it did work, the performance would be abysmal
[21:09] <deanman> mbruzek: wily64
[21:09] <bdx> mbruzek: sweet, thx
[21:09] <mbruzek> deanman: I have only got it to work with xenial
[21:10] <deanman> mbruzek: Would it be stable enough to do my juju learning and invite colleagues to participate ?
[21:10] <lazyPower> deanman - if you dont have a local configuration to work with, you can register for the charm developer program and we'll give you some AWS credits to dev with
[21:10] <mbruzek> deanman or is lxc a requirement?
[21:12] <deanman> lazyPower: by local configuration you mean a non-virtualised ubuntu environment right ?
[21:13] <deanman> mbruzek: My only requirement is how to port a product we have in docker into juju and share the magic with my colleagues :-)
[21:13] <lazyPower> deanman - Right. If you have spare hardware there's a few options available to you as well, such as setting up a single server to run juju w/ kvm to work on docker workloads until we've gotten a stable docker-running-in-lxd story.
[21:13] <lazyPower> deanman - but to start, here's the form for the charm developer program: https://developer.juju.solutions/
[21:15] <mbruzek> deanman: OK. Yeah LXD/LXC can run together, but that is really coming together for 16.04 (yet unreleased)  and the experience is not great. You can sign up for the developer program to use amazon
[21:15] <mbruzek> where you get a full vm and it will run docker, lxc whatever
[21:16] <mbruzek> I have an early copy of 16.04 (xenial) on my laptop and have been able to run Docker inside LXC, but not through Juju yet.
[21:16] <mbruzek> I was able to run this manually
[21:17] <mbruzek> I use the local kvm provider with Juju 1.25 with much success.
[21:17] <deanman> so for docker workloads you either deploy on a bare machine e.g. --to 1 or if you want to pack more using --to kvm:x ?
[21:18] <gQuigs> marcoceppi that just makes it not in plain text, I want it to use modern SSL..  thanks though
[21:18] <lazyPower> deanman: correct
[21:19] <mbruzek> deanman: actually give me "juju version"
[21:19] <lazyPower> only we don't use --to kvm:x; the local:kvm provider routes all "machine" requests through the kvm installation, and spins up VMs that act as bare metal.
[21:19] <mbruzek> https://jujucharms.com/docs/stable/config-KVM
[21:19] <deanman> mbruzek: 1.25.3-wily-amd64
[21:20] <mbruzek> This works with juju 1.25, use ^ document to set up the kvm provider
[21:20] <mbruzek> Then you can deploy all the docker workloads you want
[21:21] <deanman> That's for the remote juju deployment/workflow. For local dev i'll have to wait for xenial ?
[21:22] <deanman> or at least in my case running mac
[21:22] <lazyPower> deanman - yep, and possibly a bit longer
[21:22] <lazyPower> we're still exploring and piloting running docker based workloads in lxd
[21:23] <mbruzek> deanman: That is our local provider setup.  I don't own a mac so I can't tell you if kvm will work with your vm solution
[21:24] <deanman> mbruzek: on a linux based pc for local dev. KVM is fast enough ?
[21:24] <mbruzek> deanman: Are you doing vagrant to get wily64 ?
[21:24] <deanman> mbruzek: Yeap! Using vagrant to ease the whole process and to introduce it to a wide audience, mostly windows machines and less linux.
[21:25] <mbruzek> deanman: KVM is awesome fast. But if you are using a mac, you are doing virtualization, and then kvm (another layer of virtualization); it wouldn't be what I would choose
[21:25] <mbruzek> aisrael: lazyPower: can vagrant vms do KVM inside them?  I have heard parallels can.
[21:26] <mbruzek> but I don't know vagrant.
[21:26] <deanman> mbruzek: vagrant -> virtualbox
[21:26] <mbruzek> deanman: Then that should work, the setup for the local provider is pretty simple/easy.
[21:26] <mbruzek> you will know for sure soon enough.
[21:27] <mbruzek> deanman: If that does not work sign up for the developer program that lazyPower referenced and you can get some free amazon creds.
[21:28] <mbruzek> deanman: my w540 laptop has linux on it; I have ubuntu installed and the local kvm provider configured, and it is crazy fast.
[21:28] <mbruzek> through juju
[21:29] <mattrae> hi, is it possible to change how Juju generates the hostnames of LXC containers (i.e. juju-machine-1-lxc-10) to avoid having duplicate names when managing multiple juju environments within the same DNS domain?
[21:32] <mbruzek> mattrae: you would have to contact a core developer on that one.
[21:33] <mbruzek> mattrae: how are you getting a duplicate name?
[21:36] <mattrae> mbruzek: thanks mbruzek. i need to do more testing; reportedly it happens when there are multiple environments on the same maas server, since both environments will use juju-machine-1-lxc-1 for their first container on machine 1
[21:36] <mbruzek> mattrae: I see now. This is not something I have ever tried, and don't know how to do.
[21:37] <mbruzek> mattrae: I once ran 2 lxc providers on the same machine and was able to change the ports of juju so they would not collide, but not the names of the instances
[21:37] <deanman> mbruzek: following the kvm guide you shared and it complains about "failed verification of local provider prerequisites: kvm-ok is not installed"
[21:38] <mbruzek> deanman: `sudo apt-get install juju-local`
[21:38] <mattrae> mbruzek: cool thanks for the suggestions :)
[21:38] <mbruzek> deanman: you may also need to install kvm, let me check the packages that I have installed
[21:39] <deanman> mbruzek: i think it's not supported on macos
[21:39] <mbruzek> deanman install it on your wily64
[21:39] <mbruzek> deanman: if the juju-local does not install qemu-kvm you should install that too
[21:40] <mbruzek> but I think it should
[21:40] <deanman> mbruzek: "KVM acceleration can NOT be used" when running kvm-ok inside the vm
[21:41] <mbruzek> deanman: that means kvm will run slow, but I suspect it will still run
[21:41] <mbruzek> deanman: Some virtualization technologies do not run nested virtualization. On Ubuntu I am able to run KVM within KVM
[21:42] <deanman> i think it didn't like it when defined inside environments.yaml (container: kvm)
[21:42] <mbruzek> deanman: what was the error?
[21:43] <mbruzek> deanman: juju would give you an error when you tried to bootstrap, if you got an environments.yaml error I suspect a yaml format error
[21:43] <deanman> mbruzek: "ERROR there was an issue examining the environment: failed verification of local provider prerequisites: kvm-ok is not installed. Please install the cpu-checker package."
[21:44] <mbruzek> apt-get install cpu-checker
[21:44] <deanman> mbruzek: cpu-checker is installed ;-)
[21:44] <deanman> minor bug
[21:44] <mbruzek> OK
[21:44] <mbruzek> well then sign up for the developer program and use aws
[21:45] <mbruzek> I apologize this didn't work, I am not mac savvy
[21:45] <deanman> ok it did go forward and was able to deploy juju-gui on a new machine (guessing KVM)
[21:45] <mbruzek> virsh list --all
[21:47] <mbruzek> that will show the kvm machines on the system.
[21:47] <deanman> mbruzek: empty, but after bootstrapping a local provider inside a vm and issuing juju deploy juju-gui i saw a new machine being added and the charm actually deploying
[21:48] <deanman> had to use sudo though for virsh, that's the expected way right?
[21:48] <mbruzek> deanman: yes but you can add the user to the virsh group and use virsh commands
[21:49] <deanman> would sudo report something different? no entries on virsh
[21:49] <mbruzek> deanman: no sudo would give you everything
[21:50] <deanman> mbruzek: ok so I'm not running that charm inside kvm, where am i running it then ? :-0
[21:51] <mbruzek> deanman: did you specify kvm when you bootstrapped? I suspect you might be in lxc
[21:52] <deanman> mbruzek: nope, no kvm or lxc reference whatsoever while bootstrapping or in environments.yaml
[21:53] <mbruzek> deanman: http://pastebin.ubuntu.com/15544632/
[21:53] <mbruzek> That is what my virsh list --all looks like
[21:54] <deanman> http://pastebin.ubuntu.com/15544671/
[21:54] <mbruzek> local in this case is lxc
[21:55] <mbruzek> sudo lxc image list
[21:55] <deanman> and this is my environments http://pastebin.ubuntu.com/15544688/
[21:55] <mbruzek> where was your kvm configuration
[21:55] <mbruzek> ?
[21:56] <deanman> it was in that environments.yaml file but removed when it complained during bootstrap
[21:56] <mbruzek> deanman: That is LXC then: https://jujucharms.com/docs/stable/config-LXC
[21:57] <mbruzek> You will not be able to use docker within those LXC containers.
[21:57] <deanman> mbruzek: yeah it seems so...
[21:57] <mbruzek> The gui will deploy and other charms should work fine, just Docker does not play nice with the wily code
[21:58] <mbruzek> I have to get going soon, I recommend the developer program and use aws
[21:59] <mbruzek> deanman: https://developer.juju.solutions/
[22:00] <mbruzek> and ping marcoceppi once you fill that out, then you can use a real cloud
[22:03] <deanman> mbruzek: Thank you
[22:09] <mbruzek> deanman: you are welcome good luck.
[22:31] <LiftedKilt> Is there a way to specify the series in the lxc containers that are created? For example my maas default deploy is xenial, and I want all my servers to be xenial. If I need to deploy a charm that doesn't support xenial, can I have juju request a new xenial server, and deploy the charm in a trusty container?
[22:34] <marcoceppi> LiftedKilt: you can create a xenial machine then tell juju to put a trusty lxc on it
[22:34] <marcoceppi> juju add-machine --series xenial
[22:34] <marcoceppi> that gives you, say machine 8
[22:35] <marcoceppi> juju deploy trusty/juju-gui --to lxc:8
[22:35] <LiftedKilt> marcoceppi: awesome - that's exactly what I was hoping for
[22:36] <LiftedKilt> marcoceppi: is there a way to deploy it without first adding the machine or specifying which machine I want it on?
[22:38] <marcoceppi> LiftedKilt: not entirely
[22:39] <marcoceppi> LiftedKilt: are you looking to do this in a bundle?
[22:39] <LiftedKilt> marcoceppi: potentially, but not exclusively
[22:40] <LiftedKilt> marcoceppi: was hoping for a way to be able to "juju deploy charm --to lxc:" and let it pick a machine or create one as it sees fit
[22:43] <LiftedKilt> marcoceppi: essentially I want to be able to deploy charms without worrying about series, and have juju create the appropriate containers to facilitate those charms
[22:44] <LiftedKilt> marcoceppi: with the intention of deploying every charm to a separate container
[23:14] <marcoceppi> LiftedKilt: that's a great idea, but not one that exists today
[23:27] <LiftedKilt> marcoceppi: haha well that's a bummer
[23:27] <marcoceppi> LiftedKilt: it's not a bad idea to reply to the 2.1 roadmap email about it
[23:27] <LiftedKilt> marcoceppi: I think I can make do without, but it would definitely make things easier
[23:27] <marcoceppi> --to lxc: would be nice