lazypower | you realize my hand is getting sore from all the hi5's lately dude. You've been killin it. | 00:00 |
---|---|---|
jose | lazypower: apparently the Chamilo charm is working and needs to be promulgated so it can be promoted during an expo | 00:00 |
jose | thanks :) | 00:00 |
lazypower | when is this expo? | 00:00 |
jose | from today until saturday | 00:00 |
jose | started like at noon today | 00:00 |
lazypower | oh ok, zero notice | 00:00 |
* jose told Marco on Monday | 00:00 | |
lazypower | i'm checking into the MapR Hadoop distribution charm. Once that's done i can take a look at chamilo again | 00:00 |
jose | thank you! | 00:01 |
lazypower | ah, he's been saddled with my MongoDB work since i fell ill | 00:01 |
lazypower | i'll pick up the torch nbd | 00:01 |
jose | in the meanwhile, I'll organize the Ubuntu on Air! channel for a nicer experience | 00:01 |
lazypower | what | 00:01 |
jose | you'll see later today | 00:04 |
* lazypower eyeballs jose suspiciously | 00:05 | |
marcoceppi | jose: promulgated | 01:04 |
jose | marcoceppi: awesome, thanks a bunch! \o/ | 01:04 |
=== thumper is now known as thumper-afk | ||
=== vladk|offline is now known as vladk | ||
=== CyberJacob|Away is now known as CyberJacob | ||
yaell | Hi, we are new to Juju and trying to deploy OpenStack with it. We have run into some basic problems. We are using 10 machines: one is the MAAS and Juju server, and the other 9 are for deployment. The problem is that there are 9 services to deploy but only 8 machines left after bootstrap. When we try to deploy 2 services on 1 machine, something always fails. Does anyone know if there are services that cannot be installed on the same machine? | 07:45 |
yaell | In addition is this the best place to ask questions regarding juju and openstack or is there an additional mailing list? | 07:46 |
axw | yaell: I don't know the answer to your problem, but there's a mailing list (https://lists.ubuntu.com/mailman/listinfo/juju) and http://askubuntu.com/ | 07:50 |
axw | this is an appropriate place, but I think most of the people who know the answers will be asleep | 07:51 |
mwhudson | yaell: you can deploy a bunch of the services to separate containers on one node | 07:57 |
mwhudson | i'm fairly sure | 07:57 |
mwhudson | how production-ish is this? | 07:57 |
yaell | Just testing it for now | 08:04 |
yaell | how do I deploy to separate containers on the same node? | 08:04 |
=== vladk is now known as vladk|offline | ||
mwhudson | yaell: if you know the machine number you can say juju deploy <charm> --to lxc:$machine_number | 08:18 |
mwhudson | yaell: juju help deploy has some more examples | 08:19 |
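For context, a minimal example of the container placement mwhudson describes; the machine numbers and charm names below are illustrative, not taken from the log:

```sh
# Place services into LXC containers on existing MAAS machines
# (check 'juju status' for your actual machine numbers).
juju deploy mysql --to lxc:1
juju deploy rabbitmq-server --to lxc:1
# Additional units can be placed the same way:
juju add-unit mysql --to lxc:2
```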
lukebennett | lazypower: Thanks for your reply to me yesterday. No the node stayed as allocated. As suspected, when I powered the node down manually and reran juju bootstrap, it PXE booted and deployed as expected. | 08:28 |
yaell | Yes, I did that. The problem is that when I do, the service does not come up properly, or if I add relations the service fails. | 08:29 |
mivtachyahu | good morning. Anybody ever seen a bug where juju mixes up what services are running on which machines? | 08:33 |
sherman | hey all, how's things? | 08:39 |
mwhudson | yaell: oh | 09:09 |
mwhudson | you need to mutilate networking a bit | 09:10 |
mwhudson | something like this https://github.com/Ubuntu-Solutions-Engineering/cloud-installer/blob/master/tools/cloud-sh/lxc-network.sh | 09:10 |
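The linked script is not reproduced here; as a rough, hypothetical sketch of the idea (the interface names eth0 and br0 are assumptions), bridging the node's primary NIC lets containers take addresses on the MAAS-managed network instead of hiding behind the NATed lxcbr0:

```sh
# Illustrative only -- not the contents of the linked lxc-network.sh.
sudo apt-get install -y bridge-utils
sudo brctl addbr br0
sudo brctl addif br0 eth0
# Then move the host's IP configuration from eth0 to br0 in
# /etc/network/interfaces and point the containers' lxc.network.link at br0.
```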
=== vladk|offline is now known as vladk | ||
=== roadmr is now known as roadmr_afk | ||
=== vladk is now known as vladk|offline | ||
=== vladk|offline is now known as vladk | ||
=== vladk is now known as vladk|offline | ||
sherman | hey, dumb q: can I bootstrap to a local container then juju deploy to MaaS? or do I need a MaaS node bootstrapped? | 11:05 |
=== vladk|offline is now known as vladk | ||
sherman | like is my machine:0 just a node because maas is set as my default? | 11:08 |
=== roadmr_afk is now known as roadmr | ||
AskUbuntu | MAAS/juju not cleaning up nodes | http://askubuntu.com/q/488396 | 11:24 |
=== roadmr is now known as roadmr_afk | ||
=== roadmr_afk is now known as roadmr | ||
=== vladk is now known as vladk|offline | ||
=== vladk|offline is now known as vladk | ||
=== vladk is now known as vladk|offline | ||
=== eagles0513875 is now known as greenrice | ||
=== greenrice is now known as eagles0513875 | ||
=== urulama is now known as uru-food | ||
schegi | jamespage, out there?? | 13:01 |
=== vladk|offline is now known as vladk | ||
lazypower | lukebennett: interesting. That's happened to me a few times, things seem to get out of whack once in a great while and things go into a pending state. | 13:30 |
=== uru-food is now known as urulama | ||
lazypower | sherman: i dont understand the question | 13:31 |
lazypower | You want to place your bootstrap node in an LXC container somewhere, and use that to orchestrate your MAAS cluster? | 13:32 |
=== rogpeppe2 is now known as rogpeppe | ||
jcastro | hey so I noticed something odd | 13:43 |
jcastro | hey marcoceppi | 13:43 |
jcastro | https://juju.ubuntu.com/docs/tools-amulet.html | 13:43 |
jcastro | in the sidebar | 13:43 |
marcoceppi | jcastro: hey | 13:43 |
jcastro | I think we should just rename it to "Writing Tests" to be more obvious. | 13:43 |
gQuigs | with the local-provider all of my VMs get the same IP/DNS name, what am I doing wrong? | 13:43 |
jcastro | https://juju.ubuntu.com/docs/authors-testing.html | 13:44 |
jcastro | oh, this is the page I wanted anyway | 13:44 |
marcoceppi | jcastro: it's the Amulet reference guide | 13:44 |
marcoceppi | yeah | 13:44 |
jcastro | I'm going to retitle that | 13:44 |
jcastro | "Writing tests for charms" | 13:44 |
lazypower | gQuigs: lxc is attempting to assign the same ip to all of your containers? | 13:46 |
gQuigs | lazypower: afaict it actually does | 13:47 |
lazypower | that.. makes no sense. its given a range in the configuration to choose from | 13:47 |
lazypower | what version of juju/lxc? | 13:47 |
gQuigs | last time, I could ssh to the IP and I would randomly get one of the VMs | 13:47 |
lazypower | gQuigs: if you look in /etc/default/lxc-net you should see a config variable for LXC_DHCP_RANGE | 13:48 |
gQuigs | juju, stable/ppa - 1.18.4-trusty-amd64; lxc, the stable | 13:48 |
jcastro | huh, how is that even possible | 13:49 |
lazypower | jcastro: this ties in with what bluespeck emailed us about this morning. | 13:50 |
jcastro | surely it would error out if it tried to get the same ip? | 13:50 |
lazypower | and right, the container should sit in pending due to ip collision | 13:50 |
jcastro | lazypower, but they were on azure, so if it affects multiple providers .... | 13:50 |
lazypower | jcastro: well i dont have empirical evidence but this sounds strikingly similar | 13:50 |
jcastro | similar enough to ask somebody | 13:51 |
gQuigs | lazypower: jcastro: nope, it tries really hard to work... http://pastebin.ubuntu.com/7705971/ | 13:51 |
* gQuigs checking in ..default/ now | 13:51 | |
lazypower | what the | 13:51 |
jcastro | hey so let's open a bug right away | 13:51 |
jcastro | so I can ask people to take a look | 13:52 |
lazypower | gQuigs: do you mind attaching the output of your machine-0.log, your juju status, and filing a bug at launchpad.net/juju-core ? | 13:53 |
lazypower | er juju-local | 13:53 |
gQuigs | lazypower: sure, will do if I can find a cause | 13:55 |
lazypower | gQuigs: we need to poke at this sooner rather than later. this appears to be related (but may not be) | 13:55 |
gQuigs | lazypower: what is your /etc/default/lxc-net supposed to look like? | 13:56 |
lazypower | jcastro: once we have a bug # will you follow up with the bluespeck guys? | 13:56 |
lazypower | gQuigs: http://paste.ubuntu.com/7705989/ is mine | 13:56 |
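The paste link may not survive; for reference, the stock trusty /etc/default/lxc-net looks roughly like this (values quoted from memory, so compare against your own file):

```sh
USE_LXC_BRIDGE="true"
LXC_BRIDGE="lxcbr0"
LXC_ADDR="10.0.3.1"
LXC_NETMASK="255.255.255.0"
LXC_NETWORK="10.0.3.0/24"
LXC_DHCP_RANGE="10.0.3.2,10.0.3.254"
LXC_DHCP_MAX="253"
```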
* gQuigs is identical... I'm going to revert to a non-ppa LXC... | 13:57 | |
sparkiegeek | marcoceppi: jcastro: sorry for nagging but would really appreciate a look at https://code.launchpad.net/~adam-collard/charms/precise/reviewboard/trunk/+merge/224041. Fixes a bug that prevented installation on stale images and adds a ton of tests | 14:00 |
jcastro | sparkiegeek, yeah reviewers have been swamped | 14:02 |
lazypower | sparkiegeek: this looks good - however your tests are in hooks/ | 14:02 |
jcastro | lazypower, you got time to take a break and give that a review today? | 14:02 |
lazypower | there's an idiom to keep tests in $CHARM_DIR/tests so they can be executed via charm test | 14:02 |
sparkiegeek | lazypower: they aren't amulet tests (as discussed before, can't do that due to postgresql charm relation) | 14:03 |
lazypower | ahhh ok so you're aware | 14:03 |
sparkiegeek | lazypower: I consider it Python best practice to put them where they are now (close to the code) | 14:03 |
lazypower | well, you have a make target for them | 14:03 |
lazypower | +1 | 14:03 |
sparkiegeek | yup :) | 14:03 |
lazypower | i dont have the same dependencies though, is it possible you could add a requirements.txt or something with the deps and a make setup target that would pip install those for me? | 14:04 |
lazypower | that way i'm not manually chasing down the deps? | 14:04 |
sparkiegeek | lazypower: of course | 14:04 |
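What lazypower is asking for amounts to roughly the following; the file and target names are assumptions about the reviewboard branch, not taken from it:

```sh
# With a requirements.txt listing the test dependencies in the charm root,
# a reviewer only needs:
pip install -r requirements.txt   # what a 'make setup' target would wrap
make test                         # the existing target that runs the unit tests
```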
=== vladk is now known as vladk|offline | ||
jcastro | asanjar, hey elasticsearch ninja-edition is now in the store, please start bundling cool stuff | 14:08 |
wesleymason | marcoceppi: lazypower: I've been told you've been working on the mongo charm, I've been working on getting it to use the storage subordinate for persistent volumes | 14:10 |
jcastro | marcoceppi, how often does the github sync work? elasticsearch seems out of date but it updated in the store like yesterday | 14:12 |
lazypower | wesleymason: we're under an active effort to rewrite it to use currently existing charm-helpers and reduce overall complexity of the charm. We should have something to look at by next week. Do you have your storage-volume branch somewhere we can reference? | 14:12 |
lazypower | i'd be willing to try to fold it in to the first cut if its basically there in the mongodb charms current form. | 14:12 |
wesleymason | lazypower: I have an MP, with branch referenced: https://code.launchpad.net/~wesmason/charms/precise/mongodb/add-storage-subordinate-support/+merge/223539 | 14:16 |
lazypower | wesleymason: ah, thanks. seems pretty straightforward. I'll add this to our card so we have this as a base feature OOTB | 14:17 |
wesleymason | lazypower: cheers :) | 14:17 |
lazypower | gQuigs: still banging your head against the desk with ip assignment and LXC? | 14:18 |
gQuigs | lazypower: nuked all the juju/local/lxc packages and trying fresh | 14:19 |
gQuigs | hmm.. now on reinstall I'm getting lxcbr0 not found.. maybe I messed with that at some point... | 14:23 |
lazypower | gQuigs: it's a virtual ethernet bridge device. When you purged, that probably removed the device. did you install the juju-local package? | 14:25 |
gQuigs | lazypower: yup | 14:25 |
wesleymason | lazypower: fyi I'm also *currently* adding some nagios checks to the mongodb charm on another branch | 14:28 |
lazypower | wesleymason: might want to hold on that pending our re-release of the charm | 14:28 |
wesleymason | lazypower: is it going to add checks? Or just because of the refactor of the charm? | 14:29 |
marcoceppi | lazypower wesleymason adding the nagios relation should be pretty easy to transplant, unless you're adding it in hooks.py | 14:29 |
wesleymason | marcoceppi: yeah, the relation/hook is about 6 lines of python, most of the checks are in separate nagios plugin script files | 14:29 |
marcoceppi | wesleymason: cool, should be pretty easy to transplant that | 14:29 |
wesleymason | aye, won't be a merge, but not hard to copy over | 14:30 |
wesleymason | as I ended up bumping charmhelpers on my branch too, for the nrpe helper | 14:30 |
wesleymason | marcoceppi: lazypower: got any idea of an ETA on charm refactor, or just "when it's done"? | 14:31 |
marcoceppi | wesleymason: I'm working on it right now, I should have it done by the end of the weekend | 14:31 |
wesleymason | awesome | 14:32 |
gQuigs | yup, now installing lxc doesn't create the lxcbr0 at install time... | 14:48 |
asanjar | lazypower: i am back from the doc visit, what's up | 14:48 |
* gQuigs goes to #lxcontainers | 14:48 | |
lazypower | couple of things actually | 14:48 |
lazypower | i gated a merge against your hadoop2-devel charm: https://code.launchpad.net/~lazypower/charms/trusty/hadoop2-devel/python_rewrite/+merge/224647 | 14:48 |
lazypower | that should close the loop on removing all the class bits out of the hooks and keeping it 1:1 with the bash counterpart. so we're good to go for replacement | 14:49 |
lazypower | we can pair program the tests as early as next week. | 14:49 |
lazypower | Did you get a chance to review MapR yet? | 14:50 |
=== makyo_ is now known as Makyo | ||
=== CyberJacob is now known as CyberJacob|Away | ||
schegi | anyone knows how to name ceph clusters when deployed with ceph charm?? | 14:55 |
gQuigs | and now that I have lxcbr0 working again, it's all fine - different IPs for each machine, yay :) | 15:00 |
jcastro | sebas5384, it will take me a bit longer to get a shirt to you | 15:00 |
schegi | jamespage, thx for your help yesterday, the ceph cluster is up and running with a split network. now i have to figure out how to configure the osds to use external journal devices, but this seems to be doable. | 15:02 |
lazypower | sebas5384: hey! i haven't forgot about you. I'll be working on the vagrant plugin this weekend. I should have a PR circa sunday. | 15:08 |
sebas5384 | yeah!! lazypower i'm a bit in phantom mode these days hehe | 15:08 |
lazypower | Totally understood :) Thanks for laying out the repository for the work though. I'm going to head back through it tonight and setup the remaining milestones and project plan | 15:09 |
sebas5384 | but i'm definitely going to come back to you one of these days :D | 15:09 |
lazypower | i started it and never finished the work. I've been sidetracked with 8 billion other objectives | 15:10 |
sebas5384 | great!! and i had some ideas that i'm going to register in the issues list :D | 15:10 |
sebas5384 | hehehe | 15:10 |
sebas5384 | i know that feeling | 15:10 |
sebas5384 | right now i'm in a drupal 8 hackathon | 15:10 |
lazypower | sounds good. I'm heading out to a dr's appt. I'll catch you on the flip sebas5384 | 15:10 |
sebas5384 | porting some modules | 15:10 |
lazypower | nice | 15:10 |
lazypower | keep kicking butt sebas5384 | 15:11 |
sebas5384 | hehehe yeah :D | 15:11 |
sebas5384 | great to hear from you man | 15:11 |
=== vladk|offline is now known as vladk | ||
=== vladk is now known as vladk|offline | ||
=== roadmr is now known as roadmr_afk | ||
Pa^2 | Probably should have asked this first: Can I run juju on a desktop system or does it require server? | 16:30 |
marcoceppi | Pa^2: you can run juju the client on Ubuntu, Mac OSX or Windows systems | 16:31 |
Pa^2 | I am still missing something. Did apt-get install juju-core, juju-local. Then did juju init, switch local, bootstrap - Machine "0" : agent-state started. | 16:51 |
g0ldr4k3 | hello everyone, I've a question for you: does juju-core have to be installed on the host machine or on the region controller (MAAS) if I want an environment based on MAAS (Region Controller + 3 Cluster Controllers + 3 nodes for each CC)? | 16:51 |
Pa^2 | When I attempt to do juju deploy wordpress it appears to try and start machine "1" but just says pending - many minutes now. Thoughts? | 16:52 |
sarnold | Pa^2: the first deploy downloads a whole cloud image.. that can take a while | 16:52 |
Pa^2 | I will wait and get back to you. Thanks. | 16:53 |
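For reference, the sequence Pa^2 describes for the local provider (Ubuntu 14.04 with juju 1.18) is roughly:

```sh
sudo apt-get install juju-core juju-local
juju init            # writes ~/.juju/environments.yaml
juju switch local
juju bootstrap
juju deploy wordpress
juju status          # the new unit sits in 'pending' while the cloud image downloads
```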
g0ldr4k3 | because I've installed it on the RC, but when I run the command "juju bootstrap -e maas --debug", juju tries to connect to the node but never establishes a connection!!! | 16:55 |
g0ldr4k3 | is there someone can help me? thanks | 16:55 |
g0ldr4k3 | the string is this "2014-06-26 16:55:29 DEBUG juju.utils.ssh ssh_openssh.go:122 running: ssh -o "StrictHostKeyChecking no" -o "PasswordAuthentication no" -i /home/richardsith/.juju/ssh/juju_id_rsa ubuntu@Ubuntu14.04LtsNode02Cluster01Svr /bin/bash" | 16:56 |
sarnold | g0ldr4k3: oh, nice; what happens when you run that command by hand? | 16:58 |
g0ldr4k3 | sarnold: which command do I have to run? | 16:59 |
g0ldr4k3 | sarnold: and why does it try to connect to only one node and not the others? | 16:59 |
sarnold | g0ldr4k3: ssh -o "StrictHostKeyChecking no" -o "PasswordAuthentication no" -i /home/richardsith/.juju/ssh/juju_id_rsa ubuntu@Ubuntu14.04LtsNode02Cluster01Svr /bin/bash | 17:00 |
g0ldr4k3 | sarnold: the result is this "ssh: Could not resolve hostname ubuntu14.04ltsnode02cluster01svr: Name or service not known" | 17:00 |
sarnold | g0ldr4k3: nice. | 17:01 |
sarnold | g0ldr4k3: do you recognize that funny hostname? "Ubuntu14.04LtsNode02Cluster01Svr" | 17:01 |
g0ldr4k3 | if I try to run an ssh connection from the host machine it works well | 17:01 |
sarnold | g0ldr4k3: do you recall if that's something that you might have assigned to it in some way? I'm really surprised about the . in the middle of it.. | 17:01 |
g0ldr4k3 | yes but it's a test in my lab | 17:02 |
sarnold | can you change the name to something that doesn't include a period? | 17:02 |
sarnold | unless you're in a position to run a domain 04LtsNode02Cluster01Svr that has a host named Ubuntu14... | 17:03 |
g0ldr4k3 | no, its hostname is different; that's just the node's name as added in the MAAS UI | 17:04 |
g0ldr4k3 | and that's it | 17:04 |
sarnold | hrm, did MAAS assign that name?? | 17:04 |
g0ldr4k3 | Anyone can help me? | 17:18 |
g0ldr4k3 | I posted my question earlier... | 17:21 |
sparkiegeek | g0ldr4k3: please rename the node in MAAS to not have the "." in it as per sarnold above (unless you really have a domain as he suggests) | 17:24 |
g0ldr4k3 | I have to check, I don't think its name has a "." | 17:25 |
g0ldr4k3 | Maybe I understand the issue. I don't have a domain set on MAAS | 17:29 |
g0ldr4k3 | The connection via ssh from the host machine works only with the IP | 17:30 |
g0ldr4k3 | Sparkiegeek: thanks a lot for support i'll try to set it on maas | 17:32 |
lukebennett | sparkiegeek: Thanks for your response to my query yesterday. PXE boot seems fine but I'm still having problems - http://askubuntu.com/questions/488396/ | 17:51 |
sparkiegeek | lukebennett: are you using virtual machines, or is this real hardware? What power type are you using? | 17:54 |
lukebennett | Real hardware via WOL | 17:54 |
sparkiegeek | lukebennett: any other DHCP servers in the network? Can you share the console log? | 17:55 |
lukebennett | Yes, using external DHCP server but with reserved IP and relevant PXE configuration | 17:56 |
lukebennett | You mean the maas log? | 17:57 |
sparkiegeek | lukebennett: "relevant PXE" scares me... are you not using PXE from MAAS? | 17:57 |
sparkiegeek | no I mean the log of the node that got booted | 17:58 |
lukebennett | Test bed consists of two nodes - one running as MAAS cluster controller and region controller, one as a node... node has reserved IP and DHCP server is configured so it PXE boots from the MAAS server | 17:58 |
lukebennett | PXE booting works fine - when I juju bootstrap, the server powers up | 17:58 |
lukebennett | The environment deploys ok (well it doesn't, I get various errors with some of the charms but that's another story) | 17:58 |
lukebennett | When I destroy it, mongod keeps running and the server stays online so a subsequent bootstrap fails | 17:59 |
lukebennett | Which particular log file are you interested in? Sorry if that's a stupid question, still finding my way around | 17:59 |
sparkiegeek | lukebennett: ok, sorry I went to read up on MAAS power settings | 18:05 |
sparkiegeek | lukebennett: so it turns out that (as I suspected) MAAS can't do power down using WOL | 18:05 |
lukebennett | Ah | 18:05 |
lukebennett | Thanks | 18:05 |
sparkiegeek | lukebennett: the MAAS guys hang out in #maas if you need more help :) | 18:06 |
lukebennett | Thanks, I've been somewhat split between the two :) It feels like regardless of the power down, juju ought to be able to clean up after itself? | 18:06 |
lukebennett | i.e. I shouldn't have to power the node down to redeploy onto it | 18:06 |
sparkiegeek | how are you "stopping" juju? | 18:08 |
sparkiegeek | I mean, destroy-environment or ? | 18:08 |
lukebennett | juju destroy-environment xxxx | 18:08 |
=== CyberJacob|Away is now known as CyberJacob | ||
AskUbuntu | Juju - configuring a service by clicking on charm with stand-alone implementation - LXCa | http://askubuntu.com/q/488575 | 18:26 |
sparkiegeek | lukebennett: can't say for sure but I suspect Juju expects MAAS to be able to power down machines and so doesn't do it itself | 18:28 |
sparkiegeek | lukebennett: do you have any remote power control for turning the node off? | 18:28 |
sparkiegeek | lukebennett: if so, you could "teach" it to MAAS by editing /etc/maas/templates/power/ether_wake.template | 18:29 |
lukebennett | I think WOL is all we have on this setup but I'll speak to our ops guys about it. May have to try going down the VM route for testing | 18:31 |
lukebennett | I would say that juju ought to be agnostic of what MAAS decides to do with powering down however | 18:32 |
lukebennett | I need to hit the road now but thanks for your help | 18:34 |
=== BradCrittenden is now known as bac | ||
sparkiegeek | lukebennett: no worries, sorry I couldn't give you a better overall answer! | 18:38 |
lazypower | sparkiegeek: did you get reviewboard's make targets updated for me? this MP looks good otherwise. | 18:49 |
sparkiegeek | lazypower: haven't had a chance yet I'm afraid | 18:49 |
lazypower | ping me when that lands and i'll merge this for you. | 18:49 |
sparkiegeek | lazypower: sure, thanks | 18:50 |
arosales | jose: nice to see tests in your chamilo charm | 19:14 |
Pa^2 | Just how big is cs:precise/wordpress? | 19:30 |
Pa^2 | BTW: I successfully got trusty/mysql and juju-gui working on my local install. | 19:31 |
=== mfa298__ is now known as mfa298 | ||
jcastro | marcoceppi, here's a small one | 19:37 |
jcastro | https://code.launchpad.net/~chris-gondolin/charms/precise/nrpe-external-master/trunk/+merge/221062 | 19:37 |
jcastro | I think chuck sorted the lowest hanging fruit | 19:37 |
jcastro | also, wrt to your mongo work | 19:38 |
jcastro | https://code.launchpad.net/~wesmason/charms/precise/mongodb/add-storage-subordinate-support/+merge/223539 | 19:38 |
jcastro | marcoceppi, this one looks straightforward: https://bugs.launchpad.net/charms/+bug/1195736 | 19:39 |
_mup_ | Bug #1195736: Storm Charm was added. <Juju Charms Collection:Fix Committed by maarten-ectors> <https://launchpad.net/bugs/1195736> | 19:39 |
jcastro | here's another easier one | 19:40 |
jcastro | https://code.launchpad.net/~jacekn/charms/precise/mysql/n-e-m-relation-v2/+merge/218969 | 19:40 |
lazypower | jcastro: i have that MP in the card on the board | 19:41 |
jcastro | rock | 19:41 |
Pa^2 | Hmm, on a trusty system trusty charms spin right up, precise machines and charms not so much. | 20:16 |
Pa^2 | failed to process updated machines: cannot start machine 3: no matching tools available | 20:20 |
Pa^2 | Machine 3 is precise, 0,1,2 are trusty | 20:21 |
Pa^2 | Is there a way to DL precise "tools" | 20:21 |
sarnold | Pa^2: juju sync-tools? tools-sync? | 20:22 |
Pa^2 | Thanks | 20:22 |
lazypower | Pa^2: thats progress! | 20:23 |
lazypower | yesterday you weren't even getting the tools notice | 20:23 |
lazypower | sarnold: its sync-tools | 20:23 |
sarnold | lazypower: nice, alphabetical order :) thanks | 20:24 |
lazypower | np :) | 20:24 |
Pa^2 | I have the precise tarball in ~/.juju/local/storage/tools/releases. Is that where it should draw from? | 20:30 |
Pa^2 | Unfortunately there is no precise in ~/.juju/local/tools | 20:32 |
lazypower | Pa^2: http://askubuntu.com/questions/285395/how-can-i-copy-juju-tools-for-use-in-my-deployment | 20:35 |
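The fix sarnold and lazypower point at is sync-tools; roughly (flag spellings per 'juju help sync-tools' in 1.18, worth double-checking locally):

```sh
juju sync-tools -e local   # copy released tools into the environment's storage
juju status                # machine 3 should leave 'pending' once matching tools exist
```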
Pa^2 | Thanks. | 20:35 |
Pa^2 | Well, time for burgers and beer... I will let this go until I get back in the am. Thanks for all the assistance. | 20:38 |
=== Guest8558 is now known as wallyworld | ||
pmatulis | how do i remove proxy settings from juju? | 22:25 |
pmatulis | i removed it from ~/.juju/environments/local.jenv but no luck | 22:27 |
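One hedged approach to pmatulis's question: the proxy settings live in the running environment's config rather than only in the .jenv (these config keys exist in juju 1.18+, though whether an empty string fully unsets them is worth verifying):

```sh
juju get-env | grep -i proxy                 # see which proxy keys are set
juju set-env http-proxy="" https-proxy="" apt-http-proxy="" apt-https-proxy=""
# For future bootstraps, also delete the proxy keys from ~/.juju/environments.yaml
# so they are not re-applied when the .jenv is regenerated.
```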
=== alexisb is now known as alexisb_afk | ||
=== Guest28217 is now known as wallyworld | ||
=== CyberJacob is now known as CyberJacob|Away | ||
=== CyberJacob|Away is now known as CyberJacob |