/srv/irclogs.ubuntu.com/2014/06/25/#juju.txt

=== _mup__ is now known as _mup_
[05:12] <AskUbuntu> Getting started with juju | http://askubuntu.com/q/487906
=== vladk|offline is now known as vladk
=== CyberJacob|Away is now known as CyberJacob
[07:07] <simhon> hello all
[07:07] <simhon> need help about juju bootstrap
[07:08] <simhon> it seems to get stuck on the ssh part
[07:08] <simhon> the node itself is up and running and i can ssh it myself using the same command the juju does
[07:09] <simhon> ssh -o "StrictHostKeyChecking no" -o "PasswordAuthentication no" -i /home/simhon/.juju/ssh/juju_id_rsa -i /home/simhon/.ssh/id_rsa ubuntu@10.0.2.152 /bin/bash
[07:09] <simhon> but somehow juju keeps retrying this command
[07:13] <simhon> when it times out it shows me the following :
[07:13] <simhon> 2014-06-25 07:09:10 ERROR juju.provider.common bootstrap.go:123 bootstrap failed: waited for 10m0s without being able to connect: /var/lib/juju/nonce.txt does not exist Stopping instance... 2014-06-25 07:09:10 INFO juju.cmd cmd.go:113 Bootstrap failed, destroying environment 2014-06-25 07:09:10 INFO juju.provider.common destroy.go:14 destroying environment "maas" 2014-06-25 07:09:11 ERROR juju.cmd supercommand.go:300 waited for 10m0s wi
[07:13] <simhon> any idea ????
=== roadmr_afk is now known as roadmr
=== roadmr is now known as roadmr_afk
=== roadmr_afk is now known as roadmr
[08:04] <schegi> jamespage, you pointed me to your ceph charms with network split support. where to put the charm-helpers supporting network split so that they are used?
[08:04] <jamespage> schegi, from a client use perspective?
[08:05] <jamespage> not sure I understand your question
[08:10] <schegi> i am pretty new to juju charms and not sure how they depend on each other. so what i did was branching the charms from lp and putting them somewhere on my juju node. For deployment i use something like 'juju deploy --repository /custom/trusty ceph' and it seems to deploy the network-split charm from my local repo. But as far as i understand it depends on the charm-helpers, so how do i keep this dependency to the branched charm-helpers
[08:19] <schegi> jamespage, as far as i understand the network split versions of the charms need the network split version of the charm-helpers. when deploying from a local repository how to ensure usage of the network split version of the charm helpers?
[08:20] <jamespage> schegi, ah - I see - what you need is the branches including the updated charm-helper which I've not actually done yet - on that today
[08:22] <schegi> jamespage, in lp there is also a branch of the charm-helpers with network split support (according to the comments). but i assume that just branching that and putting it somewhere will not be enough.
[08:22] <jamespage> schegi, no - it needs to be synced into the charms that relate to ceph
[08:26] <schegi> jamespage ok and that means? What do i have to do? can you maybe point me to some resource that provides some additional information. The official juju pages are a little bit high-level.
[08:26] <jamespage> schegi, if you give me 1 hr I can do it
[08:26] <jamespage> schegi, I just need to respond to some emails first
[08:26] <schegi> no problem
[08:27] <gnuoy> jamespage, do you know if neutron_plugin.conf is actually used ?
[08:31] <jamespage> gnuoy, maybe
[08:31] <jamespage> gnuoy, I'd have to grep the code to see where tho
[08:32] <jamespage> schegi, OK - it's untested but I've synced the required helpers branch into nova-compute, cinder and glance under my LP account
[08:32] <jamespage> network-splits suffix
[08:33] <jamespage> schegi, https://code.launchpad.net/~james-page
=== CyberJacob is now known as CyberJacob|Away
[08:35] <jamespage> schegi, and cinder-ceph if you happen to be using ceph via cinder that way
[08:37] <schegi> thx a lot i will give it a try
[10:04] <jamespage> gnuoy, maybe nova-compute - rings a bell
[10:05] <gnuoy> jamespage, I'm going to override that method to do nothing in the neutron-api version of the class
[10:06] <jamespage> gnuoy, +1
[10:08] <bloodearnest> anyone know of any charms that use postgresql and handle new slaves coming online in their hooks?
[10:18] <bloodearnest> ah, landscape charm has good support, awesome
=== roadmr is now known as roadmr_afk
[10:39] <jamespage> schegi, fyi those branches are WIP - expect quite a bit of change and potential breaks :-0
[10:45] <jamespage> gnuoy, do you have a fix for the neutron-api charm yet? I'm using the next branch and I don't like typing mkdir /etc/nova  :-)
[10:45] <gnuoy> jamespage, I'll do it now
[10:45] <jamespage> gnuoy, ta
=== wallyworld__ is now known as wallyworld
[11:10] <jamespage> schegi, just applied a fix to the ceph network-splits branch - doubt you will be able to get a cluster up without it
[11:17] <gnuoy> jamespage, https://code.launchpad.net/~gnuoy/charms/trusty/neutron-api/fix-flag-bug/+merge/224414
=== roadmr_afk is now known as roadmr
[11:49] <schegi> jamespage, you're right, currently trying with the unfixed version leads to unreachability between the mon nodes
[11:49] <jamespage> schegi, yeah - that sounds right
[11:49] <jamespage> missed a switch from ceph_public_addr to ceph-public-address
[11:49] <jamespage> :-(
[11:50] <schegi> i'll give the new version a try
[11:51] <schegi> but i was wondering, i always get the missing keyring error when calling ceph without -k <pathtokeyring>, what am i doing wrong
[11:56] <schegi> hm once started, ceph units seem to be undestroyable. hanging in agent-state: started and life: dying
[11:57] <schegi> is there a way to force destruction other than destroy the whole environment or remove the machines??
=== jcsackett is now known as idioteque
=== idioteque is now known as foobarbazqux
=== alexisb_bbl is now known as alexisb
[13:14] <schegi> jamespage, monmap looks good so far monmap e1: 3 mons at {storage1=10.10.1.21:6789/0,storage2=10.10.1.22:6789/0,storage3=10.10.1.23:6789/0}, election epoch 4, quorum 0,1,2 storage1,storage2,storage3
[13:14] <schegi> still some issues but should work
=== anthonyf` is now known as anthonyf
[13:30] <avoine> noodles775: have you an idea how to solve the race condition with multiple ansible charms on the same machine?
[13:31] <avoine> it just hit me
[13:37] <avoine> noodles775: https://bugs.launchpad.net/charm-helpers/+bug/1334281
[13:37] <_mup_> Bug #1334281: cant have multiple ansible charms on the same machine <Charm Helpers:New> <https://launchpad.net/bugs/1334281>
[13:38] <noodles775> avoine: let me look.
[13:40] <noodles775> avoine: Nope, but a hosts file per unit would be perfect as you suggested. I'll try to get to it in the next week or two, or if you're keen to submit a patch, even better :)
[13:40] <noodles775> s/per unit/per service/ (should be enough)
[13:42] <noodles775> avoine: hrm, if hooks are run serially, how are you actually hitting this? (I mean, the context should be written out to the /etc/ansible/host_vars/localhost when each hook runs?) Or what am I missing?
[13:43] <automatemecolema> service is stuck in a dying state, but has no errors to resolve. How can I force it to go away?
[13:44] <automatemecolema> nevermind, juju remove-machine --force <#>
[13:57] <jamespage> schegi, I have it on my list to figure out how to switch a running ceph from just eth0 to split networks
[13:57] <jamespage> I don't think I can do it non-disruptively - clients will lose access for a bit
[13:59] <schegi> jamespage, i looked into it a bit. it is not so easy. recommended way is to replace the running mons by new ones configured with the alternative network. there is a messy way but i didn't try it.
[13:59] <jamespage> schegi, OK - sounds like it needs a big health warning then
=== foobarbazqux is now known as jcsackett
[14:05] <avoine> noodles775: I'm hitting it on subordinate
[14:07] <noodles775> avoine: yeah, I think I remember bloodearnest hitting it in a subordinate too. I'd still thought that hooks were serialised there too, but obviously I'm missing something. But +1 to the suggested fix.
[14:09] <avoine> noodles775: ok, I'll finish with the django charm and I'll work on a fix
=== vds` is now known as vds
[14:10] <noodles775> avoine: Oh - great. Let me know if you need a hand or don't get to it and I can take a look.
[14:18] <avoine> ok
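
The fix being discussed here amounts to giving each ansible-based charm its own vars file instead of the shared /etc/ansible/host_vars/localhost. A minimal sketch of that idea, assuming charm-helpers' hookenv is importable; the function name and file naming are illustrative, not the actual charm-helpers API or the patch that eventually landed:

    import os
    import yaml
    from charmhelpers.core import hookenv

    def write_service_vars(context, base='/etc/ansible/host_vars'):
        # One vars file per service, so two ansible charms colocated on the
        # same machine stop overwriting each other's context (bug #1334281).
        path = os.path.join(base, 'localhost_%s' % hookenv.service_name())
        with open(path, 'w') as f:
            yaml.safe_dump(dict(context), f)
        return path

The playbook invocation would then be pointed at the per-service file rather than the shared localhost vars.
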
[14:20] <schegi> jamespage, another question: if i'd like to have specific devices used for journals for the individual osds (got a couple of ssds especially for journals) could i just try to add the [osd.X] sections to the ceph charm ceph.conf template or is there anything that speaks against it?
[14:21] <jamespage> schegi, there is a specific configuration option for osd journal devices for this purpose
[14:21] <jamespage> schegi, osd-journal
[14:22] <cory_fu> How does one re-attach to a debug-hooks session that was disconnected while still in use?  Running `juju debug-hooks` again says it's already being debugged
[14:22] <jamespage> schegi, it may be a little blunt for your requirements - let me know
[14:24] <cory_fu> Ah.  `juju ssh` followed by `sudo byobu` seems to have worked
[14:27] <schegi> jamespage, the config.yaml of the ceph charm only mentions a parameter osd-journal and says "The device to use as a shared journal drive for all OSD's.". But i'd like ceph to use one particular device per running osd.
[14:27] <jamespage> schegi, yeah - it's blunt atm
[14:28] <jamespage> schegi, we need a nice way of hashing the osd's onto the available osd-journal devices automatically - right now it's just a single device
[14:28] <schegi> So i thought adding [osd.x] sections to the ceph.conf template could help. they won't change anyway
[14:29] <schegi> it would be fine for me to do the mapping manually just to get it working. But yeah you are right, especially if you are working on different machines.
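
For what it's worth, the "hashing the osd's onto the available osd-journal devices" that jamespage mentions could be as simple as a round-robin assignment, which would also cover the manual [osd.X] mapping schegi describes. A rough sketch of the idea only, not the ceph charm's actual code or its osd-journal handling; the device names are examples:

    def assign_journals(osd_devices, journal_devices):
        """Map each OSD data device to a journal device, round-robin."""
        if not journal_devices:
            return {dev: None for dev in osd_devices}
        return {dev: journal_devices[i % len(journal_devices)]
                for i, dev in enumerate(osd_devices)}

    # Three data disks spread across two SSD journals:
    mapping = assign_journals(['/dev/sdb', '/dev/sdc', '/dev/sdd'],
                              ['/dev/ssd1', '/dev/ssd2'])
    # e.g. /dev/sdb -> /dev/ssd1, /dev/sdc -> /dev/ssd2, /dev/sdd -> /dev/ssd1

Each pair could then be rendered into the template as an [osd.X] section, or fed to whatever the charm eventually does with multiple osd-journal devices.
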
[14:33] <marcoceppi> negronjl: hey, do you have any updated mongodb branches?
[14:34] <marcoceppi> the one in the store is broken and I've got a demo in 1.5 hours at mongodb world
[14:34] <negronjl> marcoceppi, I don't ... broken how ?
[14:34] <negronjl> marcoceppi, I'm already working on the race condition for the mongos relation but, no fix yet
[14:34] <schegi> jamespage, i always thought when playing around with the charm it would also be a nice idea to have the opportunity to pass a ceph.conf file to the charm on deployment. So that it gets all of its parameters from this conf.
[14:34] <marcoceppi> negronjl: mongos is my main problem
[14:35] <marcoceppi> every time I try deploying it and working around the race it still fails
[14:35] <marcoceppi> also, it only works on local providers
[14:35] <marcoceppi> on all other providers it expects mongodb to be exposed
[14:35] <negronjl> marcoceppi, the only workaround that I can give you is to deploy manually ( juju deploy ...... ) as opposed to the bundle
[14:35] <marcoceppi> and fails on hp-cloud
[14:35] <marcoceppi> negronjl: yeah, that's failing for me too
[14:35] <automatemecolema> Why is it that sometimes my local charms don't show up in the gui?
[14:35] <negronjl> marcoceppi, I am still working on it :/
[14:36] <marcoceppi> negronjl: okay, np
[14:37] <automatemecolema> marcoceppi: So are you saying the Mongo charm doesn't work in a bundle on any providers except local? We were planning on having a bundle that includes Mongo
[14:38] <marcoceppi> negronjl: yeah, now configsrv is failing
[14:38] <negronjl> marcoceppi, on which provider ?
[14:38] <negronjl> marcoceppi, precise or trusty ?
[14:40] <marcoceppi> precise
[14:40] <negronjl> marcoceppi, pastebin the bundle that you are using so I can use it to debug here ... I'm working on that now
[14:41] <marcoceppi> negronjl: precise, http://paste.ubuntu.com/7695787/ I removed the database -> mongos relation in the bundle
[14:41] <marcoceppi> so cfgsvr would come up first
[14:41] <marcoceppi> but now that's getting failed relations
[14:41] <bloodearnest> noodles775: avoine: it is my understanding that hooks are serialised on a unit by the unit agent, regardless of which charm they are part of
[14:42] <bloodearnest> noodles775: avoine: but you could get a clash if you set up a cron job that uses ansible, for example
[14:42] <avoine> bloodearnest: even subordinate?
[14:43] <bloodearnest> avoine: yes
[14:44] <negronjl> marcoceppi, deploying now
[14:45] <marcoceppi> god speed
[14:54] <marcoceppi> negronjl: when I got to the webadmin, after attaching a shard, it doesn't say anything about replsets
[14:55] <negronjl> marcoceppi, still deploying ... give me a few to look around
[14:57] <marcoceppi> wow, it took a long ass time, but all the config servers just failed with replica-set-relation-joined
[14:59] <marcoceppi> negronjl: I think I've made a little progress
[15:00] <negronjl> marcoceppi, what have you found ?
[15:00] <marcoceppi> configsvr mongodbs are failing to start
[15:01] <marcoceppi> error command line: too many positional options
[15:01] <marcoceppi> on configsvr
[15:01] <marcoceppi> upstart job is here
[15:02] <marcoceppi> http://paste.ubuntu.com/7700869/
[15:02] <marcoceppi> negronjl: ^
[15:02] * negronjl reads
[15:03] <marcoceppi> negronjl: removing the -- seems to fix it
[15:04] <marcoceppi> looks like a parameter might not be written to the file correctly
[15:04] * marcoceppi attempts to figure out where that is
[15:04] <negronjl> marcoceppi, that will not really fix the issue ... just hide the replset parameter
[15:04] <marcoceppi> negronjl: it seems to start fine
[15:04] <marcoceppi> oh wait
[15:04] <marcoceppi> jk
[15:04] <negronjl> marcoceppi, the arguments after the -- ( the one that's by itself ) pass the params to mongod
[15:04] <marcoceppi> no it doesn't
[15:05] <marcoceppi> yeah
[15:05] <marcoceppi> fuck
[15:05] <negronjl> you should not have a replset at all now
=== scuttle|afk is now known as scuttlemonkey
[15:45] <lukebennett> Hello everybody, I have an issue I can't find any reference to online anywhere - when bootstrapping my environment using MAAS, it's crashing out because the juju-db job is already running. This didn't occur the first time I bootstrapped, only after I destroyed that initial environment. It feels like the MAAS node it deployed to hasn't destroyed itself properly. Any ideas?
[15:46] <lukebennett> I haven't yet tried manually rebooting the node but it feels like that shouldn't be necessary
=== tvansteenburgh1 is now known as tvansteenburgh
[16:30] <lazypower> lukebennett: did your bootstrap node go from allocated to off when you ran destroy-environment?
[16:31] <automatemecolema> Any takers on whether it makes sense to use juju as a provisioning tool, but allow a config management tool to do all the heavy lifting in regards to relationships? I'm thinking along the lines of hiera in puppet
[16:31] <lazypower> automatemecolema: when you say heavy lifting - what do you mean?
[16:32] <automatemecolema> lazypower: well we are looking at using puppet hiera to build relationships with different nodes, and use puppet to deploy apps on the instances.
[16:33] <lazypower> automatemecolema: sounds feasible - are you initiating the relationships with juju?
[16:40] <AskUbuntu> MAAS JUJU cloud-init-nonet waiting for network device | http://askubuntu.com/q/488114
[16:43] <sparkiegeek> lukebennett: sounds like maybe your node isn't set to PXE boot?
=== CyberJacob|Away is now known as CyberJacob
[17:49] <frobware> I was trying to run juju bootstrap on arm64 and it mostly works (http://paste.ubuntu.com/7701639/) but at the very end of the pastebin output it tries to use an amd64 download.  Is there somewhere I can persuade juju that I want arm64?
[17:57] <automatemecolema> lazypower: well the thought was attaching facts to relationships and having puppet hiera do the relationship work
[19:15] <Pa^2> My machine "0" started just fine.  Machine "1" has been pending for almost 3 hours.  Running 14.04 but the Wordpress says "precise".  Am I missing something simple?
[19:16] <Pa^2> ...this is a local install.
[20:07] <arosales> Pa^2: is your home dir encrypted?
[20:07] <Pa^2> arosales: no
[20:10] <arosales> Pa^2: ok.
[20:10] <arosales> Pa^2: and since your client is running 14.04 I am guessing you are running juju version ~1.18, correct?
[20:12] <Pa^2> I assume so.  How can I verify?
[20:12] <Pa^2> 1.18.4.1
[20:13] <arosales> juju --version
[20:14] <Pa^2> 1.18.4-trusty-amd64
[20:14] <arosales> Pa^2: and what does `dpkg -l | grep juju-local` return?
[20:17] <lazypower> Pa^2: when you say local install you mean you're working with the local provider?
[20:17] <Pa^2> ii  juju-local                                  1.18.4-0ubuntu1~14.04.1~juju1         all          dependency package for the Juju local provider
[20:17] <Pa^2> yes to local provider
[20:17] <lazypower> run `sudo lxc-ls --fancy` - do you see a precise template that is in the STOPPED state?
[20:19] <Pa^2> Yes, the precise template is in the STOPPED state.
[20:22] <lazypower> hmm ok, so far so good
[20:22] <lazypower> can you pastebin your machine-0.log for me?
[20:28] <lazypower> Pa^2: just fyi, the logpath is ~/.juju/local/logs/machine-0.log
[20:28] <Pa^2> Haven't used pastebin before, lets see if this works.  http://pastebin.com/SHSEABYJ
[20:28] <lazypower> ah, and protip for next time, sudo apt-get install pastebinit. you can then call `pastebinit path/to/file` and it gives you the short link
[20:29] <Pa^2> Thank you, great tip.
[20:30] <AskUbuntu> MAAS / Juju bootstrap - ubuntu installation stuck at partitioner step | http://askubuntu.com/q/488170
[20:30] <lazypower> hmm, nothing really leaps out at me here as a red flag. its pending you say? is your local network perchance in the 10.0.3.x range?
[20:32] <Pa^2> My system is dual homed. Yeah, I wondered about that... 10.0.0.0 is routed to my WAN gateway.
[20:32] <lazypower> well, i know that if you're in the 10.0.3.0/24 cidr, your containers may run into ip collision
[20:32] <lazypower> which will prevent them from starting
[20:33] <lazypower> if you run `juju destroy local && juju bootstrap && juju deploy wordpress && watch juju status`
[20:33] <lazypower> you will recreate the local environment. it should literally take seconds to get moving since you have the container templates cached
[20:34] <lazypower> if you run that, and it still sits in pending longer than a minute, can you re-pastebin the machine-0.log and unit-wordpress-0.log for me?
[20:34] <lazypower> actually, instead of unit-wordpress-0, give me the all-machines.log
[20:34] <Pa^2> That would explain it... I will down the WAN interface and see if your suggestion works.
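
The overlap lazypower describes is easy to check by hand: the local provider's containers sit on lxcbr0, which defaults to 10.0.3.0/24, so a host interface or route covering that range (including a broad 10.0.0.0/8 toward the WAN) can collide. A quick sketch using the Python 3 ipaddress module; the sample addresses are made up and interface enumeration is left out:

    import ipaddress

    lxc_net = ipaddress.ip_network('10.0.3.0/24')   # lxcbr0 default range
    for addr in ['10.0.2.15', '10.0.3.42']:         # substitute your interface addresses
        hit = ipaddress.ip_address(addr) in lxc_net
        print(addr, 'collides with the lxcbr0 range' if hit else 'ok')
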
[20:34] <lazypower> ok, if that doesn't help, next step in debugging is to try and recreate it and capture the event that's causing your pinch point.
[20:35] <lazypower> Pa^2: warning, i have to leave to head to my dentist appt in about 10 minutes - i'll be back in about an hour and can help further troubleshooting
[20:35] <Pa^2> Thanks so much for taking the time.  Much appreciated.
[20:36] <lazypower> no problem :) it's what i'm here for.
[20:37] <gQuigs> how does a charm know it's for precise or trusty?   (I'd really like to get trusty versions of nfs, wordpress, and more...)
[20:39] <arosales> gQuigs: when the charm is reviewed to pass policy it is put into a trusty or precise branch
[20:40] <gQuigs> arosales: I'm trying to run it locally though... to help make either of them work on trusty
[20:40] <lazypower> gQuigs: the largest portion of blockers we have for charms making it into trusty is lack of tests.
[20:40] <arosales> gQuigs: the ~charmers team is currently working to verify charms on trusty and working with charm authors to promote applicable precise charms to trusty
[20:40] <Pa^2> Still no love... I think I will start with a new clean platform and try it from the ground up.
[20:40] <lazypower> gQuigs: in your local charm repo, mkdir trusty, and `charm get nfs` then you can deploy with `juju deploy --repository=~/charms local:trusty/nfs` and trial the charm on trusty.
[20:40] <arosales> gQuigs: ah, just put them in a file path such as ~/charms/trusty/
[20:41] <gQuigs> interesting.. is precise hardcoded in?   works: juju deploy --repository=/store/git precise/nfs
[20:41] <lazypower> gQuigs: however if you can lend your hand to writing the tests we'd love to have more involvement from the community on charm promotion from precise => trusty. Tests are the quickest way to make that happen.
[20:41] <gQuigs> doesn't: juju deploy --repository=/store/git trusty/nfs
[20:41] <arosales> from ~/ issue "juju deploy --repository=./charms local:trusty/charm-name"
[20:42] <gQuigs> arosales: lazypower thanks, making the folder trusty worked
[20:42] <gQuigs> :)
[20:42] <gQuigs> lazypower:  hmm, what kind of tests are needed?
[20:43] <gQuigs> doc?
[20:43] <arosales> gQuigs: juju is looking for something along the lines of "<repository>:<series>/<service>"
[20:43] <arosales> https://juju.ubuntu.com/docs/charms-deploying.html#deploying-from-a-local-repository
[20:43] <lazypower> gQuigs: we *love* unit tests, but amulet tests as an integration/relationship testing framework are acceptable in place of unit tests (extra bonus points for both)
[20:43] <lazypower> gQuigs: just make sure your charm repo looks similar to the following: http://paste.ubuntu.com/7702523/
[20:44] <lazypower> gQuigs: and wrt docs - https://juju.ubuntu.com/docs/tools-amulet.html
[20:44] <lazypower> unit testing is hyper specific to the language of the charm as it's written. but amulet is always 100% python
[20:44] <lazypower> the idea is to model a deployment, and use the sentries to probe / make assertions about the deployment
[20:45] <lazypower> using wordpress/nfs as an example, you add wordpress and nfs to the deployment topology, configure them, deploy them, then do things like, from the wordpress host, use dd to write an 8MB file to the NFS share, then using the NFS sentry, probe to ensure 8MB were written across the wire and landed where we expect them to be.
[20:45] <lazypower> it can be tricky to write the tests, and subjective, but they go a long way to validating claims that the charm is doing what we intend it to do.
[20:45] <lazypower> s/we/the author/
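
What lazypower describes maps almost directly onto an amulet test. A minimal sketch, assuming trusty wordpress, mysql and nfs charms; the nfs relation endpoints and the file paths used below are assumptions for illustration, not verified against the charms:

    import amulet

    d = amulet.Deployment(series='trusty')
    d.add('wordpress')
    d.add('mysql')                          # wordpress needs a database to start
    d.add('nfs')
    d.relate('wordpress:db', 'mysql:db')
    d.relate('wordpress:nfs', 'nfs:nfs')    # assumed endpoint names
    d.setup(timeout=900)
    d.sentry.wait()

    wp = d.sentry.unit['wordpress/0']
    nfs = d.sentry.unit['nfs/0']

    # Write 8MB from the wordpress side onto the assumed mount point...
    out, code = wp.run('dd if=/dev/zero of=/mnt/wordpress/testfile bs=1M count=8')
    assert code == 0, out

    # ...then probe the NFS sentry to confirm it landed on the export (path assumed).
    out, code = nfs.run('stat -c %s /srv/data/testfile')
    assert code == 0 and int(out) == 8 * 1024 * 1024, out

Something along these lines, dropped into the charm's tests/ directory, is the kind of test the promotion reviews above are asking for.
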
[20:46] <Pa^2> http://paste.ubuntu.com/7702540/
[20:46] <lazypower> strange, it's not panicking, it's not giving you errors about anything common
[20:47] <lazypower> and this is on a trusty host too right PA?
[20:47] <lazypower> yeah
[20:47] <lazypower> hmm
[20:47] <Pa^2> affirmative
[20:47] <lazypower> Pa^2: lets see your all-machines.log
[20:48] <Pa^2> http://paste.ubuntu.com/7702550/
[20:48] <lazypower> oh wow it never even hit the init cycle, i expected *something* additional in all-machines
[20:48] <gQuigs> lazypower: thanks, will try... they don't have to be comprehensive though?  (nfs can work with anything that can do "mount"  which is pretty open ended..)
[20:48] <Pa^2> Don't be late for your appointment... this can wait
[20:49] <lazypower> good point. thanks Pa^2 - my only other thought is to look into using the juju-clean plugin
[20:49] <lazypower> and starting from complete scratch - as in re-fetching the templates and recreating them (takes ~ 8 minutes on a typical broadband connection)
[20:50] <Pa^2> Will have a look.  Thanks again.  I will keep you apprised.
[20:50] <lazypower> if the template creation pooped out in the middle / the end there, and didn't raise an error, it can be finicky like this
[20:50] <lazypower> ok i'm out, see you shortly
[20:51] <lazypower> gQuigs: one final note before i jam - take a look at the owncloud charm, and the tomcat charm
[20:51] <lazypower> they have tests, and they exhibit what i would consider decent tests. The openstack charms are another example of high quality tested charms
[20:51] <lazypower> using them as a guide will set you on the right path
[20:56] <gQuigs> lazypower: will do, thanks!
[21:07] <ChrisW1> mwhudson: too?
[21:07] <mwhudson> ChrisW1: eh?
[21:07] <ChrisW1> are you on the juju team at canonical?
[21:07] <mwhudson> ChrisW1: no
[21:08] <ChrisW1> that's okay then :-)
[21:08] <mwhudson> but i do arm server stuff and was involved in porting juju
[21:08] <ChrisW1> was getting a little freaked out at the number of people I appear to know on the juju team...
[21:08] <mwhudson> ah yeah
[21:09] <ChrisW1> anyway, long time no speak, how's NZ treating you?
[21:09] <mwhudson> good!
[21:10] <mwhudson> although my wife is away for work and my daughter's not been sleeping well so i'm pretty tired :)
[21:10] <mwhudson> luckily the coffee is good...
[21:10] <ChrisW1> hahaha
[21:10] <ChrisW1> yes, coffee is good
[21:11] <ChrisW1> I have a bad habit of making a pot of espresso and drinking it in a cup when I'm at home...
[21:11] <mwhudson> and you?  are you using juju for stuff, or just here for social reasons? :)
[21:11] <ChrisW1> oh, *cough* "social reasons"...
[21:41] <whit> hey heath
=== vladk is now known as vladk|offline
[22:23] <jose> tvansteenburgh: ping
[22:23] <jose> niedbalski: ping
[22:24] <jose> cory_fu: ping
[22:25] <cory_fu> jose: What's up?
[22:25] <jose> cory_fu: hey, I was wondering if you could take a look at the Chamilo charm and give it a review
[22:25] <jose> I was trying to get as many charm-contributor reviews as I can so it can be easy for charmers to approve
[22:25] <jose> upstream is having an expo and they could mention juju
=== CyberJacob is now known as CyberJacob|Away
[22:31] <niedbalski> jose, sure, i can do it in a bit
[22:31] <niedbalski> brb
[22:31] <jose> thank you :)
[22:32] <AskUbuntu> juju mongodb charm not found | http://askubuntu.com/q/488209
[22:38] <lazypower> Pa^2: how goes it?
[23:38] <cory_fu> jose: I ran into one issue when testing that charm.
[23:38] <cory_fu> Other than that, though, it looked great
[23:39] <cory_fu> That would be an excellent case-study for converting to the services framework, when it gets merged to charmhelpers.  It also made me realize that we need to enable the services framework to work with non-python charms
[23:40] <Pa^2> lazypower: I tried deleting all of the .juju and lxc cache then deploying mysql.... just checked, still pending after almost two hours.
[23:40] <cory_fu> But it would handle all of the logic that the charm currently manages using dot-files, automatically
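
For context, the dot-file bookkeeping cory_fu refers to is what the services framework replaces with a declarative service definition. A very rough sketch of the shape, with placeholder callbacks rather than the Chamilo charm's actual logic or data classes:

    from charmhelpers.core.services.base import ServiceManager

    def install_chamilo(service_name):
        # would replace the "have I already done this?" dot-file checks
        pass

    def start_chamilo(service_name):
        pass

    manager = ServiceManager([{
        'service': 'chamilo',
        'data_ready': [install_chamilo],   # run when required data is ready
        'start': [start_chamilo],
    }])
    manager.manage()
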
[23:40] <lazypower> Pa^2: if you're on a decent connection, if it takes longer than 10 - 12 minutes it's failed.
[23:40] <lazypower> usually once you've got the deployment images cached, it'll take seconds. ~ 5 to 20 seconds and you'll see the container up and running
[23:41] <jose> cory_fu: what was the issue?
[23:41] <lazypower> cory_fu: the services framework has been merged as of yesterday. you mean the -t right?
[23:41] <Pa^2> I will start with a clean bare metal install of Ubuntu and start from scratch.
[23:41] <lazypower> Pa^2: ok, ping me i'll be around most of the evening if you need any further assistance.
[23:42] <lazypower> Pa^2: i can also spin up a VM and follow along at home
[23:42] <jose> cory_fu: oh, read the comment. will check now
[23:42] <Pa^2> I won't be able to do so until I am at work tomorrow.  Remote sessions via putty from a Windows laptop are just too cumbersome.
[23:43] <lazypower> i understand completely. I have an idea for you though Pa^2
[23:43] <lazypower> we publish a juju vagrant image that may be of some use to you.
[23:43] <lazypower> vagrant's support on windows is pretty stellar
[23:43] <jose> cory_fu: the branch has been fixed, if you could try again that'd be awesome :)
[23:44] <lazypower> Pa^2: https://juju.ubuntu.com/docs/config-vagrant.html
[23:44] <Pa^2> Good possibility.  I will look into it.  They frown on my loading on my production laptop.
[23:45] <Pa^2> One way or another I will keep you apprised of my circumstances.
[23:47] <Pa^2> Should have brought my Linux laptop instead of this PoC.
[23:49] <Pa^2> Misunderstood, vagrant on my Linux box at work then...
[23:50] <Pa^2> Lemme look into it.
[23:52] <cory_fu> jose: Great, that fixed it
[23:52] * jose starts running in circles waving his arms
[23:53] * Pa^2 slaps jose on the back... Success is grand.
[23:54] <cory_fu> jose: Added my +1 :)
[23:54] <jose> thanks, cory_fu!
[23:54] <cory_fu> Alright, well, I'm out for the evening
[23:54] <cory_fu> Have a good night
[23:54] <jose> you too
[23:59] <lazypower> jose: did you do something good?
[23:59] <lazypower> do i get to hi5 you?
