=== _mup__ is now known as _mup_ | ||
AskUbuntu | Getting started with juju | http://askubuntu.com/q/487906 | 05:12 |
=== vladk|offline is now known as vladk | ||
=== CyberJacob|Away is now known as CyberJacob | ||
simhon | hello all | 07:07 |
simhon | need help about juju bootstrap | 07:07 |
simhon | it seems to get stuck on the ssh part | 07:08 |
simhon | the node itself is up and running and i can ssh to it myself using the same command juju does | 07:08 |
simhon | ssh -o "StrictHostKeyChecking no" -o "PasswordAuthentication no" -i /home/simhon/.juju/ssh/juju_id_rsa -i /home/simhon/.ssh/id_rsa ubuntu@10.0.2.152 /bin/bash | 07:09 |
simhon | but somehow juju keeps retrying this command | 07:09 |
simhon | when it times out it shows me the following : | 07:13 |
simhon | 2014-06-25 07:09:10 ERROR juju.provider.common bootstrap.go:123 bootstrap failed: waited for 10m0s without being able to connect: /var/lib/juju/nonce.txt does not exist Stopping instance... 2014-06-25 07:09:10 INFO juju.cmd cmd.go:113 Bootstrap failed, destroying environment 2014-06-25 07:09:10 INFO juju.provider.common destroy.go:14 destroying environment "maas" 2014-06-25 07:09:11 ERROR juju.cmd supercommand.go:300 waited for 10m0s wi | 07:13 |
simhon | any idea ???? | 07:13 |
=== roadmr_afk is now known as roadmr | ||
=== roadmr is now known as roadmr_afk | ||
=== roadmr_afk is now known as roadmr | ||
schegi | jamespage, you pointed me to your ceph charms with network split support. where do i put the charm-helpers branch with network split support so that the charms pick it up? | 08:04 |
jamespage | schegi, from a client use perspective? | 08:04 |
jamespage | not sure I understand your question | 08:05 |
schegi | i am pretty new to juju charms and not sure how they depend on each other. so what i did was branching the charms from lp and putting them somewhere on my juju node. For deployment i use something like 'juju deploy --repository /custom/trusty ceph' and it seems to deploy the network-split charm from my local repo. But as far as i understand it depends on the charm-helpers, so how do i keep this dependency pointed at the branched charm-helpers? | 08:10 |
schegi | jamespage, as far as i understand, the network split versions of the charms need the network split version of the charm-helpers. when deploying from a local repository, how do i ensure the network split version of the charm-helpers is used? | 08:19 |
jamespage | schegi, ah - I see - what you need is the branches including the updated charm-helper which I've not actually done yet - on that today | 08:20 |
schegi | jamespage, in lp there is also a branch of the charm-helpers with network split support (according to the comments). but i assume that just branching that and putting it somewhere will not be enough. | 08:22 |
jamespage | schegi, no - it needs to be synced into the charms that relate to ceph | 08:22 |
schegi | jamespage, ok, and that means? What do i have to do? can you maybe point me to some resource that provides some additional information? The official juju pages are a little bit high-level. | 08:26 |
jamespage | schegi, if you give me 1 hr I can do it | 08:26 |
jamespage | schegi, I just need to respond to some emails first | 08:26 |
schegi | no problem | 08:26 |
gnuoy | jamespage, do you know if neutron_plugin.conf is actually used? | 08:27 |
jamespage | gnuoy, maybe | 08:31 |
jamespage | gnuoy, I'd have to grep the code to see where tho | 08:31 |
jamespage | schegi, OK - it's untested but I've synced the required helpers branch into nova-compute, cinder and glance under my LP account | 08:32 |
jamespage | network-splits suffix | 08:32 |
jamespage | schegi, https://code.launchpad.net/~james-page | 08:33 |
=== CyberJacob is now known as CyberJacob|Away | ||
jamespage | schegi, and cinder-ceph if you happen to be using ceph via cinder that way | 08:35 |
schegi | thx a lot i will give it a try | 08:37 |
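A minimal sketch of the workflow jamespage describes, assuming the branch under his LP account uses the network-splits suffix mentioned above (branch path and local directories illustrative):

```bash
# fetch the network-splits ceph charm into a local charm repository
mkdir -p ~/charms/trusty
bzr branch lp:~james-page/charms/trusty/ceph/network-splits ~/charms/trusty/ceph

# deploy from the local repository instead of the charm store
juju deploy --repository=~/charms local:trusty/ceph
```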
jamespage | gnuoy, maybe nova-compute - rings a bell | 10:04 |
gnuoy | jamespage, I'm going to override that method to do nothing in the neutron-api version of the class | 10:05 |
jamespage | gnuoy, +1 | 10:06 |
bloodearnest | anyone know of any charms that use postgresql and handle new slaves coming online in their hooks? | 10:08 |
bloodearnest | ah, landscape charm has good support, awesome | 10:18 |
=== roadmr is now known as roadmr_afk | ||
jamespage | schegi, fyi those branches are WIP - expect quite a bit of change and potential breaks :-0 | 10:39 |
jamespage | gnuoy, do you have fix for the neutron-api charm yet? I'm using the next branch and I don't like typing mkdir /etc/nova :-) | 10:45 |
gnuoy | jamespage, I'll do it now | 10:45 |
jamespage | gnuoy, ta | 10:45 |
=== wallyworld__ is now known as wallyworld | ||
jamespage | schegi, just applied a fix to the ceph network-splits branch - doubt you will be able to get a cluster up without it | 11:10 |
gnuoy | jamespage, https://code.launchpad.net/~gnuoy/charms/trusty/neutron-api/fix-flag-bug/+merge/224414 | 11:17 |
=== roadmr_afk is now known as roadmr | ||
schegi | jamespage, you're right - currently trying with the unfixed version leads to unreachability between the mon nodes | 11:49 |
jamespage | schegi, yeah - that sounds right | 11:49 |
jamespage | missed a switch from ceph_public_addr to ceph-public-address | 11:49 |
jamespage | :-( | 11:49 |
schegi | ill give the new version a try | 11:50 |
schegi | but i was wondering why i always get the missing keyring error when calling ceph without -k <pathtokeyring> - what am i doing wrong? | 11:51 |
schegi | hm, once started, ceph units seem to be undestroyable - hanging in agent-state: started and life: dying | 11:56 |
schegi | is there a way to force destruction other than destroying the whole environment or removing the machines? | 11:57 |
=== jcsackett is now known as idioteque | ||
=== idioteque is now known as foobarbazqux | ||
=== alexisb_bbl is now known as alexisb | ||
schegi | jamespage, monmap looks good so far: monmap e1: 3 mons at {storage1=10.10.1.21:6789/0,storage2=10.10.1.22:6789/0,storage3=10.10.1.23:6789/0}, election epoch 4, quorum 0,1,2 storage1,storage2,storage3 | 13:14 |
schegi | still some issues but should work | 13:14 |
=== anthonyf` is now known as anthonyf | ||
avoine | noodles775: do you have an idea how to solve the race condition with multiple ansible charms on the same machine? | 13:30 |
avoine | it just hit me | 13:31 |
avoine | noodles775: https://bugs.launchpad.net/charm-helpers/+bug/1334281 | 13:37 |
_mup_ | Bug #1334281: cant have multiple ansible charms on the same machine <Charm Helpers:New> <https://launchpad.net/bugs/1334281> | 13:37 |
noodles775 | avoine: let me look. | 13:38 |
noodles775 | avoine: Nope, but a hosts file per unit would be perfect as you suggested. I'll try to get to it in the next week or two, or if you're keen to submit a patch, even better :) | 13:40 |
noodles775 | s/per unit/per service/ (should be enough) | 13:40 |
noodles775 | avoine: hrm, if hooks are run serially, how are you actually hitting this? (I mean, the context should be written out to the /etc/ansible/host_vars/localhost when each hook runs?) Or what am I missing? | 13:42 |
automatemecolema | service is stuck in a dying state, but has no errors to resolve. How can I force it to go away? | 13:43 |
automatemecolema | nevermind juju remove-machine --force <#> | 13:44 |
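The sequence automatemecolema lands on, sketched end to end (service name and machine number hypothetical):

```bash
juju destroy-service ceph       # marks the service's units as dying
juju status                     # identify the machine hosting the stuck unit
juju remove-machine --force 3   # force-removes the machine, taking the stuck unit with it
```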
jamespage | schegi, I have it on my list to figure out how to switch a running ceph from just eth0 to split networks | 13:57 |
jamespage | I don't think I can do it non-disruptively - clients will lose access for a bit | 13:57 |
schegi | jamespage, i looked into it a bit. it is not so easy. the recommended way is to replace the running mons with new ones configured with the alternative network. there is a messy way but i didn't try it. | 13:59 |
jamespage | schegi, OK - sounds like it needs a big health warning then | 13:59 |
=== foobarbazqux is now known as jcsackett | ||
avoine | noodles775: I'm hitting it on a subordinate | 14:05 |
noodles775 | avoine: yeah, I think I remember bloodearnest hitting it in a subordinate too. I'd still thought that hooks were serialised there too, but obviously I'm missing something. But +1 to the suggested fix. | 14:07 |
avoine | noodles775: ok, I'll finish with the django charm and I'll work on a fix | 14:09 |
=== vds` is now known as vds | ||
noodles775 | avoine: Oh - great. Let me know if you need a hand or don't get to it and I can take a look. | 14:10 |
avoine | ok | 14:18 |
schegi | jamespage, another question: if i'd like specific devices used as journals for individual osds (got a couple of ssds especially for journals), could i just try adding [osd.X] sections to the ceph charm's ceph.conf template, or is there anything that speaks against it? | 14:20 |
jamespage | schegi, there is a specific configuration option for osd journal devices for this purpose | 14:21 |
jamespage | schegi, osd-journal | 14:21 |
cory_fu | How does one re-attach to a debug-hooks session that was disconnected while still in use? Running `juju debug-hooks` again says it's already being debugged | 14:22 |
jamespage | schegi, it may be a little blunt for your requirements - let me know | 14:22 |
cory_fu | Ah. `juju ssh` followed by `sudo byobu` seems to have worked | 14:24 |
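cory_fu's recovery path as a sketch (unit name hypothetical) - debug-hooks runs inside a byobu/tmux session on the unit, so re-attaching to that session resumes the debug shell:

```bash
juju ssh mongodb/0   # ssh back onto the unit being debugged
sudo byobu           # re-attach to the still-running debug-hooks session
```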
schegi | jamespage, the config.yaml of the ceph charm only mentions a parameter osd-journal and says "The device to use as a shared journal drive for all OSD's." But i'd like ceph to use one particular device per running osd. | 14:27 |
jamespage | schegi, yeah - its blunt atm | 14:27 |
jamespage | schegi, we need a nice way of hashing the osd's onto the available osd-journal devices automatically - right now it's just a single device | 14:28 |
schegi | So i thought adding [osd.x] sections to the ceph.conf template could help. they won't change anyway | 14:28 |
schegi | it would be fine for me to do the mapping manually just to get it working. But yeah, you are right, especially if you are working on different machines. | 14:29 |
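For reference, setting the single shared-journal option that config.yaml does expose (device path illustrative):

```bash
# at deploy time, via a config file keyed by the service name
cat > ceph.yaml <<EOF
ceph:
  osd-journal: /dev/sdb
EOF
juju deploy --repository=~/charms --config ceph.yaml local:trusty/ceph
```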
marcoceppi | negronjl: hey, do you have any updated mongodb branches? | 14:33 |
marcoceppi | the one in the store is broken and I've got a demo in 1.5 hours at mongodb world | 14:34 |
negronjl | marcoceppi, I don't ... broken how ? | 14:34 |
negronjl | marcoceppi, I'm already working on the race condition for the mongos relation but, no fix yet | 14:34 |
schegi | jamespage, i always thought when playing around with the charm that it would also be nice to have the option to pass a ceph.conf file to the charm on deployment, so that it gets all of its parameters from this conf. | 14:34 |
marcoceppi | negronjl: mongos is my main problem | 14:34 |
marcoceppi | every time I try deploying it and working around the race it still fails | 14:35 |
marcoceppi | also, it only works on local providers | 14:35 |
marcoceppi | on all other providers it expects mongodb to be exposed | 14:35 |
negronjl | marcoceppi, the only workaround that I can give you is to deploy manually ( juju deploy ...... ) as opposed to the bundle | 14:35 |
marcoceppi | and fails on hp-cloud | 14:35 |
marcoceppi | negronjl: yeah, that's failing for me too | 14:35 |
automatemecolema | Why is it that sometimes my local charms don't show up in the gui? | 14:35 |
negronjl | marcoceppi, I am still working on it :/ | 14:35 |
marcoceppi | negronjl: okay, np | 14:36 |
automatemecolema | marcoceppi: So are you saying the Mongo charm doesn't work in a bundle on any providers except local? We were planning on having a bundle that includes Mongo | 14:37 |
marcoceppi | negronjl: yeah, now configsrv is failing | 14:38 |
negronjl | marcoceppi, on which provider ? | 14:38 |
negronjl | marcoceppi, precise or trusty ? | 14:38 |
marcoceppi | precise | 14:40 |
negronjl | marcoceppi, pastebin the bundle that you are using so I can use it to debug here ... I'm working on that now | 14:40 |
marcoceppi | negronjl: precise, http://paste.ubuntu.com/7695787/ I removed the database -> mongos relation in the bundle | 14:41 |
marcoceppi | so cfgsvr would come up first | 14:41 |
marcoceppi | but now that's getting failed relations | 14:41 |
bloodearnest | noodles775: avoine: it is my understanding that hooks are serialised on a unit by the unit agent, regardless of which charm they are part of | 14:41 |
bloodearnest | noodles775: avoine: but you could get a clash if you set up a cron job that uses ansible, for example | 14:42 |
avoine | bloodearnest: even subordinate? | 14:42 |
bloodearnest | avoine: yes | 14:43 |
negronjl | marcoceppi, deploying now | 14:44 |
marcoceppi | god speed | 14:45 |
marcoceppi | negronjl: when I got to the webadmin, after attaching a shard, it doesn't say anything about replsets | 14:54 |
negronjl | marcoceppi, still deploying ... give me a few to look around | 14:55 |
marcoceppi | wow, it took a long ass time, but all the config servers just failed with replica-set-relation-joined | 14:57 |
marcoceppi | negronjl: I think I've made a little progress | 14:59 |
negronjl | marcoceppi, what have you found ? | 15:00 |
marcoceppi | configsvr mongodbs are failing to start | 15:00 |
marcoceppi | error command line: too many positional options | 15:01 |
marcoceppi | on configsvr | 15:01 |
marcoceppi | upstart job is here | 15:01 |
marcoceppi | http://paste.ubuntu.com/7700869/ | 15:02 |
marcoceppi | negronjl: ^ | 15:02 |
* negronjl reads | 15:02 | |
marcoceppi | negronjl: removing the -- seems to fix it | 15:03 |
marcoceppi | looks like a parameter might not be written to the file correctly | 15:04 |
* marcoceppi attempts to figure out where that is | 15:04 | |
negronjl | marcoceppi, that will not really fix the issue ... just hide the replset parameter | 15:04 |
marcoceppi | negronjl: it seems to start fine | 15:04 |
marcoceppi | oh wait | 15:04 |
marcoceppi | jk | 15:04 |
negronjl | marcoceppi, the arguments after the -- ( the one that's by itself ) pass the params to mongod | 15:04 |
marcoceppi | no it doesn't | 15:04 |
marcoceppi | yeah | 15:05 |
marcoceppi | fuck | 15:05 |
negronjl | you should not have a replset at all now | 15:05 |
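An illustrative fragment of the kind of upstart exec line being debugged (paths and flags hypothetical, not the charm's actual job); per negronjl, only the arguments after the lone `--` are handed through to mongod, so a stray positional token ahead of it is what produces "too many positional options":

```bash
# /etc/init/mongodb.conf (sketch)
exec start-stop-daemon --start --quiet --chuid mongodb \
    --exec /usr/bin/mongod -- --config /etc/mongodb.conf --configsvr
```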
=== scuttle|afk is now known as scuttlemonkey | ||
lukebennett | Hello everybody, I have an issue I can't find any reference to online anywhere - when bootstrapping my environment using MAAS, it's crashing out because the juju-db job is already running. This didn't occur the first time I bootstrapped, only after I destroyed that initial environment. It feels like the MAAS node it deployed to hasn't destroyed itself | 15:45 |
lukebennett | properly. Any ideas? | 15:45 |
lukebennett | I haven't yet tried manually rebooting the node but it feels like that shouldn't be necessary | 15:46 |
=== tvansteenburgh1 is now known as tvansteenburgh | ||
lazypower | lukebennett: did your bootstrap node go from allocated to off when you ran destroy-environment? | 16:30 |
automatemecolema | Any takers on whether it makes sense to use juju as a provisioning tool, but allow a config management tool to do all the heavy lifting in regards to relationships? I'm thinking along the lines of hiera in puppet | 16:31 |
lazypower | automatemecolema: when you say heavy lifting - what do you mean? | 16:31 |
automatemecolema | lazypower: well, we're looking at using puppet hiera to build relationships between different nodes, and using puppet to deploy apps on the instances. | 16:32 |
lazypower | automatemecolema: sounds feasible - are you initiating the relationships with juju? | 16:33 |
AskUbuntu | MAAS JUJU cloud-init-nonet waiting for network device | http://askubuntu.com/q/488114 | 16:40 |
sparkiegeek | lukebennett: sounds like maybe your node isn't set to PXE boot? | 16:43 |
=== CyberJacob|Away is now known as CyberJacob | ||
frobware | I was trying to run juju bootstrap on arm64 and it mostly works (http://paste.ubuntu.com/7701639/) but at the very end of the pastebin output it tries to use an amd64 download. Is there somewhere where I can persuade juju that I want arm64? | 17:49 |
automatemecolema | lazypower: well the thought was attaching facts to relationships and having puppet hiera do the relationship work | 17:57 |
Pa^2 | My machine "0" started just fine. Machine "1" has been pending for almost 3 hours. Running 14.04 but the Wordpress says "precise". Am I missing something simple? | 19:15 |
Pa^2 | ...this is a local install. | 19:16 |
arosales | Pa^2: is your home dir encrypted? | 20:07 |
Pa^2 | arosales: no | 20:07 |
arosales | Pa^2: ok. | 20:10 |
arosales | Pa^2: and since your client is running 14.04 I am guessing you are running juju version ~1.18, correct? | 20:10 |
Pa^2 | I assume so. How can I verify? | 20:12 |
Pa^2 | 1.18.4.1 | 20:12 |
arosales | juju --version | 20:13 |
Pa^2 | 1.18.4-trusty-amd64 | 20:14 |
arosales | Pa^2: and what does `dpkg -l | grep juju-local` return? | 20:14 |
lazypower | Pa^2: when you say local install you mean you're working with the local provider? | 20:17 |
Pa^2 | ii juju-local 1.18.4-0ubuntu1~14.04.1~juju1 all dependency package for the Juju local provider | 20:17 |
Pa^2 | yes to local provider | 20:17 |
lazypower | run `sudo lxc-ls --fancy` do you see a precise template that is in the STOPPED state? | 20:17 |
Pa^2 | Yes, the precise template is in the STOPPED state. | 20:19 |
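Illustrative output of the check above (container names vary with the user and environment; only the template matters here):

```
$ sudo lxc-ls --fancy
NAME                   STATE    IPV4  IPV6  AUTOSTART
juju-precise-template  STOPPED  -     -     NO
```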
lazypower | hmm ok, so far so good | 20:22 |
lazypower | can you pastebin your machine-0.log for me? | 20:22 |
lazypower | Pa^2: just fyi, the logpath is ~/.juju/local/logs/machine-0.log | 20:28 |
Pa^2 | Haven't used pastebin before, lets see if this works. http://pastebin.com/SHSEABYJ | 20:28 |
lazypower | ah, and protip for next time, sudo apt-get install pastebinit. you can then call `pastebinit path/to/file` and it gives you the short link | 20:28 |
Pa^2 | Thank you, great tip. | 20:29 |
AskUbuntu | MAAS / Juju bootstrap - ubuntu installation stuck at partitioner step | http://askubuntu.com/q/488170 | 20:30 |
lazypower | hmm, nothing really leaps out at me here as a red flag. it's pending, you say? is your local network perchance in the 10.0.3.x range? | 20:30 |
Pa^2 | My system is dual homed, Yeah, I wondered about that... 10.0.0.0 is routed to my WAN gateway. | 20:32 |
lazypower | well, i know that if you're in the 10.0.3.0/24 CIDR, your containers may run into ip collision | 20:32 |
lazypower | which will prevent them from starting | 20:32 |
lazypower | if you run `juju destroy-environment local && juju bootstrap && juju deploy wordpress && watch juju status` | 20:33 |
lazypower | you will recreate the local environment. it should literally take seconds to get moving since you have the container templates cached | 20:33 |
lazypower | if you run that, and it still sits in pending longer than a minute, can you re-pastebin the machine-0.log and unit-wordpress-0.log for me? | 20:34 |
lazypower | actually, instead of unit-wordpress-0, give me the all-machines.log | 20:34 |
Pa^2 | That would explain it... I will down the WAN interface and see if your suggestion works. | 20:34 |
lazypower | ok, if that doesn't help, the next step in debugging is to try and recreate it and capture the event that's causing your pinch point. | 20:34 |
lazypower | Pa^2: warning, i have to leave to head to my dentist appt in about 10 minutes - i'll be back in about an hour and can help further troubleshooting | 20:35 |
Pa^2 | Thanks so much for taking the time. Much appreciated. | 20:35 |
lazypower | no problem :) its what i'm here for. | 20:36 |
gQuigs | how does a charm know it's for precise or trusty? (I'd really like to get trusty versions of nfs, wordpress, and more...) | 20:37 |
arosales | gQuigs: when the charm is reviewed to pass policy it is put into a trusty or precise branch | 20:39 |
gQuigs | arosales: I'm trying to run it locally though... to help make either of them work on trusty | 20:40 |
lazypower | gQuigs: the largest portion of blockers we have for charms making it into trusty are lack of tests. | 20:40 |
arosales | gQuigs: the ~charmers team is currently working to verify charms on trusty and working with charm authors to promote applicable precise charms to trusty | 20:40 |
Pa^2 | Still no love... I think I will start with a new clean platform and try it from the ground up. | 20:40 |
lazypower | gQuigs: in your local charm repo, mkdir trusty, and `charm get nfs` then you can deploy with `juju deploy --repository=~/charms local:trusty/nfs` and trial the charm on trusty. | 20:40 |
arosales | gQuigs: ah, just put them in a file path such as ~/charms/trusty/ | 20:40 |
gQuigs | interesting.. is precise hardcoded in? works: juju deploy --repository=/store/git precise/nfs | 20:41 |
lazypower | gQuigs: however if you can lend your hand to writing the tests we'd love to have more involvement from the community on charm promotion from precise => trusty. Tests are the quickest way to make that happen. | 20:41 |
gQuigs | doesn't: juju deploy --repository=/store/git trusty/nfs | 20:41 |
arosales | from ~/ issue "juju deploy --repository=./charms local:trusty/charm-name" | 20:41 |
gQuigs | arosales: lazypower thanks, making the folder trusty worked | 20:42 |
gQuigs | :) | 20:42 |
gQuigs | lazypower: hmm, what kind of tests are needed? | 20:42 |
gQuigs | doc? | 20:43 |
arosales | gQuigs: juju is looking for something along the lines of "<repository>:<series>/<service>" | 20:43 |
arosales | https://juju.ubuntu.com/docs/charms-deploying.html#deploying-from-a-local-repository | 20:43 |
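Pulling lazypower's and arosales' steps together into one runnable sequence (repository path illustrative):

```bash
mkdir -p ~/charms/trusty
(cd ~/charms/trusty && charm get nfs)   # fetch the charm source into the trusty series dir
juju deploy --repository=~/charms local:trusty/nfs
```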
lazypower | gQuigs: we *love* unit tests, but amulet tests as an integration/relationship testing framework are acceptable in place of unit tests (extra bonus points for both) | 20:43 |
lazypower | gQuigs: just make sure your charm repo looks similar to the following: http://paste.ubuntu.com/7702523/ | 20:43 |
lazypower | gQuigs: and wrt docs - https://juju.ubuntu.com/docs/tools-amulet.html | 20:44 |
lazypower | unit testing is hyper specific to the language of the charm as its written. but amulet is always 100% python | 20:44 |
lazypower | the idea is to model a deployment, and using the sentries to probe / make assertions about the deployment | 20:44 |
lazypower | using wordpress/nfs as an example, you add wordpress and nfs to the deployment topology, configure them, deploy them, then do things like, from the wordpress host, use dd to write an 8MB file to the NFS share, then, using the NFS sentry, probe to ensure 8MB were written across the wire and landed where we expect them to be. | 20:45 |
lazypower | it can be tricky to write the tests, and subjective, but they go a long way to validating claims that the charm is doing what we intend it to do. | 20:45 |
lazypower | s/we/the author/ | 20:45 |
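The probe lazypower sketches, expressed as the shell steps an amulet test would drive on each sentry (mount points hypothetical):

```bash
# on the wordpress unit: write 8MB across the wire to the NFS share
dd if=/dev/zero of=/mnt/wordpress-nfs/probe.bin bs=1M count=8

# on the nfs unit: confirm the bytes landed where expected
stat -c %s /srv/data/probe.bin   # expect 8388608
```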
Pa^2 | http://paste.ubuntu.com/7702540/ | 20:46 |
lazypower | strange, it's not panicking, it's not giving you errors about anything common | 20:46 |
lazypower | and this is on a trusty host too right PA? | 20:47 |
lazypower | yeah | 20:47 |
lazypower | hmm | 20:47 |
Pa^2 | affirmative | 20:47 |
lazypower | Pa^2: lets see your all-machines.log | 20:47 |
Pa^2 | http://paste.ubuntu.com/7702550/ | 20:48 |
lazypower | oh wow it never even hit the init cycle, i expected *something* additional in all-machines | 20:48 |
gQuigs | lazypower: thanks, will try... they don't have to be comprehensive though? (nfs can work with anything that can do "mount" which is pretty open ended..) | 20:48 |
Pa^2 | Don't be late for your appointment... this can wait | 20:48 |
lazypower | good point. thanks Pa^2 - my only other thought is to look into using the juju-clean plugin | 20:49 |
lazypower | and starting from complete scratch - as in re-fetching the templates and recreating them (takes ~ 8 minutes on a typical broadband connection) | 20:49 |
Pa^2 | Will have a look. Thanks again. I will keep you apprised. | 20:50 |
lazypower | if the template creation pooped out in the middle / the end there, and didn't raise an error, it can be finicky like this | 20:50 |
lazypower | ok i'm out, see you shortly | 20:50 |
lazypower | gQuigs: one final note before i jam - take a look at the owncloud charm, and the tomcat charm | 20:51 |
lazypower | they have tests, and they exhibit what i would consider decent tests. The openstack charms are another example of high quality tested charms | 20:51 |
lazypower | using them as a guide will set you on the right path | 20:51 |
gQuigs | lazypower: will do, thanks! | 20:56 |
ChrisW1 | mwhudson: too? | 21:07 |
mwhudson | ChrisW1: eh? | 21:07 |
ChrisW1 | are you on the juju team at canonical? | 21:07 |
mwhudson | ChrisW1: no | 21:07 |
ChrisW1 | that's okay then :-) | 21:08 |
mwhudson | but i do arm server stuff and was involved in porting juju | 21:08 |
ChrisW1 | was getting a little freaked out at the number of people I appear to know on the juju team... | 21:08 |
mwhudson | ah yeah | 21:08 |
ChrisW1 | anyway, long time no speak, how's NZ treating you? | 21:09 |
mwhudson | good! | 21:09 |
mwhudson | although my wife is away for work and my daughter's not been sleeping well so i'm pretty tired :) | 21:10 |
mwhudson | luckily the coffee is good... | 21:10 |
ChrisW1 | hahaha | 21:10 |
ChrisW1 | yes, coffee is good | 21:10 |
ChrisW1 | I have a bad habit of making a pot of espresso and drinking it in a cup when I'm at home... | 21:11 |
mwhudson | and you? are you using juju for stuff, or just here for social reasons? :) | 21:11 |
ChrisW1 | oh, *cough* "social reasons"... | 21:11 |
whit | hey heath | 21:41 |
=== vladk is now known as vladk|offline | ||
jose | tvansteenburgh: ping | 22:23 |
jose | niedbalski: ping | 22:23 |
jose | cory_fu: ping | 22:24 |
cory_fu | jose: What's up? | 22:25 |
jose | cory_fu: hey, I was wondering if you could take a look at the Chamilo charm and give it a review | 22:25 |
jose | I was trying to get as many charm-contributor reviews as possible so it can be easy for charmers to approve | 22:25 |
jose | upstream is having an expo and they could mention juju | 22:25 |
=== CyberJacob is now known as CyberJacob|Away | ||
niedbalski | jose, sure, i can do it in a bit | 22:31 |
niedbalski | brb | 22:31 |
jose | thank you :) | 22:31 |
AskUbuntu | juju mongodb charm not found | http://askubuntu.com/q/488209 | 22:32 |
lazypower | Pa^2: how goes it? | 22:38 |
cory_fu | jose: I ran into one issue when testing that charm. | 23:38 |
cory_fu | Other than that, though, it looked great | 23:38 |
cory_fu | That would be an excellent case-study for converting to the services framework, when it gets merged to charmhelpers. It also made me realize that we need to enable the services framework to work with non-python charms | 23:39 |
Pa^2 | lazypower: I tried deleting all of the .juju and lxc cache then deploying mysql.... just checked, still pending after almost two hours. | 23:40 |
cory_fu | But it would handle all of the logic that the charm currently manages using dot-files, automatically | 23:40 |
lazypower | Pa^2: if you're on a decent connection and it takes longer than 10-12 minutes, it's failed. | 23:40 |
lazypower | usually once you've got the deployment images cached, it'll take seconds. ~ 5 to 20 seconds and you'll see the container up and running | 23:40 |
jose | cory_fu: what was the issue? | 23:41 |
lazypower | cory_fu: the services framework has been merged as of yesterday. you mean the -t right? | 23:41 |
Pa^2 | I will start with a clean with a clean bare metal install of Ubuntu and start from scratch. | 23:41 |
lazypower | Pa^2: ok, ping me i'll be around most of the evening if you need any further assistance. | 23:41 |
lazypower | Pa^2: i can also spin up a VM and follow along at home | 23:42 |
jose | cory_fu: oh, read the comment. will check now | 23:42 |
Pa^2 | I won't be able to do so until I am at work tomorrow. Remote sessions via putty from a Windows laptop are just too cumbersome. | 23:42 |
lazypower | i understand completely. I have an idea for you though Pa^2 | 23:43 |
lazypower | we publish a juju vagrant image that may be of some use to you. | 23:43 |
lazypower | vagrant's support on windows is pretty stellar | 23:43 |
jose | cory_fu: the branch has been fixed, if you could try again that'd be awesome :) | 23:43 |
lazypower | Pa^2: https://juju.ubuntu.com/docs/config-vagrant.html | 23:44 |
Pa^2 | Good possibility. I will look into it. They frown on my loading software on my production laptop. | 23:44 |
Pa^2 | One way or another I will keep you apprised of my circumstances. | 23:45 |
Pa^2 | Should have brought my Linux laptop instead of this PoC. | 23:47 |
Pa^2 | Misunderstood, vagrant on my Linux box at work then... | 23:49 |
Pa^2 | Lemme look into it. | 23:50 |
cory_fu | jose: Great, that fixed it | 23:52 |
* jose starts running in circles waving his arms | 23:52 | |
* Pa^2 slaps jose on the back... Success is grand. | 23:53 | |
cory_fu | jose: Added my +1 :) | 23:54 |
jose | thanks, cory_fu! | 23:54 |
cory_fu | Alright, well, I'm out for the evening | 23:54 |
cory_fu | Have a good night | 23:54 |
jose | you too | 23:54 |
lazypower | jose: did you do something good? | 23:59 |
lazypower | do i get to hi5 you? | 23:59 |