/srv/irclogs.ubuntu.com/2014/04/23/#juju.txt

=== CyberJacob is now known as CyberJacob|Away
[00:37] <hazmat> lazyPower, well i wasn't able to reproduce the issue utlemming was having
[00:37] <hazmat> but i was using trusty.. i'll try again with precise
[00:37] <lazyPower> hazmat: did the redirect daemon work ootb?
[00:38] <hazmat> lazyPower, the underlying symptom was exception on connection
[00:38] <hazmat> lazyPower, which didn't occur
[00:39] <lazyPower> Hmmmm.. curious
=== 21WAAD3MF is now known as wallyworld
[01:43] <jose> negronjl: pushed fixes+additions to seafile, if you could review them
[01:43] <jose> sorry for the branch name being a little bit... long
[01:44] <negronjl> jose: no worries ... I'll review a bit later tonight
[01:44] <jose> thanks :)
[03:10] <Tug> how can I access juju's bootstrap env ?
[03:10] <Tug> i'd like to hack into the db and remove an entry :)
=== timrc is now known as timrc-afk
=== wallyworld_ is now known as wallyworld
=== vladk|offline is now known as vladk
=== CyberJacob|Away is now known as CyberJacob
=== CyberJacob is now known as CyberJacob|Away
=== vladk is now known as vladk|offline
=== axw is now known as axw-away
=== vladk|offline is now known as vladk
[11:45] <ghartmann> do we already have any clue why add-machine doesn't work for local providers anymore ?
=== timrc-afk is now known as timrc
[13:20] <james_w> hi
[13:21] <james_w> I'm unable to deploy any charms to a local environment on my trusty box
[13:21] <james_w> http://paste.ubuntu.com/7310317/ is all-machines.log
[13:21] <james_w> anyone have any ideas of how to debug further?
=== BradCrittenden is now known as bac
[13:29] <cory_fu> james_w: I believe that's the error I ran into a few days ago.  Try adding "default_series: precise" to your local entry in environments.yaml
[13:29] <cory_fu> Sorry, default-series: precise
[13:36] <james_w> cory_fu: no change
[13:38] <cory_fu> Hrm.  mbruzek, tvansteenburgh: Do you recall what ended up being the fix for the environ-provisioner error on LXC?
[13:38] <mbruzek> Yes
[13:38] <mbruzek> setting default series and cleaning up the local env.
[13:39] <james_w> "cleaning up"?
[13:39] <cory_fu> Oh, let me pastebin the cleanup script
[13:39] <mbruzek> Just a second
[13:39] <cory_fu> http://pastebin.ubuntu.com/7314725/
[13:40] <cory_fu> You may need to change line 21 to: sudo rm -rf ~/.juju/local
[13:40] <mbruzek> http://paste.ubuntu.com/7314726/
[13:40] <bloodearnest> mbruzek: gunicorn readme fix:
[13:40] <bloodearnest> https://code.launchpad.net/~bloodearnest/charms/precise/gunicorn/fix-readme/+merge/216836
[13:41] <cory_fu> james_w: Also, do you have encrypted home dirs enabled?  If so, you may need to set the root-dir in the local env to outside of the home dir (e.g., /var/juju, as in my cleanup script)
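For reference, the two suggested tweaks combined into a minimal ~/.juju/environments.yaml local entry might look like this (a sketch; the /var/juju path is only an example, as cory_fu says, and root-dir matters only if your home directory is encrypted):

    environments:
      local:
        type: local
        default-series: precise
        # only needed with encrypted home dirs; path is an example
        root-dir: /var/juju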
[13:41] <mbruzek> I saw that bloodearnest thank you.
[13:41] <james_w> I don't
[13:41] <cory_fu> Ok, then mbruzek's version is what you want
[13:43] <cory_fu> james_w: NB: You should run the cleanup script *after* doing juju destroy-environment --force -y local
[13:43] <cory_fu> And you will get several "file not found" responses from the script, which is fine
[13:47] <james_w> ok, machine 0 is reporting:    agent-state: down
[13:48] <james_w> now I did deploy ubuntu and it has agent-state: started
[13:50] <james_w> and I still have the environ-provisioner message
[13:53] <james_w> cory_fu: any other ideas?
[13:55] <cory_fu> Hrm.  Other than trying juju destroy-environment --force -y local ; clean-lxc ; juju bootstrap a couple more times, not really.  :-/
[13:56] <cory_fu> What versions of juju and juju-local do you have?
[13:57] <james_w> hmm
[13:58] <james_w> they weren't installed
[13:58] <james_w> trying again after installing them
[13:59] <james_w> 1.18.1-0ubuntu1
[13:59] <cory_fu> Ok, that's the current version
[13:59] <james_w> still looks to be the same
[14:01] <james_w> https://bugs.launchpad.net/juju-core/+bug/1248800 mentions the error and says "restarting the provisioner fixed it"
[14:01] <james_w> how would I try that?
[14:01] <_mup_> Bug #1248800: worker/provisioner missed signal to start new machine <deploy> <hs-arm64> <juju-core:Triaged> <https://launchpad.net/bugs/1248800>
[14:03] <cory_fu> Hrm.  I'm really not sure
[14:04] <james_w> I can't get my work done without a working environment
[14:04] <james_w> I guess I could try deploying to ec2
[14:05] <cory_fu> I think the provisioner should be restarted when you destroy-env and bootstrap
=== ming is now known as Guest7729
[14:27] <mbruzek> james_w, Are you still having problems with juju and local?
[14:27] <james_w> mbruzek: yep
[14:28] <mbruzek> Can you pastebin the error you are seeing or your log file?
[14:30] <james_w> mbruzek: http://paste.ubuntu.com/7315103/ is the all-machines.log
[14:33] <mbruzek> OK james_w try this please
[14:33] <mbruzek> juju destroy-environment -e local -y --force
[14:33] <mbruzek> (run clean script)
[14:33] <mbruzek> juju sync-tools
[14:34] <mbruzek> juju bootstrap -e local
[14:34] <mbruzek> (with the --upload-tools flag)
[14:35] <mbruzek> james_w, I am assuming your ~/.juju/environments.yaml file already has the default-series: set to something valid.
[14:36] <james_w> mbruzek: precise
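Collected in one place, mbruzek's reset sequence would look roughly like this (the cleanup-script name is a stand-in for the pastebinned script above):

    # tear down the broken local environment
    juju destroy-environment -e local -y --force
    # run the cleanup script from the pastebin above (name is a placeholder)
    ./clean-local-env.sh
    # refresh the agent tools, then bootstrap with locally uploaded tools
    juju sync-tools
    juju bootstrap -e local --upload-tools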
[14:44] <mbruzek> james_w, Any progress?
[14:45] <james_w> mbruzek: still looks the same
[14:48] <mbruzek> james_w, What is this bit about you not having juju-local installed?  Was this working at some point, or is this your first attempt at getting juju local running?
[14:48] <james_w> mbruzek: first attempt with juju-core
[14:51] <mbruzek> james_w, Can you describe what is not working for you?  The log on pastebin looks mostly OK.
[14:51] <james_w> mbruzek: no services start
[14:51] <mbruzek> Are they all stuck in pending?
[14:52] <james_w> http://pastebin.ubuntu.com/7315210/
[14:52] <lazyPower> james_w: is your LAN using the 10.0.3.0 segment?
[14:53] <qhartman> I'm working on getting a MAAS / Juju deployment setup to manage an Openstack cluster, and I've gotten to the point that I have Juju basically going, but any host I add gets stuck in "pending".
[14:53] <qhartman> oh hey, looks like james_w is facing something similar maybe...
[14:53] <james_w> lazyPower: 10.0.1
[14:53] <lazyPower> james_w: ok, i ask because ip collision will prevent the lxc containers from booting
[14:53] <james_w> qhartman: mine is with lxc
[14:53] <qhartman> and any nodes I try to destroy get stuck in "dying".
[14:53] <qhartman> james_w, ah
[14:53] <lazyPower> if you're not using 10.0.3.0 you'll be fine.
[14:54] <lazyPower> for that issue anyway
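A quick way to check for that collision, assuming the default lxcbr0 bridge:

    # the LXC bridge normally sits on 10.0.3.1/24
    ip addr show lxcbr0
    # any other interface or route already on 10.0.3.0/24 would indicate a collision
    ip route | grep '10\.0\.3\.'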
[14:54] <lazyPower> qhartman: logs?
[14:54] <mbruzek> qhartman do you have a pastebin of the logs?
[14:54] <qhartman> lazyPower, Love to. Which ones?
[14:54] <lazyPower> all-machines / machine-0
[14:54] <qhartman> rgr
[14:54] <mbruzek> ~/.juju/local/log/all-machines.log
[14:55] <lazyPower> mbruzek: thats fine for lxc, but he's working with maas.
[14:55] <lazyPower> s/he's/qhartman
[14:55] <mbruzek> Thanks lazyPower
[14:55] <lazyPower> np bruddah
[14:58] <qhartman> There's machine-0.log: 2014-04-22 23:00:34 INFO juju.cmd supercommand.go:297 running juju-1.18.1-trusty-amd64 [gc]
[14:58] <qhartman> 2014-04-22 23:00:34 INFO juju.cmd.jujud machine.go:127 machine agent machine-0 start (1.18.1-trusty-amd64 [gc])
[14:58] <qhartman> 2014-04-22 23:00:34 DEBUG juju.agent agent.go:384 read agent config, format "1.18"
[14:58] <qhartman> 2014-04-22 23:00:34 INFO juju.cmd.jujud machine.go:155 Starting StateWorker for machine-0
[14:58] <qhartman> 2014-04-22 23:00:34 INFO juju runner.go:262 worker: start "state"
[14:58] <qhartman> 2014-04-22 23:00:34 INFO juju.state open.go:81 opening state; mongo addresses: ["localhost:37017"]; entity "machine-0"
[14:58] <qhartman> 2014-04-22 23:00:34 INFO juju runner.go:262 worker: start "api"
[14:58] <lazyPower> qhartman: pastebin plz
[14:58] <qhartman> 2014-04-22 23:00:34 INFO juju apiclient.go:114 state/api: dialing "wss://localhost:17070/"
[14:58] <qhartman> 2014-04-22 23:00:34 INFO juju runner.go:262 worker: start "termination"
[14:58] <qhartman> 2014-04-22 23:00:34 ERROR juju apiclient.go:119 state/api: websocket.Dial wss://localhost:17070/: dial tcp 127.0.0.1:17070: connection refused
[14:58] <qhartman> 2014-04-22 23:00:34 ERROR juju runner.go:220 worker: exited "api": websocket.Dial wss://localhost:17070/: dial tcp 127.0.0.1:17070: connection refused
[14:58] <qhartman> 2014-04-22 23:00:34 INFO juju runner.go:254 worker: restarting "api" in 3s
[14:58] <qhartman> 2014-04-22 23:00:34 INFO juju.state open.go:119 connection established
[14:58] <qhartman> 2014-04-22 23:00:34 DEBUG juju.utils gomaxprocs.go:24 setting GOMAXPROCS to 16
[14:58] <qhartman> 2014-04-22 23:00:34 INFO juju runner.go:262 worker: start "instancepoller"
[14:58] <qhartman> 2014-04-22 23:00:34 INFO juju runner.go:262 worker: start "apiserver"
[14:58] <qhartman> 2014-04-22 23:00:34 INFO juju runner.go:262 worker: start "cleaner"
[14:58] <qhartman> 2014-04-22 23:00:34 INFO juju runner.go:262 worker: start "resumer"
[14:58] <mbruzek> qhartman please use pastebin
[14:58] <qhartman> 2014-04-22 23:00:34 INFO juju.state.apiserver apiserver.go:43 listening on "[::]:17070"
[14:58] <qhartman> 2014-04-22 23:00:34 INFO juju runner.go:262 worker: start "minunitsworker"
[14:58] <qhartman> 2014-04-22 23:00:37 INFO juju runner.go:262 worker: start "api"
[14:58] <qhartman> 2014-04-22 23:00:37 INFO juju apiclient.go:114 state/api: dialing "wss://localhost:17070/"
[14:59] <qhartman> 2014-04-22 23:00:37 INFO juju.state.apiserver apiserver.go:131 [1] API connection from 127.0.0.1:56002
[14:59] <qhartman> 2014-04-22 23:0
[14:59] <qhartman> oh balls, sorry
[14:59] <qhartman> yeah
[14:59] <qhartman> there: http://pastebin.com/44wjjRvA
[14:59] <qhartman> thought I had copied the URL when I hadn't
[15:00] <lazyPower> qhartman: on the machine that's in 'pending' is this machine registered in the maas region controller?
[15:00] <qhartman> yup
[15:00] <qhartman> and when I juju-created it it correctly started up and did the fastpath install
[15:00] <lazyPower> ok
[15:00] <qhartman> and then... nothing
[15:01] <lazyPower> i'm curious because the logs state mysql has no node associated
[15:01] <lazyPower> line 57
[15:01] <qhartman> right, I've been futzing around with trying to add/remove services and machines since then
[15:01] <qhartman> and a lot of that is weirdly not reflected in that log
[15:02] <qhartman> probably a good place to start would be how to clean up this "dying" machine-1 so I can start again from a clean-ish spot
[15:02] <lazyPower> juju remove-unit <unit #> --force
[15:02] <qhartman> aha
[15:02] <lazyPower> er
[15:02] <lazyPower> where am i this morning
[15:02] <qhartman> heh
[15:02] * lazyPower needs coffee
[15:02] <qhartman> so say we all
[15:02] <lazyPower> juju destroy-machine <machine #> --force
[15:02] <qhartman> right
[15:02] <lazyPower> remove-unit :P hah
[15:03] <lazyPower> i'm clearly working on something else with that command
[15:03] * lazyPower whistles innocently
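For clarity, the two commands being mixed up here, with placeholder unit and machine numbers:

    juju remove-unit mysql/0        # removes a service unit by unit name
    juju destroy-machine 1 --force  # force-removes a stuck machine by machine number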
[15:03] <qhartman> heh
[15:04] <qhartman> alright, now it says it's dead with a pending agent-state
[15:04] <qhartman> aaand, gone
[15:04] <qhartman> ok
[15:06] <lazyPower> boom
[15:08] <qhartman> alright "juju deploy mysql", it seems to have grabbed the same machine it used for machine-1 before, but now it's calling it machine-3
[15:08] <qhartman> everything is "pending"...
[15:09] <lazyPower> right juju will assign its own alias to the machine
[15:09] <lazyPower> it increments +1
[15:09] <lazyPower> for each
[15:09] <qhartman> right, makes sense
[15:10] <qhartman> is there any activity I can look for on the machine?
[15:10] <qhartman> the machine-0 log hasn't changed much
[15:11] <qhartman> here are the new lines: http://pastebin.com/wFyGGTYF
[15:12] <lazyPower> did the node ever get bootstrapped into juju? when i boot my kvm maas units, it takes ~ 2 minutes for them to come online and register with the juju bootstrap node
[15:12] <lazyPower> also, can your maas units reach the bootstrap node?
[15:12] <qhartman> they should be able to. Do they try to reach it by IP or name?
[15:12] <lazyPower> most of my help will be anecdotal qhartman, i've got a physical hardware machine as my region cluster, and all my maas nodes are KVM
[15:12] <lazyPower> it tries by name first, then by ip
[15:12] <qhartman> ok
[15:13] <qhartman> yeah, I'm trying to set things up so everything is multi-homed which seems to be confusing things somewhat.
[15:14] <qhartman> so, "juju bootstrap" from my MAAS controller spun up and bootstrapped the juju node machine-0
[15:14] <qhartman> but I didn't do any "bootstrap" on any other machine.
[15:14] <lazyPower> you dont need to
[15:14] <lazyPower> and your juju bootstrap controller came online right?
[15:15] <lazyPower> its up, running, and communicating with you  - i guess thats the case since you could issue a juju deploy mysql
[15:15] <lazyPower> hummm
[15:15] <qhartman> I guess so, here's my juju status: http://pastebin.com/D25ANbyw
[15:15] <qhartman> yeah, everything for machine-0 seems right as far as I can tell
[15:15] <lazyPower> it seems like the agent isn't doing what it needs to.
[15:15] <qhartman> right
[15:15] <qhartman> how does that get installed?
[15:16] <qhartman> does machine-0 try to ssh as some user over to it or something?
[15:16] <lazyPower> ^
[15:16] <lazyPower> during cloud init it pushes the proper ssh keys to the node
[15:16] <lazyPower> then the bootstrap node ssh's into it and kicks off the agent installation
[15:16] <qhartman> hm, ok
[15:16] <lazyPower> i'm thinking this may be wrt the tools
[15:17] <lazyPower> try doing a juju sync-tools and retry the provisioning
[15:17] <qhartman> destroy the service and the machine, and remove and re-add them to maas? All the way to the beginning?
[15:18] <qhartman> I see the ubuntu user on machine-3 has a bunch of juju keys in its authorized_keys
[15:18] <qhartman> so that seems to have worked....
[15:18] <qhartman> I wonder if the keys got out of sync somewhere...
[15:19] <lazyPower> its doubtful but possible.
[15:19] <qhartman> ah, I think I figured it out
[15:20] <qhartman> the routing is messed up on machine-3, its default path is super wrong
[15:20] <qhartman> so it can't download the tools from canonical
[15:20] <qhartman> ok
[15:20] <qhartman> whee
[15:20] <qhartman> now I have somewhere to go
[15:21] <qhartman> Thanks for the help
[15:21] <qhartman> I'll be back if I hit another wall.
[15:21] <qhartman> :D
[15:21] <lazyPower> np qhartman, glad that helped
[15:21] * qhartman enters lurk mode
[15:31] <mhall119> marcoceppi: can I deploy both precise and trusty units in Canonistack?
[15:31] <mhall119> while I wait for a trusty-enabled postgresql charm
=== BradCrittenden is now known as bac
[16:48] <james_w> mbruzek: found the problem. I had to disable ufw on my hos
[16:48] <james_w> host
[16:48] <james_w> mbruzek: now the units start
[16:48] <james_w> but the unit agent fails with
[16:48] <mbruzek> james_w, that is GREAT
[16:48] <james_w> 2014-04-23 16:47:19 ERROR juju runner.go:220 worker: exited "uniter": ModeInstalling cs:precise/ubuntu-4: git init failed: exec: "git": executable file not found in $PATH
[16:49] <james_w> is that part of the 'tools'
[16:49] <mbruzek> Thanks for sharing that
[16:49] <james_w> or a problem with the charm?
[16:49] <mbruzek> Looks like git is not installed in the charm
[16:50] <james_w> there's also 2014-04-23 16:49:35 WARNING juju.worker.uniter.charm git_deployer.go:200 no current staging repo
=== vladk is now known as vladk|offline
[16:56] <mbruzek> james_w, Did you get the 1.18 to work or did you update to the latest version?
[16:56] <james_w> mbruzek: I upgraded
[16:56] <james_w> mbruzek: not to say 1.18 wouldn't have worked with the disabled firewall
[16:56] <mbruzek> Based on your statement I suspect that the ufw could have been disabled with 1.18
[17:04] <james_w> the git thing is because the package install during cloud-init failed
[17:04] <james_w> apparently because there's something wrong with dns in the container
[17:06] <qhartman> lazyPower, I ended up needing to nuke and pave the host that was formerly machine-3, but adding the ip-forwarding on the MAAS controller so the route works ended up allowing "juju deploy" to work
[17:06] <qhartman> I now have nodes joining the juju and deploying services
[17:06] <qhartman> \o/
[17:12] <lazyPower> qhartman: awesome!
[17:12] <lazyPower> glad you got it sorted :)
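For anyone hitting the same routing issue, enabling forwarding and NAT on the MAAS controller looks roughly like this (a sketch assuming eth0 is the controller's outward-facing interface):

    # enable IPv4 forwarding on the MAAS controller
    sudo sysctl -w net.ipv4.ip_forward=1
    # NAT node traffic out through the external interface (eth0 is an assumption)
    sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE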
[17:21] <james_w> https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1205086 is the dns problem
[17:21] <_mup_> Bug #1205086: lxc-net dnsmasq --strict-order breaks dns for lxc non-recursive nameserver <libvirt (Ubuntu):New> <lxc (Ubuntu):New> <https://launchpad.net/bugs/1205086>
=== roadmr is now known as roadmr_afk
[17:40] <qhartman> calamity! Trusty doesn't have a nova-volume charm!
[17:40] <qhartman> is that by design?
[17:42] <lazyPower> qhartman: there's an audit going on for charms to have series promotion
[17:42] <qhartman> which means this doc doesn't apply correctly: https://help.ubuntu.com/community/UbuntuCloudInfrastructure . Is there a version that has been updated for Trusty? I have not found one.
[17:43] <lazyPower> right now, your best bet is to target precise - if you need a trusty charm, you can be one of the brave beta users and just specify trusty, deploy, and pray it works correctly
[17:43] <lazyPower> most of the charms will be ok, but there are some discrepancies between precise => trusty deployments.
[17:43] <qhartman> right
[17:44] <qhartman> so, I'm on trusty, so to deploy the precise charm I use "juju deploy cs:precise/nova-volume"?
[17:44] <lazyPower> so in order for you to do that, you need to create a local charm repository, and charm-get the charm you want to deploy on trusty into the trusty series directory
[17:44] <qhartman> ah, ok
[17:44] <lazyPower> then juju deploy local:trusty/nova-volume --repository=../../ (from within the nova charm dir)
[17:44] <lazyPower> but be warned, if it blows up
[17:44] <qhartman> yeah
[17:44] <lazyPower> as of right now, you're accepting a backwoods warranty. if it blows up in half, you own both halves.
[17:44] <qhartman> I get to keep all the pieces
[17:45] <lazyPower> :P
[17:45] <qhartman> heh
[17:45] <lazyPower> your feedback will be gold
[17:45] <lazyPower> if you run into any caveats make sure you ping the juju list with them
[17:45] <qhartman> well, this is a testing deployment anyway
[17:45] <qhartman> once I get a feel for things I'm going to kill it and re-deploy anyway
[17:45] <qhartman> will do
[17:45] <lazyPower> thanks qhartman
[17:45] <qhartman> sure thing
[17:46] <qhartman> What's the state of the ceph charms on Trusty? Big picture I'm planning on using that for storage on this deployment
[17:47] <lazyPower> not sure. check the charm store
[17:47] <qhartman> k
[17:47] <lazyPower> i think most if not all of the openstack charms have trusty releases
[17:47] <lazyPower> i know they were sprinting to get them pushed day of trusty release.
[17:47] <flohack> Hey, am I doing something wrong or is there simply no charm for memcached / wordpress on trusty?
[17:47] <lazyPower> but i'm not sure of anything beyond that, i've been busy with other areas of focus
[17:48] <qhartman> flohack, no trusty charm that I see
[17:48] <lazyPower> flohack: there is not. The trusty charms are part of an audit, and slow going.
[17:48] <qhartman> lazyPower, sure.
[17:48] <lazyPower> if you want to be a +1 reviewer to charms in the trusty series, we'd appreciate the extra hands on deck, and possible amulet test submissions
[17:48] <flohack> lazyPower: ok, thanks! anything I could do about it? Like copy it locally, try and see if it works when modifying the series!?
[17:49] <lazyPower> flohack: there are 2 requirements you'll need to be aware of: it has to pass a full-blown charm review, and contain deployment tests (amulet flavored is preferable)
[17:49] <lazyPower> but the caveat here is series support in amulet is pending a merge last i checked. so it may blow up on you attempting to write tests
[17:49] <lazyPower> marcoceppi: any updates on that to speak of?
[17:50] <lazyPower> he's at lunch, so responses may be latent.
[17:50] <marcoceppi> lazyPower: what's the tl;dr?
[17:50] <flohack> lazyPower: Ok, I'm not familiar with writing/testing/auditing charms so far, I'm a professional dev though, so maybe with a few pointers, I'll be able to contribute!?
[17:50] <lazyPower> marcoceppi: trusty / series support in amulet.
[17:51] <lazyPower> flohack: i ran a charm school yesterday on it. let me fish up the link
[17:51] <marcoceppi> lazyPower: that will be fixed in 1.5, to be released early next week
[17:51] <lazyPower> flohack: https://www.youtube.com/watch?v=2Y1MiSPox5I#t=31
[17:51] <lazyPower> here's how to get acquainted with the review queue, and there is a docs page on writing amulet tests: https://juju.ubuntu.com/docs/tools-amulet.html
[17:52] <flohack> lazyPower: anything to read? I'm so much quicker when reading compared to watching a video ;-)
[17:52] <lazyPower> flohack: https://juju.ubuntu.com/docs/reference-reviewers.html
[17:53] <lazyPower> flohack: if you start doing reviews / audit work - any questions should go here or to the mailing list. We'll be more than happy to help support you in your efforts
=== CyberJacob|Away is now known as CyberJacob
[17:54] <flohack> lazyPower: Great, so for starters, how do I copy the precise charm for memcached to a local repository?
[17:54] <flohack> is there a git to clone?
[17:56] <hazmat> flohack, bzr branch lp:charms/precise/memcached
[17:56] <flohack> cheers mates! I'll take it from there and get back with questions!
[17:59] <lazyPower> actually use charm-get from charm-tools
[17:59] <lazyPower> its a wrapper, but keeps things consistent
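The two suggested ways of grabbing the charm, side by side (the charm-get argument form may vary by charm-tools version):

    bzr branch lp:charms/precise/memcached   # plain bzr branch from Launchpad
    charm get memcached                      # charm-tools wrapper; keeps repository layout consistent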
=== vladk|offline is now known as vladk
=== roadmr_afk is now known as roadmr
[18:21] <kiko> hey there
[18:21] <kiko> I'm trying to start up an lxc instance and I'm getting this error
[18:22] <kiko>     agent-state-info: '(error: container failed to start)'
[18:22] <kiko> how can I debug what is going wrong?
[18:22] <kiko> this is on 1.18.1.4
[18:22] <kiko> is there a limit to the number of containers I can run?
[18:24] <kiko> mm
[18:53] <lazyPower> kiko: nope, have you put in a default-series directive in your environments.yaml?
[18:54] <lazyPower> kiko: the only limit imposed by lxc is dependent on your physical hardware, if you go spinning up crazy containers with crazy processes, it'll cause other unpredictable behavior due to being out of resources. But juju / lxc impose no limit on the quantity of machines you can spin up.
[18:54] <kiko> lazyPower, I haven't put default-series in my environments
[18:54] <kiko> and I already have a bunch of working containers
[18:55] <kiko> it seems like the new way to run containers isn't working
[18:55] <kiko> hmm
[18:57] <kiko> okay
[18:57] <kiko> so it seems like the container actually DOES run
[18:57] <kiko> hmm
[19:02] <kiko> lazyPower, so here's what I am seeing
[19:02] <kiko> lazyPower, the container is running (i.e. lxc-console --name gets to it)
[19:02] <kiko> lazyPower, juju status says "container failed to start"
[19:03] <kiko> the rest looks all normal
[19:03] <lazyPower> any notice in the logs about a tools mismatch?
[19:03] <lazyPower> any possible IP Collisions?
[19:04] <kiko> lazyPower, well, which log should I look at? oddly, there is no log in /var/log/juju-juju-local for that machine
[19:04] <lazyPower> should be in $HOME/.juju/local/logs
[19:04] <kiko> ah, there is now
[19:04] <lazyPower> the tools mismatch log message scrolls in machine-0.log
[19:04] <lazyPower> and can be corrected with sync-tools
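A quick check along those lines, assuming the local provider's default log location:

    # look for a tools version mismatch from the bootstrap machine's agent
    grep -i mismatch ~/.juju/local/log/machine-0.log
    # refresh the tools for the local environment if one turns up
    juju sync-tools -e local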
[19:05] <kiko> lazyPower, the word "mismatch" does not appear in machine-0.log
[19:05] <kiko> lazyPower, is machine 0 special when using the local provider?
[19:05] <kiko> I notice it's set to localhost
[19:05] <lazyPower> It is. machine-0, your bootstrap node, is the parent machine warehousing the lxc containers
[19:06] <lazyPower> however we are running to the end of my knowledge of common problems with LXC - if it's not ip collision or tools/series.
[19:06] <kiko> that's very interesting! is it documented anywhere?
[19:06] <lazyPower> which aspect of the output? that the bootstrap node is localhost?
[19:07] <kiko> hmmm
[19:07] <kiko> I guess, yes, or that the bootstrap node's logs are interesting :)
=== natefinch is now known as natefinch-afk
[19:10] <kiko> lazyPower, aha, machine-0's log has a lot of interesting stuff
=== roadmr is now known as roadmr_afk
[19:10] <kiko> 2014-04-23 19:09:34 DEBUG juju.environs.simplestreams simplestreams.go:490 fetchData failed for "http://192.168.99.5:8040/tools/streams/v1/index.sjson": file "tools/streams/v1/index.sjson" not found
[19:11] <kiko> 2014-04-23 19:09:37 WARNING juju.worker.instanceupdater updater.go:231 cannot get instance info for instance "": no instances found
[19:22] <jose> is there any way to hardcode the type of machine juju deploys on EC2? it's very annoying finding out that even though you set the constraints it deploys a machine bigger than what you were expecting
[19:23] <lazyPower> kiko: thats.. not good. it cant find the simplestreams data
[19:24] <lazyPower> marcoceppi: simplestreams on juju-local is updated when you run sync-tools no?
[19:25] <marcoceppi> lazyPower: kiko local provider doesn't use simplestreams
[19:25] <marcoceppi> last I checked
[19:25] <lazyPower> wat - i thought /tools/streams - was in fact simplestreams
[19:25] <marcoceppi> that DEBUG is a red herring
[19:25] <lazyPower> maybe my terminology is wrong
[19:25] <lazyPower> bah
[19:25] <lazyPower> thanks for the clarification
[19:26] <marcoceppi> kiko: is 192.168.99.5 your machine?
[19:26] <kiko> marcoceppi, yes
[19:27] <kiko> marcoceppi, good to hear that simplestreams is a red herring
[19:27] <marcoceppi> kiko: I might be wrong though, local has changed a lot.
[19:27] <marcoceppi> What version is this?
[19:27] <kiko> marcoceppi, 1.18.1.4
[19:28] <marcoceppi> kiko: can you destroy then re-run with the --debug flag?
[19:29] <kiko> marcoceppi, no, this is in production
[19:29] <kiko> oh
[19:29] <kiko> you mean destroy the machine?
[19:29] <kiko> I have already
[19:29] <kiko> many times
[19:29] <marcoceppi> local provider in production?
[19:29] <kiko> marcoceppi, sure, I hear that's the way to do it :)
=== natefinch-afk is now known as natefinch
[19:33] <renier> 'Fix genghisapp charm directory name' https://github.com/juju/docs/pull/84
[19:45] <qhartman> so I have a charm that failed to deploy, and destroying it doesn't seem to work.
[19:46] <qhartman> How can I force that to happen (no --force it seems) and/or force the charm to be redeployed on the node to see if I've fixed the problem that caused it to fail?
=== timrc is now known as timrc-afk
=== timrc-afk is now known as timrc
[19:53] <qhartman> If I force destroy the machine, then I can destroy the services that are in a bad state.
[19:54] <jose> qhartman: you need to do 'juju resolved unitname/#' first
[19:55] <jose> as juju is event-based, it needs to be taken out of the error state before continuing with the next action
[19:55] <qhartman> jose, oooooh, that makes sense
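As a sketch, with mysql/0 standing in for the failed unit:

    # clear the unit's error state so juju can process the next event
    juju resolved mysql/0
    # then the destroy can proceed (or retry the failed hook with --retry instead)
    juju destroy-service mysql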
[19:57] <qhartman> cool. Is this sort of usage stuff documented anywhere? I've found lots of howto-style docs, but very little reference for juju.
[19:57] <jose> well, we have docs at juju.ubuntu.com/docs
[19:57] <jose> but I'm not sure if that specific point is documented, should be
[19:57] <qhartman> ok
[19:58] <qhartman> yeah, I've been poking around on there
[19:58] <jose> https://juju.ubuntu.com/docs/charms-destroy.html#life:-dying
[19:58] <jose> there it is :)
[19:58] <qhartman> and this sort of middle-ground stuff isn't covered well, or I'm just not seeing it. Lots of bootstrappy stuff, and lots of dev oriented stuff though
[19:58] <qhartman> awesome
[19:58] <qhartman> I must just need to learn how to find stuff here
[19:59] <lazyPower> qhartman: feedback on the list to your experience with the docs is gold as well :)
[19:59] <qhartman> lazyPower, I'll add that to the list
[19:59] <jose> if you have any questions you're welcome to just ask around here :)
[19:59] <qhartman> lazyPower, I posted a couple items a few minutes ago
[20:00] <qhartman> jose, yup, and gotten lots of help so far
[20:00] <jose> qhartman: btw, nova-volume has not been promulgated to trusty yet
[20:00] <qhartman> jose, so, how did you find that page? I don't see it in an index anywhere and a couple of intentionally naive but sensible searches didn't turn it up
[20:01] <jose> juju.ubuntu.com/docs, on the sidebar, the 'Destroying Services' page, and read until the end of it
[20:01] <qhartman> jose, yup, I knew I was breaking ground somewhat, was mostly hoping that my notes could help someone else along
[20:01] <qhartman> jose, aha, I see it now
[20:02] <jose> I think the search looks for articles on the blog (if there's any)
[20:02] <qhartman> yeah, it doesn't seem to search the docs at all
=== roadmr_afk is now known as roadmr
[20:33] <zdr> What exactly is juju-mongodb for? Is it to enable SSL? What else? And why on Trusty only?
[20:41] <lazyPower> zdr: juju-mongodb is the tweaked and tuned package for mongodb's inclusion into the juju ecosystem.
[20:43] <lazyPower> mongodb is the storage mechanism for your topology and other juju specific bits
=== vladk is now known as vladk|offline
[21:56] <sarnold> marcoceppi: I can't find that ppa in my "subscriptions" list thingy -- should I have access or should I not? :)
[22:13] <lazyPower> sarnold: shhh, we wont talk about that
[22:15] <sarnold_> marcoceppi: sorry, my irc machine died after I saw a highlight but before I could switch to this channel to see what was said.. I assume it was you who replied to me, anyway :)
=== sarnold_ is now known as sarnold
=== NoNameYet_xnox is now known as xnox
=== CyberJacob is now known as CyberJacob|Away
=== wedgwood is now known as Guest68564
