=== CyberJacob is now known as CyberJacob|Away | ||
hazmat | lazyPower, well i wasn't able to reproduce the issue utlemming was having | 00:37 |
hazmat | but i was using trusty.. i'll try again with precise | 00:37 |
lazyPower | hazmat: did the redirect daemon work ootb? | 00:37 |
hazmat | lazyPower, the underlying symptom was exception on connection | 00:38 |
hazmat | lazyPower, which didn't occur | 00:38 |
lazyPower | Hmmmm.. curious | 00:39 |
=== 21WAAD3MF is now known as wallyworld | ||
jose | negronjl: pushed fixes+additions to seafile, if you could review them | 01:43 |
jose | sorry for the branch name being a little bit... long | 01:43 |
negronjl | jose: no worries ... I'll review a bit later tonight | 01:44 |
jose | thanks :) | 01:44 |
Tug | how can I access juju's bootstrap env ? | 03:10 |
Tug | i'd like to hack into the db and remove an entry :) | 03:10 |
=== timrc is now known as timrc-afk | ||
=== wallyworld_ is now known as wallyworld | ||
=== vladk|offline is now known as vladk | ||
=== CyberJacob|Away is now known as CyberJacob | ||
=== CyberJacob is now known as CyberJacob|Away | ||
=== vladk is now known as vladk|offline | ||
=== axw is now known as axw-away | ||
=== vladk|offline is now known as vladk | ||
ghartmann | do we already have any clue why add-machine doesn't work for local providers anymore ? | 11:45 |
=== timrc-afk is now known as timrc | ||
james_w | hi | 13:20 |
james_w | I'm unable to deploy any charms to a local environment on my trusty box | 13:21 |
james_w | http://paste.ubuntu.com/7310317/ is all-machines.log | 13:21 |
james_w | anyone have any ideas of how to debug further? | 13:21 |
=== BradCrittenden is now known as bac | ||
cory_fu | james_w: I believe that's the error I ran into a few days ago. Try adding "default_series: precise" to your local entry in environments.yaml | 13:29 |
cory_fu | Sorry, default-series: precise | 13:29 |
james_w | cory_fu: no change | 13:36 |
cory_fu | Hrm. mbruzek, tvansteenburgh: Do you recall what ended up being the fix for the environ-provisioner error on LXC? | 13:38 |
mbruzek | Yes | 13:38 |
mbruzek | setting default series and cleaning up the local env. | 13:38 |
james_w | "cleaning up"? | 13:39 |
cory_fu | Oh, let me pastebin the cleanup script | 13:39 |
mbruzek | Just a second | 13:39 |
cory_fu | http://pastebin.ubuntu.com/7314725/ | 13:39 |
cory_fu | You may need to change line 21 to: sudo rm -rf ~/.juju/local | 13:40 |
mbruzek | http://paste.ubuntu.com/7314726/ | 13:40 |
bloodearnest | mbruzek: gunicorn readme fix: | 13:40 |
bloodearnest | https://code.launchpad.net/~bloodearnest/charms/precise/gunicorn/fix-readme/+merge/216836 | 13:40 |
cory_fu | james_w: Also, do you have encrypted home dirs enabled? If so, you may need to set the root-dir in the local env to outside of the home dir (e.g., /var/juju, as in my cleanup script) | 13:41 |
mbruzek | I saw that bloodearnest thank you. | 13:41 |
james_w | I don't | 13:41 |
cory_fu | Ok, then mbruzek's version is what you want | 13:41 |
cory_fu | james_w: NB: You should run the cleanup script *after* doing juju destroy-environment --force -y local | 13:43 |
cory_fu | And you will get several "file not found" responses from the script, which is fine | 13:43 |
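A minimal sketch of the environments.yaml entry being suggested here, combining cory_fu's two points (default-series, and a root-dir outside an encrypted home). Printed with cat purely for illustration; the rest of your file will differ, so merge it by hand:

```bash
# Hypothetical local-provider stanza for ~/.juju/environments.yaml.
# root-dir is only needed if home directories are encrypted.
cat <<'EOF'
default: local
environments:
  local:
    type: local
    default-series: precise
    root-dir: /var/juju    # keep state outside $HOME if it is encrypted
EOF
```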
james_w | ok, machine 0 is reporting: agent-state: down | 13:47 |
james_w | now I did deploy ubuntu and it has agent-state: started | 13:48 |
james_w | and I still have the environ-provisioner message | 13:50 |
james_w | cory_fu: any other ideas? | 13:53 |
cory_fu | Hrm. Other than trying juju destroy-environment --force -y local ; clean-lxc ; juju bootstrap a couple more times, not really. :-/ | 13:55 |
cory_fu | What versions of juju and juju-local do you have? | 13:56 |
james_w | hmm | 13:57 |
james_w | they weren't installed | 13:58 |
james_w | trying again after installing them | 13:58 |
james_w | 1.18.1-0ubuntu1 | 13:59 |
cory_fu | Ok, that's the current version | 13:59 |
james_w | still looks to be the same | 13:59 |
james_w | https://bugs.launchpad.net/juju-core/+bug/1248800 mentions the error and says "restarting the provisioner fixed it" | 14:01 |
james_w | how would I try that? | 14:01 |
_mup_ | Bug #1248800: worker/provisioner missed signal to start new machine <deploy> <hs-arm64> <juju-core:Triaged> <https://launchpad.net/bugs/1248800> | 14:01 |
cory_fu | Hrm. I'm really not sure | 14:03 |
james_w | I can't get my work done without a working environment | 14:04 |
james_w | I guess I could try deploying to ec2 | 14:04 |
cory_fu | I think the provisioner should be restarted when you destroy-env and bootstrap | 14:05 |
=== ming is now known as Guest7729 | ||
mbruzek | james_w, Are you still having problems with juju and local? | 14:27 |
james_w | mbruzek: yep | 14:27 |
mbruzek | Can you pastebin the error you are seeing or your log file? | 14:28 |
james_w | mbruzek: http://paste.ubuntu.com/7315103/ is the all-machines.log | 14:30 |
mbruzek | OK james_w try this please | 14:33 |
mbruzek | juju destroy-environment -e local -y --force | 14:33 |
mbruzek | (run clean script) | 14:33 |
mbruzek | juju sync-tools | 14:33 |
mbruzek | juju bootstrap -e local | 14:34 |
mbruzek | (with the --upload-tools flag) | 14:34 |
mbruzek | james_w, I am assuming your ~/.juju/environments.yaml file already has the default-series: set to something valid. | 14:35 |
james_w | mbruzek: precise | 14:36 |
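mbruzek's reset sequence, collected as one hedged sketch. The cleanup-script name is a placeholder for whichever of the pastebinned scripts above you saved:

```bash
# Tear down and rebuild the local environment as described above.
juju destroy-environment -e local -y --force
./clean-juju-local.sh                   # placeholder for the cleanup script
juju sync-tools -e local
juju bootstrap -e local --upload-tools
```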
mbruzek | james_w, Any progress? | 14:44 |
james_w | mbruzek: still looks the same | 14:45 |
mbruzek | james_w, What is this bit about you not having juju-local installed? Was this working at some point, or is this your first attempt at getting juju local running? | 14:48 |
james_w | mbruzek: first attempt with juju-core | 14:48 |
mbruzek | james_w, Can you describe what is not working for you? The log on pastebin looks mostly OK. | 14:51 |
james_w | mbruzek: no services start | 14:51 |
mbruzek | Are they all stuck in pending? | 14:51 |
james_w | http://pastebin.ubuntu.com/7315210/ | 14:52 |
lazyPower | james_w: is your LAN using the 10.0.3.0 segment? | 14:52 |
qhartman | I'm working on getting a MAAS / Juju deployment setup to manage an Openstack cluster, and I've gotten to the point that I have Juju basically going, but any host I add gets stuck in "pending". | 14:53 |
qhartman | oh hey, looks like james_w is facing something similar maybe... | 14:53 |
james_w | lazyPower: 10.0.1 | 14:53 |
lazyPower | james_w: ok, i ask because ip collision will prevent the lxc containers from booting | 14:53 |
james_w | qhartman: mine is with lxc | 14:53 |
qhartman | and any nodes I try to destroy get stuck in "dying". | 14:53 |
qhartman | james_w, ah | 14:53 |
lazyPower | if you're not using 10.0.3.0 you'll be fine. | 14:53 |
lazyPower | for that issue anyway | 14:54 |
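A quick way to check for the address collision lazyPower describes, assuming the default Ubuntu LXC setup:

```bash
# lxcbr0 defaults to 10.0.3.1/24; if your LAN also uses 10.0.3.0/24 the
# containers will not come up. The range can be changed in /etc/default/lxc-net.
ip addr show lxcbr0
grep -E 'LXC_(NETWORK|ADDR)' /etc/default/lxc-net
```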
lazyPower | qhartman: logs? | 14:54 |
mbruzek | qhartman do you have a pastebin of the logs? | 14:54 |
qhartman | lazyPower, Love to. Which ones? | 14:54 |
lazyPower | all-machines / machine-0 | 14:54 |
qhartman | rgr | 14:54 |
mbruzek | ~/.juju/local/log/all-machines.log | 14:54 |
lazyPower | mbruzek: thats fine for lxc, but he's working with maas. | 14:55 |
lazyPower | s/he's/qhartman | 14:55 |
mbruzek | Thanks lazyPower | 14:55 |
lazyPower | np bruddah | 14:55 |
qhartman | There's machine-0.log: 2014-04-22 23:00:34 INFO juju.cmd supercommand.go:297 running juju-1.18.1-trusty-amd64 [gc] | 14:58 |
qhartman | 2014-04-22 23:00:34 INFO juju.cmd.jujud machine.go:127 machine agent machine-0 start (1.18.1-trusty-amd64 [gc]) | 14:58 |
qhartman | 2014-04-22 23:00:34 DEBUG juju.agent agent.go:384 read agent config, format "1.18" | 14:58 |
qhartman | 2014-04-22 23:00:34 INFO juju.cmd.jujud machine.go:155 Starting StateWorker for machine-0 | 14:58 |
qhartman | 2014-04-22 23:00:34 INFO juju runner.go:262 worker: start "state" | 14:58 |
qhartman | 2014-04-22 23:00:34 INFO juju.state open.go:81 opening state; mongo addresses: ["localhost:37017"]; entity "machine-0" | 14:58 |
qhartman | 2014-04-22 23:00:34 INFO juju runner.go:262 worker: start "api" | 14:58 |
lazyPower | qhartman: pastebin plz | 14:58 |
qhartman | 2014-04-22 23:00:34 INFO juju apiclient.go:114 state/api: dialing "wss://localhost:17070/" | 14:58 |
qhartman | 2014-04-22 23:00:34 INFO juju runner.go:262 worker: start "termination" | 14:58 |
qhartman | 2014-04-22 23:00:34 ERROR juju apiclient.go:119 state/api: websocket.Dial wss://localhost:17070/: dial tcp 127.0.0.1:17070: connection refused | 14:58 |
qhartman | 2014-04-22 23:00:34 ERROR juju runner.go:220 worker: exited "api": websocket.Dial wss://localhost:17070/: dial tcp 127.0.0.1:17070: connection refused | 14:58 |
qhartman | 2014-04-22 23:00:34 INFO juju runner.go:254 worker: restarting "api" in 3s | 14:58 |
qhartman | 2014-04-22 23:00:34 INFO juju.state open.go:119 connection established | 14:58 |
qhartman | 2014-04-22 23:00:34 DEBUG juju.utils gomaxprocs.go:24 setting GOMAXPROCS to 16 | 14:58 |
qhartman | 2014-04-22 23:00:34 INFO juju runner.go:262 worker: start "instancepoller" | 14:58 |
qhartman | 2014-04-22 23:00:34 INFO juju runner.go:262 worker: start "apiserver" | 14:58 |
qhartman | 2014-04-22 23:00:34 INFO juju runner.go:262 worker: start "cleaner" | 14:58 |
qhartman | 2014-04-22 23:00:34 INFO juju runner.go:262 worker: start "resumer" | 14:58 |
mbruzek | qhartman please use pastebin | 14:58 |
qhartman | 2014-04-22 23:00:34 INFO juju.state.apiserver apiserver.go:43 listening on "[::]:17070" | 14:58 |
qhartman | 2014-04-22 23:00:34 INFO juju runner.go:262 worker: start "minunitsworker" | 14:58 |
qhartman | 2014-04-22 23:00:37 INFO juju runner.go:262 worker: start "api" | 14:58 |
qhartman | 2014-04-22 23:00:37 INFO juju apiclient.go:114 state/api: dialing "wss://localhost:17070/" | 14:58 |
qhartman | 2014-04-22 23:00:37 INFO juju.state.apiserver apiserver.go:131 [1] API connection from 127.0.0.1:56002 | 14:59 |
qhartman | 2014-04-22 23:0 | 14:59 |
qhartman | oh balls, sorry | 14:59 |
qhartman | yeah | 14:59 |
qhartman | there: http://pastebin.com/44wjjRvA | 14:59 |
qhartman | thought I had copied the URL when I hadn't | 14:59 |
lazyPower | qhartman: on the machine that's in 'pending' is this machine registered in the maas region controller? | 15:00 |
qhartman | yup | 15:00 |
qhartman | and when I juju-created it it correctly started up and did the fastpath install | 15:00 |
lazyPower | ok | 15:00 |
qhartman | and then... nothing | 15:00 |
lazyPower | i'm curious because the logs state mysql has no node associated | 15:01 |
lazyPower | line 57 | 15:01 |
qhartman | right, I've been futzing around with trying to add/remove services and machines since then | 15:01 |
qhartman | and a lot of that is weirdly not reflected in that log | 15:01 |
qhartman | probably a good place to start would be how to clean up this "dying" machine-1 so I can start again from a clean-ish spot | 15:02 |
lazyPower | juju remove-unit <unit #> --force | 15:02 |
qhartman | aha | 15:02 |
lazyPower | er | 15:02 |
lazyPower | where am i this morning | 15:02 |
qhartman | heh | 15:02 |
* lazyPower needs coffee | 15:02 | |
qhartman | so say we all | 15:02 |
lazyPower | juju destroy-machine <machine #> --force | 15:02 |
qhartman | right | 15:02 |
lazyPower | remove-unit :P hah | 15:02 |
lazyPower | i'm clearly working on something else with that command | 15:03 |
* lazyPower whistles innocently | 15:03 | |
qhartman | heh | 15:03 |
qhartman | alright, now it says it's dead with a pending agent-state | 15:04 |
qhartman | aaand, gone | 15:04 |
qhartman | ok | 15:04 |
lazyPower | boom | 15:06 |
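The force-removal lazyPower lands on, recapped with a hypothetical machine number:

```bash
# Remove a machine stuck in "dying"; machine 1 here is hypothetical.
juju destroy-machine 1 --force
juju status        # the machine should report dead and then disappear
```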
qhartman | alright "juju deploy mysql", it seems to have grabbed the same machine it used for machine-1 before, but now it's calling it machine-3 | 15:08 |
qhartman | everything is "pending.... | 15:08 |
lazyPower | right juju will assign its own alias to the machine | 15:09 |
lazyPower | it increments +1 | 15:09 |
lazyPower | for each | 15:09 |
qhartman | right, makes sense | 15:09 |
qhartman | is there any activity I can look for on the machine? | 15:10 |
qhartman | the machine-0 log hasn't changed much | 15:10 |
qhartman | here are the new lines: http://pastebin.com/wFyGGTYF | 15:11 |
lazyPower | did the node ever get bootstrapped into juju? when i boot my kvm maas units, it takes ~ 2 minutes for them to come online and register with the juju bootstrap node | 15:12 |
lazyPower | also, can your maas units reach the bootstrap node? | 15:12 |
qhartman | they should be able to. Do they try to reach it by IP or name? | 15:12 |
lazyPower | most of my help will be anecdotal qhartman, i've got a physical hardware machine as my region cluster, and all my maas nodes are KVM | 15:12 |
lazyPower | it tries by name first, then by ip | 15:12 |
qhartman | ok | 15:12 |
qhartman | yeah, I'm trying to set things up so everything is multi-homed which seems to be confusing things somewhat. | 15:13 |
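Two hedged reachability checks, run from the pending node, for lazyPower's "by name first, then by ip" point. The bootstrap node's hostname is an assumption; 17070 is the API port seen in the logs above:

```bash
# Run on the pending MAAS node; juju-bootstrap.maas is a hypothetical name.
getent hosts juju-bootstrap.maas     # can the node resolve the bootstrap node?
nc -vz juju-bootstrap.maas 17070     # can it reach the juju API port?
```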
qhartman | so, "juju bootstrap" from my MAAS controller spun up and bootstrapped the juju node machine-0 | 15:14 |
qhartman | but I didn't do any "bootstrap" on any other machine. | 15:14 |
lazyPower | you dont need to | 15:14 |
lazyPower | and your juju bootstrap controller came online right? | 15:14 |
lazyPower | its up, running, and communicating with you - i guess thats the case since you could issue a juju deploy mysql | 15:15 |
lazyPower | hummm | 15:15 |
qhartman | I guess so, here's my juju status: http://pastebin.com/D25ANbyw | 15:15 |
qhartman | yeah, everything for machine-0 seems right as far as I can tell | 15:15 |
lazyPower | it seems like the agent isn't doing what it needs to. | 15:15 |
qhartman | right | 15:15 |
qhartman | how does that get installed? | 15:15 |
qhartman | does machine-0 try to ssh as some user over to it or something? | 15:16 |
lazyPower | ^ | 15:16 |
lazyPower | during cloud init it pushes the proper ssh keys to the node | 15:16 |
lazyPower | then the bootstrap node ssh's into it and kicks off the agent installation | 15:16 |
qhartman | hm, ok | 15:16 |
lazyPower | i'm thinking this may be wrt the tools | 15:16 |
lazyPower | try doing a juju sync-tools and retry the provisioning | 15:17 |
qhartman | destroy the service and the machine, and remove and re-add them to maas? All the way to the beginning? | 15:17 |
qhartman | I see the ubuntu user on machine-3 has a bunch of juju keys in its authorized_keys | 15:18 |
qhartman | so that seems to have worked.... | 15:18 |
qhartman | I wonder if the keys got out of sync somewhere... | 15:18 |
lazyPower | its doubtful but possible. | 15:19 |
qhartman | ah, I think I figured it out | 15:19 |
qhartman | the routing is messed up on machine-3, its default path is super wrong | 15:20 |
qhartman | so it can't download the tools from canonical | 15:20 |
qhartman | ok | 15:20 |
qhartman | whee | 15:20 |
qhartman | now I have somewhere to go | 15:20 |
qhartman | Thanks for the help | 15:21 |
qhartman | I'll be back if I hit another wall. | 15:21 |
qhartman | :D | 15:21 |
lazyPower | np qhartman, glad that helped | 15:21 |
* qhartman enters lurk mode | 15:21 | |
mhall119 | marcoceppi: can I deploy both precise and trusty units in Canonistack? | 15:31 |
mhall119 | while I wait for a trusty-enabled postgresql charm | 15:31 |
=== BradCrittenden is now known as bac | ||
james_w | mbruzek: found the problem. I had to disable ufw on my host | 16:48 |
james_w | mbruzek: now the units start | 16:48 |
james_w | but the unit agent fails with | 16:48 |
mbruzek | james_w, that is GREAT | 16:48 |
james_w | 2014-04-23 16:47:19 ERROR juju runner.go:220 worker: exited "uniter": ModeInstalling cs:precise/ubuntu-4: git init failed: exec: "git": executable file not found in $PATH | 16:48 |
james_w | is that part of the 'tools' | 16:49 |
mbruzek | Thanks for sharing that | 16:49 |
james_w | or a problem with the charm? | 16:49 |
mbruzek | Looks like git is not installed in the charm | 16:49 |
james_w | there's also 2014-04-23 16:49:35 WARNING juju.worker.uniter.charm git_deployer.go:200 no current staging repo | 16:50 |
=== vladk is now known as vladk|offline | ||
mbruzek | james_w, Did you get the 1.18 to work or did you update to the latest version? | 16:56 |
james_w | mbruzek: I upgraded | 16:56 |
james_w | mbruzek: not to say 1.18 wouldn't have worked with the disabled firewall | 16:56 |
mbruzek | Based on your statement I suspect that the ufw could have been disabled with 1.18 | 16:56 |
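The fix james_w found, plus an assumed narrower alternative scoped to the LXC bridge:

```bash
# ufw on the host was filtering container traffic. Disabling it is the blunt fix;
# the interface-scoped rules are an untested, narrower alternative.
sudo ufw disable
# or, assuming the default lxcbr0 bridge:
sudo ufw allow in on lxcbr0
sudo ufw allow out on lxcbr0
```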
james_w | the git thing is because the package install during cloud-init failed | 17:04 |
james_w | apparently because there's something wrong with dns in the container | 17:04 |
qhartman | lazyPower, I ended up needing to nuke and pave the host that was formerly machine-3, but adding the ip-forwarding on the MAAS controller so the route works ended up allowing "juju deploy" to work | 17:06 |
qhartman | I now have nodes joining the juju and deploying services | 17:06 |
qhartman | \o/ | 17:06 |
lazyPower | qhartman: awesome! | 17:12 |
lazyPower | glad you got it sorted :) | 17:12 |
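A sketch of the routing fix qhartman describes, assuming the MAAS controller also NATs for the nodes and that eth0 is its uplink:

```bash
# Enable forwarding (and, assumed, NAT) on the MAAS controller so nodes behind
# it can reach external archives and the juju tools. eth0 is an assumption.
sudo sysctl -w net.ipv4.ip_forward=1
echo 'net.ipv4.ip_forward=1' | sudo tee -a /etc/sysctl.conf
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```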
james_w | https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1205086 is the dns problem | 17:21 |
_mup_ | Bug #1205086: lxc-net dnsmasq --strict-order breaks dns for lxc non-recursive nameserver <libvirt (Ubuntu):New> <lxc (Ubuntu):New> <https://launchpad.net/bugs/1205086> | 17:21 |
=== roadmr is now known as roadmr_afk | ||
qhartman | calamity! Trusty doesn't have a nova-volume charm! | 17:40 |
qhartman | is that by design? | 17:40 |
lazyPower | qhartman: there's an audit going on for charms to have series promotion | 17:42 |
qhartman | which means this doc doesn't apply correctly: https://help.ubuntu.com/community/UbuntuCloudInfrastructure . Is there a version that has been updated for Trusty? I have not found one. | 17:42 |
lazyPower | right now, your best bet is to target precise - if you need a trusty charm, you can be one of the brave beta users and just specify trusty, deploy, and pray it works correctly | 17:43 |
lazyPower | most of the charms will be ok, but there are some discrepancies between precise => trusty deployments. | 17:43 |
qhartman | right | 17:43 |
qhartman | so, I'm on trusty, so to deploy the precise charm I use "juju deploy cs:precise/nova-volume"? | 17:44 |
lazyPower | so in order for you to do that, you need to create a local charm repository, and charm-get the charm you want to deploy on trusty into the trusty series directory | 17:44 |
qhartman | ah, ok | 17:44 |
lazyPower | then juju deploy local:trusty/nova-volume --repository=../../ (from within the nova charm dir) | 17:44 |
lazyPower | but be warned, if it blows up | 17:44 |
qhartman | yeah | 17:44 |
lazyPower | as of right now, you're accepting a backwoods warranty. if it blows up in half, you own both halves. | 17:44 |
qhartman | I get to keep all the pieces | 17:44 |
lazyPower | :P | 17:45 |
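A sketch of the local-repository workflow lazyPower outlines. The directory layout and the charm-tools invocation are assumptions; the deploy line follows the chat (with --repository pointed at the repository root instead of a relative path):

```bash
# Build a local repository with a trusty series directory and deploy from it.
mkdir -p ~/charms/trusty && cd ~/charms/trusty
charm get nova-volume                     # charm-tools wrapper for fetching the charm
juju deploy local:trusty/nova-volume --repository=$HOME/charms
```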
qhartman | well, this is testing deployment anyway | 17:45 |
lazyPower | your feedback will be gold | 17:45 |
lazyPower | if you run into any caveats make sure you ping the juju list with them | 17:45 |
qhartman | once I get a feel for things I'm going to kill it and re-deploy anyway | 17:45 |
qhartman | will do | 17:45 |
lazyPower | thanks qhartman | 17:45 |
qhartman | sure thing | 17:45 |
qhartman | What's the state of the ceph charms on Trusty? Big picture I'm planning on using that for storage on this deployment | 17:46 |
lazyPower | not sure. check the charm store | 17:47 |
qhartman | k | 17:47 |
lazyPower | i think most if not all of the openstack charms have trusty releases | 17:47 |
lazyPower | i know they were sprinting to get them pushed day of trusty release. | 17:47 |
flohack | Hey, am I doing something wrong or is there simply no charm for memcached / wordpress on trusty? | 17:47 |
lazyPower | but i'm not sure of anything beyond that, i've been busy with other areas of focus | 17:47 |
qhartman | flohack, no trusty charm that I see | 17:48 |
lazyPower | flohack: there is not. The trusty charms are part of an audit, and slow going. | 17:48 |
qhartman | lazyPower, sure. | 17:48 |
lazyPower | if you want to be a +1 reviewer to charms in the trusty series, we'd appreciate the extra hands on deck, and possible amulet test submissions | 17:48 |
flohack | lazyPower: ok, thanks! anything I could do about it? Like copy it locally, try and see if it works when modifying the series!? | 17:48 |
lazyPower | flohack: there are 2 requirements you'll need to be aware of. it has to pass a full blown charm review, and contain deployment tests (amulet flavored is preferable) | 17:49 |
lazyPower | but the caveat here is series support in amulet is pending a merge last i checked. so it may blow up on you attempting to write tests | 17:49 |
lazyPower | marcoceppi: any updates on that to speak of? | 17:49 |
lazyPower | he's at lunch, so responses may be latent. | 17:50 |
marcoceppi | lazyPower: what's the tl;dr? | 17:50 |
flohack | lazyPower: Ok, I'm not familiar with writing/testing/auditing charms so far, I'm a professional dev though, so maybe with a few pointers, I'll be able to contribute!? | 17:50 |
lazyPower | marcoceppi: trusty / series support in amulet. | 17:50 |
lazyPower | flohack: i ran a charm school yesterday on it. let me fish up the link | 17:51 |
marcoceppi | lazyPower: that will be fixed in 1.5, to be released early next week | 17:51 |
lazyPower | flohack: https://www.youtube.com/watch?v=2Y1MiSPox5I#t=31 | 17:51 |
lazyPower | here's how to get acquainted with the review queue, and there is a docs page on writing amulet tests https://juju.ubuntu.com/docs/tools-amulet.html | 17:51 |
flohack | lazyPower: anything to read? I'm so much quicker when reading compared to watching a video ;-) | 17:52 |
lazyPower | flohack: https://juju.ubuntu.com/docs/reference-reviewers.html | 17:52 |
lazyPower | flohack: if you start doing reviews / audit work - any questions should go here or to the mailing list. We'll be more than happy to help support you in your efforts | 17:53 |
=== CyberJacob|Away is now known as CyberJacob | ||
flohack | lazyPower: Great, so for starters, how do I copy the precise charm for memcached to a local repository? | 17:54 |
flohack | is there a git to clone? | 17:54 |
hazmat | flohack, bzr branch lp:charms/precise/memcached | 17:56 |
flohack | cheers mates! I'll take it from there and get back with questions! | 17:56 |
lazyPower | actually use charm-get from charm-tools | 17:59 |
lazyPower | its a wrapper, but keeps things consistent | 17:59 |
=== vladk|offline is now known as vladk | ||
=== roadmr_afk is now known as roadmr | ||
kiko | hey there | 18:21 |
kiko | I'm trying to start up an lxc instance and I'm getting this error | 18:21 |
kiko | agent-state-info: '(error: container failed to start)' | 18:22 |
kiko | how can I debug what is going wrong? | 18:22 |
kiko | this is on 1.18.1.4 | 18:22 |
kiko | is there a limit to the number of containers I can run? | 18:22 |
kiko | mm | 18:24 |
lazyPower | kiko: nope, have you put in a default-series directive in your environments.yaml? | 18:53 |
lazyPower | kiko: the only limit imposed by lxc is dependent on your physical hardware, if you go spinning up crazy containers with crazy processes, it'll cause other unpredictable behavior due to being out of resources. But juju / lxc impose no limit to the quantity of machines you can spin up. | 18:54 |
kiko | lazyPower, I haven't put default-series in my environments | 18:54 |
kiko | and I already have a bunch of working containers | 18:54 |
kiko | it seems like the new way to run containers isn't working | 18:55 |
kiko | hmm | 18:55 |
kiko | okay | 18:57 |
kiko | so it seems like the container actually DOES run | 18:57 |
kiko | hmm | 18:57 |
kiko | lazyPower, so here's what I am seeing | 19:02 |
kiko | lazyPower, the container is running (i.e. lxc-console --name gets to it) | 19:02 |
kiko | lazyPower, juju status says "container failed to start") | 19:02 |
kiko | the rest looks all normal | 19:03 |
lazyPower | any notice in the logs about a tools mismatch? | 19:03 |
lazyPower | any possible IP Collisions? | 19:03 |
kiko | lazyPower, well, which log should I look at? oddly, there is no log in /var/log/juju-juju-local for that machine | 19:04 |
lazyPower | should be in $HOME/.juju/local/logs | 19:04 |
kiko | ah, there is now | 19:04 |
lazyPower | the tools mismatch log message scrolls in machine-0.log | 19:04 |
lazyPower | and can be corrected with sync-tools | 19:04 |
kiko | lazyPower, the word "mismatch" does not appear in machine-0.log | 19:05 |
kiko | lazyPower, is machine 0 special when using the local provider? | 19:05 |
kiko | I notice it's set to localhost | 19:05 |
lazyPower | It is. machine-0, your bootstrap node, is the parent machine warehousing the lxc containers | 19:05 |
lazyPower | however we are running to the end of my knowledge of common problems with LXC - if its not ip collision or tools/series. | 19:06 |
kiko | that's very interesting! is it documented anywhere? | 19:06 |
lazyPower | which aspect of the output? that the bootstrap node is localhost? | 19:06 |
kiko | hmmm | 19:07 |
kiko | I guess, yes, or that the bootstrap node's logs are interesting :) | 19:07 |
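Where to look, per the exchange above. Paths assume defaults, and the chat quotes both .../log and .../logs, so check which exists on your install:

```bash
# Local-provider logs live on the host (machine-0), not in the containers.
ls ~/.juju/local/log/ 2>/dev/null || ls ~/.juju/local/logs/
less ~/.juju/local/log/machine-0.log   # bootstrap (host) machine agent log
juju sync-tools -e local               # if machine-0.log reports a tools mismatch
```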
=== natefinch is now known as natefinch-afk | ||
kiko | lazyPower, aha, machine-0's log has a lot of interesting stuff | 19:10 |
=== roadmr is now known as roadmr_afk | ||
kiko | 2014-04-23 19:09:34 DEBUG juju.environs.simplestreams simplestreams.go:490 fetchData failed for "http://192.168.99.5:8040/tools/streams/v1/index.sjson": file "tools/streams/v1/index.sjson" not found | 19:10 |
kiko | 2014-04-23 19:09:37 WARNING juju.worker.instanceupdater updater.go:231 cannot get instance info for instance "": no instances found | 19:11 |
jose | is there any way to hardcode the type of machine juju deploys on EC2? it's very annoying finding out that even though you set the constraints it deploys a machine bigger than what you were expecting | 19:22 |
lazyPower | kiko: thats.. not good. it cant find the simplestreams data | 19:23 |
lazyPower | marcoceppi: simplestreams on juju-local is updated when you run sync-tools no? | 19:24 |
marcoceppi | lazyPower: kiko local provider doesn't use simplestreams | 19:25 |
marcoceppi | last I checked | 19:25 |
lazyPower | wat - i thought /tools/streams - was in fact simplestreams | 19:25 |
marcoceppi | that DEBUG is a red herring | 19:25 |
lazyPower | maybe my terminology is wrong | 19:25 |
lazyPower | bah | 19:25 |
lazyPower | thanks for the clarification | 19:25 |
marcoceppi | kiko: is 192.168.99.5 your machine? | 19:26 |
kiko | marcoceppi, yes | 19:26 |
kiko | marcoceppi, good to hear that simplestreams is a red herring | 19:27 |
marcoceppi | kiko: I might be wrong though, local has changed a lot. | 19:27 |
marcoceppi | What version is this? | 19:27 |
kiko | marcoceppi, 1.18.1.4 | 19:27 |
marcoceppi | kiko: can you destroy then re-run with the --debug flag? | 19:28 |
kiko | marcoceppi, no, this is in production | 19:29 |
kiko | oh | 19:29 |
kiko | you mean destroy the machine? | 19:29 |
kiko | I have already | 19:29 |
kiko | many times | 19:29 |
marcoceppi | local provider in production? | 19:29 |
kiko | marcoceppi, sure, I hear that's the way to do it :) | 19:29 |
=== natefinch-afk is now known as natefinch | ||
renier | 'Fix genghisapp charm directory name' https://github.com/juju/docs/pull/84 | 19:33 |
qhartman | so I have a charm that failed to deploy, and destroying it doesn't seem to work. | 19:45 |
qhartman | How can I force that to happen (no --force it seems) and/or force the charm to be redeployed on the node to see if I've fixed the problem that caused it to fail? | 19:46 |
=== timrc is now known as timrc-afk | ||
=== timrc-afk is now known as timrc | ||
qhartman | If I force destroy the machine, then I can destroy the services that are in a bad state. | 19:53 |
jose | qhartman: you need to do 'juju resolved unitname/#' first | 19:54 |
jose | as juju is event-based, it needs to be taken out of the error state before continuing with the next action | 19:55 |
qhartman | jose, oooooh, that makes sense | 19:55 |
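The sequence jose describes, with a hypothetical unit name:

```bash
# Clear the error state first; only then can the service finish dying.
juju resolved mysql/0              # mark the failed hook as resolved
# juju resolved --retry mysql/0    # or re-run the failed hook instead
juju destroy-service mysql
```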
qhartman | cool. Is this sort of usage stuff documented anywhere? I've found lots of howto-style docs, but very little reference for juju. | 19:57 |
jose | well, we have docs at juju.ubuntu.com/docs | 19:57 |
jose | but I'm not sure if that specific point is documented, should be | 19:57 |
qhartman | ok | 19:57 |
qhartman | yeah, I've been poking around on there | 19:58 |
jose | https://juju.ubuntu.com/docs/charms-destroy.html#life:-dying | 19:58 |
jose | there it is :) | 19:58 |
qhartman | and this sort of middle-ground stuff isn't covered well, or I'm just not seeing it. Lots of bootstrappy stuff, and lots of dev oriented stuff though | 19:58 |
qhartman | awesome | 19:58 |
qhartman | I must just need to learn how to find stuff here | 19:58 |
lazyPower | qhartman: feedback on the list to your experience with the docs is gold as well :) | 19:59 |
qhartman | lazyPower, I'll add that to the list | 19:59 |
jose | if you have any questions you're welcome to just ask around here :) | 19:59 |
qhartman | lazyPower, I posted a couple items a few minutes ago | 19:59 |
qhartman | jose, yup, and gotten lots of help so far | 20:00 |
jose | qhartman: btw, nova-volume has not been promulgated to trusty yet | 20:00 |
qhartman | jose, so, how did you find that page? I don't see it in an index anywhere and a couple of intentionally naive but sensible searches didn't turn it up | 20:00 |
jose | juju.ubuntu.com/docs, on the sidebar, the 'Destroying Services' page, and read until the end of it | 20:01 |
qhartman | jose, yup, I knew I was breaking ground somewhat, was mostly hoping that my notes could help someone else along, | 20:01 |
qhartman | jose, aha, I see it now | 20:01 |
jose | I think the search looks for articles on the blog (if there's any) | 20:02 |
qhartman | yeah, it doesn't seem to search the docs at all | 20:02 |
=== roadmr_afk is now known as roadmr | ||
zdr | What exactly is juju-mongodb for? Is it to enable SSL? What else? And why on Trusty only? | 20:33 |
lazyPower | zdr: juju-mongodb is the tweaked and tuned package for mongodb's inclusion into the juju ecosystem. | 20:41 |
lazyPower | mongodb is the storage mechanism for your topology and other juju specific bits | 20:43 |
=== vladk is now known as vladk|offline | ||
sarnold | marcoceppi: I can't find that ppa in my "subscriptions" list thingy -- should I have access or should I not? :) | 21:56 |
lazyPower | sarnold: shhh, we wont talk about that | 22:13 |
sarnold_ | marcoceppi: sorry, my irc machine died after I saw a highlight but before I could switch to this channel to see what was said.. I assume it was you who replied to me, anyway :) | 22:15 |
=== sarnold_ is now known as sarnold | ||
=== NoNameYet_xnox is now known as xnox | ||
=== CyberJacob is now known as CyberJacob|Away | ||
=== wedgwood is now known as Guest68564 |