[01:19] Scaled down Openstack Grizzly installation with Juju and Maas | http://askubuntu.com/q/346898
=== wallyworld___ is now known as wallyworld
=== CyberJacz is now known as CyberJacob
=== CyberJacob is now known as Guest81313
=== tasdomas_afk is now known as tasdomas
=== defunctzombie_zz is now known as defunctzombie
=== defunctzombie is now known as defunctzombie_zz
[08:11] Juju GUI public IP | http://askubuntu.com/q/346991
=== defunctzombie_zz is now known as defunctzombie
[08:32] dosaboy, around? lets discuss ceph/cinder ehere
[08:45] jamespage: howdi
[08:45] congratz btw
[08:46] so it is clear that a wider rework is in order
[08:46] dosaboy, ta
[08:46] for now, as you suggest, i'll add the rep count to the cinder config
[08:46] dosaboy, agreed; I want to rework the cinder integration anyway so lets do something tactical for the time being
[08:46] dosaboy, I made another comment on the MP for the replica count stuff
[08:47] dosaboy, for the rework I want to re-implement the ceph/cinder integration as a separate 'backend' charm
[08:47] as a single cinder instance can support multiple backends >= grizzly
[08:48] yup
[08:48] dosaboy, so the ceph integration becomes a subordinate charm for cinder
[08:48] dosaboy, alongside other backend sub charms
[08:48] makes sense
[08:48] existing compat would be retained still for existing deployments
[08:49] jamespage: i assume the problem we see here also applies to the glance charm
[08:49] dosaboy, yup
[08:49] * dosaboy checks
[08:49] i'll apply same interim patch there too
[08:50] dosaboy, +1
[08:50] if you want todo those I can ack them through
[08:50] (well this morning - not around PM today)
[08:50] but in tomorrow
[08:51] doing it now
=== defunctzombie is now known as defunctzombie_zz
=== jmb^ is now known as jmb
[09:28] does juju-core specifically bridge to eth0 when bootstrap with maas?
[09:38] jamespage: cinder and glance patches should be ready for review now
[09:49] I can't see what command I need to run to actually run charm tests https://juju.ubuntu.com/docs/authors-testing.html
[10:14] dosaboy, odd - I get an access denied when the cinder charm tries to create the pool
[10:17] damn i suspected as much, maybe that the 'ceph osd' command requires different perms to rados command
[10:19] jamespage: can you show me the permissions for the cindr client?
[10:19] dosaboy, maybe
[10:20] osd rwx
[10:20] mon r
[10:20] dosaboy, ^^
[10:23] jamepage: hmm that should be enough
[10:23] or maybe * is needed for admin
[10:23] i can test if you gimme a few mins
[10:31] dosaboy, just trying a fix now
[10:39] dosaboy, needs "mon rw"
[10:40] aha!
[10:41] jamespage: the user is created by the ceph charm right?
[10:42] dosaboy, yes
[10:42] ceph.py
[10:42] are you going to patch or do you want me to?
[10:42] dosaboy, trying to figure out how to upgrade permissions
[10:42] jamespage: it may be yucky
[10:42] 'capabilities'
[10:46] hazmat: in precise it does, shouldn't in the ppa
[11:09] dosaboy, ceph auth caps
[11:16] dosaboy, ceph auth caps client. mon 'allow rw' osd 'allow rwx'
[11:16] specifically
[12:16] i have a local juju setup, but trying to ssh into any of my machines i get a Permission denied (publickey,password)., however there is no difference between id_rsa.pub on my host user and the authorized_keys on the lxc machines
[12:32] Dotted^: so I had to add username ubuntu@ to get into things. I should setup an ssh config block to auto use that user
[12:32] Dotted^: try ssh'ing into one of the ips manuall ssh ubuntu@ip
[12:33] that did the trick, thanks
[12:33] a bit unfortunate that the "juju ssh" wont work
[12:35] Dotted^: yea, so make that work you'd have to edit .ssh/config and add a block for your lxc ip range http://paste.ubuntu.com/6123709/
=== thomi_ is now known as thomi
[12:38] well ssh now works without adding ubuntu@, juju ssh is still broken it seems
[12:38] Dotted^: hmm, :( not sure then. Maybe marcoceppi knows more when he's around.
[12:39] alright, well thanks - at least i can get into the machines now :)
=== bladernr` is now known as bladernr_
[13:12] hey evilnickveitch
[13:33] does someone have a moment to answer a couple of questions?
[13:34] jcastro, hey
[13:34] mattyw: no, get out.
[13:35] of course we have a moment!
[13:35] evilnickveitch: hey, I removed the warning on the debugs hook page
[13:35] jcastro, I was just grabbing my coat ;)
[13:35] the one that is like "this doesn't actually work yet."
[13:35] evilnickveitch: the other is, I'm listed as maintainer for lp:maas-website, can we move that over to a team or something?
[13:36] jcastro, oh, cool. that is being rewritten
[13:36] jcastro, this page - on charm testing: https://juju.ubuntu.com/docs/authors-testing.html it doesn't actually say how I can run the tests
[13:36] jcastro, also, in a charms config.yaml is it possible for options to depend on other options in the file. For exmple. set a BaseDir and then have other values use that?
[13:39] mattyw: 1 is a ceppi question, this page looks new to me.
[13:39] 2. is I don't think so? But I am not sure.
[13:39] mattyw: any other questions I can no know the answer to for ya? :)
[13:41] jcastro, are they any plans to allow you to deploy from github like juju deploy github.com/mattyw/myAwesomeCharmCollection myWickedCharm
[13:41] mattyw: so I've seen some that use multiple configs
[13:41] but I don't know about enforcing usage of one over the other other than just not working
[13:41] yes, there are plans to deploy from anywhere, just not github
[13:42] ideally we can pull from any internal or external source
[13:42] jcastro, just not or not just?
[13:42] not just, sorry!
[13:42] <-- case of the mondays
[13:43] juju deploy someinternalresource blah
[13:46] jcastro, one more question
[13:46] framework charms
[13:46] jcastro: why are there machine folders in /var/lib/lxc/ even after I destroy-machine them?
[13:47] I don't know I need to ask thumper about that
[13:47] well actually - maybe not framework charms, but it would be great to be able to send a charm a message to run an arbitrary hook
[13:47] mhall119: I noticed that this weekend for the first time, I hope it's not a bug
[13:47] jcastro: should I wait for my service units to come up before calling add-relation, or will is queue that until they are ready?
[13:48] mhall119: once the bootstrap is up the entire thing is async, you can go at whatever pace you like
[13:49] mattyw: from within a charm or just manually?
[13:50] mhall119: you pulled in the "juju-local" package too I assume?
[13:50] jcastro, externally
[13:51] jcastro, well actually let me re phrase
[13:51] mattyw: hmmm
[13:51] mhall119: are you on 1.13 or 1.14 of juju? "juju version" will tell you
[13:51] the rails charm: if I understand correct it will install rails then setup for a specific app?
[13:51] yeah
[13:52] 1.13.3-saucy-i386
[13:53] jcastro, so how do you do that specific stuff - is that done with the config?
[13:53] yeah
[13:53] ok cool
[13:53] the setup for the specific app you mean?
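The deploy-from-github question at [13:41] above has a working alternative at the time of this log: deploying from a local charm repository. A minimal sketch (the directory layout and charm names here are illustrative assumptions, not taken from the log):

```shell
# Hypothetical local repository layout: ~/charms/precise/myawesomecharm
export JUJU_REPOSITORY=~/charms
juju deploy local:precise/myawesomecharm mywickedcharm
```

The `--repository=~/charms` flag can be used instead of the environment variable.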
=== kentb is now known as kentb-afk
[13:53] that's right
[13:53] Right now it's just like, the repo URL and stuff though
[13:53] I don't think it's doing anything too fancy yet
[13:54] Dotted^: how are you trying to ssh? `juju ssh `?
[13:55] yes
[13:55] Dotted^: that does not work on local provider, please use `juju ssh ` instead, so juju ssh wordpress/0 for example
[13:55] Dotted^: this is a known issue
[13:55] ahh thanks, that works
[13:56] hey marcoceppi, both mhall119 and I saw an issue where the lxc stuff isn't being cleaned up with destroy-environment
[13:56] jcastro: that's a bug then
[13:56] jcastro: destroy-machine, I haven't done destroy-environment yet
[13:56] mhall119: destroy-machine doesn't remove the machine?
[13:57] it removed it from juju status
[13:57] * marcoceppi bootstraps a local
[13:57] but not from /var/lib/lxc/
[13:57] I had the same thing happen
[13:57] agent-state-info: 'rror: container "jorge-local-machine-1" is already created)'
[14:04] ok I'm going to file a bug since I can reproduce it
[14:05] jcastro: cool, I'll try to get more debug information
[14:06] https://bugs.launchpad.net/juju-core/+bug/1227145
[14:06] <_mup_> Bug #1227145: Juju isn't cleaning up destroyed LXC containers
[14:06] mhall119: can you confirm the bug please?
[14:06] jcastro: confirmed
[14:19] "Internal Server Error" Progress!
=== freeflying is now known as freeflying_away
[14:35] sure would be nice if it actually wrote the internal error message to the error log file :(
[14:45] mhall119: is that from juju or the application?
[14:48] marcoceppi: from gunicorn
[15:22] hi, i'm receiving this error after a juju bootstrap, when trying to deploy a service, or even with juju status: error: cannot log in to admin database: auth fails
[15:22] environment was working fine until this afternoon
=== kentb-afk is now known as kentb
[15:42] so...juju destroy-environment failed
[15:42] now I have a postgres instance in "down" agent-state that I can't get rid of
[15:47] Should I worry about updating juju when I already have a environment deployed?
[15:47] an*
[15:48] marcoceppi: ping
[15:48] kurt_: pong
[15:49] marcoceppi: are there any good guides available for setting up openstack once horizon is in place? Or any guides that are WIP?
[15:49] x-warrior: you can run juju upgrade-juju to upgarde the deployed version of juju. The client should be compatible with most deployed versions of juju since 1.12
[15:49] kurt_: you should be able to follow any instructions/guides on the openstack site
[15:49] mhall119: that's so weird
[15:50] ok. it would be nice if there were one published in conjunction with the settings for jamespage's openstack manifesto.
[15:50] that links up his configurations with IP addressing schema, etc
[15:51] kurt_: there are so many ways you can take the configuration of openstack after setting up the charms though, it's really about what you want to do at that point I imagein
[15:53] agreed, but a demo path with a working configuration would be cool. I am headed that way anyways. The whole network configuration taken to a working finished model IMHO is one of the biggest challenges with openstack.
[15:58] marcoceppi, what command do you call to run charm tests? https://juju.ubuntu.com/docs/authors-testing.html
[15:59] charm status hangout in just a few minutes!
[16:01] Charm Weekly hangout starting in a few
[16:01] hangout out URL at: https://plus.google.com/hangouts/_/edb0a66997812f8ade7263387d24623dc7094011?authuser=0&hl=en
[16:01] if you want to join us.
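The upgrade question at [15:47] above ("Should I worry about updating juju when I already have an environment deployed?") comes down to a short sequence, per marcoceppi's answer. A sketch (exact version strings will vary by release):

```shell
juju version        # check the client version first
juju upgrade-juju   # upgrade the agents in the running environment
juju status         # agents should report the new tools version once done
```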
[16:01] arosales_: I need the youtube URL
[16:01] marcoceppi: After destroying some other services and machines, destroy-environment finally worked
[16:01] I've manually deleted all the old lxc container data in /var/lib/lxc/
[16:02] starting fresh now
[16:02] mattyw: You need to install the juju-test plugin, it's in the juju-plugins project
[16:03] http://pad.ubuntu.com/7mf2jvKXNa
[16:03] for the notes
[16:03] http://ubuntuonair.com if you wanna follow along!
[16:08] marcoceppi, this one? https://launchpad.net/juju-plugins
[16:08] mattyw: yes, no ppa/installer for it yet. Just copy juju_test.py to juju-test somewhere in path for now
[16:25] omg omg omg! I got something working!
[16:26] jcastro: how can I get a local shell of a LXC deployed instance?
[16:35] mhall119: juju ssh api-website/0
=== arosales_ is now known as arosales
[16:43] so I finally got it all working, only to find out I need to put it all behind an Apache proxy :(
[16:46] marcoceppi: is there an apache subordinate that can automagically serve static files *and* act as a proxy for my gunicorn subordinate?
[16:47] mhall119: no, but someone had a similar question, I can't remember what I did to get it working with them
[16:47] mhall119: there's an apache2 charm
[16:49] mhall119: http://irclogs.ubuntu.com/2013/07/12/%23juju.html might or might not help
[16:49] thanks marcoceppi
[17:41] is there a way to get the juju debug-log on lxc containers?
[17:42] Dotted^: I believe I read that the easy way is to read the files through the host filesystem
[17:43] where are they located?
[17:44] Dotted^: try .juju/local/log/unit-*.log and /var/lib/lxc/
[17:44] nice thanks
[17:55] are there any issues with wordpress? combining it with mysql seems to work, wordpress and memcache and nfs is fine, but it explodes when trying to add relations to all 3
[17:55] Dotted^: the LXC containers all mount /var/log/juju/ on each machine to .juju/local/log - so just read them there as sarnold pointed out
[17:55] Dotted^: there's a known issue with memcached and wordpress not working
[17:55] Dotted^: NFS and MySQL work fine
[17:56] marcoceppi: aha :) thanks!
[17:56] https://bugs.launchpad.net/charms/+source/wordpress/+bug/1057212 https://bugs.launchpad.net/charms/+source/wordpress/+bug/1170034
[17:56] <_mup_> Bug #1057212: Memcached relation fails if wp install isn't complete
[17:56] <_mup_> Bug #1170034: integration with memcached broke
[17:57] so its only because wp isnt setup yet?
[17:59] Dotted^: no, memcached dones't work regardless of the state of wordpress
[17:59] Dotted^: it's a bug with the upstream plugin
[17:59] ah
[17:59] Dotted^: I've been meaning to fix it, just haven't found the time just yet
=== defunctzombie_zz is now known as defunctzombie
=== defunctzombie is now known as defunctzombie_zz
[18:55] jcastro: marcoceppi: can you guys sanity check https://code.launchpad.net/~mhall119/ubuntu-api-website/canonical-is-charm/ for me?
[18:57] hmm adding nfs to wordpress breaks wordpress, the log complains about "mount.nfs: access denied by server while mounting 10.0.3.201:/srv/data"
[18:58] mhall119: do README.md or README.rst
[18:58] 10.0.3.201:/srv/data/wordpressfolder even
[18:59] mhall119: in whichever format you prefer.
[19:01] jcastro: what's wrong with just README?
[19:02] it needs an extension
[19:02] hmm, did the boilerplate not give it an extension?
[19:02] jcastro: this is copied from another charm
[19:03] ok
[19:03] an internal one that IS uses
[19:03] this will also only be used by Canonical IS
[19:03] oh ok!
[19:03] then it doesn't matter
[19:13] what's a good way to explain 'admin-secret' in the environments.yaml file? I know you can pretty much use anything you want, but, do any of the supported environments (maas, ec2, etc) actuall make use of it for anything (other than logging into juju-gui)?
[19:49] kentb: it's only really used for API access, it's internal to juju
[19:50] marcoceppi: ok. thanks!
=== BradCrittenden is now known as bac
=== defunctzombie_zz is now known as defunctzombie
[21:14] kentb: how far have you gotten?
=== tasdomas is now known as tasdomas_afk
[21:15] I managed to get an instance running, but am now stuck on getting an IP allocated to the instance with "an error occurred" - trying to figure out the logging for that - where it is
[21:26] Hey All. I'm trying to set up a local juju server. The documentation is a little light on the network setup prior to installing juju, mongodb and bootstrapping. Anyone have experience with this or know of a good walkthrough. I've been googleing and many step-by-steps seem old and outmoded. Specifically i'm interested in the host network setup. I have three NICs, and am wondering what the best practices are for setting those up, and any VLANs I should be setting up on the switch.
[21:31] kurt_: Had a bunch of other important stuff jump in front of openstack today, but, when I had it all up and running the first time last week, I could allocate IP addresses to my instances and ping both the internal namespace address as well as the public-facing one.
[21:32] kentb: wow, I really need to see your configuration
[21:33] blairbo: for local server, there is no additional networking setup. If you want to access the juju machine outside of the host running the local provider you'll need to configure the LXC bridge, but that's outside of the juju documentation
[21:59] im doing some benchmarks and im getting about 600 requests per second for wordpress on nginx tuned for single, that seems a bit low to me or is it supposed to be so low ?
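Per marcoceppi's answer at [19:49] above, `admin-secret` is only used internally by juju for API access (plus the Juju GUI login). A typical environments.yaml stanza might look like the following config fragment (the environment name and secret value are illustrative assumptions, not from the log):

```yaml
environments:
  amazon:
    type: ec2
    # Any hard-to-guess string works; juju uses it internally for
    # API authentication, and the Juju GUI accepts it as the login password.
    admin-secret: 86cf84a2f725d7f2d1c06cbbccf0d1a1
```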
[22:00] nvm
=== kentb is now known as kentb-out
[22:16] marcoceppi
[22:17] marcoceppi: ok.. i must be having other issues then
[22:17] blairbo: could you describe your problem?
[22:19] Hello. I have bootstraps my AWS ENV and see the bootstrap instance in the AWS dashboard, but when run juju deploy juju-gui the new machine (#0) never spins up, just says "pending" forever. Where should i be looking for errors?
[22:23] Log files i should be looking at?
=== freeflying_away is now known as freeflying
[22:30] Hello, anyone awake? Its no limited to juju-gui, any charge i attempt to deploy crates a new machine and never gets past the pending state
[22:35] ZonkedZebra: what size instances are you using? I understand micros can take forever to come up..
[22:36] sarnold: I haven't specified, the bootstrap node came up on a m1.small instance, none of the other machines even show up as pending or initializing or anything at all in the AWS dashboard
[22:38] ZonkedZebra: they don't even show up in the dashboard? hrm..
[22:38] and /var/log/juju/all-machines.log is helpfully entirely empty
[22:39] ZonkedZebra: could you paste your juju status output?
[22:39] to paste.ubuntu.com
[22:39] http://paste.ubuntu.com/6125937/
[22:40] ZonkedZebra: you have a few other problems! Somehow you've got a bootstrap node (which is machine 0) that's pending but you can actually get a juju status which is illogical
[22:41] marcoceppi: after bootstrapping juju status reports 0 machines and 0 services
[22:41] i had assumed that the bootstrap node was excluded from the list
[22:41] ZonkedZebra: yeah, that makes no sense. There is always 1 machine, the bootstrap node, machine 0
[22:42] ZonkedZebra: Could you ssh on to the bootstrap node and see if you have any files in /etc/init/juju-* ?
[22:42] yep
[22:42] juju-db.conf appears to be the only juju related file in /etc/init
[22:43] ZonkedZebra: everything about this situation makes me believe something is very wrong. I'd recommend `juju destroy-environment` and then try bootstrapping again. Once you bootstrap run juju status and make sure you have a machine 0 that is in a started state
[22:43] marcoceppi: I have, this is my third go around
[22:44] ZonkedZebra: what does `juju version` say?
[22:44] 1.11.2-unknown-amd64
[22:44] ZonkedZebra: Ah! i know the problem
[22:44] ZonkedZebra: are you on Mac OSX?
[22:44] I am
[22:45] ZonkedZebra: the current release of juju is 1.14.0, so when you bootstrap instead of it detecting the client version you're using it's setting up 1.14. I could have sworn the brew recipe was updated to deploy the latests juju version, did you recently do a brew install>
[22:45] ZonkedZebra: or is this `brew install` from a while ago?
[22:45] arosales: ^
[22:46] I tried the homebrew method, got some crazy go compile error, so used the package from github linked to from here: https://github.com/juju/juju-core/releases
[22:47] ZonkedZebra: yeah, release is pretty outdated
[22:47] ZonkedZebra: we opted to use brew since we can keep it up-to-date rather than trying to upload every release
[22:47] marcoceppi: I went through the setup guide and ran the test proceedure. agent-state for machine 0 goes down and charm agent-state remains as pending for wordpress and mysql
[22:48] Tangentially related to juju, can openstack run on a vps, or do I have to run it on bare metal?
[22:49] blairbo: if agent-state for machine 0 is down, then juju isn't working. On your local machine try to restart the juju-* services in /etc/init. If it's still down after a few mins pastebin (http://paste.ubuntu.com/) the files from /var/log/upstart/juju-*
[22:49] dalek49: there's "devstack" which can be setup on a single machine. Depending on the VPS is virtualize, yes you can, but beware of performance issue
[22:49] issues*
[22:49] marcoceppi: brew install juju installs go with no issues (1.1.2) but during the go install command runtime/cgo throws this error: "clang: error: no such file or directory: 'libgcc.a'"
[22:50] marcoceppi, reading backscroll
[22:50] ZonkedZebra: yikes, I dont' have a mac osx machine to test what you need
[22:51] its using "juju-core_1.12.0-1.tar.gz", should it be using 1.14?
[22:51] ZonkedZebra, ya we need to removed that from github. jcastro put it there before brew.
[22:52] ZonkedZebra, here is the info on homebrew
[22:52] [1] https://github.com/mxcl/homebrew/blob/master/Library/Formula/juju.rb
[22:52] [2] https://github.com/mxcl/homebrew/pull/21858
[22:53] ZonkedZebra: try running brew install for juju with "--use-llvm" option
=== Guest81313 is now known as Guest81313|Away
[22:53] * marcoceppi grabs at sticks
[22:53] ZonkedZebra, https://lists.ubuntu.com/archives/juju/2013-August/002847.html is the thread
[22:53] ZonkedZebra, we just released 1.14 today so we proably need to get 1.14 added to that brew though
[22:54] marcoceppi: ha-zahh! --use-llvm did the trick
[22:54] ZonkedZebra: WOO! So happy that worked
[22:54] ZonkedZebra: that should give you a juju version of 1.12.0, which should work with 1.14.0 for bootstraps
[22:54] arosales: I'll update the docs with that work around for the brew install command in the mac osx section
[22:55] marcoceppi, thank you
[22:55] afterifinishtestingglusters
[22:55] marcoceppi, ack
[22:55] I'll also reach out to rodrigo and see if can update the brew receipe.
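For readers hitting the same clang/libgcc.a failure on OS X, the workaround that emerged above can be summarized as a short sketch (version numbers are as reported in the log, September 2013):

```shell
# Workaround for: clang: error: no such file or directory: 'libgcc.a'
brew install juju --use-llvm
juju version   # reported 1.12.0 here; compatible with 1.14.0 bootstraps
```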
[22:56] arosales: we should be able to open a pull req
[22:57] actually, I might just do that now
[22:57] arosales: we should have the release team do it as part of the stable releases
[22:58] marcoceppi, +1
[22:58] arosales: just update the URLs in this file on github and open pull req https://github.com/mxcl/homebrew/blob/master/Library/Formula/juju.rb
[22:59] marcoceppi, email sent to sinzui
[23:04] arosales ZonkedZebra: https://github.com/mxcl/homebrew/pull/22669 Submitted update to brew for latest stable/devel. Hopefully they will be merged soon
[23:05] ZonkedZebra: are you getting a better juju status this time around?
[23:05] Yep, everything going smoothly so far, just waiting on the juju-gui agent to start
[23:06] ZonkedZebra: excellent, if you're looking to save money in the future, you can co-locate the juju-gui charm on the bootstrap node, so you can get your moneys worth on that node; `juju deploy --to 0 juju-gui`; for the next time around :)
[23:07] wonderful, thanks for all the help
[23:08] ZonkedZebra: np, the room is most active during business hours in the UK/US. So if it's "after hours" and no one is here feel free to post your question to either http://askubuntu.com tagged with Juju or on our mailing juju@lists.ubuntu.com that way we can get back to you in a more permenant form
[23:22] marcoceppi: when i tab after typing "service" the list does not include either of the two juju services
[23:23] blairbo: what does `initctl list` show? does it show and juju- services?
[23:24] marcoceppi: I did "initctl list | grep -i juju" and i got these two - juju-agent-juju-local stop/waiting
[23:24] juju-db-juju-local start/running, process 1151
[23:25] blairbo: start the juju-agent-juju-local
[23:25] blairbo: that's the APi that controls the provisioning
[23:25] blairbo: `sudo start juju-agent-juju local` should do it
[23:26] blairbo: after you start it, run juju status, if it's still says agent down and juju-agent-juju-local is stopped, pastebin the log from /var/log/upstart/juju-agent-juju-local.log
[23:41] marcoceppi: quick question. Im attempting to extend the node-app charm to support my preferred database (postgres) an addition to mongodb, Any examples of that kicking around (multiple databases) in an existing charm?
[23:41] ZonkedZebra: not that I know of, let me check
[23:42] ZonkedZebra: you'd basically just add the bits and bobs that postgresql charm requries to the node-app
[23:43] ZonkedZebra: there's no way to say, "only need one or the other", in relations, so if someone adds a mongodb and a postgresql, the charm will process both of them
[23:44] ZonkedZebra: this one looks like it can handle postgresql or mysql or mongodb: http://manage.jujucharms.com/charms/precise/rails
[23:45] sarnold: ZonkedZebra: however, that charm also does everything via chef scripts, so it's not so easy to follow
[23:45] oh man :/
[23:45] sorry ZonkedZebra :)
[23:45] sarnold, marcoceppi : well its better then nothing, thanks
=== thumper is now known as thumper-gym
[23:55] marcoceppi: it issued a process ID, but when i list again it's still in stop/waiting state, and juju status shows agent-state still down
[23:56] blairbo: can you install pastebinit and run `cat /var/log/upstart/juju-agent-juju-local.lot | pastebinit`
[23:56] marcoceppi: also there's no log file with the path specified. There is one for the db, but not the agent.
[23:56] .lot ?
[23:58] he meant .log
[23:58] just checking:) hehe
[23:59] :)
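The local-provider recovery steps marcoceppi walks blairbo through above amount to the following sketch (the `juju-agent-juju-local` upstart job name corresponds to an environment named "local"; note the log quotes the start command with a typo, a space instead of the final hyphen):

```shell
initctl list | grep juju            # both juju-db-* and juju-agent-* should be running
sudo start juju-agent-juju-local    # the agent that handles provisioning
juju status                         # agent-state should leave "down" after a minute
# If the job falls back to stop/waiting, inspect (or pastebin) its log:
cat /var/log/upstart/juju-agent-juju-local.log | pastebinit
```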