=== freeflying_away is now known as freeflying
=== freeflying is now known as freeflying_away
=== freeflying_away is now known as freeflying
[02:53] SSH_USER = 'juju_keystone' - where does this user come from in keystone's charm?
[03:11] freeflying: that's an inline config that gets consumed during the install hooks, at first glance.
[03:11] lazyPower, yes, with this user, we can't sync credentials to the peer node
[03:13] I'm not really familiar with OpenStack, so I feel that I'm not a great resource of info on how to accomplish what you're trying to do. I literally just opened the charm and scanned the source.
[03:14] lazyPower, thanks anyway
=== julian__ is now known as julianwa
[03:20] No problem *hat tip*
=== kenn_ is now known as kenn
=== CyberJacob|Away is now known as CyberJacob
[08:58] Hi, esteemed juju dev colleagues :-)
[08:58] I may be one of the first trying to deploy OpenStack's nova-volume inside an LXC container
[08:58] it's not working because the charm cannot access loop devices
[08:58] and that's because apparmor is preventing that
[08:58] and to make it work, a line needs to be uncommented in the container config:
[08:58] #lxc.aa_profile = unconfined
[08:58] and that line is commented out in the config generated by /usr/share/lxc/templates/lxc-ubuntu-cloud
[08:58] so, how do I change the container config while it's being deployed?
[08:58] (phew :-) ) thank you
=== CyberJacob is now known as CyberJacob|Away
[09:40] I have a weird problem. I'm trying to deploy a service using juju deploy --constraints "instance-type=cpu2-ram4-disk50-ephemeral20" but am getting an error "ERROR Bad 'instance-type' constraint 'm1.small': unknown instance type" (which is what the service would have used when it was first deployed). Any ideas why it's ignoring my manually set constraints?
[09:56] Hello, I'm trying to use juju 1.14.1-precise-amd64 against a Grizzly OpenStack installation
[09:56] bootstrap fails with "error: required environment variable not set for credentials attribute: Region"
[09:56] any idea what to do? (a simple google search gave me https://bugs.launchpad.net/juju-core/+bug/1086674)
[09:58] ahh, I put the region in environments.yaml, seems to go better
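
(For anyone hitting the same Region error: juju's OpenStack provider can pick the region up from the standard OpenStack environment variable, or from the environment's stanza in environments.yaml. A minimal sketch; the region name below is a placeholder, not taken from the log:)

    # Either export the standard OpenStack variable before bootstrapping...
    export OS_REGION_NAME=RegionOne
    # ...or set it explicitly inside the environment's stanza in
    # ~/.juju/environments.yaml:
    #     region: RegionOne
    juju bootstrap
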
[10:13] hello, I'm hitting what looks like https://bugs.launchpad.net/juju-core/+bug/1202163
[10:13] <_mup_> Bug #1202163: openstack provider should have config option to ignore invalid certs
[10:13] the last comment mentions "Changed in juju-core: milestone: none → 1.16.0"
[10:14] I don't understand exactly which version has the fix; I installed the devel PPA (1.15.0-precise-amd64)
[10:14] but I still have the same behaviour
[10:14] any idea what I should try?
[10:39] melmoth, I think the answer is that it is not fixed yet
[10:40] jamespage, yep, I switched to pyjuju for this environment.
[11:02] melmoth: we're literally discussing this on a conf call right now
[11:02] keep an eye on the bug, status updating REAL SOON NOW
[11:02] https://bugs.launchpad.net/juju-core/+bug/1202163
[11:02] <_mup_> Bug #1202163: openstack provider should have config option to ignore invalid certs
[11:03] ok
[11:03] bug #1234577
[11:03] <_mup_> Bug #1234577: Uniter needs to support ssl-hostname-verification: false
[11:03] #1234576
[11:03] <_mup_> Bug #1234576: Upgrader needs to support ssl-hostname-verification: false
[11:03] melmoth: short version, we are REALLY, _really_, r e a l l y, hoping to get this done before the 1.16.0 release
[11:04] any idea when this will be? (I'm leaving the premises on Friday)
[11:04] melmoth: if friday is tomorrow
[11:04] it won't be done by then
[11:04] ok
[11:04] it *might* be done by the following monday
[11:04] I'll stick to the pyjuju version then.
[11:04] as we're also against the final deadline for saucy
[11:04] anyway, I have a quantum problem that makes the bootstrap fail anyway....
[11:05] melmoth: shitter
[11:05] so this one specific problem is not the worst I'm currently facing :)
=== freeflying is now known as freeflying_away
[11:25] hi, I'm using juju-deployer to deploy a set of charms, but I get this error: error: cannot get latest charm revision: charm not found in "/home/yolanda/development/canonical-ci": local:precise/postgresql
[11:26] shouldn't it be local:postgresql, not local:precise/postgresql?
[11:46] How can I know which machine Juju is actually using? | http://askubuntu.com/q/353114
[12:10] Migration from generic OpenStack to Ubuntu Openstack | http://askubuntu.com/q/353127
=== freeflying_away is now known as freeflying
[13:10] gary_poster, We confirmed hazmat's hadoop charm is really not in manage.jujucharms.com. We killed some stale processes. We just restarted the queue. I see http://manage.jujucharms.com/charms/precise/hadoop now
[13:10] ^ hazmat ping me if you think manage.charmworld.com is slow or not working again
[13:10] sinzui, great, thank you!
[13:29] when bootstrapping with the maas provider, is it possible to specify which physical machine you want to use as the bootstrap host?
[13:39] gnuoy, theoretically, yes
[13:40] gnuoy, if you use the py version of juju, there is a constraint called maas-name; if it's the go version, you need some workarounds
[13:41] freeflying, I'm using the juju-core/go version
[13:43] gnuoy, in that case some constraints can still be used, like mem/cpu cores; another approach is to deploy plain ubuntu to those servers, then deploy the service as a subordinate onto the machine
[13:45] the charms I'm looking to deploy aren't subordinates and juju/maas seems to be ignoring mem/cpu constraints. I've tried bootstrapping with cpu/mem options which match my target host but a different host which doesn't match keeps getting picked
=== mars_ is now known as mars
[13:47] gnuoy, or another approach: enlist one, deploy one lol
[13:47] yeah, not ideal
[13:48] but works well here
[13:52] marcoceppi, urgh:
[13:52] 2013-10-03 13:52:09 INFO juju.worker.uniter context.go:234 HOOK E: Unable to locate package charm-helper-sh
[13:52] from mysql charm on saucy
[13:53] gnuoy, the honest truth is that right now that is tricky
[13:53] gnuoy, maas tags would be the way to do it once 1.16 of juju-core is out
[13:53] jamespage: you're deploying mysql on saucy?
[13:54] marcoceppi, yes
[13:54] didn't think we had a mysql saucy charm
[13:54] marcoceppi, I never deploy directly from the charm store
[13:54] hence the charm can be any series from local
[13:54] jamespage: with charm-tools 1.0, charm-helpers-sh is dropped from that package
[13:54] yeah - I remember
[13:54] you'll need to add a PPA to get it to work
[13:55] marcoceppi, what's the future of charm-helper-sh?
[13:56] jamespage: gone, deprecated in favour of charm-helpers2 at lp:charm-helpers
[13:56] marcoceppi, I really don't want to add a ppa
[13:56] the MySQL charm shouldn't be using much from charm-helpers-sh
[13:56] you can embed the helper files locally in the charm
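
(A sketch of what embedding the helpers locally could look like. It assumes /usr/share/charm-helper/sh/net.sh, the file the old charm-helper-sh package shipped ch_is_ip and ch_get_ip in; the lib/ layout inside the charm is illustrative:)

    # vendor the old shell helpers into the charm instead of apt-installing them;
    # copy net.sh from any machine that still has the charm-helper-sh package
    mkdir -p hooks/lib
    cp /usr/share/charm-helper/sh/net.sh hooks/lib/
    # then source the embedded copy in the hooks that need the ch_* functions:
    #     . "$CHARM_DIR/hooks/lib/net.sh"
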
[13:57] jamespage, ok, thanks. I'm not sure how I'm going to address this. I have 24-odd services to deploy and 3 different types of server :-(
[13:57] gnuoy, other constraints work
[13:57] 1.16 is due today officially, but more like Monday I think
[13:59] jamespage: I don't see any of the code using charm-helpers-sh anymore, except for a mention in hooks/monitors-relation-departed, but I think that should be including monitors.common.bash and not a charm-helper
[13:59] jamespage, so that'll be in ppa:juju/stable on monday-ish?
[14:00] actually, sorry, monitors.common.bash and master-relation-changed are using ch_is_ip and ch_get_ip
[14:02] gnuoy, yeah - I think so
[14:02] ok, that'd be great
[14:30] sinzui, any reason why the ingest had to be restarted to make it work?
[14:30] hazmat, there were stale procs, possibly zombies
[14:30] new processes exit early when they think some are already running
[14:32] sinzui, any way to monitor that (last collection update on the status page)?
[14:33] hazmat, we think so. I am going to write up recommendations for a better /heartbeat
=== freeflying is now known as freeflying_away
[15:51] sinzui, cool, better monitoring is one part of the problem... fixing the stale procs would be the other; are there any forensics on the latter?
[15:54] hazmat, no. There is definitely still a problem too
[15:57] hazmat, looks like mthaddon found the issue: the crontab was manually put in the wrong place in production. He is fixing it
[15:57] sinzui: er, sorry?
[15:58] sinzui: what's wrong with /etc/cron.d/charmworld ?
[15:58] (as a location)
[15:58] mthaddon, didn't you indicate that the charmworld crontab was in the wrong location?
[15:58] sinzui: no, I indicated you asked me to look in the wrong location, but it's in /etc/cron.d/charmworld
[15:59] :(
[15:59] (which is fine)
[15:59] sinzui, that does not equate to a stale process
[15:59] I still have no clue as to what automated piece is missing.
[15:59] sinzui, is staging running against the full set of charms?
[15:59] hazmat, we had stale procs a few hours ago; we killed them.
[15:59] yes it is hazmat
[16:00] I am using it to see what is missing
[16:00] sinzui, staging has 725, prod has 746
[16:01] hazmat, that can always happen because the older the instance, the more deleted charms it knows about
[16:01] hazmat compare http://staging.jujucharms.com/recently-changed to http://manage.jujucharms.com/recently-changed
[16:03] sinzui, 21 deleted charms in a few months is highly suspect imo
[16:04] but fair enough, the ingest is paramount for triage and analysis
[16:06] sinzui: so do you need anything from us? I'm not sure if that cron info answers your questions
[16:08] sinzui, mthaddon, ingest logs might be helpful to see where the stall happens
[16:08] mthaddon, /home/charmworld/var/app.log
[16:08] could need several days' worth depending on the length of the stall
[16:08] if it's being rotated
[16:09] 1.1G... no rotation by the looks of it
[16:10] No it doesn't
[16:11] can we update the charm to do that (not a critical item, but unrotated logs are a recipe for disk space problems)?
[16:13] mthaddon, I think so
[16:14] mthaddon, I am still preparing a list of issues to talk over with gary_poster to address what we wanted to know when the last release failed
[16:14] sinzui: should I log a bug about that at https://bugs.launchpad.net/charmworld/+filebug ?
[16:14] that is good
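
(The missing rotation could be as small as a one-file logrotate snippet. A sketch with an illustrative policy; copytruncate is used because the ingest process keeps the log open:)

    # as root, drop a rotation policy for the charmworld app log:
    cat > /etc/logrotate.d/charmworld <<'EOF'
    /home/charmworld/var/app.log {
        weekly
        rotate 4
        compress
        missingok
        copytruncate
    }
    EOF
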
[16:16] mthaddon, on the mongodb we want to know if the queue has the list of charms to ingest
[16:16] mongo juju --eval 'db["charm-queue"].count()'
[16:16] ^ that can be 0 because ingest drains the queue. If we see a number, we can be certain we are queuing
[16:17] sinzui: got "1"
[16:17] I would expect that to be empty then in a few minutes
[16:22] mthaddon, hazmat, the log shows that we have only ingested once in two days. It was the run jjo intervened with
[16:22] mthaddon, has the queue number changed?
[16:22] sinzui: not yet
[16:24] sinzui: I'm going to have to pass you to the vanguard in the other channel (there is now one) as I have a meeting to go to
[16:24] thank you
[16:28] How to remove a relation in Juju after destroying one of the associated services? | http://askubuntu.com/q/353231
[16:47] I got this ^ ^
[17:26] How can I deploy a new service in Juju GUI specifying the destination machine? | http://askubuntu.com/q/353262
[17:53] Hello, can anyone help me w/ Juju-core on Ubuntu-server 12.04?
[17:54] is this the right place to ask questions?
[17:55] FilipeCifali: sure; note that irc tends to work best if you just ask questions outright :)
[17:55] I do like to be polite first :) so, I just installed, did the setup, and I'm getting this same error: http://askubuntu.com/questions/351269/juju-errors-when-trying-to-deploy-to-ec2
[17:56] is the stable version not so stable? Juju is yelling at me:
[17:56] ~# juju -v deploy wordpress
[17:56] 2013-10-03 17:52:01 INFO juju.provider.ec2 ec2.go:187 opening environment "amazon"
[17:56] 2013-10-03 17:52:07 ERROR juju supercommand.go:282 command failed: no instances found
[17:56] error: no instances found
[17:56] and I have credentials fixed, have done juju bootstrap before
[17:57] ~# juju bootstrap
[17:57] error: environment is already bootstrapped
=== CyberJacob|Away is now known as CyberJacob
[17:58] (I can access my instance over ssh w/o problem)
[18:12] brb
[18:16] I'm back
[18:19] welcome back FilipeCi_ :) did 'juju bootstrap' work?
[18:19] it did
[18:20] and I did it again after I found that link
[18:20] ~# juju bootstrap
[18:20] error: environment is already bootstrapped
[18:27] is there any other debug/verbose level to use?
[18:28] FilipeCi_: you could just check the output of juju status
[18:30] @sarnold just a sec
[18:33] hmmm Juju status showed me a DNS timeout
[18:33] gonna change my resolv.conf and try again
[18:39] 2013-10-03 18:39:24 ERROR juju supercommand.go:282 command failed: Get : 301 response missing Location header
[18:39] error: Get : 301 response missing Location header
[18:39] that's really awkward
[18:40] I found this in launchpad: https://bugs.launchpad.net/juju-core/+bug/1083017 but not sure if it's related since it's marked as fixed last year
[18:40] <_mup_> Bug #1083017: Cannot bootstrap with public-tools in non us-east-1 region
[18:42] The funny part: region: us-east-1
[18:42] in the env.yaml being used
[19:04] damn interwebs
[19:04] any hints about that message?
[19:04] FilipeCi_: you've missed nothing here
[19:05] :(
[19:05] maybe I should downgrade Juju?
[19:05] the core/client on my server
[19:06] clear
[19:08] FilipeCi_: what version are you using?
[19:09] ~# juju version
[19:09] 1.14.1-precise-amd64
[19:10] from the stable repo
[19:10] (I was following https://juju.ubuntu.com/docs/getting-started.html#configuring-your-environment-using-ec2)
[19:10] can you run juju destroy-environment, then run `juju bootstrap -v --debug` and paste the output to http://paste.ubuntu.com
[19:10] just a sec and I'll post it
[19:12] http://paste.ubuntu.com/6189419/
[19:13] running ubuntu-server 12.04 LTS
[19:15] According to everything I'm reading this has already been fixed
[19:16] how do I update?
[19:16] FilipeCi_: this was fixed almost a year ago
[19:16] FilipeCi_: change your control bucket name to something more unique and then try again
[19:18] gonna try to put another md5 on that
[19:28] oh yeah, it worked
[19:28] TY!
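
(The rename works because the EC2 provider's control-bucket lives in S3's global bucket namespace, and a 301 of that shape is typically what a collision with an existing bucket in another region looks like. A sketch of generating a unique name; uuidgen is illustrative, any sufficiently unique string will do:)

    # pick a fresh, globally unique bucket name for ~/.juju/environments.yaml:
    echo "control-bucket: juju-$(uuidgen | tr '[:upper:]' '[:lower:]')"
    # after updating the file, destroy and re-bootstrap the environment:
    juju destroy-environment
    juju bootstrap
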
=== natefinch is now known as natefinch-afk
=== _mup__ is now known as _mup_
=== CyberJacob is now known as CyberJacob|Away
[22:05] Hi -- what is the usual story around deploying juju-gui when your lab is firewalled? can you d/l the tar.gz directly into the charm dir?
[22:32] dpb1: I think it is to download the charm locally, then deploy it as a local charm
[22:32] however you may have issues with any apt-get or http GETs the install hooks may run
[22:32] thumper: yes, I did that. but it's trying to curl a tar.gz from launchpad.
[22:33] :-(
[22:33] dpb1: don't suppose you can open ports to launchpad?
[22:34] thumper: I can. I just wish it was a package. :)
[22:34] since that already works
[22:35] maybe I'll file a bug/enhancement idea
[22:35] * thumper nods
[22:35] cheers
=== natefinch-afk is now known as natefinch
=== natefinch is now known as natefinch-bedtim
=== natefinch-bedtim is now known as natefinch-bed
[23:56] I have an environment running on AWS, but due to budget constraints I would like to run both prod and dev on the same box. My thought was to run each inside its own LXC inside the EC2 instance. Is this possible? Is it even a good idea? I haven't been able to find a command to add an LXC to an instance.
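
(On that last question: a sketch of the container-placement commands, assuming a juju-core release that supports LXC placement directives; mysql and the service name are just examples:)

    juju add-machine lxc:0                    # create a new LXC container on machine 0
    juju deploy mysql --to lxc:0              # or deploy a unit straight into a fresh container there
    juju deploy mysql dev-mysql --to lxc:0    # a second, separately named service for dev

(Containers isolate the prod and dev services from each other, but not from the instance itself going down, so whether one box should host both is still a judgment call.)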