[00:10] SpamapS: omg that's funny [00:10] like a piano pedal [00:11] imbrandon: ok, I think I'm done polluting your bugspace with comments [00:11] np :) can i prom it too ? hehe [00:11] actually i need dinner, bbiab [00:12] imbrandon: just test running charm-tools 'review' from vim [00:12] m_3: btw what were ya testing ? [00:12] ahh [00:13] if someone wants to +1 it too would be cool to get into the store :) [00:13] but for now, off to get fooooood [00:14] imbrandon: me too [05:19] so, am I evil because I created a charm to scale out john the ripper? [05:20] EvilMog: ^^ [05:20] EvilMog: its.. alive [05:20] EvilMog: lp:~clint-fewbar/charms/precise/john/trunk [05:20] needs a README I suppose [05:26] and now for something completely different... [05:26] sleep [05:37] awesome [05:38] I'll give that a shout in the morning :) [05:39] shot [05:39] also I'll get the corelan and metasploit guys to try it out === almaisan-away is now known as al-maisan === zyga-afk is now known as zyga [07:01] <_mup_> txzookeeper/trunk r49 committed by kapil.foss@gmail.com [07:01] <_mup_> update changelog [08:08] is there standard copyright for Canonical written charms? [08:08] this is for an AGPL service [08:25] SpamapS: why did you lie to me? [09:02] SpamapS: never mind. === zyga is now known as zyga-afk === al-maisan is now known as almaisan-away === zyga-afk is now known as zyga [12:01] how do you tell juju to use a proxy server for apt ? [12:01] And [12:02] how can you tell juju to use the ip address of started instances, rather than the hostname [12:19] lifeless, we don't have support for it yet re proxy. [12:20] lifeless, the ip inspection is specific to the provider being used. 
typically its using hostname -f output [12:25] re apt proxy support its bug 897645 [12:25] <_mup_> Bug #897645: juju should support an apt proxy or alternate mirror for private clouds < https://launchpad.net/bugs/897645 > === voidspac_ is now known as voidspace [13:04] http://code.mumak.net/2012/06/unfiltered-reflections-on-my-first-juju.html === mrevell_ is now known as mrevell [13:35] jml: great info in the write-up thanks! [13:36] m_3: my pleasure [13:39] we need to update the django-based charms to check status of applied puppet manifests... I missed that [13:40] debug-hooks might be worth your time checking out... only problem is you can't catch the install hook. That often makes me wanna put more logic in config-changed so I can catch it [13:42] jml: which branches are you referring to when saying you should prefer http://code.launchpad.net/ over lp:? [13:44] jml: also, your typical dev iteration might include terminate-machine on ec2... as it's sometime nice to try again with a pristine instance. destroy-service does this in lxc, but not ec2 [13:46] anyways... awesome... I'm excited to see how pkgme gets structured [13:52] m_3: any branch on Launchpad. e.g. lp:libdep-service [13:53] m_3: I don't want to say http://bazaar.launchpad.net/~libdep-service-committers/libdep-service/trunk because it's not what I mean. [14:04] jml: ah, ok... thought you meant when checking out the charms into a local repo. 
was wondering where juju was complaining about that [14:05] m_3: it's not juju there, it's bzr moaning about not having launchpad-login set, which is always going to be the case in any automated deploy [14:06] m_3: it's really important that all displayed errors actually be errors, and that all actual errors are displayed as such [14:08] m_3, that's fixed incidentally but it timing dependent [14:08] its [14:08] ie debug-log and debug-hooks both can catch install === almaisan-away is now known as al-maisan [14:09] actually that shouldn't be timing dependent [14:09] * hazmat verifies [14:11] jml, hooks already run with dpkg env vars to set frontend to noninteractive and list changes frontend to none. [14:12] hazmat: yeah, I thought my list mentioned that, and that 'sudo' masks that env var [14:12] because sudo does that [14:12] jml but why sudo when one is already root [14:13] but good to note agreed [14:13] hazmat: because I didn't know hooks were run as root when I started [14:13] gotcha [14:13] and also, it's nice to note what bits I'm doing in the install that _have_ to be root [14:15] jml, alternatively sudo -E for env preservation [14:24] m_3, just verified you can run debug hooks on install hooks now [14:39] hazmat: whoohoo!! [14:53] Yeah I've been using debug-hooks to *write* install hooks for a while now [14:54] i.e., start with empty charm, deploy, debug-hooks (quickly!) and then iterate, then juju scp the hook back into the bzr branch :) [14:54] I'd love an option to make juju use bzr rather than the zip file to grab the charm, so it would be easy to push changes back [14:55] +1 [14:55] jml: great brain dump. These definitely help to focus our efforts. [15:01] SpamapS: thanks. I might follow up with something that's a bit more reflective [15:04] * jml submits a super-naïve patch [15:14] hey all - so I'm attempting to write a hook in python, but calls to subprocess seem to blow up [15:16] is there a better way to do relation/config-set/get in a python hook? 
If not how can I interact with the shell relation-get et al? [15:18] bloodearnest: there are lots of cases of python hooks which use subprocess to call relation-* and such [15:19] hostname = subprocess.check_output(['unit-get','private-address']).strip() [15:19] bloodearnest: for instance [15:23] SpamapS, right - so I'm doing something wrong then [15:24] bloodearnest: are you perhaps trying to call these hooks outside of the actual juju agent? [15:33] SpamapS, nah - my bad missed the /usr of /bin/env :( [15:57] bcsaller, jimbaker ping [15:57] hazmat: whats up? [15:57] need to have a trivial vetted [15:57] link? [15:58] see msg [15:59] uhm, trying to juju bootstrap on precise and seems to be hanging on virsh net-list --all, known issue? [16:05] sidnei: not sure... wow, actually _hanging_ on virsh net-list... you might make sure you either rebooted since installing libvirt-bin and/or are in the libvirtd group [16:06] checked both yes. also, i had lxc working before doing this, with cgroups-lite and such, lxcbr0 already present, and juju prompted me to install libvirtd-bin [16:07] maybe this is one for hazmat per https://blueprints.launchpad.net/ubuntu/+spec/servercloud-q-juju-charm-best-practices (hi!) [16:09] sidnei: yeah, lxcbr0 isn't used unfortunately... juju local provider uses lxc containers but libvirt networking [16:09] sidnei: sidnei sometimes you can clear out lxc networking hangs by a `juju destroy-environment` [16:09] maybe they are stomping on each other [16:09] sidnei: also look in `lxc-ls` and /var/lib/lxc [16:10] sidnei: possible... but they default to totally different nets... 
you can see with `ps auwx | grep dnsmasq` which nets are really active [16:10] as i said above, i was already using lxc, there's a couple envs under /var/lib/lxc all from before juju [16:14] sidnei: gotcha [16:19] negronjl: ping me when you're home [16:20] I'd like to find out how your demo went === al-maisan is now known as almaisan-away [16:25] 'morning all [16:25] jcastro: I'm here [16:25] mind if we G+? I just need to grab lunch real quick [16:31] jcastro: sure .. invite me when ready [16:33] I'm writing up documentation for my project for contributors and potential users [16:34] I'm saying, "The easiest way to run libdep-service is with Juju" [16:34] what landing page should I point them at for getting the initial EC2 and/or LXC stuff set up? [16:40] jml: https://juju.ubuntu.com/docs/getting-started.html [16:57] negronjl: hangout started! [16:57] I have some very nice improvements to the Varnish charm that I'm about to push. [16:57] Improvement #1: It works now. :-) [16:57] (on LXC, it might have worked on EC2 before) [16:59] newz2000: \o/ [17:01] <_mup_> juju/trunk r543 committed by kapil@canonical.com [17:01] <_mup_> [trivial] local provider disables password auth on containers [r=bcsaller] [17:12] Hi, got my first charm mp posted: https://code.launchpad.net/~newz/charms/precise/varnish/fix-lxc-deployment/+merge/111648 Would love feedback, going to use this to try and make a squid charm. [17:12] newz2000: heh.. squid.. because.. varnish is just too awesome for you? ;) [17:13] SpamapS: well, Varnish is awesome but we use Squid in ISD, the team I work on. [17:14] newz2000: actually squid should be good.. you can then run a test and see just how much better varnish is. :) [17:14] You're talking to the wrong guy on this. I just write the code. Someone else deploys it. [17:16] newz2000: thats not the devops way! 
;) [17:16] we all write it [17:16] we all deploy it [17:16] I would *love* to do it the devops way [17:16] and we all wear a pager from time to time if things are really being fair [17:16] we don't do it that way though. :-) [17:17] newz2000: well anyway, I believe Dustin Kirkland took a stab at squid at some point [17:17] oh [17:17] newz2000: so maybe dig in https://code.launchpad.net/~kirkland [17:17] Apparently there is more than one place to look for charms [17:17] its not official tho.. so just do a fork of varnish if thats easier :) [17:17] k [17:31] jcastro: thanks. [17:32] does this read right? http://paste.ubuntu.com/1054642/ [17:32] anything I can do to make life easier for my contributors? [17:36] the docs say that config-get with no args will output json, but when calling it from python I get python a dictionary repr rather than json [17:37] *a python dictionary repr [17:39] bloodearnest: indeed, its a known bug [17:39] bloodearnest: --format json [17:39] SpamapS, I tried that, but got empty string# [17:39] marcoceppi: hey, how's wordpress coming along? [17:41] hi there... so, in my charm I need to setup access to a private ppa so I can install some custom packages... I was thinking of somehow passing the credentials in so I can customize a template for the apt config, any suggestions? [17:42] pindonga: pass it in config [17:42] SpamapS: any word from isd ? [17:42] err is [17:43] m_3, got any examples around? [17:43] imbrandon: they've ACK'd the ticket [17:43] :( [17:43] imbrandon: but not sure when it will be ready [17:45] SpamapS: i had wordpress running on rails last night :) phrake , kinda amusing ... for 5 minutes [17:46] pindonga: lemme look. 
thinking pass the key and/or key_id in yaml and then pass the $(config-get key) to apt-key [17:46] m_3, ah [17:46] k, could work [17:46] * pindonga tries === Leseb_ is now known as Leseb [18:01] ping again re patch: https://code.launchpad.net/~jml/juju/stderr-as-info/+merge/111609 [18:08] also, feedback requested on short doc snippet for users of a project that recommends using juju to deploy it: http://paste.ubuntu.com/1054642/ [18:09] jml: the readme should assume the charm is in the store, IIRC [18:09] marcoceppi: not necessarily [18:09] marcoceppi: I'm not sure this project belongs in the store, tbh. [18:10] there are plenty of cases where a charm will be highly useful, but never be in the store [18:10] marcoceppi: and atm, the charm is included in trunk [18:10] just like some things don't make any sense being in a distro's archive [18:10] (does that make it the Juju equivalent of "native package"?) [18:11] jml: I'd make sure it's clear that your --repository is a path... `juju deploy --repository ~/charms local:libdep-service` [18:11] m_3: good point (although it's "./charms" in this case) [18:11] jml: and then also don't forget `juju expose`... it won't be available over the web until after that's done [18:11] jml: lxc is an exception (no security) [18:11] ahhh [18:12] m_3: I was about to disagree with you :) [18:13] jml: I like to point people to entire deployment/spinup scripts... like https://gist.github.com/2050525 or https://gist.github.com/1406018 [18:13] Can people please try the latest version of the PPA with the local provider? I'd like to tag r543 as 0.5.1 [18:14] m_3: yeah, I think I'll do that in a future iteration. [18:14] SpamapS: sure... trying now [18:14] jml: awesome README example is lp:charms/hadoop [18:14] m_3: I guess I'm slightly disappointed that writing wrapper scripts seems appealing [18:14] ha [18:14] yes [18:15] jml: it's only for spinup... note that juju maintains the "model" going forward... 
you make changes directly using the juju cli [18:15] jml: but it's often handy to start things off with a script [18:15] jml: keep your eyes peeled for "stacks" going forward... (juju can serialize/deserialize its model) [18:16] Yeah, I remember talking about stacks in Capetown. [18:18] uhm, seems like virsh is stuck polling /var/run/libvirt/libvirt-sock [18:20] And, you know what, two blog posts in one day: http://code.mumak.net/2012/06/further-reflections-on-my-first-juju.html [18:20] don't ever say I don't care about you guys [18:22] jml: I don't care about you guys [18:22] I don't want to be harshing on you guys. It's great work. [18:22] I really do hope these posts help. [18:24] jml: hey, have you tried this, I do it sometimes and its surprisingly nice and interactive: watch 'juju status | ccze -A' [18:25] wait thats not right [18:26] SpamapS: I didn't know about ccze. [18:26] cool. [18:26] yeah its fairly smart [18:27] * jml has to go. I've got to pack & head out to Canterbury. [18:27] at one point I had it showing without flickering.. can't recall how now tho [18:28] jml: anyway, debug-log | ccze -A helps with the colorizing a lot [18:28] SpamapS: yeah, I'll bet. [18:29] (using -A just makes it scrollable) [18:29] jml: I too worry about the feedback loop a lot.. its just hard to correct that one because apt-get installing stuff will never be "zippy" [18:30] jml: but anywa, go away, enjoy Canterbury, and bring us their Tales when you're done [18:30] SpamapS: right. perhaps it could be cleaner though. [18:30] SpamapS: I think I'm obliged to tell tales on my way there. Not 100% sure though. [18:30] * jml is gone [18:33] "Oh, I should mention that I got stuck for a while at the very outset, redeploying the same version of my charm because I forgot to update the revision file and didn't realize that I actually wanted to run deploy with the --upgrade option." [18:34] And yet.. we still require this nonsense. 
:-/ [18:34] I don't think a single new user has been able to avoid that problem. [18:34] niemeyer: ^^ we need to think long and hard about the revision file.. again. [18:39] "#juju is heaps better when America is awake" [18:39] heh [18:40] lol, who said that [18:40] do we have EU charmers? James Page, but he's like, too busy holding up the pillars of ubuntu server and QA... [18:41] SpamapS, would you have any clue about virsh just plain hanging? [18:41] there are a few that are "unknowns" that are regualrs overnight [18:41] but i couldent tell ya their names [18:41] it'd be jim baker and lynxman [18:41] sidnei: hanging? no. [18:42] i see them night after night tho [18:42] those would be the other two .eu people I can think of [18:42] jcastro: Jim is in Colorado [18:42] oh, right [18:42] sorry, I meant will [18:42] but he's Golanging isn't he? [18:43] * m_3 _hates_ lxc [18:43] ^5 m_3 [18:43] SpamapS, turning the question around, is it possible to use juju in lxc mode without libvirt-bin installed? [18:43] sidnei: no [18:43] sidnei: not with out some code munging on your own [18:43] sidnei: it uses libvirt's network management [18:43] * sidnei < between a rock and a hard place [18:45] sidnei: is it possible that your libvirt 'default' network is broken? [18:45] SpamapS, well, even just running 'virsh' hangs, and i tried purging and rm -rf 'ing my way around a couple times already [18:46] sidnei: oh weird [18:46] sidnei: oh, one other thing... does your default use virbr0 as well as 122.0/24? [18:46] sidnei: sudo service libvirt-bin restart maybe? [18:47] virbr0 is explicitly required (at least was) [18:47] restart no luck, virbr0 is up on 122.0/24 [18:48] crap [18:48] with dnsmasq bound there as well? 
[18:48] yup [18:48] /usr/sbin/dnsmasq -u libvirt-dnsmasq --strict-order --bind-interfaces --pid-file=/var/run/libvirt/network/default.pid --conf-file= --except-interface lo --listen-address 192.168.122.1 --dhcp-range 192.168.122.2,192.168.122.254 --dhcp-leasefile=/var/lib/libvirt/dnsmasq/default.leases --dhcp-lease-max=253 --dhcp-no-override [18:49] and `virsh net-list --all` still hangs? [18:50] yup [18:51] anyone have ideas for fixing our US-centric responses? Other than "hey if you are not in the US pay more attention"? [18:51] so that's sounding like a more fundamental problem [18:51] jcastro: send me and taylor to paris for part of the year :) [18:51] uhm [18:51] http://askubuntu.com/questions/141720/why-is-sudo-virsh-hanging-in-the-console [18:52] and then killing dmidecode makes virsh work [18:53] SpamapS, m_3 ^ [18:53] nice... [18:53] resource contention [18:54] actually net-list should be any easy one to debug that on [18:56] SpamapS, i plan to merge in the charm-format-2 shortly, now that the issues around that are worked out (whether or not _JUJU_CHARM_FORMAT was the sticking point, it remains as an impl detail). the point of it is that it's fully backwards compatible, and there are a substantial number of tests verifying that, so it should be good for galapagos [18:57] jimbaker: you're late man.. VERY late [18:57] I sent a message over a week ago [18:57] Error processing 'cs:precise/ubuntu': entry not found [18:57] 2012-06-22 15:55:07,739 ERROR Error processing 'cs:precise/ubuntu': entry not found [18:57] shouldn't this work? ^ [18:58] sidnei: no, its broken [18:58] sidnei: sadly. :-/ [18:58] sidnei: the importer can't import that particular charm for some reason [18:58] looks like i picked a bad day to start :) [18:58] SpamapS, sorry, i misunderstood that. well it goes in the next release then [18:58] SpamapS: i sumited the int64 bug to the golang guys [18:58] is there any wiki page with tricks/idioms for writing juju charms? 
[18:58] I'd like to share a way to setup private ppa access in a charm [18:58] imbrandon: I don't believe thats the issue with the ubuntu charm [18:58] imbrandon: I think its because bzr prints extra stuff [18:59] SpamapS: so hopefully it will be fixed soonish [18:59] ahh [18:59] pindonga: we've been discussing the best way to collect best practices. No consensus yet. [18:59] pindonga: a wiki is as good a place as any. Unfortunately the juju wiki is locked down... :-/ [18:59] jimbaker: yeah, no big deal [19:00] jimbaker: we just release whatever is merged, we still have the PPA building the trunk. :) [19:00] SpamapS: also i have a new tool i want to unveil monday "hiabu" possibly as part of jitsu ( but its in node so i dunno if you want it in there _ [19:00] pindonga: also charm-tools if it's something that makes sense in a helper script [19:00] ) [19:00] imbrandon: I want it [19:00] k [19:00] imbrandon: juju-jitsu is for all things crazy [19:00] SpamapS, sounds like a plan, this is really to help as a bridge for the go port, as well as fix bugs like this one, bug 979859 [19:00] <_mup_> Bug #979859: Unable to set boolean config value via command line < https://launchpad.net/bugs/979859 > [19:00] imbrandon: which is why I'm surprised you haven't dominated it ;) [19:01] ouch [19:01] SpamapS: its an external dependancy manager [19:01] jimbaker: sure. I'm thinking honolulu should actually be shorter. Maybe 4 weeks, so we catch up [19:01] SpamapS: hahaha i've been busy with hiabu secretly [19:01] is why [19:01] :) [19:02] but yea hiabu == japaneese for hive, manage "groups" of juju service as one hive, eg. an external dependancy manager [19:02] SpamapS: ^ [19:02] imbrandon: nice [19:02] hiabu sounds like romanji [19:03] :0 [19:03] SpamapS: 543 spun up fine using ec2 to host a lp:charms/juju local container... 
still not up on my laptop, but that's not really expected :( [19:04] still needs lots of work but i can "hiabu deploy juju.com" and it figures out that wordpress and mysql etc are needed and spins them all up and scales etc, basically its just calling out to the correct juju commands tho [19:04] <_mup_> Bug #776426 was filed: Add debug hook cli flag for deploy and and add-unit. < https://launchpad.net/bugs/776426 > [19:05] i want to work that jitsu watch into it and make it smarter tho [19:05] too [19:05] imbrandon: sounds pretty rad :) [19:06] my stupid point in mentioning Romanji is that, if true, hiabu isn't strictly 'japanese' :v [19:06] ok need to go afk a few, back in ~45 min [19:06] ahhh :) [19:06] i just couldn't think of another name to play off the jitsu name :) [19:06] and mean group [19:07] no it's cool, I like it :) [19:08] m_3: sweet thanks. I'll give it another 2 hours to settle and then tag. [19:21] just cause I got curious, Jisho.org tells me Su is the japanese word for hive [19:21] Hachisu means beehive [19:22] well, the kanji for hive, is pronounced Su but Su can have different meanings [19:22] sorry I'll shut up [19:22] hazmat: is there a hack for the proxy? [19:23] hazmat: on the host side, I have a local openstack install, which isn't integrated with dns, so lookups for e.g. server-45 fail [19:23] hazmat: is there a config option ? [19:23] hazmat: Using the ec2 provider. [19:29] lifeless: /etc/hosts ? [19:32] lifeless, re hack, i've got a wip branch but it probably needs some finishing.. lp:~hazmat/juju/apt-proxy-support [19:32] outside of that going for a dns setup is probably the best option.. [19:33] lifeless, actually the simplest hack.. is pretty straightforward [19:33] shazzner: i used google translate, hehe , likely wrong [19:33] modify juju/providers/common/utils.py format_cloud_init and cloud_config dict to include an apt-proxy-url key and value [19:34] er.
cloud_config dict in that method [19:34] imbrandon: check out jisho.org [19:34] kk [19:34] best online japanese dictionary [19:34] rockin, good thing before i released it too :) [19:34] ty [19:34] oh and be sure to tick the Kana as romaji [19:35] imbrandon: np :) :) [19:45] hazmat: SpamapS: thanks [19:45] SpamapS: /etc/hosts doesn't set http_proxy :) [19:47] hazmat: is there a similar code tweak I could do to get ip usage? [19:49] lifeless, yes, the relevant bit is in juju/unit/address.py [19:50] the implementation class used varies by provider [19:52] hazmat: ah, I wasn't clear [19:53] hazmat: *bootstrap* is the thing that is using an unresolvable address [19:53] hazmat: the client node code can't talk to the metadata service. [19:53] ah [19:53] I have no idea whether the metadata service will be handing out crapola or not [19:53] lifeless, so the client resolves the instance id to the ip address [19:53] hazmat: *how* ? [19:54] like, does it call the ec2 API to get node details ? [19:54] lifeless, it queries out the server via the api using the instance id [19:54] lifeless, yes [19:54] right, its ending up with 'Server-NNN' atm, not an ip. [19:54] point me at the code :> [19:54] lifeless, if you do euca-describe-instances at your endpoint do you see the same? [19:55] I can't answer that right now [19:55] server is in weekend mode :P [19:57] SpamapS / hazmat : HAHA i wasent going to let it get the best of me, check out http://api.websitedevops.com/juju-docs/operating-systems.html ( then look at the footer for the sphinx version ) [19:57] lifeless, fair enough.. but the client can't really correct for provider errors about addresses. for the client we need an accessible/routable public address for the machine to reach it. [19:58] lifeless, have a good weekend (in case you follow your server :-) [19:58] hazmat: we do, but it has one [19:58] hazmat: I mean, I'll cross check the api results [19:58] hazmat: but if thats ok, where in the code is this called ? 
[19:59] lifeless, that part is a little more.. twisted let's say ;-) juju/providers/common/findzookeepers.py [20:00] SpamapS / hazmat : only one more little quirk i need to work out before putting up a MP that should work on 0.6.4 ( all pages TOC is "correct" ) except for the landing page , so there is hope [20:00] lifeless, basically for ec2 it gets an instance map from provider storage (s3), and then does describe instances on it and returns the public address associated to the instance as the place to tunnel an ssh connection to for zk access. [20:00] i got the major issue worked out tho, it didn't like :hidden: in the old version [20:00] hazmat: there is a 1:M mismatch in that code too, room for optimising it looks like [20:00] imbrandon, nice [20:01] lifeless, definitely.. although i'm not sure exactly which part you're referencing [20:01] for instance_id in instance_ids: <- synchronous, one loop per instance id. [20:02] then collects multi-machine errors from the get_machine call, which took one id [20:02] * hazmat nods [20:02] that was my next target on the scaling branch we used for the 2k nodes [20:03] I would have written it as [20:03] machines, missing_instance_ids = yield provider.get_machines(instance_ids) [20:03] we do something similarly dense in status when querying out info [20:03] * hazmat checks [20:05] ah [20:05] lifeless, right now the provider.get_machines call errors out on missing ids [20:05] the contract for def get_machines(self, instance_ids=()): isn't conducive to that approach [20:05] I'd fix that first :) [20:05] in set based processing, exceptions need to be super rare [20:06] because otherwise - well, you know - the successes get crushed. [20:07] yeah.. in eventually consistent distributed systems, some inconsistencies around the state need to be handled with more grace [20:08] lifeless: mind if i pick your brain for a sec on bzr, you work with its internals a bit right ? [20:08] I played a developer on TV once.
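The set-based contract lifeless sketches above — one batched call that returns the hits and the misses together instead of raising on the first missing instance id — can be illustrated with a minimal stand-alone sketch. This is not juju's actual provider code; `registry` is a hypothetical stand-in for the provider's describe-instances call:

```python
def get_machines(registry, instance_ids):
    """Batched lookup that never raises on a missing id.

    Returns (machines, missing_instance_ids) so the caller still sees
    the successes when some ids are absent -- in set-based processing,
    exceptions are reserved for genuinely exceptional failures.
    """
    machines, missing = [], []
    for instance_id in instance_ids:
        if instance_id in registry:
            machines.append(registry[instance_id])
        else:
            missing.append(instance_id)
    return machines, missing
```

In juju's Twisted code the equivalent would be an inlineCallbacks method ending with both lists, matching the `machines, missing_instance_ids = yield provider.get_machines(instance_ids)` call shape quoted in the discussion.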
heh [20:08] (some stupidly large % of bzr is my code, yes I know its internals) [20:09] well i'm totally ignorant about the internals of vcs's but really just was wondering the feasibility of a git<->bzr bridge that's like the git<->svn bridge on github, where either client can work with the same repo, not a copy of it, at the same time [20:10] hazmat: hahaha [20:10] instance.private_dns_name, [20:10] hazmat: ^ that is what's used, *not* the ip address. [20:10] * lifeless checks to see if there is a more useful field [20:11] lifeless, ? for what [20:11] hazmat: using private-ip-address would be better I think [20:11] hazmat: rather than private-dns-address [20:11] ( or even a bzr<->svn bridge etc ) basically working with multiple vcs's natively at the same time and keeping the commits/logs in sync transparently [20:12] imbrandon: you mean like 'bzr-git' ? [20:12] lifeless, econtext used for what [20:12] well kinda, that works on a copy [20:12] imbrandon, maybe check out tailor [20:12] 07:52 < lifeless> hazmat: ah, I wasn't clear [20:12] 07:53 < lifeless> hazmat: *bootstrap* is the thing that is using an unresolvable address [20:12] 07:53 < lifeless> hazmat: the client node code can't talk to the metadata service. [20:12] hazmat: yea, i have been looking into that [20:12] hazmat: I followed the pointers you gave me, get_machine -> ec2's get_machines -> machine_from_instance [20:13] lifeless: i mean more server side tho, like i can use `git clone https://github.com/myname/myrepo.git` or `svn co https://github.com/myname/myrepo` both r/w transparently [20:13] hazmat: jujumachine wants private_dns_name, but that depends on resolvability, whereas taking the ip address in the front end wouldn't [20:14] something more like that [20:14] lifeless, when you say the client node can't talk to the metadata service.. that's unclear. [20:14] lifeless, the private dns name is used for inter-unit internal communication [20:14] ie.
relations [20:14] hazmat: its also used to talk to the bootstrap node [20:14] from the juju CLI [20:15] hazmat: or something; when I run 'juju status', its grabbing the private dns name, and trying to resolve it from outside the cloud. [20:15] hazmat: i was wondering why we do that too since split dns takes care of if we use the public dns name [20:15] and then its uniform [20:15] imbrandon: there might not be public ip or pblic dns names. [20:15] imbrandon: private* is the only guaranteed thing [20:16] hrm true, split dns dont help there [20:16] lifeless, the client isn't using the private address to talk to zk, its using the public dns name given by the api server. [20:17] hazmat: I beg to differ :) [20:17] lifeless, i'm confused where you think you see it using the private dns name to facilitate that [20:17] juju bootstrap-> stuff happens [20:17] juju status -> [20:17] 'ssh: Server-32 is not a valid somethingorother to forward via\n\n' [20:17] loops [20:18] lifeless, juju/providers/common/connect.py .. using dns_name not private_dns_name [20:18] lifeless, we'd never be able to connect to a machine in ec2 otherwise [20:18] or any other public cloud [20:20] hazmat: ok, so s/private/public and my whinge still applies; we're depending on dns resolvability, which is lacking for developer installs of e.g. openstack [20:20] I have no sane way of making the public names resolvable [20:20] or the private names for that matter [20:21] I'm on a different machine, having my machine poke via dnsmasq on the cloud server will break my network when I roam, for instance. [20:25] hazmat: added to that, juju bootstrap isn't requesting a floating ip, so we're only getting flat-network addresses allocated the machines *anyway* (10.0.0.2 for instance) [20:25] its reasonable to use ips, but their's a tension to have names for hosts for users and for charms that want to configure against names (although some also want to use ips). we should probably capture all. 
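The "we should probably capture all" point above — keep both the provider-reported name and, when it resolves, an IP — might be sketched as below. This is a hypothetical helper, not juju's code; the idea is to degrade explicitly when DNS hands back something unresolvable like 'Server-32', instead of failing deep inside an ssh tunnel attempt later:

```python
import socket

def machine_endpoints(address):
    """Return (name, ip) for a provider-reported address.

    If the name doesn't resolve (e.g. a dev OpenStack with no DNS
    integration reporting 'Server-32'), ip is None and the caller can
    pick a fallback rather than hitting a resolver error downstream.
    Numeric addresses resolve to themselves.
    """
    try:
        ip = socket.gethostbyname(address)
    except socket.gaierror:
        ip = None
    return address, ip
```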
[20:25] hazmat: I'll have a poke around mondayish [20:25] lifeless, floating by default can be enabled [20:25] this has been very helpful, thanks. [20:25] hazmat: its one more thing for users to get right [20:25] lifeless, mgz's ostack provider also does the assignment [20:26] hazmat: and consider how hard canonicloud is to work with ;) [20:26] (for juju, not in general) [20:26] lifeless, its only gotten worse since i hacked out the support for it.. [20:26] i did it all without a chinstrap account previously.. [20:26] i was just about to try the ostack provider today on hpcloud [20:27] is it not working ? [20:27] hazmat: so, perhaps it would be a great exercise for you devs to do all your smoke-and-above testing using canonicloud; without canoniclod-specific hacks. [20:27] hazmat: :) [20:27] have a good weekend [20:29] lifeless, cheers, have a good one.. and no thanks re fool's errand... i'd be willing to talk to the canonicloud folks though if their willing to fix some of the pain points, we can accomodate some change as well (not hacks) [20:30] but its felt like a very slow moving progression, that's been in the wrong direction as far usability, and afaik still lacking swift [20:31] openstack isn't a product, its a toolkit for building your own cloud.. i mean snowflake [20:35] hehe [20:36] hazmat: so should i grab your branch for the ostack provider or ... ? i'm a little confused on what one to use for testing ( and has the higest probablility to work on hpcloud ) [20:36] i'm starting fresh today with it [20:37] and / or rs next gen ostack cloud too, got that activated yesterday [20:37] but i think they have ec2 and s3 compat on [20:37] imbrandon, its a sharp edge.. and not my branch, but if you want to play with it.. 
lp:~gz/juju/openstack_provider [20:38] imbrandon, they don't have s3 compat on [20:38] kk, yea nothing production but i wanna put those 3 months to use a little and try some things [20:38] i know there will be lots of breakage probably [20:39] imbrandon, you have to use that for your client and specify juju-origin: lp:~gz/juju/openstack_provider for the env [20:39] k , is there an example env.y in the code ? i hope [20:40] there aren't any docs for it yet.. but the config keys are at juju/environment/config.py ... [20:40] rockin that will work [20:40] yea i figured docs was out of the question for now [20:41] btw sometime this evening i'll have the normal docs building again whether they upgrade to 12.04 or not [20:41] i found a few workarounds that aren't too hackish , just one last bit to work out [20:41] well aren't hackish at all just have better ways in the newer version :) [20:42] and really only like a 5 line delta so far, so not like a lot of code either, just learning all the quirks [20:44] hi, I've read I can just include a repositories: section in my environments.yaml to configure my preferred repos [20:44] however I cannot get it to work, so I assume I edited the wrong place [20:44] is this actually supposed to be working with the precise version of juju?
[20:44] pindonga: that was never implemented, it's in the draft docs [20:44] imbrandon, ok, tough luck then [20:44] thx [20:44] pindonga: unless you see it somewhere else , please tell so i can correct it [20:45] no, I saw it in the drafts [20:45] some drafts include stuff that works right now [20:45] pindonga: you can use the JUJU_REPOSITORY env variable to set the local path though [20:45] so that's why i tried :) [20:45] yep [20:45] this was nicer though [20:45] don't have to do it every time [20:45] that is implemented , yea i wish it was personally, i love that idea, but sadly it's not [20:46] set the env variable in your bash_profile [20:46] yep [20:46] no worries [20:46] thx [20:46] like "export JUJU_REPOSITORY=/var/lib/charms" [20:46] kk [20:47] SpamapS / hazmat : do you all know if that feature got axed or just not done yet, i wanna put a note on it in the docs, cuz i ran into that exact same thing [20:47] if it's axed i'll just remove it, if not done yet i'll make a note saying so [20:48] imbrandon, it's implemented [20:48] you still have to prefix with local:charm_name [20:49] no i mean the environments.yaml [20:49] part [20:49] https://juju.ubuntu.com/docs/drafts/charm-namespaces.html [20:49] no that's very old [20:49] right, i was gonna brush it up a little [20:50] but yea i like that idea but i think it's axed [20:50] the env.y part [20:50] i wrote that like the first week of juju dev.. [20:50] we can probably just yank the doc [20:51] actually m_3 did a lot of moving drafts out and cleaning up yesterday, let me check if he did this too [20:51] kk [20:51] i guess it has some useful info.. 
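For reference, the JUJU_REPOSITORY setting discussed above needs an `=` in bash (the snippet quoted in-channel was missing it). A minimal sketch, with /var/lib/charms taken from the chat and the deployed charm name purely hypothetical:

```shell
# Persist the local charm repository path in ~/.bash_profile or similar
# (note the '=', which the form quoted in-channel omitted):
export JUJU_REPOSITORY=/var/lib/charms
echo "$JUJU_REPOSITORY"

# Charms still need the local: prefix when deploying, e.g.
#   juju deploy local:precise/mycharm   # hypothetical charm name
```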
[20:51] i think he killed dups and moved stuff that was done out [20:51] i'll see if he did, if not i will see about adding the few bits that are relevant to other docs and yank that one [20:52] all as an MP obviously since i'm not intimately familiar etc etc [21:02] <_mup_> Bug #1016740 was filed: local provider bootstrap should create an actual working environment if none exists < https://launchpad.net/bugs/1016740 > [21:06] imbrandon: no, I didn't take the time to go through all the drafts... just did the obvious ones at the time [21:07] m_3: sweet, kk, i'll snag this one then [21:08] imbrandon: cool [21:10] m_3: also re: our quest for bzr-git harmony :) check out the second paragraph here http://developer.github.com/v3/git/ , just more to add to the $someday list if no one else does it [21:31] have a good weekend folks [21:32] hazmat: u2 man [22:12] l8tr hazmat [22:13] m_3 / SpamapS : so many subordinates , i think of the few in the repo i'm running all but one http://api.websitedevops.com/juju-status.txt [22:13] :) [22:25] imbrandon, nice [23:14] imbrandon: haha cool! [23:14] imbrandon: so is omg using ELB now? [23:14] yea [23:14] sweet [23:14] has been for like a month or so [23:15] ever since it went down the last time cuz joey rebooted the bootstrap node and the one with the eip [23:15] i put it on elb , and the charm just added instances that are up to the elb with a special check that wordpress is actually running ( manually configured ) [23:16] so literally just juju add-unit and then the elb associates with the new omgweb that comes up and adds it to the elb, the elb checks every minute to see if it's serving wp and if so it adds it to the rotation [23:16] imbrandon: well done on getting things working w/ 0.6.4.. 
I have not seen any activity on the IS ticket yet [23:17] yea, i got it all working except the front page [23:17] took a little break then was gonna crack at that [23:17] the front page only lists the local toc [23:17] with the other fix [23:17] imbrandon: we should add an optional 'check_url' and 'check_string' to the http interface. [23:17] i'm sure there is a way around that tho [23:18] SpamapS: yea, i have a couple of suggestions for it, that being one [23:18] and the other being that the name of the elb isn't tied to the relation [23:18] but is an option [23:18] actually I really think it's time we ditch 'http' for 'http2' and start doing things with the Host header that make sense. [23:18] like i had the elb running before i used your charm, so i had to do some finagling [23:19] yea, hold on , i'll see what the check is [23:19] i have in the elb [23:20] HTTP:80/fpm.www.ping [23:21] but yea, i basically wanted to get it to a place i could hand it to joey and say "go" and it not be a big deal [23:21] and it's at that place now, has been a few weeks, he does his own code deploys etc etc etc [23:21] i haven't HAD to touch anything in almost a month [23:21] now i [23:22] still have done a few things like the shortcode plugin etc, but that's purely fun etc [23:23] and nothing to do with juju or performance etc ( the shortcode plugin where he can do [download4ubuntu url="http://apps.ubuntun.com/some/app"] and it puts my pretty css button in place :) [23:25] but yea, he now handles all the code deploys and juju updates and such, i just kinda keep an eye on things from afar, and answer any questions he has but it's not many, he picked it up quickly and has a solid grasp of it, may not be a coder but definitely isn't as hands-off as he's made out to be sometimes [23:26] he is also using a small aws_rds too, if you notice there was no mysql charm [23:27] but it's not charmed, it is just set as a config option for now , the omgweb charm has config options of db_user db_pass db_host db_name , 
and sets those on config change [23:27] but it would be nice to charm that too [23:28] but separate like that, not only are nightly backups done without extra scripting that could possibly fail, it is actually cheaper ( only by like 1c an hour ) [23:29] but it also allows the whole env to be rebuilt on the fly without thought of the db [23:31] great for those "oh shit" moments. now just juju bootstrap a new env or juju deploy omgweb some-other-service-name and then once up and config set to the db, destroy the old service without a second thought [23:31] perfect for handing off to less tech ppl [23:31] :) [23:32] imbrandon: so basically, omg has become a custom thing that is nothing like anything in the charm store. [23:32] fail [23:32] but it's using juju for great stuff [23:32] WIN [23:32] so it's beautifully handled by juju still, but all he needs to handle is the scaling of webhead units [23:32] imbrandon: it's also EC2 specific [23:32] he can't move to HP [23:33] or RAX [23:33] bummer [23:33] SpamapS: well kinda, nothing from the store directly but parts from it are in a lot of other charms [23:33] yea he can [23:33] in fact he was just looking at moving to rack [23:33] maybe [23:33] imbrandon: this is like the early days when people built servers from Slackware... install slack, cd /usr/local/src .. and start building the box. 
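The "oh shit" rebuild flow described above, bootstrap fresh, deploy the webhead charm, point it at the untouched external database via config, then discard the old service, might look roughly like this with the juju CLI of the era. The service names, repository path, and db_* values are hypothetical (only the option names db_user/db_pass/db_host/db_name come from the chat), and the commands are echoed rather than executed so the sequence reads without a cloud:

```shell
# Sketch (not a transcript) of rebuilding the environment without
# touching the externally hosted database.
run() { echo "+ $*"; }   # swap 'echo' for real execution when trying it

run juju bootstrap
run juju deploy --repository /var/lib/charms local:omgweb omgweb-new
# Option names from the chat; all values are placeholders:
run juju set omgweb-new db_user=wp db_pass=secret \
    db_host=example.rds.amazonaws.com db_name=wordpress
run juju destroy-service omgweb   # old service goes away, db untouched
```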
[23:33] now HP don't have everything, but rack does [23:33] RAX doesn't have *RDS* [23:33] yea [23:33] you'd have to manually do that [23:33] it does [23:33] and ELB [23:33] nah [23:34] it has rds [23:34] which you'd have to do manually [23:34] and elb i'm not certain but it has a lb of some kind [23:34] because there's no charm is what I'm saying [23:34] a big part of juju's promise is cloud agnosticism [23:34] Now, OMG doesn't really *need* that [23:34] well the rds bit is manual right now anyhow, i want to charm it but it's not yet, but yea they would need to be charmed [23:34] but the aws ones needed charming too [23:35] but it's sad to see how fast it blew off the rails into EC2 lock-in [23:35] to begin with :) [23:35] well that's what i'm saying, it's not as locked in as it looks [23:35] because i'm not using any of the special features and it's all charmed 'cept db backup [23:36] so it can be abstracted out into a rackspace_elb solution or a rackspace_rds one [23:36] now on hp it would need to use a mysql one, but it still has those interfaces ( optional now ) so it could without change [23:36] if needed [23:37] Well I'd hope you'd charm RDS before attempting that, so you can charm RAX's db as a service, so you could then actually do the migration just that way [23:37] soh for sure [23:37] oh* [23:37] We need some more sites playing with juju [23:37] I'll be interested to see how the next one goes [23:37] yea, that was another reason to get it into joey's hands so i could possibly do "the next one" [23:38] etc :) [23:38] but yea, it LOOKS like it's tied to aws but really it's not, or only very very very slightly until rds is charmed [23:39] even then not really cuz there are still sql dumps to the bootstrap node and it has the mysql relations in place [23:39] but yea [23:39] tried to make it as hands-off as possible but still use juju for everything that it possibly can [23:39] to make it easy on him, me or anyone else [23:40] and it may not use stuff from the store, but its a 
guinea pig and large chunks of what we learn/learned with it are going into many charms, like the elb suggestions [23:40] from a bit ago [23:40] etc [23:42] that and when shit breaks the only "custom" code really is the webheads we designed, the rest can be pointed to aws ( or another cloud provider ) for outages [23:42] if needed, but honestly, it's been more stable the last month since that last big breakdown that made me take these measures [23:42] than ever [23:43] ( and still 98% juju hehehe ) [23:44] and the webheads still rev proxy with a microcache just the same , all to each other etc, they just have an elb in front of them now as well [23:44] preventing one webhead dying and taking the whole site [23:45] ( or getting rebooted ) [23:45] imbrandon: It's making me question my vision of juju users actually contributing back the stuff that makes their websites go though. [23:46] and things like the uptime.u.co.uk and newrelic etc etc all are great cya now instead of hearsay :) [23:46] imbrandon: I had thought it would be simpler than this. :-/ [23:46] SpamapS: heh, that's what i've been trying to say from the start [23:46] SpamapS: there will be contributors but "custom" will be very common too, [23:46] esp when legacy sites are migrated [23:47] and not some new service [23:47] but yea, really there is very very little, if any, that isn't contributed back [23:47] but for our first known live site to be custom, is sad. [23:48] just not in the clean fashion of a single commit etc [23:48] imbrandon: you're not using mysql, you're not using haproxy, or even an nginx lb charm, and wordpress seems to have stymied you *and* marco to get it into production shape. [23:48] SpamapS: nah, don't think of it like that, think of it like it's still 100% juju even after the dust settles and not because it needs to be, and #2 [23:49] that it IS the first, there are bound to be hiccups [23:49] :) [23:49] imbrandon: I knew that production would bring radical change to the charm store. 
I just haven't seen *any* of that change in lp:charms yet [23:49] it's using nginx lb [23:49] not from the charm store [23:49] and what it's using has not been submitted [23:49] well only because i haven't cleaned it up enough to push [23:50] that's what I'm lamenting [23:50] i've been lazy in that respance [23:50] that it takes that much effort from you to clean it up [23:50] respect*, but that's me [23:50] lazy is what I want [23:50] no not really, like i said it's more me being lazy on that front [23:50] I want you to be *so* lazy that you never want to repeat this again [23:50] of just not doing it and doing other things [23:51] perhaps on the second site, you guys will get off your butts and submit things [23:51] SpamapS: sure, i totally understand and agree 10000% , but i mean a diff kind of lazy [23:51] nah [23:51] i need to do it now [23:51] and actually you *are* using your newrelic stuff, right? [23:51] so, WIN there. [23:51] in fact i'm going to finish up this doc stuff to get it working and not work on other projects until i have it at least up for review [23:51] haha yea [23:51] newrelic , elb ( from you ) [23:52] cs:~clint-fewbar/precise/elb [23:52] haha cs? [23:52] brave man [23:52] heh, when i deployed it dns wasn't pointing to the elb yet :) [23:53] heh [23:53] yeah but you can't fix that charm [23:53] you can't do anything with it actually [23:53] because subordinates can't be removed [23:53] so if you find a bug, or want to change it.. oops.. you're stuck, nothing you can do [23:56] yea , i didn't think of that till you brought it up the other dat [23:56] day* [23:57] but i could have sworn i did "juju deploy mysql" then "juju upgrade-charm --repository /var/lib/charms mysql" and it grabbed the upgrade from local [23:57] wasn't mysql, but still, a "charm" [23:57] as long as the revision was higher, but maybe not, probably mistaken [23:59] imbrandon: that won't work no, because the charm is cs: not local:
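SpamapS's closing point is that upgrade-charm can't switch a service from a cs: charm to a local: one, so a service you may want to patch later has to be deployed from the local repository in the first place. A sketch under that assumption, with commands echoed rather than executed since there is no environment here:

```shell
# Sketch: deploy from local: up front so that later
# 'upgrade-charm --repository' upgrades are possible; a service
# deployed from cs: is stuck with the store charm.
run() { echo "+ $*"; }   # print instead of executing

run juju deploy --repository /var/lib/charms local:precise/mysql
# ...bump the charm's revision, then:
run juju upgrade-charm --repository /var/lib/charms mysql
```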