=== zyga-afk is now known as zyga [05:46] Oookay, so should my juju scripts inform one node of the existence of another node in "install", or in "start", or in some other script? [05:46] The example in here: https://juju.ubuntu.com/docs/write-charm.html does it in db-relation-changed, but I don't exactly see how .. what would be analogous for my software. [05:50] Oh, I think I see it. [05:50] From https://juju.ubuntu.com/docs/charm.html, I think that I should define a relation of "introducer". [05:50] Getting sleepy. Maybe bang on it some more after some sleep... [05:59] zooko: best way is to grab an existing charm and see/dig into it [06:01] koolhead17: well I've done that with the drupal one in the example. [06:01] * zooko looks at the hadoop one [06:02] zooko: hadoop one will fit the charm you're writing [06:09] So... yeah I think I'll define a "relation" of "introducer"... [06:10] * zooko looks at http://jujucharms.com/charms/precise/hadoop/hooks/datanode-relation-changed [06:11] Yikes, there's a lot of code in there. === zyga is now known as zyga-gone [06:24] Hm... [06:24] http://codepad.org/EYUOJnKk [06:24] Now I've defined a relation. [06:25] But, will the "tahoe-lafs" charm have both "requires" and "provides" for this relation? [06:25] Or should I have a separate "tahoe-lafs-introducer" and "tahoe-lafs-storage-server" charm? [06:25] Okay, in another window my upload of a 1.6 GB file through Tahoe-LAFS to Amazon S3 is almost completed successfully, at which point I'm going to shut down the Amazon EC2 instance and get some sleep. :-) [06:28] zooko: requires is for any deps the charm has [06:29] So, tahoe-lafs uses any number of storage servers. [06:30] Each storage server has some storage, such as having a direct-attached spinning disk [06:30] or having access to an S3 bucket. [06:30] The storage servers could each be running on different machines in different locations on the globe, or whatever -- they just need to be reachable via TCP.
[06:30] Now, there is exactly one introducer, whose job it is to inform each client about the IP addresses and public keys of all the storage servers. [06:31] So, I'm trying to figure out how to write into the charm that when deploying a new Tahoe-LAFS grid, you have to first create and launch the singular introducer, [06:31] then get its IP address and public key from it [06:31] then every time you add a storage server, you have to tell that storage server the IP address and public key of the introducer. [06:31] That's all I'm trying to accomplish at the moment. [06:33] I'm wondering if I need to have a hook script that gets called after a storage server launches, and that script calls [06:33] https://juju.ubuntu.com/docs/charm.html [06:33] calls relation-set to set its IP address and public key? [06:34] And then the hook script that gets run for each storage server... at some point in the process of creating that storage server, could call relation-get to get that information?? [06:46] Okay, I'm about to crash, but I'll return to this channel when I wake and I'll hope that someone will guide me through this. [07:25] when I am trying to get juju credentials from /settings/juju/ it throws an internal server error [07:25] is it a known issue? I am using Essex on Precise [07:25] i am able to download EC2 credentials and OpenStack credentials [07:26] lynxman: ^^ [08:08] jcastro, I think I remember volunteering for something like that at UDS [08:08] but I can't remember the details - I would not normally attend pycon so it would be to do a charm school/presentation or whatever if so === zyga-gone is now known as zyga [11:57] marcoceppi, how'd you do at trivia night? [11:58] the dcpython meetup was fun, someone brought a nao robot [12:05] nice!
[12:05] hazmat: we placed 2nd of 12 teams [12:06] marcoceppi, impressive [12:10] hazmat: I keep forgetting you're in the DC area, would love to hang out and hack on juju stuff some time [13:29] m_3: hey don't forget to submit for puppetconf [13:29] (unless you did already, in that case "great!") [13:50] jimbaker: heya, check the scrollback for some stuff zooko needs help with [13:51] jcastro, sounds good [13:52] Morning! [13:52] Yeah, this probably reflects on me more than on Juju, but I was perplexed. [13:53] Maybe what's going on is that Tahoe-LAFS is subtly or not-so-subtly different from typical software, so it doesn't map well to Juju. [13:53] it's ok, this is the sort of thing we should figure out [13:55] zooko, this hub & spoke topology works well w/ juju [13:55] I suspect one thing that complicates it is that in Tahoe-LAFS some information gets generated inside one node when it first starts up, and then that information has to be provided to other nodes to configure them. [13:55] So you can't just script "Set this up then set those up" [13:55] without scripting "extract that information from the first one" somehow. [13:56] zooko, that's really the point of the negotiation seen in service orchestration [13:57] Hrm. Well, I don't understand this part very well. [13:57] zooko, so what you need to do is publish the info on relation settings [13:57] Shall I restate my outstanding questions? [13:57] Aha, that's the answer to one of them. :-) [13:57] "Should I publish that info on relation settings?" ;-) [13:58] That was 1. [13:58] each time a unit changes its relation settings, the other unit(s) in the relation run their -relation-changed hooks [13:58] 2. Should I have separate charms for different services, or one charm that defines both "provides" and "requires" of that service because it defines both that service and the other things that require it? [13:58] 3.
Will someone please spend a lot of money on EC2 instances so that I can run a 5000-node Tahoe-LAFS grid? [13:59] jimbaker: ah, interesting. But, this relation setting would only be done one time, on setup. [13:59] zooko, 2) my first instinct is that what you're describing is a peer relation [13:59] So it would be better, I think, to use -relation-joined ? [14:01] zooko, relation-joined is usually not what you want. more specifically, it's a good place to know that a unit is part of the relation. but that unit may not actually be ready, and may not have published any info yet [14:01] jimbaker: I see. [14:01] Okay, I guess it will work for -relation-changed. [14:02] So there'll be a start hook (is that the right hook?) that runs when the introducer is created, and that will get the special data (the "furl") and publish it as a relation setting. [14:02] That publication will trigger the other things to have their lafs-introducer-relation-changed hooks run, and they'll query it from the relation settings. [14:02] Perfect! ☺ [14:02] zooko, like the furl is published as a config setting [14:02] likely [14:04] zooko, maybe not on second thought. going back, i don't believe the furl is a human-generated config item [14:05] It is generated by the introducer itself (a computer program) when it first runs. [14:05] instead it's a relation setting between the spokes and the central hub of the introducer service. so that also works [14:05] So, wait, what? Is the plan sketched out above good? [14:05] zooko, cool, so it's a relation setting [14:06] zooko, probably :) [14:06] zooko, so do the spokes need to talk to each in their setup? [14:06] each other [14:06] Umm, is it possible to add a my_config.yaml file to an existing service? Will that trigger the config-changed hook? Just want to get some options into a service. [14:07] Says it is not accessible. [14:07] jimbaker: nope -- the spokes need nothing but that one "furl" from the introducer.
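The flow being agreed on here — the introducer publishes its furl as a relation setting, and each storage server reads it in its -relation-changed hook — might look roughly like the sketch below. Everything in it is illustrative: the relation name `introducer`, the furl path, and the furl value are invented, and the real juju hook tools `relation-set`/`relation-get` are replaced by file-backed stubs so the flow can be followed (and run) outside a juju environment:

```shell
#!/bin/sh
# Stubs standing in for juju's relation-set/relation-get hook tools, so this
# sketch runs anywhere; a real hook would call the juju-provided commands.
SETTINGS=$(mktemp)
relation_set() { echo "$1" >> "$SETTINGS"; }
relation_get() { grep "^$1=" "$SETTINGS" | cut -d= -f2; }

# --- hooks/introducer-relation-joined, on the introducer unit ---
# The furl is generated by the introducer daemon itself on first start;
# the path and the fallback value below are hypothetical.
furl=$(cat /var/lib/tahoe/introducer.furl 2>/dev/null \
    || echo "pb://example-tubid@203.0.113.1:44801/introducer")
relation_set "furl=$furl"

# --- hooks/introducer-relation-changed, on each storage-server unit ---
furl=$(relation_get furl)
if [ -z "$furl" ]; then
    # Introducer hasn't published yet; do nothing and wait for the
    # next -relation-changed invocation.
    exit 0
fi
echo "storage server would now be configured with: $furl"
```

This is the "negotiation" jimbaker mentions: the publish on one side triggers the -relation-changed hook on the other, which either finds the setting or quietly waits for the next run.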
[14:07] Plus, you know, human-chosen config options. [14:07] zooko, ok, you don't have a peer service [14:08] Right. [14:09] surgemcgee, this is the point of upgrade-charm. make certain you call it config.yaml [14:10] zooko, so each spoke is a client (requires) of the introducer (provides) [14:11] Agreed. [14:22] zooko: there are also peer relations too, eg requires: / provides: / peers: in the metadata [14:44] imbrandon: hi! I don't quite understand peer relations, but I'm fairly sure that Tahoe-LAFS doesn't need them. [15:06] zooko: well like in 1 of my setups i have a loadbalancing scheme where all the nodes loadbal to each other, so they have a peer join/depart relation that fires and adds or removes their IP from the relation config [15:06] so each of the nodes knows about all the others to add to their own LB config [15:06] * zooko nods [15:06] it was somewhat similar to what you described thus thought i'd mention it :) [15:08] morning y'all [15:08] heya m_3 [15:09] i tried a perl script they had linked on the prowlapp.com page, works great, i was gonna modify my script for ya but someone already wrote an irssi plugin :) [15:09] just fyi :) [15:09] jcastro: things all set for tomorrow? [15:12] jamespage: hey... how was jubilee? [15:12] m_3: rocking! I think the country finally remembered what being British was all about :-) [15:13] m_3: nice article BTW === mrevell_ is now known as mrevell [15:17] ha! my fav description of what it means to be British was john cleese and kevin kline in a fish called wanda from years ago... pretty funny [15:18] jamespage: yeah, lemme know if you have changes to the article [15:25] hi all [15:33] I'm getting public address: null after I run juju expose wordpress from the link https://juju.ubuntu.com/ [15:33] I don't see anything when I run juju debug-log other than "enabling distributed debug log ... ctrl-c to stop."
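The requires/provides split just agreed on might look something like the following in the two charms' metadata.yaml files. Charm names, summaries, and the `tahoe-introducer` interface name are all made up for illustration, not taken from any published charm:

```yaml
# tahoe-lafs-introducer/metadata.yaml (hypothetical hub charm)
name: tahoe-lafs-introducer
summary: Tahoe-LAFS introducer (the hub)
provides:
  introducer:
    interface: tahoe-introducer

# tahoe-lafs-storage/metadata.yaml (hypothetical spoke charm)
name: tahoe-lafs-storage
summary: Tahoe-LAFS storage server (a spoke)
requires:
  introducer:
    interface: tahoe-introducer
```

With something like this in place, `juju add-relation tahoe-lafs-introducer tahoe-lafs-storage` would join each storage unit to the introducer's relation, and the -relation-changed hooks would fire as the settings are published.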
[15:34] x_or: the instance is probably still coming up and doesn't have an address assigned yet [15:34] m_3: Yeah, I thought that, but it has been over five minutes. My other lxc machines come up in a few seconds. [15:35] x_or: the first service you start using a local provider takes a _long_ time to come up... depending on your connection [15:35] Is there somewhere that juju stores the network interface to use? [15:35] Oh, OK. [15:35] Why is that? What is happening the first time? [15:35] x_or: (it's downloading a new image).. anywhere from 10-30mins [15:36] x_or: it's using libvirt's "default" network [15:36] Oh, OK. [15:36] So, it does this in the background, I see. [15:36] x_or: and expecting that to be 192.168.122.0/24... although I'm not sure that's necessary any more [15:36] x_or: yeah, we've got a bug open to give a little better messaging while it's doing this :) [15:36] Great. juju is very exciting. [15:37] yeah, it's fun [15:37] I'm loving lxc, so much lighter than virtualbox or vmware. [15:37] * m_3 loves lxc too... will love it even more once local provider is easier to setup/use [15:38] I've done a bunch with libvirt before... new to lxc in the last 6mos or so [15:38] love that it's lightweight [15:38] libvirt is pretty nice. I was getting a bit confused between docs for lxcbr0 and it, but I got it figured out now. [15:39] I'm having a much better experience with it on a 3 GB linux laptop and five VMs than with two machines on an 8 GB OSX machine with VirtualBox. [15:39] x_or: confused is understandable... the network config is _the_ hard part right now. that'll clear up over time though. [15:39] * m_3 notes to get peeps to blog more about that setup [15:40] I started a blog post on it, I will see whether I can get that finished first thing next week. I'm glad it was not just me. :) [15:40] x_or: awesome!
[15:40] yea but you're not working with something on the level of VBox or VMWare, it's more like a chroot on crack [15:40] :) [15:40] imbrandon: This is why I like it. Lightweight. [15:41] VBox is so heavy, ditto for VMWare. [15:41] yup [15:42] x_or: we're still trying to figure out the best way to do this with osx too. We have a juju osx client that'll run remote cloud stuff fine... but the local provider story on osx is still pretty weak [15:42] depends on your perspective i guess, i see them as light, or even lighter if using a xen hypervisor, and in turn you get a real virtualized env, not a container [15:42] that isn't [15:42] m_3: i got 3/4 of a vbox provider for OS X ( and others ) written [15:43] Are you guys canonical employees? [15:43] i hoped to have it done by this weekend, well usable by then [15:43] imbrandon: awesome! vagrant? [15:43] m_3: yup [15:43] or just pure [15:43] gotcha [15:43] x_or: i am not. most of the devs in here are [15:44] x_or: yeah, it's a mix [15:44] * m_3 is [15:44] yea actually i take that "most" part back [15:44] heh [15:44] its a good mix [15:44] m_3: but yea, it is *almost* pure [15:45] vagrant's cool though... great handler stack, like rack [15:45] imbrandon: RPMs yo [15:45] and if i take an extra few days ( i may after the first push ) then it would be [15:45] jcastro: rpms are cookin this second [15:45] unf. [15:45] hey post on the list when you have something to test [15:45] like thats what i was/am doing today [15:45] kk will do [15:46] m_3: btw you get charm tools to work on osx ? i have some bad problems with the bsd vs gnu tools but mostly just in charm-tools [15:47] imbrandon: haven't tried [15:47] I actually don't have an osx machine to test on...
only ones in the house belong to the wife :) [15:47] i might dif some more later, for now i just installed gnu coreutils [15:47] dig* [15:47] yea i'm actually running linux again full time [15:48] but i still am testing / working on the stuff [15:48] imbrandon: is it possible to run recent versions in VMs now? (my laptop's a mbp, so legally I can run it) [15:48] so i keep a partition going [15:48] m_3: yup [15:48] but only vmware proper [15:48] oh nice... I'll have to google [15:48] like esxi or vmware server or player [15:48] looked into it a year or so ago, but no love [15:49] vbox and others it would "technically" work but there are checks in the installer, vmware is the only legit unmodified way to install 10.7 or 10.8 [15:49] m_3: yea its legit now [15:49] imbrandon: awesome... thanks! that'll simplify osx testing considerably [15:49] they changed it when 10.7 was released [15:50] yea it makes it nice, was the reason i was willing to jump ship back to linux so easy too cuz i can keep the old install around for porting etc [15:50] :) [15:51] imbrandon: right... I kept my orig osx drive around in an external hd case... could boot from it when I wanted [15:52] imbrandon: but lately ios update over the air means I don't even do that anymore [15:55] yup i'm lovin that [15:56] ota itunes syncing too, wonder if i can make banshee do that with my ipad hrm something for later :) [15:56] already got timemachine seeing my linux server as a timecapsule backup dest [15:56] :) === hspencer is now known as hspencer[afk] === al-maisan is now known as almaisan-away [16:20] jcastro, ping! [16:49] jamespage: pong [16:49] jcastro, hey! === hasp is now known as hspencer [18:10] do we _require_ install hooks to be idempotent? [18:18] non-idempotent install hooks sound like a terrible idea.
:-) [18:27] yeah I added a sterner warning in the template [18:32] jcastro: all hooks _need_ to be idempotent [18:32] ok, it was just missing from the template then [18:32] incoming charm-tools merge proposal yo [18:32] marcoceppi: actually my question more for m_3: do we check for idempotency in the charm test and kick that back? [18:33] I am assuming yes, it wouldn't make sense otherwise === benji is now known as Guest68587 [18:36] jcastro: I'm not sure if we explicitly check idempotency in the tests, but when I do reviews I test for it [19:08] <_mup_> juju/trunk r540 committed by kapil.thangavelu@canonical.com [19:08] <_mup_> merge maas-provider-non-mandatory-port, default ports are inferred by protocol. [a=julian-edwards][r=fwereade,hazmat][f=972829] [19:27] jcastro: no, it's hard to verify that automatically [19:27] jcastro: it's part of the review process [19:28] * jcastro nods [19:28] jcastro: a non-idempotent install hook would actually be acceptable if there's a good reason... we haven't made it a hard/fast condition [19:29] I recommend linking upgrade-charm to the install hook... so the idempotency of the hook often depends on that of the underlying tools (like apt-get) [19:30] depends tho [19:37] m_3: quantal looks good [19:38] m_3: install *still* has to be idempotent [19:38] m_3: if there is an error late in the hook, it will be retried .. so the whole thing has to be idempotent [19:42] SpamapS: thanks for looking [19:42] SpamapS: how're things? [19:42] m_3: and symlinking upgrade-charm to install isn't really the best way to go. Better to go with stop,install,start,config-changed (the same order that happens on deploy) [19:42] m_3: good, first time touching the computer since Monday morning. :) [19:43] ha! [19:43] haven't really had the presence of mind to do anything useful with it anyway [19:43] gotcha...
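Since a failed hook gets retried from the top, every step in an install hook has to tolerate re-running. A minimal sketch of the pattern SpamapS describes — paths, the config contents, and the layout are placeholders, and a real hook would lean on apt-get, which is already idempotent:

```shell
#!/bin/sh
set -e
# Sketch of an idempotent install hook: every step is safe to repeat,
# so a retry after a partial failure converges to the same state.
BASE=$(mktemp -d)   # stand-in for the real filesystem in this sketch

install_hook() {
    mkdir -p "$BASE/tahoe/storage"        # -p: no error if it already exists
    conf="$BASE/tahoe/tahoe.cfg"
    printf '[node]\nnickname = storage-1\n' > "$conf.new"
    # Only move the new config into place when it differs, so a re-run
    # doesn't clobber an already-correct file.
    cmp -s "$conf.new" "$conf" 2>/dev/null || mv "$conf.new" "$conf"
    rm -f "$conf.new"
}

install_hook    # first run
install_hook    # simulated retry: identical state, still exits 0
echo "install hook is safe to re-run"
```

Non-idempotent steps (appending to a file, creating a user without checking first) are the ones that blow up on the retry, which is why guards like `mkdir -p` and the `cmp`-before-`mv` dance matter.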
good idea about upgrade [19:44] m_3: I think we should start thinking about making a declarative charm-helper that does exactly that automatically [19:44] SpamapS: are you daddy+1 yet? [19:44] yeah, it's easy enough [19:45] m_3: been thinking about making a feature request for a 'missing-hook' hook that gets called whenever a hook script doesn't exist [19:45] ha! [19:45] that smells _dynamic_ even :) [19:45] m_3: yeah it would make declarative charming much easier [19:45] really like that feature [19:45] * SpamapS files that feature req [19:48] <_mup_> Bug #1009687 was filed: charms should be able to provide a 'missing-hook' script < https://launchpad.net/bugs/1009687 > [19:56] imbrandon: hey, you broke charm-tools [19:56] imbrandon: https://launchpadlibrarian.net/107021470/buildlog_ubuntu-oneiric-i386.charm-tools_0.3%2Bbzr145-2~oneiric1_FAILEDTOBUILD.txt.gz [19:56] imbrandon: always run 'make check' before pushing to trunk [20:04] jcastro: ^^ you too :) [20:05] yikes! [20:06] SpamapS: just make check? I get an error [20:06] make: *** No rule to make target `check'. Stop. [20:06] oh, wrong dir, nm [20:10] SpamapS: ahh crap i should know better, was just a readme update :) anyhow i'll fix it here in just a few, got a fire IRL on the phone i'm trying to defuse [20:13] its never "just a readme update" when you're changing templates. :) [20:48] imbrandon: also can you please include a description of the actual changes when you push to trunk, not just 'merging in jorge's changes' [20:49] i did in the lp commit message box. that workflow is so screwed [20:49] i'm just gonna ignore LP from now on and do it the right way [20:50] looks like it was a test that failed, fixed up in a sec [20:50] what lp commit message box? [20:51] imbrandon: no, jorge already fied [20:51] fixed even [20:51] ahh kk sorry was on the phone [20:51] but yea the one on ...
let me find it [20:52] imbrandon: I never use the LP gui for doing merges so I don't know how that even works [20:53] yea i tried it, then when it didn't work [20:53] i did it by hand the rest of the way [20:53] thus a little screwy this time [20:53] imbrandon: cool, well thanks for hitting the review queue anyway. :) [20:54] kinda like the github merging, i hate it, it's good to see an overview or a small one-off, but a pita for multi-branch merging like i am normally trying to do [20:54] SpamapS: i plan on a little more today too, but just got wrapped up in that other, will get some more in a bit [20:55] got one myself to toss into charm-helper ( a few more bash functions ) [21:02] SpamapS: what ya think about an X- prefix for random metadata.yaml fields, that takes care of the namespace issue if it ever were to become official etc and lets us add things like x-vcs to the metadata [21:02] kinda like the debian rule [21:02] imbrandon: no, thats already been rejected and I agree with the rejection. [21:02] it was ? [21:02] ok , i'll dig in the mailing list [21:03] imbrandon: yes, anything thats not ready for metadata.yaml should go in some other yaml file [21:03] preferably one that is named around the tool that consumes it [21:03] would be useless for what i just said, and you mean rejected based on the uds convos ? [21:04] imbrandon: vcs.yaml would be fine [21:04] sure , but then it litters the dir. i'll think on it some [21:05] imbrandon: I suggested it way before UDS.. I also suggested a single key that would be for extending things. [21:05] ... ? [21:05] ok [21:06] hrm the top level dir is getting quite littered too [21:06] :( [21:06] littering the dir is better than littering metadata.yaml I think [21:07] because it will only be "littered" in there as long as something sees non-ubiquitous usage. Once it becomes ubiquitous it will go in the next format spec for juju. [21:07] right now the TLD has metadata.yaml, config.yaml, and hooks ..
not what I'd call littered at the moment [21:08] nah because if that becomes standard then i can see things never getting migrated, it happens all the time because things start looking in the first place it was [21:08] SpamapS: copyright [21:08] readme [21:08] license [21:08] config.yaml [21:08] license is not a standard file [21:08] meta [21:09] info.yaml [21:09] etc [21:09] info?! [21:09] you making stuff up now? ;) [21:09] well i cant use metadata :) [21:09] seems pretty reasonable to me [21:09] plus a templates dir for config templates [21:09] etc [21:09] and if things are migrated they'll be dropped from the charm store. [21:10] huh ? [21:10] what would be dropped ? i think i missed something [21:10] things aren't migrated I mean [21:10] charms that don't follow format migrations [21:10] oh , right [21:11] there will always be a range of formats supported in juju and the charm store [21:11] yea i know, was trying to avoid pitfalls of the past, no need to beat a dead horse tho [21:11] i'll just passive aggressively stay in da rules :) [21:16] which pitfall are we repeating? [21:17] i was talking about the specific example of x-vcs still being x- because it was used there first and became widespread [21:18] It's not still X- [21:18] thats a lintian violation now [21:19] but like i said i'm not trying to make a big deal about it, just no one brought up prefixes that i was aware of and the arguments i heard at uds were very vague and weak so thought i'd mention it, no biggie tho i can use info.yaml or charm.yaml etc etc just as easy [21:19] SpamapS: heh sure , but how many years later ? [21:20] the argument was pretty solid actually, that we should stay strict on metadata.yaml so that charms are well defined and so that tools can enforce formats.
[21:20] that doesn't discount prefixes then [21:20] :) [21:20] imbrandon: pretty much immediately after Vcs was added to policy, X-Vcs was added to lintian [21:21] sure but x-vcs has been in use for about 4 years if not more [21:21] before [21:22] so i can see a lot of tools still looking in the old key, etc [21:23] perfect world i see your point and would agree in such a place, and do here just because it's not worth me choosing that battle [21:23] so allowing a prefix will just be the same as pushing things into a different file, except that metadata.yaml stays "clean" [21:24] except the file can be named anything and becomes another variable itself to find programmatically [21:25] file, field, makes no difference I think [21:26] guess not if i just load all the yaml up into spyc with a *.yaml glob :) [21:26] that sounds like extra work [21:26] then look for the keys and throw an E if keys overlap :) [21:26] or just interpret each yaml with the tool that actually is meant to interpret it [21:26] yes it is [21:27] ... [21:27] I'm also puzzled as to your intentions, as Vcs is a bad example I think.. I *HATE* that part of debian and much prefer the Ubuntu way where the path to the source branch is well known from the name+series only [21:27] spyc the lib i'm using is indeed made to interpret yaml [21:28] imbrandon: that's a lib, what's the actual tool or field you want to populate [21:28] ? [21:28] i'm making the tool [21:28] and i dont wanna populate it, i want to read it [21:28] the x-vcs value in this case :) [21:29] like for real that was a real use case [21:29] the first one i came across but i'm sure not last [21:30] has anyone out here proposed creating a new provider? Does juju allow for private 3rd party providers?
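imbrandon's half-joking plan — glob every *.yaml in the charm dir and throw an error when top-level keys overlap — is about this much shell. The demo directory, file names, and keys below are fabricated for illustration (he mentions spyc, a PHP yaml lib, but the idea is language-agnostic):

```shell
#!/bin/sh
# Demo of detecting top-level keys defined in more than one yaml file.
# The charm dir and its contents are invented for the demo.
dir=$(mktemp -d)
printf 'name: demo\nvcs: lp:charms/demo\n' > "$dir/metadata.yaml"
printf 'vcs: git://example.org/demo.git\n' > "$dir/vcs.yaml"

# Top-level keys are the unindented "key:" lines; -h drops filenames
# so sort | uniq -d can find keys that appear in more than one file.
dupes=$(grep -h '^[A-Za-z_-]*:' "$dir"/*.yaml | cut -d: -f1 | sort | uniq -d)
if [ -n "$dupes" ]; then
    echo "E: key(s) defined in more than one yaml file: $dupes"
fi
```

A real charm linter would parse the yaml properly rather than grep it, but this shows why "just split extra fields into more files" brings its own namespace problem.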
[21:30] med_: yes and yes-ish [21:31] you can see examples in providers/* they are pretty straightforward [21:31] ( in the juju src ) [21:32] and a few branches on LP of alternate/testing ones like OpenStack++S3 and OpenStack with swift [21:41] thanks imbrandon [21:52] imbrandon: somebody added X-vcs in metadata.yaml ?! [21:54] well anyway, time to nap [21:54] SpamapS: night [21:59] SpamapS: that was supposed to be only in my branch, i pushed both unintentionally and am removing it [22:02] i think charm.yaml will be better suited anyhow the more i think about it, metadata.yaml doesn't definitively say if it's data for the charm or the service or both etc , anyhow, time for food here /me is afk