[00:18] jason_, it can be any string re admin-secret [00:18] jason_, i assume for bootstrap? if you run juju -v bootstrap .. you should get a traceback on error, that you can paste to http://paste.ubuntu.com and paste the link here [00:29] niemeyer, sounds good [00:32] hazmat, http://paste.ubuntu.com/705195/ [00:33] jason_, it looks like your orchestra server isn't responding to api requests [00:34] the traceback is kind of short.. so its hard to be sure, but its either the orchestra server isn't responding or the webdav server isn't up [00:34] I can ssh to it, and I was able to pxe boot a system from it [00:35] now, in the sample, the address is marked off by trios of single quotes [00:35] ''' [00:35] do those not belong -- I tried without, and got a different, longer error [00:35] jason_, yeah.. it looks like its a problem with the webdav server when i look at the code [00:36] jason_, its the first thing talked to during bootstrap to check if a juju node already exists [00:36] jason_, not sure what you mean by the triple quotes [00:37] storage-url: '''http://192.168.1.103/webdav''' [00:38] jason_, that's not correct yaml afaics [00:38] jason_, single quotes around it are fine [00:38] else it parses to something broken for use as an address [00:39] yaml.load("storage-url: '''http://192.168.1.103/webdav'''") -> {'storage-url': "'http://192.168.1.103/webdav'"} [00:39] jason_, it doesn't even need quotes for a string in yaml for this case [00:39] jason_, which example are you referencing? [00:40] hazmat, https://help.ubuntu.com/community/Orchestra/JuJu [00:41] jason_, hmm.. ic. thanks, i hadn't realized this was documented somewhere.. first i'd try removing all the triple quotes [00:42] hazmat, the error w/o the quotes: http://paste.ubuntu.com/705202/ [00:45] jason_, i'm not really all that familiar with it.. but my understanding is that first you need to define the management classes in cobbler that associate to machines you want to use, and those should match the management classes you have specified in juju's environments.yaml file.. the error is because juju queried the cobbler server and didn't find any machines matching the specified management classes. [00:45] hazmat, mmm, I think I might know what to try next, thanks. I've got to run for a while -- thanks [00:46] jason_, np.. good luck, and feel free to pop by if you have any more problems with it, there will be some folks around tomorrow who have more experience using juju + orchestra [00:46] hazmat, cool, thanks [00:49] hazmat, hey, got past that step, just had to netboot-enable it [00:49] jason_, awesome [00:50] hazmat, heh, meanwhile, my wife is going to kill me if I don't quit messing with this! :) [00:50] hehe ;-) [00:50] * hazmat knows that feeling [01:16] SpamapS, your updates are breaking the charms.. the revision file is missing === ejat- is now known as ejat [04:07] hazmat: so I still have to manage the revision?! :-( [04:07] * SpamapS had hoped it was optional [04:07] SpamapS, its in a separate file [04:07] Yeah I was hoping that was an optional file [04:07] 'revision' in the charm [04:08] SpamapS, it is during dev (it will autocreate it for you) [04:08] hazmat: so I need a universal pre-commit that checks for the file [04:08] hazmat: or we have to disallow general direct access to bzr. [04:08] SpamapS, the old way is backwards compatible [04:09] defined in metadata.yaml [04:09] hazmat: indeed, but it generates *copious* warnings
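[ed: a quick illustration of the triple-quote problem discussed at [00:37] above -- inside a single-quoted YAML scalar, '' is an escaped quote, so the parsed value keeps a stray pair of quotes around the URL and breaks it as an address. A minimal PyYAML demo:]

    import yaml

    # triple quotes leave escaped quotes embedded in the parsed value
    broken = yaml.load("storage-url: '''http://192.168.1.103/webdav'''")
    print broken  # {'storage-url': "'http://192.168.1.103/webdav'"}

    # no quotes are needed at all for this case
    ok = yaml.load("storage-url: http://192.168.1.103/webdav")
    print ok      # {'storage-url': 'http://192.168.1.103/webdav'}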
[04:09] SpamapS, yeah.. for the 'charmers' group that might be good [04:09] probably should have used a deprecation warning.. but that's still once per process. [04:10] and random depending on which one you hit.. i guess the repo.find hits all of them. [04:12] yep === plars is now known as plars-holiday [12:30] Hallo! [12:54] niemeyer: mornin' [12:57] niemeyer: i tried to push my doc changes to the URL you suggested (lp:~juju/juju/trunk) and i still get a "read-only transport" error [13:03] rog: Can you please paste it? [13:10] <_mup_> Bug #871743 was filed: orchestra instance status is not visible < https://launchpad.net/bugs/871743 > [13:11] http://paste.ubuntu.com/705380/ [13:11] niemeyer: ^ [13:12] <_mup_> Bug #871745 was filed: orchestra: ks_meta not cleared < https://launchpad.net/bugs/871745 > [13:20] rog: This is wrong in a few different ways [13:20] rog: You can simply pick a branch and push onto trunk [13:20] rog: trunk has most certainly evolved since you created this branch [13:20] can? or can't? [13:20] rog: Sorry, can not [13:20] ok [13:20] rog: The way to go is to have a local trunk [13:21] i do [13:21] ok, and merge into that [13:21] rog: Then, bzr pull into it [13:21] then push it [13:21] rog: First pull from the real trunk [13:21] yup [13:21] rog: Then, merge onto that [13:21] rog: and _test_ it! [13:21] rog: Then, commit and push [13:21] even though i've only changed the docs? [13:21] rog: If you're using a bound branch, you actually don't have to push [13:22] just commit? [13:22] rog: But it's slightly easier to screw things up too [13:22] rog: Yeah [13:22] rog: For doc-only changes, build the docs again at least [13:22] how do i run the juju test suite BTW? [13:22] rog: and see how it looks in the browser [13:22] rog: ./test [13:22] ok [13:33] niemeyer: i still get the same error. i tried it from scratch: http://paste.ubuntu.com/705386/ [13:34] i'm sure i'm still doing something wrong :-) [13:34] rog: Hmm.. [13:34] rog: Are you not part of the team? [13:34] * niemeyer checks [13:35] rog: LOL, yeah [13:35] rog: Ok, try now [13:35] ah, that works. [13:35] rog: Sorry about that [13:35] BTW what *is* the difference between lp:juju and lp:~juju/juju/trunk [13:35] ? [13:40] rog: None, assuming that the former has a default series that points to the given branch [13:40] rog: Which is indeed the case [13:40] rog: Which series a project or a series points to is config-defined [13:40] niemeyer: ok. i understood something different from one of your previous remarks [13:40] Erm [13:40] rog: Which branch a project or a series points to is config-defined [13:40] rog: Which was? [13:41] niemeyer: http://paste.ubuntu.com/705389/ [13:42] (i'd used the URL lp:juju there) [13:42] rog: You were using http [13:42] rog: or that was my understanding at least [13:42] rog: if it wasn't, then I looked at the URL incorrectly [13:43] niemeyer: i don't think i was. i just checked. i was using lp:juju [13:43] rog: Ok, sorry then [13:43] i woz confuzed [13:43] np [14:09] <_mup_> Bug #871773 was filed: machine_data needs a schema < https://launchpad.net/bugs/871773 > [14:21] <_mup_> juju/go-store r16 committed by gustavo@niemeyer.net [14:21] <_mup_> Merging from go-new-revisions. [14:50] <_mup_> juju/go-store r17 committed by gustavo@niemeyer.net [14:50] <_mup_> Renamed NewURL to ParseURL, and added MustParseURL. [15:05] SpamapS, fwiw i'm marking bugs for oneiric against the distribution and milestone oneiric updates [15:07] niemeyer, fwereade_ we don't have a separate oneiric series atm..
so merge order is critical to ensure we don't have new features added before bugfixes for oneiric.. [15:08] hazmat: Agreed [15:08] hazmat: What's the context? [15:08] niemeyer, getting 399 and local provider storage into oneiric.. [15:08] hazmat: I've just been quietly working away and MPing against florence bugs, I hadn't been planning to merge anything until I had some idea what was going on [15:09] niemeyer, but it also applies to any other bugs we get that should be fixed against oneiric, where things become a bit harder. [15:09] if its post oneiric release, and we're onto florence feature dev [15:10] unless we're saying we're only going to do one sru update, and we'll save doing a maintained stable till 12.04 [15:10] we'll at least get the practice in of doing an SRU before 12.04 which is nice [15:11] the question is if we need or want to do more than one [15:11] probably better is publishing a stablish ppa [15:12] hazmat: I don't know, but the focus ATM is indeed on oneiric [15:12] i'll ask again on list for wider feedback [15:13] hazmat: I think it depends quite a bit on what the 11.10 => 12.04 period will look like [15:13] hazmat: There are important things we have to decide on that will modify the way we work on that period [15:15] hazmat: Regarding local-provider-storage, I wish we had gone with a webdav implementation from day zero [15:15] hazmat: But if you've tested that branch and it's solving the problem for the release right now, +1 [15:16] niemeyer, indeed, both i and jim have tested it [15:16] it solves the issue jamespage reported [15:16] hazmat: Cool, let's go with it then and get the problem fied [15:16] fixed [15:16] hazmat: We can reevaluate the approach in the future [15:17] niemeyer, what does webdav bring to the table here? and which webdav impl? [15:17] hazmat: It brings commonality between multiple providers, and it also brings privacy [15:17] hazmat: We already have webdav support in orchestra [15:20] niemeyer, hazmat: kinda-sorta: I only just MPed the version with authentication, and I think that's tied to the apache2 module (I forget what, but it skips one of the possible fields) [15:20] privacy between multiple users on a machine with sudo root access, against resources that are atm public, is a bit of a red herring; multiple environments on a single machine means a cross provider resource (ie. host config alteration) or something costly like multiple apache2 webdavs. [15:21] hazmat: We already have that code working. Just needed to bring up any existing webdav server properly configured. [15:22] with the new local provider storage, the fetch side is the same as what we're using now against any url.. the push side is the same as what we use in tests (disk storage), and is trivial [15:25] hazmat: Yeah, the fetch side is the same. The server side is disk store + twistd + wrapper to return URL on twistd. [15:26] hazmat: I hope that in the future that becomes a single server, for local, for orchestra, and for EC2. [15:44] Lunch, biab [15:54] later all [15:55] fwereade_, cheers [16:10] * hazmat wonders if google's dart aka java+coffeescript is really useful [16:11] * rog is quite glad that dart isn't stepping on Go's toes [16:23] <_mup_> juju/trunk r401 committed by kapil.thangavelu@canonical.com [16:23] <_mup_> merge local-provider-storage [r=jimbaker,niemeyer][f=869945] [16:23] <_mup_> Make local provider storage network accessible to allow for unit access, fixes problems [16:23] <_mup_> with charm upgrade.
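[ed: a rough picture of the "disk store + twistd" arrangement niemeyer describes at [15:25] -- a minimal sketch only, not juju's actual implementation; the directory and port are invented, and this covers only the fetch (GET) side, while the real push side would also need PUT handling:]

    from twisted.internet import reactor
    from twisted.web.server import Site
    from twisted.web.static import File

    # serve the local provider's storage directory over plain HTTP so
    # units can fetch charms from it (hypothetical path and port)
    storage = File("/var/lib/juju/local-storage")
    reactor.listenTCP(8040, Site(storage))
    reactor.run()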
[16:44] * hazmat lunches [16:47] <_Groo_> hi/2 all [16:47] <_Groo_> could anyone help me? i want to test juju but i dont have an AWS key... [16:48] <_Groo_> how can i bootstrap juju without such a key? can i emulate one or make juju ignore it? [16:48] _Groo_, consider using the local provider setup [16:48] rog: I'm actually surprised by _how much_ it's not stepping on it [16:50] _Groo_: you can also use openstack [16:51] niemeyer: yeah, i thought there might be some influences going on there, but there don't appear to be [16:52] niemeyer: := and go-order type declarations would have been a nice touch... [16:52] rog: Interfaces as well, at the very least [16:52] niemeyer: yeah. [16:53] niemeyer: from my 30s look at it the type and object model seems substantially that of java [16:53] which does seem a bit... retro [16:54] rog: Yeah, it feels quite a bit like Javascript+Java [16:54] rog: Which is unsurprising given the stakeholders' background [16:54] on the positive side, it means i don't have an urge to waste time looking through it in detail... [16:55] niemeyer: yeah. seems like they could've been a little bit more radical. but maybe they like that space. [16:58] So, we may need to do one last upload to juju in 11.10 [16:58] oh wait, never mind [16:58] I was just thinking it uses the PPA by default [16:58] but it doesn't! w00t! [16:58] SpamapS: Yeah, it's.. magic! :) [17:00] niemeyer: I keep thinking that it would be better to have the bootstrap process build a repo with juju in it.. so you always get the same juju regardless of the archive/PPA state. [17:00] niemeyer: i'm off. i sent you a comeback on that review by the way. all done save one query. [17:01] rog: Cool [17:01] rog: Thanks, and a good evening [17:01] see you tomorrow [17:01] btw, juju.ubuntu.com/docs is not being updated [17:02] * niemeyer tries to sort out ordering of updates in the store [17:08] hazmat: btw, we have some more bugs to fix in txaws .. [17:08] hazmat: the provisioning agent also needs to be a little more robust when handling errors from txaws .. [17:08] SpamapS, hmm.. this is around the machine termination work? [17:09] We had a failure during the demo where sometimes expose would try to list instances, fail, and then get an error raised because it tried to iterate on None [17:09] ugh [17:09] after that, the provisioning agent would not do *anything* [17:09] kill/restart it would work [17:09] luckily we had it a few times in rehearsal so I had Adam watching for it during the demo [17:10] It was about 1 in 10 times.. so we just crossed our fingers and went for it. :-P [17:10] glad the demo went well, sounds like it rocked [17:11] It went over great [17:11] hadoop w/ 7 nodes in under 5 minutes. :) [17:18] SpamapS, there's a related outstanding issue (bug824279) [17:18] bug 824279, please ;) [17:18] <_mup_> Bug #824279: Security group functions for EC2 provider should retry < https://launchpad.net/bugs/824279 > [17:20] jimbaker: that is definitely related, tho I think the bigger problem is that errors are allowed to disable the provisioning agent [17:20] from the sound of it, openstack occasionally returns a payload txaws can't parse, and this bubbles up inappropriately [17:20] jimbaker: exactly [17:20] SpamapS, yeah, i never liked that architecture [17:21] txaws should be smarter..
[17:21] but we should be more defensive [17:21] SpamapS, precisely [17:21] SpamapS: That's a slightly spread out issue indeed, and it's the sort of thing I hope to get cleaned up in the 11.10 => 12.04 timeline [17:21] Want to make sure its well reported so it will be easier to fix. [17:23] SpamapS, i think one possibility here is to consider that the provisioning agent, when it does fall over, is something that can be restarted. ideally with ha. but there has to be a last line of defense. not you watching it ;) [17:24] yeah, what was weird was that it didn't exit.. it just stopped doing anything [17:24] SpamapS, ahh, so not even failfast [17:24] My guess was that some watch/callback/etc. needed to be re-added in a 'finally:' clause somewhere [17:24] Its quite reproducible.. [17:24] SpamapS: I've seen that happening before.. a deferred that never fires can cause that [17:24] just digging through bugs now to see if there's a dupe [17:25] (while also ISO testing the 11.10 release :) [17:28] SpamapS: My plan for 12.04 encompasses fixes for all of that, FWIW. We still have to talk about it to see if we're all buying into it, though. [17:31] SpamapS, is bug 863400 addressed already? [17:31] <_mup_> Bug #863400: examples repository is not installed from PPA < https://launchpad.net/bugs/863400 > [17:33] hazmat: no, its just blocked on me making the packaging from 11.10 backportable to lucid/maverick [17:33] trivial, but not done [17:33] BTW the reason they're not there is a missed file during the rename [17:35] ok.. i'm going to move it to the florence milestone.. i'm trying to close out eureka === hazmat changed the topic of #juju to: http://j.mp/juju-florence http://j.mp/juju-docs http://afgen.com/juju.html http://j.mp/irclog [17:38] florence is open [17:39] WOOHAY [17:40] very nice [17:44] it was very clear to me in working on the provisioning agent that it's very difficult to get a complex watch setup that is correct in twisted. so moving to golang is definitely something i continue to support [17:50] jimbaker, umm.. add an errBack ? [17:50] for the error handling that is [17:52] SpamapS, do you have any of the logs from the provisioning agents by chance? [17:53] hazmat: no. :( [17:53] kept meaning to save one off [17:53] but its really easy to reproduce [17:53] just return None from describe_instances [17:54] can probably write a test actually [17:54] hazmat: rather, instead of returning None, raise an error [17:54] hazmat, sure, we could do that. i suspect (but i would need a log to confirm) that the problem would not be caught by the exception handler L461 in open_close_ports_on_machine in juju.agents.provision [17:55] actually that's just illustrative of where it should be caught, since in the specific case there, it's just to ignore something uninteresting [17:55] wouldn't the appropriate way be to handle unexpected errors in any underlying library call? [17:57] SpamapS, yes, to be more defensive, or failfast, when doing something external [17:57] SpamapS, religious backwards compatibility will leave important features on the table.. zk security comes to mind as a pending one [17:59] that method does isolate any txaws calls related to expose, but it can be called in the context of a watch [18:00] SpamapS: Yes, that's really all that there is to it. Certain practices make that intrinsic, while others make that error prone. [18:06] hazmat: agreed 100%, I think it will slow down dev a lot.
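[ed: a hedged sketch of the defensive pattern discussed around [17:09]-[17:50] -- illustrative only, not juju's actual provisioning agent code; process() and the interval are invented. The point is the errback-plus-reschedule shape, so one bad txaws response can neither bubble up unhandled nor silently stop the loop:]

    from twisted.internet import reactor
    from twisted.python import log

    POLL_INTERVAL = 30  # seconds; illustrative

    def poll_instances(ec2):
        d = ec2.describe_instances()

        def on_result(instances):
            if instances is None:
                # e.g. an unparseable openstack payload surfacing as None
                raise ValueError("describe_instances returned None")
            process(instances)  # hypothetical handler

        d.addCallback(on_result)
        d.addErrback(log.err)  # log the failure instead of wedging the agent
        # reschedule unconditionally so the agent never just stops doing anything
        d.addBoth(lambda _: reactor.callLater(POLL_INTERVAL, poll_instances, ec2))
        return d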
[18:07] hazmat: I'm not sure how we can manage to keep putting new versions of juju in -updates without it though.. otherwise we're going to get angry people who have dead environments. [18:08] SpamapS, i'm not sure why we should try past an initial SRU (which is good experience for 12.04) [18:08] have a stable ppa for people who want new features [18:08] if we want to have additional fixes for 11.10, we should have a release branch [18:09] Thats typically how its done, yes. [18:09] However I think we're being asked to think outside the box here. [18:12] SpamapS, hazmat: We may well live the entire 12.04 timeframe with a compatible code base [18:12] SpamapS, hazmat: It really depends on how we're running next [18:13] There are two very distinct levels of compatibility to think about too [18:13] There's the running environment, and the charms. [18:13] And a third, which is the CLI [18:13] SpamapS, hazmat: People must be able to trust it by 12.04, so the focus is on making it solid [18:14] Yeah, nobody's arguing that we shouldn't do some radical things to make juju a production ready product for at least a few use cases by 12.04. The current question is what to do about SRU [18:16] SpamapS: I don't think we need any incompatible changes to make it production-ready. [18:17] SpamapS: SRUs can flow from trunk before we break compatibility. [18:18] niemeyer: adding ZK security would probably be difficult to do w/o breaking a running env [18:19] unless we are careful in making things gracefully degrade on the ZK schema version [18:19] SpamapS: That's not critical for putting it in production, to be honest, but it'd be possible still [18:21] niemeyer: alright, I'll have to take your word on both of those, as I'm not entirely familiar with the current security model [18:21] The previous understanding I had was that any agent can change and view any part of ZK [18:21] thats not even close to production acceptable. [18:21] SpamapS: The main point I'm raising here is that I don't even think we should be addressing zk security right now [18:22] yeah I believe you that there's no need for it.. I just don't understand it well enough to speak to it. [18:23] SpamapS: "production" means different things to different people. [18:23] right now the story is there's no security; there is an implementation on deck that can be finished up in an additional week or two [18:23] but its moot if we go Go [18:23] SpamapS: I'd certainly deploy juju with agents having access to zk [18:23] SpamapS: In production [18:23] SpamapS: I'd not deploy it without being able to reboot [18:24] SpamapS: etc [18:24] I'd put rebooting above zk security too. [18:24] niemeyer, iow, there are other issues with higher priority [18:24] hazmat: You mean in the same words, yeah :-) [18:24] I'd just be hesitant to deploy something that makes it so one root compromise enables a global root compromise. [18:25] it can be "production part deux" ;) [18:25] SpamapS: Precisely [18:25] SpamapS: It's not that we disagree, it's just that that's critical critical, and critical :-) [18:25] s/that that's/that there's/ [18:27] i'd still like to finish the merge on the security stuff, just so its not pending..
the further away, the bigger the context switch [18:27] hazmat: I'm not sure it's a good idea, but we should definitely talk about it [18:27] i should probably just update it to current trunk and leave it pending [18:28] niemeyer, i'm fine with holding off on it for now, but it is still something i'd like to see for 12.04 [18:28] wouldn't it be possible to simply put a version constraint on it.. if schema_version >= 3: ... else: warn("NO SECURITY!") [18:28] but agreed reboots and other prod issues are more important for now [18:28] hazmat: It's something I want to see in too, for sure, but we have to put the work in context of the overall strategy [18:29] niemeyer, we haven't done much overall strategy discussion for what we want to accomplish this cycle [18:29] hazmat: ROTFL [18:30] niemeyer: I went ahead and made 'production' an official tag for juju's bugs [18:31] hazmat: We haven't indeed.. but please excuse me while I go back to hacking the store so I can try to get this on time. :-D [18:32] hazmat: We've been consciously delaying strategy conversations since the strategy is very well known up to now [18:32] hazmat: Getting 11.10 in shape [18:33] niemeyer, good stuff we can revisit post store/repo launch [18:33] * hazmat returns to hacking [18:33] hazmat: Once we're good on that front (I'm still not), and we breathe for a couple of days maybe, we should start more serious conversations on the months ahead [18:33] SpamapS, btw.. pls put in a request to merge status2gource ;-) that's awesome [18:33] SpamapS, is there a screencast of that rocking out? [18:34] hazmat: I'm hoping to just make it part of juju status. --gource [18:34] SpamapS, nice [18:34] its pretty simple [18:34] hazmat: we had a few [18:34] hazmat: hopefully we'll have full video of the actual demo [18:35] that would be awesome... although potentially a long wait [18:36] IIRC, the OpenStack conf guys are editing [18:37] SpamapS, sure.. but i remember waiting a year for the surge guys to finish up last year's videos.. [18:37] doh [18:41] evening all [18:42] I have a query re public-address/private-address [18:42] JAMES PAGE! [18:42] * jamespage waves at niemeyer [18:44] I've been doing a bit of work on the cassandra charm and I've switched over to using private-address/public-address [18:45] but I have to configure cassandra to listen on a specific IP address for cluster communication [18:45] jamespage: Hmm, ok [18:46] I went with `unit-get private-address` - but running in ec2 this gives me 'domU-12-31-39-0B-14-11.compute-1.internal' [18:46] jamespage: I actually already committed changes to use public-address/private-address [18:46] which binds onto 127.0.1.1 [18:47] heh thats interesting [18:47] ugh.. [18:47] yeah - thats what I thought :-) [18:47] seems like we should be putting private *address* in that field, not private-hostname [18:47] jamespage: Looks buggy.. [18:47] since we have the collaboration of the provider, addresses should be useful [18:47] SpamapS: This should actually be the public one for EC2, right now [18:47] SpamapS: I've done some work on the seed management as it was restarting cassandra *alot* when it did not need to [18:48] jamespage: cool! [18:48] jamespage: you can just --overwrite lp:charm/cassandra .. I was just seeing how unit-info and private/public addresses work [18:48] niemeyer: huh? [18:48] SpamapS: wilco [18:49] SpamapS: It should be the address that allows units to intercommunicate..
I guess we can use the internal 10.*.*.* [18:49] jamespage: as far as binding a specific IP, is that necessary? can't you use 0.0.0.0 ? Or is it the snitch thing that needs to know its IP? [18:49] niemeyer: we *must* use the internal address, or people get massive bandwidth bills [18:49] SpamapS: you can for the thrift interface - but not for the peering [18:49] SpamapS: yeah, cool [18:50] jamespage: right that makes sense. [18:50] jamespage, that's interesting.. [18:50] host domU-12-31-39-0B-E0-59.compute-1.internal [18:50] domU-12-31-39-0B-E0-59.compute-1.internal has address 10.214.231.167 [18:50] jamespage: I suppose the simple way is to look it up with dig/host, instead of using gethostbyname [18:50] ping domU-12-31-39-0B-E0-59.compute-1.internal -> 127.0.1.1 [18:50] hazmat: bingo! [18:50] hazmat: right, thats because we always put a machine's hostname in /etc/hosts as 127.0.1.1 [18:51] hazmat: Cool.. we just need to resolve it differently [18:51] hazmat: Or rather.. are we resolving it? /me looks at get-unit private-address [18:51] jamespage, if we drop the search domain from /etc/hosts it should also work [18:51] niemeyer, it queries the private address from the metadata server [18:52] hazmat: But what's the output from the command? [18:52] hazmat: an ip, or the domain name? [18:52] niemeyer, we can get either, we use address for ec2 [18:52] hazmat: Is the address an ip, or a domain name? :-) [18:52] er. domain [18:53] niemeyer, its domains for every provider except local which uses ip [18:53] so what should public-address get me in ec2? [18:53] hazmat: That sounds good then [18:53] jamespage: public would be less relevant for cassandra [18:53] jamespage: The internal domain name for the local machine [18:53] jamespage, the public dns name for the machine [18:53] Sorry, s/internal/public/ for public-address [18:54] so that should be a name not an address - OK [18:54] I think what's important is that the behavior of private-address is well understood. I have to admit, I'd expect it to *always* give me a network address, not a hostname, but if it might do that sometimes, then charms can deal with that. [18:54] jamespage: I think you have to plan for both [18:55] jamespage: address is a loose term [18:55] if unit-info private-address | grep -q "[a-zA-Z]" ; then resolve_hostname ; fi [18:55] hrm that breaks on IPv6 doesn't it? [18:56] jamespage: ipv4 address, ipv6 address, mac address, etc, are not loose terms [18:56] SpamapS, that's one reason its hostname for address, to preserve ipv6 compatibility [18:57] ie.
leave it to dns to resolve [18:57] except here dns is effectively broken [18:57] yeah, my concern would be that sometimes it might be an IP [18:58] SpamapS: We can certainly polish/improve that over time [18:58] SpamapS, in the local provider it is indeed an ip, since the hostname isn't routable from the host to the container [18:58] SpamapS: Let's keep an eye on that and learn how people use it [18:58] niemeyer, that's kinda what caught me out; I've been testing in the local provider just fine [18:58] (appreciate its a simpler environment) [18:59] so the underlying problem comes from cloud-init [18:59] # Added by cloud-init [18:59] 127.0.1.1 domU-12-31-39-0B-E0-59.compute-1.internal domU-12-31-39-0B-E0-59 [18:59] Its not really a "problem" per se [18:59] switched to ec2, no peering with cassandra [18:59] if it didn't have the *.internal address it would just work [18:59] its meant to make sure the machine can resolve its hostname [18:59] that is in /etc/hosts [18:59] SpamapS, we can do that from the second part of that line [18:59] hostname [18:59] domU-12-31-39-0B-E0-59 [19:00] we don't need the domU-12-31-39-0B-E0-59.internal entry [19:00] hazmat: right, but the FQDN needs to resolve to the local machine [19:00] its actually something to do with MTA's IIRC [19:00] that's what causes the problem.. it doesn't need to link back to 127.0.1.1 just to the ip address? [19:00] smoser knows better than I do [19:01] if i remove that entry from the line it all works as expected [19:01] jamespage,hazmat: I think the way the cloud-init stuff is injecting the ip is a bit dubious as well, [19:01] That entry is not going away [19:01] I mean, raise a bug [19:01] :-) [19:01] IIRC its there for a good reason [19:01] SpamapS, but the entry would work with just -> 127.0.1.1 domU-12-31-39-0B-E0-59 [19:01] afaics [19:02] Would work for what? [19:02] You're assuming you know every way that the FQDN is used [19:02] i dunno... probably juju ;-) [19:02] SpamapS: FWIW, I don't think it's a common convention to use a loopback for the FQDN [19:02] i don't think fqdn -> localhost is something that should be assumed [19:02] its already resolvable [19:03] you could turn it off [19:03] We might even be setting it [19:04] cloud-init is setting it.. [19:04] * hazmat checks cloud-init config [19:04] manage_etc_hosts must be set to skip that bit [19:05] smoser: around? [19:05] SpamapS: you might be lucky - think he's on hols today [19:05] sanitize hosts file for system's hostname to 127.0.1.1 (LP: #802637) [19:05] <_mup_> Bug #802637: cloud-init needs to check for hostname to resolve to 127.0.1.1 < https://launchpad.net/bugs/802637 > [19:06] doh [19:06] no explanation as to why that is asserted [19:06] i don't see any reasoning in there on why fqdn should be aliased to localhost.. that looks like it introduced a bug [19:06] but my guess based on the time frame is this was centered around orchestra starting to use cloud-init [19:07] m_3, ping [19:07] whoops [19:07] lynxman, ping [19:09] looks like scott committed it in rev 409 [19:10] hazmat: pong [19:11] lynxman, do you know why fqdn needs to alias to localhost in cloud init..
ie the reasoning behing the fix for bug 802637 [19:11] <_mup_> Bug #802637: cloud-init needs to check for hostname to resolve to 127.0.1.1 < https://launchpad.net/bugs/802637 > [19:11] hazmat: SpamapS: we added this to circumvent some instances not having an entry to 127.0.1.1 and follow debian guidelines [19:11] hazmat: since we found cases of daemons (MTA, rabbitmq, etc) that would refuse to start or delay start due to lack of hostname resolving to 127.0.1.1 [19:12] lynxman, wouldn't hostname -> 127.0.1.1be fine [19:12] resolving to 127.0.1.1, or resolving at all? [19:12] lynxman, is fqdn also needed? [19:12] SpamapS: it was an added requirement, I did hostname then we added FQDN as well, the debian guideline implies both [19:12] I do recall that rabbit used to be really picky about its fqdn [19:12] SpamapS: just hostname did the job tbh but we want to be legal [19:12] * hazmat remembers that as well [19:13] SpamapS: btw thanks for packaging juju, I've started the macports work to release it asap [19:14] lynxman: sweet, I have a mac here running Lion so let me know when you're done and I can test it. [19:14] SpamapS: will do, I need to create the packages for the new dependencies (pydot and py-apt) [19:15] btw is pyapt strictly needed on non ubuntu environments? [19:15] lynxman: this 127.0.1.1 thing is rather confusing.. I see this reference to it in debian's docs.. but can you point me to the guidelines you guys were following? http://www.debian.org/doc/manuals/debian-reference/ch05.en.html [19:16] lynxman, i don't think that fqdn is appropriate in cases where its already resolvable [19:16] to be aliased to localhost [19:16] SpamapS: exactly that, point 5.1.2 [19:16] lynxman: You may have to patch that stuff out, I don't think anybody had time to think about the ramifications of querying apt data on Mac OS X ;) [19:16] SpamapS: lol, will do [19:17] SpamapS: as said, several people have touched it, I did the initial implementation then adam_g and smoser [19:17] SpamapS: so it would do more stuff like get the fqdn from the metadata and such [19:17] SpamapS: I just hardcoded domain_name to localdomain to avoid conflicts in long term, but it wasn't usable on some scenarios [19:18] lynxman, its not.. its only used for local provider [19:18] Right, so.. [19:18] that paragraph is really unclear to me. [19:18] which won't work on osx anyways [19:18] hazmat: isn't it also used for juju-origin ? [19:19] SpamapS: to me as well, we implemented this in Dublin (city) so we went to the #debian channel to ask :) [19:19] SpamapS, hm.. its not. 
[19:19] SpamapS, it uses the cli for that one [19:19] SpamapS: this stuff generates quite a debate so hey, any opinion is welcome :) [19:19] SpamapS, failing to find the cli it defaults to ppa [19:19] actually to distro [19:19] thats good [19:19] SpamapS, that's only the case if juju-origin is not set [19:20] else we just use the juju-origin value [19:20] Well I'd say that there is a requirement for machines to be able to resolve their FQDN [19:20] SpamapS: and I'd agree [19:20] I'd also say that cloud-init is in charge of that on some level because it is stepping in for netcfg [19:20] SpamapS: for me the important part was getting a resolvable hostname, which has bigger ramifications [19:20] and in the case of ec2 that's already true without the alias [19:21] However, I do feel that the order of ops there is backwards when deciding whether or not to write it to /etc/hosts [19:21] If the FQDN is resolvable in DNS, it must be kept *out* of /etc/hosts [19:21] I think the debian reference is right.. [19:22] but since the resolution goes 1 -> 3, the fulfillment should go 3 -> 1 [19:23] I also think this may cause some issues for people using 11.10 in the cloud [19:24] SpamapS: so I'd ping smoser and ask his input as well, I'll be in Millbank tomorrow in case you need to get ahold of the release team :) [19:25] Its probably just a high prio SRU at worst [19:25] not a release blocker [19:25] * SpamapS moves discussion to #ubuntu-cloud tho [19:25] SpamapS: just sayin' [19:25] anyhow I need to run [19:25] catch you guys later o/ [19:25] later [19:31] Hey guys, I'm trying to troubleshoot a juju/orchestra setup, which I started with the instructions here: https://help.ubuntu.com/community/Orchestra/JuJu [19:33] I ran the bootstrap command ok, but I'm stopped at deploying the mysql example: http://paste.ubuntu.com/705567/ [19:34] jason_, that's removing the triple quotes in the addresses given in the example? [19:35] * hazmat wanders off to remove the triple quotes from the wiki page [19:36] * SpamapS filed bug 871966 to document the cloud-init discussion [19:36] <_mup_> Bug #871966: FQDN written to /etc/hosts causes problems for clustering systems < https://launchpad.net/bugs/871966 > [19:36] hazmat, yeah, removing the triple quotes, then enabling netboot in the profile I had in orchestra got me here [19:36] got me past the bootstrap [19:37] I'm not 100% sure what the mechanism is supposed to be here -- I have an orchestra server and one vm installed via orchestra [19:38] jason_, can you wget on /provider-state and paste it [19:38] jason_, the triple quotes are gone from both the webdav url and the orchestra server i assume... but the error is the same as it was with them [19:39] juju will store the address of the zookeeper server into that webdav url [19:39] http://paste.ubuntu.com/705571/ [19:39] oh wait [19:39] provider state [19:40] which it tries to contact on deploy [19:40] hazmat, hmm /provider_state or /webdav/provider_state are 404s [19:41] oh [19:41] hazmat, it's zookeeper-instances: [MTMxODEwNzQ0NS43NTE2NjE2MS4wNzExNQ] [19:42] so that looks good [19:42] is orchestra supposed to be enlisting the system I created through it to be the juju server? [19:43] jason_, no.. juju is going to create its own server through orchestra [19:43] but yes the system does need to be registered to the management class in orchestra and setup for netboot [19:43] Ok, that might be an issue -- orchestra is running on virtualbox...
not sure how it'd create a new system [19:43] on its own [19:44] the system I installed from it, I did by creating a vm and pxe booting [19:46] SpamapS, fwereade_ if you have a moment, i'm a bit out of my element on debugging orchestra setups [19:49] ack [19:49] jason_, yeah.. i'm not sure that setup is going to work [19:49] orchestra + juju via a vm [19:49] maybe if the machines are off and setup for netboot [19:49] jason_: so you have a system listed in orchestra, that you know pxe boots from the pxe/tftp/etc that orchestra is providing? [19:50] it could work w/ a bridge net [19:50] jason_: can you do a 'cobbler listvars --name=the-system-name' and pastebin it? (note that there may be sensitive stuff in there) [19:51] SpamapS, yes, it does pxe boot from cobbler, I'll paste that [19:53] SpamapS, No such command: listvars [19:53] * SpamapS will RTFM instead of reading his own memory [19:56] jason_: while I figure that out.. is it in the "available" management class? [19:57] SpamapS, under management classes in the left hand menu? [19:58] jason_: In the system definition itself, the last sub-menu is 'Management' [19:58] SpamapS, I have orchestra-juju acquired and available in the selected box [19:59] I think one of those had been in the available box, and I moved it while trying things out [19:59] And currently, that system is installing anew -- I kicked that off a little bit ago [19:59] jason_: so the way juju's orchestra provider works, it will only grab systems that are *netboot enabled* and in the 'available-mgmt-class' from environments.yaml [20:00] ahh this is it [20:00] jason_: cobbler system dumpvars --name=name-of-system [20:01] SpamapS, I get an error there: TypeError: cannot marshal None unless allow_none is enabled [20:01] jason_: weird, did you run it with sudo? [20:02] it may be necessary actually [20:02] SpamapS, yes, it didn't let me otherwise [20:02] ok how about sudo cobbler system list [20:02] SpamapS, oneiric01.ubuntu.lan [20:02] that's my guy [20:03] jason_: ok, so you did 'cobbler system dumpvars --name="oneiric01.ubuntu.lan"' and got the none error? [20:04] Ok, I had bad syntax [20:05] jason_: btw, if you don't have it already, the package 'pastebinit' is useful in these instances. :) [20:05] you can just | pastebinit [20:05] ah, installing now [20:05] pastebinit++ [20:06] SpamapS, http://paste.ubuntu.com/705582/ [20:07] jason_: mgmt_classes : ['orchestra-juju-available', 'orchestra-juju-acquired'] [20:07] jason_: it should only be in 'orchestra-juju-available' [20:07] jason_: take the other one out [20:07] SpamapS, ok I'll move that back [20:08] jason_: Other than that, it should work as expected [20:08] jason_: since its already installing, its *possible* that it will come up fine. [20:09] jason_: actually it most definitely should come up fine [20:12] SpamapS, ok, I've got that class straight in here. Install is wrapping up -- another thing, my system currently is on a network where it can only talk to the orchestra server [20:13] I can add another nic -- or would it be better to switch that nic to the "public" network w/in my network [20:14] jason_: when the box tries to install juju, things may go wrong then. Depends on if your orchestra server can get to the internet.
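[ed: the selection rule SpamapS describes at [19:59], condensed into a short illustrative sketch -- not juju's actual orchestra provider code; the dict shape follows the cobbler dumpvars output pasted above:]

    AVAILABLE_CLASS = "orchestra-juju-available"  # named in environments.yaml

    def acquirable(system):
        # `system` is a dict shaped like `cobbler system dumpvars` output;
        # only netboot-enabled systems carrying the available class are taken
        return bool(system.get("netboot_enabled")) and \
            AVAILABLE_CLASS in system.get("mgmt_classes", [])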
[20:14] jason_: the default profile uses the orchestra server's squid proxy to get to the net [20:14] SpamapS, the orchestra server can for sure [20:14] ah [20:14] jason_: so things *should* still work [20:15] SpamapS, ok, it's up -- I'm going to try the deploy [20:15] I've done fully disconnected installs with a local mirror of Ubuntu, so I know it works. [20:17] jason_: the problem is, deploy now wants *another* system [20:18] SpamapS, ah, ok, so if I mint another, then that ought to work [20:18] jason_: right. Currently there's a 1:1 relationship between deployed units of a service and machines. [20:19] jason_: so bootstrap runs on the first machine, and then each deploy/add-unit after that allocates another machine [20:19] SpamapS, ah, so wordpress takes 3 servers [20:19] jason_: at the moment, yes. [20:19] got it [20:20] jason_: bug 806241 will hopefully be done soon. :) [20:20] <_mup_> Bug #806241: It should be possible to deploy multiple units to a machine (service colocation) < https://launchpad.net/bugs/806241 > [20:20] <_mup_> juju/config-get r392 committed by kapil.thangavelu@canonical.com [20:20] <_mup_> config get subcommand to retrieve current settings or service schema [20:20] jason_: https://bugs.launchpad.net/juju/+bugs?field.tag=production there's a list of issues deemed necessary for making juju useful in production [20:26] SpamapS, cool, thanks. I'm spinning up a couple new vms. It looks like my first vm is complaining about some things -- tough to see the errors as they come up though -- bazaar has encountered an internal error is part of it [20:27] jason_: did you use the environments.yaml from the wiki directly? [20:27] jason_: juju-origin: lp:juju/pkgs isn't going to work [20:27] jason_: suggest removing that line. [20:27] jason_: also are you using juju from the PPA, or 11.10 ? [20:27] * SpamapS realizes its 1:30pm and goes to eat [20:27] jason_: bbiab [20:30] SpamapS, ok -- juju from bazaar [20:30] SpamapS, I'll change that enviro bit [21:11] Good progress today, and at a good break point.. I'll step out and do something outside.. back later. [21:14] jason_: So you are running juju by checking out lp:juju ? [21:14] Hi all [21:16] I have some wonderings ... [21:17] I'm hesitating between juju, cloudformation, and { chef, puppet } [21:18] these tools are somewhat complementary [21:18] but part of the scope is the same [21:18] is there someone using juju with cloudformation? [21:19] xerxas: If I needed to put up critical production systems tomorrow, I'd go with chef or puppet... knowing that I can convert all of my chef/puppet to charms once juju is "production ready" [21:19] xerxas: I don't think Juju and cloudformation would be compatible together. [21:20] SpamapS: ok, so for iaas using cloudformation, for application management and its configuration files, puppet or chef? [21:20] SpamapS: ahh, interesting "Juju and cloudformation would be compatible together", how come? [21:20] juju creates some instances, this is why it's not compatible? [21:21] so cloudformation is kind of "very static"? [21:21] xerxas: they both use cloud-init to seed themselves into the instance [21:22] SpamapS: I'm not forced to use cloud-init with cloudformation, am I? [21:22] xerxas: I personally wouldn't use cloudformation since its likely to never be available on any other IaaS provider [21:22] SpamapS: right, this is why I'm searching for something else [21:23] xerxas: cloudformation uses cloud-init to make the instance do what it wants. So does Juju.
but puppet, chef don't bootstrap infrastructures, or create resources (chef knows how to create ec2 instances with knife, but no more, and knife is only client side) [21:23] SpamapS: what about elasticIPs, autoscale, securitygroups? [21:23] xerxas: these bugs are all known problems that we think would be issues for using juju in production: https://bugs.launchpad.net/juju/+bugs?field.tag=production [21:24] I mean, with cloud formation, I can go up to "create a whole infrastructure for my application in my continuous integration" [21:24] xerxas: if you are willing to a) work around them, or b) help fix them, then juju would be a good choice for you today. :) [21:24] ;) [21:25] xerxas: right, thats exactly what we want to do with juju.. and you can do it right now.. but you will be accepting some risk [21:25] ok, I'm pretty much ok with accepting risk ;) [21:26] I want to be on the bleeding edge [21:26] Ruby dev? ;-) [21:26] but have not much time to contribute to juju [21:26] xerxas: thats ok, these will absolutely be solved by the 12.04 release of Ubuntu [21:27] no, system administrator ;) (using python as much as ruby ;) ) [21:27] xerxas: have you played with juju yet? [21:27] yes [21:27] xerxas: how far did you get? [21:28] 2 years administrating 60 servers with puppet on ec2 (from 0 to 60 servers, bootstrapped all the infrastructure), then used chef for 1 year, then now, testing juju and testing cloudformation [21:28] SpamapS: I could deploy charms ;) [21:28] SpamapS, I'm running juju w/ lp:juju [21:30] jason_: any reason you're not using the PPA or the one from 11.10 ? [21:31] SpamapS, I was using the one from 11.10, I actually had an issue with that where the version I had appeared to mismatch with what the examples needed [21:31] xerxas: it would be *really* helpful to have some bleeding edge ops feedback with juju, so if you're willing to be patient with us, WELCOME! :) [21:31] SpamapS, then with this howto, it suggested running from lp, so I did that [21:31] jason_: which examples were you reading from? r398 was just uploaded to 11.10 last night, and brings it up to date with most of the upstream docs. [21:32] SpamapS, from the /usr/share/docs [21:32] SpamapS, this was fri [21:33] jason_: yeah, the one in 11.10 now is going to be less likely to change out from under your feet. :) [21:33] SpamapS, or maybe thurs [21:33] SpamapS, I'll install that now [21:34] jason_: that should also eliminate your bzr branching problem since it will automatically choose 'distro' as your source, and that will let you use the squid proxy in your orchestra server [21:34] SpamapS, how do I see what tasks the cobbler server is sending out, and clear those -- all three of my systems came up and tried those broken bzr instructions [21:34] cool [21:36] jason_: juju destroy-environment first [21:37] jason_: that will clear everything out of the webdav server and should reset all the cobbler system records [21:37] SpamapS, sweet [21:37] SpamapS: I would like to help and give feedback, I'm just evaluating the solution I'll use ... [21:38] so far, juju seems an intermediate between cloudformation and puppet [21:38] juju plays on the infrastructure side and application side, this seems interesting to me [21:40] SpamapS, so I bootstrapped again, and it's a matter of waiting for my systems to poll for instructions?
or do they need to restart [21:41] jason_: since you don't have power control defined, you have to manually reboot them [21:41] cool [21:42] jason_: if you had a PDU of some kind that cobbler can talk to, it would have powered them off/on [21:50] SpamapS: anyway, thanks for your answer [21:50] +s [21:50] still wondering how to use it, and how ... ;) [21:52] xerxas: we're here if you have questions. :) [21:52] hmm.. seems in the run up to 11.10 we have introduced some python 2.7-isms [21:52] failing tests on lucid. :-P [21:53] SpamapS, log? [21:53] still running [21:53] but at least 5 thus far [21:53] A lot of them seem centered around checking for "too many args" [21:54] SpamapS, got it [21:54] * SpamapS vows to get jenkins setup with some LXC slaves soon. [21:55] SpamapS, are you referencing .. https://code.launchpad.net/~clint-fewbar/+recipe/juju-daily-test [21:55] i see all kinds of odd things there [21:55] could not init jvm.. etc [21:55] hazmat: that one doesn't even build yet on lucid because dh_python2 is missing [21:55] which is exactly what I'm working on right now [21:55] ah [22:00] FAILED (skips=7, failures=5, errors=3, successes=1549) [22:00] will pastebin the log.. [22:01] http://paste.ubuntu.com/705630/ [22:02] hazmat: are these problems with argparse? [22:02] twisted.trial.unittest.FailTest: 'juju: error: unrecognized arguments: fum' not in 'usage: juju unexpose [-h] [--environment ENVIRONMENT] service_name\njuju unexpose: error: unrecognized arguments: fum\n' [22:03] SpamapS, odd those tests have been going for a while [22:03] SpamapS, the test is being strict about checking error output [22:03] it looks like a variance to the output [22:04] s/juju: error/juju unexpose [22:04] pretty minor [22:04] yeah its all minor stuff [22:04] we could just be less exact about it and capture from 'error:' [22:04] compare that is [22:07] so yeah just drop the preceding command.. [22:07] looks like an argparse difference that doesn't matter [22:13] SpamapS, that's correct, we had to update some of the command tests when we moved to 2.7 because of this [22:19] jimbaker: so was there a definite decision to drop 2.6 support? [22:21] SpamapS, i do not believe so. in particular, we would expect these commands to be executed on a client running 2.6 like os x [22:24] however, it has been the case for these tests since probably before budapest. i suppose we could use the python version to determine the error text, or relax it [22:27] Lion has 2.7 [22:33] SpamapS, I'm still getting bzr errors when my systems come up -- also, am I right that these need to keep reinstalling in order to get new commands from orchestra? [22:34] jason_: no, the install is just the way that you get a known-clean environment for juju to work with. [22:34] wut's up charmers! [22:34] jason_: the idea is once the agent starts, you don't have to reinstall anymore. :) [22:35] SpamapS, I commented that lp:juju/pkg line out, but maybe that wasn't sufficient... [22:35] My systems all reinstalled, came up, and failed on that [22:36] hola cole, tis release time [22:36] hazmat: nice! [22:37] jason_: can you pastebin /var/log/cloud-init-output.log ? [22:39] SpamapS, i would be more concerned about ensuring the test suite, perhaps just a subset, runs successfully on os x [22:40] jimbaker: indeed, would be good to have an OS X VM somewhere running as a jenkins slave [22:41] SpamapS, http://paste.ubuntu.com/705647/
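[ed: an illustrative take on the relaxation hazmat suggests at [22:04] -- not the actual juju test code -- comparing argparse output only from 'error:' onward so the 'juju:' vs 'juju unexpose:' prefix difference between python 2.6 and 2.7 stops mattering:]

    def from_error(output):
        # keep only the part from "error:" onward, dropping the variable prefix
        idx = output.find("error:")
        return output[idx:] if idx >= 0 else output

    expected = "juju: error: unrecognized arguments: fum"
    actual = "juju unexpose: error: unrecognized arguments: fum"
    assert from_error(expected) == from_error(actual)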
so this topic came out of the openstack meet up around donate: thoughts? https://blueprints.launchpad.net/juju/+spec/dynamic-juju [22:45] jason_: hm, that was less helpful than I thought it would be [22:46] jason_: perhaps use 'juju-origin: distro' in your environments.yaml [22:48] cole: as I've said before.. you can write that now without changing anything in juju and just have it run add-unit/remove-unit based on any number of metrics coming from any number of metric gathering services. [22:50] Basically I see no roadblocks to just doing that in charms. [22:50] SpamapS, ok, I'm making that change [22:50] jason_: I'm updating that wiki page too.. its woefully off [22:50] SpamapS: fantastic, i've not actually heard that said. Last conversation I had was around M-Collective and Gustavo said that it was a roadmap item. [22:52] cole: well we can always improve it. [22:53] cole: but really, how hard would it be to just have a charm that runs some kind of data collector and applies rules to the collected data [22:53] This isn't exactly a new idea. :) [22:53] juju just makes it a lot more flexible [22:54] SpamapS: agreed, I should have been more specific. Basically nebula wants to help with getting information out of ganglia and nagios to automatically do the scaling, or if the direction is agent based…so be it. [22:55] ganglia is agent based. :) [22:55] and nagios can be [22:55] dedicated agent ;P [22:55] message here being, less daemons the better [22:56] cole: yeah I'd say JFDI and if juju is getting in your way, thats the time to look at adding stuff to juju. [22:56] I do think that juju will have a rich plugin arch at some point, and that will be the place where this lives. [22:56] But I don't have nearly as much influence as niemeyer. :) [22:57] cole: there are some who would say more daemons means more separation of concerns. :) [22:57] which should lead to more robust systems [22:58] probably more important to make sure only *one* daemon does collection than to try and make one daemon to rule them all [22:58] otherwise I'd say you should look at adding this to upstart :) [23:00] cole: there is one bug that you'll probably need fixed before this becomes easy.. [23:00] cole: bug 806241 will allow multiple charms in a single machine/container [23:00] <_mup_> Bug #806241: It should be possible to deploy multiple units to a machine (service colocation) < https://launchpad.net/bugs/806241 > [23:01] cole: that would be needed to make sure a single collection service was deployed everywhere. [23:01] cole, first step would be to get an api endpoint onto juju [23:01] hazmat: bah, cmdline for the first run. ;) [23:01] API does seem like something that needs to happen *soon* though [23:01] SpamapS, actually the command line would switch to using the ui [23:01] s/ui/api [23:02] should make the cli a bit faster [23:03] i like it! [23:03] SpamapS, another thing from my env.yaml -- I have default-series: oneiric-juju -- I think I added that when it complained about a default series [23:03] does that look ok? [23:04] jason_: you need it to be default-series: oneiric [23:04] jason_: that has nothing to do with the cobbler profile [23:05] jason_, its more like what release/version of ubuntu do you want to use [23:05] yes, makes sense [23:06] jason_: Hmm.. how did it complain about the default series?
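[ed: a hedged sketch of the "just do it in charms" approach SpamapS describes at [22:48] -- illustrative only; the service name, thresholds, and both helper functions are invented, and a real version would want the colocation fix from bug 806241:]

    import subprocess
    import time

    SERVICE = "wordpress"  # hypothetical service
    HIGH, LOW = 0.8, 0.2   # hypothetical load thresholds

    def get_load():
        """Pull a load figure from ganglia/nagios/etc; hypothetical."""
        raise NotImplementedError

    def pick_removable_unit():
        """Choose a unit name such as wordpress/2 to retire; hypothetical."""
        raise NotImplementedError

    while True:
        load = get_load()
        if load > HIGH:
            subprocess.call(["juju", "add-unit", SERVICE])
        elif load < LOW:
            subprocess.call(["juju", "remove-unit", pick_removable_unit()])
        time.sleep(60)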
[23:06] jason_: The default value for this should actually work [23:06] My systems have started not pxe booting -- does it seem like that's because it's what orchestra intends, or some other problem [23:06] niemeyer, that value is never validated [23:07] hazmat: I mean that the value shouldn't have to be changed [23:07] niemeyer, there was no value in the sample I started with, as I recall, juju complained about it when I was trying to bootstrap [23:07] it does have to be set for osx i believe [23:07] since it can't be inferred [23:08] hazmat: Ahh, ok, ECONTEXT, sorry [23:08] hazmat: Hmm.. even though, I'm pretty sure it's part of the default config [23:08] * niemeyer checks [23:08] It is indeed [23:08] hazmat: It shouldn't complain anyway [23:16] So, more store.. [23:17] SpamapS, when I pastebinned my cloud-init-output log earlier, it was the wrong log, here it is: http://paste.ubuntu.com/705669/ [23:17] that's from one that just ran [23:21] jason_: oh that looks like a bug in the etckeeper package [23:22] SpamapS: OMG.. the never-going-away UnicodeDecodeError.. [23:22] SpamapS, should that not be affecting juju? [23:23] SpamapS: Isn't it bzr itself, actually? [23:24] jason_: It's breaking the installation of packages [23:24] jason_: Do you have accents in your current pwd [23:24] ? [23:25] Hmm.. no, it's not the current pwd [23:25] The path isn't clear from the traceback [23:25] niemeyer, no [23:25] it' [23:25] it's ubuntu -- the default [23:25] oh [23:26] yeah, I get you [23:26] no [23:30] jason_: Something like this is what's going on there: [23:30] >>> u"é" < "é" [23:30] Traceback (most recent call last): [23:30] File "<stdin>", line 1, in <module> [23:30] UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 0: ordinal not in range(128) [23:30] Its etckeeper [23:31] trying to bzr add /etc [23:31] jason_: you can remove 'ubuntu-orchestra-client' from the kickstart/preseed for oneiric-i386-juju .. none of that stuff is needed [23:32] SpamapS, will do [23:32] jason_: Are you consciously setting the encoding to 'ANSI_X3.4-1968'? [23:33] we had similar problems with orchestra+juju last week in Boston [23:33] niemeyer, no [23:33] jason_: Ok.. it's likely a default from ascii somewhere then [23:33] jason_: I think those errors are actually ok and not breaking your install [23:35] Its something broken with the way cloud-init runs on /dev/console [23:36] need etckeeper to be fully seeded or it stops and asks for stuff [23:36] SpamapS, there's a $SNIPPET('orchestra_client_package') in /var/lib/cobbler/kickstarts/juju.preseed -- is that the line to remove? [23:37] jason_: I think so yes [23:38] jason_: still I think you may be up and running, did you try a 'juju status' ?
[23:38] jason_: you may also see a debconf prompt on tty1 [23:38] jason_: if so you have to stop the getty and press enter through that [23:38] SpamapS, no, but I'm in the process of reinstalling on all three right now [23:38] if you see it, let me know, I'll report the bug [23:38] * SpamapS 's brain is fuzzy from all the churn last week [23:39] SpamapS, it takes forever to keep doing that, but it seems like the only way to get them to try again, maybe I'm wrong there [23:41] jason_: once you get zookeeper up and running, you shouldn't have to repeat the install [23:41] SpamapS, ok, juju status was interesting, complaining that my systems aren't reachable from my client -- they're on a network only with the server, so that's something [23:41] jason_: yeah you have to be able to reach them by ssh [23:41] jason_: simplest thing to do is to run the client from the orchestra server [23:41] SpamapS, so once bootstrap completes, zookeeper is up? [23:42] jason_: no, the other way around [23:42] jason_: bootstrap returns as soon as it has told cobbler to boot the machine in a bootstrap configuration.. [23:42] got it [23:42] jason_: then you have to basically poll the machine to see if zookeeper is up and running [23:42] jason_: hopefully once you have a running environment, you don't have to do bootstrap anymore. [23:43] SpamapS, is that with juju status? [23:43] jason_: thats the simplest way yes [23:44] SpamapS, cool -- about orchestra and monitoring, is there a separate nagios web interface? [23:45] jason_: I believe nagios ends up running on the orchestra-monitoring-server .. which is not necessarily the same box as orchestra-provisioning-server. [23:45] jason_: but I'm no Orchestra expert. :-P [23:46] jason_: #ubuntu-server has a few people who are, and also the mailing list will get a lot of answers. Docs are still pretty hard to come by as we're still in tech-preview "best effort" mode. [23:46] SpamapS, ok -- yeah
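[ed: a closing illustration of the post-bootstrap polling SpamapS describes at [23:42] -- a minimal sketch, assuming `juju status` exits non-zero while the environment is still unreachable:]

    import subprocess
    import time

    # retry `juju status` until zookeeper comes up; interval is illustrative
    while subprocess.call(["juju", "status"]) != 0:
        time.sleep(30)
    print "environment is up"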