[07:39] <_mup_> Bug #873907 was filed: Security group on EC2 does not open proper port <juju:New> < https://launchpad.net/bugs/873907 >
[07:40] <shang> https://bugs.launchpad.net/juju/+bug/873907
[07:40] <_mup_> Bug #873907: Security group on EC2 does not open proper port <juju:New> < https://launchpad.net/bugs/873907 >
[08:06] <fwereade> shang: I think you need a "juju expose wordpress"
[08:07] <fwereade> shang: and, I think you're right about the docs, someone mentioned them not getting updated
[08:07] <fwereade> shang: I should probably try to find out how that's all meant to work :)
[08:08] <shang> fwereade: so after the add-relation, just run the expose command?
[08:08]  * shang testing it
[08:09] <fwereade> shang: that should be it
[08:09] <fwereade> shang: here's the critical section of the user-tutorial docs, that hasn't landed for some reason: http://paste.ubuntu.com/707807/
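The sequence fwereade describes (relate, then expose) looks like this end to end. `juju` is stubbed below so the sketch can run outside a real bootstrapped environment; the commands themselves are the ones from the conversation:

```shell
# Stub so this runs standalone; drop the stub in a real bootstrapped env.
juju() { echo "juju $*"; }

juju deploy wordpress
juju deploy mysql
juju add-relation wordpress mysql
juju expose wordpress   # should open the charm's declared ports in the EC2 security group
juju status             # open-ports on the wordpress unit should now list 80
```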
[08:11] <fwereade> shang: (sorry I missed you before)
[08:12] <shang> fwereade: no worries ;-)
[08:13] <shang> fwereade: cuz I know Jane did a demo a few days ago, so I was pretty sure things should be working, just can't figure out the missing pieces...
[08:15] <fwereade> shang: sadly I'm not really sure where to start looking for the magical documentation-updater
[08:16] <fwereade> shang: it might be helpful, just for now, to grab trunk and "cd docs && make html"
[08:17] <fwereade> shang: but I have a documentation bug to fix (unless someone else has already grabbed it) so once that's fixed I'll be bothering people incessantly about auto-updating
[08:19] <fwereade> shang: if you confirm it's working for you, I'll mark the bug invalid and add an explanation
[08:19] <shang> fwereade: the expose command should take care of the security groups part, right?
[08:19] <fwereade> shang: that's right
[08:19] <fwereade> shang: still having problems?
[08:20]  * shang tried the expose, but didn't see EC2 security group open the port
[08:20] <shang> fwereade: yeah
[08:20]  * fwereade is perplexed and goes off to peer at the code
[08:21] <fwereade> shang: did you get a "Service wordpress is exposed" message, but nothing happened?
[08:21] <shang> fwereade: from the status command, the wordpress open-ports:[]
[08:21] <shang> fwereade: yeah
[08:22] <fwereade> shang: hmm; can you ssh to the bootstrap node and see if there's anything in the provisioning agent log?
[08:25] <shang> fwereade: it looks fine from where I can see it
[08:25] <shang> http://paste.ubuntu.com/707819/
[08:29] <fwereade> shang: I remain perplexed :(
[08:29] <fwereade> shang: let me see if I can repro
[08:38] <shang> fwereade: ok, and even if I manually open port 80 in the wordpress instance's security group, wordpress has not been configured... let me know if that's just me...
[08:39] <fwereade> shang: hm, that then sounds like a problem with the charm
[08:40] <shang> fwereade: does port 3306 need to be open for the service to be configured?
[08:41] <fwereade> shang: I'm afraid I don't know, everything I've done has been on the provider side... I've not had much experience debugging charms
[08:41] <shang> fwereade: ah, ok...
[08:43] <fwereade> wrtp, sorry, I *completely* missed you
[08:43] <fwereade> shang: just a suggestion
[08:44] <wrtp> fwereade: it was nothing anyway :-)
[08:44] <fwereade> shang: FWIW, deploys and exposes fine for me
[08:44] <wrtp> fwereade: BTW it was me that submitted that doc fix - isn't the documentation automatically updated?
[08:45] <fwereade> shang: what's your juju-origin, and where did you get your charms from?
[08:45] <fwereade> wrtp: I recall jimbaker saying that the auto-updating wasn't working, but I was distracted and never followed up
[08:45] <wrtp> fwereade: i see
[08:46] <shang> fwereade: I was using the -> sudo apt-get install charm-tools; charm update examples; charm getall examples
[08:46] <shang> fwereade: let me refresh them, perhaps
[08:47] <fwereade> shang: I'm not sure this is a relevant question, but just in case: do the charms you're deploying have a "revision" file?
[08:49] <shang> fwereade: yes, wordpress: rev. 30
[08:49] <shang> mysql rev. 103
[08:53] <fwereade> shang: hm, I'm feeling pretty short on ideas :(
[08:53] <shang> fwereade: um... :(
[08:54] <fwereade> shang: it might be worth trying to just deploy wordpress from scratch with a debug-log running
[08:54] <fwereade> shang: (all I mean is that the problem isn't obvious, not that I'm giving up ;))
[08:55] <shang> fwereade: thanks! :D
[08:55] <shang> fwereade: Let me give that another try
[08:55] <fwereade> shang: (open a separate terminal and "juju debug-log" before you deploy)
[08:56] <shang> fwereade: actually, I will run it on a different machine, maybe a fresh one and see if I can reproduce the issue
[08:57] <fwereade> shang: cool, I'm about to try with the charms from charm-tools instead of trunk, see if I can repro that myself
[08:59] <shang> fwereade: thanks a lot
[09:19] <shang> fwereade: what is the command (or location) you get the charms?
[09:20] <fwereade> shang: I've always just tested with the examples repo in trunk
[09:20] <shang> fwereade: ok
[09:26] <fwereade> shang: fyi, I'm seeing the same problem
[09:26] <fwereade> shang: no idea why yet ;)
[09:26] <shang> fwereade: so it is because the charm-tools
[09:26] <fwereade> shang: that seems likely
[09:27] <fwereade> shang: but I can't say exactly what yet
[09:27] <shang> fwereade: ok, at least we know what is causing it... which is a good start :-)
[09:27] <fwereade> shang: yep -- thanks :)
[09:48] <hazmat> g'morning
[09:48] <fwereade> hazmat: morning
[09:48] <hazmat> juju docs ticket is pending here fwiw.. https://portal.admin.canonical.com/48456
[09:49]  * hazmat tries to catch up on the back log
[09:49] <fwereade> hazmat: sweet
[09:50] <fwereade> ty
[09:50] <hazmat> fwereade, np
[09:50] <hazmat> shang, so you've got a service deployed and exposed and its not available?
[09:50] <hazmat> and juju status says its 'started'?
[09:51] <hazmat> shang, could you paste bin the output of 'juju status'
[09:51] <hazmat> oh
[09:51] <hazmat> fwereade, you're seeing it too?
[09:51] <fwereade> hazmat: yeah, I guess it's something to do with the wordpress charm
[09:52] <fwereade> hazmat: I'm floundering along in a semi-helpful way, but this is the first charm I've made any attempt at debugging ;)
[09:52] <hazmat> shang, fwereade, so getting the unit log is pretty helpful to understanding unit specific problems
[09:53] <hazmat> shang, fwereade before deploying, you can get it directly from juju if you start a juju debug-log in a separate shell before deploying/relating .. it captures all the logs from all the agents..
[09:54] <hazmat> shang, fwereade after the fact, you can use 'juju ssh wordpress/0' to login directly to the machine
[09:54] <hazmat> the log for a unit lives at /var/lib/juju/units/wordpress-0/formula.log   i believe
[09:55] <fwereade> hazmat: hm, I was aware of the existence of juju ssh, I just never thought of actually using it :/
[09:55] <hazmat> fwereade, no worries, we have better tools as well..
[09:55] <fwereade> hazmat: debug-log is helpful, indeed
[09:55] <shang> fwereade: I ran the command: bzr branch lp:~charmers/charm/oneiric/mysql/trunk mysql
[09:56] <shang> fwereade: still getting the same issue
[09:56] <shang> hazmat: let me get u the logs
[09:56] <hazmat> fwereade, shang  real debugging.. is using juju debug-hooks wordpress/0, right after deploying the unit, it will set up a tmux session on the machine
[09:57] <hazmat> and pop up new windows for hook executions, with all the hook env variables setup. you can manually execute the hook or interactively edit/perform work
[09:57] <hazmat> its good to have a log first of what's wrong though
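The debugging flow hazmat lays out condenses into a short command sequence. `juju` is stubbed below so the sketch runs outside a real environment; in real use, debug-hooks drops you into an interactive tmux session on the unit's machine:

```shell
# Stub so the sketch is runnable standalone; real use needs a bootstrapped env.
juju() { echo "juju $*"; }

juju debug-log &              # in a separate shell: stream logs from all agents
juju deploy wordpress
juju debug-hooks wordpress/0  # tmux session; each hook execution pops a new
                              # window with the hook env vars set, where you can
                              # run the hook by hand, then exit when it's done
juju ssh wordpress/0          # after the fact: log in and read the unit log,
                              # e.g. /var/lib/juju/units/wordpress-0/charm.log
wait                          # reap the backgrounded debug-log stub
```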
[09:57] <shang> hazmat: http://pastebin.ubuntu.com/707872/
[09:57] <shang> hazmat: ok, let me try again
[09:57]  * hazmat tries with the local provider
[09:58] <shang> fwereade: which trunk did u use?
[09:59] <fwereade> shang: sorry, I was referring to the examples in juju trunk
[09:59] <shang> fwereade: ah, ok
[10:00] <hazmat> shang, so status looks good,  getting the /var/lib/juju/units/<unit-name>/charm.log file  is probably needed to debug a charm further
[10:00] <hazmat> we should probably have a special cli option builtin for this purpose
[10:01] <shang> hazmat: shouldn't the open-ports have the 80 in it?
[10:01] <hazmat> shang, it should
[10:02] <hazmat> shang, the wordpress charm in principia never does open-port
[10:03] <hazmat> which is why its broken
[10:03] <hazmat> shang, nice catch
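The fix hazmat describes amounts to adding an `open-port` call somewhere in the charm. Below is a hypothetical excerpt of a wordpress start hook, not the actual charm code; `open-port` is a juju hook tool that only exists inside a unit's hook context, so it is stubbed here to keep the sketch runnable:

```shell
# Hypothetical excerpt of a wordpress charm "start" hook.
# open-port is provided by juju inside a hook context; stubbed so this runs.
open-port() { echo "open-port $*"; }

# ... start apache, write wp-config.php, etc. ...
open-port 80   # without this call, "juju expose" succeeds but the EC2
               # security group never opens the port -- exactly the bug above
```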
[10:04] <shang> hazmat: so we start the wordpress instance
[10:04] <shang> and run the command:  juju debug-hooks wordpress/0
[10:04] <shang> in another terminal to see the debug info?
[10:07] <hazmat> shang, yes.. its not debug info.. its a tmux session, where windows/shells will pop up that 'replace' a hook execution; instead, the activity done interactively by the user is the hook. when you're ready for the hook to be done, you exit the popped-up window
[10:09]  * hazmat works on fixing principia wordpress
[10:10] <fwereade> hazmat: would you let me know when you're done? I had this theory that was *probably* the issue, but haven't managed to actually fix it, and it would be nice to see the successful diff
[10:12] <fwereade> (and I wasn't going to go and confidently pronounce how to fix the problem until I'd actually, y'know, done so)
[10:14] <hazmat> fwereade, normally its just .. adding a call to open-port anywhere in the formula, the wordpress in principia (or whatever its called) is derived from juju/examples.. but diverged prior to the bashification.. in this case i'm effectively doing a hand merge of the bash script
[10:14] <hazmat> 100% of all open-port usage is non-dynamic
[10:15] <hazmat> zero calls to close-port
[10:15]  * hazmat takes a deep breath and moves on
[10:15] <fwereade> hazmat: ah, I wondered about that
[10:15] <fwereade> cheers
[10:19] <shang> hazmat: do you still need the charm.log?
[10:20] <hazmat> shang, no its cool, thanks though
[10:21] <shang> hazmat: ok, thanks
[10:22] <_mup_> juju/wordpress r50 committed by kapil.thangavelu@canonical.com
[10:22] <_mup_> pull from juju trunk, bashify, and include open-port call
[10:24] <hazmat> shang, if you update your branch/checkout of the formula it should work now
[10:25] <hazmat> shang, you'll need to destroy the service and redeploy with the new formula..  i didn't include an upgrade script
[10:26] <shang> hazmat: ok, let me try
[10:43]  * hazmat should go back to sleep
[12:49] <wrtp> hazmat: so, FindMachineSpec
[12:49] <wrtp> hazmat: the problem with returning a list of possible specs is that it might be outrageously long
[12:50] <wrtp> with all combinations of n parameters
[12:50] <hazmat> wrtp, yeah.. if its encapsulating all permutations
[12:50] <wrtp> yeah, so i think it might be better to expose some interface for finding possible values of each parameter
[12:50] <hazmat> wrtp, so i was thinking something a bit more simple..
[12:50] <hazmat> exactly
[12:51] <wrtp> as in: possible RAM config, possible OS image, possible location, etc
[12:51] <hazmat> think about driving a ui for example is a good scenario to keep in mind
[12:51] <hazmat> ie. what would you want to see
[12:51] <SpamapS> hm, this sounds a lot like facter
[12:51] <wrtp> SpamapS: facter?
[12:52] <SpamapS> facter shows you RAM, OS, CPU#, etc.
[12:52] <hazmat> SpamapS, its more akin to dash or ec2 ui
[12:52] <SpamapS> its used a lot in puppet
[12:52] <hazmat> and chef
[12:52] <SpamapS> I missed the context tho
[12:52] <wrtp> SpamapS: i was trying to come up with a nice way of specifying a machine to start
[12:52] <hazmat> SpamapS, just discussing environment interfaces in go
[12:53] <SpamapS> oh like, you want to choose the machine type for the user?
[12:53] <hazmat> wrtp, so i wouldn't worry about the enumeration stuff for now, we can add that latter
[12:53] <wrtp> hazmat: yeah
[12:53] <hazmat> SpamapS, no.. more like we want to give the user the option, and be able to validate it
[12:53] <hazmat> or present a ui with options
[12:53] <wrtp> SpamapS: for reference, here's my first stab at a spec/doc for a Go interface to juju:  http://paste.ubuntu.com/707950/
[12:54] <wrtp> hazmat: yeah, i'll keep it ultra simple for now, with the expectation of fleshing it out later
[12:54] <hazmat> SpamapS, but carry the user selection down to the provider
[12:54] <SpamapS> Wow I'm quite confused
[12:54] <SpamapS> at what point do you get to ask users questions?
[12:54] <hazmat> SpamapS, right now its kinda broken, we only grab it from the config file; no user specification is passed down to the provider
[12:54] <SpamapS> (at what point would we ever WANT users to be bothered with questions?)
[12:54] <hazmat> SpamapS, at deploy time
[12:55] <hazmat> SpamapS, optionally we fall back to env defaults
[12:55] <hazmat> SpamapS, i want to deploy cassandra on a HUGE machine ;-)
[12:55] <wrtp> i'm more imagining an intelligent choice based on a previously specified user constraint
[12:55] <wrtp> rather than user interaction per se
[12:55] <hazmat> SpamapS, but haproxy on  a tiny machine
[12:56] <hazmat> the size of cassandra is based somewhat on usage
[12:56] <SpamapS> Hrm, I doubt users want to be stopped and asked about this
[12:56] <hazmat> interesting
[12:56] <SpamapS> if they don't do --ram BIG or --machine-type m1.large  ... env default seems appropriate
[12:57] <hazmat> and that's what it will continue to be
[12:57] <wrtp> i'm thinking that the user probably wants to be able to verify that they won't be paying more than a certain amount of money
[12:57] <SpamapS> it would DEFINITELY be cool to map abstract arguments to provider machine types
[12:57] <wrtp> SpamapS: that's the idea
[12:57] <SpamapS> --budget-per-hour 0.50
[12:57] <wrtp> yup
[12:57] <SpamapS> But to ask the user.. fail
[12:57] <wrtp> i think that's pretty crucial actually
[12:58] <SpamapS> just say "Cannot determine best machine type, options { x, y , z }" and exit(-1)
[12:58] <wrtp> SpamapS: yeah. although the user should be able to iterate without spending, i think. explore the possibilities.
[12:58] <SpamapS> Still
[12:58] <SpamapS> this sounds like we're getting way ahead of being awesome at what we currently do. ;)
[12:58] <wrtp> SpamapS: more like: "no machine type available that meets your budget constraints" perhaps
[12:59] <wrtp> SpamapS: this was all spawned from discussion about the FindImage method in the go docs i posted above
[12:59] <SpamapS> as a first iteration, just adding --machine-type XXXXXX would be a quantum leap
[12:59] <wrtp> SpamapS: yeah, we're way ahead of ourselves, but it probably helps to have an idea of where we might go
[13:00] <SpamapS> and also being able to change the machine type for a running service would be good
[13:00] <wrtp> and i do think being able to plan your budget (and explore different budgets across different providers) is going to be important in the long run
[13:01] <wrtp> SpamapS: is that actually possible?
[13:01] <SpamapS> wrtp: sure, change it, then issue a "replace units" command that would slowly remove the old type and add the new type
[13:02] <SpamapS> assuming its a service that has the ability to do that.. :)
[13:02] <SpamapS> if not.. well then.. don't do that!
[13:02] <wrtp> :-)
[13:02] <SpamapS> point being, add-unit just uses whatever was the env default at the time of deploy..
[13:03] <wrtp> SpamapS: so {add-unit; remove-unit}should do the job?
[13:04] <wrtp> where remove-unit is removing an old unit
[13:04] <wrtp> not the one just added
[13:04] <SpamapS> sorry to derail, what I am trying to convey is that you're enhancing something that has a weak foundation .. might be good to get a structure in place .. I like the end goal a lot tho
[13:05]  * SpamapS is just grasping at anything that will distract him from trying to figure out how to elastically grow a ceph cluster
[13:05] <wrtp> SpamapS: hmm, just wondering about "whatever was the env default at the time of deploy..". does that mean we'd have to push the current environment to zk on each state change?
[13:06] <wrtp> lol
[13:06] <wrtp> is there a list anywhere of "current juju shortcomings that we'd like to address"?
[13:06] <SpamapS> wrtp: the problem is that the provisioner that actually starts machines and assigns them to units has only one clue about the machine type, and thats the one in the service state in ZK
[13:07] <SpamapS> wrtp: the unit should be able to provide an overriding clue
[13:07] <SpamapS> wrtp: https://bugs.launchpad.net/juju
[13:07] <wrtp> of course
[13:07] <SpamapS> only 160 ;)
[13:08] <SpamapS> https://bugs.launchpad.net/juju/+bugs?field.tag=production
[13:08] <SpamapS> those are the ones we (or maybe just I) think are needed to be fixed for production usage of juju.
[13:09] <SpamapS> wrtp: bug 829397 is the one I'm basically describing
[13:09] <_mup_> Bug #829397: Link a service to a type of hardware and/or specific machine <production> <juju:Confirmed> < https://launchpad.net/bugs/829397 >
[13:09] <wrtp> SpamapS: that sounds like it's not something coming from the current environment, but from some specification of the service itself.
[13:11] <SpamapS> wrtp: when deploy happens, the current environment default is copied into the service def
[13:12] <hazmat> ?
[13:12] <wrtp> SpamapS: hmm. i think i prefer the idea of specifying an environment for a service rather than changing environments.yaml every time. but perhaps that's what would happen.
[13:12] <hazmat> wrtp, yeah.. that's a major failing re copying the env per deploy to capture changes
[13:13] <hazmat> its easy to fix that
[13:13] <hazmat> and we need to do it for multi-user usage
[13:13] <SpamapS> all things in environments.yaml that are not global should be runtime overrideable and changeable via some kind of command
[13:13] <SpamapS> ami, machine type, etc. etc.
[13:14] <wrtp> definitely.
[13:14] <hazmat> we'll bootstrap with the environments yaml and thereafter use some variation of juju set to set env values
[13:14] <wrtp> for things like default machine type etc, i'd imagine they would be (should be?) copied into the cloud only once, at bootstrap time
[13:14] <wrtp> exactly
[13:15] <SpamapS> hazmat: but what about that one time where I want to add-unit x1.large .. just to handle today's ridiculous load.. then back to m1.small's
[13:16] <hazmat> SpamapS, i'd like to go down the road of deploy cli parameters
[13:16] <wrtp> SpamapS: maybe the add-unit command should allow specification of things like machine type
[13:16] <wrtp> hazmat: yeah
[13:16] <SpamapS> let me override it at deploy time yes, but also let me override even that.
[13:17] <wrtp> add-unit > deploy > bootstrap
[13:17] <hazmat> SpamapS, ? huh
[13:17] <hazmat> yeah
[13:17] <SpamapS> hazmat: if I deploy with one type, I may want to change that later
[13:17] <hazmat> SpamapS, sounds good, we just run the risk of turning into knife if we expose all cli options
[13:17] <hazmat> but they're always optional
[13:18] <wrtp> hazmat: knife?
[13:18] <SpamapS> so add-unit needs an override at the cli level, and I also need to be able to 'juju set service-name machine-type=foo' to change it permanently
[13:18] <SpamapS> hazmat: be religious about always having a sane default and you won't become knife. :)
[13:18] <wrtp> SpamapS: and probably juju set-default machine-type=foo
[13:18] <hazmat> wrtp, its the chef cli tool for management
[13:18] <SpamapS> juju deploy foo should always give you a workable foo
[13:18] <wrtp> hazmat: ah thanks
[13:20] <SpamapS> anyway, I'm highly impressionable, and this recent "Amazon gets platforms" rant from G+ has me thinking about how juju sits as a platform
[13:20] <hazmat> SpamapS, do tell
[13:20] <SpamapS> I'd say its better than some, but has a long way to go to be accessible
[13:20] <hazmat> SpamapS, and setting that on a service would do what to existing units?
[13:20] <SpamapS> hazmat: leave 'em alone
[13:20] <hazmat> SpamapS, we're working on it re accessible ;-)
[13:20] <SpamapS> right, I know a REST interface is in the works, +10 on that
[13:20] <hazmat> i'm going to explore some api work today
[13:21] <hazmat> yup
[13:21] <wrtp> currently we talk directly to the zk instance in the cloud, right?
[13:21] <SpamapS> yes
[13:21] <hazmat> wrtp, via ssh tunnel from the cli
[13:21] <SpamapS> which should definitely go away
[13:21] <hazmat> wrtp, its painfully slow for some ops
[13:21] <hazmat> on large installs, many roundtrips
[13:22] <wrtp> yeah. a higher level interface would be better.
[13:22]  * SpamapS curses ceph's incessant "then find some way to copy this little dir to all other nodes" documentation
[13:22] <hazmat> SpamapS, if only you had a distributed file system for that ;-)
[13:22] <SpamapS> so ironic
[13:22] <wrtp> some kind of json rpc thing might not be too horrible
[13:23] <SpamapS> BSON ftw.. ;)
[13:23] <hazmat> wrtp, yeah.. thats basically where i'm going .. effectively json rpc to expose the current cli as rest, and then some REST expose of resources
[13:23] <hazmat> although the latter is probably superfluous for the first cut
[13:23] <wrtp> sounds plausible
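The shape wrtp and hazmat are converging on (JSON-RPC mirroring the current CLI) might look something like this; the method name and params here are purely hypothetical, since no wire format had been defined at the time:

```json
{
  "jsonrpc": "2.0",
  "method": "deploy",
  "params": {"charm": "wordpress", "service_name": "wordpress"},
  "id": 1
}
```

A response would carry the same `id` with either a `result` or an `error` member, per the JSON-RPC convention.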
[13:24] <wrtp> niemeyer: hi!
[13:24] <niemeyer> wrtp: Yo
[13:24] <hazmat> niemeyer, greetings
[13:24] <fwereade> morning niemeyer
[13:28] <niemeyer> Hey folks
[14:42] <hazmat> fwereade, the juju get stuff changed a little since the review to incorporate feedback, not sure if you wanted to have a second look before its merged
[14:42] <hazmat> basically the separate option for schema went away, it just merges the schema with the current values for display now
[14:50] <fwereade> hazmat, sounds like a nice idea actually
[14:51] <fwereade> hazmat: I'll take a quick look
[14:53] <hazmat> fwereade, thanks
[14:53] <hazmat> fwereade, bcsaller, jimbaker also there's a trivial but critical fix resolved branch in review.. its only like 10 lines.. https://code.launchpad.net/~hazmat/juju/retry-sans-hook/+merge/79358
[14:54] <hazmat> SpamapS, are we tagging SRU bugs separately, or are you planning on just grabbing trunk?
[14:54] <jimbaker>  hazmat, taking a look
[14:54] <hazmat> er.. fix for resolved
[14:55] <jimbaker> hazmat, +1, lgtm
[14:57] <fwereade> hazmat, the docstring on command() needs updating, otherwise +1
[14:58] <fwereade> hazmat, for the other one, I don't follow the connection between the code change (which looks good) and the test change
[14:59] <_mup_> juju/expose-retry r408 committed by jim.baker@canonical.com
[14:59] <_mup_> Merged trunk
[15:00] <hazmat> fwereade, the test was making the bad hook ... into a good one, and then calling resolved, by leaving it bad we verify the end state was reached without hook execution
[15:01] <fwereade> hazmat: ok, I see now
[15:01] <fwereade> hazmat: bit slow today :/
[15:01] <fwereade> hazmat: +1
[15:01] <hazmat> jimbaker, fwereade thanks
[15:01] <hazmat> fwereade, its subtle one
[15:10] <_mup_> juju/expose-retry r409 committed by jim.baker@canonical.com
[15:10] <_mup_> Addressed review points
[15:16] <_mup_> juju/trunk r405 committed by jim.baker@canonical.com
[15:16] <_mup_> merge expose-retry [r=hazmat,fwereade][f=824279]
[15:16] <_mup_> Ensure that port actions related to expose are retried, even when
[15:16] <_mup_> unexpected exceptions are raised.
[15:18] <jimbaker> ok, that bug fix is merged in. i'm taking today off (my kids are out of school today instead of monday for some reason). yet another beautiful day here in colorado :)
[15:19] <_mup_> juju/config-get r397 committed by kapil.thangavelu@canonical.com
[15:19] <_mup_> cleanup docs
[15:19] <hazmat> jimbaker, cheers, have a good one
[15:21] <jimbaker> i should also mention: when i walked my puppy this morning, i struck up a conversation about ubuntu with a new neighbor. i had my ubuntu pullover on for the chill morning. yet another big fan of ubuntu, good to see!
[15:23] <fwereade> jimbaker, pleasing :)
[15:49] <_mup_> juju/trunk r406 committed by kapil.thangavelu@canonical.com
[15:49] <_mup_> merge config-get [r=fwereade,bcsaller][f=828326]
[15:49] <_mup_> New juju subcommand for inspecting a service's current configuration and schema.
[15:54] <_mup_> juju/trunk r407 committed by kapil.thangavelu@canonical.com
[15:54] <_mup_> merge retry-sans-hook [r=fwereade,jimbaker][f=814987]
[15:54] <_mup_> Fixes a bug with unit agent usage of resolved flags that caused
[15:54] <_mup_> resolved to always execute hooks, instead of when hook retry was
[15:54] <_mup_> explicitly specified.
[16:07] <fwereade> happy weekends all, I'll probably drop in a bit later but I think I'm done for now
[16:14] <hazmat> fwereade, have a good one
[16:42] <_mup_> Bug #874423 was filed: rest/jsonrpc api  <juju:New> < https://launchpad.net/bugs/874423 >
[16:47] <_mup_> juju/rest-agent-api r402 committed by kapil.thangavelu@canonical.com
[16:47] <_mup_> merge trunk
[16:53] <wrtp> fwereade: enjoy
[17:04] <SpamapS> hazmat: I am going to grab trunk, but reconcile each bug in the changelog
[17:04] <hazmat> SpamapS, ic, just wondering if we should be putting in test cases for each bug that's not a feature
[17:04] <hazmat> or tagging them in some way
[17:05] <SpamapS> hazmat: that would be helpful
[17:06] <SpamapS> hazmat: let's not get too procedure-bound though.. since in this instance, we are "the community" .. we can more or less demand an SRU as long as we aren't flagrantly dropping refactors
[17:06] <SpamapS> hazmat: but it will go a long way to user trust that we don't break stuff in an update
[17:21] <_mup_> Bug #874456 was filed: Initial Go juju commit. <juju:In Progress by rogpeppe> < https://launchpad.net/bugs/874456 >
[17:22] <wrtp> hazmat: ^^
[17:26] <wrtp> is there any way to get the bzr diff web page to show the diffs between two revisions?
[17:26] <wrtp> e.g. here http://bazaar.launchpad.net/~rogpeppe/juju/juju-implementation/changes
[17:26] <wrtp> i'd like to see diffs between rev 11 and rev 15
[17:26] <wrtp> or do i have use a local tool for that?
[17:28] <wrtp> hmm, i just managed to do it, but i'm not quite sure how!
[17:36] <hazmat> wrtp, there's a couple of cli tools for this
[17:36] <hazmat> wrtp, i highly recommend the qbzr plugin
[17:36] <wrtp> plugin for what?
[17:36] <hazmat> for bzr
[17:36] <wrtp> ah
[17:36] <hazmat> wrtp, its a qt ui on top of bzr, adds a bunch of 'q' prefixed commands
[17:36] <wrtp> can i apt-get it?
[17:37] <hazmat> wrtp, yes
[17:37] <wrtp> i've been doing bzr diff --old x --new y --using diffuse
[17:37] <wrtp> but it's not very satisfactory
[17:37] <hazmat> wrtp bzr qbzr -r old..new
[17:38] <wrtp> i just realised that i forgot to make the changes to my source that we discussed earlier
[17:38] <hazmat> wrtp,  when reviewing a branch i typically do  from within the branch bzr qbzr -r ancestor:path_to_trunk
[17:38] <hazmat> which shows you a diff of the branch changes against trunk
[17:38] <wrtp> cool
[17:38] <hazmat> er.. that should be qdiff not qbzr
[17:38]  * hazmat needs a nap
[17:39] <wrtp> hazmat: when you say "path_to_trunk" do you mean the last trunk revision number?
[17:39] <hazmat> wrtp, no i mean the physical path to trunk
[17:40] <wrtp> ok. BTW what happens if there's a colon in the path name?
[17:40] <hazmat> wrtp, thats for a review of  a branch that's getting merged to trunk
[17:40] <hazmat> wrtp, i dunno, never came up for me
[17:40] <hazmat> wrtp, shouldn't be an issue
[17:40] <wrtp> fairy nuff
[17:41] <hazmat> wrtp, a double colon introduces some lookup behavior for a branch naming service  used by some of the more interesting plugins, like pipeline
[17:41] <hazmat> which automates a stack of changes
[17:41] <hazmat> too much information probably
[17:41] <wrtp> interesting
[17:42] <wrtp> i'm slooowly getting there with the bzr stuff
[17:42] <hazmat> wrtp, no worries
[17:43] <hazmat> wrtp, we should probably have a new-developer doc to describe the particulars of best-practice bzr layouts for dev
[17:44] <wrtp> that would be good
[17:44] <wrtp> i've barely used revision control systems in their full modern glory
[17:45] <wrtp> hazmat: qdiff FTW!
[17:45] <wrtp> marvellous
[17:46] <wrtp> i wish someone had told me that when i asked in the main canonical IRC channel...
[17:46] <hazmat> wrtp, #bzr on freenode is pretty good stop for bzr questions
[17:46] <hazmat> that main channel is just a presence thing, not a question place
[17:46] <wrtp> right
[17:48] <wrtp> i have to do another qdiff if i want to change the view to diff against another revision, right?
[17:49] <wrtp> hazmat: BTW i got an email about a monthly report. where is that?
[17:50] <hazmat> wrtp, priv msg
[18:08] <SpamapS> hazmat: ah, so r406 is definitely a new feature
[18:10] <hazmat> SpamapS, backwards compatible but yes..
[18:10] <hazmat> SpamapS, is that an issue?
[18:10] <SpamapS> yes and no
[18:12] <_mup_> Bug #874486 was filed: status should show all relations for multiply-related services <juju:New> < https://launchpad.net/bugs/874486 >
[18:13] <SpamapS> hazmat: normally we'd have "patch only" releases waved through the SRU process
[18:14] <SpamapS> hazmat: but since juju has no release process.. it will raise the eyebrows of the SRU team
[18:14] <hazmat> SpamapS, i can yank, but we need a date for features to go in
[18:14] <SpamapS> no don't yank!
[18:14] <hazmat> we're not purely in bug fix mode atm, we have open dev/refactor items up for this milestone
[18:15] <SpamapS> :)
[18:15] <hazmat> SpamapS, its easy enough to do.. i just need to be clearer on what the sru scheduling is
[18:15] <SpamapS> Just means we have to choose between fighting for the SRU despite the features and cherry picking only the bug fixes
[18:15] <SpamapS> there is no schedule
[18:15] <hazmat> SpamapS, my last understanding was we were going to be going through to 12.04 with SRUs
[18:15] <SpamapS> TECHNICALLY, SRU's are only for serious issues
[18:16] <SpamapS> but in universe, what is serious and what is not is entirely up to the community around the package...
[18:16] <hazmat> SpamapS, so do we really need to SRU everything?
[18:16] <SpamapS> since that is .. us... ;)
[18:16] <hazmat> SpamapS, seems like we should just point folks to a stable ppa?
[18:17] <SpamapS> as I've said, I think if we have some automated tests that verify we didn't break anything (integration tests, not unit tests) then it should be fine.
[18:17] <hazmat> and we can put in a week or two hiatus on feature merges, yank config-get, do the sru, open up trunk to features, and put out a stable ppa for folks who want the latest and greatest
[18:17] <SpamapS> ugh my SSH is bursty
[18:17] <SpamapS> get off facetime you durn ipad users here in the starbucks!
[18:17] <hazmat> SpamapS,  that's what this is for http://wtf.labix.org/
[18:18] <hazmat> SpamapS, it runs unit tests and a functional test
[18:18] <SpamapS> hazmat: that does not count, because those tests are in tree and may be changed to fit the release
[18:18] <SpamapS> The tests that work now, must work, unchanged, for every SRU we do
[18:18] <SpamapS> IMO a stable PPA would also need this level of care
[18:19] <hazmat> SpamapS, true for the unit tests, but the functional tests haven't been touched, and don't they live in tree... jimbaker ?
[18:19] <hazmat> er.. i don't think
[18:19] <SpamapS> They live in their own tree, but are under your control, and will likely be updated as juju is changed in backward incompatible ways.
[18:20] <hazmat> SpamapS, they'll likely fail first, and we'll see that.. but okay, from a skeptical pov i can see why that's an issue
[18:21] <SpamapS> Also, they don't really exercise things enough to make me comfortable. I'd like to have all charms deployed, related to at least one thing, exposed, configs changed, and then destroyed..
[18:21] <hazmat> SpamapS, okay... well you have some other ftests right?
[18:21] <SpamapS> I think I can automate that
[18:21] <hazmat> i need to go back read jamespage's charm tester stuff
[18:21] <hazmat> but first a nap b4 the sprint
[18:22] <SpamapS> I think jp's thing is an even higher order
[18:22] <SpamapS> hazaway: sleeeeeep ;)
[18:31] <zodiak> hey guys and gals, jst installed ubuntu 11.10, nice to see juju (aka ensemble ;) in there, I wanted to use my local machine, am I correct in thinking I have to setup basic lxc by itself first before trying to use charms ?
[19:14] <bcsaller> zodiak: It looks like the docs for the local provider are not on the main url yet, but you can look at them here http://bit.ly/nsjdWu
[19:15] <bcsaller> zodiak: the local provider will tell you if its missing packages when you try to bootstrap it
[19:43] <SpamapS> can we get a bump on that RT ticket for the docs?
[19:43] <SpamapS> this will get worse as 11.10 users start looking into juju
[19:57] <bcsaller> SpamapS: there is also a branch with a troubleshooting script that collects info to help debug local provider issues and that includes an expansion of those docs as well
[20:40] <hazmat> https://portal.admin.canonical.com/48456
[20:40] <hazmat> SpamapS, ^ it was 7 earlier today i thought
[20:43] <hazmat> SpamapS, just sent in additional comment noting the urgency
[20:44] <hazmat> now its 10 in the queue
[20:49] <SpamapS> we're dealing with the release turmoil
[20:49] <SpamapS> but I wonder if its getting bumped because nobody knows its part of the release
[20:52] <hazmat> i added an additional note to that effect, and that we're fielding support requests because of the mismatch of the old docs to pkg in oneiric
[21:26] <evandev> Is the UI available for Hadoop after Juju is bootstrapped and the environment is setup? (i.e. master with a slave running)
[21:33] <SpamapS> evandev: which "UI" would you be referring to?
[21:33] <SpamapS> evandev: and also, which charm? the one at lp:charm/oneiric/hadoop-master ?
[21:34] <evandev> well im using a local charm from http://github.com/charms/hadoop-master & hadoop-slave
[21:34] <evandev> and by UI I mean GUI ports 50030 / 50070 / 50075
[21:36] <evandev> Through those charms tho hadoop is not even getting installed so I guess thats why im running into an error
[21:43] <SpamapS> evandev: I believe the charms do open those ports
[21:43] <SpamapS> evandev: are you seeing an error in the install step?
[21:44] <evandev> Hadoop is not being installed on the hadoop-master/0
[21:45] <SpamapS> evandev: did you look at /var/lib/juju/units/haddop-master-0/charm.log ?
[21:46] <SpamapS> evandev: did you look at /var/lib/juju/units/hadoop-master-0/charm.log ?
[21:46] <SpamapS> i kan spel
[21:47] <SpamapS> evandev: also you probably need to 'juju expose hadoop-master' or the firewall will block access
[21:48] <evandev> That was it.
[21:48] <evandev> Thanks SpamapS very much
[21:49] <SpamapS> evandev: no problem. Note that there are a bunch more charms at https://code.launchpad.net/charm
[21:49] <SpamapS> evandev: the github page is just an experiment by m_3 ;)
[21:49] <SpamapS> m_3: ^^
[21:49] <evandev> Yea I want to modify a charm to install the CDH3 distribution from cloudera
[21:51] <evandev> But thank you again. I really appreciate it
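The debugging sequence that resolved evandev's issue, collected as a sketch: read the unit's charm log, and if the install succeeded but nothing is reachable, expose the service. The unit and service names are the ones from this session; run the log check on the unit's machine.

```shell
# Check the unit's charm log, then open the firewall for the service.
# Paths and names are from the session above.
UNIT_LOG=/var/lib/juju/units/hadoop-master-0/charm.log

if [ -f "$UNIT_LOG" ]; then
    tail -n 50 "$UNIT_LOG"      # install-hook errors show up here
else
    echo "no charm log at $UNIT_LOG (run this on the unit's machine)"
fi

if command -v juju >/dev/null 2>&1; then
    # Without this, the security group / firewall blocks the Hadoop
    # web UI ports (50030/50070/50075) even when the charm opened them.
    juju expose hadoop-master
fi
```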
[22:32] <zodiak> bcsaller, hrm, so, whenever I do a juju deploy --repository=juju-charms local:memcached using lxc, the state never changes from null and I don't get any ip
[22:35] <bcsaller> zodiak: maybe you could download and run this script http://bazaar.launchpad.net/~bcsaller/juju/local-troubleshooting/view/head:/misc/devel-tools/juju-inspect-local-provider
[22:35] <zodiak> downloading (and thanks)
[22:36] <zodiak> pastie up the output I take it ? :)
[22:36] <bcsaller> yeah, that would be great
[22:37] <zodiak> http://pastie.org/2697581
[22:37] <zodiak> it's probably something 'duh' .. and if it is, that's awesome. I don't mind being an idiot :D
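For anyone following along, running the inspection script bcsaller linked looks roughly like this, assuming it has been saved locally (e.g. from the Launchpad page) as `juju-inspect-local-provider`. It needs root for the lxc checks, and the container-name argument (used later in this session) is optional.

```shell
# Run the troubleshooting script as root and capture its output for a
# pastebin. The container name is the one from this session.
SCRIPT=./juju-inspect-local-provider
if [ -f "$SCRIPT" ]; then
    sudo sh "$SCRIPT" stef-sample_local-memcached-0 > inspect-output.txt 2>&1
    echo "output saved to inspect-output.txt; pastebin it for review"
else
    echo "$SCRIPT not found; download it first"
fi
```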
[22:40] <m_3> SpamapS: dang, those are still up from the openstack demo... different charms for natty -vs- oneiric... I'll fix it
[22:42] <m_3> the real fix is to get the oneiric hadoop packages into the partner ppa
[22:43] <bcsaller> zodiak: other than seeing a typo in that script I don't see the issue yet, still looking though
[22:43] <zodiak> bcsaller, thanks.. sorry about all this :)
[22:43] <bcsaller> zodiak: no problem at all
[22:44] <bcsaller> zodiak: it looks like its building out the initial image cache in /var/cache/lxc as expected, but we'd then expect an image to later appear when lxc-ls is run
[22:45] <bcsaller> zodiak: is this still running or have you stopped it with a destroy environment?
[22:48] <zodiak> I have done bootstrap and destroy a number of times
[22:48] <zodiak> how long should I wait for the state to change ? I waited like 45 minutes earlier
[22:49] <zodiak> and I am on FiOS so.. ;)
[22:50] <bcsaller> zodiak: ... it shouldn't take that long at all. So this script is new, this is helpful in terms of me finding out what info to gather. I'm going to include some changes and maybe we can try running it again :)
[22:50] <zodiak> surely
[22:51] <zodiak> let me try and rm -rf the cache/lxc and bootstrap again
[22:51] <zodiak> hrm
[22:51] <zodiak> I think it's something permissions related
[22:52] <zodiak> I am doing juju bootstrap as my user, not a problem. do a deploy and /var/cache/juju becomes owned by root
[22:57] <bcsaller> zodiak: yes, local provider will ask for permissions via sudo when it needs them
[22:57] <bcsaller> lxc interactions happen as root
[22:57] <zodiak> ah
[22:57] <zodiak> then shut my mouth :D
[22:58] <bcsaller> zodiak: in your ~/.juju/environments.yaml you defined a data-dir for the local provider, is the bootstrap creating a directory as expected in that data-dir?
[22:58] <zodiak> let me double check
[22:58] <zodiak> it is indeed
[22:58] <zodiak> charms, files, state and units (and a nice log ;)
[22:58] <bcsaller> indeed
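For reference, the local-provider stanza being discussed lives in `~/.juju/environments.yaml` and looks roughly like the following. This is a sketch from the pyjuju era: every value is an example, and fields other than `type`, `data-dir`, and `juju-origin` (all mentioned in this session) are assumptions about what a working stanza contains.

```yaml
environments:
  sample:
    type: local
    data-dir: /home/zodiak/.juju/data   # bootstrap creates charms/, files/, state/, units/ here
    admin-secret: example-secret
    default-series: oneiric
    juju-origin: ppa   # switching from 'distro' to 'ppa' is the fix applied later in this session
```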
[22:59] <bcsaller> the machine-agent.log, could you paste the last 20 lines or so
[22:59] <zodiak> surely.. want me to wipe that and lxc cache clean first ?
[23:00] <bcsaller> zodiak: can't hurt :) will take a little longer
[23:00] <zodiak> eh.. I am at work.. not a problem ;)
[23:03] <hazmat> zodiak, it would be good to pastebin the master-customize.log it sounds like
[23:03] <hazmat> any problem creating units
[23:03] <hazmat> is typically related
[23:05] <bcsaller> hazmat: I was making sure it was getting that far first
[23:05] <bcsaller> lxc-ls returned nothing, implying no template
[23:06] <hazmat> bcsaller, oh.. yeah.. machine-agent.log indeed
[23:06]  * hazmat goes back to mediawiki hacking and lurking
[23:06] <bcsaller> hazmat: :)
[23:07] <bcsaller> hazmat: you're back home after this weekend, right?
[23:08] <hazmat> bcsaller, monday late night
[23:08] <bcsaller> I'd like to schedule some time, maybe Tues to talk about the Co-lo stuff a little, I've been working through the old comments and some newer thinking and would like to go over it with you
[23:10] <zodiak> bcsaller, the last line of the machine-agent.log is looking hopeful ..
[23:10] <zodiak> 2011-10-14 16:05:32,853: unit.deploy@DEBUG: Creating master container...
[23:10] <zodiak> will let you know how it goes ;)
[23:10] <bcsaller> that is a good sign
[23:10] <hazmat> bcsaller, sounds good
[23:10] <bcsaller> zodiak: soon lxc-ls should show that it has a master template
[23:11] <bcsaller> once you have this working subsequent deployments are much faster
[23:11] <zodiak> yup. it does indeed.
[23:11] <zodiak> lxc-ls does show 'stef-sample_local-0-template
[23:11] <zodiak> but juju status still says state:null and no ip address
[23:12] <bcsaller> thats expected at this point
[23:12] <zodiak> awesome :)
[23:12] <bcsaller> it works by creating a template
[23:12] <bcsaller> and then it clones that for unit deployments
[23:12] <bcsaller> but creating the master the first time can take a little while
[23:12] <zodiak> ah. so even though lxc-ls .. gotcha
[23:12] <zodiak> danke
[23:13] <bcsaller> now, kapils suggestion will become important though, in data-dir/units there is a master-customize.log
[23:13] <bcsaller> that will provide insight into the creation of the master container
[23:14] <hazmat> a good sanity check if its done .. is if ps aux | grep lxc
[23:14] <hazmat> has any output
[23:14] <hazmat> if not...  lxc-ls should show stuff
[23:15] <bcsaller> hazmat: in the troubleshooting section of the doc I recommend: pgrep lxc| head -1| xargs watch pstree -alU
[23:15] <zodiak> ps auxfw to the rescue.. still chunking away :)
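A combined sketch of the sanity checks hazmat and bcsaller just suggested: while the master template is being built there are lxc processes to watch; once they are gone, `lxc-ls` should list the template. The `STATE` variable is scaffolding for illustration only.

```shell
# Is the lxc master template still building, ready, or absent?
STATE=missing
if pgrep lxc >/dev/null 2>&1; then
    STATE=building    # template build still in progress
    # bcsaller's way to watch it: pgrep lxc | head -1 | xargs watch pstree -alU
elif command -v lxc-ls >/dev/null 2>&1; then
    STATE=ready
    lxc-ls            # should list e.g. stef-sample_local-0-template
fi
echo "lxc: $STATE"
```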
[23:18] <bcsaller> zodiak: yeah, the output of the master-customize.log is only written to the log file at the end of the run, but it will help us if this doesn't work
[23:22] <zodiak> bcsaller, okay, so, machine-agent.log says 'juju.agents.machine@INFO: Started service unit memcached/0'
[23:22] <bcsaller> zodiak: thats a good sign
[23:22] <zodiak> which is nice.. looking good there..
[23:23] <zodiak> but nothing in the units/ folder
[23:23] <bcsaller> and juju status hasn't changed?
[23:23] <zodiak> and state is still null :\
[23:24] <bcsaller> does "juju ssh memcached/0" work?
[23:25] <zodiak> nope. sticks on 'waiting for unit to come up'
[23:25] <hazmat> bcsaller, unit agent grep, and manual chroot and upstart perhaps
[23:25] <zodiak> I didn't do any AWS on this machine before, I don't need to setup them if I am using lxc correct ?
[23:25] <hazmat> zodiak, correct
[23:25] <zodiak> hazmat, danke.
[23:25] <zodiak> sorry about all this guys+gals :(
[23:26] <bcsaller> hazmat: yeah, I think we should be able to ssh as ubuntu into the box
[23:26] <bcsaller> zodiak: happy to help get this working
[23:27] <bcsaller> zodiak: we want to get the ip address of the unit so we can ssh in
[23:27] <bcsaller> zodiak: did status have it yet or no?
[23:27] <zodiak> nope.. still no status :(
[23:27] <bcsaller> there are other ways
[23:28] <bcsaller> I usually use nmap -sp 192.168.122.0/24 but you might not have that installed
[23:28] <bcsaller> looks like this should work for you though
[23:28] <zodiak> yup.. I have nmap
[23:29] <bcsaller> host stef-sample_local-memcached-0 192.168.122.1
[23:29] <bcsaller> oh, with nmap it was -sP (uppercase) anyway
[23:30] <zodiak> got it (the nmap I mean)
[23:30] <zodiak> and yes .. .1 is up
[23:31] <bcsaller> 1 is the host machine, if there isn't another ip on that bridge then it didn't bring up another container yet
[23:31] <bcsaller> what does lxc-ls show now?
[23:31] <zodiak> ah ... 228 in that case
[23:31] <hazmat> .1 is the bridge address, also has dnsmasq..
[23:31] <bcsaller> ahh, ok
[23:31] <hazmat> zodiak can you login into that .228 address (ubuntu@)
[23:31] <bcsaller> ssh ubuntu@192.168.122.228 should get you into that container
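Collected, the container-discovery steps just walked through look like this. The live commands are commented out because the addresses and names are specific to this session: `192.168.122.1` is libvirt's default bridge (running dnsmasq), and the container name and `.228` address are the ones found above.

```shell
# Find a container's IP before `juju status` reports it.
BRIDGE=192.168.122.1
SUBNET=192.168.122.0/24
CONTAINER=stef-sample_local-memcached-0

# nmap -sP "$SUBNET"             # ping-scan the bridge subnet; note -sP is uppercase
# host "$CONTAINER" "$BRIDGE"    # dnsmasq on the bridge resolves container names
# ssh ubuntu@192.168.122.228     # the ubuntu account has full sudo inside

echo "scan $SUBNET, or resolve $CONTAINER via $BRIDGE"
```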
[23:32] <zodiak> it does indeed :D
[23:32] <zodiak> status still reports it as null state and no ip (fyi)
[23:32] <bcsaller> the ubuntu account will have full sudo access and can look at the logs in /var/log/juju which should help us understand
[23:32] <zodiak> gotcha. let me take a look see
[23:32] <zodiak> huh
[23:33] <zodiak> that log directory is empty
[23:34] <bcsaller> in another shell can you run the script I gave you before as root with stef-sample_local-memcached-0 as the cmdline argument
[23:34] <bcsaller> it should pull /etc/juju/juju.conf and so on to help us review that
[23:35] <bcsaller> in the container we can also check /etc/init for the upstart job for the charm
[23:35] <zodiak> http://pastie.org/2697796
[23:35] <bcsaller> I suspect that the charm itself is failing
[23:35] <zodiak> oh.
[23:36] <hazmat> ls: cannot access /usr/lib/juju/juju: No such file or directory
[23:36] <zodiak> bad luck on my part choosing that charm ?
[23:36] <hazmat> bcsaller, ^
[23:36] <hazmat> JUJU_ORIGIN=distro
[23:36] <bcsaller> zodiak: the examples that come with juju, mysql and wordpress work
[23:36] <bcsaller> hazmat: I saw that, but the juju package doesn't appear to be installed
[23:36] <hazmat> bcsaller, that's the root issue isn't it
[23:37] <bcsaller> I think so, yes, wondering how that happened. We'd been using the PPA for most of the testing, but that should be working as well...
[23:38] <hazmat> zodiak, can you try putting juju-origin: ppa in environments.yaml... destroy-environment && bootstrap
[23:38] <hazmat> don't need to clear out the cache
[23:38] <hazmat> the lxc cache that is
[23:38] <zodiak> hazmat, can do.. one sec
[23:39] <bcsaller> zodiak: the master-customize.log was never written though, right?
[23:39] <zodiak> if nothing else, I am getting used to juju :)
[23:39] <bcsaller> in data-dir/units
[23:39] <zodiak> bcsaller, correct
[23:39] <bcsaller> that should have indicated if there was an issue installing the package.. .oh well. setting it PPA as suggested should work
[23:40] <zodiak> where do I grab the charms from ?
[23:40] <hazmat> zodiak, can you pastebin the entire master-customize.log
[23:40] <zodiak> maybe it's something out of date there ?
[23:41] <hazmat> this is before the charm is touched even, so thats not an issue atm
[23:41] <hazmat> but the charms are in bzr on launchpad..
[23:42] <zodiak> okay, sorry, where is the master-customize.log ?
[23:42] <hazmat> a listing of them is at https://code.launchpad.net/charm
[23:42] <bcsaller> in your data-dir/units directory
[23:42] <bcsaller> but you said it wasn't there
[23:43] <zodiak> it's there after doing the ppa and destroy/bootstrap
[23:43] <hazmat> its definitely there if its on to creating units
[23:43] <zodiak> let me pastie. one sec.
[23:45] <bcsaller> zodiak: I also pushed a newer version of the script
[23:45] <zodiak> http://pastie.org/2697831
[23:46] <zodiak> bcsaller, okay... although take a look at the pastie.. it appears to be a bridging issue from the lxc .. at least, that's what it looks like :(
[23:46] <zodiak> strange
[23:46] <bcsaller> zodiak: did you install apt-cacher-ng and is it running?
[23:47] <bcsaller> 192.168.122.1:3142 in a browser should tell you right away
[23:48] <bcsaller> there is a 'stats' link on the page you should see if its working which show you how the apt cache is working
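Since (per bcsaller) the local provider checks only that apt-cacher-ng is *installed*, a quick running-check like the following distinguishes the two cases. The process-name match and the report-page path are apt-cacher-ng's usual ones, assumed here rather than taken from the session.

```shell
# Is the apt-cacher-ng daemon actually running, not just installed?
RUNNING=no
if pgrep apt-cacher >/dev/null 2>&1; then
    RUNNING=yes
fi
echo "apt-cacher-ng running: $RUNNING"

# Over HTTP, the report/stats page answers only when the daemon is up:
# wget -qO- http://192.168.122.1:3142/acng-report.html | head -n 5
```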
[23:54] <bcsaller> zodiak: was that running or not? from the log it appears not to have been, which is odd because the local provider checks that its installed (but not running I guess)
[23:59] <hazmat>  zodiak ps aux | grep apt-cacher .. shows something ?
[23:59] <hazmat> in the host