[00:50] <_mup_> juju/enhanced-relation-support r11 committed by jim.baker@canonical.com
[00:50] <_mup_> Usage and impl details
[02:38] <m_3> bac: might be easier to hit me up with email this week for questions about the buildbot review
[03:33] <_mup_> juju/enhanced-relation-support r12 committed by jim.baker@canonical.com
[03:33] <_mup_> Completed impl plan and details
[14:50] <jcastro> SpamapS: charm school rehearsal!
[15:12] <jcastro> SpamapS: 3 minutes!
[16:10] <ivoks> can someone help me out here? what's exactly failing here? http://paste.ubuntu.com/860691/
[16:24] <ivoks> juju destroy-environment otoh works :)
[16:39] <james_w> does someone have a pointer to this summit charm I'm hearing about?
[18:50] <robbiew> m_3: ^^^^^^?
[18:51] <robbiew> ivoks: hazmat jimbaker or bcsaller are your best bets for help there
[18:53] <bcsaller> ivoks: the last line of your paste: 2012-02-28 11:08:53,092 DEBUG Environment still initializing. Will wait.
[18:53] <bcsaller> ivoks: sounds like its still doing the bootstrap
[18:53] <bcsaller> takes a while
[18:55] <ivoks> bcsaller: it's not
[18:55] <ivoks> bcsaller: i've dug into it a bit more
[18:56] <ivoks> bcsaller: none of the juju services are started within the instance
[18:56] <hazmat> ivoks, are you on/ssh'd into the machine?
[18:56] <ivoks> hazmat: i'm not now, but yes, i can ssh into it
[18:56] <hazmat> ivoks, which version of juju are you running?
[18:56] <ivoks> hazmat: in an hour :)
[18:57] <ivoks> hazmat: all precise
[18:57] <ivoks> host and guests
[18:57] <hazmat> ivoks, from ppa or the distro version?
[18:57] <ivoks> distro
[18:57] <hazmat> ivoks, cool, if you can log in and get the cloud-init logs that should be helpful
[18:58] <ivoks> hazmat: i've noticed it complains about bad syntax for juju-admin
[18:58] <ivoks> and also both juju services said that --nodeamonize (or --nodaemon) is a bad option
[18:59] <hazmat> ivoks, that would imply a client vs node version mismatch
[18:59] <hazmat> ie different juju versions
[18:59] <ivoks> hazmat: i'm reimplementing openstack as we speak, so i'll be able to reproduce this in 60-90 minutes (or less)
[18:59] <hazmat> there were some large changes that landed last week, but they caused some incompatibilities between versions
[19:00] <hazmat> ivoks, reimplementing or redeploying? ;-)
[19:00] <hazmat> ivoks, cool
[19:01] <ivoks> hazmat: both :/
[19:01] <ivoks> hazmat: hardware and software :D
[19:31] <ivoks> hazmat: ok, i have system up and ready
[19:32] <hazmat> ivoks, the file i'd be interested in is.. /var/log/cloud-init-output.log
[19:32] <ivoks> give me a second, i need to bootstrap it
[19:39] <ivoks> hazmat: http://paste.ubuntu.com/860965/ <- that's from console
[19:40] <ivoks> hazmat: well, same thing is in the log
[19:43] <m_3> james_w: summit's still in progress... it's a combo of pgsql, memcache, and a fork of lp:~michael.nelson/charms/oneiric/apache-django-wsgi/trunk with summit-specific setup
[19:43] <james_w> m_3, interesting, I'd like to take a look when it's nearing completion
[19:44] <james_w> I have another django thing I want to charm, so I'm keen for some convergence
[19:44] <m_3> james_w: sure, I'll ping you for summit help... I understand most of what's going on so far
[19:44] <m_3> but there are a couple of places where it assumes interactive installation
[19:46] <james_w> in the init-summit stuff?
[19:46] <m_3> james_w: I'll push up to lp:~mark-mims/charms/oneiric/summit/trunk and ping you when there's more to look at
[19:46] <james_w> thanks
[19:46] <m_3> I'm planning on keeping with the puppet-based django charm for now... might back off of that later, but haven't decided yet
[19:50] <james_w> I think it's good as an example, but a set of helpers (puppet modules?) to make writing django charms easier would be better
[19:53] <ivoks> hazmat: machine-agent.log shows: juju.state.errors.MachineStateNotFound: Machine 0 was not found
[19:55] <ivoks> hazmat: i didn't look into juju internals, but does it ask nova-compute for the state of the node? i just noticed "AttributeError: 'dict' object has no attribute 'state'" in nova-compute's log
[19:58] <hazmat> ivoks, sorry was in a meeting
[19:59] <ivoks> hazmat: no worries
[19:59] <hazmat> ivoks, can you pastebin the cloud-init/user-data .. its at /var/lib/cloud/instance/user-data.txt
[19:59] <hazmat> this sounds vaguely familiar
[20:00] <ivoks> hazmat: http://paste.ubuntu.com/861000/
[20:00] <hazmat> SpamapS, m_3 does that ring any bells.. juju-admin: error: unrecognized arguments: 2007-01-19 2007-03-01 2007-08-29 2007
[20:01] <hazmat> its like its getting junk back from the metadata server
[20:01] <hazmat> ivoks, what version of openstack?
[20:01] <ivoks> hazmat: 2012.1~e4~20120224.12913
[20:02] <hazmat> ivoks, if you do curl http://169.254.169.254/1.0/meta-data/instance-id  what do you get back?
[20:02] <ivoks> 1.0
[20:03] <ivoks> and then bunch of dates
[20:03] <hazmat> that's the problem
[20:03] <ivoks> hazmat: http://paste.ubuntu.com/861002/
[20:03] <hazmat> smoser, adam_g does that sound familiar.. metadata server returning back junk
[20:04] <hazmat> i wonder if i can reproduce with devstack
[20:04] <SpamapS> hazmat: yes I've seen that.. IIRC its from broken metadata service.
[20:04] <ivoks> fwiw, this is keystone only auth
[20:05] <smoser> why are you using 1.0 ?
[20:05] <smoser> i think it doesn't matter actually, but i wouldn't use that.
[20:05] <ivoks> er... 1.0 of what?
[20:06] <smoser> in that url
[20:06] <smoser> '1.0/meta-data/instance-id'
[20:06] <hazmat> smoser, because its stable and well known
[20:06] <smoser> dont use that and it works.
[20:06] <smoser> hazmat, 1.0 is probably missing all sorts of things you want anyway.
[20:06] <ivoks> http://169.254.169.254/sdfgsdfg/meta-data/instance-id
[20:06] <ivoks> that gives the same output
[20:06] <smoser> right
[20:06] <smoser> its basically unknown to openstack
[20:07] <smoser> http://169.254.169.254/2009-04-04/meta-data/instance-id
[20:07] <smoser> use that
[20:07] <hazmat> smoser, but its returning back its knowns, and it returns back 1.0 as a valid value
[20:07] <ivoks> now that's right
[20:07] <ivoks> i-00000001
[20:07] <smoser> hazmat, yeah, you're right. it does list it in the index.
[20:07] <smoser> which is obviously wrong.
[20:08] <hazmat> ie. it should work, but its a bug in ostack imo
[20:08] <hazmat> we can update it to a newer version.. but really we don't need much else besides pub/priv address and instance id from the md server
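[Editor's note: a minimal sketch of the check being debugged above. The bug was that the `1.0` metadata path returned its index listing (a run of dates) instead of the instance-id, while the `2009-04-04` path answered correctly. The helper name below is invented for illustration; only the response shapes come from the log.]

```python
import re

# Hypothetical helper: decide whether a metadata response looks like a
# real EC2-style instance-id ("i-" followed by hex digits/letters)
# rather than the junk index listing the buggy 1.0 path returned.
def looks_like_instance_id(body):
    return re.fullmatch(r"i-[0-9a-f]+", body.strip()) is not None

# The broken 1.0 endpoint answered with version strings and dates:
assert not looks_like_instance_id("1.0\n2007-01-19 2007-03-01 2007-08-29")
# The 2009-04-04 endpoint answered correctly:
assert looks_like_instance_id("i-00000001\n")
```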
[20:12] <ivoks> hazmat: is there anything i can do on a live system... i guess i need to change the path somewhere?
[20:12] <smoser> ivoks, please open a bug on nova.
[20:12] <ivoks> smoser: ok
[20:13] <ivoks> not sure how to phrase it :D
[20:13] <smoser> metadata service reports to support 1.0 but serves incorrect data
[20:13] <ivoks> thanks
[20:13] <smoser> then show your wget's
[20:16] <hazmat> ivoks, pls paste a link to the bug here, i'm going to explore it a little more
[20:17] <ivoks> https://bugs.launchpad.net/nova/+bug/942868
[20:17] <_mup_> Bug #942868: metadata service reports to support 1.0 but serves incorrect data <OpenStack Compute (nova):New> < https://launchpad.net/bugs/942868 >
[20:20] <hazmat> ivoks, thanks
[20:20] <ivoks> hazmat: np
[20:22] <ivoks> 2012-02-28 15:22:47,958 INFO Connected to environment.
[20:23] <ivoks> finally :)
[20:41] <hazmat> smoser, does devstack work on precise?
[20:41] <hazmat> there's a bunch of oneiric only labeling on it
[20:42] <ivoks> zul's packages work
[20:42] <ivoks> from the archives
[20:43] <smoser> hazmat, yes, it works.
[20:43] <smoser> FORCE=yes
[20:47] <hazmat> ivoks, smoser thanks
[20:47] <smoser> hazmat, by works, i mean maybe
[20:47] <smoser> :)
[20:47] <smoser> it has worked. and has been broken.
[20:48] <_mup_> Bug #942868 was filed: metadata service reports to support 1.0 but serves incorrect data <juju:Confirmed> <OpenStack Compute (nova):New> < https://launchpad.net/bugs/942868 >
[20:48] <zul> hazmat: yes it does
[20:48] <ivoks> hazmat: note that bootstraping juju with e4 will fail
[20:55] <ivoks> is there a way to pass additional info to cloud-init while doing juju bootstrap (like apt proxy)? i've seen a bug about this issue...
[20:59] <hazmat> ivoks, there isn't, although that one in particular is worth exposing i think
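[Editor's note: juju did not support this at the time, per the exchange above. As a sketch only, if such an option were exposed, the user-data might carry a cloud-config fragment like the one rendered below. `apt_proxy` is a standard cloud-init key of that era; the `render_user_data` helper is invented for illustration, not a juju API.]

```python
# Hypothetical sketch: rendering a #cloud-config user-data document
# that sets an apt proxy. Nothing here is real juju code.
def render_user_data(apt_proxy=None):
    lines = ["#cloud-config"]
    if apt_proxy:
        # cloud-init (of this era) reads this key and writes the
        # corresponding apt configuration on first boot.
        lines.append("apt_proxy: %s" % apt_proxy)
    return "\n".join(lines) + "\n"

print(render_user_data("http://10.0.0.1:3142"))
```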
[21:07] <jcastro> SpamapS: 51 charms, did you promulgate something?
[21:20] <SpamapS> hazmat: I'd love to see the full power of cloud-init exposed to people
[21:20] <SpamapS> hazmat: by and large, we should abstract the things that make sense across all providers, but being able to slide your own cloud-config in would be quite useful.
[21:21] <niemeyer> SpamapS: This would render charms dependent on such cloud-init config..
[21:24] <SpamapS> niemeyer: I hadn't thought about it until you said charms, but really.. subordinate charms almost give us the same capability as cloud-init..
[21:25] <SpamapS> niemeyer: the only thing missing is the ability to do interesting things to the system in early-boot
[21:25] <SpamapS> which I don't think most people want
[21:26] <niemeyer> SpamapS: Agreed
[21:26] <SpamapS> what I do think people want is the ability to have their data on a RAID of EBS volumes, or the ability to install a bunch of extra stuff... both of those will work fine with subordinates
[21:26] <SpamapS> Though the EBS thing will still require manual intervention to attach the EBS volumes.. that would be true of cloud-init too actually.
[21:34] <niemeyer> SpamapS: That'll eventually be handled internally by juju itself
[21:35] <SpamapS> niemeyer: I've been kicking some ideas around in my head about how it might work. I feel like we're still a world away from addressing it though.
[21:36] <niemeyer> SpamapS: As far as focus goes, agreed
[21:36] <SpamapS> niemeyer: like, I don't even know what to ask for. ;)
[21:36] <niemeyer> SpamapS: That's been on the table since first tech sprint, though
[21:38] <niemeyer> SpamapS: It's about management of volumes and relationship between volumes and units
[21:38] <niemeyer> SpamapS: This must be a first-class feature
[21:39] <SpamapS> niemeyer: yeah, I feel like the simplest thing to do is to just track the volumes that get attached to EBS rooted machines.. and offer a way to say "add-unit --volume-id d-203494950"
[21:39] <SpamapS> niemeyer: but I know that's glossing over a lot of details
[21:41] <jcastro> SpamapS: launchpad workflow email on the list, I mention you explicitly to explain something so if you could that would be swell. :)
[21:42] <hazmat> we need juju eating its own tail first
[21:42] <hazmat> then we can launch juju environment infrastructure services like volume management
[21:42] <hazmat> they would be provider specific in the case of volume management
[21:43] <hazmat> ie. perhaps ceph for maas/orchestra, ebs for ec2
[21:43] <hazmat> and then have per unit volumes against the storage service, or in some cases against direct/ephemeral disk
[21:44] <SpamapS> hazmat: at a simple level, those are all doable as subordinates now.
[21:44] <SpamapS> or
[21:44] <SpamapS> when it lands
[21:45]  * SpamapS notes that Ubuntu 12.04 beta1 is dropping any minute.. w/o subordinates.. :(
[21:45] <hazmat> SpamapS, subordinates get a little messy here, and subordinates live in their principal's environment
[21:45] <hazmat> so they'd be coming in after the principal, the sequence is wrong.. it's much simpler if it's in core imo.. yes it could be hacked between cooperating charms with intimate details..
[21:46] <SpamapS> hazmat: the subordinate can be related to other services though right? So the ceph-client subordinate will be able to ask services where to mount their ceph-backed-block device
[21:46] <hazmat> SpamapS, first subordinate branches just landed
[21:46] <hazmat> SpamapS, yes
[21:46] <SpamapS> Ordering, once again, rears its ugly head
[21:46] <hazmat> SpamapS, but what's the lifecycle.. the subordinate can be added to the principal at any point in the principal's lifecycle.
[21:47] <hazmat> so the principal already has a bunch of data, and then you're copying data around volumes and coordinating the unit's underlying service and relations during that migration.. it's pretty messy imo
[21:47] <SpamapS> hazmat: the principal needs a way to delay full configuration until certain relations are established.. in this case.. the subordinate "where do I put my data" relation. :)
[21:48] <SpamapS> hazmat: forget subordinates, this is true of other services too like nova api that have two options (local or remote database storage)
[21:48] <hazmat> SpamapS, again this is much simpler if something as intrinsic as storage is in the core
[21:48] <hazmat> SpamapS,  required relations are orthogonal, but also of import
[21:49] <hazmat> deploying subordinates when the principal isn't in a known running state complicates their communication imo.. a unit doesn't respond to relation events till it's running/started
[21:49] <SpamapS> I see the two use cases as equals until we have any kind of real discussion about storage.
[21:50] <SpamapS> hazmat: so, the answer I have for the nova problem and ceph (it has this problem too) is to just have a config option giving the relations hints about when to store data.
[21:52] <SpamapS> So for ceph, you just tell ceph "start-nodes=4" and it delays data storage until there are 4 peer units.
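[Editor's note: a sketch of the "start-nodes" idea above, i.e. a peer-relation hook holding off storage initialization until enough units have joined. The function and argument names are illustrative, not real charm-helper APIs.]

```python
# Hypothetical gate for a peers-changed hook: don't initialize data
# storage until the cluster has reached the configured size.
def should_start(peer_units, start_nodes):
    # Count this unit plus its visible peers against the threshold.
    return len(peer_units) + 1 >= start_nodes

assert not should_start(["ceph/1", "ceph/2"], 4)          # only 3 units so far
assert should_start(["ceph/1", "ceph/2", "ceph/3"], 4)    # quorum reached
```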
[21:52] <hazmat> SpamapS, so i think we've walked around two solutions for required.. one is per the subordinate implementation, don't deploy actual units till the required relations are satisfied.. ie. in much the same way that subordinate units aren't actualized till the subordinate relation is established
[21:53] <hazmat> the other is that a unit doesn't join provider relations till its requires are satisfied
[21:54] <hazmat> which is actually a bit better/needed imo, else you get additional ordering issues around when something is fully operational.. ie deploy with dep relations doesn't necessarily solve the ordering around consumers of the service attempting to use it before its initialization.. but waiting till its dependencies are solved and running does.
[21:54] <hazmat> SpamapS, i don't think config option is the right solution
[21:55] <SpamapS> hazmat: no, its the "current" solution
[21:55] <hazmat> fair enough
[21:55] <_mup_> juju/enhanced-relation-support r13 committed by jim.baker@canonical.com
[21:55] <_mup_> Reworked docs on changes to relation commands
[22:05] <jcastro> niemeyer: hey we did this already right "[niemeyer] drive dicussion about interface documentation on juju mailing list"?
[22:05]  * jcastro is cleaning up spec work items
[22:05] <niemeyer> jcastro: Kind of..
[22:06] <niemeyer> jcastro: I did this before, maybe a couple of times even, but I wouldn't consider the problem as solved
[22:06] <jcastro> my bp is that we have a discussion, a solution or not isn't in my scope. :)
[22:07] <niemeyer> jcastro: LOL
[22:08] <hazmat> niemeyer, did you see clint's email to the list re env vars.. i asked him to post there for your feedback
[22:09] <niemeyer> hazmat: I missed that.. looking
[22:09] <jcastro> ok I'll put "inprogress" until I am more desperate to close it
[22:14] <hazmat> jcastro, it wouldn't be hard to do some bespoke interface analysis for documentation
[22:15] <hazmat> it's kind of hacky.. but we could track all the keys being get/set
[22:15] <jcastro> hazmat: hey speaking of, we need to figure out how to style the generated sphinx docs
[22:15] <hazmat> via charm introspection
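[Editor's note: a sketch of the introspection idea above, wrapping relation get/set so every key a charm touches gets recorded, which could feed interface documentation. The `RecordingRelation` class and the dict-backed `backend` are invented for illustration; the real plumbing would sit over relation-get/relation-set.]

```python
# Hypothetical wrapper that records which relation keys a charm
# reads and writes, as raw material for documenting its interface.
class RecordingRelation:
    def __init__(self, backend):
        self.backend = backend      # stand-in for real relation storage
        self.keys_read = set()
        self.keys_written = set()

    def get(self, key):
        self.keys_read.add(key)
        return self.backend.get(key)

    def set(self, key, value):
        self.keys_written.add(key)
        self.backend[key] = value

rel = RecordingRelation({"host": "db.example.com"})
rel.get("host")
rel.set("port", 5432)
assert rel.keys_read == {"host"}
assert rel.keys_written == {"port"}
```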
[22:15] <jcastro> say something magical like "bootstrap can do all that for us easily"
[22:15] <hazmat> jcastro, ugh.. that means another rt... better start now if you know what you want..
[22:15] <jcastro> heh ok
[22:16] <hazmat> jcastro, i'd look around/google for sphinx themes
[22:16] <hazmat> and see if there's something reasonable out there
[22:16] <jcastro> I'd like to see if anyone else in the project is styling their sphinx
[22:16] <jcastro> maybe we can steal
[22:16] <hazmat> i don't think bootstrap wholesale is going to work with sphinx without some dev time
[22:16] <hazmat> jcastro, yup.. that's a good plan
[22:17] <hazmat> jcastro, i kind of like the sphinx theme the celery folks did
[22:17] <jcastro> I don't see other ones styling it, I am debating if it's even worth the effort right now
[22:17] <james_w> jcastro, I'm pretty sure http://developer.ubuntu.com/packaging/html/ is sphinx
[22:17] <hazmat> oh.. nevermind that's the old one
[22:19] <jcastro> james_w: oh nice, where'd you get that, dholbach?
[22:19] <hazmat> jcastro, actually it looks like we don't need an RT, we can just mod the source..
[22:20] <hazmat> the makefile for the cron is the same
[22:20] <james_w> jcastro, yeah, he'll know for sure
[22:20] <hazmat> here's some.. http://pythonic.pocoo.org/2010/1/8/new-themes-in-sphinx-1-0
[22:21] <james_w> http://bazaar.launchpad.net/~ubuntu-packaging-guide-team/ubuntu-packaging-guide/trunk/files/head:/themes/ubuntu/
[22:21] <hazmat> these are the built in ones.. http://sphinx.pocoo.org/theming.html
[22:21] <hazmat> james_w, cool.. it kind of looks ugly but we can grab the colors ;-)
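[Editor's note: for reference, switching Sphinx themes is a two-line change in `conf.py`. `html_theme` and `html_theme_path` are standard Sphinx settings; the theme names below are placeholders for whatever gets picked from the links above.]

```python
# Fragment of a Sphinx conf.py: pick one of the built-in themes...
html_theme = "nature"

# ...or point at a custom theme checked into the docs tree, as the
# Ubuntu packaging guide does:
# html_theme_path = ["themes"]
# html_theme = "ubuntu"
```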
[22:24] <hazmat> m_3, ping
[22:26] <m_3> hazmat: hey
[22:27] <hazmat> m_3, hey just wanted to check and see how charmrunner is working out
[22:27] <m_3> hazmat: working through some recursion problems with the planner
[22:27] <hazmat> m_3, oh.. do tell?
[22:27] <hazmat> m_3, which charm?
[22:27] <m_3> on ganglia for some reason... lemme get you the logs... link coming
[22:28] <hazmat> m_3, typically that means a semantic in the charm error but not always
[22:28] <m_3> also need to coerce max_time into float for watch
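[Editor's note: the `max_time` coercion mentioned above, sketched defensively. Values coming from config files or CLI flags often arrive as strings, so coerce before doing arithmetic in the watcher; the function name and default are illustrative, not charmrunner's actual code.]

```python
# Hypothetical helper: normalize a timeout value to float, falling
# back to a default when the input is missing or malformed.
def coerce_max_time(value, default=600.0):
    try:
        return float(value)
    except (TypeError, ValueError):
        return default

assert coerce_max_time("30") == 30.0     # string from config/CLI
assert coerce_max_time(None) == 600.0    # unset -> default
```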
[22:29]  * m_3 grrrr latency
[22:29] <hazmat> m_3, i switched the project maintainer to juju-jitsu hackers.. if you need to merge something
[22:30] <m_3> http://ec2-50-16-5-188.compute-1.amazonaws.com:8080/job/oneiric-local-charm-ganglia/
[22:30] <m_3> hazmat: ^^
[22:30] <m_3> I'd like to take it down soon so lemme know when you're done digging through the workspace
[22:31] <m_3> hazmat: thanks for the ownership... I'll push the float commit
[22:31] <niemeyer> hazmat, SpamapS: Responded
[22:31] <hazmat> m_3, what's the channel for test notifications?
[22:32] <m_3> hazmat: ##charmbot-test
[22:32] <hazmat> ah it is a double hash
[22:33] <m_3> trying to be polite with made-up channels :)
[22:33] <m_3> I'll figure out the problem (like you said, prob just syntax somewhere in the interface)
[22:34] <m_3> but that's pretty much status atm
[22:38] <hazmat> m_3, so besides ganglia its been okay?
[22:38] <hazmat> m_3, are you copying the juju-record output into the workspace?.. is that logs.zip?
[22:39] <hazmat> its more than just the logs but fair enough
[22:39] <hazmat> cool!
[22:39] <m_3> hazmat: just walking through the list of fails... ganglia was the third type of fail
[22:40] <m_3> I need to re-run to test other timeout fails with new max_time... that was my next step
[22:41] <m_3> yeah, it's coming along nicely in general though
[22:41] <hazmat> m_3, yeah.. the service watcher timeout code was a little suspect i think
[22:42] <hazmat> it had too many ways of going at it
[22:42] <m_3> hazmat: there was another problem with the runner generating and reading plans from different places, but I'll push that back too
[22:42] <hazmat> m_3, sounds good
[22:42] <m_3> there's a bit of strange behavior in the watcher besides timeout, but one at a time
[22:44] <m_3> cool... I'm gonna tear down the ganglia run then
[22:46] <hazmat> niemeyer, responded
[22:48] <niemeyer> hazmat: re-re-responded
[23:33] <hazmat> jamespage, wow that's seriously insane about jenkins master/slave
[23:34] <hazmat> that's like the craziest design impl i've heard all year.
[23:34] <niemeyer> hazmat: Do you realize the irony there? ;-)
[23:35] <hazmat> niemeyer, yeah.. i'm a slave, no i'm the master ;-)