/srv/irclogs.ubuntu.com/2013/08/21/#juju.txt

=== varud_away is now known as varud
=== varud is now known as varud_away
=== varud_away is now known as varud
=== varud is now known as varud_away
=== varud_away is now known as varud
=== JoseeAntonioR is now known as jose
=== varud is now known as varud_away
=== varud_away is now known as varud
[03:42] <stub> fwereade: Still there?
[03:50] <stub> fwereade: I'm not sure exactly what you mean by early departure for peer relations regarding Bug #1192433, but if it stops the unit being listed in relation-list while all the -departed and -broken hooks are running, that addresses the bug.
[03:50] <_mup_> Bug #1192433: relation-list reporting dying units <jujud> <relations> <juju-core:Triaged by fwereade> <https://launchpad.net/bugs/1192433>
=== varud is now known as varud_away
[03:52] <stub> fwereade: Ordering of provides/requires actually sounds more interesting to me. For example, it would have addressed Bug #1187508
[03:52] <_mup_> Bug #1187508: adding a unit of a related service to postgresql is racy <postgresql (Juju Charms Collection):In Progress by davidpbritton> <https://launchpad.net/bugs/1187508>
[03:53] <stub> fwereade: If the provider's joined hook were run before the requirer's joined hook, then I could guarantee that the provider has granted access to the requirer before the requirer attempts to connect. This makes the client charms much easier to write.
[04:17] <marcoceppi> stub: It'd be a slight departure from the current story, not that I have a problem with that per se, but the idea I've had instilled as to what each hook means is as follows:
[04:19] <marcoceppi> joined - "handshake" hook, expect to have no data available
[04:19] <marcoceppi> changed - "stuff on the wire", always check values, idempotency
[04:19] <marcoceppi> departed - "waving goodbye", relation values should still be available, do what you need to with these values
[04:19] <marcoceppi> broken - "clean up", make any changes post-mortem
[04:19] <marcoceppi> how does having the client's joined/changed hooks run prior to the provider's create a race condition? That just seems like non-idempotent hooks to me
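A minimal sketch of the hook contract marcoceppi lays out above, as a plain shell relation-changed hook; the db relation name, the key names, and the myapp config/restart step are illustrative, not taken from the conversation:

#!/bin/sh
# hooks/db-relation-changed -- "stuff on the wire": always re-read values, stay idempotent
set -e
host=$(relation-get host)
password=$(relation-get password)
if [ -z "$host" ] || [ -z "$password" ]; then
    juju-log "relation data not complete yet; waiting for the next -changed event"
    exit 0
fi
# Re-render config and restart; safe to run any number of times with the same values.
mkdir -p /etc/myapp
printf 'host=%s\npassword=%s\n' "$host" "$password" > /etc/myapp/db.conf
service myapp restart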
[04:20] * marcoceppi reads bug report
[04:20] <stub> marcoceppi: Let's say you have an existing setup, with a single unit of service A related to a single unit of service B.
[04:20] <stub> marcoceppi: Now add a new unit to service A (the requirer or client).
[04:21] <stub> marcoceppi: It looks at the relation, picks up authentication credentials, and attempts to use them *before B has had a chance to authorize the new unit's IP address*.
[04:21] <marcoceppi> right, I see. db-admin relation data is already set because that unit's done its thing, but relation-changed needs to run on the postgresql side to complete the setup
[04:22] <marcoceppi> stub: I'm inclined to say this is a bug with juju; from my understanding the data should be unique per unit <-> unit (though core may need to correct my understanding of this)
[04:22] <marcoceppi> In this case, each unit should get its own credentials, not the credentials cached from the prior unit's relation
[04:23] <marcoceppi> and should follow the normal hook execution as if it were a new add-relation
[04:23] <stub> marcoceppi: Yes, you could make that argument. You could even make it somewhat backwards compatible if there were relation->unit and unit->unit data, with the unit->unit data overlaid on top of the relation->unit data.
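Until ordering or per-unit buckets exist, one defensive pattern for the race stub describes is to have the provider publish which units it has authorized and have the client refuse to connect until it sees itself listed. A hedged sketch; the allowed-units key is illustrative, not a documented juju interface:

#!/bin/sh
# hooks/db-relation-changed on the requirer: don't use the credentials until authorized
set -e
allowed=$(relation-get allowed-units)   # hypothetical space-separated list set by the provider
case " $allowed " in
    *" $JUJU_UNIT_NAME "*)
        juju-log "provider has authorized $JUJU_UNIT_NAME; configuring client" ;;
    *)
        juju-log "waiting for provider to authorize $JUJU_UNIT_NAME"
        exit 0 ;;
esac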
[04:24] * marcoceppi nods
[04:24] <stub> I don't think we would see a change like that until 2.0+ though, even if it were agreed on today.
[04:25] <marcoceppi> I feel like this would resolve your issue and is the expected behavior of juju, at least the first part about a fresh relation each time
[04:25] <marcoceppi> Possibly
[04:25] <marcoceppi> In the meantime your patch for postgres landed (while not the best answer to the question)
[04:26] <stub> ta :)
[04:27] <marcoceppi> I'd hope this could be documented and solidified either as expected behaviour or fixed in core soon to represent the actual anticipated behaviour
[04:27] <marcoceppi> I'll poke #juju-dev tomorrow
[04:27] <marcoceppi> "tomorrow"
[04:30] <stub> marcoceppi: The documentation certainly needs to be clearer. I think everyone has trouble when they start dealing with more complex charms, and ends up writing test cases to work out wtf is actually happening.
[04:31] <marcoceppi> stub: yeah, even during my demo last week I was like "wtf is going on" and had to step through debug-hooks as if it were part of the training, just so I could understand what was actually going on
[04:31] <stub> marcoceppi: I don't know if the change you propose to relation storage will make things better conceptually or not.
[04:31] <marcoceppi> I'd like to think so, but I may be biased
[04:32] <stub> It means that to retrieve data from a specific bucket in the relation, two units need to be specified rather than one.
[04:34] <stub> I like specifying the ordering of hooks. I don't think it will slow things down much in the real world, and it greatly reduces the number of states hooks need to cope with.
=== varud_away is now known as varud
[04:45] <marcoceppi> stub: I don't see how you'd have to specify two units?
[04:46] <marcoceppi> I think relation-get always assumes the current unit, as you technically can't/shouldn't be able to spy on other units
[04:46] <stub> marcoceppi: Right. But how do you retrieve the current unit's data? The data on its end of the relation? It has one bucket per remote unit.
[04:47] <marcoceppi> stub: in the relation-* hooks $JUJU_REMOTE_UNIT is prefilled and used by the relation-* commands
[04:47] <marcoceppi> it's just pulled from the env
[04:47] <stub> ok, so you would make it impossible to access the other buckets
[04:48] <stub> peer relations certainly need to spy on other units' data
[04:48] <marcoceppi> stub: With the exception of peer relations, I think the idea is you shouldn't be able to tap into other relations that the unit isn't aware of
[04:49] <marcoceppi> peers are a bit different, but still follow the same event cycle; it's just that each unit is individually connected to every other, so you could relation-get -r <rel-id> <unit> and get data, as you're technically connected to it
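A hedged sketch of that pattern -- walking the peer relation and reading each peer's bucket with the -r flag marcoceppi mentions; the cluster relation name and hostname key are illustrative:

#!/bin/sh
# hooks/cluster-relation-changed: collect whatever each peer has published
set -e
for rel_id in $(relation-ids cluster); do
    for unit in $(relation-list -r "$rel_id"); do
        addr=$(relation-get -r "$rel_id" hostname "$unit")
        juju-log "peer $unit published hostname: ${addr:-<unset>}"
    done
done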
[04:51] * marcoceppi heads to bed
[04:52] <stub> Goodnight
=== varud is now known as varud_away
[04:52] <stub> I'll continue chasing race conditions from the hoops I have to jump through because there is no defined order that my hooks get called :)
[04:53] <stub> A maze of twisty hooks, all of them alike.
=== varud_away is now known as varud
=== varud is now known as varud_away
=== varud_away is now known as varud
=== tasdomas_afk is now known as tasdomas
[07:41] <fwereade> stub, heyhey
[07:41] <stub> fwereade: heya
[07:42] <fwereade> stub, so my current plan is to cause only peer units to (appear to) depart the relation as soon as they're dying
[07:43] <fwereade> stub, because that directly addresses the case in your bug (for which many thanks, btw)
[07:43] <stub> That sounds fine to me.
[07:45] <fwereade> stub, it is not a watertight guarantee of ordering, though -- any given unit might be busy with, say, a long-running config-changed hook while the remote unit is departing
[07:46] <fwereade> stub, and *could* thus still end up only observing the departure after it actually happened
[07:47] <stub> Yes, if other hooks are allowed to run at the same time, then anything that the hook knows may be a lie. It only has a snapshot of the environment from when it started.
[07:48] <stub> I'm just avoiding that whole issue at the moment in places where it is important, such as failover.
[07:48] <fwereade> stub, I don't think that there is much realistic hope of enforcing the sort of lockstep ordering I think you would ideally like to see across the whole system
[07:48] <stub> So if you remove-unit a master, you are going to blow your foot off if you are silly enough to make other changes before it is finished.
[07:49] <stub> fwereade: I agree lockstep across the system is probably not a good idea. However, I'm thinking that sequencing the relation joined/changed/departed/broken hooks would help people write less buggy charms.
[07:50] <fwereade> stub, however I am very keen to make it *easier* to write charms that are sane and effective
[07:50] <stub> The number of possible states even a simple relationship can be in is very large, and some of those states are rare (e.g. a unit is being removed before another unit's -joined hook has started running).
[07:52] <fwereade> stub, are you familiar with https://juju.ubuntu.com/docs/authors-charms-in-action.html ? I wrote it quite a long time ago but I don't think it's had quite the exposure it should have (I wasn't paying enough attention to the user-facing docs, I kinda threw what I'd written over the wall and didn't follow up)
[08:28] <stub> urgh... proxy seized up
=== defunctzombie is now known as defunctzombie_zz
=== defunctzombie_zz is now known as defunctzombie
=== defunctzombie is now known as defunctzombie_zz
[09:19] <fwereade> stub, sorry, didn't see you come back
[09:20] <fwereade> stub, not sure if you saw; I asked if you'd seen https://juju.ubuntu.com/docs/authors-charms-in-action.html ?
[09:26] <stub> fwereade: yes, I've seen that
[09:28] <stub> absorbed it, maybe not :) I see it is describing the current departed/broken/actually-leaves behavior
[09:32] <ehw> hey guys, does ppa:juju/stable (1.12 I think) support completely offline maas+openstack deployment?
[09:35] <stub> fwereade: In case it didn't get through, I said before that my highest priority issue is Bug #1200267, to make my tests reliable. The hook ordering stuff is all complex and tricky, but so is the charm I'm working on.
[09:35] <_mup_> Bug #1200267: Have juju status expose running and pending hooks <juju-core:Triaged> <https://launchpad.net/bugs/1200267>
[09:39] <fwereade> stub, ok, that's interesting... there's a bit of a problem there in that it's impractical to try to track the possible future, but it would be possible for the unit agent to report whether or not it's *currently* idle
[09:40] <fwereade> stub, and knowing that every unit in play is currently idle would probably be sufficient -- *at the moment* -- to determine that the system's in a steady state
[09:40] <jam> ehw: we may have bugs in that area (especially with charms themselves), but with some config, I think stable has enough to work disconnected from the internet. You'll need to do stuff like get tools into the local cloud first, etc.
[09:41] <stub> fwereade: Yes. If no hooks are running, and no hooks are queued, we are steady.
[09:41] <fwereade> stub, I'm just not sure how useful that guarantee is in practice -- there's always the possibility that some other user will add a unit, or change config, or something
[09:41] <stub> fwereade: This is for writing tests.
[09:41] <ehw> jam: is there a way to get the tools in? are they stored in MAAS?
[09:42] * ehw has seen the FileStorage bits in MAAS, but they don't seem connected to anything
[09:42] <jam> ehw: there is a 'juju sync-tools' command. I believe in the 1.12 stable release it always downloads them from amazon; in 1.13 there is a "--source" flag so you can copy them from your local disk.
[09:42] <jam> ehw: there is also ppa:juju/devel if you find 1.12 doesn't work
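Roughly how the tools staging jam describes would look; treat the exact flags as an assumption (--source is the 1.13 behaviour he mentions, and the path is illustrative):

# 1.13+: copy previously downloaded tools from local disk into the environment's storage
juju sync-tools --source /path/to/local/tools
# 1.12: sync-tools fetches from the public bucket, so run it while the client still has internet access
juju sync-tools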
[09:42] <fwereade> stub, and if in the future we add a juju-run mechanism (i.e. allowing a unit to feed info back into juju outside of a regularly scheduled hook) it's almost no guarantee at all
[09:43] <fwereade> stub, ok, the test context makes it much saner
[09:44] <ehw> jam: ok, I'll see if I can try this out with 1.12
[09:44] <fwereade> stub, is it very important to know *which* hooks are queued/running, or is a single bit of steadiness info per unit sufficient?
[09:45] <stub> fwereade: A single bit of steadiness is just fine for my use case.
[09:45] <jam> ehw: so I'm pretty sure that for 1.12 the client (the place you are running 'juju *' from) needs at least some internet access. Once you're set up, I don't think the actual MaaS nodes need public internet access (except for some specific charms that talk out to the network, etc.)
[09:45] <stub> It could even just be 'juju wait', which blocks until a steady state is reached. Then I don't even have to bother with polling juju status.
[09:46] <ehw> jam: have a deployment next week in a secure environment; working on a methodology for this atm
[09:47] <fwereade> stub, that feels like overreaching on juju's part, considering the possibility of juju-run, which prevents us from making that guarantee -- I would be much more comfortable with exposing current steadiness per unit, with the caveat that the information can only be acted upon if you have a lot of specific knowledge about the environment
[09:48] <stub> fwereade: ok
[09:51] <stub> I just need some way to tell that it is ok to proceed with the test, rather than the current approach of sleep() and hope :)
[09:55] <stub> fwereade: Since we are talking about this only being useful for tests, this might sit better in amulet if there is some way of reaching into juju and inspecting what needs to be inspected.
[09:57] <stub> (the current implementation of wait there just parses the 'juju status' output, similar to my own test harness)
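A rough sketch of that status-parsing approach. The agent-state field is as it appeared in juju-core 1.x status output, and this only approximates "steady": a unit that is mid-hook still reports started, which is exactly fwereade's caveat above.

#!/bin/sh
# Wait until every reported agent-state (machines and units) is "started".
steady() {
    status=$(juju status --format yaml)
    total=$(echo "$status" | grep -c 'agent-state:')
    started=$(echo "$status" | grep -c 'agent-state: started')
    [ "$total" -gt 0 ] && [ "$total" -eq "$started" ]
}
until steady; do
    sleep 5
done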
=== defunctzombie_zz is now known as defunctzombie
=== defunctzombie is now known as defunctzombie_zz
[10:13] <stub> juju-run might fix this horrible bit of code. For a certain operation, I need to block until the database is out of backup mode. It might be in backup mode because backups are being run, or it might be in backup mode because a backup started and failed to complete. The charm currently emits details and instructions on how to clear it manually to the log file every few minutes until it is unblocked.
[10:14] <stub> Hmm... I can see this turning into twisted :)
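A hedged guess at the shape of that loop, assuming PostgreSQL's pg_is_in_backup()/pg_stop_backup() functions are available in the version being deployed; none of this is taken from the actual charm:

#!/bin/sh
# Block until the database leaves backup mode, nagging the log every few minutes.
while [ "$(sudo -u postgres psql -Atc 'SELECT pg_is_in_backup()')" = "t" ]; do
    juju-log "still in backup mode; if a backup died, run SELECT pg_stop_backup() as postgres to clear it"
    sleep 300
done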
=== dosaboy_ is now known as dosaboy_afk
=== tasdomas is now known as tasdomas_afk
=== rogpeppe1 is now known as rogpeppe
[14:52] <jcastro> Reminder: Charm meeting in ~1 hour!
[14:53] <jcastro> marcoceppi: marcoceppi utlemming arosales ^^^
=== freeflying is now known as freeflying_away
[15:01] <utlemming> I'm using ppa:juju/devel and looking at "juju debug-log"...
[15:03] <arosales> jcastro, ack and thanks for the reminder
[15:23] <jamespage> jcastro, what time is the charm call?
[15:24] <jcastro> jamespage: top of the hour
[15:24] <jamespage> jcastro, ack
[15:24] <jcastro> ~40 minutes
[15:53] <jcastro> arosales: pad is up to date and ready!
[15:54] <arosales> jcastro, thanks; are you setting up the hangout?
[15:54] <jcastro> yep
[15:58] <jcastro> http://ubuntuonair.com/ for those who want to follow along in the meeting
[15:58] <jcastro> https://plus.google.com/hangouts/_/96e374b291c31f8310be00593580b52eb8418b02?authuser=0&hl=en if you want to participate
[16:01] * m_3 having hardware issues with latest nvidia updates :-(
[16:04] <marcoceppi> m_3: yeah, those killed me last night
=== varud is now known as varud_away
=== varud_away is now known as varud
[16:23] <m_3> marcoceppi: maybe write that as a full deploy line
[16:24] <arosales> jamespage, jcastro we also need to address this bug in precise: https://bugs.launchpad.net/juju-core/+bug/1203795
[16:24] <_mup_> Bug #1203795: mongodb with --ssl not available in precise <doc> <papercut> <juju-core:Confirmed> <https://launchpad.net/bugs/1203795>
[16:25] <arosales> or at least figure out the story if we can't backport mongo and lxc
[16:28] <kurt_> jamespage: must the ceph.yaml go in the .juju directory?
[16:28] <jamespage> kurt_, no
[16:28] <kurt_> where should it go?
[16:28] <jamespage> kurt_, you pass it to juju deploy with the --config flag
[16:28] <kurt_> ah ok
[16:29] <jamespage> juju deploy --config config.yaml
[16:29] <kurt_> it's unnecessary after the deployment, correct?
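For reference, the file passed to --config is just YAML keyed by service name. A hedged example for the ceph case being discussed; the option names follow the ceph charm's config of that era and the values are placeholders, so double-check both against the charm before using:

cat > ceph.yaml <<'EOF'
ceph:
  fsid: "some-uuid-from-uuidgen"              # placeholder
  monitor-secret: "key-from-ceph-authtool"    # placeholder
  osd-devices: "/dev/sdb /dev/sdc"
  source: "cloud:precise-updates/folsom"
EOF
juju deploy --config ceph.yaml ceph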
[16:29] <AskUbuntu> is openstack swift required for juju integration? | http://askubuntu.com/q/335447
[16:35] <kurt_> jamespage: is the ceph.yaml in scuttlemonkey's guide (http://ceph.com/dev-notes/deploying-ceph-with-juju/) up to date? I'm looking specifically at the ceph-osd source and device information and the ceph-radosgw source.
[16:36] <kurt_> and I'm wondering if the source string in the gui for ceph would then be changed to quantal to try it that way?
[16:43] <hazmat> kurt_, precise is fine
[16:43] <hazmat> re source string for gui
[16:43] <hazmat> er, default
[16:46] <kurt_> hazmat: the default source string in the gui is "cloud:precise-updates/folsom"
[16:47] <hazmat> kurt_, oh sorry, you meant for ceph within the gui, not for deploying the gui
[16:47] <kurt_> hazmat: my questions above may seem confusing, sorry - I had separate ones for the CLI and gui methods
[16:49] <hazmat> kurt_, if you're deploying the precise charm that's a good default .. it's the charm that determines which distro release you're getting, so you want the source string to match the charm series name.
[16:49] <kurt_> CLI - my questions were on the ceph.yaml, and for the gui my question was about the source and, now looking at it, the osd-devices string.
[16:51] <kurt_> ok, there appears to be no guide for using the gui to deploy ceph. I can see there are separate charms for ceph-radosgw, ceph-osd, and ceph
[16:52] <kurt_> i should probably just stick to the manual method for now
[16:56] <weblife> Is there any important data stored on ephemeral storage? Use case: stopping the AWS bootstrap instance and starting it again.
=== BradCrittenden is now known as bac
[17:04] <jcastro> hazmat: bah I lost the URL, deployer docs are at ... ?
[17:07] <hazmat> http://pythonhosted.org/juju-deployer/
[17:07] <jcastro> ta
[17:08] <jcastro> hazmat: so this doesn't explain how to use it though, no examples in the man page either
[17:08] <jcastro> assuming I have example.yaml as an exported bundle how do I deploy it?
[17:08] <hazmat> jcastro, good point. juju-deployer -v -W -c example.yaml name_of_bundle
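Putting that invocation together with a minimal bundle file; the target name "openstack" and the services are illustrative, and the stanza layout follows the deployer config format of the time, so treat it as a sketch rather than a reference:

cat > example.yaml <<'EOF'
openstack:
  series: precise
  services:
    mysql:
      charm: mysql
      num_units: 1
    keystone:
      charm: keystone
      num_units: 1
  relations:
    - [keystone, mysql]
EOF
# -v and -W as in hazmat's example; -c points at the bundle, the final argument is the target name
juju-deployer -v -W -c example.yaml openstack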
[17:14] <jcastro> hazmat: aha! I get an error when trying to deploy this: http://bazaar.launchpad.net/~openstack-charmers/+junk/openstack-ha/view/head:/python-redux.yaml
[17:14] <hazmat> jcastro, which is? pastebin? on maas?
[17:15] <jcastro> http://pastebin.ubuntu.com/6011028/
[17:16] <kurt_> scuttlemonkey: I have some updated instructions for the monitor-key generation section of your ceph documentation
[17:16] <kurt_> http://pastebin.ubuntu.com/6011030/
[17:18] <hazmat> jcastro, that reads like an error in the config file
[17:18] <hazmat> jcastro, i.e. it's not valid yaml
[17:19] <jcastro> hazmat: ah nm, I found the problem
[17:19] <hazmat> jcastro, html instead of yaml ?
[17:20] <jcastro> yeah, so if you wget that page from lp you get the html
[17:20] <jcastro> duh
[17:20] <hazmat> jcastro, yeah.. just verified that the python-redux.yaml is correct
[17:20] <hazmat> er.. is valid yaml
[17:28] <jcastro> heya kurt_ you gotta check this out, you still have maas?
[17:29] <kurt_> yep.
[17:29] <jcastro> snag that yaml file
[17:29] <jcastro> then get juju-deployer from ppa:juju/pkgs
[17:29] <jcastro> http://pastebin.ubuntu.com/6011079/
[17:30] <kurt_> jcastro: will this tear up what I have running currently?
[17:30] <jcastro> It will probably clobber the universe
[17:30] <marcoceppi> jcastro: wait, is juju-deployer in juju/pkgs?
[17:31] <jcastro> actually, I have it in saucy
[17:31] <jcastro> I've seen it in juju/pkgs
[17:31] <hazmat> it's in the distro
[17:31] <hazmat> in saucy i think
[17:31] <marcoceppi> jcastro: that's not the right version
[17:31] <marcoceppi> I need to remove that
[17:31] <kurt_> jcastro: shall I wait then?
[17:32] <jcastro> yeah
[17:32] <jcastro> jamespage: it bombed out during this part: http://pastebin.ubuntu.com/6011085/
[17:33] <jcastro> do I need to have a bunch of locally branched stuff?
[17:33] <marcoceppi> I've never heard of swift-storage-z1
[17:33] <hazmat> marcoceppi, 3 swift nodes
[17:34] <hazmat> juju-deployer is also in a daily ppa https://launchpad.net/~ahasenack/+archive/juju-deployer-daily with dep https://launchpad.net/~ahasenack/+archive/python-jujuclient or from pypi.. virtualenv --system-site-packages deployer && source deployer/bin/activate && pip install juju-deployer
[17:35] <ahasenack> about swift-storage: the charm is called swift-storage, but the config deploys it as swift-storage-zN
[17:36] <ahasenack> where N is 1, 2 or 3 (if we are talking about the openstack juju-deployer config file)
[17:36] <ahasenack> so I think that config is missing a "charm: swift-storage" entry under swift-storage-z1
[17:36] <ahasenack> 2013-08-21 13:29:48  Deploying service swift-storage-z1 using local:precise/swift-storage-z1 <-- wrong
[17:37] <ahasenack> should be
[17:37] <ahasenack> 2013-08-21 13:29:48  Deploying service swift-storage-z1 using local:precise/swift-storage
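The fix ahasenack is describing would be a deployer config stanza along these lines -- the deployed service name differs from the charm, so the charm is named explicitly; the options shown are placeholders, not taken from the actual file:

swift-storage-z1:
  charm: swift-storage
  num_units: 1
  options:
    zone: 1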
[17:39] <marcoceppi> evilnickveitch, jcastro: I also found out there are about 25 or so extra undocumented environments.yaml config options for juju-core
[17:40] <marcoceppi> okay, 25 might be an overestimate, but there are quite a few
[17:43] <kurt_> jamespage: in looking at your cephalopod article - do you have precise instructions instead of quantal?
[17:51] <hazmat> ahasenack, fwiw that's fixed in deployer trunk, charm: becomes optional, it will use the name in the charm metadata if none is specified
[17:53] <ahasenack> ok
=== defunctzombie_zz is now known as defunctzombie
[19:57] <marcoceppi> ahasenack, hazmat: btw I've got two merge proposals for deployer, one is minor and cosmetic, the other is a blocker for amulet: https://code.launchpad.net/~marcoceppi/juju-deployer/local-branches/+merge/181207 if you could review sometime this week that'd be great!
[19:59] <hazmat> marcoceppi, ack i'll get those merged in this evening.
[19:59] <marcoceppi> hazmat: thank you sir!
=== BradCrittenden is now known as bac
[21:10] <sidnei> adam_g: did you have issues with rabbitmq-server 2.7.1? it's failing to start when deploying on the precise local provider
[21:11] <adam_g> sidnei, no, it's been working fine in non-local deployments for a while
[21:11] <adam_g> sidnei, with the exception of clustering in environments with funny DNS
[21:11] <sidnei> i guess it might be an lxc-specific failure
[21:11] <adam_g> sidnei, what is the failure?
[21:11] <sidnei> there's nothing interesting in the rabbit logs, only 'failed to start'
[21:11] <sidnei> and epmd is left running
[21:12] <adam_g> sidnei, can you resolve your local hostname and IP?
[21:12] <adam_g> :)
[21:12] <sidnei> uhm, hostname is set to ubuntu, it might be resolving to multiple addresses
[21:13] <sidnei> or not
[21:14] <sidnei> well, it's probably that
[21:14] <sidnei> $ ping ubuntu
[21:14] <sidnei> PING ubuntu (10.0.3.15) 56(84) bytes of data.
[21:14] <sidnei> but the local ip is 10.0.3.187
[21:14] <sidnei> will dig further after the break
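A hedged sketch of the check/fix this points toward -- making the container's hostname resolve to its actual address so rabbit can start; the addresses are the ones pasted above, but the fix itself is an assumption, not what sidnei actually did:

# inside the lxc container
getent hosts ubuntu        # currently resolves to 10.0.3.15
ip -4 addr show eth0       # actual address is 10.0.3.187
echo "10.0.3.187 ubuntu" | sudo tee -a /etc/hosts
sudo service rabbitmq-server restart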
[22:16] <sidnei> https://code.launchpad.net/~sidnei/charms/precise/haproxy/trunk/+merge/181421 up for review again
[23:08] <kurt_> jamespage: do you have any guides for ceph deployment with cinder and openstack?
=== mhall119_ is now known as mhall119
