thumper | stub: do the postgresql charms for precise work on trusty? | 03:03 |
---|---|---|
stub | thumper: they are supposed to. I haven't run the tests on trusty recently. | 04:41 |
=== vladk|offline is now known as vladk | ||
=== CyberJacob|Away is now known as CyberJacob | ||
=== CyberJacob is now known as CyberJacob|Away | ||
=== med_ is now known as 1JTAAEE11 | ||
axw | mgz: standup? | 09:02 |
=== vladk is now known as vladk|offline | ||
=== vladk|offline is now known as vladk | ||
=== dosaboy_ is now known as disaboy | ||
=== disaboy is now known as dosaboy | ||
mgz | axw: blast, did standup happen? | 09:33 |
axw | mgz: yes, wallworld is going to email you | 09:33 |
mgz | gah, really wanted to talk to you guys as last week we didn't get any | 09:34 |
tech2 | Hi all, I'm writing a juju charm and when a relation triggers, how do I determine the IP address of the machine the other service is on? I have a multi-master database setup, with replication. When a link is formed between the two via a relation in juju, how does the juju hook script know what each end's IP addresses are (or do I need to carry that around as local configuration info?) | 10:05 |
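Inside a relation hook, the address of the unit on the other end is available via the relation-get hook tool, and the local unit's own address via unit-get. A minimal sketch, assuming a hypothetical relation named "db":

```bash
#!/bin/bash
# hooks/db-relation-joined -- "db" is a hypothetical relation name
set -eu

remote_addr=$(relation-get private-address)   # address of the unit on the other end
local_addr=$(unit-get private-address)        # this unit's own address
juju-log "db relation joined: remote=$remote_addr local=$local_addr"
# write both addresses into the replication config here
```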
tech2 | Also, is there a way I can configure my charm to prevent multiple installations on the same machine (use of --to <number>)? The software I'm writing it for sadly only supports a single instance per machine. | 10:58 |
mthaddon | hi folks, I'm wondering if the swift charms could grow support for swift-dispersion-report relatively easily? http://docs.openstack.org/trunk/config-reference/content/object-storage-dispersion.html | 11:04 |
gnuoy | mthaddon, I'll take a look | 11:05 |
mthaddon | gnuoy: thx - I'm happy to file a bug if that's appropriate | 11:05 |
mthaddon | looks like it should "just" be creating a new user, generating /etc/swift/dispersion.conf and then running swift-dispersion-populate - users can then run swift-dispersion-report whenever needed | 11:07 |
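A minimal sketch of the hook change being described, assuming the standard [dispersion] settings from the OpenStack docs; the auth values below are placeholders, not what the charm would actually use:

```bash
#!/bin/bash
set -eu

# placeholder credentials -- a real charm would take these from config or relations
cat > /etc/swift/dispersion.conf <<'EOF'
[dispersion]
auth_url = http://127.0.0.1:8080/auth/v1.0
auth_user = test:dispersion
auth_key = changeme
EOF

swift-dispersion-populate
# operators can then run swift-dispersion-report whenever needed
```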
gnuoy | mthaddon, a bug would be great, thanks. | 11:08 |
mthaddon | k | 11:08 |
gnuoy | But I agree, at first glance it looks straightforward | 11:08 |
mthaddon | gnuoy: https://bugs.launchpad.net/charms/+source/swift-proxy/+bug/1328064 | 11:11 |
gnuoy | ta | 11:11 |
_mup_ | Bug #1328064: Add support for swift-dispersion-report to charms <swift-proxy (Juju Charms Collection):New> <https://launchpad.net/bugs/1328064> | 11:11 |
=== Ursinha-afk is now known as Ursinha | ||
=== rogpeppe1 is now known as rogpeppe | ||
tech2 | Does juju have a concept of a "collection" of services, such that when a relation is made between two machines it's really made to the group, or is this something to be managed externally? I'm trying to manage multi-master database replication without having to add-relation between every instance in a full mesh (but having the resulting configuration represent that). | 13:39 |
mthaddon | tech2: for which service are you trying to configure multi-master replication? | 13:41 |
tech2 | mthaddon: our own database system which I'm trying to write a charm for. | 13:42 |
mthaddon | tech2: ah okay - so relations are between services, not between machines, so basically the model would be that you'd have a foo-multimaster-db service, and the relation would be to that | 13:43 |
mthaddon | hi folks, I've been looking at the swift charms and there are two settings I'm wondering about - partition-power has a default of 8 in swift-proxy, which essentially means the default expects a ring with only a couple of drives (the ring has 2^partition-power partitions and you're supposed to have ~100 partitions per drive in the cluster) | 13:44 |
mthaddon | the second setting is "workers" in the swift storage components, which seems to be hard-coded to 2 - that doesn't match what the swift docs recommend | 13:45 |
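For reference, the arithmetic behind the partition-power point - a rough sketch following the ~100-partitions-per-drive rule of thumb from the swift docs:

```bash
# the ring has 2^partition-power partitions; with the default of 8:
echo $((2 ** 8))   # 256 partitions
# at roughly 100 partitions per drive, 256 partitions covers only 2-3 drives,
# so larger clusters need a higher partition-power when the rings are built
```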
tech2 | mthaddon: sorry, bad choice of words, and yes. So how does one manage the relations to create a full mesh? <dbname>-relation-joined would be run, but how would that derive information about all participants without someone having run add-relation on all the services? | 13:45 |
=== ev_ is now known as ev | ||
mthaddon | tech2: that'd be up to the charm itself. Basically when you run add-relation on (say) the appserver side, that triggers a response from the foo-multimaster-db service returning information about all participants, but this may be DB specific. | 13:49 |
=== kentb-out is now known as kentb | ||
kentb | do subordinate charms work with constraints, or, I guess I should ask, is there any point to declaring constraints when deploying a subordinate charm since they basically just 'bolt on' to an existing service? | 13:52 |
lazyPower | kentb: they bolt onto an existing service with scope: container | 13:54 |
lazyPower | tbh i haven't tried to deploy a subordinate with constraints to see if it blocks the deployment on a machine that doesn't meet the constraints. | 13:54 |
lazyPower | so it may have some functionality, but needs citation. | 13:54 |
kentb | lazyPower, ok. thanks. | 13:54 |
tech2 | mthaddon: so how would you propose structuring this (assuming I wanted N instances of the DB, all cross-replicating)? I'd have a second multimaster charm that just acts as a mechanism to connect things to? | 14:02 |
mthaddon | tech2: I don't think that's necessary, but depends on the application. Do the nodes already chatter amongst each other and keep an up to date list of what nodes are in the replication set? If so, could you use that? | 14:06 |
tech2 | mthaddon: I'm starting afresh. This used to be a package that was deployed to a box and then configured to replicate either by hand or via chef script by adding the IPs of the other instances it would replicate with. Instead of managing that by hand I'd just like the option of telling it "you're a part of this group, find your neighbours" or something. | 14:08 |
tech2 | mthaddon: I was just hoping this was a known pattern for services (and as such that there'd be a standard solution) rather than me making something up myself. | 14:11 |
mthaddon | tech2: I think peer relations are what you're looking for - trying to find an example for you | 14:11 |
tech2 | Thanks | 14:13 |
mthaddon | tech2: see the haproxy charm's peer-relation-joined hook (it's a symlink to hooks.py) - notify_peer (and therefore notify_relation) is probably what you're looking for | 14:13 |
tech2 | I'll take a look, thanks. | 14:14 |
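A minimal sketch of the peer-relation approach, assuming a hypothetical peer relation named "cluster" declared under peers: in metadata.yaml; each unit enumerates the other peers and rewrites its own replication config:

```bash
#!/bin/bash
# hooks/cluster-relation-changed
# assumes metadata.yaml declares something like:
#   peers:
#     cluster:
#       interface: mydb-replication
set -eu

peers=""
for unit in $(relation-list); do
    peers="$peers $(relation-get private-address "$unit")"
done
my_addr=$(unit-get private-address)

juju-log "replication peers for $my_addr:$peers"
# write the peer addresses into the database's replication config and restart here
```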
jcastro | lazyPower, you're on reviews this week? | 14:20 |
jcastro | I believe cory_fu is your deputy if you guys wanna pair up or whatever | 14:20 |
lazyPower | i am? | 14:21 |
* lazyPower looks | 14:21 | |
lazyPower | Welp, there goes the neighborhood | 14:21 |
=== lazyPower changed the topic of #juju to: Welcome to Juju! || Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://goo.gl/9yBZuv || Unanswered Questions: http://goo.gl/dNj8CP || Weekly Reviewers: lazypower / cory_fu || News and stuff: http://reddit.com/r/juju | ||
=== makyo_ is now known as Makyo | ||
=== salgado_ is now known as salgado | ||
cory_fu | Does anyone have experience with destroy-relation? | 17:01 |
cory_fu | Specifically, I issue the command, get no errors, but the relation never goes away | 17:07 |
lazyPower | cory_fu: is a dependent service in error? | 17:09 |
lazyPower | cory_fu: or is there an open debug-hooks session trapping your commands? | 17:09 |
cory_fu | No services in error state whatsoever | 17:09 |
cory_fu | No debug-hooks open, either | 17:09 |
lazyPower | strange. I've not seen that behavior before. What version of juju are you running? | 17:16 |
cory_fu | 1.19.3 | 17:17 |
cory_fu | I can work around it, but was wondering if it was a known bug | 17:18 |
lazyPower | it warrants filing | 17:19 |
lazyPower | I'm not seeing that behavior and i'm on 1.19.3 | 17:19 |
lazyPower | make sure you attach your all-machines log with the bug report, plus how to reproduce, and scrub any api keys from the log output if you're using a public cloud. | 17:20 |
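A minimal reproduction sketch for the bug report - the service and relation names here are hypothetical:

```bash
juju destroy-relation mysql:db wordpress:db
# the relation should drop out of status once the hooks have run;
# in the reported case it never does
juju status --format=yaml | grep -A5 relations
```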
jcastro | hey mbruzek | 17:55 |
jcastro | do you think we can get elasticsearch working on power by say ... thursday? | 17:55 |
mbruzek | jcastro, ElasticSearch works, but logstash does not | 18:02 |
mbruzek | Logstash has an architecture dependency. | 18:03 |
=== CyberJacob|Away is now known as CyberJacob | ||
mbruzek | jcastro, IDK how to resolve the architecture problem in logstash. | 18:05 |
=== alexisb is now known as alexisb_lunch | ||
=== roadmr is now known as roadmr_afk | ||
=== hatch__ is now known as hatch | ||
=== alexisb_lunch is now known as alexisb | ||
=== hatch__ is now known as hatch | ||
=== roadmr_afk is now known as roadmr | ||
=== roadmr is now known as roadmr_afk | ||
=== vladk is now known as vladk|offline | ||
=== roadmr_afk is now known as roadmr | ||
jcastro | oh sweet, reviewboard incoming! | 20:48 |
thumper | jcastro: charm? | 20:57 |
thumper | avoine: IIRC you did the python-django charm yes? | 20:58 |
thumper | avoine: I have some questions on the juju list about how to hook up the subordinate charm for supplying the actual django site | 20:58 |
thumper | would love some comments if you have some time | 20:58 |
jcastro | thumper, yep | 20:59 |
lazyPower | mbruzek: did you see? log4j, [2014-06-09T21:01:44.664] WARN: org.elasticsearch.transport.netty: [Kwannon] Message not fully read (response) for [859] handler org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing$4@45114dfc, error [true], resetting | 21:01 |
lazyPower | - Logstash itself has an issue all around | 21:01 |
lazyPower | I'm filing the bug now | 21:01 |
arosales | alexisb: rick_h_ jcastro: perhaps we rename the "What's coming in Juju and the Juju GUI" UOS session to "What's coming in the Juju GUI" | 21:02 |
arosales | as we have a Juju Core Roadmap | 21:02 |
* arosales looking at http://summit.ubuntu.com/uos-1406/all/ | 21:02 | |
arosales | for Cloud DevOps | 21:02 |
mbruzek | lazyPower, I did not see that | 21:02 |
alexisb | though jcastro I am happy to get some help with the juju core roadmap stuff as this will be my first UDS :) | 21:02 |
lazyPower | mbruzek: check in /opt/logstash/*.log - this is scrolling non-stop after adding the cluster relationship | 21:03 |
arosales | Seems the roadmap and "what's coming" (juju-core-wise) may be the same content | 21:03 |
jcastro | I am of the opinion that one session can cover juju and the gui | 21:03 |
mbruzek | lazyPower, I could not get logstash running on Power, remember? | 21:03 |
jcastro | I mean, 1 hour is a long time to read off changelogs and roadmaps | 21:03 |
rick_h_ | jcastro: but it's so exciting! | 21:04 |
arosales | jcastro: agreed thus my suggestion | 21:04 |
rick_h_ | oh hmm, that's tomorrow | 21:04 |
lazyPower | mbruzek: was it not starting up at all? | 21:04 |
usdudeink | The only thing it can't do is PDFs from Firefox with HPLIP, any ideas out there?? | 21:04 |
arosales | jcastro: so can you make "whats-coming-in-juju-and-the-juju-gui" just "whats-coming-in-the-juju-gui" --drop Core and just cover core in the "Juju-core-roadmap" session | 21:04 |
arosales | jcastro: actually I think I can update | 21:05 |
mbruzek | lazyPower, no, it had an unknown-Linux error | 21:06 |
jcastro | arosales, sure, but that's 2 sessions, is that what we want? | 21:06 |
jcastro | I can't imagine the gui has 1 hour of things coming soon | 21:07 |
lazyPower | mbruzek: ack. I'm refining this bug by stepping through an interactive debug session - if you want to follow the progress - https://bugs.launchpad.net/charms/+source/logstash-indexer/+bug/1328272 | 21:07 |
_mup_ | Bug #1328272: Failed Unicast Discovery <audit> <logstash-indexer (Juju Charms Collection):New> <https://launchpad.net/bugs/1328272> | 21:07 |
arosales | jcastro: that was just what was on the schedule | 21:07 |
arosales | there were 2 sessions, but "whats coming" had both core and gui | 21:08 |
arosales | jcastro: I guess we could cover both in one and then drop the roadmap session | 21:08 |
jcastro | I can go either way | 21:08 |
* arosales just going off what is on the schedule right now | 21:08 | |
jcastro | it seems the gui session has already been renamed | 21:08 |
* arosales renamed it | 21:09 | |
jcastro | oh | 21:09 |
jcastro | rick_h_, hey, can you fill a slot with 55 minutes of content? | 21:09 |
rick_h_ | jcastro: arosales ok, will check back later to figure out what I need to prepare for tomorrow. EOD | 21:09 |
rick_h_ | jcastro: not really, I can do 30min or so perhaps | 21:09 |
arosales | jcastro: feel free to combine "whats coming" to be both core and gui, _but_ then drop the "Juju Core Roadmap session" | 21:09 |
jcastro | I'll ping you tomorrow, worst case we can do it on thursday too | 21:09 |
rick_h_ | jcastro: ok | 21:09 |
arosales | jcastro: thanks for getting the CloudDevOps track all set | 21:10 |
jcastro | our big data session will be epic | 21:10 |
alexisb | jcastro, if you want to combine the core and gui stuff into one session that is fine by me | 21:15 |
alexisb | I can make sure we have core present | 21:15 |
jcastro | yeah let's do that | 21:16 |
jcastro | It's supposed to be a summary, surely we can do both in an hour | 21:16 |
alexisb | ok, when is the session? | 21:17 |
alexisb | jcastro, ^^^ | 21:18 |
jcastro | wed, 1500 UTC | 21:18 |
jcastro | rick_h_, that gives you an extra day to prepare | 21:18 |
alexisb | jcastro, perfect, thank you | 21:18 |
jcastro | that gives us an extra slot to go over with "Getting Started with Juju" | 21:20 |
jcastro | just in case | 21:20 |
=== CyberJacob is now known as CyberJacob|Away | ||
jose | negronjl: hey! do you have a minute? I'm stuck with some seafile stuff | 23:46 |