=== marlinc_ is now known as marlinc | ||
hazmat | sidnei, it's missing some of the wait-for-units-running stuff, but the python-env branch of the new deployer is working against pyjuju | 01:53 |
---|---|---|
* sidnei gives it a shot | 01:55 | |
hazmat | sidnei, pushed some fixes for terminate | 02:07 |
hazmat | er reset | 02:07 |
sidnei | hazmat: ok, not using that on the pyjuju one yet, still doing nova delete | 02:07 |
sidnei | hazmat: seems like it worked, except for the waits :) | 02:09 |
hazmat | sidnei, nice | 02:10 |
sidnei | hazmat: https://pastebin.canonical.com/91208/ | 02:10 |
hazmat | time for bed then :-) | 02:11 |
sidnei | cheers | 02:13 |
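An aside on the missing wait step: a rough stand-in is to poll `juju status` until the units report a started state. This is only a sketch built on assumptions - the exact status field names differ between pyjuju and juju-core, and the service name below is a placeholder.

```sh
# Hedged sketch: block until a service's units look started in `juju status`.
# "mysql" is a placeholder service name; the grep is deliberately loose because
# pyjuju and juju-core label unit state slightly differently in their YAML output.
until juju status mysql | grep -q 'started'; do
    echo "waiting for mysql units to report started..."
    sleep 10
done
```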
=== ismail is now known as Guest64702 | ||
=== jppiiroinen is now known as jppiiroinen|afk | ||
=== marcoceppi_ is now known as marcoceppi | ||
=== wedgwood_away is now known as wedgwood | ||
jcastro | marcoceppi: so I did a talk on juju @ a lug last night | 12:58 |
jcastro | and the drupal6 charm is broken | 12:58 |
jcastro | should I file a bug to unpromulgate? | 12:58 |
marcoceppi | jcastro: I'd at least file a bug | 12:58 |
marcoceppi | I'm a little unclear on the unpromulgate process: does it just happen whenever, or does it have to go through the orphaned-charm process, etc.? | 12:59 |
jcastro | no clue | 12:59 |
marcoceppi | Plus, if we unpromulgate it, where do we keep the charm for historical reasons? ~charmers/ will likely cause some oddities with ingestion in the GUI | 13:00 |
jcastro | well in any case | 13:01 |
jcastro | this would make a good test case to see what happens | 13:01 |
* marcoceppi nods | 13:01 | |
jcastro | can we kick it back to brandon's namespace? | 13:01 |
marcoceppi | jcastro: I can't, no one but brandon has access to that namespace | 13:01 |
marcoceppi | BUT | 13:01 |
marcoceppi | it looks like he has a copy in his personal branch already so that would "just happen" | 13:02 |
marcoceppi | First audit casualty? | 13:02 |
jcastro | yeah | 13:06 |
jcastro | audit by failing a demo | 13:07 |
jcastro | marcoceppi: I am publishing the new calendar now | 13:08 |
jcastro | for reviews | 13:08 |
hazmat | hmm. that was known to be broken.. | 13:09 |
jcastro | marcoceppi: ttrss looks like it's ready for round 2 | 13:09 |
hazmat | i ended up pushing a drupal7 one to my ns | 13:09 |
hazmat | when i was doing a demo. | 13:09 |
marcoceppi | hazmat: yeah, I think it's been general knowledge that it's broken. Just not sure what to do about it until the recent audit sessions | 13:10 |
jcastro | no more mr nice guy | 13:10 |
jcastro | it doesn't work ---> see ya! | 13:10 |
hazmat | agreed | 13:10 |
* marcoceppi warms up the unpromulgater | 13:11 | |
jcastro | if anyone has any cycles today for the review queue, that would be <3 | 13:11 |
jcastro | marcoceppi: hey, so far Andreas has been asking for more charm-writing content in the charm school | 13:13 |
jcastro | want to concentrate on that this afternoon? | 13:13 |
marcoceppi | jcastro: sure | 13:13 |
jcastro | I sent you instructions for running the ubuntuonair thing | 13:16 |
jcastro | since it'll be you and mims today | 13:16 |
marcoceppi | jcastro: oh yeahh | 13:17 |
marcoceppi | thanks | 13:18 |
freeflyi1g | does the Go version of juju only support constraints like arch, cpu-cores and mem? | 13:33 |
mgz | freeflyi1g: yes, for now | 13:34 |
freeflyi1g | mgz: thx | 13:35 |
mgz | feedback welcome about what you want most | 13:35 |
freeflyi1g | mgz: in the python version, maas-name is very useful :) | 13:36 |
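For reference, a hedged sketch of how those constraints are passed on the command line; the service name and MAAS node name are placeholders.

```sh
# Constraints the Go version understood at the time, per the discussion above:
juju deploy mysql --constraints "arch=amd64 cpu-cores=2 mem=4G"

# The Python version additionally accepted MAAS-specific constraints such as
# maas-name (the node name below is hypothetical):
juju deploy mysql --constraints "maas-name=node05.example.com"
```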
wedgwood | hazmat: is there any way to fix a machine agent showing "down"? I've ssh'd into the unit and restarted it, but the status is the same. | 13:52 |
wedgwood | (besides redeploying) | 13:53 |
sidnei | jcastro: are you guys going to be at velocity? | 14:06 |
jcastro | No, we got declined | 14:15 |
hazmat | wedgwood, could you pastebin the machine agent log | 14:19 |
wedgwood | hazmat: it's looking like a zookeeper problem | 14:19 |
hazmat | wedgwood, restarting the machine agent should resolve that | 14:20 |
hazmat | wedgwood, connectivity? | 14:20 |
wedgwood | hazmat: well, the zookeeper keeps dying | 14:20 |
hazmat | wedgwood, g+? | 14:20 |
wedgwood | hazmat: sure | 14:21 |
hazmat | i landed some fixes to do better backoff on retry, but it's only in the ppa as of two days ago. | 14:21 |
* hazmat refills on coffee | 14:21 | |
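A minimal triage sketch for the "agent down" situation above, assuming a pyjuju deployment; the log path and upstart job name are assumptions and vary by release, so check what is actually installed on the machine.

```sh
# Hedged sketch: commands to run *on the affected machine* (e.g. after
# `juju ssh 1`, where the machine number is a placeholder) when the machine
# agent shows "down".
sudo initctl list | grep juju                # find the juju-related upstart jobs
tail -n 100 /var/log/juju/machine-agent.log  # assumed log path; look for ZooKeeper connection errors
sudo service juju-machine-agent restart      # assumed job name; use whatever initctl listed
```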
jcastro | ahasenack: hey do you have any specific things you'd like covered in the charm school? Or just general charm authorship stuff? | 14:51 |
ahasenack | jcastro: hm | 14:53 |
ahasenack | jcastro: "best practices" for interface creation I guess, and simple examples of setting and getting relation data, with emphasis on the fact that the relation might not be established yet, so the hook should no-op | 14:54 |
ahasenack | jcastro: or maybe for this first one just explain the hooks | 14:57 |
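In that spirit, a hedged sketch of a relation-changed hook that no-ops until the remote side has published its data; the hook and setting names are illustrative, not from any particular charm.

```sh
#!/bin/bash
# db-relation-changed (illustrative name): read the remote side's data,
# no-op if it isn't there yet, and publish our own settings once it is.
set -e

host=$(relation-get hostname)        # may be empty early in the relation's life
if [ -z "$host" ]; then
    juju-log "remote side has not set 'hostname' yet; exiting cleanly"
    exit 0                           # the hook fires again when the data changes
fi

relation-set configured=true
juju-log "configured against $host"
```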
=== cereal_b1rs is now known as cereal_bars | ||
gnuoy | jamespage, does java set the default maxheap size based on the ram of the system it's running on or is it set when the jre is compiled? | 15:28 |
jamespage | gnuoy, it's never clever - there is a JRE default and then whatever the application specifies | 15:37 |
jamespage | cassandra does some auto-sizing | 15:37 |
jamespage | others don't | 15:37 |
gnuoy | jamespage, lovely, thanks | 15:37 |
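One way to see what a given JRE actually defaults to on a machine (this is a HotSpot diagnostic flag, and its output format varies between JVM builds):

```sh
# Print the JVM's final flag values and pick out the heap-size defaults:
java -XX:+PrintFlagsFinal -version 2>/dev/null | grep -Ei 'maxheapsize|initialheapsize'
```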
marcoceppi | I was poking around the help for juju-core 1.10: in what version (if any) can you specify an alternate .juju/environments.yaml (or a different .juju "home")? I couldn't find it in any of the help topics for juju-core | 15:38 |
sidnei | marcoceppi: JUJU_HOME iirc | 15:52 |
marcoceppi | sidnei: yup, thanks! | 15:52 |
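A quick sketch of that, assuming the juju-core release in use honours JUJU_HOME (the directory name is a placeholder):

```sh
# Keep a second environments.yaml (and related files) in a separate directory:
export JUJU_HOME=$HOME/.juju-staging
juju bootstrap

# or just for a single invocation:
JUJU_HOME=$HOME/.juju-staging juju status
```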
avoine | is there a snippet somewhere on how to do a wait loop for apt-get in case another charm is installing? | 16:30 |
* avoine thinks every charm should use that | 16:31 | |
mattgriffin | it's like 2:30 in the morning and you're working on charms? | 16:31 |
avoine | here it's noon | 16:31 |
mattgriffin | oops… wrong channel :) | 16:32 |
mattgriffin | avoine: :) | 16:32 |
marcoceppi | avoine: There is talk of using aptdaemon to resolve this problem. Not sure if there are any snippets though | 16:32 |
avoine | ok | 16:33 |
avoine | I'll check it | 16:33 |
marcoceppi | s/is/was/ | 16:35 |
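For what it's worth, a rough sketch of the kind of wait loop avoine is asking about, guarding on the dpkg lock before calling apt-get. The timeout and package name are placeholders, and there is still a small race window between the check and the install.

```sh
# Wait (up to ~5 minutes) for any other process to release the dpkg lock,
# then install. Placeholder package name; tune the timeout to taste.
tries=0
while fuser /var/lib/dpkg/lock >/dev/null 2>&1; do
    tries=$((tries + 1))
    if [ "$tries" -ge 30 ]; then
        juju-log "gave up waiting for the dpkg lock"
        exit 1
    fi
    juju-log "dpkg lock held by another process; sleeping 10s"
    sleep 10
done
apt-get install -y nginx
```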
jamespage | avoine, marcoceppi: so the latest python juju (and I believe juju-core) only allows serial hook execution in the same container | 16:35 |
marcoceppi | That might actually not be the right package, but there's supposedly a way to queue packages for installation so you don't collide with apt-get in the parent charm | 16:35 |
jamespage | thus avoiding a principal and subordinate charm trying to do conflicting operations at the same time | 16:36 |
marcoceppi | jamespage: that's super helpful to know | 16:36 |
jamespage | marcoceppi, hazmat fixed that for us during the HA work on OpenStack as we hit that issue *a lot* | 16:37 |
jamespage | (around January I think) | 16:37 |
marcoceppi | excellent. I should write more subordinate charms | 16:37 |
hazmat | that also made it into juju core (trunk atm afaicr) | 16:43 |
marcoceppi | hazmat: any idea when the ppa will be updated with a more recent juju-core release? | 16:45 |
hazmat | marcoceppi, when there's a new release | 16:46 |
hazmat | marcoceppi, there are several ppas associated to juju-core | 16:46 |
marcoceppi | I thought core was up to 1.14 or something (whereas 1.10 is what's currently in the ppa) | 16:46 |
hazmat | 1.11 | 16:46 |
hazmat | is trunk | 16:46 |
hazmat | mramm2, when's the next core release? | 16:47 |
mramm2 | I think next friday | 16:47 |
mramm2 | would have been this friday but Dave is traveling | 16:48 |
mramm2 | I also think we should move to earlier in the week in case there is something serious that requires a followup release -- don't want to have to do that over the weekend! | 16:48 |
marcoceppi | For those waiting for the Ubuntu On Air, we'll be starting in a few mins | 17:02 |
marcoceppi | The Ubuntu on air page isn't quite working for us, so you can follow along here: http://www.youtube.com/watch?v=yRcqSjOGweo&feature=youtu.be | 17:09 |
marcoceppi | If you have any questions, please feel free to ping me! | 17:12 |
marcoceppi | Questions? | 17:55 |
paraglade | marcoceppi: how about talking a bit about implicit relations | 17:56 |
paraglade | and scope | 17:57 |
marcoceppi | paraglade: queued up! | 17:58 |
paraglade | marcoceppi: this might be something actually to cover when you get into subordinates | 18:00 |
marcoceppi | paraglade: probably - subordinates are something we'll probably spend a whole 'nother session on | 18:00 |
paraglade | :) | 18:00 |
paraglade | cool thanks! | 18:04 |
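For later reference on that question, a hedged sketch of where scope and the implicit juju-info interface show up in a subordinate charm's metadata.yaml; the charm and relation names are made up.

```sh
# Illustrative metadata.yaml for a subordinate charm, written out as a heredoc.
# juju-info is the implicit interface every charm provides, and scope: container
# is what makes the relation (and therefore the subordinate) container-scoped.
cat > metadata.yaml <<'EOF'
name: logging-agent
summary: Example subordinate charm
description: Ships logs from whatever principal unit it is attached to.
subordinate: true
requires:
  principal:
    interface: juju-info
    scope: container
EOF
```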
arosales | m_3, marcoceppi thanks for the charm school. | 18:04 |
marcoceppi | m_3: Thanks for running us through charms! | 18:04 |
m_3 | arosales: sure | 18:04 |
m_3 | thanks peeps for tuning in! | 18:04 |
=== defunctzombie_zz is now known as defunctzombie | ||
=== defunctzombie is now known as defunctzombie_zz | ||
dpb1 | m_3: Hey -- Could you review this one when you get a chance? https://code.launchpad.net/~davidpbritton/charms/precise/landscape-client/add-landscape-relation/+merge/161497 | 20:44 |
marcoceppi | Oh boy, where is charm.log in go-juju deployments? | 20:59 |
marcoceppi | /var/lib/juju/units/<unit-name>/charm.log isn't quite there anymore :) | 21:00 |
m_3 | marcoceppi: look in /var/log | 21:03 |
marcoceppi | m_3: ah, /var/log/juju/unit-name.log perfect | 21:03 |
marcoceppi | that will make archiving logs in juju-core a hell of a lot easier | 21:03 |
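A hedged sketch of what that archiving could look like against juju-core's layout; the unit name is a placeholder, and `juju scp` is assumed to be available in the release in use (otherwise plain scp against the machine's address works).

```sh
# Bundle up a unit's logs from /var/log/juju and copy them back locally:
juju ssh wordpress/0 'sudo tar czf /tmp/juju-logs.tgz /var/log/juju'
juju scp wordpress/0:/tmp/juju-logs.tgz ./wordpress-0-logs.tgz
```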
m_3 | dpb1: I'll try today, but most likely Monday | 21:04 |
dpb1 | m_3: monday is fine. | 21:04 |
dpb1 | m_3: just a gentle reminder. :) | 21:05 |
dpb1 | (now that sprint season is done for a while) | 21:05 |
m_3 | gotcha | 21:05 |
m_3 | actually at a conference next week | 21:05 |
m_3 | _then_ the season's over for a bit | 21:05 |
marcoceppi | I don't know if it's just me and my expectations, but I feel like "the cloud" has been noticeably more responsive since switching to the juju-core version | 21:15 |
=== wedgwood is now known as wedgwood_away | ||
jhujhiti | is this the right place to look for help with the openstack charms? | 22:17 |
sarnold | it's not -wrong-, anyway :) but it is late in the day.. | 22:17 |
jhujhiti | well, it's worth a shot | 22:19 |
jhujhiti | i'm trying to help someone i work with get this openstack deployment straightened out. he's used juju charms to deploy mysql/rabbitmq/keystone/glance/swift/etc in HA. but it seems to have left the default database for each of the openstack services set to sqlite | 22:21 |
jhujhiti | there's a relation in juju, so i'm not sure what's been done wrong | 22:22 |
jhujhiti | and there are too many moving parts for me to just dive in and fix it, having never set it up myself | 22:22 |
jhujhiti | using glance as an example, i read through the 'juju debug-log' as i removed and re-added the relation to mysql. glance says something to the effect of 'database settings changed, will try again' but it never does | 22:24 |
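A few hedged things worth checking for the situation jhujhiti describes, assuming a typical precise OpenStack deployment; the unit name and config paths are assumptions.

```sh
# On the glance unit (reached e.g. via `juju ssh glance/0`), see what glance
# is actually configured to use for its database:
grep -n sql_connection /etc/glance/glance-api.conf /etc/glance/glance-registry.conf

# Back on the client, re-run the relation hooks by hand and inspect what the
# mysql side has published:
juju debug-hooks glance/0
# inside the debug-hooks session, `relation-get` on the db relation should show
# the database host/user/password once the mysql side has actually set them.
```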
=== Makyo is now known as Makyo|out |