[01:44] noted today that $LANG is unset in a hook execution context, this can cause problems for some python libraries that (arguably incorrectly) rely on a default system encoding e.g. calling codecs.open() [02:14] commented on https://github.com/juju/juju/issues/133 marcoceppi, any thoughts on resolving that one? [02:45] Odd_Bloke, rcj: ping [09:24] gnuoy, cinder is suffering from inadequate patching in its unit tests [09:24] they work ok on a real machine - but on a virt machine (like the test environment) [09:24] vdb is a real device :-) [09:24] I've worked around this for now by prefixing device names with 'fake' [09:24] but it will need a wider review - I'll raise a bug task for it [09:24] for 15.10 [09:35] gnuoy, https://code.launchpad.net/~james-page/charms/trusty/cinder/unit-test-fixes/+merge/266692 review please :-) [09:35] resolves the cinder test failures for now [09:40] jamespage, thanks, merged [09:50] jose: Pong. [09:59] gnuoy, anything else I can help with? [10:09] jamespage, well if you wanted to create a skeleton release note I wouldn't hold it against you ... [10:11] gnuoy, ok [10:22] gnuoy, dosaboy, beisner: https://wiki.ubuntu.com/ServerTeam/OpenStackCharms/ReleaseNotes1507 [10:22] ta [10:25] gnuoy, deploy from source needs coreycb's input - is he back today? [10:26] * jamespage goes to look [10:43] jamespage, looks like we have charm upgrade breakage. [10:43] rabbit is refusing connections from neutron-gateway after the upgrade (invalid creds). investigating now [10:45] Actually, it's refusing connections from everywhere by the looks of it [11:03] gnuoy, that would indicate a crappy rabbitmq upgrade methinks [11:03] gnuoy, is that on 1.24? [11:06] gnuoy, I'll take a peek [11:15] gnuoy, I think I see a potential bug in the migration code in peerstorage [11:19] jamespage, sorry, was stuffing food in my face [11:19] jamespage, yes, 1.24 [11:20] jamespage, am all ears vis-à-vis the peerstorage migration bug. I still have my environment so can supply additional debug if helpful [11:20] gnuoy, L97 [11:20] there is a relation_get without a rid [11:21] ah [11:21] so I think that may cause a regeneration of all passwords if called outside of the cluster relation context [11:22] gnuoy, grrr - redeploying as dns foobar [11:28] jamespage, it definitely looks like the password has changed in rabbit, using "rabbitmqctl change_password" to set it back to what it was seems to fix things. [11:28] gnuoy, the password changed in rabbit, or the password changed on all the relations? [11:28] that's my hypothesis [11:29] jamespage, the password changed in rabbit [11:30] jamespage, well actually, what I'm saying is, the password in the client config and the password advertised down the relations are the same but they don't seem to equal the actual password rabbit has for the user [11:30] gnuoy, yah [11:30] that matches my theory - just trying to prove it [11:30] kk [11:31] gnuoy, there is no code in the charm that changes passwords in rabbit, but it would ignore a change triggered by a broken migration - that would propagate out to related services, but not reflect the actual password [11:32] o/ good morning [11:33] gnuoy, afaict, reverse dns is/was a-ok. host entries are coming and going with instances. [11:34] beisner, it worked straight through on my bastion with the only error being trusty/git/icehouse. I've scheduled another run but it's been in the queue for ~4 hours [11:34] gnuoy, however, i have observed that due to rmq-funk in serverstack, some messages are really delayed.
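The note at the top of the log about $LANG being unset in hook contexts deserves a concrete illustration: anything that derives a "system" encoding falls back to ASCII there. A minimal sketch of the pitfall and the usual defence of naming the encoding explicitly; the temp-file path is illustrative and this is not code from any charm discussed here.

```python
import codecs
import locale
import os

# Inside a hook, $LANG is typically unset, so the "preferred" encoding is
# usually ASCII rather than UTF-8.
print(os.environ.get('LANG'))            # often None or empty in a hook
print(locale.getpreferredencoding())     # often 'ANSI_X3.4-1968' (ASCII)

# Defensive pattern for hook code: always name the encoding explicitly
# instead of relying on the locale default.
path = '/tmp/lang-demo.txt'              # illustrative path only
with codecs.open(path, 'w', encoding='utf-8') as f:
    f.write(u'd\u00e9j\u00e0 vu')
with codecs.open(path, encoding='utf-8') as f:
    text = f.read()
```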
that is observable in that serverstack-dns may not always have the message back and the reverse dns record added by the time the instances is already booted an on its way. :-/ [11:35] gnuoy, saw this as well on ddellav's bastion as we were t-shooting a failed re-(re)deploy [11:35] my dns appears foobarred right now [11:35] I thought I just fixed it up [11:35] gnuoy, throttle is way down. if we turn it up to have more concurrency, serverstack gives us error instances. [11:36] i just removed 6 error instances from last night (which induced some job fails) [11:36] beisner, indeed - I have partial entries for my dns [11:36] jamespage, I don't follow the scenario you outlined. broken migration? [11:36] I assume you don't mean db migration [11:36] gnuoy, yeah - the migration incorrectly missed the peer relation data, so generates a new password [11:37] gnuoy, peer -> leader migration [11:37] oh, yes, of course that migration [11:37] jamespage, so rabbit is pushing out a new password to the clients without actually changing the password for the user to the new value? [11:38] yeah [11:38] oh /o\ [11:38] I think that's the case, but can't get an env up right now [11:40] beisner, gnuoy: it would appear notifications are going astray somewhere on serverstack [11:40] jamespage, oh yeah ... also observable in not always getting an instance; juju sits at "allocating..." [11:41] meanwhile nova knows nothing of the situation [11:41] but, on the jobs ref'd in bugs, i've run, re-run, and re-confirmed that things went well for those runs, afaict. [11:42] beisner, hmm [11:44] jamespage, gnuoy - mojo os-on-os deploy test combos all pass. bear in mind, that just fires up an instance on the overcloud, checks it, and tears down. http://10.245.162.77:8080/view/Dashboards/view/Mojo/job/mojo_runner/ [11:44] so there's a \o/ ! [11:45] jamespage, gnuoy - the bare metal equivalent of that ^ is also almost all green. re-running a T-K fail. http://10.245.162.77:8080/view/Dashboards/view/Mojo/job/mojo_runner_baremetal/ [12:03] gnuoy, do we have a bug open for the rmq upgrade problem? [12:03] the password def gets missed during the migration [12:03] jamespage, nope, I'll create one now [12:07] jamespage, gnuoy: fyi just deployed T-I/next. vgs and lvs come back "no volume groups found." added to bug 1480504 [12:07] Bug #1480504: Volume group "cinder-volumes" not found [12:08] jamespage, Bug #1480893 [12:08] Bug #1480893: Upgrading from stable to devel charm breaks clients [12:13] Odd_Bloke: hey, I'm getting some errors with the ubuntu-repository-cache charm, the start hook is failing [12:14] let me run and do a pastebin of the output [12:17] gnuoy, dosaboy: added some detail to that bug - I need to take an hour out - maybe dosaboy could look at a fix in the meantime? [12:17] otherwise I'll pickup when I get back [12:18] jamespage, dosaboy, I can take a look [12:19] gnuoy, ta - i think the migration code needs to switch to always resolving the using the rid for the cluster relation - or get passed that from high up the stack (its not currently) [12:19] kk === psivaa is now known as psivaa-lunch [12:28] Odd_Bloke: lmk once you're back around please [12:28] jose: o/ [12:29] Odd_Bloke: hey. I'm getting an error on the start hook of the ubuntu-repository-cache charm, says 'permission denied' for /srv/www/blahblah [12:29] I'm having some issues with GCE right now so haven't been able to launch the instance [12:29] Oh, hmph. [12:29] Let me see if I can reproduce. 
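The peerstorage bug being chased above comes down to a relation_get call with no rid: outside the cluster relation context it finds nothing, so the migration concludes no password was ever set and a fresh one is generated and pushed to clients. A rough sketch of the failure mode and the shape of the fix, with hypothetical function names; the real change is the charm-helpers branch for bug 1480893 linked further down, not this code.

```python
from charmhelpers.core.hookenv import local_unit, relation_get, relation_ids


def migrate_peer_value_fragile(attribute):
    # With no explicit rid, relation_get() resolves against the relation of
    # the hook that is currently executing. Called from, say, an amqp or
    # upgrade-charm hook (outside the cluster relation), this returns None,
    # the caller assumes no password exists yet, and a new one is generated.
    return relation_get(attribute=attribute, unit=local_unit())


def migrate_peer_value(attribute, peer_relation_name='cluster'):
    # Safer: resolve the peer relation id explicitly, so the lookup works no
    # matter which hook triggers the peer -> leader storage migration.
    for rid in relation_ids(peer_relation_name):
        value = relation_get(attribute=attribute, unit=local_unit(), rid=rid)
        if value is not None:
            return value
    return None
```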
[12:29] cool [12:30] I'll try to run again [12:30] jose: Are you using any config, or just the defaults? [12:30] Odd_Bloke: defaults here [12:40] jose: Cool, waiting for my instances now. :) [12:41] I wish I could say the same... [12:42] :p [12:49] jose: I'm seeing a failure in the start hook; let me dig in to it. [12:49] Some of the charmhelpers bits changed how they do permissions, so it's probably an easy fix. [12:49] cool, I thought that but wasn't sure [12:50] jose: Do you have a recommendation for quickly testing new versions of charms? Is there something I can do with containers, or something? [12:50] Odd_Bloke: oh, definitely! wall of text incoming [12:51] so, ssh into the failing instance. then do sudo su. cd /var/lib/juju/agents/unit-ubuntu-repository-cache-0/charm/hooks/ [12:51] * Odd_Bloke braces for impact. [12:51] edit start from there [12:51] then save your changes and do a juju resolved --retry ubuntu-repository-cache/0 [12:51] and if it goes well it should go out of error state [12:52] just copy the exact same changes you did on the unit to your local charm and commit + push [12:54] DHX should be a good tool too, but I can't give much insight on how it works and its usage [12:55] jamespage, hey I'm back, need input for something? [12:58] Hmph, I'm sure we saw this problem before and I fixed it. [12:59] I guess I did trash my old charm-helpers merge branch, which might have been where I fixed it. [13:00] probably missed that one bit :) [13:01] coreycb, yeah - could you check the deploy from source release notes pls? [13:01] coreycb, https://wiki.ubuntu.com/ServerTeam/OpenStackCharms/ReleaseNotes1507 [13:06] gnuoy, how far did you get? [13:06] jamespage, so... [13:07] I don't think we should specify rid any higher [13:07] gnuoy, ? [13:07] since leader_get is supposed to mimic leader-get [13:07] gnuoy, well in the scope of peerstorage, it's whatever we make it :-) [13:07] as we have a wrapper function there [13:08] jamespage, as for line 98, peer_setting = _relation_get(attribute=attribute, unit=local_unit(), rid=valid_rid) [13:08] does fix it [13:08] yah [13:08] jamespage, if you use relation_get you get an infinite loop which is fun [13:08] gnuoy, I was thinking - http://paste.ubuntu.com/11993006/ [13:09] less the debug [13:09] gnuoy, this has potential to impact pxc and stuff, right? [13:09] jamespage, yes the whole caboodle [13:10] grrr [13:10] gnuoy, in fact I'm surprised everything else is still working :-) [13:10] jamespage, +1 to your fix given the point you make about the scope of leader_get in peer storage [13:10] gnuoy, ok working on that now [13:13] jamespage, notes look good, I made a few minor tweaks. [13:24] dosaboy, gnuoy: https://code.launchpad.net/~james-page/charm-helpers/lp-1480893/+merge/266712 [13:26] that should sort out the out-of-cluster context migration of peer data to leader storage [13:28] jamespage, I'm surprised lint isn't sad about rid being defined twice [13:30] jamespage, err ignore me [13:31] * jamespage was already doing that :-0 [13:31] gnuoy, lol [13:36] jamespage: reviewed [13:38] dosaboy, gnuoy jumped you and landed that [13:38] dosaboy, I actually think leader_get should not be exposed outside of peerstorage [13:38] it's an internal function imho [13:38] the api is peer_retrieve [13:38] which deals with the complexity [13:42] jamespage: yup fair enough [13:45] gnuoy, want me to deal with syncing that to rmq? [13:50] jamespage, well, we should sync it across the board [13:50] gnuoy, +1 [13:50] gnuoy, got that automated yet?
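jose's iterate-on-the-unit recipe above, gathered into one runnable sequence; the service and unit name match this log, so substitute your own when using it.

```sh
# Fix a failing hook in place on the unit, then ask juju to retry it.
juju ssh ubuntu-repository-cache/0
sudo su
cd /var/lib/juju/agents/unit-ubuntu-repository-cache-0/charm/hooks/
${EDITOR:-nano} start          # edit the failing 'start' hook directly

# back on your workstation: clear the error state and re-run the hook
juju resolved --retry ubuntu-repository-cache/0

# once the unit leaves the error state, copy the same change into your local
# charm branch and commit + push so the fix is not lost
```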
[13:50] ish [13:52] beisner, looks like it's time for another charmhelper sync across the charms. I'll do that now unless you have any objections? [13:52] gnuoy, +1 also ty [13:56] jose: My units seems to get stuck in 'agent-state: installing'; any idea how I can work out what's happening? [13:56] Odd_Bloke: juju ssh ubuntu-repository-cache/0; sudo tail -f /var/log/juju/unit-ubuntu-repository-cache-0.log (-n 50) [13:57] that gives you the output of your scripts [13:58] jose: I haven't even got the agent installed yet, so my scripts haven't started. [13:58] Odd_Bloke: you'll have to go on the GCE console [13:58] Odd_Bloke: and look at "events" (or something) there [13:58] Odd_Bloke: oh, huh. if there's a machine error, then juju ssh 0; sudo tail /var/log/juju/all-machines.log [13:59] axino: it's probably best to take a look at all-machines.log, last time when I went to the gce console machines simply weren't there and I couldn't find a detailed answer on what was going on :) [14:00] jose: there was nothing in all-machines.log last time I had issues :( just events in GCE console (which are a bit hard to find, I must say) [14:00] Oh, perhaps I misunderstand the status output. [14:00] I'm still learning how to deal with GCE :) [14:01] jose: OK, looks like I've fixed it. [14:01] Let me push up a MP. [14:01] woohoo! \o/ [14:02] jose: https://code.launchpad.net/~daniel-thewatkins/charms/trusty/ubuntu-repository-cache/fix-perms/+merge/266724 [14:03] taking a look [14:05] jose: So host.mkdir creates parents, so that line is unnecessary, and forces permissions to something that is broken. [14:05] jose: So we can just lose that line. [14:06] as long as it works we're good :P [14:16] jose: It does. :) [14:17] 'running start hook' [14:17] Oh, you're _testing_ it? [14:17] Pfft. [14:17] I am :) [14:17] need to [14:18] :) [14:18] As well you should. [14:18] Is anyone working on a charm to install designate? I want to try it on my openstack deployment and I'll be happy to test a charm instead of installing it by hand (I have no experience writing charms right now) [14:19] sto: I'm sorry, but I don't know what designate is. maybe you have a link to its website? [14:19] jose: it is an openstack service https://github.com/openstack/designate [14:20] oh [14:20] And it is already packaged [14:20] unfortunately, I don't see a designate charm on the store. sorry :( [14:21] but maybe an openstack charmer can work on it? :) [14:21] gnuoy, I'm going to switch to liberty milestone 2 updates - pull me back if you need hands [14:21] Yes, I know that there is no charm on the store, thats why I was asking... ;) [14:21] its not critical but would like to push it out soonish [14:22] ok, np === natefinch is now known as natefinch-afk [14:22] gnuoy, just lmk when the c-h sync is all pushed, and i'll run metal tests. probably with some sort of heavy metal playing. [14:22] beisner, crank up Slayer, c-h sync is all pushed [14:23] gnuoy, jamespage - wrt that cinder bug, it's with the default lvm-backed storage where i'm seeing breakage. works fine with ceph-backed storage. bug updated with that lil tidbit. [14:23] sto I heard people talking about creating a charm but I'm not sure it ever got past the hot air stage [14:23] gnuoy, awesome thanks [14:24] gnuoy, isn't that on our list-o-stuff to add more official support for in the os-charms? [14:24] I think Barbican and Designate were high on the list [14:25] yep that sounds right. [14:29] Odd_Bloke: woohoo! it looks like it deployed cool! 
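The permission fix Odd_Bloke lands above hinges on the point that charm-helpers' host.mkdir() already creates missing parent directories and applies the requested ownership and mode, so a separate pre-creation step that forces different permissions only gets in the way. A sketch of the pattern with an illustrative path and ownership, not the charm's actual values (the real change is in the fix-perms merge proposal).

```python
from charmhelpers.core import host

# Illustrative path and ownership only, not the values from the
# ubuntu-repository-cache charm itself.
APACHE_ROOT = '/srv/www/example'

# The sort of line the fix-perms MP drops: pre-creating the directory with
# forced permissions the service user later cannot write through.
# os.makedirs(APACHE_ROOT, 0o555)

# host.mkdir() creates parents itself and sets owner/group/mode in one call.
host.mkdir(APACHE_ROOT, owner='www-data', group='www-data', perms=0o755)
```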
[14:29] I'm gonna give it a quick test ride and merge [14:32] gnuoy: ok, thanks, I' guess I'll install it by hand on a container to see how it works [14:34] Odd_Bloke: woot woot! works works works works! [14:34] jamespage, beisner Trusty Icehouse, stable -> next upgrade test ran through cleanly. thanks for the patch Mr P. [14:35] gnuoy, jamespage \o/ [14:35] gnuoy, do you have a modified upgrade spec to deal with qg:ng? [14:36] beisner, yes, I'm running from lp:~gnuoy/openstack-mojo-specs/mojo-openstack-specs-ha [14:36] jose: \o/ [14:36] Odd_Bloke: should be merged. thanks a bunch for the quick fix, really appreciated! [14:37] jose: No worries, thanks for the quick merge. :) [14:42] gnuoy, these guys didn't get a c-h sync, is that by design?: n-g, pxc, n-ovs, hacluster, ceph-radosgw, ceph-osd [14:42] beisner, I'll check, they may not be using the module that changed (but I'd have thought pxc was tbh) [14:53] beisner, sorry about that, done now (no change for n-ovs) [14:53] oh, cause it did work the first time === psivaa-lunch is now known as psivaa === JoshStrobl is now known as JoshStrobl|AFK [15:50] marcoceppi: oh hey I forgot to ask you if everything with rbasak/juju in distro is ok? [15:50] anyone need anything from me? [15:51] jcastro: I have to fix something in the packaing and upload it, I'm about to do a cut of charm-tools and such so I'll fix those then [15:51] ack [15:54] beisner, suspect that regex is causing the issue - reconfirmning now [15:54] jamespage, ack ty [16:12] beh. look out, gnuoy, jamespage - i just got 11 ERROR instances on serverstack ("Connection to neutron failed: Maximum attempts reached") [16:21] beisner, sniffs like rmq [16:22] * beisner must eat, biab... === Guest11873 is now known as zz_Guest11873 [16:42] lazyPower: [16:42] apuimedo: o/ [16:42] lazyPower: how are you doing? [16:42] Pretty good :) Hows things on your side of the pond? [16:43] warm [16:43] :-) [16:43] lazyPower: I have a charm that at deploy time needs to know the public ip it will have [16:44] usually what I was doing was add a machine, and then knowing the ip change the deployment config file [16:44] apuimedo: unit-get public-address should get you situated with that though [16:44] but I was wondering if it were possible in the install script to learn the public ip [16:44] ok [16:44] that's what I thought [16:45] and it's the same the other machines in the deployment will see it with, right? [16:45] lazyPower: so unit_public_ip should do the trick [16:46] hookenv.unit_public_ip [16:47] yep [16:47] and looking at teh source, that wraps unit-get public-address :) === JoshStrobl|AFK is now known as JoshStrobl [16:59] ;-) [16:59] thanks [17:06] np apuimedo :) === zz_Guest11873 is now known as CyberJacob [17:43] gnuoy, jamespage - the heat charm does have a functional usability issue, though not a deployment blocker, nor a blocker for using heat with custom templates. that is, the /etc/heat/templates/ dir is just awol. bug 1431013 looks to have always been this way, so prob not crit for 1507/8 rls. [17:43] Bug #1431013: Resource type AWS::RDS::DBInstance errors === natefinch-afk is now known as natefinch [17:44] ejat-: Hey, how did you get along this weekend? I wound up being AFK for a good majority. [17:47] gnuoy, jamespage, coreycb ... aka ... ^ our "one" remaining tempest failure to eek out ;-) http://paste.ubuntu.com/11994632/ [17:49] beisner, would that fixup the rest of the failing smoke tests? [17:49] see paste ... 
we are down to that 1 [17:49] after some merges and template tweaks today [17:50] coreycb, i'm installing heat from package in a fresh instance just to see if the templates dir is awol there (ie. without a charm involved). [17:54] coreycb, yeah so these files and this dir don't make it into /etc/heat/templates when installing on trusty. http://bazaar.launchpad.net/~ubuntu-branches/ubuntu/trusty/heat/trusty/files/head:/etc/heat/templates/ [17:54] beisner, that's awesome, down to 1 [17:55] beisner, might be a packaging issue [17:56] coreycb, yeah, woot! [17:56] coreycb, and ok, bug updated, she's all yours ;-) [17:57] beisner, thanks yeah I'll dig deeper later, need to get moving on stable kilo [17:57] coreycb, yep np. thanks! [18:16] gnuoy, 1.24.4 is in ppa:juju/proposed re: email, when you next exercise the ha wip spec(s), can you do that on 1.24.4? [18:34] jamespage, your requested changes have been made and all tests updated: https://code.launchpad.net/~ddellav/charms/trusty/glance/upgrade-action/+merge/265592 [19:50] beisner, reverting that regex change resolves the problem [19:50] with cinder [19:51] jamespage, ah ok. i can't find context on that original c-h commit @ http://bazaar.launchpad.net/~charm-helpers/charm-helpers/devel/revision/409 [19:52] beisner, [daniel-thewatkins] Detect full disk mounts correctly in is_device_mounted [19:52] jamespage, yep, saw that, but was looking for a merge proposal or bug to tie it to. [19:53] (being that that fix breaks this charm) [20:02] jamespage, i suspect context is: http://bazaar.launchpad.net/~charmers/charms/trusty/ubuntu-repository-cache/trunk/view/head:/lib/ubuntu_repository_cache/storage.py#L131 [20:02] s/is/was/ [20:03] beisner, I'm actually wondering whether that charm-helpers change has uncovered a bug [20:04] beisner, huh - yeah it does [20:04] ooo oo a cascading bug [20:04] beisner, /dev/vdb was getting missed on instances, so got added to the new devices list before [20:04] no longer true [20:04] * jamespage scratches his head for a fix [20:08] beisner, the charm does not have configuration semantics that support re-using a disk that's already mounted [20:09] beisner, overwrite specific excludes disks already in use - its a sorta failsafe [20:09] beisner, I could do a ceph type thing for testing [20:12] jamespage, ok so vdb is mounted @ /mnt, and with that c-h fix, the is it mounted helper actually works. whereas all along we've just been clobbering vdb? is that about right? [20:12] beisner, yup [20:13] jamespage, ok i see it clearly now. [20:19] beisner, ok - testing something now [20:21] beisner, https://code.launchpad.net/~james-page/charms/trusty/cinder/umount-mnt/+merge/266803 [20:21] testing now [20:22] sweet. oh look you even updated the amulet test. i was just thinking: i'll need to update a config option there. [20:23] jamespage, if this approach is what we stick with, i'll update o-c-t bundles [20:23] beisner, how else would I test my change? ;) [20:24] jamespage, well that's the shortest path for sure! 
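The umount-mnt change under review above amounts to: if the cloud's ephemeral disk is parked on /mnt, unmount it so the block device can be handed to the cinder-volumes VG. A sketch of that idea with charm-helpers primitives; the ephemeral-unmount option name is borrowed from the ceph charm and the helper names are hypothetical, so treat this as an outline rather than the merged code.

```python
from charmhelpers.core.hookenv import config, log
from charmhelpers.core.host import mounts, umount


def filesystem_mounted(mountpoint):
    # mounts() returns [mountpoint, device] pairs parsed from /proc/mounts
    return mountpoint in [entry[0] for entry in mounts()]


def ensure_ephemeral_unmount():
    # e.g. ephemeral-unmount=/mnt in a test bundle frees up /dev/vdb
    mountpoint = config('ephemeral-unmount')
    if mountpoint and filesystem_mounted(mountpoint):
        umount(mountpoint, persist=True)   # persist=True also drops the fstab entry
        log('Unmounted ephemeral storage at %s' % mountpoint)
```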
[20:24] beisner, longer term filesystem_mounted should go to charm-helpers [20:24] but for tomorrow here is fine imho [20:28] jamespage, ack [20:39] beisner, passed its amulet test for me [20:39] beisner, https://code.launchpad.net/~james-page/charms/trusty/cinder/umount-mnt/+merge/266803 [20:39] gnuoy, ^^ or any other charmer [20:39] beisner, I've not written a unit test which makes me feel guilty [20:39] but I need to sleep [20:40] jamespage: idk, lgtm [20:40] jamespage, lol [20:40] marcoceppi, ta [20:40] jamespage, yes, i believe this will do the trick. thanks a ton. i've updated and linked the bug. [20:40] jamespage: maybe just default ephemeral-mount to /mnt ? [20:41] marcoceppi, meh - I'd prefer to keep it aligned to ceph [20:41] jamespage: and I really don't care enough either way [20:41] marcoceppi, just in case someone did have /mnt mounted as something else :-) [20:41] marcoceppi, and really did not want it unmounted [20:41] * marcoceppi nods [20:41] marcoceppi, this is really a testing hack [20:41] jamespage: yeah, I see that in the amulet test you updated [20:42] beisner, ok - going to land that now [20:42] jamespage, yep +1 [20:43] beisner, done - to bed with me! [20:43] nn [20:43] jamespage, thanks again. and, Odd_Bloke thanks for fixing that bug in is_device_mounted. [20:50] beisner: :) [21:00] how do i deal with an environment that seems completely stalled? when i try ‘juju status’ it just hangs indefinitely [21:10] moqq: is the bootstrap node running? [21:11] what provider are you using? [21:12] marcoceppi: yes, machine-0 service is up. manual provider [21:12] moqq: can you ssh into the machine? [21:12] yep [21:12] moqq: `initctl list | grep juju` [21:13] marcoceppi: http://pastebin.com/dUqwsTez [21:14] moqq: sweet! VoltDB [21:14] * marcoceppi gets undistracted [21:14] haha [21:14] moqq: try `sudo restart jujud-machine-0` [21:14] give it a few mins [21:14] then juju status [21:15] also, are you out of disk space? `df -h`? [21:15] no, plenty of space. and restarting the service to no avail, have cycled it a good handful of times [21:15] moqq: have you cycled the juju-db job as well? [21:15] that's the next one [21:15] yeah [21:15] moqq: time to dive into the logs [21:16] what's the /var/log/juju/machine-0 saying? [21:16] marcoceppi: http://pastebin.com/KWDXACvD [21:17] moqq: were you running juju upgrade-juju ? [21:17] yeah at one point i tried to and it failed [21:17] moqq: from what version? [21:18] moqq: this may be a bug that was fixed recently, and if so there's a way to recover still [21:18] 1.23.something -> 1.24.4 [21:18] iirc [21:19] moqq: what does `ls -lah /var/lib/juju/tools` look like? [21:20] marcoceppi: http://paste.ubuntu.com/11996021/ [21:20] moqq: this should help: https://github.com/juju/docs/issues/539 [21:20] moqq: you'll need to do that for all of the symlinks [21:21] moqq: so, stop all the agents first [21:21] then that [21:21] then start them all up again, with juju-db and machine-0 being the first and second ones you bounce [21:21] ok thanks. on it [21:22] coreycb, around? if so can you land this puppy?: https://code.launchpad.net/~1chb1n/charms/trusty/hacluster/amulet-extend/+merge/266355 [21:26] beisner, sure, but is that branch frozen for release? [21:29] thanks marcoceppi that did the trick! [21:30] moqq: awesome, glad to hear that.
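The recovery marcoceppi walks moqq through, collected in one place. The symlink repair is only sketched (the linked docs issue has the authoritative steps), and the job names assume an upstart-era bootstrap node like the one in this log.

```sh
# On the bootstrap node (manual provider, upstart-era juju):
initctl list | grep juju            # which jujud jobs exist and are running
df -h                               # rule out a full disk first
less /var/log/juju/machine-0.log    # look for the failed tools-upgrade errors

# bounce everything: stop the agents, repair the tools symlinks, start again
sudo stop jujud-machine-0           # repeat for any jujud-unit-* jobs
sudo stop juju-db
ls -lah /var/lib/juju/tools         # re-point any agent symlink left dangling
                                    # by the interrupted upgrade at an existing
                                    # tools version directory (see the issue above)
sudo start juju-db                  # database first...
sudo start jujud-machine-0          # ...then the machine agent
juju status                         # should answer again after a minute or two
```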
It was only a bug that existed in 1.23, so going forward you shouldn't have an issue with upgrades *related to this* [21:30] ok excellent [21:30] now, it's gotten me to 1.24.3 [21:30] but it seems to be refusing to go to 1.24.4 [21:31] ubuntu@staging-control:/var/lib/juju/tools$ juju upgrade-juju --version=1.24.4 >>> ERROR no matching tools available [21:31] moqq: 1.24 is a proposed release [21:31] moqq: you need to set your tools stream to proposed instead of released [21:31] moqq: I'd honestly just wait until it's released (in a few days) [21:31] i’m pretty sure i already did. juju has been constantly chewing up 100% of all of our cpus [21:32] so i was hoping the .4 upgrade would fix that [21:32] ah [21:32] cuz if it's not solved soon i have to rip out juju and switch to puppet or chef [21:33] moqq: hum, juju using 100% shouldn't happen [21:33] is there a bug already for this? [21:33] yeah https://bugs.launchpad.net/juju-core/+bug/1477281 [21:33] Bug #1477281: machine#0 jujud using ~100% cpu, slow to update units state [21:34] moqq: looks like this was reported with 1.23, is it still chewing 100% cpu on 1.24.3? [21:35] it looks fine for the moment. but when i did this upgrade on the other env earlier it was fine for 20m then went back to spiking [21:35] going to watch it [21:36] moqq: if it does spike up and start chewing 100% again, def ping me in here and update that bug saying it's still a problem, it's not targeted at a release so it's really not on the radar atm [21:37] moqq: as to your other question about 1.24.4, what does `juju get-env agent-stream` say? [21:38] marcoceppi: ok, will do [21:38] apparently >>> ERROR key "agent-stream" not found in "staging" environment [21:39] moqq: haha, well that's not good [21:39] well, that's not bad either [21:39] just, interesting [21:40] moqq: you could try `juju set-env agent-stream=proposed`, then another upgrade-juju (per https://lists.ubuntu.com/archives/juju/2015-August/005540.html) [21:40] but if there is no value currently it may not like that [21:40] just gave a warning, but it set ok [21:41] moqq: well, if you feel like being daring you can give it a go [21:41] in the changelog I don't see any reference to CPU consumption [22:13] core, devs, charmers: Is there a method by which juju can be forced to not overwrite changes to config files on node reboot? [22:14] coreycb, nope we can land passing tests any time.
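For completeness, the proposed-stream upgrade path discussed above in one sequence; 1.24.4 was still in ppa:juju/proposed at the time of this log, so the stream switch is only needed until it is actually released.

```sh
juju get-env agent-stream              # errors if the key has never been set
juju set-env agent-stream=proposed     # opt this environment into proposed tools
juju upgrade-juju --version=1.24.4
juju status                            # check agent-version once the agents restart
```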