=== CyberJacob is now known as CyberJacob|Away
=== StoneTable is now known as aisrael
=== nottrobin_ is now known as nottrobin
=== mbarnett` is now known as mbarnett
=== Guest18526 is now known as wallyworld
[04:13] thumper: State.StateServingInfo checks for a StateServingInfo doc
[04:13] or more specifically a doc that will marshal into that type
[04:13] then checks if doc.Port == 0 and takes that as a not-found signal
[04:13] it does this because state.Open inserts an empty document into that collection ...
[04:14] right ...
[04:15] thumper: can you think of a reason not to change this to
[04:15] not insert the doc if there is nothing to insert
[04:15] then use ErrNotFound as the sentinel for the document not being found?
[04:16] * thumper looks at it now...
[04:18] I'm guessing because it is always expected to be there (for some value of there)
[04:18] makes it easy to set by just calling update
[04:18] isn't there an Upsert?
[04:27] davecheney: I think with a change to use upsert, the behaviour could be changed
[04:27] fuk this nosql bullshit
[04:29] ok, that'll have to stay as broken as it is today
[04:29] func (st *State) SetStateServingInfo(info StateServingInfo) error {
[04:29]     if info.StatePort == 0 || info.APIPort == 0 ||
[04:29]         info.Cert == "" || info.PrivateKey == "" {
[04:29] ^ we only consider these four fields to be important
[04:29] any others can be nil and are not considered 'empty'
[04:32] :)
[04:32] davecheney: perhaps #juju-dev is a better channel... users probably don't care :-)
[04:37] whoops
[04:38] i'm in the wrong channel
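A minimal sketch of the upsert-based alternative discussed above — write with Upsert so the document only exists once it has been set, and use mgo's ErrNotFound rather than Port == 0 as the not-found signal. This assumes mgo's Collection.Upsert and Find/One API; the collection, document id, and field names are illustrative, not juju's actual ones:

    package state

    import (
        "gopkg.in/mgo.v2"
        "gopkg.in/mgo.v2/bson"
    )

    // servingInfo mirrors the four fields the log calls important; the
    // struct and bson tags here are illustrative only.
    type servingInfo struct {
        APIPort    int    `bson:"apiport"`
        StatePort  int    `bson:"stateport"`
        Cert       string `bson:"cert"`
        PrivateKey string `bson:"privatekey"`
    }

    // setServingInfo upserts the single serving-info document instead of
    // relying on state.Open having pre-inserted an empty one.
    func setServingInfo(c *mgo.Collection, info servingInfo) error {
        _, err := c.Upsert(bson.D{{"_id", "stateServingInfo"}}, bson.M{"$set": info})
        return err
    }

    // servingInfoDoc returns mgo.ErrNotFound when the document has never
    // been set, instead of treating a zero port as a not-found sentinel.
    func servingInfoDoc(c *mgo.Collection) (servingInfo, error) {
        var doc servingInfo
        err := c.Find(bson.D{{"_id", "stateServingInfo"}}).One(&doc)
        return doc, err
    }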
=== uru_ is now known as urulama
=== CyberJacob|Away is now known as CyberJacob
=== urulama is now known as urulama-afk
=== jamespag` is now known as jamespage
[10:22] hello all
=== robinbowes is now known as yo61
[10:25] did anyone succeed with using juju-local in utopic? it was my third attempt this morning, and now it fails to build containers
[10:25] I filed bug 1363832 this morning
[10:25] Bug #1363832: [utopic] fails to build containers
[10:25] I wonder if it's generally broken, or something on my system is special
=== bloodearnest_ is now known as bloodearnest
=== gsamfira1 is now known as gsamfira
=== jam2 is now known as jam1
[12:05] has anyone else observed juju invoking config-changed at random on a deployed environment?
[12:06] We only noticed because it fails as a configured auth token has expired
[14:03] did anyone succeed with using juju-local in utopic? it was my third attempt this morning, and now it fails to build containers
[14:03] I filed bug 1363832 this morning
[14:03] Bug #1363832: [utopic] fails to build containers
[14:03] I wonder if it's generally broken, or something on my system is special
[14:05] juju? i can't even figure out how to get it to work-ish
=== html is now known as johnny_number_5
[14:44] bloodearnest, at random?.. it gets called in a few places
[14:44] bloodearnest, it's called before start, and after upgrade, and whenever the config changes, and if the machine is rebooted.
[14:44] hazmat: as in, after a long period of no juju activity, with no human stimulus
[14:45] fwereade, ^
[14:45] hazmat: haven't been able to reliably reproduce
[14:45] hazmat: but I've seen it on local provider, and jjo's seen it a few times in prodstack
[14:46] pitti, i haven't seen that.. i filed a separate issue about the default containers defaulting to trusty (or more specifically latest lts) instead of precise.
[14:47] pitti, are there any other containers on the machine from juju? it's hard to tell if it's having issues starting the template container or the actual container
[14:48] hazmat: no, before there were only a few completely unrelated containers
[14:48] hazmat: it tried to create a juju-trusty-lxc-template but failed
[14:48] I lxc-destroy'ed that after each failed attempt
[14:50] i should really polish up my juju lxc scripts.. simplifies the interaction much more (nothing on the host).
[14:50] pitti, could you manually start the template container? ie lxc-start -d -n juju-trusty-lxc-template
[14:51] hazmat: well no, there is nothing in that container
[14:51] hazmat: it created a /var/log/juju/ (empty), and that's it
[14:51] oh..
[14:52] pitti, that's strange.. the cloud image template might have some cache issues as well, if it also fails when you manually do lxc-create -t ubuntu-cloud -- -r trusty .. i'd try clearing out the appropriate series dir in /var/cache/lxc
[14:53] hazmat: there is no other juju related container
[14:53] pitti, and re env settings.. juju get-env | grep lxc shows -> lxc-clone-aufs: false ?
[14:53] hazmat: so if juju deploy is supposed to merely clone an existing container, that would explain why it fails
[14:54] hazmat: but then bootstrap ought to have failed
[14:54] pitti, yup.. it sets up a template container, and clones it for the actual unit
[14:54] (I pointed that out in the bug report -- bootstrap does *not* create any container)
[14:54] if you're on btrfs.. it uses a snapshot automatically if /var/lib/lxc is btrfs
[14:54] no, plain ext4
[14:54] pitti, yeah.. bootstrap does nothing
[14:54] so where does the template container come from?
[14:54] if bootstrap doesn't create it, and deploy expects it?
[14:54] pitti, it's all async, the first time a machine is needed
[14:54] pitti, deploy will do it inline
[14:55] this is a long-standing issue imo cause it has to download.. a full image, and it's doing it without giving feedback
[14:55] ie. at a minimum we should be showing more info in status when it's doing stuff in the background for the image dl
[14:55] well, a "full image" is something like 60 MB or so?
[14:56] at least that's the rough size of stgraber's pre-built LXC templates
[14:56] bootstrap can't quite do it, cause we don't know what series will need to be created
[14:56] that should download in a minute; I waited > 10 mins
[14:56] hazmat: ok, thanks for explaining the workflow
[14:56] I suppose I'll set it up again and this time just let it sit there for really long
[14:56] but it showed a lot of error messages and there was no active process, it seemed quite dead/failed to me
[14:56] pitti, no.. its bigger.. ~220mb
[14:57] uh, wow
[14:57] pitti, it's the image off cloud-images.ubuntu.com it's downloading .. not the mystery images off linux-containers.org
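The workflow hazmat describes — bootstrap creates nothing; the first deploy of a series builds a per-series template container inline (downloading the cloud image, hence the long silent wait), then clones it for each unit — roughly as a Go sketch. The function names, container paths, and shelling out to lxc-create/lxc-clone are illustrative, not juju's actual local-provider code:

    package localprovider

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
    )

    // ensureTemplateContainer builds the per-series template container the
    // first time a machine of that series is needed; bootstrap never calls it.
    func ensureTemplateContainer(series string) (string, error) {
        name := fmt.Sprintf("juju-%s-lxc-template", series)
        if _, err := os.Stat(filepath.Join("/var/lib/lxc", name)); err == nil {
            return name, nil // template already built
        }
        // Downloads the ~220 MB image from cloud-images.ubuntu.com on first
        // use, which is why the first deploy can appear to hang.
        cmd := exec.Command("lxc-create", "-n", name, "-t", "ubuntu-cloud", "--", "-r", series)
        if out, err := cmd.CombinedOutput(); err != nil {
            return "", fmt.Errorf("creating template container: %v: %s", err, out)
        }
        return name, nil
    }

    // cloneUnitContainer clones the template for an actual machine; with
    // /var/lib/lxc on btrfs this becomes a snapshot, otherwise a plain copy.
    func cloneUnitContainer(template, machine string) error {
        cmd := exec.Command("lxc-clone", "-o", template, "-n", machine)
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("cloning %s: %v: %s", template, err, out)
        }
        return nil
    }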
[14:57] ah, that; but even that just takes 3 mins or so for me
[14:58] pitti, you can see the image it downloads @ /var/cache/lxc/cloud-$series per the ubuntu-cloud template source
[14:58] pitti, the actual error message is the failure to start the container, so i'm wondering, if you manually create a container, does it work
[14:59] with the ubuntu-cloud lxc template
[14:59] ok, /scratch/juju set up again, bootstrap is done
[14:59] $ juju get-env | grep lxc
[14:59] container: lxc
[14:59] lxc-clone-aufs: false
[14:59] hazmat: ^ FTR
[14:59] cool
[14:59] hazmat: I just changed root-dir and default-release, nothing else
[15:00] if manually creating an lxc with the cloud template doesn't work, that's the primary issue and i'd suggest wiping /var/cache/lxc/cloud* as a likely cause.
[15:00] so I don't have anything juju-ish in /scratch/lxc/
[15:00] just my unrelated other containers
[15:00] k
[15:00] * pitti runs juju deploy juju-gui
[15:01] /var/cache/lxc/cloud-trusty/ubuntu-14.04-server-cloudimg-amd64-root.tar.gz
[15:01] that exists and doesn't change size (185 MB)
[15:01] /var/cache/lxc/cloud-precise/ubuntu-12.04-server-cloudimg-amd64-root.tar.gz also exists and doesn't change size (238 MB)
[15:02] bloodearnest, we do call config-changed on somewhat slim pretexts, but not *at random*
[15:02] hazmat: tail -f /scratch/juju/log/* doesn't show anything either
[15:03] and ps aux shows no processes that are more recent than 2 hours ago
[15:03] so something in the template container creation goes pear-shaped
[15:03] pitti, to sanity check can you manually create a container using the cloud template?
[15:04] hazmat: well, I'd need a cloud template container first :) (that's the very bug)
[15:04] how do I create that?
[15:04] from /var/cache/lxc/cloud-trusty/ubuntu-14.04-server-cloudimg-amd64-root.tar.gz
[15:04] sudo lxc-create -n myprecise -t ubuntu-cloud -- -r precise -S ~/.ssh/id_rsa.pub
[15:04] sudo lxc-start -d -n myprecise
[15:05] hazmat: no, see above -- there *is* no ubuntu-cloud container -- creating that is the thing that fails
[15:05] pitti, lxc has different templates for creating a container
[15:05] the only thing that was created in all this was the empty juju-trusty-lxc-template
[15:05] pitti, per contents of /usr/share/lxc/templates/
[15:05] hazmat: ah sorry, ignore me
[15:05] create -t, not clone
[15:06] failed to get https://cloud-images.ubuntu.com/query/precise/server/daily-dl.current.txt
[15:06] There is no download available for release=precise, stream=daily, arch=amd64
[15:07] hazmat: that smells like something then
[15:07] that's weird, that link resolves
[15:08] and it should be defaulting to the release stream, not daily
[15:09] fwereade: at random == our perception :)
[15:10] oh.. i wonder if it goes to daily cause you're on an unreleased.. smarts of some form
[15:11] pitti, it sounds like the bug may apply to the lxc templates then if the underlying lxc tools aren't working
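The lookup that is failing here queries the released image stream first and only falls back to daily when that fails (the fallback is spelled out a few lines further on when hazmat reads the lxc-ubuntu-cloud template), so a spurious failure on the released URL surfaces as the daily-stream error pitti sees. A hedged Go sketch of that fallback order; the real ubuntu-cloudimg-query is a shell script and behaves differently in detail:

    package main

    import (
        "fmt"
        "net/http"
    )

    // currentImageURL mimics the released-then-daily lookup order only; the
    // URLs match the log, the rest is illustrative.
    func currentImageURL(series string) (string, error) {
        for _, stream := range []string{"released", "daily"} {
            url := fmt.Sprintf("https://cloud-images.ubuntu.com/query/%s/server/%s-dl.current.txt", series, stream)
            resp, err := http.Get(url)
            if err != nil {
                continue // e.g. a TLS verification failure, as with wget below
            }
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                return url, nil
            }
        }
        return "", fmt.Errorf("no download available for series %q", series)
    }

    func main() {
        fmt.Println(currentImageURL("trusty"))
    }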
[15:11] fwereade: other than after start/upgrade/reboot, or an actual honest-to-goodness config change, are there any more scenarios that it might be?
[15:15] hazmat: reading /usr/share/lxc/templates/lxc-ubuntu-cloud it seems to default to "tryreleased" and falls back to "daily" if that fails
[15:16] $ ubuntu-cloudimg-query trusty released amd64
[15:16] failed to get https://cloud-images.ubuntu.com/query/trusty/server/released-dl.current.txt
[15:16] hazmat: ^ that might explain it further
[15:16] yeah
[15:16] so that fails, and it falls back to "daily"
[15:16] pitti, that link resolves though..
[15:17] pitti, that link works for me
[15:17] pitti, as does the cli
[15:17] hazmat: yes, for me too (in the browser)
[15:18] * pitti adds a cloud-utils task then
[15:18] cool
[15:19] hazmat: does wget work for you?
[15:19] hazmat: it fails because the certificate can't be checked
[15:19] and it's calling wget without --no-check-certificate (quite rightly so)
[15:20] bloodearnest, I *think* those should be the only cases (plus agent restart, not quite a reboot necessarily)
[15:20] bloodearnest, is it possible something's bouncing the agent?
[15:20] hazmat: ok, thanks muchly for your help! I'll go and prod our cloud folks
[15:22] fwereade: that sounds a likely candidate, will try to check next time it happens
[15:23] hazmat: if wget works for you on that URL, it'd be interesting whether you are running trusty or utopic
[15:24] hazmat: wget does work for me in trusty, but apparently got stricter in utopic
[15:24] fwereade: what's the rationale behind running config-changed after reboot/restart?
[15:24] bloodearnest, heh
[15:25] bloodearnest, we don't track what version of the config you've seen
[15:25] ahh
[15:25] bloodearnest, I have no idea what inspired that approach
[15:25] bloodearnest, but fixing it has not been especially high on my list
[15:26] pitti, cert issues on trusty then?
[15:26] or perhaps tls negotiation
[15:26] hazmat: or wget just simply didn't default to checking the cert for https:// on trusty yet
[15:26] which is more likely
[15:26] bloodearnest, (and since we *do* run it on reboot I worry a bit that not doing so will subtly break some charms that have accidentally come to depend on it)
[15:26] hazmat: I pinged smoser about it in #u-devel
[15:27] fwereade: ack, it was a bug in our charm that made us discover it, just wanted to know why it was happening
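A rough illustration of the fix fwereade alludes to — track which config the unit has already seen, so a restart or reboot only queues config-changed when the settings actually differ. The hash-file approach and all names here are hypothetical, not the uniter's real mechanism:

    package uniter

    import (
        "crypto/sha256"
        "encoding/hex"
        "os"
        "sort"
    )

    // configHash produces a stable digest of the charm config the unit last saw.
    func configHash(settings map[string]string) string {
        keys := make([]string, 0, len(settings))
        for k := range settings {
            keys = append(keys, k)
        }
        sort.Strings(keys)
        h := sha256.New()
        for _, k := range keys {
            h.Write([]byte(k))
            h.Write([]byte{0})
            h.Write([]byte(settings[k]))
            h.Write([]byte{0})
        }
        return hex.EncodeToString(h.Sum(nil))
    }

    // needsConfigChanged compares the current settings against the hash
    // recorded on disk; only a genuine change (or no record at all) would
    // queue the hook after an agent restart.
    func needsConfigChanged(stateFile string, settings map[string]string) (bool, error) {
        current := configHash(settings)
        prev, err := os.ReadFile(stateFile)
        if os.IsNotExist(err) {
            return true, nil // never recorded: run the hook once
        }
        if err != nil {
            return false, err
        }
        return string(prev) != current, nil
    }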
[15:27] hazmat: I hacked ubuntu-cloudimg-query and will now re-try juju
[15:32] * pitti also adds --no-check-certificate to the ubuntu-cloud LXC template and tries again
[15:35] hazmat: ah, now I have a juju-trusty-lxc-template, a running martin-local-machine-1, and we are as far as agent-state-info: 'hook failed: "install"'
[15:35] hazmat: that's a whole lot of progress :)
[15:36] pitti, awesome
[15:37] apt-get install failed
[15:43] pitti, ugh.. dns issues in the container?
[15:43] pitti, or are you getting a checksum mismatch?
[15:43] I wish it would contain apt's error messages
[15:44] pitti, lxc-attach -n your-container-name and then apt-get install
[15:44] hazmat: yep, that's my next step
[15:44] pitti, actually a better alternative
[15:44] is juju debug-hooks unit-name
[15:44] and then juju resolved --retry
[15:44] + unit-name
[15:45] aah
[15:45] pitti, if you like your containers in clouds.. with across-host networking.. this is something fun i put together over the weekend. http://bazaar.launchpad.net/~hazmat/charms/trusty/rudder/trunk/view/head:/readme.txt
[15:45] # cat /etc/apt/apt.conf.d/42-juju-proxy-settings
[15:45] Acquire::http::Proxy "http://127.0.0.1:3142";
[15:46] now, that wouldn't work, my dear juju
[15:46] wtf
[15:46] it should be pointing to the bridge address
[15:46] 10.0.3.1
[15:47] and now we're back to juju bugs ;-)
[15:48] hazmat: thanks for pointing out the debug stuff
[15:48] pitti, np
[15:49] hazmat: filed and updated bug 1364069
[15:49] Bug #1364069: local provider must transform localhost in apt proxy address
[15:49] resolved --retry still fails, but I need to run now; more tomorrow :)
[15:50] that's the one downside of using my own juju lxc scripts.. i don't get to experience the pain of local provider
[15:50] pitti, cheers
=== utlemmin` is now known as utlemming
=== utlemming_away is now known as utlemming
=== utlemming_away is now known as utlemming
=== arosales is now known as roslaes
=== roslaes is now known as arosales
=== scuttle|` is now known as scuttlemonkey
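A sketch of the kind of rewrite bug 1364069 asks for: when the host's apt proxy points at localhost, the local provider would need to substitute the LXC bridge address so containers can actually reach it. The function name and the hard-coded 10.0.3.1 in the example are illustrative, not juju's implementation:

    package main

    import (
        "fmt"
        "net"
        "net/url"
    )

    // containerAptProxy rewrites a host-side apt proxy URL for use inside an
    // LXC container: 127.0.0.1/localhost is unreachable from the container,
    // so point it at the lxcbr0 address instead.
    func containerAptProxy(hostProxy, bridgeAddr string) (string, error) {
        u, err := url.Parse(hostProxy)
        if err != nil {
            return "", err
        }
        host, port, err := net.SplitHostPort(u.Host)
        if err != nil {
            host, port = u.Host, ""
        }
        if host == "localhost" || host == "127.0.0.1" {
            if port != "" {
                u.Host = net.JoinHostPort(bridgeAddr, port)
            } else {
                u.Host = bridgeAddr
            }
        }
        return u.String(), nil
    }

    func main() {
        // The proxy from pitti's 42-juju-proxy-settings, rewritten to the
        // default lxcbr0 address mentioned in the log.
        fmt.Println(containerAptProxy("http://127.0.0.1:3142", "10.0.3.1"))
    }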