=== racedo` is now known as racedo
[09:59] i'm taking another swap day today, BTW
[10:00] rogpeppe, I don't remember allowing that
[10:00] mattyw: oh
[10:00] mattyw: in which case i guess i'll just have to kill you
=== gary_poster is now known as gary_poster|away
=== gary_poster|away is now known as gary_poster
[12:58] hey. what's the state of the null/none/manual provider?
[12:58] is that in 1.16?
[13:37] smoser: the manual provider is in 1.16 but it's not really ready for primetime. There are still some bugs around it.
[13:38] natefinch, that's ok.
[14:52] hey....
[14:52] http://paste.ubuntu.com/6324341/
[14:52] that seems to imply that "Due to needing newer versions of LXC the local provider does require a newer kernel than the released version of 12.04. Therefore we install Linux 3.8 from the LTS Hardware Enablement Stack"
[14:53] which is declared as fact at https://juju.ubuntu.com/docs/config-local.html
[14:53] is incorrect
[14:53] can we have that doc fixed please?
[14:53] or someone point me to why that statement exists (hazmat?)
[14:56] smoser: I don't know the exact details, but we've definitely seen problems with 12.04. Not only the old LXC but also the old MongoDB. Are you sure you're not already running the newer kernel on that machine?
[14:57] natefinch, i'm using the cloud archive
[14:57] which is the solution designed by CTS for some supportable solution of juju on 12.04
[14:59] ie, that is probably what the doc should suggest.
[15:00] smoser: yeah, it sounds like the doc is just out of date. I haven't really kept up on the exact nuances of the local provider, which is my only hesitation. definitely we should make sure the documentation is accurate and suggests the least painful way to get it running on 12.04
[15:12] smoser, not sure why that verbiage appeared.
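The manual provider discussed above is configured through environments.yaml; in the 1.16 series it is selected with provider type "null". A minimal sketch of such a stanza, written to a scratch file so nothing real is touched; the environment name, host, and user are placeholders, not anything from this discussion:

```shell
# Write a placeholder "null" (manual) provider stanza. In juju 1.16 the
# manual provider is selected with type: "null"; the bootstrap machine is
# named explicitly rather than provisioned by a cloud. Values are made up.
cat > /tmp/environments-manual.yaml <<'EOF'
environments:
  my-manual:
    type: "null"
    bootstrap-host: manual-host.example.com
    bootstrap-user: ubuntu
EOF
grep -c 'bootstrap-host' /tmp/environments-manual.yaml
```

As noted in the channel, this provider existed in 1.16 but was not considered ready for primetime.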
[15:13] smoser, specifically around the manual provider: it works, but you need to manually remove $JUJU_HOME/environments/$manual_env_name.jenv to destroy/reset it
[15:14] and it leaves juju stuff on the machines in question..
[15:14] smoser lp:juju-core/docs
[15:38] smoser, there are performance issues with older kernels (including 3.8) and container networking that have been documented, not sure it's relevant in this context though
[15:40] hazmat, hallyn was unaware of such issues
[15:40] so links to such documentation might be useful.
[15:41] smoser, i saw it referenced by the docker community last month re this thread https://groups.google.com/forum/#!msg/docker-user/txAd5BiVapU/AfXvssMqkr4J
[15:42] i think performance of veth is probably not a good enough reason to suggest someone use a HWE kernel that is not necessary otherwise.
[15:42] at least without citing that as the reason
[15:42] HWE kernels have real drawbacks, forcing you to upgrade or lose security updates earlier than with the 12.04 LTS kernel.
[16:04] natefinch, do you have time to review two small branches that backport fixes to 1.16.1? https://codereview.appspot.com/18610043/ and https://codereview.appspot.com/18640043/
[16:06] sinzui: sure
[16:09] sinzui: the first one already got a LGTM from jam, so that's fine. I can lgtm the other one.
[16:15] * sinzui thinks his review emails are being eaten again
[16:20] any word on a fix for the lxc/apt issue coming down the pipe? I'd really like to demo the GUI next week on lxc :)
[16:25] hatch is this the node problem?
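The manual reset hazmat describes above can be sketched as a small shell step; the environment name is a placeholder, and the actual removal is left commented out:

```shell
# Sketch of resetting a manual-provider environment in juju 1.16: juju
# caches environment state in a .jenv file under $JUJU_HOME, and per the
# discussion above you must delete it by hand to destroy/reset the
# environment. "my-manual" is a placeholder environment name.
JUJU_HOME="${JUJU_HOME:-$HOME/.juju}"
env_name="my-manual"
jenv="$JUJU_HOME/environments/$env_name.jenv"
echo "would remove: $jenv"
# rm -f "$jenv"   # uncomment to actually forget the environment
```

Note the other caveat from the log: this only forgets the environment locally; juju's artifacts are left behind on the machines themselves.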
[16:26] sinzui: when I try and deploy the gui it fails with some apt-get stuff
[16:27] I was told it was an lxc bug
[16:27] happens on 12.04 and 12.10 with the most recent juju-core
[16:28] hatch, these are the bugs that I want to close today/tomorrow with a release https://launchpad.net/juju-core/+milestone/1.16.1
[16:29] hatch, there aren't any bugs about lxc reported that also speak of apt
[16:29] ahh ok so that doesn't help unfortunately....I don't know what the true cause of it is, it could be lxc related and not juju related
[16:29] all apt bugs relate to firewalls in fact.
[16:30] hmm ok I'll try and get some more information and file a formal bug so that someone can tell me one way or another if it's juju related or not
[16:31] I am sure it is a bug and maybe even reported. We want to tag it juju-gui so that I can drive a fix for it
[16:45] sinzui: looks like after doing some updates after the sprint it's working again, so something must have landed to fix it :)
[16:46] okay hatch. If you do have bugs you care about, tag them with juju-gui so that I can arrange for fixes right away
[16:46] will do thanks!
[18:38] smoser, re hwe sounds reasonable to me, i don't have any insight into why we doc'd hwe kernels for precise, we're not using any lxc functionality that's only in later kernel rels afaics. the delta might be related to lxc userspace tooling and how a newer version of that interacts with older kernel versions, dunno.
[19:54] hello. I am facing some issues with juju on an OpenStack private cloud. Could someone point me to some documentation on that?
[19:55] I am running Havana deployed on top of MAAS
[19:57] hi thumper, can you help me?
[19:57] maybe, what's your problem?
[19:58] I am trying to run Juju on top of Havana
[19:58] on a private cloud.
[19:58] * thumper thinks
[19:58] is your problem django?
[19:58] I've deployed Havana using Juju/MAAS. But now I am having issues deploying VMs on top of Havana using juju.
[19:58] ISTR that there is an issue with dependencies
[19:59] MAAS needing one version and havana needing another
[19:59] a version of Juju ?
[19:59] no
[19:59] django
[19:59] but when juju deploys a machine
[19:59] it adds the cloud-tools archive
[20:00] which then installs a version of django that havana can't use
[20:00] this is a known bug and something we are going to try to work around
[20:00] but not sure of the fix just yet
[20:00] does this sound like your problem may be related to that?
[20:01] okay. What is the side effect of that django bug? My issue right now is that Juju cannot bootstrap _on top of_ Havana.
[20:01] Havana seems to behave just fine by itself.
[20:01] I'm not entirely sure what the side effect is, just that it doesn't work
[20:01] this could be it
[20:01] IF it is the same issue
[20:02] at what stage does it fail?
[20:02] after I set up my environments.yaml file to point to my Havana install
[20:02] I call juju bootstrap.
[20:02] it uploads all the tools to Swift/RadosGW
[20:02] then the following error comes up:
[20:02] can you see that the machine has started? can you ssh to it?
[20:03] 2013-10-29 19:47:26 ERROR juju supercommand.go:282 cannot start bootstrap instance: index file has no data for cloud {RegionOne http://m1basic-05.vm.draco.metal.as:5000/v2.0} not found
[20:03] ok... that sounds like something else
[20:03] maybe
[20:03] I think so.
[20:03] so... perhaps this is simplestreams related
[20:04] how do people usually load up their images into a private OpenStack install for using with Juju?
[20:04] we now use simplestreams to find the instances
[20:04] exactly...
[20:04] I don't know
[20:04] wallyworld may be able to help when he starts in a few hours
[20:04] as he did a lot of the simplestreams stuff
[20:04] okay.
[20:04] sodre: which TZ are you in?
[20:04] I'm on EST
[20:05] so already 5pm?
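The "index file has no data for cloud" error above means the simplestreams index juju fetched has no entry whose region and endpoint match the environment being bootstrapped. Roughly, juju looks for a clouds entry like the one below in streams/v1/index.json; the structure here is reconstructed from memory of juju's image-metadata format and the product id and path are illustrative, not taken from sodre's setup:

```shell
# Write an illustrative simplestreams index naming sodre's cloud. For juju
# to find images, the clouds list must contain the exact region name and
# keystone endpoint the environment uses.
cat > /tmp/index.json <<'EOF'
{
  "format": "index:1.0",
  "index": {
    "com.ubuntu.cloud:custom": {
      "format": "products:1.0",
      "datatype": "image-ids",
      "clouds": [
        {"region": "RegionOne",
         "endpoint": "http://m1basic-05.vm.draco.metal.as:5000/v2.0"}
      ],
      "path": "streams/v1/imagemetadata.json",
      "products": ["com.ubuntu.cloud:server:12.04:amd64"]
    }
  }
}
EOF
grep -q '"region": "RegionOne"' /tmp/index.json && echo ok
```

If the index exists but its region/endpoint pair differs from what the environment reports, the lookup fails with exactly this error.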
[20:05] * thumper looks at the date applet
[20:05] 4:05
[20:05] oh, no falling back just yet
[20:06] * thumper thinks
[20:06] gah, TZ confused again
[20:06] anyway...
[20:06] :)
[20:06] wallyworld is in Brisbane, AU
[20:06] okay...
[20:06] and not likely to start for a few hours
[20:06] or stry smoser
[20:06] s/stry/try/
[20:06] Let me pm him.
[20:07] sodre: he is in this channel
[20:07] smoser: how do people usually load up their images into a private OpenStack install for using with Juju?
[20:07] smoser: any idea?
[20:08] ahh.. got it
[20:08] I haven't used IRC in such a long time.
[20:09] I understand, before working at canonical I hadn't used irc at all
[20:10] ok, not quite at all, but very briefly
[20:11] Do most people at canonical work from home?
[20:12] ah. sodre yes most people work from home.
[20:12] the suggested way to load images into glance is at https://code.launchpad.net/~smoser/simplestreams/example-sync
[20:13] that is nice. I guess there is always development going on.
[20:13] Alright, let me clone that branch.
[20:22] smoser: thanks
[20:26] thanks thumper and smoser... I am still trying to get it to work... I'll be back soon.
[20:26] sodre: good luck
[21:09] smoser: are you still available?
[21:10] not really
[21:10] quick....
[21:10] for some reason the PUT requests are going into /swift/simplestreams instead of /swift/v1/simplestreams
[21:10] any idea why that would be?
[21:11] Also, running swift from the command line places things in the correct place.
[21:13] um...
[21:13] i'm not sure.
[21:14] that code there *is* used in production
[21:14] so it is known working
[21:14] okay. I am using radosgw, maybe it should default to v1
[21:16] I'll give the code a quick read. Maybe I can figure out why it is not placing the v1 in the PUT.
[21:37] wallyworld_: just for you: http://www.git-tower.com/blog/git-cheat-sheet/
[21:38] :-(
[21:38] troll!
[21:39] like a boss
[21:39] I was using git yesterday as one of my home projects uses it
[21:40] hadn't been in for a while and needed to pull a particular branch to compile
[21:40] took me 45 minutes to work out how
[21:40] winning!
[21:40] who knew you had to pull *everything* and then "checkout"
[21:41] wallyworld_: I am low on coffee. If you could stop by the local bean on the way over that'd be awesome
[21:41] it will cost you
[21:41] * bigjools bends over
[21:42] it's on the wrong side of the road so it will cost you a lot
[21:42] * thumper covers his eyes
[21:42] there are children in this channel
[21:42] * wallyworld_ unips
[21:42] * bigjools goatses
[21:42] WORK CHANNEL!!
[21:42] thumper: there sure are!
[21:42] * thumper cracks a whip
[21:49] wallyworld_: do you work on simplestreams as well?
[21:50] the juju side of things, yes
[21:50] I have a question :)
[21:50] sure
[21:51] I am trying to use sstream-mirror-glance, but the URLs it gives to the swift client lack the /v1 part.
[21:52] e.g. when I run it, I get the following error
[21:52] Container PUT failed: http://m1basic-04.vm.draco.metal.as:80/swift/simplestreams 405 Method Not Allowed
[21:53] However, if I run ' swift post simplestreams '
[21:53] everything goes fine.
[21:54] I have logs from the Swift back-end as well, if you would like to see them.. +3lines.
[21:54] sodre: i have no knowledge of the lp:simplestreams project and associated tools. i just do the juju-core lib
[21:54] which is a separate codebase written in Go
[21:54] got it
[21:55] sorry :-(
[21:55] np
[21:55] but it does seem perhaps there is a bug there
[21:56] yeah, in the Swift log, it shows that the sstream-mirror-glance code did a PUT /swift/simplestreams
[21:56] while the swift CLI did a POST followed by a PUT /swift/v1
[22:06] at least I found out why... the code in simplestreams/openstack.py strips the version out :)
[22:07] now I wonder why they are doing that.
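The mismatch sodre tracked down can be shown in a few lines. This mimics (rather than quotes) the version-stripping behaviour he found in simplestreams/openstack.py, using the endpoint from the log:

```shell
# Stripping the "/v1" version segment from the swift endpoint turns the
# container URL radosgw accepts into one it rejects with 405.
endpoint="http://m1basic-04.vm.draco.metal.as:80/swift/v1"
stripped="${endpoint%/v1}"                     # what the mirror ended up using
echo "mirror tried:   PUT ${stripped}/simplestreams"   # -> 405 Method Not Allowed
echo "swift CLI used: PUT ${endpoint}/simplestreams"   # -> worked
```

Against a stock Swift proxy the versioned path is part of the account URL anyway, which is presumably why stripping it only bit on a radosgw backend.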
[22:08] good question :-)
[22:28] wallyworld_: the simplestreams images are getting loaded into glance after I commented out the "strip_versions" call.
[22:28] \o/
[22:28] now I have another question which you might be able to help with.
[22:29] I am trying to deploy juju on top of Havana.
[22:29] Havana itself seems to be working okay, i.e. I can launch CirrOS
[22:29] create networks, assign floating ips, etc...
[22:31] let me clean up my bootstrap version
[22:34] so I have sourced my admin-openrc.sh
[22:34] I ran juju bootstrap
[22:34] and then at the end I get the following error.
[22:34] ERROR juju supercommand.go:282 cannot start bootstrap instance: index file has no data for cloud {RegionOne http://m1basic-05.vm.draco.metal.as:5000/v2.0}
[22:35] I imagine that happens because it did a GET for /swift/v1/admin-juju/streams/v1/index.json
[22:35] and it failed.
[22:37] how do I tell Juju to find /streams/v1/index.json in a different container?
[22:49] sodre: you set the tools-url config option if you want to get simplestreams metadata from a non-default container
[22:50] okay... let me try that.
[22:50] or, ensure the index file you are using has metadata for RegionOne and your endpoint
[22:50] sodre: actually
[22:50] not tools-url
[22:51] you want image-metadata-url for images
[22:51] i think that's what it can't find
[22:51] if it complains about tools, you need tools-url
[22:52] Time to go and hit something, more emails after lunch
[22:52] * thumper -> gym
[22:52] okay.
[22:56] humm... not yet...
[22:57] I'll play with it once I get home.
[22:57] ttyl
[22:59] wallyworld_, do you understand what went wrong in this azure bootstrap? I see evidence that abentley's CI testing has placed current tools in juju-dist/testing/tools, but I don't see how our environments could collide and break: http://pastebin.ubuntu.com/6326844/
[23:00] * wallyworld_ looks
[23:04] sinzui: i've not seen that error before.
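Wallyworld_'s advice above, as a sketch of the relevant environments.yaml keys: image-metadata-url for image metadata, tools-url if bootstrap complains about tools. The container paths below are placeholders patterned on sodre's layout, not verified config:

```shell
# Sketch: point a juju 1.16 openstack environment at custom simplestreams
# metadata in non-default swift containers. URLs/values are placeholders.
cat > /tmp/openstack-env.yaml <<'EOF'
my-openstack:
  type: openstack
  # streams/v1/index.json is looked up beneath this URL
  image-metadata-url: http://m1basic-04.vm.draco.metal.as:80/swift/v1/admin-juju
  # only needed if bootstrap complains about tools rather than images
  tools-url: http://m1basic-04.vm.draco.metal.as:80/swift/v1/juju-tools
EOF
grep -q 'image-metadata-url' /tmp/openstack-env.yaml && echo ok
```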
it looks like it is failing to load image metadata from cloud-images.canonical.com
[23:04] * sinzui ponders setting image-metadata-url: http://cloud-images.ubuntu.com/releases
[23:04] that may work, let me check the code
[23:04] wallyworld_, okay. I think so too. I am not testing changes yet :) This was a stable deploy for an upgrade test
[23:06] sinzui: btw, trunk has all the changes except for the --public option with sync-tools (but you can create the json file for that by hand in the interim). i hope to land it today in any case
[23:06] okay
[23:07] wallyworld_, I was starting the blessing/cursing of 1.16 r1977. Once stable is updated, I was going to return to automating releases
[23:07] rightio
[23:07] damn. image-metadata-url did change anything
[23:08] did not?
[23:09] hmm, maybe the sjson is the issue
[23:09] 1.16 shouldn't be looking at it yet
[23:10] well there is evidence that the sjson is understood, but I don't think it is 100% vetted
[23:11] 1.16 should use sjson if it is there
[23:11] does anyone know the status of the juju side of https://bugs.launchpad.net/maas/+bug/1239488 ?
[23:11] <_mup_> Bug #1239488: [SRU] Juju api client cannot distinguish between environments
[23:13] sinzui: the above bug will be fixed with the 1.16 release you are doing now, right?
[23:14] I don't think so wallyworld_. The bug does not claim to affect juju, let alone have a bug targeted to the 1.16.1 release
[23:14] These are the bug fixes we want to close https://launchpad.net/juju-core/+milestone/1.16.1
[23:15] there were juju changes to fix that maas env bug
[23:15] sinzui, i was told there was a corresponding juju change for the multi-tenant usage to actually work, have not been able to find what that bug is, tho
[23:15] bigjools: ^^^^^?
[23:15] you made changes to juju-core right?
[23:16] yes
[23:16] was the branch linked to a bug?
[23:16] * bigjools digs
[23:16] are those released ?
[23:16] not yet i don't think
[23:17] https://bugs.launchpad.net/juju/+bug/1081247
[23:17] <_mup_> Bug #1081247: maas provider releases all nodes it did not allocate [does not play well with others]
[23:17] bigjools, thanks
[23:17] https://bugs.launchpad.net/ubuntu/+source/maas/+bug/1229275
[23:17] <_mup_> Bug #1229275: [maas] juju destroy-environment also destroys nodes that are not controlled by juju
[23:17] also that, but it has the same branches linked
[23:20] sinzui, when is 1.16.1 due?
[23:22] bigjools: that bug is against the wrong project
[23:23] that may be why sinzui didn't see it
[23:23] wallyworld_: ha!
[23:23] to include it for the next 1.16 release
[23:23] wallyworld_: it does have a juju-core task
[23:23] oh, so it does
[23:23] I invalidated pyjuju
[23:24] does this url make you feel better? https://bugs.launchpad.net/juju-core/+bug/1081247
[23:24] <_mup_> Bug #1081247: maas provider releases all nodes it did not allocate [does not play well with others]
[23:24] yes :-)
[23:24] * wallyworld_ relocates, bbiab
[23:24] * bigjools prepares for his arrival
[23:25] * bigjools chuckles, evilly
[23:27] sorry bigjools wallyworld_ I had to jump into a hangout.
[23:27] no worries sinzui
[23:27] I am not rushing the 1.16.1 release. I can wait for a critical bug fix. I just need to know what needs merging