=== racedo` is now known as racedo | ||
rogpeppe | i'm taking another swap day today, BTW | 09:59 |
---|---|---|
mattyw | rogpeppe, I don't remember allowing that | 10:00 |
rogpeppe | mattyw: oh | 10:00 |
rogpeppe | mattyw: in which case i guess i'll just have to kill you | 10:00 |
=== gary_poster is now known as gary_poster|away | ||
=== gary_poster|away is now known as gary_poster | ||
smoser | hey. what's the state of the null/none/manual provider? | 12:58 |
smoser | is that in 1.16 ? | 12:58 |
natefinch | smoser: the manual provider is in 1.16 but it's not really ready for primetime. There's still some bugs around it. | 13:37 |
smoser | natefinch, that's ok. | 13:38 |
smoser | hey.... | 14:52 |
smoser | http://paste.ubuntu.com/6324341/ | 14:52 |
smoser | that seems to imply that "Due to needing newer versions of LXC the local provider does require a newer kernel than the released version of 12.04. Therefore we install Linux 3.8 from the LTS Hardware Enablement Stack" | 14:52 |
smoser | which is declared as fact at https://juju.ubuntu.com/docs/config-local.html | 14:53 |
smoser | is incorrect | 14:53 |
smoser | can we have that doc fixed please ? | 14:53 |
smoser | or someone point me to why that statement exists (hazmat?) | 14:53 |
natefinch | smoser: I don't know the exact details, but we've definitely seen problems with 12.04. Not only the old LXC but also the old MongoDB. Are you sure you're not already running the newer kernel on that machine? | 14:56 |
smoser | natefinch, i'm using the cloud archive | 14:57 |
smoser | which is the solution designed by CTS to make juju supportable on 12.04 | 14:57 |
smoser | ie, that is probably what doc should suggest. | 14:59 |
natefinch | smoser: yeah, it sounds like the doc is just out of date. I haven't really kept up on the exact nuances of the local provider, which is my only hesitation. definitely we should make sure the documentation is accurate and suggests the least painful way to get it running on 12.04 | 15:00 |
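For reference, a hedged sketch of the cloud-archive route smoser is describing for stock 12.04, in case the docs are updated along these lines. The pocket name and package list are assumptions, not taken from the log:

```sh
# Assumed setup: pull newer LXC userspace and juju from the Ubuntu Cloud Archive
# "cloud-tools" pocket instead of installing an HWE kernel.
sudo apt-get install -y ubuntu-cloud-keyring
echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/cloud-tools main" | \
    sudo tee /etc/apt/sources.list.d/cloud-tools.list
sudo apt-get update
sudo apt-get install -y juju-core lxc
```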
hazmat | smoser, not sure why that verbiage appeared. | 15:12 |
hazmat | smoser, specifically around manual provider it works, but you need to manually remove the $JUJU_HOME/environments/$manual_env_name.jenv to destroy/reset it | 15:13 |
hazmat | and it leaves juju stuff on the machines in question.. | 15:14 |
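A minimal sketch of the reset hazmat describes, assuming the default $JUJU_HOME of ~/.juju and a hypothetical environment name:

```sh
# "mymanual" is a hypothetical environment name; substitute your own.
JUJU_HOME="${JUJU_HOME:-$HOME/.juju}"
rm "$JUJU_HOME/environments/mymanual.jenv"   # forget the client-side bootstrap state
# Note: as hazmat says above, juju's agents and data are still left on the
# manually provisioned machines and have to be cleaned up by hand.
```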
hazmat | smoser lp:juju-core/docs | 15:14 |
hazmat | smoser, there are performance issues with older kernels (including 3.8) and container networking that have been documented, not sure it's relevant in this context though | 15:38 |
smoser | hazmat, hallyn was unaware of such issues | 15:40 |
smoser | so links to such documentation might be useful. | 15:40 |
hazmat | smoser, i saw it referenced by the docker community last month re this thread https://groups.google.com/forum/#!msg/docker-user/txAd5BiVapU/AfXvssMqkr4J | 15:41 |
smoser | i think performance of veth is probably not a good enough reason to suggest someone use a HWE kernel that is not necessary otherwise. | 15:42 |
smoser | at least without citing that as the reason | 15:42 |
smoser | HWE kernels have real drawbacks, forcing you to upgrade or lose security updates earlier than 12.04 LTS kernel. | 15:42 |
sinzui | natefinch, do you have time to review two small branches that backport fixes to 1.16.1? https://codereview.appspot.com/18610043/ and https://codereview.appspot.com/18640043/ | 16:04 |
natefinch | sinzui: sure | 16:06 |
natefinch | sinzui: the first one already got a LGTM from jam, so that's fine. I can lgtm the other one. | 16:09 |
* sinzui thinks his review emails are being eaten again | 16:15 | |
hatch | any word on a fix for the lxc/apt issue coming down the pipe? I'd really like to demo the GUI next week on lxc :) | 16:20 |
sinzui | hatch is this the node problem? | 16:25 |
hatch | sinzui: when I try and deploy the gui it fails with some apt-get stuff | 16:26 |
hatch | I was told it was an lxc bug | 16:27 |
hatch | happens on 12.04 and 12.10 with the most recent juju-core | 16:27 |
sinzui | hatch, these are the bugs that I want to close today/tomorrow with a release https://launchpad.net/juju-core/+milestone/1.16.1 | 16:28 |
sinzui | hatch, there aren't any bugs about lxc reported that also speak of apt | 16:29 |
hatch | ahh ok so that doesn't help unfortunately....I don't know what the true cause of it is, it could be lxc related and not juju related | 16:29 |
sinzui | all apt bugs relate to firewalls in fact. | 16:29 |
hatch | hmm ok I'll try and get some more information and file a formal bug so that someone can tell me one way or another if it's juju related or not | 16:30 |
sinzui | I am sure it is a bug and maybe even reported. We want to tag it juju-gui so that I can drive a fix for it | 16:31 |
hatch | sinzui: looks like after doing some updates after the sprint it's working again, so something must have landed to fix it :) | 16:45 |
sinzui | okay hatch. If you do have bugs you care about, tag them with juju-gui so that I can arrange for fixes right away | 16:46 |
hatch | will do thanks! | 16:46 |
hazmat | smoser, re hwe sounds reasonable to me, i don't have any insight into why we doc'd hwe kernels for precise, we're not using any lxc functionality that's only in later kernel releases afaics. the delta might be related to lxc userspace tooling and how a newer version of that interacts with older kernel versions, dunno. | 18:38 |
sodre | hello. I am facing some issues with juju on an OpenStack private cloud. Could someone explain or point me to some documentation on that? | 19:54 |
sodre | I am running Havana deployed on top of MAAS | 19:55 |
sodre | hi Thumper, can you help me ? | 19:57 |
thumper | maybe, what's your problem? | 19:57 |
sodre | I am trying to run Juju on top of a Havana | 19:58 |
sodre | on a private cloud. | 19:58 |
* thumper thinks | 19:58 | |
thumper | is your problem django? | 19:58 |
sodre | I've deployed Havana using Juju/MAAS. But now I am having issues deploying VMs on top of Havana using juju. | 19:58 |
thumper | ISTR that there is an issue with dependencies | 19:58 |
thumper | MAAS needing one version and havana needing another | 19:59 |
sodre | a version of Juju ? | 19:59 |
thumper | no | 19:59 |
thumper | django | 19:59 |
thumper | but when juju deploys a machine | 19:59 |
thumper | it adds the cloud-tools archive | 19:59 |
thumper | which then installs a version of django that havana can't use | 20:00 |
thumper | this is a known bug and something we are going to try to work around | 20:00 |
thumper | but not sure of the fix just yet | 20:00 |
thumper | does this sound like your problem may be related to that? | 20:00 |
sodre | okay. What is the side effect of that django bug? My issue right now is that Juju cannot bootstrap _on top of_ Havana. | 20:01 |
sodre | Havana, seems to behave just fine by itself. | 20:01 |
thumper | I'm not entirely sure what the side effect is, just that it doesn't work | 20:01 |
thumper | this could be it | 20:01 |
thumper | IF it is the same issue | 20:01 |
thumper | what stage does it fail? | 20:02 |
sodre | after I setup my environments.yaml file to point to my Havana install | 20:02 |
sodre | I call juju bootstrap. | 20:02 |
sodre | it uploads all the tools to Swift/RadosGW | 20:02 |
sodre | then the following error comes up: | 20:02 |
thumper | can you see that the machine has started? can you ssh to it? | 20:02 |
sodre | 2013-10-29 19:47:26 ERROR juju supercommand.go:282 cannot start bootstrap instance: index file has no data for cloud {RegionOne http://m1basic-05.vm.draco.metal.as:5000/v2.0} not found | 20:03 |
thumper | ok... that sounds like something else | 20:03 |
thumper | maybe | 20:03 |
sodre | I think so. | 20:03 |
thumper | so... perhaps this is simple stream related | 20:03 |
sodre | how do people usually load up their images into a Private OpenStack install for using with Juju? | 20:04 |
thumper | we now use simple streams to find the instances | 20:04 |
sodre | exactly... | 20:04 |
thumper | I don't know | 20:04 |
thumper | wallyworld may be able to help when he starts in a few hours | 20:04 |
thumper | as he did a lot of the simplestream stuff | 20:04 |
sodre | okay. | 20:04 |
thumper | sodre: which TZ are you in? | 20:04 |
sodre | I'm on EST | 20:04 |
thumper | so already 5pm? | 20:05 |
* thumper looks at the date applet | 20:05 | |
sodre | 4:05 | 20:05 |
thumper | oh, no falling back just yet | 20:05 |
* thumper thinks | 20:06 | |
thumper | gah, TZ confused again | 20:06 |
thumper | anyway... | 20:06 |
sodre | :) | 20:06 |
thumper | wallyworld is in brisbane, AU | 20:06 |
sodre | okay... | 20:06 |
thumper | and not likely to start for a few hours | 20:06 |
thumper | or stry smoser | 20:06 |
thumper | s/stry/try/ | 20:06 |
sodre | Let me pm him. | 20:06 |
thumper | sodre: he is in this channel | 20:07 |
thumper | smoser: <sodre> how do people usually load up their images into a Private OpenStack install for using with Juju? | 20:07 |
thumper | smoser: any idea? | 20:07 |
sodre | ahh.. got it | 20:08 |
sodre | I haven't used IRC in such a long time. | 20:08 |
thumper | I understand, before working at canonical I hadn't used irc at all | 20:09 |
thumper | ok, not quite at all, but very briefly | 20:10 |
sodre | Do most people at canonical work from home ? | 20:11 |
smoser | ah. sodre yes most people work from home. | 20:12 |
smoser | the suggested way to load images into glance is at https://code.launchpad.net/~smoser/simplestreams/example-sync | 20:12 |
sodre | that is nice. I guess there is always development going on. | 20:13 |
sodre | Alright, let me clone that branch. | 20:13 |
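For reference, a hedged sketch of grabbing the branch smoser points to; the entry point inside it is not shown in the log, so follow the branch's own instructions once checked out:

```sh
# Fetch the example-sync branch smoser referenced and set up credentials.
sudo apt-get install -y bzr
bzr branch lp:~smoser/simplestreams/example-sync
cd example-sync
source ~/admin-openrc.sh   # OpenStack credentials, as sodre does later in the log
# The actual sync invocation lives in the branch; see its scripts/README.
```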
thumper | smoser: thanks | 20:22 |
sodre | thanks thumper and smoser... I am still trying to get it to work... I'll be back soon. | 20:26 |
thumper | sodre: good luck | 20:26 |
sodre | smoser: are you still available ? | 21:09 |
smoser | not really | 21:10 |
smoser | quick.... | 21:10 |
sodre | for some reason the PUT requests are going into /swift/simplestreams instead of /swift/v1/simplestreams | 21:10 |
sodre | any idea why that would be ? | 21:10 |
sodre | Also, running swift from the command-line places things in the correct place. | 21:11 |
smoser | unm... | 21:13 |
smoser | i'm not sure. | 21:13 |
smoser | that code there *is* used in production | 21:14 |
smoser | so it is known working | 21:14 |
sodre | okay. I am using radosgw, maybe it should default to v1 | 21:14 |
sodre | I'll give the code a quick read. Maybe I can figure out why it is not placing the v1 in the PUT. | 21:16 |
bigjools | wallyworld_: just for you: http://www.git-tower.com/blog/git-cheat-sheet/ | 21:37 |
wallyworld_ | :-( | 21:38 |
thumper | troll! | 21:38 |
bigjools | like a boss | 21:39 |
bigjools | I was using git yesterday as one of my home projects uses it | 21:39 |
bigjools | hadn't been in for a while and needed to pull a particular branch to compile | 21:40 |
bigjools | took me 45 minutes to work out how | 21:40 |
bigjools | winning! | 21:40 |
bigjools | who knew you had to pull *everything* and then "checkout" | 21:40 |
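A small sketch of the workflow bigjools is grumbling about, with hypothetical repository and branch names:

```sh
git clone https://example.com/project.git   # hypothetical repository
cd project
git fetch origin                            # fetch *everything* the remote has
git checkout some-feature-branch            # then check out the branch you actually wanted
```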
bigjools | wallyworld_: I am low on coffee. If you could stop by the local bean on the way over that'd be awesome | 21:41 |
wallyworld_ | it will cost you | 21:41 |
* bigjools bends over | 21:41 | |
wallyworld_ | it's on the wrong side of the road so it will cost you a lot | 21:42 |
* thumper covers his eyes | 21:42 | |
thumper | there are children in this channel | 21:42 |
* wallyworld_ unzips | 21:42 |
* bigjools goatses | 21:42 | |
thumper | WORK CHANNEL!! | 21:42 |
bigjools | thumper: there sure are! | 21:42 |
* thumper cracks a whip | 21:42 | |
sodre | wallyworld_: do you work on simplestreams as well ? | 21:49 |
wallyworld_ | the juju side of things, yes | 21:50 |
sodre | I have a question :) | 21:50 |
wallyworld_ | sure | 21:50 |
sodre | I am trying to use sstream-mirror-glance, but the URLs it gives to the swift client lack the /v1 part. | 21:51 |
sodre | e.g. when I run it, I get the following error | 21:52 |
sodre | Container PUT failed: http://m1basic-04.vm.draco.metal.as:80/swift/simplestreams 405 Method Not Allowed | 21:52 |
sodre | However, if I run ' swift post simplestreams ' | 21:53 |
sodre | everything goes fine. | 21:53 |
sodre | I have logs from the Swift back-end as well, if you would like to see them.. +3lines. | 21:54 |
wallyworld_ | sodre: i have no knowledge of the lp:simplestreams project and associated tools. i just do the juju-core lib | 21:54 |
wallyworld_ | which is a separate codebase written in Go | 21:54 |
sodre | got it | 21:54 |
wallyworld_ | sorry :-( | 21:55 |
sodre | np | 21:55 |
wallyworld_ | but it does seem perhaps there is a bug there | 21:55 |
sodre | yeah, in the Swift log, it shows that the sstream-mirror-glance code did a PUT /swift/simplestreams | 21:56 |
sodre | while the swift CLI, did a POST followed by a PUT /swift/v1 | 21:56 |
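A hedged way to confirm which container path this backend accepts; the host is the one from sodre's error, and the token handling is illustrative only:

```sh
TOKEN=...   # an OpenStack auth token, e.g. from "keystone token-get"
# Path radosgw serves the Swift API under (the one the swift CLI uses):
curl -i -X PUT -H "X-Auth-Token: $TOKEN" \
    http://m1basic-04.vm.draco.metal.as:80/swift/v1/simplestreams
# Path sstream-mirror-glance was generating, which 405s on this backend:
curl -i -X PUT -H "X-Auth-Token: $TOKEN" \
    http://m1basic-04.vm.draco.metal.as:80/swift/simplestreams
```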
sodre | at least I found out why... the code in simplestreams/openstack.py strips the version out :) | 22:06 |
sodre | now I wonder why they are doing that. | 22:07 |
wallyworld_ | good question :-) | 22:08 |
sodre | wallyworld_ : The simplestreams images are getting loaded into glance after I commented out the "strip_versions" call. | 22:28 |
wallyworld_ | \o/ | 22:28 |
sodre | now I have another question which you might be able to help. | 22:28 |
sodre | I am trying to deploy juju on top of Havana. | 22:29 |
sodre | Havana itself seems to be working okay, i.e. I can launch CirrOS | 22:29 |
sodre | create Networks, assign floating ips, etc... | 22:29 |
sodre | let me clean up my bootstrap version | 22:31 |
sodre | so I have sourced my admin-openrc.sh | 22:34 |
sodre | I ran juju bootstrap | 22:34 |
sodre | and then at the end I get the following error. | 22:34 |
sodre | ERROR juju supercommand.go:282 cannot start bootstrap instance: index file has no data for cloud {RegionOne http://m1basic-05.vm.draco.metal.as:5000/v2.0} | 22:34 |
sodre | I imagine that happens because it did a get for /swift/v1/admin-juju/streams/v1/index.json | 22:35 |
sodre | and it failed. | 22:35 |
sodre | how do I tell Juju to find /streams/v1/index.json in a different container ? | 22:37 |
wallyworld_ | sodre: you set the tools-url config option if you want to get simplestreams metadata from a non-default container | 22:49 |
sodre | okay... let me try that. | 22:50 |
wallyworld_ | or, ensure the index file you are using has metadata for RegionOne and your endpoint | 22:50 |
wallyworld_ | sodre: actually | 22:50 |
wallyworld_ | not tools-url | 22:50 |
wallyworld_ | you want image-metadata-url for images | 22:51 |
wallyworld_ | i think that's what it can't find | 22:51 |
wallyworld_ | if it complains about tools, you need tools-url | 22:51 |
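A hedged example of where those keys sit in a 1.16-era ~/.juju/environments.yaml stanza; the environment name and container URLs are hypothetical, and only the two keys wallyworld_ mentions are shown:

```sh
# Illustrative fragment (not from the log): add the keys inside the openstack
# environment stanza in ~/.juju/environments.yaml, e.g.
#
#   my-havana:
#     type: openstack
#     image-metadata-url: http://m1basic-04.vm.draco.metal.as:80/swift/v1/simplestreams
#     tools-url: http://m1basic-04.vm.draco.metal.as:80/swift/v1/juju-tools
#
juju bootstrap -e my-havana --debug   # then re-run bootstrap against that environment
```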
thumper | Time to go and hit something, more emails after lunch | 22:52 |
* thumper -> gym | 22:52 | |
sodre | okay. | 22:52 |
sodre | humm... not yet... | 22:56 |
sodre | I'll play with it once I get home. | 22:57 |
sodre | ttyl | 22:57 |
sinzui | wallyworld_, do you understand what went wrong in this azure bootstrap? I see evidence that abentley's CI testing has placed current tools in juju-dist/testing/tools, but I don't see how our environments could collide and break: http://pastebin.ubuntu.com/6326844/ | 22:59 |
* wallyworld_ looks | 23:00 | |
wallyworld_ | sinzui: i've not seen that error before. it looks like it is failing to load image metadata from cloud-images.canonical.com | 23:04 |
* sinzui ponders setting image-metadata-url: http://cloud-images.ubuntu.com/releases | 23:04 | |
wallyworld_ | that may work, let me check the code | 23:04 |
sinzui | wallyworld_, okay. I think so too. I am not testing changes yet :) This was a stable deploy for an upgrade test | 23:04 |
wallyworld_ | sinzui: btw, trunk has all the changes except for the --public option with sync tools (but you can create the json file for that by hand in the interim). i hope to land today in any case | 23:06 |
sinzui | okay | 23:06 |
sinzui | wallyworld_, I was starting the blessing/cursing of 1.16 r1977. Once the stable is updated, I was going to return to automate releases | 23:07 |
wallyworld_ | rightio | 23:07 |
sinzui | damn. image-metadata-url did change anything | 23:07 |
wallyworld_ | did not? | 23:08 |
sinzui | hmm, maybe the sjson is the issue | 23:09 |
sinzui | 1.16 shouldn't be looking at it yet | 23:09 |
sinzui | well there is evidence that the sjson is understood, but I don't think it is 100% vetted | 23:10 |
wallyworld_ | 1.16 should use sjson if it is there | 23:11 |
adam_g | does anyone know the status of the juju side of https://bugs.launchpad.net/maas/+bug/1239488 ? | 23:11 |
_mup_ | Bug #1239488: [SRU] Juju api client cannot distinguish between environments <verification-done> <MAAS:Fix Released by julian-edwards> <maas (Ubuntu):Triaged> <maas (Ubuntu Saucy):Fix Committed> <https://launchpad.net/bugs/1239488> | 23:11 |
wallyworld_ | sinzui: the above bug will be fixed with the 1.16 release you are doing now, right? | 23:13 |
sinzui | I don't think so wallyworld_. The bug does not claim to affect juju, let alone have a task targeted to the 1.16.1 release | 23:14 |
sinzui | These are the bug fixes we want to close https://launchpad.net/juju-core/+milestone/1.16.1 | 23:14 |
wallyworld_ | there were juju changes to fix that maas env bug | 23:15 |
adam_g | sinzui, i was told there was a corresponding juju change for the multi-tenant usage to actually work, have not been able to find what that bug is, tho | 23:15 |
wallyworld_ | bigjools: ^^^^^? | 23:15 |
wallyworld_ | you made changes to juju-core right? | 23:15 |
bigjools | yes | 23:16 |
wallyworld_ | was the branch linked to a bug? | 23:16 |
* bigjools digs | 23:16 | |
adam_g | are those released ? | 23:16 |
wallyworld_ | not yet i don't think | 23:16 |
bigjools | https://bugs.launchpad.net/juju/+bug/1081247 | 23:17 |
_mup_ | Bug #1081247: maas provider releases all nodes it did not allocate [does not play well with others] <maas-provider> <pyjuju:Triaged> <juju-core:Fix Committed by julian-edwards> <juju-core 1.16:Fix Committed by rogpeppe> <MAAS:Invalid> <https://launchpad.net/bugs/1081247> | 23:17 |
adam_g | bigjools, thanks | 23:17 |
bigjools | https://bugs.launchpad.net/ubuntu/+source/maas/+bug/1229275 | 23:17 |
_mup_ | Bug #1229275: [maas] juju destroy-environment also destroys nodes that are not controlled by juju <maas-provider> <theme-oil> <pyjuju:Triaged> <juju-core:Fix Committed by thumper> <juju-core 1.16:Fix Committed by rogpeppe> <juju-core (Ubuntu):Triaged> <maas (Ubuntu):Triaged> <juju-core (Ubuntu Saucy):Triaged> <maas (Ubuntu Saucy):Triaged> <https://launchpad.net/bugs/1229275> | 23:17 |
bigjools | also that, but it has the same branches linked | 23:17 |
adam_g | sinzui, when is 1.16.1 due? | 23:20 |
wallyworld_ | bigjools: that bug is against the wrong project | 23:22 |
wallyworld_ | that may be why sinzui didn't see it | 23:23 |
bigjools | wallyworld_: ha! | 23:23 |
wallyworld_ | to include it for the next 1.16 release | 23:23 |
bigjools | wallyworld_: it does have a juju-core task | 23:23 |
wallyworld_ | oh, so it does | 23:23 |
bigjools | I invalidated pyjuju | 23:23 |
bigjools | does this url make you feel better? https://bugs.launchpad.net/juju-core/+bug/1081247 | 23:24 |
_mup_ | Bug #1081247: maas provider releases all nodes it did not allocate [does not play well with others] <maas-provider> <pyjuju:Invalid> <juju-core:Fix Committed by julian-edwards> <juju-core 1.16:Fix Committed by rogpeppe> <MAAS:Invalid> <https://launchpad.net/bugs/1081247> | 23:24 |
wallyworld_ | yes :-) | 23:24 |
* wallyworld_ relocates, bbiab | 23:24 | |
* bigjools prepares for his arrival | 23:24 | |
* bigjools chuckles, evilly | 23:25 | |
sinzui | sorry bigjools wallyworld_ I had to jump into a hangout. | 23:27 |
bigjools | no worries sinzui | 23:27 |
sinzui | I am not rushing the 1.16.1 release. I can wait for a critical bug fix. I just need to know what needs merging | 23:27 |