mwhudson | hello | 00:11 |
---|---|---|
mwhudson | is it possible to compile juju with gccgo? | 00:11 |
mwhudson | asking because it would be cool to run juju on (simulated) arm64 nodes | 00:11 |
davecheney | mwhudson: maybe, i've never tried | 00:15 |
davecheney | i'd be interested in hearing your results | 00:15 |
davecheney | a few | 00:15 |
davecheney | things | 00:15 |
mwhudson | i guess i only need to compile the 'tools' with gccgo? | 00:15 |
davecheney | 1. i thought that arm32 could run on arm64 | 00:15 |
davecheney | mwhudson: oh, that is going to make it it a lot harder | 00:16 |
mwhudson | harder? | 00:16 |
mwhudson | i don't need to do that i guess | 00:16 |
davecheney | i think we hard code the gc toolchain | 00:16 |
davecheney | 2. does gccgo support arm64 ? | 00:16 |
mwhudson | not all arm64 implementations can run arm32, though indeed most can | 00:16 |
davecheney | mwhudson: can the implementation you are planning on deploying to run arm32 bins | 00:16 |
davecheney | ? | 00:16 |
davecheney | oh | 00:16 |
mwhudson | davecheney: i heard rumours that it does but i don't know | 00:16 |
mwhudson | um, i don't know :) | 00:17 |
davecheney | and if lsb_release -a returns aarch64 or something then there will be more problems | 00:17 |
mwhudson | currently just targeting the arm foundation model | 00:17 |
davecheney | mwhudson: you're brave | 00:17 |
davecheney | i don't have 5 years to spend waiting for it to compile | 00:17 |
mwhudson | hm | 00:17 |
mwhudson | darn it | 00:17 |
mwhudson | my build of aarch64-unknown-linux-gnu-gccgo just failed :( | 00:18 |
davecheney | mwhudson: i'm sure i can knock up a branch or the release tarball that will *think* it is aarch64 when it's just arm32 | 00:18 |
davecheney | mwhudson: that doesn't surprise me | 00:18 |
mwhudson | make[4]: *** No rule to make target `../libatomic/libatomic_convenience.la', needed by `libgo.la'. Stop. | 00:18 |
davecheney | no offense to the fast model developers | 00:18 |
mwhudson | uh hah, that looks architecture dependent | 00:18 |
davecheney | but unless you are paid to use the fast model | 00:19 |
davecheney | it's not worth your time | 00:19 |
mwhudson | davecheney: i'm not going to _build_ anything on the fast model | 00:19 |
mwhudson | i'm not that daft | 00:19 |
davecheney | :) | 00:19 |
davecheney | mwhudson: i'm pretty sure I can make a version of 1.12 that will bootstrap on arm64 | 00:20 |
davecheney | with a little bit of hacking | 00:20 |
davecheney | you'll have to use juju bootstrap --upload-tools | 00:20 |
kurt_ | is there a "condensed" version of juju status? I'm fond of using watch in conjunction with status and my newer services are all pushing off the bottom of the terminal. I guess I could do something fancy with perl or sed/awk. But I'm lazy. | 00:20 |
mwhudson | right | 00:20 |
davecheney | kurt_: juju status --format=json | something that understands json ? | 00:20 |
davecheney | kurt_: you can also do | 00:20 |
mwhudson | i guess there is another issue looming, as i don't suppose juju has a provider that uses the foundation model | 00:20 |
davecheney | juju status {service/unit} | 00:20 |
kurt_ | ah ok, thanks | 00:21 |
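As a concrete illustration of davecheney's suggestions, here is a minimal sketch. The use of jq is an assumption (any JSON-aware tool will do), and the `services -> units -> agent-state` field names are taken from 1.x status output, so they may need adjusting for other versions; `mysql` is just a placeholder service name.

```sh
# condense juju status to one line per unit -- assumes jq is installed and
# the 1.x JSON layout (services -> units -> agent-state)
juju status --format=json \
  | jq -r '.services[].units // {} | to_entries[] | "\(.key): " + (.value["agent-state"] // "unknown")'

# or narrow the output to a single service/unit and watch that instead
watch -n 10 juju status mysql
```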
davecheney | mwhudson: no, i don't think the foundation model is considered a cloud | 00:21 |
* davecheney shudders | 00:22 | |
davecheney | i said coud | 00:22 |
davecheney | cloud | 00:22 |
mwhudson | ok, seems gccgo requires libatomic and libatomic isn't there on aarch64 yet | 00:24 |
davecheney | mwhudson: also libgo contains a port of the go standard library from the Go project | 00:27 |
davecheney | and the Go standard lib doesn't have support for aarch64 atomics and things | 00:28 |
davecheney | so I'd say, unless it's been confirmed to work | 00:28 |
mwhudson | ah ok | 00:28 |
davecheney | gccgo won't work on aarch64 | 00:28 |
mwhudson | so this is all a bit of a stretch | 00:28 |
mwhudson | that's ok :) | 00:28 |
davecheney | mwhudson: if you want to deploy juju workloads on aarch64 | 00:43 |
davecheney | i recommend using the 32bit tools | 00:43 |
=== hloeung is now known as Guest71555 | ||
davecheney | of all the things that could go wrong, that would be the least of them | 00:43 |
=== Guest71555 is now known as hloeung | ||
weblife | m_3: finished those changes ( http://pastebin.com/UnFsJBNT ). I need a little help with my bash (I never use python), when I reach the end of my for statement I want it to install the PPA. I thought $SHA = $shaCheck[::-1] would work but it doesn't. I should enclose the entire statement in a null check in case the config is left blank; I'll do that later. | 03:09 |
weblife | this question goes to any other python pros | 03:09 |
sarnold | weblife: I would prefer to use the --status option to the sha256sum or shasum program and not try to check the text value of the output | 03:19 |
sarnold | weblife: is there any way you can use gpg to verify the hashes? | 03:21 |
weblife | sarnold: maybe this is my first experience of using hashes. m_3 asked if I could make it do upstream verification, so gave it a try. What do you mean --status? | 03:31 |
weblife | juju status? | 03:31 |
sarnold | weblife: sha256sum --status | 03:31 |
sarnold | weblife: that hides the output and lets you use the exit status of the sha256sum program to tell success/failure.. | 03:32 |
weblife | if I recall from the help there is an algorithm option I saw to do something like that, but I would need to find that document again. Have no clue if it's possible though. Let me look. | 03:34 |
sarnold | weblife: "man shasum" or "man sha256sum" :) | 03:35 |
sarnold | (I haven't actually checked which sums are in the node upstream, I just sort of assume 256 by now..) | 03:35 |
weblife | http://nodejs.org/dist/v0.10.16/SHASUMS.txt | 03:36 |
sarnold | aha, sha1sum | 03:36 |
sarnold | aha! | 03:37 |
sarnold | http://nodejs.org/dist/v0.10.16/SHASUMS256.txt | 03:37 |
weblife | but if it doesn't exist it gives you only a return | 03:37 |
sarnold | and better still, there's a signed version :) http://nodejs.org/dist/v0.10.16/SHASUMS256.txt.asc | 03:38 |
sarnold | weblife: you can probably assume sha256sum will be provided in the future, if not the past. :) | 03:38 |
weblife | sarnold: I can only try. Which one SHASUMS256.txt.asc or SHASUMS256.txt.gpg ? | 03:41 |
sarnold | weblife: I'd use .asc | 03:41 |
weblife | sarnold: back to researching this, you're making this fun :x | 03:42 |
sarnold | weblife: woo! :) | 03:43 |
sarnold | weblife: I don't know how familiar you are with gpg, gpg --recv-key 6C481CF6 ; gpg SHASUMS256.txt.asc | 03:44 |
weblife | sarnold: I'm not familiar with hashes at all :x Never worried about it. | 03:46 |
sarnold | weblife :) | 03:47 |
weblife | sarnold: It looks like I will still run into the same problem above if there is no matching file. | 03:59 |
weblife | I know nothing about hashes except for the basics and there is a lot to learn, gonna leave it to the experts since I will probably never use these. Submitting as-is as soon as I figure this last check out. | 04:06 |
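For reference, a hedged sketch of the verification flow sarnold outlines above — the key ID is the one he gave, the file names follow the nodejs.org dist listing linked earlier, and the tarball name is a placeholder for whichever release you download:

```sh
# placeholder tarball name for illustration
NODE_TARBALL=node-v0.10.16.tar.gz

# 1. check that the checksum list itself carries a good signature
gpg --recv-key 6C481CF6
gpg --verify SHASUMS256.txt.asc || exit 1

# 2. check the tarball against the signed list; --status suppresses the text
#    output so the exit code alone reports success or failure
grep " ${NODE_TARBALL}\$" SHASUMS256.txt.asc | sha256sum -c --status || exit 1
```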
=== tasdomas_afk is now known as tasdomas | ||
kurt_ | Hi - anyone on here know about deploying cinder on MAAS from juju-gui? | 05:57 |
marcoceppi | kurt_: I think we typically use ceph | 06:24 |
kurt_ | marcoceppi: jcastro passed this on to me today: http://pastebin.ubuntu.com/6003753/plain/ | 06:25 |
kurt_ | the template you gave me uses cinder too | 06:25 |
kurt_ | ie. http://paste.ubuntu.com/6001802/ | 06:25 |
kurt_ | marcoceppi: the template shows setting the block device to "None" and that seemed to work. LOL, I'm not understanding how it's allocating blocks if that's set to none. | 06:27 |
marcoceppi | you'd have to read the charm | 06:27 |
* marcoceppi checks | 06:27 | |
marcoceppi | are you talking about the charm itself? I don't see that in the paste | 06:28 |
kurt_ | I'm looking at the template you kindly passed on yesterday | 06:29 |
kurt_ | I think it's your environments.yaml | 06:30 |
kurt_ | cinder is definitely included, and the block-device string is set to None | 06:31 |
kurt_ | "block-device": None | 06:31 |
marcoceppi | kurt_: I copied those settings from the following wiki | 06:32 |
=== defunctzombie_zz is now known as defunctzombie | ||
marcoceppi | https://wiki.ubuntu.com/ServerTeam/OpenStackHA | 06:33 |
kurt_ | i c | 06:34 |
kurt_ | so its really either cinder or ceph, right? | 06:37 |
kurt_ | yawns…is tired | 06:38 |
=== rogpeppe1 is now known as rogpeppe | ||
stub | Is it on the roadmap for the local provider to use ephemeral lxc containers? | 09:55 |
=== defunctzombie is now known as defunctzombie_zz | ||
=== natefinch is now known as natefinch-afk | ||
=== robbiew1 is now known as robbiew | ||
varud | Hello, I hate to be the guy that jumps in with a stupid question ... but I can't figure out what the point of the juju gui is? Can I export from there to some sort of yaml file for deployment to MAAS or AWS? | 13:54 |
jamespage | guh - anyone know which version of maas works with juju-core 1.12? | 13:55 |
jamespage | mgz, jam: ^^ | 13:55 |
jcastro | varud: yes. You can export deployments from the gui itself and then import them into other environments | 14:00 |
varud | I just had an epiphany, the demo site is a demo site | 14:00 |
jcastro | hazmat: heya, do you have docs somewhere on deployer? | 14:00 |
varud | and I need to deploy juju-gui as a charm to my environment | 14:00 |
jcastro | right | 14:00 |
varud | While that's obvious now, it's extremely cryptic to somebody new to juju | 14:01 |
varud | had me scratching my head for a while | 14:01 |
hazmat | jcastro, there's some in source / sphinx docs directory. i was going to publish to read the docs site | 14:01 |
jcastro | any ideas on how we can make that more obvious? | 14:01 |
varud | https://jujucharms.com/ | 14:02 |
varud | Instead of saying 'Environment on Demonstration' | 14:02 |
varud | Say 'Demo Mode' with a link to https://juju.ubuntu.com/resources/the-juju-gui/ | 14:02 |
mgz | jamespage: er, the current version of maas should, no? | 14:02 |
jamespage | mgz, in precise? | 14:02 |
jcastro | varud: that's a good idea, I'll file the bug now | 14:02 |
jamespage | #bang | 14:02 |
jamespage | nope | 14:02 |
rick_h | varud: we're working through some demo/walkthrough material as recent feedback was just along your line there. | 14:02 |
varud | On a side note, while I've got your ear | 14:02 |
jcastro | rick_h: oh nice, is that tracked anywhere? | 14:03 |
varud | I'm trying to deploy OpenStack on one machine (I know, that's crazy) | 14:03 |
jcastro | varud: I am always all ears! | 14:03 |
varud | and would like to use MaaS | 14:03 |
rick_h | jcastro: not yet, came out of IoM feedback and the UX'y people are thinking/working on how to present it best | 14:03 |
mgz | precise almost certainly needs a newer thing, I'm not sure what the sru/backport status is | 14:03 |
varud | But doesn't there need to be a management node | 14:03 |
rick_h | hazmat: let me know if you get it working, I was trying to get the charmworld api docs up on there and hit https://github.com/rtfd/readthedocs.org/issues/435#issuecomment-22929015 | 14:04 |
jamespage | mgz, there is a plan | 14:04 |
varud | so in that case, I'll need to manually set up MaaS on a handcrafted VM on the machine | 14:04 |
varud | and then deploy to the rest of the machine | 14:04 |
varud | or am I missing something | 14:04 |
jcastro | I think this is the virtual MAAS use case | 14:04 |
jcastro | jamespage: right? | 14:04 |
jcastro | being able to just do it all on one machine. | 14:04 |
hazmat | jcastro, rick_h, actually it wasn't read the docs, i was doing the pypi doc setup directly with sphinx.. a la http://pythonhosted.org/an_example_pypi_project/buildanduploadsphinx.html | 14:05 |
rick_h | hazmat: ah, never mind then | 14:05 |
jamespage | jcastro, varud: hmm - that would work | 14:05 |
jamespage | varud, how big is your machine? | 14:05 |
hazmat | jcastro, http://pythonhosted.org/juju-deployer/ | 14:05 |
varud | 12 cores | 14:05 |
jcastro | hazmat: I don't care where they are or in what format, I have an idea for a bundle and I'd like to play with deployer | 14:05 |
varud | 128GB RAM | 14:05 |
jcastro | hazmat: perfect! | 14:05 |
jcastro | hazmat: so here's my idea. We take an openstack deployment and just use it as a bundle | 14:06 |
jcastro | and then we tell people like varud, here is your openstack bundle, just deployer it. | 14:06 |
hazmat | jcastro, sure there are a few examples of that, and its how we do openstack testing. | 14:06 |
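For anyone following along, a minimal hedged sketch of what "just deployer it" means in practice — the two-service bundle below is purely illustrative (not the real openstack stack), and the exact YAML schema and flags are documented at the pythonhosted link hazmat posted:

```sh
# write a tiny illustrative bundle; service names and charms here are examples
cat > bundle.yaml <<'EOF'
my-stack:
  series: precise
  services:
    wordpress:
      charm: cs:precise/wordpress
      num_units: 1
    mysql:
      charm: cs:precise/mysql
      num_units: 1
  relations:
    - [wordpress, mysql]
EOF

# deploy the whole stack into the current juju environment
juju-deployer -c bundle.yaml my-stack
```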
jcastro | right, so what I think we need to do is basically put the bundles somewhere | 14:07 |
varud | One more thing as a new juju person ... can you guys consider not using wordpress everywhere in your docs :-/ | 14:07 |
hazmat | jcastro, charmworld and gui support for bundles is coming | 14:07 |
jcastro | so people can just fire it up instead of going through all the manual openstack steps | 14:07 |
jamespage | varud, should work OK | 14:07 |
jcastro | varud: I totally agree with that and we'll be updating that. | 14:07 |
rick_h | jcastro: hazmat very soon | 14:07 |
hazmat | jcastro, at which point openstack could just be deployed from the gui against a maas provider | 14:07 |
varud | it makes it seem like juju is old ... which it's not | 14:07 |
jcastro | people think all we do is wordpress, lol | 14:07 |
jamespage | varud, lemme dig out the charm - you can run it standalone... | 14:07 |
varud | put something sexier there | 14:07 |
jcastro | varud: what would you put in there? | 14:08 |
jcastro | rick_h: when you say very soon, how soon? | 14:08 |
varud | maybe a bitcoin miner :-) I doubt people would like that though | 14:08 |
varud | let me think | 14:08 |
rick_h | like week-soon I believe. That's in charmworld. Then gui needs to show it. | 14:08 |
rick_h | jcastro: ^ | 14:08 |
jcastro | rick_h: dude, that is awesome. | 14:08 |
jcastro | rick_h: so, are you telling me that in a week people will be able to share and deploy bundles from the gui? | 14:09 |
rick_h | jcastro: so end of week the plan is to support pulling bundles into the backend and start work on the juju gui front end. Next week finish bundle work in the gui. | 14:09 |
rick_h | jcastro: so end of month...worky worky | 14:10 |
jamespage | varud, https://code.launchpad.net/~virtual-maasers/charms/precise/virtual-maas/trunk | 14:10 |
jamespage | varud, I've not tried it standalone | 14:10 |
jcastro | rick_h: so I think it'd be awesome to do an openstack bundle | 14:11 |
rick_h | jcastro: definitely | 14:11 |
varud | thanks jamespage | 14:11 |
jcastro | with or without vmass | 14:11 |
jcastro | depending on what jamespage says | 14:11 |
rick_h | jcastro: yea, basically an addendum to the video Mat did deploying openstack via the gui | 14:11 |
jcastro | rick_h: actually, openstack will probably be the hairiest bundle anyway, might as well use it as the testcase | 14:11 |
jamespage | jcastro, I agree | 14:11 |
varud | yes, since that video is why I'm here actually | 14:11 |
jcastro | rick_h: awesome, so TLDR, you guys were already working on this. | 14:11 |
varud | outreach works | 14:11 |
rick_h | jcastro: like "Here's the how to, now let's take the shortcut and use the bundle" | 14:12 |
jcastro | kurt_ could have really used the bundle too | 14:12 |
rick_h | jcastro: not creating the bundle specifically, but on the radar as a use case pushing the work | 14:12 |
* jcastro nods | 14:12 | |
varud | In case my use case is relevant (I think it's not atypical actually) | 14:12 |
jcastro | now my next question ... do we have a working bundle somewhere people can use with deployer in the meantime? | 14:12 |
jcastro | your use case is very relevant! | 14:12 |
varud | I want to deploy all the openstack services on one machine isolated with LXC | 14:13 |
varud | and then allocate nova-compute with KVM hypervisor | 14:13 |
varud | using MaaS | 14:13 |
varud | as a proof of concept but one that's capable of handling real traffic | 14:13 |
* jcastro nods | 14:13 | |
varud | I think that scenario is a great way to dive into juju | 14:13 |
rick_h | varud: yep, and we're working on getting there bit by bit | 14:14 |
varud | let me know if I can help | 14:14 |
varud | since I'm basically focused on that very thing | 14:14 |
rick_h | thanks varud | 14:15 |
varud | As for a replacement for wordpress in docs/demos, I'll think of something tonight ... hopefully Python based :-) | 14:18 |
varud | Later and thanks for the help | 14:18 |
jcastro | discourse maybe? | 14:18 |
jcastro | the one thing that we got feedback on is one time we did a talk on juju and demo'ed hadoop | 14:19 |
jcastro | and a bunch of people didn't grok that | 14:19 |
jcastro | maybe we should have a simple demo + an advanced demo | 14:19 |
varud | that looks great | 14:19 |
varud | I was thinking of pelican and stuff like that but it doesn't show much since you don't need multiple services for a static site generator | 14:19 |
varud | hadoop is certainly a use case but you want something that you can see | 14:20 |
varud | owncloud might work too | 14:20 |
jrwren | is there a discourse charm? | 14:20 |
jcastro | yeah, it's not in the store yet though, it's in ~marcoceppi | 14:21 |
varud | or etherpad | 14:21 |
varud | gotta go, but I'll be back on the list now that I know people are here | 14:21 |
jcastro | jrwren: https://jujucharms.com/~marcoceppi/precise/discourse-HEAD | 14:22 |
varud | I'm in Nairobi so UTC+3 is my day | 14:22 |
rick_h | jrwren: doh, jcastro beat me to it | 14:22 |
varud | I like discourse though, great idea | 14:22 |
jrwren | sweet | 14:22 |
varud | later | 14:22 |
jrwren | lol @ tcmalloc. it must help ruby | 14:23 |
jcastro | m_3: ping me when you're around | 14:26 |
=== tasdomas is now known as tasdomas_afk | ||
=== varud is now known as varud_away | ||
=== BradCrittenden is now known as bac | ||
kurt_ | jcastro: good morning. Something I'm wondering. The guides all have cinder for block storage, but I'm hearing ceph is the new direction. Any comments on that? | 14:51 |
jcastro | ceph is the hotness | 14:52 |
jcastro | jamespage: I am assuming we put cinder in the guides because that's the official openstack thing? | 14:53 |
jamespage | kurt_, yes | 14:53 |
jamespage | kurt_, ceph is implemented as a cinder backend for block storage | 14:54 |
jamespage | kurt_, so the API is still cinder; its just backed by ceph rather than local disk + iscsi | 14:54 |
jamespage | kurt_, and ceph freaking rocks! | 14:54 |
jamespage | highly scalable, highly available storage - sweet! | 14:55 |
kurt_ | jamespage: are there any deployment guides you can share? I've seen this: http://ceph.com/dev-notes/deploying-ceph-with-juju/ | 14:55 |
* jamespage has to declare that he also maintains Ceph in Ubuntu.... | 14:55 | |
jamespage | kurt_, are you using juju? | 14:55 |
jamespage | that might be a stupid question bearing in mind which channel we are in | 14:55 |
kurt_ | jamespage: yup | 14:55 |
kurt_ | lol | 14:55 |
=== natefinch-afk is now known as natefinch | ||
jamespage | kurt_, OK - so first of all deploy ceph using charms | 14:56 |
jamespage | and then juju add-relation cinder ceph | 14:56 |
jamespage | and juju add-relation nova-compute ceph | 14:56 |
jamespage | and Bob's your uncle | 14:56 |
kurt_ | lol | 14:57 |
jamespage | kurt_, this is still current - http://javacruft.wordpress.com/2012/10/17/wrestling-the-cephalopod/ | 14:57 |
kurt_ | does it matter the order of deployment? | 14:57 |
jamespage | aside from the reference to nova-volume at the bottom. | 14:57 |
kurt_ | if I've already got cinder deployed? | 14:57 |
jamespage | kurt_, so long as you have not already provisioned storage from cinder local disk you should be OK in any order | 14:57 |
kurt_ | cool | 14:58 |
kurt_ | I agreed ceph looks cool and promising | 14:58 |
kurt_ | agree rather | 14:58 |
kurt_ | jamespage: thanks for your comments | 15:03 |
jamespage | kurt_, np | 15:03 |
kurt_ | jamespage: do you recommend not using the gui for the ceph portion and doing it manually as shown in the WP guide? | 15:04 |
jamespage | kurt_, you can do it with the gui - the tricky bits are generating the uuid and the ceph monitor secret - unfortunately they still have to be done via a shell | 15:05 |
jamespage | kurt_, do you just want to try it with a single node? | 15:05 |
jamespage | that is possible - its obviously not HA but its good for testing | 15:06 |
jamespage | and limits machine consumption! | 15:06 |
kurt_ | I have 27 VMs at my disposal :D | 15:06 |
kurt_ | Maybe that's better for a proof of concept run | 15:07 |
kurt_ | The guide uses 3, which is also fine too | 15:07 |
kurt_ | but if you have guidance on single node deployment other than just doing -n3 and anything I couldn't figure out on my own, do share. | 15:08 |
m_3 | jcastro: hey | 15:08 |
jcastro | m_3: hey for charm testing .... | 15:08 |
jcastro | m_3: is there a way we could automate checking to see which ports a charm uses? | 15:09 |
jcastro | m_3: so for example a bunch of private clouds only use 80/443, etc. | 15:09 |
=== freeflying is now known as freeflying_away | ||
jamespage | kurt_, I'd recommend a read of https://jujucharms.com/precise/ceph-14/#bws-readme | 15:09 |
jcastro | And last week I added a best practice that charms shouldn't use weird ports | 15:09 |
jamespage | kurt_, if you want to do it with a single node just drop the 'monitor-count' to 1 | 15:09 |
m_3 | jcastro: oh, not dynamically | 15:09 |
m_3 | jcastro: you just wanna see which 'open-port' calls are made | 15:10 |
kurt_ | ok, thanks jamespage | 15:10 |
jamespage | kurt_, if your instances get ephemeral storage | 15:10 |
jamespage | you can use that for storage | 15:10 |
jamespage | ephemeral-unmount: /mnt | 15:10 |
jamespage | should ensure that this works ok | 15:10 |
jamespage | with osd-devices: /dev/vdb or whatever | 15:11 |
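Pulling jamespage's pointers together, a hedged single-node sketch — the fsid and monitor-secret values are placeholders that must be generated yourself (see the charm README and the discussion later in the log), and /dev/vdb is only an example device:

```sh
# single-node ceph config assembled from the options mentioned above
cat > ceph.yaml <<'EOF'
ceph:
  monitor-count: 1
  fsid: <uuid-generated-for-this-cluster>       # placeholder
  monitor-secret: <output-of-ceph-authtool>     # placeholder
  osd-devices: /dev/vdb
  ephemeral-unmount: /mnt
EOF

juju deploy --config ceph.yaml ceph
juju add-relation cinder ceph
juju add-relation nova-compute ceph
```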
jcastro | m_3: not just the open-port, I mean if you're doing weird stuff like adding GPG keys via a keyserver, stuff like that. | 15:11 |
jcastro | so like let's say my charm uses some custom PPA or repo | 15:11 |
m_3 | jcastro: lemme look at status output... I just tore stuff down, so one minute | 15:11 |
kurt_ | this part of the deployment is new for me. still some stuff to figure out | 15:11 |
jcastro | m_3: this is more of a "hey I wonder if this is a good idea" | 15:12 |
m_3 | jcastro: wait, you mean _outbound_ traffic? | 15:12 |
m_3 | no | 15:12 |
m_3 | that'd probably be um... harder | 15:12 |
fwereade | m_3, stub, marcoceppi: seeking opinions re https://bugs.launchpad.net/juju-core/+bug/1192433 -- I feel that early departure is a good idea for peers (as noted in bug) and for providers, but not for requirers, which run the risk of being cut off by the provider if they appear to depart early | 15:13 |
_mup_ | Bug #1192433: relation-list reporting dying units <jujud> <relations> <juju-core:Triaged by fwereade> <https://launchpad.net/bugs/1192433> | 15:13 |
m_3 | fwereade: k, looking | 15:14 |
fwereade | m_3, the main issue is that this is actually the first provider/requirer asymmetry we'd be implementing if we did this, and it's not clear what the impact might be -- if people have generally been using pro/req in the "natural" way -- ie requirers connect to providers -- it will be fine; but it will be somewhat surprising for any charms with relations implemented "backwards" | 15:17 |
m_3 | fwereade: and total confusion wrt subs imo | 15:18 |
m_3 | oh, wait but that wouldn't apply... nm | 15:18 |
fwereade | m_3, I think it would still apply -- but I'm not sure the situation is actively different there | 15:19 |
fwereade | m_3, it all hinges on how people have interpreted pro/req | 15:19 |
jcastro | fwereade: https://juju.ubuntu.com/docs/authors-charms-in-action.html | 15:21 |
* m_3 wouldn't expect that to be consistent across charms | 15:21 | |
m_3 | didn't william actually write that? | 15:22 |
jcastro | yes | 15:23 |
jcastro | at the sprint I moved it over to the docs | 15:23 |
jcastro | just wanted to point out that it got generated, etc | 15:23 |
m_3 | ah, gotcha | 15:23 |
fwereade | jcastro, this change would involve a tweak or two there, tis true | 15:23 |
m_3 | hmmmm... I'm ok with asymmetry wrt prov/req... have to think a bit more about ramifications | 15:24 |
m_3 | fwereade: although I catch myself often blurring the line between juju's relation/service state and the actual state of the _service_ when thinking about dying or unresponsive counterparts in a relation | 15:26 |
fwereade | m_3, yeah, a case could be made that it's the charm author's responsibility to deal with unresponsive relatives regardless, and that any situation that would be helped here is actually a symptom of a charm bug | 15:28 |
m_3 | but I guess the net result is the same with unresponsive bits... so earlier departure would work | 15:28 |
m_3 | fwereade: getting more info... in an actionable way (hook)... is probably easier to deal with though | 15:30 |
fwereade | m_3, yeah, I would in general prefer to make charmers' lives easier ;) | 15:30 |
jamespage | mgz, tried recent everything - still no luck - https://bugs.launchpad.net/juju-core/+bug/1214451 | 15:30 |
_mup_ | Bug #1214451: Unable to bootstrap maas environment <juju-core:New> <https://launchpad.net/bugs/1214451> | 15:30 |
mgz | hmmm | 15:31 |
m_3 | fwereade: appreciate that! yeah, I'm thinking that asymmetry (which is sort-of implied in the naming) is an ok thing given we're getting a lot more info from juju about the relation | 15:31 |
jamespage | mgz, I got the same error with 12.04 maas + juju-core 1.12 fwiw | 15:32 |
jamespage | juju sync-tools worked like a dream tho! | 15:32 |
=== varud_away is now known as varud | ||
=== varud is now known as varud_away | ||
marcoceppi | fwereade: the problem is we never really enforce what provides and requires means, and since relations are bidirectional most charms just ignore it | 15:47 |
marcoceppi | it being the convention of provides and requires. In fact, the whole idea of provides and requires is extremely opinionated from charm author to author | 15:49 |
roaksoax | jamespage: upgrade to whatever is in -proposed | 15:50 |
jamespage | roaksoax, daily-backports works | 15:50 |
roaksoax | jamespage: that contains the fix for that maas error with juju core | 15:50 |
jamespage | roaksoax, whats in -proposed? | 15:50 |
jamespage | ah - right | 15:51 |
roaksoax | jamespage: or that too. (yeah meant 12.04 -proposed for the SRU'd fixes) | 15:51 |
jamespage | roaksoax, I'd rather test -proposed | 15:51 |
jamespage | roaksoax, but downgrading maas is a purge/reinstall I suspect! | 15:51 |
roaksoax | yep | 15:52 |
fwereade | marcoceppi, can you point me to some examples? ISTM that it can only really meaningfully vary across unconnected groups of charms | 15:54 |
fwereade | marcoceppi, hmm, well, I guess it's not necessarily transitive | 15:54 |
fwereade | marcoceppi, but defining, say, an app server charm that *provides* both http and mysql would seem... surprising | 15:54 |
fwereade | marcoceppi, but perhaps given that we don't enforce internal consistency it may be too much to ask :/ | 15:55 |
marcoceppi | fwereade: let me see if I can find a few examples | 15:55 |
marcoceppi | I vaguely remember coming across a few examples where a charm was providing an interface it actually meant to consume and another charm was requiring an interface it was to provide just to fit in with previous charms developed | 15:57 |
fwereade | marcoceppi, m_3: quick aside: would you anticipate problems if we implemented perceived-early-departure for peer relations only (for now at least)? | 16:00 |
marcoceppi | fwereade: I don't think that'd cause a problem, can't see that being an issue off the top of my head | 16:01 |
m_3 | fwereade: dunno... I was thinking haproxy before which is direct, non-peer relations | 16:14 |
m_3 | fwereade: but yeah, sounds reasonable to start with peers and we'll see where to go from there | 16:15 |
fwereade | m_3, is there a problem with haproxy and early-provider-departure? I don't see it | 16:17 |
fwereade | m_3, (I know that's not quite what you said, but I'm even more confused about that -- a peers change wouldn't affect it at all, surely?) | 16:17 |
=== teknico1 is now known as teknico | ||
m_3 | fwereade: no, haproxy manages down relation counterparts at the service level... but it's the prototypical example of a charm needing `relation-list` | 16:29 |
m_3 | so it was the first one that popped to mind earlier | 16:29 |
=== TheRealMue is now known as TheMue | ||
m_3 | but it could still eventually benefit from earlier notification of down relation counterparts (not peers) | 16:31 |
weblife | getting the following error with 'charm create test': Failed to find test in apt cache, creating an empty charm instead. | 16:33 |
marcoceppi | weblife: that's not an error | 16:34 |
marcoceppi | that's expected output of charm create, it's just saying that it can't find that name "test" in apt cache, it's creating an empty charm instead | 16:34 |
weblife | Okay but it seems to be missing things like the svg icon file. So I figured this was an error. | 16:36 |
marcoceppi | weblife: make sure you're getting it from the ppa | 16:37 |
marcoceppi | weblife: ppa:juju/pkgs | 16:37 |
weblife | marcoceppi: thank you as always | 16:38 |
sinzui | hi ~charmers. I see https://jenkins.qa.ubuntu.com/ is not responding, and manage.jujucharms.com wants to collect charm test data from it | 17:16 |
sinzui | Do we need to change where test data is collected from? | 17:16 |
hazmat | m_3, do you know who maintains that jenkins ^ | 17:19 |
m_3 | hazmat: there was a problem yesterday with jenkins version upgrades... in #is | 17:21 |
hazmat | m_3, thanks | 17:21 |
m_3 | sinzui: ^^ | 17:21 |
sinzui | thank m_3 | 17:21 |
m_3 | the jenkins build_publisher plugin is very sensitive to versions... but this was a bigger problem in general for jenkins.qa.ubuntu.com | 17:22 |
hazmat | sinzui, fwiw the response i got back is that its being worked on from webops vanguard | 17:23 |
lamont | it's a #is thing, not #webops, being actively worked since before I went checking | 17:25 |
sinzui | hazmat, lamont fab. charmworld itself isn't broken, it is just logging a lot of hate | 17:26 |
* m_3 thinks it's destructive to internalize too much of that | 17:27 | |
sarnold | hehe | 17:27 |
lamont | heh | 17:28 |
lamont | logging hate is better than eating hate | 17:28 |
roaksoax | lin/win 3 | 17:39 |
beuno | lin/win! | 17:39 |
weblife | yeah back up and running normal | 17:49 |
adam_g | jamespage, i think this fell by the wayside, any chance of a poke again? https://code.launchpad.net/~gandelman-a/charm-helpers/sync_include_hints/+merge/174320 | 17:50 |
mhall119 | arosales: smoser: you guys have some Cloud & Server sessions imported into Summit, but they need to be added to the schedule. I sent an email this morning about how to do that, let me know if you have any questions. | 17:53 |
arosales | mhall119, thanks I'll take a look at that today | 17:53 |
mhall119 | thanks arosales | 17:55 |
jamespage | adam_g, +1 | 18:10 |
adam_g | jamespage, cool. thanks. also, this look like a sensible approach? http://paste.ubuntu.com/6007366/ | 18:15 |
=== varud_away is now known as varud | ||
=== defunctzombie_zz is now known as defunctzombie | ||
=== varud is now known as varud_away | ||
natefinch | arosales: got a second to talk about juju on windows? | 18:54 |
adam_g | jamespage, disregard that paste. such a config wouldn't really work | 19:02 |
arosales | natefinch, hello | 19:03 |
arosales | natefinch, sure, I don't know in what context but I am glad to talk | 19:04 |
natefinch | arosales: hi... Mark Ramm said you might have the info on our obligations for delivering juju on windows... and looks like I'm the one that'll be making sure we meet those obligations | 19:05 |
arosales | natefinch, ah ok. Do you want to G+? | 19:06 |
natefinch | arosales: yeah, perfect | 19:06 |
arosales | natefinch, https://plus.google.com/hangouts/_/2177b0954df808545cd6dac802822b1fcb7316bf?authuser=1&hl=en | 19:07 |
kurt_ | jamespage: you mentioned earlier about generating a ceph monitor secret and the uuid. Must this be done on the node to be deployed on? There's a bit of a chicken and egg thing if the answer is yes unless it's automated in the charm? | 19:07 |
marcoceppi | kurt_: I believe the uuid and secret are derived from the fsid which is a required configuration option needed at deploy time | 19:09 |
kurt_ | marcoceppi: pardon my ignorance, but are those indigenous to the node being deployed on - or can they be generated on a completely different node? | 19:10 |
marcoceppi | kurt_: they're the same between all nodes and are provided by the user via configuration. | 19:11 |
marcoceppi | basically you're just running juju set ceph fsid=`uuid` | 19:11 |
kurt_ | marcoceppi: but how do I create this? Is it a randomly generated thing? | 19:12 |
marcoceppi | kurt_: the fsid just needs to be a random string. The charm author recommends `uuid` but you could have it set to anything | 19:12 |
kurt_ | I am doing my deployment via MAAS | 19:12 |
kurt_ | ah, ok | 19:12 |
marcoceppi | kurt_: but it's set during deployment, so in the juju-gui, prior to deployment, you can fill the fsid with any random string | 19:13 |
kurt_ | And the same thing for the monitor secret? | 19:13 |
marcoceppi | kurt_: I assume so, let me check | 19:14 |
kurt_ | Can I assume I can use exactly what's already been used in the examples? | 19:14 |
marcoceppi | kurt_: which examples? I'd not recommend using anything from the examples, for the same reason there aren't default options for these. It creates an attack vector where someone knows the password to access your disks | 19:16 |
kurt_ | yes, understood | 19:16 |
marcoceppi | kurt_: the readme for monitor-secret recommends running `ceph-authtool /dev/stdout --name=mon. --gen-key` | 19:16 |
marcoceppi | you can run this on any node that has ceph-authrool, doesn't need to be run directly on the node | 19:16 |
marcoceppi | even your local machine | 19:17 |
kurt_ | awesome, thanks. I was going to run this on my maas master node, so that works. | 19:19 |
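A hedged sketch of generating both values on a convenience box (e.g. the MAAS master kurt_ mentions) — the package names are an assumption for Ubuntu precise; the two commands themselves are the ones quoted from the charm README above:

```sh
# the uuid and ceph-authtool binaries can live on any machine; package names
# below are an assumption for precise
sudo apt-get install uuid ceph-common

# either put the values in the deploy-time config, or set them on the service:
juju set ceph fsid="$(uuid)"
juju set ceph monitor-secret="$(ceph-authtool /dev/stdout --name=mon. --gen-key)"
```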
kurt_ | fyi - jamespage sent me this earlier: http://javacruft.wordpress.com/2012/10/17/wrestling-the-cephalopod/ | 19:19 |
kurt_ | he said its mostly up to date, just the nova bits at the bottom are out of date | 19:20 |
marcoceppi | kurt_: well, he did say mostly ;) | 19:20 |
kurt_ | true indeed | 19:21 |
marcoceppi | We realize MAAS + Juju and the openstack charm docs aren't quite up to date. The charms have had a lot of work done on them and there just hasn't been time to document everything. I hope we can resolve both of these soon | 19:21 |
kurt_ | I'm figuring it all out slowly but surely | 19:22 |
kurt_ | I'm 90% there | 19:22 |
marcoceppi | please feel free to document and report back what you find! We really appreciate you working through all this | 19:22 |
kurt_ | sure. I do plan to blog it. I'm hoping for promotion from somebody over there once I get it done. :) | 19:23 |
kurt_ | maybe on one of your own blog sites | 19:23 |
marcoceppi | I'd definitely promote it | 19:26 |
kurt_ | can I have url to your blog? | 19:26 |
marcoceppi | http://marcoceppi.com | 19:27 |
kurt_ | GRRR--several of my VMs crashed. This may be too much stress for Fusion to handle. | 19:27 |
kurt_ | Thanks! | 19:27 |
=== varud_away is now known as varud | ||
jamespage | kurt_, that was all great advice about the ceph config options from marcoceppi | 19:46 |
=== varud is now known as varud_away | ||
jcastro | kurt_: huh did you mention earlier you're on juju .7? | 19:46 |
marcoceppi | jcastro: did he? | 19:47 |
kurt_ | yes | 19:48 |
kurt_ | .7 | 19:48 |
jcastro | out of curiosity, how did you end up with that? | 19:48 |
jcastro | did you search via apt-cache search for juju or ... ? | 19:48 |
kurt_ | I was trying to remember | 19:49 |
kurt_ | maybe I installed from ppa? | 19:50 |
kurt_ | I started off from this guide http://ceph.com/dev-notes/deploying-ceph-with-juju/ | 19:50 |
jcastro | scuttlemonkey: heya, maybe that page should be updated? | 19:52 |
jcastro | kurt_: so at some point when you're ready to do it all again for repeatability I'd like to see how juju 1.12 does with what you're trying | 19:53 |
kurt_ | sure. | 19:53 |
jcastro | adam_g: though you guys are still just starting to test the openstack charms with juju-core right? | 19:53 |
kurt_ | I'm reeling from a vmware crash right now. :) I may have pushed it to its limits | 19:53 |
adam_g | jcastro, we are testing it | 19:54 |
jcastro | as long as it's not our fault for the crashes I'm happy. :p | 19:54 |
kurt_ | LOL | 19:54 |
kurt_ | you would think vmware is stable as all get out | 19:54 |
kurt_ | ok, time for system reboot | 19:56 |
marcoceppi | jcastro: I would even say use 1.13 and wait for 1.14 - it's already better than 1.12 | 19:56 |
scuttlemonkey | jcastro: wha? | 19:56 |
marcoceppi | In fact I have strong feelings about this whole odds-are-dev, evens-are-stable scheme, but that's for another channel, another day. | 19:57 |
scuttlemonkey | oh, yeah it's a bit stale...I should at least put a warning on there until I can go through and clean it up | 19:57 |
jcastro | scuttlemonkey: your older ceph/juju blogpost. | 19:57 |
scuttlemonkey | I'll drop a note box on top of it here in a few | 19:58 |
scuttlemonkey | hopefully have time to poke at it later this week | 19:58 |
kurt_ | scuttlemonkey: you wrote that? | 19:58 |
scuttlemonkey | kurt_: yeah | 19:58 |
kurt_ | nice | 19:58 |
scuttlemonkey | thx | 19:58 |
kurt_ | are you pmcgarry then? :) | 19:59 |
scuttlemonkey | yah, that's me | 19:59 |
kurt_ | ah ok. | 19:59 |
kurt_ | I'm doing the ceph stuff right now. So I've been looking at it quite a bit. | 20:00 |
scuttlemonkey | nice | 20:00 |
scuttlemonkey | ceph-deploy has come a long way in the last couple weeks | 20:00 |
scuttlemonkey | was hoping I could set aside some time to put it against the charm and normalize things a bit | 20:00 |
scuttlemonkey | but I haven't really been watching the charm dev | 20:00 |
kurt_ | cool. One point of slight confusion (from the layman <- that's me) was the notion of needing to create a monitor secret and fsid prior to deployment and that they are random | 20:01 |
kurt_ | not random | 20:01 |
kurt_ | but can be any string generated by the tools | 20:01 |
kurt_ | and that those tools can be on any node | 20:02 |
kurt_ | not necessarily the one being deployed to | 20:02 |
scuttlemonkey | ahh | 20:02 |
kurt_ | I was talking through that with marcoceppi | 20:02 |
kurt_ | So maybe you could add information related to generating those two things? | 20:03 |
scuttlemonkey | I should see if I can talk alfredo into monkeying with the charms...he is doing most of the ceph-deploy work these days and even did the ansible playbooks | 20:03 |
scuttlemonkey | didn't I have that in there? | 20:04 |
kurt_ | Ok. That would be awesome. I'd also just be happy documenting the process too. :) | 20:04 |
jcastro | maybe he can reuse the playbook in the charm? | 20:04 |
scuttlemonkey | We need to generate a uuid and auth key for Ceph to use. > uuid insert this as the $fsid below > ceph-authtool /dev/stdout --name=$NAME --gen-key insert this as the $monitor-secret below. - See more at: http://ceph.com/dev-notes/deploying-ceph-with-juju/#sthash.1rrpTWly.dpuf | 20:04 |
jcastro | For an integration win! | 20:04 |
scuttlemonkey | jcastro: hehe inception-y goodness :) | 20:05 |
kurt_ | ah - was that there before??? | 20:06 |
kurt_ | FFS, am I blind? | 20:06 |
scuttlemonkey | kurt_: haha, yeah no worries :) | 20:06 |
scuttlemonkey | jcastro: ok, disclaimer included until I can get back to it | 20:07 |
jcastro | <3 | 20:07 |
jcastro | holla at me if you need anything | 20:07 |
kurt_ | Oh yeah - one more bit of feedback - your guide is tailored to AWS. The steps aren't much different for a private deployment, right? | 20:07 |
kurt_ | if there is anything different, you may want to call it out | 20:07 |
scuttlemonkey | jcastro: will do, hopefully I can get alfredo all fired up :) | 20:08 |
jcastro | yeah | 20:08 |
jcastro | or maybe we should shove all that in the README | 20:08 |
jcastro | along with jamespage's blog post information | 20:08 |
scuttlemonkey | kurt_: yeah, there is the potential for many, so I wrote it for what I had and figured people would adapt as necessary | 20:08 |
kurt_ | and final thing w/r to "Prep for Ceph Deployment" section - you may want to make it clear those tools can be run on *any* node that has the tools. I know that's obvious for AWS, but not for someone doing a MAAS deployment. That was where my confusion was. | 20:10 |
scuttlemonkey | ahhh | 20:11 |
scuttlemonkey | a fair point | 20:11 |
scuttlemonkey | ok, wandering back off into the ether | 20:13 |
scuttlemonkey | thanks for keeping me honest :) | 20:13 |
kurt_ | scuttlemonkey: thank you for your work with ceph and putting the guide together | 20:14 |
scuttlemonkey | kurt_: my pleasure! | 20:15 |
=== varud_away is now known as varud | ||
kurt_ | When my nodes go down, is there any easy way to restart juju services after a crash? Do I have to redeploy everything? | 20:39 |
kurt_ | ie. agent-state: not-started | 20:39 |
kurt_ | This is actually my root-node, so I can't even "juju ssh 0" to the node to fiddle the bits | 20:42 |
kurt_ | Am I SOL and having to destroy the environment? | 20:44 |
kurt_ | ie. start from scratch (shudder) | 20:45 |
=== varud is now known as varud_away | ||
kurt_ | it appears hosts don't like this condition and I'm forced to start over. Is that a fair assumption? | 20:55 |
kurt_ | Nope figured it out. http://askubuntu.com/questions/271312/what-to-do-when-juju-machine-0-has-got-agent-state-not-started-state | 20:58 |
weblife | is it possible to upgrade a charm from a repository ? I tried this but no go: juju upgrade-charm --repository charms local:node-app | 21:07 |
=== freeflying_away is now known as freeflying | ||
=== varud_away is now known as varud | ||
=== varud is now known as varud_away | ||
marcoceppi | weblife: was the charm already deployed from local? or from charm store? | 21:48 |
weblife | marcoceppi: From local. I just went ahead and re-deployed but if I can do it I would like to | 21:49 |
marcoceppi | weblife: you don't need to specify local: next time | 21:50 |
marcoceppi | juju upgrade-charm --repository charms node-app | 21:50 |
marcoceppi | since juju already knows it's local: | 21:50 |
weblife | because the charm is already tagged with local:? But if I wanted to convert it to the charm store version, would I use cs:? | 21:51 |
marcoceppi | weblife: so, fair warning I've only just recently learned about the switching part, and from what I've learned I'm terrified to use it too often | 21:51 |
marcoceppi | When you're running upgrade-charm you never need to specify protocol (ie cs: or local:) | 21:52 |
weblife | marcoceppi: thank you for clarifying | 21:52 |
marcoceppi | However. If you deployed a charm from the charm store you can switch to a local version using --switch | 21:52 |
hazmat | roaksoax, ping | 21:53 |
marcoceppi | weblife: I'd recommend looking at the output and warnings of `juju help upgrade-charm` | 21:53 |
marcoceppi | weblife: if you were to pursue --switch | 21:53 |
weblife | marcoceppi: that sounds useful for testing a charm then customizing | 21:55 |
marcoceppi | weblife: a good use case is "I've deployed from the store, and found a vital bug that I need to patch and can't wait" | 21:56 |
marcoceppi | so you can switch to a local version with the hopes of eventually switching back to charm store (or not!) | 21:56 |
jcastro | http://askubuntu.com/questions/335108/juju-cant-ssh-service-units | 21:57 |
jcastro | I think he doesn't need the sudo right? | 21:57 |
marcoceppi | jcastro: did askubuntu bot stop? | 21:57 |
jcastro | I might just be faster? | 21:57 |
marcoceppi | I'll bounce it anyways | 21:57 |
kurt_ | Hey guys - I'm trying to recover from vmware crashing | 21:58 |
kurt_ | systematically going through and having to restart the agents, then reboot the nodes | 21:58 |
kurt_ | the gui is slow as molasses | 21:58 |
kurt_ | just spinning | 21:58 |
marcoceppi | jcastro: weird, I don't know any posts that recommend not having the ssh key in maas, in fact it didn't work when I tried. | 21:58 |
* marcoceppi looks forward to getting his maas setup running | 21:59 | |
weblife | the bot is slow. I made a post before and it lagged. | 21:59 |
kurt_ | any ideas, or am I pretty much SOL and need to destroy my entire set up? | 21:59 |
marcoceppi | weblife jcastro it appears it errored out | 22:00 |
jcastro | kurt_: it sounds to me like restarting everything will take longer | 22:00 |
weblife | kurt_: vmware gui is super slow. That was my experience when doing blackberry development. | 22:01 |
kurt_ | weblife: not vmware gui - juju-gui | 22:01 |
weblife | kurt_: then I dunno, you're SOL :) | 22:01 |
kurt_ | jcastro: I take it juju isn't very good at recovering from such situations? | 22:02 |
marcoceppi | kurt_: well the gui is communicating live via an API to bootstrap, so that might explain why it's so slow | 22:02 |
marcoceppi | kurt_: if you restart a machine Juju agents _should_ restart as long as the bootstrap node is running | 22:02 |
jcastro | kurt_: I would capture the logs from the bootstrap and see what a core dev says | 22:03 |
jcastro | bb later tonite, dinner | 22:03 |
kurt_ | marcoceppi: which logs - just debug? | 22:03 |
marcoceppi | kurt_: the bootstrap node has logs in /var/log/juju | 22:03 |
marcoceppi | actually, all nodes have /var/log/juju/ | 22:03 |
kurt_ | is the debug-log simply an aggregation of that info from all nodes? | 22:04 |
marcoceppi | kurt_: yeah... debug-log iirc, is just rsyslog forwarding all logs from all nodes to the bootstrap node, then just tailing that log | 22:04 |
kurt_ | ok, what I thought... | 22:05 |
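For reference, a hedged sketch of where to look, based on marcoceppi's description — the exact log file names under /var/log/juju vary by juju version, so treat them as assumptions:

```sh
# stream the aggregated environment log from your workstation
juju debug-log

# or inspect a node's own copy directly (file names vary by version)
juju ssh 0 'ls /var/log/juju/ && sudo tail -n 50 /var/log/juju/*.log'
```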
kurt_ | nothing interesting there. darned | 22:06 |
=== freeflying is now known as freeflying_away | ||
=== freeflying_away is now known as freeflying | ||
=== varud_away is now known as varud | ||
=== varud is now known as varud_away | ||
hazmat | kurt_, so re gui | 22:52 |
hazmat | kurt_, if you reload it what do you see? | 22:52 |
kurt_ | heh heh, now that I ..just..seconds ago destroyed my environment? yes? :D | 22:53 |
hazmat | doh | 22:53 |
kurt_ | nothing - it just spins | 22:53 |
kurt_ | the basic palette was there, and its logged in, but the circle was just spinning | 22:53 |
hazmat | kurt_, so typically in chrome, you can go into the inspector (right click inspect element), switch to network tab, reload the page.. and see the websocket traffic | 22:53 |
kurt_ | ah | 22:53 |
hazmat | which is basically a connection to the environment | 22:53 |
kurt_ | I looked at the error and access logs for apache and didn't see anything really interesting either | 22:54 |
hazmat | the gui charm has to play a game, where it proxies the websocket api endpoint via haproxy, so that it can use the same ssl cert for serving up the page.. but it's basically the same | 22:54 |
hazmat | kurt_, you'd have to log in to the gui unit and check the haproxy logs i suspect | 22:54 |
kurt_ | are there any common issues you've ran in to? | 22:55 |
hazmat | kurt_, once things are in a steady state, you're not going to find much interesting in the juju debug-log output | 22:55 |
hazmat | kurt_, no | 22:55 |
hazmat | kurt_, it generally just works | 22:55 |
hazmat | kurt_, hence the curiosity around what went odd | 22:55 |
kurt_ | right. | 22:55 |
hazmat | kurt_, you've got vmware, pxe boot instances registered in maas | 22:55 |
hazmat | ? | 22:55 |
kurt_ | yeah for sure | 22:56 |
hazmat | kurt_, it could be a network issue between the gui proxy and the state server instance, or between the browser and the gui proxy. | 22:56 |
kurt_ | that whole part works, with of course the exception that vmware won't actually boot the nodes | 22:56 |
hazmat | i've seen similar setups with virtualbox pxe and it works... and we have an old kvm charm that does something similar (virtual-maas) | 22:57 |
kurt_ | yes it could. It may have been worth troubleshooting NAT. Didn't think too deeply about that part | 22:57 |
kurt_ | yes, I figured out kvm works, but it can use libvirt | 22:58 |
kurt_ | I'm working with one of the libvirt devs to get libvirt working correctly with vmware under mac osx, then I should be good | 22:59 |
kurt_ | hopefully then maas can boot vmware fusion like it does kvm | 22:59 |
hazmat | cool | 23:00 |
kurt_ | but pxe under vmware with the lower cost versions is known not to work | 23:00 |
hazmat | i've heard the virtualbox w/ pxe + maas setup on osx works, though it's a bit manual to set up | 23:00 |
hazmat | although could be scripted | 23:00 |
kurt_ | sorry, let me rephrase - WOL does not work | 23:01 |
kurt_ | pie boot works fine | 23:01 |
hazmat | ah | 23:01 |
kurt_ | pxe boot | 23:01 |
* hazmat wishes there was a software emulation of ipmi | 23:01 | |
kurt_ | that would be nice | 23:01 |
kurt_ | why don't you guys write the stack for it? :D | 23:02 |
hazmat | would make testing ipmi setups much simpler.. as it is now it's "get a bunch of hardware for a lab setup". | 23:02 |
kurt_ | pxe boot is working fine under vmware - its just the WOL stuff that I wished worked correctly | 23:02 |
kurt_ | that's part of what makes MAAS so cool | 23:03 |
kurt_ | it would be even cooler if it supported true elasticity | 23:04 |
adam_g | using juju-core, is it not possible to terminate a machine that is in the 'pending' state? | 23:06 |
hazmat | adam_g, using what version ? sounds similar to https://bugs.launchpad.net/juju-core/+bug/1190715 | 23:08 |
_mup_ | Bug #1190715: unit destruction depends on unit agents <juju-core:Fix Committed by fwereade> <https://launchpad.net/bugs/1190715> | 23:08 |
hazmat | its in trunk | 23:08 |
adam_g | hazmat, 1.13.1-raring-amd64 + juju-deployer | 23:08 |
adam_g | hazmat, thanks ill read that over in a few | 23:09 |
hazmat | adam_g, is there anything in the provisioning logs? | 23:09 |
fwereade | adam_g, if it has a unit, the unit's existence blocks the machine's removal; but you should be able to remove the unit and then the machine | 23:09 |
adam_g | hazmat, actually, i just destroyed and rebootstrapped. ill check again in a few | 23:09 |
fwereade | adam_g, it's awkward but shouldn't be unrecoverable | 23:10 |
adam_g | fwereade, i successfully destroyed the service units associated with the pending machine. after they were gone, i tried to terminate the machine but nothing happened | 23:10 |
adam_g | ill poke again in a few | 23:10 |
fwereade | adam_g, that's not immediately familiar; the provisioner ought to have picked that up then | 23:12 |
fwereade | adam_g, do you recall what the machine's life status was, and/or whether an instance id was set? those would help me figure it out | 23:13 |
adam_g | fwereade, http://paste.ubuntu.com/6008312/ | 23:15 |
adam_g | machines 5 and 9 were the ones that never came up to begin with, and seemed not to terminate | 23:15 |
adam_g | unless i was being impatient | 23:15 |
adam_g | (didn't notice the 'dying' state at the time) | 23:15 |
fwereade | adam_g, yeah, the provisioner ought to have spotted that; the logs from machine 0 would be helpful | 23:16 |
adam_g | fwereade, ill see what i can get together next time the provider decides not to give me a machine | 23:17 |
fwereade | adam_g, thanks | 23:19 |
adam_g | fwereade, ok, back in the same situation | 23:29 |
adam_g | provider issue seems to be instance coming up with no networking | 23:30 |
* fwereade winces a bit | 23:30 | |
adam_g | fwereade, what do you want from machine 0? /var/log/juju/* ? | 23:30 |
fwereade | adam_g, machine-9.log should be enough | 23:31 |
fwereade | adam_g, -0 | 23:31 |
adam_g | fwereade, http://paste.ubuntu.com/6008358/ | 23:32 |
adam_g | (machine-0.log) | 23:32 |
adam_g | http://paste.ubuntu.com/6008361/ | 23:32 |
adam_g | (juju status) | 23:32 |
=== varud_away is now known as varud | ||
fwereade | adam_g, hmm, not clear what's happening there... thumper, any thoughts? | 23:39 |
fwereade | adam_g, in a spirit of devilment, I'd be interested to know what you see if you kill the machine agent on 0 and watch what happens when it comes back up | 23:39 |
thumper | ?! | 23:40 |
adam_g | fwereade, here is some more log tail since last paste: http://paste.ubuntu.com/6008379/ | 23:41 |
adam_g | one sec, ill kill it | 23:41 |
adam_g | http://paste.ubuntu.com/6008381/ | 23:43 |
adam_g | including kill and restart | 23:43 |
fwereade | adam_g, sorry, but I think I'm baffled for tonight -- would you create a bug please? | 23:48 |
fwereade | adam_g, maybe it'll be obvious in the morning but I'm not seeing it now | 23:48 |
=== varud is now known as varud_away | ||
adam_g | fwereade, sure | 23:50 |
adam_g | fwereade, https://bugs.launchpad.net/juju-core/+bug/1214651. | 23:56 |
_mup_ | Bug #1214651: Machine stuck in 'pending' state cannot be terminated <juju-core:New> <https://launchpad.net/bugs/1214651> | 23:56 |
adam_g | fwereade, thanks for the help so far | 23:56 |
fwereade | adam_g, thanks for the report :) | 23:57 |