roy-feldman | Hey Marco - juju bootstrap is working now with my Maas node, without any changes! | 00:01 |
---|---|---|
roy-feldman | in KVM | 00:01 |
roy-feldman | But I had to do something a little odd, which may be a MaaS problem | 00:02 |
roy-feldman | Basically I had to PXE boot it twice, and I had to select a boot option on the second boot | 00:03 |
marcoceppi_ | roy-feldman: that's...odd - but I'm glad it works! | 00:06 |
roy-feldman | After the initial PXE boot, I got the network error I described | 00:07 |
roy-feldman | Then I started the guest again, hit F12, and selected the virtio boot option, not the default boot option, which PXE boots | 00:08 |
roy-feldman | It seems that MaaS is not completing the PXE boot in one step | 00:10 |
roy-feldman | Also, the start button in the MaaS interface is a "noop" | 00:10 |
roy-feldman | Now I do have a Juju question | 00:12 |
roy-feldman | I can see my MaaS node when I do juju status | 00:12 |
roy-feldman | I have done a juju deploy and juju expose of mysql | 00:14 |
roy-feldman | How long should it take for local mysql charm to transition from pending to running? | 00:15 |
marcoceppi_ | roy-feldman: depends on how beefy your machine is. Think of it as having to install the Ubuntu OS, install the juju working parts, then deploy the charm and install the charm. | 00:16 |
roy-feldman | I am running a beefy i7 laptop .. I wouldn't think it would take very long | 00:16 |
roy-feldman | It doesn't matter if the machine is already running with an OS? | 00:17 |
roy-feldman | This was not a cold deploy | 00:17 |
marcoceppi_ | roy-feldman: not sure, I've typically enlisted machines from the PXE boot screen | 00:17 |
roy-feldman | I have done it the other way | 00:18 |
roy-feldman | I entered the MAC address into MaaS | 00:18 |
roy-feldman | Then I booted the machine which registered it with MaaS | 00:19 |
roy-feldman | In my case, I had to boot it twice | 00:19 |
marcoceppi_ | roy-feldman: but did the machine have the Ubuntu OS yet? | 00:19 |
roy-feldman | Yes | 00:19 |
marcoceppi_ | See, I'm not sure what happens here since MaaS builds its own images to use | 00:19 |
marcoceppi_ | So I'm not sure if it's going to wipe/re-install or what | 00:20 |
roy-feldman | During the second boot it appeared to install the latest server packages | 00:20 |
roy-feldman | Perhaps. In any case I hope to soon have a better model of how MaaS + Juju work than I have now. ;-) | 00:22 |
roy-feldman | Any suggestions on how I can see what is going on, what logs I can be looking at? | 00:23 |
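For context, the two client-side commands that surface this (both are used elsewhere in this conversation, so nothing here is invented):

```bash
# Poll unit state; a deploying unit moves from pending toward started
juju status

# Stream the environment log; provisioning errors show up here
juju debug-log
```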
roy-feldman | Looking at the output of juju debug-log, I see a series of messages starting with "ProviderInteractionError: Unexpected Error interacting with provider: 409 CONFLICT" | 00:27 |
roy-feldman | I think that would explain why my mysql instance is not coming up | 00:27 |
roy-feldman | Should I file a bug report? | 00:28 |
marcoceppi_ | huh, 409 CONFLICT means MaaS doesn't have any nodes for Juju to use | 00:29 |
roy-feldman | Maybe I shouldn't have my only maas node running if I want to deploy a charm | 00:29 |
marcoceppi_ | roy-feldman: you'll need at least two MaaS nodes to bootstrap and deploy with Juju | 00:30 |
marcoceppi_ | One for the Bootstrap node and one for the charm you wish to deploy :) | 00:30 |
roy-feldman | I do have two nodes | 00:30 |
marcoceppi_ | Oh | 00:30 |
marcoceppi_ | What does the MaaS dashboard show? | 00:30 |
roy-feldman | If you mean I have a node for the MaaS server and another for deployment | 00:30 |
roy-feldman | Hold on | 00:30 |
roy-feldman | It shows that I have one node which has been allocated | 00:32 |
marcoceppi_ | And that's it? | 00:32 |
roy-feldman | There were 0 nodes when I installed MaaS | 00:33 |
marcoceppi_ | How many nodes do you have available? | 00:33 |
roy-feldman | 1 | 00:34 |
roy-feldman | Does juju require its own MaaS node? | 00:36 |
roy-feldman | I assumed that I could run Juju at the native level to interact with MaaS. Am I wrong? | 00:37 |
lifeless | roy-feldman: you can indeed | 00:38 |
lifeless | you'll need a MaaS controller, and then a Juju control node, running on a MaaS provisioned node | 00:38 |
roy-feldman | I missed that in the configuration steps | 00:39 |
roy-feldman | The setup of the Juju control node | 00:39 |
roy-feldman | Where is that documented? | 00:39 |
lifeless | it's what juju bootstrap does | 00:40 |
roy-feldman | So I need a free MaaS node when I run juju bootstrap? | 00:41 |
roy-feldman | i.e. are you saying that every time I run juju bootstrap with a MaaS environment, I need an available MaaS node for the juju controller? | 00:43 |
roy-feldman | What is the best way to rollback my juju deployment and go back to adding additional MaaS nodes? | 00:45 |
roy-feldman | Should I do a destroy-environment? | 00:46 |
lifeless | yes, that's what I'm saying | 00:48 |
roy-feldman | thanks | 00:48 |
lifeless | uhm, you shouldn't need to destroy it, just add more nodes so it can get them when you go to deploy a charm | 00:48 |
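A minimal sketch of the flow lifeless is describing, assuming the MaaS environment is already configured and has at least two ready nodes:

```bash
juju bootstrap       # allocates MaaS node 1 as the Juju control node
juju deploy mysql    # allocates MaaS node 2 and installs the charm there
juju expose mysql
juju status          # watch both machines come up
```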
roy-feldman | And it's OK to simply ctrl-c out of my original juju deploy and expose? | 00:49 |
roy-feldman | No need to do any housecleaning? | 00:49 |
roy-feldman | The one that never completed because there wasn't an available node | 00:50 |
roy-feldman | BTW, shouldn't juju expose give some kind of message if there aren't sufficient nodes to complete the action? | 00:52 |
lifeless | please file a bug about that | 00:53 |
roy-feldman | will do | 00:53 |
lifeless | I agree it would be good to do so, I don't know why it didn't... may be a MaaS bug, for instance. | 00:53 |
roy-feldman | Looking at the trace, it looks like the loop is in juju | 00:54 |
roy-feldman | Specifically juju/agents/provision.py | 00:54 |
roy-feldman | It just keeps retrying | 00:55 |
lifeless | you could leave it where it is and add nodes, if it keeps retrying when a node comes available it will succeed ;) | 00:56 |
roy-feldman | Thanks again for all the help, I will try again with more nodes and see what happens and I will file a bug report about juju provision | 00:57 |
* negronjl is done for the night | 01:41 | |
bkerensa | SpamapS, marcoceppi: http://www.omgubuntu.co.uk/2012/06/ubuntu-12-10-development-update-1 | 03:51 |
bkerensa | you got questions about juju ^ :P | 03:51 |
=== almaisan-away is now known as al-maisan | ||
=== tobin_ is now known as tobin | ||
=== tobin is now known as Guest74089 | ||
=== Guest74089 is now known as tobin__ | ||
=== garyposter is now known as gary_poster | ||
jimbaker | at usenix config mgmt summit today in boston, will be talking later on "service orchestration with juju" | 13:16 |
niemeyer | jcastro: ping | 13:19 |
niemeyer | jimbaker: Sweet | 13:19 |
jimbaker | niemeyer, it's a good lineup of speakers from chef, bcfg2, cfengine, vmware | 13:22 |
=== al-maisan is now known as almaisan-away | ||
themiwi | Hi all. Is there an easy way to get essentially the output of `getconf _NPROCESSORS_ONLN` of the remote host in a *-relation-changed hook? Or should I use `n=$(ssh $(relation-get hostname) getconf _NPROCESSORS_ONLN)`? | 14:52 |
m_3 | themiwi: you could pass that as a relation variable... one side would do `relation-set num-cpus=$(...)` and the other side does `relation-get num-cpus` | 14:56 |
m_3 | themiwi: that can usually be inferred from instance-type though.. depends on provider. Also you might take a look at constraints in the juju docs. | 14:58 |
m_3 | all depends on what you're trying to do... passing relation variables works fine though | 14:59 |
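Spelled out, m_3's suggestion looks roughly like this; the hook names are placeholders for whatever the charm's interface actually uses:

```bash
# On the side that knows its CPU count (e.g. its foo-relation-joined hook):
relation-set num-cpus=$(getconf _NPROCESSORS_ONLN)

# On the side that wants the value (e.g. its foo-relation-changed hook):
n=$(relation-get num-cpus)
```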
negronjl | 'morning all | 15:34 |
imbrandon | morn | 15:34 |
negronjl | hi imbrandon | 15:34 |
m_3 | negronjl: yo | 15:34 |
negronjl | 'morning m_3 | 15:35 |
twobottux | aujuju: Is juju specific to ubuntu OS on EC2 <http://askubuntu.com/questions/149952/is-juju-specific-to-ubuntu-os-on-ec2> | 15:53 |
SpamapS | I love the askubuntu bot | 16:33 |
SpamapS | really.. fantastic idea | 16:33 |
m_3 | SpamapS: yeah, me too... can we answer it here? | 16:52 |
jcastro | imbrandon: you don't need to add "new-charm" anymore, just setting the status Does The Right Thing(tm) | 16:53 |
jcastro | also dude, awesome job on the RPMs! | 16:53 |
jcastro | has anyone tried them yet? | 16:53 |
imbrandon | umm not sure | 16:53 |
imbrandon | i just put the word out a few hours ago | 16:53 |
imbrandon | so no feedback yet | 16:53 |
jcastro | you blog it or need me to? | 16:54 |
imbrandon | i'm about to blog about that and my new "download for ubuntu" button soon, so should get some | 16:54 |
imbrandon | soonish | 16:54 |
jcastro | k, poke me and I'll syndicate it on cloud.u.c | 16:54 |
imbrandon | you're more than welcome to, more ppl read yours i think | 16:54 |
imbrandon | kk | 16:54 |
imbrandon | i'm pretty sure my whole juju category is syndicated already | 16:55 |
imbrandon | on cloud.u.c | 16:55 |
imbrandon | i'll make sure tho | 16:55 |
jcastro | no I need to post it, it doesn't automatically publish | 16:55 |
imbrandon | ahh, kk | 16:55 |
SpamapS | m_3: no, how would it give you credit? ;) | 16:55 |
imbrandon | SpamapS: logging in to the bot via API :) | 16:56 |
=== koolhead17|zzZZ is now known as koolhead17 | ||
SpamapS | imbrandon: good luck with that | 16:59 |
SpamapS | sounds like a yak to be shaved later | 16:59 |
m_3 | hmmm... traveling to nepal lately? | 17:00 |
imbrandon | SpamapS: heh | 17:00 |
adam_g | hazmat: ping | 17:05 |
hazmat | adam_g, pong | 17:06 |
adam_g | hazmat: looking at snapshot.py of charm runner for the first time... is clean_juju_state() something that can be easily adapted to work with a non-local environment? | 17:08 |
hazmat | adam_g, about to get into a meeting.. | 17:08 |
hazmat | adam_g, the state cleaning yes, the storage cleaning no | 17:08 |
hazmat | adam_g, we don't have a provider storage method for killing files | 17:09 |
adam_g | hazmat: if i wanted to just script around it via ssh/paramiko, i'd just be deleting the related files from the web storage right? | 17:09 |
hazmat | adam_g, what provider? | 17:10 |
adam_g | MAAS | 17:10 |
adam_g | actually, i wouldn't even need to do that. I've got local access to the MAAS server | 17:10 |
hazmat | adam_g, if it's MaaS, i'd check if they have an api for deleting files and just sniff ~/.environments.yaml by hand for the creds to delete the files | 17:10 |
adam_g | hazmat: I'll start there, thanks | 17:10 |
m_3 | negronjl: yo... gotta sec? | 17:47 |
negronjl | m_3: sure | 17:47 |
m_3 | g+? | 17:47 |
negronjl | m_3: sure .. give me a sec. I'll invite you when I'm there | 17:48 |
m_3 | ok | 17:48 |
negronjl | m_3: started, invite sent | 17:49 |
themiwi | m_3: Sorry, got interrupted and then had to dash away to catch the train. Yes, passing this information as a relation variable to override the default would be great. However, I'd like to also provide a sensible default choice which is not just a pessimistic 1. | 18:20 |
m_3 | whoohoo... jim's talk/demo went well | 18:53 |
SpamapS | sweet | 19:01 |
jcastro | nice | 19:01 |
SpamapS | themiwi: interesting problem you're trying to solve. In what case is it important to know the *remote* CPU count for service configuration? | 19:03 |
SpamapS | themiwi: either way, you don't have to provide a default. Just keep running your changed hook until the other side has *set* that value. | 19:03 |
SpamapS | themiwi: the changed hooks can ping-pong back and forth a few times. | 19:04 |
themiwi | SpamapS: I'm trying to cook up a charm for the Sun Grid Engine (SGE), where the master/head node needs to know the number of slots it can allocate on the compute node. | 19:04 |
themiwi | SpamapS: yes, that's what I'm going to do now. I set it in the *-relation-joined hook of the compute charm and then keep querying it in the *-relation-changed of the head charm | 19:05 |
SpamapS | themiwi: perfect :) | 19:21 |
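A sketch of that ping-pong pattern from the head charm's side; the `nodecpus` key and the registration step are placeholders:

```bash
#!/bin/bash
# head charm's *-relation-changed hook
slots=$(relation-get nodecpus)
if [ -z "$slots" ]; then
    # The compute side hasn't set the value yet; this hook fires
    # again once it does, so just exit cleanly and wait.
    exit 0
fi
# ... register $slots slots with the SGE master here ...
```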
SpamapS | themiwi: I would have thought grid engine would have its own RPC to talk to its nodes and figure things like CPU's out | 19:22 |
SpamapS | anyway.. time to find nourishment | 19:22 |
themiwi | SpamapS: I suppose it could, but the thing with cluster administration is that administrators want to be able to adjust every tiny detail and would probably hate such automatisms... | 19:23 |
SpamapS | hah | 19:24 |
SpamapS | tweakers | 19:24 |
themiwi | SpamapS: ;-) yep. they take pride in squeezing every single flop from their clusters. | 19:26 |
* m_3 is a card carrying tweaker | 19:30 | |
themiwi | Another question: Whenever a relation is added, the variable nodecpus is set to the number of processors/cores of the slave. so far so good. however, the master needs to maintain a *sum* of all node cpus. Where would I store this persistent information? | 20:22 |
m_3 | themiwi: juju doesn't choose this for you... I'd stick it in a map or db on the filesystem (we often use /var/lib/juju/ to house such things). some folks like facter.. there're lots of options. key is that this map will update over time as nodes die and/or join | 20:26 |
m_3 | themiwi: hooks xxx-relation-changed, xxx-relation-departed, and xxx-relation-broken will have to recalculate that number over the lifetime of the cluster | 20:27 |
themiwi | m_3: was just wondering whether i could do something like "unit-set cpucount $ncpus"... | 20:27 |
themiwi | m_3: but maintaining a simple text file containing that figure is also no problem. | 20:28 |
m_3 | themiwi: relation-list is useful here | 20:28 |
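Putting those pieces together, a rough sketch of the recalculation; hook names, the `nodecpus` key, and the state-file path are hypothetical, and it assumes relation-get can be pointed at each unit returned by relation-list:

```bash
#!/bin/bash
# master's *-relation-changed and *-relation-departed hooks
total=0
for unit in $(relation-list); do
    cpus=$(relation-get nodecpus "$unit")
    [ -n "$cpus" ] && total=$((total + cpus))
done
echo "$total" > /var/lib/juju/sge-total-cpus    # per the convention above
```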
m_3 | themiwi: oh, so config allows you to set stuff from the command line too... | 20:28 |
m_3 | there's a config-changed hook | 20:28 |
themiwi | m_3: sure. writing these charms is quite intricate and the documentation rather scarce, it seems :-) | 20:28 |
m_3 | and then `juju set var=value` commands | 20:28 |
m_3 | yeah... some are really easy... some really hard... depends on the service | 20:29 |
themiwi | especially keeping things straight with all that ping-ponging between xxx-relation-changed hooks is a bit daunting | 20:30 |
m_3 | yes | 20:30 |
=== marcoceppi_ is now known as marcoceppi | ||
m_3 | it's super-powerful though | 20:31 |
m_3 | it's auto-negotiation for config params that you usually have to set manually or in config scripts | 20:32 |
themiwi | sure, but you also have to be super-careful | 20:34 |
imbrandon | its a one time investment though :) | 20:57 |
imbrandon | or should be ... heh | 20:57 |
m_3 | maybe once per long-term release :) | 20:57 |
imbrandon | right | 20:58 |
imbrandon | :) | 20:58 |
jcastro | imbrandon: heya | 21:01 |
jcastro | hazmat never got your ubuntufication of the sphinx template | 21:01 |
jcastro | can you hook him up so our docs look awesome? | 21:01 |
jcastro | hazmat: this is handy btw, http://jujucharms.com/tools/store-missing | 21:02 |
jcastro | nice one on that one | 21:02 |
imbrandon | jcastro: sure | 21:06 |
imbrandon | jcastro: yea i hadn't sent it to anyone really yet | 21:06 |
imbrandon | like the download button ? heh u should pimp it a lil for me :) | 21:07 |
jcastro | no, you said you styled the stuff for juju.ubuntu.com/docs | 21:07 |
jcastro | the sphinx stuff | 21:07 |
imbrandon | doing it now, is there a reason not to just do a merge req, i mean i dont mind sending them directly to hazmat, but just curious | 21:08 |
imbrandon | and yes thats what i said "sure" to, and then added the bit about the button | 21:09 |
imbrandon | :) | 21:09 |
imbrandon | but yea, let me dig that branch up, it's buried in my ~/Projects folder and then i'll pass it around, give me ~30ish min | 21:09 |
jcastro | well remember I was like "show daniel too, a bunch of other ubuntu stuff uses sphinx" | 21:09 |
imbrandon | yup yup | 21:09 |
jcastro | no rush or whatever | 21:09 |
hazmat | imbrandon, just do a merge request | 21:10 |
jcastro | I just happened to be talking about it with kapil | 21:10 |
hazmat | imbrandon, my only concern is if it requires updates to the installation | 21:10 |
imbrandon | hazmat: cool cool, yea i dont mind, was just like ummm? heh | 21:10 |
imbrandon | nah | 21:10 |
imbrandon | its just some new files in _templates | 21:10 |
hazmat | imbrandon, the build should cron update and run automatically, so as long as there aren't any new sphinx extensions it should just work as an MP proposal | 21:11 |
imbrandon | that's about it, don't even think the build rules need updates | 21:11 |
imbrandon | if i remember right | 21:11 |
imbrandon | yup yup | 21:11 |
imbrandon | btw it does use the html build target right? | 21:12 |
imbrandon | as far as whats published | 21:12 |
imbrandon | not like the all-in-one html or something weird | 21:12 |
jcastro | SpamapS: ok so like brandon is winning with his packages. :) | 21:17 |
SpamapS | jcastro: I can't do anything to thwart the Debian NEW queue | 21:22 |
imbrandon | hahah i'm sure SpamapS will mucho win in the end, i actually had a lot of help unknowingly from hazmat doing the footwork over at pypi.python.org a while back | 21:25 |
imbrandon | so i cheated :) | 21:25 |
imbrandon | SpamapS: btw your interview over at omg is on fire, there are over 200 ppl currently viewing it and a ton of positive twitter comments ( despite some of the "ohh its not desktop" idiots ) | 21:29 |
imbrandon | been like that for like 3 or 4 hours now | 21:29 |
SpamapS | imbrandon: heh cool. :) #2 this week | 21:31 |
koolhead17 | SpamapS, :) | 21:45 |
imbrandon | SpamapS: ugh, can we ( read: you ) rename the jitsu binary to juju-jitsu? hehe ( i know, i know, probably not ) ... there is a conflict with another deployment tool named jitsu and both don't like to live in bin/jitsu at the same time heh | 21:49 |
imbrandon | no biggie though, let's just hope most ppl aren't like me and don't have both installed, or want to at least | 21:49 |
imbrandon | ( the other jitsu is Joyent's tool to deploy and manage http://no.de and http://jit.su accounts | 21:50 |
imbrandon | for nodejs ) | 21:50 |
SpamapS | imbrandon: oh there's another jitsu? | 21:54 |
imbrandon | yea | 21:54 |
imbrandon | i found out the hard way | 21:54 |
SpamapS | why does node.js take all the best names | 21:54 |
imbrandon | heh | 21:54 |
imbrandon | i went to install juju-jitsu and it was like ummm there is something there already | 21:54 |
imbrandon | i looked and remembered | 21:55 |
imbrandon | heh | 21:55 |
imbrandon | i was like damn! | 21:55 |
SpamapS | node also conflicts with some broke-ass old AX25 packet thing called node | 21:55 |
imbrandon | but yea its probably a corner case | 21:55 |
imbrandon | heh | 21:55 |
SpamapS | not really a corner case | 21:55 |
SpamapS | I can see them conflicting | 21:56 |
SpamapS | there ought to be a single registry for bin names | 21:56 |
imbrandon | really it's used as a wrapper anyhow, so if it was just named juju-jitsu for the bin | 21:56 |
imbrandon | it would probably be fine | 21:56 |
imbrandon | mostly* a wrapper | 21:56 |
SpamapS | I want it called jitsu | 21:57 |
SpamapS | typing juju-jitsu sucks | 21:57 |
imbrandon | cuz once you've done juju-jitsu wrap-juju, it won't matter tho :) | 21:57 |
imbrandon | and only rename the bin not everything :) | 21:57 |
imbrandon | i dont really care, but i did run into it today heh | 21:57 |
SpamapS | I'd be willing to relent and do jjitsu | 21:57 |
imbrandon | heh | 21:58 |
imbrandon | jj :) that would make a good alias, /me adds to .bash_profile | 21:58 |
imbrandon | i wanna add real alias support too btw, like hub has for git, you just "alias git=hub" in bash_profile and go on about your business | 21:59 |
imbrandon | its sweet | 21:59 |
imbrandon | that's one of those $sometimes things | 22:00 |
imbrandon | actually it's more like .... eval "$(hub alias -s)" | 22:01 |
imbrandon | but still, same thing, just works if you use zsh or any shell | 22:01 |
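What that could look like for jitsu, purely hypothetically (jitsu has no alias support today; this just mirrors hub's trick):

```bash
# ~/.bash_profile: the jj alias joked about above, plus hub's real version for comparison
alias jj='jitsu'
# eval "$(hub alias -s)"
```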
negronjl | SpamapS, jcastro, m_3: Do we know which day is the Charm School planned for during Velocity ? | 22:48 |
SpamapS | negronjl: did we decide to do one w/o the conference blessing? | 22:49 |
negronjl | SpamapS: not sure yet, hence the question :) | 22:49 |
SpamapS | negronjl: IIRC, it was rejected by them | 22:50 |
negronjl | SpamapS: ah. thx | 23:07 |
SpamapS | http://packages.qa.debian.org/j/juju.html | 23:31 |
m_3 | negronjl: nope... dunno | 23:48 |
negronjl | m_3: thx | 23:51 |
imbrandon | SpamapS: saweet | 23:52 |
imbrandon | :P | 23:52 |
imbrandon | SpamapS: here, this one is all for you :) http://www.brandonholtsclaw.com/blog/2012/juju-everywhere#comment-555572949 | 23:53 |
imbrandon | Mr Spaleta | 23:54 |
SpamapS | Our biggest fan | 23:55 |