hazmat | marcoceppi, merged, incidentally i've been splitting out the various bits of deployer into plugins in the core-plugins branch | 01:20 |
marcoceppi | hazmat: ah, cool | 01:53 |
AskUbuntu | agent-state-info: 'hook failed: "config-changed" deploy wordpress using juju | http://askubuntu.com/q/335720 | 07:51 |
mthaddon | any chance of a review for https://code.launchpad.net/~mthaddon/charms/precise/pgbouncer/package-holds/+merge/180533 - been sitting in the queue for a little while and is relatively trivial | 08:11 |
sinzui | hi charmers. I am seeing "ERROR Invalid SSH key" for juju status for charmworld on canonistack. I got a juju update for saucy today, but I also set the alternative to juju-0.7. Any clues about resolving this? | 13:10 |
sinzui | My access was fine 18 hours ago | 13:11 |
marcoceppi | sinzui: when you type juju --version does it say 0.7? | 13:14 |
sinzui | yes | 13:14 |
sinzui | I can confirm that I can ssh to each machine when I specify the proper key. I think the wrong key is being selected. | 13:15 |
marcoceppi | sinzui: has your id_rsa changed in the last 18 hours? | 13:15 |
sinzui | marcoceppi, no keys have changed | 13:16 |
marcoceppi | sinzui: can you ssh directly in to the bootstrap? | 13:17 |
sinzui | marcoceppi, I certainly can | 13:19 |
marcoceppi | sinzui: hum, sorry. It's been a while since I've used 0.7, >1.x has really spoiled me. Verify that your id_rsa.pub is in the authorized_keys file in .ssh/ on the bootstrap node? | 13:20 |
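A minimal sketch of the check marcoceppi suggests, assuming a juju 0.7 bootstrap node on canonistack; the address is a placeholder for whatever `nova list` reports:

```
# Compare the key on the bootstrap node against the local key juju uses;
# <bootstrap-address> is a placeholder
ssh ubuntu@<bootstrap-address> 'cat ~/.ssh/authorized_keys'
cat ~/.ssh/id_rsa.pub
```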
sinzui | marcoceppi, We use team credentials to manage staging.jujucharms.com. I can access every instance shown by nova list. | 13:20 |
marcoceppi | Then verify you're trying to status against the right environment? | 13:20 |
sinzui | the env is always right. | 13:21 |
sinzui | marcoceppi, I just bootstrapped a new env on canonistack. Juju cannot get the status for the same reason as the old env. | 13:22 |
marcoceppi | sinzui: there was an update to 0.7 recently | 13:22 |
marcoceppi | I have no idea what it was, but it appears to have broken this | 13:22 |
sinzui | thanks marcoceppi | 13:23 |
marcoceppi | sinzui: well, http://bazaar.launchpad.net/~juju/juju/0.7/revision/632 does not look like it'll hurt anything. I wonder if there was a change earlier that didn't get built because of build errors that caused this | 13:24 |
marcoceppi | sinzui: either way, bugs should be filed, probably even want to bother #juju-dev about it | 13:25 |
sinzui | marcoceppi, I agree. I am checking if others on the team can work with canonistack. I might spend some time with the GSAs | 13:26 |
marcoceppi | sinzui: might be a good time to move to juju 1.12.0 ;) | 13:26 |
sinzui | marcoceppi, I think it would be very irresponsible to build on modern juju but deploy on juju 0.7 | 13:28 |
sinzui | When prodstack allows new juju, we can switch | 13:28 |
sinzui | marcoceppi, I can work with canonistack again after a reboot. I have no idea what was buggered. Possibly the flip-flop from juju 0.7 -> 1.13 -> 0.7 tainted something | 13:57 |
sidnei | sinzui: it already does | 14:06 |
sidnei | sinzui: as in, we just deployed a couple prod services with new juju | 14:06 |
marcoceppi | sinzui: that's really weird. | 14:06 |
marcoceppi | glad you were able to get it resolved | 14:06 |
sinzui | sidnei, I was told we cannot use new juju until fenchurch is proven | 14:07 |
sidnei | sinzui: i don't know what that's about. i was told we cannot use old juju for new deployments at all. | 14:08 |
sinzui | great. mixed messages. I would love to have just one juju installed | 14:09 |
marcoceppi | sinzui: I have no idea what that is about, but we're really moving away from 0.7 asap | 14:10 |
sidnei | mthaddon can probably clarify | 14:11 |
mthaddon | so what's the question here? | 14:12 |
sidnei | "<sinzui> sidnei, I was told we cannot use new juju until fenchurch is proven" | 14:13 |
mthaddon | sinzui: who told you that? | 14:13 |
sinzui | elmo mentioned it at IoM | 14:14 |
sinzui | ^ mthaddon | 14:14 |
mthaddon | sinzui: I'll check with elmo, but I highly doubt he said that, as he's pushing us to use juju-core for any new environments in prodstack right now | 14:15 |
sinzui | mthaddon, I would be happy to move charmworld and gui to new juju | 14:15 |
elmo | sinzui: sorry for any confusion over what I said; but use of juju-core is not blocked by anything in IS. As mthaddon said, we have a mandate to use juju-core and only juju-core for any new services or complete redeployments of an existing service | 14:27 |
sinzui | elmo: thanks. I will bring this up with the gui team | 14:28 |
kurt_ | Does anyone recognize the error "Unable to retrieve authorized projects." from openstack-dashboard? | 14:37 |
Ex1T | Heyhi | 14:54 |
marcoceppi | jamespage: so, charm-tools has changed a lot since the last time it was sync'd to the archives. How would I start the process of getting a new version sync'd to Saucy? Also, what's the latest I could sync as I've got a new version coming out soon | 15:23 |
jamespage | marcoceppi, feature freeze is next thursday | 15:23 |
jamespage | so ideally wednesday | 15:23 |
jamespage | marcoceppi, is the packaging itself still OK? | 15:24 |
marcoceppi | jamespage: the recipe has changed in addition to the contents | 15:24 |
marcoceppi | jamespage: actually, it looks like the saucy recipe has been updated | 15:25 |
marcoceppi | https://code.launchpad.net/~marcoceppi/ubuntu/saucy/charm-tools/fix-deps/+merge/165161 | 15:26 |
jamespage | marcoceppi, thats what I wanted! | 15:27 |
jamespage | marcoceppi, so juju -> juju-core in Saucy | 15:27 |
jamespage | juju-0.7 will be the old package name | 15:28 |
marcoceppi | jamespage: gotchya, so for saucy it can still recommend/suggest "juju" as juju is the new metapackage that installs juju-core with update-alternatives? | 15:28 |
jamespage | thats it | 15:28 |
marcoceppi | jamespage: cool, that fix was because in precise, if you install charm-tools from ppa and juju-core from ppa, you get a broken install as juju installs 0.7 | 15:29 |
marcoceppi | I'll open another update | 15:29 |
jamespage | the ppa builds should really all do the right things by now - i.e. use alternatives | 15:30 |
marcoceppi | jamespage: right, but back when this change was made the ppa version of juju-core and juju in the precise archives clashed a bit | 15:32 |
jamespage | marcoceppi, probably still does | 15:32 |
marcoceppi | hum, so maybe I'll keep this for the ppa of charm-tools, just so things don't die, but I'll at least update the saucy version, thanks! | 15:33 |
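For context, a sketch of the alternatives mechanism discussed above; this assumes the link group is named juju, matching the metapackage behaviour jamespage describes:

```
# Inspect and switch the juju alternative between 0.7 and juju-core
update-alternatives --list juju          # paths registered for the juju command
sudo update-alternatives --config juju   # interactively pick which one /usr/bin/juju runs
```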
kurt_ | Can a version mismatch for keystone cause problems? ie. client (openstack-dashboard) and keystone node? | 16:43 |
kurt_ | http://pastebin.ubuntu.com/6014649/ | 16:43 |
kurt_ | I'm working hard to trace keystone auth issues and trying to understand where my problems are. | 16:43 |
marcoceppi | kurt_: iirc mismatched versions of openstack and keystone (grizzly vs folsom, for instance) cause problems | 17:01 |
kurt_ | marcoceppi: from my pastebin - is a version mismatch what you see? | 17:02 |
jcastro | jamespage: hey have you tried that python redux bundle on AWS or HP Cloud? | 17:03 |
jamespage | jcastro, sorry - no | 17:04 |
jcastro | ok so hp cloud doesn't work for me | 17:04 |
jcastro | however, for about the first 3 minutes it works awesome | 17:04 |
marcoceppi | kurt_: not sure which version is which. one second | 17:04 |
jcastro | jamespage: I need to sort some environment issues but I think I am close | 17:04 |
marcoceppi | jcastro: I couldn't get networking to work on HP cloud. dashboard and glance worked | 17:05 |
jcastro | jamespage: it's pretty badass watching deployer fire up stuff like that. | 17:05 |
jamespage | jcastro, yeah - sorry - up to my eyeballs in kernel incompatibility problems with openvswitch in saucy right now | 17:13 |
jamespage | (if I seem a little distracted) | 17:13 |
jcastro | no worries | 17:18 |
jcastro | I was expecting you to be EODed anyway | 17:18 |
kurt_ | marcoceppi: any chance to look at my pastebin? | 17:37 |
marcoceppi | kurt_: sorry, was mobile. Back at desk | 17:56 |
kurt_ | marcoceppi: no worries - just trying to sort through my final set of problems getting openstack running :) | 17:57 |
kurt_ | I'm so close | 17:57 |
kurt_ | I have these weird auth issues and still the cinder/ceph stuff to figure out | 17:57 |
kurt_ | one layer at a time :) | 17:57 |
marcoceppi | kurt_: I hear ya, let me see what's in the cloud archive. Ideally you want all your services running the same openstack release, i.e. grizzly, folsom, etc | 17:57 |
kurt_ | yes, there is definitely a mixture there | 17:57 |
kurt_ | folsom/grizzly | 17:58 |
marcoceppi | in that case you'll probably definitely want keystone on grizzly if it isn't already | 17:58 |
kurt_ | but I think all of it is the stock stuff from the gui | 17:58 |
kurt_ | keystone itself is, I believe | 17:58 |
kurt_ | the pastebin should validate that | 17:58 |
marcoceppi | kurt_: not sure if this is a valid statement, maybe jamespage can correct me, but you'll want almost all the openstack charms using cloud:precise-updates/grizzly as their openstack-origin | 18:00 |
kurt_ | funny thing is both openstack-dashboard and keystone have that | 18:01 |
kurt_ | validating... | 18:01 |
kurt_ | cloud:precise-grizzly | 18:02 |
kurt_ | that's what I have been using | 18:02 |
kurt_ | that's what all of the docs say to use I believe | 18:02 |
kurt_ | keystone does not have its origin explicitly set | 18:03 |
kurt_ | from the gui anyways | 18:03 |
marcoceppi | jamespage: could you, when you get a chance, verify the right openstack-origin for grizzly and openstack charms? | 18:03 |
kurt_ | I found in some cases, like ceph I believe, that grizzly wasn't available | 18:04 |
kurt_ | but as I said, I want to strip off one layer at a time till I get this to work | 18:05 |
marcoceppi | kurt_: so when I had keystone problems and dashboard, I ended up using the wrong version of keystone | 18:05 |
kurt_ | yes, I definitely cannot pull a token from the dashboard. 500 error. So that's a basic problem | 18:06 |
kurt_ | dashboard -> keystone | 18:06 |
kurt_ | but I can get token fine locally from keystone | 18:06 |
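A hedged sketch of the local token check kurt_ mentions, using the Keystone v2.0 API that grizzly ships; host, tenant, and credentials are all placeholders:

```
# Request a token straight from keystone, bypassing the dashboard
curl -s -X POST http://<keystone-host>:5000/v2.0/tokens \
  -H 'Content-Type: application/json' \
  -d '{"auth": {"tenantName": "admin",
                "passwordCredentials": {"username": "admin",
                                        "password": "<password>"}}}'
# A token here but a 500 via the dashboard points at the
# dashboard -> keystone configuration rather than keystone itself
```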
kurt_ | jcastro: IMHO it would be very useful to print DNS names on the deployed nodes on the canvas of the gui | 18:09 |
kurt_ | it would save the user a step of lookup when troubleshooting from the MAAS perspective | 18:09 |
marcoceppi | kurt_: I hear there's a bunch of updates to the gui coming wrt the way you drill down into a service; while I don't think you'll get node names on canvas (imagine a service with 100 units deployed), it should be less tedious to drill down going forward | 18:10 |
kurt_ | I was thinking it would be much less tedious for the user to see the actual node name on the icon in the gui rather than having to drill down to the node to see it | 18:11 |
kurt_ | small production/time-saver thing | 18:12 |
marcoceppi | kurt_: but those icons represent a service, which has 1 or more units. The units are the ones that get a name | 18:12 |
kurt_ | ah yes, true | 18:12 |
sarnold | if there's only one unit in a service, it might be a nice optimization to also show the unit name | 18:12 |
marcoceppi | so in the event of multiple units that representation would be lost either by not having it displayed or by having too much info to show | 18:12 |
marcoceppi | sarnold: possibly, but I don't want to break users' expectations for a unit name | 18:12 |
kurt_ | what would be *really* nice is mouseover with a pop-up of all hosts associated with the service :D | 18:13 |
marcoceppi | one mysql unit and three wordpress units: mysql shows a unit name, wordpress doesn't. May be perceived as broken | 18:13 |
marcoceppi | kurt_: that might be a better use case actually | 18:13 |
sarnold | marcoceppi: yeah, but there's something to be said for reducing needless clicks where it can make sense, too. | 18:13 |
marcoceppi | kurt_: you could file a bug against juju-gui with the suggestion, to get feedback from them. I have no say over this in the end :) | 18:13 |
sarnold | mouseover, okay, I like that idea. :) | 18:13 |
kurt_ | I can add that to the list :D | 18:14 |
sarnold | a few thousand can hide in a mouseover without too much hassle :) | 18:14 |
marcoceppi | sarnold: ;) I imagine at that point you'd want to run juju status from the command line | 18:14 |
kurt_ | sure. but from a quick view administrative perspective it would save a lot of time in many cases | 18:15 |
rick_h | I think there's something that will help underway. Try adding :flags:/serviceInspector/ to the url | 18:15 |
marcoceppi | kurt_: certainly | 18:15 |
rick_h | it will show the units like mongodb/0 /1 and such | 18:15 |
rick_h | is that what you're looking for? | 18:16 |
kurt_ | is that directed to me rick_h? | 18:16 |
rick_h | kurt_: kinda, at the general conversation | 18:16 |
* marcoceppi tries | 18:16 |
rick_h | about wanting to see some 'name' when clicking on a service | 18:16 |
rick_h | http://comingsoon.jujucharms.com/:flags:/serviceInspector/ - deploy mongodb - click on the service icon - go to units and it shows the unit name | 18:17 |
kurt_ | I was thinking more along the lines of the specific hostnames associated with the service, like qxkgb.master | 18:17 |
rick_h | kurt_: ah, the hostname, hmm | 18:17 |
kurt_ | juju is but one layer of the information | 18:18 |
rick_h | so I think clicking on a unit mongodb/0 will show you that info then | 18:18 |
rick_h | but it's not on hover | 18:18 |
rick_h | so it'd be 3 clicks in | 18:18 |
kurt_ | my suggestion would mean 0 clicks :D | 18:18 |
marcoceppi | rick_h: whoa, this is cool. What's S C E D do? | 18:19 |
rick_h | yea, there's a ton of things we can try to show, but too much info sucks | 18:19 |
rick_h | marcoceppi: go to comingsoon. Updates there add the icons and such | 18:19 |
marcoceppi | rick_h: ack, will upgrade-charm on a deployed juju-gui give me the latest? | 18:19 |
rick_h | marcoceppi: eventually | 18:19 |
marcoceppi | ;__; okay | 18:20 |
rick_h | marcoceppi: oh, sorry, thought you meant 'update-charm' from something in the gui. That's in progress. | 18:20 |
marcoceppi | booya, constraints | 18:20 |
rick_h | marcoceppi: but to your original question no, it'll only get you the last 'release' | 18:20 |
marcoceppi | rick_h: gotchya, I'll just wait for the next release | 18:21 |
rick_h | and we're a couple of weeks from our last release right now so comingsoon is the latest/greatest | 18:21 |
rick_h | marcoceppi: rgr | 18:21 |
marcoceppi | rick_h: possibly putting the address/hostname in () next to the running units list in the serviceinspector might be a plausible alternative | 18:21 |
marcoceppi | at least a truncated version of it with a hyperlink to that hostname | 18:22 |
rick_h | marcoceppi: maybe, but hostnames tend to be long sucky things. Think about the hostnames on the aws instances :/ | 18:22 |
rick_h | heh, yea | 18:22 |
rick_h | some work to figure out something there | 18:22 |
marcoceppi | mongodb/0 (az-123.1231.4...) | 18:22 |
marcoceppi | rick_h: ack, but there's some user feedback for you. | 18:22 |
marcoceppi | thanks for the feedback btw, kurt_ | 18:22 |
kurt_ | sure | 18:23 |
rick_h | marcoceppi: definitely. Wanted to clear up what was being asked for. That helps for sure | 18:23 |
marcoceppi | rick_h: will 'replace' be used for juju upgrade-charm --switch?! | 18:23 |
rick_h | marcoceppi: no idea | 18:24 |
jamespage | kurt_, marcoceppi: the ceph and openstack-charms differ a little | 19:02 |
jamespage | source: cloud:precise-updates/grizzly for ceph | 19:02 |
jamespage | and openstack-origin: cloud:precise-grizzly for the openstack charms | 19:02 |
marcoceppi | jamespage: thanks for the confirmation! | 19:04 |
jamespage | marcoceppi, np | 19:04 |
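Applying the values jamespage confirms, via juju's config command; the service names below are just the usual charm names and may differ in a given deployment:

```
# ceph uses 'source', the openstack charms use 'openstack-origin'
juju set ceph source=cloud:precise-updates/grizzly
juju set keystone openstack-origin=cloud:precise-grizzly
juju set cinder openstack-origin=cloud:precise-grizzly
```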
kurt_ | jamespage: I've been looking for a better guide on deployment/integration cinder with ceph for openstack in grizzly | 19:10 |
kurt_ | I'm having a lot of problems figuring out what works and what doesn't | 19:10 |
kurt_ | scuttle monkey's guide is very good, but it's outdated and doesn't deal with integrating stock grizzly. | 19:11 |
weblife | Darn someone beat me to it. Thought gitolite would make a good charm. | 21:06 |
marcoceppi | weblife: I don't think it's in the store yet | 21:07 |
marcoceppi | weblife: oops, yes it is | 21:07 |
weblife | someone called it gitlab | 21:07 |
marcoceppi | weblife: gitlab != gitolite | 21:07 |
weblife | That would have been a winning charm | 21:08 |
marcoceppi | https://github.com/sitaramc/gitolite/wiki http://gitlab.org/ | 21:08 |
marcoceppi | weblife: I know the gitlab charm could use some work | 21:08 |
marcoceppi | and there isn't a gitlab_ci charm yet | 21:09 |
marcoceppi | nor a gitlab-shell charm | 21:09 |
marcoceppi | If you were interested in growing the gitlabhq services | 21:09 |
weblife | I am. Probably will help expand on it where I can. Just been thinking what I can do to enter this contest. Need the money :) | 21:11 |
marcoceppi | For instance, you could have a gitlab-shell charm that talks with NFS, allowing you to scale out the git repositories side, then have gitlab set up to talk to gitlab-shell (which is required in newer versions), and have gitlab_ci deployed and a relation between the two to automatically set up CI for repos | 21:11 |
marcoceppi | the gitlab charm would have to be updated to reflect the new gitlab-shell (and probably other things) so you'd have quite a bit, but I think with time you could eventually demo a github* alternative at scale :) | 21:12 |
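A purely hypothetical sketch of the topology marcoceppi outlines; the gitlab-shell and gitlab-ci charms did not exist at the time, so every charm name and relation below is an assumption:

```
# Hypothetical gitlabhq stack; none of these charms are confirmed to exist
juju deploy nfs
juju deploy gitlab-shell
juju add-relation gitlab-shell nfs       # scale out the repository storage side
juju deploy gitlab
juju add-relation gitlab gitlab-shell    # newer gitlab requires gitlab-shell
juju deploy gitlab-ci
juju add-relation gitlab gitlab-ci       # wire repos into CI automatically
```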
weblife | not a bad idea. Every software company would love that. | 21:13 |
weblife | Saved this convo for later review, would like to do something like this. | 21:18 |
jamespage | kurt_, have you seen https://wiki.ubuntu.com/ServerTeam/OpenStackHA? | 21:34 |
jamespage | it's for a full HA deployment; but it also documents a lot of general details about deploying openstack with juju | 21:35 |
jamespage | obviously you can drop the HA configurations (specifically vips) and the '-n 2' for most of the services | 21:35 |
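As an illustration of that simplification; the service name and config file are placeholders, not taken from the guide:

```
# HA form from the guide (illustrative):
#   juju deploy -n 2 --config config.yaml keystone
# single-unit form: drop -n 2, and omit any vip from config.yaml
juju deploy --config config.yaml keystone
```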
weblife | I know what I can do. An IRC bot charm! | 21:54 |
kurt_ | jamespage: I have indeed. But I believe that guide to be out of date. Swift for example I thought was no longer used. | 21:55 |
kurt_ | jamespage: specifically, information related to cinder/ceph deployment/integration with juju openstack (especially the gui). I'm having to piece together information from various sources, some of which unfortunately are out of date. | 22:02 |
kurt_ | sorry - info is hard to find | 22:02 |
weblife | m_3: you there | 22:47 |
weblife | m_3: I updated https://code.launchpad.net/~web-brandon/charms/precise/node-app/install-fix with your request. It now loads the default express template if no app is given ( http://ec2-54-212-165-14.us-west-2.compute.amazonaws.com ). It will also fetch the node version source specified in config.yaml, do a sha1 check, and build from source if it passes; it falls back to the PPA if no version is entered or the sha1 fails. | 23:02 |
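A hedged sketch of driving the behaviour weblife describes; the option names are guesses from the description, not confirmed from the charm's config.yaml:

```
# Hypothetical: pin a source-built node version on the node-app charm
juju set node-app node-version=0.10.15 node-sha1=<expected-sha1>
# leaving the version unset (or failing the sha1 check) falls back to the PPA
```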
weblife | I'm gonna do a few extra things too. Do I need to submit the merge again? It looks like it's still pending. | 23:04 |