[01:20] marcoceppi, merged, incidentally i've been splitting out the various bits of deployer into plugins in the core-plugins branch
[01:53] hazmat: ah, cool
[07:51] agent-state-info: 'hook failed: "config-changed" deploy wordpress using juju | http://askubuntu.com/q/335720
[08:11] any chance of a review for https://code.launchpad.net/~mthaddon/charms/precise/pgbouncer/package-holds/+merge/180533 - it's been sitting in the queue for a little while and is relatively trivial
[13:10] hi charmers. I am seeing "ERROR Invalid SSH key" from juju status for charmworld on canonistack. I got a juju update for saucy today, but I also set the alternative to juju-0.7. Any clues about resolving this?
[13:11] My access was fine 18 hours ago
[13:14] sinzui: when you type juju --version does it say 0.7?
[13:14] yes
[13:15] I can confirm that I can ssh to each machine when I specify the proper key. I think the wrong key is being selected.
[13:15] sinzui: has your id_rsa changed in the last 18 hours?
[13:16] marcoceppi, no keys have changed
[13:17] sinzui: can you ssh directly into the bootstrap?
[13:19] marcoceppi, I certainly can
[13:20] sinzui: hum, sorry. It's been a while since I've used 0.7, >1.x has really spoiled me. Verify that your id_rsa.pub is in the authorized_keys list in .ssh/ on the bootstrap?
[13:20] marcoceppi, We use a team credential to manage staging.jujucharms.com. I can access every instance shown by nova list.
[13:20] Then verify you're running status against the right environment?
[13:21] the env is always right.
[13:22] marcoceppi, I just bootstrapped a new env on canonistack. Juju cannot get the status for the same reason as the old env.
[13:22] sinzui: there was an update to 0.7 recently
[13:22] I have no idea what it was, but it appears to have broken this
[13:23] thanks marcoceppi
[13:24] sinzui: well, http://bazaar.launchpad.net/~juju/juju/0.7/revision/632 does not look like it'll hurt anything. I wonder if there was an earlier change that didn't get built because of build errors that caused this
[13:25] sinzui: either way, bugs should be filed, probably even want to bother #juju-dev about it
[13:26] marcoceppi, I agree. I am checking if others on the team can work with canonistack. I might spend some time with the GSAs
[13:26] sinzui: might be a good time to move to juju 1.12.0 ;)
[13:28] marcoceppi, I think it would be very irresponsible to build on modern juju but deploy on juju 0.7
[13:28] When prodstack allows new juju, we can switch
[13:57] marcoceppi, I can work with canonistack again after a reboot. I have no idea what was buggered. Possibly the flip-flop from juju 0.7 -> 1.13 -> 0.7 tainted something
[14:06] sinzui: it already does
[14:06] sinzui: as in, we just deployed a couple prod services with new juju
[14:06] sinzui: that's really weird.
[14:06] glad you were able to get it resolved
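(The 0.7 <-> 1.13 flip-flop sinzui mentions is managed through update-alternatives, so when status misbehaves it's worth confirming which binary "juju" actually resolves to. A minimal sketch; the package names shown in the comments are illustrative:)

    $ juju --version                           # which juju answered?
    $ update-alternatives --display juju       # list registered alternatives and the active one
    $ sudo update-alternatives --config juju   # interactively switch, e.g. between juju-0.7 and juju-core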
[14:07] sidnei, I was told we cannot use new juju until fenchurch is proven
[14:08] sinzui: i don't know what that's about. i was told we cannot use old juju for new deployments at all.
[14:09] great. mixed messages. I would love to have just one juju installed
[14:10] sinzui: I have no idea what that is about, but we're really moving away from 0.7 asap
[14:11] mthaddon can probably clarify
[14:12] so what's the question here?
[14:13] "sidnei, I was told we cannot use new juju until fenchurch is proven"
[14:13] sinzui: who told you that?
[14:14] elmo mentioned it at IoM
[14:14] ^ mthaddon
[14:15] sinzui: I'll check with elmo, but I highly doubt he said that, as he's pushing us to use juju-core for any new environments in prodstack right now
[14:15] mthaddon, I would be happy to move charmworld and gui to new juju
[14:27] sinzui: sorry for any confusion over what I said, but use of juju-core is not blocked by anything in IS. As mthaddon said, we have a mandate to use juju-core and only juju-core for any new services or complete redeployments of an existing service
[14:28] elmo: thanks. I will bring this up with the gui team
[14:37] Does anyone recognize the error "Unable to retrieve authorized projects." from openstack-dashboard?
[14:54] Heyhi
[15:23] jamespage: so, charm-tools has changed a lot since the last time it was sync'd to the archives. How would I start the process of getting a new version sync'd to Saucy? Also, what's the latest I could sync, as I've got a new version coming out soon
[15:23] marcoceppi, feature freeze is next Thursday
[15:23] so ideally Wednesday
[15:24] marcoceppi, is the packaging itself still OK?
[15:24] jamespage: the recipe has changed in addition to the contents
[15:25] jamespage: actually, it looks like the saucy recipe has been updated
[15:26] https://code.launchpad.net/~marcoceppi/ubuntu/saucy/charm-tools/fix-deps/+merge/165161
[15:27] marcoceppi, that's what I wanted!
[15:27] marcoceppi, so juju -> juju-core in Saucy
[15:28] juju-0.7 will be the old package name
[15:28] jamespage: gotcha, so for saucy it can still recommend/suggest "juju", as juju is the new metapackage that installs juju-core with update-alternatives?
[15:28] that's it
[15:29] jamespage: cool, that fix was because in precise, if you install charm-tools from the ppa and juju-core from the ppa, you get a broken install as juju installs 0.7
[15:29] I'll open another update
[15:30] the ppa builds should really all do the right things by now - i.e. use alternatives
[15:32] jamespage: right, but back when this change was made the ppa version of juju-core and juju in the precise archives clashed a bit
[15:32] marcoceppi, probably still does
[15:33] hum, so maybe I'll keep this for the ppa of charm-tools, just so things don't die, but I'll at least update the saucy version, thanks!
[16:43] Can a version mismatch for keystone cause problems? i.e. between client (openstack-dashboard) and keystone node?
[16:43] http://pastebin.ubuntu.com/6014649/
[16:43] I'm working hard to trace keystone auth issues and trying to understand where my problems are.
[17:01] kurt_: iirc mismatched versions of openstack and keystone (grizzly vs folsom, for instance) cause problems
[17:02] marcoceppi: from my pastebin - is what you see a version mismatch?
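(One way to pin down which release each service is actually running is to check the installed packages on the units themselves, then point any lagging charm at the grizzly cloud archive. A sketch; the unit name is assumed, and note jamespage's confirmation later in the log that the ceph charm uses a 'source' option where the openstack charms use 'openstack-origin':)

    $ juju ssh keystone/0              # open a shell on the keystone unit
    $ dpkg -l | grep -i keystone       # on the unit: shows the installed keystone version
    $ exit
    $ juju set keystone openstack-origin=cloud:precise-grizzly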
[17:03] jamespage: hey have you tried that python redux bundle on AWS or HP Cloud?
[17:04] jcastro, sorry - no
[17:04] ok so hp cloud doesn't work for me
[17:04] however, for about the first 3 minutes it works awesome
[17:04] kurt_: not sure which version is which. one second
[17:04] jamespage: I need to sort some environment issues but I think I am close
[17:05] jcastro: I couldn't get networking to work on HP cloud. dashboard and glance worked
[17:05] jamespage: it's pretty badass watching deployer fire up stuff like that.
[17:13] jcastro, yeah - sorry - up to my eyeballs in kernel incompatibility problems with openvswitch in saucy right now
[17:13] (if I seem a little distracted)
[17:18] no worries
[17:18] I was expecting you to be EODed anyway
[17:37] marcoceppi: any chance to look at my pastebin?
[17:56] kurt_: sorry, was mobile. Back at desk
[17:57] marcoceppi: no worries - just trying to sort through my final set of problems getting openstack running :)
[17:57] I'm so close
[17:57] I have these weird auth issues and still the cinder/ceph stuff to figure out
[17:57] one layer at a time :)
[17:57] kurt_: I hear ya, let me see what's in the cloud archive. Ideally you want all your services running the same openstack release, i.e. grizzly, folsom, etc
[17:57] yes, there is definitely a mixture there
[17:58] folsom/grizzly
[17:58] in that case you'll definitely want keystone on grizzly if it isn't already
[17:58] but I think all of it is the stock stuff from the gui
[17:58] keystone itself is, I believe
[17:58] the pastebin should validate that
[18:00] kurt_: not sure if this is a valid statement, maybe jamespage can correct me, but you'll want almost all the openstack charms using cloud:precise-updates/grizzly as their openstack-origin
[18:01] funny thing is both openstack-dashboard and keystone have that
[18:01] validating...
[18:02] cloud:precise-grizzly
[18:02] that's what I have been using
[18:02] that's what all of the docs say to use, I believe
[18:03] keystone does not have its origin explicitly set
[18:03] from the gui anyways
[18:03] jamespage: could you, when you get a chance, verify the right openstack-origin for grizzly and openstack charms?
[18:04] I found in some cases, like ceph I believe, that grizzly wasn't available
[18:05] but as I said, I want to strip off one layer at a time till I get this to work
[18:05] kurt_: so when I had keystone problems and dashboard, I ended up using the wrong version of keystone
[18:06] yes, I definitely cannot pull a token from the dashboard. 500 error. So that's a basic problem
[18:06] dashboard -> keystone
[18:06] but I can get a token fine locally from keystone
[18:09] jcastro: IMHO it would be very useful to print DNS names on the deployed nodes on the canvas of the gui
[18:09] it would save the user a lookup step when troubleshooting from the MAAS perspective
[18:10] kurt_: I hear there's a bunch of updates to the gui coming wrt the way you drill down into a service; while I don't think you'll get node names on canvas (imagine a service with 100 units deployed) it should be less tedious to drill down into going forward
[18:11] I was thinking it would be much less tedious for the user to see the actual node name on the icon in the gui rather than having to drill down to the node to see it
[18:12] small production/time-saver thing
[18:12] kurt_: but those icons represent a service, which has 1 or more units. The units are the ones that get a name
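(For reference, the unit names and hostnames being discussed here are already available from the command line, as comes up just below; trimmed, illustrative output for a hypothetical mongodb deployment:)

    $ juju status
    ...
    services:
      mongodb:
        units:
          mongodb/0:
            agent-state: started
            public-address: ec2-54-212-165-14.us-west-2.compute.amazonaws.com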
[18:12] ah yes, true
[18:12] if there's only one unit in a service, it might be a nice optimization to also show the unit name
[18:12] so in the event of multiple units that representation would be lost, either by not having it displayed or by having too much info to show
[18:12] sarnold: possibly, but I don't want to break users' expectations for a unit name
[18:13] what would be *really* nice is mouseover with a pop-up of all hosts associated with the service :D
[18:13] one mysql unit and three wordpress units; mysql shows a unit name, wordpress doesn't. May be perceived as broken
[18:13] kurt_: that might be a better use case actually
[18:13] marcoceppi: yeah, but there's something to be said for reducing needless clicks where it can make sense, too.
[18:13] kurt_: you could file a bug against juju-gui with the suggestion, to get feedback from them. I have no say over this in the end :)
[18:13] mouseover, okay, I like that idea. :)
[18:14] I can add that to the list :D
[18:14] a few thousand can hide in a mouseover without too much hassle :)
[18:14] sarnold: ;) I imagine at that point you'd want to run juju status from the command line
[18:15] sure. but from a quick-view administrative perspective it would save a lot of time in many cases
[18:15] I think there's something that will help underway. Try adding :flags:/serviceInspector/ to the url
[18:15] kurt_: certainly
[18:15] it will show the units like mongodb/0 /1 and such
[18:16] is that what you're looking for?
[18:16] is that directed to me, rick_h?
[18:16] kurt_: kinda, at the general conversation
[18:16] * marcoceppi tries
[18:16] about wanting to see some 'name' when clicking on a service
[18:17] http://comingsoon.jujucharms.com/:flags:/serviceInspector/ - deploy mongodb - click on the service icon - go to units and it shows the unit name
[18:17] I was thinking more along the lines of the specific hostnames associated with the service, like qxkgb.master
[18:17] kurt_: ah, the hostname, hmm
[18:18] juju is but one layer of the information
[18:18] so I think clicking on a unit mongodb/0 will show you that info then
[18:18] but it's not on hover
[18:18] so it'd be 3 clicks in
[18:18] my suggestion would mean 0 clicks :D
[18:19] rick_h: whoa, this is cool. What does S C E D do?
[18:19] yea, there's a ton of things we can try to show, but too much info sucks
[18:19] marcoceppi: go to comingsoon. Updates there add the icons and such
[18:19] rick_h: ack, will upgrade-charm on a deployed juju-gui give me the latest?
[18:19] marcoceppi: eventually
[18:20] ;__; okay
[18:20] marcoceppi: oh, sorry, thought you meant 'update-charm' from something in the gui. That's in progress.
[18:20] booya constraints
[18:20] marcoceppi: but to your original question, no, it'll only get you the last 'release'
[18:21] rick_h: gotcha, I'll just wait for the next release
[18:21] and we're a couple of weeks from our last release right now, so comingsoon is the latest/greatest
[18:21] marcoceppi: rgr
[18:21] rick_h: possibly putting the address/hostname in () next to the running units list in the serviceInspector might be a plausible alternative
[18:22] at least a truncated version of it with a hyperlink to that hostname
[18:22] marcoceppi: maybe, but hostnames tend to be long sucky things. Think about the hostnames on the aws instances :/
[18:22] heh, yea
[18:22] some work to figure out something there
[18:22] mongodb/0 (az-123.1231.4...)
[18:22] rick_h: ack, but there's some user feedback for you.
[18:22] thanks for the feedback btw, kurt_
[18:23] sure
[18:23] marcoceppi: definitely. Wanted to clear up what was being asked for. That helps for sure
[18:23] rick_h: will replace be used for juju upgrade-charm --switch?!
[18:24] marcoceppi: no idea
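(marcoceppi's --switch question refers to upgrading a deployed service to a different charm entirely. A hedged sketch; the charm URL and service name are hypothetical, and exact flag behavior varies across juju versions:)

    $ juju upgrade-charm --switch cs:~example-user/precise/juju-gui juju-gui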
[19:02] kurt_, marcoceppi: the ceph and openstack charms differ a little
[19:02] source: cloud:precise-updates/grizzly for ceph
[19:02] and openstack-origin: cloud:precise-grizzly for the openstack charms
[19:04] jamespage: thanks for the confirmation!
[19:04] marcoceppi, np
[19:10] jamespage: I've been looking for a better guide on deploying/integrating cinder with ceph for openstack in grizzly
[19:10] I'm having a lot of problems figuring out what works and what doesn't
[19:11] scuttlemonkey's guide is very good, but it's outdated and doesn't deal with integrating stock grizzly.
[21:06] Darn, someone beat me to it. Thought gitolite would make a good charm.
[21:07] weblife: I don't think it's in the store yet
[21:07] weblife: oops, yes it is
[21:07] someone called it gitlab
[21:07] weblife: gitlab != gitolite
[21:08] That would have been a winning charm
[21:08] https://github.com/sitaramc/gitolite/wiki http://gitlab.org/
[21:08] weblife: I know the gitlab charm could use some work
[21:09] and there isn't a gitlab_ci charm yet
[21:09] nor a gitlab-shell charm
[21:09] if you were interested in growing the gitlabhq services
[21:11] I am. Probably will help expand on it where I can. Just been thinking about what I can do to enter this contest. Need the money :)
[21:11] For instance, you could have a gitlab-shell charm that talks with NFS, allowing you to scale out the git repositories side, then have gitlab set up to talk to gitlab-shell (which is required in newer versions), and have gitlab_ci deployed and a relation between the two to automatically set up CI for repos
[21:12] the gitlab charm would have to be updated to reflect the new gitlab-shell (and probably other things) so you'd have quite a bit, but I think with time you could eventually demo a github alternative at scale :)
[21:13] not a bad idea. Every software company would love that.
[21:18] Saved this convo for later review, would like to do something like this.
[21:34] kurt_, have you seen https://wiki.ubuntu.com/ServerTeam/OpenStackHA?
[21:35] it's for a full HA deployment, but it also documents a lot of general details about deploying openstack with juju
[21:35] obviously you can drop the HA configurations (specifically VIPs) and the '-n 2' for most of the services
[21:54] I know what I can do. An IRC bot charm!
[21:55] jamespage: I have indeed. But I believe that guide to be out of date. Swift for example I thought was no longer used.
[22:02] jamespage: I'm looking for information related to cinder/ceph deployment/integration with juju openstack (especially the gui). I'm having to piece together information from various sources, some of which unfortunately are out of date.
[22:02] sorry - info is hard to find
[22:47] m_3: you there
[23:02] m_3: I updated https://code.launchpad.net/~web-brandon/charms/precise/node-app/install-fix with your request. It now loads the express template default if no app ( http://ec2-54-212-165-14.us-west-2.compute.amazonaws.com ). It will also load the specified node version source from config.yaml and do a sha1 check and build if it passes, falling back to the PPA if no version is entered or the sha1 check fails.
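(A minimal sketch of the verify-then-build flow weblife describes, written as an install-hook fragment. The option names node-version and node-sha1 and the download URL layout are assumptions, not the branch's actual code; config-get is the standard charm hook tool:)

    #!/bin/bash
    set -e
    version=$(config-get node-version)
    sha=$(config-get node-sha1)
    if [ -n "$version" ] && wget -O /tmp/node.tar.gz \
           "http://nodejs.org/dist/v${version}/node-v${version}.tar.gz" \
       && echo "${sha}  /tmp/node.tar.gz" | sha1sum -c -; then
        # checksum matched: build the requested version from source
        tar -xzf /tmp/node.tar.gz -C /tmp
        (cd "/tmp/node-v${version}" && ./configure && make && make install)
    else
        # no version given, or the sha1 check failed: fall back to the PPA build
        apt-get install -y nodejs
    fi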
[23:04] I'm gonna do a few extra things too. Do I need to submit the merge again? It looks like it's still pending.
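(For what it's worth, Launchpad merge proposals track the source branch, so while a proposal is still pending, pushing new revisions to the same branch updates it in place - no need to resubmit:)

    $ bzr push lp:~web-brandon/charms/precise/node-app/install-fix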