Luca__ | Hi there. Does anybody know if charms are already available for saucy? I can't see any of them | 00:44 |
zradmin | davecheney: are you available at the moment? I created a stand alone mysql instance and tied the quantum-gateway charm to that... still the same behavior for me | 00:53 |
davecheney | zradmin: oh damn | 00:54 |
davecheney | zradmin: I'm here to help | 00:54 |
zradmin | davecheney: thanks! I'm using MAAS to deploy on, and my test maas controller has been around since 12.04.2/juju 1.12 - through updates etc. I have upgraded the controller and it is now on 12.04.3/juju 1.16.3 - I have destroyed the environment and the charms should be pulling directly from the charm store for each deployment. could the old maas controller possibly be an issue? | 00:58 |
davecheney | zradmin: not really sure | 00:58 |
davecheney | not a maas expert | 00:59 |
davecheney | but i'd recommend using the latest possible maas version | 00:59 |
davecheney | probably from the cloud archive | 00:59 |
zradmin | yeah, it's on there | 00:59 |
zradmin | I'm just wondering if somehow I'm getting an older version of the charm even though it's pulling directly from the store | 00:59 |
davecheney | zradmin: no, that is not possible | 01:00 |
davecheney | that is | 01:00 |
davecheney | it is not possible if you destroyed the environment and rebootstrapped | 01:01 |
davecheney | as the cache is inside the environment | 01:01 |
zradmin | yeah, it's been rebootstrapped several times | 01:01 |
zradmin | is there a way to check the charm version on node 0? | 01:01 |
davecheney | zradmin: no | 01:03 |
davecheney | there is no charm deployed on machine 0 | 01:03 |
davecheney | juju status will report the version of the charm downloaded and deployed from the store | 01:03 |
zradmin | ok, so node0 doesn't store a cached copy of the charm then | 01:03 |
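For readers following along: a minimal sketch of checking a deployed charm's revision with juju status, as davecheney describes above. The service name mysql is an example, and the output format shown is from the juju 1.16 era; it may differ in other versions.

```bash
# The services section of "juju status" includes the charm URL, which
# carries the store revision that was downloaded and deployed.
juju status mysql | grep charm:
# expected line (sketch): charm: cs:precise/mysql-29
```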
Luca__ | davecheney: Do you know if there are charms available for saucy? I can't find any in the repository | 01:04 |
davecheney | Luca__: most of the charms are for the LTS releases | 01:06 |
davecheney | it's unlikely that we'll have a lot of saucy charms | 01:06 |
davecheney | Luca__: the series of the charm defines the series of the machine it is deployed on | 01:06 |
davecheney | so, while you might be running saucy on your desktop | 01:06 |
davecheney | you want to be deploying precise machines to run your services | 01:06 |
Luca__ | davecheney: Actually I did not find any | 01:06 |
davecheney | that is why most of the charms are for precise | 01:06 |
sarnold | Luca__: I think only the precise charms are promulgated to the charmstore; I know I've seen non-precise charms but there's not many of them, and I don't know how to search them these days.. | 01:07 |
Luca__ | davecheney: my intention was to try deploying Openstack with 13.10 using charms. I am now ending up starting again with 12.04.03 | 01:07 |
davecheney | Luca__: canonical only supports deploying openstack on our LTS releases | 01:07 |
zradmin | davecheney: I found this on node0 in /var/lib/juju/charmcache/cs_3a_precise_2f_mysql-29.charm. 29 was the latest revision, right? | 01:08 |
sarnold | davecheney: sadly you'd never know that from e.g. http://www.ubuntu.com/server/ | 01:08 |
Luca__ | davecheney: Thanks. Will try from there | 01:08 |
sarnold | "13.10! charms! openstack!" hehe | 01:09 |
Luca__ | sarnold: Agree, from the web page the impression is quite different | 01:09 |
zradmin | sarnold: the website definitely needs an update... takes at least a month just to figure out where to begin :) | 01:09 |
Luca__ | sarnold: Right, that is why I started with 13.10 | 01:09 |
sarnold | Luca__: plus you figure starting with 13.10 would give you a good jump on 14.04 LTS, right? :) you wouldn't be the first to hope so... | 01:09 |
Luca__ | zradmin: same here, have been working on this for the last 15 days to figure out how to start | 01:09 |
zradmin | Luca__: this is the best publicly available document I have found so far: https://wiki.ubuntu.com/ServerTeam/OpenStackHA | 01:10 |
Luca__ | davecheney: I have pinged jamespage trying to get some information about the status of openstack bundles, however he does not seem available | 01:11 |
zradmin | Luca__: but use the latest version of juju... not 0.7 as the guide states | 01:11 |
sarnold | Luca__: if he's from .us, he might already be enjoying thanksgiving holidays | 01:11 |
Luca__ | zradmin: Yes thanks, I am following that, though the requirements are unrealistic.... 28 servers is really useless for me, and I don't have a clear idea how to install different services on the same nodes | 01:12 |
davecheney | zradmin: looking at the charm store, mysql-29 is the latest | 01:12 |
Luca__ | zradmin: I am now on 12.04.03. I believe I am using the latest juju, though I need to check | 01:13 |
zradmin | Luca__: juju deploy $SERVICE --to $MACHINE# | 01:13 |
davecheney | Luca__: yeah, openstack isn't for the faint of heart | 01:13 |
davecheney | and juju won't give you a lot of help there | 01:13 |
Luca__ | zradmin: so far I have been trying to add the havana repository, however I am getting a GPG error: http://ubuntu-cloud.archive.canonical.com precise-updates/havana Release: The following signatures couldn't be verified because the public key is not available: | 01:13 |
davecheney | as the default policy is one service unit per machine | 01:14 |
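A sketch of the placement flow behind zradmin's command above: co-locating several service units on one machine with --to, overriding the default one-unit-per-machine policy. The service names and machine number here are examples only.

```bash
# Machine numbers come from "juju status"; machine 1 is hypothetical here.
juju deploy mysql --to 1
juju deploy rabbitmq-server --to 1
```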
sarnold | oh man, this looks useful :) https://help.ubuntu.com/community/UbuntuCloudInfrastructure | 01:14 |
sarnold | wonder how I hadn't seen this one before | 01:14 |
davecheney | Luca__: i'm not quite sure how you got onto the 13.10 track | 01:14 |
davecheney | charms always dictate the series of the machine they are deployed on | 01:14 |
davecheney | we only have openstack charms for precise | 01:14 |
davecheney | so the machines deployed will be precise machines | 01:14 |
Luca__ | davecheney: you are right, it is not the easiest stuff to deploy | 01:14 |
zradmin | davecheney: the later jujus support multi-service deployment using lxc containers, right? | 01:15 |
Luca__ | sarnold: That link is unfortunately quite outdated... Not even quantum in the deployment | 01:15 |
sarnold | Luca__: oh. drat. | 01:16 |
Luca__ | sarnold: I would not advise you following that unless you want to be really behind | 01:16 |
davecheney | zradmin: juju 1.16.x (and 1.14, I think) supports lxc containers | 01:17 |
davecheney | but I would be very surprised if our openstack charms accept being deployed inside an lxc container | 01:17 |
davecheney | as usual, networking is the issue with containers | 01:17 |
davecheney | I think maas + lxc + openstack is possible, but not tested | 01:17 |
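A sketch of the LXC placement davecheney mentions, as supported in juju 1.16.x: deploying into a fresh container on an existing machine. Per the caveats above, this suits light services such as juju-gui or nagios rather than the openstack charms.

```bash
# "lxc:0" means: create a new LXC container on machine 0 and deploy there.
juju deploy juju-gui --to lxc:0
```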
zradmin | davecheney: nah, I haven't used it for those, but I do use it for things like nagios/juju-gui | 01:18 |
Luca__ | davecheney: I started with 13.10, then followed several docs and looked for openstack charms for saucy, which were obviously not available. I ended up downloading those for precise, which did not work in 13.10, and now I am starting again with 12.04.03... This has been the path over the last 15 days | 01:18 |
zradmin | for the openstack api services I'm running them in their own vms in proxmox | 01:19 |
zradmin | eth0 is on one set of switches, while eth1 is on the dmz side for all of my machines (physical and virtual) | 01:20 |
Luca__ | zradmin: were you able to deploy openstack with ubuntu? | 01:26 |
zradmin | Luca__: grizzly was the closest i came to having it all up and running, but now i have an issue with neutron not coming up properly | 01:27 |
davecheney | Luca__: i'm sorry you got so sidetracked | 01:27 |
davecheney | i'm not sure what you mean by download | 01:27 |
davecheney | juju deploy mysql | 01:28 |
davecheney | will deploy mysql on the current LTS | 01:28 |
davecheney | we try not to make it any more complicated than that | 01:28 |
zradmin | davecheney: have most of the deployments you've seen been set up as HA? or are people just setting them up as single nodes? | 01:28 |
davecheney | zradmin: always ha | 01:29 |
zradmin | that's what I thought | 01:29 |
davecheney | zradmin: nobody wants an unreliable hypervisor | 01:29 |
zradmin | is the guid similar to the one posted on the wiki, or is there any updated documentation/configs we can try to follow | 01:29 |
zradmin | lol for sure :) | 01:30 |
davecheney | guid ? | 01:31 |
davecheney | guide | 01:31 |
Luca__ | zradmin: did you deploy with charms? | 01:31 |
zradmin | yeah, guide, sorry for my typo | 01:31 |
zradmin | Luca__: yes! it makes it much easier than configuring by hand when paired with MAAS. I can deploy a test environment across 28 servers in 2 hours easily | 01:33 |
sarnold | two hours? zounds :) | 01:33 |
Luca__ | davecheney: Basically I created an environments.yaml file with default-series: precise instead of saucy, even though I was on 13.10. I also downloaded charms locally, but still for precise. This is what I meant by downloading them | 01:34 |
zradmin | sarnold: and then spend the next two weeks trying to figure out why neutron isn't working :( | 01:34 |
Luca__ | zradmin: As far as I know jamespage should be working on HA deployments and bundles | 01:34 |
Luca__ | zradmin: For a production environment HA is mandatory... otherwise it's useless | 01:35 |
davecheney | Luca__: it's even easier | 01:35 |
davecheney | remove the default-series config option | 01:36 |
davecheney | you don't need it | 01:36 |
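A minimal sketch of the MAAS section of ~/.juju/environments.yaml with default-series omitted, as davecheney advises; the server address, OAuth key, and secret are placeholders, and the key names are from the juju 1.16 era.

```bash
# Append a MAAS environment stanza; juju then picks the series from the charm.
cat >> ~/.juju/environments.yaml <<'EOF'
environments:
  maas:
    type: maas
    maas-server: 'http://my-maas-server/MAAS'   # placeholder
    maas-oauth: '<MAAS-API-KEY>'                # placeholder
    admin-secret: '<any-secret-you-choose>'     # placeholder
EOF
```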
sarnold | zradmin: oh man, that's annoying. :/ I know nearly nothing of the whole environment, and it seems unlikely I'll ever own enough machines to really give it all a try... | 01:36 |
Luca__ | davecheney: got it. In any case, now I am on 12.04.03 and will start from there. So far I have been able to create a bootstrap node; now I will start on the HA deployment. Unfortunately I am having some issues setting up the Ubuntu Cloud Repository. It is complaining about the public key not being available | 01:37 |
davecheney | Luca__: i'm sorry, i don't quite understand | 01:38 |
davecheney | you don't need to do anything | 01:38 |
davecheney | juju does this | 01:38 |
Luca__ | davecheney: You don't need to use the Ubuntu Cloud Repository? http://www.ubuntu.com/download/cloud/cloud-archive-instructions | 01:40 |
Luca__ | zradmin: Did you follow the HA Guide? | 01:40 |
sarnold | Luca__: this page: https://wiki.ubuntu.com/ServerTeam/CloudArchive has the commands e.g. sudo add-apt-repository cloud-archive:havana | 01:41 |
davecheney | Luca__: if you were going to install openstack by hand, maybe | 01:41 |
Luca__ | zradmin: In any case, as I said earlier, 28 servers is kind of unrealistic. I really need to scale down | 01:41 |
davecheney | but the charms will do this themselves on the machines that they spin up | 01:41 |
sarnold | Luca__: add-apt-repository automatically retrieves the key you need from launchpad | 01:41 |
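The commands from the page sarnold links, which avoid the GPG error Luca__ hit by letting add-apt-repository fetch the signing key from Launchpad. The first line is only needed if add-apt-repository is missing; on 12.04 it ships in python-software-properties.

```bash
sudo apt-get install python-software-properties   # provides add-apt-repository on precise
sudo add-apt-repository cloud-archive:havana
sudo apt-get update
```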
Luca__ | davecheney: got it | 01:42 |
Luca__ | davecheney: basically juju would look into the correct archive and use havana packages if I understand correctly? | 01:43 |
Luca__ | sarnold: Thanks, I was there now :) | 01:43 |
zradmin | Luca__: yeah, I followed the HA guide and amended it when havana was released because I wanted to get past the neutron rename. I have 16 machines to play with in a blade enclosure, so my plan has been to use 3 for ceph and virtualize all the api services between 2 others... after that it's all compute-nodes | 01:44 |
davecheney | Luca__: not juju, the charms | 01:44 |
davecheney | the charms contain all the logic | 01:44 |
davecheney | juju is just a workflow engine | 01:44 |
Luca__ | zradmin: Have you been able to deploy several services onto one blade? with the --to flag? | 01:46 |
zradmin | well in a blade each blade (we're using half heights) is its own server | 01:46 |
zradmin | ^blade enclosure | 01:46 |
Luca__ | davecheney: Yes, sorry, I was thinking about charms and wrote juju ... | 01:46 |
Luca__ | zradmin: An enclosure comes with at most 16 blades, therefore I need to deploy some services on a single server. However the HA guide requires 28 servers. How did you move from 28 to 16? | 01:48 |
zradmin | I use proxmox (an open source hypervisor using kvm) on 2 of the blades and have vms for each of the services on both | 01:49 |
zradmin | once that's running properly we can scale the apis to new nodes as needed thanks to juju add-unit | 01:49 |
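A sketch of the scale-out step zradmin describes: once a service's relations are healthy, add-unit grows it onto new machines. The service name and unit count are examples only.

```bash
# Add two more units of an API service; juju provisions new machines for them.
juju add-unit nova-cloud-controller -n 2
```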
Luca__ | zradmin: Don't know the details but I grasped the idea | 01:52 |
sarnold | zradmin: suddenly a two hour deploy makes sense, you're putting a ton of work on two little blades :) | 01:53 |
zradmin | sarnold: they're pretty big, but the deploy is slowed down mainly by me waiting for each service to finish coming online with all the relationships before provisioning the next one | 01:54 |
davecheney | zradmin: are you doing that because you think it will fail ? | 01:55 |
davecheney | or just to see it working ? | 01:55 |
Luca__ | sarnold: As I don't have a whole chassis to myself I was thinking of using a couple of servers and virtualizing there :) | 01:55 |
sarnold | zradmin: if you don't mind, what capabilities does each blade have? how much does the whole enclosure and blades cost? :D | 01:55 |
Luca__ | sarnold: An enclosure is quite expensive | 01:55 |
Luca__ | sarnold: A single half blade is about $3000 | 01:56 |
* davecheney never understood why people use blades | 01:56 | |
Luca__ | An enclosure is more than $10,000, plus switches and OAM | 01:56 |
davecheney | they use more power | 01:56 |
davecheney | cost more than regular machines | 01:56 |
davecheney | and since when was data center space a bigger problem than watts/rack | 01:56 |
Luca__ | davecheney: well, it is good if you want to integrate servers | 01:56 |
sarnold | Luca__: ouch :) okay scratch that then, hehe | 01:56 |
zradmin | sarnold: we picked up the chassis fully loaded (used on ebay!) for around 11k, dell m610s w 2/8 core procs & 48GB RAM | 01:56 |
Luca__ | but definitely not worse for an openstack deployment. besides, you would have problems with midplane bandwidth | 01:57 |
sarnold | davecheney: blades may fit better into a house :) hehe | 01:57 |
* davecheney stands by his assertion that blade chassis are a false economy | 01:57 | |
davecheney | sarnold: in AU, most blade chassis need 3 phase power | 01:57 |
Luca__ | zradmin: Yes, if used, that could be a reasonable price | 01:57 |
davecheney | that isn't available in my condo | 01:58 |
zradmin | davecheney: agreed.... unless you are renting a half rack from a data center and they forgot to put power usage in the lease agreement! :D | 01:58 |
sarnold | davecheney: hehe, yeah, I wound up scratching "buy a used thumper" off my todo list once I saw the three-phase requirement. d'oh :) | 01:58 |
zradmin | lol | 01:58 |
davecheney | the thumper doesn't need 3 phase | 01:58 |
davecheney | just 200v, which isn't commonly available in the US | 01:59 |
sarnold | no? just 220 without the phases? | 01:59 |
Luca__ | :) | 01:59 |
sarnold | just laundry drying machines... hehe | 01:59 |
Luca__ | zradmin: Did you use juju 1.6? | 02:00 |
Luca__ | I just found out that current installed version on my 12.04.03 box is 0.5+bzr531-0ubuntu1.3 | 02:01 |
Luca__ | pretty outdated... | 02:01 |
zradmin | Luca__: 1.16.3 | 02:01 |
sarnold | 0.5?? | 02:01 |
Luca__ | Version: 0.5+bzr531-0ubuntu1.3 | 02:01 |
zradmin | sudo add-apt-repository ppa:juju/pkgs | 02:02 |
zradmin | sudo apt-get update | 02:02 |
zradmin | etc | 02:02 |
Luca__ | yep | 02:02 |
Luca__ | kind of surprised too about the version.... | 02:02 |
davecheney | Luca__: sorry about that | 02:02 |
davecheney | LTS rules mean we cannot change the version of juju in precise | 02:02 |
Luca__ | Had already bootstrapped a node, need to restart it all | 02:02 |
Luca__ | davecheney: no worries, I understand | 02:03 |
davecheney | https://juju.ubuntu.com/docs/ | 02:03 |
davecheney | install instructions are here | 02:03 |
Luca__ | At least I am getting a few good tips from this chat! | 02:03 |
sarnold | well I'll be, I thought for sure precise had shipped with 0.6... | 02:05 |
sarnold | no wonder "install juju from the ppa" was always step #1 :) | 02:05 |
Luca__ | :) | 02:05 |
Luca__ | zradmin: Strange... I added the repository but it installed 0.7... | 02:07 |
Luca__ | Version: 0.7+bzr628+bzr633~precise1 | 02:08 |
sarnold | Luca__: https://juju.ubuntu.com/docs/ says to use "ppa:juju/stable" | 02:09 |
Luca__ | yes... | 02:09 |
sarnold | not "ppa:juju/pkgs" -- the stable ppa has the 1.16 | 02:09 |
Luca__ | and juju-core | 02:09 |
Luca__ | not juju | 02:09 |
Luca__ | yep, found it now, sorry :( | 02:10 |
sarnold | see e.g. https://launchpad.net/~juju/+archive/stable | 02:10 |
zradmin | yeah, ppa:juju/stable is the right one, my bad | 02:10 |
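The install sequence implied by the docs page and the corrected PPA name, which pulls in juju-core 1.16.x rather than the 0.7 python juju from ppa:juju/pkgs:

```bash
sudo add-apt-repository ppa:juju/stable
sudo apt-get update
sudo apt-get install juju-core
juju version    # should report 1.16.x
```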
Luca__ | now it is right, no worries | 02:11 |
Luca__ | Is there a way to select a bootstrap node? | 02:20 |
zradmin | Luca__: not to my knowledge, are you using maas? | 02:39 |
Luca__ | yes | 02:39 |
Luca__ | I thought so | 02:39 |
zradmin | Luca__: I remove my vms from maas when I rebuild the environment and add them one at a time so I know which service is where | 02:40 |
Luca__ | You are right, this is a good idea | 02:41 |
Luca__ | Have you tried using LXC containers with Juju? | 02:42 |
zradmin | just for small things like juju-gui | 02:42 |
zradmin | i usually put that on node 0 | 02:43 |
Luca__ | Do you have any documentation to point at? | 02:43 |
zradmin | for which piece? | 02:43 |
Luca__ | I don't have any experience with LXC and Juju, so I was wondering if you followed any document | 02:48 |
=== freeflying is now known as freeflying_away | ||
Luca__ | zradmin: how many networks did you define for your openstack deployment ? | 03:07 |
zradmin | Luca__: nope, no document; just the internal network has been defined so far... they used to have the ext-net configured in the nova-cc charm but it looks like they took it out | 03:11 |
=== freeflying_away is now known as freeflying | ||
Luca__ | zradmin: what did you use for the monitor-secret under ceph.yaml? To me it looks like the way of getting this secret is kind of recursive | 03:18 |
Luca__ | monitor-secret: a ceph generated key used by the daemons that manage the cluster to control security. You can use the ceph-authtool command to generate one: ceph-authtool /dev/stdout --name=mon. --gen-key | 03:19 |
zradmin | yup thats it | 03:20 |
Luca__ | What did you configure then? | 03:21 |
zradmin | I generated a key like that | 03:23 |
zradmin | but I think if you leave it undefined it autogenerates one as well | 03:24 |
Luca__ | zradmin: I am not sure I am getting what you mean... You need ceph-authtool to generate the key, but you don't yet have ceph-authtool, as you are supposed to install ceph with the charm. Besides, the documentation for this charm says this is a mandatory parameter | 03:26 |
zradmin | install the ceph tools on your maas controller | 03:26 |
zradmin | then you can generate it :) | 03:26 |
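zradmin's suggestion as commands, run on the MAAS controller. The package name providing ceph-authtool is assumed to be ceph-common here; on some releases it may live in the ceph package instead.

```bash
sudo apt-get install ceph-common          # assumed package for ceph-authtool
ceph-authtool /dev/stdout --name=mon. --gen-key
# Paste the printed key into ceph.yaml, e.g.:
#   ceph:
#     monitor-secret: <key printed above>
# then deploy with: juju deploy --config ceph.yaml ceph
```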
Luca__ | Thanks | 03:30 |
=== freeflying is now known as freeflying_away | ||
stryderjzw | Hi, I am running the destroy-relation command, but nothing is happening to the services when I run juju status. Anyone seen this or have a suggestion on how to debug? | 04:36 |
=== freeflying_away is now known as freeflying | ||
freeflying | is this the proper channel to ask charm-related questions, or do we have a dedicated one for that? | 06:48 |
sarnold | freeflying: this is good, or you can also use askubuntu.com or the mailing list | 07:42 |
=== CyberJacob|Away is now known as CyberJacob | ||
=== racedo` is now known as racedo | ||
=== freeflying is now known as freeflying_away | ||
=== ociuhandu_ is now known as ociuhandu | ||
=== freeflying_away is now known as freeflying | ||
=== freeflying is now known as freeflying_away | ||
iri- | How do I make juju aware of changes to the IP/DNS in AWS? | 10:47 |
SuperMatt | using the power of juju, can I create linux containers on *another* machine? | 11:27 |
=== dimitern is now known as dimitern_afk | ||
SuperMatt | I know this isn't real juju, but can someone help me with an openstack question? | 13:44 |
=== freeflying_away is now known as freeflying | ||
X-warrior` | Why doesn't "juju resolved" fix/remove this message: 'hook failed: "cluster-relation-joined"'? | 13:56 |
melmoth | X-warrior, no idea, but what about adding a --retry ? | 14:53 |
melmoth | so it'll run the failed hook once again | 14:53 |
melmoth | if that fails, I guess the next step is juju debug-hooks involved/unit, and running the hook manually to see what the problem is | 14:53 |
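The recovery sequence melmoth outlines, with a hypothetical unit name standing in for whichever unit's cluster-relation-joined hook failed:

```bash
juju resolved --retry rabbitmq-server/0   # re-run the failed hook
# if it fails again, open a debugging session on the unit and run the
# hook by hand to see the actual error:
juju debug-hooks rabbitmq-server/0
```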
X-warrior` | that sux | 15:04 |
X-warrior` | I can't mark it as resolved, so I can't destroy the service, nor the machine | 15:04 |
X-warrior` | sux | 15:05 |
X-warrior` | it should have an option | 15:42 |
X-warrior` | --force, to force doing what you want; for example, if you're deleting a service and use --force, it will remove it ignoring the current state, or something similar... | 15:42 |
=== CyberJacob is now known as CyberJacob|Away | ||
X-warrior` | What could happen if I terminate a machine from the amazon console instead of deleting it using destroy-machine? I can't resolve an error state, but this environment has some machines that I cannot lose. | 16:25 |
=== CyberJacob|Away is now known as CyberJacob | ||
=== CyberJacob is now known as CyberJacob|Away | ||
=== gary_poster is now known as gary_poster|away | ||
=== CyberJacob|Away is now known as CyberJacob | ||
=== thomi_ is now known as thomi | ||
=== freeflying is now known as freeflying_away | ||
=== CyberJacob is now known as CyberJacob|Away | ||
=== freeflying_away is now known as freeflying |