=== CyberJacob is now known as CyberJacob|Away | ||
=== kadams54 is now known as kadams54-away | ||
=== kadams54-away is now known as kadams54 | ||
=== kadams54 is now known as kadams54-away | ||
=== kadams54-away is now known as kadams54 | ||
=== kadams54 is now known as kadams54-away | ||
=== kadams54-away is now known as kadams54 | ||
=== kadams54 is now known as kadams54-away | ||
=== stub` is now known as stub | ||
=== kadams54-away is now known as kadams54 | ||
=== kadams54 is now known as kadams54-away | ||
=== urulama_ is now known as urulama | ||
=== erkules_ is now known as erkules | ||
=== negronjl is now known as negronjl_afk | ||
=== jacekn_ is now known as jacekn | ||
=== CyberJacob|Away is now known as CyberJacob | ||
Odd_Bloke | aisrael: I've been poking at this Squid issue, and I'm finding that a "curl http://archive.ubuntu.com" is really slow (and is spending a lot of time resolving DNS); can you repro that? | 12:21 |
Odd_Bloke | (http://stackoverflow.com/a/22625150 is a good way of spitting out how long different bits of the transfer are taking) | 12:21 |
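A minimal sketch of the timing trick in that link, using curl's `-w` write-out variables (the exact format string in the linked answer may differ):

```sh
# Break down where the time goes for one request; a large time_namelookup
# points at DNS rather than the transfer itself.
curl -o /dev/null -s \
     -w 'namelookup: %{time_namelookup}\nconnect: %{time_connect}\nstarttransfer: %{time_starttransfer}\ntotal: %{time_total}\n' \
     http://archive.ubuntu.com
```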
Odd_Bloke | aisrael: http://paste.ubuntu.com/9528021/ is what I'm seeing. | 12:23 |
Odd_Bloke | So my theory is that this is a DNS issue; Squid is timing out when making DNS requests and so 503'ing. | 12:24 |
Odd_Bloke | I'll see if I can up Squid's timeout to confirm my suspicion. | 12:24 |
Odd_Bloke | aisrael: It's actually worse than that, Squid seems to be caching the DNS failure. | 12:36 |
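For reference, the Squid behaviour being described maps onto a couple of squid.conf directives; the values below are illustrative defaults, not necessarily what the charm ships:

```
# squid.conf (fragment) - illustrative values
dns_timeout 30 seconds      # how long Squid waits on its resolvers before giving up
negative_dns_ttl 1 minutes  # how long a failed lookup is cached, so the 503s keep coming back
```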
HorzA | how do I uninstall openstack? | 13:33 |
=== scuttlemonkey is now known as scuttle|afk | ||
aisrael | Odd_Bloke: That does look consistent with what I am seeing | 13:54 |
marcoceppi | HorzA: how did you install openstack? | 13:56 |
Odd_Bloke | aisrael: Cool, thanks for checking. | 13:56 |
aisrael | Odd_Bloke: I told squid to use 8.8.8.8 for dns, and it no longer throws 503s at me | 14:03 |
aisrael | Odd_Bloke: so you're definitely on the right track | 14:04 |
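The fix aisrael describes corresponds to Squid's `dns_nameservers` directive, which overrides whatever /etc/resolv.conf provides (a sketch; the second resolver is just an example):

```
# squid.conf (fragment)
dns_nameservers 8.8.8.8 8.8.4.4
```

Squid needs a reload to pick that up, e.g. `sudo service squid3 restart` on trusty.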
=== skay_afk is now known as skay | ||
=== skay is now known as Guest81309 | ||
=== Guest81309 is now known as skay | ||
ackk | hi, could someone please point me to where should I look to get the info shown in "state-server-member-status" from the delta stream? | 15:38 |
=== kadams54 is now known as kadams54-away | ||
marcoceppi | ackk: might want to ask in #juju-dev | 15:40 |
ackk | marcoceppi, thanks | 15:40 |
=== kadams54-away is now known as kadams54 | ||
jcastro | marcoceppi, man, awesome, I'd love to go to chicago to give a juju talk | 16:07 |
marcoceppi | jcastro: yeah, I was about to reply but knew you were a lot closer | 16:07 |
=== kadams54 is now known as kadams54-away | ||
thebozz | Hi guys, we're having trouble deploying Openstack over MAAS using openstack-install. We're using this tutorial: http://www.ubuntu.com/download/cloud/install-ubuntu-openstack . We're at step 4, and we're getting this output: http://pastebin.com/Byaxct7c | 17:40 |
marcoceppi | thebozz: looks like you're hitting a timeout, where the deployment is taking too long to run | 17:52 |
marcoceppi | if you run the openstack-install script again, it should pick up where it left off | 17:52 |
designated | can someone please tell me what I'm doing incorrectly? "juju deploy --config local.yaml --to 19 cs:~openstack-charmers/trusty/percona-cluster mysql" works just fine but I want to add additional service units to specific machines and the following is failing "juju add-unit --to 20 cs:~openstack-charmers/trusty/percona-cluster mysql" with error: unrecognized args: ["mysql"] | 17:52 |
lazyPower | designated: you're close | 18:03 |
lazyPower | designated: it's juju add-unit -n # --to # mysql | 18:03 |
lazyPower | you don't need to specify the cs charm path for adding units - the state server has already loaded that charm source under the alias "mysql" for your deployment | 18:04 |
designated | lazyPower: so after specifying the cs path during deployment, it isn't required for future service unit additions...nice thank you. | 18:04 |
lazyPower | That's correct ;) | 18:04 |
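Putting the two commands side by side, with the machine numbers from the conversation:

```sh
# Initial deploy: the charm URL gets the alias "mysql" in this environment
juju deploy --config local.yaml --to 19 cs:~openstack-charmers/trusty/percona-cluster mysql

# Later units reference the alias only - no charm URL needed
juju add-unit --to 20 mysql
```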
Odd_Bloke | aisrael: Turns out VirtualBox historically has some issues with DNS servers on 127.0.[01].1; there's a working workaround which I'm about to push up. :) | 18:07 |
Odd_Bloke | aisrael: http://bazaar.launchpad.net/~ubuntu-on-ec2/vmbuilder/jenkins_kvm/revision/552 is the fix; am rebuilding vivid images now, will let you know when a new one is available to test. | 18:23 |
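The linked revision is the authoritative fix; for context, the commonly cited VirtualBox workaround for hosts whose resolver sits on a loopback address is to have the NAT engine resolve through the host, e.g.:

```sh
# Make VirtualBox's NAT DNS use the host's resolver instead of forwarding to
# a loopback-only nameserver; the VM name here is hypothetical.
VBoxManage modifyvm "vagrant-vivid" --natdnshostresolver1 on
```

Whether that is exactly what the image build does is best checked against the revision above.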
Odd_Bloke | aisrael: Working (for me) images now available at http://cloud-images.ubuntu.com/vagrant/vivid/current/ | 18:41 |
Odd_Bloke | aisrael: Let me know if they work for you and I'll kick off builds for trusty and utopic. | 18:41 |
designated | I'm following https://wiki.ubuntu.com/ServerTeam/OpenStackHA . I have deployed 4 instances of mysql on 4 separate nodes, but I have a question about "hacluster mysql-hacluster" charm. Does this only deploy mysql-hacluster on a single node? | 18:59 |
designated | or can I deploy the "mysql-hacluster" charm on all 4 nodes as well? | 18:59 |
designated | when attempting "juju deploy hacluster mysql-hacluster --to 19" I receive the following: ERROR cannot use --num-units or --to with subordinate service | 19:11 |
designated | is there another way to specify a specific machine when deploying a subordinate service? | 19:12 |
aisrael | Odd_Bloke: Excellent! Grabbing the image to test now. | 19:16 |
lazyPower | designated: subordinate services by design get deployed into scope:container when you relate them to the service | 19:16 |
lazyPower | designated: so you don't need to deploy them --to anything | 19:17 |
designated | lazyPower: ahh so if I just deploy the subordinate service it will get installed to all necessary nodes when the relation is built? | 19:18 |
lazyPower | designated: correct. if subordinates are not related to anything, they are transient "unplaced" services | 19:18 |
lazyPower | and only exist on the bootstrap node as a deployment alias, ready for integration when you decide to relate it :) | 19:19 |
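A sketch of the subordinate workflow being described, reusing the service names from this conversation:

```sh
# Deploy the subordinate; it stays "unplaced" until related to something
juju deploy hacluster mysql-hacluster

# Relating it places a hacluster unit alongside every mysql unit
juju add-relation mysql mysql-hacluster
```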
designated | lazyPower: fair enough. thank you for the explanation. | 19:19 |
lazyPower | designated: no problem. Have you had a chance to run through any of the charm schools or juju videos? | 19:19 |
designated | lazyPower: not yet but I will. is there a recommended place to start? | 19:20 |
designated | lazyPower: also, will this cause problems with other services deployed in HA mode later on? I expect these openstack services to run on the same 4 nodes all in HA mode. I've built it this way manually, just don't know if some of the charms require separate physical machines. | 19:22 |
lazyPower | designated: there's an overview here - https://juju.ubuntu.com/resources/videos/ - but a lot of the charm school videos are deeper subject matter more on charm construction, how to write them, test them, and what options are available to you as a user to troubleshoot an environment should something go awry | 19:23 |
lazyPower | designated: I'm considering making a video aimed at brand new users to go from install to orchestrated stack and cover the different methods to get from A to Z - as I think that would be useful, covering core concepts behind the differences in charms and subordinates, etc. | 19:24 |
lazyPower | but that's a fairly large project and I don't have an ETA on if/when that would be done | 19:24 |
designated | lazyPower: as an example, the provided local.yaml shows a different VIP for each service. I want them all to be accessible from the same VIP. | 19:24 |
lazyPower | designated: well, when you do the --to, which we have dubbed hulk smash mode | 19:24 |
designated | lazyPower: thanks for the link, bookmarking now. | 19:25 |
lazyPower | you can run into issues down the road in terms of scale. We actually recommend you use --to lxc:# or --to kvm:# - which will isolate the service in a container/VM respectively - but there's an end of the sidewalk with that as it stands today | 19:25 |
lazyPower | the networking story between containers and VMs is being worked on this cycle | 19:25 |
lazyPower | but if you're planning on keeping those services colocated - and not doing much in terms of scale you're fine to continue colocating with --to | 19:25 |
lazyPower | marcoceppi might have additional details in that department. I myself hulk smash to save $$ when I'm deploying my own projects, but I'm small potatoes - if you're setting up, say, a telecommunications network, I would definitely want to place services on proper machines so scale-out is a snap | 19:26 |
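The isolated placement being recommended only changes the --to argument; reusing machine 19 from the earlier deploy as an example:

```sh
# Place the service in a fresh LXC container on machine 19...
juju deploy --to lxc:19 cs:~openstack-charmers/trusty/percona-cluster mysql

# ...or in a KVM guest on the same machine
# juju deploy --to kvm:19 cs:~openstack-charmers/trusty/percona-cluster mysql
```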
designated | lazyPower: I'm deploying openstack and basically want the same 4 controllers to provide all services in HA mode. | 19:26 |
designated | it's just not feasible to have a separate physical machine for each service. | 19:26 |
lazyPower | Completely understood, and i think that's an acceptable route to take. | 19:27 |
lazyPower | it's kind of dependent on your networking setup, and how it's all configured - but I'm not an openstack expert - what I suggest is to export your bundle when you've completed your deployment - scrub any sensitive details out of the config options and post it to the list and ask for a review - if it looks feasible and resilient enough to change | 19:28 |
designated | lazyPower: am I correct in understanding, you're recommending deploying each of these services to LXCs on each node? | 19:28 |
lazyPower | one of our OpenStack engineers should be able to give you feedback and insight as to your deployment configuration | 19:28 |
lazyPower | designated: well that's a blanket statement when it comes to co-location of services. LXC containers or KVM VMs will isolate the services and provide density. | 19:28 |
marcoceppi | o/ designated reading scrollback | 19:34 |
marcoceppi | designated: is this for a private openstack deployment? | 19:36 |
marcoceppi | on physical hardware with maas? | 19:37 |
designated | marcoceppi: yes, it is a private deployment on metal using MAAS. | 19:41 |
designated | marcoceppi: why do you ask? | 19:45 |
=== kadams54 is now known as kadams54-away | ||
marcoceppi | designated: you should really use isolation, like lxc: / kvm: when deploying | 19:55 |
marcoceppi | designated: I figure you'll be using 4 nova-compute nodes, as well? | 19:55 |
designated | marcoceppi: okay, I'll do that. | 19:55 |
designated | marcoceppi: I'll actually have 28 compute nodes | 19:56 |
designated | marcoceppi: what is the advantage of using kvm over lxc or vice versa? | 19:57 |
=== kadams54-away is now known as kadams54 | ||
designated | marcoceppi: the compute nodes will each run on dedicated hardware. I just want all of the other openstack and supporting services to run on 4 physical nodes (openstack controllers). | 20:02 |
aisrael | Odd_Bloke: The vivid image looks good! DNS looks good, apt-get is happy. | 20:16 |
designated | when deploying to an LXC, can you still specify a physical NIC in the charm's configuration? | 20:25 |
=== kadams54 is now known as kadams54-away | ||
=== keithzg_ is now known as keithzg | ||
marcoceppi | designated: not quite, but you can set up a bridge network to the nics you care about for lxc containers in maas | 20:53 |
marcoceppi | maas is the best supported substrate for containerized deployments | 20:54 |
marcoceppi | and that's what you'll want to do | 20:54 |
marcoceppi | I just need to look that up, one min (maybe 10) | 20:54 |
designated | marcoceppi: I deployed mysql to bare metal and it worked fine; when I do the same thing but deploy to lxc on each of the 4 nodes, it never finishes, it just sits in a pending state. I'm guessing it has something to do with specifying physical interfaces in the configuration. Is the charm supposed to build the bridge, or is this something I must do manually? | 21:00 |
marcoceppi | designated: this is something either maas or you will need to do prior to deployment | 21:01 |
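One way to provide that bridge on each node (a sketch assuming the node's primary NIC is eth0 and bridge-utils is installed; MAAS can also lay this down via preseed):

```
# /etc/network/interfaces (fragment) on the MAAS node
auto eth0
iface eth0 inet manual

auto br0
iface br0 inet dhcp
    bridge_ports eth0
```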
designated | marcoceppi: thank you | 21:04 |
designated | marcoceppi: If I deploy all openstack supporting services to lxc on 4 nodes, will it be alright to install neutron-gateway directly to the same 4 nodes outside of an lxc, to avoid networking issues? | 21:14 |
marcoceppi | designated: I'm not sure, that ventures into depths of permutations I have not yet tried | 21:23 |
marcoceppi | you might want to email openstack-charmers and ask them | 21:23 |
marcoceppi | https://launchpad.net/~openstack-charmers | 21:24 |
designated | marcoceppi: thank you | 21:36 |
=== scuttle|afk is now known as scuttlemonkey | ||
=== beisner- is now known as beisner |