/srv/irclogs.ubuntu.com/2014/12/15/#juju.txt

=== CyberJacob is now known as CyberJacob|Away
=== kadams54 is now known as kadams54-away
=== kadams54-away is now known as kadams54
=== kadams54 is now known as kadams54-away
=== kadams54-away is now known as kadams54
=== kadams54 is now known as kadams54-away
=== kadams54-away is now known as kadams54
=== kadams54 is now known as kadams54-away
=== stub` is now known as stub
=== kadams54-away is now known as kadams54
=== kadams54 is now known as kadams54-away
=== urulama_ is now known as urulama
=== erkules_ is now known as erkules
=== negronjl is now known as negronjl_afk
=== jacekn_ is now known as jacekn
=== CyberJacob|Away is now known as CyberJacob
[12:21] <Odd_Bloke> aisrael: I've been poking at this Squid issue, and I'm finding that a "curl http://archive.ubuntu.com" is really slow (and is spending a lot of time resolving DNS); can you repro that?
[12:21] <Odd_Bloke> (http://stackoverflow.com/a/22625150 is a good way of spitting out how long different bits of the transfer are taking)
[12:23] <Odd_Bloke> aisrael: http://paste.ubuntu.com/9528021/ is what I'm seeing.
[12:24] <Odd_Bloke> So my theory is that this is a DNS issue; Squid is timing out when making DNS requests and so 503'ing.
[12:24] <Odd_Bloke> I'll see if I can up Squid's timeout to confirm my suspicion.
[12:36] <Odd_Bloke> aisrael: It's actually worse than that; Squid seems to be caching the DNS failure.
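[Editor's note: a minimal sketch of the timing trick from the stackoverflow answer linked above; the format-file path is an assumption. curl's -w flag can read a format file that breaks out where the time goes in a transfer.]

```shell
# Write a curl -w format file (path /tmp/curl-format.txt is an assumption),
# then ask curl to report per-phase timings for the slow fetch.
cat > /tmp/curl-format.txt <<'EOF'
time_namelookup:    %{time_namelookup}\n
time_connect:       %{time_connect}\n
time_starttransfer: %{time_starttransfer}\n
time_total:         %{time_total}\n
EOF

# A large time_namelookup relative to time_total would point at DNS,
# matching the theory above. (|| true: the host may be unreachable.)
curl -w "@/tmp/curl-format.txt" -o /dev/null -s http://archive.ubuntu.com/ || true
```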
[13:33] <HorzA> how do i uninstall openstack?
=== scuttlemonkey is now known as scuttle|afk
[13:54] <aisrael> Odd_Bloke: That does look consistent with what I am seeing
[13:56] <marcoceppi> HorzA: how did you install openstack?
[13:56] <Odd_Bloke> aisrael: Cool, thanks for checking.
[14:03] <aisrael> Odd_Bloke: I told squid to use 8.8.8.8 for DNS, and it no longer throws 503s at me
[14:04] <aisrael> Odd_Bloke: so you're definitely on the right track
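[Editor's note: a sketch of aisrael's workaround, under the assumption the proxy is the Ubuntu squid3 package with its config at /etc/squid3/squid.conf. `dns_nameservers` is the Squid directive that overrides the resolvers from /etc/resolv.conf.]

```shell
# Point Squid at public resolvers instead of the local stub resolver,
# then restart so the cached DNS failure is dropped.
echo 'dns_nameservers 8.8.8.8 8.8.4.4' | sudo tee -a /etc/squid3/squid.conf
sudo service squid3 restart
```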
=== skay_afk is now known as skay
=== skay is now known as Guest81309
=== Guest81309 is now known as skay
[15:38] <ackk> hi, could someone please point me to where I should look to get the info shown in "state-server-member-status" from the delta stream?
=== kadams54 is now known as kadams54-away
[15:40] <marcoceppi> ackk: you might want to ask in #juju-dev
[15:40] <ackk> marcoceppi, thanks
=== kadams54-away is now known as kadams54
[16:07] <jcastro> marcoceppi, man, awesome, I'd love to go to chicago to give a juju talk
[16:07] <marcoceppi> jcastro: yeah, I was about to reply but knew you were a lot closer
=== kadams54 is now known as kadams54-away
[17:40] <thebozz> Hi guys, we're having trouble deploying OpenStack over MAAS using openstack-install. We're using this tutorial: http://www.ubuntu.com/download/cloud/install-ubuntu-openstack . We're at step 4, and we're getting this output: http://pastebin.com/Byaxct7c
[17:52] <marcoceppi> thebozz: looks like you're hitting a timeout, where the deployment is taking too long to run
[17:52] <marcoceppi> if you run the openstack-install script again it should pick up where it left off
[17:52] <designated> can someone please tell me what I'm doing incorrectly? "juju deploy --config local.yaml --to 19 cs:~openstack-charmers/trusty/percona-cluster mysql" works just fine, but when I try to add additional service units to specific machines, "juju add-unit --to 20 cs:~openstack-charmers/trusty/percona-cluster mysql" fails with: ERROR unrecognized args: ["mysql"]
[18:03] <lazyPower> designated: you're close
[18:03] <lazyPower> designated: it's juju add-unit -n # --to # mysql
[18:04] <lazyPower> you don't need to specify the cs: charm path for adding units - the state server has already loaded that charm source under the alias "mysql" for your deployment
[18:04] <designated> lazyPower: so after specifying the cs: path during deployment, it isn't required for future service-unit additions... nice, thank you.
[18:04] <lazyPower> That's correct ;)
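[Editor's note: a sketch of the working sequence from the exchange above, against a live Juju 1.x environment; machine numbers 19/20 are taken from the log. The charm URL and the "mysql" alias are given once at deploy time, and later units are addressed by the alias alone.]

```shell
# Deploy names the charm source and binds it to the alias "mysql"...
juju deploy --config local.yaml --to 19 \
    cs:~openstack-charmers/trusty/percona-cluster mysql

# ...so additional units only need the alias, not the charm URL.
juju add-unit --to 20 mysql
```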
[18:07] <Odd_Bloke> aisrael: Turns out VirtualBox historically has some issues with DNS servers on 127.0.[01].1; there is a workaround that works, which I'm about to push up. :)
[18:23] <Odd_Bloke> aisrael: http://bazaar.launchpad.net/~ubuntu-on-ec2/vmbuilder/jenkins_kvm/revision/552 is the fix; am rebuilding vivid images now, will let you know when a new one is available to test.
[18:41] <Odd_Bloke> aisrael: Working (for me) images now available at http://cloud-images.ubuntu.com/vagrant/vivid/current/
[18:41] <Odd_Bloke> aisrael: Let me know if they work for you and I'll kick off builds for trusty and utopic.
[18:59] <designated> I'm following https://wiki.ubuntu.com/ServerTeam/OpenStackHA . I have deployed 4 instances of mysql on 4 separate nodes, but I have a question about the "hacluster mysql-hacluster" charm. Does this only deploy mysql-hacluster on a single node?
[18:59] <designated> or can I deploy the "mysql-hacluster" charm on all 4 nodes as well?
[19:11] <designated> when attempting "juju deploy hacluster mysql-hacluster --to 19" I receive the following: ERROR cannot use --num-units or --to with subordinate service
[19:12] <designated> is there another way to specify a specific machine when deploying a subordinate service?
[19:16] <aisrael> Odd_Bloke: Excellent! Grabbing the image to test now.
[19:16] <lazyPower> designated: subordinate services by design get deployed into scope:container when you relate them to the service
[19:17] <lazyPower> designated: so you don't need to deploy them --to anything
[19:18] <designated> lazyPower: ahh, so if I just deploy the subordinate service, it will get installed on all necessary nodes when the relation is built?
[19:18] <lazyPower> designated: correct. if subordinates are not related to anything, they are transient "unplaced" services
[19:19] <lazyPower> and only exist on the bootstrap node as a deployment alias, ready for integration when you decide to relate it :)
[19:19] <designated> lazyPower: fair enough. thank you for the explanation.
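[Editor's note: a sketch of the subordinate workflow just described, using the service names from the log; the relation endpoint inference is an assumption about how Juju matches the hacluster interface. The subordinate is deployed with no placement, and relating it places a unit inside each mysql unit's container scope.]

```shell
# Deploy the subordinate with no --to; it stays "unplaced"...
juju deploy hacluster mysql-hacluster

# ...until a relation pulls it into each principal unit.
juju add-relation mysql mysql-hacluster
```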
[19:19] <lazyPower> designated: no problem. Have you had a chance to run through any of the charm schools or juju videos?
[19:20] <designated> lazyPower: not yet but I will. is there a recommended place to start?
[19:22] <designated> lazyPower: also, will this cause problems with other services deployed in HA mode later on? I expect these openstack services to run on the same 4 nodes, all in HA mode. I've built it this way manually; I just don't know if some of the charms require separate physical machines.
[19:23] <lazyPower> designated: there's an overview here - https://juju.ubuntu.com/resources/videos/ - but a lot of the charm school videos are deeper subject matter, more on charm construction: how to write them, test them, and what options are available to you as a user to troubleshoot an environment should something go awry
[19:24] <lazyPower> designated: i'm considering making a video aimed at brand-new users to go from install to orchestrated stack, covering the different methods to get from A to Z - as I think that would be useful, covering core concepts behind the differences in charms and subordinates, etc.
[19:24] <lazyPower> but that's a fairly large project and I don't have an ETA on if/when that would be done
[19:24] <designated> lazyPower: as an example, the provided local.yaml shows a different VIP for each service. I want them all to be accessible from the same VIP.
[19:24] <lazyPower> designated: well, when you do the --to, which we have dubbed hulk-smash mode
[19:25] <designated> lazyPower: thanks for the link, bookmarking now.
[19:25] <lazyPower> you can run into issues down the road in terms of scale. We actually recommend you use --to lxc:# or --to kvm:# - which will isolate the service in a container or VM respectively - but there's an end of the sidewalk with that as it stands today
[19:25] <lazyPower> the networking story between containers and VMs is being worked on this cycle
[19:25] <lazyPower> but if you're planning on keeping those services colocated - and not doing much in terms of scale - you're fine to continue colocating with --to
[19:26] <lazyPower> marcoceppi might have additional details in that department. I myself hulk-smash to save $$ when I'm deploying my own projects, but I'm small potatoes - if you're setting up, say, a telecommunications network, I would definitely want to place services on proper machines so scale-out is a snap
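[Editor's note: a sketch of the isolated placement lazyPower describes; the charm names and machine numbers are illustrative assumptions. Each service lands in its own LXC container on the shared physical machine rather than being hulk-smashed directly onto it.]

```shell
# Each --to lxc:# spawns a fresh container on that physical machine,
# so colocated services stay isolated from one another.
juju deploy cs:trusty/keystone --to lxc:19
juju deploy cs:trusty/glance --to lxc:19

# Scaling out later places a new container on another machine.
juju add-unit keystone --to lxc:20
```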
[19:26] <designated> lazyPower: I'm deploying openstack and basically want the same 4 controllers to provide all services in HA mode.
[19:26] <designated> it's just not feasible to have a separate physical machine for each service.
[19:27] <lazyPower> Completely understood, and I think that's an acceptable route to take.
[19:28] <lazyPower> it's kind of dependent on your networking setup and how it's all configured - but I'm not an openstack expert - what I suggest is to export your bundle when you've completed your deployment, scrub any sensitive details out of the config options, and post it to the list and ask for a review - if it looks feasible and resilient enough to change
[19:28] <designated> lazyPower: am I correct in understanding you're recommending deploying each of these services to LXCs on each node?
[19:28] <lazyPower> one of our OpenStack engineers should be able to give you feedback and insight as to your deployment configuration
[19:28] <lazyPower> designated: well, that's a blanket statement when it comes to co-location of services. LXC containers or KVM VMs will isolate the services and provide density.
[19:34] <marcoceppi> o/ designated, reading scrollback
[19:36] <marcoceppi> designated: is this for a private openstack deployment?
[19:37] <marcoceppi> on physical hardware with maas?
[19:41] <designated> marcoceppi: yes, it is a private deployment on metal using MAAS.
[19:45] <designated> marcoceppi: why do you ask?
=== kadams54 is now known as kadams54-away
[19:55] <marcoceppi> designated: you should really use isolation, like lxc: / kvm:, when deploying
[19:55] <marcoceppi> designated: I figure you'll be using 4 nova-compute nodes as well?
[19:55] <designated> marcoceppi: okay, I'll do that.
[19:56] <designated> marcoceppi: I'll actually have 28 compute nodes
[19:57] <designated> marcoceppi: what is the advantage of using kvm over lxc, or vice versa?
=== kadams54-away is now known as kadams54
[20:02] <designated> marcoceppi: the compute nodes will each run on dedicated hardware. I just want all of the other openstack and supporting services to run on 4 physical nodes (openstack controllers).
[20:16] <aisrael> Odd_Bloke: The vivid image looks good! DNS looks good, apt-get is happy.
[20:25] <designated> when deploying to an LXC, can you still specify a physical NIC in the charm's configuration?
=== kadams54 is now known as kadams54-away
=== keithzg_ is now known as keithzg
[20:53] <marcoceppi> designated: not quite, but you can set up a bridge network to the NICs you care about for lxc containers in maas
[20:54] <marcoceppi> maas is the best-supported substrate for containerized deployments
[20:54] <marcoceppi> and that's what you'll want to do
[20:54] <marcoceppi> I just need to look that up, one min (maybe 10)
[21:00] <designated> marcoceppi: I deployed mysql to bare metal and it worked fine; when I do the same thing but deploy to lxc on each of the 4 nodes, it never finishes, it just sits in a pending state. I'm guessing it has something to do with specifying physical interfaces in the configuration. Is the charm supposed to build the bridge, or is this something I must do manually?
[21:01] <marcoceppi> designated: this is something either maas or you will need to do prior to deployment
[21:04] <designated> marcoceppi: thank you
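[Editor's note: a sketch of the manual host-bridge setup marcoceppi alludes to, for trusty-era ifupdown networking; the interface names eth0/br0 are assumptions, and MAAS can be configured to manage this instead. The physical NIC is enslaved to a bridge that LXC containers then attach to.]

```shell
# Append a bridge stanza on the physical node (requires root);
# eth0 moves its address to br0, which containers can share.
cat >> /etc/network/interfaces <<'EOF'
auto br0
iface br0 inet dhcp
    bridge_ports eth0
EOF
ifup br0
```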
[21:14] <designated> marcoceppi: If I deploy all openstack supporting services to lxc on 4 nodes, will it be alright to install neutron-gateway directly on the same 4 nodes, outside of an lxc, to avoid networking issues?
[21:23] <marcoceppi> designated: I'm not sure, that ventures into depths of permutations I have not yet tried
[21:23] <marcoceppi> you might want to email openstack-charmers and ask them
[21:24] <marcoceppi> https://launchpad.net/~openstack-charmers
[21:36] <designated> marcoceppi: thank you
=== scuttle|afk is now known as scuttlemonkey
=== beisner- is now known as beisner

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!