[07:32] <jam1> dimitern: geoff teale is inviting you (as I understand)
[07:32] <jam1> mup: whois geoff
[07:32] <mup> jam1: Unknown commands are unknown.
[07:32] <jam1> mup: who geoff
[07:32] <mup> jam1: Can't grasp that.
[07:33] <jam1> mup help
[07:33] <jam1> mup: help
[07:33] <mup> jam1: Run "help <cmdname>" for details on: bug, echo, help, infer, poke, run, sendraw, sms
[07:33] <dimitern> :D
[07:33] <jam1> mup poke geoff
[07:33] <jam1> mup: poke geoff
[07:33] <mup> jam1: Plugin "ldap" is not enabled here.
[07:35] <dimitern> jam1, it's tealeg
[11:52] <ackk> hi, does juju wait for machines to shut down properly before releasing/destroying them through the provider?
[11:53] <mgz> ackk: in what context?
[11:55] <ackk> mgz, specifically, with the maas provider. when calling destroy-environment I don't see DHCPRELEASEs in MAAS' dhcp. I see the shutdown message on the machines being destroyed but perhaps they are powered off by maas before the shutdown has completed?
[11:55] <mgz> juju doesn't generally destroy machines unless you tell it to, and a cloud's terminate-machine *is* a "shut down properly" - vms get a shutdown signal as normal
[11:56] <mgz> ackk: possibly. worth asking in #maas perhaps, I'm not sure what their intended semantics on destroy-environment are, but we just tell maas to release all the machines
[11:57] <ackk> mgz, so what does juju do on destroy-environment? destroy all services, then all machines?
[11:57] <ackk> mgz, sorry, I'm trying to put together all the pieces :)
[11:57] <mgz> nope, just releases all the machines straight away, doesn't do any fiddling around with state first
[11:58] <mgz> so, no relation hooks get run etc
[11:58] <ackk> mgz, ok, so in the case of maas it's just a release call
[11:59] <ackk> mgz, thanks
[11:59] <mgz> yup
[13:15] <tedg`> jose, lazyPower, thanks guys for landing the SSL support for OwnCloud!
[13:27] <Guest94660> Question about getting the peer ip after a stop/start of a machine on amazon ec2. unit-get seems to get the correct local ip but relation-get -r <id> private-address <unit-id> still gets the previous peer ip
[13:28] <Guest94660> do i need to overwrite the ip with relation-set?
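A minimal sketch of the workaround being asked about: a hook that re-publishes the unit's current address so peers pick up the new IP after an EC2 stop/start. The peer relation name "cluster" is hypothetical, and whether juju lets a charm shadow the built-in private-address key this way is worth verifying before relying on it.

```shell
#!/bin/sh
# hooks/config-changed (sketch). Uses the standard juju hook tools
# (unit-get, relation-ids, relation-set); "cluster" is a hypothetical
# peer relation name from metadata.yaml.
set -e

addr=$(unit-get private-address)

# Re-publish the current address on every peer relation so
# relation-get on the other side stops returning the stale IP.
for rid in $(relation-ids cluster); do
    relation-set -r "$rid" private-address="$addr"
done
```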
[14:29] <jose> tedg: enjoy! and please file a bug if you find anything else missing, I'll be glad to take a look at it :)
[14:31] <abrimer> does anyone have experience deploying openstack havana on a cluster of IBM HS21 servers?
[14:32] <tedg> jose, Cool, do you have plans to update for OwnCloud 7?
[14:32] <jose> abrimer: are you using maas?
[14:32] <abrimer> yes. maas and juju
[14:32] <tedg> jose, Not sure that I need it, but its version is one greater! :-)
[14:32] <jose> tedg: it's definitely on the queue for approval, it's a simple change so should be in the store soon!
[14:32] <jose> abrimer: cool, someone should be along to follow-up. I'm about to leave for a meeting
[14:32] <tedg> Great, thanks!
[14:32] <abrimer> I am having problems with getting quantum to setup the network properly
[14:33] <jamespage> abrimer, I have experience of both but not together - I might be able to help
[14:33] <abrimer> outstanding.
[14:33] <abrimer> I have been working with maas and juju for some time now
[14:33]  * jamespage listens
[14:34] <abrimer> I don't think that I have setup maas networking such that the openstack juju charms have what is needed.
[14:34] <jamespage> abrimer, are you having problems with accessing instances once deployed?
[14:34] <abrimer> yes.
[14:34] <abrimer> and the networking is not setup for eth1 at all
[14:34] <jamespage> abrimer, on the neutron-gateway?
[14:34]  * jamespage assumes you are deploying with neutron
[14:35] <abrimer> I truly believe that I don't have the deployment lifecycle setup properly
[14:35] <abrimer> yes. neutron via the quantum charm
[14:35] <jamespage> abrimer, good - so the neutron-gateway/quantum-gateway charm requires two network ports
[14:35] <abrimer> right. and I only have eth0 configured via maas
[14:37] <abrimer> when I create a network and attach the mac for each of the compute nodes, the nova-cloud-controller node, and the quantum node, my public ip changes in juju
[14:38] <abrimer> should I walk back my deployment all the way to how my maas is configured with regards to the networking?
[14:38] <jamespage> one for traffic to compute nodes, and one to provide access to a public network
[14:39] <jamespage> abrimer, eth0 only is OK for now
[14:39] <jamespage> eth1 needs to be connected to the 'public access' network
[14:39] <jamespage> just for the neutron-gateway node
[14:39] <jamespage> abrimer, HS21 is a blade center right?
[14:40] <whit> morning charmy world
[14:40] <abrimer> jamespage, yes the HS21 is a blade and I have 14 of them specifically for this project.
[14:40] <abrimer> all managed via the bladecenter chassis with their own cisco
[14:41] <abrimer> I cannot for the life of me get the neutron networking to work.
[14:42] <abrimer> everything up to the openstack-dashboard installs and I can log into horizon
[14:42] <abrimer> when I first get in there is no public network setup
[14:43] <abrimer> I can use the cli to create the network, subnet, and router but the instances will not get an ip assigned during initial vm boot
[14:43] <abrimer> I feel that my gre setup is terribly wrong but don't know where to start
[14:44] <jamespage> abrimer, the gre tunnels should run OK over the configured eth0 interfaces
[14:45] <jamespage> traffic breaks out over ext-port on the neutron-gateway charm
[14:45] <abrimer> I thought so too. I have assumed that following the standard maas and juju deployment would get me a working config. I know that I am doing something wrong but cannot figure what it is.
[14:45] <jamespage> abrimer, you may want to bump the mtu on your network interfaces to deal with the overhead of GRE
[14:45] <jamespage> (and switch)
[14:45] <abrimer> I have NOT done that.
[14:46] <jamespage> abrimer, it should still work without doing that
[14:46] <jamespage> but you might see some issues - ping at least should work
[14:46] <abrimer> the cisco that is in the IBM chassis is essentially a 2950 and will not allow for an MTU less than 1500. Is that a problem?
[14:47] <jamespage> bigger is better
[14:47] <jamespage> 1546
[14:47] <abrimer> I allowed ssh (22) and icmp for the default sec groups but ping will not work for me
[14:47] <abrimer> OH. Bigger not smaller
[14:48] <jamespage> yup
[14:48] <abrimer> I thought that 1400 was my target.
[14:49] <abrimer> I know enough to be dangerous at this time. I have the cli commands down pat for nova, neutron, and ovs but I think that my overall understanding of the network layouts is deficient
[14:49] <jamespage> abrimer, you can drop the instance MTU via the neutron-gateway charm but it's not 100% reliable
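The arithmetic behind "bigger not smaller": the goal is to keep the instances at a 1500-byte MTU and give the physical network enough headroom for the GRE encapsulation instead of shrinking the guests down to 1400. The overhead breakdown below is an approximation; exact figures depend on the tunnel setup.

```shell
# GRE-over-Ethernet overhead, roughly: outer IP header (20 bytes)
# + GRE header (4 bytes) + inner Ethernet header (14 bytes) = 38 bytes.
echo $((1500 + 38))    # minimum physical MTU; 1546 leaves a little margin

# One-off change on a node (also raise it on the switch ports):
#   sudo ip link set eth0 mtu 1546
```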
[14:50] <abrimer> is multi-host networking something that I would want? or is that only applied if using nova-networking?
[14:50] <jamespage> only nova net
[14:50] <abrimer> cool. thought so but wanted to be sure.
[14:51] <abrimer> are there any other gotchas that I may want to eyeball beyond MTU?
[14:52] <abrimer> It has been my understanding that using maas and juju should provide a working stack without my having to dig into the servers and manually configure things. Is that a true statement?
[14:52] <jamespage> yes
[14:52] <jamespage> you should be able to do everything via charm configuration
[14:53] <jamespage> abrimer, but that does assume that the neutron-gateway charm is on a node networked correctly
[14:53] <abrimer> thought so. again, I think that there is something (probably simple and obvious) that I am doing wrong here. Just cant put my finger on it.
[14:54] <abrimer> right. both eth interfaces are available for that blade and if I load an OS alone it is pingable, ssh, scp the works.
[14:54] <jamespage> abrimer, ok
[14:55] <jamespage> abrimer, when you have deployed, can you get to all the servers?
[14:55] <abrimer> I think that I will put the MTU to 1546 as you recommend and see where it leads me.
[14:55] <abrimer> jamespage, yes all servers are available
[14:55] <jamespage> abrimer, you only need to apply that to eth0
[14:56] <abrimer> even when eth0 is my maas ip address?
[14:56] <jamespage> abrimer, not sure I understand that
[14:56] <jamespage> by default, all the machines should provision with a configured eth0
[14:57] <abrimer> sorry. when the server is installed maas provides the ip 10.10.30.X
[14:57] <jamespage> abrimer, awesome
[14:57] <jamespage> abrimer, so on the compute and gateway nodes run "sudo ovs-vsctl show"
[14:57] <jamespage> and see if gre tunnels are present for all compute and gateway nodes
[14:58] <abrimer> right, and I see the br-tun and the local_ip and remote_ip are on the 10.10.30
[14:58] <abrimer> they are correct for each compute node in regards to the remote-ip address
[14:58] <jamespage> abrimer, that's good
[14:59] <jamespage> abrimer, on the neutron-gateway node check that br-ex has a network port
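The checks jamespage is describing can be run directly on each node; the bridge names below assume the default br-tun/br-ex layout the quantum-gateway charm sets up, and the grep pattern is an assumption about ovs-vsctl's output format.

```shell
# On every compute and gateway node: expect one gre port per remote node.
sudo ovs-vsctl list-ports br-tun      # tunnel ports on the br-tun bridge
sudo ovs-vsctl show | grep remote_ip  # endpoints should be on 10.10.30.x

# On the neutron-gateway node only: br-ex needs a physical NIC attached.
sudo ovs-vsctl list-ports br-ex       # should include the ext-port (e.g. eth1)
```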
[14:59] <abrimer> I setup eth1 with a 10.10.20 so that there is external access and I manually change ovs to that interface
[15:00] <jamespage> abrimer, you don't need to do it directly on the server
[15:00] <abrimer> OH
[15:00] <jamespage> juju set neutron-gateway ext-port=eth1
[15:00] <whit> cory_fu: having some connectivity issues, relocating, but then want to catch up about monit start/stop stuff
[15:00] <cory_fu> kk
[15:01] <abrimer> I know that this is simple stuff, probably very elementary to you. I appreciate your help so much
[15:01] <jamespage> abrimer, np
[15:03] <abrimer> I have examined the openstack docs with regards to the networking and have looked at the troubleshooting document, do you have any advice regarding how to troubleshoot gre tunnels without setting up instances?
[15:03] <abrimer> you know, using the cli to examine the network
[15:04] <lazyPower> tedg: all in a days work. *hattip*
[15:08]  * whit relocates
[15:09] <jamespage> abrimer, neutron agent-list
[15:09] <jamespage> is useful - sorry have to drop for a bit
[15:09] <jamespage> also doing meetings today
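A few read-only checks along the lines of jamespage's suggestion, usable without booting instances. This assumes admin credentials are sourced (e.g. from an openrc/novarc file) on the node where the CLI is run.

```shell
neutron agent-list     # every agent should report alive (":-)")
neutron net-list       # networks created via the cli
neutron router-list
# On a gateway or compute node, check the tunnel bridge is passing traffic:
sudo ovs-ofctl dump-flows br-tun | head
```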
[15:09] <abrimer> no problem jamespage, thanks for all of your help man.
[15:11] <jamespage> yw
[16:06] <lazyPower> Tribaal: Thanks for these CH merges. +1'd and merged upstream.
[17:24] <Tribaal> lazyPower: welcome :)
[17:26] <Tribaal> lazyPower: I have a few more things to hack on CH and charms using it... it seems there was a lot of organic growth lately, that leaves some room for clearing things up :)
[17:27] <lazyPower> Yeah, may want to ping tvansteenburgh and marcoceppi - as they are spear heading some new efforts with CH to make it more beginner friendly and address some of our longer running issues
[17:28] <Tribaal> lazyPower: ah? Is there anywhere where I could track discussions regarding CH? Despite being pretty involved in Ubuntu's Openstack plans I never see anything about it anywhere.
[17:30] <lazyPower> We talk about them occasionally in here. Mostly though that's been discussed during our standups and in the bugs filed against CH
[17:30] <Tribaal> ok
[17:31] <Tribaal> that's not very convenient for "external" communication however :/
[17:31] <Tribaal> one thing at a time; first, let's get that code in better shape :)
[17:39] <tvansteenburgh> Tribaal: question about https://code.launchpad.net/~tribaal/charm-helpers/drop-juju-gui-dead-code/+merge/228986
[17:40] <tvansteenburgh> did you actually check every charm in the store to make sure none are using that code?
[17:59] <frobware> how can I prevent add-machine from adding a proxy? I added one at some point in the past, but it was bogus. Now every time I `add-machine' I see this: http://pastebin.ubuntu.com/7953894/
[18:04] <frobware> ah, I see: juju --debug unset-env apt-http-proxy
[18:04] <frobware> I was using set-env apt-http-proxy=
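The distinction frobware ran into, spelled out: assigning an empty value is not the same as removing the key.

```shell
juju set-env apt-http-proxy=      # leaves the key set to an empty string
                                  # (didn't help here)
juju unset-env apt-http-proxy     # removes the setting entirely
juju get-env apt-http-proxy       # verify it's gone
```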
[18:06] <Tribaal> tvansteenburgh: no, I haven't checked *all* of the charms, indeed. I tried to find a list of all charms, but couldn't find one. I would be happy to have a script check every charm methodically if I could be shown such a list :)
[18:07] <Tribaal> tvansteenburgh: I checked the most obvious ones (juju-gui being one of them), and decided to apply the "see who complains" approach, TBH
[18:08] <tvansteenburgh> Tribaal: yeah, i think that approach is probably fine for this. i can't decide how much i care
[18:08] <Tribaal> tvansteenburgh: either way, the juju-gui part needs a lot of work if it has to stay - it contains duplicates of a lot of functionality (for example, it reimplements the apt-get wrappers)
[18:08] <Tribaal> so I chose the path of least resistance :)
[18:08] <tvansteenburgh> Tribaal: it's impossible to verify that no charm uses that code anyway, b/c charms can exist outside the store
[18:09] <tvansteenburgh> Tribaal: but it seems low risk to remove since charmhelpers are bundled with each charm. that won't always be the case though
[18:10] <Tribaal> tvansteenburgh: exactly. If somebody complains, I wager a lot of the functionality they actually need is either 1) available somewhere else in CH or 2) trivial to reimport...
[18:11] <Tribaal> tvansteenburgh: I'm happy to be pointed at and mocked if somebody comes back at us for removing that (and fix the mess, too).
[18:11] <Tribaal> :)
[18:11] <tvansteenburgh> lol
[18:12] <Tribaal> well, ok, "happy" might not be the right term here
[18:12] <tvansteenburgh> Tribaal: fair enough. i'm pinging in #juju-gui to see if anyone there complains, if not...
[18:12] <tvansteenburgh> ok they don't care either, i'll approve it
[18:13]  * Tribaal conjures the image of a guillotine in his mind. His French heritage approves.
[18:49] <tvansteenburgh> kirkland: have you tried transcode on ec2?
[18:51] <tvansteenburgh> kirkland: more to the point, do you know if it works there? everything deploys fine but i can't get the web ui to come up
[19:08] <sebas5384> hey lazyPower!
[19:09] <lazyPower> Whats ups sebas5384
[19:09] <sebas5384> sorry for the delay hehe, i only saw your messages about the dns server today
[19:09] <sebas5384> hehe
[19:09] <lazyPower> Oh, yeah! it's a great PoC charm, needs a bit more love
[19:09] <lazyPower> it'll work great in dev though - where your DNS can afford to not be HA
[19:09] <sebas5384> lazyPower: i understand your point, but for a local dev environment
[19:10] <sebas5384> its more than enough, what do you think?
[19:10] <lazyPower> certainly!
[19:10] <sebas5384> great! so, thinking in a local dev environment, into a vagrant box, how do you think it should be used?
[19:12] <lazyPower> are you distributing your services between many vagrant boxes?
[19:13] <lazyPower> it's not going to work in that sense if you are. It will work in a single LXC (or cloud) based environment, as all the services are in a single deployment map. You would then add the DNS service charm IP to your /etc/resolv.conf - all your domains are then available
[19:14] <lazyPower> so that will work in terms of Vagrant machine juju host -> service. i haven't tried parent of vagrant -> vagrant -> service. It wasn't intended to be used within vagrant, more so for EC2 environments or LXC local based environments. I would think if you proxy your DNS into that vagrant machine, you can reach them.
[19:14] <lazyPower> but that needs testing.
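The resolv.conf step lazyPower mentions, as a one-liner. `<dns-unit-ip>` is a placeholder for whatever address `juju status` reports for the DNS charm's unit, not a real value.

```shell
# Point the dev box at the DNS service charm (placeholder IP):
echo "nameserver <dns-unit-ip>" | sudo tee -a /etc/resolv.conf
```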
[19:42] <abrimer> jamespage, are you available for a question?
[20:35] <tvansteenburgh> jose you around?
[20:35] <jose> tvansteenburgh: hey! I am
[20:35] <tvansteenburgh> jose, you tested the transcode charm a bit right?
[20:36] <tvansteenburgh> jose, did you run it on ec2?
[20:36] <jose> tvansteenburgh: I did, yes! I could test it again if needed
[20:36] <jose> yep, EC2
[20:36] <tvansteenburgh> and it worked fine?
[20:36] <tvansteenburgh> i can't get the web ui to come up
[20:36] <jose> oh, is the port listed open in juju status?
[20:36] <jose> I recall the port not being called as open with open-port
[20:37] <tvansteenburgh> ah, ok. i'll check that, gotta redeploy my env
[20:37] <tvansteenburgh> that was probably it though, thanks
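If the missing open-port call is indeed the problem, the fix in the transcode charm would look like the sketch below. Port 80 is an assumption; use whatever port the web ui actually listens on.

```shell
# In the charm's install or config-changed hook:
open-port 80/tcp      # makes the port show as open in `juju status`

# The deployer still has to expose the service for the EC2 firewall:
juju expose transcode
```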
[20:38] <jose> np :)
[20:39] <jose> I'll give it a shot again in a minute, just finishing up a text
[20:40] <tvansteenburgh> jose, in that case, you wanna just leave your feedback here? https://bugs.launchpad.net/charms/+bug/1342843
[20:40] <jose> sure
[20:40] <tvansteenburgh> jose, no sense both of us testing it - i'll move on to something else
[20:40] <tvansteenburgh> jose, thanks!
[20:40] <jose> no prob!
[21:37] <jose> tvansteenburgh: hey, would you mind confirming the behavior I'm seeing? I'm just getting exit code 5 whenever I try transcode
[21:37] <tvansteenburgh> jose: where do you see that? in one of the unit logs?
[21:38] <jose> tvansteenburgh: correct, in the first unit logs, it never gets to convert it
[21:38] <jose> lemme pastebin
[21:38] <jose> http://paste.ubuntu.com/7955549/
[21:40] <tvansteenburgh> jose: ok, i'll give it a go
[21:40] <jose> thanks :)
[21:56] <tvansteenburgh> jose: my config-changed hook ran successfully
[21:57] <jose> wait, this is after doing 'juju set transcode input_url=link to video'
[21:57] <tvansteenburgh> yeah
[21:58] <tvansteenburgh> i used http://download.blender.org/demo/old_demos/diditdoneit.mpg
[22:03] <tvansteenburgh> jose: http://ec2-54-166-1-51.compute-1.amazonaws.com/transcode/job__diditdoneit.mpg_copy/
[22:03] <jose> I used this one
[22:04] <jose> https://ia600201.us.archive.org/14/items/ligouHDR-HC1_sample1/Sample.mpg
[22:04] <tvansteenburgh> i'll try it
[22:05] <tvansteenburgh> exit 5
[22:06] <jose> hmm, probably because it's httpS?
[22:13] <tvansteenburgh> jose: yeah it worked for me with http
[22:13] <jose> ok, I'll take a look later
[22:14] <jose> will try with some other links
[22:14] <tvansteenburgh> i'm gonna add a couple comments to the bug before i EOD, thanks for testing this!