[00:23] <stokachu> do bundles support --to machine?
[00:54] <marcoceppi> stokachu: yes
[00:55] <stokachu> marcoceppi: whats the syntax
[00:55] <stokachu> in the yaml file
[00:55] <stokachu> machine: 1?
[00:55] <marcoceppi> stokachu: http://pythonhosted.org/juju-deployer/config.html#placement
[00:55] <stokachu> marcoceppi: cool thanks
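For reference, the placement syntax from the linked juju-deployer docs looks roughly like this in a bundle (service names and charm URLs here are illustrative, and the exact syntax should be checked against that page):

```yaml
my-stack:
  series: precise
  services:
    mysql:
      charm: cs:precise/mysql
      to: "0"             # bare machine ids: only 0, the bootstrap node, is guaranteed
    wordpress:
      charm: cs:precise/wordpress
      to: "lxc:mysql=0"   # colocate in an LXC container on mysql's first unit
```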
[00:56] <thumper> o/ marcoceppi
[02:16] <marcoceppi> \o thumper
[02:17] <thumper> marcoceppi: tried out the new debug-log?
[02:17] <marcoceppi> nope
[02:17] <marcoceppi> what am I in store for?
[02:20] <thumper> marcoceppi: fun :-)
[02:20] <marcoceppi> thumper: is it in 1.19?
[02:21] <thumper> yep
[02:21] <thumper> works with local now
[02:26] <lazyPower> ooooo
[02:26]  * lazyPower installs
[02:27] <jose> hey lazyPower, mind giving me a hand with a test I'm writing?
[02:27] <jose> I just want to confirm my python code is not bad
[02:29] <lazyPower> link?
[02:31] <lazyPower> thumper: niiiiiiice
[02:32] <jcastro> o/ thumper!
[02:32] <jcastro> hey, marco and I had an idea today
 That sounds amazing, how can I implement it?
[02:32] <jcastro> well, we were thinking that the resolved --retry thing in debug-hooks is painful
[02:32] <jcastro> so how about you debug-hooks
[02:32] <thumper> jcastro: ?!
[02:33] <jcastro> and from inside the unit you do like "hulk smash"
[02:33] <jcastro> and it does the equivalent of a resolved --retry
[02:36] <thumper> jcastro: we can talk about improving charm dev experience in vegas
[02:36] <thumper> jcastro: I'm assuming you are coming to vegas?
[02:36] <jcastro> I am ready to buy many beers
[02:36] <jcastro> yep
[02:37] <thumper> jcastro: I *want* to make dev experience awesome
[02:37] <thumper> so lets do it!
[02:37]  * thumper goes to put that row on the spreadsheet
[02:38] <jose> lazyPower: http://paste.ubuntu.com/7265206/
[02:39] <lazyPower> jose: while that works, os.system really isn't seen anywhere in our code. we use subprocess.Popen or subprocess.call
[02:40] <jose> hmm, how should I call that?
[02:40]  * jose googles
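What jose is googling for would look something like this (a sketch only; `echo` stands in for `ping` so it runs without network access, and the hostname is made up):

```python
import subprocess

# subprocess.call() takes the command as a list of arguments rather than a
# single shell string, so jose's os.system line would become something like
#     subprocess.call(["ping", "-c", "1", hostname])
# `echo` is used below as a stand-in for ping so the sketch runs anywhere.
hostname = "example-host"  # hypothetical unit address
rc = subprocess.call(["echo", "pinging", hostname])
print("exit status:", rc)  # 0 means the command ran successfully
```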
[02:40] <lazyPower> otherwise, looks fine at first glance. i'm assuming you're just testing to see if it's up and available? A more semantic approach would be to use python requests to fetch the url and do some validation on what's returned
[02:40] <lazyPower> eg: the title of the webpage we expect to see. Just because a server responds to a ping doesn't mean the http interface is acting as it should.
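The fetch-and-validate approach lazyPower describes could be sketched like this, using stdlib urllib instead of the requests library so it is self-contained; the throwaway local server and its page title are stand-ins for a real deployed unit:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Throwaway local web server standing in for the deployed service; in a
# real test you would fetch the unit's public address. The title is made up.
class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<html><head><title>nyancat</title></head></html>")

    def log_message(self, *args):
        pass  # keep the sketch's output quiet

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/" % server.server_address[1]
body = urllib.request.urlopen(url).read().decode()
server.shutdown()

# Validate what came back, not just that something answered the port:
assert "<title>nyancat</title>" in body
print("title check passed")
```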
[02:40] <jose> yeah, as it's a telnet server all the test needs to do is check if it's up
[02:41] <lazyPower> ah, ok
[02:41] <lazyPower> when i saw nyancat i just assumed it was a website...
[02:41] <jose> :P
[02:41] <lazyPower> i haven't actually interfaced with the nyancat charm
[02:41] <jose> it's a telnet server which tells you for how many seconds you have nyaned
[02:44] <jose> I assume that by replacing that os.system line for `subprocess.call([ping -c 1] + hostname)` it should do good?
[02:45] <lazyPower> jose: the problem I have with just pinging the server is you're not really testing that the service is there. You're just validating that juju provided you a machine
[02:45] <jose> hmm, that's right
[02:45] <lazyPower> and we know juju works
[02:45] <jose> and a telnet command would make it endless(?)
[02:46] <lazyPower> jose: https://docs.python.org/2/library/telnetlib.html
[02:46] <lazyPower> use a telnet library
[02:46]  * jose checks
[02:46] <jose> whoops, I need to run, will be back in 30
[02:46] <lazyPower> now you can validate that not only did you get a telnet connection, but also the response you get: search for 'nyan' and boom, it's validated.
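The connect-then-validate idea could be sketched like this. Plain sockets are used here (telnetlib, which lazyPower links, was removed in Python 3.13), and the fake server and its banner text are made up so the sketch runs without a deployed charm:

```python
import socket
import threading

# Fake nyancat "telnet" server sending a one-line banner; the text is made up.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)

def serve():
    conn, _ = listener.accept()
    conn.sendall(b"You have nyaned for 42 seconds\r\n")
    conn.close()

threading.Thread(target=serve, daemon=True).start()

# Client side: connect, read the greeting, validate its content. Against a
# real unit, telnetlib.Telnet(host).read_until(b"nyan") would do the same job.
client = socket.create_connection(listener.getsockname())
banner = client.recv(1024)
client.close()

assert b"nyan" in banner  # search for 'nyan' and boom, it's validated
print("banner:", banner.decode().strip())
```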
[02:47] <jose> awesome!
[02:47] <jose> remain assured that I'll be making a test for that one, or at least try :)
[02:47] <lazyPower> Looking forward to seeing it in the queue :)
[03:11] <mwhudson> what do you do with machines that have got stuck?
[03:11] <mwhudson> juju destroy-unit doesn't seem to have done anything
[03:16] <marcoceppi> mwhudson: what version of juju?
[03:16] <mwhudson> marcoceppi: trusty
[03:17] <mwhudson> er i guess it's probably 1.18
[03:17] <mwhudson> 1.18.1-trusty-amd64
[03:17] <marcoceppi> mwhudson: you can use the --force flag
[03:17] <mwhudson> error: flag provided but not defined: --force
[03:17] <marcoceppi> mwhudson: terminate-machine *
[03:18] <mwhudson> ah
[03:18] <mwhudson> thanks!
[03:42] <thumper> jcastro: still around?
[05:21] <jose> hey davechen1y, do you have a min?
[05:21] <davechen1y> jose: shoot
[05:21] <jose> I need a hand with an amulet test which is giving me a weird error, let me pastebin
[05:22] <jose> my test is http://paste.ubuntu.com/7265781/ and it returns the following error http://paste.ubuntu.com/7265782/
[05:23] <davechen1y> jose: hmm, that isn't juju
[05:23] <davechen1y> i'm not sure i can help
[05:23] <jose> yeah, it's python-ish
[05:23] <jose> np
[05:25] <davechen1y> tn = telnetlib.Telnet("%s" % d.sentry.unit['nyancat/0'].info['public-address'])
[05:25] <davechen1y> isn't info['public-address'] already a string ?
[05:26] <jose> hmm, not sure
[05:26] <jose> afaik d.sentry.unit['nyancat/0'].info['public-address'] returns the public address
[05:27] <davechen1y> what type is that though?
[05:27] <davechen1y> some vague googling suggests that error message is due to some difference between strings and unicode strings in python
[05:28] <jose> hmm, will check then, thanks
[05:39] <jose> turns out the problem is on the response that tn.read_until is giving
[05:55] <davechen1y> jose: interesting
[05:56] <jose> I'm going to bed now but will fix and push tomorrow morning
[05:56] <jose> night!
[05:57] <davechen1y> ok
[07:03] <stub> nuclearbob: If you just want the PostgreSQL service standalone, you can use the admin_addresses configuration item. This should allow you to connect from the specified IP addresses directly to the PostgreSQL cluster. 'juju status' will give you the IP address and port (almost certainly port 5432)
[07:05] <stub> marcoceppi: Do you think the config file needs to be documented in the README too, or should the charm store be generating documentation using the descriptions in config.yaml?
[07:06] <stub> Hmm.... if I scroll down enough and click on 'config details', I get a fixed width and colorized rendering of the config yaml.
[07:06] <stub> https://manage.jujucharms.com/charms/precise/postgresql/config
[07:07] <stub> nuclearbob: per Ubuntu packaging, you should be able to connect as the 'postgres' user to the 'postgres' database, and create your users and databases from there using psql or pgadmin or whatever.
[07:07] <stub> nuclearbob: If that doesn't work, file a bug because this setup should be supported.
[07:15] <mdunc> Hi.  Got a question.  Fresh MAAS/Juju install on Ubuntu 12.04.  Deploying Keystone fails when it tries to do `keystone-manage db_sync` with the error "ImportError: No module named sql".
[07:15] <mdunc> Is anyone able to reproduce my problem?
[07:16] <mdunc> I was able to successfully deploy keystone yesterday.
[07:24] <mdunc> jamespage: you work on the keystone charm, right?  any idea?
[07:30] <jamespage> mdunc: give me 15
[07:30] <mdunc> jamespage:  ok.  just tried keystone-31 and it works.
[07:30] <mdunc> jamespage:  thanks for looking in  to it
[07:42] <jamespage> mdunc, ok - that's a regression then
[07:43] <jamespage> we just landed a large piece of work for keystone
[07:43] <jamespage> mdunc, what openstack-origin config are you using?
[07:44] <mdunc> jamespage:  i'm not using any special configuration.  all i did was deploy mysql, rabbitmq-server, and keystone so far all with default settings.
[07:45] <mdunc> jamespage: ah, I guess that setting would be "distro"
[07:45] <jamespage> mdunc, ok - that's essex then
[07:46] <jamespage> mdunc, I would recommend that you use a later openstack release
[07:46] <jamespage> essex is supported still
[07:46] <jamespage> but it's 5 versions old
[07:46] <jamespage> mdunc, openstack-origin=cloud:precise-havana
[07:46] <jamespage> mdunc, or from today cloud:precise-icehouse
[07:46] <jamespage> mdunc, let me fix this up tho
[07:48] <mdunc> jamespage: alright, i'll give that a shot
[07:51] <jamespage> mdunc, OK - I see the issue
[07:51] <jamespage> keystone < grizzly does not support a sql backend for policies
[07:56] <mdunc> Ah, good to know.  I just started playing with OpenStack a couple weeks ago and this week is my first time trying installation with Juju.  I still have a lot to learn :)
[07:57] <jamespage> mdunc, I'm really interested to hear your experience
[07:58] <jamespage> mdunc, I have a lot of users who have been using the charms for a while now - so someone fresh to them is a good checkpoint!
[07:59] <mdunc> jamespage: so far, it's been great!  juju definitely seems like the way to go for my company as we're all pretty busy and many people don't have the time to learn the ins and outs of openstack to set it up manually.  i gave a demo earlier and they all loved it.
[08:01] <vila> hi there, I can't bootstrap on hp cloud anymore: 2014-04-17 07:53:52 ERROR juju.cmd supercommand.go:300 cannot start bootstrap instance: no "trusty" images in region-a.geo-1 with arches [amd64 arm64 armhf i386]
[08:01] <jamespage> mdunc, if this is for a new deployment I'd strongly suggest using either cloud:precise-icehouse with 12.04 (3 years of support left) OR using trusty
[08:01] <vila> I used to have precise instances, did that change recently and how can I express that I want to stick to precise ?
[08:01] <jamespage> which has icehouse as default
[08:02] <jamespage> vila, hmm
[08:02] <mdunc> jamespage: the only thing that kind of caught me off guard is when removing a charm, it doesn't remove the installed services and juju doesn't seem to want to deploy to it again unless i manually tell it to.  overall though, it's pretty awesome!  you guys are doing great work!
[08:02] <jamespage> utlemming, ^^
[08:02] <jamespage> mdunc, I generally don't recycle machines like that
[08:02] <jamespage> mdunc, destroy-service then terminate-machine
[08:03] <jamespage> mdunc, thanks for the praise!
[08:03] <vila> jamespage, utlemming : If that helps, I did bootstrap yesterday afternoon successfully so ~14h ago
[08:03] <vila> hmm, make that ~16h sorry
[08:04] <mdunc> jamespage: i'll try that next time. thanks!  and yeah, we're definitely going to go with trusty.  maas is already running trusty, but haven't set up juju on trusty yet.
[08:05] <vila> jamespage, utlemming : for completeness: https://pastebin.canonical.com/108759/
[08:05] <jamespage> vila, utlemming is probably not up yet but he'll see this when he awakes
[08:06] <jamespage> mdunc, OK - I pushed the fix for the regression to the keystone charm - it should sync out to the charmstore ~1hr
[08:06] <jamespage> mdunc, however I know it's already OK with >= grizzly
[08:06] <vila> jamespage: ack, is there a way to force precise in environments.yaml or something ?
[08:06] <jamespage> vila, yeah default-series: precise
[08:07] <jamespage> vila, what version of ubuntu and which version of juju are you using to bootstrap?
[08:07] <vila> doh, I tried series ;) Thanks for the hint ! It seems to go further
[08:07] <mdunc> jamespage: thanks for the quick fix :)
[08:08] <jamespage> mdunc, always on the lookout for regressions when we land such a big change
[08:08] <vila> jamespage: trusty freshly updated so juju-1.18.1-trusty-amd64 according to the output of juju bootstrap --debug
[08:08] <jamespage> mdunc, the keystone charm basically got re-written this cycle in line with the other openstack charms
[08:08] <jamespage> vila, hmm - I think 1.18.1 might have introduced using the latest lts
[08:08] <jamespage> vila, but I may be wrong
[08:09] <mdunc> jamespage: i'll be testing them all pretty thoroughly over the next few days/weeks as i write up some documentation for rest of my team here.  is there an official place to file bugs against charms if i find any?
[08:09] <vila> jamespage: no worries, I'm pushing to upgrade to trusty anyway, I just need something that works now ;) So I'm good, thanks for *default-*series trick ;)
[08:10] <jamespage> mdunc, launchpad.net/charms
[08:10] <vila> jamespage: where are those options documented by the way ?
[08:11] <mdunc> jamespage: cool, thanks!  alright, i got to get back to work.  have a good day/evening/morning!
[08:11] <jamespage> vila, that specific one is mentioned in the 1.18.1 release notes - https://juju.ubuntu.com/docs/reference-release-notes.html
[08:11] <jamespage> but it's been there for a while
[08:12] <vila> ha thanks, I was on the site but didn't find/think about the release notes
[08:12] <vila> right, perfectly explained (as well as why this happens)
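For reference, jamespage's default-series hint translates to an environments.yaml stanza along these lines (the environment name and other settings are illustrative, not vila's actual config):

```yaml
environments:
  hpcloud:
    type: openstack          # HP Cloud uses the openstack provider
    default-series: precise  # pin new instances to precise rather than the newest LTS
    # ...region, auth-url, credentials, etc. unchanged...
```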
[11:45] <timrc> https://bugs.launchpad.net/ubuntu/+source/jenkins/+bug/1294005 <--- this makes me sad
[11:45] <_mup_> Bug #1294005: Please remove jenkins from trusty <jenkins (Ubuntu):Fix Released> <https://launchpad.net/bugs/1294005>
[11:46] <timrc> I wish there was more of a transition period here because this breaks us pretty abruptly
[11:54] <jamespage> timrc, sorry - but it's old and full of security vulnerabilities
[11:54] <jamespage> timrc, the charm supports switching to use the upstream repositories; suggest that happens by default
[12:34] <caribou> has anyone encountered the situation where mysql refuses to allow connection from other services when colocated on the same machine ?
[12:35] <caribou> I'm deploying mysql then keystone on the same machine. When adding the relation, mysql refuses the connection from the keystone charm
[12:35] <caribou> this doesn't happen if they're not on the same machine
[12:35] <caribou> (local provider btw)
[12:46] <jam1> caribou: note that the default mysql configuration is to consume 80% of your RAM with its buffer
[12:46] <jcastro> hey lazyPower
[12:46] <jam1> caribou: there is a known bug that mysql doesn't play nice on local because of that
[12:46] <jcastro> I subscribed ~charmers to the jenkins bug
[12:46] <jam1> you can change it in config, though
[12:47] <jcastro> we need to sort it before promoting jenkins to trusty
[12:47] <caribou> jam1: I'm not worried about that, it's just for charm testing
[12:47] <jam1> caribou: sure, but mysql w/ local fails because of that config, so you can change the config to do your test
[12:47] <caribou> jam1: and afaik, it used to work
[12:47] <caribou> jam1: ah, ok
[12:47] <jcastro> juju set mysql dataset-size="1G"
[12:48] <jam1> caribou: https://lists.ubuntu.com/archives/juju/2014-February/003442.html
[12:48] <jcastro> ^^ you want something like that
[12:48] <caribou> jcastro: thanks
[12:48] <jam1> caribou: I noticed that deploying mysql locally ended up giving like 15GB of ram and then mysql couldn't start
[12:49] <jcastro> yeah mine was doing 12G
[12:50] <jcastro> we should find a way to be like "If I am in LXC don't do that."
[12:50] <jam1> caribou: jcastro: https://bugs.launchpad.net/juju-core/+bug/1255242/comments/15
[12:50] <_mup_> Bug #1255242: HP cloud requires 4G to do what AWS does in 2G mem <ci> <hp-cloud> <intermittent-failure> <upgrade-juju> <juju-core:Invalid> <mysql (Juju Charms Collection):New> <https://launchpad.net/bugs/1255242>
[12:50] <jam1> is my comment on it
[12:51] <jam1> note, it also fails on HP because of that
[12:51] <jam1> on a default size machine
[12:51] <jam1> apparently 80% of 2GB or whatever doesn't leave enough room for the OS overhead
[12:51] <jam1> so on HP you have to deploy --constraints=mem=4G if you want the default charm 80% to work
[12:51] <caribou> jam1: yep, mine has 7G of VSZ
[12:52] <jcastro> jamespage, what's the recommendation for precise/jenkins?
[12:52] <jamespage> jcastro, use lts
[12:52] <caribou> excuse my ignorance, but why would the memory size have anything to do with remote connectivity?
[12:52] <jam1> caribou: because it doesn't actually start
[12:52] <jamespage> jcastro, that version is really old - maybe not the same security issues tho
[12:52] <jam1> it tries to lock that much RAM but can't
[12:52] <caribou> jam1: it does start, I can connect to it locally
[12:52] <jam1> caribou: weird, mine would just fail
[12:53] <jam1> I don't know why it would affect remote connectivity
[12:53] <jamespage> jcastro, I put a branch up for trusty - https://bugs.launchpad.net/ubuntu/+source/jenkins/+bug/1294005
[12:53] <_mup_> Bug #1294005: Please remove jenkins from trusty <audit> <jenkins (Ubuntu):Fix Released> <jenkins (Charms Trusty):New> <https://launchpad.net/bugs/1294005>
[12:53] <caribou> my feeling is that it has something to do with hostname resolution
[12:53] <caribou> if I try to connect remotely using the IP address, I get a normal failure to connect because of the password
[12:53] <caribou> if I use the hostname, then I get a mysql rejection message saying that the host is not allowed to connect
[12:53] <jcastro> jamespage, I've added it to the queue; since it's come up, does it by chance have tests?
[12:54] <jamespage> jcastro, no
[12:54] <caribou> jam1: anyway, i worked around it by having mysql on its own machine for the time being, I need my time to investigate other things
[12:54] <caribou> jam1: but I'm happy to know that I'm not missing something obvious
[12:55] <jcastro> jamespage, ok I'll see if we can prioritize it and get some tests, we have some new people now that can help.
[12:55] <jamespage> jcastro, excellent - thanks
[12:55] <jamespage> jcastro, my branch is functional for trusty - just finished testing it
[12:55] <jcastro> ack
[13:02] <stokachu> http://pythonhosted.org/juju-deployer/config.html#placement
[13:02] <stokachu> it says only machine id 0 is supported
[13:03] <stokachu> does that mean i can't deploy to say machine 2 which contains another node?
[13:10] <mattyw> sinzui, I'm having real trouble getting local provider to work in precise, I keep getting container failed to start errors, any thoughts? I believe my kernel is up to date
[13:13] <marcoceppi> stub: typically, we recommend you include config documentation in readme. manage.jujucharms.com is going away and the gui doesn't really illuminate that much about configs except a small excerpt
[13:14] <stokachu> marcoceppi: do bundles only support deploying to machine 0?
[13:14] <marcoceppi> stub: for machine id, yes
[13:14] <marcoceppi> stokachu: ^
[13:14] <marcoceppi> because that's the only guaranteed machine
[13:15] <marcoceppi> stokachu: you can still deploy to other services
[13:15] <stokachu> what about with kvm
[13:15] <stokachu> if i have 2 kvm instances and want to auto-deploy services to both machines that can't be done with bundles right?
[13:15] <marcoceppi> stokachu: what do you mean, 2 KVM instances?
[13:16] <stokachu> marcoceppi: if i do juju add-machine
[13:16] <stokachu> it brings up 2 kvm instances in local provider
[13:16] <sinzui> mattyw, I had container errors last year when I had stale cloud images http://curtis.hovey.name/2013/11/16/restoring-network-to-lxc-and-juju-local-provider/
[13:16] <stokachu> machine 1 and 2
[13:16] <stokachu> i wanted to be able to deploy services to both machines
[13:16] <marcoceppi> stokachu: so those are LXC, not KVM, and yes, you can't really do that in deployer, there is no add-machine concept
[13:16] <stokachu> charms*
[13:16] <marcoceppi> stokachu: however, you can do this: deploy the ubuntu charm to two units
[13:16] <stokachu> marcoceppi: not kvm?
[13:16] <marcoceppi> stokachu: then do placement ubuntu/0
[13:16] <sinzui> mattyw, I recommend you try removing the cache.
[13:17] <marcoceppi> stokachu: it's not kvm, local provider uses LXC
[13:17] <stokachu> marcoceppi: it also uses kvm
[13:17] <marcoceppi> unless this is something that landed in 1.19, local provider only uses lxc
[13:17] <stokachu> uh it's been there since 1.17.x
[13:18] <marcoceppi> stokachu: okay
[13:18] <mattyw> sinzui, that might suggest I was able to boot a trusty container but not a precise one
[13:18] <marcoceppi> are you using the local provider, or are you trying to deploy --to kvm: ?
[13:18] <stokachu> using the local provider
[13:18] <marcoceppi> the local provider /is/ lxc
[13:18] <stokachu> machine 0 maybe but machines 1 and 2 are kvm
[13:18] <marcoceppi> sinzui: can the local provider use kvm instead of lxc?
[13:18] <hazmat> stokachu, deployer can spec service colocation .. including kvm: / lxc:
[13:19] <hazmat> marcoceppi, yes
[13:19] <hazmat> it can use kvm instead of lxc
[13:19] <nuclearbob> stub: thanks for the info, I can't seem to get that to work, so I'll file a bug
[13:19] <marcoceppi> hazmat: wtf, why isn't this documented anywhere
[13:19]  * marcoceppi loses his mind
[13:19] <stokachu> hazmat: so i list machine 0,1,2 all kvm
[13:19] <stokachu> i just want to deploy to machineX
[13:19] <stokachu> not kvm on machine 0
[13:19] <hazmat> marcoceppi, markdown it and i'll work on it :-)
[13:20] <marcoceppi> hazmat: I don't even know where to start
[13:20] <hazmat> stokachu, so deployer doesn't let you reference arbitrary machines, because that's not reproducible. it will let you specify colocation with other services
[13:20] <hazmat> marcoceppi, what do you mean, i thought the md stuff was in flight?
[13:20] <sinzui> marcoceppi, yes, add container: kvm
[13:20] <stokachu> ok
[13:21] <marcoceppi> hazmat: it is, I was referring to I have no idea how to switch local to kvm
[13:21] <marcoceppi> hazmat: this would have saved a lot of lxc headaches
[13:21] <hazmat> stokachu, if you have a syntax for add-machine i'm game to add it.. i've been wanting something cause i use manual tons..
[13:21] <hazmat> marcoceppi, lxc headaches?
[13:21] <hazmat> marcoceppi, because of restrictions?
[13:22] <stokachu> hazmat: if i set the container: kvm in environments.yaml i just do a juju add-machine like normal
[13:22] <stokachu> can set constraints too
[13:22] <marcoceppi> hazmat: lxc hasn't been playing nice on one of my machines
[13:22] <marcoceppi> well, lxc/local provider
[13:22]  * hazmat nods
[13:23] <hazmat> i'm still using my jury-rigged manual + lxc which has been solid..
[13:23] <stokachu> kvm is really solid too
[13:23] <hazmat> marcoceppi, i'd check to make sure its not using aufs.. by default.. there was a version of juju during 1.17 dev that was doing aufs by default which caused issues.
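The local-provider-with-KVM setup stokachu describes boils down to something like this in environments.yaml (a sketch based on the conversation and his blog post; the exact keys are worth double-checking there):

```yaml
environments:
  local:
    type: local
    container: kvm        # machines from add-machine / deploy become KVM guests
    # network-bridge: br0 # optional; see the bridged-networking discussion later in the log
```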
[13:28] <jcastro> wait a minute
[13:28] <jcastro> are you telling me, I've been messing around with LXC this whole time
[13:28] <jcastro> and I could have been using KVM
[13:28] <marcoceppi> jcastro: yeah
[13:29] <marcoceppi> go figure
[13:29] <jcastro> Well, I'm pretty much out of options, I have asked over and over for core to document stuff
[13:32] <stokachu> jcastro: i happened to stumble across it while looking through the juju code
[13:32] <stokachu> some of the help commands talk about kvm too iirc
[13:32] <lazyPower> jcastro: roger
[13:32] <jcastro> well I know you can do kvm: blah
[13:33] <stokachu> yea
[13:33] <jcastro> but if I can wholesale switch to it
[13:33] <jcastro> that would be awesome
[13:33] <stokachu> jcastro: thats what i do
[13:33] <stokachu> kvm for everything
[13:33] <stokachu> works great
[13:34] <jcastro> ERROR state/api: websocket.Dial wss://10.0.3.1:17070/: EOF
[13:34] <stokachu> jcastro: http://astokes.org/juju-deploy-to-lxc-and-kvm-in-the-local-provider/
[13:34] <jcastro> I've been getting this all week with the local provider
[13:34] <stokachu> according to some of the juju devs this isn't a supported scenario
[13:34] <stokachu> but it works too for mixing both
[13:35] <jcastro> mind if I steal that?
[13:36] <stokachu> jcastro: go for it
[13:39] <jcastro> stokachu, I think the big one there is "network-bridge"
[13:39] <stokachu> yea definitely
[13:39] <mattyw> sinzui, didn't seem to fix my problem :/
[13:39] <jcastro> deploy --to kvm:blah we have written down
[13:40] <stokachu> jcastro: i think https://bugs.launchpad.net/juju-core/+bug/1304530 would be good to have too
[13:40] <_mup_> Bug #1304530: nested lxc's within a kvm machine are not accessible <addressability> <cloud-installer> <kvm> <local-provider> <lxc> <juju-core:Triaged> <https://launchpad.net/bugs/1304530>
[13:42] <jcastro> stokachu, hey so if I deploy to KVMs with the little bridge there
[13:43] <jcastro> are those KVMs accessible from other machines on the network?
[13:43] <stokachu> just on the network bridge
[13:43] <jcastro> ah
[13:43] <stokachu> you'd have to add a network bridge device in libvirt
[13:44] <stokachu> so you could setup br0 to bridge your eth0
[13:44] <stokachu> then have libvirt use br0
[13:44] <stokachu> then it would be accessible throughout your network
[13:44] <jcastro> ok I'll just crosslink to some libvirt docs on that
[13:44] <stokachu> http://wiki.libvirt.org/page/Networking#Bridged_networking_.28aka_.22shared_physical_device.22.29
[13:44] <stokachu> that should help
[13:45] <stokachu> theres a debian/ubuntu section
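The br0 setup stokachu outlines is the classic Debian/Ubuntu bridged config from that libvirt page; roughly (interface names and addressing are illustrative):

```
# /etc/network/interfaces -- bridge eth0 as br0, then point libvirt at br0
auto br0
iface br0 inet dhcp
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
```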
[13:45] <stokachu> hazmat: im about to start using your judo stuff
[13:46] <stokachu> their pricing is awesome with the per hour charge and cap of the monthly fee
[13:56] <hazmat> stokachu, cool.. you're in good company there.
[14:06] <themonk> is there any way to restart a unit? i just want to rerun start hook.
[14:08] <jcastro> you can `juju debug-hooks` to it to rerun the start hook
[14:08] <jcastro> `juju debug-hooks yourservice/#` the # being the unit number
[14:09] <jcastro> then in another terminal do "juju resolved --retry yourservice"
[14:09] <jcastro> then in the debug-hooks terminal do `hooks/start`
[14:09] <jcastro> and it will give you the exact error
[14:24] <themonk> jcastro, thanks but my problem is different, i am using local lxc, when i restart my local machine i get start hook error, but when i deploy there is no error
[14:24] <themonk> jcastro, i have a error log will you see it?
[14:24] <jcastro> themonk, sure
[14:25] <jcastro> which charm btw?
[14:28] <themonk> jcastro, it's my charm, can't disclose the name yet, will be open source soon :)
[14:28] <themonk> jcastro, company policy :)
[14:28] <jcastro> oh ok, I was going to say, if it was mysql we know that can break in LXC
[14:30] <jose> jcastro: hey, would you mind giving me a hand with some python-ish error I get when writing an amulet test?
[14:31] <rharper> failed bootstrap returned this:  ERROR waited for 10m0s without being able to connect: /var/lib/juju/nonce.txt contents do not match machine nonce
[14:31] <themonk> jcastro, no its not mysql
[14:31] <jcastro> jose, marco is your man for amulet
[14:31] <jose> then, marcoceppi: ping
[14:31] <jose> :)
[14:31] <rharper> looking for any help on what that might mean
[14:32] <marcoceppi> rharper: bootstrap timed out trying to do something; rharper, run with --debug to get more information
[14:32] <marcoceppi> jose: better to just post the issue than to ask to post :)
[14:33] <rharper> marcoceppi: ok
[14:33] <jose> when doing http://paste.ubuntu.com/7265781/ I get http://paste.ubuntu.com/7265782/ as a response, it looks like the telnet response contains something that doesn't match the character encoding, but I have no idea on how to fix it
[14:38] <jcastro> themonk, you can send me your info if you want me to take a look
[14:39] <themonk> jcastro, i am preparing in pastebin :)
[14:49] <jcastro> hey lazyPower
[14:49] <jcastro> I see you're down on the spreadsheet for jenkins.
[14:49] <lazyPower> i've done some work on the tests, but they didn't pass CI
[14:50] <marcoceppi> jose: you're using python2 formatting for python3 code
[14:51] <marcoceppi> jose: try "nyaned".encode() in tn.read_until
[14:51] <jose> "nyaned".encode() instaed of just "nyaned"?
[14:51] <marcoceppi> yes
[14:51] <jose> ok
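marcoceppi's fix comes down to the str/bytes split in Python 3: telnetlib's read_until matches against the raw byte stream, so the needle has to be bytes. A minimal demonstration (the banner text is made up):

```python
# Python 3 separates text (str) from bytes; socket/telnet data arrives as
# bytes, so the search needle must be bytes too.
needle = "nyaned".encode()   # marcoceppi's fix; encode() defaults to UTF-8
assert needle == b"nyaned"
assert isinstance(needle, bytes)

# Mixing the two types is what produces the error jose hit:
try:
    b"You have nyaned for 10 seconds".index("nyaned")  # str needle, bytes haystack
except TypeError:
    print("str needle against bytes haystack raises TypeError")
```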
[14:52] <jose> damn, I just realized I did that on my /tmp folder and didn't have another backup, pastebin lifesaver
[14:52] <rharper> marcoceppi: on the 10 minute timeout; is that controllable? I'm using juju deployer and I set the timeout in the config to 1800 seconds -- did that value change or is it not being honored any more?
[14:52] <jcastro> heh, are you writing tests for the nyancat charm?
[14:52] <rharper> marcoceppi: re bootstrap timeout
[14:53] <jose> yeah, seemed like an easy one :P
[14:53] <marcoceppi> rharper: bootstrap timeout is configurable last I checked, it's a timeout in juju not in deployer
[14:53] <rharper> marcoceppi: hrm, I thought juju deployer passed through the config to juju
[14:53]  * rharper looks at code
[14:53] <marcoceppi> rharper: no, bootstrap-timeout is an environments.yaml configuration
[14:53] <rharper> only?
[14:53] <marcoceppi> juju help bootstrap
[14:54] <marcoceppi> yes
[14:54] <rharper> ok
[14:54] <rharper> is there such a thing as retry on bootstrap timeout?  dealing with some flaky hardware
[14:55] <rharper> probably would need to do that in deployer; ok -- thanks for the help
[14:58] <themonk> jcastro, pastebin is on heavy load please wait
[14:58] <timrc> marcoceppi, fyi, my issue with juju not starting machines locally had to do, I think, with a stale lock in /var/lib/juju/locks :/
[14:58] <marcoceppi> timrc: lame! but interesting find
[14:58] <themonk> jcastro, http://pastebin.com/7LifaphW
[14:59] <jcastro> huh, that's a new one
[15:15] <smarter> Is it normal that when I do "juju destroy-environment local", I can't do "juju bootstrap" again until I kill the mongod process manually?
[15:16] <smarter> using juju 1.18.1 on trusty
[15:16] <themonk> jcastro, have you found any thing
[15:19] <jcastro> no, that's a new one for me
[15:19] <jcastro> marcoceppi, have you seen that error before? http://pastebin.com/7LifaphW
[15:20] <jcastro> smarter, no that's not normal, but I am having that problem as well
[15:23] <jose> woohoo, test passed!
[15:37] <lazyPower> jcastro: what specifically do you need done for the jenkins charm? just a trusty audit or do you need the full deep dive into the relationship issue and getting the tests passing?
[15:37] <jcastro> lazyPower, jamespage has a branch for trusty/jenkins
[15:37] <jcastro> which we could use, but if we could get tests in there that would be swell
[15:40] <lazyPower> i have tests pending that failed ci but work when running them locally
[15:40] <lazyPower> soooooo
[15:40] <lazyPower> it's part of that forever todo item: circle back and figure out why they are failing in CI
[15:41] <themonk> marcoceppi, i have a question: if i restart the lxc container machine, the apache2 charm's sites-enabled/default link goes missing, even though i enabled it with a2ensite default in an apache2 subordinate charm. btw the subordinate charm is for installing an apache mod
[15:42] <themonk> marcoceppi, when i deploy it works fine but after restarting the machine it does not
[15:42] <jcastro> lazyPower, yeah so basically he ported it to trusty, but if we can get it in with tests as part of the audit, that would be better
[15:43] <lazyPower> ack. I'll try to squeeze that in to the schedule. No promises - but i'll set it as a stretch goal
[15:43] <jcastro> nod
[15:44] <jose> jcastro: just to confirm, we're having a charm school tomorrow at 15 your time
[15:44] <jcastro> yep
[15:44] <themonk> jcastro, i have a question: if i restart the lxc container machine, the apache2 charm's sites-enabled/default link goes missing, even though i enabled it with a2ensite default in an apache2 subordinate charm. btw the subordinate charm is for installing an apache mod. when i deploy it works fine but after restarting the machine it does not
[15:44] <jcastro> me and lazyPower
[15:44] <jose> cool
[15:44] <jcastro> themonk, yeah I am unsure how well supported restarting an LXC container is
[15:45] <marcoceppi> themonk: that shouldn't happen
[15:45] <marcoceppi> themonk: no idea why it would be doing that
[15:46] <themonk> marcoceppi, my one observation is that after restart juju runs config-changed and start for a normal charm, right?
[15:47] <marcoceppi> themonk: it shouldn't, that's news to me
[15:47] <marcoceppi> mgz: ^^?
[15:48] <mgz> ...I can barely parse that
[15:48] <marcoceppi> mgz: if you restart a machine deployed by juju, does it run the config-changed and start hooks again?
[15:48] <mgz> start only runs once
[15:49] <mgz> it will probably run config-changed
[15:49] <marcoceppi> mgz: then, I think what themonk is seeing is config-changed isn't running for the subordinate, which is causing issues as the main charm reverts settings that the subordinate sets
[15:49] <marcoceppi> themonk: is that about right?
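A minimal sketch of the kind of guard marcoceppi's diagnosis suggests: the subordinate's config-changed hook can re-create the link itself, so a reboot that wipes sites-enabled/default gets repaired on the next hook run. The paths and the `default` site name are the stock Debian/Ubuntu apache2 layout; this is illustrative, not the actual charm code.

```shell
#!/bin/sh
# Illustrative config-changed hook for the subordinate (not the real
# apache2 charm code). APACHE_DIR is overridable for testing.
APACHE_DIR="${APACHE_DIR:-/etc/apache2}"

default_site_enabled() {
    # true when the default vhost symlink is present
    [ -L "$APACHE_DIR/sites-enabled/default" ]
}

ensure_default_site() {
    # re-enable the site only if a reboot removed the link
    default_site_enabled && return 0
    a2ensite default
    service apache2 reload
}

# the hook body would then just be:
# ensure_default_site
```

Because the check is idempotent, running it on every config-changed is harmless when the link is already in place.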
[15:49] <mgz> restarting lxc containers may be a little dodgy
[15:50] <themonk> marcoceppi, i dont think so
[15:51] <mgz> themonk: looking at the unit logs should tell you what got run
[16:01] <themonk> mgz, yes it calls config-changed then rel-join then rel-changed
[16:01] <mgz> that seems fine then.
[16:02] <themonk> mgz, but i get a start hook error after restart
[16:05] <mgz> themonk: then you need to debug that in the charm
[16:07] <themonk> mgz, start hook is ok no bug :)
[16:08] <themonk> mgz, and how do i debug when my machine is booting
[16:08] <mgz> themonk: debug-hooks will still work I'd think
[16:09] <themonk> mgz, it only happens after reboot
[16:10] <mgz> so? run debug-hooks, trigger the agent restart, see what happens
[16:11] <themonk> mgz, how do i use debug-hooks when the lxc container is booting during a reboot
[16:11] <mgz> just reboot that container
[16:12] <mgz> not your whole machine (I'm assuming local provider)
[16:12] <themonk> how?
[16:12] <themonk> yes mine is local provider
[16:12] <mgz> `juju ssh thatservice/unitnumber "sudo shutdown -r now"`
[16:13] <themonk> hmm great :) thanks :)
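mgz's two-terminal workflow, sketched out; the unit name apache2/0 is a made-up example, not taken from themonk's deployment.

```shell
# Sketch of the debug cycle mgz describes; apache2/0 is illustrative.
UNIT="apache2/0"

# Terminal 1: intercept hooks as they fire
#   juju debug-hooks $UNIT
#
# Terminal 2: reboot only the container (local provider), not the host
#   juju ssh $UNIT "sudo shutdown -r now"
#
# When the agent comes back up, each hook that runs (config-changed,
# relation hooks, ...) opens a tmux window in terminal 1; run the hook
# by hand there to see exactly where it fails.
```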
[16:41] <stokachu> any plans on getting mysql charms into trusty soon?
[16:42] <marcoceppi> stokachu: yes
[16:42] <stokachu> marcoceppi: possible eta?
[16:42] <marcoceppi> stokachu: early next week
[16:42] <stokachu> openstack relies on it and theyre on trusty
[16:42] <marcoceppi> stokachu: I know
[16:42] <stokachu> marcoceppi: anything i can do to make it happen faster
[16:49] <marcoceppi> stokachu: earliest I can do is tomorrow
[16:49] <stokachu> marcoceppi: that would be awesome
[17:00] <blahRus> So I shouldn't waste time trying to deploy openstack with juju today on 14.04?
[17:04] <marcoceppi> blahRus: no, you can deploy openstack on 14.04, you just need to deploy mysql from a local source
[17:04] <blahRus> marcoceppi: kk, any other charms not ready?
[17:04] <blahRus> all prep'ed for icehouse?
[17:05] <marcoceppi> blahRus: most all are there, I think rabbitmq-server is in the same boat as mysql though
[17:05] <marcoceppi> these should all be sorted by next week
[17:10] <blahRus> great, hopefully we can get the ISO's soon ;)
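Until the store charms land, marcoceppi's "deploy mysql from a local source" looks roughly like this. The repo layout is what `--repository` expects; branching the existing store charm as a starting point is an assumption on my part, not an official step.

```shell
# Rough sketch, wrapped in a function so it can be adapted;
# none of the paths here are authoritative.
deploy_local_mysql() {
    repo="$HOME/charms"
    mkdir -p "$repo/trusty"
    # assumption: start from the current store branch of the charm
    bzr branch lp:charms/mysql "$repo/trusty/mysql"
    # local: deploys out of --repository/<series>/<charm>
    juju deploy --repository="$repo" local:trusty/mysql
}
```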
[17:50] <hackedbellini> hi all. I'm trying to branch the gerrit charm (http://manage.jujucharms.com/~canonical-ci/precise/gerrit) since I need to do some modifications to use it, but I cant access the code
[17:50] <hackedbellini> trying to do a "bzr branch lp:~canonical-ci/charms/precise/gerrit/trunk" gives me a "lp:~canonical-ci/charms/precise/gerrit/trunk"
[17:50] <hackedbellini> also, it appears that this page (https://code.launchpad.net/~canonical-ci) is private now. I could access it 2 weeks ago
[17:53] <marcoceppi> sinzui: ^?
[17:54] <hackedbellini> I tried to paste the error and end up pasting the command again. The error is this one: http://pastebin.ubuntu.com/7269665/
[17:55] <hackedbellini> this url for example (https://bazaar.launchpad.net/~canonical-ci/charms/precise/gerrit/trunk/files) gives me an "Unauthorized" error
[17:55] <hackedbellini> this is the "Repository" link in the charm page
[17:56] <sinzui> hackedbellini, marcoceppi: ouch
[17:57] <hackedbellini> sinzui: do you know what is the problem?
[17:57] <sinzui> hackedbellini, marcoceppi: That team certainly is private.
[17:57] <sinzui> I no longer have super privs
[17:59] <sinzui> hackedbellini, I think we can both see that the team exists because it is subscribed to public bugs or branches.
[17:59] <sinzui> hackedbellini, I think you need to find another branch to work with
[17:59] <hackedbellini> sinzui: wow, really? that's really sad...
[18:00] <marcoceppi> sinzui: I thought you led the canonical-ci team; who do I have to bug to have them upstream their charms?
[18:01] <hackedbellini> sinzui: do you know why they made it private?
[18:01] <sinzui> marcoceppi, I was rather thorough about securing Lp's teams. I cannot see who is involved in the team
[18:02] <marcoceppi> sinzui: you're too good for our own good!
[18:08] <marcoceppi> blahRus stokachu jamespage mysql is promulgated to trusty
[18:09] <stokachu> marcoceppi: woot
[18:09] <stokachu> marcoceppi++
[18:15] <jamespage> marcoceppi, +1 thanks
[18:19] <blahRus> :)
[18:19] <blahRus> tyvm
[18:35] <jose> lazyPower: do we have a spreadsheet containing which tests are being worked on and which arent?
[18:36] <jose> it'd be nice so we don't have duplicate efforts
[18:55] <lazyPower> jose: sent it via private message
[18:55] <jose> awesome, thanks
[19:25] <hackedbellini> When trying to add a new machine, I'm getting this: http://pastebin.ubuntu.com/7270191/
[19:25] <hackedbellini> the only log I can get is this one: http://pastebin.ubuntu.com/7270192/ (from /var/log/juju/machine-12)
[19:25] <hackedbellini> I tried googling for both the agent-state-info error and the error on the log, but found nothing
[19:25] <hackedbellini> does anyone here have any idea on how to solve this?
[20:10] <themonk> if i upgrade to 14.04, is it ok with juju 1.18.1?
[20:25] <lazyPower> hackedbellini: looking now - 1 moment
[20:26] <lazyPower> hackedbellini: is this local provider?
[20:33] <Kupo24z> cool
[20:38] <hackedbellini> lazyPower: yes it's local provider, using lxc containers
[20:39] <lazyPower> hmm, machine #12 seems to tank, whereas 1 - 11 are fine, correct?
[20:41] <hackedbellini> lazyPower: yes, exactly! All other machines are running fine... I have some services running on them (jenkins, mediawiki, postgresql, etc) and they are running fine. But I cant add a new machine
[20:41] <hackedbellini> doing a "juju add-machine" triggers the problem
[20:41] <lazyPower> i'm looking for an answer to this, i'm not positive but i think there is an upper limit to the number of machines you can utilize on LXC
[20:41] <hackedbellini> lazyPower: really? D:
[20:41] <lazyPower> dont take that as the answer yet though, i dont have any proof to back it up
[20:42] <lazyPower> hackedbellini: whats your ram usage look like?
[20:42] <hackedbellini> lazyPower: hrm, I see... That machine is the 10th one. There are 9 currently running
[20:43] <hackedbellini> lazyPower: http://pastebin.ubuntu.com/7270712/
[20:44] <lazyPower> hackedbellini: confirmed there is no hard coded limit to the # of machines
[20:45] <lazyPower> hackedbellini: i would file a bug including the output of your diagnostics such as mem, and attach any relevant logs.
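A sketch of the diagnostics bundle lazyPower is asking for; the machine-12 log path comes from the paste above, the rest are generic and adjustable.

```shell
# Collect status, memory/disk usage, and the failing machine's log into
# one directory to attach to the bug report.
collect_juju_diagnostics() {
    out="$1"
    mkdir -p "$out"
    juju status > "$out/status.yaml" 2>&1 || true
    free -m > "$out/memory.txt" || true
    df -h > "$out/disk.txt" || true
    # machine agent log for the machine that failed to start (12 here)
    cp /var/log/juju/machine-12*.log "$out/" 2>/dev/null || true
}
```

Usage: `collect_juju_diagnostics /tmp/juju-bug` and attach the directory contents to the Launchpad bug.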
[20:46] <hackedbellini> lazyPower: ok, I'll do that. The problem is that I don't have sudo here, so there are some logs that I can't see :(
[20:46] <hackedbellini> one question: you said that there might be a maximum number of lxc containers I could run at the same time... if that's true, where could I try to check that number? I understand almost nothing about lxc
[20:47] <lazyPower> hackedbellini: confirmed there is no limit for you.
[20:47] <lazyPower> i dont remember where i saw that, but its incorrect.
[20:47] <lazyPower> sorry for the confusion
[20:51] <hackedbellini> lazyPower: ok, no problem!
[20:51] <hackedbellini> one quick question: how can I restart juju in a way that makes it look like I restarted the server?
[20:51] <hackedbellini> I didn't find any juju service, so I restarted lxc... but I don't know if I should restart anything else, especially because I did a juju upgrade-juju today
[20:51] <hackedbellini> one last question**
[20:52] <lazyPower> hackedbellini: not sure on local provider, as it bootstraps to the HOST environment.
[20:53] <stokachu> does juju automatically install 42-juju-proxy-settings on the deployed machines?
[20:53] <stokachu> even though i dont set a apt-http-proxy in my environments.yaml
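For reference, the proxy keys in question live in environments.yaml; stokachu's observation suggests juju writes /etc/apt/apt.conf.d/42-juju-proxy-settings whether or not they are set. The proxy URL below is a placeholder, not a recommendation.

```yaml
environments:
  local:
    type: local
    # per the report above, deployed machines may get a
    # 42-juju-proxy-settings file even when these are unset
    apt-http-proxy: http://your-proxy.example.com:3128
    apt-https-proxy: http://your-proxy.example.com:3128
```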
[20:54] <hackedbellini> lazyPower: np! Thanks anyway for the help :)
[20:55] <lazyPower> np, sorry i wasn't more help. its been weeks since i've gotten my hands in the local provider. I switched feet and went full force on MAAS / cloud deployments
[21:15] <cwchang> charles there ?
[21:16] <marcoceppi> lazyPower: ^
[21:16] <lazyPower> cwchang: greetings
[21:16] <cwchang> lazyPower, is that Charles?
[21:16] <cwchang> hi
[21:17] <cwchang> just trying out the irc freenode stuff
[21:17] <lazyPower> ah, welcome!
[21:17] <cwchang> thanks !
[21:17] <cwchang> sorry, we are not that used to these IRC tools
[21:17] <cwchang> I am always the first one to try
[21:18] <cwchang> I will let Marga join this to lead some discussion for VSM charm review
[21:18] <lazyPower> Ok. I'm pinning it pending an openstack charmer review as i noted in the ticket. It creates an interesting dependency loop on openstack / vsm.
[21:18] <cwchang> ok
[21:19] <cwchang> We will be back in a jiffy
[21:19] <lazyPower> ack. I'll be here cwchang
[21:28] <psivaa> hello, could i know how to work around 'src/launchpad.net/juju-core/utils/ssh/ssh_gocrypto.go:84: undefined: ssh.ClientConn' when running 'go install -v launchpad.net/juju-core/...' pls?
[21:29] <psivaa> go get -u -v launchpad.net/juju-core/... gives the following logs: http://pastebin.ubuntu.com/7270917/
[21:29] <lazyPower> psivaa: looks like you're trying to build juju from source? try #juju-dev, thats where the core team hangs out.
[21:30] <psivaa> lazyPower: ack, thx. yea i was trying to build from trunk.
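For context: `ssh.ClientConn` was removed from the upstream go.crypto API, so a fresh `go get` fetches a dependency newer than what juju-core trunk at the time compiled against. One hedged workaround is pinning go.crypto back to an older revision; the sketch below leaves the revision as a parameter, since the correct one isn't given in this log.

```shell
# Sketch only: pin the go.crypto checkout to a revision that still has
# ssh.ClientConn, then rebuild juju-core.
pin_go_crypto() {
    rev="$1"    # caller supplies a suitable mercurial revision
    # assumption: go.crypto fetched by `go get` lives at this import path
    cd "$GOPATH/src/code.google.com/p/go.crypto" || return 1
    hg update -r "$rev"
    go install -v launchpad.net/juju-core/...
}
# usage: pin_go_crypto <revision>
```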
[21:56] <lazyPower> cwchang: i'm about to EOD. I'll be floating around but not actively monitoring. If you need anything immediately, feel free to ping me.
[21:58] <jose> lazyPower: deploying the VSM charm on EC2 will not help, right?
[21:59] <jose> because if it does, I can help with trial deployments and leave it for the queue
[21:59] <lazyPower> jose: how well do you know openstack?
[21:59] <jose> lazyPower: not that well, let's say
[21:59] <lazyPower> jose: next time my friend.
[21:59] <jose> np :)
[22:00] <lazyPower> I appreciate the volunteering though. Your enthusiasm is infectious :)
[22:01] <jose> if it's within my possibilities to help, I'm glad to
[23:55] <stokachu> should all the openstack charms for trusty be working?