[09:07] <gnuoy`> Tribaal, I have a more involved ch mp ( https://code.launchpad.net/~gnuoy/charm-helpers/hugepages/+merge/268214 ) if you have a sec that would be great but I understand if you don't :)
[09:34] <Geetha> hi
[13:22] <stub> marcoceppi: you dropped
[13:35] <aisrael> lazyPower: http://www.fastcompany.com/3027907/what-engineers-at-facebook-pinterest-snapchat-airbnb-and-spotify-listen-to-while-coding
[13:35] <lazyPower> haha the first playlist listed is a bunch o trance
[13:35] <lazyPower> niiiice
[13:36] <lazyPower> dece find, thanks aisrael
[13:36] <aisrael> lazyPower: my pleasure!
[14:51] <GS_> Hi, I am trying to deploy the mysql charm with "juju deploy mysql" on the ubuntu ppc64le platform, but the start hook is failing with exit status 1. Can anyone please help me out with this?
[15:05] <GS_> Hi, I am trying to install mysql on the ubuntu ppc64le platform using "juju deploy mysql", but the start hook is failing. Can anyone please help me out with this?
[15:35] <GS_> Hi, I am trying to install mysql on the ubuntu ppc64le platform using "juju deploy mysql", but the start hook is failing with exit status 1. It works fine on the ubuntu x86 platform.
[15:36] <apuimedo> lazyPower: jamespag`: any idea why `juju ssh` to an lxc container is not working?
[15:36] <apuimedo> and also I can't ping or connect to the openstack servers running on those machines
[15:39] <GS_> Can anyone help me out with this? Why is the start hook failing on ppc64le?
[15:44] <ddellav> GS_, when you run juju debug-log what error messages do you see?
[15:48] <GS_> 2015-08-18 08:41:44 INFO config-changed Processing triggers for ureadahead (0.100.0-16) ... 2015-08-18 08:41:44 INFO config-changed Setting up mysql-server (5.5.44-0ubuntu0.14.04.1) ... 2015-08-18 08:41:45 INFO juju-log dataset size in bytes: 6845104128 2015-08-18 08:41:46 INFO config-changed mysql stop/waiting 2015-08-18 08:41:49 INFO config-changed start: Job failed to start 2015-08-18 08:41:49 INFO juju-log Restart failed, trying again 
[15:49] <GS_> 2015-08-18 08:41:49 INFO config-changed stop: Job has already been stopped: mysql 2015-08-18 08:42:19 INFO config-changed mysql start/running 2015-08-18 08:42:20 INFO start mysql stop/waiting 2015-08-18 08:42:22 INFO start start: Job failed to start 2015-08-18 08:42:22 ERROR juju.worker.uniter.operation runhook.go:103 hook "start" failed: exit status 1
[15:50] <ddellav> GS_, try: juju ssh mysql/0 that should get you onto the container with mysql installed. Then you can debug mysql directly. It looks like it didn't like some config options and couldn't start
[15:50] <n3m8tz> Hi
[15:50] <ddellav> i'd check /var/log/mysql or wherever the logs are on the container
[15:50] <ddellav> (it'll be in the my.cnf)
[15:58] <GS_> 150818  4:42:22 InnoDB: Completed initialization of buffer pool 150818  4:42:22 InnoDB: Fatal error: cannot allocate memory for the buffer pool 150818  4:42:22 [ERROR] Plugin 'InnoDB' init function returned error. 150818  4:42:22 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed. 150818  4:42:22 [ERROR] Unknown/unsupported storage engine: InnoDB 150818  4:42:22 [ERROR] Aborting
[15:59] <lazyPower> apuimedo: in a charm school will be with you shortly
[16:00] <apuimedo> ok, thanks
[16:00] <rick_h_> GS_: this looks like the defaults issue that eco saw with the memory usage and the defaults.
[16:00] <rick_h_> jcastro: have a link handy? ^
[16:00] <rick_h_> aisrael: I think you were poking at it as well? ^
[16:01] <rick_h_> GS_: see https://bugs.launchpad.net/charms/+source/mysql/+bug/1373862 if that matches?
[16:01] <mup> Bug #1373862: MySQL doesn't deploy due to oversized dataset <mysql (Juju Charms Collection):Fix Committed by marcoceppi> <https://launchpad.net/bugs/1373862>
[16:01] <aisrael> oh yes.
[16:01] <rick_h_> GS_: so you might need to deploy with a different config value to get that going.
[16:01] <aisrael> When I'm running locally, I: juju set mysql dataset-size='256M'
[16:03] <aisrael> GS_: If you do that immediately after deploy, the charm should work.
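The failure in the pasted log is the pattern described in bug #1373862: the charm derives its default dataset-size (and hence the InnoDB buffer pool) from host memory, which can exceed what the unit can actually allocate. A rough, hypothetical sketch of that failure mode — `default_dataset_size`, the 80% fraction, and the 2 GiB container limit are all assumptions for illustration, not the charm's actual code:

```python
def default_dataset_size(total_mem_bytes, fraction=0.8):
    # Hypothetical sketch: derive a default dataset-size from total
    # memory, roughly how an "80% of RAM" heuristic would behave.
    # (The ~6.8 GB "dataset size in bytes" in the log is consistent
    # with about 80% of an 8 GiB host.)
    return int(total_mem_bytes * fraction)

def buffer_pool_fits(dataset_bytes, allocatable_bytes):
    # InnoDB aborts with "cannot allocate memory for the buffer pool"
    # when the requested pool exceeds what the unit can allocate.
    return dataset_bytes <= allocatable_bytes

host_ram = 8 * 1024**3          # 8 GiB host
container_limit = 2 * 1024**3   # assumed allocatable memory in the unit

requested = default_dataset_size(host_ram)
print(requested, buffer_pool_fits(requested, container_limit))
```

The `juju set mysql dataset-size='256M'` workaround simply replaces the derived default with a value small enough to allocate.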
[16:04] <GS_> Thank you all...I will try out this.
[16:04] <ddellav> :)
[16:49] <ejat> anyone can help me with this error : http://paste.ubuntu.com/12119236/
[16:54] <lazyPower> ejat: what substrate, charm, juju version, ubuntu version is this?
[16:54] <ejat> lazyPower : im trying the openstack-install
[16:54] <ejat> vivid
[16:57] <lazyPower> ejat: openstack-install as in autopilot from landscape?
[16:58] <ejat> yups ..
[16:58] <ejat> lazyPower : the team in #ubuntu-solution helping me now .. thanks
[16:58] <lazyPower> allright, if you need anything feel free to ping back :)
[17:20] <jeand> hi all
[17:20] <jeand> I pushed a new charm to my namespace on launchpad
[17:20] <jeand> http://bazaar.launchpad.net/~jean-deruelle/charms/trusty/mobicents-restcomm-charm/trunk/files
[17:21] <jeand> how long does it take to be indexed and available on the charm store at https://jujucharms.com/q/restcomm?series=trusty&type=charm ?
[17:21] <rick_h_> jeand: it takes 1-2 hours atm. We've got two systems (legacy and modern) that have to be kept in sync, so one waits for the other to be ready
[17:22] <jeand> Thanks rick_h_ for the information
[17:22] <jeand> another thing
[17:23] <jeand> I submitted a bug
[17:23] <jeand> to have it officially in the charm store
[17:23] <jeand> https://bugs.launchpad.net/charms/+bug/1473509
[17:23] <mup> Bug #1473509: Mobicents RestComm Juju Charm <Juju Charms Collection:New> <https://launchpad.net/bugs/1473509>
[17:23] <jeand> how long does it usually take to get a review?
[17:23] <rick_h_> jeand: hmm, so there's a process for this. I think you have to invite or assign the charmers team to it.
[17:23]  * rick_h_ looks up the docs around that
[17:24] <jeand> it seems I can't assign it to anyone else than me
[17:24] <jeand> "You may only assign yourself because you are not affiliated with this project and do not have any team memberships."
[17:24] <rick_h_> jeand: https://jujucharms.com/docs/stable/authors-charm-store#recommended-charms
[17:25] <rick_h_> not assign but subscribe it looks like per the docs
[17:27] <jeand> Thanks rick_h_ I should have RTFM better
[17:28] <rick_h_> jeand: looks like charm proof doesn't like your yaml for the tags either atm
[17:28] <rick_h_> jeand: it throws an error, it should be more like https://api.jujucharms.com/charmstore/v4/trusty/juju-gui-38/archive/metadata.yaml possibly
[17:28] <rick_h_> or I've got a really old charm-tools (/me checks that next)
[17:29] <jeand> rick_h_, when I run juju charm proof on my side it doesn't complain
[17:29] <jeand> juju --version
[17:29] <jeand> 1.24.4-trusty-amd64
[17:31] <aisrael> What's `charm version` say?
[17:32] <jeand> charm version
[17:32] <jeand> charm-tools 1.5.1
[17:32] <rick_h_> jeand: yea, fresh install here and no PPA so looks like I've got a really old charm-tools of 1.0.0 :)
[17:32] <rick_h_> jeand: ok, so yea follow the process for the review queue, I'm not sure what the current times are on that atm though
[17:33] <jeand> ok cool
[17:33] <jeand> thanks for the help here
[17:33] <rick_h_> np, good luck!
[17:34] <jeand> looks like it's present on the charm store now !
[17:34] <jeand> https://jujucharms.com/u/jean-deruelle/mobicents-restcomm-charm/trusty/0
[17:34] <jeand> Thanks !
[17:34] <rick_h_> jeand: very cool
[17:35] <rick_h_> jeand: thanks for your patience. Once we kill off the old system we'll be making it a lot faster.
[17:35] <aisrael> jeand: Your charm should pop up here within a couple hours: http://review.juju.solutions/
[17:35] <aisrael> There's a bit of a backlog we're working through, unfortunately.
[17:37] <jeand> no worries, thanks for the notice
[17:37] <jeand> I'm testing a tentative bundle in the meanwhile
[17:46] <coreycb> beisner, charm-helpers have been synced to openstack next charms
[17:49] <beisner> coreycb, ok thanks.  fyi, fired off a trusty-liberty-proposed next deploy to see how we fare.
[17:59] <lazyPower> mbruzek: do you have a spare cycle? i need a hot review on this for OIL - https://github.com/whitmo/etcd-charm/pull/17
[18:02] <skylerberg> Is there a way to run amulet tests with a local charm? I want to specify a repository with a path, but I haven't seen anything besides the default resolver that checks the charm store when I use add.
[18:03] <lazyPower> export JUJU_REPOSITORY=path/to/charm/repo
[18:03] <lazyPower> ensure the amulet test doesn't specify cs:<series> and instead just declares the service
[18:03] <lazyPower> for example if you're testing the charm "foo"
[18:03] <lazyPower> amulet.deploy("foo")
[18:04] <lazyPower> it should auto-pick the local copy of the charm to deploy
[18:04] <skylerberg> lazyPower: Can I mix and match? I want to use some charms from the store and then my charm locally.
[18:04] <lazyPower> well, amulet should only be deploying the local charm that's under load
[18:04] <lazyPower> if you need multiple
[18:04] <lazyPower> use the local:series/charm directive
[18:04] <lazyPower> and make sure when you commit, it's using the proper resource locations that are not local:
[18:04] <lazyPower> as that will just confuse CI and things will blow up on reporting
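The lookup order lazyPower describes can be sketched as a small pure function — this is a hypothetical illustration of the behaviour, not amulet's actual resolver; `resolve_charm` and its return shape are invented for this sketch:

```python
import os

def resolve_charm(spec, series="trusty", repo=None):
    """Hypothetical sketch of the resolution order described above:
    a cs: URL always goes to the charm store, an explicit local: URL
    forces the local repo, and a bare name prefers the local repo
    when JUJU_REPOSITORY is configured."""
    repo = repo or os.environ.get("JUJU_REPOSITORY")
    if spec.startswith("cs:"):
        return ("charmstore", spec)
    if spec.startswith("local:"):
        # "local:series/charm" -> path under the local repository
        _, path = spec.split(":", 1)
        return ("local", f"{repo}/{path}")
    # Bare name: pick the local copy when a repository is set.
    if repo:
        return ("local", f"{repo}/{series}/{spec}")
    return ("charmstore", f"cs:{series}/{spec}")

print(resolve_charm("foo", repo="/home/me/charms"))
# -> ('local', '/home/me/charms/trusty/foo')
```

This is why `amulet.deploy("foo")` picks up the local copy once JUJU_REPOSITORY is exported, while a committed test must not leave local: URLs behind for CI.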
[18:05] <lazyPower> alai:  https://github.com/whitmo/etcd-charm/pull/17 - could use your input on this
[18:06] <alai> lazyPower, cool i'll take a look
[18:06] <skylerberg> lazyPower: Thanks for the help, that should get me going on writing these tests.
[18:12] <alai> lazyPower, +1 thanks for a quick patch
[18:28] <beisner> wolsen, gnuoy`, coreycb - fyi - in resuming the rmq edge issues @ vivid-kilo, got the 2nd of two bugs filed and I'm calling the VK test disabled-ufn:
[18:28] <lazyPower> apuimedo: i understand you're having issues with LXC on your cloud?
[18:29] <lazyPower> which cloud provider is this?
[18:29] <lazyPower> and how are the services deployed?
[18:29] <beisner> wolsen, gnuoy`, coreycb :  bug 1486177
[18:29] <mup> Bug #1486177: vivid-kilo 3-node native cluster race:  cluster-relation-changed Error: unable to connect to nodes ['rabbit@juju-X-machine-N']: nodedown <amulet> <openstack> <uosci> <rabbitmq-server (Juju Charms Collection):New> <https://launchpad.net/bugs/1486177>
[18:29] <apuimedo> some OSt
[18:29] <apuimedo> but it's the same I had with DO
[18:29] <beisner> wolsen, gnuoy`, coreycb :  bug 1485722
[18:29] <mup> Bug #1485722: rmq on >= vivid has mnesia (no data dir) <amulet> <openstack> <uosci> <nrpe (Juju Charms Collection):New> <rabbitmq-server (Juju Charms Collection):New> <https://launchpad.net/bugs/1485722>
[18:29] <apuimedo> I can't connect to lxc containers
[18:29] <apuimedo> except from the instance they run on
[18:30] <apuimedo> my main suspect is the arp filtering that cloud providers usually do
[18:30] <beisner> wolsen, gnuoy`, coreycb - tldr:  half the time, one unit fails to cluster, and when all 3 do cluster ok, a separate blocker exists.
[18:32] <lazyPower> apuimedo: this is the host only networking exception
[18:32] <lazyPower> which version of juju? i was under the impression 1.24 removed this limitation, but i may be wrong
[18:32] <lazyPower> let me fetch the guide that i did to fix this w/ overlay networking
[18:32] <apuimedo> 1.24.4-trusty-amd64
[18:32] <lazyPower> http://blog.dasroot.net/container-networking-with-flannel.html
[18:33] <lazyPower> yeah it seems like the networking only works on certain substrates (aws, and openstack)
[18:33] <apuimedo> I'm using OpenStack
[18:33] <lazyPower> weird, i'll have to re-ping dimiter on it then
[18:34] <lazyPower> it's quite possible i am misinformed
[18:34] <lazyPower> There are bolt on services you can use to work around this
[18:34] <lazyPower> the problem is the LXC containers being spun up on the host are using 10.0.3.x addressing, and the juju state server has no means to communicate with them
[18:34] <lazyPower> networking addons like the fan, calico, flannel, et-al are designed to offer an SDN approach to fixing this
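The addressing problem described here is easy to state in code: LXC's default lxcbr0 bridge hands out 10.0.3.x addresses that only the host itself can route to. A minimal illustration with the stdlib ipaddress module — the subnet is LXC's default, but `routable_off_host` is a made-up helper for illustration:

```python
import ipaddress

LXCBR0 = ipaddress.ip_network("10.0.3.0/24")  # LXC's default bridge subnet

def routable_off_host(addr):
    # Containers on the host-only lxcbr0 bridge get 10.0.3.x addresses;
    # nothing outside the host (including the juju state server) has a
    # route to them without an SDN overlay like flannel or the fan.
    return ipaddress.ip_address(addr) not in LXCBR0

print(routable_off_host("10.0.3.17"))   # container on lxcbr0 -> False
print(routable_off_host("172.31.5.9"))  # instance address    -> True
```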
[18:35] <apuimedo> the juju state server works
[18:35] <lazyPower> but this also requires the LXC containers to be reconfigured to attach to that networking bridge that the state server can connect to
[18:35] <apuimedo> the relations are established
[18:35] <lazyPower> so, you can reach the LXC container based service, from the state server
[18:35] <lazyPower> but not when you juju ssh <service>/<unit> ?
[18:35] <apuimedo> I think it is most likely a matter of the undercloud having a too strict security group
[18:35] <apuimedo> yeah, juju ssh does not work
[18:35] <lazyPower> thats a bug
[18:36] <lazyPower> do you mind filing it and linking me? i'd like to track this and reference it when i poke dimiter about it
[18:36] <apuimedo> ok ;-)
[18:36] <lazyPower> we'll want the output from juju status, the service you're attempting to connect to via juju ssh, any verbose debug logging, and which cloud provider if applicable
[18:38] <apuimedo> ok
[18:45] <lazyPower> alai: fix is upstream in ~kubernetes namespace, open review item here - https://code.launchpad.net/~kubernetes/charms/trusty/etcd/trunk/+merge/268373
[18:45] <lazyPower> which is for the charm store copy in ~charmers namespace.
[18:54] <alai> cool... testing it now
[19:26] <marcoceppi> rbasak: you around? I have a packaging question
[19:52] <apuimedo> lazyPower: from what I see in my machines
[19:53] <apuimedo> there's only eth0 that has the "public ip" and lxcbr0 that gets a 10.0.3.1/24
[19:53] <apuimedo> but that has no link device
[19:53] <apuimedo> only veths to the lxc containers
[19:54] <apuimedo> I can't see how the different lxc containers could be able to talk to each other
[19:54] <lazyPower> apuimedo: correct
[19:54] <lazyPower> there's no cross host networking by default in juju w/ those containers
[19:54] <lazyPower> this is why SDN as a work around exists today
[19:54] <lazyPower> and juju is slowly growing support for cross-host networking natively w/ the juju networking modules.
[19:54] <apuimedo> lazyPower: so I would have to run flannel for that?
[19:54] <apuimedo> which is the simplest solution?
[19:54] <lazyPower> flannel, calico, the fan,  just to name a few that we have charms for.
[19:54] <apuimedo> fan?
[19:55] <lazyPower> https://insights.ubuntu.com/2015/06/22/fan-networking/
[19:56] <lazyPower> i use flannel quite a bit because it works cross host over an encrypted ip tunnel. it's quite slow, requires etcd to work / do coordination, but it gets the job done.
[20:11] <apuimedo> lazyPower: is there one for the flannel integration?
[20:11] <apuimedo> article, I mean
[20:11] <lazyPower> apuimedo: the article covers the bases of how to properly get hosts talking
[20:11] <lazyPower> it requires deploying the flannel charm first, then deploy --to lxc: on that host
[20:11] <cholcombe> lazyPower, do you have RSI in your hands?  You seem to type more than anyone i've ever seen :)
[20:12] <lazyPower> cholcombe: Richardson Space Industries? nah... it would be cool if i had star citizen in my hands :D
[20:12] <cholcombe> lol
[20:12] <cholcombe> no repetitive stress injury
[20:12] <lazyPower> well, to a degree, yeah
[20:12] <lazyPower> my hands hurt on a regular basis
[20:12] <lazyPower> you'll see me massaging my hands in hangouts if you're looking
[20:12] <cholcombe> yeah
[20:13] <cholcombe> seems to be a common problem with engineers
[20:13] <lazyPower> years of being a keyboard cowboy i suppose takes its toll
[20:13] <lazyPower> some day we'll get hazard pay for it
[20:13] <cholcombe> yep
[20:13] <cholcombe> haha i wish
[20:14] <lazyPower> it was terrible cholcombe, there were cheese cakes, pizzas and beers everywhere. Our hands were cramping and we couldn't even hold on to any of the goodies!
[20:14] <cholcombe> hahah
[20:14] <cholcombe> lazyPower, well the joints in my hands are starting to sting.  i've been wondering what others are doing to alleviate it
[20:15] <cholcombe> lazyPower, i didn't mean to side track ya :)
[20:15] <lazyPower> all good man, i'm on a roll over here. running support and hacking on kubes
[20:16] <cholcombe> nice!
[20:47] <skylerberg> Is it standard practice to upload partially complete juju charms to our personal namespaces or should I not do that until I think it would be usable by others?
[20:48] <skylerberg> I am not sure if I am supposed to use that as version control or as a publishing platform.
[21:28] <hazmat> skylerberg: publishing not vcs
[21:28] <hazmat> skylerberg: vcs in git(hub) ;-)
[21:29] <hazmat> lazyPower: plus vs fan, it supports aws and gce native backends, no ip-in-ip
[21:29] <hazmat> lazyPower: what's the state of the art on monitoring
[21:29] <lazyPower> hazmat: in the jujuverse?
[21:32] <lazyPower> hazmat: i'm assuming that's what you're asking for - we haven't had extra cycles to devote to it that i'm aware of. the community may be working on additional componentry like prometheus or carbon.  The question is rather contextless at this point.
[21:33] <hazmat> lazyPower: fair enough, jujuverse.. best charms for monitoring for context.. say big data
[21:35] <lazyPower> hazmat: we recently published to the mailing list a rather large story about syslog analytics
[21:35] <lazyPower> including bundle
[21:35] <hazmat> oh.. cool.
[21:36] <lazyPower> that's state of the art w/ big data and monitoring + some interesting ganglia metrics coming
[21:36]  * hazmat trawls archives
[21:36] <lazyPower> as it has native integration with the big data components
[22:34] <skylerberg> exit
[22:34] <skylerberg> oops, lol
[22:37] <marcoceppi> o/ skylerberg thanks for the contributions so far
[22:40] <skylerberg> marcoceppi: No prob. And thank y'all for being really responsive whenever I run into any issues.
[23:26] <plars> wget: unable to resolve host address ''ubuntu-14.04-server-cloudimg-amd64-root.tar.gz'
[23:26] <plars> getting this when trying to deploy things to lxc under maas
[23:26] <plars> here's a more complete log, any ideas? http://paste.ubuntu.com/12121232/
[23:36] <skylerberg> In amulet I cannot find a way to relate a service that is deployed locally. The problem is that the service name is "local:trusty/cinder-tintri". Then when I try to add the relation I have to provide a string like "local:trusty/cinder-tintri:storage-backend". The extra colon then confuses the relate function.
[23:40] <marcoceppi> skylerberg: the "local:trusty/cinder-tintri" is the name of the charm, the service is just "cinder-tintri"
[23:40] <marcoceppi> so, it's just cinder-tintri:storage-backend
[23:41] <marcoceppi> s/name/url of the charm
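The distinction marcoceppi is drawing — `local:trusty/cinder-tintri` is the charm URL, while the deployed service is just `cinder-tintri` — can be shown with a tiny helper. `relation_endpoint` is invented for illustration; it is not part of amulet:

```python
def relation_endpoint(charm_url, relation):
    # The service name is the last path component of the charm URL, so
    # "local:trusty/cinder-tintri" deploys a service named
    # "cinder-tintri", and the endpoint passed to relate() is
    # "cinder-tintri:storage-backend" with no "local:" prefix -- which
    # also avoids the extra colon that confuses the relate call.
    service = charm_url.rsplit("/", 1)[-1]
    return f"{service}:{relation}"

print(relation_endpoint("local:trusty/cinder-tintri", "storage-backend"))
# -> cinder-tintri:storage-backend
```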
[23:45] <skylerberg> marcoceppi: If I try that I get a ValueError on deployer.py:261 saying that the service is not deployed. I think the whole "local:..." string is being stored as the name of the deployed charm inside amulet.
[23:45] <marcoceppi> skylerberg: can you share your amulet test file?
[23:46] <skylerberg> yeah, just a sec
[23:46] <marcoceppi> something like gist or paste.ubuntu.com should suffice
[23:47] <skylerberg> http://paste.ubuntu.com/12121295/
[23:48] <marcoceppi> skylerberg: ah, that's why - don't call the CINDER_TINTRI charm "local:trusty/cinder-tintri", that should just say cinder-tintri. amulet will detect that the test lives inside a charm whose name matches the deploy line and use that local copy instead of trying to resolve a charm store address
[23:50] <marcoceppi> skylerberg: you'll also want to make sure you have amulet 1.11.0 installed, it has some fixes for testing with more recent versions of deployer/juju
[23:52] <skylerberg> marcoceppi: Changing it to just "cinder-tintri" gives me a 404 message about the charm not being found. I just checked the version and it is 1.11.0.
[23:53] <marcoceppi> skylerberg: and this test resides within the tests directory in the charm and the charm directory is named "cinder-tintri" ?
[23:54] <skylerberg> Yes, that is correct.
[23:55] <marcoceppi> skylerberg: that shouldn't be happening
[23:55] <skylerberg> If you can point me to the line where it checks if the charm name matches the directory then I should be able to insert a debugging print statement and figure out why that isn't triggering.
[23:59] <marcoceppi> skylerberg: https://github.com/juju/amulet/blob/master/amulet/charm.py#L54
[23:59] <marcoceppi> skylerberg: that's coming from https://github.com/juju/amulet/blob/master/amulet/deployer.py#L115