[01:28] <rbasak> wallyworld: around? I just replied to the bug, and I'm still up if IRC is easier to resolve everything.
[01:29] <rbasak> wallyworld: thank you for your help, BTW.
[01:29] <wallyworld> rbasak: hey, let me read the bug real quick
[01:30] <rbasak> Oh, it hasn't even appeared in Launchpad yet (I replied by email)
[01:30] <wallyworld> ah
[01:30] <rbasak> I'll forward you a copy
[01:30] <wallyworld> kk
[01:30] <rbasak> Done
[01:32] <wallyworld> rbasak: i added a LICENSE file to gomaasapi as requested in the original bug, but as you say, you can't see that easily
[01:32] <rbasak> wallyworld: ah, OK. Sorry.
[01:32] <wallyworld> rbasak: so on that basis, think we're ok. i can get a tarball to you
[01:33] <rbasak> wallyworld: thanks!
[01:33] <wallyworld> but that will be tomorrow as i have to update the dependencies file and let the CI server do its thing
[01:33] <rbasak> I wonder what the easiest route is to get an upload sorted with licensing fixed, but I can resolve that with sinzui.
[01:33] <wallyworld> rbasak: np, thank you for being patient with me :-) i know next to nothing about licensing
[01:34] <wallyworld> rbasak: 1.20.2 will be released real soon (next week) with the correct source with the licensing fixes etc
[01:34] <wallyworld> we need to get some other development done first though
[01:34] <rbasak> wallyworld: ah, that'll be the easiest thing then. I'll just hold on - no harm in a few days' wait I think.
[01:35] <wallyworld> rbasak: i can give you a tarball earlier though
[01:35] <wallyworld> just in case we need to make any other changes
[01:35] <rbasak> wallyworld: no problem - I appreciate you jumping straight on it.
[01:36] <wallyworld> welcome. i'm really keen for this release not to be blocked
[01:36] <rbasak> wallyworld: yeah - that's a good idea. I can work with the tarball - thanks.
[01:36] <wallyworld> sure, will keep you in the loop
[01:36] <rbasak> wallyworld: we're getting closer to keeping the archive up-to-date with new releases much more quickly. It'll be great to push this back to Trusty, too.
[01:37] <wallyworld> oh yes, given the juju/mongo issues that will be fixed in this release
[01:37] <rbasak> wallyworld: I'd like to also get some process changes in place so that copyright/licensing can be verified earlier in the process, so by the time there's a release, me or James can upload without any further review at that stage.
[01:38] <rbasak> Then there's less to hold an update up.
[01:38] <rbasak> We can worry about that later, though.
[01:38] <wallyworld> rbasak: agreed, we (juju core leadership team) are onto that and will be putting processes in place to properly introduce new 3rd party dependencies
[01:39] <wallyworld> so all new dependencies are properly vetted and licensed up front
[01:39] <rbasak> wallyworld: that's great - thanks.
[01:39] <rbasak> wallyworld: technically the uploader (to the Ubuntu archive) is responsible for checking upstream licensing when uploading a new release.
[01:39] <rbasak> wallyworld: that's more geared around random upstreams though - not when they're working closely together like this.
[01:40] <rbasak> wallyworld: and we have to update the debian/copyright file which maps every single file to a list of copyright holders and licenses.
[01:40] <wallyworld> rbasak: yeah, at the level i'm talking about, it's where a dev will simply add a 3rd party bit of code to the juju core code base
[01:40] <rbasak> wallyworld: what I'm thinking is that maybe this file can be updated much earlier in the process - basically by you guys at the time you update dependencies.tsv.
[01:41] <wallyworld> we will ensure at that point there's proper copyright assignment via the CLA etc and license file etc in place
[01:41] <wallyworld> oh ok
[01:41] <rbasak> wallyworld: right, but for third party deps also.
[01:41] <rbasak> (where we can't rely on CLA)
[01:41] <wallyworld> rbasak: can you email alexis with the details of what you want and we can follow up from there?
[01:42] <rbasak> wallyworld: sure
[01:42] <wallyworld> thanks, that will allow us to properly collaborate and work out how to move forward in the best way
[01:42] <rbasak> To be clear, these are really just my musings on what we might be able to do to make everything go smoothly.
[01:43] <rbasak> They aren't requirements or anything.
[01:43] <wallyworld> sure, understood. but good conversations to have and if we can expend a little effort now to save pain down the line, that's good imo :-)
[01:43] <wallyworld> we can collectively agree on how to proceed
[02:01] <sinzui> rbasak, Each time CI blesses a 1.20.x tarball we get to ask "why not release now". When wallyworld indicates all the fixed packages are imported we can start the release.
[02:02] <wallyworld> sinzui: there are other bug fixes to come first though
[02:02] <wallyworld> but i can update the 1.20 dependencies file so intermediate tarballs get generated with the right source
[02:03] <sinzui> wallyworld, sure, but I haven't done a release this week, and devel still has regressions, so I think 1.20.2 will be released first
[02:03] <wallyworld> sinzui: i don't think we should release 1.20.2 until the current milestone bugs are all fixed, agree?
[02:06] <sinzui> wallyworld, I am not strongly inclined to delay goodness. I would rather release often. Since devel gets a new regression every day, I am happy to release a 1.20.x each week
[02:06] <wallyworld> sinzui: but won't that just cause churn for the packaging guys?
[02:06] <wallyworld> getting the backport into trusty
[02:07] <wallyworld> regardless, one bug we shouldn't release without fixing is bug 1307434: talking to mongo can fail with "TCP i/o timeout"
[02:07] <_mup_> Bug #1307434: talking to mongo can fail with "TCP i/o timeout" <cloud-installer> <landscape> <performance> <reliability> <juju-core:In Progress by mfoord> <juju-core 1.20:Triaged by mfoord> <https://launchpad.net/bugs/1307434>
[02:08] <wallyworld> that is the primary focus of the 1.20.2 release
[02:08] <sinzui> wallyworld, good point. we need a good pace, every two weeks was fine for james last year.
[02:08] <wallyworld> that bug should be fixed friday or more likely early next week
[02:09] <thumper> wallyworld: back now
[02:09] <thumper> kids weren't that fussed on tinkerbell :-)
[02:09] <wallyworld> thumper: lol, ok, give me a minute
[02:11] <thumper> menn0: https://github.com/juju/juju/pull/321
[02:11] <wallyworld> thumper: in call now
[02:11] <menn0> thumper: looking
[07:22] <html> hi
[07:48] <g0d_51gm4> lazyPower: i confirm the error in the juju status after the reboot of the Host Machine was linked to the firewall's status set on it. thanks a lot for your patience and support. see you soon bye g.
[08:27] <raywang> hi, does anyone know how to change the distro/series of each node shown in "juju status"?
[08:31] <raywang> i.e. for nodes that have already been added to juju
[08:37] <Egoist> hi
[08:37] <Egoist> is the -relation-departed hook executed after removing a unit from a service?
[12:25] <william_home> Hi all
[12:26] <william_home> i'm trying to deploy a wordpress sample using juju maas and lxc containers
[12:26] <william_home> i'm not connected to internet so I'm facing some challenges
[12:27] <william_home> when trying to deploy an lxc container it cannot download the cloud-img rootfs from cloud-images.ubuntu.com
[12:27] <william_home> i have to make this available offline somehow, any pointers?
[13:56] <Sh3rl0ck> Hello..We have deployed OpenStack Icehouse using the Juju and Maas on Ubuntu 14.04 (Trusty). I am having issues with installing Ceph on it. Are there any reference documents for installing Ceph as a backend to Cinder
[13:58] <pmatulis> william_home: internal mirror?
[14:13] <ctlaugh>  I am using the cinder charm and am trying to work through a problem installing on a system with only a single disk.  What's the right way to specify using a loopback file on trusty/icehouse?  I am putting block-device: "/srv/cinder.data|750G"  in a config file, and, once I SSH in, I can see that the file gets created, but the loopback device and volume group don't get created.
[14:26] <rbasak> niemeyer: around? About src/launchpad.net/goyaml licensing.
[14:27] <rbasak> niemeyer: which files are covered by which licenses?
[14:27] <niemeyer> rbasak: I will add a note to the LICENSE.libyaml file
[14:27] <rbasak> niemeyer: thanks!
[14:27] <niemeyer> rbasak: The *c.go files were ported from the C files from libyaml
[14:28] <niemeyer> rbasak: and thus are still covered by its license
[14:28] <rbasak> Ah - I see.
[14:28] <rbasak> That makes sense
[14:28] <niemeyer> rbasak: Please note that lp.net/goyaml is stale
[14:28] <niemeyer> rbasak: The project currently lives at github.com/go-yaml/yaml
[14:28] <niemeyer> rbasak: and that's where the update will be made
[14:28] <rbasak> niemeyer: OK. I guess I'll see it switch in the next 1.20 release tarball then?
[14:29] <niemeyer> rbasak: I don't really know, sorry.. I'm not involved in packaging juju
[14:29] <rbasak> Or if not I can deal with that I guess. I don't think this change needs to be in the source tree, as long as I'm clear on how to update debian/copyright.
[14:29] <rbasak> OK, no problem.
[14:30] <niemeyer> rbasak: I can tell you that right now: the following files are covered by LICENSE.libyaml:
[14:30] <rbasak> *c.go? I can match against that. Even directly in debian/copyright :)
[14:30] <niemeyer> rbasak: emitterc.go, parserc.go, readerc.go, scannerc.go, writerc.go, yamlh.go, yamlprivateh.go
[14:30] <rbasak> OK
[14:31] <niemeyer> rbasak: Oh, and apic.go
[14:31] <rbasak> Right
[14:31] <rbasak> I can run with that - thank you!
[14:31] <niemeyer> rbasak: No problem, let me know if I can help further
[14:32] <Sh3rl0ck> Hello..I have deployed OpenStack Icehouse using the Juju and Maas on Ubuntu 14.04 (Trusty). I am having issues with installing Ceph on it. Are there any reference documents for installing Ceph as a backend to Cinder
[14:35] <Sh3rl0ck> A new Juju charm is introduced for 14.04 called Cinder-Ceph but I did not find any reference documents (besides release notes)
[14:35] <niemeyer> rbasak: LICENSE.libyaml was updated with those notes
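[Editor's note: the file list niemeyer gives above maps directly onto a DEP-5 debian/copyright stanza, which is what rbasak means by "match against that... directly in debian/copyright". A hedged sketch; the source path prefix and the license short name/holder are assumptions, not taken from the log:]

```text
Files: src/launchpad.net/goyaml/*c.go
Comment: emitterc.go, parserc.go, readerc.go, scannerc.go, writerc.go,
 yamlh.go, yamlprivateh.go and apic.go were ported from the C files of
 libyaml and are covered by its license (see LICENSE.libyaml upstream).
Copyright: libyaml upstream authors
License: MIT
```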
[14:43] <ziliu2020_> I asked this question yesterday but no one answered me..  so I raised it again.  I'm looking for a way to distribute my public key to all juju nodes including containers.  I tried to use juju authorized-keys add command but no luck.  it says the key can not be added and invalid key when I issued the command "juju authorized-keys add key-file.pub".  Am I doing anything wrong here?
[14:45] <william_home> pmatulis: yes, local mirror
[14:49] <rbasak> niemeyer: thank you!
[14:57] <john5223> anyone here try  juju + salt?
[14:58] <ziliu2020_> i just figured it out.. it turned out you don't add the key file, instead you add the copy/pasted key text
[14:59] <Sh3rl0ck> ziliu2020_: Where to paste the key? environment.yaml?
[15:01] <ziliu2020_> no use this command
[15:02] <ziliu2020_> juju authorized-keys add 'ssh-rsa AAAAA.......'
[15:02] <Sh3rl0ck> ziliu2020_: Ok great! Thanks.
[15:02] <ziliu2020_> it will update environment and then populate the keys to all juju nodes
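[Editor's note: a minimal sketch of the fix ziliu2020_ describes, assuming a stand-in key file; the point is that `juju authorized-keys add` takes the key text itself, not a filename:]

```shell
# Stand-in public key file; a real key is much longer.
printf 'ssh-rsa AAAAB3NzaC1yc2E... user@host' > key-file.pub

# The subcommand wants the key material, not a path,
# so expand the file contents into the argument:
key="$(cat key-file.pub)"
echo juju authorized-keys add "$key"
# (drop the `echo` to actually run it; juju then updates the environment
# and propagates the key to all juju nodes, as described above)
```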
[15:04] <pmatulis> william_home: so you shouldn't have a problem, no internet required
[15:13] <william_home> pmatulis: well define local mirror then :), i have a mirror of all precise/trusty packages
[15:14] <william_home> pmatulis: but how do i mirror the cloud-images and how do i define those?
[15:15] <william_home> my setup is running from trusty maas and juju install
[15:20] <Sh3rl0ck> william_home: Any idea about ceph deployment using Juju for Trusty/Icehouse?
[15:24] <MrkiMile> Hello
[15:26] <MrkiMile> when I try to deploy charm from local dir, that dir has to be named same as the charm. Is there a way to circumvent that? I'm doing: juju deploy local:precise/apache2-test1, and I get an error that there is no charm inside apache2-test1. But if I rename apache2-test1 to apache2, then I can deploy.
[15:27] <pmatulis> MrkiMile: i'm prolly missing something but how else would juju know what charm you want to use if you don't tell it?
[15:29] <marcoceppi> MrkiMile: You have to rename the "name" key in the metadata.yaml to match the directory
[15:29] <marcoceppi> MrkiMile: it's best to simply create a new tree, ~/charms/test1/precise/apache2
[15:29] <marcoceppi> and separate them by JUJU_REPOSITORY rather than renaming metadata.yaml
[15:34] <MrkiMile> marcoceppi: So, there is no way to tell juju to use charm from the directory that's not named as the charm ?
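[Editor's note: a sketch of the layout marcoceppi recommends, with illustrative paths; the constraint is that the leaf directory name must match the `name:` key in metadata.yaml, so variants live in separate repositories selected via JUJU_REPOSITORY:]

```shell
# One repository per variant; the charm directory stays named "apache2".
mkdir -p charms-demo/test1/precise/apache2
printf 'name: apache2\n' > charms-demo/test1/precise/apache2/metadata.yaml

# Select the variant by pointing JUJU_REPOSITORY at its repository root:
export JUJU_REPOSITORY="$PWD/charms-demo/test1"
echo juju deploy local:precise/apache2
# (drop the `echo` to actually deploy; juju resolves the charm as
# $JUJU_REPOSITORY/precise/apache2)
```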
[16:20] <elarson> I'm playing around with writing my own charm for an app we have
[16:21] <elarson> basically I'm just trying to install the package and start a process provided by the package
[16:21] <elarson> does juju deploy myapp run the start hook?
[16:22]  * elarson just realized where he might find the answer in the manual...
[16:29] <elarson> how are most folks using juju? do you create your own charm for your application?
[16:33] <sebas5384> elarson: you can use a charm suited to your application, like I do with drupal, and then a subordinate charm
[16:35] <sebas5384> yesterday having a juju workshop :) https://www.facebook.com/photo.php?fbid=826602450708048&set=a.590513300983632.1073741832.354420671259564&type=1&theater
[16:36] <elarson> sebas5384: that sounds like you deploy drupal as a charm and then apply your changes as a subordinate, which means it ends up in the same container?
[16:36] <sebas5384> elarson: yep
[16:37] <sebas5384> but thats not a complete solution
[16:37] <elarson> at this point i'm just looking for a good place to start ;)
[16:38] <sebas5384> thats a good place then
[16:38] <sebas5384> hehe
[17:37] <sebas5384> sshuttle is really a bottleneck in the vagrant workflow
[17:37] <sebas5384> :(
[17:38] <sebas5384> just use iptables and you will notice a big diference
[18:10] <jcastro> sebas5384, hey are you guys devving locally on your laptops and then pushing to a cloud?
[18:10] <sebas5384> yes!
[18:11] <jcastro> hey so there's a guy in core leading a team to make local dev to cloud suck less
[18:11] <jcastro> mind if I link you guys up over email? I'm sure you guys have a bunch of suggestions
[18:11] <sebas5384> yeahhh sure!!! we already have a lot of troubleshooting and feedback :)
[18:11] <jcastro> also, if you think we should use iptables instead of sshuttle
[18:12] <jcastro> write it up and we can put that in the docs instead?
[18:12] <sebas5384> yes or other proxy things like hipache for example
[18:13] <sebas5384> i'm doing an experiment with a proxy in the vbox
[18:13] <sebas5384> so you wouldn't have to do any more iptables things
[18:14] <sebas5384> other things like using a plugin for vagrant
[18:14] <sebas5384> vagrant plugin install vagrant-nfs_guest
[18:14] <sebas5384> to mount the directory of the deployed project into the container
[18:22] <jcastro> yeah
[18:23] <pmatulis> heh, devving
[18:28] <ctlaugh> Anyone here have knowledge about the cinder charm?
[18:29] <sebas5384> ctlaugh: not yet :P
[19:30] <Sh3rl0ck> Anyone with information about correct ceph charm for OpenStack on 14.04?
[19:34] <Sh3rl0ck> I am confused between the ceph and cinder-ceph charms and how they differ