[00:03] <rick_h__> cos1: howdy
[00:04] <cos1> not bad, man!
[00:04] <cos1> What about yourself?
[00:04] <rick_h__> cos1: having too much fun :)
[00:04] <marcoceppi> fritchie: that's a new one for me.
[00:04] <cos1> is it even a thing? ;)
[00:05] <cos1> I wasn't even aware that someone can have too much of it, rick_h__ :)
[00:05] <rick_h__> cos1: oh definitely, it's called 'red-lining' :)
[00:05] <cos1> ah ;_
[00:06] <fritchie> is there a way to delete a maas environment besides maas destroy-environment? it just hangs on that
[00:06] <arosales> cos1: hello and welcome :-)
[00:07] <cos1> thanks arosales!
[00:07] <cos1> Good to be here indeed
[00:07] <arosales> cos1: just in time for the juju 2.0 goodness
[00:08] <cos1> wheee! ;)
[00:08] <cos1> Would be a lot to learn in the first day or two ;)
[00:10] <marcoceppi> o/ cos1
[00:10] <marcoceppi> good to see you again
[00:12] <cos1> likewise man!
[02:47] <axino> marcoceppi cory_fu : now I get "INFO install The executable =python3 (from --python==python3) does not exist" , we want -ppython3 actually (without the =)
[02:48] <lazyPower> axino o/ isn't it silly late for you?
[02:49] <axino> o/ lazyPower
[03:40] <axino> cory_fu marcoceppi : we also need to install the "virtualenv" package, else it's not there sometimes apparently !
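A minimal sketch of the fix axino describes, using a hypothetical helper name (the real code lives in layer-basic's lib/charms/layer/basic.py): pass the interpreter to virtualenv as a separate -p argument rather than appending to an already-complete --python= flag, which is what produced the broken --python==python3 in the install log.

    # Hypothetical sketch only; the real implementation is in layer-basic.
    import subprocess

    def bootstrap_venv(venv_path, python='python3'):
        # '-p' and 'python3' as separate argv entries; gluing '=python3' onto
        # '--python=' yields '--python==python3', which virtualenv reads as an
        # executable literally named '=python3'.
        subprocess.check_call(['virtualenv', '-p', python, venv_path])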
[07:35] <saadraza92> hey there
[09:44] <saadraza92> any juju guru here?
[09:45] <saadraza92> facing problems regarding ssh from a juju machine
[10:27] <tinwood> gnuoy, have you got a moment for a quick question?
[11:55] <Sophie__> hello :)
[11:55] <Sophie__> i have big trouble with juju maybe some1 could help me
[11:55] <Sophie__> :D
[12:03] <Sophie__> I am trying to install juju on maas with vms, but although juju has deployed on the node it seems like it is not installed
[12:05] <Sophie__> i cannot make the maas server auto-start the node and I do it manually every time; i don't know if this is the problem
[12:11] <Sophie__> btw I am awesome <3
[12:25] <saadraza92> there?
[12:44] <jcastro> magicaltrout: I see you had a problem with juju's search results
[12:45] <magicaltrout> jcastro: i don't have problems, i just think they're a bit useless :)
[12:45] <jcastro> heh, yeah
[12:45] <jcastro> so, I've filed a few bugs on it
[12:45] <jcastro> and also on our general SEO stuff
[12:45] <jcastro> which is terrible also
[12:46] <jcastro> just letting you know mramm will be leading the effort to make search, SEO, and general "why can't people discover this?" problems less sucky
[12:46] <magicaltrout> cool.
[12:47] <magicaltrout> I just think that the non-recommended stuff shouldn't get pushed so far down when nothing in the recommended namespace matches, basically.
[12:47] <magicaltrout> at first i didn't think my charm was listed cause I couldn't find it without Ctrl+F :)
[12:47] <jcastro> anyway, the more people who are non-me who complain about it, the more attention it will get
[12:47] <jcastro> so basically, being more of a squeaky wheel helps me get resources
[12:47] <magicaltrout> want me to complain on the mailing list as well? :)
[12:47] <jcastro> because why even have a search at all?
[12:48] <jcastro> yes, that would be great
[12:48] <magicaltrout> k, i'll put something together over lunch and dispatch it
[12:48] <magicaltrout> i'm british we like a good moan
[12:49] <jcastro> yeah plus when markS sees our little papercut it helps get us the focus we need to just fix the little things
[12:49] <jcastro> sometimes we're so worried about the little problems that we forget the little things
[12:49] <jcastro> I meant the big problems, heh.
[13:04] <cory_fu> axino: Fixed.  It already does install python-virtualenv: https://github.com/juju-solutions/layer-basic/blob/master/lib/charms/layer/basic.py#L34
[14:48] <tvansteenburgh> what's the best way to get a list of all the trusty charms in the charm store?
[14:49] <jrwren> tvansteenburgh: http://api.jujucharms.com/charmstore/v5/list?series=trusty
[14:49] <rick_h__> https://api.jujucharms.com/charmstore/v4/search?text=&series=trusty&limit=2000
[14:49] <rick_h__> doh
[14:49] <tvansteenburgh> fight!
[14:49] <tvansteenburgh> thanks guys!
[14:49] <jrwren> rick_h__: haha, beat ya! ;]  and I like my way better
[14:50] <rick_h__> jrwren: I do too!
[14:50] <jrwren> you got the https right though. I got it wrong.
[14:50] <tvansteenburgh> jrwren, rick_h__: is there a way to filter this down to only recommended charms?
[14:50] <rick_h__> well I entered it into a browser first
[14:52] <jrwren> tvansteenburgh: add &promulgated=1 to the querystring
[14:52] <rick_h__> tvansteenburgh: I forget if the query arg is promulgated=true or what
[14:52] <tvansteenburgh> jrwren: thanks
[14:52] <urulama> tvansteenburgh: its owner= (empty)
[14:53] <marcoceppi> wait, it's not approved=1 anymore?
[14:53] <urulama> marcoceppi: never was
[14:53] <urulama> marcoceppi: well, at least last 2 years :)
[14:54] <tvansteenburgh> urulama: owner= gives me no results
[14:55] <urulama> tvansteenburgh: https://api.jujucharms.com/charmstore/v4/search?text=&owner=&limit=100
[14:55] <jrwren> promulgated=1 works. Piped to jq '.[]|length': there are 1436 trusty charms, and 186 promulgated trusty charms.
[14:56] <jrwren> oh, urulama is right if using /search, but I do not like search, so I use /list ;]
[14:56] <urulama> jrwren: don't use list!
[14:56] <tvansteenburgh> urulama: https://api.jujucharms.com/charmstore/v5/list?series=trusty&owner=
[14:56] <urulama> list is not indexed and is not meant to be used in these cases
[14:56] <tvansteenburgh> oic
[14:56] <urulama> tvansteenburgh: don't use list
[14:57] <jrwren> urulama: but... but... :[
[14:57] <urulama> tvansteenburgh: https://api.jujucharms.com/charmstore/v4/search?text=&owner=&limit=100&series=trusty
[14:57] <urulama> increase the limit if you want more
[14:57] <urulama> jrwren: :) yeah. just don't :D
[14:58] <tvansteenburgh> urulama: ok, thanks
[14:58]  * urulama is going to block the list and will work with macaroons only :P
[14:58] <jrwren>  the /search is much faster.
[14:58] <urulama> jrwren: yeah. it's ES (search) vs mongo collection sweep (list)
[14:59] <jrwren> urulama: yeah, i hate ES, hence my love of /list ;]
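A rough sketch of the /search query discussed above, using only the Python standard library. The response shape assumed here (a JSON object with a "Results" list whose entries carry an "Id") is an assumption, not confirmed in the conversation; adjust to whatever the charm store actually returns.

    import json
    import urllib.request

    # owner= left empty restricts results to promulgated (recommended) charms,
    # as urulama describes above.
    url = ("https://api.jujucharms.com/charmstore/v4/search"
           "?text=&owner=&series=trusty&limit=100")
    with urllib.request.urlopen(url) as resp:
        data = json.loads(resp.read().decode('utf-8'))

    for entry in data.get("Results", []):
        print(entry.get("Id"))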
[15:09] <cholcombe> cory_fu, question for ya.  how do you properly mock open in python?  I've found a bunch of crap blog posts but none of them work right
[15:09] <jrwren> cholcombe: i dare you to ask voidspace
[15:09] <cholcombe> jrwren, i'll ask whomever :D.  I can get it to work locally but it fails when i push it to jenkins
[15:10] <cory_fu> :)
[15:10] <cory_fu> I'm happy to help, but obviously voidspace is the expert.  :)
[15:10] <cory_fu> cholcombe: Do you have repo link?
[15:10] <cholcombe> cory_fu, sure
[15:11] <cholcombe> lemme link it up
[15:11] <cholcombe> https://review.openstack.org/#/c/287500/6/unit_tests/test_replace_osd.py
[15:11] <cholcombe> cory_fu, jrwren voidspace ^^  line 101
[15:11] <marcoceppi> cholcombe: I do this
[15:11] <cory_fu> cholcombe: The most helpful bit of advice I've found is: http://www.voidspace.org.uk/python/mock/patch.html#where-to-patch
[15:11] <cholcombe> i mean line 95
[15:12] <marcoceppi> cholcombe: http://paste.ubuntu.com/15408036/
[15:12] <marcoceppi> cholcombe: that's for both py2 and py3
[15:12] <cholcombe> marcoceppi, i'll give that a try
[15:12] <cory_fu> marcoceppi: Whoa.  That seems unnecessary
[15:13] <cory_fu> marcoceppi, cholcombe: http://www.voidspace.org.uk/python/mock/helpers.html#mock-open
[15:13] <marcoceppi> cory_fu: why? it's the only way I've found for `with open`
[15:13] <marcoceppi> meh
[15:13] <cholcombe> cory_fu, yeah i can't seem to get mock_open to work with patch correctly
[15:13] <jrwren> __main__, builtins, or __builtin__, that is the question.
[15:14] <cory_fu> cholcombe: Did you include the create=True?
[15:14] <cholcombe> cory_fu, i did
[15:14] <cholcombe> it's perfectly happy when i test locally.  it's jenkins that throws a wrench in it :D
[15:14] <cory_fu> cholcombe: You shouldn't be patching in __main__.  That's going to change depending on how the code is invoked
[15:15] <cory_fu> You should be patching replace_osd.open
[15:15] <cholcombe> cory_fu, yeah that's prob the issue
[15:17] <cholcombe> ugh neither replace_osd.open nor builtins.open works locally in my charmbox
[15:18] <jrwren> cholcombe: did you try __builtin__ ?
[15:18] <cory_fu> cholcombe: Can you point me to the source of replace_osd.lookup_device_name?
[15:18] <cholcombe> cory_fu, sure
[15:19] <voidspace> jrwren: cory_fu: hah, just off to pick up my daughter from school - back soon
[15:19] <cory_fu> jrwren: You always want to patch closest to where the function you're patching is called.  Patching in __builtins__ is the wrong place
[15:19] <cholcombe> cory_fu, https://review.openstack.org/#/c/287500/6/actions/replace_osd.py
[15:19] <cholcombe> line 30
[15:19] <jrwren> cory_fu: nonsense.
[15:19] <jrwren> cory_fu: or rather, I strongly disagree for my use cases.
[15:20] <cory_fu> jrwren: I'm not saying it definitely won't work, but it's a bad habit to get into and will tend to land you in situations where your mocks don't work like you expect
[15:21] <cholcombe> i'll just comment and say the function works trust me ;) lol
[15:21] <cory_fu> jrwren: And with mocking open in particular, patching it in __builtins__ can make debugging, error reporting, and a host of other things screwy
[15:21] <jrwren> cory_fu: I don't agree, but I think I understand your concerns.
[15:22] <cory_fu> I know because I used to run into this stuff all the time  :)
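A small sketch of the "patch where it's used" advice, with hypothetical argument names and an illustrative /proc/diskstats line (the real test lives in unit_tests/test_replace_osd.py): patch open in the replace_osd module's namespace rather than in __main__ or __builtins__, keeping create=True since open is not otherwise an attribute of that module.

    from unittest import mock

    import replace_osd  # the module under test

    @mock.patch('replace_osd.open', new_callable=mock.mock_open,
                read_data='   8       0 sda 120 0 0 0\n', create=True)
    def test_lookup_device_name(mocked_open):
        # Only code inside replace_osd sees the patched open(); everything
        # else keeps the real builtin.
        result = replace_osd.lookup_device_name(8, 0)  # argument list is illustrative
        assert result == 'sda'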
[15:22] <cholcombe> cory_fu, i was considering changing the way my code in replace_osd is written to make it easier to patch
[15:22] <cory_fu> cholcombe: Can you give me the output of the test failure?
[15:22] <cory_fu> cholcombe: That is not a bad idea.
[15:22] <cholcombe> cory_fu, yeah it says sda == None fails
[15:23] <cholcombe> cory_fu, yeah i was thinking of calling os.open or something easier to patch
[15:26] <cory_fu> cholcombe: Perhaps a nicer way would be to split the open() call into its own method (get_disk_stats) and have lookup_device_name call that instead.  Then you don't really need to test that you can open and read a file, and you can test the logic in l_d_n by itself
[15:26] <cory_fu> And just mock get_disk_stats
[15:26] <cholcombe> cory_fu, yeah that's a method i use in rust also.
[15:26] <cholcombe> cory_fu, that sounds good to do
[15:28] <cory_fu> A slightly "better" way would be to have lookup_device_name take the disk_stats as a param, so it's truly functional, but at that point you're trading some calling convenience for a trivial mock and it's probably not worth it
[15:28] <cory_fu> But, in general, isolating your external dependencies and making all of your logic live in purely functional ... functions makes them much easier to test
[15:28] <cory_fu> And often results in cleaner, easier to use code
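A sketch of the refactor cory_fu suggests, with hypothetical signatures: keep the file I/O inside get_disk_stats() and make lookup_device_name() pure logic, so the test only mocks get_disk_stats rather than open().

    # actions/replace_osd.py (sketch; real signatures may differ)
    def get_disk_stats():
        # The only code that touches the filesystem.
        with open('/proc/diskstats') as diskstats:
            return diskstats.readlines()

    def lookup_device_name(major_number, minor_number):
        # Pure logic, no I/O, so it is trivial to unit test.
        for line in get_disk_stats():
            fields = line.split()
            if fields[0:2] == [str(major_number), str(minor_number)]:
                return fields[2]
        return None

    # unit_tests/test_replace_osd.py (sketch)
    from unittest import mock

    import replace_osd

    @mock.patch('replace_osd.get_disk_stats')
    def test_lookup_device_name(get_disk_stats):
        get_disk_stats.return_value = ['   8       0 sda 120 0 0 0\n']
        assert replace_osd.lookup_device_name(8, 0) == 'sda'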
[15:30] <cory_fu> cholcombe: For that test failure, what is the actual "dev_name: {}" output?
[15:31] <cholcombe> cory_fu, it outputs None
[15:31] <cholcombe> so i suspect my patch failed in that case
[15:41] <cory_fu> cholcombe: Looks like mock_open() doesn't support `for line in fh`: http://pastebin.ubuntu.com/15408195/
[15:41] <cholcombe> cory_fu, ah wow yeah that's prob the issue
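If mock_open does need to support `for line in fh` with the mock release in use here, one possible workaround (a sketch, assuming the code under test iterates the file rather than calling read()) is to wire up __iter__ on the handle yourself:

    from unittest import mock

    read_data = '   8       0 sda 120 0 0 0\n 253       0 vda 99 0 0 0\n'
    m = mock.mock_open(read_data=read_data)
    # The handle returned by mock_open() is a MagicMock, so its __iter__ can be
    # configured directly; this makes `for line in fh` yield read_data's lines
    # (the iterator is single-use, which is fine for one test).
    m.return_value.__iter__.return_value = iter(read_data.splitlines(True))

    with mock.patch('replace_osd.open', m, create=True):
        pass  # exercise the code that iterates over the file here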
[15:42] <cholcombe> cory_fu, i changed it around so i have a function that just gets diskstats and put that up for review :)
[16:25] <icey> beisner: can you take a look at https://code.launchpad.net/~chris.macnaughton/openstack-charm-testing/add-ceph-multi-az/+merge/289387
[16:31] <icey> also want to re-look at https://code.launchpad.net/~chris.macnaughton/openstack-charm-testing/next-ceph-osd-bundle/+merge/285890 beisner
[16:31] <icey> ?
[16:35] <beisner> icey, 1 landed, 1 reviewed :)
[16:35] <icey> beisner: yeah, going to have to redo the reviewed one from scratch :-P
[16:36] <icey> easier than modifying methinks
[16:37] <ryotagami> I have a question regarding charm review. I have a charm that is currently in review, and I have an update to that charm that is not related to the review items. Should I create a new branch and a bug, or can I update the currently ongoing branch and bug?
[16:38] <beisner> icey, so the az bundle that landed in o-c-t is structured for 1 machine per unit, and sets vdb for the disk, which is what we need for serverstack.   to exercise on metal, i'd set osd-devices to "/dev/sdb /dev/vdb"
[16:39] <icey> new mp coming soon ;-)
[16:40] <beisner> icey, this will be 7 physical machines.  is there anything you can do to co-locate in lxc to get us down to 4 machines?   also, will ephemeral unmount /mnt error when used on metal?
[16:41] <icey> beisner: shouldn't fail, because 'e_mountpoint and ceph.filesystem_mounted(e_mountpoint)'
[16:41] <icey> will work on wrapping up the mons
[16:51] <beisner> icey, ack thx
[16:56] <icey> beisner: https://code.launchpad.net/~chris.macnaughton/openstack-charm-testing/add-ceph-multi-az/+merge/289394
[16:57] <icey> beisner: alternately, can I remove the machine specifications because the ceph-mon associates to lxc:ceph-mon/# ?
[18:02] <cory_fu> cholcombe: https://github.com/testing-cabal/mock/pull/343
[18:02] <cory_fu> Not that it helps you now.  :)
[18:29] <wolsen> beisner: https://review.openstack.org/#/c/292599/ is that ready to land you think?
[18:29] <beisner> wolsen, i do.  shall i hit the button?
[18:30] <wolsen> beisner, we're gonna have to sooner or later
[18:30] <wolsen> :)
[18:52] <thedac> narindergupta: Congrats on your OPNFV award!
[18:53] <thedac> belatedly ^^
[18:53] <narindergupta> thedac: thanks and nothing could have been done without your team.
[18:54] <narindergupta> thedac: and the award is for Canonical, as I represent Canonical there.... :)
[18:54] <thedac> I suspect you had a lot to do with it
[18:55] <beisner> wolsen, ok that's ackd.  once it merges, you'll wanna rebase the other ceilometer stable update.
[18:56] <wolsen> beisner: yes indeedy thx!
[18:56] <lazyPower> bdx ping
[18:56] <beisner> wolsen, thanks for paving that path
[18:56] <wolsen> yeah big congrats narindergupta! (btw, I'll merge the m-d proposal you made a bit later today)
[18:56] <narindergupta> wolsen: thanks
[18:57] <narindergupta> thedac: :)
[19:07] <natefinch> marcoceppi: min-juju-version landed, finally.
[19:11] <lazyPower> nice
[19:11] <lazyPower> natefinch thats in metadata.yaml right?
[19:13] <natefinch> lazyPower: yep
[19:49] <tvansteenburgh> "The LXD local provider will not work on Ubuntu 12.04 LTS (Precise) and backporting to Ubuntu 14.04 LTS (Trusty) is incomplete."
[19:49] <tvansteenburgh> is this still true re Trusty? ^
[19:53] <marcoceppi> tvansteenburgh: AFAIU yes
[20:26] <marcoceppi> charm and charm-tools 2.0 are building in the devel ppa!
[20:26] <tvansteenburgh> \o/
[20:29] <lazyPower> \o/
[20:51] <marcoceppi> tvansteenburgh: if you have a min, could you take a look at this today? https://github.com/juju/charm-tools/issues/137 if not tomorrow morning would be great
[20:51] <marcoceppi> tvansteenburgh: esp since we will have "plugins" show up in the help output of charm 2.0 by default
[20:51] <tvansteenburgh> marcoceppi: will do
[21:00] <axino> cory_fu: but python-virtualenv doesn't provide virtualenv anymore, apparently
[21:01] <cory_fu> axino: o_O
[21:02] <cory_fu> That doesn't make any sense
[21:03] <cory_fu> axino: Oh, you're saying they changed it to just "virtualenv"  Hrm
[21:03] <axino> cory_fu: yeah
[21:04] <cory_fu> axino: I thought I saw python-virtualenv referenced in your output, though
[21:06] <axino> (also, I think we need python3-virtualenv - which is pulled by the "virtualenv" package on xenial)
[21:07] <cory_fu> axino: Line 83 from your previous link: https://pastebin.canonical.com/151985/
[21:07] <cory_fu> 2016-03-16 07:52:24 INFO install The following NEW packages will be installed:
[21:07] <cory_fu> 2016-03-16 07:52:24 INFO install   python-virtualenv python3-virtualenv virtualenv
[21:07] <axino> yeah, on that specific unit, the package got pulled
[21:07] <axino> but not on another unit
[21:08] <cory_fu> I don't understand.  Why would it be different?
[21:09] <axino> I do not know
[21:10] <axino> https://pastebin.canonical.com/152206/
[21:11] <axino> looks like python-virtualenv does pull virtualenv. Ugh. I'm not sure what happened yesterday