[00:26] <pmatulis> NAME     OWNER        STATUS     LAST CONNECTION
[00:26] <pmatulis> admin*   admin@local  available  2016-04-28
[00:26] <pmatulis> default  admin@local  available  2016-04-28
[01:36] <admcleod1> is it possible to get model and controller name from inside a unit?
[02:49] <stub> tvansteenburgh: Others have seen that too. Looks like there is a race condition buried in there that I can't see. I'll need a set of logs to trawl through to track it down.
[02:49] <rick_h_> admcleod1: no, you can from the client, but things like controller name are client defined and it's all UUIDs in the other end.
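rick_h_'s point can be seen directly from a hook context, where only UUIDs are exposed. A minimal sketch, assuming a Juju 2.0-era agent; `mysql/0` is a placeholder unit name:

```shell
# Hook environments carry identifiers, not client-side names. Inspect them
# by running a command in the unit's hook context:
juju run --unit mysql/0 'env | grep ^JUJU_'
# Expect variables like JUJU_MODEL_UUID and JUJU_UNIT_NAME; the friendly
# controller/model names live only in the client's local configuration.
```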
[05:37] <bwallis> Hey, anyone noticing a massive slowdown on launchpad/most canonical hosted sites?
[05:38] <bwallis> Messing around with a juju deployment, and grabbing the tools for the initial bootstrap is taking a crazy amount of time
[05:39] <bwallis> Doing manual wgets is really slow too, 15-20KB/s
[05:40] <blr_> bwallis: well, glad I'm not the only one. Looking into it.
[05:42] <bwallis> yea I've been going nuts trying to figure this out all afternoon, never thought slow responses from the tools host were causing it
[05:43] <bwallis> if there's a mirror somewhere let me know :), I don't think there is though
[05:59] <blr_> bwallis: out of interest, what country are you in?
[06:00] <bwallis> US
[06:00] <bwallis> CA specifically
[06:01] <bwallis> I got around apt issues by just using other mirrors, but as far as I know there's no mirror of streams.canonical.com or launchpad
[06:01] <bwallis> Which sucks, bootstraps just sit here all day:
[06:01] <bwallis> 2016-05-01 22:59:59 DEBUG juju.environs.simplestreams simplestreams.go:674 using default candidate for content id "com.ubuntu.juju:devel:tools" are {20160220 mirrors:1.0 content-download streams/v1/cpc-mirrors.sjson []}
[06:05] <bwallis> Wow, something just happened?
[06:05] <bwallis> A bootstrap completed
[06:29] <jose> \o/ let us know if you need any more help
[06:44] <bwallis> will do, still seems to be slow, but it sped up enough to deploy one machine
[06:44] <bwallis> going to see how it goes with others
[06:53] <bwallis> spoke too soon I think :( failed to download the agent tools
[06:55] <Sharan> i have created one dummy charm and deployed but this charm is going in a error state
[06:56] <Sharan> its showing "Waiting for agent initialization to finish"
[06:57] <Sharan> model: default
[06:57] <Sharan> machines:
[06:57] <Sharan>   "0":
[06:57] <Sharan>     juju-status:
[06:57] <Sharan>       current: error
[06:57] <Sharan>       message: no matching tools available
[06:57] <Sharan>       since: 02 May 2016 11:41:21+05:30
[06:57] <Sharan>     instance-id: pending
[06:57] <Sharan>     machine-status:
[06:57] <Sharan>       current: pending
[06:57] <Sharan>       since: 02 May 2016 11:41:07+05:30
[06:57] <Sharan>     series: trusty
[06:57] <Sharan> services:
[06:57] <Sharan>   testcharm:
[06:57] <Sharan>     charm: local:trusty/testcharm-0
[06:57] <Sharan>     exposed: false
[06:57] <Sharan>     service-status:
[06:57] <Sharan>       current: unknown
[06:57] <Sharan>       message: Waiting for agent initia
[08:18] <bwallis> is there a reason the juju "controller" doesn't appear as a machine to deploy services on in a MaaS deployment now?
[08:18] <bwallis> this is with beta6
[10:29] <sharan> in juju 1.25 we used to get the log files in the path "/var/log/juju/unit-testcharm-0.log", but in juju 2.0 where do we find these log files?
[10:30] <magicaltrout> same place sharan
[10:31] <sharan> i didn't get it, i have only "/var/log"; under that there is no "juju" folder, instead i found an "lxd" folder
[10:32] <magicaltrout> dunno but all my charms log to the same place in v2 as v1
[10:32] <sharan> ok i have installed juju2.0 on Z-machine
[10:32] <sharan> so i was trying to find the log files
[10:33] <sharan> if i get into the "/var/log/lxd/juju-a938c2eb-4141-4a2d-89b6-71a20fa0d296-machine-3" i will get "forkstart.log,  lxc.conf,  lxc.log" files
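For the LXD-based machines sharan describes, the familiar /var/log/juju path still exists, but inside the container rather than on the host. A sketch, assuming the LXD provider and reusing the container name from the paste above:

```shell
# List the agent logs inside the container:
lxc exec juju-a938c2eb-4141-4a2d-89b6-71a20fa0d296-machine-3 -- ls /var/log/juju
# Or stream all logs from the client without entering the container:
juju debug-log --replay
```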
[12:51] <icey> is it possible to get the postgresql charm to install postgres 9.5?
[12:53] <marcoceppi> icey: it might be, stub would know
[12:54] <marcoceppi> icey: if you deploy on xenial
[12:54] <icey> marcoceppi: I was poking around with it, but it looks like it's explicitly setting supported as <= 9.4
[12:54] <icey> and marcoceppi, I just raised a bug about it not deploying on xenial :)
[12:54] <marcoceppi> oic
[14:13] <shruthima> hi all, I have pushed the IBM-IM charm to the store with the new charm publish commands, but how do i attach the bug .....?!!!!
[14:14] <shruthima> Before pushing in this new way i had pushed to launchpad and raised a bug (https://bugs.launchpad.net/charms/+bug/1575746) and linked it to the trunk branch. Do i need to change it?
[14:14] <mup> Bug #1575746: New Charm: IBM Installation Manager <Juju Charms Collection:New> <https://launchpad.net/bugs/1575746>
[14:20] <shruthima> yes 1575746 is the bug i created for IBM-IM, but i linked that bug to the trunk branch..!! do i need to link it to cs:~ibmcharmers/trusty/ibm-im-0...??
[14:26] <shruthima> I have made some alignment changes to the Readme of IBM-IM ...Now what is the process to merge these changes into the IBM-IM charm in the charm store???
[14:29] <shruthima> please can anyone suggest on the same??
[14:50] <shruthima> hi kwmonroe/Mbruzek, I have made some alignment changes to the Readme in IBM-IM ...Now what is the process to merge these changes into the IBM-IM charm in the charm store???
[14:52] <mbruzek> Hello shruthima you can follow these instructions: https://jujucharms.com/docs/stable/authors-charm-store#submitting-a-fix-to-an-existing-charm
[14:53] <magicaltrout> not enough question marks shruthima! ;)
[14:53] <mbruzek> Basically fix this readme in a branch and submit a merge proposal to get it added to the review queue.
[14:55] <shruthima> ok, and I have pushed the IBM-IM charm to the store with the new charm publish commands, but how do i attach the bug here.....!!!! Before pushing in this new way i had pushed to launchpad and raised a bug (https://bugs.launchpad.net/charms/+bug/1575746) and linked it to the trunk branch. Do i need to change it?
[14:55] <mup> Bug #1575746: New Charm: IBM Installation Manager <Juju Charms Collection:New> <https://launchpad.net/bugs/1575746>
[14:56] <mbruzek> Oh I see you are using the new process
[14:57] <mbruzek> That is great, let me check this link out
[14:57] <shruthima> ok
[15:03] <mbruzek> shruthima: So you have some changes to the README.md file you would like to make?
[15:04] <shruthima> yes i have some alignment changes
[15:04] <mbruzek> shruthima: I didn't know you were using the new way before, so I gave you the wrong link: https://jujucharms.com/docs/devel/authors-charm-store#submitting-a-new-charm
[15:05] <mbruzek> You should be able to `charm push` those changes into the ibm-im charm in the store.
[15:06] <mbruzek> the result from that command will likely be ibm-im-1 rather than zero.  I already see the ibm-im charm in the review queue, because you filed a bug. So everything seems correct. You can change the charm code as much as you like before the review.
[15:07] <shruthima> hmm that's fine, actually i had a doubt whether, if we push again with the charm push command, it will merge with the old one..
[15:08] <mbruzek> After you push another version up, you should update the bug  with something like this: "please review ibm-im-#" where # is the revision you want reviewed.
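The push-then-comment flow mbruzek outlines can be sketched as follows; the namespace and series come from the conversation, and the revision number is illustrative:

```shell
# From the charm's directory: each push creates a new numbered revision.
charm push . cs:~ibmcharmers/trusty/ibm-im
# If the command reports revision 1, request a review of that revision by
# commenting on the existing Launchpad bug, e.g.:
#   "please review cs:~ibmcharmers/trusty/ibm-im-1"
```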
[15:09] <shruthima> ok u mean bug description?
[15:15] <shruthima> mbruzek: ok u mean to update bug description ?
[15:15] <mbruzek> Or just add a comment, whichever is easier
[15:18] <shruthima> mbruzek: ok so the new process is only to get it into the charm store ...next, to raise a bug and link to the trunk branch we need to follow the same old process, is it ..?
[15:21] <mbruzek> shruthima: Yes, we have been working hard on the new process and review queue, but those changes have not been released.
[15:22] <mbruzek> shruthima: so for now we need to use the bug to track this change so it can be reviewed.
[15:22] <shruthima> mbruzek : ok thank you for the information :)
[15:23] <mbruzek> shruthima: Yes and thanks for asking these questions in #juju, I am sure the questions are useful for other people in this room
[15:24] <shruthima> hmmm :)
[15:37] <mbruzek> Does anyone know how to get to the controller machine in juju 2.0.beta ?
[15:37] <mbruzek> One of my machines failed to provision and has no ip address. I want to get the logs from the controller to add to the bug.
[15:38] <mbruzek> but I don't know how to refer to the controller with juju commands.
[15:39] <aisrael> mbruzek: Also wondering about that. I've only found logs once a machine has been created. Before that, debug-log has nothing.
[15:40] <mbruzek> aisrael: Yeah this machine is *not* being created and I don't know why.
[15:41] <LiftedKilt> mbruzek: juju switch admin; juju ssh 0 ?
[15:43] <mbruzek> LiftedKilt: yes that works, thank you for the information.
[15:43] <mbruzek> We should add that to a document somewhere. aisrael do you have a suggested location?
[15:45] <aisrael> Ohh, good find LiftedKilt.
[15:45] <aisrael> mbruzek: https://jujucharms.com/docs/master/developer-debugging
[15:45] <aisrael> mbruzek: Do you want to open a bug against docs, or should I?
[15:45] <mbruzek> aisrael: I will do that, and create the text which you can review
[15:45] <LiftedKilt> mbruzek: yeah it's a little obtuse that machine 0 is hidden in the admin model - you intuitively expect juju machine to show all the machines, but it shows the machines in the selected model.
[15:45] <aisrael> mbruzek: ack
[15:46] <mbruzek> LiftedKilt: It makes sense to me now that you've explained it to me.
[15:46] <mbruzek> LiftedKilt: thank you for sharing that information with me
[15:46] <LiftedKilt> mbruzek: for sure
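LiftedKilt's workaround can be scripted for log collection; the log path assumes the standard agent layout on the controller machine:

```shell
# In the 2.0 betas the controller machine lives in the 'admin' model:
juju switch admin
juju ssh 0                      # interactive shell on the controller
# Or copy the controller's machine log off for a bug report:
juju scp 0:/var/log/juju/machine-0.log ./controller-machine-0.log
```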
[15:54] <jcastro> http://askubuntu.com/questions/755885/is-there-a-way-to-adjust-the-interval-for-the-update-status-hook
[15:54] <jcastro> any help on this one?
[16:24] <mbruzek> aisrael: Please review my addition: https://github.com/juju/docs/pull/1061
[16:25] <DavidRama> oue omw
[16:25] <DavidRama> oups sorry
[16:45] <jcastro> bdx: any word from the devopsdays pdx people on if your talk was accepted?
[16:53] <natefinch> anyone online who knows about the deployer code?  Seems there's an incompatibility with some TLS changes I made in core last week
[17:09] <tvansteenburgh> natefinch: are you talking about the juju-deployer python project, or the new golang deployer code?
[17:10] <natefinch> tvansteenburgh: python
[17:11] <tvansteenburgh> natefinch: i'm one of the maintainers
[17:12] <natefinch> tvansteenburgh: I changed juju-core to only support TLS 1.2 last week... looks like CI found an incompatibility with deployer: http://reports.vapour.ws/releases/3938/job/maas-1_9-deployer-2x/attempt/49#highlight
[17:14] <tvansteenburgh> natefinch: okay, i'll take a look soonish
[17:14] <tvansteenburgh> dpb: fyi ^
[17:15] <natefinch> tvansteenburgh: thanks, let me know if there's anything I can do to help... the bug is here: https://bugs.launchpad.net/juju-core/+bug/1576695
[17:15] <mup> Bug #1576695: Deployer cannot talk to Juju2 (on maas2) because :tlsv1 alert protocol version <ci> <deployer> <maas-provider> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1576695>
[17:15] <tvansteenburgh> dpb1_: ^
[17:15] <tvansteenburgh> natefinch: cool, thanks
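Since the regression is the client-side Python being unable to speak TLS 1.2, a quick local check can rule that in or out. A sketch; `python3` here stands in for whichever interpreter actually runs juju-deployer:

```shell
# If the local Python's ssl module lacks TLS 1.2 support, the handshake
# fails exactly like the CI log above. This prints True when available:
python3 -c 'import ssl; print(hasattr(ssl, "PROTOCOL_TLSv1_2"))'
```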
[17:32] <lazyPower> tvansteenburgh - do we have a shorter method to determine which unit is a leader in an amulet test, than sentry.run() in a loop?
[17:32] <tvansteenburgh> lazyPower: nope
[17:33] <lazyPower> cool, should i add that under some "common use cases" for the amulet docs?
[17:34] <tvansteenburgh> lazyPower: either that, or file a bug and we can add a get_leader(service) or something
[17:34] <lazyPower> sounds good to me
[17:36] <lazyPower> https://github.com/juju/amulet/issues/131
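Until amulet grows a get_leader() helper, the per-unit loop lazyPower mentions reduces to invoking the is-leader hook tool in each unit's context; inside an amulet test the same command can go through sentry.run(). A CLI sketch with a placeholder service name:

```shell
# Exactly one unit of a service should report True:
for unit in myservice/0 myservice/1 myservice/2; do
    echo -n "$unit: "
    juju run --unit "$unit" is-leader
done
```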
[17:41] <bwallis> anyone still experiencing really slow speeds to launchpad/most US hosted ubuntu sites?
[17:41] <bwallis> was happening last night as well
[18:39] <suchvenu> Hi
[18:40] <suchvenu> I am getting the following error when I did,  juju destroy-environment local
[18:40] <suchvenu> ERROR cannot connect to API: unable to connect to "wss://10.0.3.1:17070/environment/c9759ee0-88cc-403d-8b6c-aa4e628bf17d/api"
[18:40] <suchvenu> Any idea on this error ?
[18:42] <LiftedKilt> the openstack-lxd bundle on xenial leaves ceph broken - all the pgs are stuck in creation
[18:42] <LiftedKilt> has anyone worked with that or found a fix? It works on trusty, so it must be a xenial quirk
[18:55] <lazyPower> suchvenu - which juju version?
[18:57] <suchvenu> 1.25
[18:58] <suchvenu> Ubuntu Trusty version
[18:58] <lazyPower> suchvenu - sudo lxc-ls --fancy, do you see any lxc containers listed?
[18:59] <lazyPower> oo local provider, hang on there's a service you should have running not a container
[19:01] <suchvenu> charm@islrpbeixv665:~/charms/trusty/ibm-db2$ sudo lxc-ls --fancy
[19:01] <suchvenu> [sudo] password for charm:
[19:01] <suchvenu> NAME                       STATE    IPV4  IPV6  AUTOSTART
[19:01] <suchvenu> ---------------------------------------------------------
[19:01] <suchvenu> juju-precise-lxc-template  STOPPED  -     -     NO
[19:01] <suchvenu> juju-trusty-lxc-template   STOPPED  -     -     NO
[19:05] <lazyPower> suchvenu - sorry, that was a red herring. It's been a while since i've used the local provider on 1.25
[19:06] <lazyPower> suchvenu - can you pastebin the output of initctl list for me?
[19:09] <suchvenu> http://pastebin.ubuntu.com/16194903/
[19:17] <lazyPower> suchvenu - juju destroy-environment -y --force local
[19:18] <lazyPower> suchvenu - as you were trying to remove it, this will clean up the stale configuration files, and you can re-bootstrap afterwards
[19:18] <lazyPower> i didn't see the upstart job listed for the local environment, so i think it just didn't complete the job of cleaning up after itself from a prior juju destroy-environment.
[19:19] <suchvenu> ok
[19:37] <tvansteenburgh> stub: https://bugs.launchpad.net/postgresql-charm/+bug/1577544 fyi
[19:37] <mup> Bug #1577544: Connection info published on relation before pg_hba.conf updated/reloaded <PostgreSQL Charm:New> <https://launchpad.net/bugs/1577544>
[20:51] <thedac> gnuoy: See my last comment on https://review.openstack.org/#/c/308070/ and consider a +2
[21:32] <pacavaca> hey guys! when I'm deploying something (openstack) using juju, each machine gets several lxc containers and one called juju-xenial-template (something like this). I noticed that this template container already has some predefined configuration for my NICs (eth0 and eth1). In particular, it assigns static IPs on one of them and I don't want that. How do I change this template (for all nodes), so that IPs are always assigned by DHCP?
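No answer was given in-channel; one possible approach (an assumption about the template layout, not a documented interface) is to edit the template container's rootfs on each node so that freshly cloned containers come up with DHCP:

```shell
# Container name and path are assumptions based on the report above; the
# template container must be stopped (it normally is) before editing.
sudo tee /var/lib/lxc/juju-xenial-lxc-template/rootfs/etc/network/interfaces.d/eth0.cfg <<'EOF'
auto eth0
iface eth0 inet dhcp
EOF
# Note: this only affects containers cloned after the change.
```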
[22:11] <cory_fu> kwmonroe: You got a PR for that layer-basic change?
[22:13] <kwmonroe> i do not cory_fu.  i'm not sure it's good. http://paste.ubuntu.com/16196042/  do i need to pip install requirements.txt?  should i activate vs including PATH extensions?  is there potential conflict with .venv for other charms (ie, should i call it .venv-newer-tox)?
[22:14] <magicaltrout> kwmonroe: you in for the full conference or just flying in and out?
[22:14] <kwmonroe> magicaltrout: i'm in it for the long haul.  Sun-Thu.
[22:14] <cory_fu> You don't need to pip install requirements, because tox will handle that.  However, since it's currently doing a sudo pip install, why not just stick with that?
[22:14] <magicaltrout> nice
[22:14] <magicaltrout> they speak funny in canada
[22:15] <kwmonroe> i'll learn 'em
[22:15] <magicaltrout> hehe
[22:18] <kwmonroe> cory_fu: not sure what you mean by "why not just stick with that".
[22:21] <cory_fu> kwmonroe: http://pastebin.ubuntu.com/16196470/
[22:24] <cory_fu> kwmonroe: https://github.com/juju-solutions/layer-basic/pull/66
[22:33]  * magicaltrout writes some slides
[22:33] <magicaltrout> better late than never
[22:41] <kwmonroe> cory_fu: marcoceppi:  quick tox fix for charmbox: https://github.com/juju-solutions/charmbox/pull/33/files
[22:42] <kwmonroe> this moves the tox fix to cb instead of layer-basic's Makefile since you guys are in such violent disagreement over cory's pull 66
[22:42] <marcoceppi> kwmonroe cory_fu I'd prefer this. I've honestly put tox in a venv that I activate when i do charm testing stuff; this fixes it where you need it, but we still need to consolidate charm test for charm 2.2 to be docker/lxd ootb
[22:43] <cory_fu> marcoceppi: Why isn't it using charmbox?  That's what we've standardized on for all of our CI infra
[22:43] <marcoceppi> cory_fu: because, time.
[22:43] <marcoceppi> cory_fu: also, the openstack engeineers depend on the old test runner
[22:44] <marcoceppi> we need to get them onto bundletester, then get charm test to just boot the charmbox container and run tests
[22:50] <admcleod1> is there any way to get info about a unit, from within another unit, without a relation to it
[22:51] <marcoceppi> admcleod1: no, not really
[22:51] <marcoceppi> admcleod1: charms only know about what they are connected to; implicit querying of other services would lead to a disaster of topologies that require other services without an explicit relation for the operator to know they're logically connected
[22:52] <cory_fu> admcleod1: juju run service/0 'unit-get private-address'
[22:53] <admcleod1> cory_fu: from within another unit though
[22:53] <admcleod1> marcoceppi: right
[22:53] <cory_fu> admcleod1: But what you really want to do is make sure you expose the namenode and make sure that the namenode is listening on all addresses and then it should work
[22:54] <cory_fu> admcleod1: With a relation.  For the dfs-copy action, we should make it work with the public address
[22:54] <admcleod1> cory_fu: there is that, but will trying to copy data over the public address be an issue with some installations?
[22:54] <admcleod1> s/address/interface
[22:54] <cory_fu> It should work, too, we're probably just missing one of: listen on all addresses (0.0.0.0), open-port on the correct port, juju expose namenode
[22:55] <cory_fu> Yes, it's not ideal, but you don't really have any choice if one of the namenodes is not juju deployed.  For two juju deployed services, we'd want the relation, like we discussed
[22:55] <admcleod1> yeah true
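The three conditions cory_fu enumerates can be checked concretely; port 8020 is the stock HDFS namenode RPC port and is an assumption about this particular charm:

```shell
# 1. The charm must bind the namenode to all interfaces (0.0.0.0) and a
#    hook must declare the port (open-port is a hook tool, so this line
#    runs inside a charm hook, not from the client):
open-port 8020
# 2. From the client, expose the service so provider firewall rules open:
juju expose namenode
# 3. Verify the daemon is listening on all addresses:
juju run --unit namenode/0 'netstat -lnt | grep :8020'
```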
[22:56] <cory_fu> Anyway, I have to EOD
[22:56] <admcleod1> cya
[22:57] <magicaltrout> cory_fu's mrs has him tied around her little finger.....
[22:57] <magicaltrout> oh no, he's just sensible and doesn't work 18 hour days
[22:59] <kwmonroe> admcleod1: if you're going to adjust apache-hadoop-namenode to listen on all interfaces, you can do something like i did for bigtop recently: https://github.com/juju-solutions/layer-apache-bigtop-namenode/commit/ac0c472707e1f48ccbf416f1c1eb2e189b6bd8e9
[23:01] <admcleod1> kwmonroe: i guess ill have to, seems a bit dodgy though
[23:03] <admcleod1> kwmonroe: maybe an action to do it temporarily?
[23:04] <kwmonroe> admcleod1: make it less dodge by creating a new dist.yaml port called "dfs-copy" that's "exposed_on: totally_dodgie", then make the action expose "totally_dodgie", do the copy, then unexpose it.
[23:07] <admcleod1> heh
[23:07] <admcleod1> kwmonroe: yeah but more because it's listening on all interfaces
[23:10] <kwmonroe> oh, i don't think it's too spookie to listen on all interfaces. and when we migrate services to hosts with different IPs, it's nice to not have to reconfigure.
[23:20] <admcleod1> kwmonroe: i know a few network admins who might disagree.. but ok lets do it! :}