[00:26] NAME     OWNER        STATUS     LAST CONNECTION
[00:26] admin*   admin@local  available  2016-04-28
[00:26] default  admin@local  available  2016-04-28
[01:36] is it possible to get model and controller name from inside a unit?
[02:49] tvansteenburgh: Others have seen that too. Looks like there is a race condition buried in there that I can't see. I'll need a set of logs to trawl through to track it down.
[02:49] admcleod1: no, you can from the client, but things like controller name are client-defined and it's all UUIDs on the other end.
[05:37] Hey, anyone noticing a massive slowdown on launchpad/most Canonical-hosted sites?
[05:38] Messing around with a juju deployment, and grabbing the tools for the initial bootstrap is taking a crazy amount of time
[05:39] Doing manual wgets is really slow too, 15-20KB/s
[05:40] bwallis: well, glad I'm not the only one. Looking into it.
[05:42] yeah, I've been going nuts trying to figure this out all afternoon; never thought slow responses from the tools host were causing it
[05:43] if there's a mirror somewhere let me know :) I don't think there is though
[05:59] bwallis: out of interest, what country are you in?
[06:00] US
[06:00] CA specifically
[06:01] I got around apt issues by just using other mirrors, but as far as I know there's no mirror of streams.canonical.com or launchpad
[06:01] Which sucks; bootstraps just sit here all day:
[06:01] 2016-05-01 22:59:59 DEBUG juju.environs.simplestreams simplestreams.go:674 using default candidate for content id "com.ubuntu.juju:devel:tools" are {20160220 mirrors:1.0 content-download streams/v1/cpc-mirrors.sjson []}
[06:05] Wow, something just happened?
[06:05] A bootstrap completed
=== blr_ is now known as blr
[06:29] \o/ let us know if you need any more help
[06:44] will do, still seems to be slow, but it sped up enough to deploy one machine
[06:44] going to see how it goes with others
[06:53] spoke too soon I think :( failed to download the agent tools
[06:55] I have created a dummy charm and deployed it, but the charm is going into an error state
[06:56] it's showing "Waiting for agent initialization to finish"
[06:57] model: default
        machines:
          "0":
            juju-status:
              current: error
              message: no matching tools available
              since: 02 May 2016 11:41:21+05:30
            instance-id: pending
            machine-status:
              current: pending
              since: 02 May 2016 11:41:07+05:30
            series: trusty
        services:
          testcharm:
            charm: local:trusty/testcharm-0
            exposed: false
            service-status:
              current: unknown
              message: Waiting for agent initia
[08:18] is there a reason the juju "controller" doesn't appear as a machine to deploy services on in a MAAS deployment now?
[08:18] this is with beta6
[10:29] in juju 1.25 we used to get the log files at "/var/log/juju/unit-testcharm-0.log", but in juju 2.0 where do we find these log files?
[10:30] same place sharan
[10:31] I didn't get that; under "/var/log" there is no "juju" folder, instead I found an "lxd" folder
[10:32] dunno, but all my charms log to the same place in v2 as v1
[10:32] ok, I have installed juju 2.0 on a Z-machine
[10:32] so I was trying to find the log files
[10:33] if I get into "/var/log/lxd/juju-a938c2eb-4141-4a2d-89b6-71a20fa0d296-machine-3" I will get "forkstart.log, lxc.conf, lxc.log" files
[12:51] is it possible to get the postgresql charm to install postgres 9.5?
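An aside on the earlier log-file question: in Juju 2.0 the unit logs still live at /var/log/juju/unit-<name>-<n>.log, but inside the machine's container rather than on the host; the host-side /var/log/lxd/<container> directory only holds LXD's own files (forkstart.log, lxc.conf, lxc.log). A minimal sketch, assuming the unit and container names quoted above; a real deployment will have different UUIDs:

    # Stream one unit's log from the client, without shelling in
    juju debug-log --include unit-testcharm-0

    # Or read the file inside the container itself
    lxc exec juju-a938c2eb-4141-4a2d-89b6-71a20fa0d296-machine-3 -- \
        tail -f /var/log/juju/unit-testcharm-0.log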
[12:53] icey: it might be, stub would know
[12:54] icey: if you deploy on xenial
[12:54] marcoceppi: I was poking around with it, but it looks like it's explicitly setting supported as <= 9.4
[12:54] and marcoceppi, I just raised a bug about it not deploying on xenial :)
[12:54] oic
=== cmars` is now known as cmars
[14:13] hi all, I have pushed the IBM-IM charm to the store with the new charm publish commands, but how to attach the bug .....?!!!!
[14:14] Before pushing in this new way I pushed to launchpad and raised a bug (https://bugs.launchpad.net/charms/+bug/1575746) linked to the trunk branch. Do I need to change it?
[14:14] Bug #1575746: New Charm: IBM Installation Manager
[14:20] yes, 1575746 is the bug I created for IBM-IM, but that bug I linked to the trunk branch ..!!! do I need to link it to cs:~ibmcharmers/trusty/ibm-im-0 ...??
[14:26] I have made some alignment changes to the README of IBM-IM ... Now what is the process to merge these changes into the IBM-IM charm in the charm store???
[14:29] please can anyone suggest on the same??
=== thedac is now known as dames
[14:50] hi kwmonroe/mbruzek, I have made some alignment changes to the README in IBM-IM ... Now what is the process to merge these changes into the IBM-IM charm in the charm store???
[14:52] Hello shruthima, you can follow these instructions: https://jujucharms.com/docs/stable/authors-charm-store#submitting-a-fix-to-an-existing-charm
[14:53] not enough question marks shruthima! ;)
[14:53] Basically, fix this readme in a branch and submit a merge proposal to get it added to the review queue.
[14:55] ok, and I have pushed the IBM-IM charm to the store with the new charm publish commands, but how to attach the bug here .....!!!! Before pushing in this new way I pushed to launchpad and raised a bug (https://bugs.launchpad.net/charms/+bug/1575746) linked to the trunk branch. Do I need to change it?
[14:55] Bug #1575746: New Charm: IBM Installation Manager
[14:56] Oh, I see you are using the new process
[14:57] That is great, let me check this link out
[14:57] ok
[15:03] shruthima: So you have some changes to the README.md file you would like to make?
[15:04] yes, I have some alignment changes
[15:04] shruthima: I didn't know you were using the new way before, so I gave you the wrong link: https://jujucharms.com/docs/devel/authors-charm-store#submitting-a-new-charm
[15:05] You should be able to `charm push` those changes into the ibm-im charm in the store.
[15:06] the result from that command will likely be ibm-im-1 rather than zero. I already see the ibm-im charm in the review queue, because you filed a bug. So everything seems correct. You can change the charm code as much as you like before the review.
[15:07] hmm, that's fine; actually I had a doubt whether, if we push again with the charm push command, it will merge with the old one
[15:08] After you push another version up, you should update the bug with something like this: "please review ibm-im-#" where # is the revision you want reviewed.
[15:09] ok, you mean the bug description?
[15:15] mbruzek: ok, you mean to update the bug description?
[15:15] Or just add a comment, whichever is easier
[15:18] mbruzek: ok, so the new process is only to reflect in the charm store; next, to raise a bug and link it to the trunk branch, do we need to follow the same old process?
[15:21] shruthima: Yes, we have been working hard on the new process and review queue, but those changes have not been released.
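To make the push flow mbruzek outlines concrete, a sketch using the charm name from this thread; the revision number is illustrative, and note the store assigns a fresh revision on every push rather than merging into the old one:

    # From the charm's directory: push the updated charm to the store
    charm push . cs:~ibmcharmers/trusty/ibm-im
    # => the command prints the new revision, e.g. cs:~ibmcharmers/trusty/ibm-im-1

    # Then leave a comment on the bug along the lines of:
    #   "please review cs:~ibmcharmers/trusty/ibm-im-1"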
=== CyberJacob_ is now known as CyberJacob
[15:22] shruthima: so for now we need to use the bug to track this change so it can be reviewed.
=== CyberJacob is now known as Guest29413
[15:22] mbruzek: ok, thank you for the information :)
=== marlinc_ is now known as marlinc
[15:23] shruthima: Yes, and thanks for asking these questions in #juju; I am sure the questions are useful for other people in this room
=== cargonza_ is now known as cargonza
=== arosales_ is now known as arosales
[15:24] hmmm :)
=== hazmat_ is now known as hazmat
[15:37] Does anyone know how to get to the controller machine in juju 2.0.beta?
[15:37] One of my machines failed to provision and has no IP address. I want to get the logs from the controller to add to the bug.
[15:38] but I don't know how to refer to the controller with juju commands.
[15:39] mbruzek: Also wondering about that. I've only found logs once a machine has been created. Before that, debug-log has nothing.
[15:40] aisrael: Yeah, this machine is *not* being created and I don't know why.
[15:41] mbruzek: juju switch admin; juju ssh 0 ?
[15:43] LiftedKilt: yes, that works, thank you for the information.
[15:43] We should add that to a document somewhere. aisrael, do you have a suggested location?
[15:45] Ohh, good find LiftedKilt.
[15:45] mbruzek: https://jujucharms.com/docs/master/developer-debugging
[15:45] mbruzek: Do you want to open a bug against docs, or should I?
[15:45] aisrael: I will do that, and create the text which you can review
[15:45] mbruzek: yeah, it's a little obtuse that machine 0 is hidden in the admin model - you intuitively expect juju machines to show all the machines, but it shows only the machines in the selected model.
[15:45] mbruzek: ack
[15:46] LiftedKilt: It makes sense to me now that you've explained it to me.
[15:46] LiftedKilt: thank you for sharing that information with me
[15:46] mbruzek: for sure
[15:54] http://askubuntu.com/questions/755885/is-there-a-way-to-adjust-the-interval-for-the-update-status-hook
[15:54] any help on this one?
[16:24] aisrael: Please review my addition: https://github.com/juju/docs/pull/1061
[16:25] oue omw
[16:25] oups sorry
[16:45] bdx: any word from the devopsdays pdx people on whether your talk was accepted?
[16:53] anyone online who knows about the deployer code? Seems there's an incompatibility with some TLS changes I made in core last week
[17:09] natefinch: are you talking about the juju-deployer python project, or the new golang deployer code?
[17:10] tvansteenburgh: python
[17:11] natefinch: I'm one of the maintainers
[17:12] tvansteenburgh: I changed juju-core to only support TLS 1.2 last week... looks like CI found an incompatibility with deployer: http://reports.vapour.ws/releases/3938/job/maas-1_9-deployer-2x/attempt/49#highlight
[17:14] natefinch: okay, I'll take a look soonish
[17:14] dpb: fyi ^
[17:15] tvansteenburgh: thanks, let me know if there's anything I can do to help... the bug is here: https://bugs.launchpad.net/juju-core/+bug/1576695
[17:15] Bug #1576695: Deployer cannot talk to Juju2 (on maas2) because :tlsv1 alert protocol version
[17:15] dpb1_: ^
[17:15] natefinch: cool, thanks
=== natefinch is now known as natefinch-lunch
[17:32] tvansteenburgh - do we have a shorter method to determine which unit is the leader in an amulet test than sentry.run() in a loop?
[17:32] lazyPower: nope
[17:33] cool, should I add that under some "common use cases" for the amulet docs?
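For reference, LiftedKilt's controller-access tip above boils down to switching models; a minimal sketch against a Juju 2.0-beta client, where the controller's machines live in the hidden admin model:

    juju switch admin     # controller machines live here, not in the workload model
    juju ssh 0            # machine 0 is the controller; its logs are under /var/log/juju
    juju switch default   # back to the workload model when done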
[17:34] lazyPower: either that, or file a bug and we can add a get_leader(service) or something
[17:34] sounds good to me
[17:36] https://github.com/juju/amulet/issues/131
[17:41] anyone still experiencing really slow speeds to launchpad/most US-hosted Ubuntu sites?
[17:41] was happening last night as well
=== natefinch-lunch is now known as natefinch
[18:39] Hi
[18:40] I am getting the following error when I ran juju destroy-environment local
[18:40] ERROR cannot connect to API: unable to connect to "wss://10.0.3.1:17070/environment/c9759ee0-88cc-403d-8b6c-aa4e628bf17d/api"
[18:40] Any idea on this error?
[18:42] the openstack-lxd bundle on xenial leaves ceph broken - all the pgs are stuck in creation
[18:42] has anyone worked with that or found a fix? It works on trusty, so it must be a xenial quirk
[18:55] suchvenu - which juju version?
[18:57] 1.25
[18:58] Ubuntu Trusty version
[18:58] suchvenu - sudo lxc-ls --fancy; do you see any lxc containers listed?
[18:59] oo, local provider; hang on, there's a service you should have running, not a container
[19:01] charm@islrpbeixv665:~/charms/trusty/ibm-db2$ sudo lxc-ls --fancy
        [sudo] password for charm:
        NAME                       STATE    IPV4  IPV6  AUTOSTART
        ---------------------------------------------------------
        juju-precise-lxc-template  STOPPED  -     -     NO
        juju-trusty-lxc-template   STOPPED  -     -     NO
        charm@islrpbeixv665:~/charms/trusty/ibm-db2$
[19:05] suchvenu - sorry, that was a red herring. It's been a moment since I've used the local provider on 1.25
[19:06] suchvenu - can you pastebin the output of initctl list for me?
[19:09] http://pastebin.ubuntu.com/16194903/
=== bsod90 is now known as pacavaca
[19:17] suchvenu - juju destroy-environment -y --force local
[19:18] suchvenu - as you were trying to remove it, this will clean up the stale configuration files, and you can re-bootstrap afterwards
[19:18] I didn't see the upstart job listed for the local environment, so I think it just didn't complete the job of cleaning up after itself from a prior juju destroy-environment.
[19:19] ok
[19:37] stub: https://bugs.launchpad.net/postgresql-charm/+bug/1577544 fyi
[19:37] Bug #1577544: Connection info published on relation before pg_hba.conf updated/reloaded
=== FourDollars_ is now known as FourDollars
=== Tribaal_ is now known as Tribaal
=== blr_ is now known as blr
[20:51] gnuoy: See my last comment on https://review.openstack.org/#/c/308070/ and consider a +2
=== natefinch is now known as natefinch-afk
[21:32] hey guys! when I'm deploying something (openstack) using juju, each machine gets several lxc containers and one called juju-xenial-template (something like this). I noticed that this template container already has some predefined configuration for my NICs (eth0 and eth1). In particular, it assigns static IPs on one of them and I don't want that. How do I change this template (for all nodes), so that IPs are always assigned by DHCP?
[22:11] kwmonroe: You got a PR for that layer-basic change?
[22:13] I do not, cory_fu. I'm not sure it's good: http://paste.ubuntu.com/16196042/ - do I need to pip install requirements.txt? should I activate vs. including PATH extensions? is there potential conflict with .venv for other charms (i.e., should I call it .venv-newer-tox)?
[22:14] kwmonroe: you in for the full conference or just flying in and out?
[22:14] magicaltrout: I'm in it for the long haul. Sun-Thu.
[22:14] You don't need to pip install requirements, because tox will handle that. However, since it's currently doing a sudo pip install, why not just stick with that?
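Back on the amulet leader question: until a get_leader() helper lands (issue 131 above), the loop-based workaround amounts to asking each unit via the is-leader hook tool. A sketch in plain juju terms, with a hypothetical service name and unit count:

    # Exactly one unit should print True
    for unit in myservice/0 myservice/1 myservice/2; do
        echo -n "$unit: "
        juju run --unit "$unit" 'is-leader'
    done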
[22:14] nice
[22:14] they speak funny in canada
[22:15] I'll learn 'em
[22:15] hehe
[22:18] cory_fu: not sure what you mean by "why not just stick with that".
[22:21] kwmonroe: http://pastebin.ubuntu.com/16196470/
[22:24] kwmonroe: https://github.com/juju-solutions/layer-basic/pull/66
[22:33] * magicaltrout writes some slides
[22:33] better late than never
[22:41] cory_fu: marcoceppi: quick tox fix for charmbox: https://github.com/juju-solutions/charmbox/pull/33/files
[22:42] this moves the tox fix to cb instead of layer-basic's Makefile, since you guys are in such violent disagreement over cory's pull 66
[22:42] kwmonroe, cory_fu: I'd prefer this; I've honestly put tox in a venv that I activate when I do charm testing stuff. This fixes it where you need it, but we still need to consolidate charm test for charm 2.2 to be docker/lxd ootb
[22:43] marcoceppi: Why isn't it using charmbox? That's what we've standardized on for all of our CI infra
[22:43] cory_fu: because, time.
[22:43] cory_fu: also, the openstack engineers depend on the old test runner
[22:44] we need to get them onto bundletester, then get charm test to just boot the charmbox container and run tests
[22:50] is there any way to get info about a unit, from within another unit, without a relation to it?
[22:51] admcleod1: no, not really
[22:51] admcleod1: charms only know about what they are connected to; implicit querying of other services would lead to a disaster of topologies that require other services without an explicit relation for the operator to know they're logically connected
[22:52] admcleod1: juju run service/0 'unit-get private-address'
[22:53] cory_fu: from within another unit though
[22:53] marcoceppi: right
[22:53] admcleod1: But what you really want to do is make sure you expose the namenode and make sure that the namenode is listening on all addresses, and then it should work
[22:54] admcleod1: With a relation. For the dfs-copy action, we should make it work with the public address
[22:54] cory_fu: there is that, but will trying to copy data over the public interface be an issue with some installations?
[22:54] It should work, too; we're probably just missing one of: listen on all addresses (0.0.0.0), open-port on the correct port, juju expose namenode
[22:55] Yes, it's not ideal, but you don't really have any choice if one of the namenodes is not juju deployed. For two juju-deployed services, we'd want the relation, like we discussed
[22:55] yeah, true
[22:56] Anyway, I have to EOD
[22:56] cya
[22:57] cory_fu's mrs has him tied around her little finger.....
[22:57] oh no, he's just sensible and doesn't work 18-hour days
[22:59] admcleod1: if you're going to adjust apache-hadoop-namenode to listen on all interfaces, you can do something like I did for bigtop recently: https://github.com/juju-solutions/layer-apache-bigtop-namenode/commit/ac0c472707e1f48ccbf416f1c1eb2e189b6bd8e9
[23:01] kwmonroe: I guess I'll have to; seems a bit dodgy though
[23:03] kwmonroe: maybe an action to do it temporarily?
[23:04] admcleod1: make it less dodgy by creating a new dist.yaml port called "dfs-copy" that's "exposed_on: totally_dodgie", then make the action expose "totally_dodgie", do the copy, then unexpose it.
[23:07] heh
[23:07] kwmonroe: yeah, but more because it's listening on all interfaces
[23:10] oh, I don't think it's too spooky to listen on all interfaces. and when we migrate services to hosts with different IPs, it's nice to not have to reconfigure.
[23:20] kwmonroe: I know a few network admins who might disagree... but ok, let's do it! :}
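Tying off the namenode thread: the three items cory_fu lists (listen on all addresses, open the port, expose the service) map roughly onto the following; the Hadoop property and the 8020 RPC port are standard defaults, but the exact charm wiring here is an assumption, not the apache-hadoop-namenode implementation:

    # 1. In hdfs-site.xml, make the namenode bind to all interfaces, e.g.
    #      dfs.namenode.rpc-bind-host -> 0.0.0.0
    # 2. In a charm hook, record that the RPC port is in use
    open-port 8020
    # 3. From the client, expose the service so its public address is reachable
    juju expose namenode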