[00:31] <lazyPower> https://code.launchpad.net/~lazypower/charm-helpers/add-workload-version/+merge/305062  -- if anyone has some spare cycles to look over a new feature for le charm-helpers for me
[03:10] <pragsmike> After cleaning up my maas networks, I still have the same problem: containers end up on lxdbr0 with unreachable addresses
[03:12]  * pragsmike hates computers
[03:17] <pragsmike> UNIT       WORKLOAD     AGENT  MACHINE  PUBLIC-ADDRESS  PORTS   MESSAGE
[03:17] <pragsmike> mariadb/0  unknown      idle   0/lxd/0  10.0.0.26
[03:17] <pragsmike> vanilla/0  maintenance  idle   0        192.168.57.101  80/tcp  Starting Apache
[03:17] <pragsmike> MACHINE  STATE    DNS             INS-ID               SERIES  AZ
[03:17] <pragsmike> 0        started  192.168.57.101  tamdhn               trusty  default
[03:17] <pragsmike> 0/lxd/0  started  10.0.0.26       juju-b78821-0-lxd-0  trusty
[03:18] <pragsmike> vanilla will never be able to connect to mariadb
[08:28] <magicaltrout> alrighty, NASA and the other company are having a big fat row... which leaves me a  nice gap to finish off my charms for next week \o/
[09:04] <magicaltrout> alright 3 talks submitted to apachecon eu
[09:04] <magicaltrout> better stop before they all get accepted like usual and I have a mountain of stuff to do
[09:04] <rock> Hi. Facing an issue while deploying juju-gui in an internal OpenStack environment [Ubuntu OpenStack Autopilot setup]. Issue details pasted here: http://paste.openstack.org/show/567387/. Can anyone please provide a solution for this issue?
[09:36] <magicaltrout> well I have no idea how openstack works, but it looks like you need to define a pool of servers in a region called region1 :)
[10:00] <rock> magicaltrout: OK. Thank you.
[10:01] <rock> But I don't understand one thing. To add a single Juju charm to the existing Ubuntu Autopilot setup, do we need to add servers to the MAAS server zone [cloud region]?
[10:06] <magicaltrout> rock: surely region1 is post openstack bootstrap
[10:06] <magicaltrout> doesn't openstack have regions defined like AWS does?
[11:30] <magicaltrout> can i test a charm locally that uses resources without pushing stuff to the charmstore?
[11:35] <magicaltrout> ah juju attach it seems
[11:35] <rick_h_> magicaltrout: yep
[11:45] <magicaltrout> i assume we'll be seeing your well groomed facial hair in pasadena rick_h_ ?
[11:48] <rick_h_> magicaltrout: hah probably
[11:48] <rick_h_> i'll look for my pommade
[11:48] <magicaltrout> lol
[11:51] <magicaltrout> ooh my first working resource enabled charm
[11:51] <magicaltrout> nice
[12:00] <pragsmike> any insights would be appreciated, as to why my containers aren't reachable from any other machines:
[12:00] <pragsmike> MACHINE  STATE    DNS             INS-ID               SERIES  AZ
[12:00] <pragsmike> 0        started  192.168.57.101  tamdhn               trusty  default
[12:00] <pragsmike> 0/lxd/0  started  10.0.0.26       juju-b78821-0-lxd-0  trusty
[12:00] <pragsmike> isn't the container supposed to be on the same subnet as machine 0?
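The mismatch pragsmike is describing can be verified directly with Python's stdlib ipaddress module (a sketch; the /24 mask on the host network is an assumption):

```python
import ipaddress

# Check the symptom from the status output above.
# 192.168.57.0/24 is assumed to be machine 0's MAAS-managed subnet;
# 10.0.0.26 is the container's address (lxdbr0's private range).
host_net = ipaddress.ip_network("192.168.57.0/24")
container_ip = ipaddress.ip_address("10.0.0.26")

# The container's address is not on the host's subnet, so other
# machines have no route to it -- exactly the reported problem.
print(container_ip in host_net)  # False
```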
[12:04] <magicaltrout> I like your style pragsmike you've exhausted the late in the day folks so now try the early risers? ;)
[12:04] <pragsmike> well i'm pretty exhausted :/
[12:04] <magicaltrout> well rick_h_ may be of use
[12:05] <magicaltrout> pragsmike: I would also backup your irc questions by dumping them on the juju mailing list
[12:05] <magicaltrout> its not as instant but you get a wider response base
[12:05] <pragsmike> good point
[12:05] <pragsmike> and it would keep me from spamming irc so much
[12:05] <magicaltrout> doesn't hurt to spam every channel going ;)
[12:06] <pragsmike> hey i've kept it off #maas
[12:08] <magicaltrout> if i were in your shoes pragsmike i'd spam #juju the juju mailing list and askubuntu
[12:08] <magicaltrout> until someone responds :)
[12:09] <cargill> hi, is there an offline readable edition of the juju user/developer guides?
[12:26] <rick_h_> pragsmike: the team's working on a few bugs around that atm. /me goes to look up bug #'s
[12:27] <rick_h_> cargill: no, but it's in GitHub so you can fork it and have a local copy. https://github.com/juju/docs
[12:29] <cargill> rick_h_: brilliant, thanks
[12:42] <magicaltrout> cory_fu: ping
[12:45] <admcleod> pragsmike: is your lxd bridged to the host interface?
[12:45] <admcleod> magicaltrout: its not quite 6am there yet
[12:45] <pragsmike> no it isn't, that's the problem
[12:46] <admcleod> pragsmike: tried this? https://insights.ubuntu.com/2016/04/07/lxd-networking-lxdbr0-explained/
[12:46] <pragsmike> checking
[12:47] <magicaltrout> admcleod: so?!
[12:47] <rick_h_> frobware: didn't we have a bug for this yesterday? I'm not seeing which one it is ^
[12:47] <admcleod> magicaltrout: just sayin
[12:47] <rick_h_> frobware: where containers are getting internal private IPs vs ones on the host maas network via dhcp?
[12:48] <pragsmike> that's a good description of the problem
[12:48] <rick_h_> natefinch: morning, did you get a review of the bundle branch up? I'm not seeing it and would like to be able to ship that for the beta/before it gets out of sync with trunk again please
[12:48] <pragsmike> I wonder if it's related to https://lists.ubuntu.com/archives/juju/2016-September/007801.html
[12:49] <magicaltrout> you'll do though admcleod cause i just need some eyes to tell me what moronic thing I'm doing
[12:50] <magicaltrout> https://github.com/buggtb/layer-drillbit/blob/master/metadata.yaml#L22 mysql relation
[12:50] <magicaltrout> https://github.com/buggtb/layer-drillbit/blob/master/reactive/drillbit.py#L183 <- reactive code for the relation
[12:50] <magicaltrout> when I juju deploy mysql and relate the charms
[12:50] <magicaltrout> that method is never run
[12:52] <admcleod> magicaltrout: well the mysql interface needs to set the available state
[12:52] <pragsmike> magicaltrout thank you for pointing me at the mailing list, i think that message i linked is in fact my problem
[12:53] <pragsmike> though i do have the interfaces assigned to subnets and it still doesn't work
[12:54] <admcleod> magicaltrout: and you dont appear to be using the mysql interface
[12:54] <magicaltrout> admcleod: yeah i get that but I look at the interface and that is set if there is a valid connection_string
[12:54] <magicaltrout> https://github.com/johnsca/juju-relation-mysql/blob/master/requires.py#L43
[12:54] <admcleod> magicaltrout: right...
[12:55] <admcleod> magicaltrout: but mysql isnt in layer.yaml
[12:55] <admcleod> magicaltrout: or does something else include it?
[12:56] <magicaltrout> see
[12:56] <magicaltrout> told you it was moronic
[12:56] <magicaltrout> ta
[12:56] <admcleod> cool, no worries
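What bit magicaltrout here is the reactive dispatch model: a handler only runs once all of its required flags are set, and if the interface layer isn't pulled in via layer.yaml, nothing ever sets the flag. A toy stdlib model of that gating (not the real charms.reactive library):

```python
# Toy model of charms.reactive state gating (illustration only, not the
# real library): a handler fires only when all of its required flags are
# set. If the mysql interface layer is missing from layer.yaml, nothing
# ever sets 'mysql.available', so the decorated handler never runs.
flags = set()
handlers = []

def when(*needed):
    def register(fn):
        handlers.append((set(needed), fn))
        return fn
    return register

def dispatch():
    ran = []
    for needed, fn in handlers:
        if needed <= flags:  # all required flags present?
            fn()
            ran.append(fn.__name__)
    return ran

@when('mysql.available')
def configure_drill():
    pass

print(dispatch())             # [] -- interface layer absent, flag never set
flags.add('mysql.available')  # what the included interface layer would do
print(dispatch())             # ['configure_drill']
```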
[12:58] <frobware> rick_h_: didn't get to that - could be https://bugs.launchpad.net/juju/+bug/1566791
[12:58] <mup> Bug #1566791: VLANs on an unconfigured parent device error with "cannot set link-layer device addresses of machine "0": invalid address <2.0> <4010> <cpec> <network> <juju:In Progress by dimitern> <https://launchpad.net/bugs/1566791>
[12:59] <frobware> pragsmike: ^^
[12:59] <frobware> pragsmike: could do with some quick context; this on MAAS?
[13:00] <pragsmike> yes, juju2 beta17/maas2.1 latest alpha
[13:00] <natefinch> rick_h_: I'm sorry, I totally flaked on that yesterday.  I'll look at it right now.
[13:00] <frobware> pragsmike: on the machine hosting the container can you do: lxc config show <container-name>
[13:00] <rick_h_> natefinch: ty
[13:01] <pragsmike> config:
[13:01] <pragsmike>   core.proxy_ignore_hosts: 127.0.0.1,192.168.57.100,::1
[13:01] <pragsmike> doh
[13:01] <frobware> pragsmike: that mail archive you posted to is the same bug I referenced
[13:02] <rick_h_> pragsmike: ok, so we're working on getting that fixed up for the next beta hopefully.
[13:02] <natefinch> rick_h_: do you have a link? I'm not sure where I'm supposed to be looking
[13:02] <rick_h_> natefinch: look in the kanban card please
[13:02] <pragsmike> http://pastebin.com/9LagVLyK
[13:02] <rick_h_> natefinch: it's why I ask folks to make sure to link them up.
[13:02] <pragsmike> cool, good to know
[13:03] <natefinch> rick_h_: oh, right
[13:03] <pragsmike> sorry i'm still new enough that i don't always recognize what i'm looking at
[13:03] <rick_h_> frobware: call?
[13:04] <rick_h_> pragsmike: all good, I'm not new and that mentioned vlans and such so I wasn't sure
[13:05] <frobware> pragsmike: from the maas node, can you PB /etc/network/interfaces
[13:06] <pragsmike> auto eth0
[13:06] <pragsmike> iface eth0 inet dhcp
[13:06] <pragsmike> besides the loopback, and that's in /etc/network/interfaces.d/50-cloud-init.cfg
[13:07] <pragsmike> wait 'maas node' means maas controller, or the container host
[13:07] <pragsmike> i'm guessing the latter is more interesting, standby
[13:08] <pragsmike> http://pastebin.com/6pZuuXXS
[13:08] <pragsmike> the container host's nic does get put on a br- bridge
[13:09] <pragsmike> it's just that the containers still end up using the wrong bridge (lxdbr0)
[13:19] <jacekn> hello. I am trying to get local provider up and running but it seems broken in juju 1.25: https://bugs.launchpad.net/juju-core/+bug/1618963 and also in 2.0: https://bugs.launchpad.net/juju/+bug/1618948 anybody knows workaround to get local provider up and running?
[13:19] <mup> Bug #1618948: Can't bootstrap localhost cloud <juju:New> <https://launchpad.net/bugs/1618948>
[13:19] <magicaltrout> jacekn: isn't your 2.0 install ancient?
[13:20] <magicaltrout> ooh
[13:20] <magicaltrout> tools is ancient
[13:20] <jacekn> magicaltrout: it's the latest one in xenial: 2.0-beta15-xenial-amd64
[13:20] <jacekn> yeah
[13:20] <magicaltrout> can you sync-tools or something?
[13:20] <magicaltrout> they've changed the term
[13:22] <jacekn> nope that does not work http://pastebin.ubuntu.com/23145770/
[13:22] <magicaltrout> i just tried the same jacekn on 2.0 and it bootstraps for me
[13:23] <magicaltrout> yeah this is the thing I don't get
[13:23] <magicaltrout> in the latest beta they removed --upload-tools
[13:23] <magicaltrout> but then if you need to upload them to boot, how do you do it?
[13:23] <magicaltrout> or maybe thats not required I never quite understood
[13:24] <magicaltrout> anyway i'm on beta16
[13:24] <magicaltrout> and it booted
[13:25] <jacekn> magicaltrout: where did you get beta16 from? Xenial has beta15
[13:26] <dimitern> pragsmike: if you grep for `failed to prepare container ".*" network config` in /var/log/juju/machine-0.log on the controller and find it, that will be the reason why you're seeing lxds coming up with a single NIC bridged to lxdbr0
[13:26] <magicaltrout> ppa-dev or something
[13:27] <dimitern> pragsmike: this is because we couldn't finish allocation for a multi-NIC config for the lxd and resorted to using a "fallback" config (eth0->lxdbr0, dhcp); that's also what we're fixing currently
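dimitern's grep can also be done in a few lines of Python; the sample log lines below are the ones pragsmike pasted in the channel:

```python
import re

# Scan machine-0.log text for the warning dimitern describes.
# The two sample lines are from pragsmike's paste in this channel.
log = """\
2016-09-07 03:10:58 WARNING juju.provisioner lxd-broker.go:62 failed to prepare container "0/lxd/0" network config: connection is shut down
2016-09-07 03:12:22 WARNING juju.provisioner lxd-broker.go:62 failed to prepare container "0/lxd/0" network config: {"hostname": ["Node with this Hostname already exists."]}
"""
pattern = re.compile(r'failed to prepare container "(.*?)" network config: (.*)')
for container, reason in pattern.findall(log):
    print(container, '->', reason)
```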
[13:35] <pragsmike> thanks dimitern!
[13:35] <pragsmike> network config: {"hostname": ["Node with this Hostname already exists."]}
[13:36] <pragsmike> actually two of them
[13:36] <pragsmike> 2016-09-07 03:10:58 WARNING juju.provisioner lxd-broker.go:62 failed to prepare container "0/lxd/0" network config: connection is shut down
[13:36] <pragsmike> 2016-09-07 03:12:22 WARNING juju.provisioner lxd-broker.go:62 failed to prepare container "0/lxd/0" network config: {"hostname": ["Node with this Hostname already exists."]}
[13:36] <pragsmike> but the "connection is shut down" gets spammed in bursts throughout the log, mostly reporting about manifold workers
[13:37] <pragsmike> gotta be afk for a few hours, thanks much for the help guys!
[13:37] <pragsmike> i'll just manually bugger the container bridge for now, i just didn't want to have to do that every time i deploy
[13:38] <dimitern> pragsmike: what are the versions of juju and maas?
[13:38] <rick_h_> voidspace: call?
[13:39] <pragsmike> juju 2.0-beta17-xenial-amd64
[13:39] <voidspace> rick_h_: sorry, omw
[13:39] <pragsmike> maas "version": "2.1.0", "subversion": "alpha2+bzr5321"
[13:44] <dimitern> pragsmike: have you tried stable maas versions?
[13:44] <dimitern> pragsmike: 2.1 is quite new and I personally haven't even tried it - might still have issues
[13:45] <dimitern> pragsmike: but that 'node with this hostname already exists' is troubling - shouldn't happen unless it's a maas 2.1 api regression
[13:47] <rock> Hi. I have a question. We have developed a Juju charm for configuring Cinder to use one of our storage arrays as the backend. How do we redeploy the charm to add more storage arrays to the Cinder configuration without destroying/removing the currently deployed charm? [For example, we don't want to remove the currently configured storage arrays from the Cinder configuration.]
[13:48] <frobware> dimitern: ping
[13:49] <frobware> dimitern: based on pragsmike's PB (http://pastebin.com/6pZuuXXS) I'm not expecting this to be the unconfigured parent device issue
[13:50] <rock> Can we do this using only juju set-config and juju upgrade-charm? Is there any other way to do it?
[13:55] <SimonKLB> what's the go-to smtp server charm (if any exist)?
[13:56] <rick_h_> rock: so if you change the code in the charm itself you'd upgrade-charm. If you want to change configuration it'd be up to how the charm handles that config
[13:57] <rick_h_> rock: it could be accepting config as a resource, a config field entry, or something else
[14:02] <magicaltrout> i don't believe there is one SimonKLB
[14:02] <magicaltrout> but there should be!
[14:07] <SimonKLB> magicaltrout: ah too bad, it would be great yea
[14:09] <SimonKLB> magicaltrout: actually, just found https://jujucharms.com/postfix/ but it looks a bit outdated... :)
[14:10] <magicaltrout> yeah it would be worth dragging that back out of retirement
[14:10] <magicaltrout> I've had an ldap server on my backlog as well
[14:11] <PCdude> magicaltrout heey :)
[14:11] <magicaltrout> uh oh
[14:14] <rock> rick_h_: Hi. I have a question. For example, we have two storage arrays of the same kind, but the two arrays have unique parameter values [SAN IP, SAN password, SAN username, ...]. So once our charm has modified cinder.conf with the same storage backend, cinder has to contact both storage arrays based on the given credential values. How can I do this?
[14:25] <rock> rick_h_: are you there?
[14:27] <rick_h_> rock: I am, sorry in and out with meetings/etc so I might fade back/forth
[14:27] <rick_h_> rock: so this is in reference to the current cinder charm you're specifying the different arrays?
[14:35] <rock> rick_h_: assume that we deployed our charm with some configuration values and added a relation to cinder. Then we want to redeploy the charm to append new configuration values. We don't want to destroy the existing changes.
[14:48] <rock> rick_h_ : Yes. In reference to the current cinder charm, we are specifying different arrays. We want to redeploy the charm again and again, but only the new configuration values have to be appended.
[14:51] <rick_h_> rock: so you don't typically redeploy a charm again and again just to run it with different config/etc. If you want to update the config, it's up to the charm's own mechanisms to update that without a redeploy.
[14:51] <rick_h_> rock: I guess I'll defer to folks that know cinder better. I'm not following the use case here very well.
[14:55] <rock> rick_h_: OK. Thank you very much.
[15:23] <magicaltrout> ah ha
[15:23] <magicaltrout> got it
[15:24] <magicaltrout> this is quite cool admcleod https://ibin.co/2uGe9AHrvXI9.png apache drill over mysql from the relation
[15:24] <magicaltrout> so you could now hook up a combination or mongo, flat files, hdfs, mysql from juju into drill
[15:24] <magicaltrout> and then plug drill into your favourite SQL client
[15:25] <magicaltrout> and do cross datasource analysis
[15:31] <magicaltrout> something like select * from mysql.jujudb, hdfs.jujucluster where x=y
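As a loose stdlib analogy (not Drill syntax), SQLite's ATTACH lets one query join tables living in two separate databases, which is the shape of the cross-datasource query above:

```python
import sqlite3

# Loose analogy for the cross-datasource query sketched above: one SQL
# statement joining tables that live in two separate databases.
main = sqlite3.connect(":memory:")
main.execute("ATTACH DATABASE ':memory:' AS hdfs")
main.execute("CREATE TABLE jujudb (x INTEGER)")
main.execute("CREATE TABLE hdfs.jujucluster (y INTEGER)")
main.execute("INSERT INTO jujudb VALUES (1), (2)")
main.execute("INSERT INTO hdfs.jujucluster VALUES (2), (3)")
rows = main.execute(
    "SELECT x, y FROM jujudb, hdfs.jujucluster WHERE x = y"
).fetchall()
print(rows)  # [(2, 2)]
```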
[16:23] <admcleod> magicaltrout: nice
[16:25] <hatch> rock: as promised there is a new GUI release which resolves the issue you were seeing with subordinates. You can get it by downlading the bz2 here https://github.com/juju/juju-gui/releases/tag/2.1.11 and then running `juju upgrade-gui /path/to/bz2`
[17:19] <PCdude> stokachu: http://askubuntu.com/questions/821804/openstack-with-landscape-install-fails
[17:20] <PCdude> stokachu: JUJU can bootstrap and it succeeds to install ubuntu on all nodes after the bootstrap
[17:20] <PCdude> stokachu: nodes are getting an IP and can go to the internet (DNS resolves works too)
[17:20] <PCdude> what is going wrong? :) I am stuck...
[17:21] <PCdude> I got ur name on recommendation of "kiko"
[18:11] <cloudguru> Wondering if the charm ingestion service is known to be working/operational, or if my charms pushed to my personal namespace in Launchpad have issues
[18:12] <cloudguru> Charm proof results for : https://code.launchpad.net/~brianlbaird/charms/trusty/qslice/trunk
[18:12] <cloudguru> I: metadata name (qslice) must match directory name (trunk) exactly for local deployment.
[18:12] <cloudguru> I: all charms should provide at least one thing
[18:12] <cloudguru> I: missing recommended hook install
[18:12] <cloudguru> I: missing recommended hook start
[18:12] <cloudguru> I: missing recommended hook stop
[18:12] <cloudguru> I: missing recommended hook config-changed
[18:13] <cloudguru> I'm using layer-docker, so the hooks aren't needed as I understand it, since /reactive does the heavy lifting
[18:14] <lazyPower> cloudguru - the I: lines are safe to ignore. It also looks like you ran charm proof against a layer, not against the assembled charm.
[18:15] <cloudguru> k
[18:15] <cloudguru> I'll move up the directory path and push again
[18:15] <lazyPower> cloudguru - well, hang on
[18:15]  * lazyPower is looking
[18:15] <cloudguru> standing by
[18:16] <lazyPower> ok i see what happened here
[18:16] <lazyPower> the charm is building in $PWD
[18:16] <lazyPower> if you look in your layer, you'll see trusty/qslice/--all-the-charms-files-here--/
[18:16] <cloudguru> agreed
[18:17] <lazyPower> so really what you want to do is charm push just the assembled charm to your namespace in the charm store, and you'll want to bzr push just the layer, so you're only VCS-controlling the layer.
[18:17] <cloudguru> that's the result of charm build
[18:17] <lazyPower> one thing i do, is i export JUJU_REPOSITORY in my $HOME/.bashrc, so that charm build will by default place them in that path, instead of in $PWD
[18:18] <marcoceppi> cloudguru: also, ingestion does not work anymore, you'll need to explicitly push it to the charm store
[18:18] <cloudguru> good idea to keep them separated. Saw the export in your super awesome tutorial
[18:18] <lazyPower> cloudguru - also, wrt publishing - https://jujucharms.com/docs/stable/authors-charm-store   -- make sure you're familiar with the charm release model. (or charm publish, depending on which version of the charm command you have installed)
[18:18] <cloudguru> ah man .. humans required ;-)
[18:18] <marcoceppi> cloudguru: https://jujucharms.com/docs/stable/authors-charm-store
[18:19] <cloudguru> thx guys.  trying this again and will check back here as needed.
[18:19] <lazyPower> np, ping if you hit blockers cloudguru  :)
[18:52] <jose> marcoceppi jcastro niemeyer is it possible to have +t here? there's a topic troll in Ubuntu channels
[18:52] <marcoceppi> jose: what do you mean?
[18:52] <jose> channel mode +t
[18:53] <jose> there's a guy changing channel topics to... undesirable things
[18:53] <marcoceppi> jose: sure
[18:53] <jose> thanks :)
[18:54]  * marcoceppi shrugs
[18:55] <PCdude> marcoceppi: uhm, n00b question what does +t mean?
[18:55] <jose> marcoceppi: it's modelocked with chanserv
[18:56] <cloudguru> @lazypower: updated charms pushed for v1.25 on 14.04 trusty
[19:25] <PCdude> lazyPower: http://askubuntu.com/questions/821804/openstack-with-landscape-install-fails
[19:25] <PCdude> any idea? :)
[19:31] <lutostag> any way to tell where we are in the agent-initialization process?
[19:31] <lutostag> (seems to me my local lxd setup is stuck in that step deploying a charm)
[19:34] <rick_h_> lutostag: check juju status --format yaml
[19:35] <rick_h_> and see if theres an error on the machine there
[19:35] <lutostag> rick_h_: no error, just pending
[19:36] <lutostag> it seems like cloud-init/apt-get dist-upgrade may still be running but not doing anything actively
[19:44] <lutostag> oh well at least the juju add-user and register is freaking awesome
[19:45] <bdx> lutostag: totally, right
[19:46] <bdx> marcoceppi: I was a bit tired last night when I filed that bug on interface-http .... I'm thinking it should be a feature request/bug with haproxy instead .... I'll close that bug now
[20:23] <balloons> anastasiamac_, good morning
[20:27] <lutostag> if anyone else runs into the same as me above... culprit is https://bugs.launchpad.net/ubuntu/+source/snapd/+bug/1621229
[20:27] <mup> Bug #1621229: snap upgrade to 2.14.2~16.04 in xenial lxc hangs <snapd (Ubuntu):New> <https://launchpad.net/bugs/1621229>
[20:28] <lutostag> (but should only be until we get a new daily... or a fix... *shrug*)
[20:43]  * pragsmike returns
[20:44] <marcoceppi> balloons: I think it's actually a valid bug though
[20:45] <anastasiamac_> balloons: \o/ 6.45am - how can i help? I'm in the process of sending the brood to school
[20:46] <magicaltrout> lazyPower: you're a man of immense debugging skills.....
[20:47] <magicaltrout> 'PostgreSQLClient' object has no attribute 'host'
[20:47] <magicaltrout> help me out here
[20:47] <magicaltrout> https://git.launchpad.net/interface-pgsql/tree/requires.py
[20:47] <magicaltrout> the stuff at the top even tells me it has hosts
[20:48] <magicaltrout> https://github.com/buggtb/layer-drillbit/blob/master/reactive/drillbit.py#L206 yet that fails
[20:48] <balloons> marcoceppi, what's a valid bug?
[20:51] <marcoceppi> balloons: sorry, meant to ping bdx
[20:51] <marcoceppi> bdx: I think it's actually a valid bug though
[20:51] <balloons> anastasiamac_, just wanted to mention we need to triage ubuntu source juju bugs as well
[20:51] <balloons> anastasiamac_, https://bugs.launchpad.net/ubuntu/+source/juju-core/+bugs?orderby=-datecreated&start=0
[20:51] <bdx> marcoceppi: with interface-http, or haproxy?
[20:52] <marcoceppi> bdx: interface-http
[20:53] <bdx> marcoceppi, magicaltrout: so ..... iterating quite a few deploys over a few different charms this last week, all of which are making use of the pgsql interface ... I hit the same thing as magicaltrout probably 4 or 5 times
[20:54] <magicaltrout> awww
[20:54] <marcoceppi> interesting
[20:54] <magicaltrout> i'm even robbing code from cmars
[20:54] <marcoceppi> stub: ^?
[20:54] <magicaltrout> and it doesn't work
[20:54] <bdx> I would rip it all down, redeploy the same unchanged charms, and it would not hit the error
[20:55] <bdx> magicaltrout: does yours error consistently?
[20:55] <magicaltrout> i believe so
[20:55] <magicaltrout> but i keep hacking stuff around when its broken, so maybe, maybe not :)
[20:55] <marcoceppi> what's interesting is host isn't an autoaccessor
[20:56] <magicaltrout> part of my hatred of scripting languages... where's the stuff that tells you it's wrong before you hit the go button ;)
[20:56] <marcoceppi> magicaltrout: can you try something like this instead
[20:56] <marcoceppi> @when('pgsql.master.available')
[20:56] <marcoceppi> and then
[20:56] <marcoceppi> psql.master.host ?
[20:57] <magicaltrout> yeah 2 secs
[20:57] <marcoceppi> maybe actually psql.master().host
[20:58] <anastasiamac_> balloons: awesome \o/ first time i hear about it :) where do these come from and can I re-target these to juju? oh.. i guess, it must b juju-core coz juju does not exist?...
[20:58] <bdx> https://gist.github.com/jamesbeedy/2c179a7d0a71209f8ccd0183478db9d5
[20:59] <balloons> anastasiamac_, launchpad.net/juju bugs are project bugs. Those are bugs found and filed by end users against the source package of juju for Ubuntu. They might be issues with the packaging, or issues specific to the distro version.
[20:59] <bdx> marcoceppi, magicaltrout: ^ is what I've been using .... seems to work 99% of the time .... that is the code I was randomly hitting that error on
[21:00] <balloons> anastasiamac_, so typically those type of bugs might end up being pushed upstream and linked to an upstream bug, while tracking the ubuntu work. I would say feel free to add juju as affected to any of them you know juju-core needs to fix
[21:01] <magicaltrout> this time around i get: Can't convert 'PostgreSQLClient' object to str implicitly
[21:01] <anastasiamac_> balloons: sounds good ;)
[21:01] <magicaltrout> for  log("marco and his amazing tweak:"+psql+master().host)
[21:01] <balloons> anastasiamac_, and perhaps to make life simple, you could remove the link to the ubuntu source package then. There's not many bugs in there, so what's there can just be packaging or non-juju-core issues
[21:02] <magicaltrout> okay so  lets try bdx's version
[21:03] <magicaltrout> same output
[21:03] <magicaltrout> oh
[21:03] <magicaltrout> i might have ballsed that one up
[21:04] <magicaltrout> . not +
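The '+' vs '.' slip is worth spelling out: concatenating a str and an arbitrary object raises the TypeError magicaltrout quoted. A sketch with a stand-in class (not the real PostgreSQLClient):

```python
# Stand-in class for illustration -- not the real PostgreSQLClient.
class Client:
    host = "10.0.0.5"
    def __str__(self):
        return "<Client {}>".format(self.host)

psql = Client()

caught = False
try:
    "tweak:" + psql        # the '+' slip: str + object raises TypeError
except TypeError:
    caught = True

msg = "tweak:" + psql.host       # '.' not '+': attribute access, then str concat
msg2 = "tweak: {}".format(psql)  # or let format() call str() for you
print(caught, msg, msg2)
```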
[21:05] <marcoceppi> I think the key is to use the psql.master object
[21:05] <magicaltrout> yeah much improved
[21:05] <marcoceppi> which is a ConnectionString class that has host
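A minimal sketch of that idea, with attributes parsed from a libpq-style string (the interface's real ConnectionString class is richer; this is illustration only):

```python
# Minimal sketch of a libpq-style connection string object that exposes
# its fields as attributes, in the spirit of the interface's
# ConnectionString (the real class is richer; illustration only).
class ConnectionString(str):
    def __new__(cls, s):
        self = super().__new__(cls, s)
        for part in s.split():
            key, _, value = part.partition("=")
            setattr(self, key, value)
        return self

master = ConnectionString(
    "host=10.0.0.5 port=5432 dbname=juju_drillbit3 user=juju_drillbit3"
)
print(master.host)    # 10.0.0.5
print(master.dbname)  # juju_drillbit3
```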
[21:05] <magicaltrout> thanks chaps
[21:05] <marcoceppi> cheers bruv
[21:06]  * magicaltrout plonks marcoceppi in the east end of london
[21:08] <PCdude> please can somebody help me with this problem, I tried everything I can think of :)
[21:08] <PCdude> http://askubuntu.com/questions/821804/openstack-with-landscape-install-fails
[21:10] <jcastro> dpb1: got someone handy who can help PCdude? ^^^
[21:11] <PCdude> jcastro: great! :)
[21:16] <magicaltrout> submitted a couple of semi juju related talks to apachecon eu today jcastro
[21:16] <jcastro> excellente!
[21:17] <thumper> o/ jcastro
[21:29] <bdx> weird issue #1000000 - http://paste.ubuntu.com/23147588/
[21:29] <mup> Bug #1000000: For every bug on Launchpad, 67 iPads are sold. <Edubuntu:Triaged> <https://launchpad.net/bugs/1000000>
[21:30] <bdx> xenial containers never get a juju-agent :-(
[21:31] <bdx> they just hang on Waiting for agent initialization to finish
[21:31] <lutostag> magicaltrout: https://paste.ubuntu.com/23147602/
[21:31] <bdx> trusty containers get the juju agent and start just fine
[21:31]  * lutostag what happens when you don't scroll to the end of the conversation
[21:32] <bdx> I can't get any xenial lxd containers to start on beta16, beta17, or tip
[21:33] <lutostag> bdx: https://bugs.launchpad.net/ubuntu/+source/snapd/+bug/1621229
[21:33] <mup> Bug #1621229: snap upgrade to 2.14.2~16.04 in xenial lxc hangs <snapd (Ubuntu):New> <https://launchpad.net/bugs/1621229>
[21:33] <bdx> must not be a juju thing ... all of my logs are clean ... the last command successfully run in my cloud-init.log is 'dist-upgrade' ...
[21:33] <lutostag> not a juju thing
[21:36] <bdx> lutostag: thanks ... you saved me from re-deploying failures for the next hour, staring at my screen feeling like I'm missing some rogue config :-)
[21:40] <bdx> lutostag: http://paste.ubuntu.com/23147632/ - fixes
[21:41] <lutostag> fun :), didn't know those
[21:45] <magicaltrout> bdx: what day you getting into pasadena sunday night?
[22:13] <magicaltrout> aww wtf it rejects the connection
[22:15] <bdx> magicaltrout: ya mon! yourself?
[22:16] <magicaltrout> saturday
[22:16] <magicaltrout> got to de-jetlag
[22:17] <magicaltrout> marcoceppi: i'm assuming I should be able to talk to postgres without changing the security in the postgres charm?
[22:21] <bdx> magicaltrout: postgres only adds entries to pg_hba.conf for the ip(s) of your related units
[22:22] <bdx> if you want to connect from another source you have to feed it the extra-pg-auth config
[22:22] <magicaltrout> yeah i can see the entry in the config
[22:22] <magicaltrout> which is weird
[22:22] <bdx> what part?
[22:22] <magicaltrout> so its adding my relation, even if i install postgres-client on the other end
[22:22] <magicaltrout> that gets a rejected connection as well
[22:23] <bdx> due to what?
[22:23] <magicaltrout> FATAL:  pg_hba.conf rejects connection for host "172.31.4.210", user "juju_drillbit3", database "juju_drillbit3", SSL off
[22:24] <bdx> what is in your pg_hba.conf
[22:24] <bdx> oooh, I think I hit this the other day too ... are you requesting multiple dbs?
[22:25] <magicaltrout> not that i'm aware of :)
[22:25] <magicaltrout> http://pastebin.com/5rBMB1Nx
[22:25] <magicaltrout> thats the pg_hba
[22:25] <bdx> if you are, postgres charm will generate a new password for the juju_<service-name> user for each extra db you request
[22:26] <bdx> leaving all_databases_requested[:-1] to have the wrong password
[22:27] <magicaltrout> aah i see
[22:27] <magicaltrout> specify a database and it works
[22:27] <bdx> lol
[22:28] <magicaltrout> doesn't work in jdbc world though
[22:28] <magicaltrout> arse
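For the pg_hba.conf rejection above, the mental model is first-match-wins over (database, user, address) rules. A much-simplified stdlib sketch (real pg_hba matching also handles auth methods, netmask forms, SSL host types, etc.; the entries here are hypothetical):

```python
import ipaddress

# Hypothetical rules in pg_hba order: first matching rule wins.
# (database, user, network, action)
rules = [
    ("juju_drillbit3", "juju_drillbit3", "172.31.4.0/24", "md5"),
    ("all", "all", "0.0.0.0/0", "reject"),
]

def check(database, user, host):
    """Return the action of the first rule matching db, user, and address."""
    addr = ipaddress.ip_address(host)
    for db, usr, net, action in rules:
        if db in ("all", database) and usr in ("all", user) \
                and addr in ipaddress.ip_network(net):
            return action
    return "reject"  # no rule matched: connection refused

print(check("juju_drillbit3", "juju_drillbit3", "172.31.4.210"))  # md5
print(check("juju_drillbit3", "juju_drillbit3", "10.0.0.9"))      # reject
```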
[22:46] <magicaltrout> oh postgres
[22:46] <magicaltrout> some days you make me so sad
[22:59] <magicaltrout> aaah lol
[23:00]  * magicaltrout figured part of it out
[23:05] <bdx> what was it
[23:05] <bdx> I'm about to start using jdbc too
[23:07] <magicaltrout> mostly a user caused weird zookeeper issue :)
[23:07] <magicaltrout> but there may be an SSL issue as well, give me 5 mins and I'll let you know
[23:21] <magicaltrout> ooh
[23:21] <magicaltrout> i've just been asked to  demo some juju stuff for NASA and Darpa when I'm at JPL in a couple of weeks
[23:22] <lazyPower> magicaltrout nice!
[23:23] <magicaltrout> and other container orchestration tools."
[23:23] <magicaltrout> paste fail
[23:24] <magicaltrout> "I also hear from Paul that you are a 'juju charm' purveyor.  I've never used this but it certainly looks interesting.  I'd like to hear how juju compares to Docker Compose and other container orchestration tools."
[23:24] <magicaltrout> thats the brief ;)
[23:25] <magicaltrout> it is... brief
[23:25] <lazyPower> indeed
[23:25] <lazyPower> and we're so not a "container orchestrator"
[23:25] <magicaltrout> lol yeah
[23:25] <lazyPower> its a byproduct of how awesome we are though
[23:25] <lazyPower> so i can see why thats such a popular misconception
[23:25] <magicaltrout> yup