lazyPower | https://code.launchpad.net/~lazypower/charm-helpers/add-workload-version/+merge/305062 -- if anyone has some spare cycles to look over a new feature for le charm-helpers for me | 00:31 |
---|---|---|
=== natefinch-afk is now known as natefinch | ||
pragsmike | After cleaning up my maas networks, I still have the same problem: containers end up on lxdbr0 with unreachable addresses | 03:10 |
* pragsmike hates computers | 03:12 | |
pragsmike | UNIT WORKLOAD AGENT MACHINE PUBLIC-ADDRESS PORTS MESSAGE | 03:17 |
pragsmike | mariadb/0 unknown idle 0/lxd/0 10.0.0.26 | 03:17 |
pragsmike | vanilla/0 maintenance idle 0 192.168.57.101 80/tcp Starting Apache | 03:17 |
pragsmike | MACHINE STATE DNS INS-ID SERIES AZ | 03:17 |
pragsmike | 0 started 192.168.57.101 tamdhn trusty default | 03:17 |
pragsmike | 0/lxd/0 started 10.0.0.26 juju-b78821-0-lxd-0 trusty | 03:17 |
pragsmike | vanilla will never be able to connect to mariadb | 03:18 |
=== menn0 is now known as menn0-afk | ||
=== frankban|afk is now known as frankban | ||
magicaltrout | alrighty, NASA and the other company are having a big fat row... which leaves me a nice gap to finish off my charms for next week \o/ | 08:28 |
=== zz_CyberJacob is now known as CyberJacob | ||
magicaltrout | alright 3 talks submitted to apachecon eu | 09:04 |
magicaltrout | better stop before they all get accepted like usual and I have a mountain of stuff to do | 09:04 |
rock | Hi. I'm facing an issue while deploying juju-gui in an internal openstack environment [ubuntu OpenStack Autopilot setup]. Issue details pasted here: http://paste.openstack.org/show/567387/. Can anyone suggest a solution? | 09:34 |
magicaltrout | well I have no idea how openstack works, but it looks like you need to define a pool of servers in a region called region1 :) | 09:36 |
rock | magicaltrout: OK. Thank you. | 10:00 |
rock | But I don't understand one thing: to add a single juju charm to the existing ubuntu autopilot setup, do we need to add servers to the MAAS server zone [cloud region]? | 10:01 |
magicaltrout | rock: surely region1 is post openstack bootstrap | 10:06 |
magicaltrout | doesn't openstack have regions defined like AWS does? | 10:06 |
magicaltrout | can i test a charm locally that uses resources without pushing stuff to the charmstore? | 11:30 |
magicaltrout | ah juju attach it seems | 11:35 |
rick_h_ | magicaltrout: yep | 11:35 |
magicaltrout | i assume we'll be seeing your well groomed facial hair in pasadena rick_h_ ? | 11:45 |
rick_h_ | magicaltrout: hah probably | 11:48 |
rick_h_ | i'll look for my pomade | 11:48 |
magicaltrout | lol | 11:48 |
magicaltrout | ooh my first working resource enabled charm | 11:51 |
magicaltrout | nice | 11:51 |
pragsmike | any insights would be appreciated, as to why my containers aren't reachable from any other machines: | 12:00 |
pragsmike | MACHINE STATE DNS INS-ID SERIES AZ | 12:00 |
pragsmike | 0 started 192.168.57.101 tamdhn trusty default | 12:00 |
pragsmike | 0/lxd/0 started 10.0.0.26 juju-b78821-0-lxd-0 trusty | 12:00 |
pragsmike | isn't the container supposed to be on the same subnet as machine 0? | 12:00 |
magicaltrout | I like your style pragsmike you've exhausted the late in the day folks so now try the early risers? ;) | 12:04 |
pragsmike | well i'm pretty exhausted :/ | 12:04 |
magicaltrout | well rick_h_ may be of use | 12:04 |
magicaltrout | pragsmike: I would also backup your irc questions by dumping them on the juju mailing list | 12:05 |
magicaltrout | its not as instant but you get a wider response base | 12:05 |
pragsmike | good point | 12:05 |
pragsmike | and it would keep me from spamming irc so much | 12:05 |
magicaltrout | doesn't hurt to spam every channel going ;) | 12:05 |
pragsmike | hey i've kept it off #maas | 12:06 |
magicaltrout | if i were in your shoes pragsmike i'd spam #juju, the juju mailing list, and askubuntu | 12:08 |
magicaltrout | until someone responds :) | 12:08 |
cargill | hi, is there an offline readable edition of the juju user/developer guides? | 12:09 |
rick_h_ | pragsmike: the team's working on a few bugs around that atm. /me goes to look up bug #'s | 12:26 |
rick_h_ | cargill: no, but it's in github so you can fork it and have a local copy. https://github.com/juju/docs | 12:27 |
cargill | rick_h_: brilliant, thanks | 12:29 |
magicaltrout | cory_fu: ping | 12:42 |
admcleod | pragsmike: is your lxd bridged to the host interface? | 12:45 |
admcleod | magicaltrout: its not quite 6am there yet | 12:45 |
pragsmike | no it isn't, that's the problem | 12:45 |
admcleod | pragsmike: tried this? https://insights.ubuntu.com/2016/04/07/lxd-networking-lxdbr0-explained/ | 12:46 |
pragsmike | checking | 12:46 |
magicaltrout | admcleod: so?! | 12:47 |
rick_h_ | frobware: didn't we have a bug for this yesterday? I'm not seeing which one it is ^ | 12:47 |
admcleod | magicaltrout: just sayin | 12:47 |
rick_h_ | frobware: where containers are getting internal private IPs vs ones on the host maas network via dhcp? | 12:47 |
pragsmike | that's a good description of the problem | 12:48 |
rick_h_ | natefinch: morning, did you get a review of the bundle branch up? I'm not seeing it and would like to be able to ship that for the beta/before it gets out of sync with trunk again please | 12:48 |
pragsmike | I wonder if it's related to https://lists.ubuntu.com/archives/juju/2016-September/007801.html | 12:48 |
magicaltrout | you'll do though admcleod cause i just need some eyes to tell me what moronic thing I'm doing | 12:49 |
=== zeus is now known as Guest49408 | ||
magicaltrout | https://github.com/buggtb/layer-drillbit/blob/master/metadata.yaml#L22 mysql relation | 12:50 |
magicaltrout | https://github.com/buggtb/layer-drillbit/blob/master/reactive/drillbit.py#L183 <- reactive code for the relation | 12:50 |
magicaltrout | when I juju deploy mysql and relate the charms | 12:50 |
magicaltrout | that method is never run | 12:50 |
=== Guest49408 is now known as zeus` | ||
admcleod | magicaltrout: well the mysql interface needs to set the available state | 12:52 |
pragsmike | magicaltrout thank you for pointing me at the mailing list, i think that message i linked is in fact my problem | 12:52 |
=== zeus` is now known as zeus | ||
pragsmike | though i do have the interfaces assigned to subnets and it still doesn't work | 12:53 |
admcleod | magicaltrout: and you dont appear to be using the mysql interface | 12:54 |
magicaltrout | admcleod: yeah i get that but I look at the interface and that is set if there is a valid connection_string | 12:54 |
magicaltrout | https://github.com/johnsca/juju-relation-mysql/blob/master/requires.py#L43 | 12:54 |
admcleod | magicaltrout: right... | 12:54 |
admcleod | magicaltrout: but mysql isnt in layer.yaml | 12:55 |
admcleod | magicaltrout: or does something else include it? | 12:55 |
magicaltrout | see | 12:56 |
magicaltrout | told you it was moronic | 12:56 |
magicaltrout | ta | 12:56 |
admcleod | cool, no worries | 12:56 |
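For anyone hitting the same wall: the handler above never fired because the mysql interface layer was never pulled into the build. A minimal sketch of the fix, assuming the charm builds on layer:basic, that metadata.yaml names the relation `database`, and that the interface's auto-accessors are `host()` and `database()` (all assumptions; check the linked interface for the real names):

```python
# layer.yaml needs the piece admcleod spotted was missing, e.g.:
#   includes: ['layer:basic', 'interface:mysql']

# reactive/drillbit.py (hypothetical handler; "database" is an assumed
# relation name, not necessarily the one in the linked repo)
from charms.reactive import when
from charmhelpers.core.hookenv import log


@when('database.available')
def setup_mysql_storage(mysql):
    # The interface only sets <relation>.available once it has a complete
    # connection (host, user, password, database), and it can only do that
    # if it is included in layer.yaml; otherwise this handler silently
    # never runs.
    log('mysql ready at {} db={}'.format(mysql.host(), mysql.database()))
```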
=== freyes__ is now known as freyes | ||
frobware | rick_h_: didn't get to that - could be https://bugs.launchpad.net/juju/+bug/1566791 | 12:58 |
mup | Bug #1566791: VLANs on an unconfigured parent device error with "cannot set link-layer device addresses of machine "0": invalid address <2.0> <4010> <cpec> <network> <juju:In Progress by dimitern> <https://launchpad.net/bugs/1566791> | 12:58 |
frobware | pragsmike: ^^ | 12:59 |
frobware | pragsmike: could do with some quick context; this on MAAS? | 12:59 |
pragsmike | yes, juju2 beta17/maas2.1 latest alpha | 13:00 |
natefinch | rick_h_: I'm sorry, I totally flaked on that yesterday. I'll look at it right now. | 13:00 |
frobware | pragsmike: on the machine hosting the container can you do: lxc config show <container-name> | 13:00 |
rick_h_ | natefinch: ty | 13:00 |
pragsmike | config: | 13:01 |
pragsmike | core.proxy_ignore_hosts: 127.0.0.1,192.168.57.100,::1 | 13:01 |
pragsmike | doh | 13:01 |
frobware | pragsmike: that mail archive you posted to is the same bug I referenced | 13:01 |
rick_h_ | pragsmike: ok, so we're working on getting that fixed up for the next beta hopefully. | 13:02 |
natefinch | rick_h_: do you have a link? I'm not sure where I'm supposed to be looking | 13:02 |
rick_h_ | natefinch: look in the kanban card please | 13:02 |
pragsmike | http://pastebin.com/9LagVLyK | 13:02 |
rick_h_ | natefinch: it's why I ask folks to make sure to link them up. | 13:02 |
pragsmike | cool, good to know | 13:02 |
natefinch | rick_h_: oh, right | 13:03 |
pragsmike | sorry i'm still new enough that i don't always recognize what i'm looking at | 13:03 |
rick_h_ | frobware: call? | 13:03 |
rick_h_ | pragsmike: all good, I'm not new and that mentioned vlans and such so I wasn't sure | 13:04 |
frobware | pragsmike: from the maas node, can you PB /etc/network/interfaces | 13:05 |
pragsmike | auto eth0 | 13:06 |
pragsmike | iface eth0 inet dhcp | 13:06 |
pragsmike | besides the loopback, and that's in /etc/network/interfaces.d/50-cloud-init.cfg | 13:06 |
pragsmike | wait, does 'maas node' mean the maas controller, or the container host? | 13:07 |
pragsmike | i'm guessing the latter is more interesting, standby | 13:07 |
pragsmike | http://pastebin.com/6pZuuXXS | 13:08 |
pragsmike | the container host's nic does get put on a br- bridge | 13:08 |
pragsmike | it's just that the containers still end up using the wrong bridge (lxdbr0) | 13:09 |
jacekn | hello. I am trying to get the local provider up and running but it seems broken in juju 1.25: https://bugs.launchpad.net/juju-core/+bug/1618963 and also in 2.0: https://bugs.launchpad.net/juju/+bug/1618948 - does anybody know a workaround to get the local provider up and running? | 13:19 |
mup | Bug #1618948: Can't bootstrap localhost cloud <juju:New> <https://launchpad.net/bugs/1618948> | 13:19 |
magicaltrout | jacekn: isn't your 2.0 install ancient? | 13:19 |
magicaltrout | ooh | 13:20 |
magicaltrout | tools is ancient | 13:20 |
jacekn | magicaltrout: it's the latest one in xenial: 2.0-beta15-xenial-amd64 | 13:20 |
jacekn | yeah | 13:20 |
magicaltrout | can you sync-tools or something? | 13:20 |
magicaltrout | they've changed the term | 13:20 |
jacekn | nope that does not work http://pastebin.ubuntu.com/23145770/ | 13:22 |
magicaltrout | i just tried the same jacekn on 2.0 and it bootstraps for me | 13:22 |
magicaltrout | yeah this is the thing I don't get | 13:23 |
magicaltrout | in the latest beta they removed --upload-tools | 13:23 |
magicaltrout | but then if you need to upload them to boot, how do you do it? | 13:23 |
magicaltrout | or maybe thats not required I never quite understood | 13:23 |
magicaltrout | anyway i'm on beta16 | 13:24 |
magicaltrout | and it booted | 13:24 |
jacekn | magicaltrout: where did you get beta16 from? Xenial has beta15 | 13:25 |
dimitern | pragsmike: if you grep for `failed to prepare container ".*" network config` in /var/log/juju/machine-0.log on the controller and find it, that will be the reason why you're seeing lxds coming up with a single NIC bridged to lxdbr0 | 13:26 |
magicaltrout | ppa-dev or something | 13:26 |
dimitern | pragsmike: this is because we couldn't finish allocation for a multi-NIC config for the lxd and resorted to using a "fallback" config (eth0->lxdbr0, dhcp); that's also what we're fixing currently | 13:27 |
pragsmike | thanks dimitern! | 13:35 |
pragsmike | network config: {"hostname": ["Node with this Hostname already exists."]} | 13:35 |
pragsmike | actually two of them | 13:36 |
pragsmike | 2016-09-07 03:10:58 WARNING juju.provisioner lxd-broker.go:62 failed to prepare container "0/lxd/0" network config: connection is shut down | 13:36 |
pragsmike | 2016-09-07 03:12:22 WARNING juju.provisioner lxd-broker.go:62 failed to prepare container "0/lxd/0" network config: {"hostname": ["Node with this Hostname already exists."]} | 13:36 |
pragsmike | but the "connection is shut down" gets spammed in bursts throughout the log, mostly reporting about manifold workers | 13:36 |
pragsmike | gotta be afk for a few hours, thanks much for the help guys! | 13:37 |
pragsmike | i'll just manually bugger the container bridge for now, i just didn't want to have to do that every time i deploy | 13:37 |
dimitern | pragsmike: what are the versions of juju and maas? | 13:38 |
rick_h_ | voidspace: call? | 13:38 |
pragsmike | juju 2.0-beta17-xenial-amd64 | 13:39 |
voidspace | rick_h_: sorry, omw | 13:39 |
pragsmike | maas "version": "2.1.0", "subversion": "alpha2+bzr5321" | 13:39 |
dimitern | pragsmike: have you tried stable maas versions? | 13:44 |
dimitern | pragsmike: 2.1 is quite new and I personally haven't even tried it - might still have issues | 13:44 |
dimitern | pragsmike: but that 'node with this hostname already exists' is troubling - shouldn't happen unless it's a maas 2.1 api regression | 13:45 |
rock | Hi. I have a question. We have developed a JUJU charm for configuring cinder to use one of our storage arrays as the backend. How do we redeploy the charm to configure cinder with additional storage arrays, without destroying/removing the currently deployed charm? [For example, we don't want to remove the currently configured storage arrays from the cinder configuration.] | 13:47 |
frobware | dimitern: ping | 13:48 |
frobware | dimitern: based on pragsmike's PB (http://pastebin.com/6pZuuXXS) I'm not expecting this to be the unconfigured parent device issue | 13:49 |
rock | can we do this using only juju set-config and juju upgrade-charm? Is there any other way to do it? | 13:50 |
SimonKLB | what's the go-to smtp server charm (if any exist)? | 13:55 |
rick_h_ | rock: so if you change the code in the charm itself you'd upgrade-charm. If you want to change configuration it'd be up to how the charm handles that config | 13:56 |
rick_h_ | rock: it could be accepting config as a resource, a config field entry, or something else | 13:57 |
magicaltrout | i don't believe there is one SimonKLB | 14:02 |
magicaltrout | but there should be! | 14:02 |
SimonKLB | magicaltrout: ah too bad, it would be great yea | 14:07 |
SimonKLB | magicaltrout: actually, just found https://jujucharms.com/postfix/ but it looks a bit outdated... :) | 14:09 |
magicaltrout | yeah it would be worth dragging that back out of retirement | 14:10 |
magicaltrout | I've had an ldap server on my backlog as well | 14:10 |
PCdude | magicaltrout heey :) | 14:11 |
magicaltrout | uh oh | 14:11 |
rock | rich_h_: Hi. I have a question. For example, we have two storage arrays of the same kind, but each has unique parameter values [san IP, san pass, san username....]. So once our charm has modified the cinder config with the same storage backend, cinder has to contact both storage arrays using their respective credentials. How can I do this? | 14:14 |
rock | rick_h_: are you there? | 14:25 |
rick_h_ | rock: I am, sorry in and out with meetings/etc so I might fade back/forth | 14:27 |
rick_h_ | rock: so this is in reference to the current cinder charm you're specifying the different arrays? | 14:27 |
rock | rick_h_: assume we deployed our charm with some configuration values and added a relation to cinder. Now we want to redeploy the charm so that the new configuration values are appended. We don't want to destroy the existing changes. | 14:35 |
rock | rick_h_ : Yes. In reference to the current cinder charm, we are specifying different arrays. We want to redeploy the charm again and again, but only the new configuration values should be appended. | 14:48 |
rick_h_ | rock: so you don't typically redeploy a charm again and again just to run it with different config/etc. If you want to update the config, it's up to the charm's own mechanisms to update that without a redeploy. | 14:51 |
rick_h_ | rock: I guess I'll defer to folks that know cinder better. I'm not following the use case here very well. | 14:51 |
rock | rich_h_: OK. Thank you very much. | 14:55 |
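A sketch of the pattern rick_h_ is describing, using the `config.changed.<option>` state that layer:basic sets when a config value changes; the `storage-arrays` option and the renderer are hypothetical stand-ins for whatever the cinder backend charm actually exposes:

```python
from charms.reactive import when
from charmhelpers.core.hookenv import config


@when('config.changed.storage-arrays')
def update_storage_backends():
    # `juju set-config <charm> storage-arrays="..."` re-runs this handler,
    # so adding an array is a config update, not a redeploy: the charm
    # re-renders its cinder configuration from the full current list.
    arrays = config()['storage-arrays'].split(',')
    render_cinder_backends(arrays)  # hypothetical helper in the charm
```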
magicaltrout | ah ha | 15:23 |
magicaltrout | got it | 15:23 |
magicaltrout | this is quite cool admcleod https://ibin.co/2uGe9AHrvXI9.png apache drill over mysql from the relation | 15:24 |
magicaltrout | so you could now hook up a combination of mongo, flat files, hdfs, mysql from juju into drill | 15:24 |
magicaltrout | and then plug drill into your favourite SQL client | 15:24 |
magicaltrout | and do cross datasource analysis | 15:25 |
magicaltrout | something like select * from mysql.jujudb, hdfs.jujucluster where x=y | 15:31 |
=== petevg_afk is now known as petevg | ||
admcleod | magicaltrout: nice | 16:23 |
hatch | rock: as promised there is a new GUI release which resolves the issue you were seeing with subordinates. You can get it by downlading the bz2 here https://github.com/juju/juju-gui/releases/tag/2.1.11 and then running `juju upgrade-gui /path/to/bz2` | 16:25 |
=== frankban is now known as frankban|afk | ||
PCdude | stokachu: http://askubuntu.com/questions/821804/openstack-with-landscape-install-fails | 17:19 |
PCdude | stokachu: JUJU can bootstrap and it succeeds to install ubuntu on all nodes after the bootstrap | 17:20 |
PCdude | stokachu: nodes are getting an IP and can go to the internet (DNS resolves works too) | 17:20 |
PCdude | what is going wrong? :) I am stuck... | 17:20 |
PCdude | I got ur name on recommendation of "kiko" | 17:21 |
=== alexisb is now known as alexisb-afk | ||
cloudguru | Wondering if the charm ingestion service is known to be working / operational, or if my charms pushed to my personal namespace in launchpad have issues | 18:11 |
cloudguru | Charm proof results for : https://code.launchpad.net/~brianlbaird/charms/trusty/qslice/trunk | 18:12 |
cloudguru | I: metadata name (qslice) must match directory name (trunk) exactly for local deployment. | 18:12 |
cloudguru | I: all charms should provide at least one thing | 18:12 |
cloudguru | I: missing recommended hook install | 18:12 |
cloudguru | I: missing recommended hook start | 18:12 |
cloudguru | I: missing recommended hook stop | 18:12 |
cloudguru | I: missing recommended hook config-changed | 18:12 |
cloudguru | I'm using layer-docker, so as I understand it the hooks aren't needed since /reactive does the heavy lifting | 18:13 |
lazyPower | cloudguru - the I's are safe to ignore. it also looks like you ran charm proof against a layer, not against the assembled charm. | 18:14 |
cloudguru | k | 18:15 |
cloudguru | I'll move up the directory path and push again | 18:15 |
lazyPower | cloudguru - well, hang on | 18:15 |
* lazyPower is looking | 18:15 | |
cloudguru | standing by | 18:15 |
lazyPower | ok i see what happened here | 18:16 |
lazyPower | the charm is building in $PWD | 18:16 |
lazyPower | if you look in your layer, you'll see trusty/qslice/--all-the-charms-files-here--/ | 18:16 |
cloudguru | agreed | 18:16 |
lazyPower | so really what you want to do is charm push just the assembled charm to your namespace in the charm store, and bzr push just the layer, so you're only VCS-controlling the layer. | 18:17 |
cloudguru | that's the result of charm build | 18:17 |
lazyPower | one thing i do, is i export JUJU_REPOSITORY in my $HOME/.bashrc, so that charm build will by default place them in that path, instead of in $PWD | 18:17 |
marcoceppi | cloudguru: also, ingestion does not work anymore, you'll need to explicitly push it to the charm store | 18:18 |
cloudguru | good idea to keep them separated. Saw the export in your super awesome tutorial | 18:18 |
lazyPower | cloudguru - also, wrt publishing - https://jujucharms.com/docs/stable/authors-charm-store -- make sure you're familiar with the charm release model. (or charm publish, depending on which version of the charm command you have installed) | 18:18 |
cloudguru | ah man .. humans required ;-) | 18:18 |
marcoceppi | cloudguru: https://jujucharms.com/docs/stable/authors-charm-store | 18:18 |
cloudguru | thx guys. trying this again and will check back here as needed. | 18:19 |
lazyPower | np, ping if you hit blockers cloudguru :) | 18:19 |
jose | marcoceppi jcastro niemeyer is it possible to have +t here? there's a topic troll in Ubuntu channels | 18:52 |
marcoceppi | jose: what do you mean? | 18:52 |
jose | channel mode +t | 18:52 |
jose | there's a guy changing channel topics to... undesirable things | 18:53 |
marcoceppi | jose: sure | 18:53 |
jose | thanks :) | 18:53 |
* marcoceppi shrugs | 18:54 | |
PCdude | marcoceppi: uhm, n00b question what does +t mean? | 18:55 |
jose | marcoceppi: it's modelocked with chanserv | 18:55 |
cloudguru | @lazypower: updated charms pushed for v1.25 on 14.04 trusty | 18:56 |
PCdude | lazyPower: http://askubuntu.com/questions/821804/openstack-with-landscape-install-fails | 19:25 |
PCdude | any idea? :) | 19:25 |
=== alexisb-afk is now known as alexisb | ||
lutostag | any way to tell where we are in the agent-initialization process? | 19:31 |
lutostag | (seems to me my local lxd setup is stuck in that step deploying a charm) | 19:31 |
rick_h_ | lutostag: check juju status --format yaml | 19:34 |
rick_h_ | and see if theres an error on the machine there | 19:35 |
lutostag | rick_h_: no error, just pending | 19:35 |
lutostag | it seems like cloud-init/apt-get dist-upgrade may still be running but not doing anything actively | 19:36 |
lutostag | oh well at least the juju add-user and register is freaking awesome | 19:44 |
bdx | lutostag: totally, right | 19:45 |
bdx | marcoceppi: I was a bit tired last night when I filed that bug on interface-http .... I'm thinking it should be a feature request/bug with haproxy instead .... I'll close that bug now | 19:46 |
balloons | anastasiamac_, good morning | 20:23 |
lutostag | if anyone else runs into the same as me above... culprit is https://bugs.launchpad.net/ubuntu/+source/snapd/+bug/1621229 | 20:27 |
mup | Bug #1621229: snap upgrade to 2.14.2~16.04 in xenial lxc hangs <snapd (Ubuntu):New> <https://launchpad.net/bugs/1621229> | 20:27 |
lutostag | (but should only be until we get a new daily... or a fix... *shrug*) | 20:28 |
* pragsmike returns | 20:43 | |
marcoceppi | balloons: I think it's actually a valid bug though | 20:44 |
anastasiamac_ | balloons: \o/ 6.45am - how can i help? m in the process of sending brood to school | 20:45 |
magicaltrout | lazyPower: you're a man of immense debugging skills..... | 20:46 |
magicaltrout | 'PostgreSQLClient' object has no attribute 'host' | 20:47 |
magicaltrout | help me out here | 20:47 |
magicaltrout | https://git.launchpad.net/interface-pgsql/tree/requires.py | 20:47 |
magicaltrout | the stuff at the top even tells me it has hosts | 20:47 |
magicaltrout | https://github.com/buggtb/layer-drillbit/blob/master/reactive/drillbit.py#L206 yet that fails | 20:48 |
balloons | marcoceppi, what's a valid bug? | 20:48 |
marcoceppi | balloons: sorry, meant to ping bdx | 20:51 |
marcoceppi | bdx: I think it's actually a valid bug though | 20:51 |
balloons | anastasiamac_, just wanted to mention we need to triage ubuntu source juju bugs as well | 20:51 |
balloons | anastasiamac_, https://bugs.launchpad.net/ubuntu/+source/juju-core/+bugs?orderby=-datecreated&start=0 | 20:51 |
bdx | marcoceppi: with interface-http, or haproxy? | 20:51 |
marcoceppi | bdx: interface-http | 20:52 |
bdx | marcoceppi, magicaltrout: so ..... iterating quite a few deploys over a few different charms this last week, all of which are making use of the pgsql interface ... I hit the same thing as magicaltrout probably 4 or 5 times | 20:53 |
magicaltrout | awww | 20:54 |
marcoceppi | interesting | 20:54 |
magicaltrout | i'm even robbing code from cmars | 20:54 |
marcoceppi | stub: ^? | 20:54 |
magicaltrout | and it doesn't work | 20:54 |
bdx | I would rip it all down, redeploy the same unchanged charms, and it would not hit the error | 20:54 |
bdx | magicaltrout: does yours error consistently? | 20:55 |
magicaltrout | i believe so | 20:55 |
magicaltrout | but i keep hacking stuff around when its broken, so maybe, maybe not :) | 20:55 |
marcoceppi | what's interesting is host isn't an autoaccessor | 20:55 |
magicaltrout | part of my hatred of scripting languages... where's the stuff that tells you it's wrong before you hit the go button ;) | 20:56 |
marcoceppi | magicaltrout: can you try something like this instead | 20:56 |
marcoceppi | @when('pgsql.master.available') | 20:56 |
marcoceppi | and then | 20:56 |
marcoceppi | psql.master.host ? | 20:56 |
magicaltrout | yeah 2 secs | 20:57 |
marcoceppi | maybe actually psql.master().host | 20:57 |
anastasiamac_ | balloons: awesome \o/ first time i hear about it :) where do these come from and can I re-target these to juju? oh.. i guess, it must b juju-core coz juju does not exist?... | 20:58 |
bdx | https://gist.github.com/jamesbeedy/2c179a7d0a71209f8ccd0183478db9d5 | 20:58 |
balloons | anastasiamac_, launchpad.net/juju bugs are project bugs. those bugs are bugs found and filed by end users against the source package of juju for ubuntu. They might be issues with the packaging, or issues specific to the distro version. | 20:59 |
bdx | marcoceppi, magicaltrout: ^ is what I've been using .... seems to work 99% of the time .... that is the code I was randomly hitting that error on | 20:59 |
balloons | anastasiamac_, so typically those type of bugs might end up being pushed upstream and linked to an upstream bug, while tracking the ubuntu work. I would say feel free to add juju as affected to any of them you know juju-core needs to fix | 21:00 |
magicaltrout | this time around i get: Can't convert 'PostgreSQLClient' object to str implicitly | 21:01 |
anastasiamac_ | balloons: sounds good ;) | 21:01 |
magicaltrout | for log("marco and his amazing tweak:"+psql+master().host) | 21:01 |
balloons | anastasiamac_, and perhaps to make life simple, you could remove the link to the ubuntu source package then. There's not many bugs in there, so what's there can just be packaging or non-juju-core issues | 21:01 |
magicaltrout | okay so lets try bdx's version | 21:02 |
magicaltrout | same output | 21:03 |
magicaltrout | oh | 21:03 |
magicaltrout | i might have ballsed that one up | 21:03 |
magicaltrout | . not + | 21:04 |
marcoceppi | I think the key is to use the psql.master object | 21:05 |
magicaltrout | yeah much improved | 21:05 |
marcoceppi | which is a ConnectionString class that has host | 21:05 |
magicaltrout | thanks chaps | 21:05 |
marcoceppi | cheers bruv | 21:05 |
* magicaltrout plonks marcoceppi in the east end of london | 21:06 | |
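For the record, the shape that worked is roughly the following (a sketch: `pgsql` is the assumed relation name from metadata.yaml, and the attribute names are from the ConnectionString class in the interface linked above):

```python
from charms.reactive import when
from charmhelpers.core.hookenv import log


@when('pgsql.master.available')
def configure_postgres(pgsql):
    # pgsql.master is a ConnectionString object rather than a plain str,
    # which is why string concatenation on the client object blew up above;
    # read individual fields off it instead.
    master = pgsql.master
    log('postgres master at {}:{} db={}'.format(
        master.host, master.port, master.dbname))
```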
PCdude | please can somebody help me with this problem, I tried everything I can think of :) | 21:08 |
PCdude | http://askubuntu.com/questions/821804/openstack-with-landscape-install-fails | 21:08 |
jcastro | dpb1: got someone handy who can help PCdude? ^^^ | 21:10 |
PCdude | jcastro: great! :) | 21:11 |
magicaltrout | submitted a couple of semi juju related talks to apachecon eu today jcastro | 21:16 |
jcastro | excellente! | 21:16 |
thumper | o/ jcastro | 21:17 |
bdx | wierd issue #1000000 - http://paste.ubuntu.com/23147588/ | 21:29 |
mup | Bug #1000000: For every bug on Launchpad, 67 iPads are sold. <Edubuntu:Triaged> <https://launchpad.net/bugs/1000000> | 21:29 |
bdx | weird* | 21:30 |
bdx | xenial containers never get a juju-agent :-( | 21:30 |
bdx | they just hang on Waiting for agent initialization to finish | 21:31 |
lutostag | magicaltrout: https://paste.ubuntu.com/23147602/ | 21:31 |
bdx | trusty containers get the juju agent and start just fine | 21:31 |
* lutostag what happens when you don't scroll to the end of the conversation | 21:31 | |
bdx | I can't get any xenial lxd containers to start on beta16, beta17, or tip | 21:32 |
lutostag | bdx: https://bugs.launchpad.net/ubuntu/+source/snapd/+bug/1621229 | 21:33 |
mup | Bug #1621229: snap upgrade to 2.14.2~16.04 in xenial lxc hangs <snapd (Ubuntu):New> <https://launchpad.net/bugs/1621229> | 21:33 |
bdx | must not be a juju thing ... all of my logs are clean ... the last command that successfully ran in my cloud-init.log is 'dist-upgrade' ... | 21:33 |
lutostag | not a juju thing | 21:33 |
bdx | lutostag: thanks ... you saved me from re-deploying failures for the next hour, staring at my screen feeling like I'm missing some rogue config :-) | 21:36 |
bdx | lutostag: http://paste.ubuntu.com/23147632/ - fixes | 21:40 |
lutostag | fun :), didn't know those | 21:41 |
magicaltrout | bdx: what day you getting into pasadena sunday night? | 21:45 |
magicaltrout | aww wtf i rejects the connection | 22:13 |
magicaltrout | s/i/it | 22:14 |
bdx | magicaltrout: ya mon! yourself? | 22:15 |
magicaltrout | saturday | 22:16 |
magicaltrout | got to de-jetlag | 22:16 |
magicaltrout | marcoceppi: i'm assuming I should be able to talk to postgres without changing the security in the postgres charm? | 22:17 |
bdx | magicaltrout: postgres only adds entries to pg_hba.conf for the ip(s) of your related units | 22:21 |
bdx | if you want to connect from another source you have to feed it the extra-pg-auth config | 22:22 |
magicaltrout | yeah i can see the entry in the config | 22:22 |
magicaltrout | which is weird | 22:22 |
bdx | what part? | 22:22 |
magicaltrout | so its adding my relation, even if i install postgres-client on the other end | 22:22 |
magicaltrout | that gets a rejected connection as well | 22:22 |
bdx | due to what? | 22:23 |
magicaltrout | FATAL: pg_hba.conf rejects connection for host "172.31.4.210", user "juju_drillbit3", database "juju_drillbit3", SSL off | 22:23 |
bdx | what is in your pg_hba.conf | 22:24 |
bdx | oooh, I think I hit this the other day too ... are you requesting multiple dbs? | 22:24 |
magicaltrout | not that i'm aware of :) | 22:25 |
magicaltrout | http://pastebin.com/5rBMB1Nx | 22:25 |
magicaltrout | thats the pg_hba | 22:25 |
bdx | if you are, postgres charm will generate a new password for the juju_<service-name> user for each extra db you request | 22:25 |
bdx | leaving all_databases_requested[:-1] to have the wrong password | 22:26 |
magicaltrout | aah i see | 22:27 |
magicaltrout | specify a database and it works | 22:27 |
bdx | lol | 22:27 |
magicaltrout | doesn't work in jdbc world though | 22:28 |
magicaltrout | arse | 22:28 |
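And the "specify a database" fix, sketched against the same interface (`set_database` is the request method on the requires side of stub's pgsql interface; the `pgsql.connected` state name and the database name here are illustrative assumptions):

```python
from charms.reactive import when


@when('pgsql.connected')
def request_database(pgsql):
    # Ask the postgresql charm for a named database up front. Relying on the
    # auto-generated juju_<service> database (or requesting several databases)
    # can leave earlier requests with a regenerated password, per bdx above.
    pgsql.set_database('drillbit')  # illustrative name
```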
=== menn0-afk is now known as menn0 | ||
magicaltrout | oh postgres | 22:46 |
magicaltrout | some days you make me so sad | 22:46 |
magicaltrout | aaah lol | 22:59 |
* magicaltrout figured part of it out | 23:00 | |
bdx | what was it | 23:05 |
bdx | I'm about to start using jdbc too | 23:05 |
magicaltrout | mostly a user caused weird zookeeper issue :) | 23:07 |
magicaltrout | but there may be an SSL issue as well, give me 5 mins and I'll let you know | 23:07 |
magicaltrout | ooh | 23:21 |
magicaltrout | i've just been asked to demo some juju stuff for NASA and Darpa when I'm at JPL in a couple of weeks | 23:21 |
lazyPower | magicaltrout nice! | 23:22 |
magicaltrout | and other container orchestration tools." | 23:23 |
magicaltrout | paste fail | 23:23 |
magicaltrout | "I also here from Paul that you are 'juju charm' purveyor. I've never used this but it certainly looks interesting. I'd like to hear how juju compares to Docker compose and other container orchestration tools." | 23:24 |
magicaltrout | thats the brief ;) | 23:24 |
magicaltrout | it is... brief | 23:25 |
lazyPower | indeed | 23:25 |
lazyPower | and we're so not a "container orchestrator" | 23:25 |
magicaltrout | lol yeah | 23:25 |
lazyPower | its a byproduct of how awesome we are though | 23:25 |
lazyPower | so i can see why thats such a popular misconception | 23:25 |
magicaltrout | yup | 23:25 |