[02:18] <jose> hey guys! will I be able to deploy on the maas master node?
[09:22] <noodles775> lazyPower, hazmat: Another one that you might be interested in: https://code.launchpad.net/~michael.nelson/charms/trusty/elasticsearch/add-ufw/+merge/225934
[10:27] <Egoist> Hello
[10:27] <Egoist> Why, when I try to deploy a charm from my local charm repository,
[10:28] <Egoist> does it use some kind of old code instead of the updated code that I changed?
[11:55] <rbasak> sinzui: can we sync later today when you're in, please? I'm looking at juju 1.20 -> Utopic, and have a diff of PPA to Utopic packaging with a view to reducing it.
[11:55] <rbasak> It's pretty good already, but I wanted to check with you on a few of the differences so that we can decide which way is preferred and then do the same on both.
[11:56]  * rbasak goes to lunch
[12:23] <d4rkn3t> lazyPower: hi, how are you? I've re-configured everything from the beginning; now I've deployed juju-gui "http://paste.ubuntu.com/7765331/". To open the web GUI, do I have to use the public-address?
[12:28] <lazyPower> d4rkn3t: it should respond on any address available.
[12:32] <d4rkn3t> if I use "http://maasccsvr1node1.maas/" I receive that the host is unknown, and "Unknown Host
[12:32] <d4rkn3t> Description: Unable to locate the server requested --- the server does not have a DNS entry. Perhaps there is a misspelling in the server name, or the server no longer exists. Double-check the name and try again."
[12:33] <lazyPower> do you have your maas DNS server set as a dns provider?
[12:33] <lazyPower> as in can you ping maasccsvr1node1.maas?
[12:34] <d4rkn3t> from the MAAS server the ping is OK; from the host machine it is not.
[12:35] <d4rkn3t> I remind you MAAS is a VM created on KVM
[12:36] <lazyPower> i don't know your specific networking setup, so it's difficult to gauge a proper response
[12:36] <lazyPower> do you have a bridged ethernet adapter providing public IPs for your units?
[12:37] <lazyPower> in a typical maas setup you need 2 ethernet devices: one for internal, a secondary for public. the public ethernet adapter is a bridged connection - so you don't have to use tools like sshuttle to reach the instances.
[12:37] <d4rkn3t> I've created a virtual network between my host and the VM
[12:39] <d4rkn3t> and I created a virtual network in virt-manager, and KVM generated a virt0 interface on my host. I communicate with the VM using this network
[12:42] <lazyPower> if you can reach the individual vm's from outside the KVM host, you're g2g - otherwise you'll have to create a tunnel to communicate with juju-deployed services
[12:42] <d4rkn3t> before doing that I want to first test it in a separate environment, and then create an environment like the one you've suggested
[12:44] <d4rkn3t> I've added the IP-to-hostname association to the hosts file, and from the host I can ping both the IP and the hostname
[12:44] <lazyPower> you should be able to place your MAAS DNS server in /etc/resolv.conf
[12:44] <lazyPower> for example mine is: search maas  nameserver 10.0.10.2
[12:45] <lazyPower> with a newline between the maas and nameserver lines.
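Laid out as a file, the resolv.conf lazyPower describes (addresses from his own setup) would be:

```
search maas
nameserver 10.0.10.2
```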
[12:58] <d4rkn3t> I've added the MAAS IP on my host and I can ping it by hostname, but I receive the same error when I try to open the web UI
[13:06] <d4rkn3t> noooo.... I'm sorry, I made the ping from the RC server!!!
[13:16] <rbasak> sinzui: ironic that what you predicted in https://bugs.launchpad.net/launchpad/+bug/231797 in 2010 has now happened - for Juju, where you now manage releases :)
[13:16] <_mup_> Bug #231797: no sensible way to use debian/watch files with launchpad hosted tarballs (no simple url-and-link list of all downloads) <lp-registry> <packaging> <releases> <Launchpad itself:Triaged> <devscripts (Ubuntu):Invalid> <https://launchpad.net/bugs/231797>
[13:18] <sinzui> rbasak, I get angry every time that downloads page comes into conversation. Too many people worked on it without knowing who needed to use it.
[13:21] <lazyPower> d4rkn3t: so you're able to connect to the gui now? everything's good?
[13:43] <noodles775> lazyPower, hazmat: ...and following on from firewalling the admin port (9200) is firewalling the node-to-node comms port (9300) if either of you have time: https://code.launchpad.net/~michael.nelson/charms/trusty/elasticsearch/ufw-for-peers-too/+merge/225968
[13:44] <lazyPower> thanks noodles775, it'll be a bit before i get into the queue today
[13:44] <hazmat> noodles775, nice.. on the client .. that does prevent expose from working though .. albeit its not advisable with es
[13:48] <noodles775> hazmat: what's the use-case for wanting to expose 9200? If you really wanted to do that, I'd personally do it via an apache fe, but if there's a real use-case, happy to update it.
[13:50] <hazmat> noodles775, i'm not sure there is
[13:51] <hazmat> noodles775, re the use case.. possibly between multiple ostack envs but not public access
[13:51] <hazmat> noodles775, agreed.. normal usage for a js client would always be a nginx/apache frontend
[13:51] <d4rkn3t> lazyPower: adding the hostname and IP to the hosts file on the host machine, I can reach the node via ping; using the DNS, no! Anyway, trying to open the juju gui I receive the same error "Unknown Host"
[13:52] <hazmat> noodles775,  i can't think of a concrete use case, not worth while till there is one.
[13:52] <lazyPower> d4rkn3t: thats bizarre
[13:53] <d4rkn3t> it's really bizarre!!!!
[13:53] <noodles775> hazmat: great (given that we do have a use-case for denying non-client access to 9200, so that any other unit on the internal network isn't able to play with your indexes :P)
[13:54] <hazmat> noodles775, do you use any of the es management plugins?
[13:54] <d4rkn3t> also ssh is working (ssh ubuntu@maasccsvr1node1.maas)
[13:56] <d4rkn3t> lazyPower: the problem was in the URL: using http the web GUI doesn't work, via https it does!
[13:57] <d4rkn3t> now I see Juju attempting to connect to the environment!
[13:58] <lazyPower> strange, the juju-gui should work over https...
[13:59] <lazyPower> it uses a snakeoil certificate, and quickstart auto-launches an https url for you.
[14:00] <noodles775> hazmat: nope, not so far - we're currently controlling the indexes from the client application, and haven't yet done much analysis (but will be shortly). Which ones do you recommend?
[14:01] <noodles775> hazmat: as far as adding support in the charm for plugins, I'd assume we'd provide two methods (installing from a payload plugins directory for our own usage, as well as from github for casual usage)
[14:01] <noodles775> bigdesk looks useful.
[14:02] <d4rkn3t> it's working over https... I've accepted the certificate and boom, the home page of the GUI is ready! Not really, because it tries to connect to the environment but nothing happens
[14:03] <d4rkn3t> it's taking so long! Nothing happens; it just keeps retrying the connection
[14:05] <hazmat> noodles775, yeah.. bigdesk and kopf are the actively developed useful ones.. bigdesk has some nice charts/stats out of the box
[14:06] <hazmat> noodles775, here's a good comparison of the various plugins.. https://blog.codecentric.de/en/2014/03/elasticsearch-monitoring-and-management-plugins/
[14:08] <lazyPower> d4rkn3t: well since you're pushing manual settings in /etc/hosts - sounds like there's still some misconfiguration at play. The GUI probably cannot connect to the API server, and if it's going by hostname and there's no resolution, that would explain the timeout of the GUI connecting.
[14:08] <lazyPower> it's a web-socket connection, and i'm pretty sure it works by hostname, rick_h__ would know this for sure.
[14:08] <rick_h__> lazyPower: my ears are itching, reading backscroll
[14:09] <jcastro> hey lazyPower
[14:09] <lazyPower> hi jcastro
[14:09] <mbruzek> rick_h__, itchy ears, you should get that looked at.
[14:09] <jcastro> cory_fu's allura charm looks ready to promulgate
[14:10] <lazyPower> jcastro: gonna be a bit before i'm in the queue today
[14:11] <rick_h__> lazyPower: d4rkn3t sorry, backscroll seems to imply there's a bunch of history here. What's going on?
[14:11] <rick_h__> lazyPower: d4rkn3t if the gui shows the 'connecting to environment' message it's trying to establish a websocket connection to the state server's api endpoint.
[14:11] <rick_h__> lazyPower: d4rkn3t if it's failing, check the browser console/network tab (you might have to reload with it open) and look at the url it's trying to hit and any error/etc in there
[14:12] <d4rkn3t> lazyPower: http://paste.ubuntu.com/7765803/
[14:12] <lazyPower> rick_h__: d4rkn3t has a juju/maas setup - he's not using the MAAS DNS and is instead plunking IPs in his /etc/hosts - my thought is that's not going to work if the gui isn't aware of the websocket state server hostname, as that's what it uses to connect, right? or is it using an implicit ip? i'm not really sure where it gets the info from...
[14:14] <d4rkn3t> rick_h__: I've already reloaded the page, but nothing; it keeps trying to connect to the node
[14:15] <rick_h__> d4rkn3t: right, but open the network tab in your browser's developer tools and look at what address it's attempting to connect to
[14:15] <rick_h__> lazyPower: hmm, looking to see if I can remember how it finds the address to use. frankban know off the top of your head ^ ?
[14:16] <frankban> rick_h__: reading
[14:16] <rick_h__> frankban: does the charm feed the juju websocket ip/dns name via the settings?
[14:17] <frankban> rick_h__: the guiserver is started passing the juju API url, the GUI just connects to the gui service unit address
[14:18] <rick_h__> frankban: ah right. d4rkn3t so yea, need to find out what url it's trying to connect to. It sounds like the client (your machine) can't route to the charm's running server.
[14:18] <lazyPower> I'm fairly certain the root of these issues is a lack of resolvable dns within that setup.
[14:18] <lazyPower> yeah
[14:19] <d4rkn3t> rick_h: this is what i see in browser console "WebSocket connection to 'wss://maasccsvr1node1.maas/ws' failed: WebSocket is closed before the connection is established. maasccsvr1node1.maas/:1"
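A failed wss:// connection like the one above can mean either that the name doesn't resolve or that the port is unreachable. A small sketch to tell the two apart (a hypothetical helper, not part of Juju or MAAS tooling; the GUI serves HTTPS/WSS on port 443 in this setup):

```python
import socket

def check_endpoint(host, port=443):
    """Distinguish a DNS failure from a connectivity failure."""
    try:
        ip = socket.gethostbyname(host)
    except socket.gaierror:
        return "no DNS entry for %s" % host
    try:
        # DNS worked; now see whether the port actually answers.
        with socket.create_connection((ip, port), timeout=5):
            return "reachable at %s:%d" % (ip, port)
    except OSError:
        return "resolves to %s, but port %d is unreachable" % (ip, port)

print(check_endpoint("maasccsvr1node1.maas"))  # hostname from the log
```

In d4rkn3t's case the name resolved (ssh worked), so the failure had to be between the browser and the socket, which is what eventually pointed away from DNS.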
[14:19] <frankban> rick_h__: FWIW the API URL is retrieved by the charm from the hook's execution context: the JUJU_API_ADDRESSES env var
[14:19] <lazyPower> frankban: that, is really good info to have. thank you
[14:19] <lazyPower> frankban: so you just parse the ENV var and fish out the stuff you need? i may take this approach with another charm i'm working on
[14:20] <rick_h__> thanks frankban
[14:21] <d4rkn3t> rick_h_: in the node as GW i've 1-1-2-1.maas
[14:22] <frankban> lazyPower: this is how we get the API address: http://bazaar.launchpad.net/~juju-gui/charms/trusty/juju-gui/trunk/view/head:/hooks/utils.py#L140
[14:22] <frankban> lazyPower: it is possible to really simplify that if you don't have to support old versions of juju
[14:22] <lazyPower> frankban: looks about right :)
[14:22] <lazyPower> thats what i expected, os.env('foo')
[14:23] <frankban> lazyPower, rick_h__: I think https://bazaar.launchpad.net/~juju-gui/charms/trusty/juju-gui/trunk/view/head:/HACKING.md#L237 is relevant for investigating this kind of errors
[14:23] <lazyPower> I for whatever reason was thinking it was passed as config, but that didn't sound right - because we're not shipping anything to the charm.
[14:25] <lazyPower> frankban: thats all about 2 steps further than where we are with d4rkn3t's issues. he's having an issue with the juju-gui connecting to the state server.
[14:25] <frankban> lazyPower: yeah, to complete that story, the api address is then passed as a cli argument to the GUI server, which acts as a proxy between the browser and the juju-core api server
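frankban's description can be sketched roughly as follows: the hook reads JUJU_API_ADDRESSES (a space-separated list of host:port pairs) from its execution environment, much as the linked utils.py does in a more defensive way. The sample value below is made up for illustration:

```python
import os

def get_api_address(env=os.environ):
    """Return the first Juju API address from the hook execution context.

    JUJU_API_ADDRESSES holds space-separated host:port pairs; jujud sets
    it before running a hook.
    """
    addresses = env.get("JUJU_API_ADDRESSES", "").split()
    return addresses[0] if addresses else None

# Made-up sample value, standing in for what jujud would export:
fake_env = {"JUJU_API_ADDRESSES": "10.0.10.2:17070 10.0.10.3:17070"}
print(get_api_address(fake_env))  # 10.0.10.2:17070
```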
[14:25] <lazyPower> and that's more along the lines of the DNS half-configuration i've pointed out above. If we get d4rkn3t's setup working without manually stuffing hosts in /etc/hosts, this should all go away.
[14:25] <frankban> cool
[14:26] <lazyPower> i figured it was a connectivity issue, thanks for confirming my suspicions and the insight into how the gui configures itself frankban
[14:26] <lazyPower> rick_h__: *hattip*
[14:26] <frankban> lazyPower: np
[14:27] <lazyPower> d4rkn3t: so, sounds like the core problem here is the half-configured environment. Is there a specific reason you're manually adding hosts to /etc/hosts? you should be able to just add your region controller as a DNS server to your resolv.conf and the nodes will be available to you
[14:29] <d4rkn3t> lazyPower: I did that because from my host I can't ping the node using its own hostname; I tried it just to verify that my PC can reach the node and open Juju's web GUI
[14:31] <lazyPower> d4rkn3t: the MAAS stack deploys a bind server on the region/cluster controller. so you should be using that to do dns resolution
[14:31] <lazyPower> make sure its the first server listed in your resolv.conf and the rest should 'just work'
[14:31] <d4rkn3t> I've done that but nothing; this is my resolv.conf file on the host machine http://paste.ubuntu.com/7765902/
[14:32] <jcastro> mbruzek, does the tomcat charm you worked on do HA?
[14:32] <lazyPower> move the "search maas" line up so it comes before "nameserver 1.1.1.10".
[14:33] <lazyPower> then try pinging your .maas node again with it commented out in /etc/hosts
[14:34] <jcastro> mbruzek, nevermind I was looking in the wrong place, looks like you do have cluster support
[14:34] <mbruzek> jcastro, not specifically it will cluster
[14:37] <d4rkn3t> like this "http://paste.ubuntu.com/7765913/"? I've removed the line in the hosts file and I can ping it using the hostname, but in my browser juju continues trying to connect to the environment
[14:43] <lazyPower> i'm not sure what to recommend. frankban there's a proxying socket server on the juju-gui host right?
[14:44] <frankban> lazyPower: yes, the guiserver
[14:44] <frankban> lazyPower: which logs to /var/log/upstart/guiserver.log
[14:45] <lazyPower> d4rkn3t: and you've commented out that line in /etc/hosts so you're positive the DNS resolution is working?
[14:46] <jcastro> mattgriffin, hey do you guys have power8 builds in your repo?
[14:49] <d4rkn3t> frankban: this is the guiserver.log on node http://paste.ubuntu.com/7765957/
[14:50] <d4rkn3t> lazyPower: yes, now I can resolve it via resolv.conf, but if I run the command "sudo resolvconf -u" to update the file I lose that nameserver!
[14:51] <frankban> d4rkn3t: I don't see any ws connections there, could you please post the response from https://<juju-gui-url>/gui-server-info ?
[14:51] <lazyPower> d4rkn3t: don't panic - you can set this in /etc/resolvconf/resolv.conf.d/head
[14:51] <d4rkn3t> ok
[14:52] <frankban> d4rkn3t: also, it is possible to switch logging to debug with "juju set juju-gui builtin-server-logging=debug"
[14:52] <d4rkn3t> frankban: http://paste.ubuntu.com/7765974/
[14:53] <d4rkn3t> i've run that command but in the terminal i don't see anything
[14:53] <frankban> d4rkn3t: so the bootstrap node is maasccsvr1node1.maas, right?
[14:53] <d4rkn3t> yes
[14:54] <frankban> d4rkn3t: yeah, that command just changes how the gui server logging is configured. now you can look at the logs again while performing a request
[14:55] <d4rkn3t> and the juju status is this http://paste.ubuntu.com/7765980/
[14:56] <d4rkn3t> frankban this is the log http://paste.ubuntu.com/7765985/
[14:57] <d4rkn3t> frankban: and this one is after reloading the page http://paste.ubuntu.com/7765991/
[15:13] <mattgriffin> hi jcastro. i'll check into it. why do you ask?
[15:14] <jcastro> I was just thumbing through the percona-cluster charm and was wondering if it was available on POWER8
[15:14] <mattgriffin> jcastro, cool. i'll look into it :)
[15:14] <jcastro> they do free hardware access btw: https://www-304.ibm.com/partnerworld/wps/servlet/ContentHandler/stg_com_sys_power-development-platform
[15:15] <mattgriffin> jcastro, excellent. thanks for sharing. i had asked a former colleague about how to get some hardware a month ago... this is great
[15:18] <jcastro> siteox.com also has shell accounts for like 30 bucks a month
[15:21] <frankban> d4rkn3t: I still don't see logs about incoming WebSocket connections; it seems the browser is not sending ws requests. Do you see the same behavior when, e.g., switching to another browser or clearing caches?
[15:22] <d4rkn3t> frankban: i try to remove cache
[15:22] <frankban> d4rkn3t: incognito mode could also help
[15:23] <d4rkn3t> i've cleaned the cache, same problem. using incognito i see the login page
[15:25] <d4rkn3t> why ???
[15:26] <frankban> d4rkn3t: are you able to log in?
[15:29] <d4rkn3t> the extension "Spotflux Lite" blocked the connection!!!now it's working perfectly
[15:29] <d4rkn3t> it's incredible!!!
[15:29] <d4rkn3t> 2 days for that!!!!
[15:32] <frankban> d4rkn3t: :-/
[15:32] <d4rkn3t> it's unbelievable..... i'm so sorry you had to lose your time on me....
[15:33] <d4rkn3t> i disable the extension and it works..... re-activate it and no login!
[15:33] <frankban> d4rkn3t: no problem really! happy we solved that
[15:34] <d4rkn3t> yeah, i want to thank y for everything guys, you're fantastic
[15:35] <d4rkn3t> let's go to have a beer all together i'll offer you!!!
[15:36] <d4rkn3t> next step will be adding more nodes and deploy openstack!!!!
[15:36] <d4rkn3t> fucking extensions!!!!!!
[15:38] <jcastro> d4rkn3t, well, at least it's not something complicated!
[15:40] <d4rkn3t> jcastro: you're right, but your team supported me for an entire day to resolve the problem and discover that this was the cause....
[15:41] <jcastro> heh yeah it happens
[15:41] <jcastro> glad it's sorted for you though!
[15:42] <d4rkn3t> anyway thanks a lot guys for your support, i hope to repay you soon bye
[16:01] <lazyPower> Aww he's gone
[16:01] <lazyPower> i was going to say 'tell your friends about us!' - next time *snaps*
[16:23] <d4rkn3t> i came back here again.... in the juju-gui there is a notification "Failed to load charm details. Charm API error of type: no_such_charm" - is it maybe because I haven't installed charm-tools?
[16:24] <rick_h__> d4rkn3t: no, that means somewhere there's some bad data for a charm it could not load
[16:24] <rick_h__> d4rkn3t: it should be safe to ignore
[16:26] <d4rkn3t> ok, how can I find out which bad data is causing it?
[16:29] <d4rkn3t> i mean, is it possible to correct that?
[16:32] <lazyPower> d4rkn3t: if a charm passes 'charm proof' (which comes from charm-tools) it should be fine.
[16:34] <adeuring> utlemming: could you please have a look here: https://code.launchpad.net/~adeuring/jujuredirector/check-juju-status/+merge/226002
[16:34] <d4rkn3t> ok, i thought it was an error. thanks again
[18:05] <avoine> is it possible that juju doesn't add the user's ssh key by default in authorized_keys on a unit in 1.20?
[18:07]  * avoine is trying   juju authorized-keys add
[18:21] <lazyPower> noodles775: hey have you seen the elasticsearch 403 failure on the repository?
[18:21] <lazyPower> this is new behavior
[18:29] <noodles775> lazyPower: Nope? I've deployed with the branch quite a few times today (but some are with our own private repo). Is it the elasticearch.org repo for which you're getting a 403?
[18:30] <lazyPower> yes
[18:30] <lazyPower> the default is what's 403'ing
[18:31]  * noodles775 sets a default deploy running and goes to read with a kid.
[18:39] <jcastro> sinzui, may I butcher the release notes to reuse them on my blog?
[18:39] <jcastro> too many good features to not tell the world
[18:46] <noodles775> lazyPower: hrm, I just deployed again with the default repo without issues. Eg. log without errors: http://paste.ubuntu.com/7767027/
[18:46] <noodles775> lazyPower: can you paste the log for the error that you saw, if you've still got it handy?
[18:46] <jcastro> https://news.ycombinator.com/item?id=8006037
[18:47] <lazyPower> noodles775: i can re-run it, give me a few - i'm in a meeting. will you be around for a bit?
[18:47] <noodles775> lazyPower: I can check back later, sure.
[18:47] <sinzui> jcastro, please do. Chop until you have your message for the masses
[18:59] <noodles775> lazyPower: fwiw, a 403 from the elasticsearch.org repo is an issue at their end (perhaps when they're updating/publishing, not sure). Let me know if you're able to reproduce, but if not, I don't think it's related to the charm.
[19:05] <lazyPower> well the charm should default to our repos
[19:40] <lazyPower> noodles775: http://paste.ubuntu.com/7767221/
[19:59] <noodles775> lazyPower: what do you get if you `juju run --unit elasticsearch/0 "curl http://packages.elasticsearch.org/elasticsearch/1.2/debian/dists/stable/main/binary-amd64/Packages"` ?
[20:00] <lazyPower> well that worked
[20:00] <noodles775> lazyPower: when you say the charm should default to our repos, which repos do you mean? (elasticsearch isn't in trusty, afaict)
[20:01] <lazyPower> noodles775: you're right, i just checked and its not in trusty - i remember looking at that during the review
[20:01] <lazyPower> weird that the apt-get update is what's 403'ing when that curl works
[20:01] <lazyPower> http://paste.ubuntu.com/7767288/
[20:01] <lazyPower> there's the output from the curl
[20:02] <noodles775> lazyPower: if the curl works, can you debug-hooks on the unit and re-run the hook (I basically want to know if it's an intermittent issue, or related to wherever you're running, i've still not repro'd)
[20:02] <lazyPower> AH YOU KNOW!
[20:02] <lazyPower> i figured it out noodles775, this is a lazypower problem
[20:02] <lazyPower> not a charm problem
[20:02] <noodles775> Great, what was the issue?
[20:02] <lazyPower> the curtain proxy of maas
[20:02] <noodles775> Ah :)
[20:02] <lazyPower> i haven't setup any kind of forwarding for ppa's, and its been forever + 6 months since i've tried a charm with a ppa configuration
[20:03] <lazyPower> really sorry about that - i forgot the quirks of my own setup
[20:03] <lazyPower> yep, and commenting out the proxy on elasticsearch fixed it
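What bit lazyPower here is a common trap: apt honours a configured proxy while an ad-hoc curl (or the sketch below) may not, so the two can disagree about the same URL. A rough illustration using Python's urllib, which picks up its proxy from the http_proxy environment variable; the proxy hostname below is made up as a stand-in for a misconfigured MAAS proxy:

```python
import os
import urllib.request

# Repository URL from the log above.
url = ("http://packages.elasticsearch.org/elasticsearch/1.2/"
       "debian/dists/stable/main/binary-amd64/Packages")

# Route requests through a (made-up) broken proxy: the repository itself
# is fine, but everything fetched via the proxy now fails - the same
# mismatch as "curl works, apt-get update 403s".
os.environ["http_proxy"] = "http://broken-proxy.maas:8000"

try:
    urllib.request.urlopen(url, timeout=10)
    print("fetch succeeded")
except OSError as exc:  # urllib.error.URLError subclasses OSError
    print("fetch failed via proxy:", exc)
```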
[20:03] <lazyPower> gahhhh i'm ashamed. i'll go hide now
[20:03] <noodles775> No probs at all, glad it's sorted :)
[20:05] <noodles775> lazyPower: btw, you can also bundle the elasticsearch package in the charm (see the readme) and it will use that instead, if that helps.
[20:05] <noodles775> G'night!
[21:23] <jose> guys, any idea on why my LXC containers may be stuck in pending state?
[21:25] <thumper> jose: yes
[21:25] <thumper> it is all my fault :-(
[21:26] <thumper> jose: is this a new bootstrap?
[21:26] <jose> thumper: yeah
[21:26] <thumper> and the local provider?
[21:26] <jose> no
[21:26] <jose> manual with lxc
[21:26] <jose> with many services in lxc
[21:26] <thumper> hmm
[21:27] <thumper> how long have you waited?
[21:27] <jose> about... 20mins?
[21:27] <jose> lxc-ls --fancy gives no useful info apart from just one machine being stopped
[21:27] <thumper> has this machine been tested with manual before?
[21:27] <jose> it was working a while ago, before I went for lunch
[21:28] <thumper> jose: can you pastebin the following: `find /var/lib/juju` and `lxc-ls --fancy` ?
[21:28] <jose> sure thing!
[21:28] <thumper> this may not be my fault
[21:29] <jose> oh wait
[21:29] <jose> the machine changed to started
[21:29] <jose> http://paste.ubuntu.com/7767588/
[21:30] <jose> wait... looks like a new machine has been fired up
[21:31] <jose> not your fault, thumper :)
[21:32] <thumper> jose: oh good
[21:45] <jose> thumper: if it's worth taking a look at, the machines have been running for around 10m now, and still showing as pending with this lxc-ls http://paste.ubuntu.com/7767622/
[21:57] <thumper> jose: which version of juju?
[21:57] <thumper> jose: also, are you using an apt-cache?
[21:58] <thumper> the machines do an apt-get update/upgrade when starting
[21:58] <thumper> with many machines starting, likely to cause contention maybe?
[22:01] <jose> thumper: I did an update/upgrade a min ago
[22:01] <jose> version 1.81.1
[22:01] <thumper> jose: yeah, but each lxc container does its own
[22:01] <thumper> as it is a "new machine"
[22:01] <jose> oh
[22:01] <thumper> independent of the host
[22:01] <jose> I'm gonna check now
[22:02] <jose> all pending
[22:02] <thumper> jose: take a look at the files in /var/log/juju/containers
[22:02] <thumper> there is console output in there
[22:02] <jose> thumper: erm... nope
[22:02] <jose> not even machine-0-lxc-currentnumbershere.log
[22:03] <thumper> well... in each container
[22:03] <jose> ooh
[22:03] <jose> thumper: how should I ssh into them?
[22:04] <jose> lxc-console tells me 'container is not running' while it is
[22:04] <jose> or at least fancy displays it like so
[22:05] <jose> wh...what?! no lxc containers now!
[22:06] <jose> urgh, I'll just re-deploy
[22:21] <jose> thumper: you still around?
[22:22] <thumper> yeah, on the phone right now, sorry
[22:31] <jose> oh, np
[22:31] <jose> I was wondering how should I ssh into my lxc containers to check those logfiles - something's not smelling good here
[22:32] <sarnold> jose: they ought to be accessible through the filesystem too, right?
[22:32] <jose> sarnold: not sure
[22:32] <jose> or you mean if the logfiles were to be copied in the host's /var/log/juju?
[22:34] <sarnold> jose: I was thinking of /var/lib/lxc/something
[22:34] <jose> sarnold: you got it!
[22:35] <jose> urgh, /var/log/juju inside the containers is just... empty
[22:36] <sarnold> jose: oh :/ maybe there's more complicated filesystems things going on than I expected. sorry jose.
[22:36] <jose> not a problem
[22:38] <jose> sarnold: I got something, maybe you know
[22:38] <jose> I got to the logs in /var/lib/juju/containers/containernamehere/container.log
[22:38] <jose> they all say 'lxc_commands - peer has disconnected'
[22:40] <sarnold> jose: sorry, I got nothing :(
[22:40] <jose> np then :)
[22:40] <jose> I think I'll give this 20 more minutes and go home if nothing shows up
[22:47] <thumper> jose: was looking locally, it seems that the code that used to write to console.log has been moved...
[22:47] <thumper> not sure where it is logging now
[22:47] <jose> oh, well
[22:47] <thumper> there was code that caught the cloud-init output
[22:47] <thumper> so you could see where it was up to
[22:47]  * thumper thinks
[22:47] <jose> I'm deploying OpenStack with LXC containers all (but nova-compute) in one machine
[22:48] <jose> should I try deploying them like... not in lxc containers but in the server directly?
[22:48] <jose> or you think they would crash>
[22:48] <thumper> I'm not sure
[22:48] <thumper> we do have a cloud installer that does that
[22:48] <thumper> somewhere
[22:51] <jose> I'm giving LXC one last try
[22:51] <jose> destroyed the machines and services and I'm re-deploying
[22:51] <jose> it *looks* like lxc containers are being created
[22:53] <themonk> can i load my local charm in juju-gui?
[23:05] <jose> thumper: should lxc containers display as 'started' (the machines not services) even though the services have not been started yet?
[23:13] <thumper> jose: yeah, the lxc status is different
[23:13] <thumper> lxc says "yeah those machines are up"
[23:14] <thumper> the service stops being pending when the agent is started and communicates
[23:14] <thumper> with the main server
[23:14] <thumper> obviously there needs to be a network path from the host to the server, and from container to container
[23:14] <thumper> jose: what is the structure of your environment?
[23:14] <jose> thumper: manual
[23:15] <thumper> right.. but how many machines
[23:15] <thumper> what does status say
[23:33] <jose> sarnold: keystone is giving me an error when migrating the database (on trusty), is this known?
[23:34] <jose> second time I'm seeing this, not sure if it's reproducible
[23:34] <sarnold> jose: sorry, no idea
[23:34] <jose> np
[23:37] <sebas5384> jose: help me bring Ubuntu Juju to the first Latino Drupal Conference? http://bogota2015.drupal.org :)
[23:37] <sebas5384> I was trying to talk with jcastro but I think he is out
[23:38] <sebas5384> lazyPower: would you be interested in talking about devops?
[23:38] <jose> sebas5384: I can give the link to them tomorrow when they're around
[23:38] <sebas5384> thanks!!
[23:39] <sebas5384> I'm the devops track chair
[23:39] <sebas5384> helping to select rock stars of devops
[23:52] <marcoceppi> sebas5384: Awesome thanks for the link!
[23:52] <marcoceppi> sebas5384: when is the CFP?
[23:52] <sebas5384> marcoceppi: you are more than invited :D
[23:52] <sebas5384> in the next two weeks we are going to define that
[23:53] <marcoceppi> sebas5384: cool, let me know and I'll submit a talk
[23:53] <sebas5384> yeah definitely! I will talk about that here :)
[23:59] <sebas5384> oh! marcoceppi, when i did the first version of the drupal charm in bash
[23:59]  * jose would like to be there
[23:59] <sebas5384> your charms were my guidance hehe
[23:59] <sebas5384> thanks marcoceppi :)