[00:00] <sebas5384> marcoceppi: specially the wordpress charm
[00:40] <l1fe> anyone use juju with lxc containers? I'm getting error executing "lxc-start" messages when adding containers to machines
[04:35] <sebas5384> ping
[07:52] <g0d_51gm4> hello guys. I need support for my juju environment. the base installation is done and until last night everything worked well (the juju-gui was ready). today I started the Nodes via MaaS and "juju status" gave me this error: "ERROR state/api: websocket.Dial wss://svr1node1.maas:17070/: dial tcp 1.1.2.25:17070: connection refused" why?
[07:55] <g0d_51gm4> I've opened the console window of each single node and the Ubuntu installer has started again. this step was already done yesterday!!!!
[08:34] <niedbalski> hey TheMue , are you there?
[08:35] <TheMue> niedbalski: yep, I’m here
[08:35] <niedbalski> TheMue, once you have a chance could you review/merge https://code.launchpad.net/~niedbalski/golxc/fix-1329049/+merge/225584?
[08:35] <niedbalski> thanks!
[08:37] <TheMue> niedbalski: took a quick look. right now I’m focussed on another bug but will merge then
[08:38] <niedbalski> TheMue, thanks :)
[09:32] <g0d_51gm4> can anyone answer me? please
[12:33] <marcoceppi> g0d_51gm4: because stopping a node in MaaS unallocates it
[12:33] <marcoceppi> it's not just shut down, the node is cleaned and marked as available
[12:35] <g0d_51gm4> marcoceppi: hi, I shut down the nodes, and today I started up the Region Controller, Cluster Controller and nodes. after running juju status I received that error; juju can't establish the connection to the node
[12:37] <g0d_51gm4> before shutting down the machines I had put the Nodes in unallocated status via MaaS
[12:37] <g0d_51gm4> and left them in ready status.....
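A "connection refused" on the API port usually just means nothing is listening there, which fits marcoceppi's explanation that releasing a node in MAAS wipes it. A minimal sketch of a reachability check, assuming the hostname and port from the error message above (adjust for your own environment):

```python
import socket

def port_open(host, port, timeout=5.0):
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # "connection refused" / timeout: nothing is listening, e.g. the
        # node was released and recommissioned by MAAS.
        return False

if __name__ == "__main__":
    # Host/port taken from the error in the log; illustrative only.
    print(port_open("svr1node1.maas", 17070))
```

If this returns False, the state server simply isn't up, and `juju status` will fail exactly as shown above.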
[12:58] <TheMue> niedbalski: short info, code is merged. thanks again
[13:14] <mhall119> jcastro: are you going to be able to give a juju update in today's UE Live?
[13:15] <mhall119> nvm, just saw your mention in the other channel
[14:19] <tedg> mbruzek, Can you re-review https://code.launchpad.net/~jose/charms/precise/owncloud/port-change+repo+ssl-support/+merge/215527 ?
[14:20] <mbruzek> tedg, Yes I can add it to my queue
[14:21] <tedg> Great, thanks!
[17:50] <adeuring> utlemming, lazyPower: The Juju/Vagrant boxes published on 2014-07-07 and 2014-07-09 are unusable after a reboot.  I've traced this down to terribly unstable connections between the script "redirect.py" (which configures port forwarding) and the state server. More or less any query sent from the script to the state server can fail with errors as diverse as <Env Error - Details:
[17:50] <adeuring>  {   u'Error': u'cannot get all services', u'RequestId': 23, u'Response': {   }}
[17:50] <adeuring> <Env Error - Details:
[17:50] <adeuring>  {   u'Error': u'EOF', u'RequestId': 0, u'Response': {   }}
[17:50] <adeuring> <Env Error - Details:
[17:50] <adeuring>  {   u'Error': u'cannot read settings: EOF', u'RequestId': 4, u'Response': {   }}
[17:50] <adeuring> socket.error: [Errno 111] Connection refused
[17:50] <adeuring> socket.error: [Errno 32] Broken pipe
[17:50] <adeuring> websocket.WebSocketConnectionClosedException
[17:50] <lazyPower> adeuring: good find, do we have a bug against redirect.py?
[17:50] <lazyPower> i can add that to my work list
[17:51] <adeuring> I have a branch ready that circumvents these errors lp:~adeuring/jujuredirector/redirect-script-connection-errors but i am not sure if this is the right approach
[17:51] <lazyPower> looking now
[17:51] <adeuring> lazyPower: there is not yet a bug...
[17:51] <adeuring> lazyPower, utlemming: this connection problem also affects the "gui redirect server" that runs on port 6080
[17:52] <adeuring> and that's not fixed by my branch...
[17:53] <adeuring> lazyPower, utlemming: my main problem with this branch: it just tries to deal with the unstable connections but does not address the root cause...
[17:54] <adeuring> the next odd detail: I think that the main juju code did not change between the releases from 2014-07-05, 2014-07-07 and 2014-07-09
[17:54] <lazyPower> adeuring: from what i see, it tunes up the wait timer, and handles errors better right?
[17:55] <adeuring> lazyPower: yes, the main idea is: "if there is a connection problem, try again"
[17:55] <lazyPower> plus moves the check lower in the set of instructions...
[17:55] <lazyPower> mhmm
[17:55] <adeuring> right, because a try/except around the "for watch in watcher" scared me
[17:58] <lazyPower> haha, yeah
[17:58] <lazyPower> that probably caught more than it needed to
[17:58] <lazyPower> ok, i dont see anything inherently wrong here
[17:58] <lazyPower> sure its a bandaid patch but i dont know why its having an issue
[17:59] <lazyPower> so without that core piece of info, i think this is fine as a stop-gap to a better fix
[17:59] <adeuring> lazyPower: right. and if you have to create a new env, some watched events may go unnoticed
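The retry idea discussed above ("if there is a connection problem, try again") can be sketched as a small wrapper. This is an illustrative assumption about the shape of the fix, not the actual redirect.py patch; the exception tuple and backoff values are hypothetical:

```python
import time

# Connection-level failures worth retrying; the real script would map its
# websocket/socket errors into a tuple like this.
RETRYABLE = (ConnectionRefusedError, BrokenPipeError, EOFError)

def with_retries(call, attempts=5, delay=0.1):
    """Invoke call(); on a retryable connection error, back off and retry."""
    for attempt in range(attempts):
        try:
            return call()
        except RETRYABLE:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(delay)
            delay *= 2  # exponential backoff between attempts
```

Wrapping each individual query this way keeps the try/except tight around one call, rather than a broad try/except around the whole watcher loop, which (as noted above) catches more than it needs to.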
[17:59] <lazyPower> you're cleaning up and exposing some better chances to catch non-timeout related issues.
[17:59]  * lazyPower +1's the code without a MP
[18:00] <adeuring> lazyPower: well, I can write the MP ... but what about the question "why are the connections now so flaky"?
[18:01] <lazyPower> right, that needs to be tracked
[18:01] <lazyPower> we should file a bug, and i'll put a card on the board to look into it with ya
[18:02] <adeuring> lazyPower: we could try this approach: start with the box from 2014-07-05 (should work fine), upgrade the deb packages and see if this changes anything.
[18:04] <lazyPower> ok
[18:09] <lazyPower> adeuring: if you can see the solutions board - i added a card to work after i wrap my hive2 work
[18:09] <lazyPower> if something comes up and you dont need me to pivot to this - just ping me and i'll wipe it.
[18:10] <adeuring> lazyPower: ok
[18:37] <npasqua> Hello All. I have a question regarding Juju overwriting .conf files. I set a setting in my nova.conf file, but when I use "restart jujud-unit-nova-compute-0" it reverts the .conf to some base. I want juju to keep my setting in there as well. How can I do this?
[18:39] <marcoceppi> npasqua: you'll have to have it done in the charm, the charm assumes it's the only thing controlling the box
[18:40] <marcoceppi> so you can fork the charm, add it as a configuration option to the charm, then use that fork and submit a merge proposal against the official charm
[18:42] <npasqua> We want to set the novnc_noproxy option in our configuration from 127.0.0.1 to our compute node's IP... Otherwise we cannot access the console through the openstack dashboard without manually changing the IP in the url.
[18:43] <npasqua> It seems as if it's more of an individual issue rather than an official charm addition (although it is very interesting that the novnc packages are not included within the current charms)
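Following marcoceppi's advice, the forked charm's hook would read the new config option and write it into nova.conf itself, so restarting the unit agent no longer reverts the value. A hypothetical sketch of the file-editing half (the section and key names are illustrative, not the real nova-compute charm's template variables):

```python
import configparser

def set_nova_option(conf_path, section, key, value):
    """Persist one option into an INI-style config file such as nova.conf.

    A forked charm's config-changed hook could call this with a value read
    from the charm's configuration, instead of hand-editing the file and
    having the charm overwrite it on the next hook run.
    """
    cfg = configparser.ConfigParser()
    cfg.read(conf_path)
    if section != cfg.default_section and not cfg.has_section(section):
        cfg.add_section(section)
    cfg.set(section, key, value)
    with open(conf_path, "w") as f:
        cfg.write(f)
```

The point of routing the change through the charm, as marcoceppi says, is that the charm assumes it is the only thing controlling the box: any value it doesn't know about will be clobbered.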
[18:50] <adeuring> lazyPower, utlemming: I've created an MP. Not sure how much sense it makes, but perhaps it's a way to discuss the problem further..
[18:50] <adeuring> lazyPower, utlemming https://code.launchpad.net/~adeuring/jujuredirector/redirect-script-connection-errors/+merge/226197
[18:51] <lazyPower> adeuring: just updated the card with this URL. i'll move my efforts here when it's on the top of the stack
[18:51] <adeuring> lazyPower: great
[18:52] <lazyPower> Thanks for the legwork adeuring
[18:52]  * lazyPower hattips
[18:57] <l1fe> anyone here use juju to install openstack on 14.04? i seem to have everything working EXCEPT for networking...where it never created the br100 interface
[18:59] <l1fe> the only setting in my config for nova-cloud-controller is network-manager: "FlatDHCPManager"
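One quick way to confirm l1fe's symptom (assuming a Linux host, where network interfaces appear under sysfs) is to check whether the bridge exists at all; `br100` is the interface name from the question:

```python
import os

def interface_exists(name):
    """Return True if a network interface is visible under /sys/class/net."""
    return os.path.isdir(os.path.join("/sys/class/net", name))

if __name__ == "__main__":
    # False here means nova-network never created the flat-DHCP bridge.
    print(interface_exists("br100"))
```

If the bridge is missing, the next place to look is the nova-compute unit's flat-network settings, since with FlatDHCPManager the bridge is created by nova-network rather than by the charm's install hook.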