/srv/irclogs.ubuntu.com/2014/09/26/#juju.txt

=== arosales_ is now known as arosales
=== urulama-afk is now known as urulama
=== CyberJacob|Away is now known as CyberJacob
=== CyberJacob is now known as CyberJacob|Away
=== alexlist` is now known as alexlist
james_wanyone have any ideas about https://bugs.launchpad.net/juju-core/+bug/1374159 ?08:57
mupBug #1374159: Complains about wanting juju-local installed when it is <juju-core:New> <https://launchpad.net/bugs/1374159>08:57
james_wIt's preventing me from using juju currently08:57
james_was it doesn't work for long enough to complete a deploy of my test environment08:57
=== fabrice is now known as fabrice|lunchpar
jamespagegnuoy, the unit test fix is to add hooks.hooks._config_save = False10:08
jamespage 10:08
jamespageto the relations tests for the openstack charms - I have this in my https split branches10:09
gnuoyjamespage, yes, corey and I did that to the quantum-gateway charm last night10:09
gnuoyjamespage, oh, that's not exactly what we did10:09
gnuoyjamespage, so you want the implicit save on when the charm is running for realz ?10:10
jamespagegnuoy, yeah10:10
jamespagegnuoy, it does not hurt - we might want to use it later on10:10
jamespagegnuoy, so disabling it for testing is just fine10:10
gnuoyk10:11
jamespagegnuoy, we could just land that in as a resync + a trivial change10:11
gnuoyjamespage, will do10:12
gnuoycoreycb, after talking to james ^ I've tweaked the config_save  in the quantum-gateway next charm http://bazaar.launchpad.net/~openstack-charmers/charms/trusty/quantum-gateway/next/revision/6510:16
coreycbgnuoy, ok good to know11:40
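For readers following along: a minimal sketch of the fix jamespage describes above, assuming the layout the openstack charms use, where the hook module defines a Hooks() instance named hooks (file and module names here are illustrative):
    # unit_tests/test_relations.py (illustrative)
    import quantum_hooks as hooks  # the module defines hooks = Hooks() at import time
    # disable the implicit config save so hook code exercised by the
    # relation tests does not try to persist charm config state
    hooks.hooks._config_save = False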
rick_h_marcoceppi: lazyPower any chance you all can help reshare/get the word out today? https://plus.google.com/116120911388966791792/posts/9KaLE7m9hv9 and https://twitter.com/jujuui/status/51546773995192320011:47
james_wanyone have any ideas about https://bugs.launchpad.net/juju-core/+bug/1374159 ?12:13
mupBug #1374159: Complains about wanting juju-local installed when it is <juju-core:Incomplete> <https://launchpad.net/bugs/1374159>12:13
james_wIt's preventing me from using juju currently as it doesn't work for long enough to complete a deploy of my test environment12:13
james_wrick_h_: nice screencast, it looks really slick, congrats to those involved12:22
rick_h_james_w: ty much, sorry I don't have any clue on your bug to trade for the nice comments :/12:22
james_wno problem12:23
james_wrick_h_: what can provide the hardware details in the machine view?12:23
rick_h_james_w: so there was a bug in MAAS that was fixed and I think is in 1.20.8 (but this video was before then)12:23
james_wah, ok12:23
rick_h_james_w: and ec2 shows it, you can see it in makyo's video https://www.youtube.com/watch?v=pRd_ToOy87o&list=UUJ65UG_WgFa_O_odbiBWZoA12:24
james_wrick_h_: I'm only sad that we can't really use this work.12:24
rick_h_james_w: so we'll hopefully get it everywhere in time12:24
rick_h_james_w: :( why is that?12:24
james_wrick_h_: a couple of reasons really12:25
james_w1. we don't have access to our production environments12:25
james_w2. manual modification of environments doesn't suit our workflow12:25
james_wwe want an approval workflow that is driven from a desired state in version control12:25
rick_h_james_w: ah yea, though with juju auth support coming we've still got the idea of read only and such12:25
james_wyeah, that will be nice12:26
james_wso we can poke around12:26
rick_h_james_w: true, however we'll also be adding some things like linking directly to exposed ip/ports per unit in machine view and such12:26
james_wbut we'll miss out on the really nice uncommitted changes features etc.12:26
rick_h_so now that we show you a real 'per unit' look we can hopefully provide some useful stuff like 'kill that unit', or (what would be cool with the canary upgrade work) showing progress in machine view12:26
rick_h_gotcha, yea12:26
james_wbut this all looks really nice12:27
james_wand I'm sure it goes over well with customers12:27
rick_h_definitely, well hopefully some people who don't use the gui much will have some real use for it and thanks for that feedback.12:27
james_wyeah12:27
rick_h_it helps us know what things we can look to offer that we don't currently12:27
rick_h_and check out the roadmap12:28
james_wI'm trying to think if it could evolve such that we could use the modification features of the gui12:28
james_wI think it would only really be useful for testing out changes, and then exporting a diff when finished12:28
james_wwithout some very substantial modifications12:29
johnmcrick_h_: The updated gui looks excellent. A very impressive step up. I see I can get the source on github. Is there a ppa or other repo I can download it from?12:38
rick_h_johnmc: juju deploy juju-gui12:38
rick_h_johnmc: well juju deploy trusty/juju-gui12:38
johnmcrick_h_: of course. Thanks.12:38
rick_h_johnmc: if you mean source for hacking, there's a LP release tarball as well https://launchpad.net/juju-gui/+download12:39
rick_h_johnmc: but yea, deploy it to check it out in your environment12:39
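For anyone trying this end to end, a minimal sequence (assumes an already-bootstrapped environment):
    juju deploy trusty/juju-gui
    juju expose juju-gui
    juju status juju-gui   # note the public address, then open it in a browser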
=== fabrice|lunchpar is now known as fabrice
rm4is it possible to avoid haproxy being a single point of failure by adding extra instances with add-unit?13:26
aisraelrick_h_: nice work on the new gui. Watching the ghost walkthrough now. That's slick.13:29
rick_h_aisrael: thanks, team worked long and hard on that and exciting to get it out there.13:29
rick_h_rm4: I'm not 100% sure but it looks like it provides some peer relation-fu that would seem to work towards that end. http://bazaar.launchpad.net/~charmers/charms/precise/haproxy/trunk/view/head:/hooks/hooks.py#L56313:33
rick_h_rm4: it'd be a good bug on the charm to update the readme to address your question directly as I'm sure others wonder the same thing13:33
rm4rick_h_: I can create the peering and get the following13:47
rm4backend haproxy_service13:48
rm4    mode tcp13:48
rm4    option tcplog13:48
rm4    balance leastconn13:48
rm4    server haproxy-0 10.0.3.242:81 check13:48
rm4    server haproxy-1 10.0.3.188:81 check backup13:48
rm4so it does peer fine, however I'm not sure if it has a vip (keepalived for example), as when I destroy the first server its ip address is not accessible13:50
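For reference, the scale-out rm4 tested boils down to the following; the charm's peer relation wires up the backup automatically, but nothing here provides a floating IP, so a VIP would need something like keepalived on top:
    juju deploy haproxy
    juju add-unit haproxy   # the second unit joins as a backup via the peer relation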
lazyPowerrick_h_: Consider it done :)13:51
rick_h_lazyPower: ty much13:52
avoinehey bloodearnest how went your presentation at pycon uk?13:58
rm4rick_h_: I have submitted bug 137446514:02
mupBug #1374465: Readme does not have a peering section although peering is permitted. <haproxy (Juju Charms Collection):New> <https://launchpad.net/bugs/1374465>14:02
rick_h_rm4: awesome thanks!14:03
rm4rick_h_: of course14:04
bloodearnestavoine: went ok thanks. Some issues with the gui freezing up prevented the full demo14:09
bloodearnestavoine: but mfoord is doing the same demo at pycon india this weekend14:09
rick_h_bloodearnest: :(14:09
rick_h_bloodearnest: any hint what was up?14:09
bloodearnestrick_h_: well, it was a mac, using parallels to run a vm, deployed to local provider, and untested. We had to use mfoord's mac because my laptop wouldn't connect to the projector :/14:10
rick_h_bloodearnest: ok, well let me know if there's anything we can help take a look at14:11
bloodearnestavoine: I added some support for the django charm to cope with multi-unit postgres services, and it handles failover nicely14:11
rick_h_bloodearnest: and let mfoord know machine view is out for any follow up talks in case it fits the demo/material you all covered14:11
bloodearnestavoine: also, added dj-static as a simple static asset solution. We're using it in prod, with a squid in front it works well. Simple deployment.14:12
bloodearnestrick_h_: yeah, we didn't get to the bottom of it. But I think macs hate me, so that's probably it. Feeling's mutual :)14:12
rick_h_bloodearnest: hah ok14:12
lazyPowersinzui: question for you. I have ~ 4 bugs i need to retarget away from the charms collection to point to a personal branch - and i can't seem to do that in launchpad via the project data point - any hints would be helpful : https://bugs.launchpad.net/charms/+bug/137426714:13
mupBug #1374267: ctap-sampleapp crashes on logout <Juju Charms Collection:New> <https://launchpad.net/bugs/1374267>14:13
sinzuilazyPower, Lp only permits projects, distros, and distro-packages to have bugs14:13
lazyPowerso i can't retarget these bugs against the users namespaced charm?14:14
sinzuilazyPower, that's right14:14
lazyPowerD: that seems... not right14:15
sinzuilazyPower, a branch is personal14:15
lazyPoweras a personal namespaced charm is still a project in LP14:15
lazyPoweror am i misunderstanding a core tenet of LP's structure?14:15
sinzuilazyPower, when issues are shared by a group, the branch needs to be promoted to the project level...but Lp is too ass-backwards to explain that14:15
avoinebloodearnest: yeah, I saw your MP, it is pretty cool. I'll definitely check dj-static14:16
marcoceppilazyPower: we need to create the package in lp for charms first off14:16
bloodearnestavoine: the implementation of installing in the charm and running collectstatic is a quick hack, I think it could be done better14:17
sinzuilazyPower, Lp is not about the individual, it actually alienates the opportunistic developer by calling work +junk. Lp wants groups to collaborate, but never explains one person needs to create something valuable, then share it as a project14:17
bloodearnestavoine: collectstatic is not actually needed with dj-static14:17
lazyPoweri guess what confuses me is https://bugs.launchpad.net/charms/trusty/+source/hsenidmobile-ctap-sampleapp - leads me to believe i could do this without much fuss14:17
avoinebloodearnest: I was planning on adding a subordinate that installs django-storage and connects to s3 or swift14:18
avoinebut for now I was using this: https://code.launchpad.net/~patrick-hetu/+junk/django-contrib-staticfiles14:18
marcoceppilazyPower: we need to make a new tag, that's like "not-a-charm"14:18
lazyPowermarcoceppi: agreed. I'll tag these with that exact phrasing14:19
marcoceppithat we can have review-queue ignore for the time being14:19
marcoceppiI'm releasing new review-queue today, I can add that in there14:19
sinzuilazyPower, since the developer is the "first" community, Lp was never going to attract developers like github14:19
lazyPowersinzui: thanks for the clarification. I'm a little saddened by this news but its not the end of the universe.14:20
bloodearnestavoine: swift is not yet as good as s3 for serving static assets14:21
bloodearnestavoine: we are taking the approach of using dj-static and sticking squid in front14:21
avoinebloodearnest: yeah I bet that works pretty well14:21
bloodearnestavoine: two big advantages are 1) single deployment target (just your django units need updating) and 2) same in dev as in prod (as dj-static works in dev too)14:22
bloodearnestavoine: but for a lot of large assets, s3 might be better14:22
bloodearnestlike videos14:22
bloodearnestand mp3s14:22
avoineyeah14:23
avoinebloodearnest: any thoughts on the python vs ansible approach?14:24
bloodearnestavoine: so, I'm a bit confused. You are using ansible, but *not* the hooks integration, just apply_playbook. And you also have ansible for controlling juju from the hosts?14:29
avoinebloodearnest: I was planning to use AnsibleHooks and using it with Juju on the hosts14:33
avoinebloodearnest: but I'm not sure if the overhead of Ansible is worth the trouble14:33
bloodearnestavoine: great. I'd be happy to help with that.14:33
bloodearnestavoine: right.14:33
lazyPoweravoine: are you planning on submitting this charm for recommended status from the charm store?14:34
avoinelazyPower: no not soon14:34
avoinelazyPower: I'll stick with the pure python for now14:34
lazyPowerok. I was going to interject that if you want help wrapping that up in proper charm format, we have an ansible charm template for use, and integrating it into juju hooks is pretty straightforward14:34
lazyPowerif you're using a recent edition of charm tools `charm create -t` gives you some options for that14:35
avoinenice I didn't know that14:35
bloodearnestlazyPower: nice!14:39
lazyPoweruh oh, i get the feeling we haven't been very forthcoming with the information about charm create -t14:39
* lazyPower prepares an email to the list14:40
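For reference, the workflow lazyPower mentions looks roughly like this (assumes the ansible template ships with your version of charm-tools; the charm name is made up):
    sudo apt-get install charm-tools          # or: pip install charm-tools
    charm create -t ansible my-django-charm   # scaffold a charm from the ansible template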
bloodearnestlazyPower: also, have you seen my charm helpers branch that adds super-simple "actions"? We make extensive use of juju run <unit> "actions/some-action a=b c=d" type stuff, which this branch makes easy to integrate with ansible14:45
lazyPowerbloodearnest: i have not, but if its not in the review queue chances are I didn't see it14:45
bloodearnesthttps://code.launchpad.net/~bloodearnest/charm-helpers/ansible-actions/+merge/23342814:45
lazyPowerhttp://review.juju.solutions14:45
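In case the shorthand above is unclear, the pattern bloodearnest describes expands to something like this (unit name, script name, and arguments are all illustrative):
    juju run --unit django/0 "actions/migrate target=0023 dryrun=true"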
johnmcAs discussed yesterday with natefinch and sinzui, I can no longer create LXC containers using juju-maas on one of my machines.14:52
johnmc Working back through my history to the last thing I did before it broke, I might have found something.14:52
johnmcI checked-out (bzr branch) the precise haproxy charm onto my machine and tried to deploy it into a trusty LXC machine14:53
johnmcI then back-tracked and realised I should have been working with hacluster under trusty14:53
bloodearnestlazyPower: I don't see my branch on there, what do I need to do to make it appear? It should be under tools, right?14:53
johnmcDoes anyone know if that could explain my now-broken environment?14:54
lazyPowerbloodearnest: that queue may not be implemented yet. I know we're moving at breakneck speed on the new queue14:54
johnmcI'm also finding it impossible to deploy hacluster linked to glance.14:55
bloodearnestlazyPower: coolio14:55
lazyPowerbloodearnest: maybe it's worthwhile to open a bug against the review queue so we can track progress on implementation14:55
lazyPowerbloodearnest: for now it lives here but will eventually be moved to github.com/juju-solutions  --- https://github.com/marcoceppi/review-queue14:56
johnmcWhen I deploy the glance charm and fire-up a new pair of glance instances, then link that to a new hacluster instance, I get this in the logs: http://pastebin.com/ZtNA7UmT14:58
johnmcThat big long list of "INFO hanode-relation-changed Cleaning up res_glance_haproxy:0 on juju-machine-.-lxc-.." lines represents every glance instance I think I've ever had and subsequently destroyed.15:01
johnmcThings look really badly broken. Can anyone help?15:01
johnmcnatefinch: Are you around to help?15:05
natefinchjohnmc: yes, but trying to drum up someone more helpful ;)15:05
natefinchjohnmc: glance and hacluster are beyond my knowledge15:06
natefinchjohnmc: did you get any more help from sinzui last night?  I saw you got an upgrade half-finished, which is never a great place to be15:07
sinzuiAh, no, I had to switch to OS X to complete the release15:07
johnmcnatefinch: I didn't make any progress on that. sinzui said that the upgrade can't be done while those LXC (requested) machines are there. I have no idea what to do next.15:08
sinzuijohnmc, When the containers are there, but the machine/unit agents are down, you can restart each15:09
johnmcnatefinch: I'm hoping that the attempt to install the precise haproxy charm into trusty might turn out to be a common cause of both the LXC problem, and the hacluster problem.15:10
sinzuijohnmc, I have a brittle arm64 machine that I do this on from time to time. I think restarting the machine agent first is best15:10
johnmcsinzui: the containers don't actually exist. I requested their creation, but they were never actually created.15:11
sinzuijohnmc, 1.18.x was a bad time for arm64, so I did restarts of the agents, then the queued upgrade was dequeued and 5 minutes later the upgrade was complete, and the agents stayed up15:11
johnmcsinzui: I have restarted the machine agent (on the base system) many times15:11
johnmcsinzui: there are no agents, because there are no LXC containers. That is how my system is broken.15:12
sinzuirestarting the state-server agents is orthogonal15:12
sinzuijohnmc, If lxc gets blocked on a machine a lot of manual work is needed to unblock http://pastebin.ubuntu.com/8427862/15:14
sinzuijohnmc, I think you need to confirm which containers exist with sudo lxc-ls --fancy15:14
johnmcI have already been through that in detail with nate yesterday. The LXC containers do not exist. I pasted this output yesterday15:15
sinzuijohnmc, Since you cannot destroy your env and start over, I think you need to use remove-unit or destroy-machine to make juju forget about the failed containers and try again15:15
sinzuijohnmc, I don't have experience with destroying maas lxc containers15:16
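sinzui's suggestion in command form (unit and machine ids are illustrative):
    juju remove-unit glance/2              # make juju forget a failed unit
    juju destroy-machine --force 2/lxc/0   # make juju forget a container that never came up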
johnmchttp://pastebin.com/aGUAyizu15:16
sinzuijohnmc, is this trusty with cloning enabled?15:17
johnmcsinzui: I have used "destroy-machine" and used --force to no avail15:17
johnmcsinzui: This is trusty. How is cloning enabled/disabled? I'm not familiar with that setting.15:18
johnmcI followed an online guide15:18
sinzuijohnmc, are there locks named after lxc in /var/lib/juju/locks/15:19
johnmcsinzui: No such lock files on either the host system (machine 1) or the juju-agent machine. Where should I be looking for these?15:21
sinzuijohnmc, the machine with the containers. When lxc doesn't create containers we investigate the host machine15:22
sinzuijohnmc, the state-server doesn't do work. other machines ask it for a list of tasks. so when a machine has issues, we visit to investigate the local problem15:24
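The host-side checks sinzui walks through, collected in one place (machine number as in johnmc's environment):
    sudo lxc-ls --fancy                   # which containers actually exist?
    ls -l /var/lib/juju/locks/            # any stale lxc lock files?
    tail -f /var/log/juju/machine-1.log   # watch for the container request arriving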
johnmcsinzui: There are no lock files.15:24
johnmcsinzui: As I explained to natefinch yesterday, there is no evidence that the host system ever received any request for new LXC containers15:25
* mgz reads log15:25
mgzjohnmc: do you want to just clean up the lxc container stuff completely?15:26
sinzuijohnmc, interesting, but the order of events is the host machine agent asks the state server for work. status shows the state-server is waiting on the machine to do its part15:26
johnmcsinzui: This failure appears to be totally silent with regard to log files. No files under /var/log change at all on the host system in response to a new LXC request.15:29
johnmcsinzui: If we had any log output at all we might get somewhere. How is this to be done?15:29
sinzuijohnmc, I only know to look for logs in /var/log/juju. when there are no logs or logging stops, that might mean the files were removed, confusing the agents. restarting the agents will recreate the logs and there will be a flood of dequeued messages15:31
johnmcsinzui: As requesting new LXC containers has no impact, is it possible that there is a blocked queue of requests?15:32
johnmcsinzui: I've restarted the jujud for machine-1 many times, and verified (just now) that there are no deleted files being written to.15:33
johnmcsinzui: I used lsof to check open files. /var/log/juju/machine-1.log is used by jujud and is present15:33
johnmcsinzui: it's as though the juju-agent machine sends no requests to the host machine at all.15:34
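For completeness, the restart-and-verify steps johnmc describes, assuming the upstart job naming juju 1.x uses on trusty:
    sudo service jujud-machine-1 restart
    sudo lsof /var/log/juju/machine-1.log   # confirm jujud holds the live log open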
sinzuioh...15:37
=== fabrice_ is now known as fabrice|out
johnmcsinzui: Where do I go from here?16:04
sinzuijohnmc, I am not sure what to do in this case. I don't have any experience with this. When the machine agent cannot talk to the state server, the logs will scream about it. You don't see this issue in the machine-1.log though.16:10
sinzuijohnmc, I assume the machine-1.log you are reading never mentions ERROR.16:12
johnmcsinzui: The machine agent connects to the server. Netstat shows this has an open tcp connection to port 17070. However, no activity takes place. Shouldn't the jujud on the juju-agent server log something somewhere?16:14
sinzuijohnmc, The machine-1 log should be stating that it called home. The all-machines.log on the state server puts all the actions in context.16:16
johnmcsinzui: the machine agent seems to lose its connection to the juju-agent at least once a day. This is the latest log: http://pastebin.com/2PP8CK8G16:16
sinzuijohnmc, This is obviously a bug. Can you report the issue at https://bugs.launchpad.net/juju-core and attach the all-machines.log for the developers. Please review the log though, Juju likes to include certs and passwords that you will want to redact16:17
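One quick way to spot the lines sinzui warns about before attaching the log (the pattern is only a starting point):
    grep -inE 'cert|password|secret' /var/log/juju/all-machines.log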
sinzuijohnmc, which version of juju did the env start as, and which have you upgraded to?16:19
johnmcsinzui: it was 1.18.1 originally16:24
sinzuiokay, that explains the upgrade already at 1.18.4 message16:27
sinzuithough it implies something has not completed an upgrade to 1.18.416:27
bic2kjuju have any plans to directly support deploying a Docker container?17:21
=== scuttlemonkey is now known as scuttle|afk
natefinchbic2k: we've talked about docker a lot internally... you can certainly write a charm that deploys a docker container, in fact I wrote such a charm very recently.   Docker and juju work fine together now... what would you like to see done differently?17:30
bic2knatefinch: I also just finished writing a charm for deploying some internal docker services. Perhaps that's all we need :-)17:32
natefinchbic2k: I think the one thing that makes docker charms special is that it's relatively "safe" to deploy multiple of them to a single machine, rather than needing to put each of them in a separate LXC container (since they're already contained, of course)17:33
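A sketch of the co-location natefinch describes, using juju's --to placement (the charm names are made up):
    juju deploy cs:~me/trusty/webapp-docker --to 1   # both land on machine 1;
    juju deploy cs:~me/trusty/worker-docker --to 1   # docker keeps them isolated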
bic2knatefinch: Private docker registry was a bit of a challenge for us.17:35
lazyPowerhttps://plus.google.com/100016143571682046224/posts/PaVGh51FYCR - i'm just going to leave this here...17:35
natefinchbic2k: ahh, yeah, I was deploying from a publicly available image, so that wasn't a concern17:35
bic2knatefinch: You deploying on 12.04 or 14.04?17:36
natefinchbic2k: 14.0417:36
bic2knatefinch: We hit some issues with getting docker installed on 12.04 through apt. Mostly the CLI tools to add a repo insist on adding deb-src and then fail when it isn't there.17:37
natefinchbic2k: why are you using 12.04?17:38
bic2knatefinch: our cluster is old and that's what it's on :-)17:38
natefinchbic2k: ahh :)17:38
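One workaround for the deb-src issue bic2k hit is to skip the helper tool and write the sources entry by hand (repo line and package name as docker documented them at the time; verify against current docs):
    echo 'deb https://get.docker.io/ubuntu docker main' | \
        sudo tee /etc/apt/sources.list.d/docker.list
    # also import docker's signing key via apt-key, per their install docs
    sudo apt-get update && sudo apt-get install -y lxc-docker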
=== CyberJacob|Away is now known as CyberJacob
=== roadmr is now known as roadmr_afk
=== fabrice|out is now known as fabrice
avoinelazyPower: what is the bundletester command I should run to test the python-django charm like you do?19:40
avoinelazyPower: also are you testing with python3 ?19:41
lazyPoweravoine: negative. python219:41
avoineok19:41
lazyPoweravoine: just `bundletester -F` after i cd into the charmdir19:41
avoineok19:41
lazyPowerpip install bundletester into a venv19:41
lazyPowerthen you get the same results we do when testing / CI does19:41
avoineok19:41
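lazyPower's instructions, collected into one runnable sequence (paths are illustrative):
    virtualenv ~/venvs/bt   # trusty's default virtualenv is python2, matching the note above
    . ~/venvs/bt/bin/activate
    pip install bundletester
    cd ~/charms/trusty/python-django   # the charm dir under test
    bundletester -F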
lazyPoweravoine: hth - i'm EOD'd for the rest of the day to prep for my show. If you need anything else, i'll be around this weekend/monday19:42
=== lazyPower is now known as lazyPower|Spinni
avoineok, thanks19:42
sebas538_hi!20:42
=== sebas538_ is now known as sebas5384
sebas5384question: how bundle.yaml deals with local charms?20:42
sebas5384in the charm property, could be like20:43
sebas5384charm: local:precise/drupal20:43
sebas5384but how I specify the local repository20:43
sebas5384?20:44
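sebas5384's question goes unanswered here; for reference, plain juju at the time resolved local charms from a repository directory like this (bundle tooling such as juju-deployer used the same layout):
    export JUJU_REPOSITORY=~/charms   # must contain precise/drupal/
    juju deploy --repository=$JUJU_REPOSITORY local:precise/drupal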
=== fuzzy_ is now known as Ponyo
=== roadmr_afk is now known as roadmr
sebas5384the machine view is in https://jujucharms.com \o/21:10
sebas5384uhuu!! testing already :)21:10
arosaleslazyPower|Spinni: just the stream for your session correct?21:23
arosalesnot hangout or anything like that21:23
lazyPower|Spinniarosales: not this week. maybe after brussels we'll setup a live in studio session to go with it.21:26
lazyPower|Spinnireplies will be latent, i setup a mixing table this week.21:26
* lazyPower|Spinni afk's again21:26
d4rkn3thello guys, one question for MaaS and juju: is there a way to run the security upgrades on nodes added to MaaS, and for the charms on juju, without doing it one by one? thanks. If not, it might be a suggestion to add as an app on MaaS. Can anyone answer me?21:44
bic2kd4rkn3t not that I have tried this, but you may be able to use the ssh command to run the appropriate apt-updates one machine at a time21:44
d4rkn3tI'd like to avoid that, because we've got 500 virtual servers21:45
d4rkn3ti'd like to use MaaS to do that21:46
d4rkn3tfor example select the nodes and launch the upgrade!!!21:46
d4rkn3tit takes too long to do that one by one21:47
d4rkn3tthe same thing also applies to the charms deployed with juju21:50
d4rkn3tif I want to upgrade, for example, MySQL deployed using juju, is there a procedure for that?21:51
=== CyberJacob is now known as CyberJacob|Away
rick_h_d4rkn3t: there's a juju run command, and landscape is great at updates across hardware.22:48
rick_h_d4rkn3t: so in the mysql sense I'd juju run that across my mysql service.22:49
rick_h_d4rkn3t: https://www.youtube.com/watch?v=2d5KdQjXCBs is a cool video on juju run and juju run --help has some basic info.22:51
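rick_h_'s suggestion in command form; juju run can target --all, --machine, --service, or --unit in 1.x, and the package commands here are illustrative:
    juju run --service mysql "apt-get update && apt-get -y upgrade"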
=== lazyPower|Spinni is now known as lazyPower
lazyPowerI deny knowing anything about juju run :P23:16
lazyPowerrick_h_: good looking out - i just got off stream and was going to suggest the same thing. hi523:16
hazmatrick_h_, lazyPower  juju run --all "apt-get update && apt-get upgrade"23:22
mwenningmarcoceppi, can you specify constraints when you deploy amulet?23:24
