=== arosales_ is now known as arosales | ||
=== urulama-afk is now known as urulama | ||
=== CyberJacob|Away is now known as CyberJacob | ||
=== CyberJacob is now known as CyberJacob|Away | ||
=== alexlist` is now known as alexlist | ||
james_w | anyone have any ideas about https://bugs.launchpad.net/juju-core/+bug/1374159 ? | 08:57 |
mup | Bug #1374159: Complains about wanting juju-local installed when it is <juju-core:New> <https://launchpad.net/bugs/1374159> | 08:57 |
james_w | It's preventing me from using juju currently | 08:57 |
james_w | as it doesn't work for long enough to complete a deploy of my test environment | 08:57 |
=== fabrice is now known as fabrice|lunchpar | ||
jamespage | gnuoy, the unit test fix is to add hooks.hooks._config_save = False | 10:08 |
jamespage |  | 10:08 |
jamespage | to the relations tests for the openstack charms - I have this in my https split branches | 10:09 |
gnuoy | jamespage, yes, corey and I did that to the quantum-gateway charm last night | 10:09 |
gnuoy | jamespage, oh, that's not exactly what we did | 10:09 |
gnuoy | jamespage, so you want the implicit save on when the charm is running for realz ? | 10:10 |
jamespage | gnuoy, yeah | 10:10 |
jamespage | gnuoy, it does not hurt - we might want to use it later on | 10:10 |
jamespage | gnuoy, so disabling it for testing is just fine | 10:10 |
gnuoy | k | 10:11 |
jamespage | gnuoy, we could just land that in as a resync + a trivial change | 10:11 |
gnuoy | jamespage, will do | 10:12 |
gnuoy | coreycb, after talking to james ^ I've tweaked the config_save in the quantum-gateway next charm http://bazaar.launchpad.net/~openstack-charmers/charms/trusty/quantum-gateway/next/revision/65 | 10:16 |
coreycb | gnuoy, ok good to know | 11:40 |
rick_h_ | marcoceppi: lazyPower any chance you all can help reshare/get the word out today? https://plus.google.com/116120911388966791792/posts/9KaLE7m9hv9 and https://twitter.com/jujuui/status/515467739951923200 | 11:47 |
james_w | anyone have any ideas about https://bugs.launchpad.net/juju-core/+bug/1374159 ? | 12:13 |
mup | Bug #1374159: Complains about wanting juju-local installed when it is <juju-core:Incomplete> <https://launchpad.net/bugs/1374159> | 12:13 |
james_w | It's preventing me from using juju currently as it doesn't work for long enough to complete a deploy of my test environment | 12:13 |
james_w | rick_h_: nice screencast, it looks really slick, congrats to those involved | 12:22 |
rick_h_ | james_w: ty much, sorry I don't have any clue on your bug to trade for the nice comments :/ | 12:22 |
james_w | no problem | 12:23 |
james_w | rick_h_: what can provide the hardware details in the machine view? | 12:23 |
rick_h_ | james_w: so there was a bug in MAAS that was fixed and I think is in 1.20.8 (but this video was before then) | 12:23 |
james_w | ah, ok | 12:23 |
rick_h_ | james_w: and ec2 shows it, you can see it in makyo's video https://www.youtube.com/watch?v=pRd_ToOy87o&list=UUJ65UG_WgFa_O_odbiBWZoA | 12:24 |
james_w | rick_h_: I'm only sad that we can't really use this work. | 12:24 |
rick_h_ | james_w: so, we'll hopefully get it everywhere in time | 12:24 |
rick_h_ | james_w: :( why is that? | 12:24 |
james_w | rick_h_: a couple of reasons really | 12:25 |
james_w | 1. we don't have access to our production environments | 12:25 |
james_w | 2. manual modification of environments doesn't suit our workflow | 12:25 |
james_w | we want an approval workflow that is driven from a desired state in version control | 12:25 |
rick_h_ | james_w: ah yea, though with juju auth support coming we've still got the idea of read only and such | 12:25 |
james_w | yeah, that will be nice | 12:26 |
james_w | so we can poke around | 12:26 |
rick_h_ | james_w: true, however we'll also be adding some things like linking directly to exposed ip/ports per unit in machine view and such | 12:26 |
james_w | but we'll miss out on the really nice uncommitted changes features etc. | 12:26 |
rick_h_ | so now that we show you a real 'per unit' look we can hopefully provide some useful stuff like 'kill that unit' or what would be cool with canary upgrade work to show progress in machine view | 12:26 |
rick_h_ | gotcha, yea | 12:26 |
james_w | but this all looks really nice | 12:27 |
james_w | and I'm sure it goes over well with customers | 12:27 |
rick_h_ | definitely, well hopefully some people who don't use the gui much will have some real use for it and thanks for that feedback. | 12:27 |
james_w | yeah | 12:27 |
rick_h_ | it helps us know what things we can look to offer that we don't currently | 12:27 |
rick_h_ | and check out the roadmap | 12:28 |
james_w | I'm trying to think if it could evolve such that we could use the modification features of the gui | 12:28 |
james_w | I think it would only really be useful for testing out changes, and then exporting a diff when finished | 12:28 |
james_w | without some very substantial modifications | 12:29 |
johnmc | rick_h_: The updated gui looks excellent. A very impressive step up. I see I can get the source on github. Is there a ppa or other repo I can download it from? | 12:38 |
rick_h_ | johnmc: juju deploy juju-gui | 12:38 |
rick_h_ | johnmc: well juju deploy trusty/juju-gui | 12:38 |
johnmc | rick_h_: of course. Thanks. | 12:38 |
rick_h_ | johnmc: if you mean source for hacking, there's a LP release tarball as well https://launchpad.net/juju-gui/+download | 12:39 |
rick_h_ | johnmc: but yea, deploy it to check it out in your environment | 12:39 |
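For anyone following along, the deploy rick_h_ describes is only a couple of commands. A minimal sketch, assuming a bootstrapped juju 1.x environment (the expose step and status filter are my additions, not from the log):

```
juju deploy trusty/juju-gui    # the command given above
juju expose juju-gui           # open the GUI's ports
juju status juju-gui           # the unit's public address serves the GUI
```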
=== fabrice|lunchpar is now known as fabrice | ||
rm4 | is it possible to avoid haproxy being a single point of failure by adding extra instances with add-unit? | 13:26 |
aisrael | rick_h_: nice work on the new gui. Watching the ghost walkthrough now. That's slick. | 13:29 |
rick_h_ | aisrael: thanks, team worked long and hard on that and exciting to get it out there. | 13:29 |
rick_h_ | rm4: I'm not 100% sure but it looks like it provides some peer relation-fu that would seem to work towards that end. http://bazaar.launchpad.net/~charmers/charms/precise/haproxy/trunk/view/head:/hooks/hooks.py#L563 | 13:33 |
rick_h_ | rm4: it'd be a good bug on the charm to update the readme to address your question directly as I'm sure others wonder the same thing | 13:33 |
rm4 | rick_h_: I can create the peering and get the following | 13:47 |
rm4 | backend haproxy_service | 13:48 |
rm4 | mode tcp | 13:48 |
rm4 | option tcplog | 13:48 |
rm4 | balance leastconn | 13:48 |
rm4 | server haproxy-0 10.0.3.242:81 check | 13:48 |
rm4 | server haproxy-1 10.0.3.188:81 check backup | 13:48 |
rm4 | so it does peer fine, however I'm not sure if it has a VIP (keepalived, for example), as when I destroy the first server its ip address is not accessible | 13:50 |
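A rough sketch of the add-unit flow rm4 describes, with a hedged note on the VIP question; the hacluster suggestion is my reading of the situation, not something confirmed in this log:

```
juju deploy cs:precise/haproxy   # the charm linked by rick_h_ above
juju add-unit haproxy            # second unit joins the peer relation and
                                 # shows up as the "backup" server seen above
juju status haproxy
# Assumption: a floating VIP (keepalived/corosync) would still need something
# like the hacluster subordinate; add-unit alone does not move the public IP.
```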
lazyPower | rick_h_: Consider it done :) | 13:51 |
rick_h_ | lazyPower: ty much | 13:52 |
avoine | hey bloodearnest, how did your presentation at PyCon UK go? | 13:58 |
rm4 | rick_h_: I have submitted bug 1374465 | 14:02 |
mup | Bug #1374465: Readme does not have a peering section allthough peering is permitted. <haproxy (Juju Charms Collection):New> <https://launchpad.net/bugs/1374465> | 14:02 |
rick_h_ | rm4: awesome thanks! | 14:03 |
rm4 | rick_h_: of course | 14:04 |
bloodearnest | avoine: went ok thanks. Some issues with the gui freezing up prevented the full demo | 14:09 |
bloodearnest | avoine: but mfoord is doing the same demo at PyCon India this weekend | 14:09 |
rick_h_ | bloodearnest: :( | 14:09 |
rick_h_ | bloodearnest: any hint what was up? | 14:09 |
bloodearnest | rick_h_: well, it was a mac, using parallels to run a vm, deployed to local provider, and untested. We had to use mfoord's mac because my laptop wouldn't connect to the projector :/ | 14:10 |
rick_h_ | bloodearnest: ok, well let me know if there's anything we can help take a look at | 14:11 |
bloodearnest | avoine: I added some support for the django charm to cope with multi-unit postgres services, and it handles failover nicely | 14:11 |
rick_h_ | bloodearnest: and let mfoord know machine view is out for any follow up talks in case it fits the demo/material you all covered | 14:11 |
bloodearnest | avoine: also, added dj-static as a simple static asset solution. We're using it in prod, with a squid in front it works well. Simple deployment. | 14:12 |
bloodearnest | rick_h_: yeah, we didn't get to the bottom of it. But I think macs hate me, so that's probably it. Feeling's mutual :) | 14:12 |
rick_h_ | bloodearnest: hah ok | 14:12 |
lazyPower | sinzui: question for you. I have ~ 4 bugs i need to retarget away from the charms collection to point to a personal branch - and i can't seem to do that in launchpad via the project data point - any hints would be helpful : https://bugs.launchpad.net/charms/+bug/1374267 | 14:13 |
mup | Bug #1374267: ctap-sampleapp crashes on logout <Juju Charms Collection:New> <https://launchpad.net/bugs/1374267> | 14:13 |
sinzui | lazyPower, Lp only permits projects, distros, and distro-packages to have bugs | 14:13 |
lazyPower | so i can't retarget these bugs against the users namespaced charm? | 14:14 |
sinzui | lazyPower, that's right | 14:14 |
lazyPower | D: that seems... not right | 14:15 |
sinzui | lazyPower, a branch is personal | 14:15 |
lazyPower | as a personal namespaced charm is still a project in LP | 14:15 |
lazyPower | or am i misunderstanding a core tenet of LP's structure? | 14:15 |
sinzui | lazyPower, when issues are shared by a group, the branch needs to be promoted to the project level...but Lp is too ass-backwards to explain that | 14:15 |
avoine | bloodearnest: yeah, I saw your MP, it is pretty cool. I'll definitely check dj-static | 14:16 |
marcoceppi | lazyPower: we need to create the package in lp for charms first off | 14:16 |
bloodearnest | avoine: the implementation of installing in the charm and running collectstatic is a quick hack, I think it could be done better | 14:17 |
sinzui | lazyPower, Lp is not about the individual, it actually alienates the opportunistic developer by calling work +junk. Lp wants groups to collaborate, but never explains one person needs to create something valuable, then share it as a project | 14:17 |
bloodearnest | avoine: collectstatic is not actually needed with dj-static | 14:17 |
lazyPower | i guess what confuses me is https://bugs.launchpad.net/charms/trusty/+source/hsenidmobile-ctap-sampleapp - leads me to believe i could do this without much fuss | 14:17 |
avoine | bloodearnest: I was planning adding a subordinate that install django-storage and connect to s3 or swift | 14:18 |
avoine | but for now I was using this: https://code.launchpad.net/~patrick-hetu/+junk/django-contrib-staticfiles | 14:18 |
marcoceppi | lazyPower: we need to make a new tag, that's like "not-a-charm" | 14:18 |
lazyPower | marcoceppi: agreed. I'll tag these with that exact phrasing | 14:19 |
marcoceppi | that we can have review-queue ignore for the time being | 14:19 |
marcoceppi | I'm releasing new review-queue today, I can add that in there | 14:19 |
sinzui | lazyPower, since the developer is the "first" community, Lp was never going to attract developers like github | 14:19 |
lazyPower | sinzui: thanks for the clarification. I'm a little saddened by this news but it's not the end of the universe. | 14:20 |
bloodearnest | avoine: swift is not yet as good as s3 for serving static assets | 14:21 |
bloodearnest | avoine: we are taking the approach of using dj-static and sticking squid in front | 14:21 |
avoine | bloodearnest: yeah I bet that works pretty well | 14:21 |
bloodearnest | avoine: two big advantages are 1) single deployment target (just your django units need updating) and 2) same in dev as in prod (as dj-static works in dev too) | 14:22 |
bloodearnest | avoine: but for a lot of large assets, s3 might be better | 14:22 |
bloodearnest | like videos | 14:22 |
bloodearnest | and mp3s | 14:22 |
avoine | yeah | 14:23 |
avoine | bloodearnest: any thoughts on the python vs ansible approach? | 14:24 |
bloodearnest | avoine: so, I'm a bit confused. You are using ansible, but *not* the hooks integration, just apply_playbook. And you also have ansible for controlling juju from the hosts? | 14:29 |
avoine | bloodearnest: I was planning to use AnsibleHooks and using it with Juju on the hosts | 14:33 |
avoine | bloodearnest: but I'm not sure if the overhead of Ansible is worth the trouble | 14:33 |
bloodearnest | avoine: great. I'd be happy to help with that. | 14:33 |
bloodearnest | avoine: right. | 14:33 |
lazyPower | avoine: are you planning on submitting this charm for recommended status from the charm store? | 14:34 |
avoine | lazyPower: no not soon | 14:34 |
avoine | lazyPower: I'll stick with the pure python for now | 14:34 |
lazyPower | ok. I was going to interject that if you want help wrapping that up in proper charm format, we have an ansible charm template for use, and integrating it into juju hooks is pretty straightforward | 14:34 |
lazyPower | if you're using a recent version of charm tools, `charm create -t` gives you some options for that | 14:35 |
avoine | nice I didn't know that | 14:35 |
bloodearnest | lazyPower: nice! | 14:39 |
lazyPower | uh oh, i get the feeling we haven't been very forthcoming with the information about charm create -t | 14:39 |
* lazyPower prepares an email to the list | 14:40 | |
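A quick sketch of the charm-tools flow lazyPower mentions; treat the "ansible" template name as an assumption, since available templates vary by charm-tools version:

```
sudo apt-get install charm-tools     # or pip install charm-tools
charm create --help                  # shows the -t/--template option mentioned above
charm create -t ansible my-charm     # "ansible" as the template name is an assumption
```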
bloodearnest | lazyPower: also, have you seen my charm helpers branch that adds super-simple "actions"? We make extensive use of juju run <unit> "actions/some-action a=b c=d" type stuff, which this branch makes easy to integrate with ansible | 14:45 |
lazyPower | bloodearnest: i have not, but if its not in the review queue chances are I didn't see it | 14:45 |
bloodearnest | https://code.launchpad.net/~bloodearnest/charm-helpers/ansible-actions/+merge/233428 | 14:45 |
lazyPower | http://review.juju.solutions | 14:45 |
johnmc | As discussed yesterday with natefinch and sinzui, I can no longer create LXC containers using juju-maas on one of my machines. | 14:52 |
johnmc | Working back through my history to the last thing I did before it broke, I might have found something. | 14:52 |
johnmc | I checked-out (bzr branch) the precise haproxy charm onto my machine and tried to deploy it into a trusty LXC machine | 14:53 |
johnmc | I then back-tracked and realised I should have been working with hacluster under trusty | 14:53 |
bloodearnest | lazyPower: I don't see my branch on there, what do I need to do to make it appear? It should be under tools, right? | 14:53 |
johnmc | Does anyone know if that could explain my now-broken environment? | 14:54 |
lazyPower | bloodearnest: that queue may not be implemented yet. I know we're moving at breakneck speed on the new queue | 14:54 |
johnmc | I'm also finding it impossible to deploy hacluster linked to glance. | 14:55 |
bloodearnest | lazyPower: coolio | 14:55 |
lazyPower | bloodearnest: maybe it's worthwhile to open a bug against the review queue so we can track progress on implementation | 14:55 |
lazyPower | bloodearnest: for now it lives here but will eventually be moved to github.com/juju-solutions --- https://github.com/marcoceppi/review-queue | 14:56 |
johnmc | When I deploy the glance charm and fire up a new pair of glance instances, then link that to a new hacluster instance, I get this in the logs: http://pastebin.com/ZtNA7UmT | 14:58 |
johnmc | That big long list of "INFO hanode-relation-changed Cleaning up res_glance_haproxy:0 on juju-machine-.-lxc-.." lines represents every glance instance I think I've ever had and subsequently destroyed. | 15:01 |
johnmc | Things look really badly broken. Can anyone help? | 15:01 |
johnmc | natefinch: Are you around to help? | 15:05 |
natefinch | johnmc: yes, but trying to drum up someone more helpful ;) | 15:05 |
natefinch | johnmc: glance and hacluster are beyond my knowledge | 15:06 |
natefinch | johnmc: did you get any more help from sinzui last night? I saw you got an upgrade half-finished, which is never a great place to be | 15:07 |
sinzui | Ah, no, I had to switch to OS X to complete the release | 15:07 |
johnmc | natefinch: I didn't make any progress on that. sinzui said that the upgrade can't be done while those LXC (requested) machines are there. I have no idea what to do next. | 15:08 |
sinzui | johnmc, When the containers are there, but the machine/unit agents are down, you can restart each | 15:09 |
johnmc | natefinch: I'm hoping that the attempt to install the precise haproxy charm into trusty might turn out to be a common cause of both the LXC problem, and the hacluster problem. | 15:10 |
sinzui | johnmc, I have a brittle arm64 machine that I do this on from time to time. I think restarting the machine agent first is best | 15:10 |
johnmc | sinzui: the containers don't actually exist. I requested their creation, but they were never actually created. | 15:11 |
sinzui | johnmc, 1.18.x was a bad time for arm64, so I did restarts of the agents, then the queued upgrade was dequeued and 5 minutes later the upgrade was complete, and the agents stayed up | 15:11 |
johnmc | sinzui: I have restarted the machine agent (on the base system) many times | 15:11 |
johnmc | sinzui: there are no agents, because there are no LXC containers. That is how my system is broken. | 15:12 |
sinzui | restarting the state-server agents is orthogonal | 15:12 |
sinzui | johnmc, If lxc gets blocked on a machine a lot of manual work is needed to unblock http://pastebin.ubuntu.com/8427862/ | 15:14 |
sinzui | johnmc, I think you need to confirm which containers exist with sudo lxc-ls --fancy | 15:14 |
johnmc | I have already been through that in detail with nate yesterday. The LXC containers do not exist. I pasted this output yesterday | 15:15 |
sinzui | johnmc, Since you cannot destroy your env and start over, I think you need to use remove-unit or destroy-machine to make juju forget about the failed containers and try again | 15:15 |
sinzui | johnmc, I don't have experience with destroying maas lxc containers | 15:16 |
johnmc | http://pastebin.com/aGUAyizu | 15:16 |
sinzui | johnmc, is this trusty with cloning enabled? | 15:17 |
johnmc | sinzui: I have used "destroy-machine" and used --force to no avail | 15:17 |
johnmc | sinzui: This is trusty. How is cloning enabled/disabled? I'm not familiar with that setting. | 15:18 |
johnmc | I followed on online guide | 15:18 |
sinzui | johnmc, are there locks named after lxc in /var/lib/juju/locks/? | 15:19 |
johnmc | sinzui: No such lock files on either the host system (machine 1) or the juju-agent machine. Where should I be looking for these? | 15:21 |
sinzui | johnmc, the machine with the containers. When lxc doesn't create containers we investigate the host machine | 15:22 |
sinzui | johnmc, the state-server doesn't do work. other machines ask it for a list of tasks. so when a machine has issues, we visit to investigate the local problem | 15:24 |
johnmc | sinzui: There are no lock files. | 15:24 |
johnmc | sinzui: As I explained to natefinch yesterday, there is no evidence that the host system ever received any request for new LXC containers | 15:25 |
* mgz reads log | 15:25 | |
mgz | johnmc: do you want to just clean up the lxc container stuff completely? | 15:26 |
sinzui | johnmc, interesting, but the order of events is the host machine agent asks the state server for work. status shows the state-server is waiting on the machine to do its part | 15:26 |
johnmc | sinzui: This failure appears to be totally silent with regard to log files. No files under /var/log change at all on the host system in response to a new LXC request. | 15:29 |
johnmc | sinzui: If we had any log output at all we might get somewhere. How is this to be done? | 15:29 |
sinzui | johnmc, I only know to look for logs in /var/log/juju. when there are no logs or logging stops, that might mean the files were removed, confusing the agents. restarting the agents will recreate the logs and there will be a flood of dequeued messages | 15:31 |
johnmc | sinzui: As requesting new LXC containers has no impact, is it possible that there is a blocked queue of requests? | 15:32 |
johnmc | sinzui: I've restarted the jujud for machine-1 many times, and verified (just now) that there are no deleted files being written to. | 15:33 |
johnmc | sinzui: I used lsof to check open files. /var/log/juju/machine-1.log is used by jujud and is present | 15:33 |
johnmc | sinzui: it's as though the juju-agent machine sends no requests to the host machine at all. | 15:34 |
sinzui | oh... | 15:37 |
=== fabrice_ is now known as fabrice|out | ||
johnmc | sinzui: Where do I go from here? | 16:04 |
sinzui | johnmc, I am not sure what to do in this case. I don't have any experience with this. When the machine agent cannot talk to the state server, the logs will scream about it. You don't see this issue in the machine-1.log though. | 16:10 |
sinzui | johnmc, I assume the machine-1.log you are reading never mentions ERROR. | 16:12 |
johnmc | sinzui: The machine agent connects to the server. Netstat shows this has an open tcp connection to port 17070. However, no activity takes place. Shouldn't the jujud on the juju-agent server log something somewhere? | 16:14 |
sinzui | johnmc, The machine-1 log should be stating it called home. The all-machines.log on the state server puts all the actions in context. | 16:16 |
johnmc | sinzui: the machine agent seems to lose its connection to the juju-agent at least once a day. This is the latest log: http://pastebin.com/2PP8CK8G | 16:16 |
sinzui | johnmc, This is obviously a bug. Can you report the issue at https://bugs.launchpad.net/juju-core and attach the all-machines.log for the developers. Please review the log though, Juju likes to include certs and passwords that you will want to redact | 16:17 |
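For anyone in johnmc's position, collecting the log sinzui asks for might look roughly like this; the file path is the usual juju 1.x location on the state server, and the redaction pattern is only an illustration:

```
# Pull all-machines.log from the state server (machine 0) and scrub it
# before attaching it to the bug report:
juju ssh 0 'sudo cat /var/log/juju/all-machines.log' > all-machines.log
grep -c ERROR all-machines.log                                  # quick sanity check
sed -i 's/password: .*/password: REDACTED/' all-machines.log    # pattern is an assumption
```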
sinzui | johnmc, which version of juju did the env start as, and which have you upgraded to? | 16:19 |
johnmc | sinzui: it was 1.18.1 originally | 16:24 |
sinzui | okay, that explains the upgrade already at 1.18.4 message | 16:27 |
sinzui | though it implies something has not completed an upgrade to 1.18.4 | 16:27 |
bic2k | does juju have any plans to directly support deploying a Docker container? | 17:21 |
=== scuttlemonkey is now known as scuttle|afk | ||
natefinch | bic2k: we've talked about docker a lot internally... you can certainly write a charm that deploys a docker container, in fact I wrote such a charm very recently. Docker and juju work fine together now... what would you like to see done differently? | 17:30 |
bic2k | natefinch: I also just finished writing a charm for deploying some internal docker services. Perhaps that's all we need :-) | 17:32 |
natefinch | bic2k: I think the one thing that makes docker charms special is that it's relatively "safe" to deploy multiple of them to a single machine, rather than needing to put each of them in a separate LXC container (since they're already contained, of course) | 17:33 |
bic2k | natefinch: Private docker registry was a bit of a challenge for us. | 17:35 |
lazyPower | https://plus.google.com/100016143571682046224/posts/PaVGh51FYCR - i'm just going to leave this here... | 17:35 |
natefinch | bic2k: ahh, yeah, I was deploying from a publicly available image, so that wasn't a concern | 17:35 |
bic2k | natefinch: You deploying on 12.04 or 14.04? | 17:36 |
natefinch | bic2k: 14.04 | 17:36 |
bic2k | natefinch: We hit some issues with getting docker installed on 12.04 through apt. Mostly the CLI tools to add a repo insist on adding deb-src and then fail when it isn't there. | 17:37 |
natefinch | bic2k: why are you using 12.04? | 17:38 |
bic2k | natefinch: our cluster is old and that's what it's on :-) | 17:38 |
natefinch | bic2k: ahh :) | 17:38 |
=== CyberJacob|Away is now known as CyberJacob | ||
=== roadmr is now known as roadmr_afk | ||
=== fabrice|out is now known as fabrice | ||
avoine | lazyPower: what is the bundletester command I should run to test the python-django charm like you do? | 19:40 |
avoine | lazyPower: also are you testing with python3 ? | 19:41 |
lazyPower | avoine: negative. python2 | 19:41 |
avoine | ok | 19:41 |
lazyPower | avoine: just `bundletester -F` after i cd into the charmdir | 19:41 |
avoine | ok | 19:41 |
lazyPower | pip install bundletester into a venv | 19:41 |
lazyPower | then you get the same results we do when testing / CI does | 19:41 |
avoine | ok | 19:41 |
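Putting lazyPower's steps together, a local run might look like this; the virtualenv path and charm directory are assumptions:

```
virtualenv ~/venvs/bt && . ~/venvs/bt/bin/activate
pip install bundletester
cd ~/charms/trusty/python-django    # your local charm checkout (path assumed)
bundletester -F                     # the flag lazyPower gives; Python 2, as noted above
```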
lazyPower | avoine: hth - i'm EOD'd for the rest of the day to prep for my show. If you need anything else, i'll be around this weekend/monday | 19:42 |
=== lazyPower is now known as lazyPower|Spinni | ||
avoine | ok, thanks | 19:42 |
sebas538_ | hi! | 20:42 |
=== sebas538_ is now known as sebas5384 | ||
sebas5384 | question: how does bundle.yaml deal with local charms? | 20:42 |
sebas5384 | in the charm property, could be like | 20:43 |
sebas5384 | charm: local:precise/drupal | 20:43 |
sebas5384 | but how do I specify the local repository | 20:43 |
sebas5384 | ? | 20:44 |
=== fuzzy_ is now known as Ponyo | ||
=== roadmr_afk is now known as roadmr | ||
sebas5384 | the machine view is in https://jujucharms.com \o/ | 21:10 |
sebas5384 | uhuu!! testing already :) | 21:10 |
arosales | lazyPower|Spinni: just the stream for your session correct? | 21:23 |
arosales | not hangout or anything like that | 21:23 |
lazyPower|Spinni | arosales: not this week. maybe after brussels we'll set up a live in-studio session to go with it. | 21:26 |
lazyPower|Spinni | replies will be latent, i set up a mixing table this week. | 21:26 |
* lazyPower|Spinni afk's again | 21:26 | |
d4rkn3t | hello guys, one question for MaaS and juju: is there a way to run the security upgrades on nodes added to MaaS, and for the charms on juju, without doing it one by one? thanks. If not, it might be a suggestion to add as a feature on MaaS. Can anyone answer me? | 21:44 |
bic2k | d4rkn3t not that I have tried this, but you may be able to use the ssh command to run the appropriate apt-updates one machine at a time | 21:44 |
d4rkn3t | I'd like to avoid that, because we've got 500 virtual servers | 21:45 |
d4rkn3t | i'd like to use MaaS to do that | 21:46 |
d4rkn3t | for example select the nodes and launch the upgrade!!! | 21:46 |
d4rkn3t | it takes too long to do that one by one | 21:47 |
d4rkn3t | the same thing is valid also for the charms deployed with juju | 21:50 |
d4rkn3t | if I wanted to upgrade, for example, MySQL deployed using juju, is there a procedure to do that? | 21:51 |
=== CyberJacob is now known as CyberJacob|Away | ||
rick_h_ | d4rkn3t: there's a juju run command, and landscape is great at updates across hardware. | 22:48 |
rick_h_ | d4rkn3t: so in the mysql sense I'd juju run that across my mysql service. | 22:49 |
rick_h_ | d4rkn3t: https://www.youtube.com/watch?v=2d5KdQjXCBs is a cool video on juju run and juju run --help has some basic info. | 22:51 |
=== lazyPower|Spinni is now known as lazyPower | ||
lazyPower | I deny knowing anything about juju run :P | 23:16 |
lazyPower | rick_h_: good looking out - i just got off stream and was going to suggest the same thing. hi5 | 23:16 |
hazmat | rick_h_, lazyPower juju run --all "apt-get update && apt-get upgrade" | 23:22 |
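Expanding hazmat's one-liner slightly; the --service scoping matches what rick_h_ suggests for mysql, and the -y flag is my addition so the upgrade runs unattended (juju 1.x syntax assumed):

```
# Fleet-wide, as hazmat shows:
juju run --all "apt-get update && apt-get -y upgrade"
# Or scoped to a single service, e.g. the mysql case rick_h_ mentions:
juju run --service mysql "apt-get update && apt-get -y upgrade"
```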
mwenning | marcoceppi, can you specify constraints when you deploy amulet? | 23:24 |