=== menn0_ is now known as menn0
=== jcsackett_ is now known as jcsackett
=== vladk|offline is now known as vladk
=== zz_swebb is now known as swebb
=== CyberJacob|Away is now known as CyberJacob
=== sarnold is now known as sarnold_
=== vladk is now known as vladk|offline
=== vladk|offline is now known as vladk
=== vladk is now known as vladk|offline
[08:26] morning all - is anyone working on a trove charm?
=== vladk|offline is now known as vladk
[10:36] jamespage: good news! 1.18 rev 2294 passes dep8 in packaging in utopic in my local test.
[10:36] Thanks axw!
[10:37] jamespage: so next: do we want to upload that, or wait for an upstream 1.18 release?
[10:37] rbasak, when's it due?
[10:37] jamespage: and I don't see an MRE for juju-core, so what's the SRU plan for Trusty?
[10:37] No idea when it's due. Nothing is listed on https://launchpad.net/juju-core/1.18
[10:37] Is there a release schedule?
[10:43] rbasak: sweet :) no worries
=== vladk|offline is now known as vladk
=== vladk|offline is now known as vladk
=== wwitzel3_ is now known as wwitzel3
=== vladk|offline is now known as vladk
=== vladk|offline is now known as vladk
=== vladk|offline is now known as vladk
=== psivaa is now known as psivaa-afk
[12:26] Hello, I’m trying to get juju running with vagrant on my mac. I used this guide: https://juju.ubuntu.com/docs/config-vagrant.html but my vagrant machine is not doing anything. Last log message is:
[12:26] Taking a nap to let Juju Gui get setup
[12:28] screenshot: https://www.evernote.com/shard/s6/sh/599d7275-386e-4673-a6aa-b980bdfb9674/47cfe11653f99b1cab31cb3f6234e6c2/deep/0/Vollbild-22.05.14-14-27.png
[12:32] lol
[12:32] who knew we were nap worthy
[12:33] therealmarv: do you know if you used the 14.04 or 12.04?
[12:34] 12.04. It is indeed a very long nap… and seems without an end
[12:34] hmm, I've not used the vagrant workflow.
I wonder if it's loading the first image for the lxc machine or something
[12:35] marcoceppi: or jcastro any idea on the delay at the gui deploy in this vagrant workflow? ^
[12:37] therealmarv: rick_h_ that's a utlemming thing, but it's basically just deploying the GUI and it takes anywhere from 30s to 5 mins
[12:38] marcoceppi: yea, sorry I was going to ping him but don't see him around atm
[12:38] therealmarv: guessing based on your irc timing that it's well past that timeframe?
[12:38] I’m sure I’m waiting longer than 5 minutes…. seems really like a nap without end ;)
[12:39] your vagrant image is tired!
[12:39] marcoceppi: is there a place to file bugs on the images?
[12:39] hehe
[12:39] rick_h_: not yet, utlemming is going to be publishing the build process as open source by end of week. lazyPower is the one who knows most about this process
[12:40] marcoceppi: ty much, I'll move to bugging lazyPower and utlemming
[12:40] sorry for the trouble therealmarv, you're on the bleeding edge, which we greatly appreciate, and we'll have to work out some debugging tips and such
[12:42] no problem. I will try a local lxc install now on a fresh virtualbox image.
[12:50] Does anyone know if it costs anything to borrow the new "cloud in a box" for 2 weeks?
[12:50] Or who might know the details about that..
[12:51] Or what channel is most appropriate to ask in? :-)
[13:29] sander^work, kirkland is the person to talk to, though I am pretty sure he's just going to say to use the form, heh
[13:34] therealmarv: are you using the trusty or the precise vagrant box?
[13:34] lazyPower: precise
[13:35] have not tried out trusty yet
[13:35] therealmarv: we're seeing intermittent issues with the trusty box. Precise is still the recommended base box pending a fix for some fringe issues
[13:36] therealmarv: The Vagrant box is pulling down an intermediary redirector to place on the Vagrant image that will forward your requests to the JujuGui instance being deployed via LXC in the container.
Did you tweak any of the settings in the Vagrantfile?
[13:36] no nothing. I followed strictly https://juju.ubuntu.com/docs/config-vagrant.html
[13:37] btw I wonder if something in environment.yaml is needed, because it is still set to amazon by default
[13:37] ok. If you destroy the box and bring it back up, does it progress past the gui setup nap?
[13:37] it should set the default to local during the boot sequence.
[13:38] I’m guessing it is doing so (inside the vm). As I said I’ve done nothing special.
[13:38] Recreating the same VM with precise had the same result: nap without end ;)
[13:40] I also used the 64bit version every time, if this helps
[13:41] Hmm.. Ok.
[13:42] jcastro, is there some online version of the courses involved with it? as I have hardware I could run it on.
[13:43] therealmarv: i'm booting, just a moment. Let me see if this is a wider problem than we are aware of
[13:43] I’m testing trusty in the meantime… also booting now
[13:45] therealmarv: when you get a chance, can you pastebin me the output from /var/log/juju-setup.log?
[13:47] ok will do… my machine is now waiting again on the critical nap (GUI)… let’s see if it goes further…
=== psivaa-afk is now known as psivaa
[13:48] relevant output would also be fetching the output of juju status, and $HOME/.juju/local/log/all-machines.log
[13:51] lazyPower: oh wow. Trusty worked! Strange… but I double checked precise, which was napping forever
[13:51] Output http://pastebin.com/yL8cygDn
[13:52] Interesting. If Trusty works for you, tally ho then :) You may encounter an issue with more than 2 or 3 machines deployed, with juju complaining about being unable to clone running containers
[13:52] it's intermittent though, so maybe it'll do fine
[13:55] therealmarv: thanks for the output. Looks like the redirector completed and the GUI didn't stand up - output from all-machines would be insightful if you've still got the instance up.
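The endless "nap" being debugged above is what an unbounded wait looks like when the step it waits on (the GUI coming up in an LXC container) silently fails. A bounded poll along these lines would surface the failure instead; this is an illustrative sketch, not the actual juju-vagrant setup script, and `gui_is_up` is a hypothetical probe:

```python
import time

def wait_for(check, timeout=300, interval=5):
    """Poll check() until it returns True or timeout seconds elapse.

    Returns True on success, False on timeout -- so a stalled GUI/LXC
    setup produces a visible failure instead of an endless nap.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False

# Hypothetical usage: gui_is_up would probe the redirector/GUI port.
# if not wait_for(gui_is_up, timeout=600):
#     raise RuntimeError("Juju GUI did not come up; check juju-setup.log")
```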
[13:56] I will rerun the precise setup… now
[13:57] sander^work, yep: https://insights.ubuntu.com/2014/05/21/ubuntu-cloud-documentation-14-04lts/
[13:57] we're working on the html documentation now, but basically it's Juju/MAAS/OpenStack
[14:04] jcastro, a few years ago, I got stuck with a particular error targeted at the bios/hardware.. I guess it's fixed now :-)
[14:04] Cool.. documentation looks nice.
[14:04] it really depends
[14:05] how well the IPMI MAAS stuff will work
[14:05] but there's been a ton of fixes this cycle, so I am willing to bet it will work
[14:05] if not, at worst you'll have to manually power on the machines, but you can cross that bridge when you get there
[14:06] No problem.. as I'm using some remotely controlled blade servers to test with.
[14:06] sander^work: jcastro it's also pretty trivial to make a new power module for MAAS
[14:06] it's a plugin based system
[14:07] so if the blade server has an API, you could make a new power type in MAAS that knows how to talk to it
[14:07] Cool.
[14:07] Does it integrate with powering it up, for testing, on vmware or hyper v?
[14:09] sander^work: there's incoming work to integrate it with HyperV, i'm unsure of the status of MAAS with VMWare
[14:10] Ok.
[14:12] sander^work: there's been some discussion over VMWare on the mailing lists. I would encourage you to join the mailing list and ask the community at large about the status and whether anyone has any insights for you with regard to your specific requirements.
[14:15] lazyPower: juju-setup.log from precise: http://pastebin.com/t6EDmHTx
[14:15] lazyPower: Precise is napping forever. Triple checked now.
[14:16] therealmarv: that's the same output from the juju-setup log, can you vagrant ssh in and get me the output from $HOME/.juju/local/log/all-machines.log?
[14:17] sorry for the multi-log request, and thanks for helping!
=== hatch__ is now known as hatch
[14:20] @lazyPower: here it is http://pastebin.com/qxi0ygSA np
[14:21] therealmarv: Brilliant, thank you!
Looks like the LXC container creation failed, and that caused the script to hang.
[14:21] I'll take this output and move from here. Thanks again therealmarv - you've been a big help
[14:21] lazyPower: look at the last line: ubuntu-cloudimg-query trusty released amd64 --format '%{url}\n'; confused by argument: trusty; + url1=; container creation template for vagrant-local-machine-1 failed; Error creating container vagrant-local-machine-1
[14:21] glad I could help :)
=== cory_fu2 is now known as cory_fu
=== tedg is now known as ted
=== hatch__ is now known as hatch
=== qhartman_ is now known as qhartman
[14:53] hey all, I'm having some issues with juju upgrade-charm and need a sanity check on whether I'm seeing a bug or I'm just doing it wrong
[14:54] I run it like this: juju upgrade-charm --switch=local:trusty/mycharm --repository=file/repo myservice
[14:55] and it tells me it's updating the charm (and increments the version by 1)
[14:56] when I look on the unit, a la cat /var/lib/juju/agents/unit-uaa-0/charm/.juju-charm
[14:57] the version number is several behind what was reported by upgrade-charm
[14:57] all service units are resolved
[14:57] whit: what does juju status show?
[14:58] hey marcoceppi :), https://gist.github.com/whitmo/7acc7d87f7351df6cede
[14:59] marcoceppi, maybe I jumped the gun. I see the upgrading there
[14:59] whit: yeah, it may take a hot second
[14:59] whit: also you don't need to do --switch when just upgrading the charm
[15:00] switch is more like "I want to go from charm store to local" or vice versa
[15:00] it's just as crazy clobberful as the --to flag
[15:00] marcoceppi, but local to local is fine?
[15:00] whit: yeah, so you can do upgrade-charm and point to a different --repository and everything without using --switch
[15:00] just as long as the revision file number is greater than what is currently deployed
[15:01] marcoceppi, how is a revision tracked on a local charm?
[15:01] the revision file
[15:02] it holds an arbitrary number that represents its "revision"
[15:02] ah
[15:02] marcoceppi, is bumping that necessary sometimes?
[15:02] whit: never since 1.18
[15:02] it'll automatically do a +1 bump on upgrade-charm
[15:03] in past versions you had to do a -u flag, or manually increment it
[15:03] marcoceppi, not seeing that
[15:03] this is stuck at 001 (though juju is adding a +1 to each description it pushes out)
[15:05] anyway, thanks marcoceppi !
[15:32] marcoceppi, I had some old juju hanging around. thanks again...
[15:34] jamespage: do you have a minute? I've been testing "future" releases of juju so that the current Utopic bugs won't happen again.
[15:35] rbasak, bit busy right now
[15:35] jamespage: OK, I can leave it for now.
[15:35] rbasak, tomorrow am?
[15:35] jamespage: sure
=== scuttlemonkey_ is now known as scuttlemonkey
=== vladk is now known as vladk|offline
[15:59] I am doing a test deployment of openstack icehouse on 14.04. I am trying to use a dedicated network for ovs gre communications; however, it is currently using the wrong NIC interface. I can see inside ./plugins/ml2/ml2_conf.ini the value local_ip that looks to be responsible for this, but I cannot see how this can be changed to another IP on another NIC within juju
=== roadmr is now known as roadmr_afk
=== roadmr_afk is now known as roadmr
=== roadmr is now known as roadmr_afk
=== hatch__ is now known as hatch
=== nottrobin is now known as cHilDPROdigY1337
[16:40] I did notice that a chef build had this issue but it was patched; you can see this here https://community.rackspace.com/products/f/45/t/3245
=== cHilDPROdigY1337 is now known as nottrobin
=== roadmr_afk is now known as roadmr
=== roadmr is now known as roadmr_afk
[17:42] jose: hello
[17:45] I'm trying to figure out whether I can use Juju to deploy CoreOS units.
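The local-charm revision handling discussed above (a plain integer in the charm's `revision` file, bumped by +1 automatically by `upgrade-charm` since juju 1.18) can be sketched roughly like this; the function name and layout are illustrative, not juju's actual implementation:

```python
import os

def bump_revision(charm_dir):
    """Read the charm's plain-integer `revision` file and bump it by 1,
    mimicking what `juju upgrade-charm` does automatically since 1.18.
    (Illustrative sketch only, not juju's code.)"""
    path = os.path.join(charm_dir, "revision")
    with open(path) as f:
        revision = int(f.read().strip())
    with open(path, "w") as f:
        f.write(str(revision + 1))
    return revision + 1
```

Before 1.18 this bump was manual (edit the file, or pass `-u`); the upgrade only takes effect when the new revision is greater than the deployed one.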
[17:52] Cannot Download Charm From juju-gui | http://askubuntu.com/q/470730
=== roadmr_afk is now known as roadmr
=== vladk|offline is now known as vladk
=== vladk is now known as vladk|offline
[18:11] bodie_: yeah that would be awesome
=== sebas538_ is now known as sebas5384
[18:12] I saw a talk about the future of containers
=== vladk|offline is now known as vladk
[18:25] and I think topology problems won't be the problem, because we will have something like "clusters of processes"
[18:26] like openshift does with cartridges x gears
[18:26] hey mruzek , did you have a chance to review the remote pdb merge?
[18:36] sebas5384, yeah, have you checked out deis.io? I think it runs on fleet
[18:40] bodie_: hmm i will take a look
[18:40] fleet being a container manager for coreos
[18:40] bodie_: yeah! exactly that
[18:40] basically open source heroku
[18:40] looks soooo cool
[18:42] juju bootstrap time-zone 7:00 hours front of server | http://askubuntu.com/q/470751
[18:42] yeah
[18:43] bodie_: the relation hooks, which are so good in juju, are always what's missing
[18:48] definitely great stuff
[19:11] jcastro: how is juju's vision aligned with paas using things like deis.io ?
[19:12] we don't really do paas, we're a tool for people to deploy paas'es
[19:32] jcastro: hmmmm that gives me a lot to think about
[19:34] sebas5384: with a bit of configuration several of our juju charms (see: rails) would work with some paas aspects of continuous deployment - however juju by itself is not a PAAS, it's IAAS; you deploy PAASes over IAAS to handle the scale of your PAAS
[19:35] which is a long winded way of repeating what jorge just said... sorry for the duplication.
[19:35] hmm thanks lazyPower
[19:38] jcastro, I'm interested in thinking about that too
[19:38] i'm wondering whether, for a team of developers, juju is needed, because production can be exactly paas, because i can scale and do redundancy too
[19:38] jcastro: do you think it would be possible to abstract a CoreOS cluster into a deploy target such that containers could be deployed on them as services?
[19:38] or are you saying that's definitively not the goal
[19:39] bodie_, I think that would be totally awesome
[19:39] aye
[19:39] as far as juju is concerned, that's just another cloud
[19:39] the same as "AWS" or "Rackspace"
[19:39] perhaps Fleet could be used as the deploy API
[19:40] I'm still trying to wrap my head around some of this
[19:40] basically, I'd really like to put together a thing where I can add CoreOS instances as needed
[19:41] and then those are deploy targets for... perhaps juju, perhaps something else
[19:41] I think maybe(?) hazmat has looked into stuff like this, not sure though
[19:41] I think the folks at deis.io are trying something similar
[19:41] cool, I'll have to ping him
[19:42] I asked them on HN why they didn't just use juju for the heavy lifting but they didn't like the license
[19:43] there was another similar PAAS that was doing some of the same things, the name escapes me though
[19:43] * hazmat steps out of charm authoring
[19:44] bodie_, yeah.. deis.. pivoted heavily in the last few months.. from chef + docker + django.. to coreos + docker + django.
[19:46] bodie_, re coreos as a target.. it's conceivable.. in a couple of forms, though perhaps not what you're thinking of... the cleanest separation with orchestration would be a set of charms.. one per app (set of containers) that upon instantiation would drive the coreos containers, and reconfigure their systemd conf based on relation changes.
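A charm driving containers the way hazmat describes might, on each relation change, regenerate a systemd unit along these lines and restart it. Everything here is hypothetical (service name, image, environment values); it is a sketch of the idea, not an existing charm:

```ini
# /etc/systemd/system/myapp-container.service  (hypothetical)
[Unit]
Description=myapp Docker container (unit file regenerated by a charm)
After=docker.service
Requires=docker.service

[Service]
# The charm rewrites the -e values from relation data, then restarts the unit.
ExecStart=/usr/bin/docker run --rm --name myapp -e DB_HOST=10.0.0.5 myapp:latest
ExecStop=/usr/bin/docker stop myapp
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

This restart-on-reconfigure pattern is exactly the "docker containers are bricks" constraint hazmat raises below: env vars are fixed at container start, so orchestration means replacing running containers.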
[19:47] bodie_, more natively is tough, since the docker full-os container support is pretty primitive
[19:47] i was playing around with the ubuntu-upstart images last night
[19:48] on the public registry, but they have some issues.. if you install some packages in them... plymouth hangs and sleeps.. looks like some events are missing for full-os container startup.
[19:52] hmm
[19:52] hazmat: it should be more like an app container, not full os
[19:52] or service container
[19:52] bodie_, the primary issue for orchestration is that a typical docker container is a brick... you get cli and env params only at runtime. if you want to orchestrate you need to be able to change that, and the only way to do that with app containers is to restart them round-robin.
[19:53] sebas5384, so for app containers.. we have a docker based charm in rethinkdb that illustrates using juju to orchestrate via restart of nodes on changes
[19:54] interesting
[19:54] making containers work in a general way (and play nice with non-containers) also means handling first class networking for containers
[19:54] I was thinking perhaps if you want to alter things you simply plop down a new container and kill the old one once it's up
[19:54] which is something we're working on atm
[19:55] bodie_, fundamentally for app containers you need to run multiples.. "new one down and old one dead" is risky around data volumes
[19:57] sebas5384, bodie_, what's the use case? are you interested in containers, or image based workflows, or?
[19:57] hazmat: so docker runs in a lxc container (or not), and then to scale you can add more machines?
[19:57] sebas5384, or add more containers on other extant machines
[19:57] hazmat: good question ;)
[19:57] sebas5384, bodie_ you can do containers today with juju .. both lxc and kvm
[19:57] even nest them :-)
[19:58] yeah!! that's one of the main reasons why i'm using it
[19:58] bodie_, also fwiw.
i just pushed out an etcd charm
[19:59] juju with openstack using docker images is interesting too
[19:59] cs:~hazmat/trusty/etcd
[19:59] sebas5384, it is.. although it gets really weird there.. a full-os container makes more sense at that layer/level.
[20:00] ie.. doing things like block volume attach from cinder doesn't map to docker..
[20:00] whereas a full-os container would have no issues with that
[20:00] sebas5384, the docker heat integration seems a bit better suited to single process app containers.
[20:01] hazmat: didn't know about that docker x heat integration
[20:02] honestly.. i think lxc containers (full os) map better as nova compute instances... but i do like the image workflow and portability that docker brings to the table
[20:02] hazmat: yeah, exactly
[20:03] i'll be at dockercon in a few weeks if either of you are around
[20:03] there's a new openstack working group on containers as well
[20:03] now i'm thinking how i can do paas with juju (not just deploying)
[20:04] sebas5384, so full enterprise paas.. we've got some cloudfoundry charms in the works (pretty early stages)
[20:04] hazmat: nice! well i won't be there, but it will be nice to have a hangout to discuss this kind of thing :)
[20:05] but it does basic app deploys into warden (cloud foundry's container impl ;-)
[20:06] sebas5384, sounds good.. jcastro would you be up for organizing something like that?
[20:06] hangout on air style
[20:06] nice hazmat, taking a look at it
[20:06] maybe i was using juju for the wrong thing after all hehe
[20:06] yeah! hazmat that would be great
[20:07] sebas5384, there are definitely folks that treat juju as a paas.. and that can work..
but there's a pretty good blog post on what paas should mean from the cf folks
[20:07] * hazmat digs
[20:07] sebas5384, http://blog.cloudfoundry.org/2013/10/24/essential-elements-of-an-enterprise-paas/
[20:08] thanks hazmat, that will be my reading for later :D
[20:08] hazmat: Buildpack support and the relation between each one
[20:09] is what i'm concerned about
[20:09] because of building an app with some custom configurations of the nginx service or php, for example
[20:14] sebas5384, in twelve factor apps.. the buildpack is generally independent of service usage. ie. relations inject env/conf variables for the built app.
[20:15] re twelve factor app .. per heroku paas methodology, though a bit more general than that .. http://12factor.net/
[20:16] * hazmat dives back into charm authoring
[20:17] hazmat and bodie_ thanks! great talk, I'm going to think more about this
[20:17] jcastro: please please can we schedule some hangout on air about this topic? i think it is really interesting
[20:21] sebas5384, like an open-ended discussion about paas/docker/containers, etc?
[20:21] sure, I'll do it after the troubleshooting I and II and LXC debugging
[20:22] jcastro: yeah!
and how Juju can be used as PaaS or just as IaaS
[20:22] great :)
[20:25] because of the fact that a drupal or wordpress charm exists, paas (not completely) can be achieved
[20:25] with juju
[20:26] I think I would see juju more as the tool that the cluster admins / paas provider would use to deploy the services that would act as the PaaS, managing containers
[20:27] but I'm not sure I have all that straight in my head
[20:30] yeah bodie_ i'm confused now about that
[20:30] because I have a startup implementing devops in the company, and in our clients
[20:31] so, i have to choose very well the tools we are going to use
[20:31] to automate things
=== vladk is now known as vladk|offline
[20:52] hey lazyPower http://awesomescreenshot.com/02e2unug39
[20:52] hehe
[20:53] i'm everywhere
[20:54] haha spooky
=== CyberJacob is now known as CyberJacob|Away
=== CyberJacob|Away is now known as CyberJacob
=== CyberJacob is now known as CyberJacob|Away
=== CyberJacob|Away is now known as CyberJacob
[21:19] arosales: pong
[21:19] arosales: sorry, was taking an exam :) what's up?
[22:01] jose: hi. I hope your exam went well.
[22:01] kinda :P
[22:02] :-)
[22:03] jose: wanted to thank you for all the work you are doing on improving quality in the charm store
[22:03] and also wanted to see if you would be interested in a project that is very near and dear to me.
[22:04] well, what is it about?
[22:04] The Great Charm Audit of 2014
[22:04] https://lists.ubuntu.com/archives/juju/2013-December/003331.html
[22:04] it's also become important as we try to move charms forward onto trusty
[22:05] it would be solid to audit charms and also get them moved onto Trusty during that audit
[22:05] The thought is that with amulet tests for each charm we can then test against Trusty and see if we can actually move it forward
[22:06] the audit is something I will try to work on, I think I have the checklist :)
[22:06] about amulet tests...
I'm not a python person, but I can write a basic deployment test; not sure how I would be able to handle service verification
[22:11] good to hear you are interested in the audit :-)
[22:12] jose: we could work with you to get ramped up on amulet. A lot of the work would be looking at relations and config testing per charm
[22:13] arosales: well, I'll be more than happy to help wherever I can, that's for sure
[22:14] jose: it's a good opportunity to learn how different charms work and get some good merge proposals
[22:15] as well as learn some python in between!
[22:15] jose have you seen https://docs.google.com/a/canonical.com/spreadsheet/ccc?key=0Aia4W3c4fbL-dGs4SVBJMGRIdnlSMWhzSmo3WE1mZ1E&usp=drive_web#gid=0
[22:15] yeah, it's open in my browser
[22:15] there's *lots* to do in there
[22:16] ah great, so that list is sorted with highest priority at the top
[22:16] anything that is of interest that is in yellow needs doing :-)
[22:16] I think 'passes proof' is the easiest to get done :P
[22:30] jose, tests don't have to be amulet
[22:30] jose, it's any executable, just like hooks
=== vorpalbunny is now known as thumper
=== CyberJacob is now known as CyberJacob|Away
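The "any executable, just like hooks" point above implies a very small test runner: execute each executable file in the charm's tests directory and treat exit status 0 as pass. A sketch of that convention (illustrative only, not charm-tools itself):

```python
import os
import stat
import subprocess

def run_charm_tests(tests_dir):
    """Run every executable file in tests_dir, charm-test style:
    exit status 0 = pass, anything else = fail.
    Returns {test_name: passed}. Illustrative sketch of the convention."""
    results = {}
    for name in sorted(os.listdir(tests_dir)):
        path = os.path.join(tests_dir, name)
        mode = os.stat(path).st_mode
        if os.path.isfile(path) and mode & stat.S_IXUSR:
            results[name] = subprocess.call([path]) == 0
    return results
```

So a non-python deployment test could be a plain shell script dropped into tests/, as long as it exits non-zero on failure.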