=== webbrandon is now known as weblife
=== jose- is now known as JoseeAntonioR
=== axw_ is now known as axw
[04:34] @marcoceppi Do you want me to send you my patch at least?
[04:34] It works well in my experience...
=== thumper is now known as thumper-afk
=== CyberJacob|Away is now known as CyberJacob
[07:14] Is it possible to read a draft of the article 'AppArmor and charms' somewhere? It was previously on this page: https://juju.ubuntu.com/AppArmor
=== TheMue_ is now known as TheMue
=== TheRealMue is now known as TheMue
=== defunctzombie is now known as defunctzombie_zz
[08:29] <_mup_> Bug #1206412 was filed: Can't access the WordPress Server deployed using MAAS-JUJU. Web page access ends up with Error "502 Bad Gateway (nginx/1.2.6 (Ubuntu)"
[08:58] hi there
=== bloodearnest_ is now known as bloodearnest
[11:31] heya folks - I am using the new juju-core lxc provider, and am getting an error about git not being in $PATH that loops every 3 seconds
[11:31] http://paste.ubuntu.com/5928665/
[11:32] this worked fine on Friday, fwiw
[11:33] my charms just retry installing every 3s until the deployment times out
[11:35] juju version is 1.11.4-1~1514~raring1 from the ppa
[11:52] bloodearnest: if you `juju ssh u1-psearch-app/0`, can you verify git is actually installed?
[11:54] hm... I'm not sure, but it seems that when I add a subordinate relation it triggers the container relation install hook
[11:55] marcoceppi, not installed
[11:56] bloodearnest: so it looks like the charm requires git but it isn't/wasn't installed. Adding git-core to the list of packages installed during hooks/install should resolve this
[11:59] marcoceppi, yeah, I'm looking at that, but AFAICS it doesn't require git. And this happens with every charm (~7) in the stack I'm trying to deploy
[12:00] bloodearnest: that's interesting.
[12:00] marcoceppi, and the error came from git.go line 177, so I thought it might be juju-related
[12:00] bloodearnest: file a bug, it sounds like something that was changed in core
[12:00] marcoceppi, kk
[12:04] marcoceppi, am going to try with pyjuju/lxc - if it fails there, it's likely the charm(s) at fault
[12:07] marcoceppi, hmm, seems I can't hit archive.ubuntu.com from the lxc, so that's likely the issue
[12:07] bloodearnest: ah, that would make sense
[12:11] marcoceppi, yeah, some iptables rules to expose lxc to the world gone awry
[12:29] marcoceppi: ping, morning
[12:29] rick_h: morning
[12:30] marcoceppi: so I'm poking at https://bugs.launchpad.net/juju-gui/+bug/1202636 for the gui
[12:30] <_mup_> Bug #1202636: Charm Details Page Under Providers Change Openstack to HP Cloud
[12:30] marcoceppi: and basically we're going to start reporting that the HP tests are HP. However, the test data we import says it's openstack.
[12:30] rick_h: yeah, it's testing the openstack provider against hp-cloud
[12:30] marcoceppi: for now I'm going to rename it in the GUI and carry on, but at some point it'd be cool to coordinate a move to call it hp throughout
[12:31] rick_h: ack, I'll add a work item to the charmtester stuff and ping you to pick a time to switch
[12:31] marcoceppi: just to put it on your radar, and a heads up that the gui change is going to go through, so we'll see local/ec2/hp as the provider test results
[12:31] marcoceppi: thanks
=== cmagina_ is now known as cmagina
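[Editor's note: a minimal sketch of the fix marcoceppi suggests above for the missing-git error -- adding git-core to a charm's hooks/install. The hook body is illustrative and not taken from any charm mentioned in the log.]

    #!/bin/bash
    # hooks/install -- install the charm's package dependencies up front,
    # so later hooks that shell out to git don't fail with a $PATH error
    set -e
    apt-get update
    apt-get install -y git-core

(As the exchange above shows, bloodearnest's actual problem turned out to be lxc networking, but a hook like this is the general fix when a charm genuinely needs git.)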
[13:44] Hello. This morning around 7:00 (it's 16:43 here now) I started some juju deploy tasks. These tasks are still pending. Is this normal?
[13:44] Output of juju status: http://pastebin.com/dF5gn6GY
[13:45] I also notice that on my two servers the agent-state is not-started, which sounds strange to me
[14:02] On one machine juju-machine-agent refuses to start and gives this error: Failure: zookeeper.NodeExistsException: node exists
[14:02] I can't find a clear explanation and solution when I search for this error with google
[14:16] I tried to destroy-service/unit, juju status shows dying, is there anything I can do? I want to destroy the machine and redeploy it
=== BradCrittenden is now known as bac
[14:17] freeflying: what is the status of the unit/service now? Is it in error?
[14:17] freeflying: if it is in error, then you need to `juju resolved` it, and then you will probably be able to destroy it
[14:18] ahasenack: life: dying
[14:18] freeflying: what about the rest of the lines?
[14:19] ahasenack: agent-state is error, so I can use resolved?
[14:19] freeflying: yeah, try it, and then destroy-unit, and then terminate-machine
[14:20] ahasenack: hah, resolved, thanks
[14:22] good
[16:49] so how is the ec2 api for deploying charms on an openstack environment?
[16:49] I have used ec2 native, but haven't had a chance to play on a pure openstack setup yet
=== defunctzombie_zz is now known as defunctzombie
[17:02] scuttlemonkey: we don't use the ec2 compatibility api anymore for openstack deployments. We use the straight openstack api and it works rather well
[17:02] marcoceppi: ahh, right on... didn't see much doc on that front
[17:02] looked like mostly s3-specific stuff, guess I'll keep digging
[17:03] scuttlemonkey: for the most recent stuff, I'd recommend looking at the juju-core code if you aren't already
[17:03] marcoceppi: yeah, that's the next stop. I generally like to parse human-readable stuff first if I can :)
[17:15] has anyone done a comparison of juju to something like cloudformation? I like juju for a lot of things, but being able to configure VIPs is a nice feature. they look to be pretty similar solutions in a lot of ways
=== natefinch is now known as natefinch-lunch
[17:24] xmltok: CloudFormation is AWS-only, Juju is cloud-agnostic*
[17:31] i suppose cloudformation on openstack (heat) probably just builds out an haproxy node, which could easily be done through juju
[17:35] yes, that's quite accurate in fact
=== natefinch-lunch is now known as natefinch
[17:58] WRT juju in a large production environment, can a team share a bootstrap node? I get how an engineer can build out their platform, but I'm not sure how ops would share the management of a bunch of deployments. does each charm configuration set require a different bootstrap node, or can I have one bootstrap node for each production colo?
[18:08] xmltok: So you can specify who has access to a deployment in the environments.yaml by including their ssh public keys as one of the options. From there, as long as each user has proper credentials to access the cloud environment, they can use juju from the command line. There's also a juju-gui charm you can deploy which allows users to log in via a web interface and manage that environment/deployment from a browser
[18:09] Each environment only needs one bootstrap node to operate. So that one node runs orchestration for the entire deployment
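[Editor's note: a hedged sketch of the environments.yaml setup marcoceppi describes just above -- teammates' ssh public keys listed against a shared environment. The environment name, credentials, and key material are placeholders.]

    environments:
      prod-colo-east:
        type: ec2
        access-key: <aws-access-key>
        secret-key: <aws-secret-key>
        admin-secret: <shared-admin-secret>
        default-series: precise
        # One public key per teammate; anyone listed here (who also has
        # the cloud credentials) can drive the shared bootstrap node with juju.
        authorized-keys: |
          ssh-rsa AAAA... alice@example.com
          ssh-rsa AAAA... bob@example.com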
[18:28] cool
[18:29] I've been planning on implementing a similar environment-sharing solution for teams of engineers, so it sounds like this would solve that problem too
[18:30] if I wanted to write a web interface for modifying the environment (changing relationships, adding charms), is there an API to the bootstrap node or is it all via juju command line options?
[18:33] xmltok: there already is one. juju-gui is that web interface
[18:33] but yes, it uses an API on the bootstrap node
[18:40] ok, I didn't realize that gui was deployable internally, it looked so good I figured it was some kind of pay solution
[18:41] so I could in effect have a gui server set up, and engineers could log in and point it at their different environment bootstrap nodes to modify them as needed, or ops could point them at the different prod environments. that is pretty cool
[18:51] it's one GUI per environment ATM, xmltok
[18:51] can the GUI run on the bootstrap node?
[19:00] xmltok: yup
[19:02] xmltok: https://jujucharms.com/precise/juju-gui/#bws-readme
[19:48] Does default-series stand for the juju version being installed?
[19:53] Or the image that is going to be deployed?
[19:54] ubuntu image, that is
[20:04] webbrandon: Ubuntu series, i.e.: precise, quantal, raring, etc
[20:05] We recommend, and it defaults to, precise (the current LTS)
=== JoseeAntonioR is now known as jose
[20:39] Okay, found something that describes it in more depth. Think I will submit a more detailed description of the setting in the docs, since they don't describe it well.
[21:01] weblife: what's that?
[21:02] default-series
[21:04] * thumper waves
[21:05] also updating the AWS setup getting started page
[21:20] weblife: it shouldn't be needed with the latest juju unless you need something other than the LTS
[21:22] @marcoceppi What's that, the default-series option or the AWS page (https://bugs.launchpad.net/juju-core/+bug/1201833)?
[21:22] <_mup_> Bug #1201833: AWS instructions need an update
[21:30] weblife: precise, for all clouds
[21:33] so there's no more setting specific series like 'oneiric'?
[21:34] weblife: well, I don't think you can use oneiric any more with juju
[21:35] I was about to write this into the AWS setup along with the security changes:
[21:37] Environments can currently be configured with a default-series option, which controls the Ubuntu series to run on new machines (where available) and the repository collection from which to get charms (always). You can find available Ubuntu AMI versions that are supported with AWS in the AWS Marketplace at aws.amazon.com/marketplace. You can also find a list of the different Ubuntu series at http://en.wikipedia.org/wiki/List_of_Ubuntu_releases if you decide to set up your own EBS-backed AMI.
[21:37] following what I read here: https://bugs.launchpad.net/juju/+bug/865163
[21:37] <_mup_> Bug #865163: default-series option has surprising behaviour
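[Editor's note: a minimal sketch of where default-series lives, for the discussion above. Per marcoceppi's correction a little further down, the option only names the Ubuntu series; juju picks the matching image/AMI itself via simplestreams data. Values here are illustrative.]

    environments:
      amazon:
        type: ec2
        # Ubuntu series for newly provisioned machines,
        # e.g. precise (12.04 LTS), quantal, raring
        default-series: precise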
[21:44] hello, when I deploy a new charm on, say, openstack, it spawns a whole VM for it?
[21:46] aimatt: yeah, that's a large part of juju's reason to exist :)
[21:47] sarnold: ok, I'm just trying to think of how I would make a charm for a particular service that depends on a service, such as memcache, running locally
[21:48] I would guess that I would just include apt-get install memcached in that service's charm
[21:48] right?
[21:48] aimatt: or you could use a subordinate service
[21:49] aimatt: https://juju.ubuntu.com/docs/authors-subordinate-services.html
[21:50] aimatt: I have a feeling that might not be the best solution for memcache. I've got a feeling you might want to be able to scale those separately from the other services, and you might want its cache to serve more than the one service unit it would be deployed with as a subordinate
[21:51] we would have a separate memcache cluster too; this is used for things like locks
[21:51] ah okay :)
[21:51] local stuff only
[21:52] thanks for the link, I'm wrapping my head around it
[21:53] weblife: So, that's not entirely accurate. Simplestreams data determines which AMI or image to use for juju; you just supply the series
[21:59] sarnold: that looks perfect. thank you
[22:00] aimatt: cool! :)
[22:00] I think the indentation of the YAML there is borked, though
[22:01] but I get it
[22:03] heh, so it is; the html source has it correctly, though
[22:04] * marcoceppi fixes html structure
[22:05] marcoceppi: 1206704
[22:06] sarnold: thanks!
[22:08] marcoceppi: I _love_ the "file a bug" link on the bottom of the page. that's just friendly. :)
[22:11] @marcoceppi Okay, I will leave that part out then.
[22:54] I don't know what to do... I am told I need to get experience in "charm contributors" to submit patches and repairs. So I assign myself to a bug with them: https://bugs.launchpad.net/juju-core/+bug/1201833 but I can't submit my repair because "charmers" requires the same. So I uploaded my repair to: https://code.launchpad.net/~web-brandon/juju-core/juju-core. Am I missing a step? Because I am feeling pretty defeated for simply trying to help make things better.
[22:54] <_mup_> Bug #1201833: AWS instructions need an update
=== CyberJacob is now known as CyberJacob|Away
[23:45] Thank you jcastro
=== torin_ is now known as tsandall
[23:54] another question: if I use juju, what do I really need openstack for?
[23:55] aimatt: you need something to create VMs for you -- or, in the case of MAAS, actual machines :)
[23:56] aimatt: juju just knows how to ask openstack or ec2 or azure or hp cloud or .. for a new machine instance
[23:56] sarnold: oh, so the openstack charm is primarily for maas?
[23:58] aimatt: that's where my knowledge gets thin -- as I understand it, you'd have two jujus in place -- one to control the openstack environment itself, one to control the things you -run- on openstack
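[Editor's note: a hedged sketch of the subordinate-service approach sarnold pointed aimatt at earlier for running memcached next to another service. The charm, interface, and service names are hypothetical; see the linked authors-subordinate-services docs for the real format.]

    # metadata.yaml for a hypothetical subordinate memcached charm
    name: memcached-local
    summary: memcached deployed inside the principal service's container
    subordinate: true
    requires:
      host:
        interface: juju-info
        scope: container   # container scope is what makes the charm subordinate

Deploying it would then look roughly like:

    juju deploy memcached-local
    juju add-relation my-app memcached-local

Because the relation is container-scoped, each unit of the principal service (my-app here) gets its own co-located memcached, which matches aimatt's "local stuff only" use case.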