[03:54] <SpamapS> james_w: any service period, but it only monitors SSH and PING right now
[03:55] <SpamapS> james_w: I am working out host groups, and then I will flesh out some ideas for monitoring things more deeply.
[04:03] <james_w> SpamapS, ah, that sort of ping :-)
[04:03] <james_w> SpamapS, I was talking to marcoceppi and m_3 about deeper monitoring last week
[04:03] <james_w> I'm very interested in getting that working
[04:04] <SpamapS> james_w: I was thinking that to really be effective there needs to be some abstract ways to say "Monitor me like this"
[04:04] <SpamapS> james_w: so that we don't lock in to nagios
[04:04] <james_w> SpamapS, yeah, to a point
[04:04] <james_w> I think an interface for "here's a nagios plugin, run it" is important too
[04:04] <SpamapS> meh, thats what the nrpe subordinate is for
[04:05] <james_w> abstract is fantastic, but we can't abstract everything
[04:05] <SpamapS> oh you want to feed the whole thing back to nagios?
[04:05] <james_w> then my use case might be solved already :-(
[04:05] <james_w> err :-)
[04:05] <james_w> we have some application-specific plugins that we would want to expose if the admin was using nagios
[04:05] <SpamapS> no I misunderstood who would be handing out plugins
[04:06] <SpamapS> but to that point, a 'myapp-nagios-plugins' subordinate *would* work
[04:06] <james_w> "run this plugin every 5 minutes if the admin wants to monitor"
[04:06] <james_w> yeah, I think subordinates will be the answer
[04:06] <james_w> with some nagios-plugin interface maybe
[04:06] <james_w> I don't know nagios well enough
[04:07] <james_w> I've never used it, just written plugins
[04:07] <SpamapS> I'm undergoing a personal refresher .. it was the core of things for me for a long time, but I got it down to the point where I didn't have to do anything with it ;)
[04:08] <SpamapS> james_w: I think its fair to have a 'monitoring' interface which passes back abstract plugin names.. and then nagios will just make a best effort to monitor what it can.. and you can extend its capabilities with a subordinate
[04:09] <james_w> that sounds reasonable
[04:09] <SpamapS> so a website would say things like  'http;url=foo http;url=foo2'
[07:16] <SpamapS> alright, my nagios charm now groups all machines into a hostgroup named after the service
[07:30] <SpamapS> Ok since the old nagios charm was basically completely broken, just went ahead and pushed mine into the store.
[07:30] <SpamapS> ihashacks: nagios is now infinitely more useful.. though still needs a lot of work to be able to do full monitoring.
[08:40] <imbrandon> morning all
[14:18] <melmoth> how to investigate what is going on if a juju status shows a mysql service in "pending" state for more than 30 min after deploying it (lxc) ?
[14:20] <marcoceppi> melmoth: it'd be a good idea to see if the master container has been created or not. If you're on a slower internet connection it can take some time to create the first image
[14:21] <melmoth> i can connect to the console of the mysql service with lxc-console
[14:21] <marcoceppi> there's a machine-agent.log file in your data-dir path for your local provider
[14:21] <melmoth> so yes, it is
[14:21] <marcoceppi> melmoth: well, what does /var/log/juju/mysql-service-0.log show?
[14:22] <koolhead17> melmoth, aah thats what i was writing :)
[14:22] <koolhead17> marcoceppi, hey there
[14:22] <melmoth> hello koolhead17 :)
[14:22]  * koolhead17 has lost his sleep
[14:22] <marcoceppi> o/ koolhead17
[14:23] <koolhead17> :P
[14:23] <melmoth> http://pastebin.com/8bQ6dq7R
[14:24] <melmoth> (the log file is elsewhere, but it matches this machine)
[14:24] <melmoth> last time i checked when i had a problem, the last line was this "lxc_start - invalid pid for SIGCHLD" stuff
[15:00] <melmoth> so, i try to destroy my environment, starting again
[15:00] <melmoth> still stuck
[15:00] <melmoth> (to summarise, i cannot launch a mysql service on precise, lxc. The service stays in the pending state)
[15:01] <melmoth> any help appreciated, because i have no idea what to change or where to look
[15:13] <m_3> melmoth: wow, haven't seen that error for lxc
[15:13] <m_3> melmoth: I almost always recommend clearing your lxc cache when there are problems
[15:13] <melmoth> i did
[15:14] <m_3> melmoth: `sudo rm -Rf /var/cache/lxc/`
[15:14] <melmoth> i m now retrying, with the ppa
[15:14] <m_3> melmoth: ah, yeah... check juju version, and your libvirtd group membership
[15:15] <melmoth> i m in libvirtd group
[15:15] <melmoth> juju was the latest official one available in precise, now trying with 0.5+bzr535-1juju5~precise1
[15:16] <m_3> melmoth: after that it might be networking... `virsh net-list --all` should show only a default network that's active
[15:16] <melmoth> which i think is a new one (got a juju update after adding the ppa for charm-tools)
[15:16] <m_3> melmoth: but give it time to try to come up with the new version
[15:16] <melmoth> well, it shows another network that is active, and i sort of need it for other purposes
[15:17] <m_3> melmoth: make sure the `default` libvirt network is on 192.168.122.0 (you can do this with `ps auwx | grep dnsmasq`)
[15:18] <m_3> melmoth: also that 192.168.122.0 is on `virbr0` (via `ip addr show`)
[15:19] <m_3> unfortunately, that's pretty picky at the moment (known problem... bugs are filed)
[15:19] <melmoth> seems ok
[15:19] <melmoth> the only stuff i changed on the default network is to disable dhcp on it
[15:19] <melmoth> (which i also need for some other purpose)
[15:22] <m_3> melmoth: hmmmm.... well that will be a problem for the juju local provider
[15:22] <m_3> melmoth: two possibilities, depending on your setup:
[15:23] <m_3> melmoth: a.) move your custom stuff up to a different libvirt network so juju can have `default/virbr0/192.168.122.0`
[15:23] <m_3> (I know that totally sucks)
[15:24] <m_3> melmoth: b.) perhaps you can spin up the juju local provider in a VM on that machine?
[15:24] <balloons> is there a video walkthrough that explains juju at all? Something like setting up juju and deploying wordpress? I don't want a long video, just something to "show" the power and purpose of juju
[15:24] <m_3> melmoth: b might be easier... it won't necessarily perform as well as on the bare metal, but... it'd at least get up
[15:24] <melmoth> i think i might give b a try if i fail with the current test
[15:24]  * balloons is trying to share juju with others
[15:25] <melmoth> so at least i m sure i m using all vanilla stuff and can break everything without caring about my desktop :)
[15:25] <m_3> balloons: yeah, I think we've got a good one up on juju.ubuntu.com... I'd have to look
[15:26] <m_3> melmoth: yeah, sorry... we're pushing to get this particular problem with lxc fixed
[15:26] <balloons> m_3, ahh.. thanks, I see the demo links @ the top
[15:26] <balloons> hmm, they do all seem to be older though with a focus on ensemble
[15:27] <m_3> melmoth: b should be fine if you can get the vm to think it's using something called `virbr0`
[15:28] <nathwill> you can change the network device in the lxc container config...
[15:28] <m_3> balloons: yes, so much has changed recently with the charm store landing too... it's a much easier story... `juju deploy mysql` deploys directly from the store instead of having to download charms and specify local repos
[15:30] <m_3> nathwill: can you answer some of the askubuntu questions on this?  haven't a clue what to recommend as the best workaround for when you have existing libvirt networks
[15:30] <balloons> m_3, yes.. sounds like an opportunity for someone to update :-)
[15:30] <balloons> thanks
[15:30] <m_3> nathwill: I think the code's actually looking for the literal `virbr0` though... and not the lxc default :(
[15:30] <nathwill> right
[15:30] <nathwill> if you want to change that, you can
[15:31] <m_3> balloons: oh please!  that'd be awesome
[15:31] <nathwill> in the container config... and probably in the template container to affect new units
[15:31] <nathwill> though i haven't tested that
[15:32] <nathwill> but i went through doing the same thing in my lxc instances to make sure they're all on the same network (192. vs 10.)
[15:32] <m_3> nathwill: imo correct fix is for juju cli to a.) use lxc networking instead of libvirt (in precise), and b.) add a new dedicated lxc network for juju that doesn't conflict with any address spaces in use at the time
[15:34] <nathwill> m_3, i agree that would be spiffy
[15:34] <m_3> nathwill: cool... let's capture that b/c I think lots of people would benefit from your tests
[15:35] <m_3> imo currently the biggest issue people have with juju... and it's often during their _first_ experience with it... it's a huge priority
[15:35]  * m_3 coffee
[15:36] <LemU_> Hi! I would need help... I was following this guide https://help.ubuntu.com/community/UbuntuCloudInfrastructure#Deploying_Ubuntu_Cloud_Infrastructure_with_Juju     I'm almost at the end of the guide but cloud-publish-tarball ubuntu-11.10-beta1-server-cloudimg-amd64.tar.gz images ain't working.
[15:36] <LemU_> It just gives me: Unable to run euca--describe-images.  Is environment for euca- set up?
[15:40] <negronjl> 'morning all
[15:40] <m_3> negronjl: morning
[15:41] <negronjl> m_3: 'morning ... Ubuflu in full swing over here :/
[15:41] <m_3> LemU_: looking at the doc now... euca-tools is pretty common for all kinds of ec2-api-based clouds
[15:41] <m_3> negronjl: ouch
[15:41] <m_3> bummer man
[15:42] <nyr0x> hey, what's the purpose of 'machine 0'? i figured out that it is some kind of management node of the environment. i want to deploy a local environment with orchestra and i have 10 compute nodes that juju will take care of. but i don't want to lose one of the nodes doing almost nothing... so how much power does this 'machine 0' need? could i setup a small vm handling this task?
[15:42] <LemU_> m_3 yeah I know, I was able to use euca-tools when I manually installed an openstack cloud some time ago, and I thought that I understood how it works, but seems I'm missing something..
[15:43] <m_3> LemU_: I'd make sure you're following MaaS-based docs... the docs might have changed over the past month with 12.04 landing with Maas
[15:44] <m_3> nyr0x: juju refers to `machine 0` as the bootstrap node... it runs zookeeper and some agents that juju depends on
[15:44] <LemU_> m_3 I have everything else working here, I just cant add images on my cloud.
[15:45] <m_3> nyr0x: one thing to watch out for... `orchestra` has been deprecated and the juju bare-metal provider is called MaaS (metal as a Service) now
[15:47] <LemU_> or to be more specific I can't reach my euca tools or something like that
[15:48] <m_3> LemU_: euca-upload-bundle is out-of-band from juju... and my experience with it is stale.  perhaps somebody on #ubuntu-server or #ubuntu-cloud can help debug?
[15:49] <LemU_> m_3 Ok, I'll try that, thanks for help !
[15:50] <m_3> nyr0x: really small VM would work great for the juju bootstrap node btw
[15:50] <m_3> nyr0x: the only real requirement is that zookeeper is java-based... maybe one-cpu, 256M?  maybe 512M? dunno... you won't be stressing it with 10 nodes
[15:51] <m_3> LemU_: sure thing
[15:52] <nyr0x> m_3: thx, then i can use the server handling the login node etc. (playing around to deploy a hpc-cluster)
[15:53] <m_3> nyr0x: cool... keep us posted about progress
[15:54] <nyr0x> m_3: once everything is running i will publish a series of blogposts about the setup
[15:57] <m_3> nyr0x: cool... check out the minimum requirements for the `maas server` in the docs... it'll need cobbler, dhcpd, zookeeper, juju agents.  MaaS is much easier to get working than the old `orchestra` docs
[15:57] <m_3> nyr0x: iirc it's a bootup option from the standard server iso
[16:07] <SpamapS> m_3: 512M minimum.. zk keeps *everything* in RAM
[16:08] <SpamapS> nyr0x: ^^
[16:09] <m_3> SpamapS: thanks
[16:10] <SpamapS> I think the t1.micro is a decent guide for what works as the tiniest possible node 0
[16:12] <SpamapS> m_3: back home and ready to rock?
[16:12] <m_3> SpamapS: only time I've looked was 130M used by zk for quite a bit more nodes... but then elbow room for other processes would imply at least 512 to be safe
[16:13] <m_3> SpamapS: well back home at least :)... wrote part of a blender render-farm charm last night just to get away from it all
[16:13] <SpamapS> Hah nice
[16:13] <SpamapS> m_3: yeah I entertained myself by bringing nagios into the 21st century last night. ;)
[16:14] <m_3> I saw that
[16:14] <m_3> SpamapS: james_w and I were discussing over breakfast Saturday
[16:15] <nyr0x> is it possible to bootstrap 2 environments to handle 2 different sets of machines?
[16:16] <nathwill> nyr0x, yeah, from what i've seen, you use -e envname to specify the environment for your commands.
[16:16] <m_3> nyr0x: yup
[16:16] <melmoth> just to be sure: on precise is it better to use the ppa version of juju or is the default one supposed to be "good enough" ?
[16:16] <m_3> nyr0x: I have like 12 different environments set up... maybe 4 running at any given time
[16:18] <SpamapS> though that may require 2 separate maas's
[16:19] <nyr0x> m_3: how do you specify which environment uses which machines? different management classes? because i have 2 kinds of machines in my network... compute nodes and I/O nodes they are made of different hardware
[16:19] <m_3> nyr0x SpamapS: yes, mine are all ec2/openstack... no maas setups yet
[16:19] <SpamapS> nyr0x: the ability to "tag" machines is feature #1 that maas needs to add soon
[16:20] <SpamapS> I'm actually quite shocked it wasn't added first.
[16:22] <m_3> nyr0x: but constraints can be specified now in juju (http://fewbar.com/2012/04/juju-constraints-unbinds-your-machines/)  I think that works with MaaS now
[16:22] <SpamapS> m_3: constraints only allows *name* :(
[16:22] <SpamapS> for maas
[16:27] <m_3> bummer
[16:28] <SpamapS> nyr0x: you can control which nodes are available at what time by only accepting/commissioning nodes right before you give them to juju
[16:28] <SpamapS> nyr0x: in the very near future maas will have classification features that will make that permanent
[16:30] <nyr0x> SpamapS: sounds promising
[17:21] <Jarmo> My team has been setting up Cloud with MAAS and Juju, we got an openstack environment running, but we can't add images...  cloud-publish-tarball ./ubuntu-11.10-beta1-server-cloudimg-amd64.tar.gz images , gives us only this: Unable to run euca--describe-images. Is environment for euca- set up?
[17:29] <SpamapS> Jarmo: I'm not sure I understand what you're even trying to do..
[17:30] <SpamapS> Jarmo: it sounds like cloud-publish-tarball needs you to have your environment variables setup to access openstack via the euca-* tools
[17:31] <SpamapS> Jarmo: that means setting env variables like EC2_URL, EC2_ACCESS_KEY, EC2_SECRET_KEY, probably S3_URL too
[17:35] <Jarmo> SpamapS check end of this guide: https://help.ubuntu.com/community/UbuntuCloudInfrastructure
[17:35] <Jarmo> I think those cred files are setting variables?
[17:37] <SpamapS> Jarmo: yes the '.ec2rc.sh' bit should set those
[17:37] <SpamapS> Jarmo: if you can't run 'euca-describe-images' then something is wrong w/ your OpenStack setup
[17:37] <Jarmo> hmmm, ok...
[17:38] <SpamapS> Jarmo: you may want to ask in #ubuntu-server and #ubuntu-cloud too. Lots of the devs who work on Ubuntu Cloud hang out in there
[17:39] <Jarmo> It's like this: i have used openstack manually and I think I know how to use eucatools etc.... I just can't figure out whether there is a way to use the manual approach on a maas + juju environment too.... I'll try ubuntu-cloud next :) THX :)
[17:40] <Jarmo> or no: #ubuntu-cloud Cannot join channel (+i) - you must be invited
[17:40] <SpamapS> wha?!
[17:42] <SpamapS> Oh
[17:42] <SpamapS> it was shut down
[17:42] <SpamapS> which makes sense
[17:42] <Jarmo> ahaa :D
[17:42] <SpamapS> Jarmo: ok, yeah so just try #ubuntu-server
[17:42] <Jarmo> ok, thanks
[17:47] <m_3> SpamapS jcastro: we're getting quite a bit of action on the homebrew-juju osx cli... what criteria do we use to give someone the rights to merge Pull Requests into trunk?
[17:47] <m_3> that's not charm-contributors or charmers... it's more juju itself
[17:47] <jcastro> that's a good question
[17:48] <m_3> s/quite a bit of action/action/... but still
[17:48] <SpamapS> m_3: Its up to Brandon IMO.
[17:48] <SpamapS> m_3: unless we want to start dedicating hardware and time to it.
[17:48] <m_3> SpamapS: kinda think we should... (it's important to juju overall)
[17:49] <m_3> I'll send something to the list
[17:51] <jcastro> yeah this sounds important enough to get list-wide buy in
[17:52] <Jarmo> BTW, Would love if there would be charm for creating images for openstack, or if there would be button "add image" on openstack-dashboard...
[17:54] <Jarmo> I think i'll try to do it after I get my system working perfectly.. just need to understand what goes wrong atm
[17:54] <m_3> Jarmo: We do have http://cloud-images.ubuntu.com/ which includes openstack images
[17:56] <Jarmo> but those images need handling with eucatools manually, would love if there would be way to put them in just by clicking button and telling "use this image"
[17:58] <Jarmo> My team has been working on setting up Cloud with MAAS & juju, the only thing we have problems with is adding images
[17:58] <Jarmo> or reaching eucatools, not 100% sure what exactly goes wrong
[17:59] <Jarmo> maybe authenticating (damn that is hard word)
[18:00] <Jarmo> but must say that I love charms, makes setting up openstack much easier & faster...
[18:01] <Jarmo> ..but it is harder to tell what goes wrong at the moment
[18:01] <SpamapS> m_3: well if we're going to pull it in.. we need to move it to launchpad so we don't have to maintain 2 teams.
[18:01] <SpamapS> m_3: then all of ~charmers can own it like all the rest of these tools
[18:02]  * SpamapS braces for the flame of bzr hate
[18:02] <Jarmo> :D
[18:03] <SpamapS> Jarmo: there's an active effort to have an external glance server which hosts the Ubuntu images, and then a feature to point your openstack at said glance server.
[18:03] <marcoceppi> SpamapS: tbh it needs to be in LP because that's where the Juju core is stored
[18:04] <SpamapS> marcoceppi: yeah. It gets complicated spreading out between github and launchpad
[18:04] <SpamapS> if nothing else we need a project page so we can track bugs that are caused by the brew packaging
[18:05] <Jarmo> there is guide : https://help.ubuntu.com/community/UbuntuCloudInfrastructure which says EC2 API  To begin using the EC2 API, select Settings-> EC2 Credentials -> Download EC2 Credentials in the Openstack dashboard. Save the file (eg, /home/adam/openstack/"). We can then unzip these and begin using our cloud... Does this mean when I use those files on any computer which is on the same network I should have access for eucatools? or how you unde
[18:06] <SpamapS> Jarmo: well you have to source the ec2env.sh or whatever it is called, but then yes
[18:06] <Jarmo> ok, then I did understand right...
[18:06] <SpamapS> Jarmo: note that the 12.04 OpenStack dashboard also has an option in there for downloading a juju environments.yaml
[18:06] <Jarmo> (have been working 11+ hours so im not anymore so sure where I do my mistakes :P )
[18:08] <Jarmo> SpamapS about downloading juju environments.yaml: i didn't find how this would be useful/how this feature should be used? can you enlighten me on where this helps?
[18:09] <SpamapS> Jarmo: download it, put it in ~/.juju and type 'juju bootstrap' and you should get a working juju
[18:09] <SpamapS> Jarmo: instead of having to figure out all the config options for juju like ec2-uri, access-key, etc. etc.
[18:10] <Jarmo> ahaa, I have to give that a try tomorrow :)
[18:23] <m_3> SpamapS: sent 'osx client' to the list
[18:28] <Jarmo> hmmm, if I follow this "guide" I dont find too many spots where I could have gone wrong...
[18:28] <Jarmo> https://help.ubuntu.com/community/UbuntuCloudInfrastructure
[18:28] <ihashacks> SpamapS: tyvm for your attention to the issues I send here. I bzr branch'ed your nagios/trunk but I don't know how to tell juju to use it as a local repository
[18:30] <ihashacks> ... or do I zip it up as a .charm and stick it in ~/.juju/cache ?
[18:31] <SpamapS> ihashacks: juju deploy --repository ~/charms local:nagios
[18:31] <SpamapS> ihashacks: you need a dir under ~/charms with the release of ubuntu (precise) and put nagios in there
[18:32] <SpamapS> ihashacks: also you can set $JUJU_REPOSITORY and drop the --repository ~/charms (I put this in my .profile so I never have to use --repository)
[18:32] <SpamapS> m_3: woot thanks
[18:32] <ihashacks> I did --repository= ... needed to get rid of the = ... derp!
[18:32] <SpamapS> Jarmo: hopefully the people in #ubuntu-server will have more insight into your issue
[18:32] <SpamapS> ihashacks: no, the = is optional
[18:33] <SpamapS> ihashacks: though I think it won't work with --repository=[space]dir
[18:33] <Jarmo> was hoping that too, but they were kinda silent guys :D Was so hoping that cloud channel would have been open :D
[18:34] <Destreyf> ihashacks: i had issues with local repositories, my workaround was --repository=. local:<charm name>
[18:34] <m_3> SpamapS: btw, those're just packaging formulas... they pull the cli itself from lp
[18:36] <SpamapS> m_3: Yeah I know. I'm fine with leaving them there, but its going to be harder and harder to support split-admin of teams who might want to help w/ both.
[18:36] <SpamapS> m_3: can cross that bridge when we come to it.
[18:37] <SpamapS> but IMO, this is a fail from Debian that we don't want to repeat. they have 19 ways to maintain packaging in git/svn/bzr .. even heavy divergence between teams that use git.
[18:37] <SpamapS> So, the more diversity in tools, the greater the barrier there is to contribution.
[18:38] <ihashacks> Wait, it wasn't the = either. It didn't like "local:nagios" had to use just "nagios"
[18:39] <SpamapS> ihashacks: using just 'nagios' will deploy from the main charm store
[18:39] <SpamapS> ihashacks: which, actually, is now my charm ;)
[18:39] <SpamapS> because that old one was.. well.. just that.. really old.. clearly hadn't been used ever
[18:41] <ihashacks> ...clearly :)
[18:41] <ihashacks> Ok that must have happened in the last hour then because I tried earlier and it still showed nagios0 instead of (now) nagios1
[18:42] <Destreyf> SpamapS: i ran across some references on LaunchPad regarding Machine specific deployment, the last post talks about a Go Port?
[18:43] <Destreyf> SpamapS: https://bugs.launchpad.net/juju/+bug/806241 <- to be specific
[18:43] <_mup_> Bug #806241: It should be possible to deploy multiple units  to a machine (unit placement) <production> <juju:Confirmed> < https://launchpad.net/bugs/806241 >
[18:43] <SpamapS> Destreyf: right, there is a rewrite of juju underway to the go language.
[18:43] <SpamapS> Destreyf: bzr branch lp:juju/go  to check it out
[18:44] <SpamapS> ihashacks: that number has nothing to do with the charm version
[18:44] <Jarmo> that sounds really smart idea!
[18:44] <SpamapS> ihashacks: you should see revision 20 if you have mine, or 2 if you have the old one
[18:44] <Destreyf> that is the Unit number in the environment.
[18:44] <Destreyf> so if you have done "add-unit" you'd see another.
[18:45] <Destreyf> SpamapS: i shall look at juju/go thanks!
[18:49] <SpamapS> Destreyf: its quite a ways off. What is it that you want to run two of on one machine?
[18:50] <SpamapS> Destreyf: there are ways of deploying extra things, called subordinates.. that usually handles most of the cases that people want.
[18:50] <Destreyf> We have a 4 Node 2U server, that we're going to use as the center of our cloud, and we're wanting to deploy openstack across it using juju to allow for easier configuration and setup of the nova-compute nodes
[18:50]  * negronjl is back from lunch
[18:51] <SpamapS> Destreyf: keep in mind that with VMs and the cloud, there's not so much of a point in running two primary things on a box as there were when we had only discrete servers
[18:51] <Destreyf> OpenStack setup requires 8 servers for its operation, which is more than i have at this moment.
[18:51] <SpamapS> Destreyf: so you want to put mysql, rabbit, etc. on one box?
[18:52] <Destreyf> SpamapS: Basically i'm wanting to have MySQL and Rabbit on one box, and also on each "Compute" node having the nova-volume as well as we're wanting to offer a backup solution to some of our clients as well.
[18:53] <Destreyf> whoo, almost closed my IRC window
[18:54] <m_3> balloons: ah.. just saw that the videos I was pointing you to were commented out.  They were BrightTalk webinars and required an account to watch.  as such, they don't belong as-is on the front page of juju.ubuntu.com
[18:54] <ihashacks> Not the unit number but the number in the actual charm cache (at least that's what I think it is) http://paste.ubuntu.com/987680/
[18:54] <SpamapS> Destreyf: yeah I think juju is going to remain weak for small bare metal deployments for the near term future.
[18:54] <SpamapS> ihashacks: when you use local: there is no charm cache
[18:54] <m_3> balloons: http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=0CFMQtwIwAA&url=http%3A%2F%2Fwww.brighttalk.com%2Fwebcast%2F6793%2F39309&ei=5VSxT9eqKuTq2QWGrLzpCA&usg=AFQjCNG9kG4AGdC45c6tql-e3xEpb8iXng
[18:55] <balloons> m_3, I don't think I found the brighttalk videos
[18:55] <Destreyf> SpamapS: He didn't use local
[18:55] <SpamapS> ihashacks: oh right but you used the store.
[18:55] <Destreyf> ihashacks: the _1 vs _0 is referencing the unit number
[18:55] <SpamapS> ihashacks: juju status should show the charm: for the service.. if it says it is version 20 then you have the new one.
[18:55] <balloons> interesting
[18:55] <balloons> your having a webcast?
[18:55] <ihashacks> I question that because in that directory I only have a 0 for wordpress but I have 6 units for wordpress in the current test I'm running.
[18:56] <Destreyf> ihashacks: if you issue juju status, you'll see a nagios/1 that's the unit number, you'll notice you probably have 2 of them
[18:56] <m_3> balloons: we'll try to get it either signup-free or redo them as pure screencasts
[18:56] <Destreyf> the cache directory probably has 2 versions cached for nagios, and only 1 for Wordpress
[18:57] <balloons> m_3, yes, definitely looks cool.. but a bit silly to have to register to view the old talks
[18:57] <ihashacks> Right, so the number relates to versions, not units. What I was inferring was that I saw a new version in the cache and therefore got the new Nagios version.
[18:57] <m_3> balloons: here it is without all the tracking crap http://www.brighttalk.com/webcast/6793/39309
[18:57] <ihashacks> I think we're all on the same page and just don't realise it. :)
[18:58] <Destreyf> lol probably
[18:58] <SpamapS> ihashacks: indeed we are :)
[18:59] <SpamapS> Destreyf: back to your questions. There *are* ways around your problem with current juju. But none of them involve deploying the stock charms. You have to make some minor tweaks.
[19:00] <SpamapS> Destreyf: the simplest thing you could do would be to fork the current charms and add 'subordinate: true' and a 'requires: { local-relation: { interface: juju-info, scope: container } }' to the metadata.
[19:01] <balloons> m_3, interesting container they have on that page
[19:01] <Destreyf> I have no problems forking the Charms :P
[19:02] <SpamapS> Destreyf: then you would deploy the 'ubuntu' charm and add-relation to put those services together on one box. "juju deploy ubuntu rabbit-and-mysql ; juju deploy mysql-subordinate ; juju deploy rabbitmq-server-subordinate  ...
[19:02] <SpamapS> Destreyf: we have a problem encouraging you to fork the charms, because we'd like to encourage collaboration, not forking. :)
[19:03] <SpamapS> Destreyf: the second step of that is 'juju add-relation rabbit-and-mysql mysql-subordinate ; juju add-relation rabbit-and-mysql rabbitmq-server-subordinate'
[19:04] <koolhead17> melmoth, around
[19:05] <Destreyf> so mysql-subordinate is the charm?
[19:05] <Destreyf> and the rabbit-and-mysql is a reference name to a machine?
[19:06] <SpamapS> Destreyf: not a machine, a service
[19:06] <SpamapS> Destreyf: but yes, the others would be the names of the -subordinate forked charms
[19:07] <SpamapS> IMO we should give you the option to do this at runtime, since its so simple, but for the moment, we don't.
[19:07] <Destreyf> Okay, i think i have an understanding now.
[19:07] <SpamapS> Destreyf: so the 'ubuntu' charm is a special empty charm
[19:07] <Destreyf> SpamapS: its been great talking to you, i had wondered if i'd run into you directly after seeing you post on launchpad
[19:07] <SpamapS> Destreyf: did we meet?
[19:08] <SpamapS> I lose track. :-/
[19:08] <Destreyf> No, just your posts are always very well done, and full of useful information.
[19:08] <Destreyf> :P
[19:08] <SpamapS> I try. ;)
[19:09] <Destreyf> i've been working through getting the MAAS setup for the better part of 2 weeks; after dealing with some hardware issues i've gotten a lot more working. i've only had one other problem, which nukes juju out of the box: for some reason, when MAAS provisions a box and it boots for cloud-init, the apt sources are ubuntu-mirror.localdomain
[19:10] <Destreyf> and juju/zookeeper and others are unable to install
[19:10] <Destreyf> but that's a 30 second fix with ssh after the MAAS provisions :P
[19:11] <jorge___> hi! just to confirm, if I want to use a mirror in the instances created by juju, there is no support yet? I have to use some ppa?
[19:12] <SpamapS> jorge___: right, there are some tricks you can do but ultimately you're stuck with whatever cloud-init chooses for your mirror.
[19:13] <Destreyf> SpamapS: i've got to run, i'll be playing with Subordinates, thank you very much for your time, if i get a chance, i'll see what i can contribute back to the community once i get up and running.
[19:13] <SpamapS> jorge___: on EC2 thats an S3 based mirror.. outside EC2 it will usually just be archive.ubuntu.com
[19:13] <jcastro> SpamapS: hey I noticed you've been promulgating while we were away, nice!
[19:13] <jcastro> 78 charms!
[19:13] <SpamapS> Destreyf: your interest is a *great* start. :)
[19:13] <SpamapS> jcastro: Yeah I promulgated a few of the charm contest entries.
[19:14] <jcastro> well done!
[19:14] <melmoth> koolhead17, i m back (got mysql started at least)
[19:15] <koolhead17> melmoth, cool. because i see SpamapS here :)
[19:15] <koolhead17> jcastro, hello sir
[19:15] <jcastro> hi!
[19:18] <jorge___> SpamapS: hum, ok! I was using 11.10 and now I'm using 12.04, in a private cloud. Before, I modified the image to point to my mirror. So, I have to do it again. I've read next releases are going to bring some features such as apt-mirror, proxy and parameters for cloud-init, as when we call nova boot -f <file>. Am I right?
[19:20] <SpamapS> jorge___: https://launchpad.net/juju/+milestone/galapagos  That is the current milestone's bugs.. and I see bug 897645 is there with a branch awaiting review...
[19:20] <_mup_> Bug #897645: juju should support an apt proxy or alternate mirror for private clouds <cloud-init:Fix Released> <juju:In Progress by hazmat> < https://launchpad.net/bugs/897645 >
[19:23] <SpamapS> actually my bad, its got a branch, but not proposed for merge just yet
[19:24] <jorge___> SpamapS: ok! Thanks. I've written a description of a specific problem here. Maybe it can be helpful ... I don't know if it is a bug. Someone on the openstack list suggested this. I dont know. http://pastebin.com/SnC4GLEi
[19:25] <SpamapS> jorge___: I saw your issue on the openstack list. I think its because juju over-uses groups and openstack isn't quite able to keep up with the way juju is abusing groups ;)
[19:26] <SpamapS> anyway, lunch time
[19:26] <koolhead17> is someone participating or already registered for the juju charm session for LISA12?
[19:26] <koolhead17> https://www.usenix.org/conference/lisa12/call-for-participation
[19:26] <koolhead17> https://www.usenix.org/conference/lisa12/workshops-training-program-and-bofs#training
[19:26] <jcastro> hazmat is registering for lisa12
[19:26] <SpamapS> koolhead17: we are submitting talks, and I hope we'll do a full charm school
[19:26] <koolhead17> looks like a good place for a charm school
[19:27] <koolhead17> hazmat, few more days left :)
[19:27] <SpamapS> yeah its close to me too, so I can give one ad-hoc in the hallway if they say no ;)
[19:27]  * SpamapS goes to lunch
[19:29] <marcoceppi> hazmat: good luck at LISA - land of neck beards, perl, BOFH, and 1990's Linux Guy
[19:29] <koolhead17> marcoceppi, i saw some of them at uds too :D
[19:29] <SpamapS> marcoceppi: according to Robbie W, the crowd from LISA11 was super excited about juju
[19:30] <marcoceppi> SpamapS: wow, I guess it's changed a lot from LISA08
[19:30] <SpamapS> marcoceppi: they're a great target, as they're not interested in "learning" the cloud, but they're going to get dragged into it anyway
[19:31] <SpamapS> best that we be the grease in those old squeaky wheels
[19:31] <SpamapS> marcoceppi: I thought the same about it too, but Robbie found that the crowd was realizing they can't just keep buying boxes ... the cloud is coming :)
[19:31] <marcoceppi> Cool, that's a great audience to target then
[19:32] <koolhead17> SpamapS, i read the site's mention of the term "Cloud", which was kind of a happy feeling :P
[19:42] <jcastro> SpamapS: m_3: hangout invite incoming!
[19:44] <Destreyf> SpamapS: sorry to pop back in, on the requires, i see a JSON string, but in MySQL the requires statement is straight YAML, do i need to just convert the JSON to YAML?
[19:44] <SpamapS> Destreyf: heh.. json is a subset of yaml ;)
[19:45] <Destreyf> lol hadn't ever thought of it that way.
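[The practical upshot for metadata.yaml: because JSON is a subset of YAML, a JSON requires stanza can be pasted into the YAML file as-is. A hedged illustration; the db/mysql field names mirror the mysql charm's interface, but verify them against the actual charm:]

```
# Block-style YAML, as written in the mysql charm:
requires:
  db:
    interface: mysql
---
# The same stanza in JSON (flow) style -- also valid YAML:
{"requires": {"db": {"interface": "mysql"}}}
```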
[19:45] <thomi> Hi - I've been writing a juju charm to deploy quassel-core, and I'd love someone to take a look at it and give me a few pointers. What's the recommended way to do this? Just push a branch to launchad?
[19:46] <SpamapS> thomi: https://juju.ubuntu.com/Charms has some pointers
[19:46] <jcastro> there are steps there to follow ^
[19:47] <thomi> ahhhhh, those steps aren't in the "create a charm" tutorial...
[19:48] <thomi> swesome, thanks.
[19:48] <thomi> *awesome even.
[19:49] <Destreyf> SpamapS: sorry, that was a stupid question on my part :P
[19:55] <Destreyf> when i bootstrap my server, i get "agent-state: not-started" and it doesn't change.
[19:56] <Destreyf> does that just mean i need to deploy, or?
[19:56] <SpamapS> Destreyf: your servers may be busy doing stuff
[19:56] <Destreyf> it shouldn't be, it's a fresh box that was set up last week and hasn't been touched since.
[19:59] <Destreyf> i'm getting an error that occurs every minute in juju debug-log
[19:59] <Destreyf> http://pastie.org/private/lw3emg4ygqdznfi1ozs9q
[20:01] <Destreyf> and just so it can be seen: http://pastie.org/private/q69fcgvfnt498sgmifu3a <- that's the provision.py just in case.
[20:01] <SpamapS> Destreyf: that means you don't have enough nodes in maas
[20:01] <SpamapS> Destreyf: make sure they're 'accept and commission'ed
[20:02] <Destreyf> http://i47.tinypic.com/2ykj1ap.png <- 3 nodes all say ready
[20:03] <Destreyf> i did try the PPA of juju at one time, however i reverted back, maybe something related?
[20:16] <Destreyf> hmm, i tried to do a cloud-init by hand on the node, and am getting a 401 back from the instance-id, something else must be going on.
[21:08] <_mup_> Bug #999338 was filed: juju should complain/error on unknown charm metadata <juju:Confirmed> < https://launchpad.net/bugs/999338 >
[22:02] <ihashacks> SpamapS: all works well with the add-relation for the nagios charm unless you have two units for a service
[22:03] <ihashacks> "members" are separated by a newline then a , instead of just a ,
[22:11] <ihashacks> I'm thinking an rstrip is needed somewhere around line 74 of /var/lib/juju/units/skymon-0/charm/hooks/common.py
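[The fix ihashacks is pointing at can be sketched in a few lines: strip each member name before joining, so a trailing newline from relation data (seen under the local provider) cannot split the generated hostgroup line. This is a hedged sketch, not the nagios charm's actual common.py; the function name and its placement are assumptions:]

```python
def build_members_line(unit_names):
    """Join unit hostnames into a Nagios hostgroup 'members' value.

    Each name is stripped first, since relation data on some providers
    arrives with a trailing newline that would otherwise break the
    generated config line in two.
    """
    return ",".join(name.strip() for name in unit_names)

# Names with stray newlines still produce a single clean line:
print(build_members_line(["skysql-0\n", "skysql-1\n", "skysql-2"]))
# skysql-0,skysql-1,skysql-2
```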
[22:16] <koolhead17> jcastro, ping
[22:20] <SpamapS> ihashacks: thanks, I'll fix that
[22:23] <ihashacks> np. You keep updating, I'll keep testing. :)
[22:29] <Destreyf> SpamapS: Working on a fresh install of ubuntu 12.04. :P
[22:31] <Destreyf> SpamapS: off i go to the server room again. :D
[22:31] <SpamapS> Destreyf: no remote kvm?
[22:36] <SpamapS> ihashacks: Hmm.. its working fine for me..
[22:36] <SpamapS>     members ip-10-252-6-67.us-west-2.compute.internal,ip-10-252-85-84.us-west-2.compute.internal
[22:41] <SpamapS> Hrm.. I'm starting to think that internal hostname is not as useful as service unit..
[22:41] <SpamapS> mysql-0 looks better than ip-x9134815901
[22:41] <imbrandon> heh
[22:44] <ihashacks> members skysql-0
[22:44] <ihashacks> ,skysql-1
[22:44] <ihashacks> that is how my "members" look in the generated hostgroup files
[22:45] <SpamapS> ihashacks: weird
[22:45] <ihashacks> instead of members skysql-0,skysql-1
[22:45] <SpamapS> ihashacks: are you using local provider? maas?
[22:45] <ihashacks> I get that if I 1) add-relation to a single-unit service (works) and then add-unit a second (fails) or 2) add-relation to a new service that already has two units (fails)
[22:46] <ihashacks> local provider (which I understand to be the red-headed step-child right now) :)
[22:46] <SpamapS> ihashacks: I added a unit to a working service. Got no newlines in there.
[22:46] <SpamapS> ihashacks: I actually did most of my dev on the local provider
[22:46] <SpamapS> but I didn't test add-unit
[22:49]  * SpamapS tries it in local provider
[22:55]  * SpamapS lands changes in charm-tools to set a 'maintainer' field.
[22:57] <ihashacks> Manually removed newline in the hostgroup, Nagios starts up. juju add-unit a 3rd unit and the original newline plus a second new one is there.
[22:57] <ihashacks> members skysql-0
[22:57] <ihashacks> grrrr
[22:57] <ihashacks> ,skysql-1
[22:57] <ihashacks> ,skysql-2
[23:03] <SpamapS> ihashacks: I'm debugging now
[23:45] <SpamapS> ihashacks: ok, confirmed this only happens on local provider.. now to figure out why
[23:45] <SpamapS> I think I know
[23:48] <SpamapS> ihashacks: ok, fix pushed to lp:charms/nagios
[23:49] <SpamapS> ihashacks: it takes a while for that to make it to the live charm store
[23:50] <ihashacks> I've noticed, http://jujucharms.com/charms/precise/nagios still shows the notes from 2011/10/12 even though I know you fixed it. :)
[23:50] <ihashacks> THANK YOU for the help. When this shows up in the store I'll beat on it again and let you know what else I find.
[23:55] <SpamapS> ihashacks: unfortunately, they are not linked..
[23:55] <SpamapS> ihashacks: so sometimes the store has stuff that is not in jujucharms.com
[23:55] <SpamapS> and vice-versa
[23:56] <SpamapS> hazmat: ^^ note that this charm is woefully out of date on jujucharms.com .. any ideas?
[23:57] <Destreyf_> Hola
[23:57]  * hazmat pokes around
[23:57] <Destreyf_> SpamapS: Hey, i had a question in regards to the deploy command you listed earlier.
[23:58] <Destreyf_> "juju deploy ubuntu rabbit-and-mysql" is ubuntu an actual charm, or do i use any charm in its place?