[02:36] <TheChistoso> when i run "juju status" i get: "the authenticity of host '...' can't be established...are you sure you want to continue connecting? ..." i've run this once before and i assumed it added the host to known_hosts -- why is it prompting again? and what can i do to fix it?
[03:34] <hazmat> TheChistoso, if it's the same machine, it should get added to known_hosts.. but if you create a new env, it's a new machine. juju isn't doing ssh cert fingerprint management. another option (with some risk) is to disable host fingerprint checking for some ip range corresponding to the cloud provider if possible.
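hazmat's workaround can be sketched as an ssh client config stanza (the IP range below is a placeholder for your provider's range, and disabling checking carries the man-in-the-middle risk he mentions):

```
# ~/.ssh/config -- 10.53.0.* is a hypothetical provider range
Host 10.53.0.*
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
```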
[04:05] <TheChistoso> how do i tell juju to use a particular machine?
[04:06] <TheChistoso> hazmat: oddly enough, i just ran it again and the problem's gone
[04:09] <hazmat> TheChistoso, cool.. but to answer the question, you either keep the env around, or try one of the ssh workarounds. juju's philosophy is machines are ephemeral.. and that services are what's important. it's worth a bug report though
[04:09]  * hazmat pokes around
[04:10] <hazmat> its bug 892552
[04:10] <_mup_> Bug #892552: juju does not extract system ssh fingerprints <juju:Confirmed> <juju-core:Confirmed> <https://launchpad.net/bugs/892552>
[04:10] <TheChistoso> i understand the philosophy but i'm using it w/ maas and i only have a set # of machines
[04:10] <hazmat> ah
[04:11] <TheChistoso> so i'd like one machine to run multiple services
[04:12] <hazmat> TheChistoso, with pyjuju, jitsu deploy-to, or with juju-core, deploy --force-machine achieves that goal
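A sketch of both forms hazmat names (machine number and service name are examples):

```shell
# pyjuju, via the jitsu plugin: deploy mysql onto existing machine 0
jitsu deploy-to 0 mysql

# juju-core equivalent
juju deploy --force-machine 0 mysql
```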
[04:12] <hazmat> fwiw http://linuxcommando.blogspot.com/2008/10/how-to-disable-ssh-host-key-checking.html
[04:19] <TheChistoso> couple questions...how do i cancel a pending deploy? and how do i add add'l machines if, let's say, a new server has arrived and i just racked it?
[04:20] <TheChistoso> (using maas + juju)
[04:23] <TheChistoso> okay figured out the first -- juju destroy-service
[04:40] <TheChistoso> hazmat: tyvm, btw, b/c jitsu seems to be working
[04:46] <TheChistoso> so i expected that exposing mysql would open up port 3306 but it didn't seem to
[04:57] <TheChistoso> and when is juju 2.0 due?
[06:17] <TheChistoso> anybody available to answer maas+juju questions?
[10:57] <evilnickveitch> does keypair auth work for juju core on HP now?
[10:57] <mgz> there's a branch to be landed shortly
[10:57] <evilnickveitch> mgz, cool
[11:34] <jair> I have been hearing a lot about juju but have one specific question, is it free software?
[11:35] <jair> I can use and download ubuntu I know that but what about the juju instance?
[11:36] <jair> just checking making sure this is not something like eucalipto something possible to use only if you pay to canonical?
[11:39] <jair> I think this is the right channel
[11:41] <marcoceppi> jair: Juju is free and open source software provided by Canonical
[11:43] <jair> opensource is different than free software right?
[11:43] <marcoceppi> jair: no, There's free software, then there's open source software. This happens to be both
[11:44] <jair> marcoceppi: so if I install debian can I use juju in my environment? or will only work with ubuntu versions and ony latest version of Ubuntu correct?
[11:44] <jair> marcoceppi: interesting
[11:45] <jair> marcoceppi: I really appreciate your help and willingness to clarify my concerns
[11:45] <marcoceppi> jair: I believe there are only packages for ubuntu at the moment, but you can compile the source and use it on Debian, nothing's stopping you. It's my understanding we'll be supporting other platforms in the near future
[11:50] <jair> marcoceppi: I see, thank you very much my friend!
[11:51] <jair> I will be doing some testing at home
[11:51] <marcoceppi> No problem :)
[11:51] <jair> I have seen it a few times on the youtube presentations.
[11:52] <jair> looks cool, also I am trying to learn bash and python, and some of the charms are written in those languages.
[11:52] <jair> which is very cool
[11:52] <marcoceppi> jair: you can find the quick "getting started" guide on the homepage: https://juju.ubuntu.com/get-started/ and feel free to ask here or on the mailing list if you have any questions
[11:53] <jair> sounds good
[11:54] <jair> marcoceppi: I will definitely get my hands on it
[11:57] <jair> thank you
[11:58] <jair> Thank you for all the information and the links to knowledge
[12:04] <Marlinc> Is it possible to allow multiple architectures using a constraint? So my environment can have 32 bit and 64 bit machines?
[12:05] <rbasak> all: Marlinc has been hit by bug 1064291, asked in #maas, and I directed him here. I think another solution might be to disable constraints entirely, but I'm not sure how to do that.
[12:05] <_mup_> Bug #1064291: Default constraints make no sense on MAAS <arm> <juju:New> <Release Notes for Ubuntu:Invalid> <https://launchpad.net/bugs/1064291>
[12:05] <Marlinc> Jup
[12:08] <Marlinc> Why is it marked as invalid by the way
[12:09] <marcoceppi> Marlinc: it was marked invalid for the ubuntu-release-notes project
[12:09] <Marlinc> Ah okay I don't get the Launchpad issue tracker quite well yet
[12:09] <Marlinc> Ah I see
[12:11] <marcoceppi> Marlinc: So you can change the architecture at any time, either on a per-deploy basis or the defaults altogether with juju set-constraints. The defaults can only be changed /after/ a bootstrap though. So you'll still have to juju bootstrap --constraints arch=arm before being able to do something like juju set-constraints "mem=256m arch=i386" or whatever you'd like to change
[12:12] <marcoceppi> and the defaults only live on for that environment, they won't carry over between bootstraps or other environments
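The sequence marcoceppi describes might look like this (the constraint values are just the examples from the discussion):

```shell
# the first constraint has to ride along with bootstrap
juju bootstrap --constraints arch=arm

# after bootstrap, the environment-wide defaults can be changed
juju set-constraints "mem=256m arch=i386"

# a per-deploy constraint overrides the environment default
juju deploy mysql --constraints arch=i386
```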
[12:12] <Marlinc> So if I would use amd64 as default I would be able to use a i386 node inside that environment? At all
[12:13] <marcoceppi> Marlinc: You can change the arch for any machine not yet deployed at anytime. Let me find the doc on machine constraints
[12:13] <Marlinc> https://juju.ubuntu.com/docs/constraints.html ?
[12:14] <marcoceppi> But you can do juju deploy mysql --constraints arch=i386; juju deploy wordpress --constraints arch=amd64; juju deploy nfs --constraints arch=arm and so long as the provider can provide those arch it'll work
[12:14] <marcoceppi> Marlinc: yup, that's it
[12:14] <marcoceppi> So you can have mixed arch (and even mixed series) machines in a deployment
[12:14] <Marlinc> Okay
[12:14] <Marlinc> Would be nice if it would allow it by default though
[12:15] <Marlinc> Without having to specify an arch
[12:15] <marcoceppi> Marlinc: Well, there's a hair-splitting scenario there. I see where you're coming from with maas. The majority of people either don't care about the arch or want them all to be the same. It'd be tedious to have to specify it every time (hence why I see where you're coming from)
[12:16] <marcoceppi> Hopefully the dev team can come up with a decent solution to this problem
[12:16] <marcoceppi> Marlinc: which version of Juju are you using?
[12:17] <Marlinc> 0.7
[12:18] <mgz> Marlinc: you can enable multiple arches trivially with `juju set-constraints arch=any` on your environment
[12:18] <Marlinc> Ah
[12:19] <mgz> arguably that should be the default, indeed
[12:19] <Marlinc> Ah great thank you very much
[12:19] <Marlinc> I think that would be a great default too
[12:25] <rbasak> mgz: is there also a way to do that with mem? I'd like to document the workaround in the bug.
[12:25] <mgz> mem=0 is fine
[12:25] <mgz> as it's really a gte (>=) comparison
[12:25] <rbasak> mgz: what's the argument during the bootstrap command, please?
[12:26] <rbasak> Does --constraints arch=any,mem=0 work?
[12:26] <rbasak> Or is there some other way of separating that?
[12:27] <Marlinc> Deploy MySQL on a machine with at least 32GiB of RAM, and at least 8 ECU of CPU power (architecture will be inherited from the environment, or default to amd64):
[12:27] <Marlinc> $ juju deploy --constraints "cpu=8 mem=32G" mysql
[12:27] <mgz> it's "arch=any mem=0" generally
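Putting mgz's answers together, the workaround for the MAAS default-constraints bug would be (a sketch):

```shell
# at bootstrap time
juju bootstrap --constraints "arch=any mem=0"

# or later, as the environment-wide default
juju set-constraints "arch=any mem=0"
```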
[12:28] <rbasak> Thanks!
[12:28] <mgz> hm, thought this was already in the maas docs, but maybe it only made it to launchpad bugs/maas-devel list?
[12:28] <marcoceppi> any, good to know
[12:29] <rbasak> I'm not sure. I'm only aware of the bug.
[12:29] <mgz> it's worth noting set-constraints on the environment, as otherwise you need to remember --constraints on every deploy
[12:29] <rbasak> Doing it on bootstrap sets the default environment constraints, doesn't it?
[12:30] <mgz> ah, maybe it does
[12:59] <hazmat> it does
[13:36] <jcastro> heya hazmat
[13:36] <jcastro> looks like the docs aren't updating
[13:36] <jcastro> I pushed an update like 2 weeks ago and the docs haven't generated
[13:36] <jcastro> I thought that was a cron job?
[13:55] <hazmat> jcastro, it should be cron'd, as for the location/result its in webops hands.
[14:03] <jcastro> ok so I should just file a general RT you think?
[14:03] <hazmat> jcastro, yup
[14:03] <jcastro> any info I can put there, like what machine it's on or anything?
[14:06] <hazmat> jcastro, sorry i don't have anything additional.. i've got a separate doc cron job running on jujucharms.com/docs but i handed over the domain, and its scheduled to be transitioned to a new backend/frontend in a week or two.
[14:07] <hazmat> evilnickveitch, do you know anything re doc deploy?
[14:07] <jcastro> yeah it looks like that isn't being regenerated either
[14:07] <evilnickveitch> hazmat, for juju? no, but I know the maas one is broken
[14:09]  * hazmat pokes around
[14:53] <jcastro> hazmat: hah man
[14:53] <jcastro>  Last Generated on Nov 22, 2012. Created using Sphinx 0.6.4.
[14:53] <jcastro> stay classy juju docs
[14:54] <jcastro> RT filed
[14:56] <hazmat> jcastro, that's funny
[14:58] <jcastro> want me to CC you on all this stuff or want me to just deal with it with IS?
[14:58] <hazmat> jcastro, cc me pls
[15:00] <hazmat> jcastro, we might have disabled it because the makefile gets executed and has upload/commit rights from a large group
[15:00] <hazmat> interesting
[15:00] <hazmat> the repo changed locations
[15:01] <hazmat> jcastro, we moved the repo from ~charm-contributors/juju/docs/ to  ~charmers/juju/docs
[15:02] <jcastro> oh ok
[15:02] <jcastro> so the cron is probably still there
[15:02] <jcastro> we just moved the source out from under them
[15:10] <Marlinc> My Juju is having an issue when using it with MAAS: provision:maas: juju.agents.provision ERROR: Cannot get machine list
[15:11] <Marlinc> It also throws "ProviderInteractionError: Unexpected TimeoutError interacting with provider: User timeout caused connection failure." errors
[15:11] <Marlinc> Anyone with something on the top of their head what this could be?
[15:24] <hazmat> Marlinc, does the maas cli work?
[15:24] <hazmat> Marlinc, just wanted to verify maas is up and responding to on its api
[15:25] <Marlinc> Yep it is
[15:28] <Marlinc> It is hazmat :)
[15:28] <hazmat> hmm
[15:30] <hazmat> Marlinc, could you save the last few hundred lines of the provisioning log, and pastebin it perhaps. next i'd try restarting the provisioning agent..  ls /etc/init for exact agent name..  i think its $ service juju-provisioning-agent restart
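hazmat's steps as commands on the bootstrap node (the log path and job name are best guesses; check /etc/init for the exact upstart job name):

```shell
# find the exact upstart job name
ls /etc/init | grep juju

# capture the tail of the provisioning log for a pastebin
tail -n 300 /var/log/juju/provision-agent.log   # path is an assumption

# restart the agent (name may be juju-provision-agent on 0.7)
sudo service juju-provision-agent restart
```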
[15:30] <Marlinc> Okay
[15:38] <Marlinc> I'll take a look later the router stopped working...
[15:50] <SpamapS> https://www.ohloh.net/p/juju/analyses/latest/languages_summary
[15:51] <SpamapS> tis official, goju > juju
[16:16] <hazmat> jcastro, docs.. http://jujucharms.com/docs/
[16:16] <hazmat> updated
[16:41] <jcastro> hey guys:
[16:41] <jcastro> "I just spun up juju on AWS and was pretty impressed with how damn easy it was (the toughest part was looking up my AWS keys). Now I'm really excited to play around with it and see how it interacts with Chef."
[17:23] <TheChistoso> i'm using maas+juju. i added a new machine and it's shown as "ready" in the node list. when i try and deploy mysql, it never picks up the new machine
[17:23] <TheChistoso> juju bootstrap worked fine, btw
[17:24] <TheChistoso> i started the deploy last night and 10 hours later juju status is still showing it as "pending"
[19:33]  * avoine is trying to find a way to handle multiple version of Django in the charm
[20:28] <sinzui> hey. I want to update the mongodb charm to restore a db from a dump on install. I don't think this can be done on install though. I think config-changed needs to do this, and I know the restore should only happen during installs or upgrade-charm hooks?
[20:28] <sinzui> Are there charms that have solved such a problem?
[20:37] <marcoceppi> sinzui: You _can_ have the charm do it during the install hook, since config-get is available during install.
[20:38] <marcoceppi> So you'd just have to amend the readme to say that it can be restored using this config, but only if the config option is seeded during deployment; it won't work after unit installation
[20:38] <avoine> sinzui: the postgresql charm has a dumpfile_location config variable
[20:38] <marcoceppi> Not sure whether that's charmer kosher, it seems like a grey area to me, so others please correct me if I'm wrong
[20:38] <sinzui> marcoceppi, understood.
[20:39] <sinzui> avoine, thank you! I will look
[20:41] <sinzui> marcoceppi, I think the "it won't work after unit installation" is good as the goal is to only do this before there is a database.
[20:42] <sinzui> marcoceppi, I also imagine this should only happen if the charm was configured to be the master during deploy. Slaves don't need to restore.
[20:43] <marcoceppi> sinzui: So, I'm going to say as long as you document that caveat in the readme it'll be okay. Typically configuration options shouldn't be "immutable" in a sense and should react with a config-changed hook, but that's not always possible
[20:46] <sinzui> marcoceppi, yep. this immutable aspect is indeed my concern. I was worried that I need some persistent means to know that the db was restored and not to do it again.
[20:46] <marcoceppi> sinzui: You could do that with a file marker, something like touch .db-restored and check if that file exists every time the config-changed hook runs
[20:47] <sinzui> marcoceppi, in the charm's dir, or in the location where the db restored too?
[20:47] <marcoceppi> sinzui: in the charm's directory, since the hooks run with the root of the charm as its cwd
[20:49] <marcoceppi> well, anywhere really. I typically just put it in the cwd
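The marker-file idea can be sketched as a hook fragment (the marker name comes from the discussion; the restore command itself is hypothetical):

```shell
#!/bin/bash
# hooks run with the charm dir as cwd, so a relative marker path works
MARKER=".db-restored"

restore_once() {
    # skip if a previous run already restored the dump
    if [ -f "$MARKER" ]; then
        echo "already restored"
        return 0
    fi
    # ... invoke the real restore here, e.g. mongorestore (hypothetical) ...
    touch "$MARKER"
    echo "restored"
}
```

config-changed would call restore_once; second and later runs are no-ops because the marker persists between hook invocations.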
[20:49] <sinzui> marcoceppi, does upgrade-charm overwrite or replace the unit's charm dir?
[20:50] <marcoceppi> sinzui: that's a really good question
[20:50] <sinzui> I can drop a file into one of my running units, then do an upgrade to see
[20:51] <marcoceppi> I want to say it overwrites, but doesn't replace the entire directory, preserving files created outside of the charm.
[20:51] <marcoceppi> sinzui: since I know a lot of charms store data in this fashion
[20:51]  * marcoceppi hopes that's the case
[20:51] <sinzui> I bet webops know since they add files to charms and upgrade all the time
[20:52]  * thedac reads backscroll
[20:52] <sinzui> thedac does upgrade-charm overwrite or replace the unit's charm dir?
[20:52] <thedac> it overwrites and does not replace
[20:53] <thedac> so items that were in the charm and have been removed may still be in the charm dir on an instance
[20:53] <sinzui> okay, that is probably best since it allows for a rollback option
[20:54] <sinzui> thanks thedac, marcoceppi, avoine.
[20:54] <thedac> no problem
[20:54] <marcoceppi> o/
[21:00] <TheChistoso|2> anybody around that could help me diagnose a maas+juju issue please?
[21:20] <hazmat> sinzui, in future it employs git locally for charms to yank dead files
[21:21] <hazmat> sinzui, the local charm is still on disk in archive/zip form in either case
[21:21] <sinzui> okay, thanks for the warning.
[21:21] <hazmat> TheChistoso|2, could you pastebin or send the provisioning agent log /var/log/juju on the bootstrap node
[21:36] <TheChistoso|2> hazmat: http://pastebin.com/TS4kJg7b
[21:37] <TheChistoso|2> it helps to know where to look -- tyvm
[21:37] <TheChistoso|2> ProviderInteractionError: Unexpected TimeoutError interacting with provider: User timeout caused connection failure.
[21:38] <TheChistoso|2> and then: exceptions.TypeError: an integer is required
[21:38] <marcoceppi> TheChistoso|2: What's "maas-server" set to in the environments.yaml?
[21:38] <hazmat> TheChistoso|2, on the server what version of juju is being run.. ie. output of dpkg -s juju
[21:39] <TheChistoso|2> hazmat: juju-0.7
[21:39] <TheChistoso|2> marcoceppi: maas-server: http://10.53.0.102/MAAS/
[21:39] <mwhudson> oh i know this one
[21:40] <mwhudson> you need to either (1) put the port number in the url
[21:40] <mwhudson> so
[21:40] <mwhudson> maas-server: http://10.53.0.102:80/MAAS/
[21:40] <mwhudson> or
[21:40] <mwhudson> bootstrap on something newer than precise
[21:40] <marcoceppi> TheChistoso|2: I believe the "exceptions.TypeError: an integer is required" error is because you're not explicitly stating the port. Try setting it to http://10.53.0.102:80/MAAS/
[21:40] <mwhudson> TheChistoso|2: ^^
[21:40] <TheChistoso|2> ah
[21:40] <marcoceppi> mwhudson: :D
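The fix, spelled out as the relevant environments.yaml fragment (other keys omitted; the environment name is an example):

```yaml
# environments.yaml -- note the explicit :80 in maas-server
environments:
  maas:
    type: maas
    maas-server: http://10.53.0.102:80/MAAS/
```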
[21:40] <hazmat> TheChistoso|2, you'll need to restart the provisioning agent as well
[21:41] <TheChistoso|2> i was concerned about bootstrapping on anything later b/c i wasn't sure how compatible the various charms are on anything later
[21:43] <TheChistoso|2> hazmat: what command would i use to restart the provisioning agent? not sure which one it is in my list of services
[21:44] <hazmat> TheChistoso|2, upstart.. via sudo service juju-provisioning-agent restart.. you might need to double check the name from the available list in /etc/init
[21:44] <mwhudson> TheChistoso|2: probably putting the port in is the sensible thing
[21:44] <hazmat> the name of the upstart service that is
[21:45] <TheChistoso|2> mwhudson: i added the port
[21:45] <mwhudson> i didn't realize you could recover from this, i re-bootstrapped when this bit me...
[21:46] <hazmat> mwhudson, i do remember that bug.. but it  seems strange though.. the fix for maas ports went in 2012-06 from the changelog.. unless this really isn't juju-0-7 but 0-5 from precise.
[21:46] <TheChistoso|2> the name of the service doesn't seem obvious -- i should be doing this on the bootstrap node, correct?
[21:46] <hazmat> TheChistoso|2, yes
[21:46] <mwhudson> hazmat: it's a bug in juju isn't it?
[21:46] <mwhudson> errrr
[21:46] <mwhudson> hazmat: it's a bug in maas isn't it?
[21:46] <mwhudson> bigjools will be super happy to discuss the maas sru
[21:46] <mwhudson> i'm sure
[21:47] <mwhudson> hm, maybe not, i dunno
[21:48] <TheChistoso|2> i can't find the name of the job/service. perhaps i'll just reboot :/
[21:48] <hazmat> mwhudson, the traceback is from juju.. the issue was in the maas provider in juju i thought
[21:48] <mwhudson> yeah, just looking at the bug
[21:49] <hazmat> TheChistoso|2, ls /etc/init  .. should be prefixed juju-provisioning..
[21:49] <hazmat> TheChistoso|2, then $ service juju-provisioning-agent restart
[21:49] <TheChistoso|2> hazmat: not seeing that -- there's nothing prefixed w/ juju
[21:50] <TheChistoso|2> okay i finally guessed it: juju-provision-agent
[21:50] <hazmat> TheChistoso|2, this should be on the juju bootstrap node
[21:50] <hazmat>  /etc/init should definitely have juju upstart files..
[21:51] <TheChistoso|2> hazmat: ya know what -- i wasn't paying close enough attention. i was looking in /etc/init.d/ (force of habit)
[21:51] <hazmat> TheChistoso|2, cool.. now you'll need to change your client config re maas port and try another command.. juju deploy/add-unit etc.. which will force a sync
[21:51] <hazmat> hmm.. i might have spec'd the wrong order there.. the restart should happen after the sync.. else the provisioning agent is just going to run into the old setting
[21:56] <TheChistoso|2> hmm...doesn't seem to be taking. i changed it in my environments.yaml, restarted the provisioning agent, destroyed the service, deployed the service again (mysql), restarted the provisioning agent again for good measure and...and just as I was writing this I saw the node power on so I guess it's working :D
[21:57] <TheChistoso|2> so my ultimate goal will be to deploy open stack (grizzly) through juju. wish me luck. (c:
[21:57] <TheChistoso|2> according to the guide, it says 28 nodes would be required. i'm doing a PoC -- i have 16 machines available. hope that'll work...
[21:59] <TheChistoso|2> i would have bootstrapped w/ raring, but maas didn't download any raring images -- is that a known issue?
[21:59] <TheChistoso|2> oh and my apologies -- i didn't properly thank you guys...THANK YOU!
[22:05] <hazmat> TheChistoso|2, 28 is for full HA setup.. you can use jitsu deploy-to for certain services (non-conflicting) to have them co-located on the same machine..
[22:05] <hazmat> and good luck
[22:05] <TheChistoso|2> i intend to do that -- and thank you
[22:06] <TheChistoso|2> i was actually hoping someone had an existing set of charms for co-locating an intelligent set of openstack services
[22:07] <TheChistoso|2> it recommends doing it atop precise -- should i instead use quantal or raring (when available)?
[22:08] <TheChistoso|2> lol that question didn't sound right -- what i meant to ask is, "do you know of any particular problems choosing a more up-to-date release?"
[22:08] <hazmat> TheChistoso|2, the charms install from the cloud archive, depending on the release version you choose, so the openstack bits are typically current
[22:09] <sarnold> TheChistoso|2: check charmstore for charm availability, not all charms are on all releases
[22:09] <hazmat> TheChistoso|2, er.. the openstack charms install packags from the cloud archives.. based on config.. i'd stay on precise
[22:11] <TheChistoso|2> hazmat: alright -- will do.
[22:12] <TheChistoso|2> should exposing mysql open port 3306?
[22:16] <TheChistoso|2> juju status isn't showing port 3306 as an open port
[22:37] <TheChistoso|2> i installed mysql and wordpress successfully
[22:37] <TheChistoso|2> i then removed the relation b/t mysql and wordpress and it looked like it did the correct thing
[22:37] <TheChistoso|2> then i ran juju destroy-service wordpress
[22:37] <TheChistoso|2> site's still running
[22:38] <TheChistoso|2> is that correct?
[22:39] <TheChistoso|2> shouldn't it be off or stopped since there's no other service that'd be listening on the port?
[22:44] <marcoceppi> TheChistoso|2: there's a bug where the stop hook isn't executed on older versions of juju. if you want to remove the machine run terminate-machine with the machine number as a parameter
[22:45] <TheChistoso|2> so newer versions of juju aren't in the precise images that maas downloads?
[22:46] <TheChistoso|2> and newer versions aren't in the apt repos for precise?
[22:47] <TheChistoso|2> i was surprised when you had me check the juju version and it was rather old (isn't the latest something like 1.10?)
[22:48] <TheChistoso|2> (while I'm on this -- is there an expected release date for juju 2.0?)
[23:51] <hazmat> TheChistoso, mysql charm doesn't expose port
[23:53] <hazmat> TheChistoso|2, the new version of juju is a new implementation, it's not yet available in the distro packages, but via ppa or download.
[23:54] <TheChistoso|2> is it possible to use a newer version when maas is commissioning the nodes juju uses?