[04:16] <Marek1211> Hi, I was just wondering about scaling services. For example, when we scale a WordPress service for load balancing by adding units... does it automatically manage replication of data like pictures from our main WordPress service to the others? Or how does it work?
[04:39] <jose> Marek1211: yes, as they scale horizontally and use the same database
[04:40] <jose> all that data is stored in the db
[04:42] <jose> I have to leave, but if you have questions you can also email juju@lists.ubuntu.com
[07:43] <Mosibi> When doing a 'juju bootstrap -e openstack', the bootstrapped instance is trying to connect (apt) to internet sources.
[07:43] <Mosibi> How/where can I configure it so that our own internal mirror is used for that?
[07:54] <jam> Mosibi: there is a configuration item "apt-http-proxy" and "apt-https-proxy", but I don't know that there is a way to tell it to use a different mirror entirely
[07:56] <Mosibi> jam: there are indeed several proxy config items, and setting http-proxy works, but we will end up with a totally isolated (private) cloud, so I must have the possibility to point at our own mirror.
[07:56] <Mosibi> jam: but thx for looking and answering!
[08:01] <lazyPower> Mosibi: you may have to stuff some of that info to rewrite the proxies in cloud-init. I would think we'd have an easier way to do it, but that's all I'm coming up with
[08:01] <lazyPower> and i'm far from an expert
[08:01] <lazyPower> ..on that subject.
[08:02] <Mosibi> lazyPower: how does the integration with juju and cloud-init work?
[08:02] <jam> Tim Penhey (thumper) was the one who did the original proxy setting stuff. He would be the one to ask why there wasn't just an explicit "use this mirror" instead.
[08:03] <lazyPower> Mosibi: cloud-init is isolated from juju. What you would be accomplishing with cloud-init is adding the proxy configuration logic so that info is written on the host's first boot.
[08:03] <Mosibi> Can I include some cloud-init config file with juju bootstrap?
[08:04] <Mosibi> Or do I have to hack it myself in juju-core?
[08:04] <lazyPower> https://help.ubuntu.com/community/CloudInit
[08:04] <lazyPower> Mosibi: i'd do some light reading on cloud-init first to verify this is feasible.
[08:05] <lazyPower> Mosibi: and here's the openstack docs on using user-data with cloud-init: http://docs.openstack.org/user-guide/content/user-data.html
[08:05] <Mosibi> lazyPower: user-data can be used to solve my problem, but there are also specific apt hooks
[08:05] <Mosibi> but how do i include that with a bootstrap?
[08:06] <lazyPower> Mosibi: ah, that i'm not real familiar with and would redirect to Thumper, or the mailing list.
[08:06] <Mosibi> ack
[08:06] <Mosibi> thx!
[08:06] <Mosibi> Thumper is not online on IRC?
[08:07] <lazyPower> he's not around atm. he may be out today - i'm not positive.
[08:10] <Mosibi> lazyPower: thx... i will hang around :)
[08:11] <jam> lazyPower: thumper is in AU so it is after midnight for him right now.
[08:11] <jam> Mosibi: I think the mailing list is probably your best bet
[08:12] <lazyPower> jam: i thought he was a Kiwi
[08:12] <lazyPower> i guess that's close enough to be considered AU
[08:12] <jam> lazyPower: you're right
[08:12] <jam> AU is my shorthand for anywhere in the >+10 timezones, but I do know he is in NZ
[08:13] <lazyPower> I just remember that being important to distinguish in social situations... don't ask me why.
[08:13]  * lazyPower is feeling kind of loopy being up this early
[08:35] <Mosibi> lazyPower: jam: we have a 'go' freak/fan/expert in our team. I am going to ask him to look at the juju-core code. Maybe he can implement it..
[08:36] <lazyPower> that would be pretty excellent. We love community contributions
[13:45] <dpb1> hi lazyPower, sorry I missed our meeting on monday. I scheduled a last-minute vacation and forgot to decline my appointments.  :(
[13:46] <hazmat> Mosibi, user data supports mirror config; it's a matter of wiring that support through core, as an env config option used when rendering the cloud-init user data.
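[Editor's note] The user-data mirror support hazmat mentions corresponds to cloud-init's `apt_mirror` directive. A minimal sketch of such a user-data fragment, with a hypothetical internal mirror hostname (not taken from the discussion):

```yaml
#cloud-config
# Point apt at an internal mirror instead of the default Ubuntu archives.
# archive.internal.example.com is a placeholder hostname.
apt_mirror: http://archive.internal.example.com/ubuntu/

# Optional: an apt proxy, which is the part juju's apt-http-proxy
# setting already covers.
apt_proxy: http://proxy.internal.example.com:3142/
```

As the discussion notes, juju would still need to render this into the user-data it generates at bootstrap; the fragment above only shows the cloud-init side.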
[13:51] <lazyPower> dpb1: no worries. I realized it was better suited to follow up with the charm maintainer than the landscape team.
[13:51] <lazyPower> so i actually canceled it
[13:52] <dpb1> lazyPower: cool.  good to hear
[13:53] <lazyPower> dpb1: thanks for putting up with my overzealous templar charmer persona. He has been sacked and replaced with a more astute persona befitting a charmer.
[13:53] <dpb1> lazyPower: give the old chap my regards if you see him again. :)
[13:55] <lazyPower> Duly noted.
[13:56] <jrwren> Mosibi: can you let me know what you find on that front? I've been curious about it for a while.
[14:09] <allomov> hey, all! pardon me for ignorance, but what is this thing in juju-gui ? https://www.evernote.com/shard/s108/sh/bd43cfbf-7767-42b8-b1d1-9f874e8d882e/8582ddd01135755bd944cac368aef624
[14:10] <lazyPower> allomov: greetings. That's an indicator for how many services are attached to the subordinate charm.
[14:12] <Guest2308> Hi guys, I have a problem bootstrapping a juju environment with openstack. It creates the VM and installs all the software, but after the apt-get installation the command returns a "no instances found" error. Do you have any idea?
[14:13] <allomov> lazyPower: thank you for the answer. still not sure I've got it right. is it a count of external services or a count of "green relations"? could you tell me where I can read about it?
[14:14] <lazyPower> allomov: https://juju.ubuntu.com/docs/authors-subordinate-services.html
[14:14] <allomov> Guest2308: can you try to ssh to this instance ?
[14:15] <allomov> lazyPower: great. thank you one more time
[14:16] <Guest2308> allomov: yes I have ssh access to instance
[14:19] <Guest2308> 2014-06-18 14:16:27 INFO juju.cmd supercommand.go:302 running juju-1.18.4-precise-amd64 [gc]
[14:19] <Guest2308> 2014-06-18 14:16:27 DEBUG juju.agent agent.go:384 read agent config, format "1.18"
[14:19] <Guest2308> 2014-06-18 14:16:27 INFO juju.provider.openstack provider.go:202 opening environment "openstack"
[14:19] <Guest2308> 2014-06-18 14:17:27 ERROR juju.cmd supercommand.go:305 no instances found
[14:19] <Guest2308> 2014-06-18 14:17:29 ERROR juju.provider.common bootstrap.go:123 bootstrap failed: rc: 1
[15:28] <dpb1> tvansteenburgh: hey there -- I pushed an update: https://code.launchpad.net/~davidpbritton/charms/precise/apache2/avoid-regen-cert/+merge/221102  sorry about that!
[15:33] <leftyfb> where can I get help to understand how the nagios/NRPE charms work? Specifically, as an example, with an apache charm being monitored, why isn't the nagios charm monitoring port 80 and/or why isn't the NRPE charm monitoring the apache2 service locally?
[15:53] <automatemecolema> hypothetical question: what happens if I lose access to my zookeeper? I'm really afraid to use juju for enterprise consumption because of so many unknowns
[16:00] <tvansteenburgh> dpb1: thanks! i'll have a look after lunch
[16:00] <marcoceppi> automatemecolema: we haven't used zookeeper in over a year. I assume you mean the bootstrap node, but I have to ask: What version of juju are you using?
[16:00] <automatemecolema> yes I mean the bootstrap node
[16:01] <automatemecolema> the latest baked into 14.04 now
[16:03] <automatemecolema> Also, does anyone have a good place to point me on development around building charms with puppet?
[16:43] <automatemecolema> Something I've noticed using the trusty juju-gui charm is dragging and dropping your .yaml files into the canvas doesn't work so great
[16:43] <automatemecolema> Works fine with precise
[16:44] <automatemecolema> Or do I have it wrong, and my environment has to be bootstrapped with quickstart to get that bundled functionality?
[17:11] <dpb1> Can I mix and match local provider lxc and kvm containers?
[17:16] <automatemecolema> dpb1 I don't see why not
[17:16] <automatemecolema> You would specify two local environments
[17:18] <dpb1> automatemecolema: ok, good idea, thx
[17:18]  * dpb1 searches for special foo to get that work (something with an alternate port for the mongodb process)
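[Editor's note] One way to sketch what automatemecolema suggests is two entries in environments.yaml: one defaulting to LXC, one using KVM containers, with non-default ports so both state servers can coexist (the "special foo" dpb1 is searching for). Environment names and port numbers here are illustrative assumptions:

```yaml
# Hypothetical environments.yaml fragment; names and ports are made up.
environments:
  local-lxc:
    type: local        # lxc containers (the default container type)
  local-kvm:
    type: local
    container: kvm     # use kvm containers instead of lxc
    # avoid clashing with local-lxc's mongodb (37017) and API ports
    state-port: 37019
    api-port: 17072
```

You would then bootstrap each with `juju bootstrap -e local-lxc` and `juju bootstrap -e local-kvm`.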
[17:18] <automatemecolema> binford2k: so you're saying you feel like there are major concerns with running foreman in the areas of performance and reliability? That doesn't really answer the question, though, because if you can write something better, then why doesn't something better exist?
[17:20] <marcoceppi> tvansteenburgh: hey, I've got a merge on its way for charm-tools, last bug before release, will you be around to review in about 20 mins?
[17:35] <pmatulis> i did 'juju destroy-environment local' and now i would like to start fresh.  but neither 'juju bootstrap' ('environment is already bootstrapped') nor 'juju deploy wordpress' ('environment is no longer alive') works
[17:37] <sparkiegeek> pmatulis: what does juju status show?
[17:38] <pmatulis> sparkiegeek: it does show the previous machines (lxc containers).  weird
[17:38] <sparkiegeek> pmatulis: be brutal and try "juju destroy-environment --force local"
[17:42] <marcoceppi> tvansteenburgh: https://code.launchpad.net/~marcoceppi/charm-tools/require-series/+merge/223620
[17:44] <pmatulis> sparkiegeek: actually, i think i messed up my lxc machines.  how do i really start fresh with juju?  can i remove ~/.juju and do the bootstrap thing?
[17:45] <automatemecolema> I probably wouldn't remove juju
[17:45] <sparkiegeek> pmatulis: IIRC there's a "kill" plugin
[17:45] <sparkiegeek> pmatulis: https://github.com/juju/plugins/
[17:45] <sparkiegeek> try that first :)
[17:45] <automatemecolema> did you try the force parameter to kill the environment?
[17:45] <pmatulis> automatemecolema: yeah, that's what i did originally
[17:46] <automatemecolema> but juju status still shows the environment is still around?
[17:46] <pmatulis> yeah
[17:46] <pmatulis> can't i just remove some file(s) under ~/.juju ?
[17:46] <pmatulis> and then bootstrap thingy?
[17:47] <automatemecolema> you can remove the environments.yaml file and the .env file inside the environments directory
[17:47] <automatemecolema> then do a juju init
[17:47] <automatemecolema> and reconfigure your environments file and try another bootstrap
[17:48] <avoine> pmatulis: check this out: http://askubuntu.com/questions/403618/how-do-i-clean-up-a-machine-after-using-the-local-provider
[17:49] <pmatulis> avoine: wow ok, that's a lot of stuff but it looks like what i'm after
[17:50] <automatemecolema> avoine nice find
[17:51] <tvansteenburgh> marcoceppi: ack, will review this afternoon
[17:53] <arosales> pmatulis: I think you can safely keep your environments.yaml around. It's the ~/.juju/environments/local.jenv file that has tripped me up.
[17:54] <pmatulis> arosales: i removed it but still no dice
[17:54] <pmatulis> hmm, almost there:
[17:54] <pmatulis> juju bootstrap
[17:54] <pmatulis> ERROR cannot use 37017 as state port, already in use
[17:55] <arosales> pmatulis: if your home directory is encrypted you will need to make sure juju is using a mount point outside your home dir.
[17:55] <pmatulis> arosales: no encryption
[17:55] <arosales> ok
[17:55] <arosales> pmatulis: sounds like mongo is still running
[17:57] <pmatulis> arosales: killed it, but now back to square one:
[17:57] <pmatulis> juju bootstrap
[17:57] <pmatulis> Bootstrap failed, destroying environment
[17:57] <pmatulis> ERROR environment is already bootstrapped
[17:57] <arosales> after you kill mongo you'll need to clean up once more from your previous bootstrap
[17:57] <arosales> http://blog.naydenov.net/2014/03/remove-juju-local-environment-cleanly/ also had some nice script suggestions from the link automatemecolema provided
[17:58] <arosales> pmatulis: so consider killing mongo in the same swoop when cleaning up lxc and juju
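[Editor's note] The cleanup steps from the two links above amount to roughly the following sketch. Container names and paths are examples from this session, not exact commands from the links; review each line against your own setup before running anything:

```shell
# Force-destroy first, in case juju can still do it cleanly.
juju destroy-environment --force local || true

# Kill a leftover juju mongod holding the 37017 state port.
sudo pkill -f 'mongod.*37017'

# List and destroy leftover LXC containers from the local provider.
sudo lxc-ls --fancy
sudo lxc-destroy -f -n pmatulis-local-machine-1   # example container name

# Drop the stale environment state but keep environments.yaml.
rm -f ~/.juju/environments/local.jenv
rm -rf ~/.juju/local
```

This mirrors pmatulis's experience below: killing mongo alone is not enough; the stale bootstrap state must go too.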
[17:59] <pmatulis> grrr
[18:15] <pmatulis> bootstrap worked but the new wordpress/mysql setup failed:
[18:15] <pmatulis> http://paste.ubuntu.com/7664967/
[18:39] <pmatulis> wow i actually learned something.  i looked at the logs ~/.juju/local/logs/unit-wordpress-0.log and found & corrected the error
[18:39] <pmatulis> all good now, at least according to 'juju status'
[18:52] <tvansteenburgh> marcoceppi: require-series patch back to you
[18:54] <marcoceppi> tvansteenburgh: thanks, I was about to write tests, but got distracted
[18:59] <jose> hey mbruzek, is there any chance you could check the owncloud MP again today?
[19:01] <pmatulis> sudo find / -name charm | grep wordpress
[19:01] <pmatulis> /vol1/lxc_images/ubuntu-local-machine-1/rootfs/var/lib/juju/agents/unit-wordpress-0/charm
[19:01] <pmatulis> where do i find the actual charms?  after deploying wordpress & mysql i found that ↑
[19:01] <pmatulis> but nothing for mysql
[19:03] <tvansteenburgh> dpb1: i don't see a new commit on apache2/avoid-regen-cert
[19:05] <dpb1> tvansteenburgh: pushing now.  D'oh!
[19:05] <tvansteenburgh> :)
[19:05] <dpb1> r57
[19:05] <tvansteenburgh> cool, thanks!
[19:06] <dpb1> tvansteenburgh: I also want to make the same change against trusty, should that be a separate mp?
[19:06] <tvansteenburgh> eh, i think so? lazyPower?
[19:11] <tvansteenburgh> or marcoceppi, mbruzek? (see dpb1's questions above) i assume the answer is yes...?
[19:12] <mbruzek> dpb1 yes
[19:12] <marcoceppi> dpb1: yes
[19:12] <marcoceppi> unfortunately
[19:15] <dpb1> all: thanks
[19:20] <jcastro> marcoceppi, https://github.com/juju/docs/commit/6bdac65670c9fcedfa5421b615ee08cc0c9d9ccf
[19:20] <jcastro> what does this metadata field do in the markdown?
[19:20] <marcoceppi> what?
[19:20] <marcoceppi> rather, can you re-ask your question jcastro?
[19:20] <jcastro> see how he added
[19:21] <jcastro> + Title: blah blah
[19:21] <jcastro> in the markdown
[19:21] <jcastro> what does that mean?
[19:21] <marcoceppi> no idea, there's no plugin for that to my knowledge
[19:21] <marcoceppi> It's going to render like crap in the live docs
[19:22] <aquarius> jose, ping
[19:22] <jose> aquarius: pong
[19:22] <jcastro> marcoceppi, maybe he added stuff?
[19:22] <jose> looks like you read that post :)
[19:22] <aquarius> jose, you wanted my help to charm soonsnap?
[19:22] <marcoceppi> jcastro: I don't see it anywhere
[19:22] <aquarius> I did. :)
[19:22] <jose> aquarius: yeah, check http://ec2-54-85-96-127.compute-1.amazonaws.com/
[19:22] <marcoceppi> we already track metadata about the page in the navigation
[19:23] <jose> aquarius: when I click the buttons nothing happened, I just downloaded apache and cloned
[19:23] <aquarius> "ReferenceError: io is not defined" in the console.
[19:23] <aquarius> that looks relevant.
[19:23] <jcastro> marcoceppi, commits on truck without review? tsk tsk.
[19:23] <jose> aquarius: if you want, I can give you ssh access to the box so you can take a look
[19:23] <marcoceppi> we should get a bot lander to force no one being able to commit on trunk
[19:23] <aquarius> you don't have socket.io installed, by the look of it
[19:24] <marcoceppi> jcastro: like what core is doing
[19:24] <jose> hmm, /me checks
[19:24] <aquarius> jose, did you npm install?
[19:24] <jcastro> marcoceppi, https://github.com/juju/docs/commit/d8504af882455766389ccc095a0551ead78d13b4
[19:24] <jose> aquarius: actually, no
[19:24] <jcastro> marcoceppi, yeah, we should be doing that anyway
[19:24] <aquarius> that's likely to be a reasonable part of the problem, then
[19:24] <marcoceppi> jcastro: looks like he made a new plugin
[19:25] <aquarius> jose, note that it's a node application. It's not a pure client-side app. So the server is run with "node app.js", as you'll see from Procfile
[19:26] <jose> ah, ok
[19:26] <aquarius> (or from "npm start", as defined in package.json)
[19:26] <jcastro> o/ aq!
[19:26] <aquarius> It can't be pure client-side until everyone supports webrtc. :)
[19:26] <aquarius> heya jcastro!
[19:26] <jose> npm ERR! message failed to fetch from registry: socket.io, weird
[19:28] <jose> I'm just creating a brand new machine
[19:34] <jose> aquarius: any idea on why I may get http://paste.ubuntu.com/7665293/ ?
[19:35] <jose> fresh install, just installed npm
[19:35] <aquarius> erm
[19:35] <aquarius> should work
[19:35] <aquarius> everybody uses express.
[19:35] <aquarius> might just be npm weirdness
[19:35] <aquarius> ah
[19:36] <aquarius> you're using an ancient node
[19:36] <aquarius> use a newer one.
[19:36] <jose> hmm, those are the ones from the repos, I may need to get one from a PPA
[19:36] <aquarius> indeed, yeah
[19:37] <jose> do you think https://launchpad.net/~chris-lea/+archive/node.js/ is recommended?
[19:37] <aquarius> trusty has a modernish node
[19:37] <aquarius> if you're on something old, then use chris lea's ppa, indeed
[19:37] <aquarius> that's what I use
[19:37] <jose> awesome, thanks
[19:38] <jose> I'm on precise, so... :P
[19:48] <jose> aquarius: awesome, looks like I got it working. thanks! :)
[19:49] <aquarius> excellent.
[19:49] <jose> I hope you'll see that charm on the store soon :)
[19:50] <jose> aquarius: want it to be listed as soonsnap or pubphoto?
[19:51] <aquarius> don't know
[19:51] <aquarius> pubphoto was its original codename
[19:51] <jose> well, it's your app
[19:51] <jose> I can do whichever you like
[19:51] <aquarius> soonsnap is what the live real version is called
[19:51] <aquarius> maybe call it pubphoto
[19:51] <aquarius> so it doesn't conflict :)
[19:51] <jose> ok, pubphoto then
[19:52] <jose> I'll make sure to edit the index.html in order for it to display pubphoto and not soonsnap :)
[19:53] <jcastro> jose, I need to pick a new charm school schedule
[19:53] <jcastro> should I just put them directly on the onair cal?
[19:53] <jose> jcastro: you just let me know and it'll be done
[19:53] <jose> you can do that too
[19:53] <jcastro> I can add them
[19:53] <jose> bear in mind that there may be cases when I won't be available to host
[19:54] <jcastro> that's fine
[19:54] <jcastro> I can host
[19:55] <jcastro> I like to work too. :)
[19:56] <jose> :)
[19:56] <jose> remember to let me know when you can have that call
[19:56] <lifeless> jcastro: you like to work it
[19:56] <jcastro> I need to step it up before jose ends up doing everything, heh
[19:57] <jose> aquarius: also, you have your analytics code on the branch
[19:57] <aquarius> jose, feel free to take that out if you want
[19:57] <jose> cool, thanks
[20:30] <wrale_> can i deploy ceph osd and nova compute charms on the same node?  I have 67 nodes with two 3tb disks in each..  i'd like all nodes to be both ceph storage node and hypervisor for openstack
[20:30] <wrale_> *67 compute nodes
[20:36] <jose> hey guys, does anyone know how I can run a node.js app on a port lower than 1024 without being sudo/root?
[20:37] <jose> wrale_: if you notice, there's a machine number for each node. just do 'deploy charmname --to #' where # is the machine number
[20:37] <jose> not sure if it will work for openstack, though, I haven't played with openstack :)
[20:37] <wrale_> thanks jose.. i'll try that
[20:37] <jose> or you can do lxc:# instead of just # and it'll deploy it in an LXC container inside that machine
[20:37] <jose> wrale_: ^
[20:37] <jose> let me know how it went
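[Editor's note] The placement syntax jose describes looks roughly like this. Charm names and machine numbers are illustrative, not a recommendation for how to lay out ceph specifically:

```shell
# Co-locate two services on the same existing machine (machine 1).
juju deploy nova-compute --to 1
juju deploy ceph-osd --to 1

# Or isolate a service in an LXC container on that machine.
juju deploy mysql --to lxc:1
```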
[20:38] <wrale_> will do.. once my maas node installs and the rest comes together.. :)
[20:38] <wrale_> using 14.04 LTS now..
[20:38] <wrale_> first time
[20:39] <jose> awesome!
[20:39] <wrale_> maas on 12.04 hated me ..lol .
[20:42] <jose> if you have any troubles with maas, people in #maas may be able to help you
[20:44] <wrale_> they ignored me :)  but it's cool now, i hope
[20:51] <jrwren> jose: if you can use your own nodejs binary, CAP_NET_BIND_SERVICE might work.
[20:52] <jose> jrwren: figured it out, looks like setcap will be useful
[20:52] <jose> thanks :)
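[Editor's note] The setcap approach jose settles on looks like the following. Note that it must target the real binary (setcap does not follow symlinks), and it grants the capability to every app run with that node binary, so a dedicated copy of node is safer:

```shell
# Allow the node binary to bind ports below 1024 without root.
sudo setcap 'cap_net_bind_service=+ep' "$(readlink -f "$(which node)")"

# Verify the capability was applied.
getcap "$(readlink -f "$(which node)")"
```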
[21:39] <achiang> anyone know how to work around https://bugs.launchpad.net/charms/+source/mongodb/+bug/1312389
[21:39] <_mup_> Bug #1312389: Need to make trusty version of the charm available in the charm store <mongodb (Juju Charms Collection):New> <https://launchpad.net/bugs/1312389>
[21:46] <achiang> i suppose i should just type juju deploy cs:precise/mongodb instead
[21:55] <achiang> switching topics... i had a bug in another charm i wrote. i think i've fixed the bug... but how do i redeploy the charm to test it?
[21:59] <tvansteenburgh> achiang: juju destroy-service mycharm && juju deploy mycharm
[21:59] <achiang> tvansteenburgh: ok. that's kinda what i figured
[21:59] <achiang> thanks
[22:00] <tvansteenburgh> sure thing
[22:02] <jose> achiang: make sure to specify that it's a local charm and set the repository location
[22:02] <achiang> jose: no, that wasn't it. i had to issue juju resolved <unit> multiple times
[22:03] <jose> you mentioned redeployment of a charm you fixed, you need to specify those to deploy a local charm
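[Editor's note] Putting tvansteenburgh's and jose's advice together, a local-charm redeploy cycle might look like this. The repository path, charm name, series, and machine number are placeholders:

```shell
juju destroy-service mycharm
juju terminate-machine 2          # optional: free the now-empty machine too
juju deploy --repository=~/charms local:precise/mycharm
```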
[22:04] <achiang> jose: i was already deploying a local charm though
[22:04] <jose> ah, ok
[22:04] <achiang> juju destroy-service doesn't seem to have removed the machine from my pool
[22:04] <achiang> i guess i need to do that manually
[22:04] <jose> correct, it just destroys the service, but not the machine
[22:04] <jose> terminate-machine #
[22:05] <achiang> that seems silly
[22:05] <achiang> if i remove a service, juju should know what machine it's assigned to. if no other services are on that machine, it should remove the machine from the pool automatically
[22:05] <jose> what, in the manual provider?
[22:05] <tvansteenburgh> but you might want to reuse it for something else
[22:06] <tvansteenburgh> it's faster to deploy a service to an existing machine
[22:06] <achiang> tvansteenburgh: well... when i did the 2nd juju deploy mycharm, it created a new machine instead of reusing the existing machine
[22:07] <tvansteenburgh> yeah, that's the default
[22:07] <jose> you could've specified --to # and deployed it to an existing machine, though :)
[22:08] <achiang> of course the knobs are there -- i am simply relating my onramp experience with juju as a new user, who has heard many magical things about it, and discovering that it's not quite so magic
[22:09] <sarnold> achiang: I think the reasoning also includes "don't destroy the user's data"
[22:09] <achiang> sarnold: what are the semantics of 'destroy-service' then?
[22:10] <achiang> the verb 'destroy' would normally imply... destroy ;)
[22:10] <sarnold> achiang: as I understand it, tear down the service but preserve the machines/data in case you want to collect it before terminating instances or storage
[22:10] <achiang> well, we have 'destroy-service' and 'remove-machine'
[22:11] <achiang> my mistake
[22:11] <achiang> it is remove-service, not destroy-service
[22:12] <achiang> at least according to the docs
[22:12] <tvansteenburgh> the latter is an alias
[22:13] <tvansteenburgh> `juju help commands`
[22:13] <sarnold> I could see a 'destroy-' variant cleaning up after itself...
[22:14] <jose> I agree that destroy- sounds a little bit more... aggressive
[22:39] <jose> new charm on the revq